WorldWideScience

Sample records for group averaging techniques

  1. Spectral averaging techniques for Jacobi matrices

    CERN Document Server

    del Rio, Rafael; Schulz-Baldes, Hermann

    2008-01-01

Spectral averaging techniques for one-dimensional discrete Schrödinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner-type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.

  2. GROUP PROFILE Computer Technique

    Directory of Open Access Journals (Sweden)

    Andrey V. Sidorenkov

    2015-01-01

This article describes the structure, software, functional capabilities, and the scope and purposes of application of the Group Profile (GP) computer technique. The technique rests on a conceptual basis (the microgroup theory) and includes 16 new and modified questionnaires, together with a unique algorithm, tied to the questionnaires, for identifying informal groups. The GP yields a wide range of data about the group as a whole (47 indices), each informal group (43 indices), and each group member (16 indices). The GP technique can be used to study different types of groups: production (work groups, design teams, military units, etc.), academic (school classes, student groups), and sports.

  3. Spectral averaging techniques for Jacobi matrices with matrix entries

    CERN Document Server

    Sadel, Christian

    2009-01-01

A Jacobi matrix with matrix entries is a self-adjoint block tridiagonal matrix with invertible blocks on the off-diagonals. Averaging over boundary conditions leads to explicit formulas for the averaged spectral measure, which can potentially be useful for spectral analysis. Furthermore, another variant of spectral averaging over coupling constants for these operators is presented.

  4. Characteristics of phase-averaged equations for modulated wave groups

    NARCIS (Netherlands)

    Klopman, G.; Petit, H.A.H.; Battjes, J.A.

    2000-01-01

The project concerns the influence of long waves on coastal morphology. The combined motion of the long waves and short waves in the horizontal plane is modelled by phase-averaging over the short-wave motion and using intra-wave modelling for the long waves, see e.g. Roelvink (1993).

  5. Average Exceptional Lie Group Hierarchy and High Energy Physics

    CERN Document Server

    El Naschie, M S

    2008-01-01

Starting from an invariant total dimension for an exceptional Lie symmetry group hierarchy, we derive all the essential characteristics and coupling constants of the fundamental interactions of physics. It is shown in a most simplistic fashion that all physical fields are various transfinite scaling transformations and topological deformations of each other. An extended standard model, on the other hand, turns out to be a compact subgroup H of a version of the exceptional Lie group E7, namely E7(−5), with dim H = 69. Thus particle physics, electromagnetism as well as gravity and the bulk are all representable via modular spaces akin to the famous compactified version of F. Klein's modular curve.

  6. Renormalization group and scaling within the microcanonical fermionic average approach

    CERN Document Server

    Azcoiti, V; Di Carlo, G; Galante, A; Grillo, A F; Azcoiti, V; Laliena, V; Di Carlo, G; Galante, A; Grillo, A F

    1994-01-01

The MFA approach for simulations with dynamical fermions in lattice gauge theories allows, in principle, exploration of the parameter space of the theory (e.g. the $\beta$, m plane for the study of the chiral condensate in QED) without the need to compute the fermionic determinant at each point. We exploit this possibility for extracting both the renormalization group trajectories ("constant physics lines") and the scaling function, and we test it in the Schwinger model. We discuss the applicability of this method to realistic theories.

  7. Improved Lattice Renormalization Group Techniques

    CERN Document Server

    Petropoulos, Gregory; Hasenfratz, Anna; Schaich, David

    2013-01-01

    We compute the bare step-scaling function $s_b$ for SU(3) lattice gauge theory with $N_f = 12$ massless fundamental fermions, using the non-perturbative Wilson-flow-optimized Monte Carlo Renormalization Group two-lattice matching technique. We use a short Wilson flow to approach the renormalized trajectory before beginning RG blocking steps. By optimizing the length of the Wilson flow, we are able to determine an $s_b$ corresponding to a unique discrete $\\beta$ function, after a few blocking steps. We carry out this study using new ensembles of 12-flavor gauge configurations generated with exactly massless fermions, using volumes up to $32^4$. The results are consistent with the existence of an infrared fixed point (IRFP) for all investigated lattice volumes and number of blocking steps. We also compare different renormalization schemes, each of which indicates an IRFP at a slightly different value of the bare coupling, as expected for an IR-conformal theory.

  8. A Hybrid Islanding Detection Technique Using Average Rate of Voltage Change and Real Power Shift

    DEFF Research Database (Denmark)

    Mahat, Pukar; Chen, Zhe; Bak-Jensen, Birgitte

    2009-01-01

The mainly used islanding detection techniques may be classified as active and passive. Passive techniques do not perturb the system, but they have larger non-detection zones, whereas active techniques have smaller non-detection zones but perturb the system. In this paper, a new hybrid technique is proposed to solve this problem. An average rate of voltage change (passive technique) is used to initiate a real power shift (active technique), which changes the real power of distributed generation (DG) when the passive technique cannot clearly discriminate between islanding and non-islanding conditions.

  9. Lie Group Techniques for Neural Learning

    Science.gov (United States)

    2005-01-03

Lie group techniques for Neural Learning, Edinburgh, June 2004. Elena Celledoni, SINTEF Applied Mathematics, IMF-NTNU.

  10. A group's physical attractiveness is greater than the average attractiveness of its members : The group attractiveness effect

    NARCIS (Netherlands)

    van Osch, Y.M.J.; Blanken, Irene; Meijs, Maartje H. J.; van Wolferen, Job

    2015-01-01

We tested whether the perceived physical attractiveness of a group is greater than the average attractiveness of its members. In nine studies, we find evidence for the so-called group attractiveness effect (GA-effect), using female, male, and mixed-gender groups, indicating that group impressions of physical attractiveness are more positive than the average ratings of the group members.

  12. A group's physical attractiveness is greater than the average attractiveness of its members: the group attractiveness effect.

    Science.gov (United States)

    van Osch, Yvette; Blanken, Irene; Meijs, Maartje H J; van Wolferen, Job

    2015-04-01

    We tested whether the perceived physical attractiveness of a group is greater than the average attractiveness of its members. In nine studies, we find evidence for the so-called group attractiveness effect (GA-effect), using female, male, and mixed-gender groups, indicating that group impressions of physical attractiveness are more positive than the average ratings of the group members. A meta-analysis on 33 comparisons reveals that the effect is medium to large (Cohen's d = 0.60) and moderated by group size. We explored two explanations for the GA-effect: (a) selective attention to attractive group members, and (b) the Gestalt principle of similarity. The results of our studies are in favor of the selective attention account: People selectively attend to the most attractive members of a group and their attractiveness has a greater influence on the evaluation of the group. © 2015 by the Society for Personality and Social Psychology, Inc.
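
The selective-attention account described above can be illustrated with a toy calculation: if raters implicitly weight the most attractive members more heavily, the resulting group impression exceeds the plain average of member ratings. This is a hedged sketch of the mechanism only; the ratings and rank-based weights below are illustrative assumptions, not the study's data or model.

```python
import numpy as np

rng = np.random.default_rng(0)
members = rng.uniform(3.0, 8.0, size=5)          # attractiveness ratings of 5 members

plain_mean = float(members.mean())               # ordinary average of member ratings

# attention weights proportional to attractiveness rank
# (the most attractive member receives the largest weight)
order = np.argsort(members)                      # indices, ascending by rating
weights = np.empty_like(members)
weights[order] = np.arange(1, members.size + 1)  # ranks 1..5
weights /= weights.sum()                         # normalize to sum to 1

# attention-weighted impression of the whole group
group_impression = float(np.dot(weights, members))
```

Because the weights increase with attractiveness, the weighted impression is strictly above the unweighted mean whenever the ratings differ, mirroring the direction of the GA-effect.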

  13. Average OH density in alternating current dielectric barrier discharge by laser-induced fluorescence technique

    Science.gov (United States)

    Yang, Hongliang; Feng, Chunlei; Gao, Liang; Ding, Hongbin

    2015-10-01

The average OH density in an atmospheric He-H2O (0.4%) needle-plate dielectric barrier discharge (DBD) was measured by the asynchronous laser-induced fluorescence (LIF) technique, and the fluctuation of the OH radical density was measured simultaneously to verify that the average OH density can be obtained by the asynchronous LIF technique. The evolution of the average OH density in four different discharge patterns, namely negative barrier corona discharge, glow discharge, multi-glow discharge, and streamer discharge, was studied, and it was found that the average OH density increases observably from corona discharge to streamer discharge. The main mechanism of OH production in the four discharge patterns was analyzed. It was shown that the main mechanism of OH production in negative barrier corona discharge is direct electron-collision dissociation, whereas in the other three discharge patterns Penning ionization by metastable He is the main process.

  14. How many carboxyl groups does an average molecule of humic-like substances contain?

    Directory of Open Access Journals (Sweden)

    I. Salma

    2008-05-01

The carboxyl groups of atmospheric humic-like substances (HULIS) are of special interest because they influence solubility in water, affect the water activity and surface tension of droplets in the air, and allow the formation of chelates with biologically active elements. Experimentally determined abundances of the carboxyl group within HULIS by functional group analysis are consistent with our knowledge of the average molecular mass of HULIS if the number of dissociable carboxyl groups is assumed to be rather small. The best agreement between the average molecular mass derived from the existing abundance data and the average molecular mass published earlier occurs when approximately one dissociable carboxyl group is assumed. This implies that HULIS cannot be regarded as a polycarboxylic acid. The average molecular mass of HULIS derived from our electrochemical measurements, with the assumption of one dissociable carboxyl group per molecule, ranges from 248 to 305 Da. It was concluded that HULIS are a moderately strong/weak acid with a dissociation constant of about pK = 3.4, which fits well into the interval represented by fulvic and humic acids. The mean number of dissociable carboxyl groups in HULIS molecules was refined to be between 1.1 and 1.4.

  15. How many carboxyl groups does an average molecule of humic-like substances contain?

    Directory of Open Access Journals (Sweden)

    I. Salma

    2008-10-01

The carboxyl groups of atmospheric humic-like substances (HULIS) are of special interest because they influence solubility in water, affect the water activity and surface tension of droplets in the air, and allow the formation of chelates with biologically active elements. Experimentally determined abundances of the carboxyl group within HULIS by functional group analysis are consistent with our knowledge of the average molecular mass of HULIS if the number of dissociable carboxyl groups is assumed to be rather small. The best agreement between the average molecular mass derived from the existing abundance data and the average molecular mass published earlier occurs when approximately one dissociable carboxyl group is assumed. This implies that HULIS cannot be regarded as a polycarboxylic acid in diluted solutions. The average molecular mass of HULIS derived from our electrochemical measurements, with the assumption of one dissociable carboxyl group (or, equivalently, one dissociable sulphate ester) per molecule, ranges from 250 to 310 Da. It was concluded that HULIS are a moderately strong/weak acid with a dissociation constant of about pK = 3.4, which fits well into the interval represented by fulvic and humic acids. The mean number of dissociable hydrogens (i.e. of carboxyl groups and sulphate esters jointly) in HULIS molecules was refined to be between 1.1 and 1.4 in acidic solutions.

  16. Using Group Theory to Obtain Eigenvalues of Nonsymmetric Systems by Symmetry Averaging

    Directory of Open Access Journals (Sweden)

    Marion L. Ellzey

    2009-08-01

If the Hamiltonian in the time-independent Schrödinger equation, HΨ = EΨ, is invariant under a group of symmetry transformations, the theory of group representations can help obtain the eigenvalues and eigenvectors of H. A finite group that is not a symmetry group of H is nevertheless a symmetry group of an operator Hsym projected from H by the process of symmetry averaging. In this case H = Hsym + HR, where HR is the nonsymmetric remainder. Depending on the nature of the remainder, the solutions for the full operator may be obtained by perturbation theory. It is shown here that when H is represented as a matrix [H] over a basis symmetry-adapted to the group, the reduced matrix elements of [Hsym] are simple averages of certain elements of [H], providing a substantial enhancement in computational efficiency. A series of examples is given for the smallest molecular graphs. The first is a two-vertex graph corresponding to a heteronuclear diatomic molecule; the symmetrized component then corresponds to a homonuclear system. A three-vertex system is symmetry averaged in the first case to Cs and in the second case to the nonabelian C3v. These examples illustrate key aspects of the symmetry-averaging process.
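
The decomposition H = Hsym + HR can be sketched numerically: Hsym is the average of H over conjugation by all group elements, here the full permutation group S3 acting on a small 3x3 matrix. The Hamiltonian values below are an illustrative Hückel-type example, not taken from the paper.

```python
import numpy as np
from itertools import permutations

# An asymmetric 3x3 "Hamiltonian" over the vertices of a three-vertex graph
# (illustrative numbers; any real symmetric matrix works here).
H = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.5, 1.2],
              [0.0, 1.2, 1.0]])
n = H.shape[0]

# Symmetry averaging: Hsym = (1/|G|) * sum_g  P_g H P_g^T  over the group G = S3.
perms = list(permutations(range(n)))
Hsym = np.zeros_like(H)
for perm in perms:
    P = np.eye(n)[list(perm)]        # permutation matrix for this group element
    Hsym += P @ H @ P.T
Hsym /= len(perms)

HR = H - Hsym                        # nonsymmetric remainder
```

By construction Hsym is invariant under every group permutation (its diagonal entries are the average of the original diagonal, and its off-diagonal entries the average of the off-diagonals), so the group's representation theory applies to it exactly, with HR left for perturbation theory.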

  17. A novel car following model considering average speed of preceding vehicles group

    Science.gov (United States)

    Sun, Dihua; Kang, Yirong; Yang, Shuhong

    2015-10-01

In this paper, a new car-following model is presented by considering the effect of the average speed of the preceding vehicles group in a cyber-physical systems (CPS) environment. The effect of this new consideration on the stability of traffic flow is examined through linear stability analysis. A modified Korteweg-de Vries (mKdV) equation is derived via nonlinear analysis to describe the propagating behavior of the traffic density wave near the critical point. Good agreement between the simulation and the analytical results shows that the average speed of the preceding vehicles group leads to the stabilization of traffic systems, and thus can efficiently suppress the emergence of traffic jams.
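
The idea of coupling a vehicle's acceleration to the average speed of the group ahead can be sketched with an optimal-velocity-style update. The optimal velocity function and the coefficients below are common textbook choices, not the calibrated model from the paper.

```python
import numpy as np

def optimal_velocity(headway, vmax=30.0, hc=25.0):
    """Standard tanh-shaped optimal velocity function (illustrative parameters)."""
    return vmax * (np.tanh(headway - hc) + np.tanh(hc)) / (1.0 + np.tanh(hc))

def acceleration(v_i, headway_i, v_preceding_group, kappa=0.4, lam=0.2):
    """Relax toward the optimal velocity, plus a pull toward the average
    speed of the preceding vehicles group (the paper's extra term, sketched)."""
    v_avg = float(np.mean(v_preceding_group))    # average speed of the group ahead
    return kappa * (optimal_velocity(headway_i) - v_i) + lam * (v_avg - v_i)

# a vehicle slower than both its optimal velocity and the smooth group ahead speeds up
a = acceleration(v_i=10.0, headway_i=25.0, v_preceding_group=[20.0, 21.0, 19.0])
```

The extra relaxation toward the group-average speed is what damps speed differences along the platoon, which is the intuition behind the stabilization result in the abstract.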

  18. Effective Techniques for English Conversation Groups.

    Science.gov (United States)

    Dobson, Julia M.

    This book gathers ideas and practices in teaching English as a second language to serve as a reference for the leader of a conversation group. A variety of tested techniques is included for stimulating conversation among students with a basic command of English. The book begins with a discussion of what is involved in directed conversation…

  19. INTEGRAL AVERAGING TECHNIQUE FOR OSCILLATION OF ELLIPTIC EQUATIONS OF SECOND ORDER

    Institute of Scientific and Technical Information of China (English)

    徐志庭; 贾保国; 马东魁

    2003-01-01

The second-order elliptic differential equations

$$\sum_{i,j=1}^{n} D_i\big[A_{ij}(x,y)\,D_j y\big] + P(x,y) + Q(x,y,\nabla y) = e(x), \quad x \in \Omega,$$

will be considered in an exterior domain $\Omega \subset \mathbb{R}^n$, $n \ge 2$. Some oscillation criteria are given by the integral averaging technique.

  20. OSCILLATION RESULTS RELATED TO INTEGRAL AVERAGING TECHNIQUE FOR EVEN ORDER NEUTRAL DIFFERENTIAL EQUATION WITH DEVIATING ARGUMENTS

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

In this paper, we study an even-order neutral differential equation with deviating arguments and obtain new oscillation results without the assumptions that were required for related results given before. Our results extend and improve many known oscillation criteria based on the standard integral averaging technique.

  1. [Solid phase techniques in blood group serology].

    Science.gov (United States)

    Uthemann, H; Sturmfels, L; Lenhard, V

    1993-06-01

As alternatives to hemagglutination, solid-phase red blood cell adherence assays are of increasing importance. The adaptation of the new techniques to microplates offers several advantages over hemagglutination. Using microplates, the assays may be processed semiautomatically, and the results can be read spectrophotometrically and interpreted by a personal computer. In this paper, different red blood cell adherence assays for ABO grouping, Rh typing, Rh phenotyping, antibody screening and identification, as well as crossmatching, are described.

  2. The effect of sensor sheltering and averaging techniques on wind measurements at the Shuttle Landing Facility

    Science.gov (United States)

    Merceret, Francis J.

    1995-01-01

    This document presents results of a field study of the effect of sheltering of wind sensors by nearby foliage on the validity of wind measurements at the Space Shuttle Landing Facility (SLF). Standard measurements are made at one second intervals from 30-feet (9.1-m) towers located 500 feet (152 m) from the SLF centerline. The centerline winds are not exactly the same as those measured by the towers. A companion study, Merceret (1995), quantifies the differences as a function of statistics of the observed winds and distance between the measurements and points of interest. This work examines the effect of nearby foliage on the accuracy of the measurements made by any one sensor, and the effects of averaging on interpretation of the measurements. The field program used logarithmically spaced portable wind towers to measure wind speed and direction over a range of conditions as a function of distance from the obstructing foliage. Appropriate statistics were computed. The results suggest that accurate measurements require foliage be cut back to OFCM standards. Analysis of averaging techniques showed that there is no significant difference between vector and scalar averages. Longer averaging periods reduce measurement error but do not otherwise change the measurement in reasonably steady flow regimes. In rapidly changing conditions, shorter averaging periods may be required to capture trends.
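
The vector-versus-scalar averaging comparison in the study can be made concrete: the scalar average is the mean of the speeds, while the vector average is the magnitude of the mean wind vector. The sample values below are illustrative, not data from the SLF field program.

```python
import numpy as np

def scalar_average(speed):
    """Mean of the observed wind speeds, ignoring direction."""
    return float(np.mean(speed))

def vector_average(speed, direction_deg):
    """Magnitude of the mean wind vector (meteorological direction in degrees)."""
    theta = np.radians(direction_deg)
    u = speed * np.sin(theta)            # east-west component
    v = speed * np.cos(theta)            # north-south component
    return float(np.hypot(np.mean(u), np.mean(v)))

# nearly steady southerly flow, as in a "reasonably steady flow regime"
speed = np.array([5.0, 5.2, 4.8, 5.1])
direction = np.array([180.0, 182.0, 179.0, 181.0])

s_avg = scalar_average(speed)
v_avg = vector_average(speed, direction)
```

By the triangle inequality the vector average can never exceed the scalar average, and for steady winds like these the two are nearly identical, consistent with the study's finding of no significant difference between the two techniques.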

  3. Modeling and Control of a Photovoltaic Energy System Using the State-Space Averaging Technique

    Directory of Open Access Journals (Sweden)

    Mohd S. Jamri

    2010-01-01

Problem statement: This study presents the modeling and control of a stand-alone photovoltaic (PV) system using the state-space averaging technique. Approach: The PV module was modeled based on parameters obtained from a commercial PV data sheet, while the state-space method was used to model the power converter. A DC-DC boost converter was chosen to step up the input DC voltage of the PV module, while a DC-AC single-phase full-bridge square-wave inverter was chosen to convert the DC output of the boost converter into AC. The integrated state-space model was simulated under both constant and varying solar irradiance and temperature. In addition, a maximum power point tracking method was included in the model to ensure optimum use of the PV module. A circuit simulation was performed under similar test conditions in order to validate the state-space model. Results: The state-space averaged model yields performance similar to that of the circuit simulation in terms of the voltage, current, and power generated. Conclusion/Recommendations: The state-space averaging technique is simple to implement in the modeling and control of either simple or complex systems, and yields performance similar to that of the circuit-level method.
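
The state-space averaging step for the boost converter can be sketched as a two-state averaged model integrated to steady state. The component values and duty cycle below are illustrative assumptions, not the paper's design; the averaged equations are the standard ideal-boost-converter ones.

```python
# Averaged state-space model of an ideal DC-DC boost converter:
#   diL/dt = (Vin - (1 - d) * vC) / L
#   dvC/dt = ((1 - d) * iL - vC / R) / C
# where d is the duty cycle; in steady state Vout = Vin / (1 - d).
Vin, d = 12.0, 0.5             # input voltage [V], duty cycle (assumed values)
L, C, R = 1e-3, 1e-3, 10.0     # inductance [H], capacitance [F], load [ohm]

iL, vC = 0.0, Vin              # state: inductor current [A], capacitor voltage [V]
dt = 1e-6                      # forward-Euler step [s]
for _ in range(200_000):       # integrate 0.2 s, well past the transient
    diL = (Vin - (1.0 - d) * vC) / L
    dvC = ((1.0 - d) * iL - vC / R) / C
    iL += diL * dt
    vC += dvC * dt

# vC should settle near the ideal boost ratio Vin / (1 - d) = 24 V,
# with iL near vC / (R * (1 - d)) = 4.8 A
```

This is the essence of the technique: the switched circuit is replaced by one smooth averaged model parameterized by the duty cycle, which is then easy to simulate or linearize for control design.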

  4. Bisulfite-based epityping on pooled genomic DNA provides an accurate estimate of average group DNA methylation

    Directory of Open Access Journals (Sweden)

    Docherty Sophia J

    2009-03-01

Background: DNA methylation plays a vital role in normal cellular function, with aberrant methylation signatures being implicated in a growing number of human pathologies and complex human traits. Methods based on the modification of genomic DNA with sodium bisulfite are considered the 'gold standard' for DNA methylation profiling of genomic DNA; however, they require relatively large amounts of DNA and may be prohibitively expensive when used on the large sample sizes necessary to detect small effects. We propose that a high-throughput DNA pooling approach will facilitate the use of emerging methylomic profiling techniques in large samples. Results: Compared with data generated from 89 individual samples, our analysis of 205 CpG sites spanning nine independent regions of the genome demonstrates that DNA pools can be used to provide an accurate and reliable quantitative estimate of average group DNA methylation. Comparison of data generated from the pooled DNA samples with results averaged across the individual samples comprising each pool revealed highly significant correlations for individual CpG sites across all nine regions, with an average overall correlation across all regions and pools of 0.95 (95% bootstrapped confidence interval: 0.94 to 0.96). Conclusion: In this study we demonstrate the validity of using pooled DNA samples to accurately assess group DNA methylation averages. Such an approach can be readily applied to the assessment of disease phenotypes, reducing the time, cost and amount of DNA starting material required for large-scale epigenetic analyses.
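
The core validation in the study is a correlation between pool-derived estimates and the mean of individually measured samples. A toy simulation makes the logic visible; the numbers below are simulated under stated assumptions (uniform per-individual methylation, small pooled-assay noise), not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_individuals, n_cpg = 89, 205              # sizes mirroring the study design

# simulated per-individual methylation level at each CpG site (assumption)
methylation = rng.uniform(0.0, 1.0, size=(n_individuals, n_cpg))

# "ground truth": average across the individually assayed samples
individual_mean = methylation.mean(axis=0)

# an equimolar pool measures the group average, plus assay noise (assumed sd 0.01)
pool_measurement = individual_mean + rng.normal(0.0, 0.01, size=n_cpg)

# correlation across CpG sites between pooled and averaged-individual estimates
r = float(np.corrcoef(individual_mean, pool_measurement)[0, 1])
```

With modest assay noise the correlation lands near the ~0.95 the abstract reports, which is why pooling can substitute for individual profiling when only group averages are needed.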

  5. Interval Generalized Ordered Weighted Utility Multiple Averaging Operators and Their Applications to Group Decision-Making

    Directory of Open Access Journals (Sweden)

    Yunna Wu

    2017-07-01

We propose a new class of aggregation operators based on utility functions and apply them to a group decision-making problem. First, based on an optimal deviation model, a new operator called the interval generalized ordered weighted utility multiple averaging (IGOWUMA) operator is proposed; it incorporates the risk attitude of decision-makers (DMs) in the aggregation process. Some desirable properties of the IGOWUMA operator are then studied. Subsequently, under the hyperbolic absolute risk aversion (HARA) utility function, another new operator, the interval generalized ordered weighted hyperbolic absolute risk aversion utility multiple averaging (IGOWUMA-HARA) operator, is defined. We discuss its families and find that it includes a wide range of aggregation operators. To determine the weights of the IGOWUMA-HARA operator, a preemptive nonlinear objective programming model is constructed, which determines a uniform weighting vector that guarantees a uniform standard of comparison between the alternatives and measures their fair competition under the condition of valid comparison between the various alternatives. Moreover, a new approach for group decision-making is developed based on the IGOWUMA-HARA operator. Finally, a comparative analysis is carried out to illustrate the superiority of the proposed method; the result implies that our operator is superior to existing operators.
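
The operators in this paper build on the generalized ordered weighted averaging (GOWA) family. As a reference point, here is the basic real-valued GOWA operator, which sorts the arguments descending before applying the ordered weights; this is the textbook operator, not the paper's interval-valued IGOWUMA extension.

```python
import numpy as np

def gowa(values, weights, p=1.0):
    """Generalized OWA: (sum_j w_j * b_j^p)^(1/p), with b the values
    reordered in descending order. p = 1 recovers the ordinary OWA."""
    b = np.sort(np.asarray(values, dtype=float))[::-1]   # descending reorder
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w, b ** p) ** (1.0 / p))

# ordered weights (sum to 1) expressing the aggregation attitude
w = np.array([0.4, 0.3, 0.2, 0.1])
scores = np.array([0.6, 0.9, 0.3, 0.7])   # one alternative's ratings from 4 DMs

owa_score = gowa(scores, w, p=1.0)        # ordinary OWA aggregation
```

Larger p values emphasize the larger arguments (a more "optimistic" aggregation), which is the degree of freedom the paper's utility-based operators generalize with risk attitudes.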

  6. Investigations of Tracking Phenomena in Silicone Rubber Using Moving Average Current Technique

    Institute of Scientific and Technical Information of China (English)

    R. Sarathi; S. Chandrasekar

    2004-01-01

In the present work, the tracking phenomenon in silicone rubber has been studied under AC and DC voltages, with ammonium chloride as a contaminant. It is observed that tracking is more severe under DC voltages. The tracking time is shorter under negative DC than under positive DC voltage. The tracking mechanism is explained in detail. The leakage current during the tracking studies was measured, and the moving average technique was adopted to understand the trend in current flow. The leakage current magnitude is higher for thermally aged specimens than for the virgin specimen, irrespective of the type of applied voltage. It is observed that the tracking time and the leakage current magnitude show an inverse relationship.
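
The moving-average step used to expose the trend in the leakage current record is a simple windowed convolution. The signal below is synthetic (a linear trend plus noise), standing in for a measured leakage current trace.

```python
import numpy as np

def moving_average(x, window):
    """Simple moving average: convolve with a flat window of the given length."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

# synthetic leakage current: slowly rising trend buried in measurement noise
t = np.linspace(0.0, 10.0, 1000)
leakage = 0.5 * t + np.random.default_rng(1).normal(0.0, 2.0, t.size)

smoothed = moving_average(leakage, window=50)
```

The smoothed record varies far less from sample to sample than the raw one, which is what lets the underlying trend in current flow be read off during tracking tests.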

  7. Homogenization via formal multiscale asymptotics and volume averaging: How do the two techniques compare?

    KAUST Repository

    Davit, Yohan

    2013-12-01

    A wide variety of techniques have been developed to homogenize transport equations in multiscale and multiphase systems. This has yielded a rich and diverse field, but has also resulted in the emergence of isolated scientific communities and disconnected bodies of literature. Here, our goal is to bridge the gap between formal multiscale asymptotics and the volume averaging theory. We illustrate the methodologies via a simple example application describing a parabolic transport problem and, in so doing, compare their respective advantages/disadvantages from a practical point of view. This paper is also intended as a pedagogical guide and may be viewed as a tutorial for graduate students as we provide historical context, detail subtle points with great care, and reference many fundamental works. © 2013 Elsevier Ltd.

  8. Gearbox fault diagnosis based on time-frequency domain synchronous averaging and feature extraction technique

    Science.gov (United States)

    Zhang, Shengli; Tang, Jiong

    2016-04-01

The gearbox is one of the most vulnerable subsystems in a wind turbine, and its health status significantly affects the efficiency and function of the entire system. Vibration-based fault diagnosis methods are prevalent nowadays. However, vibration signals are always contaminated by noise arising from data acquisition errors, structural geometric errors, operational errors, etc. As a result, it is difficult to identify potential gear failures directly from vibration signals, especially for early-stage faults. This paper utilizes a synchronous averaging technique in the time-frequency domain to remove non-synchronous noise and enhance the fault-related time-frequency features. The enhanced time-frequency information is further employed in gear fault classification and identification through feature extraction algorithms including Kernel Principal Component Analysis (KPCA), Multilinear Principal Component Analysis (MPCA), and Locally Linear Embedding (LLE). Results show that the LLE approach is the most effective in classifying and identifying different gear faults.
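
The synchronous-averaging idea can be sketched in its basic time-domain form: slice the vibration record into one block per shaft revolution and average the blocks, so components locked to the shaft survive while non-synchronous noise averages toward zero. (The paper applies the same principle in the time-frequency domain; the signal below is synthetic.)

```python
import numpy as np

rng = np.random.default_rng(7)
samples_per_rev, n_revs = 256, 100
phase = 2.0 * np.pi * np.arange(samples_per_rev) / samples_per_rev

# synchronous component: an 8th-order gear-mesh tone locked to the shaft
gear_mesh = np.sin(8 * phase)

# measured record: the tone repeated each revolution, buried in unit-variance noise
signal = np.tile(gear_mesh, n_revs) + rng.normal(0.0, 1.0, samples_per_rev * n_revs)

# time-synchronous average: one block per revolution, averaged sample-by-sample
tsa = signal.reshape(n_revs, samples_per_rev).mean(axis=0)
```

Averaging over N revolutions reduces the non-synchronous noise power by roughly a factor of N, so the gear-mesh waveform re-emerges cleanly from a record where it was invisible sample-by-sample.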

  9. Biosphere Dose Conversion Factors for Reasonably Maximally Exposed Individual and Average Member of Critical Group

    Energy Technology Data Exchange (ETDEWEB)

    K. Montague

    2000-02-23

The purpose of this calculation is to develop additional Biosphere Dose Conversion Factors (BDCFs) for a reasonably maximally exposed individual (RMEI) for the periods 10,000 years and 1,000,000 years after repository closure. In addition, BDCFs for the average member of a critical group are calculated for those additional radionuclides postulated to reach the environment during the period after 10,000 years and up to 1,000,000 years. After the permanent closure of the repository, the engineered systems within the repository will eventually lose their ability to contain the radionuclide inventory, and the radionuclides will migrate through the geosphere and eventually enter the local water table, moving toward inhabited areas. The primary release scenario is a groundwater well used for drinking water supply and irrigation; this calculation takes these postulated releases and follows them through various pathways until they result in a dose to either a member of the critical group or a reasonably maximally exposed individual. The pathways considered in this calculation include inhalation, ingestion, and direct exposure.

  10. Retention of features on a mapped Drosophila brain surface using a Bézier-tube-based surface model averaging technique.

    Science.gov (United States)

    Chen, Guan-Yu; Wu, Cheng-Chi; Shao, Hao-Chiang; Chang, Hsiu-Ming; Chiang, Ann-Shyn; Chen, Yung-Chang

    2012-12-01

Model averaging is a widely used technique in biomedical applications. Two established model averaging methods, the iterative shape averaging (ISA) method and the virtual insect brain (VIB) method, have been applied to several organisms to generate average representations of their brain surfaces. However, without sufficient samples, some features of the average Drosophila brain surface obtained using the above methods may disappear or become distorted. To overcome this problem, we propose a Bézier-tube-based surface model averaging strategy. The proposed method first compensates for disparities in position, orientation, and dimension of the input surfaces, and then evaluates the average surface by performing shape-based interpolation. Structural features with larger individual disparities are simplified with half-ellipse-shaped Bézier tubes, and are unified according to these tubes to avoid distortion during the averaging process. Experimental results show that the average model yielded by our method can preserve fine features and avoid structural distortions even if only a limited number of input samples is used. Finally, we qualitatively compare our results with those obtained by the ISA and VIB methods by measuring the surface-to-surface distances between the input surfaces and the averaged ones. The comparisons show that the proposed method generates a more representative average surface than both the ISA and VIB methods.

  11. The development of early numeracy skills in kindergarten in low-, average- and high-performance groups

    NARCIS (Netherlands)

    Aunio, P.; Heiskari, P.; van Luit, J.E.H.; Vuorio, J.-M.

    2015-01-01

In this study, we investigated how early numeracy skills develop in kindergarten-age children. The participants were 235 Finnish children (111 girls and 124 boys). At the time of the first measurement, the average age of the children was 6 years. The measurements were conducted three times.

  12. Carrier Noise Reduction in Speckle Correlation Interferometry by a Unique Averaging Technique

    Energy Technology Data Exchange (ETDEWEB)

    Pechersky, M.J.

    1999-01-20

We present experimental results of carrier speckle-noise averaging by a novel approach that generates numerous identical correlation fringes with randomly different speckles. The surface under study is sprayed with a fresh paint layer before each repetition of the experiment to generate randomly different carrier speckle patterns.

  13. Capillary-wave models and the effective-average-action scheme of functional renormalization group.

    Science.gov (United States)

    Jakubczyk, P

    2011-08-01

    We reexamine the functional renormalization-group theory of wetting transitions. As a starting point of the analysis we apply an exact equation describing renormalization group flow of the generating functional for irreducible vertex functions. We show how the standard nonlinear renormalization group theory of wetting transitions can be recovered by a very simple truncation of the exact flow equation. The derivation makes all the involved approximations transparent and demonstrates the applicability of the approach in any spatial dimension d≥2. Exploiting the nonuniqueness of the renormalization-group cutoff scheme, we find, however, that the capillary parameter ω is a scheme-dependent quantity below d=3. For d=3 the parameter ω is perfectly robust against scheme variation.

  14. Investigation of the Potts model of a diluted magnet by local field averaging technique

    Science.gov (United States)

    Semkin, S. V.; Smagin, V. P.

    2016-08-01

    Averaging of the local interatomic interaction fields has been applied to the Potts model of a diluted magnet. A self-consistent equation for the magnetization and an equation for the phase transition temperature have been derived. The temperature and magnetic atom density dependences of the spontaneous magnetization have been found for the lattices with the coordination numbers 3 and 4 and various numbers of spin states.

  15. New Tone Reservation Technique for Peak to Average Power Ratio Reduction

    Science.gov (United States)

    Wilharm, Joachim; Rohling, Hermann

    2014-09-01

In Orthogonal Frequency Division Multiplexing (OFDM) the transmit signals have a highly fluctuating, non-constant envelope, which is a technical challenge for the High Power Amplifier (HPA). Without any signal processing procedures, the amplitude peaks of the transmit signal will be clipped by the HPA, resulting in out-of-band radiation and in bit error rate (BER) performance degradation. The classical Tone Reservation (TR) technique calculates a correction signal in an iterative way to reduce the amplitude peaks. However, this iteration leads to high computational complexity. Therefore, in this paper an alternative TR technique is proposed. In this case a predefined signal pattern is shifted to each peak position inside the transmit signal, thereby reducing the amplitude peaks. This new procedure is able to outperform the classical TR technique and has a much lower computational complexity.
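The peak-cancellation idea described in the abstract above can be sketched numerically: build a cancellation pattern once from a set of reserved tones, shift it onto the largest peak of the OFDM symbol, and subtract it, leaving the data-carrying subcarriers untouched. The subcarrier count, reserved-tone set, target amplitude and iteration count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                          # subcarriers (illustrative)
R = np.sort(rng.choice(N, 8, replace=False))    # hypothetical reserved-tone set
data = np.setdiff1d(np.arange(N), R)            # data-carrying subcarriers

# QPSK data on the non-reserved tones only
X = np.zeros(N, complex)
X[data] = (rng.choice([-1, 1], data.size)
           + 1j * rng.choice([-1, 1], data.size)) / np.sqrt(2)
x = np.fft.ifft(X)                              # time-domain OFDM symbol

# Predefined cancellation pattern: built from the reserved tones,
# normalized to a unit peak at sample 0, then shifted onto each peak.
P = np.zeros(N)
P[R] = 1.0
p = np.fft.ifft(P)
p = p / p[0]

def papr_db(s):
    return 10 * np.log10(np.max(np.abs(s) ** 2) / np.mean(np.abs(s) ** 2))

A = 1.6 * np.sqrt(np.mean(np.abs(x) ** 2))      # illustrative target amplitude
y = x.copy()
for _ in range(20):                             # shift the pattern onto the largest peak
    k = int(np.argmax(np.abs(y)))
    if np.abs(y[k]) <= A:
        break
    excess = y[k] - A * y[k] / np.abs(y[k])     # complex part of the peak above target
    y = y - excess * np.roll(p, k)              # shift in time, subtract from the signal

print(f"PAPR before: {papr_db(x):.2f} dB, after: {papr_db(y):.2f} dB")
```

Because the pattern's spectrum is supported only on the reserved tones, shifting it in time (a pure phase ramp in frequency) never disturbs the data subcarriers.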

  16. Loop expansion of the average effective action in the functional renormalization group approach

    Science.gov (United States)

    Lavrov, Peter M.; Merzlikin, Boris S.

    2015-10-01

We formulate a perturbation expansion for the effective action in a new approach to the functional renormalization group method based on the concept of composite fields for regulator functions, which are its most essential ingredients. We demonstrate explicitly the principal difference between the properties of the effective actions in these two approaches, which exists already at the one-loop level in a simple gauge model.

  17. Loop expansion of average effective action in functional renormalization group approach

    CERN Document Server

    Lavrov, Peter M

    2015-01-01

We formulate a perturbation expansion for the effective action in a new approach to the functional renormalization group (FRG) method, based on the concept of composite fields for regulator functions, which are its most essential ingredients. We demonstrate explicitly the principal difference between the properties of the effective actions in these two approaches, which exists already at the one-loop level in a simple gauge model.

  18. Highly efficient sparse-matrix inversion techniques and average procedures applied to collisional-radiative codes

    CERN Document Server

    Poirier, M

    2009-01-01

The behavior of non-local thermal-equilibrium (NLTE) plasmas plays a central role in many fields of modern-day physics, such as laser-produced plasmas, astrophysics, inertial- or magnetic-confinement fusion devices, and X-ray sources. The proper description of these media in stationary cases requires solving linear systems of thousands or more rate equations. A possible simplification for this arduous numerical task may lie in some type of statistical average, such as a configuration or superconfiguration average. However, to assess the validity of this procedure and to handle cases where isolated lines play an important role, it may be important to deal with detailed level systems. This involves matrices with sometimes billions of elements, which are rather sparse but still involve thousands of diagonals. We propose here a numerical algorithm based on the LU decomposition for such linear systems. This method turns out to be orders of magnitude faster than traditional Gauss elimination. And at variance with ...
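The kind of sparse LU solve the abstract advocates is readily illustrated with SciPy: a banded matrix with a few widely separated diagonals stands in for a (much larger) detailed-level rate matrix. The matrix size, band offsets and diagonal dominance are illustrative assumptions, not the paper's actual collisional-radiative system.

```python
import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import splu

# Toy stand-in for a detailed-level rate matrix: sparse, with a few widely
# separated diagonals (size and offsets are illustrative assumptions).
n = 2000
rng = np.random.default_rng(0)
offsets = [0, -1, 1, -50, 50]
bands = [rng.random(n - abs(o)) + (n if o == 0 else 0) for o in offsets]
A = csc_matrix(diags(bands, offsets, shape=(n, n)))   # strongly diagonally dominant
b = rng.random(n)

# Sparse LU factorization followed by two triangular solves; for matrices of
# this structure it is far cheaper than dense Gauss elimination.
x = splu(A).solve(b)
print("residual norm:", np.linalg.norm(A @ x - b))
```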

  19. A Novel Averaging Technique for Discrete Entropy-Stable Dissipation Operators for Ideal MHD

    CERN Document Server

    Derigs, Dominik; Gassner, Gregor J; Walch, Stefanie

    2016-01-01

    Entropy stable schemes can be constructed with a specific choice of the numerical flux function. First, an entropy conserving flux is constructed. Secondly, an entropy stable dissipation term is added to this flux to guarantee dissipation of the discrete entropy. Present works in the field of entropy stable numerical schemes are concerned with thorough derivations of entropy conservative fluxes for ideal MHD. However, as we show in this work, if the dissipation operator is not constructed in a very specific way, it cannot lead to a generally stable numerical scheme. The two main findings presented in this paper are that the entropy conserving flux of Ismail & Roe can easily break down for certain initial conditions commonly found in astrophysical simulations, and that special care must be taken in the derivation of a discrete dissipation matrix for an entropy stable numerical scheme to be robust. We present a convenient novel averaging procedure to evaluate the entropy Jacobians of the ideal MHD and the c...

  20. A novel averaging technique for discrete entropy-stable dissipation operators for ideal MHD

    Science.gov (United States)

    Derigs, Dominik; Winters, Andrew R.; Gassner, Gregor J.; Walch, Stefanie

    2017-02-01

    Entropy stable schemes can be constructed with a specific choice of the numerical flux function. First, an entropy conserving flux is constructed. Secondly, an entropy stable dissipation term is added to this flux to guarantee dissipation of the discrete entropy. Present works in the field of entropy stable numerical schemes are concerned with thorough derivations of entropy conservative fluxes for ideal MHD. However, as we show in this work, if the dissipation operator is not constructed in a very specific way, it cannot lead to a generally stable numerical scheme. The two main findings presented in this paper are that the entropy conserving flux of Ismail & Roe can easily break down for certain initial conditions commonly found in astrophysical simulations, and that special care must be taken in the derivation of a discrete dissipation matrix for an entropy stable numerical scheme to be robust. We present a convenient novel averaging procedure to evaluate the entropy Jacobians of the ideal MHD and the compressible Euler equations that yields a discretization with favorable robustness properties.

  1. ASSESSMENT OF DYNAMIC PRA TECHNIQUES WITH INDUSTRY AVERAGE COMPONENT PERFORMANCE DATA

    Energy Technology Data Exchange (ETDEWEB)

    Yadav, Vaibhav; Agarwal, Vivek; Gribok, Andrei V.; Smith, Curtis L.

    2017-06-01

In the nuclear industry, risk monitors are intended to provide a point-in-time estimate of the system risk given the current plant configuration. Current risk monitors are limited in that they do not properly take into account the deteriorating states of plant equipment, which are unit-specific. Current approaches to computing risk monitors use probabilistic risk assessment (PRA) techniques, but the assessment is typically a snapshot in time. Living PRA models attempt to address the limitations of traditional PRA models in a limited sense by including temporary changes in plant and system configurations. However, information on plant component health is not considered. This often leaves risk monitors using living PRA models incapable of conducting evaluations with dynamic degradation scenarios evolving over time. There is a need to develop enabling approaches that solidify risk monitors to provide time- and condition-dependent risk by integrating traditional PRA models with condition monitoring and prognostic techniques. This paper presents the estimation of system risk evolution over time by integrating plant risk monitoring data with dynamic PRA methods incorporating aging and degradation. Several online, non-destructive approaches have been developed for diagnosing plant component conditions in the nuclear industry, e.g., a condition indication index using vibration analysis, current signatures, and operational history [1]. In this work the component performance measures at U.S. commercial nuclear power plants (NPPs) [2] are incorporated within various dynamic PRA methodologies [3] to provide better estimates of the probability of failure. Aging and degradation are modeled within the Level-1 PRA framework and applied to several failure modes of pumps; the approach can be extended to a range of components, viz. valves, generators, batteries, and pipes.

  2. Average reservoir pressure determination for homogeneous and naturally fractured formations from multi-rate testing with the TDS technique

    Energy Technology Data Exchange (ETDEWEB)

    Escobar, Freddy Humberto; Ibagon, Oscar Eduardo; Montealegre-M, Matilde [Universidad Surcolombiana, Av. Pastrana-Cra. 1, Neiva, Huila (Colombia)

    2007-11-15

Average reservoir pressure is an important parameter which is utilized in almost all reservoir and production engineering studies. It also plays a relevant role in the majority of well intervention jobs, field appraisal, well sizing, and equipment and surface facilities design. The estimation of the average reservoir pressure is normally obtained from buildup tests. However, these have a tremendous economic impact because the well must be shut in during the entire test. Since buildup tests are the most particular case of multi-rate tests, the latter are also used for estimation of the average reservoir pressure. Among them, two-rate tests present drawbacks because it is operationally difficult to keep the flow rates constant. Conventional methods for determination of the average reservoir pressure can be readily extended to multi-rate tests once the rigorous time is converted to equivalent time by time superposition. In this article a new, easy and practical methodology is presented for the determination of the average pressure in both homogeneous and naturally fractured reservoirs from multi-rate tests conducted in vertical oil wells located inside a closed drainage region. The methodology, which follows the philosophy of the TDS technique, uses a normalized pressure and pressure derivative point found at any arbitrary point during the pseudosteady-state flow regime to readily provide the average reservoir pressure value. For verification of the effectiveness of the proposed solution, several field and simulated examples were worked out. We found that the average reservoir pressure results obtained from the proposed methodology match very well with those estimated from either conventional techniques or simulations. (author)

  3. Average bond energies between boron and elements of the fourth, fifth, sixth, and seventh groups of the periodic table

    Science.gov (United States)

    Altshuller, Aubrey P

    1955-01-01

    The average bond energies D(gm)(B-Z) for boron-containing molecules have been calculated by the Pauling geometric-mean equation. These calculated bond energies are compared with the average bond energies D(exp)(B-Z) obtained from experimental data. The higher values of D(exp)(B-Z) in comparison with D(gm)(B-Z) when Z is an element in the fifth, sixth, or seventh periodic group may be attributed to resonance stabilization or double-bond character.
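The Pauling geometric-mean rule used above estimates a heteronuclear bond energy from the two homonuclear ones, D_gm(A-B) = sqrt(D(A-A) * D(B-B)). A minimal sketch, with purely illustrative (not tabulated) input values:

```python
import math

def geometric_mean_bond_energy(d_aa, d_bb):
    """Pauling geometric-mean estimate: D_gm(A-B) = sqrt(D(A-A) * D(B-B))."""
    return math.sqrt(d_aa * d_bb)

# Purely illustrative homonuclear bond energies in kcal/mol (not tabulated data)
D_BB, D_ZZ = 79.0, 58.0
print(f"D_gm(B-Z) = {geometric_mean_bond_energy(D_BB, D_ZZ):.1f} kcal/mol")
```

A positive excess of the experimental D(B-Z) over this geometric-mean estimate is what the abstract attributes to resonance stabilization or double-bond character.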

  4. Ultra low voltage and low power Static Random Access Memory design using average 6.5T technique

    Directory of Open Access Journals (Sweden)

    Nagalingam RAJESWARAN

    2015-12-01

Full Text Available Power-stringent Static Random Access Memory (SRAM) design is essential in embedded systems such as biomedical implants, automotive electronics and energy-harvesting devices, in which battery life, input power and execution delay are the main concerns. With reduced supply voltage, SRAM cell designs suffer severe stability issues. In this paper, we present a highly stable average-nT SRAM cell for ultra-low power in 125 nm technology. The distinct difference between the proposed technique and other conventional methods is the data-independent leakage in the read bit line, which is achieved by newly introduced block mask transistors. An average 6.5T SRAM and an average 8T SRAM are designed and compared with 6T, 8T, 9T, 10T and 14T SRAM cells. The results indicate an appreciable decrease in power consumption and delay.

  5. Group Investigation Teaching Technique in Turkish Primary Science Courses

    Science.gov (United States)

    Aksoy, Gokhan; Gurbuz, Fatih

    2013-01-01

This study examined the effectiveness of the group investigation teaching technique in teaching the "Light" unit at the 7th-grade primary science education level. The study was carried out in two different classes of a primary school during the 2011-2012 academic year in Erzurum, Turkey. One of the classes was the Experimental Group (group…

  6. Effective group training techniques in job-search training.

    Science.gov (United States)

    Vuori, Jukka; Price, Richard H; Mutanen, Pertti; Malmberg-Heimonen, Ira

    2005-07-01

    The aim was to examine the effects of group training techniques in job-search training on later reemployment and mental health. The participants were 278 unemployed workers in Finland in 71 job-search training groups. Five group-level dimensions of training were identified. The results of hierarchical linear modeling demonstrated that preparation for setbacks at the group level significantly predicted decreased psychological distress and decreased symptoms of depression at the half-year follow-up. Trainer skills at the group level significantly predicted decreased symptoms of depression and reemployment to stable jobs. Interaction analyses showed that preparation for setbacks at the group level predicted fewer symptoms of psychological distress and depression, and shared perceptions of skilled trainers at the group level predicted fewer symptoms of depression among those who had been at risk for depression. Copyright (c) 2005 APA, all rights reserved.

  7. Comparison of cognition abilities between groups of children with specific learning disability having average, bright normal and superior nonverbal intelligence

    Directory of Open Access Journals (Sweden)

    Karande Sunil

    2005-03-01

Full Text Available BACKGROUND: Specific learning disabilities (SpLD), viz. dyslexia, dysgraphia and dyscalculia, are an important cause of academic underachievement. AIMS: To assess whether cognition abilities vary in children with SpLD having different grades of nonverbal intelligence. SETTING: Government-recognized clinic in a medical college. DESIGN: Cross-sectional study. SUBJECTS AND METHODS: Ninety-five children with SpLD (aged 9-14 years) were assessed. An academic achievement of two years below the actual grade placement on educational assessment with a curriculum-based test was considered diagnostic of SpLD. On the basis of their nonverbal Intelligence Quotient (IQ) scores obtained on the Wechsler Intelligence Scale for Children test, the study children were divided into three groups: (i) average nonverbal intelligence group (IQ 90-109), (ii) bright normal nonverbal intelligence group (IQ 110-119), and (iii) superior nonverbal intelligence group (IQ 120-129). A battery of 13 Cognition Function Tests (CFTs) devised by Jnana Prabodhini's Institute of Psychology, Pune, based on Guilford's Structure of Intellect model, was administered individually to each child in the four areas of information, viz. figural, symbolic, semantic and behavioral. STATISTICAL ANALYSIS USED: The mean CFT scores obtained in the four areas of information were calculated for each of the three groups and compared using the one-way analysis of variance test. A P value < 0.05 was considered statistically significant. RESULTS: There were no statistically significant differences between the mean CFT scores in any of the four areas of information. CONCLUSIONS: Cognition abilities are similar in children with SpLD having average, bright-normal and superior nonverbal intelligence.
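The group comparison described above (mean scores across three independent groups) is a standard one-way ANOVA; a minimal sketch with synthetic, hypothetical scores, not the study's data:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Hypothetical CFT scores for the three nonverbal-IQ groups (synthetic data,
# drawn here from the same distribution, i.e. equal true means)
average_iq = rng.normal(50.0, 10.0, 40)
bright_normal_iq = rng.normal(50.0, 10.0, 30)
superior_iq = rng.normal(50.0, 10.0, 25)

stat, p = f_oneway(average_iq, bright_normal_iq, superior_iq)
print(f"F = {stat:.2f}, p = {p:.3f}")   # p >= 0.05: no significant group difference
```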

  8. Average exceptional Lie and Coxeter group hierarchies with special reference to the standard model of high energy particle physics

    Energy Technology Data Exchange (ETDEWEB)

    El Naschie, M.S. [King Abdullah Al Saud Institute of Nano and Advanced Technologies, Riyadh (Saudi Arabia)], E-mail: Chaossf@aol.com

    2008-08-15

The notion of the order of a symmetry group may be extended to that of an average, non-integer order. Building on this extension, it can be shown that the five classical exceptional Lie symmetry groups can be extended to a hierarchy, the total sum of which is four times ᾱ₀ = 137 + k₀ of the electromagnetic field. Subsequently it can be shown that all known and conjectured physical fields may be derived by E-infinity transfinite scaling transformations. Consequently the E₈E₈ exceptional Lie symmetry group manifold, the SL(2,7)_c holographic modular curve boundary Γ(7), Einstein-Kaluza gravity R^(n=4) and R^(n=5), as well as the electromagnetic field are all topological transformations of each other. It is largely a matter of mathematical taste to choose E₈ or the electromagnetic field associated with ᾱ₀ as derived or as fundamental. However, since E₈ has been extensively studied by the founding fathers of group theory and has recently been mapped completely, it seems beneficial to discuss at least high-energy physics starting from the largest of the exceptional groups.

  9. Expressed satisfaction with the nominal group technique among change agents

    NARCIS (Netherlands)

    Gresham, J.N.

    2006-01-01

    Expressed Satisfaction with the Nominal Group Technique Among Change Agents. Jon Neal Gresham The purpose of this study was to determine whether or not policymakers and change agents with differing professional backgrounds and responsibilities, who participated in the structured process of a nomi

  10. Expressed satisfaction with the nominal group technique among change agents

    NARCIS (Netherlands)

    Gresham, J.N.

    1986-01-01

    Expressed Satisfaction with the Nominal Group Technique Among Change Agents. Jon Neal Gresham The purpose of this study was to determine whether or not policymakers and change agents with differing professional backgrounds and responsibilities, who participated in the structured process of a

  11. How to use the nominal group and Delphi techniques.

    Science.gov (United States)

    McMillan, Sara S; King, Michelle; Tully, Mary P

    2016-06-01

Introduction: The Nominal Group Technique (NGT) and the Delphi technique are consensus methods used in research that is directed at problem-solving, idea-generation, or determining priorities. While consensus methods are commonly used in the health services literature, few studies in pharmacy practice use these methods. This paper provides an overview of the NGT and the Delphi technique, including the steps involved and the types of research questions best suited to each method, with examples from the pharmacy literature. Methodology: The NGT entails face-to-face discussion in small groups, and provides a prompt result for researchers. The classic NGT involves four key stages: silent generation, round robin, clarification and voting (ranking). Variations have occurred in relation to generating ideas, and how 'consensus' is obtained from participants. The Delphi technique uses a multistage self-completed questionnaire with individual feedback to determine consensus from a larger group of 'experts'. Questionnaires have been mailed or, more recently, e-mailed to participants. When to use: The NGT has been used to explore consumer and stakeholder views, while the Delphi technique is commonly used to develop guidelines with health professionals. Method choice is influenced by various factors, including the research question, the perception of consensus required, and associated practicalities such as time and geography. Limitations: The NGT requires participants to personally attend a meeting. This may prove difficult to organise, and geography may limit attendance. The Delphi technique can take weeks or months to conclude, especially if multiple rounds are required, and may be complex for lay people to complete.

  12. Approximate Solution of the Point Reactor Kinetic Equations of Average One-Group of Delayed Neutrons for Step Reactivity Insertion

    Directory of Open Access Journals (Sweden)

    S. Yamoah

    2012-04-01

Full Text Available The understanding of the time-dependent behaviour of the neutron population in a nuclear reactor in response to either a planned or unplanned change in the reactor conditions is of great importance to the safe and reliable operation of the reactor. In this study two analytical methods are presented to solve the point kinetics equations with an average one group of delayed neutrons. These approximate solutions of the point reactor kinetics equations are compared with a numerical solution using Euler's first-order method. To obtain an accurate solution for the Euler method, a relatively small time step was chosen for the numerical solution. These methods are applied to different types of reactivity to check the validity of the analytical methods by comparing the analytical results with the numerical results. From the results, it is observed that the analytical solution agrees well with the numerical solution.
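The numerical benchmark described above, explicit first-order Euler applied to the point kinetics equations with one average group of delayed neutrons, can be sketched as follows. The system is dn/dt = ((ρ − β)/Λ)·n + λ·C and dC/dt = (β/Λ)·n − λ·C; the kinetics constants and the step reactivity below are illustrative assumptions, not the paper's values.

```python
# One-group point kinetics, explicit first-order Euler.
# All constants below are illustrative, not taken from the paper.
beta = 0.0065    # effective delayed-neutron fraction
lam = 0.08       # average precursor decay constant (1/s)
Lam = 1e-4       # neutron generation time (s)
rho = 0.003      # step reactivity insertion (rho < beta: delayed supercritical)

dt, T = 1e-5, 1.0                  # small step: the prompt term is stiff
n = 1.0                            # relative neutron density
C = beta / (lam * Lam)             # precursor level in equilibrium with n = 1
for _ in range(int(T / dt)):
    dn = ((rho - beta) / Lam) * n + lam * C
    dC = (beta / Lam) * n - lam * C
    n, C = n + dt * dn, C + dt * dC

print(f"n(1 s) = {n:.3f}")          # prompt jump near beta/(beta-rho), then slow rise
```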

  13. Generic features of the dynamics of complex open quantum systems: statistical approach based on averages over the unitary group.

    Science.gov (United States)

    Gessner, Manuel; Breuer, Heinz-Peter

    2013-04-01

    We obtain exact analytic expressions for a class of functions expressed as integrals over the Haar measure of the unitary group in d dimensions. Based on these general mathematical results, we investigate generic dynamical properties of complex open quantum systems, employing arguments from ensemble theory. We further generalize these results to arbitrary eigenvalue distributions, allowing a detailed comparison of typical regular and chaotic systems with the help of concepts from random matrix theory. To illustrate the physical relevance and the general applicability of our results we present a series of examples related to the fields of open quantum systems and nonequilibrium quantum thermodynamics. These include the effect of initial correlations, the average quantum dynamical maps, the generic dynamics of system-environment pure state entanglement and, finally, the equilibration of generic open and closed quantum systems.

  14. Comparison of Techniques to Estimate Ammonia Emissions at Cattle Feedlots Using Time-Averaged and Instantaneous Concentration Measurements

    Science.gov (United States)

    Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.

    2013-12-01

Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances and deposit in sensitive regions, potentially harming local ecosystems. However, quantifying ammonia emissions from CAFOs through direct measurement is very difficult and costly. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured with inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are deployed only when the wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward-Lagrangian stochastic (bLS) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems was deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations over a 225-m path. This particular laser is utilized in agricultural settings and, in combination with a bLS model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. This study will first

  15. Description and interpretation of the bracts epidermis of Gramineae (Poaceae) with rotated image with maximum average power spectrum (RIMAPS) technique.

    Science.gov (United States)

    Favret, Eduardo A; Fuentes, Néstor O; Molina, Ana M; Setten, Lorena M

    2008-10-01

    During the last few years, RIMAPS technique has been used to characterize the micro-relief of metallic surfaces and recently also applied to biological surfaces. RIMAPS is an image analysis technique which uses the rotation of an image and calculates its average power spectrum. Here, it is presented as a tool for describing the morphology of the trichodium net found in some grasses, which is developed on the epidermal cells of the lemma. Three different species of grasses (herbarium samples) are analyzed: Podagrostis aequivalvis (Trin.) Scribn. & Merr., Bromidium hygrometricum (Nees) Nees & Meyen and Bromidium ramboi (Parodi) Rúgolo. Simple schemes representing the real microstructure of the lemma are proposed and studied. RIMAPS spectra of both the schemes and the real microstructures are compared. These results allow inferring how similar the proposed geometrical schemes are to the real microstructures. Each geometrical pattern could be used as a reference for classifying other species. Finally, this kind of analysis is used to determine the morphology of the trichodium net of Agrostis breviculmis Hitchc. As the dried sample had shrunk and the microstructure was not clear, two kinds of morphology are proposed for the trichodium net of Agrostis L., one elliptical and the other rectilinear, the former being the most suitable.
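The core RIMAPS operation, rotating an image and averaging power spectra, can be sketched with NumPy and SciPy. This is a minimal reading of the idea, not the authors' exact algorithm; the rotation-angle grid and the synthetic striped test image are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def rimaps_spectrum(img, angles=np.arange(0, 180, 5)):
    """Rotate the image over a grid of angles and average the 2-D power spectra
    (a minimal sketch of the RIMAPS idea, not the authors' exact algorithm)."""
    acc = np.zeros_like(img, dtype=float)
    for a in angles:
        r = rotate(img, float(a), reshape=False, mode="reflect")
        acc += np.abs(np.fft.fftshift(np.fft.fft2(r))) ** 2
    return acc / len(angles)

# Synthetic striped "micro-relief" with a strong directional periodicity
x = np.linspace(0.0, 8.0 * np.pi, 128)
img = np.tile(np.sin(x), (128, 1))
spec = rimaps_spectrum(img)
print(spec.shape)
```

For a directional surface pattern such as the trichodium net, peaks in the averaged spectrum reflect the dominant periodicities of the micro-relief.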

  16. Teleradiology based CT colonography to screen a population group of a remote island; at average risk for colorectal cancer

    Energy Technology Data Exchange (ETDEWEB)

    Lefere, Philippe, E-mail: radiologie@skynet.be [VCTC, Virtual Colonoscopy Teaching Centre, Akkerstraat 32c, B-8830 Hooglede (Belgium); Silva, Celso, E-mail: caras@uma.pt [Human Anatomy of Medical Course, University of Madeira, Praça do Município, 9000-082 Funchal (Portugal); Gryspeerdt, Stefaan, E-mail: stefaan@sgryspeerdt.be [VCTC, Virtual Colonoscopy Teaching Centre, Akkerstraat 32c, B-8830 Hooglede (Belgium); Rodrigues, António, E-mail: nucleo@nid.pt [Nucleo Imagem Diagnostica, Rua 5 De Outubro, 9000-216 Funchal (Portugal); Vasconcelos, Rita, E-mail: rita@uma.pt [Department of Engineering and Mathematics, University of Madeira, Praça do Município, 9000-082 Funchal (Portugal); Teixeira, Ricardo, E-mail: j.teixeira1947@gmail.com [Department of Gastroenterology, Central Hospital of Funchal, Avenida Luís de Camões, 9004513 Funchal (Portugal); Gouveia, Francisco Henriques de, E-mail: fhgouveia@netmadeira.com [LANA, Pathology Centre, Rua João Gago, 10, 9000-071 Funchal (Portugal)

    2013-06-15

Purpose: To prospectively assess the performance of teleradiology-based CT colonography to screen a population group of an island, at average risk for colorectal cancer. Materials and methods: A cohort of 514 patients living in Madeira, Portugal, was enrolled in the study. Institutional review board approval was obtained and all patients signed an informed consent. All patients underwent both CT colonography and optical colonoscopy. CT colonography was interpreted by an experienced radiologist at a remote centre using teleradiology. Per-patient sensitivity, specificity, positive (PPV) and negative (NPV) predictive values with 95% confidence intervals (95% CI) were calculated for colorectal adenomas and advanced neoplasia ≥6 mm. Results: 510 patients were included in the study. CT colonography obtained a per-patient sensitivity, specificity, PPV and NPV for adenomas ≥6 mm of 98.11% (88.6–99.9% 95% CI), 90.97% (87.8–93.4% 95% CI), 56.52% (45.8–66.7% 95% CI), 99.75% (98.4–99.9% 95% CI). For advanced neoplasia ≥6 mm, per-patient sensitivity, specificity, PPV and NPV were 100% (86.7–100% 95% CI), 87.07% (83.6–89.9% 95% CI), 34.78% (25.3–45.5% 95% CI) and 100% (98.8–100% 95% CI), respectively. Conclusion: In this prospective trial, teleradiology-based CT colonography was accurate to screen a patient cohort of a remote island, at average risk for colorectal cancer.
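The per-patient accuracy figures reported above follow directly from a 2×2 screening table; a minimal sketch, with hypothetical counts chosen only to be roughly consistent with the reported adenoma ≥6 mm results (the paper's raw table is not given here):

```python
def screening_metrics(tp, fp, fn, tn):
    """Per-patient sensitivity, specificity, PPV and NPV from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts (tp, fp, fn, tn summing to the 510 included patients),
# chosen to be roughly consistent with the reported adenoma >= 6 mm figures
m = screening_metrics(tp=52, fp=40, fn=1, tn=417)
print({k: round(100 * v, 2) for k, v in m.items()})
```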

  17. Curriculum revision: reaching faculty consensus through the Nominal Group Technique.

    Science.gov (United States)

    Davis, D C; Rhodes, R; Baker, A S

    1998-10-01

    A fundamental concept to initiate change in the curriculum revision process is to overcome resistance to change and the boundaries of self-interest. Curriculum change cannot occur without an "unfreezing" of faculty values and interests. The Nominal Group Technique (NGT) was used to facilitate faculty identification of areas needing change in the undergraduate nursing curriculum. The process led to the generation of numerous independent ideas in which all faculty participated. The revised curriculum which resulted from the NGT process has had full and enthusiastic support of the faculty.

  18. Intersubject variability in the analysis of diffusion tensor images at the group level: fractional anisotropy mapping and fiber tracking techniques.

    Science.gov (United States)

    Müller, Hans-Peter; Unrath, Alexander; Riecker, Axel; Pinkhardt, Elmar H; Ludolph, Albert C; Kassubek, Jan

    2009-04-01

Diffusion tensor imaging (DTI) provides comprehensive information about quantitative diffusion and connectivity in the human brain. Transformation into stereotactic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The objective of the present study was to optimize technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. Different averaging methods for mean diffusion-weighted images containing DTI information were compared, i.e., region-of-interest-based fractional anisotropy (FA) mapping, fiber tracking (FT) and the corresponding tractwise FA statistics (TFAS). The novel technique of intersubject FT, which takes into account directional information of single data sets during the FT process, was compared to standard FT techniques. Application of the methods was shown in the comparison of normal subjects and subjects with defined white matter pathology (alterations of the corpus callosum). Fiber tracking was applied to averaged data sets and showed similar results compared with FT on single-subject data. The application of TFAS to averaged data showed averaged FA values around 0.4 for normal controls. The values were in the range of the standard deviation of the averaged FA values for TFAS applied to single-subject data. These results were independent of the applied averaging technique. A significant reduction of the averaged FA values was found in comparison to TFAS applied to data from subjects with defined white matter pathology (FA around 0.2). The applicability of FT techniques in the analysis of different subjects at the group level was demonstrated. Group comparisons as well as FT on group-averaged data were shown to be feasible. The objective of this work was to identify the most appropriate method for intersubject averaging and group comparison which incorporates intersubject variability of

  19. The Effect of Computer Based Instructional Technique for the Learning of Elementary Level Mathematics among High, Average and Low Achievers

    Science.gov (United States)

    Afzal, Muhammad Tanveer; Gondal, Bashir; Fatima, Nuzhat

    2014-01-01

    The major objective of the study was to elicit the effect of three instructional methods for teaching of mathematics on low, average and high achiever elementary school students. Three methods: traditional instructional method, computer assisted instruction (CAI) and teacher facilitated mathematics learning software were employed for the teaching…

  20. A micrometeorological technique for detecting small differences in methane emissions from two groups of cattle

    Science.gov (United States)

    Laubach, Johannes; Grover, Samantha P. P.; Pinares-Patiño, Cesar S.; Molano, German

    2014-12-01

    Potential approaches for reducing enteric methane (CH4) emissions from cattle will require verification of their efficacy at the paddock scale. We designed a micrometeorological approach to compare emissions from two groups of grazing cattle. The approach consists of measuring line-averaged CH4 mole fractions upwind and downwind of each group and using a backward-Lagrangian stochastic model to compute CH4 emission rates from the observed mole fractions, in combination with turbulence statistics measured by a sonic anemometer. With careful screening for suitable wind conditions, a difference of 10% in group emission rates could be detected. This result was corroborated by simultaneous measurements of daily CH4 emissions from each animal with the sulfur hexafluoride (SF6) tracer-ratio technique.
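The inversion step behind this approach can be sketched in outline: the backward-Lagrangian stochastic (bLS) model supplies a simulated concentration-to-emission ratio, and the measured upwind/downwind difference in line-averaged mole fraction is divided by it. A minimal illustration, assuming the simulated ratio comes from a dispersion model (e.g. a bLS tool such as WindTrax); the function name and all numbers are hypothetical:

```python
def bls_emission_rate(c_downwind, c_upwind, c_over_q_sim):
    """Emission rate Q = (C_down - C_up) / (C/Q)_sim.

    c_downwind, c_upwind : line-averaged CH4 mole fractions (e.g. ppm)
    c_over_q_sim         : model-simulated concentration per unit emission
    """
    return (c_downwind - c_upwind) / c_over_q_sim

# Made-up numbers: a 0.12 ppm downwind enhancement and a simulated ratio
# of 0.006 ppm per (g/s) imply a herd emission rate of 20 g/s.
q = bls_emission_rate(1.92, 1.80, 0.006)
print(round(q, 1))  # prints 20.0
```

Comparing two herds then amounts to running this inference for each group's own upwind/downwind transect and taking the ratio of the two emission rates.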

  1. Group Therapy Techniques for Sexually Abused Preteen Girls.

    Science.gov (United States)

    Berman, Pearl

    1990-01-01

    Describes an open-ended, structured, highly intensive therapy group for sexually abused preteen girls that was the primary mode of treatment for 11 girls from low-income, rural White families with numerous problems. Unique features of the group included simultaneous group and individualized goals. (Author/BB)

  2. Facile in situ characterization of gold nanoparticles on electrode surfaces by electrochemical techniques: average size, number density and morphology determination.

    Science.gov (United States)

    Wang, Ying; Laborda, Eduardo; Salter, Chris; Crossley, Alison; Compton, Richard G

    2012-10-21

    A fast and cheap in situ approach is presented for the characterization of gold nanoparticles from electrochemical experiments. The average size and number of nanoparticles deposited on a glassy carbon electrode are determined from the values of the total surface area and amount of gold obtained by lead underpotential deposition and by stripping of gold in hydrochloric acid solution, respectively. The morphology of the nanoparticle surface can also be analyzed from the "fingerprint" in lead deposition/stripping experiments. The method is tested through the study of gold nanoparticles deposited on a glassy carbon substrate by seed-mediated growth method which enables an easy control of the nanoparticle size. The procedure is also applied to the characterization of supplied gold nanoparticles. The results are in satisfactory agreement with those obtained via scanning electron microscopy.
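For idealized monodisperse spherical particles, the size/number determination described here reduces to simple geometry: lead underpotential deposition gives the total gold surface area, and the gold stripping charge gives the total amount (hence volume) of gold. A hedged sketch with illustrative function names and made-up numbers:

```python
import math

def mean_radius_and_count(total_area, total_volume):
    """For N identical spheres: A = N*4*pi*r^2 and V = N*(4/3)*pi*r^3,
    so r = 3V/A and N = V / ((4/3)*pi*r^3)."""
    r = 3.0 * total_volume / total_area
    n = total_volume / ((4.0 / 3.0) * math.pi * r ** 3)
    return r, n

# Consistency check: 1000 spheres of radius 10 nm.
area = 1000 * 4.0 * math.pi * 10.0 ** 2      # total surface area, nm^2
volume = 1000 * (4.0 / 3.0) * math.pi * 10.0 ** 3  # total volume, nm^3
r, n = mean_radius_and_count(area, volume)
print(round(r, 2), round(n))  # prints 10.0 1000
```

In practice the stripping charge would be converted to a gold volume via Faraday's law, molar mass, and density before applying this geometry.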

  3. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  4. An efficient nonlinear relaxation technique for the three-dimensional, Reynolds-averaged Navier-Stokes equations

    Science.gov (United States)

    Edwards, Jack R.; Mcrae, D. S.

    1993-01-01

    An efficient implicit method for the computation of steady, three-dimensional, compressible Navier-Stokes flowfields is presented. A nonlinear iteration strategy based on planar Gauss-Seidel sweeps is used to drive the solution toward a steady state, with approximate factorization errors within a crossflow plane reduced by the application of a quasi-Newton technique. A hybrid discretization approach is employed, with flux-vector splitting utilized in the streamwise direction and central differences with artificial dissipation used for the transverse fluxes. Convergence histories and comparisons with experimental data are presented for several 3-D shock-boundary layer interactions. Both laminar and turbulent cases are considered, with turbulent closure provided by a modification of the Baldwin-Barth one-equation model. For the problems considered (175,000-325,000 mesh points), the algorithm provides steady-state convergence in 900-2000 CPU seconds on a single processor of a Cray Y-MP.

  5. Short-term sandbar variability based on video imagery: Comparison between Time-Average and Time-Variance techniques

    Science.gov (United States)

    Guedes, R.M.C.; Calliari, L.J.; Holland, K.T.; Plant, N.G.; Pereira, P.S.; Alves, F.N.A.

    2011-01-01

    Time-exposure intensity (averaged) images are commonly used to locate the nearshore sandbar position (xb), based on the cross-shore locations of maximum pixel intensity (xi) of the bright bands in the images. It is not known, however, how the breaking patterns seen in Variance images (i.e. those created through standard deviation of pixel intensity over time) are related to the sandbar locations. We investigated the suitability of both Time-exposure and Variance images for sandbar detection within a multiple bar system on the southern coast of Brazil, and verified the relation between wave breaking patterns, observed as bands of high intensity in these images and cross-shore profiles of modeled wave energy dissipation (xD). Not only is Time-exposure maximum pixel intensity location (xi-Ti) well related to xb, but also to the maximum pixel intensity location of Variance images (xi-Va), although the latter was typically located 15m offshore of the former. In addition, xi-Va was observed to be better associated with xD even though xi-Ti is commonly assumed as maximum wave energy dissipation. Significant wave height (Hs) and water level (??) were observed to affect the two types of images in a similar way, with an increase in both Hs and ?? resulting in xi shifting offshore. This ??-induced xi variability has an opposite behavior to what is described in the literature, and is likely an indirect effect of higher waves breaking farther offshore during periods of storm surges. Multiple regression models performed on xi, Hs and ?? allowed the reduction of the residual errors between xb and xi, yielding accurate estimates with most residuals less than 10m. Additionally, it was found that the sandbar position was best estimated using xi-Ti (xi-Va) when xb was located shoreward (seaward) of its mean position, for both the first and the second bar. 
Although it is unknown whether this is an indirect hydrodynamic effect or is indeed related to the morphology, we found that this
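The two image products compared in this record are straightforward temporal statistics of a video stack: the Time-exposure image is the per-pixel mean over time, and the "Variance" image is the per-pixel standard deviation. A minimal sketch with synthetic data (array names are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.random((120, 50, 80))   # synthetic video stack: (time, rows, cols)

timex = frames.mean(axis=0)          # Time-exposure (averaged) image
variance = frames.std(axis=0)        # "Variance" image (std over time)

# Sandbar-position proxy: cross-shore column of maximum pixel intensity
# in each alongshore row, analogous to x_i in the abstract.
xi_timex = timex.argmax(axis=1)
xi_var = variance.argmax(axis=1)
print(timex.shape, xi_timex.shape)   # prints (50, 80) (50,)
```

With real imagery, the offset the authors report (x_i of the Variance image sitting ~15 m offshore of the Time-exposure x_i) would appear as a systematic difference between `xi_var` and `xi_timex`.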

  6. Calculation of average molecular parameters, functional groups, and a surrogate molecule for heavy fuel oils using 1H and 13C NMR spectroscopy

    KAUST Repository

    Abdul Jameel, Abdul Gani

    2016-04-22

Heavy fuel oil (HFO) is primarily used as fuel in marine engines and in boilers to generate electricity. Nuclear Magnetic Resonance (NMR) is a powerful analytical tool for structure elucidation and in this study, 1H NMR and 13C NMR spectroscopy were used for the structural characterization of 2 HFO samples. The NMR data was combined with elemental analysis and average molecular weight to quantify average molecular parameters (AMPs), such as the number of paraffinic carbons, naphthenic carbons, aromatic hydrogens, olefinic hydrogens, etc. in the HFO samples. Recent formulae published in the literature were used for calculating various derived AMPs like the aromaticity factor (f_a), C/H ratio, average paraffinic chain length (n̄), naphthenic ring number (R_N), aromatic ring number (R_A), total ring number (R_T), aromatic condensation index (φ) and aromatic condensation degree (Ω). These derived AMPs help in understanding the overall structure of the fuel. A total of 19 functional groups were defined to represent the HFO samples, and their respective concentrations were calculated by formulating balance equations that equate the concentration of the functional groups with the concentration of the AMPs. Heteroatoms like sulfur, nitrogen, and oxygen were also included in the functional groups. Surrogate molecules were finally constructed to represent the average structure of the molecules present in the HFO samples. This surrogate molecule can be used for property estimation of the HFO samples and also serve as a surrogate to represent the molecular structure for use in kinetic studies.
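Two of the simplest derived AMPs can be computed directly from the carbon-type breakdown. The formulas below are the standard textbook definitions, not the specific correlations from the paper, and all names and numbers are illustrative:

```python
def derived_amps(c_aromatic, c_paraffinic, c_naphthenic, h_total):
    """Aromaticity factor f_a = C_aromatic / C_total, and the C/H ratio."""
    c_total = c_aromatic + c_paraffinic + c_naphthenic
    f_a = c_aromatic / c_total
    ch_ratio = c_total / h_total
    return f_a, ch_ratio

# Made-up carbon and hydrogen counts per average molecule:
f_a, ch = derived_amps(c_aromatic=20, c_paraffinic=60, c_naphthenic=20,
                       h_total=160)
print(f_a, ch)  # prints 0.2 0.625
```

Ring numbers and condensation indices require the additional literature correlations the abstract mentions and are not reproduced here.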

  7. The focus group technique in electoral research - an experimental project

    Directory of Open Access Journals (Sweden)

    SANTOS NEVES, Manuela Lopes

    2012-01-01

    Full Text Available The article is about the application of focus group method in electoral research and its contribution to the strategic planning of campaigns. The methodological approach and analysis were based on the nature of information that this kind of research may provide. The starting point was an experimental research conducted by the campaign of a re-election candidate to the House of Representatives of the state of Espírito Santo.

  8. New reduction techniques for the group Steiner tree problem

    NARCIS (Netherlands)

    Oliveira Filho, F.M. de; Ferreira, C.E.

    2007-01-01

The group Steiner tree problem consists of, given a graph $G$, a collection $\\mathcal{R}$ of subsets of $V(G)$, and a positive cost $c_e$ for each edge $e$ of $G$, finding a minimum-cost tree in $G$ that contains at least one vertex from each $R \\in \\mathcal{R}$. We call the sets in $\\mathcal{R}$ groups.

  9. Facilitation from hand muscles innervated by the ulnar nerve to the extensor carpi radialis motoneurone pool in humans: a study with an electromyogram-averaging technique.

    Science.gov (United States)

    Suzuki, Katsuhiko; Ogawa, Keiichi; Sato, Toshiaki; Nakano, Haruki; Fujii, Hiromi; Shindo, Masaomi; Naito, Akira

    2012-10-01

Effects of low-threshold afferents of hand muscles innervated by the ulnar nerve on the excitability of the extensor carpi radialis (ECR) motoneurone pool in humans were examined using an electromyogram-averaging (EMG-A) technique. Changes in the EMG-A of ECR exhibiting 10% of the maximum contraction in response to electrical stimulation of the ulnar nerve at the wrist (ES-UN) and mechanical stimulation of the hypothenar muscles (MS-HTM) and first dorsal interosseus (MS-FDI) were evaluated in eight normal human subjects. ES-UN with an intensity immediately below the motor threshold, and MS-HTM and MS-FDI with intensities below the threshold of the tendon (T)-reflex, were delivered. Early and significant peaks in the EMG-A were produced by ES-UN, MS-HTM, and MS-FDI in eight of eight subjects. The mean amplitudes of the peaks produced by ES-UN, MS-HTM, and MS-FDI were, respectively, 121.9%, 139.3%, and 149.9% of the control EMG (100%). The difference between the latencies of the peaks produced by ES-UN and MS-HTM, and by ES-UN and MS-FDI, was almost equivalent to that between the Hoffmann (H)- and T-reflexes of HTM and FDI, respectively. The peaks produced by ES-UN, MS-HTM, and MS-FDI diminished with tonic vibration stimulation (TVS) of HTM and FDI, respectively. These findings suggest that group Ia afferents of the hand muscles facilitate the ECR motoneurone pool.
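The core operation in an EMG-averaging study like this is stimulus-triggered averaging: segments of rectified EMG aligned to each stimulus are averaged, so stimulus-locked facilitation peaks survive while ongoing background activity averages toward a flat baseline. A hedged sketch with synthetic data (the function name and signal are illustrative):

```python
import numpy as np

def stimulus_triggered_average(emg, stim_indices, window):
    """Average rectified EMG over a post-stimulus window for each stimulus."""
    segments = [np.abs(emg[i:i + window]) for i in stim_indices
                if i + window <= len(emg)]
    return np.mean(segments, axis=0)

# Synthetic record: a stimulus-locked deflection 3 samples after each stimulus.
emg = np.zeros(100)
stims = [10, 30, 50]
for s in stims:
    emg[s + 3] = 1.0
avg = stimulus_triggered_average(emg, stims, window=5)
print(avg)  # prints [0. 0. 0. 1. 0.]
```

In the study, peak amplitudes in such averages are then expressed as a percentage of the control (pre-stimulus) EMG level.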

  10. Prevalence of colorectal polyps in a group of subjects at average-risk of colorectal cancer undergoing colonoscopic screening in Tehran, Iran between 2008 and 2013.

    Science.gov (United States)

    Sohrabi, Masoudreza; Zamani, Farhad; Ajdarkosh, Hossien; Rakhshani, Naser; Ameli, Mitra; Mohamadnejad, Mehdi; Kabir, Ali; Hemmasi, Gholamreza; Khonsari, Mahmoudreza; Motamed, Nima

    2014-01-01

Colorectal cancer (CRC) is one of the prime causes of mortality around the globe, with a significantly rising incidence in the Middle East region in recent decades. Since detection of CRC in the early stages is an important issue, and since to date there are no comprehensive epidemiologic studies depicting the Middle East region with special attention to the average-risk group, further investigation is of significant necessity in this regard. Our aim was to investigate the prevalence of preneoplastic and neoplastic lesions of the colon in an average-risk population. A total of 1,208 eligible asymptomatic, average-risk adults older than 40 years of age, referred to Firuzgar Hospital in the years 2008-2012, were enrolled. They underwent colonoscopy screening and all polypoid lesions were removed and examined by an expert gastrointestinal pathologist. The lesions were classified by size, location, number and pathologic findings. Size of lesions was measured objectively by endoscopists. The mean age of participants was 56.5±9.59 and 51.6% were male. The overall polyp detection rate was 199/1208 (16.5%), 26 subjects having non-neoplastic polyps, including hyperplastic lesions, and 173/1208 (14.3%) having neoplastic polyps, of which 26 (2.15%) were advanced neoplasms. The prevalence of colorectal neoplasia was more common among the 50-59 age group. Advanced adenoma was more frequent among the 60-69 age group. The majority of adenomas were detected in the distal colon, but a quarter of advanced adenomas were found in the proximal colon; advanced age and male gender were associated with the presence of adenoma. It seems that CRC screening among average-risk populations might be recommended in countries such as Iran. However, sigmoidoscopy alone would miss many colorectal adenomas. Furthermore, the 50-59 age group could be considered as an appropriate target population for this purpose in Iran.

  11. Establishing faculty needs and priorities for peer-mentoring groups using a nominal group technique.

    Science.gov (United States)

    Colón-Emeric, Cathleen S; Bowlby, Lynn; Svetkey, Laura

    2012-01-01

    Peer-mentoring groups are purported to enhance faculty productivity and retention, but the literature about implementation is sparse. Nominal Group Sessions (n=5) with 66 faculty members in different tracks developed prioritized lists of unmet professional development needs and potential group activities. Common items included mentor relationships, research skills, informal peer discussions of successes and challenges, and professional skills workshops. Items particular to specific academic tracks included integration of non-clinical faculty, and gaining recognition in non-research tracks.

  12. Three decision-making aids: brainstorming, nominal group, and Delphi technique.

    Science.gov (United States)

    McMurray, A R

    1994-01-01

    The methods of brainstorming, Nominal Group Technique, and the Delphi technique can be important resources for nursing staff development educators who wish to expand their decision-making skills. Staff development educators may find opportunities to use these methods for such tasks as developing courses, setting departmental goals, and forecasting trends for planning purposes. Brainstorming, Nominal Group Technique, and the Delphi technique provide a structured format that helps increase the quantity and quality of participant responses.

  13. Average correlation clustering algorithm (ACCA) for grouping of co-regulated genes with similar pattern of variation in their expression values.

    Science.gov (United States)

    Bhattacharya, Anindya; De, Rajat K

    2010-08-01

Distance based clustering algorithms can group genes that show similar expression values under multiple experimental conditions. They are unable to identify a group of genes that have a similar pattern of variation in their expression values. Previously we developed an algorithm called the divisive correlation clustering algorithm (DCCA) to tackle this situation, which is based on the concept of correlation clustering. But this algorithm may also fail in certain cases. In order to overcome these situations, we propose a new clustering algorithm, called the average correlation clustering algorithm (ACCA), which is able to produce a better clustering solution than those produced by some other methods. ACCA is able to find groups of genes having more common transcription factors and a similar pattern of variation in their expression values. Moreover, ACCA is more efficient than DCCA with respect to execution time. Like DCCA, ACCA uses the concept of correlation clustering introduced by Bansal et al. ACCA uses the correlation matrix in such a way that all genes in a cluster have the highest average correlation values with the genes in that cluster. We have applied ACCA and some well-known conventional methods including DCCA to two artificial and nine gene expression datasets, and compared the performance of the algorithms. The clustering results of ACCA are found to be more significantly relevant to the biological annotations than those of the other methods. Analysis of the results shows the superiority of ACCA over some other methods in determining a group of genes having more common transcription factors and with a similar pattern of variation in their expression profiles. Availability of the software: The software has been developed using C and Visual Basic languages, and can be executed on the Microsoft Windows platforms. The software may be downloaded as a zip file from http://www.isical.ac.in/~rajat. Then it needs to be installed.
Two word files (included in the zip file) need to
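The invariant described in the abstract — every gene should have its highest average correlation with its own cluster — suggests a reassignment pass like the one below. This is not the published ACCA implementation, only a sketch of that invariant with illustrative names:

```python
import numpy as np

def reassign_by_average_correlation(corr, clusters):
    """Move each item to the cluster with which its average correlation
    (excluding itself) is highest -- one pass of an ACCA-style update.
    corr     : (n, n) correlation matrix
    clusters : list of lists of item indices
    """
    new_clusters = [[] for _ in clusters]
    for i in range(corr.shape[0]):
        scores = []
        for members in clusters:
            others = [j for j in members if j != i]
            scores.append(np.mean(corr[i, others]) if others else -np.inf)
        new_clusters[int(np.argmax(scores))].append(i)
    return new_clusters

# Two well-separated correlation blocks: the correct partition is a
# fixed point of the update.
corr = np.array([[1.0, 0.9, 0.1, 0.1],
                 [0.9, 1.0, 0.1, 0.1],
                 [0.1, 0.1, 1.0, 0.9],
                 [0.1, 0.1, 0.9, 1.0]])
print(reassign_by_average_correlation(corr, [[0, 1], [2, 3]]))
# prints [[0, 1], [2, 3]]
```

Iterating such a pass until the partition stops changing yields clusters in which within-cluster average correlation is locally maximal.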

  14. What's Average?

    Science.gov (United States)

    Stack, Sue; Watson, Jane; Hindley, Sue; Samson, Pauline; Devlin, Robyn

    2010-01-01

    This paper reports on the experiences of a group of teachers engaged in an action research project to develop critical numeracy classrooms. The teachers initially explored how contexts in the media could be used as bases for activities to encourage student discernment and critical thinking about the appropriate use of the underlying mathematical…

  15. Numerical simulation of flow induced by a pitched blade turbine. Comparison of the sliding mesh technique and an averaged source term method

    Energy Technology Data Exchange (ETDEWEB)

    Majander, E.O.J.; Manninen, M.T. [VTT Energy, Espoo (Finland)

    1996-12-31

The flow induced by a pitched blade turbine was simulated using the sliding mesh technique. The detailed geometry of the turbine was modelled in a computational mesh rotating with the turbine and the geometry of the reactor including baffles was modelled in a stationary co-ordinate system. Effects of grid density were investigated. Turbulence was modelled by using the standard k-ε model. Results were compared to experimental observations. Velocity components were found to be in good agreement with the measured values throughout the tank. Averaged source terms were calculated from the sliding mesh simulations in order to investigate the reliability of the source term approach. The flow field in the tank was then simulated in a simple grid using these source terms. Agreement with the results of the sliding mesh simulations was good. The commercial CFD code FLUENT was used in all simulations. (author)

  16. Multidisciplinary Teams and Group Decision-Making Techniques: Possible Solutions to Decision-Making Problems.

    Science.gov (United States)

    Kaiser, Steven M.; Woodman, Richard W.

    1985-01-01

    In placement decisions necessitated by PL 94-142, the multidimensional team approach may be hindered by group problems. The more structured nominal group technique (NGT) is suggested. NGT has six steps: silent, written generation of ideas; round robin reporting; group discussion for clarification; preliminary priority vote; discussion; and final…

  17. The tree clustering technique and the physical reality of galaxy groups

    Directory of Open Access Journals (Sweden)

    M.A. Sabry

    2012-12-01

Full Text Available In this paper the tree clustering technique (the Euclidean separation distance coefficients) is suggested as a test of whether the Hickson compact groups of galaxies (HCGs) are really physical groups. The method is applied only to groups of 5 members in Hickson's catalog.

  18. A new technique for noise reduction at coronary CT angiography with multi-phase data-averaging and non-rigid image registration

    Energy Technology Data Exchange (ETDEWEB)

    Tatsugami, Fuminari; Higaki, Toru; Nakamura, Yuko; Yamagami, Takuji; Date, Shuji; Awai, Kazuo [Hiroshima University, Department of Diagnostic Radiology, Minami-ku, Hiroshima (Japan); Fujioka, Chikako; Kiguchi, Masao [Hiroshima University, Department of Radiology, Minami-ku, Hiroshima (Japan); Kihara, Yasuki [Hiroshima University, Department of Cardiovascular Medicine, Minami-ku, Hiroshima (Japan)

    2015-01-15

    To investigate the feasibility of a newly developed noise reduction technique at coronary CT angiography (CTA) that uses multi-phase data-averaging and non-rigid image registration. Sixty-five patients underwent coronary CTA with prospective ECG-triggering. The range of the phase window was set at 70-80 % of the R-R interval. First, three sets of consecutive volume data at 70 %, 75 % and 80 % of the R-R interval were prepared. Second, we applied non-rigid registration to align the 70 % and 80 % images to the 75 % image. Finally, we performed weighted averaging of the three images and generated a de-noised image. The image noise and contrast-to-noise ratio (CNR) in the proximal coronary arteries between the conventional 75 % and the de-noised images were compared. Two radiologists evaluated the image quality using a 5-point scale (1, poor; 5, excellent). On de-noised images, mean image noise was significantly lower than on conventional 75 % images (18.3 HU ± 2.6 vs. 23.0 HU ± 3.3, P < 0.01) and the CNR was significantly higher (P < 0.01). The mean image quality score for conventional 75 % and de-noised images was 3.9 and 4.4, respectively (P < 0.01). Our method reduces image noise and improves image quality at coronary CTA. (orig.)
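Once the three cardiac phases are non-rigidly registered to the 75 % phase, the de-noising step itself is a weighted per-voxel average. The weights below are hypothetical (the abstract does not state them), and registration is assumed to have been done already:

```python
import numpy as np

def weighted_phase_average(vol70, vol75, vol80, w=(0.25, 0.5, 0.25)):
    """Weighted per-voxel average of three registered CT volumes."""
    return w[0] * vol70 + w[1] * vol75 + w[2] * vol80

# With independent noise of equal variance in each phase, these weights
# reduce the noise std by sqrt(0.25**2 + 0.5**2 + 0.25**2) ~ 0.61.
rng = np.random.default_rng(1)
vols = [100.0 + 20.0 * rng.standard_normal((32, 32, 32)) for _ in range(3)]
avg = weighted_phase_average(*vols)
print(avg.std() < vols[1].std())  # prints True
```

Without the registration step, residual cardiac motion between phases would blur the coronary lumen, which is why the paper aligns the 70 % and 80 % volumes before averaging.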

  19. EXTRAPOLATION TECHNIQUES EVALUATING 24 HOURS OF AVERAGE ELECTROMAGNETIC FIELD EMITTED BY RADIO BASE STATION INSTALLATIONS: SPECTRUM ANALYZER MEASUREMENTS OF LTE AND UMTS SIGNALS.

    Science.gov (United States)

    Mossetti, Stefano; de Bartolo, Daniela; Veronese, Ivan; Cantone, Marie Claire; Cosenza, Cristina; Nava, Elisa

    2016-12-01

    International and national organizations have formulated guidelines establishing limits for occupational and residential electromagnetic field (EMF) exposure at high-frequency fields. Italian legislation fixed 20 V/m as a limit for public protection from exposure to EMFs in the frequency range 0.1 MHz-3 GHz and 6 V/m as a reference level. Recently, the law was changed and the reference level must now be evaluated as the 24-hour average value, instead of the previous highest 6 minutes in a day. The law refers to a technical guide (CEI 211-7/E published in 2013) for the extrapolation techniques that public authorities have to use when assessing exposure for compliance with limits. In this work, we present measurements carried out with a vectorial spectrum analyzer to identify technical critical aspects in these extrapolation techniques, when applied to UMTS and LTE signals. We focused also on finding a good balance between statistically significant values and logistic managements in control activity, as the signal trend in situ is not known. Measurements were repeated several times over several months and for different mobile companies. The outcome presented in this article allowed us to evaluate the reliability of the extrapolation results obtained and to have a starting point for defining operating procedures.

  20. TCD-Profiling Using AVERAGE. A New Technique to Evaluate Transcranial Doppler Ultrasound Flow Spectra of Subjects with Cerebral Small Vessel Disease.

    Science.gov (United States)

    Fisse, Anna Lena; Pueschel, Johannes; Deppe, Michael; Ringelstein, E Bernd; Ritter, Martin A

    2016-01-01

There is an unmet need for screening methods to detect and quantify cerebral small vessel disease (SVD). Transcranial Doppler ultrasound (TCD) flow spectra of the larger intracranial arteries probably contain relevant information about the microcirculation. However, it has not yet been possible to exploit this information as a valuable biomarker. We developed a technique to generate normalized and averaged flow spectra during middle cerebral artery Doppler ultrasound examinations. Second, acceleration curves were calculated, and the absolute amount of the maximum positive and negative acceleration was calculated. Findings were termed 'TCD-profiling coefficient' (TPC). Validation study: we applied this noninvasive method to 5 young adults for reproducibility. Degenerative microangiopathy study: we also tested this new technique in 30 elderly subjects: 15 free of symptoms but with MRI-verified presence of cerebral SVD, and 15 healthy controls. SVD severity was graded according to a predefined score. Cerebral Autosomal Dominant Arteriopathy with Subcortical Infarcts and Leukoencephalopathy (CADASIL) study: TPC values of 10 CADASIL patients were compared with those of 10 healthy controls. Pulse wave analysis and local measurements of carotid stiffness were also performed. CADASIL patients were tested for cognitive impairment with the Montreal Cognitive Assessment scale. White matter and basal ganglia lesions in their cerebral MRI were evaluated according to the Wahlund score. Validation study: the technique delivered reproducible results. Degenerative microangiopathy study: patients with SVD had significantly larger TPCs compared with controls (SVD: 2,132; IQR 1,960-2,343%/s vs. 1,935; IQR 1,782-2,050%/s, p = 0.01). TPC values of subjects with SVD significantly correlated with SVD severity scores (R = 0.58, n = 15). CADASIL study: TPC values of CADASIL patients differed significantly from those of the subjects with SVD from the degenerative microangiopathy study (p = 0.007). CADASIL patients had significantly worse cognitive test results than healthy controls. TCD
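From the abstract, the pipeline is: average the flow-velocity envelope over cardiac cycles, normalize it, differentiate to obtain an acceleration curve, then take the extreme positive and negative accelerations. How those extremes are combined into the TPC is not stated here, so the sum used below is an assumption, as are the function name and toy data:

```python
import numpy as np

def tcd_profiling_coefficient(envelope, fs):
    """Normalize an averaged flow-velocity envelope to percent of its mean,
    differentiate, and combine the extreme accelerations (units: %/s).
    Summing |max| and |min| acceleration is an assumed combination rule."""
    norm = 100.0 * envelope / envelope.mean()
    accel = np.gradient(norm, 1.0 / fs)
    return abs(accel.max()) + abs(accel.min())

envelope = np.array([1.0, 2.0, 3.0, 2.0, 1.0])  # toy averaged cycle
print(round(tcd_profiling_coefficient(envelope, fs=1.0), 2))  # prints 111.11
```

Normalizing to percent of the mean is what makes TPC values comparable across subjects with different absolute flow velocities, which is the point of the profiling approach.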

  1. Employing Cognitive Chunking Techniques to Enhance Sight-Reading Performance of Undergraduate Group-Piano Students

    Science.gov (United States)

    Pike, Pamela D.; Carter, Rebecca

    2010-01-01

    The purpose of this study was to compare the effect of cognitive chunking techniques among first-semester group-piano music majors. The ability to group discrete pieces of information into larger, more meaningful chunks is essential for efficient cognitive processing. Since reading keyboard music and playing the piano is a cognitively complex…

  3. Spontaneous emergence, imitation and spread of alternative foraging techniques among groups of vervet monkeys.

    Directory of Open Access Journals (Sweden)

    Erica van de Waal

Full Text Available Animal social learning has become a subject of broad interest, but demonstrations of bodily imitation in animals remain rare. Based on Voelkl and Huber's study of imitation by marmosets, we tested four groups of semi-captive vervet monkeys presented with food in modified film canisters ("aethipops"). One individual was trained to take the tops off canisters in each group and demonstrated five openings to them. In three groups these models used their mouth to remove the lid, but in one of the groups the model also spontaneously pulled ropes on a canister to open it. In the last group the model preferred to remove the lid with her hands. Following these spontaneous differentiations of foraging techniques in the models, we observed the techniques used by the other group members to open the canisters. We found that mouth opening was the most common technique overall, but the rope and hands methods were used significantly more in groups where they were demonstrated than in groups where they were not. Our results show bodily matching that is conventionally described as imitation. We discuss the relevance of these findings to discoveries about mirror neurons, and implications of the identity of the model for social transmission.

  4. Comparison on three classification techniques for sex estimation from the bone length of Asian children below 19 years old: an analysis using different group of ages.

    Science.gov (United States)

    Darmawan, M F; Yusuf, Suhaila M; Kadir, M R Abdul; Haron, H

    2015-02-01

Sex estimation is used in forensic anthropology to assist the identification of individual remains. However, estimation techniques tend to be unique and applicable only to a certain population. This paper analyzed sex estimation for living individuals below 19 years old using the lengths of 19 bones of the left hand, applying three classification techniques: Discriminant Function Analysis (DFA), Support Vector Machine (SVM) and Artificial Neural Network (ANN) multilayer perceptron. These techniques were carried out on X-ray images of the left hand taken from an Asian population data set. All 19 bones of the left hand were measured using Free Image software, and all the techniques were performed using MATLAB. The "16-19" and "7-9" year age groups were those that could be used for sex estimation, as their average accuracy was above 80%. The ANN model was the best classification technique, with the highest average accuracy in these two age groups compared to the other classification techniques. The results show that each classification technique achieves its best accuracy in a different age group. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. The Application of Group Investigation Technique: The Views of the Teacher and Students

    Directory of Open Access Journals (Sweden)

    Adnan Baki

    2010-03-01

Full Text Available This study aims to determine the views of the teacher and students on a practice carried out using the group investigation technique of the cooperative learning method. The study was conducted with 20 students in 8th grade at a public elementary school in Trabzon during the spring term of the 2008-2009 school year. The case study research method was used. Data were collected via informal interviews and observation forms. The data gathered from the interviews were analyzed and presented in tables and networks, whereas observation data were given in tables. In the light of the data collected in this study, during the group study the students enjoyed working in groups, found the group investigation technique useful, undertook several roles and moved from an individual to a cooperative stance in the group. Based on these results, the group investigation technique is recommended for use in secondary and higher education mathematics courses. Key Words: Cooperative learning, group investigation, teacher and student views

  6. Design of synchronization technique for uncertain discrete network group with diverse structures

    Science.gov (United States)

    Lü, Ling; Li, Chengren; Li, Gang; Sun, Ao; Yan, Zhe; Rong, Tingting; Gao, Yan

    2017-01-01

    In this work, we design a novel synchronization technique to realize the synchronization of a network group constituted by uncertain discrete networks with diverse structures. Based on the Lyapunov theorem, the selection principle for the control inputs and the identification law for the uncertain parameters in the networks are determined, and the synchronization conditions of the network group are obtained. Finally, numerical simulations using one-dimensional convective equations with spatiotemporal chaotic behavior illustrate the performance of the synchronization scheme. The results show that the synchronization technique is suitable for arbitrarily connected networks, and that both the number of networks and the number of nodes in each network can be chosen freely.
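The Lyapunov-based control idea can be sketched on the smallest possible "network": a single drive-response pair of logistic maps with a feedback control input. The map, gain, and initial conditions are assumptions chosen for illustration; the paper treats whole networks of uncertain discrete systems with parameter identification laws.

```python
def f(x, r=3.9):
    # Chaotic logistic map used as the node dynamics.
    return r * x * (1 - x)

k = 0.8  # control gain (assumed; contraction needs (1 - k) * r < 1)
m, s = 0.3, 0.6  # drive (master) and response (slave) states
for n in range(60):
    # Response receives the control input k * (f(m) - f(s)).
    m, s = f(m), f(s) + k * (f(m) - f(s))
err = abs(m - s)
```

After each step the synchronization error shrinks by at least a factor (1 - k) * r = 0.78, so `err` decays geometrically toward zero, which is the behavior the Lyapunov condition guarantees.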

  7. Nurses' Educational Needs Assessment for Financial Management Education Using the Nominal Group Technique.

    Science.gov (United States)

    Noh, Wonjung; Lim, Ji Young

    2015-06-01

    The purpose of this study was to identify the financial management educational needs of nurses in order to develop an educational program to strengthen their financial management competencies. Data were collected from two focus groups using the nominal group technique. The study consisted of three steps: a literature review, focus group discussion using the nominal group technique, and data synthesis. After analyzing the results, nine key components were selected: corporate management and accounting, introduction to financial management in hospitals, basic structure of accounting, basics of hospital accounting, basics of financial statements, understanding the accounts of financial statements, advanced analysis of financial statements, application of financial management, and capital financing of hospitals. The present findings can be used to develop a financial management education program to strengthen the financial management competencies of nurses. Copyright © 2015. Published by Elsevier B.V.
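The ranking stage of the nominal group technique reduces to simple rank-sum aggregation, which can be sketched as follows. The participants, ideas, and rankings are hypothetical; the study's nine components were prioritized by its actual focus groups.

```python
# Each participant ranks ideas from 1 (most important) upward;
# the idea with the lowest rank sum is the group's top priority.
rankings = {
    "P1": ["budgeting", "statements", "capital"],
    "P2": ["statements", "budgeting", "capital"],
    "P3": ["budgeting", "capital", "statements"],
}

scores = {}
for order in rankings.values():
    for rank, idea in enumerate(order, start=1):
        scores[idea] = scores.get(idea, 0) + rank

top = min(scores, key=scores.get)  # -> "budgeting" (rank sum 4)
```

The group discussion between the two ranking rounds is what distinguishes NGT from a plain vote; the tally itself stays this simple.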

  8. Influence of the surface averaging procedure of the current density in assessing compliance with the ICNIRP low-frequency basic restrictions by means of numerical techniques

    Energy Technology Data Exchange (ETDEWEB)

    Zoppetti, N; Andreuccetti, D [IFAC-CNR (' Nello Carrara' Institute for Applied Physics of the Italian National Research Council), Via Madonna del Piano 10, 50019 Sesto Fiorentino (Italy)], E-mail: N.Zoppetti@ifac.cnr.it, E-mail: D.Andreuccetti@ifac.cnr.it

    2009-08-07

    Although ICNIRP guidelines require calculating the surface average of the low-frequency current density distribution over a cross-section of 1 cm², no reference averaging algorithm is indicated, either in the ICNIRP guidelines or in Directive 2004/40/EC, which is based on them. The lack of a general standard algorithm that fulfils the ICNIRP guidelines' requirements is particularly critical in the prospect of the endorsement of Directive 2004/40/EC, since compliance with normative limits must refer to well-defined procedures. In this paper, two case studies are considered, in which the calculation of the surface average is performed using both a simplified approach widely used in the literature and an original averaging procedure. This analysis, aimed at quantifying the expected differences and singling out their sources, shows that the choice of the averaging algorithm represents an important source of uncertainty in applying the guideline requirements.
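One possible averaging algorithm of the kind the paper compares is a moving-window mean: slide a 1 cm² window over the computed current-density grid and report the largest windowed average. The grid spacing, values, and hot spot below are hypothetical; the point is that the windowed peak differs markedly from the raw peak, which is why the choice of algorithm matters for compliance assessment.

```python
# Current density J on a 2 mm grid (arbitrary units); a square 1 cm^2
# window then covers 5 x 5 cells. A single hypothetical hot-spot cell
# dominates the raw peak but is diluted by surface averaging.
grid = [[1.0] * 20 for _ in range(20)]
grid[10][10] = 50.0  # hypothetical local hot spot

w = 5  # window side in cells (1 cm at 2 mm spacing)
best = 0.0
for i in range(len(grid) - w + 1):
    for j in range(len(grid[0]) - w + 1):
        avg = sum(grid[i + di][j + dj] for di in range(w) for dj in range(w)) / (w * w)
        best = max(best, avg)
# raw peak is 50.0, but the largest 1 cm^2 average is only (24 + 50) / 25 = 2.96
```

A different window shape or placement rule (e.g. circular windows, or windows clipped to the tissue cross-section) would yield a different `best`, which is exactly the uncertainty the paper quantifies.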

  9. Group Guidance Services with Self-Regulation Technique to Improve Student Learning Motivation in Junior High School (JHS)

    Science.gov (United States)

    Pranoto, Hadi; Atieka, Nurul; Wihardjo, Sihadi Darmo; Wibowo, Agus; Nurlaila, Siti; Sudarmaji

    2016-01-01

    This study aims at: determining students' motivation before being given group guidance with the self-regulation technique, determining students' motivation after being given group counseling with the self-regulation technique, generating a model of group counseling with the self-regulation technique to improve motivation of learning, determining the…

  10. Investigating the Effects of Group Practice Performed Using Psychodrama Techniques on Adolescents' Conflict Resolution Skills

    Science.gov (United States)

    Karatas, Zeynep

    2011-01-01

    The aim of this study is to examine the effects of group practice which is performed using psychodrama techniques on adolescents' conflict resolution skills. The subjects, for this study, were selected among the high school students who have high aggression levels and low problem solving levels attending Haci Zekiye Arslan High School, in Nigde.…

  11. Identifying the Professional Development Needs of Early Career Teachers in Scotland Using Nominal Group Technique

    Science.gov (United States)

    Kennedy, Aileen; Clinton, Colleen

    2009-01-01

    This paper reports on phase 1 of a project commissioned by Learning and Teaching Scotland to explore the continuing professional development (CPD) needs of teachers in Scotland in years 2-6 of their careers. Nominal group technique (NGT) was employed to identify the CPD needs of year 2-6 teachers and to identify the relative priority of these…

  12. Group techniques as a methodological strategy in acquiring teamwork abilities by college students

    Directory of Open Access Journals (Sweden)

    César Torres Martín

    2013-02-01

    Full Text Available Within the framework of the European Higher Education Area, an adaptation of the teaching-learning process is being promoted through pedagogical renewal, introducing into the classroom a larger number of active or participative methodologies in order to give students greater autonomy in that process. This requires incorporating basic skills into the university curriculum, especially "teamwork". By means of group techniques, students can acquire interpersonal and cognitive skills, as well as abilities that will enable them to face different group situations throughout their academic and professional careers. These techniques are necessary not only as a methodological strategy in the classroom, but also as a reflection instrument with which students can assess their behavior in groups, with the aim of modifying conduct strategies so that their relationships with others support their learning process. Hence the importance of this ability to positively sensitize students toward collective work. Using the action-research method in the classroom during one semester, with systematic interventions based on different group techniques, we present the results obtained through an analysis of the qualitative data, where the selected instruments are group discussion and personal reflection.

  13. Nominal Group Technique and its Applications in Managing Quality in Higher Education

    Directory of Open Access Journals (Sweden)

    Rafikul Islam

    2011-09-01

    Full Text Available Quality management is an important aspect in all kinds of businesses, manufacturing or service. Idea generation plays a pivotal role in managing quality in organizations. It is the new and innovative ideas which can help corporations survive in a turbulent business environment. Research in group dynamics has shown that individuals working alone but in a group environment generate more ideas than individuals engaged in a formal group discussion. In the Nominal Group Technique (NGT), individuals work alone but in a group setting. This paper shows how NGT can be applied to generate a large number of ideas to solve quality-related problems, specifically in the Malaysian higher education setting. The paper also discusses the details of the NGT working procedure and explores the areas of its further application.

  14. Developing a Framework for Objective Structured Clinical Examinations Using the Nominal Group Technique

    Science.gov (United States)

    Crum, Matthew F.; White, Paul J.; Larson, Ian; Malone, Daniel T.; Manallack, David T.; Nicolazzo, Joseph A.; McDowell, Jennifer; Lim, Angelina S.; Kirkpatrick, Carl M.

    2016-01-01

    Objective. To use the nominal group technique to develop a framework to improve existing and develop new objective structured clinical examinations (OSCEs) within a four-year bachelor of pharmacy course. Design. Using the nominal group technique, a unique method of group interview that combines qualitative and quantitative data collection, focus groups were conducted with faculty members, practicing pharmacists, and undergraduate pharmacy students. Five draft OSCE frameworks were suggested, and participants were asked to generate new framework ideas. Assessment. Two focus groups (n=9 and n=7) generated nine extra frameworks. Two of these frameworks, one from each focus group, ranked highest (mean scores of 4.4 and 4.1 on a 5-point scale) and were similar in nature. The project team used these two frameworks to produce the final framework, which includes an OSCE in every year of the course, earlier implementation of teaching OSCEs, and the use of independent simulated patients who are not examiners. Conclusions. The new OSCE framework provides a consistent structure from course entry to exit and ensures graduates meet internship requirements. PMID:28090107

  15. ABO blood grouping from hard and soft tissues of teeth by modified absorption-elution technique

    Directory of Open Access Journals (Sweden)

    B K Ramnarayan

    2013-01-01

    Full Text Available Background: Teeth have always been known as stable tissue that can be preserved both physically and chemically for long periods of time. Blood group substances are known to be present in both the hard and soft tissues of teeth. Objectives: This study aimed at detecting ABO blood group substances in the soft and hard tissues of teeth, and at evaluating the reliability of teeth stored for a relatively long period as a source of blood group substances, using the absorption-elution technique with some modifications. Results: The blood group obtained from the teeth was compared with that obtained from a blood sample. Pulp showed a very large correlation in both fresh and long-standing teeth, though it decreased slightly in the latter. Hard tissue showed a large correlation in both groups, indicating that hard tissue is quite reliable for detecting blood group and that there is not much difference in reliability between the two groups. However, combining pulp and hard tissue, the correlation is moderate. Correlation of blood grouping with age, sex, and jaw distribution was also carried out. Conclusion: Blood group identification from the hard and soft tissues of teeth aids in the identification of an individual.

  16. Covariant approximation averaging

    CERN Document Server

    Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2014-01-01

    We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
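The core idea of covariant approximation averaging, an unbiased estimator built from many cheap approximate measurements plus one exact correction, can be sketched numerically. The "observables" below are synthetic stand-ins; in AMA the approximation is a relaxed-stopping-condition CG propagator, and covariance under lattice translations guarantees the approximation has the same expectation value at every source location, so the bias cancels exactly.

```python
import random
random.seed(1)

def o_exact(cfg, src):
    # Expensive, unbiased measurement on configuration `cfg` at source `src`
    # (synthetic stand-in for an exact propagator solve).
    return cfg + 0.1 * src

def o_approx(cfg, src):
    # Cheap approximation carrying a source-independent offset (the bias).
    return o_exact(cfg, src) + 0.05

sources = range(16)
vals = []
for cfg in [random.gauss(1.0, 0.2) for _ in range(100)]:
    # Average the cheap approximation over all source positions...
    approx_avg = sum(o_approx(cfg, s) for s in sources) / 16
    # ...then correct the bias with a single exact solve at one source.
    vals.append(approx_avg + (o_exact(cfg, 0) - o_approx(cfg, 0)))
est = sum(vals) / len(vals)
# `est` recovers the translation-averaged exact value (here, mean(cfg) + 0.75)
# while paying for only one exact solve per configuration.
```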

  17. Memory, Time and Technique Aspects of Density Matrix Renormalization Group Method

    Institute of Scientific and Technical Information of China (English)

    QIN Shao-Jin; LOU Ji-Zhong

    2001-01-01

    We present the memory size, computational time, and technique aspects of the density matrix renormalization group (DMRG) algorithm. We show how to estimate the memory size and computational time before starting a large-scale DMRG calculation. We propose an implementation of the Hamiltonian-wavefunction multiplication and a wavefunction initialization in DMRG with a block matrix data structure. The one-dimensional Heisenberg model is used to illustrate our study.
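A back-of-the-envelope memory estimate of the kind advocated above can be sketched as follows. The object count and dominant terms are assumptions for illustration; a real estimate depends on the model, symmetry sectors, and block data structure the paper describes.

```python
# Rough DMRG memory estimate: for kept-state number m and local dimension d,
# the dominant dense objects are the superblock wavefunction and a handful
# of block operators, each of size (m*d) x (m*d).
def dmrg_memory_bytes(m, d=2, n_ops=10, itemsize=8):
    dim = m * d
    wavefunction = dim * dim * itemsize       # one superblock wavefunction
    operators = n_ops * dim * dim * itemsize  # assumed operator count
    return wavefunction + operators

# e.g. m = 1000 kept states for a spin-1/2 chain (d = 2):
mb = dmrg_memory_bytes(m=1000) / 2**20  # roughly 336 MiB
```

Scaling is quadratic in m, so doubling the kept states quadruples the estimate, which is why such a check before launching a large calculation is worthwhile.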

  18. Discrete Cosine Transform-II for Reduction in Peak to Average Power Ratio of OFDM Signals Through μ-Law Companding Technique

    Directory of Open Access Journals (Sweden)

    Navneet Kaur

    2013-05-01

    Full Text Available Orthogonal Frequency Division Multiplexing (OFDM) is one of the most familiar terms in telecommunication and wireless communication systems, as it provides better spectral efficiency than Frequency Division Multiplexing (FDM). Although it maintains an orthogonal relation between carriers, a high peak-to-average power ratio (PAPR) is one of the main disadvantages of an OFDM system. Various PAPR reduction techniques have been used, including techniques based on companding. Among companding methods, μ-law companding has the potential to reduce the PAPR of OFDM signals, as it preserves the dynamic range of samples at low amplitudes. A precoding method, which has lower complexity than the other power reduction techniques, is also used to reduce PAPR. This paper puts forward a combination of two existing techniques, namely the μ-law companding transform and the Discrete Cosine Transform-II precoding technique. The simulation results show that the proposed combined scheme gives a better result for PAPR reduction and introduces no distortion.
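A minimal sketch of the μ-law companding stage applied to an OFDM symbol (without the DCT-II precoding stage) is shown below; the subcarrier constellation, N, and μ are assumptions. Because μ-law boosts small amplitudes while mapping the peak back to the normalisation level, the average power rises, the peak stays put, and the PAPR drops.

```python
import cmath, math, random
random.seed(2)

N, mu = 64, 255.0
# Random QPSK subcarriers.
X = [complex(random.choice([-1, 1]), random.choice([-1, 1])) for _ in range(N)]
# Time-domain OFDM symbol via an explicit IDFT (O(N^2) is fine for a sketch).
x = [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
     for n in range(N)]

def papr_db(sig):
    p = [abs(s) ** 2 for s in sig]
    return 10 * math.log10(max(p) * len(p) / sum(p))

v = max(abs(s) for s in x)  # normalisation level
# mu-law companding applied to the envelope, phase preserved.
y = [s / abs(s) * v * math.log(1 + mu * abs(s) / v) / math.log(1 + mu)
     if abs(s) > 0 else s for s in x]

before, after = papr_db(x), papr_db(y)  # `after` is strictly smaller
```

The receiver must apply the inverse (expanding) transform, and in the paper's scheme the DCT-II precoder runs before the IDFT to lower the PAPR further.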

  19. The average numbers of outliers over groups of various splits into training and test sets: A criterion of the reliability of a QSPR? A case of water solubility

    Science.gov (United States)

    Toropova, Alla P.; Toropov, Andrey A.; Benfenati, Emilio; Gini, Giuseppina; Leszczynska, Danuta; Leszczynski, Jerzy

    2012-07-01

    The validation of quantitative structure-property/activity relationships (QSPR/QSAR) is an important challenge of modern theoretical chemistry. Analysis of QSPRs obtained with various distributions of the data into training and test sub-systems can be a useful approach to estimating the reliability of QSPR predictions. The balance of correlation is an approach to building up a QSPR using three components of the available data: (a) a sub-training set (developer), (b) a calibration set (critic), and (c) a test set (estimator). Computational experiments have shown that a probabilistic interdependence exists between the distribution of the available data into sub-training, calibration, and test sets and the average number of outliers in the test set.

  20. Respiration monitoring by Electrical Bioimpedance (EBI) Technique in a group of healthy males. Calibration equations.

    Science.gov (United States)

    Balleza, M.; Vargas, M.; Kashina, S.; Huerta, M. R.; Delgadillo, I.; Moreno, G.

    2017-01-01

    Several research groups have proposed electrical impedance tomography (EIT) for analysing lung ventilation. Using 16 electrodes, EIT can obtain a set of transversal section images of the thorax. In previous works, we obtained from EIT images an alternating impedance signal corresponding to respiration. Then, in order to transform those impedance changes into a measurable volume signal, a set of calibration equations was obtained. However, the EIT technique is still too expensive for outpatient care in basic hospitals. For that reason, we propose the use of the electrical bioimpedance (EBI) technique to monitor respiratory behaviour. The aim of this study was to obtain a set of calibration equations to transform EBI impedance changes, determined at 4 different frequencies, into a measurable volume signal. A group of 8 healthy males was assessed. The group calibration equations showed a high goodness of fit, and the volume determinations obtained by EBI were compared with those obtained by our gold standard. Therefore, although EBI does not provide the complete lung impedance-vector information that EIT does, it is possible to monitor respiration with it.
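Each per-frequency calibration equation is, in essence, a least-squares fit of measured volume against the EBI impedance change. The data pairs below are hypothetical stand-ins for one frequency; the study fits one such equation at each of its 4 frequencies against the gold-standard volume.

```python
# Least-squares fit of volume V (L) against impedance change dZ (ohm):
# a per-frequency calibration equation of the form V = a * dZ + b.
pairs = [(0.10, 0.42), (0.18, 0.61), (0.25, 0.80), (0.33, 1.02), (0.40, 1.19)]
n = len(pairs)
sx = sum(z for z, _ in pairs)
sy = sum(v for _, v in pairs)
sxx = sum(z * z for z, _ in pairs)
sxy = sum(z * v for z, v in pairs)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope (L per ohm)
b = (sy - a * sx) / n                          # intercept (L)
```

With the slope and intercept in hand, any subsequent impedance trace can be converted sample-by-sample into a volume signal, which is what makes the cheap EBI measurement clinically usable.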

  1. Coexisting state of surge and rotating stall in a two-stage axial flow compressor using a double-phase-locked averaging technique

    Science.gov (United States)

    Sakata, Yuu; Ohta, Yutaka

    2017-02-01

    The interaction between surge and rotating stall in an axial flow compressor was investigated from the viewpoint of the unsteady inner flow structure. The aim of this study was to identify the key factor that determines the switching phenomenon of a surge cycle. The main feature of the tested compressor is a shock tube connected in series to the compressor outlet through a diaphragm, slits, and a concentric duplex pipe: this system allows surge and rotating stall to be generated by connecting the shock tube with the compressor, or enables compression plane wave injection. The unsteady characteristics and the internal flow velocity fluctuations were measured in detail, and the stall cell structure was averaged and visualized along the movement of the operating point under a coexisting state of surge. A coefficient of the cell scale fluctuation was calculated from the averaging result, and it confirmed that the process by which the inner flow structure changed differed according to the next surge cycle. The result suggests that the key factor determining the next cycle is the transformation of the internal flow structure, particularly between the stall cell and the fully circumferential stall, in both the recovering and stalling processes.
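Phase-locked averaging itself can be sketched simply: segment a long noisy record at a once-per-cycle trigger and average corresponding samples, which suppresses fluctuations uncorrelated with the phase. The signal, noise level, and cycle count below are assumptions (the study additionally locks to two phases, hence "double-phase-locked", to separate rotor and surge periodicities).

```python
import math, random
random.seed(3)

period, cycles = 100, 50
true = [math.sin(2 * math.pi * n / period) for n in range(period)]
# One long noisy record, phase-locked to a once-per-cycle trigger.
record = [true[n % period] + random.gauss(0, 0.5) for n in range(period * cycles)]

# Average sample n of every cycle to recover the periodic structure.
avg = [sum(record[c * period + n] for c in range(cycles)) / cycles
       for n in range(period)]

rms_raw = math.sqrt(sum((record[n] - true[n]) ** 2 for n in range(period)) / period)
rms_avg = math.sqrt(sum((avg[n] - true[n]) ** 2 for n in range(period)) / period)
# Uncorrelated noise shrinks roughly as 1/sqrt(cycles).
```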

  2. Nominal group technique for individuals with cognitive disability: a systematic review.

    Science.gov (United States)

    Lakhani, Ali; Watling, David P; Zeeman, Heidi; Wright, Courtney J; Bishara, Jason

    2017-05-13

    Considering the perspectives of individuals with cognitive disability is important for their participation in their self-directed health care. The nominal group technique (NGT) has been identified as a method to gather opinions of people with cognitive disability; however, a synthesis of methodological considerations to undertake when employing the approach among people with cognitive disability is non-existent. A systematic review guided by the preferred reporting items for systematic review and meta-analysis protocols was undertaken. Five databases (CINAHL, ISI Web of Science, ProQuest Social Science Journals, Scopus, and MEDLINE) were searched for peer-reviewed literature published before September 2016. Methodological considerations pertaining to the four stages of the NGT (generating ideas, recording ideas, clarification, and ranking) were extracted from each study. Nine publications contributing to eight studies were included. Methodological considerations focused on (i) the number of participants within discussion groups, (ii) research question introduction, (iii) support individuals and accessible methods, (iv) ranking, and (v) researcher training and counselling services. The use of the NGT to gain the health care perspectives of adults with cognitive disability is promising. Conducting nominal group techniques informed by the methodological considerations identified within this review can work towards ensuring that the health care perspectives of people with cognitive disability are considered. Implications for rehabilitation: The emergent policy move towards self-directed health care for people with disability requires that the health care perspectives of people with disability are considered. Effective consultation and discussion techniques are essential to gain the health care perspectives of people with cognitive disability. After undertaking methodological considerations, the NGT can be an effective approach towards gaining the health care

  3. Identification of strategies to facilitate organ donation among African Americans using the nominal group technique.

    Science.gov (United States)

    Locke, Jayme E; Qu, Haiyan; Shewchuk, Richard; Mannon, Roslyn B; Gaston, Robert; Segev, Dorry L; Mannon, Elinor C; Martin, Michelle Y

    2015-02-06

    African Americans are disproportionately affected by ESRD, but few receive a living donor kidney transplant. Surveys assessing attitudes toward donation have shown that African Americans are less likely to express a willingness to donate their own organs. Studies aimed at understanding factors that may facilitate the willingness of African Americans to become organ donors are needed. A novel formative research method was used (the nominal group technique) to identify and prioritize strategies for facilitating increases in organ donation among church-attending African Americans. Four nominal group technique panel interviews were convened (three community and one clergy). Each community panel represented a distinct local church; the clergy panel represented five distinct faith-based denominations. Before nominal group technique interviews, participants completed a questionnaire that assessed willingness to become a donor; 28 African-American adults (≥19 years old) participated in the study. In total, 66.7% of participants identified knowledge- or education-related strategies as most important strategies in facilitating willingness to become an organ donor, a view that was even more pronounced among clergy. Three of four nominal group technique panels rated a knowledge-based strategy as the most important and included strategies, such as information on donor involvement and donation-related risks; 29.6% of participants indicated that they disagreed with deceased donation, and 37% of participants disagreed with living donation. Community participants' reservations about becoming an organ donor were similar for living (38.1%) and deceased (33.4%) donation; in contrast, clergy participants were more likely to express reservations about living donation (33.3% versus 16.7%). These data indicate a greater opposition to living donation compared with donation after one's death among African Americans and suggest that improving knowledge about organ donation, particularly

  4. Performance of some supervised and unsupervised multivariate techniques for grouping authentic and unauthentic Viagra and Cialis

    Directory of Open Access Journals (Sweden)

    Michel J. Anzanello

    2014-09-01

    Full Text Available A typical application of multivariate techniques in forensic analysis consists of discriminating between authentic and unauthentic samples of seized drugs, in addition to finding similar properties in the unauthentic samples. In this paper, the performance of several methods belonging to two different classes of multivariate techniques, supervised and unsupervised, was compared. The supervised techniques (ST) are k-Nearest Neighbor (KNN), Support Vector Machine (SVM), Probabilistic Neural Networks (PNN), and Linear Discriminant Analysis (LDA); the unsupervised techniques (UT) are k-Means cluster analysis and Fuzzy C-Means (FCM). The methods were applied to Fourier-transform infrared spectroscopy (FTIR) data from authentic and unauthentic Cialis and Viagra. The FTIR data were also transformed by Principal Component Analysis (PCA) and kernel functions aimed at improving the grouping performance. The ST proved to be the more reasonable choice when the analysis was conducted on the original data, while the UT led to better results when applied to the transformed data.
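Of the unsupervised techniques compared, k-means is the simplest to sketch. The one-dimensional "spectral intensity" data below are hypothetical stand-ins for the FTIR features (real spectra are high-dimensional and, in the paper, were PCA- or kernel-transformed first).

```python
import random
random.seed(4)

# Two synthetic clusters standing in for authentic vs unauthentic samples.
data = ([random.gauss(1.0, 0.1) for _ in range(30)]
        + [random.gauss(2.0, 0.1) for _ in range(30)])

c = [min(data), max(data)]  # initial cluster centres
for _ in range(20):
    # Assign each point to its nearest centre, then recompute the centres.
    groups = ([], [])
    for x in data:
        groups[0 if abs(x - c[0]) < abs(x - c[1]) else 1].append(x)
    c = [sum(g) / len(g) for g in groups]
# The centres converge to roughly 1.0 and 2.0, the two cluster means.
```

Because k-means never sees the authentic/unauthentic labels, its grouping quality depends entirely on how well the (transformed) features separate, which matches the paper's finding that the unsupervised methods benefit most from the PCA/kernel transformation.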

  5. Manipulating the affiliative interactions of group-housed rhesus macaques using positive reinforcement training techniques.

    Science.gov (United States)

    Schapiro, S J; Perlman, J E; Boudreau, B A

    2001-11-01

    Social housing, whether continuous, intermittent, or partial contact, typically provides many captive primates with opportunities to express affiliative behaviors, important components of the species-typical behavioral repertoire. Positive reinforcement training techniques have been successfully employed to shape many behaviors important for achieving primate husbandry goals. The present study was conducted to determine whether positive reinforcement training techniques could also be employed to alter levels of affiliative interactions among group-housed rhesus macaques. Twenty-eight female rhesus were divided into high (n = 14) and low (n = 14) affiliators based on a median split of the amount of time they spent affiliating during the baseline phase of the study. During the subsequent training phase, half of the low affiliators (n = 7) were trained to increase their time spent affiliating, and half of the high affiliators (n = 7) were trained to decrease their time spent affiliating. Trained subjects were observed both during and outside of training sessions. Low affiliators significantly increased the amount of time they spent affiliating, but only during nontraining sessions. High affiliators, on the other hand, significantly decreased the amount of time they spent affiliating, but only during training sessions. These data suggest that positive reinforcement techniques can be used to alter the affiliative behavior patterns of group-housed, female rhesus monkeys, although the two subgroups of subjects responded differently to the training process. Low affiliators changed their overall behavioral repertoire, while high affiliators responded to the reinforcement contingencies of training, altering their proximity patterns but not their overall behavior patterns. Thus, positive reinforcement training can be used not only as a means to promote species-typical or beneficial behavior patterns, but also as an important experimental manipulation to facilitate systematic

  6. Nominal group technique to select attributes for discrete choice experiments: an example for drug treatment choice in osteoporosis

    Directory of Open Access Journals (Sweden)

    Hiligsmann M

    2013-02-01

    Full Text Available Mickael Hiligsmann,1-3 Caroline van Durme,2 Piet Geusens,2 Benedict GC Dellaert,4 Carmen D Dirksen,3 Trudy van der Weijden,5 Jean-Yves Reginster,6 Annelies Boonen2 1Department of Health Services Research, School for Public Health and Primary Care (CAPHRI), Maastricht University, The Netherlands; 2Department of Internal Medicine, CAPHRI, Maastricht University, The Netherlands; 3Department of Clinical Epidemiology and Medical Technology Assessment, CAPHRI, Maastricht University, The Netherlands; 4Department of Business Economics, Erasmus Rotterdam University, The Netherlands; 5Department of General Practice, CAPHRI, Maastricht University, The Netherlands; 6Department of Public Health, Epidemiology and Health Economics, University of Liege, Belgium. Background: Attribute selection represents an important step in the development of discrete-choice experiments (DCEs), but is often poorly reported. In some situations, the number of attributes identified may exceed what one may find possible to pilot in a DCE. Hence, there is a need to gain insight into methods to select attributes in order to construct the final list of attributes. This study aims to test the feasibility of using the nominal group technique (NGT) to select attributes for DCEs. Methods: Patient group discussions (4–8 participants) were convened to prioritize a list of 12 potentially important attributes for osteoporosis drug therapy. The NGT consisted of three steps: an individual ranking of the 12 attributes by importance from 1 to 12, a group discussion on each of the attributes, including a group review of the aggregate score of the initial rankings, and a second ranking task of the same attributes. Results: Twenty-six osteoporotic patients participated in five NGT sessions. Most (80%) of the patients changed their ranking after the discussion. However, the average initial and final rankings did not differ markedly. In the final ranking, the most important medication attributes were

  7. Program planning for a community pharmacy residency support service using the nominal group technique.

    Science.gov (United States)

    Rupp, Michael T

    2002-01-01

    To define programmatic objectives and initial operational priorities for CommuniRes, a university-based education and support service designed to help community pharmacists successfully implement and sustain community pharmacy residency programs (CPRPs). Advisory committee of nationally recognized experts in CPRPs in a small-group planning session. CPRPs are postgraduate clinical training experiences conducted in chain and independent community pharmacies. The nominal group technique (NGT), a structured approach to group planning and decision making, was used to identify and prioritize the needs of CPRPs. Results of the NGT exercise were used as input to a brainstorming session that defined specific CommuniRes services and resources that must be developed to meet high priority needs of CPRPs. Group consensus on the priority needs of CPRPs was determined through rank order voting. The advisory committee identified 20 separate CPRP needs that it believed must be met to ensure that CPRPs will be successful and sustainable. Group voting resulted in the selection of six needs that were considered to be consensus priorities for services and resources provided through CommuniRes: image parity for CPRPs; CPRP marketing materials; attractive postresidency employment opportunities; well-defined goals, objectives, and residency job descriptions; return on investment and sources of ongoing funding for the residency; and opportunities and mechanisms for communicating/networking with other residents and preceptors. The needs-based programmatic priorities defined by the advisory committee are now being implemented through a tripartite program consisting of live training seminars for CPRP preceptors and directors, an Internet site (www.communires.com), and a host of continuing support services available to affiliated CPRP sites. Future programmatic planning will increasingly involve CPRP preceptors, directors, and former residents to determine the ongoing needs of CPRPs.

  8. Differential Effects on Student Demographic Groups of Using ACT® College Readiness Assessment Composite Score, Act Benchmarks, and High School Grade Point Average for Predicting Long-Term College Success through Degree Completion. ACT Research Report Series, 2013 (5)

    Science.gov (United States)

    Radunzel, Justine; Noble, Julie

    2013-01-01

    In this study, we evaluated the differential effects on racial/ethnic, family income, and gender groups of using ACT® College Readiness Assessment Composite score and high school grade point average (HSGPA) for predicting long-term college success. Outcomes included annual progress towards a degree (based on cumulative credit-bearing hours…

  9. Tools, techniques, organisation and culture of the CADD group at Sygnature Discovery

    Science.gov (United States)

    St-Gallay, Steve A.; Sambrook-Smith, Colin P.

    2016-10-01

    Computer-aided drug design encompasses a wide variety of tools and techniques, and can be implemented with a range of organisational structures and focus in different organisations. Here we outline the computational chemistry skills within Sygnature Discovery, along with the software and hardware at our disposal, and briefly discuss the methods that are not employed and why. The goal of the group is to provide support for design and analysis in order to improve the quality of compounds synthesised and reduce the timelines of drug discovery projects, and we reveal how this is achieved at Sygnature. Impact on medicinal chemistry is vital to demonstrating the value of computational chemistry, and we discuss the approaches taken to influence the list of compounds for synthesis, and how we recognise success. Finally we touch on some of the areas being developed within the team in order to provide further value to the projects and clients.

  12. Comparison of antibody titers using conventional tube technique versus column agglutination technique in ABO blood group incompatible renal transplant.

    Science.gov (United States)

    Bhangale, Amit; Pathak, Amardeep; Pawar, Smita; Jeloka, Tarun

    2017-01-01

    Measurement of alloantibody titer to a red cell antigen (ABO titers) is an integral part of management of ABO incompatible kidney transplants (ABOiKT). There are different methods of titer estimation. Alloantibody detection by tube titration and gel agglutination columns are accepted methodologies. It is essential to find the difference in titers between the two methods so as to set the 'cut-off' titer accordingly, depending upon the method used. We did a prospective observational study to compare and correlate the ABO titers using these two different techniques - the conventional tube technique (CTT) and the newer column agglutination technique (CAT). A total of 67 samples were processed in parallel for anti-A/B antibodies by both tube dilution and column agglutination methods. The mean titer by the conventional tube method was 38.5 ± 96.6 and by the column agglutination test was 96.4 ± 225. The samples correlated well, with a Spearman rho correlation coefficient of 0.94 (P = 0.01). The column agglutination method for anti-A/B titer estimation in an ABO incompatible kidney transplant is more sensitive, with the column agglutination results being approximately two-and-a-half-fold higher (one more dilution) than those of the tube method.
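
    The tube-versus-column comparison rests on Spearman's rank correlation. A minimal sketch with made-up titer pairs (not the study's 67 samples) shows how a rho of this kind is computed, with average ranks for tied dilutions:

```python
# Hypothetical titer pairs (doubling dilutions) illustrating the comparison;
# Spearman rho is the Pearson correlation computed on the ranks.
def average_ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                     # extend over a block of tied values
        avg = (i + j) / 2 + 1          # average of tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

ctt = [8, 16, 16, 32, 64, 128]    # tube-technique titers (hypothetical)
cat = [16, 32, 64, 64, 128, 256]  # column agglutination, about one dilution higher
print(round(spearman_rho(ctt, cat), 2))
```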

  13. Occupational therapy with people with depression: using nominal group technique to collate clinician opinion.

    Science.gov (United States)

    Hitch, Danielle; Taylor, Michelle; Pepin, Genevieve

    2015-05-01

    The aim of this study was to obtain a consensus from clinicians regarding occupational therapy for people with depression, covering the assessments and practices they use that are not currently supported by research evidence directly related to functional performance. The study also aimed to discover how many of these assessments and practices were currently supported by research evidence. Following a previously reported systematic review of assessments and practices used in occupational therapy for people with depression, a modified nominal group technique was used to discover which assessments and practices occupational therapists currently utilize. Three online surveys gathered initial data on therapeutic options (survey 1), which were then ranked (survey 2) and re-ranked (survey 3) to reach the final consensus. Twelve therapists completed the first survey, whilst 10 clinicians completed both the second and third surveys. Only 30% of the assessments and practices identified by the clinicians were supported by research evidence. A consensus was obtained on a total of 35 other assessments and interventions, including both occupational-therapy-specific and generic assessments and interventions. Principal conclusion: Very few of the assessments and interventions identified were supported by research evidence directly related to functional performance. While a large number of options were generated, the majority were not occupational therapy specific.

  14. Evaluation of the antihypertensive effect of barnidipine, a dihydropyridine calcium entry blocker, as determined by the ambulatory blood pressure level averaged for 24 h, daytime, and nighttime. Barnidipine Study Group.

    Science.gov (United States)

    Imai, Y; Abe, K; Nishiyama, A; Sekino, M; Yoshinaga, K

    1997-12-01

    We evaluated the effect of barnidipine, a dihydropyridine calcium antagonist, administered once daily in the morning in a dose of 5, 10, or 15 mg, on ambulatory blood pressure (BP) in 34 patients (51.3 ± 9.6 years). Hypertension was diagnosed based on the clinic BP. The patients were classified into groups according to the ambulatory BP: group 1, dippers with true hypertension; group 2, nondippers with true hypertension; group 3, dippers with false hypertension; and group 4, nondippers with false hypertension. Barnidipine reduced the clinic systolic BP (SBP) and diastolic BP (DBP) in all groups and significantly reduced the average 24 h ambulatory BP (133.0 ± 16.5/90.7 ± 12.3 mm Hg v 119.7 ± 13.7/81.8 ± 10.3 mm Hg, P Barnidipine significantly reduced the daytime ambulatory SBP in groups 1, 2, and 3, but not in group 4, and significantly reduced daytime ambulatory DBP in group 1 but not in groups 2, 3, and 4. Barnidipine significantly reduced the nighttime ambulatory SBP only in group 2 and the nighttime ambulatory DBP in groups 2 and 4. Once-a-day administration of barnidipine influenced 24 h BP in true hypertensives (the ratio of the trough to peak effect > 50%), but had minimal effect on low BP such as the nocturnal BP in dippers and the ambulatory BP in false hypertensives. These findings suggest that barnidipine can be used safely in patients with isolated clinic ("white coat") hypertension and in those with dipping patterns of circadian BP variation whose nocturnal BP is low before treatment.

  15. Burnei's "double X" internal fixation technique for supracondylar humerus fractures in children: indications, technique, advantages and alternative interventions : Study and Research Group in Pediatric Orthopaedics-2012.

    Science.gov (United States)

    Georgescu, I; Gavriliu, S; Pârvan, A; Martiniuc, A; Japie, E; Ghiță, R; Drăghici, I; Hamei, Ş; Ţiripa, I; El Nayef, T; Dan, D

    2013-06-15

    The Study and Research Group in Pediatric Orthopedics-2012 initiated this retrospective study because in Romania, as in other countries, the numerous procedures do not give physicians a definite point of view on the therapeutic criteria in the treatment of supracondylar fractures; the number and severity of complications have brought these deficiencies to attention. In order to correct some of these complications, cubitus varus or valgus, Prof. Al. Pesamosca presented a paper entitled "Personal procedure in the treatment of posttraumatic cubitus varus" at the County Conference in Bacău, on June 24, 1978. This procedure was later popularized by Prof. Gh. Burnei and his coworkers, who operated on patients with cubitus varus or valgus due to supracondylar humeral fractures and presented papers on the subject at national and international congresses. The latest paper regarding this problem was presented at the 29th Annual Meeting of the European Pediatric Orthopedic Society in Zagreb, Croatia, April 7-10, 2010, titled "Distal humeral Z-osteotomy for posttraumatic cubitus varus or valgus", by Gh. Burnei, Ileana Georgescu, Ştefan Gavriliu, Costel Vlad and Daniela Dan. As members of this group, based on the performed studies, we wish to popularize this type of osteosynthesis, which ensures a tight fixation, avoids complications, and allows rapid postoperative activity. The acknowledged treatment for these types of fractures is the orthopedic one, and it must be accomplished as soon as possible, in the first 6 hours, by reduction and cast immobilization or by closed or open reduction and fixation, using one of several methods (Judet, Boehler, Kapandji, San Antonio, San Diego, Burnei's double X technique). The exposed treatment is indicated in irreducible supracondylar humeral fractures, in reducible but unstable types, in polytraumatized patients with supracondylar

  16. Experimental and clinical study of sinus node electrogram by transesophageal signal averaging technique

    Institute of Scientific and Technical Information of China (English)

    丁世芳; 郭赤; 马大波

    2007-01-01

    AIM: To develop a noninvasive transesophageal signal averaging technique for directly recording the sinus node potential (SNP). METHODS: Using a custom-built three-channel cardiac micropotential detector, recordings were made from 256 subjects with normal sinus node function (142 male, 114 female; aged 10 to 74 years, mean 44.2 ± 12.4). The signals from lead I, the surface averaged lead, and the esophageal averaged lead were amplified (gain up to 100 μV/cm), filtered (0.1-50 Hz), A/D converted to 16-bit accuracy at a sampling rate of 2 kHz, and signal averaged. To verify the SNPs recorded from the esophageal lead, electrical activity of the esophagus/sinus node area and the right atrium was recorded simultaneously, comparing esophageal with endocardial recordings in humans and esophageal with epicardial recordings in dogs. RESULTS: Esophageal SNPs were recorded in 189 of 256 subjects (74%). The signal-averaged esophageal SNP was a low-amplitude, low-frequency deflection preceding the P wave, with two morphologies: a domed wave (114 of 189, 60%) and a smooth upstroke slope (75 of 189, 40%). Sinoatrial conduction time was 83.3 ± 26.7 ms (range 23-118 ms); amplitude ranged from 3.5 to 27.7 μV; dv/dt ranged from 0.43 to 1.93 mV/s. CONCLUSION: With appropriate filtering, high gain, and baseline-drift suppression, the esophageal SNP can be directly recorded by transesophageal signal averaging in most patients with normal sinus node function.

  17. Using the IGCRA (individual, group, classroom reflective action) technique to enhance teaching and learning in large accountancy classes

    Directory of Open Access Journals (Sweden)

    Cristina Poyatos

    2011-02-01

    First-year accounting has generally been perceived as one of the more challenging first-year business courses for university students. Various Classroom Assessment Techniques (CATs) have been proposed to enrich and enhance student learning, with these studies generally positioning students as learners alone. This paper uses an educational case study approach and examines the implementation of the IGCRA (individual, group, classroom reflective action) technique, a Classroom Assessment Technique, on first-year accounting students' learning performance. Building on theoretical frameworks in the areas of cognitive learning, social development, and dialogical learning, the technique uses reports to promote reflection on both learning and teaching. IGCRA was found to promote feedback on the effectiveness of student learning, as well as teacher satisfaction. Moreover, the results indicated formative feedback can assist in improving the learning and the learning environment for a large group of first-year accounting students. Clear guidelines for its implementation are provided in the paper.

  18. Investigating the Effects of Group Investigation (GI) and Cooperative Integrated Reading and Comprehension (CIRC) as the Cooperative Learning Techniques on Learner's Reading Comprehension

    Directory of Open Access Journals (Sweden)

    Mohammad Amin Karafkan

    2015-11-01

    Cooperative learning consists of techniques for helping students work together more effectively. This study investigated the effects of Group Investigation (GI) and Cooperative Integrated Reading and Composition (CIRC) as cooperative learning techniques on Iranian EFL learners' reading comprehension at an intermediate level. The participants were 207 male students studying at an intermediate level at ILI. They were randomly assigned to three equal groups: one control group and two experimental groups. The control group was instructed via a conventional technique following an individualistic instructional approach. One experimental group received the GI technique; the other experimental group received the CIRC technique. The findings showed a meaningful difference between the mean reading comprehension scores of the GI and CIRC experimental groups: the CIRC technique was more effective than the GI technique in enhancing students' reading comprehension test scores. Keywords: GI, CIRC, Cooperative Learning Techniques, Reading Comprehension

  19. Averaged Electroencephalic Audiometry in Infants

    Science.gov (United States)

    Lentz, William E.; McCandless, Geary A.

    1971-01-01

    Normal, preterm, and high-risk infants were tested at 1, 3, 6, and 12 months of age using averaged electroencephalic audiometry (AEA) to determine the usefulness of AEA as a measurement technique for assessing auditory acuity in infants, and to delineate some of the procedural and technical problems often encountered. (KW)

  20. Prosopography of social and political groups historically located: method or research technique?

    Directory of Open Access Journals (Sweden)

    Lorena Madruga Monteiro

    2014-06-01

    The scientific status of the prosopographical approach has been questioned in different disciplinary domains. The debate over whether prosopography is a technique, a research tool, an auxiliary science, or a method runs through scientific arguments and through the work of those dedicated to explaining the assumptions of prosopographical research. In the social sciences, for example, prosopography is seen not merely as a research instrument but as a method associated with a theoretical construct for apprehending the social world. Historians who use prosopographic analysis, in turn, remain divided over whether the analysis of collective biography is a method or a research technique. Against this background, this article discusses the prosopographical approach through its different uses. The study presents a literature review, examining prosopography as a technique of historical research and as a method of sociological analysis, and then highlights its procedures and methodological limits.

  1. Group Rhythm and Drumming with Older Adults: Music Therapy Techniques and Multimedia Training Guide

    Science.gov (United States)

    Reuer, Barbara Louise; Crowe, Barbara; Bernstein, Barry

    2007-01-01

    Written by a team of creative and highly qualified music therapists, this publication provides content training for the use of group percussion strategies with mature adults. In fact, the book promotes senior peers as group facilitators and/or coleaders. The grace of this approach is that no previous musical training is necessary in order to…

  2. A Construction of String 2-Group Models using a Transgression-Regression Technique

    CERN Document Server

    Waldorf, Konrad

    2012-01-01

    In this note we present a new construction of the string group that ends optionally in two different contexts: strict diffeological 2-groups or finite-dimensional Lie 2-groups. It is canonical in the sense that no choices are involved; all the data is written down and can be looked up (at least somewhere). The basis of our construction is the basic gerbe of Gawedzki-Reis and Meinrenken. The main new insight is that under a transgression-regression procedure, the basic gerbe picks up a multiplicative structure coming from the Mickelsson product over the loop group. The conclusion of the construction is a relation between multiplicative gerbes and 2-group extensions for which we use recent work of Schommer-Pries.

  3. Scratch This! The IF-AT as a Technique for Stimulating Group Discussion and Exposing Misconceptions

    Science.gov (United States)

    Cotner, Sehoya; Baepler, Paul; Kellerman, Anne

    2008-01-01

    Frequent and immediate feedback is critical for learning and retaining content as well as developing effective learning teams (Michaelson, Knight, and Fink 2004). The Immediate Feedback Assessment Technique (IF-AT) provides a single and efficient way for learners to self-assess their progress in a course and to structure significant small-group…

  4. Using Psychodrama Techniques to Promote Counselor Identity Development in Group Supervision

    Science.gov (United States)

    Scholl, Mark B.; Smith-Adcock, Sondra

    2007-01-01

    The authors briefly introduce the concepts, techniques, and theory of identity development associated with J. L. Moreno's (1946, 1969, 1993) Psychodrama. Based upon Loganbill, Hardy, and Delworth's (1982) model, counselor identity development is conceptualized as consisting of seven developmental themes or vectors (e.g., issues of awareness and…

  5. Techniques for searching first integrals by Lie group and application to gyroscope system

    Institute of Scientific and Technical Information of China (English)

    HU Yanxia; GUAN Keying

    2005-01-01

    In this paper, the methods of finding first integrals of an autonomous system using one-parameter Lie groups are discussed. A class of nontrivial one-parameter Lie groups admitted by the classical gyroscope system is found, and based on the properties of the first integral determined by the one-parameter Lie group, the fourth first integral of the gyroscope system in the Euler, Lagrange, and Kovalevskaya cases can be obtained within a uniform approach. An error in the fourth first integral in the general Kovalevskaya case (A=B=2C, zG=0) that appeared in the literature is found and corrected.

  6. Line group techniques in description of the structural phase transitions in some superconductors

    Science.gov (United States)

    Meszaros, CS.; Balint, A.; Bankuti, J.

    1995-01-01

    The main features of the theory of line groups and their irreducible representations are briefly discussed, as well as their most important applications. A new approach in the general symmetry analysis of modulated systems is presented. It is shown that the line group formalism can be a very effective tool in the examination of structural phase transitions in high temperature superconductors. As an example, the material YBa2Cu3O7-x is discussed briefly.

  7. Research on Key Techniques of Condition Monitoring and Fault Diagnosing Systems of Machine Groups

    Institute of Scientific and Technical Information of China (English)

    WANG Yan-kai; LIAO Ming-fu; WANG Si-ji

    2005-01-01

    This paper describes the development of the condition monitoring and fault diagnosing system of a group of rotating machinery. The data management is performed by means of double redundant data bases stored simultaneously in both the analyzing server and monitoring client. In this way, high reliability of the storage of data is guaranteed. Condensation of trend data releases much space resource of the hard disk. Diagnosing strategies orientated to different typical faults of rotating machinery are developed and incorporated into the system. Experimental verification shows that the system is suitable and effective for condition monitoring and fault diagnosing for a rotating machine group.

  8. Line group techniques in description of the structural phase transitions in some superconductors

    Energy Technology Data Exchange (ETDEWEB)

    Meszaros, C.; Bankuti, J. [Roland Eoetvoes Univ., Budapest (Hungary); Balint, A. [Univ. of Agricultural Sciences, Goedoello (Hungary)

    1994-12-31

    The main features of the theory of line groups and their irreducible representations are briefly discussed, as well as their most important applications. A new approach in the general symmetry analysis of modulated systems is presented. It is shown that the line group formalism can be a very effective tool in the examination of structural phase transitions in high temperature superconductors. As an example, the material YBa2Cu3O7-x is discussed briefly.

  9. A Framework for Conducting Critical Dialectical Pluralist Focus Group Discussions Using Mixed Research Techniques

    Science.gov (United States)

    Onwuegbuzie, Anthony J.; Frels, Rebecca K.

    2015-01-01

    Although focus group discussions (FGDs) represent a popular data collection tool for researchers, they contain an extremely serious flaw: FGD researchers have ultimate power over all decisions made at every stage of the research process--from the conceptualization of the research, to the planning of the research study, to the implementation of the…

  10. Using Visualization and Art to Promote Ego Development: An Evolving Technique for Groups.

    Science.gov (United States)

    Bloomgarden, Joan; Kaplan, Frances F.

    1993-01-01

    Describes procedure for promoting specific aspects of ego development in groups. Notes that procedure employs two visualization and art experiences and is guided by two therapeutic models, transactional analysis and existential therapy. Includes description of Loevinger's conception of ego development which provides larger framework in which to…

  11. Hydrocarbon group type analysis of petroleum heavy fractions using the TLC-FID technique

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, B.K.; Sarowha, S.L.S.; Bhagat, S.D. [Indian Institute of Petroleum, Dehradun (India); Tiwari, R.K.; Gupta, S.K.; Venkataramani, P.S. [Defence Materials and Stores, Research and Development, Establishment, Kanpur (India)

    1998-03-01

    Hydrocarbon group type analysis is important in all conversion processes and in the preparation of feeds for these processes, in order to determine the selectivity of different types of catalysts for product yield and quality. The use of the Mark 5 Iatroscan detector and the method reported here allowed a rapid and quantitative hydrocarbon group type analysis of petroleum residues without prior separation of asphaltenes. SARA-type analyses of petroleum residues were performed by a three-stage development using n-hexane, toluene, and DCM (95%):MeOH (5%). The standard deviation and coefficient of variation in repeated measurements by this method were as low as 0.65 wt% or less and 3.5 wt% or less, respectively. The time required for analysis of 10 samples could be as short as 90 min.
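
    The repeatability figures quoted above (standard deviation and coefficient of variation of replicate measurements) can be reproduced with elementary statistics; the replicate wt% values below are hypothetical, not the study's measurements:

```python
# Minimal sketch (made-up replicate values): standard deviation and
# coefficient of variation as used to report TLC-FID repeatability.
from statistics import mean, stdev

saturates_wt = [24.1, 24.6, 23.8, 24.4, 24.3]  # hypothetical replicate wt% values

sd = stdev(saturates_wt)            # sample standard deviation, wt%
cv = 100 * sd / mean(saturates_wt)  # coefficient of variation, %
print(f"sd = {sd:.2f} wt%, cv = {cv:.1f}%")
```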

  12. Aggregation and Averaging.

    Science.gov (United States)

    Siegel, Irving H.

    The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)

  13. Requirements for effective academic leadership in Iran: A Nominal Group Technique exercise

    OpenAIRE

    Shoghli Alireza; Brommels Mats; Bikmoradi Ali; Sohrabi Zohreh; Masiello Italo

    2008-01-01

    Abstract Background During the last two decades, medical education in Iran has shifted from elite to mass education, with a considerable increase in number of schools, faculties, and programs. Because of this transformation, it is a good case now to explore academic leadership in a non-western country. The objective of this study was to explore the views on effective academic leadership requirements held by key informants in Iran's medical education system. Methods A nominal group study was c...

  14. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    J C Travers

    2010-11-01

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium. The most common experimental arrangements are described, including both continuous wave fibre laser systems with over 100 W pump power, and picosecond mode-locked, master oscillator power fibre amplifier systems, with over 10 kW peak pump power. These systems can produce broadband supercontinua with over 50 and 1 mW/nm average spectral power, respectively. Techniques for numerical modelling of the supercontinuum sources are presented and used to illustrate some supercontinuum dynamics. Some recent experimental results are presented.
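
    The quoted figure of average spectral power (mW/nm) is simply the total supercontinuum output divided by the spectral bandwidth; a back-of-envelope sketch with assumed numbers, not the paper's measurements:

```python
# Back-of-envelope sketch (assumed values): average spectral power is
# total supercontinuum output power divided by the spectral width.
total_power_mw = 100_000  # e.g. ~100 W of supercontinuum output, in mW
bandwidth_nm = 1500       # e.g. a continuum spanning roughly 500-2000 nm

avg_spectral_power = total_power_mw / bandwidth_nm  # mW/nm
print(f"{avg_spectral_power:.0f} mW/nm")
```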

  15. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold; it is shown that they amount to natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...
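
    The barycenter approach the abstract refers to can be sketched as component-wise quaternion averaging followed by renormalisation. This is only a first-order approximation to the Riemannian mean, reasonable when the rotations are close together; the example values are illustrative:

```python
# Sketch of the quaternion-barycenter estimate of a mean rotation:
# sum the unit quaternions (with signs aligned, since q and -q encode
# the same rotation) and renormalise. A first-order approximation only.
import math

def quat_barycenter(quats):
    ref = quats[0]
    acc = [0.0, 0.0, 0.0, 0.0]
    for q in quats:
        # align sign with the first quaternion before summing
        s = 1.0 if sum(a * b for a, b in zip(q, ref)) >= 0 else -1.0
        acc = [a + s * b for a, b in zip(acc, q)]
    n = math.sqrt(sum(a * a for a in acc))
    return [a / n for a in acc]

# two small rotations about the z-axis: +10 and -10 degrees
# (a quaternion's angle parameter is half the rotation angle)
q1 = [math.cos(math.radians(5)), 0.0, 0.0, math.sin(math.radians(5))]
q2 = [math.cos(math.radians(-5)), 0.0, 0.0, math.sin(math.radians(-5))]
print(quat_barycenter([q1, q2]))  # close to the identity [1, 0, 0, 0]
```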

  16. Translating Public Policy: Enhancing the Applicability of Social Impact Techniques for Grassroots Community Groups

    Directory of Open Access Journals (Sweden)

    Melissa Edwards

    2013-08-01

    This paper reports on an exploratory action research study designed to understand how grassroots community organisations engage in the measurement and reporting of social impact and how they demonstrate their social impact to local government funders. Our findings suggest that the relationships between small non-profit organisations, the communities they serve or represent, and their funders are increasingly driven from the top down by formalised practices. Volunteer-run grassroots organisations can be marginalised in this process: members may lack awareness of funders' strategic approaches, or the formalised auditing and control requirements of funders mean that grassroots organisations lose the capacity to define their programs and projects. We conclude that, to help counter this trend, tools and techniques which open up possibilities for dialogue between those holding power and those seeking support are essential.

  17. Group Assessment and Structured Learning.

    Science.gov (United States)

    Lambert, Warren; And Others

    1980-01-01

    Two new techniques that were used with a group of seven blind, multiply handicapped young adults in a half-way house are described. Structured learning therapy is a social skills training technique and group assessment is a method of averaging psychological data on a group of clients to facilitate program planning based on client needs.…

  18. Difficulties and Problematic Steps in Teaching the Onstep Technique for Inguinal Hernia Repair, Results from a Focus Group Interview

    DEFF Research Database (Denmark)

    Andresen, Kristoffer; Laursen, Jannie; Rosenberg, Jacob

    2016-01-01

    Background. When a new surgical technique is brought into a department, it is often experienced surgeons who learn it first and then pass it on to younger surgeons in training. This study seeks to clarify the problems and positive experiences when teaching and training surgeons in the Onstep technique for inguinal hernia repair, seen from the instructor's point of view. Methods. We designed a qualitative study using a focus group to allow participants to elaborate freely and facilitate a discussion. Participants were surgeons with extensive experience in performing the Onstep technique, from Germany, UK, France, Belgium, Italy, Greece, and Sweden. Results. Four main themes were found, with one theme covering three subthemes: instruction of others (experience, patient selection, and tailored teaching), comfort, concerns/fear, and anatomy. Conclusion. Surgeons receiving a one-day training

  19. New techniques for computing the ideal class group and a system of fundamental units in number fields

    CERN Document Server

    Biasse, Jean-François

    2012-01-01

    We describe a new algorithm for computing the ideal class group, the regulator and a system of fundamental units in number fields under the generalized Riemann hypothesis. We use sieving techniques adapted from the number field sieve algorithm to derive relations between elements of the ideal class group, and $p$-adic approximations to manage the loss of precision during the computation of units. This new algorithm is particularly efficient for number fields of small degree, for which a speed-up of an order of magnitude is achieved with respect to the standard methods.

  20. Determination of heat capacity of ionic liquid based nanofluids using group method of data handling technique

    Science.gov (United States)

    Sadi, Maryam

    2017-07-01

    In this study, a group method of data handling (GMDH) model has been successfully developed to predict the heat capacity of ionic liquid based nanofluids, taking the reduced temperature, acentric factor, and molecular weight of the ionic liquids, and the nanoparticle concentration as input parameters. To accomplish the modeling, 528 experimental data points extracted from the literature were divided into training and testing subsets: the training set was used to fit the model coefficients, and the testing set was used for model validation. The ability and accuracy of the developed model were evaluated by comparing its predictions with experimental values using statistical parameters such as the coefficient of determination, mean square error, and mean absolute percentage error. The mean absolute percentage errors of the developed model for the training and testing sets are 1.38% and 1.66%, respectively, indicating excellent agreement between model predictions and experimental data. The results estimated by the developed GMDH model also exhibit higher accuracy than the available theoretical correlations.
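
    The validation metrics named in the abstract (mean square error, mean absolute percentage error, and the coefficient of determination) can be sketched directly; the measured and predicted heat capacities below are synthetic, not drawn from the 528-point data set:

```python
# Sketch (synthetic values) of the statistical parameters used for
# model validation: MSE, MAPE, and the coefficient of determination R^2.
def mse(y, yhat):
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def mape(y, yhat):
    # mean absolute percentage error, in percent
    return 100 / len(y) * sum(abs((a - b) / a) for a, b in zip(y, yhat))

def r_squared(y, yhat):
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1 - ss_res / ss_tot

# hypothetical heat capacities (J/(g*K)): measured vs. model predictions
y = [1.60, 1.75, 1.90, 2.05]
yhat = [1.62, 1.73, 1.93, 2.04]
print(round(mape(y, yhat), 2), round(r_squared(y, yhat), 4))
```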

  1. Requirements for effective academic leadership in Iran: A Nominal Group Technique exercise

    Directory of Open Access Journals (Sweden)

    Shoghli Alireza

    2008-04-01

    Abstract Background: During the last two decades, medical education in Iran has shifted from elite to mass education, with a considerable increase in the number of schools, faculties, and programs. Because of this transformation, it is now a good case for exploring academic leadership in a non-western country. The objective of this study was to explore the views on effective academic leadership requirements held by key informants in Iran's medical education system. Methods: A nominal group study was conducted by strategic sampling, in which participants were requested to discuss and report on requirements for academic leadership, suggestions, and barriers. Written notes from the discussions were transcribed and subjected to content analysis. Results: Six themes of effective academic leadership emerged: (1) shared vision, goal, and strategy; (2) teaching and research leadership; (3) fair and efficient management; (4) mutual trust and respect; (5) development and recognition; and (6) transformational leadership. Current Iranian academic leadership suffers from a lack of meritocracy, conservative leaders, politicization, bureaucracy, and belief in misconceptions. Conclusion: The structure of the Iranian medical university system is not supportive of effective academic leadership. However, participants' views on effective academic leadership are in line with what is found in the western literature: if managers could create the premises for supportive and transformational leadership, they could generate mutual trust and respect in academia and increase scientific production.

  2. Prioritization governmental insurance company according to BSC procedure by AHP group technique

    Directory of Open Access Journals (Sweden)

    Amene Kiarazm

    2014-05-01

    Full Text Available The insurance industry is one of the industries with special importance and standing in a modern economy and in domestic and foreign trade. Performance evaluation and grading of insurance companies, besides determining the general position of an agency in the industry and market and informing the beneficiaries, increase competition and dynamism in the industry and development in the community. On the other hand, evaluating an organization's strategic performance is always one of the first and most basic prerequisites for compiling improvement programs, and it is highly important. One of the efficient strategic models in this respect is the BSC, which analyses all aspects of an organization equally. The statistical population in this research consists of four governmental insurance companies (Iran, Asia, Dana and Alborz). For collecting data, a haphazard sampling procedure was used. The study tool is a questionnaire whose reliability was measured by the consistency ratio and whose validity was assessed by the content-construct method, drawing on the opinions of experts and some managers in this field; the results showed appropriate reliability and validity. In the data analysis, the integrative group AHP and BSC procedures were used. The results showed that insurance company D had a higher final score than the other companies, followed by companies C, A and B, respectively.
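
    The AHP step used in studies like this one can be sketched as follows: priority weights are derived from a pairwise comparison matrix (here via the common geometric-mean approximation) and checked with Saaty's consistency ratio. The matrix below is hypothetical, not the study's questionnaire data.

```python
import numpy as np

def ahp_weights(A):
    """Priority weights from a pairwise comparison matrix (geometric-mean method)."""
    g = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return g / g.sum()

def consistency_ratio(A):
    """Saaty consistency ratio CR = CI / RI; CR < 0.1 is conventionally acceptable."""
    n = A.shape[0]
    w = ahp_weights(A)
    lam_max = np.mean((A @ w) / w)        # estimate of the principal eigenvalue
    ci = (lam_max - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty random indices (partial table)
    return ci / ri

# Hypothetical pairwise comparisons of four criteria (not the study's data).
A = np.array([[1.0,   3.0,   5.0,   7.0],
              [1/3.0, 1.0,   3.0,   5.0],
              [1/5.0, 1/3.0, 1.0,   3.0],
              [1/7.0, 1/5.0, 1/3.0, 1.0]])
w = ahp_weights(A)
print(np.round(w, 3), round(float(consistency_ratio(A)), 3))
```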

  3. Your Average Nigga

    Science.gov (United States)

    Young, Vershawn Ashanti

    2004-01-01

    "Your Average Nigga" contends that just as exaggerating the differences between black and white language leaves some black speakers, especially those from the ghetto, at an impasse, so exaggerating and reifying the differences between the races leaves blacks in the impossible position of either having to try to be white or forever struggling to…

  4. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong...
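
    The barycenter estimate mentioned in the abstract can be sketched for unit quaternions: sign-align (q and -q encode the same rotation), average componentwise, and renormalize. This is the naive mean that the Riemannian approach improves on; the example rotations are illustrative.

```python
import numpy as np

def quaternion_barycenter(quats):
    """Naive rotation average: sign-align, average componentwise, renormalize.

    quats: sequence of unit quaternions (w, x, y, z). Since q and -q represent
    the same rotation, each quaternion is flipped into the hemisphere of the
    first before averaging.
    """
    q = np.array(quats, dtype=float)
    for i in range(1, len(q)):
        if np.dot(q[0], q[i]) < 0.0:
            q[i] = -q[i]
    m = q.mean(axis=0)
    return m / np.linalg.norm(m)

# Two small rotations about the z-axis, +10 and -10 degrees (illustrative).
a = np.deg2rad(10) / 2
q1 = np.array([np.cos(a), 0.0, 0.0, np.sin(a)])
q2 = np.array([np.cos(a), 0.0, 0.0, -np.sin(a)])
print(quaternion_barycenter([q1, q2]))   # close to the identity rotation
```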

  5. Risks identification and ranking using AHP and group decision making technique: Presenting “R index”

    Directory of Open Access Journals (Sweden)

    Safar Fazli

    2013-02-01

    Full Text Available One of the primary concerns in project development is to detect all sorts of risks associated with a particular project. The main objective of this article is to identify the risks in a construction project and to grade them based on their importance to the project. The indicator designed in this paper is a combined model of the Analytic Hierarchy Process (AHP) method and group decision-making, applied for risk measurement and ranking. This indicator, called "R", involves three main steps: creating the risk breakdown structure (RBS), obtaining each risk's weight and efficacy, and finally running the model to rank the risks. A questionnaire was used for gathering data. Based on the results of this survey, there are important risks associated with construction projects, and several guidelines can help reduce them: recognizing the common risks alongside the political risks; proposing a simple, understandable, and practical model; and drawing extensively on the opinions of experts and specialists when applying it. After analyzing the data, the final result of applying the R index showed that the risk "economic changes / currency rate and inflation change" is the most important. In other words, if these risks occur, the project may face more threats, and it is suggested that an organization concentrate its equipment, personnel, cost, and time on this risk more than ever. The most striking finding of this paper is the tremendous difference between the importance of the financial risks and that of the other risks.

  6. Techniques for searching first integrals by Lie group and application to gyroscope system

    Institute of Scientific and Technical Information of China (English)

    HU; Yanxia

    2005-01-01

    [1] Arnold, V. I., Dynamical Systems III, New York: Springer-Verlag, 1988, 120. [2] Arnold, V. I., Kozlov, V. V., Neishtadt, A. I., Mathematical Aspects of Classical and Celestial Mechanics, 2nd ed., New York: Springer-Verlag, 1997, 120. [3] Arnold, V. I., Mathematical Methods of Classical Mechanics, 2nd ed., New York: Springer-Verlag, 1989. [4] Olver, P. J., Applications of Lie Groups to Differential Equations, 2nd ed., New York: Springer-Verlag, 1989. [5] Bluman, G. W., Kumei, S., Symmetries and Differential Equations, 2nd ed., New York: Springer-Verlag, 1989. [6] Guan, K. Y., Liu, S., Lei, J. Z., The Lie algebra admitted by an ordinary differential equation system, Ann. of Diff. Eqs., 1998, 14(2): 131-142. [7] Mei, F. X., Liu, R., Luo, Y., Advanced Analytical Mechanics (in Chinese), Beijing: Beijing Institute of Technology Press, 1991. [8] Hagihara, Y., Celestial Mechanics, Vol. I: Dynamical Principles and Transformation Theory, Boston: The MIT Press, 1976. [9] Cooke, R., The Mathematics of Sonya Kovalevskaya, New York: Springer-Verlag, 1984. [10] Kovalevsky, S., Sur le problème de la rotation d'un corps solide autour d'un point fixe, Acta Mathematica, 1889, 12: 177-232. [11] McCauley, J. L., Classical Mechanics, London: Cambridge University Press, 1997, 238. [12] Guan, K. Y., Hu, Y. X., Generalized Homogeneous Autonomous Systems and Kovalevskaya Gyroscope, to appear. [13] See footnote on p. 1136.

  7. The use of nominal group technique in identifying community health priorities in Moshi rural district, northern Tanzania

    DEFF Research Database (Denmark)

    Makundi, E A; Manongi, R; Mushi, A K

    2005-01-01

    . It is the provision of ownership of the derived health priorities to partners including the community that enhances research utilization of the end results. In addition to disease-based methods, the Nominal Group Technique is being proposed as an important research tool for involving the non-experts in priority....... The patients/caregivers, women's group representatives, youth leaders, religious leaders and community leaders/elders constituted the principal subjects. Emphasis was on providing qualitative data, which are of vital consideration in multi-disciplinary oriented studies, and not on quantitative information from...... larger samples. We found a high level of agreement across groups, that malaria remains the leading health problem in Moshi rural district in Tanzania both in the highland and lowland areas. Our findings also indicate that 'non-medical' issues including lack of water, hunger and poverty heralded priority...

  8. Dependability in Aggregation by Averaging

    CERN Document Server

    Jesus, Paulo; Almeida, Paulo Sérgio

    2010-01-01

    Aggregation is an important building block of modern distributed applications, allowing the determination of meaningful properties (e.g. network size, total storage capacity, average load, majorities, etc.) that are used to direct the execution of the system. However, the majority of the existing aggregation algorithms exhibit relevant dependability issues, when prospecting their use in real application environments. In this paper, we reveal some dependability issues of aggregation algorithms based on iterative averaging techniques, giving some directions to solve them. This class of algorithms is considered robust (when compared to common tree-based approaches), being independent from the used routing topology and providing an aggregation result at all nodes. However, their robustness is strongly challenged and their correctness often compromised, when changing the assumptions of their working environment to more realistic ones. The correctness of this class of algorithms relies on the maintenance of a funda...
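
    A minimal sketch of the iterative-averaging class discussed here is pairwise gossip: two random nodes repeatedly replace their values with their mean. Topology and failure handling are omitted, which is exactly where the dependability issues the paper examines arise.

```python
import random

def gossip_average(values, rounds=2000, seed=0):
    """Pairwise gossip: each step, two random nodes adopt the mean of their values.

    The global sum is invariant, so every node converges to the network average,
    assuming no message loss or churn (the fragile assumption the paper probes).
    """
    rng = random.Random(seed)
    v = list(values)
    for _ in range(rounds):
        i, j = rng.sample(range(len(v)), 2)
        m = (v[i] + v[j]) / 2.0
        v[i] = v[j] = m
    return v

nodes = gossip_average([10.0, 0.0, 4.0, 2.0])
print(nodes)   # every entry near the true average, 4.0
```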

  9. Derivation of a Nonlinear Reynolds Stress Model Using Renormalization Group Analysis and Two-Scale Expansion Technique

    Institute of Scientific and Technical Information of China (English)

    LIU Zheng-Feng; WANG Xiao-Hong

    2008-01-01

    Adopting Yoshizawa's two-scale expansion technique, the fluctuating field is expanded around the isotropic field. The renormalization group method is applied to calculate the covariance of the fluctuating field at the lower-order expansion. A nonlinear Reynolds stress model is derived, and the turbulent constants inside it are evaluated analytically. Compared with the two-scale direct interaction approximation analysis for turbulent shear flows proposed by Yoshizawa, the calculation is much simpler. The analytical model presented here is close to the Speziale model, which is widely applied in numerical simulations of complex turbulent flows.

  10. Negative Average Preference Utilitarianism

    Directory of Open Access Journals (Sweden)

    Roger Chao

    2012-03-01

    Full Text Available For many philosophers working in the area of Population Ethics, it seems that either they have to confront the Repugnant Conclusion (where they are forced to endorse creating massive numbers of lives barely worth living), or they have to confront the Non-Identity Problem (where no one is seemingly harmed, as their existence is dependent on the "harmful" event that took place). To them it seems there is no escape: they have to face one problem or the other. However, there is a way around this, allowing us to escape the Repugnant Conclusion, by using what I will call Negative Average Preference Utilitarianism (NAPU), which, though similar to anti-frustrationism, has some important differences in practice. Current "positive" forms of utilitarianism have struggled to deal with the Repugnant Conclusion, as their theory actually entails this conclusion; however, a form of Negative Average Preference Utilitarianism (NAPU) easily escapes this dilemma (it never even arises within it).

  11. Re-grouping stars based on the chemical tagging technique: A case study of M67 and IC4651

    CERN Document Server

    Blanco-Cuaresma, S

    2016-01-01

    The chemical tagging technique proposed by Freeman & Bland-Hawthorn (2002) is based on the idea that stars formed from the same molecular cloud should share the same chemical signature. Thus, using only the chemical composition of stars, we should be able to re-group the ones that once belonged to the same stellar aggregate. In Blanco-Cuaresma et al. (2015), we tested the technique on open cluster stars using iSpec (Blanco-Cuaresma et al. 2014a); we demonstrated their chemical homogeneity but found that the 14 studied elements lead to chemical signatures too similar to reliably distinguish stars from different clusters. This represents a challenge to the technique, and it opened a new question: could the inclusion of other elements help to better distinguish stars from different aggregates? With an updated and improved version of iSpec, we derived abundances for 28 elements using spectra from the HARPS, UVES and NARVAL archives for the open clusters M67 and IC4651, and we found that the chemical signatures of...

  12. Gastrointestinal neuromuscular pathology: guidelines for histological techniques and reporting on behalf of the Gastro 2009 International Working Group.

    Science.gov (United States)

    Knowles, Charles H; De Giorgio, Roberto; Kapur, Raj P; Bruder, Elisabeth; Farrugia, Gianrico; Geboes, Karel; Gershon, Michael D; Hutson, John; Lindberg, Greger; Martin, Joanne E; Meier-Ruge, William A; Milla, Peter J; Smith, Virpi V; Vandervinden, Jean Marie; Veress, Béla; Wedel, Thilo

    2009-08-01

    The term gastrointestinal neuromuscular disease describes a clinically heterogeneous group of disorders of children and adults in which symptoms are presumed or proven to arise as a result of neuromuscular, including interstitial cell of Cajal, dysfunction. Such disorders commonly have impaired motor activity, i.e. slowed or obstructed transit with radiological evidence of transient or persistent visceral dilatation. Whilst sensorimotor abnormalities have been demonstrated by a variety of methods in these conditions, standards for histopathological reporting remain relatively neglected. Significant differences in methodologies and expertise continue to confound the reliable delineation of normality and specificity of particular pathological changes for disease. Such issues require urgent clarification to standardize acquisition and handling of tissue specimens, interpretation of findings and make informed decisions on risk-benefit of full-thickness tissue biopsy of bowel or other diagnostic procedures. Such information will also allow increased certainty of diagnosis, facilitating factual discussion between patients and caregivers, as well as giving prognostic and therapeutic information. The following report, produced by an international working group, using established consensus methodology, presents proposed guidelines on histological techniques and reporting for adult and paediatric gastrointestinal neuromuscular pathology. The report addresses the main areas of histopathological practice as confronted by the pathologist, including suction rectal biopsy and full-thickness tissue obtained with diagnostic or therapeutic intent. For each, indications, safe acquisition of tissue, histological techniques, reporting and referral recommendations are presented.

  13. Evaluating a nursing erasmus exchange experience: Reflections on the use and value of the Nominal Group Technique for evaluation.

    Science.gov (United States)

    Cunningham, Sheila

    2017-09-01

    This paper discusses the use of the Nominal Group Technique (NGT) for evaluating a European nursing exchange at one university. The NGT is a semi-quantitative evaluation method derived from the Delphi method, popular in the 1970s and 1980s. The NGT was modified from the traditional version, retaining the structured cycles but adding a broader group discussion. The NGT had been used for two successive years but itself required analysis and evaluation for credibility and 'fit' for purpose, which is presented here. It aimed to explore nursing students' exchange experiences and to aid programme development, future exchanges, and closure from the exchange. Results varied across the cohorts: students as participants engaged enthusiastically, generating ample data, which they ranked and categorised collectively. Evaluation of the NGT itself was twofold: first, by the programme team, who considered purpose, audience, inclusivity, context and expertise; secondly, students were asked for their thoughts using a graffiti board. Students avidly engaged with the NGT, but importantly also reported an effect from the process itself, as an opportunity to reflect on and share their experiences. The programme team concluded that the NGT offered a credible evaluation tool which made use of authentic student voice and offered interactive group processes. Pedagogically, it enabled active reflection, thus aiding reorientation back to the United Kingdom and awareness of the 'transformative' consequences of their exchange experiences. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. ELECTRICAL RESISTANCE IMAGING OF TWO-PHASE FLOW WITH A MESH GROUPING TECHNIQUE BASED ON PARTICLE SWARM OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    BO AN LEE

    2014-02-01

    Full Text Available An electrical resistance tomography (ERT) technique combining the particle swarm optimization (PSO) algorithm with the Gauss-Newton method is applied to the visualization of two-phase flows. In ERT, the electrical conductivity distribution, namely the conductivity values of the pixels (numerical meshes) comprising the domain in a numerical image reconstruction algorithm, is estimated from the known currents injected through the electrodes attached on the domain boundary and the potentials measured on those electrodes. In spite of many favorable characteristics of ERT, such as no radiation, low cost, and high temporal resolution compared to other tomography techniques, one of its major drawbacks is low spatial resolution, due to the inherent ill-posedness of conventional image reconstruction algorithms: the number of known data is much less than that of the unknowns (meshes). Recalling that binary mixtures like two-phase flows consist of only two substances with distinct electrical conductivities, this work adopts the PSO algorithm for mesh grouping to reduce the number of unknowns. Several numerical tests are performed to verify the enhanced performance of the proposed method. The comparison between the proposed algorithm and the conventional Gauss-Newton method shows significant improvements in the quality of the reconstructed images.
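
    The particle swarm step can be sketched in generic form: here it minimizes a toy objective rather than the ERT mesh-grouping misfit, and the inertia, cognitive and social coefficients are standard textbook choices, not values from the paper.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimization: the velocity update mixes inertia (w),
    a cognitive pull toward each particle's best (c1), and a social pull toward
    the swarm's best (c2); returns the best position found."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Toy objective: the sphere function, whose minimum is at the origin.
best = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=2)
print(best)   # near [0, 0]
```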

  15. Third-Order Optical Nonlinearities of Squarylium Dyes with Benzothiazole Donor Groups Measured Using the Picosecond Z-Scan Technique

    Science.gov (United States)

    Li, Zhong-Yu; Xu, Song; Chen, Zi-Hui; Zhang, Fu-Shi; Kasatani, Kazuo

    2011-08-01

    Third-order optical nonlinearities of two squarylium dyes with benzothiazole donor groups (BSQ1 and BSQ2) in chloroform solution are measured by a picosecond Z-scan technique at 532 nm. It is found that the two compounds show the saturation absorption and nonlinear self-focus refraction effect. The molecular second hyperpolarizabilities are calculated to be 7.46 × 10⁻³¹ esu and 5.01 × 10⁻³⁰ esu for BSQ1 and BSQ2, respectively. The large optical nonlinearities of squarylium dyes can be attributed to their rigid and intramolecular charge transfer structure. The difference in γ values is attributed to the chloro group of benzene rings of BSQ2 and the one-photon resonance effect. It is found that the third-order nonlinear susceptibilities of two squarylium dyes are mainly determined by the real parts of χ(3), and the large optical nonlinearities of studied squarylium dyes can be attributed to the nonlinear refraction.

  16. Third-Order Optical Nonlinearities of Squarylium Dyes with Benzothiazole Donor Groups Measured Using the Picosecond Z-Scan Technique

    Institute of Scientific and Technical Information of China (English)

    LI Zhong-Yu; XU Song; CHEN Zi-Hui; ZHANG Fu-Shi; KASATANI Kazuo

    2011-01-01

    Third-order optical nonlinearities of two squarylium dyes with benzothiazole donor groups (BSQ1 and BSQ2) in chloroform solution are measured by a picosecond Z-scan technique at 532 nm. It is found that the two compounds show the saturation absorption and nonlinear self-focus refraction effect. The molecular second hyperpolarizabilities are calculated to be 7.46 × 10⁻³¹ esu and 5.01 × 10⁻³⁰ esu for BSQ1 and BSQ2, respectively. The large optical nonlinearities of squarylium dyes can be attributed to their rigid and intramolecular charge transfer structure. The difference in γ values is attributed to the chloro group of the benzene rings of BSQ2 and the one-photon resonance effect. It is found that the third-order nonlinear susceptibilities of the two squarylium dyes are mainly determined by the real parts of χ(3), and the large optical nonlinearities of the studied squarylium dyes can be attributed to the nonlinear refraction.

  17. Evaluation of an automated microplate technique in the Galileo system for ABO and Rh(D) blood grouping.

    Science.gov (United States)

    Xu, Weiyi; Wan, Feng; Lou, Yufeng; Jin, Jiali; Mao, Weilin

    2014-01-01

    A number of automated devices for pretransfusion testing have recently become available. This study evaluated the Immucor Galileo System, a fully automated device based on the microplate hemagglutination technique, for ABO/Rh (D) determinations. Routine ABO/Rh typing tests were performed on 13,045 samples using the Immucor automated instruments. The manual tube method was used to resolve ABO forward and reverse grouping discrepancies. D-negative test results were investigated and confirmed manually by the indirect antiglobulin test (IAT). The system rejected 70 tests for sample inadequacy. 87 samples were read as "no type determined" due to forward and reverse grouping discrepancies; 25 of these results were caused by sample hemolysis. After further testing, we found that 34 were caused by weakened RBC antibodies, 5 were attributable to weak A and/or B antigens, 4 were due to mixed-field reactions, and 8 had high-titer cold agglutinins which react only at temperatures below 34 degrees C. In the remaining 11 cases, irregular RBC antibodies were identified in 9 samples (seven anti-M and two anti-P) and subgroups were identified in 2 samples (one A1 and one A2) by a reference laboratory. As for D typing, 2 weak D+ samples missed by the automated system gave negative results, but weak-positive reactions were observed in the IAT. The Immucor Galileo System is reliable and well suited for ABO and D blood grouping, although several causes can lead to discrepancies in ABO/D typing with a fully automated system. It is suggested that standardization of sample collection may improve the performance of the fully automated system.

  18. COST Action TU1208 - Working Group 3 - Electromagnetic modelling, inversion, imaging and data-processing techniques for Ground Penetrating Radar

    Science.gov (United States)

    Pajewski, Lara; Giannopoulos, Antonios; Sesnic, Silvestar; Randazzo, Andrea; Lambot, Sébastien; Benedetto, Francesco; Economou, Nikos

    2017-04-01

    This work aims at presenting the main results achieved by Working Group (WG) 3 "Electromagnetic methods for near-field scattering problems by buried structures; data processing techniques" of the COST (European COoperation in Science and Technology) Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" (www.GPRadar.eu, www.cost.eu). The main objective of the Action, started in April 2013 and ending in October 2017, is to exchange and increase scientific-technical knowledge and experience of Ground Penetrating Radar (GPR) techniques in civil engineering, whilst promoting in Europe the effective use of this safe non-destructive technique. The Action involves more than 150 Institutions from 28 COST Countries, a Cooperating State, 6 Near Neighbour Countries and 6 International Partner Countries. Among the most interesting achievements of WG3, we wish to mention the following ones: (i) A new open-source version of the finite-difference time-domain simulator gprMax was developed and released. The new gprMax is written in Python and includes many advanced features such as anisotropic and dispersive-material modelling, building of realistic heterogeneous objects with rough surfaces, built-in libraries of antenna models, optimisation of parameters based on Taguchi's method, and more. (ii) A new freeware CAD was developed and released for the construction of two-dimensional gprMax models. This tool also includes scripts easing the execution of gprMax on multi-core machines or networks of computers, and scripts for basic plotting of gprMax results. (iii) A series of freeware codes were developed and will be released by the end of the Action, implementing differential and integral forward-scattering methods for the solution of simple electromagnetic scattering problems involving buried objects. (iv) An open database of synthetic and experimental GPR radargrams was created, in cooperation with WG2. The idea behind this initiative is to give researchers the

  19. Derivation of a second-order model for Reynolds stress using renormalization group analysis and the two-scale expansion technique

    Institute of Scientific and Technical Information of China (English)

    Xiao-Hong Wang; Zheng-Feng Liu; Xiao-Xia Lu

    2011-01-01

    With the two-scale expansion technique proposed by Yoshizawa, the turbulent fluctuating field is expanded around the isotropic field. At a low-order two-scale expansion, applying the mode-coupling approximation of the Yakhot-Orszag renormalization group method to analyze the fluctuating field, the Reynolds-averaged terms in the Reynolds stress transport equation, such as the convective term, the pressure-gradient-velocity correlation term and the dissipation term, are modeled. Two numerical examples, turbulent flow past a backward-facing step and fully developed flow in a rotating channel, are presented to test the efficiency of the proposed second-order model. For these two examples, the proposed model performs as well as the Gibson-Launder (GL) model, giving better predictions than the standard k-ε model, especially in the ability to calculate the secondary flow in the backward-facing step flow and to capture the asymmetric turbulent structure caused by frame rotation.

  20. Understanding neighborhood environment related to Hong Kong children's physical activity: a qualitative study using nominal group technique.

    Directory of Open Access Journals (Sweden)

    Gang He

    Full Text Available BACKGROUND: Relationships between the neighborhood environment and children's physical activity have been well documented in Western countries but are less investigated in ultra-dense Asian cities. The aim of this study was to identify the environmental facilitators and barriers of physical activity behaviors among Hong Kong Chinese children using nominal group technique. METHODS: Five nominal groups were conducted among 34 children aged 10-11 years from four types of neighborhoods varying in socio-economic status and walkability in Hong Kong. Environmental factors were generated by children in response to the question "What neighborhood environments do you think would increase or decrease your willingness to do physical activity?" Factors were prioritized in order of their importance to children's physical activity. RESULTS: Sixteen unique environmental factors, which were perceived as the most important to children's physical activity, were identified. Factors perceived as physical activity-facilitators included "Sufficient lighting", "Bridge or tunnel", "Few cars on roads", "Convenient transportation", "Subway station", "Recreation grounds", "Shopping malls with air conditioning", "Fresh air", "Interesting animals", and "Perfume shop". Factors perceived as physical activity-barriers included "People who make me feel unsafe", "Crimes nearby", "Afraid of being taken or hurt at night", "Hard to find toilet in shopping mall", "Too much noise", and "Too many people in recreation grounds". CONCLUSIONS: Specific physical activity-related environmental facilitators and barriers, which are unique in an ultra-dense city, were identified by Hong Kong children. These initial findings can inform future examinations of the physical activity-environment relationship among children in Hong Kong and similar Asian cities.
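
    The prioritization step of the nominal group technique (participants rank the factors they generated; ranks are converted to points and summed across the group) can be sketched as follows, with hypothetical items and votes rather than the study's data.

```python
from collections import Counter

def ngt_prioritize(rankings, top_n=3):
    """Aggregate nominal-group rankings: each participant's 1st choice earns
    top_n points, the 2nd earns top_n - 1, and so on; items are returned
    ordered by total score."""
    scores = Counter()
    for ranking in rankings:
        for place, item in enumerate(ranking[:top_n]):
            scores[item] += top_n - place
    return scores.most_common()

# Hypothetical votes from four participants (not the study's data).
votes = [
    ["lighting", "safety", "transport"],
    ["safety", "lighting", "noise"],
    ["lighting", "transport", "safety"],
    ["safety", "lighting", "noise"],
]
print(ngt_prioritize(votes))
```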

  1. Working Group 1 "Advanced GNSS Processing Techniques" of the COST Action GNSS4SWEC: Overview of main achievements

    Science.gov (United States)

    Douša, Jan; Dick, Galina; Kačmařík, Michal; Václavovic, Pavel; Pottiaux, Eric; Zus, Florian; Brenot, Hugues; Moeller, Gregor; Hinterberger, Fabian; Pacione, Rosa; Stuerze, Andrea; Eben, Kryštof; Teferle, Norman; Ding, Wenwu; Morel, Laurent; Kaplon, Jan; Hordyniec, Pavel; Rohm, Witold

    2017-04-01

    The COST Action ES1206 GNSS4SWEC addresses new exploitations of the synergy between developments in the GNSS and meteorological communities. Working Group 1 (Advanced GNSS processing techniques) deals with implementing and assessing new methods for GNSS tropospheric monitoring and precise positioning, exploiting all modern GNSS constellations, signals, products, etc. Among other goals, WG1 coordinates the development of advanced tropospheric products in support of numerical and non-numerical weather nowcasting. These are ultra-fast and high-resolution tropospheric products available in real time or in a sub-hourly fashion, and parameters supporting the monitoring of the anisotropy of the troposphere, e.g. horizontal gradients and tropospheric slant path delays. This talk gives an overview of WG1 activities and, particularly, achievements in two activities, the Benchmark and Real-time demonstration campaigns. For the Benchmark campaign, a complex data set of GNSS observations and various meteorological data was collected for a two-month period in 2013 (May-June) which included severe weather events in central Europe. An initial processing of data sets from GNSS and numerical weather models (NWM) provided independently estimated reference parameters: ZTDs and tropospheric horizontal gradients. The comparison of horizontal tropospheric gradients from GNSS and NWM data demonstrated a very good agreement among independent solutions, with negligible biases and an accuracy of about 0.5 mm. Visual comparisons of maps of zenith wet delays and tropospheric horizontal gradients showed very promising results for future exploitation of advanced GNSS tropospheric products in meteorological applications such as severe weather event monitoring and weather nowcasting. The Benchmark data set is also used for an extensive validation of line-of-sight tropospheric Slant Total Delays (STD) from GNSS, NWM-raytracing and Water Vapour Radiometer (WVR) solutions. Seven institutions delivered their STDs

  2. Assessing treatment-as-usual provided to control groups in adherence trials: Exploring the use of an open-ended questionnaire for identifying behaviour change techniques

    NARCIS (Netherlands)

    Oberjé, E.J.M.; Dima, A.L.; Pijnappel, F.J.; Prins, J.M.; Bruin, M. de

    2015-01-01

    OBJECTIVE: Reporting guidelines call for descriptions of control group support in equal detail as for interventions. However, how to assess the active content (behaviour change techniques (BCTs)) of treatment-as-usual (TAU) delivered to control groups in trials remains unclear. The objective of this

  3. An Intercomparison of Techniques to Determine the Area-Averaged Latent Heat Flux from Individual in Situ Observations: A remote Sensing Approach Using the European Field Experiment in a Desertification-Threatened Area Data

    Science.gov (United States)

    Pelgrum, H.; Bastiaanssen, W. G. M.

    1996-04-01

    A knowledge of the area-averaged latent heat flux is necessary to validate large-scale model predictions of heat fluxes over heterogeneous land surfaces. This paper describes different procedures to obtain the area-averaged latent heat flux as a weighted average of ground-based observations, with the weighting coefficients obtained from remote sensing measurements. The remote sensing data used in this study consist of a Landsat thematic mapper image of the European Field Experiment in a Desertification-Threatened Area (EFEDA) grid box in central Spain, acquired on June 12, 1991. A newly developed remote sensing algorithm, the Surface Energy Balance Algorithm for Land (SEBAL), solves the energy budget on a pixel-by-pixel basis. From the resulting frequency distribution of the latent heat flux, the area-averaged latent heat flux was calculated as 164 W m-2. This method was validated with field measurements of latent heat flux, sensible heat flux, and soil moisture. In general, the SEBAL-derived output compared well with the field measurements. Two other methods for retrieving the weighting coefficients were tested against SEBAL. The second method combines satellite images of surface temperature, surface albedo, and normalized difference vegetation index (NDVI) into an index on a pixel-by-pixel basis. After inclusion of ground-based measurements of the latent heat flux, a linear relationship between the index and the latent heat flux was established. This relationship was used to map the latent heat flux on a pixel-by-pixel basis, resulting in an area-averaged value of 194 W m-2. The third method makes use of a supervised classification of the thematic mapper image into eight land use classes. An average latent heat flux was assigned to each class using field measurements of the latent heat flux. According to the percentage of occurrence of each class in the image, the area-averaged latent heat flux was calculated as 110 W m-2. A weighting scheme was produced to estimate the possible area-averaged latent heat flux from in situ observations. The weighting scheme contained a
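
    The third method described above reduces to a simple weighted mean: each land-use class contributes its measured flux weighted by its areal fraction in the classified image. A sketch with hypothetical classes and fluxes, not the EFEDA classification:

```python
def area_averaged_flux(class_fractions, class_fluxes):
    """Area-averaged flux as a weighted mean over land-use classes.

    class_fractions: areal fraction of each class (must sum to 1)
    class_fluxes: mean latent heat flux of each class, in W m-2
    """
    assert abs(sum(class_fractions) - 1.0) < 1e-9
    return sum(f * q for f, q in zip(class_fractions, class_fluxes))

# Hypothetical land-use classes and class-mean fluxes (illustrative only).
fractions = [0.40, 0.35, 0.25]   # e.g. fallow, vineyard, irrigated crop
fluxes = [60.0, 110.0, 250.0]    # W m-2 per class
print(area_averaged_flux(fractions, fluxes))
```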

  4. Comparison of different sampling techniques and of different culture methods for detection of group B streptococcus carriage in pregnant women.

    Science.gov (United States)

    El Aila, Nabil A; Tency, Inge; Claeys, Geert; Saerens, Bart; Cools, Piet; Verstraelen, Hans; Temmerman, Marleen; Verhelst, Rita; Vaneechoutte, Mario

    2010-09-29

    Streptococcus agalactiae (group B streptococcus; GBS) is a significant cause of perinatal and neonatal infections worldwide. To detect GBS colonization in pregnant women, the CDC recommends isolation of the bacterium from vaginal and anorectal swab samples by growth in a selective enrichment medium, such as Lim broth (Todd-Hewitt broth supplemented with selective antibiotics), followed by subculture on sheep blood agar. However, this procedure may require 48 h to complete. We compared different sampling and culture techniques for the detection of GBS. A total of 300 swabs were taken from 100 pregnant women at 35-37 weeks of gestation. For each subject, one rectovaginal, one vaginal and one rectal ESwab were collected. Plating onto Columbia CNA agar (CNA), group B streptococcus differential agar (GBSDA) (Granada Medium) and chromID Strepto B agar (CA), with and without Lim broth enrichment, were compared. The isolates were confirmed as S. agalactiae using the CAMP test on blood agar and by molecular identification with tDNA-PCR or by 16S rRNA gene sequence determination. The overall GBS colonization rate was 22%. GBS positivity for rectovaginal sampling (100%) was significantly higher than detection on the basis of vaginal sampling (50%), but not significantly higher than for rectal sampling (82%). Direct plating of the rectovaginal swab on CNA, GBSDA and CA resulted in detection of 59, 91 and 95% of the carriers, respectively, whereas subculturing of Lim broth yielded 77, 95 and 100% positivity, respectively. Lim broth enrichment enabled the detection of only one additional GBS-positive subject. There was no significant difference between GBSDA and CA, whereas both were more sensitive than CNA. Direct culture onto GBSDA or CA (91 and 95%) detected more carriers than Lim broth enrichment and subculture onto CNA (77%). One false negative isolate was observed on GBSDA, and three false positives on CA. In conclusion, rectovaginal sampling increased the number of GBS

  5. Comparison of group B streptococci colonization in vaginal and rectal specimens by culture method and polymerase chain reaction technique.

    Science.gov (United States)

    Bidgani, Shahrokh; Navidifar, Tahereh; Najafian, Mahin; Amin, Mansour

    2016-03-01

    Streptococcus agalactiae (group B streptococci, GBS) is a colonizing microorganism in pregnant women, usually without causing symptoms. Colonization of GBS in the rectovaginal region in late pregnancy is a risk factor for newborn diseases. GBS infection in newborn babies is acquired by the aspiration of infected amniotic fluid or by vertical transmission during delivery through the birth canal. The aim of this study was to determine GBS prevalence in vaginal and anorectal specimens from pregnant women by polymerase chain reaction (PCR) and culture-based methods. In this study, 137 rectal and vaginal swabs were separately collected from women with gestational age 35-37 weeks from July 2013 to March 2014 at the teaching hospital of Razi, Ahvaz, Iran. All samples were enriched in the selective culture medium Todd-Hewitt broth for 24 hours, and GBS was identified by standard culture on blood agar, phenotypic tests, and amplification of the CFB gene. The age range was 16-45 years (mean, 28.34 ± 0.7 years). Of the rectal samples, 42 (30.7%) were positive by culture and 57 (41.6%) were positive by PCR. Of the 137 vaginal samples, 38 (27.7%) were positive by culture and 60 (43.8%) were positive by PCR. The chance of colonization with GBS was increased in women with a history of urinary tract infection. The frequency of GBS culture from rectal samples was higher than from vaginal samples. However, the detection percentage of GBS using PCR from vaginal samples was higher than from rectal samples. Culture is a time-consuming method requiring at least 48 hours for full GBS identification, whereas PCR is a sensitive and rapid technique for detection of GBS, with results acquired within 3 hours. Copyright © 2015. Published by Elsevier Taiwan LLC.

  6. Physical Theories with Average Symmetry

    OpenAIRE

    Alamino, Roberto C.

    2013-01-01

    This Letter probes the existence of physical laws invariant only in average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise and average symmetry is introduced by considering functions which are invariant only in average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's Theorem with dissipative currents. The relation of this with possible violat...

  7. Average Convexity in Communication Situations

    NARCIS (Netherlands)

    Slikker, M.

    1998-01-01

    In this paper we study inheritance properties of average convexity in communication situations. We show that the underlying graph ensures that the graphrestricted game originating from an average convex game is average convex if and only if every subgraph associated with a component of the underlyin

  8. Sampling Based Average Classifier Fusion

    Directory of Open Access Journals (Sweden)

    Jian Hou

    2014-01-01

    fusion algorithms have been proposed in literature, average fusion is almost always selected as the baseline for comparison. Little has been done on exploring the potential of average fusion and proposing a better baseline. In this paper we empirically investigate the behavior of soft labels and classifiers in average fusion. As a result, we find that, by proper sampling of soft labels and classifiers, the average fusion performance can be clearly improved. This result presents sampling-based average fusion as a better baseline; that is, a newly proposed classifier fusion algorithm should at least perform better than this baseline in order to demonstrate its effectiveness.
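
Plain average fusion, the baseline discussed above, can be sketched as follows; the paper's sampling of soft labels and classifiers is not implemented here, and the probability vectors are made up for illustration:

```python
# Average (soft-label) classifier fusion: each classifier outputs a probability
# vector per sample; fusion averages the vectors and predicts the argmax.
import numpy as np

def average_fusion(soft_labels):
    """soft_labels: array of shape (n_classifiers, n_samples, n_classes)."""
    fused = np.mean(soft_labels, axis=0)   # average the soft labels
    return fused.argmax(axis=1)            # fused class decisions

probs = np.array([
    [[0.7, 0.3], [0.4, 0.6]],   # classifier 1: two samples, two classes
    [[0.6, 0.4], [0.1, 0.9]],   # classifier 2
    [[0.3, 0.7], [0.3, 0.7]],   # classifier 3
])
print(average_fusion(probs))  # sample 0 -> class 0, sample 1 -> class 1
```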

  9. Average utility maximization: A preference foundation

    NARCIS (Netherlands)

    A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)

    2014-01-01

    textabstractThis paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the richness structure naturally provided by the variable length of the sequen

  10. Physical Theories with Average Symmetry

    CERN Document Server

    Alamino, Roberto C

    2013-01-01

    This Letter probes the existence of physical laws invariant only in average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise and average symmetry is introduced by considering functions which are invariant only in average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's Theorem with dissipative currents. The relation of this with possible violations of physical symmetries, as for instance Lorentz invariance in some quantum gravity theories, is briefly commented.

  11. Fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and particle swarm optimization techniques.

    Science.gov (United States)

    Chen, Shyi-Ming; Manalu, Gandhi Maruli Tua; Pan, Jeng-Shyang; Liu, Hsiang-Chuan

    2013-06-01

    In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and particle swarm optimization (PSO) techniques. First, we fuzzify the historical training data of the main factor and the secondary factor, respectively, to form two-factors second-order fuzzy logical relationships. Then, we group the two-factors second-order fuzzy logical relationships into two-factors second-order fuzzy-trend logical relationship groups. Then, we obtain the optimal weighting vector for each fuzzy-trend logical relationship group by using PSO techniques to perform the forecasting. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index and the NTD/USD exchange rates. The experimental results show that the proposed method gets better forecasting performance than the existing methods.
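
The weighting step described above can be sketched with a generic global-best PSO; the fuzzification and fuzzy-trend grouping are omitted, and the group forecasts, targets, and PSO parameters below are illustrative assumptions, not the paper's data or exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forecasts contributed by three fuzzy-trend logical relationship
# groups for three training samples (rows: samples, columns: groups).
group_forecasts = np.array([[9000., 9100., 8950.],
                            [9200., 9150., 9300.],
                            [9050., 9000., 9100.]])
targets = np.array([9020., 9205., 9045.])  # made-up observed index values

def mse(weights):
    w = np.abs(weights)
    w = w / w.sum()                        # project onto the weight simplex
    return np.mean((group_forecasts @ w - targets) ** 2)

# Plain global-best PSO over the three weights.
n_particles, n_dims = 20, 3
pos = rng.random((n_particles, n_dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.random((2, n_particles, n_dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
print(mse(gbest))  # training error of the PSO-optimized weighting vector
```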

  12. Comparison of minimally invasive surgery and mini-incision technique for total hip arthroplasty: a sub-group meta-analysis

    Institute of Scientific and Technical Information of China (English)

    ZHAO Xiang; LIN Tiao; CAI Xun-zi; YAN Shi-gui

    2011-01-01

    Background: It is well accepted that minimally invasive surgery (MIS) for total hip arthroplasty (THA) should involve little or no muscle damage, that it is distinct from the mini-incision technique, and that MIS should therefore have better outcomes than mini-incision surgery. The aim of the current analysis was to apply an explicitly defined sub-group analysis to test whether this hypothesis is true. Methods: A computerized literature search was applied to find any data concerning MIS or mini-incision THAs. A multistage screening was then performed to identify randomized studies fulfilling the inclusion criteria for the analysis. The data were extracted, and sub-group analyses of MIS or mini-incision surgery for different kinds of outcomes were carried out. The P(sub) value for the difference between the MIS sub-group and the mini-incision sub-group was also calculated. Results: Eleven studies fulfilling the inclusion criteria were included, with 472 cases in the study group (MIS or mini-incision) and 492 cases in the conventional group. The overall analysis showed the study group achieved shorter surgical duration (P=0.037), less intraoperative blood loss (P<0.001) and shorter incision length (P<0.001) than the conventional group. The comparison between sub-groups showed that MIS achieved shorter incision length (P(sub)<0.05) and a bigger cup abduction angle (P(sub)<0.05), but caused more blood loss (P(sub)<0.05) than the mini-incision technique. Other indexes were comparable between the two sub-groups. Conclusions: Though further high-quality studies are still needed, the current analysis offers an initial conclusion that MIS THA failed to achieve a better clinical outcome than the mini-incision technique. The exact definition of MIS still needs to be improved.

  13. Using the IGCRA (Individual, Group, Classroom Reflective Action) Technique to Enhance Teaching and Learning in Large Accountancy Classes

    Science.gov (United States)

    Poyatos Matas, Cristina; Ng, Chew; Muurlink, Olav

    2011-01-01

    First year accounting has generally been perceived as one of the more challenging first year business courses for university students. Various Classroom Assessment Techniques (CATs) have been proposed to attempt to enrich and enhance student learning, with these studies generally positioning students as learners alone. This paper uses an…

  14. Quantized average consensus with delay

    NARCIS (Netherlands)

    Jafarian, Matin; De Persis, Claudio

    2012-01-01

    Average consensus problem is a special case of cooperative control in which the agents of the network asymptotically converge to the average state (i.e., position) of the network by transferring information via a communication topology. One of the issues of the large scale networks is the cost of co
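
The underlying average-consensus iteration (without the quantization and delay studied in the paper) can be sketched with a doubly stochastic weight matrix on a small ring network; the weights and initial states below are illustrative:

```python
# Synchronous average consensus: with a doubly stochastic weight matrix W,
# the iteration x <- W x preserves the network average and drives every
# agent's state to that average.
import numpy as np

# 4 agents on a ring; Metropolis-type doubly stochastic weights.
W = np.array([[0.5 , 0.25, 0.  , 0.25],
              [0.25, 0.5 , 0.25, 0.  ],
              [0.  , 0.25, 0.5 , 0.25],
              [0.25, 0.  , 0.25, 0.5 ]])
x = np.array([4.0, 0.0, 2.0, 6.0])   # initial states; network average = 3.0
for _ in range(100):
    x = W @ x                        # each agent averages with its neighbours
print(x)  # all entries approach 3.0
```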

  15. Review: Ralf Bohnsack, Aglaja Przyborski & Burkhard Schäffer (Eds. (2010. Das Gruppendiskussionsverfahren in der Forschungspraxis [The Group Discussion Technique in Research Practice

    Directory of Open Access Journals (Sweden)

    Diana Schmidt-Pfister

    2011-03-01

    Full Text Available This edited volume comprises a range of studies that have employed a group discussion technique in combination with a specific strategy for reconstructive social research—the so-called documentary method. The latter is an empirical research strategy based on the meta-theoretical premises of the praxeological sociology of knowledge, as developed by Ralf BOHNSACK. It seeks to access practice in a more appropriate manner, namely by differentiating between various dimensions of knowledge and sociality. It holds that habitual collective orientations, in particular, are best accessed through group discussions. Thus this book does not address the group discussion technique in general, as might be expected from the title. Instead, it presents various contributions from researchers interpreting transcripts of group discussions according to the documentary method. The chapters are grouped into three main sections, representing different frameworks of practice and habitual orientation: childhood, adolescence, and organizational or societal context. A fourth section includes chapters on further, potentially useful ways of employing this particular technique and approach, as well as a chapter on teaching it in a meaningful way. Each chapter is structured in the same way: introduction to the research field and focus; methodological discussion; exemplary interpretation of group discussions; and concluding remarks. Whilst the transcripts referred to by the authors are very helpfully presented in the chapters, there is a lack of methodological reflection on the group discussion technique itself, which, as mentioned above, is only evaluated in regard to the documentary method. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs110225

  16. Comparison of group B streptococci colonization in vaginal and rectal specimens by culture method and polymerase chain reaction technique

    Directory of Open Access Journals (Sweden)

    Shahrokh Bidgani

    2016-03-01

    Conclusion: The frequency of GBS culture from rectal samples was higher than from vaginal samples. However, the detection percentage of GBS using PCR from vaginal samples was higher than from rectal samples. Culture is a time-consuming method requiring at least 48 hours for full GBS identification, whereas PCR is a sensitive and rapid technique for detection of GBS, with results acquired within 3 hours.

  17. The Effect of Group Counseling Using Ellis's A-B-C Technique on Irrational Beliefs and Self-Efficacy About Breast Self-Awareness of Women Health Volunteers.

    Science.gov (United States)

    Rouzbeh, Mahnaz; Namadian, Masoumeh; Shakibazadeh, Elham; Hasani, Jafar; Rouzbeh, Robabeh

    2017-08-01

    This preliminary pilot effort assessed the effect of group counseling using A-B-C technique on irrational beliefs and self-efficacy for women health volunteers (WHVs) in breast self-awareness. In this randomized controlled trial, 40 WHVs from three health centers (Abhar, Iran) were randomly allocated into two groups. Seven weekly group counseling sessions were held for the intervention group. Data about cancer fatalism belief, dissatisfaction of body, anxiety, and self-efficacy were collected through validated questionnaires 1 month before and 2 weeks after the intervention. Mean scores of anxiety ( p = .036), body dissatisfaction ( p = .002), cancer fatalism belief ( p ≤ .0001), and self-efficacy ( p ≤ .0001) were improved in the intervention group compared with control group. Group counseling using A-B-C technique was effective in improving irrational beliefs and self-efficacy of the WHVs about breast self-awareness. The findings may help in further development of strategies and cultural programs to improve health-related irrational beliefs.

  18. Gaussian moving averages and semimartingales

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas

    2008-01-01

    In the present paper we study moving averages (also known as stochastic convolutions) driven by a Wiener process and with a deterministic kernel. Necessary and sufficient conditions on the kernel are provided for the moving average to be a semimartingale in its natural filtration. Our results...... are constructive - meaning that they provide a simple method to obtain kernels for which the moving average is a semimartingale or a Wiener process. Several examples are considered. In the last part of the paper we study general Gaussian processes with stationary increments. We provide necessary and sufficient...

  19. Applying the nominal group technique in an employment relations conflict situation: A case study of a university maintenance section in South Africa

    Directory of Open Access Journals (Sweden)

    Cornelis (Kees) S. van der Waal

    2009-04-01

    Full Text Available After a breakdown in employment relations in the maintenance section of a higher education institution, the authors were asked to intervene in order to try and solve the employment relations conflict situation. It was decided to employ the Nominal Group Technique (NGT) as a tool in problem identification during conflict in the workplace. An initial investigation of documentation and interviews with prominent individuals in the organisation was carried out. The NGT was then used in four focus group discussions to determine the important issues as seen by staff members. The NGT facilitates the determination of shared perceptions and the ranking of ideas. The NGT was used in diverse groups, necessitating adaptations to the technique. The perceived causes of the conflict were established. The NGT can be used in a conflict situation in the workplace in order to establish the perceived causes of employment relations conflict.

  20. Measuring cutaneous thermal nociception in group-housed pigs using laser technique - effects of laser power output

    DEFF Research Database (Denmark)

    Herskin, Mette S.; Ladevig, Jan; Arendt-Nielsen, Lars

    2009-01-01

    ... are available, especially methodology which is applicable for pigs kept in group-housing without disturbing the daily routines of the animals. To validate a laser-based method to measure thermal nociception in group-housed pigs, we performed two experiments observing the behavioural responses toward cutaneous nociceptive stimulation from a computer-controlled CO2-laser beam applied to either the caudal part of the metatarsus on the hind legs or the shoulder region of gilts. In Exp. 1, effects of laser power output (0, 0.5, 1, 1.5 and 2 W) on nociceptive responses toward stimulation on the caudal aspects of the metatarsus were examined using 15 gilts kept in one group and tested in individual feeding stalls after feeding. Increasing the power output led to gradually decreasing latency to respond (P ...

  1. Books average previous decade of economic misery.

    Directory of Open Access Journals (Sweden)

    R Alexander Bentley

    Full Text Available For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
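
The moving-average fitting procedure can be sketched on synthetic data: build the k-year trailing average of an "economic" series for a range of k and pick the window with the highest correlation to the "literary" series. All series and parameters here are made up, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
years = 80
econ = rng.normal(0.0, 1.0, years)   # synthetic annual economic misery index

def trailing_mean(series, k):
    """k-year trailing moving average; entry t covers years t-k .. t-1."""
    return np.array([series[t - k:t].mean() for t in range(k, len(series))])

true_k = 11  # generating window: literary series follows an 11-year average
literary = trailing_mean(econ, true_k) + rng.normal(0.0, 0.05, years - true_k)

fits = {}
for k in range(2, 20):
    ma = trailing_mean(econ, k)
    n = min(len(ma), len(literary))      # align both series at the final year
    fits[k] = np.corrcoef(ma[-n:], literary[-n:])[0, 1]
best_k = max(fits, key=fits.get)
print(best_k)  # the goodness of fit peaks near the generating window
```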

  2. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering  and analysis of bacterial  convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  3. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  4. Using Breakout Groups as an Active Learning Technique in a Large Undergraduate Nutrition Classroom at the University of Guelph

    Directory of Open Access Journals (Sweden)

    Genevieve Newton

    2012-12-01

    Full Text Available Breakout groups have been widely used under many different conditions, but the lack of published information related to their use in undergraduate settings highlights the need for research related to their use in this context. This paper describes a study investigating the use of breakout groups in undergraduate education as it specifically relates to teaching a large 4th year undergraduate Nutrition class in a physically constrained lecture space. In total, 220 students completed a midterm survey and 229 completed a final survey designed to measure student satisfaction. Survey results were further analyzed to measure relationships between student perception of breakout group effectiveness and (1) gender and (2) cumulative GPA. Results of both surveys revealed that over 85% of students either agreed or strongly agreed that using breakout groups enhanced their learning experience, with females showing a significantly greater level of satisfaction and higher final course grade than males. Although not stratified by gender, a consistent finding between surveys was a lower perception of breakout group effectiveness by students with a cumulative GPA above 90%. The majority of respondents felt that despite the awkward room space, the breakout groups were easy to create and participate in, which suggests that breakout groups can be successfully used in a large undergraduate classroom despite physical constraints. The findings of this work are relevant given the applicability of breakout groups to a wide range of disciplines, and the relative ease of integration into a traditional lecture format.

  5. Active cooling of pulse compression diffraction gratings for high energy, high average power ultrafast lasers.

    Science.gov (United States)

    Alessi, David A; Rosso, Paul A; Nguyen, Hoang T; Aasen, Michael D; Britten, Jerald A; Haefner, Constantin

    2016-12-26

    Laser energy absorption and subsequent heat removal from diffraction gratings in chirped pulse compressors poses a significant challenge in high repetition rate, high peak power laser development. In order to understand the average power limitations, we have modeled the time-resolved thermo-mechanical properties of current and advanced diffraction gratings. We have also developed and demonstrated a technique of actively cooling Petawatt scale, gold compressor gratings to operate at 600W of average power - a 15x increase over the highest average power petawatt laser currently in operation. Combining this technique with low absorption multilayer dielectric gratings developed in our group would enable pulse compressors for petawatt peak power lasers operating at average powers well above 40kW.

  6. First Clinical Investigations of New Ultrasound Techniques in Three Patient Groups: Patients with Liver Tumors, Arteriovenous Fistulas, and Arteriosclerotic Femoral Arteries

    DEFF Research Database (Denmark)

    Hansen, Peter Møller

    arteriosclerotic lesion was raised, recordings of the flow were made. The recordings were subsequently analyzed, and for each recording blood flow velocity at the lesion was compared with the flow velocity in a healthy adjacent arterial segment. If the velocity at the lesion was higher than in the healthy segment...... of the new ultrasound techniques in selected groups of patients. For all three studies the results are promising, and hopefully the techniques will find their way into everyday clinical practice for the benefit of both patients and healthcare practitioners....

  7. [Drama and forgiveness: the mechanism of action of a new technique in group therapy--face therapy].

    Science.gov (United States)

    Csigó, Katalin; Bender, Márta; Németh, Attila

    2006-01-01

    In our article we relate our experiences of the face therapy--group therapy sessions held at 2nd Psychiatric Ward of Nyíró Gyula Hospital. Face therapy uses the elements of art therapy and psychodrama: patients form their own head from gypsum and paint it. During the sessions, we analyse the heads and patients reveal their relation to their head. Our paper also presents the structure of thematic sessions and the features of the creative and processing phase. The phenomena that occur during group therapy (self-presentation, self-destruction, creativity) are interpreted with the concepts of psychodynamics and psychodrama. Finally, possible areas of indication are suggested for face therapy and the treatment possibilities for self-destructive phenomena.

  8. Comparison of group B streptococci colonization in vaginal and rectal specimens by culture method and polymerase chain reaction technique

    OpenAIRE

    Bidgani, Shahrokh; Navidifar, Tahereh; Najafian, Mahin; Amin, Mansour

    2016-01-01

    Background: Streptococcus agalactiae (group B streptococci, GBS) is a colonizing microorganism in pregnant women and without causing symptoms. Colonization of GBS in the rectovaginal region in late of pregnancy is a risk factor for newborn diseases. GBS infection in newborn babies is acquired by the aspiration of infected amniotic fluid or vertical transmission during delivery through the birth canal. The aim of this study was determination of GBS prevalence among vaginal and anorectal specim...

  9. Comparison of different sampling techniques and of different culture methods for detection of group B streptococcus carriage in pregnant women

    OpenAIRE

    Verhelst Rita; Temmerman Marleen; Verstraelen Hans; Cools Piet; Saerens Bart; Claeys Geert; Tency Inge; El Aila Nabil A; Vaneechoutte Mario

    2010-01-01

    Abstract Background Streptococcus agalactiae (group B streptococcus; GBS) is a significant cause of perinatal and neonatal infections worldwide. To detect GBS colonization in pregnant women, the CDC recommends isolation of the bacterium from vaginal and anorectal swab samples by growth in a selective enrichment medium, such as Lim broth (Todd-Hewitt broth supplemented with selective antibiotics), followed by subculture on sheep blood agar. However, this procedure may require 48 h to complete....

  10. Vocal attractiveness increases by averaging.

    Science.gov (United States)

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners-a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]-with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4]-e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception.

  11. Average Case Analysis of an Adjacency Map Searching Technique.

    Science.gov (United States)

    1981-12-01

    [OCR residue from the scanned report; recoverable content: the section presents procedure SEARCHTREE as a pidgin ALGOL program in Table 1, operating on queues (Q, U, Z) and, in a second scan, building a sorted list of points for each rectangle.]

  12. Developing Policy for Integrating Biomedicine and Traditional Chinese Medical Practice Using Focus Groups and the Delphi Technique

    Directory of Open Access Journals (Sweden)

    Vincent C. H. Chung

    2012-01-01

    Full Text Available In Hong Kong, statutory regulation for traditional Chinese medicine (TCM) practitioners has been implemented in the past decade. Increasing use of TCM on top of biomedicine (BM) services by the population has followed, but corresponding policy development to integrate their practices has not yet been discussed. Using focus group methodology, we explore policy ideas for integration by collating views from frontline BM (n=50) and TCM clinicians (n=50). Qualitative data were analyzed under the guidance of the structuration model of collaboration, a theoretical model for understanding interprofessional collaboration. From the focus group findings we generated 28 possible approaches, and subsequently their acceptability was assessed by a two-round Delphi survey amongst BM and TCM policy stakeholders (n=12). Consensus was reached on only 13 statements. Stakeholders agreed that clinicians from both paradigms should share common goals of providing patient-centered care, promoting the development of protocols for shared care and information exchange, as well as strengthening interprofessional connectivity and leadership for integration. On the other hand, attitudes amongst policy stakeholders were split on the possibility of fostering trust and mutual learning, as well as on enhancing innovation and governmental support. Future policy initiatives should focus on these controversial areas.

  13. Practical Research on Creative Techniques of Sports Group Dance (体育舞蹈团体舞编创技法的研究与实践)

    Institute of Scientific and Technical Information of China (English)

    朱丹妮

    2015-01-01

    Using the methods of documentation, expert interviews, practice research, video analysis and logical analysis, and taking choreographic creative-technique theory and outstanding sports dance group works at home and abroad as the research object, this paper summarizes a system of creative-technique theory applicable to sports group dance. It suggests that training activities for regional sports dance teachers and referees should include content on the creation of sports group dances, so that sports dance practitioners have the opportunity to learn the creative-technique theory system for group dance and so that these techniques can be widely applied. Sports dance coaches should consciously study this theory system and improve their choreographic ability by combining theory with practice.

  14. Construction of a technique plan repository and evaluation system based on AHP group decision-making for emergency treatment and disposal in chemical pollution accidents

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Shenggang [College of Environmental Science and Engineering, Beijing Forestry University, Beijing 100083 (China); College of Chemistry, Baotou Teachers’ College, Baotou 014030 (China); Cao, Jingcan; Feng, Li; Liang, Wenyan [College of Environmental Science and Engineering, Beijing Forestry University, Beijing 100083 (China); Zhang, Liqiu, E-mail: zhangliqiu@163.com [College of Environmental Science and Engineering, Beijing Forestry University, Beijing 100083 (China)

    2014-07-15

    Highlights: • Different chemical pollution accidents were simplified using the event tree analysis. • Emergency disposal technique plan repository of chemicals accidents was constructed. • The technique evaluation index system of chemicals accidents disposal was developed. • A combination of group decision and analytical hierarchy process (AHP) was employed. • Group decision introducing similarity and diversity factor was used for data analysis. - Abstract: The environmental pollution resulting from chemical accidents has caused increasingly serious concerns. Therefore, it is very important to be able to determine in advance the appropriate emergency treatment and disposal technology for different types of chemical accidents. However, the formulation of an emergency plan for chemical pollution accidents is considerably difficult due to the substantial uncertainty and complexity of such accidents. This paper explains how the event tree method was used to create 54 different scenarios for chemical pollution accidents, based on the polluted medium, dangerous characteristics and properties of the chemicals involved. For each type of chemical accident, feasible emergency treatment and disposal technology schemes were established, considering the areas of pollution source control, pollutant non-proliferation, contaminant elimination and waste disposal. Meanwhile, in order to obtain the optimum emergency disposal technology schemes from the plan repository as soon as a chemical pollution accident occurs, the technique evaluation index system was developed based on group-decision-improved analytical hierarchy process (AHP), and it has been tested using a sudden aniline pollution accident that occurred in a river in December 2012.

  15. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary ...
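The sandwiching idea in record 15 can be illustrated on a toy monotone chain; the reflected random walk, the shared-randomness coupling and all constants below are invented for illustration, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(7)
N, steps = 20, 50_000            # state space {0, ..., N}, chain length

def update(x, s):
    """Monotone update of a +/-1 random walk clamped to {0, ..., N}."""
    return min(max(x + s, 0), N)

# Dominating processes start at the extremes; driving both with the SAME
# randomness preserves lower <= upper at every step (monotone coupling).
upper, lower = N, 0
up_sum = low_sum = 0.0
for _ in range(steps):
    s = rng.choice((-1, 1))
    upper, lower = update(upper, s), update(lower, s)
    up_sum += upper              # ergodic sums of the monotone f(x) = x
    low_sum += lower

up_avg, low_avg = up_sum / steps, low_sum / steps
```

For this clamped walk the stationary distribution is uniform on {0, …, N}, so both ergodic averages approach N/2; pathwise, low_avg ≤ up_avg always holds, which is the bracketing mechanism the abstract describes.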

  16. Cosmic structure, averaging and dark energy

    CERN Document Server

    Wiltshire, David L

    2013-01-01

    These lecture notes review the theoretical problems associated with coarse-graining the observed inhomogeneous structure of the universe at late epochs, of describing average cosmic evolution in the presence of growing inhomogeneity, and of relating average quantities to physical observables. In particular, a detailed discussion of the timescape scenario is presented. In this scenario, dark energy is realized as a misidentification of gravitational energy gradients which result from gradients in the kinetic energy of expansion of space, in the presence of density and spatial curvature gradients that grow large with the growth of structure. The phenomenology and observational tests of the timescape model are discussed in detail, with updated constraints from Planck satellite data. In addition, recent results on the variation of the Hubble expansion on < 100/h Mpc scales are discussed. The spherically averaged Hubble law is significantly more uniform in the rest frame of the Local Group of galaxies than in t...

  17. What Hispanic parents do to encourage and discourage 3-5 year old children to be active: a qualitative study using nominal group technique.

    Science.gov (United States)

    O'Connor, Teresia M; Cerin, Ester; Hughes, Sheryl O; Robles, Jessica; Thompson, Deborah; Baranowski, Tom; Lee, Rebecca E; Nicklas, Theresa; Shewchuk, Richard M

    2013-08-06

    Hispanic preschoolers are less active than their non-Hispanic peers. As part of a feasibility study to assess environmental and parenting influences on preschooler physical activity (PA) (Niños Activos), the aim of this study was to identify what parents do to encourage or discourage PA among Hispanic 3-5 year old children to inform the development of a new PA parenting practice instrument and future interventions to increase PA among Hispanic youth. Nominal Group Technique (NGT), a structured multi-step group procedure, was used to elicit and prioritize responses from 10 groups of Hispanic parents regarding what parents do to encourage (5 groups) or discourage (5 groups) preschool aged children to be active. Five groups consisted of parents with low education (less than high school) and 5 with high education (high school or greater) distributed between the two NGT questions. Ten NGT groups (n = 74, range 4-11/group) generated 20-46 and 42-69 responses/group for practices that encourage or discourage PA respectively. Eight to 18 responses/group were elected as the most likely to encourage or discourage PA. Parental engagement in child activities, modeling PA, and feeding the child well were identified as parenting practices that encourage child PA. Allowing TV and videogame use, psychological control, physical or emotional abuse, and lack of parental engagement emerged as parenting practices that discourage children from being active. There were few differences in the pattern of responses by education level. Parents identified ways they encourage and discourage 3-5 year-olds from PA, suggesting both are important targets for interventions. These will inform the development of a new PA parenting practice scale to be further evaluated. Further research should explore the role parents play in discouraging child PA, especially in using psychological control or submitting children to abuse, which were new findings in this study.

  18. Measuring Complexity through Average Symmetry

    OpenAIRE

    Alamino, Roberto C.

    2015-01-01

    This work introduces a complexity measure which addresses some conflicting issues between existing ones by using a new principle - measuring the average amount of symmetry broken by an object. It attributes low (although different) complexity to either deterministic or random homogeneous densities and higher complexity to the intermediate cases. This new measure is easily computable, breaks the coarse graining paradigm and can be straightforwardly generalised, including to continuous cases an...

  19. Sample Selected Averaging Method for Analyzing the Event Related Potential

    Science.gov (United States)

    Taguchi, Akira; Ono, Youhei; Kimura, Tomoaki

    The event related potential (ERP) is often measured through the oddball task, in which subjects are given a “rare stimulus” and a “frequent stimulus”. Measured ERPs are analyzed by the averaging technique: the amplitude of the ERP component P300 becomes large when the “rare stimulus” is given. However, the measured ERPs include trials that lack the original ERP features, so it is necessary to reject unsuitable trials when using the averaging technique. In this paper, we propose a rejection method for unsuitable measured ERPs for the averaging technique. Moreover, we combine the proposed method with Woody's adaptive filter method.
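The reject-then-average procedure this record motivates can be sketched as follows; the P300 template, noise levels, artifact model and the median-plus-MAD threshold are all illustrative assumptions, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.8, 200)                              # one 800 ms epoch
p300 = 10.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))   # toy "rare stimulus" ERP

# 60 single-trial recordings: template + EEG-like noise;
# the first 10 trials additionally carry large artifacts.
trials = p300 + rng.normal(0.0, 4.0, (60, t.size))
trials[:10] += rng.normal(0.0, 40.0, (10, t.size))

# Reject trials whose peak-to-peak amplitude is an outlier (median + 10*MAD).
ptp = trials.max(axis=1) - trials.min(axis=1)
mad = np.median(np.abs(ptp - np.median(ptp)))
keep = ptp < np.median(ptp) + 10.0 * mad

erp_all = trials.mean(axis=0)                  # plain averaging over all trials
erp_clean = trials[keep].mean(axis=0)          # averaging after rejection

rms_all = np.sqrt(np.mean((erp_all - p300) ** 2))
rms_clean = np.sqrt(np.mean((erp_clean - p300) ** 2))
```

On this toy data the rejection step recovers the P300 waveform with a clearly smaller RMS error than averaging every trial.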

  20. The modulated average structure of mullite.

    Science.gov (United States)

    Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X

    2015-06-01

    Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in case of 3:2 mullite. In any of these the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3 a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real

  1. Model averaging and muddled multimodel inferences.

    Science.gov (United States)

    Cade, Brian S

    2015-09-01

    statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.

  2. Model averaging and muddled multimodel inferences

    Science.gov (United States)

    Cade, Brian S.

    2015-01-01

    t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.

  3. Mirror averaging with sparsity priors

    CERN Document Server

    Dalalyan, Arnak

    2010-01-01

    We consider the problem of aggregating the elements of a (possibly infinite) dictionary for building a decision procedure, that aims at minimizing a given criterion. Along with the dictionary, an independent identically distributed training sample is available, on which the performance of a given procedure can be tested. In a fairly general set-up, we establish an oracle inequality for the Mirror Averaging aggregate based on any prior distribution. This oracle inequality is applied in the context of sparse coding for different problems of statistics and machine learning such as regression, density estimation and binary classification.
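The flavour of this aggregation scheme, a prior-weighted exponential-weights posterior averaged along the sample, can be sketched as below; the quadratic loss, the tiny dictionary and the temperature beta are illustrative choices, not the paper's exact estimator:

```python
import numpy as np

def mirror_average_weights(dictionary, X, y, beta=0.5, prior=None):
    """Cesaro average of exponential-weights posteriors over the sample.

    dictionary: list of candidate predictors f_j(x); prior: weights pi_j.
    Returns the averaged aggregation weights theta_j (they sum to 1).
    """
    m = len(dictionary)
    prior = np.full(m, 1.0 / m) if prior is None else np.asarray(prior, float)
    cumloss = np.zeros(m)
    theta = np.zeros(m)
    for xt, yt in zip(X, y):
        w = prior * np.exp(-beta * cumloss)     # posterior before seeing (xt, yt)
        w /= w.sum()
        theta += w                              # running sum of posteriors
        preds = np.array([f(xt) for f in dictionary])
        cumloss += (preds - yt) ** 2            # cumulative empirical loss
    return theta / len(y)

rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, 200)
y = 2.0 * X + rng.normal(0.0, 0.1, 200)         # truth matches the first atom
dictionary = [lambda x: 2.0 * x, lambda x: -x, lambda x: 0.0]
theta = mirror_average_weights(dictionary, X, y)
```

The averaged posterior concentrates on the dictionary element with the smallest cumulative loss, which is the behaviour the oracle inequality quantifies.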

  4. Ensemble vs. time averages in financial time series analysis

    Science.gov (United States)

    Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2012-12-01

    Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding interval technique that assumes stationary increments. We propose an alternative approach that is based on an ensemble over trading days. To determine the effects of time averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics and we assess the model data using both ensemble and time averaging techniques. We find that ensemble averaging techniques detect the underlying dynamics correctly, whereas sliding intervals approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble averages approaches will yield new insight into the study of financial markets’ dynamics.
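The contrast the authors draw can be reproduced on a toy variable-diffusion model; the cosine volatility profile and all sizes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
T, M = 390, 500                       # minutes per "trading day", number of days
t = np.arange(T)
sigma = 1.0 + 0.8 * np.cos(2.0 * np.pi * t / T)   # periodic intraday diffusion

# Non-stationary increments: variance depends on the intraday time only.
dx = rng.normal(0.0, 1.0, (M, T)) * sigma

# Ensemble average: statistics across days at a FIXED intraday time.
ens_std = dx.std(axis=0)              # recovers sigma(t)

# Sliding-interval (time) average: windows over one long concatenated series.
flat = dx.ravel()
win = 2 * T                           # each window mixes all intraday times
time_std = np.array([flat[i:i + win].std()
                     for i in range(0, flat.size - win, T)])
```

The ensemble statistic tracks the periodic diffusion pattern, while the sliding-interval statistic is nearly flat around the pooled value sqrt(mean(sigma^2)), illustrating why the two approaches disagree for non-stationary increments.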

  5. Using non-invasive molecular spectroscopic techniques to detect unique aspects of protein Amide functional groups and chemical properties of modeled forage from different sourced-origins.

    Science.gov (United States)

    Ji, Cuiying; Zhang, Xuewei; Yu, Peiqiang

    2016-03-05

    The non-invasive molecular spectroscopic technique FT/IR is capable of detecting the molecular structure spectral features that are associated with biological, nutritional and biodegradation functions. However, to date, little research has used these non-invasive molecular spectroscopic techniques to study forage internal protein structures associated with biodegradation and biological functions. The objectives of this study were to detect unique aspects and associations of protein Amide functional groups, in terms of protein Amide I and II spectral profiles and chemical properties, in alfalfa forage (Medicago sativa L.) from different sourced-origins. In this study, alfalfa hay of two different origins was used as a modeled forage for the molecular structure and chemical property study. For each forage origin, five to seven sources were analyzed. The molecular spectral profiles were determined using FT/IR non-invasive molecular spectroscopy. The parameters of the protein spectral profiles included the functional groups of Amide I and Amide II and the Amide I to II ratio. The results show that the modeled forage Amide I and Amide II were centered at 1653 cm(-1) and 1545 cm(-1), respectively. The Amide I spectral height and area intensities were from 0.02 to 0.03 and 2.67 to 3.36 AI, respectively. The Amide II spectral height and area intensities were from 0.01 to 0.02 and 0.71 to 0.93 AI, respectively. The Amide I to II spectral peak height and area ratios were from 1.86 to 1.88 and 3.68 to 3.79, respectively. Our results show that non-invasive molecular spectroscopic techniques can detect forage internal protein structure features which are associated with forage chemical properties.
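As a numerical illustration of the profile parameters this record reports (peak height, integrated area, and the Amide I to II ratio), one can integrate a synthetic two-band spectrum; only the band centres (1653 and 1545 cm-1) come from the record, while the Gaussian band shapes, heights and widths are assumptions:

```python
import numpy as np

wn = np.arange(1480.0, 1720.0, 0.5)          # wavenumber axis, cm-1 (0.5 spacing)

def band(center, height, width):
    """Gaussian absorbance band on the wavenumber axis."""
    return height * np.exp(-((wn - center) ** 2) / (2.0 * width ** 2))

# Toy spectrum: Amide I at 1653 cm-1, Amide II at 1545 cm-1
# (heights within the ranges quoted above; widths are invented).
spectrum = band(1653.0, 0.025, 18.0) + band(1545.0, 0.013, 14.0)

def peak_height_area(lo, hi):
    """Peak height and integrated area of the spectrum on [lo, hi] cm-1."""
    m = (wn >= lo) & (wn <= hi)
    height = spectrum[m].max()
    area = spectrum[m].sum() * 0.5           # rectangle rule on the 0.5 grid
    return height, area

h1, a1 = peak_height_area(1600.0, 1710.0)    # Amide I region
h2, a2 = peak_height_area(1490.0, 1600.0)    # Amide II region
height_ratio, area_ratio = h1 / h2, a1 / a2
```

With these assumed band parameters the height ratio lands near 1.9 and the area ratio near 2.5, showing why the two ratios differ when the bands have different widths.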

  6. Using non-invasive molecular spectroscopic techniques to detect unique aspects of protein Amide functional groups and chemical properties of modeled forage from different sourced-origins

    Science.gov (United States)

    Ji, Cuiying; Zhang, Xuewei; Yu, Peiqiang

    2016-03-01

    The non-invasive molecular spectroscopic technique FT/IR is capable of detecting the molecular structure spectral features that are associated with biological, nutritional and biodegradation functions. However, to date, little research has used these non-invasive molecular spectroscopic techniques to study forage internal protein structures associated with biodegradation and biological functions. The objectives of this study were to detect unique aspects and associations of protein Amide functional groups, in terms of protein Amide I and II spectral profiles and chemical properties, in alfalfa forage (Medicago sativa L.) from different sourced-origins. In this study, alfalfa hay of two different origins was used as a modeled forage for the molecular structure and chemical property study. For each forage origin, five to seven sources were analyzed. The molecular spectral profiles were determined using FT/IR non-invasive molecular spectroscopy. The parameters of the protein spectral profiles included the functional groups of Amide I and Amide II and the Amide I to II ratio. The results show that the modeled forage Amide I and Amide II were centered at 1653 cm-1 and 1545 cm-1, respectively. The Amide I spectral height and area intensities were from 0.02 to 0.03 and 2.67 to 3.36 AI, respectively. The Amide II spectral height and area intensities were from 0.01 to 0.02 and 0.71 to 0.93 AI, respectively. The Amide I to II spectral peak height and area ratios were from 1.86 to 1.88 and 3.68 to 3.79, respectively. Our results show that non-invasive molecular spectroscopic techniques can detect forage internal protein structure features which are associated with forage chemical properties.

  7. High-Average Power Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Dowell, David H. (SLAC); Power, John G. (Argonne)

    2012-09-05

    There has been significant progress in the development of high-power facilities in recent years yet major challenges remain. The task of WG4 was to identify which facilities were capable of addressing the outstanding R&D issues presently preventing high-power operation. To this end, information from each of the facilities represented at the workshop was tabulated and the results are presented herein. A brief description of the major challenges is given, but the detailed elaboration can be found in the other three working group summaries.

  8. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer in disregarding marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  9. Bivariate phase-rectified signal averaging

    CERN Document Server

    Schumann, Aicko Y; Bauer, Axel; Schmidt, Georg

    2008-01-01

    Phase-Rectified Signal Averaging (PRSA) was shown to be a powerful tool for the study of quasi-periodic oscillations and nonlinear effects in non-stationary signals. Here we present a bivariate PRSA technique for the study of the inter-relationship between two simultaneous data recordings. Its performance is compared with traditional cross-correlation analysis, which, however, does not work well for non-stationary data and cannot distinguish the coupling directions in complex nonlinear situations. We show that bivariate PRSA allows the analysis of events in one signal at times where the other signal is in a certain phase or state; it is stable in the presence of noise and insensitive to non-stationarities.

  10. Electromagnetic modelling, inversion and data-processing techniques for GPR: ongoing activities in Working Group 3 of COST Action TU1208

    Science.gov (United States)

    Pajewski, Lara; Giannopoulos, Antonis; van der Kruk, Jan

    2015-04-01

    This work aims at presenting the ongoing research activities carried out in Working Group 3 (WG3) 'EM methods for near-field scattering problems by buried structures; data processing techniques' of the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar' (www.GPRadar.eu). The principal goal of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. WG3 is structured in four Projects. Project 3.1 deals with 'Electromagnetic modelling for GPR applications.' Project 3.2 is concerned with 'Inversion and imaging techniques for GPR applications.' The topic of Project 3.3 is the 'Development of intrinsic models for describing near-field antenna effects, including antenna-medium coupling, for improved radar data processing using full-wave inversion.' Project 3.4 focuses on 'Advanced GPR data-processing algorithms.' Electromagnetic modeling tools that are being developed and improved include the Finite-Difference Time-Domain (FDTD) technique and the spectral domain Cylindrical-Wave Approach (CWA). One of the well-known freeware and versatile FDTD simulators is GprMax that enables an improved realistic representation of the soil/material hosting the sought structures and of the GPR antennas. Here, input/output tools are being developed to ease the definition of scenarios and the visualisation of numerical results. The CWA expresses the field scattered by subsurface two-dimensional targets with arbitrary cross-section as a sum of cylindrical waves. In this way, the interaction is taken into account of multiple scattered fields within the medium hosting the sought targets. Recently, the method has been extended to deal with through-the-wall scenarios. One of the

  11. Detrending moving average algorithm for multifractals

    Science.gov (United States)

    Gu, Gao-Feng; Zhou, Wei-Xing

    2010-07-01

    The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces, which contains a parameter θ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, which is a generalization of the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, which provides the most accurate estimates of the scaling exponents with lowest error bars, while the centered MFDMA method has the worst performance. It is found that the backward MFDMA algorithm also outperforms the multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to the time series of the Shanghai Stock Exchange Composite Index and its multifractal nature is confirmed.
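A minimal backward (θ = 0) DMA estimator can be sketched as follows; the window sizes and the white-noise test signal are illustrative assumptions (for white noise the scaling exponent should come out near 0.5):

```python
import numpy as np

def backward_dma_exponent(x, windows):
    """Scaling exponent from the backward (theta = 0) detrending moving average.

    The profile y (cumulative sum of x) is detrended at scale n by the
    moving average of its n most recent points; the exponent is the slope
    of log F(n) versus log n.
    """
    y = np.cumsum(x - np.mean(x))                          # profile
    logF = []
    for n in windows:
        ma = np.convolve(y, np.ones(n) / n, mode="valid")  # mean of y[i:i+n]
        resid = y[n - 1:] - ma                             # backward alignment
        logF.append(0.5 * np.log(np.mean(resid ** 2)))     # log F(n)
    slope, _ = np.polyfit(np.log(windows), logF, 1)
    return slope

rng = np.random.default_rng(42)
alpha = backward_dma_exponent(rng.normal(size=20_000), [8, 16, 32, 64, 128])
```

Centered (θ = 0.5) and forward (θ = 1) variants only shift the alignment of the moving-average window relative to the profile; the multifractal MFDMA generalization replaces the mean-square fluctuation by q-th order moments.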

  12. Evaluating the abnormal ossification in tibiotarsi of developing chick embryos exposed to 1.0ppm doses of platinum group metals by spectroscopic techniques.

    Science.gov (United States)

    Stahler, Adam C; Monahan, Jennifer L; Dagher, Jessica M; Baker, Joshua D; Markopoulos, Marjorie M; Iragena, Diane B; NeJame, Britney M; Slaughter, Robert; Felker, Daniel; Burggraf, Larry W; Isaac, Leon A C; Grossie, David; Gagnon, Zofia E; Sizemore, Ioana E Pavel

    2013-04-01

    Platinum group metals (PGMs), i.e., palladium (Pd), platinum (Pt) and rhodium (Rh), are found at pollutant levels in the environment and are known to accumulate in plant and animal tissues. However, little is known about PGM toxicity. Our previous studies showed that chick embryos exposed to PGM concentrations of 1mL of 5.0ppm (LD50) and higher exhibited severe skeletal deformities. This work hypothesized that 1.0ppm doses of PGMs will negatively impact the mineralization process in tibiotarsi. One milliliter of 1.0ppm of Pd(II), Pt(IV), Rh(III) aqueous salt solutions and a PGM-mixture were injected into the air sac on the 7th and 14th day of incubation. Control groups with no-injection and vehicle injections were included. On the 20th day, embryos were sacrificed to analyze the PGM effects on tibiotarsi using four spectroscopic techniques. 1) Micro-Raman imaging: Hyperspectral Raman data were collected on paraffin embedded cross-sections of tibiotarsi, and processed using in-house-written MATLAB codes. Micro-Raman univariate images that were created from the ν1(PO4(3-)) integrated areas revealed anomalous mineral inclusions within the bone marrow for the PGM-mixture treatment. The age of the mineral crystals (ν(CO3(2-))/ν1(PO4(3-))) was statistically lower for all treatments when compared to controls (p≤0.05). 2) FAAS: The percent calcium content of the chemically digested tibiotarsi in the Pd and Pt groups changed by ~45% with respect to the no-injection control (16.1±0.2%). 3) Micro-XRF imaging: Abnormal calcium and phosphorus inclusions were found within the inner longitudinal sections of tibiotarsi for the PGM-mixture treatment. A clear increase in the mineral content was observed for the outer sections of the Pd treatment. 4) ICP-OES: PGM concentrations in tibiotarsi were undetectable (<5ppb). The spectroscopic techniques gave corroborating results, confirmed the hypothesis, and explained the observed pathological (skeletal developmental abnormalities

  13. Technique for Propagation of Monoclonal Groups in Dugesia japonica (日本三角涡虫单克隆群体的繁殖技术)

    Institute of Scientific and Technical Information of China (English)

    叶海燕; 侯晓薇; 黄原

    2011-01-01

    Dugesia japonica, a representative member of the class Turbellaria (phylum Platyhelminthes), has received attention in studies of embryonic development and cell differentiation because of its distinctive structure and remarkable regenerative ability. As an experimental material, however, its small body size makes it difficult to obtain sufficient tissue. Because an injured or amputated worm can regenerate completely from its stem cells, large numbers of monoclonal worm strains can be cultured by artificial cutting and feeding. The aim of this study was to establish a standard culture system for monoclonal Dugesia japonica strains in the laboratory, providing a basis for subsequent genomic and proteomic studies. Worms were cut into fragments, which were then cultured and regenerated into monoclonal groups; the resulting strain consisted of 5 groups, each containing 37 to 45 individuals. The current technique for culturing Dugesia japonica may provide a useful tool for subsequent genomic and proteomic studies.

  14. Group Grammar

    Science.gov (United States)

    Adams, Karen

    2015-01-01

    In this article Karen Adams demonstrates how to incorporate group grammar techniques into a classroom activity. In the activity, students practice using the target grammar to do something they naturally enjoy: learning about each other.

  15. The average visual response in patients with cerebrovascular disease

    NARCIS (Netherlands)

    Oostehuis, H.J.G.H.; Ponsen, E.J.; Jonkman, E.J.; Magnus, O.

    1969-01-01

    The average visual response (AVR) was recorded in thirty patients after a cerebrovascular accident and in fourteen control subjects from the same age group. The AVR was obtained with the aid of a 16-channel EEG machine, a Computer of Average Transients and a tape recorder with 13 FM channels. This

  16. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    7 CFR 1209.12 (2010), Agriculture Regulations of the Department of Agriculture (Continued), Agricultural Marketing Service (Marketing Agreements), § 1209.12 On average. On average means a rolling average of production or imports during the last two...

  17. Average life of oxygen vacancies of quartz in sediments

    Institute of Scientific and Technical Information of China (English)

    DIAO Shaobo (刁少波); YE Yuguang (业渝光)

    2002-01-01

    Average life of oxygen vacancies of quartz in sediments is estimated by using the ESR (electron spin resonance) signals of E′ centers from the thermal activation technique. The experimental results show that the second-order kinetics equation is more applicable to the life estimation compared with the first-order equation. The average life of oxygen vacancies of quartz from 4895 to 4908 deep sediments in the Tarim Basin is about 10¹⁸ a at 27℃.

  18. Construction of a technique plan repository and evaluation system based on AHP group decision-making for emergency treatment and disposal in chemical pollution accidents.

    Science.gov (United States)

    Shi, Shenggang; Cao, Jingcan; Feng, Li; Liang, Wenyan; Zhang, Liqiu

    2014-07-15

    The environmental pollution resulting from chemical accidents has caused increasingly serious concerns. Therefore, it is very important to be able to determine in advance the appropriate emergency treatment and disposal technology for different types of chemical accidents. However, the formulation of an emergency plan for chemical pollution accidents is considerably difficult due to the substantial uncertainty and complexity of such accidents. This paper explains how the event tree method was used to create 54 different scenarios for chemical pollution accidents, based on the polluted medium, dangerous characteristics and properties of the chemicals involved. For each type of chemical accident, feasible emergency treatment and disposal technology schemes were established, considering the areas of pollution source control, pollutant non-proliferation, contaminant elimination and waste disposal. Meanwhile, in order to obtain the optimum emergency disposal technology schemes from the plan repository as soon as a chemical pollution accident occurs, the technique evaluation index system was developed based on group-decision-improved analytical hierarchy process (AHP), and it has been tested using a sudden aniline pollution accident that occurred in a river in December 2012.
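The core computation behind a group-decision AHP evaluation of this kind (aggregate pairwise judgments, derive priority weights, check consistency) can be sketched as follows; the three criteria, the two hypothetical judgment matrices and the Saaty random-index values are illustrative, not the paper's data:

```python
import numpy as np

def ahp_weights(A):
    """Priority vector (row geometric-mean method) and consistency ratio."""
    n = A.shape[0]
    w = np.prod(A, axis=1) ** (1.0 / n)
    w /= w.sum()
    lam = float(np.mean((A @ w) / w))       # approximate principal eigenvalue
    ci = (lam - n) / (n - 1)                # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]     # Saaty random-index table
    return w, ci / ri

# Two experts compare three disposal criteria, e.g. source control vs.
# pollutant non-proliferation vs. contaminant elimination (Saaty 1-9 scale).
A1 = np.array([[1.0, 3.0, 5.0], [1/3, 1.0, 3.0], [1/5, 1/3, 1.0]])
A2 = np.array([[1.0, 2.0, 4.0], [1/2, 1.0, 2.0], [1/4, 1/2, 1.0]])

G = np.sqrt(A1 * A2)        # group judgment: element-wise geometric mean
w, cr = ahp_weights(G)      # w ranks the criteria; cr < 0.1 is acceptable
```

The element-wise geometric mean is a standard way to pool individual judgment matrices into a group matrix before deriving the weights; a similarity/diversity-weighted pooling, as the highlights mention, would replace the plain geometric mean with expert-specific exponents.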

  19. Finding "hard to find" literature on hard to find groups: A novel technique to search grey literature on refugees and asylum seekers.

    Science.gov (United States)

    Enticott, Joanne; Buck, Kimberly; Shawyer, Frances

    2017-09-04

    There is a lack of information on how to execute effective searches of the grey literature on refugee and asylum seeker groups for inclusion in systematic reviews. High-quality government reports and other grey literature relevant to refugees may not always be identified in conventional literature searches. During the process of conducting a recent systematic review, we developed a novel strategy for systematically searching international refugee and asylum seeker-related grey literature. The approach targets governmental health departments and statistical agencies, who have considerable access to refugee and asylum seeker populations for research purposes but typically do not publish findings in academic forums. Compared to a conventional grey literature search strategy, our novel technique yielded an eightfold increase in relevant high-quality grey sources that provided valuable content in informing our review. Incorporating a search of the grey literature into systematic reviews of refugee and asylum seeker research is essential to providing a more complete view of the evidence. Our novel strategy offers a practical and feasible method of conducting systematic grey literature searches that may be adaptable to a range of research questions, contexts, and resource constraints. Copyright © 2017 John Wiley & Sons, Ltd.

  20. Which outcomes are most important to people with aphasia and their families? an international nominal group technique study framed within the ICF.

    Science.gov (United States)

    Wallace, Sarah J; Worrall, Linda; Rose, Tanya; Le Dorze, Guylaine; Cruice, Madeline; Isaksen, Jytte; Kong, Anthony Pak Hin; Simmons-Mackie, Nina; Scarinci, Nerina; Gauvreau, Christine Alary

    2017-07-01

    To identify important treatment outcomes from the perspective of people with aphasia and their families, using the ICF as a frame of reference. The nominal group technique was used with people with aphasia and their family members in seven countries to identify and rank important treatment outcomes from aphasia rehabilitation. People with aphasia identified outcomes for themselves, and family members identified outcomes for themselves and for the person with aphasia. Outcomes were analysed using qualitative content analysis and ICF linking. A total of 39 people with aphasia and 29 family members participated in one of 16 nominal groups. Inductive qualitative content analysis revealed six themes: (1) improved communication; (2) increased life participation; (3) changed attitudes through increased awareness and education about aphasia; (4) recovered normality; (5) improved physical and emotional well-being; and (6) improved health (and support) services. Prioritized outcomes for both participant groups linked to all ICF components: primarily activity/participation (39%) and body functions (36%) for people with aphasia, and activity/participation (49%) and environmental factors (28%) for family members. Outcomes prioritized by family members relating to the person with aphasia linked primarily to body functions (60%). People with aphasia and their families identified treatment outcomes which span all components of the ICF. This has implications for research outcome measurement and clinical service provision, which currently focus on the measurement of body function outcomes. The wide range of desired outcomes generated by both people with aphasia and their family members highlights the importance of collaborative goal setting within a family-centred approach to rehabilitation. These results will be combined with other stakeholder perspectives to establish a core outcome set for aphasia treatment research.

  1. The Optimal Selection for Restricted Linear Models with Average Estimator

    Directory of Open Access Journals (Sweden)

    Qichang Xie

    2014-01-01

    Full Text Available The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this process can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do this, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under dependent error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC, which is an estimate of the average squared error from the model average fit. This model selection procedure is shown to be asymptotically optimal in the sense of obtaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has comparable efficiency to some alternative model selection techniques.
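As a rough illustration of weight selection for model averaging (not the paper's exact WALS estimator or k-GIC; the penalty form and the synthetic data below are hypothetical), one can average a restricted and a full least-squares fit and pick the weight on a grid by minimizing a GIC-style criterion:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=(n, 3))
beta = np.array([1.0, 0.5, 0.0])          # third regressor is irrelevant
y = x @ beta + rng.normal(0.0, 1.0, n)

def fit_predict(cols):
    """Least-squares fit on a subset of regressors, returning fitted values."""
    X = x[:, cols]
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ b

p_restricted = fit_predict([0])           # restricted model
p_full = fit_predict([0, 1, 2])           # full model

# Choose the averaging weight on a grid by minimizing a penalized fit
# criterion (a stand-in for the paper's k-GIC; this penalty is made up).
best_w, best_score = None, np.inf
for w in np.linspace(0.0, 1.0, 101):
    pred = w * p_restricted + (1.0 - w) * p_full
    eff_params = w * 1 + (1.0 - w) * 3    # weighted model dimension
    score = n * np.log(np.mean((y - pred) ** 2)) + 2.0 * eff_params
    if score < best_score:
        best_w, best_score = w, score
```

The chosen `best_w` interpolates between the two candidate fits rather than forcing an all-or-nothing model selection.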

  2. Level sets of multiple ergodic averages

    CERN Document Server

    Ai-Hua, Fan; Ma, Ji-Hua

    2011-01-01

    We propose to study multiple ergodic averages from multifractal analysis point of view. In some special cases in the symbolic dynamics, Hausdorff dimensions of the level sets of multiple ergodic average limit are determined by using Riesz products.

  3. Accurate Switched-Voltage voltage averaging circuit

    OpenAIRE

    金光, 一幸; 松本, 寛樹

    2006-01-01

    Abstract: This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit. It is presented to compensate for NMOS mismatch error in a MOS differential-type voltage averaging circuit. The proposed circuit consists of a voltage averaging and an SV sample/hold (S/H) circuit. It can operate using non-overlapping three-phase clocks. Performance of this circuit is verified by PSpice simulations.

  4. The Group Structure Characteristics and High-yield Culture Technique of High-yield Rape in Guizhou

    Institute of Scientific and Technical Information of China (English)

    冯文豪; 冯泽蔚; 苏跃

    2012-01-01

    In order to promote the integration and application of high-yield cultivation technology for rape, the rape yield and its component features in high-yield fields were analyzed in 2010-2011. The results showed that: 1) the group structure characteristics with a yield of over 200 kg/667 m^2 were as follows: average cultivation density 6,099.2 plants/667 m^2, mean pod number 3.509×10^6 per 667 m^2, grains per pod 18.5, 1,000-seed weight 3.7 g, plant height 191.4 cm, branch number 10.9, main inflorescence length 72.8 cm, and average yield 216.2 kg/667 m^2. 2) The high-yield cultivation techniques included sowing in September 5-10, cultivating strong seedlings, transplanting on moderate-fertility soil in October 10-20, fine preparation of soil, soil testing and formulated fertilization, timely irrigation and drainage, and disease, pest and grass control.

  5. Average-Time Games on Timed Automata

    OpenAIRE

    Jurdzinski, Marcin; Trivedi, Ashutosh

    2009-01-01

    An average-time game is played on the infinite graph of configurations of a finite timed automaton. The two players, Min and Max, construct an infinite run of the automaton by taking turns to perform a timed transition. Player Min wants to minimise the average time per transition and player Max wants to maximise it. A solution of average-time games is presented using a reduction to average-price games on a finite graph. A direct consequence is an elementary proof of determinacy for average-time games.

  6. Evaluation of long-term patency rates of different techniques of arterial anastomosis in rabbits.

    Science.gov (United States)

    Siemionow, M

    1987-01-01

    The results of 160 arterial anastomoses performed with four different techniques are presented. The popliteal arteries (mean diameter 0.7 mm) of 40 rabbits in each of four groups were anastomosed using: I. end-to-end technique with asymmetrical sleeving of the adventitia; II. end-to-end technique with symmetrical sleeving of the adventitia and wrapping of the suture site with a collagen cuff; III. end-to-end technique with symmetrical trimming of the adventitia; and IV. end-in-end technique. Long-term patency rates for the above techniques were as follows: Group I, 92.5%; Group II, 87.5%; Group III, 92.5%; Group IV, 90.0%. Group IV anastomoses were completed in an average of 15 minutes compared with an overall average of 24 minutes for the other three groups.

  7. An averaging method for nonlinear laminar Ekman layers

    DEFF Research Database (Denmark)

    Andersen, Anders Peter; Lautrup, B.; Bohr, T.

    2003-01-01

    We study steady laminar Ekman boundary layers in rotating systems using an averaging method similar to the technique of von Kármán and Pohlhausen. The method allows us to explore nonlinear corrections to the standard Ekman theory even at large Rossby numbers. We consider both the standard self

  8. Reducing Noise by Repetition: Introduction to Signal Averaging

    Science.gov (United States)

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
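The core idea, that averaging repeated noisy sweeps suppresses uncorrelated noise roughly as the square root of the number of trials, can be sketched as follows (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
signal = np.sin(2 * np.pi * 5 * t)                  # underlying repeatable signal
n_trials = 100
# each trial: the same signal plus independent Gaussian noise
trials = signal + rng.normal(0.0, 1.0, size=(n_trials, t.size))
averaged = trials.mean(axis=0)                      # point-wise average across trials

err_single = np.std(trials[0] - signal)             # residual noise in one trial
err_avg = np.std(averaged - signal)                 # residual noise after averaging
# the improvement factor should be close to sqrt(n_trials) = 10
```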

  9. On the average exponent of elliptic curves modulo $p$

    CERN Document Server

    Freiberg, Tristan

    2012-01-01

    Given an elliptic curve $E$ defined over $\mathbb{Q}$ and a prime $p$ of good reduction, let $\tilde{E}(\mathbb{F}_p)$ denote the group of $\mathbb{F}_p$-points of the reduction of $E$ modulo $p$, and let $e_p$ denote the exponent of said group. Assuming a certain form of the Generalized Riemann Hypothesis (GRH), we study the average of $e_p$ as $p \le X$ ranges over primes of good reduction, and find that the average exponent essentially equals $p\cdot c_{E}$, where the constant $c_{E} > 0$ depends on $E$. For $E$ without complex multiplication (CM), $c_{E}$ can be written as a rational number (depending on $E$) times a universal constant. Without assuming GRH, we can determine the average exponent when $E$ has CM, as well as give an upper bound on the average in the non-CM case.

  10. A new approach for Bayesian model averaging

    Institute of Scientific and Technical Information of China (English)

    TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun

    2012-01-01

    Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their own respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function with the aim of removing the additional limitation that the BMA weights must add to one, and then use a limited-memory quasi-Newton algorithm for solving the nonlinear optimization problem, thereby formulating a new approach for BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments with three land surface models show that the performance of BMA-BFGS is similar to the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially less than for MCMC and is almost equivalent to that for EM.
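A minimal sketch of the BMA idea: a two-member Gaussian mixture with a common, assumed member spread. This uses a grid search over the weight rather than the paper's BFGS, EM, or MCMC training, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.normal(0.0, 1.0, 200)            # verifying observations
f1 = truth + rng.normal(0.0, 0.5, 200)       # forecasts of a more accurate model
f2 = truth + rng.normal(0.0, 1.5, 200)       # forecasts of a less accurate model
sigma = 0.7                                  # assumed common member spread (hypothetical)

def bma_loglik(w):
    """Log-likelihood of a two-member BMA Gaussian mixture with weight w on f1."""
    def pdf(x, mu):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    mix = w * pdf(truth, f1) + (1.0 - w) * pdf(truth, f2)
    return np.log(mix).sum()

ws = np.linspace(0.01, 0.99, 99)
best_w = ws[np.argmax([bma_loglik(w) for w in ws])]
# the more accurate member should carry most of the weight (best_w > 0.5)
```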

  11. Model Averaging Software for Dichotomous Dose Response Risk Estimation

    Directory of Open Access Journals (Sweden)

    Matthew W. Wheeler

    2008-02-01

    Full Text Available Model averaging has been shown to be a useful method for incorporating model uncertainty in quantitative risk estimation. In certain circumstances this technique is computationally complex, requiring sophisticated software to carry out the computation. We introduce software that implements model averaging for risk assessment based upon dichotomous dose-response data. This software, which we call Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD), fits the quantal response models, which are also used in the US Environmental Protection Agency benchmark dose software suite, and generates a model-averaged dose-response model to produce benchmark dose and benchmark dose lower bound estimates. The software fulfills a need for risk assessors, allowing them to go beyond one single model in their risk assessments based on quantal data by focusing on a set of models that describes the experimental data.

  12. The B-dot Earth Average Magnetic Field

    Science.gov (United States)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

    The average Earth's magnetic field is solved with complex mathematical models based on a mean square integral. Depending on the selection of the Earth magnetic model, the average Earth's magnetic field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and is not dependent on the Earth magnetic model; but it is dependent on the magnetic torquers of the satellite, which are not taken into consideration in the known mathematical models. Also, the solution of this new technique can be implemented so easily that the flight software can be updated during flight, and the control system can have current gains for the magnetic torquers. Finally, this technique is verified and validated using flight data from a satellite that has been in orbit for three years.
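The b-dot law that the technique builds on commands a magnetic dipole opposing the measured rate of change of the field; a sketch with made-up gain and magnetometer readings:

```python
import numpy as np

k = 5.0e4            # controller gain, hypothetical
dt = 0.1             # magnetometer sampling interval, s
B_prev = np.array([2.0e-5, -1.0e-5, 4.0e-5])   # field reading at t - dt, tesla
B_now = np.array([2.1e-5, -0.8e-5, 3.9e-5])    # field reading at t, tesla

b_dot = (B_now - B_prev) / dt                  # finite-difference field rate
m_cmd = -k * b_dot                             # commanded dipole opposes the change
torque = np.cross(m_cmd, B_now)                # resulting magnetic torque m x B
```

Because `m_cmd` is anti-parallel to `b_dot`, the controller dissipates rotational energy, which is the damping effect the paper exploits.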

  13. WIDTHS AND AVERAGE WIDTHS OF SOBOLEV CLASSES

    Institute of Scientific and Technical Information of China (English)

    刘永平; 许贵桥

    2003-01-01

    This paper concerns the problem of the Kolmogorov n-width, the linear n-width, the Gel'fand n-width and the Bernstein n-width of Sobolev classes of the periodic multivariate functions in the space L_p(T^d), and the average Bernstein σ-width, the average Kolmogorov σ-width, and the average linear σ-width of Sobolev classes of the multivariate quantities.

  14. A procedure to average 3D anatomical structures.

    Science.gov (United States)

    Subramanya, K; Dean, D

    2000-12-01

    Creating a feature-preserving average of three dimensional anatomical surfaces extracted from volume image data is a complex task. Unlike individual images, averages present right-left symmetry and smooth surfaces which give insight into typical proportions. Averaging multiple biological surface images requires careful superimposition and sampling of homologous regions. Our approach to biological surface image averaging grows out of a wireframe surface tessellation approach by Cutting et al. (1993). The surface delineating wires represent high curvature crestlines. By adding tile boundaries in flatter areas the 3D image surface is parametrized into anatomically labeled (homology mapped) grids. We extend the Cutting et al. wireframe approach by encoding the entire surface as a series of B-spline space curves. The crestline averaging algorithm developed by Cutting et al. may then be used for the entire surface. Shape preserving averaging of multiple surfaces requires careful positioning of homologous surface regions such as these B-spline space curves. We test the precision of this new procedure and its ability to appropriately position groups of surfaces in order to produce a shape-preserving average. Our result provides an average that well represents the source images and may be useful clinically as a deformable model or for animation.

  15. Stochastic averaging of quasi-Hamiltonian systems

    Institute of Scientific and Technical Information of China (English)

    朱位秋

    1996-01-01

    A stochastic averaging method is proposed for quasi-Hamiltonian systems (Hamiltonian systems with light dampings subject to weakly stochastic excitations). Various versions of the method, depending on whether the associated Hamiltonian systems are integrable or nonintegrable, resonant or nonresonant, are discussed. It is pointed out that the standard stochastic averaging method and the stochastic averaging method of energy envelope are special cases of the stochastic averaging method of quasi-Hamiltonian systems and that the results obtained by this method for several examples prove its effectiveness.

  16. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  17. Dynamic Multiscale Averaging (DMA) of Turbulent Flow

    Energy Technology Data Exchange (ETDEWEB)

    Richard W. Johnson

    2012-09-01

    A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
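Schematically, one DMA stage combines running time averaging with volume averaging onto a coarser grid and records a correlation (coupling) term for the next stage; a one-dimensional toy with random data standing in for a short-time DNS field:

```python
import numpy as np

rng = np.random.default_rng(4)
nt, nx = 100, 64
field = rng.normal(size=(nt, nx))     # stand-in for a short-time DNS field history

time_avg = field.mean(axis=0)                     # running time average over the DNS
coarse = time_avg.reshape(16, 4).mean(axis=1)     # volume average onto a 4x coarser grid

fluct = field - time_avg                          # deviations from the time mean
corr = (fluct ** 2).mean(axis=0)                  # second-moment correlation generated by
                                                  # averaging; fed to the coarser-mesh
                                                  # computation as a source term
```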

  18. AC Small Signal Modeling of PWM Y-Source Converter by Circuit Averaging and Averaged Switch Modeling Technique

    DEFF Research Database (Denmark)

    Forouzesh, Mojtaba; Siwakoti, Yam Prasad; Blaabjerg, Frede

    2016-01-01

    Magnetically coupled Y-source impedance network is a newly proposed structure with versatile features intended for various power converter applications e.g. in the renewable energy technologies. The voltage gain of the Y-source impedance network rises exponentially as a function of turns ratio, which is inherited from a special coupled inductor with three windings. Due to the importance of modeling in the converter design procedure, this paper is dedicated to dc and ac small signal modeling of the PWM Y-source converter. The derived transfer functions are presented in detail and have been

  19. Average Transmission Probability of a Random Stack

    Science.gov (United States)

    Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

    2010-01-01

    The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
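The distinction between the log-average (typical) and the direct average of the transmission probability can be seen in a toy model with independent random per-slab factors (this ignores interference and is not the paper's recurrence relation):

```python
import numpy as np

rng = np.random.default_rng(2)
n_slabs, n_samples = 20, 10000
# toy model: per-slab transmission factors drawn iid in (0.5, 1.0)
t = rng.uniform(0.5, 1.0, size=(n_samples, n_slabs))
T = t.prod(axis=1)                       # stack transmission for each random sample

typical = np.exp(np.log(T).mean())       # exp of the averaged logarithm (typical value)
mean_T = T.mean()                        # direct average of the transmission itself
# mean_T exceeds typical: the direct average is dominated by rare
# high-transmission configurations (Jensen's inequality)
```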

  20. Average sampling theorems for shift invariant subspaces

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The sampling theorem is one of the most powerful results in signal analysis. In this paper, we study the average sampling on shift invariant subspaces, e.g. wavelet subspaces. We show that if a subspace satisfies certain conditions, then every function in the subspace is uniquely determined and can be reconstructed by its local averages near certain sampling points. Examples are given.

  1. Testing linearity against nonlinear moving average models

    NARCIS (Netherlands)

    de Gooijer, J.G.; Brännäs, K.; Teräsvirta, T.

    1998-01-01

    Lagrange multiplier (LM) test statistics are derived for testing a linear moving average model against an additive smooth transition moving average model. The latter model is introduced in the paper. The small sample performance of the proposed tests are evaluated in a Monte Carlo study and compared

  2. Averaging Einstein's equations : The linearized case

    NARCIS (Netherlands)

    Stoeger, William R.; Helmi, Amina; Torres, Diego F.

    2007-01-01

    We introduce a simple and straightforward averaging procedure, which is a generalization of one which is commonly used in electrodynamics, and show that it possesses all the characteristics we require for linearized averaging in general relativity and cosmology for weak-field and perturbed FLRW situations.

  4. Average excitation potentials of air and aluminium

    NARCIS (Netherlands)

    Bogaardt, M.; Koudijs, B.

    1951-01-01

    By means of a graphical method the average excitation potential I may be derived from experimental data. Average values for Iair and IAl have been obtained. It is shown that in representing range/energy relations by means of Bethe's well-known formula, I has to be taken as a continuously changing function.

  6. New results on averaging theory and applications

    Science.gov (United States)

    Cândido, Murilo R.; Llibre, Jaume

    2016-08-01

    The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations, to find the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function in it is zero, the classical averaging theory does not provide information about the periodic solution associated to a non-simple zero. Here we provide sufficient conditions in order that the averaging theory can be applied also to non-simple zeros for studying their associated periodic solutions. Additionally, we do two applications of this new result for studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.
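For reference, the classical first-order statement that the paper extends: for a T-periodic system in standard form, simple zeros of the averaged function yield periodic solutions:

```latex
\dot{x} = \varepsilon F(t,x) + \varepsilon^{2} R(t,x,\varepsilon),
\qquad
f(z) = \frac{1}{T}\int_{0}^{T} F(t,z)\,dt .
% If f(a) = 0 and \det Df(a) \neq 0 (a simple zero), then for small
% \varepsilon > 0 the system has a T-periodic solution x(t,\varepsilon)
% with x(0,\varepsilon) \to a as \varepsilon \to 0.
```

The paper's contribution concerns exactly the case excluded here, where $\det Df(a) = 0$.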

  7. Analogue Divider by Averaging a Triangular Wave

    Science.gov (United States)

    Selvam, Krishnagiri Chinnathambi

    2017-08-01

    A new analogue divider circuit, based on averaging a triangular wave using operational amplifiers, is explained in this paper. The reference triangular waveform is shifted from the zero voltage level up towards the positive power supply voltage level. Its positive portion is obtained by a positive rectifier and its average value is obtained by a low-pass filter. The same triangular waveform is shifted from the zero voltage level down towards the negative power supply voltage level. Its negative portion is obtained by a negative rectifier and its average value is obtained by another low-pass filter. Both averaged voltages are combined in a summing amplifier and the summed voltage is given to an op-amp as the negative input. This op-amp is configured to work in a negative closed loop. The op-amp output is the divider output.

  8. Further fMRI Validation of the Visual Half Field Technique as an Indicator of Language Laterality: A Large-Group Analysis

    Science.gov (United States)

    Van der Haegen, Lise; Cai, Qing; Seurinck, Ruth; Brysbaert, Marc

    2011-01-01

    The best established lateralized cerebral function is speech production, with the majority of the population having left hemisphere dominance. An important question is how to best assess the laterality of this function. Neuroimaging techniques such as functional Magnetic Resonance Imaging (fMRI) are increasingly used in clinical settings to…

  9. Averaged Lemaître-Tolman-Bondi dynamics

    CERN Document Server

    Isidro, Eddy G Chirinos; Piattella, Oliver F; Zimdahl, Winfried

    2016-01-01

    We consider cosmological backreaction effects in Buchert's averaging formalism on the basis of an explicit solution of the Lemaître-Tolman-Bondi (LTB) dynamics which is linear in the LTB curvature parameter and has an inhomogeneous bang time. The volume Hubble rate is found in terms of the volume scale factor which represents a derivation of the simplest phenomenological solution of Buchert's equations in which the fractional densities corresponding to average curvature and kinematic backreaction are explicitly determined by the parameters of the underlying LTB solution at the boundary of the averaging volume. This configuration represents an exactly solvable toy model but it does not adequately describe our "real" Universe.

  10. Average-passage flow model development

    Science.gov (United States)

    Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark

    1989-01-01

    A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average-passage flow model describes the time-averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations were executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average-passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low-speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.

  11. FREQUENTIST MODEL AVERAGING ESTIMATION: A REVIEW

    Institute of Scientific and Technical Information of China (English)

    Haiying WANG; Xinyu ZHANG; Guohua ZOU

    2009-01-01

    In applications, the traditional estimation procedure generally begins with model selection. Once a specific model is selected, subsequent estimation is conducted under the selected model without consideration of the uncertainty from the selection process. This often leads to the underreporting of variability and too optimistic confidence sets. Model averaging estimation is an alternative to this procedure, which incorporates model uncertainty into the estimation process. In recent years, there has been a rising interest in model averaging from the frequentist perspective, and some important progress has been made. In this paper, the theory and methods of frequentist model averaging estimation are surveyed. Some future research topics are also discussed.

  12. Averaging of Backscatter Intensities in Compounds

    Science.gov (United States)

    Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.

    2002-01-01

    Low uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based on the number of electrons or protons, termed "electron fraction," which predicts backscatter yield better than mass fraction averaging. PMID:27446752
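The contrast between mass-fraction and electron-fraction averaging can be illustrated for galena (PbS); the normalization below (electrons contributed per formula unit) is one plausible reading of the proposal, used only as an illustration:

```python
# Galena (PbS) as a worked example; Z = atomic number, A = atomic mass,
# n = atoms per formula unit. The electron-fraction normalization here
# is an assumption for illustration, not necessarily the paper's exact form.
Z = {"Pb": 82, "S": 16}
A = {"Pb": 207.2, "S": 32.06}
n = {"Pb": 1, "S": 1}

total_mass = sum(n[el] * A[el] for el in Z)
w = {el: n[el] * A[el] / total_mass for el in Z}      # mass fractions
zbar_mass = sum(w[el] * Z[el] for el in Z)            # mass-fraction average Z

total_e = sum(n[el] * Z[el] for el in Z)
e = {el: n[el] * Z[el] / total_e for el in Z}         # "electron fractions"
zbar_elec = sum(e[el] * Z[el] for el in Z)            # electron-fraction average Z
print(round(zbar_mass, 2), round(zbar_elec, 2))       # 73.16 71.22
```

The two averages disagree by about two Z units for this compound, which is the kind of discrepancy the measurement is sensitive to.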

  13. Experimental Demonstration of Squeezed State Quantum Averaging

    CERN Document Server

    Lassen, Mikael; Sabuncu, Metin; Filip, Radim; Andersen, Ulrik L

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The harmonic mean protocol can be used to efficiently stabilize a set of fragile squeezed light sources with statistically fluctuating noise levels. The averaged variances are prepared probabilistically by means of linear optical interference and measurement induced conditioning. We verify that the implemented harmonic mean outperforms the standard arithmetic mean strategy. The effect of quantum averaging is experimentally tested both for uncorrelated and partially correlated noise sources with sub-Poissonian shot noise or super-Poissonian shot noise characteristics.
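The averaging rule itself is simple; with made-up variances for three fluctuating sources, the harmonic mean sits below the arithmetic mean (equality only when all variances agree):

```python
# Illustrative variances of three fluctuating squeezed sources (made-up numbers).
variances = [0.5, 0.8, 2.0]
n = len(variances)

harmonic = n / sum(1.0 / v for v in variances)   # the protocol's averaging rule
arithmetic = sum(variances) / n                  # the standard mean strategy
print(harmonic, arithmetic)                      # 0.8 vs 1.1
```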

  14. The Average Lower Connectivity of Graphs

    Directory of Open Access Journals (Sweden)

    Ersin Aslan

    2014-01-01

    Full Text Available For a vertex v of a graph G, the lower connectivity, denoted by s_v(G), is the smallest number of vertices that contains v and those vertices whose deletion from G produces a disconnected or a trivial graph. The average lower connectivity, denoted by κ_av(G), is the value (∑_{v∈V(G)} s_v(G)) / |V(G)|. It is shown that this parameter can be used to measure the vulnerability of networks. This paper contains results on bounds for the average lower connectivity and obtains the average lower connectivity of some graphs.
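The definition can be checked by brute force on a small graph; a sketch (exponential in graph size, illustration only):

```python
from itertools import combinations

def connected(vertices, edges):
    """Depth-first search connectivity check on an edge-list graph."""
    if not vertices:
        return True
    vs = set(vertices)
    start = next(iter(vs))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for a, b in edges:
            w = b if a == u else a if b == u else None
            if w in vs and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == vs

def lower_connectivity(G, v):
    """s_v(G): smallest set containing v whose removal leaves G disconnected or trivial."""
    vertices, edges = G
    for k in range(1, len(vertices) + 1):
        for extra in combinations([u for u in vertices if u != v], k - 1):
            S = {v, *extra}
            rest = [u for u in vertices if u not in S]
            sub_edges = [(a, b) for a, b in edges if a in rest and b in rest]
            if len(rest) <= 1 or not connected(rest, sub_edges):
                return k
    return len(vertices)

# 4-cycle: every s_v is 2 (remove v and the opposite vertex), so the average is 2.
C4 = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
vals = [lower_connectivity(C4, v) for v in C4[0]]
avg = sum(vals) / len(vals)
print(avg)  # 2.0
```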

  15. Cosmic inhomogeneities and averaged cosmological dynamics.

    Science.gov (United States)

    Paranjape, Aseem; Singh, T P

    2008-10-31

    If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.

  16. Changing mortality and average cohort life expectancy

    DEFF Research Database (Denmark)

    Schoen, Robert; Canudas-Romo, Vladimir

    2005-01-01

    of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL) has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure......, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate...

  17. 40 CFR 63.1332 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... Standards for Hazardous Air Pollutant Emissions: Group IV Polymers and Resins § 63.1332 Emissions averaging... if pollution prevention measures are used to control five or more of the emission points included in... additional emission points if pollution prevention measures are used to control five or more of the...

  18. How Group Size and Composition Influences the Effectiveness of Collaborative Screen-Based Simulation Training: A Study of Dental and Nursing University Students Learning Radiographic Techniques

    Directory of Open Access Journals (Sweden)

    Tor Söderström

    2012-12-01

    Full Text Available This study analyses how changes in the design of screen-based computer simulation training influence the collaborative training process. Specifically, it examines how the size and composition of a group influence the way these tools are used. One case study consisted of 18+18 dental students randomized into either collaborative 3D simulation training or conventional collaborative training. The students worked in groups of three. The other case consisted of 12 nursing students working in pairs (partners determined by the students) with a 3D simulator. The results showed that simulation training encouraged different types of dialogue compared to conventional training and that the communication patterns were enhanced in the nursing students' dyadic simulation training. The concrete changes in group size and composition influenced the nursing students' engagement with the learning environment and consequently the communication patterns that emerged. These findings suggest that smaller groups will probably be more efficient than larger groups in a free collaboration setting that uses screen-based simulation training.

  19. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  20. Appeals Council Requests - Average Processing Time

    Data.gov (United States)

    Social Security Administration — This dataset provides annual data from 1989 through 2015 for the average processing time (elapsed time in days) for dispositions by the Appeals Council (AC) (both...

  1. Average Vegetation Growth 1990 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1990 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  2. Average Vegetation Growth 1997 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1997 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  3. Average Vegetation Growth 1992 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1992 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  4. Average Vegetation Growth 2001 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2001 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  5. Average Vegetation Growth 1995 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1995 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  6. Average Vegetation Growth 2000 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2000 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  7. Average Vegetation Growth 1998 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1998 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  8. Average Vegetation Growth 1994 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1994 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  9. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  10. Average Vegetation Growth 1996 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1996 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  11. Average Vegetation Growth 2005 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2005 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  12. Average Vegetation Growth 1993 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1993 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  13. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  14. Spacetime Average Density (SAD) Cosmological Measures

    CERN Document Server

    Page, Don N

    2014-01-01

    The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmolo...

  15. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  16. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  17. Rotational averaging of multiphoton absorption cross sections

    Science.gov (United States)

    Friese, Daniel H.; Beerepoot, Maarten T. P.; Ruud, Kenneth

    2014-11-01

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  18. Monthly snow/ice averages (ISCCP)

    Data.gov (United States)

    National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets in...

  19. Average Annual Precipitation (PRISM model) 1961 - 1990

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1961-1990. Parameter-elevation...

  20. Testing averaged cosmology with type Ia supernovae and BAO data

    CERN Document Server

    Santos, B; Devi, N Chandrachani; Alcaniz, J S

    2016-01-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard $\\Lambda$CDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  1. Symmetric Euler orientation representations for orientational averaging.

    Science.gov (United States)

    Mayerhöfer, Thomas G

    2005-09-01

    A new kind of orientation representation called the symmetric Euler orientation representation (SEOR) is presented. It is based on a combination of the conventional Euler orientation representations (Euler angles) and Hamilton's quaternions. The properties of the SEORs concerning orientational averaging are explored and compared to those of averaging schemes based on conventional Euler orientation representations. To that aim, the reflectance of a hypothetical polycrystalline material with orthorhombic crystal symmetry was calculated. The calculation was carried out according to the average refractive index theory (ARIT [T.G. Mayerhöfer, Appl. Spectrosc. 56 (2002) 1194]). It is shown that the use of averaging schemes based on conventional Euler orientation representations makes the result depend on the specific Euler orientation representation that was utilized and on the initial position of the crystal. The latter problem can be partly overcome by the introduction of a weighing factor, but only for two-axes-type Euler orientation representations. In the case of a numerical evaluation of the average, a residual difference also remains if a two-axes-type Euler orientation representation is used, despite the utilization of a weighing factor. In contrast, this problem does not occur, as a matter of principle, if a symmetric Euler orientation representation is used, and the result of the averaging for both types of orientation representations converges with increasing number of orientations considered in the numerical evaluation. Additionally, the use of a weighing factor and/or non-equally spaced steps in the numerical evaluation of the average is not necessary. Symmetric Euler orientation representations are therefore ideally suited for use in orientational averaging procedures.

  2. Cosmic Inhomogeneities and the Average Cosmological Dynamics

    OpenAIRE

    Paranjape, Aseem; Singh, T. P.

    2008-01-01

    If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a `dark energy'. However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic ini...

  3. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model with examples and with simulation results obtained using the NS2 simulator.
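
    As a rough sketch of the kind of iteration involved (a generic weighted max-min computation, not the authors' exact model, which also accounts for arrival rates and packet lengths), flows whose demand falls below their weighted fair share are satisfied first and the leftover capacity is redistributed among the remaining flows:

```python
def weighted_fair_shares(capacity, weights, demands):
    """Iterative weighted max-min allocation of link capacity to flows."""
    n = len(weights)
    alloc = [0.0] * n
    active = set(range(n))
    remaining = capacity
    while active and remaining > 1e-12:
        total_w = sum(weights[i] for i in active)
        # Flows whose residual demand fits within their weighted share
        capped = [i for i in active
                  if demands[i] - alloc[i] <= remaining * weights[i] / total_w]
        if not capped:
            # No flow is demand-limited: split what is left by weight
            for i in active:
                alloc[i] += remaining * weights[i] / total_w
            break
        for i in capped:
            remaining -= demands[i] - alloc[i]
            alloc[i] = demands[i]
            active.remove(i)
    return alloc

# Link of 10 Mbit/s, three flows with weights 1:1:2 (demands are hypothetical);
# flow 0 gets its full 2.0, the other two split the remaining 8.0 in ratio 1:2
print(weighted_fair_shares(10.0, [1, 1, 2], [2.0, 10.0, 10.0]))
```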

  4. The 'subjective' risk mapping: understanding of a technical risk representation by a professional group; La cartographie 'subjective' des risques: comprendre la representation d'un risque technique par un groupe professionnel

    Energy Technology Data Exchange (ETDEWEB)

    Bertin, H.; Deleuze, G. [Electricite de France (EDF-RD), Management des Risques Industriels, 92 - Clamart (France)

    2006-07-01

    The paper presents the application of a particular way to make risk maps, called 'subjective risk mapping'. It has been used to understand how the risk of tube rupture under pressure is understood, defined, and set in perspective with other risks in a professional group working in an industrial plant. (authors)

  5. Low-mode averaging for baryon correlation functions

    CERN Document Server

    Giusti, Leonardo; Giusti, Leonardo; Necco, Silvia

    2005-01-01

    The low-mode averaging technique is a powerful tool for reducing large fluctuations in correlation functions due to low-mode eigenvalues of the Dirac operator. In this work we propose a generalization to baryons and test our method on two-point correlation functions of left-handed nucleons, computed with quenched Neuberger fermions on a lattice with extension L=1.5 fm. We show that the statistical fluctuations can be reduced and the baryon signal significantly improved.

  6. Perceptual averaging in individuals with Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Jennifer Elise Corbett

    2016-11-01

    Full Text Available There is mounting evidence that observers rely on statistical summaries of visual information to maintain stable and coherent perception. Sensitivity to the mean (or other prototypical value) of a visual feature (e.g., mean size) appears to be a pervasive process in human visual perception. Previous studies in individuals diagnosed with Autism Spectrum Disorder (ASD) have uncovered characteristic patterns of visual processing that suggest they may rely more on enhanced local representations of individual objects instead of computing such perceptual averages. To further explore the fundamental nature of abstract statistical representation in visual perception, we investigated perceptual averaging of mean size in a group of 12 high-functioning individuals diagnosed with ASD using simplified versions of two identification and adaptation tasks that elicited characteristic perceptual averaging effects in a control group of neurotypical participants. In Experiment 1, participants performed with above-chance accuracy in recalling the mean size of a set of circles (mean task) despite poor accuracy in recalling individual circle sizes (member task). In Experiment 2, their judgments of single circle size were biased by mean size adaptation. Overall, these results suggest that individuals with ASD perceptually average information about sets of objects in the surrounding environment. Our results underscore the fundamental nature of perceptual averaging in vision, and further our understanding of how autistic individuals make sense of the external environment.

  7. An integrative technique based on synergistic coremoval and sequential recovery of copper and tetracycline with dual-functional chelating resin: roles of amine and carboxyl groups.

    Science.gov (United States)

    Ling, Chen; Liu, Fu-Qiang; Xu, Chao; Chen, Tai-Peng; Li, Ai-Min

    2013-11-27

    A novel chelating resin (R-AC) bearing dual functional groups (amino and carboxyl) was synthesized in-house, and it showed properties superior to commercial resins (amine, carboxyl, and hydrophobic types) for the synergistic coremoval of Cu(II) and tetracycline (TC), which was investigated in depth by equilibrium and kinetic tests in binary, preloading, and saline systems. The adsorption of TC on R-AC was markedly enhanced in the copresence of Cu(II), up to 13 times that in the sole system, whereas Cu(II) uptake scarcely decreased in the copresence of TC. Decomplexing-bridging, which included [Cu-TC] decomplexing and [R-Cu] bridging for TC, was demonstrated to be the leading mechanism for the synergistic coremoval of Cu(II) and TC. Carboxyl groups of R-AC played the dominant role in decomplexing the [Cu-TC] complex and releasing free TC. Cu(II) coordinated with amine groups of R-AC was further shown to participate in a bridging interaction with free TC, with a probable bridging stoichiometric ratio ([NH-Cu]:TC) of 2:1. About 96.9% of the TC and 99.3% of the Cu could be sequentially recovered with dilute NaOH followed by HCl. Given its stable performance over five cycles in simulated and practical wastewater, R-AC shows great potential for green and simple coremoval of antibiotics and heavy metal ions.

  8. Averaged controllability of parameter dependent conservative semigroups

    Science.gov (United States)

    Lohéac, Jérôme; Zuazua, Enrique

    2017-02-01

    We consider the problem of averaged controllability for parameter-dependent (either discretely or continuously parametrized) control systems, the aim being to find a control, independent of the unknown parameters, such that the average of the states is controlled. We do this in the context of conservative models, both in an abstract setting and in the specific examples of the wave and Schrödinger equations. Our first result is of a perturbative nature. Assuming the averaging probability measure to be a small parameter-dependent perturbation (in a sense that we make precise) of an atomic measure given by a Dirac mass corresponding to a specific realisation of the system, we show that the averaged controllability property is achieved whenever the system corresponding to the support of the Dirac is controllable. Similar tools can be employed to obtain averaged versions of the so-called Ingham inequalities. Particular attention is devoted to the 1d wave equation, in which the time-periodicity of solutions can be exploited to obtain more precise results, provided the parameters involved satisfy Diophantine conditions ensuring the lack of resonances.

  9. Average Temperatures in the Southwestern United States, 2000-2015 Versus Long-Term Average

    Data.gov (United States)

    U.S. Environmental Protection Agency — This indicator shows how the average air temperature from 2000 to 2015 has differed from the long-term average (1895–2015). To provide more detailed information,...

  10. Transversely isotropic higher-order averaged structure tensors

    Science.gov (United States)

    Hashlamoun, Kotaybah; Federico, Salvatore

    2017-08-01

    For composites or biological tissues reinforced by statistically oriented fibres, a probability distribution function is often used to describe the orientation of the fibres. The overall effect of the fibres on the material response is accounted for by evaluating averaging integrals over all possible directions in space. The directional average of the structure tensor (tensor product of the unit vector describing the fibre direction by itself) is of high significance. Higher-order averaged structure tensors feature in several models and carry similarly important information. However, their evaluation has a quite high computational cost. This work proposes to introduce mathematical techniques to minimise the computational cost associated with the evaluation of higher-order averaged structure tensors, for the case of a transversely isotropic probability distribution of orientation. A component expression is first introduced, using which a general tensor expression is obtained, in terms of an orthonormal basis in which one of the vectors coincides with the axis of symmetry of transverse isotropy. Then, a higher-order transversely isotropic averaged structure tensor is written in an appropriate basis, constructed starting from the basis of the space of second-order transversely isotropic tensors, which is constituted by the structure tensor and its complement to the identity.
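
    For the isotropic special case the averaging integral has a well-known closed form: the directional average of the structure tensor a ⊗ a over a uniform orientation distribution on the unit sphere is I/3. A simple midpoint-rule quadrature (an illustration of the averaging integral, not the paper's transversely isotropic construction) recovers this:

```python
import math

def averaged_structure_tensor(n_theta=100, n_phi=100):
    """Average a (x) a over a uniform orientation distribution on the
    unit sphere, using sin(theta)-weighted midpoint quadrature."""
    H = [[0.0] * 3 for _ in range(3)]
    total_w = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * math.pi / n_theta
        w = math.sin(theta)  # surface-measure weight
        for j in range(n_phi):
            phi = (j + 0.5) * 2.0 * math.pi / n_phi
            a = (math.sin(theta) * math.cos(phi),
                 math.sin(theta) * math.sin(phi),
                 math.cos(theta))
            for r in range(3):
                for c in range(3):
                    H[r][c] += w * a[r] * a[c]
            total_w += w
    return [[H[r][c] / total_w for c in range(3)] for r in range(3)]

H = averaged_structure_tensor()
# Diagonal entries approach 1/3; off-diagonal entries vanish (isotropy)
```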

  11. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high-resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectrosc. Radiat. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  12. High Average Power Yb:YAG Laser

    Energy Technology Data Exchange (ETDEWEB)

    Zapata, L E; Beach, R J; Payne, S A

    2001-05-23

    We are working on a composite thin-disk laser design that can be scaled as a source of high brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain-medium and thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included: extraction with high beam quality using a telescopic resonator, a heterogeneous thin film coating prescription that meets the unusual requirements demanded by this laser architecture, thermal management with our first generation cooler. Progress was also made in design of a second-generation laser.

  13. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    J M M Senovilla

    2007-07-01

    Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating Universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompleteness in the past. Another way of stating the result is that, under the same conditions, any singularity-free model must have a vanishing spatial average of the energy density (and other physical variables). This is very satisfactory and provides a clear, decisive difference between singular and non-singular cosmologies.

  14. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  15. SOURCE TERMS FOR AVERAGE DOE SNF CANISTERS

    Energy Technology Data Exchange (ETDEWEB)

    K. L. Goluoglu

    2000-06-09

    The objective of this calculation is to generate source terms for each type of Department of Energy (DOE) spent nuclear fuel (SNF) canister that may be disposed of at the potential repository at Yucca Mountain. The scope of this calculation is limited to generating source terms for average DOE SNF canisters, and is not intended to be used for subsequent calculations requiring bounding source terms. This calculation is to be used in future Performance Assessment calculations, or other shielding or thermal calculations requiring average source terms.

  16. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for appr...
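
    The brute-force baseline that such analytical methods avoid is the direct Monte Carlo bootstrap, which re-evaluates the estimator on every resample. A minimal sketch of that baseline (my illustration, not the paper's analytical approximation):

```python
import random

def bootstrap_average(data, estimator, n_boot=2000, seed=0):
    """Average an estimator over bootstrap resamples (sampling with
    replacement). Analytical resampling averages approximate this result
    without re-running the estimator n_boot times."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]
        total += estimator(sample)
    return total / n_boot

data = [1.0, 2.0, 3.0, 4.0, 5.0]
mean = lambda xs: sum(xs) / len(xs)
print(bootstrap_average(data, mean))  # close to the sample mean, 3.0
```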

  17. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
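
    The robust building block is an element-wise trimmed average: for one coordinate (e.g., one pixel) observed across samples, sort the values and discard a fraction from each end before averaging. A generic sketch of that primitive (not the full Trimmed Grassmann Average subspace computation):

```python
def trimmed_mean(values, trim_frac=0.2):
    """Mean after discarding the lowest and highest trim_frac of values."""
    s = sorted(values)
    k = int(len(s) * trim_frac)
    core = s[k:len(s) - k] if k > 0 else s
    return sum(core) / len(core)

# One pixel observed across five images; 100.0 is a gross outlier
pixel_values = [1.0, 2.0, 3.0, 4.0, 100.0]
print(trimmed_mean(pixel_values))             # 3.0, outlier discarded
print(sum(pixel_values) / len(pixel_values))  # 22.0, plain mean is corrupted
```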

  18. Limitations of the Spike-Triggered Averaging for Estimating Motor Unit Twitch Force: A Theoretical Analysis

    OpenAIRE

    Francesco Negro; Ş Utku Yavuz; Dario Farina

    2014-01-01

    Contractile properties of human motor units provide information on the force capacity and fatigability of muscles. The spike-triggered averaging technique (STA) is a conventional method used to estimate the twitch waveform of single motor units in vivo by averaging the joint force signal. Several limitations of this technique have been previously discussed in an empirical way, using simulated and experimental data. In this study, we provide a theoretical analysis of this technique in the freq...

  19. First Clinical Investigations of New Ultrasound Techniques in Three Patient Groups: Patients with Liver Tumors, Arteriovenous Fistulas, and Arteriosclerotic Femoral Arteries

    DEFF Research Database (Denmark)

    Hansen, Peter Møller

    ultrasound images of very high quality with high frame rate. Synthetic aperture is unfortunately very demanding computationally, and is therefore used only in experimental scanners. SASB reduces the data volume by a factor of 64, thereby making it possible to implement the technology on a commercial...... ultrasound scanner, to perform wireless data transfer and in the future to develop e.g. a wireless ultrasonic transducer. Nineteen patients with either primary liver cancer or liver metastases from colon cancer were ultrasound scanned the day before planned liver resection. Patients were scanned...... of the significant data reduction, is suitable for clinical use. In Study II, 20 patients with arteriovenous fistulas for hemodialysis were ultrasound scanned directly on the most superficial and accessible part of the fistula. The vector ultrasound technique Vector Flow Imaging (VFI) was used. VFI can...

  20. Parameterized Traveling Salesman Problem: Beating the Average

    NARCIS (Netherlands)

    Gutin, G.; Patel, V.

    2016-01-01

    In the traveling salesman problem (TSP), we are given a complete graph Kn together with an integer weighting w on the edges of Kn, and we are asked to find a Hamilton cycle of Kn of minimum weight. Let h(w) denote the average weight of a Hamilton cycle of Kn for the weighting w. Vizing in 1973 asked
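For context, h(w) has a simple closed form: each edge of Kn lies in exactly 2/(n-1) of the (n-1)!/2 Hamilton cycles, so h(w) = (2/(n-1)) * sum of all edge weights. A brute-force check of this identity (our illustration, not from the paper):

```python
import itertools
import random

def h_closed_form(w, n):
    """h(w) = (2 / (n - 1)) * total edge weight."""
    return 2.0 * sum(w.values()) / (n - 1)

def h_brute_force(w, n):
    """Average weight over all (n-1)!/2 Hamilton cycles of K_n."""
    total, count = 0.0, 0
    for perm in itertools.permutations(range(1, n)):
        if perm[0] < perm[-1]:          # count each cycle once
            cycle = (0,) + perm
            total += sum(w[frozenset((cycle[i], cycle[(i + 1) % n]))]
                         for i in range(n))
            count += 1
    return total / count

n = 6
random.seed(1)
w = {frozenset(e): random.randint(1, 10)
     for e in itertools.combinations(range(n), 2)}
print(abs(h_brute_force(w, n) - h_closed_form(w, n)) < 1e-9)  # True
```

Fixing vertex 0 as the start and keeping only one orientation (`perm[0] < perm[-1]`) enumerates each of the (n-1)!/2 cycles exactly once.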

  1. On averaging methods for partial differential equations

    NARCIS (Netherlands)

    Verhulst, F.

    2001-01-01

    The analysis of weakly nonlinear partial differential equations, both qualitatively and quantitatively, is emerging as an exciting field of investigation. In this report we consider specific results related to averaging, but we do not aim at completeness. The sections ... contain important material which

  2. Discontinuities and hysteresis in quantized average consensus

    NARCIS (Netherlands)

    Ceragioli, Francesca; Persis, Claudio De; Frasca, Paolo

    2011-01-01

    We consider continuous-time average consensus dynamics in which the agents’ states are communicated through uniform quantizers. Solutions to the resulting system are defined in the Krasowskii sense and are proven to converge to conditions of ‘‘practical consensus’’. To cope with undesired chattering

  3. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation...

  4. A Functional Measurement Study on Averaging Numerosity

    Science.gov (United States)

    Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio

    2014-01-01

    In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…

  5. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic li...

  6. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...

  7. Quantum Averaging of Squeezed States of Light

    DEFF Research Database (Denmark)

    Squeezing has been recognized as the main resource for quantum information processing and an important resource for beating classical detection strategies. It is therefore of high importance to reliably generate stable squeezing over longer periods of time. The averaging procedure for a single qu...

  8. Bayesian Model Averaging for Propensity Score Analysis

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  9. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
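A minimal example of the kind of MA rule such models study; the window lengths and the +1/-1 (long/short) position convention are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def ma_crossover_positions(prices, short=5, long=20):
    """+1 (long) when the short MA is above the long MA, else -1.

    A generic moving-average trading rule of the kind studied in
    heterogeneous-agent market models; parameters are illustrative.
    """
    prices = np.asarray(prices, dtype=float)

    def sma(x, k):
        return np.convolve(x, np.ones(k) / k, mode="valid")

    s = sma(prices, short)[long - short:]   # align both MAs by end date
    l = sma(prices, long)
    return np.where(s > l, 1, -1)

prices = 100 + np.cumsum(np.sin(np.arange(100) / 5.0))
pos = ma_crossover_positions(prices)
print(len(pos))  # 81
```

Slicing the short MA by `long - short` aligns the two series so that each compared pair ends on the same trading day.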

  10. High average-power induction linacs

    Energy Technology Data Exchange (ETDEWEB)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.

    1989-03-15

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs.

  11. High Average Power Optical FEL Amplifiers

    CERN Document Server

    Ben-Zvi, I; Litvinenko, V

    2005-01-01

    Historically, the first demonstration of the FEL was in an amplifier configuration at Stanford University. There were other notable instances of amplifying a seed laser, such as the LLNL amplifier and the BNL ATF High-Gain Harmonic Generation FEL. However, for the most part FELs are operated as oscillators or self amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance a 100 kW average power FEL. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting energy recovery linacs combine well with the high-gain FEL amplifier to produce unprecedented average power FELs with some advantages. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Li...

  12. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter;

    2011-01-01

      We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum...

  13. Full averaging of fuzzy impulsive differential inclusions

    Directory of Open Access Journals (Sweden)

    Natalia V. Skripnik

    2010-09-01

    In this paper the substantiation of the method of full averaging for fuzzy impulsive differential inclusions is studied. We extend similar results for impulsive differential inclusions with Hukuhara derivative (Skripnik, 2007), for fuzzy impulsive differential equations (Plotnikov and Skripnik, 2009), and for fuzzy differential inclusions (Skripnik, 2009).

  14. Materials for high average power lasers

    Energy Technology Data Exchange (ETDEWEB)

    Marion, J.E.; Pertica, A.J.

    1989-01-01

    Unique materials properties requirements for solid state high average power (HAP) lasers dictate a materials development research program. A review of the desirable laser, optical and thermo-mechanical properties for HAP lasers precedes an assessment of the development status for crystalline and glass hosts optimized for HAP lasers. 24 refs., 7 figs., 1 tab.

  16. The Health Effects of Income Inequality: Averages and Disparities.

    Science.gov (United States)

    Truesdale, Beth C; Jencks, Christopher

    2016-01-01

    Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.

  17. Distribution of platinum group elements (Pt, Pd, Rh) in environmental and clinical matrices: Composition, analytical techniques and scientific outlook: Status report.

    Science.gov (United States)

    Hees, T; Wenclawiak, B; Lustig, S; Schramel, P; Schwarzer, M; Schuster, M; Verstraete, D; Dams, R; Helmers, E

    1998-01-01

    Trace concentrations of the platinum group elements (PGE; here: Pt, Pd and Rh) play an important role in environmental analysis and assessment. Their importance rests on (1) their increasing use as active components in automobile exhaust catalysts, and (2) their use as anti-tumor agents in medicine. Due to their allergenic and cytotoxic potential, it is necessary to improve selectivity and sensitivity in the analytical investigation of matrices like soil, grass, urine or blood. This paper summarizes the present knowledge of PGE in the fields of analytical chemistry, automobile emission rates, bioavailability, toxicology and medicine.

  18. Amino acid analysis using grouping and parceling of neutron cross section techniques

    Energy Technology Data Exchange (ETDEWEB)

    Voi, Dante Luiz Voi [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil); Rocha, Helio Fenandes da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Inst. de Puericultura e Pediatria Martagao Gesteira

    2002-07-01

    Amino acids used in parenteral administration to hospital patients, of special importance in nutritional applications, were analyzed for comparison with the manufacturer's data. Individual amino acid samples of phenylalanine, cysteine, methionine, tyrosine and threonine were measured with the neutron crystal spectrometer installed at the J-9 irradiation channel of the 1 kW Argonaut Reactor of the Instituto de Engenharia Nuclear (IEN). Gold and high-purity D₂O samples were used for calibration of the experimental system. Neutron cross section values were calculated from analysis of the chemical composition, conformation and molecular structure of the materials. Literature data were manipulated by parceling and grouping neutron cross sections. (author)

  19. Benefits of cognitive behavior techniques in group therapy for alcohol and drug abuse

    Directory of Open Access Journals (Sweden)

    Ana Carolina Robbe Mathias

    2007-01-01

    Motivational interviewing and relapse prevention are treatment approaches for individuals with alcohol or drug abuse problems. This article describes a group therapy treatment case, showing the association of both techniques. Each step of the treatment techniques is demonstrated and exemplified, along with its results. Results and advantages of the techniques are discussed.

  20. Light chains removal by extracorporeal techniques in acute kidney injury due to multiple myeloma: a position statement of the Onconephrology Work Group of the Italian Society of Nephrology.

    Science.gov (United States)

    Fabbrini, P; Finkel, K; Gallieni, M; Capasso, G; Cavo, M; Santoro, A; Pasquali, S

    2016-12-01

    Acute kidney injury (AKI) is a frequent complication of multiple myeloma and is associated with increased short-term mortality. Additionally, even a single episode of AKI can eventually lead to end-stage renal disease (ESRD), significantly reducing quality of life and long-term survival. In the setting of multiple myeloma, severe AKI (requiring dialysis) is typically secondary to cast nephropathy (CN). Renal injury in CN is due to intratubular obstruction from precipitation of monoclonal serum free light chains (sFLC) as well as direct tubular toxicity of sFLC via stimulation of nuclear factor (NF)κB inflammatory pathways. Current mainstays of CN treatment are early removal of precipitating factors such as nephrotoxic drugs, acidosis and dehydration, together with rapid reduction of sFLC levels. Introduction of the proteasome inhibitor bortezomib has significantly improved the response rates in multiple myeloma due to its ability to rapidly reduce sFLC levels and has been referred to as "renoprotective" therapy. As an adjunct to chemotherapy, several new extracorporeal techniques have raised interest as a further means to reduce sFLC concentrations in the treatment of CN. Whether addition of extracorporeal therapies to renoprotective therapy can result in better renal recovery is still a matter of debate and there are currently no guidelines in this field. In this position paper, we offer an overview of the available data and the authors' perspectives on extracorporeal treatments in CN.

  1. Averaged Extended Tree Augmented Naive Classifier

    Directory of Open Access Journals (Sweden)

    Aaron Meehan

    2015-07-01

    This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN), which is based on combining the advantageous characteristics of the Extended Tree Augmented Naive Bayes (ETAN) and Averaged One-Dependence Estimator (AODE) classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required.

  2. Trajectory averaging for stochastic approximation MCMC algorithms

    CERN Document Server

    Liang, Faming

    2010-01-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400--407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305--320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE al...
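The flavor of trajectory averaging can be seen in a toy Robbins-Monro iteration: the running average of the iterates, not the last iterate, is taken as the estimator. This is a generic Polyak-Ruppert-style sketch, not the SAMC algorithm itself; the gain exponent and burn-in length are arbitrary choices:

```python
import random

def sa_with_trajectory_average(sample, theta0=0.0, n_steps=20000, burn=1000):
    """Robbins-Monro iteration with trajectory averaging.

    Estimates the mean of `sample()` as the root of E[X - theta] = 0;
    the running average of post-burn-in iterates is the final estimate.
    """
    theta, avg, count = theta0, 0.0, 0
    for t in range(1, n_steps + 1):
        theta += (sample() - theta) / t ** 0.7   # slowly decaying gain
        if t > burn:
            count += 1
            avg += (theta - avg) / count          # running mean of iterates
    return theta, avg

random.seed(42)
last, averaged = sa_with_trajectory_average(lambda: random.gauss(3.0, 1.0))
print(averaged)
```

With a gain decaying slower than 1/t, the last iterate keeps fluctuating, while the averaged trajectory settles close to the target (here, the mean 3.0).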

  3. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of the average work productivity across the factors affecting it is performed by means of the u-substitution method.

  4. Time-average dynamic speckle interferometry

    Science.gov (United States)

    Vladimirov, A. P.

    2014-05-01

    For the study of microscopic processes occurring at the structural level in solids and thin biological objects, the method of dynamic speckle interferometry has been successfully applied. However, the method has disadvantages. The purpose of this report is to acquaint colleagues with the method of time averaging in dynamic speckle interferometry of microscopic processes, which allows these shortcomings to be eliminated. The main idea of the method is to choose an averaging time that exceeds the characteristic correlation (relaxation) time of the most rapid process. The theory of the method for a thin phase object and a reflecting object is given. The results of an experiment on the high-cycle fatigue of steel and of an experiment to estimate the biological activity of a monolayer of cells cultivated on a transparent substrate are given. It is shown that the method allows real-time visualization of the accumulation of fatigue damage and reliable estimation of the activity of cells with and without viruses.

  5. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
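A smooth-weight alternative to the 0-1 random weights of a PMSE is information-criterion weighting. A minimal sketch using Akaike weights follows; the estimates and AIC scores are illustrative numbers, not from the paper:

```python
import math

def akaike_weights(aics):
    """Turn AIC scores into model-averaging weights.

    w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    where delta_i = AIC_i - min(AIC).
    """
    best = min(aics)
    rel = [math.exp(-(a - best) / 2.0) for a in aics]
    s = sum(rel)
    return [r / s for r in rel]

def model_average(estimates, aics):
    """AIC-weighted average of per-model point estimates."""
    w = akaike_weights(aics)
    return sum(wi * ei for wi, ei in zip(w, estimates))

# Three candidate models, their point estimates and AIC scores (made up).
print(model_average([1.0, 1.2, 2.0], [100.0, 101.0, 110.0]))
```

Model selection corresponds to putting weight 1 on the lowest-AIC model; the smooth weights instead shade the estimate toward near-competitive models.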

  6. Average Annual Rainfall over the Globe

    Science.gov (United States)

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…

  7. Endogenous average cost based access pricing

    OpenAIRE

    Fjell, Kenneth; Foros, Øystein; Pal, Debashis

    2006-01-01

    We consider an industry where a downstream competitor requires access to an upstream facility controlled by a vertically integrated and regulated incumbent. The literature on access pricing assumes the access price to be exogenously fixed ex-ante. We analyze an endogenous average cost based access pricing rule, where both firms realize the interdependence among their quantities and the regulated access price. Endogenous access pricing neutralizes the artificial cost advantag...

  8. The Ghirlanda-Guerra identities without averaging

    CERN Document Server

    Chatterjee, Sourav

    2009-01-01

    The Ghirlanda-Guerra identities are one of the most mysterious features of spin glasses. We prove the GG identities in a large class of models that includes the Edwards-Anderson model, the random field Ising model, and the Sherrington-Kirkpatrick model in the presence of a random external field. Previously, the GG identities were rigorously proved only `on average' over a range of temperatures or under small perturbations.

  10. Average Light Intensity Inside a Photobioreactor

    Directory of Open Access Journals (Sweden)

    Herby Jean

    2011-01-01

    For energy production, microalgae are one of the few alternatives with high potential. Similar to plants, algae require energy acquired from light sources to grow. This project uses calculus to determine the light intensity inside of a photobioreactor filled with algae. Under preset conditions along with estimated values, we applied Lambert-Beer's law to formulate an equation to calculate how much light intensity escapes a photobioreactor and determine the average light intensity that was present inside the reactor.
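The calculation described reduces to integrating the Lambert-Beer profile I(x) = I0 * exp(-k x) across the reactor depth L, giving an average intensity of I0 (1 - exp(-k L)) / (k L). This is our rederivation with illustrative parameter values, not the article's numbers:

```python
import math

def average_intensity(I0, k, L):
    """Mean of I(x) = I0 * exp(-k x) over depth 0..L (Lambert-Beer)."""
    return I0 * (1.0 - math.exp(-k * L)) / (k * L)

# Illustrative values: incident 1000 W/m^2, attenuation 20 1/m, depth 0.1 m.
I0, k, L = 1000.0, 20.0, 0.1
print(average_intensity(I0, k, L))

# Numerical cross-check of the closed form by the midpoint rule.
n = 100000
dx = L / n
riemann = sum(I0 * math.exp(-k * (i + 0.5) * dx) for i in range(n)) * dx / L
print(abs(riemann - average_intensity(I0, k, L)) < 1e-6)  # True
```

The midpoint-rule sum approximates the same depth integral directly, confirming the closed-form average.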

  11. Geomagnetic effects on the average surface temperature

    Science.gov (United States)

    Ballatore, P.

    Several results have previously shown that solar activity can be related to cloudiness and surface solar radiation intensity (Svensmark and Friis-Christensen, J. Atmos. Sol. Terr. Phys., 59, 1225, 1997; Veretenenko and Pudovkin, J. Atmos. Sol. Terr. Phys., 61, 521, 1999). Here, the possible relationships between the averaged surface temperature and the solar wind parameters or geomagnetic activity indices are investigated. The temperature data used are the monthly SST maps (generated at RAL and available from the related ESRIN/ESA database) that represent the averaged surface temperature with a spatial resolution of 0.5°x0.5° and cover the entire globe. The interplanetary data and the geomagnetic data are from the USA National Space Science Data Center. The time interval considered is 1995-2000. Specifically, possible associations and/or correlations of the average temperature with the interplanetary magnetic field Bz component and with the Kp index are considered and differentiated taking into account separate geographic and geomagnetic planetary regions.

  12. On Backus average for generally anisotropic layers

    CERN Document Server

    Bos, Len; Slawinski, Michael A; Stanoev, Theodore

    2016-01-01

    In this paper, following the Backus (1962) approach, we examine expressions for elasticity parameters of a homogeneous generally anisotropic medium that is long-wave-equivalent to a stack of thin generally anisotropic layers. These expressions reduce to the results of Backus (1962) for the case of isotropic and transversely isotropic layers. In over half-a-century since the publications of Backus (1962) there have been numerous publications applying and extending that formulation. However, neither George Backus nor the authors of the present paper are aware of further examinations of mathematical underpinnings of the original formulation; hence, this paper. We prove that---within the long-wave approximation---if the thin layers obey stability conditions then so does the equivalent medium. We examine---within the Backus-average context---the approximation of the average of a product as the product of averages, and express it as a proposition in terms of an upper bound. In the presented examination we use the e...
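The gap between the average of a product and the product of averages, which the paper bounds, is exactly the (population) covariance of the two layer quantities. A quick numerical illustration of that identity (a generic sketch, not the paper's elasticity parameters):

```python
import numpy as np

# Two correlated "layer properties" sampled over a stack of 1000 thin layers.
rng = np.random.default_rng(7)
f = rng.uniform(1.0, 2.0, 1000)
g = 3.0 * f + rng.normal(0.0, 0.1, 1000)

# <f g> - <f><g> equals the population covariance of f and g.
gap = np.mean(f * g) - np.mean(f) * np.mean(g)
cov = np.cov(f, g, bias=True)[0, 1]
print(abs(gap - cov) < 1e-10)  # True
```

So approximating the average of a product by the product of averages is exact only when the layer quantities are uncorrelated, which motivates the upper bound studied in the paper.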

  13. A simple algorithm for averaging spike trains.

    Science.gov (United States)

    Julienne, Hannah; Houghton, Conor

    2013-02-25

    Although spike trains are the principal channel of communication between neurons, a single stimulus will elicit different spike trains from trial to trial. This variability, in both spike timings and spike number, can obscure the temporal structure of spike trains and often means that computations need to be run on numerous spike trains in order to extract features common across all the responses to a particular stimulus. This can increase the computational burden and obscure analytical results. As a consequence, it is useful to consider how to calculate a central spike train that summarizes a set of trials. Indeed, averaging responses over trials is routine for other signal types. Here, a simple method for finding a central spike train is described. The spike trains are first mapped to functions, these functions are averaged, and a greedy algorithm is then used to map the average function back to a spike train. The central spike trains are tested for a large data set. Their performance on a classification-based test is considerably better than the performance of the medoid spike trains.
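The map-average-map-back idea can be sketched as follows. This is our simplified illustration (Gaussian smoothing, greedy peak picking); the kernel, its width, and the peak-suppression rule are assumptions, not the paper's exact algorithm:

```python
import numpy as np

def central_spike_train(trains, t_max, dt=0.001, sigma=0.01):
    """Sketch of a central spike train via smoothing and greedy mapping.

    Each train is mapped to a function by Gaussian smoothing, the
    functions are averaged, and spikes are greedily placed at the
    largest remaining peaks of the averaged function.
    """
    t = np.arange(0.0, t_max, dt)

    def smooth(spikes):
        d = t[:, None] - np.asarray(spikes)[None, :]
        return np.exp(-0.5 * (d / sigma) ** 2).sum(axis=1)

    avg = np.mean([smooth(tr) for tr in trains], axis=0)
    n_spikes = int(round(np.mean([len(tr) for tr in trains])))
    centre, work = [], avg.copy()
    for _ in range(n_spikes):
        i = int(np.argmax(work))
        centre.append(t[i])
        work[np.abs(t - t[i]) < 2 * sigma] = -np.inf   # suppress this peak
    return sorted(centre)

trains = [[0.10, 0.30, 0.52], [0.11, 0.29, 0.50], [0.09, 0.31, 0.51]]
print(central_spike_train(trains, t_max=0.6))
```

For the three jittered trials above, the greedy pass recovers one central spike near each cluster of trial spikes.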

  14. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality, seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.
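For reference, all four measures build on the standard life-table computation of expectancy from age-specific death probabilities. A minimal single-year sketch (generic demography, not the paper's formulas; the mid-interval assumption is ours):

```python
def life_expectancy(qx):
    """Life expectancy at birth from single-year death probabilities qx.

    Assumes deaths occur on average mid-interval, so each year
    contributes a full year for survivors plus half a year for those
    dying within it.
    """
    e, surv = 0.0, 1.0
    for q in qx:
        e += surv * (1.0 - q / 2.0)   # person-years lived this year
        surv *= (1.0 - q)             # survivors to the next birthday
    return e

# Toy schedule: flat 1% annual mortality up to age 100.
print(life_expectancy([0.01] * 100))
```

A period measure applies one year's qx schedule to a synthetic cohort in this way; cohort-based measures such as CAL and ACLE instead combine the qx values that real cohorts actually experienced across years.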

  15. Disk-averaged synthetic spectra of Mars

    Science.gov (United States)

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.

  16. Spatial averaging infiltration model for layered soil

    Institute of Scientific and Technical Information of China (English)

    HU HePing; YANG ZhiYong; TIAN FuQiang

    2009-01-01

    To quantify the influences of soil heterogeneity on infiltration, a spatial averaging infiltration model for layered soil (SAI model) is developed by coupling the spatial averaging approach proposed by Chen et al. and the Generalized Green-Ampt model proposed by Jia et al. In the SAI model, the spatial heterogeneity along the horizontal direction is described by a probability distribution function, while that along the vertical direction is represented by the layered soils. The SAI model is tested on a typical soil using Monte Carlo simulations as the base model. The results show that the SAI model can directly incorporate the influence of spatial heterogeneity on infiltration on the macro scale. It is also found that the homogeneous assumption of soil hydraulic conductivity along the horizontal direction will overestimate the infiltration rate, while that along the vertical direction will underestimate the infiltration rate significantly during rainstorm periods. The SAI model is adopted in the spatial averaging hydrological model developed by the authors, and the results prove that it can be applied in the macro-scale hydrological and land surface process modeling in a promising way.

  18. Disk-averaged synthetic spectra of Mars

    CERN Document Server

    Tinetti, G; Fong, W; Meadows, V S; Snively, H; Velusamy, T; Crisp, David; Fong, William; Meadows, Victoria S.; Snively, Heather; Tinetti, Giovanna; Velusamy, Thangasamy

    2004-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and ESA Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of the planet Mars to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model which uses observational data as input to generate a database of spatially-resolved synthetic spectra for a range of illumination conditions (phase angles) and viewing geometries. Results presented here include disk-averaged synthetic spectra, light-cur...

  19. Disk-averaged synthetic spectra of Mars

    Science.gov (United States)

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.

  20. Combining vibrational biomolecular spectroscopy with chemometric techniques for the study of response and sensitivity of molecular structures/functional groups mainly related to lipid biopolymer to various processing applications.

    Science.gov (United States)

    Yu, Gloria Qingyu; Yu, Peiqiang

    2015-09-01

    The objectives of this project were to (1) combine vibrational spectroscopy with chemometric multivariate techniques to determine the effect of processing applications on molecular structural changes of lipid biopolymer, mainly related to functional groups, in green- and yellow-type Crop Development Centre (CDC) pea varieties [CDC Strike (green-type) vs. CDC Meadow (yellow-type)] that occurred during various processing applications; (2) relatively quantify the effect of processing applications on the antisymmetric CH3 ("CH3as") and CH2 ("CH2as") (ca. 2960 and 2923 cm(-1), respectively) and symmetric CH3 ("CH3s") and CH2 ("CH2s") (ca. 2873 and 2954 cm(-1), respectively) functional groups and carbonyl C=O ester (ca. 1745 cm(-1)) spectral intensities, as well as their ratios of antisymmetric CH3 to antisymmetric CH2 (CH3as to CH2as), symmetric CH3 to symmetric CH2 (CH3s to CH2s), and carbonyl C=O ester peak area to total CH peak area (C=O ester to CH); and (3) illustrate non-invasive techniques to detect the sensitivity of individual molecular functional groups to the various processing applications in the recently developed pea varieties. The hypothesis of this research was that processing applications modify the molecular structure profiles in the processed products as opposed to the original unprocessed pea seeds. The results showed that the different processing methods had different impacts on lipid molecular functional groups, and that different lipid functional groups had different sensitivity to the various heat processing applications. These changes were detected by advanced molecular spectroscopy with chemometric techniques and may be highly related to lipid utilization and availability. The multivariate molecular spectral analyses, cluster analysis, and principal component analysis of original spectra (without spectral parameterization) are unable to fully distinguish the structural differences in the

  1. Real-Time Pretreatment Review Limits Unacceptable Deviations on a Cooperative Group Radiation Therapy Technique Trial: Quality Assurance Results of RTOG 0933

    Energy Technology Data Exchange (ETDEWEB)

    Gondi, Vinai, E-mail: vgondi@chicagocancer.org [Cadence Brain Tumor Center and CDH Proton Center, Warrenville, Illinois (United States); University of Wisconsin School of Medicine & Public Health, Madison, Wisconsin (United States); Cui, Yunfeng [Duke University School of Medicine, Durham, North Carolina (United States); Mehta, Minesh P. [University of Maryland School of Medicine, Baltimore, Maryland (United States); Manfredi, Denise [Radiation Therapy Oncology Group—RTQA, Philadelphia, Pennsylvania (United States); Xiao, Ying; Galvin, James M. [Thomas Jefferson University Hospital, Philadelphia, Pennsylvania (United States); Rowley, Howard [University of Wisconsin School of Medicine & Public Health, Madison, Wisconsin (United States); Tome, Wolfgang A. [Montefiore Medical Center and Institute for Onco-Physics, Albert Einstein College of Medicine of Yeshiva University, Bronx, New York (United States)

    2015-03-01

    cases passed the pre-enrollment credentialing, the pretreatment centralized review disqualified 5.7% of reviewed cases, prevented unacceptable deviations in 24% of reviewed cases, and limited the final unacceptable deviation rate to 5%. Thus, pretreatment review is deemed necessary in future hippocampal avoidance trials and is potentially useful in other similarly challenging radiation therapy technique trials.

  2. The stability of a zonally averaged thermohaline circulation model

    CERN Document Server

    Schmidt, G A

    1995-01-01

    A combination of analytical and numerical techniques is used to efficiently determine the qualitative and quantitative behaviour of a one-basin zonally averaged thermohaline circulation ocean model. In contrast to earlier studies which use time stepping to find the steady solutions, the steady state equations are first solved directly to obtain the multiple equilibria under identical mixed boundary conditions. This approach is based on the differentiability of the governing equations and especially the convection scheme. A linear stability analysis is then performed, in which the normal modes and corresponding eigenvalues are found for the various equilibrium states. Resonant periodic solutions superimposed on these states are predicted for various types of forcing. The results are used to gain insight into the solutions obtained by Mysak, Stocker and Huang in a previous numerical study in which the eddy diffusivities were varied in a randomly forced one-basin zonally averaged model. Resonant stable oscillat...

  3. Homodyne measurement of average photon number

    CERN Document Server

    Webb, J G; Huntington, E H

    2005-01-01

    We describe a new scheme for the measurement of mean photon flux at an arbitrary optical sideband frequency using homodyne detection. Experimental implementation of the technique requires an AOM in addition to the homodyne detector, and does not require phase locking. The technique exhibits polarisation, frequency and spatial mode selectivity, as well as much improved speed, resolution and dynamic range when compared to linear photodetectors and avalanche photo diodes (APDs), with potential application to quantum state tomography and information encoding using an optical frequency basis. Experimental data also directly confirms the Quantum Mechanical description of vacuum noise.

  4. Average radiation widths of levels in natural xenon isotopes

    Energy Technology Data Exchange (ETDEWEB)

    Noguere, G., E-mail: gilles.noguere@cea.fr [CEA, DEN, Cadarache, F-13108 Saint Paul les Durance (France); Litaize, O.; Archier, P.; De Saint Jean, C. [CEA, DEN, Cadarache, F-13108 Saint Paul les Durance (France); Mutti, P. [Institut Laue-Langevin, F-38042 Grenoble (France)

    2011-11-15

    Average radiation widths <{Gamma}{sub {gamma}}> for the stable xenon isotopes have been estimated using neutron resonance spectroscopic information deduced from high-resolution capture and transmission data measured at the electron linear accelerator GELINA of the Institute for Reference Materials and Measurements (IRMM) in Geel, Belgium. The combination of conventional Neutron Resonance Shape Analysis (NRSA) techniques with high-energy model calculations in a simple Bayesian learning method permits the calculation of consistent local systematics in the xenon mass region (Z=54) from A=124 to A=136.

  5. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    Science.gov (United States)

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
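The moment estimation via seasonal Yule-Walker equations that the likelihood algorithm is compared against can be sketched for the simplest periodic model, a periodic AR(1). This is an illustrative reconstruction with made-up parameter values, not Vecchia's likelihood algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Periodic AR(1): X_t = phi_{s(t)} * X_{t-1} + eps_t, with season s(t) = t mod 4.
# The coefficient values are illustrative (product of |phi_s| < 1, so stable).
true_phi = np.array([0.8, 0.3, -0.5, 0.6])
n_seasons, n_years = 4, 5000
n = n_seasons * n_years

x = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(1, n):
    x[t] = true_phi[t % n_seasons] * x[t - 1] + eps[t]

# Seasonal Yule-Walker (moment) estimate: for each season s,
#   phi_s = E[X_t X_{t-1} | s(t)=s] / E[X_{t-1}^2 | s(t)=s]
est_phi = np.empty(n_seasons)
for s in range(n_seasons):
    t_idx = np.arange(1, n)[(np.arange(1, n) % n_seasons) == s]
    est_phi[s] = np.sum(x[t_idx] * x[t_idx - 1]) / np.sum(x[t_idx - 1] ** 2)
```

With long series the seasonal moment estimates recover the periodic coefficients; the article's point is that a likelihood-based fit improves on exactly this benchmark for short seasonal records.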

  6. Recent advances in phase shifted time averaging and stroboscopic interferometry

    Science.gov (United States)

    Styk, Adam; Józwik, Michał

    2016-08-01

    Classical Time Averaging and Stroboscopic Interferometry are widely used for MEMS/MOEMS dynamic behavior investigations. Unfortunately, both methods require extensive measurement and data-processing strategies in order to evaluate the maximum amplitude of a vibrating object at a given load. In this paper, modified data-processing strategies for both techniques are introduced. These modifications allow fast and reliable calculation of the sought value without additional complication of the measurement systems. Both approaches are discussed and experimentally verified.

  7. Bayesian Model Averaging and Weighted Average Least Squares : Equivariance, Stability, and Numerical Issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    This article is concerned with the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals which implement, respectively, the exact Bayesian Model Averaging (BMA) estimator and the Weighted Average Least Squares (WALS) estimator.
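The BMA idea can be sketched in a few lines using the common BIC approximation to the marginal likelihoods (a minimal sketch with simulated data, not the exact estimator implemented by the bma command):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Simulated data: y depends on x0 and x1 only; x2 is an irrelevant candidate.
n = 200
X = rng.standard_normal((n, 3))
y = 1.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * rng.standard_normal(n)

def fit_ols(cols):
    """OLS fit on a predictor subset; return (subset, coefficients, BIC)."""
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    sigma2 = resid @ resid / n
    bic = n * np.log(sigma2) + A.shape[1] * np.log(n)
    return cols, beta, bic

# Enumerate all 2^3 predictor subsets and weight them by exp(-BIC/2)
models = [fit_ols(c) for r in range(4) for c in combinations(range(3), r)]
bics = np.array([m[2] for m in models])
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()

# Model-averaged slope coefficients (a predictor contributes zero when excluded)
avg_beta = np.zeros(3)
for (cols, beta, _), w in zip(models, weights):
    for i, j in enumerate(cols):
        avg_beta[j] += w * beta[i + 1]
```

The averaged coefficients shrink the irrelevant predictor toward zero while keeping the relevant ones close to their OLS values, which is the behaviour the exact BMA estimator delivers with proper priors.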

  8. A sixth order averaged vector field method

    OpenAIRE

    Li, Haochen; Wang, Yushun; Qin, Mengzhao

    2014-01-01

    In this paper, based on the theory of rooted trees and B-series, we propose the concrete formulas of the substitution law for the trees of order =5. With the help of the new substitution law, we derive a B-series integrator extending the averaged vector field (AVF) method to high order. The new integrator turns out to be of order six and exactly preserves energy for Hamiltonian systems. Numerical experiments are presented to demonstrate the accuracy and the energy-preserving property of the s...
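The basic second-order AVF method that the paper extends can be sketched for the pendulum, where the integral over the segment between the old and new states has a closed form, so energy is preserved exactly (up to the fixed-point tolerance). This is a sketch of the standard AVF scheme, not the sixth-order B-series integrator:

```python
import numpy as np

# Pendulum: H(q, p) = p^2/2 - cos(q);  q' = p,  p' = -sin(q).
# AVF step: (q1, p1) = (q0, p0) + h * int_0^1 f((1-s)(q0,p0) + s(q1,p1)) ds.

def avf_step(q0, p0, h):
    q1, p1 = q0, p0
    for _ in range(100):                      # fixed-point iteration
        q1_new = q0 + h * 0.5 * (p0 + p1)     # exact integral of the linear part
        if abs(q1_new - q0) > 1e-12:
            # int_0^1 sin((1-s) q0 + s q1) ds = (cos q0 - cos q1) / (q1 - q0)
            avg_sin = (np.cos(q0) - np.cos(q1_new)) / (q1_new - q0)
        else:
            avg_sin = np.sin(0.5 * (q0 + q1_new))   # limit q1 -> q0
        p1_new = p0 - h * avg_sin
        if abs(q1_new - q1) + abs(p1_new - p1) < 1e-15:
            q1, p1 = q1_new, p1_new
            break
        q1, p1 = q1_new, p1_new
    return q1, p1

def energy(q, p):
    return 0.5 * p * p - np.cos(q)

q, p = 0.5, 1.0
h, n_steps = 0.1, 2000
e0 = energy(q, p)
drift = 0.0
for _ in range(n_steps):
    q, p = avf_step(q, p, h)
    drift = max(drift, abs(energy(q, p) - e0))
```

After 2000 steps the energy drift stays at the level of the iteration tolerance, illustrating the exact energy preservation that the sixth-order extension inherits.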

  9. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  10. Sparsity averaging for radio-interferometric imaging

    CERN Document Server

    Carrillo, Rafael E; Wiaux, Yves

    2014-01-01

    We propose a novel regularization method for compressive imaging in the context of the compressed sensing (CS) theory with coherent and redundant dictionaries. Natural images are often complicated and several types of structures can be present at once. It is well known that piecewise smooth images exhibit gradient sparsity, and that images with extended structures are better encapsulated in wavelet frames. Therefore, we here conjecture that promoting average sparsity or compressibility over multiple frames rather than single frames is an extremely powerful regularization prior.

  11. Fluctuations of wavefunctions about their classical average

    Energy Technology Data Exchange (ETDEWEB)

    Benet, L [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Flores, J [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Hernandez-Saldana, H [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Izrailev, F M [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Leyvraz, F [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Seligman, T H [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico)

    2003-02-07

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.

  12. The average free volume model for liquids

    CERN Document Server

    Yu, Yang

    2014-01-01

    In this work, the molar volume thermal expansion coefficient of 59 room-temperature ionic liquids is compared with their van der Waals volume Vw. A regular correlation can be discerned between the two quantities. An average free volume model, which treats the particles as hard cores with attractive forces, is proposed to explain the correlation in this study. A combination of free volume and the Lennard-Jones potential is applied to explain the physical phenomena of liquids. Some typical simple liquids (inorganic, organic, metallic and salt) are introduced to verify this hypothesis. Good agreement between the theoretical prediction and experimental data is obtained.

  13. Fluctuations of wavefunctions about their classical average

    CERN Document Server

    Bénet, L; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.

  14. Grassmann Averages for Scalable Robust PCA

    OpenAIRE

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA do not scale beyond small-to-medium sized datasets. To address this, we introduce the Grassmann Average (GA), whic...
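A simplified sketch of the (non-trimmed) Grassmann Average: each observation spans a one-dimensional subspace, and the average direction is computed by iteratively flipping observations into the half-space of the current estimate and averaging. The initialization and the synthetic data below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def grassmann_average(X, n_iter=50):
    """Leading 'average subspace' direction of the rows of X.

    Each row spans a 1-D subspace; the Grassmann average flips each row into
    the half-space of the current estimate, averages, and renormalizes.
    """
    d = X.shape[1]
    q = np.ones(d) / np.sqrt(d)          # simple deterministic initialization
    for _ in range(n_iter):
        signs = np.sign(X @ q)
        signs[signs == 0] = 1.0
        m = (signs[:, None] * X).mean(axis=0)
        q_new = m / np.linalg.norm(m)
        if abs(q_new @ q) > 1.0 - 1e-12:  # converged (up to sign)
            return q_new
        q = q_new
    return q

# Synthetic data concentrated along a known direction, plus gross outliers
v = np.array([1.0, 2.0]) / np.sqrt(5.0)
scores = rng.standard_normal(500)
X = np.outer(scores, v) + 0.05 * rng.standard_normal((500, 2))
X[:5] = rng.standard_normal((5, 2)) * 10.0

q = grassmann_average(X)
alignment = abs(q @ v)                    # close to 1 when v is recovered
```

Each iteration costs only a matrix-vector product and an average, which is what makes the approach scale to datasets where convex robust-PCA formulations do not.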

  15. Condition monitoring of gearboxes using synchronously averaged electric motor signals

    Science.gov (United States)

    Ottewill, J. R.; Orkisz, M.

    2013-07-01

    Due to their prevalence in rotating machinery, the condition monitoring of gearboxes is extremely important in the minimization of potentially dangerous and expensive failures. Traditionally, gearbox condition monitoring has been conducted using measurements obtained from casing-mounted vibration transducers such as accelerometers. A well-established technique for analyzing such signals is the synchronous signal average, where vibration signals are synchronized to a measured angular position and then averaged from rotation to rotation. Driven, in part, by improvements in control methodologies based upon methods of estimating rotor speed and torque, induction machines are used increasingly in industry to drive rotating machinery. As a result, attempts have been made to diagnose defects using measured terminal currents and voltages. In this paper, the application of the synchronous signal averaging methodology to electric drive signals, by synchronizing stator current signals with a shaft position estimated from current and voltage measurements is proposed. Initially, a test-rig is introduced based on an induction motor driving a two-stage reduction gearbox which is loaded by a DC motor. It is shown that a defect seeded into the gearbox may be located using signals acquired from casing-mounted accelerometers and shaft mounted encoders. Using simple models of an induction motor and a gearbox, it is shown that it should be possible to observe gearbox defects in the measured stator current signal. A robust method of extracting the average speed of a machine from the current frequency spectrum, based on the location of sidebands of the power supply frequency due to rotor eccentricity, is presented. The synchronous signal averaging method is applied to the resulting estimations of rotor position and torsional vibration. Experimental results show that the method is extremely adept at locating gear tooth defects. Further results, considering different loads and different
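The core folding-and-averaging step can be sketched on synthetic data, assuming for simplicity that the signal is already resampled to a fixed number of samples per revolution, as an encoder or the paper's current-based speed estimate would provide:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated vibration: a gear-mesh tone plus one localized defect signature
# per revolution, buried in broadband noise. All values are illustrative.
samples_per_rev, n_revs = 256, 400
angle = np.linspace(0.0, 2.0 * np.pi, samples_per_rev, endpoint=False)

mesh = 0.5 * np.sin(20 * angle)                      # 20 mesh cycles per rev
defect = np.exp(-0.5 * ((angle - 1.0) / 0.05) ** 2)  # defect at angle ~1 rad
clean = mesh + defect

signal = np.tile(clean, n_revs) + 1.0 * rng.standard_normal(samples_per_rev * n_revs)

# Synchronous average: fold the record into revolutions and average them.
# Components locked to shaft angle survive; asynchronous noise averages out
# at a rate of roughly 1/sqrt(n_revs).
synchronous_avg = signal.reshape(n_revs, samples_per_rev).mean(axis=0)

noise_before = np.std(signal[:samples_per_rev] - clean)
noise_after = np.std(synchronous_avg - clean)
```

The same folding applies whether the per-revolution signal comes from an accelerometer or, as proposed in the paper, from the stator current synchronized to the estimated rotor position.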

  16. High-average-power diode-pumped Yb: YAG lasers

    Energy Technology Data Exchange (ETDEWEB)

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-10-01

    A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high average power Yb:YAG lasers that utilize a rod configured gain element. Previously, this rod configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M{sup 2} = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual rod configuration consisting of two, 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M{sup 2} value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M{sup 2} < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) Hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes. (2) Compound laser rods with flanged-nonabsorbing-endcaps fabricated by diffusion bonding. (3) Techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods.

  17. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
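The trajectory averaging idea can be sketched on the simplest stochastic approximation problem: estimating a mean with a Robbins-Monro recursion and then averaging the iterates (an illustrative toy with made-up values, not the SAMCMC setting):

```python
import numpy as np

rng = np.random.default_rng(5)

# Robbins-Monro recursion for the root of g(theta) = theta - mu, observed
# through noisy samples X_n ~ N(mu, 1):
#   theta_{n+1} = theta_n - a_n * (theta_n - X_n)
mu, n = 3.0, 20000
theta = 0.0
trajectory = np.empty(n)
for i in range(n):
    a = 1.0 / (i + 1) ** 0.7          # slowly decaying step size
    x = mu + rng.standard_normal()
    theta -= a * (theta - x)
    trajectory[i] = theta

theta_last = trajectory[-1]
theta_avg = trajectory[n // 2:].mean()  # trajectory average over the tail
```

The averaged estimator smooths the fluctuations of the raw iterates; the paper's result is that this averaging attains asymptotic efficiency for SAMCMC under mild conditions.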

  18. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uuu···u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  19. MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Kevin; Allison, Trent; Evans, Richard; Coleman, James; Grippo, Albert

    2003-05-01

    A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths is programmed into gate arrays; the output of the matrix is an allowable maximum average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.

  20. Intensity contrast of the average supergranule

    CERN Document Server

    Langfellner, J; Gizon, L

    2016-01-01

    While the velocity fluctuations of supergranulation dominate the spectrum of solar convection at the solar surface, very little is known about the fluctuations in other physical quantities like temperature or density at supergranulation scale. Using SDO/HMI observations, we characterize the intensity contrast of solar supergranulation at the solar surface. We identify the positions of ${\sim}10^4$ outflow and inflow regions at supergranulation scales, from which we construct average flow maps and co-aligned intensity and magnetic field maps. In the average outflow center, the maximum intensity contrast is $(7.8\pm0.6)\times10^{-4}$ (there is no corresponding feature in the line-of-sight magnetic field). This corresponds to a temperature perturbation of about $1.1\pm0.1$ K, in agreement with previous studies. We discover an east-west anisotropy, with a slightly deeper intensity minimum east of the outflow center. The evolution is asymmetric in time: the intensity excess is larger 8 hours before the reference t...

  1. Local average height distribution of fluctuating interfaces

    Science.gov (United States)

    Smith, Naftali R.; Meerson, Baruch; Sasorov, Pavel V.

    2017-01-01

    Height fluctuations of growing surfaces can be characterized by the probability distribution of height in a spatial point at a finite time. Recently there has been spectacular progress in the studies of this quantity for the Kardar-Parisi-Zhang (KPZ) equation in 1 +1 dimensions. Here we notice that, at or above a critical dimension, the finite-time one-point height distribution is ill defined in a broad class of linear surface growth models unless the model is regularized at small scales. The regularization via a system-dependent small-scale cutoff leads to a partial loss of universality. As a possible alternative, we introduce a local average height. For the linear models, the probability density of this quantity is well defined in any dimension. The weak-noise theory for these models yields the "optimal path" of the interface conditioned on a nonequilibrium fluctuation of the local average height. As an illustration, we consider the conserved Edwards-Wilkinson (EW) equation, where, without regularization, the finite-time one-point height distribution is ill defined in all physical dimensions. We also determine the optimal path of the interface in a closely related problem of the finite-time height-difference distribution for the nonconserved EW equation in 1 +1 dimension. Finally, we discuss a UV catastrophe in the finite-time one-point distribution of height in the (nonregularized) KPZ equation in 2 +1 dimensions.

  2. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t≥0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S=(-∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give them the choice to work with the time average of a process or its frequency distribution function and to go back and forth between the two under a mild condition.
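For a discrete-time sample path on a finite state space, the equality of the time average and the expectation under the empirical frequency distribution is an algebraic identity, which the following sketch demonstrates numerically (states and probabilities are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# A sample path on a finite state space. The long-run time average of f(X_t)
# equals the expectation of f under the path's empirical frequency distribution.
states = np.array([0.0, 1.0, 2.5])
path = rng.choice(states, size=10000, p=[0.5, 0.3, 0.2])

f = lambda x: x ** 2

# Time average of f along the path
time_average = f(path).mean()

# Expectation of f under the empirical frequency distribution of the same path
values, counts = np.unique(path, return_counts=True)
freq = counts / counts.sum()
expectation = np.sum(f(values) * freq)
```

For a finite path the two quantities coincide exactly; the article's contribution is the conditions under which this equality survives in the long-run limit for general processes.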

  3. The Increase of Activity and Reading Comprehension Ability Through Problem-Based Learning with the Group Discussion Technique - Peningkatan Aktivitas dan Kemampuan Memahami Isi Bacaan Melalui Pembelajaran Berbasis Masalah dengan Teknik Diskusi Kelompok

    Directory of Open Access Journals (Sweden)

    Yakobus Paluru

    2015-02-01

    Abstract: The purpose of this research is to improve students' learning activities and their ability to understand the information in a text. The results showed that students' ability to understand written information increased after they gained learning experience through a problem-based strategy with the group discussion technique. The increase is due to the motivation and interest of students built through problem-based learning strategies with group discussion techniques. The increase in activity is caused by the adjustment of instruction to students' needs related to the topics of the reading materials used in teaching, and to the habits and learning styles of the students. Key Words: learning, text content, group discussion. Abstrak (translated from Indonesian): This study aims to improve learning activity and students' ability to understand the information in a text. The results show that students' ability to understand written information improved after they experienced learning through a problem-based strategy with the group discussion technique. The increase in activity is due to the motivation and interest of students built through the problem-based learning strategy with the group discussion technique, as well as to the adjustment of instruction to students' needs regarding the reading topics used in the lessons and to students' habits and learning styles. Keywords: learning, text content, group discussion

  4. Combinatorial techniques

    CERN Document Server

    Sane, Sharad S

    2013-01-01

    This is a basic text on combinatorics that deals with all three aspects of the discipline: tricks, techniques and theory, and attempts to blend them. The book has several distinctive features. Probability and random variables, with their interconnections to permutations, are discussed. The theme of parity has been specially included, and it covers applications ranging from solving the Nim game to the quadratic reciprocity law. Chapters related to geometry include triangulations and Sperner's theorem, classification of regular polytopes, tilings, and an introduction to Euclidean Ramsey theory. Material on group actions covers Sylow theory, automorphism groups and a classification of finite subgroups of orthogonal groups. All chapters have a large number of exercises with varying degrees of difficulty, ranging from material suitable for Mathematical Olympiads to research.

  5. COMPARISON OF THREE KINESIO TECHNIQUE APPLICATION ON JUMPING IN COLLEGIATE FEMALE ATHLETES

    Directory of Open Access Journals (Sweden)

    Kailash Sharma

    2015-06-01

    Full Text Available Background: Kinesio Tape (KT) is a relatively new taping technique gaining popularity both as a treatment and as a performance enhancement tool. Although KT can improve muscle performance, limited research has been done on how different strip applications of KT affect functional performance. The purpose of this study was therefore to compare three kinesio taping techniques applied for jumping in collegiate female athletes. Methods: 45 healthy collegiate female athletes were recruited based on inclusion and exclusion criteria. The subjects were randomly divided into three equal groups (Group I, n=15; Group II, n=15; Group III, n=15). Group I received the Y application of kinesio taping, Group II received the I application of kinesio taping, while Group III underwent a combined Y and I application of kinesio tape on the triceps surae. Pre- and post-measurements of vertical jump (in terms of power average and power peak) and horizontal distance were documented. Results: Statistically significant differences were found between the changes in power average, power peak and horizontal jump in Groups I, II and III (p<0.001). Within-group comparisons also revealed statistically significant differences in power average, power peak and horizontal jump in all three groups (p<0.001). Conclusion: The combined (Y and I) application of kinesio tape was more effective in improving vertical jump (power average, power peak) and horizontal jump than the Y or I application alone.

  6. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.

  7. Averaged Null Energy Condition from Causality

    CERN Document Server

    Hartman, Thomas; Tajdini, Amirhossein

    2016-01-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, $\\int du T_{uu}$, must be positive. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to $n$-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form $\\int du X_{uuu\\cdots u} \\geq 0$. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment ...

  8. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and the static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
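The accumulation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: silhouettes are assumed to be binary 2-D arrays of equal size, and the toy frames are invented.

```python
def agdi(frames):
    """Average gait differential image: accumulate absolute differences
    between adjacent silhouette frames, then average over the frame pairs."""
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * w for _ in range(h)]
    for prev, curr in zip(frames, frames[1:]):
        for i in range(h):
            for j in range(w):
                acc[i][j] += abs(curr[i][j] - prev[i][j])
    n = len(frames) - 1  # number of adjacent pairs
    return [[v / n for v in row] for row in acc]

# Toy 3-frame sequence of 2x2 binary silhouettes (1 = body pixel)
frames = [[[0, 1], [0, 0]],
          [[1, 1], [0, 0]],
          [[1, 0], [0, 1]]]
feature = agdi(frames)
```

Pixels that change often between frames get high values (kinetic information), while pixels that stay constant get low values, which is the intuition behind the feature image.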

  9. Geographic Gossip: Efficient Averaging for Sensor Networks

    CERN Document Server

    Dimakis, Alexandros G; Wainwright, Martin J

    2007-01-01

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log ...
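For reference, the baseline that geographic gossip improves upon is standard nearest-neighbor pairwise gossip. The following sketch (our own illustration, on a ring topology) shows the key invariant: each pairwise exchange preserves the global sum, so the node values converge to the true average.

```python
import random

def gossip_ring(values, rounds, seed=0):
    """Standard nearest-neighbor gossip averaging on a ring."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(rounds):
        i = rng.randrange(n)
        j = (i + 1) % n                      # gossip with a ring neighbor
        x[i] = x[j] = (x[i] + x[j]) / 2.0    # pairwise average preserves the sum
    return x

vals = [0.0, 4.0, 8.0, 12.0]
out = gossip_ring(vals, rounds=2000)
true_avg = sum(vals) / len(vals)             # 6.0
```

The inefficiency discussed in the abstract is visible in how many such pairwise rounds are needed on poorly mixing topologies; geographic gossip replaces the neighbor choice with multi-hop geographic routing to a distant node.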

  10. Data mining for average images in a digital hand atlas

    Science.gov (United States)

    Zhang, Aifeng; Cao, Fei; Pietka, Ewa; Liu, Brent J.; Huang, H. K.

    2004-04-01

    Bone age assessment is a procedure performed in pediatric patients to quickly evaluate parameters of maturation and growth from a left hand and wrist radiograph. Pietka and Cao have developed a computer-aided diagnosis (CAD) method of bone age assessment based on a digital hand atlas. The aim of this paper is to extend their work by automatically selecting the best representative image from a group of normal children, based on specific bony features that reflect skeletal maturity. The group can be of any ethnic origin and gender, from one year to 18 years old, in the digital atlas. This best representative image is defined as the "average" image of the group, which can be added to Pietka and Cao's method to facilitate the bone age assessment process.

  11. Cortical evoked potentials recorded from the guinea pig without averaging.

    Science.gov (United States)

    Walloch, R A

    1975-01-01

    Potentials evoked by tonal pulses and recorded with a monopolar electrode on the pial surface over the auditory cortex of the guinea pig are presented. These potentials are compared with averaged potentials recorded in previous studies with an electrode on the dura. The potentials recorded by these two techniques have similar waveforms, peak latencies and thresholds, and appear to be generated within the same region of the cerebral cortex. As can be expected, the amplitude of the evoked potentials recorded from the pial surface is larger than that recorded from the dura. Consequently, averaging is not needed to extract the evoked potential once the dura is removed. The thresholds for the evoked cortical potential are similar to behavioral thresholds for the guinea pig at high frequencies; however, evoked potential thresholds are elevated over behavioral thresholds at low frequencies. The removal of the dura and the direct recording of the evoked potential appear most appropriate for acute experiments. The recording of an evoked potential with dural electrodes employing averaging procedures appears most appropriate for chronic studies.

  12. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, and such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scalable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  13. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
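The second comparison method mentioned above, integrating the average force measured at discrete locations along the coordinate, can be sketched numerically. This is a toy illustration with synthetic mean forces (consistent with a made-up profile A(x) = x²), not the authors' molecular dynamics workflow; it uses the standard relation dA/dx = −⟨F⟩ and the trapezoidal rule.

```python
# Synthetic data: mean forces <F> = -dA/dx = -2x for the assumed A(x) = x^2
xs = [i * 0.1 for i in range(11)]        # coordinate grid 0.0 .. 1.0
forces = [-2.0 * x for x in xs]          # "measured" average forces at each grid point

def free_energy_profile(xs, forces):
    """A(x) - A(x0) = -∫ <F> dx, evaluated with the trapezoidal rule."""
    a = [0.0]
    for k in range(1, len(xs)):
        dx = xs[k] - xs[k - 1]
        a.append(a[-1] - 0.5 * (forces[k] + forces[k - 1]) * dx)
    return a

profile = free_energy_profile(xs, forces)
```

With noisy forces from constrained simulations, the same integration applies; the accuracy is then limited by the sampling error of each ⟨F⟩ estimate rather than by the quadrature.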

  14. Role of spatial averaging in multicellular gradient sensing

    Science.gov (United States)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-06-01

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.

  15. Average dimension of fixed point spaces with applications

    CERN Document Server

    Guralnick, Robert M

    2010-01-01

    Let $G$ be a finite group, $F$ a field, and $V$ a finite dimensional $FG$-module such that $G$ has no trivial composition factor on $V$. Then the arithmetic average dimension of the fixed point spaces of elements of $G$ on $V$ is at most $(1/p) \\dim V$ where $p$ is the smallest prime divisor of the order of $G$. This answers and generalizes a 1966 conjecture of Neumann which also appeared in a paper of Neumann and Vaughan-Lee and also as a problem in The Kourovka Notebook posted by Vaughan-Lee. Our result also generalizes a recent theorem of Isaacs, Keller, Meierfrankenfeld, and Moret\\'o. Various applications are given. For example, another conjecture of Neumann and Vaughan-Lee is proven and some results of Segal and Shalev are improved and/or generalized concerning BFC groups.

  16. Interpreting Sky-Averaged 21-cm Measurements

    Science.gov (United States)

    Mirocha, Jordan

    2015-01-01

    Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions.I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. 
Finally, (3) the independent constraints most likely to aid in the interpretation

  17. Impulsive synchronization schemes of stochastic complex networks with switching topology: average time approach.

    Science.gov (United States)

    Li, Chaojie; Yu, Wenwu; Huang, Tingwen

    2014-06-01

    In this paper, a novel impulsive control law is proposed for the synchronization of stochastic discrete complex networks with time delays and switching topologies, where the average dwell time and the average impulsive interval are taken into account. The side effect of time delays is estimated by the Lyapunov-Razumikhin technique, which quantitatively bounds the rate of increase of the Lyapunov function. By considering the compensation over the decreasing interval, an improved impulsive control law is recast in terms of the average dwell time and the average impulsive interval. Detailed results from a numerical illustrative example are presented and discussed. Finally, some relevant conclusions are drawn.

  18. Signal-averaged electrocardiogram in chronic Chagas' heart disease

    Directory of Open Access Journals (Sweden)

    Aguinaldo Pereira de Moraes

    Full Text Available The aim of the study was to register the prevalence of late potentials (LP) in patients with chronic Chagas' heart disease (CCD) and their relationship with sustained ventricular tachycardia (SVT). 192 patients (96 males, mean age 42.9 years) with CCD were studied through a signal-averaged ECG using time-domain analysis. According to the presence or absence of bundle branch block (BBB) and SVT, four groups of patients were created: Group I (n = 72): without SVT (VT-) and without BBB (BBB-); Group II (n = 27): with SVT (VT+) and BBB-; Group III (n = 63): VT- and with BBB (BBB+); and Group IV (n = 30): VT+ and BBB+. LP was admitted, with a 40 Hz filter, in the groups without BBB using the standard criteria of the method. In the groups with BBB, the root-mean-square amplitude of the last 40 ms (RMS ≤ 14 µV) was considered an indicator of LP. RESULTS: In Groups I and II, LP was present in 21 (78%) of the patients with SVT and in 22 (31%) of the patients without SVT (p < 0.001), with sensitivity (S) 78%, specificity (SP) 70% and accuracy (Ac) 72%. In Groups III and IV, LP was present in 30 (48%) of the patients without SVT and in 20 (67%) of the patients with SVT (p = 0.066), with S = 66%, SP = 52% and Ac = 57%. In the follow-up, there were 4 deaths unrelated to arrhythmic events; none of these patients had LP. Eight (29.6%) of the patients from Group II and 4 (13%) from Group IV presented recurrence of SVT, and 91.6% of these patients had LP. CONCLUSIONS: LP occurred in 77.7% of the patients with SVT and without BBB. In the groups with BBB, LP was associated with SVT in 66.6% of the cases. Recurrence of SVT was present in 21% of the cases, of which 91.6% had LP.

  19. New Nordic diet versus average Danish diet

    DEFF Research Database (Denmark)

    Khakimov, Bekzod; Poulsen, Sanne Kellebjerg; Savorani, Francesco

    2016-01-01

    A previous study has shown effects of the New Nordic Diet (NND) to stimulate weight loss and lower systolic and diastolic blood pressure in obese Danish women and men in a randomized, controlled dietary intervention study. This work demonstrates long-term metabolic effects of the NND as compared...... metabolites reflecting specific differences in the diets, especially intake of plant foods and seafood, and in energy metabolism related to ketone bodies and gluconeogenesis, formed the predominant metabolite pattern discriminating the intervention groups. Among NND subjects higher levels of vaccenic acid...... diets high in fish, vegetables, fruit, and wholegrain facilitated weight loss and improved insulin sensitivity by increasing ketosis and gluconeogenesis in the fasting state....

  20. Clustering economies based on multiple criteria decision making techniques

    OpenAIRE

    2011-01-01

    One of the primary concerns in many countries is to determine the important factors affecting economic growth. In this paper, we study factors such as unemployment rate, inflation ratio, population growth, average annual income, etc., to cluster different countries. The proposed model uses the analytical hierarchy process (AHP) to prioritize the criteria and then uses a K-means technique to cluster 59 countries, based on the ranked criteria, into four groups. The first group i...

  1. Order-Optimal Consensus through Randomized Path Averaging

    CERN Document Server

    Benezit, F; Thiran, P; Vetterli, M

    2008-01-01

    Gossip algorithms have recently received significant attention, mainly because they constitute simple and robust message-passing schemes for distributed information processing over networks. However for many topologies that are realistic for wireless ad-hoc and sensor networks (like grids and random geometric graphs), the standard nearest-neighbor gossip converges as slowly as flooding ($O(n^2)$ messages). A recently proposed algorithm called geographic gossip improves gossip efficiency by a $\\sqrt{n}$ factor, by exploiting geographic information to enable multi-hop long distance communications. In this paper we prove that a variation of geographic gossip that averages along routed paths, improves efficiency by an additional $\\sqrt{n}$ factor and is order optimal ($O(n)$ messages) for grids and random geometric graphs. We develop a general technique (travel agency method) based on Markov chain mixing time inequalities, which can give bounds on the performance of randomized message-passing algorithms operating...
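The core idea of averaging along routed paths can be illustrated with a toy simulation (our own sketch, not the paper's algorithm: paths here are just contiguous runs on a ring rather than geographically routed paths). All nodes on a path replace their values with the path average, which moves more information per message exchange than a single pairwise gossip while still preserving the global sum.

```python
import random

def path_average(values, rounds, path_len=3, seed=1):
    """Toy path-averaging gossip: average over a random contiguous path on a ring."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(rounds):
        start = rng.randrange(n)
        idx = [(start + k) % n for k in range(path_len)]
        avg = sum(x[i] for i in idx) / path_len
        for i in idx:
            x[i] = avg                       # the global sum is preserved
    return x

vals = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
out = path_average(vals, rounds=500)         # converges toward the mean 3.5
```

The order-optimality result above comes from analyzing how quickly such path updates mix compared to single-edge updates, via Markov chain mixing time inequalities.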

  2. Ultra-low noise miniaturized neural amplifier with hardware averaging

    Science.gov (United States)

    Dweiri, Yazan M.; Eggers, Thomas; McCallum, Grant; Durand, Dominique M.

    2015-08-01

    Objective. Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small, so the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Approach. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. Main results. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with the FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. Significance. These results demonstrate the efficacy of hardware averaging in reducing noise for neural recording with cuff electrodes, and the approach can accommodate the high source impedances associated with miniaturized contacts and high channel counts in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
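The 1/√N noise reduction quoted above is the standard behavior of averaging N uncorrelated noise sources. The following sketch demonstrates it with purely synthetic Gaussian noise; no real amplifier model (and none of the paper's circuit details) is implied.

```python
import random, math

def rms(xs):
    """Root-mean-square of a sequence of samples."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

rng = random.Random(42)
samples = 20000

# Noise seen by a single front-end (unit-variance Gaussian, synthetic)
single = [rng.gauss(0.0, 1.0) for _ in range(samples)]

# "Hardware averaging": N independent front-ends averaged sample by sample
N = 8
averaged = [sum(rng.gauss(0.0, 1.0) for _ in range(N)) / N for _ in range(samples)]

ratio = rms(averaged) / rms(single)   # expected near 1/sqrt(8) ≈ 0.354
```

With correlated noise (e.g. a common high source impedance), the reduction is weaker than 1/√N, which is consistent with the "or less, depending on the source resistance" caveat in the abstract.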

  3. Counting primes, groups, and manifolds.

    Science.gov (United States)

    Goldfeld, Dorian; Lubotzky, Alexander; Nikolov, Nikolay; Pyber, László

    2004-09-14

    Let $\Lambda = \mathrm{SL}_2(\mathbb{Z})$ be the modular group and let $c_n(\Lambda)$ be the number of congruence subgroups of $\Lambda$ of index at most $n$. We prove that $\lim_{n \to \infty} \log c_n(\Lambda) / ((\log n)^2/\log\log n) = (3-2\sqrt{2})/4$. The proof is based on the Bombieri-Vinogradov "Riemann hypothesis on the average" and on the solution of a new type of extremal problem in combinatorial number theory. Similar surprisingly sharp estimates are obtained for the subgroup growth of lattices in higher-rank semisimple Lie groups. If $G$ is such a Lie group and $\Gamma$ is an irreducible lattice of $G$, it turns out that the subgroup growth of $\Gamma$ is independent of the lattice and depends only on the Lie type of the direct factors of $G$. It can be calculated easily from the root system. The most general case of this result relies on the Generalized Riemann Hypothesis, but many special cases are unconditional. The proofs use techniques from number theory, algebraic groups, finite group theory, and combinatorics.

  4. Spreading of oil and the concept of average oil thickness

    Energy Technology Data Exchange (ETDEWEB)

    Goodman, R. [Innovative Ventures Ltd., Cochrane, AB (Canada); Quintero-Marmol, A.M. [Pemex E and P, Campeche (Mexico); Bannerman, K. [Radarsat International, Vancouver, BC (Canada); Stevenson, G. [Calgary Univ., AB (Canada)

    2004-07-01

    The area of an oil slick on water can be readily measured using simple techniques, ranging from visual observations to satellite-based radar systems. However, it is necessary to know the volume of spilled oil in order to determine the environmental impacts and the best response strategy. The volume of oil must be known to determine spill quantity, response effectiveness and weathering rates. The relationship between volume and area is the average thickness of the oil over the spill area. This paper presents the results of several experiments conducted in the Gulf of Mexico that determined whether the average thickness of the oil is a characteristic of a specific crude oil, independent of spill size. Calculating the amount of oil on water from the area of the slick requires information on the oil thickness, the inhomogeneity of the oil thickness, and the oil-to-water ratio in the slick if it is emulsified. Experimental data revealed that an oil slick stops spreading very quickly after the application of oil. After the equilibrium thickness has been established, the slick is very sensitive to disturbances on the water surface, such as wave action, which causes the oil circle to dissipate into several small irregular shapes. It was noted that the spill source and oceanographic conditions are both critical to the final shape of the spill. 31 refs., 2 tabs., 8 figs.

  5. Monthly streamflow forecasting with auto-regressive integrated moving average

    Science.gov (United States)

    Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani

    2017-09-01

    Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering was performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang was gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model are then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.
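The auto-regressive core of the ARIMA family can be illustrated with a minimal sketch. This is not the authors' R workflow (which fits full ARIMA models with SSA pre-processing); it only fits a hypothetical AR(1) model x_t = c + φ·x_{t−1} by ordinary least squares on toy data and makes a one-step forecast.

```python
def fit_ar1(series):
    """Fit x_t = c + phi * x_{t-1} by ordinary least squares."""
    xs = series[:-1]          # regressors: lagged values
    ys = series[1:]           # targets: next values
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    phi = cov / var
    c = my - phi * mx
    return c, phi

# Toy series generated exactly by x_t = 1.0 + 0.5 * x_{t-1}
series = [1.0, 1.5, 1.75, 1.875, 1.9375]
c, phi = fit_ar1(series)
forecast = c + phi * series[-1]   # one-step-ahead prediction
```

Real streamflow series additionally need the differencing (I) and moving-average (MA) parts, plus error metrics such as RMSE and MAE on a held-out test set, as described above.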

  6. Face averages enhance user recognition for smartphone security.

    Science.gov (United States)

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
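A simple instance of the 'face-average' representation is a pixel-wise mean over several images of the same person. The sketch below (tiny invented grayscale arrays; real systems would first align and crop the faces) shows only the averaging step, not the smartphone's matching algorithm.

```python
def face_average(images):
    """Pixel-wise mean of equally sized grayscale images (2-D lists)."""
    h, w = len(images[0]), len(images[0][0])
    n = len(images)
    return [[sum(img[i][j] for img in images) / n for j in range(w)]
            for i in range(h)]

# Three toy "aligned face images" of one person
imgs = [[[100, 110], [120, 130]],
        [[104, 114], [116, 126]],
        [[96, 106], [124, 134]]]
avg = face_average(imgs)
```

Averaging suppresses image-specific variation (lighting, expression, pose residue) while retaining features stable across images, which is the theoretical motivation for enrolling an average rather than a single photo.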

  7. Geometric covering arguments and ergodic theorems for free groups

    CERN Document Server

    Bowen, Lewis

    2009-01-01

    We present a new approach to the proof of ergodic theorems for actions of free groups based on geometric covering and asymptotic invariance arguments. Our approach can be viewed as a direct generalization of the classical geometric covering and asymptotic invariance arguments used in the ergodic theory of amenable groups. We use this approach to generalize the existing maximal and pointwise ergodic theorems for free group actions to a large class of geometric averages which were not accessible by previous techniques. Some applications of our approach to other groups and other problems in ergodic theory are also briefly discussed.

  8. Hearing Office Average Processing Time Ranking Report, February 2016

    Data.gov (United States)

    Social Security Administration — A ranking of ODAR hearing offices by the average number of hearings dispositions per ALJ per day. The average shown will be a combined average for all ALJs working...

  9. ANTINOMY OF THE MODERN AVERAGE PROFESSIONAL EDUCATION

    Directory of Open Access Journals (Sweden)

    A. A. Listvin

    2017-01-01

    of ways of their solution and options for a valid upgrade of the SPE system that meets the requirements of the economy. The inefficiency of the concept of single-level SPE, and its non-competitiveness against the background of the development of applied bachelor degrees in higher education, is shown. It is proposed to differentiate basic-level programs for training skilled workers from advanced-level programs, built on the basic level, for training middle-level specialists (technicians, technologists), in order to form a single system of continuous professional training and ensure the effective functioning of regional systems of professional education. Such a system will help to eliminate disproportions in the triad «worker – technician – engineer» and will increase the quality of professional education. Furthermore, the need for polyprofessional education is indicated, wherein integrated educational structures are required, differing in the degree to which split-level educational institutions are formed on the basis of network interaction, convergence and integration. According to the author, two types of SPE organizations should be developed in the regions: territorial multi-profile colleges with flexible variable programs, and organizations realizing educational programs of applied qualifications in specific industries (metallurgical, chemical, construction, etc.) according to the specifics of the economy of territorial subjects. Practical significance. The results of the research can be useful to specialists in education management, to heads and pedagogical staff of SPE institutions, and also to representatives of regional administrations and employers in organizing a multilevel network system for training skilled workers and middle-level specialists.

  10. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  11. The monthly-averaged and yearly-averaged cosine effect factor of a heliostat field

    Energy Technology Data Exchange (ETDEWEB)

    Al-Rabghi, O.M.; Elsayed, M.M. (King Abdulaziz Univ., Jeddah (Saudi Arabia). Dept. of Thermal Engineering)

    1992-01-01

    Calculations are carried out to determine the dependence of the monthly-averaged and yearly-averaged daily cosine effect factor on the pertinent parameters. The results are plotted on charts for each month and for the full year. These results cover latitude angles between 0 and 45°N, for fields with radii of up to 50 tower heights. In addition, the results are expressed as mathematical correlations to facilitate their use in computer applications. A procedure is outlined for using the present results to lay out a preliminary heliostat field, and to predict the rated MW_th reflected by the heliostat field during a period of a month, several months, or a year. (author)
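The geometry behind the cosine effect factor can be sketched in a few lines: a heliostat's mirror normal must bisect the directions to the sun and to the receiver, so the instantaneous factor equals cos(θ/2), where θ is the angle between those two directions. The sketch below is a minimal illustration of that geometry, not the authors' correlations; all function and variable names are invented.

```python
import numpy as np

def cosine_effect(sun_dir, heliostat_pos, tower_top):
    """Instantaneous cosine effect factor of a single heliostat.

    The mirror normal must bisect the unit vectors pointing to the sun and
    to the receiver, so the factor equals cos(theta/2), where theta is the
    angle between those two directions.
    """
    s = sun_dir / np.linalg.norm(sun_dir)        # unit vector toward the sun
    t = tower_top - heliostat_pos
    t = t / np.linalg.norm(t)                    # unit vector toward the receiver
    n = (s + t) / np.linalg.norm(s + t)          # mirror normal (bisector)
    return float(np.dot(s, n))                   # = cos(theta/2)

# Monthly or yearly averages follow by averaging this factor over the
# sun positions of the period in question.
```

For a sun directly overhead and a receiver seen at 45° from the vertical, the factor is cos(22.5°) ≈ 0.924.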

  12. Lagrangian averages, averaged Lagrangians, and the mean effects of fluctuations in fluid dynamics.

    Science.gov (United States)

    Holm, Darryl D.

    2002-06-01

    We begin by placing the generalized Lagrangian mean (GLM) equations for a compressible adiabatic fluid into the Euler-Poincare (EP) variational framework of fluid dynamics, for an averaged Lagrangian. This is the Lagrangian averaged Euler-Poincare (LAEP) theorem. Next, we derive a set of approximate small amplitude GLM equations (glm equations) at second order in the fluctuating displacement of a Lagrangian trajectory from its mean position. These equations express the linear and nonlinear back-reaction effects on the Eulerian mean fluid quantities by the fluctuating displacements of the Lagrangian trajectories in terms of their Eulerian second moments. The derivation of the glm equations uses the linearized relations between Eulerian and Lagrangian fluctuations, in the tradition of Lagrangian stability analysis for fluids. The glm derivation also uses the method of averaged Lagrangians, in the tradition of wave, mean flow interaction. Next, the new glm EP motion equations for incompressible ideal fluids are compared with the Euler-alpha turbulence closure equations. An alpha model is a GLM (or glm) fluid theory with a Taylor hypothesis closure. Such closures are based on the linearized fluctuation relations that determine the dynamics of the Lagrangian statistical quantities in the Euler-alpha equations. Thus, by using the LAEP theorem, we bridge between the GLM equations and the Euler-alpha closure equations, through the small-amplitude glm approximation in the EP variational framework. We conclude by highlighting a new application of the GLM, glm, and alpha-model results for Lagrangian averaged ideal magnetohydrodynamics. (c) 2002 American Institute of Physics.
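For orientation, the generalized Lagrangian mean (GLM) that this abstract builds on is defined by averaging a field along the fluctuating particle positions. This is the standard Andrews–McIntyre definition, quoted here for the reader's convenience rather than taken from the paper:

```latex
% Generalized Lagrangian mean: average of a field evaluated at the
% displaced position x + xi(x,t), and the associated Lagrangian disturbance.
\bar{\mathbf{u}}^{L}(\mathbf{x},t)
  = \overline{\mathbf{u}\bigl(\mathbf{x}+\boldsymbol{\xi}(\mathbf{x},t),\,t\bigr)},
\qquad
\mathbf{u}^{\ell}
  = \mathbf{u}\bigl(\mathbf{x}+\boldsymbol{\xi},t\bigr) - \bar{\mathbf{u}}^{L}.
```

Here ξ is the fluctuating displacement of a Lagrangian trajectory from its mean position; its Eulerian second moments are exactly the quantities that enter the small-amplitude glm equations at second order.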

  13. Experimental techniques; Techniques experimentales

    Energy Technology Data Exchange (ETDEWEB)

    Roussel-Chomaz, P. [GANIL CNRS/IN2P3, CEA/DSM, 14 - Caen (France)

    2007-07-01

    This lecture presents the experimental techniques developed in the last 10 or 15 years to perform a new class of experiments with exotic nuclei, in which the reactions induced by these nuclei give access to information on their structure. A brief review of secondary-beam production methods is given, with some examples of facilities in operation or under project. The important recent developments in cryogenic targets are presented. The different detection systems are reviewed, both the beam detectors placed before the target and the many kinds of detectors needed to detect all outgoing particles after the reaction: magnetic spectrometers for the heavy fragments, detection systems for the target recoil nucleus, and γ detectors. Finally, several typical examples of experiments are detailed, in order to illustrate the use of each detector either alone or in coincidence with others. (author)

  14. Analytic continuation by averaging Padé approximants

    Science.gov (United States)

    Schött, Johan; Locht, Inka L. M.; Lundin, Elin; Grånäs, Oscar; Eriksson, Olle; Di Marco, Igor

    2016-02-01

    The ill-posed analytic continuation problem for Green's functions and self-energies is investigated by revisiting the Padé approximants technique. We propose to remedy the well-known problems of the Padé approximants by performing an average of several continuations, obtained by varying the number of fitted input points and Padé coefficients independently. The suggested approach is then applied to several test cases, including Sm and Pr atomic self-energies, the Green's functions of the Hubbard model for a Bethe lattice and of the Haldane model for a nanoribbon, as well as two special test functions. The sensitivity to numerical noise and the dependence on the precision of the numerical libraries are analyzed in detail. The present approach is compared to a number of other techniques, i.e., the nonnegative least-squares method, the nonnegative Tikhonov method, and the maximum entropy method, and is shown to perform well for the chosen test cases. This conclusion holds even when the noise on the input data is increased to reach values typical for quantum Monte Carlo simulations. The ability of the algorithm to resolve fine structures is finally illustrated for two relevant test functions.
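The averaging idea above can be sketched with a Thiele continued-fraction Padé interpolant: build several continuations from different numbers of input points and average them. This is a simplified stand-in for the paper's scheme (which varies fitted points and Padé coefficients independently); the function names are mine, and the Bethe-lattice semicircular Green's function serves as test data.

```python
import numpy as np

def thiele_pade(z, f):
    """Continued-fraction (Thiele) interpolant through the points (z_i, f_i)."""
    z = np.asarray(z, dtype=complex)
    n = len(z)
    g = np.zeros((n, n), dtype=complex)
    g[0, :] = f
    for p in range(1, n):
        # Inverse divided differences of the Thiele recursion.
        g[p, p:] = (g[p-1, p-1] - g[p-1, p:]) / ((z[p:] - z[p-1]) * g[p-1, p:])
    a = np.diag(g)

    def evaluate(w):
        # Evaluate the continued fraction a0/(1 + a1(w-z0)/(1 + ...)) bottom-up.
        val = np.ones_like(np.asarray(w, dtype=complex))
        for p in range(n - 1, 0, -1):
            val = 1.0 + a[p] * (w - z[p-1]) / val
        return a[0] / val
    return evaluate

def averaged_continuation(z, f, sizes):
    """Average several Pade continuations built from different numbers of
    input points -- a simplified version of the paper's independent variation
    of fitted points and coefficients."""
    conts = [thiele_pade(z[:m], f[:m]) for m in sizes]
    return lambda w: np.mean([c(w) for c in conts], axis=0)
```

As a sanity check, the averaged continuation of the semicircular Green's function G(z) = (z − √(z²−4))/2 sampled on Matsubara-like points reproduces G between the sample points and keeps Im G < 0 in the upper half-plane.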

  15. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    Science.gov (United States)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, antennas must be designed to meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. Meeting such specifications over a frequency range is a challenge when resonant elements such as waveguide feed slots are used. In addition to their inherently narrow-band performance, the problem is exacerbated by modeling errors and manufacturing tolerances. A design methodology was needed to solve the problem. An iterative design procedure was developed, starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by a full-wave method-of-moments solution of the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced into the aperture distribution by modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, the array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. A separable Taylor distribution with nbar=4 and a 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by a genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E
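The Monte Carlo tolerance analysis described above can be illustrated on a much simpler stand-in: a uniform linear array (not the Taylor-weighted planar design of the record), with random amplitude and phase errors applied to each element weight. The names and error magnitudes are invented; the point is only that random errors raise the average sidelobe floor and thus eat into a design margin.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                                        # elements, half-wavelength spacing
n = np.arange(N)
u = np.linspace(0.15, 1.0, 400)               # sidelobe region, beyond first null ~2/N
steer = np.exp(1j * np.pi * np.outer(u, n))   # array-factor phase terms

def mean_sidelobe_power(amp_sigma, phase_sigma, trials=500):
    """Average sidelobe power when each element weight is perturbed by random
    amplitude and phase errors -- a crude stand-in for tolerance analysis."""
    total = 0.0
    for _ in range(trials):
        w = (1.0 + amp_sigma * rng.standard_normal(N)) * \
            np.exp(1j * phase_sigma * rng.standard_normal(N))
        af = steer @ w / N                    # normalized array factor
        total += np.mean(np.abs(af) ** 2)
    return total / trials

clean = mean_sidelobe_power(0.0, 0.0, trials=1)
perturbed = mean_sidelobe_power(0.1, 0.1)
# The perturbed average sidelobe power exceeds the error-free one.
```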

  16. Application of group psychological counseling techniques in university student career planning

    Institute of Scientific and Technical Information of China (English)

    周云; 李海黔; 李黛; 罗辑

    2013-01-01

    Career planning is an important part of employment guidance for university students; it is of great significance for students to determine their direction of development on the basis of accurate self-knowledge. Applying group psychological counseling techniques to university student career planning can achieve twice the result with half the effort, and a well-constructed group counseling scheme can effectively guide students in planning their careers.

  17. Use of regression techniques in the evaluation, in beef cattle, of the efficiency of feed conversion into product: comparison between experimental groups

    Directory of Open Access Journals (Sweden)

    Edenio Detmann

    2012-01-01

    Full Text Available The objective of this study was to propose and discuss a method, based on regression techniques, for comparing the efficiency of feed conversion into product between two experimental groups of beef cattle. The mathematical and statistical procedures were developed from a databank of dry matter intake and average daily gain measurements on 380 purebred or crossbred zebu animals from 15 experiments in the BR-CORTE nutritional system database. Two groups were selected: Nellore animals (n = 156) and F1 European × Nellore animals (n = 139). Two approaches were proposed, based on fitting linear and nonlinear regression models with average daily gain as the independent variable and dry matter intake as the dependent variable. Feed use by the animal was partitioned into maintenance demand and true efficiency of conversion into product, and differences between groups were evaluated using dummy variables. The Akaike information criterion was proposed as the tool for choosing between the linear and nonlinear models. The comparative evaluation of feed-into-product conversion efficiency between two experimental groups of beef cattle by regression techniques allows, where pertinent, stratification of the groups according to the efficiency with which the diet is used to meet maintenance and production demands.
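The dummy-variable comparison plus AIC-based model choice described in this record can be sketched with synthetic data. No BR-CORTE values are used; the coefficients, noise level, and variable names below are all invented, and only the linear case is shown.

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_ols(X, y):
    """Ordinary least squares; returns coefficients and residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return beta, rss

def aic(rss, n, k):
    """Akaike information criterion for a Gaussian regression with k parameters."""
    return n * np.log(rss / n) + 2 * k

# Simulated dry matter intake (DMI) vs. average daily gain (ADG) for two
# genetic groups: same 'maintenance' intercept, different conversion slopes.
n_per = 100
adg = rng.uniform(0.5, 1.5, 2 * n_per)
group = np.repeat([0.0, 1.0], n_per)          # dummy: 0 = group A, 1 = group B
dmi = 2.0 + 4.0 * adg + 1.0 * group * adg + 0.1 * rng.standard_normal(2 * n_per)

# Full model: intercept, ADG, group dummy, group x ADG interaction.
X_full = np.column_stack([np.ones_like(adg), adg, group, group * adg])
# Reduced model: both groups pooled.
X_red = np.column_stack([np.ones_like(adg), adg])

beta_full, rss_full = fit_ols(X_full, dmi)
beta_red, rss_red = fit_ols(X_red, dmi)
aic_full = aic(rss_full, len(dmi), X_full.shape[1] + 1)  # +1 for error variance
aic_red = aic(rss_red, len(dmi), X_red.shape[1] + 1)
# A lower AIC for the full model indicates the groups convert feed differently.
```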

  18. A novel approach for the averaging of magnetocardiographically recorded heart beats

    Science.gov (United States)

    Di Pietro Paolo, D.; Müller, H.-P.; Erné, S. N.

    2005-05-01

    Performing signal averaging in an efficient and correct way is indispensable since it is a prerequisite for a broad variety of magnetocardiographic (MCG) analysis methods. One of the most common procedures for performing the signal averaging to increase the signal-to-noise ratio (SNR) in magnetocardiography, as well as in electrocardiography (ECG), is done by means of spatial or temporal techniques. In this paper, an improvement of the temporal averaging method is presented. In order to obtain an accurate signal detection, temporal alignment methods and objective classification criteria are developed. The processing technique based on hierarchical clustering is introduced to take into account the non-stationarity of the noise and, to some extent, the biological variability of the signals reaching the optimum SNR. The method implemented is especially designed to run fast and does not require any interaction from the operator. The averaging procedure described in this work is applied to the averaging of MCG data as an example, but with its intrinsic properties it can also be applied to the averaging of ECG recording, averaging of body-surface-potential mapping (BSPM) and averaging of magnetoencephalographic (MEG) or electroencephalographic (EEG) signals.
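The core of temporal averaging with alignment can be sketched in a few lines: estimate each beat's lag by cross-correlation against a reference, shift it back, and average. This toy omits the paper's hierarchical clustering and objective classification steps, and it aligns against a known clean template (in practice the reference would itself be an initial average); all names and signal parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def align_and_average(beats, reference):
    """Temporal averaging: align each beat to a reference by the lag that
    maximizes the cross-correlation, then average the aligned beats."""
    aligned = []
    for b in beats:
        xcorr = np.correlate(b, reference, mode="full")
        lag = int(np.argmax(xcorr)) - (len(reference) - 1)
        aligned.append(np.roll(b, -lag))
    return np.mean(aligned, axis=0)

# Synthetic 'beat': a smooth pulse, jittered in time and buried in noise.
t = np.linspace(-1, 1, 400)
template = np.exp(-((t / 0.1) ** 2))
beats = [np.roll(template, rng.integers(-10, 11)) + 0.2 * rng.standard_normal(t.size)
         for _ in range(50)]
avg = align_and_average(beats, template)
# Averaging 50 aligned beats reduces the noise by roughly sqrt(50).
```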

  19. Secure Obfuscation for Encrypted Group Signatures.

    Directory of Open Access Journals (Sweden)

    Yang Shi

    Full Text Available In recent years, group signature techniques have been widely used in constructing privacy-preserving security schemes for various information systems. However, conventional techniques keep the schemes secure only in normal black-box attack contexts. In other words, these schemes suppose that (the implementation of) the group signature generation algorithm is running on a platform that is perfectly protected from various intrusions and attacks. As a complement to existing studies, how to generate group signatures securely in a more austere security context, such as a white-box attack context, is studied in this paper. We use obfuscation as an approach to acquire a higher level of security. Concretely, we introduce a special group signature functionality, an encrypted group signature, and then provide an obfuscator for the proposed functionality. A series of new security notions for both the functionality and its obfuscator is introduced. The most important one is the average-case secure virtual black-box property w.r.t. dependent oracles and restricted dependent oracles, which captures the requirement of protecting the output of the proposed obfuscator against collusion attacks from group members. The security notions fit many other specialized obfuscators, such as obfuscators for identity-based signatures, threshold signatures, and key-insulated signatures. Finally, the correctness and security of the proposed obfuscator have been proven. Thereby, the obfuscated encrypted group signature functionality can be applied to variants of privacy-preserving security schemes and enhance the security level of these schemes.

  20. Assembly Test of Elastic Averaging Technique to Improve Mechanical Alignment for Accelerating Structure Assemblies in CLIC

    CERN Document Server

    Huopana, J

    2010-01-01

    The CLIC (Compact LInear Collider) is being studied at CERN as a potential multi-TeV e+e- collider [1]. The manufacturing and assembly tolerances of the required RF components are important for the final efficiency and for the operation of CLIC. The proper function of an accelerating structure is very sensitive to errors in the shape and location of the accelerating cavity, which raises considerable issues for mechanical design and manufacturing. Currently the accelerating structures use a disk design. Alternatively, the accelerating assembly can be built from quadrants, which favours mass manufacturing: the functional shape inside the accelerating structure remains the same, and a single assembly uses fewer parts. The alignment of these quadrants was previously made kinematic by using steel pins or spheres to align the pieces together. This proved to be quite a tedious and time-consuming method of assembly. To limit the number of different error sources, a meth...

  1. 40 CFR 1033.710 - Averaging emission credits.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Averaging emission credits. 1033.710... Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. You may average emission credits only as allowed by § 1033.740. (b) You may certify one or more engine...

  2. 7 CFR 51.577 - Average midrib length.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average midrib length. 51.577 Section 51.577... STANDARDS) United States Standards for Celery Definitions § 51.577 Average midrib length. Average midrib length means the average length of all the branches in the outer whorl measured from the point...

  3. 7 CFR 760.640 - National average market price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average... average quality loss factors that are reflected in the market by county or part of a county. (c)...

  4. Average gluon and quark jet multiplicities at higher orders

    Energy Technology Data Exchange (ETDEWEB)

    Bolzoni, Paolo; Kniehl, Bernd A. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Kotikov, Anatoly V. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Joint Institute of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics

    2013-05-15

    We develop a new formalism for computing and including both the perturbative and nonperturbative QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new method is motivated by recent progress in timelike small-x resummation obtained in the MS factorization scheme. We obtain next-to-next-to-leading-logarithmic (NNLL) resummed expressions, which represent generalizations of previous analytic results. Our expressions depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets that are compatible with regard to the jet algorithms demonstrates by its goodness how our results solve a longstanding problem of QCD. We show that the statistical and theoretical uncertainties both do not exceed 5% for scales above 10 GeV. We finally propose to use the jet multiplicity data as a new way to extract the strong-coupling constant. Including all the available theoretical input within our approach, we obtain α_s^(5)(M_Z) = 0.1199 ± 0.0026 in the MS scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of ln(x) terms through the NNLL level and of ln Q² terms by the renormalization group, in excellent agreement with the present world average.

  5. Colorectal Cancer Screening in Average Risk Populations: Evidence Summary

    Directory of Open Access Journals (Sweden)

    Jill Tinmouth

    2016-01-01

    Full Text Available Introduction. The objectives of this systematic review were to evaluate the evidence for different CRC screening tests and to determine the most appropriate ages of initiation and cessation for CRC screening and the most appropriate screening intervals for selected CRC screening tests in people at average risk for CRC. Methods. Electronic databases were searched for studies that addressed the research objectives. Meta-analyses were conducted with clinically homogenous trials. A working group reviewed the evidence to develop conclusions. Results. Thirty RCTs and 29 observational studies were included. Flexible sigmoidoscopy (FS) prevented CRC and led to the largest reduction in CRC mortality, with a smaller but significant reduction in CRC mortality with the use of guaiac fecal occult blood tests (gFOBTs). There was insufficient or low-quality evidence to support the use of other screening tests, including colonoscopy, as well as changing the ages of initiation and cessation for CRC screening with gFOBTs in Ontario. Either annual or biennial screening using gFOBT reduces CRC-related mortality. Conclusion. The evidentiary base supports the use of FS or FOBT (either annual or biennial) to screen patients at average risk for CRC. This work will guide the development of the provincial CRC screening program.

  6. Kinetic energy equations for the average-passage equation system

    Science.gov (United States)

    Johnson, Richard W.; Adamczyk, John J.

    1989-01-01

    Important kinetic energy equations derived from the average-passage equation sets are documented, with a view to their interrelationships. These kinetic equations may be used for closing the average-passage equations. The turbulent kinetic energy transport equation used is formed by subtracting the mean kinetic energy equation from the averaged total instantaneous kinetic energy equation. The aperiodic kinetic energy equation, averaged steady kinetic energy equation, averaged unsteady kinetic energy equation, and periodic kinetic energy equation, are also treated.
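The subtraction described above rests on the standard Reynolds-type decomposition of the averaged kinetic energy. Schematically, in standard notation (not copied from the report):

```latex
u_i = \overline{u}_i + u_i', \qquad
\overline{\tfrac{1}{2}\,u_i u_i}
  \;=\; \tfrac{1}{2}\,\overline{u}_i\,\overline{u}_i
  \;+\; \underbrace{\tfrac{1}{2}\,\overline{u_i' u_i'}}_{k}.
```

Subtracting the mean-flow kinetic energy equation from the averaged total instantaneous kinetic energy equation therefore leaves a transport equation for the turbulent kinetic energy k, which is the construction the abstract refers to.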

  8. Characterizing individual painDETECT symptoms by average pain severity

    Directory of Open Access Journals (Sweden)

    Sadosky A

    2016-07-01

    Full Text Available Alesia Sadosky,1 Vijaya Koduru,2 E Jay Bienen,3 Joseph C Cappelleri4 1Pfizer Inc, New York, NY, USA; 2Eliassen Group, New London, CT, USA; 3Outcomes Research Consultant, New York, NY, USA; 4Pfizer Inc, Groton, CT, USA Background: painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain-course pattern item, and a pain-radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods: Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results: A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness) for mild vs moderate pain and the highest probability was 76.4% (on cold/heat) for mild vs severe pain. The pain-radiation item was significant (P<0.05) and consistent with pain symptoms, as well as with total scores for both painDETECT versions; only the pain-course item did not differ. Conclusion: painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner. Pain
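The ridit-style probability reported above (the chance that a random subject from one severity group has a more favorable item score than one from another, with ties counted as half) reduces to a Mann-Whitney-type calculation. The sketch below is a generic illustration with invented scores, not the study's data or exact procedure.

```python
import numpy as np

def prob_better(group_a, group_b):
    """Probability that a randomly chosen subject from group_a has a lower
    (more favorable) item score than one from group_b, counting ties as 1/2.
    This mirrors a ridit / Mann-Whitney style comparison."""
    a = np.asarray(group_a)[:, None]
    b = np.asarray(group_b)[None, :]
    wins = np.sum(a < b) + 0.5 * np.sum(a == b)
    return wins / (a.size * b.size)
```

For example, with item scores [1, 2] in a mild group and [2, 3] in a severe group, three of the four pairs favor the mild subject and one is a tie, so the probability is 3.5/4 = 0.875.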

  9. Average Consensus Problems in Networks of Agents with Fixed and Switching Topology and Unknown Control Direction

    Directory of Open Access Journals (Sweden)

    Caixian Sun

    2014-01-01

    Full Text Available This paper is devoted to average consensus problems in directed networks of agents with unknown control direction. By using Nussbaum function techniques and the Laplacian matrix, novel average consensus protocols are designed for multiagent systems with unknown control direction, for directed networks with both fixed and switching topology. In the case of switching topology, the disagreement vector is utilized. Finally, a simulation is provided to demonstrate the effectiveness of our results.
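The Nussbaum-gain protocols of this paper are beyond a short snippet, but the baseline they extend is the textbook average consensus iteration x ← x − εLx, which on a connected undirected graph drives every agent to the average of the initial states for ε below 1/(max degree). The sketch below shows only that baseline; the graph and names are invented.

```python
import numpy as np

def average_consensus(x0, adjacency, eps, steps=200):
    """Discrete-time average consensus x <- x - eps * L x on an undirected
    graph. For connected graphs and eps < 1/(max degree), all states
    converge to the average of the initial values."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A           # graph Laplacian
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x = x - eps * (L @ x)
    return x

# Path graph on four agents; initial states average to 4.0.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
x = average_consensus([1.0, 3.0, 5.0, 7.0], A, eps=0.25)
```

Because L is symmetric here, the sum of the states is preserved exactly at every step, which is what makes the consensus value the initial average.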

  10. Transferability of hydrological models and ensemble averaging methods between contrasting climatic periods

    Science.gov (United States)

    Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor

    2016-10-01

    Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
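Granger-Ramanathan averaging, recommended above, simply fits least-squares weights for the ensemble members against the observations. The sketch below uses the unconstrained no-intercept variant on synthetic data (the study compares several GRA variants on real catchments); all series and names are invented. Note that on the calibration data GRA can never do worse than the simple arithmetic mean, since the equal-weight combination is inside the set least squares searches over.

```python
import numpy as np

rng = np.random.default_rng(1)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def gra_weights(member_sims, obs):
    """Granger-Ramanathan averaging: least-squares weights for the ensemble
    members (unconstrained variant, no intercept)."""
    X = np.column_stack(member_sims)
    w, *_ = np.linalg.lstsq(X, obs, rcond=None)
    return w

# Synthetic 'streamflow' observations and three biased model members.
obs = 10 + 3 * np.sin(np.linspace(0, 12, 300)) + rng.standard_normal(300)
members = [0.7 * obs + 2 + rng.standard_normal(300),
           1.3 * obs - 4 + rng.standard_normal(300),
           obs + 3 + 0.5 * rng.standard_normal(300)]

w = gra_weights(members, obs)
gra = np.column_stack(members) @ w
sam = np.mean(members, axis=0)               # simple arithmetic mean (SAM)
```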

  11. Liver resection with a new technique

    Directory of Open Access Journals (Sweden)

    Mustafa Turan

    2014-06-01

    Full Text Available Aim. In this retrospective study, we reviewed the patients in whom we had used the radio-frequency (RF) technique in liver resection procedures. Methods. The indications for liver resection were malignant tumors in 17 patients (elective cases) and trauma in 6 patients (emergency cases). Results. Left lateral segmentectomy (segments II-III) was done in 9 patients. Segment VI resection was performed in 4 patients. Non-anatomical (wedge) resections were done in 10 patients. The average time necessary for transection of the liver parenchyma was 34±5 min in the elective group and 37±5 min in the emergency group. Average blood loss was 32±5 mL in the elective group and 89±8 mL in the emergency group. In the postoperative period, we did not see any subcapsular or perihepatic hematoma responsible for delayed hemorrhage. No signs of infectious disease or abscesses were observed. Conclusion. This RF-assisted technique is effective for bloodless liver resections.

  12. Measurement properties of painDETECT by average pain severity

    Directory of Open Access Journals (Sweden)

    Cappelleri JC

    2014-11-01

    Full Text Available Joseph C Cappelleri,1 E Jay Bienen,2 Vijaya Koduru,3 Alesia Sadosky4 1Pfizer, Groton, CT, USA; 2Outcomes research consultant, New York, NY, USA; 3Eliassen Group, New London, CT, USA; 4Pfizer, New York, NY, USA Background: Since the burden of neuropathic pain (NeP) increases with pain severity, it is important to characterize and quantify pain severity when identifying NeP patients. This study evaluated whether painDETECT, a screening questionnaire to identify patients with NeP, can distinguish pain severity. Materials and methods: Subjects (n=614, 55.4% male, 71.8% white, mean age 55.5 years) with confirmed NeP were identified during office visits to US community-based physicians. The Brief Pain Inventory – Short Form stratified subjects by mild (score 0–3, n=110), moderate (score 4–6, n=297), and severe (score 7–10, n=207) average pain. Scores on the nine-item painDETECT (seven pain-symptom items, one pain-course item, one pain-irradiation item) range from -1 to 38 (worst NeP); the seven-item painDETECT scores (only pain symptoms) range from 0 to 35. The ability of painDETECT to discriminate average pain-severity levels, based on the average pain item from the Brief Pain Inventory – Short Form (0–10 scale), was evaluated using analysis of variance or covariance models to obtain unadjusted and adjusted (age, sex, race, ethnicity, time since NeP diagnosis, number of comorbidities) mean painDETECT scores. Cumulative distribution functions of painDETECT scores by average pain severity were compared (Kolmogorov–Smirnov test). Cronbach's alpha assessed internal consistency reliability. Results: Unadjusted mean scores were 15.2 for mild, 19.8 for moderate, and 24.0 for severe pain for the nine items, and 14.3, 18.6, and 22.7, respectively, for the seven items. Adjusted nine-item mean scores for mild, moderate, and severe pain were 17.3, 21.3, and 25.3, respectively; adjusted seven-item mean scores were 16.4, 20.1, and 24.0, respectively. All pair
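Cronbach's alpha, used above to assess internal consistency, has a short closed form: with k items, it is k/(k−1) times one minus the ratio of the summed item variances to the variance of the total score. The sketch below computes it for simulated correlated items; the data and names are invented, not the study's.

```python
import numpy as np

rng = np.random.default_rng(3)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Seven correlated 'symptom items' driven by one latent severity factor.
latent = rng.standard_normal(500)
items = latent[:, None] + 0.5 * rng.standard_normal((500, 7))
alpha = cronbach_alpha(items)
# Strongly correlated items yield alpha close to 1.
```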

  13. The focus group technique as a method of evaluating teenager knowledge of oral health

    Directory of Open Access Journals (Sweden)

    Kléryson Martins Soares Francisco

    2009-02-01

    Full Text Available This study aims to analyze, by means of the focus group technique, adolescents' understanding of oral health. The research was conducted at three public schools in the city of Araçatuba, São Paulo State, Brazil, with ten students in each. To conduct the focus groups, the following words, which had shown high error rates in oral health questionnaires, were addressed: oral health; bacterial plaque; permanent tooth; fluoride; does the gum bleed?; dental floss; transmission of caries. During the focus group discussions, it was observed that many adolescents were surprised by the situation they were placed in and by the topic they were discussing. The term "oral health" was associated with the cleanliness of the oral cavity, and oral health was not identified as part of general health. The term "transmission of caries" was not sufficiently understood. The expression "permanent tooth" was well understood, being associated with a type of tooth that would not be replaced. The word "fluoride" was associated more with cleaning than with protecting the teeth. It is concluded that the focus group technique is of great importance in interpreting adolescents' knowledge of oral health and in adapting the terminology of questionnaires on the subject.

  14. CONSTRUCTION TECHNIQUE OF SUPER-HIGH HEAVY FORMWORK GROUP OF HANDAN CULTURE AND ARTS CENTER%邯郸文化艺术中心超高重型群体模架施工技术

    Institute of Scientific and Technical Information of China (English)

    刘京城; 白克利; 李辉坚; 王念念; 贾成亮

    2011-01-01

    The Handan Culture and Arts Center involves tall supporting frames, heavy structural loads, and complex structural cross-sections. A super-high heavy formwork group was adopted during construction, using construction techniques such as tool-type unilateral formwork supporting, cross-plate scaffolds, and bowl-buckle scaffolds. Comparative analysis of the formwork supports indicates that the cross-plate scaffold provides higher safety in high load-bearing formwork support systems above 20 m.

  15. Atrial activation during sinus rhythm in patients with rheumatic and non-rheumatic paroxysmal atrial fibrillation. Frequency-domain analysis using signal-averaged electrocardiography

    Directory of Open Access Journals (Sweden)

    Dantas Rogério Carregoza

    2001-01-01

    Full Text Available OBJECTIVE: Using P-wave signal-averaged electrocardiography, we assessed the patterns of atrial electrical activation in patients with idiopathic atrial fibrillation as compared with patterns in patients with atrial fibrillation associated with structural heart disease. METHODS: Eighty patients with recurrent paroxysmal atrial fibrillation were divided into 3 groups as follows: group I - 40 patients with atrial fibrillation associated with non-rheumatic heart disease; group II - 25 patients with rheumatic atrial fibrillation; and group III - 15 patients with idiopathic atrial fibrillation. All patients underwent P-wave signal-averaged electrocardiography for frequency-domain analysis using spectrotemporal mapping and statistical techniques for detecting and quantifying intraatrial conduction disturbances. RESULTS: We observed an important fragmentation in atrial electrical conduction in 27% of the patients in group I, 64% of the patients in group II, and 67% of the patients in group III (p=0.003. CONCLUSION: Idiopathic atrial fibrillation has important intraatrial conduction disturbances. These alterations are similar to those observed in individuals with rheumatic atrial fibrillation, suggesting the existence of some degree of structural involvement of the atrial myocardium that cannot be detected with conventional electrocardiography and echocardiography.

  16. First-Stage Development and Validation of a Web-Based Automated Dietary Modeling Tool: Using Constraint Optimization Techniques to Streamline Food Group and Macronutrient Focused Dietary Prescriptions for Clinical Trials

    Science.gov (United States)

    Morrison, Evan; Sullivan, Emma; Dam, Hoa Khanh

    2016-01-01

    Background Standardizing the background diet of participants during a dietary randomized controlled trial is vital to trial outcomes. For this process, dietary modeling based on food groups and their target servings is employed via a dietary prescription before an intervention, often using a manual process. Partial automation has employed the use of linear programming. Validity of the modeling approach is critical to allow trial outcomes to be translated to practice. Objective This paper describes the first-stage development of a tool to automatically perform dietary modeling using food group and macronutrient requirements as a test case. The Dietary Modeling Tool (DMT) was then compared with existing approaches to dietary modeling (manual and partially automated), which were previously available to dietitians working within a dietary intervention trial. Methods Constraint optimization techniques were implemented to determine whether nonlinear constraints are best suited to the development of the automated dietary modeling tool using food composition and food consumption data. Dietary models were produced and compared with a manual Microsoft Excel calculator, a partially automated Excel Solver approach, and the automated DMT that was developed. Results The web-based DMT was produced using nonlinear constraint optimization, incorporating estimated energy requirement calculations, nutrition guidance systems, and the flexibility to amend food group targets for individuals. Percentage differences between modeling tools revealed similar results for the macronutrients. Polyunsaturated fatty acids and monounsaturated fatty acids showed greater variation between tools (practically equating to a 2-teaspoon difference), although it was not considered clinically significant when the whole diet, as opposed to targeted nutrients or energy requirements, was being addressed. 
Conclusions Automated modeling tools can streamline the modeling process for dietary intervention trials
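The constraint-based formulation described above can be illustrated with a toy model. The food-composition numbers, food-group names, and the `model_diet` helper below are invented for illustration; real tools such as the DMT use linear or nonlinear programming solvers rather than the brute-force search shown here.

```python
from itertools import product

# Hypothetical per-serving composition: (energy kJ, protein g, carbohydrate g).
FOODS = {
    "grains":     (500, 3.0, 30.0),
    "vegetables": (100, 2.0,  5.0),
    "dairy":      (600, 8.0, 12.0),
}

def model_diet(energy_target, protein_min, serving_range=range(0, 9)):
    """Pick the serving combination whose energy total is closest to the
    target while meeting a minimum-protein constraint: a toy stand-in for
    the constraint-optimization formulation described in the abstract."""
    best, best_gap = None, float("inf")
    for servings in product(serving_range, repeat=len(FOODS)):
        energy = sum(s * f[0] for s, f in zip(servings, FOODS.values()))
        protein = sum(s * f[1] for s, f in zip(servings, FOODS.values()))
        if protein < protein_min:
            continue  # infeasible: protein constraint violated
        gap = abs(energy - energy_target)
        if gap < best_gap:
            best, best_gap = dict(zip(FOODS, servings)), gap
    return best
```

In a real tool the serving bounds would come from nutrition guidance systems and the objective would cover all targeted nutrients at once, which is why solver-based approaches scale where enumeration does not.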

  17. Mindfulness for group facilitation

    DEFF Research Database (Denmark)

    Adriansen, Hanne Kirstine; Krohn, Simon

    2014-01-01

    In this paper, we argue that mindfulness techniques can be used for enhancing the outcome of group performance. The word mindfulness has different connotations in the academic literature. Broadly speaking, there is ‘mindfulness without meditation’ or ‘Western’ mindfulness, which involves active thinking, and ‘Eastern’ mindfulness, which refers to an open, accepting state of mind, as intended with Buddhist-inspired techniques such as meditation. In this paper, we are interested in the latter type of mindfulness and demonstrate how Eastern mindfulness techniques can be used as a tool for facilitation. A brief introduction to the physiology and philosophy of Eastern mindfulness constitutes the basis for the arguments of the effect of mindfulness techniques. The use of mindfulness techniques for group facilitation is novel as it changes the focus from individuals’ mindfulness practice...

  18. Long-term average performance benefits of parabolic trough improvements

    Energy Technology Data Exchange (ETDEWEB)

    Gee, R.; Gaul, H.W.; Kearney, D.; Rabl, A.

    1980-03-01

    Improved parabolic trough concentrating collectors will result from better design, improved fabrication techniques, and the development and utilization of improved materials. The difficulty of achieving these improvements varies as does their potential for increasing parabolic trough performance. The purpose of this analysis is to quantify the relative merit of various technology advancements in improving the long-term average performance of parabolic trough concentrating collectors. The performance benefits of improvements are determined as a function of operating temperature for north-south, east-west, and polar mounted parabolic troughs. The results are presented graphically to allow a quick determination of the performance merits of particular improvements. Substantial annual energy gains are shown to be attainable. Of the improvements evaluated, the development of stable back-silvered glass reflective surfaces offers the largest performance gain for operating temperatures below 150°C. Above 150°C, the development of trough receivers that can maintain a vacuum is the most significant potential improvement. The reduction of concentrator slope errors also has a substantial performance benefit at high operating temperatures.

  19. Potential of high-average-power solid state lasers

    Energy Technology Data Exchange (ETDEWEB)

    Emmett, J.L.; Krupke, W.F.; Sooy, W.R.

    1984-09-25

    We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels.

  20. Average annual runoff in the United States, 1951-80

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This is a line coverage of average annual runoff in the conterminous United States, 1951-1980. Keywords: surface runoff; average runoff; surface waters; United States

  1. Seasonal Sea Surface Temperature Averages, 1985-2001 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set consists of four images showing seasonal sea surface temperature (SST) averages for the entire earth. Data for the years 1985-2001 are averaged to...

  2. Average American 15 Pounds Heavier Than 20 Years Ago

    Science.gov (United States)

    Since the late 1980s and early 1990s, the average American has put on 15 or more additional pounds...

  3. Plant tissue culture techniques

    Directory of Open Access Journals (Sweden)

    Rolf Dieter Illg

    1991-01-01

    Full Text Available Plant cell and tissue culture in a simple fashion refers to techniques which utilize either single plant cells, groups of unorganized cells (callus) or organized tissues or organs put in culture, under controlled sterile conditions.

  4. Plant tissue culture techniques

    OpenAIRE

    Rolf Dieter Illg

    1991-01-01

    Plant cell and tissue culture in a simple fashion refers to techniques which utilize either single plant cells, groups of unorganized cells (callus) or organized tissues or organs put in culture, under controlled sterile conditions.

  5. Original article Functioning of memory and attention processes in children with intelligence below average

    Directory of Open Access Journals (Sweden)

    Aneta Rita Borkowska

    2014-05-01

    Full Text Available BACKGROUND The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the authors’ experimental trials and the TUS test (Attention and Perceptiveness Test). RESULTS Children with intelligence below average memorized significantly less information contained in the logical material, demonstrated lower ability to memorize the visual material, memorized significantly fewer words in the verbal material learning task, achieved lower results in indicators of visual attention pace such as the number of omissions and mistakes, and had a lower pace of perceptual work, compared to children with average intelligence. CONCLUSIONS The results confirm that children with intelligence below average have difficulties with memorizing new material, both logically connected and unconnected. The significantly lower capacity of direct memory is independent of modality. The results of the study on the memory process confirm the hypothesis about lower abilities of children with intelligence below average, in terms of concentration, work pace, efficiency and perception.

  6. Influence of gestational diabetes on the fetal autonomic nervous system: A study using phase-rectified signal averaging analysis.

    Science.gov (United States)

    Lobmaier, Silvia M; Ortiz, Javier U; Sewald, Maria; Müller, Alexander; Schmidt, Georg; Haller, Bernhard; Oberhoffer, Renate; Schneider, Karl T M; Giussani, Dino A; Wacker-Gußmann, Annette

    2017-08-07

    Maternal gestational diabetes (GDM) is known to influence fetal physiology. Phase-rectified signal averaging (PRSA), an innovative signal processing technique, can be used to investigate signals obtained from the fetal heart. The PRSA-calculated variables "average acceleration capacity" (AAC) and "average deceleration capacity" (ADC) are established indices of autonomic nervous system (ANS) function. The aim of this study was to evaluate the influence of gestational diabetes on the fetal ANS in human pregnancy using PRSA. In a prospective human clinical case-control study during the third trimester of pregnancy, 58 mothers with diagnosed GDM and 58 gestational-age matched healthy controls were included. Fetal CTG registrations were performed in all cases at study entry, and in 19 cases of gestational diabetes the registration was repeated close to delivery. The innovative ultrasound-based CTG parameters AAC and ADC, as well as fetal heart rate short-term variation (STV) according to the Dawes/Redman criteria, were calculated. Mean gestational age of both groups at study entry was 35.7 ± 2.3 weeks. There was a significant difference in AAC (mean ± SD: 1.97 ± 0.33 vs. 2.42 ± 0.57 bpm; p<0.001) and ADC (1.94 ± 0.32 vs. 2.28 ± 0.46; p<0.001) between controls and fetuses of diabetic mothers. This difference between groups could not be demonstrated using standard computerized fetal CTG analysis (STV controls 10.8 ± 3.0 vs. cases 11.3 ± 2.5 ms; p=0.32). The longitudinal measurements in the diabetes group did not show significant differences from the measurements at study entry. These data show increased ANS activity in late-gestation fetuses of diabetic mothers. Analysis of human fetal cardiovascular and autonomic nervous system function by PRSA may offer improved surveillance over conventional techniques, linking gestational diabetes pregnancy with future cardiovascular dysfunction in the offspring.
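The PRSA procedure itself is compact enough to sketch. The following is a minimal illustration, not the authors' implementation: anchor points are placed where the series decreases, fixed-width windows around the anchors are averaged, and a capacity index is read off the averaged window with a Haar-like wavelet. The window half-width `L`, anchor averaging span `T`, and the `prsa` name are assumptions.

```python
import numpy as np

def prsa(rr, L=5, T=1):
    """Phase-rectified signal averaging of an RR-interval series (a sketch).

    Acceleration anchors: points i where the mean of the T samples from i on
    is smaller than the mean of the T samples before i (RR shortens as the
    heart rate speeds up). Windows of 2L samples centred on each anchor are
    averaged, and the capacity is the central Haar-like wavelet coefficient.
    """
    rr = np.asarray(rr, dtype=float)
    anchors = [i for i in range(L, len(rr) - L)
               if rr[i:i + T].mean() < rr[i - T:i].mean()]
    if not anchors:
        return None  # no anchors found (e.g. a constant series)
    windows = np.stack([rr[i - L:i + L] for i in anchors])
    avg = windows.mean(axis=0)           # phase-rectified average; avg[L] is the anchor
    # Capacity from the central wavelet coefficient:
    return (avg[L] + avg[L + 1] - avg[L - 1] - avg[L - 2]) / 4.0
```

A deceleration capacity is obtained analogously by reversing the anchor inequality; quasi-periodic components aligned by the anchors survive the averaging while uncorrelated noise cancels.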

  7. Trait valence and the better-than-average effect.

    Science.gov (United States)

    Gold, Ron S; Brown, Mark G

    2011-12-01

    People tend to regard themselves as having superior personality traits compared to their average peer. To test whether this "better-than-average effect" varies with trait valence, participants (N = 154 students) rated both themselves and the average student on traits constituting either positive or negative poles of five trait dimensions. In each case, the better-than-average effect was found, but trait valence had no effect. Results were discussed in terms of Kahneman and Tversky's prospect theory.

  8. Investigating Averaging Effect by Using Three Dimension Spectrum

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The averaging effect of the eddy current displacement sensor has been investigated in this paper, and the frequency spectrum property of the averaging effect was also deduced. It indicates that the averaging effect has no influence on measuring a rotor's rotating error, but it has a visible influence on measuring the rotor's profile error. According to the frequency spectrum of the averaging effect, the actual sampling data can be adjusted reasonably, thus improving measuring precision.
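The frequency-domain character of such an averaging effect can be illustrated with the textbook magnitude response of an N-point moving average, which passes the DC component unchanged and nulls frequencies that complete whole cycles within the averaging span. This is a generic sketch of a boxcar average, not the sensor-specific model deduced in the paper; the function name is an assumption.

```python
import numpy as np

def moving_average_response(n_points, freqs):
    """Magnitude response |H(f)| of an n_points-wide boxcar average at
    normalized frequencies f in [0, 0.5): H(f) = sin(pi f N) / (N sin(pi f))."""
    f = np.asarray(freqs, dtype=float)
    num = np.sin(np.pi * f * n_points)
    den = n_points * np.sin(np.pi * f)
    with np.errstate(divide="ignore", invalid="ignore"):
        h = np.where(den == 0.0, 1.0, num / den)  # limit at f = 0 is 1
    return np.abs(h)
```

The nulls at f = k/N are why a spatial average leaves a low-frequency rotating error untouched while attenuating the higher-frequency content of a profile error.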

  9. Average of Distribution and Remarks on Box-Splines

    Institute of Scientific and Technical Information of China (English)

    LI Yue-sheng

    2001-01-01

    A class of generalized moving average operators is introduced, and integral representations of an average function are provided. It is shown that the average of the Dirac δ-distribution is just the well-known box-spline. Some remarks on box-splines, such as their smoothness and the corresponding partition of unity, are made. The factorization of average operators is derived. The subdivision algorithm for efficient computation of box-splines and their linear combinations then follows.
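Under standard definitions (assumed here, since the record gives no formulas), the statement about the Dirac δ-distribution can be written out: repeated application of the unit moving-average operator to δ produces the cardinal B-splines, the univariate box-splines.

```latex
% Sketch with standard definitions (not reproduced from the record):
(Af)(x) = \int_{0}^{1} f(x - t)\,dt,
\qquad
A\delta = \chi_{[0,1)},
\qquad
\underbrace{(A \circ \cdots \circ A)}_{n}\,\delta = B_{n-1},
```

where $B_{n-1}$ is the cardinal B-spline of degree $n-1$. The factorization of the $n$-fold average into successive applications of $A$ is what underlies the subdivision algorithm mentioned in the abstract.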

  10. Scalable Robust Principal Component Analysis Using Grassmann Averages

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi

    2016-01-01

    provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust...

  11. Averaging and Globalising Quotients of Informetric and Scientometric Data.

    Science.gov (United States)

    Egghe, Leo; Rousseau, Ronald

    1996-01-01

    Discussion of impact factors for "Journal Citation Reports" subject categories focuses on the difference between an average of quotients and a global average, obtained as a quotient of averages. Applications in the context of informetrics and scientometrics are given, including journal prices and subject discipline influence scores.…
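The distinction the record draws can be made concrete with two hypothetical journals (illustrative numbers only; the function names are not from the paper):

```python
def average_of_quotients(pairs):
    """Mean of per-journal impact factors: average the citations/papers ratios."""
    return sum(c / p for c, p in pairs) / len(pairs)

def global_average(pairs):
    """Global impact factor: total citations divided by total papers."""
    return sum(c for c, _ in pairs) / sum(p for _, p in pairs)

# (citations, papers) for two hypothetical journals:
pairs = [(100, 10), (10, 100)]
# average_of_quotients(pairs) -> (10 + 0.1) / 2 = 5.05
# global_average(pairs)       -> 110 / 110    = 1.0
```

The small journal's extreme ratio dominates the average of quotients, while the global average weights each journal by its paper count, which is exactly the difference at issue for subject-category impact factors.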

  12. Perturbation resilience and superiorization methodology of averaged mappings

    Science.gov (United States)

    He, Hongjin; Xu, Hong-Kun

    2017-04-01

    We first prove the bounded perturbation resilience for the successive fixed point algorithm of averaged mappings, which extends the string-averaging projection and block-iterative projection methods. We then apply the superiorization methodology to a constrained convex minimization problem where the constraint set is the intersection of fixed point sets of a finite family of averaged mappings.

  13. 76 FR 6161 - Annual Determination of Average Cost of Incarceration

    Science.gov (United States)

    2011-02-03

    ... No: 2011-2363] DEPARTMENT OF JUSTICE Bureau of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2009 was $25,251. The average annual cost to confine an...

  14. 20 CFR 226.62 - Computing average monthly compensation.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computing average monthly compensation. 226... Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is computed by first determining the employee's highest 60 months of railroad compensation...

  15. 40 CFR 1042.710 - Averaging emission credits.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Averaging emission credits. 1042.710..., Banking, and Trading for Certification § 1042.710 Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. (b) You may certify one or more engine families to...

  16. 27 CFR 19.37 - Average effective tax rate.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Average effective tax rate..., DEPARTMENT OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Taxes Effective Tax Rates § 19.37 Average effective tax rate. (a) The proprietor may establish an average effective tax rate for any...

  17. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...

  18. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You...

  19. 7 CFR 1410.44 - Average adjusted gross income.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Average adjusted gross income. 1410.44 Section 1410... Average adjusted gross income. (a) Benefits under this part will not be available to persons or legal entities whose average adjusted gross income exceeds $1,000,000 or as further specified in part...

  20. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE... ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each...

  1. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80... Average terrain elevation. (a)(1) Draw radials from the antenna site for each 45 degrees of azimuth.... (d) Average the values by adding them and dividing by the number of readings along each radial....
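The averaging step in paragraph (d) is simple to state in code. This sketch (the function name and data layout are assumptions, and the radial-drawing and elevation-reading steps of the preceding paragraphs are not modeled) averages the readings along each radial and then across radials:

```python
def average_terrain_elevation(radial_readings):
    """Average elevation readings per radial (radials drawn every 45 degrees
    of azimuth from the antenna site), plus the overall site average:
    each radial's readings are summed and divided by their count."""
    per_radial = [sum(r) / len(r) for r in radial_readings]
    return per_radial, sum(per_radial) / len(per_radial)
```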

  2. 34 CFR 668.196 - Average rates appeals.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Average rates appeals. 668.196 Section 668.196....196 Average rates appeals. (a) Eligibility. (1) You may appeal a notice of a loss of eligibility under... calculated as an average rate under § 668.183(d)(2). (2) You may appeal a notice of a loss of...

  3. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...

  4. 34 CFR 668.215 - Average rates appeals.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Average rates appeals. 668.215 Section 668.215... Average rates appeals. (a) Eligibility. (1) You may appeal a notice of a loss of eligibility under § 668... as an average rate under § 668.202(d)(2). (2) You may appeal a notice of a loss of eligibility...

  5. 7 CFR 51.2548 - Average moisture content determination.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content determination. 51.2548..., AND STANDARDS) United States Standards for Grades of Pistachio Nuts in the Shell § 51.2548 Average moisture content determination. (a) Determining average moisture content of the lot is not a requirement...

  6. Influence of packing interactions on the average conformation of B-DNA in crystalline structures.

    Science.gov (United States)

    Tereshko, V; Subirana, J A

    1999-04-01

    The molecular interactions in crystals of oligonucleotides in the B form have been analysed and in particular the end-to-end interactions. Phosphate-phosphate interactions in dodecamers are also reviewed. A strong influence of packing constraints on the average conformation of the double helix is found. There is a strong relationship between the space group, the end-to-end interactions and the average conformation of DNA. Dodecamers must have a B-form average conformation with 10 +/- 0.1 base pairs per turn in order to crystallize in the P212121 and related space groups usually found. Decamers show a wider range of conformational variation, with 9.7-10.6 base pairs per turn, depending on the terminal sequence and the space group. The influence of the space group in decamers is quite striking and remains unexplained. Only small variations are allowed in each case. Thus, crystal packing is strongly related to the average DNA conformation in the crystals and deviations from the average are rather limited. The constraints imposed by the crystal lattice explain why the average twist of the DNA in solution (10.6 base pairs per turn) is seldom found in oligonucleotides crystallized in the B form.

  7. Presentations of groups

    CERN Document Server

    Johnson, D L

    1997-01-01

    The aim of this book is to provide an introduction to combinatorial group theory. Any reader who has completed first courses in linear algebra, group theory and ring theory will find this book accessible. The emphasis is on computational techniques but rigorous proofs of all theorems are supplied. This new edition has been revised throughout, including new exercises and an additional chapter on proving that certain groups are infinite.

  8. Experimental Techniques

    DEFF Research Database (Denmark)

    Wyer, Jean

    2013-01-01

    Gas-phase ion spectroscopy requires specialised apparatus, both when it comes to measuring photon absorption and light emission (fluorescence). The reason is much lower ion densities compared to solution-phase spectroscopy. In this chapter different setups are described, all based on mass spectrometry... A way to circumvent this is discussed based on a chemical approach, namely tagging of ammonium groups by crown ether. Prompt dissociation can sometimes be identified from the total beam depletion differing from that due to statistical dissociation. Special emphasis in this chapter is on the limitations and pitfalls in data interpretation, and the advantages and disadvantages of the different techniques are clarified. New instrumental developments involving cryo-cooled storage rings, which show great promise for the future, are briefly touched upon.

  9. Group X

    Energy Technology Data Exchange (ETDEWEB)

    Fields, Susannah

    2007-08-16

    This project is currently under contract for research through the Department of Homeland Security until 2011. The group I was responsible for studying has to remain confidential so as not to affect the current project. All dates, reference links and authors, and other distinguishing characteristics of the original group have been removed from this report. All references to the name of this group or the individual splinter groups have been changed to 'Group X'. I have been collecting texts from a variety of sources intended for the use of recruiting and radicalizing members for Group X splinter groups for the purpose of researching the motivation and intent of leaders of those groups and their influence over the likelihood of group radicalization. This work included visiting many Group X websites to find information on splinter group leaders and finding their statements to new and old members. This proved difficult because the splinter groups of Group X are united in beliefs, but differ in public opinion. They are eager to tear each other down, prove their superiority, and yet remain anonymous. After a few weeks of intense searching, a list of eight recruiting texts and eight radicalizing texts from a variety of Group X leaders was compiled.

  10. Averages of $b$-hadron, $c$-hadron, and $\\tau$-lepton properties as of summer 2014

    Energy Technology Data Exchange (ETDEWEB)

    Amhis, Y.; et al.

    2014-12-23

    This article reports world averages of measurements of $b$-hadron, $c$-hadron, and $\\tau$-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through summer 2014. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, $CP$ violation parameters, parameters of semileptonic decays and CKM matrix elements.
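The correlation-aware averaging HFAG describes is, at its core, a best linear unbiased estimate over a covariance matrix. The following is a minimal sketch of that combination step only; the rescaling of common input parameters is not modeled, and the `combine` name is an assumption:

```python
import numpy as np

def combine(measurements, covariance):
    """Best linear unbiased estimate (BLUE) of correlated measurements of one
    quantity: weights w = V^-1 1 / (1^T V^-1 1), variance 1 / (1^T V^-1 1)."""
    y = np.asarray(measurements, dtype=float)
    v_inv = np.linalg.inv(np.asarray(covariance, dtype=float))
    ones = np.ones_like(y)
    norm = ones @ v_inv @ ones
    weights = v_inv @ ones / norm
    return weights @ y, 1.0 / norm  # (average, variance of the average)
```

With a diagonal covariance this reduces to the familiar inverse-variance weighted average; off-diagonal terms encode the known correlations the abstract mentions.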

  11. Averages of $b$-hadron, $c$-hadron, and $\tau$-lepton properties as of summer 2016

    CERN Document Server

    Amhis, Y.; Ben-Haim, E.; Bernlochner, F.; Bozek, A.; Bozzi, C.; Chrząszcz, M.; Dingfelder, J.; Duell, S.; Gersabeck, M.; Gershon, T.; Goldenzweig, P.; Harr, R.; Hayasaka, K.; Hayashii, H.; Kenzie, M.; Kuhr, T.; Leroy, O.; Lusiani, A.; Lyu, X.R.; Miyabayashi, K.; Naik, P.; Nanut, T.; Oyanguren Campos, A.; Patel, M.; Pedrini, D.; Petrič, M.; Rama, M.; Roney, M.; Rotondo, M.; Schneider, O.; Schwanda, C.; Schwartz, A.J.; Serrano, J.; Shwartz, B.; Tesarek, R.; Trabelsi, K.; Urquijo, P.; Van Kooten, R.; Yelton, J.; Zupanc, A.

    This article reports world averages of measurements of $b$-hadron, $c$-hadron, and $\tau$-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through summer 2016. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, $CP$ violation parameters, parameters of semileptonic decays and CKM matrix elements.

  12. Averages of $b$-hadron, $c$-hadron, and $\\tau$-lepton properties as of summer 2014

    CERN Document Server

    Amhis, Y; Ben-Haim, E; Blyth, S; Bozek, A; Bozzi, C; Carbone, A; Chistov, R; Chrząszcz, M; Cibinetto, G; Dingfelder, J; Gelb, M; Gersabeck, M; Gershon, T; Gibbons, L; Golob, B; Harr, R; Hayasaka, K; Hayashii, H; Kuhr, T; Leroy, O; Lusiani, A; Miyabayashi, K; Naik, P; Nishida, S; Campos, A Oyanguren; Patel, M; Pedrini, D; Petrič, M; Rama, M; Roney, M; Rotondo, M; Schneider, O; Schwanda, C; Schwartz, A J; Shwartz, B; Smith, J G; Tesarek, R; Trabelsi, K; Urquijo, P; Van Kooten, R; Zupanc, A

    2014-01-01

    This article reports world averages of measurements of $b$-hadron, $c$-hadron, and $\\tau$-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through summer 2014. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, $CP$ violation parameters, parameters of semileptonic decays and CKM matrix elements.

  13. Averages of $b$-hadron, $c$-hadron, and $\\tau$-lepton properties as of summer 2016

    Energy Technology Data Exchange (ETDEWEB)

    Amhis, Y.; et al.

    2016-12-21

    This article reports world averages of measurements of $b$-hadron, $c$-hadron, and $\tau$-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through summer 2016. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, $CP$ violation parameters, parameters of semileptonic decays and CKM matrix elements.

  14. Clinical Application and Methodology Study on Micro-column Gel Technique in Detecting Suspicious ABO Blood Group

    Institute of Scientific and Technical Information of China (English)

    肖倩; 辛荣传; 周益强; 辛康

    2009-01-01

    Objective: To discuss the sensitivity and influencing factors of the micro-column gel technique (MGT) in detecting suspicious ABO blood groups. Methods: MGT was applied to ABO blood typing of 38,600 patients, with the test-tube method used in parallel for re-examination and control. For blood samples whose forward and reverse typing results were inconsistent, centrifugation or 37 °C incubation time was increased to remove fibrin and cold agglutinins from the serum; for samples with markedly weakened blood-group antigens or decreased antibody titers, the dose of typing serum or test serum was increased, the antigen and antibody were premixed and allowed to react at 37 °C for 15-30 min, and MGT was then applied. Results: MGT was more sensitive than the test-tube method in detecting markedly weakened ABO blood-group antigens or decreased antibody titers. Of the samples typed by MGT, 35 showed inconsistent forward and reverse typing results: 9 patients had fibrin in the plasma, 4 had abnormally elevated serum proteins, 5 had high-titer cold agglutinins, 8 had markedly weakened blood-group antigens, and 9 had decreased antibody titers. Conclusion: Fibrin, high-titer cold agglutinins, weakened blood-group antigens and decreased antibody titers can all interfere with ABO blood typing. Increasing centrifugation or 37 °C incubation time to remove fibrin and cold agglutinins, increasing the dose of typing serum or patient plasma, premixing the antigen and antibody at 37 °C for 15-30 min, and then applying MGT can improve the accuracy of ABO blood typing and the safety of transfusion.

  15. Average number of iterations of some polynomial interior-point algorithms for linear programming

    Institute of Scientific and Technical Information of China (English)

    黄思明

    2000-01-01

    We study the behavior of some polynomial interior-point algorithms for solving random linear programming (LP) problems. We show that the average number of iterations of these algorithms, coupled with a finite termination technique, is bounded above by O(n^{1.5}). The random LP problem is Todd's probabilistic model with the standard Gaussian distribution.

  16. Measurement of Plasma Ion Temperature and Flow Velocity from Chord-Averaged Emission Line Profile

    Indian Academy of Sciences (India)

    Xu Wei

    2011-03-01

    The distinction between Doppler broadening and Doppler shift has been analysed, the differences between Gaussian fitting and the distribution of chord-integral line shape have also been discussed. Local ion temperature and flow velocity have been derived from the chord-averaged emission line profile by a chosen-point Gaussian fitting technique.

  17. Average number of iterations of some polynomial interior-point algorithms for linear programming

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    We study the behavior of some polynomial interior-point algorithms for solving random linear programming (LP) problems. We show that the average number of iterations of these algorithms, coupled with a finite termination technique, is bounded above by O(n^{1.5}). The random LP problem is Todd's probabilistic model with the standard Gaussian distribution.

  18. Near-elastic vibro-impact analysis by discontinuous transformations and averaging

    DEFF Research Database (Denmark)

    Thomsen, Jon Juel; Fidlin, Alexander

    2008-01-01

    We show how near-elastic vibro-impact problems, linear or nonlinear in-between impacts, can be conveniently analyzed by a discontinuity-reducing transformation of variables combined with an extended averaging procedure. A general technique for this is presented, and illustrated by calculating tra...

  19. Resolving macromolecular structures from electron cryo-tomography data using subtomogram averaging in RELION.

    Science.gov (United States)

    Bharat, Tanmay A M; Scheres, Sjors H W

    2016-11-01

    Electron cryo-tomography (cryo-ET) is a technique that is used to produce 3D pictures (tomograms) of complex objects such as asymmetric viruses, cellular organelles or whole cells from a series of tilted electron cryo-microscopy (cryo-EM) images. Averaging of macromolecular complexes found within tomograms is known as subtomogram averaging, and this technique allows structure determination of macromolecular complexes in situ. Subtomogram averaging is also gaining in popularity for the calculation of initial models for single-particle analysis. We describe herein a protocol for subtomogram averaging from cryo-ET data using the RELION software (http://www2.mrc-lmb.cam.ac.uk/relion). RELION was originally developed for cryo-EM single-particle analysis, and the subtomogram averaging approach presented in this protocol has been implemented in the existing workflow for single-particle analysis so that users may conveniently tap into existing capabilities of the RELION software. We describe how to calculate 3D models for the contrast transfer function (CTF) that describe the transfer of information in the imaging process, and we illustrate the results of classification and subtomogram averaging refinement for cryo-ET data of purified hepatitis B capsid particles and Saccharomyces cerevisiae 80S ribosomes. Using the steps described in this protocol, along with the troubleshooting and optimization guidelines, high-resolution maps can be obtained in which secondary structure elements are resolved.

  20. Near surface spatially averaged air temperature and wind speed determined by acoustic travel time tomography

    Directory of Open Access Journals (Sweden)

    Armin Raabe

    2001-03-01

    Full Text Available Acoustic travel time tomography is presented as a possibility for remote monitoring of near surface air temperature and wind fields. This technique provides line-averaged effective sound speeds changing with temporally and spatially variable air temperature and wind vector. The effective sound speed is derived from the travel times of sound signals which propagate at defined paths between different acoustic sources and receivers. Starting with the travel time data, a tomographic algorithm (Simultaneous Iterative Reconstruction Technique, SIRT) is used to calculate area-averaged air temperature and wind speed. The accuracy of the experimental method and the tomographic inversion algorithm is exemplarily demonstrated for one day without remarkable differences in the horizontal temperature field, determined by independent in situ measurements at different points within the measuring field. The differences between the conventionally determined air temperature (point measurement) and the air temperature determined by tomography (area-averaged measurement representative for the measuring field of 200 m x 260 m) were below 0.5 K for an average of 10 minutes. The differences obtained between the wind speed measured at a meteorological mast and calculated from acoustic measurements are not higher than 0.5 m s-1 for the same averaging time. The tomographically determined area-averaged distribution of air temperature (resolution 50 m x 50 m) can be used to estimate the horizontal gradient of air temperature as a pre-condition to detect horizontal turbulent fluxes of sensible heat.
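
    The SIRT step described above can be sketched numerically. The following is a minimal toy, not the field setup: the 2x2 cell grid, ray geometry, and per-cell sound speeds are invented for illustration. Travel times are simulated as line integrals of slowness (t = L @ s), and the standard row/column-normalized SIRT update recovers the per-cell values.

```python
import numpy as np

# Hypothetical 2x2 grid of cells; each row of L holds one ray's path length
# inside each cell, so the travel times are t = L @ s for slowness vector s.
L = np.array([
    [1.0, 1.0, 0.0, 0.0],   # ray through the top two cells
    [0.0, 0.0, 1.0, 1.0],   # ray through the bottom two cells
    [1.0, 0.0, 1.0, 0.0],   # ray through the left two cells
    [0.0, 1.0, 0.0, 1.0],   # ray through the right two cells
    [1.4, 0.0, 0.0, 1.4],   # diagonal ray
])
s_true = 1.0 / np.array([340.0, 343.0, 341.0, 339.0])  # slowness = 1 / sound speed
t = L @ s_true                                         # simulated travel times

# SIRT: residuals weighted by inverse ray length (rows) and coverage (columns).
row = 1.0 / L.sum(axis=1)
col = 1.0 / L.sum(axis=0)
s = np.full(4, 1.0 / 340.0)          # initial homogeneous guess
for _ in range(200):
    s = s + col * (L.T @ (row * (t - L @ s)))

print(1.0 / s)   # recovered per-cell sound speeds
```

    From the recovered sound speeds, area-averaged temperature follows via the (temperature-dependent) speed-of-sound relation; the real experiment additionally separates the wind contribution using reciprocal ray paths.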

  1. Retardance and flicker modeling and characterization of electro-optic linear retarders by averaged Stokes polarimetry.

    Science.gov (United States)

    Martínez, Francisco J; Márquez, Andrés; Gallego, Sergi; Francés, Jorge; Pascual, Inmaculada; Beléndez, Augusto

    2014-02-15

    A polarimetric method for the measurement of linear retardance in the presence of phase fluctuations is presented. This can be applied to electro-optic devices behaving as variable linear retarders. The method is based on an extended Mueller matrix model for the linear retarder containing the time-averaged effects of the instabilities. As a result, an averaged Stokes polarimetry technique is proposed to characterize both the retardance and its flicker magnitude. Predictive capability of the approach is experimentally demonstrated, validating the model and the calibration technique. The approach is applied to liquid crystal on silicon displays (LCoS) using a commercial Stokes polarimeter. Both the magnitude of the average retardance and the amplitude of its fluctuation are obtained for each gray level value addressed, thus enabling a complete phase characterization of the LCoS.
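
    The core idea of time-averaging a fluctuating retarder can be sketched as follows. This is a simplified illustration, not the paper's extended Mueller model: the nominal retardance, the uniform flicker distribution, and its amplitude are assumed values. Averaging Mueller matrices over the flicker shrinks the retarder block by a factor sin(amp)/amp while leaving the nominal retardance intact, which is what an averaged Stokes measurement would see.

```python
import numpy as np

def mueller_retarder(delta):
    """Mueller matrix of a horizontal linear retarder with retardance delta."""
    c, s = np.cos(delta), np.sin(delta)
    return np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, c, -s],
                     [0, 0, s, c]])

rng = np.random.default_rng(5)
mean_ret = np.deg2rad(90.0)     # assumed nominal (average) retardance
amp = np.deg2rad(20.0)          # assumed flicker amplitude, uniform distribution

deltas = mean_ret + rng.uniform(-amp, amp, size=50_000)
M_avg = np.mean([mueller_retarder(d) for d in deltas], axis=0)

# For uniform flicker, <cos d> and <sin d> both shrink by sin(amp)/amp, so the
# averaged matrix is the nominal retarder scaled by a depolarizing factor < 1.
print(M_avg[3, 2], np.sin(mean_ret) * np.sin(amp) / amp)
```

    Fitting both the nominal retardance and the shrink factor to averaged Stokes data is what allows retardance and flicker amplitude to be characterized simultaneously.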

  2. Reconstruction of ionization probabilities from spatially averaged data in N-dimensions

    CERN Document Server

    Strohaber, J; Schuessler, H A

    2010-01-01

    We present an analytical inversion technique which can be used to recover ionization probabilities from spatially averaged data in an N-dimensional detection scheme. The solution is given as a power series in intensity. For this reason, we call this technique a multiphoton expansion (MPE). The MPE formalism was verified with an exactly solvable inversion problem in 2D, and probabilities in the postsaturation region, where the intensity-selective scanning approach breaks down, were recovered. In 3D, ionization probabilities of Xe were successfully recovered with MPE from ion yields simulated using the ADK tunneling theory. Finally, we tested our approach with intensity-resolved benzene ion yields showing a resonant multiphoton ionization process. By applying MPE to this data (which was artificially averaged) the resonant structure was recovered, suggesting that the resonance in benzene may have been observable in spatially averaged data taken elsewhere.

  3. Data mining and visualization of average images in a digital hand atlas

    Science.gov (United States)

    Zhang, Aifeng; Gertych, Arkadiusz; Liu, Brent J.; Huang, H. K.

    2005-04-01

    We have collected a digital hand atlas containing digitized left hand radiographs of normally developed children grouped accordingly by age, sex, and race. A set of features stored in a database reflecting patient's stage of skeletal development has been calculated by automatic image processing procedures. This paper addresses a new concept, "average" image in the digital hand atlas. The "average" reference image in the digital atlas is selected for each of the groups of normal developed children with the best representative skeletal maturity based on bony features. A data mining procedure was designed and applied to find the average image through average feature vector matching. It also provides a temporary solution for the missing feature problem through polynomial regression. As more cases are added to the digital hand atlas, it can grow to provide clinicians accurate reference images to aid the bone age assessment process.
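
    The "average feature vector matching" step can be sketched in a few lines. The feature values below are random stand-ins for the atlas's bony features; the point is only the selection rule: within each age/sex/race group, pick the image whose feature vector lies closest to the group mean.

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical bony-feature vectors for one group (one row per radiograph).
features = rng.normal(size=(30, 4))

mean_vec = features.mean(axis=0)
# The "average" reference image is the one whose feature vector is nearest
# (in Euclidean distance) to the group's mean feature vector.
idx = int(np.argmin(np.linalg.norm(features - mean_vec, axis=1)))
print(idx)
```

    Missing features would be filled by regression (the paper uses polynomial regression) before this matching step.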

  4. Group morphology

    NARCIS (Netherlands)

    Roerdink, Jos B.T.M.

    2000-01-01

    In its original form, mathematical morphology is a theory of binary image transformations which are invariant under the group of Euclidean translations. This paper surveys and extends constructions of morphological operators which are invariant under a more general group TT, such as the motion group

  5. Impedance group summary

    Science.gov (United States)

    Blaskiewicz, M.; Dooling, J.; Dyachkov, M.; Fedotov, A.; Gluckstern, R.; Hahn, H.; Huang, H.; Kurennoy, S.; Linnecar, T.; Shaposhnikova, E.; Stupakov, G.; Toyama, T.; Wang, J. G.; Weng, W. T.; Zhang, S. Y.; Zotter, B.

    1999-12-01

    The impedance working group was charged to reply to the following 8 questions relevant to the design of high-intensity proton machines such as the SNS or the FNAL driver. These questions were first discussed one by one in the whole group, then each one of them was assigned to one member to summarize. On the last morning these contributions were publicly read, re-discussed and re-written where required—hence they are not the opinion of a particular person, but rather the averaged opinion of all members of the working group. (AIP)

  6. Declining average daily census. Part 2: Possible solutions.

    Science.gov (United States)

    Weil, T P

    1986-01-01

    Several possible solutions are available to hospitals experiencing a declining average daily census, including: Closure of some U.S. hospitals; Joint ventures between physicians and hospitals; Development of integrated and coordinated medical-fiscal-management information systems; Improvements in the hospital's short-term marketing strategy; Reduction of the facility's internal operation expenses; Vertical more than horizontal diversification to develop a multilevel (acute through home care) regional health care system with an alternative health care payment system that is a joint venture with the medical staff(s); Acquisition or management by a not-for-profit or investor-owned multihospital system (emphasis on horizontal versus vertical integration). Many reasons exist for an institution to choose the solution of developing a regional multilevel health care system rather than being part of a large, geographically scattered, multihospital system. Geographic proximity, lenders' preferences, service integration, management recruitment, and local remedies to a declining census all favor the regional system. More answers lie in emphasizing the basics of health care regionalization and focusing on vertical integration, including a prepayment plan, rather than stressing large multihospital systems with institutions in several states or selling out to the investor-owned groups.

  7. Perceptual learning in Williams syndrome: looking beyond averages.

    Directory of Open Access Journals (Sweden)

    Patricia Gervan

    Full Text Available Williams Syndrome is a genetically determined neurodevelopmental disorder characterized by an uneven cognitive profile and surprisingly large neurobehavioral differences among individuals. Previous studies have already shown different forms of memory deficiencies and learning difficulties in WS. Here we studied the capacity of WS subjects to improve their performance in a basic visual task. We employed a contour integration paradigm that addresses occipital visual function, and analyzed the initial (i.e. baseline) and after-learning performance of WS individuals. Instead of pooling the very inhomogeneous results of WS subjects together, we evaluated individual performance by expressing it in terms of the deviation from the average performance of the group of typically developing subjects of similar age. This approach helped us to reveal information about the possible origins of the poor performance of WS subjects in contour integration. Although the majority of WS individuals showed both reduced baseline and reduced learning performance, individual analysis also revealed a dissociation between baseline and learning capacity in several WS subjects. In spite of impaired initial contour integration performance, some WS individuals presented learning capacity comparable to learning in the typically developing population, and vice versa, poor learning was also observed in subjects with high initial performance levels. These data indicate a dissociation between factors determining initial performance and perceptual learning.
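
    Expressing an individual's score as a deviation from a control group's average, as described above, amounts to a z-score against the typically developing sample. A minimal sketch (the accuracy values below are made up for illustration):

```python
import numpy as np

def deviation_from_controls(score, control_scores):
    """z-score of an individual's score relative to the mean and (sample)
    standard deviation of a typically developing control group."""
    mu = np.mean(control_scores)
    sigma = np.std(control_scores, ddof=1)
    return (score - mu) / sigma

controls = [0.82, 0.75, 0.91, 0.68, 0.79, 0.85, 0.73, 0.88]  # made-up accuracies
z = deviation_from_controls(0.55, controls)
print(z)   # well below the control-group mean
```

    Computing such z-scores separately for baseline and learning measures is what makes the baseline/learning dissociation visible at the individual level.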

  8. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
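
    The spot-versus-boxcar contrast described above can be reproduced with synthetic 1-min values. The daily variation, noise level, and sample counts below are illustrative assumptions, not observatory data; the point is that the 1-h boxcar average suppresses high-frequency variation that spot samples pass through (and can alias).

```python
import numpy as np

rng = np.random.default_rng(0)
# One day of synthetic 1-min data: a smooth daily variation plus
# high-frequency "noise" standing in for rapid geomagnetic variation.
t = np.arange(1440)
smooth = 20 * np.sin(2 * np.pi * t / 1440)
field = smooth + rng.normal(0, 3, size=1440)

minute_values = field.reshape(24, 60)       # one row per hour
spot = minute_values[:, 0]                  # instantaneous "spot" value each hour
boxcar = minute_values.mean(axis=1)         # simple 1-h "boxcar" average

# Residuals against the underlying smooth variation, treated consistently:
spot_err = np.std(spot - smooth[::60])
boxcar_err = np.std(boxcar - smooth.reshape(24, 60).mean(axis=1))
print(spot_err, boxcar_err)   # the boxcar average suppresses the noise
```

    The trade-off the abstract quantifies is that the boxcar (and Gaussian) averages also distort the amplitude of genuine sub-hourly signal, which spot samples preserve.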

  9. Vibrational resonance: a study with high-order word-series averaging

    CERN Document Server

    Murua, Ander

    2016-01-01

    We study a model problem describing vibrational resonance by means of a high-order averaging technique based on so-called word series. With the technique applied here, the tasks of constructing the averaged system and the associated change of variables are divided into two parts. It is first necessary to build recursively a set of so-called word basis functions and, after that, all the required manipulations involve only scalar coefficients that are computed by means of simple recursions. As distinct from the situation with other approaches, with word series, high-order averaged systems may be derived without having to compute the associated change of variables. In the system considered here, the construction of high-order averaged systems makes it possible to obtain very precise approximations to the true dynamics.


  10. Going Beyond Average Joe's Happiness : Using Quantile Regressions to Analyze the Full Subjective Well-Being Distribution

    OpenAIRE

    Binder, Martin; Coad, Alex

    2010-01-01

    Standard regression techniques are only able to give an incomplete picture of the relationship between subjective well-being and its determinants since the very idea of conventional estimators such as OLS is the averaging out over the whole distribution: studies based on such regression techniques thus are implicitly only interested in Average Joe's happiness. Using cross-sectional data from the British Household Panel Survey (BHPS) for the year 2006, we apply quantile regressions to analyze ...

  11. Determination of averaging period parameter and its effects analysis for eddy covariance measurements

    Institute of Scientific and Technical Information of China (English)

    SUN; Xiaomin; ZHU; Zhilin; XU; Jinping; YUAN; Guofu

    2005-01-01

    It is more and more popular to estimate the exchange of water vapor, heat and CO2 fluxes between the land surface and the atmosphere using the eddy covariance technique. To get reliable fluxes, it is necessary to correct the observations based on the different surface conditions and to determine relevant technical parameters. The raw 10 Hz eddy covariance data observed at the Yucheng and Changbai Mountains stations were recalculated with various averaging periods (from 1 to 720 min), and the recalculated results were compared with the results calculated with an averaging period of 30 min. Meanwhile, the distinctions of fluxes calculated by different averaging periods were analyzed. The continuous 15-day observations over wheat fields at the Yucheng station were mainly analyzed. The results show that: (i) at the Yucheng station, compared with the observations by 30 min, when the averaging period changes from 10 to 60 min, the variations of the eddy-covariance estimates of fluxes were less than 2%; when the averaging period is less than 10 min, the estimates of fluxes reduced obviously with the reduction of the averaging period (the max relative error was -12%); and when the averaging period exceeds 120 min, the eddy covariance estimates of fluxes increase and become unsteady (the max relative error is over 10%); (ii) the eddy covariance estimates of fluxes over wheat fields at the Yucheng station suggested that it is much better to take 10 min as an averaging period in studying diurnal change of fluxes, and 30 min for a long-term flux observation; and (iii) a normalized ratio was put forward to determine the range of averaging period of eddy covariance measurements. By comparing the observations over farmlands and those over forests, it is indicated that the increase of eddy covariance estimates over tall forest was larger than that over short vegetation when the averaging period increased.
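
    The dependence of the flux estimate on the averaging period can be demonstrated with synthetic data. This is a sketch under stated assumptions (correlated turbulent series plus an hour-scale trend; all values invented), not a reproduction of the station data: with block-wise mean removal, a short period removes the slow trend along with the block means, while a long period lets the trend leak into the covariance, inflating the estimate.

```python
import numpy as np

def eddy_flux(w, c, period):
    """Block-wise eddy-covariance flux: remove each block's mean (block
    length = averaging period in samples), then average w'c' over all blocks."""
    n = (len(w) // period) * period
    wb = w[:n].reshape(-1, period)
    cb = c[:n].reshape(-1, period)
    wp = wb - wb.mean(axis=1, keepdims=True)
    cp = cb - cb.mean(axis=1, keepdims=True)
    return (wp * cp).mean()

rng = np.random.default_rng(1)
n = 10 * 60 * 60                              # one hour of 10 Hz samples
turb = rng.normal(size=n)                     # shared turbulent fluctuation
slow = np.sin(2 * np.pi * np.arange(n) / n)   # hour-scale trend in both series
w = turb + 0.5 * rng.normal(size=n) + slow    # "vertical wind" (arbitrary units)
c = turb + 0.5 * rng.normal(size=n) + slow    # "scalar concentration"

for minutes in (1, 10, 30):
    print(minutes, eddy_flux(w, c, period=minutes * 60 * 10))
```

    The longer averaging period yields the larger flux estimate here, mirroring the abstract's finding that estimates grow (and become unsteady) as the period increases.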

  12. Averaging VMAT treatment plans for multi-criteria navigation

    CERN Document Server

    Craft, David; Unkelbach, Jan

    2013-01-01

    The main approach to smooth Pareto surface navigation for radiation therapy multi-criteria treatment planning involves taking real-time averages of pre-computed treatment plans. In fluence-based treatment planning, fluence maps themselves can be averaged, which leads to the dose distributions being averaged due to the linear relationship between fluence and dose. This works for fluence-based photon plans and proton spot scanning plans. In this technical note, we show that two or more sliding window volumetric modulated arc therapy (VMAT) plans can be combined by averaging leaf positions in a certain way, and we demonstrate that the resulting dose distribution for the averaged plan is approximately the average of the dose distributions of the original plans. This leads to the ability to do Pareto surface navigation, i.e. interactive multi-criteria exploration of VMAT plan dosimetric tradeoffs.
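
    The fluence-averaging case rests on the linearity the abstract mentions: because dose depends linearly on fluence, averaging fluence maps averages the dose distributions exactly. A minimal check with an arbitrary linear dose operator (the matrix and fluence vectors are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical linear dose model: dose = A @ fluence (any linear map works).
A = rng.random((50, 20))
f1, f2 = rng.random(20), rng.random(20)

dose_of_average = A @ ((f1 + f2) / 2)
average_of_doses = (A @ f1 + A @ f2) / 2

# Linearity makes the two identical, which is what enables real-time
# Pareto navigation by averaging pre-computed fluence-based plans.
print(np.max(np.abs(dose_of_average - average_of_doses)))
```

    For sliding-window VMAT, leaf positions are not linearly related to dose, which is why the note's leaf-averaging result only holds approximately.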

  13. Averaging and exact perturbations in LTB dust models

    CERN Document Server

    Sussman, Roberto A

    2012-01-01

    We introduce a scalar weighted average ("q-average") acting on concentric comoving domains in spherically symmetric Lemaitre-Tolman-Bondi (LTB) dust models. The resulting averaging formalism allows for an elegant coordinate independent dynamical study of the models, providing as well a valuable theoretical insight on the properties of scalar averaging in inhomogeneous spacetimes. The q-averages of those covariant scalars common to FLRW models (the "q-scalars") identically satisfy FLRW evolution laws and determine for every domain a unique FLRW background state. All curvature and kinematic proper tensors and their invariant contractions are expressible in terms of the q-scalars and their linear and quadratic local fluctuations, which convey the effects of inhomogeneity through the ratio of Weyl to Ricci curvature invariants and the magnitude of radial gradients. We define also non-local fluctuations associated with the intuitive notion of a "contrast" with respect to FLRW reference averaged values assigned to a...

  14. Improved performance of high average power semiconductor arrays for applications in diode pumped solid state lasers

    Energy Technology Data Exchange (ETDEWEB)

    Beach, R.; Emanuel, M.; Benett, W.; Freitas, B.; Ciarlo, D.; Carlson, N.; Sutton, S.; Skidmore, J.; Solarz, R.

    1994-01-01

    The average power performance capability of semiconductor diode laser arrays has improved dramatically over the past several years. These performance improvements, combined with cost reductions pursued by LLNL and others in the fabrication and packaging of diode lasers, have continued to reduce the price per average watt of laser diode radiation. Presently, we are at the point where the manufacturers of commercial high average power solid state laser systems used in material processing applications can now seriously consider the replacement of their flashlamp pumps with laser diode pump sources. Additionally, a low cost technique developed and demonstrated at LLNL for optically conditioning the output radiation of diode laser arrays has enabled a new and scalable average power diode-end-pumping architecture that can be simply implemented in diode pumped solid state laser systems (DPSSLs). This development allows the high average power DPSSL designer to look beyond the Nd ion for the first time. Along with high average power DPSSLs which are appropriate for material processing applications, low and intermediate average power DPSSLs are now realizable at low enough costs to be attractive for use in many medical, electronic, and lithographic applications.

  15. Validity of a Wearable Accelerometer Device to Measure Average Acceleration Values During High-Speed Running.

    Science.gov (United States)

    Alexander, Jeremy P; Hopkinson, Trent L; Wundersitz, Daniel W T; Serpell, Benjamin G; Mara, Jocelyn K; Ball, Nick B

    2016-11-01

    Alexander, JP, Hopkinson, TL, Wundersitz, DWT, Serpell, BG, Mara, JK, and Ball, NB. Validity of a wearable accelerometer device to measure average acceleration values during high-speed running. J Strength Cond Res 30(11): 3007-3013, 2016. The aim of this study was to determine the validity of an accelerometer to measure average acceleration values during high-speed running. Thirteen subjects performed three sprint efforts over a 40-m distance (n = 39). Acceleration was measured using a 100-Hz triaxial accelerometer integrated within a wearable tracking device (SPI-HPU; GPSports). To provide a concurrent measure of acceleration, timing gates were positioned at 10-m intervals (0-40 m). Accelerometer data collected during 0-10 m and 10-20 m provided a measure of average acceleration values. Accelerometer data was recorded as the raw output and filtered by applying a 3-point moving average and a 10-point moving average. The accelerometer could not measure average acceleration values during high-speed running. The accelerometer significantly overestimated average acceleration values during both 0-10 m and 10-20 m, regardless of the data filtering technique (p < 0.001). Body mass significantly affected all accelerometer variables (p < 0.10, partial η = 0.091-0.219). Body mass and the absence of a gravity compensation formula affect the accuracy and practicality of accelerometers. Until GPSports-integrated accelerometers incorporate a gravity compensation formula, the usefulness of any accelerometer-derived algorithms is questionable.
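
    The 3-point and 10-point moving-average filtering mentioned above is a plain boxcar convolution. A sketch on synthetic data (the decaying-acceleration profile, noise level, and sampling rate are illustrative assumptions, not the study's recordings):

```python
import numpy as np

def moving_average(x, window):
    """Moving-average (boxcar) filter via convolution, same-length output."""
    return np.convolve(x, np.ones(window) / window, mode="same")

rng = np.random.default_rng(3)
t = np.linspace(0, 4, 400)                      # ~4 s sprint sampled at 100 Hz
true_accel = 5 * np.exp(-t)                     # idealized decaying acceleration
raw = true_accel + rng.normal(0, 1.0, t.size)   # noisy accelerometer output

smooth3 = moving_average(raw, 3)                # 3-point moving average
smooth10 = moving_average(raw, 10)              # 10-point moving average

# Compare residual noise away from the edges of the record:
sl = slice(20, -20)
err = lambda x: np.std(x[sl] - true_accel[sl])
print(err(raw), err(smooth3), err(smooth10))    # wider window, less noise
```

    Filtering reduces noise but, as the study found, cannot correct the systematic overestimation caused by the missing gravity compensation.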

  16. Distributed Weighted Parameter Averaging for SVM Training on Big Data

    OpenAIRE

    Das, Ayan; Bhattacharya, Sourangshu

    2015-01-01

    Two popular approaches for distributed training of SVMs on big data are parameter averaging and ADMM. Parameter averaging is efficient but suffers from loss of accuracy with increase in number of partitions, while ADMM in the feature space is accurate but suffers from slow convergence. In this paper, we report a hybrid approach called weighted parameter averaging (WPA), which optimizes the regularized hinge loss with respect to weights on parameters. The problem is shown to be the same as solving...
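
    The baseline that WPA improves on, plain parameter averaging, is easy to sketch. The toy below uses ridge-regularized least-squares models per partition instead of hinge-loss SVMs (an assumption made to keep the example self-contained); WPA would additionally learn weights on the averaged parameters rather than using the uniform average shown here.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, parts = 1200, 5, 4
X = rng.normal(size=(n, d))
w_true = np.arange(1.0, d + 1)                  # ground-truth parameters
y = X @ w_true + rng.normal(0, 0.1, n)

lam = 1e-3
def fit(Xp, yp):
    """Closed-form ridge solution on one data partition."""
    return np.linalg.solve(Xp.T @ Xp + lam * np.eye(d), Xp.T @ yp)

w_parts = [fit(Xp, yp)
           for Xp, yp in zip(np.array_split(X, parts), np.array_split(y, parts))]
w_avg = np.mean(w_parts, axis=0)                # uniform parameter averaging
print(np.linalg.norm(w_avg - w_true))           # close to the true parameters
```

    With many more partitions (fewer samples each), the per-partition estimates degrade, which is the accuracy loss WPA's learned weights are designed to mitigate.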

  17. On the average crosscap number Ⅱ: Bounds for a graph

    Institute of Scientific and Technical Information of China (English)

    Yi-chao CHEN; Yan-pei LIU

    2007-01-01

    The bounds are obtained for the average crosscap number. Let G be a graph which is not a tree. It is shown that the average crosscap number of G is not less than (2^{β(G)-1}/(2^{β(G)}-1))·β(G) and not larger than β(G). Furthermore, we also describe the structure of the graphs which attain the bounds of the average crosscap number.
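
    The bounds are simple to evaluate. The formula below follows the reading reconstructed from the (garbled) abstract and should be treated as an assumption: lower bound 2^(β-1)/(2^β - 1) · β, upper bound β, where β(G) is the cycle rank of G.

```python
from fractions import Fraction

def crosscap_bounds(beta):
    """Lower and upper bounds on the average crosscap number of a graph
    with cycle rank beta >= 1 (formula as reconstructed from the abstract)."""
    lower = Fraction(2 ** (beta - 1), 2 ** beta - 1) * beta
    upper = Fraction(beta)
    return lower, upper

for beta in (1, 2, 3, 4):
    lo, hi = crosscap_bounds(beta)
    print(beta, lo, hi)   # e.g. beta=2 gives bounds 4/3 and 2
```

    Note the bounds coincide at β = 1 and the gap widens as β grows, since the lower-bound prefactor tends to 1/2.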

  18. On the average crosscap number II: Bounds for a graph

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The bounds are obtained for the average crosscap number. Let G be a graph which is not a tree. It is shown that the average crosscap number of G is not less than (2^{β(G)-1}/(2^{β(G)}-1))·β(G) and not larger than β(G). Furthermore, we also describe the structure of the graphs which attain the bounds of the average crosscap number.

  19. Group devaluation and group identification

    NARCIS (Netherlands)

    Leach, C.W.; Rodriguez Mosquera, P.M.; Vliek, M.L.W.; Hirt, E.

    2010-01-01

    In three studies, we showed that increased in-group identification after (perceived or actual) group devaluation is an assertion of a (preexisting) positive social identity that counters the negative social identity implied in societal devaluation. Two studies with real-world groups used order manip

  20. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365) also has minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
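
    The reported optimum is easy to put in perspective against the information-theoretic lower bound for comparison sorting, log2(8!), which any decision tree's average depth must exceed:

```python
import math
from fractions import Fraction

# The reported optimum: minimum average depth 620160/8! comparisons.
avg_depth = Fraction(620160, math.factorial(8))
print(avg_depth, float(avg_depth))      # 323/21 ≈ 15.381

# Information-theoretic lower bound on average depth: log2(8!).
print(math.log2(math.factorial(8)))     # ≈ 15.299
```

    The optimum average depth (≈15.381) sits less than a tenth of a comparison above the entropy bound, while the minimum worst-case depth for 8 elements is an integer number of comparisons.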

  1. Practical definition of averages of tensors in general relativity

    CERN Document Server

    Boero, Ezequiel F

    2016-01-01

    We present a definition of tensor fields which are average of tensors over a manifold, with a straightforward and natural definition of derivative for the averaged fields; which in turn makes a suitable and practical construction for the study of averages of tensor fields that satisfy differential equations. Although we have in mind applications to general relativity, our presentation is applicable to a general n-dimensional manifold. The definition is based on the integration of scalars constructed from a physically motivated basis, making use of the least amount of geometrical structure. We also present definitions of covariant derivative of the averaged tensors and Lie derivative.

  2. TRAC Innovative Visualization Techniques

    Science.gov (United States)

    2016-11-14


  3. A conversion formula for comparing pulse oximeter desaturation rates obtained with different averaging times.

    Directory of Open Access Journals (Sweden)

    Jan Vagedes

    Full Text Available OBJECTIVE: The number of desaturations determined in recordings of pulse oximeter saturation (SpO2) primarily depends on the time over which values are averaged. As the averaging time in pulse oximeters is not standardized, it varies considerably between centers. To make SpO2 data comparable, it is thus desirable to have a formula that allows conversion between desaturation rates obtained using different averaging times for various desaturation levels and minimal durations. METHODS: Oxygen saturation was measured for 170 hours in 12 preterm infants with a mean number of 65 desaturations <90% per hour of arbitrary duration by using a pulse oximeter in a 2-4 s averaging mode. Using 7 different averaging times between 3 and 16 seconds, the raw red-to-infrared data were reprocessed to determine the number of desaturations (D). The whole procedure was carried out for 7 different minimal desaturation durations (≥1, ≥5, ≥10, ≥15, ≥20, ≥25, ≥30 s) below SpO2 threshold values of 80%, 85% or 90% to finally reach a conversion formula. The formula was validated by splitting the infants into two groups of six children each and using one group each as a training set and the other one as a test set. RESULTS: Based on the linear relationship found between the logarithm of the desaturation rate and the logarithm of the averaging time, the conversion formula is: D2 = D1 · (T2/T1)^c, where D2 is the desaturation rate for the desired averaging time T2, and D1 is the desaturation rate for the original averaging time T1, with the exponent c depending on the desaturation threshold and the minimal desaturation duration. The median error when applying this formula was 2.6%. CONCLUSION: This formula enables the conversion of desaturation rates between different averaging times for various desaturation thresholds and minimal desaturation durations.
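
    The power-law conversion is a one-liner to apply. The exponent value used below (c = -0.8) is a hypothetical illustration, not one of the paper's calibrated exponents, which depend on threshold and minimal duration:

```python
def convert_desaturation_rate(d1, t1, t2, c):
    """Convert a desaturation rate d1 obtained with averaging time t1 (s)
    to the rate expected with averaging time t2 (s), via D2 = D1*(T2/T1)**c.
    The exponent c must come from the paper's calibration for the chosen
    SpO2 threshold and minimal desaturation duration."""
    return d1 * (t2 / t1) ** c

# Illustrative only: hypothetical c = -0.8 (longer averaging smooths the
# SpO2 trace, so fewer desaturation events are counted per hour).
print(convert_desaturation_rate(65.0, 3.0, 16.0, c=-0.8))
```

    A negative exponent encodes the expected behavior: increasing the averaging time from 3 s to 16 s substantially lowers the counted desaturation rate.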

  4. Scalp imaging techniques

    Science.gov (United States)

    Otberg, Nina; Shapiro, Jerry; Lui, Harvey; Wu, Wen-Yu; Alzolibani, Abdullateef; Kang, Hoon; Richter, Heike; Lademann, Jürgen

    2017-05-01

    Scalp imaging techniques are necessary tools for trichological practice, for visualization of permeation, penetration and absorption processes into and through the scalp, and for research on drug delivery and toxicology. The present letter reviews different scalp imaging techniques and discusses their utility. Moreover, two different studies on scalp imaging techniques are presented: (1) scalp imaging with phototrichograms in combination with laser scanning microscopy, and (2) follicular measurements with cyanoacrylate surface replicas and light microscopy in combination with laser scanning microscopy. The experiments compare different methods for determining hair density on the scalp and different follicular measures. An average terminal hair density of 132 hairs cm⁻² was found in 6 Caucasian volunteers and 135 hairs cm⁻² in 6 Asian volunteers. The area of the follicular orifices accounts for 16.3% of the skin surface on average, as measured with laser scanning microscopy images. The potential volume of the follicular infundibulum, calculated from the laser scanning measurements, was found to be 4.63 mm³ per cm² of skin on average. The experiments show that hair follicles are quantitatively relevant pathways and potential reservoirs for topically applied drugs and cosmetics.

  5. Reconstruction of time-averaged temperature of non-axisymmetric turbulent unconfined sooting flame by inverse radiation analysis

    CERN Document Server

    Liu, L H

    2003-01-01

    A multi-wavelength inversion method is extended to reconstruct the time-averaged temperature distribution in a non-axisymmetric turbulent unconfined sooting flame from multi-wavelength, low-time-resolution measurements of outgoing emission and transmission radiation intensities. Gaussian, beta and uniform probability density functions (PDFs) are used, respectively, to model the turbulent fluctuation of temperature. The reconstruction of the time-averaged temperature consists of three steps. First, the time-averaged spectral absorption coefficient is retrieved from the time-averaged transmissivity data by an algebraic reconstruction technique. Then, the time-averaged blackbody spectral radiation intensity is estimated from the outgoing spectral emission radiation intensities. Finally, the time-averaged temperature is approximately reconstructed from the multi-wavelength time-averaged spectral emission radiation data by the least-squares method. Noisy input data have been used to test the performance ...
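    The first step names an algebraic reconstruction technique (ART). The record gives no implementation detail, so the following is a generic Kaczmarz-style ART sketch for a linear system Ax = b, not the authors' code; the matrix and right-hand side are arbitrary illustrations.

```python
import numpy as np

def art_solve(A, b, iterations=200, relax=1.0):
    """Kaczmarz-style algebraic reconstruction technique (ART): cycle
    through the rows, projecting the estimate onto each hyperplane
    a_i . x = b_i, optionally under-relaxed by `relax`."""
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        for a_i, b_i in zip(A, b):
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = art_solve(A, b)  # converges toward the solution of Ax = b
```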

  6. Clinical Study of Acoustic Densitometry Technique in Detecting Atherosclerotic Plaque

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Objective: To investigate the effect of Quyu Xiaoban Capsule (祛瘀消斑, QYXB) on the regressive treatment of atherosclerosis (AS) with the acoustic densitometry (AD) technique. Methods: Eighty patients with AS were randomly divided into two groups: the trial group was treated with QYXB plus conventional medicine, and the control group with conventional medicine alone. Normal arterial wall and different types of atherosclerotic plaques were examined with the AD technique before treatment and 10 months later. Results: The corrected average intimal echo intensity (AIIc%) was elevated in both groups, but without significant difference between them. The AIIc% of fatty plaques increased in both groups, and the post-treatment value in the trial group was significantly higher than the pre-treatment value (68.12±5.54 vs 61.43±5.37, P<0.05). The increment rate of AIIc% in the trial group was significantly higher than that in the control group (10.9±5.1% vs 2.5±5.5%, P<0.05). Conclusion: QYXB can stabilize atherosclerotic plaque by increasing its acoustic density. The acoustic densitometry technique can differentiate histologically different plaques and monitor histological changes in plaques during treatment.

  7. Charging for computer usage with average cost pricing

    CERN Document Server

    Landau, K

    1973-01-01

    This preliminary report, directed mainly at commercial computer centres, gives an introduction to the application of average cost pricing when charging for the use of computer resources. A description of the cost structure of a computer installation shows the advantages and disadvantages of average cost pricing. This is completed by a discussion of the different charging rates that are possible. (10 refs).
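    Average cost pricing simply sets the unit charge to total cost divided by total usage, so that billing exactly recovers costs. A minimal sketch with hypothetical figures (the cost and usage numbers are invented for illustration):

```python
def average_cost_rate(total_cost, total_usage):
    """Average cost pricing: the unit rate that exactly recovers total cost."""
    return total_cost / total_usage

# Hypothetical monthly figures for a computer centre:
monthly_cost = 120_000.0   # fixed + variable costs, in some currency
cpu_seconds = 2_400_000.0  # billable usage over the same month
rate = average_cost_rate(monthly_cost, cpu_seconds)  # cost per CPU-second
charge = rate * 9_000      # charge for a job consuming 9000 CPU-seconds
```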

  8. On the Average-Case Complexity of Shellsort

    NARCIS (Netherlands)

    Vitányi, P.M.B.

    2015-01-01

    We prove a lower bound expressed in the increment sequence on the average-case complexity (number of inversions, which is proportional to the running time) of Shellsort. This lower bound is sharp in every case where it could be checked. We obtain new results, e.g. determining the average-case complexity ...

  9. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
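    The record does not reproduce the note's derivations, but the core idea can be sketched: an intercept-only least-squares fit recovers the arithmetic mean, and transforming the response first yields the geometric and harmonic means. This is a minimal illustration, not the authors' full treatment of weighted averages.

```python
import numpy as np

y = np.array([2.0, 4.0, 8.0])
X = np.ones((len(y), 1))  # intercept-only design matrix

# Arithmetic mean: the OLS intercept when regressing y on a constant.
arith = np.linalg.lstsq(X, y, rcond=None)[0][0]

# Geometric mean: regress log(y) on a constant, then exponentiate.
geo = np.exp(np.linalg.lstsq(X, np.log(y), rcond=None)[0][0])

# Harmonic mean: regress 1/y on a constant, then take the reciprocal.
harm = 1.0 / np.linalg.lstsq(X, 1.0 / y, rcond=None)[0][0]
```

Weighted versions of each average follow by switching to weighted least squares, which is the note's broader point.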

  10. Analytic computation of average energy of neutrons inducing fission

    Energy Technology Data Exchange (ETDEWEB)

    Clark, Alexander Rich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-12

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.
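    The quantity in question is a flux- and cross-section-weighted mean, ⟨E⟩ = ∫E φ(E)σ_f(E)dE / ∫φ(E)σ_f(E)dE. The sketch below shows that weighting numerically; the spectrum shape and cross section are hypothetical stand-ins, not BeRP-ball data.

```python
import numpy as np

E = np.linspace(0.01, 10.0, 2000)      # energy grid in MeV
phi = E * np.exp(-E / 1.3)             # hypothetical fission-like spectrum shape
sigma_f = 1.0 / np.sqrt(E)             # hypothetical 1/v-like cross section

# Average energy of neutrons inducing fission: weight the energy by the
# product of flux and fission cross section, then normalize.
weights = phi * sigma_f
avg_energy = np.sum(E * weights) / np.sum(weights)
```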

  11. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    ... in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions in both speed and number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control....

  12. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...
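    What the replica/variational machinery approximates analytically is the brute-force Monte Carlo bootstrap average below. This sketch uses plain linear regression rather than Gaussian processes, and all data are synthetic, so it illustrates only the quantity being averaged, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 1))
y = 2.0 * x[:, 0] + rng.normal(scale=0.3, size=50)

# Brute-force bootstrap average of a regression error; the paper replaces
# this Monte Carlo loop with an analytic replica/variational approximation.
errors = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))           # resample with replacement
    w = np.linalg.lstsq(x[idx], y[idx], rcond=None)[0]
    errors.append(np.mean((x[:, 0] * w[0] - y) ** 2))
bootstrap_avg_error = np.mean(errors)
```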

  13. A Simple Geometrical Derivation of the Spatial Averaging Theorem.

    Science.gov (United States)

    Whitaker, Stephen

    1985-01-01

    The connection between single phase transport phenomena and multiphase transport phenomena is easily accomplished by means of the spatial averaging theorem. Although different routes to the theorem have been used, this paper provides a route to the averaging theorem that can be used in undergraduate classes. (JN)
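    For reference, the spatial averaging theorem relates the average of a gradient to the gradient of the average plus an interfacial term. The notation below follows common usage (V: averaging volume, A_βσ: the β-σ interface within it, n_βσ: unit normal pointing from phase β to phase σ), which may differ from the paper's:

```latex
\left\langle \nabla \psi_\beta \right\rangle
  = \nabla \left\langle \psi_\beta \right\rangle
  + \frac{1}{V} \int_{A_{\beta\sigma}} \psi_\beta \, \mathbf{n}_{\beta\sigma} \, dA
```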

  14. Averaged EMG profiles in jogging and running at different speeds

    NARCIS (Netherlands)

    Gazendam, Marnix G. J.; Hof, At L.

    2007-01-01

    EMGs were collected from 14 muscles with surface electrodes in 10 subjects walking at 1.25-2.25 m s(-1) and running at 1.25-4.5 m s(-1). The EMGs were rectified, interpolated to 100% of the stride, and averaged over all subjects to give an average profile. In running, these profiles could be decomposed in
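    The rectify-normalize-average pipeline described above can be sketched as follows. The synthetic "strides" are stand-ins for real EMG segments of varying length; the function names are illustrative, not from the paper.

```python
import numpy as np

def average_emg_profile(strides, n_points=100):
    """Rectify each stride's EMG, resample it onto a fixed 0-100% stride
    cycle, and average across strides to obtain a mean profile."""
    profiles = []
    for emg in strides:                       # each stride: 1-D array, any length
        rectified = np.abs(emg)
        t_old = np.linspace(0.0, 1.0, len(rectified))
        t_new = np.linspace(0.0, 1.0, n_points)
        profiles.append(np.interp(t_new, t_old, rectified))
    return np.mean(profiles, axis=0)

# Synthetic strides of unequal length standing in for recorded EMG bursts.
strides = [np.sin(np.linspace(0.0, 2.0 * np.pi, n)) for n in (90, 110, 120)]
profile = average_emg_profile(strides)  # 100-point averaged profile
```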

  15. Average widths of anisotropic Besov-Wiener classes

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper concerns the problem of the average σ-K width and average σ-L width of the anisotropic Besov-Wiener classes S^r_{pqθ}b(R^d) and S^r_{pqθ}B(R^d) in L_q(R^d) (1≤q≤p<∞). The weak asymptotic behavior is established for the corresponding quantities.

  16. 7 CFR 701.17 - Average adjusted gross income limitation.

    Science.gov (United States)

    2010-01-01

    ... 9003), each applicant must meet the provisions of the Adjusted Gross Income Limitations at 7 CFR part... 7 Agriculture (2010-01-01 ed.). § 701.17 Average adjusted gross income limitation, under RELATED PROGRAMS PREVIOUSLY ADMINISTERED UNDER THIS PART.

  17. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy...
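    A moving average random field is constructed by smoothing a noise (Lévy) basis with a kernel. The 1-D discretized sketch below uses a Gaussian kernel and Gaussian noise, the simplest case falling under this framework; grid sizes and bandwidth are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian smoothing kernel on a local grid (bandwidth 0.5, illustrative).
u = np.linspace(-3.0, 3.0, 61)
kernel = np.exp(-0.5 * (u / 0.5) ** 2)

# Discretized white noise; using Gaussian increments gives a Gaussian field.
noise = rng.normal(size=500)

# Kernel smoothing of the noise = a moving average random field on the grid.
field = np.convolve(noise, kernel, mode="same")
```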

  18. (Average-) convexity of common pool and oligopoly TU-games

    NARCIS (Netherlands)

    Driessen, T.S.H.; Meinhardt, H.

    2000-01-01

    The paper studies both the convexity and average-convexity properties for a particular class of cooperative TU-games called common pool games. The common pool situation involves a cost function as well as a (weakly decreasing) average joint production function. Firstly, it is shown that, if the rele

  19. Average widths of anisotropic Besov-Wiener classes

    Institute of Scientific and Technical Information of China (English)

    蒋艳杰

    2000-01-01

    This paper concerns the problem of the average σ-K width and average σ-L width of the anisotropic Besov-Wiener classes S^r_{pqθ}b(R^d) and S^r_{pqθ}B(R^d) in L_q(R^d) (1≤q≤p<∞). The weak asymptotic behavior is established for the corresponding quantities.

  20. Remarks on the Lower Bounds for the Average Genus

    Institute of Scientific and Technical Information of China (English)

    Yi-chao Chen

    2011-01-01

    Let G be a graph of maximum degree at most four. Using the overlap matrix method introduced by B. Mohar, we show that the average genus of G is at least 1/3 of its maximum genus, and that the bound is best possible. A new lower bound on the average genus in terms of girth is also derived.