WorldWideScience

Sample records for group averaging techniques

  1. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to varying extents. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
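For contrast with the FTDA described above, the conventional TDA that the abstract characterizes as a comb filter can be sketched in a few lines: slice the signal into whole periods and average them coherently. The period length and test signal below are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

# Conventional time domain averaging (TDA): slice the signal into an
# integer number of revolutions and average them coherently. Periodic
# components add in phase; zero-mean noise averages down by roughly
# sqrt(n_periods). Period length and signal are illustrative assumptions.
rng = np.random.default_rng(0)
period = 128                       # samples per revolution (assumed)
n_periods = 200
t = np.arange(period * n_periods)
clean = np.sin(2 * np.pi * t / period) + 0.5 * np.sin(6 * np.pi * t / period)
noisy = clean + rng.normal(0.0, 1.0, t.size)

tda = noisy.reshape(n_periods, period).mean(axis=0)

residual = tda - clean[:period]
print(np.std(residual))  # roughly 1/sqrt(200), i.e. about 0.07
```

Note that this simple reshape is exactly where PCE enters in practice: it assumes the period is an integer number of samples, which the FTDA avoids by reconstructing in the continuous time domain.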

  2. Time-dependence and averaging techniques in atomic photoionization calculations

    International Nuclear Information System (INIS)

    Scheibner, K.F.

    1984-01-01

Two distinct problems in the development and application of averaging techniques to photoionization calculations are considered. The first part of the thesis is concerned with the specific problem of near-resonant three-photon ionization in hydrogen, a process for which no cross section exists. The effects of including the laser pulse characteristics (both temporal and spatial) on the dynamics of the ionization probability and of the metastable-state probability are examined. It is found, for example, that the ionization probability can decrease with increasing field intensity. The temporal profile of the laser pulse is found to affect the dynamics very little, whereas the spatial character of the pulse can affect the results drastically. In the second part of the thesis, techniques are developed for calculating averaged cross sections directly, so that the detailed cross section never has to be calculated as an intermediate step. A variation of the moment technique and a new method based on the stabilization technique are applied successfully to atomic hydrogen and helium

  3. An application of commercial data averaging techniques in pulsed photothermal experiments

    International Nuclear Information System (INIS)

    Grozescu, I.V.; Moksin, M.M.; Wahab, Z.A.; Yunus, W.M.M.

    1997-01-01

We present an application of the data averaging technique commonly implemented in many commercial digital oscilloscopes and waveform digitizers. The technique was used for transient data averaging in pulsed photothermal radiometry experiments. Photothermal signals are accompanied by a significant amount of noise, which affects the precision of the measurements. The effect of the noise level on the photothermal signal parameter of interest in our particular case, the fitted decay time, is shown. The results of the analysis can be used in choosing the most effective averaging technique and in estimating the averaging parameter values. This helps to reduce the data acquisition time while improving the signal-to-noise ratio
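The oscilloscope-style transient averaging the abstract refers to can be sketched as follows; the exponential decay and noise level are assumed stand-ins for a photothermal transient, and the point is the sqrt(N) noise reduction that trades acquisition time against signal-to-noise ratio.

```python
import numpy as np

# Average N synchronized records of a repeated transient. For
# uncorrelated noise of standard deviation sigma, the averaged record
# has residual noise sigma/sqrt(N). Decay time and sigma are assumed.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 500)
signal = np.exp(-t / 1.2)          # assumed decay transient (arbitrary units)
sigma = 0.3
n_records = 64

records = signal + rng.normal(0.0, sigma, (n_records, t.size))
averaged = records.mean(axis=0)

noise_before = np.std(records[0] - signal)
noise_after = np.std(averaged - signal)
print(noise_before, noise_after)   # about 0.3 vs about 0.3/8
```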

  4. Signal averaging technique for noninvasive recording of late potentials in patients with coronary artery disease

    Science.gov (United States)

    Abboud, S.; Blatt, C. M.; Lown, B.; Graboys, T. B.; Sadeh, D.; Cohen, R. J.

    1987-01-01

An advanced noninvasive signal averaging technique was used to detect late potentials in two groups of patients: Group A (24 patients) with coronary artery disease (CAD) and without sustained ventricular tachycardia (VT), and Group B (8 patients) with CAD and sustained VT. Recorded analog data were digitized and aligned using a cross-correlation function with a fast Fourier transform scheme, averaged, and band-pass filtered between 60 and 200 Hz with a non-recursive digital filter. The averaged, filtered waveforms were analyzed by a computer program for 3 parameters: (1) filtered QRS (fQRS) duration; (2) interval between the peak of the R wave and the end of the fQRS (R-LP); (3) RMS value of the last 40 msec of the fQRS (RMS). Significant differences were found between Groups A and B in fQRS (101 ± 13 msec vs 123 ± 15 msec; p < .0005) and in R-LP (52 ± 11 msec vs 71 ± 18 msec; p < .002). We conclude that (1) the use of a cross-correlation triggering method and a non-recursive digital filter enables reliable recording of late potentials from the body surface; (2) fQRS and R-LP durations are sensitive indicators of CAD patients susceptible to VT.
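The align-then-average step can be sketched with a synthetic pulse: estimate each beat's lag against a template by cross-correlation, undo the shift, then average. The pulse shape, shifts, and FFT-based lag estimate below are illustrative assumptions; the paper's 60-200 Hz band-pass filtering is omitted.

```python
import numpy as np

# Cross-correlation alignment before averaging, sketched on a
# synthetic "QRS"-like pulse. Shifted, noisy copies are realigned by
# locating the circular cross-correlation peak, then averaged.
rng = np.random.default_rng(2)
n = 256
template = np.exp(-0.5 * ((np.arange(n) - n // 2) / 5.0) ** 2)  # narrow pulse

beats = []
for shift in (-7, 3, 5, -2, 0, 6):          # assumed trigger jitter, in samples
    beat = np.roll(template, shift) + rng.normal(0, 0.05, n)
    # Circular cross-correlation via FFT; its peak gives the lag.
    xcorr = np.fft.ifft(np.fft.fft(beat) * np.conj(np.fft.fft(template))).real
    lag = int(np.argmax(xcorr))
    if lag > n // 2:
        lag -= n                             # interpret large lags as negative shifts
    beats.append(np.roll(beat, -lag))

averaged = np.mean(beats, axis=0)
print(np.argmax(averaged))                   # peak restored near n//2 = 128
```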

  5. The Effect of Buzz Group Technique and Clustering Technique in Teaching Writing at the First Class of SMA HKBP I Tarutung

    Science.gov (United States)

    Pangaribuan, Tagor; Manik, Sondang

    2018-01-01

This research was conducted at SMA HKBP 1 Tarutung, North Sumatra, on the test results of the students of classes XI[superscript 2] and XI[superscript 2] after they received treatment in the teaching of writing recount texts using the buzz group and clustering techniques. The average score (X) was 67.7, and for the buzz group the average score (X) was 77.2, and in…

  6. Diagram Techniques in Group Theory

    Science.gov (United States)

    Stedman, Geoffrey E.

    2009-09-01

    Preface; 1. Elementary examples; 2. Angular momentum coupling diagram techniques; 3. Extension to compact simple phase groups; 4. Symmetric and unitary groups; 5. Lie groups and Lie algebras; 6. Polarisation dependence of multiphoton processes; 7. Quantum field theoretic diagram techniques for atomic systems; 8. Applications; Appendix; References; Indexes.

  7. A group's physical attractiveness is greater than the average attractiveness of its members: the group attractiveness effect.

    Science.gov (United States)

    van Osch, Yvette; Blanken, Irene; Meijs, Maartje H J; van Wolferen, Job

    2015-04-01

    We tested whether the perceived physical attractiveness of a group is greater than the average attractiveness of its members. In nine studies, we find evidence for the so-called group attractiveness effect (GA-effect), using female, male, and mixed-gender groups, indicating that group impressions of physical attractiveness are more positive than the average ratings of the group members. A meta-analysis on 33 comparisons reveals that the effect is medium to large (Cohen's d = 0.60) and moderated by group size. We explored two explanations for the GA-effect: (a) selective attention to attractive group members, and (b) the Gestalt principle of similarity. The results of our studies are in favor of the selective attention account: People selectively attend to the most attractive members of a group and their attractiveness has a greater influence on the evaluation of the group. © 2015 by the Society for Personality and Social Psychology, Inc.

  8. A group's physical attractiveness is greater than the average attractiveness of its members : The group attractiveness effect

    NARCIS (Netherlands)

    van Osch, Y.M.J.; Blanken, Irene; Meijs, Maartje H. J.; van Wolferen, Job

    2015-01-01

    We tested whether the perceived physical attractiveness of a group is greater than the average attractiveness of its members. In nine studies, we find evidence for the so-called group attractiveness effect (GA-effect), using female, male, and mixed-gender groups, indicating that group impressions of

  9. A Group Neighborhood Average Clock Synchronization Protocol for Wireless Sensor Networks

    Science.gov (United States)

    Lin, Lin; Ma, Shiwei; Ma, Maode

    2014-01-01

Clock synchronization is a very important issue for the applications of wireless sensor networks. The sensors need to keep a strict clock so that users can know exactly what happens in the monitoring area at the same time. This paper proposes a novel internal distributed clock synchronization solution using the group neighborhood average. Each sensor node collects the offsets and skew rates of its neighbors. Group averages of the offset and skew rate values are calculated instead of using the conventional point-to-point averaging method. The sensor node then returns the compensated value back to the neighbors. The propagation delay is considered and compensated. An analytical treatment of the offset and skew compensation is presented. Simulation results validate the effectiveness of the protocol and reveal that it allows sensor networks to quickly establish a consensus clock and maintain a small deviation from the consensus clock. PMID:25120163

  10. A Group Neighborhood Average Clock Synchronization Protocol for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Lin Lin

    2014-08-01

Clock synchronization is a very important issue for the applications of wireless sensor networks. The sensors need to keep a strict clock so that users can know exactly what happens in the monitoring area at the same time. This paper proposes a novel internal distributed clock synchronization solution using the group neighborhood average. Each sensor node collects the offsets and skew rates of its neighbors. Group averages of the offset and skew rate values are calculated instead of using the conventional point-to-point averaging method. The sensor node then returns the compensated value back to the neighbors. The propagation delay is considered and compensated. An analytical treatment of the offset and skew compensation is presented. Simulation results validate the effectiveness of the protocol and reveal that it allows sensor networks to quickly establish a consensus clock and maintain a small deviation from the consensus clock.
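The neighborhood-averaging idea can be sketched as a consensus iteration: each node repeatedly replaces its clock offset with the average over its closed neighborhood. The topology, initial offsets, and iteration count below are assumptions, and the protocol's skew and propagation-delay compensation are omitted.

```python
# Consensus-by-neighborhood-averaging sketch. On a connected graph,
# repeatedly averaging each node's offset with its neighbors' offsets
# drives all nodes to a common consensus clock.
offsets = [0.0, 4.0, -3.0, 7.0, 2.0]        # initial clock offsets in ms (assumed)
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 4], 4: [2, 3]}  # 5-node ring

for _ in range(50):
    # Simultaneous update: every node takes the mean over itself + neighbors.
    offsets = [
        sum([offsets[i]] + [offsets[j] for j in neighbors[i]]) / (1 + len(neighbors[i]))
        for i in range(len(offsets))
    ]

spread = max(offsets) - min(offsets)
print(spread)   # near zero: the nodes agree on a consensus clock
```

On this regular ring the update matrix is doubly stochastic, so the consensus value is the mean of the initial offsets (2.0 ms here); on irregular graphs the consensus point depends on the topology.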

  11. Large-signal analysis of DC motor drive system using state-space averaging technique

    International Nuclear Information System (INIS)

    Bekir Yildiz, Ali

    2008-01-01

The analysis of a separately excited DC motor driven by a DC-DC converter is carried out using the state-space averaging technique. First, a general and unified large-signal averaged circuit model for DC-DC converters is given. The method converts power electronic systems, which are periodically time-variant because of their switching operation, into unified, time-independent systems. Using the averaged circuit model makes it possible to combine the different converter topologies. Thus, all analysis and design processes for the DC motor can be easily carried out using the unified averaged model, which is valid over the whole switching period. Large-signal variations such as the motor speed and current, the steady-state analysis, and the large-signal and small-signal transfer functions are easily obtained from the averaged circuit model
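The averaging step itself can be sketched for an ideal buck converter: average the two switched state-space models weighted by the duty cycle, then analyze the single time-invariant model. Component values below are assumptions; the averaged model recovers the familiar DC relation Vout = d·Vin.

```python
import numpy as np

# State-space averaging sketch for an ideal buck converter:
# A_avg = d*A_on + (1-d)*A_off, B_avg = d*B_on + (1-d)*B_off,
# with state x = [inductor current, capacitor voltage].
L, C, R = 1e-3, 100e-6, 10.0     # inductance, capacitance, load (assumed)
Vin, d = 24.0, 0.5               # input voltage and duty cycle (assumed)

A_on = np.array([[0.0, -1.0 / L], [1.0 / C, -1.0 / (R * C)]])
A_off = A_on.copy()              # for an ideal buck, A is the same in both states
B_on = np.array([Vin / L, 0.0])
B_off = np.array([0.0, 0.0])     # source disconnected during the off state

A_avg = d * A_on + (1.0 - d) * A_off
B_avg = d * B_on + (1.0 - d) * B_off

# DC operating point of the averaged model: 0 = A_avg x + B_avg.
x_dc = -np.linalg.solve(A_avg, B_avg)
print(x_dc[1])   # averaged output voltage = d * Vin = 12 V
```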

  12. PEAK-TO-AVERAGE POWER RATIO REDUCTION USING CODING AND HYBRID TECHNIQUES FOR OFDM SYSTEM

    OpenAIRE

    Bahubali K. Shiragapur; Uday Wali

    2016-01-01

In this article, error correction coding techniques are investigated as a means to reduce the undesirable Peak-to-Average Power Ratio (PAPR). The Golay code (24, 12), Reed-Muller code (16, 11), Hamming code (7, 4) and a Hybrid technique (a combination of signal scrambling and signal distortion) proposed by us are used as the coding techniques; the simulation results show that the Hybrid technique reduces PAPR significantly as compared to Conve...

  13. A Hybrid Islanding Detection Technique Using Average Rate of Voltage Change and Real Power Shift

    DEFF Research Database (Denmark)

    Mahat, Pukar; Chen, Zhe; Bak-Jensen, Birgitte

    2009-01-01

The mainly used islanding detection techniques may be classified as active and passive techniques. Passive techniques do not perturb the system but have larger non-detection zones, whereas active techniques have smaller non-detection zones but perturb the system. In this paper, a new hybrid technique is proposed to solve this problem. An average rate of voltage change (passive technique) is used to initiate a real power shift (active technique), which changes the real power of the distributed generation (DG) when the passive technique cannot clearly discriminate between islanding...

  14. Exploring JLA supernova data with improved flux-averaging technique

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Shuang; Wen, Sixiang; Li, Miao, E-mail: wangshuang@mail.sysu.edu.cn, E-mail: wensx@mail2.sysu.edu.cn, E-mail: limiao9@mail.sysu.edu.cn [School of Physics and Astronomy, Sun Yat-Sen University, University Road (No. 2), Zhuhai (China)

    2017-03-01

In this work, we explore the cosmological consequences of the "Joint Light-curve Analysis" (JLA) supernova (SN) data by using an improved flux-averaging (FA) technique, in which only the type Ia supernovae (SNe Ia) at high redshift are flux-averaged. Adopting the figure-of-merit (FoM) criterion and considering six dark energy (DE) parameterizations, we search for the best FA recipe that gives the tightest DE constraints in the (z_cut, Δz) plane, where z_cut and Δz are the redshift cut-off and redshift interval of FA, respectively. Then, based on the best FA recipe obtained, we discuss the impacts of varying z_cut and varying Δz, revisit the evolution of the SN color luminosity parameter β, and study the effects of adopting different FA recipes on parameter estimation. We find that: (1) the best FA recipe is (z_cut = 0.6, Δz = 0.06), which is insensitive to the specific DE parameterization; (2) flux-averaging the JLA samples at z_cut ≥ 0.4 yields tighter DE constraints than the case without FA; (3) using FA can significantly reduce the redshift evolution of β; (4) the best FA recipe favors a larger fractional matter density Ω_m. In summary, we present an alternative method of dealing with the JLA data, which can reduce the systematic uncertainties of SNe Ia and give tighter DE constraints at the same time. Our method will be useful in the use of SNe Ia data for precision cosmology.
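The flux-averaging step can be sketched schematically: above z_cut, SNe are grouped into redshift bins of width Δz, their distance moduli are converted to relative fluxes, the fluxes are averaged, and each bin is replaced by a single effective data point. The data below are fabricated placeholders, not JLA measurements.

```python
import numpy as np

# Schematic flux averaging: bin high-redshift SNe, average in flux
# space (10**(-0.4*mu)), and convert back to a distance modulus.
rng = np.random.default_rng(3)
z = np.sort(rng.uniform(0.05, 1.2, 200))
mu = 5 * np.log10((1 + z) * z * 4000.0) + 25 + rng.normal(0, 0.15, z.size)  # toy moduli

z_cut, dz = 0.6, 0.06            # the best recipe reported in the abstract
keep = z < z_cut
z_avg, mu_avg = list(z[keep]), list(mu[keep])

edges = np.arange(z_cut, z.max() + dz, dz)
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (z >= lo) & (z < hi)
    if not sel.any():
        continue
    flux = 10 ** (-0.4 * mu[sel])                 # modulus -> relative flux
    z_avg.append(z[sel].mean())
    mu_avg.append(-2.5 * np.log10(flux.mean()))   # averaged flux -> modulus

print(len(mu_avg), "points after flux averaging (from", z.size, ")")
```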

  15. Determination of arterial input function in dynamic susceptibility contrast MRI using group independent component analysis technique

    International Nuclear Information System (INIS)

    Chen, S.; Liu, H.-L.; Yang Yihong; Hsu, Y.-Y.; Chuang, K.-S.

    2006-01-01

Quantification of cerebral blood flow (CBF) with dynamic susceptibility contrast (DSC) magnetic resonance imaging (MRI) requires the determination of the arterial input function (AIF). The segmentation of surrounding tissue by manual selection is error-prone due to partial volume artifacts. Independent component analysis (ICA) has the advantage of automatically decomposing the signals into interpretable components. Recently, the group ICA technique has been applied to fMRI studies and showed reduced variance caused by motion artifacts and noise. In this work, we investigated the feasibility and efficacy of using the group ICA technique to extract the AIF. Both simulated and in vivo data were analyzed in this study. The simulated data of eight phantoms were generated using randomized lesion locations and time activity curves. The clinical data were obtained from spin-echo EPI MR scans performed on seven normal subjects. The group ICA technique was applied to analyze the data by concatenating across the seven subjects. The AIFs were calculated from the weighted average of the signals in the region selected by ICA. Preliminary results of this study showed that the group ICA technique could not extract accurate AIF information from regions around the vessel. The mismatched locations of vessels within the group reduced the benefits of the group study

  16. Measurement of cross sections of threshold detectors with spectrum average technique

    International Nuclear Information System (INIS)

    Agus, Y.; Celenk, I.; Oezmen, A.

    2004-01-01

Cross sections of the reactions 103Rh(n,n')103mRh, 115In(n,n')115mIn, 232Th(n,f), 47Ti(n,p)47Sc, 64Zn(n,p)64Cu, 58Ni(n,p)58Co, 54Fe(n,p)54Mn, 46Ti(n,p)46Sc, 27Al(n,p)27Mg, 56Fe(n,p)56Mn, 24Mg(n,p)24Na, 59Co(n,α)56Mn, 27Al(n,α)24Na and 48Ti(n,p)48Sc were measured at average neutron energies above the effective threshold by the activation method, using the spectrum average technique in an irradiation system containing three equivalent Am/Be sources, each with an activity of 592 GBq. The cross sections were determined with reference to the fast-neutron fission cross section of 238U. The measured values and published values are generally in agreement. (orig.)
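The spectrum-average idea can be sketched as a weighted integral: the energy-dependent cross section is weighted by the neutron spectrum and normalized, σ̄ = ∫σ(E)φ(E)dE / ∫φ(E)dE. The spectrum shape and threshold-style cross section below are assumptions, not the Am/Be spectrum or any evaluated cross section.

```python
import numpy as np

# Spectrum-averaged cross section on a uniform energy grid; with
# equal spacing, plain sums stand in for the integrals.
E = np.linspace(0.1, 11.0, 2000)                    # MeV
phi = np.sqrt(E) * np.exp(-E / 1.2)                 # assumed spectrum shape
sigma = np.where(E > 2.0, 0.1 * (1 - np.exp(-(E - 2.0))), 0.0)  # barns, 2 MeV threshold (assumed)

sigma_avg = np.sum(sigma * phi) / np.sum(phi)       # sigma-bar = ∫σφdE / ∫φdE
print(sigma_avg)    # spectrum-averaged cross section in barns
```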

  17. PEAK-TO-AVERAGE POWER RATIO REDUCTION USING CODING AND HYBRID TECHNIQUES FOR OFDM SYSTEM

    Directory of Open Access Journals (Sweden)

    Bahubali K. Shiragapur

    2016-03-01

In this article, error correction coding techniques are investigated as a means to reduce the undesirable Peak-to-Average Power Ratio (PAPR). The Golay code (24, 12), Reed-Muller code (16, 11), Hamming code (7, 4) and a Hybrid technique (a combination of signal scrambling and signal distortion) proposed by us are used as the coding techniques; the simulation results show that the Hybrid technique reduces PAPR significantly as compared to the conventional and modified selective mapping techniques. The simulation results are validated through statistical properties; the proposed technique's autocorrelation value is maximal, showing the reduction in PAPR. Symbol preference based on Hamming distance is the key idea used to reduce PAPR. The simulation results are discussed in detail in this article.
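The quantity being reduced can be computed directly for one OFDM symbol: PAPR = max|x[n]|² / mean|x[n]|² of the time-domain signal after the IFFT. The subcarrier count and QPSK mapping below are assumptions; the paper's coding and scrambling steps would act on the bit vector before this point.

```python
import numpy as np

# PAPR of a single OFDM symbol. Random bits are QPSK-mapped onto the
# subcarriers, transformed to the time domain, and the peak-to-average
# power ratio is measured. For unit-power symbols PAPR is bounded by
# the number of subcarriers.
rng = np.random.default_rng(4)
n_subcarriers = 64
bits = rng.integers(0, 2, 2 * n_subcarriers)
qpsk = (1 - 2 * bits[0::2] + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)  # QPSK symbols

x = np.fft.ifft(qpsk) * np.sqrt(n_subcarriers)   # time-domain OFDM symbol
papr = np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2)
papr_db = 10 * np.log10(papr)
print(papr_db)   # typically a few dB for a random symbol
```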

  18. Grouping techniques in a EFL Classroom

    OpenAIRE

    Ramírez Salas, Marlene

    2005-01-01

This article focuses on the need for English language teachers to use group work as a means to foster communication among students. The writer presents a definition of group work, its advantages and disadvantages, some activities for using group work and some grouping techniques created or adapted by the writer to illustrate the topic. This article highlights the importance of group work in the classroom as a way to encourage communication among students. It also presents the definition of...

  19. Human-experienced temperature changes exceed global average climate changes for all income groups

    Science.gov (United States)

    Hsiang, S. M.; Parshall, L.

    2009-12-01

Global climate change alters local climates everywhere. Many climate change impacts, such as those affecting health, agriculture and labor productivity, depend on these local climatic changes, not on global mean change. Traditional, spatially averaged climate change estimates are strongly influenced by the response of icecaps and oceans, providing limited information on human-experienced climatic changes. If used improperly by decision-makers, these estimates distort the estimated costs of climate change. We overlay the IPCC's 20 GCM simulations on the global population distribution to estimate the local climatic changes experienced by the world population in the 21st century. The A1B scenario leads to a well-known rise in global average surface temperature of +2.0°C between the periods 2011-2030 and 2080-2099. Projected onto the global population distribution in 2000, the median human will experience an annual average rise of +2.3°C (4.1°F) and the average human will experience a rise of +2.4°C (4.3°F). Less than 1% of the population will experience changes smaller than +1.0°C (1.8°F), while 25% and 10% of the population will experience changes greater than +2.9°C (5.2°F) and +3.5°C (6.2°F), respectively. 67% of the world population experiences temperature changes greater than the area-weighted average change of +2.0°C (3.6°F). Using two approaches to characterize the spatial distribution of income, we show that the wealthiest, middle and poorest thirds of the global population experience similar changes, with no group dominating the global average. Calculations for precipitation indicate little change in average precipitation, but redistributions of precipitation occur in all income groups. These results suggest that economists and policy-makers using spatially averaged estimates of climate change to approximate local changes will systematically and significantly underestimate the impacts of climate change on the 21st century population.
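The population-weighting idea can be sketched as follows: the same warming field yields different "experienced" statistics depending on whether grid cells are weighted by area or by population. The warming field and population grid below are fabricated stand-ins, chosen so that people are concentrated where warming is larger (as on land versus ocean).

```python
import numpy as np

# Area-weighted vs population-weighted average of a warming field.
# "Land" cells (30% of the grid) warm more and hold nearly all the
# population, so the population-weighted mean exceeds the area mean.
rng = np.random.default_rng(5)
n_cells = 10000
land = rng.random(n_cells) < 0.3
dT = np.where(land,
              rng.normal(2.6, 0.5, n_cells),    # assumed land warming (degC)
              rng.normal(1.7, 0.4, n_cells))    # assumed ocean warming (degC)
pop = np.where(land, rng.lognormal(0.0, 1.0, n_cells), 1e-3)  # skewed population

area_weighted = np.average(dT)                  # equal-area cells for simplicity
pop_weighted = np.average(dT, weights=pop)
print(area_weighted, pop_weighted)              # the experienced change is larger
```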

  20. National survey provides average power quality profiles for different customer groups

    International Nuclear Information System (INIS)

    Hughes, B.; Chan, J.

    1996-01-01

A three year survey, beginning in 1991, was conducted by the Canadian Electrical Association to study the levels of power quality that exist in Canada, and to determine ways to increase utility expertise in making power quality measurements. Twenty-two utilities across Canada were involved, with a total of 550 sites being monitored, including residential and commercial customers. Power disturbances, power outages and power quality were recorded for each site. To create a group-average power quality plot, the transient disturbance activity for each site was normalized to a per-channel, per-month basis and then divided into a grid. Results showed that the average power quality provided by Canadian utilities was very good. Almost all the electrical disturbances within a customer's premises were created and stayed within those premises. Disturbances were generally beyond utility control. Utilities could, however, reduce the amount of time the steady-state voltage exceeds the CSA normal voltage upper limit. 5 figs

  1. Grouping techniques in a EFL Classroom

    Directory of Open Access Journals (Sweden)

    Ramírez Salas, Marlene

    2005-03-01

This article focuses on the need for English language teachers to use group work as a means to foster communication among students. The writer presents a definition of group work, its advantages and disadvantages, some activities for using group work, and some grouping techniques created or adapted by the writer to illustrate the topic. This article highlights the importance of group work in the classroom as a way to encourage communication among students. It also presents the definition of group work, its advantages and disadvantages, and some activities and techniques for forming groups.

  2. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
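The bias the abstract warns about, and the iterative fix, can be sketched numerically. The measurements and the 10% relative-error model below are assumptions for illustration: when each experiment quotes an error proportional to its own measured value, weighting by the reported errors pulls the average low, while re-evaluating every error at the common central value and iterating removes the bias.

```python
# Averaging with "sliding" (value-dependent) errors.
measurements = [9.0, 10.0, 11.5, 10.5, 9.5]   # toy measurements (assumed)
rel_err = 0.10                                 # assumed error model: sigma = 0.1 * value

# Naive average: weights 1/sigma_i^2 with sigma_i from each
# experiment's own value, which over-weights the low measurements.
w = [1.0 / (rel_err * m) ** 2 for m in measurements]
naive = sum(wi * m for wi, m in zip(w, measurements)) / sum(w)

# Iterated average: evaluate every sigma at the current central value,
# recompute the weighted mean, and repeat until stable.
avg = naive
for _ in range(20):
    sigma = rel_err * avg                      # common sigma at the central value
    avg = sum(m / sigma**2 for m in measurements) / (len(measurements) / sigma**2)

print(naive, avg)   # naive is biased low; the iterated average is not
```

Here the errors are identical at the central value, so the iterated average reduces to the unweighted mean (10.1); with a more general error function the weights would differ but still be evaluated at the common value.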

  3. Improving the Grade Point Average of Our At-Risk Students: A Collaborative Group Action Research Approach.

    Science.gov (United States)

    Saurino, Dan R.; Hinson, Kenneth; Bouma, Amy

    This paper focuses on the use of a group action research approach to help student teachers develop strategies to improve the grade point average of at-risk students. Teaching interventions such as group work and group and individual tutoring were compared to teaching strategies already used in the field. Results indicated an improvement in the…

  4. Consolidated techniques for groups of enterprises with complex structure

    Directory of Open Access Journals (Sweden)

    Cristina Ciuraru-Andrica

    2009-12-01

The preparation and disclosure of the financial statements of a group of enterprises involve certain consolidation techniques. The literature presents many techniques, but in practice two of them are used. They are first described individually and then compared. The group of entities can choose either of these techniques, the final result (the consolidated financial statements) being the same whatever the option.

  5. Power plant siting; an application of the nominal group process technique

    International Nuclear Information System (INIS)

    Voelker, A.H.

    1976-01-01

    The application of interactive group processes to the problem of facility siting is examined by this report. Much of the discussion is abstracted from experience gained in applying the Nominal Group Process Technique, an interactive group technique, to the identification and rating of factors important in siting nuclear power plants. Through this experience, interactive group process techniques are shown to facilitate the incorporation of the many diverse factors which play a role in siting. In direct contrast to mathematical optimization, commonly represented as the ultimate siting technique, the Nominal Group Process Technique described allows the incorporation of social, economic, and environmental factors and the quantification of the relative importance of these factors. The report concludes that the application of interactive group process techniques to planning and resource management will affect the consideration of social, economic, and environmental concerns and ultimately lead to more rational and credible siting decisions

  6. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups, European and Japanese, and from children with three genetic disorders (Williams syndrome, achondroplasia and Sotos syndrome) as well as a normal control group. The method involved averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face-averaging techniques there was no warping or filling in of spaces by interpolation; however, this facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
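The depth-coordinate averaging described can be sketched directly: with the scans registered on a common (x, y) grid, the archetypal face is the pointwise mean of the depth (z) maps. The smooth random surfaces below stand in for registered facial scans, and 14 scans are used to echo the study's finding.

```python
import numpy as np

# Pointwise averaging of depth maps on a shared (x, y) grid. The
# shared "base" surface plays the role of the common facial shape;
# per-scan noise plays the role of individual variation.
rng = np.random.default_rng(6)
grid = 64
base = np.add.outer(np.sin(np.linspace(0, np.pi, grid)),
                    np.sin(np.linspace(0, np.pi, grid)))   # smooth "face" surface (assumed)

scans = [base + rng.normal(0.0, 0.05, (grid, grid)) for _ in range(14)]  # 14 registered scans
archetype = np.mean(scans, axis=0)        # average the z coordinates pointwise

print(np.abs(archetype - base).max())     # individual variation largely averages out
```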

  7. Three decision-making aids: brainstorming, nominal group, and Delphi technique.

    Science.gov (United States)

    McMurray, A R

    1994-01-01

    The methods of brainstorming, Nominal Group Technique, and the Delphi technique can be important resources for nursing staff development educators who wish to expand their decision-making skills. Staff development educators may find opportunities to use these methods for such tasks as developing courses, setting departmental goals, and forecasting trends for planning purposes. Brainstorming, Nominal Group Technique, and the Delphi technique provide a structured format that helps increase the quantity and quality of participant responses.

  8. A Survey of Spatio-Temporal Grouping Techniques

    National Research Council Canada - National Science Library

    Megret, Remi; DeMenthon, Daniel

    2002-01-01

    ...) segmentation by trajectory grouping, and (3) joint spatial and temporal segmentation. The first category is the broadest, as it inherits the legacy techniques of image segmentation and motion segmentation...

  9. Survey as a group interactive teaching technique

    Directory of Open Access Journals (Sweden)

    Ana GOREA

    2017-03-01

The smooth running of the educational process and its results depend a great deal on the methods used. The methodology of teaching offers a great variety of techniques that the teacher can make use of in the teaching/learning process. Techniques such as brainstorming, the cube, KWL, case study, the Venn diagram, and many others are familiar to teachers, who use them effectively in the classroom. The present article proposes a technique called 'survey', which has been successfully used by the author as a student-centered speaking activity in foreign language classes. It has certain advantages, especially when used in large groups. It can be adapted for any other discipline when the teacher wishes to offer the students space for cooperative activity and creativity.

  10. Comparison of small-group training with self-directed internet-based training in inhaler techniques.

    Science.gov (United States)

    Toumas, Mariam; Basheti, Iman A; Bosnic-Anticevich, Sinthia Z

    2009-08-28

To compare the effectiveness of small-group training in correct inhaler technique with self-directed Internet-based training. Pharmacy students were randomly allocated to 1 of 2 groups: small-group training (n = 123) or self-directed Internet-based training (n = 113). Prior to intervention delivery, all participants were given a placebo Turbuhaler and product information leaflet and received inhaler technique training based on their group. Technique was assessed following training and predictors of correct inhaler technique were examined. There was a significant improvement in the number of participants demonstrating correct technique in both groups (small-group training, 12% to 63%; self-directed Internet-based training, 9% to 59%), with no significant difference between the groups in the percent change (n = 234, p > 0.05). Increased student confidence following the intervention was a predictor of correct inhaler technique. Self-directed Internet-based training is as effective as small-group training in improving students' inhaler technique.

  11. Software for the grouped optimal aggregation technique

    Science.gov (United States)

    Brown, P. M.; Shaw, G. W. (Principal Investigator)

    1982-01-01

The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total, current acreage estimate.
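The minimum-variance combination the abstract describes can be illustrated in its simplest scalar form. The sketch below uses inverse-variance weighting, the scalar analogue of the optimal weighting matrix over acreage strata; the function name and the sample numbers are illustrative, not from the original software.

```python
import numpy as np

def inverse_variance_combine(estimates, variances):
    """Combine independent, unbiased stratum estimates with weights
    proportional to 1/variance; this yields the minimum-variance
    unbiased combined estimate."""
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)        # normalized inverse-variance weights
    combined = float(np.dot(w, estimates))
    combined_var = 1.0 / np.sum(1.0 / v)   # never larger than min(variances)
    return combined, combined_var

est, var_c = inverse_variance_combine([100.0, 110.0, 95.0], [4.0, 16.0, 8.0])
```

The combined variance is never larger than that of the best individual stratum, which is the sense in which the aggregation is optimal.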

  12. Using Creative Group Techniques in High Schools

    Science.gov (United States)

    Veach, Laura J.; Gladding, Samuel T.

    2007-01-01

    Groups in high schools that use creative techniques help adolescents express their emotions appropriately, behave differently, and gain insight into themselves and others. This article looks at seven different creative arts media--music, movement, visual art, literature, drama, play, and humor--and offers examples of how they can be used in groups…

  13. P-R-R Study Technique, Group Counselling And Gender Influence ...

    African Journals Online (AJOL)

    Read-Recall (P-R-R) study technique and group counselling on the academic performance of senior secondary school students. The objectives of this study were to determine the effect of Group Counselling combined with P-R-R study ...

  14. Dose-reduction techniques for high-dose worker groups in nuclear power plants

    International Nuclear Information System (INIS)

    Khan, T.A.; Baum, J.W.; Dionne, B.J.

    1991-03-01

    This report summarizes the main findings of a study of the extent of radiation dose received by special work groups in the nuclear power industry. Work groups which chronically get large doses were investigated, using information provided by the industry. The tasks that give high doses to these work groups were examined and techniques described that were found to be particularly successful in reducing dose. Quantitative information on the extent of radiation doses to various work groups shows that significant numbers of workers in several critical groups receive doses greater than 1 and even 2 rem per year, particularly contract personnel and workers at BWR-type plants. The number of radiation workers whose lifetime dose is greater than their age is much less. Although the techniques presented would go some way in reducing dose, it is likely that a sizeable reduction to the high-dose work groups may require development of new dose-reduction techniques as well as major changes in procedures. 10 refs., 26 tabs

  15. Homogenization via formal multiscale asymptotics and volume averaging: How do the two techniques compare?

    KAUST Repository

    Davit, Yohan

    2013-12-01

    A wide variety of techniques have been developed to homogenize transport equations in multiscale and multiphase systems. This has yielded a rich and diverse field, but has also resulted in the emergence of isolated scientific communities and disconnected bodies of literature. Here, our goal is to bridge the gap between formal multiscale asymptotics and the volume averaging theory. We illustrate the methodologies via a simple example application describing a parabolic transport problem and, in so doing, compare their respective advantages/disadvantages from a practical point of view. This paper is also intended as a pedagogical guide and may be viewed as a tutorial for graduate students as we provide historical context, detail subtle points with great care, and reference many fundamental works. © 2013 Elsevier Ltd.

  16. The Use of Nominal Group Technique to Determine Additional Support Needs for a Group of Victorian TAFE Managers and Senior Educators

    Science.gov (United States)

    Bailey, Anthony

    2013-01-01

    The nominal group technique (NGT) is a structured process to gather information from a group. The technique was first described in 1975 and has since become a widely-used standard to facilitate working groups. The NGT is effective for generating large numbers of creative new ideas and for group priority setting. This paper describes the process of…

  17. Group decision-making techniques for natural resource management applications

    Science.gov (United States)

    Coughlan, Beth A.K.; Armour, Carl L.

    1992-01-01

This report is an introduction to decision analysis and problem-solving techniques for professionals in natural resource management. Although these managers are often called upon to make complex decisions, their training in the natural sciences seldom provides exposure to the decision-making tools developed in management science. Our purpose is to begin to fill this gap. We present a general analysis of the pitfalls of group problem solving and suggestions for improved interactions, followed by the specific techniques. Selected techniques are illustrated. The material is easy to understand and apply without previous training or excessive study and is applicable to natural resource management issues.

  18. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  19. Characteristics of phase-averaged equations for modulated wave groups

    NARCIS (Netherlands)

    Klopman, G.; Petit, H.A.H.; Battjes, J.A.

    2000-01-01

    The project concerns the influence of long waves on coastal morphology. The modelling of the combined motion of the long waves and short waves in the horizontal plane is done by phase-averaging over the short wave motion and using intra-wave modelling for the long waves, see e.g. Roelvink (1993).

  20. Deblurring of class-averaged images in single-particle electron microscopy

    International Nuclear Information System (INIS)

    Park, Wooram; Chirikjian, Gregory S; Madden, Dean R; Rockmore, Daniel N

    2010-01-01

    This paper proposes a method for the deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, the images which are nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. This inaccurate class average due to the alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate for the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid-body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using the Laguerre–Fourier expansions, and Hermite expansion and Laguerre–Fourier expansion retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method
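The core idea — convolution becoming multiplication under a Fourier transform, so deconvolution becomes regularized division — can be sketched in the ordinary 1D translation-group setting. The paper itself works over SE(2) with Hermite/Laguerre–Fourier expansions; the regularization constant below is an assumption for the sketch, not from the paper.

```python
import numpy as np

def fourier_deconvolve(blurred, kernel, eps=1e-3):
    """Estimate the clear signal from a circularly blurred one by
    regularized division in Fourier space; eps keeps near-zero
    frequency bins of the kernel from amplifying noise."""
    B = np.fft.fft(blurred)
    K = np.fft.fft(kernel, n=len(blurred))
    X = B * np.conj(K) / (np.abs(K) ** 2 + eps)
    return np.real(np.fft.ifft(X))

# Blur an impulse with a known kernel, then recover it.
x = np.zeros(32)
x[5] = 1.0
k = np.array([0.25, 0.5, 0.25])
blurred = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k, n=32)))
recovered = fourier_deconvolve(blurred, k)
```

The recovered signal peaks sharply at the original impulse location; the residual error comes from kernel frequencies that the blur destroyed.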

  1. ORLIB: a computer code that produces one-energy group, time- and spatially-averaged neutron cross sections

    International Nuclear Information System (INIS)

    Blink, J.A.; Dye, R.E.; Kimlinger, J.R.

    1981-12-01

    Calculation of neutron activation of proposed fusion reactors requires a library of neutron-activation cross sections. One such library is ACTL, which is being updated and expanded by Howerton. If the energy-dependent neutron flux is also known as a function of location and time, the buildup and decay of activation products can be calculated. In practice, hand calculation is impractical without energy-averaged cross sections because of the large number of energy groups. A widely used activation computer code, ORIGEN2, also requires energy-averaged cross sections. Accordingly, we wrote the ORLIB code to collapse the ACTL library, using the flux as a weighting function. The ORLIB code runs on the LLNL Cray computer network. We have also modified ORIGEN2 to accept the expanded activation libraries produced by ORLIB
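The collapse that ORLIB performs reduces, for each reaction, the multigroup cross sections to a single flux-weighted value. A minimal sketch of that standard formula follows; the group structure and the numbers are illustrative, not taken from ACTL.

```python
import numpy as np

def collapse_to_one_group(sigma_g, phi_g):
    """One-group, flux-weighted cross section:
    sigma_avg = sum_g(sigma_g * phi_g) / sum_g(phi_g)."""
    sigma_g = np.asarray(sigma_g, dtype=float)
    phi_g = np.asarray(phi_g, dtype=float)
    return float(np.dot(sigma_g, phi_g) / phi_g.sum())

# Illustrative three-group example: thermal, epithermal, fast
# (cross sections in barns, fluxes in n/cm^2/s)
sigma_avg = collapse_to_one_group([10.0, 2.0, 0.5], [1e12, 5e13, 1e14])
```

The collapsed value always lies between the smallest and largest group cross sections, weighted toward the groups carrying the most flux.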

  2. College grade point average as a personnel selection device: ethnic group differences and potential adverse impact.

    Science.gov (United States)

    Roth, P L; Bobko, P

    2000-06-01

    College grade point average (GPA) is often used in a variety of ways in personnel selection. Unfortunately, there is little empirical research literature in human resource management that informs researchers or practitioners about the magnitude of ethnic group differences and any potential adverse impact implications when using cumulative GPA for selection. Data from a medium-sized university in the Southeast (N = 7,498) indicate that the standardized average Black-White difference for cumulative GPA in the senior year is d = 0.78. The authors also conducted analyses at 3 GPA screens (3.00, 3.25, and 3.50) to demonstrate that employers (or educators) might face adverse impact at all 3 levels if GPA continues to be implemented as part of a selection system. Implications and future research are discussed.
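How a standardized difference of d = 0.78 can translate into adverse impact at a GPA screen can be sketched under a simple normal model. The cutoff below and the four-fifths threshold are illustrative assumptions; the paper's analyses used the actual GPA distributions, not a normal approximation.

```python
from statistics import NormalDist

def selection_rate_ratio(d, cutoff_z):
    """Model group A as N(0, 1) and group B as N(-d, 1); return the
    adverse-impact ratio P(B passes) / P(A passes) at a cutoff given
    in group-A standard-deviation units."""
    nd = NormalDist()
    p_a = 1.0 - nd.cdf(cutoff_z)
    p_b = 1.0 - nd.cdf(cutoff_z + d)  # cutoff shifts by d for the lower-mean group
    return p_b / p_a

ratio = selection_rate_ratio(0.78, 0.5)  # screen half an SD above group A's mean
```

With these assumptions the ratio falls well below the four-fifths (0.8) rule of thumb, consistent with the adverse-impact concern raised in the abstract.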

  3. Nurses' Educational Needs Assessment for Financial Management Education Using the Nominal Group Technique

    OpenAIRE

    Wonjung Noh, PhD, RN; Ji Young Lim, PhD, RN, MBA

    2015-01-01

Purpose: The purpose of this study was to identify the financial management educational needs of nurses in order to develop an educational program to strengthen their financial management competencies. Methods: Data were collected from two focus groups using the nominal group technique. The study consisted of three steps: a literature review, focus group discussion using the nominal group technique, and data synthesis. Results: After analyzing the results, nine key components were s...

  4. Average bond energies between boron and elements of the fourth, fifth, sixth, and seventh groups of the periodic table

    Science.gov (United States)

    Altshuller, Aubrey P

    1955-01-01

    The average bond energies D(gm)(B-Z) for boron-containing molecules have been calculated by the Pauling geometric-mean equation. These calculated bond energies are compared with the average bond energies D(exp)(B-Z) obtained from experimental data. The higher values of D(exp)(B-Z) in comparison with D(gm)(B-Z) when Z is an element in the fifth, sixth, or seventh periodic group may be attributed to resonance stabilization or double-bond character.
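The geometric-mean estimate referred to is simply D(gm)(B-Z) = sqrt(D(B-B) × D(Z-Z)); where the experimental value D(exp)(B-Z) exceeds it, resonance stabilization or double-bond character is inferred. A sketch with hypothetical homonuclear bond energies (the numbers are not from the report):

```python
import math

def pauling_geometric_mean(d_bb, d_zz):
    """Pauling geometric-mean bond energy: D(gm) = sqrt(D(B-B) * D(Z-Z))."""
    return math.sqrt(d_bb * d_zz)

# Hypothetical bond energies in kcal/mol
d_gm = pauling_geometric_mean(79.0, 58.0)
excess = 75.0 - d_gm  # a positive excess of D(exp) over D(gm) suggests resonance
```

The geometric mean always lies between the two homonuclear bond energies, so any experimental value above the larger one is unambiguous evidence of extra stabilization.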

  5. ON IMPROVEMENT OF METHODOLOGY FOR CALCULATING THE INDICATOR «AVERAGE WAGE»

    Directory of Open Access Journals (Sweden)

    Oksana V. Kuchmaeva

    2015-01-01

Full Text Available The article describes the approaches to the calculation of the indicator of average wages in Russia with the use of several sources of information. The proposed method is based on data collected by Rosstat and the Pension Fund of the Russian Federation. The proposed approach allows capturing data on the wages of almost all groups of employees. Results of experimental calculations using the developed technique are presented in this article.

  6. Evaluation of a Small-Group Technique as a Teacher Training Instrument. Final Report.

    Science.gov (United States)

    Whipple, Babette S.

    An exploratory study was designed to determine whether the use of a new, small group technique adds significantly to the level of training in early childhood education. Two groups of five student teachers learned the technique and were then evaluated. The evaluation procedure was designed to measure changes in their educational objectives, their…

  7. Gearbox fault diagnosis based on time-frequency domain synchronous averaging and feature extraction technique

    Science.gov (United States)

    Zhang, Shengli; Tang, Jiong

    2016-04-01

    Gearbox is one of the most vulnerable subsystems in wind turbines. Its healthy status significantly affects the efficiency and function of the entire system. Vibration based fault diagnosis methods are prevalently applied nowadays. However, vibration signals are always contaminated by noise that comes from data acquisition errors, structure geometric errors, operation errors, etc. As a result, it is difficult to identify potential gear failures directly from vibration signals, especially for the early stage faults. This paper utilizes synchronous averaging technique in time-frequency domain to remove the non-synchronous noise and enhance the fault related time-frequency features. The enhanced time-frequency information is further employed in gear fault classification and identification through feature extraction algorithms including Kernel Principal Component Analysis (KPCA), Multilinear Principal Component Analysis (MPCA), and Locally Linear Embedding (LLE). Results show that the LLE approach is the most effective to classify and identify different gear faults.
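The synchronous-averaging step at the heart of this approach is easiest to see in the plain time domain: segments spanning exactly one shaft period are averaged, so components not phase-locked to the rotation cancel. A minimal sketch follows (the paper applies the same idea in the time-frequency domain; the signal parameters here are illustrative):

```python
import numpy as np

def synchronous_average(signal, period_samples):
    """Average consecutive one-period segments; noise that is not
    synchronous with the period shrinks roughly as 1/sqrt(n_segments)."""
    n = (len(signal) // period_samples) * period_samples
    return np.reshape(signal[:n], (-1, period_samples)).mean(axis=0)

rng = np.random.default_rng(0)
period = 64
t = np.arange(period * 100)                   # 100 revolutions
clean = np.sin(2 * np.pi * t / period)        # gear-mesh-like synchronous part
noisy = clean + rng.normal(0.0, 1.0, t.size)  # heavy non-synchronous noise
tsa = synchronous_average(noisy, period)
```

After 100 averages the residual noise is roughly a tenth of its original level, leaving the periodic fault signature visible.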

  8. Clustering Batik Images using Fuzzy C-Means Algorithm Based on Log-Average Luminance

    Directory of Open Access Journals (Sweden)

    Ahmad Sanmorino

    2012-06-01

Full Text Available Batik is a fabric or garment that is made with a special staining technique called wax-resist dyeing and is an element of cultural heritage with high artistic value. In order to improve efficiency and give better semantics to the images, some researchers apply clustering algorithms for managing images before they can be retrieved. Image clustering is a process of grouping images based on their similarity. In this paper we attempt to provide an alternative method of grouping batik images using the fuzzy c-means (FCM) algorithm based on the log-average luminance of the batik. FCM is a fuzzy clustering algorithm that allows each data point to belong to all clusters with degrees of membership between 0 and 1. Log-average luminance (LAL) is the average value of the lighting in an image; using LAL, the lighting of one image can be compared with that of another. From the experiments that have been made, it can be concluded that the fuzzy c-means algorithm can be used for batik image clustering based on the log-average luminance of each image.
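The log-average luminance feature used for the clustering is the exponential of the mean log luminance — the geometric mean of the pixel values, a standard quantity in imaging. A minimal sketch (the small delta guarding against log(0) is a common convention, assumed here rather than taken from the paper):

```python
import numpy as np

def log_average_luminance(luminance, delta=1e-6):
    """LAL = exp(mean(log(delta + L))): the geometric mean of pixel
    luminance, a robust summary of overall image lighting."""
    lum = np.asarray(luminance, dtype=float)
    return float(np.exp(np.mean(np.log(delta + lum))))

lal = log_average_luminance([[0.2, 0.4], [0.8, 0.1]])
```

Each batik image is reduced to this single lighting feature, and fuzzy c-means then clusters the images on it.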

  9. Study of phosphatic nodules as a possible source of uranium mineralization in warcha sandstone of nilawahan group salt range using SSNTD technique

    International Nuclear Information System (INIS)

    Qureshi, A.A.; Ullah, K.; Ullah, N.; Mohammad, A.

    2004-07-01

The strong similarity in the sedimentary depositional characteristics between the Warcha Sandstone of the Nilawahan Group in the Salt Range and the uranium-bearing sandstones of the Siwalik Group in the foothills of the Himalaya and Sulaiman Ranges tempted geologists to investigate the former group for the occurrence of any uranium deposits in it. Like the volcanic ash beds in the Siwaliks, phosphatic nodules may be a possible source of uranium mineralization in the Warcha Sandstone of the Nilawahan Group. Samples of phosphatic nodules occurring in the sandstone of the Nilawahan Group, Salt Range, were analyzed using the Solid State Nuclear Track Detection Technique (SSNTD) for the determination of their uranium concentration. The results obtained are quite encouraging and favour the idea of exploring the area in detail for any possible occurrence of uranium deposits. Uranium concentration in these samples ranges from (434 ± 39) ppm to (964 ± 81) ppm with an average concentration of (699 ± 62) ppm. (author)

  10. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  11. Expressed satisfaction with the nominal group technique among change agents

    NARCIS (Netherlands)

    Gresham, J.N.

    1986-01-01

    Expressed Satisfaction with the Nominal Group Technique Among Change Agents. Jon Neal Gresham The purpose of this study was to determine whether or not policymakers and change agents with differing professional backgrounds and responsibilities, who participated in the structured process of a

  12. Renormalization group decimation technique for disordered binary harmonic chains

    International Nuclear Information System (INIS)

    Wiecko, C.; Roman, E.

    1983-10-01

    The density of states of disordered binary harmonic chains is calculated using the Renormalization Group Decimation technique on the displacements of the masses from their equilibrium positions. The results are compared with numerical simulation data and with those obtained with the current method of Goncalves da Silva and Koiller. The advantage of our procedure over other methods is discussed. (author)

  13. The B-dot Earth Average Magnetic Field

    Science.gov (United States)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

The average Earth's magnetic field is solved with complex mathematical models based on a mean square integral. Depending on the selection of the Earth magnetic model, the average Earth's magnetic field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and is not dependent on the Earth magnetic model; it does, however, depend on the magnetic torquers of the satellite, which are not taken into consideration in the known mathematical models. Also, the solution of this new technique can be implemented so easily that the flight software can be updated during flight, and the control system can have current gains for the magnetic torquers. Finally, this technique is verified and validated using flight data from a satellite that has been in orbit for three years.
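The damping the technique exploits comes from the classic B-dot detumbling law, which commands a magnetic dipole opposing the measured rate of change of the body-frame field. A minimal sketch of that control law follows; the gain, the finite-difference derivative, and the sample readings are illustrative, and the paper's field-estimation step is not reproduced here.

```python
import numpy as np

def bdot_dipole_command(b_now, b_prev, dt, gain):
    """B-dot law: m = -k * dB/dt, with dB/dt approximated by a finite
    difference of consecutive body-frame magnetometer readings (tesla)."""
    b_dot = (np.asarray(b_now, dtype=float) - np.asarray(b_prev, dtype=float)) / dt
    return -gain * b_dot

# A field rotating in the body frame yields a dipole command that damps body rates
m = bdot_dipole_command([22e-6, 3e-6, -40e-6], [21e-6, 5e-6, -40e-6], 0.1, 5e4)
```

Because the commanded torque always opposes the apparent field rotation, the controller bleeds off angular rate without needing any Earth-field model on board.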

  14. Group Guidance Services with Self-Regulation Technique to Improve Student Learning Motivation in Junior High School (JHS)

    Science.gov (United States)

    Pranoto, Hadi; Atieka, Nurul; Wihardjo, Sihadi Darmo; Wibowo, Agus; Nurlaila, Siti; Sudarmaji

    2016-01-01

    This study aims at: determining students motivation before being given a group guidance with self-regulation technique, determining students' motivation after being given a group counseling with self-regulation technique, generating a model of group counseling with self-regulation technique to improve motivation of learning, determining the…

  15. Modified parity space averaging approaches for online cross-calibration of redundant sensors in nuclear reactors

    Directory of Open Access Journals (Sweden)

    Moath Kassim

    2018-05-01

Full Text Available To maintain safety and reliability of reactors, redundant sensors are usually used to measure critical variables and estimate their averaged time-dependency. Unhealthy sensors can badly influence the estimation result of the process variable. Since online condition monitoring was introduced, the online cross-calibration method has been widely used to detect any anomaly of sensor readings among the redundant group. The cross-calibration method has four main averaging techniques: simple averaging, band averaging, weighted averaging, and parity space averaging (PSA). PSA is used to weigh redundant signals based on their error bounds and their band consistency. Using the consistency weighting factor (C), PSA assigns more weight to consistent signals that have shared bands, based on how many bands they share, and gives inconsistent signals very low weight. In this article, three approaches are introduced for improving the PSA technique: the first is to add another consistency factor, so-called trend consistency (TC), to include consideration of preserving any characteristic edge that reflects the behavior of the equipment/component measured by the process parameter; the second approach proposes replacing the error bound/accuracy based weighting factor (Wa) with a weighting factor based on the Euclidean distance (Wd); and the third approach proposes applying Wd, TC, and C all together. Cold neutron source data sets of four redundant hydrogen pressure transmitters from a research reactor were used to perform the validation and verification. Results showed that the second and third modified approaches lead to reasonable improvement of the PSA technique. All approaches implemented in this study were similar in that they have the capability to (1) identify and isolate a drifted sensor that should undergo calibration, (2) identify faulty sensors affected by long, continuous ranges of missing data, and (3) identify a healthy sensor.
Keywords: Nuclear Reactors
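The distance-based weighting (Wd) proposed in the second approach can be sketched in a simplified scalar form: weight each redundant reading by the inverse of its distance to the ensemble median, so a drifted channel contributes little to the estimate. This is a simplified illustration, not the article's full parity-space algorithm.

```python
import numpy as np

def distance_weighted_average(readings, eps=1e-12):
    """Down-weight outlying redundant sensors: each weight is the inverse
    distance to the ensemble median (eps avoids division by zero)."""
    x = np.asarray(readings, dtype=float)
    w = 1.0 / (np.abs(x - np.median(x)) + eps)
    return float(np.dot(w, x) / w.sum())

# Four redundant pressure readings; the last sensor has drifted high
est = distance_weighted_average([10.01, 10.02, 10.00, 11.50])
```

Here `est` stays near 10.01, whereas a simple average would be pulled to about 10.38 by the drifted channel.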

  16. Biosphere Dose Conversion Factors for Reasonably Maximally Exposed Individual and Average Member of Critical Group

    International Nuclear Information System (INIS)

    K. Montague

    2000-01-01

The purpose of this calculation is to develop additional Biosphere Dose Conversion Factors (BDCFs) for a reasonably maximally exposed individual (RMEI) for the periods 10,000 years and 1,000,000 years after repository closure. In addition, Biosphere Dose Conversion Factors for the average member of a critical group are calculated for those additional radionuclides postulated to reach the environment during the period after 10,000 years and up to 1,000,000 years. After the permanent closure of the repository, the engineered systems within the repository will eventually lose their ability to contain the radionuclide inventory, and the radionuclides will migrate through the geosphere and eventually enter the local water table, moving toward inhabited areas. The primary release scenario is a groundwater well used for drinking water supply and irrigation, and this calculation takes these postulated releases and follows them through various pathways until they result in a dose to either a member of the critical group or a reasonably maximally exposed individual. The pathways considered in this calculation include inhalation, ingestion, and direct exposure

  17. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources is briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  18. Multiple group radiator and hybrid test heads, possibilities of combining the array technique

    International Nuclear Information System (INIS)

    Wuestenberg, H.

    1993-01-01

    This article is intended to show the important considerations, which led to the development of the multichannel group radiator technique. Trends in development and the advantages and disadvantages of the different possibilities are introduced, against the background of experience now available for these configurative variants of ultrasonic test heads. For this reason, a series of experiences and arguments is reported, from the point of view of the developer of the multi-channel group radiator technique. (orig./HP) [de

  19. Spontaneous emergence, imitation and spread of alternative foraging techniques among groups of vervet monkeys.

    Directory of Open Access Journals (Sweden)

    Erica van de Waal

Full Text Available Animal social learning has become a subject of broad interest, but demonstrations of bodily imitation in animals remain rare. Based on Voelkl and Huber's study of imitation by marmosets, we tested four groups of semi-captive vervet monkeys presented with food in modified film canisters ("aethipops"). One individual was trained to take the tops off canisters in each group and demonstrated five openings to them. In three groups these models used their mouth to remove the lid, but in one of the groups the model also spontaneously pulled ropes on a canister to open it. In the last group the model preferred to remove the lid with her hands. Following these spontaneous differentiations of foraging techniques in the models, we observed the techniques used by the other group members to open the canisters. We found that mouth opening was the most common technique overall, but the rope and hands methods were used significantly more in groups they were demonstrated in than in groups where they were not. Our results show bodily matching that is conventionally described as imitation. We discuss the relevance of these findings to discoveries about mirror neurons, and implications of the identity of the model for social transmission.

  20. Group techniques as a methodological strategy in acquiring teamwork abilities by college students

    Directory of Open Access Journals (Sweden)

    César Torres Martín

    2013-02-01

Full Text Available Within the framework of the European Higher Education Area, an adaptation of the teaching-learning process is being promoted by means of pedagogical renewal, introducing into the classroom a greater number of active or participative methodologies in order to give students greater autonomy in that process. This requires incorporating basic skills into the university curriculum, especially "teamwork". By means of group techniques students can acquire interpersonal and cognitive skills, as well as abilities that will enable them to face different group situations throughout their academic and professional careers. These techniques are necessary not only as a methodological strategy in the classroom, but also as a reflection instrument for students to assess their behavior in groups, with the aim of modifying conduct strategies, since their relationships with others influence the learning process. Hence the importance of this ability in sensitizing students positively toward collective work. Thus, using the action-research method in the academic classroom during one semester and making systematic interventions with different group techniques, we present the results obtained through an analysis of the qualitative data, the selected instruments being group discussion and personal reflection.

  1. Improved Multiscale Entropy Technique with Nearest-Neighbor Moving-Average Kernel for Nonlinear and Nonstationary Short-Time Biomedical Signal Analysis

    Directory of Open Access Journals (Sweden)

    S. P. Arunachalam

    2018-01-01

Full Text Available Analysis of biomedical signals can yield invaluable information for prognosis, diagnosis, therapy evaluation, risk assessment, and disease prevention, but such signals are often recorded as short time series that challenge existing complexity classification algorithms such as Shannon entropy (SE) and other techniques. The purpose of this study was to improve the previously developed multiscale entropy (MSE) technique by incorporating a nearest-neighbor moving-average kernel, which can be used for analysis of nonlinear and nonstationary short time series of physiological data. The approach was tested for robustness with respect to noise using simulated sinusoidal and ECG waveforms. The feasibility of MSE for discriminating between normal sinus rhythm (NSR) and atrial fibrillation (AF) was tested on a single-lead ECG. In addition, the MSE algorithm was applied to identify the pivot points of rotors that were induced in ex vivo isolated rabbit hearts. The improved MSE technique robustly estimated the complexity of the signal compared to that of SE with various noises, discriminated NSR and AF on a single-lead ECG, and precisely identified the pivot points of ex vivo rotors by providing better contrast between the rotor core and the peripheral region. The improved MSE technique can provide efficient complexity analysis of a variety of nonlinear and nonstationary short-time biomedical signals.
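The difference between conventional MSE coarse-graining and a moving-average kernel lies in how samples are pooled at each scale: the moving-average version retains far more points from a short record. A sketch of the two graining steps follows (sample entropy itself is omitted, and the exact kernel of the paper may differ from this plain moving average):

```python
import numpy as np

def coarse_grain(x, scale):
    """Conventional MSE coarse-graining: means of consecutive
    non-overlapping windows, leaving only len(x)//scale points."""
    n = (len(x) // scale) * scale
    return np.reshape(np.asarray(x[:n], dtype=float), (-1, scale)).mean(axis=1)

def moving_average_grain(x, scale):
    """Moving-average kernel: overlapping window means, leaving
    len(x) - scale + 1 points, which suits short time series."""
    kernel = np.ones(scale) / scale
    return np.convolve(np.asarray(x, dtype=float), kernel, mode="valid")

x = np.arange(10.0)
cg = coarse_grain(x, 3)          # -> [1.0, 4.0, 7.0]
ma = moving_average_grain(x, 3)  # -> [1.0, 2.0, ..., 8.0]
```

At scale 3 a 10-sample record keeps 3 points under conventional graining but 8 under the moving-average kernel, which is why the latter helps short biomedical recordings.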

  2. Techniques for data compression in experimental nuclear physics problems

    International Nuclear Information System (INIS)

    Byalko, A.A.; Volkov, N.G.; Tsupko-Sitnikov, V.M.

    1984-01-01

Techniques for data compression in physical experiments are evaluated. Data compression algorithms are divided into three groups: the first includes coding-based algorithms characterized only by average performance indexes over data files; the second includes algorithms with data-processing elements; the third, algorithms for storing converted data. The techniques connected with data conversion are concluded to hold the greatest promise: they possess high compression efficiency and fast response, and permit storing information close to its source form.

  3. Peak-to-average power ratio reduction in orthogonal frequency division multiplexing-based visible light communication systems using a modified partial transmit sequence technique

    Science.gov (United States)

    Liu, Yan; Deng, Honggui; Ren, Shuang; Tang, Chengying; Qian, Xuewen

    2018-01-01

We propose an efficient partial transmit sequence technique based on a genetic algorithm and a peak-value optimization algorithm (GAPOA) to reduce the high peak-to-average power ratio (PAPR) in visible light communication systems based on orthogonal frequency division multiplexing (VLC-OFDM). After analyzing the pros and cons of the hill-climbing algorithm, we propose the POA, with its excellent local search ability, to further process the signals whose PAPR is still over the threshold after being processed by the genetic algorithm (GA). To verify the effectiveness of the proposed technique and algorithm, we evaluate the PAPR performance and the bit error rate (BER) performance and compare them with the partial transmit sequence (PTS) technique based on GA (GA-PTS), the PTS technique based on genetic and hill-climbing algorithms (GH-PTS), and PTS based on the shuffled frog leaping algorithm and hill-climbing algorithm (SFLAHC-PTS). The results show that our technique and algorithm have not only better PAPR performance but also lower computational complexity and BER than the GA-PTS, GH-PTS, and SFLAHC-PTS techniques.
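The quantity being reduced is defined directly on the time-domain OFDM block: PAPR = max|x|^2 / mean|x|^2, usually quoted in dB. A minimal sketch computing it for a random QPSK-loaded symbol (the subcarrier count and modulation here are illustrative, not the paper's system parameters):

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a baseband block, in dB."""
    power = np.abs(np.asarray(x)) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(1)
qpsk = (rng.choice([1.0, -1.0], 64) + 1j * rng.choice([1.0, -1.0], 64)) / np.sqrt(2)
ofdm_symbol = np.fft.ifft(qpsk)  # time-domain OFDM symbol
papr = papr_db(ofdm_symbol)
```

PTS-style techniques search over phase rotations of sub-blocks of the frequency-domain data to minimize this value before transmission.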

  4. Nurses' Educational Needs Assessment for Financial Management Education Using the Nominal Group Technique.

    Science.gov (United States)

    Noh, Wonjung; Lim, Ji Young

    2015-06-01

The purpose of this study was to identify the financial management educational needs of nurses in order to develop an educational program to strengthen their financial management competencies. Data were collected from two focus groups using the nominal group technique. The study consisted of three steps: a literature review, focus group discussion using the nominal group technique, and data synthesis. After analyzing the results, nine key components were selected: corporate management and accounting, introduction to financial management in hospitals, basic structure of accounting, basics of hospital accounting, basics of financial statements, understanding the accounts of financial statements, advanced analysis of financial statements, application of financial management, and capital financing of hospitals. The present findings can be used to develop a financial management education program to strengthen the financial management competencies of nurses. Copyright © 2015. Published by Elsevier B.V.

  5. Report of the B-factory Group: 1, Physics and techniques

    International Nuclear Information System (INIS)

    Feldman, G.J.; Cassel, D.G.; Siemann, R.H.

    1989-01-01

    The study of B meson decay appears to offer a unique opportunity to measure basic parameters of the Standard Model, probe for interactions mediated by higher mass particles, and investigate the origin of CP violation. These opportunities have been enhanced by the results of two measurements. The first is the measurement of a long B meson lifetime. In addition to allowing a simpler identification of B mesons and a measurement of the time of their decay, this observation implies that normal decays are suppressed, making rare decays more prevalent. The second measurement is that neutral B mesons are strongly mixed. This enhances the possibilities for studying CP violation in the B system. The CESR storage ring is likely to dominate the study of B physics in e+e- annihilations for about the next five years. First, CESR has already reached a luminosity of 10^32 cm^-2 sec^-1 and has plans for improvements which may increase the luminosity by a factor of about five. Second, a second-generation detector, CLEO II, will start running in 1989. Given this background, the main focus of this working group was to ask what is needed for the mid- to late-1990s. Many laboratories are thinking about new facilities involving a variety of techniques. To help clarify the choices, we focused on one example of CP violation and estimated the luminosity required to measure it using different techniques. We briefly describe the requirements for detectors matched to these techniques. In particular, we give a conceptual design of a possible detector for asymmetric collisions at the Υ(4S) resonance, one of the attractive techniques emerging from this study. A discussion of accelerator technology issues for using these techniques forms the second half of the B-factory Group report, and it follows in these proceedings. 34 refs., 2 figs., 2 tabs.

  6. Group Counseling: Techniques for Teaching Social Skills to Students with Special Needs

    Science.gov (United States)

    Stephens, Derk; Jain, Sachin; Kim, Kioh

    2010-01-01

    This paper examines literature that supports the use of group counseling techniques in the school setting to teach social skills to children and adolescents with special needs. From the review of this literature it was found that group counseling is a very effective way of addressing a variety of social skills problems that can be displayed by…

  7. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)
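
    The acceptance check described above — normalizing sampled lift-off forces and comparing their average with the required group-average force — can be sketched as follows. The correction factors are taken as given inputs (the paper's derivation is not reproduced here), and all names and numbers are illustrative:

    ```python
    def group_average_check(lift_off_forces, correction_factors, required_average):
        """Normalize each sampled tendon's measured lift-off force with its
        correction factor, average the sample, and compare the result with
        the required group-average prestress force."""
        corrected = [f * c for f, c in zip(lift_off_forces, correction_factors)]
        sample_average = sum(corrected) / len(corrected)
        return sample_average, sample_average >= required_average

    # Hypothetical sample of three tendons from one group (forces in kips).
    avg, ok = group_average_check([612.0, 598.0, 605.0], [1.00, 1.02, 0.99], 600.0)
    ```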

  8. Behavior change techniques used in group-based behavioral support by the English stop-smoking services and preliminary assessment of association with short-term quit outcomes.

    Science.gov (United States)

    West, Robert; Evans, Adam; Michie, Susan

    2011-12-01

    To develop a reliable coding scheme for components of group-based behavioral support for smoking cessation, to establish the frequency of inclusion in English Stop-Smoking Service (SSS) treatment manuals of specific components, and to investigate the associations between inclusion of behavior change techniques (BCTs) and service success rates. A taxonomy of BCTs specific to group-based behavioral support was developed and reliability of use assessed. All English SSSs (n = 145) were contacted to request their group-support treatment manuals. BCTs included in the manuals were identified using this taxonomy. Associations between inclusion of specific BCTs and short-term (4-week) self-reported quit outcomes were assessed. Fourteen group-support BCTs were identified with >90% agreement between coders. One hundred and seven services responded to the request for group-support manuals of which 30 had suitable documents. On average, 7 BCTs were included in each manual. Two were positively associated with 4-week quit rates: "communicate group member identities" and a "betting game" (a financial deposit that is lost if a stop-smoking "buddy" relapses). It is possible to reliably code group-specific BCTs for smoking cessation. Fourteen such techniques are present in guideline documents of which 2 appear to be associated with higher short-term self-reported quit rates when included in treatment manuals of English SSSs.

  9. From conventional averages to individual dose painting in radiotherapy for human tumors: challenge to non-uniformity

    International Nuclear Information System (INIS)

    Maciejewski, B.; Rodney Withers, H.

    2004-01-01

    The exploitation of a number of current clinical trials and reports on outcomes after radiation therapy (i.e. breast, head and neck, prostate) in clinical practice reflects many limitations of conventional techniques and dose-fractionation schedules and of 'average' conclusions. Even after decades of evolution of radiation therapy we still do not know how to optimize treatment for the individual patient and only have 'averages' and ill-defined 'probabilities' to guide treatment prescription. Wide clinical and biological heterogeneity within the groups of patients recruited into clinical trials, with a few-fold variation in tumour volume within one stage of disease, is obvious. Basic radiobiological guidelines concerning average cell killing of uniformly distributed and equally radiosensitive tumour cells arose from elegant but idealistic in vitro experiments and seem to be of uncertain validity. Therefore, we are confronted with more dilemmas than dogmas. Nonlinearity and inhomogeneity of human tumour pattern and response to irradiation are discussed. The purpose of this paper is to present and discuss various aspects of non-uniform tumour cell targeted radiotherapy using conformal and dose intensity modulated techniques. (author)

  10. Calculation of average molecular parameters, functional groups, and a surrogate molecule for heavy fuel oils using 1H and 13C NMR spectroscopy

    KAUST Repository

    Abdul Jameel, Abdul Gani; Elbaz, Ayman M.; Emwas, Abdul-Hamid M.; Roberts, William L.; Sarathy, Mani

    2016-01-01

    Heavy fuel oil (HFO) is primarily used as fuel in marine engines and in boilers to generate electricity. Nuclear Magnetic Resonance (NMR) is a powerful analytical tool for structure elucidation and in this study, 1H NMR and 13C NMR spectroscopy were used for the structural characterization of 2 HFO samples. The NMR data was combined with elemental analysis and average molecular weight to quantify average molecular parameters (AMPs), such as the number of paraffinic carbons, naphthenic carbons, aromatic hydrogens, olefinic hydrogens, etc. in the HFO samples. Recent formulae published in the literature were used for calculating various derived AMPs like aromaticity factor (f_a), C/H ratio, average paraffinic chain length (n̄), naphthenic ring number (R_N), aromatic ring number (R_A), total ring number (R_T), aromatic condensation index (φ) and aromatic condensation degree (Ω). These derived AMPs help in understanding the overall structure of the fuel. A total of 19 functional groups were defined to represent the HFO samples, and their respective concentrations were calculated by formulating balance equations that equate the concentration of the functional groups with the concentration of the AMPs. Heteroatoms like sulfur, nitrogen, and oxygen were also included in the functional groups. Surrogate molecules were finally constructed to represent the average structure of the molecules present in the HFO samples. This surrogate molecule can be used for property estimation of the HFO samples and also serve as a surrogate to represent the molecular structure for use in kinetic studies.
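
    The balance-equation step amounts to solving a linear system in which each functional group contributes a known amount to each measured AMP. A toy sketch with hypothetical coefficients (a 3-group system, not the paper's 19-group one; row labels are illustrative):

    ```python
    import numpy as np

    # Toy balance equations: each column is a functional group, each row an
    # AMP; entry (i, j) is how much of AMP i one unit of group j contributes.
    # The coefficients and labels below are illustrative, not from the paper.
    A = np.array([
        [2.0, 0.0, 1.0],   # e.g. paraffinic carbons
        [0.0, 3.0, 1.0],   # e.g. aromatic carbons
        [1.0, 1.0, 0.0],   # e.g. naphthenic carbons
    ])
    amps = np.array([3.0, 7.0, 3.0])   # "measured" AMP concentrations

    # Least-squares solution gives the functional-group concentrations.
    groups, *_ = np.linalg.lstsq(A, amps, rcond=None)
    ```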

  12. Development of a Temperature Programmed Identification Technique to Characterize the Organic Sulphur Functional Groups in Coal

    Directory of Open Access Journals (Sweden)

    Moinuddin Ghauri

    2017-06-01

    The Temperature Programmed Reduction (TPR) technique is employed for the characterisation of various organic sulphur functional groups in coal. The TPR technique is modified into the Temperature Programmed Identification (TPI) technique to investigate whether this method can detect various functional groups corresponding to their reduction temperatures. Ollerton, Harworth, Silverdale, and Prince of Wales coals and Mequinenza lignite were chosen for this study. High-pressure oxydesulphurisation of the coal samples was also performed. The characterization of the organic sulphur functional groups present in untreated and treated coal by the TPR method, and later by the TPI method, confirmed that these methods can identify the organic sulphur groups in coal and that the results based on total sulphur are comparable with those provided by standard analytical techniques. The analysis of the untreated and treated coal samples showed that structural changes in the organic sulphur matrix due to a reaction can be determined.

  13. Transferability of hydrological models and ensemble averaging methods between contrasting climatic periods

    Science.gov (United States)

    Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor

    2016-10-01

    Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
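
    The contrast between the simple arithmetic mean (SAM) and Granger-Ramanathan averaging (GRA) can be sketched as below: GRA fits unconstrained least-squares weights to the observations over a calibration period, whereas SAM weights all members equally. The toy data and function names are ours, not the study's setup:

    ```python
    import numpy as np

    def sam(sims):
        """Simple arithmetic mean (SAM) over ensemble members.
        `sims` has shape (n_times, n_models)."""
        return sims.mean(axis=1)

    def gra_weights(sims, obs):
        """Granger-Ramanathan averaging: unconstrained least-squares
        weights that make the weighted combination of member simulations
        best fit the observations over the calibration period."""
        w, *_ = np.linalg.lstsq(sims, obs, rcond=None)
        return w

    # Toy calibration: observations are an exact mix of two member
    # simulations, so GRA recovers the mixing weights; SAM stays 50/50.
    rng = np.random.default_rng(1)
    sims = rng.normal(size=(120, 2))
    obs = sims @ np.array([0.3, 0.7])
    w = gra_weights(sims, obs)          # ≈ [0.3, 0.7]
    ```

    In practice the weights are fitted on one climate period and then applied to simulations from the contrasting period, which is exactly the transferability question the study examines.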

  14. Perceived Average Orientation Reflects Effective Gist of the Surface.

    Science.gov (United States)

    Cha, Oakyoon; Chong, Sang Chul

    2018-03-01

    The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.

  15. Time average vibration fringe analysis using Hilbert transformation

    International Nuclear Information System (INIS)

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-01-01

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
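
    The core of the HT method is forming the analytic signal, from which the vibration amplitude and phase follow as its magnitude and angle. A minimal one-dimensional sketch using the standard FFT construction of the analytic signal (the fringe model below is illustrative, not the authors' data):

    ```python
    import numpy as np

    def analytic_signal(x):
        """Analytic signal via the FFT (the discrete Hilbert-transform
        method): zero the negative frequencies, double the positive ones."""
        n = len(x)
        X = np.fft.fft(x)
        h = np.zeros(n)
        h[0] = 1.0
        if n % 2 == 0:
            h[n // 2] = 1.0          # Nyquist bin kept once
            h[1:n // 2] = 2.0
        else:
            h[1:(n + 1) // 2] = 2.0
        return np.fft.ifft(X * h)

    # A fringe-like signal: fast cosine carrier with a slow amplitude term.
    t = np.linspace(0, 1, 1024, endpoint=False)
    envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)
    fringe = envelope * np.cos(2 * np.pi * 64 * t)
    z = analytic_signal(fringe)
    amplitude = np.abs(z)        # recovered fringe amplitude
    phase = np.angle(z)          # wrapped phase
    ```

    Because the carrier is well separated from the modulation, the magnitude of the analytic signal recovers the envelope from this single "interferogram", mirroring the single-frame evaluation described in the record.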

  16. Functional Group and Structural Characterization of Unmodified and Functionalized Lignin by Titration, Elemental Analysis, 1H NMR and FTIR Techniques

    Directory of Open Access Journals (Sweden)

    Ramin Bairami Habashi

    2017-11-01

    Lignin is the second most abundant polymer in the world after cellulose. Characterization of the structure and functional groups of lignin, in order to assess its potential applications in various technical fields, has therefore become a necessity. One of the major problems in the characterization of lignin is the lack of well-defined protocols and standards. In this paper, systematic studies were carried out to characterize the structure and functional groups of lignin quantitatively using different techniques such as elemental analysis, titration, and 1H NMR and FTIR spectroscopy. Lignin was obtained as a black liquor from Choka Paper Factory and was purified before any test. The lignin was reacted with α-bromoisobutyryl bromide to determine the number of moles of hydroxyl and methoxyl groups. Using 1H NMR spectroscopy on the α-bromoisobutyrylated lignin (BiBL) in the presence of a given amount of N,N-dimethylformamide (DMF) as an internal standard, the numbers of moles of hydroxyl and methoxyl groups per gram of lignin were found to be 6.44 mmol/g and 6.64 mmol/g, respectively. Using aqueous titration, the numbers of moles of phenolic hydroxyl groups and carboxyl groups of the lignin were calculated as 3.13 mmol/g and 2.84 mmol/g, respectively. The findings obtained by 1H NMR and elemental analysis indicated a phenylpropane unit of the lignin with the C9 structural formula C9H(Al)3.84H(Ar)2.19S0.2O0.8(OH)1.38(OCH3)1.42. Due to the poor solubility of the lignin in tetrahydrofuran (THF), acetylated lignin was used in the GPC analysis, by which the number-average molecular weight of the lignin was calculated as 992 g/mol.

  17. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    Receiver aperture averaging is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations over a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression for the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor is also presented for strong oceanic turbulence.

  18. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  19. COMPARATIVE ANALYSIS OF BLOOD GROUPING IN HEALTHY BLOOD DONOR USING GEL CARD TECHNIQUE AND TUBE METHOD

    Directory of Open Access Journals (Sweden)

    Muhammad Usman

    2016-12-01

    Blood grouping is a vital test in pre-transfusion testing. Both tube and gel agglutination assays are used for ABO grouping. The main objective of this study was to compare ABO grouping and D typing on tube and gel agglutination assays in order to assess the efficacy of each technique. A total of 100 healthy blood donors, irrespective of age and sex, were included in this study. Results showed no significant difference between the two techniques. However, in 10 samples the reaction strength in serum ABO grouping by the gel agglutination assay varied by only one grade compared to the tube agglutination assay. Owing to the numerous positive features of the gel assay, it is beneficial to implement this technique in setups where blood banks bear a heavy routine workload.

  20. Complications after pectus excavatum repair using pectus bars in adolescents and adults: risk comparisons between age and technique groups.

    Science.gov (United States)

    Choi, Soohwan; Park, Hyung Joo

    2017-10-01

    To compare the complications associated with age and technique groups in patients undergoing pectus excavatum (PE) repair, the data of 994 patients who underwent PE repair from March 2011 to December 2015 were retrospectively reviewed. Mean age was 9.59 years (range 31 months-55 years), and 756 patients were men (76.1%). The age groups were defined as follows: Group 1, <5 years; Group 2, 5-9 years; Group 3, 10-14 years; Group 4, 15-17 years; Group 5, 18-19 years; Group 6, 20-24 years; and Group 7, >24 years. The technique groups were defined as follows: Group 1, patients who underwent repair with claw fixators and hinge plates; Group 2, patients who underwent repair with our 'bridge' technique. Complications were compared between age groups and between technique groups. No cases of mortality occurred. Complication rates in age Groups 1-7 were 5.4%, 3.6%, 12.1%, 18.2%, 17.3%, 13.9% and 16.7%, respectively; the complication rate tripled after the age of 10. In multivariable analysis, the odds ratios of age Groups 4, 5 and 7 and of asymmetric types were 3.04, 2.81, 2.97 and 1.70, respectively. The bar dislocation rate in technique Group 1 was 0.8% (6 of 780); no bar dislocations occurred in technique Group 2. Older age and asymmetric pectus deformity are risk factors for complications following PE repair. The bridge technique provided a bar dislocation rate of 0%, even in adult patients, and seems to reduce or prevent major complications following PE repair. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  1. Group anxiety management: effectiveness, perceived helpfulness and follow-up.

    Science.gov (United States)

    Cadbury, S; Childs-Clark, A; Sandhu, S

    1990-05-01

    An evaluation was conducted on out-patient cognitive-behavioural anxiety management groups. Twenty-nine clients assessed before and after the group and at three-month follow-up showed significant improvement on self-report measures. A further follow-up on 21 clients, conducted by an independent assessor at an average of 11 months, showed greater improvement with time. Clients also rated how helpful they had found non-specific therapeutic factors, and specific anxiety management techniques. 'Universality' was the most helpful non-specific factor, and 'the explanation of anxiety' was the most helpful technique.

  2. Applications of resonance-averaged gamma-ray spectroscopy with tailored beams

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1982-01-01

    The use of techniques based on the direct experimental averaging over compound nuclear capturing states has proved valuable for investigations of nuclear structure. The various methods that have been employed are described, with particular emphasis on the transmission filter, or tailored beam technique. The mathematical limitations on averaging imposed by the filter band pass are discussed. It can readily be demonstrated that a combination of filters at different energies can form a powerful method for spin and parity predictions. Several recent examples from the HFBR program are presented

  4. Critical test of isotropic periodic sum techniques with group-based cut-off schemes.

    Science.gov (United States)

    Nozawa, Takuma; Yasuoka, Kenji; Takahashi, Kazuaki Z

    2018-03-08

    Truncation is still chosen for many long-range intermolecular interaction calculations to efficiently compute free-boundary systems, macromolecular systems and net-charge molecular systems, for example. Advanced truncation methods have been developed for long-range intermolecular interactions. Every truncation method can be implemented as one of two basic cut-off schemes, namely either an atom-based or a group-based cut-off scheme. The former computes interactions of "atoms" inside the cut-off radius, whereas the latter computes interactions of "molecules" inside the cut-off radius. In this work, the effect of group-based cut-off is investigated for isotropic periodic sum (IPS) techniques, which are promising cut-off treatments to attain advanced accuracy for many types of molecular system. The effect of group-based cut-off is clearly different from that of atom-based cut-off, and severe artefacts are observed in some cases. However, no severe discrepancy from the Ewald sum is observed with the extended IPS techniques.
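
    The distinction between the two cut-off schemes can be made concrete with a toy pair count: atom-based truncation tests each atom-atom distance against the cut-off radius, while group-based truncation includes or excludes whole molecules based on the distance between their centres. A sketch (ours; real MD codes apply this inside the force loop, with periodic boundaries and the IPS correction on top):

    ```python
    import numpy as np

    def atom_based_pairs(mol_a, mol_b, rc):
        """Atom-based cut-off: an atom pair interacts when the
        atom-atom distance is inside the cut-off radius rc."""
        d = np.linalg.norm(mol_a[:, None, :] - mol_b[None, :, :], axis=-1)
        return int((d < rc).sum())

    def group_based_pairs(mol_a, mol_b, rc):
        """Group-based cut-off: ALL atom pairs of the two molecules
        interact when the molecular centres are inside rc, else none do."""
        if np.linalg.norm(mol_a.mean(axis=0) - mol_b.mean(axis=0)) < rc:
            return mol_a.shape[0] * mol_b.shape[0]
        return 0

    # Two diatomic molecules straddling the cut-off: the schemes include
    # different sets of pairs, which is where their artefacts differ.
    a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    b = np.array([[3.0, 0.0, 0.0], [4.0, 0.0, 0.0]])
    ```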

  5. Nominal Group Technique and its Applications in Managing Quality in Higher Education

    Directory of Open Access Journals (Sweden)

    Rafikul Islam

    2011-09-01

    Quality management is an important aspect of all kinds of businesses, whether manufacturing or service. Idea generation plays a pivotal role in managing quality in organizations, and it is new and innovative ideas that can help corporations survive in a turbulent business environment. Research in group dynamics has shown that more ideas are generated by individuals working alone but in a group environment than by individuals engaged in a formal group discussion. In the Nominal Group Technique (NGT), individuals work alone but in a group setting. This paper shows how NGT can be applied to generate a large number of ideas to solve quality-related problems, specifically in the Malaysian higher education setting. The paper also discusses the details of the NGT working procedure and explores areas for its further application.

  6. Focus group interview: an underutilized research technique for improving theory and practice in health education.

    Science.gov (United States)

    Basch, C E

    1987-01-01

    The purpose of this article is to increase awareness about and stimulate interest in using focus group interviews, a qualitative research technique, to advance the state-of-the-art of education and learning about health. After a brief discussion of small group process in health education, features of focus group interviews are presented, and a theoretical framework for planning a focus group study is summarized. Then, literature describing traditional and health-related applications of focus group interviews is reviewed and a synthesis of methodological limitations and advantages of this technique is presented. Implications are discussed regarding: need for more inductive qualitative research in health education; utility of focus group interviews for research and for formative and summative evaluation of health education programs; applicability of marketing research to understanding and influencing consumer behavior, despite notable distinctions between educational initiatives and marketing; and need for professional preparation faculty to consider increasing emphasis on qualitative research methods.

  7. Long-term program for research and development of group separation and disintegration techniques

    International Nuclear Information System (INIS)

    1988-01-01

    In Japan, the basic guidelines state that high-level radioactive wastes released from reprocessing of spent fuel should be processed into stable solid material, followed by storage for cooling for 30-50 years and disposal in the ground at a depth of several hundreds of meters. The Long-Term Program for Research and Development of Group Separation and Disintegration Techniques is aimed at efficient disposal of high-level wastes, reutilization of useful substances contained, and improved safety. Important processes include separation of nuclides (group separation, individual nuclide separation) and conversion (disintegration) of long-lived nuclides into short-lived or non-radioactive ones. These processes can reduce the volume of high-level wastes to be left for final disposal. Research and development projects have been under way to provide techniques to separate high-level waste substances into four groups (transuranic elements, strontium/cesium, technetium/platinum group elements, and others). These projects also cover recovery of useful metals and efficient utilization of separated substances. For disintegration, conceptual studies have been carried out for the application of fast neutron beams to conversion of long half-life transuranium elements into short half-life or non-radioactive elements. (N.K.)

  8. Simple Moving Voltage Average Incremental Conductance MPPT Technique with Direct Control Method under Nonuniform Solar Irradiance Conditions

    Directory of Open Access Journals (Sweden)

    Amjad Ali

    2015-01-01

    A new simple moving voltage average (SMVA) technique with a fixed-step direct control incremental conductance method is introduced to reduce solar photovoltaic voltage (VPV) oscillation under nonuniform solar irradiation conditions. To evaluate and validate the performance of the proposed SMVA method in comparison with the conventional fixed-step direct control incremental conductance method under extreme conditions, different scenarios were simulated. Simulation results show that in most cases SMVA gives better results, with more stability, than the traditional fixed-step direct control INC method: faster tracking, reduced sustained oscillations, fast steady-state response, and robustness. The steady-state oscillations are almost eliminated because dP/dV is extremely small around the maximum power (MP) point, which verifies that the proposed method is suitable for standalone PV systems under extreme weather conditions, not only in terms of bus voltage stability but also in overall system efficiency.
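
    The two ingredients named in this record — a simple moving average of the PV voltage and a fixed-step incremental-conductance decision — can be sketched as follows. This is an illustrative reconstruction, not the authors' controller; the window size, tie-breaking, and tolerance are assumptions:

    ```python
    def smva(voltages, window=8):
        """Simple moving voltage average: each sample is replaced by the
        mean of the last `window` PV-voltage samples (fewer at the start),
        which smooths irradiance-induced oscillations."""
        out = []
        for i in range(len(voltages)):
            lo = max(0, i - window + 1)
            out.append(sum(voltages[lo:i + 1]) / (i + 1 - lo))
        return out

    def inc_direction(v, i, v_prev, i_prev):
        """Fixed-step incremental-conductance decision: +1 raises the
        operating voltage, -1 lowers it, 0 holds at the maximum power
        point, where dI/dV = -I/V (i.e. dP/dV = 0)."""
        dv, di = v - v_prev, i - i_prev
        if dv == 0:
            return 0 if di == 0 else (1 if di > 0 else -1)
        g = di / dv + i / v          # equals (dP/dV)/V, so same sign as dP/dV
        return 0 if abs(g) < 1e-9 else (1 if g > 0 else -1)
    ```

    In a full MPPT loop the smoothed voltage would feed the INC decision, which then steps the converter reference by a fixed amount in the returned direction.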

  9. Image compression using moving average histogram and RBF network

    International Nuclear Information System (INIS)

    Khowaja, S.; Ismaili, I.A.

    2015-01-01

    Modernization and globalization have made multimedia technology one of the fastest growing fields in recent times, and the optimal use of bandwidth and storage remains a topic that attracts the research community. Considering that images have a lion's share of multimedia communication, efficient image compression techniques have become a basic need for the optimal use of bandwidth and space. This paper proposes a novel method for image compression based on the fusion of a moving average histogram and an RBF (Radial Basis Function) network. The proposed technique reduces color intensity levels using the moving average histogram technique and then corrects color intensity levels using RBF networks at the reconstruction phase. Existing methods have used low-resolution images for testing, but the proposed method has been tested on various image resolutions for a clearer assessment. It has been tested on 35 images of varying resolution and compared with existing algorithms in terms of CR (Compression Ratio), MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio) and computational complexity. The outcome shows that the proposed methodology is a better trade-off in terms of compression ratio, PSNR (which determines image quality) and computational complexity. (author)
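The first stage, smoothing an intensity histogram with a moving average before reducing the number of levels, can be sketched as follows (the window size and function name are illustrative assumptions, not from the paper):

```python
def moving_average_histogram(hist, window=3):
    """Smooth a histogram of intensity counts with a centered moving
    average; edges use a shorter window so the bin count is preserved."""
    n, half = len(hist), window // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(hist[lo:hi]) / (hi - lo))
    return out
```

Smoothing suppresses isolated spikes so that nearby intensity levels can be merged; the RBF network in the paper then compensates for the lost levels at reconstruction.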

  10. Time series forecasting using ERNN and QR based on Bayesian model averaging

    Science.gov (United States)

    Pwasong, Augustine; Sathasivam, Saratha

    2017-08-01

    The Bayesian model averaging technique is a multi-model combination technique. It was employed to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique, producing a hybrid known as the ERNN-QR technique. The forecasting potential of the hybrid technique is compared with the forecasting capabilities of the individual ERNN and QR techniques. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean square error sense.
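A common lightweight stand-in for posterior model weights is to weight each model by the inverse of its validation-window MSE and then average the forecasts pointwise. The sketch below uses illustrative names and is not the authors' implementation:

```python
def inverse_mse_weights(errors_by_model):
    """Combination weights proportional to 1/MSE of each model's
    validation errors (a stand-in for Bayesian posterior model weights)."""
    mses = [sum(e * e for e in errs) / len(errs) for errs in errors_by_model]
    inv = [1.0 / m for m in mses]
    s = sum(inv)
    return [v / s for v in inv]

def combine(forecasts, weights):
    """Pointwise weighted average of per-model forecast sequences."""
    return [sum(w * f[t] for w, f in zip(weights, forecasts))
            for t in range(len(forecasts[0]))]
```

A model with half the validation error thus contributes four times the weight, which is the sense in which the hybrid can beat both components.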

  11. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
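The two quantities behind the correlation are simple to compute: the economic misery index is inflation plus unemployment, and the comparison uses a trailing moving average over the previous years (11 gave the best fit in the study). A minimal sketch with illustrative helper names, not the study's data pipeline:

```python
def misery_index(inflation, unemployment):
    """Economic misery index: annual inflation rate plus unemployment rate."""
    return [i + u for i, u in zip(inflation, unemployment)]

def trailing_average(series, years=11):
    """Trailing moving average over the previous `years` values; the output
    starts once a full window is available."""
    return [sum(series[t - years + 1:t + 1]) / years
            for t in range(years - 1, len(series))]
```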

  12. Investigating the Effects of Group Investigation (GI) and Cooperative Integrated Reading and Comprehension (CIRC) as the Cooperative Learning Techniques on Learners' Reading Comprehension

    Directory of Open Access Journals (Sweden)

    Mohammad Amin Karafkan

    2015-11-01

    Cooperative learning consists of techniques for helping students work together more effectively. This study investigated the effects of Group Investigation (GI) and Cooperative Integrated Reading and Composition (CIRC) as cooperative learning techniques on Iranian EFL learners' reading comprehension at an intermediate level. The participants were 207 male students studying at an intermediate level at ILI, randomly assigned into three equal groups: one control group and two experimental groups. The control group was instructed via a conventional technique following an individualistic instructional approach. One experimental group received the GI technique; the other received the CIRC technique. The findings showed a meaningful difference between the mean reading comprehension scores of the GI and CIRC experimental groups: the CIRC technique is more effective than the GI technique in enhancing students' reading comprehension test scores.

  13. Dorsal onlay (Barbagli technique) versus dorsal inlay (Asopa technique) buccal mucosal graft urethroplasty for anterior urethral stricture: a prospective randomized study.

    Science.gov (United States)

    Aldaqadossi, Hussein; El Gamal, Samir; El-Nadey, Mohamed; El Gamal, Osama; Radwan, Mohamed; Gaber, Mohamed

    2014-02-01

    To compare both the dorsal onlay technique of Barbagli and the dorsal inlay technique of Asopa for the management of long anterior urethral stricture. From January 2010 to May 2012, a total of 47 patients with long anterior urethral strictures were randomized into two groups. The first group included 25 patients who were managed by dorsal onlay buccal mucosal graft urethroplasty. The second group included 22 patients who were managed by dorsal inlay buccal mucosal graft urethroplasty. Different clinical parameters, postoperative complications and success rates were compared between both groups. The overall success rate in the dorsal onlay group was 88%, whereas in the dorsal inlay group the success rate was 86.4% during the follow-up period. The mean operative time was significantly longer in the dorsal onlay urethroplasty group (205 ± 19.63 min) than in the dorsal inlay urethroplasty group (128 ± 4.9 min, P-value <0.0001). The average blood loss was significantly higher in the dorsal onlay urethroplasty group (228 ± 5.32 mL) than in the dorsal inlay urethroplasty group (105 ± 12.05 mL, P-value <0.0001). The dorsal onlay technique of Barbagli and the dorsal inlay technique of Asopa buccal mucosal graft urethroplasty provide similar success rates. The Asopa technique is easy to carry out, provides shorter operative time and less blood loss, and it is associated with fewer complications for anterior urethral stricture repair. © 2013 The Japanese Urological Association.

  14. Technical errors in complete mouth radiographic survey according to radiographic techniques and film holding methods

    International Nuclear Information System (INIS)

    Choi, Karp Sik; Byun, Chong Soo; Choi, Soon Chul

    1986-01-01

    The purpose of this study was to investigate the numbers and causes of retakes in 300 complete mouth radiographic surveys made by 75 senior dental students. According to radiographic techniques and film holding methods, they were divided into 4 groups: Group I: Bisecting-angle technique with patient's fingers. Group II: Bisecting-angle technique with Rinn Snap-A-Ray device. Group III: Bisecting-angle technique with Rinn XCP instrument (short cone). Group IV: Bisecting-angle technique with Rinn XCP instrument (long cone). The most frequent causes of retakes, the most frequent tooth areas of retakes, and the average number of retakes per complete mouth survey were evaluated. The obtained results were as follows: Group I: Incorrect film placement (47.8), upper canine region, and 0.89. Group II: Incorrect film placement (44.0), upper canine region, and 1.12. Group III: Incorrect film placement (79.2), upper canine region, and 2.05. Group IV: Incorrect film placement (67.7), upper canine region, and 1.69.

  15. An Experimental Study Related to Planning Abilities of Gifted and Average Students

    Directory of Open Access Journals (Sweden)

    Marilena Z. Leana-Taşcılar

    2016-02-01

    Gifted students differ from their average peers in psychological, social, emotional and cognitive development. One of these differences in the cognitive domain is related to executive functions, among the most important of which is planning and organization ability. The aim of this study was to compare the planning abilities of gifted students with those of their average peers and to test the effectiveness of a training program on the planning abilities of both groups. First, students' intelligence and planning abilities were measured, and students were then assigned to either the experimental or the control group, matched by intelligence and planning ability (experimental: 13 gifted and 8 average; control: 14 gifted and 8 average). In total 182 students (79 gifted and 103 average) participated in the study. A training program was then implemented in the experimental group to find out whether it improved students' planning ability. Results showed that boys had better planning abilities than girls did, and gifted students had better planning abilities than their average peers did. Significant results were obtained in favor of the experimental group in the posttest scores.

  16. Introducer Curving Technique for the Prevention of Tilting of Transfemoral Gunther Tulip Inferior Vena Cava Filter

    International Nuclear Information System (INIS)

    Xiao, Liang; Shen, Jing; Tong, Jia Jie; Huang, De Sheng

    2012-01-01

    To determine whether the introducer curving technique is useful in decreasing the degree of tilting of transfemoral Tulip filters. The study sample group consisted of 108 patients with deep vein thrombosis who were enrolled and planned to undergo thrombolysis, and who accepted the transfemoral Tulip filter insertion procedure. The patients were randomly divided into Group C and Group T. The introducer curving technique was adopted in Group T. The post-implantation filter tilting angle (ACF) was measured in an anteroposterior projection. The retrieval hook adhering to the vascular wall was measured via tangential cavogram during retrieval. The overall average ACF was 5.8 ± 4.14 degrees. In Group C, the average ACF was 7.1 ± 4.52 degrees. In Group T, the average ACF was 4.4 ± 3.20 degrees. The groups displayed a statistically significant difference (t = 3.573, p = 0.001) in ACF. Additionally, the difference in ACF between the left and right approaches turned out to be statistically significant (7.1 ± 4.59 vs. 5.1 ± 3.82, t = 2.301, p = 0.023). The proportion of severe tilt (ACF ≥ 10 degrees) in Group T was significantly lower than that in Group C (9.3% vs. 24.1%, χ² = 4.267, p = 0.039). Between the groups, the difference in the rate of the retrieval hook adhering to the vascular wall was also statistically significant (2.9% vs. 24.2%, χ² = 5.030, p = 0.025). The introducer curving technique appears to minimize the incidence and extent of transfemoral Tulip filter tilting.

  17. Introducer Curving Technique for the Prevention of Tilting of Transfemoral Gunther Tulip Inferior Vena Cava Filter

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Liang; Shen, Jing; Tong, Jia Jie [The First Hospital of China Medical University, Shenyang (China); Huang, De Sheng [College of Basic Medical Science, China Medical University, Shenyang (China)

    2012-07-15

    To determine whether the introducer curving technique is useful in decreasing the degree of tilting of transfemoral Tulip filters. The study sample group consisted of 108 patients with deep vein thrombosis who were enrolled and planned to undergo thrombolysis, and who accepted the transfemoral Tulip filter insertion procedure. The patients were randomly divided into Group C and Group T. The introducer curving technique was adopted in Group T. The post-implantation filter tilting angle (ACF) was measured in an anteroposterior projection. The retrieval hook adhering to the vascular wall was measured via tangential cavogram during retrieval. The overall average ACF was 5.8 ± 4.14 degrees. In Group C, the average ACF was 7.1 ± 4.52 degrees. In Group T, the average ACF was 4.4 ± 3.20 degrees. The groups displayed a statistically significant difference (t = 3.573, p = 0.001) in ACF. Additionally, the difference in ACF between the left and right approaches turned out to be statistically significant (7.1 ± 4.59 vs. 5.1 ± 3.82, t = 2.301, p = 0.023). The proportion of severe tilt (ACF ≥ 10 degrees) in Group T was significantly lower than that in Group C (9.3% vs. 24.1%, χ² = 4.267, p = 0.039). Between the groups, the difference in the rate of the retrieval hook adhering to the vascular wall was also statistically significant (2.9% vs. 24.2%, χ² = 5.030, p = 0.025). The introducer curving technique appears to minimize the incidence and extent of transfemoral Tulip filter tilting.

  18. Average correlation clustering algorithm (ACCA) for grouping of co-regulated genes with similar pattern of variation in their expression values.

    Science.gov (United States)

    Bhattacharya, Anindya; De, Rajat K

    2010-08-01

    Distance based clustering algorithms can group genes that show similar expression values under multiple experimental conditions, but they are unable to identify a group of genes that have a similar pattern of variation in their expression values. Previously we developed an algorithm called divisive correlation clustering algorithm (DCCA) to tackle this situation, which is based on the concept of correlation clustering. But this algorithm may also fail for certain cases. In order to overcome these situations, we propose a new clustering algorithm, called average correlation clustering algorithm (ACCA), which is able to produce better clustering solutions than those produced by some others. ACCA is able to find groups of genes having more common transcription factors and similar patterns of variation in their expression values. Moreover, ACCA is more efficient than DCCA with respect to execution time. Like DCCA, ACCA uses the concept of correlation clustering introduced by Bansal et al. ACCA uses the correlation matrix in such a way that all genes in a cluster have the highest average correlation values with the genes in that cluster. We have applied ACCA and some well-known conventional methods, including DCCA, to two artificial and nine gene expression datasets, and compared the performance of the algorithms. The clustering results of ACCA are found to be more significantly relevant to the biological annotations than those of the other methods. Analysis of the results shows the superiority of ACCA over some others in determining a group of genes having more common transcription factors and similar patterns of variation in their expression profiles. Availability of the software: The software has been developed using C and Visual Basic languages, and can be executed on the Microsoft Windows platforms. The software may be downloaded as a zip file from http://www.isical.ac.in/~rajat. Then it needs to be installed.
Two word files (included in the zip file) need to
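The core reassignment idea, that every gene should sit in the cluster with which it has the highest average correlation, can be sketched as follows. This is a simplified caricature of ACCA with illustrative names; the published algorithm's initialization and termination rules differ:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def acca_like(profiles, labels, iters=20):
    """Repeatedly move each profile to the cluster whose other members it
    correlates with best on average, until assignments stop changing."""
    for _ in range(iters):
        changed = False
        for i, p in enumerate(profiles):
            scores = {}
            for c in set(labels):
                members = [j for j, l in enumerate(labels) if l == c and j != i]
                if members:
                    scores[c] = sum(pearson(p, profiles[j])
                                    for j in members) / len(members)
            if not scores:
                continue
            best = max(scores, key=scores.get)
            if best != labels[i]:
                labels[i], changed = best, True
        if not changed:
            break
    return labels
```

Because Pearson correlation ignores scale, profiles with the same pattern of variation (e.g. [1, 2, 3] and [2, 4, 6]) cluster together even though their distances are large, which is exactly the case where distance-based clustering fails.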

  19. Average geodesic distance of skeleton networks of Sierpinski tetrahedron

    Science.gov (United States)

    Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao

    2018-04-01

    The average distance is a central quantity in the research of complex networks and is related to the Wiener sum, a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain an asymptotic formula for their average distances. To derive the formula, we develop a technique based on finite patterns of the integral of geodesic distance with respect to self-similar measure on the Sierpinski tetrahedron.
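As a concrete illustration of the quantity being estimated, the average geodesic distance of a small unweighted network can be computed exactly by breadth-first search from every vertex. This is a generic sketch, not the paper's finite-pattern technique:

```python
from collections import deque

def average_geodesic_distance(adj):
    """Average shortest-path distance over all ordered vertex pairs of a
    connected unweighted graph given as an adjacency dict."""
    nodes = list(adj)
    total, pairs = 0, 0
    for s in nodes:
        dist = {s: 0}
        q = deque([s])
        while q:                      # standard BFS from source s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(nodes) - 1
    return total / pairs
```

For a path on three vertices the average is 4/3, since two of the three pairs sit at distance 1 and one at distance 2; the paper's contribution is the asymptotics of this quantity as the skeleton network level grows.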

  20. The Effect of Group Investigation Learning Model with Brainstorming Technique on Students' Learning Outcomes

    Directory of Open Access Journals (Sweden)

    Astiti Kade kAyu

    2018-01-01

    This study aims to determine the effect of the group investigation (GI) learning model with brainstorming technique on students' physics learning outcomes (PLO) compared to the jigsaw learning model with brainstorming technique. The learning outcomes in this research are results of learning in the cognitive domain. The method used is an experiment with a Randomized Posttest-Only Control Group Design. The population is all students of class XI IPA SMA Negeri 9 Kupang in the 2015/2016 school year. The selected sample consists of 40 students of class XI IPA 1 as the experimental class and 38 students of class XI IPA 2 as the control class, chosen using a simple random sampling technique. The instrument used is a 13-item description test. The first hypothesis was tested using a two-tailed t-test; H0 was rejected, which means there is a difference in students' physics learning outcomes. The second hypothesis was tested using a one-tailed t-test; H0 was rejected, which means the students' PLO in the experimental class was higher than in the control class. Based on these results, the researchers recommend the use of the GI learning model with brainstorming techniques to improve PLO, especially in the cognitive domain.

  1. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  2. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    National Research Council Canada - National Science Library

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...
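A minimal sketch of the setting: each revolution of the gear vibration signal is resampled to a common length by interpolation (linear interpolation here, one of several schemes such a study would compare) and the segments are then averaged, so shaft-synchronous components reinforce while asynchronous noise cancels. Function names are illustrative:

```python
def resample_linear(seg, n):
    """Linearly interpolate one revolution's samples onto n evenly
    spaced points (assumes n >= 2)."""
    m = len(seg)
    out = []
    for k in range(n):
        pos = k * (m - 1) / (n - 1)   # fractional index into the segment
        i = int(pos)
        frac = pos - i
        if i + 1 < m:
            out.append(seg[i] * (1 - frac) + seg[i + 1] * frac)
        else:
            out.append(seg[-1])
    return out

def time_synchronous_average(revolutions, n=64):
    """Average several once-per-revolution segments after resampling each
    to a common length n."""
    resampled = [resample_linear(r, n) for r in revolutions]
    return [sum(col) / len(col) for col in zip(*resampled)]
```

The choice of interpolation matters because resampling errors are themselves synchronous and therefore survive the averaging, which is presumably why the cited report compares schemes.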

  3. Renormalization-group decimation technique for spectra, wave-functions and density of states

    International Nuclear Information System (INIS)

    Wiecko, C.; Roman, E.

    1983-09-01

    The Renormalization Group decimation technique is very useful for problems described by a 1-d nearest-neighbour tight-binding model with or without translational invariance. We show how spectra, wave-functions and the density of states can be calculated with little numerical work from the renormalized coefficients upon iteration. The results of this new procedure are verified using the model of Soukoulis and Economou. (author)

  4. Comparative regulatory approaches for groups of new plant breeding techniques.

    Science.gov (United States)

    Lusser, Maria; Davies, Howard V

    2013-06-25

    This manuscript provides insights into ongoing debates on the regulatory issues surrounding groups of biotechnology-driven 'New Plant Breeding Techniques' (NPBTs). It presents the outcomes of preliminary discussions and in some cases the initial decisions taken by regulators in the following countries: Argentina, Australia, Canada, EU, Japan, South Africa and USA. In the light of these discussions we suggest in this manuscript a structured approach to make the evaluation more consistent and efficient. The issue appears to be complex as these groups of new technologies vary widely in both the technologies deployed and their impact on heritable changes in the plant genome. An added complication is that the legislation, definitions and regulatory approaches for biotechnology-derived crops differ significantly between these countries. There are therefore concerns that this situation will lead to non-harmonised regulatory approaches and asynchronous development and marketing of such crops resulting in trade disruptions.

  5. Introducer curving technique for the prevention of tilting of transfemoral Günther Tulip inferior vena cava filter.

    Science.gov (United States)

    Xiao, Liang; Huang, De-sheng; Shen, Jing; Tong, Jia-jie

    2012-01-01

    To determine whether the introducer curving technique is useful in decreasing the degree of tilting of transfemoral Tulip filters. The study sample group consisted of 108 patients with deep vein thrombosis who were enrolled and planned to undergo thrombolysis, and who accepted the transfemoral Tulip filter insertion procedure. The patients were randomly divided into Group C and Group T. The introducer curving technique was adopted in Group T. The post-implantation filter tilting angle (ACF) was measured in an anteroposterior projection. The retrieval hook adhering to the vascular wall was measured via tangential cavogram during retrieval. The overall average ACF was 5.8 ± 4.14 degrees. In Group C, the average ACF was 7.1 ± 4.52 degrees. In Group T, the average ACF was 4.4 ± 3.20 degrees. The groups displayed a statistically significant difference (t = 3.573, p = 0.001) in ACF. Additionally, the difference in ACF between the left and right approaches turned out to be statistically significant (7.1 ± 4.59 vs. 5.1 ± 3.82, t = 2.301, p = 0.023). The proportion of severe tilt (ACF ≥ 10°) in Group T was significantly lower than that in Group C (9.3% vs. 24.1%, χ(2) = 4.267, p = 0.039). Between the groups, the difference in the rate of the retrieval hook adhering to the vascular wall was also statistically significant (2.9% vs. 24.2%, χ(2) = 5.030, p = 0.025). The introducer curving technique appears to minimize the incidence and extent of transfemoral Tulip filter tilting.

  6. Effect of Ability Grouping in Reciprocal Teaching Technique of Collaborative Learning on Individual Achievements and Social Skills

    Science.gov (United States)

    Sumadi; Degeng, I Nyoman S.; Sulthon; Waras

    2017-01-01

    This research focused on the effects of ability grouping in the reciprocal teaching technique of collaborative learning on individual achievements and social skills. The results showed that (1) there are significant differences in individual achievement between the high homogeneous group, the middle homogeneous group, the low homogeneous group,…

  7. Preoperative and Postoperative CT Scan Assessment of Pterygomaxillary Junction in Patients Undergoing Le Fort I Osteotomy: Comparison of Pterygomaxillary Dysjunction Technique and Trimble Technique-A Pilot Study.

    Science.gov (United States)

    Dadwal, Himani; Shanmugasundaram, S; Krishnakumar Raja, V B

    2015-09-01

    To determine the rate of complications and occurrence of pterygoid plate fractures comparing two techniques of Le Fort I osteotomy i.e., Classic Pterygomaxillary Dysjunction technique and Trimble technique and to know whether the dimensions of pterygomaxillary junction [determined preoperatively by computed tomography (CT) scan] have any influence on pterygomaxillary separation achieved during surgery. The study group consisted of eight South Indian patients with maxillary excess. A total of 16 sides were examined by CT. Preoperative CT was analyzed for all the patients. The thickness and width of the pterygomaxillary junction and the distance of the greater palatine canal from the pterygomaxillary junction was noted. Pterygomaxillary dysjunction was achieved by two techniques, the classic pterygomaxillary dysjunction technique (Group I) and Trimble technique (Group II). Patients were selected randomly and equally for both the techniques. Dysjunction was analyzed by postoperative CT. The average thickness of the pterygomaxillary junction on 16 sides was 4.5 ± 1.2 mm. Untoward pterygoid plate fractures occurred in Group I in 3 sides out of 8. In Trimble technique (Group II), no pterygoid plate fractures were noted. The average width of the pterygomaxillary junction was 7.8 ± 1.5 mm, distance of the greater palatine canal from pterygomaxillary junction was 7.4 ± 1.6 mm and the length of fusion of pterygomaxillary junction was 8.0 ± 1.9 mm. The Le Fort I osteotomy has become a standard procedure for correcting various dentofacial deformities. In an attempt to make Le Fort I osteotomy safer and avoid the problems associated with sectioning with an osteotome between the maxillary tuberosity and the pterygoid plates, Trimble suggested sectioning across the posterior aspect of the maxillary tuberosity itself. In our study, comparison between the classic pterygomaxillary dysjunction technique and the Trimble technique was made by using postoperative CT scan

  8. Reducing Noise by Repetition: Introduction to Signal Averaging

    Science.gov (United States)

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
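The arithmetic behind signal averaging is a pointwise mean over repeated trials of the same stimulus: the signal is preserved while uncorrelated noise RMS falls roughly as 1/sqrt(N). A minimal sketch with illustrative helper names:

```python
def signal_average(trials):
    """Pointwise average of repeated recordings of the same signal.
    Each trial must be time-aligned to the stimulus and equal in length."""
    n = len(trials)
    return [sum(col) / n for col in zip(*trials)]

def rms(xs):
    """Root-mean-square amplitude, used to estimate residual noise."""
    return (sum(x * x for x in xs) / len(xs)) ** 0.5
```

Averaging the residuals (trial minus average) before and after shows the expected sqrt(N) improvement in signal-to-noise ratio on which the classroom experiments in the paper rely.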

  9. Use of nonstatistical techniques for pattern recognition to detect risk groups among liquidators of the Chernobyl NPP accident aftereffects

    International Nuclear Information System (INIS)

    Blinov, N.N.; Guslistyj, V.P.; Misyurev, A.V.; Novitskaya, N.N.; Snigireva, G.P.

    1993-01-01

    Attempt of using of the nonstatistical techniques for pattern recognition to detect the risk groups among liquidators of the Chernobyl NPP accident aftereffects was described. 14 hematologic, biochemical and biophysical blood serum parameters of the group of liquidators of the Chernobyl NPP accident impact as well as the group of donors free of any radiation dose (controlled group) were taken as the diagnostic parameters. Modification of the nonstatistical techniques for pattern recognition based on the assessment calculations were used. The patients were divided into risk group at the truth ∼ 80%

  10. Evaluation of an advanced physical diagnosis course using consumer preferences methods: the nominal group technique.

    Science.gov (United States)

    Coker, Joshua; Castiglioni, Analia; Kraemer, Ryan R; Massie, F Stanford; Morris, Jason L; Rodriguez, Martin; Russell, Stephen W; Shaneyfelt, Terrance; Willett, Lisa L; Estrada, Carlos A

    2014-03-01

    Current evaluation tools for medical school courses are limited by the scope of the questions asked and may not fully engage students to think about areas to improve. The authors sought to explore whether a technique used to study consumer preferences would elicit specific and prioritized information for course evaluation from medical students. Using the nominal group technique (4 sessions), 12 senior medical students prioritized and weighted expectations and topics learned in a 100-hour advanced physical diagnosis course (4-week course; February 2012). Students weighted their top 3 responses (top = 3, middle = 2 and bottom = 1). Before the course, the 12 students identified 23 topics they expected to learn; the top 3 were review sensitivity/specificity and high-yield techniques (percentage of total weight, 18.5%), improving diagnosis (13.8%) and reinforcing usual and less well-known techniques (13.8%). After the course, students generated 22 topics learned; the top 3 were practice and reinforce advanced maneuvers (25.4%), gaining confidence (22.5%) and learning the evidence (16.9%). The authors observed no differences in the priority of responses before and after the course (P = 0.07). In a physical diagnosis course, medical students elicited specific and prioritized information using the nominal group technique. The course met student expectations regarding education on the evidence-based physical examination, building skills and confidence in the proper techniques and maneuvers, and experiential learning. This novel use for curriculum evaluation may be applied to other courses, especially comprehensive and multicomponent courses.
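The weighting scheme described (top = 3, middle = 2, bottom = 1, with responses reported as a percentage of total weight) can be tallied mechanically. A sketch with illustrative names:

```python
def ngt_tally(ballots, weights=(3, 2, 1)):
    """Tally nominal-group-technique ballots. Each ballot lists up to
    three topics in rank order (top, middle, bottom); returns
    (topic, percentage of total weight) pairs, highest first."""
    scores = {}
    for ballot in ballots:
        for topic, w in zip(ballot, weights):
            scores[topic] = scores.get(topic, 0) + w
    total = sum(scores.values())
    return sorted(((t, 100.0 * s / total) for t, s in scores.items()),
                  key=lambda x: -x[1])
```

For example, two ballots that both rank "advanced maneuvers" first give that topic 6 of 12 weight points, i.e. 50% of total weight, the same statistic the study reports.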

  11. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined models from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which
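The driving step this abstract describes can be caricatured in a few lines: random single-coordinate moves scored by a harmonic pseudo-energy toward the averaged coordinates. The sketch below uses a zero-temperature acceptance rule and deliberately omits the bonded-geometry terms a real refinement would need in order to actually remove averaging artifacts; all names and parameters are illustrative:

```python
import random

def refine_toward_average(start, target, k=1.0, steps=5000, scale=0.1, seed=0):
    """Monte Carlo refinement sketch: propose random single-coordinate
    moves and accept them when the harmonic pseudo-energy
    E = k * sum((x - target)^2) decreases."""
    rng = random.Random(seed)
    coords = list(start)

    def energy(c):
        return k * sum((a - b) ** 2 for a, b in zip(c, target))

    e = energy(coords)
    for _ in range(steps):
        i = rng.randrange(len(coords))
        old = coords[i]
        coords[i] += rng.uniform(-scale, scale)
        e_new = energy(coords)
        if e_new < e:
            e = e_new          # accept improving move
        else:
            coords[i] = old    # reject and restore
    return coords
```

In the published method the pseudo-energy pulls the structure toward the average while other terms keep local geometry physical, which is what lets it trade a 0.08 Å RMSD penalty for the near-elimination of clashes.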

  12. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
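Smoothed model averaging is commonly contrasted with the 0-1 weighting of a PMSE by using information-criterion weights, e.g. Akaike weights w_i proportional to exp(-(AIC_i - AIC_min)/2); selection corresponds to putting all mass on the best model. A sketch with illustrative names (the paper's risk-function comparisons are not reproduced here):

```python
import math

def akaike_weights(aics):
    """Model-averaging weights from AIC values:
    w_i = exp(-(AIC_i - AIC_min)/2), normalized to sum to 1."""
    best = min(aics)
    raw = [math.exp(-(a - best) / 2.0) for a in aics]
    s = sum(raw)
    return [r / s for r in raw]

def model_average(estimates, weights):
    """Weighted average of per-model point estimates; with a 0-1 weight
    vector this reduces to a post-model-selection estimate."""
    return sum(w * e for w, e in zip(weights, estimates))
```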

  13. Comparison Between Conventional and Automated Techniques for Blood Grouping and Crossmatching: Experience from a Tertiary Care Centre.

    Science.gov (United States)

    Bhagwat, Swarupa Nikhil; Sharma, Jayashree H; Jose, Julie; Modi, Charusmita J

    2015-01-01

    The routine immunohematological tests can be performed by automated as well as manual techniques. These techniques have advantages and disadvantages inherent to them. The present study aims to compare the results of manual and automated techniques for blood grouping and crossmatching so as to validate the automated system effectively. A total of 1000 samples were subjected to blood grouping by the conventional tube technique (CTT) and the automated microplate LYRA system on Techno TwinStation. A total of 269 samples (multitransfused patients and multigravida females) were compared for 927 crossmatches by the CTT in indirect antiglobulin phase against the column agglutination technique (CAT) performed on Techno TwinStation. For blood grouping, the study showed a concordance in results for 942/1000 samples (94.2%), discordance for 4/1000 (0.4%) samples and uninterpretable results for 54/1000 samples (5.4%). On resolution, the uninterpretable results reduced to 49/1000 samples (4.9%), with 951/1000 samples (95.1%) showing concordant results. For crossmatching, the automated CAT showed concordant results in 887/927 (95.6%) and discordant results in 3/927 (0.32%) crossmatches as compared to the CTT. In total, 37/927 (3.9%) crossmatches were not interpretable by the automated technique. The automated system shows a high concordance of results with the CTT and hence can be brought into routine use. However, the high proportion of uninterpretable results emphasizes that proper training and standardization are needed prior to its use.

  14. Techniques for small-bone lengthening in congenital anomalies of the hand and foot.

    Science.gov (United States)

    Minguella, J; Cabrera, M; Escolá, J

    2001-10-01

    The purpose of this study is to analyse three different lengthening techniques used in 31 small bones for congenital malformations of the hand and foot: 15 metacarpals, 12 metatarsals, 1 foot stump and 3 spaces between a previously transplanted phalanx and the carpus or the metacarpal. Progressive lengthening with an external fixator device was performed in 23 cases: the callus distraction (callotasis) technique was used in 15 cases, whereas in the other 8 cases the speed of lengthening was faster and the defect was bridged with a bone graft as a second stage. In another eight cases, a one-stage lengthening was performed. In the callotasis group, the total length gained ranged from 9 mm to 30 mm and the percentage of lengthening obtained (compared with the initial bone length) averaged 53.4%; in the fast lengthening group, the length gained ranged from 8 mm to 15 mm, and the average percentage of lengthening was 53.1%; and in the one-stage group, the length gained ranged from 7 mm to 15 mm, and the average percentage of lengthening was 43%. The overall complication rate was 22.5%.

  15. Integrating angle-frequency domain synchronous averaging technique with feature extraction for gear fault diagnosis

    Science.gov (United States)

    Zhang, Shengli; Tang, J.

    2018-01-01

    Gear fault diagnosis relies heavily on the scrutiny of measured vibration responses. In reality, gear vibration signals are noisy and dominated by meshing frequencies as well as their harmonics, which oftentimes overlay the fault-related components. Moreover, many gear transmission systems, e.g., those in wind turbines, constantly operate under non-stationary conditions. To reduce the influence of non-synchronous components and noise, a fault signature enhancement method that is built upon angle-frequency domain synchronous averaging is developed in this paper. Instead of being averaged in the time domain, the signals are processed in the angle-frequency domain to solve the issue of phase shifts between signal segments due to uncertainties caused by clearances, input disturbances, sampling errors, etc. The enhanced results are then analyzed through feature extraction algorithms to identify the most distinct features for fault classification and identification. Specifically, Kernel Principal Component Analysis (KPCA), targeting nonlinearity, Multilinear Principal Component Analysis (MPCA), targeting high dimensionality, and Locally Linear Embedding (LLE), targeting local similarity among the enhanced data, are employed and compared to yield insights. Numerical and experimental investigations are performed, and the results reveal the effectiveness of angle-frequency domain synchronous averaging in enabling feature extraction and classification.

  16. A new technique for noise reduction at coronary CT angiography with multi-phase data-averaging and non-rigid image registration

    Energy Technology Data Exchange (ETDEWEB)

    Tatsugami, Fuminari; Higaki, Toru; Nakamura, Yuko; Yamagami, Takuji; Date, Shuji; Awai, Kazuo [Hiroshima University, Department of Diagnostic Radiology, Minami-ku, Hiroshima (Japan); Fujioka, Chikako; Kiguchi, Masao [Hiroshima University, Department of Radiology, Minami-ku, Hiroshima (Japan); Kihara, Yasuki [Hiroshima University, Department of Cardiovascular Medicine, Minami-ku, Hiroshima (Japan)

    2015-01-15

    To investigate the feasibility of a newly developed noise reduction technique at coronary CT angiography (CTA) that uses multi-phase data-averaging and non-rigid image registration. Sixty-five patients underwent coronary CTA with prospective ECG-triggering. The range of the phase window was set at 70-80 % of the R-R interval. First, three sets of consecutive volume data at 70 %, 75 % and 80 % of the R-R interval were prepared. Second, we applied non-rigid registration to align the 70 % and 80 % images to the 75 % image. Finally, we performed weighted averaging of the three images and generated a de-noised image. The image noise and contrast-to-noise ratio (CNR) in the proximal coronary arteries between the conventional 75 % and the de-noised images were compared. Two radiologists evaluated the image quality using a 5-point scale (1, poor; 5, excellent). On de-noised images, mean image noise was significantly lower than on conventional 75 % images (18.3 HU ± 2.6 vs. 23.0 HU ± 3.3, P < 0.01) and the CNR was significantly higher (P < 0.01). The mean image quality score for conventional 75 % and de-noised images was 3.9 and 4.4, respectively (P < 0.01). Our method reduces image noise and improves image quality at coronary CTA. (orig.)
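The final de-noising step is a weighted average of the three registered phase images; noise variance drops according to the squared weights. This sketch assumes the non-rigid registration to the 75% phase has already been done, and the weights are illustrative, not those used by the authors.

```python
import numpy as np

def weighted_phase_average(img70, img75, img80, w=(0.25, 0.5, 0.25)):
    """Weighted average of three cardiac-phase images (already registered
    to the 75% phase). Independent noise is reduced by a factor of
    sqrt(sum(w_i^2)) relative to a single frame."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    return w[0] * img70 + w[1] * img75 + w[2] * img80
```

With the illustrative weights (0.25, 0.5, 0.25), independent per-frame noise is reduced to about 61% of its single-frame level, broadly consistent with the noise drop reported in the abstract (23.0 HU to 18.3 HU).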

  17. Identification of Strategies to Facilitate Organ Donation among African Americans using the Nominal Group Technique

    Science.gov (United States)

    Qu, Haiyan; Shewchuk, Richard; Mannon, Roslyn B.; Gaston, Robert; Segev, Dorry L.; Mannon, Elinor C.; Martin, Michelle Y.

    2015-01-01

    Background and objectives African Americans are disproportionately affected by ESRD, but few receive a living donor kidney transplant. Surveys assessing attitudes toward donation have shown that African Americans are less likely to express a willingness to donate their own organs. Studies aimed at understanding factors that may facilitate the willingness of African Americans to become organ donors are needed. Design, setting, participants, & measurements A novel formative research method was used (the nominal group technique) to identify and prioritize strategies for facilitating increases in organ donation among church-attending African Americans. Four nominal group technique panel interviews were convened (three community and one clergy). Each community panel represented a distinct local church; the clergy panel represented five distinct faith-based denominations. Before nominal group technique interviews, participants completed a questionnaire that assessed willingness to become a donor; 28 African-American adults (≥19 years old) participated in the study. Results In total, 66.7% of participants identified knowledge- or education-related strategies as the most important strategies in facilitating willingness to become an organ donor, a view that was even more pronounced among clergy. Three of four nominal group technique panels rated a knowledge-based strategy as the most important and included strategies such as information on donor involvement and donation-related risks; 29.6% of participants indicated that they disagreed with deceased donation, and 37% of participants disagreed with living donation. Community participants' reservations about becoming an organ donor were similar for living (38.1%) and deceased (33.4%) donation; in contrast, clergy participants were more likely to express reservations about living donation (33.3% versus 16.7%). Conclusions These data indicate a greater opposition to living donation compared with donation after one's death.

  18. Improving Study Habits of Junior High School Students Through Self-Management versus Group Discussion

    Science.gov (United States)

    Harris, Mary B.; Trujillo, Amaryllis E.

    1975-01-01

    Both a self-management approach, teaching the principles of behavior modification and self-control (n=36), and a group-discussion technique, involving discussion of study habits and problems (n=41), led to improvements in grade point averages compared with a no-treatment control group (n=36) for low-achieving junior high school students. (Author)

  19. Nominal group technique to select attributes for discrete choice experiments: an example for drug treatment choice in osteoporosis

    Directory of Open Access Journals (Sweden)

    Hiligsmann M

    2013-02-01

    Full Text Available Mickael Hiligsmann,1-3 Caroline van Durme,2 Piet Geusens,2 Benedict GC Dellaert,4 Carmen D Dirksen,3 Trudy van der Weijden,5 Jean-Yves Reginster,6 Annelies Boonen2 1Department of Health Services Research, School for Public Health and Primary Care (CAPHRI), Maastricht University, The Netherlands; 2Department of Internal Medicine, CAPHRI, Maastricht University, The Netherlands; 3Department of Clinical Epidemiology and Medical Technology Assessment, CAPHRI, Maastricht University, The Netherlands; 4Department of Business Economics, Erasmus Rotterdam University, The Netherlands; 5Department of General Practice, CAPHRI, Maastricht University, The Netherlands; 6Department of Public Health, Epidemiology and Health Economics, University of Liege, Belgium. Background: Attribute selection represents an important step in the development of discrete-choice experiments (DCEs), but is often poorly reported. In some situations, the number of attributes identified may exceed what one may find possible to pilot in a DCE. Hence, there is a need to gain insight into methods to select attributes in order to construct the final list of attributes. This study aims to test the feasibility of using the nominal group technique (NGT) to select attributes for DCEs. Methods: Patient group discussions (4-8 participants) were convened to prioritize a list of 12 potentially important attributes for osteoporosis drug therapy. The NGT consisted of three steps: an individual ranking of the 12 attributes by importance from 1 to 12, a group discussion on each of the attributes, including a group review of the aggregate score of the initial rankings, and a second ranking task of the same attributes. Results: Twenty-six osteoporotic patients participated in five NGT sessions. Most (80%) of the patients changed their ranking after the discussion. However, the average initial and final rankings did not differ markedly. In the final ranking, the most important medication attributes were

  20. DIRECT AND INDIRECT FLUORESCENT-ANTIBODY TECHNIQUES FOR THE PSITTACOSIS-LYMPHOGRANULOMA VENEREUM-TRACHOMA GROUP OF AGENTS1

    Science.gov (United States)

    Ross, Martin R.; Borman, Earle K.

    1963-01-01

    Ross, Martin R. (Connecticut State Department of Health, Hartford) and Earle K. Borman. Direct and indirect fluorescent-antibody techniques for the psittacosis-lymphogranuloma venereum-trachoma group of agents. J. Bacteriol. 85:851–858. 1963.—Direct and indirect fluorescent-antibody (FA) techniques were developed for the detection of group antigen in infected tissue cultures and the titration of group antibody in human antiserum. The growth of the agent of meningopneumonitis (MP) in mouse embryo lung cell monolayers was followed by infectivity and complement-fixing (CF) antigen titrations, and cytological examination of FA stained cultures. Although infectivity and CF antigen reached a peak at 2 days and remained constant for an additional 3 days, only cells tested 2 to 3 days after infection were suitable for FA staining with labeled anti-MP serum because of excessive artifacts in the older cultures. Fluorescein isothiocyanate-labeled rooster and guinea pig anti-MP serums and human antipsittacosis serums were titrated in direct FA and hemagglutination-inhibition (HI) tests. The rooster conjugate showed brighter staining and higher antibody titers than the guinea pig or human conjugates and was more effective in detecting minimal amounts of virus antigen. FA staining reactions with 1 and 2 units of labeled rooster serum were inhibited by unlabeled rooster serum but clear-cut inhibition with human antipsittacosis serum could not be demonstrated. The indirect FA technique was successfully used for the titration of group antibody in human serum. A comparison of the indirect FA, HI, and CF tests showed the indirect FA technique to be intermediate in sensitivity between the HI and CF tests. None of the three tests showed significant cross reactions with human serums reactive for influenza A and B; parainfluenza 1, 2, and 3; respiratory syncytial virus; Q fever; or the primary atypical pneumonia agent. PMID:14044954

  1. Microscopic description of pair transfer between two superfluid Fermi systems: Combining phase-space averaging and combinatorial techniques

    Science.gov (United States)

    Regnier, David; Lacroix, Denis; Scamps, Guillaume; Hashimoto, Yukio

    2018-03-01

    In a mean-field description of superfluidity, particle number and gauge angle are treated as quasiclassical conjugated variables. This level of description was recently used to describe nuclear reactions around the Coulomb barrier. Important effects of the relative gauge angle between two identical superfluid nuclei (symmetric collisions) on transfer probabilities and fusion barriers have been uncovered. A theory making contact with experiments should at least average over different initial relative gauge angles. In the present work, we propose a new approach to obtain the multiple pair transfer probabilities between superfluid systems. This method, called the phase-space combinatorial (PSC) technique, relies both on phase-space averaging and combinatorial arguments to infer the full pair transfer probability distribution at the cost of multiple mean-field calculations only. After benchmarking this approach in a schematic model, we apply it to the collision 20O+20O at various energies below the Coulomb barrier. The predictions for one-pair transfer are similar to results obtained with an approximated projection method, whereas significant differences are found for two-pair transfer. Finally, we investigated the applicability of the PSC method to the contact between nonidentical superfluid systems. A generalization of the method is proposed and applied to the schematic model, showing that the pair transfer probabilities are reasonably reproduced. The applicability of the PSC method to asymmetric nuclear collisions is investigated for the 14O+20O collision, and it turns out that unrealistically small single- and multiple-pair transfer probabilities are obtained. This is explained by the fact that the relative gauge angle plays in this case a minor role in the particle transfer process compared to other mechanisms, such as equilibration of the charge/mass ratio. We conclude that the best ground for probing gauge-angle effects in nuclear reaction and/or for applying the proposed

  2. 40 CFR 63.652 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... emissions more than the reference control technology, but the combination of the pollution prevention... emissions average. This must include any Group 1 emission points to which the reference control technology... agrees has a higher nominal efficiency than the reference control technology. Information on the nominal...

  3. Testing averaged cosmology with type Ia supernovae and BAO data

    Energy Technology Data Exchange (ETDEWEB)

    Santos, B.; Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil); Coley, A.A. [Department of Mathematics and Statistics, Dalhousie University, Halifax, B3H 3J5 Canada (Canada); Devi, N. Chandrachani, E-mail: thoven@on.br, E-mail: aac@mathstat.dal.ca, E-mail: chandrachaniningombam@astro.unam.mx, E-mail: alcaniz@on.br [Instituto de Astronomía, Universidad Nacional Autónoma de México, Box 70-264, México City, México (Mexico)

    2017-02-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  4. Testing averaged cosmology with type Ia supernovae and BAO data

    International Nuclear Information System (INIS)

    Santos, B.; Alcaniz, J.S.; Coley, A.A.; Devi, N. Chandrachani

    2017-01-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  5. Serpent-COREDAX analysis of CANDU-6 time-average model

    Energy Technology Data Exchange (ETDEWEB)

    Motalab, M.A.; Cho, B.; Kim, W.; Cho, N.Z.; Kim, Y., E-mail: yongheekim@kaist.ac.kr [Korea Advanced Inst. of Science and Technology (KAIST), Dept. of Nuclear and Quantum Engineering Daejeon (Korea, Republic of)

    2015-07-01

    COREDAX-2 is a nuclear core analysis nodal code that has adopted the Analytic Function Expansion Nodal (AFEN) methodology, which was developed in Korea. The AFEN method outperforms other conventional nodal methods in terms of accuracy. To evaluate the possibility of CANDU-type core analysis using COREDAX-2, a time-average analysis code system was developed. The two-group homogenized cross-sections were calculated using the Monte Carlo code Serpent2. A stand-alone time-average module was developed to determine the time-average burnup distribution in the core for a given fuel management strategy. The coupled Serpent-COREDAX-2 calculation converges to an equilibrium time-average model for the CANDU-6 core. (author)

  6. Passive quantum error correction of linear optics networks through error averaging

    Science.gov (United States)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.

  7. Using Bayes Model Averaging for Wind Power Forecasts

    Science.gov (United States)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is, however, well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might nevertheless be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2], the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power, it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper, the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data
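The BMA predictive PDF described above is just a weights-weighted mixture of the members' PDFs. A minimal Gaussian-mixture sketch (the member means, spreads, and weights below are illustrative, not fitted BMA parameters):

```python
import numpy as np

def bma_pdf(y, means, sigmas, weights):
    """BMA predictive density: a weighted mixture of the ensemble
    members' (here Gaussian) predictive PDFs, evaluated at points y."""
    y = np.asarray(y, dtype=float)[..., None]      # broadcast over members
    means = np.asarray(means, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                # weights act like posterior probs
    comp = np.exp(-0.5 * ((y - means) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    return (w * comp).sum(axis=-1)
```

The mixture mean is the weight-averaged member mean, which is why members that track the regional output well can be given more influence simply through larger weights.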

  8. Fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and particle swarm optimization techniques.

    Science.gov (United States)

    Chen, Shyi-Ming; Manalu, Gandhi Maruli Tua; Pan, Jeng-Shyang; Liu, Hsiang-Chuan

    2013-06-01

    In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and particle swarm optimization (PSO) techniques. First, we fuzzify the historical training data of the main factor and the secondary factor, respectively, to form two-factors second-order fuzzy logical relationships. Then, we group the two-factors second-order fuzzy logical relationships into two-factors second-order fuzzy-trend logical relationship groups. Then, we obtain the optimal weighting vector for each fuzzy-trend logical relationship group by using PSO techniques to perform the forecasting. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index and the NTD/USD exchange rates. The experimental results show that the proposed method achieves better forecasting performance than the existing methods.

  9. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  10. Generic features of the dynamics of complex open quantum systems: statistical approach based on averages over the unitary group.

    Science.gov (United States)

    Gessner, Manuel; Breuer, Heinz-Peter

    2013-04-01

    We obtain exact analytic expressions for a class of functions expressed as integrals over the Haar measure of the unitary group in d dimensions. Based on these general mathematical results, we investigate generic dynamical properties of complex open quantum systems, employing arguments from ensemble theory. We further generalize these results to arbitrary eigenvalue distributions, allowing a detailed comparison of typical regular and chaotic systems with the help of concepts from random matrix theory. To illustrate the physical relevance and the general applicability of our results we present a series of examples related to the fields of open quantum systems and nonequilibrium quantum thermodynamics. These include the effect of initial correlations, the average quantum dynamical maps, the generic dynamics of system-environment pure state entanglement and, finally, the equilibration of generic open and closed quantum systems.
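One of the simplest exact Haar-measure averages underlying results of this kind can be checked numerically: twirling any state ρ over Haar-random unitaries gives ∫ U ρ U† dU = tr(ρ) I/d. A small Monte Carlo sketch (function names illustrative; Haar sampling via QR of a complex Ginibre matrix with the standard phase fix):

```python
import numpy as np

def haar_unitary(d, rng):
    """Haar-random unitary: QR of a complex Ginibre matrix,
    with column phases fixed so the distribution is truly Haar."""
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    diag = np.diag(r)
    return q * (diag / np.abs(diag))   # rescale each column's phase

def twirl(rho, n_samples=3000, seed=0):
    """Monte Carlo estimate of the Haar average  ∫ U rho U† dU  = tr(rho) I/d."""
    rng = np.random.default_rng(seed)
    d = rho.shape[0]
    acc = np.zeros((d, d), dtype=complex)
    for _ in range(n_samples):
        u = haar_unitary(d, rng)
        acc += u @ rho @ u.conj().T
    return acc / n_samples
```

Higher-moment averages (integrals of polynomials in U and U†, as treated analytically in the paper) can be estimated the same way, at the cost of slower Monte Carlo convergence.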

  11. Performance values for non destructive assay (NDA) techniques applied to safeguards: the 2002 evaluation by the ESARDA NDA Working Group

    International Nuclear Information System (INIS)

    Guardini, S.

    2003-01-01

    The first evaluation of NDA performance values undertaken by the ESARDA Working Group for Standards and Non Destructive Assay Techniques (WGNDA) was published in 1993. Almost 10 years later the Working Group decided to review those values, to report about improvements and to issue new performance values for techniques which were not applied in the early nineties, or were at that time only emerging. Non-Destructive Assay techniques have become more and more important in recent years, and they are used to a large extent in nuclear material accountancy and control both by operators and control authorities. As a consequence, the performance evaluation for NDA techniques is of particular relevance to safeguards authorities in optimising Safeguards operations and reducing costs. Performance values are important also for NMAC regulators, to define detection levels, limits for anomalies, goal quantities and to negotiate basic audit rules. This paper presents the latest evaluation of ESARDA Performance Values (EPVs) for the most common NDA techniques currently used for the assay of nuclear materials for Safeguards purposes. The main topics covered by the document are: techniques for plutonium-bearing materials (PuO2 and MOX); techniques for U-bearing materials; techniques for U and Pu in liquid form; techniques for spent fuel assay. This issue of the performance values is the result of specific international round robin exercises, field measurements and ad hoc experiments, evaluated and discussed in the ESARDA NDA Working Group. (author)

  12. Assessing the Utility of the Nominal Group Technique as a Consensus-Building Tool in Extension-Led Avian Influenza Response Planning

    Science.gov (United States)

    Kline, Terence R.

    2013-01-01

    The intent of the project described was to apply the Nominal Group Technique (NGT) to achieve a consensus on Avian Influenza (AI) planning in Northeastern Ohio. Nominal Group Technique is a process first developed by Delbecq, Van de Ven, and Gustafson (1975) to allow all participants to have an equal say in an open forum setting. A very diverse…

  13. Randomized clinical trial comparing control of maxillary anchorage with 2 retraction techniques.

    Science.gov (United States)

    Xu, Tian-Min; Zhang, Xiaoyun; Oh, Hee Soo; Boyd, Robert L; Korn, Edward L; Baumrind, Sheldon

    2010-11-01

    The objective of this pilot randomized clinical trial was to investigate the relative effectiveness of anchorage conservation of en-masse and 2-step retraction techniques during maximum anchorage treatment in patients with Angle Class I and Class II malocclusions. Sixty-four growing subjects (25 boys, 39 girls; 10.2-15.9 years old) who required maximum anchorage were randomized to 2 treatment techniques: en-masse retraction (n = 32) and 2-step retraction (n = 32); the groups were stratified by sex and starting age. Each patient was treated by a full-time clinic instructor experienced in the use of both retraction techniques at the orthodontic clinic of Peking University School of Stomatology in China. All patients used headgear, and most had transpalatal appliances. Lateral cephalograms taken before treatment and at the end of treatment were used to evaluate treatment-associated changes. Differences in maxillary molar mesial displacement and maxillary incisor retraction were measured with the before and after treatment tracings superimposed on the anatomic best fit of the palatal structures. Differences in mesial displacement of the maxillary first molar were compared between the 2 treatment techniques, between sexes, and between different starting-age groups. Average mesial displacement of the maxillary first molar was slightly less in the en-masse group than in the 2-step group (mean, -0.36 mm; 95% CI, -1.42 to 0.71 mm). The average mesial displacement of the maxillary first molar for both treatment groups pooled (n = 63, because 1 patient was lost to follow-up) was 4.3 ± 2.1 mm (mean ± standard deviation). Boys had significantly more mesial displacement than girls (mean difference, 1.3 mm; P <0.03). Younger adolescents had significantly more mesial displacement than older adolescents (mean difference, 1.3 mm; P <0.02). 
Average mesial displacement of the maxillary first molar with 2-step retraction was slightly greater than that for en-masse retraction, but the

  14. Respiration monitoring by Electrical Bioimpedance (EBI) Technique in a group of healthy males. Calibration equations

    International Nuclear Information System (INIS)

    Balleza, M; Vargas, M; Delgadillo, I; Kashina, S; Huerta, M R; Moreno, G

    2017-01-01

    Several research groups have proposed electrical impedance tomography (EIT) as a means to analyse lung ventilation. With the use of 16 electrodes, EIT is capable of obtaining a set of transversal section images of the thorax. In previous works, we have obtained an alternating signal, in terms of impedance, corresponding to respiration from EIT images. Then, in order to transform those impedance changes into a measurable volume signal, a set of calibration equations was obtained. However, the EIT technique is still too expensive for attending outpatients in basic hospitals. For that reason, we propose the use of the electrical bioimpedance (EBI) technique to monitor respiration behaviour. The aim of this study was to obtain a set of calibration equations to transform EBI impedance changes, determined at 4 different frequencies, into a measurable volume signal. In this study a group of 8 healthy males was assessed. From the obtained results, a high goodness of fit of the group calibration equations was evidenced. The volume determinations obtained by EBI were then compared with those obtained by our gold standard. Therefore, although EBI does not provide the complete information about lung impedance vectors that EIT does, it is possible to monitor respiration with it. (paper)
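The per-frequency calibration step amounts to fitting a map from impedance change to respired volume against a gold standard. This sketch assumes a simple linear form V ≈ a·ΔZ + b with synthetic data; the paper's actual equation form and coefficients are not reproduced here.

```python
import numpy as np

def fit_calibration(delta_z, volume):
    """Least-squares fit of V = a*dZ + b against a gold-standard volume
    signal; one such calibration would be fitted per excitation frequency."""
    a, b = np.polyfit(delta_z, volume, 1)
    return float(a), float(b)

def apply_calibration(delta_z, a, b):
    """Transform an EBI impedance-change signal into a volume signal."""
    return a * np.asarray(delta_z, dtype=float) + b
```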

  15. Transmission and group-delay characterization of coupled resonator optical waveguides apodized through the longitudinal offset technique.

    Science.gov (United States)

    Doménech, J D; Muñoz, P; Capmany, J

    2011-01-15

    In this Letter, the amplitude and group-delay characteristics of coupled resonator optical waveguides apodized through the longitudinal offset technique are presented. The devices were fabricated in silicon-on-insulator technology employing deep-ultraviolet lithography. The structures analyzed consisted of three racetrack resonators, uniform (nonapodized) and apodized with the aforementioned technique, showing delays of 5 ± 3 ps and 4 ± 0.5 ps over 1.6 and 1.4 nm bandwidths, respectively.

  16. Performance Values for Non-Destructive Assay (NDA) Technique Applied to Wastes: Evaluation by the ESARDA NDA Working Group

    International Nuclear Information System (INIS)

    Rackham, Jamie; Weber, Anne-Laure; Chard, Patrick

    2012-01-01

    The first evaluation of NDA performance values was undertaken by the ESARDA Working Group for Standards and Non Destructive Assay Techniques and was published in 1993. Almost ten years later, in 2002, the Working Group reviewed those values and reported on improvements in performance values and new measurement techniques that had emerged since the original assessment. The 2002 evaluation of NDA performance values did not include waste measurements (although these had been incorporated into the 1993 exercise), because although the same measurement techniques are generally applied, the performance is significantly different compared to the assay of conventional Safeguarded special nuclear material. It was therefore considered more appropriate to perform a separate evaluation of performance values for waste assay. Waste assay is becoming increasingly important within the Safeguards community, particularly since the implementation of the Additional Protocol, which calls for declaration of plutonium- and HEU-bearing waste in addition to information on existing declared material or facilities. Improvements in measurement performance in recent years, in particular in accuracy, mean that special nuclear materials can now be accounted for in wastes with greater certainty. This paper presents an evaluation of performance values for the NDA techniques in common use for the assay of waste containing special nuclear material. The main topics covered by the document are: 1 - techniques for plutonium-bearing solid wastes; 2 - techniques for uranium-bearing solid wastes; 3 - techniques for assay of fissile material in spent fuel wastes. Originally it was intended to include performance values for measurements of uranium and plutonium in liquid wastes; however, as no performance data for liquid waste measurements were obtained, it was decided to exclude liquid wastes from this report.
This issue of the performance values for waste assay has been evaluated and discussed by the ESARDA

  17. Performance Values for Non-Destructive Assay (NDA) Technique Applied to Wastes: Evaluation by the ESARDA NDA Working Group

    Energy Technology Data Exchange (ETDEWEB)

    Rackham, Jamie [Babcock International Group, Sellafield, Seascale, Cumbria, (United Kingdom); Weber, Anne-Laure [Institut de Radioprotection et de Surete Nucleaire Fontenay-Aux-Roses (France); Chard, Patrick [Canberra, Forss Business and Technology park, Thurso, Caithness (United Kingdom)

    2012-12-15

    The first evaluation of NDA performance values was undertaken by the ESARDA Working Group for Standards and Non Destructive Assay Techniques and was published in 1993. Almost ten years later, in 2002, the Working Group reviewed those values and reported on improvements in performance values and new measurement techniques that had emerged since the original assessment. The 2002 evaluation of NDA performance values did not include waste measurements (although these had been incorporated into the 1993 exercise), because although the same measurement techniques are generally applied, the performance is significantly different compared to the assay of conventional Safeguarded special nuclear material. It was therefore considered more appropriate to perform a separate evaluation of performance values for waste assay. Waste assay is becoming increasingly important within the Safeguards community, particularly since the implementation of the Additional Protocol, which calls for declaration of plutonium- and HEU-bearing waste in addition to information on existing declared material or facilities. Improvements in measurement performance in recent years, in particular in accuracy, mean that special nuclear materials can now be accounted for in wastes with greater certainty. This paper presents an evaluation of performance values for the NDA techniques in common use for the assay of waste containing special nuclear material. The main topics covered by the document are: 1 - techniques for plutonium-bearing solid wastes; 2 - techniques for uranium-bearing solid wastes; 3 - techniques for assay of fissile material in spent fuel wastes. Originally it was intended to include performance values for measurements of uranium and plutonium in liquid wastes; however, as no performance data for liquid waste measurements were obtained, it was decided to exclude liquid wastes from this report.
This issue of the performance values for waste assay has been evaluated and discussed by the ESARDA

  18. Effectiveness comparison of inferior alveolar nerve block anesthesia using direct and indirect technique

    Directory of Open Access Journals (Sweden)

    Rehatta Yongki

    2016-12-01

    Full Text Available. Local anesthesia is important prior to tooth extraction to control the patient's pain. Local anesthetic techniques in dentistry comprise topical, infiltration, and block anesthesia. For molar extraction, the mandibular block technique is used, either direct or indirect. This study aimed to determine whether the direct and indirect inferior alveolar nerve block techniques differ in effectiveness. This clinical experimental study used 20 patients sampled during February-April: 10 patients underwent the direct technique and 10 the indirect technique, selected by purposive sampling. Pain level was measured using objective assessment (pain experienced by the patient after a given stimulus) and subjective assessment (numb sensation perceived by the patient). The average onset time for the direct and indirect techniques was 16.88 ± 5.30 and 102.00 ± 19.56 seconds (subjectively) and 22.50 ± 8.02 and 159.00 ± 25.10 seconds (objectively); the direct technique therefore had a faster onset. The average duration of the direct and indirect techniques was 121.63 ± 8.80 and 87.80 ± 9.96 minutes (subjectively) and 91.88 ± 8.37 and 60.20 ± 10.40 minutes (objectively); the direct technique therefore lasted longer. There was no significant difference in anesthesia depth or aspiration level. This study indicates that the direct technique performs better than the indirect technique in terms of onset and duration, while anesthesia depth and aspiration level are comparable; no significant differences in success rate were found with respect to gender, age, or extracted tooth.

  19. Using the IGCRA (individual, group, classroom reflective action technique to enhance teaching and learning in large accountancy classes

    Directory of Open Access Journals (Sweden)

    Cristina Poyatos

    2011-02-01

    Full Text Available. First-year accounting has generally been perceived as one of the more challenging first-year business courses for university students. Various Classroom Assessment Techniques (CATs) have been proposed to enrich and enhance student learning, with these studies generally positioning students as learners alone. This paper uses an educational case study approach to examine the implementation of the IGCRA (individual, group, classroom reflective action) technique, a Classroom Assessment Technique, on first-year accounting students' learning performance. Building on theoretical frameworks in the areas of cognitive learning, social development, and dialogical learning, the technique uses reports to promote reflection on both learning and teaching. IGCRA was found to promote feedback on the effectiveness of student learning, as well as teacher satisfaction. Moreover, the results indicated that formative feedback can help improve the learning and the learning environment for a large group of first-year accounting students. Clear guidelines for its implementation are provided in the paper.

  20. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot, for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes, is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by ± 2.25 mm (approximately half a shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
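The averaging step described above, once each footprint has been reduced to radial distances sampled on a common set of equiangular rays, is a per-ray mean within length-matched groups. A minimal sketch (function names and the dict-based grouping are illustrative, not from the paper):

```python
def average_radial_profile(profiles):
    """Per-ray average of radial distances; every profile must be sampled
    at the same set of equiangular rays."""
    n_feet = len(profiles)
    n_rays = len(profiles[0])
    return [sum(p[i] for p in profiles) / n_feet for i in range(n_rays)]

def group_by_length(feet, bin_width=4.5):
    """Bin feet by overall length so each group spans +/- 2.25 mm
    (about half a shoe size).  'feet' maps foot length (mm) to its
    radial profile."""
    groups = {}
    for length, profile in feet.items():
        key = round(length / bin_width)
        groups.setdefault(key, []).append(profile)
    return {k: average_radial_profile(v) for k, v in groups.items()}
```

The alignment axis and the split into two overlapping arcs must of course happen before this step, so that ray index i means the same anatomical direction on every foot.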

  1. Teleradiology-based CT colonography to screen a population group of a remote island, at average risk for colorectal cancer

    International Nuclear Information System (INIS)

    Lefere, Philippe; Silva, Celso; Gryspeerdt, Stefaan; Rodrigues, António; Vasconcelos, Rita; Teixeira, Ricardo; Gouveia, Francisco Henriques de

    2013-01-01

    Purpose: To prospectively assess the performance of teleradiology-based CT colonography to screen a population group of an island, at average risk for colorectal cancer. Materials and methods: A cohort of 514 patients living in Madeira, Portugal, was enrolled in the study. Institutional review board approval was obtained and all patients signed an informed consent. All patients underwent both CT colonography and optical colonoscopy. CT colonography was interpreted by an experienced radiologist at a remote centre using teleradiology. Per-patient sensitivity, specificity, and positive (PPV) and negative (NPV) predictive values with 95% confidence intervals (95% CI) were calculated for colorectal adenomas and advanced neoplasia ≥6 mm. Results: 510 patients were included in the study. CT colonography obtained a per-patient sensitivity, specificity, PPV, and NPV for adenomas ≥6 mm of 98.11% (88.6–99.9% 95% CI), 90.97% (87.8–93.4% 95% CI), 56.52% (45.8–66.7% 95% CI), and 99.75% (98.4–99.9% 95% CI), respectively. For advanced neoplasia ≥6 mm, the per-patient sensitivity, specificity, PPV, and NPV were 100% (86.7–100% 95% CI), 87.07% (83.6–89.9% 95% CI), 34.78% (25.3–45.5% 95% CI), and 100% (98.8–100% 95% CI), respectively. Conclusion: In this prospective trial, teleradiology-based CT colonography was accurate for screening a patient cohort of a remote island at average risk for colorectal cancer.
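The four per-patient metrics above come straight from a 2x2 confusion matrix against the colonoscopy reference. A minimal sketch (the confusion-matrix counts below are generic examples, not reconstructed from the study, and the study's own interval method is not stated in the abstract; the Wald interval shown is simply the most elementary choice):

```python
import math

def diagnostic_metrics(tp, fp, fn, tn):
    """Per-patient screening metrics from a 2x2 confusion matrix
    (reference standard: optical colonoscopy)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

def wald_ci_95(p, n):
    """Normal-approximation 95% CI for a proportion estimated from n
    patients; exact or score intervals are common alternatives."""
    half = 1.96 * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)
```

Note how PPV (56.52% and 34.78% here) is dragged down by low disease prevalence even when sensitivity and specificity are high, which is typical of average-risk screening populations.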

  2. Teleradiology-based CT colonography to screen a population group of a remote island, at average risk for colorectal cancer

    Energy Technology Data Exchange (ETDEWEB)

    Lefere, Philippe, E-mail: radiologie@skynet.be [VCTC, Virtual Colonoscopy Teaching Centre, Akkerstraat 32c, B-8830 Hooglede (Belgium); Silva, Celso, E-mail: caras@uma.pt [Human Anatomy of Medical Course, University of Madeira, Praça do Município, 9000-082 Funchal (Portugal); Gryspeerdt, Stefaan, E-mail: stefaan@sgryspeerdt.be [VCTC, Virtual Colonoscopy Teaching Centre, Akkerstraat 32c, B-8830 Hooglede (Belgium); Rodrigues, António, E-mail: nucleo@nid.pt [Nucleo Imagem Diagnostica, Rua 5 De Outubro, 9000-216 Funchal (Portugal); Vasconcelos, Rita, E-mail: rita@uma.pt [Department of Engineering and Mathematics, University of Madeira, Praça do Município, 9000-082 Funchal (Portugal); Teixeira, Ricardo, E-mail: j.teixeira1947@gmail.com [Department of Gastroenterology, Central Hospital of Funchal, Avenida Luís de Camões, 9004513 Funchal (Portugal); Gouveia, Francisco Henriques de, E-mail: fhgouveia@netmadeira.com [LANA, Pathology Centre, Rua João Gago, 10, 9000-071 Funchal (Portugal)

    2013-06-15

    Purpose: To prospectively assess the performance of teleradiology-based CT colonography to screen a population group of an island, at average risk for colorectal cancer. Materials and methods: A cohort of 514 patients living in Madeira, Portugal, was enrolled in the study. Institutional review board approval was obtained and all patients signed an informed consent. All patients underwent both CT colonography and optical colonoscopy. CT colonography was interpreted by an experienced radiologist at a remote centre using teleradiology. Per-patient sensitivity, specificity, and positive (PPV) and negative (NPV) predictive values with 95% confidence intervals (95% CI) were calculated for colorectal adenomas and advanced neoplasia ≥6 mm. Results: 510 patients were included in the study. CT colonography obtained a per-patient sensitivity, specificity, PPV, and NPV for adenomas ≥6 mm of 98.11% (88.6–99.9% 95% CI), 90.97% (87.8–93.4% 95% CI), 56.52% (45.8–66.7% 95% CI), and 99.75% (98.4–99.9% 95% CI), respectively. For advanced neoplasia ≥6 mm, the per-patient sensitivity, specificity, PPV, and NPV were 100% (86.7–100% 95% CI), 87.07% (83.6–89.9% 95% CI), 34.78% (25.3–45.5% 95% CI), and 100% (98.8–100% 95% CI), respectively. Conclusion: In this prospective trial, teleradiology-based CT colonography was accurate for screening a patient cohort of a remote island at average risk for colorectal cancer.

  3. Group Inquiry Techniques for Teaching Writing.

    Science.gov (United States)

    Hawkins, Thom

    The small size of college composition classes encourages exciting and meaningful interaction, especially when students are divided into smaller, autonomous groups for all or part of the hour. This booklet discusses the advantages of combining the inquiry method (sometimes called the discovery method) with a group approach and describes specific…

  4. Chaotic renormalization group approach to disordered systems

    International Nuclear Information System (INIS)

    Oliveira, P.M.C. de; Continentino, M.A.; Makler, S.S.; Anda, E.V.

    1984-01-01

    We study the electronic properties of the disordered linear chain using a technique previously developed by some of the authors for an ordered chain. The equations of motion for the one-electron Green function are obtained and the configuration average is done according to the GK scheme. The dynamical problem is transformed, using a renormalization group procedure, into a bidimensional map. The properties of this map are investigated and related to the localization properties of the electronic system. (Author) [pt

  5. ASSESSMENT OF DYNAMIC PRA TECHNIQUES WITH INDUSTRY AVERAGE COMPONENT PERFORMANCE DATA

    Energy Technology Data Exchange (ETDEWEB)

    Yadav, Vaibhav; Agarwal, Vivek; Gribok, Andrei V.; Smith, Curtis L.

    2017-06-01

    In the nuclear industry, risk monitors are intended to provide a point-in-time estimate of the system risk given the current plant configuration. Current risk monitors are limited in that they do not properly take into account the deteriorating states of plant equipment, which are unit-specific. Current approaches to computing risk monitors use probabilistic risk assessment (PRA) techniques, but the assessment is typically a snapshot in time. Living PRA models attempt to address the limitations of traditional PRA models in a limited sense by including temporary changes in plant and system configurations. However, information on plant component health is not considered. This often leaves risk monitors using living PRA models incapable of evaluating dynamic degradation scenarios that evolve over time. There is a need to develop enabling approaches that solidify risk monitors to provide time- and condition-dependent risk by integrating traditional PRA models with condition monitoring and prognostic techniques. This paper presents estimation of system risk evolution over time by integrating plant risk monitoring data with dynamic PRA methods that incorporate aging and degradation. Several online, non-destructive approaches have been developed for diagnosing plant component conditions in the nuclear industry, e.g., a condition indication index using vibration analysis, current signatures, and operational history [1]. In this work the component performance measures at U.S. commercial nuclear power plants (NPPs) [2] are incorporated within various dynamic PRA methodologies [3] to provide better estimates of the probability of failure. Aging and degradation are modeled within the Level-1 PRA framework and applied to several failure modes of pumps; the approach can be extended to a range of components, viz. valves, generators, batteries, and pipes.
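As a sketch of how aging can enter a basic-event probability, assume (hypothetically; the paper's own degradation models are not reproduced here) a Weibull hazard for each pump and a toy 1-out-of-2 redundant pair as the "system":

```python
import math

def weibull_failure_prob(t, beta, eta):
    """Cumulative failure probability by time t under a Weibull hazard;
    beta > 1 models aging/wear-out, eta is the characteristic life
    (same time units as t)."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def top_event_prob(t, pumps):
    """Toy top event for a 1-out-of-2 redundant pump pair: the system
    fails only if both pumps have failed by time t.  'pumps' is a list
    of two (beta, eta) tuples."""
    p = [weibull_failure_prob(t, beta, eta) for beta, eta in pumps]
    return p[0] * p[1]
```

Re-evaluating such time-dependent basic events at each risk-monitor update, with (beta, eta) tuned per unit from condition-monitoring data, is the general flavour of replacing a static snapshot probability with a condition-dependent one.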

  6. Regional cerebral blood flow using 133Xenon intra-venous technique, 1

    International Nuclear Information System (INIS)

    Yonekura, Masahiro; Teramoto, Shigeyoshi; Moriyama, Tadayoshi

    1990-01-01

    We used the noninvasive 133Xe intravenous technique to measure 3622 regional cerebral blood flows (rCBFs) in 1955 cases over approximately the last six years. The majority of patients were in their fifties or sixties, and the most common diagnosis was ischemic cerebrovascular disease. Sixty-four healthy, non-hospitalized volunteers (10-76 years) were studied to establish control values. The age-related curve of rCBF showed a rapid decrease in young age groups and a gradual decrease in older age groups. The curve was well fitted by the hyperbola (X-13.0621)(Y-42.6038)=556.493, with a correlation coefficient of 0.93. This finding suggests that the age-related decline in rCBF is attributable to more than two factors. When cerebrovascular CO2 reactivity was tested in the healthy control group, the rCBF increased on average from 70.2 ml/100 g/min to 90.5 ml/100 g/min (28.9%), accompanied by an average elevation of arterial PCO2 of 11.4 mmHg. The CO2 reactivity index was 2.75 ± 1.65 on average. On the other hand, following an intravenous injection of Diamox (1 g), the rCBF in the control group increased on average from 59.6 ml/100 g/min to 80.0 ml/100 g/min (34.2%). (author)
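The reported hyperbola can be solved for rCBF as a function of age, which makes the "rapid decrease when young, gradual when old" shape explicit. A small sketch (the function name is ours; the coefficients are those quoted in the abstract):

```python
def rcbf_from_age(age_years):
    """Evaluate the reported hyperbolic fit
    (X - 13.0621)(Y - 42.6038) = 556.493, solved for Y (rCBF in
    ml/100 g/min) as a function of age X (years).  Only meaningful
    within the studied range, roughly 10-76 years."""
    return 42.6038 + 556.493 / (age_years - 13.0621)
```

The 1/(X - 13.0621) term is steep near adolescence and nearly flat past middle age, while Y approaches the asymptote 42.6 ml/100 g/min at high ages, which is exactly the two-regime behaviour the abstract describes.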

  7. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available. The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding of categorization practices in design through a case study of the virtual community Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer’s disregard of marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  8. Hydrophone area-averaging correction factors in nonlinearly generated ultrasonic beams

    International Nuclear Information System (INIS)

    Cooling, M P; Humphrey, V F; Wilkens, V

    2011-01-01

    The nonlinear propagation of an ultrasonic wave can be used to produce a wavefield rich in higher frequency components that is ideally suited to the calibration, or inter-calibration, of hydrophones. These techniques usually use a tone-burst signal, limiting the measurements to harmonics of the fundamental calibration frequency. Alternatively, using a short pulse enables calibration at a continuous spectrum of frequencies. Such a technique is used at PTB in conjunction with an optical measurement technique to calibrate devices. Experimental findings indicate that the area-averaging correction factor for a hydrophone in such a field demonstrates a complex behaviour, most notably varying periodically between frequencies that are harmonics of the centre frequency of the original pulse and frequencies that lie midway between these harmonics. The beam characteristics of such nonlinearly generated fields have been investigated using a finite difference solution to the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation for a focused field. The simulation results are used to calculate the hydrophone area-averaging correction factors for 0.2 mm and 0.5 mm devices. The results clearly demonstrate a number of significant features observed in the experimental investigations, including the variation with frequency, drive level and hydrophone element size. An explanation for these effects is also proposed.
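The area-averaging correction factor discussed above can be illustrated with a far simpler field than the KZK-simulated one: assume, purely for illustration, a Gaussian radial pressure profile of 1/e half-width w at the hydrophone plane. The spatial average over a circular element of radius a then has a closed form, and the correction factor is its reciprocal:

```python
import math

def area_averaging_factor(a, w):
    """Mean of a Gaussian pressure profile p(r) = exp(-r^2 / w^2) over a
    circular element of radius a, relative to the on-axis pressure.
    Closed form of (1/(pi a^2)) * Int_0^a exp(-r^2/w^2) 2 pi r dr."""
    x = (a / w) ** 2
    return (1.0 - math.exp(-x)) / x

def correction_factor(a, w):
    """Factor by which a measured (spatially averaged) amplitude must be
    multiplied to recover the on-axis value."""
    return 1.0 / area_averaging_factor(a, w)
```

Because the effective beam width w of each harmonic narrows as frequency rises, this toy model already predicts a correction factor that grows with both frequency and element size, qualitatively matching the 0.2 mm versus 0.5 mm behaviour reported above; the full KZK calculation additionally captures the periodic structure between harmonics, which this sketch does not.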

  9. Hydrophone area-averaging correction factors in nonlinearly generated ultrasonic beams

    Science.gov (United States)

    Cooling, M. P.; Humphrey, V. F.; Wilkens, V.

    2011-02-01

    The nonlinear propagation of an ultrasonic wave can be used to produce a wavefield rich in higher frequency components that is ideally suited to the calibration, or inter-calibration, of hydrophones. These techniques usually use a tone-burst signal, limiting the measurements to harmonics of the fundamental calibration frequency. Alternatively, using a short pulse enables calibration at a continuous spectrum of frequencies. Such a technique is used at PTB in conjunction with an optical measurement technique to calibrate devices. Experimental findings indicate that the area-averaging correction factor for a hydrophone in such a field demonstrates a complex behaviour, most notably varying periodically between frequencies that are harmonics of the centre frequency of the original pulse and frequencies that lie midway between these harmonics. The beam characteristics of such nonlinearly generated fields have been investigated using a finite difference solution to the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation for a focused field. The simulation results are used to calculate the hydrophone area-averaging correction factors for 0.2 mm and 0.5 mm devices. The results clearly demonstrate a number of significant features observed in the experimental investigations, including the variation with frequency, drive level and hydrophone element size. An explanation for these effects is also proposed.

  10. The sterile-male-release technique in Great Lakes sea lamprey management

    Science.gov (United States)

    Twohey, Michael B.; Heinrich, John W.; Seelye, James G.; Fredricks, Kim T.; Bergstedt, Roger A.; Kaye, Cheryl A.; Scholefield, Ron J.; McDonald, Rodney B.; Christie, Gavin C.

    2003-01-01

    The implementation of a sterile-male-release technique from 1991 through 1999 and the evaluation of its effectiveness in the Great Lakes sea lamprey (Petromyzon marinus) management program are reviewed. Male sea lampreys were injected with the chemosterilant bisazir (P,P-bis(1-aziridinyl)-N-methylphosphinothioic amide) using a robotic device. Quality assurance testing indicated the device delivered a consistent and effective dose of bisazir. Viability of embryos in an untreated control group was 64%, compared to 1% in the treatment group. A task force developed nine hypotheses to guide implementation and evaluation of the technique. An annual average of 26,000 male sea lampreys was harvested from as many as 17 Great Lakes tributaries for use in the technique. An annual average of 16,100 sterilized males was released into 33 tributaries of Lake Superior to achieve a theoretical 59% reduction in larval production during 1991 to 1996. The average number of sterile males released in the St. Marys River increased from 4,000 during 1991 to 1996 to 20,100 during 1997 to 1999. The theoretical reduction in reproduction when combined with trapping was 57% during 1991 to 1996 and 86% during 1997 to 1999. Evaluation studies demonstrated that sterilized males were competitive and reduced the production of larvae in streams. Field studies and simulation models suggest that reductions in reproduction will result in fewer recruits, but there is a risk of periodic high recruitment events independent of sterile-male release. Strategies to reduce reproduction will be most reliable when low densities of reproducing females are achieved. Expansion of the technique is limited by access to additional males for sterilization. Sterile-male release and other alternative controls are important in delivering integrated pest management and in reducing reliance on pesticides.
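The "theoretical reduction" figures above follow from a simple competition model: if sterile males are fully competitive and mating is random, the fraction of matings that produce no offspring is S/(S+F). A hedged sketch (the fertile-male count and trap fraction below are hypothetical, chosen only to land near the reported ~59%; the program's actual population estimates are not in the abstract):

```python
def theoretical_reduction(sterile, fertile, trap_fraction=0.0):
    """Expected fractional reduction in successful reproduction,
    assuming sterile males are fully competitive, mating is random,
    and trapping removes a fraction of spawning females before mating.
    With no trapping this reduces to sterile / (sterile + fertile)."""
    untrapped = 1.0 - trap_fraction
    return 1.0 - untrapped * fertile / (sterile + fertile)

reduction = theoretical_reduction(16100, 11200)  # release-only estimate
combined = theoretical_reduction(16100, 11200, trap_fraction=0.3)
```

The model also makes the abstract's last caveat concrete: the reduction depends on the sterile-to-fertile ratio, so its reliability hinges on keeping the density of fertile spawners low.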

  11. Multiple-level defect species evaluation from average carrier decay

    Science.gov (United States)

    Debuf, Didier

    2003-10-01

    An expression for the average decay is determined by solving the carrier continuity equations, which include terms for multiple-defect recombination. This expression is the decay measured by techniques such as the contactless photoconductance decay method, which determines the average, or volume-integrated, decay. Implicit in the above is the requirement for good surface passivation such that only bulk properties are observed. A proposed experimental configuration is given to achieve the intended goal of assessing the type of defect in an n-type Czochralski-grown silicon semiconductor with an unusually high relative lifetime. The high lifetime is explained in terms of a ground- and excited-state multiple-level defect system. Minority carrier trapping is also investigated.

  12. The definition and computation of average neutron lifetimes

    International Nuclear Information System (INIS)

    Henry, A.F.

    1983-01-01

    A precise physical definition is offered for a class of average lifetimes for neutrons in an assembly of materials, either multiplying or not, or if the former, critical or not. A compact theoretical expression for the general member of this class is derived in terms of solutions to the transport equation. Three specific definitions are considered. Particular exact expressions for these are derived and reduced to simple algebraic formulas for one-group and two-group homogeneous bare-core models
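For orientation, the familiar textbook one-group results for a homogeneous bare core, to which the paper's general expressions reduce in the simplest case, are (quoted as standard forms, not necessarily the precise class of averages defined in the paper):

```latex
% Infinite-medium one-group neutron lifetime, and the bare-core value
% corrected for leakage:
l_\infty = \frac{1}{v\,\Sigma_a}, \qquad
l = \frac{l_\infty}{1 + L^2 B^2}
```

where v is the one-group neutron speed, Σ_a the macroscopic absorption cross section, L² the diffusion area and B² the geometric buckling.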

  13. Original article Functioning of memory and attention processes in children with intelligence below average

    Directory of Open Access Journals (Sweden)

    Aneta Rita Borkowska

    2014-05-01

    Full Text Available. BACKGROUND: The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE: The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the authors’ experimental trials and the TUS (Attention and Perceptiveness) Test. RESULTS: Children with intelligence below average memorized significantly less information contained in the logical material, demonstrated lower ability to memorize the visual material, memorized significantly fewer words in the verbal material learning task, achieved lower results in such indicators of the visual attention process pace as the number of omissions and mistakes, and had a lower pace of perceptual work, compared to children with average intelligence. CONCLUSIONS: The results confirm that children with intelligence below average have difficulties with memorizing new material, both logically connected and unconnected. The significantly lower capacity of direct memory is independent of modality. The results also support the hypothesis of lower abilities of children with intelligence below average in terms of concentration, work pace, efficiency and perception.

  14. The partially averaged field approach to cosmic ray diffusion

    International Nuclear Information System (INIS)

    Jones, F.C.; Birmingham, T.J.; Kaiser, T.B.

    1976-08-01

    The kinetic equation for particles interacting with turbulent fluctuations is derived by a new nonlinear technique which successfully corrects the difficulties associated with quasilinear theory. In this new method the effects of the fluctuations are evaluated along particle orbits which themselves include the effects of a statistically averaged subset of the possible configurations of the turbulence. The new method is illustrated by calculating the pitch-angle diffusion coefficient D_μμ for particles interacting with slab-model magnetic turbulence, i.e., magnetic fluctuations linearly polarized transverse to a mean magnetic field. Results are compared with those of quasilinear theory and also with those of Monte Carlo calculations. The major effect of the nonlinear treatment in this illustration is the determination of D_μμ in the vicinity of 90° pitch angles, where quasilinear theory breaks down. The spatial diffusion coefficient parallel to a mean magnetic field is evaluated using D_μμ as calculated by this technique. It is argued that the partially averaged field method is not limited to small-amplitude fluctuating fields and is, hence, not a perturbation theory
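Once the pitch-angle diffusion coefficient is known, the parallel spatial diffusion coefficient mentioned in the abstract follows from the standard pitch-angle-averaged relation (a textbook result quoted here for orientation, not taken from the paper itself):

```latex
\kappa_\parallel \;=\; \frac{v^2}{8}
\int_{-1}^{+1} \frac{\left(1-\mu^2\right)^2}{D_{\mu\mu}}\, \mathrm{d}\mu
```

Because the integrand blows up wherever D_μμ vanishes, the quasilinear prediction D_μμ → 0 at μ = 0 makes κ_∥ ill-defined; this is precisely why a nonlinear determination of D_μμ near 90° pitch angle, as in the partially averaged field method, matters for the spatial transport coefficient.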

  15. Joint kinetics in rearfoot versus forefoot running: implications of switching technique.

    Science.gov (United States)

    Stearne, Sarah M; Alderson, Jacqueline A; Green, Benjamin A; Donnelly, Cyril J; Rubenson, Jonas

    2014-08-01

    To better understand the mechanical factors differentiating forefoot strike (FFS) and rearfoot strike (RFS) running, as well as the mechanical consequences of switching techniques, we assessed lower limb joint kinetics in habitual and imposed techniques in both groups. All participants performed both RFS and FFS techniques on an instrumented treadmill at 4.5 m·s⁻¹ while force and kinematic data were collected. Total (sum of ankle, knee, and hip) lower limb work and average power did not differ between habitual RFS and FFS runners. However, moments, negative work, and negative instantaneous and average power during stance were greater at the knee in RFS and at the ankle in FFS techniques. When habitual RFS runners switched to an imposed FFS, they were able to replicate the sagittal plane mechanics of a habitual FFS; however, the ankle internal rotation moment increased by 33%, whereas the knee abduction moments were not reduced, remaining 48.5% higher than in a habitual FFS. In addition, total positive and negative lower limb average power increased by 17% and 9%, respectively. When habitual FFS runners switched to an imposed RFS, they were able to match the mechanics of habitual RFS runners, with the exception of knee abduction moments, which remained 38% lower than in a habitual RFS, and, surprisingly, a 10.5% reduction in total lower limb positive average power. There appears to be no clear overall mechanical advantage of a habitual FFS or RFS. Switching techniques may have different injury implications given the altered distribution of loading between joints, but these should be weighed against the overall effects on limb mechanics; adopting an imposed RFS may prove the most beneficial given the absence of any clear mechanical performance decrements.

  16. Investigating the Effects of Group Practice Performed Using Psychodrama Techniques on Adolescents' Conflict Resolution Skills

    Science.gov (United States)

    Karatas, Zeynep

    2011-01-01

    The aim of this study is to examine the effects of group practice performed using psychodrama techniques on adolescents' conflict resolution skills. The subjects for this study were selected from among high school students with high aggression levels and low problem-solving levels attending Haci Zekiye Arslan High School in Nigde.…

  17. Clinical education and training: Using the nominal group technique in research with radiographers to identify factors affecting quality and capacity

    International Nuclear Information System (INIS)

    Williams, P.L.; White, N.; Klem, R.; Wilson, S.E.; Bartholomew, P.

    2006-01-01

    There are a number of group-based research techniques available to determine the views or perceptions of individuals in relation to specific topics. This paper reports on one method, the nominal group technique (NGT), which was used to collect the views of important stakeholders on the factors affecting the quality of, and capacity to provide, clinical education and training in diagnostic imaging and radiotherapy and oncology departments in the UK. Inclusion criteria were devised to recruit learners, educators, practitioners and service managers to the nominal groups. Eight regional groups comprising a total of 92 individuals were enrolled; the numbers in each group varied between 9 and 13. A total of 131 items (factors) were generated across the groups (mean = 16.4). Each group was then asked to select the top three factors from their original list. Consensus on the important factors amongst groups found that all eight groups agreed on one item: staff attitude, motivation and commitment to learners. The 131 items were organised into themes using content analysis. Five main categories and a number of subcategories emerged. The study concluded that the NGT provided data which were congruent with the issues faced by practitioners and learners in their daily work; this was of vital importance if the findings are to be regarded with credibility. Further advantages and limitations of the method are discussed; however, it is argued that the NGT is a useful technique to gather relevant opinion, to select priorities, and to reach consensus on a wide range of issues.

  18. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
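    The averaging model described above predicts a response as a weighted average of attribute scale values, including an initial-state term; a minimal Python sketch with illustrative weights and scale values (this is not the R-Average estimation procedure itself, only the model it fits):

```python
def averaging_response(weights, scale_values, w0=1.0, s0=5.0):
    """Averaging-model prediction: weighted mean of the scale values,
    including an initial-state term with weight w0 and value s0."""
    num = w0 * s0 + sum(w * s for w, s in zip(weights, scale_values))
    den = w0 + sum(weights)
    return num / den

# Two attributes with equal weights: the prediction is their plain average,
# pulled toward the initial state s0.
r = averaging_response([1.0, 1.0], [4.0, 8.0])
```

    With the initial-state weight set to zero, the prediction reduces to the plain mean of the scale values, which is why the averaging model can mimic interaction effects without extra parameters.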

  19. Implementation of Guided Inquiry-Based Group Investigation Learning to Improve Colloid Learning Outcomes

    Directory of Open Access Journals (Sweden)

    Arinda Dian Wijayanti

    2015-11-01

    Full Text Available This study aims to determine whether the implementation of guided inquiry-based Group Investigation learning improves chemistry learning outcomes for the competence of colloid systems, and how teachers and students respond to the applied learning. Sampling used the cluster random sampling technique, yielding class XI IPA 1 as the experimental class and XI IPA 4 as the control class. Data were collected through tests, observations, questionnaires, and documentation. The results showed that the average grade of the experimental class was higher than that of the control class based on a right-tailed t-test of the posttest scores, with t_count = 6.89 exceeding t_table = 2.00. Analysis of the magnitude of the effect between variables gave a coefficient of determination of 73.38%, meaning that guided inquiry-based Group Investigation learning contributed 73.38% to the increase in students' cognitive learning outcomes. On the affective and psychomotor assessments, the average grades of the experimental class were also better than those of the control class. Analysis of the questionnaire responses of teachers and students likewise indicated that guided inquiry-based Group Investigation learning received a good response. This study concluded that the implementation of guided inquiry-based Group Investigation learning improved the chemistry learning outcomes of class XI students for competencies related to colloid systems and received a good response from teachers and students. Keywords: Group Investigation, Learning Outcomes, Guided Inquiry
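    The right-tailed comparison of class means reported here can be reproduced with a pooled two-sample t statistic; a hedged sketch with made-up score lists (the study's raw data are not given):

```python
import math

def pooled_t_statistic(xs, ys):
    """Independent-samples t statistic with pooled variance, as used in a
    right-tailed comparison of an experimental vs a control class mean."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)  # pooled variance
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

# Reject H0 (no improvement) when t exceeds the right-tail critical value.
t = pooled_t_statistic([75, 80, 85, 90], [65, 70, 72, 78])
```

    In the study, the computed t (6.89) exceeded the tabulated critical value (2.00), so the one-sided null hypothesis was rejected.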

  20. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.

  1. Data structure techniques for the graphical special unitary group approach to arbitrary spin representations

    International Nuclear Information System (INIS)

    Kent, R.D.; Schlesinger, M.

    1987-01-01

    For the purpose of computing matrix elements of quantum mechanical operators in complex N-particle systems it is necessary that as much of each irreducible representation as possible be stored in high-speed memory in order to achieve the highest possible rate of computations. A graph theoretic approach to the representation of N-particle systems involving arbitrary single-particle spin is presented. The method involves a generalization of a technique employed by Shavitt in developing the graphical unitary group approach (GUGA) to electronic spin-orbitals. The methods implemented in GENDRT and DRTDIM overcome many deficiencies inherent in other approaches, particularly with respect to utilization of memory resources, computational efficiency in the recognition and evaluation of non-zero matrix elements of certain group theoretic operators, and complete labelling of all the basis states of the permutation symmetry (S_N) adapted irreducible representations of SU(n) groups. (orig.)

  2. Improving sensitivity in micro-free flow electrophoresis using signal averaging

    Science.gov (United States)

    Turgeon, Ryan T.; Bowser, Michael T.

    2009-01-01

    Microfluidic free-flow electrophoresis (μFFE) is a separation technique that separates continuous streams of analytes as they travel through an electric field in a planar flow channel. The continuous nature of the μFFE separation suggests that approaches more commonly applied in spectroscopy and imaging may be effective in improving sensitivity. The current paper describes the S/N improvements that can be achieved by simply averaging multiple images of a μFFE separation; 20–24-fold improvements in S/N were observed by averaging the signal from 500 images recorded for over 2 min. Up to an 80-fold improvement in S/N was observed by averaging 6500 images. Detection limits as low as 14 pM were achieved for fluorescein, which is impressive considering the non-ideal optical set-up used in these experiments. The limitation to this signal averaging approach was the stability of the μFFE separation. At separation times longer than 20 min bubbles began to form at the electrodes, which disrupted the flow profile through the device, giving rise to erratic peak positions. PMID:19319908
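    Averaging N independent frames improves S/N by roughly √N, which matches the reported gains (~22-fold for 500 images, ~80-fold for 6500). A sketch of this scaling with synthetic frames (Gaussian noise assumed; numpy used for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
SIGNAL = 1.0      # true, noise-free pixel value
NOISE_SD = 1.0    # per-frame noise standard deviation

def snr_after_averaging(n_frames, n_pixels=4000):
    """Empirical S/N of a pixel row after averaging n_frames noisy frames."""
    frames = SIGNAL + NOISE_SD * rng.standard_normal((n_frames, n_pixels))
    averaged = frames.mean(axis=0)           # frame-wise average
    return averaged.mean() / averaged.std()  # S/N of the averaged row

# S/N grows as ~sqrt(N): about 22x for 500 frames, matching the paper.
snr_500 = snr_after_averaging(500)
```

    The practical ceiling noted in the abstract (drift and bubble formation after ~20 min) is exactly what breaks the independence assumption this √N scaling relies on.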

  3. Basic interrupted versus continuous suturing techniques in bronchial anastomosis following sleeve lobectomy in dogs.

    Science.gov (United States)

    Bayram, Ahmet Sami; Erol, Mehmet Muharrem; Salci, Hakan; Ozyiğit, Ozgür; Görgül, Sacit; Gebitekin, Cengiz

    2007-12-01

    Sleeve resection with or without lung resection is a valid conservative operation for patients with benign or malignant tumors; it enables the preservation of lung parenchyma. The aim of this prospective randomized study was to compare complications, operating time, and bronchial healing between the techniques of interrupted and continuous suturing for bronchial anastomosis in dogs. Twenty adult mongrel dogs each weighing 18-22 kg (average: 20 kg) were divided into two groups according to the anastomosis technique performed: group A, interrupted suturing, and group B, continuous suturing. Each group comprised 10 dogs. Following right thoracotomy, sleeve resection of the right cranial lobe was performed in all dogs. Basic interrupted sutures using 4/0 Vicryl (Ethicon, USA) were used in group A, and continuous sutures were used in group B. The median anastomosis time was 15.2 min (range: 13-21 min) in group A and 9.6 min (range: 8-13 min) in group B. In all dogs, the anastomosis line was resected via right pneumonectomy for histopathological investigation 1 month after sleeve resection. Histopathological examination revealed that the healing of the anastomosis was not affected by the suturing technique applied. One dog from each group died on the fourth postoperative day; Fisher's exact test, p=0.763. Our research revealed that the healing of the anastomosis was not affected by the suturing technique performed.

  4. Thermodynamic Integration Methods, Infinite Swapping and the Calculation of Generalized Averages

    OpenAIRE

    Doll, J. D.; Dupuis, P.; Nyquist, P.

    2016-01-01

    In the present paper we examine the risk-sensitive and sampling issues associated with the problem of calculating generalized averages. By combining thermodynamic integration and Stationary Phase Monte Carlo techniques, we develop an approach for such problems and explore its utility for a prototypical class of applications.

  5. Transcystic cholangiogram access via rubber band with early withdrawal after liver transplantation: a safe technique.

    Science.gov (United States)

    Innocenti, F; Hepp, J; Humeres, R; Rios, H; Suárez, L; Zapata, R; Sanhueza, E; Rius, M

    2004-01-01

    Since different techniques have been described for cholangiogram access after liver transplantation, we compared two different methods for patients with duct-to-duct biliary anastomoses. Adult liver transplant patients from program inception in 1993 to May 2003 who had a duct-to-duct biliary anastomosis with a T-tube choledochostomy were compared with those having a transcystic duct catheter secured with a rubber band. We excluded 10 patients in whom a different technique was used or the graft or patient survived less than 21 days. Group A (n = 28) had a number 10 T-tube exteriorized through the recipient main bile duct; and group B (n = 33) a number 5 Bard ureteral stent tied to the cystic stump with reabsorbable suture and secured with a hemorrhoidal rubber ligature. The biliary complication rate was lower in the transcystic catheter group (9.1%, 3/33) compared to the T-tube group (35.7%, 10/28). Postcatheter-withdrawal peritonitis was present in two patients in the T-tube group, one of whom required emergency laparotomy. A satisfactory postoperative cholangiogram was obtained in both groups. The transcystic catheter was withdrawn on average at 29 days, compared to 136 days in the T-tube group. Both techniques are equally effective in obtaining a satisfactory postoperative cholangiogram. However, the transcystic catheter technique allows significantly earlier withdrawal with fewer complications compared to the T-tube technique.

  6. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)

  7. FPGA based computation of average neutron flux and e-folding period for start-up range of reactors

    International Nuclear Information System (INIS)

    Ram, Rajit; Borkar, S.P.; Dixit, M.Y.; Das, Debashis

    2013-01-01

    Pulse processing instrumentation channels used for reactor applications play a vital role in ensuring nuclear safety in the startup range of reactor operation and also during fuel loading and first approach to criticality. These channels are intended for continuous run-time computation of the equivalent reactor core neutron flux and e-folding period. This paper focuses only on the computational part of these instrumentation channels, which is implemented in a single FPGA using a 32-bit floating point arithmetic engine. The computations of average count rate, log of average count rate, log rate and reactor period are done in VHDL using a digital circuit realization approach. The computation of average count rate is done using a fully adaptive window size moving average method, while a Taylor series expansion for logarithms is implemented in the FPGA to compute log of count rate, log rate and reactor e-folding period. This paper describes the block diagrams of the digital logic realization in the FPGA and the advantage of the fully adaptive window size moving average technique over the conventional fixed size moving average technique for pulse processing in reactor instrumentation. (author)
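    The e-folding period computed by such channels is the time over which the neutron flux (count rate) grows by a factor of e. A software sketch of the moving-average and period computations (illustrative only; the actual design is a VHDL/FPGA pipeline, not Python):

```python
import math

def moving_average(counts, max_window):
    """Adaptive-style moving average: average over at most the last
    max_window samples (shorter when fewer samples are available)."""
    window = counts[-max_window:]
    return sum(window) / len(window)

def e_folding_period(rate_t1, rate_t2, dt):
    """Period T such that rate ~ exp(t/T): T = dt / ln(rate_t2 / rate_t1),
    i.e. the reciprocal of the log-rate."""
    return dt / math.log(rate_t2 / rate_t1)

# A count rate that e-folds once every 10 s gives a 10 s period.
T = e_folding_period(100.0, 100.0 * math.e, 10.0)
```

    Shorter periods signal faster flux growth, which is why the period computation sits on the safety-critical path during startup.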

  8. Nominal group technique: a brainstorming tool for identifying areas to improve pain management in hospitalized patients.

    Science.gov (United States)

    Peña, Adolfo; Estrada, Carlos A; Soniat, Debbie; Taylor, Benjamin; Burton, Michael

    2012-01-01

    Pain management in hospitalized patients remains a priority area for improvement; effective strategies for consensus development are needed to prioritize interventions. To identify challenges, barriers, and perspectives of healthcare providers in managing pain among hospitalized patients. Qualitative and quantitative group consensus using a brainstorming technique for quality improvement-the nominal group technique (NGT). One medical, 1 medical-surgical, and 1 surgical hospital unit at a large academic medical center. Nurses, resident physicians, patient care technicians, and unit clerks. Responses and ranking to the NGT question: "What causes uncontrolled pain in your unit?" Twenty-seven health workers generated a total of 94 ideas. The ideas perceived contributing to a suboptimal pain control were grouped as system factors (timeliness, n = 18 ideas; communication, n = 11; pain assessment, n = 8), human factors (knowledge and experience, n = 16; provider bias, n = 8; patient factors, n = 19), and interface of system and human factors (standardization, n = 14). Knowledge, timeliness, provider bias, and patient factors were the top ranked themes. Knowledge and timeliness are considered main priorities to improve pain control. NGT is an efficient tool for identifying general and context-specific priority areas for quality improvement; teams of healthcare providers should consider using NGT to address their own challenges and barriers. Copyright © 2011 Society of Hospital Medicine.

  9. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  10. Extrapolation techniques evaluating 24 hours of average electromagnetic field emitted by radio base station installations: spectrum analyzer measurements of LTE and UMTS signals

    International Nuclear Information System (INIS)

    Mossetti, Stefano; Bartolo, Daniela de; Nava, Elisa; Veronese, Ivan; Cantone, Marie Claire; Cosenza, Cristina

    2017-01-01

    International and national organizations have formulated guidelines establishing limits for occupational and residential electromagnetic field (EMF) exposure at high-frequency fields. Italian legislation fixed 20 V/m as a limit for public protection from exposure to EMFs in the frequency range 0.1 MHz-3 GHz and 6 V/m as a reference level. Recently, the law was changed and the reference level must now be evaluated as the 24-hour average value, instead of the previous highest 6 minutes in a day. The law refers to a technical guide (CEI 211-7/E, published in 2013) for the extrapolation techniques that public authorities have to use when assessing exposure for compliance with limits. In this work, we present measurements carried out with a vectorial spectrum analyzer to identify technical critical aspects in these extrapolation techniques, when applied to UMTS and LTE signals. We also focused on finding a good balance between statistically significant values and the logistical management of control activities, as the signal trend in situ is not known. Measurements were repeated several times over several months and for different mobile companies. The outcome presented in this article allowed us to evaluate the reliability of the extrapolation results obtained and provides a starting point for defining operating procedures. (authors)

  11. Numerical simulation of flow induced by a pitched blade turbine. Comparison of the sliding mesh technique and an averaged source term method

    Energy Technology Data Exchange (ETDEWEB)

    Majander, E.O.J.; Manninen, M.T. [VTT Energy, Espoo (Finland)

    1996-12-31

    The flow induced by a pitched blade turbine was simulated using the sliding mesh technique. The detailed geometry of the turbine was modelled in a computational mesh rotating with the turbine and the geometry of the reactor including baffles was modelled in a stationary co-ordinate system. Effects of grid density were investigated. Turbulence was modelled by using the standard k-{epsilon} model. Results were compared to experimental observations. Velocity components were found to be in good agreement with the measured values throughout the tank. Averaged source terms were calculated from the sliding mesh simulations in order to investigate the reliability of the source term approach. The flow field in the tank was then simulated in a simple grid using these source terms. Agreement with the results of the sliding mesh simulations was good. Commercial CFD-code FLUENT was used in all simulations. (author)

  13. Oppugning the assumptions of spatial averaging of segment and joint orientations.

    Science.gov (United States)

    Pierrynowski, Michael Raymond; Ball, Kevin Arthur

    2009-02-09

    Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three-ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether use of the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data possessing low dispersion, an isotropic distribution, and second and third angle parameters of less than 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
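    A matrix-based Euclidean average can be sketched as projecting the arithmetic mean of the rotation matrices back onto SO(3) via the SVD; the numpy example below (an illustrative sketch, not the paper's exact algorithm) shows a case where naive angle averaging fails outright:

```python
import numpy as np

def rot_z(deg):
    """Rotation about the z axis by deg degrees, as a 3x3 matrix."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def euclidean_mean_rotation(mats):
    """Project the arithmetic mean of rotation matrices onto SO(3)."""
    m = np.mean(mats, axis=0)
    u, _, vt = np.linalg.svd(m)
    r = u @ vt
    if np.linalg.det(r) < 0:      # keep a proper rotation (det = +1)
        u[:, -1] *= -1
        r = u @ vt
    return r

# Naive averaging of the angles +170 deg and -170 deg gives 0 deg,
# but the matrix-based mean correctly yields 180 deg about z.
mean_r = euclidean_mean_rotation([rot_z(170.0), rot_z(-170.0)])
```

    The SVD projection (with the determinant sign fix) returns the closest proper rotation to the arithmetic mean in the Euclidean (Frobenius) sense, which is exactly the Euclidean matrix average the abstract refers to.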

  14. Doses to worker groups in the nuclear industry

    International Nuclear Information System (INIS)

    Khan, T.; Baum, J.W.

    1992-01-01

    This article presents some of the results of a study carried out at the Brookhaven National Laboratory's ALARA Center on doses to various worker groups in the U.S. nuclear industry. In this study, data from workers in the industry were divided into male and female groups; the average radiation dose of these two groups and the correlation of dose with age are presented. The male and female workers were further considered in the various sectors of the industry, and correlations of dose with age for each sector were investigated. For male workers, a downward correlation with age was observed, while for women there appeared to be a slight upward correlation. Data from 13 PWR and 9 BWR plants show that a small, but important, group of workers would be affected by the NCRP proposed constraint that workers' lifetime dose in rem be maintained at less than their age. Various techniques proposed by the plants to reduce dose to this critical group of workers are also presented.

  15. Simple Fully Automated Group Classification on Brain fMRI

    International Nuclear Information System (INIS)

    Honorio, J.; Goldstein, R.; Samaras, D.; Tomasi, D.; Goldstein, R.Z.

    2010-01-01

    We propose a simple, well-grounded classification technique which is suited for group classification on brain fMRI data sets that have high dimensionality, small numbers of subjects, high noise levels, high subject variability, imperfect registration, and subtle cognitive effects. We propose threshold-split region as a new feature selection method and majority vote as the classification technique. Our method does not require a predefined set of regions of interest. We use averages across sessions, only one feature per experimental condition, a feature independence assumption, and simple classifiers. The seemingly counter-intuitive approach of using a simple design is supported by signal processing and statistical theory. Experimental results in two block design data sets that capture brain function under distinct monetary rewards for cocaine addicted and control subjects show that our method exhibits increased generalization accuracy compared to commonly used feature selection and classification techniques.
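    The majority-vote step can be sketched as follows: each selected feature contributes one simple per-feature prediction, and the subject's label is the most common vote. The thresholds and feature values below are hypothetical; the paper's threshold-split region selection is not reproduced here.

```python
from collections import Counter

def majority_vote(predictions):
    """Final label = most common label across the per-feature classifiers."""
    return Counter(predictions).most_common(1)[0][0]

def threshold_classifier(threshold, sign=1):
    """One-feature stump: label +1 if sign*(x - threshold) > 0, else -1."""
    return lambda x: 1 if sign * (x - threshold) > 0 else -1

# Three simple per-feature classifiers vote on one subject's features.
stumps = [threshold_classifier(0.5),
          threshold_classifier(1.0),
          threshold_classifier(2.0)]
features = [0.8, 1.5, 1.2]
label = majority_vote([clf(x) for clf, x in zip(stumps, features)])
```

    Treating each feature independently with a trivial classifier is what keeps the design simple enough to generalize on small, noisy fMRI cohorts.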

  17. The Analysis Performance of Naive Bayes and SSVM Methods to Determine Pattern Groups of Disease

    Science.gov (United States)

    Sitanggang, Rianto; Tulus; Situmorang, Zakarias

    2017-12-01

    Information is a very important element of daily needs, and obtaining precise and accurate information is not easy; this research can help decision makers by providing a comparison. The researchers applied data mining techniques to analyze the performance of the naïve Bayes and Smooth Support Vector Machine (SSVM) methods in grouping diseases. Patterns of diseases frequently suffered by people in an area can be detected from the collection of information contained in medical records. Medical records hold patients' disease information, coded according to the WHO standard. Processing the medical record data to find the patterns of disease groups that often occur in the community used the attributes address, sex, type of disease, and age, and the analysis then grouped on these four attributes. From the research conducted on the fever and diabetes mellitus dataset, the naïve Bayes method produced an average accuracy of 99% and the SSVM method produced an average accuracy of 93%.

  18. Unbiased group-wise image registration: applications in brain fiber tract atlas construction and functional connectivity analysis.

    Science.gov (United States)

    Geng, Xiujuan; Gu, Hong; Shin, Wanyong; Ross, Thomas J; Yang, Yihong

    2011-10-01

    We propose an unbiased implicit-reference group-wise (IRG) image registration method and demonstrate its applications in the construction of a brain white matter fiber tract atlas and the analysis of resting-state functional MRI (fMRI) connectivity. Most image registration techniques pair-wise align images to a selected reference image and group analyses are performed in the reference space, which may produce bias. The proposed method jointly estimates transformations, with an elastic deformation model, registering all images to an implicit reference corresponding to the group average. The unbiased registration is applied to build a fiber tract atlas by registering a group of diffusion tensor images. Compared to reference-based registration, the IRG registration improves the fiber tract overlap within the group. After applying the method in the fMRI connectivity analysis, results suggest a general improvement in functional connectivity maps at a group level in terms of larger cluster size and higher average t-scores.

  19. High-average-power diode-pumped Yb:YAG lasers

    International Nuclear Information System (INIS)

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-01-01

    A scalable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high-average-power Yb:YAG lasers that utilize a rod configured gain element. Previously, this rod configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual rod configuration consisting of two 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) Hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes. (2) Compound laser rods with flanged-nonabsorbing-endcaps fabricated by diffusion bonding. (3) Techniques for suppressing amplified spontaneous emission (ASE) and parasitics in polished barrel rods.

  20. Difficulties and Problematic Steps in Teaching the Onstep Technique for Inguinal Hernia Repair, Results from a Focus Group Interview

    DEFF Research Database (Denmark)

    Andresen, Kristoffer; Laursen, Jannie; Rosenberg, Jacob

    2016-01-01

    technique for inguinal hernia repair, seen from the instructor's point of view. Methods. We designed a qualitative study using a focus group to allow participants to elaborate freely and facilitate a discussion. Participants were surgeons with extensive experience in performing the Onstep technique from...... course should preferably have experience with other types of hernia repairs. If trainees are inexperienced, the training setup should be a traditional step-by-step programme. A training setup should consist of an explanation of the technique with emphasis on anatomy and difficult parts of the procedure...

  1. Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data

    Science.gov (United States)

    Gebert, Warren A.; Walker, John F.; Kennedy, James L.

    2011-01-01

    Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
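The base-flow/drainage-area relationship underlying this record is plain arithmetic: divide the average annual base flow by the basin area and express the result as a depth per year. A minimal sketch with hypothetical figures (not values from the report), deriving the unit conversion from first principles:

```python
# Sketch of the recharge approximation described above: average annual base
# flow (ft^3/s) divided by drainage area (mi^2), expressed in inches/year.
# The numbers below are illustrative, not taken from the study.

SECONDS_PER_YEAR = 365 * 24 * 3600
SQFT_PER_SQMI = 5280 ** 2

def recharge_inches_per_year(base_flow_cfs: float, drainage_area_sqmi: float) -> float:
    """Approximate annual recharge depth from base flow over drainage area."""
    annual_volume_cuft = base_flow_cfs * SECONDS_PER_YEAR
    depth_ft = annual_volume_cuft / (drainage_area_sqmi * SQFT_PER_SQMI)
    return depth_ft * 12.0

# A hypothetical 50 mi^2 basin (the average partial-record basin size) with
# 30 cfs average annual base flow:
print(round(recharge_inches_per_year(30.0, 50.0), 1))  # → 8.1
```

One cfs per square mile works out to roughly 13.6 inches per year, which is why small differences in base-flow yield translate into the 1-to-12-inch range of recharge reported across the State.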

  2. Averages of $b$-hadron, $c$-hadron, and $\\tau$-lepton properties as of summer 2014

    Energy Technology Data Exchange (ETDEWEB)

    Amhis, Y.; et al.

    2014-12-23

    This article reports world averages of measurements of $b$-hadron, $c$-hadron, and $\\tau$-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through summer 2014. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, $CP$ violation parameters, parameters of semileptonic decays and CKM matrix elements.
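The core of such an averaging procedure is combining correlated measurements with correlation-aware weights. As an illustration only (a sketch of the general best-linear-unbiased-estimate idea, not HFAG's actual code; the function name and inputs are hypothetical):

```python
# Illustrative correlation-aware average of two measurements (x1 ± s1) and
# (x2 ± s2) with correlation rho. This is the textbook BLUE combination,
# not the HFAG software itself.

def blue_average(x1, s1, x2, s2, rho):
    """Return (average, uncertainty) for two correlated measurements."""
    cov = rho * s1 * s2
    denom = s1 ** 2 + s2 ** 2 - 2 * cov
    w1 = (s2 ** 2 - cov) / denom      # weights account for the correlation
    w2 = (s1 ** 2 - cov) / denom
    avg = w1 * x1 + w2 * x2
    var = w1 ** 2 * s1 ** 2 + w2 ** 2 * s2 ** 2 + 2 * w1 * w2 * cov
    return avg, var ** 0.5

# With rho = 0 this reduces to the familiar inverse-variance-weighted mean:
print(blue_average(1.0, 0.1, 1.2, 0.2, 0.0))
```

With nonzero correlation the weights shift, and a large positive correlation can even drive one weight negative, which is why naively ignoring correlations can bias world averages.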

  3. International Society of Gynecological Pathologists (ISGyP) Endometrial Cancer Project: Guidelines From the Special Techniques and Ancillary Studies Group.

    Science.gov (United States)

    Cho, Kathleen R; Cooper, Kumarasen; Croce, Sabrina; Djordevic, Bojana; Herrington, Simon; Howitt, Brooke; Hui, Pei; Ip, Philip; Koebel, Martin; Lax, Sigurd; Quade, Bradley J; Shaw, Patricia; Vidal, August; Yemelyanova, Anna; Clarke, Blaise; Hedrick Ellenson, Lora; Longacre, Teri A; Shih, Ie-Ming; McCluggage, W Glenn; Malpica, Anais; Oliva, Esther; Parkash, Vinita; Matias-Guiu, Xavier

    2018-04-11

The aim of this article is to propose guidelines and recommendations in problematic areas in pathologic reporting of endometrial carcinoma (EC) regarding special techniques and ancillary studies. An organizing committee designed a comprehensive survey with different questions related to pathologic features, diagnosis, and prognosis of EC that was sent to all members of the International Society of Gynecological Pathologists. The special techniques/ancillary studies group received 4 different questions to be addressed. Five members of the group reviewed the literature and came up with recommendations and an accompanying text which were discussed and agreed upon by all members of the group. Twelve different recommendations are made. They address the value of immunohistochemistry, ploidy, and molecular analysis for assessing prognosis in EC, the value of steroid hormone receptor analysis to predict response to hormone therapy, and parameters regarding applying immunohistochemistry and molecular tests for assessing mismatch repair deficiency in EC.

  4. Biomechanical comparison of the four-strand cruciate and Strickland techniques in animal tendons

    Directory of Open Access Journals (Sweden)

    Raquel Bernardelli Iamaguchi

    2013-12-01

Full Text Available OBJECTIVE: The objective of this study was to compare two four-strand techniques: the traditional Strickland and cruciate techniques. METHODS: Thirty-eight Achilles tendons were removed from 19 rabbits and were assigned to two groups based on suture technique (Group 1, Strickland suture; Group 2, cruciate repair). The sutured tendons were subjected to constant progressive distraction using a universal testing machine (Kratos®). Based on data from the instrument, which were synchronized with the visualized gap at the suture site and at the time of suture rupture, the following data were obtained: maximum load to rupture, maximum deformation or gap, time elapsed until failure, and stiffness. RESULTS: In the statistical analysis, the data were parametric and unpaired, and by the Kolmogorov-Smirnov test the sample distribution was normal. By Student's t-test, there was no significant difference in any of the data: the cruciate repair sutures had slightly better mean stiffness, and the Strickland sutures had a longer time elapsed until rupture and a higher average maximum deformation. CONCLUSIONS: The cruciate and Strickland techniques for flexor tendon sutures have similar mechanical characteristics in vitro.

  5. Mercury levels in defined Italian population groups

    International Nuclear Information System (INIS)

    Ingrao, G.; Belloni, P.

    1991-01-01

Our group conducts its research activity in the Department AMB-BIO (Biological and Sanitary Effect of Toxic Agents) of ENEA (National Committee for Research and Development of Nuclear and Alternative Energy) at the research centre of Casaccia. The food chain is the main pathway leading to humans for most of the trace elements. The research, therefore, has been directed mainly towards the study of the variability of the trace element content in the Italian diet. This programme aims at assessing the adequacy of essential trace element ingestion, and at verifying that the ingestion of toxic trace elements is below the recommended values. Part of this research was conducted in the frame of the IAEA CRP on ''Human Daily Intakes of Nutritionally Important Trace Elements as Measured by Nuclear and Other Techniques''. Studies carried out by the National Institute of Nutrition and FAO have shown that there are some population groups having a fish consumption significantly higher than the average value for the Italian population. These groups are usually found in coastal towns where fishing is one of the main occupations and generally include fishermen, fish dealers, restaurant workers and their families. A pilot study was conducted in one of these towns, Bagnara Calabra in the South of Italy, where the average fish consumption is 27.1 kg per year, which is more than twice the national average. The results have shown that the mercury concentration in the diet is significantly higher than in other Italian locations; the mercury levels in blood and hair samples of these subjects are also significantly higher than the values reported for subjects of the general Italian population. 6 refs

  6. Acceleration techniques for the discrete ordinate method

    International Nuclear Information System (INIS)

    Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego; Trautmann, Thomas

    2013-01-01

In this paper we analyze several acceleration techniques for the discrete ordinate method with matrix exponential and the small-angle modification of the radiative transfer equation. These techniques include the left eigenvectors matrix approach for computing the inverse of the right eigenvectors matrix, the telescoping technique, and the method of false discrete ordinate. The numerical simulations have shown that, on average, the relative speedups of the left eigenvector matrix approach and the telescoping technique are about 15% and 30%, respectively. -- Highlights: ► We presented the left eigenvector matrix approach. ► We analyzed the method of false discrete ordinate. ► The telescoping technique is applied for the matrix operator method. ► The considered techniques accelerate the computations by 20% on average.

  7. Accountability in public health units: using a modified nominal group technique to develop a balanced scorecard for performance measurement.

    Science.gov (United States)

    Robinson, Victoria A; Hunter, Duncan; Shortt, Samuel E D

    2003-01-01

    Little attention has been paid to the need for accountability instruments applicable across all health units in the public health system. One tool, the balanced scorecard was created for industry and has been successfully adapted for use in Ontario hospitals. It consists of 4 quadrants: financial performance, outcomes, customer satisfaction and organizational development. The aim of the present study was to determine if a modified nominal group technique could be used to reach consensus among public health unit staff and public health specialists in Ontario about the components of a balanced scorecard for public health units. A modified nominal group technique consensus method was used with the public health unit staff in 6 Eastern Ontario health units (n=65) and public health specialists (n=18). 73.8% of the public health unit personnel from all six health units in the eastern Ontario region participated in the survey of potential indicators. A total of 74 indicators were identified in each of the 4 quadrants: program performance (n=44); financial performance (n=11); public perceptions (n=11); and organizational performance (n=8). The modified nominal group technique was a successful method of incorporating the views of public health personnel and specialists in the development of a balanced scorecard for public health.

  8. 40 CFR 63.7541 - How do I demonstrate continuous compliance under the emission averaging provision?

    Science.gov (United States)

    2010-07-01

    ... solid fuel boilers participating in the emissions averaging option as determined in § 63.7522(f) and (g... this section. (i) For each existing solid fuel boiler participating in the emissions averaging option... below the applicable limit. (ii) For each group of boilers participating in the emissions averaging...

  9. Averages of b-hadron, c-hadron, and τ-lepton properties as of summer 2016

    Energy Technology Data Exchange (ETDEWEB)

    Amhis, Y. [LAL, Universite Paris-Sud, CNRS/IN2P3, Orsay (France); Banerjee, S. [University of Louisville, Louisville, KY (United States); Ben-Haim, E. [Universite Paris Diderot, CNRS/IN2P3, LPNHE, Universite Pierre et Marie Curie, Paris (France); Bernlochner, F.; Dingfelder, J.; Duell, S. [University of Bonn, Bonn (Germany); Bozek, A. [H. Niewodniczanski Institute of Nuclear Physics, Krakow (Poland); Bozzi, C. [INFN, Sezione di Ferrara, Ferrara (Italy); Chrzaszcz, M. [H. Niewodniczanski Institute of Nuclear Physics, Krakow (Poland); Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Gersabeck, M. [University of Manchester, School of Physics and Astronomy, Manchester (United Kingdom); Gershon, T. [University of Warwick, Department of Physics, Coventry (United Kingdom); Gerstel, D.; Serrano, J. [Aix Marseille Univ., CNRS/IN2P3, CPPM, Marseille (France); Goldenzweig, P. [Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Harr, R. [Wayne State University, Detroit, MI (United States); Hayasaka, K. [Niigata University, Niigata (Japan); Hayashii, H. [Nara Women' s University, Nara (Japan); Kenzie, M. [Cavendish Laboratory, University of Cambridge, Cambridge (United Kingdom); Kuhr, T. [Ludwig-Maximilians-University, Munich (Germany); Leroy, O. [Aix Marseille Univ., CNRS/IN2P3, CPPM, Marseille (France); Lusiani, A. [Scuola Normale Superiore, Pisa (Italy); INFN, Sezione di Pisa, Pisa (Italy); Lyu, X.R. [University of Chinese Academy of Sciences, Beijing (China); Miyabayashi, K. [Niigata University, Niigata (Japan); Naik, P. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Nanut, T. [J. Stefan Institute, Ljubljana (Slovenia); Oyanguren Campos, A. [Centro Mixto Universidad de Valencia-CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Patel, M. [Imperial College London, London (United Kingdom); Pedrini, D. [INFN, Sezione di Milano-Bicocca, Milan (Italy); Petric, M. 
[European Organization for Nuclear Research (CERN), Geneva (Switzerland); Rama, M. [INFN, Sezione di Pisa, Pisa (Italy); Roney, M. [University of Victoria, Victoria, BC (Canada); Rotondo, M. [INFN, Laboratori Nazionali di Frascati, Frascati (Italy); Schneider, O. [Institute of Physics, Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne (Switzerland); Schwanda, C. [Institute of High Energy Physics, Vienna (Austria); Schwartz, A.J. [University of Cincinnati, Cincinnati, OH (United States); Shwartz, B. [Budker Institute of Nuclear Physics (SB RAS), Novosibirsk (Russian Federation); Novosibirsk State University, Novosibirsk (Russian Federation); Tesarek, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); Tonelli, D. [INFN, Sezione di Pisa, Pisa (Italy); Trabelsi, K. [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); SOKENDAI (The Graduate University for Advanced Studies), Hayama (Japan); Urquijo, P. [School of Physics, University of Melbourne, Melbourne, VIC (Australia); Van Kooten, R. [Indiana University, Bloomington, IN (United States); Yelton, J. [University of Florida, Gainesville, FL (US); Zupanc, A. [J. Stefan Institute, Ljubljana (SI); University of Ljubljana, Faculty of Mathematics and Physics, Ljubljana (SI); Collaboration: Heavy Flavor Averaging Group (HFLAV)

    2017-12-15

This article reports world averages of measurements of b-hadron, c-hadron, and τ-lepton properties obtained by the Heavy Flavor Averaging Group using results available through summer 2016. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays, and Cabibbo-Kobayashi-Maskawa matrix elements. (orig.)

  10. Resistance to abrasion of extrinsic porcelain esthetic characterization techniques.

    Science.gov (United States)

    Chi, Woo J; Browning, William; Looney, Stephen; Mackert, J Rodway; Windhorn, Richard J; Rueggeberg, Frederick

    2017-01-01

    groups. In this study, the novel external characterization technique (stain+LFP: Group SL) did not significantly enhance the wear resistance against toothbrush abrasion. Instead, the average wear of the applied extrinsic porcelain was 2 to 3 times more than Group S (stain only) and Group GS (glaze over stain). Application of a glaze layer over the colorants (Group GS) showed a significant improvement on wear resistance. Despite its superior physical properties, the leucite reinforced ceramic core (Group C) showed 2 to 4 times more wear when compared with other test groups. A conventional external esthetic characterization technique of applying a glaze layer over the colorants (Group GS) significantly enhanced the surface wear resistance to toothbrush abrasion when compared with other techniques involving application of colorants only (Group S) or mixture of colorant and LFP (Group SL). The underlying core ceramic had significantly less wear resistance compared with all externally characterized specimens. The novel esthetic characterization technique showed more wear and less color stability, and is thus not advocated as the "best" method for surface characterization. Application of a glaze layer provides a more wear-resistant surface from toothbrush abrasion when adjusting or extrinsically characterizing leucite reinforced ceramic restorations. Without the glaze layer, the restoration is subjected to a 2 to 4 times faster rate and amount of wear leading to possible shade mismatch.

  11. EXTRAPOLATION TECHNIQUES EVALUATING 24 HOURS OF AVERAGE ELECTROMAGNETIC FIELD EMITTED BY RADIO BASE STATION INSTALLATIONS: SPECTRUM ANALYZER MEASUREMENTS OF LTE AND UMTS SIGNALS.

    Science.gov (United States)

    Mossetti, Stefano; de Bartolo, Daniela; Veronese, Ivan; Cantone, Marie Claire; Cosenza, Cristina; Nava, Elisa

    2017-04-01

    International and national organizations have formulated guidelines establishing limits for occupational and residential electromagnetic field (EMF) exposure at high-frequency fields. Italian legislation fixed 20 V/m as a limit for public protection from exposure to EMFs in the frequency range 0.1 MHz-3 GHz and 6 V/m as a reference level. Recently, the law was changed and the reference level must now be evaluated as the 24-hour average value, instead of the previous highest 6 minutes in a day. The law refers to a technical guide (CEI 211-7/E published in 2013) for the extrapolation techniques that public authorities have to use when assessing exposure for compliance with limits. In this work, we present measurements carried out with a vectorial spectrum analyzer to identify technical critical aspects in these extrapolation techniques, when applied to UMTS and LTE signals. We focused also on finding a good balance between statistically significant values and logistic managements in control activity, as the signal trend in situ is not known. Measurements were repeated several times over several months and for different mobile companies. The outcome presented in this article allowed us to evaluate the reliability of the extrapolation results obtained and to have a starting point for defining operating procedures. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
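The regulatory change described here, from the highest 6-minute average in a day to a 24-hour average, can be illustrated with synthetic data. This is only a toy comparison of the two metrics on a stream of field samples, not the CEI 211-7/E extrapolation procedure itself:

```python
# Toy comparison of the former compliance metric (worst 6-minute average)
# and the current one (24-hour average) on synthetic E-field readings.
# Data and scenario are made up for illustration.

def highest_window_average(samples, window):
    """Maximum over all sliding-window means of length `window`."""
    return max(
        sum(samples[i:i + window]) / window
        for i in range(len(samples) - window + 1)
    )

def daily_average(samples):
    return sum(samples) / len(samples)

# One reading (V/m) per minute: a quiet day with one busy hour.
readings = [2.0] * 1380 + [9.0] * 60   # 23 h at 2 V/m, 1 h at 9 V/m
print(round(highest_window_average(readings, 6), 2))  # → 9.0
print(round(daily_average(readings), 2))              # → 2.29
```

On the same data the old 6-minute criterion reacts fully to the single busy hour, while the 24-hour average stays far below it, which is exactly why the extrapolation techniques and the choice of measurement windows become critical when the in-situ signal trend is unknown.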

  12. Review: Ralf Bohnsack, Aglaja Przyborski & Burkhard Schäffer (Eds.) (2010). Das Gruppendiskussionsverfahren in der Forschungspraxis [The Group Discussion Technique in Research Practice]

    Directory of Open Access Journals (Sweden)

    Diana Schmidt-Pfister

    2011-03-01

    Full Text Available This edited volume comprises a range of studies that have employed a group discussion technique in combination with a specific strategy for reconstructive social research—the so-called documentary method. The latter is an empirical research strategy based on the meta-theoretical premises of the praxeological sociology of knowledge, as developed by Ralf BOHNSACK. It seeks to access practice in a more appropriate manner, namely by differentiating between various dimensions of knowledge and sociality. It holds that habitual collective orientations, in particular, are best accessed through group discussions. Thus this book does not address the group discussion technique in general, as might be expected from the title. Instead, it presents various contributions from researchers interpreting transcripts of group discussions according to the documentary method. The chapters are grouped into three main sections, representing different frameworks of practice and habitual orientation: childhood, adolescence, and organizational or societal context. A fourth section includes chapters on further, potentially useful ways of employing this particular technique and approach, as well as a chapter on teaching it in a meaningful way. Each chapter is structured in the same way: introduction to the research field and focus; methodological discussion; exemplary interpretation of group discussions; and concluding remarks. Whilst the transcripts referred to by the authors are very helpfully presented in the chapters, there is a lack of methodological reflection on the group discussion technique itself, which, as mentioned above, is only evaluated in regard to the documentary method. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs110225

  13. Performance of some supervised and unsupervised multivariate techniques for grouping authentic and unauthentic Viagra and Cialis

    Directory of Open Access Journals (Sweden)

    Michel J. Anzanello

    2014-09-01

Full Text Available A typical application of multivariate techniques in forensic analysis consists of discriminating between authentic and unauthentic samples of seized drugs, in addition to finding similar properties in the unauthentic samples. In this paper, the performance of several methods belonging to two different classes of multivariate techniques, supervised and unsupervised, was compared. The supervised techniques (ST) are the k-Nearest Neighbor (KNN), the Support Vector Machine (SVM), Probabilistic Neural Networks (PNN) and Linear Discriminant Analysis (LDA); the unsupervised techniques (UT) are k-Means CA and Fuzzy C-Means (FCM). The methods are applied to Fourier Transform Infrared Spectroscopy (FTIR) data from authentic and unauthentic Cialis and Viagra. The FTIR data are also transformed by Principal Components Analysis (PCA) and kernel functions aimed at improving the grouping performance. The ST proved to be a more reasonable choice when the analysis is conducted on the original data, while the UT led to better results when applied to transformed data.
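Of the supervised techniques compared in this record, k-Nearest Neighbor is the simplest to sketch. A toy classifier separating "authentic" from "unauthentic" samples using two made-up features (not real FTIR data; names and values are illustrative):

```python
# Toy k-nearest-neighbour classifier in the spirit of the supervised route
# described above. The two "spectral features" per sample are invented.

from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (features, label) pairs; query: a feature tuple."""
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    # Majority vote among the k nearest training samples.
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [
    ((0.9, 0.8), "authentic"), ((1.0, 0.9), "authentic"), ((0.8, 1.0), "authentic"),
    ((0.2, 0.1), "unauthentic"), ((0.1, 0.3), "unauthentic"), ((0.3, 0.2), "unauthentic"),
]
print(knn_classify(train, (0.85, 0.9)))   # → authentic
print(knn_classify(train, (0.15, 0.2)))   # → unauthentic
```

Transforming the inputs first (e.g. by PCA, as the authors do) changes the distances this vote is based on, which is how preprocessing can flip the relative performance of supervised and unsupervised methods.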

  14. Software refactoring at the package level using clustering techniques

    KAUST Repository

    Alkhalid, A.

    2011-01-01

    Enhancing, modifying or adapting the software to new requirements increases the internal software complexity. Software with high level of internal complexity is difficult to maintain. Software refactoring reduces software complexity and hence decreases the maintenance effort. However, software refactoring becomes quite challenging task as the software evolves. The authors use clustering as a pattern recognition technique to assist in software refactoring activities at the package level. The approach presents a computer aided support for identifying ill-structured packages and provides suggestions for software designer to balance between intra-package cohesion and inter-package coupling. A comparative study is conducted applying three different clustering techniques on different software systems. In addition, the application of refactoring at the package level using an adaptive k-nearest neighbour (A-KNN) algorithm is introduced. The authors compared A-KNN technique with the other clustering techniques (viz. single linkage algorithm, complete linkage algorithm and weighted pair-group method using arithmetic averages). The new technique shows competitive performance with lower computational complexity. © 2011 The Institution of Engineering and Technology.
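One of the clustering techniques compared in this record is the pair-group method using arithmetic averages, i.e. average-linkage agglomerative clustering. A toy sketch of that grouping step (not the authors' implementation; the distance matrix is made up) on inter-package distances:

```python
# Toy average-linkage (UPGMA-style) agglomerative clustering on a made-up
# inter-package distance matrix, illustrating the grouping idea above.

def average_linkage(dist, n_clusters):
    """Merge items until n_clusters remain; dist maps frozenset({i, j}) -> distance."""
    clusters = {i: [i] for pair in dist for i in pair}
    def linkage(a, b):
        # Average pairwise distance between members of the two clusters.
        pairs = [(i, j) for i in clusters[a] for j in clusters[b]]
        return sum(dist[frozenset(p)] for p in pairs) / len(pairs)
    while len(clusters) > n_clusters:
        a, b = min(
            ((a, b) for a in clusters for b in clusters if a < b),
            key=lambda pair: linkage(*pair),
        )
        clusters[a].extend(clusters.pop(b))
    return sorted(sorted(members) for members in clusters.values())

# Packages 0-1 and 2-3 are tightly related internally, loosely across:
d = {frozenset(p): v for p, v in [
    ((0, 1), 1.0), ((0, 2), 8.0), ((0, 3), 9.0),
    ((1, 2), 7.0), ((1, 3), 8.0), ((2, 3), 2.0),
]}
print(average_linkage(d, 2))   # → [[0, 1], [2, 3]]
```

In the refactoring setting the "distance" would be derived from coupling and cohesion measures between packages, so the resulting clusters suggest package boundaries.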

  15. The calculation of average error probability in a digital fibre optical communication system

    Science.gov (United States)

    Rugemalira, R. A. M.

    1980-03-01

    This paper deals with the problem of determining the average error probability in a digital fibre optical communication system, in the presence of message dependent inhomogeneous non-stationary shot noise, additive Gaussian noise and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared. The Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity
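The relationship between bound-based and exact error-rate techniques can be seen in the simplest Gaussian case: the exact tail error is the Q-function, while the Chernoff bound overestimates it. A toy illustration (not the paper's shot-noise model) of why a bound predicts a more pessimistic receiver sensitivity:

```python
# For a Gaussian decision statistic, the exact tail error probability is
# Q(x), and the Chernoff bound is exp(-x^2/2) >= Q(x). Toy comparison only;
# the paper's actual channel model includes shot noise and ISI.

import math

def q_function(x):
    """Exact Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def chernoff_bound(x):
    """Chernoff upper bound on Q(x)."""
    return math.exp(-x * x / 2)

for snr in (2.0, 4.0, 6.0):
    print(snr, q_function(snr), chernoff_bound(snr))
```

At every operating point the bound lies above the exact probability, so a receiver sized using the Chernoff bound appears to need more signal than the characteristic-function (exact) calculation indicates, consistent with the comparison reported above.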

  16. Average of delta: a new quality control tool for clinical laboratories.

    Science.gov (United States)

    Jones, Graham R D

    2016-01-01

    Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
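The average-of-delta idea combines two standard procedures: compute each patient's delta against their previous result, then average a window of recent deltas across patients. A minimal sketch with hypothetical data (function and variable names are illustrative, not from the paper):

```python
# Minimal sketch of "average of delta": per-patient deltas are pooled in
# time order and the mean of the last n deltas is monitored; a persistent
# shift away from zero suggests an added assay bias. Hypothetical data.

from collections import deque

def average_of_delta(results_by_patient, n_deltas):
    """Yield the mean of the last `n_deltas` patient deltas as results arrive.

    results_by_patient: iterable of (patient_id, value) in time order.
    """
    last_value = {}
    window = deque(maxlen=n_deltas)
    for patient, value in results_by_patient:
        if patient in last_value:                  # a delta needs a prior result
            window.append(value - last_value[patient])
            if len(window) == n_deltas:
                yield sum(window) / n_deltas
        last_value[patient] = value

# A stable assay: deltas scatter around zero, so the average of delta does too.
stream = [("a", 5.0), ("b", 7.0), ("a", 5.1), ("b", 6.9), ("a", 4.9), ("b", 7.1)]
print(list(average_of_delta(stream, 2)))
```

If a constant bias were added to the assay, every new result would shift by that bias while the stored prior results would not, so all subsequent deltas, and hence their average, would move by the bias, which is the detection signal modelled in the paper.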

  17. Experimental Quasi-Microwave Whole-Body Averaged SAR Estimation Method Using Cylindrical-External Field Scanning

    Science.gov (United States)

    Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio

    The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider the anatomical European human phantoms and plane-wave in the 2GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.

  18. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
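The barycenter approach the article discusses can be sketched for unit quaternions: align signs (q and −q encode the same rotation), average componentwise, and renormalize. This is illustrative code, not the article's, and it is exactly the kind of first-order approximation the article compares against the Riemannian mean:

```python
# Naive "barycenter" average of rotations represented as unit quaternions
# (w, x, y, z): sign-align, average componentwise, renormalize. This ignores
# the curved geometry of the rotation group but is a good approximation for
# nearby rotations. Illustrative sketch only.

import math

def quaternion_mean(quats):
    """Normalized componentwise mean of unit quaternions."""
    ref = quats[0]
    acc = [0.0, 0.0, 0.0, 0.0]
    for q in quats:
        # Flip sign if needed so all quaternions lie in the same half-space.
        sign = 1.0 if sum(a * b for a, b in zip(q, ref)) >= 0 else -1.0
        for i in range(4):
            acc[i] += sign * q[i]
    norm = math.sqrt(sum(a * a for a in acc))
    return tuple(a / norm for a in acc)

# Two small rotations about the z-axis; the mean lies between them.
a = (math.cos(0.05), 0.0, 0.0, math.sin(0.05))   # ~5.7 degrees
b = (math.cos(0.15), 0.0, 0.0, math.sin(0.15))   # ~17.2 degrees
print(quaternion_mean([a, b]))
```

For rotations about a common axis this barycenter happens to give the exact angular midpoint; for widely separated rotations about different axes it drifts from the Riemannian mean, which is the article's point.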

  19. Comparative international studies of osteoporosis using isotope techniques. Report of an IAEA advisory group meeting held in Vienna, Austria, 28-30 October 1992

    International Nuclear Information System (INIS)

    1992-01-01

    An Advisory Group Meeting convened by the IAEA in October 1992 made recommendations on the setting up of a Co-ordinated Research Programme (CRP) using nuclear and isotopic techniques for international comparative studies of osteoporosis. The proposed CRP will be implemented by the IAEA during the period 1993-1997. The main purpose of this programme is to undertake pilot studies of bone density in selected developing country populations for the purposes of (i) determining the age of peak bone mass in each study group, and (ii) quantifying differences in bone density as functions of the age and sex of persons in the study groups, as well as quantifying differences between the study groups in different countries. The preferred technique for bone density measurements in this study is DEXA (dual energy X-ray absorptiometry). Additional measurements of trace elements in bone (and possibly also teeth) are also foreseen using neutron activation analysis and other appropriate techniques

  20. Improving Pre-Operative Flexion in Primary TKA: A Surgical Technique Emphasizing Knee Flexion with 5-Year Follow-Up

    Directory of Open Access Journals (Sweden)

    Edward McPherson

    2014-06-01

Full Text Available This study prospectively reviews a consecutive series of 228 primary total knee arthroplasty (TKA) procedures utilizing a technique to optimize knee flexion. The main features include: (1) the use of a "patellar friendly" femoral component and reduced thickness patellar components, (2) patient-individualized adjustment of the femoral component rotation set strictly to the anterior-posterior femoral axis, (3) a rigorous flexion compartment debridement to remove non-essential posterior femoral bone with a Z-osteotome, and (4) incorporation of a rapid recovery protocol with features to promote knee flexion. Results were categorized into three groups: low pre-op flexion (90 degrees and below), regular pre-op flexion (91-125 degrees), and high pre-op flexion (126 degrees and above). Average flexion in the low flexion group improved by 20 degrees at 6 weeks, 28 degrees at 3 months, 31 degrees at 1 year, and 30 degrees at 5 years. In the regular flexion group, average flexion improved by 2 degrees at 6 weeks, 10 degrees at 3 months, 12 degrees at 1 year, and 13 degrees at 5 years. Finally, in the high flexion group, average flexion decreased by 7 degrees at 6 weeks, regained preoperative levels at 3 months, and increased by 3 degrees at 1 year and 4 degrees at 5 years. In summary, a technique that emphasizes patellofemoral kinematics can consistently improve flexion in TKA in short and long-term follow-up.

  1. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
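Since memoryless strategies suffice, optimal plays are eventually periodic, and on such a "lasso" the average-energy objective reduces to the mean energy level along one traversal of the cycle (the limit is finite only when the cycle's total weight is zero). A small sketch of that computation, with an illustrative play:

```python
# Average-energy of an ultimately periodic play: a finite prefix followed by
# a cycle of weights summing to zero. The accumulated energy is then
# eventually periodic and its long-run average is the mean energy level over
# one traversal of the cycle. Illustrative sketch, hypothetical weights.

from itertools import accumulate

def average_energy(prefix, cycle):
    """Long-run average accumulated energy of prefix . cycle^omega."""
    assert sum(cycle) == 0, "finite average energy needs a zero-weight cycle"
    start = sum(prefix)                                   # energy entering the cycle
    levels = list(accumulate(cycle, initial=start))[1:]   # energy after each step
    return sum(levels) / len(levels)

# Gain 2, gain 1, then spend 3, forever:
print(average_energy([], [2, 1, -3]))
```

Note that the prefix still matters: it sets the energy level at which the cycle is entered, so two plays with the same cycle but different prefixes have different average energies, one reason this objective is not captured by mean-payoff alone.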

  2. Effective porosity and pore-throat sizes of Conasauga Group mudrock: Application, test and evaluation of petrophysical techniques

    International Nuclear Information System (INIS)

    Dorsch, J.; Katsube, T.J.; Sanford, W.E.; Univ. of Tennessee, Knoxville, TN; Dugan, B.E.; Tourkow, L.M.

    1996-04-01

    Effective porosity (specifically referring to the interconnected pore space) was recently recognized as being essential in determining the effectiveness and extent of matrix diffusion as a transport mechanism within fractured low-permeability rock formations. The research presented in this report was performed to test the applicability of several petrophysical techniques for the determination of effective porosity of fine-grained siliciclastic rocks. In addition, the aim was to gather quantitative data on the effective porosity of Conasauga Group mudrock from the Oak Ridge Reservation (ORR). The quantitative data reported here include not only effective porosities based on diverse measurement techniques, but also data on the sizes of pore throats and their distribution, and specimen bulk and grain densities. The petrophysical techniques employed include the immersion-saturation method, mercury and helium porosimetry, and the radial diffusion-cell method

  3. AGE GROUP CLASSIFICATION USING MACHINE LEARNING TECHNIQUES

    OpenAIRE

    Arshdeep Singh Syal & Abhinav Gupta

    2017-01-01

    A human face provides a lot of information that allows another person to identify characteristics such as age, sex, etc. Therefore, the challenge is to develop an age group prediction system using machine learning methods. The task of estimating the age group of a human from frontal facial images is very captivating, but also challenging, because of the personalized and non-linear pattern of aging that differs from one person to another. This paper examines the problem of predicti...

  4. The Health Effects of Income Inequality: Averages and Disparities.

    Science.gov (United States)

    Truesdale, Beth C; Jencks, Christopher

    2016-01-01

    Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.

  5. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  6. THE ASSESSMENT OF CORPORATE BONDS ON THE BASIS OF THE WEIGHTED AVERAGE

    Directory of Open Access Journals (Sweden)

    Victor V. Prokhorov

    2014-01-01

    Full Text Available The article considers the problem of assessing the interest rate of a public corporate bond issue. The subject of the research is techniques for evaluating the interest rates of corporate bonds. The article discusses the task of developing a methodology for assessing the market interest rate of a corporate bonded loan which takes into account both systematic and specific risks. A technique for evaluating the market interest rate of corporate bonds on the basis of weighted averages is proposed. This procedure uses in the calculation a cumulative barrier interest rate, a sectoral weighted average interest rate, and an interest rate determined on the basis of the CAPM (Capital Asset Pricing Model). The results make it possible to speak about the applicability of the proposed methodology for assessing the market interest rate of a public corporate bond issue under Russian conditions. The results may be applicable for Russian industrial enterprises organizing public bond issues, as well as for investment companies acting as organizers of corporate securities loans and other organizations specializing in investments in Russian public corporate bond loans.
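    As an illustrative sketch only, a weighted average of the three component rates the authors name might look like the following; the rates and weights below are invented for the example, not values from the article:

    ```python
    def weighted_average_rate(components):
        """components: list of (rate, weight) pairs; weights need not sum to 1."""
        total_weight = sum(w for _, w in components)
        return sum(r * w for r, w in components) / total_weight

    # Hypothetical component rates and weights (assumptions for illustration):
    rate = weighted_average_rate([
        (0.085, 0.4),  # cumulative barrier interest rate
        (0.092, 0.3),  # sectoral weighted average interest rate
        (0.101, 0.3),  # interest rate from the CAPM model
    ])
    print(round(rate, 4))  # 0.0919
    ```

    How the article actually sets the weights is not reproduced here; the point is only the mechanics of combining the three rates into one market rate.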

  7. Averages of B-Hadron, C-Hadron, and tau-lepton properties as of early 2012

    Energy Technology Data Exchange (ETDEWEB)

    Amhis, Y.; et al.

    2012-07-01

    This article reports world averages of measurements of b-hadron, c-hadron, and tau-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through the end of 2011. In some cases results available in the early part of 2012 are included. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays and CKM matrix elements.
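    HFAG's actual procedure rescales inputs to common parameters and propagates known correlations; as a much simplified sketch, the uncorrelated building block is an inverse-variance weighted average (the measurement values below are made up):

    ```python
    def inverse_variance_average(measurements):
        """measurements: list of (value, sigma); returns (average, sigma_of_average).

        Each measurement is weighted by 1/sigma^2, so more precise
        results dominate the combination.
        """
        weights = [1.0 / s ** 2 for _, s in measurements]
        avg = sum(v * w for (v, _), w in zip(measurements, weights)) / sum(weights)
        return avg, (1.0 / sum(weights)) ** 0.5

    # Two hypothetical lifetime measurements (ps) with different precision:
    avg, err = inverse_variance_average([(1.519, 0.005), (1.525, 0.010)])
    print(round(avg, 4), round(err, 4))  # 1.5202 0.0045
    ```

    The combined uncertainty is smaller than either input, which is the basic motivation for producing world averages; handling correlated systematics requires the full covariance treatment described in the report.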

  8. Optimal transformation for correcting partial volume averaging effects in magnetic resonance imaging

    International Nuclear Information System (INIS)

    Soltanian-Zadeh, H.; Windham, J.P.; Yagle, A.E.

    1993-01-01

    Segmentation of a feature of interest while correcting for partial volume averaging effects is a major tool for identification of hidden abnormalities, fast and accurate volume calculation, and three-dimensional visualization in the field of magnetic resonance imaging (MRI). The authors present the optimal transformation for simultaneous segmentation of a desired feature and correction of partial volume averaging effects, while maximizing the signal-to-noise ratio (SNR) of the desired feature. It is proved that correction of partial volume averaging effects requires the removal of the interfering features from the scene. It is also proved that correction of partial volume averaging effects can be achieved merely by a linear transformation. It is finally shown that the optimal transformation matrix is easily obtained using the Gram-Schmidt orthogonalization procedure, which is numerically stable. Applications of the technique to MRI simulation, phantom, and brain images are shown. They show that in all cases the desired feature is segmented from the interfering features and partial volume information is visualized in the resulting transformed images
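    The Gram-Schmidt step the authors rely on can be sketched as follows; this is a generic modified Gram-Schmidt orthonormalization (the numerically stabler variant), not the authors' specific MRI transformation code:

    ```python
    import numpy as np

    def gram_schmidt(vectors):
        """Modified Gram-Schmidt: returns an orthonormal basis as rows.

        Each vector is orthogonalized against the running basis, then
        normalized; near-dependent vectors are dropped.
        """
        basis = []
        for v in np.asarray(vectors, dtype=float):
            for b in basis:
                v = v - np.dot(v, b) * b   # remove component along b
            norm = np.linalg.norm(v)
            if norm > 1e-12:
                basis.append(v / norm)
        return np.array(basis)

    # Two hypothetical feature signatures (e.g., tissue signal vectors):
    basis = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
    print(np.allclose(basis @ basis.T, np.eye(2)))  # True
    ```

    In the paper's setting, projecting image vectors onto directions orthogonal to the interfering features' signatures is what removes those features while preserving the desired one.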

  9. Muscle gap approach under a minimally invasive channel technique for treating long segmental lumbar spinal stenosis: A retrospective study.

    Science.gov (United States)

    Bin, Yang; De Cheng, Wang; Wei, Wang Zong; Hui, Li

    2017-08-01

    This study aimed to compare the efficacy of the muscle gap approach under a minimally invasive channel surgical technique with the traditional median approach. In the Orthopedics Department of the Traditional Chinese and Western Medicine Hospital, Tongzhou District, Beijing, 68 cases of lumbar spinal canal stenosis underwent surgery using either the muscle gap approach under a minimally invasive channel technique or a median approach between September 2013 and February 2016. Both approaches adopted lumbar spinal canal decompression, intervertebral disk removal, cage implantation, and pedicle screw fixation. The operation time, bleeding volume, postoperative drainage volume, and preoperative and postoperative visual analog scale (VAS) and Japanese Orthopedics Association (JOA) scores were compared between the 2 groups. All patients were followed up for more than 1 year. No significant difference between the 2 groups was found with respect to age, gender, or surgical segments. No difference was noted in the operation time, intraoperative bleeding volume, preoperative and 1-month postoperative VAS scores, preoperative and 1-month postoperative JOA scores, or 6-month postoperative JOA scores between the 2 groups (P > .05). The amount of postoperative wound drainage was significantly lower in the muscle gap approach group than in the median approach group (260.90 ± 160 mL vs 447.80 ± 183.60 mL, P < .05). In the muscle gap approach under a minimally invasive channel group, the average drainage volume was reduced by 187 mL, and the average VAS score 6 months after the operation was reduced by an average of 0.48. The muscle gap approach under a minimally invasive channel technique is a feasible method to treat long segmental lumbar spinal canal stenosis. It retains the integrity of the posterior spine complex to the greatest extent, so as to reduce adjacent spinal segment degeneration and soft tissue trauma. Satisfactory short-term and long-term clinical results were obtained.

  10. Group performance and group learning at dynamic system control tasks

    International Nuclear Information System (INIS)

    Drewes, Sylvana

    2013-01-01

    Proper management of dynamic systems (e.g. cooling systems of nuclear power plants or production and warehousing) is important to ensure public safety and economic success. So far, research has provided broad evidence for systematic shortcomings in individuals' control performance of dynamic systems. This research aims to investigate whether groups manifest synergy (Larson, 2010) and outperform individuals and, if so, what processes lead to these performance advantages. In three experiments - including simulations of a nuclear power plant and a business setting - I compare the control performance of three-person groups to the average individual performance and to nominal groups (N = 105 groups per experiment). The nominal group condition captures the statistical advantage of aggregated group judgements not due to social interaction. First, results show a superior performance of groups compared to individuals. Second, a meta-analysis across all three experiments shows interaction-based process gains in dynamic control tasks: interacting groups outperform the average individual performance as well as the nominal group performance. Third, group interaction leads to stable individual improvements of group members that exceed practice effects. In sum, these results provide the first unequivocal evidence for interaction-based performance gains of groups in dynamic control tasks and imply that employers should rely on groups to provide opportunities for individual learning and to foster dynamic system control at its best.

  11. Environmental stresses can alleviate the average deleterious effect of mutations

    Directory of Open Access Journals (Sweden)

    Leibler Stanislas

    2003-05-01

    Full Text Available Abstract Background Fundamental questions in evolutionary genetics, including the possible advantage of sexual reproduction, depend critically on the effects of deleterious mutations on fitness. Limited existing experimental evidence suggests that, on average, such effects tend to be aggravated under environmental stresses, consistent with the perception that stress diminishes the organism's ability to tolerate deleterious mutations. Here, we ask whether there are also stresses with the opposite influence, under which the organism becomes more tolerant to mutations. Results We developed a technique, based on bioluminescence, which allows accurate automated measurements of bacterial growth rates at very low cell densities. Using this system, we measured growth rates of Escherichia coli mutants under a diverse set of environmental stresses. In contrast to the perception that stress always reduces the organism's ability to tolerate mutations, our measurements identified stresses that do the opposite – that is, despite decreasing wild-type growth, they alleviate, on average, the effect of deleterious mutations. Conclusions Our results show a qualitative difference between various environmental stresses ranging from alleviation to aggravation of the average effect of mutations. We further show how the existence of stresses that are biased towards alleviation of the effects of mutations may imply the existence of average epistatic interactions between mutations. The results thus offer a connection between the two main factors controlling the effects of deleterious mutations: environmental conditions and epistatic interactions.

  12. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
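    The stated relationship is easy to check numerically. The sketch below (with made-up data) verifies that the difference between the w-weighted and u-weighted averages of a variable equals the u-weighted covariance between the variable and the weight ratio w/u, divided by the u-weighted average of that ratio:

    ```python
    import random

    def wavg(x, w):
        """Weighted average of x with weights w."""
        return sum(xi * wi for xi, wi in zip(x, w)) / sum(w)

    random.seed(1)
    n = 100
    x = [random.random() for _ in range(n)]          # the variable
    u = [random.random() + 0.5 for _ in range(n)]    # weighting function 1
    w = [random.random() + 0.5 for _ in range(n)]    # weighting function 2
    r = [wi / ui for wi, ui in zip(w, u)]            # ratio of the weights

    # u-weighted covariance between x and r:
    cov_u = wavg([xi * ri for xi, ri in zip(x, r)], u) - wavg(x, u) * wavg(r, u)

    lhs = wavg(x, w) - wavg(x, u)                    # difference of averages
    rhs = cov_u / wavg(r, u)                         # covariance formula
    print(abs(lhs - rhs) < 1e-12)                    # True
    ```

    Intuitively, when the variable is positively correlated with the ratio of the weights, the reweighted average shifts upward; when they are uncorrelated, compositional change leaves the average unchanged.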

  13. Clinical value of CARE dose 4D technique in decreasing CT scanning dose of adult chest

    International Nuclear Information System (INIS)

    Wu Aiqin; Zheng Wenlong; Xu Chongyong; Fang Bidong; Ge Wen

    2011-01-01

    Objective: To investigate the value of the CARE Dose 4D technique in decreasing radiation dose and improving image quality in adult chest scanning with multi-slice spiral CT. Methods: 100 patients undergoing chest CT scanning were randomly and equally divided into a study group and a control group. The CARE Dose 4D technique was used in the study group. The effective mAs value, volume CT dose index (CTDIvol), and dose-length product (DLP) displayed by the scanner during chest scanning were recorded, together with the actual mAs value of every image. The image quality at the apex of the lung, the lower edge of the aortic arch, the middle area of the left atrium, and the base of the lung on each of 400 images was judged and classified into three levels (excellent, good, poor) by two deputy chief physicians using a double-blind method, and the image noise at the corresponding locations was measured. Results: With a quality reference of 80 mAs, the effective mAs value in the study group decreased by up to 44 mAs relative to the control group, with an average decrease of 9.60 mAs (12.0%); CTDIvol decreased by up to 4.75 mGy, with an average decrease of 0.95 mGy (11.0%); DLP decreased correspondingly. The rate of excellent or good image quality was 99.50% in the study group and 98.0% in the control group. Image quality was higher at the apex and base of the lung, lower at the middle area of the left atrium, and similar at the lower edge of the aortic arch in the study group compared with the control group. The image noise was lower at the apex and base of the lung in the study group than in the control group (t = 6.299 and 2.332, all P < 0.05), higher at the middle area of the left atrium (t = 3.078, P < 0.05), and similar at the lower edge of the aortic arch (t = 1.191, P > 0.05). Conclusions: The CARE Dose 4D technique provides real-time online mAs regulation; it not only raises the utilization rate of radiation and decreases radiation dose, but also preserves and improves image quality in chest CT scanning, and has clinical significance. (authors)

  14. Experimental Quasi-Microwave Whole-Body Averaged SAR Estimation Method Using Cylindrical-External Field Scanning

    OpenAIRE

    Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio

    2010-01-01

    The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and nume...

  15. The evolution of cooperation in spatial groups

    International Nuclear Information System (INIS)

    Zhang Jianlei; Zhang Chunyan; Chu Tianguang

    2011-01-01

    Research highlights: → We propose a model of evolutionary games in which individuals are organized into networked groups. → We show that the social dilemma can be resolved and high cooperation levels are attained. → Larger average group size would lead to lower cooperation level but higher average payoffs. → The results show that higher expectations can bring the system with larger average payoffs. - Abstract: Much of human cooperation remains an evolutionary riddle. There is evidence that individuals are often organized into groups in many social situations. Inspired by this observation, we propose a simple model of evolutionary public goods games in which individuals are organized into networked groups. Here, nodes in the network represent groups; the edges, connecting the nodes, refer to the interactions between the groups. Individuals establish public goods games with partners in the same group and migrate among neighboring groups depending on their payoffs and expectations. We show that the paradigmatic public goods social dilemma can be resolved and high cooperation levels are attained in structured groups, even in relatively harsh conditions for cooperation. Further, by means of numerical simulations and mean-field analysis, we arrive at the result: larger average group size and milder cooperation environment would lead to lower cooperation level but higher average payoffs of the entire population. Altogether, these results emphasize that our understanding of cooperation can be enhanced by investigations of how spatial groups of individuals affect the evolution dynamics, which might help in explaining the emergence and evolution of cooperation.
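    The public goods game inside each group can be sketched minimally; the enhancement factor and contribution vector below are illustrative assumptions, not the paper's parameters:

    ```python
    def public_goods_payoffs(contributions, r):
        """Payoff of each group member in one public goods game round.

        All contributions go into a common pot, which is multiplied by the
        enhancement factor r and shared equally; each member's payoff is
        their share minus their own contribution.
        """
        pot = r * sum(contributions)
        share = pot / len(contributions)
        return [share - c for c in contributions]

    # Group of 4 with r = 1.6: two cooperators contribute 1, two defectors
    # contribute 0. Cooperators net ~-0.2 while defectors net 0.8, which is
    # exactly the social dilemma the structured-group model has to resolve.
    print(public_goods_payoffs([1, 1, 0, 0], 1.6))
    ```

    In the model described above, such games are played within each networked group, and migration between neighboring groups (driven by payoffs and expectations) is what can sustain cooperation despite this incentive structure.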

  16. Whole-body bone segmentation from MRI for PET/MRI attenuation correction using shape-based averaging

    DEFF Research Database (Denmark)

    Arabi, Hossein; Zaidi, H.

    2016-01-01

    Purpose: The authors evaluate the performance of the shape-based averaging (SBA) technique for whole-body bone segmentation from MRI in the context of MRI-guided attenuation correction (MRAC) in hybrid PET/MRI. To enhance the performance of the SBA scheme, the authors propose to combine it with statistical atlas fusion techniques. Moreover, a fast and efficient shape comparison-based atlas selection scheme was developed and incorporated into the SBA method. Methods: Clinical studies consisting of PET/CT and MR images of 21 patients were used to assess the performance of the SBA method. In addition, the majority voting (MV) atlas fusion scheme was also evaluated as a conventional and commonly used method. MRI-guided attenuation maps were generated using the different segmentation methods. Thereafter, quantitative analysis of PET attenuation correction was performed using CT-based attenuation correction
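    As a rough sketch of the conventional baseline mentioned in the abstract, majority voting fuses atlas label maps voxel-wise by the most frequent label (SBA itself instead averages shape representations of the labels; the tiny 2x2 "images" below are invented):

    ```python
    import numpy as np

    def majority_vote(label_maps):
        """Fuse atlas label maps voxel-wise by the most frequent label."""
        stack = np.stack(label_maps)                        # (n_atlases, *shape)
        labels = np.unique(stack)
        votes = np.stack([(stack == l).sum(axis=0) for l in labels])
        return labels[np.argmax(votes, axis=0)]             # ties -> first label

    # Three hypothetical bone (1) / non-bone (0) atlas segmentations:
    atlases = [np.array([[1, 0], [1, 1]]),
               np.array([[1, 0], [0, 1]]),
               np.array([[0, 0], [1, 1]])]
    fused = majority_vote(atlases)   # fused map is [[1, 0], [1, 1]]
    print(fused)
    ```

    The abstract's point is that SBA, which averages shapes rather than counting discrete votes, can produce smoother and more plausible bone segmentations, especially when combined with statistical atlas fusion and atlas selection.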

  17. Face averages enhance user recognition for smartphone security.

    Science.gov (United States)

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
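    At its core, a face-average in this sense is a pixel-wise mean over multiple aligned images of the same person; the sketch below assumes landmark alignment has already been done and uses toy 2x2 "images":

    ```python
    import numpy as np

    def face_average(aligned_faces):
        """Pixel-wise mean of pre-aligned face images (the 'face-average').

        Idiosyncrasies of individual photos (lighting, expression, pose
        residue) average out, leaving a stable representation of the person.
        """
        return np.mean(np.stack(aligned_faces).astype(float), axis=0)

    imgs = [np.array([[100, 50], [0, 200]]),
            np.array([[120, 70], [20, 220]])]
    print(face_average(imgs))  # [[110.  60.] [ 10. 210.]]
    ```

    Enrolling this average, rather than any single photo, is what gave the verification system its familiarity-like robustness across viewing conditions in the experiments.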

  18. Energy group structure determination using particle swarm optimization

    International Nuclear Information System (INIS)

    Yi, Ce; Sjoden, Glenn

    2013-01-01

    Highlights: ► Particle swarm optimization is applied to determine broad group structure. ► A graph representation of the broad group structure problem is introduced. ► The approach is tested on a fuel-pin model. - Abstract: Multi-group theory is widely applied for the energy domain discretization when solving the Linear Boltzmann Equation. To reduce the computational cost, fine group cross section libraries are often down-sampled into broad group cross section libraries. Cross section data collapsing generally involves two steps: first, the broad group structure has to be determined; second, a weighting scheme is used to evaluate the broad group cross section library based on the fine group cross section data and the broad group structure. A common scheme is to average the fine group cross sections weighted by the fine group flux. Cross section collapsing techniques have been intensively researched. However, most studies use a pre-determined group structure, often based on experience, to divide the neutron energy spectrum into thermal, epi-thermal, fast, etc. energy ranges. In this paper, a swarm intelligence algorithm, particle swarm optimization (PSO), is applied to optimize the broad group structure. A graph representation of the broad group structure determination problem is introduced, and the swarm intelligence algorithm is used to solve the graph model. The effectiveness of the approach is demonstrated using a fuel-pin model
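    The flux-weighted collapsing scheme described above can be sketched as follows; the cross sections, fluxes, and boundary placement are illustrative numbers, and the boundary choice is precisely what the PSO is used to optimize:

    ```python
    def collapse_xs(fine_xs, fine_flux, boundaries):
        """Flux-weighted collapse of fine-group cross sections into broad groups.

        boundaries: fine-group indices splitting the energy axis into broad
        groups, e.g. [2] turns 4 fine groups into broad groups [0:2] and [2:4].
        Each broad cross section is sum(sigma_g * phi_g) / sum(phi_g).
        """
        edges = [0] + list(boundaries) + [len(fine_xs)]
        broad = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            flux = sum(fine_flux[lo:hi])
            broad.append(sum(x * f for x, f in
                             zip(fine_xs[lo:hi], fine_flux[lo:hi])) / flux)
        return broad

    xs = [10.0, 8.0, 2.0, 1.0]    # hypothetical fine-group cross sections (barns)
    flux = [1.0, 3.0, 4.0, 2.0]   # hypothetical fine-group fluxes
    print(collapse_xs(xs, flux, [2]))  # [8.5, 1.666...]
    ```

    Different boundary placements yield different broad-group libraries; the paper's contribution is letting PSO search over these placements instead of fixing them by experience.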

  19. Principles of resonance-averaged gamma-ray spectroscopy

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1981-01-01

    The unambiguous determination of excitation energies, spins, parities, and other properties of nuclear levels is the paramount goal of the nuclear spectroscopist. All developments of nuclear models depend upon the availability of a reliable data base on which to build. In this regard, slow neutron capture gamma-ray spectroscopy has proved to be a valuable tool. The observation of primary radiative transitions connecting initial and final states can provide definite level positions. In particular the use of the resonance-averaged capture technique has received much recent attention because of the claims advanced for this technique (Chrien 1980a, Casten 1980); that it is able to identify all states in a given spin-parity range and to provide definite spin parity information for these states. In view of the importance of this method, it is perhaps surprising that until now no firm analytical basis has been provided which delineates its capabilities and limitations. Such an analysis is necessary to establish the spin-parity assignments derived from this method on a quantitative basis; in other words a quantitative statement of the limits of error must be provided. It is the principal aim of the present paper to present such an analysis. To do this, a historical description of the technique and its applications is presented and the principles of the method are stated. Finally a method of statistical analysis is described, and the results are applied to recent measurements carried out at the filtered beam facilities at the Brookhaven National Laboratory

  20. Comparison of process estimation techniques for on-line calibration monitoring

    International Nuclear Information System (INIS)

    Shumaker, B. D.; Hashemian, H. M.; Morton, G. W.

    2006-01-01

    The goal of on-line calibration monitoring is to reduce the number of unnecessary calibrations performed each refueling cycle on pressure, level, and flow transmitters in nuclear power plants. The effort requires a baseline for determining calibration drift and thereby the need for a calibration. There are two ways to establish the baseline: averaging and modeling. Averaging techniques have proven to be highly successful in the applications when there are a large number of redundant transmitters; but, for systems with little or no redundancy, averaging methods are not always reliable. That is, for non-redundant transmitters, more sophisticated process estimation techniques are needed to augment or replace the averaging techniques. This paper explores three well-known process estimation techniques; namely Independent Component Analysis (ICA), Auto-Associative Neural Networks (AANN), and Auto-Associative Kernel Regression (AAKR). Using experience and data from an operating nuclear plant, the paper will present an evaluation of the effectiveness of these methods in detecting transmitter drift in actual plant conditions. (authors)
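    A minimal sketch of the averaging baseline for redundant transmitters follows (the readings and threshold are illustrative); the paper's point is that this simple check breaks down when there are few or no redundant channels, which motivates ICA, AANN, and AAKR:

    ```python
    def drift_from_redundant_average(readings, threshold):
        """Flag transmitters whose deviation from the group average exceeds threshold.

        readings: one sample per redundant transmitter measuring the same
        process parameter at the same instant.
        """
        avg = sum(readings) / len(readings)
        return [abs(r - avg) > threshold for r in readings]

    # Third transmitter reads high relative to its redundant peers:
    flags = drift_from_redundant_average([100.1, 99.9, 102.5, 100.0], 1.0)
    print(flags)  # [False, False, True, False]
    ```

    With only one or two channels the "group average" is dominated by the very transmitter being checked, so a drifting instrument can mask itself; model-based process estimates replace the average as the baseline in that case.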

  1. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. This work applies the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of realizing the noise improvement of hardware averaging for neural recording with cuff electrodes, and can accommodate the presence of the high source impedances that are associated with miniaturized contacts and high channel counts in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
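    The 1/√N behavior of averaging independent noise sources can be illustrated with a small simulation; this is a statistical sketch, not the amplifier design itself, and it assumes fully independent noise (the paper notes the gain is 1/√N "or less" once source resistance correlates the channels):

    ```python
    import random
    import statistics

    def averaged_noise_std(n_amps, samples=20000, sigma=1.0, seed=0):
        """Standard deviation of the mean of N independent Gaussian noise sources."""
        rng = random.Random(seed)
        means = [sum(rng.gauss(0, sigma) for _ in range(n_amps)) / n_amps
                 for _ in range(samples)]
        return statistics.pstdev(means)

    print(round(averaged_noise_std(1), 2))  # close to 1.0
    print(round(averaged_noise_std(8), 2))  # close to 1/sqrt(8) ≈ 0.35
    ```

    Doing this averaging in hardware, with N parallel amplifiers per contact, buys the same statistical reduction without the power cost of N separate digitizing channels.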

  2. Investigation on the average serum E2 level and menopausal age in healthy women in Wuhan area

    International Nuclear Information System (INIS)

    Chen Huilin; Lan Jian; Zhang Yangyang; Li Fei; Zhang Yuanji

    2008-01-01

    Objective: To investigate the average serum E2 level and menopausal age of healthy women in the Wuhan area and assess the appropriateness of hormone replacement therapy in these women. Methods: Serum E2 levels were measured with RIA in 2020 healthy women (26-75 yr old) in the Wuhan area. Results: (1) Serum E2 levels reached a peak in the 31-35 yr group, dropped significantly in the 46-50 yr group, and reached menopausal levels in the 51-55 yr group. (2) The average menopausal age in the Wuhan area was rather early, at 48.08 yr. Conclusion: The average menopausal age in the Wuhan area was 2.3 yr earlier than the nationwide 1989 screening result, which should be a concern for maternity health workers. (authors)

  3. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares

  4. Role of spatial averaging in multicellular gradient sensing.

    Science.gov (United States)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-05-20

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.
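    The subtraction argument can be made concrete with the standard variance identity for a difference of two correlated measurements; the variance and covariance values below are illustrative, not from the paper:

    ```python
    def var_of_difference(var_a, var_b, cov_ab):
        """Variance of (a - b) for measurements a, b with covariance cov_ab."""
        return var_a + var_b - 2.0 * cov_ab

    # Narrow detector: noisier individual measurements, but strongly covarying.
    narrow = var_of_difference(1.0, 1.0, 0.8)   # ≈ 0.4
    # Wide detector: transverse averaging halves each variance but destroys
    # the covariance between the two ends.
    wide = var_of_difference(0.5, 0.5, 0.0)     # 1.0
    print(narrow, wide)  # the subtraction is LESS precise for the wide detector
    ```

    This is the paper's core effect in miniature: averaging lowers each measurement's noise but also lowers the covariance term that the subtraction relies on, and the net result can be worse precision; REGI is designed to recover the benefit of the transverse averaging.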

  5. Attractiveness of the female body: Preference for the average or the supernormal?

    Directory of Open Access Journals (Sweden)

    Marković Slobodan

    2017-01-01

    Full Text Available The main purpose of the present study was to contrast two hypotheses of female body attractiveness. The first is the “preference-for-the-average” hypothesis: the most attractive female body is the one that represents the average body proportions for a given population. The second is the “preference-for-the-supernormal” hypothesis: according to the so-called “peak shift effect”, the most attractive female body is more feminine than the average. We investigated the preference for three female body characteristics: waist-to-hip ratio (WHR), buttocks, and breasts. There were 456 participants of both genders. Using a program for computer animation (DAZ 3D), three sets of stimuli were generated (WHR, buttocks, and breasts). Each set included six stimuli ranked from the lowest to the highest femininity level. Participants were asked to choose the stimulus within each set which they found most attractive (task 1) and average (task 2). One group of participants judged the body parts presented in the global context (whole body), while the other group judged the stimuli in the local context (isolated body parts only). Analyses have shown that the most attractive WHR, buttocks, and breasts are more feminine (meaning smaller for WHR and larger for breasts and buttocks) than average ones, for both genders and in both presentation contexts. The effect of gender was obtained only for the most attractive breasts: males prefer larger breasts than females. Finally, the most attractive and average WHR and breasts were less feminine in the local than in the global context. These results support the preference-for-the-supernormal hypothesis: all analyses have shown that both male and female participants preferred female body parts which are more feminine than those judged average. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. 179033]

  6. Specificity and sensitivity assessment of selected nasal provocation testing techniques

    Directory of Open Access Journals (Sweden)

    Edyta Krzych-Fałta

    2016-12-01

    Full Text Available Introduction: Nasal provocation testing involves an allergen-specific local reaction of the nasal mucosa to the administered allergen. Aim: To determine the most objective nasal occlusion assessment technique that could be used in nasal provocation testing. Material and methods: A total of 60 subjects, including 30 patients diagnosed with allergy to common environmental allergens and 30 healthy subjects, were enrolled into the study. The method used in the study was a nasal provocation test with an allergen, with a standard dose of a control solution and an allergen (5,000 SBU/ml) administered using a calibrated atomizer into both nostrils at room temperature. Nasal mucosa response in the early phase of the allergic reaction was assessed via acoustic rhinometry, optical rhinometry, nitric oxide in nasal air, and tryptase levels in the nasal lavage fluid. Results: In estimating the homogeneity of the average values, Levene’s test was used, and receiver operating characteristic curves were plotted for all the methods used for assessing the nasal provocation test with an allergen. Statistically significant results were defined for p < 0.05. Of all the objective assessment techniques, the most sensitive and specific was optical rhinometry (specificity = 1, sensitivity = 1, AUC = 1, PPV = 1, NPV = 1. Conclusions: The techniques used showed significant differences between the group of patients with allergic rhinitis and the control group. Of all the objective assessment techniques, the most sensitive and specific was optical rhinometry.
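The diagnostic metrics reported above follow directly from a 2×2 confusion matrix. A minimal sketch (the counts below are illustrative, matching the 30-patient/30-control design; a perfect test such as the optical rhinometry result yields all metrics equal to 1):

```python
# Hypothetical confusion matrix for a perfect diagnostic test:
# 30 allergic patients, 30 healthy controls, no misclassifications.
tp, fn = 30, 0        # allergic patients: detected / missed
tn, fp = 30, 0        # healthy controls: correctly negative / false alarm

sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

print(sensitivity, specificity, ppv, npv)   # 1.0 1.0 1.0 1.0
```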

  7. RESEARCH ON WEIGHT EVOLUTION AND DAILY AVERAGE INCREASE TO FOUR DIFFERENT GROUPS OF LITTLE CROSSBREED BULLS EXPOSED TO INTENSIVE FATTENING

    Directory of Open Access Journals (Sweden)

    I.R. MOLDOVAN

    2009-10-01

    Full Text Available The research aimed to highlight the weight differences and average daily growth of four groups of young crossbred bulls raised in the same environmental conditions and fed the same diet. The farm on which the research was conducted is TCE 3 abis SRL Piatra Neamt, located in Zanesti village, 14 km from the city of Piatra Neamt. The farm occupies the site of the former IAS Zanesti and has eight shelters, of which two are still functional. The shelters are divided into collective pens in which an optimal number of calves are housed depending on their age, varying from 25 calves at 0 - 3 months down to 6 head during the growing and finishing period, when they reach weights of 600-700 kg. The farm is populated with calves from culled cows of the milk farm belonging to the same company. The forage base is provided by the company's crop farm, which exploits about 14,000 ha of arable land in Neamt County. Feeding (in three phases) is done with a technological trailer once daily in the morning, and water is available ad libitum.

  8. NOTES AND CORRESPONDENCE Evaluation of Tidal Removal Method Using Phase Average Technique from ADCP Surveys along the Peng-Hu Channel in the Taiwan Strait

    Directory of Open Access Journals (Sweden)

    Yu-Chia Chang

    2008-01-01

    Full Text Available Three cruises with shipboard Acoustic Doppler Current Profiler (ADCP were performed along a transect across the Peng-hu Channel (PHC in the Taiwan Strait during 2003 - 2004 in order to investigate the feasibility and accuracy of the phase-averaging method to eliminate tidal components from shipboard ADCP measurement of currents. In each cruise measurement was repeated a number of times along the transect with a specified time lag of either 5, 6.21, or 8 hr, and the repeated data at the same location were averaged to eliminate the tidal currents; this is the so-called “phase-averaging method”. We employed 5-phase-averaging, 4-phase-averaging, 3-phase-averaging, and 2-phase-averaging methods in this study. The residual currents and volume transport of the PHC derived from various phase-averaging methods were intercompared and were also compared with results of the least-square harmonic reduction method proposed by Simpson et al. (1990) and the least-square interpolation method using a Gaussian function (Wang et al. 2004). The estimated uncertainty of the residual flow through the PHC derived from the 5-phase-averaging, 4-phase-averaging, 3-phase-averaging, and 2-phase-averaging methods is 0.3, 0.3, 1.3, and 4.6 cm s⁻¹, respectively. Procedures for choosing the best phase-averaging method to remove tidal currents in any particular region are also suggested.
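The 2-phase-averaging idea can be sketched numerically (a hypothetical illustration, not the cruise data): sampling the same location twice, half a tidal period apart, and averaging cancels the dominant tidal constituent and leaves the residual flow.

```python
import numpy as np

# Hypothetical current: steady residual flow plus an M2 tidal component
# (period ~12.42 h). Averaging two samples taken 6.21 h apart (half a tidal
# period) cancels the tidal term -- the idea behind 2-phase averaging.
residual = 5.0       # steady flow, cm/s
tidal_amp = 30.0     # tidal amplitude, cm/s
period = 12.42       # M2 period, hours

def current(t):
    return residual + tidal_amp * np.sin(2 * np.pi * t / period)

t0 = 2.0                                           # first pass, hours
samples = [current(t0), current(t0 + period / 2)]  # lag = half tidal period
print(np.mean(samples))                            # ~5.0: tide cancelled
```

With more repeated passes (the 3-, 4-, and 5-phase variants), additional tidal constituents are suppressed and the uncertainty of the residual estimate shrinks, as the abstract's figures suggest.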

  9. Is the arthroscopic suture bridge technique suitable for full-thickness rotator cuff tears of any size?

    Science.gov (United States)

    Lee, Sung Hyun; Kim, Jeong Woo; Kim, Tae Kyun; Kweon, Seok Hyun; Kang, Hong Je; Kim, Se Jin; Park, Jin Sung

    2017-07-01

    The purpose of this study was to compare functional outcomes and tendon integrity between the suture bridge and modified tension band techniques for arthroscopic rotator cuff repair. A consecutive series of 128 patients who underwent the modified tension band (MTB group; 69 patients) and suture bridge (SB group; 59 patients) techniques were enrolled. The pain visual analogue scale (VAS), Constant, and American Shoulder and Elbow Surgeons (ASES) scores were determined preoperatively and at the final follow-up. Rotator cuff hypotrophy was quantified by calculating the occupation ratio (OR). Rotator cuff integrity and the global fatty degeneration index were determined by using magnetic resonance imaging at 6 months postoperatively. The average VAS, Constant, and ASES scores improved significantly at the final follow-up in both groups. The retear rates were similar between the modified tension band and suture bridge groups for small-to-medium tears (7.0 vs. 6.8%, respectively; p = n.s.). The retear rate of large-to-massive tears was significantly lower in the suture bridge group than in the modified tension band group (33.3 vs. 70%; p = 0.035). Fatty infiltration (postoperative global fatty degeneration index, p = 0.022) and muscle hypotrophy (postoperative OR, p = 0.038) outcomes were significantly better with the suture bridge technique. The retear rate was lower with the suture bridge technique in the case of large-to-massive rotator cuff tears. Additionally, significant improvements in hypotrophy and fatty infiltration of the rotator cuff were obtained with the suture bridge technique, possibly resulting in better anatomical outcomes. The suture bridge technique was a more effective method for the repair of rotator cuff tears of all sizes as compared to the modified tension band technique. Retrospective Cohort Design, Treatment Study, level III.

  10. Embolization of the Internal Iliac Artery: Cost-Effectiveness of Two Different Techniques

    International Nuclear Information System (INIS)

    Pellerin, Olivier; Caruba, Thibaud; Kandounakis, Yanis; Novelli, Luigi; Pineau, Judith; Prognon, Patrice; Sapoval, Marc

    2008-01-01

    The purpose of this study was to compare the cost-effectiveness of coils versus the Amplatzer Vascular Plug (AVP) for occlusion of the internal iliac artery (IIA). Between 2002 and January 2006, 13 patients (mean age 73 ± 13 years) were referred for stent-grafting of abdominal aortic aneurysm (n = 6), type I distal endoleak (n = 3), isolated iliac aneurysm (n = 3), or rupture of a common iliac aneurysm (n = 1). In all patients, extension of the stent-graft was needed because the distal neck was absent. Two different techniques were used to occlude the IIA: AVP in seven patients (group A) and coil embolization in six patients (group C). Immediate results and direct material costs were assessed retrospectively. Immediate success was achieved in all patients, and simultaneous stent-grafting was successfully performed in two of six patients in group C versus five of seven patients in group A. In all group A patients, a single AVP was sufficient to achieve occlusion of the IIA, accounting for a mean cost of 485 Euro, whereas in group C patients, an average of 7 ± 3 coils were used, accounting for a mean cost of 1,745 Euro. The mean cost savings using the AVP was 1,239 Euro. When IIA occlusion is needed, the AVP allows a single-step procedure at significant cost savings.

  11. Tools, techniques, organisation and culture of the CADD group at Sygnature Discovery.

    Science.gov (United States)

    St-Gallay, Steve A; Sambrook-Smith, Colin P

    2017-03-01

    Computer-aided drug design encompasses a wide variety of tools and techniques, and can be implemented with a range of organisational structures and focus in different organisations. Here we outline the computational chemistry skills within Sygnature Discovery, along with the software and hardware at our disposal, and briefly discuss the methods that are not employed and why. The goal of the group is to provide support for design and analysis in order to improve the quality of compounds synthesised and reduce the timelines of drug discovery projects, and we reveal how this is achieved at Sygnature. Impact on medicinal chemistry is vital to demonstrating the value of computational chemistry, and we discuss the approaches taken to influence the list of compounds for synthesis, and how we recognise success. Finally we touch on some of the areas being developed within the team in order to provide further value to the projects and clients.

  12. Monthly streamflow forecasting with auto-regressive integrated moving average

    Science.gov (United States)

    Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani

    2017-09-01

    Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting, with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering is performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang were gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model were then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.
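The evaluation workflow described above (9:1 train/test split, one-step-ahead forecasts, RMSE and MAE) can be sketched in a few lines. This is a hypothetical illustration with synthetic data, and a simple least-squares AR(1) fit stands in for the full ARIMA/SSA pipeline:

```python
import numpy as np

# Synthetic "monthly streamflow": seasonal cycle plus noise (illustrative only).
rng = np.random.default_rng(0)
n = 120
flow = 50 + 10 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 2, n)

split = int(0.9 * n)                  # 9:1 train/test ratio, as in the study
train, test = flow[:split], flow[split:]

# Fit x[t] = c + phi * x[t-1] by least squares on the training set
# (an AR(1) stand-in for the ARIMA model).
X = np.column_stack([np.ones(split - 1), train[:-1]])
c, phi = np.linalg.lstsq(X, train[1:], rcond=None)[0]

# One-step-ahead forecasts over the test set, scored with RMSE and MAE.
prev = np.concatenate([[train[-1]], test[:-1]])
pred = c + phi * prev
rmse = np.sqrt(np.mean((test - pred) ** 2))
mae = np.mean(np.abs(test - pred))
print(f"RMSE={rmse:.2f}  MAE={mae:.2f}")
```

The same split-forecast-score loop applies unchanged when the SSA or clustered-SSA pre-processing is inserted before the model fit.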

  13. Comparative Analysis of Market Volatility in Indian Banking and IT Sectors by using Average Decline Model

    OpenAIRE

    Kirti AREKAR; Rinku JAIN

    2017-01-01

    Stock market volatility depends on three major features: complete volatility, volatility fluctuations, and volatility attention, which are calculated by statistical techniques. This paper presents a comparative analysis of market volatility for two major indices, the banking and IT sectors of the Bombay Stock Exchange (BSE), using the average decline model. The average decline process in volatility has been applied after very high and low stock returns. The results of this study show a significant decline in...

  14. Assessing Dental Hygienists' Communication Techniques for Use with Low Oral Health Literacy Patients.

    Science.gov (United States)

    Flynn, Priscilla; Acharya, Amit; Schwei, Kelsey; VanWormer, Jeffrey; Skrzypcak, Kaitlyn

    2016-06-01

    The primary aim of this study was to assess communication techniques used with low oral health literacy patients by dental hygienists in rural Wisconsin dental clinics. A secondary aim was to determine the utility of the survey instrument used in this study. A mixed methods study consisting of a cross-sectional survey, immediately followed by focus groups, was conducted among dental hygienists in the Marshfield Clinic (Wisconsin) service area. The survey quantified the routine use of 18 communication techniques previously shown to be effective with low oral health literacy patients. Linear regression was used to analyze the association between routine use of each communication technique and several indicator variables, including geographic practice region, oral health literacy familiarity, communication skills training and demographic indicators. Qualitative analyses included code mapping to the 18 communication techniques identified in the survey, and generating new codes based on discussion content. On average, the 38 study participants routinely used 6.3 communication techniques. Dental hygienists who used an oral health literacy assessment tool reported using significantly more communication techniques compared to those who did not use an oral health literacy assessment tool. Focus group results differed from survey responses as few dental hygienists stated familiarity with the term "oral health literacy." Motivational interviewing techniques and using an integrated electronic medical-dental record were additional communication techniques identified as useful with low oral health literacy patients. Dental hygienists in this study routinely used approximately one-third of the communication techniques recommended for low oral health literacy patients, supporting the need for training on this topic.

  15. Determination of the optimal method for the field-in-field technique in breast tangential radiotherapy

    International Nuclear Information System (INIS)

    Tanaka, Hidekazu; Hayashi, Shinya; Hoshi, Hiroaki

    2014-01-01

    Several studies have reported the usefulness of the field-in-field (FIF) technique in breast radiotherapy. However, the methods for the FIF technique used in these studies vary. These methods were classified into three categories. We simulated a radiotherapy plan with each method and analyzed the outcomes. In the first method, a pair of subfields was added to each main field: the single pair of subfields method (SSM). In the second method, three pairs of subfields were added to each main field: the multiple pairs of subfields method (MSM). In the third method, subfields were alternately added: the alternate subfields method (ASM). A total of 51 patients were enrolled in this study. The maximum dose to the planning target volume (PTV) (Dmax) and the volumes of the PTV receiving 100% of the prescription dose (V100%) were calculated. The thickness of the breast between the chest wall and skin surface was measured, and patients were divided into two groups according to the median. In the overall series, the average V100% with ASM (60.3%) was significantly higher than with SSM (52.6%) and MSM (48.7%). In the thin breast group as well, the average V100% with ASM (57.3%) and SSM (54.2%) was significantly higher than that with MSM (43.3%). In the thick breast group, the average V100% with ASM (63.4%) was significantly higher than that with SSM (51.0%) and MSM (54.4%). ASM resulted in better dose distribution, regardless of the breast size. Moreover, planning for ASM required a relatively short time. ASM was considered the most preferred method. (author)

  16. Climate, canopy disturbance, and radial growth averaging in a second-growth mixed-oak forest in West Virginia, USA

    Science.gov (United States)

    James S. Rentch; B. Desta Fekedulegn; Gary W. Miller

    2002-01-01

    This study evaluated the use of radial growth averaging as a technique of identifying canopy disturbances in a thinned 55-year-old mixed-oak stand in West Virginia. We used analysis of variance to determine the time interval (averaging period) and lag period (time between thinning and growth increase) that best captured the growth increase associated with different...

  17. Facial rejuvenation with fillers: The dual plane technique

    Directory of Open Access Journals (Sweden)

    Giovanni Salti

    2015-01-01

    Full Text Available Background: Facial aging is characterized by skin changes, sagging and volume loss. Volume loss is frequently addressed with resorbable fillers such as hyaluronic acid gels. Materials and Methods: From an anatomical point of view, the deep and superficial fat compartments evolve differently with aging in a rather predictable manner. Volume can therefore be restored following a technique based on restoring first the deep volumes and thereafter the superficial volumes. We called this strategy the "dual plane" technique. A series of 147 consecutive patients were treated with fillers using the dual plane technique over the last five years. Results: An average of 4.25 sessions per patient was carried out, for a total of 625 treatment sessions. The average total amount of product used was 12 ml per patient, with an average amount per session of 3.75 ml. We had few and limited adverse events with this technique. Conclusion: The dual plane technique is an injection technique based on anatomical logic. Different types of products can be used according to the plane of injection and their rheology in order to obtain a natural result and few side effects.

  18. Local and average structure of Mn- and La-substituted BiFeO3

    Science.gov (United States)

    Jiang, Bo; Selbach, Sverre M.

    2017-06-01

    The local and average structure of solid solutions of the multiferroic perovskite BiFeO3 is investigated by synchrotron X-ray diffraction (XRD) and electron density functional theory (DFT) calculations. The average experimental structure is determined by Rietveld refinement and the local structure by total scattering data analyzed in real space with the pair distribution function (PDF) method. With equal concentrations of La on the Bi site or Mn on the Fe site, La causes larger structural distortions than Mn. Structural models based on DFT relaxed geometry give an improved fit to experimental PDFs compared to models constrained by the space group symmetry. Berry phase calculations predict a higher ferroelectric polarization than the experimental literature values, reflecting that structural disorder is not captured in either average structure space group models or DFT calculations with artificial long range order imposed by periodic boundary conditions. Only by including point defects in a supercell, here Bi vacancies, can DFT calculations reproduce the literature results on the structure and ferroelectric polarization of Mn-substituted BiFeO3. The combination of local and average structure sensitive experimental methods with DFT calculations is useful for illuminating the structure-property-composition relationships in complex functional oxides with local structural distortions.

  19. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  20. Biochemical and genetic variation of some Syrian wheat varieties using NIR, RAPD and AFLPs techniques

    International Nuclear Information System (INIS)

    Saleh, B.

    2012-01-01

    This study was performed to assess the chemical components and genetic variability of five Syrian wheat varieties using NIR, RAPD and AFLP techniques. The NIR technique showed that Cham6 was the best variety in terms of wheat grain quality due to its lowest protein (%), hardness, water uptake and baking volume and the highest starch (%) compared to the other tested varieties. PCR amplifications with 21 RAPD primers and 13 AFLP primer combinations (PCs) gave 104 and 466 discernible loci, of which 24 (18.823%) and 199 (45.527%) were polymorphic for the two techniques, respectively. Our data indicated that the three techniques gave similar results regarding the degree of relatedness among the tested varieties. In the present investigation, AFLP fingerprinting was more efficient than the RAPD assay, as the latter exhibited a lower average Marker Index (MI) (0.219) compared to that of AFLP (3.203). The pattern generated by RAPD and AFLP markers or by NIR separated the five wheat varieties into two groups. The first group consists of two subclusters: the first subcluster involved Cham8 and Bohous6, while the second includes Cham6, which is very close to the preceding varieties. The second group consists of Bohous9 and Cham7, which were also closely related. Based on this study, the combined use of NIR, RAPD and AFLP techniques could be a powerful tool to detect the relationships among these varieties. (author)

  1. High average power linear induction accelerator development

    International Nuclear Information System (INIS)

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  2. A survey of doses to worker groups in the nuclear industry

    International Nuclear Information System (INIS)

    Khan, T.A.; Baum, J.W.

    1991-01-01

    The US National Council on Radiation Protection and Measurements (NCRP) has suggested "...as guidance for radiation programs that cumulative exposure not exceed the age of the individual in years × 10 mSv (years × 1 rem)." The International Commission on Radiological Protection (ICRP) has recommended a dose limit of 10 rem averaged over 5 years. With these developments in mind, the US Nuclear Regulatory Commission (NRC) requested the ALARA Center of the Brookhaven National Laboratory to undertake two parallel studies. One study, which is still ongoing, is to examine the impact of the newly recommended dose limits on the nuclear industry as a whole; the other study was intended to assist in this larger project by looking more closely at the nuclear power industry. Preliminary data had indicated that the critical industry, as far as the impact of new regulatory limits was concerned, would be the nuclear power industry because, it was conjectured, there existed a core of highly skilled workers in some groups which routinely receive higher than average exposures. The objectives of the second study were to get a better understanding of the situation vis-à-vis the nuclear power industry by identifying the high-dose worker groups, quantifying the annual and lifetime doses to these groups to see the extent of the problem if there was one, and finally determining whether there were any dose-reduction techniques particularly suited to reducing doses to these groups. In this presentation we describe some of the things learned during our work on the two projects. For more detailed information on the project on dose-reduction techniques for high-dose worker groups in the nuclear power industry, see NUREG/CR-5139. An industry/advisory committee has been set up which is in the process of evaluating the data from the larger project on the impact of new dose limits and will shortly produce its report. 7 refs., 5 figs., 6 tabs

  3. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal to noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal to noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly from the fact that in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps associated with averaging of mixing ratios obtained from logarithmic retrievals.
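The core bias mechanism is Jensen's inequality: exponentiating the mean of the logarithms gives the geometric mean, which underestimates the arithmetic mean, and the gap grows with variability. A minimal sketch with a synthetic lognormal abundance (illustrative, not the simulator from the study):

```python
import numpy as np

# For a lognormally distributed abundance, averaging the retrieved logarithms
# and exponentiating yields the geometric mean, which underestimates the true
# (arithmetic) mean; the bias grows with natural variability sigma.
rng = np.random.default_rng(1)
biases = []
for sigma in (0.1, 0.5, 1.0):
    x = rng.lognormal(mean=0.0, sigma=sigma, size=100_000)
    linear_avg = x.mean()                     # average the abundances
    log_avg = np.exp(np.log(x).mean())        # average the logarithms, then exp
    bias = 100 * (log_avg - linear_avg) / linear_avg
    biases.append(bias)
    print(f"sigma={sigma}: bias of log-averaging = {bias:.1f}%")
```

For sigma = 1 the underestimate approaches 40%, consistent with the abstract's warning that such biases "can easily reach ten percent or more".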

  4. First Clinical Investigations of New Ultrasound Techniques in Three Patient Groups: Patients with Liver Tumors, Arteriovenous Fistulas, and Arteriosclerotic Femoral Arteries

    DEFF Research Database (Denmark)

    Hansen, Peter Møller

    In this PhD project, two newer ultrasound techniques are used for the first time for clinical scans of patients with malignant liver tumors (Study I), arteriovenous fistulas for hemodialysis (Study II) and arteriosclerotic femoral arteries (Study III). The same commercial ultrasound scanner was used in all three studies. Study I was a comparative study of B-mode ultrasound images obtained with conventional technique and with the experimental technique Synthetic Aperture Sequential Beamforming (SASB), a data-reducing version of the synthetic aperture technique. For all three studies the results are promising, and hopefully the techniques will find their way into everyday clinical practice for the benefit of both patients and healthcare practitioners.

  5. The Pulsair 3000 tonometer--how many readings need to be taken to ensure accuracy of the average?

    Science.gov (United States)

    McCaghrey, G E; Matthews, F E

    2001-07-01

    Manufacturers of non-contact tonometers recommend that a number of readings be taken on each eye and an average obtained. With the Keeler Pulsair 3000, it is advised to take four readings and average these. This report analyses readings in 100 subjects and compares the first reading, and the averages of the first two and first three readings, with the "machine standard" of the average of four readings. It is found that, in the subject group investigated, the average of three readings is not different from the average of four in 95% of individuals, with equivalence defined as ±1.0 mmHg.
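The equivalence criterion can be sketched numerically. This is a hypothetical simulation (true pressures and noise levels are assumed, not taken from the report): for each subject, ask whether the three-reading average falls within ±1.0 mmHg of the four-reading "machine standard".

```python
import numpy as np

# Simulate four IOP readings per subject: a true pressure plus measurement
# noise (both distributions are illustrative assumptions).
rng = np.random.default_rng(3)
true_iop = rng.normal(16, 3, size=(100, 1))            # mmHg
readings = true_iop + rng.normal(0, 1.5, size=(100, 4))

mean3 = readings[:, :3].mean(axis=1)   # average of first three readings
mean4 = readings.mean(axis=1)          # "machine standard": all four
within = np.abs(mean3 - mean4) <= 1.0  # equivalence criterion, +/-1.0 mmHg
print(f"{within.mean():.0%} of subjects within 1.0 mmHg")
```

Note that mean3 − mean4 reduces to (mean3 − reading4)/4, so with modest measurement noise the two averages agree closely for most subjects, matching the report's 95% finding.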

  6. Grade Point Average: Report of the GPA Pilot Project 2013-14

    Science.gov (United States)

    Higher Education Academy, 2015

    2015-01-01

    This report is published as the result of a range of investigations and debates involving many universities and colleges and a series of meetings, presentations, discussions and consultations. Interest in a grade point average (GPA) system was originally initiated by a group of interested universities, progressing to the systematic investigation…

  7. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold. It turns out that the two approximative methods can be derived from natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
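The barycenter approach criticised above can be sketched with unit quaternions (an illustrative example, not the paper's derivation): the arithmetic mean leaves the unit sphere, so an ad hoc re-normalisation is needed before the result represents a rotation again.

```python
import numpy as np

def quat_from_angle(theta):
    # Rotation about the z-axis as a unit quaternion (w, x, y, z).
    return np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])

# Four rotations about the same axis, angles 0..0.3 rad.
quats = np.array([quat_from_angle(t) for t in (0.0, 0.1, 0.2, 0.3)])

barycenter = quats.mean(axis=0)
print(np.linalg.norm(barycenter))    # < 1: the mean left the unit sphere

# Ad hoc correction: re-normalise to get a proper rotation back.
mean_quat = barycenter / np.linalg.norm(barycenter)
angle = 2 * np.arctan2(mean_quat[3], mean_quat[0])
print(angle)                         # ~0.15 rad, the mean rotation angle
```

For this single-axis case the renormalised barycenter coincides with the circular mean of the angles; for widely spread rotations the approximation to the Riemannian mean degrades, which is the situation the paper analyses.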

  8. Applying the nominal group technique in an employment relations conflict situation: A case study of a university maintenance section in South Africa

    Directory of Open Access Journals (Sweden)

    Cornelis (Kees) S. van der Waal

    2009-09-01

    Full Text Available After a breakdown in employment relations in the maintenance section of a higher education institution, the authors were asked to intervene in order to try to resolve the employment relations conflict situation. It was decided to employ the Nominal Group Technique (NGT) as a tool in problem identification during conflict in the workplace. An initial investigation of documentation and interviews with prominent individuals in the organisation was carried out. The NGT was then used in four focus group discussions to determine the important issues as seen by staff members. The NGT facilitates the determination of shared perceptions and the ranking of ideas. The NGT was used in diverse groups, necessitating adaptations to the technique. The perceived causes of the conflict were established. The NGT can be used in a conflict situation in the workplace in order to establish the perceived causes of employment relations conflict.

  9. Clustering economies based on multiple criteria decision making techniques

    Directory of Open Access Journals (Sweden)

    Mansour Momeni

    2011-10-01

    One of the primary concerns in many countries is to determine the important factors affecting economic growth. In this paper, we study factors such as unemployment rate, inflation ratio, population growth, and average annual income to cluster different countries. The proposed model uses the analytical hierarchy process (AHP) to prioritize the criteria and then uses the K-means technique to cluster 59 countries into four groups based on the ranked criteria. The first group includes countries with high standards such as Germany and Japan. The second cluster contains developing countries with relatively good economic growth, such as Saudi Arabia and Iran. The third cluster comprises countries with faster growth rates than those in the second group, such as China, India and Mexico. Finally, the fourth cluster includes countries with relatively very low growth rates, such as Jordan, Mali and Niger.
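    The two-stage pipeline described above (AHP weighting followed by K-means clustering) can be sketched as follows; the criteria count, weights, and data here are hypothetical stand-ins, not the study's actual values.

```python
import numpy as np

# AHP-style priorities scale each criterion, then k-means clusters the
# countries on the weighted criteria. All numbers below are illustrative.

rng = np.random.default_rng(0)
X = rng.random((59, 4))                        # 59 countries x 4 normalized criteria
ahp_weights = np.array([0.4, 0.3, 0.2, 0.1])   # hypothetical AHP priorities
Xw = X * ahp_weights                           # weight criteria before clustering

def kmeans(data, k=4, iters=50, seed=1):
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
        # move each center to the mean of its members (keep empty clusters fixed)
        centers = np.array([data[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(Xw, k=4)
print(sorted(np.bincount(labels, minlength=4).tolist()))
```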

  10. A New Classification Technique in Mobile Robot Navigation

    Directory of Open Access Journals (Sweden)

    Bambang Tutuko

    2011-12-01

    This paper presents a novel pattern recognition algorithm that uses the weightless neural network (WNN) technique. This technique acts as a situation classifier to judge the environment around the mobile robot and make control decisions in navigation. The WNN technique was chosen for its significant advantages over conventional neural networks: it can easily be implemented in hardware using standard RAM, trains faster, and works with small resources. Using a simple classification algorithm, similar data are grouped together, making it possible to attach similar data classes to specific local areas in the robot's environment. This strategy is demonstrated on a simple mobile robot powered by low-cost microcontrollers with 512 bytes of RAM and low-cost sensors. Experimental results show that, as the number of neurons increases, the average environmental recognition rate rises from 87.6% to 98.5%. The WNN technique allows the mobile robot to recognize many different environmental patterns and avoid obstacles in real time. Moreover, with the proposed WNN technique the robot successfully reached its goal in a dynamic environment, compared with a fuzzy logic technique and a logic function, coping with uncertainty in sensor readings and achieving good control performance with a 0.56% error rate in robot speed.
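    A RAM-based weightless classifier of the kind the abstract describes (WiSARD-style) can be sketched in a few lines; the input width, tuple size, and sensor patterns below are illustrative assumptions, not the paper's configuration.

```python
import random

class Discriminator:
    """One RAM discriminator per class: each n-bit tuple of the (shuffled)
    input addresses a tiny RAM that records whether that address was seen."""
    def __init__(self, input_bits, tuple_size, mapping):
        self.tuple_size = tuple_size
        self.mapping = mapping                       # fixed random bit order
        self.rams = [set() for _ in range(input_bits // tuple_size)]

    def _addresses(self, bits):
        shuffled = [bits[i] for i in self.mapping]
        for r in range(len(self.rams)):
            chunk = shuffled[r * self.tuple_size:(r + 1) * self.tuple_size]
            yield r, tuple(chunk)

    def train(self, bits):
        for r, addr in self._addresses(bits):
            self.rams[r].add(addr)                   # write a 1 at this address

    def score(self, bits):
        # number of RAMs that recognize their address (0..n_rams)
        return sum(addr in self.rams[r] for r, addr in self._addresses(bits))

random.seed(0)
INPUT_BITS, TUPLE = 16, 4
mapping = list(range(INPUT_BITS))
random.shuffle(mapping)

wall_left = [1] * 8 + [0] * 8                        # toy binary sensor patterns
wall_right = [0] * 8 + [1] * 8
classes = {"wall_left": Discriminator(INPUT_BITS, TUPLE, mapping),
           "wall_right": Discriminator(INPUT_BITS, TUPLE, mapping)}
classes["wall_left"].train(wall_left)
classes["wall_right"].train(wall_right)

noisy = wall_left[:]                                 # one flipped sensor bit
noisy[0] = 0
best = max(classes, key=lambda c: classes[c].score(noisy))
print(best)
```

    Training is a single pass of RAM writes, which is why such classifiers fit on microcontrollers with a few hundred bytes of memory.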

  11. Modified sandwich vacuum pack technique for temporary closure of abdominal wounds: an African perspective.

    Science.gov (United States)

    van As, A B; Navsaria, P; Numanoglu, A; McCulloch, M

    2007-01-01

    South Africa has very high levels of accidental trauma as well as interpersonal violence. There are more admissions for trauma in South Africa than for any other disease; it can therefore be regarded as the number one disease in the country. Complex abdominal injuries are common and require specific management techniques. The aim is to document our experience with the Modified Sandwich Vacuum Pack technique for temporary closure of abdominal wounds. After a short historical overview, we demonstrate the technique, which we carefully adapted over the last decade into the present Modified Sandwich Vacuum Pack technique. In the last 5 years we used the technique 153 times in 69 patients, five of whom were under the age of 12 years. In the group over 12 years, the most common indications were penetrating injuries (40), abdominal sepsis (28), visceral edema (10), abdominal compartment syndrome (9), abdominal packs (6), and abdominal wall defects (2). In the group under 12 years, two children had posttraumatic liver ruptures and three had liver transplantations. The average cost of the materials used with our technique was ZAR 96 (10.41 euros). In our experience the Modified Sandwich Vacuum Pack technique is an effective, cheap method for dealing with open abdomens in the African setting. A drawback may be the technical expertise required, particularly in centers dealing with low numbers of complex abdominal trauma.

  12. Assessing women's sexuality after cancer therapy: checking assumptions with the focus group technique.

    Science.gov (United States)

    Bruner, D W; Boyd, C P

    1999-12-01

    Cancer and cancer therapies impair sexual health in a multitude of ways. The promotion of sexual health is therefore vital for preserving quality of life and is an integral part of total or holistic cancer management. To provide holistic care, nursing requires research that is meaningful to patients as well as to the profession, so that educational and interventional studies can be developed to promote sexual health and coping. To obtain meaningful research data, instruments that are reliable, valid, and pertinent to patients' needs are required. Several sexual functioning instruments were reviewed for this study and found to be lacking in either a conceptual foundation or psychometric validation. Without a defined conceptual framework, the authors of the instruments must have made certain assumptions regarding what women undergoing cancer therapy experience and what they perceive as important. To check these assumptions before assessing women's sexuality after cancer therapies in a larger study, a pilot study was designed to compare, using the focus group technique, what women experience and perceive as important regarding their sexuality with what is assessed in several currently available research instruments. Based on the focus group findings, current sexual functioning questionnaires may be lacking in pertinent areas of concern for women treated for breast or gynecologic malignancies. Better conceptual foundations may help future questionnaire design. Self-regulation theory may provide an acceptable conceptual framework from which to develop a sexual functioning questionnaire.

  13. Disconnection technique with a bronchial blocker for improving lung deflation: a comparison with a double-lumen tube and bronchial blocker without disconnection.

    Science.gov (United States)

    Yoo, Ji Young; Kim, Dae Hee; Choi, Ho; Kim, Kun; Chae, Yun Jeong; Park, Sung Yong

    2014-08-01

    One-lung ventilation (OLV) is accomplished with a double-lumen tube (DLT) or a bronchial blocker (BB). The authors compared the effectiveness of lung collapse using DLT, BB, and BB with the disconnection technique. Prospective, randomized, blind trial. A university hospital. Fifty-two patients undergoing elective pneumothorax surgery. Patients were assigned randomly to 1 of 3 groups: The DLT group (group 1), the BB group (group 2), and the BB with the disconnection technique group (group 3). The authors modified the disconnection technique in group 3 as follows: (1) turned off the ventilator and opened the adjustable pressure-limiting valve, allowing both lungs to collapse and (2) after loss of the CO2 trace on the capnograph, inflated the blocker cuff and turned on the ventilator, allowing only dependent-lung ventilation. Five and ten minutes after OLV, the degree of lung collapse was assessed by the surgeon, who was blinded to the isolation technique. The quality of lung collapse at 5 and 10 minutes was significantly better in groups 1 and 3 than in group 2. No significant differences were observed for the degree of lung collapse at any time point between groups 1 and 3. The average time for loss of the CO2 trace on the capnograph was 32.3±7.0 seconds in group 3. A BB with spontaneous collapse took longer to deflate and did not provide equivalent surgical exposure to the DLT. The disconnection technique could be helpful to accelerate lung collapse with a BB. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. On the evils of group averaging: commentary on Nevin's "Resistance to extinction and behavioral momentum".

    Science.gov (United States)

    Gallistel, C R

    2012-05-01

    Except under unusually favorable circumstances, one can infer from functions obtained by averaging across the subjects neither the form of the function that describes the behavior of the individual subject nor the central tendencies of descriptive parameter values. We should restore the cumulative record to the place of honor as our means of visualizing behavioral change, and we should base our conclusions on analyses that measure where the change occurs in these response-by-response records of the behavior of individual subjects. When that is done, we may find that the extinction of responding to a continuously reinforced stimulus is faster than the extinction of responding to a partially reinforced stimulus in a within-subject design because the latter is signaled extinction. Copyright © 2012 Elsevier B.V. All rights reserved.
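    Gallistel's caution about group averaging is easy to demonstrate numerically. In this sketch (all numbers assumed), each simulated subject acquires a response abruptly at a different trial, yet the group-average curve rises gradually, a shape that no individual curve exhibits.

```python
import numpy as np

# Averaging across subjects can misrepresent the form of individual learning:
# abrupt individual step functions average into a smooth, gradual group curve.

rng = np.random.default_rng(42)
n_subjects, n_trials = 50, 100
change_points = rng.integers(10, 90, size=n_subjects)

# each row: 0 before the subject's change point, 1 after (abrupt acquisition)
trials = np.arange(n_trials)
individual = (trials[None, :] >= change_points[:, None]).astype(float)

group_mean = individual.mean(axis=0)

# every individual curve jumps 0 -> 1 in a single trial...
max_individual_step = np.abs(np.diff(individual, axis=1)).max()
# ...but the averaged curve creeps upward in small increments
max_group_step = np.abs(np.diff(group_mean)).max()
print(max_individual_step, max_group_step)
```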

  15. Variation estimation of the averaged cross sections in the direct and adjoint fluxes

    International Nuclear Information System (INIS)

    Cardoso, Carlos Eduardo Santos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da

    1995-01-01

    There are several applications of perturbation theory to specific problems of reactor physics, such as nonuniform fuel burnup, nonuniform poison accumulation and the evaluation of Doppler effects on reactivity. The neutron fluxes obtained from the solutions of the direct and adjoint diffusion equations are used in these applications. In the adjoint diffusion equation, the group constants averaged over the energy-dependent direct neutron flux have been used, which is not theoretically consistent. This paper presents a method to calculate the energy-dependent adjoint neutron flux and thereby obtain the averaged group constants to be used in the adjoint diffusion equation. The method is based on the solution of the adjoint neutron balance equations, derived for a two-region cell. (author). 5 refs, 2 figs, 1 tab
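    The flux-weighted group averaging the abstract refers to collapses an energy-dependent cross section to group constants, sigma_g = ∫ sigma(E) phi(E) dE / ∫ phi(E) dE over each group. A minimal sketch, assuming a 1/E weighting spectrum and a toy 1/v-like cross section (neither taken from the paper):

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integral (kept local for NumPy-version portability)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def collapse(E, sigma, phi, group_edges):
    """Collapse a pointwise cross section to flux-weighted group constants."""
    sig_g = []
    for lo, hi in zip(group_edges[:-1], group_edges[1:]):
        m = (E >= lo) & (E < hi)
        sig_g.append(trapz(sigma[m] * phi[m], E[m]) / trapz(phi[m], E[m]))
    return np.array(sig_g)

E = np.logspace(-3, 6, 2000)          # energy grid, eV
phi = 1.0 / E                         # assumed 1/E slowing-down spectrum
sigma = 5.0 + 100.0 / np.sqrt(E)      # toy 1/v-like absorption cross section

edges = np.array([1e-3, 1.0, 1e3, 1e6])   # three coarse groups
sigma_g = collapse(E, sigma, phi, edges)
print(sigma_g)
```

    The paper's point is that the *adjoint* equation should use constants weighted by the energy-dependent adjoint flux rather than reusing the direct-flux-weighted constants above.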

  16. Detection of small traumatic hemorrhages using a computer-generated average human brain CT.

    Science.gov (United States)

    Afzali-Hashemi, Liza; Hazewinkel, Marieke; Tjepkema-Cloostermans, Marleen C; van Putten, Michel J A M; Slump, Cornelis H

    2018-04-01

    Computed tomography is a standard diagnostic imaging technique for patients with traumatic brain injury (TBI). A limitation is the poor-to-moderate sensitivity for small traumatic hemorrhages. A pilot study using an automatic method to detect hemorrhages [Formula: see text] in diameter in patients with TBI is presented. We created an average image from 30 normal noncontrast CT scans that were automatically aligned using deformable image registration as implemented in Elastix software. Subsequently, the average image was aligned to the scans of TBI patients, and the hemorrhages were detected by a voxelwise subtraction of the average image from the CT scans of nine TBI patients. An experienced neuroradiologist and a radiologist in training assessed the presence of hemorrhages in the final images and determined the false positives and false negatives. The nine CT scans contained 67 small hemorrhages, of which 97% were correctly detected by our system. The neuroradiologist detected three false positives, and the radiologist in training found two false positives. For one patient, our method showed a hemorrhagic contusion that was originally missed. Comparing individual CT scans with a computed average may assist physicians in detecting small traumatic hemorrhages in patients with TBI.
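    The voxelwise-subtraction idea can be sketched on toy 2-D "scans"; the image sizes, intensities, and threshold below are assumptions, and real use would require the deformable registration step (Elastix) that the paper describes.

```python
import numpy as np

# Subtract a computed average of aligned normal scans from a patient scan
# and flag bright residuals as candidate hemorrhages. All values are toys.

rng = np.random.default_rng(7)
normals = rng.normal(40.0, 2.0, size=(30, 64, 64))   # 30 aligned normal "scans"
average = normals.mean(axis=0)                        # computed average image

patient = rng.normal(40.0, 2.0, size=(64, 64))
patient[20:24, 30:34] += 35.0                         # small hyperdense lesion

residual = patient - average                          # voxelwise subtraction
candidates = residual > 15.0                          # simple intensity threshold

ys, xs = np.nonzero(candidates)
print(int(candidates.sum()), (ys.min(), ys.max()), (xs.min(), xs.max()))
```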

  17. Improved performance of high average power semiconductor arrays for applications in diode pumped solid state lasers

    International Nuclear Information System (INIS)

    Beach, R.; Emanuel, M.; Benett, W.; Freitas, B.; Ciarlo, D.; Carlson, N.; Sutton, S.; Skidmore, J.; Solarz, R.

    1994-01-01

    The average power performance capability of semiconductor diode laser arrays has improved dramatically over the past several years. These performance improvements, combined with cost reductions pursued by LLNL and others in the fabrication and packaging of diode lasers, have continued to reduce the price per average watt of laser diode radiation. Presently, we are at the point where the manufacturers of commercial high average power solid state laser systems used in material processing applications can now seriously consider the replacement of their flashlamp pumps with laser diode pump sources. Additionally, a low cost technique developed and demonstrated at LLNL for optically conditioning the output radiation of diode laser arrays has enabled a new and scalable average power diode-end-pumping architecture that can be simply implemented in diode pumped solid state laser systems (DPSSLs). This development allows the high average power DPSSL designer to look beyond the Nd ion for the first time. Along with high average power DPSSLs which are appropriate for material processing applications, low and intermediate average power DPSSLs are now realizable at low enough costs to be attractive for use in many medical, electronic, and lithographic applications

  18. Proton transport properties of poly(aspartic acid) with different average molecular weights

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, Yuki, E-mail: ynagao@kuchem.kyoto-u.ac.j [Department of Mechanical Systems and Design, Graduate School of Engineering, Tohoku University, 6-6-01 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Imai, Yuzuru [Institute of Development, Aging and Cancer (IDAC), Tohoku University, 4-1 Seiryo-cho, Aoba-ku, Sendai 980-8575 (Japan); Matsui, Jun [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan); Ogawa, Tomoyuki [Department of Electronic Engineering, Graduate School of Engineering, Tohoku University, 6-6-05 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Miyashita, Tokuji [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan)

    2011-04-15

    Research highlights: Seven polymers with different average molecular weights were synthesized. The proton conductivity depended on the number-average degree of polymerization. The difference in proton conductivities was more than one order of magnitude. The number-average molecular weight contributed to the stability of the polymer. - Abstract: We synthesized seven partially protonated poly(aspartic acids)/sodium polyaspartates (P-Asp) with different average molecular weights to study their proton transport properties. The number-average degree of polymerization (DP) for each P-Asp was 30 (P-Asp30), 115 (P-Asp115), 140 (P-Asp140), 160 (P-Asp160), 185 (P-Asp185), 205 (P-Asp205), and 250 (P-Asp250). The proton conductivity depended on the number-average DP. The maximum and minimum proton conductivities under a relative humidity of 70% and 298 K were 1.7 × 10^-3 S cm^-1 (P-Asp140) and 4.6 × 10^-4 S cm^-1 (P-Asp250), respectively. Differential thermogravimetric analysis (TG-DTA) was carried out for each P-Asp. The results fell into two categories: one group exhibited two endothermic peaks between 270 °C and 300 °C, the other only one peak. The P-Asp group with two endothermic peaks exhibited high proton conductivity. The high proton conductivity is related to the stability of the polymer. The number-average molecular weight also contributed to the stability of the polymer.

  19. Comparisons between different techniques for measuring mass segregation

    Science.gov (United States)

    Parker, Richard J.; Goodwin, Simon P.

    2015-06-01

    We examine the performance of four different methods which are used to measure mass segregation in star-forming regions: the radial variation of the mass function, M_MF; the minimum spanning tree-based ΛMSR method; the local surface density ΣLDR method; and the ΩGSR technique, which isolates groups of stars and determines whether the most massive star in each group is more centrally concentrated than the average star. All four methods have been proposed in the literature as techniques for quantifying mass segregation, yet they routinely produce contradictory results because they do not all measure the same thing. We apply each method to synthetic star-forming regions to determine when and why they have shortcomings. When a star-forming region is smooth and centrally concentrated, all four methods correctly identify mass segregation when it is present. However, if the region is spatially substructured, the ΩGSR method fails because it arbitrarily defines groups in the hierarchical distribution, and usually discards positional information for many of the most massive stars in the region. We also show that the ΛMSR and ΣLDR methods can sometimes produce apparently contradictory results, because they use different definitions of mass segregation. We conclude that only ΛMSR measures mass segregation in the classical sense (without the need for defining the centre of the region), although ΣLDR does place limits on the amount of previous dynamical evolution in a star-forming region.
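    The ΛMSR diagnostic mentioned above compares the minimum spanning tree (MST) length of the most massive stars with the average MST length of random samples of the same size. A sketch on synthetic 2-D positions (all values assumed; segregation is mimicked by shrinking the "massive" star positions toward the centre):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_length(points):
    d = squareform(pdist(points))          # dense pairwise distance matrix
    return minimum_spanning_tree(d).sum()  # total edge length of the MST

rng = np.random.default_rng(3)
n_stars, n_massive = 300, 10

stars = rng.normal(0.0, 1.0, size=(n_stars, 2))   # a smooth synthetic cluster
massive = stars[:n_massive] * 0.3                 # centrally concentrated subset

l_massive = mst_length(massive)
random_lengths = [mst_length(stars[rng.choice(n_stars, n_massive, replace=False)])
                  for _ in range(200)]
lambda_msr = np.mean(random_lengths) / l_massive  # > 1 indicates segregation
print(round(float(lambda_msr), 2))
```

    Note that no cluster centre is ever defined, which is the property the authors single out as ΛMSR's advantage.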

  20. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  1. Occlusal consequence of using average condylar guidance settings: An in vitro study.

    Science.gov (United States)

    Lee, Wonsup; Lim, Young-Jun; Kim, Myung-Joo; Kwon, Ho-Beom

    2017-04-01

    A simplified mounting technique that adopts an average condylar guidance has been advocated. Despite this, the experimental explanation of how average settings differ from individual condylar guidance remains unclear. The purpose of this in vitro study was to examine potential occlusal error by using average condylar guidance settings during nonworking side movement of the articulator. Three-dimensional positions of the nonworking side maxillary first molar at various condylar and incisal settings were traced using a laser displacement sensor attached to the motorized stages with biaxial freedom of movement. To examine clinically relevant occlusal consequences of condylar guidance setting errors, the vertical occlusal error was defined as the vertical-axis positional difference between the average setting trace and the other condylar guidance setting trace. In addition, the respective contribution of the condylar and incisal guidance to the position of the maxillary first molar area was analyzed by multiple regression analysis using the resultant coordinate data. Alteration from individual to average settings led to a positional difference in the maxillary first molar nonworking side movement. When the individual setting was lower than average, vertical occlusal error occurred, which might cause occlusal interference. The vertical occlusal error ranged from -2964 to 1711 μm. In addition, the occlusal effect of incisal guidance was measured as a partial regression coefficient of 0.882, which exceeded the effect of condylar guidance, 0.431. Potential occlusal error as a result of adopting an average condylar guidance setting was observed. The occlusal effect of incisal guidance doubled the effect of condylar guidance. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  2. Lie groups for pedestrians

    CERN Document Server

    Lipkin, Harry J

    2002-01-01

    According to the author of this concise, high-level study, physicists often shy away from group theory, perhaps because they are unsure which parts of the subject belong to the physicist and which belong to the mathematician. However, it is possible for physicists to understand and use many techniques which have a group-theoretical basis without necessarily understanding all of group theory. This book is designed to familiarize physicists with those techniques. Specifically, the author aims to show how the well-known methods of angular momentum algebra can be extended to treat other Lie groups.

  3. A Comparative Study of Clear Corneal Phacoemulsification with Rigid IOL Versus SICS; the Preferred Surgical Technique in Low Socio-economic group Patients of Rural Areas.

    Science.gov (United States)

    Devendra, Jaya; Agarwal, Smita; Singh, Pankaj Kumar

    2014-11-01

    Low socio-economic group patients from rural areas often opt for free cataract surgeries offered by charitable organisations. SICS continues to be a time-tested technique for cataract removal in such patients. In recent times, camp patients are sometimes treated by clear corneal phacoemulsification with implantation of a rigid IOL, which, being more cost effective, is often provided for camp patients. This study was undertaken to find out which surgical technique yielded better outcomes and was more suited to high-volume camp surgery. The aim was to find the better surgical option, phacoemulsification with rigid IOL or SICS, in poor patients from rural areas. A prospective randomised controlled trial of cataract patients operated on by two different techniques was conducted. One hundred and twelve eyes were selected and randomly allocated into two groups of 56 eyes each. At completion of the study, data were analysed for 52 eyes operated on by clear corneal phacoemulsification with implantation of a rigid IOL, and 56 eyes operated on by SICS. The unpaired t-test was used to calculate the p-value. The results were evaluated on the following criteria. The mean post-operative astigmatism at the end of four weeks was significantly higher in the phacoemulsification group than in the SICS group. The BCVA (best corrected visual acuity) at the end of four weeks was comparable in both groups. Subjective complaints and/or complications: in the phaco group two patients required sutures and seven had striate keratitis, while none did in the SICS group. Complaints of irritation were similar in both groups. Surgical time was less for the SICS group than for the phaco group. SICS, by virtue of being a faster surgery with a more secure wound and significantly less astigmatism, is a better option in camp patients from rural areas than phacoemulsification with rigid IOL.

  4. Larynx-sparing techniques using intensity-modulated radiation therapy for oropharyngeal cancer

    Energy Technology Data Exchange (ETDEWEB)

    Bar Ad, Voichita, E-mail: voichita.bar-ad@jeffersonhospital.org [Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA (United States); Lin, Haibo [Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA (United States); Hwang, Wei-Ting [Department of Biostatistics and Epidemiology, University of Pennsylvania, Philadelphia, PA (United States); Deville, Curtiland [Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA (United States); Dutta, Pinaki R. [Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA (United States); Department of Biostatistics and Epidemiology, University of Pennsylvania, Philadelphia, PA (United States); Tochner, Zelig; Both, Stefan [Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA (United States)

    2012-01-01

    The purpose of the current study was to explore whether the laryngeal dose can be reduced by using 2 intensity-modulated radiation therapy (IMRT) techniques: the whole-neck field IMRT technique (WF-IMRT) vs. junctioned IMRT (J-IMRT). The effect on planning target volume (PTV) coverage and laryngeal sparing was evaluated. The WF-IMRT technique consisted of a single IMRT plan including the primary tumor and the superior and inferior neck to the level of the clavicular heads. The larynx was defined as an organ at risk extending superiorly to cover the arytenoid cartilages and inferiorly to include the cricoid cartilage. The J-IMRT technique consisted of an IMRT plan for the primary tumor and the superior neck, matched to conventional antero-posterior opposing lower neck fields at the level of the thyroid notch. A central block was used for the anterior lower neck field at the level of the larynx to restrict the dose to the larynx. Ten oropharyngeal cancer cases were analyzed. Both the primary site and bilateral regional lymphatics were included in the radiotherapy targets. The averaged V95 for the PTV57.6 was 99.2% for the WF-IMRT technique compared with 97.4% (p = 0.02) for J-IMRT. The averaged V95 for the PTV64 was 99.9% for WF-IMRT compared with 98.9% (p = 0.02) for J-IMRT, and the averaged V95 for the PTV70 was 100.0% for WF-IMRT compared with 99.5% (p = 0.04) for J-IMRT. The averaged mean laryngeal dose was 18 Gy with both techniques. The averaged mean doses within the matchline volumes were 69.3 Gy for WF-IMRT and 66.2 Gy for J-IMRT (p = 0.03). The WF-IMRT technique appears to offer optimal coverage of the target volumes and a mean dose to the larynx similar to that of J-IMRT, and should be further evaluated in clinical trials.

  5. Medical students benefit from the use of ultrasound when learning peripheral IV techniques.

    Science.gov (United States)

    Osborn, Scott R; Borhart, Joelle; Antonis, Michael S

    2012-03-06

    Recent studies support high success rates after a short learning period of the ultrasound IV technique, and increased patient and provider satisfaction when using ultrasound as an adjunct to peripheral IV placement. No study to date has addressed the efficacy of instructing ultrasound-naive providers. We studied the introduction of ultrasound into the teaching of peripheral IV insertion to first- and second-year medical students. This was a prospective, randomized, controlled trial. A total of 69 medical students were randomly assigned to the control group with a classic, landmark-based approach (n = 36) or the real-time ultrasound-guided group (n = 33). Both groups observed a 20-min tutorial on IV placement using both techniques and then attempted vein cannulation. Students were given a survey to report their results and observations on a 10-cm visual analog scale. The survey response rate was 100%. Across the two groups, 73.9% stated that they had attempted an IV previously, and 63.7% of students had used an ultrasound machine prior to the study. None had used ultrasound for IV access before our session. The average number of attempts at cannulation was 1.42 in either group, with no difference between the control and ultrasound groups (p = 0.31). In both groups, 66.7% of learners were able to cannulate in one attempt, 21.7% in two attempts, and 11.6% in three attempts. The study group commented that they felt they gained more knowledge from the experience. Medical students feel they learn more when using ultrasound after a 20-min tutorial to place IVs, and cannulation of the vein feels easier. Success rates are comparable between the traditional and ultrasound teaching approaches.

  6. Comparative Analysis of Market Volatility in Indian Banking and IT Sectors by using Average Decline Model

    Directory of Open Access Journals (Sweden)

    Kirti AREKAR

    2017-12-01

    Stock market volatility depends on three major features, complete volatility, volatility fluctuations, and volatility attention, which are calculated by statistical techniques. This paper presents a comparative analysis of market volatility for two major indices, the banking and IT sectors on the Bombay Stock Exchange (BSE), using the average decline model. The average decline process in volatility has been used after very high and very low stock returns. The results of this study show a significant decline in volatility fluctuations, attention, and level between the periods before and after particularly high stock returns.

  7. Characterization of acid functional groups of carbon dots by nonlinear regression data fitting of potentiometric titration curves

    Science.gov (United States)

    Alves, Larissa A.; de Castro, Arthur H.; de Mendonça, Fernanda G.; de Mesquita, João P.

    2016-05-01

    The oxygenated functional groups present on the surface of carbon dots with an average size of 2.7 ± 0.5 nm were characterized by a variety of techniques. In particular, we discuss the fitting of potentiometric titration curve data using a nonlinear regression method based on the Levenberg-Marquardt algorithm. Statistical treatment of the titration curve data showed that the best fit was obtained by considering the presence of five Brønsted-Lowry acids on the surface of the carbon dots, with ionization constants characteristic of carboxylic acid, cyclic ester, phenolic and pyrone-like groups. The total number of oxygenated acid groups obtained was 5 mmol g-1, with approximately 65% (∼2.9 mmol g-1) originating from the more strongly acidic groups, independent of the titrated volume and initial concentration of the HCl solution. Finally, we believe that the methodology used here, together with other characterization techniques, is a simple, fast and powerful tool for characterizing the complex acid-base properties of these interesting and intriguing nanoparticles.
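    The fitting approach the abstract names, nonlinear least squares with Levenberg-Marquardt, can be sketched with SciPy, whose `curve_fit` uses LM by default for unbounded problems. The two-group Henderson-Hasselbalch model and all numbers below are illustrative assumptions, not the paper's five-group model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Model the titratable acid content as a sum of Henderson-Hasselbalch terms,
# one per acid population, and fit pKa values and group contents with LM.

def titration_model(pH, n1, pka1, n2, pka2):
    """Deprotonated acid content (mmol/g) vs pH for two acid populations."""
    return (n1 / (1.0 + 10.0 ** (pka1 - pH)) +
            n2 / (1.0 + 10.0 ** (pka2 - pH)))

rng = np.random.default_rng(5)
pH = np.linspace(2.0, 11.0, 60)
true = (2.9, 4.5, 2.1, 9.0)                    # carboxylic-like + phenolic-like
data = titration_model(pH, *true) + rng.normal(0.0, 0.02, pH.size)

popt, pcov = curve_fit(titration_model, pH, data, p0=(2.0, 4.0, 2.0, 9.5))
print(np.round(popt, 2))
```

    With well-separated pKa values the fit recovers both group contents and both constants; overlapping populations are what make the five-group problem in the paper statistically delicate.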

  8. Carving Technique – Methodical Perspectives

    Directory of Open Access Journals (Sweden)

    Adela BADAU

    2015-09-01

    Alpine skiing has undergone major changes and adjustments due both to technological innovations in materials and to updates of theoretical and methodological concepts at all levels of specific training. Purpose: the introduction of technological innovation in materials specific to carving skiing calls for a review of methodology, aiming to bring execution technique to superior indices in order to obtain positive results. The study took place in Poiana Brasov between December 2014 and March 2015, on an 800 m long slope, and comprised a single experimental group of four males and four females, cadet category, who carried out two lessons per day. The tests targeted the technique level for slalom and giant slalom skiing against four criteria: leg work, basin movement, torso position and arm work. As a result of the research and of the statistical-mathematical analysis of the individual values, the giant slalom race registered an average improvement of 3.5 points between the tests, while the slalom race registered 4 points. In conclusion, the use of a scientifically applied specific methodology, which aims to select the most efficient means of action specific to children's skiing, determines technical improvement at an advanced level.

  9. Blooming Trees: Substructures and Surrounding Groups of Galaxy Clusters

    Science.gov (United States)

    Yu, Heng; Diaferio, Antonaldo; Serra, Ana Laura; Baldi, Marco

    2018-06-01

    We develop the Blooming Tree Algorithm, a new technique that uses spectroscopic redshift data alone to identify the substructures and the surrounding groups of galaxy clusters, along with their member galaxies. Based on the estimated binding energy of galaxy pairs, the algorithm builds a binary tree that hierarchically arranges all of the galaxies in the field of view. The algorithm searches for buds, corresponding to gravitational potential minima on the binary tree branches; for each bud, the algorithm combines the number of galaxies, their velocity dispersion, and their average pairwise distance into a parameter that discriminates between the buds that do not correspond to any substructure or group, and thus eventually die, and the buds that correspond to substructures and groups, and thus bloom into the identified structures. We test our new algorithm with a sample of 300 mock redshift surveys of clusters in different dynamical states; the clusters are extracted from a large cosmological N-body simulation of a ΛCDM model. We limit our analysis to substructures and surrounding groups identified in the simulation with mass larger than 10^13 h^-1 M_⊙. With mock redshift surveys with 200 galaxies within 6 h^-1 Mpc from the cluster center, the technique recovers 80% of the real substructures and 60% of the surrounding groups; in 57% of the identified structures, at least 60% of the member galaxies of the substructures and groups belong to the same real structure. These results improve by roughly a factor of two the performance of the best substructure identification algorithm currently available, the σ plateau algorithm, and suggest that our Blooming Tree Algorithm can be an invaluable tool for detecting substructures of galaxy clusters and investigating their complex dynamics.
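    A toy version of the core idea, arranging galaxies in a hierarchical binary tree built from pairwise binding-energy estimates, can be sketched as follows. The energy formula, units, G·m constant, and cut level are assumptions for illustration, not the paper's calibration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Build a binary tree from a pairwise "binding energy" computed from projected
# separation and line-of-sight velocity difference, then cut it into groups.

rng = np.random.default_rng(11)

# two compact mock "substructures" plus a loose background in (x, y, v_los)
groups = [rng.normal([0, 0, 0], [0.1, 0.1, 50], (20, 3)),
          rng.normal([2, 2, 300], [0.1, 0.1, 50], (20, 3)),
          rng.uniform([-3, -3, -500], [5, 5, 800], (20, 3))]
gal = np.vstack(groups)

n = len(gal)
Gm = 1.0                                     # assumed G * mass scale
pair_energy = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        r = np.hypot(*(gal[i, :2] - gal[j, :2])) + 1e-3
        dv2 = (gal[i, 2] - gal[j, 2]) ** 2
        e = 0.5 * dv2 - Gm / r               # kinetic minus potential term
        pair_energy[i, j] = pair_energy[j, i] = e

# lower energy = more bound; shift to a nonnegative "distance" for linkage
dist = pair_energy - pair_energy.min()
condensed = dist[np.triu_indices(n, 1)]
tree = linkage(condensed, method="average")  # hierarchical binary tree
labels = fcluster(tree, t=2, criterion="maxclust")
print(np.bincount(labels)[1:])
```

    The paper's actual algorithm walks this tree for "buds" (potential minima) and tests each with a richness/velocity-dispersion/pairwise-distance parameter, rather than applying a fixed cut as here.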

  10. A comparative In vivo efficacy of three spiral techniques versus incremental technique in obturating primary teeth

    Directory of Open Access Journals (Sweden)

    Shalini Chandrasekhar

    2018-01-01

    Full Text Available Background: The aim of this study was to evaluate the efficiency of four different obturating techniques in filling the radicular space in primary teeth. Materials and Methods: This clinical trial was carried out on 34 healthy, cooperative children (5–9 years) who had 63 carious primary teeth indicated for pulpectomy. The teeth were divided into four groups, such that in each group a total of 40 canals were allotted for obturation with the respective technique. The root canals of the selected primary teeth were filled with Endoflas obturating material using either a bi-directional spiral (Group 1), incremental technique (Group 2), past inject (Group 3), or lentulo spiral (Group 4), according to the groups assigned. The effectiveness of the obturation techniques was assessed using postoperative radiographs. The assessment covered the depth of fill in the canal and the presence of any voids, using the Modified Coll and Sadrian criteria. The obtained data were analyzed using the ANOVA test and the unpaired t-test. Results: The bi-directional spiral and lentulo spiral were superior to the other techniques in providing optimally filled canals (P < 0.05). The bi-directional spiral was superior to the lentulo spiral in preventing overfill (P < 0.05). Conclusion: Based on the present study results, the bi-directional spiral can be recommended as an alternative obturating technique in primary teeth.

  11. Recommendations of the FAO/IAEA advisory group meeting on improving the productivity of indigenous animals in harsh environments with the aid of nuclear techniques

    International Nuclear Information System (INIS)

    1986-01-01

    The purpose of the FAO/IAEA advisory group meeting was to evaluate the nuclear and related techniques currently used to quantify such functions as animal adaptation, digestion and utilization of poor quality feedstuffs, reproductive efficiency and resistance to disease and other forms of stress. The recommendations made by the advisory group are grouped into five sections: reproduction, parasitic diseases, infectious diseases, environmental physiology and nutrition

  12. The use of nominal group technique in identifying community health priorities in Moshi rural district, northern Tanzania

    DEFF Research Database (Denmark)

    Makundi, E A; Manongi, R; Mushi, A K

    2005-01-01

    in the list implying that priorities should not only be focused on diseases, but should also include health services and social cultural issues. Indeed, methods which are easily understood and applied thus able to give results close to those provided by the burden of disease approaches should be adopted....... The patients/caregivers, women's group representatives, youth leaders, religious leaders and community leaders/elders constituted the principal subjects. Emphasis was on providing qualitative data, which are of vital consideration in multi-disciplinary oriented studies, and not on quantitative information from....... It is the provision of ownership of the derived health priorities to partners including the community that enhances research utilization of the end results. In addition to disease-based methods, the Nominal Group Technique is being proposed as an important research tool for involving the non-experts in priority...

  13. The relationship between limit of Dysphagia and average volume per swallow in patients with Parkinson's disease.

    Science.gov (United States)

    Belo, Luciana Rodrigues; Gomes, Nathália Angelina Costa; Coriolano, Maria das Graças Wanderley de Sales; de Souza, Elizabete Santos; Moura, Danielle Albuquerque Alves; Asano, Amdore Guescel; Lins, Otávio Gomes

    2014-08-01

    The goal of this study was to obtain the limit of dysphagia and the average volume per swallow in patients with mild to moderate Parkinson's disease (PD) but without swallowing complaints, and in normal subjects, and to investigate the relationship between them. We hypothesized a direct relationship between these two measurements. The study included 10 patients with idiopathic PD and 10 age-matched normal controls. Surface electromyography was recorded over the suprahyoid muscle group. The limit of dysphagia was obtained by offering increasing volumes of water until piecemeal deglutition occurred. The average volume per swallow was calculated by dividing 100 ml of water by the number of swallows used to drink it. The PD group showed a significantly lower dysphagia limit and lower average volume per swallow. There was a significant, moderate direct correlation and association between the two measurements. About half of the PD patients had an abnormally low dysphagia limit and average volume per swallow, although none had spontaneously reported swallowing problems. Both measurements may be used as a quick objective screening test for the early identification of swallowing alterations that may lead to dysphagia in PD patients, but determining the average volume per swallow is much quicker and simpler.
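
    The screening measurement reduces to simple arithmetic. A minimal sketch with hypothetical numbers (the counts and times below are illustrative, not data from the study):

```python
# Toy screening computation: a subject drinks 100 ml of water while swallows
# are counted (e.g. from suprahyoid sEMG bursts) and the task is timed.
total_volume_ml = 100.0
n_swallows = 8        # hypothetical swallow count
task_time_s = 9.6     # hypothetical stopwatch time for the whole 100 ml

avg_volume_per_swallow = total_volume_ml / n_swallows  # ml per swallow
avg_time_per_swallow = task_time_s / n_swallows        # s per swallow
```

    An abnormally low average volume per swallow (here 12.5 ml) would flag the subject for fuller dysphagia assessment.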

  14. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    Science.gov (United States)

    Gao, Peng

    2018-04-01

    This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a system of multiscale stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out, and that this limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.
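
    The slow-fast structure described above can be written schematically as follows; the symbols are generic illustrations of an averaging-principle setup, not the paper's exact equations:

```latex
% Schematic slow-fast system: X^\epsilon is the slow component,
% Y^\epsilon the fast one evolving on the time scale 1/\epsilon.
\begin{aligned}
  dX^{\epsilon}_t &= \bigl[A X^{\epsilon}_t + f(X^{\epsilon}_t, Y^{\epsilon}_t)\bigr]\,dt,\\
  dY^{\epsilon}_t &= \frac{1}{\epsilon}\,B Y^{\epsilon}_t\,dt
                     + \frac{1}{\sqrt{\epsilon}}\,dW_t,\\
  d\bar{X}_t &= \bigl[A \bar{X}_t + \bar{f}(\bar{X}_t)\bigr]\,dt,
  \qquad \bar{f}(x) := \int f(x,y)\,\mu(dy),
\end{aligned}
```

    where μ is the stationary measure of the fast process: the averaged drift f̄ is exactly the "average with respect to the stationary measure" referred to in the abstract, and the Khasminskii time-discretization technique yields the strong convergence rate of X^ε to X̄.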

  15. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    Science.gov (United States)

    Gao, Peng

    2018-06-01

    This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a system of multiscale stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out, and that this limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.

  16. Phase-rectified signal averaging method to predict perinatal outcome in infants with very preterm fetal growth restriction- a secondary analysis of TRUFFLE-trial

    NARCIS (Netherlands)

    Lobmaier, Silvia M.; Mensing van Charante, Nico; Ferrazzi, Enrico; Giussani, Dino A.; Shaw, Caroline J.; Müller, Alexander; Ortiz, Javier U.; Ostermayer, Eva; Haller, Bernhard; Prefumo, Federico; Frusca, Tiziana; Hecher, Kurt; Arabin, Birgit; Thilaganathan, Baskaran; Papageorghiou, Aris T.; Bhide, Amarnath; Martinelli, Pasquale; Duvekot, Johannes J.; van Eyck, Jim; Visser, Gerard H A; Schmidt, Georg; Ganzevoort, Wessel; Lees, Christoph C.; Schneider, Karl T M; Bilardo, Caterina M.; Brezinka, Christoph; Diemert, Anke; Derks, Jan B.; Schlembach, Dietmar; Todros, Tullia; Valcamonico, Adriana; Marlow, Neil; van Wassenaer-Leemhuis, Aleid

    2016-01-01

    Background: Phase-rectified signal averaging, an innovative signal processing technique, can be used to investigate quasi-periodic oscillations in noisy, nonstationary signals that are obtained from fetal heart rate. Phase-rectified signal averaging is currently the best method to predict survival

  17. Phase-rectified signal averaging method to predict perinatal outcome in infants with very preterm fetal growth restriction- a secondary analysis of TRUFFLE-trial

    NARCIS (Netherlands)

    Lobmaier, Silvia M.; Mensing van Charante, Nico; Ferrazzi, Enrico; Giussani, Dino A.; Shaw, Caroline J.; Müller, Alexander; Ortiz, Javier U.; Ostermayer, Eva; Haller, Bernhard; Prefumo, Federico; Frusca, Tiziana; Hecher, Kurt; Arabin, Birgit; Thilaganathan, Baskaran; Papageorghiou, Aris T.; Bhide, Amarnath; Martinelli, Pasquale; Duvekot, Johannes J.; van Eyck, Jim; Visser, Gerard H. A.; Schmidt, Georg; Ganzevoort, Wessel; Lees, Christoph C.; Schneider, Karl T. M.; Bilardo, Caterina M.; Brezinka, Christoph; Diemert, Anke; Derks, Jan B.; Schlembach, Dietmar; Todros, Tullia; Valcamonico, Adriana; Marlow, Neil; van Wassenaer-Leemhuis, Aleid

    2016-01-01

    Phase-rectified signal averaging, an innovative signal processing technique, can be used to investigate quasi-periodic oscillations in noisy, nonstationary signals that are obtained from fetal heart rate. Phase-rectified signal averaging is currently the best method to predict survival after

  18. Phase-mapping technique for the evaluation of aortic valve stenosis by MR

    International Nuclear Information System (INIS)

    Engels, G.; Mueller, E.; Reynen, K.; Wilke, N.; Bachmann, K.

    1992-01-01

    New MR techniques for quantitative blood flow registration, such as phase-mapping (a two-dimensional space-resolved technique with time-averaged measurement of blood flow) and RACE (real-time acquisition and evaluation of blood flow in a one-dimensional space projection), are available for the diagnosis of valvular heart disease. Initial results of grading aortic valve stenosis by these methods are shown in comparison to continuous-wave Doppler ultrasound. Two groups of 15 patients were examined by RACE or phase-mapping, 12 and 8 of whom, respectively, suffered from aortic valve stenosis. Both the shape of the blood flow profiles and the grading of aortic valve stenosis show high concordance between the MR and Doppler techniques. Good reliability and practicability of the demonstrated MR method are shown. With respect to the results of RACE and phase-mapping, the development of an alternative, competing MR method for the evaluation of valvular heart disease and for shunt diagnostics seems possible. (orig.)

  19. Phase-mapping technique for the evaluation of aortic valve stenosis by MR

    Energy Technology Data Exchange (ETDEWEB)

    Engels, G. [Medizinische Klinik 2, Univ. of Erlangen (Germany); Mueller, E. [Siemens Medical Engineering Group, Erlangen (Germany); Reynen, K. [Medizinische Klinik 2, Univ. of Erlangen (Germany); Wilke, N. [Siemens Medical Engineering Group, Erlangen (Germany); Bachmann, K. [Medizinische Klinik 2, Univ. of Erlangen (Germany)

    1992-08-01

    New MR techniques for quantitative blood flow registration, such as phase-mapping (a two-dimensional space-resolved technique with time-averaged measurement of blood flow) and RACE (real-time acquisition and evaluation of blood flow in a one-dimensional space projection), are available for the diagnosis of valvular heart disease. Initial results of grading aortic valve stenosis by these methods are shown in comparison to continuous-wave Doppler ultrasound. Two groups of 15 patients were examined by RACE or phase-mapping, 12 and 8 of whom, respectively, suffered from aortic valve stenosis. Both the shape of the blood flow profiles and the grading of aortic valve stenosis show high concordance between the MR and Doppler techniques. Good reliability and practicability of the demonstrated MR method are shown. With respect to the results of RACE and phase-mapping, the development of an alternative, competing MR method for the evaluation of valvular heart disease and for shunt diagnostics seems possible. (orig.)

  20. [COMPARISON OF EFFECTIVENESS BETWEEN TWO OPERATIVE TECHNIQUES OF CORACOCLAVICULAR LIGAMENT RECONSTRUCTION FOR TREATMENT OF Tossy TYPE III ACROMIOCLAVICULAR JOINT DISLOCATION].

    Science.gov (United States)

    Tang, Hongwei; Gao, Sheng; Yin, Yong; Li, Yunfei; Han, Qingtian; Li, Huizhang

    2015-11-01

    To evaluate and compare the effectiveness of the double Endobutton technique and suture anchor combined with Endobutton plate in the treatment of Tossy type III acromioclavicular joint dislocation. Between May 2010 and March 2014, a retrospective study was performed on 56 patients with Tossy type III acromioclavicular joint dislocation. The coracoclavicular ligament was reconstructed with the double Endobutton technique in 31 cases (Endobutton group), and with suture anchor combined with Endobutton plate in 25 cases (Anchor group). There was no significant difference in age, gender, injury causes, injury side, associated injury, medical comorbidities, and disease duration between the 2 groups (P>0.05). The operation time, medical device expenses, postoperative complications, preoperative and postoperative Constant-Murley scores, and postoperative Karlsson grading of the injured shoulder were compared between the 2 groups. The average operation time in the Endobutton group was significantly greater than that in the Anchor group (t = 4.285, P = 0.000); there was no significant difference in the medical device expenses between the 2 groups (t = 1.555, P = 0.126). Primary healing of the incision was obtained in all patients of both groups; no early complications of infection or skin necrosis occurred. All patients were followed up for 15.6 months on average (range, 11-35 months). During follow-up, some loss of reduction and ectopic ossification in the coracoclavicular gap were observed in 1 and 6 cases of the Endobutton group, respectively. No recurrence of acromioclavicular joint dislocation, implant loosening or breakage, or secondary fractures occurred in the other patients. There was a significant difference in the incidence of postoperative complications between the 2 groups (P = 0.013). Constant-Murley scores of the injured shoulder significantly increased at 9 months after operation when compared with preoperative values in both groups (P < 0.05). At last follow-up, there was no significant difference in

  1. Semi-analytical wave functions in relativistic average atom model for high-temperature plasmas

    International Nuclear Information System (INIS)

    Guo Yonghui; Duan Yaoyong; Kuai Bin

    2007-01-01

    A semi-analytical method is utilized for solving a relativistic average atom model for high-temperature plasmas. Semi-analytical wave functions and the corresponding energy eigenvalues, containing only a numerical factor, are obtained by fitting the potential function of the average atom to a hydrogen-like one. The full equations for the model are enumerated, and particular attention is paid to the detailed procedures, including the numerical techniques and computer code design. When the temperature of the plasma is comparatively high, the semi-analytical results agree quite well with those obtained using a full numerical method for the same model and with those calculated from slightly different physical models, and the accuracy and computational efficiency of the results are noteworthy. The drawbacks of this model are also analyzed. (authors)

  2. Mindfulness for group facilitation

    DEFF Research Database (Denmark)

    Adriansen, Hanne Kirstine; Krohn, Simon

    2014-01-01

    In this paper, we argue that mindfulness techniques can be used for enhancing the outcome of group performance. The word mindfulness has different connotations in the academic literature. Broadly speaking there is ‘mindfulness without meditation’ or ‘Western’ mindfulness which involves active...... thinking and ‘Eastern’ mindfulness which refers to an open, accepting state of mind, as intended with Buddhist-inspired techniques such as meditation. In this paper, we are interested in the latter type of mindfulness and demonstrate how Eastern mindfulness techniques can be used as a tool for facilitation....... A brief introduction to the physiology and philosophy of Eastern mindfulness constitutes the basis for the arguments of the effect of mindfulness techniques. The use of mindfulness techniques for group facilitation is novel as it changes the focus from individuals’ mindfulness practice...

  3. Verification of intravenous catheter placement by auscultation--a simple, noninvasive technique.

    Science.gov (United States)

    Lehavi, Amit; Rudich, Utay; Schechtman, Moshe; Katz, Yeshayahu Shai

    2014-01-01

    Verification of proper placement of an intravenous catheter may not always be simple. We evaluated the auscultation technique for this purpose. Twenty healthy volunteers were randomized to have an 18G catheter inserted intravenously in either the right (12) or the left (8) arm, and subcutaneously in the opposite arm. A standard stethoscope was placed over an area approximately 3 cm proximal to the tip of the catheter, in the presumed direction of the vein, to grade on a 0-6 scale the murmur heard on rapidly injecting 2 mL of NaCl 0.9% solution. The auscultation was evaluated by a blinded staff anesthesiologist. All 20 intravenous injections were evaluated as flow murmurs and were graded an average of 5.65 (±0.98), whereas all 20 subcutaneous injections were evaluated as either crackles or no sound and were graded an average of 2.00 (±1.38), without negative results. Sensitivity was calculated as 95%. Specificity and kappa could not be calculated due to an empty false-positive group. As it is simple, handy, and noninvasive, we recommend using the auscultation technique to verify the proper placement of an intravenous catheter when uncertain of its position.

  4. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  5. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and it also plays an important role in determining many physical quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacing by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution of reduced neutron widths. A method that tests both the distribution of level widths and that of level positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables
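
    The width-truncation correction mentioned above can be sketched numerically. Assuming, as is standard, that reduced neutron widths follow a Porter-Thomas distribution (chi-squared with one degree of freedom), the fraction of levels missed below a detection threshold follows from its CDF; the resonance energies and threshold below are illustrative toy values, not evaluated data:

```python
import numpy as np
from scipy.stats import chi2

# Toy observed resonance energies (eV) and a detection threshold on reduced
# widths, expressed in units of the average width.
energies = np.array([21.0, 118.0, 205.0, 311.0, 402.0, 517.0])
threshold = 0.05  # widths below 0.05 * <width> are assumed to be missed

# Naive spacing from the observed (incomplete) level sequence
d_obs = (energies[-1] - energies[0]) / (len(energies) - 1)

# Porter-Thomas: reduced widths ~ chi-squared with 1 degree of freedom, so
# the missed fraction is simply the CDF evaluated at the threshold.
missed_fraction = chi2.cdf(threshold, df=1)

# Correct the average spacing for the unseen weak levels
d_true = d_obs * (1.0 - missed_fraction)
```

    Even a threshold of 5% of the average width hides roughly 18% of the levels under Porter-Thomas statistics, which is why fits to the truncated distribution, rather than raw level counts, are used in practice.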

  6. A systematic comparison of motion artifact correction techniques for functional near-infrared spectroscopy.

    Science.gov (United States)

    Cooper, Robert J; Selb, Juliette; Gagnon, Louis; Phillip, Dorte; Schytz, Henrik W; Iversen, Helle K; Ashina, Messoud; Boas, David A

    2012-01-01

    Near-infrared spectroscopy (NIRS) is susceptible to signal artifacts caused by relative motion between NIRS optical fibers and the scalp. These artifacts can be very damaging to the utility of functional NIRS, particularly in challenging subject groups where motion can be unavoidable. A number of approaches to the removal of motion artifacts from NIRS data have been suggested. In this paper we systematically compare the utility of a variety of published NIRS motion correction techniques using a simulated functional activation signal added to 20 real NIRS datasets that contain motion artifacts. Principal component analysis, spline interpolation, wavelet analysis, and Kalman filtering approaches are compared to one another and to standard approaches using the accuracy of the recovered, simulated hemodynamic response function (HRF). Each of the four motion correction techniques we tested yields a significant reduction in the mean-squared error (MSE) and significant increase in the contrast-to-noise ratio (CNR) of the recovered HRF when compared to no correction and compared to a process of rejecting motion-contaminated trials. Spline interpolation produces the largest average reduction in MSE (55%) while wavelet analysis produces the highest average increase in CNR (39%). On the basis of this analysis, we recommend the routine application of motion correction techniques (particularly spline interpolation or wavelet analysis) to minimize the impact of motion artifacts on functional NIRS data.
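
    The spline approach can be sketched as follows. This is a simplified illustration of the idea (fit a smooth trend to a flagged motion segment and subtract it), not the exact published algorithm, which also handles artifact detection and segment rescaling:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Toy 10 Hz NIRS channel: slow hemodynamic oscillation plus sensor noise,
# with a baseline shift from fiber motion injected into one flagged segment.
rng = np.random.default_rng(1)
t = np.arange(0, 30, 0.1)  # seconds
signal = 0.5 * np.sin(2 * np.pi * 0.1 * t) + 0.05 * rng.normal(size=t.size)
corrupted = signal.copy()
art = slice(100, 150)                      # flagged motion segment, 10-15 s
corrupted[art] += np.linspace(0.0, 4.0, 50)  # drifting baseline shift

# Fit a smoothing spline to the flagged segment and subtract its drift
trend = UnivariateSpline(t[art], corrupted[art], s=float(len(t[art])))(t[art])
corrected = corrupted.copy()
corrected[art] -= trend - trend[0]

err_before = np.mean((corrupted - signal) ** 2)
err_after = np.mean((corrected - signal) ** 2)
```

    The smoothing factor `s` controls how much of the segment's variance is attributed to the artifact; too small a value would also subtract genuine hemodynamics, which is one reason the published comparison evaluates recovered-HRF accuracy rather than raw residuals.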

  7. Average blood flow and oxygen uptake in the human brain during resting wakefulness

    DEFF Research Database (Denmark)

    Madsen, P L; Holm, S; Herning, M

    1993-01-01

    tracer between the brain and its venous blood is not reached. As a consequence, normal values for CBF and CMRO2 of 54 ml 100 g-1 min-1 and 3.5 ml 100 g-1 min-1 obtained with the Kety-Schmidt technique are an overestimation of the true values. Using the Kety-Schmidt technique we have performed 57...... the measured data, we find that the true average values for CBF and CMRO2 in the healthy young adult are approximately 46 ml 100 g-1 min-1 and approximately 3.0 ml 100 g-1 min-1. Previous studies have suggested that some of the variation in CMRO2 values could be ascribed to differences in cerebral venous...

  8. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  9. Analysis of compound parabolic concentrators and aperture averaging to mitigate fading on free-space optical links

    Science.gov (United States)

    Wasiczko, Linda M.; Smolyaninov, Igor I.; Davis, Christopher C.

    2004-01-01

    Free space optics (FSO) is one solution to the bandwidth bottleneck resulting from increased demand for broadband access. It is well known that atmospheric turbulence distorts the wavefront of a laser beam propagating through the atmosphere. This research investigates methods of reducing the effects of intensity scintillation and beam wander on the performance of free space optical communication systems, by characterizing system enhancement using either aperture averaging techniques or nonimaging optics. Compound Parabolic Concentrators, nonimaging optics made famous by Winston and Welford, are inexpensive elements that may be easily integrated into intensity modulation-direct detection receivers to reduce fading caused by beam wander and spot breakup in the focal plane. Aperture averaging provides a methodology to show the improvement of a given receiver aperture diameter in averaging out the optical scintillations over the received wavefront.
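
    For weak turbulence, the benefit of a larger receiver aperture is often quantified with the aperture-averaging factor, the ratio of aperture-averaged to point-receiver scintillation. The sketch below uses the widely cited plane-wave approximation from the Andrews and Phillips literature; treat the formula and the link parameters as illustrative assumptions rather than this paper's own model:

```python
import numpy as np

# Plane-wave, weak-turbulence aperture-averaging approximation:
#   A = sigma_I^2(D) / sigma_I^2(0) = [1 + 1.062 * k * D^2 / (4 L)]^(-7/6)
# where D is the aperture diameter, L the path length, k the wavenumber.
def aperture_averaging_factor(D_m, wavelength_m, L_m):
    k = 2.0 * np.pi / wavelength_m  # optical wavenumber, rad/m
    return (1.0 + 1.062 * k * D_m**2 / (4.0 * L_m)) ** (-7.0 / 6.0)

# Example: 1550 nm link over 1 km, near-point receiver vs 10 cm aperture
A_small = aperture_averaging_factor(1e-3, 1550e-9, 1000.0)
A_large = aperture_averaging_factor(0.10, 1550e-9, 1000.0)
```

    A factor well below one for the 10 cm aperture reflects the fading mitigation the abstract describes: intensity speckles smaller than the aperture are averaged out before detection.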

  10. On the use of the Lie group technique for differential equations with a small parameter: Approximate solutions and integrable equations

    International Nuclear Information System (INIS)

    Burde, G.I.

    2002-01-01

    A new approach to the use of the Lie group technique for partial and ordinary differential equations dependent on a small parameter is developed. In addition to determining approximate solutions to the perturbed equation, the approach allows constructing integrable equations that have solutions with (partially) prescribed features. Examples of application of the approach to partial differential equations are given

  11. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    The multi-model ensemble (MME) average is considered the most reliable approach for simulating both present-day and future climates. It has been a primary reference for drawing conclusions in major coordinated studies, i.e., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes with tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: multi-model ensemble, ensemble analysis, ERF, regional climate modeling
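
    The core of the ERF idea, averaging the driving fields before a single RCM run, can be sketched with arrays standing in for GCM boundary conditions. Everything below (the field, grid, and error model) is a hypothetical toy, not CORDEX data:

```python
import numpy as np

# Toy "GCM" boundary-condition fields: a shared large-scale temperature
# signal plus model-specific errors on a lat-lon grid. ERF averages the
# fields first and would then drive the RCM once with the result.
rng = np.random.default_rng(42)
n_gcms, nlat, nlon = 6, 45, 90

truth = 280.0 + 10.0 * np.cos(np.linspace(-np.pi / 2, np.pi / 2, nlat))[:, None]
gcm_fields = np.stack(
    [truth + rng.normal(0.0, 2.0, (nlat, nlon)) for _ in range(n_gcms)]
)

erf_forcing = gcm_fields.mean(axis=0)  # single averaged IBC for one RCM run

# Model errors partially cancel: the averaged forcing is closer to the truth
rmse_single = np.sqrt(np.mean((gcm_fields[0] - truth) ** 2))
rmse_erf = np.sqrt(np.mean((erf_forcing - truth) ** 2))
```

    With independent errors the averaged forcing's RMSE drops roughly as 1/sqrt(N of GCMs), which is the bias cancellation the abstract attributes to "more realistic IBCs", obtained here at the cost of one RCM run instead of six.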

  12. Tomographic reconstruction of the time-averaged density distribution in two-phase flow

    International Nuclear Information System (INIS)

    Fincke, J.R.

    1982-01-01

    The technique of reconstructive tomography has been applied to the measurement of the time-averaged density and density distribution in a two-phase flow field, providing a model-independent method of obtaining flow-field density information. A tomographic densitometer system for the measurement of two-phase flow faces two unique problems: a limited number of data values and a correspondingly coarse reconstruction grid. These problems were studied both experimentally, through the use of prototype hardware on a 3-in. pipe, and analytically, through computer generation of simulated data. The prototype data were taken on phantoms constructed either entirely of Plexiglas or of Plexiglas laminated with wood and polyurethane foam. Reconstructions obtained from the prototype data are compared with reconstructions from the simulated data. Also presented are some representative results for a horizontal air/water flow.
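
    A minimal sketch of the reconstruction step is given below, using the classical cyclic Kaczmarz iteration (ART), a standard choice for the few ray paths and coarse grids of densitometer tomography. The ray-weight matrix is a random stand-in for real chord lengths through the pixel grid, and the data are noiseless; this illustrates the method class, not the paper's instrument:

```python
import numpy as np

# Toy setup: reconstruct a 4x4 density grid from line-integral measurements.
rng = np.random.default_rng(3)
n_pix, n_rays = 16, 40

density = np.zeros((4, 4))
density[1:3, 1:3] = 1.0          # toy "two-phase" dense core in the pipe
x_true = density.ravel()

W = rng.random((n_rays, n_pix))  # stand-in ray-length weight matrix
p = W @ x_true                   # noiseless line-integral data

# Cyclic Kaczmarz (ART): project onto each ray's hyperplane in turn
x = np.zeros(n_pix)
for _ in range(1000):            # sweeps over all rays
    for i in range(n_rays):
        w = W[i]
        x += (p[i] - w @ x) / (w @ w) * w

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

    With fewer rays than pixels, as in the actual limited-data problem, the same iteration converges to the minimum-norm consistent image instead of the true one, which is why the coarse grid and ray layout matter so much in practice.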

  13. Phase-Averaged Method Applied to Periodic Flow Between Shrouded Corotating Disks

    Directory of Open Access Journals (Sweden)

    Shen-Chun Wu

    2003-01-01

    Full Text Available This study investigates the coherent flow fields between corotating disks in a cylindrical enclosure. By using two laser velocimeters and a phase-averaged technique, the vortical structures of the flow could be reconstructed and their dynamic behavior was observed. The experimental results reveal clearly that the flow field between the disks is composed of three distinct regions: an inner region near the hub, an outer region, and a shroud boundary layer region. The outer region is distinguished by the presence of large vortical structures. The number of vortical structures corresponds to the normalized frequency of the flow.
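
    Phase averaging itself is straightforward to sketch: bin velocity samples by the phase of a rotation reference and average within each bin, so that the phase-locked vortical component emerges from the turbulent fluctuations. The signal below is synthetic (a three-lobed coherent component plus noise), purely for illustration:

```python
import numpy as np

# Toy phase-averaging of a velocity signal locked to disk rotation phase.
rng = np.random.default_rng(7)
n = 20000
phase = rng.uniform(0, 2 * np.pi, n)  # reference phase at each sample
u = 1.0 + 0.3 * np.sin(3 * phase) + 0.2 * rng.normal(size=n)  # 3 "vortices" + noise

nbins = 36
bins = (phase / (2 * np.pi) * nbins).astype(int)
u_phase = np.array([u[bins == b].mean() for b in range(nbins)])  # <u | phase>

# The phase-averaged profile should recover the 3-lobed coherent component
centers = (np.arange(nbins) + 0.5) * 2 * np.pi / nbins
coherent = 1.0 + 0.3 * np.sin(3 * centers)
max_dev = np.abs(u_phase - coherent).max()
```

    Averaging within bins suppresses the incoherent turbulence by roughly the square root of the samples per bin, which is how the study's two-velocimeter measurements can reconstruct the vortical structures between the disks.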

  14. The comparison between limited open carpal tunnel release using direct vision and tunneling technique and standard open carpal tunnel release: a randomized controlled trial study.

    Science.gov (United States)

    Suppaphol, Sorasak; Worathanarat, Patarawan; Kawinwongkovit, Viroj; Pittayawutwinit, Preecha

    2012-04-01

    To compare the operative outcome of carpal tunnel release between limited open carpal tunnel release using direct vision and a tunneling technique (group A) and standard open carpal tunnel release (group B). Twenty-eight patients were enrolled in the present study. A single-blind randomized controlled trial was conducted to compare the postoperative results of groups A and B. The study parameters were Levine's symptom severity and functional scores, grip and pinch strength, and average two-point discrimination. The postoperative results of the two groups were comparable, with no statistically significant differences. Only grip strength at three months' follow-up was significantly greater in group A than in group B. The limited open carpal tunnel release in the present study is comparably effective to standard open carpal tunnel release. Other advantages of this technique are better cosmesis and improved grip strength in the three-month postoperative period.

  15. Novel Techniques to Characterize Pore Size of Porous Materials

    KAUST Repository

    Alabdulghani, Ali J.

    2016-01-01

    Porous materials are implemented in several industrial applications, such as water desalination, gas separation, and pharmaceutical care, that are mainly governed by the pore size and the pore size distribution (PSD). The analysis of shale reservoirs is not excluded from these applications, and numerous advantages can be gained by evaluating the PSD of a given shale reservoir. Because of the limitations of conventional characterization techniques, novel methods for characterizing the PSD have to be proposed in order to obtain better characterization results for porous materials in general and shale rocks in particular. Thus, permporosimetry and evapoporometry (EP) technologies were introduced, designed, and utilized for evaluating the two key parameters, pore size and pore size distribution. The pore size and PSD profiles of different shale samples from Norway and Argentina were analyzed using these technologies and then confirmed by mercury intrusion porosimetry (MIP). Using permporosimetry and EP respectively, the Norway samples showed average pore diameters of 12.94 nm and 19.22 nm, and the Argentina samples 13.77 nm and 23.23 nm. Both techniques are therefore indicative of the heterogeneity of the shales. The results from permporosimetry are in good agreement with those obtained from the MIP technique, but EP for the most part overestimates the average pore size. The divergence of the EP results from the permporosimetry results is attributed to the fact that the latter technique measures only the active pores, which is not the case with the former. Overall, the two techniques are complementary to each other; their results seem reasonable and reliable, and they provide two simple ways to estimate the pore size and pore size distribution of shale rocks.

  17. High average power diode pumped solid state lasers for CALIOPE

    International Nuclear Information System (INIS)

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers

  18. Time-averaged second-order pressure and velocity measurements in a pressurized oscillating flow prime mover

    Energy Technology Data Exchange (ETDEWEB)

    Paridaens, Richard [DynFluid, Arts et Metiers, 151 boulevard de l' Hopital, Paris (France); Kouidri, Smaine [LIMSI-CNRS, Orsay Cedex (France)

    2016-11-15

    Nonlinear phenomena in oscillating flow devices cause the appearance of a relatively minor secondary flow known as acoustic streaming, which is superimposed on the primary oscillating flow. Knowledge of control parameters, such as the time-averaged second-order velocity and pressure, would elucidate the non-linear phenomena responsible for this part of the decrease in the system's energetic efficiency. This paper focuses on the characterization of a travelling wave oscillating flow engine by measuring the time-averaged second-order pressure and velocity. A laser Doppler velocimetry technique was used to measure the time-averaged second-order velocity. As streaming is a second-order phenomenon, its measurement requires specific settings, especially in a pressurized device. Difficulties in obtaining the proper settings are highlighted in this study. The experiments were performed for mean pressures varying from 10 bars to 22 bars. The non-linear effect does not increase monotonically with pressure.

  19. Averaged null energy condition and difference inequalities in quantum field theory

    International Nuclear Information System (INIS)

    Yurtsever, U.

    1995-01-01

    For a large class of quantum states, all local (pointwise) energy conditions widely used in relativity are violated by the renormalized stress-energy tensor of a quantum field. In contrast, certain nonlocal positivity constraints on the quantum stress-energy tensor might hold quite generally, and this possibility has received considerable attention in recent years. In particular, it is now known that the averaged null energy condition, the condition that the null-null component of the stress-energy tensor integrated along a complete null geodesic is non-negative for all states, holds quite generally in a wide class of spacetimes for a minimally coupled scalar field. Apart from the specific class of spacetimes considered (mainly two-dimensional spacetimes and four-dimensional Minkowski space), the most significant restriction on this result is that the null geodesic over which the average is taken must be achronal. Recently, Ford and Roman have explored this restriction in two-dimensional flat spacetime, and discovered that in a flat cylindrical space, although the stress energy tensor itself fails to satisfy the averaged null energy condition (ANEC) along the (nonachronal) null geodesics, when the ''Casimir-vacuum'' contribution is subtracted from the stress-energy the resulting tensor does satisfy the ANEC inequality. Ford and Roman name this class of constraints on the quantum stress-energy tensor ''difference inequalities.'' Here I give a proof of the difference inequality for a minimally coupled massless scalar field in an arbitrary (globally hyperbolic) two-dimensional spacetime, using the same techniques as those we relied on to prove the ANEC in an earlier paper with Wald. I begin with an overview of averaged energy conditions in quantum field theory
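    In symbols, for a complete null geodesic with tangent $k^\mu$ and affine parameter $\lambda$, the ANEC discussed above reads

```latex
\int_{-\infty}^{\infty} \langle \psi | T_{\mu\nu} | \psi \rangle \, k^{\mu} k^{\nu} \, d\lambda \;\geq\; 0 ,
```

    while the Ford-Roman difference inequality replaces the expectation value by its Casimir-vacuum-subtracted counterpart:

```latex
\int_{-\infty}^{\infty} \bigl( \langle T_{\mu\nu} \rangle_{\psi} - \langle T_{\mu\nu} \rangle_{\mathrm{Casimir}} \bigr) \, k^{\mu} k^{\nu} \, d\lambda \;\geq\; 0 .
```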

  20. Actor groups, related needs, and challenges at the climate downscaling interface

    Science.gov (United States)

    Rössler, Ole; Benestad, Rasmus; Diamando, Vlachogannis; Heike, Hübener; Kanamaru, Hideki; Pagé, Christian; Margarida Cardoso, Rita; Soares, Pedro; Maraun, Douglas; Kreienkamp, Frank; Christodoulides, Paul; Fischer, Andreas; Szabo, Peter

    2016-04-01

    At the climate downscaling interface, numerous downscaling techniques and different philosophies compete to be the best method, each on its own terms. It remains unclear to what extent, and for which purposes, these downscaling techniques are valid or even the most appropriate choice. A common validation framework comparing all the available methods has been missing so far. The VALUE initiative closes this gap with such a common validation framework. An essential part of a validation framework for downscaling techniques is the definition of appropriate validation measures. The selection of validation measures should consider the needs of the stakeholder: some might need a temporal or spatial average of a certain variable, others might need temporal or spatial distributions of some variables, still others might need extremes of the variables of interest or even inter-variable dependencies. Hence, a close interaction between climate data providers and climate data users is necessary, and the challenge of formulating a common validation framework mirrors the challenges between the climate data providers and the impact assessment community. This poster elaborates on the issues and challenges at the downscaling interface as seen within the VALUE community. It identifies three different actor groups: one group consisting of the climate data providers, the other two being climate data users (impact modellers and societal users). The downscaling interface thus faces classical transdisciplinary challenges. We depict a graphical illustration of the actors involved and their interactions. In addition, we identify four types of issues that need to be considered: data-based, knowledge-based, communication-based, and structural issues. Any of these may, individually or jointly, hinder an optimal exchange of data and information between the actor groups at the downscaling interface. Finally, some possible ways to tackle these issues are

  1. U.S.-GERMAN BILATERAL WORKING GROUP: International Research Cooperation to Develop and Evaluate Tools and Techniques for Revitalization of Potentially Contaminated Sites

    Science.gov (United States)

    The U.S. German Bilateral Working Group originated in 1990 in order to share and transfer information, ideas, tools and techniques regarding environmental research. The U.S. Environmental Protection Agency (EPA)/Office of Research and Development (ORD) and the German Federal Mini...

  2. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...

  3. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
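    The abstract does not give the algorithm's actual rejection and pruning rules; the sketch below is a minimal stand-in for the two ideas described: reject whole maps dominated by voids, then prune per-pixel outliers before averaging. The thresholds and the MAD-based outlier test are assumptions, not the authors' criteria.

```python
import numpy as np

def robust_phase_average(maps, min_valid_frac=0.9, sigma_clip=3.0):
    """Average a stack of phase maps (NaN marks voids/unwrapping failures).
    Maps dominated by invalid pixels are rejected outright; surviving maps
    have per-pixel outliers pruned (robust MAD test) before the final
    pixelwise mean and standard deviation are computed."""
    stack = np.array(maps, dtype=float)
    valid_frac = np.mean(~np.isnan(stack), axis=(1, 2))
    stack = stack[valid_frac >= min_valid_frac]          # reject defective maps
    med = np.nanmedian(stack, axis=0)
    mad = np.nanmedian(np.abs(stack - med), axis=0) + 1e-12
    outlier = np.abs(stack - med) > sigma_clip * 1.4826 * mad
    stack[outlier] = np.nan                              # prune unreliable pixels
    return np.nanmean(stack, axis=0), np.nanstd(stack, axis=0)
```

    The factor 1.4826 rescales the median absolute deviation to a standard-deviation equivalent, so `sigma_clip` acts like a conventional sigma threshold.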

  4. Determination of Viscosity-Average Molecular Weight of Chitosan using Intrinsic Viscosity Measurement

    International Nuclear Information System (INIS)

    Norzita Yacob; Norhashidah Talip; Maznah Mahmud

    2011-01-01

    Molecular weight of chitosan can be determined by different techniques such as Gel Permeation Chromatography (GPC), Static Light Scattering (SLS) and intrinsic viscosity measurement. Determination of molecular weight by intrinsic viscosity measurement is a simple method for characterization of chitosan. Different concentrations of chitosan were prepared and measurement was done at room temperature. The flow time data was used to calculate the intrinsic viscosity by extrapolating the reduced viscosity to zero concentration. The value of intrinsic viscosity was then recalculated into the viscosity-average molecular weight using Mark-Houwink equation. (author)
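    The recalculation step can be made concrete. Inverting the Mark-Houwink equation [eta] = K*M^a gives M = ([eta]/K)^(1/a); the constants below are literature-style placeholder values for chitosan in one particular solvent system, not the study's own, and must be matched to the actual solvent and temperature.

```python
def mark_houwink_mw(intrinsic_viscosity_ml_g, K=1.81e-3, a=0.93):
    """Viscosity-average molecular weight from intrinsic viscosity [eta]
    via Mark-Houwink: [eta] = K * M**a  =>  M = ([eta]/K)**(1/a).
    K (mL/g) and a are placeholders in the range reported for chitosan in
    acetate buffer systems; use constants matched to the actual solvent."""
    return (intrinsic_viscosity_ml_g / K) ** (1.0 / a)
```

    With these constants, an intrinsic viscosity of 1000 mL/g corresponds to a viscosity-average molecular weight on the order of 1.5 million g/mol.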

  5. Removing the Influence of Shimmer in the Calculation of Harmonics-To-Noise Ratios Using Ensemble-Averages in Voice Signals

    Directory of Open Access Journals (Sweden)

    Carlos Ferrer

    2009-01-01

    Harmonics-to-noise ratios (HNRs) are affected by general aperiodicity in voiced speech signals. To specifically reflect a signal-to-additive-noise ratio, the measurement should be insensitive to other periodicity perturbations, like jitter, shimmer, and waveform variability. The ensemble averaging technique is a time-domain method which has been gradually refined in terms of its sensitivity to jitter and waveform variability and the required number of pulses. In this paper, shimmer is introduced into the model of the ensemble average, and a formula is derived which allows the reduction of shimmer effects in HNR calculation. The validity of the technique is evaluated using synthetically shimmered signals, and the prerequisites (glottal pulse positions and amplitudes) are obtained by means of fully automated methods. The results demonstrate the feasibility and usefulness of the correction.
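    A minimal sketch of the ensemble-averaging idea, with a simple per-pulse amplitude normalization standing in for the paper's derived shimmer correction (the actual formula is not given in the abstract):

```python
import numpy as np

def ensemble_hnr(pulses, normalize_amplitude=True):
    """HNR (dB) of equal-length glottal pulses via ensemble averaging.
    The ensemble mean estimates the harmonic part; the residual estimates
    the additive noise. Per-pulse amplitude normalization reduces the bias
    that shimmer would otherwise add to the noise estimate."""
    x = np.asarray(pulses, dtype=float)              # shape: (n_pulses, pulse_len)
    if normalize_amplitude:
        x = x / np.sqrt(np.mean(x**2, axis=1, keepdims=True))  # unit RMS per pulse
    mean_pulse = x.mean(axis=0)
    residual = x - mean_pulse
    n = x.shape[0]
    p_signal = np.mean(mean_pulse**2)
    # n/(n-1) compensates the residual's underestimate of per-pulse noise power
    p_noise = np.mean(residual**2) * n / (n - 1)
    return 10.0 * np.log10(p_signal / p_noise)
```

    Without the normalization, pulse-to-pulse amplitude variation (shimmer) inflates the residual and depresses the estimated HNR, which is precisely the bias the paper's correction targets.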

  6. An efficient nonlinear relaxation technique for the three-dimensional, Reynolds-averaged Navier-Stokes equations

    Science.gov (United States)

    Edwards, Jack R.; Mcrae, D. S.

    1993-01-01

    An efficient implicit method for the computation of steady, three-dimensional, compressible Navier-Stokes flowfields is presented. A nonlinear iteration strategy based on planar Gauss-Seidel sweeps is used to drive the solution toward a steady state, with approximate factorization errors within a crossflow plane reduced by the application of a quasi-Newton technique. A hybrid discretization approach is employed, with flux-vector splitting utilized in the streamwise direction and central differences with artificial dissipation used for the transverse fluxes. Convergence histories and comparisons with experimental data are presented for several 3-D shock-boundary layer interactions. Both laminar and turbulent cases are considered, with turbulent closure provided by a modification of the Baldwin-Barth one-equation model. For the problems considered (175,000-325,000 mesh points), the algorithm provides steady-state convergence in 900-2000 CPU seconds on a single processor of a Cray Y-MP.
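    The authors' nonlinear planar Gauss-Seidel strategy is beyond a short sketch, but the underlying relaxation idea can be illustrated on the simplest model problem, Laplace's equation on a 2D grid, where each sweep updates points using the most recently computed neighbour values:

```python
import numpy as np

def gauss_seidel_laplace(u, n_sweeps=500):
    """In-place Gauss-Seidel sweeps for Laplace's equation on a 2D grid with
    fixed boundary values: each interior point is relaxed to the average of
    its four neighbours, immediately reusing already-updated values."""
    for _ in range(n_sweeps):
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
    return u

# Hot top edge, cold remaining edges: the converged interior is the
# discrete harmonic interpolant of the boundary data.
grid = np.zeros((20, 20))
grid[0, :] = 1.0
gauss_seidel_laplace(grid)
```

    Reusing updated values within a sweep is what distinguishes Gauss-Seidel from Jacobi iteration and roughly halves the number of sweeps needed on this model problem.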

  7. Hierarchical encoding makes individuals in a group seem more attractive.

    Science.gov (United States)

    Walker, Drew; Vul, Edward

    2014-01-01

    In the research reported here, we found evidence of the cheerleader effect: people seem more attractive in a group than in isolation. We propose that this effect arises via an interplay of three cognitive phenomena: (a) the visual system automatically computes ensemble representations of faces presented in a group, (b) individual members of the group are biased toward this ensemble average, and (c) average faces are attractive. Taken together, these phenomena suggest that individual faces will seem more attractive when presented in a group because they will appear more similar to the average group face, which is more attractive than group members' individual faces. We tested this hypothesis in five experiments in which subjects rated the attractiveness of faces presented either alone or in a group of the same gender. Our results were consistent with the cheerleader effect.

  8. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de

  9. Comparative study between the effects of isolated manual therapy techniques and those associated with low level laser therapy on pain in patients with temporomandibular dysfunction

    Directory of Open Access Journals (Sweden)

    Juliana Cristina Frare

    2008-01-01

    Objective: This study sought to evaluate the pain condition in patients with temporomandibular dysfunction after applying manual therapy techniques, alone and in association with low level laser therapy. Methods: The study involved 20 patients with temporomandibular dysfunction, divided randomly into two groups: G1 (n = 10; 7 women and 3 men; average age 28.2 ± 7 years), treated with manual therapy techniques, and G2 (n = 10; 8 women and 2 men; average age 24.01 ± 6.04 years), treated with the combination of manual therapy techniques and low level laser therapy. The patients were treated three times a week for four consecutive weeks. The protocol of manual therapy techniques based on Chaintow, Makofsky and Bienfaint was used. For low level laser therapy, a GaAs laser (904 nm, 6 J/cm2, 0.38 mW/cm2) was used, applied at 4 pre-auricular points. To analyze the pain level, the visual analog pain scale was used. For data analysis, the Student's t and Wilcoxon tests were used, both with a significance level of 5% (p < 0.05). Results: There was a significant reduction (p < 0.05) in the level of pain in both treated groups, with greater significance in G2. Conclusion: Manual therapy techniques, either alone or associated with low level laser therapy, showed satisfactory results for pain control in patients with temporomandibular dysfunction.

  10. Structure of two-phase air-water flows. Study of average void fraction and flow patterns

    International Nuclear Information System (INIS)

    Roumy, R.

    1969-01-01

    This report deals with experimental work on a two-phase air-water mixture in vertical tubes of different diameters. The average void fraction was measured in a 2 metre long test section by means of quick-closing valves. Using resistive probes and photographic techniques, we determined the flow patterns and developed diagrams indicating the boundaries between the various patterns: independent bubbles, agglomerated bubbles, slugs, semi-annular, annular. In the case of bubble flow and slug flow, it is shown that the relationship between the average void fraction and the superficial velocities of the phases is given by V_sg = f(alpha) * g(V_sl), where alpha denotes the average void fraction. For independent bubbles, the function g(V_sl) was found to be g(V_sl) = V_sl + 20. For semi-annular and annular flow conditions, it appears that the average void fraction depends, to a first approximation, only on the ratio V_sg/V_sl. (author) [fr

  11. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    Science.gov (United States)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherent narrow-band performance, the problem is exacerbated by modeling errors and manufacturing tolerances, so a design methodology was needed to solve the problem. An iterative design procedure was developed, starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by a full-wave method-of-moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced into the aperture distribution by modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, the array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. A separable Taylor distribution with nbar=4 and a 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E
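    The Monte Carlo step can be sketched generically. The example below perturbs a low-sidelobe taper (a Hann window as a stand-in for the Taylor distribution) on a 32-element linear array with random amplitude and phase errors, and averages the resulting peak sidelobe level outside an assumed main-beam region; the array size, error magnitudes, and angular mask are illustrative, not the paper's values.

```python
import numpy as np

def array_pattern(weights, d=0.5, theta=np.linspace(-90.0, 90.0, 721)):
    """Normalized array factor (dB) of a linear array with element
    spacing d in wavelengths, sampled at angles theta (degrees)."""
    n = np.arange(len(weights))
    u = np.sin(np.deg2rad(theta))
    af = np.abs(weights @ np.exp(2j * np.pi * d * np.outer(n, u)))
    return theta, 20.0 * np.log10(af / af.max() + 1e-12)

rng = np.random.default_rng(1)
nominal = np.hanning(32)                 # stand-in low-sidelobe taper
peak_sll = []
for _ in range(200):                     # Monte Carlo over tolerances
    amp_err = 1 + 0.03 * rng.standard_normal(32)        # 3% amplitude error
    ph_err = np.deg2rad(2.0) * rng.standard_normal(32)  # 2 deg phase error
    w = nominal * amp_err * np.exp(1j * ph_err)
    theta, patt = array_pattern(w)
    peak_sll.append(patt[np.abs(theta) > 15.0].max())   # outside assumed main beam
avg_sll = np.mean(peak_sll)              # tolerance-degraded average peak SLL, dB
```

    Comparing `avg_sll` against the error-free pattern quantifies the sidelobe margin consumed by tolerances, which is the design-margin check the iteration above relies on.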

  12. Coarse-mesh discretized low-order quasi-diffusion equations for subregion averaged scalar fluxes

    International Nuclear Information System (INIS)

    Anistratov, D. Y.

    2004-01-01

    In this paper we develop a homogenization procedure and discretization for the low-order quasi-diffusion equations on coarse grids for core-level reactor calculations. The system of discretized equations of the proposed method is formulated in terms of the subregion-averaged group scalar fluxes. The coarse-mesh solution is consistent with a given fine-mesh discretization of the transport equation in the sense that it preserves a set of average values of the fine-mesh transport scalar flux over subregions of coarse-mesh cells, as well as the surface currents and eigenvalue. The developed method generates a numerical solution that mimics the large-scale behavior of the transport solution within assemblies. (authors)

  13. The Turn the Tables Technique (T[cube]): A Program Activity to Provide Group Facilitators Insight into Teen Sexual Behaviors and Beliefs

    Science.gov (United States)

    Sclafane, Jamie Heather; Merves, Marni Loiacono; Rivera, Angelic; Long, Laura; Wilson, Ken; Bauman, Laurie J.

    2012-01-01

    The Turn the Tables Technique (T[cube]) is an activity designed to provide group facilitators who lead HIV/STI prevention and sexual health promotion programs with detailed and current information on teenagers' sexual behaviors and beliefs. This information can be used throughout a program to tailor content. Included is a detailed lesson plan of…

  14. The effects of sex and grade-point average on emotional intelligence.

    Science.gov (United States)

    Tapia, Martha; Marsh, George E

    2006-01-01

    This study was conducted to examine the effects of sex and grade-point average (GPA) on the emotional intelligence of secondary students as measured by the Emotional Intelligence Inventory (EII). The EII is a 41-item Likert scale based on the original theoretical model of emotional intelligence developed by Salovey and Mayer. An exploratory factor analysis identified four factors, which were named Empathy, Utilization of Feelings, Handling Relationships, and Self-control. The sample consisted of 319 students, 162 males and 157 females, who attended a bilingual (English and Spanish) college preparatory school. General linear analysis revealed significant differences in empathy scores when grouped by gender. There were significant differences in self-control when grouped by GPA levels.

  15. Comparison of two local anesthesia techniques (conventional & Akinosi) for the inferior alveolar dental nerve

    Directory of Open Access Journals (Sweden)

    Refua Y

    2001-09-01

    Different techniques for local anesthesia are used in the mandible. The purpose of this study was to determine the effects of inferior alveolar dental nerve blocks by comparing the Akinosi and conventional techniques. 80 patients (aged 15-60 years) were randomly divided into two groups for extraction of mandibular posterior teeth by the Akinosi and conventional techniques. Patients were all injected with 1.8 ml of Lidocaine 2% plus Adrenaline. The pain sensation during injection, positive aspiration, onset time of anesthesia, duration of anesthesia, depth of anesthesia, and the anesthesia of soft tissue related to sensory nerves were evaluated. The results showed that the pain sensation in the conventional technique was significantly higher than in the Akinosi technique. The number of positive aspirations in the conventional technique (12.5%) was higher than in the Akinosi technique (5%), but not significantly different. Long buccal nerve anesthesia in the Akinosi technique (75%) was significantly more frequent than in the conventional technique. There was no significant difference between the two techniques in the depth of anesthesia. The success rate was 87.5% in the conventional technique and 80% in the Akinosi technique. The average time to lip anesthesia was 3 minutes in the conventional technique compared with 4 minutes in the Akinosi technique, which was not a significant difference. However, the onset time of anesthesia in the tongue was significantly shorter in the conventional technique. No significant difference between the two techniques was observed in the duration of anesthesia in the lips and tongue.

  16. A three-stage strategy for optimal price offering by a retailer based on clustering techniques

    International Nuclear Information System (INIS)

    Mahmoudi-Kohan, N.; Shayesteh, E.; Moghaddam, M. Parsa; Sheikh-El-Eslami, M.K.

    2010-01-01

    In this paper, an innovative strategy for optimal price offering to customers for maximizing the profit of a retailer is proposed. This strategy is based on load profile clustering techniques and includes three stages. For the purpose of clustering, an improved weighted fuzzy average K-means is proposed. Also, in this paper a new acceptance function for increasing the profit of the retailer is proposed. The new method is evaluated by implementation on a group of 300 customers of a 20 kV distribution network. (author)
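    The paper's improved weighted fuzzy average K-means is not specified in the abstract; plain fuzzy c-means, sketched below, conveys the underlying clustering idea: every load profile belongs to each cluster with a fractional membership, and cluster centers are membership-weighted averages.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: each profile belongs to every cluster with a
    membership in [0, 1]; cluster centers are membership-weighted averages."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)           # memberships sum to 1 per profile
    for _ in range(n_iter):
        w = u ** m                              # fuzzified membership weights
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))        # closer centers -> larger membership
        u /= u.sum(axis=1, keepdims=True)
    return centers, u
```

    The fuzzifier m controls how soft the partition is; m close to 1 approaches hard K-means, while larger m spreads membership across clusters.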

  17. A three-stage strategy for optimal price offering by a retailer based on clustering techniques

    Energy Technology Data Exchange (ETDEWEB)

    Mahmoudi-Kohan, N.; Shayesteh, E. [Islamic Azad University (Garmsar Branch), Garmsar (Iran); Moghaddam, M. Parsa; Sheikh-El-Eslami, M.K. [Tarbiat Modares University, Tehran (Iran)

    2010-12-15

    In this paper, an innovative strategy for optimal price offering to customers for maximizing the profit of a retailer is proposed. This strategy is based on load profile clustering techniques and includes three stages. For the purpose of clustering, an improved weighted fuzzy average K-means is proposed. Also, in this paper a new acceptance function for increasing the profit of the retailer is proposed. The new method is evaluated by implementation on a group of 300 customers of a 20 kV distribution network. (author)

  18. A study of breast composition using radiographic techniques

    International Nuclear Information System (INIS)

    Noriah Jamal

    2005-01-01

    subjective Tabar patterns classification by the radiologist (correctly classified 83% of digitised mammograms). This indicates that it is possible to categorise mammograms with an interactive thresholding method into the patterns suggested by Tabar. Significant inter-observer variation was seen (p=0.003; repeated ANOVA), while there was no significant intra-observer variation (p=0.06; repeated ANOVA) in measuring breast density. Part 3 demonstrated a newly developed technique to estimate breast glandularity from radiographic data in actual clinical situations. The average breast glandularity of the study sample (n=705: Malay=235, Chinese=235, Indian=235) was 48.9% ± 18.7%. No significant difference was seen in breast glandularity or mean glandular dose among ethnic groups (p>0.05, Kruskal-Wallis test). Radiographic techniques have been developed and validated in laboratory conditions to quantitatively study breast composition and have shown no significant difference among the three ethnic groups. This research has the potential to provide further insight into the aetiology of breast diseases, as well as translational benefits from more accurate diagnosis and treatment of breast diseases. (author)

  19. Group-decoupled multi-group pin power reconstruction utilizing nodal solution 1D flux profiles

    International Nuclear Information System (INIS)

    Yu, Lulin; Lu, Dong; Zhang, Shaohong; Wang, Dezhong

    2014-01-01

    Highlights: • A direct fitting multi-group pin power reconstruction method is developed. • The 1D nodal solution flux profiles are used as the condition. • The least square fit problem is analytically solved. • A slowing down source improvement method is applied. • The method shows good accuracy for even challenging problems. - Abstract: A group-decoupled direct fitting method is developed for multi-group pin power reconstruction, which avoids both the complication of obtaining 2D analytic multi-group flux solution and any group-coupled iteration. A unique feature of the method is that in addition to nodal volume and surface average fluxes and corner fluxes, transversely-integrated 1D nodal solution flux profiles are also used as the condition to determine the 2D intra-nodal flux distribution. For each energy group, a two-dimensional expansion with a nine-term polynomial and eight hyperbolic functions is used to perform a constrained least square fit to the 1D intra-nodal flux solution profiles. The constraints are on the conservation of nodal volume and surface average fluxes and corner fluxes. Instead of solving the constrained least square fit problem numerically, we solve it analytically by fully utilizing the symmetry property of the expansion functions. Each of the 17 unknown expansion coefficients is expressed in terms of nodal volume and surface average fluxes, corner fluxes and transversely-integrated flux values. To determine the unknown corner fluxes, a set of linear algebraic equations involving corner fluxes is established via using the current conservation condition on all corners. Moreover, an optional slowing down source improvement method is also developed to further enhance the accuracy of the reconstructed flux distribution if needed. Two test examples are shown with very good results. One is a four-group BWR mini-core problem with all control blades inserted and the other is the seven-group OECD NEA MOX benchmark, C5G7
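    The paper solves its constrained fit analytically for its particular basis; a generic numerical analogue is an equality-constrained least-squares problem solved via the KKT system, sketched here on a toy problem that conserves an average the way the nodal constraints do (the basis and constraint below are illustrative, not the paper's):

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Minimize ||A x - b||^2 subject to C x = d by solving the KKT system
    [[2 A^T A, C^T], [C, 0]] [x; lam] = [2 A^T b; d]."""
    n, k = A.shape[1], C.shape[0]
    kkt = np.block([[2 * A.T @ A, C.T], [C, np.zeros((k, k))]])
    rhs = np.concatenate([2 * A.T @ b, d])
    return np.linalg.solve(kkt, rhs)[:n]

# Toy analogue: fit a parabola to samples while forcing its integral
# (its "average") over [0, 1] to equal a prescribed value.
t = np.linspace(0.0, 1.0, 50)
A = np.vstack([np.ones_like(t), t, t**2]).T   # basis 1, t, t^2
b = 1 + 2 * t + 3 * t**2                      # exact data for coefficients (1, 2, 3)
C = np.array([[1.0, 0.5, 1.0 / 3.0]])         # integrals of 1, t, t^2 over [0, 1]
x = constrained_lstsq(A, b, C, np.array([3.0]))
```

    The paper's analytic route exploits the symmetry of its expansion functions to avoid assembling and solving such a system numerically, but the conserved-average structure is the same.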

  20. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...

  1. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In this paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with on-off keying (OOK), polarization shift keying (POLSK), and coherent optical wireless communication (coherent OWC) systems under different turbulence regimes. To further enhance the ASE, aperture averaging effects are incorporated along with the above adaptive schemes. The results indicate that the ORA scheme improves ASE performance compared with CIFR under moderate and strong turbulence. The coherent OWC system with ORA outperforms the other modulation schemes and achieves an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence; adding aperture averaging raises the ASE to 50.5 bits/s/Hz under the same conditions. This makes ORA with coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.

  2. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
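Of the averaging functions the book surveys, ordered weighted averaging (OWA) is the easiest to sketch: sort the inputs in descending order, then form a weighted sum. A minimal illustration with hypothetical weights:

```python
def owa(weights, values):
    """Ordered weighted averaging: the weights are applied to the values
    after sorting them in descending order, so each weight attaches to a
    position (largest, second largest, ...) rather than to an argument."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))
```

With equal weights OWA reduces to the arithmetic mean; with all weight on the first (or last) position it becomes the maximum (or minimum).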

  3. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  4. [Intestinal lengthening techniques: an experimental model in dogs].

    Science.gov (United States)

    Garibay González, Francisco; Díaz Martínez, Daniel Alberto; Valencia Flores, Alejandro; González Hernández, Miguel Angel

    2005-01-01

    To compare two intestinal lengthening procedures in an experimental dog model. Intestinal lengthening is one of the methods of gastrointestinal reconstruction used to treat short bowel syndrome. The modification of Bianchi's technique is an alternative: it reduces the number of anastomoses to a single one, thus reducing the risk of leaks and strictures. To our knowledge, no clinical or experimental report has compared both techniques, so we carried out the present study. Twelve creole dogs were operated on with the Bianchi technique for intestinal lengthening (group A) and another 12 creole dogs of the same breed and weight were operated on with the modified technique (group B). The groups were compared with respect to operating time, technical difficulty, cost, intestinal lengthening and anastomosis diameter. There was no statistically significant difference in anastomosis diameter (A = 9.0 mm vs. B = 8.5 mm, p = 0.3846). Operating time (142 min vs. 63 min), cost and technical difficulty were lower in group B. The anastomoses and intestinal segments of group B had good blood supply and were patent along their full length. The Bianchi technique and the modified technique are both reliable alternatives for the treatment of short bowel syndrome; the modified technique improved operating time, cost and technical difficulty.

  5. Effectiveness of an educational video as an instrument to refresh and reinforce the learning of a nursing technique: a randomized controlled trial.

    Science.gov (United States)

    Salina, Loris; Ruffinengo, Carlo; Garrino, Lorenza; Massariello, Patrizia; Charrier, Lorena; Martin, Barbara; Favale, Maria Santina; Dimonte, Valerio

    2012-05-01

    The Undergraduate Nursing Course has been using videos for the past year or so, for many different purposes: during lessons, in nurse refresher courses, for reinforcement, and for sharing and comparing knowledge with the professional and scientific community. The purpose of this study was to estimate the efficacy of a video (moving an uncooperative patient from the supine to the lateral position) as an instrument to refresh and reinforce nursing techniques. A two-arm randomized controlled trial (RCT) design was chosen: both groups attended lessons in the classroom as well as in the laboratory; a month later, one group received written information as a refresher while the other group watched the video. Both groups were evaluated in a blinded fashion. A total of 223 students agreed to take part in the study. The difference observed between those who had seen the video and those who had read up on the technique was an average of 6.19 points in favour of the former. Students who had watched the video were better able to apply the technique, resulting in a better performance. The video therefore represents an important tool to refresh and reinforce previous learning.

  6. Local and average structure of Mn- and La-substituted BiFeO3

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Bo; Selbach, Sverre M., E-mail: selbach@ntnu.no

    2017-06-15

    The local and average structure of solid solutions of the multiferroic perovskite BiFeO3 is investigated by synchrotron X-ray diffraction (XRD) and density functional theory (DFT) calculations. The average experimental structure is determined by Rietveld refinement and the local structure by total scattering data analyzed in real space with the pair distribution function (PDF) method. With equal concentrations of La on the Bi site or Mn on the Fe site, La causes larger structural distortions than Mn. Structural models based on DFT-relaxed geometry give an improved fit to experimental PDFs compared to models constrained by the space group symmetry. Berry phase calculations predict a higher ferroelectric polarization than the experimental literature values, reflecting that structural disorder is captured neither in average-structure space group models nor in DFT calculations with artificial long-range order imposed by periodic boundary conditions. Only by including point defects in a supercell, here Bi vacancies, can DFT calculations reproduce the literature results on the structure and ferroelectric polarization of Mn-substituted BiFeO3. The combination of local- and average-structure-sensitive experimental methods with DFT calculations is useful for illuminating structure-property-composition relationships in complex functional oxides with local structural distortions. - Graphical abstract: The experimental and simulated partial pair distribution functions (PDF) for BiFeO3, BiFe0.875Mn0.125O3, BiFe0.75Mn0.25O3 and Bi0.9La0.1FeO3.

  7. Evaluation of combined intracoronary two-dimensional and Doppler ultrasound techniques in the relaxation function of coronary microcirculation

    International Nuclear Information System (INIS)

    Qi Chunmei; Li Dongye; Pan Defeng; Zhu Hong

    2005-01-01

    Objective: To assess the value of detecting the relaxation function of the coronary microcirculation by combining intracoronary two-dimensional ultrasound (IVUS) and intracoronary Doppler ultrasound (ICD) techniques with mean arterial pressure. Methods: Fourteen healthy male swine were randomly divided into two groups: eight swine fed a 1% cholesterol-rich diet for 12 weeks, as a model of early atherosclerosis, formed the experimental group; six swine fed a standard diet formed the control group. All swine underwent cardiovascular catheterization examination after 12 weeks. Combined IVUS and ICD techniques were used to calculate the change in coronary blood flow (CBF) after the administration of acetylcholine and nitroglycerin. With the pressure at the root of the aorta, the relaxation function of the coronary microcirculation was assessed via the coronary resistance index (RI). Finally, all examined coronary arteries and the related coronary microcirculation underwent pathological examination. Results: The pathological examinations demonstrated that the average intima thickness in the experimental group was increased markedly compared with the control group (74.80 μm ± 17.60 μm vs 7.60 μm ± 4.27 μm, P<0.001). No intima thickening was seen in the coronary microcirculation. Acetylcholine induced an increase in RI in the experimental group compared with the control group (-0.18 ± 0.09 vs 0.29 ± 0.18, P<0.05). Nitroglycerin induced a decrease in RI in both groups (-0.40 ± 0.13 vs -0.34 ± 0.20). Conclusions: IVUS and ICD techniques combined with mean arterial pressure can identify endothelium-mediated dysfunction of the coronary microcirculation in the early stage of atherosclerosis. (authors)

  8. Treatment of anxiety: a comparison of the usefulness of self-hypnosis and a meditational relaxation technique. An overview.

    Science.gov (United States)

    Benson, H; Frankel, F H; Apfel, R; Daniels, M D; Schniewind, H E; Nemiah, J C; Sifneos, P E; Crassweller, K D; Greenwood, M M; Kotch, J B; Arns, P A; Rosner, B

    1978-01-01

    We have investigated prospectively the efficacy of two nonpharmacologic relaxation techniques in the therapy of anxiety. A simple, meditational relaxation technique (MT) that elicits the changes of decreased sympathetic nervous system activity was compared to a self-hypnosis technique (HT) in which relaxation, with or without altered perceptions, was suggested. 32 patients with anxiety neurosis were divided into 2 groups on the basis of their responsivity to hypnosis: moderate-high and low responsivity. The MT or HT was then randomly assigned separately to each member of the two responsivity groups. Thus, 4 treatment groups were studied: moderate-high responsivity MT; low responsivity MT; moderate-high responsivity HT; and low responsivity HT. The low responsivity HT group, by definition largely incapable of achieving the altered perceptions essential to hypnosis, was designed as the control group. Patients were instructed to practice the assigned technique daily for 8 weeks. Change in anxiety was determined by three types of evaluation: psychiatric assessment; physiologic testing; and self-assessment. There was essentially no difference between the two techniques in therapeutic efficacy according to these evaluations. Psychiatric assessment revealed overall improvement in 34% of the patients and the self-rating assessment indicated improvement in 63% of the population. Patients who had moderate-high hypnotic responsivity, independent of the technique used, significantly improved on psychiatric assessment (p = 0.05) and decreased average systolic blood pressure from 126.1 to 122.5 mm Hg over the 8-week period (p = 0.048). The responsivity scores at the higher end of the hypnotic responsivity spectrum were proportionately correlated to greater decreases in systolic blood pressure (p = 0.075) and to improvement by psychiatric assessment (p = 0.003). There was, however, no consistent relation between hypnotic responsivity and the other assessments made, such as

  9. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., the trap placed on a central node and the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
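The average path length (APL) that the paper derives analytically is, on any finite unweighted graph, just the mean of all pairwise shortest-path distances, which breadth-first search provides. A generic sketch (the adjacency lists below are small hypothetical examples, not Husimi cacti):

```python
from collections import deque

def average_path_length(adj):
    """Mean shortest-path distance over all unordered node pairs,
    computed by BFS from every node (unweighted, connected graph
    with comparable integer node labels assumed)."""
    total = pairs = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for t in adj:
            if t > src:  # count each unordered pair once
                total += dist[t]
                pairs += 1
    return total / pairs
```

This brute-force check runs in O(N·E) time, which is why papers such as this one seek closed-form expressions for large self-similar networks.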

  10. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  11. Evaluation of the accuracy of group calculations for reactor criticality perturbations

    International Nuclear Information System (INIS)

    Dulin, V.A.

    1985-09-01

    For calculations of criticality perturbations it is necessary to use group constants which take into account not only the peculiarities of the intra-group flux but also those of the behaviour of the adjoint flux. A new method is proposed for obtaining bilinear-averaged constants of this type on the basis of the resonance characteristics of the importance function and the difference between the value of neutron importance at the group boundary and the group-averaged value (the b^(+j) factor). A number of calculations are made for the ratios of reactivity coefficients in the BFS assemblies. Values have been obtained for the difference between the results of calculation with bilinear-averaged constants and those averaged conventionally (over flux). In many cases, this difference exceeds the experimental error. (author)

  12. Making Cooperative Learning Groups Work.

    Science.gov (United States)

    Hawley, James; De Jong, Cherie

    1995-01-01

    Discusses the use of cooperative-learning groups with middle school students. Describes cooperative-learning techniques, including group roles, peer evaluation, and observation and monitoring. Considers grouping options, including group size and configuration, dyads, the think-pair-share lecture, student teams achievement divisions, jigsaw groups,…

  13. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.
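The phase-shift described here, a mean that diverges once the distribution becomes heavy-tailed, can be illustrated with a Pareto quantile function: for tail exponent alpha ≤ 1 the mean integral diverges, while for alpha > 1 it converges to alpha/(alpha − 1). A small sketch (the midpoint grid is only an illustration, not the paper's models):

```python
def pareto_quantile(u, alpha):
    """Quantile function of a Pareto distribution with minimum value 1."""
    return (1.0 - u) ** (-1.0 / alpha)

def grid_mean(alpha, n):
    """Midpoint-rule approximation of E[X] = integral of the quantile
    function over (0, 1); the approximation refines as n grows."""
    return sum(pareto_quantile((i + 0.5) / n, alpha) for i in range(n)) / n
```

For alpha = 2 the approximation settles near the true mean alpha/(alpha − 1) = 2; for alpha = 0.8 it keeps growing as the grid is refined, the heavy-tailed regime in which "Average is Over".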

  14. Five years' experience of the modified Meek technique in the management of extensive burns.

    Science.gov (United States)

    Hsieh, Chun-Sheng; Schuong, Jen-Yu; Huang, W S; Huang, Ted T

    2008-05-01

    The Meek technique of skin expansion is useful for covering a large open wound with a small piece of skin graft, but requires a carefully followed protocol. Over the past 5 years, a skin graft expansion technique following the Meek principle was used to treat 37 individuals who had sustained third-degree burns involving more than 40% of the body surface. A scheme was devised whereby the body was divided into six areas, in order to clarify the optimal order of wound debridements and skin grafting procedures as well as the regimen of aftercare. The mean body surface involvement was 72.9% and the mean area of third-degree burns was 41%. The average number of operations required was 1.84. There were four deaths in this group of patients. The Meek technique of skin expansion and the suggested protocol are together efficient and effective in covering an open wound, particularly where there is a paucity of skin graft donor sites.

  15. Practical adjoint Monte Carlo technique for fixed-source and eigenfunction neutron transport problems

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    1981-01-01

    An adjoint Monte Carlo technique is described for the solution of neutron transport problems. The optimum biasing function for a zero-variance collision estimator is derived. The optimum treatment of an analog of a non-velocity thermal group has also been derived. The method is extended to multiplying systems, especially for eigenfunction problems to enable the estimate of averages over the unknown fundamental neutron flux distribution. A versatile computer code, FOCUS, has been written, based on the described theory. Numerical examples are given for a shielding problem and a critical assembly, illustrating the performance of the FOCUS code. 19 refs

  16. Life expectancy for the University of Utah beagle colony and selection of a control group

    International Nuclear Information System (INIS)

    Atherton, D.R.; Stevens, W.; Bruenger, F.W.; Woodbury, L.; Stover, B.J.; Smith, J.M.; Wrenn, M.E.

    1986-01-01

    In the internal-emitters toxicity program at the University of Utah Radiobiology Laboratory, each experimental group carries its own specific control cohort, which is the same size as most of the individual experimental cohorts. Variations in average lifetime are observed among individual control cohorts. This may be due to external causes, genetic variances such as the occurrence of epileptic syndromes, or changes such as those that result from improved medical care or husbandry. The Stover-Eyring method was used to eliminate from control and experimental cohorts those dogs with specific diseases such as epilepsy - dogs that were at risk for too short a time for a later pathological response to occur. By the use of conventional statistical techniques, it was shown to be reasonable to pool individual control cohorts into a much larger selected cohort that provided greater precision in the estimate of control survival and thus a more sensitive basis for the estimation of the relative life shortening in the experimental groups. The analysis suggested that control groups could be combined, and a control population of 114 beagles was proposed. Their average lifespan was 4926 ± 849 days, and the time when half the animals had died was 5000 days. 3 refs., 2 figs., 5 tabs

  17. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    A time- and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
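The two standard hourly-value types compared in this abstract, instantaneous spot samples and simple 1-h boxcar averages, are straightforward to construct from 1-min data. A minimal sketch on synthetic minute values (not observatory data; no anti-alias filtering is applied):

```python
import numpy as np

def hourly_spot(minute_values):
    """Instantaneous 'spot' value: the sample at the top of each hour."""
    return np.asarray(minute_values)[::60]

def hourly_boxcar(minute_values):
    """Simple 1-h 'boxcar' average over each 60-minute window."""
    x = np.asarray(minute_values, dtype=float)
    n_hours = len(x) // 60
    return x[:n_hours * 60].reshape(n_hours, 60).mean(axis=1)
```

On a linear trend the two types differ by half a window: the spot series samples the start of each hour, while the boxcar average is centred on the half-hour.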

  18. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  19. The efficacy of the 'mind map' study technique.

    Science.gov (United States)

    Farrand, Paul; Hussain, Fearzana; Hennessy, Enid

    2002-05-01

    To examine the effectiveness of using the 'mind map' study technique to improve factual recall from written information. To obtain baseline data, subjects completed a short test based on a 600-word passage of text prior to being randomly allocated to two groups: 'self-selected study technique' and 'mind map'. After a 30-minute interval, the self-selected study technique group were exposed to the same passage of text previously seen and told to apply existing study techniques. Subjects in the mind map group were trained in the mind map technique and told to apply it to the passage of text. Recall was measured after an interfering task and again a week later. Measures of motivation were taken. Barts and the London School of Medicine and Dentistry, University of London. 50 second- and third-year medical students. Recall of factual material improved for both the mind map and self-selected study technique groups at immediate test compared with baseline. However, this improvement was only robust after a week for those in the mind map group. At 1 week, the factual knowledge in the mind map group was greater by 10% (adjusting for baseline) (95% CI -1% to 22%). However, motivation for the technique used was lower in the mind map group; if motivation could have been made equal in the groups, the improvement with mind mapping would have been 15% (95% CI 3% to 27%). Mind maps provide an effective study technique when applied to written material. However, before mind maps are generally adopted as a study technique, consideration has to be given to ways of improving motivation amongst users.

  20. Method for analysis of averages over transmission energy of resonance neutrons

    International Nuclear Information System (INIS)

    Komarov, A.V.; Luk'yanov, A.A.

    1981-01-01

    Experimental data on transmission through iron specimens in different energy groups have been analyzed on the basis of a previously developed theoretical model that describes resonance-neutron averages over transmission energy as functions of specimen thickness and mean resonance parameters. The parameter values obtained agree with the corresponding data evaluated in the theory of mean neutron cross sections. The suggested method of describing transmission permits experimental results to be reproduced for specimens of any thickness

  1. Strips of hourly power options. Approximate hedging using average-based forward contracts

    International Nuclear Information System (INIS)

    Lindell, Andreas; Raab, Mikael

    2009-01-01

    We study approximate hedging strategies for a contingent claim consisting of a strip of independent hourly power options. The payoff of the contingent claim is a sum of the contributing hourly payoffs. As there is no forward market for specific hours, the fundamental problem is to find a reasonable hedge using exchange-traded forward contracts, e.g. average-based monthly contracts. The main result is a simple dynamic hedging strategy that reduces a significant part of the variance. The idea is to decompose the contingent claim into mathematically tractable components and to use empirical estimations to derive hedging deltas. Two benefits of the method are that the technique easily extends to more complex power derivatives and that only a few parameters need to be estimated. The hedging strategy based on the decomposition technique is compared with dynamic delta hedging strategies based on local minimum variance hedging, using a correlated traded asset. (author)
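The local minimum variance hedging that the paper uses as a benchmark reduces, in the simplest static one-asset case, to the textbook hedge ratio beta = Cov(payoff, hedge) / Var(hedge). A minimal sketch with hypothetical series (not the paper's decomposition technique):

```python
import numpy as np

def min_variance_hedge_ratio(payoff, hedge):
    """Static minimum-variance hedge ratio beta = Cov(P, H) / Var(H);
    holding beta units of the hedge instrument minimizes Var(P - beta*H)."""
    cov = np.cov(payoff, hedge)  # 2x2 sample covariance matrix
    return cov[0, 1] / cov[1, 1]

# Hypothetical payoff that is an affine function of the hedge instrument.
h = np.array([1.0, 2.0, 3.0, 4.0])
p = 2.0 * h + 1.0
beta = min_variance_hedge_ratio(p, h)
```

When the payoff is affine in the hedge instrument the residual variance after hedging is zero; in practice, as in the strip-of-hourly-options problem, the hedge removes only the part of the variance correlated with the traded contract.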

  2. [A study on breakfast and school performance in a group of adolescents].

    Science.gov (United States)

    Herrero Lozano, R; Fillat Ballesteros, J C

    2006-01-01

    To determine the relationship between breakfast, from a qualitative perspective, and school performance. The study was performed in 141 students (70 males and 71 females) aged 12-13 years, in the 1st grade of Mandatory Secondary Education (ESO) at an institute in Saragossa, by means of recall of the previous day's breakfast. Breakfast quality was assessed according to the criteria of the Kid study: good quality contains at least one food from each of the dairy, cereals and fruit groups; improvable quality lacks one of the groups; insufficient quality lacks two groups; poor quality means no breakfast at all. Quality was considered improved only when a mid-morning snack with a food different from those taken at breakfast was added. The average mark at the end of the school year was the criterion used to assess school performance. Statistical analysis of the data was done with SPSS software and comprises descriptive and inferential statistics. For the analysis of the global significance of the differences, analysis of variance was applied, followed by post hoc tests with Bonferroni's and Tukey's methods to detect the specific groups explaining global significance. The average mark systematically increases with breakfast quality, from 5.63 in the group with poor-quality breakfast to 7.73 in the group with good-quality breakfast. An analysis of variance was performed to study the statistical significance of the mean differences between groups. The outcomes show significant global differences between groups (p = 0.001), i.e., the average mark varies significantly with breakfast quality.
    When the pooled quality of breakfast and mid-morning snack is analyzed, the average mark again increases systematically with breakfast-snack quality, from an average mark of 5.77 in the group with poor or insufficient quality up to 7.61 in the group with

  3. Rapid discrimination and classification of the Lactobacillus plantarum group based on a partial dnaK sequence and DNA fingerprinting techniques.

    Science.gov (United States)

    Huang, Chien-Hsun; Lee, Fwu-Ling; Liou, Jong-Shian

    2010-03-01

    The Lactobacillus plantarum group comprises five very closely related species. Some species of this group are considered to be probiotic and widely applied in the food industry. In this study, we compared the use of two different molecular markers, the 16S rRNA and dnaK gene, for discriminating phylogenetic relationships amongst L. plantarum strains using sequencing and DNA fingerprinting. The average sequence similarity for the dnaK gene (89.2%) among five type strains was significantly less than that for the 16S rRNA (99.4%). This result demonstrates that the dnaK gene sequence provided higher resolution than the 16S rRNA and suggests that the dnaK could be used as an additional phylogenetic marker for L. plantarum. Species-specific profiles of the Lactobacillus strains were obtained with RAPD and RFLP methods. Our data indicate that phylogenetic relationships between these strains are easily resolved using sequencing of the dnaK gene or DNA fingerprinting assays.
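The similarity percentages quoted above boil down to percent identity between aligned sequences. A minimal sketch on hypothetical pre-aligned strings (a real comparison would first align the 16S rRNA or dnaK sequences):

```python
def percent_identity(seq_a, seq_b):
    """Percent identity of two pre-aligned, equal-length sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)
```

Averaging this value over all pairs of type strains gives figures comparable to the 89.2% (dnaK) and 99.4% (16S rRNA) reported in the study: the lower the average identity, the higher the resolving power of the marker.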

  4. Average use of Alcohol and Binge Drinking in Pregnancy: Neuropsychological Effects at Age 5

    DEFF Research Database (Denmark)

    Kilburn, Tina R.

    Objectives The objective of this PhD was to examine the relation between low weekly average maternal alcohol consumption and 'binge drinking' (defined as intake of 5 or more drinks per occasion) during pregnancy and information processing time (IPT) in children aged five years. Since a method… that provided detailed information on maternal alcohol drinking patterns before and during pregnancy and other lifestyle factors. These women were categorized in groups by prenatal average alcohol intake and binge drinking, timing and number of episodes. At the age of five years the children of these women… and number of episodes) and between simple reaction time (SRT) and alcohol intake or binge drinking (timing and number of episodes) during pregnancy. Conclusion This was one of the first studies investigating IPT and prenatal average alcohol intake and binge drinking in early pregnancy. Daily prenatal

  5. Consumer understanding of food labels: toward a generic tool for identifying the average consumer

    DEFF Research Database (Denmark)

    Sørensen, Henrik Selsøe; Holm, Lotte; Møgelvang-Hansen, Peter

    2013-01-01

    The ‘average consumer’ is referred to as a standard in regulatory contexts when attempts are made to benchmark how consumers are expected to reason while decoding food labels. An attempt is made to operationalize this hypothetical ‘average consumer’ by proposing a tool for measuring the level of informedness of an individual consumer against the national median at any time. Informedness, i.e. the individual consumer's ability to interpret correctly the meaning of the words and signs on a food label, is isolated as one essential dimension for dividing consumers into three groups: less-informed, informed… It is suggested that independent future studies of consumer behavior and decision making in relation to food products in different contexts could benefit from this type of benchmarking tool.

  6. Prosopography of social and political groups historically located: method or research technique?

    Directory of Open Access Journals (Sweden)

    Lorena Madruga Monteiro

    2014-06-01

    Full Text Available The scientific status of the prosopographical approach has been questioned in different disciplinary domains. The debate over whether prosopography is a technique, a research tool, an auxiliary science or a method transpires in the arguments of scientists and of those dedicated to explaining the assumptions of prosopographical research. In the social sciences, for example, prosopography is seen not only as an instrument of research but as a method associated with a theoretical construct for apprehending the social world. Historians who use prosopographic analysis, in turn, oscillate over whether the analysis of collective biography is a method or a polling technique. Given this setting, in this article we discuss the prosopographical approach through its different uses. The study presents a literature review, examining prosopography as a technique of historical research and further as a method of sociological analysis, and then highlights its procedures and methodological limits.

  7. A review of behaviour change theories and techniques used in group based self-management programmes for chronic low back pain and arthritis.

    Science.gov (United States)

    Keogh, Alison; Tully, Mark A; Matthews, James; Hurley, Deirdre A

    2015-12-01

    Medical Research Council (MRC) guidelines recommend applying theory within complex interventions to explain how behaviour change occurs. Guidelines endorse self-management of chronic low back pain (CLBP) and osteoarthritis (OA), but evidence for its effectiveness is weak. This literature review aimed to determine the use of behaviour change theory and techniques within randomised controlled trials of group-based self-management programmes for chronic musculoskeletal pain, specifically CLBP and OA. A two-phase search strategy of electronic databases was used to identify systematic reviews and studies relevant to this area. Articles were coded for their use of behaviour change theory, and the number of behaviour change techniques (BCTs) was identified using the 93-item Behaviour Change Technique Taxonomy (v1). 25 articles of 22 studies met the inclusion criteria, of which only three reported having based their intervention on theory, and all used Social Cognitive Theory. A total of 33 BCTs were coded across all articles, with the most commonly identified techniques being 'instruction on how to perform the behaviour', 'demonstration of the behaviour', 'behavioural practice', 'credible source', 'graded tasks' and 'body changes'. Results demonstrate that theoretically driven research within group-based self-management programmes for chronic musculoskeletal pain is lacking, or is poorly reported. Future research that follows recommended guidelines regarding the use of theory in study design and reporting is warranted. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'
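
    The coupling the abstract describes can be sketched schematically (the notation below is illustrative, not Khrennikov's exact formulation): for a Gaussian measure μ with zero mean and covariance operator B, and an analytic functional f with f(0) = 0 and f′(0) = 0, the leading term of the Gaussian integral is a trace of the same form as the von Neumann average:

```latex
\int f(\phi)\, d\mu(\phi) \;\approx\; \tfrac{1}{2}\,\operatorname{Tr}\!\bigl[B\, f''(0)\bigr],
\qquad
\langle \hat{A} \rangle_{\rho} \;=\; \operatorname{Tr}\!\bigl(\rho\, \hat{A}\bigr)
```

    Identifying the suitably scaled covariance operator with a density operator ρ and the second derivative f″(0) with an observable then yields the 'dequantization' correspondence.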

  9. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff ≈ 4 × 10^−6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^−8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models

  10. Group Grammar

    Science.gov (United States)

    Adams, Karen

    2015-01-01

    In this article Karen Adams demonstrates how to incorporate group grammar techniques into a classroom activity. In the activity, students practice using the target grammar to do something they naturally enjoy: learning about each other.

  11. Exogenous bleaching evaluation on dentin using chemical activated technique compared with diode laser technique

    International Nuclear Information System (INIS)

    Carvalho, Breno Carnevalli Franco de

    2003-01-01

    This in vitro study compared the results of different exogenous bleaching procedures on dentin after treatment of the enamel surface. Thirty human canines were sectioned, preserving the vestibular half of the crown and 3 mm of root, with an average vestibular-lingual thickness of 3.5 mm, measured at the middle third of the crown. All teeth were maintained in a wet chamber during the experiment. Digital photographs were taken of the dentin surface at 3 experimental times (LI: initial record, L0: immediate post-bleaching record and L15: 15 days after bleaching). The teeth were divided into 3 experimental groups of 10 teeth each. The Control Group did not receive any kind of treatment. The Laser Group received 2 sessions of laser bleaching, with 3 applications each, using 35% hydrogen peroxide activated by diode laser during 30 seconds, by scanning the enamel surface from incisal edge to the top of the crown, from the mesial to the distal portion of the crown and circularly, each movement lasting 10 seconds. The following parameters were adopted: wavelength of 808 nm, power of 1.5 W and optic fiber with 600 μm (core). The Peroxide Group received 28 daily applications, lasting 4 hours each, using 16% carbamide peroxide. The bleaching records were analysed using a computer, through RGBK (red, green, blue and black) values. The K averages (K=100% for black and K=0% for white) of the records for the Control Group were: LI=50.1%, L0=50.3% and L15=50.6%. For the Laser Group the K averages were LI=48.5%, L0=50.0% and L15=47.7%. And for the Peroxide Group they were LI=50.5%, L0=35.9% and L15=37.3%. The statistical analysis showed no significant difference in K between the Control Group and the Laser Group at LI, L0 and L15. Only the Peroxide Group showed a statistically significant difference between LI and both L0 and L15 (0.1%), while L0 compared with L15 showed no difference. (author)

  12. Anterior cruciate ligament reconstruction using autologous hamstring single-bundle Rigidfix technique compared with single-bundle Transfix technique

    Directory of Open Access Journals (Sweden)

    Mousavi Hamid

    2012-01-01

    Full Text Available Background: Initial fixation strength is critical for the early post-operative rehabilitation of patients with anterior cruciate ligament (ACL) reconstructions. However, even the best femoral fixation devices remain controversial. We compared the results of 2 femoral fixation techniques, Rigidfix and Transfix. Materials and Methods: A total of 30 patients with unilateral ACL deficiency were randomly assigned to 1 of 2 groups. In Group A an anatomic single-bundle ACL reconstruction was performed using the Rigidfix technique (Mitek, Norwood, MA, USA); Group B were treated with a single bundle using the Transfix technique (Arthrex, Naples, FL, USA). For tibial fixation, a bioabsorbable Intrafix interference screw was used in all groups, and the graft was fashioned from the semitendinosus and gracilis tendons in all patients. The patients were subjected to a clinical evaluation, with assessment of the anterior drawer, Lachman and pivot-shift tests. They also completed the International Knee Documentation Committee (IKDC) score. Results: At a mean of 14 months (12-17) follow-up there were no significant differences between the 2 groups concerning time between injury and surgery, or range of movement. However, the Rigidfix group showed significantly better results for the subjective assessment of knee function (P = 0.002). The Lachman, anterior drawer, and pivot-shift tests also showed no significant difference between the 2 groups. The IKDC scale showed no significant difference among the groups (P < 0.001). There was no difference regarding duration or cost of the operation between the 2 groups. On clinical evaluation there was no significant difference between the 2 groups. However, regardless of the technique, all knees were improved by ACL reconstruction compared with their preoperative status. Conclusion: Both techniques can be used for reconstruction of the ACL. Other factors, such as the psychological profile of the patients, should be considered for surgery.

  13. Evaluation of exposure in mammography: limitations of average glandular dose and proposal of a new quantity

    International Nuclear Information System (INIS)

    Geeraert, N.; Bosmans, H.; Klausz, R.; Muller, S.; Bloch, I.

    2015-01-01

    The radiation risk in mammography is traditionally evaluated using the average glandular dose. This quantity for the average breast has proven useful for population statistics and for comparing exposure techniques and systems. However, it does not indicate the individual radiation risk, which depends on the individual glandular amount and distribution. Simulations of exposures were performed for six appropriate virtual phantoms with varying glandular amount and distribution. The individualised average glandular dose (iAGD), i.e. the individual glandular absorbed energy divided by the mass of the gland, and the glandular imparted energy (GIE), i.e. the glandular absorbed energy, were computed. Both quantities were evaluated for their capability to take into account the glandular amount and distribution. As expected, the results demonstrated that iAGD reflects only the distribution, while GIE reflects both the glandular amount and distribution. Therefore GIE is a good candidate for individual radiation risk assessment. (authors)

  14. Acid demineralization susceptibility of dental enamel submitted to different bleaching techniques and fluoridation regimens.

    Science.gov (United States)

    Salomão, Dlf; Santos, Dm; Nogueira, Rd; Palma-Dibb, Rg; Geraldo-Martins, Vr

    2014-01-01

    The aim of the current study was to assess the acid demineralization susceptibility of bleached dental enamel submitted to different fluoride regimens. One hundred bovine enamel blocks (6×6×3 mm) were randomly divided into 10 groups (n=10). Groups 1 and 2 received no bleaching. Groups 3 to 6 were submitted to an at-home bleaching technique using 6% hydrogen peroxide (HP; G3 and G4) or 10% carbamide peroxide (CP; G5 and G6). Groups 7 to 10 were submitted to an in-office bleaching technique using 35% HP (G7 and G8) or 35% CP (G9 and G10). During bleaching, a daily fluoridation regimen of 0.05% sodium fluoride (NaF) solution was performed on groups 3, 5, 7, and 9, while weekly fluoridation with a 2% NaF gel was performed on groups 4, 6, 8, and 10. The samples in groups 2 to 10 were pH cycled for 14 consecutive days. The samples from all groups were then assessed by cross-sectional Knoop microhardness at different depths from the outer enamel surface. The average Knoop hardness numbers (KHNs) were compared using one-way analysis of variance and Tukey tests (α=0.05). The comparison between groups 1 and 2 showed that the demineralization method was effective. The comparison among groups 2 to 6 showed the same susceptibility to acid demineralization, regardless of the fluoridation method used. However, the samples from groups 8 and 10 showed more susceptibility to acid demineralization when compared with group 2 (p<0.05), indicating greater susceptibility of the bleached enamel to acid demineralization. However, the use of 35% HP and 35% CP must be associated with a daily fluoridation regimen, otherwise the in-office bleaching makes the bleached enamel more susceptible to acid demineralization.
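
    The group comparison described above (one-way ANOVA on mean Knoop hardness numbers, α=0.05) can be sketched in a few lines of Python; the hardness values below are made-up illustrative numbers, not the study's data:

```python
# One-way ANOVA F statistic, as used to compare mean Knoop hardness
# numbers (KHN) across treatment groups. Data values are illustrative.
def one_way_anova_f(groups):
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand = sum(sum(g) for g in groups) / n  # grand mean
    # Between-group sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares (n - k degrees of freedom)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical KHN readings for three groups (e.g. control vs two regimens)
control   = [320.0, 315.0, 325.0, 318.0]
regimen_a = [300.0, 305.0, 298.0, 302.0]
regimen_b = [280.0, 275.0, 285.0, 278.0]
f_stat = one_way_anova_f([control, regimen_a, regimen_b])
```

    For df = (2, 9), the α=0.05 critical value of F is about 4.26, so an F exceeding that would indicate a significant group difference, to be followed by a post hoc comparison such as Tukey's test.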

  15. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    Science.gov (United States)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC) that follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced better fitting than individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored by using one AI model.
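
    The core of the BMA weighting step can be sketched as follows: BIC differences are converted to model weights, and the combined variance adds within-model and between-model terms. The function and numbers below are a hypothetical sketch, not the authors' code:

```python
import math

# Bayesian model averaging (BMA) of point estimates from several models,
# with weights derived from BIC (parsimony principle), as in the BAIMA
# scheme. The BIC values and per-model estimates below are hypothetical.
def bma_combine(estimates, variances, bics):
    best = min(bics)
    raw = [math.exp(-(b - best) / 2.0) for b in bics]  # relative evidence
    w = [r / sum(raw) for r in raw]                    # model weights
    mean = sum(wi * e for wi, e in zip(w, estimates))
    within = sum(wi * v for wi, v in zip(w, variances))         # propagated input error
    between = sum(wi * (e - mean) ** 2 for wi, e in zip(w, estimates))  # model non-uniqueness
    return mean, within + between, w

# Three hypothetical models (e.g. TS-FL, ANN, NF) estimating hydraulic conductivity
est, var, weights = bma_combine(
    estimates=[2.1, 2.3, 1.8],
    variances=[0.04, 0.05, 0.03],
    bics=[100.0, 100.5, 110.0],
)
```

    Note how the model with the largest BIC receives a near-zero weight, mirroring how the parsimony principle nearly discarded the NF model in the study.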

  16. Construction of a technique plan repository and evaluation system based on AHP group decision-making for emergency treatment and disposal in chemical pollution accidents

    International Nuclear Information System (INIS)

    Shi, Shenggang; Cao, Jingcan; Feng, Li; Liang, Wenyan; Zhang, Liqiu

    2014-01-01

    Highlights: • Different chemical pollution accidents were simplified using the event tree analysis. • Emergency disposal technique plan repository of chemicals accidents was constructed. • The technique evaluation index system of chemicals accidents disposal was developed. • A combination of group decision and analytical hierarchy process (AHP) was employed. • Group decision introducing similarity and diversity factor was used for data analysis. - Abstract: The environmental pollution resulting from chemical accidents has caused increasingly serious concerns. Therefore, it is very important to be able to determine in advance the appropriate emergency treatment and disposal technology for different types of chemical accidents. However, the formulation of an emergency plan for chemical pollution accidents is considerably difficult due to the substantial uncertainty and complexity of such accidents. This paper explains how the event tree method was used to create 54 different scenarios for chemical pollution accidents, based on the polluted medium, dangerous characteristics and properties of chemicals involved. For each type of chemical accident, feasible emergency treatment and disposal technology schemes were established, considering the areas of pollution source control, pollutant non-proliferation, contaminant elimination and waste disposal. Meanwhile, in order to obtain the optimum emergency disposal technology schemes as soon as the chemical pollution accident occurs from the plan repository, the technique evaluation index system was developed based on group decision-improved analytical hierarchy process (AHP), and has been tested by using a sudden aniline pollution accident that occurred in a river in December 2012
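
    The AHP step of such an evaluation index system can be sketched as follows: criterion weights are the principal eigenvector of a pairwise comparison matrix, here computed by power iteration with a consistency check. The 3×3 matrix is hypothetical (e.g. comparing source control, pollutant non-proliferation and contaminant elimination), not taken from the paper:

```python
# AHP priority weights from a pairwise comparison matrix (Saaty scale),
# computed via power iteration; the matrix below is hypothetical.
def ahp_weights(a, iters=100):
    n = len(a)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(a[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]            # normalized principal eigenvector
    # principal eigenvalue estimate and consistency index CI = (lam - n)/(n - 1)
    lam = sum(sum(a[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return w, ci

matrix = [
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 1/2.0, 1.0],
]
weights, ci = ahp_weights(matrix)
```

    A consistency index well below 0.1 (relative to the appropriate random index) indicates the pairwise judgements are acceptably coherent before the weights are used for ranking disposal technology schemes.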

  17. Construction of a technique plan repository and evaluation system based on AHP group decision-making for emergency treatment and disposal in chemical pollution accidents

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Shenggang [College of Environmental Science and Engineering, Beijing Forestry University, Beijing 100083 (China); College of Chemistry, Baotou Teachers’ College, Baotou 014030 (China); Cao, Jingcan; Feng, Li; Liang, Wenyan [College of Environmental Science and Engineering, Beijing Forestry University, Beijing 100083 (China); Zhang, Liqiu, E-mail: zhangliqiu@163.com [College of Environmental Science and Engineering, Beijing Forestry University, Beijing 100083 (China)

    2014-07-15

    Highlights: • Different chemical pollution accidents were simplified using the event tree analysis. • Emergency disposal technique plan repository of chemicals accidents was constructed. • The technique evaluation index system of chemicals accidents disposal was developed. • A combination of group decision and analytical hierarchy process (AHP) was employed. • Group decision introducing similarity and diversity factor was used for data analysis. - Abstract: The environmental pollution resulting from chemical accidents has caused increasingly serious concerns. Therefore, it is very important to be able to determine in advance the appropriate emergency treatment and disposal technology for different types of chemical accidents. However, the formulation of an emergency plan for chemical pollution accidents is considerably difficult due to the substantial uncertainty and complexity of such accidents. This paper explains how the event tree method was used to create 54 different scenarios for chemical pollution accidents, based on the polluted medium, dangerous characteristics and properties of chemicals involved. For each type of chemical accident, feasible emergency treatment and disposal technology schemes were established, considering the areas of pollution source control, pollutant non-proliferation, contaminant elimination and waste disposal. Meanwhile, in order to obtain the optimum emergency disposal technology schemes as soon as the chemical pollution accident occurs from the plan repository, the technique evaluation index system was developed based on group decision-improved analytical hierarchy process (AHP), and has been tested by using a sudden aniline pollution accident that occurred in a river in December 2012.

  18. Presentations of groups

    CERN Document Server

    Johnson, D L

    1997-01-01

    The aim of this book is to provide an introduction to combinatorial group theory. Any reader who has completed first courses in linear algebra, group theory and ring theory will find this book accessible. The emphasis is on computational techniques but rigorous proofs of all theorems are supplied. This new edition has been revised throughout, including new exercises and an additional chapter on proving that certain groups are infinite.

  19. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets of the same size. These polynomials could be interesting for applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
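
    A minimal sketch of the first-order case (the standard detrending moving average applied to an ordinary random walk, for which H ≈ 0.5) may help fix ideas; the window sizes and series length below are arbitrary choices:

```python
import math, random

# First-order detrending moving average (DMA): sigma(n) ~ n**H.
# Applied here to an ordinary random walk, so H should come out near 0.5.
def dma_sigma(y, n):
    # backward moving average over a window of n points
    ma = [sum(y[i - n + 1:i + 1]) / n for i in range(n - 1, len(y))]
    resid = [y[i] - ma[i - (n - 1)] for i in range(n - 1, len(y))]
    return math.sqrt(sum(r * r for r in resid) / len(resid))

random.seed(0)
walk, pos = [], 0.0
for _ in range(4096):
    pos += random.gauss(0.0, 1.0)
    walk.append(pos)

windows = [8, 16, 32, 64, 128]
logs = [(math.log(n), math.log(dma_sigma(walk, n))) for n in windows]
# least-squares slope of log sigma vs log n estimates the Hurst exponent H
mx = sum(x for x, _ in logs) / len(logs)
my = sum(y for _, y in logs) / len(logs)
H = sum((x - mx) * (y - my) for x, y in logs) / sum((x - mx) ** 2 for x, _ in logs)
```

    Replacing the plain moving average with a higher-order polynomial fit over the same window is what yields the trend estimates at shorter time scales described in the abstract.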

  20. Investigation of timing effects in modified composite quadrupolar echo pulse sequences by mean of average Hamiltonian theory

    Science.gov (United States)

    Mananga, Eugene Stephane

    2018-01-01

    The utility of the average Hamiltonian theory and of its antecedent, the Magnus expansion, is presented. We assessed the concept of convergence of the Magnus expansion in quadrupolar spectroscopy of spin-1 via the square of the magnitude of the average Hamiltonian. We investigated this approach for two specific modified composite pulse sequences: COM-Im and COM-IVm. It is demonstrated that the size of the square of the magnitude of the zero-order average Hamiltonian, obtained on the appropriate basis, is a viable approach to studying the convergence of the Magnus expansion. The approach turns out to be efficient for studying pulse sequences in general and can be very useful for investigating coherent averaging in the development of high-resolution NMR techniques in solids. This approach allows us to compare theoretically the two modified composite pulse sequences COM-Im and COM-IVm. We also compare theoretically the current modified composite sequences (COM-Im and COM-IVm) to the recently published modified composite pulse sequences (MCOM-I, MCOM-IV, MCOM-I_d, MCOM-IV_d).
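
    For reference, the zeroth- and first-order terms of the average Hamiltonian (Magnus) expansion over a cycle time t_c take the standard form (written here in the usual toggling-frame convention; this is textbook material, not the paper's specific derivation):

```latex
\bar{H}^{(0)} = \frac{1}{t_c}\int_{0}^{t_c} \tilde{H}(t)\,dt,
\qquad
\bar{H}^{(1)} = \frac{-i}{2 t_c}\int_{0}^{t_c}\! dt_2 \int_{0}^{t_2}\! dt_1\,
\bigl[\tilde{H}(t_2),\, \tilde{H}(t_1)\bigr]
```

    Convergence usually requires \(\lVert\tilde{H}\rVert\, t_c \ll 1\); the 'square of the magnitude' criterion of the abstract is then a statement about \(\lVert\bar{H}^{(0)}\rVert^2\).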

  1. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  2. Estimation After a Group Sequential Trial.

    Science.gov (United States)

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n_1, n_2, …, n_L}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why

  3. What does it mean to be average? : the miles per gallon versus gallons per mile paradox revisited

    NARCIS (Netherlands)

    Haans, A.

    2008-01-01

    In the efficiency paradox, which was introduced by Hand (1994; J R Stat Soc A, 157, 317-356), two groups of engineers are in disagreement about the average fuel efficiency of a set of cars. One group measured efficiency on a miles per gallon scale, the other on a gallons per mile scale. In the
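
    The paradox is easy to reproduce numerically: the arithmetic mean of miles-per-gallon values is not the reciprocal of the arithmetic mean of the corresponding gallons-per-mile values (it equals the harmonic mean instead). The two efficiencies below are hypothetical:

```python
# Two cars with hypothetical fuel efficiencies, measured on both scales.
mpg = [10.0, 40.0]                 # miles per gallon
gpm = [1.0 / m for m in mpg]       # the same cars in gallons per mile

mean_mpg = sum(mpg) / len(mpg)     # arithmetic mean on the mpg scale
mean_gpm = sum(gpm) / len(gpm)     # arithmetic mean on the gpm scale
print(mean_mpg, 1.0 / mean_gpm)    # 25.0 vs 16.0: the two groups disagree
```

    Both groups of engineers measured the same two cars, yet one reports an average of 25 mpg while the other's average is equivalent to only 16 mpg.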

  4. The use of luminescence techniques with ceramic materials for retrospective dosimetry

    International Nuclear Information System (INIS)

    Bailiff, I.K.

    1996-01-01

    Luminescence techniques are being used with ceramic materials to provide evaluations of integrated external gamma dose for dose reconstruction in populated areas contaminated by Chernobyl fallout. A range of suitable ceramics can be found associated with buildings: on the exterior surfaces (tiles), within walls (bricks) and within the interiors (porcelain fittings and tiles). Dose evaluations obtained using such samples provide information concerning the time-averaged incident gamma radiation field, average shielding factors and, with the aid of computational modelling techniques, dose estimates at external reference positions

  5. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  6. Optimization of technique factors for a silicon diode array full-field digital mammography system and comparison to screen-film mammography with matched average glandular dose

    International Nuclear Information System (INIS)

    Berns, Eric A.; Hendrick, R. Edward; Cutter, Gary R.

    2003-01-01

    Contrast-detail experiments were performed to optimize technique factors for the detection of low-contrast lesions using a silicon diode array full-field digital mammography (FFDM) system under the conditions of a matched average glandular dose (AGD) for different techniques. Optimization was performed for compressed breast thickness from 2 to 8 cm. FFDM results were compared to screen-film mammography (SFM) at each breast thickness. Four contrast-detail (CD) images were acquired on a SFM unit with optimal techniques at 2, 4, 6, and 8 cm breast thicknesses. The AGD for each breast thickness was calculated based on half-value layer (HVL) and entrance exposure measurements on the SFM unit. A computer algorithm was developed and used to determine FFDM beam current (mAs) that matched AGD between FFDM and SFM at each thickness, while varying target, filter, and peak kilovoltage (kVp) across the full range available for the FFDM unit. CD images were then acquired on FFDM for kVp values from 23-35 for a molybdenum-molybdenum (Mo-Mo), 23-40 for a molybdenum-rhodium (Mo-Rh), and 25-49 for a rhodium-rhodium (Rh-Rh) target-filter under the constraint of matching the AGD from screen-film for each breast thickness (2, 4, 6, and 8 cm). CD images were scored independently for SFM and each FFDM technique by six readers. CD scores were analyzed to assess trends as a function of target-filter and kVp and were compared to SFM at each breast thickness. For 2 cm thick breasts, optimal FFDM CD scores occurred at the lowest possible kVp setting for each target-filter, with significant decreases in FFDM CD scores as kVp was increased under the constraint of matched AGD. For 2 cm breasts, optimal FFDM CD scores were not significantly different from SFM CD scores. For 4-8 cm breasts, optimum FFDM CD scores were superior to SFM CD scores. For 4 cm breasts, FFDM CD scores decreased as kVp increased for each target-filter combination. For 6 cm breasts, CD scores decreased slightly as k

  7. Visualization of Radial Peripapillary Capillaries Using Optical Coherence Tomography Angiography: The Effect of Image Averaging.

    Directory of Open Access Journals (Sweden)

    Shelley Mo

    Full Text Available To assess the effect of image registration and averaging on the visualization and quantification of the radial peripapillary capillary (RPC) network on optical coherence tomography angiography (OCTA). Twenty-two healthy controls were imaged with a commercial OCTA system (AngioVue, Optovue, Inc.). Ten 10x10° scans of the optic disc were obtained, and the most superficial layer (a 50-μm slab extending from the inner limiting membrane) was extracted for analysis. Rigid registration was achieved using ImageJ, and averaging of each 2 to 10 frames was performed in five ~2x2° regions of interest (ROI) located 1° from the optic disc margin. The ROI were automatically skeletonized. Signal-to-noise ratio (SNR), number of endpoints and mean capillary length from the skeleton, capillary density, and mean intercapillary distance (ICD) were measured for the reference and each averaged ROI. Repeated measures analysis of variance was used to assess statistical significance. Three patients with primary open angle glaucoma were also imaged to compare RPC density to controls. Qualitatively, vessels appeared smoother and closer to histologic descriptions with an increasing number of averaged frames. Quantitatively, the number of endpoints decreased by 51%, and SNR, mean capillary length, capillary density, and ICD increased by 44%, 91%, 11%, and 4.5% from single-frame to 10-frame averaged images, respectively. The 10-frame averaged images from the glaucomatous eyes revealed decreased density correlating to visual field defects and retinal nerve fiber layer thinning. OCTA image registration and averaging is a viable and accessible method to enhance the visualization of RPCs, with significant improvements in image quality and RPC quantitative parameters. With this technique, we will be able to non-invasively and reliably study RPC involvement in diseases such as glaucoma.
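
    The underlying reason frame averaging helps is that uncorrelated noise averages down while the registered signal does not, so SNR grows roughly as the square root of the number of frames. A minimal simulation with a constant 'vessel' intensity plus Gaussian noise (synthetic values, not OCTA data) illustrates this:

```python
import math, random

# Averaging N registered frames: the signal is identical across frames,
# the noise is independent, so SNR should improve roughly as sqrt(N).
random.seed(1)
signal = 10.0            # arbitrary constant "capillary" intensity
sigma = 2.0              # per-frame noise standard deviation
n_pixels = 5000

def snr_of_average(n_frames):
    # each averaged pixel = signal + mean of n_frames independent noise draws
    pixels = [
        signal + sum(random.gauss(0.0, sigma) for _ in range(n_frames)) / n_frames
        for _ in range(n_pixels)
    ]
    mean = sum(pixels) / n_pixels
    var = sum((p - mean) ** 2 for p in pixels) / n_pixels
    return mean / math.sqrt(var)

snr1, snr9 = snr_of_average(1), snr_of_average(9)
```

    With 9 averaged frames, the simulated SNR comes out roughly three times the single-frame value, consistent with the √N rule.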

  8. Speech evaluation and dental arch shape following pushback palatoplasty in cleft palate patients: Supraperiosteal flap technique versus mucoperiosteal flap technique.

    Science.gov (United States)

    Ito, Shizuyo; Noguchi, Makoto; Suda, Yoshiyuki; Yamaguchi, Akira; Kohama, Geniku; Yamamoto, Etsuhide

    2006-04-01

    The aim of this study was to evaluate and compare the maxillary dental arch shape and speech of cleft palate patients following pushback palatoplasty using either the supraperiosteal flap technique or the mucoperiosteal flap technique. Sixty-two patients (29, cleft palate only; 33, unilateral cleft lip, alveolus and palate) operated on by the supraperiosteal technique and 47 patients (23, cleft palate only; 24, unilateral cleft lip, alveolus and palate) by the mucoperiosteal technique were reviewed in this study. Dental arch shape and speech proficiency at preschool and school age were evaluated in all patients. Dental arch shapes were classified as U type (good dental arch shape) and V type (narrow dental arch shape). In cleft palate only patients, U type was observed in 90% of the supraperiosteal group and 83% of the mucoperiosteal group. In unilateral cleft lip, alveolus and palate patients, U type was observed in 85% of the supraperiosteal group, while in only 33% of the mucoperiosteal group. In cleft palate only patients, normal speech at school age was observed in 100% of the supraperiosteal group and 83% of the mucoperiosteal group. In unilateral cleft lip, alveolus and palate patients, normal speech at school age was observed in 97% of the supraperiosteal group and 75% of the mucoperiosteal group. Misarticulation was frequently found in patients with the V type of dental arch shape. It is suggested that pushback palatoplasty using the supraperiosteal technique is more advantageous for speech development compared with the mucoperiosteal technique.

  9. Pediatric mandibular fractures: a free hand technique.

    Science.gov (United States)

    Davison, S P; Clifton, M S; Davison, M N; Hedrick, M; Sotereanos, G

    2001-01-01

    The treatment of pediatric mandibular fractures is rare, controversial, and complicated by mixed dentition. To determine if open mandibular fracture repair with intraoral and extraoral rigid plate placement, after free hand occlusal and bone reduction, without intermaxillary fixation (IMF), is appropriate and to discuss postoperative advantages, namely, maximal early return of function and minimal oral hygiene issues. A group of 29 pediatric patients with a mandibular fracture were examined. Twenty pediatric patients (13 males and 7 females) with a mean age of 9 years (age range, 1-17 years) were treated without IMF. All patients were treated by the same surgeon (G.S.). Surgical time for plating was reduced by 1 hour, the average time to place patients in IMF. The patients who underwent open reduction internal fixation without IMF ate a soft mechanical diet by postoperative day 3 compared with postoperative day 16 for those who underwent IMF. Complication rates related to fixation technique were comparable at 20% for those who did not undergo IMF and 33% for those who did. We believe that free hand reduction is a valuable technique to reduce operative time for pediatric mandibular fractures. It maximizes return to function while minimizing oral hygiene issues and the need for removal of intermaxillary fixation hardware.

  10. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.
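
    The basic ergodic-average estimator the paper builds on can be sketched as follows; the AR(1) chain, its parameters, and the monotone function f(x) = x are illustrative stand-ins, not the dominating-process construction itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def ergodic_average(n, phi=0.5, x0=10.0):
    """Ergodic (time) average of f(X_t) = X_t along one chain trajectory.

    The chain X_t = phi * X_{t-1} + eps_t has stationary mean 0, so the
    ergodic average converges to E[f(X)] = 0 even though we start far from
    stationarity (x0 = 10) and never sample the stationary law directly.
    """
    x = x0
    total = 0.0
    for _ in range(n):
        x = phi * x + rng.normal()
        total += x
    return total / n

est = ergodic_average(100_000)  # close to the true stationary mean, 0
```

    The burn-in question the abstract mentions shows up here as the transient from x0 = 10, which the long-run average washes out; the dominating-process technique bounds such transients rigorously.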

  11. Two distinct groups within the Bacillus subtilis group display significantly different spore heat resistance properties.

    Science.gov (United States)

    Berendsen, Erwin M; Zwietering, Marcel H; Kuipers, Oscar P; Wells-Bennik, Marjon H J

    2015-02-01

    The survival of bacterial spores after heat treatment and the subsequent germination and outgrowth in a food product can lead to spoilage of the food product and economical losses. Prediction of time-temperature conditions that lead to sufficient inactivation requires access to detailed spore thermal inactivation kinetics of relevant model strains. In this study, the thermal inactivation kinetics of spores of fourteen strains belonging to the Bacillus subtilis group were determined in detail, using both batch heating in capillary tubes and continuous flow heating in a micro heater. The inactivation data were fitted using a log-linear model. Based on the spore heat resistance data, two distinct groups within the B. subtilis group could be identified. One group of strains had spores with an average D120 °C of 0.33 s, while the spores of the other group displayed significantly higher heat resistances, with an average D120 °C of 45.7 s. When comparing spore inactivation data obtained using batch and continuous flow heating, the z-values were significantly different, hence extrapolation from one system to the other was not justified. This study clearly shows that heat resistances of spores from different strains in the B. subtilis group can vary greatly. Strains can be separated into two groups, to which different spore heat inactivation kinetics apply. Copyright © 2014 Elsevier Ltd. All rights reserved.
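
    The log-linear model underlying these D-values is a one-line formula: log10 N(t) = log10 N0 − t/D, where D is the decimal reduction time at the given temperature. A minimal sketch using the two average D120 °C values from the abstract:

```python
# Log-linear spore inactivation: each D seconds of heating cuts the
# surviving population by one decimal (factor of 10).
def survivors(n0, t_seconds, d_seconds):
    """Surviving spore count after heating for t at decimal reduction time D."""
    return n0 * 10 ** (-t_seconds / d_seconds)

# A 6-log reduction of the heat-sensitive group takes about 2 s at 120 °C...
t6_sensitive = 6 * 0.33
# ...but over 4.5 minutes for the heat-resistant group.
t6_resistant = 6 * 45.7
```

    This two-orders-of-magnitude gap in required holding time is why the abstract warns against applying a single set of inactivation kinetics to the whole B. subtilis group.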

  12. Clinical observation on fibrin glue technique in pterygium surgery performed with limbal autograft transplantation

    Directory of Open Access Journals (Sweden)

    Hui Liu

    2013-07-01

    Full Text Available AIM: To compare the efficiency and safety of fibrin glue versus the suture technique in pterygium surgery performed with limbal autograft. METHODS: A prospective randomized clinical trial was carried out in 60 eyes of 48 patients operated on for primary nasal pterygium. An autologous limbal graft taken from the superotemporal limbus was used to cover the sclera after pterygium excision under local anesthesia with 2% lidocaine. In 22 cases (30 eyes), the transplant was attached to the sclera with a fibrin tissue adhesive (group 1), and in 26 cases (30 eyes) with 10-0 Virgin silk sutures (group 2). Patients were followed up for at least 3 months. Time of operation, matching degree of graft, and visual analogue scale (VAS) score were mainly observed and recorded. RESULTS: Patient symptoms were significantly less and biomicroscopic findings were better in group 1. Pterygium recurrence was seen in 1 case of group 1 and 1 case of group 2. Average surgery time was significantly shorter in group 1. CONCLUSION: Using fibrin glue for graft fixation in pterygium surgery causes significantly less postoperative pain and shortens surgery time significantly.

  13. Gastroesophageal anastomosis: single-layer versus double-layer technique

    International Nuclear Information System (INIS)

    Aslam, V.A.; Bilal, A.; Khan, A.; Ahmed, M.

    2008-01-01

    Considerable controversy exists regarding the optimum technique for gastroesophageal anastomosis. Double layer technique has long been considered important for safe healing but there is evidence that single layer technique is also safe and can be performed in much shorter time. The purpose of this study was to compare the outcome of single layer and double layer techniques for gastroesophageal anastomosis. A prospective randomized study was conducted in cardiothoracic unit, Lady Reading Hospital from Jan 2006 to Jan 2008. Fifty patients with oesophageal carcinoma undergoing subtotal oesophagectomy were randomized to have the anastomosis by single layer continuous or double layer continuous technique (group A (n=24) and B (n=26) respectively). The demographic data, operative and anastomosis time, postoperative complications and hospital mortality were recorded on a proforma and analyzed on SPSS 10. There was no significant difference between group A and B in terms of age, gender, postoperative complications and duration of hospital stay. Anastomotic leak occurred in 4.2% patients in group A and 7.7% in group B (p=NS). Mean anastomosis time was 10.04 minutes in group A and 19.2 minutes in group B (p=0.0001). Mean operative time was 163.83 minutes and 170.96 minutes in group A and B respectively. Overall hospital mortality was 2%; no deaths occurred due to anastomotic leak. Single layer continuous technique is equally safe and can be performed in shorter time and at a lower cost than the double layer technique. (author)

  14. Feathering effect detection and artifact agglomeration index-based video deinterlacing technique

    Science.gov (United States)

    Martins, André Luis; Rodrigues, Evandro Luis Linhari; de Paiva, Maria Stela Veludo

    2018-03-01

    Several video deinterlacing techniques have been developed, and each one performs best under certain conditions. Occasionally, even the most modern deinterlacing techniques create frames with worse quality than primitive deinterlacing processes. This paper shows that the final image quality can be improved by combining different types of deinterlacing techniques. The proposed strategy is able to select between two types of deinterlaced frames and, if necessary, make local corrections of the defects. This decision is based on an artifact agglomeration index obtained from a feathering effect detection map. Starting from a deinterlaced frame produced by the "interfield average" method, the defective areas are identified, and, if deemed appropriate, these areas are replaced by pixels generated through the "edge-based line average" method. Test results show that the proposed technique is able to produce video frames with higher quality than applying a single deinterlacing technique, by combining the strengths of intra- and interfield methods.
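
    The two base methods the strategy selects between can be sketched on toy grayscale fields (the array layout and function names are illustrative simplifications; the paper's "edge-based line average" additionally follows edge direction):

```python
import numpy as np

def interfield_average(prev_field, next_field):
    """Temporal method: a missing line is the average of the same line in
    the adjacent fields. Sharp for static content, feathers on motion."""
    return (prev_field.astype(float) + next_field.astype(float)) / 2.0

def line_average(field):
    """Spatial (intrafield) method: a missing line is the average of the
    lines directly above and below. Motion-robust but blurs detail."""
    return (field[:-1].astype(float) + field[1:].astype(float)) / 2.0

field_t0 = np.array([[10.0, 10.0], [30.0, 30.0]])
field_t2 = np.array([[20.0, 20.0], [50.0, 50.0]])
temporal = interfield_average(field_t0, field_t2)
spatial = line_average(field_t0)
```

    The proposed technique starts from the temporal result and swaps in spatially interpolated pixels only where the feathering-detection map flags agglomerated artifacts.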

  15. Impact of digitalization of mammographic units on average glandular doses in the Flemish Breast Cancer Screening Program

    OpenAIRE

    De Hauwere, An; Thierens, Hubert

    2012-01-01

    The impact of digitalization on the average glandular doses in 49 mammographic units participating in the Flemish Breast Cancer Screening Program was studied. Screen-film was changed to direct digital radiography and computed radiography in 25 and 24 departments respectively. Average glandular doses were calculated before and after digitalization for different PMMA-phantom thicknesses and for groups of 50 successive patients. For the transition from screen-film to computed radiography both ph...

  16. Advanced pulse oximeter signal processing technology compared to simple averaging. I. Effect on frequency of alarms in the operating room.

    Science.gov (United States)

    Rheineck-Leyssius, A T; Kalkman, C J

    1999-05-01

    To determine the effect of a new signal processing technique (Oxismart, Nellcor, Inc., Pleasanton, CA) on the incidence of false pulse oximeter alarms in the operating room (OR). Prospective observational study. Nonuniversity hospital. 53 ASA physical status I, II, and III consecutive patients undergoing general anesthesia with tracheal intubation. In the OR we compared the number of alarms produced by a recently developed third-generation pulse oximeter (Nellcor Symphony N-3000) with the Oxismart signal processing technique and a conventional pulse oximeter (Criticare 504). Three pulse oximeters were used simultaneously in each patient: a Nellcor pulse oximeter, a Criticare with the signal averaging time set at 3 seconds (Criticareaverage3s) and a similar unit with the signal averaging time set at 21 seconds (Criticareaverage21s). For each pulse oximeter, the number of false (artifact) alarms was counted. One false alarm was produced by the Nellcor (duration 55 sec) and one false alarm by the Criticareaverage21s monitor (5 sec). The incidence of false alarms was higher with Criticareaverage3s: in eight patients it produced 20 false alarms, significantly more than either the Nellcor monitor with Oxismart signal processing or the Criticare monitor with the longer averaging time of 21 seconds.
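
    The effect of averaging time on artifact alarms can be sketched with a simple moving-average alarm counter. The SpO2 trace, threshold, and window sizes below are illustrative and are not the monitors' actual algorithms:

```python
import numpy as np

def count_alarms(spo2, window, threshold=90.0):
    """Count alarm onsets: the moving average of SpO2 dropping below threshold."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(spo2, kernel, mode="valid")
    below = smoothed < threshold
    # an alarm event starts where `below` flips False -> True
    return int(np.sum(below[1:] & ~below[:-1]) + below[0])

# Hypothetical trace: steady 98% saturation with one 4-sample motion
# artifact reading 70% (a false desaturation)
spo2 = np.full(120, 98.0)
spo2[50:54] = 70.0

alarms_short = count_alarms(spo2, window=3)   # short averaging: artifact alarms
alarms_long = count_alarms(spo2, window=21)   # long averaging: artifact absorbed
```

    The short window lets the brief artifact pull the average below the alarm threshold, while the 21-sample window dilutes it, mirroring the study's contrast between the 3-second and 21-second averaging settings (at the cost of slower response to true desaturations).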

  17. Instantaneous equations for multiphase flow in porous media without length-scale restrictions using a non-local averaging volume

    International Nuclear Information System (INIS)

    Espinosa-Paredes, Gilberto

    2010-01-01

    The aim of this paper is to propose a framework to obtain a new formulation for multiphase flow conservation equations without length-scale restrictions, based on the non-local form of the averaged volume conservation equations. The simplification of the local averaging volume of the conservation equations to obtain practical equations is subject to the following length-scale restrictions: d << l << L, where d is the characteristic length of the dispersed phases, l is the characteristic length of the averaging volume, and L is the characteristic length of the physical system. If the foregoing inequality does not hold, or if the scale of the problem of interest is of the order of l, the averaging technique and therefore, the macroscopic theories of multiphase flow should be modified in order to include appropriate considerations and terms in the corresponding equations. In these cases the local form of the averaged volume conservation equations are not appropriate to describe the multiphase system. As an example of the conservation equations without length-scale restrictions, the natural circulation boiling water reactor was consider to study the non-local effects on the thermal-hydraulic core performance during steady-state and transient behaviors, and the results were compared with the classic local averaging volume conservation equations.

  18. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  19. APPLICATION OF THE GROUP TO GROUP EXCHANGE (GGE) ACTIVE LEARNING METHOD TO IMPROVE THE CREATIVITY AND LEARNING ACHIEVEMENT IN MEASUREMENT TOOLS OF TENTH GRADE STUDENTS OF SMK MUHAMMADIYAH 3 KLATEN UTARA IN THE 2014/2015 ACADEMIC YEAR

    Directory of Open Access Journals (Sweden)

    Mursyid Zuny Aziz

    2015-12-01

    Full Text Available This study aimed to describe (1) learning creativity in the measurement tool subject among the tenth grade students of SMK Muhammadiyah 3 Klaten Utara in the academic year 2014/2015 and (2) learning achievement in the measurement tool subject among the same students. This study was an action research. Data were collected through tests and documentation. This study shows that (1) the implementation of group to group exchange could improve learning creativity: the learning creativity score in cycle I was 45.23, while in cycle II it was 45.23; and (2) the implementation of group to group exchange could improve learning achievement in the measurement tool subject. There was an improvement of scores from cycle I to cycle II: the average score in cycle I was 58.85, and in cycle II it was 75.14. It can be stated that the implementation of group to group exchange improved the learning creativity and learning achievement in the measurement tool subject among the tenth grade students of SMK Muhammadiyah 3 Klaten Utara in the academic year 2014/2015.

  20. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model outcome with examples and simulation results obtained using the NS2 simulator.
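
    A minimal sketch of the iterative idea, assuming the standard WFQ property that each backlogged flow receives a weight-proportional share of the remaining link capacity while under-loaded flows keep only their input rate (function and variable names are mine, not the paper's):

```python
def wfq_average_bandwidth(link_speed, weights, input_rates):
    """Iteratively assign each flow min(input rate, weighted share);
    capacity left over by under-loaded flows is redistributed."""
    n = len(weights)
    alloc = [0.0] * n
    active = set(range(n))
    capacity = link_speed
    while active:
        total_w = sum(weights[i] for i in active)
        # flows whose demand fits inside their current weighted share
        satisfied = [i for i in active
                     if input_rates[i] <= capacity * weights[i] / total_w]
        if not satisfied:
            # everyone left is backlogged: split remaining capacity by weight
            for i in active:
                alloc[i] = capacity * weights[i] / total_w
            break
        for i in satisfied:
            alloc[i] = input_rates[i]
            capacity -= input_rates[i]
            active.remove(i)
    return alloc

# 100 Mbit/s link, three flows: flow 0 only offers 10 Mbit/s, so its
# unused share is redistributed 1:2 between the two backlogged flows.
alloc = wfq_average_bandwidth(100.0, [1, 1, 2], [10.0, 200.0, 200.0])
```

    Here the under-loaded flow keeps its 10 Mbit/s and the remaining 90 Mbit/s is split 30/60 by weight, which is the fixed point the paper's iteration converges to.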

  1. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that the cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m + Ω̄_R + Ω̄_Λ + Ω̄_Q = 1, where Ω̄_m, Ω̄_R and Ω̄_Λ correspond to the standard Friedmannian parameters, while Ω̄_Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters, with the former providing the framework for interpreting observations with a 'Friedmannian bias'.

  2. Electrical Resistance Imaging of Two-Phase Flow With a Mesh Grouping Technique Based On Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Lee, Bo An; Kim, Bong Seok; Ko, Min Seok; Kim, Kyung Young; Kim, Sin

    2014-01-01

    An electrical resistance tomography (ERT) technique combining the particle swarm optimization (PSO) algorithm with the Gauss-Newton method is applied to the visualization of two-phase flows. In the ERT, the electrical conductivity distribution, namely the conductivity values of pixels (numerical meshes) comprising the domain in the context of a numerical image reconstruction algorithm, is estimated with the known injected currents through the electrodes attached on the domain boundary and the measured potentials on those electrodes. In spite of many favorable characteristics of ERT such as no radiation, low cost, and high temporal resolution compared to other tomography techniques, one of the major drawbacks of ERT is low spatial resolution due to the inherent ill-posedness of conventional image reconstruction algorithms. In fact, the number of known data is much less than that of the unknowns (meshes). Recalling that binary mixtures like two-phase flows consist of only two substances with distinct electrical conductivities, this work adopts the PSO algorithm for mesh grouping to reduce the number of unknowns. In order to verify the enhanced performance of the proposed method, several numerical tests are performed. The comparison between the proposed algorithm and conventional Gauss-Newton method shows significant improvements in the quality of reconstructed images

  3. ELECTRICAL RESISTANCE IMAGING OF TWO-PHASE FLOW WITH A MESH GROUPING TECHNIQUE BASED ON PARTICLE SWARM OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    BO AN LEE

    2014-02-01

    Full Text Available An electrical resistance tomography (ERT) technique combining the particle swarm optimization (PSO) algorithm with the Gauss-Newton method is applied to the visualization of two-phase flows. In the ERT, the electrical conductivity distribution, namely the conductivity values of pixels (numerical meshes) comprising the domain in the context of a numerical image reconstruction algorithm, is estimated with the known injected currents through the electrodes attached on the domain boundary and the measured potentials on those electrodes. In spite of many favorable characteristics of ERT such as no radiation, low cost, and high temporal resolution compared to other tomography techniques, one of the major drawbacks of ERT is low spatial resolution due to the inherent ill-posedness of conventional image reconstruction algorithms. In fact, the number of known data is much less than that of the unknowns (meshes). Recalling that binary mixtures like two-phase flows consist of only two substances with distinct electrical conductivities, this work adopts the PSO algorithm for mesh grouping to reduce the number of unknowns. In order to verify the enhanced performance of the proposed method, several numerical tests are performed. The comparison between the proposed algorithm and conventional Gauss-Newton method shows significant improvements in the quality of reconstructed images.

  4. Electrical Resistance Imaging of Two-Phase Flow With a Mesh Grouping Technique Based On Particle Swarm Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Bo An; Kim, Bong Seok; Ko, Min Seok; Kim, Kyung Young; Kim, Sin [Jeju National Univ., Jeju (Korea, Republic of)

    2014-02-15

    An electrical resistance tomography (ERT) technique combining the particle swarm optimization (PSO) algorithm with the Gauss-Newton method is applied to the visualization of two-phase flows. In the ERT, the electrical conductivity distribution, namely the conductivity values of pixels (numerical meshes) comprising the domain in the context of a numerical image reconstruction algorithm, is estimated with the known injected currents through the electrodes attached on the domain boundary and the measured potentials on those electrodes. In spite of many favorable characteristics of ERT such as no radiation, low cost, and high temporal resolution compared to other tomography techniques, one of the major drawbacks of ERT is low spatial resolution due to the inherent ill-posedness of conventional image reconstruction algorithms. In fact, the number of known data is much less than that of the unknowns (meshes). Recalling that binary mixtures like two-phase flows consist of only two substances with distinct electrical conductivities, this work adopts the PSO algorithm for mesh grouping to reduce the number of unknowns. In order to verify the enhanced performance of the proposed method, several numerical tests are performed. The comparison between the proposed algorithm and conventional Gauss-Newton method shows significant improvements in the quality of reconstructed images.
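
    A compact sketch of the mesh-grouping idea: under the two-phase assumption only two conductivity values need to be estimated, so a generic PSO can search that 2-D space. The synthetic "measurements", group labels, and PSO hyperparameters below are illustrative; the paper couples PSO with a full Gauss-Newton ERT solver rather than this toy misfit:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: 50 meshes, each belonging to one of two phases with a
# distinct conductivity (liquid = 1.0, gas = 5.0, hypothetical values).
true_sigma = np.array([1.0, 5.0])
labels = rng.integers(0, 2, size=50)   # assumed-known group membership
measured = true_sigma[labels]          # stand-in for ERT measurement data

def objective(candidate):
    """Misfit between a candidate pair of conductivities and the data."""
    return float(np.sum((np.asarray(candidate)[labels] - measured) ** 2))

def pso(objective, dim=2, n_particles=20, iters=100, lo=0.0, hi=10.0):
    """Standard global-best PSO: inertia 0.7, cognitive/social weights 1.5."""
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

estimate = pso(objective)  # converges near the two true conductivities
```

    Reducing the unknowns from one value per mesh to one value per group is what restores well-posedness; in the paper the grouping itself is also part of the PSO search.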

  5. The average cost of measles cases and adverse events following vaccination in industrialised countries

    Directory of Open Access Journals (Sweden)

    Kou Ulla

    2002-09-01

    Full Text Available Abstract Background Even though the annual incidence rate of measles has dramatically decreased in industrialised countries since the implementation of universal immunisation programmes, cases continue to occur in countries where endemic measles transmission has been interrupted and in countries where adequate levels of immunisation coverage have not been maintained. The objective of this study is to develop a model to estimate the average cost per measles case and per adverse event following measles immunisation, using the Netherlands (NL), the United Kingdom (UK) and Canada as examples. Methods Parameter estimates were based on a review of the published literature. A decision tree was built to represent the complications associated with measles cases and adverse events following immunisation. Monte-Carlo simulation techniques were used to account for uncertainty. Results From the perspective of society, we estimated the average cost per measles case to be US$276, US$307 and US$254 for the NL, the UK and Canada, respectively, and the average cost of adverse events following immunisation per vaccinee to be US$1.43, US$1.93 and US$1.51 for the NL, UK and Canada, respectively. Conclusions These average cost estimates could be combined with incidence estimates and costs of immunisation programmes to provide estimates of the cost of measles to industrialised countries. Such estimates could be used as a basis to estimate the potential economic gains of global measles eradication.
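
    The decision-tree-plus-Monte-Carlo machinery can be sketched in a few lines. The branch probabilities and costs below are invented for illustration and are not the paper's parameter estimates:

```python
import random

random.seed(0)

# A flattened one-level decision tree: each simulated measles case falls
# into one branch with an associated cost (all figures hypothetical).
BRANCHES = [
    # (probability, cost in US$)
    (0.80, 150.0),   # uncomplicated case
    (0.15, 600.0),   # outpatient complication
    (0.05, 2500.0),  # hospitalisation
]

def sample_cost():
    """Draw one case's cost by walking the branch probabilities."""
    u = random.random()
    cumulative = 0.0
    for p, cost in BRANCHES:
        cumulative += p
        if u < cumulative:
            return cost
    return BRANCHES[-1][1]

n = 200_000
average_cost = sum(sample_cost() for _ in range(n)) / n
# converges to the tree's expected value: 0.80*150 + 0.15*600 + 0.05*2500 = 335
```

    In the actual study the probabilities and costs are themselves uncertain, so each Monte-Carlo draw also resamples those parameters, yielding a distribution (not just a point estimate) for the average cost per case.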

  6. Development of a high average current polarized electron source with long cathode operational lifetime

    Energy Technology Data Exchange (ETDEWEB)

    C. K. Sinclair; P. A. Adderley; B. M. Dunham; J. C. Hansknecht; P. Hartmann; M. Poelker; J. S. Price; P. M. Rutt; W. J. Schneider; M. Steigerwald

    2007-02-01

    Substantially more than half of the electromagnetic nuclear physics experiments conducted at the Continuous Electron Beam Accelerator Facility of the Thomas Jefferson National Accelerator Facility (Jefferson Laboratory) require highly polarized electron beams, often at high average current. Spin-polarized electrons are produced by photoemission from various GaAs-based semiconductor photocathodes, using circularly polarized laser light with photon energy slightly larger than the semiconductor band gap. The photocathodes are prepared by activation of the clean semiconductor surface to negative electron affinity using cesium and oxidation. Historically, in many laboratories worldwide, these photocathodes have had short operational lifetimes at high average current, and have often deteriorated fairly quickly in ultrahigh vacuum even without electron beam delivery. At Jefferson Lab, we have developed a polarized electron source in which the photocathodes degrade exceptionally slowly without electron emission, and in which ion back bombardment is the predominant mechanism limiting the operational lifetime of the cathodes during electron emission. We have reproducibly obtained cathode 1/e dark lifetimes over two years, and 1/e charge density and charge lifetimes during electron beam delivery of over 2×10^5 C/cm² and 200 C, respectively. This source is able to support uninterrupted high average current polarized beam delivery to three experimental halls simultaneously for many months at a time. Many of the techniques we report here are directly applicable to the development of GaAs photoemission electron guns to deliver high average current, high brightness unpolarized beams.

  7. Decision rules and group rationality: cognitive gain or standstill?

    Directory of Open Access Journals (Sweden)

    Petru Lucian Curşeu

    Full Text Available Recent research in group cognition points towards the existence of collective cognitive competencies that transcend individual group members' cognitive competencies. Since rationality is a key cognitive competence for group decision making, and group cognition emerges from the coordination of individual cognition during social interactions, this study tests the extent to which collaborative and consultative decision rules impact the emergence of group rationality. Using a set of decision tasks adapted from the heuristics and biases literature, we evaluate rationality as the extent to which individual choices are aligned with a normative ideal. We further operationalize group rationality as cognitive synergy (the extent to which collective rationality exceeds average or best individual rationality in the group), and we test the effect of collaborative and consultative decision rules in a sample of 176 groups. Our results show that the collaborative decision rule has superior synergic effects as compared to the consultative decision rule. The ninety-one groups working in a collaborative fashion made more rational choices (above and beyond the average rationality of their members) than the eighty-five groups working in a consultative fashion. Moreover, the groups using a collaborative decision rule were closer to the rationality of their best member than groups using consultative decision rules. Nevertheless, on average groups did not outperform their best member. Therefore, our results reveal how decision rules prescribing interpersonal interactions impact the emergence of collective cognitive competencies. They also open potential venues for further research on the emergence of collective rationality in human decision-making groups.

  8. Decision rules and group rationality: cognitive gain or standstill?

    Science.gov (United States)

    Curşeu, Petru Lucian; Jansen, Rob J G; Chappin, Maryse M H

    2013-01-01

    Recent research in group cognition points towards the existence of collective cognitive competencies that transcend individual group members' cognitive competencies. Since rationality is a key cognitive competence for group decision making, and group cognition emerges from the coordination of individual cognition during social interactions, this study tests the extent to which collaborative and consultative decision rules impact the emergence of group rationality. Using a set of decision tasks adapted from the heuristics and biases literature, we evaluate rationality as the extent to which individual choices are aligned with a normative ideal. We further operationalize group rationality as cognitive synergy (the extent to which collective rationality exceeds average or best individual rationality in the group), and we test the effect of collaborative and consultative decision rules in a sample of 176 groups. Our results show that the collaborative decision rule has superior synergic effects as compared to the consultative decision rule. The ninety-one groups working in a collaborative fashion made more rational choices (above and beyond the average rationality of their members) than the eighty-five groups working in a consultative fashion. Moreover, the groups using a collaborative decision rule were closer to the rationality of their best member than groups using consultative decision rules. Nevertheless, on average groups did not outperform their best member. Therefore, our results reveal how decision rules prescribing interpersonal interactions impact the emergence of collective cognitive competencies. They also open potential venues for further research on the emergence of collective rationality in human decision-making groups.
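
    The two synergy comparisons in the abstract reduce to simple differences; the scores below are hypothetical rationality scores, not the study's data:

```python
# Weak cognitive synergy: group rationality vs. the average member.
def weak_synergy(group_score, member_scores):
    return group_score - sum(member_scores) / len(member_scores)

# Strong cognitive synergy: group rationality vs. the best member.
def strong_synergy(group_score, member_scores):
    return group_score - max(member_scores)

members = [4.0, 6.0, 8.0]  # hypothetical individual rationality scores
group = 7.0                # hypothetical group score

ws = weak_synergy(group, members)    # positive: group beats its average member
ss = strong_synergy(group, members)  # negative: group trails its best member
```

    This pattern—positive weak synergy, negative strong synergy—is exactly the study's headline result for the average group.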

  9. Asymmetry within social groups

    DEFF Research Database (Denmark)

    Barker, Jessie; Loope, Kevin J.; Reeve, H. Kern

    2016-01-01

    Social animals vary in their ability to compete with group members over shared resources and also vary in their cooperative efforts to produce these resources. Competition among groups can promote within-group cooperation, but many existing models of intergroup cooperation do not explicitly account...... of two roles, with relative competitive efficiency and the number of individuals varying between roles. Players in each role make simultaneous, coevolving decisions. The model predicts that although intergroup competition increases cooperative contributions to group resources by both roles, contributions...... are predominantly from individuals in the less competitively efficient role, whereas individuals in the more competitively efficient role generally gain the larger share of these resources. When asymmetry in relative competitive efficiency is greater, a group's per capita cooperation (averaged across both roles...

  10. Composite Techniques Based Color Image Compression

    Directory of Open Access Journals (Sweden)

    Zainab Ibrahim Abood

    2017-03-01

Compression of color images is now necessary for transmission and storage in databases, since color gives a pleasing and natural appearance to any object. Three composite techniques for color image compression are therefore implemented to achieve high compression, no loss of the original image, better performance and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W) and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each composite technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the lowest values of bits per pixel (bpp), time (T) and rate distortion R(D). The values of the compression parameters of the color image are also nearly the same as the average values of the compression parameters of the three bands of the same image.
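Two of the compression parameters named above, CR and bpp, follow directly from the original and compressed sizes. A minimal sketch, with image dimensions chosen for illustration:

```python
def compression_metrics(original_bytes, compressed_bytes, width, height):
    """Compression ratio (CR) and bits per pixel (bpp) of a coded image."""
    cr = original_bytes / compressed_bytes      # higher is better
    bpp = compressed_bytes * 8 / (width * height)  # lower is better
    return cr, bpp

# hypothetical 512x512 RGB image (786432 bytes) coded down to 49152 bytes
cr, bpp = compression_metrics(786432, 49152, 512, 512)
```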

  11. Results of a bone splint technique for the treatment of lower limb deformities in children with type I osteogenesis imperfecta

    Directory of Open Access Journals (Sweden)

    Dasheng Lin

    2013-01-01

    Full Text Available Background: Children with osteogenesis imperfecta (OI can suffer from frequent fractures and limb deformities, resulting in impaired ambulation. Osteopenia and thin cortices complicate orthopedic treatment in this group. This study evaluates the clinical results of a bone splint technique for the treatment of lower limb deformities in children with type I OI. The technique consists of internal plating combined with cortical strut allograft fixation. Materials and Methods: We prospectively followed nine children (five boys, four girls with lower limb deformities due to type I OI, who had been treated with the bone splint technique (11 femurs, four tibias between 2003 and 2006. The fracture healing time, deformity improvement, ambulation ability and complications were recorded to evaluate treatment effects. Results: At the time of surgery the average age in our study was 7.7 years (range 5-12 years. The average length of followup was 69 months (range 60-84 months. All patients had good fracture healing with an average healing time of 14 weeks (range 12-16 weeks and none experienced further fractures, deformity, or nonunion. The fixation remained stable throughout the procedure in all cases, with no evidence of loosening or breakage of screws and the deformity and mobility significantly improved after surgery. Of the two children confined to bed before surgery, one was able to walk on crutches and the other needed a wheelchair. The other seven patients could walk without walking aids or support like crutches. Conclusions: These findings suggest that the bone splint technique provides good mechanical support and increases the bone mass. It is an effective treatment for children with OI and lower limb deformities.

  12. Description of group-theoretical model of developed turbulence

    International Nuclear Information System (INIS)

    Saveliev, V L; Gorokhovski, M A

    2008-01-01

We propose to associate the phenomenon of stationary turbulence with special self-similar solutions of the Euler equations. These solutions represent the linear superposition of eigenfields of spatial symmetry subgroup generators and depend on time only through the parameter of the symmetry transformation. From this model it follows that, for a developed turbulent process, changing the scale of averaging (filtering) of the velocity field is equivalent to a composition of scaling, translation and rotation transformations. We call this property the renormalization-group invariance of filtered turbulent fields. This invariance provides an opportunity to transform the averaged Navier-Stokes equation from a small scale (the inner threshold of the turbulence) to larger scales by simple scaling. From the methodological point of view, it is significant to note that the turbulent viscosity term appears not as a result of averaging the nonlinear term in the Navier-Stokes equation, but from the molecular viscosity term with the help of the renormalization-group transformation.

  13. Average and local structure of α-CuI by configurational averaging

    International Nuclear Information System (INIS)

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

Configurational Boltzmann averaging together with density functional theory is used to study in detail the average and local structure of superionic α-CuI. We find that the coppers are spread out, with peaks in the atom density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function, as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance for understanding the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion pathways is expected due to the large variation in the local motifs
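The configurational Boltzmann averaging used here weights each configuration's property by its Boltzmann factor. A minimal sketch of that average; the energies, property values and temperature are invented for illustration, not taken from the study:

```python
import math

def boltzmann_average(energies_eV, values, T=700.0):
    """Configurational average <A> = sum_i A_i exp(-E_i/kT) / sum_i exp(-E_i/kT)."""
    kB = 8.617333262e-5  # Boltzmann constant in eV/K
    e0 = min(energies_eV)  # shift by the ground-state energy for stability
    weights = [math.exp(-(e - e0) / (kB * T)) for e in energies_eV]
    z = sum(weights)  # configurational partition sum
    return sum(w * v for w, v in zip(weights, values)) / z

# two hypothetical configurations: at low T the average collapses onto
# the ground-state value; degenerate configurations average evenly
low_T = boltzmann_average([0.0, 1.0], [2.0, 4.0], T=100.0)
degenerate = boltzmann_average([0.5, 0.5], [2.0, 4.0], T=300.0)
```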

  14. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...

  15. A comparison of direct technique with the liquid stylet technique of radial artery cannulation in patients presenting for coronary artery bypass grafting

    International Nuclear Information System (INIS)

    Younus, U.; Ahmed, I.

    2012-01-01

To compare the direct technique with the liquid stylet technique of radial artery cannulation in patients undergoing coronary artery bypass grafting (CABG). We hypothesized that the liquid stylet technique would lead to fewer attempts and save vital time. Study Design: Randomized Controlled Trial (RCT). Place and Duration of Study: Department of Anaesthesia and Intensive Care, Armed Forces Institute of Cardiology and National Institute of Heart Diseases (AFIC-NIHD), Rawalpindi, between 1 July 2007 and 31 December 2007. Patients and Methods: One hundred patients of either sex scheduled for CABG were included and randomized to two groups using a random number table: fifty patients in the direct technique group and fifty in the liquid stylet group. Results: The two groups were comparable with respect to age, gender and weight. The number of attempts was 1.7 ± 0.5 in group 1 vs 3.6 ± 1.6 in group 2 (p = 0.021). The time consumed was 3.3 ± 2.2 minutes in group 1 vs 8.0 ± 3.6 minutes in group 2 (p = 0.022). Conclusion: We concluded that the liquid stylet technique is safe, quick and associated with fewer attempts at cannulation. Secondly, it can be performed without guide wires and other technology, which is especially relevant in a developing country like Pakistan. (author)

  16. Sci-Thur AM: YIS – 05: Prediction of lung tumor motion using a generalized neural network optimized from the average prediction outcome of a group of patients

    Energy Technology Data Exchange (ETDEWEB)

    Teo, Troy; Alayoubi, Nadia; Bruce, Neil; Pistorius, Stephen [University of Manitoba/ CancerCare Manitoba, University of Manitoba, University of Manitoba, University of Manitoba / CancerCare Manitoba (Canada)

    2016-08-15

Purpose: In image-guided adaptive radiotherapy systems, prediction of tumor motion is required to compensate for system latencies. However, due to the non-stationary nature of respiration, it is a challenge to predict the associated tumor motions. In this work, a systematic design of the neural network (NN) is presented, using a mixture of online data acquired during the initial period of the tumor trajectory, coupled with a generalized model optimized using a group of patient data (obtained offline). Methods: The average error surface obtained from seven patients was used to determine the input data size and number of hidden neurons for the generalized NN. To reduce training time, instead of using random weights to initialize learning (method 1), weights inherited from previous training batches (method 2) were used to predict tumor position for each sliding window. Results: The generalized network was established with 35 input data (∼4.66 s) and 20 hidden nodes. For a prediction horizon of 650 ms, mean absolute errors of 0.73 mm and 0.59 mm were obtained for methods 1 and 2, respectively. An average initial learning period of 8.82 s was obtained. Conclusions: A network with a relatively short initial learning time was achieved, with accuracy comparable to previous studies. This network could be used as a plug-and-play predictor in which (a) tumor positions can be predicted as soon as treatment begins and (b) the need for pretreatment data and optimization for individual patients can be avoided.
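The sliding-window, warm-start scheme contrasted above (method 1: random initial weights; method 2: weights inherited from the previous batch) can be sketched as follows. This is an illustration only: a simple linear predictor trained by gradient descent stands in for the 35-input, 20-hidden-node network, and the breathing-like trace is synthetic:

```python
import numpy as np

def make_windows(signal, n_in, horizon):
    """Slice a 1-D trace into (past-window, future-sample) training pairs."""
    X, y = [], []
    for i in range(len(signal) - n_in - horizon + 1):
        X.append(signal[i:i + n_in])
        y.append(signal[i + n_in + horizon - 1])
    return np.array(X), np.array(y)

def train(X, y, w=None, lr=0.01, epochs=200, seed=0):
    """Gradient-descent fit of a linear predictor. Passing the weights of
    the previous sliding window warm-starts training (method 2); w=None
    starts from fresh random weights (method 1)."""
    if w is None:
        w = np.random.default_rng(seed).normal(scale=0.1, size=X.shape[1])
    w = np.array(w, dtype=float)   # do not mutate the caller's weights
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# synthetic respiratory trace; 35-sample input window as in the abstract
t = np.arange(400) * 0.133                # ~7.5 Hz sampling (hypothetical)
sig = np.sin(2 * np.pi * t / 4.0)         # 4 s breathing period
X, y = make_windows(sig, n_in=35, horizon=5)
w = train(X[:200], y[:200])               # initial learning period
w = train(X[200:], y[200:], w=w)          # next window, warm-started
pred = float(X[-1] @ w)                   # predicted future position
```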

  17. Sci-Thur AM: YIS – 05: Prediction of lung tumor motion using a generalized neural network optimized from the average prediction outcome of a group of patients

    International Nuclear Information System (INIS)

    Teo, Troy; Alayoubi, Nadia; Bruce, Neil; Pistorius, Stephen

    2016-01-01

Purpose: In image-guided adaptive radiotherapy systems, prediction of tumor motion is required to compensate for system latencies. However, due to the non-stationary nature of respiration, it is a challenge to predict the associated tumor motions. In this work, a systematic design of the neural network (NN) is presented, using a mixture of online data acquired during the initial period of the tumor trajectory, coupled with a generalized model optimized using a group of patient data (obtained offline). Methods: The average error surface obtained from seven patients was used to determine the input data size and number of hidden neurons for the generalized NN. To reduce training time, instead of using random weights to initialize learning (method 1), weights inherited from previous training batches (method 2) were used to predict tumor position for each sliding window. Results: The generalized network was established with 35 input data (∼4.66 s) and 20 hidden nodes. For a prediction horizon of 650 ms, mean absolute errors of 0.73 mm and 0.59 mm were obtained for methods 1 and 2, respectively. An average initial learning period of 8.82 s was obtained. Conclusions: A network with a relatively short initial learning time was achieved, with accuracy comparable to previous studies. This network could be used as a plug-and-play predictor in which (a) tumor positions can be predicted as soon as treatment begins and (b) the need for pretreatment data and optimization for individual patients can be avoided.

  18. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...

  19. Characterizing individual painDETECT symptoms by average pain severity

    Directory of Open Access Journals (Sweden)

    Sadosky A

    2016-07-01

Alesia Sadosky,1 Vijaya Koduru,2 E Jay Bienen,3 Joseph C Cappelleri4 1Pfizer Inc, New York, NY, 2Eliassen Group, New London, CT, 3Outcomes Research Consultant, New York, NY, 4Pfizer Inc, Groton, CT, USA Background: painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods: Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results: A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness for mild vs moderate pain) and the highest probability was 76.4% (on cold/heat for mild vs severe pain). The pain radiation item was significant (P<0.05) and consistent with pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion: painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner.
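The quantity the ridit analysis estimates, the probability that a randomly selected subject from one severity level has a more favorable item score than one from a comparator level, can be sketched as a pairwise comparison with ties counted as one half. The 0-5 item ratings below are invented for illustration:

```python
def favorable_probability(group_a, group_b):
    """P(a random subject from group_a has a lower, i.e. better, item
    score than a random subject from group_b); ties count as 1/2."""
    wins = sum(
        1.0 if a < b else 0.5 if a == b else 0.0
        for a in group_a for b in group_b
    )
    return wins / (len(group_a) * len(group_b))

# hypothetical severity ratings on one painDETECT item
mild = [0, 1, 1, 2, 2]
severe = [2, 3, 3, 4, 5]
p = favorable_probability(mild, severe)  # probability > 0.5 favors the mild group
```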

  20. Adaptive neuro-fuzzy based inferential sensor model for estimating the average air temperature in space heating systems

    Energy Technology Data Exchange (ETDEWEB)

    Jassar, S.; Zhao, L. [Department of Electrical and Computer Engineering, Ryerson University, 350 Victoria Street, Toronto, ON (Canada); Liao, Z. [Department of Architectural Science, Ryerson University (Canada)

    2009-08-15

Heating systems are conventionally controlled by open-loop control systems because of the absence of practical methods for estimating the average air temperature in the built environment. An inferential sensor model, based on adaptive neuro-fuzzy inference system (ANFIS) modeling, for estimating the average air temperature in multi-zone space heating systems is developed. This modeling technique combines the expert knowledge of fuzzy inference systems (FISs) with the learning capability of artificial neural networks (ANNs). A hybrid learning algorithm, which combines the least-squares method and the back-propagation algorithm, is used to identify the parameters of the network. This paper describes an adaptive-network-based inferential sensor that can be used to design closed-loop control for space heating systems. The research aims to improve the overall performance of heating systems, in terms of energy efficiency and thermal comfort. The average air temperatures estimated by the developed model agree closely with the experimental results. (author)

  1. [IMPACT OF PERIOPERATIVE AVERAGE BLOOD-GLUCOSE LEVEL ON PROGNOSIS OF PATIENTS WITH HIP FRACTURE AND DIABETES MELLITUS].

    Science.gov (United States)

    Wang, Guoqi; Long, Anhua; Zhang, Lihai; Zhang, Hao; Yin, Peng; Tang, Peifu

    2014-07-01

To explore the impact of the perioperative average blood-glucose level on the prognosis of patients with hip fracture and diabetes mellitus, a retrospective analysis was made of the clinical data of 244 patients with hip fracture and diabetes mellitus who met the inclusion criteria between September 2009 and September 2012. Of the 244 patients, 125 with poorly controlled fasting blood glucose (average fasting blood-glucose level > 7.8 mmol/L) were assigned to group A, and 119 with well controlled fasting blood glucose (average fasting blood-glucose level ≤ 7.8 mmol/L) were assigned to group B, according to the "China guideline for type 2 diabetes" criteria. There was no significant difference in gender, age, duration of diabetes mellitus, serum albumin, fracture type and duration, surgical procedure, anaesthesia, or complications between the 2 groups (P > 0.05). Group A had a higher hemoglobin level and fewer patients who could perform outdoor activities than group B (t = -2.353, P = 0.020; χ2 = 4.333, P = 0.037). The hospitalization time, days to await surgery, stitch removal time, postoperative complication rate, mortality at 1 month and 1 year after operation, and ambulatory ability at 1 year after operation were compared between the 2 groups. A total of 223 patients (114 in group A and 109 in group B) were followed up for 12-15 months (mean, 13.5 months). The days to await surgery in group A were significantly more than in group B (t = -2.743, P = 0.007), but no significant difference was found in hospitalization time or stitch removal time between the 2 groups (P > 0.05). The postoperative complication rate of group A (19.2%, 24/125) was significantly higher than that of group B (8.4%, 10/119) (χ2 = 5.926, P = 0.015). Group A had a higher mortality at 1 month after operation than group B (6.1% vs. 0) (χ2 = 5.038, P = 0.025), but no significant difference was shown at 1 year after operation between groups A and B (8.8% vs. 4

  2. Online Learning for Students from Diverse Backgrounds: Learning Disability Students, Excellent Students and Average Students

    Directory of Open Access Journals (Sweden)

    Miri Shonfeld

    2015-09-01

The perceived contribution of a science education online course to pre-service students (N=121) from diverse backgrounds - 25 students with learning disabilities (LD), 28 excellent students and 68 average students - is presented in this five-year research. During the online course, students were asked to choose a scientific subject; to map it and to plan teaching activities; to carry out the proposed activities with students in a classroom experience; and to reflect on the process. The assumption was that adapting the online course by using information and communication technology following formative assessment would improve students' self-learning ability as well as broaden their science knowledge, their lab performance and teaching skills. Data were collected using quantitative and qualitative tools, including pre and post questionnaires and nine in-depth interviews (three students from each group) upon completion of the course. Findings, based on students' perceived evaluation, pinpointed the advantages of the online course for students of all three groups. LD students' achievements were not inferior to those of their peers, the excellent and average students. Yet, the study carefully reports a slight but marginal difference in the perceived evaluation of the LD students in comparison to excellent and average students regarding forum participation, authentic tasks and water lab performance. The article discusses the affordances of the online course via additional features that can be grouped into two categories: knowledge construction and flexibility in time, interaction and knowledge. Further research is suggested to extend the current study by examining the effect of other courses and different contents, and by considering various evaluation methods of online courses, such as observation, think-aloud protocols, text and task analysis, and reflection.

  3. Cognitive distance, absorptive capacity and group rationality: a simulation study.

    Directory of Open Access Journals (Sweden)

    Petru Lucian Curşeu

We report the results of a simulation study in which we explore the joint effect of group absorptive capacity (as the average individual rationality of the group members) and cognitive distance (as the distance between the most rational group member and the rest of the group) on the emergence of collective rationality in groups. We start from empirical results reported in the literature on group rationality as a collective group-level competence and use data on real-life groups of four and five to validate a mathematical model. We then use this mathematical model to predict group-level scores from a variety of possible group configurations (varying both in cognitive distance and average individual rationality). Our results show that both group competence and cognitive distance are necessary conditions for emergent group rationality. Group configurations in which the groups become more rational than the most rational group member are groups scoring low on cognitive distance and high on absorptive capacity.

  4. Cognitive distance, absorptive capacity and group rationality: a simulation study.

    Science.gov (United States)

    Curşeu, Petru Lucian; Krehel, Oleh; Evers, Joep H M; Muntean, Adrian

    2014-01-01

    We report the results of a simulation study in which we explore the joint effect of group absorptive capacity (as the average individual rationality of the group members) and cognitive distance (as the distance between the most rational group member and the rest of the group) on the emergence of collective rationality in groups. We start from empirical results reported in the literature on group rationality as collective group level competence and use data on real-life groups of four and five to validate a mathematical model. We then use this mathematical model to predict group level scores from a variety of possible group configurations (varying both in cognitive distance and average individual rationality). Our results show that both group competence and cognitive distance are necessary conditions for emergent group rationality. Group configurations, in which the groups become more rational than the most rational group member, are groups scoring low on cognitive distance and scoring high on absorptive capacity.

  5. Using Psychodrama Techniques to Promote Counselor Identity Development in Group Supervision

    Science.gov (United States)

    Scholl, Mark B.; Smith-Adcock, Sondra

    2007-01-01

    The authors briefly introduce the concepts, techniques, and theory of identity development associated with J. L. Moreno's (1946, 1969, 1993) Psychodrama. Based upon Loganbill, Hardy, and Delworth's (1982) model, counselor identity development is conceptualized as consisting of seven developmental themes or vectors (e.g., issues of awareness and…

  6. Average Albedos of Close-in Super-Earths and Super-Neptunes from Statistical Analysis of Long-cadence Kepler Secondary Eclipse Data

    Science.gov (United States)

    Sheets, Holly A.; Deming, Drake

    2017-10-01

We present the results of our work to determine the average albedo for small, close-in planets in the Kepler candidate catalog. We have adapted our method of averaging short-cadence light curves of multiple Kepler planet candidates to long-cadence data, in order to detect an average albedo for the group of candidates. Long-cadence data exist for many more candidates than short-cadence data, and so we separate the candidates into smaller radius bins than in our previous work: 1-2 R⊕, 2-4 R⊕, and 4-6 R⊕. We find that, on average, all three groups appear darker than suggested by the short-cadence results, but not as dark as many hot Jupiters. The average geometric albedos for the three groups are 0.11 ± 0.06, 0.05 ± 0.04, and 0.23 ± 0.11, respectively, for the case where heat is uniformly distributed about the planet. If heat redistribution is inefficient, the albedos are even lower, since there will be a greater thermal contribution to the total light from the planet. We confirm that newly identified false positive Kepler Object of Interest (KOI) 1662.01 is indeed an eclipsing binary at twice the period listed in the planet candidate catalog. We also newly identify planet candidate KOI 4351.01 as an eclipsing binary, and we report a secondary eclipse measurement for Kepler-4b (KOI 7.01) of ∼7.50 ppm at a phase of ∼0.7, indicating that the planet is on an eccentric orbit.
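A geometric albedo of the kind reported above follows from inverting the reflected-light eclipse depth relation, depth = A_g (Rp/a)^2. A minimal sketch, neglecting any thermal contribution; the numbers are hypothetical:

```python
def geometric_albedo(depth_ppm, rp_over_a):
    """Invert depth = A_g * (Rp/a)^2 for the geometric albedo A_g.
    Assumes the eclipse depth is purely reflected light."""
    return depth_ppm * 1e-6 / rp_over_a ** 2

# hypothetical close-in planet: Rp/a = 0.01, eclipse depth 10 ppm
a_g = geometric_albedo(10.0, 0.01)
```

If heat redistribution is inefficient, part of the measured depth is thermal emission, so this inversion overestimates A_g, which is why the abstract's albedos drop further in that case.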

  7. Theoretical Issues in Clinical Social Group Work.

    Science.gov (United States)

    Randall, Elizabeth; Wodarski, John S.

    1989-01-01

    Reviews relevant issues in clinical social group practice including group versus individual treatment, group work advantages, approach rationale, group conditions for change, worker role in group, group composition, group practice technique and method, time as group work dimension, pretherapy training, group therapy precautions, and group work…

  8. Light-cone averaging in cosmology: formalism and applications

    International Nuclear Information System (INIS)

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-01-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ''geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ''redshift drift'' in a generic inhomogeneous Universe

  9. X-ray structure determination of new monomers to establish their polymerizability: copolymerization of two tetrasubstituted electrophilic olefins with electron-rich styrenes giving polymers with an average 1.25 functional groups per chain carbon atom

    International Nuclear Information System (INIS)

    Hall, H.K. Jr.; Reineke, K.E.; Ried, J.H.; Sentman, R.C.; Miller, D.

    1982-01-01

X-ray crystal structure determination for two tetrasubstituted electrophilic olefins, tetramethyl ethylenetetracarboxylate (TMET) and dimethyl dicyanofumarate (DDCF), revealed two fundamentally different molecular structures. TMET is a nonplanar molecule in which two opposite ester groups are planar and the other two lie above and below the molecular plane. In contrast, in DDCF both ester groups lie in the plane of the double bond and nitrile groups. DDCF underwent thermal spontaneous copolymerization with electron-rich styrenes to give 1:1 alternating copolymers in moderate yields and molecular weights. These copolymers, which result from the first copolymerization of a tetrasubstituted olefin, possess an average of 1.25 functional groups per chain carbon atom. Polymerization is made possible by low steric hindrance and the high delocalization in the propagating radical. The yields were limited by a competing cycloaddition reaction. The corresponding diethyl ester also copolymerized, though less readily. Neither electrophilic olefin homopolymerized under γ-irradiation, and TMET did not copolymerize at all when treated under identical conditions

  10. Typology of end-of-life priorities in Saudi females: averaging analysis and Q-methodology

    Directory of Open Access Journals (Sweden)

    Hammami MM

    2016-05-01

Muhammad M Hammami,1,2 Safa Hammami,1 Hala A Amer,1 Nesrine A Khodr1 1Clinical Studies and Empirical Ethics Department, King Faisal Specialist Hospital and Research Centre, 2College of Medicine, Alfaisal University, Riyadh, Saudi Arabia Background: Understanding culture- and sex-related end-of-life preferences is essential to provide quality end-of-life care. We have previously explored end-of-life choices in Saudi males and found important culture-related differences, and that Q-methodology is useful in identifying intraculture, opinion-based groups. Here, we explore Saudi females' end-of-life choices. Methods: A volunteer sample of 68 females rank-ordered 47 opinion statements on end-of-life issues into a nine-category symmetrical distribution. The ranking scores of the statements were analyzed by averaging analysis and Q-methodology. Results: The mean age of the females in the sample was 30.3 years (range, 19-55 years). Among them, 51% reported average religiosity, 78% reported very good health, 79% reported very good life quality, and 100% reported high-school education or more. The five highest overall priorities were to be able to say the statement of faith, be at peace with God, die without having the body exposed, maintain dignity, and resolve all conflicts. The five lowest overall priorities were to die in the hospital, die well dressed, be informed about impending death by family/friends rather than doctor, die at the peak of life, and not know if one has a fatal illness. Q-methodology identified five opinion-based groups with qualitatively different characteristics: “physical and emotional privacy concerned, family caring” (younger, lower religiosity), “whole person” (higher religiosity), “pain and informational privacy concerned” (lower life quality), “decisional privacy concerned” (older, higher life quality), and “life quantity concerned, family dependent” (high life quality, low life satisfaction). Out of the

  11. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
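The entropy lower bound stated above can be computed directly; this is a sketch of the bound itself, not the chapter's code:

```python
import math

def entropy_lower_bound(probabilities, k=2):
    """Lower bound on the minimum average depth of a decision tree over a
    k-valued information system: H(p) / log2(k)."""
    h = -sum(p * math.log2(p) for p in probabilities if p > 0)
    return h / math.log2(k)

# uniform distribution over 8 outcomes, binary attributes:
# bound is 3, matched exactly by a balanced tree of depth 3
bound = entropy_lower_bound([1 / 8] * 8, k=2)
```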

  12. Lateral dispersion coefficients as functions of averaging time

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1980-01-01

    Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations at other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate among the various processes in studies of plume dispersion.
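Turner's power-law relation between mean concentrations at different averaging times, referred to above, can be sketched as follows (the exponent value is an assumed typical one, not taken from this paper):

```python
def concentration_for_averaging_time(c_ref, t_ref_min, t_min, p=0.17):
    """Turner-style power law relating mean concentrations at two
    averaging times: C_t = C_ref * (t_ref / t)**p.
    The exponent p (~0.17-0.2) is an assumed typical value."""
    return c_ref * (t_ref_min / t_min) ** p

# A 15-min average of 100 ug/m^3 rescaled to a 60-min averaging time:
print(round(concentration_for_averaging_time(100.0, 15, 60), 1))  # → 79.0
```

Longer averaging times yield lower peak concentrations because meandering smears the plume laterally.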

  13. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, Ū_P, the average, Ū, the effective, U_eff, or the maximum peak, U_P, tube voltage. This work proposes a method for determination of the PPV from measurements with a kV-meter that measures the average, Ū, or the average peak, Ū_P, voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak (k_PPV,kVp) and the average (k_PPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference and the calculated (according to the proposed method) PPV values were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous base to determine the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
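The conversion the abstract describes (kV-meter reading to PPV via a calibration coefficient and a ripple-dependent conversion factor) reduces to a simple product. A sketch with illustrative numbers (the factor values are hypothetical, not the paper's regression fits):

```python
def ppv_from_reading(reading_kv, calibration_coeff, conversion_factor):
    """Convert a kV-meter reading (average or average-peak voltage)
    to the practical peak voltage (PPV).  The calibration coefficient
    and the ripple-dependent conversion factor k_PPV are assumed to
    come from the meter's calibration certificate and the regression
    fits described in the paper; the values below are illustrative."""
    return reading_kv * calibration_coeff * conversion_factor

# Illustrative: 80 kV average-peak reading, unity calibration
# coefficient, hypothetical conversion factor of 0.98:
print(round(ppv_from_reading(80.0, 1.0, 0.98), 1))  # → 78.4
```

In practice, the conversion factor would be looked up (or computed from the regression equations) for the specific tube voltage and ripple.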

  14. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... Role of Positive Definite Matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices. Tanvi Jain. Averaging operations on matrices ...

  15. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...

  16. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  17. Development of one-energy group, two-dimensional, frequency dependent detector adjoint function based on the nodal method

    International Nuclear Information System (INIS)

    Khericha, Soli T.

    2000-01-01

    A one-energy group, two-dimensional computer code was developed to calculate the response of a detector to a vibrating absorber in a reactor core. A concept of local/global components, based on the frequency-dependent detector adjoint function, and a nodalization technique were utilized. The frequency-dependent detector adjoint functions, represented by complex equations, were expanded into real and imaginary parts. In the nodalization technique, the flux is expanded into polynomials about the center point of each node. The phase angle and the magnitude of the one-energy group detector adjoint function were calculated for a detector located in the center of a 200x200 cm reactor using the two-dimensional nodalization technique, the computer code EXTERMINATOR, and the analytical solution. The purpose of this research was to investigate the applicability of a polynomial nodal model technique to the calculation of the real and imaginary parts of the detector adjoint function. The results show that the nodal model technique can be used to calculate the detector adjoint function and the phase angle. Using the computer code developed for the nodal model technique, the magnitude of the one-energy group frequency-dependent detector adjoint function and the phase angle were calculated for a detector located in the center of a 200x200 cm homogeneous reactor. The real part of the detector adjoint function was compared with the results obtained from the EXTERMINATOR computer code as well as the analytical solution based on a double sine series expansion using the classical Green's function solution. The values were found to be less than 1% greater at 20 cm away from the source region and about 3% greater closer to the source, compared to the values obtained from the analytical solution and the EXTERMINATOR code. The currents at the node interfaces matched within 1% of the average

  18. Comparison of dural puncture epidural technique versus conventional epidural technique for labor analgesia in primigravida

    Directory of Open Access Journals (Sweden)

    Pritam Yadav

    2018-01-01

    Full Text Available Background: Dural puncture epidural (DPE) is a method in which a dural hole is created prior to epidural injection. This study was planned to evaluate whether dural puncture improves onset and duration of labor analgesia when compared to the conventional epidural technique. Methods and Materials: Sixty term primigravida parturients of ASA grade I and II were randomly assigned to two groups of 30 each (Group E for conventional epidural and Group DE for dural puncture epidural). In group E, the epidural space was identified and an 18-gauge multi-orifice catheter was threaded 5 cm into the epidural space. In group DE, the dura was punctured using the combined spinal epidural (CSE) spinal needle and the epidural catheter threaded as in group E, followed by injection of 10 ml of Ropivacaine (0.2%) with 20 mcg of Fentanyl (2 mcg/ml) in fractions of 2.5 ml. Later, Ropivacaine 10 ml was given as a top-up on patient request. Onset, visual analogue scale (VAS) score, sensory and motor block, haemodynamic variables, and duration of analgesia of the initial dose were noted, along with mode of delivery and neonatal outcome. Results: Six parturients in group DE achieved adequate analgesia in 5 minutes while none of those in group E did (P<0.05); the remaining parameters were comparable between the two groups (P>0.05). Conclusions: Both techniques of labor analgesia are efficacious; dural puncture epidural has the potential to fasten onset and improve quality of labor analgesia when compared with the conventional epidural technique.

  19. COMPARISON OF THE EFFECTIVENESS OF RADICULAR BLOCKING TECHNIQUES IN THE TREATMENT OF LUMBAR DISK HERNIA

    Directory of Open Access Journals (Sweden)

    Igor de Barcellos Zanon

    2015-12-01

    Full Text Available Objective: To compare the interlaminar blocking technique with the transforaminal blocking technique with regard to pain and the presence or absence of complications. Methods: Prospective, descriptive, comparative, double-blind, randomized study with 40 patients of both sexes suffering from sciatic pain due to central-lateral or foraminal disc herniation, who did not respond to 20 physiotherapy sessions and had no instability diagnosed on examination of dynamic radiography. The type of blocking, transforaminal or interlaminar, to be performed was determined by draw. Results: We evaluated 40 patients, 17 males, mean age 49 years, with an average pre-blocking VAS of 8.85; the average post-blocking VAS values for the transforaminal technique at 24 hours, 7, 21, and 90 days were 0.71, 1.04, 2.33, and 3.84, respectively; the average post-blocking VAS values for the interlaminar technique were 0.89, 1.52, 3.63, and 4.88. The techniques differed only in the post-blocking period of 21 days and overall post-blocking, with significance of p=0.022 and p=0.027, respectively. Conclusion: Both techniques are effective in relieving pain and present a low complication rate, and the transforaminal technique proved to be the most effective.

  20. Fundamentals - state of the art of radiation techniques

    International Nuclear Information System (INIS)

    Wogman, N.A.

    1982-01-01

    In minerals exploration and extraction, nuclear techniques have several advantages. The techniques are elementally specific and their exploration range varies from a few millimeters in average rock formations to more than a meter. Because of the heterogeneous disposition of minerals and difficult environments in which measurements are required (in boreholes, on conveyor belts, in bunkers), interrogating techniques are required which exhibit both elemental specificity and range. It is for these fundamental reasons that nuclear techniques are the only possible techniques which satisfy all requirements. A variety of techniques have been developed and used. These are based on energy dispersive x-ray fluorescence (EDXRF), measurement of natural gamma-ray radiation, gamma-ray attenuation and scattering, and on neutron interactions. This paper discusses the fundamentals of these four techniques and their applications. A table is also provided listing some existing selected applications of nuclear techniques in mineral exploration, mining and processing

  1. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  2. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  3. Application of phaco prechop with phaco chop technique in phacoemulsification

    Directory of Open Access Journals (Sweden)

    Wei Liu

    2014-03-01

    Full Text Available AIM: To compare two phaco techniques, namely phaco prechop with phaco chop and divide and conquer, and to discuss the technical advantages of phaco prechop with phaco chop. METHODS: The study included 131 patients (156 eyes) with age-related cataract, divided into 2 groups: group A, comprising 68 patients (82 eyes), in which phaco prechop with phaco chop was performed, and group B, comprising 63 patients (74 eyes), in which divide and conquer was performed. The mean parameters including average power (AP), U/S time, accumulated energy complex parameter (AECP), mean endothelial cell count, mean endothelial cell loss, intraoperative complications, postoperative uncorrected visual acuity (UCVA) at 1d and 1wk, and corneal edema were reported in the two groups both preoperatively and postoperatively. RESULTS: The subgroups with the same grade of lens nucleus hardness were compared. Parameters such as AP, U/S time, and AECP in group A were significantly less than those in group B. Postoperative corneal clarity and UCVA at 1d in group A were better than in group B. No significant difference was found in UCVA at 1wk after operation between the two groups. The difference in mean endothelial cell count at 3mo postoperatively between the two groups was statistically insignificant (P>0.05); however, the difference in endothelial cell loss at 3mo postoperatively between the two groups was statistically significant (P<0.05). CONCLUSION: Compared with divide and conquer, phaco prechop with phaco chop utilized less phaco time and energy, produced a lower rate of endothelial cell loss at 3mo postoperatively, and gave better early postoperative uncorrected visual acuity.

  4. Influence of Averaging Method on the Evaluation of a Coastal Ocean Color Event on the U.S. Northeast Coast

    Science.gov (United States)

    Acker, James G.; Uz, Stephanie Schollaert; Shen, Suhung; Leptoukh, Gregory G.

    2010-01-01

    Application of appropriate spatial averaging techniques is crucial to correct evaluation of ocean color radiometric data, due to the common log-normal or mixed log-normal distribution of these data. Averaging method is particularly crucial for data acquired in coastal regions. The effect of averaging method was markedly demonstrated for a precipitation-driven event on the U.S. Northeast coast in October-November 2005, which resulted in export of high concentrations of riverine colored dissolved organic matter (CDOM) to New York and New Jersey coastal waters over a period of several days. Use of the arithmetic mean averaging method created an inaccurate representation of the magnitude of this event in SeaWiFS global mapped chl a data, causing it to be visualized as a very large chl a anomaly. The apparent chl a anomaly was enhanced by the known incomplete discrimination of CDOM and phytoplankton chlorophyll in SeaWiFS data; other data sources enable an improved characterization. Analysis using the geometric mean averaging method did not indicate this event to be statistically anomalous. Our results predicate the necessity of providing the geometric mean averaging method for ocean color radiometric data in the Goddard Earth Sciences DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni).
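The difference between arithmetic and geometric averaging of log-normally distributed data, which drives the effect described above, can be demonstrated with synthetic data (the distribution parameters are illustrative, not SeaWiFS values):

```python
import math
import random

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # Average in log space, then transform back -- appropriate for
    # log-normally distributed quantities such as chlorophyll a.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

random.seed(0)
# Synthetic log-normal "chl a" sample (parameters are illustrative).
sample = [random.lognormvariate(mu=0.0, sigma=1.0) for _ in range(10000)]

# The arithmetic mean is pulled upward by the heavy right tail
# (expected value exp(sigma^2 / 2) ~ 1.65), while the geometric mean
# recovers the median exp(mu) = 1.0.
print(arithmetic_mean(sample), geometric_mean(sample))
```

A localized high-concentration event thus inflates arithmetic-mean composites far more than geometric-mean ones, which is why the event above appeared anomalous under one averaging method but not the other.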

  5. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  6. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  7. Measurement of void fractions by nuclear techniques

    International Nuclear Information System (INIS)

    Hernandez G, A.; Vazquez G, J.; Diaz H, C.; Salinas R, G.A.

    1997-01-01

    In this work, a general analysis is made of the techniques used to determine void fractions, and a nuclear technique is chosen for use in the heat transfer circuit of the Physics Department of the Basic Sciences Management. The methods used for the determination of void fractions are: radioactive absorption, acoustic techniques, average velocity measurement, electromagnetic flow measurement, optical methods, oscillating absorption, nuclear magnetic resonance, the relation between pressure and flow oscillation, infrared absorption methods, and sound neutron analysis. This work deals with the radioactive absorption method, which is based on gamma-ray absorption. (Author)
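For the gamma-ray absorption method the abstract settles on, a common way to estimate the void fraction uses Beer-Lambert attenuation with full-liquid and full-gas calibration intensities; this is a general sketch of that approach, not the specific circuit's procedure (the counts are illustrative):

```python
import math

def void_fraction(i_measured, i_liquid, i_gas):
    """Gamma-ray attenuation (Beer-Lambert) estimate of the void
    fraction.  With reference beam intensities through an all-liquid
    (I_l) and an all-gas (I_g) test section,
        alpha = ln(I / I_l) / ln(I_g / I_l),
    since attenuation is exponential in the liquid path length.
    The reference counts would come from calibration runs."""
    return math.log(i_measured / i_liquid) / math.log(i_gas / i_liquid)

# Illustrative counts: 400 (all liquid), 1000 (all gas), 600 measured.
alpha = void_fraction(600.0, 400.0, 1000.0)
print(round(alpha, 3))  # → 0.443
```

The two calibration runs bracket the measurement, so alpha is 0 for a liquid-filled section and 1 for a gas-filled one.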

  8. Laparoscopic pyloromyotomy: redefining the advantages of a novel technique.

    Science.gov (United States)

    Caceres, Manuel; Liu, Donald

    2003-01-01

    With recent advances in minimally invasive techniques, many surgeons are favoring laparoscopic over traditional "open" pyloromyotomy for hypertrophic pyloric stenosis. Few studies, however, exist in the literature adequately comparing surgical outcome. We present a retrospective analysis of 56 consecutive patients who underwent laparoscopic or open pyloromyotomy. A retrospective chart review of 56 consecutive infants (ages: 2 to 9 weeks; weights: 2.2 to 5.4 kilograms) who underwent laparoscopic (Group A, n=28) vs open (Group B, n=28) pyloromyotomy between January 2000 and May 2001 was performed. Preoperative (age, sex, weight, HCO3, and K values) and postoperative (operating time, time to full feedings, persistence of emesis, and hospital stay) parameters were compared. Statistical analysis was performed via the Student t test and chi-square/Fisher analysis where appropriate. A P value <0.05 was considered significant; preoperative parameters were similar between the groups (P>0.05). In Group A, 26/28 (92.9%) were completed successfully with 2 open conversions. Group A versus Group B average operating times (36.1 vs 32.5 minutes), time to full feedings (24.1 vs 27.0 hours), and hospital stay (2.5 vs 2.6 days) were similar (P>0.05). Persistent vomiting was observed in Group A in 25.0% (day 1)/3.5% (day 2) vs Group B in 39.3% (day 1)/10.7% (day 2). One infant in Group B required operative drainage of a wound abscess 1 week after surgery. Laparoscopic pyloromyotomy can be performed with similar efficiency and surgical outcome as traditional open pyloromyotomy. Improved cosmesis and avoidance of wound complications are major benefits of this procedure, and a tendency towards less postoperative emesis is a potential benefit that deserves further investigation.

  9. TRAN-STAT: statistics for environmental studies, Number 22. Comparison of soil-sampling techniques for plutonium at Rocky Flats

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Bernhardt, D.E.; Hahn, P.B.

    1983-01-01

    A summary of a field soil sampling study conducted around the Rocky Flats, Colorado, plant in May 1977 is presented. Several different soil sampling techniques that had been used in the area were applied at four different sites. One objective was to compare the average 239-240Pu concentration values obtained by the various soil sampling techniques used. There was also interest in determining whether there are differences in the reproducibility of the various techniques and how the techniques compared with the proposed EPA technique of sampling to a 1 cm depth. Statistically significant differences in average concentrations between the techniques were found. The differences could be largely related to the differences in sampling depth, the primary physical variable between the techniques. The reproducibility of the techniques was evaluated by comparing coefficients of variation. Differences between coefficients of variation were not statistically significant. Average (median) coefficients ranged from 21 to 42 percent for the five sampling techniques. A laboratory study indicated that various sample treatment and particle sizing techniques could increase the concentration of plutonium in the less than 10 micrometer size fraction by up to a factor of about 4 compared to the 2 mm size fraction.
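The coefficient-of-variation comparison this study uses to assess reproducibility can be sketched as follows (the replicate values are hypothetical, not Rocky Flats data):

```python
import statistics

def coefficient_of_variation(xs):
    """CV = sample standard deviation / mean, expressed in percent;
    used to compare the reproducibility of sampling techniques on
    replicate measurements at the same site."""
    return 100.0 * statistics.stdev(xs) / statistics.mean(xs)

# Hypothetical Pu concentration replicates (arbitrary units) from
# two sampling techniques applied at the same site:
shallow = [1.10, 0.85, 1.30, 0.95, 1.05]
deep = [0.40, 0.55, 0.35, 0.50, 0.45]
print(round(coefficient_of_variation(shallow), 1),
      round(coefficient_of_variation(deep), 1))
```

A technique with a systematically larger CV across sites would be the less reproducible one; in the study above the CV differences were not statistically significant.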

  10. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  11. A NEM diffusion code for fuel management and time average core calculation

    International Nuclear Information System (INIS)

    Mishra, Surendra; Ray, Sherly; Kumar, A.N.

    2005-01-01

    A computer code based on the nodal expansion method has been developed for solving the two-group, three-dimensional diffusion equation. This code can be used for fuel management and time-average core calculation. Explicit xenon and fuel temperature estimation are also incorporated in this code. TAPP-4 phase-B physics experimental results were analyzed using this code and a code based on the finite-difference (FD) method. This paper gives a comparison of the observed data and the results obtained with this code and the FD code. (author)

  12. Average gluon and quark jet multiplicities at higher orders

    Energy Technology Data Exchange (ETDEWEB)

    Bolzoni, Paolo; Kniehl, Bernd A. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Kotikov, Anatoly V. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Joint Institute of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics

    2013-05-15

    We develop a new formalism for computing and including both the perturbative and nonperturbative QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new method is motivated by recent progress in timelike small-x resummation obtained in the MS factorization scheme. We obtain next-to-next-to-leading-logarithmic (NNLL) resummed expressions, which represent generalizations of previous analytic results. Our expressions depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets that are compatible with regard to the jet algorithms demonstrates by its goodness how our results solve a longstanding problem of QCD. We show that the statistical and theoretical uncertainties both do not exceed 5% for scales above 10 GeV. We finally propose to use the jet multiplicity data as a new way to extract the strong-coupling constant. Including all the available theoretical input within our approach, we obtain α_s^(5)(M_Z) = 0.1199 ± 0.0026 in the MS scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of ln(x) terms through the NNLL level and of ln Q^2 terms by the renormalization group, in excellent agreement with the present world average.

  13. Hospitals Productivity Measurement Using Data Envelopment Analysis Technique.

    Science.gov (United States)

    Torabipour, Amin; Najarzadeh, Maryam; Arab, Mohammad; Farzianpour, Freshteh; Ghasemzadeh, Roya

    2014-11-01

    This study aimed to measure hospital productivity using the data envelopment analysis (DEA) technique and Malmquist indices. This is a cross-sectional study in which panel data were used over a 4-year period from 2007 to 2010. The research was implemented in 12 teaching and non-teaching hospitals of Ahvaz County. The data envelopment analysis technique and the Malmquist indices with an input-orientation approach were used to analyze the data and estimate productivity. Data were analyzed using the SPSS.18 and DEAP.2 software. Six hospitals (50%) had a value lower than 1, which represents an increase in total productivity; the other hospitals were non-productive. The average total productivity factor (TPF) was 1.024 for all hospitals, which represents a decrease in efficiency of 2.4% from 2007 to 2010. The average technical, technologic, scale, and managerial efficiency changes were 0.989, 1.008, 1.028, and 0.996, respectively. There was no significant difference in mean productivity changes between teaching and non-teaching hospitals (P>0.05), except in 2009. Productivity rates of the hospitals generally showed an increasing trend; however, the total average productivity decreased. Among the components of total productivity, variation in technological efficiency had the highest impact on the reduction of the total average productivity.
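Input-oriented DEA efficiency, as used in this study, generally requires solving a linear program per hospital; in the degenerate single-input/single-output case it reduces to a ratio comparison, sketched here (the hospital figures are hypothetical):

```python
def ccr_efficiency(inputs, outputs):
    """Input-oriented CCR (constant returns to scale) efficiency for
    the single-input/single-output case: each unit's output-to-input
    ratio divided by the best ratio in the sample.  Multi-factor DEA
    needs a linear program per unit; this is the degenerate case."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical hospitals: beds (input) vs. inpatient admissions (output).
beds = [100.0, 150.0, 200.0]
admissions = [5000.0, 6000.0, 10000.0]
print([round(e, 2) for e in ccr_efficiency(beds, admissions)])  # → [1.0, 0.8, 1.0]
```

Units scoring 1.0 lie on the efficient frontier; the Malmquist index then tracks how each unit's efficiency and the frontier itself shift between years.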

  14. Development of a high average current polarized electron source with long cathode operational lifetime

    Directory of Open Access Journals (Sweden)

    C. K. Sinclair

    2007-02-01

    Full Text Available Substantially more than half of the electromagnetic nuclear physics experiments conducted at the Continuous Electron Beam Accelerator Facility of the Thomas Jefferson National Accelerator Facility (Jefferson Laboratory) require highly polarized electron beams, often at high average current. Spin-polarized electrons are produced by photoemission from various GaAs-based semiconductor photocathodes, using circularly polarized laser light with photon energy slightly larger than the semiconductor band gap. The photocathodes are prepared by activation of the clean semiconductor surface to negative electron affinity using cesium and oxidation. Historically, in many laboratories worldwide, these photocathodes have had short operational lifetimes at high average current, and have often deteriorated fairly quickly in ultrahigh vacuum even without electron beam delivery. At Jefferson Lab, we have developed a polarized electron source in which the photocathodes degrade exceptionally slowly without electron emission, and in which ion back bombardment is the predominant mechanism limiting the operational lifetime of the cathodes during electron emission. We have reproducibly obtained cathode 1/e dark lifetimes over two years, and 1/e charge density and charge lifetimes during electron beam delivery of over 2×10^5 C/cm^2 and 200 C, respectively. This source is able to support uninterrupted high average current polarized beam delivery to three experimental halls simultaneously for many months at a time. Many of the techniques we report here are directly applicable to the development of GaAs photoemission electron guns to deliver high average current, high brightness unpolarized beams.

  15. Typology of end-of-life priorities in Saudi females: averaging analysis and Q-methodology

    Science.gov (United States)

    Hammami, Muhammad M; Hammami, Safa; Amer, Hala A; Khodr, Nesrine A

    2016-01-01

    Background Understanding culture- and sex-related end-of-life preferences is essential to provide quality end-of-life care. We have previously explored end-of-life choices in Saudi males and found important culture-related differences and that Q-methodology is useful in identifying intraculture, opinion-based groups. Here, we explore Saudi females’ end-of-life choices. Methods A volunteer sample of 68 females rank-ordered 47 opinion statements on end-of-life issues into a nine-category symmetrical distribution. The ranking scores of the statements were analyzed by averaging analysis and Q-methodology. Results The mean age of the females in the sample was 30.3 years (range, 19–55 years). Among them, 51% reported average religiosity, 78% reported very good health, 79% reported very good life quality, and 100% reported high-school education or more. The extreme five overall priorities were to be able to say the statement of faith, be at peace with God, die without having the body exposed, maintain dignity, and resolve all conflicts. The extreme five overall dis-priorities were to die in the hospital, die well dressed, be informed about impending death by family/friends rather than doctor, die at peak of life, and not know if one has a fatal illness. Q-methodology identified five opinion-based groups with qualitatively different characteristics: “physical and emotional privacy concerned, family caring” (younger, lower religiosity), “whole person” (higher religiosity), “pain and informational privacy concerned” (lower life quality), “decisional privacy concerned” (older, higher life quality), and “life quantity concerned, family dependent” (high life quality, low life satisfaction). Out of the extreme 14 priorities/dis-priorities for each group, 21%–50% were not represented among the extreme 20 priorities/dis-priorities for the entire sample. Conclusion Consistent with the previously reported findings in Saudi males, transcendence and dying in

  16. Typology of end-of-life priorities in Saudi females: averaging analysis and Q-methodology.

    Science.gov (United States)

    Hammami, Muhammad M; Hammami, Safa; Amer, Hala A; Khodr, Nesrine A

    2016-01-01

    Understanding culture- and sex-related end-of-life preferences is essential to provide quality end-of-life care. We have previously explored end-of-life choices in Saudi males and found important culture-related differences and that Q-methodology is useful in identifying intraculture, opinion-based groups. Here, we explore Saudi females' end-of-life choices. A volunteer sample of 68 females rank-ordered 47 opinion statements on end-of-life issues into a nine-category symmetrical distribution. The ranking scores of the statements were analyzed by averaging analysis and Q-methodology. The mean age of the females in the sample was 30.3 years (range, 19-55 years). Among them, 51% reported average religiosity, 78% reported very good health, 79% reported very good life quality, and 100% reported high-school education or more. The extreme five overall priorities were to be able to say the statement of faith, be at peace with God, die without having the body exposed, maintain dignity, and resolve all conflicts. The extreme five overall dis-priorities were to die in the hospital, die well dressed, be informed about impending death by family/friends rather than doctor, die at peak of life, and not know if one has a fatal illness. Q-methodology identified five opinion-based groups with qualitatively different characteristics: "physical and emotional privacy concerned, family caring" (younger, lower religiosity), "whole person" (higher religiosity), "pain and informational privacy concerned" (lower life quality), "decisional privacy concerned" (older, higher life quality), and "life quantity concerned, family dependent" (high life quality, low life satisfaction). Out of the extreme 14 priorities/dis-priorities for each group, 21%-50% were not represented among the extreme 20 priorities/dis-priorities for the entire sample. Consistent with the previously reported findings in Saudi males, transcendence and dying in the hospital were the extreme end-of-life priority and dis

  17. Separation techniques: Chromatography

    Science.gov (United States)

    Coskun, Ozlem

    2016-01-01

    Chromatography is an important biophysical technique that enables the separation, identification, and purification of the components of a mixture for qualitative and quantitative analysis. Proteins can be purified based on characteristics such as size and shape, total charge, hydrophobic groups present on the surface, and binding capacity with the stationary phase. Four separation techniques based on molecular characteristics and interaction type use mechanisms of ion exchange, surface adsorption, partition, and size exclusion. Other chromatography techniques are based on the stationary bed, including column, thin layer, and paper chromatography. Column chromatography is one of the most common methods of protein purification. PMID:28058406

  18. Group-wise partial least square regression

    NARCIS (Netherlands)

    Camacho, José; Saccenti, Edoardo

    2018-01-01

    This paper introduces the group-wise partial least squares (GPLS) regression. GPLS is a new sparse PLS technique where the sparsity structure is defined in terms of groups of correlated variables, similarly to what is done in the related group-wise principal component analysis. These groups are

  19. Sample-averaged biexciton quantum yield measured by solution-phase photon correlation.

    Science.gov (United States)

    Beyler, Andrew P; Bischof, Thomas S; Cui, Jian; Coropceanu, Igor; Harris, Daniel K; Bawendi, Moungi G

    2014-12-10

    The brightness of nanoscale optical materials such as semiconductor nanocrystals is currently limited in high excitation flux applications by inefficient multiexciton fluorescence. We have devised a solution-phase photon correlation measurement that can conveniently and reliably measure the average biexciton-to-exciton quantum yield ratio of an entire sample without user selection bias. This technique can be used to investigate the multiexciton recombination dynamics of a broad scope of synthetically underdeveloped materials, including those with low exciton quantum yields and poor fluorescence stability. Here, we have applied this method to measure weak biexciton fluorescence in samples of visible-emitting InP/ZnS and InAs/ZnS core/shell nanocrystals, and to demonstrate that a rapid CdS shell growth procedure can markedly increase the biexciton fluorescence of CdSe nanocrystals.

  20. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw radials...

  1. Use of a hybrid iterative reconstruction technique to reduce image noise and improve image quality in obese patients undergoing computed tomographic pulmonary angiography.

    Science.gov (United States)

    Kligerman, Seth; Mehta, Dhruv; Farnadesh, Mahmmoudreza; Jeudy, Jean; Olsen, Kathryn; White, Charles

    2013-01-01

    To determine whether an iterative reconstruction (IR) technique (iDose, Philips Healthcare) can reduce image noise and improve image quality in obese patients undergoing computed tomographic pulmonary angiography (CTPA). The study was Health Insurance Portability and Accountability Act compliant and approved by our institutional review board. A total of 33 obese patients (average body mass index: 42.7) underwent CTPA studies following standard departmental protocols. The data were reconstructed with filtered back projection (FBP) and 3 iDose strengths (iDoseL1, iDoseL3, and iDoseL5) for a total of 132 studies. FBP data were collected from 33 controls (average body mass index: 22) undergoing CTPA. Regions of interest were drawn at 6 identical levels in the pulmonary artery (PA), from the main PA to a subsegmental branch, in both the control group and study groups using each algorithm. Noise and attenuation were measured at all PA levels. Three thoracic radiologists graded each study on a scale of 1 (very poor) to 5 (ideal) by 4 categories: image quality, noise, PA enhancement, and "plastic" appearance. Statistical analysis was performed using an unpaired t test, 1-way analysis of variance, and linear weighted κ. Compared with the control group, there was significantly higher noise with the FBP, iDoseL1, and iDoseL3 algorithms (P < 0.05), but no significant difference between noise in the control group and the iDoseL5 algorithm in the study group. Analysis within the study group showed a significant and progressive decrease in noise and increase in the contrast-to-noise ratio as the level of IR was increased (P < 0.05), with graded scores improving for image quality, noise and PA enhancement with increasing levels of iDose. The use of an IR technique leads to qualitative and quantitative improvements in image noise and image quality in obese patients undergoing CTPA.

  2. Strengthened glass for high average power laser applications

    International Nuclear Information System (INIS)

    Cerqua, K.A.; Lindquist, A.; Jacobs, S.D.; Lambropoulos, J.

    1987-01-01

    Recent advancements in high repetition rate and high average power laser systems have put increasing demands on the development of improved solid state laser materials with high thermal loading capabilities. The authors have developed a process for strengthening a commercially available Nd doped phosphate glass utilizing an ion-exchange process. Results of thermal loading fracture tests on moderate size (160 x 15 x 8 mm) glass slabs have shown a 6-fold improvement in power loading capabilities for strengthened samples over unstrengthened slabs. Fractographic analysis of post-fracture samples has given insight into the mechanism of fracture in both unstrengthened and strengthened samples. Additional stress analysis calculations have supported these findings. In addition to processing the glass' surface during strengthening in a manner which preserves its post-treatment optical quality, the authors have developed an in-house optical fabrication technique utilizing acid polishing to minimize subsurface damage in samples prior to exchange treatment. Finally, extension of the strengthening process to alternate geometries of laser glass has produced encouraging results, which may expand the potential of strengthened glass in laser systems, making it an exciting prospect for many applications.

  3. Notes on Well-Posed, Ensemble Averaged Conservation Equations for Multiphase, Multi-Component, and Multi-Material Flows

    International Nuclear Information System (INIS)

    Ray A. Berry

    2005-01-01

    At the INL researchers and engineers routinely encounter multiphase, multi-component, and/or multi-material flows. Some examples include: reactor coolant flows; molten corium flows; dynamic compaction of metal powders; spray forming and thermal plasma spraying; plasma quench reactors; subsurface flows, particularly in the vadose zone; internal flows within fuel cells; black liquor atomization and combustion; wheat-chaff classification in combine harvesters; and Generation IV pebble bed, high-temperature gas reactors. The complexity of these flows dictates that they be examined in an averaged sense. Typically one would begin with known (or at least postulated) microscopic flow relations that hold on the "small" scale. These include continuum level conservation of mass, balance of species mass and momentum, conservation of energy, and a statement of the second law of thermodynamics often in the form of an entropy inequality (such as the Clausius-Duhem inequality). The averaged or macroscopic conservation equations and entropy inequalities are then obtained from the microscopic equations through suitable averaging procedures. At this stage a stronger form of the second law may also be postulated for the mixture of phases or materials. To render the evolutionary material flow balance system unique, constitutive equations and phase or material interaction relations are introduced from experimental observation, or by postulation, through strict enforcement of the constraints or restrictions resulting from the averaged entropy inequalities. These averaged equations form the governing equation system for the dynamic evolution of these mixture flows. Most commonly, the averaging technique utilized is either volume or time averaging or a combination of the two. The flow restrictions required for volume and time averaging to be valid can be severe, and violations of these restrictions are often found. A more general, less restrictive (and far less commonly used) type of averaging, known as ensemble averaging, is adopted in this work.

  4. EDITAR: a module for reaction rate editing and cross-section averaging within the AUS neutronics code system

    International Nuclear Information System (INIS)

    Robinson, G.S.

    1986-03-01

    The EDITAR module of the AUS neutronics code system edits one- and two-dimensional flux data pools produced by other AUS modules to form reaction rates for materials and their constituent nuclides, and to average cross sections over space and energy. The module includes a B_L flux calculation for application to cell leakage. The STATUS data pool of the AUS system is used to enable the 'unsmearing' of fluxes and nuclide editing with minimal user input. The module distinguishes between neutron and photon groups, and printed reaction rates are formed accordingly. Bilinear weighting may be used to obtain material reactivity worths and to average cross sections. Bilinear weighting is at present restricted to diffusion theory leakage estimates made using mesh-average fluxes.

  5. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  6. Structure and Dynamics of Humpback Whales Competitive Groups in Ecuador

    Directory of Open Access Journals (Sweden)

    Fernando Félix

    2015-02-01

    Full Text Available We assessed the social structure and behavior of humpback whale (Megaptera novaeangliae) competitive groups off Ecuador between July and August 2010. During this time we followed 185 whales in 22 competitive groups for 41.45 hr. The average group size was 8.4 animals (SD = 2.85). The average sighting time was 113.05 min/group (SD = 47.1). We used photographs of dorsal fins and video to record interactions and estimate an association index (AI) between each pair of whales within the groups. Sightings were divided into periods, which were defined by changes in group membership. On average, group composition changed every 30.2 min, which confirms that the structure of competitive groups is highly dynamic. Interactions between escorts were characterized by a low level of aggression. At least 60% of escorts joined or left the group together in small subunits of between two and five animals, suggesting some type of cooperative association. Although singletons, as well as pairs or trios, were able to join competitive groups at any moment, escorts that joined together were able to stay longer with the group and displace dominant escorts. Genetic analysis showed that on three occasions more than one female was present within a competitive group, suggesting either that males herd females or that large competitive groups are formed by subunits. Males and females performed similar surface displays. We propose that competition and cooperation are interrelated in humpback whales' competitive groups and that male cooperation would be an adaptive strategy either to displace dominant escorts or to fend off challengers.

  7. Using benchmarking techniques and the 2011 maternity practices infant nutrition and care (mPINC) survey to improve performance among peer groups across the United States.

    Science.gov (United States)

    Edwards, Roger A; Dee, Deborah; Umer, Amna; Perrine, Cria G; Shealy, Katherine R; Grummer-Strawn, Laurence M

    2014-02-01

    A substantial proportion of US maternity care facilities engage in practices that are not evidence-based and that interfere with breastfeeding. The CDC Survey of Maternity Practices in Infant Nutrition and Care (mPINC) showed significant variation in maternity practices among US states. The purpose of this article is to use benchmarking techniques to identify states within relevant peer groups that were top performers on mPINC survey indicators related to breastfeeding support. We used 11 indicators of breastfeeding-related maternity care from the 2011 mPINC survey and benchmarking techniques to organize and compare hospital-based maternity practices across the 50 states and Washington, DC. We created peer categories for benchmarking first by region (grouping states by West, Midwest, South, and Northeast) and then by size (grouping states by the number of maternity facilities and dividing each region into approximately equal halves based on the number of facilities). Thirty-four states had scores high enough to serve as benchmarks, and 32 states had scores low enough to reflect the lowest score gap from the benchmark on at least 1 indicator. No state served as the benchmark on more than 5 indicators and no state was furthest from the benchmark on more than 7 indicators. The small peer group benchmarks in the South, West, and Midwest were better than the large peer group benchmarks on 91%, 82%, and 36% of the indicators, respectively. In the West large, the Midwest large, the Midwest small, and the South large peer groups, 4-6 benchmarks showed that less than 50% of hospitals have ideal practice in all states. The evaluation presents benchmarks for peer group state comparisons that provide potential and feasible targets for improvement.
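The peer-group benchmarking described in this abstract (group states by region, split each region into large/small halves by facility count, take the best score within each peer group as the benchmark) can be sketched as follows. The state names, indicator name, and scores below are hypothetical illustrations, not mPINC data.

```python
# Sketch of peer-group benchmarking: peer groups are formed by region,
# then each region is split into "small" and "large" halves by number of
# maternity facilities; the benchmark is the best score in each group.

def peer_groups(states, n_split):
    """Split one region's states into small/large halves by facility count."""
    ranked = sorted(states, key=lambda s: s["facilities"])
    return {"small": ranked[:n_split], "large": ranked[n_split:]}

def benchmark(group, indicator):
    """Benchmark = highest score on the indicator within the peer group."""
    return max(s[indicator] for s in group)

# Hypothetical data: four states in one region, one indicator.
region = [
    {"name": "A", "facilities": 12, "rooming_in": 74},
    {"name": "B", "facilities": 95, "rooming_in": 61},
    {"name": "C", "facilities": 20, "rooming_in": 88},
    {"name": "D", "facilities": 150, "rooming_in": 70},
]

groups = peer_groups(region, n_split=2)              # two states per half
small_bm = benchmark(groups["small"], "rooming_in")  # best of A, C
large_bm = benchmark(groups["large"], "rooming_in")  # best of B, D
```

Identifying which state supplies each benchmark (and which sits furthest from it) then gives the score gaps discussed in the abstract.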

  8. 75 FR 19185 - Direct and Counter-Cyclical Program and Average Crop Revenue Election Program, Disaster...

    Science.gov (United States)

    2010-04-14

    ... calculations. * * * * * (b) An eligible producer of farm-raised game or sport fish may receive payments for... provided in Sec. 760.203(i), based on 60 percent of the average fair market value of the game fish or sport... because of their identity as members of a group without regard to their individual qualities. Gender is...

  9. Renormalization Group Theory

    International Nuclear Information System (INIS)

    Stephens, C. R.

    2006-01-01

    In this article I give a brief account of the development of research in the Renormalization Group in Mexico, paying particular attention to novel conceptual and technical developments associated with the tool itself, rather than applications of standard Renormalization Group techniques. Some highlights include the development of new methods for understanding and analysing two extreme regimes of great interest in quantum field theory - the "high temperature" regime and the Regge regime.

  10. Reduction of aflatoxins in dundi-cut whole red chillies (Capsicum indicum) by manual sorting technique

    International Nuclear Information System (INIS)

    Khan, M.A.; Asghar, M.A.; Ahmed, A.; Iqbal, J.; Shamsuddin, Z.A.

    2013-01-01

    Dundi-cut whole red chillies (Capsicum indicum) are the most revenue-generating commodity of Pakistan. Accordingly, the competence and magnitude of manual hand-picked sorting of red chillies on the reduction of total aflatoxins (AFs) content were assessed during the present study. AFs contents were determined by thin layer chromatography (TLC) technique. On the basis of AFs content, red chilli samples were grouped as Group A with 1 to 20 µg/kg, Group B with 20 to 30 µg/kg, Group C with 30 to 100 µg/kg and Group D quality samples with 100 to 150 µg/kg. Physically identified defects including midget/dwarfed, damaged, broken, dusty and dirty were looked for and such pods were removed. A reduction of 90-100% of AFs was achieved in Group A, 65-80% in B, 65-75% in C and 70% in D quality samples. An average of 78% reduction in AFs content was achieved. Hence, the non-destructive physical hand-picked sorting of red chillies can be applied as a rapid, safe and cost effective method for the reduction of AFs content in red chillies with preserved nutritional values. (author)

  11. Accurate measurement of imaging photoplethysmographic signals based camera using weighted average

    Science.gov (United States)

    Pang, Zongguang; Kong, Lingqin; Zhao, Yuejin; Sun, Huijuan; Dong, Liquan; Hui, Mei; Liu, Ming; Liu, Xiaohua; Liu, Lingling; Li, Xiaohui; Li, Rongji

    2018-01-01

    Imaging photoplethysmography (IPPG) is an emerging technique for the extraction of vital signs of human beings using video recordings. IPPG technology, with advantages such as non-contact measurement, low cost and easy operation, has become one research hot spot in the field of biomedicine. However, the noise disturbance caused by non-microarterial areas cannot be removed because of the uneven distribution of micro-arteries and the different signal strength of each region, which results in a low signal-to-noise ratio of IPPG signals and low accuracy of heart rate. In this paper, we propose a method of improving the signal-to-noise ratio of camera-based IPPG signals of each sub-region of the face using a weighted average. Firstly, we obtain the regions of interest (ROI) of a subject's face from the camera video. Secondly, each region of interest is tracked and feature-matched in each frame of the video, and each tracked region of the face is divided into 60×60 pixel blocks. Thirdly, the weights of the PPG signal of each sub-region are calculated based on the signal-to-noise ratio of each sub-region. Finally, we combine the IPPG signals from all the tracked ROI using a weighted average. Compared with the existing approaches, the results show that the proposed method yields a modest but significant improvement in the signal-to-noise ratio of the camera-based PPG estimate and in the accuracy of heart rate measurement.
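The SNR-weighted combination step described above can be sketched in a few lines. The SNR estimate used here (fraction of spectral power in a plausible pulse band, 0.7-4 Hz) is an illustrative assumption, not the paper's exact method; the synthetic "clean" and "noisy" traces are likewise hypothetical.

```python
import numpy as np

def snr_weights(signals, fs, band=(0.7, 4.0)):
    """Weight each sub-region trace by its in-band spectral power fraction."""
    weights = []
    for s in signals:
        s = s - s.mean()
        spec = np.abs(np.fft.rfft(s)) ** 2
        freqs = np.fft.rfftfreq(len(s), d=1.0 / fs)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        weights.append(spec[in_band].sum() / (spec.sum() + 1e-12))
    w = np.asarray(weights)
    return w / w.sum()  # normalize so the weights sum to 1

def weighted_ippg(signals, fs):
    """Combine sub-region traces into one PPG estimate by weighted average."""
    w = snr_weights(signals, fs)
    return np.average(np.vstack(signals), axis=0, weights=w)

# Toy example: a clean 1.2 Hz "pulse" trace and a noise-dominated one.
fs = 30.0
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 1.2 * t)
noisy = 0.2 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0.0, 1.0, t.size)
combined = weighted_ippg([clean, noisy], fs)
```

Because the clean trace concentrates its power inside the pulse band, it receives the larger weight, so the combined trace is dominated by the better sub-region.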

  12. Gynecomastia: evolving paradigm of management and comparison of techniques.

    Science.gov (United States)

    Petty, Paul M; Solomon, Matthias; Buchel, Edward W; Tran, Nho V

    2010-05-01

    Since 1997, the authors have used a minimally invasive technique for the management of gynecomastia using ultrasound-assisted liposuction and the arthroscopic shaver to remove breast tissue through a remote incision. This technique has allowed for a consistent, refined, "unoperated" postoperative appearance in this patient population. This study analyzes the outcomes of this procedure and compares the procedure against established techniques. A retrospective study was performed on all patients who underwent surgery for gynecomastia at the authors' institution between January of 1988 and October of 2007. A total of 227 patients were divided into four groups: group 1, open excision only (n = 45); group 2, open excision plus liposuction (n = 56); group 3, liposuction only (n = 50); and group 4, liposuction plus arthroscopic shaver (n = 76). Medical records and photographs were used to compare groups for complications and results. Complications using the liposuction plus arthroscopic shaver technique included seroma (n = 2), hematoma (n = 1), scar revision (n = 1), and skin buttonhole from the arthroscopic shaver (n = 1). There was no difference between groups in the overall incidence of complications (p > 0.05). Group 4 (liposuction plus arthroscopic shaver) had the overall highest mean score, with statistical significance between group 2 (open excision plus liposuction) and group 4 (p < 0.05). Minimally invasive management of gynecomastia is a safe and effective technique, with excellent cosmetic results and an acceptable complication rate.

  13. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
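The robust element-wise operation behind the Trimmed Grassmann Average is, at each coordinate, a trimmed mean: the extreme values are discarded before averaging, so isolated pixel outliers cannot dominate. A minimal per-coordinate sketch (the `trim` parameterization as a fraction per tail is an assumption of this sketch, not the paper's exact convention):

```python
import numpy as np

def trimmed_mean(X, trim=0.2):
    """Per-column trimmed mean of an (n_samples, n_features) array.

    `trim` is the fraction of samples removed from EACH tail before
    averaging (illustrative convention for this sketch).
    """
    X = np.sort(np.asarray(X, dtype=float), axis=0)  # sort each column
    n = X.shape[0]
    k = int(n * trim)
    return X[k : n - k].mean(axis=0)  # average the central portion only

# One gross outlier barely moves the trimmed mean, unlike the plain mean.
data = np.array([[1.0], [2.0], [3.0], [4.0], [1000.0]])
plain = data.mean(axis=0)    # pulled far off by the outlier
robust = trimmed_mean(data)  # averages only the central values 2, 3, 4
```

This element-wise robustness is what makes the approach suitable for pixel outliers in computer vision data, while keeping computation and memory costs low.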

  14. The influence of "C-factor" and light activation technique on polymerization contraction forces of resin composite

    Directory of Open Access Journals (Sweden)

    Sérgio Kiyoshi Ishikiriama

    2012-12-01

    Full Text Available OBJECTIVES: This study evaluated the influence of the cavity configuration factor ("C-Factor") and light activation technique on polymerization contraction forces of a Bis-GMA-based composite resin (Charisma, Heraeus Kulzer). MATERIAL AND METHODS: Three different pairs of steel moving bases were connected to a universal testing machine (EMIC DL 500): groups A and B - 2x2 mm (CF=0.33); groups C and D - 3x2 mm (CF=0.66); groups E and F - 6x2 mm (CF=1.5). After adjustment of the height between the pair of bases so that the resin had a volume of 12 mm³ in all groups, the material was inserted and polymerized by two different methods: pulse delay (100 mW/cm² for 5 s, a 40 s interval, then 600 mW/cm² for 20 s) and continuous pulse (600 mW/cm² for 20 s). Each configuration was light cured with both techniques. Tensions generated during polymerization were recorded for 120 s. The values were expressed as curves (Force (N) x Time (s)) and averages were compared by statistical analysis (ANOVA and Tukey's test, p<0.05). RESULTS: For the 2x2 and 3x2 bases, with a reduced C-Factor, significant differences were found between the light curing methods. For the 6x2 base, with a high C-Factor, the light curing method did not influence the contraction forces of the composite resin. CONCLUSIONS: The pulse delay technique can determine less stress at the tooth/restoration interface of adhesive restorations only when a reduced C-Factor is present.

  15. Occupational therapy with people with depression: using nominal group technique to collate clinician opinion.

    Science.gov (United States)

    Hitch, Danielle; Taylor, Michelle; Pepin, Genevieve

    2015-05-01

    The aim of this study was to obtain a consensus from clinicians regarding occupational therapy for people with depression, for the assessments and practices they use that are not currently supported by research evidence directly related to functional performance. The study also aimed to discover how many of these assessments and practices were currently supported by research evidence. Following a previously reported systematic review of assessments and practices used in occupational therapy for people with depression, a modified nominal group technique was used to discover which assessments and practices occupational therapists currently utilize. Three online surveys gathered initial data on therapeutic options (survey 1), which were then ranked (survey 2) and re-ranked (survey 3) to gain the final consensus. Twelve therapists completed the first survey, whilst 10 clinicians completed both the second and third surveys. Only 30% of the assessments and practices identified by the clinicians were supported by research evidence. A consensus was obtained on a total of 35 other assessments and interventions. These included both occupational-therapy-specific and generic assessments and interventions. Principal conclusion: Very few of the assessments and interventions identified were supported by research evidence directly related to functional performance. While a large number of options were generated, the majority of these were not occupational therapy specific.

  16. Scale-invariant Green-Kubo relation for time-averaged diffusivity

    Science.gov (United States)

    Meyer, Philipp; Barkai, Eli; Kantz, Holger

    2017-12-01

    In recent years it was shown both theoretically and experimentally that in certain systems exhibiting anomalous diffusion the time- and ensemble-averaged mean-squared displacement are remarkably different. The ensemble-averaged diffusivity is obtained from a scaling Green-Kubo relation, which connects the scale-invariant nonstationary velocity correlation function with the transport coefficient. Here we obtain the relation between time-averaged diffusivity, usually recorded in single-particle tracking experiments, and the underlying scale-invariant velocity correlation function. The time-averaged mean-squared displacement is given by 〈δ²〉 ∼ 2 D_ν t^β Δ^(ν−β), where t is the total measurement time and Δ is the lag time. Here ν is the anomalous diffusion exponent obtained from ensemble-averaged measurements, 〈x²〉 ∼ t^ν, while β ≥ −1 marks the growth or decline of the kinetic energy, 〈v²〉 ∼ t^β. Thus, we establish a connection between exponents that can be read off the asymptotic properties of the velocity correlation function, and similarly for the transport constant D_ν. We demonstrate our results with nonstationary scale-invariant stochastic and deterministic models, thereby highlighting that systems with equivalent behavior in the ensemble average can differ strongly in their time average. If the averaged kinetic energy is finite, β = 0, the time scalings of 〈δ²〉 and 〈x²〉 are identical; however, the time-averaged transport coefficient D_ν is not identical to the corresponding ensemble-averaged diffusion constant.
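The scaling relations quoted in this abstract can be restated compactly in LaTeX (this is a transcription of the abstract's own relations, with the overline marking the time average):

```latex
% Time-averaged mean-squared displacement (t: measurement time, \Delta: lag time)
\left\langle \overline{\delta^2} \right\rangle \sim 2\, D_\nu\, t^{\beta}\, \Delta^{\nu-\beta}
% Ensemble-averaged anomalous diffusion exponent \nu
\langle x^2 \rangle \sim t^{\nu}
% Growth or decline of the kinetic energy, \beta \ge -1
\langle v^2 \rangle \sim t^{\beta}
```

For finite averaged kinetic energy (β = 0) the t-dependence drops out of the first relation, so the time and ensemble averages share the same scaling even though D_ν need not equal the ensemble diffusion constant.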

  17. An agent-based simulation combined with group decision-making technique for improving the performance of an emergency department

    Directory of Open Access Journals (Sweden)

    M. Yousefi

    Full Text Available This study presents an agent-based simulation modeling in an emergency department. In a traditional approach, a supervisor (or a manager allocates the resources (receptionist, nurses, doctors, etc. to different sections based on personal experience or by using decision-support tools. In this study, each staff agent took part in the process of allocating resources based on their observation in their respective sections, which gave the system the advantage of utilizing all the available human resources during the workday by being allocated to a different section. In this simulation, unlike previous studies, all staff agents took part in the decision-making process to re-allocate the resources in the emergency department. The simulation modeled the behavior of patients, receptionists, triage nurses, emergency room nurses and doctors. Patients were able to decide whether to stay in the system or leave the department at any stage of treatment. In order to evaluate the performance of this approach, 6 different scenarios were introduced. In each scenario, various key performance indicators were investigated before and after applying the group decision-making. The outputs of each simulation were number of deaths, number of patients who leave the emergency department without being attended, length of stay, waiting time and total number of discharged patients from the emergency department. Applying the self-organizing approach in the simulation showed an average of 12.7 and 14.4% decrease in total waiting time and number of patients who left without being seen, respectively. The results showed an average increase of 11.5% in total number of discharged patients from emergency department.

  18. Image averaging of flexible fibrous macromolecules: the clathrin triskelion has an elastic proximal segment.

    Science.gov (United States)

    Kocsis, E; Trus, B L; Steer, C J; Bisher, M E; Steven, A C

    1991-08-01

    We have developed computational techniques that allow image averaging to be applied to electron micrographs of filamentous molecules that exhibit tight and variable curvature. These techniques, which involve straightening by cubic-spline interpolation, image classification, and statistical analysis of the molecules' curvature properties, have been applied to purified brain clathrin. This trimeric filamentous protein polymerizes, both in vivo and in vitro, into a wide range of polyhedral structures. Contrasted by low-angle rotary shadowing, dissociated clathrin molecules appear as distinctive three-legged structures, called "triskelions" (E. Ungewickell and D. Branton (1981) Nature 289, 420). We find triskelion legs to vary from 35 to 62 nm in total length, according to an approximately bell-shaped distribution (μ = 51.6 nm). Peaks in averaged curvature profiles mark hinges or sites of enhanced flexibility. Such profiles, calculated for each length class, show that triskelion legs are flexible over their entire lengths. However, three curvature peaks are observed in every case: their locations define a proximal segment of systematically increasing length (14.0-19.0 nm), a mid-segment of fixed length (approximately 12 nm), and a rather variable end-segment (11.6-19.5 nm), terminating in a hinge just before the globular terminal domain (approximately 7.3 nm diameter). Thus, two major factors contribute to the overall variability in leg length: (1) stretching of the proximal segment and (2) stretching of the end-segment and/or scrolling of the terminal domain. The observed elasticity of the proximal segment may reflect phosphorylation of the clathrin light chains.
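A curvature profile of a digitized filament trace, in the spirit of the straightening/curvature analysis described above, can be sketched as turning angle per unit arc length along the polyline. The details below (discrete chord-based curvature, test curves) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def curvature_profile(points):
    """Unsigned discrete curvature (turning angle / segment length) along a polyline."""
    p = np.asarray(points, dtype=float)
    seg = np.diff(p, axis=0)                        # chord vectors
    ang = np.arctan2(seg[:, 1], seg[:, 0])          # chord directions
    dang = np.abs(np.diff(np.unwrap(ang)))          # turning angle at each vertex
    # local arc-length estimate: mean of the two adjacent chord lengths
    ds = 0.5 * (np.linalg.norm(seg[:-1], axis=1) + np.linalg.norm(seg[1:], axis=1))
    return dang / ds

# Sanity checks: a straight segment has zero curvature; a circular arc of
# radius r has curvature ~ 1/r everywhere.
theta = np.linspace(0, np.pi / 2, 50)
arc = np.c_[np.cos(theta), np.sin(theta)]           # quarter of a unit circle
line = np.c_[np.linspace(0, 1, 50), np.zeros(50)]   # straight segment
```

Peaks in such a profile, averaged over many aligned molecules, are what mark the hinge sites discussed in the abstract.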

  19. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
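The formula image for § 80.205 did not survive extraction, but the symbols defined above (Vi, Si, n) correspond to a volume-weighted annual average; the sketch below assumes that standard reading, and the batch figures are hypothetical.

```python
# Volume-weighted average sulfur level over an averaging period,
# assuming S_avg = sum(Vi * Si) / sum(Vi) over all batches i.

def average_sulfur(batches):
    """`batches` is a list of (volume, sulfur_ppm) pairs, one per batch i."""
    total_volume = sum(v for v, _ in batches)
    return sum(v * s for v, s in batches) / total_volume

# Hypothetical batches: (gallons, ppm sulfur)
batches = [(10_000, 30.0), (30_000, 10.0)]
avg = average_sulfur(batches)  # (300000 + 300000) / 40000 = 15.0 ppm
```

Note that the larger low-sulfur batch pulls the average well below the simple mean of the two sulfur levels (20 ppm), which is the point of volume weighting.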

  20. Exploring Modeling Options and Conversion of Average Response to Appropriate Vibration Envelopes for a Typical Cylindrical Vehicle Panel with Rib-stiffened Design

    Science.gov (United States)

    Harrison, Phil; LaVerde, Bruce; Teague, David

    2009-01-01

    Although applications for Statistical Energy Analysis (SEA) techniques are more widely used in the aerospace industry today, opportunities to anchor the response predictions using measured data from a flight-like launch vehicle structure are still quite valuable. Response and excitation data from a ground acoustic test at the Marshall Space Flight Center permitted the authors to compare and evaluate several modeling techniques available in the SEA module of the commercial code VA One. This paper provides an example of vibration response estimates developed using different modeling approaches to both approximate and bound the response of a flight-like vehicle panel. Since both vibration response and acoustic levels near the panel were available from the ground test, the evaluation provided an opportunity to learn how well the different modeling options can match band-averaged spectra developed from the test data. Additional work was performed to understand the spatial averaging of the measurements across the panel from measured data. Finally an evaluation/comparison of two conversion approaches from the statistical average response results that are output from an SEA analysis to a more useful envelope of response spectra appropriate to specify design and test vibration levels for a new vehicle.

  1. Novel prostate brachytherapy technique: Improved dosimetric and clinical outcome

    International Nuclear Information System (INIS)

    Nobes, Jenny P.; Khaksar, Sara J.; Hawkins, Maria A.; Cunningham, Melanie J.; Langley, Stephen E.M.; Laing, Robert W.

    2008-01-01

Purpose: Erectile dysfunction following prostate brachytherapy is reported to be related to the dose received by the penile bulb. To minimise this, whilst preserving prostate dosimetry, we have developed a technique for I-125 seed brachytherapy using both stranded seeds and loose seeds delivered with a Mick applicator, and implanted via the sagittal plane on trans-rectal ultrasound. Materials and methods: Post-implant dosimetry and potency rates were compared in 120 potent patients. In Group 1, 60 patients were treated using a conventional technique of seeds implanted in a modified-uniform distribution. From January 2005, a novel technique was developed using stranded seeds peripherally and centrally distributed loose seeds implanted via a Mick applicator (Group 2). The latter technique allows greater flexibility when implanting the seeds at the apex. Each patient was prescribed a minimum peripheral dose of 145 Gy. No patients received external beam radiotherapy or hormone treatment. There was no significant difference in age or pre-implant potency score (mean IIEF-5 score 22.4 vs. 22.6, p = 0.074) between the two groups. Results: The new technique delivers lower penile bulb doses (D25 as %mPD - Group 1: 61.2 ± 35.7, Group 2: 29.7 ± 16.0; D50 as %mPD - Group 1: 45.8 ± 26.9, Group 2: 21.4 ± 11.7) and improved prostate dosimetry (D90 - Group 1: 147 Gy ± 21.1, Group 2: 155 Gy ± 16.7, p = 0.03). At 2 years, the potency rate was also improved: Group 1: 61.7%; Group 2: 83.3% (p = 0.008). Conclusions: In this study, the novel brachytherapy technique using both peripheral stranded seeds and central loose seeds delivered via a Mick applicator results in a lower penile bulb dose whilst improving prostate dosimetry, and may achieve higher potency rates.

  2. The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Godfrey, Devon J. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Page McAdams, H. [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Dobbins, James T. III [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Department of Biomedical Engineering, Department of Physics, and Medical Physics Graduate Program, Duke University Medical Center, Durham, North Carolina 27705 (United States)

    2013-02-15

Purpose: Matrix inversion tomosynthesis (MITS) uses linear systems theory and knowledge of the imaging geometry to remove tomographic blur that is present in conventional backprojection tomosynthesis reconstructions, leaving in-plane detail rendered clearly. The use of partial-pixel interpolation during the backprojection process introduces imprecision in the MITS modeling of tomographic blur, and creates low-contrast artifacts in some MITS planes. This paper examines the use of MITS slabs, created by averaging several adjacent MITS planes, as a method for suppressing partial-pixel artifacts. Methods: Human chest tomosynthesis projection data, acquired as part of an IRB-approved pilot study, were used to generate MITS planes, three-plane MITS slabs (MITSa3), five-plane MITS slabs (MITSa5), and seven-plane MITS slabs (MITSa7). These were qualitatively examined for partial-pixel artifacts and the visibility of normal and abnormal anatomy. Additionally, small (5 mm) subtle pulmonary nodules were simulated and digitally superimposed upon human chest tomosynthesis projection images, and their visibility was qualitatively assessed in the different reconstruction techniques. Simulated images of a thin wire were used to generate modulation transfer function (MTF) and slice-sensitivity profile curves for the different MITS and MITS slab techniques, and these were examined for indications of partial-pixel artifacts and frequency response uniformity. Finally, mean-subtracted, exposure-normalized noise power spectra (ENNPS) estimates were computed and compared for MITS and MITS slab reconstructions, generated from 10 sets of tomosynthesis projection data of an acrylic slab. The simulated in-plane MTF response of each technique was also combined with the square root of the ENNPS estimate to yield stochastic signal-to-noise ratio (SNR) information about the different reconstruction techniques. Results: For scan angles of 20° and 5 mm plane separation, seven MITS

  3. The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis

    Science.gov (United States)

    Godfrey, Devon J.; Page McAdams, H.; Dobbins, James T.

    2013-01-01

    Purpose: Matrix inversion tomosynthesis (MITS) uses linear systems theory and knowledge of the imaging geometry to remove tomographic blur that is present in conventional backprojection tomosynthesis reconstructions, leaving in-plane detail rendered clearly. The use of partial-pixel interpolation during the backprojection process introduces imprecision in the MITS modeling of tomographic blur, and creates low-contrast artifacts in some MITS planes. This paper examines the use of MITS slabs, created by averaging several adjacent MITS planes, as a method for suppressing partial-pixel artifacts. Methods: Human chest tomosynthesis projection data, acquired as part of an IRB-approved pilot study, were used to generate MITS planes, three-plane MITS slabs (MITSa3), five-plane MITS slabs (MITSa5), and seven-plane MITS slabs (MITSa7). These were qualitatively examined for partial-pixel artifacts and the visibility of normal and abnormal anatomy. Additionally, small (5 mm) subtle pulmonary nodules were simulated and digitally superimposed upon human chest tomosynthesis projection images, and their visibility was qualitatively assessed in the different reconstruction techniques. Simulated images of a thin wire were used to generate modulation transfer function (MTF) and slice-sensitivity profile curves for the different MITS and MITS slab techniques, and these were examined for indications of partial-pixel artifacts and frequency response uniformity. Finally, mean-subtracted, exposure-normalized noise power spectra (ENNPS) estimates were computed and compared for MITS and MITS slab reconstructions, generated from 10 sets of tomosynthesis projection data of an acrylic slab. The simulated in-plane MTF response of each technique was also combined with the square root of the ENNPS estimate to yield stochastic signal-to-noise ratio (SNR) information about the different reconstruction techniques. 
Results: For scan angles of 20° and 5 mm plane separation, seven MITS planes must be

  4. The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis.

    Science.gov (United States)

    Godfrey, Devon J; McAdams, H Page; Dobbins, James T

    2013-02-01

    Matrix inversion tomosynthesis (MITS) uses linear systems theory and knowledge of the imaging geometry to remove tomographic blur that is present in conventional backprojection tomosynthesis reconstructions, leaving in-plane detail rendered clearly. The use of partial-pixel interpolation during the backprojection process introduces imprecision in the MITS modeling of tomographic blur, and creates low-contrast artifacts in some MITS planes. This paper examines the use of MITS slabs, created by averaging several adjacent MITS planes, as a method for suppressing partial-pixel artifacts. Human chest tomosynthesis projection data, acquired as part of an IRB-approved pilot study, were used to generate MITS planes, three-plane MITS slabs (MITSa3), five-plane MITS slabs (MITSa5), and seven-plane MITS slabs (MITSa7). These were qualitatively examined for partial-pixel artifacts and the visibility of normal and abnormal anatomy. Additionally, small (5 mm) subtle pulmonary nodules were simulated and digitally superimposed upon human chest tomosynthesis projection images, and their visibility was qualitatively assessed in the different reconstruction techniques. Simulated images of a thin wire were used to generate modulation transfer function (MTF) and slice-sensitivity profile curves for the different MITS and MITS slab techniques, and these were examined for indications of partial-pixel artifacts and frequency response uniformity. Finally, mean-subtracted, exposure-normalized noise power spectra (ENNPS) estimates were computed and compared for MITS and MITS slab reconstructions, generated from 10 sets of tomosynthesis projection data of an acrylic slab. The simulated in-plane MTF response of each technique was also combined with the square root of the ENNPS estimate to yield stochastic signal-to-noise ratio (SNR) information about the different reconstruction techniques. For scan angles of 20° and 5 mm plane separation, seven MITS planes must be averaged to sufficiently
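The slab-averaging operation at the heart of these records is a simple mean over adjacent reconstructed planes. A minimal sketch (the sliding-window positioning of the slabs is an assumption for illustration, not taken from the paper):

```python
import numpy as np

def slab_average(planes, width):
    """Average `width` adjacent reconstructed planes into slabs
    (e.g. width=3 mimics a MITSa3-style slab).  `planes` has shape
    (n_planes, rows, cols); a stride-1 sliding window is used here."""
    n = planes.shape[0]
    return np.stack([planes[i:i + width].mean(axis=0)
                     for i in range(n - width + 1)])

# Toy 5-plane stack with plane values 0..4
stack = np.arange(5, dtype=float).reshape(5, 1, 1)
print(slab_average(stack, 3).ravel())  # -> [1. 2. 3.]
```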

  5. The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis

    International Nuclear Information System (INIS)

    Godfrey, Devon J.; Page McAdams, H.; Dobbins, James T. III

    2013-01-01

    Purpose: Matrix inversion tomosynthesis (MITS) uses linear systems theory and knowledge of the imaging geometry to remove tomographic blur that is present in conventional backprojection tomosynthesis reconstructions, leaving in-plane detail rendered clearly. The use of partial-pixel interpolation during the backprojection process introduces imprecision in the MITS modeling of tomographic blur, and creates low-contrast artifacts in some MITS planes. This paper examines the use of MITS slabs, created by averaging several adjacent MITS planes, as a method for suppressing partial-pixel artifacts. Methods: Human chest tomosynthesis projection data, acquired as part of an IRB-approved pilot study, were used to generate MITS planes, three-plane MITS slabs (MITSa3), five-plane MITS slabs (MITSa5), and seven-plane MITS slabs (MITSa7). These were qualitatively examined for partial-pixel artifacts and the visibility of normal and abnormal anatomy. Additionally, small (5 mm) subtle pulmonary nodules were simulated and digitally superimposed upon human chest tomosynthesis projection images, and their visibility was qualitatively assessed in the different reconstruction techniques. Simulated images of a thin wire were used to generate modulation transfer function (MTF) and slice-sensitivity profile curves for the different MITS and MITS slab techniques, and these were examined for indications of partial-pixel artifacts and frequency response uniformity. Finally, mean-subtracted, exposure-normalized noise power spectra (ENNPS) estimates were computed and compared for MITS and MITS slab reconstructions, generated from 10 sets of tomosynthesis projection data of an acrylic slab. The simulated in-plane MTF response of each technique was also combined with the square root of the ENNPS estimate to yield stochastic signal-to-noise ratio (SNR) information about the different reconstruction techniques. 
Results: For scan angles of 20° and 5 mm plane separation, seven MITS planes must be

  6. Impact of discussion on preferences elicited in a group setting

    Directory of Open Access Journals (Sweden)

    Milne Ruairidh

    2006-03-01

Full Text Available Abstract Background The completeness of preferences is assumed as one of the axioms of expected utility theory but has been subject to little empirical study. Methods Fifteen non-health professionals were recruited and familiarised with the standard gamble technique. The group then met five times over six months and preferences were elicited independently on 41 scenarios. After individual valuation, the group discussed the scenarios, following which preferences could be changed. Changes made were described, and summary measures (mean and median) before and after discussion were compared using the paired t test and Wilcoxon Signed Rank Test. Semi-structured telephone interviews were carried out to explore attitudes to discussing preferences. These were transcribed, read by two investigators and emergent themes described. Results Sixteen changes (3.6%) were made to preferences by seven (47%) of the fifteen members. The difference between individual preference values before and after discussion ranged from -0.025 to 0.45. The average effect on the group mean was 0.0053. No differences before and after discussion were statistically significant. The group valued discussion highly and suggested it brought four main benefits: reassurance; improved procedural performance; increased group cohesion; and satisfying curiosity. Conclusion The hypothesis that preferences are incomplete cannot be rejected for a proportion of respondents. However, brief discussion did not result in a substantial number of changes to preferences, and these did not have a significant impact on summary values for the group, suggesting that incompleteness, if present, may not have an important effect on cost-utility analyses.
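The before/after comparison described in this record rests on a paired t test. A minimal stdlib-only sketch, using made-up preference values rather than the study's data:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t statistic: mean of the within-subject differences
    divided by its standard error."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical standard-gamble values for 5 members, pre/post discussion
before = [0.70, 0.55, 0.80, 0.65, 0.90]
after  = [0.70, 0.60, 0.80, 0.60, 0.90]
print(paired_t(before, after))
```

With these symmetric toy changes the differences cancel, so the statistic is essentially zero, mirroring the study's finding of no significant before/after shift.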

  7. Vasectomy occlusion techniques for male sterilization.

    Science.gov (United States)

    Cook, L A; Vliet, H; Pun, A; Gallo, M F

    2004-01-01

    Vasectomy is an increasingly popular and effective family planning method. A variety of vasectomy techniques are used worldwide including various vas occlusion techniques (excision and ligation, thermal or electrocautery, and mechanical and chemical occlusion methods), vas irrigation and fascial interposition. Vasectomy guidelines largely rely on information from observational studies. Ideally, the choice of vasectomy techniques should be based on the best available evidence from randomized controlled trials. The objective of this review was to compare the effectiveness, safety, acceptability and costs of vasectomy techniques for male sterilization. We searched the computerized databases the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, Popline and LILACS. In addition, we searched the reference lists of relevant articles and book chapters. We included randomized controlled trials and controlled clinical trials comparing vasectomy techniques. We assessed all titles and abstracts located in the literature searches and two reviewers independently extracted articles identified for inclusion. Data were presented in the text of the review. Outcome measures include contraceptive efficacy, safety, discontinuation, and acceptability. Two trials compared vas occlusion with clips versus a conventional vasectomy technique; both were of poor quality. Neither trial found a difference between the two groups with regard to the primary outcome of failure to reach azoospermia. Four trials examined vas irrigation: three compared water irrigation with no irrigation and one compared water irrigation with euflavine. All of the trials were of poor quality. None of the trials found a significant difference between the groups with respect to the primary outcome of time to azoospermia. However, one trial found that the median number of ejaculations to azoospermia was significantly lower in the euflavine group compared to the water irrigation group. 
The one trial

  8. Power Efficiency Improvements through Peak-to-Average Power Ratio Reduction and Power Amplifier Linearization

    Directory of Open Access Journals (Sweden)

    Zhou G Tong

    2007-01-01

    Full Text Available Many modern communication signal formats, such as orthogonal frequency-division multiplexing (OFDM and code-division multiple access (CDMA, have high peak-to-average power ratios (PARs. A signal with a high PAR not only is vulnerable in the presence of nonlinear components such as power amplifiers (PAs, but also leads to low transmission power efficiency. Selected mapping (SLM and clipping are well-known PAR reduction techniques. We propose to combine SLM with threshold clipping and digital baseband predistortion to improve the overall efficiency of the transmission system. Testbed experiments demonstrate the effectiveness of the proposed approach.
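The PAR metric and the threshold-clipping step mentioned in this record can be sketched as follows; the 64-tone toy signal and the 0.7 clipping ratio are arbitrary illustrations, not the testbed configuration:

```python
import numpy as np

def par_db(x):
    """Peak-to-average power ratio (PAR) of a baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip_magnitude(x, threshold):
    """Threshold clipping: limit |x| to `threshold` while keeping the
    phase of each sample."""
    mag = np.maximum(np.abs(x), 1e-12)   # guard against divide-by-zero
    scale = np.minimum(1.0, threshold / mag)
    return x * scale

rng = np.random.default_rng(0)
# Toy OFDM-like signal: 64 random-phase subcarriers, oversampled IFFT
X = np.exp(2j * np.pi * rng.random(64))
x = np.fft.ifft(X, 1024)
print(par_db(x), par_db(clip_magnitude(x, 0.7 * np.abs(x).max())))
```

Clipping the rare peaks lowers the peak power far more than the average power, which is why the PAR drops.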

  9. The Implementation of Kagan’s Cooperative (Co-Op Technique to Improve Reading Comprehension of Junior High Students

    Directory of Open Access Journals (Sweden)

    Farid Helmi Setyawan

    2017-04-01

Full Text Available Abstract: This study was aimed to investigate how the Co-op technique can be implemented to improve the reading comprehension of the eighth grade students of MTsN Ngawi who faced problems in reading. The students did not comprehend the text and their scores were low. The average score of the reading test in the preliminary study was 67, whereas the minimum standard of students' success is seventy (70). The design of this study was classroom action research. The technique applied in the research was the Co-op technique. The result showed that the students' average reading score improved significantly. Over the two-cycle study, the students' average reading score rose from 69.54 in the first test to 76.15 in the second test. It could be concluded that the predetermined criteria of success had been achieved.

  10. Uncovering category specificity of genital sexual arousal in women: The critical role of analytic technique.

    Science.gov (United States)

    Pulverman, Carey S; Hixon, J Gregory; Meston, Cindy M

    2015-10-01

    Based on analytic techniques that collapse data into a single average value, it has been reported that women lack category specificity and show genital sexual arousal to a large range of sexual stimuli including those that both match and do not match their self-reported sexual interests. These findings may be a methodological artifact of the way in which data are analyzed. This study examined whether using an analytic technique that models data over time would yield different results. Across two studies, heterosexual (N = 19) and lesbian (N = 14) women viewed erotic films featuring heterosexual, lesbian, and gay male couples, respectively, as their physiological sexual arousal was assessed with vaginal photoplethysmography. Data analysis with traditional methods comparing average genital arousal between films failed to detect specificity of genital arousal for either group. When data were analyzed with smoothing regression splines and a within-subjects approach, both heterosexual and lesbian women demonstrated different patterns of genital sexual arousal to the different types of erotic films, suggesting that sophisticated statistical techniques may be necessary to more fully understand women's genital sexual arousal response. Heterosexual women showed category-specific genital sexual arousal. Lesbian women showed higher arousal to the heterosexual film than the other films. However, within subjects, lesbian women showed significantly different arousal responses suggesting that lesbian women's genital arousal discriminates between different categories of stimuli at the individual level. Implications for the future use of vaginal photoplethysmography as a diagnostic tool of sexual preferences in clinical and forensic settings are discussed. © 2015 Society for Psychophysiological Research.

  11. Management of chest deformity caused by microtia reconstruction: Comparison of autogenous diced cartilage versus cadaver cartilage graft partial filling techniques.

    Science.gov (United States)

    Go, Ju Young; Kang, Bo Young; Hwang, Jin Hee; Oh, Kap Sung

    2017-01-01

Efforts to prevent chest wall deformity after costal cartilage graft are ongoing. In this study, we introduce a new method to prevent donor site deformation using irradiated cadaver cartilage (ICC) and compare this method to the autogenous diced cartilage (ADC) technique. Forty-two pediatric patients comprised the ADC group (n = 24) and the ICC group (n = 18). After harvesting costal cartilage, the empty perichondrial space was filled with autologous diced cartilage in the ADC group and cadaver cartilage in the ICC group. Digital photographs and rib cartilage three-dimensional computed tomography (CT) data were analyzed to compare the preventive effect on donor site deformity. We compared the pre- and postoperative costal cartilage volumes using 3D-CT and graded the volumes (grade I: 0%-25%, grade II: 25%-50%, grade III: 50%-75%, and grade IV: 75%-100%). The average follow-up period was 20 and 24 months in the ADC and ICC groups, respectively. Grade IV maintenance of previous costal cartilage volume was evident postoperatively in 22% of patients in the ADC group and 82% of patients in the ICC group. Intercostal space narrowing and chest wall depression were less in the ICC group. There were no complications or severe resorption of cadaver cartilage. ICC supported the transected costal ring and prevented stability loss by acting as a spacer. The ICC technique is more effective in preventing intercostal space narrowing and chest wall depression than the ADC technique. Samsung Medical Center Institution Review Board, Unique protocol ID: 2009-10-006-008. This study is also registered on PRS (ClinicalTrials.gov Record 2009-10-006). Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  12. Intersubassembly incoherencies and grouping techniques in LMFBR hypothetical overpower accident

    International Nuclear Information System (INIS)

    Wilburn, N.P.

    1977-10-01

A detailed analysis was made of the FTR core using the 100-channel MELT-IIIA code. Results were studied for the transient overpower accident (with 0.5 $/sec and 1 $/sec ramps), in which the Damage Parameter and the Failure Potential criteria were used. Using the information obtained from this series of runs, a new method of grouping the subassemblies into channels has been developed. It was also demonstrated that a 7-channel representation of the FTR core using this method does an adequate job of representing the behavior during a hypothetical disruptive transient overpower core accident. It has been shown that this new 7-channel grouping method performs better than an earlier 20-channel grouping. It has also been demonstrated that the incoherency effects between subassemblies, as shown in the 76-channel representation of the reactor, can be adequately modeled by 7 channels, provided the channels are selected according to the criteria stated in the report. The overall results of power and net reactivity were shown to be only slightly different in the two cases of the 7-channel and the 76-channel runs. Therefore, it can be concluded that any intersubassembly incoherencies can be modeled adequately by a small number of channels, provided the subassemblies making up these channels are selected according to the stated criteria.

  13. Predictive capability of average Stokes polarimetry for simulation of phase multilevel elements onto LCoS devices.

    Science.gov (United States)

    Martínez, Francisco J; Márquez, Andrés; Gallego, Sergi; Ortuño, Manuel; Francés, Jorge; Pascual, Inmaculada; Beléndez, Augusto

    2015-02-20

Parallel-aligned (PA) liquid-crystal on silicon (LCoS) microdisplays are especially appealing in a wide range of spatial light modulation applications since they enable phase-only operation. Recently, we proposed a novel polarimetric method, based on Stokes polarimetry, enabling the characterization of their linear retardance and the magnitude of their associated phase fluctuations or flicker, exhibited by many LCoS devices. In this work we apply the calibrated values obtained with this technique to show their capability to predict the performance of spatially varying phase multilevel elements displayed onto the PA-LCoS device. Specifically we address a series of multilevel phase blazed gratings. We analyze both their average diffraction efficiency ("static" analysis) and its associated time fluctuation ("dynamic" analysis). Two different electrical configuration files with different degrees of flicker are applied in order to evaluate the actual influence of flicker on the expected performance of the diffractive optical elements addressed. We obtain a good agreement between simulation and experiment, thus demonstrating the predictive capability of the calibration provided by the average Stokes polarimetric technique. Additionally, we find that electrical configurations with a flicker retardance amplitude below 30° may not influence the performance of the blazed gratings. In general, we demonstrate that the influence of flicker greatly diminishes when the number of quantization levels in the optical element increases.
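The benefit of adding quantization levels can be illustrated with the standard textbook efficiency formula for an ideal N-level quantized blazed phase grating; this is general diffractive-optics background, not a result taken from the paper:

```python
import math

def blazed_efficiency(levels):
    """First-order diffraction efficiency of an ideal N-level quantized
    blazed phase grating: eta = [sin(pi/N) / (pi/N)]**2.  More levels
    approximate the ideal sawtooth better, raising the efficiency."""
    x = math.pi / levels
    return (math.sin(x) / x) ** 2

for n in (2, 4, 8, 16):
    print(n, round(blazed_efficiency(n), 3))
```

The efficiency climbs from about 41% at two levels toward 100% as the number of levels grows, which is consistent with flicker mattering less for finely quantized elements.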

  14. Adaptive Control for Buck Power Converter Using Fixed Point Inducting Control and Zero Average Dynamics Strategies

    Science.gov (United States)

    Hoyos Velasco, Fredy Edimer; García, Nicolás Toro; Garcés Gómez, Yeison Alberto

    In this paper, the output voltage of a buck power converter is controlled by means of a quasi-sliding scheme. The Fixed Point Inducting Control (FPIC) technique is used for the control design, based on the Zero Average Dynamics (ZAD) strategy, including load estimation by means of the Least Mean Squares (LMS) method. The control scheme is tested in a Rapid Control Prototyping (RCP) system based on Digital Signal Processing (DSP) for dSPACE platform. The closed loop system shows adequate performance. The experimental and simulation results match. The main contribution of this paper is to introduce the load estimator by means of LMS, to make ZAD and FPIC control feasible in load variation conditions. In addition, comparison results for controlled buck converter with SMC, PID and ZAD-FPIC control techniques are shown.
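The LMS load-estimation step can be sketched generically. The scalar-gain plant model, step size, and signals below are illustrative assumptions, not the buck-converter model used by the authors:

```python
import numpy as np

def lms_estimate(x, d, mu=0.01):
    """Least Mean Squares estimation of a scalar gain w with d ≈ w * x,
    a toy stand-in for the load estimator described above."""
    w = 0.0
    for xi, di in zip(x, d):
        e = di - w * xi   # instantaneous estimation error
        w += mu * e * xi  # LMS gradient-descent update
    return w

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
d = 2.5 * x + 0.01 * rng.standard_normal(5000)  # true gain 2.5, small noise
print(lms_estimate(x, d))
```

After a few thousand samples the estimate settles near the true gain of 2.5, which is the property that makes the ZAD-FPIC controller usable under load variation.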

  15. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

One of the more promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) does not change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (usually the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
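The distinction between the two averages can be made concrete with a toy two-realization example; the particle counts and velocities are invented purely to show that the averages diverge once n varies between realizations:

```python
import numpy as np

def phasic_average(u):
    """Phasic (ensemble) average: mean of the per-realization mean
    velocities, ignoring how many particles each realization holds."""
    return np.mean([ui.mean() for ui in u])

def mass_weighted_average(n, u):
    """Mass-weighted average: total momentum over total mass, so
    realizations with more particles carry proportionally more weight."""
    return sum(ni * ui.mean() for ni, ui in zip(n, u)) / sum(n)

# Two toy realizations with different particle counts n -- the case
# where, as argued above, the two averages no longer coincide.
n = [10, 100]
u = [np.full(10, 1.0), np.full(100, 2.0)]
print(phasic_average(u), mass_weighted_average(n, u))
```

Here the phasic average is 1.5 while the mass-weighted average is about 1.91; with equal particle counts the two would agree.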

  16. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  17. Evaluation of an automated microplate technique in the Galileo system for ABO and Rh(D) blood grouping.

    Science.gov (United States)

    Xu, Weiyi; Wan, Feng; Lou, Yufeng; Jin, Jiali; Mao, Weilin

    2014-01-01

A number of automated devices for pretransfusion testing have recently become available. This study evaluated the Immucor Galileo System, a fully automated device based on the microplate hemagglutination technique, for ABO/Rh(D) determinations. Routine ABO/Rh typing tests were performed on 13,045 samples using the Immucor automated instruments. The manual tube method was used to resolve ABO forward and reverse grouping discrepancies. D-negative test results were investigated and confirmed manually by the indirect antiglobulin test (IAT). The system rejected 70 tests for sample inadequacy. Eighty-seven samples were read as "no type determined" due to forward and reverse grouping discrepancies; 25 of these results were caused by sample hemolysis. After further testing, we found that 34 were caused by weakened RBC antibodies, 5 were attributable to weak A and/or B antigens, 4 were due to mixed-field reactions, and 8 involved high-titer cold agglutinins that react only at temperatures below 34 °C. In the remaining 11 cases, irregular RBC antibodies were identified in 9 samples (seven anti-M and two anti-P) and subgroups were identified in 2 samples (one A1 and one A2) by a reference laboratory. As for D typing, 2 weak D+ samples missed by the automated system gave negative results but showed weak-positive reactions in the IAT. The Immucor Galileo System is reliable and well suited for ABO and D blood grouping, although several factors can cause discrepancies in ABO/D typing with a fully automated system. It is suggested that standardization of sample collection may improve the performance of the fully automated system.

  18. Experimental determination of average turbulent heat transfer and friction factor in stator internal rib-roughened cooling channels.

    Science.gov (United States)

    Battisti, L; Baggio, P

    2001-05-01

In gas turbine cooling design, techniques for heat extraction from the surfaces exposed to the hot stream are based on the increase of the inner heat transfer areas and on the promotion of turbulence in the cooling flow. This is currently obtained by casting periodic ribs on one or more sides of the serpentine passages in the core of the blade. The fluid dynamic and thermal behaviour of the cooling flow has been extensively investigated by means of experimental facilities, and many papers dealing with this subject have appeared in recent years. The average value of the heat transfer coefficient is usually inferred from local measurements obtained by various experimental techniques. Moreover, the great majority of these studies are not concerned with the overall average heat transfer coefficient for the combined ribs and the region between them, but focus on just one of them. This paper presents an attempt to collect information about the average Nusselt number inside a straight ribbed duct. Series of measurements have been performed in steady state, eliminating the error sources inherently connected with transient methods. A low speed wind tunnel, operating in steady state flow, has been built to simulate the actual flow conditions occurring in a rectilinear blade cooling channel. A straight square channel with 20 transverse ribs on two sides has been tested for Re of about 3 × 10^4, 4.5 × 10^4 and 6 × 10^4. The ribbed wall test section is electrically heated and the heat is removed by a stationary flow of known thermal and fluid dynamic characteristics.
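The steady-state reduction from measured quantities to an average Nusselt number follows the usual energy-balance definitions, h = Q / (A · (Tw − Tb)) and Nu = h · Dh / k. A sketch with entirely hypothetical numbers, not the experiment's data:

```python
def average_nusselt(q_w, area, t_wall, t_bulk, d_h, k_fluid):
    """Average heat transfer coefficient from a steady-state energy
    balance, h = Q / (A * (Tw - Tb)), converted to a Nusselt number
    Nu = h * Dh / k."""
    h = q_w / (area * (t_wall - t_bulk))
    return h * d_h / k_fluid

# Hypothetical values: 500 W over 0.1 m^2, 60 K wall-to-bulk difference,
# 50 mm hydraulic diameter, air conductivity ~0.026 W/(m*K)
nu = average_nusselt(500.0, 0.1, 360.0, 300.0, 0.05, 0.026)
print(round(nu, 1))
```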

  19. Molecular invariants: atomic group valence

    International Nuclear Information System (INIS)

    Mundim, K.C.; Giambiagi, M.; Giambiagi, M.S. de.

    1988-01-01

Molecular invariants may be deduced in a very compact way through Grassmann algebra. In this work, a generalized valence is defined for an atomic group; it reduces to the known expressions for the case of an atom in a molecule. It is given by the sum of the correlations between the fluctuations of the atomic charges q_C and q_D (C belonging to the group and D outside it) around their average values. Numerical results agree with chemical expectation. (author) [pt
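Assuming the group valence is built from summed charge-fluctuation correlations as the abstract describes, a toy numerical sketch might look as follows. The charge samples, the group partition, and the sign/normalization convention are all assumptions for illustration, not the paper's formalism:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fluctuating atomic charges for a 5-atom molecule
# (rows = samples, columns = atoms); the numbers are purely illustrative.
q = rng.normal(loc=[0.2, -0.1, 0.3, -0.2, -0.2], scale=0.05, size=(10_000, 5))
q -= q.sum(axis=1, keepdims=True) / 5       # enforce a fixed total charge

group = [0, 1]                               # atoms C belonging to the group
rest = [2, 3, 4]                             # atoms D outside the group

cov = np.cov(q, rowvar=False)                # <q_C q_D> - <q_C><q_D> for all pairs

# Group quantity as the summed charge-fluctuation correlations between the
# group and the rest of the molecule (sign/normalization conventions vary).
V_group = sum(cov[c, d] for c in group for d in rest)
print(f"group-rest fluctuation correlation sum: {V_group:.5f}")
```

With total charge held fixed, the cross correlations come out negative, which is what makes a valence-like quantity out of them under the convention assumed here.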

  20. Average L-shell fluorescence, Auger, and electron yields

    International Nuclear Information System (INIS)

    Krause, M.O.

    1980-01-01

    The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically for 40 3 subshell yields in most cases of inner-shell ionization
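The dependence of an average yield on the initial vacancy distribution is just a weighted mean over subshells, which makes the weak-dependence claim easy to illustrate. The subshell yields and vacancy fractions below are invented, not Krause's tabulated values:

```python
# Illustrative (not tabulated) L-subshell fluorescence yields w_i and two
# different initial vacancy distributions n_i (fractions summing to 1).
omega = [0.03, 0.05, 0.06]            # assumed omega_L1, omega_L2, omega_L3

n_photo = [0.20, 0.30, 0.50]          # one hypothetical ionization mode
n_capture = [0.10, 0.25, 0.65]        # a rather different distribution

def average_yield(n, w):
    """Average yield <w> = sum_i n_i * w_i for vacancy fractions n_i."""
    assert abs(sum(n) - 1.0) < 1e-9
    return sum(ni * wi for ni, wi in zip(n, w))

print(average_yield(n_photo, omega))
print(average_yield(n_capture, omega))
```

Even with these deliberately different distributions, the two averages differ by only a few percent, mirroring the abstract's point that the fluorescence yield average is insensitive while electron yields (which also weight Coster-Kronig transitions) need not be.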

  1. Reducing the volume of antibiotic prescriptions: a peer group intervention among physicians serving a community with special ethnic characteristics.

    Science.gov (United States)

    Wilf-Miron, Rachel; Ron, Naama; Ishai, Shlomit; Chory, Hana; Abboud, Louis; Peled, Ronit

    2012-05-01

Antibiotics are a front-line weapon against many infectious diseases. However, antibiotic overuse is the key driver of drug resistance. Previously published studies have suggested benefits of using peer-to-peer education, working with group leaders to build trust and maintain confidentiality within a quality initiative. We hypothesized that working with physicians as a peer group might be beneficial in influencing antibiotic prescribing patterns. To describe and evaluate a peer group model for an intervention to reduce the volume of antibiotic prescriptions among physicians with above-average prescribing rates serving an Arab community in northern Israel. Primary care physicians in a defined geographic area who served Arab communities and had high antibiotic prescribing rates--defined as an above-average number of antibiotic prescriptions per office visit compared with regional and organizational averages--were recruited for the intervention. All other physicians from the same region served as a comparison group. The intervention was administered during 2007 and was completed in early 2008. Four structured meetings scheduled 2 months apart, in which the group explored issues related to antibiotic overuse, covered the following topics: adherence to clinical guidelines; the special position physicians serving Arab communities hold and its influence on their practices; pressure from consumer demands; and possible strategies for addressing ethnic sensitivities, given the special ties the physicians have with their communities. T-tests for independent samples were used to perform between-group comparisons for each quarter and year of observation from 2006 through 2010, and t-tests for paired samples were used to compare pre-intervention with post-intervention antibiotic prescribing rates. 
In the 2006 pre-intervention period, the antibiotic prescribing rates were 0.17 for the peer group (n = 11 physicians) and 0.15 for the comparison group
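The two statistical comparisons described above (independent samples between groups, paired samples pre/post) can be sketched with SciPy. The rates below are invented for illustration; the study's real data are not reproduced here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical prescriptions-per-visit rates (illustrative numbers only).
peer_group = rng.normal(0.17, 0.02, size=11)   # 11 intervention physicians
comparison = rng.normal(0.15, 0.02, size=40)   # same-region comparison group

# Between-group comparison for one observation period: independent samples.
t_ind, p_ind = stats.ttest_ind(peer_group, comparison)

# Pre- vs post-intervention comparison for the same physicians: paired samples.
post = peer_group - rng.normal(0.03, 0.01, size=11)   # assumed post-intervention drop
t_rel, p_rel = stats.ttest_rel(peer_group, post)

print(f"between-group p = {p_ind:.3f}, pre/post p = {p_rel:.5f}")
```

The paired test gains power by differencing each physician against themselves, which is why it is the natural choice for the pre/post comparison.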

  2. THE TECHNIQUES IN TEACHING LISTENING SKILL

    Directory of Open Access Journals (Sweden)

    Hidayah Nor

    2015-12-01

Full Text Available Listening is a very important language skill: through listening, students acquire the vocabulary that enables them to produce language in speaking and writing. The English teacher of MAN 3 Banjarmasin used several techniques in teaching listening, drawing on the facilities in the language laboratory such as tape cassettes, television, and VCD/DVD. This research describes the techniques used in teaching listening skills to Islamic high school students. The subjects of this study were an English teacher and 48 students of the tenth grade at MAN 3 Banjarmasin in Academic Year 2009/2010. Data were collected through observation, interviews, and documentation, and were then analyzed descriptively, both qualitatively and quantitatively, with conclusions drawn inductively. The results indicate that the techniques in teaching listening applied by the English teacher of the tenth grade students at MAN 3 Banjarmasin in Academic Year 2009/2010 are: Information Transfer, Paraphrasing and Translating, Answering Questions, Summarizing, Filling in Blanks, and Answering to Show Comprehension of Messages. The students’ listening comprehension ability across the six techniques falls into the very high, high, and average levels. Keywords: listening techniques, teaching listening skill

  3. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

Bayesian predictions are stochastic, just like the predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution, the situation is l...

  4. State of practice and emerging application of analytical techniques of nuclear forensic analysis: highlights from the 4th Collaborative Materials Exercise of the Nuclear Forensics International Technical Working Group (ITWG)

    International Nuclear Information System (INIS)

    Schwantes, J.M.; Pellegrini, K.L.; Marsden, Oliva

    2017-01-01

    The Nuclear Forensics International Technical Working Group (ITWG) recently completed its fourth Collaborative Materials Exercise (CMX-4) in the 21 year history of the Group. This was also the largest materials exercise to date, with participating laboratories from 16 countries or international organizations. Exercise samples (including three separate samples of low enriched uranium oxide) were shipped as part of an illicit trafficking scenario, for which each laboratory was asked to conduct nuclear forensic analyses in support of a fictitious criminal investigation. In all, over 30 analytical techniques were applied to characterize exercise materials, for which ten of those techniques were applied to ITWG exercises for the first time. An objective review of the state of practice and emerging application of analytical techniques of nuclear forensic analysis based upon the outcome of this most recent exercise is provided. (author)

  5. State of practice and emerging application of analytical techniques of nuclear forensic analysis: highlights from the 4th Collaborative Materials Exercise of the Nuclear Forensics International Technical Working Group (ITWG)

    International Nuclear Information System (INIS)

    Schwantes, Jon M.; Marsden, Oliva; Pellegrini, Kristi L.

    2016-01-01

The Nuclear Forensics International Technical Working Group (ITWG) recently completed its fourth Collaborative Materials Exercise (CMX-4) in the 21 year history of the Group. This was also the largest materials exercise to date, with participating laboratories from 16 countries or international organizations. Moreover, exercise samples (including three separate samples of low enriched uranium oxide) were shipped as part of an illicit trafficking scenario, for which each laboratory was asked to conduct nuclear forensic analyses in support of a fictitious criminal investigation. In all, over 30 analytical techniques were applied to characterize exercise materials, of which ten were applied to ITWG exercises for the first time. We performed an objective review of the state of practice and emerging application of analytical techniques of nuclear forensic analysis based upon the outcome of this most recent exercise.

  6. A modified technique of reconstruction for complete acromioclavicular dislocation: a prospective study.

    Science.gov (United States)

    Tienen, Tony G; Oyen, Jan F C H; Eggen, Peter J G M

    2003-01-01

    Many procedures, both nonoperative and operative, have been described for treatment of complete acromioclavicular dislocations. The best primary treatment, however, still remains unclear. We present a new surgical technique in which the clavicle is reduced to an anatomic position, the coracoacromial ligament is transferred to the clavicle, and acromioclavicular joint fixation is accomplished with the use of absorbable, braided suture cord. Twenty-one patients underwent the modified technique of reconstruction. Patients were included only if they had sustained a Rockwood type V acromioclavicular dislocation and were extremely active in competitive sports before dislocation occurred. Eighteen patients returned to their sports without pain within 2.5 months after operation. The mean follow-up was 35.7 months. The average Constant score at last follow-up was 97. Radiographs taken at this time confirmed anatomic reduction in 18 patients, residual subluxation in 2 patients, and, in 1 patient, redislocation of the joint that occurred because of infection. Six patients had radiographic evidence of coracoclavicular ossifications. All patients developed a wide scar. Considering its operative simplicity, the advantage of absorbable augmentation of the clavicular reduction, and the low rate of recurrence, this technique may be an attractive alternative in this particular group of patients.

  7. Liposomal Bupivacaine Injection Technique in Total Knee Arthroplasty.

    Science.gov (United States)

    Meneghini, R Michael; Bagsby, Deren; Ireland, Philip H; Ziemba-Davis, Mary; Lovro, Luke R

    2017-01-01

Liposomal bupivacaine has gained popularity for pain control after total knee arthroplasty (TKA), yet its true efficacy remains unproven. We compared the efficacy of two different periarticular injection (PAI) techniques for liposomal bupivacaine with a conventional PAI control group. This retrospective cohort study compared consecutive patients undergoing TKA with a manufacturer-recommended, optimized injection technique for liposomal bupivacaine, a traditional injection technique for liposomal bupivacaine, and a conventional PAI of ropivacaine, morphine, and epinephrine. The optimized technique utilized a smaller gauge needle and more injection sites. Self-reported pain scores, rescue opioids, and side effects were compared. There were 41 patients in the liposomal bupivacaine optimized injection group, 60 in the liposomal bupivacaine traditional injection group, and 184 in the conventional PAI control group. PAI liposomal bupivacaine delivered via manufacturer-recommended technique offered no benefit over PAI ropivacaine, morphine, and epinephrine. Mean pain scores and the proportions reporting no or mild pain, time to first opioid, and amount of opioids consumed were not better with PAI liposomal bupivacaine compared with PAI ropivacaine, morphine, and epinephrine. The use of the manufacturer-recommended technique for PAI of liposomal bupivacaine does not offer benefit over a conventional, less expensive PAI during TKA.

  8. The influence of averageness on judgments of facial attractiveness: no own-age or own-sex advantage among children attending single-sex schools.

    Science.gov (United States)

    Vingilis-Jaremko, Larissa; Maurer, Daphne; Gao, Xiaoqing

    2014-04-01

We examined how recent biased face experience affects the influence of averageness on judgments of facial attractiveness among 8- and 9-year-old children attending a girls' school, a boys' school, and a mixed-sex school. We presented pairs of individual faces in which one face was transformed 50% toward its group average, whereas the other face was transformed 50% away from that average. Across blocks, the faces varied in age (adult, 9-year-old, or 5-year-old) and sex (male or female). We expected that averageness might influence attractiveness judgments more strongly for same-age faces and, for children attending single-sex schools, same-sex faces of that age because their prototype(s) should be best tuned to the faces they see most frequently. Averageness influenced children's judgments of attractiveness, but the strength of the influence was not modulated by the age of the face, nor did the effects of sex of face differ across schools. Recent biased experience might not have affected the results because of similarities between the average faces of different ages and sexes and/or because a minimum level of experience with a particular group of faces may be adequate for the formation of a veridical prototype and its influence on judgments of attractiveness. The results suggest that averageness affects children's judgments of the attractiveness of the faces they encounter in everyday life regardless of age or sex of face.
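The 50% transform toward or away from a group average used in such studies is, in essence, linear interpolation in a face representation space. A minimal sketch with hypothetical landmark/feature vectors (the arrays are invented, not real face data):

```python
import numpy as np

# Toy "face" as a landmark/feature vector and a hypothetical group average.
face = np.array([10.0, 12.0, 8.0, 15.0])
group_average = np.array([11.0, 11.0, 9.0, 14.0])

def transform(face, average, amount):
    """amount=+0.5: 50% toward the average; amount=-0.5: 50% away from it."""
    return face + amount * (average - face)

more_average = transform(face, group_average, +0.5)
less_average = transform(face, group_average, -0.5)

print(more_average)   # halfway between the face and the average
print(less_average)   # caricatured away from the average
```

The same formula handles both stimulus types: a positive `amount` averages the face, a negative one exaggerates its distinctive features.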

  9. Radon survey techniques

    International Nuclear Information System (INIS)

    Anon.

    1976-01-01

    The report reviews radon measurement surveys in soils and in water. Special applications, and advantages and limitations of the radon measurement techniques are considered. The working group also gives some directions for further research in this field

  10. Reconstruction of a time-averaged midposition CT scan for radiotherapy planning of lung cancer patients using deformable registration.

    Science.gov (United States)

    Wolthaus, J W H; Sonke, J J; van Herk, M; Damen, E M F

    2008-09-01

Lower lobe lung tumors move with amplitudes of up to 2 cm due to respiration. To reduce respiration imaging artifacts in planning CT scans, 4D imaging techniques are used. Currently, we use a single (midventilation) frame of the 4D data set for clinical delineation of structures and radiotherapy planning. A single frame, however, often contains artifacts due to breathing irregularities, and is noisier than a conventional CT scan since the exposure per frame is lower. Moreover, the tumor may be displaced from the mean tumor position due to hysteresis. The aim of this work is to develop a framework for the acquisition of a good quality scan representing all scanned anatomy in the mean position by averaging transformed (deformed) CT frames, i.e., canceling out motion. A nonrigid registration method is necessary since motion varies over the lung. 4D and inspiration breath-hold (BH) CT scans were acquired for 13 patients. An iterative multiscale motion estimation technique was applied to the 4D CT scan, similar to optical flow but using image phase (gray-value transitions from bright to dark and vice versa) instead. From the (4D) deformation vector field (DVF) derived, the local mean position in the respiratory cycle was computed and the 4D DVF was modified to deform all structures of the original 4D CT scan to this mean position. A 3D midposition (MidP) CT scan was then obtained by (arithmetic or median) averaging of the deformed 4D CT scan. Image registration accuracy, tumor shape deviation with respect to the BH CT scan, and noise were determined to evaluate the image fidelity of the MidP CT scan and the performance of the technique. Accuracy of the deformable image registration method used was comparable to established automated locally rigid registration and to manual landmark registration (average difference to both methods noise of individual 4D CT scan frames. We implemented an accurate method to estimate the motion of structures in a 4D CT scan. 
Subsequently, a
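A one-dimensional toy version of the midposition pipeline (estimate per-frame displacement, warp every frame to the time-averaged mean position, then median-average to suppress noise) might look like this. The signal, displacements, and noise level are all synthetic assumptions, and real MidP reconstruction of course uses full 3D deformable registration rather than 1D shifts:

```python
import numpy as np

# 1D toy: 10 respiratory "frames" of a structure whose position oscillates.
x = np.arange(100, dtype=float)
phases = np.arange(10)
shifts = 5.0 * np.sin(2 * np.pi * phases / 10)   # per-frame displacement ("DVF")

rng = np.random.default_rng(2)

def frame(shift):
    profile = np.exp(-0.5 * ((x - 50 - shift) / 4) ** 2)   # "tumor" profile
    return profile + rng.normal(0, 0.05, x.size)           # per-frame noise

frames = np.array([frame(s) for s in shifts])

mean_shift = shifts.mean()                        # local mean position over the cycle
# Warp every frame back to the mean position, then median-average the stack
# to cancel motion and reduce the noise of the individual frames.
warped = np.array([np.interp(x, x - (s - mean_shift), f)
                   for s, f in zip(shifts, frames)])
midposition = np.median(warped, axis=0)

print(int(np.argmax(midposition)))   # peak recovered near the mean position
```

The median across warped frames plays the same role as in the abstract: it averages out frame noise and residual artifacts while keeping the structure at its mean position.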

  11. Relationships between the group-theoretic and soliton-theoretic techniques for generating stationary axisymmetric gravitational solutions

    International Nuclear Information System (INIS)

    Cosgrove, C.M.

    1980-01-01

We investigate the precise interrelationships between several recently developed solution-generating techniques capable of generating asymptotically flat gravitational solutions with arbitrary multipole parameters. The transformations we study in detail here are the Lie groups Q and Q of Cosgrove, the Hoenselaers--Kinnersley--Xanthopoulos (HKX) transformations and their SL(2) tensor generalizations, the Neugebauer--Kramer discrete mapping, the Neugebauer Baecklund transformations I1 and I2, the Harrison Baecklund transformation, and the Belinsky--Zakharov (BZ) one- and two-soliton transformations. Two particular results, among many reported here, are that the BZ soliton transformations are essentially equivalent to Harrison transformations and that the generalized HKX transformation may be deduced as a confluent double soliton transformation. Explicit algebraic expressions are given for the transforms of the Kinnersley--Chitre generating functions under all of the above transformations. In less detail, we also study the Kinnersley--Chitre beta transformations, the non-null HKX transformations, and the Hilbert problems proposed independently by Belinsky and Zakharov, and Hauser and Ernst. In conclusion, we describe the nature of the exact solutions constructible in a finite number of steps with the available methods.

  12. Two-phase flow measurement by pulsed neutron activation techniques

    International Nuclear Information System (INIS)

    Kehler, P.

    1978-01-01

The Pulsed Neutron Activation (PNA) technique for measuring the mass flow velocity and the average density of two-phase mixtures is described. PNA equipment can be easily installed at different loops, and PNA techniques are non-intrusive and independent of flow regimes. These features of the PNA technique make it suitable for in-situ measurement of two-phase flows, and for calibration of more conventional two-phase flow measurement devices. Analytic relations governing the various PNA methods are derived. The equipment and procedures used in the first air-water flow measurement by PNA techniques are discussed, and recommendations are made for improvement of future tests. In the present test, the mass flow velocity was determined with an accuracy of 2 percent, and average densities were measured down to 0.08 g/cm^3 with an accuracy of 0.04 g/cm^3. Both the accuracy of the mass flow velocity measurement and the lower limit of the density measurement are functions of the injected activity and of the total number of counts. By using a stronger neutron source and a larger number of detectors, the measurable density can be decreased by a factor of 12 to 0.007 g/cm^3 for 12.5 cm pipes, and to even lower ranges for larger pipes.
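The core PNA reductions can be sketched numerically: mass flow velocity from the count-weighted transit time of the activated slug, and average density from total counts relative to a single-phase calibration. Every number below (the detector trace, the distance, the calibration ratio) is invented for illustration, and the real analytic relations in the report are more involved:

```python
import numpy as np

# Illustrative detector count-rate trace of an activated slug passing a
# detector a known distance downstream of the neutron pulse.
t = np.linspace(0.0, 5.0, 1001)                      # s
trace = np.exp(-0.5 * ((t - 2.0) / 0.3) ** 2)        # idealized response

distance = 1.5                                       # m, activation point -> detector

# Mass flow velocity from the count-weighted mean transit time.
t_mean = (t * trace).sum() / trace.sum()
v_mass = distance / t_mean

# Average density from total counts relative to a single-phase (all-water)
# calibration run under otherwise identical conditions (hypothetical ratio).
dt = t[1] - t[0]
counts = trace.sum() * dt
counts_single_phase = counts / 0.6                   # pretend calibration value
rho_avg = 1.0 * counts / counts_single_phase         # g/cm^3

print(f"v = {v_mass:.3f} m/s, rho_avg = {rho_avg:.2f} g/cm^3")
```

The accuracy statements in the abstract follow from counting statistics: both `t_mean` and `counts` improve with injected activity and total counts.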

  13. Radon remedial techniques in buildings - analysis of French actual cases

    International Nuclear Information System (INIS)

    Dupuis, M.

    2004-01-01

The IRSN has compiled a collection of solutions from data provided by the various decentralised government services in 31 French departments. Contributors were asked to provide a description of the building, as well as details of measured radon levels, the type of reduction technique adopted and the cost. Illustrative layouts, technical drawings and photographs were also requested, when available. Of the cases recorded, 85% are establishments open to the public (schools (70%), city halls (4%) and combined city halls and school houses (26%)), 11% are houses and 4% industrial buildings. IRSN obtained 27 real cases of remedial techniques used. The data were presented in the form of fact sheets. The primary aim of this exercise was to illustrate each of the radon reduction techniques that can be used in the different building types (with basement, ground bearing slab, crawl space). This investigation not only enabled us to show that combining passive and active techniques reduces the operating cost of the installation, but above all that it considerably improves the efficiency. The passive technique reduces the amount of radon in the building and thus reduces the necessary ventilation rate, which directly affects the cost of operating the installation. For the 27 cases recorded, we noted: (a) the application of 7 passive techniques: sealing of floors and semi-buried walls, together with improved aeration by installing ventilation openings or ventilation strips in the windows. Radon concentrations were reduced on average by a factor of 4.7. No measurement in excess of 400 Bq.m^-3 (the limit recommended by the French public authorities) was obtained following completion of the works; (b) the application of 15 active techniques: depressurization of the underlying ground, crawl space or basement and/or pressurization of the building. Radon concentrations were reduced on average by a factor of 13.8. Radon concentrations of over 400 Bq.m^-3 were measured in only 4 cases.
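The reduction factors quoted above are simple before/after concentration ratios, checked against the 400 Bq/m^3 guide value. A minimal bookkeeping sketch with invented measurements (not IRSN's case data):

```python
# Illustrative remediation bookkeeping: reduction factor = before / after.
before = [900.0, 1500.0, 620.0]      # Bq/m^3, pre-remediation (invented)
after = [180.0, 250.0, 155.0]        # Bq/m^3, post-remediation (invented)

factors = [b / a for b, a in zip(before, after)]
mean_factor = sum(factors) / len(factors)

# Guide value recommended by the French public authorities.
all_below_limit = all(a <= 400.0 for a in after)

print(f"mean reduction factor = {mean_factor:.1f}, "
      f"all below 400 Bq/m^3: {all_below_limit}")
```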

  14. Savings and credit: women's informal groups as models for change in developing countries.

    Science.gov (United States)

    Wickrama, K A; Keith, P M

    1994-04-01

The aim of this research was to examine the financial success of newly formed women's groups involved in Sri Lanka's Hambantota Integrated Rural Development Program (HIRDEP). The project was initiated in July 1986 with 20 trained social mobilizers, who were each assigned to a village community of about 100 families. Mobilizers were selected from village volunteers involved in development activities. The study population included 78 women's groups, with an average size of 7 persons, from 19 villages with populations under the poverty level and people receiving food stamps. Measures of group performance included the exchange of labor among group members, the collective purchase of raw materials and consumer goods, and collective marketing. Service use was differentiated by extension services, inputs, assets, and general benefits. Financial activity was measured as the rupee size of the fund and the amounts of loans. 54 groups were engaged in nonfarm activity, and most groups had women social mobilizers. About 50% of women's groups had received all four service types. Funding ranged from Rs. 240 to Rs. 9500. The average credit loan per month was Rs. 408 per group. 85% of the loans were used for production, investment, or repayment of old loans. Younger groups showed slower growth of funds but were more efficient in loaning money, acquiring services, and conducting marketing activities collectively. Young social mobilizers were associated with efficiency of credit disbursement. Diversity of collective activities was related to the size and growth rate of funds. Multivariate analysis revealed that the growth rate of funds was primarily related to the personal income of members and the level of training of social mobilizers. Members were able to obtain loans equal to about 50% of their monthly income at an average interest rate of about 5%, which was three to four times lower than normally available rates. 
47% of the variance in the size of the fund was explained by average

  15. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous...... inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...
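As a generic illustration of model averaging of a derived parameter, the sketch below uses a textbook AIC-weight scheme with a Buckland-style approximate standard error; this is not necessarily the asymptotic method proposed in this record, and all the per-model numbers are invented:

```python
import numpy as np

# Hypothetical per-model results: AIC values and estimates of the same
# derived parameter (e.g. an effective dose) with their standard errors.
aic = np.array([102.3, 100.1, 104.8])
estimates = np.array([5.2, 4.9, 5.6])
ses = np.array([0.40, 0.35, 0.50])

# Akaike weights from AIC differences.
delta = aic - aic.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()

# Model-averaged estimate of the derived parameter.
theta_avg = np.sum(weights * estimates)

# A common approximate model-averaged standard error; note this does not by
# itself yield simultaneous intervals, which is the gap the paper addresses.
se_avg = np.sum(weights * np.sqrt(ses**2 + (estimates - theta_avg)**2))

print(f"averaged estimate = {theta_avg:.3f} +/- {se_avg:.3f}")
```

The between-model term `(estimates - theta_avg)**2` inflates the standard error to reflect model-selection uncertainty on top of within-model uncertainty.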

  16. The Consequences of Indexing the Minimum Wage to Average Wages in the U.S. Economy.

    Science.gov (United States)

    Macpherson, David A.; Even, William E.

    The consequences of indexing the minimum wage to average wages in the U.S. economy were analyzed. The study data were drawn from the 1974-1978 May Current Population Survey (CPS) and the 180 monthly CPS Outgoing Rotation Group files for 1979-1993 (approximate annual sample sizes of 40,000 and 180,000, respectively). The effects of indexing on the…

  17. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...
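The sampling scheme described above (average one signal only when a condition holds at a reference position) is easy to sketch numerically. The synthetic signals and the threshold condition below are assumptions for illustration, not the paper's plasma data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "turbulent" fluctuation at a reference probe and at a second,
# partially correlated probe (purely illustrative signals).
n = 200_000
ref = rng.normal(0.0, 1.0, n)
signal = 0.5 * ref + rng.normal(0.0, 1.0, n)

# Conditional averaging: keep only samples where the reference fluctuation
# exceeds a threshold, then average the second signal over those samples.
condition = ref > 2.0
cond_avg = signal[condition].mean()

print(f"conditional average ~ {cond_avg:.3f}")
```

For this jointly Gaussian pair the conditional average is simply half the conditional mean of the reference, so the estimate lands well above zero even though the unconditional mean of `signal` is zero; that contrast is what makes conditional averaging informative about correlated structures.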

  18. Evaluating a nursing erasmus exchange experience: Reflections on the use and value of the Nominal Group Technique for evaluation.

    Science.gov (United States)

    Cunningham, Sheila

    2017-09-01

This paper discusses the use of the Nominal Group Technique (NGT) for European nursing exchange evaluation at one university. The NGT is a semi-quantitative evaluation method derived from the Delphi method, popular in the 1970s and 1980s. The NGT was modified from the traditional version, retaining the structured cycles but adding a broader group discussion. The NGT had been used for 2 successive years but itself required analysis and evaluation for credibility and fit for purpose, which is presented here. It aimed to explore nursing students' exchange experiences and to aid programme development, future exchanges, and closure from exchange. Results varied across the cohorts, and students as participants engaged enthusiastically, generating ample data which they ranked and categorised collectively. Evaluation of the NGT itself was twofold: the programme team considered purpose, audience, inclusivity, context, and expertise; secondly, students were asked for their thoughts using a graffiti board. Students avidly engaged with the NGT but, importantly, also reported an effect from the process itself as an opportunity to reflect on and share their experiences. The programme team concluded that the NGT offered a credible evaluation tool which made use of authentic student voice and offered interactive group processes. Pedagogically, it enabled active reflection, thus aiding reorientation back to the United Kingdom and awareness of the 'transformative' consequences of their exchange experiences.

  19. Treatment of breast cancer with simultaneous integrated boost in hybrid plan technique. Influence of flattening filter-free beams

    Energy Technology Data Exchange (ETDEWEB)

    Bahrainy, Marzieh; Kretschmer, Matthias; Joest, Vincent; Kasch, Astrid; Wuerschmidt, Florian; Dahle, Joerg; Lorenzen, Joern [Radiologische Allianz, Hamburg (Germany)

    2016-05-15

The present study compares in silico treatment plans using the hybrid plan technique for hypofractionated radiation of mammary carcinoma with simultaneous integrated boost (SIB). The influence of 6 MV photon radiation in flattening filter free (FFF) mode is examined against the clinical standard flattening filter (FF) mode. RT planning took place with FF and FFF radiation plans for 10 left-sided breast cancer patients. Hybrid plans were realised with two tangential IMRT fields and one VMAT field. The dose prescription was in line with the guidelines in the ARO-2010-01 study. The dosimetric verification took place with a manufacturer-independent measurement system. Required dose prescriptions for the planning target volumes (PTV) were achieved for both groups. The average dose values of the ipsi- and contralateral lung and the heart did not differ significantly. Overall average incidental doses to the left anterior descending artery (LAD) of 8.24 ± 3.9 Gy in the FFF group and 9.05 ± 3.7 Gy in the FF group (p < 0.05) were found. The dosimetric verifications corresponded to the clinical requirements. FFF-based RT plans reduced the average treatment time by 17 s/fraction. In comparison to the FF-based hybrid plan technique, the FFF mode allows further reduction of the average LAD dose for comparable target volume coverage without adverse low-dose exposure of contralateral structures. The combination of the hybrid plan technique and 6 MV photon radiation in the FFF mode is suitable for use with hypofractionated dose schemes. The increased dose rate allows a substantial reduction of treatment time and thus beneficial application of the deep inspiration breath hold technique. (orig.) [German] Comparison of the in silico treatment plans of the clinically established hybrid plan technique for hypofractionated irradiation of mammary carcinoma with simultaneous integrated boost (SIB). The influence of 6 MV photon radiation in flattening

  20. Phase distribution measurements in narrow rectangular channels using image processing techniques

    International Nuclear Information System (INIS)

    Bentley, C.; Ruggles, A.

    1991-01-01

Many high flux research reactor fuel assemblies are cooled by systems of parallel narrow rectangular channels. The HFIR is cooled by single-phase forced convection under normal operating conditions. However, two-phase forced convection or two-phase mixed convection can occur in the fueled region as a result of some hypothetical accidents. Such flow conditions would occur only at decay power levels. The system pressure would be around 0.15 MPa in such circumstances. Phase distribution of air-water flow in a narrow rectangular channel is examined using image processing techniques. Ink is added to the water and clear channel walls are used to allow high speed still photographs and video tape to be taken of the air-water flow field. Flow field images are digitized and stored in a Macintosh 2ci computer using a frame grabber board. Local grey levels are related to liquid thickness in the flow channel using a calibration fixture. Image processing shareware is used to calculate the spatially averaged liquid thickness from the image of the flow field. Time averaged spatial liquid distributions are calculated using image calculation algorithms. The spatially averaged liquid distribution is calculated from the time averaged spatial liquid distribution to formulate the combined temporally and spatially averaged fraction values. The temporally and spatially averaged liquid fractions measured using this technique compare well to those predicted from pressure gradient measurements at zero superficial liquid velocity.
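The grey-level-to-thickness processing chain above (calibrate, convert each frame, time-average the spatial map, then space-average) can be sketched with a synthetic image stack. The frames, the linear calibration, and the channel gap are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stack of digitized "frames" (time, y, x) of grey levels, standing in
# for the high speed images of the ink-dyed flow field.
frames = rng.uniform(0.0, 255.0, size=(50, 32, 64))

gap = 2.0e-3                                     # channel gap, m (assumed)

def grey_to_thickness(grey):
    """Hypothetical linear calibration: grey 0 -> dry wall, 255 -> full gap."""
    return gap * grey / 255.0

thickness = grey_to_thickness(frames)            # liquid thickness per pixel

time_avg_map = thickness.mean(axis=0)            # time-averaged spatial map
liquid_fraction = time_avg_map.mean() / gap      # temporally + spatially averaged

print(f"average liquid fraction ~ {liquid_fraction:.3f}")
```

The order of operations mirrors the abstract: the time average is taken pixel-by-pixel first, and the spatial average of that map gives the combined temporally and spatially averaged liquid fraction.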