Flexible time domain averaging technique
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter, so it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation, and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that FTDA is capable of effectively recovering the periodic components from the background noise. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that FTDA can identify the direction and severity of the gear eccentricity, and it further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE but also provides a useful tool for fault symptom extraction in rotating machinery.
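Conventional TDA, which the abstract contrasts with FTDA, can be sketched in a few lines. This is a minimal illustration on a hypothetical signal with a known integer period, not the authors' FTDA algorithm:

```python
import numpy as np

def time_domain_average(signal, period_samples):
    """Conventional time domain averaging (TDA): slice the signal into
    segments of one (integer) period and average them. Synchronous
    periodic components reinforce; non-synchronous noise is attenuated
    by roughly the square root of the number of segments."""
    n_segments = len(signal) // period_samples
    segments = signal[:n_segments * period_samples].reshape(n_segments, period_samples)
    return segments.mean(axis=0)

# Demo: a noisy periodic signal with a known period of 100 samples.
rng = np.random.default_rng(0)
period = 100
t = np.arange(period)
clean = np.sin(2 * np.pi * t / period) + 0.3 * np.sin(2 * np.pi * 3 * t / period)
noisy = np.tile(clean, 200) + rng.normal(0.0, 1.0, 200 * period)

averaged = time_domain_average(noisy, period)
# The averaged waveform is much closer to one clean period than any raw segment.
```

Note that when the true period is not an integer number of samples, the truncation in the slicing step is exactly the period cutting error (PCE) the abstract discusses.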
Time-dependence and averaging techniques in atomic photoionization calculations
International Nuclear Information System (INIS)
Scheibner, K.F.
1984-01-01
Two distinct problems in the development and application of averaging techniques in photoionization calculations are considered. The first part of the thesis is concerned with the specific problem of near-resonant three-photon ionization in hydrogen, a process for which no cross section exists. Effects of including the laser pulse characteristics (both temporal and spatial) on the dynamics of the ionization probability and of the metastable-state probability are examined. It is found, for example, that the ionization probability can decrease with increasing field intensity. The temporal profile of the laser pulse is found to affect the dynamics very little, whereas the spatial character of the pulse can affect the results drastically. In the second part of the thesis, techniques are developed for calculating averaged cross sections directly, without first calculating a detailed cross section as an intermediate step. A variation of the moment technique and a new method based on the stabilization technique are applied successfully to atomic hydrogen and helium.
Diagram Techniques in Group Theory
Stedman, Geoffrey E.
2009-09-01
Preface; 1. Elementary examples; 2. Angular momentum coupling diagram techniques; 3. Extension to compact simple phase groups; 4. Symmetric and unitary groups; 5. Lie groups and Lie algebras; 6. Polarisation dependence of multiphoton processes; 7. Quantum field theoretic diagram techniques for atomic systems; 8. Applications; Appendix; References; Indexes.
Exploring JLA supernova data with improved flux-averaging technique
Energy Technology Data Exchange (ETDEWEB)
Wang, Shuang; Wen, Sixiang; Li, Miao, E-mail: wangshuang@mail.sysu.edu.cn, E-mail: wensx@mail2.sysu.edu.cn, E-mail: limiao9@mail.sysu.edu.cn [School of Physics and Astronomy, Sun Yat-Sen University, University Road (No. 2), Zhuhai (China)
2017-03-01
In this work, we explore the cosmological consequences of the "Joint Light-curve Analysis" (JLA) supernova (SN) data by using an improved flux-averaging (FA) technique, in which only the type Ia supernovae (SNe Ia) at high redshift are flux-averaged. Adopting the figure-of-merit (FoM) criterion and considering six dark energy (DE) parameterizations, we search for the best FA recipe, i.e., the one giving the tightest DE constraints in the (z_cut, Δz) plane, where z_cut and Δz are the redshift cut-off and redshift interval of FA, respectively. Then, based on the best FA recipe obtained, we discuss the impacts of varying z_cut and varying Δz, revisit the evolution of the SN color-luminosity parameter β, and study the effects of adopting different FA recipes on parameter estimation. We find that: (1) the best FA recipe is (z_cut = 0.6, Δz = 0.06), which is insensitive to the specific DE parameterization; (2) flux-averaging JLA samples at z_cut ≥ 0.4 yields tighter DE constraints than the case without FA; (3) using FA can significantly reduce the redshift evolution of β; (4) the best FA recipe favors a larger fractional matter density Ω_m. In summary, we present an alternative method of dealing with JLA data, which can reduce the systematic uncertainties of SNe Ia and give tighter DE constraints at the same time. Our method will be useful in the use of SNe Ia data for precision cosmology.
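The FA binning step described above can be sketched as follows. This is a simplified illustration: points below z_cut pass through unchanged and fluxes are plain-averaged per redshift bin, while the actual analysis also propagates the covariance matrix, which is omitted here:

```python
import numpy as np

def flux_average(z, flux, z_cut=0.6, dz=0.06):
    """Sketch of flux averaging (FA): supernovae with z >= z_cut are
    grouped into redshift bins of width dz, and the fluxes in each bin
    are averaged at the bin's mean redshift. Low-redshift points are
    passed through unchanged."""
    keep = z < z_cut
    z_out, f_out = list(z[keep]), list(flux[keep])
    edges = np.arange(z_cut, z.max() + dz, dz)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (z >= lo) & (z < hi)
        if in_bin.any():
            z_out.append(z[in_bin].mean())
            f_out.append(flux[in_bin].mean())
    return np.array(z_out), np.array(f_out)

# Demo: a uniform toy sample; high-z points get compressed into Δz-wide bins.
z = np.linspace(0.01, 0.99, 99)
flux = np.ones_like(z)
z_fa, flux_fa = flux_average(z, flux)
```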
Characteristics of phase-averaged equations for modulated wave groups
Klopman, G.; Petit, H.A.H.; Battjes, J.A.
2000-01-01
The project concerns the influence of long waves on coastal morphology. The modelling of the combined motion of the long waves and short waves in the horizontal plane is done by phase-averaging over the short wave motion and using intra-wave modelling for the long waves, see e.g. Roelvink (1993).
Abboud, S.; Blatt, C. M.; Lown, B.; Graboys, T. B.; Sadeh, D.; Cohen, R. J.
1987-01-01
An advanced non-invasive signal averaging technique was used to detect late potentials in two groups of patients: Group A (24 patients) with coronary artery disease (CAD) and without sustained ventricular tachycardia (VT), and Group B (8 patients) with CAD and sustained VT. Recorded analog data were digitized and aligned using a cross-correlation function with a fast Fourier transform scheme, averaged, and band-pass filtered between 60 and 200 Hz with a non-recursive digital filter. Averaged filtered waveforms were analyzed by a computer program for three parameters: (1) filtered QRS (fQRS) duration; (2) the interval between the peak of the R wave and the end of fQRS (R-LP); (3) the RMS value of the last 40 msec of fQRS (RMS). A significant difference was found between Groups A and B in fQRS (101 ± 13 msec vs 123 ± 15 msec; p < .0005) and in R-LP (52 ± 11 msec vs 71 ± 18 msec; p < .002). We conclude that (1) the use of a cross-correlation triggering method and a non-recursive digital filter enables reliable recording of late potentials from the body surface; and (2) fQRS and R-LP durations are sensitive indicators of CAD patients susceptible to VT.
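The cross-correlation alignment at the heart of this approach can be sketched as follows. This is a minimal illustration on a synthetic Gaussian pulse; the clinical pipeline's digitization and 60-200 Hz non-recursive filtering are omitted:

```python
import numpy as np

def align_and_average(sweeps, template):
    """Align each sweep to the template at the lag maximizing the
    cross-correlation, then average. Without alignment, trigger jitter
    smears the average and hides low-amplitude late potentials."""
    aligned = []
    for sweep in sweeps:
        corr = np.correlate(sweep, template, mode="full")
        lag = int(corr.argmax()) - (len(template) - 1)  # lag of best match
        aligned.append(np.roll(sweep, -lag))
    return np.mean(aligned, axis=0)

# Synthetic demo: jittered, noisy copies of a Gaussian "QRS-like" pulse.
rng = np.random.default_rng(1)
n = 200
t = np.arange(n)
template = np.exp(-0.5 * ((t - 100) / 5.0) ** 2)
sweeps = [np.roll(template, s) + rng.normal(0.0, 0.2, n)
          for s in (-7, 3, 5, -2, 0, 8, -4, 6)]
avg = align_and_average(sweeps, template)
# The aligned average recovers the pulse at its template position.
```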
An application of commercial data averaging techniques in pulsed photothermal experiments
International Nuclear Information System (INIS)
Grozescu, I.V.; Moksin, M.M.; Wahab, Z.A.; Yunus, W.M.M.
1997-01-01
We present an application of a data averaging technique commonly implemented in many commercial digital oscilloscopes and waveform digitizers. The technique was used for transient data averaging in pulsed photothermal radiometry experiments. Photothermal signals are surrounded by a substantial amount of noise, which affects the precision of the measurements. The effect of the noise level on a photothermal signal parameter, in our particular case the fitted decay time, is shown. The results of the analysis can be used in choosing the most effective averaging technique and in estimating the averaging parameter values. This helps to reduce the data acquisition time while improving the signal-to-noise ratio.
A Hybrid Islanding Detection Technique Using Average Rate of Voltage Change and Real Power Shift
DEFF Research Database (Denmark)
Mahat, Pukar; Chen, Zhe; Bak-Jensen, Birgitte
2009-01-01
The mainly used islanding detection techniques may be classified as active and passive. Passive techniques do not perturb the system but have larger non-detection zones, whereas active techniques have smaller non-detection zones but perturb the system. In this paper, a new hybrid technique is proposed to solve this problem. An average rate of voltage change (passive technique) is used to initiate a real power shift (active technique), which changes the real power of the distributed generation (DG) when the passive technique cannot clearly discriminate between islanding…
PEAK-TO-AVERAGE POWER RATIO REDUCTION USING CODING AND HYBRID TECHNIQUES FOR OFDM SYSTEM
Bahubali K. Shiragapur; Uday Wali
2016-01-01
In this article, error correction coding techniques are investigated for reducing the undesirable Peak-to-Average Power Ratio (PAPR). The Golay code (24, 12), Reed-Muller code (16, 11), Hamming code (7, 4), and the Hybrid technique (a combination of signal scrambling and signal distortion) proposed by us are used as the coding techniques; the simulation results show that the Hybrid technique reduces PAPR significantly compared to Conve...
van Osch, Yvette; Blanken, Irene; Meijs, Maartje H J; van Wolferen, Job
2015-04-01
We tested whether the perceived physical attractiveness of a group is greater than the average attractiveness of its members. In nine studies, we find evidence for the so-called group attractiveness effect (GA-effect), using female, male, and mixed-gender groups, indicating that group impressions of physical attractiveness are more positive than the average ratings of the group members. A meta-analysis on 33 comparisons reveals that the effect is medium to large (Cohen's d = 0.60) and moderated by group size. We explored two explanations for the GA-effect: (a) selective attention to attractive group members, and (b) the Gestalt principle of similarity. The results of our studies are in favor of the selective attention account: People selectively attend to the most attractive members of a group and their attractiveness has a greater influence on the evaluation of the group. © 2015 by the Society for Personality and Social Psychology, Inc.
Large-signal analysis of DC motor drive system using state-space averaging technique
International Nuclear Information System (INIS)
Bekir Yildiz, Ali
2008-01-01
The analysis of a separately excited DC motor driven by a DC-DC converter is realized by using the state-space averaging technique. First, a general and unified large-signal averaged circuit model for DC-DC converters is given. The method converts power electronic systems, which are periodically time-variant because of their switching operation, into unified, time-independent systems. Using the averaged circuit model enables us to combine the different converter topologies. Thus, all analysis and design processes concerning the DC motor can easily be carried out with the unified averaged model, which is valid over the whole period. Some large-signal variations such as the speed and current of the DC motor, the steady-state analysis, and the large-signal and small-signal transfer functions are easily obtained from the averaged circuit model.
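The core state-space averaging step can be sketched for an ideal buck converter feeding a resistive load. The component values are illustrative, and the motor dynamics the paper combines with the converter model are omitted:

```python
import numpy as np

# Hypothetical buck converter values (illustrative, not from the paper).
L, C, R, Vin, d = 1e-3, 100e-6, 10.0, 24.0, 0.5

# State x = [inductor current iL, capacitor voltage vC].
A_on  = np.array([[0.0, -1.0 / L], [1.0 / C, -1.0 / (R * C)]])
A_off = A_on  # for the ideal buck, only the input coupling changes
B_on  = np.array([Vin / L, 0.0])
B_off = np.array([0.0, 0.0])

# State-space averaging: weight each switch state by its duty ratio,
# turning the periodically switched system into one time-invariant model.
A_avg = d * A_on + (1.0 - d) * A_off
B_avg = d * B_on + (1.0 - d) * B_off

# The averaged DC operating point solves A_avg @ x + B_avg = 0.
x_dc = np.linalg.solve(A_avg, -B_avg)
# For the buck converter this recovers vC = d * Vin and iL = vC / R.
```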
Grouping techniques in a EFL Classroom
Directory of Open Access Journals (Sweden)
Ramírez Salas, Marlene
2005-03-01
This article focuses on the need for English language teachers to use group work as a means to foster communication among students. The writer presents a definition of group work, its advantages and disadvantages, some activities for using group work, and some grouping techniques created or adapted by the writer to illustrate the topic.
PEAK-TO-AVERAGE POWER RATIO REDUCTION USING CODING AND HYBRID TECHNIQUES FOR OFDM SYSTEM
Directory of Open Access Journals (Sweden)
Bahubali K. Shiragapur
2016-03-01
In this article, error correction coding techniques are investigated for reducing the undesirable Peak-to-Average Power Ratio (PAPR). The Golay code (24, 12), Reed-Muller code (16, 11), Hamming code (7, 4), and the Hybrid technique (a combination of signal scrambling and signal distortion) proposed by us are used as the coding techniques. The simulation results show that the Hybrid technique reduces PAPR significantly compared to the conventional and modified selective mapping techniques. The simulation results are validated through statistical properties: the proposed technique's autocorrelation value is maximal, indicating a reduction in PAPR. Symbol preference based on Hamming distance is the key idea for reducing PAPR. The simulation results are discussed in detail in this article.
A Group Neighborhood Average Clock Synchronization Protocol for Wireless Sensor Networks
Lin, Lin; Ma, Shiwei; Ma, Maode
2014-01-01
Clock synchronization is a very important issue for applications of wireless sensor networks. The sensors need to keep a strict clock so that users can know exactly what happens in the monitored area at the same time. This paper proposes a novel internal distributed clock synchronization solution using group neighborhood averaging. Each sensor node collects the offsets and skew rates of its neighbors. Group averages of the offset and skew rate values are calculated instead of the conventional point-to-point averages. The sensor node then returns the compensated value back to its neighbors. The propagation delay is considered and compensated. An analytical treatment of the offset and skew compensation is presented. Simulation results validate the effectiveness of the protocol and reveal that it allows sensor networks to quickly establish a consensus clock and maintain a small deviation from it. PMID:25120163
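The group-averaging idea can be sketched as a consensus iteration. This simplified illustration handles only clock offsets on a hypothetical ring network, ignoring the skew-rate and propagation-delay compensation the protocol also performs:

```python
import numpy as np

def group_neighborhood_sync(offsets, neighbors, rounds=20):
    """Each node repeatedly replaces its clock offset with the average
    over its neighborhood (itself plus its neighbors). A sketch of
    consensus-style group averaging, ignoring skew and delay."""
    x = np.array(offsets, dtype=float)
    for _ in range(rounds):
        x = np.array([np.mean([x[i]] + [x[j] for j in neighbors[i]])
                      for i in range(len(x))])
    return x

# A 6-node ring network with initial clock offsets in milliseconds.
neighbors = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
offsets = [5.0, -3.0, 1.0, 4.0, -2.0, 0.0]
final = group_neighborhood_sync(offsets, neighbors)
# All nodes converge toward a common consensus offset.
```

On a regular graph this averaging is doubly stochastic, so the consensus value is the mean of the initial offsets.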
Using Creative Group Techniques in High Schools
Veach, Laura J.; Gladding, Samuel T.
2007-01-01
Groups in high schools that use creative techniques help adolescents express their emotions appropriately, behave differently, and gain insight into themselves and others. This article looks at seven different creative arts media--music, movement, visual art, literature, drama, play, and humor--and offers examples of how they can be used in groups…
Survey as a group interactive teaching technique
Directory of Open Access Journals (Sweden)
Ana GOREA
2017-03-01
Smooth running of the educational process, and its results, depend a great deal on the methods used. The methodology of teaching offers a great variety of techniques that the teacher can make use of in the teaching/learning process. Techniques such as brainstorming, the cube, KWL, case study, the Venn diagram, and many others are familiar to teachers, who use them effectively in the classroom. The present article proposes a technique called the 'survey', which has been successfully used by the author as a student-centered speaking activity in foreign language classes. It has certain advantages, especially in large groups, and it can be adapted for any other discipline when the teacher wishes to give students space for cooperative activity and creativity.
Software for the grouped optimal aggregation technique
Brown, P. M.; Shaw, G. W. (Principal Investigator)
1982-01-01
The grouped optimal aggregation technique produces minimum-variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total current acreage estimate.
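The minimum-variance combination idea can be illustrated with the simplest case, inverse-variance weighting of independent unbiased estimates. This is a scalar simplification of the optimal weighting matrix over strata, not the full grouped procedure:

```python
def combine_min_variance(estimates, variances):
    """Minimum-variance unbiased combination of independent unbiased
    estimates: weights proportional to 1/variance. The combined variance
    is the reciprocal of the summed weights."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    estimate = sum(w * e for w, e in zip(weights, estimates)) / total
    variance = 1.0 / total
    return estimate, variance

# Two hypothetical acreage estimates for the same stratum group:
# a precise historical-ratio estimate and a noisier satellite estimate.
est, var = combine_min_variance([100.0, 110.0], [25.0, 100.0])
# The combination leans toward the lower-variance estimate, and its
# variance (20) is smaller than either input variance.
```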
Human-experienced temperature changes exceed global average climate changes for all income groups
Hsiang, S. M.; Parshall, L.
2009-12-01
Global climate change alters local climates everywhere. Many climate change impacts, such as those affecting health, agriculture, and labor productivity, depend on these local climatic changes, not on the global mean change. Traditional, spatially averaged climate change estimates are strongly influenced by the response of icecaps and oceans, providing limited information on human-experienced climatic changes. If used improperly by decision-makers, these estimates distort the estimated costs of climate change. We overlay the IPCC's 20 GCM simulations on the global population distribution to estimate the local climatic changes experienced by the world population in the 21st century. The A1B scenario leads to a well-known rise in global average surface temperature of +2.0°C between the periods 2011-2030 and 2080-2099. Projected on the global population distribution in 2000, the median human will experience an annual average rise of +2.3°C (4.1°F) and the average human will experience a rise of +2.4°C (4.3°F). Less than 1% of the population will experience changes smaller than +1.0°C (1.8°F), while 25% and 10% of the population will experience changes greater than +2.9°C (5.2°F) and +3.5°C (6.2°F), respectively. 67% of the world population experiences temperature changes greater than the area-weighted average change of +2.0°C (3.6°F). Using two approaches to characterize the spatial distribution of income, we show that the wealthiest, middle, and poorest thirds of the global population experience similar changes, with no group dominating the global average. Calculations for precipitation indicate little change in average precipitation, but redistributions of precipitation occur in all income groups. These results suggest that economists and policy-makers using spatially averaged estimates of climate change to approximate local changes will systematically and significantly underestimate the impacts of climate change on the 21st-century population.
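The distinction between an area-style cell average and a population-weighted statistic can be sketched with a toy grid (hypothetical numbers, not the study's GCM data):

```python
import numpy as np

def weighted_quantile(values, weights, q):
    """Quantile of `values` under `weights` (e.g. population counts),
    so the result describes the median *person*, not the median grid cell."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cdf = np.cumsum(w) / np.sum(w)
    return v[np.searchsorted(cdf, q)]

# Toy grid: temperature change per cell and population per cell.
dT  = [1.0, 1.5, 2.0, 2.5, 3.0]   # deg C per grid cell
pop = [1, 1, 2, 10, 6]            # population weights per cell

area_mean = np.mean(dT)                        # unweighted cell average
person_median = weighted_quantile(dT, pop, 0.5)
# Because people cluster in the warmer cells here, the median person's
# change exceeds the plain cell average, as in the study's finding.
```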
National survey provides average power quality profiles for different customer groups
International Nuclear Information System (INIS)
Hughes, B.; Chan, J.
1996-01-01
A three-year survey, beginning in 1991, was conducted by the Canadian Electrical Association to study the levels of power quality that exist in Canada and to determine ways to increase utility expertise in making power quality measurements. Twenty-two utilities across Canada were involved, with a total of 550 sites monitored, including residential and commercial customers. Power disturbances, power outages, and power quality were recorded for each site. To create a group-average power quality plot, the transient disturbance activity for each site was normalized to a per-channel, per-month basis and then divided into a grid. Results showed that the average power quality provided by Canadian utilities was very good. Almost all electrical disturbances within a customer's premises were created within, and stayed within, those premises. Disturbances were generally beyond utility control. Utilities could, however, reduce the amount of time the steady-state voltage exceeds the CSA normal voltage upper limit. 5 figs
Roth, P L; Bobko, P
2000-06-01
College grade point average (GPA) is often used in a variety of ways in personnel selection. Unfortunately, there is little empirical research literature in human resource management that informs researchers or practitioners about the magnitude of ethnic group differences and any potential adverse impact implications when using cumulative GPA for selection. Data from a medium-sized university in the Southeast (N = 7,498) indicate that the standardized average Black-White difference for cumulative GPA in the senior year is d = 0.78. The authors also conducted analyses at 3 GPA screens (3.00, 3.25, and 3.50) to demonstrate that employers (or educators) might face adverse impact at all 3 levels if GPA continues to be implemented as part of a selection system. Implications and future research are discussed.
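The adverse-impact arithmetic behind these findings can be sketched with normal distributions separated by the reported d = 0.78. The calibration of a GPA screen to a z-score cutoff is an assumption here, purely for illustration:

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def pass_rates(cutoff_z, d=0.78):
    """Selection rates for two groups whose GPA distributions are
    normal with a standardized mean difference d (d = 0.78 from the
    abstract). cutoff_z is the GPA screen expressed in majority-group
    standard-deviation units (a hypothetical calibration)."""
    majority = 1.0 - normal_cdf(cutoff_z)
    minority = 1.0 - normal_cdf(cutoff_z + d)
    return majority, minority, minority / majority  # last: impact ratio

# A screen set at the majority-group mean already yields an impact
# ratio well below the 4/5ths (0.8) rule of thumb.
w, b, ratio = pass_rates(0.0)
```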
Davit, Yohan
2013-12-01
A wide variety of techniques have been developed to homogenize transport equations in multiscale and multiphase systems. This has yielded a rich and diverse field, but has also resulted in the emergence of isolated scientific communities and disconnected bodies of literature. Here, our goal is to bridge the gap between formal multiscale asymptotics and the volume averaging theory. We illustrate the methodologies via a simple example application describing a parabolic transport problem and, in so doing, compare their respective advantages/disadvantages from a practical point of view. This paper is also intended as a pedagogical guide and may be viewed as a tutorial for graduate students as we provide historical context, detail subtle points with great care, and reference many fundamental works. © 2013 Elsevier Ltd.
Zhang, Shengli; Tang, Jiong
2016-04-01
The gearbox is one of the most vulnerable subsystems in a wind turbine. Its health status significantly affects the efficiency and function of the entire system. Vibration-based fault diagnosis methods are widely applied nowadays. However, vibration signals are always contaminated by noise that comes from data acquisition errors, structural geometric errors, operation errors, etc. As a result, it is difficult to identify potential gear failures directly from vibration signals, especially for early-stage faults. This paper utilizes a synchronous averaging technique in the time-frequency domain to remove non-synchronous noise and enhance the fault-related time-frequency features. The enhanced time-frequency information is further employed in gear fault classification and identification through feature extraction algorithms including Kernel Principal Component Analysis (KPCA), Multilinear Principal Component Analysis (MPCA), and Locally Linear Embedding (LLE). Results show that the LLE approach is the most effective at classifying and identifying the different gear faults.
Measurement of cross sections of threshold detectors with spectrum average technique
International Nuclear Information System (INIS)
Agus, Y.; Celenk, I.; Oezmen, A.
2004-01-01
Cross sections of the reactions 103Rh(n,n')103mRh, 115In(n,n')115mIn, 232Th(n,f), 47Ti(n,p)47Sc, 64Zn(n,p)64Cu, 58Ni(n,p)58Co, 54Fe(n,p)54Mn, 46Ti(n,p)46Sc, 27Al(n,p)27Mg, 56Fe(n,p)56Mn, 24Mg(n,p)24Na, 59Co(n,α)56Mn, 27Al(n,α)24Na, and 48Ti(n,p)48Sc were measured, with average neutron energies above the effective threshold, by using the activation method with the spectrum average technique in an irradiation system containing three equivalent Am/Be sources, each of 592 GBq activity. The cross sections were determined with reference to the fast-neutron fission cross section of 238U. The measured values and published values are generally in agreement. (orig.)
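The spectrum-average technique rests on flux-weighting the energy-dependent cross section, sigma_avg = ∫ sigma(E) φ(E) dE / ∫ φ(E) dE. A toy numerical version, with illustrative shapes rather than evaluated nuclear data or a real Am/Be spectrum:

```python
import numpy as np

# Uniform energy grid in MeV; the bin width cancels in the ratio.
E = np.linspace(0.1, 10.0, 500)

sigma = 0.05 * E                  # toy cross section rising above threshold (barns)
phi = E * np.exp(-E / 1.5)        # toy fast-neutron spectrum shape

# Flux-weighted (spectrum-averaged) cross section.
sigma_avg = float(np.sum(sigma * phi) / np.sum(phi))
```

Because sigma here is linear in E, sigma_avg is just 0.05 times the flux-weighted mean energy of the spectrum, which is a useful sanity check for any implementation.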
International Nuclear Information System (INIS)
K. Montague
2000-01-01
The purpose of this calculation is to develop additional Biosphere Dose Conversion Factors (BDCFs) for a reasonably maximally exposed individual (RMEI) for the periods 10,000 years and 1,000,000 years after repository closure. In addition, BDCFs for the average member of a critical group are calculated for those additional radionuclides postulated to reach the environment during the period after 10,000 years and up to 1,000,000 years. After the permanent closure of the repository, the engineered systems within it will eventually lose their ability to contain the radionuclide inventory, and the radionuclides will migrate through the geosphere, eventually entering the local water table and moving toward inhabited areas. The primary release scenario is a groundwater well used for drinking water supply and irrigation; this calculation takes these postulated releases and follows them through various pathways until they result in a dose to either a member of the critical group or a reasonably maximally exposed individual. The pathways considered in this calculation include inhalation, ingestion, and direct exposure.
Group Inquiry Techniques for Teaching Writing.
Hawkins, Thom
The small size of college composition classes encourages exciting and meaningful interaction, especially when students are divided into smaller, autonomous groups for all or part of the hour. This booklet discusses the advantages of combining the inquiry method (sometimes called the discovery method) with a group approach and describes specific…
AGE GROUP CLASSIFICATION USING MACHINE LEARNING TECHNIQUES
Arshdeep Singh Syal; Abhinav Gupta
2017-01-01
A human face provides a lot of information that allows another person to identify characteristics such as age, sex, etc. The challenge, therefore, is to develop an age group prediction system using machine learning methods. The task of estimating the age group of a human from frontal facial images is very captivating, but also challenging, because of the personalized and non-linear pattern of aging that differs from one person to another. This paper examines the problem of predicti...
Saurino, Dan R.; Hinson, Kenneth; Bouma, Amy
This paper focuses on the use of a group action research approach to help student teachers develop strategies to improve the grade point average of at-risk students. Teaching interventions such as group work and group and individual tutoring were compared to teaching strategies already used in the field. Results indicated an improvement in the…
Zhang, Shengli; Tang, J.
2018-01-01
Gear fault diagnosis relies heavily on scrutiny of the measured vibration responses. In reality, gear vibration signals are noisy and dominated by the meshing frequencies and their harmonics, which oftentimes overlay the fault-related components. Moreover, many gear transmission systems, e.g., those in wind turbines, constantly operate under non-stationary conditions. To reduce the influence of non-synchronous components and noise, a fault signature enhancement method built upon angle-frequency domain synchronous averaging is developed in this paper. Instead of being averaged in the time domain, the signals are processed in the angle-frequency domain to resolve the phase shifts between signal segments caused by uncertainties such as clearances, input disturbances, and sampling errors. The enhanced results are then analyzed through feature extraction algorithms to identify the most distinct features for fault classification and identification. Specifically, Kernel Principal Component Analysis (KPCA), targeting nonlinearity; Multilinear Principal Component Analysis (MPCA), targeting high dimensionality; and Locally Linear Embedding (LLE), targeting local similarity among the enhanced data, are employed and compared to yield insights. Numerical and experimental investigations are performed, and the results reveal the effectiveness of angle-frequency domain synchronous averaging in enabling feature extraction and classification.
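The angle-domain resampling that makes synchronous averaging work under varying speed can be sketched as follows. This simplified illustration averages in the angle domain only (the paper works in the angle-frequency domain), on a hypothetical accelerating shaft:

```python
import numpy as np

def angle_domain_average(signal, shaft_angle, samples_per_rev=256):
    """Resample the signal onto a uniform shaft-angle grid (order
    tracking), slice it into revolutions, and average. Averaging in the
    angle domain keeps gear-synchronous components aligned even when
    the shaft speed varies."""
    n_revs = int(shaft_angle[-1] // (2 * np.pi))
    grid = np.linspace(0.0, n_revs * 2 * np.pi,
                       n_revs * samples_per_rev, endpoint=False)
    resampled = np.interp(grid, shaft_angle, signal)
    return resampled.reshape(n_revs, samples_per_rev).mean(axis=0)

# Demo: an accelerating shaft carrying an 8th-order (mesh-like) component.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 20000)
angle = 2 * np.pi * (5.0 * t + 0.2 * t ** 2)      # shaft angle, varying speed
signal = np.sin(8.0 * angle) + rng.normal(0.0, 0.5, t.size)
avg = angle_domain_average(signal, angle)
# avg approximates one clean revolution of the 8th-order component,
# which plain time-domain averaging would smear at varying speed.
```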
Gallistel, C R
2012-05-01
Except under unusually favorable circumstances, one can infer from functions obtained by averaging across the subjects neither the form of the function that describes the behavior of the individual subject nor the central tendencies of descriptive parameter values. We should restore the cumulative record to the place of honor as our means of visualizing behavioral change, and we should base our conclusions on analyses that measure where the change occurs in these response-by-response records of the behavior of individual subjects. When that is done, we may find that the extinction of responding to a continuously reinforced stimulus is faster than the extinction of responding to a partially reinforced stimulus in a within-subject design because the latter is signaled extinction. Copyright © 2012 Elsevier B.V. All rights reserved.
ASSESSMENT OF DYNAMIC PRA TECHNIQUES WITH INDUSTRY AVERAGE COMPONENT PERFORMANCE DATA
Energy Technology Data Exchange (ETDEWEB)
Yadav, Vaibhav; Agarwal, Vivek; Gribok, Andrei V.; Smith, Curtis L.
2017-06-01
In the nuclear industry, risk monitors are intended to provide a point-in-time estimate of the system risk given the current plant configuration. Current risk monitors are limited in that they do not properly take into account the deteriorating states of plant equipment, which are unit-specific. Current approaches to computing risk monitors use probabilistic risk assessment (PRA) techniques, but the assessment is typically a snapshot in time. Living PRA models attempt to address the limitations of traditional PRA models in a limited sense by including temporary changes in plant and system configurations; however, information on plant component health is not considered. This often leaves risk monitors using living PRA models incapable of evaluating dynamic degradation scenarios that evolve over time. There is a need for enabling approaches that solidify risk monitors to provide time- and condition-dependent risk by integrating traditional PRA models with condition monitoring and prognostic techniques. This paper presents estimation of system risk evolution over time by integrating plant risk monitoring data with dynamic PRA methods that incorporate aging and degradation. Several online, non-destructive approaches have been developed for diagnosing plant component conditions in the nuclear industry, e.g., condition indication indices based on vibration analysis, current signatures, and operational history [1]. In this work, the component performance measures at U.S. commercial nuclear power plants (NPPs) [2] are incorporated within various dynamic PRA methodologies [3] to provide better estimates of the probability of failure. Aging and degradation are modeled within the Level-1 PRA framework and applied to several failure modes of pumps, and the approach can be extended to a range of components, viz. valves, generators, batteries, and pipes.
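The difference between a static snapshot and a time-dependent risk estimate can be sketched with a toy aging model. The Weibull parameters, the series-system structure, and the component set are all hypothetical stand-ins, not the paper's Level-1 PRA model:

```python
import math

def weibull_failure_prob(t, beta, eta):
    """Cumulative failure probability under a Weibull aging model;
    shape beta > 1 encodes wear-out (degradation over time)."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def top_event_prob(t, components):
    """Point-in-time risk for a toy series system (any component
    failure fails the system), refreshed with component age: a
    stand-in for a time-dependent PRA evaluation."""
    p_survive = 1.0
    for beta, eta in components:
        p_survive *= 1.0 - weibull_failure_prob(t, beta, eta)
    return 1.0 - p_survive

# Two hypothetical pumps with wear-out aging (shape beta = 2).
components = [(2.0, 10.0), (2.0, 15.0)]
risk_year1 = top_event_prob(1.0, components)
risk_year5 = top_event_prob(5.0, components)
# Risk grows with component age, which a static snapshot PRA misses.
```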
Consolidated techniques for groups of enterprises with complex structure
Directory of Open Access Journals (Sweden)
Cristina Ciuraru-Andrica
2009-12-01
The preparation and disclosure of the financial statements of a group of enterprises involves some consolidation techniques. The literature presents many techniques, but only two of them are used in practice. They are described here first individually and then comparatively. A group of entities can choose either technique; the final result (the consolidated financial statements) is the same, whatever the option.
Altshuller, Aubrey P
1955-01-01
The average bond energies D(gm)(B-Z) for boron-containing molecules have been calculated by the Pauling geometric-mean equation. These calculated bond energies are compared with the average bond energies D(exp)(B-Z) obtained from experimental data. The higher values of D(exp)(B-Z) in comparison with D(gm)(B-Z) when Z is an element in the fifth, sixth, or seventh periodic group may be attributed to resonance stabilization or double-bond character.
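The Pauling geometric-mean estimate is straightforward to compute; the bond-energy values below are illustrative stand-ins, not Altshuller's tabulated data:

```python
import math

def geometric_mean_bond_energy(d_bb, d_zz):
    """Pauling geometric-mean estimate: D_gm(B-Z) ~ sqrt(D(B-B) * D(Z-Z)),
    from the single-bond energies of the two homonuclear bonds."""
    return math.sqrt(d_bb * d_zz)

# Hypothetical single-bond energies in kcal/mol (illustration only).
d_gm = geometric_mean_bond_energy(79.0, 58.0)

# If an experimental D_exp(B-Z) exceeds D_gm, the gap is one way to gauge
# resonance stabilization or double-bond character, as the abstract notes.
resonance_stabilization = 90.0 - d_gm
```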
A Survey of Spatio-Temporal Grouping Techniques
National Research Council Canada - National Science Library
Megret, Remi; DeMenthon, Daniel
2002-01-01
...) segmentation by trajectory grouping, and (3) joint spatial and temporal segmentation. The first category is the broadest, as it inherits the legacy techniques of image segmentation and motion segmentation...
International Nuclear Information System (INIS)
Blink, J.A.; Dye, R.E.; Kimlinger, J.R.
1981-12-01
Calculation of neutron activation of proposed fusion reactors requires a library of neutron-activation cross sections. One such library is ACTL, which is being updated and expanded by Howerton. If the energy-dependent neutron flux is also known as a function of location and time, the buildup and decay of activation products can be calculated. In practice, hand calculation is impractical without energy-averaged cross sections because of the large number of energy groups. A widely used activation computer code, ORIGEN2, also requires energy-averaged cross sections. Accordingly, we wrote the ORLIB code to collapse the ACTL library, using the flux as a weighting function. The ORLIB code runs on the LLNL Cray computer network. We have also modified ORIGEN2 to accept the expanded activation libraries produced by ORLIB
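The flux-weighted collapse that ORLIB performs can be sketched in a few lines; the three-group numbers below are toy values chosen for illustration:

```python
def collapse_cross_section(sigma_g, flux_g):
    """Energy-averaged (one-group) cross section: the multigroup values
    weighted by the group fluxes, as in collapsing an activation library."""
    if len(sigma_g) != len(flux_g):
        raise ValueError("need one flux value per energy group")
    weighted = sum(s * f for s, f in zip(sigma_g, flux_g))
    return weighted / sum(flux_g)

# Three-group toy example: a flux peaked in the first group pulls the
# average toward that group's cross section.
sigma_bar = collapse_cross_section([0.1, 0.5, 2.0], [10.0, 3.0, 1.0])
```

A code like ORIGEN2 can then take the single collapsed value in place of the full multigroup set.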
Group decision-making techniques for natural resource management applications
Coughlan, Beth A.K.; Armour, Carl L.
1992-01-01
This report is an introduction to decision analysis and problem-solving techniques for professionals in natural resource management. Although these managers are often called upon to make complex decisions, their training in the natural sciences seldom provides exposure to the decision-making tools developed in management science. Our purpose is to begin to fill this gap. We present a general analysis of the pitfalls of group problem solving and suggestions for improved interactions, followed by the specific techniques. Selected techniques are illustrated. The material is easy to understand and apply without previous training or excessive study, and it is applicable to natural resource management issues.
Expressed satisfaction with the nominal group technique among change agents
Gresham, J.N.
1986-01-01
The purpose of this study was to determine whether or not policymakers and change agents with differing professional backgrounds and responsibilities, who participated in the structured process of a…
Renormalization group decimation technique for disordered binary harmonic chains
International Nuclear Information System (INIS)
Wiecko, C.; Roman, E.
1983-10-01
The density of states of disordered binary harmonic chains is calculated using the Renormalization Group Decimation technique on the displacements of the masses from their equilibrium positions. The results are compared with numerical simulation data and with those obtained with the current method of Goncalves da Silva and Koiller. The advantage of our procedure over other methods is discussed. (author)
Directory of Open Access Journals (Sweden)
Amjad Ali
2015-01-01
A new simple moving voltage average (SMVA) technique with a fixed-step direct-control incremental conductance method is introduced to reduce solar photovoltaic voltage (VPV) oscillation under nonuniform solar irradiation conditions. To evaluate and validate the performance of the proposed SMVA method in comparison with the conventional fixed-step direct-control incremental conductance method under extreme conditions, different scenarios were simulated. Simulation results show that in most cases SMVA gives better results with more stability as compared to the traditional fixed-step direct-control INC method, with a faster tracking system, a reduction in sustained oscillations, fast steady-state response, and robustness. The steady-state oscillations are almost eliminated because of the extremely small dP/dV around the maximum power (MP) point, which verifies that the proposed method is suitable for standalone PV systems under extreme weather conditions, not only in terms of bus voltage stability but also in overall system efficiency.
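As a rough sketch of the moving-average idea (the window length and voltage samples below are assumptions for illustration, not the paper's parameters):

```python
def simple_moving_voltage_average(v_pv, window):
    """Smooth sampled PV voltage with a trailing moving average so that
    MPPT step decisions are not driven by irradiance-induced oscillation."""
    smoothed = []
    for i in range(len(v_pv)):
        start = max(0, i - window + 1)   # shorter window at the start
        segment = v_pv[start:i + 1]
        smoothed.append(sum(segment) / len(segment))
    return smoothed

# Oscillating samples (volts) are damped before the INC step is applied.
v_smooth = simple_moving_voltage_average([30.0, 32.0, 28.0, 30.0], window=2)
```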
Gessner, Manuel; Breuer, Heinz-Peter
2013-04-01
We obtain exact analytic expressions for a class of functions expressed as integrals over the Haar measure of the unitary group in d dimensions. Based on these general mathematical results, we investigate generic dynamical properties of complex open quantum systems, employing arguments from ensemble theory. We further generalize these results to arbitrary eigenvalue distributions, allowing a detailed comparison of typical regular and chaotic systems with the help of concepts from random matrix theory. To illustrate the physical relevance and the general applicability of our results we present a series of examples related to the fields of open quantum systems and nonequilibrium quantum thermodynamics. These include the effect of initial correlations, the average quantum dynamical maps, the generic dynamics of system-environment pure state entanglement and, finally, the equilibration of generic open and closed quantum systems.
Regnier, David; Lacroix, Denis; Scamps, Guillaume; Hashimoto, Yukio
2018-03-01
In a mean-field description of superfluidity, particle number and gauge angle are treated as quasiclassical conjugate variables. This level of description was recently used to describe nuclear reactions around the Coulomb barrier. Important effects of the relative gauge angle between two identical superfluid nuclei (symmetric collisions) on transfer probabilities and fusion barriers have been uncovered. A theory making contact with experiments should at least average over different initial relative gauge angles. In the present work, we propose a new approach to obtain the multiple pair-transfer probabilities between superfluid systems. This method, called the phase-space combinatorial (PSC) technique, relies both on phase-space averaging and on combinatorial arguments to infer the full pair-transfer probability distribution at the cost of multiple mean-field calculations only. After benchmarking this approach in a schematic model, we apply it to the collision 20O+20O at various energies below the Coulomb barrier. The predictions for one-pair transfer are similar to results obtained with an approximate projection method, whereas significant differences are found for two-pair transfer. Finally, we investigated the applicability of the PSC method to the contact between nonidentical superfluid systems. A generalization of the method is proposed and applied to the schematic model, showing that the pair-transfer probabilities are reasonably reproduced. The applicability of the PSC method to asymmetric nuclear collisions is investigated for the 14O+20O collision, and it turns out that unrealistically small single- and multiple-pair-transfer probabilities are obtained. This is explained by the fact that the relative gauge angle plays in this case a minor role in the particle-transfer process compared to other mechanisms, such as equilibration of the charge/mass ratio. We conclude that the best ground for probing gauge-angle effects in nuclear reactions and/or for applying the proposed
Energy Technology Data Exchange (ETDEWEB)
Lefere, Philippe, E-mail: radiologie@skynet.be [VCTC, Virtual Colonoscopy Teaching Centre, Akkerstraat 32c, B-8830 Hooglede (Belgium); Silva, Celso, E-mail: caras@uma.pt [Human Anatomy of Medical Course, University of Madeira, Praça do Município, 9000-082 Funchal (Portugal); Gryspeerdt, Stefaan, E-mail: stefaan@sgryspeerdt.be [VCTC, Virtual Colonoscopy Teaching Centre, Akkerstraat 32c, B-8830 Hooglede (Belgium); Rodrigues, António, E-mail: nucleo@nid.pt [Nucleo Imagem Diagnostica, Rua 5 De Outubro, 9000-216 Funchal (Portugal); Vasconcelos, Rita, E-mail: rita@uma.pt [Department of Engineering and Mathematics, University of Madeira, Praça do Município, 9000-082 Funchal (Portugal); Teixeira, Ricardo, E-mail: j.teixeira1947@gmail.com [Department of Gastroenterology, Central Hospital of Funchal, Avenida Luís de Camões, 9004513 Funchal (Portugal); Gouveia, Francisco Henriques de, E-mail: fhgouveia@netmadeira.com [LANA, Pathology Centre, Rua João Gago, 10, 9000-071 Funchal (Portugal)
2013-06-15
Purpose: To prospectively assess the performance of teleradiology-based CT colonography to screen a population group of an island, at average risk for colorectal cancer. Materials and methods: A cohort of 514 patients living in Madeira, Portugal, was enrolled in the study. Institutional review board approval was obtained and all patients signed an informed consent. All patients underwent both CT colonography and optical colonoscopy. CT colonography was interpreted by an experienced radiologist at a remote centre using tele-radiology. Per-patient sensitivity, specificity, positive (PPV) and negative (NPV) predictive values with 95% confidence intervals (95%CI) were calculated for colorectal adenomas and advanced neoplasia ≥6 mm. Results: 510 patients were included in the study. CT colonography obtained a per-patient sensitivity, specificity, PPV and, NPV for adenomas ≥6 mm of 98.11% (88.6–99.9% 95% CI), 90.97% (87.8–93.4% 95% CI), 56.52% (45.8–66.7% 95% CI), 99.75% (98.4–99.9% 95% CI). For advanced neoplasia ≥6 mm per-patient sensitivity, specificity, PPV and, NPV were 100% (86.7–100% 95% CI), 87.07% (83.6–89.9% 95% CI), 34.78% (25.3–45.5% 95% CI) and 100% (98.8–100% 95% CI), respectively. Conclusion: In this prospective trial, teleradiology-based CT colonography was accurate to screen a patient cohort of a remote island, at average risk for colorectal cancer.
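The per-patient figures above follow from a standard 2x2 confusion-matrix calculation; the counts below are illustrative only, chosen so the sensitivity and PPV land near the reported 98.11% and 56.52%, and are not the study's actual table:

```python
def screening_metrics(tp, fp, fn, tn):
    """Per-patient sensitivity, specificity, PPV and NPV from 2x2 counts
    (true/false positives and negatives against the reference standard)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts: 52 true positives and 1 false negative give a
# sensitivity of 52/53, i.e. about 98.11%.
m = screening_metrics(tp=52, fp=40, fn=1, tn=417)
```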
International Nuclear Information System (INIS)
Lefere, Philippe; Silva, Celso; Gryspeerdt, Stefaan; Rodrigues, António; Vasconcelos, Rita; Teixeira, Ricardo; Gouveia, Francisco Henriques de
2013-01-01
Purpose: To prospectively assess the performance of teleradiology-based CT colonography to screen a population group of an island, at average risk for colorectal cancer. Materials and methods: A cohort of 514 patients living in Madeira, Portugal, was enrolled in the study. Institutional review board approval was obtained and all patients signed an informed consent. All patients underwent both CT colonography and optical colonoscopy. CT colonography was interpreted by an experienced radiologist at a remote centre using tele-radiology. Per-patient sensitivity, specificity, positive (PPV) and negative (NPV) predictive values with 95% confidence intervals (95%CI) were calculated for colorectal adenomas and advanced neoplasia ≥6 mm. Results: 510 patients were included in the study. CT colonography obtained a per-patient sensitivity, specificity, PPV and, NPV for adenomas ≥6 mm of 98.11% (88.6–99.9% 95% CI), 90.97% (87.8–93.4% 95% CI), 56.52% (45.8–66.7% 95% CI), 99.75% (98.4–99.9% 95% CI). For advanced neoplasia ≥6 mm per-patient sensitivity, specificity, PPV and, NPV were 100% (86.7–100% 95% CI), 87.07% (83.6–89.9% 95% CI), 34.78% (25.3–45.5% 95% CI) and 100% (98.8–100% 95% CI), respectively. Conclusion: In this prospective trial, teleradiology-based CT colonography was accurate to screen a patient cohort of a remote island, at average risk for colorectal cancer
Pangaribuan, Tagor; Manik, Sondang
2018-01-01
This research was held at SMA HKBP 1 Tarutung, North Sumatra, on the test results of class XI² and XI² students, after they got treatment in teaching writing in recount text by using the buzz group and clustering techniques. The average score (X̄) was 67.7, and for the buzz group the average score (X̄) was 77.2, and in…
Directory of Open Access Journals (Sweden)
I.R. MOLDOVAN
2009-10-01
The research aimed to highlight the differences in weight and average daily growth of four groups of crossbred young bulls raised in the same environmental conditions and given the same feeding diet. The farm in which the research was done is TCE 3 abis SRL Piatra Neamt, located in Zanesti village, 14 km from the city of Piatra Neamt. The farm occupies the site of the old IAS Zanesti and has eight shelters, of which two are still functional. The shelters are divided into collective pens, in which an optimal number of calves are housed depending on their age; the number varied from 25 calves at 0-3 months up to 6 head during the growing and finishing period, when they reach weights of 600-700 kg. The farm is populated with calves from reformed cows of the milk farm belonging to the same company. The forage base is provided by the company's vegetable farm, which works about 14,000 ha of arable land in Neamt County. Feeding (in three phases) is done with a technological trailer once daily in the morning, and water is available ad libitum.
Comparative regulatory approaches for groups of new plant breeding techniques.
Lusser, Maria; Davies, Howard V
2013-06-25
This manuscript provides insights into ongoing debates on the regulatory issues surrounding groups of biotechnology-driven 'New Plant Breeding Techniques' (NPBTs). It presents the outcomes of preliminary discussions and in some cases the initial decisions taken by regulators in the following countries: Argentina, Australia, Canada, EU, Japan, South Africa and USA. In the light of these discussions we suggest in this manuscript a structured approach to make the evaluation more consistent and efficient. The issue appears to be complex as these groups of new technologies vary widely in both the technologies deployed and their impact on heritable changes in the plant genome. An added complication is that the legislation, definitions and regulatory approaches for biotechnology-derived crops differ significantly between these countries. There are therefore concerns that this situation will lead to non-harmonised regulatory approaches and asynchronous development and marketing of such crops resulting in trade disruptions. Copyright © 2013 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Chen, S.; Liu, H.-L.; Yang Yihong; Hsu, Y.-Y.; Chuang, K.-S.
2006-01-01
Quantification of cerebral blood flow (CBF) with dynamic susceptibility contrast (DSC) magnetic resonance imaging (MRI) requires the determination of the arterial input function (AIF). The segmentation of surrounding tissue by manual selection is error-prone due to partial volume artifacts. Independent component analysis (ICA) has the advantage of automatically decomposing the signals into interpretable components. Recently, the group ICA technique has been applied to fMRI studies and has shown reduced variance caused by motion artifacts and noise. In this work, we investigated the feasibility and efficacy of the group ICA technique for extracting the AIF. Both simulated and in vivo data were analyzed in this study. The simulation data of eight phantoms were generated using randomized lesion locations and time activity curves. The clinical data were obtained from spin-echo EPI MR scans performed on seven normal subjects. The group ICA technique was applied to analyze the data through concatenation across the seven subjects. The AIFs were calculated from the weighted average of the signals in the region selected by ICA. Preliminary results of this study showed that the group ICA technique could not extract accurate AIF information from regions around the vessel. The mismatched location of vessels within the group reduced the benefits of the group study.
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
International Nuclear Information System (INIS)
Vasil'ev, Yu.A.; Barashkov, Yu.A.; Golovanov, O.A.; Sidorov, L.V.
1977-01-01
A method is described for determining the average number of secondary neutrons ν̄ produced in nuclear fission induced by neutrons of the ²⁵²Cf fission spectrum, by means of a 4π time-of-flight spectrometer. Layers of ²⁵²Cf and of the isotope studied are placed close to each other; if the isotope layer density is 1 mg/cm², the probability of its fission is about 10⁻⁵ per spontaneous fission of californium. Fission fragments of ²⁵²Cf and of the isotope investigated have been detected by two surface-barrier counters with an efficiency close to 100%. The layers and the counters are situated in a measuring chamber placed in the center of the 4π time-of-flight spectrometer, which is utilized as a neutron counter because of its fast response. The method has been verified by carrying out measurements for ²³⁵U and ²³⁹Pu. A comparison of the experimental and calculated results shows that the suggested method can be applied to determine the number of secondary neutrons in the fission of isotopes that have not yet been investigated.
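Once fragment and neutron counts are in hand, the ν̄ estimate reduces to a simple efficiency-corrected ratio; the counts and efficiency below are hypothetical illustrations, not the experiment's data:

```python
def nu_bar(detected_neutrons, fission_fragments, neutron_efficiency):
    """Average number of secondary neutrons per fission, correcting the
    detected-neutron count for the neutron-counter efficiency (fragment
    detection is taken as ~100%, as in the described setup)."""
    if fission_fragments == 0:
        raise ValueError("no fission events recorded")
    return detected_neutrons / (fission_fragments * neutron_efficiency)

# Hypothetical run: 1000 neutrons seen over 500 fissions at 80% efficiency.
estimate = nu_bar(1000, 500, 0.8)
```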
Edwards, Jack R.; Mcrae, D. S.
1993-01-01
An efficient implicit method for the computation of steady, three-dimensional, compressible Navier-Stokes flowfields is presented. A nonlinear iteration strategy based on planar Gauss-Seidel sweeps is used to drive the solution toward a steady state, with approximate factorization errors within a crossflow plane reduced by the application of a quasi-Newton technique. A hybrid discretization approach is employed, with flux-vector splitting utilized in the streamwise direction and central differences with artificial dissipation used for the transverse fluxes. Convergence histories and comparisons with experimental data are presented for several 3-D shock-boundary layer interactions. Both laminar and turbulent cases are considered, with turbulent closure provided by a modification of the Baldwin-Barth one-equation model. For the problems considered (175,000-325,000 mesh points), the algorithm provides steady-state convergence in 900-2000 CPU seconds on a single processor of a Cray Y-MP.
Sabourin, Jeremy; Nobel, Andrew B.; Valdar, William
2014-01-01
Genomewide association studies sometimes identify loci at which both the number and identities of the underlying causal variants are ambiguous. In such cases, statistical methods that model effects of multiple SNPs simultaneously can help disentangle the observed patterns of association and provide information about how those SNPs could be prioritized for follow-up studies. Current multi-SNP methods, however, tend to assume that SNP effects are well captured by additive genetics; yet when genetic dominance is present, this assumption translates to reduced power and faulty prioritizations. We describe a statistical procedure for prioritizing SNPs at GWAS loci that efficiently models both additive and dominance effects. Our method, LLARRMA-dawg, combines a group LASSO procedure for sparse modeling of multiple SNP effects with a resampling procedure based on fractional observation weights; it estimates for each SNP the robustness of association with the phenotype both to sampling variation and to competing explanations from other SNPs. In producing a SNP prioritization that best identifies underlying true signals, we show that: our method easily outperforms a single marker analysis; when additive-only signals are present, our joint model for additive and dominance is equivalent to or only slightly less powerful than modeling additive-only effects; and, when dominance signals are present, even in combination with substantial additive effects, our joint model is unequivocally more powerful than a model assuming additivity. We also describe how performance can be improved through calibrated randomized penalization, and discuss how dominance in ungenotyped SNPs can be incorporated through either heterozygote dosage or multiple imputation. PMID:25417853
Abdul Jameel, Abdul Gani
2016-04-22
Heavy fuel oil (HFO) is primarily used as fuel in marine engines and in boilers to generate electricity. Nuclear Magnetic Resonance (NMR) is a powerful analytical tool for structure elucidation and in this study, ¹H NMR and ¹³C NMR spectroscopy were used for the structural characterization of 2 HFO samples. The NMR data was combined with elemental analysis and average molecular weight to quantify average molecular parameters (AMPs), such as the number of paraffinic carbons, naphthenic carbons, aromatic hydrogens, olefinic hydrogens, etc. in the HFO samples. Recent formulae published in the literature were used for calculating various derived AMPs like aromaticity factor (f_a), C/H ratio, average paraffinic chain length (n̄), naphthenic ring number (R_N), aromatic ring number (R_A), total ring number (R_T), aromatic condensation index (φ) and aromatic condensation degree (Ω). These derived AMPs help in understanding the overall structure of the fuel. A total of 19 functional groups were defined to represent the HFO samples, and their respective concentrations were calculated by formulating balance equations that equate the concentration of the functional groups with the concentration of the AMPs. Heteroatoms like sulfur, nitrogen, and oxygen were also included in the functional groups. Surrogate molecules were finally constructed to represent the average structure of the molecules present in the HFO samples. This surrogate molecule can be used for property estimation of the HFO samples and also serve as a surrogate to represent the molecular structure for use in kinetic studies.
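Two of the simplest derived AMPs can be sketched as follows; the elemental-analysis numbers are invented for illustration and are not taken from the paper's HFO samples:

```python
def aromaticity_factor(aromatic_carbons, total_carbons):
    """f_a: fraction of carbon atoms that are aromatic."""
    return aromatic_carbons / total_carbons

def atomic_ch_ratio(wt_pct_c, wt_pct_h):
    """Atomic C/H ratio from elemental weight percentages, using atomic
    masses of 12.011 g/mol for carbon and 1.008 g/mol for hydrogen."""
    return (wt_pct_c / 12.011) / (wt_pct_h / 1.008)

# Hypothetical HFO-like elemental analysis: 85 wt% C, 11 wt% H.
ratio = atomic_ch_ratio(85.0, 11.0)
```

Heavier, more aromatic fuels push both f_a and the C/H ratio upward, which is why these two numbers summarize so much of the fuel's structure.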
Abdul Jameel, Abdul Gani; Elbaz, Ayman M.; Emwas, Abdul-Hamid M.; Roberts, William L.; Sarathy, Mani
2016-01-01
Heavy fuel oil (HFO) is primarily used as fuel in marine engines and in boilers to generate electricity. Nuclear Magnetic Resonance (NMR) is a powerful analytical tool for structure elucidation and in this study, ¹H NMR and ¹³C NMR spectroscopy were used for the structural characterization of 2 HFO samples. The NMR data was combined with elemental analysis and average molecular weight to quantify average molecular parameters (AMPs), such as the number of paraffinic carbons, naphthenic carbons, aromatic hydrogens, olefinic hydrogens, etc. in the HFO samples. Recent formulae published in the literature were used for calculating various derived AMPs like aromaticity factor (f_a), C/H ratio, average paraffinic chain length (n̄), naphthenic ring number (R_N), aromatic ring number (R_A), total ring number (R_T), aromatic condensation index (φ) and aromatic condensation degree (Ω). These derived AMPs help in understanding the overall structure of the fuel. A total of 19 functional groups were defined to represent the HFO samples, and their respective concentrations were calculated by formulating balance equations that equate the concentration of the functional groups with the concentration of the AMPs. Heteroatoms like sulfur, nitrogen, and oxygen were also included in the functional groups. Surrogate molecules were finally constructed to represent the average structure of the molecules present in the HFO samples. This surrogate molecule can be used for property estimation of the HFO samples and also serve as a surrogate to represent the molecular structure for use in kinetic studies.
Intersubassembly incoherencies and grouping techniques in LMFBR hypothetical overpower accident
International Nuclear Information System (INIS)
Wilburn, N.P.
1977-10-01
A detailed analysis was made of the FTR core using the 100-channel MELT-IIIA code. Results were studied for the transient overpower accident (with 0.5 $/sec and 1 $/sec ramps), in which the Damage Parameter and the Failure Potential criteria were used. Using the information obtained from this series of runs, a new method of grouping the subassemblies into channels has been developed. It was also demonstrated that a 7-channel representation of the FTR core using this method does an adequate job of representing the behavior during a hypothetical disruptive transient overpower core accident. This new 7-channel grouping method has been shown to do a better job than an earlier 20-channel grouping. It has also been demonstrated that the incoherency effects between subassemblies shown by the 76-channel representation of the reactor can be adequately modeled by 7 channels, provided the 7 channels are selected according to the criteria stated in the report. The overall results of power and net reactivity were shown to be only slightly different between the 7-channel and the 76-channel runs. Therefore, it can be concluded that any intersubassembly incoherencies can be modeled adequately by a small number of channels, provided the subassemblies making up these channels are selected according to the stated criteria.
Liu, Yan; Deng, Honggui; Ren, Shuang; Tang, Chengying; Qian, Xuewen
2018-01-01
We propose an efficient partial transmit sequence technique based on a genetic algorithm and a peak-value optimization algorithm (GAPOA) to reduce the high peak-to-average power ratio (PAPR) in visible light communication systems based on orthogonal frequency division multiplexing (VLC-OFDM). By analysis of the hill-climbing algorithm's pros and cons, we propose the POA, with excellent local search ability, to further process the signals whose PAPR is still over the threshold after being processed by the genetic algorithm (GA). To verify the effectiveness of the proposed technique and algorithm, we evaluate the PAPR performance and the bit error rate (BER) performance and compare them with the partial transmit sequence (PTS) technique based on GA (GA-PTS), the PTS technique based on genetic and hill-climbing algorithms (GH-PTS), and PTS based on the shuffled frog leaping algorithm and hill-climbing algorithm (SFLAHC-PTS). The results show that our technique and algorithm have not only better PAPR performance but also lower computational complexity and BER than the GA-PTS, GH-PTS, and SFLAHC-PTS techniques.
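The PAPR that PTS-style techniques minimize is easy to compute directly; this sketch is the generic definition applied to one symbol, not the GAPOA algorithm itself:

```python
import math

def papr_db(symbol):
    """Peak-to-average power ratio of one (complex) OFDM symbol, in dB.
    PTS-style techniques lower this by re-phasing sub-blocks before
    transmission and picking the combination with the smallest peak."""
    powers = [abs(s) ** 2 for s in symbol]
    peak = max(powers)
    average = sum(powers) / len(powers)
    return 10.0 * math.log10(peak / average)

# A constant-envelope symbol has 0 dB PAPR; a spiky one is much worse.
flat = papr_db([1 + 0j, -1 + 0j, 1j, -1j])
spiky = papr_db([4 + 0j, 1 + 0j, 1 + 0j, 1 + 0j])
```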
Three decision-making aids: brainstorming, nominal group, and Delphi technique.
McMurray, A R
1994-01-01
The methods of brainstorming, Nominal Group Technique, and the Delphi technique can be important resources for nursing staff development educators who wish to expand their decision-making skills. Staff development educators may find opportunities to use these methods for such tasks as developing courses, setting departmental goals, and forecasting trends for planning purposes. Brainstorming, Nominal Group Technique, and the Delphi technique provide a structured format that helps increase the quantity and quality of participant responses.
Power plant siting; an application of the nominal group process technique
International Nuclear Information System (INIS)
Voelker, A.H.
1976-01-01
The application of interactive group processes to the problem of facility siting is examined by this report. Much of the discussion is abstracted from experience gained in applying the Nominal Group Process Technique, an interactive group technique, to the identification and rating of factors important in siting nuclear power plants. Through this experience, interactive group process techniques are shown to facilitate the incorporation of the many diverse factors which play a role in siting. In direct contrast to mathematical optimization, commonly represented as the ultimate siting technique, the Nominal Group Process Technique described allows the incorporation of social, economic, and environmental factors and the quantification of the relative importance of these factors. The report concludes that the application of interactive group process techniques to planning and resource management will affect the consideration of social, economic, and environmental concerns and ultimately lead to more rational and credible siting decisions
Directory of Open Access Journals (Sweden)
S. P. Arunachalam
2018-01-01
Analysis of biomedical signals can yield invaluable information for prognosis, diagnosis, therapy evaluation, risk assessment, and disease prevention, but such signals are often recorded as short time series that challenge existing complexity classification algorithms such as Shannon entropy (SE) and other techniques. The purpose of this study was to improve the previously developed multiscale entropy (MSE) technique by incorporating a nearest-neighbor moving-average kernel, which can be used for the analysis of nonlinear and non-stationary short time series of physiological data. The approach was tested for robustness with respect to noise using simulated sinusoidal and ECG waveforms. The feasibility of MSE to discriminate between normal sinus rhythm (NSR) and atrial fibrillation (AF) was tested on a single-lead ECG. In addition, the MSE algorithm was applied to identify the pivot points of rotors that were induced in ex vivo isolated rabbit hearts. The improved MSE technique robustly estimated the complexity of the signal compared to SE under various noises, discriminated NSR and AF on a single-lead ECG, and precisely identified the pivot points of ex vivo rotors by providing better contrast between the rotor core and the peripheral region. The improved MSE technique can provide efficient complexity analysis of a variety of nonlinear and nonstationary short-time biomedical signals.
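A minimal sketch of the two ingredients, moving-average coarse-graining and sample entropy, assuming default m = 2 and a fixed tolerance r = 0.2 (a full MSE implementation would also scale r by the signal's standard deviation and sweep the scale factor):

```python
import math

def moving_average_coarse_grain(x, scale):
    """Nearest-neighbor moving-average coarse-graining: unlike classic
    non-overlapping averaging, it keeps almost the full series length,
    which is what helps MSE on short recordings."""
    return [sum(x[i:i + scale]) / scale for i in range(len(x) - scale + 1)]

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -log of the probability that templates matching
    for m points (within tolerance r) still match at m + 1 points."""
    def matches(length):
        t = [x[i:i + length] for i in range(len(x) - length + 1)]
        return sum(1 for i in range(len(t)) for j in range(i + 1, len(t))
                   if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r)
    b, a = matches(m), matches(m + 1)
    return math.inf if a == 0 or b == 0 else -math.log(a / b)
```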
Wonjung Noh, PhD, RN; Ji Young Lim, PhD, RN, MBA
2015-01-01
Purpose: The purpose of this study was to identify the financial management educational needs of nurses in order to develop an educational program to strengthen their financial management competencies. Methods: Data were collected from two focus groups using the nominal group technique. The study consisted of three steps: a literature review, focus group discussion using the nominal group technique, and data synthesis. Results: After analyzing the results, nine key components were s...
Bhattacharya, Anindya; De, Rajat K
2010-08-01
Distance-based clustering algorithms can group genes that show similar expression values under multiple experimental conditions, but they are unable to identify a group of genes that have a similar pattern of variation in their expression values. Previously we developed an algorithm called the divisive correlation clustering algorithm (DCCA) to tackle this situation, which is based on the concept of correlation clustering. But this algorithm may also fail in certain cases. In order to overcome these situations, we propose a new clustering algorithm, called the average correlation clustering algorithm (ACCA), which is able to produce a better clustering solution than those produced by some other algorithms. ACCA is able to find groups of genes having more common transcription factors and similar patterns of variation in their expression values. Moreover, ACCA is more efficient than DCCA with respect to execution time. Like DCCA, we use the correlation clustering concept introduced by Bansal et al. ACCA uses the correlation matrix in such a way that all genes in a cluster have the highest average correlation values with the genes in that cluster. We have applied ACCA and some well-known conventional methods, including DCCA, to two artificial and nine gene expression datasets, and compared the performance of the algorithms. The clustering results of ACCA are found to be more significantly relevant to the biological annotations than those of the other methods. Analysis of the results shows the superiority of ACCA over some others in determining a group of genes having more common transcription factors and similar patterns of variation in their expression profiles. Availability of the software: The software has been developed using C and Visual Basic languages, and can be executed on Microsoft Windows platforms. The software may be downloaded as a zip file from http://www.isical.ac.in/~rajat. Then it needs to be installed. Two word files (included in the zip file) need to
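The cluster-assignment criterion that ACCA drives toward can be sketched as follows; this is a simplified single-gene step under assumed toy profiles, not the full iterative algorithm:

```python
def pearson(u, v):
    """Pearson correlation of two equal-length expression profiles."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

def assign_by_average_correlation(profile, clusters):
    """Place a gene in the cluster with which its average correlation is
    highest; correlation, not distance, lets genes with similar *patterns*
    of variation group together even at different expression levels."""
    best_name, best_score = None, -2.0
    for name, members in clusters.items():
        score = sum(pearson(profile, m) for m in members) / len(members)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy clusters: rising profiles vs a falling profile.
clusters = {"up": [[0.0, 1.0, 2.0, 3.0], [2.0, 3.0, 4.0, 5.0]],
            "down": [[4.0, 3.0, 2.0, 1.0]]}
```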
Evaluation of a Small-Group Technique as a Teacher Training Instrument. Final Report.
Whipple, Babette S.
An exploratory study was designed to determine whether the use of a new, small group technique adds significantly to the level of training in early childhood education. Two groups of five student teachers learned the technique and were then evaluated. The evaluation procedure was designed to measure changes in their educational objectives, their…
Energy Technology Data Exchange (ETDEWEB)
Majander, E.O.J.; Manninen, M.T. [VTT Energy, Espoo (Finland)
1996-12-31
The flow induced by a pitched blade turbine was simulated using the sliding mesh technique. The detailed geometry of the turbine was modelled in a computational mesh rotating with the turbine, and the geometry of the reactor, including baffles, was modelled in a stationary co-ordinate system. Effects of grid density were investigated. Turbulence was modelled using the standard k-ε model. Results were compared to experimental observations. Velocity components were found to be in good agreement with the measured values throughout the tank. Averaged source terms were calculated from the sliding mesh simulations in order to investigate the reliability of the source term approach. The flow field in the tank was then simulated in a simple grid using these source terms. Agreement with the results of the sliding mesh simulations was good. The commercial CFD code FLUENT was used in all simulations. (author)
Energy Technology Data Exchange (ETDEWEB)
Tatsugami, Fuminari; Higaki, Toru; Nakamura, Yuko; Yamagami, Takuji; Date, Shuji; Awai, Kazuo [Hiroshima University, Department of Diagnostic Radiology, Minami-ku, Hiroshima (Japan); Fujioka, Chikako; Kiguchi, Masao [Hiroshima University, Department of Radiology, Minami-ku, Hiroshima (Japan); Kihara, Yasuki [Hiroshima University, Department of Cardiovascular Medicine, Minami-ku, Hiroshima (Japan)
2015-01-15
To investigate the feasibility of a newly developed noise reduction technique at coronary CT angiography (CTA) that uses multi-phase data-averaging and non-rigid image registration. Sixty-five patients underwent coronary CTA with prospective ECG-triggering. The range of the phase window was set at 70-80 % of the R-R interval. First, three sets of consecutive volume data at 70 %, 75 % and 80 % of the R-R interval were prepared. Second, we applied non-rigid registration to align the 70 % and 80 % images to the 75 % image. Finally, we performed weighted averaging of the three images and generated a de-noised image. The image noise and contrast-to-noise ratio (CNR) in the proximal coronary arteries were compared between the conventional 75 % and the de-noised images. Two radiologists evaluated the image quality using a 5-point scale (1, poor; 5, excellent). On de-noised images, mean image noise was significantly lower than on conventional 75 % images (18.3 ± 2.6 HU vs. 23.0 ± 3.3 HU, P < 0.01) and the CNR was significantly higher (P < 0.01). The mean image quality score for conventional 75 % and de-noised images was 3.9 and 4.4, respectively (P < 0.01). Our method reduces image noise and improves image quality at coronary CTA. (orig.)
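The final averaging step of the method above can be sketched in a few lines. The sketch assumes the 70 % and 80 % volumes have already been non-rigidly registered to the 75 % volume, and the weights are illustrative placeholders, not the values used in the study:

```python
import numpy as np

def denoise_by_phase_averaging(vol70, vol75, vol80, weights=(0.25, 0.5, 0.25)):
    """Weighted average of three registered cardiac-phase volumes (HU).
    The weights here are an assumption for the sketch, not the paper's."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise so mean HU is preserved
    return w[0] * vol70 + w[1] * vol75 + w[2] * vol80
```

For fully independent noise, these weights would cut the noise standard deviation by a factor of sqrt(0.25² + 0.5² + 0.25²) ≈ 0.61; the more modest reduction reported above (23.0 HU to 18.3 HU) is consistent with partially correlated noise between phases.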
Mossetti, Stefano; de Bartolo, Daniela; Veronese, Ivan; Cantone, Marie Claire; Cosenza, Cristina; Nava, Elisa
2017-04-01
International and national organizations have formulated guidelines establishing limits for occupational and residential electromagnetic field (EMF) exposure at high-frequency fields. Italian legislation fixed 20 V/m as a limit for public protection from exposure to EMFs in the frequency range 0.1 MHz-3 GHz and 6 V/m as a reference level. Recently, the law was changed and the reference level must now be evaluated as the 24-hour average value, instead of the previous highest 6 minutes in a day. The law refers to a technical guide (CEI 211-7/E, published in 2013) for the extrapolation techniques that public authorities have to use when assessing exposure for compliance with limits. In this work, we present measurements carried out with a vectorial spectrum analyzer to identify technically critical aspects of these extrapolation techniques when applied to UMTS and LTE signals. We also focused on finding a good balance between statistically significant values and the logistics of control activities, as the signal trend in situ is not known. Measurements were repeated several times over several months and for different mobile companies. The outcome presented in this article allowed us to evaluate the reliability of the extrapolation results obtained and provides a starting point for defining operating procedures.
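The regulatory change the abstract describes, from the highest 6-minute average to a 24-hour average, can be illustrated on a synthetic exposure record. The averaging logic below is a hedged sketch; the exact averaging and extrapolation rules are defined in CEI 211-7/E, not here:

```python
import numpy as np

def six_min_max(E, samples_per_min=1):
    """Previous criterion: highest 6-minute running mean of field strength."""
    w = 6 * samples_per_min
    return np.convolve(E, np.ones(w) / w, mode="valid").max()

def day_average(E):
    """New criterion: 24-hour average of the sampled field (V/m). Whether E
    or E**2 is averaged is fixed by the technical guide; a plain mean of E
    is used here for illustration."""
    return E.mean()

# one sample per minute over 24 h: 2 V/m background with a busy hour at 5 V/m
E = np.full(1440, 2.0)
E[600:660] = 5.0
```

For this synthetic day the 6-minute maximum is 5 V/m while the 24-hour average is about 2.1 V/m, which is why the choice of criterion matters for compliance with the 6 V/m reference level.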
Energy Technology Data Exchange (ETDEWEB)
Teo, Troy; Alayoubi, Nadia; Bruce, Neil; Pistorius, Stephen [University of Manitoba/ CancerCare Manitoba, University of Manitoba, University of Manitoba, University of Manitoba / CancerCare Manitoba (Canada)
2016-08-15
Purpose: In image-guided adaptive radiotherapy systems, prediction of tumor motion is required to compensate for system latencies. However, due to the non-stationary nature of respiration, it is a challenge to predict the associated tumor motions. In this work, a systematic design of a neural network (NN) is presented that uses a mixture of online data acquired during the initial period of the tumor trajectory, coupled with a generalized model optimized using a group of patient data obtained offline. Methods: The average error surface obtained from seven patients was used to determine the input data size and the number of hidden neurons for the generalized NN. To reduce training time, instead of using random weights to initialize learning (method 1), weights inherited from previous training batches (method 2) were used to predict tumor position for each sliding window. Results: The generalized network was established with 35 input data points (∼4.66 s) and 20 hidden nodes. For a prediction horizon of 650 ms, mean absolute errors of 0.73 mm and 0.59 mm were obtained for methods 1 and 2, respectively. An average initial learning period of 8.82 s was obtained. Conclusions: A network with a relatively short initial learning time was achieved. Its accuracy is comparable to previous studies. This network could be used as a plug-and-play predictor in which (a) tumor positions can be predicted as soon as treatment begins and (b) the need for pretreatment data and optimization for individual patients can be avoided.
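The warm-start idea (method 2) can be sketched with a gradient-descent predictor over sliding windows. A linear model stands in here for the paper's NN, and the synthetic signal, learning rate and step counts are our illustrative assumptions:

```python
import numpy as np

def windows(sig, n_in, horizon):
    """Sliding windows: n_in past samples -> position `horizon` steps ahead."""
    X = np.array([sig[i:i + n_in] for i in range(len(sig) - n_in - horizon + 1)])
    return X, sig[n_in + horizon - 1:]

def train(X, y, w=None, lr=0.01, steps=50):
    """Gradient-descent linear predictor standing in for the paper's NN.
    Passing previous weights `w` mimics method 2 (weight inheritance)."""
    if w is None:
        w = np.zeros(X.shape[1])                 # method 1: cold start
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)     # MSE gradient step
    return w
```

On a synthetic 4-s breathing sinusoid sampled at 7.5 Hz (so 35 inputs span roughly 4.66 s and 5 steps roughly 0.65 s, matching the abstract's numbers), a model inheriting weights from the previous batch reaches a lower error than one retrained from scratch for the same number of steps.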
Directory of Open Access Journals (Sweden)
Moinuddin Ghauri
2017-06-01
The Temperature Programmed Reduction (TPR) technique is employed for the characterisation of various organic sulphur functional groups in coal. The TPR technique is modified into the Temperature Programmed Identification (TPI) technique to investigate whether this method can detect various functional groups corresponding to their reduction temperatures. Ollerton, Harworth, Silverdale and Prince of Wales coals and Mequinenza lignite were chosen for this study. High-pressure oxydesulphurisation of the coal samples was also carried out. The characterisation of the organic sulphur functional groups present in untreated and treated coal, first by the TPR method and later by the TPI method, confirmed that these methods can identify the organic sulphur groups in coal and that the results based on total sulphur are comparable with those provided by standard analytical techniques. The analysis of the untreated and treated coal samples showed that the structural changes in the organic sulphur matrix due to a reaction can be determined.
Multiple group radiator and hybrid test heads, possibilities of combining the array technique
International Nuclear Information System (INIS)
Wuestenberg, H.
1993-01-01
This article is intended to show the important considerations, which led to the development of the multichannel group radiator technique. Trends in development and the advantages and disadvantages of the different possibilities are introduced, against the background of experience now available for these configurative variants of ultrasonic test heads. For this reason, a series of experiences and arguments is reported, from the point of view of the developer of the multi-channel group radiator technique. (orig./HP) [de
Directory of Open Access Journals (Sweden)
Yu-Chia Chang
2008-01-01
Three cruises with a shipboard Acoustic Doppler Current Profiler (ADCP) were performed along a transect across the Peng-hu Channel (PHC) in the Taiwan Strait during 2003 - 2004 in order to investigate the feasibility and accuracy of the phase-averaging method for eliminating tidal components from shipboard ADCP current measurements. In each cruise, measurement was repeated a number of times along the transect with a specified time lag of either 5, 6.21, or 8 hr, and the repeated data at the same location were averaged to eliminate the tidal currents; this is the so-called "phase-averaging method". We employed 5-phase-averaging, 4-phase-averaging, 3-phase-averaging, and 2-phase-averaging methods in this study. The residual currents and volume transport of the PHC derived from the various phase-averaging methods were intercompared and were also compared with results of the least-squares harmonic reduction method proposed by Simpson et al. (1990) and the least-squares interpolation method using a Gaussian function (Wang et al. 2004). The estimated uncertainty of the residual flow through the PHC derived from the 5-phase-averaging, 4-phase-averaging, 3-phase-averaging, and 2-phase-averaging methods is 0.3, 0.3, 1.3, and 4.6 cm s⁻¹, respectively. Procedures for choosing the best phase-averaging method to remove tidal currents in any particular region are also suggested.
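The cancellation behind the 6.21-hr lag above can be shown in a few lines: 6.21 hr is half the 12.42-hr M2 tidal period, so two repeated measurements sample the dominant tide in opposite phase and their mean leaves the residual current. The numbers are ours, not the cruise data:

```python
import numpy as np

M2_PERIOD = 12.42                 # hours, principal lunar semidiurnal tide

def phase_average(observations):
    """Mean of repeated measurements at one location (2-phase case here)."""
    return float(np.mean(observations))

residual = 0.25                                          # steady current, m/s
tide = lambda t: 0.80 * np.sin(2 * np.pi * t / M2_PERIOD + 1.0)
# two transect passes separated by 6.21 h = M2_PERIOD / 2
passes = [residual + tide(3.0 + k * 6.21) for k in (0, 1)]
```

`phase_average(passes)` recovers the 0.25 m/s residual to machine precision because the M2 contributions cancel; the 4- and 5-phase schemes above suppress additional constituents in the same way.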
Group Counseling: Techniques for Teaching Social Skills to Students with Special Needs
Stephens, Derk; Jain, Sachin; Kim, Kioh
2010-01-01
This paper examines literature that supports the use of group counseling techniques in the school setting to teach social skills to children and adolescents with special needs. From the review of this literature it was found that group counseling is a very effective way of addressing a variety of social skills problems that can be displayed by…
P-R-R Study Technique, Group Counselling And Gender Influence ...
African Journals Online (AJOL)
Read-Recall (P-R-R) study technique and group counselling on the academic performance of senior secondary school students. The objectives of this study were to determine the effect of Group Counselling combined with P-R-R study ...
International Nuclear Information System (INIS)
Berns, Eric A.; Hendrick, R. Edward; Cutter, Gary R.
2003-01-01
Contrast-detail experiments were performed to optimize technique factors for the detection of low-contrast lesions using a silicon diode array full-field digital mammography (FFDM) system under the conditions of a matched average glandular dose (AGD) for different techniques. Optimization was performed for compressed breast thickness from 2 to 8 cm. FFDM results were compared to screen-film mammography (SFM) at each breast thickness. Four contrast-detail (CD) images were acquired on a SFM unit with optimal techniques at 2, 4, 6, and 8 cm breast thicknesses. The AGD for each breast thickness was calculated based on half-value layer (HVL) and entrance exposure measurements on the SFM unit. A computer algorithm was developed and used to determine FFDM beam current (mAs) that matched AGD between FFDM and SFM at each thickness, while varying target, filter, and peak kilovoltage (kVp) across the full range available for the FFDM unit. CD images were then acquired on FFDM for kVp values from 23-35 for a molybdenum-molybdenum (Mo-Mo), 23-40 for a molybdenum-rhodium (Mo-Rh), and 25-49 for a rhodium-rhodium (Rh-Rh) target-filter under the constraint of matching the AGD from screen-film for each breast thickness (2, 4, 6, and 8 cm). CD images were scored independently for SFM and each FFDM technique by six readers. CD scores were analyzed to assess trends as a function of target-filter and kVp and were compared to SFM at each breast thickness. For 2 cm thick breasts, optimal FFDM CD scores occurred at the lowest possible kVp setting for each target-filter, with significant decreases in FFDM CD scores as kVp was increased under the constraint of matched AGD. For 2 cm breasts, optimal FFDM CD scores were not significantly different from SFM CD scores. For 4-8 cm breasts, optimum FFDM CD scores were superior to SFM CD scores. For 4 cm breasts, FFDM CD scores decreased as kVp increased for each target-filter combination. For 6 cm breasts, CD scores decreased slightly as k
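The dose-matching step of the algorithm above rests on the AGD scaling linearly with mAs at fixed target-filter, kVp and breast thickness. A hedged sketch of that matching; the coefficients are invented placeholders, not measured dose data from the study:

```python
# Illustrative per-mAs AGD coefficients (mGy/mAs) for one breast thickness;
# these numbers are made up for the sketch, not measurements from the study.
DOSE_COEFF = {("Mo-Mo", 26): 0.012, ("Mo-Rh", 28): 0.010, ("Rh-Rh", 30): 0.008}

def matched_mas(agd_target_mGy):
    """mAs per candidate target-filter/kVp that reproduces the screen-film
    AGD, assuming AGD is proportional to mAs for each technique."""
    return {tech: agd_target_mGy / c for tech, c in DOSE_COEFF.items()}
```

For a screen-film AGD of 1.2 mGy, the Mo-Mo technique in this table would need 100 mAs and the Rh-Rh technique 150 mAs, so each candidate technique delivers the same dose before the contrast-detail scores are compared.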
Basch, C E
1987-01-01
The purpose of this article is to increase awareness about and stimulate interest in using focus group interviews, a qualitative research technique, to advance the state-of-the-art of education and learning about health. After a brief discussion of small group process in health education, features of focus group interviews are presented, and a theoretical framework for planning a focus group study is summarized. Then, literature describing traditional and health-related applications of focus group interviews is reviewed and a synthesis of methodological limitations and advantages of this technique is presented. Implications are discussed regarding: need for more inductive qualitative research in health education; utility of focus group interviews for research and for formative and summative evaluation of health education programs; applicability of marketing research to understanding and influencing consumer behavior, despite notable distinctions between educational initiatives and marketing; and need for professional preparation faculty to consider increasing emphasis on qualitative research methods.
Directory of Open Access Journals (Sweden)
Muhammad Usman
2016-12-01
Blood grouping is a vital test in pre-transfusion testing. Both tube and gel agglutination assays are used for ABO grouping. The main objective of this study was to compare ABO grouping and D typing by tube and gel agglutination assays in order to assess the efficacy of each technique. A total of 100 healthy blood donors, irrespective of age and sex, were included in this study. Results showed no significant difference between the two techniques. However, in 10 samples the reaction strength in serum ABO grouping by the gel agglutination assay varied by only one grade compared to the tube agglutination assay. Given the numerous positive features of the gel assay, it is beneficial to implement this technique in setups where blood banks bear a heavy routine workload.
Bailey, Anthony
2013-01-01
The nominal group technique (NGT) is a structured process to gather information from a group. The technique was first described in 1975 and has since become a widely-used standard to facilitate working groups. The NGT is effective for generating large numbers of creative new ideas and for group priority setting. This paper describes the process of…
Directory of Open Access Journals (Sweden)
Erica van de Waal
Animal social learning has become a subject of broad interest, but demonstrations of bodily imitation in animals remain rare. Based on Voelkl and Huber's study of imitation by marmosets, we tested four groups of semi-captive vervet monkeys presented with food in modified film canisters ("aethipops"). One individual was trained to take the tops off canisters in each group and demonstrated five openings to them. In three groups these models used their mouth to remove the lid, but in one of the groups the model also spontaneously pulled ropes on a canister to open it. In the last group the model preferred to remove the lid with her hands. Following these spontaneous differentiations of foraging techniques in the models, we observed the techniques used by the other group members to open the canisters. We found that mouth opening was the most common technique overall, but the rope and hands methods were used significantly more in the groups in which they were demonstrated than in groups where they were not. Our results show bodily matching that is conventionally described as imitation. We discuss the relevance of these findings to discoveries about mirror neurons, and the implications of the identity of the model for social transmission.
Noh, Wonjung; Lim, Ji Young
2015-06-01
The purpose of this study was to identify the financial management educational needs of nurses in order to develop an educational program to strengthen their financial management competencies. Data were collected from two focus groups using the nominal group technique. The study consisted of three steps: a literature review, focus group discussion using the nominal group technique, and data synthesis. After analyzing the results, nine key components were selected: corporate management and accounting, introduction to financial management in hospitals, basic structure of accounting, basics of hospital accounting, basics of financial statements, understanding the accounts of financial statements, advanced analysis of financial statements, application of financial management, and capital financing of hospitals. The present findings can be used to develop a financial management education program to strengthen the financial management competencies of nurses.
Dose-reduction techniques for high-dose worker groups in nuclear power plants
International Nuclear Information System (INIS)
Khan, T.A.; Baum, J.W.; Dionne, B.J.
1991-03-01
This report summarizes the main findings of a study of the extent of radiation dose received by special work groups in the nuclear power industry. Work groups that chronically receive large doses were investigated using information provided by the industry. The tasks that give high doses to these work groups were examined, and techniques are described that were found to be particularly successful in reducing dose. Quantitative information on the extent of radiation doses to various work groups shows that significant numbers of workers in several critical groups receive doses greater than 1 and even 2 rem per year, particularly contract personnel and workers at BWR-type plants. The number of radiation workers whose lifetime dose is greater than their age is much smaller. Although the techniques presented would go some way toward reducing dose, it is likely that a sizeable reduction for the high-dose work groups may require the development of new dose-reduction techniques as well as major changes in procedures. 10 refs., 26 tabs
Karatas, Zeynep
2011-01-01
The aim of this study is to examine the effects of group practice performed using psychodrama techniques on adolescents' conflict resolution skills. The subjects for this study were selected from among high school students with high aggression levels and low problem-solving levels attending Haci Zekiye Arslan High School in Nigde.…
Pranoto, Hadi; Atieka, Nurul; Wihardjo, Sihadi Darmo; Wibowo, Agus; Nurlaila, Siti; Sudarmaji
2016-01-01
This study aims at: determining students' motivation before being given group guidance with the self-regulation technique, determining students' motivation after being given group counseling with the self-regulation technique, generating a model of group counseling with the self-regulation technique to improve motivation for learning, determining the…
Group techniques as a methodological strategy in acquiring teamwork abilities by college students
Directory of Open Access Journals (Sweden)
César Torres Martín
2013-02-01
Within the framework of the European Higher Education Area, an adaptation of the teaching-learning process is being promoted through pedagogical renewal, introducing into the classroom a greater number of active or participative methodologies in order to give students greater autonomy in that process. This requires incorporating basic skills into the university curriculum, especially "teamwork". By means of group techniques, students can acquire interpersonal and cognitive skills, as well as abilities that will enable them to face different group situations throughout their academic and professional careers. These techniques are necessary not only as a methodological strategy in the classroom, but also as a reflection instrument for students to assess their behavior in groups, with the aim of modifying conduct strategies so that their relationships with others benefit their learning process. Hence the importance of this ability in sensitizing students positively toward collective work. Using the action-research method in the academic classroom during one semester and making systematic interventions with different group techniques, we present the results obtained by means of an analysis of the qualitative data, where the selected instruments are group discussion and personal reflection.
Comparison of small-group training with self-directed internet-based training in inhaler techniques.
Toumas, Mariam; Basheti, Iman A; Bosnic-Anticevich, Sinthia Z
2009-08-28
To compare the effectiveness of small-group training in correct inhaler technique with self-directed Internet-based training. Pharmacy students were randomly allocated to 1 of 2 groups: small-group training (n = 123) or self-directed Internet-based training (n = 113). Prior to intervention delivery, all participants were given a placebo Turbuhaler and product information leaflet and received inhaler technique training based on their group. Technique was assessed following training and predictors of correct inhaler technique were examined. There was a significant improvement in the number of participants demonstrating correct technique in both groups (small-group training, 12% to 63%; self-directed Internet-based training, 9% to 59%), with no significant difference between the groups in the percent change (n = 234, p > 0.05). Increased student confidence following the intervention was a predictor of correct inhaler technique. Self-directed Internet-based training is as effective as small-group training in improving students' inhaler technique.
Nominal Group Technique and its Applications in Managing Quality in Higher Education
Directory of Open Access Journals (Sweden)
Rafikul Islam
2011-09-01
Quality management is an important aspect in all kinds of businesses – manufacturing or service. Idea generation plays a pivotal role in managing quality in organizations. It is the new and innovative ideas which can help corporations survive in the turbulent business environment. Research in group dynamics has shown that more ideas are generated by individuals working alone but in a group environment than by individuals engaged in a formal group discussion. In the Nominal Group Technique (NGT), individuals work alone but in a group setting. This paper shows how NGT can be applied to generate a large number of ideas to solve quality-related problems, specifically in the Malaysian higher education setting. The paper also discusses the details of the NGT working procedure and explores the areas of its further applications.
Directory of Open Access Journals (Sweden)
Ramin Bairami Habashi
2017-11-01
Lignin is the second most abundant polymer in the world after cellulose. Therefore, characterisation of the structure and functional groups of lignin, in order to assess its potential applications in various technical fields, has become a necessity. One of the major problems related to the characterisation of lignin is the lack of well-defined protocols and standards. In this paper, systematic studies have been carried out to characterise the structure and functional groups of lignin quantitatively using different techniques such as elemental analysis, titration, 1H NMR and FTIR. Lignin was obtained as black liquor from Choka Paper Factory and was purified before any test. The lignin was reacted with α-bromoisobutyryl bromide to determine the number of moles of hydroxyl and methoxyl groups. Using the 1H NMR spectroscopic method on α-bromoisobutyrylated lignin (BiBL) in the presence of a given amount of N,N-dimethylformamide (DMF) as an internal standard, the numbers of moles of hydroxyl and methoxyl groups per gram of lignin were found to be 6.44 mmol/g and 6.64 mmol/g, respectively. Using aqueous titration, the numbers of moles of phenolic hydroxyl groups and carboxyl groups of the lignin were calculated as 3.13 mmol/g and 2.84 mmol/g, respectively. The findings obtained by 1H NMR and elemental analysis indicated a phenylpropane unit of the lignin with the C9 structural formula C9HAl3.84HAr2.19S0.2O0.8(OH)1.38(OCH3)1.42. Due to poor solubility of the lignin in tetrahydrofuran (THF), acetylated lignin was used in the GPC analysis, by which the number-average molecular weight of the lignin was calculated as 992 g/mol.
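The internal-standard quantification used above follows the standard qNMR ratio; a minimal sketch with our own variable names and made-up integrals, not the paper's data:

```python
def mmol_per_g(integral, n_H, integral_std, n_H_std, mmol_std, mass_g):
    """Functional-group content from a 1H NMR spectrum referenced to an
    internal standard such as DMF:
        n_group / n_std = (I_group / H_group) / (I_std / H_std)
    All inputs here are illustrative, not values from the paper."""
    moles_ratio = (integral / n_H) / (integral_std / n_H_std)
    return moles_ratio * mmol_std / mass_g
```

For example, a group integral of 3.0 (one proton per group) against a standard integral of 1.0 (one proton) with 0.2 mmol of standard in a 0.1 g sample gives 6.0 mmol/g.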
Coker, Joshua; Castiglioni, Analia; Kraemer, Ryan R; Massie, F Stanford; Morris, Jason L; Rodriguez, Martin; Russell, Stephen W; Shaneyfelt, Terrance; Willett, Lisa L; Estrada, Carlos A
2014-03-01
Current evaluation tools for medical school courses are limited by the scope of the questions asked and may not fully engage students to think about areas to improve. The authors sought to explore whether a technique used to study consumer preferences would elicit specific and prioritized information for course evaluation from medical students. Using the nominal group technique (4 sessions), 12 senior medical students prioritized and weighted expectations and topics learned in a 100-hour advanced physical diagnosis course (4-week course; February 2012). Students weighted their top 3 responses (top = 3, middle = 2, bottom = 1). Before the course, the 12 students identified 23 topics they expected to learn; the top 3 were review of sensitivity/specificity and high-yield techniques (percentage of total weight, 18.5%), improving diagnosis (13.8%) and reinforcing usual and less well-known techniques (13.8%). After the course, students generated 22 topics learned; the top 3 were practice and reinforcement of advanced maneuvers (25.4%), gaining confidence (22.5%) and learning the evidence (16.9%). The authors observed no differences in the priority of responses before and after the course (P = 0.07). In a physical diagnosis course, medical students elicited specific and prioritized information using the nominal group technique. The course met student expectations regarding education on the evidence-based physical examination, building skills and confidence in the proper techniques and maneuvers, and experiential learning. This novel use for curriculum evaluation may be applied to other courses, especially comprehensive, multicomponent courses.
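The weighting scheme above (top = 3, middle = 2, bottom = 1, with each topic's score reported as a percentage of total weight) is easy to make concrete; the ballots below are invented for illustration:

```python
from collections import Counter

WEIGHTS = (3, 2, 1)          # weight for a ballot's top, middle, bottom pick

def ngt_scores(ballots):
    """ballots: one [top, middle, bottom] list of topic names per student.
    Returns each topic's percentage of the total weight cast."""
    tally = Counter()
    for ballot in ballots:
        for topic, weight in zip(ballot, WEIGHTS):
            tally[topic] += weight
    total = sum(tally.values())
    return {topic: 100.0 * w / total for topic, w in tally.items()}
```

With two ballots, ["A", "B", "C"] and ["A", "C", "B"], topic A collects 6 of the 12 weight points (50%) and B and C collect 3 each (25%), mirroring how the percentages in the study were derived.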
Directory of Open Access Journals (Sweden)
Astiti, Kade Ayu
2018-01-01
This study aims to determine the effect of the group investigation (GI) learning model with the brainstorming technique on students' physics learning outcomes (PLO) compared to the jigsaw learning model with the brainstorming technique. The learning outcomes in this research are in the cognitive domain. The method used is an experiment with a Randomised Posttest-Only Control Group Design. The population is all students of class XI IPA, SMA Negeri 9 Kupang, academic year 2015/2016. The selected sample comprises 40 students of class XI IPA 1 as the experimental class and 38 students of class XI IPA 2 as the control class, chosen using a simple random sampling technique. The instrument used is a 13-item description test. The first hypothesis was tested using a two-tailed t-test; H0 was rejected, which means there is a difference in students' physics learning outcomes. The second hypothesis was tested using a one-tailed t-test; H0 was rejected, which means the students' PLO in the experimental class was higher than in the control class. Based on the results of this study, the researchers recommend the use of the GI learning model with the brainstorming technique to improve PLO, especially in the cognitive domain.
Renormalization-group decimation technique for spectra, wave-functions and density of states
International Nuclear Information System (INIS)
Wiecko, C.; Roman, E.
1983-09-01
The Renormalization Group decimation technique is very useful for problems described by 1-d nearest neighbour tight-binding model with or without translational invariance. We show how spectra, wave-functions and density of states can be calculated with little numerical work from the renormalized coefficients upon iteration. The results of this new procedure are verified using the model of Soukoulis and Economou. (author)
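A minimal sketch of the decimation idea for a uniform 1-d nearest-neighbour chain: eliminating every other site renormalizes the on-site energy and hopping, and iterating yields the bulk Green's function, and hence the density of states, with little numerical work. The variable names and the small imaginary broadening are my own choices, not the authors' notation.

```python
import math

def bulk_dos(E, eps=0.0, t=1.0, eta=1e-6, tol=1e-12, max_iter=1000):
    """Local density of states of an infinite 1-d nearest-neighbour
    tight-binding chain, from the decimation recursion
        eps' = eps + 2 t^2 / (z - eps),   t' = t^2 / (z - eps),
    which eliminates every other site at each step."""
    z = E + 1j * eta                  # small broadening regularizes the poles
    for _ in range(max_iter):
        g = t * t / (z - eps)
        eps, t = eps + 2 * g, g       # renormalized on-site energy and hopping
        if abs(t) < tol:              # decimated sites are fully decoupled
            break
    G = 1.0 / (z - eps)               # bulk Green's function
    return -G.imag / math.pi

# Exact result for |E| < 2|t| is 1 / (pi * sqrt(4 t^2 - E^2))
print(bulk_dos(1.0))
```

Each iteration doubles the decimated length scale, so convergence is logarithmic in the broadening; inside the band the result reproduces the exact chain density of states, and outside the band it vanishes.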
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 This test method covers a general procedure for the measurement of the fast-neutron fluence rate produced by neutron generators utilizing the 3H(d,n)4He reaction. Neutrons so produced are usually referred to as 14-MeV neutrons, but range in energy depending on a number of factors. This test method does not adequately cover fusion sources where the velocity of the plasma may be an important consideration. 1.2 This test method uses threshold activation reactions to determine the average energy of the neutrons and the neutron fluence at that energy. At least three activities, chosen from an appropriate set of dosimetry reactions, are required to characterize the average energy and fluence. The required activities are typically measured by gamma ray spectroscopy. 1.3 The measurement of reaction products in their metastable states is not covered. If the metastable state decays to the ground state, the ground state reaction may be used. 1.4 The values stated in SI units are to be regarded as standard. No oth...
Long-term program for research and development of group separation and disintegration techniques
International Nuclear Information System (INIS)
1988-01-01
In Japan, the basic guidelines state that high-level radioactive wastes released from reprocessing of spent fuel should be processed into stable solid material, followed by storage for cooling for 30-50 years and disposal in the ground at a depth of several hundred meters. The Long-Term Program for Research and Development of Group Separation and Disintegration Techniques is aimed at efficient disposal of high-level wastes, reutilization of the useful substances they contain, and improved safety. Important processes include separation of nuclides (group separation, individual nuclide separation) and conversion (disintegration) of long-lived nuclides into short-lived or non-radioactive ones. These processes can reduce the volume of high-level wastes left for final disposal. Research and development projects have been under way to provide techniques to separate high-level waste substances into four groups (transuranic elements, strontium/cesium, technetium/platinum group elements, and others). These projects also cover recovery of useful metals and efficient utilization of separated substances. For disintegration, conceptual studies have been carried out on the application of fast neutron beams to the conversion of long half-life transuranium elements into short half-life or non-radioactive elements. (N.K.)
Report of the B-factory Group: 1, Physics and techniques
International Nuclear Information System (INIS)
Feldman, G.J.; Cassel, D.G.; Siemann, R.H.
1989-01-01
The study of B meson decay appears to offer a unique opportunity to measure basic parameters of the Standard Model, probe for interactions mediated by higher mass particles, and investigate the origin of CP violation. These opportunities have been enhanced by the results of two measurements. The first is the measurement of a long B meson lifetime. In addition to allowing a simpler identification of B mesons and a measurement of the time of their decay, this observation implies that normal decays are suppressed, making rare decays more prevalent. The second measurement is that neutral B mesons are strongly mixed. This enhances the possibilities for studying CP violation in the B system. The CESR storage ring is likely to dominate the study of B physics in e+e- annihilations for about the next five years. First, CESR has already reached a luminosity of 10^32 cm^-2 s^-1 and has plans for improvements which may increase the luminosity by a factor of about five. Second, a second-generation detector, CLEO II, will start running in 1989. Given this background, the main focus of this working group was to ask what is needed for the mid- to late 1990s. Many laboratories are thinking about new facilities involving a variety of techniques. To help clarify the choices, we focused on one example of CP violation and estimated the luminosity required to measure it using different techniques. We will briefly describe the requirements for detectors matched to these techniques. In particular, we will give a conceptual design of a possible detector for asymmetric collisions at the Υ(4S) resonance, one of the attractive techniques to emerge from this study. A discussion of accelerator technology issues for using these techniques forms the second half of the B-factory Group report, and it follows in these proceedings. 34 refs., 2 figs., 2 tabs
International Nuclear Information System (INIS)
Kent, R.D.; Schlesinger, M.
1987-01-01
For the purpose of computing matrix elements of quantum mechanical operators in complex N-particle systems, it is necessary that as much of each irreducible representation as possible be stored in high-speed memory in order to achieve the highest possible rate of computation. A graph theoretic approach to the representation of N-particle systems involving arbitrary single-particle spin is presented. The method involves a generalization of a technique employed by Shavitt in developing the graphical unitary group approach (GUGA) for electronic spin-orbitals. The methods implemented in GENDRT and DRTDIM overcome many deficiencies inherent in other approaches, particularly with respect to utilization of memory resources, computational efficiency in the recognition and evaluation of non-zero matrix elements of certain group theoretic operators, and complete labelling of all basis states of the permutation symmetry (S_N) adapted irreducible representations of SU(n) groups. (orig.)
International Nuclear Information System (INIS)
Zhao, W.H.; Cox, S.F.J.
1980-07-01
In the NMR measurement of dynamic nuclear polarization, a volume average is obtained where the contribution from different parts of the sample is weighted according to the local intensity of the RF field component perpendicular to the large static field. A method of mapping this quantity is described. A small metallic object whose geometry is chosen to perturb the appropriate RF component is scanned through the region to be occupied by the sample. The response of the phase angle of the impedance of a tuned circuit comprising the NMR coil gives a direct measurement of the local weighting factor. The correlation between theory and experiment was obtained by using a circular coil. The measuring method, checked in this way, was then used to investigate the field profiles of practical coils which are required to be rectangular for a proposed experimental neutron polarizing filter. This method can be used to evaluate other practical RF coils. (author)
Critical test of isotropic periodic sum techniques with group-based cut-off schemes.
Nozawa, Takuma; Yasuoka, Kenji; Takahashi, Kazuaki Z
2018-03-08
Truncation is still chosen for many long-range intermolecular interaction calculations to efficiently compute free-boundary systems, macromolecular systems and net-charge molecular systems, for example. Advanced truncation methods have been developed for long-range intermolecular interactions. Every truncation method can be implemented as one of two basic cut-off schemes, namely either an atom-based or a group-based cut-off scheme. The former computes interactions of "atoms" inside the cut-off radius, whereas the latter computes interactions of "molecules" inside the cut-off radius. In this work, the effect of group-based cut-off is investigated for isotropic periodic sum (IPS) techniques, which are promising cut-off treatments to attain advanced accuracy for many types of molecular system. The effect of group-based cut-off is clearly different from that of atom-based cut-off, and severe artefacts are observed in some cases. However, no severe discrepancy from the Ewald sum is observed with the extended IPS techniques.
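The distinction between the two basic cut-off schemes can be illustrated with a bare Coulomb pair sum (no IPS correction); the dipole geometry and charges below are hypothetical. Cutting through a molecule, as the atom-based scheme may do, leaves an artificial net charge inside the cut-off sphere, which is the kind of artefact group-based truncation avoids.

```python
import math

def centre(mol):
    """Geometric centre of a molecule given as a list of ((x, y, z), charge)."""
    pts = [r for r, _ in mol]
    n = len(pts)
    return tuple(sum(p[k] for p in pts) / n for k in range(3))

def coulomb(ri, qi, rj, qj):
    return qi * qj / math.dist(ri, rj)   # Gaussian units, for illustration

def truncated_energy(mol_a, mol_b, rc, scheme="atom"):
    """'atom' keeps atom pairs with r < rc; 'group' keeps *all* atom pairs
    whenever the molecular centres are within rc."""
    if scheme == "group":
        if math.dist(centre(mol_a), centre(mol_b)) >= rc:
            return 0.0
        return sum(coulomb(ri, qi, rj, qj)
                   for ri, qi in mol_a for rj, qj in mol_b)
    return sum(coulomb(ri, qi, rj, qj)
               for ri, qi in mol_a for rj, qj in mol_b
               if math.dist(ri, rj) < rc)

# Two +q/-q dipoles placed so an atom-based cut-off slices one dipole in half
a = [((0.0, 0.0, 0.0), 1.0), ((0.0, 0.0, 1.0), -1.0)]
b = [((0.0, 0.0, 4.5), -1.0), ((0.0, 0.0, 5.5), 1.0)]
print(truncated_energy(a, b, 5.0, "atom"), truncated_energy(a, b, 5.0, "group"))
```

Here the atom-based sum sees only three of the four charges and returns a spuriously large monopole-like energy, while the group-based sum keeps the neutral pair intact and returns the small dipole-dipole value, illustrating why the two schemes can behave so differently under IPS.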
Peña, Adolfo; Estrada, Carlos A; Soniat, Debbie; Taylor, Benjamin; Burton, Michael
2012-01-01
Pain management in hospitalized patients remains a priority area for improvement; effective strategies for consensus development are needed to prioritize interventions. Objective: To identify challenges, barriers, and perspectives of healthcare providers in managing pain among hospitalized patients. Design: Qualitative and quantitative group consensus using a brainstorming technique for quality improvement, the nominal group technique (NGT). Setting: One medical, 1 medical-surgical, and 1 surgical hospital unit at a large academic medical center. Participants: Nurses, resident physicians, patient care technicians, and unit clerks. Measurements: Responses and rankings for the NGT question: "What causes uncontrolled pain in your unit?" Twenty-seven health workers generated a total of 94 ideas. The ideas perceived as contributing to suboptimal pain control were grouped as system factors (timeliness, n = 18 ideas; communication, n = 11; pain assessment, n = 8), human factors (knowledge and experience, n = 16; provider bias, n = 8; patient factors, n = 19), and the interface of system and human factors (standardization, n = 14). Knowledge, timeliness, provider bias, and patient factors were the top-ranked themes. Knowledge and timeliness are considered the main priorities for improving pain control. NGT is an efficient tool for identifying general and context-specific priority areas for quality improvement; teams of healthcare providers should consider using NGT to address their own challenges and barriers. Copyright © 2011 Society of Hospital Medicine.
International Nuclear Information System (INIS)
Balleza, M; Vargas, M; Delgadillo, I; Kashina, S; Huerta, M R; Moreno, G
2017-01-01
Several research groups have proposed electrical impedance tomography (EIT) for analysing lung ventilation. With the use of 16 electrodes, EIT is capable of obtaining a set of transversal section images of the thorax. In previous works, we have extracted from EIT images an alternating impedance signal corresponding to respiration. Then, in order to transform those impedance changes into a measurable volume signal, a set of calibration equations was obtained. However, the EIT technique is still too expensive for attending outpatients in basic hospitals. For that reason, we propose the use of the electrical bioimpedance (EBI) technique to monitor respiration behaviour. The aim of this study was to obtain a set of calibration equations to transform EBI impedance changes, determined at 4 different frequencies, into a measurable volume signal. A group of 8 healthy males was assessed. The results showed a high goodness of fit for the group calibration equations. The volume determinations obtained by EBI were then compared with those obtained by our gold standard. Therefore, although EBI does not provide the complete information about lung impedance vectors that EIT does, it is possible to monitor respiration with it. (paper)
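One such calibration equation (volume as a linear function of the impedance excursion, fitted against a gold-standard volume signal) could be obtained by least squares as sketched below; the data and coefficients are synthetic placeholders, and one fit per measurement frequency is assumed.

```python
import numpy as np

# Synthetic training data: impedance excursion (ohm) vs gold-standard volume (L)
rng = np.random.default_rng(1)
dz = rng.uniform(0.5, 3.0, 30)                      # impedance excursions
volume = 0.8 * dz + 0.1 + rng.normal(0, 0.05, 30)   # reference volumes + noise

# Least-squares calibration  volume ~ a*dz + b  (one such fit per frequency)
A = np.column_stack([dz, np.ones_like(dz)])
(a, b), *_ = np.linalg.lstsq(A, volume, rcond=None)
print(f"volume ~ {a:.3f}*dZ + {b:.3f}")
```

In practice the reference volumes would come from the gold-standard device rather than a formula, and the quality of the fit would be assessed per frequency before using the equation on new subjects.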
Qu, Haiyan; Shewchuk, Richard; Mannon, Roslyn B.; Gaston, Robert; Segev, Dorry L.; Mannon, Elinor C.; Martin, Michelle Y.
2015-01-01
Background and objectives: African Americans are disproportionately affected by ESRD, but few receive a living donor kidney transplant. Surveys assessing attitudes toward donation have shown that African Americans are less likely to express a willingness to donate their own organs. Studies aimed at understanding factors that may facilitate the willingness of African Americans to become organ donors are needed. Design, setting, participants, and measurements: A novel formative research method, the nominal group technique, was used to identify and prioritize strategies for facilitating increases in organ donation among church-attending African Americans. Four nominal group technique panel interviews were convened (three community and one clergy). Each community panel represented a distinct local church; the clergy panel represented five distinct faith-based denominations. Before the nominal group technique interviews, participants completed a questionnaire that assessed willingness to become a donor; 28 African-American adults (≥19 years old) participated in the study. Results: In total, 66.7% of participants identified knowledge- or education-related strategies as the most important in facilitating willingness to become an organ donor, a view that was even more pronounced among clergy. Three of the four nominal group technique panels rated a knowledge-based strategy as the most important; these included strategies such as information on donor involvement and donation-related risks. 29.6% of participants indicated that they disagreed with deceased donation, and 37% disagreed with living donation. Community participants' reservations about becoming an organ donor were similar for living (38.1%) and deceased (33.4%) donation; in contrast, clergy participants were more likely to express reservations about living donation (33.3% versus 16.7%). Conclusions: These data indicate a greater opposition to living donation compared with donation after one's death.
Directory of Open Access Journals (Sweden)
Michel J. Anzanello
2014-09-01
Full Text Available A typical application of multivariate techniques in forensic analysis consists of discriminating between authentic and unauthentic samples of seized drugs, in addition to finding similar properties among the unauthentic samples. In this paper, the performance of several methods belonging to two different classes of multivariate techniques, supervised and unsupervised, was compared. The supervised techniques (ST) are k-Nearest Neighbor (KNN), Support Vector Machine (SVM), Probabilistic Neural Networks (PNN) and Linear Discriminant Analysis (LDA); the unsupervised techniques (UT) are k-Means cluster analysis (CA) and Fuzzy C-Means (FCM). The methods are applied to Fourier-transform infrared spectroscopy (FTIR) data from authentic and unauthentic Cialis and Viagra. The FTIR data are also transformed by Principal Component Analysis (PCA) and kernel functions aimed at improving the grouping performance. The ST proved to be a more reasonable choice when the analysis is conducted on the original data, while the UT led to better results when applied to transformed data.
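A minimal NumPy-only sketch of the supervised/unsupervised contrast, on synthetic stand-ins for the two classes of FTIR spectra: a k-nearest-neighbour classifier uses the authentic/unauthentic labels, while k-means must recover the grouping unlabelled. This illustrates the two classes of technique; it is not a reproduction of the paper's models.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic stand-in for two classes of spectra (authentic / unauthentic)
X = np.vstack([rng.normal(0.0, 1.0, (50, 5)),    # class 0
               rng.normal(2.0, 1.0, (50, 5))])   # class 1
y = np.array([0] * 50 + [1] * 50)

def knn_predict(X_train, y_train, X_test, k=3):
    """Supervised: majority vote among the k nearest training samples."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (y_train[nearest].mean(axis=1) > 0.5).astype(int)

def kmeans_labels(X, k=2, iters=50):
    """Unsupervised: Lloyd's algorithm; cluster ids carry no class meaning."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(np.linalg.norm(X[:, None] - centroids[None], axis=2),
                        axis=1)
        centroids = np.array([X[lab == j].mean(axis=0) for j in range(k)])
    return lab

pred = knn_predict(X, y, X)      # resubstitution, for illustration only
lab = kmeans_labels(X)
acc_knn = (pred == y).mean()
# Map cluster ids to classes by majority before scoring the clustering
acc_km = max((lab == y).mean(), ((1 - lab) == y).mean())
print(acc_knn, acc_km)
```

The supervised method exploits the labels directly; the unsupervised one can only be scored after mapping its arbitrary cluster ids onto classes, which is exactly why preprocessing (such as PCA) matters more for the unsupervised family.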
Sumadi; Degeng, I Nyoman S.; Sulthon; Waras
2017-01-01
This research focused on the effects of ability grouping in the reciprocal teaching technique of collaborative learning on individual achievement and social skills. The results showed that (1) there are significant differences in individual achievement between the homogeneous high group, homogeneous middle group, homogeneous low group,…
Choi, Soohwan; Park, Hyung Joo
2017-10-01
To compare the complications associated with age and technique in patients undergoing pectus excavatum (PE) repair, the data of 994 patients who underwent PE repair from March 2011 to December 2015 were retrospectively reviewed. Mean age was 9.59 years (range 31 months-55 years), and 756 patients were men (76.1%). The age groups were defined as follows: Group 1, <5 years; Group 2, 5-9 years; Group 3, 10-14 years; Group 4, 15-17 years; Group 5, 18-19 years; Group 6, 20-24 years; and Group 7, >24 years. The technique groups were defined as follows: Group 1, patients who underwent repair with claw fixators and hinge plates; Group 2, patients who underwent repair with our 'bridge' technique. Complications were compared between age groups and technique groups. No cases of mortality occurred. Complication rates in age Groups 1-7 were 5.4%, 3.6%, 12.1%, 18.2%, 17.3%, 13.9% and 16.7%, respectively; the complication rate tripled after the age of 10. In multivariable analysis, the odds ratios of age Groups 4, 5 and 7 and of asymmetric types were 3.04, 2.81, 2.97 and 1.70, respectively. The bar dislocation rate in technique Group 1 was 0.8% (6 of 780); no bar dislocations occurred in technique Group 2. Older age and asymmetric pectus deformity are risk factors for complications following PE repair. The bridge technique provides a bar dislocation rate of 0%, even in adult patients, and seems to reduce or prevent major complications following PE repair. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Directory of Open Access Journals (Sweden)
Hiligsmann M
2013-02-01
Full Text Available Mickael Hiligsmann, Caroline van Durme, Piet Geusens, Benedict GC Dellaert, Carmen D Dirksen, Trudy van der Weijden, Jean-Yves Reginster, Annelies Boonen (Maastricht University and Erasmus University Rotterdam, The Netherlands; University of Liege, Belgium). Background: Attribute selection represents an important step in the development of discrete-choice experiments (DCEs), but is often poorly reported. In some situations, the number of attributes identified may exceed what one may find possible to pilot in a DCE. Hence, there is a need to gain insight into methods to select attributes in order to construct the final list of attributes. This study aims to test the feasibility of using the nominal group technique (NGT) to select attributes for DCEs. Methods: Patient group discussions (4-8 participants) were convened to prioritize a list of 12 potentially important attributes for osteoporosis drug therapy. The NGT consisted of three steps: an individual ranking of the 12 attributes by importance from 1 to 12, a group discussion on each of the attributes, including a group review of the aggregate score of the initial rankings, and a second ranking task on the same attributes. Results: Twenty-six osteoporotic patients participated in five NGT sessions. Most (80%) of the patients changed their ranking after the discussion. However, the average initial and final rankings did not differ markedly. In the final ranking, the most important medication attributes were
International Nuclear Information System (INIS)
Qureshi, A.A.; Ullah, K.; Ullah, N.; Mohammad, A.
2004-07-01
The strong similarity in sedimentary depositional characteristics between the Warcha Sandstone of the Nilawahan Group in the Salt Range and the uranium-bearing sandstones of the Siwalik Group in the foothills of the Himalaya and Sulaiman Ranges prompted geologists to investigate the former group for the occurrence of uranium deposits. Like the volcanic ash beds in the Siwaliks, phosphatic nodules may be a possible source of uranium mineralization in the Warcha Sandstone of the Nilawahan Group. Samples of phosphatic nodules occurring in the sandstone of the Nilawahan Group, Salt Range, were analyzed using the Solid State Nuclear Track Detection Technique (SSNTD) to determine their uranium concentration. The results obtained are quite encouraging and favour the idea of exploring the area in detail for any possible occurrence of uranium deposits. Uranium concentration in these samples ranges from (434 ± 39) ppm to (964 ± 81) ppm, with an average concentration of (699 ± 62) ppm. (author)
Tools, techniques, organisation and culture of the CADD group at Sygnature Discovery.
St-Gallay, Steve A; Sambrook-Smith, Colin P
2017-03-01
Computer-aided drug design encompasses a wide variety of tools and techniques, and can be implemented with a range of organisational structures and focus in different organisations. Here we outline the computational chemistry skills within Sygnature Discovery, along with the software and hardware at our disposal, and briefly discuss the methods that are not employed and why. The goal of the group is to provide support for design and analysis in order to improve the quality of compounds synthesised and reduce the timelines of drug discovery projects, and we reveal how this is achieved at Sygnature. Impact on medicinal chemistry is vital to demonstrating the value of computational chemistry, and we discuss the approaches taken to influence the list of compounds for synthesis, and how we recognise success. Finally we touch on some of the areas being developed within the team in order to provide further value to the projects and clients.
Bruner, D W; Boyd, C P
1999-12-01
Cancer and cancer therapies impair sexual health in a multitude of ways. The promotion of sexual health is therefore vital for preserving quality of life and is an integral part of total or holistic cancer management. Nursing, to provide holistic care, requires research that is meaningful to patients as well as the profession in order to develop educational and interventional studies that promote sexual health and coping. To obtain meaningful research data, instruments that are reliable, valid, and pertinent to patients' needs are required. Several sexual functioning instruments were reviewed for this study and found to be lacking in either a conceptual foundation or psychometric validation. Without a defined conceptual framework, the authors of the instruments must have made certain assumptions regarding what women undergoing cancer therapy experience and what they perceive as important. To check these assumptions before assessing women's sexuality after cancer therapies in a larger study, a pilot study was designed to compare, using the focus group technique, what women experience and perceive as important regarding their sexuality with what is assessed in several currently available research instruments. Based on the focus group findings, current sexual functioning questionnaires may be lacking in pertinent areas of concern for women treated for breast or gynecologic malignancies. Better conceptual foundations may help future questionnaire design. Self-regulation theory may provide an acceptable conceptual framework from which to develop a sexual functioning questionnaire.
International Nuclear Information System (INIS)
Blinov, N.N.; Guslistyj, V.P.; Misyurev, A.V.; Novitskaya, N.N.; Snigireva, G.P.
1993-01-01
An attempt to use nonstatistical pattern recognition techniques to identify risk groups among liquidators of the Chernobyl NPP accident aftereffects is described. Fourteen hematologic, biochemical and biophysical blood serum parameters of a group of liquidators of the Chernobyl NPP accident, as well as of a group of donors who had received no radiation dose (control group), were taken as the diagnostic parameters. A modification of the nonstatistical pattern recognition techniques based on assessment calculations was used. The patients were assigned to the risk group with an accuracy of ∼ 80%.
West, Robert; Evans, Adam; Michie, Susan
2011-12-01
To develop a reliable coding scheme for components of group-based behavioral support for smoking cessation, to establish the frequency of inclusion in English Stop-Smoking Service (SSS) treatment manuals of specific components, and to investigate the associations between inclusion of behavior change techniques (BCTs) and service success rates. A taxonomy of BCTs specific to group-based behavioral support was developed and reliability of use assessed. All English SSSs (n = 145) were contacted to request their group-support treatment manuals. BCTs included in the manuals were identified using this taxonomy. Associations between inclusion of specific BCTs and short-term (4-week) self-reported quit outcomes were assessed. Fourteen group-support BCTs were identified with >90% agreement between coders. One hundred and seven services responded to the request for group-support manuals of which 30 had suitable documents. On average, 7 BCTs were included in each manual. Two were positively associated with 4-week quit rates: "communicate group member identities" and a "betting game" (a financial deposit that is lost if a stop-smoking "buddy" relapses). It is possible to reliably code group-specific BCTs for smoking cessation. Fourteen such techniques are present in guideline documents of which 2 appear to be associated with higher short-term self-reported quit rates when included in treatment manuals of English SSSs.
Hitch, Danielle; Taylor, Michelle; Pepin, Genevieve
2015-05-01
The aim of this study was to obtain a consensus from clinicians regarding occupational therapy for people with depression, for the assessments and practices they use that are not currently supported by research evidence directly related to functional performance. The study also aimed to discover how many of these assessments and practices were currently supported by research evidence. Following a previously reported systematic review of assessments and practices used in occupational therapy for people with depression, a modified nominal group technique was used to discover which assessments and practices occupational therapists currently utilize. Three online surveys gathered initial data on therapeutic options (survey 1), which were then ranked (survey 2) and re-ranked (survey 3) to reach the final consensus. Twelve therapists completed the first survey, whilst 10 clinicians completed both the second and third surveys. Only 30% of the assessments and practices identified by the clinicians were supported by research evidence. A consensus was obtained on a total of 35 other assessments and interventions. These included both occupational-therapy-specific and generic assessments and interventions. Principal conclusion: Very few of the assessments and interventions identified were supported by research evidence directly related to functional performance. While a large number of options were generated, the majority of these were not occupational therapy specific.
Shortt, S E D; Guillemette, Jean-Marc; Duncan, Anne Marie; Kirby, Frances
2010-01-01
The rapid increase in the use of the Internet for continuing education by physicians suggests the need to define quality criteria for accredited online modules. Continuing medical education (CME) directors from Canadian medical schools and academic researchers participated in a consensus process, Modified Nominal Group Technique, to develop agreement on the most important quality criteria to guide module development. Rankings were compared to responses to a survey of a subset of Canadian Medical Association (CMA) members. A list of 17 items was developed, of which 10 were deemed by experts to be important and 7 were considered secondary. A quality module would: be needs-based; presented in a clinical format; utilize evidence-based information; permit interaction with content and experts; facilitate and attempt to document practice change; be accessible for later review; and include a robust course evaluation. There was less agreement among CMA members on criteria ranking, with consensus on ranking reached on only 12 of 17 items. In contrast to experts, members agreed that the need to assess performance change as a result of an educational experience was not important. This project identified 10 quality criteria for accredited online CME modules that representatives of Canadian organizations involved in continuing education believe should be taken into account when developing learning products. The lack of practitioner support for documentation of change in clinical behavior may suggest that they favor traditional attendance- or completion-based CME; this finding requires further research.
Hanasoge, Shravan; Agarwal, Umang; Tandon, Kunj; Koelman, J. M. Vianney A.
2017-09-01
Determining the pressure differential required to achieve a desired flow rate in a porous medium requires solving Darcy's law, a Laplace-like equation, with a spatially varying tensor permeability. In various scenarios, the permeability coefficient is sampled at high spatial resolution, which makes solving Darcy's equation numerically prohibitively expensive. As a consequence, much effort has gone into creating upscaled or low-resolution effective models of the coefficient while ensuring that the estimated flow rate is well reproduced, bringing to the fore the classic tradeoff between computational cost and numerical accuracy. Here we perform a statistical study to characterize the relative success of upscaling methods on a large sample of permeability coefficients that are above the percolation threshold. We introduce a technique based on mode-elimination renormalization group theory (MG) to build coarse-scale permeability coefficients. Comparing the results with coefficients upscaled using other methods, we find that MG is consistently more accurate, particularly due to its ability to address the tensorial nature of the coefficients. MG places a low computational demand, in the manner in which we have implemented it, and accurate flow-rate estimates are obtained when using MG-upscaled permeabilities that approach or are beyond the percolation threshold.
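The tensorial MG scheme itself is beyond a short sketch, but the upscaling tradeoff it addresses can be illustrated in 1-D, where flow in series makes the harmonic mean the exact effective permeability while the arithmetic mean systematically overestimates it; the permeability field below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
k_fine = np.exp(rng.normal(0.0, 1.0, 1024))    # lognormal permeability field

def effective_k(k):
    """Exact effective permeability for 1-D flow in series (harmonic mean)."""
    return len(k) / np.sum(1.0 / k)

def upscale(k, factor, mean="harmonic"):
    """Coarsen the field by averaging blocks of `factor` cells."""
    blocks = k.reshape(-1, factor)
    if mean == "harmonic":
        return factor / np.sum(1.0 / blocks, axis=1)
    return blocks.mean(axis=1)                  # arithmetic

exact = effective_k(k_fine)
for mean in ("harmonic", "arithmetic"):
    coarse = upscale(k_fine, 16, mean)
    print(mean, effective_k(coarse), "vs exact", exact)
```

In 1-D the harmonic-mean upscaling composes exactly, so the coarse model reproduces the fine-grid flow rate at a fraction of the cost; in 2-D and 3-D with tensor permeabilities no simple mean is exact, which is the regime where the paper's renormalization-group construction earns its keep.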
International Nuclear Information System (INIS)
Guardini, S.
2003-01-01
The first evaluation of NDA performance values undertaken by the ESARDA Working Group for Standards and Non Destructive Assay Techniques (WGNDA) was published in 1993. Almost 10 years later the Working Group decided to review those values, to report on improvements and to issue new performance values for techniques which were not applied in the early nineties, or were at that time only emerging. Non-Destructive Assay techniques have become more and more important in recent years, and they are used to a large extent in nuclear material accountancy and control both by operators and control authorities. As a consequence, the performance evaluation of NDA techniques is of particular relevance to safeguards authorities in optimising Safeguards operations and reducing costs. Performance values are important also for NMAC regulators, to define detection levels, limits for anomalies, goal quantities and to negotiate basic audit rules. This paper presents the latest evaluation of ESARDA Performance Values (EPVs) for the most common NDA techniques currently used for the assay of nuclear materials for Safeguards purposes. The main topics covered by the document are: techniques for plutonium-bearing materials (PuO2 and MOX); techniques for U-bearing materials; techniques for U and Pu in liquid form; and techniques for spent fuel assay. This issue of the performance values is the result of specific international round robin exercises, field measurements and ad hoc experiments, evaluated and discussed in the ESARDA NDA Working Group. (author)
International Nuclear Information System (INIS)
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of the application of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.
International Nuclear Information System (INIS)
Urie, Marcia; FitzGerald, T.J.; Followill, David; Laurie, Fran; Marcus, Robert; Michalski, Jeff
2003-01-01
Purpose: To report current technology implementation, radiation therapy physics and treatment planning practices, and results of treatment planning exercises among 261 institutions belonging to the Children's Oncology Group (COG). Methods and Materials: The Radiation Therapy Committee of the newly formed COG mandated that each institution demonstrate basic physics and treatment planning abilities by satisfactorily completing a questionnaire and four treatment planning exercises designed by the Quality Assurance Review Center. The planning cases are (1) a maxillary sinus target volume (for two-dimensional planning), (2) a Hodgkin's disease mantle field (for irregular-field and off-axis dose calculations), (3) a central axis blocked case, and (4) a craniospinal irradiation case. The questionnaire and treatment plans were submitted (as of 1/30/02) by 243 institutions and completed satisfactorily by 233. Data from this questionnaire and analyses of the treatment plans with monitor unit calculations are presented. Results: Of the 243 clinics responding, 54% use multileaf collimators routinely, 94% use asymmetric jaws routinely, and 13% use dynamic wedges. Nearly all institutions calibrate their linear accelerators following American Association of Physicists in Medicine protocols, currently 16% with TG-51 and 81% with TG-21 protocol. Treatment planning systems are relied on very heavily for all calculations, including monitor units. Techniques and results of each of the treatment planning exercises are presented. Conclusions: Together, these data provide a unique compilation of current (2001) radiation therapy practices in institutions treating pediatric patients. Overall, the COG facilities have the equipment and the personnel to perform high-quality radiation therapy. With ongoing quality assurance review, radiation therapy compliance with COG protocols should be high
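As a sketch of the monitor-unit calculations the planning exercises test, a generic point-dose formula is shown below; the factor names and all numerical values are illustrative placeholders, not clinical data or COG protocol values.

```python
def monitor_units(dose_cgy, output_cgy_per_mu, tmr, field_factor,
                  wedge_factor=1.0):
    """Generic point-dose monitor-unit calculation,
        MU = D / (output * TMR * field factor * wedge factor),
    where 'output' is the calibrated cGy/MU under reference conditions.
    All factor values used here are illustrative, not clinical data."""
    return dose_cgy / (output_cgy_per_mu * tmr * field_factor * wedge_factor)

# Example: 200 cGy prescribed, 1.0 cGy/MU calibration, illustrative factors
mu = monitor_units(200.0, 1.0, 0.85, 0.98)
print(round(mu, 1))
```

A treatment planning system chains many more corrections (off-axis, blocking, inhomogeneity), but an independent hand calculation of this simple form is a typical quality-assurance cross-check of the kind the exercises probe.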
High average power supercontinuum sources
Indian Academy of Sciences (India)
The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.
International Nuclear Information System (INIS)
Williams, P.L.; White, N.; Klem, R.; Wilson, S.E.; Bartholomew, P.
2006-01-01
There are a number of group-based research techniques available to determine the views or perceptions of individuals in relation to specific topics. This paper reports on one such method, the nominal group technique (NGT), which was used to collect the views of important stakeholders on the factors affecting the quality of, and capacity to provide, clinical education and training in diagnostic imaging and radiotherapy and oncology departments in the UK. Inclusion criteria were devised to recruit learners, educators, practitioners and service managers to the nominal groups. Eight regional groups comprising a total of 92 individuals were enrolled; the numbers in each group varied between 9 and 13. A total of 131 items (factors) were generated across the groups (mean = 16.4). Each group was then asked to select the top three factors from their original list. Consensus on the important factors amongst groups found that all eight groups agreed on one item: staff attitude, motivation and commitment to learners. The 131 items were organised into themes using content analysis; five main categories and a number of subcategories emerged. The study concluded that the NGT provided data which were congruent with the issues faced by practitioners and learners in their daily work; this is of vital importance if the findings are to be regarded with credibility. Further advantages and limitations of the method are discussed; however, it is argued that the NGT is a useful technique to gather relevant opinion, to select priorities, and to reach consensus on a wide range of issues.
Lagrangian averaging with geodesic mean.
Oliver, Marcel
2017-11-01
This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.
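For orientation, the objects named in this abstract are usually written as follows (the standard form of the averaged Lagrangian and the LAE-α, or Euler-α, equations, with α the averaging length scale; a reference sketch, not a restatement of this paper's derivation):

```latex
\ell(u) = \tfrac{1}{2} \int_\Omega \left( \lvert u \rvert^2
          + \alpha^2 \lvert \nabla u \rvert^2 \right) \mathrm{d}x ,
\qquad
\partial_t v + (u \cdot \nabla) v + (\nabla u)^{\mathsf{T}} v = - \nabla p ,
\quad v = (1 - \alpha^2 \Delta) u , \quad \nabla \cdot u = 0 .
```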
Consumer and Commercial Products, Group IV: Control Techniques Guidelines in Lieu of Regulations
EPA has determined that control techniques guidelines (CTGs) will be substantially as effective as regulations in reducing volatile organic compound (VOC) emissions in ozone nonattainment areas for certain consumer and commercial product categories.
Directory of Open Access Journals (Sweden)
Cristina Poyatos
2011-02-01
First year accounting has generally been perceived as one of the more challenging first year business courses for university students. Various Classroom Assessment Techniques (CATs) have been proposed to enrich and enhance student learning, with these studies generally positioning students as learners alone. This paper uses an educational case study approach to examine the implementation of the IGCRA (individual, group, classroom reflective action) technique, a Classroom Assessment Technique, on first year accounting students' learning performance. Building on theoretical frameworks in the areas of cognitive learning, social development, and dialogical learning, the technique uses reports to promote reflection on both learning and teaching. IGCRA was found to promote feedback on the effectiveness of student learning, as well as teacher satisfaction. Moreover, the results indicated that formative feedback can help improve the learning and the learning environment for a large group of first year accounting students. Clear guidelines for its implementation are provided in the paper.
Doménech, J D; Muñoz, P; Capmany, J
2011-01-15
In this Letter, the amplitude and group delay characteristics of coupled resonator optical waveguides apodized through the longitudinal offset technique are presented. The devices have been fabricated in silicon-on-insulator technology employing deep ultraviolet lithography. The structures analyzed consisted of three racetrack resonators, uniform (nonapodized) and apodized with the aforementioned technique, showing delays of 5 ± 3 ps and 4 ± 0.5 ps over 1.6 and 1.4 nm bandwidths, respectively.
DEFF Research Database (Denmark)
Andresen, Kristoffer; Laursen, Jannie; Rosenberg, Jacob
2016-01-01
technique for inguinal hernia repair, seen from the instructor's point of view. Methods. We designed a qualitative study using a focus group to allow participants to elaborate freely and facilitate a discussion. Participants were surgeons with extensive experience in performing the Onstep technique from...... course should preferably have experience with other types of hernia repairs. If trainees are inexperienced, the training setup should be a traditional step-by-step programme. A training setup should consist of an explanation of the technique with emphasis on anatomy and difficult parts of the procedure...
Prosopography of social and political groups historically located: method or research technique?
Directory of Open Access Journals (Sweden)
Lorena Madruga Monteiro
2014-06-01
The scientific status of the prosopographical approach has been questioned in different disciplinary domains. Whether prosopography is a technique, a research tool, an auxiliary science, or a method is debated both in scientific arguments and among those dedicated to explaining the assumptions of prosopographical research. In the social sciences, for example, prosopography is seen not only as an instrument of research but as a method associated with a theoretical construct for apprehending the social world. Historians who use prosopographic analysis, in turn, oscillate over whether the analysis of collective biography is a method or a polling technique. Given this setting, in this article we discuss the prosopographical approach through its different uses. The study presents a literature review, examining prosopography as a technique of historical research and as a method of sociological analysis, and then highlights its procedures and methodological limits.
Using Psychodrama Techniques to Promote Counselor Identity Development in Group Supervision
Scholl, Mark B.; Smith-Adcock, Sondra
2007-01-01
The authors briefly introduce the concepts, techniques, and theory of identity development associated with J. L. Moreno's (1946, 1969, 1993) Psychodrama. Based upon Loganbill, Hardy, and Delworth's (1982) model, counselor identity development is conceptualized as consisting of seven developmental themes or vectors (e.g., issues of awareness and…
International Nuclear Information System (INIS)
Hall, H.K. Jr.; Reineke, K.E.; Ried, J.H.; Sentman, R.C.; Miller, D.
1982-01-01
X-ray crystal structure determination for two tetrasubstituted electrophilic olefins, tetramethyl ethylenetetracarboxylate (TMET) and dimethyl dicyanofumarate (DDCF), revealed two fundamentally different molecular structures. TMET is a nonplanar molecule in which two opposite ester groups are planar while the others lie above and below the molecular plane. In contrast, in DDCF both ester groups lie in the plane of the double bond and nitrile groups. DDCF underwent thermal spontaneous copolymerization with electron-rich styrenes to give 1:1 alternating copolymers in moderate yields and molecular weights. These copolymers, which result from the first copolymerization of a tetrasubstituted olefin, possess an average functionality of 1.25 per chain carbon atom. Polymerization is made possible by low steric hindrance and the high delocalization in the propagating radical. The yields were limited by a competing cycloaddition reaction. The corresponding diethyl ester also copolymerized, but not so well. Neither electrophilic olefin homopolymerized under γ-irradiation. TMET did not copolymerize at all when treated under identical conditions.
Directory of Open Access Journals (Sweden)
Mohammad Amin Karafkan
2015-11-01
Cooperative learning consists of techniques for helping students work together more effectively. This study investigated the effects of Group Investigation (GI) and Cooperative Integrated Reading and Composition (CIRC) as cooperative learning techniques on Iranian EFL learners' reading comprehension at an intermediate level. The participants were 207 male students studying at an intermediate level at ILI. They were randomly assigned to three equal groups: one control group and two experimental groups. The control group was instructed via a conventional technique following an individualistic instructional approach. One experimental group received the GI technique; the other experimental group received the CIRC technique. The findings showed a meaningful difference between the mean reading comprehension scores of the GI and CIRC experimental groups: the CIRC technique was more effective than the GI technique in enhancing the reading comprehension test scores of students.
International Nuclear Information System (INIS)
Burde, G.I.
2002-01-01
A new approach to the use of the Lie group technique for partial and ordinary differential equations dependent on a small parameter is developed. In addition to determining approximate solutions to the perturbed equation, the approach allows constructing integrable equations that have solutions with (partially) prescribed features. Examples of application of the approach to partial differential equations are given
International Nuclear Information System (INIS)
Rackham, Jamie; Weber, Anne-Laure; Chard, Patrick
2012-01-01
The first evaluation of NDA performance values was undertaken by the ESARDA Working Group for Standards and Non Destructive Assay Techniques and was published in 1993. Almost ten years later in 2002 the Working Group reviewed those values and reported on improvements in performance values and new measurement techniques that had emerged since the original assessment. The 2002 evaluation of NDA performance values did not include waste measurements (although these had been incorporated into the 1993 exercise), because although the same measurement techniques are generally applied, the performance is significantly different compared to the assay of conventional Safeguarded special nuclear material. It was therefore considered more appropriate to perform a separate evaluation of performance values for waste assay. Waste assay is becoming increasingly important within the Safeguards community, particularly since the implementation of the Additional Protocol, which calls for declaration of plutonium and HEU bearing waste in addition to information on existing declared material or facilities. Improvements in the measurement performance in recent years, in particular the accuracy, mean that special nuclear materials can now be accounted for in wastes with greater certainty. This paper presents an evaluation of performance values for the NDA techniques in common usage for the assay of waste containing special nuclear material. The main topics covered by the document are: 1- Techniques for plutonium bearing solid wastes 2- Techniques for uranium bearing solid wastes 3 - Techniques for assay of fissile material in spent fuel wastes. Originally it was intended to include performance values for measurements of uranium and plutonium in liquid wastes; however, as no performance data for liquid waste measurements was obtained it was decided to exclude liquid wastes from this report. This issue of the performance values for waste assay has been evaluated and discussed by the ESARDA
Energy Technology Data Exchange (ETDEWEB)
Rackham, Jamie [Babcock International Group, Sellafield, Seascale, Cumbria, (United Kingdom); Weber, Anne-Laure [Institut de Radioprotection et de Surete Nucleaire Fontenay-Aux-Roses (France); Chard, Patrick [Canberra, Forss Business and Technology park, Thurso, Caithness (United Kingdom)
2012-12-15
Cho, Kathleen R; Cooper, Kumarasen; Croce, Sabrina; Djordevic, Bojana; Herrington, Simon; Howitt, Brooke; Hui, Pei; Ip, Philip; Koebel, Martin; Lax, Sigurd; Quade, Bradley J; Shaw, Patricia; Vidal, August; Yemelyanova, Anna; Clarke, Blaise; Hedrick Ellenson, Lora; Longacre, Teri A; Shih, Ie-Ming; McCluggage, W Glenn; Malpica, Anais; Oliva, Esther; Parkash, Vinita; Matias-Guiu, Xavier
2018-04-11
The aim of this article is to propose guidelines and recommendations in problematic areas in pathologic reporting of endometrial carcinoma (EC) regarding special techniques and ancillary studies. An organizing committee designed a comprehensive survey with different questions related to pathologic features, diagnosis, and prognosis of EC that was sent to all members of the International Society of Gynecological Pathologists. The special techniques/ancillary studies group received 4 different questions to be addressed. Five members of the group reviewed the literature and came up with recommendations and an accompanying text, which were discussed and agreed upon by all members of the group. Twelve different recommendations are made. They address the value of immunohistochemistry, ploidy, and molecular analysis for assessing prognosis in EC, the value of steroid hormone receptor analysis to predict response to hormone therapy, and parameters regarding the application of immunohistochemistry and molecular tests for assessing mismatch repair deficiency in EC.
Line group techniques in description of the structural phase transitions in some superconductors
Energy Technology Data Exchange (ETDEWEB)
Meszaros, C.; Bankuti, J. [Roland Eoetvoes Univ., Budapest (Hungary); Balint, A. [Univ. of Agricultural Sciences, Goedoello (Hungary)
1994-12-31
The main features of the theory of line groups and their irreducible representations are briefly discussed, as well as their most important applications. A new approach to the general symmetry analysis of modulated systems is presented. It is shown that the line group formalism can be a very effective tool in the examination of structural phase transitions in high-temperature superconductors. As an example, the material YBa2Cu3O7-x is discussed briefly.
Averaging in the presence of sliding errors
International Nuclear Information System (INIS)
Yost, G.P.
1991-08-01
In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
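The bias described here can be reproduced in a small numerical sketch. The error model below (each experiment's reported uncertainty is a fixed fraction of its own measured value) is an assumption chosen for illustration, not taken from the paper; the iteration then re-derives each error from the average itself, as the abstract proposes.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0
rel_err = 0.10  # assumed 10% fractional error, identical for every experiment
x = rng.normal(true_value, rel_err * true_value, size=10_000)

# Naive averaging: each experiment reports sigma_i = rel_err * x_i based on
# its own measured value, so inverse-variance weights favour downward
# fluctuations and bias the weighted mean low.
w_naive = 1.0 / (rel_err * x) ** 2
naive = np.sum(w_naive * x) / np.sum(w_naive)

# Bias-avoiding averaging: re-derive every sigma from the current average
# instead of from the individual measurement, iterating until stable.
mean = np.mean(x)
for _ in range(10):
    sigma = rel_err * mean * np.ones_like(x)  # errors at the central value
    w = 1.0 / sigma ** 2
    mean = np.sum(w * x) / np.sum(w)

print(naive, mean)  # the naive estimate lands visibly below 100
```

With identical fractional errors the corrected weights become uniform and the bias disappears; with experiment-dependent fractional errors the same iteration weights each measurement by its error evaluated at the central value rather than at its own fluctuation.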
Kane, Michael N.
2003-01-01
A role-play exercise about Alzheimer's disease was designed to teach group work with memory-impaired elders. Written comments from 26 social work students revealed four outcomes: demystifying practical knowledge, respect for diversity among memory-impaired individuals, increased awareness of elders' internal states, and awareness of the challenges…
DEFF Research Database (Denmark)
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
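The two approaches this abstract compares can be sketched generically: a chordal quaternion barycenter versus an iterative geodesic (Karcher) mean on the rotation group. This is a standard illustration under assumed quaternion conventions, not the paper's own algorithm.

```python
import numpy as np

# Quaternions stored as arrays [w, x, y, z].

def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def qlog(q):
    # tangent 3-vector of a unit quaternion (axis times half-angle)
    w = np.clip(q[0], -1.0, 1.0)
    n = np.linalg.norm(q[1:])
    return np.zeros(3) if n < 1e-12 else q[1:] / n * np.arccos(w)

def qexp(u):
    t = np.linalg.norm(u)
    if t < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate(([np.cos(t)], np.sin(t) * u / t))

def barycenter(qs):
    # naive chordal mean: normalise the sign-aligned arithmetic mean
    ref = qs[0]
    acc = sum(q if np.dot(q, ref) >= 0 else -q for q in qs)
    return acc / np.linalg.norm(acc)

def karcher_mean(qs, iters=50):
    # Riemannian (geodesic) mean: average in the tangent space at the
    # current estimate, then move along the resulting geodesic step.
    m = qs[0].copy()
    for _ in range(iters):
        errs = [qlog(qmul(qconj(m), q if np.dot(q, m) >= 0 else -q))
                for q in qs]
        step = np.mean(errs, axis=0)
        if np.linalg.norm(step) < 1e-12:
            break
        m = qmul(m, qexp(step))
    return m / np.linalg.norm(m)

# rotations of 0.0, 0.2 and 0.4 rad about the z-axis
angles = [0.0, 0.2, 0.4]
qs = [np.array([np.cos(a / 2), 0.0, 0.0, np.sin(a / 2)]) for a in angles]
angle_b = 2 * np.arctan2(barycenter(qs)[3], barycenter(qs)[0])
angle_k = 2 * np.arctan2(karcher_mean(qs)[3], karcher_mean(qs)[0])
print(angle_b, angle_k)  # both recover the middle rotation of 0.2 rad
```

For rotations about a common axis the two estimates coincide; the discrepancy grows with the spread of the rotations, which is where the Riemannian corrections discussed in the abstract matter.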
Bhagwat, Swarupa Nikhil; Sharma, Jayashree H; Jose, Julie; Modi, Charusmita J
2015-01-01
Routine immunohematological tests can be performed by automated as well as manual techniques, each with inherent advantages and disadvantages. The present study aims to compare the results of manual and automated techniques for blood grouping and crossmatching so as to validate the automated system effectively. A total of 1000 samples were subjected to blood grouping by the conventional tube technique (CTT) and the automated microplate LYRA system on the Techno TwinStation. A total of 269 samples (multitransfused patients and multigravida females) were compared for 927 crossmatches by the CTT in the indirect antiglobulin phase against the column agglutination technique (CAT) performed on the Techno TwinStation. For blood grouping, the study showed concordant results for 942/1000 samples (94.2%), discordant results for 4/1000 (0.4%), and uninterpretable results for 54/1000 (5.4%). On resolution, the uninterpretable results reduced to 49/1000 samples (4.9%), with 951/1000 samples (95.1%) showing concordant results. For crossmatching, the automated CAT showed concordant results in 887/927 (95.6%) and discordant results in 3/927 (0.32%) crossmatches as compared to the CTT. A total of 37/927 (3.9%) crossmatches were not interpretable by the automated technique. The automated system shows a high concordance of results with the CTT and hence can be brought into routine use. However, the high proportion of uninterpretable results emphasizes that proper training and standardization are needed prior to its use.
Requirements for effective academic leadership in Iran: A Nominal Group Technique exercise
Bikmoradi, Ali; Brommels, Mats; Shoghli, Alireza; Sohrabi, Zohreh; Masiello, Italo
2008-01-01
Background: During the last two decades, medical education in Iran has shifted from elite to mass education, with a considerable increase in the number of schools, faculties, and programs. Because of this transformation, it now offers a good case for exploring academic leadership in a non-western country. The objective of this study was to explore the views on effective academic leadership requirements held by key informants in Iran's medical education system. Methods: A nominal group study was c...
Directory of Open Access Journals (Sweden)
Melissa Edwards
2013-08-01
This paper reports on an exploratory action research study designed to understand how grassroots community organisations engage in the measurement and reporting of social impact and how they demonstrate their social impact to local government funders. Our findings suggest that the relationships between small non-profit organisations, the communities they serve or represent, and their funders are increasingly driven from the top down by formalised practices. Volunteer-run grassroots organisations can be marginalised in this process: members may lack awareness of funders' strategic approaches, or the formalised auditing and control requirements of funders may mean that grassroots organisations lose the capacity to define their own programs and projects. We conclude that, to help counter this trend, tools and techniques which open up possibilities for dialogue between those holding power and those seeking support are essential.
International Nuclear Information System (INIS)
Ichiguchi, Katsuji
1998-01-01
A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)
Determining average yarding distance.
Roger H. Twito; Charles N. Mann
1979-01-01
Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...
Watson, Jane; Chick, Helen
2012-01-01
This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…
Averaging operations on matrices
Indian Academy of Sciences (India)
2014-07-03
Role of positive definite matrices: in diffusion tensor imaging, 3 × 3 pd matrices model water flow at each voxel of a brain scan; in elasticity, 6 × 6 pd matrices model stress tensors; in machine learning, n × n pd matrices occur as kernel matrices. (Tanvi Jain, Averaging operations on matrices.)
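A standard averaging operation on positive definite matrices is the geometric mean A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}. The sketch below illustrates the general topic under that textbook definition; it is not code from this talk.

```python
import numpy as np

def sqrtm_pd(m):
    # principal square root of a symmetric positive definite matrix
    vals, vecs = np.linalg.eigh(m)
    return vecs @ np.diag(np.sqrt(vals)) @ vecs.T

def geometric_mean(a, b):
    # A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2)
    ra = sqrtm_pd(a)
    ra_inv = np.linalg.inv(ra)
    return ra @ sqrtm_pd(ra_inv @ b @ ra_inv) @ ra

a = np.array([[4.0, 0.0], [0.0, 1.0]])
b = np.array([[1.0, 0.0], [0.0, 9.0]])
g = geometric_mean(a, b)  # commuting case: entrywise sqrt of the product
print(g)
```

Unlike the arithmetic mean, this mean respects the geometry of the positive definite cone: for instance, the geometric mean of A and A^{-1} is the identity.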
Directory of Open Access Journals (Sweden)
Patricia Bouyer
2015-09-01
Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
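The average-energy objective can be made concrete with a toy simulation: fix a cyclic (memoryless) strategy, accumulate its energy weights, and take the long-run average of the accumulated energy. The weights below are invented for illustration; the game-theoretic machinery itself is not reproduced.

```python
def average_energy(cycle_weights, steps=100_000):
    """Long-run average of accumulated energy under a repeating weight cycle."""
    energy = 0.0
    total = 0.0
    n = len(cycle_weights)
    for t in range(steps):
        energy += cycle_weights[t % n]  # energy gained or spent this step
        total += energy                 # accumulate for the time average
    return total / steps

# Repeating +2 / -2: the accumulated energy alternates 2, 0, 2, 0, ...
avg = average_energy([2.0, -2.0])
print(avg)
```

Note that a cycle whose weights do not sum to zero makes the accumulated energy drift linearly, so its long-run average diverges; only zero-total cycles give a finite average energy.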
Amino acids analysis using grouping and parceling of neutrons cross sections techniques
International Nuclear Information System (INIS)
Voi, Dante Luiz Voi; Rocha, Helio Fenandes da
2002-01-01
Amino acids used in parenteral administration to hospital patients, of special importance in nutritional applications, were analyzed for comparison with the manufacturer's data. Individual amino acid samples of phenylalanine, cysteine, methionine, tyrosine and threonine were measured with the neutron crystal spectrometer installed at the J-9 irradiation channel of the 1 kW Argonaut Reactor of the Instituto de Engenharia Nuclear (IEN). High-purity gold and D2O samples were used for calibration of the experimental system. Neutron cross section values were calculated from chemical composition, conformation and molecular structure analysis of the materials. Literature data were manipulated by parceling and grouping neutron cross sections. (author)
Oliveira, M R; Schwartz, I; Costa, L S; Maia, H; Ribeiro, M; Guerreiro, L B; Acosta, A; Rocha, N S
2018-01-15
To describe the perceptions of patients, their caregivers, and their healthcare providers during the development of a new specific instrument for assessment of the quality of life (QoL) in patients with mucopolysaccharidoses (MPS), using a qualitative focus group (FG) design. FGs were held in two Brazilian states (Rio Grande do Sul and Rio de Janeiro). Three versions of the new instrument were developed, each for a different age group: children (age 8-12 years), adolescents (age 13-17), and adults (age ≥ 18). The FGs mostly confirmed the relevance of items. All FGs unanimously agreed on the facets: School, Happiness, Life Prospects, Religiosity, Pain, Continuity of Treatment, Trust in Treatment, Relationship with Family, Relationship with Healthcare Providers, Acceptance, and Meaning of Life. The overall concept of QoL (as proposed by the WHO, World Health Organization) and its facets apply to this patient population. However, other specific facets, particularly concerning clinical manifestations and the reality of the disease, were suggested, confirming the need for the development of a specific QoL instrument for MPS.
International Nuclear Information System (INIS)
Dorsch, J.; Katsube, T.J.; Sanford, W.E.; Univ. of Tennessee, Knoxville, TN; Dugan, B.E.; Tourkow, L.M.
1996-04-01
Effective porosity (specifically referring to the interconnected pore space) was recently recognized as being essential in determining the effectiveness and extent of matrix diffusion as a transport mechanism within fractured low-permeability rock formations. The research presented in this report was performed to test the applicability of several petrophysical techniques for the determination of effective porosity of fine-grained siliciclastic rocks. In addition, the aim was to gather quantitative data on the effective porosity of Conasauga Group mudrock from the Oak Ridge Reservation (ORR). The quantitative data reported here include not only effective porosities based on diverse measurement techniques, but also data on the sizes of pore throats and their distribution, and specimen bulk and grain densities. The petrophysical techniques employed include the immersion-saturation method, mercury and helium porosimetry, and the radial diffusion-cell method
Gas-phase spectroscopic studies of heavy elements compounds (group V) using FTIR technique
International Nuclear Information System (INIS)
Allaf, A. W.; Ajji, Z.
1998-12-01
Antimony oxide trihalides, OSbX3 where X = F, Cl and Br, and antimony(III) oxychloride, OSbCl, were produced by an on-line process for the first time: antimony chloride SbCl3 was passed over heated silver oxide, and the obtained products were then passed over heated NaF or heated KBr to give OSbF3 and OSbBr3, respectively. The obtained OSbCl3 reacts with heated silver to produce OSbCl. The products have been characterized by the infrared spectra of their vapors. The low-resolution gas-phase Fourier transform infrared spectrum, reported for the first time, shows the most characteristic band of OSbX3 at 1272, 1217 and 1200 cm-1; these bands are assigned to the O=Sb stretching fundamental of OSbX3 where X = F, Cl and Br, respectively. The band at 1200 cm-1 needs more experimental investigation. The band at 924 cm-1 is assigned to the O=Sb stretching fundamental of the OSbCl molecule. This result is consistent with expectation and is shifted to lower frequency in comparison with the analogous arsenic molecule, which was investigated by the matrix-isolation technique. The work will be continued in order to cover bismuth and arsenic compounds of similar structure. (author)
Ross, Martin R.; Borman, Earle K.
1963-01-01
Ross, Martin R. (Connecticut State Department of Health, Hartford) and Earle K. Borman. Direct and indirect fluorescent-antibody techniques for the psittacosis-lymphogranuloma venereum-trachoma group of agents. J. Bacteriol. 85:851–858. 1963.—Direct and indirect fluorescent-antibody (FA) techniques were developed for the detection of group antigen in infected tissue cultures and the titration of group antibody in human antiserum. The growth of the agent of meningopneumonitis (MP) in mouse embryo lung cell monolayers was followed by infectivity and complement-fixing (CF) antigen titrations, and cytological examination of FA stained cultures. Although infectivity and CF antigen reached a peak at 2 days and remained constant for an additional 3 days, only cells tested 2 to 3 days after infection were suitable for FA staining with labeled anti-MP serum because of excessive artifacts in the older cultures. Fluorescein isothiocyanate-labeled rooster and guinea pig anti-MP serums and human antipsittacosis serums were titrated in direct FA and hemagglutination-inhibition (HI) tests. The rooster conjugate showed brighter staining and higher antibody titers than the guinea pig or human conjugates and was more effective in detecting minimal amounts of virus antigen. FA staining reactions with 1 and 2 units of labeled rooster serum were inhibited by unlabeled rooster serum but clear-cut inhibition with human antipsittacosis serum could not be demonstrated. The indirect FA technique was successfully used for the titration of group antibody in human serum. A comparison of the indirect FA, HI, and CF tests showed the indirect FA technique to be intermediate in sensitivity between the HI and CF tests. None of the three tests showed significant cross reactions with human serums reactive for influenza A and B; parainfluenza 1, 2, and 3; respiratory syncytial virus; Q fever; or the primary atypical pneumonia agent. PMID:14044954
A TECHNIQUE OF IDENTIFICATION OF THE PHASE-DISPLACEMENT GROUP OF THREE-PHASE TRANSFORMER
International Nuclear Information System (INIS)
Aburjania, A.; Begiashvili, V.; Rezan Turan
2007-01-01
It is demonstrated that the arbitrary choice of the positive direction of induced currents and voltages contradicts the energy conservation law and leads to equilibrium equations and standards making no sense from the physical standpoint. Of the 12 recognized standard phase-displacement groups of three-phase transformers, only three have a real physical basis. The rest are based on a wrong assumption about mutual biasing of primary and secondary currents. They do not rule out the occurrence of emergency situations and, thus, must be eliminated from use. A new method of identification of the phase-displacement group of a three-phase transformer is proposed. The method is based on well-known physical laws with consideration for the dual character of the inertia of mutual inductance and exhausts all possible versions of connection of transformer windings. (author)
DEFF Research Database (Denmark)
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong … approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
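The barycenter approach the abstract critiques can be sketched in a few lines: quaternions are averaged componentwise and the result is projected back onto the unit sphere. This is a minimal illustration of the naive Euclidean mean (not the paper's Riemannian algorithm); the sign alignment handles the fact that q and -q encode the same rotation.

```python
import numpy as np

def average_quaternions(quats):
    """Barycenter-style average of unit quaternions (w, x, y, z).

    The 'naive' Euclidean mean: average componentwise, then re-normalize
    back onto the unit sphere. Each quaternion is first sign-aligned with
    the first one, since q and -q represent the same rotation.
    """
    quats = np.asarray(quats, dtype=float)
    ref = quats[0]
    aligned = np.where((quats @ ref)[:, None] < 0.0, -quats, quats)
    mean = aligned.mean(axis=0)
    return mean / np.linalg.norm(mean)  # projection back to unit norm

# Small rotations about the z-axis: +10 and -10 degrees.
def quat_z(deg):
    half = np.radians(deg) / 2.0
    return np.array([np.cos(half), 0.0, 0.0, np.sin(half)])

q_mean = average_quaternions([quat_z(10.0), quat_z(-10.0)])
# For small, nearby rotations the barycenter closely approximates the
# Riemannian mean; here it recovers the identity rotation.
```

For widely spread rotations the barycenter deviates from the true Riemannian mean, which is the point the paper quantifies.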
Robinson, Victoria A; Hunter, Duncan; Shortt, Samuel E D
2003-01-01
Little attention has been paid to the need for accountability instruments applicable across all health units in the public health system. One tool, the balanced scorecard, was created for industry and has been successfully adapted for use in Ontario hospitals. It consists of 4 quadrants: financial performance, outcomes, customer satisfaction and organizational development. The aim of the present study was to determine if a modified nominal group technique could be used to reach consensus among public health unit staff and public health specialists in Ontario about the components of a balanced scorecard for public health units. A modified nominal group technique consensus method was used with public health unit staff in 6 Eastern Ontario health units (n=65) and public health specialists (n=18). 73.8% of the public health unit personnel from all six health units in the eastern Ontario region participated in the survey of potential indicators. A total of 74 indicators were identified across the 4 quadrants: program performance (n=44); financial performance (n=11); public perceptions (n=11); and organizational performance (n=8). The modified nominal group technique was a successful method of incorporating the views of public health personnel and specialists in the development of a balanced scorecard for public health units.
International Nuclear Information System (INIS)
Magalhaes, A.C.N. de.
1982-01-01
By using real-space renormalization group methods, bond percolation on d-dimensional hypercubic (d = 2, 3, 4), first- and second-neighbour isotropic square, anisotropic square and 'inhomogeneous' 4-8 lattices is studied. Through some extrapolation methods, critical points and/or frontiers are obtained (as well as the critical exponent ν_p in the isotropic cases) for these lattices, which either agree well with other available results or are new as far as is known (first- and second-neighbour isotropic square and 'inhomogeneous' 4-8 lattices). A conjecture concerning approximate (eventually exact) critical points and, in certain situations, critical frontiers of q-state Potts ferromagnets on d-dimensional lattices (d > 1) is formulated. This conjecture is verified within good accuracy for all the lattices whose critical points are known, and it allows the prediction of a great number of new results, some of them believed to be exact. Within a real-space renormalization group framework, accurate approximations for the critical frontiers associated with the quenched bond-diluted first-neighbour spin-1/2 Ising ferromagnet on triangular and honeycomb lattices are calculated. The best numerical proposals lead, in both the pure bond percolation (p = p_c) and pure Ising (p = 1) limits, to the exact critical points and (dt_0/dp) at p = p_c (where t_0 ≡ tanh(J/k_B T)), and to a 0.15% (0.96%) error in (dt_0/dp) at p = 1 for the triangular (honeycomb) lattice; for p_c < p < 1, an error in t_0 (for fixed p) of 0.27% (0.14%) is estimated for the triangular (honeycomb) lattice. It is exhibited, for many star-triangle graph pairs with any number of terminals and different sizes, that the exact q = 1, 2, 3, 4 critical points of Potts ferromagnets can, all of them, be obtained from any one of such graph pairs. (Author) [pt]
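The real-space renormalization group idea above can be illustrated with the standard textbook five-bond Wheatstone-bridge cell for square-lattice bond percolation (an illustrative cell, not the specific cells of the paper): the cell's spanning probability R(p) defines the flow, its nontrivial fixed point reproduces the exact threshold p_c = 1/2, and linearizing the flow at the fixed point yields the exponent ν = ln b / ln R'(p*) with rescaling factor b = 2.

```python
import math

# Real-space RG toy for square-lattice bond percolation using the
# classic five-bond Wheatstone-bridge cell (rescaling factor b = 2).
def R(p):
    # Probability that the renormalized bond (the cell) conducts.
    return 2*p**2 + 2*p**3 - 5*p**4 + 2*p**5

# Nontrivial fixed point R(p*) = p*, located by bisection: below p*
# the flow runs to 0 (no percolation), above it to 1 (percolation).
lo, hi = 0.01, 0.99
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if R(mid) < mid:
        lo = mid
    else:
        hi = mid
p_star = 0.5 * (lo + hi)

# Correlation-length exponent from the linearized flow at the fixed point.
eps = 1e-6
R_prime = (R(p_star + eps) - R(p_star - eps)) / (2 * eps)
nu = math.log(2) / math.log(R_prime)

print(p_star)  # -> 0.5, the exact square-lattice bond threshold
print(nu)      # -> ~1.43, close to the exact 2D value 4/3
```

The fixed point happens to be exact for this cell; the approximate exponent (R'(1/2) = 13/8, so ν ≈ 1.428) illustrates the kind of accuracy small-cell RG delivers.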
Requirements for effective academic leadership in Iran: A Nominal Group Technique exercise
Directory of Open Access Journals (Sweden)
Shoghli Alireza
2008-04-01
Full Text Available Abstract Background During the last two decades, medical education in Iran has shifted from elite to mass education, with a considerable increase in number of schools, faculties, and programs. Because of this transformation, it is a good case now to explore academic leadership in a non-western country. The objective of this study was to explore the views on effective academic leadership requirements held by key informants in Iran's medical education system. Methods A nominal group study was conducted by strategic sampling in which participants were requested to discuss and report on requirements for academic leadership, suggestions and barriers. Written notes from the discussions were transcribed and subjected to content analysis. Results Six themes of effective academic leadership emerged: 1) shared vision, goal, and strategy, 2) teaching and research leadership, 3) fair and efficient management, 4) mutual trust and respect, 5) development and recognition, and 6) transformational leadership. Current Iranian academic leadership suffers from lack of meritocracy, conservative leaders, politicization, bureaucracy, and belief in misconceptions. Conclusion The structure of the Iranian medical university system is not supportive of effective academic leadership. However, participants' views on effective academic leadership are in line with what is also found in the western literature, that is, if the managers could create the premises for a supportive and transformational leadership, they could generate mutual trust and respect in academia and increase scientific production.
Requirements for effective academic leadership in Iran: A Nominal Group Technique exercise
Bikmoradi, Ali; Brommels, Mats; Shoghli, Alireza; Sohrabi, Zohreh; Masiello, Italo
2008-01-01
Background During the last two decades, medical education in Iran has shifted from elite to mass education, with a considerable increase in number of schools, faculties, and programs. Because of this transformation, it is a good case now to explore academic leadership in a non-western country. The objective of this study was to explore the views on effective academic leadership requirements held by key informants in Iran's medical education system. Methods A nominal group study was conducted by strategic sampling in which participants were requested to discuss and report on requirements for academic leadership, suggestions and barriers. Written notes from the discussions were transcribed and subjected to content analysis. Results Six themes of effective academic leadership emerged: 1) shared vision, goal, and strategy, 2) teaching and research leadership, 3) fair and efficient management, 4) mutual trust and respect, 5) development and recognition, and 6) transformational leadership. Current Iranian academic leadership suffers from lack of meritocracy, conservative leaders, politicization, bureaucracy, and belief in misconceptions. Conclusion The structure of the Iranian medical university system is not supportive of effective academic leadership. However, participants' views on effective academic leadership are in line with what is also found in the western literature, that is, if the managers could create the premises for a supportive and transformational leadership, they could generate mutual trust and respect in academia and increase scientific production. PMID:18430241
Study on techniques to use the comprehensive functions of human beings in a group
International Nuclear Information System (INIS)
Numano, Masayoshi; Matsuoka, Takeshi; Fukuto, Junji; Mitomo, Nobuo; Miyazaki, Keiko; Hirao, Yoshihiro; Ando, Hirotomo
1997-01-01
In an atomic power plant, it is necessary to provide the operators with sufficient information concerning the quantities of state in the plant. However, an atomic power plant tends to become a black box in its operation because of the systems automated to avoid human errors. Thus, the plant information needs to be provided in forms fitted to human cognitive functions. Here, an investigation was made focusing on the feedback of operation results and the presentation of the plant states in a PWR plant model. In addition, exchange and sharing of information as well as role assignment in operational support among groups were investigated using duty officers on the bridge of a modern coastwise tanker as the subjects. To prevent erroneous manipulation during intelligent work in plant operation, a model room for virtual reality experiments was constructed. Then, an input system for sensory feedback and its plant model were proposed to examine the validity of the feedback inputs. (M.N.)
Sadi, Maryam
2018-01-01
In this study, a group method of data handling (GMDH) model has been successfully developed to predict the heat capacity of ionic-liquid-based nanofluids, with the reduced temperature, acentric factor and molecular weight of the ionic liquids, and the nanoparticle concentration as input parameters. To accomplish the modeling, 528 experimental data points extracted from the literature were divided into training and testing subsets. The training set was used to determine the model coefficients and the testing set was applied for model validation. The ability and accuracy of the developed model were evaluated by comparing model predictions with experimental values using different statistical parameters such as the coefficient of determination, mean square error and mean absolute percentage error. The mean absolute percentage errors of the developed model for the training and testing sets are 1.38% and 1.66%, respectively, which indicates excellent agreement between model predictions and experimental data. Also, the results estimated by the developed GMDH model exhibit higher accuracy when compared with the available theoretical correlations.
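The building block of a GMDH network is a quadratic two-input polynomial neuron fitted by least squares. The sketch below fits one such neuron on synthetic stand-in data (hypothetical values, not the paper's 528-point ionic-liquid dataset) and reports the mean absolute percentage error on a held-out split, mirroring the train/test methodology described.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: two inputs, smooth target (NOT the paper's data).
x1 = rng.uniform(0.5, 1.0, 200)   # e.g. a reduced temperature
x2 = rng.uniform(0.1, 0.9, 200)   # e.g. a nanoparticle concentration
y = 3.0 + 2.0*x1 + 1.5*x2 + 0.8*x1*x2 + rng.normal(0.0, 0.01, 200)

# GMDH polynomial neuron:
# y ~ a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1^2 + a5*x2^2
def design(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])

# Train/test split, as in the paper's methodology.
train, test = slice(0, 150), slice(150, 200)
coef, *_ = np.linalg.lstsq(design(x1[train], x2[train]), y[train], rcond=None)
pred = design(x1[test], x2[test]) @ coef

mape = 100.0 * np.mean(np.abs((y[test] - pred) / y[test]))
print(f"test MAPE: {mape:.3f}%")  # small, since the target is nearly quadratic
```

A full GMDH model stacks layers of such neurons and keeps only the best-performing pairs at each layer; this sketch shows a single neuron only.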
Muscle Torque and its Relation to Technique, Tactics, Sports Level and Age Group in Judo Contestants
Lech, Grzegorz; Chwała, Wiesław; Ambroży, Tadeusz; Sterkowicz, Stanisław
2015-01-01
The aim of this study was to perform a comparative analysis of maximal muscle torques at individual stages of development of athletes and to determine the relationship between muscle torques, fighting methods and the level of sports performance. The activity of 25 judo contestants during judo combats and the effectiveness of actions were evaluated. Maximum muscle torques in flexors/extensors of the body trunk, shoulder, elbow, hip and knee joints were measured. The level of significance was set at p≤0.05; for multiple comparisons the Mann-Whitney U test, p≤0.016, was used. Intergroup differences in relative torques in five muscle groups studied (elbow extensors, shoulder flexors, knee flexors, knee extensors, hip flexors) were not significant. In cadets, relative maximum muscle torques in hip extensors correlated with the activity index (Spearman’s r=0.756). In juniors, maximum relative torques in elbow flexors and knee flexors correlated with the activity index (r=0.73 and r=0.76, respectively). The effectiveness of actions correlated with relative maximum torque in elbow extensors (r=0.67). In seniors, the relative maximum muscle torque in shoulder flexors correlated with the activity index during the second part of the combat (r=0.821). PMID:25964820
Lech, Grzegorz; Chwała, Wiesław; Ambroży, Tadeusz; Sterkowicz, Stanisław
2015-03-29
The aim of this study was to perform a comparative analysis of maximal muscle torques at individual stages of development of athletes and to determine the relationship between muscle torques, fighting methods and the level of sports performance. The activity of 25 judo contestants during judo combats and the effectiveness of actions were evaluated. Maximum muscle torques in flexors/extensors of the body trunk, shoulder, elbow, hip and knee joints were measured. The level of significance was set at p≤0.05; for multiple comparisons the Mann-Whitney U test, p≤0.016, was used. Intergroup differences in relative torques in five muscle groups studied (elbow extensors, shoulder flexors, knee flexors, knee extensors, hip flexors) were not significant. In cadets, relative maximum muscle torques in hip extensors correlated with the activity index (Spearman's r=0.756). In juniors, maximum relative torques in elbow flexors and knee flexors correlated with the activity index (r=0.73 and r=0.76, respectively). The effectiveness of actions correlated with relative maximum torque in elbow extensors (r=0.67). In seniors, the relative maximum muscle torque in shoulder flexors correlated with the activity index during the second part of the combat (r=0.821).
Risks identification and ranking using AHP and group decision making technique: Presenting “R index”
Directory of Open Access Journals (Sweden)
Safar Fazli
2013-02-01
Full Text Available One of the primary concerns in project development is to detect all sorts of risks associated with a particular project. The main objective of this article is to identify the risks in construction projects and to grade them based on their importance to the project. The indicator designed in this paper is a combinational model of the Analytic Hierarchy Process (AHP) method and group decision-making, applied to risk measurement and ranking. This indicator, called "R", involves three main steps: creating the risk breakdown structure (RBS), obtaining each risk's weight and efficacy, and finally running the model to rank the risks. A questionnaire was used for gathering data. Based on the results of this survey, there are important risks associated with construction projects, and guidelines are needed to reduce them, including recognition of the common risks besides the political risks; suggestion of a simple, understandable, and practical model; and use of plenty of experts' and specialists' opinions in the application step. After analyzing the data, the final result from applying the R index showed that the risk "economic changes / currency rate and inflation change" is the most important in the analysis. In other words, if this risk occurs, the project may face greater threats, and it is suggested that an organization concentrate its equipment, personnel, cost, and time on this risk more than on the others.
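The AHP weighting step underlying such a ranking can be sketched as follows: given a pairwise comparison matrix on Saaty's 1-9 scale, the priority weights are the principal eigenvector, and the consistency ratio checks that the judgments are coherent. The 3x3 matrix below is purely illustrative, not taken from the study.

```python
import numpy as np

# Illustrative pairwise comparison matrix for three risks (Saaty 1-9 scale):
# A[i, j] = how much more important risk i is judged to be than risk j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights = principal (largest-eigenvalue) eigenvector, normalized.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio: CI = (lambda_max - n)/(n - 1); random index RI = 0.58
# for n = 3. CR < 0.1 is the usual acceptability threshold.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58

print(weights)  # risk 1 dominates; weights sum to 1
print(cr)       # well below 0.1: judgments are acceptably consistent
```

In a group decision setting, individual comparison matrices are typically aggregated (e.g. by elementwise geometric mean) before this eigenvector step.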
Eliazar, Iddo
2018-02-01
The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.
Average nuclear surface properties
International Nuclear Information System (INIS)
Groote, H. von.
1979-01-01
The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also to the case where there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The interaction parameters of this model were determined by a least-squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry to beyond the neutron-drip line, until the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)
Americans' Average Radiation Exposure
International Nuclear Information System (INIS)
2000-01-01
We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
Baumgart, Daniel C; le Claire, Marie
2016-01-01
Crohn's disease (CD) and ulcerative colitis (UC) challenge economies worldwide. Detailed health economic data on DRG-based academic inpatient care for inflammatory bowel disease (IBD) patients in Europe are unavailable. IBD was identified through the ICD-10 K50 and K51 code groups. We took an actual costing approach, compared expenditures to G-DRG and non-DRG proceeds, and performed detailed cost center and cost type accounting to identify coverage determinants. Of all 3093 hospitalized cases at our department in 2012, 164 were CD and 157 UC inpatients. On average, they were 44.1 (CD 44.9, UC 43.3, vs. all 58) years old, stayed 10.1 (CD 11.8, UC 8.4, vs. all 8) days, carried 5.8 (CD 6.4, UC 5.2, vs. all 6.8) secondary diagnoses, received 7.4 (CD 7.7, UC 7, vs. all 6.2) procedures, had a higher cost weight (CD 2.8, UC 2.4, vs. all 1.6) and required more intense nursing. Their care was more costly (mean total cost: IBD 8477€, CD 9051€, UC 7903€, vs. all 5078€). However, expenditures were not fully recovered by DRG proceeds (means: IBD 7413€, CD 8441€, UC 6384€, vs. all 4758€). We discovered substantial disease-specific mismatches in cost centers and types and identified the medical ward personnel and materials budgets as the most imbalanced. Non-DRG proceeds were almost double (IBD 16.1% vs. all 8.2%) but did not balance the deficits in the total coverage analysis, which found medications (antimicrobials, biologics and blood products) and medical materials (mostly endoscopy items) to contribute most to the deficit. DRGs challenge sophisticated IBD care.
Delineation of facial archetypes by 3d averaging.
Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G
2004-10-01
The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while Rugle3 3D software and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups, European and Japanese, and from children with three genetic disorders (Williams syndrome, achondroplasia and Sotos syndrome) as well as a normal control group. The method consisted of averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face-averaging techniques, there was no warping or filling in of spaces by interpolation; however, the facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
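For depth maps registered on a common (x, y) grid, the z-coordinate averaging described above reduces to a pointwise mean across scans. The sketch below uses synthetic depth grids with hypothetical dimensions (not the study's scanner output) to show the step and its noise-reduction effect.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 14 registered facial depth maps on a common 64x64
# (x, y) grid -- the paper reports that as few as 14 faces suffice.
base = np.fromfunction(
    lambda i, j: np.exp(-((i - 32)**2 + (j - 32)**2) / 300.0), (64, 64))
scans = base[None, :, :] + rng.normal(0.0, 0.05, (14, 64, 64))

# Holistic average: mean of the corresponding depth (z) values, with no
# warping and no interpolation, as in the averaging step described.
archetype = scans.mean(axis=0)

# Averaging reduces per-pixel noise by roughly sqrt(14).
residual = np.abs(archetype - base).mean()
print(residual)  # much smaller than the per-scan noise level of 0.05
```

Real scans would first need rigid registration to the common grid; that step is assumed done here.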
DEFF Research Database (Denmark)
Makundi, E A; Manongi, R; Mushi, A K
2005-01-01
…in the list, implying that priorities should not only be focused on diseases, but should also include health services and socio-cultural issues. Indeed, methods which are easily understood and applied, and thus able to give results close to those provided by the burden-of-disease approaches, should be adopted… The patients/caregivers, women's group representatives, youth leaders, religious leaders and community leaders/elders constituted the principal subjects. Emphasis was on providing qualitative data, which are of vital consideration in multi-disciplinary oriented studies, and not on quantitative information from… It is the provision of ownership of the derived health priorities to partners, including the community, that enhances research utilization of the end results. In addition to disease-based methods, the Nominal Group Technique is being proposed as an important research tool for involving the non-experts in priority…
Averaging for solitons with nonlinearity management
International Nuclear Information System (INIS)
Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.
2003-01-01
We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations
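The averaging idea can be illustrated on a toy linear ODE with a rapidly oscillating coefficient: replacing g(t/ε) by its period mean reproduces the full solution up to O(ε). This is a generic averaging demonstration under assumed toy dynamics, not the paper's nonlinear Schroedinger computation.

```python
import numpy as np

# Toy averaging demo: x'(t) = g(t/eps) * x(t) with the rapidly oscillating
# coefficient g(s) = 1 + cos(s). Its period average is g_bar = 1, so the
# averaged equation is simply x'(t) = x(t). (Illustrative only; the paper
# averages the nonlinearity coefficient of the NLS equation instead.)
eps = 0.01
T, dt = 1.0, 1e-5
t = np.arange(0.0, T, dt)

# Solutions via the exponential of the integrated coefficient.
x_full = np.exp(np.cumsum((1.0 + np.cos(t / eps)) * dt))[-1]
x_avg = np.exp(T)  # averaged equation: constant coefficient g_bar = 1

rel_err = abs(x_full - x_avg) / x_avg
print(rel_err)  # O(eps): the averaged dynamics track the full dynamics
```

Shrinking eps further shrinks the discrepancy proportionally, which is the content of the averaging theorem the technique relies on.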
Directory of Open Access Journals (Sweden)
Rollin McCraty
2017-10-01
…among group members. (2) Training in techniques to increase group coherence and heart rhythm synchronization will correlate with increased prosocial behaviors, such as kindness and cooperation among individuals, improved communication, and decreases in social discord and adversarial interactions. (3) Biomagnetic fields produced by the heart may be a primary mechanism in mediating HRV synchronization among group members. Data supporting each of the hypotheses are discussed.
Kline, Terence R.
2013-01-01
The intent of the project described was to apply the Nominal Group Technique (NGT) to achieve a consensus on Avian Influenza (AI) planning in Northeastern Ohio. The Nominal Group Technique is a process first developed by Delbecq, Van de Ven, and Gustafson (1975) to allow all participants to have an equal say in an open forum setting. A very diverse…
Cunningham, Sheila
2017-09-01
This paper discusses the use of the Nominal Group Technique (NGT) for European nursing exchange evaluation at one university. The NGT is a semi-quantitative evaluation method derived from the Delphi method popular in the 1970s and 1980s. The NGT was modified from the traditional version, retaining the structured cycles but adding a broader group discussion. The NGT had been used for 2 successive years but itself required analysis and evaluation for credibility and 'fit' for purpose, which is presented here. It aimed to explore nursing students' exchange experiences and to aid programme development for future exchanges and closure from exchange. Results varied for the cohorts, and students as participants engaged enthusiastically, generating ample data which they ranked and categorised collectively. Evaluation of the NGT itself was twofold: the programme team considered purpose, audience, inclusivity, context and expertise; secondly, students were asked for their thoughts using a graffiti board. Students avidly engaged with the NGT but, importantly, also reported an effect from the process itself as an opportunity to reflect on and share their experiences. The programme team concluded that the NGT offered a credible evaluation tool which made use of authentic student voice and offered interactive group processes. Pedagogically, it enabled active reflection, thus aiding reorientation back to the United Kingdom and awareness of the 'transformative' consequences of their exchange experiences. Copyright © 2017 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
M. Yousefi
Full Text Available This study presents agent-based simulation modeling of an emergency department. In a traditional approach, a supervisor (or a manager) allocates the resources (receptionists, nurses, doctors, etc.) to different sections based on personal experience or by using decision-support tools. In this study, each staff agent took part in the process of allocating resources based on its observations in its respective section, which gave the system the advantage of utilizing all the available human resources during the workday by allocating them to different sections. In this simulation, unlike previous studies, all staff agents took part in the decision-making process to re-allocate the resources in the emergency department. The simulation modeled the behavior of patients, receptionists, triage nurses, emergency room nurses and doctors. Patients were able to decide whether to stay in the system or leave the department at any stage of treatment. In order to evaluate the performance of this approach, 6 different scenarios were introduced. In each scenario, various key performance indicators were investigated before and after applying the group decision-making. The outputs of each simulation were the number of deaths, the number of patients who left the emergency department without being attended, length of stay, waiting time and the total number of patients discharged from the emergency department. Applying the self-organizing approach in the simulation showed an average decrease of 12.7% in total waiting time and of 14.4% in the number of patients who left without being seen. The results also showed an average increase of 11.5% in the total number of patients discharged from the emergency department.
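The group decision-making step can be sketched as staff agents observing their own sections' queues and voting on where a floating resource should go, with the majority vote deciding. The section names, queue lengths, and voting rule below are entirely illustrative, not the study's model.

```python
# Minimal sketch of the group decision step: each staff agent observes the
# queue in its own section and votes on where a floating nurse should be
# re-allocated; the majority vote wins. Hypothetical names and rules only.
sections = {"reception": 3, "triage": 9, "treatment": 5}  # queue lengths

def vote(own_section, queues):
    # An agent votes for its own section if it is the most congested it
    # can observe, otherwise for the globally busiest section.
    busiest = max(queues, key=queues.get)
    return own_section if queues[own_section] >= queues[busiest] else busiest

votes = [vote(s, sections) for s in sections]
winner = max(set(votes), key=votes.count)
print(winner)  # -> 'triage': the most congested section gets the resource
```

A full agent-based model would wrap this decision rule inside a discrete-event loop with patient arrivals, treatment times, and reneging; only the voting mechanism is shown.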
Finkelstein, Marsha; Llanos, Imelda; Scheiman, Mitchell; Wagener, Sharon Gowdy
2014-01-01
Vision impairment is common in the first year after traumatic brain injury (TBI), including among service members whose brain injuries occurred during deployment in Iraq and Afghanistan. Occupational therapy practitioners provide routine vision screening to inform treatment planning and referral to vision specialists, but existing methods are lacking because many tests were developed for children and do not screen for vision dysfunction typical of TBI. An expert panel was charged with specifying the composition of a vision screening protocol for servicemembers with TBI. A modified nominal group technique fostered discussion and objective determinations of consensus. After considering 29 vision tests, the panel recommended a nine-test vision screening that examines functional performance, self-reported problems, far–near acuity, reading, accommodation, convergence, eye alignment and binocular vision, saccades, pursuits, and visual fields. Research is needed to develop reliable, valid, and clinically feasible vision screening protocols to identify TBI-related vision disorders in adults. PMID:25005505
Directory of Open Access Journals (Sweden)
BO AN LEE
2014-02-01
Full Text Available An electrical resistance tomography (ERT) technique combining the particle swarm optimization (PSO) algorithm with the Gauss-Newton method is applied to the visualization of two-phase flows. In the ERT, the electrical conductivity distribution, namely the conductivity values of pixels (numerical meshes) comprising the domain in the context of a numerical image reconstruction algorithm, is estimated with the known injected currents through the electrodes attached on the domain boundary and the measured potentials on those electrodes. In spite of many favorable characteristics of ERT such as no radiation, low cost, and high temporal resolution compared to other tomography techniques, one of the major drawbacks of ERT is low spatial resolution due to the inherent ill-posedness of conventional image reconstruction algorithms. In fact, the number of known data is much less than that of the unknowns (meshes). Recalling that binary mixtures like two-phase flows consist of only two substances with distinct electrical conductivities, this work adopts the PSO algorithm for mesh grouping to reduce the number of unknowns. In order to verify the enhanced performance of the proposed method, several numerical tests are performed. The comparison between the proposed algorithm and conventional Gauss-Newton method shows significant improvements in the quality of reconstructed images.
International Nuclear Information System (INIS)
Lee, Bo An; Kim, Bong Seok; Ko, Min Seok; Kim, Kyung Young; Kim, Sin
2014-01-01
An electrical resistance tomography (ERT) technique combining the particle swarm optimization (PSO) algorithm with the Gauss-Newton method is applied to the visualization of two-phase flows. In the ERT, the electrical conductivity distribution, namely the conductivity values of pixels (numerical meshes) comprising the domain in the context of a numerical image reconstruction algorithm, is estimated with the known injected currents through the electrodes attached on the domain boundary and the measured potentials on those electrodes. In spite of many favorable characteristics of ERT such as no radiation, low cost, and high temporal resolution compared to other tomography techniques, one of the major drawbacks of ERT is low spatial resolution due to the inherent ill-posedness of conventional image reconstruction algorithms. In fact, the number of known data is much less than that of the unknowns (meshes). Recalling that binary mixtures like two-phase flows consist of only two substances with distinct electrical conductivities, this work adopts the PSO algorithm for mesh grouping to reduce the number of unknowns. In order to verify the enhanced performance of the proposed method, several numerical tests are performed. The comparison between the proposed algorithm and conventional Gauss-Newton method shows significant improvements in the quality of reconstructed images
Energy Technology Data Exchange (ETDEWEB)
Lee, Bo An; Kim, Bong Seok; Ko, Min Seok; Kim, Kyung Young; Kim, Sin [Jeju National Univ., Jeju (Korea, Republic of)
2014-02-15
An electrical resistance tomography (ERT) technique combining the particle swarm optimization (PSO) algorithm with the Gauss-Newton method is applied to the visualization of two-phase flows. In the ERT, the electrical conductivity distribution, namely the conductivity values of pixels (numerical meshes) comprising the domain in the context of a numerical image reconstruction algorithm, is estimated with the known injected currents through the electrodes attached on the domain boundary and the measured potentials on those electrodes. In spite of many favorable characteristics of ERT such as no radiation, low cost, and high temporal resolution compared to other tomography techniques, one of the major drawbacks of ERT is low spatial resolution due to the inherent ill-posedness of conventional image reconstruction algorithms. In fact, the number of known data is much less than that of the unknowns (meshes). Recalling that binary mixtures like two-phase flows consist of only two substances with distinct electrical conductivities, this work adopts the PSO algorithm for mesh grouping to reduce the number of unknowns. In order to verify the enhanced performance of the proposed method, several numerical tests are performed. The comparison between the proposed algorithm and conventional Gauss-Newton method shows significant improvements in the quality of reconstructed images.
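The ERT records above pair PSO-based mesh grouping with a Gauss-Newton solver. As a hedged illustration of the Gauss-Newton building block alone — a generic sketch, not the authors' ERT reconstruction code; the exponential test problem and function names are invented for the demo — a damped Gauss-Newton iteration for a small nonlinear least-squares fit looks like this:

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, iters=50, damping=1e-8):
    """Damped Gauss-Newton: p <- p - (J^T J + damping*I)^-1 J^T r."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = residual(p)
        J = jacobian(p)
        step = np.linalg.solve(J.T @ J + damping * np.eye(p.size), J.T @ r)
        p = p - step
    return p

# Toy problem: fit y = a * exp(b * x) to noiseless synthetic data.
x = np.linspace(0.0, 1.0, 20)
a_true, b_true = 2.0, -1.5
y = a_true * np.exp(b_true * x)

residual = lambda p: p[0] * np.exp(p[1] * x) - y

def jacobian(p):
    # Columns: dr/da = exp(b*x), dr/db = a * x * exp(b*x)
    e = np.exp(p[1] * x)
    return np.column_stack([e, p[0] * x * e])

p_hat = gauss_newton(residual, jacobian, p0=[1.0, 0.0])
```

In ERT the residual would be the mismatch between measured and simulated electrode potentials and the unknowns the (grouped) mesh conductivities; the small damping term plays the role of the regularization that the ill-posedness of the problem demands.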
Stapleton, Tadhg; Connelly, Deirdre
2010-01-01
Practice in the area of predriving assessment for people with stroke varies, and research findings are not always easily transferred into the clinical setting, particularly when such assessment is not conducted within a dedicated driver assessment programme. This article explores the clinical predriving assessment practices and recommendations of a group of Irish occupational therapists for people with stroke. A consensus meeting of occupational therapists was facilitated using a nominal group technique (NGT) to identify specific components of cognition, perception, and executive function that may influence fitness to return to driving and should be assessed prior to referral for on-road evaluation. Standardised assessments for use in predriving assessment were recommended. Thirteen occupational therapists participated, reaching consensus on cognitive components including speed of processing; perceptual components of spatial awareness, depth perception, and visual inattention; and executive components of planning, problem solving, judgment, and self-awareness. Consensus emerged for the use of the following standardised tests: Behavioural Assessment of Dysexecutive Syndrome (BADS), Test of Everyday Attention (TEA), Brain Injury Visual Assessment Battery for Adults (biVABA), Rivermead Perceptual Assessment Battery (RPAB), and Motor Free Visual Perceptual Test (MVPT). Tests were recommended that gave an indication of the patient's underlying component skills in the areas of cognition, perception, and executive function considered important for driving. Further research is needed in this area to develop clinical practice guidelines for occupational therapists for the assessment of fitness to return to driving after stroke.
Xu, Weiyi; Wan, Feng; Lou, Yufeng; Jin, Jiali; Mao, Weilin
2014-01-01
A number of automated devices for pretransfusion testing have recently become available. This study evaluated the Immucor Galileo System, a fully automated device based on the microplate hemagglutination technique, for ABO/Rh (D) determinations. Routine ABO/Rh typing tests were performed on 13,045 samples using the Immucor automated instruments. The manual tube method was used to resolve ABO forward and reverse grouping discrepancies. D-negative test results were investigated and confirmed manually by the indirect antiglobulin test (IAT). The system rejected 70 tests for sample inadequacy. Eighty-seven samples were read as "No-type-determined" due to forward and reverse grouping discrepancies. Twenty-five tests gave these results because of sample hemolysis. After further testing, we found 34 tests were caused by weakened RBC antibodies, 5 tests were attributable to weak A and/or B antigens, 4 tests were due to mixed-field reactions, and 8 tests had high-titer cold agglutinins, which react only at temperatures below 34 degrees C. In the remaining 11 cases, irregular RBC antibodies were identified in 9 samples (seven anti-M and two anti-P) and two subgroups were identified in 2 samples (one A1 and one A2) by a reference laboratory. As for D typing, 2 weak D+ samples missed by the automated system gave negative results, but weak-positive reactions were observed in the IAT. The Immucor Galileo System is reliable and well suited for ABO and D blood grouping, although several factors may cause discrepancies in ABO/D typing using a fully automated system. It is suggested that standardization of sample collection may improve the performance of the fully automated system.
Pajewski, Lara; Giannopoulos, Antonios; Sesnic, Silvestar; Randazzo, Andrea; Lambot, Sébastien; Benedetto, Francesco; Economou, Nikos
2017-04-01
This work aims at presenting the main results achieved by Working Group (WG) 3 "Electromagnetic methods for near-field scattering problems by buried structures; data processing techniques" of the COST (European COoperation in Science and Technology) Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" (www.GPRadar.eu, www.cost.eu). The main objective of the Action, started in April 2013 and ending in October 2017, is to exchange and increase scientific-technical knowledge and experience of Ground Penetrating Radar (GPR) techniques in civil engineering, whilst promoting in Europe the effective use of this safe non-destructive technique. The Action involves more than 150 Institutions from 28 COST Countries, a Cooperating State, 6 Near Neighbour Countries and 6 International Partner Countries. Among the most interesting achievements of WG3, we wish to mention the following ones: (i) A new open-source version of the finite-difference time-domain simulator gprMax was developed and released. The new gprMax is written in Python and includes many advanced features such as anisotropic and dispersive-material modelling, building of realistic heterogeneous objects with rough surfaces, built-in libraries of antenna models, optimisation of parameters based on Taguchi's method - and more. (ii) A new freeware CAD was developed and released, for the construction of two-dimensional gprMax models. This tool also includes scripts easing the execution of gprMax on multi-core machines or networks of computers and scripts for a basic plotting of gprMax results. (iii) A series of interesting freeware codes were developed and will be released by the end of the Action, implementing differential and integral forward-scattering methods, for the solution of simple electromagnetic scattering problems by buried objects. (iv) An open database of synthetic and experimental GPR radargrams was created, in cooperation with WG2. The idea behind this initiative is to give researchers the
Directory of Open Access Journals (Sweden)
Gang He
Full Text Available BACKGROUND: Relationships between the neighborhood environment and children's physical activity have been well documented in Western countries but are less investigated in ultra-dense Asian cities. The aim of this study was to identify the environmental facilitators and barriers of physical activity behaviors among Hong Kong Chinese children using nominal group technique. METHODS: Five nominal groups were conducted among 34 children aged 10-11 years from four types of neighborhoods varying in socio-economic status and walkability in Hong Kong. Environmental factors were generated by children in response to the question "What neighborhood environments do you think would increase or decrease your willingness to do physical activity?" Factors were prioritized in order of their importance to children's physical activity. RESULTS: Sixteen unique environmental factors, which were perceived as the most important to children's physical activity, were identified. Factors perceived as physical activity-facilitators included "Sufficient lighting", "Bridge or tunnel", "Few cars on roads", "Convenient transportation", "Subway station", "Recreation grounds", "Shopping malls with air conditioning", "Fresh air", "Interesting animals", and "Perfume shop". Factors perceived as physical activity-barriers included "People who make me feel unsafe", "Crimes nearby", "Afraid of being taken or hurt at night", "Hard to find toilet in shopping mall", "Too much noise", and "Too many people in recreation grounds". CONCLUSIONS: Specific physical activity-related environmental facilitators and barriers, which are unique in an ultra-dense city, were identified by Hong Kong children. These initial findings can inform future examinations of the physical activity-environment relationship among children in Hong Kong and similar Asian cities.
He, Gang; Cerin, Ester; Huang, Wendy Y; Wong, Stephen H
2014-01-01
Relationships between the neighborhood environment and children's physical activity have been well documented in Western countries but are less investigated in ultra-dense Asian cities. The aim of this study was to identify the environmental facilitators and barriers of physical activity behaviors among Hong Kong Chinese children using nominal group technique. Five nominal groups were conducted among 34 children aged 10-11 years from four types of neighborhoods varying in socio-economic status and walkability in Hong Kong. Environmental factors were generated by children in response to the question "What neighborhood environments do you think would increase or decrease your willingness to do physical activity?" Factors were prioritized in order of their importance to children's physical activity. Sixteen unique environmental factors, which were perceived as the most important to children's physical activity, were identified. Factors perceived as physical activity-facilitators included "Sufficient lighting", "Bridge or tunnel", "Few cars on roads", "Convenient transportation", "Subway station", "Recreation grounds", "Shopping malls with air conditioning", "Fresh air", "Interesting animals", and "Perfume shop". Factors perceived as physical activity-barriers included "People who make me feel unsafe", "Crimes nearby", "Afraid of being taken or hurt at night", "Hard to find toilet in shopping mall", "Too much noise", and "Too many people in recreation grounds". Specific physical activity-related environmental facilitators and barriers, which are unique in an ultra-dense city, were identified by Hong Kong children. These initial findings can inform future examinations of the physical activity-environment relationship among children in Hong Kong and similar Asian cities.
He, Gang; Cerin, Ester; Huang, Wendy Y.; Wong, Stephen H.
2014-01-01
Background Relationships between the neighborhood environment and children’s physical activity have been well documented in Western countries but are less investigated in ultra-dense Asian cities. The aim of this study was to identify the environmental facilitators and barriers of physical activity behaviors among Hong Kong Chinese children using nominal group technique. Methods Five nominal groups were conducted among 34 children aged 10–11 years from four types of neighborhoods varying in socio-economic status and walkability in Hong Kong. Environmental factors were generated by children in response to the question “What neighborhood environments do you think would increase or decrease your willingness to do physical activity?” Factors were prioritized in order of their importance to children’s physical activity. Results Sixteen unique environmental factors, which were perceived as the most important to children’s physical activity, were identified. Factors perceived as physical activity-facilitators included “Sufficient lighting”, “Bridge or tunnel”, “Few cars on roads”, “Convenient transportation”, “Subway station”, “Recreation grounds”, “Shopping malls with air conditioning”, “Fresh air”, “Interesting animals”, and “Perfume shop”. Factors perceived as physical activity-barriers included “People who make me feel unsafe”, “Crimes nearby”, “Afraid of being taken or hurt at night”, “Hard to find toilet in shopping mall”, “Too much noise”, and “Too many people in recreation grounds”. Conclusions Specific physical activity-related environmental facilitators and barriers, which are unique in an ultra-dense city, were identified by Hong Kong children. These initial findings can inform future examinations of the physical activity-environment relationship among children in Hong Kong and similar Asian cities. PMID:25187960
International Nuclear Information System (INIS)
Cosgrove, C.M.
1980-01-01
We investigate the precise interrelationships between several recently developed solution-generating techniques capable of generating asymptotically flat gravitational solutions with arbitrary multipole parameters. The transformations we study in detail here are the Lie groups Q and Q of Cosgrove, the Hoenselaers-Kinnersley-Xanthopoulos (HKX) transformations and their SL(2) tensor generalizations, the Neugebauer-Kramer discrete mapping, the Neugebauer Baecklund transformations I₁ and I₂, the Harrison Baecklund transformation, and the Belinsky-Zakharov (BZ) one- and two-soliton transformations. Two particular results, among many reported here, are that the BZ soliton transformations are essentially equivalent to Harrison transformations and that the generalized HKX transformation may be deduced as a confluent double soliton transformation. Explicit algebraic expressions are given for the transforms of the Kinnersley-Chitre generating functions under all of the above transformations. In less detail, we also study the Kinnersley-Chitre β transformations, the non-null HKX transformations, and the Hilbert problems proposed independently by Belinsky and Zakharov, and Hauser and Ernst. In conclusion, we describe the nature of the exact solutions constructible in a finite number of steps with the available methods.
The difference between alternative averages
Directory of Open Access Journals (Sweden)
James Vaupel
2012-09-01
Full Text Available BACKGROUND: Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE: We want to deepen understanding of how compositional change affects population averages. RESULTS: The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS: Other uses of covariances in formal demography are worth exploring.
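The identity stated in the abstract above — the difference between two weighted averages equals the covariance between the variable and the ratio of the weighting functions, divided by the average of that ratio (with covariance and averages taken under the first weighting) — can be checked numerically. A minimal sketch on synthetic data, not drawn from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)           # the variable (e.g., age-specific rates)
w1 = rng.uniform(0.5, 2.0, 1000)    # first weighting function
w2 = rng.uniform(0.5, 2.0, 1000)    # alternative weighting function

avg1 = np.sum(w1 * x) / np.sum(w1)  # average under w1
avg2 = np.sum(w2 * x) / np.sum(w2)  # average under w2

# Ratio of the weighting functions; mean and covariance taken under w1.
r = w2 / w1
mean_r = np.sum(w1 * r) / np.sum(w1)
cov_xr = np.sum(w1 * (x - avg1) * (r - mean_r)) / np.sum(w1)

# The identity: avg2 - avg1 = Cov_w1(x, r) / E_w1[r]
assert np.isclose(avg2 - avg1, cov_xr / mean_r, rtol=0, atol=1e-10)
```

The identity is exact (it follows algebraically from expanding the covariance), so the assertion holds up to floating-point rounding regardless of the data used.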
Directory of Open Access Journals (Sweden)
Verhelst Rita
2010-09-01
Full Text Available Abstract Background Streptococcus agalactiae (group B streptococcus; GBS) is a significant cause of perinatal and neonatal infections worldwide. To detect GBS colonization in pregnant women, the CDC recommends isolation of the bacterium from vaginal and anorectal swab samples by growth in a selective enrichment medium, such as Lim broth (Todd-Hewitt broth supplemented with selective antibiotics), followed by subculture on sheep blood agar. However, this procedure may require 48 h to complete. We compared different sampling and culture techniques for the detection of GBS. Methods A total of 300 swabs were taken from 100 pregnant women at 35-37 weeks of gestation. For each subject, one rectovaginal, one vaginal and one rectal ESwab were collected. Plating onto Columbia CNA agar (CNA), group B streptococcus differential agar (GBSDA; Granada Medium) and chromID Strepto B agar (CA), with and without Lim broth enrichment, was compared. The isolates were confirmed as S. agalactiae using the CAMP test on blood agar and by molecular identification with tDNA-PCR or by 16S rRNA gene sequence determination. Results The overall GBS colonization rate was 22%. GBS positivity for rectovaginal sampling (100%) was significantly higher than detection on the basis of vaginal sampling (50%), but not significantly higher than for rectal sampling (82%). Direct plating of the rectovaginal swab on CNA, GBSDA and CA resulted in detection of 59, 91 and 95% of the carriers, respectively, whereas subculturing of Lim broth yielded 77, 95 and 100% positivity, respectively. Lim broth enrichment enabled the detection of only one additional GBS-positive subject. There was no significant difference between GBSDA and CA, whereas both were more sensitive than CNA. Direct culture onto GBSDA or CA (91 and 95%) detected more carriers than Lim broth enrichment and subculture onto CNA (77%). One false negative isolate was observed on GBSDA, and three false positives on CA. Conclusions In
Singh, Jasvinder A; Qu, Haiyan; Yazdany, Jinoos; Chatham, Winn; Dall'era, Maria; Shewchuk, Richard M
2015-09-01
To assess the perspectives of women with lupus nephritis on barriers to medication decision making. We used the nominal group technique (NGT), a structured process to elicit ideas from participants, for a formative assessment. Eight NGT meetings were conducted in English and moderated by an expert NGT researcher at 2 medical centers. Participants responded to the question: "What sorts of things make it hard for people to decide to take the medicines that doctors prescribe for treating their lupus kidney disease?" Patients nominated, discussed, and prioritized barriers to decisional processes involving medications for treating lupus nephritis. Fifty-one women with lupus nephritis with a mean age of 40.6 ± 13.3 years and disease duration of 11.8 ± 8.3 years participated in 8 NGT meetings: 26 African Americans (4 panels), 13 Hispanics (2 panels), and 12 whites (2 panels). Of the participants, 36.5% had obtained at least a college degree and 55.8% needed some help in reading health materials. Of the 248 responses generated (range 19-37 responses/panel), 100 responses (40%) were perceived by patients as having relatively greater importance than other barriers in their own decision-making processes. The most salient perceived barriers, as indicated by percent-weighted votes assigned, were known/anticipated side effects (15.6%), medication expense/ability to afford medications (8.2%), and the fear that the medication could cause other diseases (7.8%). Women with lupus nephritis identified specific barriers to decisions related to medications. Information relevant to known/anticipated medication side effects and medication cost will form the basis of a patient guide for women with systemic lupus erythematosus, currently under development.
Tuffrey-Wijne, I; Wicki, M; Heslop, P; McCarron, M; Todd, S; Oliver, D; de Veer, A; Ahlström, G; Schäper, S; Hynes, G; O'Farrell, J; Adler, J; Riese, F; Curfs, L
2016-03-24
Empirical knowledge around palliative care provision and needs of people with intellectual disabilities is extremely limited, as is the availability of research resources, including expertise and funding. This paper describes a consultation process that sought to develop an agenda for research priorities for palliative care of people with intellectual disabilities in Europe. A two-day workshop was convened, attended by 16 academics and clinicians in the field of palliative care and intellectual disability from six European countries. The first day consisted of round-table presentations and discussions about the current state of the art, research challenges and knowledge gaps. The second day was focused on developing consensus research priorities with 12 of the workshop participants using nominal group technique, a structured method which involved generating a list of research priorities and ranking them in order of importance. A total of 40 research priorities were proposed and collapsed into eleven research themes. The four most important research themes were: investigating issues around end of life decision making; mapping the scale and scope of the issue; investigating the quality of palliative care for people with intellectual disabilities, including the challenges in achieving best practice; and developing outcome measures and instruments for palliative care of people with intellectual disabilities. The proposal of four major priority areas and a range of minor themes for future research in intellectual disability, death, dying and palliative care will help researchers to focus limited resources and research expertise on areas where it is most needed and support the building of collaborations. The next steps are to cross-validate these research priorities with people with intellectual disabilities, carers, clinicians, researchers and other stakeholders across Europe; to validate them with local and national policy makers to determine how they could best be
Improving consensus structure by eliminating averaging artifacts
Directory of Open Access Journals (Sweden)
KC Dukka B
2009-03-01
Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which
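The averaging artifact this record targets is easy to reproduce: naively averaging the 3D coordinates of two conformations that differ by a bond rotation shortens the bond. A minimal 2D illustration with an invented three-atom chain (this sketches the problem only, not the authors' Monte Carlo refinement):

```python
import numpy as np

theta = np.deg2rad(40.0)
# Two conformations of a three-atom chain A-B-C sharing a fixed A-B bond,
# with the unit-length B-C bond rotated up in one and down in the other.
conf1 = np.array([[0.0, 0.0], [1.0, 0.0], [1.0 + np.cos(theta),  np.sin(theta)]])
conf2 = np.array([[0.0, 0.0], [1.0, 0.0], [1.0 + np.cos(theta), -np.sin(theta)]])

bond = lambda c: np.linalg.norm(c[2] - c[1])   # B-C bond length

# Each conformation individually has a physically correct unit B-C bond...
assert np.isclose(bond(conf1), 1.0) and np.isclose(bond(conf2), 1.0)

# ...but the coordinate average collapses C onto the rotation axis,
# shrinking the bond to cos(theta) ~ 0.766: an unphysical geometry.
avg = (conf1 + conf2) / 2.0
shrunk = bond(avg)
```

A refinement step like the one described in the abstract keeps the representative structure near the coordinate average while restoring realistic local geometry.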
International Nuclear Information System (INIS)
1992-01-01
An Advisory Group Meeting convened by the IAEA in October 1992 made recommendations on the setting up of a Co-ordinated Research Programme (CRP) using nuclear and isotopic techniques for international comparative studies of osteoporosis. The proposed CRP will be implemented by the IAEA during the period 1993-1997. The main purpose of this programme is to undertake pilot studies of bone density in selected developing country populations for the purposes of (i) determining the age of peak bone mass in each study group, and (ii) quantifying differences in bone density as functions of the age and sex of persons in the study groups, as well as quantifying differences between the study groups in different countries. The preferred technique for bone density measurements in this study is DEXA (dual energy X-ray absorptiometry). Additional measurements of trace elements in bone (and possibly also teeth) are also foreseen using neutron activation analysis and other appropriate techniques.
International Nuclear Information System (INIS)
1986-01-01
The purpose of the FAO/IAEA advisory group meeting was to evaluate the nuclear and related techniques currently used to quantify such functions as animal adaptation, digestion and utilization of poor quality feedstuffs, reproductive efficiency and resistance to disease and other forms of stress. The recommendations made by the advisory group are grouped into five sections: reproduction, parasitic diseases, infectious diseases, environmental physiology and nutrition.
Comparison of Interpolation Methods as Applied to Time Synchronous Averaging
National Research Council Canada - National Science Library
Decker, Harry
1999-01-01
Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...
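Time synchronous averaging itself — the operation whose interpolation step the report above compares — resamples the vibration signal onto a uniform angular grid for each shaft revolution and averages the revolutions, so shaft-synchronous components reinforce while noise and non-synchronous components average out. A hedged numpy sketch (the `tsa` helper, tachometer timing, and signal parameters are invented for the demo; linear interpolation is only one of the schemes such comparisons consider, and constant speed within a revolution is assumed):

```python
import numpy as np

def tsa(signal, t, rev_times, samples_per_rev=256):
    """Linear-interpolation time synchronous average.

    signal, t  : vibration samples and their time stamps
    rev_times  : tachometer zero-crossing times marking revolution starts
    Returns the average of all complete revolutions, each resampled to a
    uniform angular grid of samples_per_rev points.
    """
    revs = []
    for t0, t1 in zip(rev_times[:-1], rev_times[1:]):
        # Uniform angular positions mapped back to time within this revolution.
        ti = np.linspace(t0, t1, samples_per_rev, endpoint=False)
        revs.append(np.interp(ti, t, signal))
    return np.mean(revs, axis=0)

# Demo: a shaft-synchronous tone buried in unit-variance noise.
fs, f_shaft, n_revs = 10_000.0, 25.0, 200
t = np.arange(int(fs * n_revs / f_shaft)) / fs
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * f_shaft * t) + rng.normal(0.0, 1.0, t.size)
rev_times = np.arange(n_revs + 1) / f_shaft   # ideal once-per-rev tach
avg = tsa(x, t, rev_times)                    # noise shrinks ~ 1/sqrt(200)
```

Replacing `np.interp` with a cubic or band-limited interpolator is exactly the kind of substitution whose effect on the averaged waveform and on downstream diagnostic parameters the cited report investigates.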
Lee, Linden O.; Bateman, Brian T.; Kheterpal, Sachin; Klumpner, Thomas T.; Housey, Michelle; Aziz, Michael F.; Hand, Karen W.; MacEachern, Mark; Goodier, Christopher G.; Bernstein, Jeffrey; Bauer, Melissa E.; Lirk, Philip; Wilczak, Janet; Soto, Roy; Tom, Simon; Cuff, Germaine; Biggs, Daniel A.; Coffman, Traci; Saager, Leif; Levy, Warren J.; Godbold, Michael; Pace, Nathan L.; Wethington, Kevin L.; Paganelli, William C.; Durieux, Marcel E.; Domino, Karen B.; Nair, Bala; Ehrenfeld, Jesse M.; Wanderer, Jonathan P.; Schonberger, Robert B.; Berris, Joshua; Lins, Steven; Coles, Peter; Cummings, Kenneth C.; Maheshwari, Kamal; Berman, Mitchell F.; Wedeven, Christopher; LaGorio, John; Fleishut, Peter M.; Ellis, Terri A.; Molina, Susan; Carl, Curtis; Kadry, Bassam; van Klei, Wilton A A; Pasma, Wietze; Jameson, Leslie C.; Helsten, Daniel L.; Avidan, Michael S.
BACKGROUND: Thrombocytopenia has been considered a relative or even absolute contraindication to neuraxial techniques due to the risk of epidural hematoma. There is limited literature to estimate the risk of epidural hematoma in thrombocytopenic parturients. The authors reviewed a large
How to average logarithmic retrievals?
Directory of Open Access Journals (Sweden)
B. Funke
2012-04-01
Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly from the fact that in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior; instead of suggesting simple recipes, we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
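The core bias the abstract above describes follows from Jensen's inequality: the exponential of a mean of logarithms (a geometric mean) is never larger than the arithmetic mean, so averaging log-retrieved abundances and then exponentiating understates the true mean when natural variability is large. A minimal numeric sketch, assuming idealized lognormal variability and ignoring retrieval noise and priors (not the paper's system simulator):

```python
import numpy as np

rng = np.random.default_rng(2)
# True abundances with large natural variability (lognormal, sigma = 1).
vmr = rng.lognormal(mean=np.log(100.0), sigma=1.0, size=100_000)

linear_mean = vmr.mean()                  # average the abundances directly
log_mean = np.exp(np.log(vmr).mean())     # average the logs, then exponentiate

# Jensen's inequality: the log-then-exp route understates the mean.
# For a lognormal the ratio tends to exp(sigma**2 / 2), about 1.65 here.
bias_ratio = linear_mean / log_mean
```

For small variability (sigma near zero) the two routes nearly coincide, which matches the abstract's rule of thumb that logarithmic averaging is more defensible when local natural variability is small.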
Chen, Shyi-Ming; Manalu, Gandhi Maruli Tua; Pan, Jeng-Shyang; Liu, Hsiang-Chuan
2013-06-01
In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and particle swarm optimization (PSO) techniques. First, we fuzzify the historical training data of the main factor and the secondary factor, respectively, to form two-factors second-order fuzzy logical relationships. Then, we group the two-factors second-order fuzzy logical relationships into two-factors second-order fuzzy-trend logical relationship groups. Then, we obtain the optimal weighting vector for each fuzzy-trend logical relationship group by using PSO techniques to perform the forecasting. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index and the NTD/USD exchange rates. The experimental results show that the proposed method gets better forecasting performance than the existing methods.
Long, Haiying
2012-01-01
As one of the most widely used creativity assessment tools, the Consensual Assessment Technique (CAT) has been praised as a valid tool to assess creativity. In Amabile's (1982) seminal work, the inter-rater reliability was defined as construct validity of the CAT. During the past three decades, researchers followed this definition and…
Poyatos Matas, Cristina; Ng, Chew; Muurlink, Olav
2011-01-01
First year accounting has generally been perceived as one of the more challenging first year business courses for university students. Various Classroom Assessment Techniques (CATs) have been proposed to attempt to enrich and enhance student learning, with these studies generally positioning students as learners alone. This paper uses an…
Isbir, Gozde Gokçe; Ozan, Yeter Durgun
2018-01-01
Nurses and midwives without sufficient knowledge of infertility are not likely to provide counseling and support for people suffering from infertility. This study aimed to evaluate nursing and midwifery students' experiences with the Course on Infertility and Assisted Reproductive Techniques. Our study had a qualitative descriptive design. The total number of participants was 75. The analysis revealed five primary themes and twenty-one sub-themes. The themes were (1) action, (2) learner-centered method, (3) interaction, (4) nursing competencies, and (5) evaluation. The active learning techniques enabled the students to retain the knowledge they obtained for a long time, contributed to social and cultural development, improved skills required for self-evaluation, communication and leadership, enhanced critical thinking skills, increased motivation and satisfaction, and helped with knowledge integration. Infertility is a biopsychosocial condition, and it may be difficult for students to understand what infertile individuals experience. The study revealed that active learning techniques enabled the students to acquire not only theoretical knowledge but also an emotional and psychosocial viewpoint and attitude regarding infertility. The content of an infertility course should be created in accordance with changes in the needs of a given society and educational techniques. Copyright © 2017 Elsevier Ltd. All rights reserved.
Averaging in spherically symmetric cosmology
International Nuclear Information System (INIS)
Coley, A. A.; Pelavas, N.
2007-01-01
The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form of the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.
Devendra, Jaya; Agarwal, Smita; Singh, Pankaj Kumar
2014-11-01
Low socio-economic group patients from rural areas often opt for free cataract surgeries offered by charitable organisations. SICS continues to be a time-tested technique for cataract removal in such patients. In recent times, camp patients are sometimes treated by clear corneal phacoemulsification with implantation of a rigid IOL, which, being more cost-effective, is often provided for camp patients. This study was undertaken to find out which surgical technique yielded better outcomes and was more suited for high-volume camp surgery: phacoemulsification with a rigid IOL or SICS, in poor patients from rural areas. A prospective randomised controlled trial of cataract patients operated by the two different techniques was conducted. One hundred and twelve eyes were selected and randomly allocated into two groups of 56 eyes each. At completion of the study, data were analysed for 52 eyes operated by clear corneal phacoemulsification with implantation of a rigid IOL, and 56 eyes operated by SICS. The unpaired t-test was used to calculate the p-value. The results were evaluated on the following criteria. The mean post-operative astigmatism at the end of four weeks was significantly higher in the phacoemulsification group than in the SICS group. The BCVA (best corrected visual acuity) at the end of four weeks was comparable in both groups. Subjective complaints and/or complications: in the phaco group two patients required sutures and seven had striate keratitis, while none did in the SICS group. Complaints of irritation were similar in both groups. Surgical time was less for the SICS group than for the phaco group. SICS, by virtue of being a faster surgery with a more secure wound and significantly less astigmatism, is a better option in camp patients from rural areas than phacoemulsification with a rigid IOL.
Averaging models: parameters estimation with the R-Average procedure
Directory of Open Access Journals (Sweden)
S. Noventa
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
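The averaging rule at the heart of these models can be sketched in a few lines. This is a minimal illustration of Anderson's averaging equation only, not the R-Average estimation procedure itself; the attribute weights and scale values below are made up for the example.

```python
def averaging_response(weights, scale_values, w0=0.0, s0=0.0):
    """Anderson's averaging model: R = (w0*s0 + sum(w_k*s_k)) / (w0 + sum(w_k)).
    w0 and s0 are the weight and value of the initial state."""
    num = w0 * s0 + sum(w * s for w, s in zip(weights, scale_values))
    den = w0 + sum(weights)
    return num / den

# Equal weights reduce to the plain mean of the scale values.
r_equal = averaging_response([1.0, 1.0], [4.0, 8.0])  # 6.0
# Upweighting one attribute pulls the response toward it without
# adding any interaction parameter.
r_skew = averaging_response([1.0, 3.0], [4.0, 8.0])   # 7.0
```

The second call shows how differential weighting alone produces interaction-like patterns, which is the property the abstract highlights.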
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20(th) century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
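The two computational ingredients of this analysis, a trailing moving average and a Pearson correlation, can be sketched directly. The series below are synthetic stand-ins, not the paper's misery indices; only the 11-year window is taken from the abstract.

```python
import random

def moving_average(series, window):
    """Trailing moving average: entry i averages series[i-window+1 .. i]."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

# Synthetic 'economic misery' series; a 'literary' index would then be
# correlated against its 11-year trailing average, aligned in time.
random.seed(0)
econ = [random.gauss(0.0, 1.0) for _ in range(120)]
lit = moving_average(econ, 11)
r = pearson(lit, econ[10:])
```

Scanning `window` over a range of lags and keeping the window with the largest correlation reproduces the "peak in goodness of fit" logic described above.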
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and do not permit analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
Aperture averaging in strong oceanic turbulence
Gökçe, Muhsin Caner; Baykal, Yahya
2018-04-01
The receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression that gives the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor in strong oceanic turbulence is also presented.
The U.S. German Bilateral Working Group originated in 1990 in order to share and transfer information, ideas, tools and techniques regarding environmental research. The U.S. Environmental Protection Agency (EPA)/Office of Research and Development (ORD) and the German Federal Mini...
Sclafane, Jamie Heather; Merves, Marni Loiacono; Rivera, Angelic; Long, Laura; Wilson, Ken; Bauman, Laurie J.
2012-01-01
The Turn the Tables Technique (T³) is an activity designed to provide group facilitators who lead HIV/STI prevention and sexual health promotion programs with detailed and current information on teenagers' sexual behaviors and beliefs. This information can be used throughout a program to tailor content. Included is a detailed lesson plan of…
Directory of Open Access Journals (Sweden)
Diana Schmidt-Pfister
2011-03-01
This edited volume comprises a range of studies that have employed a group discussion technique in combination with a specific strategy for reconstructive social research—the so-called documentary method. The latter is an empirical research strategy based on the meta-theoretical premises of the praxeological sociology of knowledge, as developed by Ralf BOHNSACK. It seeks to access practice in a more appropriate manner, namely by differentiating between various dimensions of knowledge and sociality. It holds that habitual collective orientations, in particular, are best accessed through group discussions. Thus this book does not address the group discussion technique in general, as might be expected from the title. Instead, it presents various contributions from researchers interpreting transcripts of group discussions according to the documentary method. The chapters are grouped into three main sections, representing different frameworks of practice and habitual orientation: childhood, adolescence, and organizational or societal context. A fourth section includes chapters on further, potentially useful ways of employing this particular technique and approach, as well as a chapter on teaching it in a meaningful way. Each chapter is structured in the same way: introduction to the research field and focus; methodological discussion; exemplary interpretation of group discussions; and concluding remarks. Whilst the transcripts referred to by the authors are very helpfully presented in the chapters, there is a lack of methodological reflection on the group discussion technique itself, which, as mentioned above, is only evaluated in regard to the documentary method. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs110225
Directory of Open Access Journals (Sweden)
Shahrokh Bidgani
2016-03-01
Conclusion: The frequency of GBS isolation by culture from rectal samples was higher than from vaginal samples. However, the detection percentage of GBS using PCR from vaginal samples was higher than from rectal samples. Culture is a time-consuming method, requiring at least 48 hours for full identification of GBS, whereas PCR is a sensitive and rapid technique for the detection of GBS, with results acquired within 3 hours.
DEFF Research Database (Denmark)
Hansen, Peter Møller
In this PhD project two newer ultrasound techniques are for the first time used for clinical scans of patients with malignant liver tumors (Study I), arteriovenous fistulas for hemodialysis (Study II) and arteriosclerotic femoral arteries (Study III). The same commercial ultrasound scanner was used in all three studies. Study I was a comparative study of B-mode ultrasound images obtained with conventional technique and the experimental technique Synthetic Aperture Sequential Beamforming (SASB). SASB is a data-reducing version of the synthetic aperture technique, which has the potential to produce … of the new ultrasound techniques in selected groups of patients. For all three studies the results are promising, and hopefully the techniques will find their way into everyday clinical practice for the benefit of both patients and healthcare practitioners.
Evaluations of average level spacings
International Nuclear Information System (INIS)
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example from ¹⁶⁸Er data. 19 figures, 2 tables
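The missed-level correction behind the truncated Porter-Thomas fits can be illustrated with a toy calculation. It assumes reduced widths follow a chi-square distribution with one degree of freedom (the Porter-Thomas form), so the fraction of levels below a detection threshold follows from its CDF; the threshold and sample size here are hypothetical, not taken from the review.

```python
import math
import random

def porter_thomas_sample(n, mean_width=1.0, seed=1):
    """Reduced neutron widths ~ chi-square with 1 degree of freedom
    (Porter-Thomas), scaled to the given mean width."""
    rng = random.Random(seed)
    return [mean_width * rng.gauss(0.0, 1.0) ** 2 for _ in range(n)]

def missed_fraction(threshold, mean_width=1.0):
    """Porter-Thomas CDF at the detection threshold:
    P(width < t) = erf(sqrt(t / (2 * mean_width)))."""
    return math.erf(math.sqrt(threshold / (2.0 * mean_width)))

def corrected_spacing(observed_spacing, threshold, mean_width=1.0):
    """Shrink the observed average spacing by the estimated fraction of
    weak levels whose widths fell below threshold and went undetected."""
    return observed_spacing * (1.0 - missed_fraction(threshold, mean_width))

widths = porter_thomas_sample(2000)
thr = 0.1  # detection threshold in units of the mean width
f_emp = sum(1 for w in widths if w < thr) / len(widths)
f_th = missed_fraction(thr)
```

Comparing `f_emp` with `f_th` shows how strongly the chi-square(1) shape concentrates widths near zero, which is why truncation corrections matter.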
Directory of Open Access Journals (Sweden)
Cornelis (Kees) S. van der Waal
2009-09-01
After a breakdown in employment relations in the maintenance section of a higher education institution, the authors were asked to intervene in order to try to resolve the employment relations conflict. It was decided to employ the Nominal Group Technique (NGT) as a tool in problem identification during conflict in the workplace. An initial investigation of documentation and interviews with prominent individuals in the organisation was carried out. The NGT was then used in four focus group discussions to determine the important issues as seen by staff members. The NGT facilitates the determination of shared perceptions and the ranking of ideas. The NGT was used in diverse groups, necessitating adaptations to the technique. The perceived causes of the conflict were established. The NGT can be used in a conflict situation in the workplace in order to establish the perceived causes of employment relations conflict.
Ergodic averages via dominating processes
DEFF Research Database (Denmark)
Møller, Jesper; Mengersen, Kerrie
2006-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.
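A minimal sketch of the bracketing idea: a reflecting birth-death chain on {0,...,cap} with a monotone coupling driven by shared uniforms, so a lower chain started at the bottom state and an upper chain started at the top state sandwich the stationary chain. The chain and its parameters are invented for illustration and are not taken from the paper.

```python
import random

def monotone_step(x, u, cap=10, p=0.4):
    """One update of a reflecting birth-death chain on {0,...,cap},
    driven by a shared uniform u so the update preserves ordering."""
    return min(x + 1, cap) if u < p else max(x - 1, 0)

def bracketed_ergodic_average(f, n_steps=20000, cap=10, seed=7):
    """Run lower (start 0) and upper (start cap) dominating chains on
    the same randomness; the ergodic averages of a monotone f along the
    two chains bracket the stationary mean of f."""
    rng = random.Random(seed)
    lo, hi = 0, cap
    s_lo = s_hi = 0.0
    for _ in range(n_steps):
        u = rng.random()
        lo, hi = monotone_step(lo, u, cap), monotone_step(hi, u, cap)
        s_lo += f(lo)
        s_hi += f(hi)
    return s_lo / n_steps, s_hi / n_steps

low, high = bracketed_ergodic_average(lambda x: x)
```

Because the shared-uniform update preserves the order lo <= hi at every step, the two averages always bracket each other, and they tighten as the chains coalesce at the reflecting boundaries.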
International Nuclear Information System (INIS)
Das, A.; Shukla, A.D.
1999-01-01
To measure picogram amounts of PGEs in terrestrial and extraterrestrial samples we have modified the NiS fire assay technique in conjunction with neutron activation analysis. Os, Ir and Ru are almost quantitatively concentrated in the NiS bead. The method should be applicable to other elements (Pt, Pd, and Rh), but these could not be analyzed because of the short half-lives of their daughter isotopes. The results also show that chalcophile elements such as Ag can also be quantitatively estimated using this method. (author)
Directory of Open Access Journals (Sweden)
Genevieve Newton
2012-12-01
Breakout groups have been widely used under many different conditions, but the lack of published information related to their use in undergraduate settings highlights the need for research related to their use in this context. This paper describes a study investigating the use of breakout groups in undergraduate education as it specifically relates to teaching a large 4th-year undergraduate Nutrition class in a physically constrained lecture space. In total, 220 students completed a midterm survey and 229 completed a final survey designed to measure student satisfaction. Survey results were further analyzed to measure relationships between student perception of breakout group effectiveness and (1) gender and (2) cumulative GPA. Results of both surveys revealed that over 85% of students either agreed or strongly agreed that using breakout groups enhanced their learning experience, with females showing a significantly greater level of satisfaction and higher final course grade than males. Although not stratified by gender, a consistent finding between surveys was a lower perception of breakout group effectiveness by students with a cumulative GPA above 90%. The majority of respondents felt that despite the awkward room space, the breakout groups were easy to create and participate in, which suggests that breakout groups can be successfully used in a large undergraduate classroom despite physical constraints. The findings of this work are relevant given the applicability of breakout groups to a wide range of disciplines, and the relative ease of integration into a traditional lecture format.
Jonathan M. Cohen; Jean C. Mangun; Mae A. Davenport; Andrew D. Carver
2008-01-01
Diverse public opinions, competing management goals, and polarized interest groups combine with problems of scale to create a complex management arena for managers in the Central Hardwood Forest region. A mixed-methods approach that incorporated quantitative analysis of data from a photo evaluation-attitude scale survey instrument was used to assess attitudes toward...
[Drama and forgiveness: the mechanism of action of a new technique in group therapy--face therapy].
Csigó, Katalin; Bender, Márta; Németh, Attila
2006-01-01
In our article we relate our experiences of the face therapy--group therapy sessions held at 2nd Psychiatric Ward of Nyíró Gyula Hospital. Face therapy uses the elements of art therapy and psychodrama: patients form their own head from gypsum and paint it. During the sessions, we analyse the heads and patients reveal their relation to their head. Our paper also presents the structure of thematic sessions and the features of the creative and processing phase. The phenomena that occur during group therapy (self-presentation, self-destruction, creativity) are interpreted with the concepts of psychodynamics and psychodrama. Finally, possible areas of indication are suggested for face therapy and the treatment possibilities for self-destructive phenomena.
An approach to averaging digitized plantagram curves.
Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B
1994-07-01
The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by ±2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
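The equiangular-ray averaging step can be sketched as follows. This toy replaces the paper's chord-intersection construction with a simpler nearest-angle lookup, and uses concentric circles as stand-in outlines rather than digitized footprints.

```python
import math

def radial_profile(outline, n_rays=36):
    """Resample a closed outline (list of (x, y)) onto n_rays equiangular
    rays from its centroid, taking the radius of the point whose polar
    angle is nearest each ray (a stand-in for chord intersection)."""
    cx = sum(p[0] for p in outline) / len(outline)
    cy = sum(p[1] for p in outline) / len(outline)
    polar = [(math.atan2(y - cy, x - cx) % (2 * math.pi),
              math.hypot(x - cx, y - cy)) for x, y in outline]
    profile = []
    for k in range(n_rays):
        theta = 2 * math.pi * k / n_rays
        gap = lambda a: min(abs(a - theta), 2 * math.pi - abs(a - theta))
        profile.append(min(polar, key=lambda pr: gap(pr[0]))[1])
    return profile

def average_profile(outlines, n_rays=36):
    """Per-ray mean radius across a group of outlines."""
    profiles = [radial_profile(o, n_rays) for o in outlines]
    return [sum(p[k] for p in profiles) / len(profiles) for k in range(n_rays)]

def circle(r, n=100):
    return [(r * math.cos(2 * math.pi * i / n),
             r * math.sin(2 * math.pi * i / n)) for i in range(n)]

# Averaging outlines of radius 1 and 3 gives a profile of radius 2.
avg = average_profile([circle(1.0), circle(3.0)], n_rays=12)
```

In the paper the averaging is additionally stratified by foot-length group; that grouping would simply partition `outlines` before calling `average_profile`.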
When good = better than average
Directory of Open Access Journals (Sweden)
Don A. Moore
2007-10-01
People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage, during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.
Tendon surveillance requirements - average tendon force
International Nuclear Information System (INIS)
Fulton, J.F.
1982-01-01
Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, is stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)
Autoregressive Moving Average Graph Filtering
Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert
2016-01-01
Graph filters, direct analogues of classical filters but intended for signals defined on graphs, are one of the cornerstones of the field of signal processing on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
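A first-order ARMA graph recursion of the kind described can be sketched in a few lines; the shift matrix, coefficients, and 3-node graph below are illustrative choices, not the paper's design procedure.

```python
def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def arma1_graph_filter(M, x, phi, psi, n_iter=200):
    """ARMA(1) graph-filter recursion y <- psi*M*y + phi*x. For
    |psi| * ||M|| < 1 it converges to y = phi * (I - psi*M)^(-1) x,
    i.e. the graph frequency response phi / (1 - psi*lam) at each
    eigenvalue lam of the shift matrix M."""
    y = [0.0] * len(x)
    for _ in range(n_iter):
        y = [psi * a + phi * b for a, b in zip(mat_vec(M, y), x)]
    return y

# Normalized shift matrix of a 3-node path graph (illustrative choice).
M = [[0.0, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.0]]
x = [1.0, 0.0, 0.0]            # unit impulse at node 0
y = arma1_graph_filter(M, x, phi=1.0, psi=0.5)
# Fixed point solves (I - 0.5*M) y = x, i.e. y = [15/14, 2/7, 1/14].
```

Because each iteration only combines a node's state with its neighbors' states through `M`, the same recursion runs distributedly, which is the setting the abstract targets.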
Averaging Robertson-Walker cosmologies
International Nuclear Information System (INIS)
Brown, Iain A.; Robbers, Georg; Behrend, Juliane
2009-01-01
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω⁰_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models.
International Nuclear Information System (INIS)
Armstrong, L.D.; Rymer, G.; Perkins, S.
1994-01-01
This paper addresses a process facilitation technique using computer hardware and software that assists its users in group decision-making, consensus building, surveying and polling, and strategic planning. The process and equipment has been successfully used by the Department of Energy and Martin Marietta Energy Systems, Inc., Environmental Restoration and Waste Management Community Relations program. The technology is used to solicit and encourage qualitative and documented public feedback in government mandated or sponsored public meetings in Oak Ridge, Tennessee
Garcia-Adeva, A. J.; Huber, D. L.
2001-01-01
In this work we generalize and subsequently apply the Effective Field Renormalization Group technique to the problem of ferro- and antiferromagnetically coupled Ising spins with local anisotropy axes in geometrically frustrated geometries (kagome and pyrochlore lattices). In this framework, we calculate the various ground states of these systems and the corresponding critical points. Excellent agreement is found with exact and Monte Carlo results. The effects of frustration are discussed. As ...
DEFF Research Database (Denmark)
Herskin, Mette S.; Ladevig, Jan; Arendt-Nielsen, Lars
2009-01-01
Nociceptive testing is a valuable tool in the development of pharmaceutical products, for basic nociceptive research, and for studying changes in pain sensitivity after inflammatory states or nerve injury. However, in pigs only very limited knowledge about nociceptive processes … nociceptive stimulation from a computer-controlled CO2-laser beam applied to either the caudal part of the metatarsus on the hind legs or the shoulder region of gilts. In Exp. 1, effects of laser power output (0, 0.5, 1, 1.5 and 2 W) on nociceptive responses toward stimulation on the caudal aspects of the metatarsus were examined using 15 gilts kept in one group and tested in individual feeding stalls after feeding. Increasing the power output led to gradually decreasing latency to respond (P
International Nuclear Information System (INIS)
Shi, Shenggang; Cao, Jingcan; Feng, Li; Liang, Wenyan; Zhang, Liqiu
2014-01-01
Highlights: • Different chemical pollution accidents were simplified using the event tree analysis. • Emergency disposal technique plan repository of chemicals accidents was constructed. • The technique evaluation index system of chemicals accidents disposal was developed. • A combination of group decision and analytical hierarchy process (AHP) was employed. • Group decision introducing similarity and diversity factor was used for data analysis. - Abstract: The environmental pollution resulting from chemical accidents has caused increasingly serious concerns. Therefore, it is very important to be able to determine in advance the appropriate emergency treatment and disposal technology for different types of chemical accidents. However, the formulation of an emergency plan for chemical pollution accidents is considerably difficult due to the substantial uncertainty and complexity of such accidents. This paper explains how the event tree method was used to create 54 different scenarios for chemical pollution accidents, based on the polluted medium, dangerous characteristics and properties of chemicals involved. For each type of chemical accident, feasible emergency treatment and disposal technology schemes were established, considering the areas of pollution source control, pollutant non-proliferation, contaminant elimination and waste disposal. Meanwhile, in order to obtain the optimum emergency disposal technology schemes as soon as the chemical pollution accident occurs from the plan repository, the technique evaluation index system was developed based on group decision-improved analytical hierarchy process (AHP), and has been tested by using a sudden aniline pollution accident that occurred in a river in December 2012
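The AHP ingredient of the scheme above can be sketched with a standard priority-vector computation; the pairwise-comparison matrix below is a made-up, perfectly consistent example, and Saaty's random indices are tabulated here only for n <= 5.

```python
def ahp_priorities(A, n_iter=100):
    """Power iteration on a pairwise-comparison matrix A: returns the
    normalized principal eigenvector (priority weights) and lambda_max,
    the principal eigenvalue used in the consistency check."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(n_iter):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    lam = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    return w, lam

def consistency_ratio(lam, n):
    """Saaty's CR = CI / RI, with random indices tabulated for n <= 5."""
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}
    ci = (lam - n) / (n - 1)
    return ci / RI[n] if RI[n] else 0.0

# A perfectly consistent 3x3 matrix encoding weights 0.6 : 0.3 : 0.1.
A = [[1.0, 2.0, 6.0],
     [0.5, 1.0, 3.0],
     [1 / 6, 1 / 3, 1.0]]
w, lam = ahp_priorities(A)
cr = consistency_ratio(lam, 3)   # ~0: judgments are consistent
```

A CR below 0.1 is the usual acceptance rule; the group-decision layer described in the abstract would aggregate several such matrices before ranking the disposal schemes.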
Edwards, Roger A; Dee, Deborah; Umer, Amna; Perrine, Cria G; Shealy, Katherine R; Grummer-Strawn, Laurence M
2014-02-01
A substantial proportion of US maternity care facilities engage in practices that are not evidence-based and that interfere with breastfeeding. The CDC Survey of Maternity Practices in Infant Nutrition and Care (mPINC) showed significant variation in maternity practices among US states. The purpose of this article is to use benchmarking techniques to identify states within relevant peer groups that were top performers on mPINC survey indicators related to breastfeeding support. We used 11 indicators of breastfeeding-related maternity care from the 2011 mPINC survey and benchmarking techniques to organize and compare hospital-based maternity practices across the 50 states and Washington, DC. We created peer categories for benchmarking first by region (grouping states by West, Midwest, South, and Northeast) and then by size (grouping states by the number of maternity facilities and dividing each region into approximately equal halves based on the number of facilities). Thirty-four states had scores high enough to serve as benchmarks, and 32 states had scores low enough to reflect the lowest score gap from the benchmark on at least 1 indicator. No state served as the benchmark on more than 5 indicators and no state was furthest from the benchmark on more than 7 indicators. The small peer group benchmarks in the South, West, and Midwest were better than the large peer group benchmarks on 91%, 82%, and 36% of the indicators, respectively. In the West large, the Midwest large, the Midwest small, and the South large peer groups, 4-6 benchmarks showed that less than 50% of hospitals have ideal practice in all states. The evaluation presents benchmarks for peer group state comparisons that provide potential and feasible targets for improvement.
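The benchmarking arithmetic described, taking the top score in each peer group as the benchmark and measuring per-state gaps from it, can be sketched directly; the states, indicators, scores, and group names below are hypothetical, not mPINC data.

```python
def peer_benchmarks(scores, peers):
    """Within each peer group, the benchmark on each indicator is the top
    score in the group; a state's gap is benchmark minus its own score.
    scores: {state: [indicator scores]}; peers: {group: [states]}."""
    out = {}
    for group, states in peers.items():
        n_ind = len(scores[states[0]])
        bench = [max(scores[s][k] for s in states) for k in range(n_ind)]
        out[group] = {
            "benchmark": bench,
            "gaps": {s: [bench[k] - scores[s][k] for k in range(n_ind)]
                     for s in states},
        }
    return out

# Hypothetical scores on two indicators for four states in two peer groups.
scores = {"A": [80, 60], "B": [70, 90], "C": [50, 50], "D": [65, 40]}
peers = {"south-large": ["A", "B"], "south-small": ["C", "D"]}
res = peer_benchmarks(scores, peers)
```

A gap of zero marks the benchmark state for that indicator, matching the article's observation that no single state benchmarked every indicator.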
Bezodis, Ian N; Kerwin, David G; Cooper, Stephen-Mark; Salo, Aki I T
2017-11-15
To understand how training periodization influences sprint performance and key step characteristics over an extended training period in an elite sprint training group. Four sprinters were studied during five months of training. Step velocities, step lengths and step frequencies were measured from video of the maximum velocity phase of training sprints. Bootstrapped mean values were calculated for each athlete for each session and 139 within-athlete, between-session comparisons were made with a repeated measures ANOVA. As training progressed, a link in the changes in velocity and step frequency was maintained. There were 71 between-session comparisons with a change in step velocity yielding at least a large effect size (>1.2), of which 73% had a correspondingly large change in step frequency in the same direction. Within-athlete mean session step length remained relatively constant throughout. Reductions in step velocity and frequency occurred during training phases of high volume lifting and running, with subsequent increases in step velocity and frequency happening during phases of low volume lifting and high intensity sprint work. The importance of step frequency over step length to the changes in performance within a training year was clearly evident for the sprinters studied. Understanding the magnitudes and timings of these changes in relation to the training program is important for coaches and athletes. The underpinning neuro-muscular mechanisms require further investigation, but are likely explained by an increase in force producing capability followed by an increase in the ability to produce that force rapidly.
O'Connor, Teresia M; Cerin, Ester; Hughes, Sheryl O; Robles, Jessica; Thompson, Deborah; Baranowski, Tom; Lee, Rebecca E; Nicklas, Theresa; Shewchuk, Richard M
2013-08-06
Hispanic preschoolers are less active than their non-Hispanic peers. As part of a feasibility study to assess environmental and parenting influences on preschooler physical activity (PA) (Niños Activos), the aim of this study was to identify what parents do to encourage or discourage PA among Hispanic 3-5 year old children to inform the development of a new PA parenting practice instrument and future interventions to increase PA among Hispanic youth. Nominal Group Technique (NGT), a structured multi-step group procedure, was used to elicit and prioritize responses from 10 groups of Hispanic parents regarding what parents do to encourage (5 groups) or discourage (5 groups) preschool aged children to be active. Five groups consisted of parents with low education (less than high school) and 5 with high education (high school or greater) distributed between the two NGT questions. Ten NGT groups (n = 74, range 4-11/group) generated 20-46 and 42-69 responses/group for practices that encourage or discourage PA respectively. Eight to 18 responses/group were elected as the most likely to encourage or discourage PA. Parental engagement in child activities, modeling PA, and feeding the child well were identified as parenting practices that encourage child PA. Allowing TV and videogame use, psychological control, physical or emotional abuse, and lack of parental engagement emerged as parenting practices that discourage children from being active. There were few differences in the pattern of responses by education level. Parents identified ways they encourage and discourage 3-5 year-olds from PA, suggesting both are important targets for interventions. These will inform the development of a new PA parenting practice scale to be further evaluated. Further research should explore the role parents play in discouraging child PA, especially in using psychological control or submitting children to abuse, which were new findings in this study.
Topological quantization of ensemble averages
International Nuclear Information System (INIS)
Prodan, Emil
2009-01-01
We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states
Keogh, Alison; Tully, Mark A; Matthews, James; Hurley, Deirdre A
2015-12-01
Medical Research Council (MRC) guidelines recommend applying theory within complex interventions to explain how behaviour change occurs. Guidelines endorse self-management of chronic low back pain (CLBP) and osteoarthritis (OA), but evidence for its effectiveness is weak. This literature review aimed to determine the use of behaviour change theory and techniques within randomised controlled trials of group-based self-management programmes for chronic musculoskeletal pain, specifically CLBP and OA. A two-phase search strategy of electronic databases was used to identify systematic reviews and studies relevant to this area. Articles were coded for their use of behaviour change theory, and the number of behaviour change techniques (BCTs) was identified using the 93-item Behaviour Change Technique Taxonomy (v1). 25 articles of 22 studies met the inclusion criteria, of which only three reported having based their intervention on theory, and all used Social Cognitive Theory. A total of 33 BCTs were coded across all articles with the most commonly identified techniques being 'instruction on how to perform the behaviour', 'demonstration of the behaviour', 'behavioural practice', 'credible source', 'graded tasks' and 'body changes'. Results demonstrate that theoretically driven research within group-based self-management programmes for chronic musculoskeletal pain is lacking, or is poorly reported. Future research that follows recommended guidelines regarding the use of theory in study design and reporting is warranted. Copyright © 2015 Elsevier Ltd. All rights reserved.
The average Indian female nose.
Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh
2011-12-01
This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.
Energy Technology Data Exchange (ETDEWEB)
NONE
1995-10-01
It is well known that soil erosion and lake siltation frequently create serious problems, especially in arid and semi-arid zones. Important progress has been made during recent years in the utilization of environmental radionuclides for erosion and sedimentation studies. This advisory group meeting (AGM) was held to discuss the present status of these nuclear techniques and to define the needs for future development. This publication compiles papers presented by the invited experts during the meeting and an updated bibliography on the use of 137Cs in soil erosion, siltation and other related environmental studies. Refs, figs and tabs.
Unscrambling The "Average User" Of Habbo Hotel
Directory of Open Access Journals (Sweden)
Mikael Johnson
2007-01-01
The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer's reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study of the virtual community Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer in disregarding marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers' role in governing different users' interests.
Ji, Cuiying; Zhang, Xuewei; Yu, Peiqiang
2016-03-05
The non-invasive molecular spectroscopic technique FT/IR can detect molecular structure spectral features that are associated with biological, nutritional and biodegradation functions. However, to date, little research has used these non-invasive molecular spectroscopic techniques to study forage internal protein structures associated with biodegradation and biological functions. The objectives of this study were to detect unique features and associations of protein Amide functional groups, in terms of protein Amide I and II spectral profiles and chemical properties, in alfalfa forage (Medicago sativa L.) from different origins. In this study, alfalfa hay from two different origins was used as the model forage for molecular structure and chemical property study. For each forage origin, five to seven sources were analyzed. The molecular spectral profiles were determined using FT/IR non-invasive molecular spectroscopy. The parameters of protein spectral profiles included functional groups of Amide I, Amide II and the Amide I to II ratio. The results show that the model forage Amide I and Amide II were centered at 1653 cm-1 and 1545 cm-1, respectively. The Amide I spectral height and area intensities were from 0.02 to 0.03 and 2.67 to 3.36 AI, respectively. The Amide II spectral height and area intensities were from 0.01 to 0.02 and 0.71 to 0.93 AI, respectively. The Amide I to II spectral peak height and area ratios were from 1.86 to 1.88 and 3.68 to 3.79, respectively. Our results show that non-invasive molecular spectroscopic techniques are capable of detecting forage internal protein structure features which are associated with forage chemical properties. Copyright © 2015 Elsevier B.V. All rights reserved.
ON IMPROVEMENT OF METHODOLOGY FOR CALCULATING THE INDICATOR «AVERAGE WAGE»
Directory of Open Access Journals (Sweden)
Oksana V. Kuchmaeva
2015-01-01
The article describes approaches to the calculation of the indicator of average wages in Russia with the use of several sources of information. The proposed method is based on data collected by Rosstat and the Pension Fund of the Russian Federation. The proposed approach allows capturing data on the wages of almost all groups of employees. Results of experimental calculations using the developed technique are presented in this article.
Swanson, R. E.
2017-12-01
Climate data records typically exhibit considerable variation over short time scales both from natural variability and from instrumentation issues. The use of linear least squares regression can provide overall trend information from noisy data; however, assessing intermediate time periods can also provide useful information unavailable from basic trend calculations. Extracting the short-term information in these data for assessing changes to climate or for comparison of data series from different sources requires the application of filters to separate short-period variations from longer-period trends. A common method used to smooth data is the moving average, which is a simple digital filter that can distort the resulting series due to the aliasing of the sampling period into the output series. We utilized Hamming filters to compare MSU/AMSU satellite time series developed by three research groups (UAH, RSS and NOAA STAR), with the results published in January 2017 [http://journals.ametsoc.org/doi/abs/10.1175/JTECH-D-16-0121.1]. Since the last release date (July 2016) for the data analyzed in that paper, some of these groups have updated their analytical procedures and additional months of data are available to extend the series. An updated analysis of these data using the latest data releases available from each group is to be presented. Improved graphics will be employed to provide a clearer visualization of the differences between each group's results. As in the previous paper, the greatest difference between the UAH TMT series and those from the RSS and NOAA data appears during the early period of data from the MSU instruments before about 2003, as shown in the attached figure, and preliminary results indicate this pattern continues. Also to be presented are other findings regarding seasonal changes which were not included in the previous study.
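As a rough sketch of the filtering comparison above (not the authors' code), a boxcar moving average can be contrasted with a Hamming-window smoother, whose lower sidelobes suppress the aliasing artifacts the boxcar introduces. The window length and the synthetic trend-plus-oscillation signal are assumptions.

```python
import math

def moving_average(x, m):
    # Simple boxcar smoother: uniform weights over a window of m samples.
    half = m // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def hamming_smooth(x, m):
    # Weighted smoother using a Hamming window; the tapered weights reduce
    # the sidelobe leakage responsible for aliasing in the boxcar filter.
    w = [0.54 - 0.46 * math.cos(2 * math.pi * k / (m - 1)) for k in range(m)]
    half = m // 2
    out = []
    for i in range(len(x)):
        num = den = 0.0
        for k in range(m):
            j = i + k - half
            if 0 <= j < len(x):
                num += w[k] * x[j]
                den += w[k]
        out.append(num / den)
    return out

# A slow linear trend plus a short-period oscillation (a stand-in for
# monthly variability riding on a climate trend).
n = 120
signal = [0.01 * t + 0.5 * math.sin(2 * math.pi * t / 6) for t in range(n)]
box = moving_average(signal, 13)
ham = hamming_smooth(signal, 13)
```

In the interior of the series both filters recover the trend, but the Hamming-weighted residual oscillation is markedly smaller than the boxcar's at off-phase samples.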
High average power linear induction accelerator development
International Nuclear Information System (INIS)
Bayless, J.R.; Adler, R.J.
1987-07-01
There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs
Huang, Chien-Hsun; Lee, Fwu-Ling; Liou, Jong-Shian
2010-03-01
The Lactobacillus plantarum group comprises five very closely related species. Some species of this group are considered to be probiotic and widely applied in the food industry. In this study, we compared the use of two different molecular markers, the 16S rRNA and dnaK gene, for discriminating phylogenetic relationships amongst L. plantarum strains using sequencing and DNA fingerprinting. The average sequence similarity for the dnaK gene (89.2%) among five type strains was significantly less than that for the 16S rRNA (99.4%). This result demonstrates that the dnaK gene sequence provided higher resolution than the 16S rRNA and suggests that the dnaK could be used as an additional phylogenetic marker for L. plantarum. Species-specific profiles of the Lactobacillus strains were obtained with RAPD and RFLP methods. Our data indicate that phylogenetic relationships between these strains are easily resolved using sequencing of the dnaK gene or DNA fingerprinting assays.
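As an aside, the kind of average pairwise sequence similarity reported above can be sketched for pre-aligned, equal-length sequences; the toy fragments below are invented, not the study's 16S rRNA or dnaK data.

```python
from itertools import combinations

def identity(a, b):
    # Percent identity between two pre-aligned, equal-length sequences.
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / len(a)

def average_similarity(seqs):
    # Mean of all pairwise identities, as used to compare marker genes
    # across a set of strains.
    pairs = list(combinations(seqs, 2))
    return sum(identity(a, b) for a, b in pairs) / len(pairs)

# Hypothetical 10-base fragments standing in for aligned gene sequences.
seqs = ["ACGTACGTAC", "ACGTACGTCC", "ACGAACGTAC"]
avg = average_similarity(seqs)
```

Real comparisons would of course run on full-length alignments, but the averaging step is this simple.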
Deblurring of class-averaged images in single-particle electron microscopy
International Nuclear Information System (INIS)
Park, Wooram; Chirikjian, Gregory S; Madden, Dean R; Rockmore, Daniel N
2010-01-01
This paper proposes a method for the deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, the images which are nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. This inaccurate class average due to the alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate for the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid-body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using the Laguerre–Fourier expansions, and Hermite expansion and Laguerre–Fourier expansion retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method
The balanced survivor average causal effect.
Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken
2013-05-07
Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
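The core comparison can be sketched as follows; the function name, the fraction q, and the toy data are assumptions, and a real analysis would add the paper's bias expressions and sensitivity analyses.

```python
def balanced_sace_estimate(treated, control, q):
    """Compare mean longitudinal outcomes between the longest-surviving
    fraction q of each arm. Each arm is a list of (survival_time, outcome)
    pairs. This is an illustrative simplification, not the authors' full
    estimator."""
    def top_fraction_mean(arm):
        # Sort by survival time, keep the top q fraction, average outcomes.
        arm = sorted(arm, key=lambda r: r[0], reverse=True)
        k = max(1, round(q * len(arm)))
        return sum(outcome for _, outcome in arm[:k]) / k
    return top_fraction_mean(treated) - top_fraction_mean(control)

# Toy data: (survival time in months, longitudinal outcome at follow-up).
treated = [(30, 5.0), (24, 4.0), (10, 1.0), (8, 0.5)]
control = [(28, 3.0), (20, 2.0), (12, 1.5), (6, 0.5)]
effect = balanced_sace_estimate(treated, control, q=0.5)
```

Because equivalent fractions of the longest survivors are compared in each arm, no monotonicity assumption on survival is needed, which is the point of the balanced-SACE.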
Directory of Open Access Journals (Sweden)
Moath Kassim
2018-05-01
To maintain the safety and reliability of reactors, redundant sensors are usually used to measure critical variables and to estimate their time-dependent averages. Nonhealthy sensors can badly influence the estimation result of the process variable. Since online condition monitoring was introduced, the online cross-calibration method has been widely used to detect any anomaly of sensor readings within the redundant group. The cross-calibration method has four main averaging techniques: simple averaging, band averaging, weighted averaging, and parity space averaging (PSA). PSA weighs redundant signals based on their error bounds and their band consistency. Using the consistency weighting factor (C), PSA assigns more weight to consistent signals that have shared bands, based on how many bands they share, and gives inconsistent signals very low weight. In this article, three approaches are introduced for improving the PSA technique: the first adds another consistency factor, so-called trend consistency (TC), to account for preserving any characteristic edge that reflects the behavior of the equipment/component measured by the process parameter; the second replaces the error-bound/accuracy-based weighting factor (Wa) with a weighting factor based on the Euclidean distance (Wd); and the third applies Wd, TC, and C all together. Cold neutron source data sets of four redundant hydrogen pressure transmitters from a research reactor were used to perform the validation and verification. Results showed that the second and third modified approaches lead to reasonable improvement of the PSA technique. All approaches implemented in this study were similar in that they have the capability to (1) identify and isolate a drifted sensor that should undergo calibration, (2) identify a faulty sensor or sensors due to a long and continuous range of missing data, and (3) identify a healthy sensor.
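A toy sketch in the spirit of the parity-space averaging described above: each redundant reading is weighted by how many other readings agree with it within an assumed error bound, so a drifted sensor receives zero weight. The weighting rule, the bound, and the data are illustrative assumptions, not the article's algorithm.

```python
def psa_style_average(readings, bound):
    """Weight each redundant sensor reading by how many other readings fall
    within its error band (a simplified consistency factor C), then take
    the weighted average. `bound` is an assumed symmetric error bound."""
    weights = []
    for i, r in enumerate(readings):
        # Two bands of half-width `bound` overlap when the readings are
        # within 2 * bound of each other.
        overlaps = sum(1 for j, s in enumerate(readings)
                       if i != j and abs(r - s) <= 2 * bound)
        weights.append(overlaps)
    total = sum(weights)
    if total == 0:
        # No consistent pair at all: fall back to the simple average.
        return sum(readings) / len(readings)
    return sum(w * r for w, r in zip(weights, readings)) / total

# Four redundant pressure readings; the last sensor has drifted.
readings = [10.02, 10.00, 9.98, 12.50]
estimate = psa_style_average(readings, bound=0.05)
```

The three consistent transmitters share bands with each other and dominate the estimate, while the drifted one is excluded, mirroring the drift-isolation behavior the article reports.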
Gregoretti, Francesco; Cesarini, Elisa; Lanzuolo, Chiara; Oliva, Gennaro; Antonelli, Laura
2016-01-01
The large amount of data generated in biological experiments that rely on advanced microscopy can be handled only with automated image analysis. Most analyses require a reliable cell image segmentation, ideally capable of detecting subcellular structures. We present an automatic segmentation method to detect Polycomb group (PcG) protein areas isolated from nuclei regions in high-resolution fluorescent cell image stacks. It combines two segmentation algorithms that use an active contour model and a classification technique, serving as a tool to better understand the subcellular three-dimensional distribution of PcG proteins in live cell image sequences. We obtained accurate results throughout several cell image datasets, coming from different cell types and corresponding to different fluorescent labels, without requiring elaborate adjustments to each dataset.
Paul, M. P.
1982-01-01
Measurement of integrated columnar electron content and total electron content for the local ionosphere and the overlying protonosphere via Faraday rotation and group delay techniques has proven very useful. A field station was established having the geographic location of 31.5 deg N latitude and 91.06 deg W longitude to accomplish these objectives. A polarimeter receiving system was set up in the beginning to measure the Faraday rotation of 137.35 MHz radio signal from geostationary satellite ATS 3 to yield the integrated columnar electron content of the local ionosphere. The measurement was continued regularly, and the analysis of the data thus collected provided a synopsis of the statistical variation of the ionosphere along with the transient variations that occurred during the periods of geomagnetic and other disturbances.
Kremer, Ingrid E H; Evers, Silvia M A A; Jongen, Peter J; van der Weijden, Trudy; van de Kolk, Ilona; Hiligsmann, Mickaël
2016-01-01
Understanding the preferences of patients with multiple sclerosis (MS) for disease-modifying drugs and involving these patients in clinical decision making can improve the concordance between medical decisions and patient values and may, subsequently, improve adherence to disease-modifying drugs. This study aims first to identify which characteristics, or attributes, of disease-modifying drugs influence patients' decisions about these treatments and second to quantify the attributes' relative importance among patients. First, three focus groups of relapsing-remitting MS patients were formed to compile a preliminary list of attributes using a nominal group technique. Based on this qualitative research, a survey with several choice tasks (best-worst scaling) was developed to prioritize attributes, asking a larger patient group to choose the most and least important attributes. The attributes' mean relative importance scores (RIS) were calculated. Nineteen patients reported 34 attributes during the focus groups and 185 patients evaluated the importance of the attributes in the survey. The effect on disease progression received the highest RIS (RIS = 9.64, 95% confidence interval: [9.48-9.81]), followed by quality of life (RIS = 9.21 [9.00-9.42]), relapse rate (RIS = 7.76 [7.39-8.13]), severity of side effects (RIS = 7.63 [7.33-7.94]) and relapse severity (RIS = 7.39 [7.06-7.73]). Subgroup analyses showed heterogeneity in the preferences of patients. For example, side effect-related attributes were statistically more important for patients who had no experience in using disease-modifying drugs than for experienced patients. This heterogeneity suggests that shared decision making is needed and requires eliciting individual preferences.
Garcia-Adeva, Angel J.; Huber, David L.
2001-07-01
In this work we generalize and subsequently apply the effective-field renormalization-group (EFRG) technique to the problem of ferro- and antiferromagnetically coupled Ising spins with local anisotropy axes in geometrically frustrated geometries (kagomé and pyrochlore lattices). In this framework, we calculate the various ground states of these systems and the corresponding critical points. Excellent agreement is found with exact and Monte Carlo results. The effects of frustration are discussed. As pointed out by other authors, it turns out that the spin-ice model can be exactly mapped to the standard Ising model, but with effective interactions of the opposite sign to those in the original Hamiltonian. Therefore, the ferromagnetic spin ice is frustrated and does not order. Antiferromagnetic spin ice (in both two and three dimensions) is found to undergo a transition to a long-range-ordered state. The thermal and magnetic critical exponents for this transition are calculated. It is found that the thermal exponent is that of the Ising universality class, whereas the magnetic critical exponent is different, as expected from the fact that the Zeeman term has a different symmetry in these systems. In addition, the recently introduced generalized constant coupling method is also applied to the calculation of the critical points and ground-state configurations. Again, a very good agreement is found with exact, Monte Carlo, and renormalization-group calculations for the critical points. Incidentally, we show that the generalized constant coupling approach can be regarded as the lowest-order limit of the EFRG technique, in which correlations outside a frustrated unit are neglected, and scaling is substituted by strict equality of the thermodynamic quantities.
Pajewski, Lara; Giannopoulos, Antonis; van der Kruk, Jan
2015-04-01
This work presents the ongoing research activities carried out in Working Group 3 (WG3) 'EM methods for near-field scattering problems by buried structures; data processing techniques' of the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar' (www.GPRadar.eu). The principal goal of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. WG3 is structured in four Projects. Project 3.1 deals with 'Electromagnetic modelling for GPR applications.' Project 3.2 is concerned with 'Inversion and imaging techniques for GPR applications.' The topic of Project 3.3 is the 'Development of intrinsic models for describing near-field antenna effects, including antenna-medium coupling, for improved radar data processing using full-wave inversion.' Project 3.4 focuses on 'Advanced GPR data-processing algorithms.' Electromagnetic modeling tools that are being developed and improved include the Finite-Difference Time-Domain (FDTD) technique and the spectral domain Cylindrical-Wave Approach (CWA). One of the well-known freeware and versatile FDTD simulators is GprMax, which enables an improved realistic representation of the soil/material hosting the sought structures and of the GPR antennas. Here, input/output tools are being developed to ease the definition of scenarios and the visualisation of numerical results. The CWA expresses the field scattered by subsurface two-dimensional targets with arbitrary cross-section as a sum of cylindrical waves. In this way, the interaction of multiple scattered fields within the medium hosting the sought targets is taken into account. Recently, the method has been extended to deal with through-the-wall scenarios. One of the
Averaging of nonlinearity-managed pulses
International Nuclear Information System (INIS)
Zharnitsky, Vadim; Pelinovsky, Dmitry
2005-01-01
We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons
Perceived Average Orientation Reflects Effective Gist of the Surface.
Cha, Oakyoon; Chong, Sang Chul
2018-03-01
The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.
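As a technical aside (not part of the study), computing an "average orientation" is itself nontrivial because orientations are axial data (0° and 180° are the same direction); a standard trick doubles the angles, takes the circular mean, and halves the result. The sample angles below are made up.

```python
import math

def mean_orientation_deg(angles_deg):
    # Axial data: double each angle so 0 and 180 map to the same point on
    # the circle, take the circular (vector) mean, then halve back.
    s = sum(math.sin(math.radians(2 * a)) for a in angles_deg)
    c = sum(math.cos(math.radians(2 * a)) for a in angles_deg)
    return (math.degrees(math.atan2(s, c)) / 2) % 180

# Three nearly-horizontal orientations; a naive arithmetic mean of
# [30, 150, 170] gives about 117 deg, wrongly suggesting a steep average.
avg = mean_orientation_deg([30, 150, 170])
```

The circular mean correctly reports an orientation close to horizontal (175°), which is the kind of summary an ensemble-averaging observer would need to compute.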
Directory of Open Access Journals (Sweden)
Behar E.
2006-12-01
This article is divided into two parts. In the first part, the authors present a comparison of the major techniques for the measurement of the molecular weight of macromolecules. The bibliographic results are gathered in several tables. In the second part, a comparative ebulliometer for the measurement of the number average molecular weight (Mn) of heavy crude oil fractions is described. The high efficiency of the apparatus is demonstrated with a preliminary study of atmospheric distillation residues and resins. The measurement of molecular weights up to 2000 g/mol is possible in less than 4 hours with an uncertainty of about 2%.
Adams, Karen
2015-01-01
In this article Karen Adams demonstrates how to incorporate group grammar techniques into a classroom activity. In the activity, students practice using the target grammar to do something they naturally enjoy: learning about each other.
Average geodesic distance of skeleton networks of Sierpinski tetrahedron
Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao
2018-04-01
The average distance is of central concern in the research of complex networks and is related to the Wiener sum, a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain an asymptotic formula for their average distances. To derive the formula, we develop a technique based on finite patterns of the integral of geodesic distance with respect to the self-similar measure on the Sierpinski tetrahedron.
Time average vibration fringe analysis using Hilbert transformation
International Nuclear Information System (INIS)
Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad
2010-01-01
Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
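The single-interferogram evaluation described above can be sketched in a few lines with an FFT-based Hilbert transform. The fringe pattern below is synthetic (a 40-fringe carrier with a sinusoidal phase modulation standing in for the vibration term), not the paper's experimental TV-holography data:

```python
import numpy as np
from scipy.signal import hilbert

def fringe_phase(intensity):
    """Recover the unwrapped phase of a single fringe pattern from its
    analytic signal (FFT-based Hilbert transform)."""
    analytic = hilbert(intensity - intensity.mean())  # remove the DC bias first
    return np.unwrap(np.angle(analytic))

# Synthetic carrier fringes with a smooth "vibration" phase modulation.
x = np.linspace(0.0, 1.0, 4000, endpoint=False)
carrier = 2.0 * np.pi * 40.0 * x                 # 40 fringes across the field
modulation = 1.5 * np.sin(2.0 * np.pi * x)       # phase term to be measured
intensity = 100.0 + 50.0 * np.cos(carrier + modulation)

recovered = fringe_phase(intensity) - carrier
# Fix the arbitrary global phase offset at the centre of the field.
recovered -= recovered[x.size // 2] - modulation[x.size // 2]
```

Because the carrier keeps the phase monotonically increasing, a single pattern suffices; no reference bias modulation or multiple fringe frames are needed.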
Fakih, Souhiela; Marriott, Jennifer L; Hussainy, Safeera Y
2016-04-01
The objectives of this study were to investigate how pharmacists, pharmacy assistants and women feel about community pharmacy involvement in weight management, and to identify what pharmacists, pharmacy assistants and women want in weight management educational resources. Three homogeneous and one heterogeneous nominal group (NG) sessions of up to 120-min duration were conducted with nine women, ten pharmacists and eight pharmacy assistants. The NG technique was used to conduct each session to determine the most important issues that should be considered surrounding community pharmacy weight management services and development of any educational resources. The heterogeneous NG session was used to finalise what women, pharmacists and pharmacy assistants want in educational resources. Overall, pharmacists, pharmacy assistants and women believe that pharmacy staff have an important role in the management of overweight and obesity because of their accessibility, trust and the availability of products in pharmacy. Regarding the most suitable healthcare professional(s) to treat overweight and obesity, the majority of participants believed that no one member of the healthcare team was most suitable and that overweight and obesity need to be treated by a multidisciplinary team. The importance of having weight management educational resources for pharmacy staff and women that come from trustworthy sources without financial gain or commercialisation was also emphasised. Pharmacists, pharmacy assistants and women feel that community pharmacies have a definite role to play in weight management. Pharmacy-specific weight management educational resources that are readily available to pharmacy staff and women are highly desirable. © 2015 Royal Pharmaceutical Society.
Shi, Shenggang; Cao, Jingcan; Feng, Li; Liang, Wenyan; Zhang, Liqiu
2014-07-15
The environmental pollution resulting from chemical accidents has caused increasingly serious concerns. Therefore, it is very important to be able to determine in advance the appropriate emergency treatment and disposal technology for different types of chemical accidents. However, the formulation of an emergency plan for chemical pollution accidents is considerably difficult due to the substantial uncertainty and complexity of such accidents. This paper explains how the event tree method was used to create 54 different scenarios for chemical pollution accidents, based on the polluted medium, dangerous characteristics and properties of chemicals involved. For each type of chemical accident, feasible emergency treatment and disposal technology schemes were established, considering the areas of pollution source control, pollutant non-proliferation, contaminant elimination and waste disposal. Meanwhile, in order to obtain the optimum emergency disposal technology schemes as soon as the chemical pollution accident occurs from the plan repository, the technique evaluation index system was developed based on group decision-improved analytical hierarchy process (AHP), and has been tested by using a sudden aniline pollution accident that occurred in a river in December 2012. Copyright © 2014 Elsevier B.V. All rights reserved.
Enticott, Joanne; Buck, Kimberly; Shawyer, Frances
2018-03-01
There is a lack of information on how to execute effective searches of the grey literature on refugee and asylum seeker groups for inclusion in systematic reviews. High-quality government reports and other grey literature relevant to refugees may not always be identified in conventional literature searches. During the process of conducting a recent systematic review, we developed a novel strategy for systematically searching international refugee and asylum seeker-related grey literature. The approach targets governmental health departments and statistical agencies, who have considerable access to refugee and asylum seeker populations for research purposes but typically do not publish findings in academic forums. Compared to a conventional grey literature search strategy, our novel technique yielded an eightfold increase in relevant high-quality grey sources that provided valuable content in informing our review. Incorporating a search of the grey literature into systematic reviews of refugee and asylum seeker research is essential to providing a more complete view of the evidence. Our novel strategy offers a practical and feasible method of conducting systematic grey literature searches that may be adaptable to a range of research questions, contexts, and resource constraints. Copyright © 2017 John Wiley & Sons, Ltd.
Wallace, Sarah J; Worrall, Linda; Rose, Tanya; Le Dorze, Guylaine; Cruice, Madeline; Isaksen, Jytte; Kong, Anthony Pak Hin; Simmons-Mackie, Nina; Scarinci, Nerina; Gauvreau, Christine Alary
2017-07-01
To identify important treatment outcomes from the perspective of people with aphasia and their families using the ICF as a frame of reference. The nominal group technique was used with people with aphasia and their family members in seven countries to identify and rank important treatment outcomes from aphasia rehabilitation. People with aphasia identified outcomes for themselves; and family members identified outcomes for themselves and for the person with aphasia. Outcomes were analysed using qualitative content analysis and ICF linking. A total of 39 people with aphasia and 29 family members participated in one of 16 nominal groups. Inductive qualitative content analysis revealed the following six themes: (1) Improved communication; (2) Increased life participation; (3) Changed attitudes through increased awareness and education about aphasia; (4) Recovered normality; (5) Improved physical and emotional well-being; and (6) Improved health (and support) services. Prioritized outcomes for both participant groups linked to all ICF components; primary activity/participation (39%) and body functions (36%) for people with aphasia, and activity/participation (49%) and environmental factors (28%) for family members. Outcomes prioritized by family members relating to the person with aphasia, primarily linked to body functions (60%). People with aphasia and their families identified treatment outcomes which span all components of the ICF. This has implications for research outcome measurement and clinical service provision which currently focuses on the measurement of body function outcomes. The wide range of desired outcomes generated by both people with aphasia and their family members, highlights the importance of collaborative goal setting within a family-centred approach to rehabilitation. These results will be combined with other stakeholder perspectives to establish a core outcome set for aphasia treatment research.
The average size of ordered binary subgraphs
van Leeuwen, J.; Hartel, Pieter H.
To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a
Energy Technology Data Exchange (ETDEWEB)
NONE
1991-07-01
A consultants' group met in Vienna from 23 September - 3 October 1991 to explore 'Production System Analysis and Economics for Tsetse Fly Mass-rearing and the Use of the Sterile Insect Technique in Eradication Programmes in Africa'. This report is based on their observations during working visits to the Entomology Unit of the IAEA Agricultural Laboratory at Seibersdorf, and on information supplied by the tsetse team and staff of the Joint FAO/IAEA Division's Insect and Pest Control Section. The consultants conducted a technical, operational and financial review of present rearing methods, equipment, philosophies and production capacities, taking into account one of the recommendations made at the 6th Session of the 'FAO Commission on African Animal Trypanosomiasis' held in June 1991 in Harare, Zimbabwe. This recommendation, related to the use of the Sterile Insect Technique (SIT), states that 'FAO, through the Joint FAO/IAEA Division, should further investigate and improve the use of sterile insects to strengthen the efficacy of tsetse surveys and, where applicable, consider the use of the SIT to support eradication campaigns where other techniques on their own will not achieve this objective'. In investigating the potential for improved tsetse mass-rearing and analyzing the present costs of pupa/distributable sterile fly production, the consultants noted that: 1. The Seibersdorf Tsetse Unit is conducting an effective research and development programme which strives to emulate a production facility while continuing to pursue R and D. The capacity of the present facility in Seibersdorf is practically limited to a colony size of about 150,000 breeding females. The release of sterile males in an eradication campaign of economical relevance would require a colony containing more than 500,000 female flies. Such a population can only be maintained in an organizational, operational and financially justifiable manner if the rearing technology is transferred from an
Reducing Noise by Repetition: Introduction to Signal Averaging
Hassan, Umer; Anwar, Muhammad Sabieh
2010-01-01
This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
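The core idea of signal averaging can be demonstrated in a few lines: averaging N synchronized repetitions of a noisy measurement leaves the repeating signal intact while shrinking uncorrelated noise by a factor of sqrt(N). The sine "evoked signal" and noise level below are illustrative choices, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2.0 * np.pi * 5.0 * t)        # the repeating underlying signal

def noisy_trial():
    # Each repetition is the same signal buried in independent Gaussian noise.
    return clean + rng.normal(0.0, 1.0, t.size)

# Averaging N synchronized repetitions leaves the signal untouched but
# shrinks the noise standard deviation by a factor of sqrt(N).
n_trials = 400
avg = np.mean([noisy_trial() for _ in range(n_trials)], axis=0)

rms_err_single = np.sqrt(np.mean((noisy_trial() - clean) ** 2))
rms_err_avg = np.sqrt(np.mean((avg - clean) ** 2))
```

With 400 trials the residual noise drops by roughly a factor of 20, recovering a signal that is invisible in any single trial.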
International Nuclear Information System (INIS)
Barragan, L; Bernal, P; Britton, K; Cerqueira, A; Estrella, O; Fraxeda, R; Garcia, E; Hilson, A; Lagos, G; Morales, R; Orellana, P; Padhy, A; Sixt, R; Soroa, V; Touya, E; Yerovi, M
2001-01-01
The economic and health situation in Latin America and the Caribbean differs between countries. For nuclear medicine, factors such as the availability of radiopharmaceuticals and equipment may present problems, as may a lack of knowledge and protocols. A major problem in this area is that some countries have no local production of radiopharmaceuticals, which makes them expensive. Besides, different national rules regulate the use of imported products, which may prohibit their widespread use. The state of equipment is notoriously non-homogeneous, but all of the countries have at least planar cameras with a PIP system (developed by the IAEA) that allows dynamic acquisition. There is a lack of processing software for obtaining the quantitative data necessary for better interpretation of the studies. A lack of knowledge and diffusion of radionuclide techniques among clinicians is another difficulty. Some of the problems above have been addressed by a group of regional experts, who worked for 15 months writing a Manual of Nephro-Urologic Procedures, considering the national and regional realities. In spite of the differences, we were able to write a harmonized manual for static renal scans, dynamic renal scans both baseline and with pharmacological interventions (diuretic and ACE inhibitors), transplant studies, cystography (direct and indirect), clearance studies, and radiopharmaceutical and equipment quality controls. It was also possible to develop a model renal software package to be used with PIP systems. We conclude that, in spite of the differences, with a common effort of the countries involved and with the significant support of the International Atomic Energy Agency, it is possible to improve the quality of nuclear nephrourology practice in the region (au)
Giebel, Sebastian; Miszczyk, Leszek; Slosarek, Krzysztof; Moukhtari, Leila; Ciceri, Fabio; Esteve, Jordi; Gorin, Norbert-Claude; Labopin, Myriam; Nagler, Arnon; Schmid, Christoph; Mohty, Mohamad
2014-09-01
Total body irradiation (TBI) is widely used for conditioning before hematopoietic cell transplantation. Its efficacy and toxicity may depend on many methodological aspects. The goal of the current study was to explore current clinical practice in this field. A questionnaire was sent to all centers collaborating in the European Group for Blood and Marrow Transplantation and included 19 questions regarding various aspects of TBI. A total of 56 centers from 23 countries responded. All centers differed with regard to at least 1 answer. The total maximum dose of TBI used for myeloablative transplantation ranged from 8 grays (Gy) to 14.4 Gy, whereas the dose per fraction was 1.65 Gy to 8 Gy. A total of 16 dose/fractionation modalities were identified. The dose rate ranged from 2.25 centigrays to 37.5 centigrays per minute. The treatment unit was linear accelerator (LINAC) (91%) or cobalt unit (9%). Beams (photons) used for LINAC were reported to range from 6 to 25 megavolts. The most frequent technique used for irradiation was "patient in 1 field," in which 2 fields and 2 patient positions per fraction are used (64%). In 41% of centers, patients were immobilized during TBI. Approximately 93% of centers used in vivo dosimetry with accepted discrepancies between the planned and measured doses of 1.5% to 10%. In 84% of centers, the lungs were shielded during irradiation. The maximum accepted dose for the lungs was 6 Gy to 14.4 Gy. TBI is an extremely heterogeneous treatment modality. The findings of the current study should warrant caution in the interpretation of clinical studies involving TBI. Further investigation is needed to evaluate how methodological differences influence outcome. Efforts to standardize the method should be considered. © 2014 American Cancer Society.
The B-dot Earth Average Magnetic Field
Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon
2013-01-01
The average Earth magnetic field is usually computed with complex mathematical models based on a mean square integral, and depending on the Earth magnetic model selected, the average field can take different values. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and does not depend on the Earth magnetic model; it does, however, depend on the satellite's magnetic torquers, which the known mathematical models do not take into consideration. The solution obtained with this new technique is simple enough that the flight software can be updated during flight and the control system can be given current gains for the magnetic torquers. Finally, the technique is verified and validated using flight data from a satellite that has been in orbit for three years.
DSCOVR Magnetometer Level 2 One Minute Averages
National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data
DSCOVR Magnetometer Level 2 One Second Averages
National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data
Spacetime averaging of exotic singularity universes
International Nuclear Information System (INIS)
Dabrowski, Mariusz P.
2011-01-01
Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.
NOAA Average Annual Salinity (3-Zone)
California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...
Clustering Batik Images using Fuzzy C-Means Algorithm Based on Log-Average Luminance
Directory of Open Access Journals (Sweden)
Ahmad Sanmorino
2012-06-01
Full Text Available Batik is a fabric made with a special staining technique called wax-resist dyeing and is part of a cultural heritage with high artistic value. In order to improve efficiency and give better semantics to the images, some researchers apply clustering algorithms for managing images before they can be retrieved. Image clustering is a process of grouping images based on their similarity. In this paper we attempt to provide an alternative method of grouping batik images using the fuzzy c-means (FCM) algorithm based on the log-average luminance of the batik. The FCM clustering algorithm works with fuzzy models that allow each data point to belong to every cluster with a degree of membership between 0 and 1. Log-average luminance (LAL) is the average value of the lighting in an image; it allows the lighting of different images to be compared. From the experiments that have been made, it can be concluded that the fuzzy c-means algorithm can be used for batik image clustering based on the log-average luminance of each image.
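The pipeline just described can be sketched as follows: compute the log-average luminance of each image, then run a minimal 1-D fuzzy c-means over those values. The random luminance fields below are hypothetical stand-ins for batik images, and the FCM implementation is a generic textbook version, not necessarily the paper's:

```python
import numpy as np

def log_average_luminance(image, delta=1e-6):
    """LAL = exp(mean(log(delta + L))) over all pixel luminances L;
    delta avoids log(0) for black pixels."""
    return float(np.exp(np.mean(np.log(delta + image))))

def fuzzy_c_means(x, c=2, m=2.0, iters=100, seed=0):
    """Minimal 1-D fuzzy c-means: returns cluster centers and the
    membership matrix U (degrees of membership in [0, 1], summing to 1)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                        # memberships sum to 1 per point
    for _ in range(iters):
        w = u ** m
        centers = (w @ x) / w.sum(axis=1)     # fuzzy-weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))    # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

# Hypothetical stand-ins for batik images: dark vs bright luminance fields.
rng = np.random.default_rng(1)
dark = [rng.uniform(0.05, 0.2, (32, 32)) for _ in range(5)]
bright = [rng.uniform(0.6, 0.9, (32, 32)) for _ in range(5)]
lal = np.array([log_average_luminance(im) for im in dark + bright])

centers, u = fuzzy_c_means(lal, c=2)
labels = np.argmax(u, axis=0)                 # hard labels from fuzzy memberships
```

Since LAL reduces each image to a single scalar, the clustering itself is cheap; the representational work is all in the luminance summary.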
Time series forecasting using ERNN and QR based on Bayesian model averaging
Pwasong, Augustine; Sathasivam, Saratha
2017-08-01
The Bayesian model averaging technique is a multi-model combination technique. The technique was employed to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique. The amalgamation produced a hybrid technique known as the hybrid ERNN-QR technique. The potentials of forecasting with the hybrid technique are compared with the forecasting capabilities of individual techniques of ERNN and QR. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean square error sense.
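The amalgamation step can be sketched with Bayesian model averaging weights derived from a BIC approximation, a common choice though not necessarily the paper's exact scheme. To keep the sketch self-contained, the ERNN is replaced by a simple linear trend fit, with a quadratic fit standing in for the QR component:

```python
import numpy as np

# Stand-in "models": linear and quadratic trend fits (the paper's ERNN is
# replaced by a linear fit purely to keep this sketch self-contained).
rng = np.random.default_rng(0)
t = np.arange(80, dtype=float)
y = 0.02 * t**2 + rng.normal(0.0, 1.0, t.size)   # series with quadratic trend

train, test = slice(0, 60), slice(60, 80)

def fit_predict(deg):
    coef = np.polyfit(t[train], y[train], deg)
    return np.polyval(coef, t)

preds = {1: fit_predict(1), 2: fit_predict(2)}

# BMA weights via the BIC approximation: w_k proportional to exp(-BIC_k / 2).
def bic(pred, deg):
    resid = y[train] - pred[train]
    n = resid.size
    return n * np.log(np.mean(resid**2)) + (deg + 1) * np.log(n)

bics = np.array([bic(preds[d], d) for d in (1, 2)])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                                      # normalized model weights

bma_forecast = w[0] * preds[1][test] + w[1] * preds[2][test]
mse_bma = np.mean((bma_forecast - y[test]) ** 2)
mse_lin = np.mean((preds[1][test] - y[test]) ** 2)
```

The weights concentrate on whichever component model explains the training data best, so the combined forecast tracks the stronger model while hedging against misspecification.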
DEFF Research Database (Denmark)
Forouzesh, Mojtaba; Siwakoti, Yam Prasad; Blaabjerg, Frede
2016-01-01
Magnetically coupled Y-source impedance network is a newly proposed structure with versatile features intended for various power converter applications e.g. in the renewable energy technologies. The voltage gain of the Y-source impedance network rises exponentially as a function of turns ratio, w...
40 CFR 76.11 - Emissions averaging.
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...
Determinants of College Grade Point Averages
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…
Computation of the bounce-average code
International Nuclear Information System (INIS)
Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.
1977-01-01
The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended
Rotational averaging of multiphoton absorption cross sections
Energy Technology Data Exchange (ETDEWEB)
Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
Sea Surface Temperature Average_SST_Master
National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic
Should the average tax rate be marginalized?
Czech Academy of Sciences Publication Activity Database
Feldman, N. E.; Katuščák, Peter
-, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf
A practical guide to averaging functions
Beliakov, Gleb; Calvo Sánchez, Tomasa
2016-01-01
This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
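Of the families mentioned, ordered weighted averaging (OWA) is the easiest to illustrate: the weights attach to the *sorted* inputs rather than to particular arguments, so one function interpolates between min, max, and the arithmetic mean. A minimal sketch:

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted average: weights apply to the inputs sorted in
    descending order, not to particular arguments."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0) and v.size == w.size
    return float(v @ w)

x = [0.2, 0.9, 0.5, 0.4]
n = len(x)
# Special weight vectors recover the classic averaging functions:
assert owa(x, [1, 0, 0, 0]) == max(x)          # all weight on the largest
assert owa(x, [0, 0, 0, 1]) == min(x)          # all weight on the smallest
mean_owa = owa(x, [1 / n] * n)                 # uniform weights: arithmetic mean
```

Intermediate weight vectors give the "orness" spectrum between these extremes, which is exactly the kind of tunable aggregation behaviour the book catalogues.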
MN Temperature Average (1961-1990) - Line
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
MN Temperature Average (1961-1990) - Polygon
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Average Bandwidth Allocation Model of WFQ
Directory of Open Access Journals (Sweden)
Tomáš Balogh
2012-01-01
Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model with examples and with simulation results obtained using the NS2 simulator.
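An iterative calculation of this kind can be sketched as a weighted max-min style allocation: each active flow receives link capacity in proportion to its weight, capped by its own input rate, and capacity freed by capped flows is redistributed. This is a plausible reading of the model's iteration, not the paper's exact formula:

```python
def wfq_average_bandwidth(link_rate, weights, input_rates):
    """Iteratively assign bandwidth: flows share capacity in proportion to
    their WFQ weights, but no flow receives more than its input rate; the
    surplus from capped flows is redistributed among the remaining flows."""
    n = len(weights)
    alloc = [0.0] * n
    active = set(range(n))
    capacity = float(link_rate)
    while active:
        total_w = sum(weights[i] for i in active)
        share = {i: capacity * weights[i] / total_w for i in active}
        capped = {i for i in active if input_rates[i] <= share[i]}
        if not capped:
            for i in active:                  # remaining flows are backlogged:
                alloc[i] = share[i]           # they take their weighted share
            break
        for i in capped:
            alloc[i] = input_rates[i]         # satisfied flows keep what they need
            capacity -= input_rates[i]
        active -= capped
    return alloc

# Example: a 100 Mbit/s link, three flows with weights 1:2:2.
alloc = wfq_average_bandwidth(100.0, [1, 2, 2], [10.0, 80.0, 80.0])
```

Here the light flow is satisfied with 10 Mbit/s and the two backlogged flows split the remaining 90 Mbit/s according to their weights.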
Nonequilibrium statistical averages and thermo field dynamics
International Nuclear Information System (INIS)
Marinaro, A.; Scarpetta, Q.
1984-01-01
An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the procedure proposed for the computation of time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.
An approximate analytical approach to resampling averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
The definition and computation of average neutron lifetimes
International Nuclear Information System (INIS)
Henry, A.F.
1983-01-01
A precise physical definition is offered for a class of average lifetimes for neutrons in an assembly of materials, either multiplying or not, or if the former, critical or not. A compact theoretical expression for the general member of this class is derived in terms of solutions to the transport equation. Three specific definitions are considered. Particular exact expressions for these are derived and reduced to simple algebraic formulas for one-group and two-group homogeneous bare-core models
Image compression using moving average histogram and RBF network
International Nuclear Information System (INIS)
Khowaja, S.; Ismaili, I.A.
2015-01-01
Modernization and globalization have made multimedia technology one of the fastest growing fields in recent times, but optimal use of bandwidth and storage remains a topic that attracts the research community. Considering that images have the lion's share in multimedia communication, efficient image compression techniques have become a basic need for optimal use of bandwidth and space. This paper proposes a novel method for image compression based on the fusion of a moving average histogram and an RBF (Radial Basis Function) network. The proposed technique employs the concept of reducing color intensity levels using a moving average histogram, followed by the correction of color intensity levels using RBF networks at the reconstruction phase. Existing methods have used low-resolution images for testing purposes, but the proposed method has been tested on various image resolutions to give a clear assessment of the technique. The proposed method has been tested on 35 images with varying resolutions and has been compared with existing algorithms in terms of CR (Compression Ratio), MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio) and computational complexity. The outcome shows that the proposed methodology is a better trade-off in terms of compression ratio, PSNR, which determines the quality of the image, and computational complexity. (author)
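The intensity-level reduction stage can be sketched as follows: smooth the histogram with a moving average, then place quantization boundaries so each reduced level holds an equal share of the smoothed histogram mass. This is one plausible reading of the technique, not the paper's exact algorithm, and the random test image is a stand-in:

```python
import numpy as np

def moving_average(h, window=5):
    """Smooth a histogram with a simple moving-average kernel."""
    kernel = np.ones(window) / window
    return np.convolve(h, kernel, mode="same")

def reduce_levels(image, levels=16, window=5):
    """Reduce 256 intensity levels to `levels` levels, with boundaries
    placed along the cumulative smoothed histogram (equal-mass bins)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    smooth = moving_average(hist, window)
    cdf = np.cumsum(smooth)
    cdf /= cdf[-1]
    # Map each of the 256 original intensities to one of `levels` bins.
    level_of = np.minimum((cdf * levels).astype(int), levels - 1)
    # Represent each bin by the mean of the intensities it absorbed.
    centers = np.array([np.mean(np.where(level_of == k)[0])
                        if np.any(level_of == k) else 0.0
                        for k in range(levels)])
    return centers[level_of[image]].astype(np.uint8), level_of

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.int64)
compressed, mapping = reduce_levels(img, levels=16)
```

In the full method an RBF network would then learn to correct the quantized intensities at reconstruction; the sketch covers only the forward reduction.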
40 CFR 63.652 - Emissions averaging provisions.
2010-07-01
... emissions more than the reference control technology, but the combination of the pollution prevention... emissions average. This must include any Group 1 emission points to which the reference control technology... agrees has a higher nominal efficiency than the reference control technology. Information on the nominal...
Testing averaged cosmology with type Ia supernovae and BAO data
Energy Technology Data Exchange (ETDEWEB)
Santos, B.; Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil); Coley, A.A. [Department of Mathematics and Statistics, Dalhousie University, Halifax, B3H 3J5 Canada (Canada); Devi, N. Chandrachani, E-mail: thoven@on.br, E-mail: aac@mathstat.dal.ca, E-mail: chandrachaniningombam@astro.unam.mx, E-mail: alcaniz@on.br [Instituto de Astronomía, Universidad Nacional Autónoma de México, Box 70-264, México City, México (Mexico)
2017-02-01
An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.
Testing averaged cosmology with type Ia supernovae and BAO data
International Nuclear Information System (INIS)
Santos, B.; Alcaniz, J.S.; Coley, A.A.; Devi, N. Chandrachani
2017-01-01
An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.
Van der Haegen, Lise; Cai, Qing; Seurinck, Ruth; Brysbaert, Marc
2011-01-01
The best established lateralized cerebral function is speech production, with the majority of the population having left hemisphere dominance. An important question is how to best assess the laterality of this function. Neuroimaging techniques such as functional Magnetic Resonance Imaging (fMRI) are increasingly used in clinical settings to…
Improved averaging for non-null interferometry
Fleig, Jon F.; Murphy, Paul E.
2013-09-01
Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
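The robust averaging described above can be sketched in a few lines. This is an illustrative assumption, not the authors' algorithm: maps whose invalid (NaN) area exceeds a threshold are rejected outright, each surviving map has its own mean subtracted to remove piston-like alignment drift, and the survivors are averaged pixel-wise while ignoring small voids. The `reject_frac` threshold is a made-up parameter.

```python
import numpy as np

def robust_phase_average(maps, reject_frac=0.1):
    # Reject maps with a large invalid area (large-area void/unwrap defects).
    kept = [m for m in maps if np.isnan(m).mean() <= reject_frac]
    # Subtract each map's mean to remove piston-like alignment drift.
    stack = np.array([m - np.nanmean(m) for m in kept])
    # Average and estimate variability pixel-wise, ignoring small voids.
    avg = np.nanmean(stack, axis=0)
    std = np.nanstd(stack, axis=0)
    return avg, std, len(kept)

good = np.zeros((4, 4))
drifted = np.full((4, 4), 2.0)      # pure piston offset, otherwise good
bad = np.full((4, 4), np.nan)       # large-area void: should be rejected
bad[0, 0] = 1.0
avg, std, n_kept = robust_phase_average([good, drifted, bad])
```

Here the drifted map contributes nothing after its mean is removed, and the mostly-void map is rejected before it can spoil the average.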
Directory of Open Access Journals (Sweden)
Tor Söderström
2012-12-01
Full Text Available This study analyses how changes in the design of screen-based computer simulation training influence the collaborative training process. Specifically, this study examines how the size of a group and a group’s composition influence the way these tools are used. One case study consisted of 18+18 dental students randomized into either collaborative 3D simulation training or conventional collaborative training. The students worked in groups of three. The other case consisted of 12 nursing students working in pairs (partners determined by the students) with a 3D simulator. The results showed that simulation training encouraged different types of dialogue compared to conventional training and that the communication patterns were enhanced in the nursing students’ dyadic simulation training. The concrete changes concerning group size and the composition of the group influenced the nursing students’ engagement with the learning environment and consequently the communication patterns that emerged. These findings suggest that smaller groups will probably be more efficient than larger groups in a free collaboration setting that uses screen-based simulation training.
International Nuclear Information System (INIS)
Schwantes, J.M.; Pellegrini, K.L.; Marsden, Oliva
2017-01-01
The Nuclear Forensics International Technical Working Group (ITWG) recently completed its fourth Collaborative Materials Exercise (CMX-4) in the 21 year history of the Group. This was also the largest materials exercise to date, with participating laboratories from 16 countries or international organizations. Exercise samples (including three separate samples of low enriched uranium oxide) were shipped as part of an illicit trafficking scenario, for which each laboratory was asked to conduct nuclear forensic analyses in support of a fictitious criminal investigation. In all, over 30 analytical techniques were applied to characterize exercise materials, for which ten of those techniques were applied to ITWG exercises for the first time. An objective review of the state of practice and emerging application of analytical techniques of nuclear forensic analysis based upon the outcome of this most recent exercise is provided. (author)
International Nuclear Information System (INIS)
Schwantes, Jon M.; Marsden, Oliva; Pellegrini, Kristi L.
2016-01-01
The Nuclear Forensics International Technical Working Group (ITWG) recently completed its fourth Collaborative Materials Exercise (CMX-4) in the 21 year history of the Group. This was also the largest materials exercise to date, with participating laboratories from 16 countries or international organizations. Moreover, exercise samples (including three separate samples of low enriched uranium oxide) were shipped as part of an illicit trafficking scenario, for which each laboratory was asked to conduct nuclear forensic analyses in support of a fictitious criminal investigation. In all, over 30 analytical techniques were applied to characterize exercise materials, for which ten of those techniques were applied to ITWG exercises for the first time. We performed an objective review of the state of practice and emerging application of analytical techniques of nuclear forensic analysis based upon the outcome of this most recent exercise.
International Nuclear Information System (INIS)
1999-01-01
High-priority opportunities are proposed for use of nuclear techniques to effect improved production and shipping of augmentative biological control agents. Proposed subprojects include use of ionizing radiation to improve the production of insect natural enemies on natural hosts/prey or on artificial diets. Other subprojects pertain to improving the ability to move beneficial organisms in international trade, and in using them in the field. Additional high priority activities were identified proposing use of nuclear techniques to produce sterile and/or substerile F-1 weed biological control agents to help evaluate potential impact on non-target species in the pre-release phase, integration of augmentative releases and F-1 sterility in IPM and area-wide pest management programmes, and utilization of by-products from SIT mass-rearing facilities in augmentative biological control programmes. (author)
Asynchronous Gossip for Averaging and Spectral Ranking
Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh
2014-08-01
We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
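The classical pairwise gossip scheme that both variants build on can be illustrated in a few lines. This is a toy synchronous simulation, not the asynchronous reinforcement-learning scheme the paper proposes: activating an edge replaces both endpoint values with their midpoint, which preserves the sum, so all nodes drift toward the global average.

```python
import numpy as np

def gossip_average(x, edges, steps, seed=0):
    # Classical pairwise gossip: at each step a random edge (i, j) is
    # activated and both nodes adopt the midpoint of their values.
    # The total sum is preserved, so nodes converge to the global mean.
    rng = np.random.default_rng(seed)
    x = np.array(x, dtype=float)
    for _ in range(steps):
        i, j = edges[rng.integers(len(edges))]
        mid = 0.5 * (x[i] + x[j])
        x[i] = x[j] = mid
    return x

# Ring of 4 nodes holding values 0..3; the true average is 1.5.
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
values = gossip_average([0.0, 1.0, 2.0, 3.0], ring, steps=500)
```

The difficulty highlighted in the abstract is precisely that the asynchronous version of this update need not preserve the sum, which motivates the paper's alternative scheme.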
Benchmarking statistical averaging of spectra with HULLAC
Klapisch, Marcel; Busquet, Michel
2008-11-01
Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).
Exploiting scale dependence in cosmological averaging
International Nuclear Information System (INIS)
Mattsson, Teppo; Ronkainen, Maria
2008-01-01
We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion
Estimating average glandular dose by measuring glandular rate in mammograms
International Nuclear Information System (INIS)
Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru
2003-01-01
The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)
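The pixel-to-glandular-rate conversion step can be sketched as follows. The calibration numbers are made up, and simple linear interpolation stands in for the neural-network conversion curve the paper actually trains from phantom measurements.

```python
import numpy as np

# Hypothetical calibration: phantom images give pixel values at known
# glandular rates; np.interp plays the role of the conversion curve
# (the paper determines this curve with a neural network).
cal_pixel = np.array([100.0, 150.0, 200.0, 250.0])   # phantom pixel values
cal_rate = np.array([0.0, 30.0, 60.0, 90.0])         # glandular rate (%)

def average_glandular_rate(image):
    # Convert every pixel to a glandular rate, then average over the breast.
    rates = np.interp(image, cal_pixel, cal_rate)
    return rates.mean()

image = np.array([[150.0, 200.0],
                  [150.0, 200.0]])
avg_rate = average_glandular_rate(image)   # 45.0 % for this toy image
```

The resulting average rate would then be fed into the standard dosimetry formula to obtain the individual average glandular dose.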
Regional averaging and scaling in relativistic cosmology
International Nuclear Information System (INIS)
Buchert, Thomas; Carfora, Mauro
2002-01-01
Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that the cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m + Ω̄_R + Ω̄_Λ + Ω̄_Q = 1, where Ω̄_m, Ω̄_R and Ω̄_Λ correspond to the standard Friedmannian parameters, while Ω̄_Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias
Average: the juxtaposition of procedure and context
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
Average-case analysis of numerical problems
2000-01-01
The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.
Grassmann Averages for Scalable Robust PCA
DEFF Research Database (Denmark)
Hauberg, Søren; Feragen, Aasa; Black, Michael J.
2014-01-01
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
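The element-wise trimmed average at the core of the Trimmed Grassmann Average can be sketched as a generic per-pixel trimmed mean. The data and trim level here are illustrative, not from the paper.

```python
import numpy as np

def trimmed_average(stack, trim=1):
    # For each pixel, sort the observations across images and drop the
    # `trim` smallest and largest before averaging. Gross pixel outliers
    # land in the trimmed tails and cannot bias the result.
    s = np.sort(stack, axis=0)
    return s[trim:len(stack) - trim].mean(axis=0)

stack = np.array([
    [1.0, 2.0],
    [1.0, 2.0],
    [1.0, 2.0],
    [999.0, -999.0],   # corrupted pixels ("big data" implies "big outliers")
])
robust = trimmed_average(stack, trim=1)
```

An ordinary mean of this stack would be pulled far from [1, 2] by the corrupted row; the trimmed version discards it per pixel.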
Model averaging, optimal inference and habit formation
Directory of Open Access Journals (Sweden)
Thomas H B FitzGerald
2014-06-01
Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
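The weighting rule described above, predictions weighted by posterior model probability, can be sketched with toy numbers. The evidences and predictions below are illustrative, and the log-sum-exp shift is a standard numerical safeguard.

```python
import numpy as np

def model_average(predictions, log_evidence, log_prior=None):
    # Posterior model probability is proportional to evidence times prior;
    # the averaged prediction weights each model's prediction accordingly.
    log_evidence = np.asarray(log_evidence, dtype=float)
    if log_prior is None:
        log_prior = np.zeros_like(log_evidence)   # uniform prior over models
    logw = log_evidence + log_prior
    w = np.exp(logw - logw.max())                 # shift for numerical stability
    w /= w.sum()
    return w, np.dot(w, predictions)

# Two models predict 1.0 and 3.0; with equal evidence the average is 2.0.
w, pred = model_average([1.0, 3.0], log_evidence=[-5.0, -5.0])
```

Because evidence penalizes complexity as well as rewarding accuracy, the weights automatically encode the accuracy-complexity trade-off the abstract emphasizes.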
Generalized Jackknife Estimators of Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...
Average beta measurement in EXTRAP T1
International Nuclear Information System (INIS)
Hedin, E.R.
1988-12-01
Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_θ in Extrap T1 is described. The results of a series of measurements yielding β_θ as a function of externally applied toroidal field are presented. (author)
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS
International Nuclear Information System (INIS)
2005-01-01
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department
Bayesian Averaging is Well-Temperated
DEFF Research Database (Denmark)
Hansen, Lars Kai
2000-01-01
Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...
Gibbs equilibrium averages and Bogolyubov measure
International Nuclear Information System (INIS)
Sankovich, D.P.
2011-01-01
Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure
High average-power induction linacs
International Nuclear Information System (INIS)
Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.
1989-01-01
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs
Function reconstruction from noisy local averages
International Nuclear Information System (INIS)
Chen Yu; Huang Jianguo; Han Weimin
2008-01-01
A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show computational performance of the method, with the regularization parameters selected by different strategies
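A common concrete instance of this kind of reconstruction is Tikhonov regularization. The sketch below is a generic regularized least-squares solve under assumed averaging weights, not necessarily the paper's exact scheme: given local averages b = A f + noise, it solves the normal equations (AᵀA + αI) f = Aᵀb.

```python
import numpy as np

def reconstruct(A, b, alpha):
    # Tikhonov-regularized least squares: minimize ||A f - b||^2 + alpha ||f||^2.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Local averages over adjacent pairs of a 4-point signal (illustrative weights).
A = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5]])
f_true = np.array([1.0, 1.0, 1.0, 1.0])
b = A @ f_true                       # noise-free averages for this test
f_rec = reconstruct(A, b, alpha=1e-8)
```

With noisy b, a larger α trades fidelity for stability; the abstract's "different strategies" refer to ways of choosing that parameter.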
A singularity theorem based on spatial averages
Indian Academy of Sciences (India)
Journal of Physics, July 2007, pp. 31–47. A singularity theorem based on spatial ... In this paper I would like to present a result which confirms – at least partially – ... A detailed analysis of how the model fits in with the ... Further, the statement that the spatial average ... Financial support under grants FIS2004-01626 and no. ...
Multiphase averaging of periodic soliton equations
International Nuclear Information System (INIS)
Forest, M.G.
1979-01-01
The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations
A dynamic analysis of moving average rules
Chiarella, C.; He, X.Z.; Hommes, C.H.
2006-01-01
The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
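A representative MA rule of the type studied is the standard short/long crossover: hold a long position while the short-window average sits above the long-window average, and a short position otherwise. Window lengths and prices below are illustrative.

```python
import numpy as np

def ma_signal(prices, short=2, long=4):
    # Short/long moving-average crossover: +1 (long) when the short MA
    # exceeds the long MA on a common date, -1 (short) otherwise.
    p = np.asarray(prices, dtype=float)
    sma = np.convolve(p, np.ones(short) / short, mode="valid")
    lma = np.convolve(p, np.ones(long) / long, mode="valid")
    k = long - short                   # align the two windows on common dates
    return np.where(sma[k:] > lma, 1, -1)

prices = [1, 2, 3, 4, 5, 4, 3, 2, 1]   # rise then fall
signal = ma_signal(prices)             # flips from +1 to -1 after the peak
```

On this toy path the rule is long during the rise and flips short once the decline drags the short average below the long one.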
Essays on model averaging and political economics
Wang, W.
2013-01-01
This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple
2010-01-01
7 CFR § 1209.12 (2010-01-01): On average. Agriculture Regulations of the Department of Agriculture (Continued), Agricultural Marketing Service (Marketing Agreements...), Mushroom Promotion, Research, and Consumer Information Order, Definitions § 1209...
High average-power induction linacs
International Nuclear Information System (INIS)
Prono, D.S.; Barrett, D.; Bowles, E.
1989-01-01
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs
Average Costs versus Net Present Value
E.A. van der Laan (Erwin); R.H. Teunter (Ruud)
2000-01-01
While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives
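For reference, the average-cost side of the comparison for the classical EOQ model can be computed directly. Symbols follow the textbook convention (demand rate d, fixed order cost K, holding cost h per unit per time); the numbers are illustrative.

```python
import math

def eoq(d, K, h):
    # Economic order quantity minimizing long-run average cost:
    # ordering cost per unit time (d*K/q) plus average holding cost (h*q/2).
    q = math.sqrt(2 * d * K / h)
    avg_cost = d * K / q + h * q / 2
    return q, avg_cost

q, cost = eoq(d=100.0, K=50.0, h=1.0)   # q = 100.0, cost = 100.0 per unit time
```

The NPV treatment discounts the same cash flows instead of averaging them, which is what makes the two frameworks disagree outside special conditions.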
Average beta-beating from random errors
Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department
2018-01-01
The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.
Reliability Estimates for Undergraduate Grade Point Average
Westrick, Paul A.
2017-01-01
Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…
Energy Technology Data Exchange (ETDEWEB)
NONE
1995-07-01
The field of aerosol characterization and source identification covers a wide range of scientific and technical activities in many institutions, in both developed and developing countries. This field includes research and applications on urban air pollution, source apportionment of suspended particulate matter, radioactive aerosol particles, organic compounds carried on particulate matter, elemental characterization of particles, and other areas. The subject of this AGM focused on the use of accelerator-based nuclear analytical techniques for determination of elemental composition of particles (by either bulk or single particle analysis) and the use of accumulated knowledge for source identification.
International Nuclear Information System (INIS)
1995-01-01
The field of aerosol characterization and source identification covers a wide range of scientific and technical activities in many institutions, in both developed and developing countries. This field includes research and applications on urban air pollution, source apportionment of suspended particulate matter, radioactive aerosol particles, organic compounds carried on particulate matter, elemental characterization of particles, and other areas. The subject of this AGM focused on the use of accelerator-based nuclear analytical techniques for determination of elemental composition of particles (by either bulk or single particle analysis) and the use of accumulated knowledge for source identification
The Health Effects of Income Inequality: Averages and Disparities.
Truesdale, Beth C; Jencks, Christopher
2016-01-01
Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.
Statistics on exponential averaging of periodograms
Energy Technology Data Exchange (ETDEWEB)
Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering
1994-11-01
The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
Statistics on exponential averaging of periodograms
International Nuclear Information System (INIS)
Peeters, T.T.J.M.; Ciftcioglu, Oe.
1994-11-01
The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
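The exponential averaging of periodograms described in the two records above amounts to the recursive update S ← (1 − λ)·S + λ·P for each new raw periodogram P. The smoothing constant and test signal below are illustrative.

```python
import numpy as np

def exp_avg_psd(segments, lam=0.2):
    # Exponentially averaged periodogram: each new segment's raw
    # periodogram P updates the running PSD estimate S with weight lam
    # (the time constant of the averaging is set by lam).
    S = None
    for seg in segments:
        P = np.abs(np.fft.rfft(seg)) ** 2 / len(seg)   # raw periodogram
        S = P if S is None else (1 - lam) * S + lam * P
    return S

rng = np.random.default_rng(1)
segments = rng.standard_normal((50, 64))   # 50 white-noise segments of length 64
psd = exp_avg_psd(segments)
```

Unlike a plain (Bartlett-style) average, the exponential weighting lets the estimate track slowly varying spectra, at the cost of the non-trivial estimator statistics the papers analyze.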
ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE
Directory of Open Access Journals (Sweden)
Carmen BOGHEAN
2013-12-01
Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of the average work productivity by the factors affecting it is carried out by means of the u-substitution method.
Weighted estimates for the averaging integral operator
Czech Academy of Sciences Publication Activity Database
Opic, Bohumír; Rákosník, Jiří
2010-01-01
Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231
Average Transverse Momentum Quantities Approaching the Lightfront
Boer, Daniel
2015-01-01
In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...
Time-averaged MSD of Brownian motion
Andreanov, Alexei; Grebenkov, Denis
2012-01-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...
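The TAMSD functional studied in this record is straightforward to compute from a single trajectory. A minimal Python sketch, illustrative only, with an arbitrary diffusion coefficient and trajectory length:

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged mean-square displacement of one trajectory at a lag:
    the average over t of (x[t + lag] - x[t])**2."""
    d = x[lag:] - x[:-lag]
    return float(np.mean(d ** 2))

rng = np.random.default_rng(1)
D, dt, n = 0.5, 1.0, 100_000
steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=n)
x = np.concatenate(([0.0], np.cumsum(steps)))  # 1-D Brownian trajectory
# For Brownian motion, E[TAMSD(lag)] = 2*D*lag*dt; a single-trajectory
# estimate fluctuates around this mean, and those fluctuations are what
# the probability density derived in the paper describes.
```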
Average configuration of the geomagnetic tail
International Nuclear Information System (INIS)
Fairfield, D.H.
1979-01-01
Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = −0.9 Y_SM − 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed.
Changing mortality and average cohort life expectancy
Directory of Open Access Journals (Sweden)
Robert Schoen
2005-10-01
Full Text Available Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality, the cross-sectional average length of life (CAL), which has been seen as less sensitive to period changes, has been proposed, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.
Environmental stresses can alleviate the average deleterious effect of mutations
Directory of Open Access Journals (Sweden)
Leibler Stanislas
2003-05-01
Full Text Available Abstract Background Fundamental questions in evolutionary genetics, including the possible advantage of sexual reproduction, depend critically on the effects of deleterious mutations on fitness. Limited existing experimental evidence suggests that, on average, such effects tend to be aggravated under environmental stresses, consistent with the perception that stress diminishes the organism's ability to tolerate deleterious mutations. Here, we ask whether there are also stresses with the opposite influence, under which the organism becomes more tolerant to mutations. Results We developed a technique, based on bioluminescence, which allows accurate automated measurements of bacterial growth rates at very low cell densities. Using this system, we measured growth rates of Escherichia coli mutants under a diverse set of environmental stresses. In contrast to the perception that stress always reduces the organism's ability to tolerate mutations, our measurements identified stresses that do the opposite – that is, despite decreasing wild-type growth, they alleviate, on average, the effect of deleterious mutations. Conclusions Our results show a qualitative difference between various environmental stresses ranging from alleviation to aggravation of the average effect of mutations. We further show how the existence of stresses that are biased towards alleviation of the effects of mutations may imply the existence of average epistatic interactions between mutations. The results thus offer a connection between the two main factors controlling the effects of deleterious mutations: environmental conditions and epistatic interactions.
Post-model selection inference and model averaging
Directory of Open Access Journals (Sweden)
Georges Nguefack-Tsague
2011-07-01
Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
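The remark that PMSEs are model averaging with 0-1 random weights can be made concrete. A hypothetical sketch (the criterion values and the smooth Akaike-weight scheme are illustrative choices, not the paper's construction):

```python
import numpy as np

def aic_weights(aics, smooth=True):
    """Model-averaging weights from AIC values.  smooth=True gives Akaike
    weights w_m proportional to exp(-0.5 * delta_AIC_m); smooth=False gives
    the 0-1 weights of pure model selection, a special case of averaging."""
    a = np.asarray(aics, dtype=float)
    if smooth:
        w = np.exp(-0.5 * (a - a.min()))  # subtract min for numerical stability
        return w / w.sum()
    w = np.zeros_like(a)
    w[np.argmin(a)] = 1.0                 # all weight on the selected model
    return w

w_smooth = aic_weights([100.0, 102.0, 110.0])
w_select = aic_weights([100.0, 102.0, 110.0], smooth=False)
```

The `smooth=False` branch is exactly the 0-1 weighting the abstract mentions; the smooth branch is one of the many averaging schemes a risk comparison could include.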
Directory of Open Access Journals (Sweden)
Ana Carolina Robbe Mathias
2007-01-01
Full Text Available Motivational interviewing and relapse prevention are treatment approaches for individuals with alcohol or drug abuse problems. This article describes a group therapy treatment case, showing the association of both techniques. Each step of the treatment is demonstrated and exemplified, along with its results. The results and the advantages of the techniques are discussed.
Multiple-level defect species evaluation from average carrier decay
Debuf, Didier
2003-10-01
An expression for the average decay is determined by solving the the carrier continuity equations, which include terms for multiple defect recombination. This expression is the decay measured by techniques such as the contactless photoconductance decay method, which determines the average or volume integrated decay. Implicit in the above is the requirement for good surface passivation such that only bulk properties are observed. A proposed experimental configuration is given to achieve the intended goal of an assessment of the type of defect in an n-type Czochralski-grown silicon semiconductor with an unusually high relative lifetime. The high lifetime is explained in terms of a ground excited state multiple-level defect system. Also, minority carrier trapping is investigated.
Operator product expansion and its thermal average
Energy Technology Data Exchange (ETDEWEB)
Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)
1998-05-01
QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules when the temperature is not too low. (orig.) 7 refs.
Fluctuations of wavefunctions about their classical average
International Nuclear Information System (INIS)
Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H
2003-01-01
Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.
Phase-averaged transport for quasiperiodic Hamiltonians
Bellissard, J; Schulz-Baldes, H
2002-01-01
For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.
Baseline-dependent averaging in radio interferometry
Wijnholds, S. J.; Willis, A. G.; Salvini, S.
2018-05-01
This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
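The core idea of BDA can be sketched in a few lines: short baselines decorrelate more slowly, so they tolerate longer time averaging for the same decorrelation loss. A toy Python sketch; the averaging factor, the cap, and the data are assumptions for illustration, not the SKA scheme analyzed in the paper:

```python
import numpy as np

def bda_average(vis, baseline_len, max_baseline, max_factor=16):
    """Average n consecutive visibility samples, with n inversely
    proportional to baseline length (capped), so that the decorrelation
    loss stays roughly uniform across baselines."""
    n = max(1, min(max_factor, int(round(max_baseline / baseline_len))))
    trimmed = vis[:len(vis) // n * n]
    return trimmed.reshape(-1, n).mean(axis=1), n

vis = np.ones(64, dtype=complex)                       # toy constant visibilities
short_avg, n_short = bda_average(vis, 100.0, 1600.0)   # short baseline: heavy averaging
long_avg, n_long = bda_average(vis, 1600.0, 1600.0)    # longest baseline: none
```

Here the short baseline's data volume drops by a factor of 16 while the longest baseline is untouched, which is the mechanism behind the >80 per cent volume reduction quoted above.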
Multistage parallel-serial time averaging filters
International Nuclear Information System (INIS)
Theodosiou, G.E.
1980-01-01
Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter could be considered as a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)
Time-averaged MSD of Brownian motion
International Nuclear Information System (INIS)
Andreanov, Alexei; Grebenkov, Denis S
2012-01-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution
Time-dependent angularly averaged inverse transport
International Nuclear Information System (INIS)
Bal, Guillaume; Jollivet, Alexandre
2009-01-01
This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain
Independence, Odd Girth, and Average Degree
DEFF Research Database (Denmark)
Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter
2011-01-01
We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1) / 7. ...
Bootstrapping Density-Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...
Average Nuclear properties based on statistical model
International Nuclear Information System (INIS)
El-Jaick, L.J.
1974-01-01
The rough properties of nuclei were investigated with the statistical model, in systems with the same and with different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter case. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient a_s, the great influence exercised by the Coulomb energy and the nuclear compressibility was verified. To obtain a good fit to the beta-stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.) [pt
Time-averaged MSD of Brownian motion
Andreanov, Alexei; Grebenkov, Denis S.
2012-07-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.
De Luca, G.; Magnus, J.R.
2011-01-01
In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.
Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.
Dirks, Jean; And Others
1983-01-01
Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)
40 CFR 63.7541 - How do I demonstrate continuous compliance under the emission averaging provision?
2010-07-01
... solid fuel boilers participating in the emissions averaging option as determined in § 63.7522(f) and (g... this section. (i) For each existing solid fuel boiler participating in the emissions averaging option... below the applicable limit. (ii) For each group of boilers participating in the emissions averaging...
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
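The trajectory averaging estimator discussed here is simply the running average of the stochastic-approximation iterates. A minimal Robbins-Monro sketch; the step-size exponent and the toy root-finding problem are illustrative assumptions, not the SAMCMC setting of the paper:

```python
import numpy as np

def robbins_monro_averaged(h_noisy, theta0, n_iter, seed=0):
    """Robbins-Monro iteration theta <- theta - a_k * h(theta) with
    trajectory (Polyak-Ruppert) averaging of the iterates."""
    rng = np.random.default_rng(seed)
    theta, avg = theta0, 0.0
    for k in range(1, n_iter + 1):
        a_k = 1.0 / k ** 0.7           # step size with exponent in (1/2, 1)
        theta -= a_k * h_noisy(theta, rng)
        avg += (theta - avg) / k       # running average of the trajectory
    return theta, avg

# Toy problem: locate the root of h(x) = x - 2 from noisy evaluations.
h = lambda th, rng: (th - 2.0) + rng.normal(0.0, 1.0)
last_iterate, traj_avg = robbins_monro_averaged(h, 0.0, 20_000)
```

The averaged estimate typically has a smaller variance than the last iterate, which is the asymptotic efficiency property the paper establishes for SAMCMC under mild conditions.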
Averaged null energy condition from causality
Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein
2017-07-01
Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_uu, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_uuu···u ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.
Beta-energy averaging and beta spectra
International Nuclear Information System (INIS)
Stamatelatos, M.G.; England, T.R.
1976-07-01
A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.
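A crude version of such spectrum averaging can be written down directly. This sketch uses the allowed-decay spectrum shape with the Fermi function neglected, an assumption made purely for illustration; the method in the record is far more accurate:

```python
import numpy as np

ME = 0.511  # electron rest mass energy, MeV

def allowed_shape(E, q):
    """Allowed beta spectrum shape N(E) ~ p * W * (Q - E)^2 with the Fermi
    function neglected; E is kinetic energy, W = E + me total energy, p momentum."""
    W = E + ME
    p = np.sqrt(np.maximum(W ** 2 - ME ** 2, 0.0))
    return p * W * (q - E) ** 2

def average_beta_energy(q, n=20_000):
    """Spectrum-averaged energy <E> = integral(E*N) / integral(N); on a
    uniform grid the ratio of sums equals the weighted mean."""
    E = np.linspace(0.0, q, n)
    w = allowed_shape(E, q)
    return float(np.sum(E * w) / np.sum(w))

e_avg = average_beta_energy(1.0)  # endpoint energy Q = 1 MeV
# <E> falls between Q/3 (nonrelativistic limit) and Q/2 (ultrarelativistic limit).
```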
Asymptotic Time Averages and Frequency Distributions
Directory of Open Access Journals (Sweden)
Muhammad El-Taha
2016-01-01
Full Text Available Consider an arbitrary nonnegative deterministic process {X(t), t≥0} (in a stochastic setting, a fixed realization, i.e., a sample path, of the underlying stochastic process) with state space S=(−∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give them the choice to work with the time average of a process or its frequency distribution function and to go back and forth between the two under a mild condition.
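The equivalence in question is easy to see on a concrete sample path. An illustrative Python check, where the periodic path and the function f are arbitrary choices:

```python
import numpy as np

# A deterministic discrete-time sample path cycling through a finite state set.
path = np.resize([0.0, 1.0, 1.0, 3.0], 4 * 2500)  # 10,000 time steps
f = lambda x: x ** 2

# Long-run time average of f along the sample path.
time_avg = float(np.mean(f(path)))

# Expectation of f under the path's long-run frequency distribution.
states, counts = np.unique(path, return_counts=True)
freq_expect = float(np.sum(f(states) * counts / counts.sum()))
# Both equal (0 + 1 + 1 + 9) / 4 = 2.75 for this path.
```

For this bounded periodic path the two quantities coincide exactly; the paper's contribution is the precise condition under which the equality survives in the long-run limit for general (possibly unbounded) paths.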
Chaotic Universe, Friedmannian on the average 2
Energy Technology Data Exchange (ETDEWEB)
Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij
1980-11-01
The cosmological solutions are found for the equations for correlators describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26 the solution for the scale factor lies higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes that remain finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered; their influence is equivalent to the contribution from an ultrarelativistic gas with the corresponding energy density and pressure. Restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.
High-average-power diode-pumped Yb: YAG lasers
International Nuclear Information System (INIS)
Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B
1999-01-01
A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high-average-power Yb:YAG lasers that utilize a rod-configured gain element. Previously, this rod-configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High-beam-quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual-rod configuration consisting of two 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual-rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 ns pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes; (2) compound laser rods with flanged nonabsorbing endcaps fabricated by diffusion bonding; (3) techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished-barrel rods.
International Nuclear Information System (INIS)
Gondi, Vinai; Cui, Yunfeng; Mehta, Minesh P.; Manfredi, Denise; Xiao, Ying; Galvin, James M.; Rowley, Howard; Tome, Wolfgang A.
2015-01-01
cases passed the pre-enrollment credentialing, the pretreatment centralized review disqualified 5.7% of reviewed cases, prevented unacceptable deviations in 24% of reviewed cases, and limited the final unacceptable deviation rate to 5%. Thus, pretreatment review is deemed necessary in future hippocampal avoidance trials and is potentially useful in other similarly challenging radiation therapy technique trials
Energy Technology Data Exchange (ETDEWEB)
Gondi, Vinai, E-mail: vgondi@chicagocancer.org [Cadence Brain Tumor Center and CDH Proton Center, Warrenville, Illinois (United States); University of Wisconsin School of Medicine & Public Health, Madison, Wisconsin (United States); Cui, Yunfeng [Duke University School of Medicine, Durham, North Carolina (United States); Mehta, Minesh P. [University of Maryland School of Medicine, Baltimore, Maryland (United States); Manfredi, Denise [Radiation Therapy Oncology Group—RTQA, Philadelphia, Pennsylvania (United States); Xiao, Ying; Galvin, James M. [Thomas Jefferson University Hospital, Philadelphia, Pennsylvania (United States); Rowley, Howard [University of Wisconsin School of Medicine & Public Health, Madison, Wisconsin (United States); Tome, Wolfgang A. [Montefiore Medical Center and Institute for Onco-Physics, Albert Einstein College of Medicine of Yeshiva University, Bronx, New York (United States)
2015-03-01
cases passed the pre-enrollment credentialing, the pretreatment centralized review disqualified 5.7% of reviewed cases, prevented unacceptable deviations in 24% of reviewed cases, and limited the final unacceptable deviation rate to 5%. Thus, pretreatment review is deemed necessary in future hippocampal avoidance trials and is potentially useful in other similarly challenging radiation therapy technique trials.
Average resonance capture studies of 102Ru
International Nuclear Information System (INIS)
Shi, Z.R.; Casten, R.F.; Stachel, J.; Bruce, A.M.
1984-01-01
The ¹⁰²Ru nucleus has been investigated via the ARC technique, which ensures a complete set of J^π = 0⁺, 1^±, 2^±, 3^±, 4^±, and 5⁺ levels up to 2 MeV. The results are discussed in the framework of the IBA-1 with consistent Q. The calculations show good agreement with the empirical data, especially for the 0₂⁺ state, suggesting that it can be described in terms of collective degrees of freedom.
Yu, Gloria Qingyu; Yu, Peiqiang
2015-09-01
The objectives of this project were to (1) combine vibrational spectroscopy with chemometric multivariate techniques to determine the effect of processing applications on molecular structural changes of lipid biopolymer, mainly related to functional groups, in green- and yellow-type Crop Development Centre (CDC) pea varieties [CDC Strike (green-type) vs. CDC Meadow (yellow-type)] that occurred during various processing applications; (2) relatively quantify the effect of processing applications on the antisymmetric CH3 ("CH3as") and CH2 ("CH2as") (ca. 2960 and 2923 cm⁻¹, respectively), symmetric CH3 ("CH3s") and CH2 ("CH2s") (ca. 2873 and 2954 cm⁻¹, respectively) functional groups and carbonyl C=O ester (ca. 1745 cm⁻¹) spectral intensities, as well as the ratios of antisymmetric CH3 to antisymmetric CH2 (CH3as to CH2as), symmetric CH3 to symmetric CH2 (CH3s to CH2s), and carbonyl C=O ester peak area to total CH peak area (C=O ester to CH); and (3) illustrate non-invasive techniques to detect the sensitivity of individual molecular functional groups to the various processing applications in the recently developed different types of pea varieties. The hypothesis of this research was that processing applications modified the molecular structure profiles in the processed products as opposed to the original unprocessed pea seeds. The results showed that the different processing methods had different impacts on lipid molecular functional groups, and different lipid functional groups had different sensitivity to the various heat processing applications. These changes were detected by advanced molecular spectroscopy with chemometric techniques and may be highly related to lipid utilization and availability. The multivariate molecular spectral analyses, cluster analysis, and principal component analysis of the original spectra (without spectral parameterization) were unable to fully distinguish the structural differences in the
FEL system with homogeneous average output
Energy Technology Data Exchange (ETDEWEB)
Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph
2018-01-16
A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest, then accelerating the particles to full energy to produce distinct phase-energy correlations, or chirps, on each bunch train, each independently controlled by the choice of phase offset. The earlier trains are more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M_56, which are selected to compress all three bunch trains at the FEL with higher order terms managed.
Quetelet, the average man and medical knowledge.
Caponi, Sandra
2013-01-01
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.
Asymmetric network connectivity using weighted harmonic averages
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach in devising a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.
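The weighted harmonic average at the heart of the GEN measure can be sketched as follows. This is a minimal illustration of the averaging operation only, not the paper's full recursive definition, and the example values are made up:

```python
def weighted_harmonic_mean(values, weights):
    """Weighted harmonic average: sum(w) / sum(w / x)."""
    if len(values) != len(weights):
        raise ValueError("values and weights must have the same length")
    return sum(weights) / sum(w / x for x, w in zip(values, weights))

# Harmonic averaging is dominated by the smallest values, so one strong
# (low-"distance") connection keeps two nodes close even when the other
# connections are weak.
closeness = weighted_harmonic_mean([1.0, 10.0], [1.0, 1.0])
print(round(closeness, 3))  # 1.818 (vs. arithmetic mean 5.5)
```

This asymmetry between strong and weak links is what lets the measure distinguish topologies that ordinary shortest-path distances treat as identical.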
Angle-averaged Compton cross sections
International Nuclear Information System (INIS)
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between the photon direction and the electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
Average Gait Differential Image Based Human Recognition
Directory of Open Access Journals (Sweden)
Jinyan Chen
2014-01-01
Full Text Available The difference between adjacent frames of a human walking sequence contains useful information for gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
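The accumulation of silhouette differences described above can be sketched in a few lines. This is a minimal pure-Python illustration with made-up 2x2 "silhouettes"; a real implementation would operate on full binary silhouette images (e.g. as NumPy/OpenCV arrays):

```python
def agdi(frames):
    """Average gait differential image: mean absolute difference
    between adjacent binary silhouette frames."""
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for i in range(h):
            for j in range(w):
                acc[i][j] += abs(cur[i][j] - prev[i][j])
    n = len(frames) - 1  # number of adjacent-frame pairs
    return [[v / n for v in row] for row in acc]

# Three 2x2 "silhouettes": a pixel that flips every frame gets a high
# AGDI value (kinetic information); a static pixel gets zero.
frames = [[[0, 1], [0, 0]],
          [[1, 1], [0, 0]],
          [[0, 1], [0, 0]]]
print(agdi(frames))  # [[1.0, 0.0], [0.0, 0.0]]
```

Feature extraction (2DPCA in the paper) would then be applied to the resulting image.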
Reynolds averaged simulation of unsteady separated flow
International Nuclear Information System (INIS)
Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.
2003-01-01
The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flows around a square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better agreement with the available experimental data than has been achieved with steady computation.
Angle-averaged Compton cross sections
Energy Technology Data Exchange (ETDEWEB)
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between the photon direction and the electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
Chaotic renormalization group approach to disordered systems
International Nuclear Information System (INIS)
Oliveira, P.M.C. de; Continentino, M.A.; Makler, S.S.; Anda, E.V.
1984-01-01
We study the electronic properties of the disordered linear chain using a technique previously developed by some of the authors for an ordered chain. The equations of motion for the one-electron Green function are obtained and the configuration average is done according to the GK scheme. The dynamical problem is transformed, using a renormalization group procedure, into a two-dimensional map. The properties of this map are investigated and related to the localization properties of the electronic system. (Author) [pt
Chao, T.T.; Sanzolone, R.F.
1992-01-01
Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor for sample throughput, especially with the recent application of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, the technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents in their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.
The partially averaged field approach to cosmic ray diffusion
International Nuclear Information System (INIS)
Jones, F.C.; Birmingham, T.J.; Kaiser, T.B.
1976-08-01
The kinetic equation for particles interacting with turbulent fluctuations is derived by a new nonlinear technique which successfully corrects the difficulties associated with quasilinear theory. In this new method the effects of the fluctuations are evaluated along particle orbits which themselves include the effects of a statistically averaged subset of the possible configurations of the turbulence. The new method is illustrated by calculating the pitch angle diffusion coefficient D_μμ for particles interacting with slab-model magnetic turbulence, i.e., magnetic fluctuations linearly polarized transverse to a mean magnetic field. Results are compared with those of quasilinear theory and also with those of Monte Carlo calculations. The major effect of the nonlinear treatment in this illustration is the determination of D_μμ in the vicinity of 90 deg pitch angles, where quasilinear theory breaks down. The spatial diffusion coefficient parallel to a mean magnetic field is evaluated using D_μμ as calculated by this technique. It is argued that the partially averaged field method is not limited to small amplitude fluctuating fields and is, hence, not a perturbation theory.
Ultra-low noise miniaturized neural amplifier with hardware averaging.
Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M
2015-08-01
Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with the FINE using the presented design showed significant improvement in the recorded baseline noise, with at least two parallel operational transconductance amplifiers leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of exploiting hardware averaging to improve noise in neural recordings with cuff electrodes, and can accommodate the high source impedances that are associated with miniaturized contacts and high channel counts in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
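The 1/√N noise reduction from averaging N parallel amplifier channels can be illustrated with a small Monte Carlo sketch. This assumes independent, identically distributed Gaussian channel noise (the idealized case; the abstract notes the real gain depends on source resistance), and all parameter values are made up:

```python
import random
import statistics

def averaged_noise_std(n_amps, trials=20000, noise_std=1.0, seed=1):
    """Std. dev. of the noise after averaging n_amps channels that
    see the same signal but carry independent Gaussian noise."""
    rng = random.Random(seed)
    samples = [
        statistics.fmean(rng.gauss(0.0, noise_std) for _ in range(n_amps))
        for _ in range(trials)
    ]
    return statistics.stdev(samples)

s1, s8 = averaged_noise_std(1), averaged_noise_std(8)
print(round(s1 / s8, 1))  # close to sqrt(8) ≈ 2.8
```

Correlated noise (e.g. from a shared source impedance) would reduce the benefit below 1/√N, consistent with the "or less" in the abstract.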
High average power diode pumped solid state lasers for CALIOPE
International Nuclear Information System (INIS)
Comaskey, B.; Halpin, J.; Moran, B.
1994-07-01
Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses, each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers.
Industrial Applications of High Average Power FELS
Shinn, Michelle D
2005-01-01
The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large scale (many m2) processing of materials requires the economical production of laser powers in the tens of kilowatts, and therefore such processes are not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~ 1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulsewidth ~ 1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...
Calculating Free Energies Using Average Force
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
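The constrained-simulation route described above, in which the free energy profile is obtained by integrating the average force along the coordinate, can be sketched for an analytically solvable case. Here the "mean force" comes from a harmonic potential U = ½kξ², where the answer is known exactly; this is an illustration of thermodynamic integration, not the paper's unconstrained method:

```python
def free_energy_profile(mean_force, xis):
    """Integrate dF/dxi = -<F(xi)> over a grid of xi values with the
    trapezoid rule, returning the free energy profile F(xi) - F(xi_0)."""
    f, profile = 0.0, [0.0]
    for a, b in zip(xis, xis[1:]):
        f += -0.5 * (mean_force(a) + mean_force(b)) * (b - a)
        profile.append(f)
    return profile

k = 2.0
xis = [i * 0.1 for i in range(11)]                  # xi from 0.0 to 1.0
# For U = 0.5*k*xi**2 the constrained mean force is -k*xi, so the
# recovered free energy change over [0, 1] is 0.5*k = 1.0.
profile = free_energy_profile(lambda x: -k * x, xis)
print(round(profile[-1], 6))  # 1.0
```

In a real application `mean_force` would be the simulation average of the instantaneous force at each constrained (or binned) value of the coordinate.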
Geographic Gossip: Efficient Averaging for Sensor Networks
Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.
Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log n}} \\log \\epsilon^{-1})$ radio transmissions, which yields a $\\sqrt{\\frac{n}{\\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
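For contrast with the geographic scheme, the standard randomized pairwise gossip that it improves upon can be sketched in a few lines on a ring topology. Node values and step counts are made up; each exchange conserves the sum, so every node converges to the true average:

```python
import random

def gossip_ring(values, steps=5000, seed=0):
    """Randomized pairwise gossip on a ring: at each step a random
    node averages its value with a random ring neighbour."""
    rng = random.Random(seed)
    x, n = list(values), len(values)
    for _ in range(steps):
        i = rng.randrange(n)
        j = (i + rng.choice((-1, 1))) % n   # a ring neighbour
        x[i] = x[j] = (x[i] + x[j]) / 2.0   # pairwise average
    return x

vals = gossip_ring([10.0, 0.0, 4.0, 2.0])
print([round(v, 3) for v in vals])  # each entry ≈ 4.0, the true average
```

The slow mixing the abstract refers to shows up as the number of such exchanges needed on large rings and grids; geographic routing reduces that count.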
High-average-power solid state lasers
International Nuclear Information System (INIS)
Summers, M.A.
1989-01-01
In 1987, a broad-based, aggressive R&D program was launched, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and application of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wave front aberrations in zig-zag slabs, understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs, and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs
The concept of average LET values determination
International Nuclear Information System (INIS)
Makarewicz, M.
1981-01-01
The concept of average LET (linear energy transfer) values determination, i.e. ordinary moments of LET in absorbed dose distribution vs. LET of ionizing radiation of any kind and any spectrum (even the unknown ones) has been presented. The method is based on measurement of ionization current with several values of voltage supplying an ionization chamber operating in conditions of columnar recombination of ions or ion recombination in clusters while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one can obtain coefficients of the expression which can be interpreted as values of LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on LET of radiation it is not necessary to know the dose distribution but only a number of parameters of the distribution, i.e. the LET moments. (author)
On spectral averages in nuclear spectroscopy
International Nuclear Information System (INIS)
Verbaarschot, J.J.M.
1982-01-01
In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method is developed for transforming fixed angular momentum projection traces into fixed angular momentum traces for the configuration space. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)
Sane, Sharad S
2013-01-01
This is a basic text on combinatorics that deals with all three aspects of the discipline: tricks, techniques and theory, and attempts to blend them. The book has several distinctive features. Probability and random variables with their interconnections to permutations are discussed. The theme of parity has been specially included and it covers applications ranging from solving the Nim game to the quadratic reciprocity law. Chapters related to geometry include triangulations and Sperner's theorem, classification of regular polytopes, tilings and an introduction to Euclidean Ramsey theory. Material on group actions covers Sylow theory, automorphism groups and a classification of finite subgroups of orthogonal groups. All chapters have a large number of exercises with varying degrees of difficulty, ranging from material suitable for Mathematical Olympiads to research.
New Nordic diet versus average Danish diet
DEFF Research Database (Denmark)
Khakimov, Bekzod; Poulsen, Sanne Kellebjerg; Savorani, Francesco
2016-01-01
and 3-hydroxybutanoic acid were related to a higher weight loss, while higher concentrations of salicylic, lactic and N-aspartic acids, and 1,5-anhydro-D-sorbitol were related to a lower weight loss. Specific gender- and seasonal differences were also observed. The study strongly indicates that healthy...... metabolites reflecting specific differences in the diets, especially intake of plant foods and seafood, and in energy metabolism related to ketone bodies and gluconeogenesis, formed the predominant metabolite pattern discriminating the intervention groups. Among NND subjects higher levels of vaccenic acid...
Techniques for data compression in experimental nuclear physics problems
International Nuclear Information System (INIS)
Byalko, A.A.; Volkov, N.G.; Tsupko-Sitnikov, V.M.
1984-01-01
Techniques for data compression in physical experiments are evaluated. Data compression algorithms are divided into three groups: the first includes algorithms based on coding, which are characterized only by average figures over data files; the second includes algorithms with data processing elements; the third, algorithms for storage of converted data. The techniques based on data conversion are concluded to be the most promising: they offer high compression efficiency and fast response, and permit storage of information close to the original.
Using Bayes Model Averaging for Wind Power Forecasts
Preede Revheim, Pål; Beyer, Hans Georg
2014-05-01
For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is however well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might however be the case that some sites tend to more accurately reflect the total output of the region, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data
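The BMA predictive density, a weighted mixture of the ensemble members' PDFs, can be sketched as follows. Gaussian member PDFs are assumed for simplicity, and the forecasts, weights and spread are made up; in practice the weights and spread are fitted by EM over a training period, as in Raftery et al. [1]:

```python
import math

def bma_pdf(x, forecasts, weights, sigma):
    """BMA predictive density: a weighted mixture of Gaussian member
    PDFs centred on each member's (bias-corrected) forecast."""
    def gauss(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return sum(w * gauss(x, f, sigma) for f, w in zip(forecasts, weights))

forecasts = [6.0, 8.0, 11.0]   # member wind-speed forecasts (m/s), made up
weights = [0.5, 0.3, 0.2]      # posterior model weights, sum to 1
density = bma_pdf(7.0, forecasts, weights, sigma=1.5)
print(round(density, 4))
```

Members judged more skillful over the training period (here the first one) pull the predictive density toward their forecasts.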
Principles of resonance-averaged gamma-ray spectroscopy
International Nuclear Information System (INIS)
Chrien, R.E.
1981-01-01
The unambiguous determination of excitation energies, spins, parities, and other properties of nuclear levels is the paramount goal of the nuclear spectroscopist. All developments of nuclear models depend upon the availability of a reliable data base on which to build. In this regard, slow neutron capture gamma-ray spectroscopy has proved to be a valuable tool. The observation of primary radiative transitions connecting initial and final states can provide definite level positions. In particular the use of the resonance-averaged capture technique has received much recent attention because of the claims advanced for this technique (Chrien 1980a, Casten 1980); that it is able to identify all states in a given spin-parity range and to provide definite spin parity information for these states. In view of the importance of this method, it is perhaps surprising that until now no firm analytical basis has been provided which delineates its capabilities and limitations. Such an analysis is necessary to establish the spin-parity assignments derived from this method on a quantitative basis; in other words a quantitative statement of the limits of error must be provided. It is the principal aim of the present paper to present such an analysis. To do this, a historical description of the technique and its applications is presented and the principles of the method are stated. Finally a method of statistical analysis is described, and the results are applied to recent measurements carried out at the filtered beam facilities at the Brookhaven National Laboratory
Grouping techniques in an EFL classroom
Directory of Open Access Journals (Sweden)
Marlene Ramírez Salas
2005-01-01
Full Text Available This article highlights the importance of group work in the classroom as a way to encourage communication among students. It also presents a definition of group work, its advantages and disadvantages, and some activities and techniques for forming groups.
Techniques for Small-Group Discourse
Kilic, Hulya; Cross, Dionne I.; Ersoz, Filyet A.; Mewborn, Denise S.; Swanagan, Diana; Kim, Jisun
2010-01-01
The nature of mathematical discourse and its influence on the development of students' mathematical understanding has received much attention from researchers in recent years. Engaging students in discursive practices can be difficult; teachers can increase their competence in facilitating discourse through greater awareness of the impact of…
Role of spatial averaging in multicellular gradient sensing.
Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew
2016-05-20
Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.
Aarthi, G.; Ramachandra Reddy, G.
2018-03-01
In our paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), polarization shift keying (POLSK), and coherent optical wireless communication (coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE, we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and could achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.
International Nuclear Information System (INIS)
Marguí, Eva; Hidalgo, Manuela
2014-01-01
The Analytical and Environmental Chemistry Group (QAA) is a consolidated research group of the Department of Chemistry of the University of Girona (north-east Spain). The main research topics of the group are related to the development and application of analytical methodologies for the determination of inorganic and organic species in different kinds of environmental, clinical and industrial samples. Since the early 2000s, one of the research focuses of the group has been the use of X-ray fluorescence spectrometry (XRF) for the determination of trace amounts of metals and metalloids, mostly in samples related to the environmental and industrial fields. For instance, in collaboration with the Institute of Earth Sciences “Jaume Almera” (ICTJA-CSIC, Spain), we have developed and successfully applied several analytical approaches based on the use of EDXRF (energy-dispersive XRF), WDXRF (wavelength-dispersive XRF) and PEDXRF (polarised EDXRF) for the determination of metals at trace levels in complex liquid samples such as sea water or electroplating waters, in vegetation samples collected around mining environments, and in active pharmaceutical ingredients. At present, the evaluation of the analytical possibilities of TXRF (total-reflection XRF) in the chemical analysis field is also one of the research topics of QAA. In this sense, several contributions related to the use of this technique for element determination in liquid and solid samples have been developed. These contributions are summarized in the last section of this review.
Monthly streamflow forecasting with auto-regressive integrated moving average
Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani
2017-09-01
Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting, with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step in which clustering is performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang were gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and clustered SSA-ARIMA models were all developed in the R software. Results from the proposed model were then compared to the conventional auto-regressive integrated moving average model using root-mean-square error and mean absolute error values. It was found that the proposed model outperforms the conventional model.
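The SSA pre-processing step can be sketched in a few lines: embed the series into a trajectory (Hankel) matrix, decompose it by SVD, and diagonally average each rank-1 term back into a time series. The sketch below is a minimal Python/NumPy illustration, not the authors' implementation (which was developed in R); the window length is an assumed free parameter.

```python
import numpy as np

def ssa_decompose(series, window):
    """Basic singular spectrum analysis: return the elementary
    components of a 1-D series. Summing all components recovers the
    series; keeping only the leading ones de-noises it before ARIMA."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    k = n - window + 1
    # Trajectory matrix: column j holds series[j : j + window]
    traj = np.column_stack([series[j:j + window] for j in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    components = []
    for i in range(len(s)):
        elem = s[i] * np.outer(u[:, i], vt[i])  # rank-1 elementary matrix
        # Diagonal (Hankel) averaging turns the matrix back into a series:
        # flipping rows makes anti-diagonals ordinary diagonals.
        comp = np.array([np.mean(elem[::-1].diagonal(d))
                         for d in range(-window + 1, k)])
        components.append(comp)
    return components
```

A usage pattern consistent with the abstract would be to group (or cluster) the components and reconstruct a smoothed series from the dominant group before fitting the ARIMA model.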
Face averages enhance user recognition for smartphone security.
Robertson, David J; Kramer, Robin S S; Burton, A Mike
2015-01-01
Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
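For aligned images, a "face-average" representation amounts to a pixel-wise mean over a person's photographs: idiosyncratic variation (lighting, expression, pose) averages out while the stable identity signal remains. The snippet below is a minimal sketch of that idea, assuming the images have already been aligned and resized to a common shape; it is not the study's enrollment pipeline.

```python
import numpy as np

def face_average(images):
    """Pixel-wise mean of pre-aligned face images (all the same shape).
    Each image-specific deviation tends to cancel in the mean, leaving
    a stable template of the individual's face."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    return stack.mean(axis=0)
```

In an enrollment setting, the average would be stored in place of a single photograph, and incoming probe images matched against it.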
International Nuclear Information System (INIS)
Maciejewski, B.; Rodney Withers, H.
2004-01-01
Attempts to translate a number of current clinical trials and reports on outcomes after radiation therapy (e.g. breast, head and neck, prostate) into clinical practice reveal many limitations of conventional techniques and dose-fractionation schedules and of 'average' conclusions. Even after decades of evolution of radiation therapy we still do not know how to optimize treatment for the individual patient and only have 'averages' and ill-defined 'probabilities' to guide treatment prescription. Wide clinical and biological heterogeneity exists within the groups of patients recruited into clinical trials, with a few-fold variation in tumour volume within one stage of disease. Basic radiobiological guidelines concerning average cell killing of uniformly distributed and equally radiosensitive tumour cells arose from elegant but idealistic in vitro experiments and seem to be of uncertain validity. Therefore, we are confronted with more dilemmas than dogmas. Nonlinearity and inhomogeneity of human tumour patterns and responses to irradiation are discussed. The purpose of this paper is to present and discuss various aspects of non-uniform tumour-cell-targeted radiotherapy using conformal and dose-intensity-modulated techniques. (author)
To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space
International Nuclear Information System (INIS)
Khrennikov, Andrei
2007-01-01
We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'
Determining average path length and average trapping time on generalized dual dendrimer
Li, Ling; Guan, Jihong
2015-03-01
Dendrimers have a wide range of important applications in various fields. In some cases, during transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. We also discuss the influence of the coordination number on trapping efficiency.
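The average path length studied here is the mean shortest-path distance over all node pairs. As a minimal illustration (not the paper's exact analytic derivation for the dendrimer family), the quantity can be computed for any small unweighted graph by breadth-first search:

```python
from collections import deque
from itertools import combinations

def average_path_length(adj):
    """Mean shortest-path length over all node pairs of a connected,
    unweighted graph given as {node: set(neighbours)}."""
    def bfs_dist(src):
        # Breadth-first search gives hop distances from src.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    total = pairs = 0
    for u, v in combinations(adj, 2):
        total += bfs_dist(u)[v]
        pairs += 1
    return total / pairs
```

A small-world effect would show up as this value growing only logarithmically as nodes are added.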
Harris, Mary B.; Trujillo, Amaryllis E.
1975-01-01
Both a self-management approach, teaching the principles of behavior modification and self-control (n=36), and a group-discussion technique, involving discussion of study habits and problems (n=41), led to improvements in grade point averages compared with a no-treatment control group (n=36) for low-achieving junior high school students. (Author)
Low Average Sidelobe Slot Array Antennas for Radiometer Applications
Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.
2012-01-01
In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherent narrow frequency band performance, the problem is exacerbated by modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed by starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide, and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full-wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced in the aperture distribution by modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, the array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. A separable Taylor distribution with nbar=4 and a 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E
Huopana, J
2010-01-01
The CLIC (Compact LInear Collider) is being studied at CERN as a potential multi-TeV e+e- collider [1]. The manufacturing and assembly tolerances for the required RF-components are important for the final efficiency and for the operation of CLIC. The proper function of an accelerating structure is very sensitive to errors in shape and location of the accelerating cavity. This causes considerable issues in the field of mechanical design and manufacturing. Currently the design of the accelerating structures is a disk design. Alternatively it is possible to create the accelerating assembly from quadrants, which favour the mass manufacturing. The functional shape inside of the accelerating structure remains the same and a single assembly uses less parts. The alignment of these quadrants has been previously made kinematic by using steel pins or spheres to align the pieces together. This method proved to be a quite tedious and time consuming method of assembly. To limit the number of different error sources, a meth...
Davit, Yohan; Bell, Christopher G.; Byrne, Helen M.; Chapman, Lloyd A.C.; Kimpton, Laura S.; Lang, Georgina E.; Leonard, Katherine H.L.; Oliver, James M.; Pearson, Natalie C.; Shipley, Rebecca J.; Waters, Sarah L.; Whiteley, Jonathan P.; Wood, Brian D.; Quintard, Michel
2013-01-01
doing, compare their respective advantages/disadvantages from a practical point of view. This paper is also intended as a pedagogical guide and may be viewed as a tutorial for graduate students as we provide historical context, detail subtle points
Experimental techniques; Techniques experimentales
Energy Technology Data Exchange (ETDEWEB)
Roussel-Chomaz, P. [GANIL CNRS/IN2P3, CEA/DSM, 14 - Caen (France)
2007-07-01
This lecture presents the experimental techniques developed in the last 10 or 15 years to perform a new class of experiments with exotic nuclei, where the reactions induced by these nuclei allow one to obtain information on their structure. A brief review of the secondary-beam production methods will be given, with some examples of facilities in operation or under project. The important developments performed recently on cryogenic targets will be presented. The different detection systems will be reviewed, both the beam detectors before the target, and the many kinds of detectors necessary to detect all outgoing particles after the reaction: the magnetic spectrometer for the heavy fragment, detection systems for the target recoil nucleus, and γ detectors. Finally, several typical examples of experiments will be detailed, in order to illustrate the use of each detector either alone or in coincidence with others. (author)
20 CFR 404.221 - Computing your average monthly wage.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...
Average gluon and quark jet multiplicities at higher orders
Energy Technology Data Exchange (ETDEWEB)
Bolzoni, Paolo; Kniehl, Bernd A. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Kotikov, Anatoly V. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Joint Institute of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics
2013-05-15
We develop a new formalism for computing and including both the perturbative and nonperturbative QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new method is motivated by recent progress in timelike small-x resummation obtained in the MS factorization scheme. We obtain next-to-next-to-leading-logarithmic (NNLL) resummed expressions, which represent generalizations of previous analytic results. Our expressions depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets that are compatible with regard to the jet algorithms demonstrates by its goodness how our results solve a longstanding problem of QCD. We show that neither the statistical nor the theoretical uncertainties exceed 5% for scales above 10 GeV. We finally propose to use the jet multiplicity data as a new way to extract the strong-coupling constant. Including all the available theoretical input within our approach, we obtain α_s^(5)(M_Z) = 0.1199 ± 0.0026 in the MS scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of ln(x) terms through the NNLL level and of ln Q² terms by the renormalization group, in excellent agreement with the present world average.
Average and local structure of α-CuI by configurational averaging
International Nuclear Information System (INIS)
Mohn, Chris E; Stoelen, Svein
2007-01-01
Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom-density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond-angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion-pathways is expected due to the large variation in the local motifs.
Suicide attempts, platelet monoamine oxidase and the average evoked response
International Nuclear Information System (INIS)
Buchsbaum, M.S.; Haier, R.J.; Murphy, D.L.
1977-01-01
The relationship between suicides and suicide attempts and two biological measures, platelet monoamine oxidase levels (MAO) and average evoked response (AER) augmenting was examined in 79 off-medication psychiatric patients and in 68 college student volunteers chosen from the upper and lower deciles of MAO activity levels. In the patient sample, male individuals with low MAO and AER augmenting, a pattern previously associated with bipolar affective disorders, showed a significantly increased incidence of suicide attempts in comparison with either non-augmenting low MAO or high MAO patients. Within the normal volunteer group, all male low MAO probands with a family history of suicide or suicide attempts were AER augmenters themselves. Four completed suicides were found among relatives of low MAO probands whereas no high MAO proband had a relative who committed suicide. These findings suggest that the combination of low platelet MAO activity and AER augmenting may be associated with a possible genetic vulnerability to psychiatric disorders. (author)
Characterizing individual painDETECT symptoms by average pain severity
Directory of Open Access Journals (Sweden)
Sadosky A
2016-07-01
Full Text Available Alesia Sadosky,1 Vijaya Koduru,2 E Jay Bienen,3 Joseph C Cappelleri4 1Pfizer Inc, New York, NY; 2Eliassen Group, New London, CT; 3Outcomes Research Consultant, New York, NY; 4Pfizer Inc, Groton, CT, USA Background: painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods: Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results: A probability >50% for a better outcome (less severe pain) was observed for each pain symptom item. The lowest probability was 56.3% (on numbness) for mild vs moderate pain, and the highest probability was 76.4% (on cold/heat) for mild vs severe pain. The pain radiation item was significant (P<0.05) and consistent with the pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion: painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner.
Group anxiety management: effectiveness, perceived helpfulness and follow-up.
Cadbury, S; Childs-Clark, A; Sandhu, S
1990-05-01
An evaluation was conducted on out-patient cognitive-behavioural anxiety management groups. Twenty-nine clients assessed before and after the group and at three-month follow-up showed significant improvement on self-report measures. A further follow-up on 21 clients, conducted by an independent assessor at an average of 11 months, showed greater improvement with time. Clients also rated how helpful they had found non-specific therapeutic factors, and specific anxiety management techniques. 'Universality' was the most helpful non-specific factor, and 'the explanation of anxiety' was the most helpful technique.
Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie
2018-02-01
There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels (daily and hourly); the real-time model was also applied at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kirti AREKAR; Rinku JAIN
2017-01-01
Stock market volatility depends on three major features, complete volatility, volatility fluctuations, and volatility attention, which are calculated by statistical techniques. A comparative analysis of market volatility is carried out for two major indices, the banking and IT sectors of the Bombay Stock Exchange (BSE), using an average decline model. The average decline process in volatility is examined after very high and very low stock returns. The results of this study explain significant decline in...
Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor
2016-10-01
Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
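Of the averaging techniques compared, Granger-Ramanathan averaging is the simplest to sketch: it fits unconstrained least-squares weights to the ensemble members against observations. The following is a minimal illustration assuming member simulations and observations are given as equal-length arrays; it is not the authors' implementation.

```python
import numpy as np

def gra_weights(simulations, observed):
    """Granger-Ramanathan averaging: ordinary least-squares weights for
    combining ensemble member simulations (one 1-D array each) so that
    the weighted sum best fits the observed series. Unlike a simple
    arithmetic mean, the weights need not be positive or sum to one."""
    X = np.column_stack(simulations)          # members as columns
    w, *_ = np.linalg.lstsq(X, observed, rcond=None)
    return w

def gra_combine(simulations, weights):
    """Weighted ensemble prediction from member simulations."""
    return np.column_stack(simulations) @ weights
```

In a DSST setting, the weights would be fitted on the training block and then applied unchanged to the contrasting climate block.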
An Experimental Study Related to Planning Abilities of Gifted and Average Students
Directory of Open Access Journals (Sweden)
Marilena Z. Leana-Taşcılar
2016-02-01
Full Text Available Gifted students differ from their average peers in psychological, social, emotional and cognitive development. One of these differences in the cognitive domain is related to executive functions. One of the most important executive functions is planning and organization ability. The aim of this study was to compare the planning abilities of gifted students with those of their average peers and to test the effectiveness of a training program on the planning abilities of gifted and average students. First, students' intelligence and planning abilities were measured, and the students were then assigned to either the experimental or the control group. The groups were matched by intelligence and planning ability (experimental: 13 gifted and 8 average; control: 14 gifted and 8 average). In total, 182 students (79 gifted and 103 average) participated in the study. Then, a training program was implemented in the experimental group to find out whether it improved students' planning ability. Results showed that boys had better planning abilities than girls did, and gifted students had better planning abilities than their average peers did. Significant results were obtained in favor of the experimental group in the posttest scores.
Applications of resonance-averaged gamma-ray spectroscopy with tailored beams
International Nuclear Information System (INIS)
Chrien, R.E.
1982-01-01
The use of techniques based on the direct experimental averaging over compound nuclear capturing states has proved valuable for investigations of nuclear structure. The various methods that have been employed are described, with particular emphasis on the transmission filter, or tailored beam technique. The mathematical limitations on averaging imposed by the filter band pass are discussed. It can readily be demonstrated that a combination of filters at different energies can form a powerful method for spin and parity predictions. Several recent examples from the HFBR program are presented
Average of delta: a new quality control tool for clinical laboratories.
Jones, Graham R D
2016-01-01
Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
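The average-of-delta calculation can be sketched directly: take each patient's current-minus-previous difference, then average the most recent deltas. The sketch below is a simplified illustration assuming results arrive as (previous, current) pairs in testing order; the paper's spreadsheet model additionally accounts for analytical imprecision and within- and between-subject biological variation.

```python
def average_of_delta(result_pairs, window):
    """Quality-control signal from sequential repeat-tested patients.

    result_pairs: (previous, current) result pairs in testing order.
    window: number of consecutive deltas pooled into each average.

    Each delta is current minus previous for the same patient. With a
    stable assay the moving average of deltas stays near zero; a step
    change in assay bias shifts it by roughly the bias magnitude.
    """
    deltas = [curr - prev for prev, curr in result_pairs]
    return [sum(deltas[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(deltas))]
```

Larger windows tighten the scatter of the signal (as the modelling in the abstract predicts) at the cost of needing more repeat-tested patients before a bias is flagged.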
Passive quantum error correction of linear optics networks through error averaging
Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.
2018-02-01
We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.
Lipkin, Harry J
2002-01-01
According to the author of this concise, high-level study, physicists often shy away from group theory, perhaps because they are unsure which parts of the subject belong to the physicist and which belong to the mathematician. However, it is possible for physicists to understand and use many techniques which have a group-theoretical basis without necessarily understanding all of group theory. This book is designed to familiarize physicists with those techniques. Specifically, the author aims to show how the well-known methods of angular momentum algebra can be extended to treat other Lie groups.
Strengthened glass for high average power laser applications
International Nuclear Information System (INIS)
Cerqua, K.A.; Lindquist, A.; Jacobs, S.D.; Lambropoulos, J.
1987-01-01
Recent advancements in high repetition rate and high average power laser systems have put increasing demands on the development of improved solid state laser materials with high thermal loading capabilities. The authors have developed a process for strengthening a commercially available Nd-doped phosphate glass utilizing an ion-exchange process. Results of thermal loading fracture tests on moderate size (160 x 15 x 8 mm) glass slabs have shown a 6-fold improvement in power loading capabilities for strengthened samples over unstrengthened slabs. Fractographic analysis of post-fracture samples has given insight into the mechanism of fracture in both unstrengthened and strengthened samples. Additional stress analysis calculations have supported these findings. In addition to processing the glass's surface during strengthening in a manner which preserves its post-treatment optical quality, the authors have developed an in-house optical fabrication technique utilizing acid polishing to minimize subsurface damage in samples prior to exchange treatment. Finally, extension of the strengthening process to alternate geometries of laser glass has produced encouraging results, which may expand the potential of strengthened glass in laser systems, making it an exciting prospect for many applications.
Potential of high-average-power solid state lasers
International Nuclear Information System (INIS)
Emmett, J.L.; Krupke, W.F.; Sooy, W.R.
1984-01-01
We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels
Average accelerator simulation Truebeam using phase space in IAEA format
International Nuclear Information System (INIS)
Santana, Emico Ferreira; Milian, Felix Mas; Paixao, Paulo Oliveira; Costa, Raranna Alves da; Velasco, Fermin Garcia
2015-01-01
In this paper, a computational radiation transport code based on the Monte Carlo technique is used to model a linear accelerator for radiotherapy treatment. This work is the initial step of future proposals which aim to study several radiotherapy patient treatments, employing computational modeling in cooperation with the institutions UESC, IPEN, UFRJ, and COI. The chosen simulation code is GATE/Geant4. The accelerator modeled is the Varian TrueBeam. The geometric modeling was based on technical manuals, and the radiation sources on phase-space files for photons provided by the manufacturer in the IAEA (International Atomic Energy Agency) format. The simulations were carried out under the same conditions as the experimental measurements. Photon beams of 6 MV with a 10 x 10 cm field incident on a water phantom were studied. For validation, depth-dose curves and lateral profiles at different depths were compared between the simulated results and experimental data. The final model of this accelerator will be used in future works involving treatments and real patients. (author)
Analytical expressions for conditional averages: A numerical test
DEFF Research Database (Denmark)
Pécseli, H.L.; Trulsen, J.
1991-01-01
Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...
Experimental demonstration of squeezed-state quantum averaging
DEFF Research Database (Denmark)
Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin
2010-01-01
We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...
The flattening of the average potential in models with fermions
International Nuclear Information System (INIS)
Bornholdt, S.
1993-01-01
The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)
Directory of Open Access Journals (Sweden)
Kléryson Martins Soares Francisco
2009-02-01
Full Text Available The present study aims to analyze, by means of the Focus Group technique, adolescents' understanding of oral health. The research was conducted at three public schools in the city of Araçatuba, São Paulo State, Brazil, with ten students at each. For the focus groups, the following words, present in oral health questionnaire items that had shown high error rates, were addressed: oral health; bacterial plaque; permanent tooth; fluoride; does the gum bleed?; dental floss; caries transmission. During the focus group discussions, it was observed that many adolescents were surprised by the situation to which they were submitted and by the topic they were discussing. The term 'oral health' was associated with the cleanliness of the oral cavity, and oral health was not identified as part of general health. The term 'caries transmission' was not sufficiently understood. The expression 'permanent tooth' was well understood, being associated with a type of tooth that would not be replaced. The word 'fluoride' was associated more with a cleaning function than with the protection of the teeth. It is concluded that the Focus Group technique is of great importance for interpreting adolescents' knowledge about oral health and for adapting the terminology of questionnaires on this topic.
20 CFR 404.220 - Average-monthly-wage method.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...
A time-averaged cosmic ray propagation theory
International Nuclear Information System (INIS)
Klimas, A.J.
1975-01-01
An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de
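The contrast between ensemble-averages and time-averages can be made concrete with an ergodic toy process, for which the two coincide; the AR(1) process below is only an illustrative stand-in, not the magnetic-field model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma = 0.9, 1.0  # AR(1): x[t] = phi * x[t-1] + sigma * noise
exact = sigma**2 / (1 - phi**2)  # stationary variance, ~5.263

# Time average: second moment along one long realization
n = 200_000
noise = rng.standard_normal(n)
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = phi * x[t - 1] + sigma * noise[t]
time_avg = np.mean(x[1000:] ** 2)  # discard burn-in

# Ensemble average: second moment at a fixed late time across 5000 paths
paths = np.zeros(5000)
for t in range(500):
    paths = phi * paths + sigma * rng.standard_normal(5000)
ens_avg = np.mean(paths**2)

print(time_avg, ens_avg, exact)
```

For a non-ergodic or poorly chosen field ensemble the two estimates would disagree, which is the situation that motivates preferring time-averages.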
7 CFR 51.2561 - Average moisture content.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...
Averaging in SU(2) open quantum random walk
International Nuclear Information System (INIS)
Ampadu Clement
2014-01-01
We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT
The calculation of average error probability in a digital fibre optical communication system
Rugemalira, R. A. M.
1980-03-01
This paper deals with the problem of determining the average error probability in a digital fibre optical communication system, in the presence of message-dependent inhomogeneous non-stationary shot noise, additive Gaussian noise and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared. The Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity.
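The relative behaviour of such techniques can be sketched in the simplest setting of a Gaussian decision statistic, where the exact error probability is the Gaussian tail Q(x) and the Chernoff bound gives exp(-x^2/2); this toy model is an assumption for illustration and omits the paper's shot noise and intersymbol interference.

```python
import math

def q_function(x):
    # Exact Gaussian tail probability Q(x) via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def chernoff_bound(x):
    # Classic Chernoff bound on the Gaussian tail: Q(x) <= exp(-x^2/2)
    return math.exp(-x * x / 2.0)

# Error rate of binary signaling in additive Gaussian noise at a few SNRs
# (the mapping x = sqrt(SNR) is an assumed toy signaling model)
for snr_db in (6, 10, 14):
    snr = 10 ** (snr_db / 10)
    x = math.sqrt(snr)
    print(f"SNR {snr_db} dB: exact Pe = {q_function(x):.3e}, "
          f"Chernoff bound = {chernoff_bound(x):.3e}")
```

The bound always lies above the exact tail, so a receiver sensitivity derived from it is pessimistic, consistent with the characteristic function technique predicting a higher sensitivity.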
Energy Technology Data Exchange (ETDEWEB)
Schwantes, Jon M.; Marsden, Oliva; Pellegrini, Kristi L.
2016-09-16
Founded in 1996 upon the initiative of the “Group of 8” governments (G8), the Nuclear Forensics International Technical Working Group (ITWG) is an ad hoc organization of official nuclear forensics practitioners (scientists, law enforcement, and regulators) that can be called upon to provide technical assistance to the global community in the event of a seizure of nuclear or radiological materials. The ITWG is supported by and is affiliated with roughly 40 countries and international partner organizations including the International Atomic Energy Agency (IAEA), EURATOM, INTERPOL, EUROPOL, and the United Nations Interregional Crime and Justice Research Institute (UNICRI). Besides providing a network of nuclear forensics laboratories that are able to assist law enforcement during a nuclear smuggling event, the ITWG is also committed to the advancement of the science of nuclear forensic analysis, largely through participation in periodic tabletop and Collaborative Materials Exercises (CMXs). Exercise scenarios use “real world” samples with realistic forensics investigation time constraints and reporting requirements. These exercises are designed to promote best practices in the field and test, evaluate, and improve new technical capabilities, methods and techniques in order to advance the science of nuclear forensics. The ITWG recently completed its fourth CMX in the 20 year history of the organization. This was also the largest materials exercise to date, with participating laboratories from 16 countries or organizations. Three samples of low enriched uranium were shipped to these laboratories as part of an illicit trafficking scenario, for which each laboratory was asked to conduct nuclear forensic analyses in support of a fictitious criminal investigation. An objective review of the state of practice and art of international nuclear forensic analysis based upon the outcome of this most recent exercise is provided.
Directory of Open Access Journals (Sweden)
Aneta Rita Borkowska
2014-05-01
Full Text Available BACKGROUND The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the authors’ experimental trials and the TUS test (Attention and Perceptiveness Test). RESULTS Children with intelligence below average memorized significantly less information contained in the logical material, demonstrated lower ability to memorize the visual material, memorized significantly fewer words in the verbal material learning task, achieved lower results in such indicators of the visual attention process pace as the number of omissions and mistakes, and had a lower pace of perceptual work, compared to children with average intelligence. CONCLUSIONS The results confirm that children with intelligence below average have difficulties with memorizing new material, both logically connected and unconnected. The significantly lower capacity of direct memory is independent of modality. The results of the study on the memory process confirm the hypothesis about lower abilities of children with intelligence below average, in terms of concentration, work pace, efficiency and perception.
Plant tissue culture techniques
Directory of Open Access Journals (Sweden)
Rolf Dieter Illg
1991-01-01
Full Text Available Plant cell and tissue culture in a simple fashion refers to techniques which utilize either single plant cells, groups of unorganized cells (callus or organized tissues or organs put in culture, under controlled sterile conditions.
Radiochemical procedures and techniques
International Nuclear Information System (INIS)
Flynn, K.
1975-04-01
A summary is presented of the radiochemical procedures and techniques currently in use by the Chemistry Division Nuclear Chemistry Group at Argonne National Laboratory for the analysis of radioactive samples. (U.S.)
International Nuclear Information System (INIS)
Anon.
1976-01-01
The report reviews radon measurement surveys in soils and in water. Special applications, and advantages and limitations of the radon measurement techniques are considered. The working group also gives some directions for further research in this field
The Pulsair 3000 tonometer--how many readings need to be taken to ensure accuracy of the average?
McCaghrey, G E; Matthews, F E
2001-07-01
Manufacturers of non-contact tonometers recommend that a number of readings are taken on each eye, and an average obtained. With the Keeler Pulsair 3000 it is advised to take four readings, and average these. This report analyses readings in 100 subjects, and compares the first reading, and the averages of the first two and first three readings with the "machine standard" of the average of four readings. It is found that, in the subject group investigated, the average of three readings is not different from the average of four in 95% of individuals, with equivalence defined as +/- 1.0 mmHg.
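The report's comparison of partial averages against the four-reading "machine standard" can be sketched as follows; the readings below are hypothetical, and the +/- 1.0 mmHg criterion is the equivalence definition quoted above.

```python
def running_averages(readings):
    """Average of the first k readings for k = 1..len(readings)."""
    out = []
    total = 0.0
    for k, r in enumerate(readings, start=1):
        total += r
        out.append(total / k)
    return out

def equivalent(a, b, tol=1.0):
    # Equivalence criterion used in the report: within +/- 1.0 mmHg
    return abs(a - b) <= tol

# Hypothetical set of four IOP readings (mmHg) from one eye
readings = [16.5, 15.0, 15.8, 15.3]
avgs = running_averages(readings)
print(avgs)  # averages of the first 1, 2, 3, and 4 readings
print(equivalent(avgs[2], avgs[3]))  # three-reading vs four-reading average
```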
Thermodynamic Integration Methods, Infinite Swapping and the Calculation of Generalized Averages
Doll, J. D.; Dupuis, P.; Nyquist, P.
2016-01-01
In the present paper we examine the risk-sensitive and sampling issues associated with the problem of calculating generalized averages. By combining thermodynamic integration and Stationary Phase Monte Carlo techniques, we develop an approach for such problems and explore its utility for a prototypical class of applications.
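Plain thermodynamic integration can be illustrated on an analytically solvable toy problem, stiffening a harmonic potential U = (k/2) x^2, where dF/dk = <x^2/2> and the exact free-energy difference is ln(k1/k0)/(2*beta). This sketch uses ordinary Monte Carlo sampling, not the Stationary Phase Monte Carlo machinery of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0
k0, k1 = 1.0, 4.0  # spring constants of the two end states

def mean_dU_dk(k, n_samples=200_000):
    # For U = (k/2) x^2, dU/dk = x^2 / 2; sample x from its Boltzmann
    # distribution, a Gaussian with variance 1/(beta*k)
    x = rng.normal(0.0, np.sqrt(1.0 / (beta * k)), n_samples)
    return np.mean(0.5 * x**2)

# Thermodynamic integration: dF/dk = <dU/dk>, integrated on a k-grid
ks = np.linspace(k0, k1, 21)
integrand = np.array([mean_dU_dk(k) for k in ks])
delta_F = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ks)))

exact = np.log(k1 / k0) / (2.0 * beta)  # closed form for this model
print(delta_F, exact)  # both near 0.693
```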
Mindfulness for group facilitation
DEFF Research Database (Denmark)
Adriansen, Hanne Kirstine; Krohn, Simon
2014-01-01
In this paper, we argue that mindfulness techniques can be used for enhancing the outcome of group performance. The word mindfulness has different connotations in the academic literature. Broadly speaking there is ‘mindfulness without meditation’ or ‘Western’ mindfulness which involves active...... thinking and ‘Eastern’ mindfulness which refers to an open, accepting state of mind, as intended with Buddhist-inspired techniques such as meditation. In this paper, we are interested in the latter type of mindfulness and demonstrate how Eastern mindfulness techniques can be used as a tool for facilitation....... A brief introduction to the physiology and philosophy of Eastern mindfulness constitutes the basis for the arguments of the effect of mindfulness techniques. The use of mindfulness techniques for group facilitation is novel as it changes the focus from individuals’ mindfulness practice...
Theoretical Issues in Clinical Social Group Work.
Randall, Elizabeth; Wodarski, John S.
1989-01-01
Reviews relevant issues in clinical social group practice including group versus individual treatment, group work advantages, approach rationale, group conditions for change, worker role in group, group composition, group practice technique and method, time as group work dimension, pretherapy training, group therapy precautions, and group work…
Averages of b-hadron, c-hadron, and τ-lepton properties as of summer 2016
Energy Technology Data Exchange (ETDEWEB)
Amhis, Y. [LAL, Universite Paris-Sud, CNRS/IN2P3, Orsay (France); Banerjee, S. [University of Louisville, Louisville, KY (United States); Ben-Haim, E. [Universite Paris Diderot, CNRS/IN2P3, LPNHE, Universite Pierre et Marie Curie, Paris (France); Bernlochner, F.; Dingfelder, J.; Duell, S. [University of Bonn, Bonn (Germany); Bozek, A. [H. Niewodniczanski Institute of Nuclear Physics, Krakow (Poland); Bozzi, C. [INFN, Sezione di Ferrara, Ferrara (Italy); Chrzaszcz, M. [H. Niewodniczanski Institute of Nuclear Physics, Krakow (Poland); Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Gersabeck, M. [University of Manchester, School of Physics and Astronomy, Manchester (United Kingdom); Gershon, T. [University of Warwick, Department of Physics, Coventry (United Kingdom); Gerstel, D.; Serrano, J. [Aix Marseille Univ., CNRS/IN2P3, CPPM, Marseille (France); Goldenzweig, P. [Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Harr, R. [Wayne State University, Detroit, MI (United States); Hayasaka, K. [Niigata University, Niigata (Japan); Hayashii, H. [Nara Women' s University, Nara (Japan); Kenzie, M. [Cavendish Laboratory, University of Cambridge, Cambridge (United Kingdom); Kuhr, T. [Ludwig-Maximilians-University, Munich (Germany); Leroy, O. [Aix Marseille Univ., CNRS/IN2P3, CPPM, Marseille (France); Lusiani, A. [Scuola Normale Superiore, Pisa (Italy); INFN, Sezione di Pisa, Pisa (Italy); Lyu, X.R. [University of Chinese Academy of Sciences, Beijing (China); Miyabayashi, K. [Niigata University, Niigata (Japan); Naik, P. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Nanut, T. [J. Stefan Institute, Ljubljana (Slovenia); Oyanguren Campos, A. [Centro Mixto Universidad de Valencia-CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Patel, M. [Imperial College London, London (United Kingdom); Pedrini, D. [INFN, Sezione di Milano-Bicocca, Milan (Italy); Petric, M. 
[European Organization for Nuclear Research (CERN), Geneva (Switzerland); Rama, M. [INFN, Sezione di Pisa, Pisa (Italy); Roney, M. [University of Victoria, Victoria, BC (Canada); Rotondo, M. [INFN, Laboratori Nazionali di Frascati, Frascati (Italy); Schneider, O. [Institute of Physics, Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne (Switzerland); Schwanda, C. [Institute of High Energy Physics, Vienna (Austria); Schwartz, A.J. [University of Cincinnati, Cincinnati, OH (United States); Shwartz, B. [Budker Institute of Nuclear Physics (SB RAS), Novosibirsk (Russian Federation); Novosibirsk State University, Novosibirsk (Russian Federation); Tesarek, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); Tonelli, D. [INFN, Sezione di Pisa, Pisa (Italy); Trabelsi, K. [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); SOKENDAI (The Graduate University for Advanced Studies), Hayama (Japan); Urquijo, P. [School of Physics, University of Melbourne, Melbourne, VIC (Australia); Van Kooten, R. [Indiana University, Bloomington, IN (United States); Yelton, J. [University of Florida, Gainesville, FL (US); Zupanc, A. [J. Stefan Institute, Ljubljana (SI); University of Ljubljana, Faculty of Mathematics and Physics, Ljubljana (SI); Collaboration: Heavy Flavor Averaging Group (HFLAV)
2017-12-15
This article reports world averages of measurements of b-hadron, c-hadron, and τ-lepton properties obtained by the Heavy Flavor Averaging Group using results available through summer 2016. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays, and Cabibbo-Kobayashi-Maskawa matrix elements. (orig.)
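For two measurements, correlation-aware averaging of the kind described can be sketched with the standard best linear unbiased estimate (BLUE) combination; the input values and the 30%-correlated systematic below are hypothetical, and the full HFLAV procedure additionally rescales inputs to common parameters.

```python
import numpy as np

def blue_average(values, cov):
    """Best linear unbiased estimate of correlated measurements.

    Weights w = C^{-1} 1 / (1^T C^{-1} 1); the variance of the
    combined value is 1 / (1^T C^{-1} 1).
    """
    values = np.asarray(values, dtype=float)
    cov = np.asarray(cov, dtype=float)
    ones = np.ones(len(values))
    cinv = np.linalg.inv(cov)
    norm = ones @ cinv @ ones
    weights = cinv @ ones / norm
    return float(weights @ values), float(np.sqrt(1.0 / norm))

# Two hypothetical measurements with a 30%-correlated systematic component
values = [1.50, 1.70]
cov = [[0.10**2, 0.3 * 0.10 * 0.12],
       [0.3 * 0.10 * 0.12, 0.12**2]]
avg, err = blue_average(values, cov)
print(avg, err)
```

With positive correlation the combined uncertainty shrinks less than it would for independent measurements, which is why ignoring correlations overstates the precision of an average.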
Averages of $b$-hadron, $c$-hadron, and $\\tau$-lepton properties as of summer 2014
Energy Technology Data Exchange (ETDEWEB)
Amhis, Y.; et al.
2014-12-23
This article reports world averages of measurements of $b$-hadron, $c$-hadron, and $\\tau$-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through summer 2014. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, $CP$ violation parameters, parameters of semileptonic decays and CKM matrix elements.
Perceptual learning in Williams syndrome: looking beyond averages.
Directory of Open Access Journals (Sweden)
Patricia Gervan
Full Text Available Williams Syndrome is a genetically determined neurodevelopmental disorder characterized by an uneven cognitive profile and surprisingly large neurobehavioral differences among individuals. Previous studies have already shown different forms of memory deficiencies and learning difficulties in WS. Here we studied the capacity of WS subjects to improve their performance in a basic visual task. We employed a contour integration paradigm that addresses occipital visual function, and analyzed the initial (i.e. baseline and after-learning performance of WS individuals. Instead of pooling the very inhomogeneous results of WS subjects together, we evaluated individual performance by expressing it in terms of the deviation from the average performance of the group of typically developing subjects of similar age. This approach helped us to reveal information about the possible origins of poor performance of WS subjects in contour integration. Although the majority of WS individuals showed both reduced baseline and reduced learning performance, individual analysis also revealed a dissociation between baseline and learning capacity in several WS subjects. In spite of impaired initial contour integration performance, some WS individuals presented learning capacity comparable to learning in the typically developing population, and vice versa, poor learning was also observed in subjects with high initial performance levels. These data indicate a dissociation between factors determining initial performance and perceptual learning.
Averaging and sampling for magnetic-observatory hourly data
Directory of Open Access Journals (Sweden)
J. J. Love
2010-11-01
Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
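The trade-off described above, aliasing for spot values versus amplitude distortion for boxcar averages, can be reproduced numerically; the 25-min disturbance period below is an arbitrary illustrative choice.

```python
import numpy as np

minutes = np.arange(24 * 60)  # one simulated day of 1-min values

# A disturbance with a 25-min period: too fast for hourly data to resolve
signal = 5.0 * np.sin(2 * np.pi * minutes / 25.0)

per_hour = signal.reshape(24, 60)
spot = per_hour[:, 0]           # instantaneous value at the top of each hour
boxcar = per_hour.mean(axis=1)  # simple 1-h "boxcar" average

# Spot values keep nearly the full amplitude range of the variation (but
# alias it to a spurious slow oscillation); the boxcar average strongly
# attenuates it (amplitude distortion)
print(round(np.ptp(spot), 2), round(np.ptp(boxcar), 2))
```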
Making Cooperative Learning Groups Work.
Hawley, James; De Jong, Cherie
1995-01-01
Discusses the use of cooperative-learning groups with middle school students. Describes cooperative-learning techniques, including group roles, peer evaluation, and observation and monitoring. Considers grouping options, including group size and configuration, dyads, the think-pair-share lecture, student teams achievement divisions, jigsaw groups,…
International Nuclear Information System (INIS)
Wiese, E.
1998-01-01
Most of the dismantling techniques used in a Decontamination and Dismantlement (D and D) project are taken from conventional demolition practices. Some modifications to the techniques are made to limit exposure to the workers or to lessen the spread of contamination to the work area. When working on a D and D project, it is best to keep the dismantling techniques and tools as simple as possible. The workers will be more efficient and safer using techniques that are familiar to them. Prior experience with the technique or use of mock-ups is the best way to keep workers safe and to keep the project on schedule
Energy Technology Data Exchange (ETDEWEB)
Fields, Susannah
2007-08-16
This project is currently under contract for research through the Department of Homeland Security until 2011. The group I was responsible for studying has to remain confidential so as not to affect the current project. All dates, reference links and authors, and other distinguishing characteristics of the original group have been removed from this report. All references to the name of this group or the individual splinter groups have been changed to 'Group X'. I have been collecting texts from a variety of sources intended for the use of recruiting and radicalizing members for Group X splinter groups for the purpose of researching the motivation and intent of leaders of those groups and their influence over the likelihood of group radicalization. This work included visiting many Group X websites to find information on splinter group leaders and finding their statements to new and old members. This proved difficult because the splinter groups of Group X are united in beliefs, but differ in public opinion. They are eager to tear each other down, prove their superiority, and yet remain anonymous. After a few weeks of intense searching, a list of eight recruiting texts and eight radicalizing texts from a variety of Group X leaders was compiled.
Johnson, D L
1997-01-01
The aim of this book is to provide an introduction to combinatorial group theory. Any reader who has completed first courses in linear algebra, group theory and ring theory will find this book accessible. The emphasis is on computational techniques but rigorous proofs of all theorems are supplied. This new edition has been revised throughout, including new exercises and an additional chapter on proving that certain groups are infinite.
Safety Impact of Average Speed Control in the UK
DEFF Research Database (Denmark)
Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert
2016-01-01
of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers’ average travel speed over selected sections of the road and is normally called average speed control...... in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control....
on the performance of Autoregressive Moving Average Polynomial
African Journals Online (AJOL)
Timothy Ademakinwa
Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ..... Global Journal of Mathematics and Statistics. Vol. 1. ... Business and Economic Research Center.
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan M.
2015-11-19
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
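The quoted value can be checked against the information-theoretic lower bound log2(n!) on the average depth of any comparison tree for sorting n distinct elements:

```python
import math

factorial_8 = math.factorial(8)        # 40320 leaves, one per permutation
min_avg_depth = 620160 / factorial_8   # the result quoted above, = 15 + 8/21

# Lower bound: a binary comparison tree with n! leaves has average
# external path length at least log2(n!)
lower_bound = math.log2(factorial_8)

print(min_avg_depth)  # 15.3809...
print(lower_bound)    # 15.2992...
```

The optimal tree's average depth exceeds the entropy bound by only about 0.08 comparisons, so for n = 8 the bound is nearly achievable on average.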
Light-cone averaging in cosmology: formalism and applications
International Nuclear Information System (INIS)
Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.
2011-01-01
We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ''geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ''redshift drift'' in a generic inhomogeneous Universe
Sawyer, Keith
2015-01-01
Keith Sawyer views the spontaneous collaboration of group creativity and improvisation actions as "group flow," which organizations can use to function at optimum levels. Sawyer establishes ideal conditions for group flow: group goals, close listening, complete concentration, being in control, blending egos, equal participation, knowing…
International Nuclear Information System (INIS)
Beach, R.; Emanuel, M.; Benett, W.; Freitas, B.; Ciarlo, D.; Carlson, N.; Sutton, S.; Skidmore, J.; Solarz, R.
1994-01-01
The average power performance capability of semiconductor diode laser arrays has improved dramatically over the past several years. These performance improvements, combined with cost reductions pursued by LLNL and others in the fabrication and packaging of diode lasers, have continued to reduce the price per average watt of laser diode radiation. Presently, we are at the point where the manufacturers of commercial high average power solid state laser systems used in material processing applications can now seriously consider the replacement of their flashlamp pumps with laser diode pump sources. Additionally, a low cost technique developed and demonstrated at LLNL for optically conditioning the output radiation of diode laser arrays has enabled a new and scalable average power diode-end-pumping architecture that can be simply implemented in diode pumped solid state laser systems (DPSSL's). This development allows the high average power DPSSL designer to look beyond the Nd ion for the first time. Along with high average power DPSSL's which are appropriate for material processing applications, low and intermediate average power DPSSL's are now realizable at low enough costs to be attractive for use in many medical, electronic, and lithographic applications
James S. Rentch; B. Desta Fekedulegn; Gary W. Miller
2002-01-01
This study evaluated the use of radial growth averaging as a technique of identifying canopy disturbances in a thinned 55-year-old mixed-oak stand in West Virginia. We used analysis of variance to determine the time interval (averaging period) and lag period (time between thinning and growth increase) that best captured the growth increase associated with different...
Interpreting Bivariate Regression Coefficients: Going beyond the Average
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
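The note's idea can be sketched in a few lines (an illustration, not the authors' own code): regressing a suitably transformed variable on a constant alone recovers each type of mean, since the intercept-only least-squares fit is the sample mean of whatever is being regressed.

```python
import numpy as np

y = np.array([2.0, 4.0, 8.0])
X = np.ones((len(y), 1))  # intercept-only design matrix

# Arithmetic mean: OLS of y on a constant (intercept = sample mean)
arith = np.linalg.lstsq(X, y, rcond=None)[0][0]          # 14/3

# Geometric mean: OLS of log(y) on a constant, then exponentiate
geom = np.exp(np.linalg.lstsq(X, np.log(y), rcond=None)[0][0])  # 4.0

# Harmonic mean: OLS of 1/y on a constant, then invert
harm = 1.0 / np.linalg.lstsq(X, 1.0 / y, rcond=None)[0][0]      # 24/7
```

A weighted average follows the same pattern with weighted least squares in place of OLS.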
Average stress in a Stokes suspension of disks
Prosperetti, Andrea
2004-01-01
The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is
47 CFR 1.959 - Computation of average terrain elevation.
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Computation of average terrain elevation. 1.959 Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.959 Computation of average terrain elevation. Except a...
47 CFR 80.759 - Average terrain elevation.
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw radials...
The Average Covering Tree Value for Directed Graph Games
Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf
2012-01-01
We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering…
18 CFR 301.7 - Average System Cost methodology functionalization.
2010-04-01
... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER...
Analytic computation of average energy of neutrons inducing fission
International Nuclear Information System (INIS)
Clark, Alexander Rich
2016-01-01
The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.
An alternative scheme of the Bogolyubov's average method
International Nuclear Information System (INIS)
Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.
1990-01-01
In this paper the average energy and the magnetic moment conservation laws in the Drift Theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws, and the average is performed afterwards. This scheme is more economical, in terms of time and algebraic calculation, than the usual procedure of Bogolyubov's method. (Author)
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail
2015-01-01
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees
A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, Manfred
2003-01-01
We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...
Passman, Donald S
2012-01-01
This volume by a prominent authority on permutation groups consists of lecture notes that provide a self-contained account of distinct classification theorems. A ready source of frequently quoted but usually inaccessible theorems, it is ideally suited for professional group theorists as well as students with a solid background in modern algebra. The three-part treatment begins with an introductory chapter and advances to an economical development of the tools of basic group theory, including group extensions, transfer theorems, and group representations and characters. The final chapter features…
Marcia Pinheiro
2015-01-01
In this paper, we discuss three translation techniques: literal, cultural, and artistic. Literal translation is a well-known technique, which means that it is quite easy to find sources on the topic. Cultural and artistic translation may be new terms. Whilst cultural translation focuses on matching contexts, artistic translation focuses on matching reactions. Because literal translation matches only words, it is not hard to find situations in which we should not use this technique. Because a...
Self-similarity of higher-order moving averages
Arianos, Sergio; Carbone, Anna; Türk, Christian
2011-10-01
In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets with the same size. These polynomials could be interesting for those applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
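The first-order detrending moving average (DMA) estimator underlying the abstract above can be sketched as follows; the function name is illustrative, and the paper's higher-order polynomial generalization is not reproduced here. The variance of the series minus its moving average scales as the window size to the power 2H.

```python
import numpy as np

def dma_hurst(y, windows):
    """Estimate the Hurst exponent H with the first-order detrending
    moving average (DMA) method: Var[y - MA_n(y)] ~ n**(2H)."""
    sigma2 = []
    for n in windows:
        ma = np.convolve(y, np.ones(n) / n, mode="valid")  # moving average
        detrended = y[n - 1:] - ma                         # align and detrend
        sigma2.append(np.mean(detrended ** 2))
    # slope of log(sigma^2) against log(n) is 2H
    slope = np.polyfit(np.log(windows), np.log(sigma2), 1)[0]
    return slope / 2.0

rng = np.random.default_rng(0)
brownian = np.cumsum(rng.standard_normal(100_000))  # H = 0.5 by construction
H = dma_hurst(brownian, [10, 20, 40, 80, 160])      # should come out near 0.5
```

For ordinary Brownian motion the estimate should be close to 0.5; fractional Brownian series with other H values can be checked the same way.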
Anomalous behavior of q-averages in nonextensive statistical mechanics
International Nuclear Information System (INIS)
Abe, Sumiyoshi
2009-01-01
A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, it has however been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L1-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in all of the cases.
Bootstrapping pre-averaged realized volatility under market microstructure noise
DEFF Research Database (Denmark)
Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour
The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...
Techniques de combustion / Combustion Techniques
Directory of Open Access Journals (Sweden)
Perthuis E.
2006-11-01
Full Text Available L'efficacité d'un processus de chauffage par flamme est étroitement liée à la maîtrise des techniques de combustion. Le brûleur, organe essentiel de l'équipement de chauffe, doit d'une part assurer une combustion complète pour utiliser au mieux l'énergie potentielle du combustible et, d'autre part, provoquer dans le foyer les conditions aérodynamiques les plus propices aux transferts de chaleur. En s'appuyant sur les études expérimentales effectuées à la Fondation de Recherches Internationales sur les Flammes (FRIF), au Groupe d'Étude des Flammes de Gaz Naturel (GEFGN) et à l'Institut Français du Pétrole (IFP) et sur des réalisations industrielles, on présente les propriétés essentielles des flammes de diffusion aux combustibles liquides et gazeux obtenues avec ou sans mise en rotation des fluides, et leurs répercussions sur les transferts thermiques. La recherche des températures de combustion élevées conduit à envisager la marche à excès d'air réduit, le réchauffage de l'air ou son enrichissement à l'oxygène. Par quelques exemples, on évoque l'influence de ces paramètres d'exploitation sur l'économie possible en combustible. The efficiency of a flame heating process is closely linked to the mastery of combustion techniques. The burner, an essential element in any heating equipment, must provide complete combustion so as to make optimum use of the potential energy in the fuel while, at the same time, creating the most suitable conditions for heat transfers in the combustion chamber. On the basis of experimental research performed by FRIF, GEFGN and IFP and of industrial achievements, this article describes the essential properties of diffusion flames fed by liquid and gaseous fuels and produced with or without fluid swirling, and the effects of such flames on heat transfers. The search for high combustion temperatures means that consideration must be given to operating with reduced excess air, heating the air or...
Oppugning the assumptions of spatial averaging of segment and joint orientations.
Pierrynowski, Michael Raymond; Ball, Kevin Arthur
2009-02-09
Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three-ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for the orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether use of the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data possessing characteristics of low dispersion, an isotropic distribution, and second and third angle parameters below 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
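One of the matrix-based averages discussed above, the Euclidean (chordal) mean, can be sketched as follows: average the rotation matrices entrywise, then project the result back onto SO(3) with an SVD. Function names are illustrative; the Riemannian (geodesic) mean would instead iterate in the tangent space of SO(3).

```python
import numpy as np

def rot_z(deg):
    """Rotation about the z-axis by an angle in degrees."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def euclidean_mean_rotation(Rs):
    """Chordal (Euclidean) mean: entrywise mean of the rotation
    matrices, projected back onto SO(3) via SVD."""
    M = np.mean(Rs, axis=0)
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:   # enforce a proper rotation (det = +1)
        U[:, -1] *= -1
        R = U @ Vt
    return R

Rs = [rot_z(10), rot_z(20), rot_z(30)]
R_mean = euclidean_mean_rotation(Rs)
angle = np.degrees(np.arctan2(R_mean[1, 0], R_mean[0, 0]))  # close to 20
```

For coaxial rotations the chordal mean reduces to atan2 of the averaged sine and cosine, which is why naive per-angle averaging happens to agree in this easy case but not in general.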
Group devaluation and group identification
Leach, C.W.; Rodriguez Mosquera, P.M.; Vliek, M.L.W.; Hirt, E.
2010-01-01
In three studies, we showed that increased in-group identification after (perceived or actual) group devaluation is an assertion of a (preexisting) positive social identity that counters the negative social identity implied in societal devaluation. Two studies with real-world groups used order
Lie groups and algebraic groups
Indian Academy of Sciences (India)
We give an exposition of certain topics in Lie groups and algebraic groups. This is not a complete ... of a polynomial equation is equivalent to the solvability of the equation ... to a subgroup of the group of roots of unity in k (in particular, it is a ...
Directory of Open Access Journals (Sweden)
Kirti AREKAR
2017-12-01
Full Text Available Stock market volatility depends on three major features: complete volatility, volatility fluctuations, and volatility attention, all of which are calculated by statistical techniques. A comparative analysis of market volatility is carried out for two major indices, the banking and IT sectors of the Bombay Stock Exchange (BSE), using the average decline model. The average decline process in volatility is applied after very high and low stock returns. The results of this study show a significant decline in volatility fluctuations, attention, and level between the periods of pre and post particularly high stock returns.
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log₂ k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study in the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches the maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and kⁿ pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have the minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
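The entropy lower bound mentioned above can be checked numerically for the optimal-prefix-code example: Huffman's algorithm attains an average depth L with H(p) ≤ L < H(p) + 1. A sketch with illustrative probabilities follows; it uses the fact that the weighted average codeword length equals the sum of all merged node weights.

```python
import heapq
import math

def entropy(p):
    """Shannon entropy H(p) in bits: the lower bound on average depth."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def huffman_avg_depth(p):
    """Average codeword length of a Huffman code, i.e. the minimum
    average depth of the corresponding prefix-code (decision) tree."""
    heap = list(p)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b           # each merge adds one level for its leaves
        heapq.heappush(heap, a + b)
    return total

p = [0.4, 0.3, 0.2, 0.1]         # illustrative symbol probabilities
H, L = entropy(p), huffman_avg_depth(p)
# H ≈ 1.846 bits, L = 1.9, and H <= L < H + 1 as the bound predicts
```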
Lateral dispersion coefficients as functions of averaging time
International Nuclear Information System (INIS)
Sheih, C.M.
1980-01-01
Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations for other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate between various processes in studies of plume dispersion.
2010-07-01
... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
2010-07-01
... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...
Average inactivity time model, associated orderings and reliability properties
Kayid, M.; Izadkhah, S.; Abouammoh, A. M.
2018-02-01
In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handle the heterogeneity of the time of failure of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are reserved.
Average L-shell fluorescence, Auger, and electron yields
International Nuclear Information System (INIS)
Krause, M.O.
1980-01-01
The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically for the three L-subshell yields in most cases of inner-shell ionization.
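The average yield behind statements like these is simply the vacancy-weighted sum of subshell yields. A sketch with made-up numbers follows; the actual L1-L3 yields are tabulated in Krause's evaluation, and the values below are purely illustrative.

```python
import numpy as np

# Hypothetical initial L-shell vacancy distribution (fractions)
n_vac = np.array([0.2, 0.3, 0.5])        # N(L1), N(L2), N(L3)
# Hypothetical subshell fluorescence yields (illustrative values only)
omega = np.array([0.04, 0.06, 0.09])     # w(L1), w(L2), w(L3)

# Average fluorescence yield for this vacancy distribution
omega_bar = np.sum(n_vac * omega) / np.sum(n_vac)  # = 0.071 here
```

Recomputing omega_bar for different vacancy distributions shows directly how weak or strong the dependence is for a given set of subshell yields.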
Simultaneous inference for model averaging of derived parameters
DEFF Research Database (Denmark)
Jensen, Signe Marie; Ritz, Christian
2015-01-01
Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...
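For concreteness, one common model-averaging scheme (Akaike weights; not necessarily the weighting used by these authors) combines per-model estimates of a derived parameter like this. All numbers are hypothetical.

```python
import math

# Hypothetical AIC values and derived-parameter estimates from 3 fitted models
aic = [100.0, 101.5, 104.0]
theta = [2.0, 2.4, 1.8]

# Akaike weights: w_i proportional to exp(-deltaAIC_i / 2), normalized
delta = [a - min(aic) for a in aic]
raw = [math.exp(-d / 2.0) for d in delta]
s = sum(raw)
w = [r / s for r in raw]

# Model-averaged estimate of the derived parameter
theta_avg = sum(wi * ti for wi, ti in zip(w, theta))  # close to 2.10
```

The paper's contribution concerns the standard errors and simultaneous confidence intervals for such averaged estimates, which this sketch does not attempt.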
Salecker-Wigner-Peres clock and average tunneling times
International Nuclear Information System (INIS)
Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.
2011-01-01
The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated to the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).
Average multiplications in deep inelastic processes and their interpretation
International Nuclear Information System (INIS)
Kiselev, A.V.; Petrov, V.A.
1983-01-01
Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of nonperturbative character. With the increase of the final hadron state energy, the leading contribution to the average multiplicity comes from a parton subprocess, due to the production of massive quark and gluon jets and their further fragmentation, as the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e+e- annihilation at high energies tends to unity.
Fitting a function to time-dependent ensemble averaged data
DEFF Research Database (Denmark)
Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders
2018-01-01
Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants)... We apply our new method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
Average wind statistics for SRP area meteorological towers
International Nuclear Information System (INIS)
Laurinat, J.E.
1987-01-01
A quality-assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982--1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975--1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated from the averaged statistics.
Serpent-COREDAX analysis of CANDU-6 time-average model
Energy Technology Data Exchange (ETDEWEB)
Motalab, M.A.; Cho, B.; Kim, W.; Cho, N.Z.; Kim, Y., E-mail: yongheekim@kaist.ac.kr [Korea Advanced Inst. of Science and Technology (KAIST), Dept. of Nuclear and Quantum Engineering Daejeon (Korea, Republic of)
2015-07-01
COREDAX-2 is a nuclear core analysis nodal code that has adopted the Analytic Function Expansion Nodal (AFEN) methodology developed in Korea. The AFEN method outperforms other conventional nodal methods in terms of accuracy. To evaluate the possibility of CANDU-type core analysis using COREDAX-2, a time-average analysis code system was developed. The two-group homogenized cross-sections were calculated using the Monte Carlo code Serpent2. A stand-alone time-average module was developed to determine the time-average burnup distribution in the core for a given fuel management strategy. The coupled Serpent-COREDAX-2 calculation converges to an equilibrium time-average model for the CANDU-6 core. (author)
Asymmetry within social groups
DEFF Research Database (Denmark)
Barker, Jessie; Loope, Kevin J.; Reeve, H. Kern
2016-01-01
Social animals vary in their ability to compete with group members over shared resources and also vary in their cooperative efforts to produce these resources. Competition among groups can promote within-group cooperation, but many existing models of intergroup cooperation do not explicitly account... of two roles, with relative competitive efficiency and the number of individuals varying between roles. Players in each role make simultaneous, coevolving decisions. The model predicts that although intergroup competition increases cooperative contributions to group resources by both roles, contributions... are predominantly from individuals in the less competitively efficient role, whereas individuals in the more competitively efficient role generally gain the larger share of these resources. When asymmetry in relative competitive efficiency is greater, a group's per capita cooperation (averaged across both roles...
Wilson, Kristy J.; Brickman, Peggy; Brame, Cynthia J.
2018-01-01
Science, technology, engineering, and mathematics faculty are increasingly incorporating both formal and informal group work in their courses. Implementing group work can be improved by an understanding of the extensive body of educational research studies on this topic. This essay describes an online, evidence-based teaching guide published by…
International Nuclear Information System (INIS)
Eggermont, G.
2006-01-01
In 2005, PISA organised proactive meetings of reflection groups on involvement in decision making, expert culture, and ethical aspects of radiation protection. All reflection group meetings address particular targeted audiences, while the output publication is put forward in book form.
Average monthly and annual climate maps for Bolivia
Vicente-Serrano, Sergio M.
2015-02-24
This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
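The Hargreaves step described above is often written as ET0 = 0.0023 · Ra · (Tmean + 17.8) · sqrt(Tmax − Tmin). The sketch below assumes that common form (the study's exact variant and unit conventions may differ), and the final water-balance line is simply precipitation minus the accumulated evaporative demand.

```python
import math

def hargreaves_et0(t_max, t_min, ra_mm_day):
    """Hargreaves estimate of atmospheric evaporative demand (mm/day).
    ra_mm_day: exoatmospheric (clear-sky) solar radiation expressed as
    equivalent evaporation in mm/day; temperatures in degrees C.
    Assumed form: 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin)."""
    t_mean = (t_max + t_min) / 2.0
    return 0.0023 * ra_mm_day * (t_mean + 17.8) * math.sqrt(t_max - t_min)

def monthly_water_balance(precip_mm, t_max, t_min, ra_mm_day, days=30.0):
    """Monthly balance: precipitation minus evaporative demand (mm)."""
    return precip_mm - days * hargreaves_et0(t_max, t_min, ra_mm_day)

et0 = hargreaves_et0(30.0, 15.0, 15.0)   # roughly 5.4 mm/day here
```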
Medicare Part B Drug Average Sales Pricing Files
U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturer's ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...
High Average Power Fiber Laser for Satellite Communications, Phase I
National Aeronautics and Space Administration — Very high average power lasers with high electrical-to-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...
A time averaged background compensator for Geiger-Mueller counters
International Nuclear Information System (INIS)
Bhattacharya, R.C.; Ghosh, P.K.
1983-01-01
The GM tube compensator described stores background counts to cancel an equal number of pulses from the measuring channel providing time averaged compensation. The method suits portable instruments. (orig.)
Time averaging, ageing and delay analysis of financial time series
Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf
2017-06-01
We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
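The time averaged MSD used in the abstract above has a one-line definition; a sketch follows (for ordinary Brownian motion the TAMSD grows linearly in the lag time, so doubling the lag roughly doubles its value, as the paper's geometric-Brownian-motion analysis generalizes).

```python
import numpy as np

def tamsd(x, lag):
    """Time averaged mean squared displacement of trajectory x at a lag:
    the average of (x[t+lag] - x[t])**2 over all available start times t."""
    d = x[lag:] - x[:-lag]
    return np.mean(d ** 2)

rng = np.random.default_rng(1)
bm = np.cumsum(rng.standard_normal(200_000))  # ordinary Brownian motion

# linear scaling in the lag: TAMSD(100) / TAMSD(50) should be close to 2
ratio = tamsd(bm, 100) / tamsd(bm, 50)
```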
Historical Data for Average Processing Time Until Hearing Held
Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...
GIS Tools to Estimate Average Annual Daily Traffic
2012-06-01
This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...
A high speed digital signal averager for pulsed NMR
International Nuclear Information System (INIS)
Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.
1978-01-01
A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a minimum sampling interval of 2.5 μs and a memory capacity of 256 × 12-bit words. The number of sweeps is selectable through a front panel control in binary steps from 2³ to 2¹². The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
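'Stable averaging' can be read as the running-mean update avg ← avg + (x − avg)/n, which keeps a correctly scaled average available after every sweep; the sketch below shows that reading (the instrument's fixed-point hardware implementation will differ). Note the quoted 36 dB maximum S/N gain is consistent with averaging 2¹² sweeps, since 10·log10(4096) ≈ 36 dB.

```python
def stable_average(samples):
    """Running ('stable') average: after the n-th sweep the stored value
    is already the exact mean of the n sweeps seen so far, so a
    calibrated display is available at any time during averaging."""
    avg = 0.0
    for n, x in enumerate(samples, start=1):
        avg += (x - avg) / n
    return avg

# Four simulated sweeps; the final value equals their ordinary mean
result = stable_average([1.0, 2.0, 3.0, 4.0])  # 2.5
```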
The average-shadowing property and topological ergodicity for flows
International Nuclear Information System (INIS)
Gu Rongbao; Guo Wenjing
2005-01-01
In this paper, the transitive property for a flow without sensitive dependence on initial conditions is studied and it is shown that a Lyapunov stable flow with the average-shadowing property on a compact metric space is topologically ergodic
Carving Technique – Methodical Perspectives
Directory of Open Access Journals (Sweden)
Adela BADAU
2015-09-01
Alpine skiing has undergone major changes and adjustments due to both technological innovations in materials and updates of theoretical and methodological concepts on all levels of specific training. The purpose: the introduction of technological innovation in the field of materials specific to carving ski calls for a review of methodology, aiming at bringing the execution technique to superior indices in order to obtain positive results. The event took place in Poiana Brasov between December 2014 and March 2015, on an 800 m long slope, and comprised a single experimental group made of four males and four females, cadet category, that carried out two lessons per day. The tests targeted the technique level for slalom skiing and giant slalom skiing, having in view four criteria: leg work, basin movement, torso position and arm work. As a result of the research and of the statistical-mathematical analysis of the individual values, the giant slalom race registered an average improvement of 3.5 points between the tests, while the slalom race registered 4 points. In conclusion, the use of a specific methodology applied scientifically, which aims to select the most efficient means of action specific to children's skiing, determines technical improvement at an advanced level.
Application of Bayesian approach to estimate average level spacing
International Nuclear Information System (INIS)
Huang Zhongfu; Zhao Zhixiang
1991-01-01
A method to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information contained in the distributions of both level spacing and neutron width, levels missing from the measured sample can be corrected more precisely, so that a better estimate of the average level spacing can be obtained by this method. The calculation for s-wave resonances has been done and a comparison with other work was carried out.
Annual average equivalent dose of workers from the health area
International Nuclear Information System (INIS)
Daltro, T.F.L.; Campos, L.L.
1992-01-01
Personnel monitoring data from 1985 to 1991 for personnel working in the health area were studied, providing a general overview of how the annual average equivalent dose has changed. Two different aspects were presented: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses between the same sectors in different hospitals. (C.G.C.)
A precise measurement of the average b hadron lifetime
Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; 
Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; 
Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G
1996-01-01
An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.
Bivariate copulas on the exponentially weighted moving average control chart
Directory of Open Access Journals (Sweden)
Sasigarn Kuvattana
2016-10-01
This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used and measured by Kendall’s tau. The results show that the Normal copula can be used for almost all shifts.
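The Monte Carlo ARL estimation behind such a study can be sketched as follows; this sketch uses independent exponential observations and a one-sided chart, omitting the copula-based dependence that is the paper's subject, and the limit constant and shift size are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def ewma_run_length(lam=0.1, L=2.7, target_mean=1.0, shift=1.5, max_n=10000):
    # Run length of a one-sided EWMA chart for exponential observations.
    # The upper control limit uses the asymptotic EWMA standard deviation;
    # for an exponential the standard deviation equals the mean.
    sigma = target_mean
    ucl = target_mean + L * sigma * np.sqrt(lam / (2 - lam))
    z = target_mean
    for n in range(1, max_n + 1):
        x = rng.exponential(target_mean * shift)   # out-of-control mean
        z = lam * x + (1 - lam) * z                # EWMA update
        if z > ucl:
            return n                               # chart signals at sample n
    return max_n

# Average Run Length (ARL) under a mean shift, estimated by Monte Carlo.
arl = np.mean([ewma_run_length() for _ in range(2000)])
```

Introducing dependence between consecutive observations, as the paper does via copulas, changes only how `x` is drawn; the ARL machinery stays the same.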
Averaging Bias Correction for Future IPDA Lidar Mission MERLIN
Directory of Open Access Journals (Sweden)
Tellier Yoann
2018-01-01
The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to reach a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The biases induced by the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.
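The averaging bias arises whenever noisy signals are pushed through a non-linear (here logarithmic) retrieval, because E[ln S] ≠ ln E[S] by Jensen's inequality. A minimal numerical illustration, with all numbers illustrative and not MERLIN's actual processing:

```python
import numpy as np

rng = np.random.default_rng(3)

true_ratio = 0.8        # noiseless on/off signal ratio (illustrative)
noise_sigma = 0.1       # per-shot noise level (illustrative)
n_shots = 50            # shots combined per averaged product

samples = true_ratio + noise_sigma * rng.standard_normal((100000, n_shots))

# Take the log of each noisy shot, then average: large Jensen bias.
log_then_average = np.mean(np.log(samples))
# Average the shots first, then take one log: much smaller bias,
# since averaging shrinks the noise entering the non-linearity.
average_then_log = np.mean(np.log(samples.mean(axis=1)))

bias_a = log_then_average - np.log(true_ratio)
bias_b = average_then_log - np.log(true_ratio)
# |bias_b| << |bias_a|: averaging before the non-linear step is the idea
# behind the 50 km averaging, and the residual bias_b is what the paper's
# correction algorithms address.
```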