WorldWideScience

Sample records for perform signal averaging

  1. Reducing Noise by Repetition: Introduction to Signal Averaging

    Science.gov (United States)

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
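    A minimal sketch of the idea (not code from the paper): if the same deterministic response is recorded N times under independent, zero-mean noise, the noise standard deviation of the average falls as 1/√N, so the amplitude S/N grows as √N. The signal shape and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)         # synthetic evoked response
sweeps = clean + rng.normal(0.0, 2.0, size=(256, t.size))  # 256 noisy repetitions

average = sweeps.mean(axis=0)

# Noise shrinks as 1/sqrt(N): a single sweep has sigma ~ 2.0,
# the 256-sweep average ~ 2.0 / sqrt(256) = 0.125.
print(np.std(sweeps[0] - clean), np.std(average - clean))
```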

  2. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of one sample per 2.5 μs and a memory capacity of 256 x 12 bit words. The number of sweeps is selectable through a front panel control in binary steps from 2³ to 2¹². The enhanced signal can be displayed either on a CRT or on a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
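    The record does not spell out the 'stable averaging' algorithm; a standard way to keep the displayed trace calibrated after every sweep is the incremental (running) mean, sketched below with an assumed NumPy representation of the sweeps.

```python
import numpy as np

def stable_average(sweeps):
    """Running mean: after each sweep the buffer holds the true average
    so far, so a correctly scaled trace can be displayed at any time."""
    avg = np.zeros_like(sweeps[0], dtype=float)
    for n, sweep in enumerate(sweeps, start=1):
        avg += (sweep - avg) / n      # incremental-mean update
        yield avg.copy()              # calibrated display after every sweep
```

    Consistently with the quoted specifications, averaging the instrument's maximum of 2¹² = 4096 sweeps gives an S/N power gain of N, i.e. 10·log₁₀(4096) ≈ 36 dB, matching the quoted maximum improvement.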

  3. Application of NMR circuit for superconducting magnet using signal averaging

    International Nuclear Information System (INIS)

    Yamada, R.; Ishimoto, H.; Shea, M.F.; Schmidt, E.E.; Borer, K.

    1977-01-01

    An NMR circuit was used to measure the absolute field values of Fermilab Energy Doubler magnets up to 44 kG. A signal averaging method to improve the S/N ratio was implemented by means of a Tektronix Digital Processing Oscilloscope, followed by the development of an inexpensive microprocessor-based system contained in a NIM module. Some of the data obtained from measuring two superconducting dipole magnets are presented.

  4. Ultrasonic correlator versus signal averager as a signal to noise enhancement instrument

    Science.gov (United States)

    Kishoni, Doron; Pietsch, Benjamin E.

    1989-01-01

    Ultrasonic inspection of thick and attenuating materials is hampered by the reduced amplitudes of the propagated waves to a degree that the noise is too high to enable meaningful interpretation of the data. In order to overcome the low Signal to Noise (S/N) ratio, a correlation technique has been developed. In this method, a continuous pseudo-random pattern generated digitally is transmitted and detected by piezoelectric transducers. A correlation is performed in the instrument between the received signal and a variable delayed image of the transmitted one. The result is shown to be proportional to the impulse response of the investigated material, analogous to a signal received from a pulsed system, with an improved S/N ratio. The degree of S/N enhancement depends on the sweep rate. This paper describes the correlator, and compares it to the method of enhancing S/N ratio by averaging the signals. The similarities and differences between the two are highlighted and the potential advantage of the correlator system is explained.
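    A toy version of the scheme (illustrative only; the PRBS length and echo pattern are assumptions): correlating the received signal with the transmitted maximal-length sequence recovers an estimate proportional to the material's impulse response, because the PRBS autocorrelation approximates an impulse.

```python
import numpy as np
from scipy.signal import lfilter, max_len_seq

prbs = 2.0 * max_len_seq(12)[0] - 1.0      # +/-1 pseudo-random sequence, length 4095
h = np.zeros(200)
h[[20, 90, 150]] = [1.0, 0.5, 0.25]        # three echoes in the "material"
rng = np.random.default_rng(1)
received = lfilter(h, 1.0, prbs) + rng.normal(0.0, 1.0, prbs.size)

# Correlation against the transmitted pattern compresses the PRBS into an
# impulse, leaving an estimate proportional to h, with improved S/N.
lag0 = prbs.size - 1
estimate = np.correlate(received, prbs, mode="full")[lag0:lag0 + h.size] / prbs.size
```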

  5. Signal-averaged P wave duration and the dimensions of the atria

    DEFF Research Database (Denmark)

    Dixen, Ulrik; Joens, Christian; Rasmussen, Bo V

    2004-01-01

    Delay of atrial electrical conduction measured as prolonged signal-averaged P wave duration (SAPWD) could be due to atrial enlargement. Here, we aimed to compare different atrial size parameters obtained from echocardiography with the SAPWD measured with a signal-averaged electrocardiogram (SAECG)....

  6. Real-time traffic signal optimization model based on average delay time per person

    Directory of Open Access Journals (Sweden)

    Pengpeng Jiao

    2015-10-01

    Full Text Available Real-time traffic signal control is very important for relieving urban traffic congestion. Many existing traffic control models were formulated using an optimization approach, with objective functions that minimize vehicle delay time. To improve people's trip efficiency, this article aims to minimize delay time per person. Based on the time-varying traffic flow data at intersections, the article first fits curves of cumulative arriving and departing vehicles, as well as the corresponding functions. Moreover, this article converts vehicle delay time to personal delay time using the average passenger loads of cars and buses, employs this time as the objective function, and proposes a signal timing optimization model for intersections to obtain real-time signal parameters, including cycle length and green time. This research further implements a case study based on practical data collected at an intersection in Beijing, China. The average delay time per person and queue length are employed as evaluation indices to show the performance of the model. The results show that the proposed methodology is capable of improving traffic efficiency and is very effective for real-world applications.
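    A compressed sketch of the objective switch from vehicle delay to person delay (arrival rates, saturation flow, and passenger loads below are assumed, not the Beijing data): delay is the area between the cumulative arrival and departure curves, and weighting each approach by its average passenger load moves the optimal green split toward the bus-heavy street.

```python
import numpy as np

def delay(arrive_rate, green_frac, cycle=120.0, sat_rate=0.9):
    """Total delay (veh*s) over one cycle: area between the cumulative
    arrival curve and the cumulative departure curve."""
    red = cycle * (1.0 - green_frac)
    t = np.linspace(0.0, cycle, 1201)
    arr = arrive_rate * t
    dep = np.clip(sat_rate * np.maximum(t - red, 0.0), 0.0, arr)
    return np.trapz(arr - dep, t)

# Two conflicting approaches: cars only vs. a street with 10% buses (assumed).
persons = {"cars": 1.5, "mixed": 0.9 * 1.5 + 0.1 * 20.0}   # persons per vehicle
for g in (0.4, 0.5, 0.6):
    veh = delay(0.3, g) + delay(0.3, 1.0 - g)
    per = delay(0.3, g) * persons["cars"] + delay(0.3, 1.0 - g) * persons["mixed"]
    print(g, round(veh), round(per))  # person-weighted optimum favors the bus street
```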

  7. Large-signal analysis of DC motor drive system using state-space averaging technique

    International Nuclear Information System (INIS)

    Bekir Yildiz, Ali

    2008-01-01

    The analysis of a separately excited DC motor driven by DC-DC converter is realized by using state-space averaging technique. Firstly, a general and unified large-signal averaged circuit model for DC-DC converters is given. The method converts power electronic systems, which are periodic time-variant because of their switching operation, to unified and time independent systems. Using the averaged circuit model enables us to combine the different topologies of converters. Thus, all analysis and design processes about DC motor can be easily realized by using the unified averaged model which is valid during whole period. Some large-signal variations such as speed and current relating to DC motor, steady-state analysis, large-signal and small-signal transfer functions are easily obtained by using the averaged circuit model
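    The essence of the technique can be sketched generically (the matrices below are illustrative placeholders, not the paper's motor-converter model): the two switched topologies are blended by the duty ratio d into one time-invariant averaged model, whose equilibrium gives the large-signal steady-state operating point.

```python
import numpy as np

def averaged_model(A_on, B_on, A_off, B_off, d):
    """State-space averaging: x' = A(d) x + B(d) u with
    A(d) = d*A_on + (1-d)*A_off, and likewise for B."""
    return d * A_on + (1 - d) * A_off, d * B_on + (1 - d) * B_off

# Toy buck stage feeding a DC motor, states x = [i_L, v_C, omega];
# for a buck converter only the input matrix changes with the switch state.
A = np.array([[-1.0, -10.0,  0.0],
              [50.0,  -0.5, -5.0],
              [ 0.0,   2.0, -0.8]])
B_on, B_off = np.array([[100.0], [0.0], [0.0]]), np.zeros((3, 1))

A_avg, B_avg = averaged_model(A, B_on, A, B_off, d=0.6)
x_ss = -np.linalg.solve(A_avg, B_avg) * 24.0   # steady state for V_in = 24 V
```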

  8. Accurate measurement of imaging photoplethysmographic signals based camera using weighted average

    Science.gov (United States)

    Pang, Zongguang; Kong, Lingqin; Zhao, Yuejin; Sun, Huijuan; Dong, Liquan; Hui, Mei; Liu, Ming; Liu, Xiaohua; Liu, Lingling; Li, Xiaohui; Li, Rongji

    2018-01-01

    Imaging photoplethysmography (IPPG) is an emerging technique for the extraction of vital signs of human beings using video recordings. IPPG technology, with advantages such as non-contact measurement, low cost and easy operation, has become a research hot spot in the field of biomedicine. However, noise caused by non-microarterial areas cannot be removed because of the uneven distribution of micro-arteries and the different signal strength of each region, which results in a low signal-to-noise ratio of IPPG signals and low accuracy of heart rate estimation. In this paper, we propose a method for improving the signal-to-noise ratio of camera-based IPPG signals from each sub-region of the face using a weighted average. Firstly, we obtain the region of interest (ROI) of a subject's face from the camera. Secondly, each region of interest is tracked and feature-matched in each frame of the video. Each tracked region of the face is divided into 60x60-pixel blocks. Thirdly, the weight of the PPG signal of each sub-region is calculated based on the signal-to-noise ratio of that sub-region. Finally, we combine the IPPG signals from all the tracked ROIs using a weighted average. Compared with existing approaches, the results show that the proposed method yields a modest but significant improvement in the signal-to-noise ratio of camera-based PPG estimates and in the accuracy of heart rate measurement.
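    A condensed sketch of the combination step (the frame rate, band limits, and SNR proxy are assumptions, not the paper's exact definitions): each face block contributes a PPG trace, and blocks whose spectra concentrate power in a plausible heart-rate band get larger weights.

```python
import numpy as np

def weighted_ippg(signals, fps=30.0):
    """signals: array (n_blocks, n_frames), one PPG trace per face block.
    Returns the SNR-weighted average trace."""
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 4.0)          # plausible cardiac band
    centered = signals - signals.mean(axis=1, keepdims=True)
    spec = np.abs(np.fft.rfft(centered, axis=1)) ** 2
    snr = spec[:, band].sum(axis=1) / (spec[:, ~band].sum(axis=1) + 1e-12)
    weights = snr / snr.sum()
    return weights @ signals                      # weighted average PPG signal
```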

  9. Novel MGF-based expressions for the average bit error probability of binary signalling over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2014-04-01

    The main idea in the moment generating function (MGF) approach is to alternatively express the conditional bit error probability (BEP) in a desired exponential form so that possibly multi-fold performance averaging is readily converted into a computationally efficient single-fold averaging - sometimes into a closed-form - by means of using the MGF of the signal-to-noise ratio. However, as presented in [1] and specifically indicated in [2] and also to the best of our knowledge, there does not exist an MGF-based approach in the literature to represent Wojnar's generic BEP expression in a desired exponential form. This paper presents novel MGF-based expressions for calculating the average BEP of binary signalling over generalized fading channels, specifically by expressing Wojnar's generic BEP expression in a desirable exponential form. We also propose MGF-based expressions to explore the amount of dispersion in the BEP for binary signalling over generalized fading channels.
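    The standard identity behind the MGF approach (background, not the paper's new expressions) uses Craig's exponential form of the Gaussian Q-function, so the fading average reduces to a single finite integral of the SNR's MGF M_γ(s) = E[e^{sγ}]:

```latex
P_b(\gamma) = a\,Q\!\left(\sqrt{2b\gamma}\right)
            = \frac{a}{\pi}\int_{0}^{\pi/2}
              \exp\!\left(-\frac{b\gamma}{\sin^{2}\theta}\right)\mathrm{d}\theta,
\qquad
\overline{P_b} = \int_{0}^{\infty} P_b(\gamma)\, f_{\gamma}(\gamma)\,\mathrm{d}\gamma
               = \frac{a}{\pi}\int_{0}^{\pi/2}
                 M_{\gamma}\!\left(-\frac{b}{\sin^{2}\theta}\right)\mathrm{d}\theta.
```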

  10. on the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ..... Global Journal of Mathematics and Statistics. Vol. 1. ... Business and Economic Research Center.

  11. Effect of tank geometry on its average performance

    Science.gov (United States)

    Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.

    2018-03-01

    The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. Calculations are given of the average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls, depending on their "height : radius" (H/R) ratio, as well as of the average productivity, filling degree, and filling time of a horizontally ribbed tank of volume 6·10⁻² m³ as the central hole diameter of the ribs is varied. It has been shown that increasing the H/R ratio of tanks with smooth inner walls up to the limiting values significantly increases average tank productivity and reduces filling time. Increasing the H/R ratio of a 1.0 m³ tank to the limiting values (in comparison with the standard tank having H/R equal to 3.49) raises tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that the maximum average productivity and minimum filling time for the 6·10⁻² m³ tank are reached with a central rib hole diameter of 6.4·10⁻² m.

  12. Inverse methods for estimating primary input signals from time-averaged isotope profiles

    Science.gov (United States)

    Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.

    2005-08-01

    Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
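    A minimal numerical sketch of the inversion (the averaging matrix below is a generic moving average, not the amelogenesis model): with fewer data than unknowns, the minimum length solution of Am = d is given by the pseudoinverse, m = Aᵀ(AAᵀ)⁻¹d.

```python
import numpy as np

n_in, width = 40, 8
A = np.zeros((n_in - width + 1, n_in))
for i in range(A.shape[0]):
    A[i, i:i + width] = 1.0 / width        # each measurement averages 8 inputs

m_true = np.sin(np.linspace(0.0, 3 * np.pi, n_in))   # "seasonal" input signal
d = A @ m_true + np.random.default_rng(2).normal(0.0, 0.01, A.shape[0])

# Minimum length (minimum-norm) solution: m = A^T (A A^T)^{-1} d
m_est = np.linalg.pinv(A) @ d
```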

  13. Improving sensitivity in micro-free flow electrophoresis using signal averaging

    Science.gov (United States)

    Turgeon, Ryan T.; Bowser, Michael T.

    2009-01-01

    Microfluidic free-flow electrophoresis (μFFE) is a separation technique that separates continuous streams of analytes as they travel through an electric field in a planar flow channel. The continuous nature of the μFFE separation suggests that approaches more commonly applied in spectroscopy and imaging may be effective in improving sensitivity. The current paper describes the S/N improvements that can be achieved by simply averaging multiple images of a μFFE separation; 20–24-fold improvements in S/N were observed by averaging the signal from 500 images recorded for over 2 min. Up to an 80-fold improvement in S/N was observed by averaging 6500 images. Detection limits as low as 14 pM were achieved for fluorescein, which is impressive considering the non-ideal optical set-up used in these experiments. The limitation to this signal averaging approach was the stability of the μFFE separation. At separation times longer than 20 min bubbles began to form at the electrodes, which disrupted the flow profile through the device, giving rise to erratic peak positions. PMID:19319908

  14. Raven's Test Performance of Sub-Saharan Africans: Average Performance, Psychometric Properties, and the Flynn Effect

    Science.gov (United States)

    Wicherts, Jelte M.; Dolan, Conor V.; Carlson, Jerry S.; van der Maas, Han L. J.

    2010-01-01

    This paper presents a systematic review of published data on the performance of sub-Saharan Africans on Raven's Progressive Matrices. The specific goals were to estimate the average level of performance, to study the Flynn Effect in African samples, and to examine the psychometric meaning of Raven's test scores as measures of general intelligence.…

  15. Digital mammography screening: average glandular dose and first performance parameters

    International Nuclear Information System (INIS)

    Weigel, S.; Girnus, R.; Czwoydzinski, J.; Heindel, W.; Decker, T.; Spital, S.

    2007-01-01

    Purpose: The Radiation Protection Commission demanded a structured implementation of digital mammography screening in Germany. The main requirements were the installation of digital reference centers and separate evaluation of the fully digitized screening units. Digital mammography screening must meet the quality standards of the European guidelines and must be compared to analog screening results. We analyzed early surrogate indicators of effective screening and dose levels for the first German digital screening unit in a routine setting after the first half of the initial screening round. Materials and Methods: We used three digital mammography screening units (one full-field digital scanner [DR] and two computed radiography systems [CR]). Each system has been proven to fulfill the requirements of the national and European guidelines. The radiation exposure levels, the medical workflow and the histological results were documented in a central electronic screening record. Results: In the first year 11,413 women were screened (participation rate 57.5%). The parenchymal doses for the three mammographic X-ray systems, averaged over the different breast sizes, were 0.7 (DR), 1.3 (CR), and 1.5 (CR) mGy. 7% of the screened women needed to undergo further examinations. The total number of screen-detected cancers was 129 (detection rate 1.1%). 21% of the carcinomas were classified as ductal carcinomas in situ, 40% of the invasive carcinomas had a histological size ≤ 10 mm and 61% < 15 mm. The frequency distribution of pT-categories of screen-detected cancer was as follows: pTis 20.9%, pT1 61.2%, pT2 14.7%, pT3 2.3%, pT4 0.8%. 73% of the invasive carcinomas were node-negative. (orig.)

  16. Development and significance of a fetal electrocardiogram recorded by signal-averaged high-amplification electrocardiography.

    Science.gov (United States)

    Hayashi, Risa; Nakai, Kenji; Fukushima, Akimune; Itoh, Manabu; Sugiyama, Toru

    2009-03-01

    Although ultrasonic diagnostic imaging and fetal heart monitors have undergone great technological improvements, the development and use of fetal electrocardiograms to evaluate fetal arrhythmias and autonomic nervous activity have not been fully established. We verified the clinical significance of the novel signal-averaged vector-projected high amplification ECG (SAVP-ECG) method in fetuses from 48 gravidas at 32-41 weeks of gestation and in 34 neonates. SAVP-ECGs from fetuses and newborns were recorded using a modified XYZ-leads system. Once noise and maternal QRS waves were removed, the P, QRS, and T wave intervals were measured from the signal-averaged fetal ECGs. We also compared fetal and neonatal heart rates (HRs), coefficients of variation of heart rate variability (CV) as a parasympathetic nervous activity, and the ratio of low to high frequency (LF/HF ratio) as a sympathetic nervous activity. The rate of detection of a fetal ECG by SAVP-ECG was 72.9%, and the fetal and neonatal QRS and QTc intervals were not significantly different. The neonatal CVs and LF/HF ratios were significantly increased compared with those in the fetus. In conclusion, we have developed a fetal ECG recording method using the SAVP-ECG system, which we used to evaluate autonomic nervous system development.

  17. Signal averaging technique for noninvasive recording of late potentials in patients with coronary artery disease

    Science.gov (United States)

    Abboud, S.; Blatt, C. M.; Lown, B.; Graboys, T. B.; Sadeh, D.; Cohen, R. J.

    1987-01-01

    An advanced noninvasive signal averaging technique was used to detect late potentials in two groups of patients: Group A (24 patients) with coronary artery disease (CAD) and without sustained ventricular tachycardia (VT) and Group B (8 patients) with CAD and sustained VT. Recorded analog data were digitized and aligned using a cross-correlation function with a fast Fourier transform scheme, averaged, and band-pass filtered between 60 and 200 Hz with a non-recursive digital filter. Averaged filtered waveforms were analyzed by computer program for 3 parameters: (1) filtered QRS (fQRS) duration; (2) the interval between the peak of the R wave and the end of fQRS (R-LP); (3) the RMS value of the last 40 msec of fQRS (RMS). Significant differences were found between Groups A and B in fQRS (101 ± 13 msec vs 123 ± 15 msec; p < .0005) and in R-LP (52 ± 11 msec vs 71 ± 18 msec; p < .002). We conclude that (1) the use of a cross-correlation triggering method and a non-recursive digital filter enables reliable recording of late potentials from the body surface; (2) fQRS and R-LP durations are sensitive indicators of CAD patients susceptible to VT.
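    A compact sketch of the processing chain described above (sampling rate, tap count, and the template are assumptions; the study's filter was applied in a single pass, whereas filtfilt below runs it forward and backward for zero phase):

```python
import numpy as np
from scipy.signal import correlate, firwin, filtfilt

def align_and_average(beats, template, fs=1000.0):
    """Cross-correlation alignment of equal-length beats to a template,
    ensemble averaging, then 60-200 Hz non-recursive (FIR) band-pass."""
    aligned = []
    for beat in beats:
        lag = np.argmax(correlate(beat, template, mode="same")) - beat.size // 2
        aligned.append(np.roll(beat, -lag))
    avg = np.mean(aligned, axis=0)
    taps = firwin(201, [60.0, 200.0], pass_zero=False, fs=fs)
    return filtfilt(taps, 1.0, avg)
```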

  18. Average BER analysis of SCM-based free-space optical systems by considering the effect of IM3 with OSSB signals under turbulence channels.

    Science.gov (United States)

    Lim, Wansu; Cho, Tae-Sik; Yun, Changho; Kim, Kiseon

    2009-11-09

    In this paper, we derive the average bit error rate (BER) of subcarrier multiplexing (SCM)-based free space optics (FSO) systems using a dual-drive Mach-Zehnder modulator (DD-MZM) for optical single-sideband (OSSB) signals under atmospheric turbulence channels. In particular, we consider the third-order intermodulation (IM3), a significant performance degradation factor, in the case of high input signal power systems. The derived average BER, as a function of the input signal power and the scintillation index, is employed to determine the optimum number of SCM users upon the designing FSO systems. For instance, when the user number doubles, the input signal power decreases by almost 2 dBm under the log-normal and exponential turbulence channels at a given average BER.

  19. Value of the Signal-Averaged Electrocardiogram in Arrhythmogenic Right Ventricular Cardiomyopathy/Dysplasia

    Science.gov (United States)

    Kamath, Ganesh S.; Zareba, Wojciech; Delaney, Jessica; Koneru, Jayanthi N.; McKenna, William; Gear, Kathleen; Polonsky, Slava; Sherrill, Duane; Bluemke, David; Marcus, Frank; Steinberg, Jonathan S.

    2011-01-01

    Background Arrhythmogenic right ventricular cardiomyopathy/dysplasia (ARVC/D) is an inherited disease causing structural and functional abnormalities of the right ventricle (RV). The presence of late potentials as assessed by the signal averaged electrocardiogram (SAECG) is a minor Task Force criterion. Objective The purpose of this study was to examine the diagnostic and clinical value of the SAECG in a large population of genotyped ARVC/D probands. Methods We compared the SAECGs of 87 ARVC/D probands (age 37 ± 13 years, 47 males) diagnosed as affected or borderline by Task Force criteria without using the SAECG criterion with 103 control subjects. The association of SAECG abnormalities was also correlated with clinical presentation; surface ECG; VT inducibility at electrophysiologic testing; ICD therapy for VT; and RV abnormalities as assessed by cardiac magnetic resonance imaging (cMRI). Results When compared with controls, all 3 components of the SAECG were highly associated with the diagnosis of ARVC/D (p<0.001). These include the filtered QRS duration (fQRSD) (97.8 ± 8.7 msec vs. 119.6 ± 23.8 msec), low amplitude signal (LAS) (24.4 ± 9.2 msec vs. 46.2 ± 23.7 msec) and root mean square amplitude of the last 40 msec of late potentials (RMS-40) (50.4 ± 26.9 µV vs. 27.9 ± 36.3 µV). The sensitivity of using SAECG for diagnosis of ARVC/D was increased from 47% using the established 2 of 3 criteria (i.e. late potentials) to 69% by using a modified criterion of any 1 of the 3 criteria, while maintaining a high specificity of 95%. Abnormal SAECG as defined by this modified criteria was associated with a dilated RV volume and decreased RV ejection fraction detected by cMRI (p<0.05). SAECG abnormalities did not vary with clinical presentation or reliably predict spontaneous or inducible VT, and had limited correlation with ECG findings. Conclusion Using 1 of 3 SAECG criteria contributed to increased sensitivity and specificity for the diagnosis of ARVC/D. This

  20. Advanced pulse oximeter signal processing technology compared to simple averaging. I. Effect on frequency of alarms in the operating room.

    Science.gov (United States)

    Rheineck-Leyssius, A T; Kalkman, C J

    1999-05-01

    To determine the effect of a new signal processing technique (Oxismart, Nellcor, Inc., Pleasanton, CA) on the incidence of false pulse oximeter alarms in the operating room (OR). Prospective observational study. Nonuniversity hospital. 53 ASA physical status I, II, and III consecutive patients undergoing general anesthesia with tracheal intubation. In the OR we compared the number of alarms produced by a recently developed third-generation pulse oximeter (Nellcor Symphony N-3000) with the Oxismart signal processing technique and a conventional pulse oximeter (Criticare 504). Three pulse oximeters were used simultaneously in each patient: a Nellcor pulse oximeter, a Criticare with the signal averaging time set at 3 seconds (Criticare_average3s) and a similar unit with the signal averaging time set at 21 seconds (Criticare_average21s). For each pulse oximeter, the number of false (artifact) alarms was counted. One false alarm was produced by the Nellcor (duration 55 sec) and one false alarm by the Criticare_average21s monitor (5 sec). The incidence of false alarms was higher with Criticare_average3s: in eight patients, Criticare_average3s produced 20 false alarms, a significantly higher incidence than with either Oxismart signal processing or the Criticare monitor with the longer averaging time of 21 seconds.

  1. STUDY OF WITHERS HEIGHT AVERAGE PERFORMANCES IN HUCUL HORSE BREED – HROBY BLOODLINE

    Directory of Open Access Journals (Sweden)

    M. MAFTEI

    2008-10-01

    Full Text Available The study of average performances in a population is of great importance because, in a population, the average phenotypic value equals the average genotypic value. Thus, studies of the average values of characters give us an idea of the population's genetic level. The biological material is represented by 177 Hucul horses from the Hroby bloodline, divided into 6 stallion families (tab. 1) and analysed at 18, 30 and 42 months of age, owned by the Lucina Hucul stud farm. The average performances for withers height are presented in tab. 2. We can observe that the average performances for this character lie within the characteristic limits of the breed. Both sexes show a small degree of variability, with a tendency to decrease with ageing. We can observe a normal evolution of the growth process in time, with significant differences only at the age of 42 months. Under these conditions we can say that the average performances for withers height take different values, influenced by age, with a decreasing tendency.

  2. STUDY OF WITHERS HEIGHT AVERAGE PERFORMANCES IN HUCUL HORSE BREED –GORAL BLOODLINE

    Directory of Open Access Journals (Sweden)

    M. MAFTEI

    2008-10-01

    Full Text Available The study of average performances in a population is of great importance because, in a population, the average phenotypic value equals the average genotypic value. Thus, studies of the average values of characters give us an idea of the population's genetic level. The biological material is represented by 87 Hucul horses from the Goral bloodline, divided into 5 stallion families (tab. 1) and analysed at 18, 30 and 42 months of age, owned by the Lucina Hucul stud farm. The average performances for withers height are presented in tab. 2. We can observe that the average performances for this character lie within the characteristic limits of the breed. Both sexes show a small degree of variability, with a tendency to decrease with ageing. We can observe a normal evolution of the growth process in time, with significant differences only at the age of 42 months. Under these conditions we can say that the average performances for withers height take different values, influenced by age, with a decreasing tendency.

  3. Phase-rectified signal averaging method to predict perinatal outcome in infants with very preterm fetal growth restriction- a secondary analysis of TRUFFLE-trial

    NARCIS (Netherlands)

    Lobmaier, Silvia M.; Mensing van Charante, Nico; Ferrazzi, Enrico; Giussani, Dino A.; Shaw, Caroline J.; Müller, Alexander; Ortiz, Javier U.; Ostermayer, Eva; Haller, Bernhard; Prefumo, Federico; Frusca, Tiziana; Hecher, Kurt; Arabin, Birgit; Thilaganathan, Baskaran; Papageorghiou, Aris T.; Bhide, Amarnath; Martinelli, Pasquale; Duvekot, Johannes J.; van Eyck, Jim; Visser, Gerard H A; Schmidt, Georg; Ganzevoort, Wessel; Lees, Christoph C.; Schneider, Karl T M; Bilardo, Caterina M.; Brezinka, Christoph; Diemert, Anke; Derks, Jan B.; Schlembach, Dietmar; Todros, Tullia; Valcamonico, Adriana; Marlow, Neil; van Wassenaer-Leemhuis, Aleid

    2016-01-01

    Background Phase-rectified signal averaging, an innovative signal processing technique, can be used to investigate quasi-periodic oscillations in noisy, nonstationary signals that are obtained from fetal heart rate. Phase-rectified signal averaging is currently the best method to predict survival

  4. Phase-rectified signal averaging method to predict perinatal outcome in infants with very preterm fetal growth restriction- a secondary analysis of TRUFFLE-trial

    NARCIS (Netherlands)

    Lobmaier, Silvia M.; Mensing van Charante, Nico; Ferrazzi, Enrico; Giussani, Dino A.; Shaw, Caroline J.; Müller, Alexander; Ortiz, Javier U.; Ostermayer, Eva; Haller, Bernhard; Prefumo, Federico; Frusca, Tiziana; Hecher, Kurt; Arabin, Birgit; Thilaganathan, Baskaran; Papageorghiou, Aris T.; Bhide, Amarnath; Martinelli, Pasquale; Duvekot, Johannes J.; van Eyck, Jim; Visser, Gerard H. A.; Schmidt, Georg; Ganzevoort, Wessel; Lees, Christoph C.; Schneider, Karl T. M.; Bilardo, Caterina M.; Brezinka, Christoph; Diemert, Anke; Derks, Jan B.; Schlembach, Dietmar; Todros, Tullia; Valcamonico, Adriana; Marlow, Neil; van Wassenaer-Leemhuis, Aleid

    2016-01-01

    Phase-rectified signal averaging, an innovative signal processing technique, can be used to investigate quasi-periodic oscillations in noisy, nonstationary signals that are obtained from fetal heart rate. Phase-rectified signal averaging is currently the best method to predict survival after

  5. A Signal Averager Interface between a Biomation 6500 Transient Recorder and a LSI-11 Microcomputer.

    Science.gov (United States)

    1980-06-01

    decode the proper bus synchronizing signals. SA data lines 1 and 2 are decoded to produce SEL0 L - SEL4 L, which select one of four SA registers.

  6. Improved performance of high average power semiconductor arrays for applications in diode pumped solid state lasers

    International Nuclear Information System (INIS)

    Beach, R.; Emanuel, M.; Benett, W.; Freitas, B.; Ciarlo, D.; Carlson, N.; Sutton, S.; Skidmore, J.; Solarz, R.

    1994-01-01

    The average power performance capability of semiconductor diode laser arrays has improved dramatically over the past several years. These performance improvements, combined with cost reductions pursued by LLNL and others in the fabrication and packaging of diode lasers, have continued to reduce the price per average watt of laser diode radiation. Presently, we are at the point where the manufacturers of commercial high average power solid state laser systems used in material processing applications can now seriously consider the replacement of their flashlamp pumps with laser diode pump sources. Additionally, a low cost technique developed and demonstrated at LLNL for optically conditioning the output radiation of diode laser arrays has enabled a new and scalable average power diode-end-pumping architecture that can be simply implemented in diode pumped solid state laser systems (DPSSL's). This development allows the high average power DPSSL designer to look beyond the Nd ion for the first time. Along with high average power DPSSL's which are appropriate for material processing applications, low and intermediate average power DPSSL's are now realizable at low enough costs to be attractive for use in many medical, electronic, and lithographic applications

  7. MOTION ARTIFACT REDUCTION IN FUNCTIONAL NEAR INFRARED SPECTROSCOPY SIGNALS BY AUTOREGRESSIVE MOVING AVERAGE MODELING BASED KALMAN FILTERING

    Directory of Open Access Journals (Sweden)

    MEHDI AMIAN

    2013-10-01

    Full Text Available Functional near infrared spectroscopy (fNIRS) is a technique used for noninvasive measurement of the oxyhemoglobin (HbO2) and deoxyhemoglobin (HHb) concentrations in brain tissue. Since the ratio of the concentrations of these two agents is correlated with neuronal activity, fNIRS can be used for monitoring and quantifying cortical activity. The portability of fNIRS makes it a good candidate for studies involving subject movement. fNIRS measurements, however, are sensitive to artifacts generated by the subject's head motion, which makes fNIRS signals less effective in such applications. In this paper, autoregressive moving average (ARMA) modeling of the fNIRS signal is proposed for a state-space representation of the signal, which is then fed to a Kalman filter for estimating the motionless signal from the motion-corrupted signal. Results are compared to the previously reported autoregressive (AR) model based approach and show that the ARMA models outperform the AR models. We attribute this to the richer structure of ARMA models, which contain more terms than AR models. We show that the signal-to-noise ratio (SNR) is about 2 dB higher for the ARMA-based method.
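    A minimal sketch of the filtering stage (the ARMA order and all parameter values are assumptions; in the paper they would be identified from the fNIRS series): an ARMA(1,1) signal model is put in state-space form and a Kalman filter estimates the motion-free signal from the corrupted observations.

```python
import numpy as np

def kalman_arma11(y, phi=0.9, theta=0.3, q=1.0, r=4.0):
    """Kalman filter for an ARMA(1,1) signal observed in motion noise.
    State-space (Harvey form): x_t = F x_{t-1} + g e_t,  y_t = H x_t + v_t,
    with process-noise variance q and observation-noise variance r."""
    F = np.array([[phi, 1.0], [0.0, 0.0]])
    g = np.array([[1.0], [theta]])
    H = np.array([[1.0, 0.0]])
    Q = q * (g @ g.T)
    x, P = np.zeros((2, 1)), np.eye(2)
    out = np.empty(len(y))
    for t, obs in enumerate(y):
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        S = (H @ P @ H.T + r).item()
        K = P @ H.T / S                               # Kalman gain
        x = x + K * (obs - (H @ x).item())            # update
        P = (np.eye(2) - K @ H) @ P
        out[t] = x[0, 0]                              # artifact-reduced estimate
    return out
```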

  8. Total Quality Management (TQM) Practices and School Climate amongst High, Average and Low Performance Secondary Schools

    Science.gov (United States)

    Ismail, Siti Noor

    2014-01-01

    Purpose: This study attempted to determine whether the dimensions of TQM practices are predictors of school climate. It aimed to identify the level of TQM practices and school climate in three different categories of schools, namely high, average and low performance schools. The study also sought to examine which dimensions of TQM practices…

  9. Documenting Student Performance: An Alternative to the Traditional Calculation of Grade Point Averages

    Science.gov (United States)

    Volwerk, Johannes J.; Tindal, Gerald

    2012-01-01

    Traditionally, students in secondary and postsecondary education have grade point averages (GPA) calculated, and a cumulative GPA computed to summarize overall performance at their institutions. GPAs are used for acknowledgement and awards, as partial evidence for admission to other institutions (colleges and universities), and for awarding…

  10. Raven’s test performance of sub-Saharan Africans: average performance, psychometric properties, and the Flynn Effect

    NARCIS (Netherlands)

    Wicherts, J.M.; Dolan, C.V.; Carlson, J.S.; van der Maas, H.L.J.

    2010-01-01

    This paper presents a systematic review of published data on the performance of sub-Saharan Africans on Raven's Progressive Matrices. The specific goals were to estimate the average level of performance, to study the Flynn Effect in African samples, and to examine the psychometric meaning of Raven's

  11. Average bit error probability of binary coherent signaling over generalized fading channels subject to additive generalized gaussian noise

    KAUST Repository

    Soury, Hamza

    2012-06-01

    This letter considers the average bit error probability of binary coherent signaling over flat fading channels subject to additive generalized Gaussian noise. More specifically, a generic closed form expression in terms of Fox's H function is offered for the extended generalized-K fading case. Simplifications for some special fading distributions such as generalized-K fading and Nakagami-m fading and special additive noise distributions such as Gaussian and Laplacian noise are then presented. Finally, the mathematical formalism is illustrated by some numerical examples verified by computer based simulations for a variety of fading and additive noise parameters. © 2012 IEEE.

  12. Cooperative AF Relaying in Spectrum-Sharing Systems: Performance Analysis under Average Interference Power Constraints and Nakagami-m Fading

    KAUST Repository

    Xia, Minghua

    2012-06-01

    Since the electromagnetic spectrum resource is becoming more and more scarce, improving spectral efficiency is extremely important for the sustainable development of wireless communication systems and services. Integrating cooperative relaying techniques into spectrum-sharing cognitive radio systems sheds new light on higher spectral efficiency. In this paper, we analyze the end-to-end performance of cooperative amplify-and-forward (AF) relaying in spectrum-sharing systems. In order to achieve the optimal end-to-end performance, the transmit powers of the secondary source and the relays are optimized with respect to average interference power constraints at primary users and Nakagami-m fading parameters of interference channels (for mathematical tractability, the desired channels from secondary source to relay and from relay to secondary destination are assumed to be subject to Rayleigh fading). Also, both partial and opportunistic relay-selection strategies are exploited to further enhance system performance. Based on the exact distribution functions of the end-to-end signal-to-noise ratio (SNR) obtained herein, the outage probability, average symbol error probability, diversity order, and ergodic capacity of the system under study are analytically investigated. Our results show that system performance is dominated by the resource constraints and improves only slowly with increasing average SNR. Furthermore, a larger Nakagami-m fading parameter on the interference channels deteriorates system performance slightly. On the other hand, when interference power constraints are stringent, opportunistic relay selection can be exploited to improve system performance significantly. All analytical results are corroborated by simulation results and are shown to be efficient tools for the exact evaluation of system performance.

  13. Testing VRIN framework: Resource value and rareness as sources of competitive advantage and above average performance

    OpenAIRE

    Talaja, Anita

    2012-01-01

    In this study, structural equation model that analyzes the impact of resource and capability characteristics, more specifically value and rareness, on sustainable competitive advantage and above average performance is developed and empirically tested. According to the VRIN framework, if a company possesses and exploits valuable, rare, inimitable and non-substitutable resources and capabilities, it will achieve sustainable competitive advantage. Although the above mentioned statement is widely...

  14. Leveraging Mechanism Simplicity and Strategic Averaging to Identify Signals from Highly Heterogeneous Spatial and Temporal Ozone Data

    Science.gov (United States)

    Brown-Steiner, B.; Selin, N. E.; Prinn, R. G.; Monier, E.; Garcia-Menendez, F.; Tilmes, S.; Emmons, L. K.; Lamarque, J. F.; Cameron-Smith, P. J.

    2017-12-01

    We summarize two methods to aid in the identification of ozone signals from underlying spatially and temporally heterogeneous data in order to help research communities avoid the sometimes burdensome computational costs of high-resolution high-complexity models. The first method utilizes simplified chemical mechanisms (a Reduced Hydrocarbon Mechanism and a Superfast Mechanism) alongside a more complex mechanism (MOZART-4) within CESM CAM-Chem to extend the number of simulated meteorological years (or add additional members to an ensemble) for a given modeling problem. The Reduced Hydrocarbon mechanism is twice as fast, and the Superfast mechanism is three times faster than the MOZART-4 mechanism. We show that simplified chemical mechanisms are largely capable of simulating surface ozone across the globe as well as the more complex chemical mechanisms, and where they are not capable, a simple standardized anomaly emulation approach can correct for their inadequacies. The second method uses strategic averaging over both temporal and spatial scales to filter out the highly heterogeneous noise that underlies ozone observations and simulations. This method allows for a selection of temporal and spatial averaging scales that match a particular signal strength (between 0.5 and 5 ppbv), and enables the identification of regions where an ozone signal can rise above the ozone noise over a given region and a given period of time. In conjunction, these two methods can be used to "scale down" chemical mechanism complexity and quantitatively determine spatial and temporal scales that could enable research communities to utilize simplified representations of atmospheric chemistry and thereby maximize their productivity and efficiency given computational constraints. While this framework is here applied to ozone data, it could also be applied to a broad range of geospatial data sets (observed or modeled) that have spatial and temporal coverage.
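    A toy illustration of the strategic-averaging idea (the trend size, noise level, and grid are invented): averaging over more grid cells and longer windows shrinks the noise roughly as 1/√N, so one can search for the smallest spatio-temporal scale at which residual noise drops below a target signal strength.

```python
import numpy as np

rng = np.random.default_rng(3)
months, cells = 120, 400
trend = 0.02                                        # ppbv/month assumed signal
field = trend * np.arange(months)[:, None] + rng.normal(0.0, 5.0, (months, cells))

def noise_after_averaging(n_months, n_cells):
    """Detrended residual std of the block average over the chosen scales."""
    block = field[:n_months, :n_cells].mean(axis=1)         # spatial average
    t = np.arange(n_months)
    resid = block - np.poly1d(np.polyfit(t, block, 1))(t)   # remove the trend
    return resid.std()

# Small scales stay noisy; large scales beat a 1 ppbv target signal strength.
print(noise_after_averaging(12, 10), noise_after_averaging(60, 400))
```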

  15. Prolonged signal-averaged P wave duration as a prognostic marker for morbidity and mortality in patients with congestive heart failure

    DEFF Research Database (Denmark)

    Dixen, Ulrik; Wallevik, Laura; Hansen, Maja

    2003-01-01

    To evaluate the prognostic roles of prolonged signal-averaged P wave duration (SAPWD), raised levels of natriuretic peptides, and clinical characteristics in patients with stable congestive heart failure (CHF).

  16. Reduced fractal model for quantitative analysis of averaged micromotions in mesoscale: Characterization of blow-like signals

    International Nuclear Information System (INIS)

    Nigmatullin, Raoul R.; Toboev, Vyacheslav A.; Lino, Paolo; Maione, Guido

    2015-01-01

    Highlights: •A new approach describes fractal-branched systems with long-range fluctuations. •A reduced fractal model is proposed. •The approach is used to characterize blow-like signals. •The approach is tested on data from different fields. -- Abstract: It has been shown that many micromotions in the mesoscale region are averaged in accordance with their self-similar (geometrical/dynamical) structure. This distinctive feature helps to reduce a wide set of different micromotions describing relaxation/exchange processes to an averaged collective motion, expressed mathematically in a rather general form. This reduction opens new perspectives in the description of different blow-like signals (BLS) in many complex systems. The main characteristic of these signals is their finite duration, also when the generalized reduced function is used for their quantitative fitting. As examples, we quantitatively describe available signals generated by people with bronchial asthma, songs by queen bees, and car engine valves operating in the idling regime. We develop a special treatment procedure based on the eigen-coordinates (ECs) method that allows us to justify the generalized reduced fractal model (RFM) for the description of BLS that can propagate in different complex systems. The obtained describing function is based on the self-similar properties of the different considered micromotions. This kind of cooperative model is proposed here for the first time. In spite of the fact that the nature of the dynamic processes that take place in fractal structures on the mesoscale level is not well understood, the parameters of the RFM fitting function can be used for the construction of calibration curves, affected by various external/random factors. Then, the calculated set of fitting parameters of these calibration curves can characterize the BLS of different complex systems affected by those factors. Though the method to construct and analyze the calibration curves goes beyond the scope

  17. Correlations between PANCE performance, physician assistant program grade point average, and selection criteria.

    Science.gov (United States)

    Brown, Gina; Imel, Brittany; Nelson, Alyssa; Hale, LaDonna S; Jansen, Nick

    2013-01-01

    The purpose of this study was to examine correlations between first-time Physician Assistant National Certifying Exam (PANCE) scores and pass/fail status, physician assistant (PA) program didactic grade point average (GPA), and specific selection criteria. This retrospective study evaluated graduating classes from 2007, 2008, and 2009 at a single program (N = 119). There was no correlation between PANCE performance and undergraduate grade point average (GPA), science prerequisite GPA, or health care experience. There was a moderate correlation between PANCE pass/fail and where students took science prerequisites (r = 0.27, P = .003) but not with the PANCE score. PANCE scores were correlated with overall PA program GPA (r = 0.67), PA pharmacology grade (r = 0.68), and PA anatomy grade (r = 0.41) but not with PANCE pass/fail. Correlations between selection criteria and PANCE performance were limited, but further research regarding the influence of prerequisite institution type may be warranted and may improve admission decisions. PANCE scores and PA program GPA correlations may guide academic advising and remediation decisions for current students.

  18. Signal Processing for Improved Wireless Receiver Performance

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.

    2007-01-01

    This thesis is concerned with signal processing for improving the performance of wireless communication receivers for well-established cellular networks such as the GSM/EDGE and WCDMA/HSPA systems. The goal of doing so, is to improve the end-user experience and/or provide a higher system capacity...... by allowing an increased reuse of network resources. To achieve this goal, one must first understand the nature of the problem and an introduction is therefore provided. In addition, the concept of graph-based models and approximations for wireless communications is introduced along with various Belief...... Propagation (BP) methods for detecting the transmitted information, including the Turbo principle. Having established a framework for the research, various approximate detection schemes are discussed. First, the general form of linear detection is presented and it is argued that this may be preferable...

  19. Green Suppliers Performance Evaluation in Belt and Road Using Fuzzy Weighted Average with Social Media Information

    Directory of Open Access Journals (Sweden)

    Kuo-Ping Lin

    2017-12-01

    Full Text Available A decision model for selecting a suitable supplier is key to reducing the environmental impact in green supply chain management for high-tech companies. Traditional fuzzy weighted average (FWA) adopts linguistic variables assessed by experts to determine weights. However, the weights in FWA have not considered the public voice, meaning the viewpoints of consumers, in green supply chain management. This paper focuses on developing a novel decision model for green supplier selection in the One Belt and One Road (OBOR) initiative through a fuzzy weighted average approach with social media. The proposed decision model uses the membership grades of the criteria and sub-criteria and their relative weights, which consider the volume of social media, to establish an analysis matrix for green supplier selection. Then, the proposed fuzzy weighted average approach is used as an aggregating tool to calculate a synthetic score for each green supplier in the Belt and Road initiative. The final scores of the green suppliers are ordered by a non-fuzzy performance value ranking method to help the consumer make a decision. A case of green supplier selection in the light-emitting diode (LED) industry is used to demonstrate the proposed decision model. The findings demonstrate that (1) the consumer's main concerns are "Quality" and "Green products" in the LED industry; hence, the ranking of suitable suppliers in the FWA with social media information model differs from that of traditional FWA; (2) OBOR in the LED industry is not fervently discussed in searches of Google and Twitter; and (3) the FWA with social media information can objectively analyze green supplier selection because the novel model considers the viewpoints of consumers.
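    A bare-bones fuzzy weighted average for one supplier (the numbers are invented, and vertex-wise arithmetic on triangular fuzzy numbers is a common simplification rather than the paper's exact operator):

```python
import numpy as np

# Triangular fuzzy numbers (l, m, u): supplier ratings on 3 criteria
ratings = np.array([(6.0, 7.0, 8.0), (8.0, 9.0, 10.0), (5.0, 6.0, 7.0)])
# Criteria weights, e.g. scaled by social-media volume per criterion (assumed)
weights = np.array([(0.2, 0.3, 0.4), (0.4, 0.5, 0.6), (0.1, 0.2, 0.3)])

fwa = (weights * ratings).sum(axis=0) / weights.sum(axis=0)  # fuzzy weighted average
score = fwa.mean()   # non-fuzzy performance value: triangle centroid (l+m+u)/3
print(fwa, score)
```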

  20. Spectral analysis of 87-lead body surface signal-averaged ECGs in patients with previous anterior myocardial infarction as a marker of ventricular tachycardia.

    Science.gov (United States)

    Hosoya, Y; Kubota, I; Shibata, T; Yamaki, M; Ikeda, K; Tomoike, H

    1992-06-01

    There have been few studies on the relation between the body surface distribution of high- and low-frequency components within the QRS complex and ventricular tachycardia (VT). Eighty-seven-lead signal-averaged ECGs were obtained from 30 normal subjects (N group) and 30 patients with previous anterior myocardial infarction (MI), with VT (MI-VT[+] group, n = 10) or without VT (MI-VT[-] group, n = 20). The onset and offset of the QRS complex were determined from 87-lead root mean square values computed from the averaged (but not filtered) ECG waveforms. Fast Fourier transform analysis was performed on the signal-averaged ECG. The resulting Fourier coefficients were attenuated by use of the transfer function, and then the inverse transform was done for five frequency ranges (0-25, 25-40, 40-80, 80-150, and 150-250 Hz). From the QRS onset to the QRS offset, the time integral of the absolute value of the reconstructed waveforms was calculated for each of the five frequency ranges. The body surface distributions of these areas were expressed as QRS area maps. The maximal values of the QRS area maps were compared among the three groups. In the frequency ranges of 0-25 and 150-250 Hz, there were no significant differences in the maximal values among these three groups. Both MI groups had significantly smaller maximal values of QRS area maps in the frequency ranges of 25-40 and 40-80 Hz compared with the N group. The MI-VT(+) group had significantly smaller maximal values in the frequency ranges of 40-80 and 80-150 Hz than the MI-VT(-) group. These three groups were clearly differentiated by the maximal values of the 40-80 Hz QRS area map. It was suggested that the maximal value of the 40-80 Hz QRS area map is a new marker for VT after anterior MI.
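    The band-wise "QRS area" computation can be sketched as follows (the sampling rate is an assumption and the transfer-function attenuation step is omitted; the band edges come from the abstract):

```python
import numpy as np

def band_area(qrs, fs, f_lo, f_hi):
    """Time integral of |waveform| reconstructed from one frequency band:
    FFT the signal-averaged QRS, zero the bins outside [f_lo, f_hi],
    inverse-transform, and integrate the absolute value."""
    spec = np.fft.rfft(qrs)
    freqs = np.fft.rfftfreq(qrs.size, d=1.0 / fs)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    recon = np.fft.irfft(spec, n=qrs.size)
    return np.sum(np.abs(recon)) / fs          # area in amplitude*seconds

# e.g. the discriminating 40-80 Hz band, one lead of the 87-lead map:
# area = band_area(lead_qrs, fs=2000.0, f_lo=40.0, f_hi=80.0)
```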

  1. The predictive value of P-wave duration by signal-averaged electrocardiogram in acute ST elevation myocardial infarction.

    Science.gov (United States)

    Shturman, Alexander; Bickel, Amitai; Atar, Shaul

    2012-08-01

    The prognostic value of P-wave duration has been previously evaluated by signal-averaged ECG (SAECG) in patients with various arrhythmias not associated with acute myocardial infarction (AMI). To investigate the clinical correlates and prognostic value of P-wave duration in patients with ST elevation AMI (STEMI). The patients (n = 89) were evaluated on the first, second and third day after admission, as well as one week and one month post-AMI. Survival was determined 2 years after the index STEMI. P-wave duration was significantly prolonged beyond the upper normal range in patients with LVEF < 40% (128.79 ± 28 msec) (P = 0.001). P-wave duration above 120 msec was significantly correlated with an increased complication rate; namely, sustained ventricular tachyarrhythmia (36%), congestive heart failure (41%), atrial fibrillation (11%), recurrent angina (14%), and re-infarction (8%) (P = 0.012, odds ratio 4.267, 95% confidence interval 1.37-13.32). A P-wave duration of 126 msec on the day of admission was found to have the highest predictive value for in-hospital complications including LVEF < 40% (area under the curve 0.741, P < 0.001). However, we did not find a significant correlation between P-wave duration and mortality after multivariate analysis. P-wave duration as evaluated by SAECG correlates negatively with LVEF post-STEMI, and a P-wave duration above 126 msec can be utilized as a non-invasive predictor of in-hospital complications and low LVEF following STEMI.

  2. [In patients with Graves' disease signal-averaged P wave duration positively correlates with the degree of thyrotoxicosis].

    Science.gov (United States)

    Czarkowski, Marek; Oreziak, Artur; Radomski, Dariusz

    2006-04-01

    Coexistence of the goitre, proptosis and palpitations was observed in the XIX century for the first time. Sinus tachyarrhythmias and atrial fibrillation are typical cardiac symptoms of hyperthyroidism. Atrial fibrillation occurs more often in patients with toxic goitre than in young patients with Graves' disease. These findings suggest that the causes of atrial fibrillation might be multifactorial in the elderly. The aims of our study were to evaluate correlations between the parameters of the atrial signal-averaged ECG (SAECG) and the serum concentrations of free thyroid hormones. 25 patients with untreated Graves' disease (G-B) (age 29.6 ± 9.0 y.o.) and 26 control patients (age 29.3 ± 6.9 y.o.) were enrolled in our study. None of them had a history of atrial fibrillation, which was confirmed by 24-hour ECG Holter monitoring. Serum fT3, fT4 and TSH were determined in venous blood by an immunoenzymatic method. Atrial SAECG recording with filtration by a zero-phase Butterworth filter (45-150 Hz) was performed in all subjects. The duration of the atrial vector magnitude (hfP) and the root mean square of the terminal 20 ms of the atrial vector magnitude (RMS20) were analysed. There were no significant differences in the values of the SAECG parameters (hfP, RMS20) between the investigated groups. A positive correlation between hfP and serum fT3 concentration was observed in group G-B (Spearman's correlation coefficient R = 0.462, p < 0.05). Thus hfP in patients with Graves' disease depends not only on hyperthyroidism but also on the serum concentration of fT3.

  3. Compressive sensing scalp EEG signals: implementations and practical performance.

    Science.gov (United States)

    Abdulghani, Amir M; Casson, Alexander J; Rodriguez-Villegas, Esther

    2012-11-01

    Highly miniaturised, wearable computing and communication systems allow unobtrusive, convenient and long term monitoring of a range of physiological parameters. For long term operation from the physically smallest batteries, the average power consumption of a wearable device must be very low. It is well known that the overall power consumption of these devices can be reduced by the inclusion of low power consumption, real-time compression of the raw physiological data in the wearable device itself. Compressive sensing is a new paradigm for providing data compression: it has shown significant promise in fields such as MRI; and is potentially suitable for use in wearable computing systems as the compression process required in the wearable device has a low computational complexity. However, the practical performance very much depends on the characteristics of the signal being sensed. As such the utility of the technique cannot be extrapolated from one application to another. Long term electroencephalography (EEG) is a fundamental tool for the investigation of neurological disorders and is increasingly used in many non-medical applications, such as brain-computer interfaces. This article investigates in detail the practical performance of different implementations of the compressive sensing theory when applied to scalp EEG signals.
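    A self-contained sketch of the paradigm on a synthetic "EEG" window (the DCT sparsity basis, compression ratio, and Lasso recovery are generic compressive-sensing choices, not the specific implementations benchmarked in the article): the sensor only computes y = Φx, and the costly sparse reconstruction happens off-body.

```python
import numpy as np
from scipy.fftpack import idct
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n, m = 256, 64                               # 4x compression of one window
Psi = idct(np.eye(n), norm="ortho")          # assumed sparsity basis (DCT)
x = Psi @ (rng.normal(0.0, 1.0, n) * (rng.random(n) < 0.05))  # sparse test signal

Phi = rng.normal(0.0, 1.0, (m, n)) / np.sqrt(m)  # cheap random sensing on-sensor
y = Phi @ x                                      # the only data transmitted

lasso = Lasso(alpha=1e-3, max_iter=50_000).fit(Phi @ Psi, y)
x_hat = Psi @ lasso.coef_
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))   # relative recovery error
```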

  4. Cooperative AF Relaying in Spectrum-Sharing Systems: Performance Analysis under Average Interference Power Constraints and Nakagami-m Fading

    KAUST Repository

    Xia, Minghua; Aissa, Sonia

    2012-01-01

    the optimal end-to-end performance, the transmit powers of the secondary source and the relays are optimized with respect to average interference power constraints at primary users and Nakagami-m fading parameters of interference channels (for mathematical

  5. Removing the Influence of Shimmer in the Calculation of Harmonics-To-Noise Ratios Using Ensemble-Averages in Voice Signals

    OpenAIRE

    Carlos Ferrer; Eduardo González; María E. Hernández-Díaz; Diana Torres; Anesto del Toro

    2009-01-01

    Harmonics-to-noise ratios (HNRs) are affected by general aperiodicity in voiced speech signals. To specifically reflect a signal-to-additive-noise ratio, the measurement should be insensitive to other periodicity perturbations, like jitter, shimmer, and waveform variability. The ensemble averaging technique is a time-domain method which has been gradually refined in terms of its sensitivity to jitter and waveform variability and required number of pulses. In this paper, shimmer is introduced ...

  6. Is it better to be average? High and low performance as predictors of employee victimization.

    Science.gov (United States)

    Jensen, Jaclyn M; Patel, Pankaj C; Raver, Jana L

    2014-03-01

    Given increased interest in whether targets' behaviors at work are related to their victimization, we investigated employees' job performance level as a precipitating factor for being victimized by peers in one's work group. Drawing on rational choice theory and the victim precipitation model, we argue that perpetrators take into consideration the risks of aggressing against particular targets, such that high performers tend to experience covert forms of victimization from peers, whereas low performers tend to experience overt forms of victimization. We further contend that the motivation to punish performance deviants will be higher when performance differentials are salient, such that the effects of job performance on covert and overt victimization will be exacerbated by group performance polarization, yet mitigated when the target has high equity sensitivity (benevolence). Finally, we investigate whether victimization is associated with future performance impairments. Results from data collected at 3 time points from 576 individuals in 62 work groups largely support the proposed model. The findings suggest that job performance is a precipitating factor to covert victimization for high performers and overt victimization for low performers in the workplace with implications for subsequent performance.

  7. Average Throughput Performance of Myopic Policy in Energy Harvesting Wireless Sensor Networks.

    Science.gov (United States)

    Gul, Omer Melih; Demirekler, Mubeccel

    2017-09-26

    This paper considers a single-hop wireless sensor network where a fusion center collects data from M energy harvesting wireless sensors. The harvested energy is stored losslessly in an infinite-capacity battery at each sensor. In each time slot, the fusion center schedules K sensors for data transmission over K orthogonal channels. The fusion center has no direct knowledge of the battery states of the sensors or of the statistics of their energy harvesting processes; it only has information about the outcomes of previous transmission attempts. It is assumed that the sensors are data backlogged, there is no battery leakage, and the communication is error-free. An energy harvesting sensor can transmit data to the fusion center whenever it is scheduled, but only if it has enough energy for data transmission. We investigate the average throughput of a Round-Robin-type myopic policy both analytically and numerically under an average reward (throughput) criterion. We show that the Round-Robin-type myopic policy achieves optimality for some classes of energy harvesting processes, although it is suboptimal for a broad class of energy harvesting processes.
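
    A toy simulation of such a Round-Robin-type schedule makes the setting concrete; the Bernoulli harvesting model, energy unit, and parameters below are illustrative assumptions, not the harvesting classes analyzed in the paper.

```python
import random

# Toy single-hop network: each slot, the fusion center polls the K sensors it
# has waited on the longest; a sensor succeeds only if its (hidden) battery
# holds enough energy for one transmission.
M, K, SLOTS, E_TX = 8, 2, 10_000, 1.0

battery = [0.0] * M
order = list(range(M))                # round-robin queue
delivered = 0

for _ in range(SLOTS):
    for i in range(M):                # every sensor harvests this slot
        battery[i] += random.choice([0.0, 0.5])
    scheduled, order = order[:K], order[K:] + order[:K]
    for i in scheduled:
        if battery[i] >= E_TX:        # transmission succeeds, energy spent
            battery[i] -= E_TX
            delivered += 1

print("average throughput:", delivered / SLOTS)
```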

  8. Moving Average Filter-Based Phase-Locked Loops: Performance Analysis and Design Guidelines

    DEFF Research Database (Denmark)

    Golestan, Saeed; Ramezani, Malek; Guerrero, Josep M.

    2014-01-01

    ... this challenge, incorporating moving average filter(s) (MAF) into the PLL structure has been proposed in some recent literature. A MAF is a linear-phase finite impulse response filter which can act as an ideal low-pass filter if certain conditions hold. The main aim of this paper is to present control design guidelines for a typical MAF-based PLL. The paper starts with the general description of MAFs. The main challenge associated with using MAFs is then explained, and its possible solutions are discussed. The paper then proceeds with a brief overview of the different MAF-based PLLs. In each case, the PLL block diagram description is shown, the advantages and limitations are briefly discussed, and the tuning approach (if available) is evaluated. The paper then presents two systematic methods to design the control parameters of a typical MAF-based PLL: one for the case of using a proportional ...
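
    A MAF averages the last N input samples; it has unity gain at DC and spectral nulls at integer multiples of fs/N, which is why a window spanning one fundamental grid period cancels the harmonic ripple riding on the DC quantity a PLL loop filter wants to track. A minimal sketch, with the sampling rate and window length as illustrative assumptions:

```python
import numpy as np

def moving_average_filter(x, n):
    """FIR moving average y[k] = mean(x[k-n+1..k]); unity gain at DC,
    spectral nulls at integer multiples of fs/n, linear phase."""
    return np.convolve(x, np.ones(n) / n)[: len(x)]

fs, f_grid = 10_000, 50
n = fs // f_grid                      # 200-sample window = one 50 Hz period
t = np.arange(2 * fs) / fs

# DC component (the wanted quantity) plus double-frequency ripple, a typical
# disturbance on the error signal inside a synchronous-reference-frame PLL.
x = 1.0 + 0.2 * np.sin(2 * np.pi * 2 * f_grid * t)
y = moving_average_filter(x, n)
print(y[n:].min(), y[n:].max())       # ~1.0 once the window has filled
```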

  9. Removing the Influence of Shimmer in the Calculation of Harmonics-To-Noise Ratios Using Ensemble-Averages in Voice Signals

    Directory of Open Access Journals (Sweden)

    Carlos Ferrer

    2009-01-01

    Harmonics-to-noise ratios (HNRs) are affected by general aperiodicity in voiced speech signals. To specifically reflect a signal-to-additive-noise ratio, the measurement should be insensitive to other periodicity perturbations, like jitter, shimmer, and waveform variability. The ensemble averaging technique is a time-domain method which has been gradually refined in terms of its sensitivity to jitter and waveform variability and the required number of pulses. In this paper, shimmer is introduced in the model of the ensemble average, and a formula is derived which allows the reduction of shimmer effects in HNR calculation. The validity of the technique is evaluated using synthetically shimmered signals, and the prerequisites (glottal pulse positions and amplitudes) are obtained by means of fully automated methods. The results demonstrate the feasibility and usefulness of the correction.
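
    For context, the baseline ensemble-average HNR that the paper refines works as follows: stack period-synchronous pulses, take the ensemble mean as the harmonic component, and treat the spread about the mean as additive noise. A minimal sketch on synthetic data (pulse shape, pulse count, and noise level are illustrative assumptions; the paper's shimmer correction itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
period, n_pulses, sigma = 80, 30, 0.05

pulse = np.hanning(period)                       # stand-in glottal cycle
ensemble = pulse + sigma * rng.standard_normal((n_pulses, period))

mean_pulse = ensemble.mean(axis=0)               # harmonic estimate
noise = ensemble - mean_pulse                    # additive-noise estimate

# HNR: harmonic energy per pulse over noise energy per pulse, in dB.
hnr_db = 10 * np.log10(np.sum(mean_pulse**2) / (np.sum(noise**2) / n_pulses))
print(f"HNR ~ {hnr_db:.1f} dB")
```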

  10. Warning Signals for Poor Performance Improve Human-Robot Interaction

    NARCIS (Netherlands)

    van den Brule, Rik; Bijlstra, Gijsbert; Dotsch, Ron; Haselager, Pim; Wigboldus, Daniel HJ

    2016-01-01

    The present research was aimed at investigating whether human-robot interaction (HRI) can be improved by a robot’s nonverbal warning signals. Ideally, when a robot signals that it cannot guarantee good performance, people could take preventive actions to ensure the successful completion of the ...

  11. Assessment of dynamic PRA techniques with industry average component performance data

    Energy Technology Data Exchange (ETDEWEB)

    Yadav, Vaibhav; Agarwal, Vivek; Gribok, Andrei V.; Smith, Curtis L.

    2017-06-01

    In the nuclear industry, risk monitors are intended to provide a point-in-time estimate of the system risk given the current plant configuration. Current risk monitors are limited in that they do not properly take into account the deteriorating states of plant equipment, which are unit-specific. Current approaches to computing risk monitors use probabilistic risk assessment (PRA) techniques, but the assessment is typically a snapshot in time. Living PRA models attempt to address limitations of traditional PRA models in a limited sense by including temporary changes in plant and system configurations. However, information on plant component health is not considered. This often leaves risk monitors using living PRA models incapable of conducting evaluations with dynamic degradation scenarios evolving over time. There is a need to develop enabling approaches that solidify risk monitors to provide time- and condition-dependent risk by integrating traditional PRA models with condition monitoring and prognostic techniques. This paper presents estimation of system risk evolution over time by integrating plant risk monitoring data with dynamic PRA methods incorporating aging and degradation. Several online, non-destructive approaches have been developed for diagnosing plant component conditions in the nuclear industry, e.g., a condition indication index based on vibration analysis, current signatures, and operational history [1]. In this work the component performance measures at U.S. commercial nuclear power plants (NPPs) [2] are incorporated within the various dynamic PRA methodologies [3] to provide better estimates of failure probabilities. Aging and degradation are modeled within the Level-1 PRA framework and applied to several failure modes of pumps, and the approach can be extended to a range of components, viz. valves, generators, batteries, and pipes.
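
    To make the idea concrete, a time- and condition-dependent failure probability can be sketched by letting a condition indication index accelerate the effective age in an aging model. The Weibull form and all numbers below are illustrative assumptions, not the calibrated models used with the industry data.

```python
import math

def failure_prob(t_years, beta=2.0, eta=30.0, health_index=1.0):
    """Time- and condition-dependent failure probability sketch: Weibull
    aging (shape beta, scale eta in years) with the effective age growing
    faster for degraded equipment (condition indication index in (0, 1]).
    Both the functional form and the numbers are assumptions."""
    t_eff = t_years / health_index
    return 1.0 - math.exp(-((t_eff / eta) ** beta))

# A degraded pump (health 0.6) vs. a healthy one at 10 years of service:
print(failure_prob(10, health_index=1.0), failure_prob(10, health_index=0.6))
```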

  12. Error rate performance of narrowband multilevel CPFSK signals

    Science.gov (United States)

    Ekanayake, N.; Fonseka, K. J. P.

    1987-04-01

    The paper presents a relatively simple method for analyzing the effect of IF filtering on the performance of multilevel FM signals. Using this method, the error rate performance of narrowband FM signals is analyzed for three different detection techniques, namely limiter-discriminator detection, differential detection, and coherent detection followed by differential decoding. The symbol error probabilities are computed for a Gaussian IF filter and a second-order Butterworth IF filter. It is shown that coherent detection followed by differential decoding yields better performance than limiter-discriminator detection and differential detection, whereas the two noncoherent detectors yield approximately identical performance.

  13. Foreground model and antenna calibration errors in the measurement of the sky-averaged λ21 cm signal at z ∼ 20

    Energy Technology Data Exchange (ETDEWEB)

    Bernardi, G. [SKA SA, 3rd Floor, The Park, Park Road, Pinelands, 7405 (South Africa); McQuinn, M. [Department of Astronomy, University of California, Berkeley, CA 94720 (United States); Greenhill, L. J., E-mail: gbernardi@ska.ac.za [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2015-01-20

    The most promising near-term observable of the cosmic dark age prior to widespread reionization (z ∼ 15-200) is the sky-averaged λ21 cm background arising from hydrogen in the intergalactic medium. Though an individual antenna could in principle detect the line signature, data analysis must separate foregrounds that are orders of magnitude brighter than the λ21 cm background (but that are anticipated to vary monotonically and gradually with frequency, e.g., they are considered "spectrally smooth"). Using more physically motivated models for foregrounds than in previous studies, we show that the intrinsic spectral smoothness of the foregrounds is likely not a concern, and that data analysis for an ideal antenna should be able to detect the λ21 cm signal after subtracting a ∼fifth-order polynomial in log ν. However, we find that the foreground signal is corrupted by the angular and frequency-dependent response of a real antenna. The frequency dependence complicates modeling of foregrounds commonly based on the assumption of spectral smoothness. Our calculations focus on the Large-aperture Experiment to detect the Dark Age, which combines both radiometric and interferometric measurements. We show that statistical uncertainty remaining after fitting antenna gain patterns to interferometric measurements is not anticipated to compromise extraction of the λ21 cm signal for a range of cosmological models after fitting a seventh-order polynomial to radiometric data. Our results generalize to most efforts to measure the sky-averaged spectrum.
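
    The core of the foreground-subtraction step is a low-order polynomial fit in log ν, after which the residual must be small compared with the ∼10-100 mK cosmological signal. A minimal sketch (the power-law foreground, band edges, and polynomial order are illustrative assumptions):

```python
import numpy as np

nu = np.linspace(40e6, 120e6, 400)         # observing band, Hz
t_fg = 300.0 * (nu / 150e6) ** -2.5        # smooth synchrotron-like foreground, K

# Fit a fifth-order polynomial in log-frequency and inspect the residual in
# which a ~10-100 mK 21 cm feature would have to survive.
log_nu = np.log10(nu)
coeffs = np.polyfit(log_nu, np.log10(t_fg), deg=5)
model = 10.0 ** np.polyval(coeffs, log_nu)

residual_mk = (t_fg - model) * 1e3
print(f"max |residual|: {np.max(np.abs(residual_mk)):.2e} mK")
# Tiny for a spectrally smooth foreground; real antenna chromaticity is what
# breaks this, as the abstract explains.
```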

  14. Outage performance of cognitive radio systems with Improper Gaussian signaling

    KAUST Repository

    Amin, Osama

    2015-06-14

    Improper Gaussian signaling has proved its ability to improve the achievable rate of systems that suffer from interference compared with proper Gaussian signaling. In this paper, we first study the impact of improper Gaussian signaling on the performance of the cognitive radio system by analyzing the outage probability of both the primary user (PU) and the secondary user (SU). We derive an exact expression for the SU outage probability and upper and lower bounds for the PU outage probability. Then, we design the SU signal by adjusting its transmitted power and the circularity coefficient to minimize the SU outage probability while maintaining a certain PU quality-of-service. Finally, we evaluate the proposed bounds and adaptive algorithms through numerical results.

  15. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  16. Numerical Analysis of a Small-Size Vertical-Axis Wind Turbine Performance and Averaged Flow Parameters Around the Rotor

    Directory of Open Access Journals (Sweden)

    Rogowski Krzysztof

    2017-06-01

    Small-scale vertical-axis wind turbines can be used as a source of electricity in rural and urban environments. According to the authors’ knowledge, there are no validated simplified aerodynamic models of these wind turbines; therefore, the use of more advanced techniques, such as computational fluid dynamics methods, is justified. The paper contains a performance analysis of a small-scale vertical-axis wind turbine with a large solidity. The averaged velocity field and the averaged static pressure distribution around the rotor have also been analyzed. All numerical results presented in this paper are obtained using the SST k-ω turbulence model. Computed power coefficients are in good agreement with the experimental results. A small change in the tip speed ratio significantly affects the velocity field. The obtained velocity fields can be further used as a basis for simplified aerodynamic methods.

  17. Performance Evaluation of Received Signal Strength Based Hard Handover for UTRAN LTE

    DEFF Research Database (Denmark)

    Anas, Mohmmad; Calabrese, Francesco Davide; Mogensen, Preben

    2007-01-01

    This paper evaluates the hard handover performance for the UTRAN LTE system. The focus is on the impact that a received signal strength based hard handover algorithm has on the system performance, measured in terms of number of handovers, time between two consecutive handovers, and uplink SINR for a user about to experience a handover. A handover algorithm based on received signal strength measurements has been designed and implemented in a dynamic system level simulator and has been studied for different parameter sets in a 3GPP UTRAN LTE recommended simulation scenario. The results suggest that a downlink measurement bandwidth of 1.25 MHz and a handover margin of 2 dB to 6 dB are the parameters that lead to the best compromise between the average number of handovers and the average uplink SINR for user speeds of 3 kmph to 120 kmph.
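
    The decision rule at the heart of such an algorithm is simple: hand over only when the target cell's averaged received signal strength exceeds the serving cell's by a hysteresis margin. A minimal sketch (the function and its default margin are illustrative; the study's recommended margin range is 2-6 dB):

```python
def handover_decision(rss_serving_dbm, rss_target_dbm, margin_db=4.0):
    """Trigger a hard handover only when the target cell's filtered RSS
    exceeds the serving cell's by the hysteresis margin (in dB). A larger
    margin means fewer handovers at the cost of a lower uplink SINR."""
    return rss_target_dbm > rss_serving_dbm + margin_db

print(handover_decision(-95.0, -90.0))  # True: target is 5 dB stronger
print(handover_decision(-95.0, -93.0))  # False: 2 dB gain is within the margin
```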

  18. Advanced pulse oximeter signal processing technology compared to simple averaging. II. Effect on frequency of alarms in the postanesthesia care unit.

    Science.gov (United States)

    Rheineck-Leyssius, A T; Kalkman, C J

    1999-05-01

    To determine the effect of a new pulse oximeter (Nellcor Symphony N-3000, Pleasanton, CA) with signal processing technique (Oxismart) on the incidence of false alarms in the postanesthesia care unit (PACU). Prospective study. Nonuniversity hospital. 603 consecutive ASA physical status I, II, and III patients recovering from general or regional anesthesia in the PACU. We compared the number of alarms produced by a recently developed "third"-generation pulse oximeter (Nellcor Symphony N-3000) with Oxismart signal processing technique and a conventional pulse oximeter (Criticare 504, Waukesha, WI). Patients were randomly assigned to either a Nellcor pulse oximeter or a Criticare with the signal averaging time set at either 12 or 21 seconds. For each patient the number of false (artifact) alarms was counted. The Nellcor generated one false alarm in 199 patients and 36 (in 31 patients) "loss of pulse" alarms. The conventional pulse oximeter with the averaging time set at 12 seconds generated a total of 32 false alarms in 17 of 197 patients [compared with the Nellcor, relative risk (RR) 0.06, confidence interval (CI) 0.01 to 0.25] and a total of 172 "loss of pulse" alarms in 79 patients (RR 0.39, CI 0.28 to 0.55). The conventional pulse oximeter with the averaging time set at 21 seconds generated 12 false alarms in 11 of 207 patients (compared with the Nellcor, RR 0.09, CI 0.02 to 0.48) and a total of 204 "loss of pulse" alarms in 81 patients (RR 0.40, CI 0.28 to 0.56). The lower incidence of false alarms of the conventional pulse oximeter with the longest averaging time compared with the shorter averaging time did not reach statistical significance (false alarms RR 0.62, CI 0.3 to 1.27; "loss of pulse" alarms RR 0.98, CI 0.77 to 1.3). To date, this is the first report of a pulse oximeter that produced almost no false alarms in the PACU.

  19. Optical Performance Monitoring and Signal Optimization in Optical Networks

    DEFF Research Database (Denmark)

    Petersen, Martin Nordal

    2006-01-01

    The thesis studies performance monitoring for next-generation optical networks. The focus is on all-optical networks with bit-rates of 10 Gb/s or above. Next-generation all-optical networks pose large challenges as the optical transmission distance increases and the occurrence of electrical-optical-electrical regeneration points decreases. This thesis evaluates the impact of signal degrading effects that are becoming of increasing concern in all-optical high-speed networks due to all-optical switching and higher bit-rates. Especially group-velocity dispersion (GVD) and a number of nonlinear effects will require enhanced attention to avoid signal degradations. The requirements for optical performance monitoring features are discussed, and the thesis evaluates the advantages and necessity of increasing the level of performance monitoring parameters in the physical layer. In particular, methods for optical ...

  20. Design and evaluation of three-level composite filters obtained by optimizing a compromise average performance measure

    Science.gov (United States)

    Hendrix, Charles D.; Vijaya Kumar, B. V. K.

    1994-06-01

    Correlation filters with three transmittance levels (+1, 0, and -1) are of interest in optical pattern recognition because they can be implemented on available spatial light modulators and because the zero level allows us to include a region of support (ROS). The ROS can provide additional control over the filter's noise tolerance and peak sharpness. A new algorithm based on optimizing a compromise average performance measure (CAPM) is proposed for designing three-level composite filters. The performance of this algorithm is compared to other three-level composite filter designs using a common image database and using figures of merit such as the Fisher ratio, error rate, and light efficiency. It is shown that the CAPM algorithm yields better results.

  1. Impact of revascularization of coronary chronic total occlusion on left ventricular function and electrical stability: analysis by speckle tracking echocardiography and signal-averaged electrocardiogram.

    Science.gov (United States)

    Sotomi, Yohei; Okamura, Atsunori; Iwakura, Katsuomi; Date, Motoo; Nagai, Hiroyuki; Yamasaki, Tomohiro; Koyama, Yasushi; Inoue, Koichi; Sakata, Yasushi; Fujii, Kenshi

    2017-06-01

    The present study aimed to assess the mechanisms of the effects of percutaneous coronary intervention (PCI) for chronic total occlusion (CTO) from two different aspects: left ventricular (LV) systolic function assessed by two-dimensional speckle tracking echocardiography (2D-STE), and electrical stability evaluated by the late potential on the signal-averaged electrocardiogram (SAECG). We conducted a prospective observational study with consecutive CTO-PCI patients. 2D-STE and SAECG were performed before PCI, and 1 day and 3 months after the procedure. 2D-STE computed global longitudinal strain (GLS) and regional longitudinal strain (RLS) in the CTO area, the collateral blood-supplying donor artery area, and the non-CTO/non-donor area. A total of 37 patients (66 ± 11 years, 78% male) were analyzed. RLS in the CTO and donor areas and GLS were significantly improved 1 day after the procedure, but these improvements diminished during the 3 months. The improvement of RLS in the donor area remained significant 3 months after the index procedure (pre-PCI -13.4 ± 4.8% vs. post-3M -15.1 ± 4.5%, P = 0.034). RLS in the non-CTO/non-donor area and LV ejection fraction were not influenced. Mitral annulus velocity was improved at the 3-month follow-up (5.0 ± 1.4 vs. 5.6 ± 1.7 cm/s, P = 0.049). Before the procedure, 12 patients (35%) had a late potential. All components of the late potential (filtered QRS duration, root-mean-square voltage in the terminal 40 ms, and duration of the low amplitude signal <40 μV) were not improved. CTO-PCI improved RLS in the donor area at the 3-month follow-up without changes in LV ejection fraction. Although a higher prevalence of late potential was observed in the current population compared to a healthy population, the late potential as a surrogate of arrhythmogenic substrate was not influenced by CTO-PCI.
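
    Of the three late-potential components named above, the root-mean-square voltage of the terminal 40 ms (RMS40) of the filtered, signal-averaged QRS is the most direct to compute. A minimal sketch (the sampling rate, toy waveform, and the conventional <20 μV abnormality cut-off are illustrative assumptions):

```python
import numpy as np

def rms40(filtered_qrs_uv, fs=1000):
    """RMS voltage (in uV) of the terminal 40 ms of the band-pass-filtered,
    signal-averaged QRS complex; fs in Hz is an assumed sampling rate."""
    n40 = int(0.040 * fs)
    tail = filtered_qrs_uv[-n40:]
    return float(np.sqrt(np.mean(tail ** 2)))

# Toy beat: a large QRS body followed by a low-amplitude terminal portion.
qrs = np.concatenate([80 * np.hanning(100), 15 * np.hanning(60)])
print(rms40(qrs))  # RMS40 < 20 uV is a common late-potential criterion
```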

  2. Improved Multiscale Entropy Technique with Nearest-Neighbor Moving-Average Kernel for Nonlinear and Nonstationary Short-Time Biomedical Signal Analysis

    Directory of Open Access Journals (Sweden)

    S. P. Arunachalam

    2018-01-01

    Analysis of biomedical signals can yield invaluable information for prognosis, diagnosis, therapy evaluation, risk assessment, and disease prevention, but such signals are often recorded as short time series that challenge existing complexity classification algorithms such as Shannon entropy (SE) and other techniques. The purpose of this study was to improve the previously developed multiscale entropy (MSE) technique by incorporating a nearest-neighbor moving-average kernel, which can be used for analysis of nonlinear and nonstationary short time series of physiological data. The approach was tested for robustness with respect to noise analysis using simulated sinusoidal and ECG waveforms. The feasibility of MSE to discriminate between normal sinus rhythm (NSR) and atrial fibrillation (AF) was tested on a single-lead ECG. In addition, the MSE algorithm was applied to identify pivot points of rotors that were induced in ex vivo isolated rabbit hearts. The improved MSE technique robustly estimated the complexity of the signal compared to that of SE with various noises, discriminated NSR and AF on single-lead ECG, and precisely identified the pivot points of ex vivo rotors by providing better contrast between the rotor core and the peripheral region. The improved MSE technique can provide efficient complexity analysis of a variety of nonlinear and nonstationary short-time biomedical signals.
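
    A compact sketch of the idea: at each scale, smooth the series with a moving-average kernel (a plain boxcar below stands in for the paper's nearest-neighbor kernel, which is an assumption of this sketch) and compute sample entropy of the smoothed series. The m = 2, r = 0.15·SD settings are conventional choices, and the sample-entropy core is slightly simplified.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """Compact sample entropy; tolerance r is a fraction of the series SD."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
        return np.sum(d <= tol) - len(t)      # exclude self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def mse_moving_average(x, scales=(1, 2, 3, 4, 5)):
    """MSE where coarse-graining is replaced by moving-average smoothing."""
    out = []
    for s in scales:
        smoothed = np.convolve(x, np.ones(s) / s, mode="valid")
        out.append(sample_entropy(smoothed))
    return out

rng = np.random.default_rng(2)
print(mse_moving_average(rng.standard_normal(500)))
```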

  3. Energy performance certification as a signal of workplace quality

    International Nuclear Information System (INIS)

    Parkinson, Aidan; De Jong, Robert; Cooke, Alison; Guthrie, Peter

    2013-01-01

    Energy performance labelling and certification have been introduced widely to address market failures affecting the uptake of energy efficient technologies, by providing a signal to support decision making during contracting processes. The UK has recently introduced the Energy Performance Certificate (EPC) as a signal of building energy performance. The aims of this article are: to evaluate how valid EPCs are as signals of occupier satisfaction with office facilities; and to understand whether occupant attitudes towards environmental issues have affected commercial office rental values. This was achieved by surveying occupant satisfaction with their workplaces holistically using a novel multi-item rating scale which gathered 204 responses. Responses to this satisfaction scale were matched with the corresponding EPC and rental value of the occupiers' workplaces. The satisfaction scale was found to be both a reliable and valid measure. The analysis found that the EPC asset rating correlates significantly with occupant satisfaction with all facility attributes. Therefore, EPC ratings may be considered valid signals of overall facility satisfaction within the survey sample. Rental value was found to correlate significantly only with facility aesthetics. No evidence suggests rental value has been affected by occupants' perceptions of the environmental impact of facilities. Highlights: • A novel, internally consistent, and valid measure of office facility satisfaction. • EPCs found to be a valid signal of overall facility satisfaction. • Historic rental value found to be an invalid measure of overall facility satisfaction. • No evidence suggests rental value has been affected by occupants' perceptions of the environmental impact of facilities. • Occupants with stronger ties to landlords found to be more satisfied with office facilities.

  4. Performance of Optimally Merged Multisatellite Precipitation Products Using the Dynamic Bayesian Model Averaging Scheme Over the Tibetan Plateau

    Science.gov (United States)

    Ma, Yingzhao; Hong, Yang; Chen, Yang; Yang, Yuan; Tang, Guoqiang; Yao, Yunjun; Long, Di; Li, Changmin; Han, Zhongying; Liu, Ronghua

    2018-01-01

    Accurate estimation of precipitation from satellites at high spatiotemporal scales over the Tibetan Plateau (TP) remains a challenge. In this study, we proposed a general framework for blending multiple satellite precipitation data using the dynamic Bayesian model averaging (BMA) algorithm. The blended experiment was performed at a daily 0.25° grid scale for 2007-2012 among Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42RT and 3B42V7, Climate Prediction Center MORPHing technique (CMORPH), and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR). First, the BMA weights were optimized using the expectation-maximization (EM) method for each member on each day at 200 calibrated sites and then interpolated to the entire plateau using the ordinary kriging (OK) approach. The merged data were thus produced as weighted sums of the individual members over the plateau. The dynamic BMA approach showed better performance with a smaller root-mean-square error (RMSE) of 6.77 mm/day, a higher correlation coefficient of 0.592, and a closer Euclid value of 0.833, compared to the individual members at 15 validated sites. Moreover, BMA has proven to be more robust in terms of seasonality, topography, and other parameters than traditional ensemble methods including simple model averaging (SMA) and one-outlier removed (OOR). Error analysis between BMA and the state-of-the-art IMERG in the summer of 2014 further proved that the performance of BMA was superior with respect to multisatellite precipitation data merging. This study demonstrates that BMA provides a new solution for blending multiple satellite data in regions with limited gauges.
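
    Once the EM-optimized weights have been kriged to the grid, producing the merged field is a per-pixel weighted sum of the member products. A minimal sketch with synthetic placeholders for one day on a small grid (array shapes and the random fields are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
grid = (40, 60)                                  # lat x lon cells
members = ["3B42RT", "3B42V7", "CMORPH", "PERSIANN-CDR"]

precip = {m: rng.gamma(2.0, 2.0, size=grid) for m in members}   # mm/day
weights = {m: rng.random(grid) for m in members}                # kriged weights

norm = sum(weights.values())                     # weights must sum to 1 per pixel
merged = sum(weights[m] / norm * precip[m] for m in members)
print(merged.shape, float(merged.mean()))
```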

  5. Tracking Neuronal Connectivity from Electric Brain Signals to Predict Performance.

    Science.gov (United States)

    Vecchio, Fabrizio; Miraglia, Francesca; Rossini, Paolo Maria

    2018-05-01

    The human brain is a complex container of interconnected networks. Network neuroscience is a recent venture aiming to explore the connection matrix built from the human brain, or human "Connectome." Network-based algorithms provide parameters that define the global organization of the brain; when they are applied to electroencephalographic (EEG) signals, network configuration and excitability can be monitored in millisecond time frames, providing remarkable information on their instantaneous efficacy for a given task's performance via online evaluation of the underlying instantaneous networks before, during, and after the task. Here we provide an updated summary of connectome analysis for the prediction of performance via the study of task-related dynamics of brain network organization from EEG signals.

  6. SIP Signaling Implementations and Performance Enhancement over MANET: A Survey

    OpenAIRE

    Alshamrani, M; Cruickshank, Haitham; Sun, Zhili; Ansa, G; Alshahwan, F

    2016-01-01

    The implementation of the Session Initiation Protocol (SIP)-based Voice over Internet Protocol (VoIP) and multimedia over MANET is still a challenging issue. Many routing factors affect the performance of SIP signaling and the voice Quality of Service (QoS). Node mobility in MANET causes dynamic changes to route calculations, topology, hop numbers, and the connectivity status between the correspondent nodes. SIP-based VoIP depends on the caller’s registration, call initiation, and call termin...

  7. Extrapolation techniques evaluating 24 hours of average electromagnetic field emitted by radio base station installations: spectrum analyzer measurements of LTE and UMTS signals.

    Science.gov (United States)

    Mossetti, Stefano; de Bartolo, Daniela; Veronese, Ivan; Cantone, Marie Claire; Cosenza, Cristina; Nava, Elisa

    2017-04-01

    International and national organizations have formulated guidelines establishing limits for occupational and residential electromagnetic field (EMF) exposure to high-frequency fields. Italian legislation fixed 20 V/m as a limit for public protection from exposure to EMFs in the frequency range 0.1 MHz-3 GHz and 6 V/m as a reference level. Recently, the law was changed and the reference level must now be evaluated as the 24-hour average value, instead of the previous highest 6 minutes in a day. The law refers to a technical guide (CEI 211-7/E, published in 2013) for the extrapolation techniques that public authorities have to use when assessing exposure for compliance with limits. In this work, we present measurements carried out with a vectorial spectrum analyzer to identify critical technical aspects of these extrapolation techniques when applied to UMTS and LTE signals. We also focused on finding a good balance between statistically significant values and logistical management of control activity, as the signal trend in situ is not known. Measurements were repeated several times over several months and for different mobile companies. The outcome presented in this article allowed us to evaluate the reliability of the extrapolation results obtained and provides a starting point for defining operating procedures.
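
    The averaging change described above is easy to state in code: compliance is now judged on the 24-hour average rather than the worst 6-minute window. A minimal sketch on a synthetic diurnal trace (averaging in the power domain, i.e., over E², is assumed as the physically consistent way to average RMS field strengths):

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(24 * 60)                              # one sample per minute
e_field = (3.0 + 1.5 * np.sin(2 * np.pi * t / t.size)
           + 0.3 * rng.standard_normal(t.size))     # V/m, synthetic trace

e_24h = np.sqrt(np.mean(e_field ** 2))              # new metric: 24-h average

# Previous metric: highest 6-minute running average.
win = 6
p = np.convolve(e_field ** 2, np.ones(win) / win, mode="valid")
e_6min_max = np.sqrt(p.max())

print(f"24-h average: {e_24h:.2f} V/m (6 V/m reference); "
      f"worst 6 min: {e_6min_max:.2f} V/m")
```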

  9. Acoustic/seismic signal propagation and sensor performance modeling

    Science.gov (United States)

    Wilson, D. Keith; Marlin, David H.; Mackay, Sean

    2007-04-01

    Performance, optimal employment, and interpretation of data from acoustic and seismic sensors depend strongly and in complex ways on the environment in which they operate. Software tools for guiding non-expert users of acoustic and seismic sensors are therefore much needed. However, such tools require that many individual components be constructed and correctly connected together. These components include the source signature and directionality, representation of the atmospheric and terrain environment, calculation of the signal propagation, characterization of the sensor response, and mimicking of the data processing at the sensor. Selection of an appropriate signal propagation model is particularly important, as there are significant trade-offs between output fidelity and computation speed. Attenuation of signal energy, random fading, and (for array systems) variations in wavefront angle-of-arrival should all be considered. Characterization of the complex operational environment is often the weak link in sensor modeling: important issues for acoustic and seismic modeling activities include the temporal/spatial resolution of the atmospheric data, knowledge of the surface and subsurface terrain properties, and representation of ambient background noise and vibrations. Design of software tools that address these challenges is illustrated with two examples: a detailed target-to-sensor calculation application called the Sensor Performance Evaluator for Battlefield Environments (SPEBE) and a GIS-embedded approach called Battlefield Terrain Reasoning and Awareness (BTRA).

  10. Applicability of Time-Averaged Holography for Micro-Electro-Mechanical System Performing Non-Linear Oscillations

    Directory of Open Access Journals (Sweden)

    Paulius Palevicius

    2014-01-01

    Optical investigation of movable microsystem components using time-averaged holography is presented in this paper. It is shown that even a harmonic excitation of a non-linear microsystem may result in unpredictable chaotic motion. Analytical relationships between the parameters of the chaotic oscillations and the formation of time-averaged fringes provide a deeper insight into the computational and experimental interpretation of time-averaged MEMS holograms.

  13. Performance study of highly efficient 520 W average power long pulse ceramic Nd:YAG rod laser

    Science.gov (United States)

    Choubey, Ambar; Vishwakarma, S. C.; Ali, Sabir; Jain, R. K.; Upadhyaya, B. N.; Oak, S. M.

    2013-10-01

    We report the performance study of a 2% atomic doped ceramic Nd:YAG rod for long pulse laser operation in the millisecond regime with pulse duration in the range of 0.5-20 ms. A maximum average output power of 520 W with 180 J maximum pulse energy has been achieved with a slope efficiency of 5.4% using a dual rod configuration, which is the highest for typical lamp pumped ceramic Nd:YAG lasers. The laser output characteristics of the ceramic Nd:YAG rod were revealed to be nearly equivalent or superior to those of a high-quality single crystal Nd:YAG rod. The laser pump chamber and resonator were designed and optimized to achieve a high efficiency and good beam quality with a beam parameter product of 16 mm mrad (M^2 ≈ 47). The laser output beam was efficiently coupled through a 400 μm core diameter optical fiber with 90% overall transmission efficiency. This ceramic Nd:YAG laser will be useful for various material processing applications in industry.

  14. Suggestibility and signal detection performance in hallucination-prone students.

    Science.gov (United States)

    Alganami, Fatimah; Varese, Filippo; Wagstaff, Graham F; Bentall, Richard P

    2017-03-01

    Auditory hallucinations are associated with signal detection biases. We examine the extent to which suggestions influence performance on a signal detection task (SDT) in highly hallucination-prone and low hallucination-prone students. We also explore the relationship between trait suggestibility, dissociation and hallucination proneness. In two experiments, students completed on-line measures of hallucination proneness (the revised Launay-Slade Hallucination Scale; LSHS-R), trait suggestibility (Inventory of Suggestibility) and dissociation (Dissociative Experiences Scale-II). Students in the upper and lower tertiles of the LSHS-R performed an auditory SDT. Prior to the task, suggestions were made pertaining to the number of expected targets (Experiment 1, N = 60: high vs. low suggestions; Experiment 2, N = 62, no suggestion vs. high suggestion vs. no voice suggestion). Correlational and regression analyses indicated that trait suggestibility and dissociation predicted hallucination proneness. Highly hallucination-prone students showed a higher SDT bias in both studies. In Experiment 1, both bias scores were significantly affected by suggestions to the same degree. In Experiment 2, highly hallucination-prone students were more reactive to the high suggestion condition than the controls. Suggestions may affect source-monitoring judgments, and this effect may be greater in those who have a predisposition towards hallucinatory experiences.
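
    Highly hallucination-prone participants showed a higher SDT bias; in conventional terms such a bias is summarized, together with sensitivity, by the indices below. A generic textbook computation follows (the log-linear correction and example counts are illustrative, not the paper's data):

```python
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c from hit and false-alarm rates; the
    log-linear correction keeps rates away from 0 and 1. A negative c
    indicates a liberal bias (more "yes" responses overall)."""
    z = NormalDist().inv_cdf
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(h) - z(f), -0.5 * (z(h) + z(f))

# Example response pattern with many "yes" responses to noise trials:
print(sdt_indices(hits=40, misses=10, false_alarms=20, correct_rejections=30))
```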

  15. Candidates Profile in FUVEST Exams from 2004 to 2013: Private and Public School Distribution, FUVEST Average Performance and Chemical Equilibrium Tasks Performance

    Directory of Open Access Journals (Sweden)

    R.S.A.P. Oliveira

    2014-08-01

    INTRODUCTION: Chemical equilibrium is recognized as a topic of several misconceptions. Its origins must be tracked from previous scholarship. Its impact on biochemistry learning is not fully described. A possible bulk of data is the FUVEST exam. OBJECTIVES: Identify students' error profiles on chemical equilibrium tasks using public data from the FUVEST exam. MATERIAL AND METHODS: Data analyzed from FUVEST were: (i) private and public school distribution in Elementary and Middle School, and High School, for candidates of the Pharmacy-Biochemistry course and all USP careers until the last call for enrollment (2004-2013); (ii) average performance in the 1st and 2nd parts of the FUVEST exam for the Pharmacy-Biochemistry, Chemistry, Engineering, Biological Sciences, Languages and Medicine courses and all enrolled candidates until the 1st call for enrollment (2008-2013); (iii) performance of candidates of the Pharmacy-Biochemistry, Chemistry, Engineering, Biological Sciences, Languages and Medicine courses and all USP careers on chemical equilibrium issues from the 1st part of FUVEST (2011-2013). RESULTS AND DISCUSSION: (i) 66.2% of candidates came from private Elementary-Middle School courses and 71.8% came from private High School courses; (ii) average grades over the period for the 1st and 2nd FUVEST parts are, respectively (in 100 points): Pharmacy-Biochemistry 66.7 and 61.2, Chemistry 65.9 and 58.9, Engineering 75.9 and 71.9, Biological Sciences 65.6 and 54.6, Languages 49.9 and 43.3, Medicine 83.5 and 79.5, all enrolled candidates 51.5 and 48.9; (iii) four chemical equilibrium issues were found during 2011-2013, and the analysis of the multiple-choice percentage distribution over the courses showed a similar performance of students among them, except for Engineering and Medicine with higher grades, but the same proportional distribution among choices. CONCLUSION: Approved students came majorly from private schools. There was a different average performance among courses and similar ...

  16. Performance analysis of signaling protocols on OBS switches

    Science.gov (United States)

    Kirci, Pinar; Zaim, A. Halim

    2005-10-01

    In this paper, Just-In-Time (JIT), Just-Enough-Time (JET) and Horizon signalling schemes for Optical Burst Switched (OBS) networks are presented. These signaling schemes run over a core dWDM network, and a network architecture based on Optical Burst Switches is proposed to support IP, ATM and burst traffic. In IP and ATM traffic several packets are assembled in a single packet called a burst, and burst contention is handled by burst dropping. The burst length distribution in IP traffic is arbitrary between 0 and 1, and is fixed in ATM traffic at 0.5; burst traffic, on the other hand, is arbitrary between 1 and 5. The Setup and Setup ack length distributions are arbitrary. We apply the Poisson model with rate λ and the self-similar model with Pareto-distributed rate α to identify inter-arrival times in these protocols. We consider a communication between a source client node and a destination client node over an ingress and one or more intermediate switches. We use buffering only in the ingress node. The communication is based on single burst connections in which the connection is set up just before sending a burst and then closed as soon as the burst is sent. Our analysis accounts for several important parameters, including the burst setup, burst setup ack, keepalive messages and the optical switching protocol. We compare the performance of the three signalling schemes on the network in terms of burst dropping probability under a range of network scenarios.

  17. Infrasonic detection performance in presence of nuisance signal

    Science.gov (United States)

    Charbit, Maurice; Arrowsmith, Stephen; Che, Il-young; Le Pichon, Alexis; Nouvellet, Adrien; Park, Junghyun; Roueff, Francois

    2014-05-01

    The infrasound network of the International Monitoring System (IMS) consists of sixty stations deployed all over the world by the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). The IMS has been designed to reliably detect, at least by two stations, an explosion greater than 1 kiloton located anywhere on the Earth [1]. Each station is an array of at least four microbarometers with an aperture of 1 to 3 km. The first important issue is to detect the presence of the signal of interest (SOI) embedded in noise. The detector is commonly based on the property that the SOI provides coherent observations on the sensors but the noise does not. The test statistic, called the F-stat [2], [5], [6], calculated over a time cell of a few seconds, is commonly used for this purpose. In this paper, we assume that a coherent source is permanently present, arriving from an unknown direction of arrival (DOA). The typical case is the presence of microbaroms or of wind. This source is seen as a nuisance signal (NS). In [4], [3] the authors assume that a time cell without the SOI (CH0) is available, whereas a following time cell is considered as the cell under test (CUT). Therefore the DOA and the SNR of the NS can be estimated. If the signal-to-noise ratio (SNR) of the NS is large enough, the distribution of the F-stat in the absence of the SOI is known to be a non-central Fisher distribution. It follows that the detection threshold can be set from a given value of the false alarm rate (FAR). The major drawback of keeping the NS is that it can hide the SOI; this phenomenon is similar to leakage, which is well known in Fourier analysis. Another approach consists of using the DOA estimate of the NS to mitigate the NS with a spatial notch filter in the frequency domain. Based on this approach, a new algorithm is provided. To illustrate, numerical results on synthetic and real data are presented in terms of receiver operating characteristic (ROC) curves. REFERENCES [1] Christie D.R. and Campus P., The IMS ...
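
    For reference, the F-stat on time-aligned array traces is the ratio of beam power to residual power with a degrees-of-freedom scaling. A minimal sketch, under the stated assumption that the traces have already been delay-aligned (beam-steered) toward the trial DOA:

```python
import numpy as np

def f_stat(aligned):
    """Fisher statistic for array coherence on traces shaped
    (sensors, samples): beam power over residual power. Values near 1
    indicate noise only; values much greater than 1 indicate a coherent
    arrival. Delay alignment is assumed done beforehand."""
    x = np.asarray(aligned, dtype=float)
    n, t = x.shape
    beam = x.mean(axis=0)
    resid = x - beam
    return (n * np.sum(beam ** 2) / t) / (np.sum(resid ** 2) / (t * (n - 1)))

rng = np.random.default_rng(5)
soi = np.sin(2 * np.pi * 0.5 * np.arange(0, 40, 0.05))    # coherent signal
traces = soi + 0.5 * rng.standard_normal((4, soi.size))   # 4-sensor array
print(f_stat(traces))   # >> 1 when a coherent arrival is present
```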

  18. Effects of gradient encoding and number of signal averages on fractional anisotropy and fiber density index in vivo at 1.5 tesla.

    Science.gov (United States)

    Widjaja, E; Mahmoodabadi, S Z; Rea, D; Moineddin, R; Vidarsson, L; Nilsson, D

    2009-01-01

    Tensor estimation can be improved by increasing the number of gradient directions (NGD) or increasing the number of signal averages (NSA), but at a cost of increased scan time. To evaluate the effects of NGD and NSA on fractional anisotropy (FA) and fiber density index (FDI) in vivo. Ten healthy adults were scanned on a 1.5T system using nine different diffusion tensor sequences. Combinations of 7 NGD, 15 NGD, and 25 NGD with 1 NSA, 2 NSA, and 3 NSA were used, with scan times varying from 2 to 18 min. Regions of interest (ROIs) were placed in the internal capsules, middle cerebellar peduncles, and splenium of the corpus callosum, and FA and FDI were calculated. Analysis of variance was used to assess whether there was a difference in FA and FDI of different combinations of NGD and NSA. There was no significant difference in FA of different combinations of NGD and NSA of the ROIs (P > 0.005). There was a significant difference in FDI between 7 NGD/1 NSA and 25 NGD/3 NSA in all three ROIs (P < 0.005), and no significant difference in FDI between the higher NGD/NSA combinations (15 NGD/2-3 NSA, 25 NGD/1 NSA, and 25 NGD/2 NSA) and 25 NGD/3 NSA in all ROIs (P > 0.005). We have not found any significant difference in FA with varying NGD and NSA in vivo in areas with relatively high anisotropy. However, lower NGD resulted in reduced FDI in vivo. With larger NGD, NSA has less influence on FDI. The optimal sequence among the nine sequences tested with the shortest scan time was 25 NGD/1 NSA.
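
    FA itself is a closed-form function of the three diffusion-tensor eigenvalues, which helps explain why it is relatively stable across the NGD/NSA combinations tested. A minimal sketch of the standard formula (the example eigenvalues are illustrative):

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three diffusion-tensor eigenvalues (standard formula):
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||."""
    lam = np.asarray(evals, dtype=float)
    md = lam.mean()                        # mean diffusivity
    return np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))

# Highly anisotropic (white-matter-like) vs. nearly isotropic voxel:
print(fractional_anisotropy([1.7e-3, 0.3e-3, 0.3e-3]))   # ~0.80
print(fractional_anisotropy([1.0e-3, 0.9e-3, 0.95e-3]))  # ~0.05
```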

  19. Women, Men, and Academic Performance in Science and Engineering: The Gender Difference in Undergraduate Grade Point Averages

    Science.gov (United States)

    Sonnert, Gerhard; Fox, Mary Frank

    2012-01-01

    Using longitudinal and multi-institutional data, this article takes an innovative approach in its analyses of gender differences in grade point averages (GPA) among undergraduate students in biology, the physical sciences, and engineering over a 16-year period. Assessed are hypotheses about (a) the gender ecology of science/engineering and (b) the…

  20. Determination of the Average Native Background and the Light-Induced EPR Signals and their Variation in the Teeth Enamel Based on Large-Scale Survey of the Population

    International Nuclear Information System (INIS)

    Ivannikov, Alexander I.; Khailov, Artem M.; Orlenko, Sergey P.; Skvortsov, Valeri G.; Stepanenko, Valeri F.; Zhumadilov, Kassym Sh.; Williams, Benjamin B.; Flood, Ann B.; Swartz, Harold M.

    2016-01-01

    The aim of the study is to determine the average intensity and variation of the native background signal amplitude (NSA) and of the solar light-induced signal amplitude (LSA) in electron paramagnetic resonance (EPR) spectra of tooth enamel for different kinds of teeth and different groups of people. These values are necessary for determination of the intensity of the radiation-induced signal amplitude (RSA) by subtraction of the expected NSA and LSA from the total signal amplitude measured in L-band for in vivo EPR dosimetry. Variation of these signals should be taken into account when estimating the uncertainty of the estimated RSA. A new analysis of several hundred EPR spectra that were measured earlier at X-band in a large-scale examination of the population of Central Russia was performed. Based on this analysis, the average values and the variation (standard deviation, SD) of the amplitude of the NSA for the teeth from different positions, as well as LSA in outer enamel of the front teeth for different population groups, were determined. To convert data acquired at X-band to values corresponding to the conditions of measurement at L-band, the experimental dependencies of the intensities of the RSA, LSA and NSA on the m.w. power, measured at both X and L-band, were analysed. For the two central upper incisors, which are mainly used in in vivo dosimetry, the mean LSA annual rate induced only in the outer side enamel and its variation were obtained as 10 ± 2 (SD = 8) mGy y^-1, the same for X- and L-bands (results are presented as the mean ± error of mean). Mean NSA in enamel and its variation for the upper incisors was calculated at 2.0 ± 0.2 (SD = 0.5) Gy, relative to the calibrated RSA dose-response to gamma radiation measured under non-power saturation conditions at X-band. Assuming the same value for L-band under non-power saturating conditions, then for in vivo measurements at L-band at 25 mW (power saturation conditions), a mean NSA and its ...
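
    The dose-recovery step described above amounts to subtracting the expected NSA and LSA from the total measured amplitude and propagating the quoted spreads. A minimal sketch using the population values from the abstract (NSA 2.0 Gy with SD 0.5 Gy; LSA rate 10 mGy per year with SD 8 mGy per year, scaled by enamel age as a simplification); the measurement uncertainty of the total is an assumption:

```python
import math

def radiation_signal(total_gy, nsa_gy=2.0, nsa_sd=0.5,
                     age_years=40.0, lsa_rate=0.010, lsa_rate_sd=0.008,
                     total_sd=0.3):
    """RSA = total - NSA - LSA, with 1-sigma uncertainties combined in
    quadrature. Population values follow the abstract; the per-year LSA
    scaling and the total's measurement SD are simplifying assumptions."""
    lsa, lsa_sd = lsa_rate * age_years, lsa_rate_sd * age_years
    rsa = total_gy - nsa_gy - lsa
    rsa_sd = math.sqrt(total_sd**2 + nsa_sd**2 + lsa_sd**2)
    return rsa, rsa_sd

print(radiation_signal(3.1))   # dose estimate and its 1-sigma uncertainty
```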

  2. Performance analysis of NOAA tropospheric signal delay model

    International Nuclear Information System (INIS)

    Ibrahim, Hassan E; El-Rabbany, Ahmed

    2011-01-01

    Tropospheric delay is one of the dominant global positioning system (GPS) errors, which degrades the positioning accuracy. Recent development in tropospheric modeling relies on implementation of more accurate numerical weather prediction (NWP) models. In North America one of the NWP-based tropospheric correction models is the NOAA Tropospheric Signal Delay Model (NOAATrop), which was developed by the US National Oceanic and Atmospheric Administration (NOAA). Because of its potential to improve the GPS positioning accuracy, the NOAATrop model became the focus of many researchers. In this paper, we analyzed the performance of the NOAATrop model and examined its effect on the ionosphere-free-based precise point positioning (PPP) solution. We generated 3-year-long tropospheric zenith total delay (ZTD) data series for the NOAATrop model, the Hopfield model, and the International GNSS Service (IGS) final tropospheric correction product, respectively. These data sets were generated at ten IGS reference stations spanning Canada and the United States. We analyzed the NOAATrop ZTD data series and compared them with those of the Hopfield model, using the IGS final tropospheric product as a reference. The analysis shows that the performance of the NOAATrop model is a function of both season (time of the year) and geographical location. However, its performance was superior to the Hopfield model in all cases. We further investigated the effect of implementing the NOAATrop model on the convergence and accuracy of the ionosphere-free-based PPP solution. It is shown that the use of the NOAATrop model improved the PPP solution convergence by 1%, 10% and 15% for the latitude, longitude and height components, respectively.

  3. An Integrated Signaling-Encryption Mechanism to Reduce Error Propagation in Wireless Communications: Performance Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Olama, Mohammed M [ORNL; Matalgah, Mustafa M [ORNL; Bobrek, Miljko [ORNL

    2015-01-01

    Traditional encryption techniques require packet overhead, introduce processing delay, and suffer from severe quality-of-service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and bit-error probability in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (a designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and the throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to those of the conventional Advanced Encryption Standard (AES).
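
    To make the structure concrete, the sketch below encrypts only the first few bytes of a frame and applies Hamming(7,4) coding to just that encrypted portion, as the mechanism prescribes. The XOR keystream is a stand-in for the real cipher, the frame split and sizes are illustrative assumptions, and the paper's designed signaling transformation is not reproduced.

```python
import os

def hamming74_encode(nibble):
    """Hamming(7,4): 4 data bits -> 7-bit single-error-correcting codeword
    laid out as (p1, p2, d1, p3, d2, d3, d4)."""
    d1, d2, d3, d4 = (nibble >> 3) & 1, (nibble >> 2) & 1, (nibble >> 1) & 1, nibble & 1
    p1, p2, p3 = d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4
    return (p1 << 6) | (p2 << 5) | (d1 << 4) | (p3 << 3) | (d2 << 2) | (d3 << 1) | d4

def protect_frame(frame, enc_len=4):
    """Encrypt the first enc_len bytes (XOR keystream standing in for the
    cipher) and Hamming-encode only that encrypted portion; the remainder
    is passed through, standing in for the signaling transformation."""
    key = os.urandom(enc_len)
    head = bytes(b ^ k for b, k in zip(frame[:enc_len], key))
    coded = [hamming74_encode(n) for b in head for n in (b >> 4, b & 0x0F)]
    return coded, frame[enc_len:]

coded_head, tail = protect_frame(b"frame-payload-bytes")
print(len(coded_head), "7-bit codewords protect the encrypted portion")
```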

  4. Photocatalytic performances of BiFeO3 particles with the average size in nanometer, submicrometer, and micrometer

    International Nuclear Information System (INIS)

    Hao, Chunxue; Wen, Fusheng; Xiang, Jianyong; Hou, Hang; Lv, Weiming; Lv, Yifei; Hu, Wentao; Liu, Zhongyuan

    2014-01-01

    Highlights: • Three different synthesis routes have been taken to successfully prepare BiFeO3 particles with different morphologies and average sizes of 50 nm, 500 nm, and 15 μm. • For photodegradation of dyes under visible irradiation in the presence of BiFeO3, the photocatalytic efficiency increases quickly with the decrease in size. • The enhanced photocatalytic efficiency of BiFeO3 nanoparticles may be attributed to more surface active catalytic sites and the shorter distances carriers have to migrate to the surface reaction sites. Abstract: Three different synthesis routes were taken to successfully prepare BiFeO3 particles with different morphologies and average sizes of 50 nm, 500 nm, and 15 μm, respectively. The crystal structure was recognized to be a distorted rhombohedral one with the space group R3c. With the decrease in particle size, an obvious decrease in peak intensity and a redshift in peak position were observed for the Raman active bands. A narrow band gap was determined from the UV–vis absorption spectra, indicating the semiconducting nature of BiFeO3. For photodegradation of dyes under visible irradiation in the presence of BiFeO3, the photocatalytic efficiency increased quickly with the decrease in size, which may be attributed to more surface active catalytic sites and the shorter distances carriers had to migrate to the surface reaction sites.

  5. Performance of a geostationary mission, geoCARB, to measure CO2, CH4 and CO column-averaged concentrations

    Directory of Open Access Journals (Sweden)

    I. N. Polonsky

    2014-04-01

    Full Text Available GeoCARB is a proposed instrument to measure column averaged concentrations of CO2, CH4 and CO from geostationary orbit using reflected sunlight in near-infrared absorption bands of the gases. The scanning options, spectral channels and noise characteristics of geoCARB and two descope options are described. The accuracy of concentrations from geoCARB data is investigated using end-to-end retrievals; spectra at the top of the atmosphere in the geoCARB bands are simulated with realistic trace gas profiles, meteorology, aerosol, cloud and surface properties, and then the concentrations of CO2, CH4 and CO are estimated from the spectra after addition of noise characteristic of geoCARB. The sensitivity of the algorithm to aerosol, the prior distributions assumed for the gases and the meteorology are investigated. The contiguous spatial sampling and fine temporal resolution of geoCARB open the possibility of monitoring localised sources such as power plants. Simulations of emissions from a power plant with a Gaussian plume are conducted to assess the accuracy with which the emission strength may be recovered from geoCARB spectra. Scenarios for "clean" and "dirty" power plants are examined. It is found that a reliable estimate of the emission rate is possible, especially for power plants that have particulate filters, by averaging emission rates estimated from multiple snapshots of the CO2 field surrounding the plant. The result holds even in the presence of partial cloud cover.

  6. Brief communication: Using averaged soil moisture estimates to improve the performances of a regional-scale landslide early warning system

    Science.gov (United States)

    Segoni, Samuele; Rosi, Ascanio; Lagomarsino, Daniela; Fanti, Riccardo; Casagli, Nicola

    2018-03-01

    We communicate the results of a preliminary investigation aimed at improving a state-of-the-art RSLEWS (regional-scale landslide early warning system) based on rainfall thresholds by integrating mean soil moisture values averaged over the territorial units of the system. We tested two approaches. The simplest can be easily applied to improve other RSLEWS: it is based on a soil moisture threshold value under which rainfall thresholds are not used because landslides are not expected to occur. Another approach deeply modifies the original RSLEWS: thresholds based on antecedent rainfall accumulated over long periods are substituted with soil moisture thresholds. A back analysis demonstrated that both approaches consistently reduced false alarms, while the second approach reduced missed alarms as well.
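
    A minimal sketch of the simpler of the two approaches, with placeholder numbers rather than the calibrated values of the operational system:

        # Approach 1: the rainfall threshold is evaluated only when the mean
        # soil moisture of the territorial unit exceeds a cutoff below which
        # landslides are not expected. Threshold values here are invented.
        def landslide_warning(rain_mm: float, soil_moisture: float,
                              rain_threshold_mm: float = 80.0,
                              moisture_cutoff: float = 0.3) -> bool:
            if soil_moisture < moisture_cutoff:
                return False                      # too dry: suppress the rainfall threshold
            return rain_mm > rain_threshold_mm    # otherwise apply the usual threshold

        print(landslide_warning(rain_mm=95.0, soil_moisture=0.20))  # False (dry unit)
        print(landslide_warning(rain_mm=95.0, soil_moisture=0.45))  # True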

  7. Performance of Reynolds Averaged Navier-Stokes Models in Predicting Separated Flows: Study of the Hump Flow Model Problem

    Science.gov (United States)

    Cappelli, Daniele; Mansour, Nagi N.

    2012-01-01

    Separation can be seen in most aerodynamic flows, but accurate prediction of separated flows is still a challenging problem for computational fluid dynamics (CFD) tools. The behavior of several Reynolds Averaged Navier-Stokes (RANS) models in predicting the separated flow over a wall-mounted hump is studied. The strengths and weaknesses of the most popular RANS models (Spalart-Allmaras, k-epsilon, k-omega, k-omega-SST) are evaluated using the open source software OpenFOAM. The hump flow modeled in this work has been documented in the 2004 CFD Validation Workshop on Synthetic Jets and Turbulent Separation Control. Only the baseline case is treated; the slot flow control cases are not considered in this paper. Particular attention is given to predicting the size of the recirculation bubble, the position of the reattachment point, and the velocity profiles downstream of the hump.

  8. Fleet average NOx emission performance of 2004 model year light-duty vehicles, light-duty trucks and medium-duty passenger vehicles

    International Nuclear Information System (INIS)

    2006-05-01

    The On-Road Vehicle and Engine Emission Regulations came into effect on January 1, 2004. The regulations introduced more stringent national emission standards for on-road vehicles and engines, and also required that companies submit reports containing information concerning the company's fleets. This report presented a summary of the regulatory requirements relating to fleet average emissions of oxides of nitrogen (NOx) for light-duty vehicles, light-duty trucks, and medium-duty passenger vehicles under the new regulations. The effectiveness of the Canadian fleet average NOx emission program at achieving environmental performance objectives was also evaluated. A summary of the fleet average NOx emission performance of individual companies was presented, as well as the overall Canadian fleet average of the 2004 model year based on data submitted by companies in their end-of-model-year reports. A total of 21 companies submitted reports covering 2004 model year vehicles in 10 test groups, comprising 1,350,719 vehicles of the 2004 model year manufactured or imported for the purpose of sale in Canada. The average NOx value for the entire Canadian LDV/LDT fleet was 0.2016463 grams per mile. The average NOx value for the entire Canadian HLDT/MDPV fleet was 0.321976 grams per mile. It was concluded that the NOx values for both fleets were consistent with the environmental performance objectives of the regulations for the 2004 model year. 9 tabs
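
    A fleet average of this kind is, in essence, a sales-weighted mean over test groups. The snippet below shows the arithmetic with invented vehicle counts and emission values:

        # Sales-weighted fleet average NOx; all numbers are made up for the demo.
        test_groups = [
            {"vehicles": 500_000, "nox_g_per_mile": 0.15},
            {"vehicles": 300_000, "nox_g_per_mile": 0.25},
            {"vehicles": 550_719, "nox_g_per_mile": 0.22},
        ]
        total = sum(g["vehicles"] for g in test_groups)
        fleet_avg = sum(g["vehicles"] * g["nox_g_per_mile"] for g in test_groups) / total
        print(f"fleet average NOx: {fleet_avg:.4f} g/mile")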

  9. STAR Performance with SPEAR (Signal Processing Electronic Attack RFIC)

    Science.gov (United States)

    2017-03-01


  10. Outage performance of cognitive radio systems with Improper Gaussian signaling

    KAUST Repository

    Amin, Osama; Abediseid, Walid; Alouini, Mohamed-Slim

    2015-01-01

    ... design the SU signal by adjusting its transmitted power and the circularity coefficient to minimize the SU outage probability while maintaining a certain PU quality-of-service. Finally, we evaluate the proposed bounds and adaptive algorithms by numerical ...

  11. SIGNAL RECONSTRUCTION PERFORMANCE OF THE ATLAS HADRONIC TILE CALORIMETER

    CERN Document Server

    Do Amaral Coutinho, Y; The ATLAS collaboration

    2013-01-01

    "The Tile Calorimeter for the ATLAS experiment at the CERN Large Hadron Collider (LHC) is a sampling calorimeter with steel as absorber and scintillators as active medium. The scintillators are readout by wavelength shifting fibers coupled to photomultiplier tubes (PMT). The analogue signals from the PMTs are amplified, shaped and digitized by sampling the signal every 25 ns. The TileCal front-end electronics allows to read out the signals produced by about 10000 channels measuring energies ranging from ~30 MeV to ~2 TeV. The read-out system is responsible for reconstructing the data in real-time fulfilling the tight time constraint imposed by the ATLAS first level trigger rate (100 kHz). The main component of the read-out system is the Digital Signal Processor (DSP) which, using an Optimal Filtering reconstruction algorithm, allows to compute for each channel the signal amplitude, time and quality factor at the required high rate. Currently the ATLAS detector and the LHC are undergoing an upgrade program tha...

  12. Experimental study on the effects of surface gravity waves of different wavelengths on the phase averaged performance characteristics of marine current turbine

    Science.gov (United States)

    Luznik, L.; Lust, E.; Flack, K. A.

    2014-12-01

    There are few studies describing the interaction between marine current turbines and an overlying surface gravity wave field. In this work we present an experimental study on the effects of surface gravity waves of different wavelengths on the wave-phase-averaged performance characteristics of a marine current turbine model. Measurements are performed with a 1/25 scale (diameter D = 0.8 m) two-bladed horizontal axis turbine towed in the large (116 m long) towing tank at the U.S. Naval Academy, which is equipped with a dual-flap, servo-controlled wave maker. Three regular waves with wavelengths of 15.8, 8.8 and 3.9 m, with wave heights adjusted such that all waveforms have the same energy input per unit width, are produced by the wave maker, and the model turbine is towed into the waves at a constant carriage speed of 1.68 m/s, representing the case of waves travelling in the same direction as the mean current. Thrust and torque developed by the model turbine are measured using a dynamometer mounted in line with the turbine shaft. Shaft rotation speed and blade position are measured using an in-house designed shaft position indexing system. The tip speed ratio (TSR) is adjusted using a hysteresis brake attached to the output shaft. Free surface elevation and wave parameters are measured with two optical wave height sensors, one located in the turbine rotor plane and one a diameter upstream of the rotor. All instruments are synchronized in time and data are sampled at a rate of 700 Hz. All measured quantities are conditionally sampled as a function of the measured surface elevation and transformed to wave phase space using the Hilbert transform, as sketched below. Phenomena observed in earlier experiments with the same turbine, such as a phase lag in the torque signal and an increase in thrust due to Stokes drift, are examined and presented with the present data, as is spectral analysis of the torque and thrust data.
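
    The conditional sampling step lends itself to a compact sketch: the wave phase comes from the Hilbert transform of the surface elevation, and the torque record is bin-averaged by phase. Synthetic signals stand in for the measurements; the wave frequency, torque model, and bin count are assumptions:

        # Phase averaging via the Hilbert transform: bin torque samples by the
        # instantaneous phase of the surface elevation.
        import numpy as np
        from scipy.signal import hilbert

        fs, T = 700.0, 60.0                       # sample rate (Hz) and duration (s)
        t = np.arange(0, T, 1 / fs)
        eta = np.cos(2 * np.pi * 0.5 * t)         # surface elevation, 0.5 Hz wave
        torque = 10 + 2 * np.cos(2 * np.pi * 0.5 * t - 0.8) \
                 + np.random.normal(0, 1, t.size)

        phase = np.angle(hilbert(eta))            # instantaneous phase in [-pi, pi]
        bins = np.linspace(-np.pi, np.pi, 37)     # 36 phase bins
        idx = np.digitize(phase, bins) - 1
        phase_avg = np.array([torque[idx == k].mean() for k in range(36)])

        print(phase_avg.round(2))                 # mean torque versus wave phase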

  13. How Well Does High School Grade Point Average Predict College Performance by Student Urbanicity and Timing of College Entry? REL 2017-250

    Science.gov (United States)

    Hodara, Michelle; Lewis, Karyn

    2017-01-01

    This report is a companion to a study that found that high school grade point average was a stronger predictor of performance in college-level English and math than were standardized exam scores among first-time students at the University of Alaska who enrolled directly in college-level courses. This report examines how well high school grade…

  14. Performance of a Bounce-Averaged Global Model of Super-Thermal Electron Transport in the Earth's Magnetic Field

    Science.gov (United States)

    McGuire, Tim

    1998-01-01

    In this paper, we report the results of our recent research on the application of a multiprocessor Cray T916 supercomputer in modeling super-thermal electron transport in the earth's magnetic field. In general, this mathematical model requires numerical solution of a system of partial differential equations. The code we use for this model is moderately vectorized. By using Amdahl's Law for vector processors, it can be verified that the code is about 60% vectorized on a Cray computer. Speedup factors on the order of 2.5 were obtained compared to the unvectorized code. In the following sections, we discuss the methodology of improving the code. In addition to our goal of optimizing the code for solution on the Cray computer, we had the goal of scalability in mind. Scalability combines the concepts of portability with near-linear speedup. Specifically, a scalable program is one whose performance is portable across many different architectures with differing numbers of processors for many different problem sizes. Though we have access to a Cray at this time, the goal was to also have code which would run well on a variety of architectures.
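
    The 60% vectorization figure and the 2.5x speedup are consistent with Amdahl's Law: with a vectorized fraction f = 0.6, the overall speedup is bounded by 1/(1 - f) = 2.5 regardless of how fast the vector units run. A quick check:

        # Worked check of the Amdahl's-Law statement above.
        def amdahl_speedup(f: float, v: float) -> float:
            """Overall speedup with fraction f accelerated by factor v."""
            return 1.0 / ((1.0 - f) + f / v)

        for v in (2, 4, 8, 1e9):
            print(f"vector speed-up {v}: overall {amdahl_speedup(0.6, v):.3f}x")
        # the limiting value 2.5x matches the speedup factor reported above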

  15. Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods

    Science.gov (United States)

    Liu, Boquan; Polce, Evan; Sprott, Julien C.; Jiang, Jack J.

    2018-01-01

    Purpose: The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Study Design: Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100…

  16. Rapid Prototyping of High Performance Signal Processing Applications

    Science.gov (United States)

    Sane, Nimish

    Advances in embedded systems for digital signal processing (DSP) are enabling many scientific projects and commercial applications. At the same time, these applications are key to driving advances in many important kinds of computing platforms. In this region of high performance DSP, rapid prototyping is critical for faster time-to-market (e.g., in the wireless communications industry) or time-to-science (e.g., in radio astronomy). DSP system architectures have evolved from being based on application specific integrated circuits (ASICs) to incorporate reconfigurable off-the-shelf field programmable gate arrays (FPGAs), the latest multiprocessors such as graphics processing units (GPUs), or heterogeneous combinations of such devices. We, thus, have a vast design space to explore based on performance trade-offs, and expanded by the multitude of possibilities for target platforms. In order to allow systematic design space exploration, and develop scalable and portable prototypes, model based design tools are increasingly used in design and implementation of embedded systems. These tools allow scalable high-level representations, model based semantics for analysis and optimization, and portable implementations that can be verified at higher levels of abstractions and targeted toward multiple platforms for implementation. The designer can experiment using such tools at an early stage in the design cycle, and employ the latest hardware at later stages. In this thesis, we have focused on dataflow-based approaches for rapid DSP system prototyping. This thesis contributes to various aspects of dataflow-based design flows and tools as follows: 1. We have introduced the concept of topological patterns, which exploits commonly found repetitive patterns in DSP algorithms to allow scalable, concise, and parameterizable representations of large scale dataflow graphs in high-level languages. We have shown how an underlying design tool can systematically exploit a high

  17. Use and Protection of GPS Sidelobe Signals for Enhanced Navigation Performance in High Earth Orbit

    Science.gov (United States)

    Parker, Joel J. K.; Valdez, Jennifer E.; Bauer, Frank H.; Moreau, Michael C.

    2016-01-01

    The GPS (Global Positioning System) Space Service Volume (SSV) signal environment spans altitudes from 3,000 to 36,000 kilometers. Current SSV specifications only capture the performance provided by signals transmitted within a 23.5° (L1) or 26° (L2-L5) off-nadir angle. Recent on-orbit data and lessons learned show significant PNT (Positioning, Navigation and Timing) performance improvements when the full aggregate signal, including sidelobes, is used. Numerous military and civil operational missions in high Earth orbit/geosynchronous Earth orbit (HEO/GEO) utilize the full signal to enhance vehicle PNT performance

  18. Design of excitation signals for active system monitoring in a performance assessment setup

    DEFF Research Database (Denmark)

    Green, Torben; Izadi-Zamanabadi, Roozbeh; Niemann, Hans Henrik

    2011-01-01

    This paper investigates how the excitation signal should be chosen for an active performance assessment setup. The signal is used in a setup where the main purpose is to detect whether a parameter change of the controller has changed the global performance significantly. The signal has to be able to excite the dynamics of the subsystem under investigation both before and after the parameter change. The controller is well known, but there exists no detailed knowledge about the dynamics of the subsystem.
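
    One common excitation choice for this kind of active monitoring is a maximum-length PRBS from a linear-feedback shift register, which is persistently exciting over a wide band; the paper itself does not prescribe this, so the sketch below is only a plausible starting point (the taps implement the standard PRBS9 polynomial x^9 + x^5 + 1, and the amplitude and length would have to be tuned to the subsystem dynamics):

        # PRBS9 excitation signal from a 9-bit LFSR (period 511 samples).
        def prbs9(n_samples: int, amplitude: float = 1.0):
            state = 0x1FF                             # nonzero seed
            out = []
            for _ in range(n_samples):
                bit = ((state >> 8) ^ (state >> 4)) & 1   # taps 9 and 5
                state = ((state << 1) | bit) & 0x1FF
                out.append(amplitude if bit else -amplitude)
            return out

        excitation = prbs9(511)                       # one full m-sequence period
        print(sum(excitation))                        # near-zero mean over a period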

  19. Panel discussion : signals for capital investment : systems for assessing performance

    International Nuclear Information System (INIS)

    Shalaby, A.; Van Beers, R.; Keizer, C.; Taylor, R.; Rothstein, S.

    2003-01-01

    This session presented highlights of 5 panelists who discussed signals for capital investment in Ontario's newly opened electricity market. Four main issues were highlighted. The panelists emphasized that the industry does not want a market where the price is managed by anyone. They don't want government interference, which will undermine the market's integrity. In addition, the industry wants a market that reflects scarcity, as well as a transparent market, where all the necessary information is available to all players. It was noted that at the moment, green power is not the priority. Rather, emphasis should be placed on reliability, transmission planning, inter-regional coordination, and joint investments with neighbouring jurisdictions. figs

  20. Performance comparison of independent component analysis algorithms for fetal cardiac signal reconstruction: a study on synthetic fMCG data

    International Nuclear Information System (INIS)

    Mantini, D; II, K E Hild; Alleva, G; Comani, S

    2006-01-01

    Independent component analysis (ICA) algorithms have been successfully used for signal extraction tasks in the field of biomedical signal processing. We studied the performances of six algorithms (FastICA, CubICA, JADE, Infomax, TDSEP and MRMI-SIG) for fetal magnetocardiography (fMCG). Synthetic datasets were used to check the quality of the separated components against the original traces. Real fMCG recordings were simulated with linear combinations of typical fMCG source signals: maternal and fetal cardiac activity, ambient noise, maternal respiration, sensor spikes and thermal noise. Clusters of different dimensions (19, 36 and 55 sensors) were prepared to represent different MCG systems. Two types of signal-to-interference ratios (SIR) were measured. The first involves averaging over all estimated components and the second is based solely on the fetal trace. The computation time to reach a minimum of 20 dB SIR was measured for all six algorithms. No significant dependency on gestational age or cluster dimension was observed. Infomax performed poorly when a sub-Gaussian source was included; TDSEP and MRMI-SIG were sensitive to additive noise, whereas FastICA, CubICA and JADE showed the best performances. Of all six methods considered, FastICA had the best overall performance in terms of both separation quality and computation times
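
    A hedged sketch of the separation task itself, with invented waveforms and a random mixing matrix (far simpler than real fMCG data), using scikit-learn's FastICA:

        # Mix three synthetic "sources" (maternal cardiac, fetal cardiac, noise)
        # into sensor channels, then unmix them with FastICA.
        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        t = np.linspace(0, 10, 4000)
        maternal = np.sign(np.sin(2 * np.pi * 1.2 * t))   # ~72 bpm, square-ish
        fetal = np.sign(np.sin(2 * np.pi * 2.3 * t))      # ~138 bpm
        noise = rng.normal(0, 1, t.size)
        S = np.c_[maternal, fetal, noise]

        A = rng.normal(size=(3, 3))                       # random mixing matrix
        X = S @ A.T                                       # observed "sensor" data

        ica = FastICA(n_components=3, random_state=0)
        S_hat = ica.fit_transform(X)                      # estimated sources
        # correlate each estimate with each true source; each row should have
        # one entry near 1 (sources recovered up to order and sign)
        corr = np.abs(np.corrcoef(S.T, S_hat.T)[:3, 3:])
        print(corr.round(2))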

  1. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  2. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...

  3. MOL-Eye: A New Metric for the Performance Evaluation of a Molecular Signal

    OpenAIRE

    Turan, Meric; Kuran, Mehmet Sukru; Yilmaz, H. Birkan; Chae, Chan-Byoung; Tugcu, Tuna

    2017-01-01

    Inspired by the eye diagram in classical radio frequency (RF) based communications, the MOL-Eye diagram is proposed for the performance evaluation of a molecular signal within the context of molecular communication. Utilizing various features of this diagram, three new metrics for the performance evaluation of a molecular signal, namely the maximum eye height, standard deviation of received molecules, and counting SNR (CSNR) are introduced. The applicability of these performance metrics in th...

  4. MAMAP – a new spectrometer system for column-averaged methane and carbon dioxide observations from aircraft: instrument description and performance analysis

    Directory of Open Access Journals (Sweden)

    K. Gerilowski

    2011-02-01

    Full Text Available Carbon dioxide (CO2) and methane (CH4) are the two most important anthropogenic greenhouse gases. CH4 is furthermore one of the most potent present and future contributors to global warming because of its large global warming potential (GWP). Our knowledge of CH4 and CO2 source strengths is based primarily on bottom-up scaling of sparse in-situ local point measurements of emissions and up-scaling of emission factor estimates, or on top-down modeling incorporating data from surface networks and, more recently for CH4, also data from low spatial resolution satellite observations. There is a need to measure and retrieve the dry columns of CO2 and CH4 with high spatial resolution and spatial coverage. In order to fill this gap a new passive airborne 2-channel grating spectrometer instrument for remote sensing of small-scale and mesoscale column-averaged CH4 and CO2 observations has been developed. This Methane Airborne MAPper (MAMAP) instrument measures reflected and scattered solar radiation in the short wave infrared (SWIR) and near-infrared (NIR) parts of the electromagnetic spectrum at moderate spectral resolution. The SWIR channel yields measurements of atmospheric absorption bands of CH4 and CO2 in the spectral range between 1.59 and 1.69 μm at a spectral resolution of 0.82 nm. The NIR channel around 0.76 μm measures the atmospheric O2-A-band absorption with a resolution of 0.46 nm. MAMAP has been designed for flexible operation aboard a variety of airborne platforms. The instrument design and the performance of the SWIR channel, together with some results from on-ground and in-flight engineering tests, are presented. The SWIR channel performance has been analyzed using a retrieval algorithm applied to the nadir measured spectra. Dry air column-averaged mole fractions are obtained from SWIR

  5. Ambiguity towards Multiple Historical Performance Information Signals: Evidence from Indonesian Open-Ended Mutual Fund Investors

    Directory of Open Access Journals (Sweden)

    Haris Pratama Loeis

    2015-10-01

    Full Text Available This study focuses on the behavior of open-ended mutual fund investors when confronted with multiple information signals of a mutual fund's historical performance. The behavior of investors can be reflected in their decision to subscribe or redeem their funds from mutual funds. Moreover, we observe the presence of ambiguity within investors due to multiple information signals, and also their reaction towards it. Our finding shows that open-ended mutual fund investors do not only have sensitivity towards past performance information signals, but also have additional sensitivity towards the ambiguity of multiple information signals. Because of the presence of ambiguity, investors give more consideration to negative information signals and the worst information signal in their investment decisions.

  6. Comparison of two different high performance mixed signal controllers for DC/DC converters

    DEFF Research Database (Denmark)

    Jakobsen, Lars Tønnes; Andersen, Michael Andreas E.

    2006-01-01

    This paper describes how mixed signal controllers combining a cheap microcontroller with a simple analogue circuit can offer high performance digital control for DC/DC converters. Mixed signal controllers have the same versatility and performance as DSP based controllers. It is important to have an engineer experienced in microcontroller programming write the software algorithms to achieve optimal performance. Two mixed signal controller designs based on the same 8-bit microcontroller are compared both theoretically and experimentally. A 16-bit PID compensator with a sampling frequency of 200 kHz implemented in the 16 MIPS, 8-bit ATTiny26 microcontroller is demonstrated.

  7. Pedestrian headform testing: inferring performance at impact speeds and for headform masses not tested, and estimating average performance in a range of real-world conditions.

    Science.gov (United States)

    Hutchinson, T Paul; Anderson, Robert W G; Searson, Daniel J

    2012-01-01

    Tests are routinely conducted where instrumented headforms are projected at the fronts of cars to assess pedestrian safety. Better information would be obtained by accounting for performance over the range of expected impact conditions in the field. Moreover, methods will be required to integrate the assessment of secondary safety performance with primary safety systems that reduce the speeds of impacts. Thus, we discuss how to estimate performance over a range of impact conditions from performance in one test and how this information can be combined with information on the probability of different impact speeds to provide a balanced assessment of pedestrian safety. Theoretical consideration is given to two distinct aspects of impact safety performance: the test impact severity (measured by the head injury criterion, HIC) at a speed at which a structure does not bottom out, and the speed at which bottoming out occurs. Further considerations are given to an injury risk function, the distribution of impact speeds likely in the field, and the effect of primary safety systems on impact speeds. These are used to calculate curves that estimate injuriousness for combinations of test HIC, bottoming out speed, and alternative distributions of impact speeds. The injuriousness of a structure that may be struck by the head of a pedestrian depends not only on the result of the impact test but also on the bottoming out speed and the distribution of impact speeds. Example calculations indicate that the relationship between the test HIC and injuriousness extends over a larger range than is presently used by the European New Car Assessment Programme (Euro NCAP), that bottoming out at speeds only slightly higher than the test speed can significantly increase the injuriousness of an impact location and that effective primary safety systems that reduce impact speeds significantly modify the relationship between the test HIC and injuriousness. Present testing regimes do not take fully into
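
    For reference, the head injury criterion discussed throughout this record has a standard definition, not quoted in the record itself: with a(t) the resultant head acceleration in g and the window t2 - t1 capped (typically at 15 or 36 ms),

        \mathrm{HIC} = \max_{t_1,\, t_2} \left\{ (t_2 - t_1)
          \left[ \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} a(t)\, \mathrm{d}t \right]^{2.5} \right\}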

  8. Measurement of signal-to-noise ratio performance of TV fluoroscopy systems

    International Nuclear Information System (INIS)

    Geluk, R.J.

    1985-01-01

    A method has been developed for direct measurement of Signal-to-Noise ratio performance on X-ray TV systems. To this end the TV signal resulting from a calibrated test object, is compared with the noise level in the image. The method is objective and produces instantaneous readout, which makes it very suitable for system evaluation under dynamic conditions. (author)

  9. Performance characterization of the IEEE 802.11 signal transmission over a multimode fiber PON

    Science.gov (United States)

    Maksymiuk, L.; Siuzdak, J.

    2014-11-01

    This paper presents measurements analyzing the performance of IEEE 802.11 signal distribution over a multimode-fiber-based passive optical network. Three main sources of impairment are addressed: modal noise, frequency response fluctuations of the multimode fiber, and nonlinear distortion of the signal in the receiver.

  10. Device to detect the presence of a pure signal in a discrete noisy signal measured at a constant average noise rate, with a probability of false detection lower than a predetermined value

    International Nuclear Information System (INIS)

    Poussier, E.; Rambaut, M.

    1986-01-01

    Detection consists of a measurement of a counting rate. A probability of false detection is associated with this counting rate and with an estimated average noise rate. Detection also consists in comparing the false detection probability to a predetermined false detection rate. The comparison can use tabulated values. Application is made to particle radiation detection [fr]
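
    The decision rule described above reduces to a Poisson threshold test. A minimal sketch, assuming the noise counts are Poisson with a known mean over the measurement interval (values are illustrative):

        # Smallest count threshold k whose false-detection probability
        # P(N >= k | noise only) is below the predetermined rate alpha.
        from scipy.stats import poisson

        def detection_threshold(mu: float, alpha: float) -> int:
            k = 0
            while poisson.sf(k - 1, mu) > alpha:   # sf(k-1) = P(N >= k)
                k += 1
            return k

        mu, alpha = 12.0, 1e-3
        k = detection_threshold(mu, alpha)
        print(f"declare a signal at count {k} "
              f"(false-detection prob {poisson.sf(k - 1, mu):.2e})")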

  11. Performance Improvement of Power Analysis Attacks on AES with Encryption-Related Signals

    Science.gov (United States)

    Lee, You-Seok; Lee, Young-Jun; Han, Dong-Guk; Kim, Ho-Won; Kim, Hyoung-Nam

    A power analysis attack is a well-known side-channel attack, but the efficiency of the attack is frequently degraded by power components unrelated to the encryption that are included in the signals used for the attack. To enhance the performance of the power analysis attack, we propose a preprocessing method based on extracting encryption-related parts from the measured power signals. Experimental results show that attacks with the preprocessed signals detect correct keys with far fewer signals, compared to conventional power analysis attacks.
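
    For context, a toy correlation power analysis illustrates why encryption-related leakage matters: the key guess whose predicted leakage (here, the Hamming weight of a 4-bit S-box output; the PRESENT S-box is used as a compact stand-in for AES) correlates best with the traces wins. This is a generic CPA sketch, not the preprocessing method proposed above:

        # Toy CPA: simulated traces = Hamming-weight leakage + noise; the noise
        # plays the role of the encryption-unrelated power components.
        import numpy as np

        SBOX4 = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT S-box
        hw = [bin(v).count("1") for v in range(16)]

        rng = np.random.default_rng(2)
        true_key = 0x7
        plaintexts = rng.integers(0, 16, 2000)
        traces = np.array([hw[SBOX4[p ^ true_key]] for p in plaintexts]) \
                 + rng.normal(0, 2.0, plaintexts.size)

        scores = []
        for guess in range(16):
            hyp = np.array([hw[SBOX4[p ^ guess]] for p in plaintexts])
            scores.append(abs(np.corrcoef(hyp, traces)[0, 1]))
        print("recovered key nibble:", hex(int(np.argmax(scores))))  # 0x7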

  12. A High Performance Approach to Minimizing Interactions between Inbound and Outbound Signals in Helmet, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose a high performance approach to enhancing communications between astronauts. In the new generation of NASA audio systems for astronauts, inbound signals...

  13. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.

  14. Predicting Performance on the National Athletic Trainers' Association Board of Certification Examination From Grade Point Average and Number of Clinical Hours.

    Science.gov (United States)

    Middlemas, David A.; Manning, James M.; Gazzillo, Linda M.; Young, John

    2001-06-01

    OBJECTIVE: To determine whether grade point average, hours of clinical education, or both are significant predictors of performance on the National Athletic Trainers' Association Board of Certification examination and whether curriculum and internship candidates' scores on the certification examination can be differentially predicted. DESIGN AND SETTING: Data collection forms and consent forms were mailed to the subjects to collect data for predictor variables. Subject scores on the certification examination were obtained from Columbia Assessment Services. SUBJECTS: A total of 270 first-time candidates for the April and June 1998 certification examinations. MEASUREMENTS: Grade point average, number of clinical hours completed, sex, route to certification eligibility (curriculum or internship), scores on each section of the certification examination, and pass/fail criteria for each section. RESULTS: We found no significant difference between the scores of men and women on any section of the examination. Scores for curriculum and internship candidates differed significantly on the written and practical sections of the examination but not on the simulation section. Grade point average was a significant predictor of scores on each section of the examination and the examination as a whole. Clinical hours completed did not add a significant increment for any section but did add a significant increment for the examination overall. Although no significant difference was noted between curriculum and internship candidates in predicting scores on sections of the examination, a significant difference by route was found in predicting whether candidates would pass the examination as a whole (P = .047). Proportion of variance accounted for was less than R² = 0.0723 for any section of the examination and R² = 0.057 for the examination as a whole. CONCLUSIONS: Potential predictors of performance on the certification examination can be useful to athletic training educators in

  15. Performance of MgO:PPLN, KTA, and KNbO₃ for mid-wave infrared broadband parametric amplification at high average power.

    Science.gov (United States)

    Baudisch, M; Hemmer, M; Pires, H; Biegert, J

    2014-10-15

    The performance of potassium niobate (KNbO₃), MgO-doped periodically poled lithium niobate (MgO:PPLN), and potassium titanyl arsenate (KTA) was experimentally compared for broadband mid-wave infrared parametric amplification at a high repetition rate. The seed pulses, with an energy of 6.5 μJ, were amplified using 410 μJ of pump energy at 1064 nm to a maximum pulse energy of 28.9 μJ at 3 μm wavelength and a 160 kHz repetition rate in MgO:PPLN, while supporting a transform-limited duration of 73 fs. The high average powers of the interacting beams used in this study revealed average-power-induced processes that limit the scaling of optical parametric amplification in MgO:PPLN; the pump peak intensity was limited to 3.8 GW/cm² due to nonpermanent beam reshaping, whereas in KNbO₃ an absorption-induced temperature gradient in the crystal led to permanent internal distortions in the crystal structure when operated above a pump peak intensity of 14.4 GW/cm².

  16. Effect of grade point average and enrollment in a dental hygiene National Board review course on student performance on the National Board Examination.

    Science.gov (United States)

    DeWald, Janice P; Gutmann, Marylou E; Solomon, Eric S

    2004-01-01

    Passing the National Board Dental Hygiene Examination is a requirement for licensure in all but one state. There are a number of preparation courses for the examination sponsored by corporations and dental hygiene programs. The purpose of this study was to determine if taking a board review course significantly affected student performance on the board examination. Students from the last six dental hygiene classes at Baylor College of Dentistry (n = 168) were divided into two groups depending on whether they took a particular review course. Mean entering college grade point averages (GPA), exiting dental hygiene program GPAs, and National Board scores were compared for the two groups using a t-test for independent samples (p < 0.05). No significant differences were found between the two groups for entering GPA and National Board scores. Exiting GPAs, however, were slightly higher for those not taking the course compared to those taking the course. In addition, a strong correlation (0.71, Pearson Correlation) was found between exiting GPA and National Board score. Exiting GPA was found to be a strong predictor of National Board performance. These results do not appear to support this program's participation in an external preparation course as a means of increasing students' performance on the National Board Dental Hygiene Examination.

  17. Screening applicants for risk of poor academic performance: a novel scoring system using preadmission grade point averages and graduate record examination scores.

    Science.gov (United States)

    Luce, David

    2011-01-01

    The purpose of this study was to develop an effective screening tool for identifying physician assistant (PA) program applicants at highest risk for poor academic performance. Prior to reviewing applications for the class of 2009, a retrospective analysis of preadmission data took place for the classes of 2006, 2007, and 2008. A single composite score was calculated for each student who matriculated (number of subjects, N=228) incorporating the total undergraduate grade point average (UGPA), the science GPA (SGPA), and the three component Graduate Record Examination (GRE) scores: verbal (GRE-V), quantitative (GRE-Q), analytical (GRE-A). Individual applicant scores for each of the five parameters were ranked in descending quintiles. Each applicant's five quintile scores were then added, yielding a total quintile score ranging from 25, which indicated an excellent performance, to 5, which indicated poorer performance. Thirteen of the 228 students had academic difficulty (dismissal, suspension, or one-quarter on academic warning or probation). Twelve of the 13 students having academic difficulty had a preadmission total quintile score 12 (range, 6-14). In response to this descriptive analysis, when selecting applicants for the class of 2009, the admissions committee used the total quintile score for screening applicants for interviews. Analysis of correlations in preadmission, graduate, and postgraduate performance data for the classes of 2009-2013 will continue and may help identify those applicants at risk for academic difficulty. Establishing a threshold total quintile score of applicant GPA and GRE scores may significantly decrease the number of entering PA students at risk for poor academic performance.

  18. Ambiguity Towards Multiple Historical Performance Information Signals: Evidence From Indonesian Open-Ended Mutual Fund Investors

    OpenAIRE

    Haris Pratama Loeis; Ruslan Prijadi

    2015-01-01

    This study focuses on the behavior of open-ended mutual fund investors when confronted with multiple information signals of a mutual fund's historical performance. The behavior of investors can be reflected in their decision to subscribe or redeem their funds from mutual funds. Moreover, we observe the presence of ambiguity within investors due to multiple information signals, and also their reaction towards it. Our finding shows that open-ended mutual fund investors do not only have sen...

  19. Performance Analysis of Control Signal Transmission Technique for Cognitive Radios in Dynamic Spectrum Access Networks

    Science.gov (United States)

    Sakata, Ren; Tomioka, Tazuko; Kobayashi, Takahiro

    When cognitive radio (CR) systems dynamically use the frequency band, a control signal is necessary to indicate which carrier frequencies are currently available in the network. In order to keep efficient spectrum utilization, this control signal also should be transmitted based on the channel conditions. If transmitters dynamically select carrier frequencies, receivers have to receive control signals without knowledge of their carrier frequencies. To enable such transmission and reception, this paper proposes a novel scheme called DCPT (Differential Code Parallel Transmission). With DCPT, receivers can receive low-rate information with no knowledge of the carrier frequencies. The transmitter transmits two signals whose carrier frequencies are spaced by a predefined value. The absolute values of the carrier frequencies can be varied. When the receiver acquires the DCPT signal, it multiplies the signal by a frequency-shifted version of the signal; this yields a DC component that represents the data signal which is then demodulated. The performance was evaluated by means of numerical analysis and computer simulation. We confirmed that DCPT operates successfully even under severe interference if its parameters are appropriately configured.
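
    A rough numerical reading of the scheme as described (my reconstruction, with arbitrary parameters): two carriers spaced by a known offset carry the same on-off data, and multiplying the received signal by a frequency-shifted copy of itself produces a baseband (DC) term proportional to the data, with no knowledge of the absolute carrier frequency:

        # Self-mixing demodulation of two carriers spaced by a known offset.
        import numpy as np

        n = np.arange(20000)
        f1, delta = 0.137, 0.02        # unknown carrier, known spacing (cycles/sample)
        data = np.repeat(np.random.default_rng(3).integers(0, 2, 20), 1000)  # OOK bits

        r = data * (np.exp(2j * np.pi * f1 * n) + np.exp(2j * np.pi * (f1 + delta) * n))
        rng = np.random.default_rng(4)
        r += 0.3 * (rng.normal(0, 1, n.size) + 1j * rng.normal(0, 1, n.size))

        # conj(r) * (r shifted down by delta) contains a DC term ~ data**2
        z = np.conj(r) * (r * np.exp(-2j * np.pi * delta * n))
        lp = np.convolve(z.real, np.ones(400) / 400, mode="same")  # crude low-pass
        bits_hat = (lp[500::1000] > lp.max() / 2).astype(int)      # sample bit centers
        print("bit agreement:", (bits_hat == data[500::1000]).mean())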

  20. Observer performance in detecting multiple radiographic signals: prediction and analysis using a generalized ROC approach

    International Nuclear Information System (INIS)

    Metz, C.E.; Starr, S.J.; Lusted, L.B.

    1975-01-01

    The theories of decision processes and signal detection provide a framework for the evaluation of observer performance. Some radiologic procedures involve a search for multiple similar lesions, as in gallstone or pneumoconiosis examinations. A model is presented which attempts to predict, from the conventional receiver operating characteristic (ROC) curve describing the detectability of a single visual signal in a radiograph, observer performance in an experiment requiring detection of more than one such signal. An experiment is described which tests the validity of this model for the case of detecting the presence of zero, one, or two low-contrast radiographic images of a two-mm.-diameter lucite bead embedded in radiographic mottle. Results from six observers, including three radiologists, confirm the validity of the model and suggest that human observer performance for relatively complex detection tasks can be predicted from the results of simpler experiments

  1. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly from the fact that in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior; instead of suggesting simple recipes, we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
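
    The core bias is easy to reproduce numerically: for lognormally distributed abundances, exponentiating the mean of the logs (a geometric mean) falls below the arithmetic mean by a factor exp(sigma^2/2), which grows with variability:

        # Linear vs. logarithmic averaging bias for lognormal abundances.
        import numpy as np

        rng = np.random.default_rng(0)
        for sigma in (0.1, 0.5, 1.0):                 # variability of ln(x)
            x = rng.lognormal(mean=0.0, sigma=sigma, size=100_000)
            linear = x.mean()                         # averaging the abundances
            logarithmic = np.exp(np.log(x).mean())    # averaging the log-retrievals
            print(f"sigma={sigma}: linear {linear:.3f}, log {logarithmic:.3f} "
                  f"(theory gap: e^(sigma^2/2) = {np.exp(sigma**2 / 2):.3f})")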

  2. Investigation of the thermal and optical performance of a spatial light modulator with high average power picosecond laser exposure for materials processing applications

    Science.gov (United States)

    Zhu, G.; Whitehead, D.; Perrie, W.; Allegre, O. J.; Olle, V.; Li, Q.; Tang, Y.; Dawson, K.; Jin, Y.; Edwardson, S. P.; Li, L.; Dearden, G.

    2018-03-01

    Spatial light modulators (SLMs) addressed with computer generated holograms (CGHs) can create structured light fields on demand when an incident laser beam is diffracted by a phase CGH. The power handling limitations of these devices, based on a liquid crystal layer, have always been of some concern. With careful engineering of chip thermal management, we report the detailed optical phase and temperature response of a liquid cooled SLM exposed to picosecond laser powers up to 〈P〉 = 220 W at 1064 nm. This information is critical for determining device performance at high laser powers. SLM chip temperature rose linearly with incident laser exposure, increasing by only 5 °C at 〈P〉 = 220 W incident power, measured with a thermal imaging camera. Thermal response time with continuous exposure was 1-2 s. The optical phase response with incident power approaches 2π radians with average power up to 〈P〉 = 130 W, hence the operational limit, while above this power, liquid crystal thickness variations limit the phase response to just over π radians. Modelling of the thermal and phase response with exposure is also presented, supporting experimental observations well. These remarkable performance characteristics show that liquid crystal based SLM technology is highly robust when efficiently cooled. High speed, multi-beam plasmonic surface micro-structuring at a rate R = 8 cm² s⁻¹ is achieved on polished metal surfaces at 〈P〉 = 25 W exposure, while diffractive, multi-beam surface ablation with average power 〈P〉 = 100 W on stainless steel is demonstrated with an ablation rate of ~4 mm³ min⁻¹. However, above 130 W, first order diffraction efficiency drops significantly in accord with the observed operational limit. Continuous exposure for a period of 45 min at a laser power of 〈P〉 = 160 W did not result in any detectable drop in diffraction efficiency, confirmed afterwards by the efficient

  3. Detection of auditory signals in quiet and noisy backgrounds while performing a visuo-spatial task

    Directory of Open Access Journals (Sweden)

    Vishakha W Rawool

    2016-01-01

    Full Text Available Context: The ability to detect important auditory signals while performing visual tasks may be further compounded by background chatter. Thus, it is important to know how task performance may interact with background chatter to hinder signal detection. Aim: To examine any interactive effects of speech-spectrum noise and task performance on the ability to detect signals. Settings and Design: The setting was a sound-treated booth. A repeated measures design was used. Materials and Methods: Auditory thresholds of 20 normal adults were determined at 0.5, 1, 2 and 4 kHz in the following conditions presented in a random order: (1) quiet with attention; (2) quiet with a visuo-spatial task or puzzle (distraction); (3) noise with attention; and (4) noise with task. Statistical Analysis: Multivariate analyses of variance (MANOVA) with three repeated factors (quiet versus noise, visuo-spatial task versus no task, signal frequency). Results: MANOVA revealed significant main effects for noise and signal frequency and significant noise-frequency and task-frequency interactions. Distraction caused by performing the task worsened the thresholds for tones presented at the beginning of the experiment and had no effect on tones presented in the middle. At the end of the experiment, thresholds (4 kHz) were better while performing the task than those obtained without performing the task. These effects were similar across the quiet and noise conditions. Conclusion: Detection of auditory signals is difficult at the beginning of a distracting visuo-spatial task, but over time, task learning and auditory training effects can nullify the effect of distraction and may improve detection of high-frequency sounds.

  4. Underlay Cognitive Radio Systems with Improper Gaussian Signaling: Outage Performance Analysis

    KAUST Repository

    Amin, Osama

    2016-03-29

    Improper Gaussian signaling has the ability over proper (conventional) Gaussian signaling to improve the achievable rate of systems that suffer from interference. In this paper, we study the impact of using improper Gaussian signaling on the performance limits of the underlay cognitive radio system by analyzing the achievable outage probability of both the primary user (PU) and secondary user (SU). We derive the exact outage probability expression of the SU and construct upper and lower bounds of the PU outage probability which results in formulating an approximate expression of the PU outage probability. This allows us to design the SU signal by adjusting its transmitted power and the circularity coefficient to minimize the SU outage probability while maintaining a certain PU quality-of-service. Finally, we evaluate the derived expressions for both the SU and the PU and the corresponding adaptive algorithms by numerical results.

  5. Underlay Cognitive Radio Systems with Improper Gaussian Signaling: Outage Performance Analysis

    KAUST Repository

    Amin, Osama; Abediseid, Walid; Alouini, Mohamed-Slim

    2016-01-01

    Improper Gaussian signaling has the ability over proper (conventional) Gaussian signaling to improve the achievable rate of systems that suffer from interference. In this paper, we study the impact of using improper Gaussian signaling on the performance limits of the underlay cognitive radio system by analyzing the achievable outage probability of both the primary user (PU) and secondary user (SU). We derive the exact outage probability expression of the SU and construct upper and lower bounds of the PU outage probability which results in formulating an approximate expression of the PU outage probability. This allows us to design the SU signal by adjusting its transmitted power and the circularity coefficient to minimize the SU outage probability while maintaining a certain PU quality-of-service. Finally, we evaluate the derived expressions for both the SU and the PU and the corresponding adaptive algorithms by numerical results.

  6. Target acquisition performance : Effects of target aspect angle, dynamic imaging and signal processing

    NARCIS (Netherlands)

    Beintema, J.A.; Bijl, P.; Hogervorst, M.A.; Dijk, J.

    2008-01-01

    In an extensive Target Acquisition (TA) performance study, we recorded static and dynamic imagery of a set of military and civilian two-handheld objects at a range of distances and aspect angles with an under-sampled uncooled thermal imager. Next, we applied signal processing techniques including

  7. Comparative Performance Evaluation of Orthogonal-Signal-Generators-Based Single-Phase PLL Algorithms

    DEFF Research Database (Denmark)

    Han, Yang; Luo, Mingyu; Zhao, Xin

    2016-01-01

    The orthogonal signal generator based phase-locked loops (OSG-PLLs) are among the most popular single-phase PLLs within the areas of power electronics and power systems, mainly because they are often easy to implement and offer robust performance against grid disturbances. The main aim o...

  8. Drug Safety Monitoring in Children: Performance of Signal Detection Algorithms and Impact of Age Stratification

    NARCIS (Netherlands)

    O.U. Osokogu (Osemeke); C. Dodd (Caitlin); A.C. Pacurariu (Alexandra C.); F. Kaguelidou (Florentia); D.M. Weibel (Daniel); M.C.J.M. Sturkenboom (Miriam)

    2016-01-01

    Introduction: Spontaneous reports of suspected adverse drug reactions (ADRs) can be analyzed to yield additional drug safety evidence for the pediatric population. Signal detection algorithms (SDAs) are required for these analyses; however, the performance of SDAs in the pediatric

  9. Influence of RZ and NRZ signal format on the high-speed performance of gain-clamped semiconductor optical amplifiers

    DEFF Research Database (Denmark)

    Fjelde, Tina; Wolfson, David; Kloch, Allan

    2000-01-01

    High-speed experiments show that the influence from the limited relaxation frequency of GC-SOAs that severely degrades the performance for NRZ signals is reduced by using RZ signals, thus resulting in a higher input power dynamic range.

  10. Relation between stability and resilience determines the performance of early warning signals under different environmental drivers.

    Science.gov (United States)

    Dai, Lei; Korolev, Kirill S; Gore, Jeff

    2015-08-11

    Shifting patterns of temporal fluctuations have been found to signal critical transitions in a variety of systems, from ecological communities to human physiology. However, failure of these early warning signals in some systems calls for a better understanding of their limitations. In particular, little is known about the generality of early warning signals in different deteriorating environments. In this study, we characterized how multiple environmental drivers influence the dynamics of laboratory yeast populations, which was previously shown to display alternative stable states [Dai et al., Science, 2012]. We observed that both the coefficient of variation and autocorrelation increased before population collapse in two slowly deteriorating environments, one with a rising death rate and the other one with decreasing nutrient availability. We compared the performance of early warning signals across multiple environments as "indicators for loss of resilience." We find that the varying performance is determined by how a system responds to changes in a specific driver, which can be captured by a relation between stability (recovery rate) and resilience (size of the basin of attraction). Furthermore, we demonstrate that the positive correlation between stability and resilience, as the essential assumption of indicators based on critical slowing down, can break down in this system when multiple environmental drivers are changed simultaneously. Our results suggest that the stability-resilience relation needs to be better understood for the application of early warning signals in different scenarios.
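
    The two indicators used above are straightforward to compute; a minimal sketch with a synthetic series, using a rolling coefficient of variation and lag-1 autocorrelation over a sliding window:

        # Early-warning indicators in a sliding window over a time series.
        import numpy as np

        def early_warning_indicators(x: np.ndarray, window: int):
            cv, ac1 = [], []
            for i in range(len(x) - window + 1):
                w = x[i:i + window]
                cv.append(w.std() / w.mean())                 # coefficient of variation
                ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])  # lag-1 autocorrelation
            return np.array(cv), np.array(ac1)

        series = 100 + 0.1 * np.cumsum(np.random.default_rng(7).normal(0, 1, 500))
        cv, ac1 = early_warning_indicators(series, window=50)
        print(cv[-1].round(3), ac1[-1].round(3))  # rising trends would warn of collapse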

  11. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also to the case in which there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction of this model were determined by a least-squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry beyond the neutron-drip line until the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  12. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  13. A Review on Human Body Communication: Signal Propagation Model, Communication Performance, and Experimental Issues

    Directory of Open Access Journals (Sweden)

    Jian Feng Zhao

    2017-01-01

    Full Text Available Human body communication (HBC), which uses human body tissue as the transmission medium to transmit health informatics, serves as a promising physical layer solution for the body area network (BAN). The human-centric nature of HBC offers an innovative method to transfer healthcare data, whose transmission requires low interference and a reliable data link. Therefore, the deployment of HBC systems with good communication performance is required. In this regard, a tutorial review of the important issues related to HBC data transmission, such as signal propagation models, channel characteristics, communication performance, and experimental considerations, is conducted. In this work, the development of HBC and its first attempts are firstly reviewed. Then a survey of the signal propagation models is introduced. Based on these models, the channel characteristics are summarized; the communication performance and selection of transmission parameters are also investigated. Moreover, experimental issues, such as electrodes and grounding strategies, are also discussed. Finally, recommended future studies are provided.

  14. Improved stochastic resonance algorithm for enhancement of signal-to-noise ratio of high-performance liquid chromatographic signal

    International Nuclear Information System (INIS)

    Xie Shaofei; Xiang Bingren; Deng Haishan; Xiang Suyun; Lu Jun

    2007-01-01

    Based on the theory of stochastic resonance, an improved stochastic resonance algorithm with a new criterion for optimizing system parameters to enhance the signal-to-noise ratio (SNR) of the HPLC/UV chromatographic signal for trace analysis is presented in this study. Compared with the conventional criterion used in stochastic resonance, the proposed one can ensure a satisfactory SNR as well as a good chromatographic peak shape in the output signal. Application of the criterion to experimental weak HPLC/UV signals was investigated and the results showed an excellent quantitative relationship between different concentrations and responses
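
    A generic stochastic-resonance simulation (not the paper's improved algorithm) shows the underlying effect: a weak periodic input to the overdamped bistable system dx/dt = a*x - b*x^3 + s(t) + noise is amplified most strongly at an intermediate noise level:

        # Euler-Maruyama simulation of the classic bistable stochastic-resonance
        # system; the response at the drive frequency peaks at moderate noise.
        import numpy as np

        def bistable_sr(signal: np.ndarray, noise_std: float, a=1.0, b=1.0, dt=0.01):
            x, out = 0.0, np.empty(signal.size)
            kicks = np.random.default_rng(8).normal(0, noise_std, signal.size)
            for i, s in enumerate(signal):
                x += (a * x - b * x**3 + s) * dt + kicks[i] * np.sqrt(dt)
                out[i] = x
            return out

        t = np.arange(0, 200, 0.01)
        weak = 0.1 * np.sin(2 * np.pi * 0.05 * t)     # sub-threshold periodic input
        for sigma in (0.1, 0.5, 2.0):
            y = bistable_sr(weak, sigma)
            # crude response metric: amplitude at the drive frequency
            amp = abs(np.exp(-2j * np.pi * 0.05 * t) @ y) * 2 / t.size
            print(f"noise std {sigma}: response amplitude {amp:.3f}")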

  15. Monitoring and predicting cognitive state and performance via physiological correlates of neuronal signals.

    Science.gov (United States)

    Russo, Michael B; Stetz, Melba C; Thomas, Maria L

    2005-07-01

    Judgment, decision making, and situational awareness are higher-order mental abilities critically important to operational cognitive performance. Higher-order mental abilities rely on intact functioning of multiple brain regions, including the prefrontal, thalamus, and parietal areas. Real-time monitoring of individuals for cognitive performance capacity via an approach based on sampling multiple neurophysiologic signals and integrating those signals with performance prediction models potentially provides a method of supporting warfighters' and commanders' decision making and other operationally relevant mental processes and is consistent with the goals of augmented cognition. Cognitive neurophysiological assessments that directly measure brain function and subsequent cognition include positron emission tomography, functional magnetic resonance imaging, mass spectroscopy, near-infrared spectroscopy, magnetoencephalography, and electroencephalography (EEG); however, most direct measures are not practical to use in operational environments. More practical, albeit indirect measures that are generated by, but removed from the actual neural sources, are movement activity, oculometrics, heart rate, and voice stress signals. The goal of the papers in this section is to describe advances in selected direct and indirect cognitive neurophysiologic monitoring techniques as applied for the ultimate purpose of preventing operational performance failures. These papers present data acquired in a wide variety of environments, including laboratory, simulator, and clinical arenas. The papers discuss cognitive neurophysiologic measures such as digital signal processing wrist-mounted actigraphy; oculometrics including blinks, saccadic eye movements, pupillary movements, the pupil light reflex; and high-frequency EEG. These neurophysiological indices are related to cognitive performance as measured through standard test batteries and simulators with conditions including sleep loss

  16. GNSS Signal Tracking Performance Improvement for Highly Dynamic Receivers by Gyroscopic Mounting Crystal Oscillator.

    Science.gov (United States)

    Abedi, Maryam; Jin, Tian; Sun, Kewen

    2015-08-31

    In this paper, the efficiency of the gyroscopic mounting method is studied for a highly dynamic GNSS receiver's reference oscillator for reducing signal loss. Analyses are performed separately in two phases, atmospheric and upper atmospheric flights. Results show that the proposed mounting reduces signal loss, especially in parts of the trajectory where its probability is the highest. This reduction effect appears especially for crystal oscillators with a low elevation angle g-sensitivity vector. The gyroscopic mounting influences frequency deviation or jitter caused by dynamic loads on replica carrier and affects the frequency locked loop (FLL) as the dominant tracking loop in highly dynamic GNSS receivers. In terms of steady-state load, the proposed mounting mostly reduces the frequency deviation below the one-sigma threshold of FLL (1σ(FLL)). The mounting method can also reduce the frequency jitter caused by sinusoidal vibrations and reduces the probability of signal loss in parts of the trajectory where the other error sources accompany this vibration load. In the case of random vibration, which is the main disturbance source of FLL, gyroscopic mounting is even able to suppress the disturbances greater than the three-sigma threshold of FLL (3σ(FLL)). In this way, signal tracking performance can be improved by the gyroscopic mounting method for highly dynamic GNSS receivers.

  17. Detection of a random signal in a multi-channel environment: a performance study

    International Nuclear Information System (INIS)

    Frenzel, K.Z.

    1986-01-01

    Performance of the optimal (likelihood ratio) test and suboptimal tests, including the normalized cross correlator and two energy detectors, is compared for problems involving non-Gaussian as well as Gaussian statistics. Also, optimal one-channel processing is compared to optimal two-channel processing for equivalent total signal-to-noise ratios. Receiver operating characteristic (ROC) curves obtained by a combination of simulation and analytic methods are used to evaluate the performance of the processors. It was found that two-channel processing helps detection performance the most when the noise levels are uncertain. This was true for all signal and noise densities studied. In cases where the noise levels and channel attenuations are known, or when only the attenuations are uncertain, the performance using optimal one-channel processing was close to that found using optimal two-channel processing. When comparing optimal processors to the three suboptimal processors, it was found that when the noise level in each channel is very uncertain, the performance of the normalized cross correlator is much closer to the optimal than that of either of the energy detectors. If, however, the noise levels are known with a fair degree of certainty, the performance of the energy detectors improves considerably, in some cases approaching the optimal performance.

  18. A complex symbol signal-to-noise ratio estimator and its performance

    Science.gov (United States)

    Feria, Y.

    1994-01-01

    This article presents an algorithm for estimating the signal-to-noise ratio (SNR) of signals that contain data on a downconverted suppressed carrier or the first harmonic of a square-wave subcarrier. This algorithm can be used to determine the performance of the full-spectrum combiner for the Galileo S-band (2.2- to 2.3-GHz) mission by measuring the input and output symbol SNR. A performance analysis of the algorithm shows that the estimator can estimate the complex symbol SNR using 10,000 symbols at a true symbol SNR of -5 dB with a mean of -4.9985 dB and a standard deviation of 0.2454 dB, and these analytical results are checked by simulations of 100 runs with a mean of -5.06 dB and a standard deviation of 0.2506 dB.
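
    As background, a standard blind moment-based (M2M4) estimator gives the flavor of symbol-SNR estimation from complex samples; note this is a generic textbook estimator, not necessarily the algorithm analyzed in the article, and the -5 dB test point below simply mirrors the case reported above.

```python
import numpy as np

def m2m4_snr_db(symbols):
    """Blind M2M4 SNR estimate from complex symbols: separates signal and
    noise power using the second and fourth moments of |y|."""
    m2 = np.mean(np.abs(symbols) ** 2)
    m4 = np.mean(np.abs(symbols) ** 4)
    s = np.sqrt(max(2 * m2 ** 2 - m4, 0.0))   # estimated signal power
    n = m2 - s                                # estimated noise power
    return 10 * np.log10(s / n)

# 10,000 BPSK symbols at a true symbol SNR of -5 dB.
rng = np.random.default_rng(0)
n_sym, snr_lin = 10_000, 10 ** (-5 / 10)
symbols = rng.choice([-1.0, 1.0], n_sym) + 0j
noise = np.sqrt(1 / (2 * snr_lin)) * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))
print(m2m4_snr_db(symbols + noise))  # should land near -5 dB
```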

  19. Identifying colon cancer risk modules with better classification performance based on human signaling network.

    Science.gov (United States)

    Qu, Xiaoli; Xie, Ruiqiang; Chen, Lina; Feng, Chenchen; Zhou, Yanyan; Li, Wan; Huang, Hao; Jia, Xu; Lv, Junjie; He, Yuehan; Du, Youwen; Li, Weiguo; Shi, Yuchen; He, Weiming

    2014-10-01

    Identifying differences between normal and tumor samples from a modular perspective may help to improve our understanding of the mechanisms responsible for colon cancer. Many cancer studies have shown that signaling transduction and biological pathways are disturbed in disease states, and that expression profiles can distinguish variations in diseases. In this study, we integrated a weighted human signaling network and gene expression profiles to select risk modules associated with tumor conditions. Risk modules used as classification features by our method achieved better classification performance than other methods, and one risk module for colon cancer performed well in distinguishing between normal/tumor samples and between tumor stages. All genes in the module were annotated to the biological process of positive regulation of cell proliferation and were highly associated with colon cancer. These results suggest that these genes might be potential risk genes for colon cancer.

  20. Performance of Narrowband Signal Detection under Correlated Rayleigh Fading Based on Synthetic Array

    Directory of Open Access Journals (Sweden)

    Ali Broumandan

    2009-01-01

    design parameters of probability of detection (Pd) and probability of false alarm (Pfa). An optimum detector based on the Estimator-Correlator (EC) is developed, and its performance is compared with that of the suboptimal Equal-Gain (EG) combiner in different channel correlation scenarios. It is shown that in moderate channel correlation scenarios the detection performance of EC and EG is identical. The sensitivity of the proposed method to knowledge of motion parameters is also investigated. An extensive set of measurements based on CDMA-2000 pilot signals using a static antenna and a synthetic array is used to experimentally verify these theoretical findings.

  1. Effects of signal salience and noise on performance and stress in an abbreviated vigil

    Science.gov (United States)

    Helton, William Stokely

    Vigilance or sustained attention tasks traditionally require observers to detect predetermined signals that occur unpredictably over periods of 30 min to several hours (Warm, 1984). These tasks are taxing and have been useful in revealing the effects of stress agents, such as infectious disease and drugs, on human performance (Alluisi, 1969; Damos & Parker, 1994; Warm, 1993). However, their long duration has been an inconvenience. Recently, Temple and his associates (Temple et al., 2000) developed an abbreviated 12-min vigilance task that duplicates many of the findings with longer duration vigils. The present study was designed to explore further the similarity of the abbreviated task to long-duration vigils by investigating the effects of signal salience and jet-aircraft engine noise on performance, operator stress, and coping strategies. Forty-eight observers (24 males and 24 females) were assigned at random to each of four conditions resulting from the factorial combination of signal salience (high and low contrast signals) and background noise (quiet and jet-aircraft noise). As is the case with long-duration vigils (Warm, 1993), signal detection in the abbreviated task was poorer for low salience than for high salience signals. In addition, stress scores, as indexed by the Dundee Stress State Questionnaire (Matthews, Joiner, Gilliland, Campbell, & Falconer, 1999), were elevated in the low as compared to the high salience condition. Unlike longer vigils, however (Becker, Warm, Dember, & Hancock, 1996), signal detection in the abbreviated task was superior in the presence of aircraft noise compared to quiet. Noise also attenuated the stress of the vigil, a result that is counter to previous findings regarding the effects of noise in a variety of other scenarios (Clark, 1984). Examination of observers' coping responses, as assessed by the Coping Inventory for Task Situations (Matthews & Campbell, 1998), indicated that problem-focused coping was the overwhelming

  2. A High Performance Pocket-Size System for Evaluations in Acoustic Signal Processing

    Directory of Open Access Journals (Sweden)

    Steeger Gerhard H

    2001-01-01

    Full Text Available Custom-made hardware is attractive for sophisticated signal processing in wearable electroacoustic devices, but has a high initial cost overhead. Thus, signal processing algorithms should be tested thoroughly in real application environments by potential end users prior to the hardware implementation. In addition, the algorithms should be easily alterable during this test phase. A wearable system which meets these requirements has been developed and built. The system is based on the high performance signal processor Motorola DSP56309. This device also includes high quality stereo analog-to-digital (ADC) and digital-to-analog (DAC) converters with 20 bit word length each. The available dynamic range exceeds 88 dB. The input and output gains can be adjusted by digitally controlled potentiometers. The housing of the unit is small enough to carry it in a pocket (dimensions 150 × 80 × 25 mm). Software tools have been developed to ease the development of new algorithms. A set of configurable Assembler code modules implements all hardware dependent software routines and gives easy access to the peripherals and interfaces. A comfortable fitting interface allows easy control of the signal processing unit from a PC, even by assistant personnel. The device has proven to be a helpful means for development and field evaluations of advanced new hearing aid algorithms within interdisciplinary research projects. Now it is offered to the scientific community.

  3. Acquisition and deconvolution of seismic signals by different methods to perform direct ground-force measurements

    Science.gov (United States)

    Poletto, Flavio; Schleifer, Andrea; Zgauc, Franco; Meneghini, Fabio; Petronio, Lorenzo

    2016-12-01

    We present the results of a novel borehole-seismic experiment in which we used different types of onshore (transient-impulsive and non-impulsive) surface sources together with direct ground-force recordings. The ground-force signals were obtained by baseplate load cells located beneath the sources, and by buried soil-stress sensors installed in the very shallow subsurface together with accelerometers. The aim was to characterize the source's emission by its complex impedance, a function of the near-field vibrations and soil stress components, and above all to obtain appropriate deconvolution operators to remove the signature of the sources in the far-field seismic signals. The data analysis shows the differences in the reference measurements utilized to deconvolve the source signature. As downgoing waves, we process the signals of vertical seismic profiles (VSP) recorded in the far-field approximation by an array of permanent geophones cemented at shallow-medium depth outside the casing of an instrumented well. We obtain a significant improvement in the waveform of the radiated seismic-vibrator signals deconvolved by ground force, similar to that of the seismograms generated by the impulsive sources, and demonstrate that the results obtained by different sources present low values in their repeatability norm. The comparison demonstrates the potential of the direct ground-force measurement approach to effectively remove the far-field source signature in VSP onshore data, and to increase the performance of permanent acquisition installations for time-lapse applications.

  4. Filtering Performance Comparison of Kernel and Wavelet Filters for Reactivity Signal Noise

    International Nuclear Information System (INIS)

    Park, Moon Ghu; Shin, Ho Cheol; Lee, Yong Kwan; You, Skin

    2006-01-01

    Nuclear reactor power deviation from the critical state is a parameter of specific interest, defined by the reactivity measuring the neutron population. Reactivity is an extremely important quantity used to define many of the reactor startup physics parameters. The time dependent reactivity is normally determined by solving the inverse neutron kinetics equation. The reactivity computer is a device that provides an on-line solution of the inverse kinetics equation. The measurement signal of the neutron density is normally noise corrupted, and control rod movement typically gives reactivity variation with sawtooth-like edge signals. Those edge regions should be precisely preserved, since the measured signal is used to estimate the reactivity worth, which is a crucial parameter to assure the safety of nuclear reactors. In this paper, three kinds of edge-preserving noise filters are proposed and their performance is demonstrated using stepwise signals. The tested filters are based on the unilateral kernel, bilateral kernel and wavelet filters, which are known to be effective in edge preservation. The bilateral filter shows a remarkable improvement compared with the unilateral kernel and wavelet filters.
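
    To illustrate the edge-preserving principle shared by these filters, below is a minimal 1-D bilateral filter sketch; the window radius and the two kernel widths are assumed values, and the paper's exact filter definitions are not reproduced here.

```python
import numpy as np

def bilateral_filter_1d(x, radius=5, sigma_s=2.0, sigma_r=0.1):
    """1-D bilateral filter: each sample is replaced by a neighbourhood average
    weighted both by temporal distance (sigma_s) and by amplitude difference
    (sigma_r), so sharp rod-step edges are preserved while noise is smoothed."""
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    offsets = np.arange(-radius, radius + 1)
    spatial = np.exp(-0.5 * (offsets / sigma_s) ** 2)
    for i in range(len(x)):
        lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
        window = x[lo:hi]
        # Combine the spatial kernel with the amplitude (range) kernel.
        w = spatial[lo - i + radius: hi - i + radius] * \
            np.exp(-0.5 * ((window - x[i]) / sigma_r) ** 2)
        y[i] = np.sum(w * window) / np.sum(w)
    return y

# Stepwise test signal, as used in the paper's demonstration setup.
steps = np.repeat([0.0, 1.0, 0.3, 0.8], 100) + np.random.normal(0, 0.05, 400)
smoothed = bilateral_filter_1d(steps)
```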

  5. 1-D Wavelet Signal Analysis of the Actuators Nonlinearities Impact on the Healthy Control Systems Performance

    Directory of Open Access Journals (Sweden)

    Nicolae Tudoroiu

    2017-09-01

    Full Text Available The objective of this paper is to investigate the use of 1-D wavelet analysis to extract several patterns from signal data sets collected from healthy and faulty input-output signals of control systems, as a preliminary step in the real-time implementation of fault detection, diagnosis and isolation strategies. The 1-D wavelet analysis has proved to be a useful tool for signal processing, design and analysis based on wavelet transforms, found in a wide range of industrial control systems applications. Given the strong similarity between phenomena in different domains, we are motivated to extend the applicability of these techniques to similar applications in the control systems field, as is done in our research work. Their efficiency will be demonstrated on a case study chosen mainly to evaluate the impact of the uncertainties and nonlinearities of the sensors and actuators on the overall performance of the control system. The proposed techniques are able to extract, in the frequency domain, pattern features (signatures) of interest directly from the signal data set collected by data acquisition equipment from the control system.

  6. FIPSER: Performance study of a readout concept with few digitization levels for fast signals

    Energy Technology Data Exchange (ETDEWEB)

    Limyansky, B., E-mail: brent.limyansky@gatech.edu [School of Physics and Center for Relativistic Astrophysics, Georgia Institute of Technology, Atlanta (United States); Reese, R., E-mail: bobbeyreese@gmail.com [School of Physics and Center for Relativistic Astrophysics, Georgia Institute of Technology, Atlanta (United States); Cressler, J.D. [School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta (United States); Otte, A.N.; Taboada, I. [School of Physics and Center for Relativistic Astrophysics, Georgia Institute of Technology, Atlanta (United States); Ulusoy, C. [Dept. of Electrical and Computer Engineering, Michigan State University, East Lansing (United States)

    2016-11-21

    We discuss the performance of a readout system, Fixed Pulse Shape Efficient Readout (FIPSER), to digitize signals from detectors with a fixed pulse shape. In this study we are mainly interested in the readout of fast photon detectors like photomultipliers or Silicon photomultipliers. But the concept can be equally applied to the digitization of other detector signals. FIPSER is based on the flash analog to digital converter (FADC) concept, but has the potential to lower costs and power consumption by using an order of magnitude fewer discrete voltage levels. Performance is bolstered by combining the discretized signal with the knowledge of the underlying pulse shape. Simulated FIPSER data was reconstructed with two independent methods: one using a maximum likelihood method and the other using a modified χ² test. Both methods show that utilizing 12 discrete voltage levels with a sampling rate of 4 samples per full width half maximum (FWHM) of the pulse achieves an amplitude resolution that is better than the Poisson limit for photon-counting experiments. The time resolution achieved in this configuration ranges between 0.02 and 0.16 FWHM and depends on the pulse amplitude. In a situation where the waveform is composed of two consecutive pulses, the pulses can be separated if they are at least 0.05–0.30 FWHM apart with an amplitude resolution that is better than 20%.

  7. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics, which may be caused by certain faults such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed, but they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculation efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
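
    For contrast with FTDA, conventional TDA is just a segment-and-average operation. The sketch below assumes an integer number of samples per period; it is exactly when that assumption fails that the period cutting error appears.

```python
import numpy as np

def time_domain_average(x, period_samples):
    """Conventional TDA: slice the signal into whole periods and average them.
    Non-integer periods must be truncated or resampled, which causes PCE."""
    n_periods = len(x) // period_samples
    segments = x[: n_periods * period_samples].reshape(n_periods, period_samples)
    return segments.mean(axis=0)

# Recover a 50 Hz periodic component (plus its 3rd harmonic) from heavy noise.
fs, f0 = 10_000, 50
t = np.arange(0, 2, 1 / fs)
periodic = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 3 * f0 * t)
avg = time_domain_average(periodic + np.random.normal(0, 1, t.size), fs // f0)
```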

  8. Use of modulated excitation signals in ultrasound. Part II: Design and performance for medical imaging applications

    DEFF Research Database (Denmark)

    Misaridis, Thanassis; Jensen, Jørgen Arendt

    2005-01-01

    ultrasound presents design methods of linear FM signals and mismatched filters, in order to meet the higher demands on resolution in ultrasound imaging. It is shown that for the small time-bandwidth (TB) products available in ultrasound, the rectangular spectrum approximation is not valid, which reduces....... The method is evaluated first for resolution performance and axial sidelobes through simulations with the program Field II. A coded excitation ultrasound imaging system based on a commercial scanner and a 4 MHz probe driven by coded sequences is presented and used for the clinical evaluation of the coded...... excitation/compression scheme. The clinical images show a significant improvement in penetration depth and contrast, while they preserve both axial and lateral resolution. At the maximum acquisition depth of 15 cm, there is an improvement of more than 10 dB in the signal-to-noise ratio of the images...

  9. High-Performance Signal Detection for Adverse Drug Events using MapReduce Paradigm.

    Science.gov (United States)

    Fan, Kai; Sun, Xingzhi; Tao, Ying; Xu, Linhao; Wang, Chen; Mao, Xianling; Peng, Bo; Pan, Yue

    2010-11-13

    Post-marketing pharmacovigilance is important for public health, as many Adverse Drug Events (ADEs) are unknown when drugs are approved for marketing. However, due to the large number of reported drugs and drug combinations, detecting ADE signals by mining these reports is becoming a challenging task in terms of computational complexity. Recently, the parallel programming model MapReduce was introduced by Google to support large-scale data intensive applications. In this study, we proposed a MapReduce-based algorithm for a common ADE detection approach, the Proportional Reporting Ratio (PRR), and tested it in mining spontaneous ADE reports from the FDA. The purpose is to investigate the possibility of using the MapReduce principle to speed up biomedical data mining tasks, using this pharmacovigilance case as one specific example. The results demonstrated that the MapReduce programming model could improve the performance of a common signal detection algorithm for pharmacovigilance in a distributed computation environment at approximately linear speedup rates.
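
    The PRR itself reduces to a 2x2 contingency table per drug-event pair, which is what makes it a natural fit for MapReduce-style counting. The sketch below uses a hypothetical report layout and is not the paper's implementation; the map phase is collapsed into a simple in-memory count.

```python
from collections import Counter

def count_pairs(reports):
    """'Map' phase: count (drug, event) co-occurrences across reports.
    Each report is assumed to be a (set_of_drugs, set_of_events) tuple."""
    counts = Counter()
    for drugs, events in reports:
        counts.update((d, e) for d in drugs for e in events)
    return counts

def prr(n_de, n_d, n_e, n_total):
    """PRR = [a/(a+b)] / [c/(c+d)], where a = reports with drug and event,
    b = with drug only, c = with event only, d = with neither."""
    a = n_de
    b = n_d - n_de            # drug present, event absent
    c = n_e - n_de            # event present, drug absent
    d = n_total - a - b - c   # neither present
    return (a / (a + b)) / (c / (c + d))

# Toy usage with hypothetical reports.
reports = [({"drugA"}, {"nausea"}), ({"drugA", "drugB"}, {"rash"}), ({"drugB"}, {"nausea"})]
pairs = count_pairs(reports)
print(prr(pairs[("drugA", "nausea")], n_d=2, n_e=2, n_total=len(reports)))
```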

  10. Extreme Temperature Performance of Automotive-Grade Small Signal Bipolar Junction Transistors

    Science.gov (United States)

    Boomer, Kristen; Damron, Benny; Gray, Josh; Hammoud, Ahmad

    2018-01-01

    Electronics designed for space exploration missions must display efficient and reliable operation under extreme temperature conditions. For example, lunar outposts, Mars rovers and landers, James Webb Space Telescope, Europa orbiter, and deep space probes represent examples of missions where extreme temperatures and thermal cycling are encountered. Switching transistors, small signal as well as power level devices, are widely used in electronic controllers, data instrumentation, and power management and distribution systems. Little is known, however, about their performance in extreme temperature environments beyond their specified operating range; in particular under cryogenic conditions. This report summarizes preliminary results obtained on the evaluation of commercial-off-the-shelf (COTS) automotive-grade NPN small signal transistors over a wide temperature range and thermal cycling. The investigations were carried out to establish a baseline on functionality of these transistors and to determine suitability for use outside their recommended temperature limits.

  11. Control mechanism to prevent correlated message arrivals from degrading signaling no. 7 network performance

    Science.gov (United States)

    Kosal, Haluk; Skoog, Ronald A.

    1994-04-01

    Signaling System No. 7 (SS7) is designed to provide a connection-less transfer of signaling messages of reasonable length. Customers having access to user signaling bearer capabilities as specified in the ANSI T1.623 and CCITT Q.931 standards can send bursts of correlated messages (e.g., by doing a file transfer that results in the segmentation of a block of data into a number of consecutive signaling messages) through SS7 networks. These message bursts with short interarrival times could have an adverse impact on the delay performance of the SS7 networks. A control mechanism, Credit Manager, is investigated in this paper to regulate incoming traffic to the SS7 network by imposing appropriate time separation between messages when the incoming stream is too bursty. The credit manager has a credit bank where credits accrue at a fixed rate up to a prespecified credit bank capacity. When a message arrives, the number of octets in that message is compared to the number of credits in the bank. If the number of credits is greater than or equal to the number of octets, then the message is accepted for transmission and the number of credits in the bank is decremented by the number of octets. If the number of credits is less than the number of octets, then the message is delayed until enough credits are accumulated. This paper presents simulation results showing delay performance of the SS7 ISUP and TCAP message traffic with a range of correlated message traffic, and control parameters of the credit manager (i.e., credit generation rate and bank capacity) are determined that ensure the traffic entering the SS7 network is acceptable. The results show that control parameters can be set so that for any incoming traffic stream there is no detrimental impact on the SS7 ISUP and TCAP message delay, and the credit manager accepts a wide range of traffic patterns without causing significant delay.
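
    The credit manager described is essentially a token bucket keyed on message octets. A minimal sketch follows, with the credit rate and bank capacity left as the tunable parameters the paper determines from delay requirements.

```python
class CreditManager:
    """Credits accrue at `rate` octets/second up to `capacity`; a message is
    released immediately if enough credits are banked, otherwise it is
    delayed until the bank has refilled."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.credits = capacity
        self.last_time = 0.0

    def release_time(self, arrival, octets):
        # Accrue credits since the last event, capped at the bank capacity.
        elapsed = arrival - self.last_time
        self.credits = min(self.capacity, self.credits + elapsed * self.rate)
        if self.credits >= octets:
            self.credits -= octets
            self.last_time = arrival
            return arrival                          # accepted immediately
        wait = (octets - self.credits) / self.rate  # delay until credits suffice
        self.credits = 0.0
        self.last_time = arrival + wait
        return arrival + wait
```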

  12. Task performance changes the amplitude and timing of the BOLD signal

    Directory of Open Access Journals (Sweden)

    Akhrif Atae

    2017-12-01

    Full Text Available Translational studies comparing imaging data of animals and humans have gained increasing scientific interest. For such a translational approach, however, harmonized statistical analyses as well as shared data acquisition protocols and/or combined statistical approaches are necessary. Following this idea, we applied Bayesian Adaptive Regression Splines (BARS), which have until now mainly been used to model neural responses in electrophysiological recordings from rodents, to human hemodynamic responses as measured via fMRI. Forty-seven healthy subjects were investigated while performing the Attention Network Task in the MRI scanner. Fluctuations in the amplitude and timing of the BOLD response were determined and validated externally against brain activation using a GLM, and ecologically against task performance (i.e. good vs. bad performers). In terms of brain activation, bad performers presented reduced activation bilaterally in the parietal lobules, right prefrontal cortex (PFC) and striatum. This was accompanied by enhanced left PFC recruitment. With regard to the amplitude of the BOLD signal, bad performers showed enhanced values in the left PFC. In addition, in the regions of reduced activation such as the parietal and striatal regions, the temporal dynamics were higher in bad performers. Based on the relation between the BOLD response and neural firing, with the amplitude of the BOLD signal reflecting gamma power and its timing dynamics beta power, we argue that in bad performers, enhanced left PFC recruitment hints towards enhanced functioning of gamma-band activity in a compensatory manner. This was accompanied by reduced parieto-striatal activity, associated with increased and potentially conflicting beta-band activity.

  13. Performance evaluation of radiation sensors with internal signal amplification based on the BJT effect

    International Nuclear Information System (INIS)

    Bosisio, Luciano; Batignani, Giovanni; Bettarini, Stefano; Boscardin, Maurizio; Dalla Betta, Gian-Franco; Giacomini, Gabriele; Piemonte, Claudio; Verzellesi, Giovanni; Zorzi, Nicola

    2006-01-01

    Prototypes of ionizing radiation detectors with internal signal amplification based on the bipolar transistor effect have been fabricated at ITC-irst (Trento, Italy). Results from the electrical characterization and preliminary functional tests of the devices have been previously reported. Here, we present a more detailed investigation of the performance of this type of detector, with particular attention to their noise and rate limits. Measurements of the signal waveform and of the gain versus frequency dependence are performed by illuminating the devices with, respectively, pulsed or sinusoidally modulated IR light. Pulse height spectra of X-rays from a 241Am source have been taken with very simple front-end electronics (an LF351 operational amplifier) or by directly reading with an oscilloscope the voltage drop across a load resistor connected to the emitter. An equivalent noise charge (referred to input) of 380 electrons r.m.s. has been obtained with the first setup for a small device, with an active area of 0.5 x 0.5 mm2 and a depleted thickness of 0.6 mm. The corresponding power dissipation in the BJT was 17 μW. The performance limitations of the devices are discussed.

  14. Supervised learning from human performance at the computationally hard problem of optimal traffic signal control on a network of junctions.

    Science.gov (United States)

    Box, Simon

    2014-12-01

    Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signalized junctions are simulated. A computer game interface is used to enable a human 'player' to control the traffic light settings on the junctions within the simulation. A supervised learning approach, based on simple neural network classifiers, can be used to capture human players' strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world and also to a temporal difference learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and delay variance. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable.
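
    A toy version of the supervised-learning step might look as follows; the junction features, labels, and network size are all hypothetical stand-ins, since the abstract does not specify the game's state encoding.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Hypothetical features: queue length on each of four approaches to a junction.
X = rng.integers(0, 20, size=(500, 4)).astype(float)
# Toy stand-in for the logged human choices: serve the longest queue.
y = X.argmax(axis=1)

# One simple classifier per junction, mapping traffic state -> chosen stage.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict([[12.0, 3.0, 0.0, 7.0]]))  # -> stage serving approach 0
```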

  15. Performance bounds on micro-Doppler estimation and adaptive waveform design using OFDM signals

    Science.gov (United States)

    Sen, Satyabrata; Barhen, Jacob; Glover, Charles W.

    2014-05-01

    We analyze the performance of a wideband orthogonal frequency division multiplexing (OFDM) signal in estimating the micro-Doppler frequency of a target having multiple rotating scatterers (e.g., rotor blades of a helicopter, propellers of a submarine). The presence of rotating scatterers introduces Doppler frequency modulation in the received signal by generating sidebands about the transmitted frequencies. This is called the micro-Doppler effect. The use of a frequency-diverse OFDM signal in this context enables us to independently analyze the micro-Doppler characteristics with respect to a set of orthogonal subcarrier frequencies. Therefore, to characterize the accuracy of micro-Doppler frequency estimation, we compute the Cramér-Rao Bound (CRB) on the angular-velocity estimate of the target while considering the scatterer responses as deterministic but unknown nuisance parameters. Additionally, to improve the accuracy of the estimation procedure, we formulate and solve an optimization problem by minimizing the CRB on the angular-velocity estimate with respect to the transmitting OFDM spectral coefficients. We present several numerical examples to demonstrate the CRB variations at different values of the signal-to-noise ratio (SNR) and the number of OFDM subcarriers. The CRB values not only decrease with the increase in the SNR values, but also reduce as we increase the number of subcarriers, implying the significance of frequency-diverse OFDM waveforms. The improvement in estimation accuracy due to the adaptive waveform design is also numerically analyzed. Interestingly, we find that the relative decrease in the CRBs on the angular-velocity estimate is more pronounced for larger numbers of OFDM subcarriers.
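
    For reference, the scalar-parameter form of the bound used here states that any unbiased estimate of the angular velocity Ω obeys

```latex
\operatorname{var}\!\left(\hat{\Omega}\right) \;\geq\; \mathcal{I}(\Omega)^{-1},
\qquad
\mathcal{I}(\Omega) \;=\; -\,\mathbb{E}\!\left[\frac{\partial^{2}\ln p(\mathbf{y};\Omega)}{\partial \Omega^{2}}\right].
```

    In the article's setting the unknown scatterer responses enter as nuisance parameters, so the bound becomes the corresponding element of the inverse Fisher information matrix, and the OFDM spectral coefficients appear inside p(y; Ω) as the design variables of the waveform optimization.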

  16. Performance Bounds on Micro-Doppler Estimation and Adaptive Waveform Design Using OFDM Signals

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [ORNL; Barhen, Jacob [ORNL; Glover, Charles Wayne [ORNL

    2014-01-01

    We analyze the performance of a wideband orthogonal frequency division multiplexing (OFDM) signal in estimating the micro-Doppler frequency of a target having multiple rotating scatterers (e.g., rotor blades of a helicopter, propellers of a submarine). The presence of rotating scatterers introduces Doppler frequency modulation in the received signal by generating sidebands about the transmitted frequencies. This is called the micro-Doppler effect. The use of a frequency-diverse OFDM signal in this context enables us to independently analyze the micro-Doppler characteristics with respect to a set of orthogonal subcarrier frequencies. Therefore, to characterize the accuracy of micro-Doppler frequency estimation, we compute the Cramér-Rao Bound (CRB) on the angular-velocity estimate of the target while considering the scatterer responses as deterministic but unknown nuisance parameters. Additionally, to improve the accuracy of the estimation procedure, we formulate and solve an optimization problem by minimizing the CRB on the angular-velocity estimate with respect to the transmitting OFDM spectral coefficients. We present several numerical examples to demonstrate the CRB variations at different values of the signal-to-noise ratio (SNR) and the number of OFDM subcarriers. The CRB values not only decrease with the increase in the SNR values, but also reduce as we increase the number of subcarriers, implying the significance of frequency-diverse OFDM waveforms. The improvement in estimation accuracy due to the adaptive waveform design is also numerically analyzed. Interestingly, we find that the relative decrease in the CRBs on the angular-velocity estimate is more pronounced for larger numbers of OFDM subcarriers.

  17. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belo...... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion......In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong
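
    A minimal numpy sketch of the barycenter approach being critiqued, assuming rotations are given as unit quaternions:

```python
import numpy as np

def quaternion_barycenter(quats):
    """Naive rotation average: normalized barycenter of unit quaternions.

    This is the simple approach discussed above: it ignores that rotations
    live on a curved manifold, but is a reasonable first-order approximation
    when the rotations are tightly clustered. Since q and -q encode the same
    rotation, signs are first aligned against the first sample."""
    quats = np.asarray(quats, dtype=float)
    signs = np.sign(quats @ quats[0])
    signs[signs == 0] = 1.0
    mean = (quats * signs[:, None]).mean(axis=0)
    return mean / np.linalg.norm(mean)
```

    For tightly clustered rotations the normalized barycenter approximates the Riemannian mean to first order; the corrections the paper discusses account for the curvature that the barycenter ignores.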

  18. Above average increases in body fat from 9-15 years of age had a negative impact on academic performance, independent of physical activity.

    Science.gov (United States)

    Saevarsson, Elvar Smari; Gudmundsdottir, Sigridur Lara; Kantomaa, Marko; Arngrimsson, Sigurbjorn A; Sveinsson, Thorarinn; Skulason, Sigurgrimur; Johannsson, Erlingur

    2018-06-13

    The associations of body fat levels and physical activity with academic performance are inconclusive and were explored using longitudinal data. We enrolled 134 of 242 adolescents aged 15, who had been studied at the age of nine and agreed to be followed up from April to May 2015 for the Health behaviours of Icelandic youth study. Accelerometers measured physical activity, body mass indexes were calculated, and dual-energy X-ray absorptiometry scans assessed the participants' body composition at nine and 15. Their language and maths skills were compared to a growth model that estimated the academic performance of children born in 1999. Higher than normal body fat levels between the ages of nine and 15 were negatively associated with maths performance, but the same association was not found for Icelandic language studies: Pearson's r = -0.24 (p = 0.01) for body mass index and r = -0.34 (p = 0.01) for the percentage of body fat. No associations were found with changes in physical activity. Children who put on more body fat than normal between the ages of nine and 15 had an increased risk of adverse academic performance, independent of changes in physical activity.

  19. Evaluation of high performance data acquisition boards for simultaneous sampling of fast signals from PET detectors

    International Nuclear Information System (INIS)

    Judenhofer, Martin S; Pichler, Bernd J; Cherry, Simon R

    2005-01-01

    Detectors used for positron emission tomography (PET) provide fast, randomly distributed signals that need to be digitized for further processing. One possibility is to sample the signals at the peak, initiated by a trigger from a constant fraction discriminator (CFD). For PET detectors, simultaneous acquisition of many channels is often important. To develop and evaluate novel PET detectors, a flexible, relatively low cost and high performance laboratory data acquisition (DAQ) system is therefore required. The use of dedicated DAQ systems, such as multi-channel analysers (MCAs) or continuous sampling boards at high rates, is expensive. This work evaluates the suitability of well-priced peripheral component interconnect (PCI)-based 8-channel DAQ boards (PD2-MFS-8 2M/14 and PD2-MFS-8-500k/14, United Electronic Industries Inc., Canton, MA, USA) for signal acquisition from novel PET detectors. A software package was developed to access the boards, measure basic board parameters, and to acquire, visualize, and analyse energy spectra and position profiles from block detectors. The performance tests showed that the boards' input linearity is >99.2%. The energy resolution measured with a 22Na source was 14.9% (FWHM) at 511 keV, slightly better than the result obtained with a high-end single channel MCA (8000A, Amptek, USA) using the same detector (16.8%). The crystals (1.2 x 1.2 x 12 mm3) within a 9 x 9 LSO block detector could be clearly separated in an acquired position profile. Thus, these boards are well suited for data acquisition with novel detectors developed for nuclear imaging.

  20. Genetically determined measures of striatal D2 signaling predict prefrontal activity during working memory performance.

    Science.gov (United States)

    Bertolino, Alessandro; Taurisano, Paolo; Pisciotta, Nicola Marco; Blasi, Giuseppe; Fazio, Leonardo; Romano, Raffaella; Gelao, Barbara; Lo Bianco, Luciana; Lozupone, Madia; Di Giorgio, Annabella; Caforio, Grazia; Sambataro, Fabio; Niccoli-Asabella, Artor; Papp, Audrey; Ursini, Gianluca; Sinibaldi, Lorenzo; Popolizio, Teresa; Sadee, Wolfgang; Rubini, Giuseppe

    2010-02-22

    Variation of the gene coding for D2 receptors (DRD2) has been associated with risk for schizophrenia and with working memory deficits. A functional intronic SNP (rs1076560) predicts relative expression of the two D2 receptors isoforms, D2S (mainly pre-synaptic) and D2L (mainly post-synaptic). However, the effect of functional genetic variation of DRD2 on striatal dopamine D2 signaling and on its correlation with prefrontal activity during working memory in humans is not known. Thirty-seven healthy subjects were genotyped for rs1076560 (G>T) and underwent SPECT with [123I]IBZM (which binds primarily to post-synaptic D2 receptors) and with [123I]FP-CIT (which binds to pre-synaptic dopamine transporters, whose activity and density is also regulated by pre-synaptic D2 receptors), as well as BOLD fMRI during N-Back working memory. Subjects carrying the T allele (previously associated with reduced D2S expression) had striatal reductions of [123I]IBZM and of [123I]FP-CIT binding. DRD2 genotype also differentially predicted the correlation between striatal dopamine D2 signaling (as identified with factor analysis of the two radiotracers) and activity of the prefrontal cortex during working memory as measured with BOLD fMRI, which was positive in GG subjects and negative in GT. Our results demonstrate that this functional SNP within DRD2 predicts striatal binding of the two radiotracers to dopamine transporters and D2 receptors as well as the correlation between striatal D2 signaling with prefrontal cortex activity during performance of a working memory task. These data are consistent with the possibility that the balance of excitatory/inhibitory modulation of striatal neurons may also affect striatal outputs in relationship with prefrontal activity during working memory performance within the cortico-striatal-thalamic-cortical pathway.

  1. Genetically determined measures of striatal D2 signaling predict prefrontal activity during working memory performance.

    Directory of Open Access Journals (Sweden)

    Alessandro Bertolino

    2010-02-01

    Full Text Available Variation of the gene coding for D2 receptors (DRD2) has been associated with risk for schizophrenia and with working memory deficits. A functional intronic SNP (rs1076560) predicts relative expression of the two D2 receptor isoforms, D2S (mainly pre-synaptic) and D2L (mainly post-synaptic). However, the effect of functional genetic variation of DRD2 on striatal dopamine D2 signaling and on its correlation with prefrontal activity during working memory in humans is not known. Thirty-seven healthy subjects were genotyped for rs1076560 (G>T) and underwent SPECT with [123I]IBZM (which binds primarily to post-synaptic D2 receptors) and with [123I]FP-CIT (which binds to pre-synaptic dopamine transporters, whose activity and density is also regulated by pre-synaptic D2 receptors), as well as BOLD fMRI during N-Back working memory. Subjects carrying the T allele (previously associated with reduced D2S expression) had striatal reductions of [123I]IBZM and of [123I]FP-CIT binding. DRD2 genotype also differentially predicted the correlation between striatal dopamine D2 signaling (as identified with factor analysis of the two radiotracers) and activity of the prefrontal cortex during working memory as measured with BOLD fMRI, which was positive in GG subjects and negative in GT. Our results demonstrate that this functional SNP within DRD2 predicts striatal binding of the two radiotracers to dopamine transporters and D2 receptors as well as the correlation between striatal D2 signaling and prefrontal cortex activity during performance of a working memory task. These data are consistent with the possibility that the balance of excitatory/inhibitory modulation of striatal neurons may also affect striatal outputs in relationship with prefrontal activity during working memory performance within the cortico-striatal-thalamic-cortical pathway.

  2. Cancelable ECG biometrics using GLRT and performance improvement using guided filter with irreversible guide signal.

    Science.gov (United States)

    Kim, Hanvit; Minh Phuong Nguyen; Se Young Chun

    2017-07-01

    Biometrics such as ECG provide a convenient and powerful security tool to verify or identify an individual. However, one important drawback of biometrics is that it is irrevocable: a biometric cannot practically be re-used once it is compromised. Cancelable biometrics has been investigated to overcome this drawback. In this paper, we propose a cancelable ECG biometric by deriving a generalized likelihood ratio test (GLRT) detector from a composite hypothesis test in a randomly projected domain. Since it is common to observe performance degradation for cancelable biometrics, we also propose guided filtering (GF) with an irreversible guide signal, which is a non-invertibly transformed version of the ECG authentication template. We evaluated our proposed method using the ECG-ID database with 89 subjects. A conventional Euclidean detector with the original ECG template yielded 93.9% PD1 (detection probability at 1% FAR), while the Euclidean detector with 10% compressed ECG (1/10 of the original data size) yielded 90.8% PD1. Our proposed GLRT detector with 10% compressed ECG yielded 91.4%, which is better than the Euclidean detector with the same compressed ECG. GF with our proposed irreversible ECG template further improved the performance of our GLRT with 10% compressed ECG up to 94.3%, which is higher than the Euclidean detector with the original ECG. Lastly, we showed that our proposed cancelable ECG biometric practically meets cancelable biometrics criteria such as efficiency, re-usability, diversity and non-invertibility.
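
    The non-invertible transform at the heart of cancelable templates can be illustrated with a plain random projection. This is a generic sketch of the idea, not the paper's GLRT pipeline; the 10% compression factor simply mirrors the experiments above, and the function and parameter names are hypothetical.

```python
import numpy as np

def cancelable_template(ecg_features, seed, compression=0.1):
    """Project ECG features onto a random subspace keyed by `seed`.

    The seed is the revocable key: a new seed yields a new template from the
    same ECG (re-usability/diversity), and the wide-to-narrow projection is
    many-to-one, so the original features cannot be uniquely recovered."""
    rng = np.random.default_rng(seed)
    features = np.asarray(ecg_features, dtype=float)
    d = features.size
    k = max(1, int(d * compression))
    projection = rng.standard_normal((k, d)) / np.sqrt(d)
    return projection @ features
```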

  3. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  4. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  5. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Role of positive definite matrices: • Diffusion tensor imaging: 3 × 3 pd matrices model water flow at each voxel of a brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine learning: n × n pd matrices occur as kernel matrices.

  6. ECG signal performance de-noising assessment based on threshold tuning of dual-tree wavelet transform.

    Science.gov (United States)

    El B'charri, Oussama; Latif, Rachid; Elmansouri, Khalifa; Abenaou, Abdenbi; Jenkal, Wissam

    2017-02-07

    Since the electrocardiogram (ECG) signal has a low frequency and a weak amplitude, it is sensitive to miscellaneous mixed noises, which may reduce diagnostic accuracy and hinder the physician's correct decisions on patients. The dual tree wavelet transform (DT-WT) is one of the most recent enhanced versions of the discrete wavelet transform. However, threshold tuning on this method for noise removal from the ECG signal has not been investigated yet. In this work, we provide a comprehensive study on the impact of the choice of threshold algorithm, threshold value, and the appropriate wavelet decomposition level on ECG signal de-noising performance. A set of simulations is performed on both synthetic and real ECG signals. First, the synthetic ECG signal is used to observe the algorithm's response. The evaluation on synthetic ECG signals corrupted by various types of noise showed that the modified unified threshold and wavelet hyperbolic threshold de-noising method performs better under realistic and colored noises. The tuned threshold is then used on real ECG signals from the MIT-BIH database. The results have shown that the proposed method achieves higher performance than the ordinary dual tree wavelet transform in all kinds of noise removal from the ECG signal. The simulation results indicate that the algorithm is robust to all kinds of noise at varying degrees of input noise, providing a high quality clean signal. Moreover, the algorithm is quite simple and can be used in real time ECG monitoring.
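
    A compact sketch of the threshold-denoising loop is shown below using PyWavelets. Note that pywt implements the ordinary DWT, used here as a stand-in for the dual-tree transform, and the universal threshold with a median-based noise estimate is just one common default among the threshold rules the paper compares.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """DWT threshold denoising: decompose, soft-threshold the detail
    coefficients, and reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate from finest details
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))    # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]
```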

  7. An integrated framework for high level design of high performance signal processing circuits on FPGAs

    Science.gov (United States)

    Benkrid, K.; Belkacemi, S.; Sukhsawas, S.

    2005-06-01

    This paper proposes an integrated framework for the high level design of high performance signal processing algorithm implementations on FPGAs. The framework emerged from a constant need to rapidly implement increasingly complicated algorithms on FPGAs while maintaining the high performance needed in many real time digital signal processing applications. This is particularly important for application developers who often rely on iterative and interactive development methodologies. The central idea behind the proposed framework is to dynamically integrate high performance structural hardware description languages with higher level hardware languages in order to help satisfy the dual requirement of high level design and high performance implementation. The paper illustrates this by integrating two environments: Celoxica's Handel-C language, and HIDE, a structural hardware environment developed at the Queen's University of Belfast. On the one hand, Handel-C has proven to be very useful in the rapid design and prototyping of FPGA circuits, especially control intensive ones. On the other hand, HIDE has been used extensively, and successfully, in the generation of highly optimised parameterisable FPGA cores. In this paper, this is illustrated in the construction of a scalable and fully parameterisable core for image algebra's five core neighbourhood operations, where fully floorplanned efficient FPGA configurations, in the form of EDIF netlists, are generated automatically for instances of the core. In the proposed combined framework, highly optimised data paths are invoked dynamically from within Handel-C, and are synthesized using HIDE. Although the idea might seem simple prima facie, it could have serious implications on the design of future generations of hardware description languages.

  8. Heat stress, gastrointestinal permeability and interleukin-6 signaling - Implications for exercise performance and fatigue.

    Science.gov (United States)

    Vargas, Nicole; Marino, Frank

    2016-01-01

    Exercise in heat stress exacerbates performance decrements compared to normothermic environments. It has been documented that the performance decrements are associated with reduced efferent drive from the central nervous system (CNS); however, specific factors that contribute to the decrements are not completely understood. During exertional heat stress, blood flow is preferentially distributed away from the intestinal area to supply the muscles and brain with oxygen. Consequently, the gastrointestinal barrier becomes increasingly permeable, resulting in the release of lipopolysaccharides (LPS, endotoxin) into the circulation. LPS leakage stimulates an acute-phase inflammatory response, including the release of interleukin (IL)-6 in response to an increasingly endotoxic environment. If LPS translocation is too great, heat shock, neurological dysfunction, or death may ensue. IL-6 acts initially in a pro-inflammatory manner during endotoxemia, but can attenuate the response through signaling the hypothalamic pituitary adrenal (HPA) axis. Likewise, IL-6 is believed to be a thermoregulatory sensor in the gut during the febrile response, hence highlighting its role in periphery-to-brain communication. Recently, IL-6 has been implicated in signaling the CNS and influencing perceptions of fatigue and performance during exercise. Therefore, due to the cascade of events that occur during exertional heat stress, it is possible that the release of LPS and the exacerbated response of IL-6 contribute to CNS modulation during exertional heat stress. The purpose of this review is to evaluate previous literature and discuss the potential role for IL-6 during exertional heat stress to modulate performance in favor of whole body preservation.

  9. Performance and Usability of Various Robotic Arm Control Modes from Human Force Signals

    Directory of Open Access Journals (Sweden)

    Sébastien Mick

    2017-10-01

    Full Text Available Elaborating an efficient and usable mapping between input commands and output movements is still a key challenge for the design of robotic arm prostheses. In order to address this issue, we present and compare three different control modes, assessing them in terms of performance as well as general usability. Using an isometric force transducer as the command device, these modes convert the force input signal into either a position or a velocity vector, whose magnitude is linearly or quadratically related to the force input magnitude. With the robotic arm from the open source 3D-printed Poppy Humanoid platform simulating a mobile prosthesis, an experiment was carried out with eighteen able-bodied subjects performing a 3-D target-reaching task using each of the three modes. The subjects were given questionnaires to evaluate the quality of their experience with each mode, providing an assessment of their global usability in the context of the task. According to performance metrics and questionnaire results, the velocity control modes were found to perform better than the position control mode in terms of accuracy and quality of control as well as user satisfaction and comfort. Subjects also seemed to favor quadratic velocity control over linear (proportional) velocity control, even though these two modes were not clearly distinguished from one another in the performance and usability assessments. These results highlight the need to take into account user experience as one of the key criteria for the design of control modes intended to operate limb prostheses.
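
    The two velocity mappings being compared reduce to a few lines; the gain constant below is an assumed value, and the actual Poppy control stack is not shown.

```python
import numpy as np

def force_to_velocity(force, gain=1.0, quadratic=True):
    """Map a 3-D force input to a velocity command along the force direction.

    Linear (proportional) mode: v = gain * f.
    Quadratic mode: |v| grows with |f|**2, giving fine control near zero
    force and faster motion for strong inputs."""
    f = np.asarray(force, dtype=float)
    magnitude = np.linalg.norm(f)
    if magnitude == 0.0:
        return np.zeros_like(f)
    scale = gain * magnitude if quadratic else gain
    return scale * f

# Example: the same push produces a faster command in quadratic mode.
print(force_to_velocity([2.0, 0.0, 1.0], quadratic=False))
print(force_to_velocity([2.0, 0.0, 1.0], quadratic=True))
```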

  10. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong ...... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.......In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong...

  11. Performance Analysis of the Effect of Pulsed-Noise Interference on WLAN Signals Transmitted Over a Nakagami Fading Channel

    National Research Council Canada - National Science Library

    Tsoumanis, Andreas

    2004-01-01

    ...) coding with soft decision decoding (SDD) and maximum-likelihood detection improves performance as compared to uncoded signals. In addition, the combination of maximum-likelihood detection and error correction coding renders pulsed-noise...

  12. CHARACTERIZATION OF THE EFFECTS OF INHALED PERCHLOROETHYLENE ON SUSTAINED ATTENTION IN RATS PERFORMING A VISUAL SIGNAL DETECTION TASK

    Science.gov (United States)

    The aliphatic hydrocarbon perchloroethylene (PCE) has been associated with neurobehavioral dysfunction, including reduced attention, in humans. The current study sought to assess the effects of inhaled PCE on sustained attention in rats performing a visual signal detection task (S...

  13. A Novel Blind Source Separation Algorithm and Performance Analysis of Weak Signal against Strong Interference in Passive Radar Systems

    Directory of Open Access Journals (Sweden)

    Chengjie Li

    2016-01-01

    Full Text Available In passive radar systems, obtaining the mixed weak object signal against a super-power signal (jamming) is still a challenging task. In this paper, a novel framework based on a passive radar system is designed for weak object signal separation. First, we propose an Interference Cancellation algorithm (IC-algorithm) to extract the mixed weak object signals from the strong jamming. Then, an improved FastICA algorithm with K-means clustering is designed to separate each weak signal from the mixed weak object signals. Finally, we discuss the performance of the proposed method and verify the novel method with several simulations. The experimental results demonstrate the effectiveness of the proposed method.
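
    A toy separation stage using scikit-learn shows the FastICA step. How K-means is coupled to FastICA is not detailed in the abstract, so the clustering line below is only a placeholder for that part, and the mixing setup is invented for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

# Toy mixture: two weak source signals observed on three channels, after
# (assumed) interference cancellation has removed the strong jamming.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 2000)
sources = np.c_[np.sin(2 * np.pi * 35 * t), np.sign(np.sin(2 * np.pi * 11 * t))]
X = sources @ rng.standard_normal((2, 3)) + 0.05 * rng.standard_normal((2000, 3))

recovered = FastICA(n_components=2, random_state=0).fit_transform(X)

# Placeholder for the K-means refinement: group samples by the relative
# energy of the recovered components.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(np.abs(recovered))
```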

  14. Difference of performance in response to disease admissions between daily time air quality indices and those derived from average and entropy functions.

    Science.gov (United States)

    Lai, Li-Wei; Cheng, Wan-Li

    2017-06-01

    Daily time air quality indices, which reflect air quality over a single day, are suitable for identifying daily exposure during conditions of poor air quality. The aim of this study is to compare the effectiveness of four daily time indices in representing variation in the number of disease admissions. These indices are the pollution standard index (PSI), the air quality index (AQI), and their respective indices derived from mean and entropy functions, MEPSI and MEAQI. The hourly concentrations of particulate matter less than 10 μm in diameter (PM10), PM2.5, O3, CO, NO2 and SO2 from 1 January 2006 to 31 December 2010 were obtained from 14 air quality monitoring stations owned by the Environmental Protection Administration (EPA) in the Kaoping region, Taiwan. The indices correlated with the number of respiratory disease admissions, but not circulatory system disease admissions, with correlation coefficients of 0.49 to 0.56. The mean and entropy functions improved the indices' reactive range and air pollution identification: the reactive range of MEPSI and MEAQI was 1.4-3 times that of the original indices, and MEPSI and MEAQI increased identification from 40 to 180 on the index scale, revealing one to two additional categories of public health effect information. In comparison with the other indices, MEAQI is more effective for application to pollution events with multiple air pollutants.

  15. Noisy EEG signals classification based on entropy metrics. Performance assessment using first and second generation statistics.

    Science.gov (United States)

    Cuesta-Frau, David; Miró-Martínez, Pau; Jordán Núñez, Jorge; Oltra-Crespo, Sandra; Molina Picó, Antonio

    2017-08-01

    This paper evaluates the performance of first-generation entropy metrics, represented by the well-known and widely used Approximate Entropy (ApEn) and Sample Entropy (SampEn) metrics, and what can be considered an evolution of these, Fuzzy Entropy (FuzzyEn), in the context of electroencephalogram (EEG) signal classification. The study uses the commonest artifacts found in real EEGs, such as white noise, and muscular, cardiac, and ocular artifacts. Using two different sets of publicly available EEG records, and a realistic range of amplitudes for interfering artifacts, this work optimises and assesses the robustness of these metrics against artifacts in terms of class segmentation probability. The results show that the qualitative behaviour of the two datasets is similar, with SampEn and FuzzyEn performing best, and that noise and muscular artifacts are the most confounding factors. By contrast, there is wide variability with regard to initialization parameters. The poor performance achieved by ApEn suggests that this metric should not be used in these contexts.
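
    For readers unfamiliar with the metrics compared above: Sample Entropy counts template matches of length m and m+1 under a Chebyshev tolerance r and returns the negative log of their ratio. A self-contained Python sketch of a common simplified formulation (not the authors' optimized code):

        import numpy as np

        def sample_entropy(x, m=2, r=None):
            """SampEn = -ln(A/B), Chebyshev distance, self-matches excluded."""
            x = np.asarray(x, dtype=float)
            if r is None:
                r = 0.2 * x.std()                     # common default tolerance
            def matches(length):
                w = np.lib.stride_tricks.sliding_window_view(x, length)
                count = 0
                for i in range(len(w) - 1):
                    d = np.max(np.abs(w[i + 1:] - w[i]), axis=1)
                    count += np.count_nonzero(d <= r)
                return count
            B, A = matches(m), matches(m + 1)
            return -np.log(A / B) if A > 0 and B > 0 else np.inf

        segment = np.random.randn(1000)               # stand-in for an EEG segment
        print(sample_entropy(segment))                # roughly 2.2 for white noise at these settings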

  16. A Decline in Response Variability Improves Neural Signal Detection during Auditory Task Performance.

    Science.gov (United States)

    von Trapp, Gardiner; Buran, Bradley N; Sen, Kamal; Semple, Malcolm N; Sanes, Dan H

    2016-10-26

    The detection of a sensory stimulus arises from a significant change in neural activity, but a sensory neuron's response is rarely identical to successive presentations of the same stimulus. Large trial-to-trial variability would limit the central nervous system's ability to reliably detect a stimulus, presumably affecting perceptual performance. However, if response variability were to decrease while firing rate remained constant, then neural sensitivity could improve. Here, we asked whether engagement in an auditory detection task can modulate response variability, thereby increasing neural sensitivity. We recorded telemetrically from the core auditory cortex of gerbils, both while they engaged in an amplitude-modulation detection task and while they sat quietly listening to the identical stimuli. Using a signal detection theory framework, we found that neural sensitivity was improved during task performance, and this improvement was closely associated with a decrease in response variability. Moreover, units with the greatest change in response variability had absolute neural thresholds most closely aligned with simultaneously measured perceptual thresholds. Our findings suggest that the limitations imposed by response variability diminish during task performance, thereby improving the sensitivity of neural encoding and potentially leading to better perceptual sensitivity. The detection of a sensory stimulus arises from a significant change in neural activity. However, trial-to-trial variability of the neural response may limit perceptual performance. If the neural response to a stimulus is quite variable, then the response on a given trial could be confused with the pattern of neural activity generated when the stimulus is absent. Therefore, a neural mechanism that served to reduce response variability would allow for better stimulus detection. By recording from the cortex of freely moving animals engaged in an auditory detection task, we found that variability

  17. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shaped statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.
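
    One concrete instance of the phase-shift described above (an illustration, not one of the paper's models) is the Pareto density, whose mean diverges exactly when the tail exponent crosses its tipping point:

        p(x) = \alpha x^{-\alpha - 1}, \quad x \ge 1, \qquad
        \mathrm{E}[X] = \int_1^\infty \alpha x^{-\alpha}\, dx =
        \begin{cases} \alpha/(\alpha - 1), & \alpha > 1, \\ \infty, & \alpha \le 1. \end{cases}

    For \alpha \le 1 the shape is monotone with no interior mode and the mean is infinite, mirroring points (i)-(iv).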

  18. Performance Analysis of Recurrence Matrix Statistics for the Detection of Deterministic Signals in Noise

    National Research Council Canada - National Science Library

    Michalowicz, Joseph V; Nichols, Jonathan M; Bucholtz, Frank

    2008-01-01

    Understanding the limitations to detecting deterministic signals in the presence of noise, especially additive, white Gaussian noise, is of importance for the design of LPI systems and anti-LPI signal defense...

  19. Machine Learning Techniques for Optical Performance Monitoring from Directly Detected PDM-QAM Signals

    DEFF Research Database (Denmark)

    Thrane, Jakob; Wass, Jesper; Piels, Molly

    2017-01-01

    Linear signal processing algorithms are effective in dealing with linear transmission channels and linear signal detection, while nonlinear signal processing algorithms, from the machine learning community, are effective in dealing with nonlinear transmission channels and nonlinear signal... detection. In this paper, a brief overview of the various machine learning methods and their application in optical communication is presented and discussed. Moreover, supervised machine learning methods, such as neural networks and support vector machines, are experimentally demonstrated for in-band optical...

  20. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  1. Signal and image processing algorithm performance in a virtual and elastic computing environment

    Science.gov (United States)

    Bennett, Kelly W.; Robertson, James

    2013-05-01

    The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development. The resulting large amount of data, and the associated high-performance computing needs, increases and challenges existing computing infrastructures. Purchasing computing power as a commodity using a cloud service offers low-cost, pay-as-you-go pricing models, scalability, and elasticity that may provide solutions to develop and optimize algorithms without having to procure additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results provide performance comparisons with the existing infrastructure. A discussion of using cloud computing with government data covers best security practices that exist within cloud services, such as AWS.

  2. Feeding of Whitefly on Tobacco Decreases Aphid Performance via Increased Salicylate Signaling.

    Directory of Open Access Journals (Sweden)

    Haipeng Zhao

    Full Text Available The feeding of Bemisia tabaci nymphs triggers the SA pathway in some plant species. A previous study showed that B. tabaci nymphs induced defense against aphids (Myzus persicae) in tobacco. However, the mechanism underlying this defense response is not well understood. Here, the effect of activating the SA signaling pathway in tobacco plants through B. tabaci nymph infestation on subsequent M. persicae colonization is investigated. Performance assays showed that B. tabaci nymph pre-infestation significantly reduced M. persicae survival and fecundity systemically in wild-type (WT) but not salicylate-deficient (NahG) plants compared with the respective controls. However, pre-infestation had no obvious local effects on subsequent M. persicae in either WT or NahG tobacco. SA quantification indicated that the highest accumulation of SA was induced by B. tabaci nymphs in WT plants after 15 days of infestation. These levels were 8.45- and 6.14-fold higher in the local and systemic leaves, respectively, than in controls. Meanwhile, no significant changes in SA levels were detected in NahG plants. Further, biochemical analysis of the defense enzymes polyphenol oxidase (PPO), peroxidase (POD), β-1,3-glucanase, and chitinase demonstrated that B. tabaci nymph infestation increased these enzymes' activity locally and systemically in WT plants, and there was more chitinase and β-1,3-glucanase activity systemically than locally, which was opposite to the trend for PPO. However, B. tabaci nymph infestation caused no obvious increase in enzyme activity in NahG plants except for POD. In conclusion, these results underscore the important role that induction of the SA signaling pathway by B. tabaci nymphs plays in defeating aphids. They also indicate that the activity of β-1,3-glucanase and chitinase may be positively correlated with resistance to aphids.

  3. Induced Jasmonate Signaling Leads to Contrasting Effects on Root Damage and Herbivore Performance

    Science.gov (United States)

    Lu, Jing; Robert, Christelle Aurélie Maud; Riemann, Michael; Cosme, Marco; Mène-Saffrané, Laurent; Massana, Josep; Stout, Michael Joseph; Lou, Yonggen; Gershenzon, Jonathan; Erb, Matthias

    2015-01-01

    Induced defenses play a key role in plant resistance against leaf feeders. However, very little is known about the signals that are involved in defending plants against root feeders and how they are influenced by abiotic factors. We investigated these aspects for the interaction between rice (Oryza sativa) and two root-feeding insects: the generalist cucumber beetle (Diabrotica balteata) and the more specialized rice water weevil (Lissorhoptrus oryzophilus). Rice plants responded to root attack by increasing the production of jasmonic acid (JA) and abscisic acid, whereas in contrast to in herbivore-attacked leaves, salicylic acid and ethylene levels remained unchanged. The JA response was decoupled from flooding and remained constant over different soil moisture levels. Exogenous application of methyl JA to the roots markedly decreased the performance of both root herbivores, whereas abscisic acid and the ethylene precursor 1-aminocyclopropane-1-carboxylic acid did not have any effect. JA-deficient antisense 13-lipoxygenase (asLOX) and mutant allene oxide cyclase hebiba plants lost more root biomass under attack from both root herbivores. Surprisingly, herbivore weight gain was decreased markedly in asLOX but not hebiba mutant plants, despite the higher root biomass removal. This effect was correlated with a herbivore-induced reduction of sucrose pools in asLOX roots. Taken together, our experiments show that jasmonates are induced signals that protect rice roots from herbivores under varying abiotic conditions and that boosting jasmonate responses can strongly enhance rice resistance against root pests. Furthermore, we show that a rice 13-lipoxygenase regulates root primary metabolites and specifically improves root herbivore growth. PMID:25627217

  4. Simulated performance of an acoustic modem using phase-modulated signals in a time-varying, shallow-water environment

    DEFF Research Database (Denmark)

    Bjerrum-Niese, Christian; Jensen, Leif Bjørnø

    1996-01-01

    Underwater acoustic modems using coherent modulation, such as phase-shift keying, have proven to efficiently exploit the bandlimited underwater acoustical communication channel. However, the performance of an acoustic modem, given as maximum range and data and error rate, is limited in the complex and dynamic multipath channel. Multipath arrivals at the receiver cause phase distortion and fading of the signal envelope. Yet, for extreme ratios of range to depth, the delays of multipath arrivals decrease, and the channel impulse response coherently contributes energy to the signal at short delays relative to the first arrival, while longer delays give rise to intersymbol interference. Following this, the signal-to-multipath ratio (SMR) is introduced. It is claimed that the SMR determines the performance rather than the signal-to-noise ratio (SNR). Using a ray model including temporal variations...

  5. Preoptometry and optometry school grade point average and optometry admissions test scores as predictors of performance on the national board of examiners in optometry part I (basic science) examination.

    Science.gov (United States)

    Bailey, J E; Yackle, K A; Yuen, M T; Voorhees, L I

    2000-04-01

    To evaluate preoptometry and optometry school grade point averages and Optometry Admission Test (OAT) scores as predictors of performance on the National Board of Examiners in Optometry (NBEO) Part I (Basic Science) (NBEOPI) examination. Simple and multiple correlation coefficients were computed from data obtained from a sample of three consecutive classes of optometry students (1995-1997; n = 278) at the Southern California College of Optometry. The GPA after year two of optometry school showed the highest correlation (r = 0.75) among all predictor variables; the average of all scores on the OAT showed the highest correlation among preoptometry predictor variables (r = 0.46). Stepwise regression analysis indicated that a combination of the optometry GPA, the OAT Academic Average, and the GPA in certain optometry curricular tracks resulted in an improved correlation (multiple r = 0.81). Predicted NBEOPI scores were computed from the regression equation and then analyzed by receiver operating characteristic (ROC) and statistic of agreement (kappa) methods. From this analysis, we identified the predicted score that maximized identification of true and false NBEOPI failures (71% and 10%, respectively). Cross-validation of this result on a separate class of optometry students resulted in a slightly lower correlation between actual and predicted NBEOPI scores (r = 0.77) but showed the criterion-predicted score to be somewhat lax. The optometry school GPA after 2 years is a reasonably good predictor of performance on the full NBEOPI examination, but the prediction is enhanced by adding the Academic Average OAT score. However, predicting performance in certain subject areas of the NBEOPI examination, for example Psychology and Ocular/Visual Biology, was rather insubstantial. Nevertheless, predicting NBEOPI performance from the best combination of year-two optometry GPAs and preoptometry variables is better than has been shown in previous studies predicting optometry GPA from the best

  6. Performance Comparison of Adaptive Estimation Techniques for Power System Small-Signal Stability Assessment

    Directory of Open Access Journals (Sweden)

    E. A. Feilat

    2010-12-01

    Full Text Available This paper demonstrates the assessment of the small-signal stability of a single-machine infinite-bus power system under widely varying loading conditions using the concept of synchronizing and damping torque coefficients. The coefficients are calculated from the time responses of the rotor angle, speed, and torque of the synchronous generator. Three adaptive computation algorithms, including Kalman filtering, Adaline, and recursive least squares, have been compared to estimate the synchronizing and damping torque coefficients. The steady-state performance of the three adaptive techniques is compared with the conventional static least squares technique by conducting computer simulations at different loading conditions. The algorithms are compared to each other in terms of speed of convergence and accuracy. The recursive least squares estimation offers several advantages, including a significant reduction in computing time and computational complexity. The tendency of an unsupplemented static exciter to degrade the system damping for medium and heavy loading is verified. Consequently, a power system stabilizer whose parameters are adjusted to compensate for variations in the system loading is designed using the phase compensation method. The effectiveness of the stabilizer in enhancing the dynamic stability over a wide range of operating conditions is verified through the calculation of the synchronizing and damping torque coefficients using the recursive least squares technique.
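
    As a rough sketch of the recursive least squares estimation favored above, the following Python fragment shows the standard RLS recursion fitting synchronizing (Ks) and damping (Kd) torque coefficients from rotor-angle and speed deviations; the signal names are placeholders, not the authors' data:

        import numpy as np

        def rls(phi, d, lam=0.99, delta=100.0):
            """Recursive least squares: fit d[k] ~ phi[k] . theta with forgetting factor lam."""
            n = phi.shape[1]
            P = delta * np.eye(n)                     # inverse-correlation estimate
            theta = np.zeros(n)
            for x, y in zip(phi, d):
                k = P @ x / (lam + x @ P @ x)         # gain vector
                theta = theta + k * (y - x @ theta)   # coefficient update
                P = (P - np.outer(k, x @ P)) / lam
            return theta

        # Model: dTe[k] ~ Ks * d_delta[k] + Kd * d_omega[k]
        # Ks, Kd = rls(np.c_[d_delta, d_omega], d_torque)   # deviation signals assumed given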

  7. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    Science.gov (United States)

    Hernandez, Wilmar

    2007-01-01

    In this paper, a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is presented. A comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art of the application of robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight, because there are some open research issues that have to be solved. This paper draws attention to one of these open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.

  8. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    National Research Council Canada - National Science Library

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...

  9. A New Second-Order Generalized Integrator Based Quadrature Signal Generator With Enhanced Performance

    DEFF Research Database (Denmark)

    Xin, Zhen; Qin, Zian; Lu, Minghui

    2016-01-01

    Due to the simplicity and flexibility of the structure of the Second-Order Generalized Integrator based Quadrature Signal Generator (SOGI-QSG), it has been widely used over the past decade for many applications such as frequency estimation, grid synchronization, and harmonic extraction. However, the SOGI-QSG will produce errors when its input signal contains a dc component or harmonic components with unknown frequencies. The accuracy of the signal detection methods using it may hence be compromised. To overcome the drawback, the First-Order System (FOS) concept is first used to illustrate...
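
    For context, the textbook SOGI-QSG transfer functions (standard forms, not quoted from this paper) make the dc problem mentioned above visible:

        D(s) = \frac{v'(s)}{v(s)} = \frac{k\omega s}{s^2 + k\omega s + \omega^2}, \qquad
        Q(s) = \frac{qv'(s)}{v(s)} = \frac{k\omega^2}{s^2 + k\omega s + \omega^2}.

    Since Q(0) = k, a dc offset in the input passes straight into the quadrature output, which is one source of the errors the abstract describes.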

  10. Spontaneous Alpha Power Lateralization Predicts Detection Performance in an Un-Cued Signal Detection Task.

    Directory of Open Access Journals (Sweden)

    Gonzalo Boncompte

    Full Text Available Focusing one's attention, guided by external stimuli, towards a specific area of the visual field produces systematic neural signatures. One of the most robust is the change in the topological distribution of oscillatory alpha-band activity across parieto-occipital cortices: in particular, decreases in alpha activity over contralateral and/or increases over ipsilateral scalp sites, with respect to the side of the visual field where attention was focused. This evidence comes mainly from experiments where an explicit cue informs subjects where to focus their attention, thus facilitating detection of an upcoming target stimulus. However, recent theoretical models of attention have highlighted a stochastic, or non-deterministic, component related to visuospatial attentional allocation. In an attempt to evidence this component, here we analyzed alpha activity in a signal detection paradigm lacking informative cues, i.e., in the absence of preceding information about the location (and time of appearance) of target stimuli. We believe that the unpredictability of this situation could be beneficial for unveiling this component. Interestingly, although total alpha power did not differ between Seen and Unseen conditions, we found a significant lateralization of alpha activity over parieto-occipital electrodes, which predicted behavioral performance. This effect had a smaller magnitude compared to paradigms in which attention is externally guided (cued). However, we believe that further characterization of this spontaneous component of attention is of great importance in the study of visuospatial attentional dynamics. These results support the presence of a spontaneous component of visuospatial attentional allocation, and they advance pre-stimulus alpha-band lateralization as one of its neural signatures.

  11. Machine Learning for Optical Performance Monitoring from Directly Detected PDM-QAM Signals

    DEFF Research Database (Denmark)

    Wass, J.; Thrane, Jakob; Piels, Molly

    2016-01-01

    Supervised machine learning methods are applied and demonstrated experimentally for in-band OSNR estimation and modulation format classification in optical communication systems. The proposed methods accurately evaluate coherent signals up to 64QAM using only intensity information.

  12. Operation and performance of a longitudinal damping system using parallel digital signal processing

    International Nuclear Information System (INIS)

    Fox, J.D.; Hindi, H.; Linscott, I.

    1994-06-01

    A programmable longitudinal feedback system based on four AT&T 1610 digital signal processors has been developed as a component of the PEP-II R&D program. This Longitudinal Quick Prototype is a proof of concept for the PEP-II system and implements full-speed bunch-by-bunch signal processing for storage rings with bunch spacings of 4 ns. The design implements, via software, a general-purpose feedback controller which allows the system to be operated at several accelerator facilities. The system configuration used for tests at the LBL Advanced Light Source is described. Open- and closed-loop results showing the detection and calculation of feedback signals from bunch motion are presented, and the system is shown to damp coupled-bunch instabilities in the ALS. Use of the system for accelerator diagnostics is illustrated via measurement of injection transients and analysis of open-loop bunch motion.

  13. Operation and performance of a longitudinal feedback system using digital signal processing

    International Nuclear Information System (INIS)

    Teytelman, D.; Fox, J.; Hindi, H.

    1994-01-01

    A programmable longitudinal feedback system using a parallel array of AT&T 1610 digital signal processors has been developed as a component of the PEP-II R&D program. This system has been installed at the Advanced Light Source (LBL) and implements full-speed bunch-by-bunch signal processing for storage rings with bunch spacing of 4 ns. Open- and closed-loop results showing the action of the feedback system are presented, and the system is shown to damp coupled-bunch instabilities in the ALS. A unified PC-based software environment for the feedback system operation is also described.

  14. PEAK-TO-AVERAGE POWER RATIO REDUCTION USING CODING AND HYBRID TECHNIQUES FOR OFDM SYSTEM

    OpenAIRE

    Bahubali K. Shiragapur; Uday Wali

    2016-01-01

    In this article, error correction coding techniques are investigated as a means of reducing the undesirable Peak-to-Average Power Ratio (PAPR). The Golay code (24, 12), Reed-Muller code (16, 11), Hamming code (7, 4) and a hybrid technique (a combination of signal scrambling and signal distortion) proposed by us are used as the coding techniques; the simulation results show that the hybrid technique reduces PAPR significantly as compared to Conve...
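
    PAPR itself is a one-line statistic: the peak instantaneous power of the time-domain OFDM symbol divided by its mean power. A small Python illustration with generic QPSK subcarriers (not the article's simulation setup):

        import numpy as np

        rng = np.random.default_rng(1)
        N = 64                                        # number of subcarriers (hypothetical)
        bits = rng.integers(0, 2, (N, 2)) * 2 - 1
        symbols = bits[:, 0] + 1j * bits[:, 1]        # QPSK constellation points
        x = np.fft.ifft(symbols) * np.sqrt(N)         # time-domain OFDM symbol
        papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
        print(f"PAPR = {papr_db:.2f} dB")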

  15. Performance bounds for sparse signal reconstruction with multiple side information [arXiv

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Seiler, Jurgen; Kaup, Andre

    2016-01-01

    In the context of compressive sensing (CS), this paper considers the problem of reconstructing sparse signals with the aid of other given correlated sources as multiple side information (SI). To address this problem, we propose a reconstruction algorithm with multiple SI (RAMSI) that solves...

  16. Toward quantifying metrics for rail-system resilience: Identification and analysis of performance weak resilience signals

    NARCIS (Netherlands)

    Regt, A. de; Siegel, A.W.; Schraagen, J.M.C.

    2016-01-01

    This paper aims to enhance tangibility of the resilience engineering concept by facilitating understanding and operationalization of weak resilience signals (WRSs) in the rail sector. Within complex socio-technical systems, accidents can be seen as unwanted outcomes emerging from uncontrolled

  17. Timing performance of a self-cancelling turn-signal mechanism in motorcycles based on the ATMega328P microcontroller

    Science.gov (United States)

    Nurbuwat, Adzin Kondo; Eryandi, Kholid Yusuf; Estriyanto, Yuyun; Widiastuti, Indah; Pambudi, Nugroho Agung

    2018-02-01

    The objective of this study is to measure the timing performance of a self-cancelling turn-signal mechanism based on the ATMega328P microcontroller, under low-speed and high-speed treatments, on motorcycles commonly used in Indonesia. Timing measurements were made by comparing the ATMega328P-based self-cancelling turn signal with the standard motorcycle turn time. Measurements at low speed were performed at 15 km/h, 20 km/h, and 25 km/h on a U-turn test trajectory, with the steering-wheel turning-angle limit at the potentiometer set at 3°. The high-speed treatment used 30 km/h, 40 km/h, 50 km/h, and 60 km/h on an L-turn test track, with the tilt (roll) angle read by an L3G4200D gyroscope sensor. Each speed test was repeated three times. The standard time is the reference for self-cancelling turn-signal performance. The standard times obtained were 15.68 s, 11.96 s, and 9.34 s at low speed, and 4.63 s, 4.06 s, 3.61 s, and 3.13 s at high speed. The self-cancelling turn-signal tests showed 16.10 s, 12.42 s, and 10.24 s at low speed and 5.18 s, 4.51 s, 3.73 s, and 3.21 s at high speed. At a speed of 15 km/h, instability of the motorcycle's turning motion made testing more difficult. Small time deviations indicate the device works well; the largest time deviations were 0.9 s at low speed and 0.55 s at high speed. At low speed, the highest deviation occurred in the 25 km/h test because tilting of the motorcycle had already begun, which slowed the reading of the steering movement. At higher speeds the deviation decreases due to rapid sensor readings of the tilt when turning fast at ever higher speeds. The timing performance of the self-cancelling turn signal decreases as the motorcycle turning

  18. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo-energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined models from the native structures becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structures (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in a lower RMSD to the native protein than the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which
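
    The averaging artifact the paper removes is easy to reproduce numerically: coordinate-averaging two conformers that differ by a rotation shortens the bond between the atoms. A toy Python example (illustrative only; the Monte Carlo refinement itself is not reproduced):

        import numpy as np

        # one 1.5 Å bond seen in two conformers that differ by a 90° rotation
        conf1 = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
        conf2 = np.array([[0.0, 0.0, 0.0], [0.0, 1.5, 0.0]])
        mean_struct = (conf1 + conf2) / 2             # naive coordinate average
        bond = np.linalg.norm(mean_struct[1] - mean_struct[0])
        print(f"averaged bond length: {bond:.2f} Å")  # ~1.06 Å, unphysically short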

  19. Performance of a novel multiple-signal luminescence sediment tracing method

    Science.gov (United States)

    Reimann, Tony

    2014-05-01

    Optically Stimulated Luminescence (OSL) is commonly used for dating sediments. Luminescence signals build up due to exposure of mineral grains to natural ionizing radiation, and are reset when these grains are exposed to (sun)light during sediment transport and deposition. Generally, luminescence signals can be read in two ways, potentially providing information on the burial history (dating) or the transport history (sediment tracing) of mineral grains. In this study we use a novel luminescence measurement procedure (Reimann et al., submitted) that simultaneously monitors six different luminescence signals from the same sub-sample (aliquot) to infer the transport history of sand grains. Daylight exposure experiments reveal that each of these six signals resets (bleaches) at a different rate, thus allowing the bleaching history of the sediment to be traced in six different observation windows. To test the feasibility of luminescence sediment tracing in shallow-marine coastal settings we took eight sediment samples from the pilot mega-nourishment Zandmotor in Kijkduin (South-Holland). This site provides relatively controlled conditions, as the morphological evolution of this nourishment is densely monitored (Stive et al., 2013). After sampling the original nourishment source we took samples along the seaward-facing contour of the spit that was formed from August 2011 (start of nourishment) to June 2012 (sampling). It is presumed that these samples originate from the source and were transported and deposited within the first year after construction. The measured luminescence of a sediment sample was interpolated onto the daylight bleaching curve of each signal to assign an Equivalent Exposure Time (EET) to the sample. The EET is a quantitative measure of the full daylight equivalent a sample was exposed to during sediment transport, i.e. the higher the EET, the longer the sample has been transported or the more efficiently it has been exposed to daylight during sediment

  20. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
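
    A plausible formal rendering of the stated result (notation assumed here, not copied from the paper): for a variable x averaged with weighting functions w and v,

        \bar{x}_w - \bar{x}_v = \frac{\operatorname{Cov}_v(x, \, w/v)}{\operatorname{E}_v[\, w/v \,]},
        \qquad \bar{x}_w = \frac{\sum_i w_i x_i}{\sum_i w_i},

    which follows from writing \bar{x}_w = \operatorname{E}_v[(w/v)\,x] / \operatorname{E}_v[w/v] and expanding the numerator as a covariance plus a product of means.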

  1. Transmission of Voice Signal: BER Performance Analysis of Different FEC Schemes Based OFDM System over Various Channels

    OpenAIRE

    Rashed, Md. Golam; Kabir, M. Hasnat; Reza, Md. Selim; Islam, Md. Matiqul; Shams, Rifat Ara; Masum, Saleh; Ullah, Sheikh Enayet

    2012-01-01

    In this paper, we investigate the impact of Forward Error Correction (FEC) codes, namely Cyclic Redundancy Code and Convolutional Code, on the performance of an OFDM wireless communication system for speech signal transmission over both AWGN and fading (Rayleigh and Rician) channels in terms of Bit Error Probability. The simulation has been done in conjunction with QPSK digital modulation and compared with uncoded results. In the fading channels, it is found via computer simulation that...

  2. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    Directory of Open Access Journals (Sweden)

    Wilmar Hernandez

    2007-01-01

    Full Text Available In this paper, a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is presented. A comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art of the application of robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight, because there are some open research issues that have to be solved. This paper draws attention to one of these open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.

  3. Performance Analysis of DPSK Signals with Selection Combining and Convolutional Coding in Fading Channel

    National Research Council Canada - National Science Library

    Ong, Choon

    1998-01-01

    The performance analysis of a differential phase shift keyed (DPSK) communications system, operating in a Rayleigh fading environment, employing convolutional coding and diversity processing is presented...

  4. Performance of different synchronization measures in real data: A case study on electroencephalographic signals

    Science.gov (United States)

    Quian Quiroga, R.; Kraskov, A.; Kreuz, T.; Grassberger, P.

    2002-04-01

    We study the synchronization between left and right hemisphere rat electroencephalographic (EEG) channels by using various synchronization measures, namely nonlinear interdependences, phase synchronizations, mutual information, cross correlation, and the coherence function. In passing, we show a close relation between two recently proposed phase synchronization measures and we extend the definition of one of them. In three typical examples we observe that, except for mutual information, all these measures give a useful quantification that is hard to guess beforehand from the raw data. Despite their differences, the results are qualitatively the same. Therefore, we claim that the applied measures are valuable for the study of synchronization in real data. Moreover, in the particular case of EEG signals their use as complementary variables could be of clinical relevance.
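
    Two of the simpler measures compared above, normalized cross-correlation and magnitude-squared coherence, can be computed directly with SciPy; a minimal sketch on synthetic two-channel data (not the rat EEG used in the study):

        import numpy as np
        from scipy.signal import coherence, correlate

        fs = 256                                       # sampling rate in Hz (assumed)
        t = np.arange(0, 4, 1 / fs)
        left = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
        right = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * np.random.randn(t.size)

        xc = correlate(left - left.mean(), right - right.mean(), mode="full")
        xc /= left.std() * right.std() * t.size        # peak ~ correlation coefficient at best lag
        f, Cxy = coherence(left, right, fs=fs)         # magnitude-squared coherence per frequency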

  5. Performance improvement of two-dimensional EUV spectroscopy based on high frame rate CCD and signal normalization method

    International Nuclear Information System (INIS)

    Zhang, H.M.; Morita, S.; Ohishi, T.; Goto, M.; Huang, X.L.

    2014-01-01

    In the Large Helical Device (LHD), the performance of two-dimensional (2-D) extreme ultraviolet (EUV) spectroscopy with a wavelength range of 30-650 Å has been improved by installing a high-frame-rate CCD and applying a signal intensity normalization method. With the upgraded 2-D space-resolved EUV spectrometer, measurement of 2-D impurity emission profiles with high horizontal resolution is possible in high-density NBI discharges. The variation in intensities of EUV emission among a few discharges is significantly reduced by normalizing the signal to the spectral intensity from the EUV_Long spectrometer, which works as an impurity monitor with high time resolution. As a result, high-resolution 2-D intensity distributions have been obtained from CIV (384.176 Å), CV (2×40.27 Å), CVI (2×33.73 Å) and HeII (303.78 Å). (author)

  6. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  7. The Facial Appearance of CEOs: Faces Signal Selection but Not Performance

    Science.gov (United States)

    Garretsen, Harry; Spreeuwers, Luuk J.

    2016-01-01

    Research overwhelmingly shows that facial appearance predicts leader selection. However, the evidence on the relevance of faces for actual leader ability and consequently performance is inconclusive. By using a state-of-the-art, objective measure for face recognition, we test the predictive value of CEOs’ faces for firm performance in a large sample of faces. We first compare the faces of Fortune500 CEOs with those of US citizens and professors. We find clear confirmation that CEOs do look different when compared to citizens or professors, replicating the finding that faces matter for selection. More importantly, we also find that faces of CEOs of top performing firms do not differ from other CEOs. Based on our advanced face recognition method, our results suggest that facial appearance matters for leader selection but that it does not do so for leader performance. PMID:27462986

  8. System performance enhancement with pre-distorted OOFDM signal waveforms in DM/DD systems.

    Science.gov (United States)

    Sánchez, C; Ortega, B; Capmany, J

    2014-03-24

    In this work we propose a pre-distortion technique for the mitigation of the nonlinear distortion present in directly modulated/detected OOFDM systems and explore the system performance achieved under varying system parameters. Simulation results show that the proposed pre-distortion technique efficiently mitigates the nonlinear distortion, achieving transmission information rates around 40 Gbits/s and 18.5 Gbits/s over 40 km and 100 km of single mode fiber links, respectively, under optimum operating conditions. Moreover, the proposed pre-distortion technique can potentially provide higher system performance to that obtained with nonlinear equalization at the receiver.

  9. Beamforming using subspace estimation from a diagonally averaged sample covariance.

    Science.gov (United States)

    Quijano, Jorge E; Zurk, Lisa M

    2017-08-01

    The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
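
    The diagonal-averaging step described above has a compact implementation: average each subdiagonal of the sample covariance and rebuild a Hermitian Toeplitz matrix (the maximum-entropy extrapolation step is omitted in this sketch):

        import numpy as np
        from scipy.linalg import toeplitz

        def toeplitz_average(R):
            """Average the subdiagonals of R; return a Hermitian Toeplitz estimate."""
            n = R.shape[0]
            c = np.array([np.diagonal(R, offset=-k).mean() for k in range(n)])
            return toeplitz(c, c.conj())

        n, K = 16, 8                                   # array elements, snapshots (toy sizes)
        X = (np.random.randn(n, K) + 1j * np.random.randn(n, K)) / np.sqrt(2)
        R_hat = X @ X.conj().T / K                     # snapshot-limited sample covariance
        R_toep = toeplitz_average(R_hat)               # its eigenvectors feed the subspace projector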

  10. Comment on "Performance of different synchronization measures in real data: A case study on electroencephalographic signals"

    Science.gov (United States)

    Nicolaou, N.; Nasuto, S. J.

    2005-12-01

    We agree with Duckrow and Albano [Phys. Rev. E 67, 063901 (2003)] and Quian Quiroga [Phys. Rev. E 67, 063902 (2003)] that mutual information (MI) is a useful measure of dependence for electroencephalogram (EEG) data, but we show that the improvement seen in the performance of MI on extracting dependence trends from EEG depends more on the type of MI estimator than on any embedding technique used. In an independent study we conducted in search of an optimal MI estimator, in particular for EEG applications, we examined the performance of a number of MI estimators on the data set used by Quian Quiroga in their original study, where the performance of different dependence measures on real data was investigated [Phys. Rev. E 65, 041903 (2002)]. We show that for EEG applications the best performance among the investigated estimators is achieved by k-nearest neighbors, which supports the conjecture by Quian Quiroga in Phys. Rev. E 67, 063902 (2003) that the nearest-neighbor estimator is the most precise method for estimating MI.

  11. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  12. Polygenic signal for symptom dimensions and cognitive performance in patients with chronic schizophrenia.

    Science.gov (United States)

    Xavier, Rose Mary; Dungan, Jennifer R; Keefe, Richard S E; Vorderstrasse, Allison

    2018-06-01

    Genetic etiology of psychopathology symptoms and cognitive performance in schizophrenia is supported by candidate gene and polygenic risk score (PRS) association studies. Such associations are reported to be dependent on several factors: sample characteristics, illness phase, illness severity, etc. We aimed to examine whether schizophrenia PRS predicted psychopathology symptoms and cognitive performance in patients with chronic schizophrenia. We also examined whether schizophrenia-associated autosomal loci were associated with specific symptoms or cognitive domains. Case-only analysis using data from the Clinical Antipsychotic Trials of Intervention Effectiveness-Schizophrenia trials (n = 730). PRS was constructed using the Psychiatric Genomics Consortium (PGC) leave-one-out genome-wide association analysis as the discovery data set. For candidate region analysis, we selected 105 schizophrenia-associated autosomal loci from the PGC study. We found a significant effect of PRS on positive symptoms at a p-threshold (PT) of 0.5 (R2 = 0.007, p = 0.029, empirical p = 0.029) and on negative symptoms at a PT of 1e-07 (R2 = 0.005, p = 0.047, empirical p = 0.048). For models that additionally controlled for neurocognition, the best-fit PRS predicted positive (p-threshold 0.01, R2 = 0.007, p = 0.013, empirical p = 0.167) and negative symptoms (p-threshold 0.1, R2 = 0.012, p = 0.004, empirical p = 0.329). No associations were seen for overall neurocognitive and social cognitive performance tests. Post-hoc analyses revealed that PRS predicted working memory and vigilance performance, but these did not survive correction. No candidate regions that survived multiple-testing correction were associated with either symptoms or cognitive performance. Our findings point to potentially distinct pathogenic mechanisms for schizophrenia symptoms.

  13. Performance of the front-end signal processing electronics for the drift chambers of the Stanford Large Detector

    International Nuclear Information System (INIS)

    Honma, A.; Haller, G.M.; Usher, T.; Shypit, R.

    1990-10-01

    This paper reports on the performance of the front-end analog and digital signal processing electronics for the drift chambers of the Stanford Large Detector (SLD) detector at the Stanford Linear Collider. The electronics mounted on printed circuit boards include up to 64 channels of transimpedance amplification, analog sampling, A/D conversion, and associated control circuitry. Measurements of the time resolution, gain, noise, linearity, crosstalk, and stability of the readout electronics are described and presented. The expected contribution of the electronics to the relevant drift chamber measurement resolutions (i.e., timing and charge division) is given

  14. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  15. Performance and average mass of strawberry fruit obtained from different cropping systems

    Directory of Open Access Journals (Sweden)

    Letícia Kurchaidt Pinheiro Camargo

    2010-11-01

    Full Text Available Abstract The goal of this work was to evaluate the productivity and average fruit mass of eight strawberry (Fragaria x ananassa) cultivars (Aromas, Camino Real, Campinas, Dover, Oso Grande, Toyonoka, Tudla-Milsei and Ventana) grown in different cropping systems. The experimental design was randomized blocks with four replications. The fruits were harvested in the period from October 2007 to February 2008. The results allow inferring that, with respect to productivity, the organic system was more effective for the cultivars Oso Grande and Tudla-Milsei, and the conventional system for Dover and Toyonoka. The highest average masses were found in fruits of the cultivars Tudla-Milsei and Ventana, in both cropping systems. The cultivar that stood out in both the organic and the conventional system was Tudla-Milsei, with the highest productivity and the fruits with the highest average mass. The cultivars responded differently depending on the cultural management employed in each cropping system, which shows that there is variability among the commercial cultivars most widely planted today. Therefore, the choice of cultivar, with productivity in mind, should be made according to its performance within each cropping system.

  16. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  17. Noise Reduction for Nonlinear Nonstationary Time Series Data using Averaging Intrinsic Mode Function

    Directory of Open Access Journals (Sweden)

    Christofer Toumazou

    2013-07-01

    Full Text Available A novel noise filtering algorithm based on the averaging Intrinsic Mode Function (aIMF), a derivation of Empirical Mode Decomposition (EMD), is proposed to remove white Gaussian noise from foreign currency exchange rates, which are nonlinear nonstationary time series signals. Noise patterns with different amplitudes and frequencies were randomly mixed into the five exchange rates. A number of filters, namely the Extended Kalman Filter (EKF), Wavelet Transform (WT), Particle Filter (PF) and the averaging Intrinsic Mode Function (aIMF) algorithm, were used to compare filtering and smoothing performance. The aIMF algorithm demonstrated the highest noise reduction among these filters.
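
    As a rough sketch of EMD-based denoising in the spirit of aIMF (assuming the third-party PyEMD package is available; the authors' exact averaging scheme is not reproduced):

        import numpy as np
        from PyEMD import EMD                          # pip package 'EMD-signal' (assumed)

        t = np.linspace(0, 1, 1024)
        clean = np.sin(2 * np.pi * 5 * t)
        noisy = clean + 0.3 * np.random.randn(t.size)

        imfs = EMD().emd(noisy)                        # intrinsic mode functions, highest frequency first
        denoised = imfs[1:].sum(axis=0)                # crude variant: drop the noisiest (first) IMF,
                                                       # rebuild from the rest (residue included)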

  18. Network Signaling Channel for Improving ZigBee Performance in Dynamic Cluster-Tree Networks

    Directory of Open Access Journals (Sweden)

    D. Hämäläinen

    2008-03-01

    Full Text Available ZigBee is one of the most promising standardized technologies for wireless sensor networks (WSNs). Yet, sufficient energy-efficiency for the lowest-power WSNs is achieved only in rather static networks. This severely limits the applicability of ZigBee in outdoor and mobile applications, where the operating environment is harsh and link failures are common. This paper proposes a network channel beaconing (NCB) algorithm for improving ZigBee performance in dynamic cluster-tree networks. NCB reduces the energy consumption of passive scans by dedicating one frequency channel to network beacon transmissions and by energy-optimizing their transmission rate. According to an energy analysis, the power consumption of network maintenance operations reduces by 70%-76% in dynamic networks. In static networks, the energy overhead is negligible. Moreover, the service time for data routing increases up to 37%. The performance of NCB is validated by ns-2 simulations. NCB can be implemented as an extension on MAC and NWK layers and it is fully compatible with ZigBee.

  19. Signal and image processing systems performance evaluation, simulation, and modeling; Proceedings of the Meeting, Orlando, FL, Apr. 4, 5, 1991

    Science.gov (United States)

    Nasr, Hatem N.; Bazakos, Michael E.

    The various aspects of the evaluation and modeling problems in algorithms, sensors, and systems are addressed. Consideration is given to a generic modular imaging IR signal processor, real-time architecture based on the image-processing module family, application of the Proto Ware simulation testbed to the design and evaluation of advanced avionics, development of a fire-and-forget imaging infrared seeker missile simulation, an adaptive morphological filter for image processing, laboratory development of a nonlinear optical tracking filter, a dynamic end-to-end model testbed for IR detection algorithms, wind tunnel model aircraft attitude and motion analysis, an information-theoretic approach to optimal quantization, parametric analysis of target/decoy performance, neural networks for automated target recognition parameters adaptation, performance evaluation of a texture-based segmentation algorithm, evaluation of image tracker algorithms, and multisensor fusion methodologies. (No individual items are abstracted in this volume)

  20. A Novel Method for Control Performance Assessment with Fractional Order Signal Processing and Its Application to Semiconductor Manufacturing

    Directory of Open Access Journals (Sweden)

    Kai Liu

    2018-06-01

    Full Text Available The significant task for control performance assessment (CPA) is to review and evaluate the performance of the control system. The control system in the semiconductor industry exhibits complex dynamic behavior, which is hard to analyze. This paper investigates the interesting crossover properties of Hurst exponent estimations and proposes a novel method for feature extraction of nonlinear multi-input multi-output (MIMO) systems. First, coupled data from real industry are analyzed by multifractal detrended fluctuation analysis (MFDFA) and the resultant multifractal spectrum is obtained. Second, the crossover points with spline fit in the scale-law curve are located and then employed to segment the entire scale-law curve into several different scaling regions, in each of which a single Hurst exponent can be estimated. Third, to further ascertain the origin of the multifractality of control signals, the generalized Hurst exponents of the original series are compared with those of shuffled data. At last, non-Gaussian statistical properties, multifractal properties and Hurst exponents of the process control variables are derived and compared with different sets of tuning parameters. The results have shown that CPA of the MIMO system can be better performed with the help of fractional order signal processing (FOSP).
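
    The Hurst estimation underlying the method above reduces, in the monofractal case (the q = 2 special case of MFDFA), to ordinary detrended fluctuation analysis. A compact Python sketch on white noise, where the fitted slope should come out near 0.5:

        import numpy as np

        def dfa(x, scales):
            """DFA-1 fluctuation F(s) per scale; slope of log F vs log s estimates H."""
            y = np.cumsum(x - x.mean())                # integrated profile
            F = []
            for s in scales:
                n = len(y) // s
                segs = y[: n * s].reshape(n, s)
                t = np.arange(s)
                resid = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
                F.append(np.sqrt(np.mean(np.concatenate(resid) ** 2)))
            return np.array(F)

        x = np.random.randn(4096)
        scales = np.unique(np.logspace(2, 9, 12, base=2).astype(int))
        F = dfa(x, scales)
        H = np.polyfit(np.log(scales), np.log(F), 1)[0]   # ~0.5 for white noise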

  1. The effects of ionizing radiation on the performance of signaled and unsignalled bar-press shock postponement in the rat

    International Nuclear Information System (INIS)

    Burghardt, W.F. Jr.

    1988-01-01

    Forty-eight rats in four conditions were used to determine the efficacy of preshock warning tones in maintaining bar-press shock postponement performance after irradiation. The SIDMAN group performed without external cues. The SIGNAL group received a 5 sec warning tone preceding shock. The COSAV group had preshock warning tones available for 60 sec following a response on another lever, and was used to assess the ability to maintain performance on two levers simultaneously. In the VISIG group, warning tones always preceded shocks, but followed shock postponement responses unpredictably. Sham-irradiated control groups were used to compare baseline performance on each task, and for comparison with irradiated subjects. Irradiated subjects could perform the movements necessary to successfully avoid shock. They were able to detect and respond appropriately to preshock warning tones when present, although COSAV subjects did not continue to respond to produce them. Irradiated subjects experienced a significant and lasting increase in the number of shocks received, except when no external cues were available.

  2. Effects of Varying Gravity Levels on fNIRS Headgear Performance and Signal Recovery

    Science.gov (United States)

    Mackey, Jeffrey R.; Harrivel, Angela R.; Adamovsky, Grigory; Lewandowski, Beth E.; Gotti, Daniel J.; Tin, Padetha; Floyd, Bertram M.

    2013-01-01

    This paper reviews the effects of varying gravitational levels on functional Near-Infrared Spectroscopy (fNIRS) headgear. fNIRS systems quantify neural activations in the cortex by measuring hemoglobin concentration changes via optical intensity. Such activation measurement allows for the detection of cognitive state, which can be important for emotional stability, human performance and vigilance optimization, and the detection of hazardous operator state. The technique depends on coupling between the fNIRS probe and the user's skin. Such coupling may be highly susceptible to motion if probe-containing headgear designs are not adequately tested. The lack of reliable and self-applicable headgear robust to the influence of motion artifact currently inhibits its operational use in aerospace environments. Both NASA's Aviation Safety and Human Research Programs are interested in this technology as a method of monitoring the cognitive state of pilots and crew.

  3. Performance testing of self-powered detector signal converters at Dukovany nuclear power plant - stage 1

    International Nuclear Information System (INIS)

    Erben, O.; Hajek, P.; Zerola, L.; Karsulin, M.

    1990-11-01

    The converters were manufactured at the Institute of Nuclear Research, Rez. The dynamic behavior of the converters was tested during the start-up of reactor unit 4 of the Dukovany nuclear power plant, and their stability was tested during its normal operation. The results and evaluation of the measurements show good performance of the converters. They have a low offset and good stability, and the measured current values are in good agreement with the values obtained using other methods. The insulation resistance values are in good agreement with the values obtained manually using the method of additional resistance. These converters are planned to be used in the upgraded in-service inspection system in WWER-440 nuclear power plants. (Z.S.) 9 tabs., 22 figs., 1 ref

  4. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, it is extremely difficult, if not totally impossible, to detect a complete sequence of levels without mixing in other parities. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables

  5. Performance analysis of general purpose and digital signal processor kernels for heterogeneous systems-on-chip

    Directory of Open Access Journals (Sweden)

    T. von Sydow

    2003-01-01

    Full Text Available Various reasons like technology progress, flexibility demands, shortened product cycle times and shortened time to market have brought up the possibility and necessity to integrate different architecture blocks on one heterogeneous System-on-Chip (SoC). Architecture blocks like programmable processor cores (DSP and GPP kernels), embedded FPGAs as well as dedicated macros will be integral parts of such a SoC. This contribution discusses programmable architecture blocks in particular and their associated optimization techniques. Design space exploration, and thus the choice of which architecture blocks should be integrated on a SoC, is a challenging task. Crucial to this exploration is the evaluation of the application domain characteristics and the costs caused by the individual architecture blocks integrated on a SoC. An ATE cost function has been applied to examine the performance of the aforementioned programmable architecture blocks. For this purpose, representative discrete devices have been analyzed. Furthermore, several architecture-dependent optimization steps and their effects on the cost ratios are presented.

  6. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  7. Effects of platooning on signal-detection performance, workload, and stress: A driving simulator study.

    Science.gov (United States)

    Heikoop, Daniël D; de Winter, Joost C F; van Arem, Bart; Stanton, Neville A

    2017-04-01

    Platooning, whereby automated vehicles travel closely together in a group, is attractive in terms of safety and efficiency. However, concerns exist about the psychological state of the platooning driver, who is exempted from direct control, yet remains responsible for monitoring the outside environment to detect potential threats. By means of a driving simulator experiment, we investigated the effects on recorded and self-reported measures of workload and stress for three task-instruction conditions: (1) No Task, in which participants had to monitor the road, (2) Voluntary Task, in which participants could do whatever they wanted, and (3) Detection Task, in which participants had to detect red cars. Twenty-two participants performed three 40-min runs in a constant-speed platoon, one condition per run in counterbalanced order. Contrary to some classic literature suggesting that humans are poor monitors, in the Detection Task condition participants attained a high mean detection rate (94.7%) and a low mean false alarm rate (0.8%). Results of the Dundee Stress State Questionnaire indicated that automated platooning was less distressing in the Voluntary Task than in the Detection Task and No Task conditions. In terms of heart rate variability, the Voluntary Task condition yielded a lower power in the low-frequency range relative to the high-frequency range (LF/HF ratio) than the Detection Task condition. Moreover, a strong time-on-task effect was found, whereby the mean heart rate dropped from the first to the third run. In conclusion, participants are able to remain attentive for a prolonged platooning drive, and the type of monitoring task has effects on the driver's psychological state. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L2-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies.

  9. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  10. Performance Analysis of Long-Reach Coherent Detection OFDM-PON Downstream Transmission Using m-QAM-Mapped OFDM Signal

    Science.gov (United States)

    Pandey, Gaurav; Goel, Aditya

    2017-12-01

    In this paper, orthogonal frequency division multiplexing (OFDM)-passive optical network (PON) downstream transmission is demonstrated over different lengths of fiber at the remote node (RN) for different m-QAM (quadrature amplitude modulation)-mapped OFDM signals (m = 4, 16, 32 and 64) transmitted from the central office (CO) at different data rates (10, 20, 30 and 40 Gbps) using coherent detection at the user end or optical network unit (ONU). The investigation covers different numbers of subcarriers (32, 64, 128, 512 and 1,024) and back-to-back optical signal-to-noise ratio (OSNR), along with transmitted and received constellation diagrams for m-QAM-mapped coherent OFDM downstream transmission at different speeds over different transmission distances. Received optical power is calculated for different bit error rates (BERs) at different speeds using m-QAM-mapped coherent detection OFDM downstream transmission. No dispersion compensation is utilized within the fiber span. Simulation results suggest the lengths and data rates that can be used for different m-QAM-mapped coherent detection OFDM downstream transmission, and the proposed system may be implemented in next-generation high-speed PONs (NG-PONs).
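
    The following sketch shows the basic m-QAM mapping and IFFT steps that underlie an OFDM transmitter of the kind simulated above. It covers only square constellations (m = 4, 16, 64; cross constellations such as 32-QAM need a different mapping), and the subcarrier count, cyclic-prefix length, and power normalization are illustrative assumptions, not the paper's simulation parameters.

```python
import numpy as np

def qam_map(bits, m):
    """Map bits to a square m-QAM constellation (Gray coding omitted)."""
    k = int(np.log2(m))                      # bits per symbol
    side = int(np.sqrt(m))                   # constellation side length
    vals = bits.reshape(-1, k)
    idx = vals.dot(1 << np.arange(k)[::-1])  # bit groups -> integers
    i = 2 * (idx % side) - side + 1          # in-phase level
    q = 2 * (idx // side) - side + 1         # quadrature level
    return (i + 1j * q) / np.sqrt(2 * (side ** 2 - 1) / 3)  # unit power

rng = np.random.default_rng(1)
n_sub = 64                                   # number of subcarriers
m = 16
bits = rng.integers(0, 2, n_sub * int(np.log2(m)))
symbols = qam_map(bits, m)
ofdm_time = np.fft.ifft(symbols) * np.sqrt(n_sub)  # one OFDM symbol
cp = ofdm_time[-16:]                         # cyclic prefix
tx = np.concatenate([cp, ofdm_time])
print(tx.shape, np.mean(np.abs(symbols) ** 2))
```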

  11. Assessment of the Speech Intelligibility Performance of Post Lingual Cochlear Implant Users at Different Signal-to-Noise Ratios Using the Turkish Matrix Test

    Directory of Open Access Journals (Sweden)

    Zahra Polat

    2016-10-01

    Full Text Available Background: Spoken word recognition and speech perception tests in quiet are routinely used to assess the benefit that children and adult cochlear implant users receive from their devices. Cochlear implant users generally demonstrate high-level performance on these test materials, as they are able to achieve high-level speech perception in quiet situations. Although these test materials provide valuable information regarding cochlear implant (CI) users' performance in optimal listening conditions, they do not give realistic information regarding performance in adverse listening conditions, which is the case in the everyday environment. Aims: The aim of this study was to assess the speech intelligibility performance of post-lingual CI users in the presence of noise at different signal-to-noise ratios with the Matrix Test developed for the Turkish language. Study Design: Cross-sectional study. Methods: Thirty post-lingual adult implant users, who had been using their implants for a minimum of one year, were evaluated with the Turkish Matrix Test. Subjects' speech intelligibility was measured using the adaptive and non-adaptive Matrix Test in quiet and noisy environments. Results: The results of the study show a correlation between the Pure Tone Average (PTA) values of the subjects and the Matrix Test Speech Reception Threshold (SRT) values in quiet. Hence, it is also possible to assess the PTA values of CI users using the Matrix Test. However, no correlation was found between Matrix SRT values in quiet and Matrix SRT values in noise. Similarly, the correlation between PTA values and intelligibility scores in noise was also not significant. Therefore, it may not be possible to assess the intelligibility performance of CI users using test batteries performed in quiet conditions. Conclusion: The Matrix Test can be used to assess the benefit CI users gain from their systems in everyday life, since it is possible to perform

  12. The effects of virtual reality displays on visual attention and detection of signals performance for main control room training

    International Nuclear Information System (INIS)

    Lin Shiaufeng; Lin Chiuhsiang Joe; Wang Rouwen; Yang Lichen; Yang Chihwei; Cheng Tsungchieh; Wang Jyhgang

    2011-01-01

    the detection of signals for VR simulation in MCR training. This study compares the performance of two VR representation methods (Desktop VR and Projector VR) on visual attention and detection of signals. We developed a questionnaire to evaluate the immersion effect of the virtual display of the MCR. The results indicated that the different VR representation methods significantly affected the participants' visual attention and detection of signals. (author)

  13. The Effects of Sensor Performance as Modeled by Signal Detection Theory on the Performance of Reinforcement Learning in a Target Acquisition Task

    Science.gov (United States)

    Quirion, Nate

    Unmanned Aerial Systems (UASs) today are fulfilling more roles than ever before. There is a general push to have these systems feature more advanced autonomous capabilities in the near future. To achieve autonomous behavior requires some unique approaches to control and decision making. More advanced versions of these approaches are able to adapt their own behavior and examine their past experiences to increase their future mission performance. To achieve adaptive behavior and decision making capabilities this study used Reinforcement Learning algorithms. In this research the effects of sensor performance, as modeled through Signal Detection Theory (SDT), on the ability of RL algorithms to accomplish a target localization task are examined. Three levels of sensor sensitivity are simulated and compared to the results of the same system using a perfect sensor. To accomplish the target localization task, a hierarchical architecture used two distinct agents. A simulated human operator is assumed to be a perfect decision maker, and is used in the system feedback. An evaluation of the system is performed using multiple metrics, including episodic reward curves and the time taken to locate all targets. Statistical analyses are employed to detect significant differences in the comparison of steady-state behavior of different systems.

  14. Signal processing of data from short sample tests for the projection of conductor performance in ITER magnets

    International Nuclear Information System (INIS)

    Martovetsky, Nicolai

    2008-01-01

    Qualification of the ITER conductor is absolutely necessary. Testing large-scale conductors is expensive and time-consuming. Testing 3-4 m long straight samples in the bore of a split solenoid is relatively economical in comparison with fabricating a coil to be tested in the bore of a background-field solenoid. However, testing short samples may give ambiguous results due to different constraints on current redistribution in the cable, or other end effects which are not present in the large magnet. This paper discusses the processes taking place in the ITER conductor, the conditions under which conductor performance could be distorted, and possible signal processing to deduce the behaviour of ITER conductors in ITER magnets from the test data.

  15. Evaluation of the dark signal performance of different SiPM-technologies under irradiation with cold neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Durini, Daniel, E-mail: d.durini@fz-juelich.de [Central Institute of Engineering, Electronics and Analytics ZEA-2 – Electronic Systems, Forschungszentrum Jülich GmbH, D-52425 Jülich (Germany); Degenhardt, Carsten; Rongen, Heinz [Central Institute of Engineering, Electronics and Analytics ZEA-2 – Electronic Systems, Forschungszentrum Jülich GmbH, D-52425 Jülich (Germany); Feoktystov, Artem [Jülich Centre for Neutron Science (JCNS) at Heinz Maier-Leibnitz Zentrum (MLZ), Forschungszentrum Jülich GmbH, Lichtenbergstr. 1, D-85748 Garching (Germany); Schlösser, Mario; Palomino-Razo, Alejandro [Central Institute of Engineering, Electronics and Analytics ZEA-2 – Electronic Systems, Forschungszentrum Jülich GmbH, D-52425 Jülich (Germany); Frielinghaus, Henrich [Jülich Centre for Neutron Science (JCNS) at Heinz Maier-Leibnitz Zentrum (MLZ), Forschungszentrum Jülich GmbH, Lichtenbergstr. 1, D-85748 Garching (Germany); Waasen, Stefan van [Central Institute of Engineering, Electronics and Analytics ZEA-2 – Electronic Systems, Forschungszentrum Jülich GmbH, D-52425 Jülich (Germany)

    2016-11-01

    In this paper we report the results of the assessment of changes in the dark signal delivered by three silicon photomultiplier (SiPM) detector arrays, fabricated by three different manufacturers, when irradiated with cold neutrons (wavelength λn = 5 Å, i.e. neutron energy En = 3.27 meV) up to a neutron dose of 6×10¹² n/cm². The dark signals as well as the breakdown voltages (Vbr) of the SiPM detectors were monitored during the irradiation. The system was characterized at room temperature. The analog SiPM detectors, with and without a 1 mm thick cerium-doped ⁶Li-glass scintillator located in front of them, were operated using the bias voltage recommended by the respective manufacturer for proper detector performance. Iout-Vbias measurements, used to determine the breakdown voltage of the devices, were repeated every 30 s during the first hour and every 300 s during the rest of the irradiation time. The digital SiPM detectors were held at the advised bias voltage between the respective breakdown voltage and dark count mappings, which were repeated every 4 min. The measurements were performed on the KWS-1 instrument of the Heinz Maier-Leibnitz Zentrum (MLZ) in Garching, Germany. The two analog and one digital SiPM detector modules under investigation were respectively fabricated by SensL (Ireland), Hamamatsu Photonics (Japan), and Philips Digital Photon Counting (Germany).

  16. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  17. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models.

  18. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  19. Wavelet analysis for nonstationary signals

    International Nuclear Information System (INIS)

    Penha, Rosani Maria Libardi da

    1999-01-01

    Mechanical vibration signals play an important role in identifying anomalies resulting from equipment malfunction. Traditionally, Fourier spectral analysis is used, where the signals are assumed to be stationary. However, occasional transient impulses and start-up processes are examples of nonstationary signals that can be found in mechanical vibrations. These signals can provide important information about the equipment condition, such as early fault detection. Fourier analysis cannot adequately be applied to nonstationary signals because the results provide data about the frequency composition averaged over the duration of the signal. In this work, two methods for nonstationary signal analysis are used: the Short Time Fourier Transform (STFT) and the wavelet transform. The STFT is a method of adapting Fourier spectral analysis for nonstationary application to the time-frequency domain; its main limitation is having a single resolution throughout the entire time-frequency domain. The wavelet transform is a newer analysis technique suitable for nonstationary signals, which handles the STFT drawbacks by providing multi-resolution frequency analysis and time localization in a single time-scale graphic. The multiple frequency resolutions are obtained by scaling (dilating/compressing) the wavelet function. A comparison of the conventional Fourier transform, STFT and wavelet transform is made by applying these techniques to simulated signals, a rotor rig vibration signal and a rotating machine vibration signal. A Hanning window was used for the STFT analysis. Daubechies and harmonic wavelets were used for continuous, discrete and multi-resolution wavelet analysis. The results show that Fourier analysis was not able to detect changes in the signal frequencies or discontinuities. The STFT analysis detected the changes in the signal frequencies, but with time-frequency resolution problems. The continuous and discrete wavelet transforms demonstrated to be a highly efficient tool to detect
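
    To make the STFT/wavelet contrast concrete, the sketch below computes a fixed-resolution spectrogram with scipy and a scalogram with a hand-rolled discretized Morlet wavelet (used here for self-containedness instead of the Daubechies and harmonic wavelets of the study). The test signal, scale grid, and window length are assumptions.

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)
x[1000:1050] += np.exp(-np.arange(50) / 10.0)       # transient impulse

# STFT: one fixed time-frequency resolution set by the window length.
f, tau, Z = stft(x, fs=fs, window='hann', nperseg=128)

def morlet_cwt(x, scales, w0=6.0):
    """CWT with a discretized Morlet wavelet; scales are in samples,
    pseudo-frequency ~ w0 * fs / (2 * pi * scale)."""
    out = np.empty((len(scales), len(x)), dtype=complex)
    for k, s in enumerate(scales):
        n = np.arange(-int(5 * s), int(5 * s) + 1)
        psi = np.exp(1j * w0 * n / s - 0.5 * (n / s) ** 2) / np.sqrt(s)
        out[k] = np.convolve(x, np.conj(psi)[::-1], mode='same')
    return out

scales = np.geomspace(2, 64, 30)
W = morlet_cwt(x, scales)
print(Z.shape, W.shape)   # fixed-grid spectrogram vs multi-scale scalogram
```

    The resolution trade-off is visible directly: the 128-sample Hann window fixes both the time and frequency resolution of the spectrogram, while each row of the scalogram uses a wavelet whose support shrinks with scale, so the transient impulse stays sharply localized at small scales.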

  20. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of the atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  1. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    The receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations over a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression for the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter in strong oceanic turbulence. The effect of the receiver aperture diameter on the aperture averaging factor is also presented for strong oceanic turbulence.

  2. Average combination difference morphological filters for fault feature extraction of bearing

    Science.gov (United States)

    Lv, Jingxiang; Yu, Jianbo

    2018-02-01

    In order to extract impulse components from vibration signals with heavy noise and harmonics, a new morphological filter called the average combination difference morphological filter (ACDIF) is proposed in this paper. ACDIF first constructs several new combination difference (CDIF) operators, and then integrates the best two CDIFs as the final morphological filter. This design scheme enables ACDIF to extract the positive and negative impulses existing in vibration signals to enhance the accuracy of bearing fault diagnosis. The length of the structuring element (SE), which affects the performance of ACDIF, is determined adaptively by a new indicator called Teager energy kurtosis (TEK). TEK further improves the effectiveness of ACDIF for fault feature extraction. Experimental results on simulated and real bearing vibration signals demonstrate that ACDIF can effectively suppress noise and extract periodic impulses from bearing vibration signals.
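
    The exact CDIF operators and the TEK-based selection of the structuring element length are specific to the paper. As a rough illustration of difference-based morphological filtering only, the sketch below extracts positive and negative impulses as differences between a signal and its grey-scale opening and closing, with an arbitrary flat structuring element; it is a simplified stand-in, not ACDIF itself.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def difference_morph_filter(x, se_len):
    """Estimate impulses as signal-minus-opening (positive impulses)
    plus closing-minus-signal (negative impulses), flat SE."""
    opening = grey_opening(x, size=se_len)   # suppresses positive peaks
    closing = grey_closing(x, size=se_len)   # suppresses negative peaks
    pos = x - opening                        # positive impulse estimate
    neg = closing - x                        # negative impulse estimate
    return pos + neg                         # combined impulse train

fs = 10000
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(2)
x = 0.5 * np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(len(t))
x[::500] += 2.0                              # periodic fault impulses
y = difference_morph_filter(x, se_len=9)
print(y[::500].mean(), y.mean())             # impulses stand out vs average
```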

  3. Investigation on the performance of an optically generated RF local oscillator signal in Ku-band DVB-S systems

    NARCIS (Netherlands)

    Khan, M.R.H.; Marpaung, D.A.I.; Burla, M.; Roeloffzen, C.G.H.; Bernhardi, Edward; de Ridder, R.M.

    2011-01-01

    We investigate a way to externally generate the local oscillator (LO) signal used for downconversion of the Ku-band (10.7 − 12.75 GHz) RF signal received from a phased array antenna (PAA). The signal is then translated to an intermediate frequency (950 − 2150 MHz) at the output of the mixer of

  4. Signal noise ratio analysis and on-orbit performance estimation of a solar occultation Fourier transform spectrometer

    Science.gov (United States)

    Li, Bicen; Xu, Pengmei; Hou, Lizhou; Wang, Caiqin

    2017-10-01

    Taking advantage of high spectral resolution, high sensitivity and wide spectral coverage, space-borne Fourier transform infrared spectrometers (FTS) play an increasingly important role in atmospheric composition sounding. The combination of solar occultation and the FTS technique improves the sensitivity of the instrument. To achieve both high spectral resolution and high signal-to-noise ratio (SNR), reasonable allocation and optimization of instrument parameters are the foundation and the difficulty. The solar occultation FTS (SOFTS) is a high spectral resolution (0.03 cm⁻¹) FTS operating from 2.4 to 13.3 μm (750-4100 cm⁻¹), which will determine altitude profile information over a typical range of 10-100 km for temperature, pressure, and the volume mixing ratios of several dozen atmospheric constituents. As a key performance figure of SOFTS, SNR is crucially important for high-accuracy retrieval of atmospheric composition, and is required to be no less than 100:1 at the radiance of a 5800 K blackbody. Based on the study of various parameters and their interaction, and according to interference theory and the operating principle of a time-modulated FTS, a simulation model of the FTS SNR has been built, which considers the satellite orbit, the spectral radiometric features of the sun and atmospheric composition, the optical system, the interferometer and its control system, the measurement duration, detector sensitivity, noise of the detector and electronic system, and so on. Based on the SNR test results under blackbody illumination, the on-orbit SNR performance of SOFTS is estimated and can meet the mission requirement.

  5. Performance comparison of six independent components analysis algorithms for fetal signal extraction from real fMCG data

    International Nuclear Information System (INIS)

    Hild, Kenneth E; Alleva, Giovanna; Nagarajan, Srikantan; Comani, Silvia

    2007-01-01

    In this study we compare the performance of six independent components analysis (ICA) algorithms on 16 real fetal magnetocardiographic (fMCG) datasets for the application of extracting the fetal cardiac signal. We also compare the extraction results for real data with the results previously obtained for synthetic data. The six ICA algorithms are FastICA, CubICA, JADE, Infomax, MRMI-SIG and TDSEP. The results obtained using real fMCG data indicate that the FastICA method consistently outperforms the others in regard to separation quality and that the performance of an ICA method that uses temporal information suffers in the presence of noise. These two results confirm the previous results obtained using synthetic fMCG data. There were also two notable differences between the studies based on real and synthetic data. The differences are that all six ICA algorithms are independent of gestational age and sensor dimensionality for synthetic data, but depend on gestational age and sensor dimensionality for real data. It is possible to explain these differences by assuming that the number of point sources needed to completely explain the data is larger than the dimensionality used in the ICA extraction

  6. Light-cone averaging in cosmology: formalism and applications

    International Nuclear Information System (INIS)

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-01-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted "geodesic light-cone" coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called "redshift drift" in a generic inhomogeneous Universe.

  7. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states

  8. Modified parity space averaging approaches for online cross-calibration of redundant sensors in nuclear reactors

    Directory of Open Access Journals (Sweden)

    Moath Kassim

    2018-05-01

    Full Text Available To maintain the safety and reliability of reactors, redundant sensors are usually used to measure critical variables and estimate their averaged time-dependency. Nonhealthy sensors can badly influence the estimation result for the process variable. Since online condition monitoring was introduced, the online cross-calibration method has been widely used to detect any anomaly in sensor readings within the redundant group. The cross-calibration method has four main averaging techniques: simple averaging, band averaging, weighted averaging, and parity space averaging (PSA). PSA weighs redundant signals based on their error bounds and their band consistency. Using the consistency weighting factor (C), PSA assigns more weight to consistent signals that have shared bands, based on how many bands they share, and gives inconsistent signals very low weight. In this article, three approaches are introduced for improving the PSA technique: the first is to add another consistency factor, called trend consistency (TC), to account for the preservation of any characteristic edge that reflects the behavior of the equipment/component measured by the process parameter; the second proposes replacing the error-bound/accuracy-based weighting factor (Wa) with a weighting factor based on the Euclidean distance (Wd); and the third proposes applying Wd, TC, and C all together. Cold neutron source data sets of four redundant hydrogen pressure transmitters from a research reactor were used to perform the validation and verification. Results showed that the second and third modified approaches lead to reasonable improvement of the PSA technique. All approaches implemented in this study were similar in that they have the capability to (1) identify and isolate a drifted sensor that should undergo calibration, (2) identify a faulty sensor due to a long and continuous missing data range, and (3) identify a healthy sensor. Keywords: Nuclear Reactors
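
    As a toy illustration of distance-based weighting in the spirit of the proposed Wd factor, the sketch below down-weights a redundant sensor whose reading is far, in the Euclidean sense, from its peers. The inverse-distance weighting formula and the sample readings are assumptions made for demonstration, not the article's algorithm.

```python
import numpy as np

def distance_weighted_average(readings, eps=1e-6):
    """Weight each redundant sensor inversely to its Euclidean distance
    from the other sensors' readings (a stand-in for the Wd factor)."""
    r = np.asarray(readings, dtype=float)          # shape (n_sensors,)
    # distance of each sensor's reading to all the others
    d = np.sqrt(((r[:, None] - r[None, :]) ** 2).sum(axis=1))
    w = 1.0 / (d + eps)                            # closer -> heavier weight
    w /= w.sum()
    return float(w @ r), w

readings = [10.02, 10.05, 9.98, 12.9]              # last sensor is drifting
est, w = distance_weighted_average(readings)
print(f"estimate = {est:.3f}, weights = {np.round(w, 3)}")
```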

  9. Signaling aggression.

    Science.gov (United States)

    van Staaden, Moira J; Searcy, William A; Hanlon, Roger T

    2011-01-01

    From psychological and sociological standpoints, aggression is regarded as intentional behavior aimed at inflicting pain and manifested by hostility and attacking behaviors. In contrast, biologists define aggression as behavior associated with attack or escalation toward attack, omitting any stipulation about intentions and goals. Certain animal signals are strongly associated with escalation toward attack and have the same function as physical attack in intimidating opponents and winning contests, and ethologists therefore consider them an integral part of aggressive behavior. Aggressive signals have been molded by evolution to make them ever more effective in mediating interactions between the contestants. Early theoretical analyses of aggressive signaling suggested that signals could never be honest about fighting ability or aggressive intentions because weak individuals would exaggerate such signals whenever they were effective in influencing the behavior of opponents. More recent game theory models, however, demonstrate that given the right costs and constraints, aggressive signals are both reliable about strength and intentions and effective in influencing contest outcomes. Here, we review the role of signaling in lieu of physical violence, considering threat displays from an ethological perspective as an adaptive outcome of evolutionary selection pressures. Fighting prowess is conveyed by performance signals whose production is constrained by physical ability and thus limited to just some individuals, whereas aggressive intent is encoded in strategic signals that all signalers are able to produce. We illustrate recent advances in the study of aggressive signaling with case studies of charismatic taxa that employ a range of sensory modalities, viz. visual and chemical signaling in cephalopod behavior, and indicators of aggressive intent in the territorial calls of songbirds. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. Novel relations between the ergodic capacity and the average bit error rate

    KAUST Repository

    Yilmaz, Ferkan

    2011-11-01

    Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems. Recent research has shown the strong influence of these two performance indicators on the design and implementation of wireless technologies. However, and to the best of our knowledge, direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.
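
    The paper's closed-form relations are not reproduced here, but the two indicators it connects are easy to estimate jointly by Monte Carlo. The sketch below does so for BPSK over a Rayleigh fading channel at an assumed average SNR of 10 dB.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(42)
snr_db = 10.0
gamma_bar = 10 ** (snr_db / 10)                 # average SNR (linear)
h2 = rng.exponential(1.0, size=1_000_000)       # Rayleigh power gain
gamma = gamma_bar * h2                          # instantaneous SNR

ergodic_capacity = np.mean(np.log2(1 + gamma))  # bits/s/Hz
avg_ber = np.mean(0.5 * erfc(np.sqrt(gamma)))   # BPSK: Q(sqrt(2*gamma))
print(f"C = {ergodic_capacity:.3f} bit/s/Hz, BER = {avg_ber:.3e}")
```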

  11. Generation of earthquake signals

    International Nuclear Information System (INIS)

    Kjell, G.

    1994-01-01

    Seismic verification can be performed either as a full-scale test on a shaker table or as numerical calculations. In both cases it is necessary to have an earthquake acceleration time history. This report describes the generation of such time histories by filtering white noise. Analogue and digital filtering methods are compared. Different methods of predicting the response spectrum of a white noise signal filtered by a band-pass filter are discussed. Prediction of both the average response level and the statistical variation around this level is considered. Examples with both the IEEE 301 standard response spectrum and a ground spectrum suggested for Swedish nuclear power stations are included in the report.
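
    A minimal sketch of the core idea, band-pass filtering white noise to obtain a synthetic acceleration time history, is given below. The sample rate, pass band, and build-up/decay envelope are assumptions; matching a target response spectrum such as IEEE 301 requires the prediction and iteration procedures discussed in the report.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200.0                                     # sample rate (Hz), assumed
dur = 20.0
rng = np.random.default_rng(7)
white = rng.standard_normal(int(fs * dur))

# Band-pass filter the white noise to shape its spectrum.
b, a = butter(4, [1.0, 25.0], btype='bandpass', fs=fs)
ground = filtfilt(b, a, white)

# Apply a simple envelope so the motion builds up and then decays.
t = np.arange(len(ground)) / fs
env = np.minimum(t / 2.0, 1.0) * np.exp(-np.maximum(t - 10.0, 0.0) / 4.0)
accel = ground * env
print(accel.shape, accel.std())
```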

  12. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
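
    Numerically, Bayesian model averaging reduces to weighting each model's prediction by its normalized evidence. The toy sketch below shows this with made-up log evidences and predictions; the numbers carry no empirical meaning.

```python
import numpy as np

# Toy Bayesian model averaging: weight each model's prediction by its
# (normalized) marginal likelihood. Evidences here are invented.
log_evidence = np.array([-10.2, -11.0, -14.5])      # one per model
predictions = np.array([0.8, 0.6, 0.1])             # e.g. P(reward | model)

w = np.exp(log_evidence - log_evidence.max())       # numerically stable
w /= w.sum()                                        # posterior model probs
bma_prediction = w @ predictions
print(np.round(w, 3), round(float(bma_prediction), 3))
```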

  13. Signal detection

    International Nuclear Information System (INIS)

    Tholomier, M.

    1985-01-01

    In a scanning electron microscope, whatever signal is measured, the same chain is found: incident beam, sample, signal detection, signal amplification. The resulting signal is used to control the spot luminosity of the observer's cathodoscope, which is synchronized with the beam scanning over the sample; on the cathodoscope, the image of the sample surface in secondary electrons, backscattered electrons, etc. is reconstituted. The best compromise must be found between a recording time short enough to avoid possible variations (under the incident beam) in the nature of the observed phenomenon, a good spatial resolution of the image, and a sufficiently high signal-to-noise ratio. Noise is one of the basic limitations of scanning electron microscope performance. The whole measurement chain must be optimized to reduce it [fr]

  14. The average Indian female nose.

    Science.gov (United States)

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  15. Digital storage of repeated signals

    International Nuclear Information System (INIS)

    Prozorov, S.P.

    1984-01-01

    An independent digital storage system designed for discriminating repeated signals from background noise is described. The signal averaging is performed off-line in the real-time mode by means of multiple selection of the investigated signal and integration at each point. Digital values are added in a simple summator and the result is recorded in the storage device, which has a capacity of 1024 × 20 bit words, from where it can be output to an oscillograph or a plotter, or transmitted to a computer for subsequent processing. The described storage is a reliable and simple device, on the basis of which systems for nuclear magnetic resonance signal acquisition in different experiments have been developed.
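
    The point-by-point accumulate-and-divide operation such a storage device performs is easy to emulate in software, and doing so shows the expected 1/√N fall of residual noise with the number of sweeps. The waveform, noise level, and sweep counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 1024)                         # 1024-point memory
signal = np.sin(2 * np.pi * 5 * t)                  # repeated NMR-like signal

def averaged_sweeps(n_sweeps, noise_sigma=2.0):
    acc = np.zeros_like(t)                          # the summator memory
    for _ in range(n_sweeps):                       # multiple selections
        acc += signal + noise_sigma * rng.standard_normal(t.size)
    return acc / n_sweeps                           # point-by-point average

for n in (1, 16, 256):
    resid = averaged_sweeps(n) - signal
    print(f"N={n:4d}: residual noise rms = {resid.std():.3f}")
# rms noise falls roughly as 1/sqrt(N)
```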

  16. A new method for quantifying the performance of EEG blind source separation algorithms by referencing a simultaneously recorded ECoG signal.

    Science.gov (United States)

    Oosugi, Naoya; Kitajo, Keiichi; Hasegawa, Naomi; Nagasaka, Yasuo; Okanoya, Kazuo; Fujii, Naotaka

    2017-09-01

    Blind source separation (BSS) algorithms extract neural signals from electroencephalography (EEG) data. However, it is difficult to quantify source separation performance because there is no criterion to dissociate neural signals from noise in EEG signals. This study develops a method for evaluating BSS performance. The idea is that neural signals in EEG can be estimated by comparison with simultaneously measured electrocorticography (ECoG), because the ECoG electrodes cover the majority of the lateral cortical surface and should capture most of the original neural sources in the EEG signals. We measured real EEG and ECoG data and developed an algorithm for evaluating BSS performance. First, EEG signals are separated into EEG components using the BSS algorithm. Second, the EEG components are ranked using the correlation coefficients of the ECoG regression, and the components are grouped into subsets based on their ranks. Third, canonical correlation analysis estimates how much information is shared between the subsets of the EEG components and the ECoG signals. We used our algorithm to compare the performance of BSS algorithms (PCA, AMUSE, SOBI, JADE, fastICA) on the EEG and ECoG data of anesthetized nonhuman primates. The results (best case > JADE = fastICA > AMUSE = SOBI ≥ PCA > random separation) were common to the two subjects. To encourage the further development of better BSS algorithms, our EEG and ECoG data are available on our Web site (http://neurotycho.org/) as a common testing platform. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
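
    A compressed sketch of the three-step evaluation (separate, rank by ECoG regression, score by canonical correlation) is given below on synthetic data, using scikit-learn's FastICA and CCA. The data generation and the two-component subset size are assumptions; the study's pipeline over real recordings is considerably more involved.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_t = 2000
sources = rng.laplace(size=(n_t, 4))                 # latent neural + noise
mix = rng.standard_normal((4, 8))
eeg = sources @ mix + 0.1 * rng.standard_normal((n_t, 8))
ecog = sources[:, :2] + 0.05 * rng.standard_normal((n_t, 2))  # reference

# Step 1: separate the EEG into components.
comps = FastICA(n_components=4, random_state=0).fit_transform(eeg)

# Step 2: rank components by how well the ECoG channels explain them.
r2 = []
for c in comps.T:
    beta = np.linalg.lstsq(ecog, c, rcond=None)[0]   # ECoG regression
    r2.append(np.corrcoef(ecog @ beta, c)[0, 1] ** 2)
order = np.argsort(r2)[::-1]

# Step 3: score the top subset against ECoG by canonical correlation.
cca = CCA(n_components=2).fit(comps[:, order[:2]], ecog)
u, v = cca.transform(comps[:, order[:2]], ecog)
print([round(float(np.corrcoef(u[:, i], v[:, i])[0, 1]), 3) for i in range(2)])
```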

  17. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Directory of Open Access Journals (Sweden)

    Tellier Yoann

    2018-01-01

    Full Text Available The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with the accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.

  18. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Science.gov (United States)

    Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien

    2018-04-01

    The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with the accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.

  19. Analysis of the imaging performance in indirect digital mammography detectors by linear systems and signal detection models

    International Nuclear Information System (INIS)

    Liaparinos, P.; Kalyvas, N.; Kandarakis, I.; Cavouras, D.

    2013-01-01

    Purpose: The purpose of this study was to provide an analysis of imaging performance in digital mammography, using indirect detector instrumentation, by combining Linear Cascaded Systems (LCS) theory and Signal Detection Theory (SDT). Observer performance was assessed by examining frequently employed detectors, consisting of phosphor-based X-ray converters (granular Gd2O2S:Tb and structured CsI:Tl) coupled with the recently introduced complementary metal-oxide-semiconductor (CMOS) sensor. By applying combinations of various irradiation conditions (filter-target and exposure levels at 28 kV) to the imaging detectors, our study aimed to find the optimum system set-up for digital mammography. For this purpose, the signal-to-noise transfer properties of the medical imaging detectors were examined for breast carcinoma detectability. Methods: An analytical model was applied to calculate X-ray interactions within software breast phantoms and detective media. Modeling involved: (a) three X-ray spectra used in digital mammography, 28 kV Mo/Mo (Mo: 0.030 mm), 28 kV Rh/Rh (Rh: 0.025 mm) and 28 kV W/Rh (Rh: 0.060 mm), at entrance surface air kerma (ESAK) values of 3 mGy and 5 mGy, (b) a 5 cm thick Perspex software phantom incorporating a small Ca lesion of varying size (0.1-1 cm), and (c) two 200 μm thick phosphor-based X-ray converters (Gd2O2S:Tb, CsI:Tl), coupled to a CMOS-based detector of 22.5 μm pixel size. Results: The best (lowest) contrast threshold (CT) values were obtained with the combination of (i) W/Rh target-filter, (ii) 5 mGy ESAK, and (iii) the CsI:Tl-CMOS detector. For a lesion diameter of 0.5 cm, the CT was improved, in comparison to the other anode/filter combinations, by approximately 42% relative to Rh/Rh and 55% relative to Mo/Mo for a small carcinoma (0.1 cm), and by approximately 50% relative to Rh/Rh and 125% relative to Mo/Mo for a large carcinoma (1 cm), considering the 5 mGy X-ray beam. By decreasing lesion diameter and thickness, a limiting CT (100%) occurred for size

  20. Design Implementation and Testing of a VLSI High Performance ASIC for Extracting the Phase of a Complex Signal

    National Research Council Canada - National Science Library

    Altmeyer, Ronald

    2002-01-01

    This thesis documents the research, circuit design, and simulation testing of a VLSI ASIC which extracts phase angle information from a complex sampled signal using the arctangent relationship φ = tan⁻¹(Q/I) …
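
    In software, the four-quadrant arctangent is the reference computation for this function; the ASIC itself would typically realize it with dedicated hardware such as a CORDIC pipeline, which this sketch does not model.

```python
import numpy as np

# Reference model of the function: phase of a complex sample from its
# quadrature components, phi = atan2(Q, I), valid in all four quadrants.
i = np.array([1.0, -1.0, -1.0, 0.5])
q = np.array([1.0, 1.0, -1.0, -0.5])
phi = np.arctan2(q, i)                     # radians in (-pi, pi]
print(np.degrees(phi))                     # [ 45. 135. -135. -45.]
```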

  1. Arx: a toolset for the efficient simulation and direct synthesis of high-performance signal processing algorithms

    NARCIS (Netherlands)

    Hofstra, K.L.; Gerez, Sabih H.

    2007-01-01

    This paper addresses the efficient implementation of high-performance signal-processing algorithms. In the early stages of such designs many computation-intensive simulations may be necessary. This calls for hardware description formalisms targeted for efficient simulation (such as the programming

  2. Accounting for Laser Extinction, Signal Attenuation, and Secondary Emission While Performing Optical Patternation in a Single Plane

    National Research Council Canada - National Science Library

    Brown, C

    2002-01-01

    An optical patternation method is described where the effects of laser extinction and signal attenuation can be corrected for, and where secondary scattering effects are reduced by probing the spray...

  3. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  4. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Full Text Available Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  5. PAU/GNSS-R: Implementation, Performance and First Results of a Real-Time Delay-Doppler Map Reflectometer Using Global Navigation Satellite System Signals

    Directory of Open Access Journals (Sweden)

    Enric Valencia

    2008-05-01

    Full Text Available Signals from Global Navigation Satellite Systems (GNSS) were originally conceived for position and speed determination, but they can be used as signals of opportunity as well. The reflection process over a given surface modifies the properties of the scattered signal, and therefore, by processing the reflected signal, relevant geophysical data regarding the surface under study (land, sea, ice…) can be retrieved. In essence, a GNSS-R receiver is a multi-channel GNSS receiver that computes the received power from a given satellite at a number of different delay and Doppler bins of the incoming signal. The first approaches to building such a receiver consisted of sampling and storing the scattered signal for later post-processing. However, a real-time approach to the problem is desirable to obtain useful geophysical variables immediately and to reduce the amount of data. The use of FPGA technology makes this possible, while at the same time the system can be easily reconfigured. The signal tracking and processing constraints made it necessary to fully design several new blocks. The uniqueness of the implemented system described in this work is its capability to compute Delay-Doppler Maps (DDMs) in real time, either for four simultaneous satellites or for just one but with a larger number of bins. The first tests were conducted from a cliff over the sea and demonstrate the successful performance of the instrument in computing DDMs in real time from the measured reflected GNSS-R signals. The processing of these measurements shall yield quantitative relationships between the sea state (mainly driven by the surface wind and the swell) and the overall DDM shape. The ultimate goal is to use the DDM shape to correct the sea state influence on the L-band brightness temperature to improve the retrieval of the sea surface salinity (SSS).
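
    Conceptually, each DDM bin is the magnitude of the correlation between the received signal and a replica of the satellite code at a given delay and Doppler shift. The sketch below computes a small DDM on simulated data, with a random ±1 sequence standing in for a real GNSS spreading code; the sample rate, delay/Doppler grids, and noise level are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 1.023e6                                 # sample rate (Hz), assumed
n = 1023
code = rng.choice([-1.0, 1.0], size=n)       # stand-in PRN code

# Simulated received signal: code delayed by 200 samples, 1 kHz Doppler.
t = np.arange(n) / fs
rx = np.roll(code, 200) * np.exp(2j * np.pi * 1000.0 * t)
rx += 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

delays = np.arange(180, 221)                 # delay search bins (samples)
dopplers = np.arange(0, 2001, 250)           # Doppler search bins (Hz)
ddm = np.empty((len(dopplers), len(delays)))
for i, fd in enumerate(dopplers):
    wiped = rx * np.exp(-2j * np.pi * fd * t)             # Doppler wipe-off
    for j, d in enumerate(delays):
        ddm[i, j] = np.abs(np.vdot(np.roll(code, d), wiped))  # correlate

peak = np.unravel_index(ddm.argmax(), ddm.shape)
print("peak at Doppler", dopplers[peak[0]], "Hz, delay", delays[peak[1]])
```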

  6. Integrating angle-frequency domain synchronous averaging technique with feature extraction for gear fault diagnosis

    Science.gov (United States)

    Zhang, Shengli; Tang, J.

    2018-01-01

    Gear fault diagnosis relies heavily on the scrutiny of measured vibration responses. In reality, gear vibration signals are noisy and dominated by meshing frequencies and their harmonics, which often overlay the fault-related components. Moreover, many gear transmission systems, e.g., those in wind turbines, constantly operate under non-stationary conditions. To reduce the influence of non-synchronous components and noise, a fault signature enhancement method built upon angle-frequency domain synchronous averaging is developed in this paper. Instead of being averaged in the time domain, the signals are processed in the angle-frequency domain to solve the issue of phase shifts between signal segments caused by clearances, input disturbances, sampling errors, etc. The enhanced results are then analyzed through feature extraction algorithms to identify the most distinct features for fault classification and identification. Specifically, Kernel Principal Component Analysis (KPCA), targeting nonlinearity, Multilinear Principal Component Analysis (MPCA), targeting high dimensionality, and Locally Linear Embedding (LLE), targeting local similarity among the enhanced data, are employed and compared to yield insights. Numerical and experimental investigations are performed, and the results reveal the effectiveness of angle-frequency domain synchronous averaging in enabling feature extraction and classification.
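
    A simplified form of synchronous averaging, resampling each shaft revolution onto a common angular grid before averaging (angle domain only, without the paper's angle-frequency treatment), can be sketched as follows. The tachometer-derived revolution boundaries and the synthetic 8-per-revolution gear tone are assumptions.

```python
import numpy as np

def angle_synchronous_average(x, rev_bounds, n_angle=256):
    """Resample each revolution onto a common angular grid and average.
    rev_bounds: sample indices of successive tachometer pulses."""
    grid = np.linspace(0.0, 1.0, n_angle, endpoint=False)
    segs = []
    for a, b in zip(rev_bounds[:-1], rev_bounds[1:]):
        frac = np.linspace(0.0, 1.0, b - a, endpoint=False)  # angle (rev)
        segs.append(np.interp(grid, frac, x[a:b]))
    return np.mean(segs, axis=0)

# Synthetic gear signal whose shaft speed drifts, plus heavy noise.
rng = np.random.default_rng(9)
n_rev, base = 40, 500
bounds = np.cumsum([0] + [base + rng.integers(-25, 25) for _ in range(n_rev)])
x = np.concatenate(
    [np.sin(2 * np.pi * 8 * np.linspace(0, 1, b - a, endpoint=False))
     for a, b in zip(bounds[:-1], bounds[1:])])
x += 0.8 * rng.standard_normal(len(x))

avg = angle_synchronous_average(x, bounds)
ref = np.sin(2 * np.pi * 8 * np.linspace(0, 1, 256, endpoint=False))
print(avg.shape, np.round(np.std(avg - ref), 3))   # tone recovered
```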

  7. Quantification of signal detection performance degradation induced by phase-retrieval in propagation-based x-ray phase-contrast imaging

    Science.gov (United States)

    Chou, Cheng-Ying; Anastasio, Mark A.

    2016-04-01

    In propagation-based X-ray phase-contrast (PB XPC) imaging, the measured image contains a mixture of absorption- and phase-contrast. To obtain separate images of the projected absorption and phase (i.e., refractive) properties of a sample, phase retrieval methods can be employed. It has been suggested that phase-retrieval can always improve image quality in PB XPC imaging. However, when objective (task-based) measures of image quality are employed, this is not necessarily true and phase retrieval can be detrimental. In this work, signal detection theory is utilized to quantify the performance of a Hotelling observer (HO) for detecting a known signal in a known background. Two cases are considered. In the first case, the HO acts directly on the measured intensity data. In the second case, the HO acts on either the retrieved phase or absorption image. We demonstrate that the performance of the HO is superior when acting on the measured intensity data. The loss of task-specific information induced by phase-retrieval is quantified by computing the efficiency of the HO as the ratio of the test statistic signal-to-noise ratio (SNR) for the two cases. The effect of the system geometry on this efficiency is systematically investigated. Our findings confirm that phase-retrieval can impair signal detection performance in XPC imaging.
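
    The Hotelling observer underlying these comparisons reduces to a quadratic form; a compact sketch assuming known class means and data covariance (all names illustrative):

        import numpy as np

        def hotelling_snr2(mean_present, mean_absent, cov):
            """Test-statistic SNR^2 of the Hotelling observer for a
            signal-known-exactly detection task."""
            dg = mean_present - mean_absent      # mean data difference
            w = np.linalg.solve(cov, dg)         # Hotelling template
            return float(dg @ w)

        # Efficiency of phase retrieval as defined in the abstract:
        # the ratio of SNR^2 computed on retrieved images to SNR^2
        # computed directly on the measured intensity data.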

  8. Continuous detection of weak sensory signals in afferent spike trains: the role of anti-correlated interspike intervals in detection performance.

    Science.gov (United States)

    Goense, J B M; Ratnam, R

    2003-10-01

    An important problem in sensory processing is deciding whether fluctuating neural activity encodes a stimulus or is due to variability in baseline activity. Neurons that subserve detection must examine incoming spike trains continuously, and quickly and reliably differentiate signals from baseline activity. Here we demonstrate that a neural integrator can perform continuous signal detection, with performance exceeding that of trial-based procedures, where spike counts in signal and baseline windows are compared. The procedure was applied to data from electrosensory afferents of weakly electric fish (Apteronotus leptorhynchus), where weak perturbations generated by small prey add approximately 1 spike to a baseline of approximately 300 spikes s⁻¹. The hypothetical postsynaptic neuron, modeling an electrosensory lateral line lobe cell, could detect an added spike within 10-15 ms, achieving near-ideal detection performance (80-95%) at false alarm rates of 1-2 Hz, while trial-based testing resulted in only 30-35% correct detections at that false alarm rate. The performance improvement was due to anti-correlations in the afferent spike train, which reduced both the amplitude and duration of fluctuations in postsynaptic membrane activity, and so decreased the number of false alarms. Anti-correlations can be exploited to improve detection performance only if there is memory of prior decisions.
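
    A minimal sketch of the continuous-detection idea, with an illustrative leaky-integrator time constant and threshold rather than the fitted parameters of the modeled cell:

        import numpy as np

        def integrate_and_detect(spike_times, tau=0.010, theta=3.5,
                                 dt=1e-4, t_end=1.0):
            """Low-pass filter a spike train with a leaky integrator and
            flag threshold crossings; anti-correlated interspike intervals
            shrink baseline fluctuations and hence the false-alarm rate."""
            t = np.arange(0.0, t_end, dt)
            drive = np.zeros_like(t)
            idx = (np.asarray(spike_times) / dt).astype(int)
            drive[idx[idx < t.size]] = 1.0 / dt   # unit-area spike impulses
            v = np.zeros_like(t)
            for k in range(1, t.size):            # Euler leaky integration
                v[k] = v[k-1] + dt * (-v[k-1] / tau + drive[k])
            return t, v, v > theta                # membrane trace, detections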

  9. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), polarization shift keying (POLSK), and coherent optical wireless communication (coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE, we have incorporated aperture-averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and can achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture-averaging effect we can achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with coherent OWC modulation a favorable candidate for improving the ASE of an FSO communication system.
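
    A hedged numerical sketch of the two adaptation rules in their standard forms, using a lognormal irradiance model with illustrative parameters (not the paper's exact channel model):

        import numpy as np

        rng = np.random.default_rng(1)
        sigma2, gamma_bar = 0.5, 10.0   # log-irradiance variance, mean SNR

        # Unit-mean lognormal irradiance; instantaneous SNR taken
        # proportional to irradiance for simplicity.
        I = rng.lognormal(-sigma2 / 2, np.sqrt(sigma2), size=1_000_000)
        gamma = gamma_bar * I

        ase_ora = np.mean(np.log2(1 + gamma))             # rate follows channel
        ase_cifr = np.log2(1 + 1 / np.mean(1 / gamma))    # channel inversion
        print(f"ORA {ase_ora:.2f} bit/s/Hz, CIFR {ase_cifr:.2f} bit/s/Hz")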

  10. Signal Processing

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Signal processing techniques, extensively used nowadays to maximize the performance of audio and video equipment, have been a key part in the design of hardware and software for high energy physics detectors since pioneering applications in the UA1 experiment at CERN in 1979

  11. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons
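
    Schematically, the model is the nonlinear Schroedinger equation with a rapidly varying nonlinearity coefficient; the following form is a sketch under an assumed normalization (the paper's scalings may differ):

        i\,u_t + u_{xx} + \gamma\!\left(t/\varepsilon\right)\,|u|^{2}u = 0,
        \qquad \gamma(\tau) = \gamma_0 + \gamma_1(\tau), \qquad \langle\gamma_1\rangle = 0,

    where ε is the management period; to leading order the averaged Hamiltonian dynamics retains γ₀|u|²u, with corrections of order ε generated by the zero-mean part γ₁.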

  12. Analysis of defense signals in Arabidopsis thaliana leaves by ultra-performance liquid chromatography/tandem mass spectrometry: jasmonates, salicylic acid, abscisic acid.

    Science.gov (United States)

    Stingl, Nadja; Krischke, Markus; Fekete, Agnes; Mueller, Martin J

    2013-01-01

    Defense signaling compounds and phytohormones play an essential role in the regulation of plant responses to various environmental abiotic and biotic stresses. Among the most severe stresses are herbivory, pathogen infection, and drought stress. The major hormones involved in the regulation of these responses are 12-oxo-phytodienoic acid (OPDA), the pro-hormone jasmonic acid (JA) and its biologically active isoleucine conjugate (JA-Ile), salicylic acid (SA), and abscisic acid (ABA). These signaling compounds are present and biologically active at very low concentrations from ng/g to μg/g dry weight. Accurate and sensitive quantification of these signals has made a significant contribution to the understanding of plant stress responses. Ultra-performance liquid chromatography (UPLC) coupled with a tandem quadrupole mass spectrometer (MS/MS) has become an essential technique for the analysis and quantification of these compounds.

  13. Time domain analysis of the signal averaged electrocardiogram to detect late potentials in heart failure patients with different etiologies

    Directory of Open Access Journals (Sweden)

    Ernani de Sousa Grell

    2006-09-01

    OBJECTIVE: To evaluate the frequency, clinical correlations, and prognostic influence of late potentials on the signal-averaged electrocardiogram in heart failure patients with different etiologies. METHODS: The signal-averaged electrocardiogram was studied over 42 months in 288 heart failure patients with different etiologies, 215 men (74.65%) and 73 women (25.35%), aged 16 to 70 years (mean 51.5, standard deviation 11.24). The etiologies of heart failure were: hypertensive cardiomyopathy, 78 (27.1%); idiopathic dilated cardiomyopathy, 73 (25.4%); ischemic cardiomyopathy, 65 (22.6%); Chagas disease cardiomyopathy, 42 (14.6%); alcoholic cardiomyopathy, 9 (3.1%); peripartum cardiomyopathy, 6 (2.1%); valvular heart disease, 2 (4.2%); and viral myocarditis, 3 (1.04%). The standard QRS duration, the filtered QRS duration, the duration of the terminal signal below 40 µV, and the root-mean-square voltage of the last 40 ms were evaluated with respect to age, sex, etiology, findings on the 12-lead resting electrocardiogram, the echocardiogram, the long-duration (Holter) electrocardiogram, and mortality. For the statistical analysis, Fisher's exact test, Student's t test, the Mann-Whitney test, analysis of variance, the log-rank test, and the Kaplan-Meier method were used. RESULTS: Late potentials were diagnosed in 90 (31.3%) patients, with no correlation with etiology. Their presence was associated with lower maximum oxygen consumption on cycle-ergometer spirometry (p=0.001), sustained and non-sustained ventricular tachycardia on Holter monitoring (p=0.001), and sudden death and mortality (p…).

  14. Predicting the performance of a power amplifier using large-signal circuit simulations of an AlGaN/GaN HFET model

    Science.gov (United States)

    Bilbro, Griff L.; Hou, Danqiong; Yin, Hong; Trew, Robert J.

    2009-02-01

    We have quantitatively modeled the conduction current and charge storage of an HFET in terms of its physical dimensions and material properties. For DC or small-signal RF operation, no adjustable parameters are necessary to predict the terminal characteristics of the device. Linear performance measures such as small-signal gain and input admittance can be predicted directly from the geometric structure and material properties assumed for the device design. We have validated our model at low frequency against experimental I-V measurements and against two-dimensional device simulations. We discuss our recent extension of the model to include a larger class of electron velocity-field curves. We also discuss the recent reformulation of the model to facilitate its implementation in commercial large-signal high-frequency circuit simulators. Large-signal RF operation is more complex. First, the highest CW microwave power is fundamentally bounded by a brief, reversible channel breakdown in each RF cycle. Second, the highest experimental measurements of efficiency, power, or linearity always require harmonic load pull and possibly also harmonic source pull. Presently, our model accounts for these facts with an adjustable breakdown voltage and with adjustable load and source impedances for the fundamental frequency and its harmonics. This has allowed us to validate our model under large-signal RF conditions by simultaneously fitting experimental measurements of output power, gain, and power-added efficiency of real devices. We show that the resulting model can be used to compare alternative device designs in terms of their large-signal performance, such as their output power at 1 dB gain compression or their third-order intercept points. In addition, the model provides insight into new device physics features enabled by the unprecedented current and voltage levels of AlGaN/GaN HFETs, including non-ohmic resistance in the source access regions and partial depletion of

  15. Parametric modelling of cardiac system multiple measurement signals: an open-source computer framework for performance evaluation of ECG, PCG and ABP event detectors.

    Science.gov (United States)

    Homaeinezhad, M R; Sabetian, P; Feizollahi, A; Ghaffari, A; Rahmani, R

    2012-02-01

    The major focus of this study is to present a performance accuracy assessment framework based on mathematical modelling of cardiac system multiple measurement signals. Three mathematical algebraic subroutines with simple structural functions for synthetic generation of the synchronously triggered electrocardiogram (ECG), phonocardiogram (PCG) and arterial blood pressure (ABP) signals are described. In the case of ECG signals, normal and abnormal PQRST cycles in complicated conditions such as fascicular ventricular tachycardia, rate-dependent conduction block and acute Q-wave infarctions of the inferior and anterolateral walls can be simulated. Also, a continuous ABP waveform with corresponding individual events such as systolic, diastolic and dicrotic pressures with normal or abnormal morphologies can be generated by another part of the model. In addition, the mathematical synthetic PCG framework is able to generate the S4-S1-S2-S3 cycles in normal conditions and in cardiac disorders such as stenosis, insufficiency, regurgitation and gallop. In the PCG model, the amplitude and frequency content (5-700 Hz) of each sound and its variation patterns can be specified. The three proposed models were implemented to generate artificial signals with various abnormality types and signal-to-noise ratios (SNR), for quantitative detection-delineation performance assessment of several ECG, PCG and ABP individual event detectors designed based on the Hilbert transform, the discrete wavelet transform, geometric features such as the area curve length metric (ACLM), the multiple higher order moments (MHOM) metric, and the principal components analysed geometric index (PCAGI). For each method the detection-delineation operating characteristics were obtained automatically in terms of sensitivity, positive predictivity and delineation (segmentation) RMS error, and checked by the cardiologist. The Matlab m-file scripts of the synthetic ECG, ABP and PCG signal generators are available in the Appendix.

  16. On the construction of a time base and the elimination of averaging errors in proxy records

    Science.gov (United States)

    Beelaerts, V.; De Ridder, F.; Bauwens, M.; Schmitz, N.; Pintelon, R.

    2009-04-01

    Proxies are sources of climate information which are stored in natural archives (e.g. ice-cores, sediment layers on ocean floors and animals with calcareous marine skeletons). Measuring these proxies produces very short records and mostly involves sampling solid substrates, which is subject to the following two problems: Problem 1: Natural archives are sampled equidistantly on a distance grid along their accretion axis. Starting from these distance series, a time series needs to be constructed, as comparison of different data records is only meaningful on a time grid. The time series will be non-equidistant, as the accretion rate is non-constant. Problem 2: A typical example of sampling solid substrates is drilling. Because of the dimensions of the drill, the holes drilled will not be infinitesimally small. Consequently, samples are not taken at a point in distance, but rather over a volume in distance. This holds for most sampling methods in solid substrates. As a consequence, when the continuous proxy signal is sampled, it will be averaged over the volume of the sample, resulting in an underestimation of the amplitude. Whether this averaging effect is significant depends on the volume of the sample and the variations of interest of the proxy signal. Starting from the measured signal, the continuous signal needs to be reconstructed in order to eliminate these averaging errors. The aim is to provide an efficient algorithm to identify the non-linearities in the distance-time relationship, called time base distortions, and to correct for the averaging effects. Because this is a parametric method, an assumption about the proxy signal needs to be made: the proxy record on a time base is assumed to be harmonic; this is a reasonable assumption, because natural archives often exhibit a seasonal cycle. In a first approach the averaging effects are assumed to be in one direction only, i.e. the direction of the axis on which the measurements were performed. The
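
    The amplitude loss from sampling over a finite width can be checked directly: a harmonic of period T averaged over a sample width Δ is attenuated by sin(πΔ/T)/(πΔ/T). A small numerical confirmation with illustrative values:

        import numpy as np

        T, delta = 1.0, 0.25                  # proxy period and sample width
        t = np.linspace(0.0, 10.0, 20_000)
        proxy = np.cos(2 * np.pi * t / T)     # "true" harmonic proxy signal

        win = int(delta / (t[1] - t[0]))      # volume sampling = moving average
        measured = np.convolve(proxy, np.ones(win) / win, mode="same")

        print(f"measured/true amplitude: {measured[win:-win].max():.3f}")
        print(f"sinc prediction:         {np.sinc(delta / T):.3f}")  # sin(pi x)/(pi x)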

  17. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
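
    One simple reading of the proposed remedy is a fixed-point iteration in which each experiment's error is re-evaluated at the current common estimate rather than at its own measured value; a sketch, with `sigma_of` standing in for the assumed error-versus-value dependence:

        import numpy as np

        def average_sliding_errors(x, sigma_of, tol=1e-10, max_iter=100):
            """Weighted average when reported errors depend on the value:
            sigma_of(value, i) returns experiment i's error at `value`."""
            x = np.asarray(x, dtype=float)
            mu = x.mean()
            for _ in range(max_iter):
                w = np.array([1.0 / sigma_of(mu, i) ** 2
                              for i in range(x.size)])
                mu_new = (w * x).sum() / w.sum()
                if abs(mu_new - mu) < tol:
                    mu = mu_new
                    break
                mu = mu_new
            return mu, 1.0 / np.sqrt(w.sum())   # estimate and its error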

  18. Improved slow-light performance of 10 Gb/s NRZ, PSBT and DPSK signals in fiber broadband SBS.

    Science.gov (United States)

    Yi, Lilin; Jaouen, Yves; Hu, Weisheng; Su, Yikai; Bigo, Sébastien

    2007-12-10

    We have demonstrated error-free operation of slow light via stimulated Brillouin scattering (SBS) in optical fiber for 10-Gb/s signals with different modulation formats, including non-return-to-zero (NRZ), phase-shaped binary transmission (PSBT) and differential phase-shift keying (DPSK). The SBS gain bandwidth is broadened by using current noise modulation of the pump laser diode. The gain shape is simply controlled by the noise density function. Super-Gaussian noise modulation of the Brillouin pump allows a flat-top and sharp-edge SBS gain spectrum, which can reduce slow-light-induced distortion in the case of a 10-Gb/s NRZ signal. The corresponding maximal delay time with error-free operation is 35 ps. We then propose the PSBT format to minimize distortions resulting from the SBS filtering effect and the dispersion accompanying slow light, because of its high spectral efficiency and strong dispersion tolerance. The sensitivity of the 10-Gb/s PSBT signal is 5.2 dB better than the NRZ case with the same 35-ps delay. A maximal delay of 51 ps with error-free operation has been achieved. Furthermore, the DPSK format is directly demodulated through a Gaussian-shaped SBS gain, which is achieved using Gaussian-noise modulation of the Brillouin pump. The maximal error-free time delay after demodulation of a 10-Gb/s DPSK signal is as high as 81.5 ps, which is the best demonstrated result for 10-Gb/s slow light.

  19. High average power linear induction accelerator development

    International Nuclear Information System (INIS)

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  20. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by the accumulation of the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it can preserve both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that AGDI has better identification and verification performance than GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.

  1. An alternative scheme of the Bogolyubov's average method

    International Nuclear Information System (INIS)

    Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.

    1990-01-01

    In this paper the average energy and the magnetic moment conservation laws in the Drift Theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws, and afterwards the average is performed. This scheme is more economical, in terms of time and algebraic calculation, than the usual procedure of Bogolyubov's method. (Author)

  2. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
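
    A hedged sketch of the windowing idea (the authors' exact window law is in the paper; `k` is an illustrative scale factor tying window width to migration velocity):

        import numpy as np

        def velocity_adaptive_smooth(signal, fs, velocity, k=0.05):
            """Moving average whose window shrinks for fast-migrating
            (high-frequency) peaks and grows for slow (low-frequency)
            ones; velocity[i] is the expected migration velocity at i."""
            out = np.empty(signal.size)
            for i in range(signal.size):
                half = max(1, int(k * fs / velocity[i]))
                lo, hi = max(0, i - half), min(signal.size, i + half + 1)
                out[i] = signal[lo:hi].mean()
            return out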

  3. Understanding the Mind or Predicting Signal-Dependent Action? Performance of Children With and Without Autism on Analogues of the False-Belief Task

    OpenAIRE

    Bowler, D. M.; Briskman, J.; Gurvidi, N.; Fornells-Ambrojo, M.

    2005-01-01

    To evaluate the claim that correct performance on unexpected transfer false-belief tasks specifically involves mental-state understanding, two experiments were carried out with children with autism, intellectual disabilities, and typical development. In both experiments, children were given a standard unexpected transfer false-belief task and a mental-state-free, mechanical analogue task in which participants had to predict the destination of a train based on true or false signal information....

  4. An application of commercial data averaging techniques in pulsed photothermal experiments

    International Nuclear Information System (INIS)

    Grozescu, I.V.; Moksin, M.M.; Wahab, Z.A.; Yunus, W.M.M.

    1997-01-01

    We present an application of data averaging technique commonly implemented in many commercial digital oscilloscopes or waveform digitizers. The technique was used for transient data averaging in the pulsed photothermal radiometry experiments. Photothermal signals are surrounded by an important amount of noise which affect the precision of the measurements. The effect of the noise level on photothermal signal parameters in our particular case, fitted decay time, is shown. The results of the analysis can be used in choosing the most effective averaging technique and estimating the averaging parameter values. This would help to reduce the data acquisition time while improving the signal-to-noise ratio

  5. on the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    estimated using least squares and Newton-Raphson iterative methods. To determine the order of the ... r is the degree of the polynomial while j is the number of lags of the ... use a real time series dataset, monthly rainfall and temperature series ...

  6. Evaluating low-resolution tomography neurofeedback by single dissociation of mental rotation task from stop signal task performance.

    Science.gov (United States)

    Getter, Nir; Kaplan, Zeev; Todder, Doron

    2015-10-01

    Electroencephalography source localization neurofeedback, i.e., standardized low-resolution tomography (sLORETA) neurofeedback, is a non-invasive method for altering region-specific brain activity. This is an improvement over traditional neurofeedback, which was based on recordings from a single scalp electrode. We proposed three criteria clusters as a methodological framework to evaluate electroencephalography source localization neurofeedback and present relevant data. Our objective was to evaluate standardized low-resolution EEG tomography neurofeedback by examining how training one neuroanatomical area affects the mental rotation task (which is related to the activity of bilateral parietal regions) and the stop-signal test (which is related to frontal structures). Twelve healthy participants were enrolled in a single-session sLORETA neurofeedback protocol. The participants completed both the mental rotation task and the stop-signal test before and after one sLORETA neurofeedback session. During sLORETA neurofeedback sessions participants watched one sitcom episode while the picture quality co-varied with activity in the superior parietal lobule. Participants were rewarded for increasing activity in this region only. Results showed a significant reaction-time decrease and an increase in accuracy on the mental rotation task after sLORETA neurofeedback, but not on the stop-signal task. Together with the behavioral changes, a significant activity increase was found in the left parietal region after sLORETA neurofeedback compared with baseline. We concluded that the activity increase in the parietal region had a specific effect on the mental rotation task. Tasks unrelated to parietal brain activity were unaffected. Therefore, sLORETA neurofeedback could be used as a research or clinical tool for cognitive disorders. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  8. Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.

    Directory of Open Access Journals (Sweden)

    Jacinta Chan Phooi M'ng

    The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA'), in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.

  9. Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.

    Science.gov (United States)

    Chan Phooi M'ng, Jacinta; Zainudin, Rozaimah

    2016-01-01

    The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA') in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.
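
    The exact AMA' recipe is defined in the paper; the closely related Kaufman adaptive moving average illustrates the mechanism, with an efficiency ratio (net move over total move) steering the smoothing constant between fast and slow limits:

        import numpy as np

        def adaptive_moving_average(price, n=10, fast=2, slow=30):
            """Kaufman-style adaptive MA: trend-efficient stretches get a
            fast smoothing constant, choppy range trading gets a slow one."""
            price = np.asarray(price, dtype=float)
            change = np.abs(price[n:] - price[:-n])
            vol = np.array([np.abs(np.diff(price[i:i + n + 1])).sum()
                            for i in range(price.size - n)])
            er = np.where(vol > 0, change / vol, 0.0)   # efficiency ratio
            sc = (er * (2/(fast+1) - 2/(slow+1)) + 2/(slow+1)) ** 2
            ama = np.empty(price.size - n)
            ama[0] = price[n]
            for i in range(1, ama.size):
                ama[i] = ama[i-1] + sc[i] * (price[n+i] - ama[i-1])
            return ama                      # e.g. go long while price > ama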

  10. AC Small Signal Modeling of PWM Y-Source Converter by Circuit Averaging and Averaged Switch Modeling Technique

    DEFF Research Database (Denmark)

    Forouzesh, Mojtaba; Siwakoti, Yam Prasad; Blaabjerg, Frede

    2016-01-01

    Magnetically coupled Y-source impedance network is a newly proposed structure with versatile features intended for various power converter applications e.g. in the renewable energy technologies. The voltage gain of the Y-source impedance network rises exponentially as a function of turns ratio, w...

  11. The Influence of the External Signal Modulation Waveform and Frequency on the Performance of a Photonic Forced Oscillator.

    Science.gov (United States)

    Sánchez-Castro, Noemi; Palomino-Ovando, Martha Alicia; Estrada-Wiese, Denise; Valladares, Nydia Xcaret; Del Río, Jesus Antonio; de la Mora, Maria Beatriz; Doti, Rafael; Faubert, Jocelyn; Lugo, Jesus Eduardo

    2018-05-21

    Photonic crystals have been an object of interest because of their ability to inhibit certain wavelengths and allow the transmission of others. Using these properties, we designed a photonic structure known as a photodyne, formed by two porous silicon one-dimensional photonic crystals with an air defect between them. When the photodyne is illuminated with appropriate light, electromagnetic forces can be generated within the structure, and these can be maximized if the light becomes localized inside the defect region. These electromagnetic forces allow the microcavity to oscillate mechanically. In the experiment, a chopper was driven by a signal generator to modulate the laser light that was used. The driving frequency and the signal modulation waveform (rectangular, sinusoidal or triangular) were varied to find optimal conditions for the structure to oscillate. The microcavity displacement amplitude, velocity amplitude, and the Fourier spectrum of the latter and its frequency were measured by means of a vibrometer. The mechanical oscillations are modeled and compared with the experimental results and show good agreement. For external frequency values of 5 Hz and 10 Hz, the best option was a sinusoidal waveform, which gave higher photodyne displacements and velocity amplitudes. Nonetheless, for an external frequency of 15 Hz, the best option was the rectangular waveform.

  12. The Influence of the External Signal Modulation Waveform and Frequency on the Performance of a Photonic Forced Oscillator

    Directory of Open Access Journals (Sweden)

    Noemi Sánchez-Castro

    2018-05-01

    Photonic crystals have been an object of interest because of their ability to inhibit certain wavelengths and allow the transmission of others. Using these properties, we designed a photonic structure known as a photodyne, formed by two porous silicon one-dimensional photonic crystals with an air defect between them. When the photodyne is illuminated with appropriate light, electromagnetic forces can be generated within the structure, and these can be maximized if the light becomes localized inside the defect region. These electromagnetic forces allow the microcavity to oscillate mechanically. In the experiment, a chopper was driven by a signal generator to modulate the laser light that was used. The driving frequency and the signal modulation waveform (rectangular, sinusoidal or triangular) were varied to find optimal conditions for the structure to oscillate. The microcavity displacement amplitude, velocity amplitude, and the Fourier spectrum of the latter and its frequency were measured by means of a vibrometer. The mechanical oscillations are modeled and compared with the experimental results and show good agreement. For external frequency values of 5 Hz and 10 Hz, the best option was a sinusoidal waveform, which gave higher photodyne displacements and velocity amplitudes. Nonetheless, for an external frequency of 15 Hz, the best option was the rectangular waveform.

  13. An investigation of signal performance enhancements achieved through innovative pixel design across several generations of indirect detection, active matrix, flat-panel arrays

    International Nuclear Information System (INIS)

    Antonuk, Larry E.; Zhao Qihua; El-Mohri, Youcef; Du Hong; Wang Yi; Street, Robert A.; Ho, Jackson; Weisfield, Richard; Yao, William

    2009-01-01

    Active matrix flat-panel imager (AMFPI) technology is being employed for an increasing variety of imaging applications. An important element in the adoption of this technology has been significant ongoing improvements in optical signal collection achieved through innovations in indirect detection array pixel design. Such improvements have a particularly beneficial effect on performance in applications involving low exposures and/or high spatial frequencies, where detective quantum efficiency is strongly reduced due to the relatively high level of additive electronic noise compared to signal levels of AMFPI devices. In this article, an examination of various signal properties, as determined through measurements and calculations related to novel array designs, is reported in the context of the evolution of AMFPI pixel design. For these studies, dark, optical, and radiation signal measurements were performed on prototype imagers incorporating a variety of increasingly sophisticated array designs, with pixel pitches ranging from 75 to 127 μm. For each design, detailed measurements of fundamental pixel-level properties conducted under radiographic and fluoroscopic operating conditions are reported and the results are compared. A series of 127 μm pitch arrays employing discrete photodiodes culminated in a novel design providing an optical fill factor of ∼80% (thereby assuring improved x-ray sensitivity), and demonstrating low dark current, very low charge trapping and charge release, and a large range of linear signal response. In two of the designs having 75 and 90 μm pitches, a novel continuous photodiode structure was found to provide fill factors that approach the theoretical maximum of 100%. Both sets of novel designs achieved large fill factors by employing architectures in which some, or all of the photodiode structure was elevated above the plane of the pixel addressing transistor. Generally, enhancement of the fill factor in either discrete or continuous

  14. PEAK-TO-AVERAGE POWER RATIO REDUCTION USING CODING AND HYBRID TECHNIQUES FOR OFDM SYSTEM

    Directory of Open Access Journals (Sweden)

    Bahubali K. Shiragapur

    2016-03-01

    In this article, error-correction coding techniques are investigated as a means to reduce the undesirable peak-to-average power ratio (PAPR). The Golay code (24, 12), the Reed-Muller code (16, 11), the Hamming code (7, 4), and a hybrid technique (a combination of signal scrambling and signal distortion) proposed by us are used as the coding techniques; the simulation results show that the hybrid technique reduces PAPR significantly compared to the conventional and modified selective mapping techniques. The simulation results are validated through statistical properties: the proposed technique's autocorrelation value is maximal, indicating the reduction in PAPR. Symbol preference based on Hamming distance is the key idea used to reduce PAPR. The simulation results are discussed in detail in this article.
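
    The signal-scrambling half of such a hybrid can be illustrated with selective mapping: generate several phase-rotated candidates of the same OFDM symbol and transmit the one with the lowest PAPR. A minimal sketch with illustrative parameters (the coding half, e.g. the Golay (24, 12) component, is not modeled):

        import numpy as np

        rng = np.random.default_rng(7)
        N, n_candidates = 64, 8                # subcarriers, SLM candidates

        def papr_db(x):
            p = np.abs(x) ** 2
            return 10 * np.log10(p.max() / p.mean())

        qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
        sym = rng.choice(qpsk, size=N)         # one OFDM symbol (frequency domain)
        plain = np.fft.ifft(sym)

        # Keep the phase-rotated candidate with the lowest PAPR
        # (side-information transmission is not modeled).
        best = min((np.fft.ifft(sym * np.exp(2j * np.pi * rng.random(N)))
                    for _ in range(n_candidates)), key=papr_db)

        print(f"PAPR plain {papr_db(plain):.2f} dB, after SLM {papr_db(best):.2f} dB")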

  15. Identification of moving vehicle forces on bridge structures via moving average Tikhonov regularization

    Science.gov (United States)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin

    2017-08-01

    Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter and exit a bridge deck, due to the low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by combining it with moving average concepts. Firstly, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with stable average value (DFS-SAV). Secondly, the reasonable signal feature of the DFS-SAV is quantified and introduced to improve the penalty function (||x||_2^2) defined in the classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed for assessing the accuracy and feasibility of the proposed method. The illustrated results show that the moving forces can be accurately identified with strong robustness. Some related issues, such as the selection of the moving window length, the effect of different penalty functions, and the effect of different car speeds, are discussed as well.
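
    A hedged sketch of a Tikhonov solve with a moving-average-flavored penalty, standing in for the paper's improved penalty; the matrix names and window length are illustrative:

        import numpy as np

        def ma_tikhonov(A, b, lam, win=5):
            """Solve min ||A x - b||^2 + lam ||(I - M) x||^2, where M is a
            centered moving-average operator: deviations of the identified
            force history from its own local average are penalized."""
            n = A.shape[1]
            M = np.zeros((n, n))
            h = win // 2
            for i in range(n):
                lo, hi = max(0, i - h), min(n, i + h + 1)
                M[i, lo:hi] = 1.0 / (hi - lo)
            L = np.eye(n) - M
            return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)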

  16. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Science.gov (United States)

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267

  17. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Directory of Open Access Journals (Sweden)

    Kyungsoo Kim

    2016-06-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal-to-noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of an event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°.
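
    A common approximation to such joint ML alignment is to iterate between estimating per-trial delays by cross-correlation against a template and re-averaging; a minimal sketch (the paper's estimators differ in detail):

        import numpy as np

        def align_and_average(trials, n_iter=5):
            """Average ERP trials after removing per-trial delays estimated
            by cross-correlation against a progressively refined template."""
            trials = np.asarray(trials, dtype=float)
            ref = trials.mean(axis=0)
            for _ in range(n_iter):
                aligned = []
                for x in trials:
                    xc = np.correlate(x - x.mean(), ref - ref.mean(), "full")
                    shift = xc.argmax() - (x.size - 1)   # delay of x vs. ref
                    aligned.append(np.roll(x, -shift))
                ref = np.mean(aligned, axis=0)           # refined template
            return ref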

  18. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. This work applies the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of translating hardware averaging into noise improvement for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with the miniaturized contacts and the high channel count in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
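
    The 1/√N scaling of uncorrelated amplifier noise under hardware averaging can be checked in a few lines (an illustrative simulation, not the published circuit):

        import numpy as np

        rng = np.random.default_rng(0)
        for n_amp in (1, 2, 4, 8):
            # N amplifiers see the same neural signal but independent noise;
            # averaging their outputs preserves the signal and divides the
            # noise rms by sqrt(N).
            noise = rng.normal(0.0, 1.0, size=(n_amp, 100_000)).mean(axis=0)
            print(f"N={n_amp}: rms {noise.std():.3f} vs 1/sqrt(N) {1/np.sqrt(n_amp):.3f}")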

  19. Visualization of Radial Peripapillary Capillaries Using Optical Coherence Tomography Angiography: The Effect of Image Averaging.

    Directory of Open Access Journals (Sweden)

    Shelley Mo

    To assess the effect of image registration and averaging on the visualization and quantification of the radial peripapillary capillary (RPC) network on optical coherence tomography angiography (OCTA). Twenty-two healthy controls were imaged with a commercial OCTA system (AngioVue, Optovue, Inc.). Ten 10x10° scans of the optic disc were obtained, and the most superficial layer (a 50-μm slab extending from the inner limiting membrane) was extracted for analysis. Rigid registration was achieved using ImageJ, and averaging of each 2 to 10 frames was performed in five ~2x2° regions of interest (ROI) located 1° from the optic disc margin. The ROI were automatically skeletonized. Signal-to-noise ratio (SNR), number of endpoints and mean capillary length from the skeleton, capillary density, and mean intercapillary distance (ICD) were measured for the reference and each averaged ROI. Repeated measures analysis of variance was used to assess statistical significance. Three patients with primary open angle glaucoma were also imaged to compare RPC density to controls. Qualitatively, vessels appeared smoother and closer to histologic descriptions with an increasing number of averaged frames. Quantitatively, the number of endpoints decreased by 51%, and SNR, mean capillary length, capillary density, and ICD increased by 44%, 91%, 11%, and 4.5%, respectively, from single frame to 10-frame averaged. The 10-frame averaged images from the glaucomatous eyes revealed decreased density correlating to visual field defects and retinal nerve fiber layer thinning. OCTA image registration and averaging is a viable and accessible method to enhance the visualization of RPCs, with significant improvements in image quality and RPC quantitative parameters. With this technique, we will be able to non-invasively and reliably study RPC involvement in diseases such as glaucoma.

  20. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  1. Brain signal analysis using EEG and Entropy to study the effect of physical and mental tasks on cognitive performance

    Directory of Open Access Journals (Sweden)

    Dineshen Chuckravanen

    2015-07-01

    Some theoretical control models posit that the fatigue which develops during physical activity is not always peripheral; it is the brain that causes this feeling of fatigue. This fatigue develops due to a decrease of metabolic resources to and from the brain that modulates physical performance. Therefore, this research was conducted to find out whether there is a finite level of metabolic energy resources in the brain, by performing both mental and physical activities to exhaustion. It was found that there was an overflow of information during the exercise-involved experiment. The circular relationship between fatigue, cognitive performance, and arousal state suggests that one must apply more effort to maintain performance levels, which requires more energy resources and eventually accelerates the development of fatigue. Thus, there appeared to be a limited amount of energy resources in the brain, as shown by the cognitive performance of the participants.

  2. Perceived Average Orientation Reflects Effective Gist of the Surface.

    Science.gov (United States)

    Cha, Oakyoon; Chong, Sang Chul

    2018-03-01

    The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.

  3. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  4. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  5. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  6. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  7. Performance comparison of weighted sum-minimum mean square error and virtual signal-to-interference plus noise ratio algorithms in simulated and measured channels

    DEFF Research Database (Denmark)

    Rahimi, Maryam; Nielsen, Jesper Ødum; Pedersen, Troels

    2014-01-01

    A comparison of the achievable data rates of two well-known algorithms, using both simulated and real measured data, is presented. The algorithms maximise the data rate in a cooperative base station (BS) multiple-input-single-output scenario. The weighted sum-minimum mean square error algorithm can be used in multiple-input-multiple-output scenarios, but it has lower performance than the virtual signal-to-interference plus noise ratio algorithm in theory and practice. A real measurement environment consisting of two BS and two users has been studied to evaluate the simulation results.

  8. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...

  9. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  10. A GRID solution for gravitational waves signal analysis from coalescing binaries: performances of test algorithms and further developments

    International Nuclear Information System (INIS)

    Acernese, A; Barone, F; Rosa, R De; Esposito, R; Frasca, S; Mastroserio, P; Milano, L; Palomba, C; Pardi, S; Qipiani, K; Ricci, F; Russo, G

    2004-01-01

    The analysis of data coming from interferometric antennas for gravitational wave detection requires a huge amount of computing power. The usual approach to the detection strategy is to set up computer farms able to perform several tasks in parallel, exchanging data through network links. In this paper a new computation strategy based on the GRID environment is presented. The GRID environment allows several geographically distributed computing resources to exchange data and programs in a secure way, using standard infrastructures. The computing resources can also be geographically distributed on a large scale. Some preliminary tests were performed using a subnetwork of the GRID infrastructure, producing good results in terms of distribution efficiency and time duration.

  11. Salecker-Wigner-Peres clock and average tunneling times

    International Nuclear Information System (INIS)

    Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.

    2011-01-01

    The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated to the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).

  12. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; 
Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  13. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    This paper proposes four types of copulas for the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is assessed with the Average Run Length (ARL), which is compared for each copula. Copula functions specifying the dependence between random variables are used, with dependence measured by Kendall’s tau. The results show that the Normal copula can be used for almost all shifts.
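
    The copula construction itself is beyond a short example, but the core experiment — estimating the ARL of an EWMA chart for exponential observations by Monte Carlo — can be sketched as follows. All parameter values are illustrative, and the observations are drawn independently (no copula dependence is modelled).

```python
import numpy as np

def ewma_arl(lam=0.05, L=2.7, shift=1.0, n_runs=2000, seed=0):
    """Monte Carlo estimate of the Average Run Length (ARL) of a one-sided
    EWMA chart monitoring exponential observations.

    lam   -- EWMA smoothing constant
    L     -- control-limit width in asymptotic standard deviations
    shift -- mean of the exponential observations (1.0 = in control)
    """
    rng = np.random.default_rng(seed)
    mu0 = sigma0 = 1.0                             # Exp(1) in-control process
    sigma_z = sigma0 * np.sqrt(lam / (2.0 - lam))  # asymptotic EWMA std
    ucl = mu0 + L * sigma_z                        # upper control limit
    run_lengths = []
    for _ in range(n_runs):
        z, t = mu0, 0
        while z <= ucl and t < 100_000:            # run until the chart signals
            t += 1
            z = lam * rng.exponential(shift) + (1.0 - lam) * z
        run_lengths.append(t)
    return float(np.mean(run_lengths))

# In-control ARL versus ARL after a 50% upward shift of the mean
print(ewma_arl(shift=1.0), ewma_arl(shift=1.5))
```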

  14. Performance analysis of spectral-phase-encoded optical code-division multiple-access system regarding the incorrectly decoded signal as a nonstationary random process

    Science.gov (United States)

    Yan, Meng; Yao, Minyu; Zhang, Hongming

    2005-11-01

    The performance of a spectral-phase-encoded (SPE) optical code-division multiple-access (OCDMA) system is analyzed. Regarding the incorrectly decoded signal (IDS) as a nonstationary random process, we derive a novel probability distribution for it. The probability distribution of the IDS is considered a chi-squared distribution with degrees of freedom r=1, which is more reasonable and accurate than in previous work. The bit error rate (BER) of an SPE OCDMA system under multiple-access interference is evaluated. Numerical results show that the system can sustain very low BER even when there are multiple simultaneous users, and as the code length becomes longer or the initial pulse becomes shorter, the system performs better.

  15. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  16. Signal to noise ratio (SNR) and image uniformity: an estimate of performance of magnetic resonance imaging (MRI) system

    International Nuclear Information System (INIS)

    Narayan, P.; Suri, S.; Choudhary, S.R.

    2001-01-01

    In the most general definition, noise in an image is any variation that represents a deviation from truth. Noise sources in MRI can be systematic, or random and statistical in nature. Data processing algorithms that smooth and enhance edges by non-linear intensity assignments, among other factors, can affect the distribution of statistical noise. The SNR and image uniformity depend on various parameters of the NMR imaging system (viz. general system calibration, gain, coil tuning, RF shielding, coil loading, image processing, and scan parameters such as TE, TR, interslice distance, slice thickness, pixel size and matrix size). A study of SNR and image uniformity has been performed using a standard head RF coil with different TR values, and estimates of their variation are presented. Different techniques have also been compared using the standard protocol of the Siemens Magnetom Vision Plus MRI system
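
    As an illustration of the kind of estimate involved, the following sketch computes SNR with the common two-acquisition subtraction method and a NEMA-style integral uniformity; this is a generic recipe, not necessarily the protocol used in the study.

```python
import numpy as np

def snr_and_uniformity(img1, img2, roi):
    """SNR by the two-acquisition subtraction method and integral
    uniformity, evaluated over a region of interest (ROI).

    img1, img2 -- two identically acquired magnitude images (2D arrays)
    roi        -- boolean mask selecting the uniform phantom region
    """
    signal = 0.5 * (img1[roi] + img2[roi]).mean()
    # noise std from the difference image; sqrt(2) undoes the subtraction
    noise = (img1[roi] - img2[roi]).std(ddof=1) / np.sqrt(2.0)
    smax, smin = img1[roi].max(), img1[roi].min()
    uniformity = 100.0 * (1.0 - (smax - smin) / (smax + smin))
    return signal / noise, uniformity

# Synthetic example: a noisy uniform phantom imaged twice
rng = np.random.default_rng(0)
phantom = np.full((64, 64), 100.0)
scan1 = phantom + rng.normal(0.0, 2.0, phantom.shape)
scan2 = phantom + rng.normal(0.0, 2.0, phantom.shape)
mask = np.ones(phantom.shape, dtype=bool)
print(snr_and_uniformity(scan1, scan2, mask))   # SNR close to 50
```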

  17. Effects of Arachidonic Acid Supplementation on Acute Anabolic Signaling and Chronic Functional Performance and Body Composition Adaptations.

    Directory of Open Access Journals (Sweden)

    Eduardo O De Souza

    The primary purpose of this investigation was to examine the effects of arachidonic acid (ARA) supplementation on functional performance and body composition in trained males. In addition, we performed a secondary study looking at molecular responses to ARA supplementation following an acute exercise bout in rodents. Thirty strength-trained males (age: 20.4 ± 2.1 yrs) were randomly divided into two groups: ARA or placebo (i.e., CTL). Then, both groups underwent an 8-week, 3-day per week, non-periodized training protocol. Quadriceps muscle thickness, whole-body composition scan (DEXA), muscle strength, and power were assessed at baseline and post-test. In the rodent model, male Wistar rats (~250 g, ~8 weeks old) were pre-fed with either ARA or water (CTL) for 8 days and were fed the final dose of ARA prior to being acutely strength trained via electrical stimulation on unilateral plantar flexions. A mixed muscle sample was removed from the exercised and non-exercised leg 3 hours post-exercise. Lean body mass (2.9%, p<0.0005), upper-body strength (8.7%, p<0.0001), and peak power (12.7%, p<0.0001) increased only in the ARA group. For the animal trial, GSK-β (Ser9) phosphorylation (p<0.001) independent of exercise and AMPK phosphorylation after exercise (p-AMPK lower in ARA, p = 0.041) differed between ARA-fed and CTL rats. Our findings suggest that ARA supplementation can positively augment strength-training induced adaptations in resistance-trained males. However, chronic studies at the molecular level are required to further elucidate how ARA combined with strength training affects muscle adaptation.

  18. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
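
    For intuition, a rotational average can also be estimated numerically by sampling uniformly random orientations; the sketch below does this for a rank-2 tensor, where the analytic isotropic average of e·T·e is Tr(T)/3. The explicit expressions derived in the paper replace such sampling for the higher-rank tensors of multiphoton absorption.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rotational_average(T, n=100_000, seed=0):
    """Monte Carlo rotational average of e.T_lab.e for a rank-2 molecular
    tensor T, a fixed lab-frame polarization e, and uniformly random
    molecular orientations R (with T_lab = R T R^T)."""
    e = np.array([0.0, 0.0, 1.0])                           # lab polarization
    R = Rotation.random(n, random_state=seed).as_matrix()   # shape (n, 3, 3)
    T_lab = np.einsum('nij,jk,nlk->nil', R, T, R)           # R T R^T per sample
    return np.einsum('i,nij,j->n', e, T_lab, e).mean()

T = np.diag([1.0, 2.0, 3.0])
print(rotational_average(T), np.trace(T) / 3.0)   # both approach 2.0
```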

  19. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  20. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic
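
    The idea of trajectory averaging — reporting the running mean of the iterates of a stochastic approximation recursion rather than the last iterate — can be sketched in a few lines. This toy Robbins–Monro example is illustrative only and does not implement the stochastic approximation MCMC setting of the paper.

```python
import numpy as np

def robbins_monro_averaged(h, theta0, n_iter=100_000, a=1.0, alpha=0.6, seed=0):
    """Robbins-Monro recursion theta_{t+1} = theta_t - gamma_t h(theta_t)
    with trajectory (Polyak-Ruppert) averaging: the running mean of the
    iterates is returned alongside the last iterate.  In practice the
    average is often started only after a burn-in period."""
    rng = np.random.default_rng(seed)
    theta = float(theta0)
    theta_bar = 0.0
    for t in range(1, n_iter + 1):
        gamma = a / t ** alpha                 # slowly decaying gain
        theta -= gamma * h(theta, rng)         # noisy mean-field step
        theta_bar += (theta - theta_bar) / t   # running average of iterates
    return theta, theta_bar

# Toy example: find the root of E[h(theta)] with h(theta) = theta - 2 + noise
last, avg = robbins_monro_averaged(lambda th, rng: th - 2.0 + rng.normal(), 10.0)
print(last, avg)   # the averaged iterate is typically the better estimate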

  1. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    No. 304 (2006), pp. 1-65. ISSN 1211-3298. Institutional research plan: CEZ:MSM0021620846. Keywords: tax * labor supply * average tax. Subject RIV: AH - Economics. http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  2. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
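
    As a small taste of the material, an ordered weighted averaging (OWA) function — one of the families the book extends — applies its weights to the sorted inputs rather than to fixed arguments; a minimal sketch:

```python
import numpy as np

def owa(x, w):
    """Ordered Weighted Averaging: weights apply to the inputs sorted in
    descending order.  With w = (1,0,...,0) OWA is max, with
    w = (0,...,0,1) it is min, and uniform weights give the mean."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]   # descending order
    w = np.asarray(w, dtype=float)
    assert w.shape == x.shape and np.isclose(w.sum(), 1.0)
    return float(np.dot(w, x))

print(owa([0.3, 0.9, 0.5], [0.5, 0.3, 0.2]))  # 0.9*0.5 + 0.5*0.3 + 0.3*0.2
```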

  3. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  4. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  5. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    We present a new iterative method for calculating the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, the assigned weights, and the arrival rate, average packet length or input rate of the traffic flows. We validate the model with examples and with simulation results obtained using the NS2 simulator.
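
    The paper's exact iteration is not reproduced here, but a common way to compute average per-flow bandwidth under weighted fair queueing is the following sketch: flows that cannot fill their weighted share are capped at their input rate, and the residual capacity is redistributed over the remaining flows by weight. Parameter names are illustrative.

```python
def wfq_average_bandwidth(link_rate, weights, input_rates):
    """Iterative weighted fair-share allocation: bounded flows keep only
    what they offer; leftover capacity is split among the remaining flows
    in proportion to their weights."""
    n = len(weights)
    alloc = [None] * n
    active = set(range(n))
    capacity = float(link_rate)
    while active:
        wsum = sum(weights[i] for i in active)
        # flows whose input rate is below their current weighted share
        bounded = [i for i in active
                   if input_rates[i] <= capacity * weights[i] / wsum]
        if not bounded:
            for i in active:                      # saturated flows split the rest
                alloc[i] = capacity * weights[i] / wsum
            break
        for i in bounded:
            alloc[i] = input_rates[i]
            capacity -= input_rates[i]
            active.remove(i)
    return alloc

# Link of 100 units, three flows: the light flow keeps its 10 units,
# the other two share the remaining 90 in a 2:1 weight ratio.
print(wfq_average_bandwidth(100.0, [1, 2, 1], [10.0, 80.0, 60.0]))
```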

  6. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion, it is pointed out that the proposed procedure for computing time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles

  7. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  8. Signal Words

    Science.gov (United States)

    SIGNAL WORDS TOPIC FACT SHEET NPIC fact sheets are designed to answer questions that are commonly asked by the ... making decisions about pesticide use. What are Signal Words? Signal words are found on pesticide product labels, ...

  9. A comparative study between a simplified Kalman filter and Sliding Window Averaging for single trial dynamical estimation of event-related potentials

    DEFF Research Database (Denmark)

    Vedel-Larsen, Esben; Fuglø, Jacob; Channir, Fouad

    2010-01-01

    , are variable and depend on cognitive function. This study compares the performance of a simplified Kalman filter with Sliding Window Averaging in tracking dynamical changes in single trial P300. The comparison is performed on simulated P300 data with added background noise consisting of both simulated and real background EEG at various input signal-to-noise ratios. While both methods can be applied to track dynamical changes, the simplified Kalman filter has an advantage over Sliding Window Averaging, most notably in better noise suppression when both are optimized for faster changing latency and amplitude
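
    A minimal sketch of the two trackers being compared, assuming a scalar random-walk state model for the single-trial amplitude; the q/r values and signal shapes are illustrative, not the study's settings.

```python
import numpy as np

def sliding_window_average(y, width=10):
    """Trailing moving average over the last `width` trials."""
    out = np.empty_like(y, dtype=float)
    for t in range(len(y)):
        out[t] = y[max(0, t - width + 1):t + 1].mean()
    return out

def scalar_kalman(y, q=1e-3, r=1.0):
    """Simplified Kalman filter for a single slowly drifting parameter
    (e.g. single-trial P300 amplitude); q/r trades tracking speed
    against noise suppression."""
    x, p = y[0], 1.0
    xs = np.empty_like(y, dtype=float)
    for t, obs in enumerate(y):
        p += q                      # predict: random-walk state model
        k = p / (p + r)             # Kalman gain
        x += k * (obs - x)          # update with the new trial
        p *= (1.0 - k)
        xs[t] = x
    return xs

rng = np.random.default_rng(1)
true = np.concatenate([np.full(100, 5.0), np.linspace(5.0, 8.0, 100)])
trials = true + rng.normal(0.0, 2.0, true.size)   # noisy single-trial values
print(np.abs(sliding_window_average(trials) - true).mean(),
      np.abs(scalar_kalman(trials, q=5e-3, r=4.0) - true).mean())
```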

  10. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have attempted to determine whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Acute and chronic effects of cannabidiol on Δ9-tetrahydrocannabinol (Δ9-THC)-induced disruption in stop signal task performance

    Science.gov (United States)

    Jacobs, David S.; Kohut, Stephen J.; Jiang, Shan; Nikas, Spyros P.; Makriyannis, Alexandros; Bergman, Jack

    2016-01-01

    Recent clinical and preclinical research suggests that cannabidiol (CBD) and Δ9-tetrahydrocannabinol (Δ9-THC) have interactive effects on measures of cognition; however, the nature of these interactions is not yet fully characterized. To address this, the effects of Δ9-THC and CBD were investigated independently and in combination with proposed therapeutic dose ratios of 1:1 and 1:3 Δ9-THC:CBD in adult rhesus monkeys (n=6) performing a stop signal task (SST). Additionally, the development of tolerance to the effects of THC on SST performance was evaluated by determining the effects of acutely administered Δ9-THC (0.1-3.2 mg/kg), during a 24-day chronic Δ9-THC treatment period with Δ9-THC alone or with CBD. Results indicate that Δ9-THC (0.032 - 0.32 mg/kg) dose-dependently decreased ‘go’ success but did not alter ‘go’ reaction time or stop signal reaction time (SSRT); CBD (0.1-1.0 mg/kg) was without effect on all measures and, when co-administered in a 1:1 dose-ratio, did not exacerbate or attenuate the effects of Δ9-THC. When co-administered in a 1:3 dose-ratio, CBD (1.0 mg/kg) attenuated the disruptive effects of 0.32 mg/kg Δ9-THC but did not alter the effects of other Δ9-THC doses. Increases in ED50 values for the effects of Δ9-THC on SST performance were apparent during chronic Δ9-THC treatment, with little evidence for modification of changes in sensitivity by CBD. These results indicate that CBD, when combined with THC in clinically available dose-ratios does not exacerbate and, under restricted conditions, may even attenuate Δ9-THC’s behavioral effects. PMID:27690502

  12. Acute and chronic effects of cannabidiol on Δ⁹-tetrahydrocannabinol (Δ⁹-THC)-induced disruption in stop signal task performance.

    Science.gov (United States)

    Jacobs, David S; Kohut, Stephen J; Jiang, Shan; Nikas, Spyros P; Makriyannis, Alexandros; Bergman, Jack

    2016-10-01

    Recent clinical and preclinical research has suggested that cannabidiol (CBD) and Δ9-tetrahydrocannabinol (Δ9-THC) have interactive effects on measures of cognition; however, the nature of these interactions is not yet fully characterized. To address this, we investigated the effects of Δ9-THC and CBD independently and in combination with proposed therapeutic dose ratios of 1:1 and 1:3 Δ9-THC:CBD in adult rhesus monkeys (n = 6) performing a stop signal task (SST). Additionally, the development of tolerance to the effects of Δ9-THC on SST performance was evaluated by determining the effects of acutely administered Δ9-THC (0.1-3.2 mg/kg), during a 24-day chronic Δ9-THC treatment period with Δ9-THC alone or in combination with CBD. Results indicate that Δ9-THC (0.032-0.32 mg/kg) dose-dependently decreased go success but did not alter go reaction time (RT) or stop signal RT (SSRT); CBD (0.1-1.0 mg/kg) was without effect on all measures and, when coadministered in a 1:1 dose ratio, did not exacerbate or attenuate the effects of Δ9-THC. When coadministered in a 1:3 dose ratio, CBD (1.0 mg/kg) attenuated the disruptive effects of 0.32 mg/kg Δ9-THC but did not alter the effects of other Δ9-THC doses. Increases in ED50 values for the effects of Δ9-THC on SST performance were apparent during chronic Δ9-THC treatment, with little evidence for modification of changes in sensitivity by CBD. These results indicate that CBD, when combined with Δ9-THC in clinically available dose ratios, does not exacerbate and, under restricted conditions may even attenuate, Δ9-THC's behavioral effects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  13. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
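
    A simplified version of such robust averaging — screening out maps with too many voids, then combining the survivors with a pixel-wise trimmed mean — might look as follows; the thresholds are illustrative, and the full large-defect detection described above is not reproduced.

```python
import numpy as np

def robust_phase_average(maps, reject_frac=0.1, void_tol=0.5):
    """Pixel-wise robust average of a stack of phase maps.

    maps -- array of shape (n_maps, ny, nx); NaN marks voids/dropouts
    A map is rejected outright when too much of it is void (a crude
    stand-in for large-defect screening); surviving pixels are combined
    with a trimmed mean to resist small-area defects."""
    maps = np.asarray(maps, dtype=float)
    ok = [m for m in maps if np.isnan(m).mean() < void_tol]
    stack = np.array(ok)
    lo = np.nanpercentile(stack, 100 * reject_frac / 2, axis=0)
    hi = np.nanpercentile(stack, 100 * (1 - reject_frac / 2), axis=0)
    trimmed = np.where((stack >= lo) & (stack <= hi), stack, np.nan)
    # per-pixel average and a variability estimate from the kept samples
    return np.nanmean(trimmed, axis=0), np.nanstd(trimmed, axis=0)
```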

  14. Ocean tides in GRACE monthly averaged gravity fields

    DEFF Research Database (Denmark)

    Knudsen, Per

    2003-01-01

    The GRACE mission will map the Earth's gravity field and its variations with unprecedented accuracy during its 5-year lifetime. Unless ocean tide signals and their load upon the solid earth are removed from the GRACE data, their long-period aliases obscure the more subtle climate signals which GRACE aims at. In this analysis the results of Knudsen and Andersen (2002) have been verified using actual post-launch orbit parameters of the GRACE mission. The current ocean tide models are not accurate enough to correct GRACE data at harmonic degrees lower than 47. The accumulated tidal errors may affect the GRACE data up to harmonic degree 60. A study of the revised alias frequencies confirms that the ocean tide errors will not cancel in the GRACE monthly averaged temporal gravity fields. The S2 and the K2 terms have alias frequencies much longer than 30 days, so they remain almost unreduced...

  15. Signal-to-Noise Ratio in PVT Performance as a Cognitive Measure of the Effect of Sleep Deprivation on the Fidelity of Information Processing.

    Science.gov (United States)

    Chavali, Venkata P; Riedy, Samantha M; Van Dongen, Hans P A

    2017-03-01

    There is a long-standing debate about the best way to characterize performance deficits on the psychomotor vigilance test (PVT), a widely used assay of cognitive impairment in human sleep deprivation studies. Here, we address this issue through the theoretical framework of the diffusion model and propose to express PVT performance in terms of signal-to-noise ratio (SNR). From the equations of the diffusion model for one-choice reaction-time tasks, we derived an expression for a novel SNR metric for PVT performance. We also showed that LSNR, a commonly used log-transformation of SNR, can be reasonably well approximated by a linear function of the mean response speed, LSNRapx. We computed SNR, LSNR, LSNRapx, and number of lapses for 1284 PVT sessions collected from 99 healthy young adults who participated in laboratory studies with 38 hr of total sleep deprivation. All four PVT metrics captured the effects of time awake and time of day on cognitive performance during sleep deprivation. The LSNR had the best psychometric properties, including high sensitivity, high stability, high degree of normality, absence of floor and ceiling effects, and no bias in the meaning of change scores related to absolute baseline performance. The theoretical motivation of SNR and LSNR permits quantitative interpretation of PVT performance as an assay of the fidelity of information processing in cognition. Furthermore, with a conceptual and statistical meaning grounded in information theory and generalizable across scientific fields, LSNR in particular is a useful tool for systems-integrated fatigue risk management. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
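
    A sketch of how such session metrics could be computed from raw reaction times follows; the 500 ms lapse threshold is the conventional one, while the linear coefficients mapping mean speed to LSNRapx are placeholders rather than the fitted values from the paper.

```python
import numpy as np

def pvt_metrics(rts_ms, a=0.0, b=1.0):
    """Summary metrics for one PVT session, given reaction times in ms.

    Lapses use the conventional >= 500 ms threshold.  LSNRapx reflects the
    paper's observation that log signal-to-noise ratio is approximately
    linear in mean response speed; a and b here are hypothetical
    coefficients, not the study's fitted values."""
    rts = np.asarray(rts_ms, dtype=float)
    lapses = int((rts >= 500).sum())
    mean_speed = (1000.0 / rts).mean()     # responses per second
    lsnr_apx = a + b * mean_speed          # hypothetical linear mapping
    return {"lapses": lapses, "mean_speed": mean_speed, "lsnr_apx": lsnr_apx}

print(pvt_metrics([250, 310, 275, 640, 290, 505]))
```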

  16. Ionization Electron Signal Processing in Single Phase LArTPCs II. Data/Simulation Comparison and Performance in MicroBooNE

    Energy Technology Data Exchange (ETDEWEB)

    Adams, C.; et al.

    2018-04-07

    The single-phase liquid argon time projection chamber (LArTPC) provides a large amount of detailed information in the form of fine-grained drifted ionization charge from particle traces. To fully utilize this information, the deposited charge must be accurately extracted from the raw digitized waveforms via a robust signal processing chain. Enabled by the ultra-low noise levels associated with cryogenic electronics in the MicroBooNE detector, the precise extraction of ionization charge from the induction wire planes in a single-phase LArTPC is qualitatively demonstrated on MicroBooNE data with event display images, and quantitatively demonstrated via waveform-level and track-level metrics. Improved performance of induction plane calorimetry is demonstrated through the agreement of extracted ionization charge measurements across different wire planes for various event topologies. In addition to the comprehensive waveform-level comparison of data and simulation, a calibration of the cryogenic electronics response is presented and solutions to various MicroBooNE-specific TPC issues are discussed. This work presents an important improvement in LArTPC signal processing, the foundation of reconstruction and therefore physics analyses in MicroBooNE.

  17. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
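
    For reference, the classical pairwise gossip update that both variants build on replaces the values of a randomly chosen pair of nodes with their mean; this preserves the global sum, so every node converges to the average. The convergence difficulty discussed above arises in asynchronous schemes that break this invariance. A minimal sketch:

```python
import numpy as np

def pairwise_gossip(x, n_steps=3000, seed=0):
    """Randomized pairwise gossip on a complete graph: each step averages
    the values of one random pair of nodes, keeping the sum invariant."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).copy()
    for _ in range(n_steps):
        i, j = rng.choice(x.size, size=2, replace=False)
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x

vals = np.array([1.0, 5.0, 9.0, 3.0])
print(pairwise_gossip(vals), vals.mean())   # all entries approach 4.5
```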

  18. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by ± 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).

  19. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  20. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  1. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  2. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  3. Testing averaged cosmology with type Ia supernovae and BAO data

    Energy Technology Data Exchange (ETDEWEB)

    Santos, B.; Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil); Coley, A.A. [Department of Mathematics and Statistics, Dalhousie University, Halifax, B3H 3J5 Canada (Canada); Devi, N. Chandrachani, E-mail: thoven@on.br, E-mail: aac@mathstat.dal.ca, E-mail: chandrachaniningombam@astro.unam.mx, E-mail: alcaniz@on.br [Instituto de Astronomía, Universidad Nacional Autónoma de México, Box 70-264, México City, México (Mexico)

    2017-02-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  4. Testing averaged cosmology with type Ia supernovae and BAO data

    International Nuclear Information System (INIS)

    Santos, B.; Alcaniz, J.S.; Coley, A.A.; Devi, N. Chandrachani

    2017-01-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  5. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that cosmological parameters evaluated on the smoothed spatial domain $\bar{B}$ obey $\bar{\Omega}^{\bar{B}}_{m} + \bar{\Omega}^{\bar{B}}_{R} + \bar{\Omega}^{\bar{B}}_{\Lambda} + \bar{\Omega}^{\bar{B}}_{Q} = 1$, where $\bar{\Omega}^{\bar{B}}_{m}$, $\bar{\Omega}^{\bar{B}}_{R}$ and $\bar{\Omega}^{\bar{B}}_{\Lambda}$ correspond to the standard Friedmannian parameters, while $\bar{\Omega}^{\bar{B}}_{Q}$ is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  6. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  7. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can ... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...

  8. A low-cost, high-performance, digital signal processor-based lock-in amplifier capable of measuring multiple frequency sweeps simultaneously

    International Nuclear Information System (INIS)

    Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose

    2005-01-01

    A high-performance digital lock-in amplifier implemented on a low-cost digital signal processor (DSP) board is described. This lock-in is capable of simultaneously measuring multiple frequencies that change in time as frequency sweeps (chirps). The 32-bit DSP used has enough computing power to generate N=3 simultaneous reference signals and accurately measure the N=3 responses, operating as three lock-ins connected in parallel to a linear system. The lock-in stores the measured values in memory until they are downloaded to a personal computer (PC). The lock-in works in stand-alone mode and can be programmed and configured through the PC serial port. Downsampling and multiple filter stages were used in order to obtain a sharp roll-off and a long time constant in the filters. This makes measurements possible in the presence of high noise levels. Before each measurement, the lock-in performs an autocalibration that measures the frequency response of the analog output and input circuitry in order to compensate for departures from ideal operation. Improvements over previous lock-in implementations allow the frequency response of a system to be measured in a short time. Furthermore, the proposed implementation can measure how the frequency response changes with time, a characteristic that is very important in our biotechnological application. The number of simultaneous components that the lock-in can generate and measure can be extended, without reprogramming, simply by using other DSPs of the same family that are code compatible and work at higher clock frequencies
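
    The core of any digital lock-in is quadrature demodulation followed by low-pass filtering; the sketch below uses plain averaging in place of the multi-stage decimating filters described above, and the signal parameters are illustrative.

```python
import numpy as np

def digital_lockin(x, fs, f_ref):
    """Minimal digital lock-in: mix the input with quadrature references
    at f_ref and average.  For an input A*sin(2*pi*f_ref*t + phi) plus
    noise, returns estimates of the amplitude A and phase phi."""
    t = np.arange(len(x)) / fs
    I = np.mean(x * np.cos(2 * np.pi * f_ref * t))   # -> (A/2) sin(phi)
    Q = np.mean(x * np.sin(2 * np.pi * f_ref * t))   # -> (A/2) cos(phi)
    return 2.0 * np.hypot(I, Q), np.arctan2(I, Q)

fs, f0 = 50000.0, 1000.0
t = np.arange(int(fs)) / fs              # one second: 1000 full periods
noise = np.random.default_rng(2).normal(0.0, 1.0, t.size)
x = 0.2 * np.sin(2 * np.pi * f0 * t + 0.5) + noise
print(digital_lockin(x, fs, f0))         # recovers amplitude ~0.2, phase ~0.5
```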

  9. A low-cost, high-performance, digital signal processor-based lock-in amplifier capable of measuring multiple frequency sweeps simultaneously

    Energy Technology Data Exchange (ETDEWEB)

    Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose [Laboratorio de Cavitacion y Biotecnologia, San Carlos de Bariloche (8400) (Argentina)

    2005-02-01

    A high-performance digital lock-in amplifier implemented on a low-cost digital signal processor (DSP) board is described. This lock-in is capable of simultaneously measuring multiple frequencies that change in time as frequency sweeps (chirps). The 32-bit DSP used has enough computing power to generate N=3 simultaneous reference signals and accurately measure the N=3 responses, operating as three lock-ins connected in parallel to a linear system. The lock-in stores the measured values in memory until they are downloaded to a personal computer (PC). The lock-in works in stand-alone mode and can be programmed and configured through the PC serial port. Downsampling and multiple filter stages were used in order to obtain a sharp roll-off and a long time constant in the filters. This makes measurements possible in the presence of high noise levels. Before each measurement, the lock-in performs an autocalibration that measures the frequency response of the analog output and input circuitry in order to compensate for departures from ideal operation. Improvements over previous lock-in implementations allow the frequency response of a system to be measured in a short time. Furthermore, the proposed implementation can measure how the frequency response changes with time, a characteristic that is very important in our biotechnological application. The number of simultaneous components that the lock-in can generate and measure can be extended, without reprogramming, simply by using other DSPs of the same family that are code compatible and work at higher clock frequencies.

  10. Study of runaway electrons using the conditional average sampling method in the Damavand tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Pourshahab, B., E-mail: bpourshahab@gmail.com [University of Isfahan, Department of Nuclear Engineering, Faculty of Advance Sciences and Technologies (Iran, Islamic Republic of); Sadighzadeh, A. [Nuclear Science and Technology Research Institute, Plasma Physics and Nuclear Fusion Research School (Iran, Islamic Republic of); Abdi, M. R., E-mail: r.abdi@phys.ui.ac.ir [University of Isfahan, Department of Physics, Faculty of Science (Iran, Islamic Republic of); Rasouli, C. [Nuclear Science and Technology Research Institute, Plasma Physics and Nuclear Fusion Research School (Iran, Islamic Republic of)

    2017-03-15

    Some experiments studying runaway electron (RE) effects have been performed using the poloidal magnetic probe system installed around the plasma column in the Damavand tokamak. In these experiments, so-called runaway-dominated discharges were considered, in which the main part of the plasma current is carried by REs. The induced magnetic effects on the poloidal pickup coil signals are observed simultaneously with the Parail–Pogutse instability moments for REs and hard X-ray bursts. The output signals of all diagnostic systems enter the data acquisition system at a sampling rate of 2 Msample/s per channel. The temporal evolution of the diagnostic signals is analyzed with the conditional average sampling (CAS) technique. The CASed profiles indicate RE collisions with the high-field-side plasma facing components at the instability moments. The investigation has been carried out for two discharge modes, low-toroidal-field (LTF) and high-toroidal-field (HTF), corresponding to the lower and upper limits of the toroidal magnetic field in the Damavand tokamak; their comparison shows that RE confinement is better in HTF discharges.
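
    Conditional average sampling in its simplest form collects fixed-length windows of a signal around threshold crossings of a trigger channel and averages them; the following sketch, with synthetic data and illustrative thresholds, conveys the idea.

```python
import numpy as np

def conditional_average(signal, trigger, threshold, half_window):
    """Conditional average sampling (CAS): average fixed-length windows of
    `signal` centred on upward threshold crossings of `trigger`."""
    above = trigger >= threshold
    events = np.flatnonzero(above[1:] & ~above[:-1]) + 1   # upward crossings
    segments = [signal[e - half_window:e + half_window] for e in events
                if half_window <= e <= len(signal) - half_window]
    return np.mean(segments, axis=0), len(segments)

# Synthetic data: spikes in a trigger channel accompany pulses in the signal
rng = np.random.default_rng(3)
n = 20_000
trigger = rng.normal(0.0, 1.0, n)
signal = rng.normal(0.0, 1.0, n)
for e in rng.choice(np.arange(100, n - 100), size=40, replace=False):
    trigger[e] += 8.0                          # detectable event marker
    signal[e:e + 20] += 3.0 * np.hanning(20)   # reproducible event shape
avg, count = conditional_average(signal, trigger, 5.0, 50)
print(count, avg.max())   # the averaged window recovers the pulse shape
```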

  11. Neural network and wavelet average framing percentage energy for atrial fibrillation classification.

    Science.gov (United States)

    Daqrouq, K; Alkhateeb, A; Ajour, M N; Morfeq, A

    2014-03-01

    ECG signals are an important source of information in the diagnosis of atrial conduction pathology. Nevertheless, diagnosis by visual inspection is a difficult task. This work introduces a novel wavelet feature extraction method for atrial fibrillation derived from the average framing percentage energy (AFE) of terminal wavelet packet transform (WPT) sub-signals. A probabilistic neural network (PNN) is used for classification. The presented method is shown to be a potentially effective discriminator in an automated diagnostic process. The ECG signals taken from the MIT-BIH database are used to classify different arrhythmias together with normal ECG. Several published methods were investigated for comparison, and the best recognition rate was obtained with AFE. The classification achieved an accuracy of 97.92%. The system was also analyzed in an additive white Gaussian noise (AWGN) environment, achieving 55.14% accuracy at 0 dB and 92.53% at 5 dB. It was concluded that the proposed approach to automating classification is worth pursuing with larger samples to validate and extend the present study. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
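
    One plausible reading of the AFE feature — per-frame percentage energies of the terminal WPT sub-signals, averaged over frames — can be sketched with PyWavelets as follows; the wavelet, decomposition level and frame count are illustrative, not the paper's settings.

```python
import numpy as np
import pywt

def afe_features(x, wavelet='db4', level=4, n_frames=8):
    """Average framing percentage energy (AFE), as interpreted here: frame
    each terminal WPT sub-signal, express per-frame energies as
    percentages of the total frame energy across sub-signals, then
    average the percentages over frames."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    subs = [node.data for node in wp.get_level(level, order='freq')]
    m = min(len(s) for s in subs)                       # common length
    idx = np.linspace(0, m, n_frames, endpoint=False).astype(int)
    energies = np.array([np.add.reduceat(s[:m] ** 2, idx) for s in subs])
    pct = 100.0 * energies / energies.sum(axis=0, keepdims=True)
    return pct.mean(axis=1)                             # one value per node

rng = np.random.default_rng(4)
beat = np.sin(2 * np.pi * 1.2 * np.arange(2048) / 360.0)   # toy ECG-like wave
print(afe_features(beat + 0.1 * rng.normal(size=2048)).round(2))
```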

  12. A comparison study of the orthographic mistakes of students with inferior writing performance and students with average writing performance

    Directory of Open Access Journals (Sweden)

    Patrícia Aparecida Zuanetti

    2008-01-01

    PURPOSE: The aim of this study was to verify whether children with poor writing performance make more orthographic mistakes than children of the same school grade with satisfactory performance in this task, and which types of orthographic mistakes are most frequent. METHODS: Twenty-four second-grade children from a public elementary school participated in this study and were individually assessed. The test applied was the writing subtest of the School Performance Test, composed of 34 words that are dictated to the students. RESULTS: The students with poor writing performance made significantly more orthographic mistakes than the group with satisfactory performance. The error types showing statistically significant differences between the two groups were hypercorrection, difficulty with nasalization markers, irregular phoneme-grapheme correspondence, syllable omissions, and letter-substitution errors. There was also a strongly negative correlation between orthographic mistakes and writing performance. CONCLUSIONS: The better the writing performance, the fewer orthographic mistakes appear in the student's written work. The most frequent errors in the low-performance group, which distinguish it from the other group, concern irregular phoneme-grapheme correspondence, syllable omissions, difficulty in the use of nasalization markers, hypercorrection, and letter-substitution errors. As the child's learning capacity advances, orthographic performance tends to improve.

  13. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  14. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_θ in EXTRAP T1 is described. The results of a series of measurements yielding β_θ as a function of externally applied toroidal field are presented. (author)

  15. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department

  16. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution, the situation is l...

  17. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  18. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  19. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Journal of Physics, July 2007, pp. 31–47. In this paper I would like to present a result which confirms – at least partially – ... A detailed analysis of how the model fits in with the ... Further, the statement that the spatial average ... Financial support under grants FIS2004-01626 and no.

  20. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  1. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
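
    A typical moving average rule of the kind studied goes long when a short moving average crosses above a long one; a minimal sketch (window lengths illustrative):

```python
import numpy as np

def ma_rule_positions(prices, short=5, long=50):
    """Hold a long position (+1) while the short moving average is above
    the long one, otherwise stay out (0)."""
    prices = np.asarray(prices, dtype=float)

    def ma(w):
        # trailing moving average, NaN until w observations exist
        c = np.convolve(prices, np.ones(w) / w, mode='valid')
        return np.concatenate([np.full(w - 1, np.nan), c])

    pos = (ma(short) > ma(long)).astype(float)
    pos[np.isnan(ma(long))] = 0.0      # no position before both MAs exist
    return pos

prices = 100.0 + np.cumsum(np.random.default_rng(5).normal(0.0, 1.0, 300))
positions = ma_rule_positions(prices)
print(positions[:60])                  # zeros until day 50, then 0/1 signals
```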

  2. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple

  3. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false On average. 1209.12 Section 1209.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... CONSUMER INFORMATION ORDER Mushroom Promotion, Research, and Consumer Information Order Definitions § 1209...

  4. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  5. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives

  6. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  7. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  8. 4D MR imaging using robust internal respiratory signal

    International Nuclear Information System (INIS)

    Hui, CheukKai; Wen, Zhifei; Beddar, Sam; Stemkens, Bjorn; Tijssen, R H N; Van den Berg, C A T; Hwang, Ken-Pin

    2016-01-01

    The purpose of this study is to investigate the feasibility of using internal respiratory (IR) surrogates to sort four-dimensional (4D) magnetic resonance (MR) images. The 4D MR images were constructed by acquiring fast 2D cine MR images sequentially, with each slice scanned for more than one breathing cycle. The 4D volume was then sorted retrospectively using the IR signal. In this study, we propose to use multiple low-frequency components in the Fourier space as well as the anterior body boundary as potential IR surrogates. From these potential IR surrogates, we used a clustering algorithm to identify those that best represented the respiratory pattern to derive the IR signal. A study with healthy volunteers was performed to assess the feasibility of the proposed IR signal. We compared this proposed IR signal with the respiratory signal obtained using respiratory bellows. Overall, 99% of the IR signals matched the bellows signals. The average difference between the end inspiration times in the IR signal and bellows signal was 0.18 s in this cohort of matching signals. For the acquired images corresponding to the other 1% of non-matching signal pairs, the respiratory motion shown in the images was coherent with the respiratory phases determined by the IR signal, but not the bellows signal. This suggested that the IR signal determined by the proposed method could potentially correct the faulty bellows signal. The sorted 4D images showed minimal mismatched artefacts and potential clinical applicability. The proposed IR signal therefore provides a feasible alternative to effectively sort MR images in 4D. (paper)
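
    As a hedged illustration of the surrogate idea only: one family of candidate internal-respiratory signals described here is a low-frequency Fourier component tracked across the cine frames. The component index and the normalization below are assumptions, not the paper's exact choice:

      import numpy as np

      def ir_surrogate(frames):
          """Track the magnitude of one low-frequency 2D Fourier component
          over a sequence of cine frames; the paper clusters several such
          candidate surrogates, this sketch keeps a single one."""
          sig = np.array([np.abs(np.fft.fft2(f))[0, 1] for f in frames])
          return (sig - sig.mean()) / sig.std()   # normalized surrogate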

  9. The speech signal segmentation algorithm using pitch synchronous analysis

    Directory of Open Access Journals (Sweden)

    Amirgaliyev Yedilkhan

    2017-03-01

    Full Text Available Parameterization of the speech signal using analysis algorithms synchronized with the pitch frequency is discussed. Speech parameterization is performed by the average number of zero transitions function and the signal energy function. Parameterization results are used to segment the speech signal and to isolate the segments with stable spectral characteristics. Segmentation results can be used to generate a digital voice pattern of a person or can be applied in automatic speech recognition. The stages needed for continuous speech segmentation are described.
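
    A minimal sketch of the two frame-level functions named above (short-time energy and zero-crossing count); the frame length and hop size are illustrative assumptions:

      import numpy as np

      def frame_features(x, frame_len=256, hop=128):
          """Per-frame short-time energy and zero-crossing count, the two
          parameterization functions used for segmentation."""
          x = np.asarray(x, dtype=float)
          feats = []
          for start in range(0, len(x) - frame_len, hop):
              f = x[start:start + frame_len]
              energy = float(np.sum(f ** 2))
              zc = int(np.sum(np.abs(np.diff(np.sign(f))) > 0))
              feats.append((energy, zc))
          return np.array(feats)

    Segment boundaries can then be placed where consecutive frames differ strongly in either feature, leaving runs of near-constant features as the stable segments.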

  10. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  11. Morphology of the P wave of the electrocardiographic signal. Shape analysis of two-dimensional signals: measuring pharmacological effects on the P, QRS and T waves in time-frequency representation

    OpenAIRE

    Oficjalska , Barbara

    1994-01-01

    The aim of this work is to develop a signal processing methodology in order to improve fine studies of the cardiac signal, and specially of P wave, particularly focusing the measurement of shape variations. After a review of cardiac signal characteristics, a description of its physiological and pathological variability and of the different recording techniques, a critical study of cardiac signal processing methods is performed: noise reduction, specific filtering, signal averaging and jitter ...

  12. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
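
    A minimal numerical sketch of the averaging rule itself, S_k = (1 - alpha)*S_{k-1} + alpha*P_k with P_k the periodogram of the k-th segment; the segment length and smoothing constant are arbitrary choices:

      import numpy as np

      def exp_avg_psd(x, seg_len=256, alpha=0.1):
          """Estimate a PSD by exponential averaging of subsequent
          periodograms: S_k = (1 - alpha) * S_{k-1} + alpha * P_k."""
          x = np.asarray(x, dtype=float)
          S = None
          for start in range(0, len(x) - seg_len + 1, seg_len):
              seg = x[start:start + seg_len]
              P = np.abs(np.fft.rfft(seg)) ** 2 / seg_len   # raw periodogram
              S = P if S is None else (1 - alpha) * S + alpha * P
          return S

    The time constant of the averaging discussed in the record corresponds to roughly 1/alpha segments.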

  13. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  14. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of average labour productivity in agriculture, forestry and fishing. The analysis takes into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of average labour productivity by the factors affecting it is conducted by means of the u-substitution method.

  15. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  16. Average Transverse Momentum Quantities Approaching the Lightfront

    OpenAIRE

    Boer, Daniel

    2015-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...

  17. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.

  18. Fast optical signal not detected in awake behaving monkeys.

    Science.gov (United States)

    Radhakrishnan, Harsha; Vanduffel, Wim; Deng, Hong Ping; Ekstrom, Leeland; Boas, David A; Franceschini, Maria Angela

    2009-04-01

    While the ability of near-infrared spectroscopy (NIRS) to measure cerebral hemodynamic evoked responses (slow optical signal) is well established, its ability to measure non-invasively the 'fast optical signal' is still controversial. Here, we aim to determine the feasibility of performing NIRS measurements of the 'fast optical signal' or Event-Related Optical Signals (EROS) under optimal experimental conditions in awake behaving macaque monkeys. These monkeys were implanted with a 'recording well' to expose the dura above the primary visual cortex (V1). A custom-made optical probe was inserted and fixed into the well. The close proximity of the probe to the brain maximized the sensitivity to changes in optical properties in the cortex. Motion artifacts were minimized by physical restraint of the head. Full-field contrast-reversing checkerboard stimuli were presented to monkeys trained to perform a visual fixation task. In separate sessions, two NIRS systems (CW4 and ISS FD oximeter), which previously showed the ability to measure the fast signal in human, were used. In some sessions EEG was acquired simultaneously with the optical signal. The increased sensitivity to cortical optical changes with our experimental setup was quantified with 3D Monte Carlo simulations on a segmented MRI monkey head. Averages of thousands of stimuli in the same animal, or grand averages across the two animals and across repeated sessions, did not lead to detection of the fast optical signal using either amplitude or phase of the optical signal. Hemodynamic responses and visual evoked potentials were instead always detected with single trials or averages of a few stimuli. Based on these negative results, despite the optimal experimental conditions, we doubt the usefulness of non-invasive fast optical signal measurements with NIRS.

  19. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (mean B_z = 3.γ) than near midnight (mean B_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed

  20. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  1. High average power diode pumped solid state lasers for CALIOPE

    International Nuclear Information System (INIS)

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers

  2. Dietary pyridoxine deficiency reduced growth performance and impaired intestinal immune function associated with TOR and NF-κB signalling of young grass carp (Ctenopharyngodon idella).

    Science.gov (United States)

    Zheng, Xin; Feng, Lin; Jiang, Wei-Dan; Wu, Pei; Liu, Yang; Jiang, Jun; Kuang, Sheng-Yao; Tang, Ling; Tang, Wu-Neng; Zhang, Yong-An; Zhou, Xiao-Qiu

    2017-11-01

    The objective of this study was to evaluate the effects of dietary pyridoxine (PN) deficiency on growth performance, intestinal immune function and the potential regulation mechanisms in young grass carp (Ctenopharyngodon idella). Fish were fed six diets containing graded levels of PN (0.12-7.48 mg/kg) for 70 days. After that, a challenge test was conducted by infection with Aeromonas hydrophila for 14 days. The results showed that compared with the optimal PN level, PN deficiency: (1) reduced the production of innate immune components such as lysozyme (LZ), acid phosphatase (ACP), complements and antimicrobial peptides and adaptive immune components such as immunoglobulins in three intestinal segments of young grass carp (P < 0.05); (2) impaired target of rapamycin (TOR) signalling [TOR/ribosomal protein S6 kinases 1 (S6K1) and eIF4E-binding proteins (4E-BP)] in three intestinal segments of young grass carp; (3) up-regulated the mRNA levels of pro-inflammatory cytokines such as tumour necrosis factor α (TNF-α) [not in the proximal intestine (PI) and distal intestine (DI)], IL-1β, IL-6, IL-8, IL-12p35, IL-12p40, IL-15 and IL-17D [rather than interferon γ2 (IFN-γ2)], partly relating to nuclear factor kappa B (NF-κB) signalling [IκB kinase β (IKKβ) and IKKγ/inhibitor of κBα (IκBα)/NF-κB (p65 and c-Rel)] in three intestinal segments of young grass carp. These results suggest that PN deficiency could impair the intestinal immune function, and the potential regulation mechanisms were partly associated with the TOR and NF-κB signalling pathways. In addition, based on percent weight gain (PWG), the ability against enteritis and LZ activity, the dietary PN requirements for young grass carp were estimated to be 4.43, 4.75 and 5.07 mg/kg diet, respectively. Copyright © 2017. Published by Elsevier Ltd.

  3. Object detection by correlation coefficients using azimuthally averaged reference projections.

    Science.gov (United States)

    Nicholson, William V

    2004-11-01

    A method of computing correlation coefficients for object detection that takes advantage of using azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function and a local correlation coefficient in object detection from simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.
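
    As an illustration only, a sketch of the shared preprocessing step — replacing a 2D reference by its azimuthal (rotational) average — followed by a plain cross-correlation map (one of the compared detectors); the centring and normalization details are assumptions:

      import numpy as np
      from scipy.signal import fftconvolve

      def azimuthal_average(ref):
          """Replace a 2D reference image by its rotational average about
          the image centre (same value at equal radius)."""
          ny, nx = ref.shape
          y, x = np.indices(ref.shape)
          r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
          sums = np.bincount(r.ravel(), weights=ref.ravel())
          counts = np.bincount(r.ravel())
          return (sums / np.maximum(counts, 1))[r]

      def correlation_map(image, ref):
          """Cross-correlate the micrograph with the averaged template."""
          t = azimuthal_average(ref)
          t = (t - t.mean()) / t.std()
          return fftconvolve(image, t[::-1, ::-1], mode='same')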

  4. A Martian PFS average spectrum: Comparison with ISO SWS

    Science.gov (United States)

    Formisano, V.; Encrenaz, T.; Fonti, S.; Giuranna, M.; Grassi, D.; Hirsh, H.; Khatuntsev, I.; Ignatiev, N.; Lellouch, E.; Maturilli, A.; Moroz, V.; Orleanski, P.; Piccioni, G.; Rataj, M.; Saggin, B.; Zasova, L.

    2005-08-01

    The evaluation of the planetary Fourier spectrometer performance at Mars is presented by comparing an average spectrum with the ISO spectrum published by Lellouch et al. [2000. Planet. Space Sci. 48, 1393]. First, the average conditions of the Mars atmosphere are compared, then the mixing ratios of the major gases are evaluated. Major and minor bands of CO2 are compared, from the point of view of feature characteristics and band depths. The spectral resolution is also compared using several solar lines. The result indicates that PFS radiance is valid to better than 1% in the wavenumber range 1800-4200 cm⁻¹ for the average spectrum considered (1680 measurements). The PFS monochromatic transfer function generates an overshooting on the left-hand side of strong narrow lines (solar or atmospheric). The spectral resolution of PFS is of the order of 1.3 cm⁻¹ or better. A large number of narrow features to be identified are discovered.

  5. Adaptive Control for Buck Power Converter Using Fixed Point Inducting Control and Zero Average Dynamics Strategies

    Science.gov (United States)

    Hoyos Velasco, Fredy Edimer; García, Nicolás Toro; Garcés Gómez, Yeison Alberto

    In this paper, the output voltage of a buck power converter is controlled by means of a quasi-sliding scheme. The Fixed Point Inducting Control (FPIC) technique is used for the control design, based on the Zero Average Dynamics (ZAD) strategy, including load estimation by means of the Least Mean Squares (LMS) method. The control scheme is tested in a Rapid Control Prototyping (RCP) system based on Digital Signal Processing (DSP) for dSPACE platform. The closed loop system shows adequate performance. The experimental and simulation results match. The main contribution of this paper is to introduce the load estimator by means of LMS, to make ZAD and FPIC control feasible in load variation conditions. In addition, comparison results for controlled buck converter with SMC, PID and ZAD-FPIC control techniques are shown.
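
    A hedged sketch of the load-estimation ingredient only: a standard least-mean-squares (LMS) update identifying an unknown parameter vector from regressor and measurement samples. The regressor construction and step size are illustrative, not the controller's actual signals:

      import numpy as np

      def lms_estimate(u, d, n_taps=2, mu=0.01):
          """Identify a parameter vector w from inputs u and measurements d
          with the LMS rule w <- w + mu * e * phi."""
          u = np.asarray(u, dtype=float)
          d = np.asarray(d, dtype=float)
          w = np.zeros(n_taps)
          for k in range(n_taps, len(d)):
              phi = u[k - n_taps:k][::-1]   # regressor vector
              e = d[k] - phi @ w            # a-priori estimation error
              w = w + mu * e * phi          # LMS update
          return w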

  6. ATP signals

    DEFF Research Database (Denmark)

    Novak, Ivana

    2016-01-01

    The Department of Biology at the University of Copenhagen explains the function of ATP signalling in the pancreas......The Department of Biology at the University of Copenhagen explains the function of ATP signalling in the pancreas...

  7. Social Memory Formation Rapidly and Differentially Affects the Motivation and Performance of Vocal Communication Signals in the Bengalese Finch (Lonchura striata var. domestica).

    Science.gov (United States)

    Toccalino, Danielle C; Sun, Herie; Sakata, Jon T

    2016-01-01

    Cognitive processes like the formation of social memories can shape the nature of social interactions between conspecifics. Male songbirds use vocal signals during courtship interactions with females, but the degree to which social memory and familiarity influence the likelihood and structure of male courtship song remains largely unknown. Using a habituation-dishabituation paradigm, we found that a single, brief exposure to an individual female led to the formation of a short-term memory for that female: adult male Bengalese finches were significantly less likely to produce courtship song to an individual female when re-exposed to her 5 min later (i.e., habituation). Familiarity also rapidly decreased the duration of courtship songs but did not affect other measures of song performance (e.g., song tempo and the stereotypy of syllable structure and sequencing). Consistent with a contribution of social memory to the decrease in courtship song with repeated exposures to the same female, the likelihood that male Bengalese finches produced courtship song increased when they were exposed to a different female (i.e., dishabituation). Three consecutive exposures to individual females also led to the formation of a longer-term memory that persisted over days. Specifically, when courtship song production was assessed 2 days after initial exposures to females, males produced fewer and shorter courtship songs to familiar females than to unfamiliar females. Measures of song performance, however, were not different between courtship songs produced to familiar and unfamiliar females. The formation of a longer-term memory for individual females seemed to require at least three exposures because males did not differentially produce courtship song to unfamiliar females and females that they had been exposed to only once or twice. Taken together, these data indicate that brief exposures to individual females led to the rapid formation and persistence of social memories and support the existence of distinct mechanisms underlying the motivation to

  8. Social Memory Formation Rapidly and Differentially Affects the Motivation and Performance of Vocal Communication Signals in the Bengalese Finch (Lonchura striata var. domestica)

    Science.gov (United States)

    Toccalino, Danielle C.; Sun, Herie; Sakata, Jon T.

    2016-01-01

    Cognitive processes like the formation of social memories can shape the nature of social interactions between conspecifics. Male songbirds use vocal signals during courtship interactions with females, but the degree to which social memory and familiarity influence the likelihood and structure of male courtship song remains largely unknown. Using a habituation-dishabituation paradigm, we found that a single, brief exposure to an individual female led to the formation of a short-term memory for that female: adult male Bengalese finches were significantly less likely to produce courtship song to an individual female when re-exposed to her 5 min later (i.e., habituation). Familiarity also rapidly decreased the duration of courtship songs but did not affect other measures of song performance (e.g., song tempo and the stereotypy of syllable structure and sequencing). Consistent with a contribution of social memory to the decrease in courtship song with repeated exposures to the same female, the likelihood that male Bengalese finches produced courtship song increased when they were exposed to a different female (i.e., dishabituation). Three consecutive exposures to individual females also led to the formation of a longer-term memory that persisted over days. Specifically, when courtship song production was assessed 2 days after initial exposures to females, males produced fewer and shorter courtship songs to familiar females than to unfamiliar females. Measures of song performance, however, were not different between courtship songs produced to familiar and unfamiliar females. The formation of a longer-term memory for individual females seemed to require at least three exposures because males did not differentially produce courtship song to unfamiliar females and females that they had been exposed to only once or twice. Taken together, these data indicate that brief exposures to individual females led to the rapid formation and persistence of social memories and support the existence of distinct

  9. How to improve a critical performance for an ExoMars 2020 Scientific Instrument (RLS). Raman Laser Spectrometer Signal to Noise Ratio (SNR) Optimization

    Science.gov (United States)

    Canora, C. P.; Moral, A. G.; Rull, F.; Maurice, S.; Hutchinson, I.; Ramos, G.; López-Reyes, G.; Belenguer, T.; Canchal, R.; Prieto, J. A. R.; Rodriguez, P.; Santamaria, P.; Berrocal, A.; Colombo, M.; Gallago, P.; Seoane, L.; Quintana, C.; Ibarmia, S.; Zafra, J.; Saiz, J.; Santiago, A.; Marin, A.; Gordillo, C.; Escribano, D.; Sanz-Palominoa, M.

    2017-09-01

    The Raman Laser Spectrometer (RLS) is one of the Pasteur Payload instruments, within the ESA's Aurora Exploration Programme, ExoMars mission. Raman spectroscopy is based on the analysis of spectral fingerprints due to the inelastic scattering of light when interacting with matter. RLS is composed of three units — the SPU (Spectrometer Unit), the iOH (Internal Optical Head) and the ICEU (Instrument Control and Excitation Unit) — and the harnesses (EH and OH). The iOH focuses the excitation laser on the samples and collects the Raman emission from the sample via the SPU (CCD); the analog video data is received, digitized and transmitted to the processor module (ICEU). The main sources of noise arise from the sample, the background, and the instrument (laser, CCD, focus, acquisition parameters, operation control). In this last case the sources are mainly perturbations from the optics, dark signal and readout noise. Flicker noise arising from laser emission fluctuations can also be considered instrument noise. In order to evaluate the SNR of a Raman instrument in a practical manner it is useful to perform end-to-end measurements on given standard samples. These measurements have to be compared with radiometric simulations using Raman efficiency values from the literature and taking into account the different instrumental contributions to the SNR. The RLS EQM instrument performance results and functionalities have been demonstrated in accordance with the science expectations. The SNR performances obtained with the RLS EQM will be compared, experimentally and via analysis, with the Instrument Radiometric Model tool. The characterization process for SNR optimization is still ongoing. The operational parameters and RLS algorithms (fluorescence removal and acquisition parameter estimation) will be improved in future models (EQM-2) until FM model delivery.

  10. Operator product expansion and its thermal average

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

    QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules, when the temperature is not too low. (orig.) 7 refs.

  11. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  12. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  13. Baseline-dependent averaging in radio interferometry

    Science.gov (United States)

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
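
    A hedged sketch of the mechanism (not the SKA pipeline): average each baseline's visibility time series with a window inversely proportional to baseline length, so short baselines, whose fringes vary slowly, are averaged more heavily for a comparable decorrelation loss. The names and the exact scaling rule are assumptions:

      import numpy as np

      def bda(vis, baseline_len, ref_len, max_window=64):
          """Average a visibility series with a baseline-dependent window:
          the reference (shortest) baseline ref_len gets max_window samples,
          longer baselines proportionally fewer."""
          w = int(round(max_window * ref_len / baseline_len))
          w = max(1, min(max_window, w))
          n = (len(vis) // w) * w               # drop the ragged tail
          return np.asarray(vis)[:n].reshape(-1, w).mean(axis=1), w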

  14. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered as a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much lower electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  15. Time-averaged MSD of Brownian motion

    International Nuclear Information System (INIS)

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution
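
    The defining functional is simple to compute; a minimal sketch for a one-dimensional trajectory sampled at equal time steps (lag counted in samples):

      import numpy as np

      def tamsd(x, lag):
          """Time-averaged MSD at a given lag:
          (1/(N - lag)) * sum_k (x[k + lag] - x[k])**2."""
          x = np.asarray(x, dtype=float)
          d = x[lag:] - x[:-lag]
          return np.mean(d ** 2)

    For Brownian motion with diffusion coefficient D and time step dt, the ensemble mean of tamsd(x, lag) is 2*D*lag*dt; the distribution of the estimator around that mean is what the record characterizes.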

  16. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.

  17. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  18. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated by the statistical model, in systems with the same and with different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good fit of the beta stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.) [pt]

  19. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.

  20. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares

  1. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  2. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
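
    An illustration only (the record's setting is MCMC-based; this sketch uses a generic Robbins-Monro recursion): the trajectory average is the running mean of the iterates, which is typically more efficient than the last iterate under slowly decaying gains. The gain schedule and names are assumptions:

      import numpy as np

      def sa_with_trajectory_average(field, theta0, n_iter=10000, a=1.0, gamma=0.6):
          """Robbins-Monro recursion theta <- theta - a_k * field(theta)
          with a_k = a / k**gamma, returning the trajectory (running)
          average of the iterates; field(theta) is a noisy estimate of the
          quantity being driven to zero."""
          theta = np.asarray(theta0, dtype=float).copy()
          avg = theta.copy()
          for k in range(1, n_iter + 1):
              theta = theta - (a / k ** gamma) * field(theta)
              avg += (theta - avg) / k        # running mean of iterates
          return avg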

  3. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_uu, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_uuu···u ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  4. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality

  5. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Full Text Available Consider an arbitrary nonnegative deterministic process {X(t), t ≥ 0} (in a stochastic setting, a fixed realization, i.e., sample path, of the underlying stochastic process) with state space S = (−∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.

  6. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators, describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solutions depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor lies higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  7. A robust detector for rolling element bearing condition monitoring based on the modulation signal bispectrum and its performance evaluation against the Kurtogram

    Science.gov (United States)

    Tian, Xiange; Xi Gu, James; Rehab, Ibrahim; Abdalla, Gaballa M.; Gu, Fengshou; Ball, A. D.

    2018-02-01

    Envelope analysis is a widely used method for rolling element bearing fault detection. To obtain high detection accuracy, it is critical to determine an optimal frequency narrowband for the envelope demodulation. However, many of the schemes which are used for the narrowband selection, such as the Kurtogram, can produce poor detection results because they are sensitive to random noise and aperiodic impulses which normally occur in practical applications. To achieve the purposes of denoising and frequency band optimisation, this paper proposes a novel modulation signal bispectrum (MSB) based robust detector for bearing fault detection. Because of its inherent noise suppression capability, the MSB allows effective suppression of both stationary random noise and discrete aperiodic noise. The high magnitude features that result from the use of the MSB also enhance the modulation effects of a bearing fault and can be used to provide optimal frequency bands for fault detection. The Kurtogram is generally accepted as a powerful means of selecting the most appropriate frequency band for envelope analysis, and as such it has been used as the benchmark comparator for performance evaluation in this paper. Both simulated and experimental data analysis results show that the proposed method produces more accurate and robust detection results than Kurtogram based approaches for common bearing faults under a range of representative scenarios.

  8. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest and accelerating the particles to full energy, which results in distinct phase-energy correlations (chirps) on each bunch train, independently controlled by the choice of phase offset. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M_56, which are selected to compress all three bunch trains at the FEL, with higher order terms managed.

  9. Quetelet, the average man and medical knowledge.

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  10. [Quetelet, the average man and medical knowledge].

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  11. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach in devising a ratings scheme that we apply to the data from the Netflix prize, and find a significant improvement using our method over a baseline.
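
    A loose sketch of one plausible reading of the construction (the paper's exact recursion may differ): assign each node a real number relative to a source by a fixed-point iteration in which a node's number is the weighted harmonic mean, over its neighbours, of the neighbour's number plus an edge cost:

      import numpy as np

      def gen_numbers(adj, source, n_iter=200):
          """Fixed-point sketch of real-valued generalized Erdos numbers.
          adj is a square array with adj[i][j] = edge weight (0 if absent);
          harmonic averaging lets many weak ties act like one strong tie.
          Illustrative reading, not the paper's exact definition."""
          n = len(adj)
          E = np.full(n, float(n))              # pessimistic initial guess
          E[source] = 0.0
          for _ in range(n_iter):
              for i in range(n):
                  if i == source:
                      continue
                  nbrs = [(j, w) for j, w in enumerate(adj[i]) if w > 0]
                  if not nbrs:
                      continue
                  wsum = sum(w for _, w in nbrs)
                  inv = sum(w / (E[j] + 1.0 / w) for j, w in nbrs)
                  E[i] = wsum / inv             # weighted harmonic mean
          return E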

  12. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV
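
    For orientation only, a numerical sketch of the electron-at-rest special case (the record's average also covers moving electrons, which this omits): the Klein-Nishina differential cross section averaged over the polar angle with solid-angle weight.

      import numpy as np

      R_E = 2.8179403262e-13   # classical electron radius, cm

      def klein_nishina(alpha, theta):
          """dsigma/dOmega for a photon of energy alpha (units of m0*c^2)
          scattering through angle theta off an electron at rest."""
          ratio = 1.0 / (1.0 + alpha * (1.0 - np.cos(theta)))   # alpha_s/alpha
          return 0.5 * R_E ** 2 * ratio ** 2 * (ratio + 1.0 / ratio - np.sin(theta) ** 2)

      def angle_averaged(alpha, n=2000):
          """Solid-angle-weighted mean over theta; multiplying by 4*pi
          recovers the total Klein-Nishina cross section."""
          theta = np.linspace(0.0, np.pi, n)
          w = np.sin(theta)
          return np.sum(klein_nishina(alpha, theta) * w) / np.sum(w)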

  13. Reynolds averaged simulation of unsteady separated flow

    International Nuclear Information System (INIS)

    Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.

    2003-01-01

    The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flow around square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better concurrence with available experimental data than has been achieved with steady computation

  14. Angle-averaged Compton cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  15. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
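
    A hedged sketch of the simple estimator described above (the equal-fraction comparison; tie handling and the choice of q are assumptions):

      import numpy as np

      def balanced_sace(y_t, surv_t, y_c, surv_c, q=0.5):
          """Difference in mean longitudinal outcome between the
          longest-surviving fraction q of the treatment arm and the
          longest-surviving fraction q of the control arm."""
          def top_fraction(y, surv, q):
              k = max(1, int(np.ceil(q * len(surv))))
              idx = np.argsort(surv)[-k:]       # longest survivors
              return np.asarray(y, dtype=float)[idx]
          return top_fraction(y_t, surv_t, q).mean() - top_fraction(y_c, surv_c, q).mean()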

  16. A virtual pebble game to ensemble average graph rigidity.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies are sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most

  17. Averaging scheme for atomic resolution off-axis electron holograms.

    Science.gov (United States)

    Niermann, T; Lehmann, M

    2014-08-01

    All micrographs are limited by shot-noise, which is intrinsic to the detection process of electrons. For beam-insensitive specimens this limitation can in principle easily be circumvented by prolonged exposure times. However, in the high-resolution regime several instrumental instabilities limit the applicable exposure time. Particularly in the case of off-axis holography the holograms are highly sensitive to the position and voltage of the electron-optical biprism. We present a novel reconstruction algorithm to average a series of off-axis holograms while compensating for specimen drift, biprism drift, drift of biprism voltage, and drift of defocus, which all might cause problematic changes from exposure to exposure. We show an application of the algorithm utilizing also the possibilities of double biprism holography, which results in a high quality exit-wave reconstruction with 75 pm resolution at a very high signal-to-noise ratio. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Adaptive signal processor

    Energy Technology Data Exchange (ETDEWEB)

    Walz, H.V.

    1980-07-01

    An experimental, general purpose adaptive signal processor system has been developed, utilizing a quantized (clipped) version of the Widrow-Hoff least-mean-square adaptive algorithm developed by Moschner. The system accommodates 64 adaptive weight channels with 8-bit resolution for each weight. Internal weight update arithmetic is performed with 16-bit resolution, and the system error signal is measured with 12-bit resolution. An adapt cycle of adjusting all 64 weight channels is accomplished in 8 μsec. Hardware of the signal processor utilizes primarily Schottky-TTL type integrated circuits. A prototype system with 24 weight channels has been constructed and tested. This report presents details of the system design and describes basic experiments performed with the prototype signal processor. Finally some system configurations and applications for this adaptive signal processor are discussed.

  19. Adaptive signal processor

    International Nuclear Information System (INIS)

    Walz, H.V.

    1980-07-01

    An experimental, general purpose adaptive signal processor system has been developed, utilizing a quantized (clipped) version of the Widrow-Hoff least-mean-square adaptive algorithm developed by Moschner. The system accommodates 64 adaptive weight channels with 8-bit resolution for each weight. Internal weight update arithmetic is performed with 16-bit resolution, and the system error signal is measured with 12-bit resolution. An adapt cycle of adjusting all 64 weight channels is accomplished in 8 μsec. Hardware of the signal processor utilizes primarily Schottky-TTL type integrated circuits. A prototype system with 24 weight channels has been constructed and tested. This report presents details of the system design and describes basic experiments performed with the prototype signal processor. Finally some system configurations and applications for this adaptive signal processor are discussed

  20. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.

  1. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random-weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.

  2. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    Multi-model ensemble (MME) average is considered the most reliable for simulating both present-day and future climates. It has been a primary reference for drawing conclusions in major coordinated studies, e.g. the IPCC Assessment Reports and CORDEX. The biases of individual models cancel out each other in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes with tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling that achieves a similar level of bias reduction at a fraction of the cost compared with the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs to conduct a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions with the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling

  3. Systematic approach to peak-to-average power ratio in OFDM

    Science.gov (United States)

    Schurgers, Curt

    2001-11-01

    OFDM multicarrier systems support high data rate wireless transmission using orthogonal frequency channels, and require no extensive equalization, yet offer excellent immunity against fading and inter-symbol interference. The major drawback of these systems is the large Peak-to-Average power Ratio (PAR) of the transmit signal, which renders a straightforward implementation very costly and inefficient. Existing approaches that attack this PAR issue are abundant, but no systematic framework or comparison between them exist to date. They sometimes even differ in the problem definition itself and consequently in the basic approach to follow. In this work, we provide a systematic approach that resolves this ambiguity and spans the existing PAR solutions. The basis of our framework is the observation that efficient system implementations require a reduced signal dynamic range. This range reduction can be modeled as a hard limiting, also referred to as clipping, where the extra distortion has to be considered as part of the total noise tradeoff. We illustrate that the different PAR solutions manipulate this tradeoff in alternative ways in order to improve the performance. Furthermore, we discuss and compare a broad range of such techniques and organize them into three classes: block coding, clip effect transformation and probabilistic.
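
    The PAR problem and the clipping model the abstract describes are easy to reproduce numerically. The sketch below, with illustrative parameters throughout, generates one QPSK-modulated OFDM symbol, measures its peak-to-average power ratio, and applies phase-preserving hard limiting:

      import numpy as np

      rng = np.random.default_rng(0)
      n_sub = 256                                   # number of subcarriers (illustrative)
      sym = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=n_sub) / np.sqrt(2)  # QPSK
      x = np.fft.ifft(sym) * np.sqrt(n_sub)         # time-domain symbol, unit mean power

      def papr_db(s):
          return 10 * np.log10(np.max(np.abs(s)**2) / np.mean(np.abs(s)**2))

      # Hard limiting ("clipping"): cap the envelope at 1.4x rms, keep the phase
      clip = 1.4 * np.sqrt(np.mean(np.abs(x)**2))
      scale = np.minimum(1.0, clip / np.maximum(np.abs(x), 1e-12))
      print(papr_db(x), papr_db(x * scale))         # PAPR before vs. after clipping

    The clipped symbol trades a lower PAR for in-band distortion and spectral regrowth, which is exactly the noise tradeoff the paper's framework organizes.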

  4. Perceptual learning in Williams syndrome: looking beyond averages.

    Directory of Open Access Journals (Sweden)

    Patricia Gervan

    Full Text Available Williams Syndrome is a genetically determined neurodevelopmental disorder characterized by an uneven cognitive profile and surprisingly large neurobehavioral differences among individuals. Previous studies have already shown different forms of memory deficiencies and learning difficulties in WS. Here we studied the capacity of WS subjects to improve their performance in a basic visual task. We employed a contour integration paradigm that addresses occipital visual function, and analyzed the initial (i.e. baseline) and after-learning performance of WS individuals. Instead of pooling the very inhomogeneous results of WS subjects together, we evaluated individual performance by expressing it in terms of the deviation from the average performance of the group of typically developing subjects of similar age. This approach helped us to reveal information about the possible origins of the poor performance of WS subjects in contour integration. Although the majority of WS individuals showed both reduced baseline and reduced learning performance, individual analysis also revealed a dissociation between baseline and learning capacity in several WS subjects. In spite of impaired initial contour integration performance, some WS individuals presented learning capacity comparable to learning in the typically developing population, and vice versa, poor learning was also observed in subjects with high initial performance levels. These data indicate a dissociation between factors determining initial performance and perceptual learning.

  5. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, and is therefore not yet a commercial process, although it has been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~ 1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulsewidth ~ 1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  6. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.

  7. Geographic Gossip: Efficient Averaging for Sensor Networks

    Science.gov (United States)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log n}} \\log \\epsilon^{-1})$ radio transmissions, which yields a $\\sqrt{\\frac{n}{\\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
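
    The averaging primitive underlying all gossip schemes is easy to state in code. The toy simulation below runs standard pairwise gossip (not the geographic variant proposed in the paper) on a ring, a topology chosen only for brevity; each step replaces two neighbours' values with their mean, which conserves the sum and drives all nodes toward the average:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 100
      x = rng.normal(size=n)                  # one measurement per sensor node
      target = x.mean()                       # the desired consensus value

      for _ in range(20000):                  # pairwise gossip rounds
          i = rng.integers(n)
          j = (i + rng.choice([-1, 1])) % n   # a random ring neighbour of i
          x[i] = x[j] = (x[i] + x[j]) / 2     # both nodes keep the pair average

      print(np.abs(x - target).max())         # maximum deviation from consensus

    The slow convergence of exactly this kind of nearest-neighbour exchange on rings and grids is the inefficiency that geographic gossip is designed to overcome.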

  8. High-average-power solid state lasers

    International Nuclear Information System (INIS)

    Summers, M.A.

    1989-01-01

    In 1987, a broad-based, aggressive R&D program was launched, aimed at developing the technologies necessary to make possible the use of solid state lasers that are capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and applications of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wave front aberrations in zig-zag slabs, understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs, and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  9. The concept of average LET values determination

    International Nuclear Information System (INIS)

    Makarewicz, M.

    1981-01-01

    The concept of determining average LET (linear energy transfer) values, i.e. ordinary moments of LET in the absorbed dose distribution vs. LET, for ionizing radiation of any kind and any spectrum (even unknown ones), is presented. The method is based on measuring the ionization current at several values of the voltage supplying an ionization chamber operating under conditions of columnar recombination of ions, or ion recombination in clusters, while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one can obtain coefficients of the expression which can be interpreted as values of the LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on the LET of radiation it is not necessary to know the dose distribution but only a number of parameters of the distribution, i.e. the LET moments. (author)

  10. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method for transforming fixed angular momentum projection traces into fixed angular momentum traces for the configuration space is developed. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  11. Image compression using moving average histogram and RBF network

    International Nuclear Information System (INIS)

    Khowaja, S.; Ismaili, I.A.

    2015-01-01

    Modernization and globalization have made multimedia technology one of the fastest growing fields in recent times, but optimal use of bandwidth and storage has been one of the topics that attract the research community to work on. Considering that images have a lion's share in multimedia communication, efficient image compression techniques have become a basic need for optimal use of bandwidth and space. This paper proposes a novel method for image compression based on the fusion of a moving average histogram and an RBF (Radial Basis Function) network. The proposed technique employs the concept of reducing color intensity levels using the moving average histogram technique, followed by the correction of color intensity levels using RBF networks at the reconstruction phase. Existing methods have used low resolution images for testing purposes, but the proposed method has been tested on various image resolutions to give a clear assessment of the said technique. The proposed method has been tested on 35 images of varying resolution and has been compared with existing algorithms in terms of CR (Compression Ratio), MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio), and computational complexity. The outcome shows that the proposed methodology is a better trade-off technique in terms of compression ratio, PSNR (which determines the quality of the image), and computational complexity. (author)

  12. High-average-power diode-pumped Yb: YAG lasers

    International Nuclear Information System (INIS)

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-01-01

    A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high average power Yb:YAG lasers that utilize a rod configured gain element. Previously, this rod configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual rod configuration consisting of two, 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) Hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes. (2) Compound laser rods with flanged-nonabsorbing-endcaps fabricated by diffusion bonding. (3) Techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods.

  13. Signal-averaged P wave duration and the long-term risk of permanent atrial fibrillation

    DEFF Research Database (Denmark)

    Dixen, Ulrik; Larsen, Mette Vang; Ravn, Lasse Steen

    2008-01-01

    of permanent AF. The risk of permanent AF after 3 years follow-up was 0.72 with an SAPWD equal to 180 ms versus 0.39 with a normal SAPWD (130 ms). We found no prognostic effect of age, gender, dilated left atrium, long duration of AF history, or long duration of the most recent episode of AF. Co...

  14. An Investigation of Vibration Signal Averaging of Individual Components in an Epicyclic Gearbox

    Science.gov (United States)

    1991-06-01

    kHz. These were mounted on a set of small steel blocks bonded to the gearbox casing at various positions. A six channel PCB charge amplifying power... callable library of routines, ATLAB, available with the data acquisition card was used to acquire the digitised data into Fortran 2-byte integer arrays. A

  15. A novel power spectrum calculation method using phase-compensation and weighted averaging for the estimation of ultrasound attenuation.

    Science.gov (United States)

    Heo, Seo Weon; Kim, Hyungsuk

    2010-05-01

    An estimation of ultrasound attenuation in soft tissues is critical in quantitative ultrasound analysis, since it is not only related to the estimation of other ultrasound parameters, such as speed of sound, integrated scatterers, or scatterer size, but also provides pathological information about the scanned tissue. However, estimation performance for ultrasound attenuation is intimately tied to the accurate extraction of spectral information from the backscattered radiofrequency (RF) signals. In this paper, we propose two novel techniques for calculating a block power spectrum from the backscattered ultrasound signals. These are based on the phase-compensation of each RF segment using the normalized cross-correlation to minimize estimation errors due to phase variations, and the weighted averaging technique to maximize the signal-to-noise ratio (SNR). The simulation results with uniform numerical phantoms demonstrate that the proposed method estimates local attenuation coefficients within 1.57% of the actual values, while the conventional methods estimate those within 2.96%. The proposed method is especially effective when we deal with signals reflected from deeper depths, where the SNR level is lower, or when the gated window contains a small number of signal samples. Experimental results at 5 MHz, obtained with a one-dimensional 128-element array on tissue-mimicking phantoms, also show that the proposed method provides better estimation results (within 3.04% of the actual value) with smaller estimation variances compared to the conventional methods (within 5.93%) for all cases considered. Copyright 2009 Elsevier B.V. All rights reserved.
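
    A minimal sketch of the two ingredients, under simplifying assumptions, looks as follows in Python: each RF segment is aligned to a reference segment at the lag of peak cross-correlation before its periodogram enters the average. Plain (unnormalized) correlation and uniform weights stand in for the paper's normalized cross-correlation and SNR-based weighting, and all names are illustrative:

      import numpy as np

      def block_power_spectrum(segments, ref):
          spectra = []
          for seg in segments:
              # Lag of peak cross-correlation between segment and reference
              xc = np.correlate(seg - seg.mean(), ref - ref.mean(), mode="full")
              lag = int(xc.argmax()) - (len(ref) - 1)
              aligned = np.roll(seg, -lag)          # compensate the relative delay
              win = np.hanning(len(aligned))
              spectra.append(np.abs(np.fft.rfft(aligned * win)) ** 2)
          return np.mean(spectra, axis=0)           # average of aligned periodograms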

  16. PERFORMANCE

    Directory of Open Access Journals (Sweden)

    M Cilli

    2014-10-01

    Full Text Available This study aimed to investigate the kinematic and kinetic changes when resistance is applied in horizontal and vertical directions, produced by using different percentages of body weight, caused by jumping movements during a dynamic warm-up. The group of subjects consisted of 35 voluntary male athletes (19 basketball and 16 volleyball players; age: 23.4 ± 1.4 years, training experience: 9.6 ± 2.7 years; height: 177.2 ± 5.7 cm, body weight: 69.9 ± 6.9 kg) studying Physical Education, who had a jump training background and who were training for 2 hours, on 4 days a week. A dynamic warm-up protocol containing seven specific resistance movements with specific resistance corresponding to different percentages of body weight (2%, 4%, 6%, 8%, 10%) was applied randomly on non-consecutive days. Effects of the different warm-up protocols were assessed by pre-/post-exercise changes in jump height in the countermovement jump (CMJ) and the squat jump (SJ), measured using a force platform, and changes in hip and knee joint angles at the end of the eccentric phase, measured using a video camera. A significant increase in jump height was observed after the dynamic resistance warm-up conducted with different percentages of body weight (p<0.05). In jump movements before and after the warm-up, while no significant difference between the vertical ground reaction forces applied by athletes was observed (p>0.05), in some cases of resistance, a significant reduction was observed in hip and knee joint angles (p<0.05). The dynamic resistance warm-up method was found to cause changes in the kinematics of jumping movements, as well as an increase in jump height values. As a result, dynamic warm-up exercises could be applicable in cases of resistance corresponding to 6-10% of body weight applied in horizontal and vertical directions in order to increase jump performance acutely.

  17. The Value and Feasibility of Farming Differently Than the Local Average

    OpenAIRE

    Morris, Cooper; Dhuyvetter, Kevin; Yeager, Elizabeth A; Regier, Greg

    2018-01-01

    The purpose of this research is to quantify the value of being different than the local average and feasibility of distinguishing particular parts of an operation from the local average. Kansas crop farms are broken down by their farm characteristics, production practices, and management performances. An ordinary least squares regression model is used to quantify the value of having different than average characteristics, practices, and management performances. The degree farms have distingui...

  18. A Predictive Likelihood Approach to Bayesian Averaging

    Directory of Open Access Journals (Sweden)

    Tomáš Jeřábek

    2015-01-01

    Full Text Available Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four various models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is assessed using historical data covering the domestic economy and the foreign economy, which is represented by the countries of the Eurozone. Because the forecast accuracies of the observed models differ, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix, and model ranks are used to combine the models. The equal-weight scheme is used as a simple combination scheme. The results show that optimally combined densities are comparable to the best individual models.

  19. Return to mixed signals

    International Nuclear Information System (INIS)

    Gadomski, C.R.; Maloney, P.

    1991-01-01

    This article examines the performance of independent energy stocks. The topics discussed in the article include the performance of the energy stocks compared to the performance of the Dow averages, and industry prospects for the short term. The article includes two sidebars concerning Wall Street analysts' recommendations of independent energy companies and a rating of the Independent Energy 100 based on their performance as of August 1991.

  20. Fast Decentralized Averaging via Multi-scale Gossip

    Science.gov (United States)

    Tsianos, Konstantinos I.; Rabbat, Michael G.

    We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has communication cost of O(n log log n log ɛ^{-1}) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.

  1. Direct measurement of fast transients by using boot-strapped waveform averaging

    Science.gov (United States)

    Olsson, Mattias; Edman, Fredrik; Karki, Khadga Jung

    2018-03-01

    An approximation to coherent sampling, also known as boot-strapped waveform averaging, is presented. The method uses digital cavities to determine the condition for coherent sampling. It can be used to increase the effective sampling rate of a repetitive signal and the signal to noise ratio simultaneously. The method is demonstrated by using it to directly measure the fluorescence lifetime from Rhodamine 6G by digitizing the signal from a fast avalanche photodiode. The obtained lifetime of 4.0 ns is in agreement with the known values.
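
    The payoff of averaging repetitive sweeps is easy to demonstrate once coherent alignment is assumed; achieving that alignment is what the digital-cavity technique provides. A toy version with illustrative numbers:

      import numpy as np

      rng = np.random.default_rng(2)
      t = np.linspace(0.0, 20e-9, 400)            # 20 ns acquisition window
      pulse = np.exp(-t / 4.0e-9)                 # repetitive decay, tau = 4 ns
      noise_sd = 0.5
      sweeps = pulse + rng.normal(0.0, noise_sd, size=(1000, t.size))

      avg = sweeps.mean(axis=0)                   # average of coherently aligned sweeps
      resid = avg - pulse
      print(noise_sd / resid.std())               # noise shrinks ~sqrt(1000), about 32x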

  2. Average properties of bidisperse bubbly flows

    Science.gov (United States)

    Serrano-García, J. C.; Mendez-Díaz, S.; Zenit, R.

    2018-03-01

    Experiments were performed in a vertical channel to study the properties of a bubbly flow composed of two distinct bubble size species. Bubbles were produced using a capillary bank with tubes of two distinct inner diameters; the flow through each capillary size was controlled such that the amount of large or small bubbles could be adjusted. Using water and water-glycerin mixtures, a wide range of Reynolds and Weber numbers was investigated. The gas volume fraction ranged between 0.5% and 6%. Measurements of the mean bubble velocity of each species and the liquid velocity variance were obtained and contrasted with monodisperse flows of equivalent gas volume fractions. We found that bidispersity can induce a reduction of the mean bubble velocity of the large species; for the small size species, the bubble velocity can be increased, decreased, or remain unaffected depending on the flow conditions. The liquid velocity variance of the bidisperse flows is, in general, bound by the values of the small and large monodisperse cases; interestingly, in some cases, the liquid velocity fluctuations can be larger than in either monodisperse case. A simple model for the liquid agitation of bidisperse flows is proposed, with good agreement with the experimental measurements.

  3. Power Based Phase-Locked Loop Under Adverse Conditions with Moving Average Filter for Single-Phase System

    Directory of Open Access Journals (Sweden)

    Menxi Xie

    2017-06-01

    Full Text Available A high performance synchronization method is critical for grid-connected power converters. For single-phase systems, the power based phase-locked loop (pPLL) uses a multiplier as the phase detector (PD). When the single-phase grid voltage is distorted, the phase error information contains ac disturbances oscillating at integer multiples of the fundamental frequency, which lead to detection errors. This paper presents a new scheme based on a moving average filter (MAF) applied in-loop of the pPLL. The signal characteristics of the phase error are discussed in detail. A predictive rule is adopted to compensate the delay induced by the MAF, thus achieving fast dynamic response. When the frequency deviates from nominal, the estimated frequency is fed back to adjust the filter window length of the MAF and the buffer size of the predictive rule. Simulation and experimental results show that the proposed PLL achieves good performance under adverse grid conditions.
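
    The in-loop filter itself is a plain moving average; a window spanning one fundamental period nulls the double-frequency ripple that the multiplier PD produces. In the sketch below all numbers are illustrative, and the paper's predictive delay compensation is omitted:

      import numpy as np

      def moving_average_filter(x, window):
          kernel = np.ones(window) / window
          return np.convolve(x, kernel, mode="same")

      fs, f0 = 10_000, 50                        # sample rate and grid frequency (Hz)
      t = np.arange(0.0, 0.2, 1.0 / fs)
      # Modeled PD output: a slow phase-error drift plus ripple at 2*f0
      err = 0.02 * t + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)
      filtered = moving_average_filter(err, fs // f0)   # one-period window (200 samples)

    The price of the ripple rejection is a group delay of half the window, which is precisely what the paper's predictive rule compensates.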

  4. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'

  5. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some cases, during transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., when the trap is placed on a central node and when the trap is uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
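
    For a concrete handle on the APL quantity studied here, the brute-force computation on any unweighted graph is a breadth-first search from every node; the sketch below is generic, not the paper's closed-form derivation for dual dendrimers:

      from collections import deque

      def average_path_length(adj):
          # `adj` is an adjacency dict {node: [neighbours]} of a connected graph
          total, pairs = 0, 0
          for s in adj:
              dist = {s: 0}
              q = deque([s])
              while q:
                  u = q.popleft()
                  for v in adj[u]:
                      if v not in dist:
                          dist[v] = dist[u] + 1
                          q.append(v)
              total += sum(dist.values())
              pairs += len(dist) - 1
          return total / pairs

      ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
      print(average_path_length(ring))   # 1.8 for a 6-cycle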

  6. Motor current signature analysis for gearbox condition monitoring under transient speeds using wavelet analysis and dual-level time synchronous averaging

    Science.gov (United States)

    Bravo-Imaz, Inaki; Davari Ardakani, Hossein; Liu, Zongchang; García-Arribas, Alfredo; Arnaiz, Aitor; Lee, Jay

    2017-09-01

    This paper focuses on analyzing the motor current signature for fault diagnosis of gearboxes operating under transient speed regimes. Two different strategies are evaluated, extensively tested and compared for analyzing the motor current signature in order to implement a condition monitoring system for gearboxes in industrial machinery. A specially designed test bench is used, thoroughly monitored to fully characterize the experiments, in which gears in different health states are tested. The measured signals are analyzed using discrete wavelet decomposition at different decomposition levels, using a range of mother wavelets. Moreover, a dual-level time synchronous averaging analysis is performed on the same signals to compare the performance of the two methods. From both analyses, the relevant features of the signals are extracted and cataloged using a self-organizing map, which allows for easy detection and classification of the diverse health states of the gears. The results demonstrate the effectiveness of both methods for diagnosing gearbox faults. A slightly better performance was observed for the dual-level time synchronous averaging method. Based on the obtained results, the proposed methods can be used as effective and reliable procedures for gearbox condition monitoring using only the motor current signature.
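
    The synchronous-averaging building block is worth seeing in isolation. A single-level sketch, assuming the record has already been resampled to an integer number of samples per shaft revolution (the paper applies the idea at two gearing levels):

      import numpy as np

      def time_synchronous_average(signal, samples_per_rev):
          # Average whole revolutions: components locked to the rotation add up
          # coherently, while asynchronous noise and other shafts average out.
          n_rev = len(signal) // samples_per_rev
          revs = np.asarray(signal)[: n_rev * samples_per_rev]
          return revs.reshape(n_rev, samples_per_rev).mean(axis=0)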

  7. Parallel Array Bistable Stochastic Resonance System with Independent Input and Its Signal-to-Noise Ratio Improvement

    Directory of Open Access Journals (Sweden)

    Wei Li

    2014-01-01

    First, we propose a parallel array bistable stochastic resonance system with independent components and averaged output; second, we give a derivation of the output signal-to-noise ratio (SNR) for this system to show its performance. Our examples show the enhancement of the system and how different parameters influence the performance of the proposed parallel array.

  8. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    Science.gov (United States)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
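
    The trade-off at the heart of the paper can be previewed with a toy calculation (plain i.i.d. noise rather than the MRF image model used in the study): averaging groups of k observations shrinks the noise level, but also shrinks the number of data points left for estimating hyper-parameters.

      import numpy as np

      rng = np.random.default_rng(3)
      data = 1.0 + rng.normal(0.0, 1.0, size=1200)   # noisy observations of a constant

      for k in (1, 4, 16):                           # average groups of k first
          grouped = data.reshape(-1, k).mean(axis=1)
          # noise std falls roughly as 1/sqrt(k); sample count falls by k
          print(k, grouped.size, round(float(grouped.std()), 3))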

  9. Peak-to-average power ratio reduction in interleaved OFDMA systems

    KAUST Repository

    Al-Shuhail, Shamael; Ali, Anum; Al-Naffouri, Tareq Y.

    2015-01-01

    Orthogonal frequency division multiple access (OFDMA) systems suffer from several impairments, and communication system engineers use powerful signal processing tools to combat these impairments and to keep up with the capacity/rate demands. One of these impairments is high peak-to-average power ratio (PAPR) and clipping is the simplest peak reduction scheme. However, in general, when multiple users are subjected to clipping, frequency domain clipping distortions spread over the spectrum of all users. This results in compromised performance and hence clipping distortions need to be mitigated at the receiver. Mitigating these distortions in multiuser case is not simple and requires complex clipping mitigation procedures at the receiver. However, it was observed that interleaved OFDMA presents a special structure that results in only self-inflicted clipping distortions (i.e., the distortions of a particular user do not interfere with other users). In this work, we prove analytically that distortions do not spread over multiple users (while utilizing interleaved carrier assignment in OFDMA) and construct a compressed sensing system that utilizes the sparsity of the clipping distortions and recovers it on each user. We provide numerical results that validate our analysis and show promising performance for the proposed clipping recovery scheme.

  10. Peak-to-average power ratio reduction in interleaved OFDMA systems

    KAUST Repository

    Al-Shuhail, Shamael

    2015-12-07

    Orthogonal frequency division multiple access (OFDMA) systems suffer from several impairments, and communication system engineers use powerful signal processing tools to combat these impairments and to keep up with the capacity/rate demands. One of these impairments is high peak-to-average power ratio (PAPR) and clipping is the simplest peak reduction scheme. However, in general, when multiple users are subjected to clipping, frequency domain clipping distortions spread over the spectrum of all users. This results in compromised performance and hence clipping distortions need to be mitigated at the receiver. Mitigating these distortions in multiuser case is not simple and requires complex clipping mitigation procedures at the receiver. However, it was observed that interleaved OFDMA presents a special structure that results in only self-inflicted clipping distortions (i.e., the distortions of a particular user do not interfere with other users). In this work, we prove analytically that distortions do not spread over multiple users (while utilizing interleaved carrier assignment in OFDMA) and construct a compressed sensing system that utilizes the sparsity of the clipping distortions and recovers it on each user. We provide numerical results that validate our analysis and show promising performance for the proposed clipping recovery scheme.

  11. Do We Perceive Others Better than Ourselves? A Perceptual Benefit for Noise-Vocoded Speech Produced by an Average Speaker.

    Directory of Open Access Journals (Sweden)

    William L Schuerman

    Full Text Available In different tasks involving action perception, performance has been found to be facilitated when the presented stimuli were produced by the participants themselves rather than by another participant. These results suggest that the same mental representations are accessed during both production and perception. However, with regard to spoken word perception, evidence also suggests that listeners' representations for speech reflect the input from their surrounding linguistic community rather than their own idiosyncratic productions. Furthermore, speech perception is heavily influenced by indexical cues that may lead listeners to frame their interpretations of incoming speech signals with regard to speaker identity. In order to determine whether word recognition evinces similar self-advantages as found in action perception, it was necessary to eliminate indexical cues from the speech signal. We therefore asked participants to identify noise-vocoded versions of Dutch words that were based on either their own recordings or those of a statistically average speaker. The majority of participants were more accurate for the average speaker than for themselves, even after taking into account differences in intelligibility. These results suggest that the speech representations accessed during perception of noise-vocoded speech are more reflective of the input of the speech community, and hence that speech perception is not necessarily based on representations of one's own speech.

  12. Data Point Averaging for Computational Fluid Dynamics Data

    Science.gov (United States)

    Norman, Jr., David (Inventor)

    2016-01-01

    A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
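
    In outline, the claimed procedure is a grouped average. A minimal sketch, with an illustrative sub-area mapping (the patent's geometry handling is application-specific):

      import numpy as np

      def subarea_averages(points, values, sub_area_of):
          # Group the CFD points by sub-area and average the values per group
          sums, counts = {}, {}
          for p, v in zip(points, values):
              a = sub_area_of(p)
              sums[a] = sums.get(a, 0.0) + v
              counts[a] = counts.get(a, 0) + 1
          return {a: sums[a] / counts[a] for a in sums}

      # Example: a 1-D surface coordinate split into ten equal sub-areas
      pts = np.linspace(0.0, 1.0, 101)
      vals = np.sin(pts)
      print(subarea_averages(pts, vals, lambda p: int(min(p, 0.999) * 10)))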

  13. A Framework for Control System Design Subject to Average Data-Rate Constraints

    DEFF Research Database (Denmark)

    Silva, Eduardo; Derpich, Milan; Østergaard, Jan

    2011-01-01

    This paper studies discrete-time control systems subject to average data-rate limits. We focus on a situation where a noisy linear system has been designed assuming transparent feedback and, due to implementation constraints, a source-coding scheme (with unity signal transfer function) has to be ...

  14. Face averages enhance user recognition for smartphone security.

    Science.gov (United States)

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.

  15. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    Science.gov (United States)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherent narrow frequency band performance, the problem is exacerbated due to modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed by starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide, and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced into the aperture distribution due to modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, the array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. A separable Taylor distribution with nbar=4 and a 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E

  16. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...

  17. Average and local structure of α-CuI by configurational averaging

    International Nuclear Information System (INIS)

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

    Configurational Boltzmann averaging together with density functional theory is used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out, with peaks in the atom density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion pathways is expected due to the large variation in the local motifs.

  18. Topological signal processing

    CERN Document Server

    Robinson, Michael

    2014-01-01

    Signal processing is the discipline of extracting information from collections of measurements. To be effective, the measurements must be organized and then filtered, detected, or transformed to expose the desired information.  Distortions caused by uncertainty, noise, and clutter degrade the performance of practical signal processing systems. In aggressively uncertain situations, the full truth about an underlying signal cannot be known.  This book develops the theory and practice of signal processing systems for these situations that extract useful, qualitative information using the mathematics of topology -- the study of spaces under continuous transformations.  Since the collection of continuous transformations is large and varied, tools which are topologically-motivated are automatically insensitive to substantial distortion. The target audience comprises practitioners as well as researchers, but the book may also be beneficial for graduate students.

  19. Auto-adaptive averaging: Detecting artifacts in event-related potential data using a fully automated procedure

    NARCIS (Netherlands)

    Talsma, D.

    2008-01-01

    The auto-adaptive averaging procedure proposed here classifies artifacts in event-related potential data by optimizing the signal-to-noise ratio. This method rank orders single trials according to the impact of each trial on the ERP average. Then, the minimum residual background noise level in the

  20. Auto-adaptive averaging: Detecting artifacts in event-related potential data using a fully automated procedure.

    NARCIS (Netherlands)

    Talsma, D.

    2008-01-01

    The auto-adaptive averaging procedure proposed here classifies artifacts in event-related potential data by optimizing the signal-to-noise ratio. This method rank orders single trials according to the impact of each trial on the ERP average. Then, the minimum residual background noise level in the

  1. Artificial Intelligence Can Predict Daily Trauma Volume and Average Acuity.

    Science.gov (United States)

    Stonko, David P; Dennis, Bradley M; Betzold, Richard D; Peetz, Allan B; Gunter, Oliver L; Guillamondegui, Oscar D

    2018-04-19

    The goal of this study was to integrate temporal and weather data in order to create an artificial neural network (ANN) to predict trauma volume, the number of emergent operative cases, and average daily acuity at a level 1 trauma center. Trauma admission data from TRACS and weather data from the National Oceanic and Atmospheric Administration (NOAA) were collected for all adult trauma patients from July 2013 to June 2016. The ANN was constructed using temporal factors (time, day of week) and weather factors (daily high, active precipitation) to predict four points of daily trauma activity: number of traumas, number of penetrating traumas, average ISS, and number of immediate OR cases per day. We trained a two-layer feed-forward network with 10 sigmoid hidden neurons via the Levenberg-Marquardt backpropagation algorithm, and performed k-fold cross validation and accuracy calculations on 100 randomly generated partitions. 10,612 patients over 1,096 days were identified. The ANN accurately predicted the daily trauma distribution in terms of number of traumas, number of penetrating traumas, number of OR cases, and average daily ISS (combined training correlation coefficient r = 0.9018 +/- 0.002; validation r = 0.8899 +/- 0.005; testing r = 0.8940 +/- 0.006). We were able to successfully predict trauma volume, emergent operative volume, and acuity using an ANN by integrating local weather and trauma admission data from a level 1 center. As an example, for June 30, 2016, the model predicted 9.93 traumas (actual: 10) and a mean ISS score of 15.99 (actual: 13.12). This may prove useful for predicting trauma needs across the system and for hospital administration when allocating limited resources. Level of evidence: III. Study type: Prognostic/Epidemiological.
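
    The network described is a MATLAB-style setup (two-layer feed-forward, 10 sigmoid hidden neurons, Levenberg-Marquardt training). The sketch below is a rough scikit-learn stand-in: scikit-learn offers no Levenberg-Marquardt solver, so L-BFGS is substituted, and the feature and target arrays are random placeholders rather than the TRACS/NOAA data.

        import numpy as np
        from sklearn.model_selection import KFold
        from sklearn.neural_network import MLPRegressor

        # Hypothetical features: [hour, day_of_week, daily_high, precipitation]
        # Hypothetical targets: [n_traumas, n_penetrating, mean_ISS, n_OR_cases]
        rng = np.random.default_rng(0)
        X = rng.random((1096, 4))
        y = rng.random((1096, 4))

        # 10 logistic (sigmoid) hidden units as in the paper; scikit-learn has no
        # Levenberg-Marquardt trainer, so L-BFGS stands in for it here.
        net = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                           solver="lbfgs", max_iter=2000, random_state=0)

        for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
            net.fit(X[train], y[train])
            r = np.corrcoef(net.predict(X[test]).ravel(), y[test].ravel())[0, 1]
            print(f"fold correlation r = {r:.3f}")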

  2. Average of delta: a new quality control tool for clinical laboratories.

    Science.gov (United States)

    Jones, Graham R D

    2016-01-01

    Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternative approach, average of delta, which combines these concepts, using the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of the average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision, within- and between-subject biological variation, and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings, and average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population.
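
    A minimal sketch of the average-of-delta idea, assuming results arrive as (patient_id, value) pairs in time order; the window of 10 sits inside the 5-20 range suggested by the modelling, and the function and variable names are illustrative.

        import numpy as np

        def average_of_delta(results, window=10):
            """Average-of-delta QC signal: mean of the most recent `window`
            delta values (current minus previous result per patient)."""
            last = {}      # most recent result per patient
            deltas = []
            for pid, value in results:
                if pid in last:
                    deltas.append(value - last[pid])
                last[pid] = value
            if len(deltas) < window:
                return None  # not enough repeat-tested patients yet
            return float(np.mean(deltas[-window:]))

        # A persistent shift of the average of delta away from zero flags an
        # assay bias that individual delta checks may miss.
        stream = [("a", 5.1), ("b", 4.8), ("a", 5.3), ("b", 5.0), ("a", 5.2)]
        print(average_of_delta(stream, window=2))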

  3. Monthly streamflow forecasting with auto-regressive integrated moving average

    Science.gov (United States)

    Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani

    2017-09-01

    Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting, with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering is performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang were gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model were then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.
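
    The study's models were developed in R; for illustration only, the sketch below fits the conventional ARIMA baseline in Python with statsmodels, using a synthetic series in place of the Sungai Muda data and omitting the SSA pre-processing step. The (1, 1, 1) order and the 9:1 split are assumptions.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        # Hypothetical monthly streamflow series standing in for the real data.
        rng = np.random.default_rng(1)
        flow = pd.Series(100 + rng.normal(0, 10, 120).cumsum(),
                         index=pd.date_range("2000-01", periods=120, freq="MS"))

        # 9:1 train/test split as in the paper.
        split = int(len(flow) * 0.9)
        train, test = flow[:split], flow[split:]

        model = ARIMA(train, order=(1, 1, 1)).fit()   # (p, d, q) for illustration
        forecast = model.forecast(steps=len(test))

        rmse = np.sqrt(np.mean((forecast.values - test.values) ** 2))
        mae = np.mean(np.abs(forecast.values - test.values))
        print(f"RMSE = {rmse:.2f}, MAE = {mae:.2f}")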

  4. Macroeconomic Forecasts in Models with Bayesian Averaging of Classical Estimates

    Directory of Open Access Journals (Sweden)

    Piotr Białowolski

    2012-03-01

    The aim of this paper is to construct a forecasting model oriented on predicting basic macroeconomic variables, namely: the GDP growth rate, the unemployment rate, and consumer price inflation. In order to select the set of the best regressors, Bayesian Averaging of Classical Estimates (BACE) is employed. The models are atheoretical (i.e., they do not reflect causal relationships postulated by macroeconomic theory), and the role of regressors is played by business and consumer tendency survey-based indicators. Additionally, survey-based indicators are included with a lag that enables forecasting the variables of interest (GDP, unemployment, and inflation) for the four forthcoming quarters without the need to make any additional assumptions concerning the values of predictor variables in the forecast period. Bayesian Averaging of Classical Estimates is a method allowing for a full and controlled overview of all econometric models which can be obtained out of a particular set of regressors. In this paper the authors describe the method of generating a family of econometric models and the procedure for selection of a final forecasting model. Verification of the procedure is performed by means of out-of-sample forecasts of the main economic variables for the quarters of 2011. The accuracy of the forecasts implies that there is still a need to search for new solutions in atheoretical modelling.

  5. Large interface simulation in an averaged two-fluid code

    International Nuclear Information System (INIS)

    Henriques, A.

    2006-01-01

    Different ranges of interface and eddy sizes are involved in multiphase flow phenomena. Classical formalisms focus on a specific range of sizes. This study presents a Large Interface Simulation (LIS) two-fluid compressible formalism taking into account different sizes of interfaces. As in single-phase Large Eddy Simulation, a filtering process is used to separate Large Interface (LI) simulation from Small Interface (SI) modelling. The LI surface tension force is modelled by adapting the well-known CSF method. The modelling of SI transfer terms is done by calling on classical closure laws of the averaged approach. To simulate LI transfer terms accurately, we develop a LI recognition algorithm based on a dimensionless criterion. The LIS model is applied in a classical averaged two-fluid code. The LI transfer term modelling and the LI recognition are validated on analytical and experimental tests. A square-base basin excited by a horizontal periodic movement is studied with the LIS model. The capability of the model is also shown on the case of the break-up of a bubble in a turbulent liquid flow. The break-up of a large bubble at a grid impact exhibits regime transitions between two different scales of interface, from LI to SI and from PI to LI. (author) [fr

  6. Vibrationally averaged dipole moments of methane and benzene isotopologues

    Energy Technology Data Exchange (ETDEWEB)

    Arapiraca, A. F. C. [Laboratório de Átomos e Moléculas Especiais, Departamento de Física, ICEx, Universidade Federal de Minas Gerais, P. O. Box 702, 30123-970 Belo Horizonte, MG (Brazil); Centro Federal de Educação Tecnológica de Minas Gerais, Coordenação de Ciências, CEFET-MG, Campus I, 30.421-169 Belo Horizonte, MG (Brazil); Mohallem, J. R., E-mail: rachid@fisica.ufmg.br [Laboratório de Átomos e Moléculas Especiais, Departamento de Física, ICEx, Universidade Federal de Minas Gerais, P. O. Box 702, 30123-970 Belo Horizonte, MG (Brazil)

    2016-04-14

    DFT-B3LYP post-Born-Oppenheimer (finite-nuclear-mass-correction (FNMC)) calculations of vibrationally averaged isotopic dipole moments of methane and benzene, which compare well with experimental values, are reported. For methane, in addition to the principal vibrational contribution to the molecular asymmetry, FNMC accounts for the surprisingly large Born-Oppenheimer error of about 34% to the dipole moments. This unexpected result is explained in terms of concurrent electronic and vibrational contributions. The calculated dipole moment of C6H3D3 is about twice as large as the measured dipole moment of C6H5D. Computational progress is advanced concerning applications to larger systems and the choice of appropriate basis sets. The simpler procedure of performing vibrational averaging on the Born-Oppenheimer level and then adding the FNMC contribution evaluated at the equilibrium distance is shown to be appropriate. Also, the basis set choice is made by heuristic analysis of the physical behavior of the systems, instead of by comparison with experiments.

  7. Extracting Credible Dependencies for Averaged One-Dependence Estimator Analysis

    Directory of Open Access Journals (Sweden)

    LiMin Wang

    2014-01-01

    Of the numerous proposals to improve the accuracy of naive Bayes (NB) by weakening the conditional independence assumption, the averaged one-dependence estimator (AODE) demonstrates remarkable zero-one loss performance. However, indiscriminate superparent attributes bring both considerable computational cost and a negative effect on classification accuracy. In this paper, to extract the most credible dependencies we present a new type of semi-naive Bayesian operation, which selects superparent attributes by building a maximum weighted spanning tree and removes highly correlated children attributes by functional dependency and canonical cover analysis. Our extensive experimental comparison on UCI data sets shows that this operation efficiently identifies possible superparent attributes at training time and eliminates redundant children attributes at classification time.

  8. An Exponentially Weighted Moving Average Control Chart for Bernoulli Data

    DEFF Research Database (Denmark)

    Spliid, Henrik

    2010-01-01

    We consider a production process in which units are produced in a sequential manner. The units can, for example, be manufactured items or services, provided to clients. Each unit produced can be a failure with probability p or a success (non-failure) with probability (1-p). A novel exponentially weighted moving average (EWMA) control chart intended for surveillance of the probability of failure, p, is described. The chart is based on counting the number of non-failures produced between failures in combination with a variance-stabilizing transformation. The distribution function of the transformation is given and its limit for small values of p is derived. Control of high yield processes is discussed and the chart is shown to perform very well in comparison with both the most common alternative EWMA chart and the CUSUM chart. The construction and the use of the proposed EWMA chart are illustrated.
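
    A minimal sketch of the chart's main ingredients: run lengths between failures, a variance-stabilizing transformation, and the EWMA recursion z_i = lam*t_i + (1 - lam)*z_(i-1). The log transform and lam = 0.1 are placeholders; the paper specifies its own transformation and control limits.

        import numpy as np

        def ewma_run_length_chart(failures, lam=0.1, transform=np.log):
            """EWMA of transformed run lengths (units between failures).
            The log transform here is only a stand-in for the paper's
            variance-stabilizing transformation."""
            idx = np.flatnonzero(np.asarray(failures, dtype=int) == 1)
            runs = np.diff(np.concatenate(([-1], idx)))   # units up to each failure
            t = transform(runs.astype(float))
            z = np.empty_like(t)
            z[0] = t[0]
            for i in range(1, len(t)):
                z[i] = lam * t[i] + (1 - lam) * z[i - 1]  # EWMA recursion
            return runs, z

        # Falling z flags shorter runs between failures, i.e. a rising failure
        # probability p; rising z indicates process improvement.
        rng = np.random.default_rng(2)
        runs, z = ewma_run_length_chart(rng.random(5000) < 0.01)
        print(runs[:5], np.round(z[:5], 2))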

  9. An Experimental Observation of Axial Variation of Average Size of Methane Clusters in a Gas Jet

    International Nuclear Information System (INIS)

    Ji-Feng, Han; Chao-Wen, Yang; Jing-Wei, Miao; Jian-Feng, Lu; Meng, Liu; Xiao-Bing, Luo; Mian-Gong, Shi

    2010-01-01

    Axial variation of the average size of methane clusters in a gas jet produced by supersonic expansion of methane through a cylindrical nozzle of 0.8 mm in diameter is observed using a Rayleigh scattering method. The scattered light intensity exhibits power-law scaling with the backing pressure over the range 16 to 50 bar, and the exponent is strongly Z dependent, varying from 8.4 (Z = 3 mm) to 5.4 (Z = 11 mm), which is much larger than that of argon clusters. The scattered light intensity versus axial position shows a maximum at Z = 5 mm. The estimation of the average cluster size as a function of axial position Z indicates that the cluster growth process continues until the maximum average cluster size is reached at Z = 9 mm, and the average cluster size decreases gradually for Z > 9 mm.

  10. Asymptotically optimum multialternative sequential procedures for discernment of processes minimizing average length of observations

    Science.gov (United States)

    Fishman, M. M.

    1985-01-01

    The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases toward zero and the number of hypotheses increases toward infinity. It also remains valid under certain special constraints on the probability, such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure reduces the average length of observations to one quarter when the probability of erroneous partial solutions is low.

  11. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...

  12. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  13. High-density digital links optimization of signal integrity and noise performance of the high-density digital links of the ATLAS-TRT readout system

    International Nuclear Information System (INIS)

    Mandl, M.

    2000-02-01

    The Transition Radiation Tracker (TRT) is a subdetector of the particle detector ATLAS (A Toroidal LHC ApparatuS). About 420,000 detecting elements are distributed over 22 m³. Each second they produce approximately 20 Tbit of data, which has to be transferred from the front-end electronics inside the detector to the back-end electronics outside the detector for further processing. The task of this thesis is to guarantee the integrity of the signals and the electromagnetic compatibility inside the TRT as well as towards its aggressive surroundings. The electromagnetic environment of particle detectors in high-energy physics adds special constraints to the high data rates and the high complexity: high sensitivity of the detecting elements and their preamplifiers, confined space, a limited material budget, a radioactive environment, and high static magnetic fields. Thus many standard industrial measures have to be abandoned. Special design is essential to compensate for this disadvantage. (author)

  14. High-Density Digital Links Optimization of Signal Integrity and Noise Performance of the High-Density Digital Links of the ATLAS-TRT Readout System

    CERN Document Server

    Mandl, M

    2000-01-01

    The Transition Radiation Tracker (TRT) is a subdetector of the particle detector ATLAS (A Toroidal LHC ApparatuS). About 420,000 detecting elements are distributed over 22 m³. Each second they produce approximately 20 Tbit of data, which has to be transferred from the front-end electronics inside the detector to the back-end electronics outside the detector for further processing. The task of this thesis is to guarantee the integrity of the signals and the electromagnetic compatibility inside the TRT as well as towards its aggressive surroundings. The electromagnetic environment of particle detectors in high-energy physics adds special constraints to the high data rates and the high complexity: high sensitivity of the detecting elements and their preamplifiers, confined space, a limited material budget, a radioactive environment, and high static magnetic fields. Thus many standard industrial measures have to be abandoned. Special design is essential to compensate for this disadvantage.

  15. Single Event Upset Analysis: On-orbit performance of the Alpha Magnetic Spectrometer Digital Signal Processor Memory aboard the International Space Station

    Science.gov (United States)

    Li, Jiaqiang; Choutko, Vitaly; Xiao, Liyi

    2018-03-01

    Based on the collection of error data from the Alpha Magnetic Spectrometer (AMS) Digital Signal Processors (DSPs), on-orbit Single Event Upsets (SEUs) of the DSP program memory are analyzed. The daily error distribution and the time intervals between errors are calculated to evaluate the reliability of the system. The particle density distribution of the International Space Station (ISS) orbit is presented, and the effects of the South Atlantic Anomaly (SAA) and the geomagnetic poles are analyzed. The impact of solar events on the DSP program memory is assessed by combining data analysis with Monte Carlo (MC) simulation. From the analysis and simulation results, it is concluded that the area corresponding to the SAA is the main source of errors on the ISS orbit. Solar events can also cause errors in DSP program memory, but the effect depends on the on-orbit particle density.

  16. Power Efficiency Improvements through Peak-to-Average Power Ratio Reduction and Power Amplifier Linearization

    Directory of Open Access Journals (Sweden)

    Zhou G Tong

    2007-01-01

    Many modern communication signal formats, such as orthogonal frequency-division multiplexing (OFDM) and code-division multiple access (CDMA), have high peak-to-average power ratios (PARs). A signal with a high PAR not only is vulnerable in the presence of nonlinear components such as power amplifiers (PAs), but also leads to low transmission power efficiency. Selected mapping (SLM) and clipping are well-known PAR reduction techniques. We propose to combine SLM with threshold clipping and digital baseband predistortion to improve the overall efficiency of the transmission system. Testbed experiments demonstrate the effectiveness of the proposed approach.
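
    For illustration, the sketch below measures the PAR of a toy OFDM symbol and applies threshold clipping, the simplest piece of the proposed chain; the SLM stage and the baseband predistorter are omitted, and the subcarrier count and clipping threshold are arbitrary assumptions.

        import numpy as np

        def par_db(x):
            """Peak-to-average power ratio of a complex baseband signal, in dB."""
            p = np.abs(x) ** 2
            return 10 * np.log10(p.max() / p.mean())

        def clip(x, threshold):
            """Threshold clipping: limit |x| to `threshold`, preserving phase."""
            mag = np.maximum(np.abs(x), 1e-12)     # guard against zero magnitude
            return x * np.minimum(1.0, threshold / mag)

        # Toy OFDM symbol: 256 random QPSK subcarriers -> IFFT -> time domain,
        # scaled to unit average power.
        rng = np.random.default_rng(3)
        qpsk = (rng.choice([-1, 1], 256) + 1j * rng.choice([-1, 1], 256)) / np.sqrt(2)
        x = np.fft.ifft(qpsk) * np.sqrt(256)

        print(f"PAR before clipping: {par_db(x):.1f} dB")
        print(f"PAR after  clipping: {par_db(clip(x, 2.0)):.1f} dB")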

  17. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...

  18. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de

  19. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  20. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of the distribution is shown to be even in the CLT.

  2. Understanding signal integrity

    CERN Document Server

    Thierauf, Stephen C

    2010-01-01

    This unique book provides you with practical guidance on understanding and interpreting signal integrity (SI) performance to help you with your challenging circuit board design projects. You find high-level discussions of important SI concepts presented in a clear and easily accessible format, including question and answer sections and bulleted lists. This valuable resource features rules of thumb and simple equations to help you make estimates of critical signal integrity parameters without using circuit simulators or CAD (computer-aided design). The book is supported with over 120 illustrations.

  3. Fundamentals of statistical signal processing

    CERN Document Server

    Kay, Steven M

    1993-01-01

    A unified presentation of parameter estimation for those involved in the design and implementation of statistical signal processing algorithms. Covers important approaches to obtaining an optimal estimator and analyzing its performance; and includes numerous examples as well as applications to real-world problems. MARKETS: For practicing engineers and scientists who design and analyze signal processing systems, i.e., who extract information from noisy signals — radar engineer, sonar engineer, geophysicist, oceanographer, biomedical engineer, communications engineer, economist, statistician, physicist, etc.

  4. Integrin Signalling

    OpenAIRE

    Schelfaut, Roselien

    2005-01-01

    Integrins are receptors present on most cells. By binding ligand they can generate signalling pathways inside the cell; those pathways link to proteins in the cytosol. It is known that tumour cells can survive and proliferate in the absence of a solid support, while normal cells need to be bound to ligand. To understand why tumour cells act that way, we first have to know how ligand binding to integrins affects the cell. This research field includes studies on activation of proteins b...

  5. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
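
    A simplified EM iteration in the spirit of Raftery et al., assuming Gaussian member kernels with one common variance and no bias correction; the DREAM sampler compared in the paper is not shown, and all data here are synthetic.

        import numpy as np
        from scipy.stats import norm

        def bma_em(forecasts, obs, n_iter=200):
            """EM estimation of BMA weights and a common kernel variance.
            forecasts: (T, K) member forecasts; obs: (T,) verifying values."""
            T, K = forecasts.shape
            w = np.full(K, 1.0 / K)
            s2 = np.var(obs - forecasts.mean(axis=1))
            for _ in range(n_iter):
                # E-step: responsibility of member k for observation t
                dens = norm.pdf(obs[:, None], loc=forecasts, scale=np.sqrt(s2))
                z = w * dens
                z /= z.sum(axis=1, keepdims=True)
                # M-step: update weights and the common variance
                w = z.mean(axis=0)
                s2 = np.sum(z * (obs[:, None] - forecasts) ** 2) / T
            return w, s2

        rng = np.random.default_rng(4)
        truth = rng.normal(20, 5, 500)
        ens = truth[:, None] + rng.normal([0.5, -1.0, 0.0], [1.0, 2.0, 3.0], (500, 3))
        print(bma_em(ens, truth))   # sharper members should earn larger weights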

  6. Quantified moving average strategy of crude oil futures market based on fuzzy logic rules and genetic algorithms

    Science.gov (United States)

    Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing

    2017-09-01

    The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders when to buy or sell, the moving average cannot tell the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy, in which fuzzy logic rules are used to determine the strength of trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and utilize crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experimental data. Each experiment is repeated 20 times. The results show that, first, the fuzzy moving average strategy can obtain a more stable rate of return than the moving average strategies. Second, the holding amounts series is highly sensitive to the price series. Third, simple moving average methods are more efficient. Last, the fuzzy extents of extremely low, high, and very high are more popular. These results are helpful in investment decisions.
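
    A toy version of the idea, shown below: the crossover of two moving averages gives the trade direction, and a graded strength in [0, 1] stands in for the volume decided by the fuzzy rating level. The tanh membership shape replaces the paper's ten GA-optimized fuzzy rules and is purely an assumption.

        import numpy as np
        import pandas as pd

        def fuzzy_ma_signal(prices, fast=5, slow=20):
            """Moving-average crossover with a graded (fuzzy-like) trade size.
            Direction comes from the crossover sign; the normalized spread
            between the averages is squashed to [0, 1] as the strength."""
            p = pd.Series(prices)
            ma_fast = p.rolling(fast).mean()
            ma_slow = p.rolling(slow).mean()
            spread = (ma_fast - ma_slow) / ma_slow
            direction = np.sign(spread)                 # +1 buy, -1 sell
            strength = np.tanh(np.abs(spread) * 50)     # graded 0..1 "volume"
            return direction, strength

        prices = 50 + np.cumsum(np.random.default_rng(5).normal(0, 0.5, 300))
        direction, strength = fuzzy_ma_signal(prices)
        print(direction.iloc[-1], round(float(strength.iloc[-1]), 2))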

  7. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    Science.gov (United States)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.
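
    The composite average is straightforward to state in code; the sketch below forms it for synthetic irregular samples. The optimal and suboptimal estimates additionally require the signal and measurement-error covariances and are not shown.

        import numpy as np

        def composite_average(times, values, t_start, t_end):
            """Composite average: simple mean of all observations falling
            inside the averaging window [t_start, t_end)."""
            m = (times >= t_start) & (times < t_end)
            return np.nan if not m.any() else values[m].mean()

        # Irregularly spaced observations of a slow signal plus noise.
        rng = np.random.default_rng(6)
        t = np.sort(rng.uniform(0, 30, 40))            # 40 samples over 30 days
        v = np.sin(2 * np.pi * t / 30) + rng.normal(0, 0.3, 40)

        # 10-day composite averages
        print([round(composite_average(t, v, a, a + 10), 3) for a in (0, 10, 20)])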

  8. Averaged emission factors for the Hungarian car fleet

    Energy Technology Data Exchange (ETDEWEB)

    Haszpra, L. [Inst. for Atmospheric Physics, Budapest (Hungary); Szilagyi, I. [Central Research Inst. for Chemistry, Budapest (Hungary)

    1995-12-31

    The vehicular emission of non-methane hydrocarbon (NMHC) is one of the largest anthropogenic sources of NMHC in Hungary and in most of the industrialized countries. Non-methane hydrocarbon plays a key role in the formation of photochemical air pollution, usually characterized by the ozone concentration, which seriously endangers the environment and human health. The ozone-forming potentials of the different NMHCs differ significantly, while the NMHC composition of car exhaust is influenced by the fuel and engine type, the technical condition of the vehicle, vehicle speed and several other factors. In Hungary the majority of the cars are still of Eastern European origin. They represent the technological standard of the 1970s, although this has recently begun to change. Due to the long-term economic decline in Hungary, the average age of the cars was about 9 years in 1990 and reached 10 years by 1993. The condition of the majority of the cars is poor. In addition, almost one third (31.2%) of the cars are equipped with two-stroke engines, which emit less NOx but much more hydrocarbon. The number of cars equipped with catalytic converters was negligible in 1990 and has begun to increase only recently. As a consequence of these facts, the traffic emission in Hungary may differ from that measured in or estimated for the Western European countries, and the differences should be taken into account in air pollution models. For the estimation of the average emission of the Hungarian car fleet, a one-day roadway tunnel experiment was performed in downtown Budapest in the summer of 1991. (orig.)

  9. Transit-Based Emergency Evacuation with Transit Signal Priority in Sudden-Onset Disaster

    Directory of Open Access Journals (Sweden)

    Ciyun Lin

    2016-01-01

    This study presents methods of transit signal priority without transit-only lanes for a transit-based emergency evacuation in a sudden-onset disaster. Arterial priority signal coordination is optimized when a traffic signal control system provides priority signals for transit vehicles along an evacuation route. Transit signal priority is determined by “transit vehicle arrival time estimation,” “queuing vehicle dissipation time estimation,” “traffic signal status estimation,” “transit signal optimization,” and “arterial traffic signal coordination for transit vehicles in the evacuation route.” It takes advantage of the large capacities of transit vehicles, reduces the evacuation time, and evacuates as many evacuees as possible. The proposed methods were tested on a simulation platform with Paramics V6.0. To evaluate and compare the performance of transit signal priority, three scenarios were simulated in the simulator. The results indicate that the methods of this study can reduce the travel times of transit vehicles along an evacuation route by 13% and 10%, improve the standard deviation of travel time by 16% and 46%, and decrease the average person delay at a signalized intersection by 22% and 17%, when the traffic flow saturation along the evacuation route is 0.8 and 1.0, respectively.

  10. Quantum signaling game

    International Nuclear Information System (INIS)

    Frackiewicz, Piotr

    2014-01-01

    We present a quantum approach to a signaling game, a special kind of extensive-form game of incomplete information. Our model is based on quantum schemes for games in strategic form, where players perform unitary operators on their own qubits of some fixed initial state and the payoff function is given by a measurement on the resulting final state. We show that the quantum game induced by our scheme coincides with a signaling game as a special case and outputs nonclassical results in general. As an example, we consider a quantum extension of the signaling game in which the chance move is a three-parameter unitary operator whereas the players' actions are equivalent to classical ones. In this case, we study the game in terms of Nash equilibria and refine the pure Nash equilibria, adapting to the quantum game the notion of a weak perfect Bayesian equilibrium. (paper)

  11. Re-Normalization Method of Doppler Lidar Signal for Error Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Park, Nakgyu; Baik, Sunghoon; Park, Seungkyu; Kim, Donglyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Dukhyeon [Hanbat National Univ., Daejeon (Korea, Republic of)

    2014-05-15

    In this paper, we present a re-normalization method to reduce fluctuations of Doppler signals caused by various noise sources, mainly frequency-locking error, in a Doppler lidar system. For the Doppler lidar system, we used an injection-seeded pulsed Nd:YAG laser as the transmitter and an iodine filter as the Doppler frequency discriminator. For the Doppler frequency shift measurement, the transmission ratio using the injection-seeded laser is locked to stabilize the frequency. If the frequency-locking system is not perfect, the Doppler signal carries an error due to the frequency-locking error. The re-normalization of the Doppler signals was performed to reduce this error using an additional laser beam through an iodine cell. We confirmed that the re-normalized Doppler signal is considerably more stable than the averaged Doppler signal obtained with our calibration method; the standard deviation was reduced to 4.838 × 10^-3.

  12. Flame Motion In Gas Turbine Burner From Averages Of Single-Pulse Flame Fronts

    Energy Technology Data Exchange (ETDEWEB)

    Tylli, N.; Hubschmid, W.; Inauen, A.; Bombach, R.; Schenker, S.; Guethe, F. [Alstom (Switzerland); Haffner, K. [Alstom (Switzerland)

    2005-03-01

    Thermoacoustic instabilities of a gas turbine burner were investigated by flame front localization from measured OH laser-induced fluorescence single-pulse signals. The average position of the flame was obtained from the superposition of the single-pulse flame fronts at constant phase of the dominant acoustic oscillation. One observes that the flame position varies periodically with the phase angle of the dominant acoustic oscillation. (author)

  13. Relationships between feeding behavior and average daily gain in cattle

    Directory of Open Access Journals (Sweden)

    Bruno Fagundes Cunha Lage

    2013-12-01

    Several studies have reported relationships between eating behavior and performance in feedlot cattle. The evaluation of behavior traits demands a high degree of work and trained manpower; therefore, in recent years an automated feed intake measurement system (GrowSafe System®), which identifies and records individual feeding patterns, has been used. The aim of this study was to evaluate the relationship between feeding behavior traits and average daily gain (ADG) in Nellore calves undergoing a feed efficiency test. Data from 85 Nellore males were recorded during the feed efficiency test performed in 2012 at Centro APTA Bovinos de Corte, Instituto de Zootecnia, São Paulo State. The following behavioral traits were analyzed: time at feeder (TF), head down duration (HD), representing the time when the animal is actually eating, frequency of visits (FV) and feed rate (FR), calculated as the amount of dry matter (DM) consumed per unit time at feeder (g.min-1). The ADG was calculated by linear regression of individual weights on days in test. ADG classes were obtained considering the average ADG and standard deviation (SD): high ADG (> mean + 1.0 SD), medium ADG (within ± 1.0 SD of the mean) and low ADG (< mean - 1.0 SD). There was no difference (P > 0.05) among ADG classes for FV, indicating that these traits are not related to each other. These results show that ADG is related to the agility in eating and not to the time spent at the bunk or to the number of visits in a range of 24 hours.

  14. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
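
    The two standard hourly types are easy to mimic from 1-min values, as in the sketch below with a synthetic daily variation: spot samples keep the full amplitude but alias high frequencies, while boxcar means damp both.

        import numpy as np

        def hourly_values(one_min, kind="boxcar"):
            """Hourly values from 1-min data: instantaneous 'spot' samples on
            the hour, or simple 1-h 'boxcar' means (the observatory standard)."""
            x = np.asarray(one_min).reshape(-1, 60)   # one row per hour
            return x[:, 0] if kind == "spot" else x.mean(axis=1)

        # 24 h of synthetic 1-min variation: slow daily trend + fast noise.
        rng = np.random.default_rng(7)
        minutes = np.arange(24 * 60)
        field = 50 * np.sin(2 * np.pi * minutes / (24 * 60)) + rng.normal(0, 5, minutes.size)

        spot = hourly_values(field, "spot")      # unbiased amplitudes, prone to aliasing
        boxcar = hourly_values(field, "boxcar")  # damped amplitudes, less aliasing
        print(spot.std(), boxcar.std())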

  15. Advanced optical signal processing of broadband parallel data signals

    DEFF Research Database (Denmark)

    Oxenløwe, Leif Katsuo; Hu, Hao; Kjøller, Niels-Kristian

    2016-01-01

    Optical signal processing may aid in reducing the number of active components in communication systems with many parallel channels, by e.g. using telescopic time lens arrangements to perform format conversion and allow for WDM regeneration.

  16. Theory and analysis of accuracy for the method of characteristics direction probabilities with boundary averaging

    International Nuclear Information System (INIS)

    Liu, Zhouyu; Collins, Benjamin; Kochunas, Brendan; Downar, Thomas; Xu, Yunlin; Wu, Hongchun

    2015-01-01

    Highlights: • The CDP combines the benefits of the CPM's efficiency and the MOC's flexibility. • Boundary averaging reduces the computational effort with only a minor loss of accuracy. • An analysis model is used to justify the choice of the optimal averaging strategy. • Numerical results show the performance and accuracy. - Abstract: The method of characteristic direction probabilities (CDP) combines the benefits of the collision probability method (CPM) and the method of characteristics (MOC) for the solution of the integral form of the Boltzmann Transport Equation. By coupling only the fine regions traversed by the characteristic rays in a particular direction, the computational effort required to calculate the probability matrices and to solve the matrix system is considerably reduced compared to the CPM. Furthermore, boundary averaging is performed to reduce the storage and computation, while the capability of dealing with complicated geometries is preserved since the same ray tracing information is used as in MOC. An analysis model for the outgoing angular flux is used to analyze a variety of outgoing angular flux averaging methods for the boundary and to justify the choice of the optimal averaging strategy. The boundary-averaged CDP method was then implemented in the Michigan PArallel Characteristic based Transport (MPACT) code to perform 2-D and 3-D transport calculations. Numerical results are given for different cases to show the effect of averaging on the outgoing angular flux, region scalar flux and the eigenvalue. Comparison of the results with the case with no averaging demonstrates that an angular dependent averaging strategy is possible for the CDP to improve its computational performance without compromising the achievable accuracy.

  17. Effect of parameters in moving average method for event detection enhancement using phase sensitive OTDR

    Science.gov (United States)

    Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum

    2017-04-01

    We analyze the relations among the parameters of the moving average method to enhance the event detectability of a phase sensitive optical time domain reflectometer (OTDR). If the external events have a characteristic vibration frequency, the control parameters of the moving average method should be optimized in order to detect these events efficiently. A phase sensitive OTDR was implemented with a pulsed light source, composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier and a fiber Bragg grating filter, and a light receiving part, which has a photo-detector and a high speed data acquisition system. The moving average method is operated with the control parameters: total number of raw traces, M, number of averaged traces, N, and step size of moving, n. The raw traces are obtained by the phase sensitive OTDR with sound signals generated by a speaker. Using these trace data, the relation of the control parameters is analyzed. The results show that, if the event signal has a single frequency, optimal values of N and n exist to detect the event efficiently.
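
    A minimal sketch of the moving average over traces with the abstract's control parameters M (raw traces), N (traces per average) and n (step size); the trace data here are random placeholders rather than phase-OTDR measurements.

        import numpy as np

        def moving_average_traces(traces, N=100, n=10):
            """Moving average over OTDR traces.
            traces: (M, L) array of M raw traces of L samples each.
            N: traces averaged per output; n: step size of moving.
            Returns ((M - N) // n + 1, L) averaged traces."""
            M, L = traces.shape
            starts = range(0, M - N + 1, n)
            return np.stack([traces[s:s + N].mean(axis=0) for s in starts])

        # Differencing consecutive averaged traces then highlights vibration
        # events whose frequency survives the low-pass effect of N and n.
        rng = np.random.default_rng(8)
        raw = rng.normal(0, 1, (1000, 2048))           # M = 1000 traces
        avg = moving_average_traces(raw, N=100, n=10)
        print(avg.shape)                               # (91, 2048)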

  18. MARD—A moving average rose diagram application for the geosciences

    Science.gov (United States)

    Munro, Mark A.; Blenkinsop, Thomas G.

    2012-12-01

    MARD 1.0 is a computer program for generating smoothed rose diagrams by using a moving average, designed for use across the wide range of disciplines encompassed within the Earth Sciences. Available in MATLAB®, Microsoft® Excel and GNU Octave formats, the program is fully compatible with both Microsoft® Windows and Macintosh operating systems. Each version has been implemented in a user-friendly way that requires no prior experience in programming with the software. MARD conducts moving-average smoothing, a form of low-pass signal-processing filter, upon the raw circular data according to a set of pre-defined conditions selected by the user. This filter smooths the angular dataset, emphasising significant circular trends whilst reducing background noise. Customisable parameters include whether the data are uni- or bi-directional, the angular range (or aperture) over which the data are averaged, and whether an unweighted or weighted moving average is to be applied. In addition to the uni- and bi-directional options, the MATLAB® and Octave versions also possess a function for plotting 2-dimensional dips/pitches in a single, lower, hemisphere. The rose diagrams from each version are exportable as one of a selection of common graphical formats. Frequently employed statistical measures that determine the vector mean, mean resultant (or length), circular standard deviation and circular variance are also included.
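
    In the same spirit as MARD (though not its code), the sketch below applies an unweighted moving average with circular wrap-around to a binned rose diagram; the bin width, aperture and synthetic strike data are assumptions.

        import numpy as np

        def smoothed_rose(angles_deg, bin_width=10, aperture=30, bidirectional=True):
            """Moving-average (low-pass) smoothing of rose-diagram bins: each
            bin becomes the mean frequency over a window of `aperture` degrees
            centred on it, with circular wrap-around."""
            period = 180 if bidirectional else 360
            a = np.asarray(angles_deg) % period
            n_bins = period // bin_width
            counts, _ = np.histogram(a, bins=n_bins, range=(0, period))
            half = aperture // bin_width // 2
            kernel = np.arange(-half, half + 1)         # window offsets in bins
            return np.array([counts[(i + kernel) % n_bins].mean()
                             for i in range(n_bins)])

        strikes = np.random.default_rng(9).normal(45, 15, 300)  # synthetic strikes
        print(np.round(smoothed_rose(strikes), 2))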

  19. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    The first generation of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers' average travel speed over selected sections of the road and is normally called average speed control. This study evaluates the safety impact of average speed control in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in the number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control.

  20. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We also show that each decision tree for sorting 8 elements which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365) also has minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to perform sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.