WorldWideScience

Sample records for perform signal averaging

  1. Ultrasonic correlator versus signal averager as a signal to noise enhancement instrument

    Science.gov (United States)

    Kishoni, Doron; Pietsch, Benjamin E.

    1989-01-01

    Ultrasonic inspection of thick and attenuating materials is hampered by the reduced amplitudes of the propagated waves, to the degree that noise prevents meaningful interpretation of the data. In order to overcome the low signal-to-noise (S/N) ratio, a correlation technique has been developed. In this method, a continuous pseudo-random pattern generated digitally is transmitted and detected by piezoelectric transducers. A correlation is performed in the instrument between the received signal and a variably delayed image of the transmitted one. The result is shown to be proportional to the impulse response of the investigated material, analogous to a signal received from a pulsed system, but with an improved S/N ratio. The degree of S/N enhancement depends on the sweep rate. This paper describes the correlator and compares it to the method of enhancing the S/N ratio by averaging the signals. The similarities and differences between the two are highlighted, and the potential advantage of the correlator system is explained.
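
The correlator principle in this record can be sketched numerically. The toy below (NumPy; the two-echo impulse response, noise level, and sequence length are invented for illustration, not taken from the paper) correlates the received trace with delayed copies of a transmitted ±1 pseudo-random pattern; the output peaks at the echo delays, i.e. it is proportional to the impulse response:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4096
tx = rng.choice([-1.0, 1.0], size=n)   # continuous pseudo-random excitation

# Invented two-echo impulse response of the inspected material.
h = np.zeros(128)
h[40], h[90] = 1.0, 0.5

# Received signal: propagation (convolution with h) plus strong noise.
rx = np.convolve(tx, h)[:n] + rng.normal(0.0, 2.0, n)

# Correlate the received trace against delayed copies of the transmitted
# pattern; for a white +/-1 sequence this is proportional to h.
est = np.array([np.dot(rx[lag:], tx[:n - lag]) for lag in range(len(h))]) / n

top2 = set(np.argsort(est)[-2:])
print(sorted(top2))   # the two echo delays dominate the correlator output
```

Even though the noise here is twice the signal amplitude, the two echoes stand out clearly because the ±1 pattern is uncorrelated with itself at nonzero lags.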

  2. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of 2.5 μs and a memory capacity of 256 × 12-bit words. The number of sweeps is selectable through a front panel control in binary steps from 2^3 to 2^12. The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
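
The 'stable averaging' idea, keeping the displayed trace calibrated after every sweep, can be sketched as a running-mean update; the sine "response", noise level, and sweep count below are illustrative stand-ins, not the instrument's specification:

```python
import numpy as np

rng = np.random.default_rng(1)

n_pts = 256
signal = np.sin(2 * np.pi * np.arange(n_pts) / 64)   # stand-in for the NMR response

sweeps = []
avg = np.zeros(n_pts)
for k in range(1, 257):                               # 2^8 sweeps
    sweep = signal + rng.normal(0.0, 1.0, n_pts)      # one noisy acquisition
    sweeps.append(sweep)
    avg += (sweep - avg) / k   # stable averaging: calibrated mean after every sweep

# The displayed trace never needs rescaling by the sweep count: at each step
# `avg` equals the arithmetic mean of all sweeps acquired so far.
print(round(float(np.max(np.abs(avg - signal))), 2))
```

This is why the instrument can show a meaningful average at any time during acquisition, not just after the final sweep.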

  3. Reducing Noise by Repetition: Introduction to Signal Averaging

    Science.gov (United States)

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
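
A minimal sketch of the averaging-with-baseline-correction workflow the paper teaches (the synthetic "evoked response", drift, and noise levels are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

t = np.arange(400)
response = np.where((t >= 150) & (t < 250), 1.0, 0.0)  # invented evoked response

trials = np.array([
    response + rng.normal(0.0, 0.5) + rng.normal(0.0, 2.0, 400)  # drift + noise
    for _ in range(200)
])

# Baseline correction: subtract each trial's pre-stimulus mean (t < 100),
# then average across trials; the response emerges from noise twice its size.
baseline = trials[:, :100].mean(axis=1, keepdims=True)
avg = (trials - baseline).mean(axis=0)

print(round(float(avg[150:250].mean()), 1))   # ~1.0
```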

  4. Signal-averaged P wave duration and the dimensions of the atria

    DEFF Research Database (Denmark)

    Dixen, Ulrik; Joens, Christian; Rasmussen, Bo V

    2004-01-01

    Delay of atrial electrical conduction measured as prolonged signal-averaged P wave duration (SAPWD) could be due to atrial enlargement. Here, we aimed to compare different atrial size parameters obtained from echocardiography with the SAPWD measured with a signal-averaged electrocardiogram (SAECG)....

  5. Large-signal analysis of DC motor drive system using state-space averaging technique

    International Nuclear Information System (INIS)

    Bekir Yildiz, Ali

    2008-01-01

    The analysis of a separately excited DC motor driven by a DC-DC converter is realized by using the state-space averaging technique. Firstly, a general and unified large-signal averaged circuit model for DC-DC converters is given. The method converts power electronic systems, which are periodically time-variant because of their switching operation, into unified, time-independent systems. Using the averaged circuit model makes it possible to combine the different converter topologies. Thus, all analyses and design processes for the DC motor can be easily realized by using the unified averaged model, which is valid over the whole switching period. Some large-signal variations such as the speed and current of the DC motor, the steady-state analysis, and the large-signal and small-signal transfer functions are easily obtained by using the averaged circuit model.
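
State-space averaging can be illustrated on a plain buck converter with a resistive load, a simpler stand-in for the motor drive analyzed in the paper; the component values and duty cycle are invented. The switched input is replaced by its duty-cycle average, leaving one time-invariant model that an ordinary integrator can step through:

```python
# State-space averaged model of a buck converter with resistive load
# (hypothetical component values; forward-Euler integration for clarity):
#   L di/dt = D*Vin - v,   C dv/dt = i - v/R
Vin, D = 24.0, 0.5
Lf, Cf, R = 1e-3, 100e-6, 10.0

i = v = 0.0
dt = 1e-6
for _ in range(200_000):            # 0.2 s of simulated time
    di = (D * Vin - v) / Lf
    dv = (i - v / R) / Cf
    i, v = i + di * dt, v + dv * dt

print(round(v, 2), round(i, 2))     # settles at v = D*Vin = 12.0, i = v/R = 1.2
```

Because the averaged model has no switching events, steady-state values and transfer functions follow from ordinary linear-circuit analysis, which is the convenience the abstract describes.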

  6. Real-time traffic signal optimization model based on average delay time per person

    Directory of Open Access Journals (Sweden)

    Pengpeng Jiao

    2015-10-01

    Real-time traffic signal control is very important for relieving urban traffic congestion. Many existing traffic control models were formulated with an optimization approach whose objective function minimizes vehicle delay time. To improve people's trip efficiency, this article instead aims to minimize the delay time per person. Based on time-varying traffic flow data at intersections, the article first fits curves of cumulative arriving and departing vehicles, together with the corresponding functions. Moreover, the article converts vehicle delay time to personal delay time using the average passenger loads of cars and buses, employs this as the objective function, and proposes a signal timing optimization model for intersections that yields real-time signal parameters, including cycle length and green time. The research further presents a case study based on practical data collected at an intersection in Beijing, China. The average delay time per person and the queue length are employed as evaluation indices to show the performance of the model. The results show that the proposed methodology is capable of improving traffic efficiency and is very effective for real-world applications.
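
The vehicle-to-person delay conversion can be sketched with invented counts, delays, and occupancy figures (the paper's optimization over cycle length and green time is reduced here to picking the better of two candidate plans):

```python
# Hypothetical illustration of person-delay weighting: convert vehicle delay
# to person delay via average passenger loads, then pick the plan that
# minimizes delay per person rather than per vehicle.
OCC_CAR, OCC_BUS = 1.5, 30.0  # assumed average passengers per vehicle

def best_plan(plans):
    """Return (name, average delay per person) of the best plan.

    Each plan: (name, car_count, car_delay_s, bus_count, bus_delay_s).
    """
    best = None
    for name, n_car, d_car, n_bus, d_bus in plans:
        people = n_car * OCC_CAR + n_bus * OCC_BUS
        total_delay = n_car * OCC_CAR * d_car + n_bus * OCC_BUS * d_bus
        avg = total_delay / people
        if best is None or avg < best[1]:
            best = (name, avg)
    return best

plans = [
    ("favor cars", 400, 20.0, 10, 60.0),   # buses wait longer
    ("favor buses", 400, 28.0, 10, 15.0),  # buses wait less
]
name, avg = best_plan(plans)
print(name, round(avg, 1))
```

With these numbers the bus-favoring plan wins on person-delay even though its vehicle delay is worse, which is the point of the objective-function change.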

  7. Improving sensitivity in micro-free flow electrophoresis using signal averaging

    Science.gov (United States)

    Turgeon, Ryan T.; Bowser, Michael T.

    2009-01-01

    Microfluidic free-flow electrophoresis (μFFE) is a separation technique that separates continuous streams of analytes as they travel through an electric field in a planar flow channel. The continuous nature of the μFFE separation suggests that approaches more commonly applied in spectroscopy and imaging may be effective in improving sensitivity. The current paper describes the S/N improvements that can be achieved by simply averaging multiple images of a μFFE separation; 20–24-fold improvements in S/N were observed by averaging the signal from 500 images recorded over 2 min. Up to an 80-fold improvement in S/N was observed by averaging 6500 images. Detection limits as low as 14 pM were achieved for fluorescein, which is impressive considering the non-ideal optical set-up used in these experiments. The limiting factor for this signal averaging approach was the stability of the μFFE separation. At separation times longer than 20 min, bubbles began to form at the electrodes, which disrupted the flow profile through the device, giving rise to erratic peak positions. PMID:19319908
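
The reported gains track the usual square-root law: averaging N images improves amplitude S/N by roughly sqrt(N), and sqrt(500) ≈ 22 and sqrt(6500) ≈ 81 match the 20–24-fold and 80-fold figures. A synthetic check (invented peak shape and noise level):

```python
import numpy as np

rng = np.random.default_rng(3)

def snr(sig, est):
    """Amplitude S/N: RMS of the true signal over RMS of the residual noise."""
    return float(np.sqrt(np.mean(sig**2) / np.mean((est - sig) ** 2)))

sig = np.exp(-0.5 * ((np.arange(200) - 100) / 8.0) ** 2)  # one analyte peak

frames = sig + rng.normal(0.0, 1.0, size=(6500, 200))     # noisy images

single = snr(sig, frames[0])
avg500 = snr(sig, frames[:500].mean(axis=0))
avg6500 = snr(sig, frames.mean(axis=0))

# Averaging N frames improves amplitude S/N by ~sqrt(N).
print(round(avg500 / single), round(avg6500 / single))
```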

  8. Accurate measurement of imaging photoplethysmographic signals based camera using weighted average

    Science.gov (United States)

    Pang, Zongguang; Kong, Lingqin; Zhao, Yuejin; Sun, Huijuan; Dong, Liquan; Hui, Mei; Liu, Ming; Liu, Xiaohua; Liu, Lingling; Li, Xiaohui; Li, Rongji

    2018-01-01

    Imaging photoplethysmography (IPPG) is an emerging technique for extracting the vital signs of human beings from video recordings. With advantages such as non-contact measurement, low cost, and easy operation, IPPG has become a research hotspot in the field of biomedicine. However, noise from non-micro-arterial areas cannot simply be removed, because the micro-arterial distribution is uneven and the signal strength differs from region to region, which results in a low signal-to-noise ratio of IPPG signals and low accuracy of the derived heart rate. In this paper, we propose a method of improving the signal-to-noise ratio of camera-based IPPG signals using a weighted average over sub-regions of the face. Firstly, we obtain the regions of interest (ROIs) of a subject's face from the camera. Secondly, each region of interest is tracked and feature-matched in each frame of the video, and each tracked region of the face is divided into 60 × 60-pixel blocks. Thirdly, the weight of the PPG signal of each sub-region is calculated based on the signal-to-noise ratio of that sub-region. Finally, we combine the IPPG signals from all the tracked ROIs using the weighted average. Compared with existing approaches, the results show that the proposed method yields modest but significant improvements in the signal-to-noise ratio of the camera-based PPG estimate and in the accuracy of heart rate measurement.
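
The SNR-weighted combination step can be sketched as inverse-variance weighting across sub-regions; the pulse waveform, region count, and noise levels below are invented, and the per-region SNR is taken from the known noise level rather than estimated from the spectrum as a real implementation would:

```python
import numpy as np

rng = np.random.default_rng(4)

t = np.arange(0, 10, 0.04)                 # 25 fps video, 10 s
pulse = np.sin(2 * np.pi * 1.2 * t)        # ~72 bpm PPG component

# Sub-region signals with very different noise levels (micro-arterial
# density varies across the face).
noise_sd = np.array([0.3, 1.0, 3.0, 8.0])
regions = np.array([pulse + rng.normal(0.0, sd, t.size) for sd in noise_sd])

# Weight each sub-region by its (assumed known) SNR, here 1/variance.
snr = 1.0 / noise_sd**2
weights = snr / snr.sum()

combined = weights @ regions          # SNR-weighted average
uniform = regions.mean(axis=0)        # plain average for comparison

def rms_err(x):
    return float(np.sqrt(np.mean((x - pulse) ** 2)))

print(rms_err(combined) < rms_err(uniform))  # weighted combination is cleaner
```

A plain average lets the noisiest block dominate; the weighted average suppresses it, which is the effect the abstract reports.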

  9. Novel MGF-based expressions for the average bit error probability of binary signalling over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2014-04-01

    The main idea in the moment generating function (MGF) approach is to re-express the conditional bit error probability (BEP) in a desired exponential form so that a possibly multi-fold performance average is readily converted into a computationally efficient single-fold average, sometimes into closed form, by means of the MGF of the signal-to-noise ratio. However, as presented in [1] and specifically indicated in [2], and also to the best of our knowledge, there does not exist an MGF-based approach in the literature to represent Wojnar's generic BEP expression in a desired exponential form. This paper presents novel MGF-based expressions for calculating the average BEP of binary signalling over generalized fading channels, specifically by expressing Wojnar's generic BEP expression in a desirable exponential form. We also propose MGF-based expressions to explore the amount of dispersion in the BEP for binary signalling over generalized fading channels.
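
The single-fold averaging that the MGF approach enables can be made concrete for coherent binary PSK, a standard special case rather than Wojnar's generic expression: writing the Gaussian Q-function in Craig's form and exchanging expectation and integration reduces the fading average to one finite integral of the SNR's MGF.

```latex
% Craig's form of the Gaussian Q-function:
Q(x) \;=\; \frac{1}{\pi}\int_{0}^{\pi/2}\exp\!\Big(-\frac{x^{2}}{2\sin^{2}\theta}\Big)\,d\theta,
\qquad x \ge 0.

% Average BEP of coherent BPSK over a fading SNR \gamma with MGF
% M_{\gamma}(s) = \mathbb{E}\big[e^{s\gamma}\big]:
\bar{P}_{b} \;=\; \mathbb{E}_{\gamma}\big[Q\big(\sqrt{2\gamma}\big)\big]
\;=\; \frac{1}{\pi}\int_{0}^{\pi/2} M_{\gamma}\!\Big(-\frac{1}{\sin^{2}\theta}\Big)\,d\theta .
```

The integrand is smooth over a finite interval, so the average BEP is cheap to evaluate for any fading law whose MGF is known; the paper's contribution is extending this kind of form to Wojnar's generic BEP expression.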

  10. Average BER analysis of SCM-based free-space optical systems by considering the effect of IM3 with OSSB signals under turbulence channels.

    Science.gov (United States)

    Lim, Wansu; Cho, Tae-Sik; Yun, Changho; Kim, Kiseon

    2009-11-09

    In this paper, we derive the average bit error rate (BER) of subcarrier multiplexing (SCM)-based free space optics (FSO) systems using a dual-drive Mach-Zehnder modulator (DD-MZM) for optical single-sideband (OSSB) signals under atmospheric turbulence channels. In particular, we consider the third-order intermodulation (IM3), a significant performance degradation factor, in the case of high input signal power systems. The derived average BER, as a function of the input signal power and the scintillation index, is employed to determine the optimum number of SCM users when designing FSO systems. For instance, when the number of users doubles, the input signal power decreases by almost 2 dBm under the log-normal and exponential turbulence channels at a given average BER.

  11. Inverse methods for estimating primary input signals from time-averaged isotope profiles

    Science.gov (United States)

    Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.

    2005-08-01

    Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
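
The minimum-length inversion of Am = d can be sketched with the Moore-Penrose pseudoinverse; the sinusoidal input, window width, and sampling pattern below are invented stand-ins for the amelogenesis model:

```python
import numpy as np

rng = np.random.default_rng(5)

n = 60
m_true = np.sin(2 * np.pi * np.arange(n) / 30)   # hypothetical seasonal input

# A encodes time-averaging: each enamel sample integrates the input over a
# 10-step window, with samples every 5 steps (a crude amelogenesis stand-in).
rows = []
for start in range(0, n - 9, 5):
    r = np.zeros(n)
    r[start:start + 10] = 0.1
    rows.append(r)
A = np.asarray(rows)

d = A @ m_true + rng.normal(0.0, 0.01, A.shape[0])   # measured, damped profile

# Minimum-length solution of A m = d via the Moore-Penrose pseudoinverse.
m_est = np.linalg.pinv(A) @ d

print(round(float(np.corrcoef(m_est, m_true)[0, 1]), 2))
```

The system is underdetermined (11 measurements, 60 unknowns), so the pseudoinverse picks the solution of smallest norm that still reproduces the measured averages, mirroring the paper's minimum-length formulation.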

  12. Application of NMR circuit for superconducting magnet using signal averaging

    International Nuclear Information System (INIS)

    Yamada, R.; Ishimoto, H.; Shea, M.F.; Schmidt, E.E.; Borer, K.

    1977-01-01

    An NMR circuit was used to measure the absolute field values of Fermilab Energy Doubler magnets up to 44 kG. A signal averaging method to improve the S/N ratio was implemented by means of a Tektronix Digital Processing Oscilloscope, followed by the development of an inexpensive microprocessor based system contained in a NIM module. Some of the data obtained from measuring two superconducting dipole magnets are presented

  13. Performance Evaluation of Received Signal Strength Based Hard Handover for UTRAN LTE

    DEFF Research Database (Denmark)

    Anas, Mohmmad; Calabrese, Francesco Davide; Mogensen, Preben

    2007-01-01

    This paper evaluates hard handover performance for the UTRAN LTE system. The focus is on the impact that a received-signal-strength-based hard handover algorithm has on system performance, measured in terms of the number of handovers, the time between two consecutive handovers, and the uplink SINR for a user about to experience a handover. A handover algorithm based on received signal strength measurements has been designed and implemented in a dynamic system-level simulator and studied for different parameter sets in a 3GPP UTRAN LTE recommended simulation scenario. The results suggest that a downlink measurement bandwidth of 1.25 MHz and a handover margin of 2 dB to 6 dB are the parameters that lead to the best compromise between the average number of handovers and the average uplink SINR for user speeds of 3 km/h to 120 km/h.
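
The handover-margin trade-off can be sketched with a toy received-signal-strength model (path-loss slopes, shadowing level, and cell layout are invented; no time-to-trigger or measurement filtering is modeled): a larger margin suppresses ping-pong handovers near the cell border:

```python
import numpy as np

rng = np.random.default_rng(6)

def count_handovers(margin_db, rss_a, rss_b):
    """Hard handover: switch when the other cell's RSS exceeds the
    serving cell's RSS by margin_db (no time-to-trigger modeled)."""
    serving, handovers = 0, 0
    for sample in zip(rss_a, rss_b):
        other = 1 - serving
        if sample[other] > sample[serving] + margin_db:
            serving = other
            handovers += 1
    return handovers

# User moves from cell A toward cell B; shadow fading makes the RSS noisy.
d = np.linspace(0.0, 1.0, 2000)
rss_a = -60.0 - 30.0 * d + rng.normal(0.0, 4.0, d.size)
rss_b = -90.0 + 30.0 * d + rng.normal(0.0, 4.0, d.size)

h0 = count_handovers(0.0, rss_a, rss_b)
h6 = count_handovers(6.0, rss_a, rss_b)
print(h0, h6)   # larger margin -> far fewer ping-pong handovers
```

The margin buys fewer handovers at the cost of staying longer on a weakening cell (lower uplink SINR), which is exactly the compromise the study quantifies.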

  14. Phase-rectified signal averaging method to predict perinatal outcome in infants with very preterm fetal growth restriction- a secondary analysis of TRUFFLE-trial

    NARCIS (Netherlands)

    Lobmaier, Silvia M.; Mensing van Charante, Nico; Ferrazzi, Enrico; Giussani, Dino A.; Shaw, Caroline J.; Müller, Alexander; Ortiz, Javier U.; Ostermayer, Eva; Haller, Bernhard; Prefumo, Federico; Frusca, Tiziana; Hecher, Kurt; Arabin, Birgit; Thilaganathan, Baskaran; Papageorghiou, Aris T.; Bhide, Amarnath; Martinelli, Pasquale; Duvekot, Johannes J.; van Eyck, Jim; Visser, Gerard H A; Schmidt, Georg; Ganzevoort, Wessel; Lees, Christoph C.; Schneider, Karl T M; Bilardo, Caterina M.; Brezinka, Christoph; Diemert, Anke; Derks, Jan B.; Schlembach, Dietmar; Todros, Tullia; Valcamonico, Adriana; Marlow, Neil; van Wassenaer-Leemhuis, Aleid

    2016-01-01

    Background: Phase-rectified signal averaging, an innovative signal processing technique, can be used to investigate quasi-periodic oscillations in noisy, nonstationary signals that are obtained from fetal heart rate. Phase-rectified signal averaging is currently the best method to predict survival
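
A minimal phase-rectified signal averaging sketch (deceleration-anchored, on an invented heart-rate-like series; the window length and the deceleration-capacity estimate follow the usual PRSA recipe, not necessarily the TRUFFLE implementation):

```python
import numpy as np

rng = np.random.default_rng(7)

def prsa(x, L=30):
    """Phase-rectified signal averaging with deceleration anchors: every
    sample smaller than its predecessor anchors a window of +/-L samples;
    aligning and averaging the windows extracts quasi-periodic structure
    from a noisy, nonstationary series."""
    anchors = [i for i in range(L, len(x) - L) if x[i] < x[i - 1]]
    return np.array([x[i - L:i + L] for i in anchors]).mean(axis=0)

# Invented heart-rate-like series: slow oscillation + drift + heavy noise.
n = 5000
t = np.arange(n)
x = 10 * np.sin(2 * np.pi * t / 40) + 0.01 * t + rng.normal(0.0, 5.0, n)

curve = prsa(x)
# Deceleration-related step at the anchor (the usual PRSA summary number).
dc = (curve[30] + curve[31] - curve[29] - curve[28]) / 4
print(curve.size, dc < 0)
```

Because anchors are selected by the local phase (decreasing samples) rather than by absolute time, drift and nonstationarity largely cancel in the average, which is what makes the method robust on fetal heart rate traces.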

  15. Phase-rectified signal averaging method to predict perinatal outcome in infants with very preterm fetal growth restriction- a secondary analysis of TRUFFLE-trial

    NARCIS (Netherlands)

    Lobmaier, Silvia M.; Mensing van Charante, Nico; Ferrazzi, Enrico; Giussani, Dino A.; Shaw, Caroline J.; Müller, Alexander; Ortiz, Javier U.; Ostermayer, Eva; Haller, Bernhard; Prefumo, Federico; Frusca, Tiziana; Hecher, Kurt; Arabin, Birgit; Thilaganathan, Baskaran; Papageorghiou, Aris T.; Bhide, Amarnath; Martinelli, Pasquale; Duvekot, Johannes J.; van Eyck, Jim; Visser, Gerard H. A.; Schmidt, Georg; Ganzevoort, Wessel; Lees, Christoph C.; Schneider, Karl T. M.; Bilardo, Caterina M.; Brezinka, Christoph; Diemert, Anke; Derks, Jan B.; Schlembach, Dietmar; Todros, Tullia; Valcamonico, Adriana; Marlow, Neil; van Wassenaer-Leemhuis, Aleid

    2016-01-01

    Phase-rectified signal averaging, an innovative signal processing technique, can be used to investigate quasi-periodic oscillations in noisy, nonstationary signals that are obtained from fetal heart rate. Phase-rectified signal averaging is currently the best method to predict survival after

  16. Prolonged signal-averaged P wave duration as a prognostic marker for morbidity and mortality in patients with congestive heart failure

    DEFF Research Database (Denmark)

    Dixen, Ulrik; Wallevik, Laura; Hansen, Maja

    2003-01-01

    To evaluate the prognostic roles of prolonged signal-averaged P wave duration (SAPWD), raised levels of natriuretic peptides, and clinical characteristics in patients with stable congestive heart failure (CHF).

  17. Advanced pulse oximeter signal processing technology compared to simple averaging. I. Effect on frequency of alarms in the operating room.

    Science.gov (United States)

    Rheineck-Leyssius, A T; Kalkman, C J

    1999-05-01

    To determine the effect of a new signal processing technique (Oxismart, Nellcor, Inc., Pleasanton, CA) on the incidence of false pulse oximeter alarms in the operating room (OR). Prospective observational study. Nonuniversity hospital. 53 ASA physical status I, II, and III consecutive patients undergoing general anesthesia with tracheal intubation. In the OR we compared the number of alarms produced by a recently developed third-generation pulse oximeter (Nellcor Symphony N-3000) with the Oxismart signal processing technique and a conventional pulse oximeter (Criticare 504). Three pulse oximeters were used simultaneously in each patient: a Nellcor pulse oximeter, a Criticare with the signal averaging time set at 3 seconds, and a similar unit with the signal averaging time set at 21 seconds. For each pulse oximeter, the number of false (artifact) alarms was counted. One false alarm was produced by the Nellcor (duration 55 sec) and one by the Criticare unit averaging over 21 seconds (5 sec). The incidence of false alarms was highest with the Criticare unit averaging over 3 seconds, which produced 20 false alarms in eight patients, significantly more than the Nellcor with Oxismart signal processing or the Criticare monitor with the longer averaging time of 21 seconds.
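
The averaging-time effect can be reproduced with a toy trace: brief artifact dips pass through a short moving average but are diluted by a long one. The sampling rate, artifact depth/duration, and the 90% alarm threshold are invented:

```python
import numpy as np

n = 3600                             # one hour of 1 Hz SpO2 samples
spo2 = np.full(n, 98.0)
for start in range(300, 3600, 330):  # ten brief motion artifacts
    spo2[start:start + 5] = 80.0     # 5 s dips, not true desaturations

def alarm_onsets(sig, avg_s, threshold=90.0):
    """Count alarm onsets after moving-average smoothing over avg_s seconds."""
    smoothed = np.convolve(sig, np.ones(avg_s) / avg_s, mode="valid")
    low = smoothed < threshold
    return int(low[0] + np.sum(low[1:] & ~low[:-1]))

a3 = alarm_onsets(spo2, 3)
a21 = alarm_onsets(spo2, 21)
print(a3, a21)   # 10 alarms with 3 s averaging, 0 with 21 s
```

A 5-second dip survives a 3-sample average almost intact, but spread over 21 samples it only lowers the trace by a few percent and never crosses the alarm threshold; the cost of long averaging is, of course, slower response to real desaturations.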

  18. Cooperative AF Relaying in Spectrum-Sharing Systems: Performance Analysis under Average Interference Power Constraints and Nakagami-m Fading

    KAUST Repository

    Xia, Minghua

    2012-06-01

    Since the electromagnetic spectrum resource becomes more and more scarce, improving spectral efficiency is extremely important for the sustainable development of wireless communication systems and services. Integrating cooperative relaying techniques into spectrum-sharing cognitive radio systems sheds new light on higher spectral efficiency. In this paper, we analyze the end-to-end performance of cooperative amplify-and-forward (AF) relaying in spectrum-sharing systems. In order to achieve the optimal end-to-end performance, the transmit powers of the secondary source and the relays are optimized with respect to average interference power constraints at primary users and Nakagami-m fading parameters of interference channels (for mathematical tractability, the desired channels from secondary source to relay and from relay to secondary destination are assumed to be subject to Rayleigh fading). Also, both partial and opportunistic relay-selection strategies are exploited to further enhance system performance. Based on the exact distribution functions of the end-to-end signal-to-noise ratio (SNR) obtained herein, the outage probability, average symbol error probability, diversity order, and ergodic capacity of the system under study are analytically investigated. Our results show that system performance is dominated by the resource constraints and it improves slowly with increasing average SNR. Furthermore, a larger Nakagami-m fading parameter on the interference channels deteriorates system performance slightly. On the other hand, when interference power constraints are stringent, opportunistic relay selection can be exploited to improve system performance significantly. All analytical results are corroborated by simulation results and they are shown to be efficient tools for exact evaluation of system performance.

  19. STUDY OF WITHERS HEIGHT AVERAGE PERFORMANCES IN HUCUL HORSE BREED – HROBY BLOODLINE

    Directory of Open Access Journals (Sweden)

    M. MAFTEI

    2008-10-01

    The study of average performances in a population is of great importance because, at the population level, the average phenotypic value equals the average genotypic value. Thus, studying the average values of characters gives us an idea of the genetic level of the population. The biological material is represented by 177 Hucul horses from the Hroby bloodline, divided into 6 stallion families (tab. 1) and analyzed at 18, 30 and 42 months of age, owned by the Lucina Hucul stud farm. The average performances for withers height are presented in tab. 2. The average performances for this character lie within the characteristic limits of the breed. Both sexes show a small degree of variability, with a decreasing tendency with ageing. Growth follows a normal evolution in time, with significant differences only at the age of 42 months. We can therefore say that the average performances for withers height take different values, influenced by age, with a decreasing tendency.

  20. STUDY OF WITHERS HEIGHT AVERAGE PERFORMANCES IN HUCUL HORSE BREED –GORAL BLOODLINE

    Directory of Open Access Journals (Sweden)

    M. MAFTEI

    2008-10-01

    The study of average performances in a population is of great importance because, at the population level, the average phenotypic value equals the average genotypic value. Thus, studying the average values of characters gives us an idea of the genetic level of the population. The biological material is represented by 87 Hucul horses from the Goral bloodline, divided into 5 stallion families (tab. 1) and analyzed at 18, 30 and 42 months of age, owned by the Lucina Hucul stud farm. The average performances for withers height are presented in tab. 2. The average performances for this character lie within the characteristic limits of the breed. Both sexes show a small degree of variability, with a decreasing tendency with ageing. Growth follows a normal evolution in time, with significant differences only at the age of 42 months. We can therefore say that the average performances for withers height take different values, influenced by age, with a decreasing tendency.

  1. Removing the Influence of Shimmer in the Calculation of Harmonics-To-Noise Ratios Using Ensemble-Averages in Voice Signals

    Directory of Open Access Journals (Sweden)

    Carlos Ferrer

    2009-01-01

    Harmonics-to-noise ratios (HNRs) are affected by general aperiodicity in voiced speech signals. To specifically reflect a signal-to-additive-noise ratio, the measurement should be insensitive to other periodicity perturbations, like jitter, shimmer, and waveform variability. The ensemble averaging technique is a time-domain method which has been gradually refined in terms of its sensitivity to jitter and waveform variability and required number of pulses. In this paper, shimmer is introduced in the model of the ensemble average, and a formula is derived which allows the reduction of shimmer effects in HNR calculation. The validity of the technique is evaluated using synthetically shimmered signals, and the prerequisites (glottal pulse positions and amplitudes) are obtained by means of fully automated methods. The results demonstrate the feasibility and usefulness of the correction.
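
The shimmer correction can be sketched as follows: estimate each pulse's amplitude by projection onto the ensemble average and normalize before computing residuals, so cycle-to-cycle amplitude variation no longer inflates the apparent noise. The pulse shape, shimmer level, and noise level are invented, and the correction here is a simple stand-in for the paper's derived formula:

```python
import numpy as np

rng = np.random.default_rng(8)

period, n_pulses, noise_sd = 80, 200, 0.05
template = np.sin(2 * np.pi * np.arange(period) / period) ** 3  # glottal-ish pulse

# Pulses with shimmer (cycle-to-cycle amplitude variation) plus additive noise.
amps = 1.0 + rng.normal(0.0, 0.1, n_pulses)  # ~10% shimmer
pulses = amps[:, None] * template + rng.normal(0.0, noise_sd, (n_pulses, period))

def hnr_db(p):
    avg = p.mean(axis=0)
    resid = p - avg
    return float(10 * np.log10(np.mean(avg**2) / np.mean(resid**2)))

plain = hnr_db(pulses)  # shimmer inflates the residual: HNR biased low

# Correction: estimate each pulse's amplitude by projection onto the
# ensemble average, normalize, then form the ensemble average again.
avg = pulses.mean(axis=0)
a_hat = pulses @ avg / (avg @ avg)
corrected = hnr_db(pulses / a_hat[:, None])

true_hnr = float(10 * np.log10(np.mean(template**2) / noise_sd**2))  # ~21 dB
print(round(plain, 1), round(corrected, 1))
```

Without the amplitude normalization the shimmer component is counted as noise and the HNR comes out several dB low; with it, the estimate recovers the additive-noise HNR, which is the bias the paper sets out to remove.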

  2. Development and significance of a fetal electrocardiogram recorded by signal-averaged high-amplification electrocardiography.

    Science.gov (United States)

    Hayashi, Risa; Nakai, Kenji; Fukushima, Akimune; Itoh, Manabu; Sugiyama, Toru

    2009-03-01

    Although ultrasonic diagnostic imaging and fetal heart monitors have undergone great technological improvements, the development and use of fetal electrocardiograms to evaluate fetal arrhythmias and autonomic nervous activity have not been fully established. We verified the clinical significance of the novel signal-averaged vector-projected high amplification ECG (SAVP-ECG) method in fetuses from 48 gravidas at 32-41 weeks of gestation and in 34 neonates. SAVP-ECGs from fetuses and newborns were recorded using a modified XYZ-leads system. Once noise and maternal QRS waves were removed, the P, QRS, and T wave intervals were measured from the signal-averaged fetal ECGs. We also compared fetal and neonatal heart rates (HRs), coefficients of variation of heart rate variability (CV) as a parasympathetic nervous activity, and the ratio of low to high frequency (LF/HF ratio) as a sympathetic nervous activity. The rate of detection of a fetal ECG by SAVP-ECG was 72.9%, and the fetal and neonatal QRS and QTc intervals were not significantly different. The neonatal CVs and LF/HF ratios were significantly increased compared with those in the fetus. In conclusion, we have developed a fetal ECG recording method using the SAVP-ECG system, which we used to evaluate autonomic nervous system development.
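
The two autonomic indices compared in this record can be computed from an RR-interval series roughly as below (synthetic series; the band edges follow the common 0.04-0.15 Hz LF and 0.15-0.40 Hz HF convention, which may differ from the paper's definitions):

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic beat-to-beat RR series (s), resampled at 4 Hz, with LF (~0.1 Hz)
# and HF (~0.3 Hz) components; all amplitudes are illustrative.
fs = 4.0
t = np.arange(0, 300, 1 / fs)
rr = (0.45
      + 0.02 * np.sin(2 * np.pi * 0.1 * t)
      + 0.01 * np.sin(2 * np.pi * 0.3 * t)
      + rng.normal(0.0, 0.002, t.size))

# Coefficient of variation of heart rate variability (parasympathetic index).
cv = 100.0 * rr.std() / rr.mean()

# LF/HF ratio (sympathetic index) from the periodogram band powers.
x = rr - rr.mean()
psd = np.abs(np.fft.rfft(x)) ** 2
f = np.fft.rfftfreq(x.size, 1 / fs)
lf = psd[(f >= 0.04) & (f < 0.15)].sum()
hf = psd[(f >= 0.15) & (f < 0.40)].sum()
print(round(cv, 1), round(lf / hf, 1))
```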

  3. Compressive sensing scalp EEG signals: implementations and practical performance.

    Science.gov (United States)

    Abdulghani, Amir M; Casson, Alexander J; Rodriguez-Villegas, Esther

    2012-11-01

    Highly miniaturised, wearable computing and communication systems allow unobtrusive, convenient and long term monitoring of a range of physiological parameters. For long term operation from the physically smallest batteries, the average power consumption of a wearable device must be very low. It is well known that the overall power consumption of these devices can be reduced by the inclusion of low power consumption, real-time compression of the raw physiological data in the wearable device itself. Compressive sensing is a new paradigm for providing data compression: it has shown significant promise in fields such as MRI; and is potentially suitable for use in wearable computing systems as the compression process required in the wearable device has a low computational complexity. However, the practical performance very much depends on the characteristics of the signal being sensed. As such the utility of the technique cannot be extrapolated from one application to another. Long term electroencephalography (EEG) is a fundamental tool for the investigation of neurological disorders and is increasingly used in many non-medical applications, such as brain-computer interfaces. This article investigates in detail the practical performance of different implementations of the compressive sensing theory when applied to scalp EEG signals.

  4. MOTION ARTIFACT REDUCTION IN FUNCTIONAL NEAR INFRARED SPECTROSCOPY SIGNALS BY AUTOREGRESSIVE MOVING AVERAGE MODELING BASED KALMAN FILTERING

    Directory of Open Access Journals (Sweden)

    MEHDI AMIAN

    2013-10-01

    Functional near infrared spectroscopy (fNIRS) is a technique used for noninvasive measurement of the oxyhemoglobin (HbO2) and deoxyhemoglobin (HHb) concentrations in brain tissue. Since the ratio of the concentrations of these two agents is correlated with neuronal activity, fNIRS can be used for monitoring and quantifying cortical activity. The portability of fNIRS makes it a good candidate for studies involving subject movement. The fNIRS measurements, however, are sensitive to artifacts generated by head motion, which makes fNIRS signals less effective in such applications. In this paper, autoregressive moving average (ARMA) modeling of the fNIRS signal is proposed for a state-space representation of the signal, which is then fed to a Kalman filter for estimating the motionless signal from the motion-corrupted signal. Results are compared to the previously reported autoregressive (AR) model based approach and show that the ARMA models outperform the AR models. We attribute this to the richer structure of ARMA models, which contain more terms than AR models. We show that the signal to noise ratio (SNR) is about 2 dB higher for the ARMA based method.
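
A sketch of the ARMA-based Kalman filter: an ARMA(2,1) signal model is put in state-space form (the standard Hamilton/Harvey representation) and the motion artifact is treated as observation noise, a simplification of the paper's setup; all coefficients are invented rather than fitted to fNIRS data:

```python
import numpy as np

rng = np.random.default_rng(9)

# ARMA(2,1) signal model: s_t = p1*s_{t-1} + p2*s_{t-2} + w_t + q1*w_{t-1}
p1, p2, q1 = 1.6, -0.7, 0.4
sw, sv = 1.0, 3.0   # process noise sd, motion-artifact (observation) noise sd

# Simulate the clean signal s and its motion-corrupted observation y.
n = 2000
w = rng.normal(0.0, sw, n)
z = np.zeros(n)
for t in range(2, n):
    z[t] = p1 * z[t - 1] + p2 * z[t - 2] + w[t]
s = z + q1 * np.concatenate(([0.0], z[:-1]))
y = s + rng.normal(0.0, sv, n)

# State-space form: x_t = [z_t, z_{t-1}],  s_t = H x_t.
F = np.array([[p1, p2], [1.0, 0.0]])
H = np.array([1.0, q1])
Q = np.array([[sw**2, 0.0], [0.0, 0.0]])
R = sv**2

x, P = np.zeros(2), np.eye(2) * 10.0
est = np.zeros(n)
for t in range(n):
    # Predict one step ahead with the ARMA dynamics.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the noisy observation.
    S = H @ P @ H + R
    K = P @ H / S
    x = x + K * (y[t] - H @ x)
    P = P - np.outer(K, H @ P)
    est[t] = H @ x

rmse_raw = float(np.sqrt(np.mean((y - s) ** 2)))
rmse_kf = float(np.sqrt(np.mean((est - s) ** 2)))
print(round(rmse_raw, 2), round(rmse_kf, 2))
```

The filter exploits the signal's temporal structure, so the filtered estimate tracks the clean series much more closely than the raw motion-corrupted observation does; swapping the ARMA state space for a pure AR one is the comparison the paper makes.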

  5. Removing the Influence of Shimmer in the Calculation of Harmonics-To-Noise Ratios Using Ensemble-Averages in Voice Signals

    OpenAIRE

    Carlos Ferrer; Eduardo González; María E. Hernández-Díaz; Diana Torres; Anesto del Toro

    2009-01-01

    Harmonics-to-noise ratios (HNRs) are affected by general aperiodicity in voiced speech signals. To specifically reflect a signal-to-additive-noise ratio, the measurement should be insensitive to other periodicity perturbations, like jitter, shimmer, and waveform variability. The ensemble averaging technique is a time-domain method which has been gradually refined in terms of its sensitivity to jitter and waveform variability and required number of pulses. In this paper, shimmer is introduced ...

  6. Raven's Test Performance of Sub-Saharan Africans: Average Performance, Psychometric Properties, and the Flynn Effect

    Science.gov (United States)

    Wicherts, Jelte M.; Dolan, Conor V.; Carlson, Jerry S.; van der Maas, Han L. J.

    2010-01-01

    This paper presents a systematic review of published data on the performance of sub-Saharan Africans on Raven's Progressive Matrices. The specific goals were to estimate the average level of performance, to study the Flynn Effect in African samples, and to examine the psychometric meaning of Raven's test scores as measures of general intelligence.…

  7. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Science.gov (United States)

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267
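
A simple iterative stand-in for the delay compensation (cross-correlating each trial against the current average, re-aligning, and repeating; the paper's joint ML criterion couples these steps more tightly than this heuristic does). The ERP shape, jitter range, and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(10)

n, n_trials = 256, 100
t = np.arange(n)
erp = np.exp(-0.5 * ((t - 128) / 10.0) ** 2)   # assumed ERP template

true_delays = rng.integers(-20, 21, n_trials)  # per-trial latency jitter
trials = np.array([np.roll(erp, d) + rng.normal(0.0, 0.5, n) for d in true_delays])

# Naive averaging ignores the variable delays and smears the ERP peak.
naive = trials.mean(axis=0)

# Estimate each trial's delay against the current average, re-align,
# re-average, and iterate so the reference sharpens.
ref = naive
for _ in range(3):
    est_delays = np.array([
        np.argmax([np.dot(np.roll(tr, -d), ref) for d in range(-25, 26)]) - 25
        for tr in trials
    ])
    aligned = np.mean([np.roll(tr, -d) for tr, d in zip(trials, est_delays)], axis=0)
    ref = aligned

print(round(float(naive.max()), 2), round(float(aligned.max()), 2))
```

The naive average flattens the peak because the latencies spread it out; after delay estimation and re-alignment the averaged peak recovers nearly its full amplitude, which is the sensitivity-to-delay problem the paper's joint ML schemes address.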

  8. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Directory of Open Access Journals (Sweden)

    Kyungsoo Kim

    2016-06-01

Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain-computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal-to-noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of an event-related potential (ERP) signal that represents a brain's response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°.

  9. Raven’s test performance of sub-Saharan Africans: average performance, psychometric properties, and the Flynn Effect

    NARCIS (Netherlands)

    Wicherts, J.M.; Dolan, C.V.; Carlson, J.S.; van der Maas, H.L.J.

    2010-01-01

This paper presents a systematic review of published data on the performance of sub-Saharan Africans on Raven's Progressive Matrices. The specific goals were to estimate the average level of performance, to study the Flynn Effect in African samples, and to examine the psychometric meaning of Raven's test scores as measures of general intelligence.

  10. Reduced fractal model for quantitative analysis of averaged micromotions in mesoscale: Characterization of blow-like signals

    International Nuclear Information System (INIS)

    Nigmatullin, Raoul R.; Toboev, Vyacheslav A.; Lino, Paolo; Maione, Guido

    2015-01-01

Highlights: •A new approach describes fractal-branched systems with long-range fluctuations. •A reduced fractal model is proposed. •The approach is used to characterize blow-like signals. •The approach is tested on data from different fields. -- Abstract: It has been shown that many micromotions in the mesoscale region are averaged in accordance with their self-similar (geometrical/dynamical) structure. This distinctive feature helps to reduce a wide set of different micromotions describing relaxation/exchange processes to an averaged collective motion, expressed mathematically in a rather general form. This reduction opens new perspectives in the description of different blow-like signals (BLS) in many complex systems. The main characteristic of these signals is a finite duration, also when the generalized reduced function is used for their quantitative fitting. As examples, we quantitatively describe available signals generated by people with bronchial asthma, songs of queen bees, and car engine valves operating in the idling regime. We develop a special treatment procedure based on the eigen-coordinates (ECs) method that allows us to justify the generalized reduced fractal model (RFM) for the description of BLS that can propagate in different complex systems. The obtained describing function is based on the self-similar properties of the different considered micromotions. This kind of cooperative model is proposed here for the first time. Although the nature of the dynamic processes that take place in fractal structures on a mesoscale level is not well understood, the parameters of the RFM fitting function can be used for the construction of calibration curves affected by various external/random factors. The calculated set of fitting parameters of these calibration curves can then characterize the BLS of different complex systems affected by those factors. Though the method to construct and analyze the calibration curves goes beyond the scope

  11. Signal averaging technique for noninvasive recording of late potentials in patients with coronary artery disease

    Science.gov (United States)

    Abboud, S.; Blatt, C. M.; Lown, B.; Graboys, T. B.; Sadeh, D.; Cohen, R. J.

    1987-01-01

An advanced noninvasive signal averaging technique was used to detect late potentials in two groups of patients: Group A (24 patients) with coronary artery disease (CAD) and without sustained ventricular tachycardia (VT), and Group B (8 patients) with CAD and sustained VT. Recorded analog data were digitized and aligned using a cross-correlation function with a fast Fourier transform scheme, averaged, and band-pass filtered between 60 and 200 Hz with a non-recursive digital filter. Averaged filtered waveforms were analyzed by a computer program for three parameters: (1) filtered QRS (fQRS) duration; (2) interval between the peak of the R wave and the end of the fQRS (R-LP); (3) RMS value of the last 40 msec of the fQRS (RMS). Significant differences were found between Groups A and B in fQRS duration (101 ± 13 msec vs. 123 ± 15 msec; p < .0005) and in R-LP (52 ± 11 msec vs. 71 ± 18 msec; p < .002). We conclude that (1) the use of a cross-correlation triggering method and a non-recursive digital filter enables reliable recording of late potentials from the body surface; (2) fQRS and R-LP durations are sensitive indicators of CAD patients susceptible to VT.
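A minimal sketch of the RMS late-potential parameter described above (RMS of the last 40 msec of the filtered QRS); the sampling rate, the window handling, and the toy waveform are assumptions, not the authors' implementation:

```python
import numpy as np

def rms_last_40ms(fqrs, fs, window_ms=40.0):
    """RMS amplitude of the terminal 40 msec of the filtered QRS,
    the late-potential 'RMS' parameter (assumes the array ends
    exactly at the QRS offset)."""
    n = int(round(fs * window_ms / 1000.0))
    tail = fqrs[-n:]
    return float(np.sqrt(np.mean(tail ** 2)))

fs = 1000.0                                    # Hz, assumed sampling rate
t = np.arange(int(0.120 * fs)) / fs            # a 120-msec "filtered QRS"
fqrs = np.sin(2 * np.pi * 100.0 * t) * np.exp(-t / 0.03)  # decaying burst inside the 60-200 Hz band

rms40 = rms_last_40ms(fqrs, fs)  # small terminal RMS, a late-potential-like tail
```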

  12. Error rate performance of narrowband multilevel CPFSK signals

    Science.gov (United States)

    Ekanayake, N.; Fonseka, K. J. P.

    1987-04-01

The paper presents a relatively simple method for analyzing the effect of IF filtering on the performance of multilevel FM signals. Using this method, the error rate performance of narrowband FM signals is analyzed for three different detection techniques, namely limiter-discriminator detection, differential detection, and coherent detection followed by differential decoding. The symbol error probabilities are computed for a Gaussian IF filter and a second-order Butterworth IF filter. It is shown that coherent detection followed by differential decoding yields better performance than limiter-discriminator detection and differential detection, whereas the two noncoherent detectors yield approximately identical performance.

  13. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

One of the cornerstones of the field of signal processing on graphs is the graph filter, a direct analogue of classical filters but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
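A one-coefficient ARMA graph-filter recursion of the kind this record designs can be sketched as follows; the coefficient names psi and phi and the normalization of the graph operator are illustrative, and convergence requires |psi| times the operator's spectral norm to be below one:

```python
import numpy as np

def arma1(M, x, psi, phi, iters=200):
    """ARMA(1) graph filter: iterate y <- psi*(M @ y) + phi*x.
    Steady state: y = phi * (I - psi*M)^(-1) x, i.e. a rational graph
    frequency response r(lambda) = phi / (1 - psi*lambda)."""
    y = np.zeros_like(x)
    for _ in range(iters):
        y = psi * (M @ y) + phi * x
    return y

# Small undirected path graph on 5 nodes.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
Lap = np.diag(A.sum(axis=1)) - A       # combinatorial Laplacian
M = Lap / np.linalg.norm(Lap, 2)       # scaled so the spectral norm is 1

x = np.array([1.0, -2.0, 0.5, 3.0, -1.0])  # graph signal to filter
psi, phi = 0.5, 1.0                        # |psi| < 1 guarantees convergence
y = arma1(M, x, psi, phi)
y_exact = phi * np.linalg.solve(np.eye(5) - psi * M, x)  # closed-form steady state
```

The recursion converges geometrically to the closed-form solution, which is what makes the filter attractive for distributed implementation: each node only needs its neighbors' previous values.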

  14. Advanced pulse oximeter signal processing technology compared to simple averaging. II. Effect on frequency of alarms in the postanesthesia care unit.

    Science.gov (United States)

    Rheineck-Leyssius, A T; Kalkman, C J

    1999-05-01

    To determine the effect of a new pulse oximeter (Nellcor Symphony N-3000, Pleasanton, CA) with signal processing technique (Oxismart) on the incidence of false alarms in the postanesthesia care unit (PACU). Prospective study. Nonuniversity hospital. 603 consecutive ASA physical status I, II, and III patients recovering from general or regional anesthesia in the PACU. We compared the number of alarms produced by a recently developed "third"-generation pulse oximeter (Nellcor Symphony N-3000) with Oxismart signal processing technique and a conventional pulse oximeter (Criticare 504, Waukesha, WI). Patients were randomly assigned to either a Nellcor pulse oximeter or a Criticare with the signal averaging time set at either 12 or 21 seconds. For each patient the number of false (artifact) alarms was counted. The Nellcor generated one false alarm in 199 patients and 36 (in 31 patients) "loss of pulse" alarms. The conventional pulse oximeter with the averaging time set at 12 seconds generated a total of 32 false alarms in 17 of 197 patients [compared with the Nellcor, relative risk (RR) 0.06, confidence interval (CI) 0.01 to 0.25] and a total of 172 "loss of pulse" alarms in 79 patients (RR 0.39, CI 0.28 to 0.55). The conventional pulse oximeter with the averaging time set at 21 seconds generated 12 false alarms in 11 of 207 patients (compared with the Nellcor, RR 0.09, CI 0.02 to 0.48) and a total of 204 "loss of pulse" alarms in 81 patients (RR 0.40, CI 0.28 to 0.56). The lower incidence of false alarms of the conventional pulse oximeter with the longest averaging time compared with the shorter averaging time did not reach statistical significance (false alarms RR 0.62, CI 0.3 to 1.27; "loss of pulse" alarms RR 0.98, CI 0.77 to 1.3). To date, this is the first report of a pulse oximeter that produced almost no false alarms in the PACU.

  15. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, the biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly from the fact that in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found as to which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
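The linear-versus-logarithmic averaging bias can be reproduced numerically in a few lines: for a lognormal ensemble, exponentiating the mean of the logs yields the geometric mean, which understates the arithmetic mean by the factor exp(sigma^2/2). The variability value below is an arbitrary illustration, not one of the paper's simulated cases:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.5                           # "atmospheric variability" (assumed)
x = rng.lognormal(mean=0.0, sigma=sigma, size=200_000)

linear_mean = x.mean()                # averaging the abundances
log_mean = np.exp(np.log(x).mean())   # averaging the log-abundances, then exponentiating

# Theory: E[x] = exp(sigma^2/2) ~ 1.133, while the geometric mean -> 1.0,
# so naive averaging of logarithmic retrievals is biased low here.
```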

  16. Leveraging Mechanism Simplicity and Strategic Averaging to Identify Signals from Highly Heterogeneous Spatial and Temporal Ozone Data

    Science.gov (United States)

    Brown-Steiner, B.; Selin, N. E.; Prinn, R. G.; Monier, E.; Garcia-Menendez, F.; Tilmes, S.; Emmons, L. K.; Lamarque, J. F.; Cameron-Smith, P. J.

    2017-12-01

    We summarize two methods to aid in the identification of ozone signals from underlying spatially and temporally heterogeneous data in order to help research communities avoid the sometimes burdensome computational costs of high-resolution high-complexity models. The first method utilizes simplified chemical mechanisms (a Reduced Hydrocarbon Mechanism and a Superfast Mechanism) alongside a more complex mechanism (MOZART-4) within CESM CAM-Chem to extend the number of simulated meteorological years (or add additional members to an ensemble) for a given modeling problem. The Reduced Hydrocarbon mechanism is twice as fast, and the Superfast mechanism is three times faster than the MOZART-4 mechanism. We show that simplified chemical mechanisms are largely capable of simulating surface ozone across the globe as well as the more complex chemical mechanisms, and where they are not capable, a simple standardized anomaly emulation approach can correct for their inadequacies. The second method uses strategic averaging over both temporal and spatial scales to filter out the highly heterogeneous noise that underlies ozone observations and simulations. This method allows for a selection of temporal and spatial averaging scales that match a particular signal strength (between 0.5 and 5 ppbv), and enables the identification of regions where an ozone signal can rise above the ozone noise over a given region and a given period of time. In conjunction, these two methods can be used to "scale down" chemical mechanism complexity and quantitatively determine spatial and temporal scales that could enable research communities to utilize simplified representations of atmospheric chemistry and thereby maximize their productivity and efficiency given computational constraints. While this framework is here applied to ozone data, it could also be applied to a broad range of geospatial data sets (observed or modeled) that have spatial and temporal coverage.
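The strategic-averaging idea, growing the averaging block until a prescribed signal rises above the residual noise, can be sketched as follows; the variability, signal size, and block lengths are illustrative, not values from the CESM CAM-Chem runs:

```python
import numpy as np

rng = np.random.default_rng(7)
days = 3650
noise_std = 8.0                                  # ppbv day-to-day variability (assumed)
signal = 2.0                                     # ppbv signal we want to detect
series = signal + noise_std * rng.standard_normal(days)

def averaged_noise_std(series, block):
    """Standard deviation of the series after averaging over temporal blocks."""
    m = series.size // block
    blocks = series[:m * block].reshape(m, block).mean(axis=1)
    return blocks.std(ddof=1)

# Noise shrinks roughly as 1/sqrt(block); pick the smallest block length
# at which the fixed signal exceeds the averaged noise.
block_needed = next(b for b in [1, 5, 10, 30, 90, 365]
                    if averaged_noise_std(series, b) < signal)
```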

  17. Spectral analysis of 87-lead body surface signal-averaged ECGs in patients with previous anterior myocardial infarction as a marker of ventricular tachycardia.

    Science.gov (United States)

    Hosoya, Y; Kubota, I; Shibata, T; Yamaki, M; Ikeda, K; Tomoike, H

    1992-06-01

There have been few studies on the relation between the body surface distribution of high- and low-frequency components within the QRS complex and ventricular tachycardia (VT). Eighty-seven-lead signal-averaged ECGs were obtained from 30 normal subjects (N group) and 30 patients with previous anterior myocardial infarction (MI), with VT (MI-VT[+] group, n = 10) or without VT (MI-VT[-] group, n = 20). The onset and offset of the QRS complex were determined from 87-lead root mean square values computed from the averaged (but not filtered) ECG waveforms. Fast Fourier transform analysis was performed on the signal-averaged ECGs. The resulting Fourier coefficients were attenuated by use of the transfer function, and then the inverse transform was done for five frequency ranges (0-25, 25-40, 40-80, 80-150, and 150-250 Hz). From the QRS onset to the QRS offset, the time integral of the absolute value of the reconstructed waveforms was calculated for each of the five frequency ranges. The body surface distributions of these areas were expressed as QRS area maps. The maximal values of the QRS area maps were compared among the three groups. In the frequency ranges of 0-25 and 150-250 Hz, there were no significant differences in the maximal values among the three groups. Both MI groups had significantly smaller maximal values of the QRS area maps in the frequency ranges of 25-40 and 40-80 Hz compared with the N group. The MI-VT(+) group had significantly smaller maximal values in the frequency ranges of 40-80 and 80-150 Hz than the MI-VT(-) group. The three groups were clearly differentiated by the maximal values of the 40-80-Hz QRS area map. It was suggested that the maximal value of the 40-80-Hz QRS area map is a new marker for VT after anterior MI.
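The per-band "QRS area" computation described above can be sketched with a hard FFT mask (the paper instead attenuates the Fourier coefficients with a transfer function before the inverse transform); the toy waveform is illustrative, with band edges taken from the abstract:

```python
import numpy as np

def band_area(sig, fs, f_lo, f_hi):
    """Time-integral of |waveform| reconstructed from one frequency band."""
    n = sig.size
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(sig)
    spec[~((freqs >= f_lo) & (freqs < f_hi))] = 0.0  # hard band mask
    recon = np.fft.irfft(spec, n)
    return np.sum(np.abs(recon)) / fs                # integral of |recon| dt

fs = 1000.0
t = np.arange(1000) / fs                             # 1 s "averaged QRS" (toy)
sig = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 60 * t)

areas = {band: band_area(sig, fs, *band)
         for band in [(0, 25), (25, 40), (40, 80), (80, 150)]}
```

For this toy signal, all energy lands in the 0-25 and 40-80 Hz bands, so those two "areas" dominate while the others are numerically zero.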

  18. Performance comparison of independent component analysis algorithms for fetal cardiac signal reconstruction: a study on synthetic fMCG data

    International Nuclear Information System (INIS)

Mantini, D; Hild II, K E; Alleva, G; Comani, S

    2006-01-01

Independent component analysis (ICA) algorithms have been successfully used for signal extraction tasks in the field of biomedical signal processing. We studied the performance of six algorithms (FastICA, CubICA, JADE, Infomax, TDSEP and MRMI-SIG) for fetal magnetocardiography (fMCG). Synthetic datasets were used to check the quality of the separated components against the original traces. Real fMCG recordings were simulated with linear combinations of typical fMCG source signals: maternal and fetal cardiac activity, ambient noise, maternal respiration, sensor spikes and thermal noise. Clusters of different dimensions (19, 36 and 55 sensors) were prepared to represent different MCG systems. Two types of signal-to-interference ratio (SIR) were measured. The first involves averaging over all estimated components, and the second is based solely on the fetal trace. The computation time to reach a minimum of 20 dB SIR was measured for all six algorithms. No significant dependency on gestational age or cluster dimension was observed. Infomax performed poorly when a sub-Gaussian source was included; TDSEP and MRMI-SIG were sensitive to additive noise, whereas FastICA, CubICA and JADE showed the best performance. Of all six methods considered, FastICA had the best overall performance in terms of both separation quality and computation times.
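A standard way to score such separations is the signal-to-interference ratio of an estimated component against the known source; the projection-based definition below is a common choice and may differ in detail from the two SIR variants used in the paper:

```python
import numpy as np

def sir_db(true_src, est_src):
    """SIR in dB: power of the estimate's projection onto the true source
    versus the power of the residual interference."""
    s = true_src - true_src.mean()
    e = est_src - est_src.mean()
    a = np.dot(e, s) / np.dot(s, s)       # optimal scaling of the source in the estimate
    target = a * s
    interference = e - target
    return 10.0 * np.log10(np.dot(target, target) / np.dot(interference, interference))

rng = np.random.default_rng(2)
t = np.arange(1000) / 250.0               # 4 s at 250 Hz (toy)
fetal = np.sin(2 * np.pi * 2.3 * t)       # toy fetal cardiac trace
estimate = fetal + 0.1 * rng.standard_normal(t.size)  # imperfect separation
quality = sir_db(fetal, estimate)         # roughly 17 dB at this noise level
```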

  19. An Integrated Signaling-Encryption Mechanism to Reduce Error Propagation in Wireless Communications: Performance Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Olama, Mohammed M [ORNL; Matalgah, Mustafa M [ORNL; Bobrek, Miljko [ORNL

    2015-01-01

    Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).

  20. Average combination difference morphological filters for fault feature extraction of bearing

    Science.gov (United States)

    Lv, Jingxiang; Yu, Jianbo

    2018-02-01

In order to extract impulse components from vibration signals containing heavy noise and harmonics, a new morphological filter called the average combination difference morphological filter (ACDIF) is proposed in this paper. ACDIF first constructs several new combination difference (CDIF) operators, and then integrates the best two CDIFs as the final morphological filter. This design scheme enables ACDIF to extract the positive and negative impulses existing in vibration signals, enhancing the accuracy of bearing fault diagnosis. The length of the structuring element (SE), which affects the performance of ACDIF, is determined adaptively by a new indicator called Teager energy kurtosis (TEK). TEK further improves the effectiveness of ACDIF for fault feature extraction. Experimental results on simulated and real bearing vibration signals demonstrate that ACDIF can effectively suppress noise and extract periodic impulses from bearing vibration signals.
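The elementary building block of such filters, a dilation-minus-erosion difference with a flat structuring element, responds to both positive and negative impulses while passing little of a slowly varying baseline. ACDIF itself combines and averages several such operators and selects the SE length via TEK, which is not reproduced here; this is only the basic operator:

```python
import numpy as np

def dilate(x, se_len):
    """Grey-scale dilation with a flat SE: a moving maximum."""
    pad = se_len // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([xp[i:i + se_len].max() for i in range(x.size)])

def erode(x, se_len):
    """Grey-scale erosion with a flat SE: a moving minimum."""
    pad = se_len // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([xp[i:i + se_len].min() for i in range(x.size)])

def difference_filter(x, se_len=5):
    """Dilation minus erosion: peaks wherever an impulse of either sign occurs."""
    return dilate(x, se_len) - erode(x, se_len)

# Toy vibration signal: slow harmonic plus one positive and one negative impulse.
t = np.arange(400)
x = 0.5 * np.sin(2 * np.pi * t / 100.0)
x[100] += 3.0    # positive impact
x[250] -= 3.0    # negative impact
out = difference_filter(x, se_len=5)   # large only at the two impulse locations
```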

  1. FOREGROUND MODEL AND ANTENNA CALIBRATION ERRORS IN THE MEASUREMENT OF THE SKY-AVERAGED λ21 cm SIGNAL AT z∼ 20

    Energy Technology Data Exchange (ETDEWEB)

    Bernardi, G. [SKA SA, 3rd Floor, The Park, Park Road, Pinelands, 7405 (South Africa); McQuinn, M. [Department of Astronomy, University of California, Berkeley, CA 94720 (United States); Greenhill, L. J., E-mail: gbernardi@ska.ac.za [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2015-01-20

The most promising near-term observable of the cosmic dark age prior to widespread reionization (z ∼ 15-200) is the sky-averaged λ21 cm background arising from hydrogen in the intergalactic medium. Though an individual antenna could in principle detect the line signature, data analysis must separate foregrounds that are orders of magnitude brighter than the λ21 cm background (but that are anticipated to vary monotonically and gradually with frequency, e.g., they are considered "spectrally smooth"). Using more physically motivated models for foregrounds than in previous studies, we show that the intrinsic spectral smoothness of the foregrounds is likely not a concern, and that data analysis for an ideal antenna should be able to detect the λ21 cm signal after subtracting a ∼fifth-order polynomial in log ν. However, we find that the foreground signal is corrupted by the angular and frequency-dependent response of a real antenna. The frequency dependence complicates modeling of foregrounds commonly based on the assumption of spectral smoothness. Our calculations focus on the Large-aperture Experiment to detect the Dark Age, which combines both radiometric and interferometric measurements. We show that statistical uncertainty remaining after fitting antenna gain patterns to interferometric measurements is not anticipated to compromise extraction of the λ21 cm signal for a range of cosmological models after fitting a seventh-order polynomial to radiometric data. Our results generalize to most efforts to measure the sky-averaged spectrum.
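The polynomial foreground subtraction described above can be sketched as a fit in log-frequency; the mock power-law foreground, the Gaussian 21 cm trough, and all numbers below are illustrative, not LEDA values:

```python
import numpy as np

nu = np.linspace(40e6, 90e6, 200)                  # Hz, observing band (assumed)
x = np.log(nu / 70e6)                              # log-frequency coordinate

foreground = 3000.0 * (nu / 70e6) ** (-2.5)        # K, smooth power-law foreground
signal = -0.1 * np.exp(-0.5 * ((nu - 70e6) / 5e6) ** 2)  # ~100 mK absorption trough
data = foreground + signal

# Fit a fifth-order polynomial in log(frequency) to log(data) and subtract.
coeffs = np.polyfit(x, np.log(data), 5)
residual = data - np.exp(np.polyval(coeffs, x))
# The thousands-of-K foreground is fitted out, leaving sub-K residuals;
# note that part of the shallow trough is absorbed by the polynomial,
# the generic risk the record discusses for real antenna responses.
```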

  2. An application of commercial data averaging techniques in pulsed photothermal experiments

    International Nuclear Information System (INIS)

    Grozescu, I.V.; Moksin, M.M.; Wahab, Z.A.; Yunus, W.M.M.

    1997-01-01

We present an application of a data averaging technique commonly implemented in many commercial digital oscilloscopes and waveform digitizers. The technique was used for transient data averaging in pulsed photothermal radiometry experiments. Photothermal signals are accompanied by a significant amount of noise, which affects the precision of the measurements. The effect of the noise level on the photothermal signal parameters (in our particular case, the fitted decay time) is shown. The results of the analysis can be used in choosing the most effective averaging technique and estimating the averaging parameter values. This helps to reduce the data acquisition time while improving the signal-to-noise ratio.
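The rule behind such averaging is that amplitude SNR grows as the square root of the number of sweeps, so 2^12 sweeps give roughly 36 dB in power, matching the digital-averager specification quoted earlier in this listing. This can be checked numerically (the toy transient and noise level are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 512
transient = np.exp(-np.arange(n) / 60.0)   # noiseless photothermal decay (toy)
sigma = 1.0                                # per-sweep noise standard deviation

def snr_db_after(n_sweeps):
    """Power SNR of the averaged transient after n_sweeps acquisitions."""
    acc = np.zeros(n)
    for _ in range(n_sweeps):
        acc += transient + sigma * rng.standard_normal(n)
    avg = acc / n_sweeps
    noise = avg - transient
    return 10.0 * np.log10(np.sum(transient ** 2) / np.sum(noise ** 2))

gain = snr_db_after(256) - snr_db_after(1)  # expect about 10*log10(256) ~ 24 dB
```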

  3. Noise Reduction for Nonlinear Nonstationary Time Series Data using Averaging Intrinsic Mode Function

    Directory of Open Access Journals (Sweden)

    Christofer Toumazou

    2013-07-01

A novel noise filtering algorithm based on the averaging Intrinsic Mode Function (aIMF), a derivative of Empirical Mode Decomposition (EMD), is proposed to remove white Gaussian noise from foreign currency exchange rates, which are nonlinear nonstationary time series signals. Noise patterns with different amplitudes and frequencies were randomly mixed into the five exchange rates. A number of filters, namely the Extended Kalman Filter (EKF), Wavelet Transform (WT), Particle Filter (PF), and the averaging Intrinsic Mode Function (aIMF) algorithm, were used to compare filtering and smoothing performance. The aIMF algorithm demonstrated the highest noise reduction among these filters.

  4. Improved performance of high average power semiconductor arrays for applications in diode pumped solid state lasers

    International Nuclear Information System (INIS)

    Beach, R.; Emanuel, M.; Benett, W.; Freitas, B.; Ciarlo, D.; Carlson, N.; Sutton, S.; Skidmore, J.; Solarz, R.

    1994-01-01

    The average power performance capability of semiconductor diode laser arrays has improved dramatically over the past several years. These performance improvements, combined with cost reductions pursued by LLNL and others in the fabrication and packaging of diode lasers, have continued to reduce the price per average watt of laser diode radiation. Presently, we are at the point where the manufacturers of commercial high average power solid state laser systems used in material processing applications can now seriously consider the replacement of their flashlamp pumps with laser diode pump sources. Additionally, a low cost technique developed and demonstrated at LLNL for optically conditioning the output radiation of diode laser arrays has enabled a new and scalable average power diode-end-pumping architecture that can be simply implemented in diode pumped solid state laser systems (DPSSL's). This development allows the high average power DPSSL designer to look beyond the Nd ion for the first time. Along with high average power DPSSL's which are appropriate for material processing applications, low and intermediate average power DPSSL's are now realizable at low enough costs to be attractive for use in many medical, electronic, and lithographic applications

  5. PEAK-TO-AVERAGE POWER RATIO REDUCTION USING CODING AND HYBRID TECHNIQUES FOR OFDM SYSTEM

    OpenAIRE

    Bahubali K. Shiragapur; Uday Wali

    2016-01-01

In this article, error correction coding techniques are investigated as a means to reduce the undesirable Peak-to-Average Power Ratio (PAPR). The Golay code (24, 12), Reed-Muller code (16, 11), Hamming code (7, 4), and a hybrid technique (a combination of signal scrambling and signal distortion) proposed by us are used as the coding techniques; the simulation results show that the hybrid technique reduces PAPR significantly as compared to Conve...
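The PAPR metric that these coding schemes try to reduce is easy to compute for a toy OFDM block (QPSK subcarriers through an IFFT); the reduction schemes themselves, Golay, Reed-Muller, Hamming, and the hybrid method, are not reproduced here:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a baseband block, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(4)
N = 64                                              # subcarriers (assumed)
qpsk = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
tx = np.fft.ifft(qpsk) * np.sqrt(N)                 # unit-average-power OFDM symbol
papr = papr_db(tx)                                  # typically ~7-11 dB for N = 64
```

The occasional constructive alignment of many subcarriers is what produces the large peaks; coding restricts the transmitted sequences to a subset with better peak behavior.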

  6. Warning Signals for Poor Performance Improve Human-Robot Interaction

    NARCIS (Netherlands)

    van den Brule, Rik; Bijlstra, Gijsbert; Dotsch, Ron; Haselager, Pim; Wigboldus, Daniel HJ

    2016-01-01

    The present research was aimed at investigating whether human-robot interaction (HRI) can be improved by a robot’s nonverbal warning signals. Ideally, when a robot signals that it cannot guarantee good performance, people could take preventive actions to ensure the successful completion of the

  7. Effect of tank geometry on its average performance

    Science.gov (United States)

    Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.

    2018-03-01

The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. Calculations are given of the average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls, depending on their "height : radius" ratio, as well as the average productivity, filling degree, and filling time of a horizontally ribbed tank with volume 6×10⁻² m³ as the central hole diameter of the ribs is changed. It has been shown that the growth of the "height : radius" ratio in tanks with smooth inner walls up to the limiting values allows significantly increasing tank average productivity and reducing filling time. Growth of the H/R ratio of a tank with volume 1.0 m³ to the limiting values (in comparison with the standard tank having H/R equal to 3.49) augments tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that maximum average productivity and minimum filling time are reached for the tank with volume 6×10⁻² m³ having a central hole diameter of the horizontal ribs of 6.4×10⁻² m.

  8. Novel relations between the ergodic capacity and the average bit error rate

    KAUST Repository

    Yilmaz, Ferkan

    2011-11-01

Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems. As such, recent scientific research and studies have revealed the strong impact of these two performance indicators on the design and implementation of wireless technologies. However, and to the best of our knowledge, direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.

  9. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics, which may be caused by some faults such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation, and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction of rotating machinery.
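Classical TDA, the comb filter that FTDA generalizes, amounts to cutting the signal into integer-length periods and averaging them; the sketch below shows that baseline only (FTDA's harmonic adjustment and CZT step are not reproduced):

```python
import numpy as np

def time_domain_average(sig, period):
    """Classical TDA: stack integer-length periods and average them,
    attenuating everything not synchronous with the chosen period."""
    m = sig.size // period
    return sig[:m * period].reshape(m, period).mean(axis=0)

rng = np.random.default_rng(5)
period = 100
t = np.arange(100 * period)                       # 100 revolutions (toy)
clean = np.sin(2 * np.pi * t / period) + 0.5 * np.sin(6 * np.pi * t / period)
noisy = clean + 1.0 * rng.standard_normal(t.size)

avg = time_domain_average(noisy, period)          # one averaged revolution
target = clean[:period]
```

Averaging 100 revolutions suppresses the asynchronous noise by roughly a factor of ten in amplitude, recovering the periodic waveform.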

  10. Documenting Student Performance: An Alternative to the Traditional Calculation of Grade Point Averages

    Science.gov (United States)

    Volwerk, Johannes J.; Tindal, Gerald

    2012-01-01

    Traditionally, students in secondary and postsecondary education have grade point averages (GPA) calculated, and a cumulative GPA computed to summarize overall performance at their institutions. GPAs are used for acknowledgement and awards, as partial evidence for admission to other institutions (colleges and universities), and for awarding…
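For reference, the traditional calculation that the article proposes an alternative to is a credit-weighted average of grade points; a minimal sketch on an assumed 4.0 scale:

```python
def cumulative_gpa(courses):
    """courses: iterable of (grade_points, credits) tuples."""
    total_points = sum(g * c for g, c in courses)
    total_credits = sum(c for _, c in courses)
    return total_points / total_credits

# e.g. an A (4.0, 3 credits), a B (3.0, 4 credits), an A- (3.7, 3 credits)
gpa = cumulative_gpa([(4.0, 3), (3.0, 4), (3.7, 3)])  # -> 3.51
```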

  11. Beamforming using subspace estimation from a diagonally averaged sample covariance.

    Science.gov (United States)

    Quijano, Jorge E; Zurk, Lisa M

    2017-08-01

    The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
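The diagonal-averaging (Toeplitz-constraint) step described above can be sketched as a projection of the sample covariance onto Hermitian Toeplitz form; the maximum-entropy extrapolation step is omitted, and the array size, snapshot count, and source parameters below are illustrative:

```python
import numpy as np

def toeplitz_average(S):
    """Project a Hermitian sample covariance onto Hermitian Toeplitz form
    by averaging along its subdiagonals."""
    n = S.shape[0]
    r = np.array([np.diagonal(S, k).mean() for k in range(n)])  # per-lag means
    idx = np.subtract.outer(np.arange(n), np.arange(n))         # i - j
    return np.where(idx <= 0, r[np.abs(idx)], np.conj(r[np.abs(idx)]))

rng = np.random.default_rng(6)
n, snaps = 16, 8                                    # more sensors than snapshots
v = np.exp(2j * np.pi * 0.1 * np.arange(n))         # far-field steering vector
R_true = np.outer(v, v.conj()) + np.eye(n)          # one target + isotropic noise (Toeplitz)

# Draw limited snapshots and form the noisy, non-Toeplitz sample covariance.
Lc = np.linalg.cholesky(R_true)
W = (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps))) / np.sqrt(2)
X = Lc @ W
S = X @ X.conj().T / snaps

T = toeplitz_average(S)
err_sample = np.linalg.norm(S - R_true)
err_toeplitz = np.linalg.norm(T - R_true)           # smaller: projection helps
```

Because the true covariance of far-field targets in isotropic noise is itself Toeplitz, the projection can only move the estimate closer to it in Frobenius norm, which is the intuition behind the snapshot-starved improvement the record reports.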

  12. Average bit error probability of binary coherent signaling over generalized fading channels subject to additive generalized gaussian noise

    KAUST Repository

    Soury, Hamza

    2012-06-01

    This letter considers the average bit error probability of binary coherent signaling over flat fading channels subject to additive generalized Gaussian noise. More specifically, a generic closed form expression in terms of Fox's H function is offered for the extended generalized-K fading case. Simplifications for some special fading distributions such as generalized-K fading and Nakagami-m fading and special additive noise distributions such as Gaussian and Laplacian noise are then presented. Finally, the mathematical formalism is illustrated by some numerical examples verified by computer based simulations for a variety of fading and additive noise parameters. © 2012 IEEE.
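    For the Gaussian special case mentioned in the abstract, the closed form reduces to the familiar complementary-error-function result, which a short Monte Carlo sketch can verify (illustrative only; the fading and generalized Gaussian noise cases of the letter are not simulated here):

```python
import numpy as np
from math import erfc, sqrt

def bpsk_ber_mc(ebno_db, n=200_000, seed=0):
    """Monte Carlo bit error rate of coherent BPSK in AWGN."""
    rng = np.random.default_rng(seed)
    ebno = 10.0 ** (ebno_db / 10.0)
    bits = rng.integers(0, 2, n)
    x = 2.0 * bits - 1.0                              # map {0,1} -> {-1,+1}
    y = x + rng.normal(0.0, sqrt(1.0 / (2.0 * ebno)), n)
    return np.mean((y > 0).astype(int) != bits)

# Closed form for the AWGN special case: Pb = 0.5 * erfc(sqrt(Eb/N0))
ber_sim = bpsk_ber_mc(4.0)
ber_theory = 0.5 * erfc(sqrt(10.0 ** 0.4))
```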

  13. Value of the Signal-Averaged Electrocardiogram in Arrhythmogenic Right Ventricular Cardiomyopathy/Dysplasia

    Science.gov (United States)

    Kamath, Ganesh S.; Zareba, Wojciech; Delaney, Jessica; Koneru, Jayanthi N.; McKenna, William; Gear, Kathleen; Polonsky, Slava; Sherrill, Duane; Bluemke, David; Marcus, Frank; Steinberg, Jonathan S.

    2011-01-01

    Background Arrhythmogenic right ventricular cardiomyopathy/dysplasia (ARVC/D) is an inherited disease causing structural and functional abnormalities of the right ventricle (RV). The presence of late potentials as assessed by the signal averaged electrocardiogram (SAECG) is a minor Task Force criterion. Objective The purpose of this study was to examine the diagnostic and clinical value of the SAECG in a large population of genotyped ARVC/D probands. Methods We compared the SAECGs of 87 ARVC/D probands (age 37 ± 13 years, 47 males) diagnosed as affected or borderline by Task Force criteria without using the SAECG criterion with 103 control subjects. The association of SAECG abnormalities was also correlated with clinical presentation; surface ECG; VT inducibility at electrophysiologic testing; ICD therapy for VT; and RV abnormalities as assessed by cardiac magnetic resonance imaging (cMRI). Results When compared with controls, all 3 components of the SAECG were highly associated with the diagnosis of ARVC/D (p<0.001). These include the filtered QRS duration (fQRSD) (97.8 ± 8.7 msec vs. 119.6 ± 23.8 msec), low amplitude signal (LAS) (24.4 ± 9.2 msec vs. 46.2 ± 23.7 msec) and root mean square amplitude of the last 40 msec of late potentials (RMS-40) (50.4 ± 26.9 µV vs. 27.9 ± 36.3 µV). The sensitivity of using SAECG for diagnosis of ARVC/D was increased from 47% using the established 2 of 3 criteria (i.e. late potentials) to 69% by using a modified criterion of any 1 of the 3 criteria, while maintaining a high specificity of 95%. Abnormal SAECG as defined by this modified criteria was associated with a dilated RV volume and decreased RV ejection fraction detected by cMRI (p<0.05). SAECG abnormalities did not vary with clinical presentation or reliably predict spontaneous or inducible VT, and had limited correlation with ECG findings. Conclusion Using 1 of 3 SAECG criteria contributed to increased sensitivity and specificity for the diagnosis of ARVC/D. This

  14. Integrating angle-frequency domain synchronous averaging technique with feature extraction for gear fault diagnosis

    Science.gov (United States)

    Zhang, Shengli; Tang, J.

    2018-01-01

    Gear fault diagnosis relies heavily on the scrutiny of vibration responses measured. In reality, gear vibration signals are noisy and dominated by meshing frequencies as well as their harmonics, which oftentimes overlay the fault related components. Moreover, many gear transmission systems, e.g., those in wind turbines, constantly operate under non-stationary conditions. To reduce the influences of non-synchronous components and noise, a fault signature enhancement method that is built upon angle-frequency domain synchronous averaging is developed in this paper. Instead of being averaged in the time domain, the signals are processed in the angle-frequency domain to solve the issue of phase shifts between signal segments due to uncertainties caused by clearances, input disturbances, and sampling errors, etc. The enhanced results are then analyzed through feature extraction algorithms to identify the most distinct features for fault classification and identification. Specifically, Kernel Principal Component Analysis (KPCA) targeting at nonlinearity, Multilinear Principal Component Analysis (MPCA) targeting at high dimensionality, and Locally Linear Embedding (LLE) targeting at local similarity among the enhanced data are employed and compared to yield insights. Numerical and experimental investigations are performed, and the results reveal the effectiveness of angle-frequency domain synchronous averaging in enabling feature extraction and classification.
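    The core of the synchronous-averaging idea can be sketched as below: each revolution (delimited by tachometer pulse times) is resampled onto a uniform shaft-angle grid before averaging, so shaft-synchronous components reinforce while noise and non-synchronous components cancel. This is a simplified sketch under assumed inputs; the paper's angle-frequency processing and phase-shift handling are not reproduced:

```python
import numpy as np

def angle_synchronous_average(signal, t, rev_times, samples_per_rev=256):
    """Resample each revolution onto a uniform angle grid, then average."""
    angles = np.linspace(0.0, 1.0, samples_per_rev, endpoint=False)
    revs = [np.interp(t0 + angles * (t1 - t0), t, signal)
            for t0, t1 in zip(rev_times[:-1], rev_times[1:])]
    return np.mean(revs, axis=0)
```

Even with a varying shaft speed, the angle grid tracks the rotation, which is what makes the approach suitable for non-stationary operation.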

  15. Determination of the Average Native Background and the Light-Induced EPR Signals and their Variation in the Teeth Enamel Based on Large-Scale Survey of the Population

    International Nuclear Information System (INIS)

    Ivannikov, Alexander I.; Khailov, Artem M.; Orlenko, Sergey P.; Skvortsov, Valeri G.; Stepanenko, Valeri F.; Zhumadilov, Kassym Sh.; Williams, Benjamin B.; Flood, Ann B.; Swartz, Harold M.

    2016-01-01

    The aim of the study is to determine the average intensity and variation of the native background signal amplitude (NSA) and of the solar light-induced signal amplitude (LSA) in electron paramagnetic resonance (EPR) spectra of tooth enamel for different kinds of teeth and different groups of people. These values are necessary for determination of the intensity of the radiation-induced signal amplitude (RSA) by subtraction of the expected NSA and LSA from the total signal amplitude measured in L-band for in vivo EPR dosimetry. Variation of these signals should be taken into account when estimating the uncertainty of the estimated RSA. A new analysis of several hundred EPR spectra that were measured earlier at X-band in a large-scale examination of the population of Central Russia was performed. Based on this analysis, the average values and the variation (standard deviation, SD) of the amplitude of the NSA for the teeth from different positions, as well as LSA in outer enamel of the front teeth for different population groups, were determined. To convert data acquired at X-band to values corresponding to the conditions of measurement at L-band, the experimental dependencies of the intensities of the RSA, LSA and NSA on the m.w. power, measured at both X- and L-band, were analysed. For the two central upper incisors, which are mainly used in in vivo dosimetry, the mean LSA annual rate induced only in the outer side enamel and its variation were obtained as 10 ± 2 (SD = 8) mGy y⁻¹, the same for X- and L-bands (results are presented as the mean ± error of mean). Mean NSA in enamel and its variation for the upper incisors was calculated at 2.0 ± 0.2 (SD = 0.5) Gy, relative to the calibrated RSA dose-response to gamma radiation measured under non-power saturation conditions at X-band. Assuming the same value for L-band under non-power saturating conditions, then for in vivo measurements at L-band at 25 mW (power saturation conditions), a mean NSA and its

  16. Energy performance certification as a signal of workplace quality

    International Nuclear Information System (INIS)

    Parkinson, Aidan; De Jong, Robert; Cooke, Alison; Guthrie, Peter

    2013-01-01

    Energy performance labelling and certification have been introduced widely to address market failures affecting the uptake of energy efficient technologies, by providing a signal to support decision making during contracting processes. The UK has recently introduced the Energy Performance Certificate (EPC) as a signal of building energy performance. The aims of this article are: to evaluate whether EPCs are valid signals of occupier satisfaction with office facilities; and to understand whether occupant attitudes towards environmental issues have affected commercial office rental values. This was achieved by surveying occupant satisfaction with their workplaces holistically using a novel multi-item rating scale which gathered 204 responses. Responses to this satisfaction scale were matched with the corresponding EPC and rental value of occupiers' workplaces. The satisfaction scale was found to be both a reliable and valid measure. The analysis found that the EPC asset rating correlates significantly with occupant satisfaction across all facility attributes. Therefore, EPC ratings may be considered valid signals of overall facility satisfaction within the survey sample. Rental value was found to correlate significantly only with facility aesthetics. No evidence suggests rental value has been affected by occupants' perceptions towards the environmental impact of facilities. - Highlights: • A novel, internally consistent, and valid measure of office facility satisfaction. • EPCs found to be a valid signal of overall facility satisfaction. • Historic rental value found to be an invalid measure of overall facility satisfaction. • No evidence suggests rental value has been affected by occupants' perceptions towards the environmental impact of facilities. • Occupants with stronger ties to landlords found to be more satisfied with office facilities.

  17. Optical Performance Monitoring and Signal Optimization in Optical Networks

    DEFF Research Database (Denmark)

    Petersen, Martin Nordal

    2006-01-01

    The thesis studies performance monitoring for the next generation optical networks. The focus is on all-optical networks with bit-rates of 10 Gb/s or above. Next generation all-optical networks offer large challenges as the optical transmitted distance increases and the occurrence of electrical-optical-electrical regeneration points decreases. This thesis evaluates the impact of signal degrading effects that are becoming of increasing concern in all-optical high-speed networks due to all-optical switching and higher bit-rates. Especially group-velocity-dispersion (GVD) and a number of nonlinear effects will require enhanced attention to avoid signal degradations. The requirements for optical performance monitoring features are discussed, and the thesis evaluates the advantages and necessity of increasing the level of performance monitoring parameters in the physical layer. In particular, methods for optical...

  18. Correlations between PANCE performance, physician assistant program grade point average, and selection criteria.

    Science.gov (United States)

    Brown, Gina; Imel, Brittany; Nelson, Alyssa; Hale, LaDonna S; Jansen, Nick

    2013-01-01

    The purpose of this study was to examine correlations between first-time Physician Assistant National Certifying Exam (PANCE) scores and pass/fail status, physician assistant (PA) program didactic grade point average (GPA), and specific selection criteria. This retrospective study evaluated graduating classes from 2007, 2008, and 2009 at a single program (N = 119). There was no correlation between PANCE performance and undergraduate grade point average (GPA), science prerequisite GPA, or health care experience. There was a moderate correlation between PANCE pass/fail and where students took science prerequisites (r = 0.27, P = .003) but not with the PANCE score. PANCE scores were correlated with overall PA program GPA (r = 0.67), PA pharmacology grade (r = 0.68), and PA anatomy grade (r = 0.41) but not with PANCE pass/fail. Correlations between selection criteria and PANCE performance were limited, but further research regarding the influence of prerequisite institution type may be warranted and may improve admission decisions. PANCE scores and PA program GPA correlations may guide academic advising and remediation decisions for current students.

  19. Improved Multiscale Entropy Technique with Nearest-Neighbor Moving-Average Kernel for Nonlinear and Nonstationary Short-Time Biomedical Signal Analysis

    Directory of Open Access Journals (Sweden)

    S. P. Arunachalam

    2018-01-01

    Analysis of biomedical signals can yield invaluable information for prognosis, diagnosis, therapy evaluation, risk assessment, and disease prevention, which is often recorded as short time series data that challenges existing complexity classification algorithms such as Shannon entropy (SE) and other techniques. The purpose of this study was to improve the previously developed multiscale entropy (MSE) technique by incorporating a nearest-neighbor moving-average kernel, which can be used for analysis of nonlinear and non-stationary short time series physiological data. The approach was tested for robustness with respect to noise analysis using simulated sinusoidal and ECG waveforms. Feasibility of MSE to discriminate between normal sinus rhythm (NSR) and atrial fibrillation (AF) was tested on a single-lead ECG. In addition, the MSE algorithm was applied to identify pivot points of rotors that were induced in ex vivo isolated rabbit hearts. The improved MSE technique robustly estimated the complexity of the signal compared to that of SE with various noises, discriminated NSR and AF on single-lead ECG, and precisely identified the pivot points of ex vivo rotors by providing better contrast between the rotor core and the peripheral region. The improved MSE technique can provide efficient complexity analysis of a variety of nonlinear and nonstationary short-time biomedical signals.
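    A compact sketch of the idea: sample entropy applied to moving-average coarse-grainings of the series. The parameter choices and function names are illustrative, not the authors' implementation:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """-log of the conditional probability that sequences matching for
    m points (Chebyshev distance within r*std) also match for m+1."""
    x = np.asarray(x, float)
    tol = r * np.std(x)
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templ)):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += np.sum(d <= tol)
        return c
    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def mse_moving_average(x, scales=(1, 2, 3), m=2, r=0.2):
    """Multiscale entropy with a moving-average kernel instead of the usual
    non-overlapping coarse-graining, keeping more points from short series."""
    return [sample_entropy(np.convolve(x, np.ones(s) / s, mode='valid'), m, r)
            for s in scales]
```

As a sanity check, white noise should score higher than a regular waveform such as a sine at scale 1.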

  20. Digital storage of repeated signals

    International Nuclear Information System (INIS)

    Prozorov, S.P.

    1984-01-01

    An independent digital storage system designed for discriminating repeated signals from background noise is described. Signal averaging is performed off-line in the real-time mode by means of multiple acquisitions of the investigated signal and integration at each point. Digital values are added in a simple summator and the result is recorded in a storage device with a capacity of 1024×20 bits, from where it can be output to an oscillograph or a plotter, or transmitted to a computer for subsequent processing. The described storage is a simple and reliable device, on the basis of which systems for nuclear magnetic resonance signal acquisition in various experiments have been developed.
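    The S/N gain quoted in the head record above (36 dB for 2¹² sweeps) follows from the √N noise reduction of point-by-point summation, since 20·log₁₀(√4096) ≈ 36 dB. A minimal sketch with synthetic data (the decay waveform and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.exp(-np.linspace(0.0, 5.0, 256))         # repeatable decay, e.g. an FID
sweeps = signal + rng.normal(0.0, 1.0, (4096, 256))  # 2**12 noisy repetitions
avg = sweeps.mean(axis=0)                            # point-by-point summation + scaling

noise_after = np.std(avg - signal)                   # residual noise, ~1/sqrt(4096)
gain_db = 20.0 * np.log10(1.0 / noise_after)         # close to 36 dB
```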

  1. Light-cone averaging in cosmology: formalism and applications

    International Nuclear Information System (INIS)

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-01-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ''geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ''redshift drift'' in a generic inhomogeneous Universe

  2. Modified parity space averaging approaches for online cross-calibration of redundant sensors in nuclear reactors

    Directory of Open Access Journals (Sweden)

    Moath Kassim

    2018-05-01

    To maintain safety and reliability of reactors, redundant sensors are usually used to measure critical variables and estimate their averaged time-dependency. Nonhealthy sensors can badly influence the estimation result of the process variable. Since online condition monitoring was introduced, the online cross-calibration method has been widely used to detect any anomaly of sensor readings among the redundant group. The cross-calibration method has four main averaging techniques: simple averaging, band averaging, weighted averaging, and parity space averaging (PSA). PSA is used to weigh redundant signals based on their error bounds and their band consistency. Using the consistency weighting factor (C), PSA assigns more weight to consistent signals that have shared bands, based on how many bands they share, and gives inconsistent signals very low weight. In this article, three approaches are introduced for improving the PSA technique: the first is to add another consistency factor, so-called trend consistency (TC), to include a consideration of preserving any characteristic edge that reflects the behavior of the equipment/component measured by the process parameter; the second approach proposes replacing the error bound/accuracy based weighting factor (Wa) with a weighting factor based on the Euclidean distance (Wd); and the third approach proposes applying Wd, TC, and C all together. Cold neutron source data sets of four redundant hydrogen pressure transmitters from a research reactor were used to perform the validation and verification. Results showed that the second and third modified approaches lead to reasonable improvement of the PSA technique. All approaches implemented in this study were similar in that they have the capability to (1) identify and isolate a drifted sensor that should undergo calibration, (2) identify a faulty sensor(s) due to a long and continuous missing data range, and (3) identify a healthy sensor.
Keywords: Nuclear Reactors
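    A toy illustration of consistency weighting for redundant sensors, in the spirit of the PSA factor C described above. The band value, weighting rule, and function name are our simplifications, not the article's formulation:

```python
import numpy as np

def weighted_average(readings, band=0.5):
    """Weight each sensor by how many readings (including its own)
    fall within its consistency band, then average."""
    readings = np.asarray(readings, float)
    weights = np.array([np.sum(np.abs(readings - r) <= band) for r in readings])
    return np.sum(weights * readings) / np.sum(weights)
```

An outlying sensor (e.g. one drifting away from three consistent ones) thus receives a low weight instead of pulling the estimate as in simple averaging.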

  3. Fast optical signal not detected in awake behaving monkeys.

    Science.gov (United States)

    Radhakrishnan, Harsha; Vanduffel, Wim; Deng, Hong Ping; Ekstrom, Leeland; Boas, David A; Franceschini, Maria Angela

    2009-04-01

    While the ability of near-infrared spectroscopy (NIRS) to measure cerebral hemodynamic evoked responses (slow optical signal) is well established, its ability to measure non-invasively the 'fast optical signal' is still controversial. Here, we aim to determine the feasibility of performing NIRS measurements of the 'fast optical signal' or Event-Related Optical Signals (EROS) under optimal experimental conditions in awake behaving macaque monkeys. These monkeys were implanted with a 'recording well' to expose the dura above the primary visual cortex (V1). A custom-made optical probe was inserted and fixed into the well. The close proximity of the probe to the brain maximized the sensitivity to changes in optical properties in the cortex. Motion artifacts were minimized by physical restraint of the head. Full-field contrast-reversing checkerboard stimuli were presented to monkeys trained to perform a visual fixation task. In separate sessions, two NIRS systems (CW4 and ISS FD oximeter), which previously showed the ability to measure the fast signal in human, were used. In some sessions EEG was acquired simultaneously with the optical signal. The increased sensitivity to cortical optical changes with our experimental setup was quantified with 3D Monte Carlo simulations on a segmented MRI monkey head. Averages of thousands of stimuli in the same animal, or grand averages across the two animals and across repeated sessions, did not lead to detection of the fast optical signal using either amplitude or phase of the optical signal. Hemodynamic responses and visual evoked potentials were instead always detected with single trials or averages of a few stimuli. Based on these negative results, despite the optimal experimental conditions, we doubt the usefulness of non-invasive fast optical signal measurements with NIRS.

  4. on the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ..... Global Journal of Mathematics and Statistics. Vol. 1. ... Business and Economic Research Center.

  5. Determination of the Average Native Background and the Light-Induced EPR Signals and their Variation in the Teeth Enamel Based on Large-Scale Survey of the Population.

    Science.gov (United States)

    Ivannikov, Alexander I; Khailov, Artem M; Orlenko, Sergey P; Skvortsov, Valeri G; Stepanenko, Valeri F; Zhumadilov, Kassym Sh; Williams, Benjamin B; Flood, Ann B; Swartz, Harold M

    2016-12-01

    The aim of the study is to determine the average intensity and variation of the native background signal amplitude (NSA) and of the solar light-induced signal amplitude (LSA) in electron paramagnetic resonance (EPR) spectra of tooth enamel for different kinds of teeth and different groups of people. These values are necessary for determination of the intensity of the radiation-induced signal amplitude (RSA) by subtraction of the expected NSA and LSA from the total signal amplitude measured in L-band for in vivo EPR dosimetry. Variation of these signals should be taken into account when estimating the uncertainty of the estimated RSA. A new analysis of several hundred EPR spectra that were measured earlier at X-band in a large-scale examination of the population of Central Russia was performed. Based on this analysis, the average values and the variation (standard deviation, SD) of the amplitude of the NSA for the teeth from different positions, as well as LSA in outer enamel of the front teeth for different population groups, were determined. To convert data acquired at X-band to values corresponding to the conditions of measurement at L-band, the experimental dependencies of the intensities of the RSA, LSA and NSA on the m.w. power, measured at both X- and L-band, were analysed. For the two central upper incisors, which are mainly used in in vivo dosimetry, the mean LSA annual rate induced only in the outer side enamel and its variation were obtained as 10 ± 2 (SD = 8) mGy y⁻¹, the same for X- and L-bands (results are presented as the mean ± error of mean). Mean NSA in enamel and its variation for the upper incisors was calculated at 2.0 ± 0.2 (SD = 0.5) Gy, relative to the calibrated RSA dose-response to gamma radiation measured under non-power saturation conditions at X-band. Assuming the same value for L-band under non-power saturating conditions, then for in vivo measurements at L-band at 25 mW (power saturation conditions), a mean NSA and its

  6. Motor current signature analysis for gearbox condition monitoring under transient speeds using wavelet analysis and dual-level time synchronous averaging

    Science.gov (United States)

    Bravo-Imaz, Inaki; Davari Ardakani, Hossein; Liu, Zongchang; García-Arribas, Alfredo; Arnaiz, Aitor; Lee, Jay

    2017-09-01

    This paper focuses on analyzing the motor current signature for fault diagnosis of gearboxes operating under transient speed regimes. Two different strategies are evaluated, extensively tested and compared for analyzing the motor current signature in order to implement a condition monitoring system for gearboxes in industrial machinery. A specially designed test bench, thoroughly monitored to fully characterize the experiments, is used to test gears in different health states. The measured signals are analyzed using discrete wavelet decomposition at different decomposition levels with a range of mother wavelets. Moreover, a dual-level time synchronous averaging analysis is performed on the same signals to compare the performance of the two methods. From both analyses, the relevant features of the signals are extracted and cataloged using a self-organizing map, which allows easy detection and classification of the diverse health states of the gears. The results demonstrate the effectiveness of both methods for diagnosing gearbox faults, with a slightly better performance observed for the dual-level time synchronous averaging method. Based on the obtained results, the proposed methods can be used as effective and reliable procedures for gearbox condition monitoring using only the motor current signature.
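    The wavelet feature extraction step can be sketched with a hand-rolled Haar DWT. The actual study sweeps several mother wavelets and adds dual-level time synchronous averaging; the level count, names, and energy feature below are illustrative:

```python
import numpy as np

def haar_dwt(x, levels=3):
    """Multi-level Haar DWT: detail coefficients per level plus the
    final approximation."""
    x = np.asarray(x, float)
    details = []
    for _ in range(levels):
        x = x[:len(x) // 2 * 2]               # drop a trailing odd sample
        a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation (low-pass)
        d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail (high-pass)
        details.append(d)
        x = a
    return details, x

def band_energies(x, levels=3):
    """Relative energy per decomposition band, a common fault feature."""
    details, approx = haar_dwt(x, levels)
    e = np.array([np.sum(d ** 2) for d in details] + [np.sum(approx ** 2)])
    return e / e.sum()
```

Because the Haar transform is orthonormal, the coefficient energies sum to the signal energy, which makes the relative band energies a well-normalized feature vector.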

  7. Outage performance of cognitive radio systems with Improper Gaussian signaling

    KAUST Repository

    Amin, Osama

    2015-06-14

    Improper Gaussian signaling has proved its ability to improve the achievable rate of systems that suffer from interference compared with proper Gaussian signaling. In this paper, we first study the impact of improper Gaussian signaling on the performance of the cognitive radio system by analyzing the outage probability of both the primary user (PU) and the secondary user (SU). We derive an exact expression for the SU outage probability and upper and lower bounds for the PU outage probability. Then, we design the SU signal by adjusting its transmitted power and circularity coefficient to minimize the SU outage probability while maintaining a certain PU quality-of-service. Finally, we evaluate the proposed bounds and adaptive algorithms by numerical results.

  8. Acoustic/seismic signal propagation and sensor performance modeling

    Science.gov (United States)

    Wilson, D. Keith; Marlin, David H.; Mackay, Sean

    2007-04-01

    Performance, optimal employment, and interpretation of data from acoustic and seismic sensors depend strongly and in complex ways on the environment in which they operate. Software tools for guiding non-expert users of acoustic and seismic sensors are therefore much needed. However, such tools require that many individual components be constructed and correctly connected together. These components include the source signature and directionality, representation of the atmospheric and terrain environment, calculation of the signal propagation, characterization of the sensor response, and mimicking of the data processing at the sensor. Selection of an appropriate signal propagation model is particularly important, as there are significant trade-offs between output fidelity and computation speed. Attenuation of signal energy, random fading, and (for array systems) variations in wavefront angle-of-arrival should all be considered. Characterization of the complex operational environment is often the weak link in sensor modeling: important issues for acoustic and seismic modeling activities include the temporal/spatial resolution of the atmospheric data, knowledge of the surface and subsurface terrain properties, and representation of ambient background noise and vibrations. Design of software tools that address these challenges is illustrated with two examples: a detailed target-to-sensor calculation application called the Sensor Performance Evaluator for Battlefield Environments (SPEBE) and a GIS-embedded approach called Battlefield Terrain Reasoning and Awareness (BTRA).

  9. Comparison of two different high performance mixed signal controllers for DC/DC converters

    DEFF Research Database (Denmark)

    Jakobsen, Lars Tønnes; Andersen, Michael Andreas E.

    2006-01-01

    This paper describes how mixed signal controllers combining a cheap microcontroller with a simple analogue circuit can offer high performance digital control for DC/DC converters. Mixed signal controllers have the same versatility and performance as DSP based controllers. It is important to have an engineer experienced in microcontroller programming write the software algorithms to achieve optimal performance. Two mixed signal controller designs based on the same 8-bit microcontroller are compared both theoretically and experimentally. A 16-bit PID compensator with a sampling frequency of 200 kHz implemented in the 16 MIPS, 8-bit ATTiny26 microcontroller is demonstrated.
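    A velocity-form PID difference equation of the kind such a compensator evaluates each sample can be sketched as follows. Floating point is used here for clarity; a real 8-bit MCU implementation would use 16-bit fixed-point arithmetic, and the gains are illustrative:

```python
def make_pid(kp, ki, kd, ts):
    """Velocity-form PID: u[k] = u[k-1] + a0*e[k] + a1*e[k-1] + a2*e[k-2]."""
    a0 = kp + ki * ts + kd / ts
    a1 = -kp - 2.0 * kd / ts
    a2 = kd / ts
    state = {"u": 0.0, "e1": 0.0, "e2": 0.0}

    def step(e):
        state["u"] += a0 * e + a1 * state["e1"] + a2 * state["e2"]
        state["e2"], state["e1"] = state["e1"], e
        return state["u"]

    return step
```

Under a constant error e, successive outputs eventually increase by exactly ki·ts·e per sample, which is the expected integral action.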

  10. Use and Protection of GPS Sidelobe Signals for Enhanced Navigation Performance in High Earth Orbit

    Science.gov (United States)

    Parker, Joel J. K.; Valdez, Jennifer E.; Bauer, Frank H.; Moreau, Michael C.

    2016-01-01

    The GPS (Global Positioning System) Space Service Volume (SSV) signal environment spans altitudes from 3,000 to 36,000 kilometers. Current SSV specifications only capture the performance provided by signals transmitted within 23.5° (L1) or 26° (L2-L5) off-nadir angle. Recent on-orbit data and lessons learned show significant PNT (Positioning, Navigation and Timing) performance improvements when the full aggregate signal is used. Numerous military and civil operational missions in high Earth orbit/geosynchronous Earth orbit (HEO/GEO) utilize the full signal to enhance vehicle PNT performance.

  11. Design of excitation signals for active system monitoring in a performance assessment setup

    DEFF Research Database (Denmark)

    Green, Torben; Izadi-Zamanabadi, Roozbeh; Niemann, Hans Henrik

    2011-01-01

    This paper investigates how the excitation signal should be chosen for an active performance assessment setup. The signal is used in a setup whose main purpose is to detect whether a parameter change of the controller has changed the global performance significantly. The signal has to be able to excite the dynamics of the subsystem under investigation both before and after the parameter change. The controller is well known, but there exists no detailed knowledge about the dynamics of the subsystem.

  12. Tracking Neuronal Connectivity from Electric Brain Signals to Predict Performance.

    Science.gov (United States)

    Vecchio, Fabrizio; Miraglia, Francesca; Rossini, Paolo Maria

    2018-05-01

    The human brain is a complex container of interconnected networks. Network neuroscience is a recent venture aiming to explore the connection matrix built from the human brain or human "Connectome." Network-based algorithms provide parameters that define global organization of the brain; when they are applied to electroencephalographic (EEG) signals network, configuration and excitability can be monitored in millisecond time frames, providing remarkable information on their instantaneous efficacy also for a given task's performance via online evaluation of the underlying instantaneous networks before, during, and after the task. Here we provide an updated summary on the connectome analysis for the prediction of performance via the study of task-related dynamics of brain network organization from EEG signals.

  13. MOL-Eye: A New Metric for the Performance Evaluation of a Molecular Signal

    OpenAIRE

    Turan, Meric; Kuran, Mehmet Sukru; Yilmaz, H. Birkan; Chae, Chan-Byoung; Tugcu, Tuna

    2017-01-01

    Inspired by the eye diagram in classical radio frequency (RF) based communications, the MOL-Eye diagram is proposed for the performance evaluation of a molecular signal within the context of molecular communication. Utilizing various features of this diagram, three new metrics for the performance evaluation of a molecular signal, namely the maximum eye height, standard deviation of received molecules, and counting SNR (CSNR) are introduced. The applicability of these performance metrics in th...

  14. Performance Improvement of Power Analysis Attacks on AES with Encryption-Related Signals

    Science.gov (United States)

    Lee, You-Seok; Lee, Young-Jun; Han, Dong-Guk; Kim, Ho-Won; Kim, Hyoung-Nam

    A power analysis attack is a well-known side-channel attack, but the efficiency of the attack is frequently degraded by the existence of power components irrelevant to the encryption that are included in the signals used for the attack. To enhance the performance of the power analysis attack, we propose a preprocessing method based on extracting the encryption-related parts from the measured power signals. Experimental results show that attacks with the preprocessed signals detect correct keys with much fewer signals, compared to the conventional power analysis attacks.

  15. The speech signal segmentation algorithm using pitch synchronous analysis

    Directory of Open Access Journals (Sweden)

    Amirgaliyev Yedilkhan

    2017-03-01

    Parameterization of the speech signal using analysis algorithms synchronized with the pitch frequency is discussed. Speech parameterization is performed by the average number of zero transitions function and the signal energy function. The parameterization results are used to segment the speech signal and to isolate segments with stable spectral characteristics. The segmentation results can be used to generate a digital voice pattern of a person or can be applied in automatic speech recognition. The stages needed for continuous speech segmentation are described.
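    The two parameterization functions named above (zero-transition rate and short-time energy) can be sketched per frame. Frame and hop sizes are illustrative, and the pitch-synchronous alignment of the paper is not reproduced:

```python
import numpy as np

def frame_features(x, frame=256, hop=128):
    """Average zero-crossing rate and mean energy per analysis frame."""
    zcr, energy = [], []
    for start in range(0, len(x) - frame + 1, hop):
        f = x[start:start + frame]
        zcr.append(np.mean(np.abs(np.diff(np.sign(f))) > 0))
        energy.append(np.mean(f ** 2))
    return np.array(zcr), np.array(energy)
```

Voiced speech typically shows low zero-crossing rate and high energy, unvoiced or noise-like segments the opposite, which is what makes the pair usable for segmentation.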

  16. Measurement of signal-to-noise ratio performance of TV fluoroscopy systems

    International Nuclear Information System (INIS)

    Geluk, R.J.

    1985-01-01

    A method has been developed for direct measurement of Signal-to-Noise ratio performance on X-ray TV systems. To this end the TV signal resulting from a calibrated test object, is compared with the noise level in the image. The method is objective and produces instantaneous readout, which makes it very suitable for system evaluation under dynamic conditions. (author)

  17. Identification of moving vehicle forces on bridge structures via moving average Tikhonov regularization

    Science.gov (United States)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin

    2017-08-01

    Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter into and exit a bridge deck, due to the low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by incorporating moving average concepts. Firstly, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with stable average value (DFS-SAV). Secondly, this signal feature of the DFS-SAV is quantified and introduced to improve the penalty function (‖x‖₂²) defined in the classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed to assess the accuracy and feasibility of the proposed method. The illustrated results show that the moving forces can be accurately identified with strong robustness. Some related issues, such as the selection of the moving window length, the effect of different penalty functions, and the effect of different car speeds, are discussed as well.
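
    A minimal sketch of the idea behind the improved penalty: replace ‖x‖₂² with the deviation of x from its own moving average, so that solutions with a stable average value are favored. The tiny denoising problem, window length, and gradient-descent solver below are illustrative assumptions, not the paper's bridge-vehicle formulation.

```python
import random

def moving_average_matrix(n, half=1):
    """Row-stochastic matrix M whose i-th row averages a short window
    centred on i (the window shrinks at the edges)."""
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        lo, hi = max(0, i - half), min(n - 1, i + half)
        w = 1.0 / (hi - lo + 1)
        for j in range(lo, hi + 1):
            M[i][j] = w
    return M

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def objective(x, b, M, lam):
    r = [xi - bi for xi, bi in zip(x, b)]
    d = [xi - mi for xi, mi in zip(x, matvec(M, x))]   # deviation x - Mx
    return sum(v * v for v in r) + lam * sum(v * v for v in d)

random.seed(0)
n, lam, step = 16, 4.0, 0.02
truth = [1.0] * n                                   # stable average value
b = [t + random.gauss(0.0, 0.3) for t in truth]     # noisy measurements
M = moving_average_matrix(n)
Mt = [list(col) for col in zip(*M)]                 # M transpose

# Gradient descent on ||x - b||^2 + lam * ||(I - M) x||^2.
x = list(b)
J0 = objective(x, b, M, lam)
for _ in range(300):
    d = [xi - mi for xi, mi in zip(x, matvec(M, x))]
    pen = [u - v for u, v in zip(d, matvec(Mt, d))]     # (I - M)^T d
    grad = [2 * (xi - bi) + 2 * lam * g for xi, bi, g in zip(x, b, pen)]
    x = [xi - step * gi for xi, gi in zip(x, grad)]
J1 = objective(x, b, M, lam)
print(J0, J1)
```

    With the classical penalty ‖x‖₂², the regularizer pulls the solution toward zero at the entry and exit zones; penalizing ‖(I − M)x‖₂² instead pulls it toward its local average, which matches the DFS-SAV assumption.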

  18. Signals, systems, transforms, and digital signal processing with Matlab

    CERN Document Server

    Corinthios, Michael

    2009-01-01

    Continuous-Time and Discrete-Time Signals and SystemsIntroductionContinuous-Time SignalsPeriodic FunctionsUnit Step FunctionGraphical Representation of FunctionsEven and Odd Parts of a FunctionDirac-Delta ImpulseBasic Properties of the Dirac-Delta ImpulseOther Important Properties of the ImpulseContinuous-Time SystemsCausality, StabilityExamples of Electrical Continuous-Time SystemsMechanical SystemsTransfer Function and Frequency ResponseConvolution and CorrelationA Right-Sided and a Left-Sided FunctionConvolution with an Impulse and Its DerivativesAdditional Convolution PropertiesCorrelation FunctionProperties of the Correlation FunctionGraphical InterpretationCorrelation of Periodic FunctionsAverage, Energy and Power of Continuous-Time SignalsDiscrete-Time SignalsPeriodicityDifference EquationsEven/Odd DecompositionAverage Value, Energy and Power SequencesCausality, StabilityProblemsAnswers to Selected ProblemsFourier Series ExpansionTrigonometric Fourier SeriesExponential Fourier SeriesExponential versus ...

  19. On the construction of a time base and the elimination of averaging errors in proxy records

    Science.gov (United States)

    Beelaerts, V.; De Ridder, F.; Bauwens, M.; Schmitz, N.; Pintelon, R.

    2009-04-01

    Proxies are sources of climate information which are stored in natural archives (e.g. ice cores, sediment layers on ocean floors and animals with calcareous marine skeletons). Measuring these proxies produces very short records and mostly involves sampling solid substrates, which is subject to the following two problems. Problem 1: Natural archives are sampled on an equidistant grid along their accretion axis. Starting from these distance series, a time series needs to be constructed, as comparison of different data records is only meaningful on a time grid. The time series will be non-equidistant, as the accretion rate is non-constant. Problem 2: A typical example of sampling solid substrates is drilling. Because of the dimensions of the drill, the holes drilled will not be infinitesimally small. Consequently, samples are not taken at a point in distance, but rather over a volume in distance. This holds for most sampling methods in solid substrates. As a consequence, when the continuous proxy signal is sampled, it will be averaged over the volume of the sample, resulting in an underestimation of the amplitude. Whether this averaging effect is significant depends on the volume of the sample and the variations of interest in the proxy signal. Starting from the measured signal, the continuous signal needs to be reconstructed in order to eliminate these averaging errors. The aim is to provide an efficient identification algorithm to identify the non-linearities in the distance-time relationship, called time base distortions, and to correct for the averaging effects. Because this is a parametric method, an assumption about the proxy signal needs to be made: the proxy record on a time base is assumed to be harmonic, a reasonable assumption because natural archives often exhibit a seasonal cycle. In a first approach, the averaging effects are assumed to be in one direction only, i.e. the direction of the axis on which the measurements were performed.

  20. Fleet average NOx emission performance of 2004 model year light-duty vehicles, light-duty trucks and medium-duty passenger vehicles

    International Nuclear Information System (INIS)

    2006-05-01

    The On-Road Vehicle and Engine Emission Regulations came into effect on January 1, 2004. The regulations introduced more stringent national emission standards for on-road vehicles and engines, and also required that companies submit reports containing information concerning their fleets. This report presented a summary of the regulatory requirements relating to fleet average emissions of nitrogen oxides (NOx) for light-duty vehicles, light-duty trucks, and medium-duty passenger vehicles under the new regulations. The effectiveness of the Canadian fleet average NOx emission program at achieving environmental performance objectives was also evaluated. A summary of the fleet average NOx emission performance of individual companies was presented, as well as the overall Canadian fleet average for the 2004 model year based on data submitted by companies in their end-of-model-year reports. A total of 21 companies submitted reports covering 2004 model year vehicles in 10 test groups, comprising 1,350,719 vehicles of the 2004 model year manufactured or imported for the purpose of sale in Canada. The average NOx value for the entire Canadian LDV/LDT fleet was 0.2016463 grams per mile. The average NOx value for the entire Canadian HLDT/MDPV fleet was 0.321976 grams per mile. It was concluded that the NOx values for both fleets were consistent with the environmental performance objectives of the regulations for the 2004 model year. 9 tabs
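
    A fleet average of this kind is a sales-weighted mean over test groups. The sketch below uses hypothetical group sizes and certification levels, not the report's data.

```python
# Hypothetical test groups: (vehicles sold, certified NOx level in g/mi).
groups = [(400_000, 0.07), (250_000, 0.20), (100_000, 0.40)]

total_vehicles = sum(n for n, _ in groups)
fleet_avg_nox = sum(n * nox for n, nox in groups) / total_vehicles
print(total_vehicles, round(fleet_avg_nox, 4))
```

    Because the average is weighted by sales volume, a company can offset a small number of high-emitting vehicles with a large volume of clean ones.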

  1. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. This work applies the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of delivering the noise improvement of hardware averaging for neural recording with cuff electrodes, and can accommodate the high source impedances that are associated with miniaturized contacts and high channel counts in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
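
    The 1/√N noise reduction from averaging N amplifiers in parallel (valid when their noise is independent) can be checked with a quick simulation; the noise level and trial count below are arbitrary.

```python
import random
import statistics

random.seed(42)
TRIALS, SIGMA = 20_000, 1.0

def averaged_noise_std(n_amps):
    """Std of the mean of n_amps independent amplifier noise samples."""
    means = []
    for _ in range(TRIALS):
        samples = [random.gauss(0.0, SIGMA) for _ in range(n_amps)]
        means.append(sum(samples) / n_amps)
    return statistics.pstdev(means)

std1 = averaged_noise_std(1)
std8 = averaged_noise_std(8)
print(std1, std8)   # the N = 8 figure should sit near std1 / sqrt(8)
```

    The abstract's "or less" caveat reflects correlated noise: any noise common to the channels (e.g. from a shared high source resistance) is not reduced by averaging, which is why the measured reduction saturates.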

  2. Candidates Profile in FUVEST Exams from 2004 to 2013: Private and Public School Distribution, FUVEST Average Performance and Chemical Equilibrium Tasks Performance

    Directory of Open Access Journals (Sweden)

    R.S.A.P. Oliveira

    2014-08-01

    Full Text Available INTRODUCTION: Chemical equilibrium is recognized as a topic of several misconceptions. Its origins must be tracked from previous scholarship. Its impact on biochemistry learning is not fully described. A possible bulk of data is the FUVEST exam. OBJECTIVES: Identify students' error profiles on chemical equilibrium tasks using public data from the FUVEST exam. MATERIAL AND METHODS: The data analyzed from FUVEST were: (i) private and public school distribution, in Elementary and Middle School and in High School, of candidates for the Pharmacy-Biochemistry course and all USP careers up to the last call for enrollment (2004-2013); (ii) average performance in the 1st and 2nd parts of the FUVEST exam for the Pharmacy-Biochemistry, Chemistry, Engineering, Biological Sciences, Languages and Medicine courses and all enrolled candidates up to the 1st call for enrollment (2008-2013); (iii) performance of candidates for the Pharmacy-Biochemistry, Chemistry, Engineering, Biological Sciences, Languages and Medicine courses and all USP careers on chemical equilibrium issues from the 1st part of FUVEST (2011-2013). RESULTS AND DISCUSSION: (i) 66.2% of candidates came from private Elementary-Middle School courses and 71.8% came from private High School courses; (ii) average grades over the period for the 1st and 2nd FUVEST parts were, respectively (in 100 points): Pharmacy-Biochemistry 66.7 and 61.2, Chemistry 65.9 and 58.9, Engineering 75.9 and 71.9, Biological Sciences 65.6 and 54.6, Languages 49.9 and 43.3, Medicine 83.5 and 79.5, total enrolled candidates 51.5 and 48.9; (iii) four chemical equilibrium issues were found during 2011-2013, and the analysis of the multiple-choice percentage distribution over the courses showed a similar performance of students among them, except for Engineering and Medicine, with higher grades but the same proportional distribution among choices. CONCLUSION: Approved students came mostly from private schools. There was a different average performance among courses and a similar performance on the chemical equilibrium issues.

  3. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
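
    A minimal sketch of a migration-velocity-adaptive moving average: the averaging window widens with the sample index, standing in for the observation above that later-migrating (lower-mobility) analytes produce lower-frequency peaks that tolerate wider windows. The window-growth rule is an illustrative assumption, not the authors' algorithm.

```python
import random
import statistics

def adaptive_moving_average(signal, base_win=3, growth=0.02):
    """Moving average whose half-window widens with the sample index,
    a stand-in for the dependence on migration time."""
    out = []
    for i in range(len(signal)):
        half = int(base_win + growth * i)
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        window = signal[lo:hi]
        out.append(sum(window) / len(window))
    return out

random.seed(3)
baseline = [random.gauss(0.0, 0.5) for _ in range(400)]   # noisy flat baseline
smoothed = adaptive_moving_average(baseline)
print(statistics.pstdev(baseline), statistics.pstdev(smoothed))
```

    The design point is the same as in the abstract: a fixed window that is optimal for late, broad peaks would distort early, sharp ones, so the window must adapt to migration velocity.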

  4. A comparative study between a simplified Kalman filter and Sliding Window Averaging for single trial dynamical estimation of event-related potentials

    DEFF Research Database (Denmark)

    Vedel-Larsen, Esben; Fuglø, Jacob; Channir, Fouad

    2010-01-01

    The latency and amplitude of the P300 component are variable and depend on cognitive function. This study compares the performance of a simplified Kalman filter with Sliding Window Averaging in tracking dynamical changes in single trial P300. The comparison is performed on simulated P300 data with added background noise consisting of both simulated and real background EEG at various input signal-to-noise ratios. While both methods can be applied to track dynamical changes, the simplified Kalman filter has an advantage over Sliding Window Averaging, most notably better noise suppression when both are optimized for faster-changing latency and amplitude.
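
    A simplified scalar Kalman filter of the kind compared here fits in a few lines, next to a plain sliding-window average; the random-walk state model and all noise settings below are illustrative assumptions, not the study's simulation parameters.

```python
import random

def kalman_track(obs, q=0.01, r=1.0):
    """Simplified scalar Kalman filter with a random-walk state model
    (q: process noise variance, r: observation noise variance)."""
    x, p = obs[0], 1.0
    est = []
    for z in obs:
        p += q               # predict: the amplitude may have drifted
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # correct with the new single-trial observation
        p *= 1 - k
        est.append(x)
    return est

def sliding_window(obs, win=10):
    """Plain sliding-window average over the last `win` trials."""
    return [sum(obs[max(0, i - win + 1):i + 1]) / min(i + 1, win)
            for i in range(len(obs))]

random.seed(7)
truth = [5.0 + 0.02 * t for t in range(300)]    # slowly drifting amplitude
obs = [v + random.gauss(0.0, 1.0) for v in truth]

def mse(est):
    return sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth)

raw_mse = mse(obs)
kf_mse = mse(kalman_track(obs))
sw_mse = mse(sliding_window(obs))
print(raw_mse, kf_mse, sw_mse)
```

    The trade-off the study examines appears directly in the parameters: a larger q (or a shorter window) follows fast changes with less lag but suppresses less noise.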

  5. Neural network and wavelet average framing percentage energy for atrial fibrillation classification.

    Science.gov (United States)

    Daqrouq, K; Alkhateeb, A; Ajour, M N; Morfeq, A

    2014-03-01

    ECG signals are an important source of information in the diagnosis of atrial conduction pathology. Nevertheless, diagnosis by visual inspection is a difficult task. This work introduces a novel wavelet feature extraction method for atrial fibrillation derived from the average framing percentage energy (AFE) of terminal wavelet packet transform (WPT) sub-signals. A probabilistic neural network (PNN) is used for classification. The presented method is shown to be a potentially effective discriminator in an automated diagnostic process. The ECG signals taken from the MIT-BIH database are used to classify different arrhythmias together with normal ECG. Several published methods were investigated for comparison, and the best recognition rate was obtained with AFE. The classification achieved an accuracy of 97.92%. The presented system was also analyzed in an additive white Gaussian noise (AWGN) environment, achieving 55.14% at 0 dB and 92.53% at 5 dB. It was concluded that the proposed approach to automating classification is worth pursuing with larger samples to validate and extend the present study. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  6. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    National Research Council Canada - National Science Library

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...

  7. Ambiguity towards Multiple Historical Performance Information Signals: Evidence from Indonesian Open-Ended Mutual Fund Investors

    Directory of Open Access Journals (Sweden)

    Haris Pratama Loeis

    2015-10-01

    Full Text Available This study focuses on the behavior of open-ended mutual fund investors when confronted with multiple information signals of a mutual fund's historical performance. The behavior of investors is reflected in their decision to subscribe to or redeem their funds from mutual funds. Moreover, we observe the presence of ambiguity within investors due to multiple information signals, and also their reaction towards it. Our finding shows that open-ended mutual fund investors are sensitive not only to past performance information signals, but also to the ambiguity of multiple information signals. Because of the presence of ambiguity, investors give more consideration to negative information signals and to the worst information signal in their investment decisions.

  8. Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.

    Directory of Open Access Journals (Sweden)

    Jacinta Chan Phooi M'ng; Rozaimah Zainudin

    Full Text Available The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA'), in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different-length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.
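
    The abstract does not reproduce the AMA' formula itself. The sketch below uses Kaufman's adaptive moving average, a well-known indicator of the same family, in which an efficiency ratio (net move over total move) plays the role the Efficacy Ratio plays here, steering the smoothing constant between fast (trending) and slow (range-trading) settings. This is a stand-in, not the authors' exact construction.

```python
def adaptive_ma(prices, er_win=10, fast=2, slow=30):
    """Kaufman-style adaptive moving average.  The efficiency ratio
    (|net change| / sum of |bar-to-bar changes|) is near 1 in trends and
    near 0 in range trading, and shifts the smoothing constant between a
    fast and a slow EMA setting, damping whipsaws in sideways markets."""
    out = [prices[0]]
    for i in range(1, len(prices)):
        lo = max(0, i - er_win)
        net = abs(prices[i] - prices[lo])
        total = sum(abs(prices[j] - prices[j - 1]) for j in range(lo + 1, i + 1))
        er = net / total if total else 0.0
        fast_sc, slow_sc = 2 / (fast + 1), 2 / (slow + 1)
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
        out.append(out[-1] + sc * (prices[i] - out[-1]))
    return out

# In a steady trend the efficiency ratio is 1, so the average hugs the price.
trend = [100 + 0.5 * t for t in range(60)]
ama = adaptive_ma(trend)
print(trend[-1] - ama[-1])   # small residual lag behind the trending price
```

    In choppy data the same code slows down automatically (er near 0 gives the slow smoothing constant), which is the whipsaw-avoidance behavior the abstract attributes to the Efficacy Ratio.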

  10. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Science.gov (United States)

    Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien

    2018-04-01

    The CNES/DLR MERLIN satellite mission aims at measuring methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging biases issues and suggests correction algorithms tested on realistic simulated scenes.
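
    The averaging bias arises from pushing an average through the non-linear (logarithmic) IPDA retrieval: by Jensen's inequality, averaging noisy received powers before taking the log differs from averaging the log-domain values. A toy demonstration, with arbitrary units and Gaussian shot noise assumed:

```python
import math
import random

random.seed(5)
SHOTS = 50_000
P_MEAN, P_STD = 1.0, 0.15    # noisy received pulse energy, arbitrary units

powers = [random.gauss(P_MEAN, P_STD) for _ in range(SHOTS)]

# Jensen's inequality for the concave log: E[log P] < log E[P],
# so the two averaging orders disagree by roughly sigma^2 / (2 mu^2).
log_of_mean = math.log(sum(powers) / SHOTS)              # average, then log
mean_of_log = sum(math.log(p) for p in powers) / SHOTS   # log, then average
print(log_of_mean, mean_of_log)
```

    For MERLIN the discrepancy translates into an XCH4 bias after 50 km averaging, which is why a dedicated correction algorithm is needed rather than naive shot accumulation.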

  11. Total Quality Management (TQM) Practices and School Climate amongst High, Average and Low Performance Secondary Schools

    Science.gov (United States)

    Ismail, Siti Noor

    2014-01-01

    Purpose: This study attempted to determine whether the dimensions of TQM practices are predictors of school climate. It aimed to identify the level of TQM practices and school climate in three different categories of schools, namely high, average and low performance schools. The study also sought to examine which dimensions of TQM practices…

  12. Detection of auditory signals in quiet and noisy backgrounds while performing a visuo-spatial task

    Directory of Open Access Journals (Sweden)

    Vishakha W Rawool

    2016-01-01

    Full Text Available Context: The ability to detect important auditory signals while performing visual tasks may be further compounded by background chatter. Thus, it is important to know how task performance may interact with background chatter to hinder signal detection. Aim: To examine any interactive effects of speech-spectrum noise and task performance on the ability to detect signals. Settings and Design: The setting was a sound-treated booth. A repeated measures design was used. Materials and Methods: Auditory thresholds of 20 normal adults were determined at 0.5, 1, 2 and 4 kHz in the following conditions, presented in a random order: (1) quiet with attention; (2) quiet with a visuo-spatial task or puzzle (distraction); (3) noise with attention; and (4) noise with task. Statistical Analysis: Multivariate analyses of variance (MANOVA) with three repeated factors (quiet versus noise, visuo-spatial task versus no task, signal frequency). Results: MANOVA revealed significant main effects for noise and signal frequency and significant noise-frequency and task-frequency interactions. Distraction caused by performing the task worsened the thresholds for tones presented at the beginning of the experiment and had no effect on tones presented in the middle. At the end of the experiment, thresholds (4 kHz) were better while performing the task than those obtained without performing the task. These effects were similar across the quiet and noise conditions. Conclusion: Detection of auditory signals is difficult at the beginning of a distracting visuo-spatial task, but over time, task learning and auditory training effects can nullify the effect of distraction and may improve detection of high-frequency sounds.

  13. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.

  14. On the Performance of Optical Wireless Links over Random Foggy Channels

    KAUST Repository

    Esmail, Maged Abdullah; Fathallah, Habib; Alouini, Mohamed-Slim

    2017-01-01

    Fog and dust have long been considered major performance-degrading factors for free-space optic (FSO) communication links. Despite the number of field measurements performed in foggy environments during the last decades, most of the proposed channel attenuation models are deterministic, i.e., they assume the channel attenuation is constant over time; the stochastic behavior of the channel is still understudied. In this work, we investigate the probabilistic behavior of the FSO channel in fog and develop a new statistical model for the signal attenuation. Moreover, we derive a probability distribution function (PDF) for the channel state. Using this PDF, we study the FSO system performance considering various metrics, including average signal-to-noise ratio (SNR), average bit error rate, channel capacity, and probability of outage. Closed-form expressions are derived for the average SNR and the outage probability. We found acceptable performance with moderate and light fog; however, under thick and dense fog, the system performance deteriorates severely. Finally, we derived closed-form expressions for the average attenuation-distance product and the link availability that will potentially be very helpful for network design and planning.
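
    The paper derives closed-form expressions; purely as an illustration of the outage-probability metric, it can also be estimated by Monte Carlo once an attenuation distribution is assumed. The lognormal attenuation coefficient and the link budget below are assumptions for the sketch, not the paper's fitted model.

```python
import math
import random

random.seed(11)
N = 50_000
LINK_KM = 1.0
CLEAR_SNR_DB = 40.0     # link SNR with no fog attenuation (illustrative)

def outage_probability(snr_threshold_db, mu=2.0, sigma=0.5):
    """Estimate P(SNR < threshold) when the fog attenuation coefficient
    (dB/km) is lognormal with underlying normal parameters mu, sigma."""
    hits = 0
    for _ in range(N):
        alpha_db_per_km = math.exp(random.gauss(mu, sigma))
        snr_db = CLEAR_SNR_DB - alpha_db_per_km * LINK_KM
        if snr_db < snr_threshold_db:
            hits += 1
    return hits / N

p10 = outage_probability(10.0)
p20 = outage_probability(20.0)
print(p10, p20)
```

    Raising the required SNR threshold (or thickening the fog via mu and sigma) increases the outage probability, which is the qualitative behavior the closed-form expressions capture exactly.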

  16. Monitoring and predicting cognitive state and performance via physiological correlates of neuronal signals.

    Science.gov (United States)

    Russo, Michael B; Stetz, Melba C; Thomas, Maria L

    2005-07-01

    Judgment, decision making, and situational awareness are higher-order mental abilities critically important to operational cognitive performance. Higher-order mental abilities rely on intact functioning of multiple brain regions, including the prefrontal, thalamus, and parietal areas. Real-time monitoring of individuals for cognitive performance capacity via an approach based on sampling multiple neurophysiologic signals and integrating those signals with performance prediction models potentially provides a method of supporting warfighters' and commanders' decision making and other operationally relevant mental processes and is consistent with the goals of augmented cognition. Cognitive neurophysiological assessments that directly measure brain function and subsequent cognition include positron emission tomography, functional magnetic resonance imaging, mass spectroscopy, near-infrared spectroscopy, magnetoencephalography, and electroencephalography (EEG); however, most direct measures are not practical to use in operational environments. More practical, albeit indirect measures that are generated by, but removed from the actual neural sources, are movement activity, oculometrics, heart rate, and voice stress signals. The goal of the papers in this section is to describe advances in selected direct and indirect cognitive neurophysiologic monitoring techniques as applied for the ultimate purpose of preventing operational performance failures. These papers present data acquired in a wide variety of environments, including laboratory, simulator, and clinical arenas. The papers discuss cognitive neurophysiologic measures such as digital signal processing wrist-mounted actigraphy; oculometrics including blinks, saccadic eye movements, pupillary movements, the pupil light reflex; and high-frequency EEG. These neurophysiological indices are related to cognitive performance as measured through standard test batteries and simulators with conditions including sleep loss

  17. Underlay Cognitive Radio Systems with Improper Gaussian Signaling: Outage Performance Analysis

    KAUST Repository

    Amin, Osama; Abediseid, Walid; Alouini, Mohamed-Slim

    2016-03-29

    Improper Gaussian signaling has the ability over proper (conventional) Gaussian signaling to improve the achievable rate of systems that suffer from interference. In this paper, we study the impact of using improper Gaussian signaling on the performance limits of the underlay cognitive radio system by analyzing the achievable outage probability of both the primary user (PU) and secondary user (SU). We derive the exact outage probability expression of the SU and construct upper and lower bounds of the PU outage probability which results in formulating an approximate expression of the PU outage probability. This allows us to design the SU signal by adjusting its transmitted power and the circularity coefficient to minimize the SU outage probability while maintaining a certain PU quality-of-service. Finally, we evaluate the derived expressions for both the SU and the PU and the corresponding adaptive algorithms by numerical results.

  19. PEAK-TO-AVERAGE POWER RATIO REDUCTION USING CODING AND HYBRID TECHNIQUES FOR OFDM SYSTEM

    Directory of Open Access Journals (Sweden)

    Bahubali K. Shiragapur

    2016-03-01

    Full Text Available In this article, error correction coding techniques are investigated as a means of reducing the undesirable Peak-to-Average Power Ratio (PAPR). The Golay code (24, 12), Reed-Muller code (16, 11), Hamming code (7, 4) and a hybrid technique (a combination of signal scrambling and signal distortion) proposed by us are used as the coding techniques; the simulation results show that the hybrid technique reduces PAPR significantly compared to the conventional and modified selective mapping techniques. The simulation results are validated through statistical properties: the proposed technique's autocorrelation value is maximal, indicating a reduction in PAPR. Symbol preference based on Hamming distance is the key idea used to reduce PAPR. The simulation results are discussed in detail in this article.
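As background for the PAPR quantity this record targets, the ratio is computed from the peak and mean power of the time-domain OFDM symbol obtained by an (oversampled) IFFT of the subcarrier values. A minimal numpy sketch; the subcarrier count, oversampling factor and QPSK mapping are illustrative assumptions, not the article's settings:

```python
import numpy as np

def papr_db(freq_symbols, oversample=4):
    """Peak-to-average power ratio (in dB) of one OFDM symbol.

    freq_symbols: complex subcarrier values; the time-domain signal is
    obtained by an oversampled IFFT, as in a standard OFDM transmitter.
    """
    n = len(freq_symbols)
    # Zero-pad in the middle of the spectrum to approximate the continuous envelope.
    padded = np.concatenate([freq_symbols[:n // 2],
                             np.zeros((oversample - 1) * n, dtype=complex),
                             freq_symbols[n // 2:]])
    x = np.fft.ifft(padded)
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
print(round(papr_db(qpsk), 2))  # random QPSK symbols typically land around 8-11 dB
```

A frequency-domain impulse (only one active subcarrier) gives a constant-envelope time signal, i.e. 0 dB PAPR, which is a quick sanity check of the function.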

  20. Analysis of the finescale timing of repeated signals: does shell rapping in hermit crabs signal stamina?

    Science.gov (United States)

    Briffa; Elwood

    2000-01-01

    Hermit crabs, Pagurus bernhardus, sometimes exchange shells after a period of shell rapping, when the initiating or attacking crab brings its shell rapidly and repeatedly into contact with the shell of the noninitiator or defender in a series of bouts. Bouts are separated by pauses, and raps within bouts are separated by very short periods called 'gaps'. Since within-contest variation is missed when signals are studied by averaging performance rates over entire contests, we analysed the fine within-bout structure of this repeated, aggressive signal. We found that the pattern is consistent with high levels of fatigue in initiators. The duration of the gaps between individual raps increased both within bouts and from bout to bout, and we conclude that this activity is costly to perform. Furthermore, longer pauses between bouts were correlated with increased vigour of rapping in the subsequent bout, which suggests that pauses allow for recovery from the fatigue induced by rapping. These between-bout pauses may be assessed by noninitiators and provide a signal of stamina. Copyright 2000 The Association for the Study of Animal Behaviour.

  1. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Directory of Open Access Journals (Sweden)

    Tellier Yoann

    2018-01-01

    Full Text Available The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to reach a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The biases induced by the non-linear IPDA lidar equation are not compliant with the accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.
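The averaging bias discussed here stems from applying a non-linear retrieval to noisy, averaged signals. A toy numeric illustration (not MERLIN's actual processing chain) using a generic log-ratio retrieval shows the effect, which is Jensen's inequality at work; the noise model and numbers are assumptions for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy IPDA-style retrieval: optical depth from the log of a power ratio.
# True ratio r0, observed with multiplicative noise on each shot.
r0 = 0.5
shots = r0 * (1 + 0.2 * rng.standard_normal(100_000))
shots = shots[shots > 0]  # keep the logarithm well defined

tau_avg_then_log = -np.log(shots.mean())   # average the signals first, then retrieve
tau_log_then_avg = -np.log(shots).mean()   # retrieve shot by shot, then average

print(tau_avg_then_log, tau_log_then_avg)
# Jensen's inequality: E[-log X] >= -log E[X], so the shot-by-shot
# estimate is biased high relative to the signal-averaged one.
```

Averaging the raw signals before the non-linear step removes most of this bias, which is why the processing order (and its residual bias correction) matters for the stated accuracy budget.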

  2. ECG signal performance de-noising assessment based on threshold tuning of dual-tree wavelet transform.

    Science.gov (United States)

    El B'charri, Oussama; Latif, Rachid; Elmansouri, Khalifa; Abenaou, Abdenbi; Jenkal, Wissam

    2017-02-07

    Since the electrocardiogram (ECG) signal has a low frequency and a weak amplitude, it is sensitive to miscellaneous mixed noises, which may reduce the diagnostic accuracy and hinder the physician's correct decisions on patients. The dual-tree wavelet transform (DT-WT) is one of the most recent enhanced versions of the discrete wavelet transform. However, threshold tuning of this method for noise removal from the ECG signal has not yet been investigated. In this work, we provide a comprehensive study of the impact of the choice of threshold algorithm, threshold value, and the appropriate wavelet decomposition level on ECG signal de-noising performance. A set of simulations is performed on both synthetic and real ECG signals. First, the synthetic ECG signal is used to observe the algorithm's response. The evaluation on synthetic ECG signals corrupted by various types of noise showed that the modified unified threshold and the wavelet hyperbolic threshold de-noising methods perform better under realistic and colored noises. The tuned threshold is then used on real ECG signals from the MIT-BIH database. The results show that the proposed method achieves higher performance than the ordinary dual-tree wavelet transform for all kinds of noise removal from the ECG signal. The simulation results indicate that the algorithm is robust to all kinds of noise with varying degrees of input noise, providing a high-quality clean signal. Moreover, the algorithm is quite simple and can be used in real-time ECG monitoring.
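The threshold-tuning idea studied here can be illustrated without the dual-tree transform itself. A minimal numpy sketch of soft and hard thresholding with a universal (VisuShrink-style) threshold, applied directly to synthetic coefficients; this is a generic illustration of wavelet-domain thresholding, not the authors' DT-WT pipeline:

```python
import numpy as np

def soft_threshold(c, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def hard_threshold(c, t):
    """Hard thresholding: zero out coefficients below t, keep the rest."""
    return np.where(np.abs(c) >= t, c, 0.0)

def universal_threshold(coeffs):
    """Universal threshold sigma*sqrt(2*log N), with a robust MAD noise estimate."""
    sigma = np.median(np.abs(coeffs)) / 0.6745
    return sigma * np.sqrt(2 * np.log(len(coeffs)))

rng = np.random.default_rng(2)
coeffs = 0.1 * rng.standard_normal(1024)   # noise-only detail coefficients
coeffs[100] = 2.0                          # one "real" signal coefficient
t = universal_threshold(coeffs)
den = soft_threshold(coeffs, t)
print(np.count_nonzero(den), den[100])     # the large coefficient survives the threshold
```

In a full de-noising chain, the same thresholding would be applied to the detail coefficients of each decomposition level before the inverse transform; the study's contribution is in choosing the threshold rule, value and level for the DT-WT.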

  3. Alpha neurofeedback training improves SSVEP-based BCI performance

    Science.gov (United States)

    Wan, Feng; Nuno da Cruz, Janir; Nan, Wenya; Wong, Chi Man; Vai, Mang I.; Rosa, Agostinho

    2016-06-01

    Objective. Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) can provide relatively easy, reliable and high-speed communication. However, the performance is still not satisfactory, especially in users who are not able to generate strong enough SSVEP signals. This work aims to strengthen a user's SSVEP by alpha down-regulating neurofeedback training (NFT) and consequently improve the user's performance with SSVEP-based BCIs. Approach. An experiment with two steps was designed and conducted. The first step was to investigate the relationship between resting alpha activity and SSVEP-based BCI performance, in order to determine the training parameter for the NFT. In the second step, half of the subjects with 'low' performance (i.e. BCI classification accuracy <80%) were randomly assigned to an NFT group to perform real-time NFT, and the other half to a non-NFT control group for comparison. Main results. The first step revealed a significant negative correlation between BCI performance and individual alpha band (IAB) amplitudes in the eyes-open resting condition in a total of 33 subjects. In the second step, it was found that during the IAB down-regulating NFT, the subjects were on average able to successfully decrease their IAB amplitude over the training sessions. More importantly, the NFT group showed an average increase of 16.5% in the SSVEP signal SNR (signal-to-noise ratio) and an average increase of 20.3% in BCI classification accuracy, both significant compared to the non-NFT control group. Significance. These findings indicate that alpha down-regulating NFT can be used to improve SSVEP signal quality and subjects' performance with SSVEP-based BCIs. It could be helpful to SSVEP-related studies and would contribute to more effective SSVEP-based BCI applications.

  4. Direct measurement of fast transients by using boot-strapped waveform averaging

    Science.gov (United States)

    Olsson, Mattias; Edman, Fredrik; Karki, Khadga Jung

    2018-03-01

    An approximation to coherent sampling, also known as boot-strapped waveform averaging, is presented. The method uses digital cavities to determine the condition for coherent sampling. It can be used to increase the effective sampling rate of a repetitive signal and its signal-to-noise ratio simultaneously. The method is demonstrated by using it to directly measure the fluorescence lifetime of Rhodamine 6G by digitizing the signal from a fast avalanche photodiode. The obtained lifetime of 4.0 ns is in agreement with the known values.
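Boot-strapped averaging and the digital signal averagers described elsewhere in these records rely on the same statistic: averaging N synchronized sweeps of a repetitive signal reduces the noise power by a factor of N, i.e. improves SNR by 10*log10(N) dB. A minimal numpy sketch (the waveform, noise level and sweep count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 500)
clean = np.exp(-t / 0.2)            # a repetitive transient, e.g. a fluorescence decay

def snr_db(waveform):
    """SNR of a recovered waveform against the known clean template."""
    noise = waveform - clean
    return 10 * np.log10(np.mean(clean ** 2) / np.mean(noise ** 2))

# 4096 noisy sweeps of the same transient, unit-variance additive noise.
sweeps = clean + rng.standard_normal((4096, t.size))
one = snr_db(sweeps[0])             # a single sweep
avg = snr_db(sweeps.mean(axis=0))   # the coherent average of all sweeps
print(round(avg - one, 1))          # close to 10*log10(4096), about 36 dB
```

The roughly 36 dB gain for 2^12 sweeps matches the maximum S/N improvement quoted for the 256-channel averager in the records above; coherent sampling is what keeps the sweeps aligned so the signal adds coherently while the noise does not.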

  5. A complex symbol signal-to-noise ratio estimator and its performance

    Science.gov (United States)

    Feria, Y.

    1994-01-01

    This article presents an algorithm for estimating the signal-to-noise ratio (SNR) of signals that contain data on a downconverted suppressed carrier or the first harmonic of a square-wave subcarrier. This algorithm can be used to determine the performance of the full-spectrum combiner for the Galileo S-band (2.2- to 2.3-GHz) mission by measuring the input and output symbol SNR. A performance analysis of the algorithm shows that the estimator can estimate the complex symbol SNR using 10,000 symbols at a true symbol SNR of -5 dB with a mean of -4.9985 dB and a standard deviation of 0.2454 dB, and these analytical results are checked by simulations of 100 runs with a mean of -5.06 dB and a standard deviation of 0.2506 dB.
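The article's estimator is specific to suppressed-carrier and square-wave-subcarrier signals; a generic blind estimator in the same spirit is the classical moment-based (M2/M4) SNR estimator for constant-modulus constellations in circular complex Gaussian noise. A sketch under those assumptions (QPSK symbols; the sample size is enlarged beyond the article's 10,000 symbols for a stable illustration):

```python
import numpy as np

def m2m4_snr_db(y):
    """Moment-based (M2/M4) blind SNR estimate in dB.

    Valid for constant-modulus constellations in circular complex Gaussian
    noise, where E|y|^2 = S + N and E|y|^4 = S^2 + 4SN + 2N^2, so that
    S = sqrt(2*M2^2 - M4).
    """
    m2 = np.mean(np.abs(y) ** 2)
    m4 = np.mean(np.abs(y) ** 4)
    s = np.sqrt(max(2 * m2 ** 2 - m4, 1e-12))   # estimated signal power
    n = max(m2 - s, 1e-12)                      # estimated noise power
    return 10 * np.log10(s / n)

rng = np.random.default_rng(4)
n_sym = 100_000
true_snr_db = -5.0                              # the operating point quoted in the article
snr = 10 ** (true_snr_db / 10)
symbols = np.exp(1j * np.pi / 2 * rng.integers(0, 4, n_sym))   # unit-power QPSK
noise = np.sqrt(1 / (2 * snr)) * (rng.standard_normal(n_sym)
                                  + 1j * rng.standard_normal(n_sym))
print(round(m2m4_snr_db(symbols + noise), 2))   # scatters around -5 dB
```

Like the article's estimator, this one degrades at low SNR because the signal power is recovered as a small difference of large moments, which is why the estimator variance there is the quantity worth characterizing.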

  6. Cooperative AF Relaying in Spectrum-Sharing Systems: Performance Analysis under Average Interference Power Constraints and Nakagami-m Fading

    KAUST Repository

    Xia, Minghua; Aissa, Sonia

    2012-01-01

    the optimal end-to-end performance, the transmit powers of the secondary source and the relays are optimized with respect to average interference power constraints at primary users and Nakagami-$m$ fading parameters of interference channels (for mathematical

  7. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), polarization shift keying (POLSK), and coherent optical wireless communication (coherent OWC) systems under different turbulence regimes. To further enhance the ASE, we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA scheme improves ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and achieves an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.

  8. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in a lower RMSD to the native protein than the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed; namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which

  9. Testing VRIN framework: Resource value and rareness as sources of competitive advantage and above average performance

    OpenAIRE

    Talaja, Anita

    2012-01-01

    In this study, structural equation model that analyzes the impact of resource and capability characteristics, more specifically value and rareness, on sustainable competitive advantage and above average performance is developed and empirically tested. According to the VRIN framework, if a company possesses and exploits valuable, rare, inimitable and non-substitutable resources and capabilities, it will achieve sustainable competitive advantage. Although the above mentioned statement is widely...

  10. Suggestibility and signal detection performance in hallucination-prone students.

    Science.gov (United States)

    Alganami, Fatimah; Varese, Filippo; Wagstaff, Graham F; Bentall, Richard P

    2017-03-01

    Auditory hallucinations are associated with signal detection biases. We examine the extent to which suggestions influence performance on a signal detection task (SDT) in highly hallucination-prone and low hallucination-prone students. We also explore the relationship between trait suggestibility, dissociation and hallucination proneness. In two experiments, students completed on-line measures of hallucination proneness (the revised Launay-Slade Hallucination Scale; LSHS-R), trait suggestibility (Inventory of Suggestibility) and dissociation (Dissociative Experiences Scale-II). Students in the upper and lower tertiles of the LSHS-R performed an auditory SDT. Prior to the task, suggestions were made pertaining to the number of expected targets (Experiment 1, N = 60: high vs. low suggestions; Experiment 2, N = 62, no suggestion vs. high suggestion vs. no voice suggestion). Correlational and regression analyses indicated that trait suggestibility and dissociation predicted hallucination proneness. Highly hallucination-prone students showed a higher SDT bias in both studies. In Experiment 1, both bias scores were significantly affected by suggestions to the same degree. In Experiment 2, highly hallucination-prone students were more reactive to the high suggestion condition than the controls. Suggestions may affect source-monitoring judgments, and this effect may be greater in those who have a predisposition towards hallucinatory experiences.

  11. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  12. Study of runaway electrons using the conditional average sampling method in the Damavand tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Pourshahab, B., E-mail: bpourshahab@gmail.com [University of Isfahan, Department of Nuclear Engineering, Faculty of Advance Sciences and Technologies (Iran, Islamic Republic of); Sadighzadeh, A. [Nuclear Science and Technology Research Institute, Plasma Physics and Nuclear Fusion Research School (Iran, Islamic Republic of); Abdi, M. R., E-mail: r.abdi@phys.ui.ac.ir [University of Isfahan, Department of Physics, Faculty of Science (Iran, Islamic Republic of); Rasouli, C. [Nuclear Science and Technology Research Institute, Plasma Physics and Nuclear Fusion Research School (Iran, Islamic Republic of)

    2017-03-15

    Some experiments for studying the runaway electron (RE) effects have been performed using the poloidal magnetic probes system installed around the plasma column in the Damavand tokamak. In these experiments, the so-called runaway-dominated discharges were considered in which the main part of the plasma current is carried by REs. The induced magnetic effects on the poloidal pickup coils signals are observed simultaneously with the Parail–Pogutse instability moments for REs and hard X-ray bursts. The output signals of all diagnostic systems enter the data acquisition system with 2 Msample/(s channel) sampling rate. The temporal evolution of the diagnostic signals is analyzed by the conditional average sampling (CAS) technique. The CASed profiles indicate RE collisions with the high-field-side plasma facing components at the instability moments. The investigation has been carried out for two discharge modes—low-toroidal-field (LTF) and high-toroidal-field (HTF) ones—related to both up and down limits of the toroidal magnetic field in the Damavand tokamak and their comparison has shown that the RE confinement is better in HTF discharges.

  13. Non-sky-averaged sensitivity curves for space-based gravitational-wave observatories

    International Nuclear Information System (INIS)

    Vallisneri, Michele; Galley, Chad R

    2012-01-01

    The signal-to-noise ratio (SNR) is used in gravitational-wave observations as the basic figure of merit for detection confidence and, together with the Fisher matrix, for the amount of physical information that can be extracted from a detected signal. SNRs are usually computed from a sensitivity curve, which describes the gravitational-wave amplitude needed by a monochromatic source of given frequency to achieve a threshold SNR. Although the term 'sensitivity' is used loosely to refer to the detector's noise spectral density, the two quantities are not the same: the sensitivity includes also the frequency- and orientation-dependent response of the detector to gravitational waves and takes into account the duration of observation. For interferometric space-based detectors similar to LISA, which are sensitive to long-lived signals and have constantly changing position and orientation, exact SNRs need to be computed on a source-by-source basis. For convenience, most authors prefer to work with sky-averaged sensitivities, accepting inaccurate SNRs for individual sources and giving up control over the statistical distribution of SNRs for source populations. In this paper, we describe a straightforward end-to-end recipe to compute the non-sky-averaged sensitivity of interferometric space-based detectors of any geometry. This recipe includes the effects of spacecraft motion and of seasonal variations in the partially subtracted confusion foreground from Galactic binaries, and it can be used to generate a sampling distribution of sensitivities for a given source population. In effect, we derive error bars for the sky-averaged sensitivity curve, which provide a stringent statistical interpretation for previously unqualified statements about sky-averaged SNRs. As a worked-out example, we consider isotropic and Galactic-disk populations of monochromatic sources, as observed with the 'classic LISA' configuration. 
We confirm that the (standard) inverse-rms average sensitivity

  14. High Resolution of the ECG Signal by Polynomial Approximation

    Directory of Open Access Journals (Sweden)

    G. Rozinaj

    2006-04-01

    Full Text Available Averaging techniques such as temporal averaging and spatial averaging have been successfully used in many applications for attenuating interference [6], [7], [8], [9], [10]. In this paper we introduce interference removal from the ECG signal by polynomial approximation, with smoothing of discrete dependencies, to complement averaging methods. The method is suitable for low-level signals of the electrical activity of the heart, often less than 10 μV. Most low-level signals arise from the PR, ST and TP segments; these can eventually be detected and their physiologic meaning appreciated. Of special importance for diagnosing the electrical activity of the heart is the activity of the bundle of His between the P and R waveforms. We have inserted an artificial sine wave into the ECG signal between the P and R waves. The main focus is to verify the smoothing method by polynomial approximation when the SNR (signal-to-noise ratio) is negative (i.e. the signal is weaker than the noise).
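The core idea, least-squares polynomial approximation of a noisy low-level segment, can be sketched with numpy; the sine amplitude, noise level and polynomial degree below are illustrative assumptions chosen to give a negative input SNR as in the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# A low-level segment modelled as a slow sine buried in noise (input SNR < 0 dB),
# echoing the paper's artificial sine wave inserted between the P and R waves.
t = np.linspace(0.0, 1.0, 400)
clean = 0.5 * np.sin(2 * np.pi * t)
noisy = clean + 1.0 * rng.standard_normal(t.size)   # noise std exceeds signal amplitude

# Least-squares polynomial approximation of the noisy segment.
coeffs = np.polyfit(t, noisy, deg=5)
smoothed = np.polyval(coeffs, t)

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

print(rmse(noisy, clean), rmse(smoothed, clean))    # smoothing reduces the error
```

The low-order polynomial acts as a smoothness constraint: it cannot follow the broadband noise, so the fitted curve recovers the slow underlying waveform even though the raw SNR is negative.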

  15. Performance analysis of signaling protocols on OBS switches

    Science.gov (United States)

    Kirci, Pinar; Zaim, A. Halim

    2005-10-01

    In this paper, the Just-In-Time (JIT), Just-Enough-Time (JET) and Horizon signalling schemes for Optical Burst Switched (OBS) networks are presented. These signalling schemes run over a core dWDM network, and a network architecture based on Optical Burst Switches (OBS) is proposed to support IP, ATM and burst traffic. In IP and ATM traffic, several packets are assembled into a single packet called a burst, and burst contention is handled by burst dropping. The burst length distribution in IP traffic is arbitrary between 0 and 1, and is fixed in ATM traffic at 0.5. Burst traffic, on the other hand, is arbitrary between 1 and 5. The Setup and Setup ack length distributions are arbitrary. We apply the Poisson model with rate λ and the self-similar model with Pareto-distributed rate α to identify inter-arrival times in these protocols. We consider a communication between a source client node and a destination client node over an ingress switch and one or more intermediate switches. We use buffering only in the ingress node. The communication is based on single-burst connections, in which the connection is set up just before sending a burst and then closed as soon as the burst is sent. Our analysis accounts for several important parameters, including the burst setup, burst setup ack, keepalive messages and the optical switching protocol. We compare the performance of the three signalling schemes in terms of burst dropping probability under a range of network scenarios.

  16. Influence of RZ and NRZ signal format on the high-speed performance of gain-clamped semiconductor optical amplifiers

    DEFF Research Database (Denmark)

    Fjelde, Tina; Wolfson, David; Kloch, Allan

    2000-01-01

    High-speed experiments show that the influence of the limited relaxation frequency of GC-SOAs, which severely degrades the performance for NRZ signals, is reduced by using RZ signals, thus resulting in a higher input power dynamic range.

  17. Quantified moving average strategy of crude oil futures market based on fuzzy logic rules and genetic algorithms

    Science.gov (United States)

    Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing

    2017-09-01

    The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders the timing to buy or sell, the moving average cannot tell the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy, in which fuzzy logic rules are used to determine the strength of the trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and utilize crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experimental data. Each experiment is repeated 20 times. The results show that, firstly, the fuzzy moving average strategy obtains a more stable rate of return than the plain moving average strategies. Secondly, the holding-amount series is highly sensitive to the price series. Thirdly, simple moving average methods are more efficient. Lastly, the fuzzy extents of extremely low, high, and very high are the most popular. These results are helpful in investment decisions.
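The timing part of such a strategy can be sketched as a plain dual moving average crossover; the paper's fuzzy layer would additionally grade how much to trade on each signal. A minimal sketch in which the window lengths and the synthetic price path are assumptions, not the paper's configuration:

```python
import numpy as np

def sma(x, n):
    """Simple moving average (the paper also uses other moving-average types)."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

def crossover_signals(prices, fast=5, slow=20):
    """+1 (buy) where the fast MA crosses above the slow MA, -1 (sell) where
    it crosses below, 0 otherwise. Timing only; a fuzzy layer would map the
    strength of each crossing to a trading volume."""
    s = sma(prices, slow)
    f = sma(prices, fast)[-len(s):]          # align the two series by window end
    above = f > s
    sig = np.zeros(len(s), dtype=int)
    sig[1:][above[1:] & ~above[:-1]] = 1     # fast MA crosses above: buy
    sig[1:][~above[1:] & above[:-1]] = -1    # fast MA crosses below: sell
    return sig

# A price path that trends down, then up: expect a single buy signal after the turn.
prices = np.concatenate([np.linspace(100, 90, 40), np.linspace(90, 105, 40)])
sig = crossover_signals(prices)
print(np.nonzero(sig)[0], sig[np.nonzero(sig)])
```

In the paper's scheme, the genetic algorithm searches over such moving-average parameters and the fuzzy rule set jointly, optimizing the resulting rate of return.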

  18. Asymptotically optimum multialternative sequential procedures for discernment of processes minimizing average length of observations

    Science.gov (United States)

    Fishman, M. M.

    1985-01-01

    The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
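For the two-hypothesis special case, the classical sequential procedure minimizing the average number of observations is Wald's sequential probability ratio test; a minimal sketch as a stand-in for the multialternative procedures discussed here (the Gaussian-mean hypotheses and error targets are assumptions for illustration):

```python
import numpy as np

def sprt_decide(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald's SPRT between two Gaussian means.

    Returns (decision, number_of_samples_used); decision is 0, 1, or None
    if neither threshold is reached within the available samples.
    """
    upper = np.log((1 - beta) / alpha)   # accept H1 at or above this log-likelihood ratio
    lower = np.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for i, x in enumerate(samples, start=1):
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return 1, i
        if llr <= lower:
            return 0, i
    return None, len(samples)

rng = np.random.default_rng(6)
data = rng.normal(1.0, 1.0, 1000)        # H1 is true: mean 1
decision, n_used = sprt_decide(data, mu0=0.0, mu1=1.0, sigma=1.0)
print(decision, n_used)                  # usually decides for H1 well before 1000 samples
```

The comparison made in the record, sequential versus fixed-length procedures, shows up directly here: the expected stopping time is far below the sample budget when the hypotheses are well separated.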

  19. Performance characterization of the IEEE 802.11 signal transmission over a multimode fiber PON

    Science.gov (United States)

    Maksymiuk, L.; Siuzdak, J.

    2014-11-01

    This paper presents measurements analyzing the performance of IEEE 802.11 signal distribution over a multimode-fiber-based passive optical network. Three main sources of impairment are addressed: modal noise, frequency response fluctuations of the multimode fiber, and non-linear distortion of the signal in the receiver.

  20. Task performance changes the amplitude and timing of the BOLD signal

    Directory of Open Access Journals (Sweden)

    Akhrif Atae

    2017-12-01

    Full Text Available Translational studies comparing imaging data of animals and humans have gained increasing scientific interest. With this translational approach, however, harmonized statistical analyses as well as shared data acquisition protocols and/or combined statistical approaches are necessary. Following this idea, we applied Bayesian Adaptive Regression Splines (BARS), which have until now mainly been used to model neural responses in electrophysiological recordings from rodents, to human hemodynamic responses as measured via fMRI. Forty-seven healthy subjects were investigated while performing the Attention Network Task in the MRI scanner. Fluctuations in the amplitude and timing of the BOLD response were determined and validated externally against brain activation using the GLM, and also ecologically against task performance (i.e. good vs. bad performers). In terms of brain activation, bad performers presented reduced activation bilaterally in the parietal lobules, right prefrontal cortex (PFC) and striatum. This was accompanied by enhanced left PFC recruitment. With regard to the amplitude of the BOLD signal, bad performers showed enhanced values in the left PFC. In addition, in the regions of reduced activation, such as the parietal and striatal regions, the temporal dynamics were higher in bad performers. Based on the relation between the BOLD response and neural firing, with the amplitude of the BOLD signal reflecting gamma power and its timing dynamics reflecting beta power, we argue that in bad performers the enhanced left PFC recruitment hints towards enhanced functioning of gamma-band activity in a compensatory manner. This was accompanied by reduced parieto-striatal activity, associated with increased and potentially conflicting beta-band activity.

  1. Design and evaluation of three-level composite filters obtained by optimizing a compromise average performance measure

    Science.gov (United States)

    Hendrix, Charles D.; Vijaya Kumar, B. V. K.

    1994-06-01

    Correlation filters with three transmittance levels (+1, 0, and -1) are of interest in optical pattern recognition because they can be implemented on available spatial light modulators and because the zero level allows us to include a region of support (ROS). The ROS can provide additional control over the filter's noise tolerance and peak sharpness. A new algorithm based on optimizing a compromise average performance measure (CAPM) is proposed for designing three-level composite filters. The performance of this algorithm is compared to other three-level composite filter designs using a common image database and using figures of merit such as the Fisher ratio, error rate, and light efficiency. It is shown that the CAPM algorithm yields better results.

  2. Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods

    Science.gov (United States)

    Liu, Boquan; Polce, Evan; Sprott, Julien C.; Jiang, Jack J.

    2018-01-01

    Purpose: The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Study Design: Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100…

  3. Observer performance in detecting multiple radiographic signals: prediction and analysis using a generalized ROC approach

    International Nuclear Information System (INIS)

    Metz, C.E.; Starr, S.J.; Lusted, L.B.

    1975-01-01

    The theories of decision processes and signal detection provide a framework for the evaluation of observer performance. Some radiologic procedures involve a search for multiple similar lesions, as in gallstone or pneumoconiosis examinations. A model is presented which attempts to predict, from the conventional receiver operating characteristic (ROC) curve describing the detectability of a single visual signal in a radiograph, observer performance in an experiment requiring detection of more than one such signal. An experiment is described which tests the validity of this model for the case of detecting the presence of zero, one, or two low-contrast radiographic images of a two-mm.-diameter lucite bead embedded in radiographic mottle. Results from six observers, including three radiologists, confirm the validity of the model and suggest that human observer performance for relatively complex detection tasks can be predicted from the results of simpler experiments

  4. MARD—A moving average rose diagram application for the geosciences

    Science.gov (United States)

    Munro, Mark A.; Blenkinsop, Thomas G.

    2012-12-01

    MARD 1.0 is a computer program for generating smoothed rose diagrams by using a moving average, which is designed for use across the wide range of disciplines encompassed within the Earth Sciences. Available in MATLAB®, Microsoft® Excel and GNU Octave formats, the program is fully compatible with both Microsoft® Windows and Macintosh operating systems. Each version has been implemented in a user-friendly way that requires no prior experience in programming with the software. MARD conducts a moving average smoothing, a form of signal processing low-pass filter, upon the raw circular data according to a set of pre-defined conditions selected by the user. This form of signal processing filter smoothes the angular dataset, emphasising significant circular trends whilst reducing background noise. Customisable parameters include whether the data is uni- or bi-directional, the angular range (or aperture) over which the data is averaged, and whether an unweighted or weighted moving average is to be applied. In addition to the uni- and bi-directional options, the MATLAB® and Octave versions also possess a function for plotting 2-dimensional dips/pitches in a single, lower, hemisphere. The rose diagrams from each version are exportable as one of a selection of common graphical formats. Frequently employed statistical measures that determine the vector mean, mean resultant (or length), circular standard deviation and circular variance are also included. MARD's scope is demonstrated via its application to a variety of datasets within the Earth Sciences.
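
The smoothing MARD performs can be sketched as an unweighted moving average over a circular histogram with wrap-around at the ends. This is a simplified illustration (aperture measured in bins, uniform weights); MARD itself also offers weighted averaging and degree-based apertures.

```python
def circular_moving_average(counts, aperture):
    """Unweighted moving average of a circular histogram (one count per
    angular bin). Each bin is replaced by the mean of all bins within
    +/- aperture/2 bins, with wrap-around -- the low-pass smoothing that
    emphasises circular trends while reducing background noise."""
    n = len(counts)
    half = aperture // 2
    return [sum(counts[(i + k) % n] for k in range(-half, half + 1))
            / (2 * half + 1) for i in range(n)]
```

Note that the total count is preserved by the averaging, so the smoothed rose diagram keeps the same overall frequency mass.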

  5. Parallel Array Bistable Stochastic Resonance System with Independent Input and Its Signal-to-Noise Ratio Improvement

    Directory of Open Access Journals (Sweden)

    Wei Li

    2014-01-01

with independent components and averaged output; second, we derive the output signal-to-noise ratio (SNR) of this system to show its performance. Our examples show the enhancement achieved by the system and how different parameters influence the performance of the proposed parallel array.
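
The array structure the abstract refers to can be sketched as a bank of double-well (bistable) units driven by the same input plus independent noise, with the outputs averaged. The unit dynamics and parameter values below are illustrative assumptions, not taken from the paper.

```python
import random

def parallel_bistable_array(signal, n_units=8, noise_std=0.4, dt=0.05, seed=3):
    """Sketch of a parallel array of bistable stochastic-resonance units:
    each unit integrates dx/dt = x - x**3 + s(t) + independent noise
    (explicit Euler step), and the array output is the average of the
    unit states -- the averaging that underlies the SNR improvement."""
    rng = random.Random(seed)
    states = [0.0] * n_units
    out = []
    for s in signal:
        for i in range(n_units):
            drift = states[i] - states[i] ** 3 + s + rng.gauss(0.0, noise_std)
            states[i] += dt * drift
        out.append(sum(states) / n_units)   # averaged array output
    return out
```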

  6. Detection of a random signal in a multi-channel environment: a performance study

    International Nuclear Information System (INIS)

    Frenzel, K.Z.

    1986-01-01

Performance of the optimal (likelihood ratio) test and suboptimal tests, including the normalized cross correlator and two energy detectors, is compared for problems involving non-Gaussian as well as Gaussian statistics. Also, optimal one-channel processing is compared to optimal two-channel processing for equivalent total signal-to-noise ratios. Receiver operating characteristic (ROC) curves obtained by a combination of simulation and analytic methods are used to evaluate the performance of the processors. It was found that two-channel processing helps detection performance the most when the noise levels are uncertain. This was true for all signal and noise densities studied. In cases where the noise levels and channel attenuations are known, or when only the attenuations are uncertain, the performance using optimal one-channel processing was close to that found using optimal two-channel processing. When comparing optimal processors to the three suboptimal processors, it was found that when the noise level in each channel is very uncertain, the performance of the normalized cross correlator is much closer to the optimal than that of either of the energy detectors. If, however, the noise levels are known with a fair degree of certainty, the performance of the energy detectors improves considerably, in some cases approaching the optimal performance.

  7. Systematic approach to peak-to-average power ratio in OFDM

    Science.gov (United States)

    Schurgers, Curt

    2001-11-01

OFDM multicarrier systems support high data rate wireless transmission using orthogonal frequency channels, and require no extensive equalization, yet offer excellent immunity against fading and inter-symbol interference. The major drawback of these systems is the large Peak-to-Average power Ratio (PAR) of the transmit signal, which renders a straightforward implementation very costly and inefficient. Existing approaches that attack this PAR issue are abundant, but no systematic framework or comparison between them exists to date. They sometimes even differ in the problem definition itself and consequently in the basic approach to follow. In this work, we provide a systematic approach that resolves this ambiguity and spans the existing PAR solutions. The basis of our framework is the observation that efficient system implementations require a reduced signal dynamic range. This range reduction can be modeled as a hard limiting, also referred to as clipping, where the extra distortion has to be considered as part of the total noise tradeoff. We illustrate that the different PAR solutions manipulate this tradeoff in alternative ways in order to improve the performance. Furthermore, we discuss and compare a broad range of such techniques and organize them into three classes: block coding, clip effect transformation and probabilistic methods.
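
The clipping model at the heart of this framework is simple to state in code. The sketch below computes the PAR of a sampled (real-valued) signal and applies a hard limiter, showing how range reduction lowers the peak at the cost of distortion; it is an illustration of the general idea, not the paper's analysis.

```python
def peak_to_average_ratio(x):
    """PAR of a sampled signal: peak instantaneous power over mean power."""
    peak = max(abs(v) ** 2 for v in x)
    avg = sum(abs(v) ** 2 for v in x) / len(x)
    return peak / avg

def clip(x, a_max):
    """Hard limiter modelling a reduced dynamic range; the clipping
    distortion it introduces becomes part of the total noise tradeoff."""
    return [max(-a_max, min(a_max, v)) for v in x]
```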

  8. Learning-based traffic signal control algorithms with neighborhood information sharing: An application for sustainable mobility

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, H. M. Abdul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhu, Feng [Purdue University, West Lafayette, IN (United States). Lyles School of Civil Engineering; Ukkusuri, Satish V. [Purdue University, West Lafayette, IN (United States). Lyles School of Civil Engineering

    2017-10-04

Here, this research applies an R-Markov Average Reward Technique based reinforcement learning (RL) algorithm, namely RMART, to the vehicular signal control problem, leveraging information sharing among signal controllers in a connected vehicle environment. We implemented the algorithm in a network of 18 signalized intersections and compared the performance of RMART with fixed, adaptive, and variants of the RL schemes. Results show significant improvement in system performance for the RMART algorithm with information sharing over both traditional fixed signal timing plans and real-time adaptive control schemes. Additionally, the comparison with reinforcement learning algorithms including Q-learning and SARSA indicates that RMART performs better at higher congestion levels. Further, a multi-reward structure is proposed that dynamically adjusts the reward function with varying congestion states at the intersection. Finally, the results from test networks show significant reduction in emissions (CO, CO2, NOx, VOC, PM10) when RL algorithms are implemented compared to fixed signal timings and adaptive schemes.
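
For orientation, a single update step of generic R-learning (the average-reward RL family the R-Markov Average Reward Technique builds on) can be sketched as below. The state/reward encoding RMART uses for signal control, and its exact update rule, are not reproduced here; this is the textbook form with hypothetical parameter names.

```python
def r_learning_update(q, rho, s, a, r, s2, alpha=0.1, beta=0.01):
    """One R-learning step. q maps state -> {action: value}; rho is the
    running average-reward estimate, updated only when the chosen action
    was greedy. The action value is pushed toward r - rho + best next
    value (average-adjusted target)."""
    best_s = max(q[s].values())
    best_s2 = max(q[s2].values())
    greedy = q[s][a] == best_s
    q[s][a] += alpha * (r - rho + best_s2 - q[s][a])
    if greedy:
        rho += beta * (r + best_s2 - best_s - rho)
    return rho
```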

  9. Classification of EEG signals using a genetic-based machine learning classifier.

    Science.gov (United States)

    Skinner, B T; Nguyen, H T; Liu, D K

    2007-01-01

This paper investigates the efficacy of the genetic-based learning classifier system XCS for the classification of noisy, artefact-inclusive human electroencephalogram (EEG) signals represented using large condition strings (108 bits). EEG signals from three participants were recorded while they performed four mental tasks designed to elicit hemispheric responses. Autoregressive (AR) models and Fast Fourier Transform (FFT) methods were used to form feature vectors with which mental tasks can be discriminated. XCS achieved a maximum classification accuracy of 99.3% and a best average of 88.9%. The relative classification performance of XCS was then compared against four non-evolutionary classifier systems originating from different learning techniques. The experimental results will be used as part of our larger research effort investigating the feasibility of using EEG signals as an interface to allow paralysed persons to control a powered wheelchair or other devices.

  10. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  11. Infrasonic detection performance in presence of nuisance signal

    Science.gov (United States)

    Charbit, Maurice; Arrowsmith, Stephen; Che, Il-young; Le Pichon, Alexis; Nouvellet, Adrien; Park, Junghyun; Roueff, Francois

    2014-05-01

The infrasound network of the International Monitoring System (IMS) consists of sixty stations deployed all over the world by the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). The IMS has been designed to reliably detect, at least by two stations, an explosion greater than 1 kiloton located anywhere on the Earth [1]. Each station is an array of at least four microbarometers with an aperture of 1 to 3 km. The first important issue is to detect the presence of the signal of interest (SOI) embedded in noise. The detector is commonly based on the property that the SOI provides coherent observations on the sensors, while the noise does not. The test statistic, called the F-stat [2], [5], [6], calculated over a time cell of a few seconds, is commonly used for this purpose. In this paper, we assume that a coherent source is permanently present, arriving from an unknown direction of arrival (DOA). The typical case is the presence of microbaroms or of wind. This source is seen as a nuisance signal (NS). In [4], [3] the authors assume that a time cell without the SOI (CH0) is available, whereas a following time cell is considered as the cell under test (CUT). Therefore the DOA and the SNR of the NS can be estimated. If the signal-to-noise ratio (SNR) of the NS is large enough, the distribution of the F-stat in the absence of the SOI is known to be a noncentral Fisher distribution. It follows that the detection threshold can be set for a given value of the false alarm rate (FAR). The major drawback of keeping the NS is that it may hide the SOI; this phenomenon is similar to leakage, a well-known effect in Fourier analysis. Another approach is to use the DOA estimate of the NS to mitigate the NS with a spatial notch filter in the frequency domain. For this approach a new algorithm is provided. To illustrate, numerical results on synthetic and real data are presented in terms of receiver operating characteristic (ROC) curves. REFERENCES [1] Christie D.R. and Campus P., The IMS
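
The coherence idea behind the F-stat can be sketched directly: the power of the stacked beam (coherent part) is compared with the power of the residuals (incoherent part) over one time cell. This sketch assumes the channels are already time-aligned for the tested DOA; the exact normalisation varies between references.

```python
def f_stat(channels):
    """Array F-statistic sketch: ratio of stacked-beam power to average
    residual power over one time cell. Large when the sensors observe a
    coherent signal, near zero (for this normalisation) for incoherent
    noise. Channels must be time-aligned for the tested DOA."""
    m = len(channels)                      # number of sensors
    n = len(channels[0])                   # samples in the time cell
    beam = [sum(ch[t] for ch in channels) / m for t in range(n)]
    coherent = sum(b * b for b in beam) / n
    residual = sum((ch[t] - beam[t]) ** 2
                   for ch in channels for t in range(n)) / (m * n)
    return (m - 1) * coherent / residual
```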

  12. Filtering Performance Comparison of Kernel and Wavelet Filters for Reactivity Signal Noise

    International Nuclear Information System (INIS)

    Park, Moon Ghu; Shin, Ho Cheol; Lee, Yong Kwan; You, Skin

    2006-01-01

Nuclear reactor power deviation from the critical state is a parameter of specific interest defined by the reactivity, measured from the neutron population. Reactivity is an extremely important quantity used to define many of the reactor startup physics parameters. The time-dependent reactivity is normally determined by solving the inverse neutron kinetics equation. The reactivity computer is a device that provides an on-line solution of the inverse kinetics equation. The measurement signal of the neutron density is normally noise-corrupted, and control rod movements typically produce reactivity variations with sawtooth-like edge signals. Those edge regions should be precisely preserved, since the measured signal is used to estimate the reactivity worth, which is a crucial parameter for assuring the safety of nuclear reactors. In this paper, three kinds of edge-preserving noise filters are proposed and their performance is demonstrated using stepwise signals. The tested filters are based on unilateral kernel, bilateral kernel and wavelet filters, which are known to be effective in edge preservation. The bilateral filter shows a remarkable improvement compared with the unilateral kernel and wavelet filters.
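
The edge-preserving behaviour of a bilateral filter is easy to see in one dimension: each sample is averaged with its neighbours, weighted by both spatial distance and difference in signal value, so step edges survive the smoothing. The kernel widths below are illustrative defaults, not the paper's tuning.

```python
import math

def bilateral_filter_1d(x, radius=3, sigma_s=2.0, sigma_r=0.5):
    """Edge-preserving smoothing: the weight of each neighbour decays
    with both spatial distance (sigma_s) and value difference (sigma_r),
    so sharp steps (e.g. from control-rod moves) are not blurred."""
    out = []
    for i, xi in enumerate(x):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(x), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((xi - x[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * x[j]
            den += w
        out.append(num / den)
    return out
```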

  13. Signal Processing for Improved Wireless Receiver Performance

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.

    2007-01-01

    This thesis is concerned with signal processing for improving the performance of wireless communication receivers for well-established cellular networks such as the GSM/EDGE and WCDMA/HSPA systems. The goal of doing so, is to improve the end-user experience and/or provide a higher system capacity...... by allowing an increased reuse of network resources. To achieve this goal, one must first understand the nature of the problem and an introduction is therefore provided. In addition, the concept of graph-based models and approximations for wireless communications is introduced along with various Belief...... Propagation (BP) methods for detecting the transmitted information, including the Turbo principle. Having established a framework for the research, various approximate detection schemes are discussed. First, the general form of linear detection is presented and it is argued that this may be preferable...

  14. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  15. Generation of earthquake signals

    International Nuclear Information System (INIS)

    Kjell, G.

    1994-01-01

    Seismic verification can be performed either as a full scale test on a shaker table or as numerical calculations. In both cases it is necessary to have an earthquake acceleration time history. This report describes generation of such time histories by filtering white noise. Analogue and digital filtering methods are compared. Different methods of predicting the response spectrum of a white noise signal filtered by a band-pass filter are discussed. Prediction of both the average response level and the statistical variation around this level are considered. Examples with both the IEEE 301 standard response spectrum and a ground spectrum suggested for Swedish nuclear power stations are included in the report
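
The generation idea (band-pass filtering of white noise) can be sketched crudely with a digital filter built from the difference of two one-pole low-pass filters; a real seismic time history would then be scaled iteratively to match a target response spectrum, which this illustration does not attempt.

```python
import random

def band_limited_noise(n, lo=0.05, hi=0.3, seed=1):
    """Crude band-limited noise: Gaussian white noise passed through a
    band-pass formed as the difference of two one-pole low-pass filters
    (cutoff coefficients lo and hi are illustrative, expressed as
    fractions of the sampling rate)."""
    rng = random.Random(seed)
    white = [rng.gauss(0.0, 1.0) for _ in range(n)]

    def one_pole(x, a):
        y, out = 0.0, []
        for v in x:
            y += a * (v - y)      # exponential smoother, cutoff set by a
            out.append(y)
        return out

    fast = one_pole(white, hi)    # keeps content below the upper cutoff
    slow = one_pole(white, lo)    # keeps content below the lower cutoff
    return [f - s for f, s in zip(fast, slow)]   # pass band in between
```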

  16. Auto-adaptive averaging: Detecting artifacts in event-related potential data using a fully automated procedure

    NARCIS (Netherlands)

    Talsma, D.

    2008-01-01

    The auto-adaptive averaging procedure proposed here classifies artifacts in event-related potential data by optimizing the signal-to-noise ratio. This method rank orders single trials according to the impact of each trial on the ERP average. Then, the minimum residual background noise level in the
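
The rank-ordering step described above can be sketched as follows: each single trial is scored by its deviation from the grand-average ERP, so the trials that add the most background noise to the average rank last. The full auto-adaptive procedure (iteratively excluding trials while the residual noise estimate improves) is not reproduced here.

```python
def rank_trials_by_noise(trials):
    """Rank single trials by their mean squared deviation from the
    grand-average ERP (a simple proxy for each trial's impact on the
    average's signal-to-noise ratio); cleanest trials come first."""
    n = len(trials[0])
    grand = [sum(t[i] for t in trials) / len(trials) for i in range(n)]

    def impact(trial):
        return sum((trial[i] - grand[i]) ** 2 for i in range(n)) / n

    return sorted(trials, key=impact)
```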

  18. Perceived Average Orientation Reflects Effective Gist of the Surface.

    Science.gov (United States)

    Cha, Oakyoon; Chong, Sang Chul

    2018-03-01

    The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.

  19. Investigating Patterns for Self-Induced Emotion Recognition from EEG Signals

    Science.gov (United States)

    Zeng, Ying; Yang, Kai; Tong, Li; Yan, Bin

    2018-01-01

    Most current approaches to emotion recognition are based on neural signals elicited by affective materials such as images, sounds and videos. However, the application of neural patterns in the recognition of self-induced emotions remains uninvestigated. In this study we inferred the patterns and neural signatures of self-induced emotions from electroencephalogram (EEG) signals. The EEG signals of 30 participants were recorded while they watched 18 Chinese movie clips which were intended to elicit six discrete emotions, including joy, neutrality, sadness, disgust, anger and fear. After watching each movie clip the participants were asked to self-induce emotions by recalling a specific scene from each movie. We analyzed the important features, electrode distribution and average neural patterns of different self-induced emotions. Results demonstrated that features related to high-frequency rhythm of EEG signals from electrodes distributed in the bilateral temporal, prefrontal and occipital lobes have outstanding performance in the discrimination of emotions. Moreover, the six discrete categories of self-induced emotion exhibit specific neural patterns and brain topography distributions. We achieved an average accuracy of 87.36% in the discrimination of positive from negative self-induced emotions and 54.52% in the classification of emotions into six discrete categories. Our research will help promote the development of comprehensive endogenous emotion recognition methods. PMID:29534515

  20. Investigating Patterns for Self-Induced Emotion Recognition from EEG Signals.

    Science.gov (United States)

    Zhuang, Ning; Zeng, Ying; Yang, Kai; Zhang, Chi; Tong, Li; Yan, Bin

    2018-03-12

    Most current approaches to emotion recognition are based on neural signals elicited by affective materials such as images, sounds and videos. However, the application of neural patterns in the recognition of self-induced emotions remains uninvestigated. In this study we inferred the patterns and neural signatures of self-induced emotions from electroencephalogram (EEG) signals. The EEG signals of 30 participants were recorded while they watched 18 Chinese movie clips which were intended to elicit six discrete emotions, including joy, neutrality, sadness, disgust, anger and fear. After watching each movie clip the participants were asked to self-induce emotions by recalling a specific scene from each movie. We analyzed the important features, electrode distribution and average neural patterns of different self-induced emotions. Results demonstrated that features related to high-frequency rhythm of EEG signals from electrodes distributed in the bilateral temporal, prefrontal and occipital lobes have outstanding performance in the discrimination of emotions. Moreover, the six discrete categories of self-induced emotion exhibit specific neural patterns and brain topography distributions. We achieved an average accuracy of 87.36% in the discrimination of positive from negative self-induced emotions and 54.52% in the classification of emotions into six discrete categories. Our research will help promote the development of comprehensive endogenous emotion recognition methods.

  1. Signal processing for passive detection and classification of underwater acoustic signals

    Science.gov (United States)

    Chung, Kil Woo

    2011-12-01

This dissertation examines signal processing for passive detection, classification and tracking of underwater acoustic signals for improving port security and the security of coastal and offshore operations. First, we consider the problem of passive acoustic detection of a diver in a shallow water environment. A frequency-domain multi-band matched-filter approach to swimmer detection is presented. The idea is to break the frequency contents of the hydrophone signals into multiple narrow frequency bands, followed by a time-averaged (about half a second) energy calculation over each band. Then, spectra composed of such energy samples over the chosen frequency bands are correlated to form a decision variable. The frequency bands with the highest signal-to-noise ratio are used for detection. The performance of the proposed approach is demonstrated for experimental data collected for a diver in the Hudson River. We also propose a new referenceless frequency-domain multi-band detector which, unlike other reference-based detectors, does not require a diver-specific signature. Instead, our detector matches a general feature of the diver spectrum in the high frequency range: the spectrum is roughly periodic in time and approximately flat when the diver exhales. The performance of the proposed approach is demonstrated using experimental data collected from the Hudson River. Moreover, we present detection, classification and tracking of small vessel signals. Hydroacoustic sensors can be applied for the detection of noise generated by vessels, and this noise can be used for vessel detection, classification and tracking. This dissertation presents recent improvements aimed at the measurement and separation of ship DEMON (Detection of Envelope Modulation on Noise) acoustic signatures in busy harbor conditions. Ship signature measurements were conducted in the Hudson River and NY Harbor. The DEMON spectra demonstrated much better temporal stability compared with the full ship
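
The multi-band step can be sketched in two parts: sum the power over each narrow band, then correlate the resulting band-energy profile against a reference to form the decision variable. The band edges and reference profile below are illustrative; the dissertation chooses the highest-SNR bands from data.

```python
import math

def band_energies(power_spectrum, band_edges):
    """Energy in each narrow frequency band; band_edges is a list of
    (lo, hi) bin-index pairs (illustrative, not the thesis's bands)."""
    return [sum(power_spectrum[lo:hi]) for lo, hi in band_edges]

def detection_statistic(energies, reference):
    """Normalised correlation of the measured band-energy profile with a
    reference profile, forming the detection decision variable."""
    num = sum(e * r for e, r in zip(energies, reference))
    den = math.sqrt(sum(e * e for e in energies)
                    * sum(r * r for r in reference))
    return num / den
```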

  2. SIP Signaling Implementations and Performance Enhancement over MANET: A Survey

    OpenAIRE

    Alshamrani, M; Cruickshank, Haitham; Sun, Zhili; Ansa, G; Alshahwan, F

    2016-01-01

    The implementation of the Session Initiation Protocol (SIP)-based Voice over Internet Protocol (VoIP) and multimedia over MANET is still a challenging issue. Many routing factors affect the performance of SIP signaling and the voice Quality of Service (QoS). Node mobility in MANET causes dynamic changes to route calculations, topology, hop numbers, and the connectivity status between the correspondent nodes. SIP-based VoIP depends on the caller’s registration, call initiation, and call termin...

  3. Performance Analysis of Control Signal Transmission Technique for Cognitive Radios in Dynamic Spectrum Access Networks

    Science.gov (United States)

    Sakata, Ren; Tomioka, Tazuko; Kobayashi, Takahiro

    When cognitive radio (CR) systems dynamically use the frequency band, a control signal is necessary to indicate which carrier frequencies are currently available in the network. In order to keep efficient spectrum utilization, this control signal also should be transmitted based on the channel conditions. If transmitters dynamically select carrier frequencies, receivers have to receive control signals without knowledge of their carrier frequencies. To enable such transmission and reception, this paper proposes a novel scheme called DCPT (Differential Code Parallel Transmission). With DCPT, receivers can receive low-rate information with no knowledge of the carrier frequencies. The transmitter transmits two signals whose carrier frequencies are spaced by a predefined value. The absolute values of the carrier frequencies can be varied. When the receiver acquires the DCPT signal, it multiplies the signal by a frequency-shifted version of the signal; this yields a DC component that represents the data signal which is then demodulated. The performance was evaluated by means of numerical analysis and computer simulation. We confirmed that DCPT operates successfully even under severe interference if its parameters are appropriately configured.
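
The key DCPT property (a DC component appears regardless of the absolute carriers) can be checked with a toy receiver: frequency-shift the received signal by the known spacing and multiply by the original. The complex-mixing formulation and parameter names are illustrative, not taken from the paper.

```python
import cmath
import math

def dcpt_dc_level(signal, delta_f, fs):
    """DCPT receiver sketch: mix the received samples down by the known
    carrier spacing delta_f, multiply by the original signal and average.
    When two tones spaced exactly delta_f apart are present, one tone of
    the shifted copy aligns with the other tone of the original, yielding
    a DC term independent of the absolute carrier frequencies."""
    mixed = [s * cmath.exp(-2j * math.pi * delta_f * n / fs)
             for n, s in enumerate(signal)]
    prod = [m * s for m, s in zip(mixed, signal)]
    return abs(sum(prod) / len(prod))
```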

  4. Comparative Performance Evaluation of Orthogonal-Signal-Generators-Based Single-Phase PLL Algorithms

    DEFF Research Database (Denmark)

    Han, Yang; Luo, Mingyu; Zhao, Xin

    2016-01-01

The orthogonal signal generator based phase-locked loops (OSG-PLLs) are among the most popular single-phase PLLs within the areas of power electronics and power systems, mainly because they are often easy to implement and offer robust performance against grid disturbances. The main aim o...

  5. Threshold-Based Multiple Optical Signal Selection Scheme for Free-Space Optical Wavelength Division Multiplexing Systems

    KAUST Repository

    Nam, Sung Sik

    2017-11-13

    We propose a threshold-based multiple optical signal selection scheme (TMOS) for free-space optical wavelength division multiplexing systems. With this scheme, we can obtain higher spectral efficiency while reducing the possible complexity of implementation caused by the beam-selection scheme and without a considerable performance loss. To characterize the performance of our scheme, we statistically analyze the operation characteristics under conventional detection conditions (i.e., heterodyne detection and intensity modulation/direct detection techniques) with log-normal turbulence while taking into consideration the impact of pointing error. More specifically, we derive exact closed-form expressions for the outage probability, the average bit error rate, and the average spectral efficiency while adopting an adaptive modulation. Some selected results show that TMOS increases the average spectral efficiency while maintaining a minimum average bit error rate requirement.
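
The selection rule itself is straightforward to sketch: keep every wavelength whose channel gain clears the threshold, falling back to the single best one when none does. Scalar gains are an illustrative simplification of the turbulence-faded optical channels analysed in the paper.

```python
def tmos_select(gains, threshold):
    """Threshold-based multiple optical signal selection sketch: return
    the indices of all channels with gain >= threshold; if none qualify,
    fall back to plain best-beam selection."""
    above = [i for i, g in enumerate(gains) if g >= threshold]
    if above:
        return above
    return [max(range(len(gains)), key=lambda i: gains[i])]
```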

  6. Extrapolation techniques evaluating 24 hours of average electromagnetic field emitted by radio base station installations: spectrum analyzer measurements of LTE and UMTS signals

    International Nuclear Information System (INIS)

    Mossetti, Stefano; Bartolo, Daniela de; Nava, Elisa; Veronese, Ivan; Cantone, Marie Claire; Cosenza, Cristina

    2017-01-01

International and national organizations have formulated guidelines establishing limits for occupational and residential electromagnetic field (EMF) exposure to high-frequency fields. Italian legislation fixed 20 V/m as a limit for public protection from exposure to EMFs in the frequency range 0.1 MHz-3 GHz and 6 V/m as a reference level. Recently, the law was changed and the reference level must now be evaluated as the 24-hour average value, instead of the previous highest 6 minutes in a day. The law refers to a technical guide (CEI 211-7/E, published in 2013) for the extrapolation techniques that public authorities have to use when assessing exposure for compliance with limits. In this work, we present measurements carried out with a vectorial spectrum analyzer to identify critical technical aspects of these extrapolation techniques when applied to UMTS and LTE signals. We also focused on finding a good balance between statistically significant values and the logistics of control activities, as the signal trend in situ is not known. Measurements were repeated several times over several months and for different mobile companies. The outcome presented in this article allowed us to evaluate the reliability of the extrapolation results obtained and provides a starting point for defining operating procedures. (authors)

  7. Applicability of Time-Averaged Holography for Micro-Electro-Mechanical System Performing Non-Linear Oscillations

    Directory of Open Access Journals (Sweden)

    Paulius Palevicius

    2014-01-01

Optical investigation of movable microsystem components using time-averaged holography is presented in this paper. It is shown that even a harmonic excitation of a non-linear microsystem may result in an unpredictable chaotic motion. Analytical relationships between the parameters of the chaotic oscillations and the formation of time-averaged fringes provide a deeper insight into computational and experimental interpretation of time-averaged MEMS holograms.

  8. Applicability of Time-Averaged Holography for Micro-Electro-Mechanical System Performing Non-Linear Oscillations

    Science.gov (United States)

    Palevicius, Paulius; Ragulskis, Minvydas; Palevicius, Arvydas; Ostasevicius, Vytautas

    2014-01-01

Optical investigation of movable microsystem components using time-averaged holography is presented in this paper. It is shown that even a harmonic excitation of a non-linear microsystem may result in an unpredictable chaotic motion. Analytical relationships between the parameters of the chaotic oscillations and the formation of time-averaged fringes provide a deeper insight into computational and experimental interpretation of time-averaged MEMS holograms. PMID:24451467

  9. Applicability of time-averaged holography for micro-electro-mechanical system performing non-linear oscillations.

    Science.gov (United States)

    Palevicius, Paulius; Ragulskis, Minvydas; Palevicius, Arvydas; Ostasevicius, Vytautas

    2014-01-21

Optical investigation of movable microsystem components using time-averaged holography is presented in this paper. It is shown that even a harmonic excitation of a non-linear microsystem may result in an unpredictable chaotic motion. Analytical relationships between the parameters of the chaotic oscillations and the formation of time-averaged fringes provide a deeper insight into computational and experimental interpretation of time-averaged MEMS holograms.

  10. Wavelet analysis for nonstationary signals

    International Nuclear Information System (INIS)

    Penha, Rosani Maria Libardi da

    1999-01-01

Mechanical vibration signals play an important role in identifying anomalies resulting from equipment malfunction. Traditionally, Fourier spectral analysis is used, where the signals are assumed to be stationary. However, occasional transient impulses and start-up processes are examples of nonstationary signals that can be found in mechanical vibrations. These signals can provide important information about the equipment condition, such as early fault detection. Fourier analysis cannot adequately be applied to nonstationary signals because the results provide data about the frequency composition averaged over the duration of the signal. In this work, two methods for nonstationary signal analysis are used: the Short Time Fourier Transform (STFT) and the wavelet transform. The STFT is a method of adapting Fourier spectral analysis for nonstationary application in the time-frequency domain. Its main limitation is having a single resolution throughout the entire time-frequency domain. The wavelet transform is a newer analysis technique suitable for nonstationary signals, which handles the STFT's drawbacks by providing multi-resolution frequency analysis and time localization in a single time-scale graph. The multiple frequency resolutions are obtained by scaling (dilation/compression) the wavelet function. A comparison of the conventional Fourier transform, STFT and wavelet transform is made by applying these techniques to simulated signals, a rotor rig vibration signal and a rotating machine vibration signal. A Hanning window was used for the STFT analysis. Daubechies and harmonic wavelets were used for continuous, discrete and multi-resolution wavelet analysis. The results show that Fourier analysis was not able to detect changes in the signal frequencies or discontinuities. The STFT analysis detected the changes in the signal frequencies, but with time-frequency resolution problems. The continuous and discrete wavelet transforms demonstrated to be a highly efficient tool to detect
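
A minimal STFT makes the fixed-resolution limitation concrete: the same window length is used for every frame, so time and frequency resolution cannot both be adapted, which is exactly what the wavelet transform's scaling addresses. A direct-DFT sketch (slow but dependency-free; window and hop sizes are illustrative):

```python
import cmath
import math

def stft(x, win=64, hop=32):
    """Minimal STFT: Hanning-windowed frames, direct DFT per frame,
    magnitudes of the positive-frequency bins. The fixed window length
    fixes one time-frequency resolution for the whole signal."""
    hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / (win - 1))
            for n in range(win)]
    frames = []
    for start in range(0, len(x) - win + 1, hop):
        seg = [x[start + n] * hann[n] for n in range(win)]
        spec = [abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win)
                        for n in range(win)))
                for k in range(win // 2)]
        frames.append(spec)
    return frames
```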

  11. A threshold-based multiple optical signal selection scheme for WDM FSO systems

    KAUST Repository

    Nam, Sung Sik

    2017-07-20

    In this paper, we propose a threshold-based multiple optical signal selection scheme (TMOS) for free-space optical systems based on wavelength division multiplexing. With the proposed TMOS, we can obtain higher spectral efficiency while reducing the potential increase in implementation complexity caused by applying a selection-based beam selection scheme, without considerable performance loss. To accurately characterize the performance of the proposed TMOS, we statistically analyze its characteristics with a heterodyne detection technique over independent and identically distributed log-normal turbulence conditions, taking into consideration the impact of pointing error. Specifically, we derive exact closed-form expressions for the average bit error rate and the average spectral efficiency under adaptive modulation. Selected results show that the average spectral efficiency can be increased with TMOS while the system requirement is satisfied.

  12. A threshold-based multiple optical signal selection scheme for WDM FSO systems

    KAUST Repository

    Nam, Sung Sik; Alouini, Mohamed-Slim; Ko, Young-Chai; Cho, Sung Ho

    2017-01-01

    In this paper, we propose a threshold-based multiple optical signal selection scheme (TMOS) for free-space optical systems based on wavelength division multiplexing. With the proposed TMOS, we can obtain higher spectral efficiency while reducing the potential increase in implementation complexity caused by applying a selection-based beam selection scheme, without considerable performance loss. To accurately characterize the performance of the proposed TMOS, we statistically analyze its characteristics with a heterodyne detection technique over independent and identically distributed log-normal turbulence conditions, taking into consideration the impact of pointing error. Specifically, we derive exact closed-form expressions for the average bit error rate and the average spectral efficiency under adaptive modulation. Selected results show that the average spectral efficiency can be increased with TMOS while the system requirement is satisfied.

  13. The predictive value of P-wave duration by signal-averaged electrocardiogram in acute ST elevation myocardial infarction.

    Science.gov (United States)

    Shturman, Alexander; Bickel, Amitai; Atar, Shaul

    2012-08-01

    The prognostic value of P-wave duration has been previously evaluated by signal-averaged ECG (SAECG) in patients with various arrhythmias not associated with acute myocardial infarction (AMI). We aimed to investigate the clinical correlates and prognostic value of P-wave duration in patients with ST elevation AMI (STEMI). The patients (n = 89) were evaluated on the first, second and third day after admission, as well as one week and one month post-AMI. Survival was determined 2 years after the index STEMI. In comparison with the upper normal range of P-wave duration (<120 msec), P-wave duration was significantly prolonged in patients with LVEF < 40% (128.79 +/- 28 msec) (P = 0.001). P-wave duration above 120 msec was significantly correlated with an increased complication rate; namely, sustained ventricular tachyarrhythmia (36%), congestive heart failure (41%), atrial fibrillation (11%), recurrent angina (14%), and re-infarction (8%) (P = 0.012, odds ratio 4.267, 95% confidence interval 1.37-13.32). P-wave duration of 126 msec on the day of admission was found to have the highest predictive value for in-hospital complications including LVEF < 40% (area under the curve 0.741, P < 0.001). However, we did not find a significant correlation between P-wave duration and mortality after multivariate analysis. P-wave duration as evaluated by SAECG correlates negatively with LVEF post-STEMI, and P-wave duration above 126 msec can be utilized as a non-invasive predictor of in-hospital complications and low LVEF following STEMI.

  14. Re-Normalization Method of Doppler Lidar Signal for Error Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Park, Nakgyu; Baik, Sunghoon; Park, Seungkyu; Kim, Donglyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Dukhyeon [Hanbat National Univ., Daejeon (Korea, Republic of)

    2014-05-15

    In this paper, we present a re-normalization method for reducing the fluctuations of Doppler signals caused by various noise sources, mainly the frequency locking error, in a Doppler lidar system. For the Doppler lidar system, we used an injection-seeded pulsed Nd:YAG laser as the transmitter and an iodine filter as the Doppler frequency discriminator. For the Doppler frequency shift measurement, the transmission ratio using the injection-seeded laser is locked to stabilize the frequency. If the frequency locking system is not perfect, the Doppler signal has some error due to the frequency locking error. The re-normalization of the Doppler signals was performed to reduce this error, using an additional laser beam passed through an iodine cell. We confirmed that the re-normalized Doppler signal is considerably more stable than the simply averaged Doppler signal obtained without this calibration method; the standard deviation was reduced to 4.838 × 10⁻³.
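
    The idea of re-normalizing each shot by a simultaneously recorded reference channel, rather than simply averaging, can be illustrated with a toy model (all numbers are hypothetical; in the real system the reference is an auxiliary beam through the iodine cell):

```python
import numpy as np

rng = np.random.default_rng(0)
shots = 1000
# Toy model: a shot-to-shot transmission fluctuation (e.g. from imperfect
# frequency locking) multiplies both the Doppler channel and an auxiliary
# reference beam passed through the same iodine cell.
drift = 1.0 + 0.05 * rng.standard_normal(shots)
reference = 2.0 * drift          # auxiliary-beam channel
doppler = 0.8 * drift            # Doppler-signal channel

# Simple averaging keeps the drift; re-normalizing every shot by its own
# reference cancels the common fluctuation before any averaging is done.
renormalized = doppler / reference
print(np.std(doppler), np.std(renormalized))  # drift removed after re-normalization
```

In this idealized model the fluctuation is perfectly common to both channels, so it cancels exactly; in practice only the correlated part of the noise is removed.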

  15. A novel power spectrum calculation method using phase-compensation and weighted averaging for the estimation of ultrasound attenuation.

    Science.gov (United States)

    Heo, Seo Weon; Kim, Hyungsuk

    2010-05-01

    An estimation of ultrasound attenuation in soft tissues is critical in quantitative ultrasound analysis, since it is not only related to the estimation of other ultrasound parameters, such as speed of sound, integrated scatterers, or scatterer size, but also provides pathological information about the scanned tissue. However, the estimation performance for ultrasound attenuation is intimately tied to the accurate extraction of spectral information from the backscattered radiofrequency (RF) signals. In this paper, we propose two novel techniques for calculating a block power spectrum from the backscattered ultrasound signals. These are based on phase-compensation of each RF segment using the normalized cross-correlation, to minimize estimation errors due to phase variations, and on a weighted averaging technique, to maximize the signal-to-noise ratio (SNR). The simulation results with uniform numerical phantoms demonstrate that the proposed method estimates local attenuation coefficients within 1.57% of the actual values, while the conventional methods estimate them within 2.96%. The proposed method is especially effective when dealing with signals reflected from greater depths, where the SNR is lower, or when the gated window contains a small number of signal samples. Experimental results, obtained at 5 MHz with a one-dimensional 128-element array and tissue-mimicking phantoms, also show that the proposed method provides better estimates (within 3.04% of the actual value) with smaller estimation variances than the conventional methods (within 5.93%) for all cases considered. Copyright 2009 Elsevier B.V. All rights reserved.
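
    A minimal sketch of the two ingredients, phase compensation before averaging and weighted averaging of segment spectra, is given below. It is not the authors' implementation: the alignment uses circular cross-correlation and the weights are plain segment energies, both stand-ins for the paper's normalized cross-correlation and SNR-based weighting, and the RF segments are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 40e6, 256                       # 40 MHz sampling, 256-sample gate
t = np.arange(n) / fs

# Eight hypothetical RF segments: same 5 MHz echo, random arrival jitter, noise.
segments = []
for _ in range(8):
    jitter = rng.integers(-5, 6)
    s = np.sin(2 * np.pi * 5e6 * (t - jitter / fs)) + 0.1 * rng.standard_normal(n)
    segments.append(s)

# Phase compensation: shift each segment to the lag that maximizes its
# circular cross-correlation with the first segment.
ref_fft = np.fft.fft(segments[0])
aligned = []
for s in segments:
    xc = np.fft.ifft(ref_fft * np.conj(np.fft.fft(s))).real
    aligned.append(np.roll(s, np.argmax(xc)))

# Weighted averaging of the segment power spectra; segment energy stands in
# here for the paper's SNR-based weights.
weights = [np.sum(a ** 2) for a in aligned]
spectra = [np.abs(np.fft.rfft(a)) ** 2 for a in aligned]
avg_spectrum = np.average(spectra, axis=0, weights=weights)
peak_bin = int(np.argmax(avg_spectrum[1:])) + 1
print(peak_bin * fs / n)  # dominant frequency of the block power spectrum, Hz
```

Without the alignment step, the segments would be averaged with mismatched phases, smearing the spectral estimate that the attenuation fit relies on.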

  16. Performance analysis of power-efficient adaptive interference cancelation in fading channels

    KAUST Repository

    Radaydeh, Redha Mahmoud Mesleh; Alouini, Mohamed-Slim

    2010-01-01

    This paper analyzes the performance of a -steering scheme for highly correlated receive antennas in the presence of statistically unordered co-channel interferers over multipath fading channels. An adaptive activation of receive antennas according to the fading conditions of the interfering signals is considered in the analysis. Analytical expressions for various system performance measures, including the outage probability, the average error probability of different signaling schemes, and the raw moments of the combined signal-to-interference-plus-noise ratio (SINR), are obtained in exact form. Numerical and simulation results for the performance-complexity tradeoff of this scheme are presented and compared with those of full-size arbitrary interference cancelation and no-cancelation scenarios. ©2010 IEEE.

  17. Performance analysis of power-efficient adaptive interference cancelation in fading channels

    KAUST Repository

    Radaydeh, Redha Mahmoud Mesleh

    2010-12-01

    This paper analyzes the performance of a -steering scheme for highly correlated receive antennas in the presence of statistically unordered co-channel interferers over multipath fading channels. An adaptive activation of receive antennas according to the fading conditions of the interfering signals is considered in the analysis. Analytical expressions for various system performance measures, including the outage probability, the average error probability of different signaling schemes, and the raw moments of the combined signal-to-interference-plus-noise ratio (SINR), are obtained in exact form. Numerical and simulation results for the performance-complexity tradeoff of this scheme are presented and compared with those of full-size arbitrary interference cancelation and no-cancelation scenarios. ©2010 IEEE.

  18. Simulated performance of an acoustic modem using phase-modulated signals in a time-varying, shallow-water environment

    DEFF Research Database (Denmark)

    Bjerrum-Niese, Christian; Jensen, Leif Bjørnø

    1996-01-01

    Underwater acoustic modems using coherent modulation, such as phase-shift keying, have proven to efficiently exploit the bandlimited underwater acoustical communication channel. However, the performance of an acoustic modem, given as maximum range and data and error rate, is limited in the complex and dynamic multipath channel. Multipath arrivals at the receiver cause phase distortion and fading of the signal envelope. Yet, for extreme ratios of range to depth, the delays of multipath arrivals decrease, and the channel impulse response coherently contributes energy to the signal at short delays relative to the first arrival, while longer delays give rise to intersymbol interference. Following this, the signal-to-multipath ratio (SMR) is introduced. It is claimed that the SMR determines the performance rather than the signal-to-noise ratio (SNR). Using a ray model including temporal variations...

  19. A real time ECG signal processing application for arrhythmia detection on portable devices

    Science.gov (United States)

    Georganis, A.; Doulgeraki, N.; Asvestas, P.

    2017-11-01

    Arrhythmia describes disorders of the normal heart rhythm which, depending on the case, can even be fatal for a patient with a severe history of heart disease. The purpose of this work is to develop an application for heart signal visualization, processing and analysis on Android portable devices, e.g. mobile phones, tablets, etc. The application initially retrieves the signal from a file; at a later stage this signal is processed and analysed within the device so that it can be classified according to the features of the arrhythmia. The processing and analysis stage includes several algorithms, among them the moving average and the Pan-Tompkins algorithm, as well as the use of wavelets, to extract features and characteristics. At the final stage, the application is tested against simulated real-time records, using the TCP network protocol to connect the mobile device to a simulated signal source. The classification of the ECG beats is performed by neural networks.
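
    The moving-average stage mentioned above is the workhorse of Pan-Tompkins-style beat detection: square the signal, integrate over a moving window, and threshold. Below is a self-contained toy version with a synthetic spike train and hand-picked window and threshold, not the app's actual parameters:

```python
import numpy as np

def moving_average(x, w):
    """Simple FIR moving-average smoother of window length w."""
    return np.convolve(x, np.ones(w) / w, mode="same")

# Synthetic ECG-like trace: one R-peak per second on top of baseline noise.
rng = np.random.default_rng(2)
fs = 250                                   # Hz
sig = 0.05 * rng.standard_normal(fs * 10)  # 10 s of noise
r_locs = np.arange(fs // 2, fs * 10, fs)
sig[r_locs] += 1.0

# Pan-Tompkins-style stage: square the signal, then moving-window integrate;
# beats then stand out as broad lobes above a fixed threshold.
energy = moving_average(sig ** 2, 30)
above = np.flatnonzero(energy > 0.02)
n_beats = 1 + int(np.sum(np.diff(above) > 1))  # count contiguous lobes
print(n_beats)  # -> 10
```

The real Pan-Tompkins algorithm adds band-pass filtering, differentiation and adaptive thresholds, but the squaring-plus-integration core is the same.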

  20. GNSS Signal Tracking Performance Improvement for Highly Dynamic Receivers by Gyroscopic Mounting Crystal Oscillator.

    Science.gov (United States)

    Abedi, Maryam; Jin, Tian; Sun, Kewen

    2015-08-31

    In this paper, the efficiency of the gyroscopic mounting method for the reference oscillator of a highly dynamic GNSS receiver is studied as a means of reducing signal loss. Analyses are performed separately for two phases of flight, atmospheric and upper-atmospheric. Results show that the proposed mounting reduces signal loss, especially in the parts of the trajectory where its probability is highest. This reduction appears especially for crystal oscillators with a low elevation angle of the g-sensitivity vector. The gyroscopic mounting influences the frequency deviation or jitter that dynamic loads cause on the replica carrier, and thereby affects the frequency locked loop (FLL), the dominant tracking loop in highly dynamic GNSS receivers. In terms of steady-state load, the proposed mounting mostly reduces the frequency deviation below the one-sigma threshold of the FLL (1σ(FLL)). The mounting method can also reduce the frequency jitter caused by sinusoidal vibrations, and reduces the probability of signal loss in parts of the trajectory where other error sources accompany this vibration load. In the case of random vibration, which is the main disturbance source for the FLL, gyroscopic mounting is even able to suppress disturbances greater than the three-sigma threshold of the FLL (3σ(FLL)). In this way, signal tracking performance can be improved by the gyroscopic mounting method for highly dynamic GNSS receivers.

  1. RHIC BPM System Modifications and Performance

    CERN Document Server

    Satogata, Todd; Cameron, Peter; Cerniglia, Phil; Cupolo, John; Curcio, Anthony J; Dawson, William C; Degen, Christopher; Gullotta, Justin; Mead, Joe; Michnoff, Robert; Russo, Thomas; Sikora, Robert

    2005-01-01

    The RHIC beam position monitor (BPM) system provides independent average-orbit and turn-by-turn (TBT) position measurements. In each ring, there are 162 measurement locations per plane (horizontal and vertical), for a total of 648 BPM planes in the RHIC machine. During the 2003 and 2004 shutdowns, BPM processing electronics were moved from the RHIC tunnel to controls alcoves to reduce radiation impact, and the analog signal paths of several dozen modules were modified to eliminate gain-switching relays and improve signal stability. This paper presents results of improved system performance, including stability for interaction region and sextupole beam-based alignment efforts. We also summarize the performance of improved million-turn TBT acquisition channels for nonlinear dynamics and echo studies.

  2. Effect of parameters in moving average method for event detection enhancement using phase sensitive OTDR

    Science.gov (United States)

    Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum

    2017-04-01

    We analyze the relations of the parameters in the moving average method to enhance the event detectability of a phase-sensitive optical time domain reflectometer (OTDR). If the external events have a unique vibration frequency, the control parameters of the moving average method should be optimized to detect these events efficiently. A phase-sensitive OTDR was implemented with a pulsed light source, composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier and a fiber Bragg grating filter, and a light-receiving part, comprising a photo-detector and a high-speed data acquisition system. The moving average method is operated with the control parameters: total number of raw traces, M; number of averaged traces, N; and step size of moving, n. The raw traces are obtained by the phase-sensitive OTDR with sound signals generated by a speaker. Using these trace data, the relation of the control parameters is analyzed. The results show that if the event signal has a single frequency, optimal values of N and n exist for detecting the event efficiently.
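
    The role of the three control parameters can be made concrete with a toy implementation: from M raw traces, average N consecutive traces and slide the window by n. The data below are synthetic; only the parameter structure follows the paper.

```python
import numpy as np

def moving_average_traces(raw, N, n):
    """Average N consecutive traces, sliding the window by n traces per step.

    raw is an (M, L) array of M raw phase-OTDR traces of length L; the
    result has one averaged trace per window position.
    """
    M = raw.shape[0]
    return np.stack([raw[s:s + N].mean(axis=0)
                     for s in range(0, M - N + 1, n)])

# Hypothetical raw data: noise-dominated traces with a weak static event.
rng = np.random.default_rng(3)
M, L = 64, 200
raw = rng.standard_normal((M, L))
raw[:, 100] += 1.0                      # event at fiber position 100

avg = moving_average_traces(raw, N=16, n=4)
print(avg.shape)                        # -> (13, 200)
print(raw.std() / avg[0].std())         # noise reduced by roughly sqrt(N) = 4
```

For a vibration event of a given frequency, N trades noise suppression against temporal smearing of the event, and n sets the effective output trace rate, which is why the paper optimizes both jointly.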

  3. A Review on Human Body Communication: Signal Propagation Model, Communication Performance, and Experimental Issues

    Directory of Open Access Journals (Sweden)

    Jian Feng Zhao

    2017-01-01

    Full Text Available Human body communication (HBC), which uses human body tissue as the transmission medium to transmit health informatics, serves as a promising physical-layer solution for the body area network (BAN). The human-centric nature of HBC offers an innovative method of transferring healthcare data, whose transmission requires low interference and a reliable data link. Therefore, HBC systems that achieve good communication performance are required. In this regard, a tutorial review is conducted on the important issues related to HBC data transmission, such as signal propagation models, channel characteristics, communication performance, and experimental considerations. In this work, the development of HBC and the first attempts at it are reviewed first. Then a survey of signal propagation models is presented. Based on these models, the channel characteristics are summarized, and the communication performance and the selection of transmission parameters are investigated. Moreover, experimental issues, such as electrodes and grounding strategies, are also discussed. Finally, recommended future studies are provided.

  4. EXTRAPOLATION TECHNIQUES EVALUATING 24 HOURS OF AVERAGE ELECTROMAGNETIC FIELD EMITTED BY RADIO BASE STATION INSTALLATIONS: SPECTRUM ANALYZER MEASUREMENTS OF LTE AND UMTS SIGNALS.

    Science.gov (United States)

    Mossetti, Stefano; de Bartolo, Daniela; Veronese, Ivan; Cantone, Marie Claire; Cosenza, Cristina; Nava, Elisa

    2017-04-01

    International and national organizations have formulated guidelines establishing limits for occupational and residential electromagnetic field (EMF) exposure to high-frequency fields. Italian legislation fixed 20 V/m as a limit for public protection from exposure to EMFs in the frequency range 0.1 MHz-3 GHz and 6 V/m as a reference level. Recently, the law was changed, and the reference level must now be evaluated as the 24-hour average value instead of the previously used highest 6 minutes in a day. The law refers to a technical guide (CEI 211-7/E, published in 2013) for the extrapolation techniques that public authorities have to use when assessing exposure for compliance with the limits. In this work, we present measurements carried out with a vectorial spectrum analyzer to identify technically critical aspects of these extrapolation techniques when applied to UMTS and LTE signals. We also focused on finding a good balance between statistically significant values and logistical manageability of the control activity, as the signal trend in situ is not known. Measurements were repeated several times over several months and for different mobile operators. The outcome presented in this article allowed us to evaluate the reliability of the extrapolation results obtained and provides a starting point for defining operating procedures. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
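
    The core arithmetic of such an extrapolation is simple; the sketch below uses invented numbers purely to show the shape of the calculation (the electric field scales with the square root of the transmitted power), not values from the measurement campaign.

```python
import math

# Illustrative only: the numbers below are hypothetical, not measurements.
# The CEI-style extrapolation rescales a spot field value, extrapolated to
# the maximum base-station power, down to a 24-hour average using the ratio
# of day-averaged to maximum transmitted power reported by the operator.
e_max = 5.2               # V/m, field at maximum BTS transmitted power
alpha_24 = 0.45           # 24 h mean-to-max transmitted-power ratio

# The electric field scales with the square root of the radiated power.
e_24h = e_max * math.sqrt(alpha_24)
print(round(e_24h, 2))    # -> 3.49
```

The practical difficulty discussed in the paper lies not in this arithmetic but in obtaining reliable mean-to-max power ratios for UMTS and LTE traffic.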

  5. Ocean tides in GRACE monthly averaged gravity fields

    DEFF Research Database (Denmark)

    Knudsen, Per

    2003-01-01

    The GRACE mission will map the Earth's gravity field and its variations with unprecedented accuracy during its 5-year lifetime. Unless ocean tide signals and their load upon the solid earth are removed from the GRACE data, their long-period aliases obscure the more subtle climate signals at which GRACE aims. In this analysis the results of Knudsen and Andersen (2002) have been verified using actual post-launch orbit parameters of the GRACE mission. The current ocean tide models are not accurate enough to correct GRACE data at harmonic degrees lower than 47. The accumulated tidal errors may affect the GRACE data up to harmonic degree 60. A study of the revised alias frequencies confirms that the ocean tide errors will not cancel in the GRACE monthly averaged temporal gravity fields. The S-2 and the K-2 terms have alias frequencies much longer than 30 days, so they remain almost unreduced...

  6. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    Science.gov (United States)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
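
    The trade-off under study, fewer but cleaner images after averaging, is easy to reproduce numerically. The snippet below uses pure-noise images for illustration; the paper's actual analysis is of course carried out within the MRF/Bayesian framework.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, K, n_images = 1.0, 4, 20

# Twenty noisy observations of the same (here: blank) 32x32 image.
images = sigma * rng.standard_normal((n_images, 32, 32))

# The trade-off: averaging groups of K images lowers the noise level by
# about sqrt(K), but leaves only n_images/K data items for inference such
# as hyper-parameter estimation.
averaged = images.reshape(n_images // K, K, 32, 32).mean(axis=1)
print(images.shape[0], averaged.shape[0])       # -> 20 5
print(images.std() / averaged.std())            # close to sqrt(K) = 2
```

This is exactly the tension the paper quantifies: restoration with known hyper-parameters benefits from the lower noise, while hyper-parameter estimation suffers from the reduced number of data items.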

  7. Objective automated quantification of fluorescence signal in histological sections of rat lens.

    Science.gov (United States)

    Talebizadeh, Nooshin; Hagström, Nanna Zhou; Yu, Zhaohua; Kronschläger, Martin; Söderberg, Per; Wählby, Carolina

    2017-08-01

    Visual quantification and classification of fluorescent signals is the gold standard in microscopy. The purpose of this study was to develop an automated method to delineate cells and to quantify the expression of a fluorescent biomarker signal in the nucleus and cytoplasm of each lens epithelial cell in a histological section. A region of interest representing the lens epithelium was manually demarcated in each input image. Thereafter, individual cell nuclei within the region of interest were automatically delineated based on watershed segmentation and thresholding, with an algorithm developed in Matlab™. Fluorescence signal was quantified within nuclei, cytoplasms and juxtaposed backgrounds. The classification of cells as labelled or not labelled was based on comparison of the fluorescence signal within cells with the local background. The classification rule was then optimized against visual classification of a limited dataset. The performance of the automated classification was evaluated by asking 11 independent blinded observers to classify all cells (n = 395) in one lens image. The time consumed by the automatic algorithm and by visual classification was recorded. On average, 77% of the cells were correctly classified as compared with the majority vote of the visual observers. The average agreement among visual observers was 83%. However, variation among visual observers was high, and agreement between two visual observers was as low as 71% in the worst case. Automated classification was on average 10 times faster than visual scoring. The presented method enables objective and fast detection of lens epithelial cells and quantification of the expression of a fluorescent signal, with an accuracy comparable to the variability among visual observers. © 2017 International Society for Advancement of Cytometry.
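
    The final classification step, comparing each cell's signal with its juxtaposed background, reduces to a one-line rule. The sketch below uses hypothetical per-cell measurements and a hypothetical factor of 1.5; the paper optimizes its actual rule against visual classification.

```python
import numpy as np

def classify_cells(nucleus_means, background_means, factor=1.5):
    """Label a cell positive when its mean nuclear fluorescence exceeds the
    juxtaposed local background by a multiplicative factor. The factor here
    is a hypothetical stand-in for the paper's optimized rule."""
    return nucleus_means > factor * background_means

# Hypothetical per-cell measurements (arbitrary fluorescence units).
nuclei = np.array([120.0, 40.0, 85.0, 300.0, 55.0])       # per-cell signal
background = np.array([50.0, 45.0, 50.0, 60.0, 52.0])     # local background
labels = classify_cells(nuclei, background)
print(labels.tolist())  # -> [True, False, True, True, False]
```

Comparing against the local rather than a global background makes the rule robust to the uneven illumination and staining typical of histological sections.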

  8. A Framework for Control System Design Subject to Average Data-Rate Constraints

    DEFF Research Database (Denmark)

    Silva, Eduardo; Derpich, Milan; Østergaard, Jan

    2011-01-01

    This paper studies discrete-time control systems subject to average data-rate limits. We focus on a situation where a noisy linear system has been designed assuming transparent feedback and, due to implementation constraints, a source-coding scheme (with unity signal transfer function) has to be ...

  9. [In patients with Graves' disease signal-averaged P wave duration positively correlates with the degree of thyrotoxicosis].

    Science.gov (United States)

    Czarkowski, Marek; Oreziak, Artur; Radomski, Dariusz

    2006-04-01

    Coexistence of goitre, proptosis and palpitations was first observed in the nineteenth century. Sinus tachyarrhythmias and atrial fibrillation are typical cardiac symptoms of hyperthyroidism. Atrial fibrillation occurs more often in patients with toxic goitre than in young patients with Graves' disease. These findings suggest that the causes of atrial fibrillation might be multifactorial in the elderly. The aims of our study were to evaluate correlations between the parameters of the atrial signal-averaged ECG (SAECG) and the serum concentrations of free thyroid hormones. Twenty-five patients with untreated Graves' disease (G-B) (age 29.6 +/- 9.0 y.o.) and 26 control patients (age 29.3 +/- 6.9 y.o.) were enrolled in our study. None of them had a history of atrial fibrillation, which was confirmed by 24-hour ECG Holter monitoring. Serum fT3, fT4 and TSH were determined in venous blood by an immunoenzymatic method. Atrial SAECG recording with filtration by a zero-phase Butterworth filter (45-150 Hz) was done in all subjects. The duration of the atrial vector magnitude (hfP) and the root mean square of the terminal 20 ms of the atrial vector magnitude (RMS20) were analysed. There were no significant differences in the values of the SAECG parameters (hfP, RMS20) between the investigated groups. A positive correlation between hfP and serum fT3 concentration was observed in group G-B (Spearman's correlation coefficient R = 0.462, p < 0.05). Signal-averaged P-wave duration in patients with Graves' disease thus depends not only on hyperthyroidism but also on the serum concentration of fT3.

  10. Target acquisition performance : Effects of target aspect angle, dynamic imaging and signal processing

    NARCIS (Netherlands)

    Beintema, J.A.; Bijl, P.; Hogervorst, M.A.; Dijk, J.

    2008-01-01

    In an extensive Target Acquisition (TA) performance study, we recorded static and dynamic imagery of a set of military and civilian two-handheld objects at a range of distances and aspect angles with an under-sampled uncooled thermal imager. Next, we applied signal processing techniques including

  11. Extreme Temperature Performance of Automotive-Grade Small Signal Bipolar Junction Transistors

    Science.gov (United States)

    Boomer, Kristen; Damron, Benny; Gray, Josh; Hammoud, Ahmad

    2018-01-01

    Electronics designed for space exploration missions must display efficient and reliable operation under extreme temperature conditions. For example, lunar outposts, Mars rovers and landers, James Webb Space Telescope, Europa orbiter, and deep space probes represent examples of missions where extreme temperatures and thermal cycling are encountered. Switching transistors, small signal as well as power level devices, are widely used in electronic controllers, data instrumentation, and power management and distribution systems. Little is known, however, about their performance in extreme temperature environments beyond their specified operating range; in particular under cryogenic conditions. This report summarizes preliminary results obtained on the evaluation of commercial-off-the-shelf (COTS) automotive-grade NPN small signal transistors over a wide temperature range and thermal cycling. The investigations were carried out to establish a baseline on functionality of these transistors and to determine suitability for use outside their recommended temperature limits.

  12. Numerical Analysis of a Small-Size Vertical-Axis Wind Turbine Performance and Averaged Flow Parameters Around the Rotor

    Directory of Open Access Journals (Sweden)

    Rogowski Krzysztof

    2017-06-01

    Full Text Available Small-scale vertical-axis wind turbines can be used as a source of electricity in rural and urban environments. To the authors' knowledge, there are no validated simplified aerodynamic models of these wind turbines; therefore, the use of more advanced techniques, such as computational fluid dynamics methods, is justified. The paper contains a performance analysis of a small-scale vertical-axis wind turbine with a large solidity. The averaged velocity field and the averaged static pressure distribution around the rotor have also been analyzed. All numerical results presented in this paper are obtained using the SST k-ω turbulence model. Computed power coefficients are in good agreement with the experimental results. A small change in the tip speed ratio significantly affects the velocity field. The obtained velocity fields can be further used as a basis for simplified aerodynamic methods.
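
    For readers unfamiliar with the two headline quantities, the tip speed ratio and the power coefficient are defined as below; the numbers are illustrative, not taken from the paper's simulations.

```python
# Illustrative figures (not from the paper) for a small H-type
# vertical-axis rotor; they show how the two headline metrics are defined.
rho = 1.225        # air density, kg/m^3
v = 8.0            # free-stream wind speed, m/s
radius = 0.5       # rotor radius, m
height = 1.0       # rotor height, m
omega = 32.0       # rotor angular speed, rad/s
p_shaft = 120.0    # computed shaft power, W

area = 2 * radius * height                   # VAWT swept area, A = 2*R*H
tsr = omega * radius / v                     # tip speed ratio
cp = p_shaft / (0.5 * rho * area * v ** 3)   # power coefficient
print(tsr, round(cp, 3))  # -> 2.0 0.383
```

The observation that a small change in tip speed ratio strongly alters the flow field follows directly from these definitions: at fixed wind speed, the tip speed ratio is set entirely by the rotor speed.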

  13. Zone of Acceptance Under Performance Measurement: Does Performance Information Affect Employee Acceptance of Management Authority?

    DEFF Research Database (Denmark)

    Nielsen, Poul Aaes; Jacobsen, Christian Bøtcher

    2018-01-01

    Public sector employees have traditionally enjoyed substantial influence and bargaining power in organizational decision making, but few studies have investigated the formation of employee acceptance of management authority. Drawing on the ‘romance of leadership’ perspective, we argue that performance information shapes employee attributions of leader quality and perceptions of a need for change in ways that affect their acceptance of management authority, conceptualized using Simon’s notion of a ‘zone of acceptance.’ We conducted a survey experiment among 1,740 teachers, randomly assigning true performance information about each respondent’s own school. When employees were exposed to signals showing low or high performance, their acceptance of management authority increased, whereas average performance signals reduced employee acceptance of management authority. The findings suggest...

  14. Peak-to-average power ratio reduction in interleaved OFDMA systems

    KAUST Repository

    Al-Shuhail, Shamael; Ali, Anum; Al-Naffouri, Tareq Y.

    2015-01-01

    Orthogonal frequency division multiple access (OFDMA) systems suffer from several impairments, and communication system engineers use powerful signal processing tools to combat these impairments and to keep up with capacity/rate demands. One of these impairments is a high peak-to-average power ratio (PAPR), and clipping is the simplest peak reduction scheme. In general, however, when multiple users are subjected to clipping, frequency-domain clipping distortions spread over the spectrum of all users. This results in compromised performance, and hence clipping distortions need to be mitigated at the receiver. Mitigating these distortions in the multiuser case is not simple and requires complex clipping mitigation procedures at the receiver. It has been observed, however, that interleaved OFDMA presents a special structure that results in only self-inflicted clipping distortions (i.e., the distortions of a particular user do not interfere with other users). In this work, we prove analytically that distortions do not spread over multiple users when interleaved carrier assignment is utilized in OFDMA, and we construct a compressed sensing system that exploits the sparsity of the clipping distortions to recover them for each user. We provide numerical results that validate our analysis and show promising performance for the proposed clipping recovery scheme.
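
    The PAPR problem and the clipping operation described above can be demonstrated in a few lines. This is a generic OFDM block with an arbitrary clipping level, not the paper's system model:

```python
import numpy as np

rng = np.random.default_rng(5)

def papr_db(x):
    """Peak-to-average power ratio of a baseband block, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# One OFDMA block: random QPSK symbols on 256 subcarriers.
n = 256
sym = (rng.choice([-1.0, 1.0], n) + 1j * rng.choice([-1.0, 1.0], n)) / np.sqrt(2)
x = np.fft.ifft(sym) * np.sqrt(n)        # time-domain block, unit mean power

# Amplitude clipping, the simplest PAPR-reduction scheme: magnitudes above
# the clipping level are limited while the phase is preserved.
level = 1.6                              # clipping level, relative to RMS
mag = np.abs(x)
xc = x * np.minimum(1.0, level / np.maximum(mag, 1e-12))

print(papr_db(x), papr_db(xc))           # clipping lowers the PAPR
```

The difference `xc - x` is sparse, since only the few samples above the clipping level are altered; it is exactly this sparsity that the paper's compressed sensing recovery exploits at the receiver.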

  16. Improved contrast deep optoacoustic imaging using displacement-compensated averaging: breast tumour phantom studies

    Energy Technology Data Exchange (ETDEWEB)

    Jaeger, M; Preisser, S; Kitz, M; Frenz, M [Institute of Applied Physics, University of Bern, Sidlerstrasse 5, CH-3012 Bern (Switzerland); Ferrara, D; Senegas, S; Schweizer, D, E-mail: frenz@iap.unibe.ch [Fukuda Denshi Switzerland AG, Reinacherstrasse 131, CH-4002 Basel (Switzerland)

    2011-09-21

    For real-time optoacoustic (OA) imaging of the human body, a linear array transducer and reflection-mode optical irradiation are usually preferred. Such a setup, however, results in significant image background, which prevents imaging structures at the ultimate depth determined by the light distribution and the signal noise level. We therefore previously proposed a method for image background reduction based on displacement-compensated averaging (DCA) of image series obtained while the tissue sample under investigation is gradually deformed. OA signals and background signals are affected differently by the deformation and can thus be distinguished. The proposed method is now experimentally applied to image artificial tumours embedded inside breast phantoms. OA images are acquired alternately with pulse-echo images using a combined OA/echo-ultrasound device. Tissue deformation is tracked via speckle tracking in the pulse-echo images and used to compensate the OA images for the local tissue displacement. In that way, OA sources are highly correlated between subsequent images, while the background is decorrelated and can therefore be reduced by averaging. We show that image contrast in breast phantoms is strongly improved and the detectability of embedded tumours significantly increased using the DCA method.
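
The core DCA idea, undoing the tracked displacement of each frame before averaging so that OA sources add coherently while decorrelated background averages down, can be sketched in one dimension (the displacements, noise level, and single point source are assumed stand-ins; real DCA estimates displacement via 2D speckle tracking):

```python
import numpy as np

rng = np.random.default_rng(1)
n, frames = 200, 9
true = np.zeros(n)
true[100] = 1.0                              # one OA source
shifts = rng.integers(-5, 6, size=frames)    # per-frame tissue displacement (known here)

# Each frame: shifted source plus decorrelated background clutter
stack = [np.roll(true, s) + 0.3 * rng.standard_normal(n) for s in shifts]

# Displacement-compensated averaging: undo the tracked shift, then average
comp = np.mean([np.roll(f, -s) for f, s in zip(stack, shifts)], axis=0)
naive = np.mean(stack, axis=0)               # averaging without compensation

def snr(img):
    return img[100] / np.std(np.delete(img, 100))

print(f"naive-average SNR: {snr(naive):.1f}, DCA SNR: {snr(comp):.1f}")
```

Without compensation the source smears over the shift range, while the compensated average keeps the source sharp and suppresses the background.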

  17. The Value and Feasibility of Farming Differently Than the Local Average

    OpenAIRE

    Morris, Cooper; Dhuyvetter, Kevin; Yeager, Elizabeth A; Regier, Greg

    2018-01-01

    The purpose of this research is to quantify the value of farming differently from the local average and the feasibility of distinguishing particular parts of an operation from the local average. Kansas crop farms are broken down by their farm characteristics, production practices, and management performances. An ordinary least squares regression model is used to quantify the value of having different-from-average characteristics, practices, and management performances. The degree farms have distingui...

  18. A High Performance Pocket-Size System for Evaluations in Acoustic Signal Processing

    Directory of Open Access Journals (Sweden)

    Steeger Gerhard H

    2001-01-01

    Custom-made hardware is attractive for sophisticated signal processing in wearable electroacoustic devices, but has a high initial cost overhead. Thus, signal processing algorithms should be tested thoroughly in real application environments by potential end users prior to the hardware implementation. In addition, the algorithms should be easily alterable during this test phase. A wearable system which meets these requirements has been developed and built. The system is based on the high-performance signal processor Motorola DSP56309. This device also includes high-quality stereo analog-to-digital (ADC) and digital-to-analog (DAC) converters with 20-bit word length each. The available dynamic range exceeds 88 dB. The input and output gains can be adjusted by digitally controlled potentiometers. The housing of the unit is small enough to carry it in a pocket (dimensions 150 × 80 × 25 mm). Software tools have been developed to ease the development of new algorithms. A set of configurable Assembler code modules implements all hardware-dependent software routines and gives easy access to the peripherals and interfaces. A comfortable fitting interface allows easy control of the signal processing unit from a PC, even by assistant personnel. The device has proven to be a helpful means for development and field evaluations of advanced new hearing aid algorithms within interdisciplinary research projects. Now it is offered to the scientific community.

  19. Visualization of Radial Peripapillary Capillaries Using Optical Coherence Tomography Angiography: The Effect of Image Averaging.

    Directory of Open Access Journals (Sweden)

    Shelley Mo

    To assess the effect of image registration and averaging on the visualization and quantification of the radial peripapillary capillary (RPC) network on optical coherence tomography angiography (OCTA). Twenty-two healthy controls were imaged with a commercial OCTA system (AngioVue, Optovue, Inc.). Ten 10×10° scans of the optic disc were obtained, and the most superficial layer (a 50-μm slab extending from the inner limiting membrane) was extracted for analysis. Rigid registration was achieved using ImageJ, and averaging of each 2 to 10 frames was performed in five ~2×2° regions of interest (ROI) located 1° from the optic disc margin. The ROI were automatically skeletonized. Signal-to-noise ratio (SNR), number of endpoints and mean capillary length from the skeleton, capillary density, and mean intercapillary distance (ICD) were measured for the reference and each averaged ROI. Repeated-measures analysis of variance was used to assess statistical significance. Three patients with primary open-angle glaucoma were also imaged to compare RPC density to controls. Qualitatively, vessels appeared smoother and closer to histologic descriptions with an increasing number of averaged frames. Quantitatively, the number of endpoints decreased by 51%, and SNR, mean capillary length, capillary density, and ICD increased by 44%, 91%, 11%, and 4.5%, respectively, from single-frame to 10-frame averaged images. The 10-frame averaged images from the glaucomatous eyes revealed decreased density correlating with visual field defects and retinal nerve fiber layer thinning. OCTA image registration and averaging is a viable and accessible method to enhance the visualization of RPCs, with significant improvements in image quality and RPC quantitative parameters. With this technique, we will be able to non-invasively and reliably study RPC involvement in diseases such as glaucoma.
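
The SNR gain from averaging k registered frames (roughly proportional to the square root of k for uncorrelated noise) can be demonstrated on a toy image; the synthetic "capillary" and noise level below are assumptions, not OCTA data:

```python
import numpy as np

rng = np.random.default_rng(2)
vessel = np.zeros((64, 64))
vessel[32, :] = 1.0                     # idealized capillary on a dark background

# Ten perfectly registered frames with independent speckle-like noise
frames = [vessel + 0.5 * rng.standard_normal((64, 64)) for _ in range(10)]

def snr(img):
    sig = img[32, :].mean()                                     # vessel row
    noise = np.concatenate([img[:30].ravel(), img[35:].ravel()]).std()
    return sig / noise

for k in (1, 4, 10):
    avg = np.mean(frames[:k], axis=0)
    print(f"{k:2d}-frame average: SNR = {snr(avg):.2f}")
```

With real OCTA frames the registration step is essential; here the frames are pre-aligned by construction so only the averaging effect is visible.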

  20. Effects of signal salience and noise on performance and stress in an abbreviated vigil

    Science.gov (United States)

    Helton, William Stokely

    Vigilance or sustained attention tasks traditionally require observers to detect predetermined signals that occur unpredictably over periods of 30 min to several hours (Warm, 1984). These tasks are taxing and have been useful in revealing the effects of stress agents, such as infectious disease and drugs, on human performance (Alluisi, 1969; Damos & Parker, 1994; Warm, 1993). However, their long duration has been an inconvenience. Recently, Temple and his associates (Temple et al., 2000) developed an abbreviated 12-min vigilance task that duplicates many of the findings with longer duration vigils. The present study was designed to explore further the similarity of the abbreviated task to long-duration vigils by investigating the effects of signal salience and jet-aircraft engine noise on performance, operator stress, and coping strategies. Forty-eight observers (24 males and 24 females) were assigned at random to each of four conditions resulting from the factorial combination of signal salience (high- and low-contrast signals) and background noise (quiet and jet-aircraft noise). As is the case with long-duration vigils (Warm, 1993), signal detection in the abbreviated task was poorer for low-salience than for high-salience signals. In addition, stress scores, as indexed by the Dundee Stress State Questionnaire (Matthews, Joiner, Gilliland, Campbell, & Falconer, 1999), were elevated in the low-salience as compared to the high-salience condition. Unlike longer vigils, however (Becker, Warm, Dember, & Hancock, 1996), signal detection in the abbreviated task was better in the presence of aircraft noise than in quiet. Noise also attenuated the stress of the vigil, a result that runs counter to previous findings regarding the effects of noise in a variety of other scenarios (Clark, 1984). Examination of observers' coping responses, as assessed by the Coping Inventory for Task Situations (Matthews & Campbell, 1998), indicated that problem-focused coping was the overwhelming

  1. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    The receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations over a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression that gives the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor in strong oceanic turbulence is also presented.

  2. Supervised learning from human performance at the computationally hard problem of optimal traffic signal control on a network of junctions.

    Science.gov (United States)

    Box, Simon

    2014-12-01

    Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signalized junctions are simulated. A computer game interface is used to enable a human 'player' to control the traffic light settings on the junctions within the simulation. A supervised learning approach based on simple neural network classifiers is used to capture the human player's strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world, and also to a temporal-difference-learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and variance of delay. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable.
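
The supervised-learning step amounts to behavioral cloning: fit a classifier on (traffic state, human-chosen phase) pairs. A minimal sketch with a hypothetical two-approach junction and a made-up "human" policy (serve the longer queue) might look like this; the feature set, network size, and policy are all assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
# Hypothetical training data: queue lengths on two approaches of a junction,
# and the phase a human player chose (0 = serve approach A, 1 = serve approach B)
X = rng.integers(0, 20, size=(500, 2)).astype(float)
y = (X[:, 1] > X[:, 0]).astype(int)      # stand-in for the recorded human strategy

# Simple neural network classifier imitating the human decisions
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
clf.fit(X / 20.0, y)                     # scale queue lengths to [0, 1)
acc = clf.score(X / 20.0, y)
print(f"imitation accuracy on the demonstrations: {acc:.3f}")
```

In the HuTMaC setting the trained classifier would then be queried online with the current network state to choose signal settings.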

  3. A PCA aided cross-covariance scheme for discriminative feature extraction from EEG signals.

    Science.gov (United States)

    Zarei, Roozbeh; He, Jing; Siuly, Siuly; Zhang, Yanchun

    2017-07-01

    Feature extraction of EEG signals plays a significant role in brain-computer interfaces (BCI), as it can significantly affect the performance and the computational time of the system. The main aim of the current work is to introduce an innovative algorithm for acquiring reliable discriminating features from EEG signals to improve classification performance and to reduce the time complexity. This study develops a robust feature extraction method combining principal component analysis (PCA) and the cross-covariance technique (CCOV) for the extraction of discriminatory information from mental states based on EEG signals in BCI applications. We apply the correlation-based variable selection method with the best-first search on the extracted features to identify the best feature set for characterizing the distribution of mental state signals. To verify the robustness of the proposed feature extraction method, three machine learning techniques, multilayer perceptron neural networks (MLP), least squares support vector machine (LS-SVM), and logistic regression (LR), are employed on the obtained features. The proposed methods are evaluated on two publicly available datasets. Furthermore, we evaluate the performance of the proposed methods by comparing them with some recently reported algorithms. The experimental results show that all three classifiers achieve high performance (above 99% overall classification accuracy) for the proposed feature set. Among these classifiers, the MLP and LS-SVM methods yield the best performance for the obtained features. The average sensitivity, specificity, and classification accuracy for these two classifiers are the same: 99.32%, 100%, and 99.66%, respectively, for BCI competition dataset IVa, and 100%, 100%, and 100% for BCI competition dataset IVb. The results also indicate that the proposed methods outperform the most recently reported methods by at least 0.25% average accuracy improvement on dataset IVa. The execution time
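
One plausible reading of the PCA-plus-cross-covariance pipeline, computing cross-covariance features of each epoch against a reference template and then reducing them with PCA, can be sketched as follows (the toy epoch dimensions and the template construction are assumptions, not the paper's exact procedure):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
# Toy EEG epochs: (n_epochs, n_channels, n_samples)
epochs = rng.standard_normal((40, 8, 128))
template = epochs.mean(axis=0)           # assumed per-class reference template

# Cross-covariance of each epoch with the template, flattened as a feature vector
feats = np.array([
    (e - e.mean()) @ (template - template.mean()).T   # (8, 8) cross-covariance
    for e in epochs
]).reshape(40, -1) / 128

# PCA keeps the directions of largest variance as the reduced feature set
reduced = PCA(n_components=5).fit_transform(feats)
print(reduced.shape)
```

The reduced features would then feed a classifier such as MLP or LS-SVM, as in the evaluation described above.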

  4. Ambiguity Towards Multiple Historical Performance Information Signals: Evidence From Indonesian Open-Ended Mutual Fund Investors

    OpenAIRE

    Haris Pratama Loeis; Ruslan Prijadi

    2015-01-01

    This study focuses on the behavior of open-ended mutual fund investors when confronted with multiple information signals of a mutual fund's historical performance. The behavior of investors is reflected in their decision to subscribe or redeem their funds from mutual funds. Moreover, we observe the presence of ambiguity within investors due to multiple information signals, and also their reaction towards it. Our finding shows that open-ended mutual fund investors do not only have sen...

  5. Performance of Optimally Merged Multisatellite Precipitation Products Using the Dynamic Bayesian Model Averaging Scheme Over the Tibetan Plateau

    Science.gov (United States)

    Ma, Yingzhao; Hong, Yang; Chen, Yang; Yang, Yuan; Tang, Guoqiang; Yao, Yunjun; Long, Di; Li, Changmin; Han, Zhongying; Liu, Ronghua

    2018-01-01

    Accurate estimation of precipitation from satellites at high spatiotemporal scales over the Tibetan Plateau (TP) remains a challenge. In this study, we proposed a general framework for blending multiple satellite precipitation data sets using the dynamic Bayesian model averaging (BMA) algorithm. The blending experiment was performed at a daily 0.25° grid scale for 2007-2012 among Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42RT and 3B42V7, the Climate Prediction Center MORPHing technique (CMORPH), and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR). First, the BMA weights were optimized using the expectation-maximization (EM) method for each member on each day at 200 calibration sites and then interpolated to the entire plateau using the ordinary kriging (OK) approach. The merged data were thus produced as weighted sums of the individual members over the plateau. The dynamic BMA approach showed better performance, with a smaller root-mean-square error (RMSE) of 6.77 mm/day, a higher correlation coefficient of 0.592, and a closer Euclid value of 0.833, compared to the individual members at 15 validation sites. Moreover, BMA proved more robust in terms of seasonality, topography, and other parameters than traditional ensemble methods, including simple model averaging (SMA) and one-outlier-removed (OOR) averaging. Error analysis against the state-of-the-art IMERG in the summer of 2014 further showed that the performance of BMA was superior with respect to multisatellite precipitation data merging. This study demonstrates that BMA provides a new solution for blending multiple satellite data sets in regions with limited gauges.
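
Once the EM-estimated weights have been interpolated to a grid cell, the BMA merge itself is a weighted sum of the member estimates. A minimal sketch follows; the weights and the synthetic precipitation fields are assumed stand-ins, since the real weights come from EM against gauge observations:

```python
import numpy as np

rng = np.random.default_rng(5)
# Daily precipitation fields (mm/day) from four satellite products on a small
# grid; gamma-distributed values are synthetic stand-ins for real retrievals
members = rng.gamma(2.0, 2.0, size=(4, 10, 10))

# BMA weights for this day/cell would come from EM calibration; assumed here
w = np.array([0.4, 0.3, 0.2, 0.1])
assert np.isclose(w.sum(), 1.0)          # BMA weights sum to one

merged = np.tensordot(w, members, axes=1)  # weighted sum over the member axis
print(merged.shape)
```

Because the weights are convex, every merged value lies between the member minimum and maximum at that grid cell.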

  6. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    The multi-model ensemble (MME) average is considered the most reliable approach for simulating both present-day and future climates, and it has been a primary reference for conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes with tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both the RCMs and the driving GCMs. Here we propose the Ensemble-averaged Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs to conduct a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: multi-model ensemble, ensemble analysis, ERF, regional climate modeling

  7. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  8. Green Suppliers Performance Evaluation in Belt and Road Using Fuzzy Weighted Average with Social Media Information

    Directory of Open Access Journals (Sweden)

    Kuo-Ping Lin

    2017-12-01

    A decision model for selecting a suitable supplier is key to reducing the environmental impact in green supply chain management for high-tech companies. The traditional fuzzy weighted average (FWA) adopts linguistic variables to determine weights assigned by experts. However, the weights in FWA have not considered the public voice, that is, the viewpoints of consumers, in green supply chain management. This paper focuses on developing a novel decision model for green supplier selection in the One Belt and One Road (OBOR) initiative through a fuzzy weighted average approach with social media. The proposed decision model uses the membership grades of the criteria and sub-criteria and their relative weights, which consider the volume of social media, to establish an analysis matrix for green supplier selection. The proposed fuzzy weighted average approach is then used as an aggregating tool to calculate a synthetic score for each green supplier in the Belt and Road initiative. The final scores of the green suppliers are ordered by a non-fuzzy performance value ranking method to help the consumer make a decision. A case of green supplier selection in the light-emitting diode (LED) industry is used to demonstrate the proposed decision model. The findings demonstrate that (1) the consumer's main concerns in the LED industry are 'Quality' and 'Green products'; hence, the ranking of suitable suppliers in the FWA-with-social-media model differs from that of traditional FWA; (2) OBOR in the LED industry is not fervently discussed in Google and Twitter searches; and (3) the FWA with social media information can objectively analyze green supplier selection because the novel model considers the viewpoints of consumers.
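
With crisp weights, a fuzzy weighted average of triangular fuzzy scores reduces to a component-wise weighted sum, followed by defuzzification for ranking. A minimal sketch (the criteria, scores, and weights below are invented for illustration, not the paper's case data):

```python
import numpy as np

# Triangular fuzzy scores (l, m, u) of one supplier on three criteria, and
# crisp criterion weights (e.g. scaled by social-media discussion volume)
scores = np.array([[0.5, 0.7, 0.9],    # Quality
                   [0.3, 0.5, 0.7],    # Green products
                   [0.6, 0.8, 1.0]])   # Delivery (hypothetical criterion)
weights = np.array([0.5, 0.3, 0.2])    # assumed, sum to 1

# Fuzzy weighted average with crisp weights: combine each (l, m, u) component
fwa = weights @ scores                 # still a triangular number (l, m, u)

# Centroid defuzzification of a triangular number is (l + m + u) / 3
crisp = fwa.mean()
print(fwa, f"-> crisp score {crisp:.3f}")
```

Repeating this per supplier and sorting the crisp scores gives the non-fuzzy performance ranking used for the final decision.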

  9. Performance analysis of NOAA tropospheric signal delay model

    International Nuclear Information System (INIS)

    Ibrahim, Hassan E; El-Rabbany, Ahmed

    2011-01-01

    Tropospheric delay is one of the dominant global positioning system (GPS) errors, which degrades the positioning accuracy. Recent developments in tropospheric modeling rely on the implementation of more accurate numerical weather prediction (NWP) models. In North America, one of the NWP-based tropospheric correction models is the NOAA Tropospheric Signal Delay Model (NOAATrop), developed by the US National Oceanic and Atmospheric Administration (NOAA). Because of its potential to improve GPS positioning accuracy, the NOAATrop model became the focus of many researchers. In this paper, we analyzed the performance of the NOAATrop model and examined its effect on the ionosphere-free-based precise point positioning (PPP) solution. We generated three-year-long tropospheric zenith total delay (ZTD) data series for the NOAATrop model, the Hopfield model, and the International GNSS Service (IGS) final tropospheric correction product, respectively. These data sets were generated at ten IGS reference stations spanning Canada and the United States. We analyzed the NOAATrop ZTD data series and compared them with those of the Hopfield model, using the IGS final tropospheric product as a reference. The analysis shows that the performance of the NOAATrop model is a function of both season (time of the year) and geographical location. However, its performance was superior to that of the Hopfield model in all cases. We further investigated the effect of implementing the NOAATrop model on the convergence and accuracy of the ionosphere-free-based PPP solution. It is shown that use of the NOAATrop model improved the PPP solution convergence by 1%, 10%, and 15% for the latitude, longitude, and height components, respectively.

  10. Time-frequency analysis of non-stationary fusion plasma signals using an improved Hilbert-Huang transform

    International Nuclear Information System (INIS)

    Liu, Yangqing; Tan, Yi; Xie, Huiqiao; Wang, Wenhao; Gao, Zhe

    2014-01-01

    An improved Hilbert-Huang transform method is developed for the time-frequency analysis of non-stationary signals in tokamak plasmas. The maximal overlap discrete wavelet packet transform, rather than the wavelet packet transform, is proposed as a preprocessor to decompose a signal into various narrow-band components. Then, a correlation-coefficient-based selection method is utilized to eliminate the irrelevant intrinsic mode functions obtained from empirical mode decomposition of those narrow-band components. Subsequently, a time-varying vector autoregressive moving average model, instead of Hilbert spectral analysis, is used to compute the Hilbert spectrum, i.e., a three-dimensional time-frequency distribution of the signal. The feasibility and effectiveness of the improved Hilbert-Huang transform method are demonstrated by analyzing a non-stationary simulated signal and actual experimental signals from fusion plasmas.
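
The correlation-based mode selection step can be sketched directly: keep only the modes whose correlation coefficient with the original signal exceeds a threshold. The decomposition itself is skipped here; the "IMFs" below are synthetic stand-ins, and the 0.3 threshold is an assumption:

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 1000)
chirp = np.sin(2 * np.pi * (20 * t + 30 * t**2))   # non-stationary component
signal = chirp + 0.1 * rng.standard_normal(t.size)

# Stand-ins for IMFs from empirical mode decomposition:
# one relevant mode and two spurious ones
imfs = [chirp,
        0.05 * rng.standard_normal(t.size),        # noise mode
        0.02 * np.sin(2 * np.pi * 300 * t)]        # unrelated high-frequency mode

# Keep only modes well correlated with the original signal
threshold = 0.3
kept = [i for i, imf in enumerate(imfs)
        if abs(np.corrcoef(signal, imf)[0, 1]) > threshold]
print("retained IMF indices:", kept)
```

Only the chirp-like mode survives the screening, so the subsequent time-frequency estimate is built from relevant components rather than decomposition artifacts.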

  11. A Comparative Analysis of Techniques for PAPR Reduction of OFDM Signals

    Directory of Open Access Journals (Sweden)

    M. Janjić

    2014-06-01

    In this paper, the problem of high peak-to-average power ratio (PAPR) in orthogonal frequency-division multiplexing (OFDM) signals is studied. Besides describing three techniques for PAPR reduction, SeLective Mapping (SLM), Partial Transmit Sequence (PTS), and Interleaving, a detailed analysis of the performance of these techniques is carried out for various values of the relevant parameters (number of phase sequences, number of interleavers, number of phase factors, and number of subblocks, depending on the applied technique). The techniques are simulated in Matlab. Results are presented in the form of complementary cumulative distribution function (CCDF) curves for the PAPR of 30,000 randomly generated OFDM symbols. Simulations are performed for OFDM signals with 32 and 256 subcarriers, oversampled by a factor of 4. A detailed comparison of the techniques is made based on the Matlab simulation results.
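
A CCDF curve of this kind can be reproduced with a few lines of simulation. QPSK modulation and 2,000 symbols are assumed here to keep the run short (the paper uses 30,000 symbols), with the same 256 subcarriers and 4× oversampling:

```python
import numpy as np

rng = np.random.default_rng(7)
N, n_sym, L = 256, 2000, 4        # subcarriers, symbols, oversampling factor

# QPSK symbols on N subcarriers; zero-padding the middle of the spectrum
# implements L-times oversampling of the time-domain signal
X = rng.choice([1.0, -1.0], (n_sym, N)) + 1j * rng.choice([1.0, -1.0], (n_sym, N))
Xpad = np.concatenate(
    [X[:, :N // 2], np.zeros((n_sym, (L - 1) * N)), X[:, N // 2:]], axis=1)
x = np.fft.ifft(Xpad, axis=1)

p = np.abs(x) ** 2
papr = 10 * np.log10(p.max(axis=1) / p.mean(axis=1))   # per-symbol PAPR in dB

# Empirical CCDF: fraction of symbols whose PAPR exceeds each threshold
for thr in (6, 8, 10):
    print(f"P(PAPR > {thr} dB) = {np.mean(papr > thr):.4f}")
```

Plotting the exceedance probability over a fine grid of thresholds yields the CCDF curves against which SLM, PTS, and Interleaving are compared.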

  12. An alternative scheme of the Bogolyubov's average method

    International Nuclear Information System (INIS)

    Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.

    1990-01-01

    In this paper, the average energy and magnetic moment conservation laws in the drift theory of charged-particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws, and the average is performed afterwards. This scheme is more economical, in terms of time and algebraic calculation, than the usual procedure of Bogolyubov's method. (Author)

  13. 4D MR imaging using robust internal respiratory signal

    International Nuclear Information System (INIS)

    Hui, CheukKai; Wen, Zhifei; Beddar, Sam; Stemkens, Bjorn; Tijssen, R H N; Van den Berg, C A T; Hwang, Ken-Pin

    2016-01-01

    The purpose of this study is to investigate the feasibility of using internal respiratory (IR) surrogates to sort four-dimensional (4D) magnetic resonance (MR) images. The 4D MR images were constructed by acquiring fast 2D cine MR images sequentially, with each slice scanned for more than one breathing cycle. The 4D volume was then sorted retrospectively using the IR signal. In this study, we propose to use multiple low-frequency components in the Fourier space, as well as the anterior body boundary, as potential IR surrogates. From these potential IR surrogates, we used a clustering algorithm to identify those that best represented the respiratory pattern to derive the IR signal. A study with healthy volunteers was performed to assess the feasibility of the proposed IR signal, which we compared with the respiratory signal obtained using respiratory bellows. Overall, 99% of the IR signals matched the bellows signals. The average difference between the end-inspiration times in the IR signal and the bellows signal was 0.18 s in this cohort of matching signals. For the acquired images corresponding to the other 1% of non-matching signal pairs, the respiratory motion shown in the images was coherent with the respiratory phases determined by the IR signal, but not with the bellows signal. This suggests that the IR signal determined by the proposed method could potentially correct a faulty bellows signal. The sorted 4D images showed minimal mismatch artefacts and potential clinical applicability. The proposed IR signal therefore provides a feasible alternative for effectively sorting MR images in 4D. (paper)

  14. Salecker-Wigner-Peres clock and average tunneling times

    International Nuclear Information System (INIS)

    Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.

    2011-01-01

    The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated with the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta-barrier potential. The regime of opaque barriers is investigated, and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).

  15. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies.

  16. Relation between stability and resilience determines the performance of early warning signals under different environmental drivers.

    Science.gov (United States)

    Dai, Lei; Korolev, Kirill S; Gore, Jeff

    2015-08-11

    Shifting patterns of temporal fluctuations have been found to signal critical transitions in a variety of systems, from ecological communities to human physiology. However, failure of these early warning signals in some systems calls for a better understanding of their limitations. In particular, little is known about the generality of early warning signals in different deteriorating environments. In this study, we characterized how multiple environmental drivers influence the dynamics of laboratory yeast populations, which was previously shown to display alternative stable states [Dai et al., Science, 2012]. We observed that both the coefficient of variation and autocorrelation increased before population collapse in two slowly deteriorating environments, one with a rising death rate and the other one with decreasing nutrient availability. We compared the performance of early warning signals across multiple environments as "indicators for loss of resilience." We find that the varying performance is determined by how a system responds to changes in a specific driver, which can be captured by a relation between stability (recovery rate) and resilience (size of the basin of attraction). Furthermore, we demonstrate that the positive correlation between stability and resilience, as the essential assumption of indicators based on critical slowing down, can break down in this system when multiple environmental drivers are changed simultaneously. Our results suggest that the stability-resilience relation needs to be better understood for the application of early warning signals in different scenarios.
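
The two indicators studied here, the coefficient of variation and lag-1 autocorrelation, can be computed on a toy trajectory whose recovery rate slowly decays. The stochastic model and its parameters below are assumptions chosen to exhibit critical slowing down, not the yeast experiment itself:

```python
import numpy as np

rng = np.random.default_rng(8)
# Toy system relaxing toward a fixed point at 1.0 with a recovery rate r that
# slowly deteriorates; fluctuations then grow and become more autocorrelated
T = 2000
x = [1.0]
for t in range(T - 1):
    r = 1.0 - 0.95 * t / T                               # deteriorating environment
    x.append(x[-1] - r * (x[-1] - 1.0) * 0.1 + 0.05 * rng.standard_normal())
x = np.array(x)

def indicators(w):
    """Coefficient of variation and lag-1 autocorrelation of a window."""
    return np.std(w) / np.mean(w), np.corrcoef(w[:-1], w[1:])[0, 1]

cv_early, ac_early = indicators(x[:500])
cv_late, ac_late = indicators(x[-500:])
print(f"CV: {cv_early:.3f} -> {cv_late:.3f}, lag-1 AC: {ac_early:.3f} -> {ac_late:.3f}")
```

Both indicators rise as the recovery rate falls, which is the critical-slowing-down signature; the paper's point is that this link between stability and resilience can break when several drivers change at once.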

  17. SIGNAL RECONSTRUCTION PERFORMANCE OF THE ATLAS HADRONIC TILE CALORIMETER

    CERN Document Server

    Do Amaral Coutinho, Y; The ATLAS collaboration

    2013-01-01

    The Tile Calorimeter for the ATLAS experiment at the CERN Large Hadron Collider (LHC) is a sampling calorimeter with steel as absorber and scintillators as active medium. The scintillators are read out by wavelength-shifting fibers coupled to photomultiplier tubes (PMTs). The analogue signals from the PMTs are amplified, shaped, and digitized by sampling the signal every 25 ns. The TileCal front-end electronics allow reading out the signals produced by about 10,000 channels, measuring energies ranging from ~30 MeV to ~2 TeV. The read-out system is responsible for reconstructing the data in real time, fulfilling the tight time constraint imposed by the ATLAS first-level trigger rate (100 kHz). The main component of the read-out system is the Digital Signal Processor (DSP), which, using an Optimal Filtering reconstruction algorithm, computes for each channel the signal amplitude, time, and quality factor at the required high rate. Currently the ATLAS detector and the LHC are undergoing an upgrade program tha...
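
In its simplest form, ignoring noise correlations and assuming a known phase and pedestal, an Optimal-Filtering-style amplitude estimate is a weighted sum of the digitized samples. The sketch below uses a made-up pulse shape and matched-filter weights, not the actual TileCal constants:

```python
import numpy as np

# Normalized pulse shape sampled every 25 ns (7 samples); these values are a
# stand-in, not the real TileCal shape, and real OF weights also constrain
# the reconstructed time and pedestal
g = np.array([0.0, 0.17, 0.56, 1.0, 0.56, 0.17, 0.0])

# Matched-filter amplitude weights for unit, uncorrelated noise
a = g / (g @ g)

true_amplitude, pedestal = 500.0, 50.0      # hypothetical ADC-count values
rng = np.random.default_rng(9)
samples = pedestal + true_amplitude * g + 2.0 * rng.standard_normal(g.size)

est = a @ (samples - pedestal)              # reconstructed amplitude
print(f"reconstructed amplitude: {est:.1f} ADC counts")
```

Because the estimate is a fixed dot product per channel, it maps naturally onto a DSP that must sustain the 100 kHz first-level trigger rate.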

  18. Performance evaluation of radiation sensors with internal signal amplification based on the BJT effect

    International Nuclear Information System (INIS)

    Bosisio, Luciano; Batignani, Giovanni; Bettarini, Stefano; Boscardin, Maurizio; Dalla Betta, Gian-Franco; Giacomini, Gabriele; Piemonte, Claudio; Verzellesi, Giovanni; Zorzi, Nicola

    2006-01-01

    Prototypes of ionizing radiation detectors with internal signal amplification based on the bipolar transistor effect have been fabricated at ITC-irst (Trento, Italy). Results from the electrical characterization and preliminary functional tests of the devices have been previously reported. Here, we present a more detailed investigation of the performance of this type of detector, with particular attention to their noise and rate limits. Measurements of the signal waveform and of the gain versus frequency dependence are performed by illuminating the devices with, respectively, pulsed or sinusoidally modulated IR light. Pulse-height spectra of X-rays from an 241Am source have been taken with very simple front-end electronics (an LF351 operational amplifier) or by directly reading with an oscilloscope the voltage drop across a load resistor connected to the emitter. An equivalent noise charge (referred to input) of 380 electrons r.m.s. has been obtained with the first setup for a small device, with an active area of 0.5×0.5 mm² and a depleted thickness of 0.6 mm. The corresponding power dissipation in the BJT was 17 μW. The performance limitations of the devices are discussed.

  19. Identifying colon cancer risk modules with better classification performance based on human signaling network.

    Science.gov (United States)

    Qu, Xiaoli; Xie, Ruiqiang; Chen, Lina; Feng, Chenchen; Zhou, Yanyan; Li, Wan; Huang, Hao; Jia, Xu; Lv, Junjie; He, Yuehan; Du, Youwen; Li, Weiguo; Shi, Yuchen; He, Weiming

    2014-10-01

    Identifying differences between normal and tumor samples from a modular perspective may help to improve our understanding of the mechanisms responsible for colon cancer. Many cancer studies have shown that signal transduction and biological pathways are disturbed in disease states, and that expression profiles can distinguish variations in diseases. In this study, we integrated a weighted human signaling network and gene expression profiles to select risk modules associated with tumor conditions. Risk modules selected by our method, used as classification features, gave better classification performance than those from other methods, and one risk module for colon cancer distinguished well between normal/tumor samples and between tumor stages. All genes in the module were annotated to the biological process of positive regulation of cell proliferation and were highly associated with colon cancer. These results suggest that these genes might be potential risk genes for colon cancer. Copyright © 2013. Published by Elsevier Inc.

  20. Power Based Phase-Locked Loop Under Adverse Conditions with Moving Average Filter for Single-Phase System

    Directory of Open Access Journals (Sweden)

    Menxi Xie

    2017-06-01

    Full Text Available A high-performance synchronization method is critical for grid-connected power converters. For single-phase systems, the power-based phase-locked loop (pPLL) uses a multiplier as the phase detector (PD). When the single-phase grid voltage is distorted, the phase error information contains ac disturbances oscillating at integer multiples of the fundamental frequency, which lead to detection error. This paper presents a new scheme based on a moving average filter (MAF) applied in-loop in the pPLL. The signal characteristics of the phase error are discussed in detail. A predictive rule is adopted to compensate the delay introduced by the MAF, thus achieving fast dynamic response. When the frequency deviates from nominal, the estimated frequency is fed back to adjust the filter window length of the MAF and the buffer size of the predictive rule. Simulation and experimental results show that the proposed PLL achieves good performance under adverse grid conditions.
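The MAF at the heart of such a scheme is an FIR filter whose window spans one fundamental period, so it nulls disturbances at integer multiples of the fundamental frequency. A minimal sketch, assuming a 50 Hz grid sampled at 10 kHz and a phase-detector-like signal with a double-frequency ripple (all numbers are illustrative, not from the paper):

```python
import numpy as np

def moving_average_filter(x, window):
    """Causal moving average over `window` samples (all-ones FIR kernel)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel)[:len(x)]

fs, f0 = 10_000, 50          # assumed sampling rate and grid frequency
window = fs // f0            # one fundamental period: 200 samples
t = np.arange(2000) / fs
# Multiplier-PD-like output: a DC phase-error term plus a 2*f0 ripple.
x = 0.5 + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)
y = moving_average_filter(x, window)
# After the filter settles (one window), the 2*f0 ripple is fully removed.
print(round(float(np.std(y[window:])), 6))  # → 0.0
```

Because the window covers a whole number of ripple cycles, the ripple averages to exactly zero; the price is a one-period group delay, which is what the paper's predictive rule compensates.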

  1. Transferability of hydrological models and ensemble averaging methods between contrasting climatic periods

    Science.gov (United States)

    Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor

    2016-10-01

    Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
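Granger-Ramanathan averaging obtains member weights by an unconstrained least-squares regression of the observations on the ensemble members' predictions. A minimal sketch with synthetic data (the noise levels, series length and seed are assumptions, not from the study):

```python
import numpy as np

def gra_weights(preds, obs):
    """Granger-Ramanathan averaging: ordinary-least-squares weights from
    regressing observations on member predictions.
    preds: (n_times, n_models); obs: (n_times,)."""
    w, *_ = np.linalg.lstsq(preds, obs, rcond=None)
    return w

# Two synthetic "models": the truth plus different noise levels.
rng = np.random.default_rng(0)
truth = rng.normal(size=500)
preds = np.column_stack([truth + 0.1 * rng.normal(size=500),
                         truth + 0.2 * rng.normal(size=500)])
w = gra_weights(preds, truth)
ensemble = preds @ w
# On the fit period, OLS weights can do no worse than any single member.
mse = lambda y: float(np.mean((y - truth) ** 2))
print(mse(ensemble) <= min(mse(preds[:, 0]), mse(preds[:, 1])))  # → True
```

Because the single-member solutions (weight vectors (1, 0) and (0, 1)) are in the search space, the fitted combination is guaranteed not to be worse in-sample, which is consistent with the ensemble average outperforming most individual members in the record.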

  2. FIPSER: Performance study of a readout concept with few digitization levels for fast signals

    Energy Technology Data Exchange (ETDEWEB)

    Limyansky, B., E-mail: brent.limyansky@gatech.edu [School of Physics and Center for Relativistic Astrophysics, Georgia Institute of Technology, Atlanta (United States); Reese, R., E-mail: bobbeyreese@gmail.com [School of Physics and Center for Relativistic Astrophysics, Georgia Institute of Technology, Atlanta (United States); Cressler, J.D. [School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta (United States); Otte, A.N.; Taboada, I. [School of Physics and Center for Relativistic Astrophysics, Georgia Institute of Technology, Atlanta (United States); Ulusoy, C. [Dept. of Electrical and Computer Engineering, Michigan State University, East Lansing (United States)

    2016-11-21

    We discuss the performance of a readout system, Fixed Pulse Shape Efficient Readout (FIPSER), to digitize signals from detectors with a fixed pulse shape. In this study we are mainly interested in the readout of fast photon detectors like photomultipliers or silicon photomultipliers, but the concept can be equally applied to the digitization of other detector signals. FIPSER is based on the flash analog-to-digital converter (FADC) concept, but has the potential to lower cost and power consumption by using an order of magnitude fewer discrete voltage levels. Performance is bolstered by combining the discretized signal with knowledge of the underlying pulse shape. Simulated FIPSER data were reconstructed with two independent methods: one using a maximum likelihood method and the other a modified χ² test. Both methods show that utilizing 12 discrete voltage levels with a sampling rate of 4 samples per full width at half maximum (FWHM) of the pulse achieves an amplitude resolution that is better than the Poisson limit for photon-counting experiments. The time resolution achieved in this configuration ranges between 0.02 and 0.16 FWHM, depending on the pulse amplitude. In a situation where the waveform is composed of two consecutive pulses, the pulses can be separated if they are at least 0.05–0.30 FWHM apart, with an amplitude resolution better than 20%.
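The core idea, recovering a pulse amplitude from only a handful of quantization levels by exploiting the known pulse shape, can be sketched as follows. This toy version keeps the record's 12 levels and 4 samples per FWHM but substitutes a simple grid-search fit for the paper's maximum-likelihood and modified χ² reconstructions; the Gaussian pulse shape, full-scale range and amplitude are assumptions.

```python
import numpy as np

def quantize(x, levels, vmax):
    """Uniform few-level quantizer: returns the level index of each sample."""
    step = vmax / levels
    return np.clip(np.floor(x / step), 0, levels - 1).astype(int)

# Assumed setup: Gaussian pulse of known shape, 12 levels over [0, 12] V.
fwhm = 1.0
sigma = fwhm / 2.3548
t = np.arange(-2, 2, fwhm / 4)              # 4 samples per FWHM
template = np.exp(-t**2 / (2 * sigma**2))   # known, fixed pulse shape
true_amp = 7.3
levels, vmax = 12, 12.0
codes = quantize(true_amp * template, levels, vmax)

# Grid search: pick the amplitude whose quantized template best matches
# the observed level codes (a crude stand-in for the chi^2 fit).
amps = np.linspace(0.1, 12.0, 2000)
errs = [np.sum((quantize(a * template, levels, vmax) - codes) ** 2)
        for a in amps]
est = amps[int(np.argmin(errs))]
print(abs(est - true_amp) < 0.5)  # → True
```

Even though each individual sample carries less than 4 bits, the multiple samples of a known shape jointly pin the amplitude down far more tightly than one coarse sample could.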

  3. Testing averaged cosmology with type Ia supernovae and BAO data

    Energy Technology Data Exchange (ETDEWEB)

    Santos, B.; Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil); Coley, A.A. [Department of Mathematics and Statistics, Dalhousie University, Halifax, B3H 3J5 Canada (Canada); Devi, N. Chandrachani, E-mail: thoven@on.br, E-mail: aac@mathstat.dal.ca, E-mail: chandrachaniningombam@astro.unam.mx, E-mail: alcaniz@on.br [Instituto de Astronomía, Universidad Nacional Autónoma de México, Box 70-264, México City, México (Mexico)

    2017-02-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  4. Testing averaged cosmology with type Ia supernovae and BAO data

    International Nuclear Information System (INIS)

    Santos, B.; Alcaniz, J.S.; Coley, A.A.; Devi, N. Chandrachani

    2017-01-01

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  5. Transit-Based Emergency Evacuation with Transit Signal Priority in Sudden-Onset Disaster

    Directory of Open Access Journals (Sweden)

    Ciyun Lin

    2016-01-01

    Full Text Available This study presents methods of transit signal priority without transit-only lanes for a transit-based emergency evacuation in a sudden-onset disaster. Arterial priority signal coordination is optimized when a traffic signal control system provides priority signals for transit vehicles along an evacuation route. Transit signal priority is determined by “transit vehicle arrival time estimation,” “queuing vehicle dissipation time estimation,” “traffic signal status estimation,” “transit signal optimization,” and “arterial traffic signal coordination for transit vehicles in the evacuation route.” It takes advantage of the large capacities of transit vehicles, reduces the evacuation time, and evacuates as many evacuees as possible. The proposed methods were tested on a simulation platform with Paramics V6.0. To evaluate and compare the performance of transit signal priority, three scenarios were simulated. The results indicate that the methods of this study can reduce the travel times of transit vehicles along an evacuation route by 13% and 10%, improve the standard deviation of travel time by 16% and 46%, and decrease the average person delay at a signalized intersection by 22% and 17%, when the traffic flow saturation along the evacuation route is 0.8 and 1.0, respectively.

  6. Timing performance of a self-cancelling turn-signal mechanism in motorcycles based on the ATMega328P microcontroller

    Science.gov (United States)

    Nurbuwat, Adzin Kondo; Eryandi, Kholid Yusuf; Estriyanto, Yuyun; Widiastuti, Indah; Pambudi, Nugroho Agung

    2018-02-01

    The objective of this study is to measure the timing performance of a self-cancelling turn-signal mechanism based on the ATMega328P microcontroller, under low-speed and high-speed treatments on motorcycles commonly used in Indonesia. Timing measurements were made by comparing the self-cancelling turn signal against the standard motorcycle turn time. Low-speed measurements were performed at 15 km/h, 20 km/h and 25 km/h on a U-turn test trajectory, with the steering-wheel turning-angle limit at the potentiometer set to 3°. The high-speed treatment used 30 km/h, 40 km/h, 50 km/h and 60 km/h on an L-turn test track, with the tilt (roll) angle read by an L3G4200D gyroscope sensor. Each speed test was repeated three times. The standard time is the reference for self-cancelling performance: 15.68 s, 11.96 s and 9.34 s at low speed, and 4.63 s, 4.06 s, 3.61 s and 3.13 s at high speed. The self-cancelling turn signal gave 16.10 s, 12.42 s and 10.24 s at low speed, and 5.18 s, 4.51 s, 3.73 s and 3.21 s at high speed. At 15 km/h the motorcycle turns less stably, which makes testing more difficult. Small time deviations indicate that the device works well; the largest deviations were 0.9 s at low speed and 0.55 s at high speed. At low speed the highest deviation occurred in the 25 km/h test, where lean begins to develop and slows the reading of the steering movement; at higher speeds the deviation arises from rapid tilt-sensor readings when turning fast. The timing performance of self-cancelling turn signal decreases as the motorcycle turning

  7. A High Performance Approach to Minimizing Interactions between Inbound and Outbound Signals in Helmet, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose a high performance approach to enhancing communications between astronauts. In the new generation of NASA audio systems for astronauts, inbound signals...

  8. A Novel Blind Source Separation Algorithm and Performance Analysis of Weak Signal against Strong Interference in Passive Radar Systems

    Directory of Open Access Journals (Sweden)

    Chengjie Li

    2016-01-01

    Full Text Available In passive radar systems, recovering a weak object signal mixed with a much stronger signal (jamming) is still a challenging task. In this paper, a novel framework for a passive radar system is designed for weak object signal separation. Firstly, we propose an Interference Cancellation algorithm (IC-algorithm) to extract the mixed weak object signals from the strong jamming. Then, an improved FastICA algorithm with K-means clustering is designed to separate each weak signal from the mixed weak object signals. Finally, we discuss the performance of the proposed method and verify it with several simulations. The experimental results demonstrate the effectiveness of the proposed method.
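The FastICA stage of such a pipeline can be sketched in a few lines. The toy example below (sources, mixing matrix and seeds are assumptions; the record's IC-algorithm and K-means step are not reproduced) whitens a two-channel mixture and applies symmetric FastICA with a tanh nonlinearity:

```python
import numpy as np

def fastica(Z, n_iter=200, seed=6):
    """Symmetric FastICA with tanh nonlinearity on whitened data Z
    (n_components x n_samples); returns the unmixing matrix W."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(Z.shape[0], Z.shape[0]))
    for _ in range(n_iter):
        Y = W @ Z
        G, Gp = np.tanh(Y), 1.0 - np.tanh(Y) ** 2
        W = (G @ Z.T) / Z.shape[1] - np.diag(Gp.mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)          # symmetric decorrelation
        W = U @ Vt
    return W

# Toy mixture: a sine "object signal" and uniform noise, linearly mixed.
rng = np.random.default_rng(8)
n = 5000
S = np.vstack([np.sin(2 * np.pi * 5 * np.arange(n) / n),
               rng.uniform(-1, 1, n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])       # assumed mixing matrix
X = A @ S

# Whiten, then separate.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
Z = (E / np.sqrt(d)).T @ Xc
Y = fastica(Z) @ Z
# Up to permutation and sign, each source is recovered.
match = [max(abs(np.corrcoef(s, y)[0, 1]) for y in Y) for s in S]
print(all(m > 0.95 for m in match))
```

The permutation/sign ambiguity visible in the correlation check is intrinsic to ICA; a clustering step such as the paper's K-means is one way to group and identify the separated components afterwards.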

  9. A new method for quantifying the performance of EEG blind source separation algorithms by referencing a simultaneously recorded ECoG signal.

    Science.gov (United States)

    Oosugi, Naoya; Kitajo, Keiichi; Hasegawa, Naomi; Nagasaka, Yasuo; Okanoya, Kazuo; Fujii, Naotaka

    2017-09-01

    Blind source separation (BSS) algorithms extract neural signals from electroencephalography (EEG) data. However, it is difficult to quantify source separation performance because there is no criterion for dissociating neural signals from noise in EEG signals. This study develops a method for evaluating BSS performance. The idea is that neural signals in EEG can be estimated by comparison with simultaneously measured electrocorticography (ECoG), because the ECoG electrodes cover the majority of the lateral cortical surface and should capture most of the original neural sources in the EEG signals. We measured real EEG and ECoG data and developed an algorithm for evaluating BSS performance. First, EEG signals are separated into EEG components using the BSS algorithm. Second, the EEG components are ranked using the correlation coefficients of the ECoG regression, and the components are grouped into subsets based on their ranks. Third, canonical correlation analysis estimates how much information is shared between the subsets of the EEG components and the ECoG signals. We used our algorithm to compare the performance of BSS algorithms (PCA, AMUSE, SOBI, JADE, fastICA) via the EEG and ECoG data of anesthetized nonhuman primates. The results (best case > JADE = fastICA > AMUSE = SOBI ≥ PCA > random separation) were common to the two subjects. To encourage the further development of better BSS algorithms, our EEG and ECoG data are available on our Web site (http://neurotycho.org/) as a common testing platform. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  10. Average of delta: a new quality control tool for clinical laboratories.

    Science.gov (United States)

    Jones, Graham R D

    2016-01-01

    Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
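The core computation, averaging the most recent n within-patient deltas so that between-subject variation cancels and a new assay bias stands out, can be sketched as follows (patient counts, variance components and the injected bias are assumptions, not from the paper):

```python
import numpy as np

def average_of_delta(results, n):
    """Mean of the most recent n delta values (current minus previous
    result for the same patient); results: (patients, 2) array."""
    deltas = np.diff(results, axis=1).ravel()
    return float(np.mean(deltas[-n:]))

# Hypothetical data: 50 patients with paired results; a +2.0-unit assay
# bias appears between the first and second measurements.
rng = np.random.default_rng(1)
between = rng.normal(100, 10, size=50)         # wide between-subject variation
first = between + rng.normal(0, 1, size=50)    # narrow within-subject variation
second = between + rng.normal(0, 1, size=50) + 2.0
results = np.column_stack([first, second])

avg_delta = average_of_delta(results, 20)
# Each delta cancels the between-subject spread, so the bias stands out.
print(abs(avg_delta - 2.0) < 1.0)  # → True
```

This is the regime the paper identifies as favourable for average of delta: the between-subject spread (SD 10 here) would swamp an average-of-normals check, but it cancels exactly in every delta, leaving the bias visible against only the within-subject plus analytical noise.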

  11. Performance of Narrowband Signal Detection under Correlated Rayleigh Fading Based on Synthetic Array

    Directory of Open Access Journals (Sweden)

    Ali Broumandan

    2009-01-01

    design parameters of probability of detection (Pd) and probability of false alarm (Pfa). An optimum detector based on the Estimator-Correlator (EC) is developed, and its performance is compared with that of a suboptimal Equal-Gain (EG) combiner in different channel correlation scenarios. It is shown that in moderate channel correlation scenarios the detection performance of EC and EG is identical. The sensitivity of the proposed method to knowledge of motion parameters is also investigated. An extensive set of measurements based on CDMA-2000 pilot signals using the static antenna and synthetic array are used to experimentally verify these theoretical findings.

  12. Drug Safety Monitoring in Children: Performance of Signal Detection Algorithms and Impact of Age Stratification

    NARCIS (Netherlands)

    O.U. Osokogu (Osemeke); C. Dodd (Caitlin); A.C. Pacurariu (Alexandra C.); F. Kaguelidou (Florentia); D.M. Weibel (Daniel); M.C.J.M. Sturkenboom (Miriam)

    2016-01-01

    Introduction: Spontaneous reports of suspected adverse drug reactions (ADRs) can be analyzed to yield additional drug safety evidence for the pediatric population. Signal detection algorithms (SDAs) are required for these analyses; however, the performance of SDAs in the pediatric

  13. Removing ECG Artifact from the Surface EMG Signal Using Adaptive Subtraction Technique

    Science.gov (United States)

    Abbaspour, S; Fallah, A

    2014-01-01

    Background: The electrocardiogram artifact is a major contamination in electromyogram signals when the electromyogram is recorded from upper trunk muscles, and because of it the contaminated electromyogram is not useful. Objective: Removing electrocardiogram contamination from electromyogram signals. Methods: In this paper, the clean electromyogram signal, the electrocardiogram artifact and the electrocardiogram signal were recorded from leg muscles, the pectoralis major muscle of the left side and V4, respectively. After pre-processing, a contaminated electromyogram signal is simulated as a combination of clean electromyogram and electrocardiogram artifact. Then, the contaminated electromyogram is cleaned using the adaptive subtraction method. This method comprises four steps: (1) QRS detection, (2) formation of an electrocardiogram template by averaging the electrocardiogram complexes, (3) low-pass filtering to remove undesirable artifacts, (4) subtraction. Results: Performance of the method is evaluated using qualitative criteria (power spectral density and coherence) and quantitative criteria (signal-to-noise ratio, relative error and cross-correlation). The resulting signal-to-noise ratio, relative error and cross-correlation are 10.493, 0.04 and 97%, respectively. Finally, the proposed method is compared with some existing methods. Conclusion: The results indicate that the adaptive subtraction method is somewhat effective at removing the electrocardiogram artifact from a contaminated electromyogram signal and has an acceptable result. PMID:25505766
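The adaptive subtraction idea, building an ECG template by averaging the segments around detected QRS peaks and subtracting it at each peak, can be sketched as below. QRS detection and the low-pass filtering step are omitted; the peak positions, amplitudes and noise levels are assumptions for illustration:

```python
import numpy as np

def adaptive_subtraction(emg, peaks, half_width):
    """Remove an ECG-like artifact from contaminated EMG: average the
    segments around the QRS peaks into one template, then subtract the
    template at each peak (template low-pass filtering omitted here)."""
    segs = np.array([emg[p - half_width:p + half_width] for p in peaks])
    template = segs.mean(axis=0)
    clean = emg.copy()
    for p in peaks:
        clean[p - half_width:p + half_width] -= template
    return clean

# Synthetic example: EMG as white noise, ECG as an identical spike
# repeated at known "QRS" positions.
rng = np.random.default_rng(2)
n, half_width = 4000, 50
emg = rng.normal(0, 0.2, size=n)
spike = 2.0 * np.exp(-np.linspace(-3, 3, 2 * half_width) ** 2)
peaks = np.arange(300, n - 300, 500)
contaminated = emg.copy()
for p in peaks:
    contaminated[p - half_width:p + half_width] += spike

cleaned = adaptive_subtraction(contaminated, peaks, half_width)
# The residual after subtraction is far closer to the clean EMG.
err_before = float(np.mean((contaminated - emg) ** 2))
err_after = float(np.mean((cleaned - emg) ** 2))
print(err_after < 0.05 * err_before)  # → True
```

Averaging over many complexes is what makes the template ECG-dominated: uncorrelated EMG activity shrinks by roughly the square root of the number of averaged complexes, so the subtraction removes mostly ECG.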

  14. Average Throughput Performance of Myopic Policy in Energy Harvesting Wireless Sensor Networks.

    Science.gov (United States)

    Gul, Omer Melih; Demirekler, Mubeccel

    2017-09-26

    This paper considers a single-hop wireless sensor network where a fusion center collects data from M energy harvesting wireless sensors. The harvested energy is stored losslessly in an infinite-capacity battery at each sensor. In each time slot, the fusion center schedules K sensors for data transmission over K orthogonal channels. The fusion center does not have direct knowledge on the battery states of sensors, or the statistics of their energy harvesting processes. The fusion center only has information of the outcomes of previous transmission attempts. It is assumed that the sensors are data backlogged, there is no battery leakage and the communication is error-free. An energy harvesting sensor can transmit data to the fusion center whenever being scheduled only if it has enough energy for data transmission. We investigate average throughput of Round-Robin type myopic policy both analytically and numerically under an average reward (throughput) criterion. We show that Round-Robin type myopic policy achieves optimality for some class of energy harvesting processes although it is suboptimal for a broad class of energy harvesting processes.
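A Round-Robin myopic policy of this kind can be sketched with a small simulation (harvest probabilities, slot count and the Bernoulli energy-arrival model are assumptions for illustration; the paper's setting is more general):

```python
import random

def round_robin_throughput(harvest_p, k, slots, seed=5):
    """Simulate a Round-Robin myopic policy: each slot, schedule the next
    k sensors cyclically; a scheduled sensor transmits one packet iff its
    (unobserved) battery holds at least one unit of energy."""
    rng = random.Random(seed)
    n = len(harvest_p)
    battery = [0] * n
    delivered = 0
    for t in range(slots):
        for i in range(n):                    # Bernoulli energy arrivals
            battery[i] += rng.random() < harvest_p[i]
        for j in range(k):                    # cyclic scheduling, no battery info
            s = (t * k + j) % n
            if battery[s] >= 1:
                battery[s] -= 1
                delivered += 1
    return delivered / slots

tp = round_robin_throughput([0.20, 0.30, 0.10, 0.25], k=1, slots=20000)
# Long-run bound: each sensor delivers at most min(p_i, k/M) packets/slot,
# here 0.20 + 0.25 + 0.10 + 0.25 = 0.80 packets per slot.
print(0.70 < tp < 0.81)  # → True
```

The simulation illustrates why the policy can be near-optimal without battery knowledge: with infinite lossless batteries, a sensor scheduled every M/k slots rarely finds itself without enough energy unless its harvest rate is the binding constraint.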

  15. STAR Performance with SPEAR (Signal Processing Electronic Attack RFIC)

    Science.gov (United States)

    2017-03-01


  16. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high-resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectrosc. Radiat. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  17. High-Performance Signal Detection for Adverse Drug Events using MapReduce Paradigm.

    Science.gov (United States)

    Fan, Kai; Sun, Xingzhi; Tao, Ying; Xu, Linhao; Wang, Chen; Mao, Xianling; Peng, Bo; Pan, Yue

    2010-11-13

    Post-marketing pharmacovigilance is important for public health, as many Adverse Drug Events (ADEs) are unknown when drugs are approved for marketing. However, due to the large number of reported drugs and drug combinations, detecting ADE signals by mining these reports is becoming a challenging task in terms of computational complexity. Recently, a parallel programming model, MapReduce, was introduced by Google to support large-scale data-intensive applications. In this study, we propose a MapReduce-based algorithm for a common ADE detection approach, the Proportional Reporting Ratio (PRR), and test it in mining spontaneous ADE reports from the FDA. The purpose is to investigate the possibility of using the MapReduce principle to speed up biomedical data mining tasks, with this pharmacovigilance case as one specific example. The results demonstrate that the MapReduce programming model can improve the performance of a common signal detection algorithm for pharmacovigilance in a distributed computation environment at approximately linear speedup rates.
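The PRR itself is a simple 2x2 contingency-table statistic, PRR = [a/(a+b)] / [c/(c+d)], where a counts reports with both drug and event, b drug without event, c event without drug, and d neither. In a MapReduce setting, the counting loop below is what the map phase would distribute across report shards before a reduce phase sums the cells. A toy sketch with hypothetical drug and event names:

```python
def prr(reports, drug, event):
    """Proportional Reporting Ratio for a (drug, event) pair.
    reports: iterable of (drugs_set, events_set) tuples."""
    a = b = c = d = 0
    for drugs, events in reports:       # in MapReduce: mapped per shard
        if drug in drugs:
            if event in events: a += 1
            else:               b += 1
        else:
            if event in events: c += 1
            else:               d += 1
    return (a / (a + b)) / (c / (c + d))

# Toy spontaneous-report set (hypothetical names and counts):
# drugX: 30 nausea / 70 other; drugY: 10 nausea / 90 other.
reports = ([({"drugX"}, {"nausea"})] * 30 + [({"drugX"}, {"rash"})] * 70 +
           [({"drugY"}, {"nausea"})] * 10 + [({"drugY"}, {"headache"})] * 90)
print(round(prr(reports, "drugX", "nausea"), 6))  # → 3.0
```

Because the four cells are plain sums, they combine associatively, which is exactly the property that makes the computation embarrassingly parallel under MapReduce.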

  18. Improve Gear Fault Diagnosis and Severity Indexes Determinations via Time Synchronous Average

    Directory of Open Access Journals (Sweden)

    Mohamed El Morsy

    2016-11-01

    Full Text Available In order to reduce operation and maintenance costs, prognostics and health management (PHM) of geared systems needs effective gearbox fault detection tools. A PHM system allows less costly maintenance because it can inform operators of needed repairs before a fault causes collateral damage to the gearbox. In this article, the time synchronous average (TSA) technique and complex continuous wavelet analysis are used as the gear fault detection approach. In the first step, the periodic waveform is extracted from the noisy measured signal; this is the main value of TSA for gearbox signal analysis, as it allows the vibration signature of the gear under analysis to be separated from other gears and noise sources in the gearbox that are not synchronous with the faulty gear. In the second step, complex wavelet analysis is used in the case of multiple faults in the same gear. The signal is phase-locked with the angular position of a shaft within the system. The main aim of this research is to improve gear fault diagnosis and severity index determination based on the TSA of signals measured on a passenger vehicle gearbox under different operating conditions. In addition, correcting for variations in shaft speed, which would otherwise spread spectral energy into adjacent gear mesh bins, helps in detecting the gear fault position (faulted tooth or teeth) and improves the Root Mean Square (RMS), Kurtosis, and Peak Pulse severity indexes used for maintenance, prognostics and health management (PHM) purposes. The open-loop test stand is equipped with two dynamometers and the investigated vehicle gearbox of a mid-size passenger car; the total power is taken off from one side only. Reference Number: www.asrongo.org/doi:4.2016.1.1.6
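Time synchronous averaging reduces to resampling the signal onto whole shaft revolutions and averaging them, which attenuates everything not synchronous with that shaft. A minimal sketch on synthetic data (samples per revolution, mesh-tone order and noise level are assumptions; real use requires angular resampling from a tachometer signal first):

```python
import numpy as np

def tsa(signal, samples_per_rev, n_revs):
    """Time synchronous average: reshape into whole revolutions and
    average, attenuating components not synchronous with the shaft."""
    x = signal[:samples_per_rev * n_revs].reshape(n_revs, samples_per_rev)
    return x.mean(axis=0)

# Synthetic gear signal: a shaft-synchronous mesh tone (20 cycles/rev)
# buried in asynchronous noise; 64 revolutions of 256 samples each.
rng = np.random.default_rng(3)
spr, revs = 256, 64
angle = 2 * np.pi * np.arange(spr) / spr
mesh = np.sin(20 * angle)
raw = np.tile(mesh, revs) + rng.normal(0, 1.0, spr * revs)

avg = tsa(raw, spr, revs)
# Asynchronous noise power drops by ~1/n_revs; the mesh tone survives.
residual = float(np.std(avg - mesh))
print(residual < 0.3)  # → True
```

Averaging 64 revolutions cuts the noise standard deviation by a factor of 8, which is why TSA isolates the vibration signature of the one gear locked to the chosen shaft.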

  19. Quantification of signal detection performance degradation induced by phase-retrieval in propagation-based x-ray phase-contrast imaging

    Science.gov (United States)

    Chou, Cheng-Ying; Anastasio, Mark A.

    2016-04-01

    In propagation-based X-ray phase-contrast (PB XPC) imaging, the measured image contains a mixture of absorption- and phase-contrast. To obtain separate images of the projected absorption and phase (i.e., refractive) properties of a sample, phase retrieval methods can be employed. It has been suggested that phase-retrieval can always improve image quality in PB XPC imaging. However, when objective (task-based) measures of image quality are employed, this is not necessarily true and phase retrieval can be detrimental. In this work, signal detection theory is utilized to quantify the performance of a Hotelling observer (HO) for detecting a known signal in a known background. Two cases are considered. In the first case, the HO acts directly on the measured intensity data. In the second case, the HO acts on either the retrieved phase or absorption image. We demonstrate that the performance of the HO is superior when acting on the measured intensity data. The loss of task-specific information induced by phase-retrieval is quantified by computing the efficiency of the HO as the ratio of the test statistic signal-to-noise ratio (SNR) for the two cases. The effect of the system geometry on this efficiency is systematically investigated. Our findings confirm that phase-retrieval can impair signal detection performance in XPC imaging.
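The Hotelling observer detectability used here is SNR² = Δsᵀ K⁻¹ Δs for a known signal difference Δs in noise with covariance K. An invertible linear processing step leaves this quantity unchanged, so any efficiency loss must come from non-invertible steps such as phase retrieval. A small numeric sketch of that invariance (signal shape, covariance construction and seed are assumptions):

```python
import numpy as np

def hotelling_snr2(delta_s, cov):
    """Hotelling observer detectability: SNR^2 = ds^T K^{-1} ds."""
    return float(delta_s @ np.linalg.solve(cov, delta_s))

# Toy setup: Gaussian signal profile in correlated Gaussian noise.
rng = np.random.default_rng(4)
n = 16
delta_s = np.exp(-((np.arange(n) - 8) ** 2) / 8.0)
A = rng.normal(size=(n, n))
cov = A @ A.T / n + np.eye(n)            # symmetric positive definite
snr2_raw = hotelling_snr2(delta_s, cov)

# Apply an (almost surely) invertible linear "retrieval" T: the signal
# becomes T ds and the covariance T K T^T, and the SNR^2 is preserved.
T = rng.normal(size=(n, n)) + 2 * np.eye(n)
snr2_T = hotelling_snr2(T @ delta_s, T @ cov @ T.T)
print(abs(snr2_T - snr2_raw) < 1e-6)  # → True
```

The efficiency the paper computes is the ratio of such SNR² values before and after retrieval; by the data-processing argument sketched above it can never exceed one, and it falls below one exactly when the retrieval discards task-relevant information.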

  20. Optimal transformation for correcting partial volume averaging effects in magnetic resonance imaging

    International Nuclear Information System (INIS)

    Soltanian-Zadeh, H.; Windham, J.P.; Yagle, A.E.

    1993-01-01

    Segmentation of a feature of interest while correcting for partial volume averaging effects is a major tool for identification of hidden abnormalities, fast and accurate volume calculation, and three-dimensional visualization in the field of magnetic resonance imaging (MRI). The authors present the optimal transformation for simultaneous segmentation of a desired feature and correction of partial volume averaging effects, while maximizing the signal-to-noise ratio (SNR) of the desired feature. It is proved that correction of partial volume averaging effects requires the removal of the interfering features from the scene. It is also proved that correction of partial volume averaging effects can be achieved merely by a linear transformation. It is finally shown that the optimal transformation matrix is easily obtained using the Gram-Schmidt orthogonalization procedure, which is numerically stable. Applications of the technique to MRI simulation, phantom, and brain images are shown. They show that in all cases the desired feature is segmented from the interfering features and partial volume information is visualized in the resulting transformed images.
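The Gram-Schmidt step mentioned here can be sketched directly: orthonormalize the interfering-feature signatures, then project them out of the desired feature's signature to obtain a linear weight vector that suppresses the interferers. The 3-channel signature values below are assumptions for illustration, not from the paper:

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt orthonormalization of the rows of `vectors`."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)
        if np.linalg.norm(w) > 1e-12:        # skip linearly dependent rows
            basis.append(w / np.linalg.norm(w))
    return np.array(basis)

# Hypothetical 3-channel MRI signatures: two interfering features and
# one desired feature.
interferers = np.array([[1.0, 0.5, 0.2],
                        [0.3, 1.0, 0.4]])
desired = np.array([0.2, 0.3, 1.0])

B = gram_schmidt(interferers)
# Weight vector orthogonal to both interferers: project them out.
w = desired - B.T @ (B @ desired)
print(abs(float(w @ interferers[0])) < 1e-9 and
      abs(float(w @ interferers[1])) < 1e-9)  # → True
```

Applying w across the multi-channel images nulls the interfering features while retaining a positive response to the desired one, which is the linear-transformation result the abstract describes (the paper's full construction also maximizes the SNR of that response).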

  1. MEASUREMENT: ACCOUNTING FOR RELIABILITY IN PERFORMANCE ESTIMATES.

    Science.gov (United States)

    Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne

    2014-01-01

    When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it's important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it's important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. 
A physician leader who is interested in catalyzing performance improvement
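The shrinkage toward the provider-group average described above can be sketched with a standard empirical-Bayes reliability weight: reliability is the between-physician signal variance divided by signal plus sampling variance, and the observed rate is pulled toward the group mean in proportion to (1 - reliability). The function name and all numbers are hypothetical, and a binomial sampling variance is assumed for a rate measure:

```python
def reliability_adjust(observed_rate, n_cases, group_mean, between_var):
    """Shrink an observed physician rate toward the group mean by its
    estimated reliability (hypothetical empirical-Bayes sketch)."""
    within_var = observed_rate * (1 - observed_rate) / n_cases  # sampling noise
    reliability = between_var / (between_var + within_var)
    adjusted = group_mean + reliability * (observed_rate - group_mean)
    return adjusted, reliability

group_mean = 0.12      # hypothetical group complication rate
between_var = 0.0004   # hypothetical physician-to-physician variance

# A physician with few cases is pulled strongly toward the group mean...
low_n, r_low = reliability_adjust(0.30, n_cases=10,
                                  group_mean=group_mean, between_var=between_var)
# ...while one with many cases keeps most of the observed departure.
high_n, r_high = reliability_adjust(0.30, n_cases=1000,
                                    group_mean=group_mean, between_var=between_var)
print(round(low_n, 4), round(high_n, 4))
```

The same observed rate of 0.30 is thus judged mostly noise at 10 cases but mostly signal at 1000 cases, which is exactly the distinction the case study is after.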

  2. Do We Perceive Others Better than Ourselves? A Perceptual Benefit for Noise-Vocoded Speech Produced by an Average Speaker.

    Directory of Open Access Journals (Sweden)

    William L Schuerman

    Full Text Available In different tasks involving action perception, performance has been found to be facilitated when the presented stimuli were produced by the participants themselves rather than by another participant. These results suggest that the same mental representations are accessed during both production and perception. However, with regard to spoken word perception, evidence also suggests that listeners' representations for speech reflect the input from their surrounding linguistic community rather than their own idiosyncratic productions. Furthermore, speech perception is heavily influenced by indexical cues that may lead listeners to frame their interpretations of incoming speech signals with regard to speaker identity. In order to determine whether word recognition evinces similar self-advantages as found in action perception, it was necessary to eliminate indexical cues from the speech signal. We therefore asked participants to identify noise-vocoded versions of Dutch words that were based on either their own recordings or those of a statistically average speaker. The majority of participants were more accurate for the average speaker than for themselves, even after taking into account differences in intelligibility. These results suggest that the speech representations accessed during perception of noise-vocoded speech are more reflective of the input of the speech community, and hence that speech perception is not necessarily based on representations of one's own speech.

  3. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  4. Improved stochastic resonance algorithm for enhancement of signal-to-noise ratio of high-performance liquid chromatographic signal

    International Nuclear Information System (INIS)

    Xie Shaofei; Xiang Bingren; Deng Haishan; Xiang Suyun; Lu Jun

    2007-01-01

Based on the theory of stochastic resonance, an improved stochastic resonance algorithm with a new criterion for optimizing system parameters to enhance the signal-to-noise ratio (SNR) of HPLC/UV chromatographic signals for trace analysis is presented in this study. Compared with the conventional criterion in stochastic resonance, the proposed one ensures a satisfactory SNR as well as a good chromatographic peak shape in the output signal. Application of the criterion to experimental weak HPLC/UV signals was investigated, and the results showed an excellent quantitative relationship between different concentrations and responses.
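The front end of such algorithms is typically the overdamped bistable system dx/dt = ax - bx³ + s(t), driven by the noisy input; the noise helps a weak deterministic component push the state between the two wells. A minimal Euler-integration sketch, with illustrative parameters rather than the paper's optimized ones, and a synthetic weak peak standing in for a chromatographic signal:

```python
import numpy as np

def stochastic_resonance(signal, a=1.0, b=1.0, dt=0.01):
    """Euler integration of the overdamped bistable system
    dx/dt = a*x - b*x**3 + s(t), the classic stochastic-resonance front end."""
    x = np.zeros_like(signal)
    for i in range(1, len(signal)):
        x[i] = x[i - 1] + dt * (a * x[i - 1] - b * x[i - 1] ** 3 + signal[i - 1])
    return x

rng = np.random.default_rng(1)
t = np.arange(0.0, 200.0, 0.01)
# Hypothetical weak Gaussian "chromatographic" peak buried in noise.
weak_peak = 0.3 * np.exp(-0.5 * ((t - 100.0) / 2.0) ** 2)
noisy = weak_peak + 0.5 * rng.normal(size=t.size)
out = stochastic_resonance(noisy)
```

Parameter optimization (the subject of the paper's new criterion) is not shown; in practice a and b would be tuned against an SNR-plus-peak-shape objective.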

  5. AMA- and RWE- Based Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Drift Signal.

    Science.gov (United States)

    Yang, Gongliu; Liu, Yuanyuan; Li, Ming; Song, Shunguang

    2015-10-23

An improved double-factor adaptive Kalman filter called AMA-RWE-DFAKF is proposed to denoise fiber optic gyroscope (FOG) drift signal in both static and dynamic conditions. The first factor is the Kalman gain, updated by random weighting estimation (RWE) of the covariance matrix of the innovation sequence at any time to ensure the lowest noise level of the output, but the inertia of the KF response increases in dynamic conditions. To decrease the inertia, the second factor is the covariance matrix of the predicted state vector, adjusted by RWE only when discontinuities are detected by an adaptive moving average (AMA). The AMA-RWE-DFAKF is applied for denoising FOG static and dynamic signals, and its performance is compared with conventional KF (CKF), RWE-based adaptive KF with gain correction (RWE-AKFG), and AMA- and RWE- based dual mode adaptive KF (AMA-RWE-DMAKF). Results of Allan variance on the static signal and root mean square error (RMSE) on the dynamic signal show that the proposed algorithm outperforms all the considered methods in denoising the FOG signal.
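The two-factor idea can be sketched in a much-simplified scalar form: a random-walk Kalman filter smooths the drift signal, and a moving average of recent innovations stands in for the AMA detector, inflating the predicted-state covariance when a discontinuity appears so the filter re-converges quickly. This is an illustrative stand-in, not the paper's AMA-RWE scheme, and all parameters are hypothetical:

```python
import numpy as np

def akf_denoise(z, q=1e-6, r=0.01, window=20, thresh=4.0):
    """Scalar random-walk Kalman filter with a moving-average discontinuity
    check on the innovations (simplified adaptive-KF sketch)."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    innov = []
    for k, zk in enumerate(z):
        p = p + q                           # predict (random-walk model)
        s = p + r                           # innovation covariance
        nu = zk - x                         # innovation
        innov.append(nu)
        ma = np.mean(innov[-window:])       # adaptive moving average
        if abs(ma) > thresh * np.sqrt(s):   # discontinuity detected
            p = p + 100.0 * abs(ma)         # inflate predicted covariance
            s = p + r
        g = p / s                           # Kalman gain
        x = x + g * nu
        p = (1 - g) * p
        out[k] = x
    return out

rng = np.random.default_rng(2)
truth = np.concatenate([np.zeros(500), np.ones(500)])  # step at sample 500
z = truth + 0.1 * rng.normal(size=1000)
est = akf_denoise(z)
```

Without the inflation step the tiny process noise q would make the filter track the step only very slowly; with it, the filter keeps heavy smoothing in the static segments yet snaps to the new level within a few samples.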

  6. Identification of Anisomerous Motor Imagery EEG Signals Based on Complex Algorithms.

    Science.gov (United States)

    Liu, Rensong; Zhang, Zhiwen; Duan, Feng; Zhou, Xin; Meng, Zixuan

    2017-01-01

Motor imagery (MI) electroencephalograph (EEG) signals are widely applied in brain-computer interface (BCI). However, classified MI states are limited, and their classification accuracy rates are low because of the characteristics of nonlinearity and nonstationarity. This study proposes a novel MI pattern recognition system that is based on complex algorithms for classifying MI EEG signals. In electrooculogram (EOG) artifact preprocessing, band-pass filtering is performed to obtain the frequency band of MI-related signals, and then, canonical correlation analysis (CCA) combined with wavelet threshold denoising (WTD) is used for EOG artifact preprocessing. We propose a regularized common spatial pattern (R-CSP) algorithm for EEG feature extraction by incorporating the principle of generic learning. A new classifier combining the K-nearest neighbor (KNN) and support vector machine (SVM) approaches is used to classify four anisomerous states, namely, imaginary movements with the left hand, right foot, and right shoulder and the resting state. The highest classification accuracy rate is 92.5%, and the average classification accuracy rate is 87%. The proposed complex algorithm identification method can significantly improve the identification rate of the minority samples and the overall classification performance.
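The CSP step above can be sketched numerically: average the trace-normalized class covariances, shrink each toward the identity (a simple form of the regularization), whiten the composite covariance, and take the extreme eigenvectors of the whitened class-A covariance as spatial filters. The function and the synthetic two-class EEG trials below are illustrative assumptions, not the paper's R-CSP formulation:

```python
import numpy as np
from numpy.linalg import eigh

def rcsp_filters(trials_a, trials_b, beta=0.1, n_filt=2):
    """Regularized-CSP sketch: shrunken class covariances, joint
    diagonalization via whitening + eigendecomposition."""
    def avg_cov(trials):
        covs = []
        for X in trials:                    # X: channels x samples
            C = X @ X.T
            covs.append(C / np.trace(C))
        return np.mean(covs, axis=0)

    I = np.eye(trials_a[0].shape[0])
    Ca = (1 - beta) * avg_cov(trials_a) + beta * I   # shrinkage regularization
    Cb = (1 - beta) * avg_cov(trials_b) + beta * I

    d, U = eigh(Ca + Cb)                    # whiten the composite covariance
    P = U @ np.diag(d ** -0.5) @ U.T
    w, V = eigh(P @ Ca @ P)                 # diagonalize whitened Ca
    W = V.T @ P                             # rows are spatial filters
    order = np.argsort(w)                   # keep the most discriminative ends
    pick = np.r_[order[:n_filt // 2], order[-(n_filt - n_filt // 2):]]
    return W[pick]

rng = np.random.default_rng(6)
def make_trials(scales, n=20):
    return [np.diag(scales) @ rng.normal(size=(3, 200)) for _ in range(n)]

trials_a = make_trials([3.0, 1.0, 1.0])     # class A: strong channel 0
trials_b = make_trials([1.0, 3.0, 1.0])     # class B: strong channel 1
W = rcsp_filters(trials_a, trials_b)
```

Log-variances of the filtered trials would then feed the KNN/SVM classifier stage.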

  7. Identification of Anisomerous Motor Imagery EEG Signals Based on Complex Algorithms

    Science.gov (United States)

    Zhang, Zhiwen; Duan, Feng; Zhou, Xin; Meng, Zixuan

    2017-01-01

    Motor imagery (MI) electroencephalograph (EEG) signals are widely applied in brain-computer interface (BCI). However, classified MI states are limited, and their classification accuracy rates are low because of the characteristics of nonlinearity and nonstationarity. This study proposes a novel MI pattern recognition system that is based on complex algorithms for classifying MI EEG signals. In electrooculogram (EOG) artifact preprocessing, band-pass filtering is performed to obtain the frequency band of MI-related signals, and then, canonical correlation analysis (CCA) combined with wavelet threshold denoising (WTD) is used for EOG artifact preprocessing. We propose a regularized common spatial pattern (R-CSP) algorithm for EEG feature extraction by incorporating the principle of generic learning. A new classifier combining the K-nearest neighbor (KNN) and support vector machine (SVM) approaches is used to classify four anisomerous states, namely, imaginary movements with the left hand, right foot, and right shoulder and the resting state. The highest classification accuracy rate is 92.5%, and the average classification accuracy rate is 87%. The proposed complex algorithm identification method can significantly improve the identification rate of the minority samples and the overall classification performance. PMID:28874909

  8. Adaptive signal processor

    Energy Technology Data Exchange (ETDEWEB)

    Walz, H.V.

    1980-07-01

An experimental, general purpose adaptive signal processor system has been developed, utilizing a quantized (clipped) version of the Widrow-Hoff least-mean-square adaptive algorithm developed by Moschner. The system accommodates 64 adaptive weight channels with 8-bit resolution for each weight. Internal weight update arithmetic is performed with 16-bit resolution, and the system error signal is measured with 12-bit resolution. An adapt cycle of adjusting all 64 weight channels is accomplished in 8 μsec. Hardware of the signal processor utilizes primarily Schottky-TTL type integrated circuits. A prototype system with 24 weight channels has been constructed and tested. This report presents details of the system design and describes basic experiments performed with the prototype signal processor. Finally, some system configurations and applications for this adaptive signal processor are discussed.
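The quantized (clipped) Widrow-Hoff update replaces the multiplications of ordinary LMS with sign operations, w += mu * sign(e) * sign(x), which is what makes a fast fixed-point hardware adapt cycle feasible. A software sketch identifying a hypothetical 4-tap system (parameters illustrative):

```python
import numpy as np

def sign_sign_lms(x, d, n_taps=4, mu=0.005):
    """Clipped (sign-sign) LMS: the weight update uses only the signs of
    the error and of the input samples."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]   # most recent sample first
        y[k] = w @ u
        e = d[k] - y[k]                     # error signal
        w += mu * np.sign(e) * np.sign(u)   # quantized Widrow-Hoff update
    return w, y

rng = np.random.default_rng(3)
x = rng.normal(size=20000)
h = np.array([0.8, -0.4, 0.2, 0.1])         # unknown system to identify
d = np.convolve(x, h)[:len(x)]              # desired (reference) signal
w, y = sign_sign_lms(x, d)
print(np.round(w, 2))
```

The converged weights approach h, with a small residual jitter set by the step size mu; that misadjustment is the price paid for the hardware-friendly update.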

  9. Adaptive signal processor

    International Nuclear Information System (INIS)

    Walz, H.V.

    1980-07-01

    An experimental, general purpose adaptive signal processor system has been developed, utilizing a quantized (clipped) version of the Widrow-Hoff least-mean-square adaptive algorithm developed by Moschner. The system accommodates 64 adaptive weight channels with 8-bit resolution for each weight. Internal weight update arithmetic is performed with 16-bit resolution, and the system error signal is measured with 12-bit resolution. An adapt cycle of adjusting all 64 weight channels is accomplished in 8 μsec. Hardware of the signal processor utilizes primarily Schottky-TTL type integrated circuits. A prototype system with 24 weight channels has been constructed and tested. This report presents details of the system design and describes basic experiments performed with the prototype signal processor. Finally some system configurations and applications for this adaptive signal processor are discussed

  10. Hand posture classification using electrocorticography signals in the gamma band over human sensorimotor brain areas

    Science.gov (United States)

    Chestek, Cynthia A.; Gilja, Vikash; Blabe, Christine H.; Foster, Brett L.; Shenoy, Krishna V.; Parvizi, Josef; Henderson, Jaimie M.

    2013-04-01

Objective. Brain-machine interface systems translate recorded neural signals into command signals for assistive technology. In individuals with upper limb amputation or cervical spinal cord injury, the restoration of a useful hand grasp could significantly improve daily function. We sought to determine if electrocorticographic (ECoG) signals contain sufficient information to select among multiple hand postures for a prosthetic hand, orthotic, or functional electrical stimulation system. Approach. We recorded ECoG signals from subdural macro- and microelectrodes implanted in motor areas of three participants who were undergoing inpatient monitoring for diagnosis and treatment of intractable epilepsy. Participants performed five distinct isometric hand postures, as well as four distinct finger movements. Several control experiments were attempted in order to remove sensory information from the classification results. Online experiments were performed with two participants. Main results. Classification rates were 68%, 84% and 81% for correct identification of 5 isometric hand postures offline. Using 3 potential controls for removing sensory signals, error rates were approximately doubled on average (2.1×). A similar increase in errors (2.6×) was noted when the participant was asked to make simultaneous wrist movements along with the hand postures. In online experiments, fist versus rest was successfully classified on 97% of trials; the classification output drove a prosthetic hand. Online classification performance for a larger number of hand postures remained above chance, but substantially below offline performance. In addition, the long integration windows used would preclude the use of decoded signals for control of a BCI system. Significance. These results suggest that ECoG is a plausible source of command signals for prosthetic grasp selection. Overall, avenues remain for improvement through better electrode designs and placement, better participant training

  11. Sharing the Licensed Spectrum of Full-Duplex Systems Using Improper Gaussian Signaling

    KAUST Repository

    Gaafar, Mohamed

    2015-12-01

    Sharing the spectrum with in-band full-duplex (FD) primary users (PU) is a challenging and interesting problem in the underlay cognitive radio (CR) systems. The self-interference introduced at the primary network may dramatically impede the secondary user (SU) opportunity to access the spectrum. In this work, we attempt to tackle this problem through the use of the so-called improper Gaussian signaling. Such a signaling technique has demonstrated its superiority in improving the overall performance in interference limited networks. Particularly, we assume a system with a SU pair working in half-duplex mode that uses improper Gaussian signaling while the FD PU pair implements the regular proper Gaussian signaling techniques. First, we derive a closed form expression for the SU outage probability and an upper bound for the PU outage probability. Then, we optimize the SU signal parameters to minimize its outage probability while maintaining the required PU quality-of-service based on the average channel state information. Finally, we provide some numerical results that validate the tightness of the PU outage probability bound and demonstrate the advantage of employing the improper Gaussian signaling to the SU in order to access the spectrum of the FD PU.

  12. High average power diode pumped solid state lasers for CALIOPE

    International Nuclear Information System (INIS)

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers

  13. Optimal Elbow Angle for Extracting sEMG Signals During Fatiguing Dynamic Contraction

    Directory of Open Access Journals (Sweden)

    Mohamed R. Al-Mulla

    2015-09-01

Full Text Available Surface electromyographic (sEMG) activity of the biceps muscle was recorded from 13 subjects. Data were recorded while subjects performed dynamic contraction until fatigue, and the signals were segmented into two parts (Non-Fatigue and Fatigue). An evolutionary algorithm was used to determine the elbow angles that best separate (using the Davies-Bouldin Index, DBI) the Non-Fatigue and Fatigue segments of the sEMG signal. Establishing the optimal elbow angle for the feature extraction used in the evolutionary process was based on 70% of the conducted sEMG trials. After completing 26 independent evolution runs, the best run containing the optimal elbow angles for separation (Non-Fatigue and Fatigue) was selected and then tested on the remaining 30% of the data to measure the classification performance. Testing the performance of the optimal angle was undertaken on nine features extracted from each of the two classes (Non-Fatigue and Fatigue) to quantify the performance. Results showed that the optimal elbow angles can be used for fatigue classification, with a highest correct classification of 87.90% for one of the features and an average of 78.45% across all features (including the worst-performing ones).
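The Davies-Bouldin Index used as the separability criterion above is the mean, over clusters, of the worst-case ratio of summed within-cluster scatter to between-centroid distance; lower values mean better-separated classes. A self-contained sketch for the two-class (Non-Fatigue vs. Fatigue) case, with hypothetical 2-D feature clouds:

```python
import numpy as np

def davies_bouldin(clusters):
    """DBI = mean over clusters of max_{j != i} (S_i + S_j) / d(c_i, c_j),
    where S_i is the mean distance of cluster i's points to its centroid."""
    cents = [c.mean(axis=0) for c in clusters]
    scat = [np.mean(np.linalg.norm(c - m, axis=1))
            for c, m in zip(clusters, cents)]
    n = len(clusters)
    ratios = [[(scat[i] + scat[j]) / np.linalg.norm(cents[i] - cents[j])
               for j in range(n) if j != i] for i in range(n)]
    return float(np.mean([max(r) for r in ratios]))

rng = np.random.default_rng(4)
# Hypothetical sEMG feature clouds for Non-Fatigue and two Fatigue cases.
non_fatigue  = rng.normal([0.0, 0.0], 0.3, size=(200, 2))
fatigue_far  = rng.normal([3.0, 3.0], 0.3, size=(200, 2))
fatigue_near = rng.normal([0.5, 0.5], 0.3, size=(200, 2))

dbi_far = davies_bouldin([non_fatigue, fatigue_far])
dbi_near = davies_bouldin([non_fatigue, fatigue_near])
print(round(dbi_far, 3), round(dbi_near, 3))
```

An evolutionary search over elbow angles would simply minimize this index over the candidate angles; the search itself is not reproduced here.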

  14. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; 
Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; 
Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  15. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
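The contrast drawn above, smooth model averaging versus the 0-1 weights a PMSE implicitly uses, can be made concrete with the common Akaike-weight scheme, where each candidate model receives weight proportional to exp(-ΔAIC/2). The AIC values and per-model predictions below are hypothetical:

```python
import numpy as np

def aic_weights(aics):
    """Akaike weights: w_m proportional to exp(-(AIC_m - AIC_min)/2)."""
    d = np.asarray(aics, dtype=float) - np.min(aics)
    w = np.exp(-d / 2.0)
    return w / w.sum()

# Hypothetical AICs and point predictions from three candidate models.
aics  = [102.1, 100.3, 100.9]
preds = [4.8, 5.2, 5.0]

w = aic_weights(aics)
averaged = float(w @ preds)              # smooth model averaging
selected = preds[int(np.argmin(aics))]   # PMSE: 0-1 weights on the AIC winner
print(np.round(w, 3), round(averaged, 3), selected)
```

The averaged prediction sits strictly between the candidates, while selection commits entirely to the single AIC-best model, which is the 0-1 special case the paper exploits.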

  16. Measured emotional intelligence ability and grade point average in nursing students.

    Science.gov (United States)

    Codier, Estelle; Odell, Ellen

    2014-04-01

For most schools of nursing, grade point average is the most important criterion for admission to nursing school and constitutes the main indicator of success throughout the nursing program. In the general research literature, the relationship between traditional measures of academic success, such as grade point average, and postgraduation job performance is not well established. In both the general population and among practicing nurses, measured emotional intelligence ability correlates with both performance and other important professional indicators postgraduation. Little research exists comparing traditional measures of intelligence with measured emotional intelligence prior to graduation, and none in the student nurse population. This exploratory, descriptive, quantitative study was undertaken to explore the relationship between measured emotional intelligence ability and the grade point average of first-year nursing students. The study took place at a school of nursing at a university in the south central region of the United States. Participants included 72 undergraduate student nurse volunteers. Emotional intelligence was measured using the Mayer-Salovey-Caruso Emotional Intelligence Test, version 2, an instrument for quantifying emotional intelligence ability. Pre-admission grade point average was reported by the school records department. Total emotional intelligence scores (r = .24) and one subscore, experiential emotional intelligence (r = .25), correlated significantly (p < .05) with grade point average. This exploratory, descriptive study provided evidence for some relationship between GPA and measured emotional intelligence ability, but also demonstrated lower than average range scores in several emotional intelligence scores. The relationship between pre-graduation measures of success and level of performance postgraduation deserves further exploration. The findings of this study suggest that research on the relationship between traditional and nontraditional

  17. The Performance of EEG-P300 Classification using Backpropagation Neural Networks

    Directory of Open Access Journals (Sweden)

    Arjon Turnip

    2013-12-01

Full Text Available Electroencephalogram (EEG) recordings provide an important function of brain-computer communication, but the accuracy of their classification is very limited in unforeseeable signal variations relating to artifacts. In this paper, we propose a classification method entailing time-series EEG-P300 signals using backpropagation neural networks to predict the qualitative properties of a subject's mental tasks by extracting useful information from the highly multivariate non-invasive recordings of brain activity. To test the improvement in the EEG-P300 classification performance (i.e., classification accuracy and transfer rate) with the proposed method, comparative experiments were conducted using Bayesian Linear Discriminant Analysis (BLDA). Finally, the result of the experiment showed that the average classification accuracy was 97% and the maximum improvement of the average transfer rate is 42.4%, indicating the considerable potential of using EEG-P300 for the continuous classification of mental tasks.

  18. Advanced optical signal processing of broadband parallel data signals

    DEFF Research Database (Denmark)

    Oxenløwe, Leif Katsuo; Hu, Hao; Kjøller, Niels-Kristian

    2016-01-01

Optical signal processing may aid in reducing the number of active components in communication systems with many parallel channels, by e.g. using telescopic time lens arrangements to perform format conversion and allow for WDM regeneration.

  19. Silicon Photonics for Signal Processing of Tbit/s Serial Data Signals

    DEFF Research Database (Denmark)

    Oxenløwe, Leif Katsuo; Ji, Hua; Galili, Michael

    2012-01-01

In this paper, we describe our recent work on signal processing of terabit per second optical serial data signals using pure silicon waveguides. We employ nonlinear optical signal processing in nanoengineered silicon waveguides to perform demultiplexing and optical waveform sampling of 1.28-Tbit/s serial data signals.

  20. A Signal Averager Interface between a Biomation 6500 Transient Recorder and a LSI-11 Microcomputer.

    Science.gov (United States)

    1980-06-01

decode the proper bus synchronizing signals. SA data lines 1 and 2 are decoded to produce SEL0 L - SEL4 L, which select one of four SA registers.

  1. How Well Does High School Grade Point Average Predict College Performance by Student Urbanicity and Timing of College Entry? REL 2017-250

    Science.gov (United States)

    Hodara, Michelle; Lewis, Karyn

    2017-01-01

    This report is a companion to a study that found that high school grade point average was a stronger predictor of performance in college-level English and math than were standardized exam scores among first-time students at the University of Alaska who enrolled directly in college-level courses. This report examines how well high school grade…

  2. Experimental study on the effects of surface gravity waves of different wavelengths on the phase averaged performance characteristics of marine current turbine

    Science.gov (United States)

    Luznik, L.; Lust, E.; Flack, K. A.

    2014-12-01

    There are few studies describing the interaction between marine current turbines and an overlying surface gravity wave field. In this work we present an experimental study on the effects of surface gravity waves of different wavelengths on the wave-phase-averaged performance characteristics of a marine current turbine model. Measurements are performed with a 1/25 scale (diameter D=0.8 m) two-bladed horizontal axis turbine towed in the large (116 m long) towing tank at the U.S. Naval Academy, which is equipped with a dual-flap, servo-controlled wave maker. Three regular waves with wavelengths of 15.8, 8.8, and 3.9 m, with wave heights adjusted such that all waveforms have the same energy input per unit width, are produced by the wave maker, and the model turbine is towed into the waves at a constant carriage speed of 1.68 m/s, representing the case of waves travelling in the same direction as the mean current. Thrust and torque developed by the model turbine are measured using a dynamometer mounted in line with the turbine shaft. Shaft rotation speed and blade position are measured using an in-house designed shaft position indexing system. The tip speed ratio (TSR) is adjusted using a hysteresis brake attached to the output shaft. Free surface elevation and wave parameters are measured with two optical wave height sensors, one located in the turbine rotor plane and the other one diameter upstream of the rotor. All instruments are synchronized in time and data are sampled at a rate of 700 Hz. All measured quantities are conditionally sampled as a function of the measured surface elevation and transformed to wave phase space using the Hilbert transform. Phenomena observed in earlier experiments with the same turbine, such as a phase lag in the torque signal and an increase in thrust due to Stokes drift, are examined with the present data, along with spectral analysis of the torque and thrust data.
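The phase-conditioning step described above, mapping each sample to the phase of the overlying wave via the Hilbert transform, can be sketched as follows. This is a minimal illustration on a synthetic regular wave, not the authors' processing code; the function names are ours.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (discrete Hilbert transform)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def wave_phase(surface_elevation):
    """Instantaneous wave phase in (-pi, pi] from a surface-elevation record."""
    z = analytic_signal(surface_elevation - np.mean(surface_elevation))
    return np.angle(z)

# Synthetic regular wave sampled at 700 Hz; the recovered phase
# advances linearly with the wave and wraps at +/- pi.
t = np.linspace(0.0, 10.0, 7000, endpoint=False)   # 5 full periods
eta = 0.1 * np.cos(2 * np.pi * 0.5 * t)            # 0.5 Hz wave
phi = wave_phase(eta)
```

Measured torque and thrust samples can then be binned by `phi` to form the wave-phase-averaged performance curves.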

  3. Familiarity and Voice Representation: From Acoustic-Based Representation to Voice Averages

    Directory of Open Access Journals (Sweden)

    Maureen Fontaine

    2017-07-01

    Full Text Available The ability to recognize an individual from their voice is a widespread ability with a long evolutionary history. Yet, the perceptual representation of familiar voices is ill-defined. In two experiments, we explored the neuropsychological processes involved in the perception of voice identity. We specifically explored the hypothesis that familiar voices (trained-to-familiar, Experiment 1, and famous voices, Experiment 2) are represented as a whole complex pattern, well approximated by the average of multiple utterances produced by a single speaker. In Experiment 1, participants learned three voices over several sessions, and performed a three-alternative forced-choice identification task on original voice samples and several “speaker averages,” created by morphing across varying numbers of different vowels (e.g., [a] and [i]) produced by the same speaker. In Experiment 2, the same participants performed the same task on voice samples produced by famous speakers. The two experiments showed that for famous voices, but not for trained-to-familiar voices, identification performance increased and response times decreased as a function of the number of utterances in the averages. This study sheds light on the perceptual representation of familiar voices, and demonstrates the power of averages in recognizing familiar voices. The speaker average captures the unique characteristics of a speaker, and thus retains the information essential for recognition; it acts as a prototype of the speaker.

  4. Power Efficiency Improvements through Peak-to-Average Power Ratio Reduction and Power Amplifier Linearization

    Directory of Open Access Journals (Sweden)

    Zhou, G. Tong

    2007-01-01

    Full Text Available Many modern communication signal formats, such as orthogonal frequency-division multiplexing (OFDM and code-division multiple access (CDMA, have high peak-to-average power ratios (PARs. A signal with a high PAR not only is vulnerable in the presence of nonlinear components such as power amplifiers (PAs, but also leads to low transmission power efficiency. Selected mapping (SLM and clipping are well-known PAR reduction techniques. We propose to combine SLM with threshold clipping and digital baseband predistortion to improve the overall efficiency of the transmission system. Testbed experiments demonstrate the effectiveness of the proposed approach.
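As a concrete illustration of the PAR problem and of threshold clipping (one of the techniques combined above), the sketch below generates a toy 64-subcarrier QPSK OFDM symbol and clips its envelope. This is our own minimal example, not the authors' SLM/predistortion testbed.

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def clip(x, threshold):
    """Amplitude clipping: limit |x| to `threshold`, preserving phase."""
    mag = np.abs(x)
    scale = np.minimum(1.0, threshold / np.maximum(mag, 1e-12))
    return x * scale

# Toy OFDM symbol: 64 QPSK subcarriers -> high-PAR time-domain signal.
bits = rng.integers(0, 2, size=(64, 2))
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
x = np.fft.ifft(symbols) * np.sqrt(64)     # unit average power
x_clipped = clip(x, threshold=1.5)         # cap the envelope at 1.5
```

Clipping bounds the peak envelope at the cost of in-band distortion and spectral regrowth, which is why the paper pairs it with SLM and predistortion.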

  5. Performance bounds on micro-Doppler estimation and adaptive waveform design using OFDM signals

    Science.gov (United States)

    Sen, Satyabrata; Barhen, Jacob; Glover, Charles W.

    2014-05-01

    We analyze the performance of a wideband orthogonal frequency division multiplexing (OFDM) signal in estimating the micro-Doppler frequency of a target having multiple rotating scatterers (e.g., rotor blades of a helicopter, propellers of a submarine). The presence of rotating scatterers introduces Doppler frequency modulation in the received signal by generating sidebands about the transmitted frequencies; this is called the micro-Doppler effect. The use of a frequency-diverse OFDM signal in this context enables us to independently analyze the micro-Doppler characteristics with respect to a set of orthogonal subcarrier frequencies. Therefore, to characterize the accuracy of micro-Doppler frequency estimation, we compute the Cramér-Rao Bound (CRB) on the angular-velocity estimate of the target while considering the scatterer responses as deterministic but unknown nuisance parameters. Additionally, to improve the accuracy of the estimation procedure, we formulate and solve an optimization problem by minimizing the CRB on the angular-velocity estimate with respect to the transmitting OFDM spectral coefficients. We present several numerical examples to demonstrate the CRB variations at different values of the signal-to-noise ratio (SNR) and the number of OFDM subcarriers. The CRB values not only decrease with the increase in the SNR values, but also reduce as we increase the number of subcarriers, implying the significance of frequency-diverse OFDM waveforms. The improvement in estimation accuracy due to the adaptive waveform design is also numerically analyzed. Interestingly, we find that the relative decrease in the CRBs on the angular-velocity estimate is more pronounced for a larger number of OFDM subcarriers.

  6. Performance Bounds on Micro-Doppler Estimation and Adaptive Waveform Design Using OFDM Signals

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [ORNL]; Barhen, Jacob [ORNL]; Glover, Charles Wayne [ORNL]

    2014-01-01

    We analyze the performance of a wideband orthogonal frequency division multiplexing (OFDM) signal in estimating the micro-Doppler frequency of a target having multiple rotating scatterers (e.g., rotor blades of a helicopter, propellers of a submarine). The presence of rotating scatterers introduces Doppler frequency modulation in the received signal by generating sidebands about the transmitted frequencies; this is called the micro-Doppler effect. The use of a frequency-diverse OFDM signal in this context enables us to independently analyze the micro-Doppler characteristics with respect to a set of orthogonal subcarrier frequencies. Therefore, to characterize the accuracy of micro-Doppler frequency estimation, we compute the Cramér-Rao Bound (CRB) on the angular-velocity estimate of the target while considering the scatterer responses as deterministic but unknown nuisance parameters. Additionally, to improve the accuracy of the estimation procedure, we formulate and solve an optimization problem by minimizing the CRB on the angular-velocity estimate with respect to the transmitting OFDM spectral coefficients. We present several numerical examples to demonstrate the CRB variations at different values of the signal-to-noise ratio (SNR) and the number of OFDM subcarriers. The CRB values not only decrease with the increase in the SNR values, but also reduce as we increase the number of subcarriers, implying the significance of frequency-diverse OFDM waveforms. The improvement in estimation accuracy due to the adaptive waveform design is also numerically analyzed. Interestingly, we find that the relative decrease in the CRBs on the angular-velocity estimate is more pronounced for a larger number of OFDM subcarriers.

  7. Heat stress, gastrointestinal permeability and interleukin-6 signaling - Implications for exercise performance and fatigue.

    Science.gov (United States)

    Vargas, Nicole; Marino, Frank

    2016-01-01

    Exercise in heat stress exacerbates performance decrements compared to normothermic environments. It has been documented that the performance decrements are associated with reduced efferent drive from the central nervous system (CNS); however, the specific factors that contribute to the decrements are not completely understood. During exertional heat stress, blood flow is preferentially distributed away from the intestinal area to supply the muscles and brain with oxygen. Consequently, the gastrointestinal barrier becomes increasingly permeable, resulting in the release of lipopolysaccharides (LPS, endotoxin) into the circulation. LPS leakage stimulates an acute-phase inflammatory response, including the release of interleukin (IL)-6 in response to an increasingly endotoxic environment. If LPS translocation is too great, heat shock, neurological dysfunction, or death may ensue. IL-6 acts initially in a pro-inflammatory manner during endotoxemia, but can attenuate the response through signaling the hypothalamic-pituitary-adrenal (HPA) axis. Likewise, IL-6 is believed to be a thermoregulatory sensor in the gut during the febrile response, hence highlighting its role in periphery-to-brain communication. Recently, IL-6 has been implicated in signaling the CNS and influencing perceptions of fatigue and performance during exercise. Therefore, due to the cascade of events that occur during exertional heat stress, it is possible that the release of LPS and the exacerbated response of IL-6 contribute to CNS modulation during exertional heat stress. The purpose of this review is to evaluate previous literature and discuss the potential role of IL-6 during exertional heat stress in modulating performance in favor of whole-body preservation.

  8. Adaptive Control for Buck Power Converter Using Fixed Point Inducting Control and Zero Average Dynamics Strategies

    Science.gov (United States)

    Hoyos Velasco, Fredy Edimer; García, Nicolás Toro; Garcés Gómez, Yeison Alberto

    In this paper, the output voltage of a buck power converter is controlled by means of a quasi-sliding scheme. The Fixed Point Inducting Control (FPIC) technique is used for the control design, based on the Zero Average Dynamics (ZAD) strategy, including load estimation by means of the Least Mean Squares (LMS) method. The control scheme is tested in a Rapid Control Prototyping (RCP) system based on Digital Signal Processing (DSP) for dSPACE platform. The closed loop system shows adequate performance. The experimental and simulation results match. The main contribution of this paper is to introduce the load estimator by means of LMS, to make ZAD and FPIC control feasible in load variation conditions. In addition, comparison results for controlled buck converter with SMC, PID and ZAD-FPIC control techniques are shown.

  9. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been many traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in the three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency.

  10. Morphology of the P Wave of the Electrocardiographic Signal. Shape Analysis of Two-Dimensional Signals: Measurement of Pharmacological Effects on the P, QRS, and T Waves in Time-Frequency Representation

    OpenAIRE

    Oficjalska , Barbara

    1994-01-01

    The aim of this work is to develop a signal processing methodology to improve fine studies of the cardiac signal, and especially of the P wave, focusing particularly on the measurement of shape variations. After a review of cardiac signal characteristics, a description of its physiological and pathological variability, and a survey of the different recording techniques, a critical study of cardiac signal processing methods is performed: noise reduction, specific filtering, signal averaging and jitter ...

  11. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  12. Control mechanism to prevent correlated message arrivals from degrading signaling no. 7 network performance

    Science.gov (United States)

    Kosal, Haluk; Skoog, Ronald A.

    1994-04-01

    Signaling System No. 7 (SS7) is designed to provide connectionless transfer of signaling messages of reasonable length. Customers having access to user signaling bearer capabilities as specified in the ANSI T1.623 and CCITT Q.931 standards can send bursts of correlated messages (e.g., by doing a file transfer that results in the segmentation of a block of data into a number of consecutive signaling messages) through SS7 networks. These message bursts with short interarrival times could have an adverse impact on the delay performance of the SS7 networks. A control mechanism, the Credit Manager, is investigated in this paper to regulate incoming traffic to the SS7 network by imposing appropriate time separation between messages when the incoming stream is too bursty. The credit manager has a credit bank where credits accrue at a fixed rate up to a prespecified credit bank capacity. When a message arrives, the number of octets in that message is compared to the number of credits in the bank. If the number of credits is greater than or equal to the number of octets, then the message is accepted for transmission and the number of credits in the bank is decremented by the number of octets. If the number of credits is less than the number of octets, then the message is delayed until enough credits are accumulated. This paper presents simulation results showing the delay performance of SS7 ISUP and TCAP message traffic under a range of correlated message traffic, and determines control parameters of the credit manager (i.e., credit generation rate and bank capacity) that ensure the traffic entering the SS7 network is acceptable. The results show that the control parameters can be set so that for any incoming traffic stream there is no detrimental impact on SS7 ISUP and TCAP message delay, and the credit manager accepts a wide range of traffic patterns without causing significant delay.
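The credit-bank rule described above is essentially a token bucket. A minimal sketch (our own reconstruction from the abstract; the rate and capacity values are illustrative, not those of the study):

```python
class CreditManager:
    """Token-bucket style pacing: credits (octets) accrue at `rate` per
    second up to `capacity`; a message is released once the bank holds at
    least as many credits as the message has octets."""

    def __init__(self, rate, capacity):
        self.rate = float(rate)          # credit generation rate, octets/s
        self.capacity = float(capacity)  # credit bank capacity, octets
        self.credits = float(capacity)   # bank starts full
        self.last = 0.0                  # time of last update

    def _accrue(self, now):
        self.credits = min(self.capacity,
                           self.credits + (now - self.last) * self.rate)
        self.last = now

    def submit(self, arrival, octets):
        """Return the departure time of a message of `octets` size."""
        self._accrue(max(arrival, self.last))
        if self.credits >= octets:
            depart = max(arrival, self.last)
        else:
            # wait until the deficit is refilled
            depart = self.last + (octets - self.credits) / self.rate
            self._accrue(depart)
        self.credits -= octets
        return depart

cm = CreditManager(rate=1000.0, capacity=500.0)
# A burst of five 200-octet messages arriving together at t=0:
departures = [cm.submit(0.0, 200) for _ in range(5)]
```

The first two messages drain the bank and pass immediately; the rest are spaced out at the credit generation rate, which is exactly the smoothing behavior the paper evaluates.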

  13. Revisiting the Majority Problem: Average-Case Analysis with Arbitrarily Many Colours

    OpenAIRE

    Kleerekoper, Anthony

    2016-01-01

    The majority problem is a special case of the heavy hitters problem. Given a collection of coloured balls, the task is to identify the majority colour or state that no such colour exists. Whilst the special case of two-colours has been well studied, the average-case performance for arbitrarily many colours has not. In this paper, we present heuristic analysis of the average-case performance of three deterministic algorithms that appear in the literature. We empirically validate our analysis w...

  14. The effect of various parameters of large scale radio propagation models on improving performance mobile communications

    Science.gov (United States)

    Pinem, M.; Fauzi, R.

    2018-02-01

    One technique for ensuring continuity of wireless communication service and maintaining smooth transitions in mobile communication networks is soft handover. In the Soft Handover (SHO) technique, the inclusion and removal of a Base Station from the active set is determined by initiation triggers, one of which is based on received signal strength. In this paper we observe the influence of the parameters of large-scale radio propagation models on the performance of mobile communications. The parameters observed to characterize the performance of the mobile system are the drop call rate, the radio link degradation rate, and the average size of the Active Set (AS). The simulation results show that increasing the Base Station (BS) and Mobile Station (MS) antenna heights improves the received signal power level, thereby improving radio link quality, increasing the average size of the Active Set, and reducing the average drop call rate. It was also found that Hata's propagation model contributed significantly more to improvements in the system performance parameters than Okumura's and Lee's propagation models.
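For reference, the Okumura-Hata median path-loss formula can be sketched as below. This is the standard urban, small/medium-city variant; whether the paper used exactly this variant and these parameter ranges is our assumption.

```python
import math

def hata_path_loss_db(f_mhz, h_base_m, h_mobile_m, d_km):
    """Okumura-Hata median path loss (urban, small/medium city).
    Valid roughly for f = 150-1500 MHz, h_base = 30-200 m, d = 1-20 km."""
    # Mobile-antenna height correction a(hm) for a small/medium city:
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m
            - (1.56 * math.log10(f_mhz) - 0.8))
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

# Raising the base-station antenna lowers the predicted loss,
# i.e. improves the received signal level, as the abstract reports:
loss_low  = hata_path_loss_db(900.0, 30.0, 1.5, 5.0)   # 30 m BS antenna
loss_high = hata_path_loss_db(900.0, 60.0, 1.5, 5.0)   # 60 m BS antenna
```

Both the distance slope and the absolute loss decrease with base-station antenna height, which is the mechanism behind the improved Active Set and drop-call statistics discussed above.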

  15. EXTRACTING PERIODIC TRANSIT SIGNALS FROM NOISY LIGHT CURVES USING FOURIER SERIES

    Energy Technology Data Exchange (ETDEWEB)

    Samsing, Johan [Department of Astrophysical Sciences, Princeton University, Peyton Hall, 4 Ivy Lane, Princeton, NJ 08544 (United States)

    2015-07-01

    We present a simple and powerful method for extracting transit signals associated with a known transiting planet from noisy light curves. Assuming the orbital period of the planet is known and the signal is periodic, we illustrate that systematic noise can be removed in Fourier space at all frequencies by only using data within a fixed time frame with a width equal to an integer number of orbital periods. This results in a reconstruction of the full transit signal, which on average is unbiased despite no prior knowledge of either the noise or the transit signal itself being used in the analysis. The method therefore has clear advantages over standard phase folding, which normally requires external input such as nearby stars or noise models for removing systematic components. In addition, we can extract the full orbital transit signal (360°) simultaneously, and Kepler-like data can be analyzed in just a few seconds. We illustrate the performance of our method by applying it to a dataset composed of light curves from Kepler with a fake injected signal emulating a planet with rings. For extracting periodic transit signals, our presented method is in general the optimal and least biased estimator and could therefore lead the way toward the first detections of, e.g., planet rings and exo-trojan asteroids.
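The core idea, that a record spanning an integer number of orbital periods places the transit's harmonics exactly on Fourier bins, so everything off those bins can be discarded, can be sketched with a toy box-shaped transit. This is our illustration, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def extract_periodic(flux, samples_per_period, n_periods):
    """Keep only FFT bins at harmonics of the orbital frequency.
    Assumes len(flux) == samples_per_period * n_periods."""
    n = samples_per_period * n_periods
    F = np.fft.fft(flux[:n])
    mask = np.zeros(n, dtype=bool)
    mask[::n_periods] = True          # bins k = 0, n_periods, 2*n_periods, ...
    return np.fft.ifft(np.where(mask, F, 0.0)).real

# Box-shaped transit repeating every 200 samples, buried in noise.
spp, n_per = 200, 50
transit = np.zeros(spp)
transit[90:110] = -1.0                       # transit dip
flux = np.tile(transit, n_per) + rng.normal(0.0, 2.0, size=spp * n_per)

recovered = extract_periodic(flux, spp, n_per)
folded = recovered.reshape(n_per, spp).mean(axis=0)   # one orbital period
```

Because only a fraction 1/n_periods of the FFT bins is retained, most of the broadband noise power is rejected while the full 360° periodic transit signal passes unchanged.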

  16. A piecewise probabilistic regression model to decode hand movement trajectories from epidural and subdural ECoG signals

    Science.gov (United States)

    Farrokhi, Behraz; Erfanian, Abbas

    2018-06-01

    Objective. The primary concern of this study is to develop a probabilistic regression method that would improve the decoding of hand movement trajectories from epidural as well as from subdural ECoG signals. Approach. The model is characterized by the conditional expectation of the hand position given the ECoG signals. The conditional expectation of the hand position is then modeled by a linear combination of the conditional probability density functions defined for each segment of the movement. Moreover, a spatial linear filter is proposed for reducing the dimension of the feature space. The spatial linear filter is applied to each frequency band of the ECoG signals and extracts the features with the highest decoding performance. Main results. For evaluating the proposed method, a dataset including 28 ECoG recordings from four adult Japanese macaques is used. The results show that the proposed decoding method outperforms the state-of-the-art methods on this dataset. The relative kinematic information of each frequency band is also investigated using mutual information and decoding performance. The decoding performance shows that the best performance was obtained for the high gamma band from 50 to 200 Hz as well as the high frequency ECoG band from 200 to 400 Hz for subdural recordings. However, the decoding performance decreased for these frequency bands using epidural recordings. The mutual information shows that, on average, the high gamma band from 50 to 200 Hz and the high frequency ECoG band from 200 to 400 Hz contain significantly more information than the average of the rest of the frequency bands for both subdural and epidural recordings. The results of high resolution time-frequency analysis show that ERD/ERS patterns in all frequency bands could reveal the dynamics of the ECoG responses during the movement. The onset and offset of the movement can be clearly identified by the ERD/ERS patterns. Significance

  17. Performance of integrated systems of automated roller shade systems and daylight responsive dimming systems

    Energy Technology Data Exchange (ETDEWEB)

    Park, Byoung-Chul; Choi, An-Seop; Jeong, Jae-Weon [Department of Architectural Engineering, Sejong University, Kunja-Dong, Kwangjin-Gu, Seoul (Korea, Republic of); Lee, Eleanor S. [Building Technologies Department, Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2011-03-15

    Daylight responsive dimming systems have been used in few buildings to date because their reliability requires improvement. The key factor underlying poor performance is the variability of the ratio of the photosensor signal to daylight workplane illuminance with sun position, sky condition, and fenestration condition. Therefore, this paper describes integrated systems combining automated roller shade systems and daylight responsive dimming systems with an improved closed-loop proportional control algorithm, and the relative performance of the integrated systems and the single systems. The concept of the improved closed-loop proportional control algorithm for the integrated systems is to predict the varying correlation of photosensor signal to daylight workplane illuminance according to roller shade height and sky conditions, improving system accuracy. In this study, the performance of the integrated systems with two improved closed-loop proportional control algorithms was compared with that of the current (modified) closed-loop proportional control algorithm. In the results, the average maintenance percentage and the average discrepancies from the target illuminance, as well as the average time under 90% of target illuminance, for the integrated systems significantly improved in comparison with the current closed-loop proportional control algorithm for daylight responsive dimming systems as a single system. (author)

  18. TERMA Framework for Biomedical Signal Analysis: An Economic-Inspired Approach

    Directory of Open Access Journals (Sweden)

    Mohamed Elgendi

    2016-11-01

    Full Text Available Biomedical signals contain features that represent physiological events, and each of these events has peaks. The analysis of biomedical signals for monitoring or diagnosing diseases requires the detection of these peaks, making event detection a crucial step in biomedical signal processing. Many researchers have difficulty detecting these peaks to investigate, interpret and analyze their corresponding events. To date, there is no generic framework that captures these events in a robust, efficient and consistent manner. A new method, referred to for the first time as two event-related moving averages (“TERMA”), involves event-related moving averages and detects events in biomedical signals. The TERMA framework is flexible and universal and consists of six independent LEGO building bricks to achieve high accuracy detection of biomedical events. Results recommend that the window sizes for the two moving averages (W1 and W2) follow the inequality (8 × W1) ≥ W2 ≥ (2 × W1). Moreover, TERMA is a simple yet efficient event detector that is suitable for wearable devices, point-of-care devices, fitness trackers and smart watches, compared to more complex machine learning solutions.
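A minimal sketch of the two-moving-average idea: samples where the short event-scale average exceeds the long cycle-scale average are flagged as candidate event blocks. This is our reconstruction for illustration; TERMA's actual blocks-of-interest logic, enhancement stage, and thresholds are more elaborate.

```python
import numpy as np

def moving_average(x, w):
    """Centered moving average with window length w (samples)."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def terma_blocks(x, w1, w2, beta=0.0):
    """Flag samples where the event-scale average exceeds the cycle-scale
    average plus an offset beta * mean(x)."""
    assert 2 * w1 <= w2 <= 8 * w1, "window sizes should satisfy the TERMA inequality"
    ma_event = moving_average(x, w1)   # W1: event-duration scale
    ma_cycle = moving_average(x, w2)   # W2: cycle-duration scale
    return ma_event > (ma_cycle + beta * np.mean(x))

# Toy "biomedical" signal: two Gaussian peaks on a flat baseline.
t = np.arange(1000)
x = (np.exp(-0.5 * ((t - 300) / 10.0) ** 2)
     + np.exp(-0.5 * ((t - 700) / 10.0) ** 2))
events = terma_blocks(x, w1=15, w2=60)
```

Within each flagged block, the local maximum would then be reported as the event peak.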

  19. Continuous detection of weak sensory signals in afferent spike trains: the role of anti-correlated interspike intervals in detection performance.

    Science.gov (United States)

    Goense, J B M; Ratnam, R

    2003-10-01

    An important problem in sensory processing is deciding whether fluctuating neural activity encodes a stimulus or is due to variability in baseline activity. Neurons that subserve detection must examine incoming spike trains continuously, and quickly and reliably differentiate signals from baseline activity. Here we demonstrate that a neural integrator can perform continuous signal detection, with performance exceeding that of trial-based procedures, where spike counts in signal and baseline windows are compared. The procedure was applied to data from electrosensory afferents of weakly electric fish (Apteronotus leptorhynchus), where weak perturbations generated by small prey add approximately 1 spike to a baseline of approximately 300 spikes s⁻¹. The hypothetical postsynaptic neuron, modeling an electrosensory lateral line lobe cell, could detect an added spike within 10-15 ms, achieving near-ideal detection performance (80-95%) at false alarm rates of 1-2 Hz, while trial-based testing resulted in only 30-35% correct detections at that false alarm rate. The performance improvement was due to anti-correlations in the afferent spike train, which reduced both the amplitude and duration of fluctuations in postsynaptic membrane activity, and so decreased the number of false alarms. Anti-correlations can be exploited to improve detection performance only if there is memory of prior decisions.

  20. Flame Motion In Gas Turbine Burner From Averages Of Single-Pulse Flame Fronts

    Energy Technology Data Exchange (ETDEWEB)

    Tylli, N.; Hubschmid, W.; Inauen, A.; Bombach, R.; Schenker, S.; Guethe, F. [Alstom (Switzerland)]; Haffner, K. [Alstom (Switzerland)]

    2005-03-01

    Thermoacoustic instabilities of a gas turbine burner were investigated by flame front localization from measured OH laser-induced fluorescence single-pulse signals. The average position of the flame was obtained from the superposition of the single-pulse flame fronts at constant phase of the dominant acoustic oscillation. One observes that the flame position varies periodically with the phase angle of the dominant acoustic oscillation. (author)

  1. Grating geophone signal processing based on wavelet transform

    Science.gov (United States)

    Li, Shuqing; Zhang, Huan; Tao, Zhifei

    2008-12-01

    The grating digital geophone is designed around a grating measurement technique, benefiting from the averaging-error effect and a wide dynamic range to improve the detection precision of weak signals. This paper introduces the principle of the grating digital geophone and its post-processing system. The signal acquisition circuit uses an ATmega32 chip as its core and displays the waveform in LabWindows through an RS-232 data link. The wavelet transform is adopted in this paper to filter the grating digital geophone's output signal, since the signal is non-stationary. This processing method is compared with the FIR filter in widespread domestic use. The results indicate that the wavelet algorithm has more advantages and that the SNR of the seismic signal improves markedly.
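The wavelet filtering step can be illustrated with a self-contained Haar-wavelet soft-threshold denoiser. This is our sketch: the abstract does not state which wavelet, decomposition depth, or thresholding rule the authors used.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar wavelet transform (len(x) even)."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return s, d

def haar_idwt(s, d):
    """Inverse of haar_dwt."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

def denoise(x, threshold, levels=3):
    """Multi-level Haar decomposition with soft-thresholded details."""
    details, s = [], x
    for _ in range(levels):
        s, d = haar_dwt(s)
        d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
        details.append(d)
    for d in reversed(details):
        s = haar_idwt(s, d)
    return s

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(2 * np.pi * 5 * t)            # low-frequency "seismic" signal
noisy = clean + rng.normal(0.0, 0.3, size=1024)
denoised = denoise(noisy, threshold=0.5)
```

Unlike a fixed FIR low-pass filter, the thresholding adapts per coefficient, which is one reason wavelet methods handle non-stationary geophone signals well.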

  2. Impact of Self-Interference on the Performance of Joint Partial RAKE Receiver and Adaptive Modulation

    KAUST Repository

    Nam, Sung Sik

    2016-11-23

    In this paper, we investigate the impact of self-interference on the performance of a joint partial RAKE (PRAKE) receiver and adaptive modulation over both independent and identically distributed and independent but non-identically distributed Rayleigh fading channels. To better observe the impact of self-interference, our approach starts from considering the signal-to-interference-plus-noise ratio. Specifically, we accurately analyze the outage probability, the average spectral efficiency, and the average bit error rate as performance measures in the presence of self-interference. Several numerical and simulation results are selected to present the performance of the joint PRAKE receiver and adaptive modulation subject to self-interference.

  3. Performance Analysis of Simple Channel Feedback Schemes for a Practical OFDMA System

    DEFF Research Database (Denmark)

    Pedersen, Klaus, I.; Kolding, Troels; Kovacs, Istvan

    2009-01-01

    In this paper, we evaluate the tradeoff between the amount of uplink channel feedback information and the orthogonal frequency-division multiple access (OFDMA) downlink performance with opportunistic frequency-domain packet scheduling. Three candidate channel feedback schemes are investigated, including practical aspects such as the effects of terminal measurement errors, bandwidth measurement granularity, quantization, and uplink signaling delays. The performance is evaluated by means of system-level simulations with detailed modeling of various radio resource-management algorithms, etc. Our results show that the optimal tradeoff between the channel feedback and the downlink OFDMA system performance depends on the radio channel frequency coherence bandwidth. We conclude that the so-called average best-M scheme is the most attractive channel feedback solution, where only the average channel...
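The average best-M idea, reporting only the indices of the M strongest subbands plus a single CQI value averaged over them, can be sketched as below. This is our illustration of the general scheme; the numbers are made up and the exact encoding in the paper may differ.

```python
import numpy as np

def average_best_m(subband_cqi, m):
    """Average best-M feedback: return the indices of the M strongest
    subbands and one CQI averaged over those subbands."""
    best = np.argsort(subband_cqi)[-m:][::-1]      # strongest first
    return best, float(np.mean(subband_cqi[best]))

# 12 subband CQI measurements; feed back only 3 indices and one value,
# instead of 12 full CQI reports:
cqi = np.array([7, 9, 15, 11, 8, 14, 10, 6, 13, 9, 8, 7], dtype=float)
idx, avg = average_best_m(cqi, m=3)
```

The scheduler then treats the reported average as the CQI of each flagged subband, trading some accuracy for a large reduction in uplink signaling overhead.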

  4. Predicting Performance on the National Athletic Trainers' Association Board of Certification Examination From Grade Point Average and Number of Clinical Hours.

    Science.gov (United States)

    Middlemas, David A.; Manning, James M.; Gazzillo, Linda M.; Young, John

    2001-06-01

    OBJECTIVE: To determine whether grade point average, hours of clinical education, or both are significant predictors of performance on the National Athletic Trainers' Association Board of Certification examination and whether curriculum and internship candidates' scores on the certification examination can be differentially predicted. DESIGN AND SETTING: Data collection forms and consent forms were mailed to the subjects to collect data for predictor variables. Subject scores on the certification examination were obtained from Columbia Assessment Services. SUBJECTS: A total of 270 first-time candidates for the April and June 1998 certification examinations. MEASUREMENTS: Grade point average, number of clinical hours completed, sex, route to certification eligibility (curriculum or internship), scores on each section of the certification examination, and pass/fail criteria for each section. RESULTS: We found no significant difference between the scores of men and women on any section of the examination. Scores for curriculum and internship candidates differed significantly on the written and practical sections of the examination but not on the simulation section. Grade point average was a significant predictor of scores on each section of the examination and the examination as a whole. Clinical hours completed did not add a significant increment for any section but did add a significant increment for the examination overall. Although no significant difference was noted between curriculum and internship candidates in predicting scores on sections of the examination, a significant difference by route was found in predicting whether candidates would pass the examination as a whole (P = .047). The proportion of variance accounted for was less than R² = 0.0723 for any section of the examination and R² = 0.057 for the examination as a whole. CONCLUSIONS: Potential predictors of performance on the certification examination can be useful to athletic training educators in

  5. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Science.gov (United States)

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
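
    To make the mechanics concrete, here is a minimal sketch of BIC-weighted Bayesian model averaging for linear regression: each submodel's posterior model probability (PMP) is approximated with BIC weights, and predictions are averaged under those weights. The function names (`bic_weights`, `bma_predict`) and the BIC approximation are illustrative assumptions for this sketch, not the machinery used in the article.

```python
import numpy as np

def bic_weights(models, X, y):
    """Approximate posterior model probabilities with BIC weights.

    `models` is a list of column-index tuples; each defines a linear
    submodel y ~ X[:, cols] (with intercept). Illustrative sketch only.
    """
    n = len(y)
    bics = []
    for cols in models:
        Xm = np.column_stack([np.ones(n), X[:, cols]])
        beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
        rss = np.sum((y - Xm @ beta) ** 2)
        k = Xm.shape[1]
        bics.append(n * np.log(rss / n) + k * np.log(n))
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))  # relative model weights
    return w / w.sum()

def bma_predict(models, X, y, X_new):
    """Model-averaged prediction: weight each submodel's forecast by its PMP."""
    w = bic_weights(models, X, y)
    preds = []
    for cols in models:
        Xm = np.column_stack([np.ones(len(y)), X[:, cols]])
        beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
        Xn = np.column_stack([np.ones(len(X_new)), X_new[:, cols]])
        preds.append(Xn @ beta)
    return w @ np.array(preds)
```

    A submodel built on an irrelevant predictor receives essentially zero weight, so the averaged forecast is dominated by the well-supported submodels.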

  6. Signal recovery of the corrupted metal impact signal using the adaptive filtering in NPPs

    International Nuclear Information System (INIS)

    Kim, Dai Il; Shin, Won Ky; Oh, Sung Hun; Yun, Won Young

    1995-01-01

    The Loose Part Monitoring System (LPMS) is one of the fundamental diagnostic tools installed in nuclear power plants. In this paper, a recovery-process algorithm and model for the corrupted impact signal generated by loose parts are presented. The algorithm can recover a proper burst signal even when the background noise level is considerably high compared with the actual impact signal. To verify the performance of the proposed algorithm, we mathematically evaluate the signal-to-noise ratio of the primary output and noise. The performance of the recovery-process algorithm is shown through computer simulation.

  7. Performance Verification on UWB Antennas for Breast Cancer Detection

    Directory of Open Access Journals (Sweden)

    Vijayasarveswari V.

    2017-01-01

    Breast cancer is a common disease among women, and the death toll continues to increase. Early breast cancer detection is therefore very important. Ultra wide-band (UWB) is a promising candidate for short-range communication applications. This paper presents the performance of different types of UWB antennas for breast cancer detection. Two types of antennas are used: a UWB pyramidal antenna and a UWB horn antenna. These antennas transmit and receive the UWB signal. The collected signals are fed into a neural network module developed to measure the detection efficiency of each antenna. The average detection efficiency is 88.46% for the UWB pyramidal antenna and 87.55% for the UWB horn antenna. These antennas can be used to detect breast cancer at an early stage and save precious lives.

  8. Performance of MgO:PPLN, KTA, and KNbO₃ for mid-wave infrared broadband parametric amplification at high average power.

    Science.gov (United States)

    Baudisch, M; Hemmer, M; Pires, H; Biegert, J

    2014-10-15

    The performance of potassium niobate (KNbO₃), MgO-doped periodically poled lithium niobate (MgO:PPLN), and potassium titanyl arsenate (KTA) was experimentally compared for broadband mid-wave infrared parametric amplification at a high repetition rate. The seed pulses, with an energy of 6.5 μJ, were amplified using 410 μJ pump energy at 1064 nm to a maximum pulse energy of 28.9 μJ at 3 μm wavelength and at a 160 kHz repetition rate in MgO:PPLN while supporting a transform limited duration of 73 fs. The high average powers of the interacting beams used in this study revealed average power-induced processes that limit the scaling of optical parametric amplification in MgO:PPLN; the pump peak intensity was limited to 3.8 GW/cm² due to nonpermanent beam reshaping, whereas in KNbO₃ an absorption-induced temperature gradient in the crystal led to permanent internal distortions in the crystal structure when operated above a pump peak intensity of 14.4 GW/cm².

  9. OPTIMAL CORRELATION ESTIMATORS FOR QUANTIZED SIGNALS

    International Nuclear Information System (INIS)

    Johnson, M. D.; Chou, H. H.; Gwinn, C. R.

    2013-01-01

    Using a maximum-likelihood criterion, we derive optimal correlation strategies for signals with and without digitization. We assume that the signals are drawn from zero-mean Gaussian distributions, as is expected in radio-astronomical applications, and we present correlation estimators both with and without a priori knowledge of the signal variances. We demonstrate that traditional estimators of correlation, which rely on averaging products, exhibit large and paradoxical noise when the correlation is strong. However, we also show that these estimators are fully optimal in the limit of vanishing correlation. We calculate the bias and noise in each of these estimators and discuss their suitability for implementation in modern digital correlators.

  10. OPTIMAL CORRELATION ESTIMATORS FOR QUANTIZED SIGNALS

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M. D.; Chou, H. H.; Gwinn, C. R., E-mail: michaeltdh@physics.ucsb.edu, E-mail: cgwinn@physics.ucsb.edu [Department of Physics, University of California, Santa Barbara, CA 93106 (United States)

    2013-03-10

    Using a maximum-likelihood criterion, we derive optimal correlation strategies for signals with and without digitization. We assume that the signals are drawn from zero-mean Gaussian distributions, as is expected in radio-astronomical applications, and we present correlation estimators both with and without a priori knowledge of the signal variances. We demonstrate that traditional estimators of correlation, which rely on averaging products, exhibit large and paradoxical noise when the correlation is strong. However, we also show that these estimators are fully optimal in the limit of vanishing correlation. We calculate the bias and noise in each of these estimators and discuss their suitability for implementation in modern digital correlators.
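
    The "averaging products" estimator the authors analyze can be sketched as follows for undigitized, zero-mean Gaussian signals: average the sample products and normalize by the measured variances (no a priori knowledge of the true variances). The `correlated_gaussians` helper is a hypothetical test-data generator; the estimator shown is the standard normalized form, not the optimal estimators derived in the paper.

```python
import numpy as np

def correlation_estimate(x, y):
    """Traditional estimator: average the sample products, normalized by
    the measured (sample) variances of the two signals."""
    return np.mean(x * y) / np.sqrt(np.mean(x**2) * np.mean(y**2))

def correlated_gaussians(rho, n, rng):
    """Zero-mean, unit-variance Gaussian pair with correlation rho."""
    common = rng.normal(size=n)
    a, b = np.sqrt(rho), np.sqrt(1.0 - rho)
    return (a * common + b * rng.normal(size=n),
            a * common + b * rng.normal(size=n))
```

    For long averages the estimate converges to the true correlation; the paper's point is that its noise behaves paradoxically when the correlation is strong, while remaining optimal as the correlation vanishes.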

  11. 1-D Wavelet Signal Analysis of the Actuators Nonlinearities Impact on the Healthy Control Systems Performance

    Directory of Open Access Journals (Sweden)

    Nicolae Tudoroiu

    2017-09-01

    The objective of this paper is to investigate the use of 1-D wavelet analysis to extract several patterns from signal data sets collected from healthy and faulty input-output signals of control systems, as a preliminary step in the real-time implementation of fault detection, diagnosis and isolation strategies. 1-D wavelet analysis has proved to be a useful tool for signal processing, design and analysis based on wavelet transforms, found in a wide range of industrial control systems applications. Motivated by the great similarity between real-life phenomena, we extend the applicability of these techniques to similar applications in the control systems field. Their efficiency is demonstrated on a case study chosen mainly to evaluate the impact of the uncertainties and nonlinearities of the sensors and actuators on the overall performance of the control system. The proposed techniques are able to extract, in the frequency domain, pattern features (signatures) of interest directly from the signal data set collected from the control system by data acquisition equipment.

  12. Wavelet-based characterization of gait signal for neurological abnormalities.

    Science.gov (United States)

    Baratin, E; Sugavaneswaran, L; Umapathy, K; Ioana, C; Krishnan, S

    2015-02-01

    Studies conducted by the World Health Organization (WHO) indicate that over one billion people suffer from neurological disorders worldwide, and that a lack of efficient diagnosis procedures affects their therapeutic interventions. Characterizing certain pathologies of motor control to facilitate their diagnosis can be useful in quantitatively monitoring disease progression and efficient treatment planning. As a suitable directive, we introduce a wavelet-based scheme for effective characterization of gait associated with certain neurological disorders. In addition, since the data were recorded from a dynamic process, this work also investigates the need for gait signal re-sampling prior to identification of signal markers in the presence of pathologies. To benefit automated discrimination of gait data, certain characteristic features are extracted from the wavelet-transformed signals. The performance of the proposed approach was evaluated using a database consisting of 15 Parkinson's disease (PD), 20 Huntington's disease (HD), 13 amyotrophic lateral sclerosis (ALS) and 16 healthy control subjects, and an average classification accuracy of 85% is achieved using an unbiased cross-validation strategy. The obtained results demonstrate the potential of the proposed methodology for computer-aided diagnosis and automatic characterization of certain neurological disorders. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Corrected Integral Shape Averaging Applied to Obstructive Sleep Apnea Detection from the Electrocardiogram

    Directory of Open Access Journals (Sweden)

    C. O'Brien

    2007-01-01

    We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing with a sensitivity of 81% and specificity of 84%.
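
    A toy illustration of the underlying idea: remove affine time warping (shift and dilation) before averaging, so that the mean reflects intrinsic shape rather than jitter. The grid search over the affine parameters and the function names below are invented for this sketch; the paper's joint estimation procedure is more sophisticated.

```python
import numpy as np

def affine_resample(sig, a, b, n):
    """Sample sig at warped times a*t + b, with t uniform on [0, 1]."""
    t = np.linspace(0.0, 1.0, n)
    src = np.clip(a * t + b, 0.0, 1.0)
    return np.interp(src, t, sig)

def align_to(ref, sig, dil=(0.8, 1.2), shift=(-0.1, 0.1), steps=21):
    """Grid-search the affine time parameters that best match sig to ref."""
    n = len(ref)
    best = (np.inf, 1.0, 0.0)
    for a in np.linspace(*dil, steps):
        for b in np.linspace(*shift, steps):
            err = np.sum((affine_resample(sig, a, b, n) - ref) ** 2)
            if err < best[0]:
                best = (err, a, b)
    _, a, b = best
    return affine_resample(sig, a, b, n)

def mean_shape(signals):
    """Average the signals after removing affine time jitter/dilation."""
    ref = signals[0]
    return np.mean([align_to(ref, s) for s in signals], axis=0)
```

    Without the alignment step, averaging jittered copies of a pulse smears it out; with it, the mean stays close to the common shape.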

  14. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.

  15. On signal design by the R₀ criterion for non-white Gaussian noise channels

    Science.gov (United States)

    Bordelon, D. L.

    1977-01-01

    The use of the cut-off rate criterion for modulation system design is investigated for channels with non-white Gaussian noise. A signal space representation of the waveform channel is developed, and the cut-off rate for vector channels with additive non-white Gaussian noise and unquantized demodulation is derived. When the signal input to the channel is a continuous random vector, maximization of the cut-off rate with constrained average signal energy leads to a water-filling interpretation of optimal energy distribution in signal space. The necessary condition for a finite signal set to maximize the cut-off rate with constrained energy and an equally likely probability assignment of signal vectors is presented, and an algorithm is outlined for numerically computing the optimum signal set. As an example, the rectangular signal set which has the water-filling average energy distribution and the optimum rectangular set are compared.
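
    The water-filling interpretation can be sketched numerically: allocate per-dimension signal energies E_i = max(0, μ − N_i) against the noise levels N_i, bisecting on the water level μ until the total-energy constraint is met. This is a generic illustration of water-filling, not the paper's algorithm for optimizing finite signal sets.

```python
import numpy as np

def water_fill(noise, total_energy):
    """Water-filling allocation: put more signal energy in the quieter
    dimensions of signal space, E_i = max(0, mu - N_i), with the level mu
    chosen by bisection so the allocations sum to the energy constraint."""
    noise = np.asarray(noise, dtype=float)
    lo, hi = noise.min(), noise.max() + total_energy
    for _ in range(100):                      # bisect on the water level mu
        mu = 0.5 * (lo + hi)
        alloc = np.clip(mu - noise, 0.0, None)
        if alloc.sum() > total_energy:
            hi = mu
        else:
            lo = mu
    return np.clip(0.5 * (lo + hi) - noise, 0.0, None)
```

    For noise levels [1, 2, 4] and a total energy of 3, the level settles at μ = 3, so the quietest dimension gets 2 units, the next gets 1, and the noisiest gets none.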

  16. Performance Analysis of the Effect of Pulsed-Noise Interference on WLAN Signals Transmitted Over a Nakagami Fading Channel

    National Research Council Canada - National Science Library

    Tsoumanis, Andreas

    2004-01-01

    ...) coding with soft decision decoding (SDD) and maximum-likelihood detection improves performance as compared to uncoded signals. In addition, the combination of maximum-likelihood detection and error correction coding renders pulsed-noise...

  17. Front-end data reduction of diagnostic signals by real-time digital filtering

    International Nuclear Information System (INIS)

    Zasche, D.; Fahrbach, H.U.; Harmeyer, E.

    1984-01-01

    Diagnostic measurements on a fusion plasma with high resolution in space, time and signal amplitude involve handling large amounts of data. In the design of the soft-X-ray pinhole camera diagnostic for JET (100 detectors in 2 cameras) a new approach to this problem was found. The analogue-to-digital conversion is performed continuously at the highest sample rate of 200 kHz, lower sample rates (10 kHz, 1 kHz, 100 Hz) are obtained by real-time digital filters which calculate weighted averages over consecutive samples and are undersampled at their outputs to reduce the data rate. At any time, the signals from all detectors are available at all possible data rates in ring buffers. The appropriate data rate can always be recorded on demand. (author)
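
    The rate-reduction step described above, averaging over consecutive samples and undersampling the output, can be sketched as a block-averaging decimator. Uniform weights are used here for simplicity; the JET filters compute weighted averages, so this is an illustrative simplification.

```python
import numpy as np

def decimate_by_averaging(x, factor):
    """Reduce the data rate by averaging consecutive samples and keeping
    one output per block, e.g. 200 kHz -> 10 kHz with factor=20."""
    n = (len(x) // factor) * factor           # drop any ragged tail
    return x[:n].reshape(-1, factor).mean(axis=1)
```

    Besides lowering the rate, the averaging suppresses uncorrelated noise by roughly the square root of the decimation factor.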

  18. An integrated framework for high level design of high performance signal processing circuits on FPGAs

    Science.gov (United States)

    Benkrid, K.; Belkacemi, S.; Sukhsawas, S.

    2005-06-01

    This paper proposes an integrated framework for the high level design of high performance signal processing algorithms' implementations on FPGAs. The framework emerged from a constant need to rapidly implement increasingly complicated algorithms on FPGAs while maintaining the high performance needed in many real time digital signal processing applications. This is particularly important for application developers who often rely on iterative and interactive development methodologies. The central idea behind the proposed framework is to dynamically integrate high performance structural hardware description languages with higher level hardware languages in order to help satisfy the dual requirement of high level design and high performance implementation. The paper illustrates this by integrating two environments: Celoxica's Handel-C language, and HIDE, a structural hardware environment developed at the Queen's University of Belfast. On the one hand, Handel-C has been proven to be very useful in the rapid design and prototyping of FPGA circuits, especially control intensive ones. On the other hand, HIDE has been used extensively, and successfully, in the generation of highly optimised parameterisable FPGA cores. In this paper, this is illustrated in the construction of a scalable and fully parameterisable core for image algebra's five core neighbourhood operations, where fully floorplanned efficient FPGA configurations, in the form of EDIF netlists, are generated automatically for instances of the core. In the proposed combined framework, highly optimised data paths are invoked dynamically from within Handel-C, and are synthesized using HIDE. Although the idea might seem simple prima facie, it could have serious implications for the design of future generations of hardware description languages.

  19. Receiver-based recovery of clipped ofdm signals for papr reduction: A bayesian approach

    KAUST Repository

    Ali, Anum

    2014-01-01

    Clipping is one of the simplest peak-to-average power ratio reduction schemes for orthogonal frequency division multiplexing (OFDM). Deliberately clipping the transmission signal degrades system performance, and clipping mitigation is required at the receiver for information restoration. In this paper, we acknowledge the sparse nature of the clipping signal and propose a low-complexity Bayesian clipping estimation scheme. The proposed scheme utilizes a priori information about the sparsity rate and noise variance for enhanced recovery. At the same time, the proposed scheme is robust against inaccurate estimates of the clipping signal statistics. The undistorted phase property of the clipped signal, as well as the clipping likelihood, is utilized for enhanced reconstruction. Furthermore, motivated by the nature of modern OFDM-based communication systems, we extend our clipping reconstruction approach to multiple antenna receivers and multi-user OFDM. We also address the problem of channel estimation from pilots contaminated by the clipping distortion. Numerical findings are presented that depict favorable results for the proposed scheme compared to the established sparse reconstruction schemes.

  20. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a nonlinear manifold; the barycentric approaches can, however, be seen as approximations to the Riemannian metric, and the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
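
    The quaternion barycenter approach discussed in the article can be sketched as follows: average the unit quaternions componentwise and renormalize. The sign-fixing step and the function name are illustrative details for this sketch (q and −q represent the same rotation, so signs must be made consistent before averaging); the Riemannian-metric alternative is not shown.

```python
import numpy as np

def quat_barycenter(quats):
    """Naive mean of rotations: barycenter of unit quaternions, renormalized.
    This is the first-order approximation the article discusses; it ignores
    that rotations live on a curved manifold."""
    q = np.asarray(quats, dtype=float)
    # Fix the sign ambiguity (q and -q are the same rotation).
    q = q * np.where(q @ q[0] < 0, -1.0, 1.0)[:, None]
    m = q.mean(axis=0)
    return m / np.linalg.norm(m)
```

    Averaging rotations of ±θ about a common axis correctly yields the identity rotation, and the sign fix makes the result invariant to which representative (q or −q) each input uses.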

  1. Signal detection

    International Nuclear Information System (INIS)

    Tholomier, M.

    1985-01-01

    In a scanning electron microscope, whatever the measured signal, the same chain is found: incident beam, sample, signal detection, signal amplification. The resulting signal is used to control the spot luminosity of the observer's cathodoscope, which is synchronized with the beam scanning on the sample; on the cathodoscope, the image of the sample surface in secondary electrons, backscattered electrons, etc., is reconstituted. The best compromise must be found between a recording time short enough to avoid variations (under the incident beam) of the nature of the observed phenomenon, a good spatial resolution of the image, and a sufficiently high signal-to-noise ratio. Noise is one of the basic limitations on scanning electron microscope performance. The whole measurement chain must be optimized to reduce it [fr]

  2. On monogamy of non-locality and macroscopic averages: examples and preliminary results

    Directory of Open Access Journals (Sweden)

    Rui Soares Barbosa

    2014-12-01

    We explore a connection between monogamy of non-locality and a weak macroscopic locality condition: the locality of the average behaviour. These are revealed by our analysis as being two sides of the same coin. Moreover, we exhibit a structural reason for both in the case of Bell-type multipartite scenarios, shedding light on but also generalising the results in the literature [Ramanathan et al., Phys. Rev. Lett. 107, 060405 (2001); Pawlowski & Brukner, Phys. Rev. Lett. 102, 030403 (2009)]. More specifically, we show that, provided the number of particles in each site is large enough compared to the number of allowed measurement settings, and whatever the microscopic state of the system, the macroscopic average behaviour is local realistic, or equivalently, general multipartite monogamy relations hold. This result relies on a classical mathematical theorem by Vorob'ev [Theory Probab. Appl. 7(2), 147-163 (1962)] about extending compatible families of probability distributions defined on the faces of a simplicial complex – in the language of the sheaf-theoretic framework of Abramsky & Brandenburger [New J. Phys. 13, 113036 (2011)], such families correspond to no-signalling empirical models, and the existence of an extension corresponds to locality or non-contextuality. Since Vorob'ev's theorem depends solely on the structure of the simplicial complex, which encodes the compatibility of the measurements, and not on the specific probability distributions (i.e. the empirical models), our result about monogamy relations and locality of macroscopic averages holds not just for quantum theory, but for any empirical model satisfying the no-signalling condition. In this extended abstract, we illustrate our approach by working out a couple of examples, which convey the intuition behind our analysis while keeping the discussion at an elementary level.

  3. A Method for Vibration-Based Structural Interrogation and Health Monitoring Based on Signal Cross-Correlation

    International Nuclear Information System (INIS)

    Trendafilova, I

    2011-01-01

    Vibration-based structural interrogation and health monitoring is a field concerned with estimating the current state of a structure or a component from its vibration response, with regard to its ability to perform its intended function appropriately. One way to approach this problem is through damage features extracted from the measured structural vibration response. This paper suggests using a new concept for the purposes of vibration-based health monitoring. The correlation between two signals, an input and an output, measured on the structure is used to develop a damage indicator. The paper investigates the applicability of the signal cross-correlation and of a nonlinear alternative, the average mutual information between the two signals, for the purposes of structural health monitoring and damage assessment. The suggested methodology is applied and demonstrated for delamination detection in a composite beam.
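
    A hypothetical distillation of the idea: compare the normalized input-output cross-correlation of a test state against a healthy baseline, and use the distance between the two correlation patterns as a damage indicator. The indicator below is an assumption for illustration, not the paper's exact damage feature (and the mutual-information alternative is not shown).

```python
import numpy as np

def cross_correlation_indicator(x_ref, y_ref, x_test, y_test):
    """Damage indicator: distance between the normalized input-output
    cross-correlation patterns of the test state and the healthy baseline.
    Illustrative sketch, not the paper's exact feature."""
    def ncc(x, y):
        x = (x - x.mean()) / x.std()
        y = (y - y.mean()) / y.std()
        return np.correlate(x, y, mode="full") / len(x)
    return np.linalg.norm(ncc(x_test, y_test) - ncc(x_ref, y_ref))
```

    A structure whose input-output relationship has changed (for instance, through a damage-induced nonlinearity) produces a correlation pattern that departs from the baseline, so the indicator grows from zero.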

  4. Contributory factors to traffic crashes at signalized intersections in Hong Kong.

    Science.gov (United States)

    Wong, S C; Sze, N N; Li, Y C

    2007-11-01

    Efficient geometric design and signal timing not only improve operational performance at signalized intersections by expanding capacity and reducing traffic delays, but also result in an appreciable reduction in traffic conflicts, and thus better road safety. Information on the incidence of crashes, traffic flow, geometric design, road environment, and traffic control at 262 signalized intersections in Hong Kong during 2002 and 2003 are incorporated into a crash prediction model. Poisson regression and negative binomial regression are used to quantify the influence of possible contributory factors on the incidence of killed and severe injury (KSI) crashes and slight injury crashes, respectively, while possible interventions by traffic flow are controlled. The results for the incidence of slight injury crashes reveal that the road environment, degree of curvature, and presence of tram stops are significant factors, and that traffic volume has a diminishing effect on the crash risk. The presence of tram stops, number of pedestrian streams, road environment, proportion of commercial vehicles, average lane width, and degree of curvature increase the risk of KSI crashes, but the effect of traffic volume is negligible.

  5. Impact of revascularization of coronary chronic total occlusion on left ventricular function and electrical stability: analysis by speckle tracking echocardiography and signal-averaged electrocardiogram.

    Science.gov (United States)

    Sotomi, Yohei; Okamura, Atsunori; Iwakura, Katsuomi; Date, Motoo; Nagai, Hiroyuki; Yamasaki, Tomohiro; Koyama, Yasushi; Inoue, Koichi; Sakata, Yasushi; Fujii, Kenshi

    2017-06-01

    The present study aimed to assess the mechanisms of the effects of percutaneous coronary intervention (PCI) for chronic total occlusion (CTO) from two different aspects: left ventricular (LV) systolic function assessed by two-dimensional speckle tracking echocardiography (2D-STE), and electrical stability evaluated by the late potential on signal-averaged electrocardiogram (SAECG). We conducted a prospective observational study with consecutive CTO-PCI patients. 2D-STE and SAECG were performed before PCI, and 1 day and 3 months after the procedure. 2D-STE computed global longitudinal strain (GLS) and regional longitudinal strain (RLS) in the CTO area, the collateral blood-supplying donor artery area, and the non-CTO/non-donor area. A total of 37 patients (66 ± 11 years, 78% male) were analyzed. RLS in the CTO and donor areas and GLS were significantly improved 1 day after the procedure, but these improvements diminished over 3 months. The improvement of RLS in the donor area remained significant 3 months after the index procedure (pre-PCI -13.4 ± 4.8% vs. post-3M -15.1 ± 4.5%, P = 0.034). RLS in the non-CTO/non-donor area and LV ejection fraction were not influenced. Mitral annulus velocity was improved at 3-month follow-up (5.0 ± 1.4 vs. 5.6 ± 1.7 cm/s, P = 0.049). Before the procedure, 12 patients (35%) had a late potential. None of the components of the late potential (filtered QRS duration, root-mean-square voltage in the terminal 40 ms, and duration of the low amplitude signal <40 μV) improved. CTO-PCI improved RLS in the donor area at 3-month follow-up without changes in LV ejection fraction. Although a higher prevalence of late potential was observed in the current population compared to a healthy population, the late potential as a surrogate of arrhythmogenic substrate was not influenced by CTO-PCI.

  6. Theory and analysis of accuracy for the method of characteristics direction probabilities with boundary averaging

    International Nuclear Information System (INIS)

    Liu, Zhouyu; Collins, Benjamin; Kochunas, Brendan; Downar, Thomas; Xu, Yunlin; Wu, Hongchun

    2015-01-01

    Highlights: • The CDP combines the benefits of the CPM's efficiency and the MOC's flexibility. • Boundary averaging reduces the computational effort with only a minor loss of accuracy. • An analysis model is used to justify the choice of the optimal averaging strategy. • Numerical results show the performance and accuracy. - Abstract: The method of characteristic direction probabilities (CDP) combines the benefits of the collision probability method (CPM) and the method of characteristics (MOC) for the solution of the integral form of the Boltzmann transport equation. By coupling only the fine regions traversed by the characteristic rays in a particular direction, the computational effort required to calculate the probability matrices and to solve the matrix system is considerably reduced compared to the CPM. Furthermore, boundary averaging is performed to reduce the storage and computation, but the capability of dealing with complicated geometries is preserved, since the same ray tracing information is used as in MOC. An analysis model for the outgoing angular flux is used to analyze a variety of outgoing angular flux averaging methods for the boundary and to justify the choice of the optimal averaging strategy. The boundary-averaged CDP method was then implemented in the Michigan PArallel Characteristic based Transport (MPACT) code to perform 2-D and 3-D transport calculations. Numerical results are given for different cases to show the effect of averaging on the outgoing angular flux, region scalar flux, and the eigenvalue. Comparison of the results with the case with no averaging demonstrates that an angular-dependent averaging strategy is possible for the CDP to improve its computational performance without compromising the achievable accuracy.

  7. Performance evaluation of CPPM modulation in multi-path environments

    International Nuclear Information System (INIS)

    Tasev, Zarko; Kocarev, Ljupco

    2003-01-01

    Chaotic pulse position modulation (CPPM) is a novel technique for communicating with chaotic signals based upon pulse trains in which the intervals between two pulses are determined by the chaotic dynamics of a pulse generator. Using numerical simulations we show that CPPM offers excellent multi-path performance. We simulated the CPPM radio system, which is designed for a WLAN application and operates in the 2.4 GHz ISM frequency band with IEEE 802.11 compliant channel spacing. In this case, the average performance loss due to multi-path for CPPM is less than 5 dB.

  8. Performance evaluation of CPPM modulation in multi-path environments

    Energy Technology Data Exchange (ETDEWEB)

    Tasev, Zarko E-mail: ztasev@ucsd.edu; Kocarev, Ljupco E-mail: lkocarev@ucsd.edu

    2003-01-01

    Chaotic pulse position modulation (CPPM) is a novel technique for communicating with chaotic signals based upon pulse trains in which the intervals between two pulses are determined by the chaotic dynamics of a pulse generator. Using numerical simulations we show that CPPM offers excellent multi-path performance. We simulated the CPPM radio system, which is designed for a WLAN application and operates in the 2.4 GHz ISM frequency band with IEEE 802.11 compliant channel spacing. In this case, the average performance loss due to multi-path for CPPM is less than 5 dB.

  9. On signal design by the R₀ criterion for non-white Gaussian noise channels

    Science.gov (United States)

    Bordelon, D. L.

    1976-01-01

    The use of the R₀ criterion for modulation system design is investigated for channels with non-white Gaussian noise. A signal space representation of the waveform channel is developed, and the cut-off rate R₀ for vector channels with additive non-white Gaussian noise and unquantized demodulation is derived. When the signal input to the channel is a continuous random vector, maximization of R₀ with constrained average signal energy leads to a water-filling interpretation of optimal energy distribution in signal space. The necessary condition for a finite signal set to maximize R₀ with constrained energy and an equally likely probability assignment of signal vectors is presented, and an algorithm is outlined for numerically computing the optimum signal set. A necessary condition on a constrained-energy, finite signal set is found which maximizes a Taylor series approximation of R₀. This signal set is compared with the finite signal set which has the water-filling average energy distribution.

  10. Performance study of highly efficient 520 W average power long pulse ceramic Nd:YAG rod laser

    Science.gov (United States)

    Choubey, Ambar; Vishwakarma, S. C.; Ali, Sabir; Jain, R. K.; Upadhyaya, B. N.; Oak, S. M.

    2013-10-01

    We report the performance study of a 2% atomic doped ceramic Nd:YAG rod for long pulse laser operation in the millisecond regime with pulse duration in the range of 0.5-20 ms. A maximum average output power of 520 W with 180 J maximum pulse energy has been achieved with a slope efficiency of 5.4% using a dual rod configuration, which is the highest for typical lamp pumped ceramic Nd:YAG lasers. The laser output characteristics of the ceramic Nd:YAG rod were revealed to be nearly equivalent or superior to those of high-quality single crystal Nd:YAG rod. The laser pump chamber and resonator were designed and optimized to achieve a high efficiency and good beam quality with a beam parameter product of 16 mm mrad (M² ≈ 47). The laser output beam was efficiently coupled through a 400 μm core diameter optical fiber with 90% overall transmission efficiency. This ceramic Nd:YAG laser will be useful for various material processing applications in industry.

  11. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
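    The forgetting-and-pruning cycle described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation; the forgetting factor `alpha`, the window cutoff `c`, and the toy Gaussian one-step forecast densities are all assumptions made for the example:

```python
import numpy as np

def dma_step(weights, log_liks, alpha=0.99, c=0.01):
    """One DMA update: forgetting, Bayes update, then Occam's window.

    weights  -- current model probabilities (sum to 1)
    log_liks -- one-step predictive log-likelihoods of each model
    """
    w = weights ** alpha                       # forgetting: discount old evidence
    w = w / w.sum()
    w = w * np.exp(log_liks - log_liks.max())  # Bayes update (numerically stable)
    w = w / w.sum()
    keep = w >= c * w.max()                    # dynamic Occam's window
    w = np.where(keep, w, 0.0)
    return w / w.sum(), keep

# Two toy models predicting y ~ N(mu_k, 1); model 0 is correct.
rng = np.random.default_rng(0)
mus = np.array([0.0, 3.0])
w = np.array([0.5, 0.5])
for _ in range(50):
    y = rng.normal(0.0, 1.0)
    log_liks = -0.5 * (y - mus) ** 2
    w, keep = dma_step(w, log_liks)
print(w)  # weight concentrates on the correct model
```

    Raising the weights to the power `alpha < 1` gradually discounts old evidence, and the Occam's-window step zeroes out models whose weight falls below a fraction `c` of the best model's weight, so only a small subset of models is carried forward at each point in time.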

  12. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  13. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    Science.gov (United States)

    Hernandez, Wilmar

    2007-01-01

    In this paper, a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is presented. A comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art in applying robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way forward. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be made overnight, because some open research issues still have to be solved. This paper draws attention to one of these open research issues and aims to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.

  14. An Experimental Observation of Axial Variation of Average Size of Methane Clusters in a Gas Jet

    International Nuclear Information System (INIS)

    Ji-Feng, Han; Chao-Wen, Yang; Jing-Wei, Miao; Jian-Feng, Lu; Meng, Liu; Xiao-Bing, Luo; Mian-Gong, Shi

    2010-01-01

    Axial variation of the average size of methane clusters in a gas jet produced by supersonic expansion of methane through a cylindrical nozzle 0.8 mm in diameter is observed using a Rayleigh scattering method. The scattered light intensity exhibits power-law scaling with the backing pressure ranging from 16 to 50 bar, and the exponent is strongly Z dependent, varying from 8.4 (Z = 3 mm) to 5.4 (Z = 11 mm), which is much larger than that of the argon cluster. The scattered light intensity versus axial position shows that the position of 5 mm has the maximum signal intensity. The estimation of the average cluster size as a function of axial position Z indicates that the cluster growth process continues until the maximum average cluster size is reached at Z = 9 mm, and the average cluster size decreases gradually for Z > 9 mm

  15. Variation of High-Intensity Therapeutic Ultrasound (HITU) Pressure Field Characterization: Effects of Hydrophone Choice, Nonlinearity, Spatial Averaging and Complex Deconvolution.

    Science.gov (United States)

    Liu, Yunbo; Wear, Keith A; Harris, Gerald R

    2017-10-01

    Reliable acoustic characterization is fundamental for patient safety and clinical efficacy during high-intensity therapeutic ultrasound (HITU) treatment. Technical challenges, such as measurement variation and signal analysis, still exist for HITU exposimetry using ultrasound hydrophones. In this work, four hydrophones were compared for pressure measurement: a robust needle hydrophone, a small polyvinylidene fluoride capsule hydrophone and two fiberoptic hydrophones. The focal waveform and beam distribution of a single-element HITU transducer (1.05 MHz and 3.3 MHz) were evaluated. Complex deconvolution between the hydrophone voltage signal and the frequency-dependent complex sensitivity was performed to obtain pressure waveforms. Compressional pressure (p+), rarefactional pressure (p−) and focal beam distribution were compared up to 10.6/−6.0 MPa (p+/p−) (1.05 MHz) and 20.65/−7.20 MPa (3.3 MHz). The effects of spatial averaging, local non-linear distortion, complex deconvolution and hydrophone damage thresholds were investigated. This study showed a variation of no better than 10%-15% among hydrophones during HITU pressure characterization. Published by Elsevier Inc.

  16. Estimating Safety Effects of Green-Man Countdown Devices at Signalized Pedestrian Crosswalk Based on Cellular Automata

    Directory of Open Access Journals (Sweden)

    Chen Chai

    2017-01-01

    Safety effects of Green-Man Countdown Devices (GMCD) at signalized pedestrian crosswalks are evaluated. Pedestrian behavior at GMCD and non-GMCD crosswalks is observed and analyzed. A microsimulation model is developed based on field observations to estimate safety performance. Simulation outputs allow analysts to assess the impacts of GMCD under various conditions with different geometric layouts, traffic and pedestrian volumes, and green times. According to the simulation results, the safety impact of GMCD is affected by the traffic condition as well as by the time elapsed within the green-man signal phase. In general, GMCD increases average walking velocity, especially during the last few seconds. The installation of GMCD generally improves safety performance, especially at more crowded crossings. Conflict severity is increased during the last 10 s after GMCD installation. Findings from this study suggest that the current practice, which is to install GMCD at more crowded crosswalks or near school zones, is effective. Moreover, at crosswalks with GMCD, a longer all-red signal phase is suggested to improve pedestrian safety during the intergreen period.

  17. On the performance of diagonal lattice space-time codes

    KAUST Repository

    Abediseid, Walid

    2013-11-01

    There has been tremendous work on designing space-time codes for the quasi-static multiple-input multiple-output (MIMO) channel. All coding designs to date focus on high performance, high rates, low-complexity encoding and decoding, or a combination of these criteria [1]-[9]. In this paper, we analyze in detail the performance limits of diagonal lattice space-time codes under lattice decoding. We present both lower and upper bounds on the average decoding error probability. We first derive a new closed-form expression for the lower bound using the so-called sphere lower bound. This bound represents the ultimate performance limit a diagonal lattice space-time code can achieve at any signal-to-noise ratio (SNR). The upper bound is then derived using the union bound, which demonstrates how the average error probability can be minimized by maximizing the minimum product distance of the code. Combining the lower and upper bounds on the average error probability yields a simple upper bound on the minimum product distance that any (complex) lattice code can achieve. In the high-SNR regime, we discuss the outage performance of such codes and provide the achievable diversity-multiplexing tradeoff under lattice decoding. © 2013 IEEE.

  18. Performance of equal gain combining with quantized phases in rayleigh fading channels

    KAUST Repository

    Rizvi, Umar H.

    2011-01-01

    In this paper, we analyze the error probability of equal gain combining with quantized channel phase compensation for binary phase shift keying signalling over Rayleigh fading channels. The probability density and characteristic functions of the combined signal amplitude are derived and used to compute analytic expressions for the bit error probability as a function of the number of quantization levels L, the number of diversity branches N_R and the average received signal-to-noise ratio. The analysis is utilized to outline the trade-off between N_R and L and to compare the performance with non-coherent binary frequency shift keying and differential binary phase shift keying schemes under diversity reception. © 2011 IEEE.
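    The trade-off between the number of branches N_R and the number of quantization levels L can be explored with a quick Monte Carlo sketch. This is a simplified simulation of the stated system model (BPSK over i.i.d. Rayleigh branches with L-level phase compensation before equal gain combining), not the paper's analytical derivation; all parameter names are illustrative:

```python
import numpy as np

def egc_quantized_ber(n_r, L, snr_db, n_trials=200_000, seed=1):
    """Simulated BER of BPSK with equal gain combining and L-level phase compensation."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    # Rayleigh fading branches with unit average power
    h = (rng.standard_normal((n_trials, n_r)) +
         1j * rng.standard_normal((n_trials, n_r))) / np.sqrt(2)
    # complex AWGN scaled so the average per-branch SNR equals `snr`
    n = (rng.standard_normal((n_trials, n_r)) +
         1j * rng.standard_normal((n_trials, n_r))) / np.sqrt(2 * snr)
    r = h * 1.0 + n                       # transmitted BPSK symbol is +1
    # quantize each branch phase to the nearest of L uniformly spaced levels
    step = 2 * np.pi / L
    phase_q = np.round(np.angle(h) / step) * step
    combined = np.sum(r * np.exp(-1j * phase_q), axis=1).real
    return np.mean(combined < 0)          # decision errors for the +1 symbol

ber1 = egc_quantized_ber(n_r=1, L=4, snr_db=10)
ber2 = egc_quantized_ber(n_r=2, L=4, snr_db=10)
```

    Increasing `L` shrinks the residual phase error per branch, while increasing `n_r` adds diversity; the two-branch combiner gives a markedly lower simulated BER than the single branch at the same average SNR.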

  19. Acquisition and deconvolution of seismic signals by different methods to perform direct ground-force measurements

    Science.gov (United States)

    Poletto, Flavio; Schleifer, Andrea; Zgauc, Franco; Meneghini, Fabio; Petronio, Lorenzo

    2016-12-01

    We present the results of a novel borehole-seismic experiment in which we used different types of onshore transient surface sources, impulsive and non-impulsive, together with direct ground-force recordings. The ground-force signals were obtained by baseplate load cells located beneath the sources, and by buried soil-stress sensors installed in the very shallow subsurface together with accelerometers. The aim was to characterize the sources' emission by their complex impedance, a function of the near-field vibrations and soil stress components, and above all to obtain appropriate deconvolution operators to remove the signature of the sources from the far-field seismic signals. The data analysis shows the differences in the reference measurements utilized to deconvolve the source signature. As downgoing waves, we process the signals of vertical seismic profiles (VSP) recorded in the far-field approximation by an array of permanent geophones cemented at shallow-to-medium depth outside the casing of an instrumented well. We obtain a significant improvement in the waveform of the radiated seismic-vibrator signals deconvolved by ground force, similar to that of the seismograms generated by the impulsive sources, and we demonstrate that the results obtained with the different sources have low values of the repeatability norm. The comparison demonstrates the potential of the direct ground-force measurement approach to effectively remove the far-field source signature in onshore VSP data, and to increase the performance of permanent acquisition installations for time-lapse applications.

  20. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.

  1. Free-space optical communications with peak and average constraints: High SNR capacity approximation

    KAUST Repository

    Chaaban, Anas

    2015-09-07

    The capacity of the intensity-modulation direct-detection (IM-DD) free-space optical channel with both average and peak intensity constraints is studied. A new capacity lower bound is derived by using a truncated-Gaussian input distribution. Numerical evaluation shows that this capacity lower bound is nearly tight at high signal-to-noise ratio (SNR), while it is shown analytically that the gap to capacity upper bounds is a small constant at high SNR. In particular, the gap to the high-SNR asymptotic capacity of the channel under either a peak or an average constraint is small. This leads to a simple approximation of the high SNR capacity. Additionally, a new capacity upper bound is derived using sphere-packing arguments. This bound is tight at high SNR for a channel with a dominant peak constraint.

  2. Alternatives to Pyrotechnic Distress Signals; Additional Signal Evaluation

    Science.gov (United States)

    2017-06-01

    device performance standard that addresses Coast Guard project sponsor and stakeholder needs. Key words: Visual Distress Signal Device (VDSD)...devices. The report discussed the concept of “effective intensity,” as used by the International Association of Marine Aids to Navigation and Lighthouse...efficacy of Cyan as a signal color. In order to move forward, the RDC project team met with CG-ENG-4 and other Coast Guard stakeholders (Offices of

  3. MO-F-CAMPUS-J-03: Sorting 2D Dynamic MR Images Using Internal Respiratory Signal for 4D MRI

    International Nuclear Information System (INIS)

    Wen, Z; Hui, C; Beddar, S; Stemkens, B; Tijssen, R; Berg, C van den

    2015-01-01

    Purpose: To develop a novel algorithm to extract internal respiratory signal (IRS) for sorting dynamic magnetic resonance (MR) images in order to achieve four-dimensional (4D) MR imaging. Methods: Dynamic MR images were obtained with the balanced steady state free precession by acquiring each two-dimensional sagittal slice repeatedly for more than one breathing cycle. To generate a robust IRS, we used 5 different representative internal respiratory surrogates in both the image space (body area) and the Fourier space (the first two low-frequency phase components in the anterior-posterior direction, and the first two low-frequency phase components in the superior-inferior direction). A clustering algorithm was then used to search for a group of similar individual internal signals, which was then used to formulate the final IRS. A phantom study and a volunteer study were performed to demonstrate the effectiveness of this algorithm. The IRS was compared to the signal from the respiratory bellows. Results: The IRS computed by our algorithm matched well with the bellows signal in both the phantom and the volunteer studies. On average, the normalized cross correlation between the IRS and the bellows signal was 0.97 in the phantom study and 0.87 in the volunteer study, respectively. The average difference between the end inspiration times in the IRS and bellows signal was 0.18 s in the phantom study and 0.14 s in the volunteer study, respectively. 4D images sorted based on the IRS showed minimal mismatched artifacts, and the motion of the anatomy was coherent with the respiratory phases. Conclusion: A novel algorithm was developed to generate IRS from dynamic MR images to achieve 4D MR imaging. The performance of the IRS was comparable to that of the bellows signal. It can be easily implemented into the clinic and potentially could replace the use of external respiratory surrogates. 
This research was partially funded by the Center for Radiation Oncology Research from

  4. Compressive sensing of signals generated in plastic scintillators in a novel J-PET instrument

    International Nuclear Information System (INIS)

    Raczyński, L.; Moskal, P.; Kowalski, P.; Wiślicki, W.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Rudy, Z.; Rundel, O.; Salabura, P.

    2015-01-01

    The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The discussed detector offers improvement of the Time of Flight (TOF) resolution due to the use of fast plastic scintillators and dedicated electronics allowing for sampling, in the voltage domain, of signals with durations of a few nanoseconds. In this paper we show that recovery of the whole signal, based on only a few samples, is possible. In order to do that, we incorporate the training signals into the Tikhonov regularization framework and we perform the Principal Component Analysis decomposition, which is well known for its compaction properties. The method yields a simple closed-form analytical solution that does not require iterative processing. Moreover, from Bayes theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This is the key to introducing and proving the formula for calculation of the signal recovery error. In this paper we show that the average recovery error is approximately inversely proportional to the number of acquired samples
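    A minimal sketch of the recovery scheme described above, using assumed toy training signals (random mixtures of sinusoidal modes) rather than real scintillator waveforms: the training set supplies a PCA basis, and a Tikhonov-regularized fit of the basis coefficients recovers the full waveform from only a few samples in closed form:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_train, k = 64, 500, 8
t = np.linspace(0, 1, N)
# toy "training signals": random mixtures of k smooth modes
modes = np.array([np.sin(2 * np.pi * (i + 1) * t) for i in range(k)])
train = rng.standard_normal((n_train, k)) @ modes

mean = train.mean(axis=0)
# PCA basis of the training set (top-k principal components)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
V = Vt[:k]                                   # k x N

def recover(s, idx, lam=1e-3):
    """Tikhonov-regularized fit of PCA coefficients from the samples s[idx]."""
    A = V[:, idx]                            # k x m sensing matrix
    y = s[idx] - mean[idx]
    a = np.linalg.solve(A @ A.T + lam * np.eye(k), A @ y)
    return mean + a @ V                      # closed-form reconstruction

s = rng.standard_normal(k) @ modes           # a new signal from the same model
err = {}
for m in (4, 16):
    idx = rng.choice(N, size=m, replace=False)
    err[m] = np.linalg.norm(recover(s, idx) - s) / np.linalg.norm(s)
```

    With more samples than principal components (m = 16 > k = 8) the reconstruction is nearly exact, while the underdetermined case (m = 4) leaves a visibly larger error, consistent with the error-versus-samples behavior the abstract describes.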

  5. Compressive sensing of signals generated in plastic scintillators in a novel J-PET instrument

    Energy Technology Data Exchange (ETDEWEB)

    Raczyński, L., E-mail: lech.raczynski@ncbj.gov.pl [Świerk Computing Centre, National Centre for Nuclear Research, 05-400 Otwock-Świerk (Poland); Moskal, P. [Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, 30-059 Cracow (Poland); Kowalski, P.; Wiślicki, W. [Świerk Computing Centre, National Centre for Nuclear Research, 05-400 Otwock-Świerk (Poland); Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A. [Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, 30-059 Cracow (Poland); Kapłon, Ł. [Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, 30-059 Cracow (Poland); Institute of Metallurgy and Materials Science of Polish Academy of Sciences, Cracow (Poland); Kochanowski, A. [Faculty of Chemistry, Jagiellonian University, 30-060 Cracow (Poland); Korcyl, G.; Kowal, J.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Rudy, Z.; Rundel, O.; Salabura, P. [Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, 30-059 Cracow (Poland); and others

    2015-06-21

    The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The discussed detector offers improvement of the Time of Flight (TOF) resolution due to the use of fast plastic scintillators and dedicated electronics allowing for sampling, in the voltage domain, of signals with durations of a few nanoseconds. In this paper we show that recovery of the whole signal, based on only a few samples, is possible. In order to do that, we incorporate the training signals into the Tikhonov regularization framework and we perform the Principal Component Analysis decomposition, which is well known for its compaction properties. The method yields a simple closed-form analytical solution that does not require iterative processing. Moreover, from Bayes theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This is the key to introducing and proving the formula for calculation of the signal recovery error. In this paper we show that the average recovery error is approximately inversely proportional to the number of acquired samples.

  6. Compressive sensing of signals generated in plastic scintillators in a novel J-PET instrument

    Science.gov (United States)

    Raczyński, L.; Moskal, P.; Kowalski, P.; Wiślicki, W.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Rudy, Z.; Rundel, O.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zieliński, M.; Zoń, N.

    2015-06-01

    The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The discussed detector offers improvement of the Time of Flight (TOF) resolution due to the use of fast plastic scintillators and dedicated electronics allowing for sampling, in the voltage domain, of signals with durations of a few nanoseconds. In this paper we show that recovery of the whole signal, based on only a few samples, is possible. In order to do that, we incorporate the training signals into the Tikhonov regularization framework and we perform the Principal Component Analysis decomposition, which is well known for its compaction properties. The method yields a simple closed-form analytical solution that does not require iterative processing. Moreover, from Bayes theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This is the key to introducing and proving the formula for calculation of the signal recovery error. In this paper we show that the average recovery error is approximately inversely proportional to the number of acquired samples.

  7. Performance analysis of selective cooperation with fixed gain relays in Nakagami-m channels

    KAUST Repository

    Hussain, Syed Imtiaz; Hasna, Mazen Omar; Alouini, Mohamed-Slim

    2012-01-01

    Selecting the best relay using the maximum signal to noise ratio (SNR) among all the relays ready to cooperate saves system resources and utilizes the available bandwidth more efficiently compared to the regular all-relay cooperation. In this paper, we analyze the performance of the best relay selection scheme with fixed gain relays operating in Nakagami-m channels. We first derive the probability density function (PDF) of the upper bounded end-to-end SNR of the relay link. Using this PDF, we derive some key performance parameters for the system including average bit error probability and average channel capacity. The analytical results are verified through Monte Carlo simulations. © 2012 Elsevier B.V.
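    The best-relay selection step can be checked with a short Monte Carlo sketch. This is an illustrative simulation, not the paper's closed-form PDF analysis; BPSK signalling and i.i.d. branches are assumed, and all parameter names are made up for the example:

```python
import numpy as np

def avg_bep_best_relay(n_relays, m=2, avg_snr_db=10, n_trials=200_000, seed=3):
    """Simulated average bit error probability of BPSK with best-relay selection.

    Under Nakagami-m fading, the per-relay instantaneous SNR is
    Gamma(shape=m, scale=avg_snr/m); the relay with the maximum SNR is used.
    """
    rng = np.random.default_rng(seed)
    avg_snr = 10 ** (avg_snr_db / 10)
    snrs = rng.gamma(shape=m, scale=avg_snr / m, size=(n_trials, n_relays))
    best = snrs.max(axis=1)                    # best-relay selection
    z = rng.standard_normal(n_trials)          # unit-variance Gaussian noise
    return np.mean(np.sqrt(2 * best) + z < 0)  # BPSK error event per trial

bep1 = avg_bep_best_relay(n_relays=1)
bep3 = avg_bep_best_relay(n_relays=3)
```

    Selecting the maximum-SNR relay among three branches lowers the simulated average bit error probability well below the single-relay case at the same average SNR, which is the selection-diversity gain the abstract quantifies analytically.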

  8. Performance analysis of selective cooperation with fixed gain relays in Nakagami-m channels

    KAUST Repository

    Hussain, Syed Imtiaz

    2012-09-01

    Selecting the best relay using the maximum signal to noise ratio (SNR) among all the relays ready to cooperate saves system resources and utilizes the available bandwidth more efficiently compared to the regular all-relay cooperation. In this paper, we analyze the performance of the best relay selection scheme with fixed gain relays operating in Nakagami-m channels. We first derive the probability density function (PDF) of the upper bounded end-to-end SNR of the relay link. Using this PDF, we derive some key performance parameters for the system including average bit error probability and average channel capacity. The analytical results are verified through Monte Carlo simulations. © 2012 Elsevier B.V.

  9. Wind farms providing secondary frequency regulation: evaluating the performance of model-based receding horizon control

    Directory of Open Access Journals (Sweden)

    C. R. Shapiro

    2018-01-01

    This paper is an extended version of our paper presented at the 2016 TORQUE conference (Shapiro et al., 2016). We investigate the use of wind farms to provide secondary frequency regulation for a power grid using a model-based receding horizon control framework. In order to enable real-time implementation, the control actions are computed based on a time-varying one-dimensional wake model. This model describes wake advection and wake interactions, both of which play an important role in wind farm power production. In order to test the control strategy, it is implemented in a large-eddy simulation (LES) model of an 84-turbine wind farm using the actuator disk turbine representation. Rotor-averaged velocity measurements at each turbine are used to provide feedback for error correction. The importance of including the dynamics of wake advection in the underlying wake model is tested by comparing the performance of this dynamic-model control approach to a comparable static-model control approach that relies on a modified Jensen model. We compare the performance of both control approaches using two types of regulation signals, RegA and RegD, which are used by PJM, an independent system operator in the eastern United States. The poor performance of the static-model control relative to the dynamic-model control demonstrates that modeling the dynamics of wake advection is key to providing the proposed type of model-based coordinated control of large wind farms. We further explore the performance of the dynamic-model control via composite performance scores used by PJM to qualify plants for regulation services or markets. Our results demonstrate that the dynamic-model-controlled wind farm consistently performs well, passing the qualification threshold for all fast-acting RegD signals. For the RegA signal, which changes over slower timescales, the dynamic-model control leads to average performance that surpasses the qualification threshold, but further

  10. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended to the case in which there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The interaction parameters of this model were determined by a least-squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for nucleon matter of varying particle asymmetry, ranging from symmetry, through the neutron-drip line, up to the point where the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  11. Muon Signals at a Low Signal-to-Noise Ratio Environment

    CERN Document Server

    Zakareishvili, Tamar; The ATLAS collaboration

    2017-01-01

    Calorimeters provide high-resolution energy measurements for particle detection. Muon signals are important for evaluating electronics performance, since they produce a signal that is close to electronic noise values. This work provides a noise RMS analysis for the Demonstrator drawer of the 2016 Tile Calorimeter (TileCal) Test Beam in order to help reconstruct events in a low signal-to-noise environment. Muon signals were then found for a beam penetrating all three layers of the drawer. The Demonstrator drawer is an electronics candidate for TileCal, part of the ATLAS experiment at the Large Hadron Collider operated at the European Organization for Nuclear Research (CERN).

  12. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used and measured by Kendall’s tau. The results show that the Normal copula can be used for almost all shifts.
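    The ARL estimation can be reproduced in miniature for the independent (no-copula) case. The following is an illustrative sketch with assumed chart parameters `lam` and `k` and the usual asymptotic EWMA limits, not the paper's copula-based simulation:

```python
import numpy as np

def ewma_arl(mean, lam=0.2, k=3.0, target_mean=1.0,
             n_runs=1000, max_len=20_000, seed=4):
    """Monte Carlo average run length of a two-sided EWMA chart.

    Observations are exponential with the given mean; the control limits
    are asymptotic EWMA limits around the in-control mean (for an
    exponential, the standard deviation equals the mean).
    """
    rng = np.random.default_rng(seed)
    sigma_z = target_mean * np.sqrt(lam / (2 - lam))
    ucl = target_mean + k * sigma_z
    lcl = max(target_mean - k * sigma_z, 0.0)
    lengths = []
    for _ in range(n_runs):
        z = target_mean                       # start the statistic in control
        for t in range(1, max_len + 1):
            z = lam * rng.exponential(mean) + (1 - lam) * z
            if z > ucl or z < lcl:
                break                         # out-of-control signal at time t
        lengths.append(t)
    return float(np.mean(lengths))

arl0 = ewma_arl(mean=1.0)   # in-control ARL
arl1 = ewma_arl(mean=1.5)   # ARL after an upward shift of the process mean
```

    A shift in the process mean drives the EWMA statistic across the limit much sooner, so the out-of-control ARL is far below the in-control ARL; the paper performs the same comparison with copula-dependent observations.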

  13. Single Trial Classification of Evoked EEG Signals Due to RGB Colors

    Directory of Open Access Journals (Sweden)

    Eman Alharbi

    2016-03-01

    Recently, the impact of colors on brain signals has become one of the leading research topics in BCI systems. These studies are based on examining brain behavior after a color stimulus, and on finding a way to classify its signals offline, without considering real time. Moving to the next step, we present a real-time (online) classification model for EEG signals evoked by RGB color stimuli, which has not been presented in previous studies. In this research, EEG signals were recorded from 7 subjects through the BCI2000 toolbox. The Empirical Mode Decomposition (EMD) technique was used at the signal analysis stage. Various feature extraction methods were investigated to find the best and most reliable set, including event-related spectral perturbations (ERSP), target mean with Fast Fourier Transform (FFT), Wavelet Packet Decomposition (WPD), the Auto-Regressive (AR) model and the EMD residual. A new feature selection method was created based on the peak time of the EEG signal when red and blue color stimuli are presented. The ERP image was used to find the peak time, which was around 300 ms for the red color and around 450 ms for the blue color. The classification was performed using the Support Vector Machine (SVM) classifier, with the LIBSVM toolbox used for that purpose. The EMD residual was found to be the most reliable method, giving the highest classification accuracy with an average of 88.5% and an execution time of only 14 seconds.

  14. CHARACTERIZATION OF THE EFFECTS OF INHALED PERCHLOROETHYLENE ON SUSTAINED ATTENTION IN RATS PERFORMING A VISUAL SIGNAL DETECTION TASK

    Science.gov (United States)

    The aliphatic hydrocarbon perchloroethylene (PCE) has been associated with neurobehavioral dysfunction, including reduced attention, in humans. The current study sought to assess the effects of inhaled PCE on sustained attention in rats performing a visual signal detection task (S...

  15. RTS noise and dark current white defects reduction using selective averaging based on a multi-aperture system.

    Science.gov (United States)

    Zhang, Bo; Kagawa, Keiichiro; Takasawa, Taishi; Seo, Min Woong; Yasutomi, Keita; Kawahito, Shoji

    2014-01-16

    In extremely low-light conditions, random telegraph signal (RTS) noise and dark current white defects become visible. In this paper, a multi-aperture imaging system and selective averaging method which removes the RTS noise and the dark current white defects by minimizing the synthetic sensor noise at every pixel is proposed. In the multi-aperture imaging system, a very small synthetic F-number which is much smaller than 1.0 is achieved by increasing optical gain with multiple lenses. It is verified by simulation that the effective noise normalized by optical gain in the peak of noise histogram is reduced from 1.38e⁻ to 0.48e⁻ in a 3 × 3-aperture system using low-noise CMOS image sensors based on folding-integration and cyclic column ADCs. In the experiment, a prototype 3 × 3-aperture camera, where each aperture has 200 × 200 pixels and an imaging lens with a focal length of 3.0 mm and F-number of 3.0, is developed. Under a low-light condition, in which the maximum average signal is 11e⁻ per aperture, the RTS and dark current white defects are removed and the peak signal-to-noise ratio (PSNR) of the image is increased by 6.3 dB.
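    A minimal sketch of per-pixel selective averaging under an assumed known per-pixel noise map: defective (RTS or white-defect) pixels are excluded, and the remaining apertures are inverse-variance weighted, which minimizes the variance of each combined pixel. This illustrates the idea, not the authors' sensor pipeline; all sizes and thresholds are made up:

```python
import numpy as np

def selective_average(stack, sigma, sigma_max=5.0):
    """Combine a multi-aperture image stack by per-pixel selective averaging.

    stack, sigma: (n_apertures, H, W) pixel values and per-pixel noise sigmas.
    Pixels whose sigma exceeds sigma_max (e.g. RTS or white-defect pixels)
    are excluded; the rest are inverse-variance weighted.
    """
    w = np.where(sigma <= sigma_max, 1.0 / sigma**2, 0.0)
    return (w * stack).sum(axis=0) / w.sum(axis=0)

# toy example: 9 apertures, a few very noisy "defective" pixels per aperture
rng = np.random.default_rng(5)
n, H, W = 9, 32, 32
sigma = np.full((n, H, W), 1.0)
defects = rng.random((n, H, W)) < 0.02        # ~2% defective pixels
sigma[defects] = 50.0
truth = np.full((H, W), 10.0)
stack = truth + rng.standard_normal((n, H, W)) * sigma
combined = selective_average(stack, sigma)
plain = stack.mean(axis=0)                    # naive average for comparison
```

    Because a single 50-sigma defect dominates a plain 9-aperture mean, the selective average has a far lower residual error than the naive average, mirroring the PSNR gain reported in the abstract.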

  16. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
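    The stated result can be checked numerically: with weighting functions w1 and w2 and ratio r = w1/w2, the difference between the two weighted averages equals the w2-weighted covariance of the variable with r, divided by the w2-weighted mean of r. A small sketch with arbitrary synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)
v = rng.normal(size=100)          # the variable being averaged
w1 = rng.uniform(1, 3, size=100)  # one weighting function
w2 = rng.uniform(1, 3, size=100)  # an alternative weighting function

a1 = np.average(v, weights=w1)
a2 = np.average(v, weights=w2)

# Identity: A1 - A2 = Cov_2(v, r) / E_2[r], with r = w1/w2 and the
# covariance and mean taken under the w2 weighting.
r = w1 / w2
e_r = np.average(r, weights=w2)
cov_vr = np.average(v * r, weights=w2) - np.average(v, weights=w2) * e_r

assert np.isclose(a1 - a2, cov_vr / e_r)
```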

  17. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    Science.gov (United States)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes suboptimal estimation a viable practical alternative to the composite average method generally employed at present.
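    The comparison between the composite (simple) average and the minimum-mean-squared-error linear estimate can be illustrated with assumed second-order statistics; the exponential autocovariance, noise level, and averaging window below are placeholders, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(3)
t_obs = np.sort(rng.uniform(0, 10, size=15))   # irregularly spaced times

# Assumed statistics: exponential signal autocovariance plus white noise
sig2, tau, noise2 = 1.0, 2.0, 0.5
cov = lambda dt: sig2 * np.exp(-np.abs(dt) / tau)

# Target: the time average of the signal over the window [4, 6]
t_grid = np.linspace(4, 6, 200)
C_dd = cov(t_obs[:, None] - t_obs[None, :]) + noise2 * np.eye(t_obs.size)
c_d = cov(t_obs[:, None] - t_grid[None, :]).mean(axis=1)
var_avg = cov(t_grid[:, None] - t_grid[None, :]).mean()

def expected_mse(w):
    """Expected squared error of the linear estimate w·d of the average."""
    return var_avg - 2 * w @ c_d + w @ C_dd @ w

w_opt = np.linalg.solve(C_dd, c_d)       # optimal (minimum-MSE) weights
in_win = (t_obs >= 4) & (t_obs <= 6)
w_comp = in_win / max(in_win.sum(), 1)   # composite: simple in-window mean
```

By construction the optimal weights can never have a larger expected error than the composite average; the gap between the two is what the paper quantifies for realistic parameter ranges.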

  18. A Review on Successive Interference Cancellation-Based Optical PPM-CDMA Signaling

    Science.gov (United States)

    Alsowaidi, Naif; Eltaif, Tawfig; Mokhtar, Mohd Ridzuan

    2017-06-01

    This paper presents a comprehensive review of the successive interference cancellation (SIC) scheme using pulse position modulation (PPM) for optical code division multiple access (OCDMA) systems. The SIC scheme selects the highest-intensity signal among the users not yet detected, subtracts it from the overall received signal, and thereby generates a new received signal; this process continues until all users have been detected and eliminated one by one. It is shown that the random location of the sequences due to PPM encoding can reduce the probability of concentrated buildup of pulse overlap in any one time slot, and helps SIC easily remove the effect of the strongest signal at each stage of the cancellation process. The system bit error rate (BER) performance with modified quadratic congruence (MQC) codes used as signature sequences has been investigated. A detailed theoretical analysis of the proposed system is presented, taking into account the impact of imperfect interference cancellation, the loss produced by splitting during encoding and decoding, the channel loss, and multiple access interference. Results show that under an average effective power constraint, the optical CDMA system using the SIC scheme with M-ary PPM modulation outperforms the conventional correlator detector and the SIC scheme with on-off keying (OOK) modulation.
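    The stage-by-stage cancellation loop can be sketched as follows; the sparse on-off signature sequences here are random placeholders standing in for MQC codes, and channel effects are omitted:

```python
import numpy as np

rng = np.random.default_rng(4)
n_users, n_chips = 4, 64

# Placeholder signature sequences (random sparse 0/1 codes, not MQC codes)
codes = rng.choice([0.0, 1.0], size=(n_users, n_chips), p=[0.9, 0.1])
amps = np.array([4.0, 3.0, 2.0, 1.0])          # distinct received powers
received = (amps[:, None] * codes).sum(axis=0)  # superposed received signal

detected = []
for _ in range(n_users):
    # Correlate the residual signal with every user's code
    corr = received @ codes.T / np.maximum((codes ** 2).sum(axis=1), 1)
    corr[detected] = -np.inf            # skip users already cancelled
    k = int(np.argmax(corr))            # strongest remaining user
    detected.append(k)
    received = received - corr[k] * codes[k]  # subtract its estimate
```

Each pass removes the strongest remaining contribution, so later (weaker) users are detected against progressively less multiple access interference.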

  19. Sharing the Licensed Spectrum of Full-Duplex Systems using Improper Gaussian Signaling

    KAUST Repository

    Gaafar, Mohamed; Amin, Osama; Abediseid, Walid; Alouini, Mohamed-Slim

    2016-01-01

    Sharing the spectrum with in-band full-duplex (FD) primary users (PU) is a challenging and interesting problem in underlay cognitive radio (CR) systems. The self-interference introduced at the primary network may dramatically impede the secondary user (SU) opportunity to access the spectrum. In this work, we attempt to tackle this problem through the use of so-called improper Gaussian signaling (IGS). Such a signaling technique has demonstrated its superiority in improving the overall performance in interference-limited networks. Particularly, we assume a system with a SU pair working in half-duplex mode that uses IGS while the FD PU pair implements the regular proper Gaussian signaling technique. First, we derive a closed-form expression for the SU outage probability while maintaining the required PU quality-of-service based on the average channel state information. Finally, we provide some numerical results that validate the tightness of the PU outage probability bound and demonstrate the advantage of employing IGS for the SU in order to access the spectrum of the FD PU.
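    Improper Gaussian signaling differs from proper signaling in having a nonzero pseudo-variance E[x²]. A minimal sketch of generating zero-mean IGS samples with a prescribed circularity coefficient c (the generation recipe is a standard one, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
n, c = 100_000, 0.6  # c = circularity coefficient (0 = proper, 1 = real)

# Zero-mean improper Gaussian: unequal, independent real/imaginary parts
x = (rng.normal(scale=np.sqrt((1 + c) / 2), size=n)
     + 1j * rng.normal(scale=np.sqrt((1 - c) / 2), size=n))

power = np.mean(np.abs(x) ** 2)   # E|x|^2, approx. 1 by construction
pseudo = np.mean(x ** 2)          # E[x^2], approx. c (zero if proper)
```

A proper signal (c = 0) has circularly symmetric samples; increasing c concentrates power in one signal dimension, which is the degree of freedom IGS exploits in interference-limited networks.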

  20. Sharing the Licensed Spectrum of Full-Duplex Systems using Improper Gaussian Signaling

    KAUST Repository

    Gaafar, Mohamed

    2016-01-06

    Sharing the spectrum with in-band full-duplex (FD) primary users (PU) is a challenging and interesting problem in underlay cognitive radio (CR) systems. The self-interference introduced at the primary network may dramatically impede the secondary user (SU) opportunity to access the spectrum. In this work, we attempt to tackle this problem through the use of so-called improper Gaussian signaling (IGS). Such a signaling technique has demonstrated its superiority in improving the overall performance in interference-limited networks. Particularly, we assume a system with a SU pair working in half-duplex mode that uses IGS while the FD PU pair implements the regular proper Gaussian signaling technique. First, we derive a closed-form expression for the SU outage probability while maintaining the required PU quality-of-service based on the average channel state information. Finally, we provide some numerical results that validate the tightness of the PU outage probability bound and demonstrate the advantage of employing IGS for the SU in order to access the spectrum of the FD PU.

  1. Modified BTC Algorithm for Audio Signal Coding

    Directory of Open Access Journals (Sweden)

    TOMIC, S.

    2016-11-01

    Full Text Available This paper describes modification of a well-known image coding algorithm, named Block Truncation Coding (BTC and its application in audio signal coding. BTC algorithm was originally designed for black and white image coding. Since black and white images and audio signals have different statistical characteristics, the application of this image coding algorithm to audio signal presents a novelty and a challenge. Several implementation modifications are described in this paper, while the original idea of the algorithm is preserved. The main modifications are performed in the area of signal quantization, by designing more adequate quantizers for audio signal processing. The result is a novel audio coding algorithm, whose performance is presented and analyzed in this research. The performance analysis indicates that this novel algorithm can be successfully applied in audio signal coding.
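    The moment-preserving quantization at the heart of classic BTC, applied to a one-dimensional block of audio samples, can be sketched as below; the block length is arbitrary, and the paper's modified quantizers are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(6)
block = rng.normal(size=16)  # one block of audio samples (synthetic)

# Classic (moment-preserving) BTC: keep the block mean and variance,
# transmit one bit per sample plus the two reconstruction levels.
m, s = block.mean(), block.std()
bits = block > m
q = bits.sum()                        # number of samples above the mean
n = block.size
low = m - s * np.sqrt(q / (n - q))    # level for the "0" bits
high = m + s * np.sqrt((n - q) / q)   # level for the "1" bits
decoded = np.where(bits, high, low)

assert np.isclose(decoded.mean(), m)  # first moment preserved
assert np.isclose(decoded.std(), s)   # second moment preserved
```

The two levels are chosen precisely so that the decoded block reproduces the original mean and standard deviation, which is why BTC is called moment-preserving.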

  2. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Full Text Available Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality, the cross-sectional average length of life (CAL), has been seen as less sensitive to period changes, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  3. Monte Carlo Simulation of the Echo Signals from Low-Flying Targets for Airborne Radar

    Directory of Open Access Journals (Sweden)

    Mingyuan Man

    2014-01-01

    Full Text Available A hybrid method combining the half-space physical optics (PO) method, graphical electromagnetic computing (GRECO), and the Monte Carlo method is demonstrated for simulating echo signals from low-flying targets in an actual environment for airborne radar. The half-space physical optics method, combined with GRECO to eliminate the shadow regions quickly and rebuild the target automatically, is employed to calculate the radar cross section (RCS) of conductive targets in half space quickly and accurately. The direct echo is computed from the radar equation. The paths reflected from the sea or ground surface cause multipath effects. In order to obtain the echo signals accurately, the phase factors are modified for fluctuations in multipath, and the statistical average value of the echo signals is obtained using the Monte Carlo method. A typical simulation is performed, and the numerical results show the accuracy of the proposed method.
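    The statistical averaging of multipath echoes over a randomized phase factor can be illustrated with a two-ray toy model; the reflection coefficient and uniform phase model are illustrative assumptions, not the paper's modified phase factors:

```python
import numpy as np

rng = np.random.default_rng(7)
n_trials = 20_000

direct = 1.0 + 0j   # direct-path echo (unit amplitude, reference phase)
rho = 0.5           # surface reflection coefficient (assumed)

# Monte Carlo over the multipath phase: surface fluctuation is modelled
# here as a uniformly distributed phase of the reflected path.
phase = rng.uniform(0, 2 * np.pi, size=n_trials)
echo_power = np.abs(direct + rho * np.exp(1j * phase)) ** 2

avg = echo_power.mean()  # statistical average, approx. 1 + rho**2
```

With a uniform phase the cross term averages out, so the mean echo power tends to 1 + rho², while individual realizations fluctuate between the coherent extremes (1 ± rho)².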

  4. Image compression using moving average histogram and RBF network

    International Nuclear Information System (INIS)

    Khowaja, S.; Ismaili, I.A.

    2015-01-01

    Modernization and globalization have made multimedia technology one of the fastest growing fields in recent times, but optimal use of bandwidth and storage has remained a topic that attracts the research community. Considering that images have a lion's share in multimedia communication, efficient image compression techniques have become a basic need for optimal use of bandwidth and space. This paper proposes a novel method for image compression based on the fusion of a moving average histogram and an RBF (Radial Basis Function) network. The proposed technique employs the concept of reducing color intensity levels using the moving average histogram technique, followed by correction of color intensity levels using RBF networks at the reconstruction phase. Existing methods have used low-resolution images for testing purposes, but the proposed method has been tested on various image resolutions to allow a clear assessment of the technique. The proposed method has been tested on 35 images of varying resolution and has been compared with existing algorithms in terms of CR (Compression Ratio), MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio), and computational complexity. The outcome shows that the proposed methodology is a better trade-off technique in terms of compression ratio, PSNR, which determines the quality of the image, and computational complexity. (author)
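    The moving-average histogram step can be sketched as smoothing the image's intensity histogram so that neighboring levels merge into fewer representative levels; the window length is an assumption, and the RBF correction stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(8)
img = rng.integers(0, 256, size=(64, 64))  # synthetic 8-bit image

hist = np.bincount(img.ravel(), minlength=256)  # intensity histogram

# Moving-average smoothing of the histogram (window of 5, assumed):
# nearby intensity levels are merged, reducing the number of distinct
# levels that must be coded; in the paper an RBF network then corrects
# the intensity levels at reconstruction.
kernel = np.ones(5) / 5
smooth = np.convolve(hist, kernel, mode="same")
```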

  5. Genetically determined measures of striatal D2 signaling predict prefrontal activity during working memory performance.

    Science.gov (United States)

    Bertolino, Alessandro; Taurisano, Paolo; Pisciotta, Nicola Marco; Blasi, Giuseppe; Fazio, Leonardo; Romano, Raffaella; Gelao, Barbara; Lo Bianco, Luciana; Lozupone, Madia; Di Giorgio, Annabella; Caforio, Grazia; Sambataro, Fabio; Niccoli-Asabella, Artor; Papp, Audrey; Ursini, Gianluca; Sinibaldi, Lorenzo; Popolizio, Teresa; Sadee, Wolfgang; Rubini, Giuseppe

    2010-02-22

    Variation of the gene coding for D2 receptors (DRD2) has been associated with risk for schizophrenia and with working memory deficits. A functional intronic SNP (rs1076560) predicts relative expression of the two D2 receptor isoforms, D2S (mainly pre-synaptic) and D2L (mainly post-synaptic). However, the effect of functional genetic variation of DRD2 on striatal dopamine D2 signaling and on its correlation with prefrontal activity during working memory in humans is not known. Thirty-seven healthy subjects were genotyped for rs1076560 (G>T) and underwent SPECT with [123I]IBZM (which binds primarily to post-synaptic D2 receptors) and with [123I]FP-CIT (which binds to pre-synaptic dopamine transporters, whose activity and density are also regulated by pre-synaptic D2 receptors), as well as BOLD fMRI during N-Back working memory. Subjects carrying the T allele (previously associated with reduced D2S expression) had striatal reductions of [123I]IBZM and of [123I]FP-CIT binding. DRD2 genotype also differentially predicted the correlation between striatal dopamine D2 signaling (as identified with factor analysis of the two radiotracers) and activity of the prefrontal cortex during working memory as measured with BOLD fMRI, which was positive in GG subjects and negative in GT. Our results demonstrate that this functional SNP within DRD2 predicts striatal binding of the two radiotracers to dopamine transporters and D2 receptors as well as the correlation between striatal D2 signaling and prefrontal cortex activity during performance of a working memory task. These data are consistent with the possibility that the balance of excitatory/inhibitory modulation of striatal neurons may also affect striatal outputs in relationship with prefrontal activity during working memory performance within the cortico-striatal-thalamic-cortical pathway.

  6. Genetically determined measures of striatal D2 signaling predict prefrontal activity during working memory performance.

    Directory of Open Access Journals (Sweden)

    Alessandro Bertolino

    2010-02-01

    Full Text Available Variation of the gene coding for D2 receptors (DRD2) has been associated with risk for schizophrenia and with working memory deficits. A functional intronic SNP (rs1076560) predicts relative expression of the two D2 receptor isoforms, D2S (mainly pre-synaptic) and D2L (mainly post-synaptic). However, the effect of functional genetic variation of DRD2 on striatal dopamine D2 signaling and on its correlation with prefrontal activity during working memory in humans is not known. Thirty-seven healthy subjects were genotyped for rs1076560 (G>T) and underwent SPECT with [123I]IBZM (which binds primarily to post-synaptic D2 receptors) and with [123I]FP-CIT (which binds to pre-synaptic dopamine transporters, whose activity and density are also regulated by pre-synaptic D2 receptors), as well as BOLD fMRI during N-Back working memory. Subjects carrying the T allele (previously associated with reduced D2S expression) had striatal reductions of [123I]IBZM and of [123I]FP-CIT binding. DRD2 genotype also differentially predicted the correlation between striatal dopamine D2 signaling (as identified with factor analysis of the two radiotracers) and activity of the prefrontal cortex during working memory as measured with BOLD fMRI, which was positive in GG subjects and negative in GT. Our results demonstrate that this functional SNP within DRD2 predicts striatal binding of the two radiotracers to dopamine transporters and D2 receptors as well as the correlation between striatal D2 signaling and prefrontal cortex activity during performance of a working memory task. These data are consistent with the possibility that the balance of excitatory/inhibitory modulation of striatal neurons may also affect striatal outputs in relationship with prefrontal activity during working memory performance within the cortico-striatal-thalamic-cortical pathway.

  7. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    Directory of Open Access Journals (Sweden)

    Wilmar Hernandez

    2007-01-01

    Full Text Available In this paper a survey on recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is made. Here, a comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art of the application of robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today’s cars need is presented through several experimental results that show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight because there are some open research issues that have to be solved. This paper draws attention to one of the open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.

  8. A NEW APPROACH TO DETECT CONGESTIVE HEART FAILURE USING DETRENDED FLUCTUATION ANALYSIS OF ELECTROCARDIOGRAM SIGNALS

    Directory of Open Access Journals (Sweden)

    CHANDRAKAR KAMATH

    2015-02-01

    Full Text Available The aim of this study is to evaluate how far the detrended fluctuation analysis (DFA) approach helps to characterize the short-term and intermediate-term fractal correlations in raw electrocardiogram (ECG) signals and thereby discriminate between normal and congestive heart failure (CHF) subjects. The DFA-1 calculations were performed on normal and CHF short-term ECG segments of the order of 20 seconds duration. Differences were found in short-term and intermediate-term correlation properties and the corresponding scaling exponents of the two groups (normal and CHF). The statistical analyses show that the short-term fractal scaling exponent alone is sufficient to distinguish between normal and CHF subjects. The receiver operating characteristic curve (ROC) analysis confirms the robustness of this new approach and exhibits an average accuracy that exceeds 98.2%, an average sensitivity of about 98.4%, a positive predictivity of 98.00%, and an average specificity of 98.00%.
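    A minimal DFA-1 implementation follows the standard recipe: integrate the mean-removed series, detrend linearly within segments at each scale, and fit the log-log slope of the RMS fluctuation. Applied to white noise, the scaling exponent should come out near 0.5; the scales and series length below are illustrative, not the study's settings:

```python
import numpy as np

def dfa1(x, scales):
    """DFA-1: RMS fluctuation of the integrated, linearly detrended series."""
    y = np.cumsum(x - np.mean(x))  # integrated profile
    f = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # Order-1 (linear) detrending within each segment
        coef = np.polyfit(t, segs.T, 1)
        trend = np.outer(coef[0], t) + coef[1][:, None]
        f.append(np.sqrt(np.mean((segs - trend) ** 2)))
    return np.array(f)

rng = np.random.default_rng(9)
wn = rng.normal(size=4000)               # white noise: exponent near 0.5
scales = np.array([4, 8, 16, 32, 64])
alpha = np.polyfit(np.log(scales), np.log(dfa1(wn, scales)), 1)[0]
```

Long-range correlated series (such as healthy heartbeat intervals) yield larger exponents, which is the property the study exploits to separate the two groups.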

  9. Preoptometry and optometry school grade point average and optometry admissions test scores as predictors of performance on the national board of examiners in optometry part I (basic science) examination.

    Science.gov (United States)

    Bailey, J E; Yackle, K A; Yuen, M T; Voorhees, L I

    2000-04-01

    To evaluate preoptometry and optometry school grade point averages and Optometry Admission Test (OAT) scores as predictors of performance on the National Board of Examiners in Optometry (NBEO) Part I (Basic Science) (NBEOPI) examination. Simple and multiple correlation coefficients were computed from data obtained from a sample of three consecutive classes of optometry students (1995-1997; n = 278) at Southern California College of Optometry. The GPA after year two of optometry school had the highest correlation (r = 0.75) among all predictor variables; the average of all scores on the OAT had the highest correlation among preoptometry predictor variables (r = 0.46). Stepwise regression analysis indicated that a combination of the optometry GPA, the OAT Academic Average, and the GPA in certain optometry curricular tracks resulted in an improved correlation (multiple r = 0.81). Predicted NBEOPI scores were computed from the regression equation and then analyzed by receiver operating characteristic (ROC) and statistic of agreement (kappa) methods. From this analysis, we identified the predicted score that maximized identification of true and false NBEOPI failures (71% and 10%, respectively). Cross validation of this result on a separate class of optometry students resulted in a slightly lower correlation between actual and predicted NBEOPI scores (r = 0.77) but showed the criterion-predicted score to be somewhat lax. The optometry school GPA after 2 years is a reasonably good predictor of performance on the full NBEOPI examination, but the prediction is enhanced by adding the Academic Average OAT score. However, predicting performance in certain subject areas of the NBEOPI examination, for example Psychology and Ocular/Visual Biology, was rather insubstantial. Nevertheless, predicting NBEOPI performance from the best combination of year two optometry GPAs and preoptometry variables is better than has been shown in previous studies predicting optometry GPA from the best

  10. Biomechanics of the Peacock's Display: How Feather Structure and Resonance Influence Multimodal Signaling.

    Science.gov (United States)

    Dakin, Roslyn; McCrossan, Owen; Hare, James F; Montgomerie, Robert; Amador Kane, Suzanne

    2016-01-01

    Courtship displays may serve as signals of the quality of motor performance, but little is known about the underlying biomechanics that determines both their signal content and costs. Peacocks (Pavo cristatus) perform a complex, multimodal "train-rattling" display in which they court females by vibrating the iridescent feathers in their elaborate train ornament. Here we study how feather biomechanics influences the performance of this display using a combination of field recordings and laboratory experiments. Using high-speed video, we find that train-rattling peacocks stridulate their tail feathers against the train at 25.6 Hz, on average, generating a broadband, pulsating mechanical sound at that frequency. Laboratory measurements demonstrate that arrays of peacock tail and train feathers have a broad resonant peak in their vibrational spectra at the range of frequencies used for train-rattling during the display, and the motion of feathers is just as expected for feathers shaking near resonance. This indicates that peacocks are able to drive feather vibrations energetically efficiently over a relatively broad range of frequencies, enabling them to modulate the feather vibration frequency of their displays. Using our field data, we show that peacocks with longer trains use slightly higher vibration frequencies on average, even though longer train feathers are heavier and have lower resonant frequencies. Based on these results, we propose hypotheses for future studies of the function and energetics of this display that ask why its dynamic elements might attract and maintain female attention. Finally, we demonstrate how the mechanical structure of the train feathers affects the peacock's visual display by allowing the colorful iridescent eyespots (which strongly influence female mate choice) to remain nearly stationary against a dynamic iridescent background.

  11. Detection of driving fatigue by using noncontact EMG and ECG signals measurement system.

    Science.gov (United States)

    Fu, Rongrong; Wang, Hong

    2014-05-01

    Driver fatigue can be detected by constructing a discriminant model using features obtained from physiological signals. Such methods face two major challenges. One is how to collect physiological signals from subjects while they are driving, without any interruption. The other is to find features of physiological signals that change correspondingly with the loss of attention caused by driver fatigue. Driving fatigue is detected based on the study of surface electromyography (EMG) and electrocardiograph (ECG) signals during the driving period. A noncontact data acquisition system was used to collect physiological signals from the biceps femoris of each subject to tackle the first challenge. Fast independent component analysis (FastICA) and digital filtering were utilized to process the original signals. Based on the statistical analysis results given by the Kolmogorov-Smirnov Z test, the peak factor of EMG (p fatigue of drivers. The discriminant criterion of fatigue was obtained from the training samples by using the Mahalanobis distance, and the average classification accuracy was then given by 10-fold cross-validation. The results showed that the method proposed in this paper performs well in distinguishing the normal state and the fatigue state. The noncontact, onboard vehicle drivers' fatigue detection system was developed to reduce fatigue-related risks.
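    The Mahalanobis-distance discriminant step can be sketched with synthetic two-dimensional features; the feature values, class separation, and pooled-covariance choice are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(10)

# Synthetic feature clouds standing in for "normal" and "fatigue"
# physiological features (e.g. the EMG peak factor); all values invented.
normal = rng.normal([0, 0], 1.0, size=(100, 2))
fatigue = rng.normal([3, 3], 1.0, size=(100, 2))

mu_n, mu_f = normal.mean(axis=0), fatigue.mean(axis=0)
cov = np.cov(np.vstack([normal - mu_n, fatigue - mu_f]).T)  # pooled covariance
cov_inv = np.linalg.inv(cov)

def mahalanobis(x, mu):
    """Mahalanobis distance from x to a class centre mu."""
    d = x - mu
    return np.sqrt(d @ cov_inv @ d)

# Classify a new sample by the nearer class centre in Mahalanobis distance
test = np.array([2.5, 2.8])
label = "fatigue" if mahalanobis(test, mu_f) < mahalanobis(test, mu_n) else "normal"
```

In the study this criterion is learned on training samples and its accuracy estimated by 10-fold cross-validation; the sketch shows only the distance-based decision rule.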

  12. Fission neutron spectrum averaged cross sections for threshold reactions on arsenic

    International Nuclear Information System (INIS)

    Dorval, E.L.; Arribere, M.A.; Kestelman, A.J.; Comision Nacional de Energia Atomica, Cuyo Nacional Univ., Bariloche; Ribeiro Guevara, S.; Cohen, I.M.; Ohaco, R.A.; Segovia, M.S.; Yunes, A.N.; Arrondo, M.; Comision Nacional de Energia Atomica, Buenos Aires

    2006-01-01

    We have measured the cross sections, averaged over a 235U fission neutron spectrum, for the two high-threshold reactions 75As(n,p)75mGe and 75As(n,2n)74As. The measured averaged cross sections are 0.292±0.022 mb, referred to the 3.95±0.20 mb standard for the 27Al(n,p)27Mg averaged cross section, and 0.371±0.032 mb, referred to the 111±3 mb standard for the 58Ni(n,p)58m+gCo averaged cross section, respectively. The measured averaged cross sections were also evaluated semi-empirically by numerically integrating experimental differential cross section data extracted for both reactions from the current literature. The calculations were performed for four different representations of the thermal-neutron-induced 235U fission neutron spectrum. The calculated cross sections, though depending on the analytical representation of the flux, agree with the measured values within the estimated uncertainties. (author)
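    The semi-empirical evaluation amounts to folding a differential cross section with a fission-spectrum representation: <σ> = ∫σ(E)χ(E)dE / ∫χ(E)dE. The sketch below uses the common Watt parameterization of the thermal-neutron-induced 235U fission spectrum and a toy threshold cross section, not the evaluated 75As data:

```python
import numpy as np

# Watt representation of the 235U thermal fission spectrum,
# chi(E) ~ exp(-E/a) * sinh(sqrt(b*E)), with a = 0.988 MeV, b = 2.249 1/MeV.
a, b = 0.988, 2.249
watt = lambda E: np.exp(-E / a) * np.sinh(np.sqrt(b * E))

# Illustrative threshold cross section (mb): zero below 2 MeV, then a ramp
sigma = lambda E: np.where(E < 2.0, 0.0, np.minimum(1.0, (E - 2.0) / 3.0))

E = np.linspace(1e-3, 20.0, 20_000)  # uniform energy grid in MeV
# On a uniform grid the ratio of Riemann sums gives the spectrum average
avg = (sigma(E) * watt(E)).sum() / watt(E).sum()
```

Because the spectrum decays rapidly above a few MeV, a high-threshold reaction samples only the tail of the flux, which is why the averaged cross sections above are fractions of a millibarn.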

  13. Face averages enhance user recognition for smartphone security.

    Science.gov (United States)

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average', a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
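    The benefit of storing an average rather than a single snapshot can be illustrated with a toy model in which each "image" is a stable identity plus viewing-condition noise; all arrays below are synthetic stand-ins for face images:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy face averaging: several noisy "images" of one identity
identity = rng.uniform(0, 1, size=(32, 32))               # stable structure
snaps = identity + rng.normal(0, 0.3, size=(20, 32, 32))  # varying conditions

average = snaps.mean(axis=0)  # the stored 'face-average' template

# The average lies closer to the underlying identity than a single image
err_single = np.abs(snaps[0] - identity).mean()
err_average = np.abs(average - identity).mean()
```

Averaging attenuates the condition-specific noise while keeping the identity-specific structure, which is the intuition behind the face-average representation.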

  14. Two-way cooperative AF relaying in spectrum-sharing systems: Enhancing cell-edge performance

    KAUST Repository

    Xia, Minghua

    2012-09-01

    In this contribution, two-way cooperative amplify-and-forward (AF) relaying technique is integrated into spectrum-sharing wireless systems to improve spectral efficiency of secondary users (SUs). In order to share the available spectrum resources originally dedicated to primary users (PUs), the transmit power of a SU is optimized with respect to the average tolerable interference power at primary receivers. By analyzing outage probability and achievable data rate at the base station and at a cell-edge SU, our results reveal that the uplink performance is dominated by the average tolerable interference power at primary receivers, while the downlink always behaves like conventional one-way AF relaying and its performance is dominated by the average signal-to-noise ratio (SNR). These important findings provide fresh perspectives for system designers to improve spectral efficiency of secondary users in next-generation broadband spectrum-sharing wireless systems. © 2012 IEEE.

  15. Signaling aggression.

    Science.gov (United States)

    van Staaden, Moira J; Searcy, William A; Hanlon, Roger T

    2011-01-01

    From psychological and sociological standpoints, aggression is regarded as intentional behavior aimed at inflicting pain and manifested by hostility and attacking behaviors. In contrast, biologists define aggression as behavior associated with attack or escalation toward attack, omitting any stipulation about intentions and goals. Certain animal signals are strongly associated with escalation toward attack and have the same function as physical attack in intimidating opponents and winning contests, and ethologists therefore consider them an integral part of aggressive behavior. Aggressive signals have been molded by evolution to make them ever more effective in mediating interactions between the contestants. Early theoretical analyses of aggressive signaling suggested that signals could never be honest about fighting ability or aggressive intentions because weak individuals would exaggerate such signals whenever they were effective in influencing the behavior of opponents. More recent game theory models, however, demonstrate that given the right costs and constraints, aggressive signals are both reliable about strength and intentions and effective in influencing contest outcomes. Here, we review the role of signaling in lieu of physical violence, considering threat displays from an ethological perspective as an adaptive outcome of evolutionary selection pressures. Fighting prowess is conveyed by performance signals whose production is constrained by physical ability and thus limited to just some individuals, whereas aggressive intent is encoded in strategic signals that all signalers are able to produce. We illustrate recent advances in the study of aggressive signaling with case studies of charismatic taxa that employ a range of sensory modalities, viz. visual and chemical signaling in cephalopod behavior, and indicators of aggressive intent in the territorial calls of songbirds. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Closed-Form Algorithm for 3-D Near-Field OFDM Signal Localization under Uniform Circular Array.

    Science.gov (United States)

    Su, Xiaolong; Liu, Zhen; Chen, Xin; Wei, Xizhang

    2018-01-14

    Due to its widespread application in communications, radar, etc., localization of the orthogonal frequency division multiplexing (OFDM) signal has become an increasingly urgent problem. Under uniform circular array (UCA) and near-field conditions, this paper presents a closed-form algorithm based on phase differences for estimating the three-dimensional (3-D) location (azimuth angle, elevation angle, and range) of an OFDM signal. In the algorithm, considering that it is difficult to distinguish the frequencies of the OFDM signal's subcarriers and that phase-based methods are always affected by frequency estimation errors, sparse representation (SR) is employed to obtain the super-resolution frequencies and the corresponding phases of the subcarriers. Further, as the phase differences of adjacent sensors, which involve the azimuth angle, elevation angle, and range parameters, can be expressed as indefinite equations, the near-field OFDM signal's 3-D location is obtained by the least squares method, where the phase differences are based on the average over the estimated subcarriers. Finally, the performance of the proposed algorithm is demonstrated by several simulations.

  17. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  18. Evaluating lane-by-lane gap-out based signal control for isolated intersection under stop-line, single and multiple advance detection systems

    Directory of Open Access Journals (Sweden)

    Chandan Keerthi Kancharla

    2016-12-01

Full Text Available In an isolated intersection's actuated signal control, the inductive loop detector layout plays a crucial role in providing vehicle information to the signal controller. Based on vehicle actuations at the detector, the green time is extended until a pre-defined threshold gap-out occurs. The Federal Highway Administration (FHWA) proposed various guidelines for detector layouts on low-speed and high-speed approaches. This paper proposes single and multiple advance detection schemes for low-speed traffic movements that utilize vehicle actuations from advance detectors located upstream of the stop-line, which are able to detect spill-back queues. The proposed detection schemes operate with actuated signal control based on a lane-by-lane gap-out criterion. The performance of the proposed schemes is compared with FHWA's stop-line and single advance detection schemes in the VISSIM simulation tool. Results show that the proposed single advance detection scheme reduces travel time delay and the average number of stops per vehicle under low volumes, while the multiple advance detection scheme performs well under high volumes.

  19. Performance Analysis of an Optical CDMA MAC Protocol With Variable-Size Sliding Window

    Science.gov (United States)

    Mohamed, Mohamed Aly A.; Shalaby, Hossam M. H.; Abdel-Moety El-Badawy, El-Sayed

    2006-10-01

A media access control protocol for optical code-division multiple-access packet networks with variable-length data traffic is proposed. This protocol exhibits a sliding window with variable size. A model for interference-level fluctuation and an accurate analysis of channel usage are presented. Both multiple-access interference (MAI) and the photodetector's shot noise are considered, and both chip-level and correlation receivers are adopted. The system performance is evaluated using the traditional measures of average system throughput and average delay. Finally, in order to enhance the overall performance, error control codes (ECCs) are applied. The results indicate that performance peaks when the ECC is used with an optimum number of correctable errors. Furthermore, chip-level receivers are shown to give much higher performance than correlation receivers, and MAI is shown to be the main source of signal degradation.

  20. Assessment of general public exposure to lte signals compared to other cellular networks present in Thessaloniki, Greece

    International Nuclear Information System (INIS)

    Gkonis, Fotios; Boursianis, Achilles; Samaras, Theodoros

    2017-01-01

To assess general public exposure to electromagnetic fields from Long Term Evolution (LTE) base stations, measurements at 10 sites in Thessaloniki, Greece were performed, and the results are compared with those for the other mobile cellular networks currently in use. All exposure values satisfy the guidelines for general public exposure of the International Commission on Non-Ionizing Radiation Protection (ICNIRP), as well as the reference levels of the Greek legislation, at all sites. LTE electric field measurements were recorded up to 0.645 V/m. By applying the ICNIRP guidelines, the exposure ratio for all LTE signals is between 2.9 x 10(-5) and 2.8 x 10(-2). From the measurement results it is concluded that the average and maximum power density contributions of LTE down-link signals to the overall cellular network signals are 7.8% and 36.7%, respectively. (authors)

  1. An adaptive Kalman filter approach for cardiorespiratory signal extraction and fusion of non-contacting sensors.

    Science.gov (United States)

    Foussier, Jerome; Teichmann, Daniel; Jia, Jing; Misgeld, Berno; Leonhardt, Steffen

    2014-05-09

    Extracting cardiorespiratory signals from non-invasive and non-contacting sensor arrangements, i.e. magnetic induction sensors, is a challenging task. The respiratory and cardiac signals are mixed on top of a large and time-varying offset and are likely to be disturbed by measurement noise. Basic filtering techniques fail to extract relevant information for monitoring purposes. We present a real-time filtering system based on an adaptive Kalman filter approach that separates signal offsets, respiratory and heart signals from three different sensor channels. It continuously estimates respiration and heart rates, which are fed back into the system model to enhance performance. Sensor and system noise covariance matrices are automatically adapted to the aimed application, thus improving the signal separation capabilities. We apply the filtering to two different subjects with different heart rates and sensor properties and compare the results to the non-adaptive version of the same Kalman filter. Also, the performance, depending on the initialization of the filters, is analyzed using three different configurations ranging from best to worst case. Extracted data are compared with reference heart rates derived from a standard pulse-photoplethysmographic sensor and respiration rates from a flowmeter. In the worst case for one of the subjects the adaptive filter obtains mean errors (standard deviations) of -0.2 min(-1) (0.3 min(-1)) and -0.7 bpm (1.7 bpm) (compared to -0.2 min(-1) (0.4 min(-1)) and 42.0 bpm (6.1 bpm) for the non-adaptive filter) for respiration and heart rate, respectively. In bad conditions the heart rate is only correctly measurable when the Kalman matrices are adapted to the target sensor signals. Also, the reduced mean error between the extracted offset and the raw sensor signal shows that adapting the Kalman filter continuously improves the ability to separate the desired signals from the raw sensor data. The average total computational time needed
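The core of such a system is the Kalman recursion itself. A minimal scalar sketch — a random-walk state model with hand-picked noise variances, not the adaptive multichannel filter of the paper — that tracks a slowly drifting offset buried in measurement noise:

```python
import random

def kalman_offset(measurements, q=1e-4, r=0.25):
    """Scalar Kalman filter: track a slowly drifting offset under noise.
    q is the process-noise variance, r the measurement-noise variance."""
    x, p = measurements[0], 1.0      # state estimate and its variance
    out = []
    for z in measurements:
        p += q                       # predict: random-walk state model
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # update with the innovation
        p *= (1 - k)
        out.append(x)
    return out

random.seed(0)
true_offset = [5.0 + 0.001 * t for t in range(500)]          # slow drift
noisy = [v + random.gauss(0, 0.5) for v in true_offset]      # sensor noise
est = kalman_offset(noisy)
# Mean squared error of the raw samples vs. the filtered estimate.
err_raw = sum((n - t) ** 2 for n, t in zip(noisy, true_offset)) / 500
err_est = sum((e - t) ** 2 for e, t in zip(est, true_offset)) / 500
```

In the paper's setting the state vector additionally carries respiratory and cardiac components, and the covariances q and r are adapted online rather than fixed.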

  2. Peak-to-average power ratio reduction in orthogonal frequency division multiplexing-based visible light communication systems using a modified partial transmit sequence technique

    Science.gov (United States)

    Liu, Yan; Deng, Honggui; Ren, Shuang; Tang, Chengying; Qian, Xuewen

    2018-01-01

We propose an efficient partial transmit sequence (PTS) technique based on a genetic algorithm and a peak-value optimization algorithm (GAPOA) to reduce the high peak-to-average power ratio (PAPR) in visible light communication systems based on orthogonal frequency division multiplexing (VLC-OFDM). After analyzing the pros and cons of the hill-climbing algorithm, we propose the POA, with excellent local search ability, to further process the signals whose PAPR is still over the threshold after processing by the genetic algorithm (GA). To verify the effectiveness of the proposed technique and algorithm, we evaluate the PAPR and bit error rate (BER) performance and compare them with the PTS technique based on GA (GA-PTS), the PTS technique based on genetic and hill-climbing algorithms (GH-PTS), and the PTS based on the shuffled frog leaping algorithm and hill-climbing algorithm (SFLAHC-PTS). The results show that our technique and algorithm have not only better PAPR performance but also lower computational complexity and BER than the GA-PTS, GH-PTS, and SFLAHC-PTS techniques.
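The underlying PTS mechanism that GA and POA accelerate can be shown with a small exhaustive search. A hedged sketch (pure-Python DFT, 32 QPSK subcarriers, 4 sub-blocks, ±1 phase factors — parameters chosen for the demo, not taken from the paper):

```python
import cmath
import itertools
import math
import random

def idft(X):
    """Inverse DFT (naive O(N^2) version, fine for a demo)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    powers = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def pts(X, n_blocks=4, phases=(1, -1)):
    """Exhaustive PTS: rotate each sub-block by a phase factor and keep the
    candidate time-domain signal with the lowest PAPR."""
    N = len(X)
    size = N // n_blocks
    # Time-domain signal of each sub-block (other carriers zeroed).
    parts = []
    for b in range(n_blocks):
        Xb = [X[k] if b * size <= k < (b + 1) * size else 0 for k in range(N)]
        parts.append(idft(Xb))
    best, best_papr = None, float('inf')
    for combo in itertools.product(phases, repeat=n_blocks):
        x = [sum(c * p[n] for c, p in zip(combo, parts)) for n in range(N)]
        pr = papr_db(x)
        if pr < best_papr:
            best, best_papr = x, pr
    return best, best_papr

random.seed(7)
X = [random.choice([1, -1]) + 1j * random.choice([1, -1]) for _ in range(32)]
orig = papr_db(idft(X))
_, reduced = pts(X)
```

Because the all-ones phase combination reproduces the original signal, the selected candidate can never have a higher PAPR than the unprocessed symbol; GA- and POA-style searches trade this exhaustive guarantee for far fewer candidate evaluations.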

  3. Soft drink effects on sensorimotor rhythm brain computer interface performance and resting-state spectral power.

    Science.gov (United States)

    Mundahl, John; Jianjun Meng; He, Jeffrey; Bin He

    2016-08-01

Brain-computer interface (BCI) systems allow users to directly control computers and other machines by modulating their brain waves. In the present study, we investigated the effect of soft drinks on resting-state (RS) EEG signals and BCI control. Eight healthy human volunteers each participated in three sessions of BCI cursor tasks and resting-state EEG. During each session, the subjects drank an unlabeled soft drink containing either sugar, caffeine, or neither ingredient. A comparison of resting-state spectral power shows a substantial decrease in alpha and beta power after caffeine consumption relative to control. Despite this attenuation of the frequency range used for the control signal, average BCI performance after caffeine was the same as control. Our work provides a useful characterization of the effect of caffeine, the world's most popular stimulant, on brain signal frequencies and BCI performance.

  4. A Martian PFS average spectrum: Comparison with ISO SWS

    Science.gov (United States)

    Formisano, V.; Encrenaz, T.; Fonti, S.; Giuranna, M.; Grassi, D.; Hirsh, H.; Khatuntsev, I.; Ignatiev, N.; Lellouch, E.; Maturilli, A.; Moroz, V.; Orleanski, P.; Piccioni, G.; Rataj, M.; Saggin, B.; Zasova, L.

    2005-08-01

The evaluation of the planetary Fourier spectrometer (PFS) performance at Mars is presented by comparing an average spectrum with the ISO spectrum published by Lellouch et al. [2000. Planet. Space Sci. 48, 1393.]. First, the average conditions of the Mars atmosphere are compared, then the mixing ratios of the major gases are evaluated. Major and minor bands of CO2 are compared with respect to feature characteristics and band depths. The spectral resolution is also compared using several solar lines. The result indicates that the PFS radiance is valid to better than 1% in the wavenumber range 1800-4200 cm(-1) for the average spectrum considered (1680 measurements). The PFS monochromatic transfer function generates an overshoot on the left-hand side of strong narrow lines (solar or atmospheric). The spectral resolution of PFS is of the order of 1.3 cm(-1) or better. A large number of narrow features remaining to be identified are discovered.

  5. A CCD fitted to the UV Prime spectrograph: Performance

    International Nuclear Information System (INIS)

    Boulade, O.

    1986-10-01

A CCD camera was fitted to the 3.6 m French-Canadian telescope in Hawaii. The performance of the system and observations of elliptical galaxies (stellar content and galactic evolution in a cluster) and quasars (absorption lines in spectra) are reported. In spite of its resolution being only average, the extremely fast optics of the UV spectrograph give good signal-to-noise ratios, enabling redshifts and velocity scatter to be calculated with an accuracy better than 30 km/s.

  6. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A

    1999-01-01

In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate...
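For contrast with the recurrent-network estimator described above, the classical baseline for the deterministic part of a pure autoregressive model is linear least squares. A hedged sketch for an AR(2) model — a deliberate simplification with no moving-average part, and all values below synthetic:

```python
import random

def fit_ar2(x):
    """Least-squares estimate of (a1, a2) in x[n] = a1*x[n-1] + a2*x[n-2] + e[n],
    obtained by solving the 2x2 normal equations in closed form."""
    rng = range(2, len(x))
    s11 = sum(x[n - 1] * x[n - 1] for n in rng)
    s12 = sum(x[n - 1] * x[n - 2] for n in rng)
    s22 = sum(x[n - 2] * x[n - 2] for n in rng)
    b1 = sum(x[n] * x[n - 1] for n in rng)
    b2 = sum(x[n] * x[n - 2] for n in rng)
    det = s11 * s22 - s12 * s12
    return ((b1 * s22 - b2 * s12) / det, (b2 * s11 - b1 * s12) / det)

random.seed(3)
a1, a2 = 0.6, -0.2                   # stable AR(2) ground truth
x = [0.0, 0.0]
for _ in range(10000):
    x.append(a1 * x[-1] + a2 * x[-2] + random.gauss(0, 1.0))
est1, est2 = fit_ar2(x)
```

The normal equations generalize directly to higher AR orders; it is the moving-average part that makes ARMA estimation nonlinear and motivates iterative schemes like the one in the paper.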

  7. The classical correlation limits the ability of the measurement-induced average coherence

    Science.gov (United States)

    Zhang, Jun; Yang, Si-Ren; Zhang, Yang; Yu, Chang-Shui

    2017-04-01

Coherence is the most fundamental quantum feature of quantum mechanics. For a bipartite quantum state, if a measurement is performed on one party, the other party, conditioned on the measurement outcomes, collapses to a corresponding state with some probability and hence gains, on average, coherence. It is shown that this average coherence is not less than the coherence of the reduced density matrix. In particular, it is very surprising that the extra average coherence (and the maximal extra average coherence, with all possible measurements taken into account) is upper bounded by the classical correlation of the bipartite state instead of the quantum correlation. We also find the sufficient and necessary condition for vanishing maximal extra average coherence. Some examples demonstrate the relation and, moreover, show that quantum correlation is neither sufficient nor necessary for nonzero extra average coherence under a given measurement. Similar conclusions are drawn for both basis-dependent and basis-free coherence measures.

  8. Spatial modeling of the membrane-cytosolic interface in protein kinase signal transduction.

    Directory of Open Access Journals (Sweden)

    Wolfgang Giese

    2018-04-01

Full Text Available The spatial architecture of signaling pathways and its interaction with cell size and morphology are complex and little understood. With the advances of single-cell imaging and single-cell biology, it becomes crucial to understand intracellular processes in time and space. Activation of cell surface receptors often triggers a signaling cascade including the activation of membrane-attached and cytosolic signaling components, which eventually transmit the signal to the cell nucleus. Signaling proteins can form steep gradients in the cytosol, which cause a strong dependence on cell size. We show that the kinetics at the membrane-cytosolic interface and the ratio of cell membrane area to the enclosed cytosolic volume change the behavior of signaling cascades significantly. We suggest an estimate of the average concentration for arbitrary cell shapes, depending on the cell volume and cell surface area. The normalized variance, known from image analysis, is suggested as an alternative measure to quantify the deviation from the average concentration. A mathematical analysis of signal transduction in time and space is presented, providing analytical solutions for different spatial arrangements of linear signaling cascades. Quantification of signaling time scales reveals that signal propagation is faster at the membrane than at the nucleus, while this time difference decreases with the number of signaling components in the cytosol. Our investigations are complemented by numerical simulations of nonlinear cascades with feedback and asymmetric cell shapes. We conclude that intracellular signal propagation is highly dependent on cell geometry and thereby conveys information on cell size and shape to the nucleus.
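The suggested average-concentration estimate hinges on the ratio of membrane area to cytosolic volume. A toy computation — spherical cells only, with arbitrary demo radii and rate constants, not the paper's model — showing how that ratio, and with it the steady-state level a membrane-driven reaction can sustain, falls with cell size:

```python
import math

def surface_to_volume(radius):
    """Membrane area over cytosolic volume; for a sphere A/V = 3/r,
    so smaller cells have relatively more membrane."""
    area = 4 * math.pi * radius ** 2
    volume = (4.0 / 3.0) * math.pi * radius ** 3
    return area / volume

def avg_active_concentration(radius, flux=1.0, k_deg=0.5):
    """Steady state of dC/dt = (A/V)*flux - k_deg*C for a well-mixed
    cytosol: C = (A/V)*flux/k_deg (flux and k_deg are demo constants)."""
    return surface_to_volume(radius) * flux / k_deg

small, large = avg_active_concentration(5.0), avg_active_concentration(20.0)
```

Quadrupling the radius cuts the steady-state concentration fourfold, illustrating the strong cell-size dependence the abstract describes.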

  9. Comparison of power pulses from homogeneous and time-average-equivalent models

    International Nuclear Information System (INIS)

    De, T.K.; Rouben, B.

    1995-01-01

    The time-average-equivalent model is an 'instantaneous' core model designed to reproduce the same three dimensional power distribution as that generated by a time-average model. However it has been found that the time-average-equivalent model gives a full-core static void reactivity about 8% smaller than the time-average or homogeneous models. To investigate the consequences of this difference in static void reactivity in time dependent calculations, simulations of the power pulse following a hypothetical large-loss-of-coolant accident were performed with a homogeneous model and compared with the power pulse from the time-average-equivalent model. The results show that there is a much smaller difference in peak dynamic reactivity than in static void reactivity between the two models. This is attributed to the fact that voiding is not complete, but also to the retardation effect of the delayed-neutron precursors on the dynamic flux shape. The difference in peak reactivity between the models is 0.06 milli-k. The power pulses are essentially the same in the two models, because the delayed-neutron fraction in the time-average-equivalent model is lower than in the homogeneous model, which compensates for the lower void reactivity in the time-average-equivalent model. (author). 1 ref., 5 tabs., 9 figs

  10. Evaluation of high performance data acquisition boards for simultaneous sampling of fast signals from PET detectors

    International Nuclear Information System (INIS)

    Judenhofer, Martin S; Pichler, Bernd J; Cherry, Simon R

    2005-01-01

Detectors used for positron emission tomography (PET) provide fast, randomly distributed signals that need to be digitized for further processing. One possibility is to sample the signals at the peak, initiated by a trigger from a constant fraction discriminator (CFD). For PET detectors, simultaneous acquisition of many channels is often important. To develop and evaluate novel PET detectors, a flexible, relatively low cost and high performance laboratory data acquisition (DAQ) system is therefore required; dedicated DAQ systems, such as multi-channel analysers (MCAs) or continuous sampling boards at high rates, are expensive. This work evaluates the suitability of well-priced peripheral component interconnect (PCI)-based 8-channel DAQ boards (PD2-MFS-8 2M/14 and PD2-MFS-8-500k/14, United Electronic Industries Inc., Canton, MA, USA) for signal acquisition from novel PET detectors. A software package was developed to access the boards, measure basic board parameters, and acquire, visualize, and analyse energy spectra and position profiles from block detectors. The performance tests showed that the boards' input linearity is >99.2%. The energy resolution measured with a 22Na source was 14.9% (FWHM) at 511 keV, slightly better than the result obtained with a high-end single channel MCA (8000A, Amptek, USA) using the same detector (16.8%). The crystals (1.2 x 1.2 x 12 mm(3)) within a 9 x 9 LSO block detector could be clearly separated in an acquired position profile. Thus, these boards are well suited for data acquisition with novel detectors developed for nuclear imaging

  11. Data-driven quantification of the robustness and sensitivity of cell signaling networks

    International Nuclear Information System (INIS)

    Mukherjee, Sayak; Seok, Sang-Cheol; Vieland, Veronica J; Das, Jayajit

    2013-01-01

Robustness and sensitivity of responses generated by cell signaling networks have been associated with the survival and evolvability of organisms. However, existing methods for analyzing the robustness and sensitivity of signaling networks either ignore experimentally observed cell-to-cell variations of protein abundances and cell functions or contain ad hoc assumptions. We propose and apply a data-driven maximum-entropy-based method to quantify the robustness and sensitivity of the Escherichia coli (E. coli) chemotaxis signaling network. Our analysis correctly rank-orders different models of E. coli chemotaxis based on their robustness and suggests that parameters regulating cell signaling are evolutionarily selected to vary in individual cells according to their ability to perturb cell functions. Furthermore, predictions from our approach regarding the distribution of protein abundances and the properties of chemotactic responses in individual cells, based on cell-population-averaged data, are in excellent agreement with their experimental counterparts. Our approach is general and can be used to evaluate robustness as well as to generate predictions of single-cell properties based on population-averaged experimental data in a wide range of cell signaling systems. (paper)

  12. Seizure classification in EEG signals utilizing Hilbert-Huang transform

    Directory of Open Access Journals (Sweden)

    Abdulhay Enas W

    2011-05-01

Full Text Available Abstract Background Classification methods capable of recognizing abnormal brain activity rely on either brain imaging or brain signal analysis. The abnormal activity of interest in this study is characterized by a disturbance caused by changes in neuronal electrochemical activity that results in abnormal synchronous discharges. The method aims at helping physicians discriminate between healthy and seizure electroencephalographic (EEG) signals. Method Discrimination in this work is achieved by analyzing EEG signals obtained from freely accessible databases. MATLAB has been used to implement and test the proposed classification algorithm. The analysis in question presents a classification of normal and ictal activities using a feature based on the Hilbert-Huang transform. Through this method, information related to the intrinsic mode functions contained in the EEG signal has been extracted to track the local amplitude and frequency of the signal. Based on this local information, weighted frequencies are calculated and a comparison between ictal and seizure-free determinant intrinsic functions is then performed. The methods of comparison used are the t-test and Euclidean clustering. Results The t-test results in a P-value. Conclusion An original tool for EEG signal processing, giving physicians the possibility to diagnose brain functionality abnormalities, is presented in this paper. The proposed system bears the potential of providing several credible benefits such as fast diagnosis, high accuracy, good sensitivity and specificity, time saving and user friendliness. Furthermore, the classification of mode mixing can be achieved using the extracted instantaneous information of every IMF, but it would most likely be a hard task if only the average value is used. Extra benefits of this proposed system include low cost and ease of interface. All of that indicates the usefulness of the tool and its use as an efficient diagnostic tool.

  13. Seizure classification in EEG signals utilizing Hilbert-Huang transform.

    Science.gov (United States)

    Oweis, Rami J; Abdulhay, Enas W

    2011-05-24

Classification methods capable of recognizing abnormal brain activity rely on either brain imaging or brain signal analysis. The abnormal activity of interest in this study is characterized by a disturbance caused by changes in neuronal electrochemical activity that results in abnormal synchronous discharges. The method aims at helping physicians discriminate between healthy and seizure electroencephalographic (EEG) signals. Discrimination in this work is achieved by analyzing EEG signals obtained from freely accessible databases. MATLAB has been used to implement and test the proposed classification algorithm. The analysis in question presents a classification of normal and ictal activities using a feature based on the Hilbert-Huang transform. Through this method, information related to the intrinsic mode functions contained in the EEG signal has been extracted to track the local amplitude and frequency of the signal. Based on this local information, weighted frequencies are calculated and a comparison between ictal and seizure-free determinant intrinsic functions is then performed. The methods of comparison used are the t-test and Euclidean clustering. The t-test results in a P-value, and the approach is attractive with respect to its fast response and ease of use. An original tool for EEG signal processing, giving physicians the possibility to diagnose brain functionality abnormalities, is presented in this paper. The proposed system bears the potential of providing several credible benefits such as fast diagnosis, high accuracy, good sensitivity and specificity, time saving and user friendliness. Furthermore, the classification of mode mixing can be achieved using the extracted instantaneous information of every IMF, but it would most likely be a hard task if only the average value is used. Extra benefits of this proposed system include low cost and ease of interface. All of that indicates the usefulness of the tool and its use as an efficient diagnostic tool.
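The weighted-frequency feature at the heart of both versions of this study can be sketched without implementing EMD or the Hilbert transform: given an IMF's instantaneous amplitude and frequency tracks, one natural feature is an amplitude-squared-weighted mean frequency. (The tracks below are synthetic, and the exact weighting the authors use may differ.)

```python
def weighted_mean_frequency(amplitude, frequency):
    """Amplitude-weighted mean of an instantaneous-frequency track:
    sum(a^2 * f) / sum(a^2), so high-energy stretches dominate."""
    num = sum(a * a * f for a, f in zip(amplitude, frequency))
    den = sum(a * a for a in amplitude)
    return num / den

# Toy track: a high-amplitude 10 Hz stretch and a low-amplitude 40 Hz stretch.
amp = [2.0] * 100 + [0.5] * 100
freq = [10.0] * 100 + [40.0] * 100
wmf = weighted_mean_frequency(amp, freq)
```

The result is pulled toward 10 Hz because the energy weighting discounts the weak 40 Hz segment; comparing such features between ictal and seizure-free IMFs is the discrimination step the abstract describes.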

  14. Genetic diversity within honeybee colonies increases signal production by waggle-dancing foragers

    Science.gov (United States)

    Mattila, Heather R; Burke, Kelly M; Seeley, Thomas D

    2008-01-01

    Recent work has demonstrated considerable benefits of intracolonial genetic diversity for the productivity of honeybee colonies: single-patriline colonies have depressed foraging rates, smaller food stores and slower weight gain relative to multiple-patriline colonies. We explored whether differences in the use of foraging-related communication behaviour (waggle dances and shaking signals) underlie differences in foraging effort of genetically diverse and genetically uniform colonies. We created three pairs of colonies; each pair had one colony headed by a multiply mated queen (inseminated by 15 drones) and one colony headed by a singly mated queen. For each pair, we monitored the production of foraging-related signals over the course of 3 days. Foragers in genetically diverse colonies had substantially more information available to them about food resources than foragers in uniform colonies. On average, in genetically diverse colonies compared with genetically uniform colonies, 36% more waggle dances were identified daily, dancers performed 62% more waggle runs per dance, foragers reported food discoveries that were farther from the nest and 91% more shaking signals were exchanged among workers each morning prior to foraging. Extreme polyandry by honeybee queens enhances the production of worker–worker communication signals that facilitate the swift discovery and exploitation of food resources. PMID:18198143

  15. A Novel Robust Audio Watermarking Algorithm by Modifying the Average Amplitude in Transform Domain

    Directory of Open Access Journals (Sweden)

    Qiuling Wu

    2018-05-01

Full Text Available In order to improve robustness and imperceptibility in practical applications, a novel audio watermarking algorithm with strong robustness is proposed by exploiting the multi-resolution characteristic of the discrete wavelet transform (DWT) and the energy compaction capability of the discrete cosine transform (DCT). The human auditory system is insensitive to minor changes in the frequency components of an audio signal, so watermarks can be embedded by slightly modifying those frequency components. Audio fragments segmented from the cover audio signal are decomposed by DWT to obtain several groups of wavelet coefficients in different frequency bands; the fourth-level detail coefficients are then selected and divided into a former packet and a latter packet, each of which undergoes DCT to yield a set of transform domain coefficients (TDC). Finally, the average amplitudes of the two sets of TDC are modified to embed the binary image watermark according to a special embedding rule. Watermark extraction is blind, requiring no access to the cover audio signal. Experimental results confirm that the proposed algorithm has good imperceptibility, large payload capacity and strong robustness against various attacks such as MP3 compression, low-pass filtering, re-sampling, re-quantization, amplitude scaling, echo addition and noise corruption.
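The embedding idea can be illustrated in stripped-down form: instead of DWT+DCT coefficients, take any two groups of coefficients and bias their average absolute amplitudes in opposite directions; extraction merely compares the two averages, so no cover signal is needed. This is a hedged sketch of the general mechanism, not the paper's exact rule or parameters:

```python
import random

def mean_abs(xs):
    return sum(abs(x) for x in xs) / len(xs)

def embed_bit(former, latter, bit, strength=0.25):
    """Rescale each group so its average absolute amplitude sits above or
    below the joint mean, encoding one watermark bit."""
    m = (mean_abs(former) + mean_abs(latter)) / 2
    hi, lo = m * (1 + strength), m * (1 - strength)
    tf, tl = (hi, lo) if bit else (lo, hi)
    return ([c * tf / mean_abs(former) for c in former],
            [c * tl / mean_abs(latter) for c in latter])

def extract_bit(former, latter):
    """Blind extraction: just compare the two average amplitudes."""
    return 1 if mean_abs(former) >= mean_abs(latter) else 0

random.seed(5)
coeffs = [random.gauss(0, 1) for _ in range(64)]
bits_out = [extract_bit(*embed_bit(coeffs[:32], coeffs[32:], b)) for b in (0, 1)]
```

Robustness comes from the gap 2·strength·m between the two averages: an attack must shift them by more than that gap to flip a bit, which is why moderate filtering or noise leaves the watermark intact.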

  16. Urban Traffic Signal System Control Structural Optimization Based on Network Analysis

    Directory of Open Access Journals (Sweden)

    Li Wang

    2013-01-01

Full Text Available Advanced urban traffic signal control systems such as SCOOT and SCATS normally coordinate the traffic network using a multilevel hierarchical control mechanism. In this mechanism, several key intersections are selected from the traffic signal network and the network is divided into different control subareas. Traditionally, key intersection selection and control subarea division are carried out according to dynamic traffic counts and link lengths between intersections, which rely largely on traffic engineers' experience. However, this omits important inherent characteristics of the traffic network topology. In this paper, we apply a network analysis approach to both aspects to optimize the traffic control structure. First, a modified C-means clustering algorithm is proposed to assess the importance of intersections in the traffic network and to determine the key intersections based on three indexes, instead of merely on traffic counts as in traditional methods. Second, an improved network community discovery method is used to give more reasonable evidence for traffic control subarea division. Finally, to test the effectiveness of the network analysis approach, a hardware-in-the-loop simulation environment composed of a regional traffic control system, microsimulation software and signal controller hardware is built. Both the traditional method and the proposed approach are implemented on the simulation test bed to evaluate traffic operation performance indexes, for example travel time, number of stops, delay and average vehicle speed. Simulation results show that the proposed network analysis approach can effectively improve the operational performance of the traffic control system.

  17. PN Sequence Preestimator Scheme for DS-SS Signal Acquisition Using Block Sequence Estimation

    Directory of Open Access Journals (Sweden)

    Sang Kyu Park

    2005-03-01

Full Text Available An m-sequence (PN sequence) preestimator scheme for direct-sequence spread spectrum (DS-SS) signal acquisition using block sequence estimation (BSE) is proposed and analyzed. The proposed scheme consists of an estimator and a verifier that work according to the PN sequence chip clock, and it provides not only enhanced chip estimates, with a threshold decision logic and one-chip error correction among the first m received chips, but also a reliability check of the estimates with additional decision logic. The probabilities of the estimator and verifier operations are calculated, and from these results the detection, false alarm, and missing probabilities of the proposed scheme are derived. In addition, the average acquisition time is calculated using a signal flow graph. The proposed scheme can be used as a preestimator and is easily implemented by changing the internal signal path of a commonly used digital matched filter (DMF) correlator, or of any other correlator that has sufficient sampling data memory for the sampled PN sequence. The numerical results show rapid acquisition performance at relatively good CNR.
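For comparison, the conventional approach such a preestimator competes with is serial-search correlation over all code phases. A hedged pure-Python sketch (5-stage LFSR generating a 31-chip m-sequence; the noise level and true shift are chosen for the demo):

```python
import random

def lfsr_msequence(taps=(5, 3), length=31):
    """31-chip m-sequence from a 5-stage Fibonacci LFSR with the primitive
    polynomial x^5 + x^3 + 1; output is returned as bipolar chips (+1/-1)."""
    state = [1, 0, 0, 1, 0]
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return [1 if b else -1 for b in out]

def serial_search(received, local):
    """Correlate the received chips against every cyclic shift of the local
    replica; the shift with the largest correlation is the acquisition point."""
    N = len(local)
    def corr(shift):
        return sum(received[n] * local[(n + shift) % N] for n in range(N))
    return max(range(N), key=corr)

random.seed(11)
pn = lfsr_msequence()
true_shift = 9
rx = [pn[(n + true_shift) % 31] + random.gauss(0, 0.5) for n in range(31)]
found = serial_search(rx, pn)
```

Because an m-sequence's cyclic autocorrelation is 31 at zero lag and -1 elsewhere, the correct shift stands out strongly even under noise; the BSE preestimator's advantage is avoiding this exhaustive sweep over all N phases.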

  18. Preprocessing the Nintendo Wii Board Signal to Derive More Accurate Descriptors of Statokinesigrams.

    Science.gov (United States)

    Audiffren, Julien; Contal, Emile

    2016-08-01

During the past few years, the Nintendo Wii Balance Board (WBB) has been used in postural control research as an affordable but less reliable replacement for laboratory-grade force platforms. However, the WBB suffers from some limitations, such as lower accuracy and an inconsistent sampling rate. In this study, we focus on the latter, namely the non-uniform acquisition frequency. We show that this problem, combined with the poor signal-to-noise ratio of the WBB, can drastically decrease the quality of the obtained information if not handled properly. We propose a new resampling method, Sliding Window Average with Relevance Interval Interpolation (SWARII), specifically designed with the WBB in mind, for which we provide an open source implementation. We compare it with several existing methods commonly used in postural control, on both synthetic and experimental data. The results show that some methods, such as linear and piecewise constant interpolation, should definitely be avoided, particularly when the resulting signal is differentiated, which is necessary to estimate speed, an important feature in postural control. Other methods, such as averaging on sliding windows or SWARII, perform significantly better on the synthetic dataset and produce results more similar to the laboratory-grade AMTI force plate (AFP) during experiments. Those methods should be preferred when resampling data collected from a WBB.
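The core of window-based resampling can be sketched in a few lines. This is the plain sliding-window average that SWARII builds on, not its relevance-interval logic, and the timestamps, rates and window width below are synthetic:

```python
import random

def sliding_window_resample(times, values, rate, window):
    """Resample a non-uniformly sampled signal onto a uniform grid by
    averaging all samples inside a window centred on each output tick;
    empty windows fall back to the nearest raw sample."""
    t_end = times[-1]
    out_t, out_v = [], []
    t = times[0]
    while t <= t_end:
        inside = [v for ti, v in zip(times, values) if abs(ti - t) <= window / 2]
        if inside:
            out_v.append(sum(inside) / len(inside))
        else:
            nearest = min(range(len(times)), key=lambda i: abs(times[i] - t))
            out_v.append(values[nearest])
        out_t.append(t)
        t += 1.0 / rate
    return out_t, out_v

random.seed(2)
# Irregular timestamps (nominal 50 Hz with jitter) sampling a ramp plus noise.
times, t = [], 0.0
for _ in range(200):
    t += random.uniform(0.005, 0.035)
    times.append(t)
values = [2.0 * ti + random.gauss(0, 0.1) for ti in times]
grid_t, grid_v = sliding_window_resample(times, values, rate=25.0, window=0.08)
mean_err = sum(abs(v - 2.0 * ti) for ti, v in zip(grid_t, grid_v)) / len(grid_v)
```

The output grid is strictly uniform, and the window averaging also suppresses part of the sensor noise, which matters when the signal is later differentiated to estimate speed.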

  19. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form of the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must take the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution, as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.

  20. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam’s Window

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E.

    2015-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam’s window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods. PMID:26917859
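As a rough illustration of the mechanics involved (not the authors' algorithm), a dynamic model averaging step combines a forgetting factor, which flattens the model weights before new data arrive, with a Bayesian likelihood update; a dynamic Occam's window then retains only the models whose weight is within a chosen fraction of the best. The unit-variance Gaussian likelihood and the constants below are illustrative assumptions.

```python
import math

def dma_step(weights, preds, y, forget=0.95, floor=1e-12):
    """One dynamic-model-averaging update: forgetting, then Bayes."""
    # Forgetting: raise weights to a power < 1, flattening them toward uniform.
    w = [max(wi, floor) ** forget for wi in weights]
    # Gaussian predictive likelihood (unit variance assumed for simplicity).
    lik = [math.exp(-0.5 * (y - p) ** 2) for p in preds]
    w = [wi * li for wi, li in zip(w, lik)]
    s = sum(w)
    return [wi / s for wi in w]

def occams_window(weights, c=0.001):
    """Indices of models whose weight is within a factor c of the best model."""
    best = max(weights)
    return [i for i, wi in enumerate(weights) if wi >= c * best]
```

Applying the step repeatedly concentrates weight on the better-predicting model, and the window keeps the set of live models small even when the full model space is huge.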

  1. An EEMD-PCA approach to extract heart rate, respiratory rate and respiratory activity from PPG signal.

    Science.gov (United States)

    Motin, Mohammod Abdul; Karmakar, Chandan Kumar; Palaniswami, Marimuthu

    2016-08-01

    The pulse oximeter's photoplethysmographic (PPG) signals measure the local variations of blood volume in tissues, reflecting the peripheral pulse modulated by cardiac activity, respiration and other physiological effects. Therefore, PPG can be used to extract vital cardiorespiratory signals such as heart rate (HR), respiratory rate (RR) and respiratory activity (RA), reducing the number of sensors attached to the patient's body for recording vital signs. In this paper, we propose an algorithm based on ensemble empirical mode decomposition with principal component analysis (EEMD-PCA) as a novel approach to estimate HR, RR and RA simultaneously from the PPG signal. To examine the performance of the proposed algorithm, we used 45 epochs of PPG, electrocardiogram (ECG) and respiratory signals extracted from the MIMIC database (Physionet ATM data bank). The ECG and capnograph-based respiratory signals were used as the ground truth, and several metrics, such as magnitude squared coherence (MSC), correlation coefficient (CC) and root mean square (RMS) error, were used to compare the performance of the EEMD-PCA algorithm with most of the existing methods in the literature. Results of EEMD-PCA-based extraction of HR, RR and RA from the PPG signal showed that the median (quartiles) RMS error obtained for RR was 0 (0, 0.89) breaths/min and for HR was 0.62 (0.56, 0.66) beats/min, and for RA the average values of MSC and CC were 0.95 and 0.89, respectively. These results illustrate that the proposed EEMD-PCA approach is more accurate in estimating HR, RR and RA than other existing methods.
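EEMD itself is involved, but the PCA stage, extracting the dominant direction of variation from the decomposed channels, can be sketched with a plain power iteration. This is a generic sketch of the PCA step only, not the paper's EEMD-PCA pipeline.

```python
import math

def first_pc(data, iters=100):
    """Dominant principal component of row-wise samples, via power iteration."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # Sample covariance matrix of the centered data.
    cov = [[sum(r[a] * r[b] for r in centered) / n for b in range(d)]
           for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        v = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(c * c for c in v))
        v = [c / norm for c in v]
    return v
```

In an EEMD-PCA pipeline, the rows would be the intrinsic mode functions of the PPG, and the leading components carry the cardiac and respiratory oscillations.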

  2. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain.

  3. Space-Varying Iterative Restoration of Diffuse Optical Tomograms Reconstructed by the Photon Average Trajectories Method

    Directory of Open Access Journals (Sweden)

    Kravtsenyuk Olga V

    2007-01-01

    Full Text Available The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT) method is substantiated. The PAT method, recently presented by us, is based on the concept of an average statistical trajectory for the transfer of light energy, the photon average trajectory (PAT). The inverse problem of diffuse optical tomography is reduced to the solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs images blurred by averaging over the spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of the spatially invariant point spread functions simulated for different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for least squares problems and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a gain in spatial resolution can be obtained.

  4. Space-Varying Iterative Restoration of Diffuse Optical Tomograms Reconstructed by the Photon Average Trajectories Method

    Directory of Open Access Journals (Sweden)

    Vladimir V. Lyubimov

    2007-01-01

    Full Text Available The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT) method is substantiated. The PAT method, recently presented by us, is based on the concept of an average statistical trajectory for the transfer of light energy, the photon average trajectory (PAT). The inverse problem of diffuse optical tomography is reduced to the solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs images blurred by averaging over the spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of the spatially invariant point spread functions simulated for different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for least squares problems and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a 27% gain in spatial resolution can be obtained.
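The first of the two deblurring solvers named above, the conjugate gradient algorithm for least squares (CGLS), is standard enough to sketch. This is a generic textbook version operating on a small dense matrix, not the authors' code, which would apply it to the blur model matrix-free.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def rmatvec(A, y):
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

def cgls(A, b, iters=50, tol=1e-14):
    """Minimize ||A x - b||^2 by conjugate gradients on the normal equations."""
    n = len(A[0])
    x = [0.0] * n
    r = list(b)                    # residual b - A x (x starts at 0)
    s = rmatvec(A, r)              # gradient A^T r
    p = list(s)
    gamma = sum(si * si for si in s)
    for _ in range(iters):
        q = matvec(A, p)
        alpha = gamma / sum(qi * qi for qi in q)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        s = rmatvec(A, r)
        gamma_new = sum(si * si for si in s)
        if gamma_new < tol:
            break
        p = [si + (gamma_new / gamma) * pi for si, pi in zip(s, p)]
        gamma = gamma_new
    return x
```

For deblurring, A would be the (spatially variant) blur operator and b the blurred tomogram; early stopping of the iteration acts as regularization.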

  5. Simulation of X-ray signals

    International Nuclear Information System (INIS)

    Weller, A.

    1980-12-01

    A parameterized form of the local emissivity is used for the simulation of soft X-ray signals obtained on the WENDELSTEIN W VII-A stellarator with a 30-diode array. Numerical calculation of the line integrals for the different viewing angles and for a set of rotation angles covering one full signal period provides simulated periodic signals. In addition, radial profiles of the line-integrated emission averaged over some time interval or at specific times, the relative amplitude modulation and the relative phase of the oscillations are calculated. These have to be fitted to the corresponding measured signals and profiles in order to get a reliable picture of the local emissivity. The model can take into account two poloidally asymmetric contributions of the type m = 1, 2, 3 or 4 (m = poloidal mode number). Each asymmetry can be generated in two ways (modulation of intensity and of geometry parameters). Besides a uniform rotation of the asymmetric terms, some specific simple time evolutions of the signals can be included (non-uniform rotation, growth of oscillations, sawtooth oscillations). The various input parameters are illustrated and the result of a simulation procedure is presented for a particular discharge in W VII-A. (orig.)
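A minimal version of such a line-integral computation can be sketched for one chord of a diode array, using a hypothetical parabolic emissivity profile with a single m-mode asymmetry. The profile, amplitude and normalization here are illustrative assumptions, not the W VII-A parameterization.

```python
import math

def chord_signal(impact, m=1, amp=0.3, phase=0.0, n=400):
    """Line integral of eps(r, theta) = (1 - r**2) * (1 + amp*cos(m*theta - phase))
    along a horizontal chord y = impact through a unit-radius plasma."""
    half = math.sqrt(max(0.0, 1.0 - impact * impact))
    dx = 2.0 * half / n
    total = 0.0
    for i in range(n):
        x = -half + (i + 0.5) * dx
        r = math.sqrt(x * x + impact * impact)
        theta = math.atan2(impact, x)
        total += (1.0 - r * r) * (1.0 + amp * math.cos(m * theta - phase)) * dx
    return total
```

Scanning `phase` over one rotation period yields the simulated periodic signals for each chord; fitting `amp` and `phase` against the measured chord signals is what recovers the local asymmetry.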

  6. Effects of gradient encoding and number of signal averages on fractional anisotropy and fiber density index in vivo at 1.5 tesla.

    Science.gov (United States)

    Widjaja, E; Mahmoodabadi, S Z; Rea, D; Moineddin, R; Vidarsson, L; Nilsson, D

    2009-01-01

    Tensor estimation can be improved by increasing the number of gradient directions (NGD) or increasing the number of signal averages (NSA), but at the cost of increased scan time. To evaluate the effects of NGD and NSA on fractional anisotropy (FA) and fiber density index (FDI) in vivo, ten healthy adults were scanned on a 1.5T system using nine different diffusion tensor sequences. Combinations of 7 NGD, 15 NGD, and 25 NGD with 1 NSA, 2 NSA, and 3 NSA were used, with scan times varying from 2 to 18 min. Regions of interest (ROIs) were placed in the internal capsules, middle cerebellar peduncles, and splenium of the corpus callosum, and FA and FDI were calculated. Analysis of variance was used to assess whether there was a difference in FA and FDI between different combinations of NGD and NSA. There was no significant difference in FA between the different combinations of NGD and NSA in any of the ROIs (P > 0.005). There was a significant difference in FDI between 7 NGD/1 NSA and 25 NGD/3 NSA in all three ROIs (P < 0.005), but no significant difference in FDI between 25 NGD/1 NSA, 25 NGD/2 NSA and 25 NGD/3 NSA in all ROIs (P > 0.005). We have not found any significant difference in FA with varying NGD and NSA in vivo in areas with relatively high anisotropy. However, lower NGD resulted in reduced FDI in vivo. With larger NGD, NSA has less influence on FDI. The optimal sequence among the nine sequences tested, with the shortest scan time, was 25 NGD/1 NSA.

  7. Machine Learning Techniques for Optical Performance Monitoring from Directly Detected PDM-QAM Signals

    DEFF Research Database (Denmark)

    Thrane, Jakob; Wass, Jesper; Piels, Molly

    2017-01-01

    Linear signal processing algorithms are effective in dealing with linear transmission channels and linear signal detection, while nonlinear signal processing algorithms, from the machine learning community, are effective in dealing with nonlinear transmission channels and nonlinear signal detection. In this paper, a brief overview of the various machine learning methods and their application in optical communication is presented and discussed. Moreover, supervised machine learning methods, such as neural networks and support vector machines, are experimentally demonstrated for in-band optical …

  8. Biomechanics of the Peacock's Display: How Feather Structure and Resonance Influence Multimodal Signaling.

    Directory of Open Access Journals (Sweden)

    Roslyn Dakin

    Full Text Available Courtship displays may serve as signals of the quality of motor performance, but little is known about the underlying biomechanics that determines both their signal content and costs. Peacocks (Pavo cristatus) perform a complex, multimodal "train-rattling" display in which they court females by vibrating the iridescent feathers in their elaborate train ornament. Here we study how feather biomechanics influences the performance of this display using a combination of field recordings and laboratory experiments. Using high-speed video, we find that train-rattling peacocks stridulate their tail feathers against the train at 25.6 Hz, on average, generating a broadband, pulsating mechanical sound at that frequency. Laboratory measurements demonstrate that arrays of peacock tail and train feathers have a broad resonant peak in their vibrational spectra in the range of frequencies used for train-rattling during the display, and the motion of the feathers is just as expected for feathers shaking near resonance. This indicates that peacocks are able to drive feather vibrations energetically efficiently over a relatively broad range of frequencies, enabling them to modulate the feather vibration frequency of their displays. Using our field data, we show that peacocks with longer trains use slightly higher vibration frequencies on average, even though longer train feathers are heavier and have lower resonant frequencies. Based on these results, we propose hypotheses for future studies of the function and energetics of this display that ask why its dynamic elements might attract and maintain female attention. Finally, we demonstrate how the mechanical structure of the train feathers affects the peacock's visual display by allowing the colorful iridescent eyespots, which strongly influence female mate choice, to remain nearly stationary against a dynamic iridescent background.

  9. Evaluation of blind signal separation methods

    NARCIS (Netherlands)

    Schobben, D.W.E.; Torkkola, K.; Smaragdis, P.

    1999-01-01

    Recently, many new Blind Signal Separation (BSS) algorithms have been introduced. Authors evaluate the performance of their algorithms in various ways; among these are speech recognition rates, plots of separated signals, plots of cascaded mixing/unmixing impulse responses, and signal-to-noise ratios. Clearly …
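Of the evaluation criteria listed, the signal-to-noise ratio of a separated output against its known source is the easiest to pin down. One common definition, shown below, is an illustrative choice, since the entry compares several possible metrics.

```python
import math

def snr_db(reference, estimate):
    """SNR of a separated signal against its reference source, in dB:
    10*log10(signal power / residual error power)."""
    signal_power = sum(r * r for r in reference)
    error_power = sum((r - e) ** 2 for r, e in zip(reference, estimate))
    return 10.0 * math.log10(signal_power / error_power)
```

A higher value means the separated output is closer to the true source; a perfect separation drives the error power to zero and the SNR to infinity.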

  10. High-average-power diode-pumped Yb:YAG lasers

    International Nuclear Information System (INIS)

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-01-01

    A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high-average-power Yb:YAG lasers that utilize a rod-configured gain element. Previously, this rod-configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High-beam-quality (M^2 = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual-rod configuration consisting of two 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M^2 value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual-rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M^2 < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 ns pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling, near-diffraction-limited modes; (2) compound laser rods with flanged nonabsorbing endcaps fabricated by diffusion bonding; (3) techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished-barrel rods.

  11. Self-organized neural network for the quality control of 12-lead ECG signals

    International Nuclear Information System (INIS)

    Chen, Yun; Yang, Hui

    2012-01-01

    Telemedicine is very important for the timely delivery of health care to cardiovascular patients, especially those who live in the rural areas of developing countries. However, a number of uncertainty factors are inherent to mobile-phone-based recording of electrocardiogram (ECG) signals, such as personnel with minimal training and other extraneous noises. PhysioNet organized a challenge in 2011 to develop efficient algorithms that can assess ECG signal quality in telemedicine settings. This paper presents our efforts in this challenge to integrate multiscale recurrence analysis with a self-organizing map for controlling ECG signal quality. Instead of directly evaluating the 12-lead ECG, we first utilize an information-preserving transform, i.e. the Dower transform, to derive the 3-lead vectorcardiogram (VCG) from the 12-lead ECG. Secondly, we delineate the nonlinear and nonstationary characteristics underlying the 3-lead VCG signals into multiple time-frequency scales. Furthermore, a self-organizing map is trained, in both supervised and unsupervised ways, to identify the correlations between signal quality and multiscale recurrence features. The efficacy and robustness of this approach are validated using real-world ECG recordings available from PhysioNet. The average performance was 95.25% on the training dataset and 90.0% on the independent test dataset with unknown labels. (paper)

  12. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong … approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
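The barycenter approach discussed here is easy to state: average the quaternion components and renormalize, taking care that q and -q encode the same rotation. A minimal sketch follows; the sign-alignment rule is the usual convention, not taken from the paper, whose point is that this Euclidean mean only approximates the proper Riemannian mean on the rotation manifold.

```python
import math

def quat_barycenter(quats):
    """Naive Euclidean mean of unit quaternions, renormalized afterwards."""
    ref = quats[0]
    acc = [0.0, 0.0, 0.0, 0.0]
    for q in quats:
        # q and -q represent the same rotation: align signs before summing.
        if sum(a * b for a, b in zip(ref, q)) < 0.0:
            q = [-c for c in q]
        acc = [a + c for a, c in zip(acc, q)]
    norm = math.sqrt(sum(c * c for c in acc))
    return [c / norm for c in acc]
```

For tightly clustered rotations the renormalization step closely tracks the Riemannian result, which is consistent with the paper's conclusion that the corrections are inherent in the least squares estimation.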

  13. Performance and Usability of Various Robotic Arm Control Modes from Human Force Signals

    Directory of Open Access Journals (Sweden)

    Sébastien Mick

    2017-10-01

    Full Text Available Elaborating an efficient and usable mapping between input commands and output movements is still a key challenge for the design of robotic arm prostheses. In order to address this issue, we present and compare three different control modes, assessing them in terms of performance as well as general usability. Using an isometric force transducer as the command device, these modes convert the force input signal into either a position or a velocity vector whose magnitude is linearly or quadratically related to the force input magnitude. With the robotic arm from the open source 3D-printed Poppy Humanoid platform simulating a mobile prosthesis, an experiment was carried out with eighteen able-bodied subjects performing a 3-D target-reaching task using each of the three modes. The subjects were given questionnaires to evaluate the quality of their experience with each mode, providing an assessment of their global usability in the context of the task. According to the performance metrics and questionnaire results, the velocity control modes were found to perform better than the position control mode in terms of accuracy and quality of control, as well as user satisfaction and comfort. Subjects also seemed to favor quadratic velocity control over linear (proportional) velocity control, even if these two modes were not clearly distinguished from one another in the performance and usability assessments. These results highlight the need to take user experience into account as one of the key criteria for the design of control modes intended to operate limb prostheses.
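The two velocity mappings compared in the study can be sketched directly: the commanded velocity points along the measured force, with a magnitude either proportional (linear) or quadratic in the force magnitude. The gain and units below are placeholders, not values from the paper.

```python
import math

def force_to_velocity(force, gain=1.0, mode="linear"):
    """Map a force vector from an isometric transducer to a commanded velocity."""
    mag = math.sqrt(sum(c * c for c in force))
    if mag == 0.0:
        return [0.0] * len(force)
    speed = gain * mag if mode == "linear" else gain * mag * mag
    # Keep the direction of the force, rescale its magnitude.
    return [speed * c / mag for c in force]
```

The quadratic variant deliberately damps the response to small forces while amplifying strong ones, which may explain the subjects' preference for it.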

  14. An in-vacuum wall current monitor and low cost signal sampling system

    International Nuclear Information System (INIS)

    Yin, Y.; Rawnsley, W.R.; Mackenzie, G.H.

    1993-11-01

    The beam bunches extracted from the TRIUMF cyclotron are usually about 4 ns long, contain ∼4 × 10^7 protons, and are spaced at 43 ns. A wall current monitor capable of giving the charge distribution within a bunch, on a bunch-by-bunch basis, has recently been installed together with a sampling system for routine display in the control room. The wall current monitor is enclosed in a vacuum vessel, and no ceramic spacer is required. This enhances the response to high frequencies; ferrite rings extend the low-frequency response. Bench measurements show a flat response between a few hundred kilohertz and 4.6 GHz. For a permanent display in the control room, the oscilloscope will be replaced by a Stanford Research Systems fast sampler module, a scanner module, and an interface module made at TRIUMF. The time to acquire one 10 ns distribution encompassing the beam bunch is 30 ms, with a sample width of 100 ps and an average sample spacing of 13 ps. The scan, sample, and retrace signals are buffered and carried on 70 m differential lines to the control room. An analog scope in XYZ mode provides a real-time display. Signal averaging can be performed by using a digital oscilloscope in YT mode. (author). 6 refs., 2 tabs., 7 figs.
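The signal averaging mentioned at the end is, per sample point, a running mean over repeated sweeps. Kept in the incremental form below, the stored record is a calibrated average after every sweep, and uncorrelated noise falls roughly as the square root of the number of sweeps. This is a generic sketch, not the oscilloscope's implementation.

```python
def running_average(sweeps):
    """Per-sample running mean over repeated sweeps: after sweep k the stored
    record equals the mean of all k sweeps seen so far."""
    avg = None
    for k, sweep in enumerate(sweeps, start=1):
        if avg is None:
            avg = list(sweep)
        else:
            # Incremental mean update: avg += (x - avg) / k, per sample point.
            avg = [a + (x - a) / k for a, x in zip(avg, sweep)]
    return avg
```

Because the record is a valid mean at every step, it can be displayed continuously while acquisition is still in progress.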

  15. Use of modulated excitation signals in ultrasound. Part II: Design and performance for medical imaging applications

    DEFF Research Database (Denmark)

    Misaridis, Thanassis; Jensen, Jørgen Arendt

    2005-01-01

    … ultrasound presents design methods for linear FM signals and mismatched filters, in order to meet the higher demands on resolution in ultrasound imaging. It is shown that for the small time-bandwidth (TB) products available in ultrasound, the rectangular spectrum approximation is not valid, which reduces … The method is evaluated first for resolution performance and axial sidelobes through simulations with the program Field II. A coded excitation ultrasound imaging system based on a commercial scanner and a 4 MHz probe driven by coded sequences is presented and used for the clinical evaluation of the coded excitation/compression scheme. The clinical images show a significant improvement in penetration depth and contrast, while preserving both axial and lateral resolution. At the maximum acquisition depth of 15 cm, there is an improvement of more than 10 dB in the signal-to-noise ratio of the images …
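A linear FM (chirp) excitation and its compression by correlation can be sketched as below. The carrier, bandwidth and sampling values are arbitrary toy numbers, and a real design would use the mismatched (weighted) filters discussed in the paper to lower the axial sidelobes.

```python
import math

def lfm_chirp(fc, bw, dur, fs):
    """Linear FM pulse sweeping fc - bw/2 .. fc + bw/2 over dur seconds."""
    f0 = fc - bw / 2.0
    n = int(dur * fs)
    return [math.cos(2.0 * math.pi * (f0 * t + (bw / (2.0 * dur)) * t * t))
            for t in (i / fs for i in range(n))]

def matched_filter(sig, ref):
    """Correlate the received signal with the transmitted pulse (compression)."""
    n, m = len(sig), len(ref)
    return [sum(sig[i + j] * ref[j] for j in range(m)) for i in range(n - m + 1)]
```

Compression concentrates the long pulse's energy into a short peak at the echo delay, which is how coded excitation gains SNR and penetration without raising the peak transmit pressure.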

  16. Pretreatment data is highly predictive of liver chemistry signals in clinical trials.

    Science.gov (United States)

    Cai, Zhaohui; Bresell, Anders; Steinberg, Mark H; Silberg, Debra G; Furlong, Stephen T

    2012-01-01

    The goal of this retrospective analysis was to assess how well predictive models could determine which patients would develop liver chemistry signals during clinical trials based on their pretreatment (baseline) information. Based on data from 24 late-stage clinical trials, classification models were developed to predict liver chemistry outcomes using baseline information, which included demographics, medical history, concomitant medications, and baseline laboratory results. Predictive models using baseline data predicted which patients would develop liver signals during the trials with average validation accuracy around 80%. Baseline levels of individual liver chemistry tests were most important for predicting their own elevations during the trials. High bilirubin levels at baseline were not uncommon and were associated with a high risk of developing biochemical Hy's law cases. Baseline γ-glutamyltransferase (GGT) level appeared to have some predictive value, but did not increase predictability beyond using established liver chemistry tests. It is possible to predict which patients are at a higher risk of developing liver chemistry signals using pretreatment (baseline) data. Derived knowledge from such predictions may allow proactive and targeted risk management, and the type of analysis described here could help determine whether new biomarkers offer improved performance over established ones.

  17. Hydrophone area-averaging correction factors in nonlinearly generated ultrasonic beams

    International Nuclear Information System (INIS)

    Cooling, M P; Humphrey, V F; Wilkens, V

    2011-01-01

    The nonlinear propagation of an ultrasonic wave can be used to produce a wavefield rich in higher frequency components that is ideally suited to the calibration, or inter-calibration, of hydrophones. These techniques usually use a tone-burst signal, limiting the measurements to harmonics of the fundamental calibration frequency. Alternatively, using a short pulse enables calibration at a continuous spectrum of frequencies. Such a technique is used at PTB in conjunction with an optical measurement technique to calibrate devices. Experimental findings indicate that the area-averaging correction factor for a hydrophone in such a field demonstrates a complex behaviour, most notably varying periodically between frequencies that are harmonics of the centre frequency of the original pulse and frequencies that lie midway between these harmonics. The beam characteristics of such nonlinearly generated fields have been investigated using a finite difference solution to the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation for a focused field. The simulation results are used to calculate the hydrophone area-averaging correction factors for 0.2 mm and 0.5 mm devices. The results clearly demonstrate a number of significant features observed in the experimental investigations, including the variation with frequency, drive level and hydrophone element size. An explanation for these effects is also proposed.

  18. Hydrophone area-averaging correction factors in nonlinearly generated ultrasonic beams

    Science.gov (United States)

    Cooling, M. P.; Humphrey, V. F.; Wilkens, V.

    2011-02-01

    The nonlinear propagation of an ultrasonic wave can be used to produce a wavefield rich in higher frequency components that is ideally suited to the calibration, or inter-calibration, of hydrophones. These techniques usually use a tone-burst signal, limiting the measurements to harmonics of the fundamental calibration frequency. Alternatively, using a short pulse enables calibration at a continuous spectrum of frequencies. Such a technique is used at PTB in conjunction with an optical measurement technique to calibrate devices. Experimental findings indicate that the area-averaging correction factor for a hydrophone in such a field demonstrates a complex behaviour, most notably varying periodically between frequencies that are harmonics of the centre frequency of the original pulse and frequencies that lie midway between these harmonics. The beam characteristics of such nonlinearly generated fields have been investigated using a finite difference solution to the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation for a focused field. The simulation results are used to calculate the hydrophone area-averaging correction factors for 0.2 mm and 0.5 mm devices. The results clearly demonstrate a number of significant features observed in the experimental investigations, including the variation with frequency, drive level and hydrophone element size. An explanation for these effects is also proposed.
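The area-averaging correction factor can be made concrete for a simple beam model: it is the ratio of the on-axis pressure to the pressure averaged over the hydrophone's circular element. The axisymmetric Gaussian radial profile below is an illustrative assumption, not the focused KZK field simulated in the paper.

```python
import math

def area_avg_correction(pressure, radius, n=2000):
    """On-axis pressure divided by its average over a disc of given radius,
    computed with a midpoint rule in r (axisymmetric field assumed)."""
    dr = radius / n
    acc = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        acc += pressure(r) * 2.0 * math.pi * r * dr   # annulus contribution
    avg = acc / (math.pi * radius ** 2)
    return pressure(0.0) / avg
```

Larger elements average over more of the beam and so need larger corrections, consistent with the element-size dependence reported for the 0.2 mm and 0.5 mm devices.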

  19. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms.
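A concrete instance of the bias and its cure: suppose each experiment reports a relative error, so its quoted sigma scales with its own measured value. Weighting by the quoted variances then pulls the average toward low measurements; re-deriving each sigma from the current average and iterating removes the bias. The specific error model below is an illustrative assumption, not the paper's general treatment.

```python
def naive_average(values, rel_errs):
    """Weight each experiment by its own reported error, sigma_i = r_i * x_i."""
    w = [1.0 / (r * v) ** 2 for v, r in zip(values, rel_errs)]
    return sum(wi * v for wi, v in zip(w, values)) / sum(w)

def iterated_average(values, rel_errs, iters=20):
    """Re-derive each error from the current average, sigma_i = r_i * mean,
    and iterate; this removes the bias toward low measurements."""
    mean = sum(values) / len(values)
    for _ in range(iters):
        w = [1.0 / (r * mean) ** 2 for r in rel_errs]
        mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    return mean
```

With equal relative errors the iterated weights become equal, so the bias of the naive weighting toward the lower measurement disappears entirely.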

  20. Cancelable ECG biometrics using GLRT and performance improvement using guided filter with irreversible guide signal.

    Science.gov (United States)

    Kim, Hanvit; Minh Phuong Nguyen; Se Young Chun

    2017-07-01

    Biometrics such as the ECG provide a convenient and powerful security tool to verify or identify an individual. However, one important drawback of biometrics is that it is irrevocable: biometrics cannot practically be re-used once compromised. Cancelable biometrics has been investigated to overcome this drawback. In this paper, we propose cancelable ECG biometrics by deriving a generalized likelihood ratio test (GLRT) detector from a composite hypothesis test in a randomly projected domain. Since performance degradation is common for cancelable biometrics, we also propose guided filtering (GF) with an irreversible guide signal, i.e. a non-invertibly transformed version of the ECG authentication template. We evaluated our proposed method using the ECG-ID database with 89 subjects. A conventional Euclidean detector with the original ECG template yielded 93.9% PD1 (detection probability at 1% FAR), while the Euclidean detector with 10% compressed ECG (1/10 of the original data size) yielded 90.8% PD1. Our proposed GLRT detector with 10% compressed ECG yielded 91.4%, which is better than the Euclidean detector with the same compressed ECG. GF with our proposed irreversible ECG template further improved the performance of our GLRT with 10% compressed ECG up to 94.3%, which is higher than the Euclidean detector with the original ECG. Lastly, we showed that our proposed cancelable ECG biometrics practically meets cancelable biometrics criteria such as efficiency, re-usability, diversity and non-invertibility.
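The random-projection step that makes such a template cancelable can be sketched as follows: projecting to fewer dimensions with a key-seeded matrix is many-to-one, hence non-invertible, and a compromised template is revoked by re-enrolling with a new key. This is a generic illustration, not the paper's exact transform or GLRT detector.

```python
import random

def cancelable_template(signal, out_dim, key):
    """Key-seeded Gaussian random projection of a biometric signal.
    out_dim < len(signal) makes the mapping non-invertible; a new key
    yields a new, unlinkable template (cancelability)."""
    rng = random.Random(key)
    n = len(signal)
    proj = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(out_dim)]
    return [sum(row[j] * signal[j] for j in range(n)) for row in proj]
```

Matching is then performed entirely in the projected domain, so the raw ECG template never needs to be stored.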

  1. Digital mammography screening: average glandular dose and first performance parameters

    International Nuclear Information System (INIS)

    Weigel, S.; Girnus, R.; Czwoydzinski, J.; Heindel, W.; Decker, T.; Spital, S.

    2007-01-01

    Purpose: The Radiation Protection Commission demanded a structured implementation of digital mammography screening in Germany. The main requirements were the installation of digital reference centers and separate evaluation of the fully digitized screening units. Digital mammography screening must meet the quality standards of the European guidelines and must be compared with analog screening results. We analyzed early surrogate indicators of effective screening and dose levels for the first German digital screening unit in a routine setting after the first half of the initial screening round. Materials and Methods: We used three digital mammography screening units (one full-field digital scanner [DR] and two computed radiography systems [CR]). Each system has been proven to fulfill the requirements of the national and European guidelines. The radiation exposure levels, the medical workflow and the histological results were documented in a central electronic screening record. Results: In the first year 11,413 women were screened (participation rate 57.5%). The parenchymal doses for the three mammographic X-ray systems, averaged over the different breast sizes, were 0.7 (DR), 1.3 (CR) and 1.5 (CR) mGy. 7% of the screened women needed to undergo further examinations. The total number of screen-detected cancers was 129 (detection rate 1.1%). 21% of the carcinomas were classified as ductal carcinomas in situ; 40% of the invasive carcinomas had a histological size ≤ 10 mm and 61% < 15 mm. The frequency distribution of pT categories of screen-detected cancers was as follows: pTis 20.9%, pT1 61.2%, pT2 14.7%, pT3 2.3%, pT4 0.8%. 73% of the invasive carcinomas were node-negative. (orig.)

  2. Front-end data reduction of diagnostic signals by real-time digital filtering

    International Nuclear Information System (INIS)

    Zasche, D.; Fahrbach, H.U.; Harmeyer, E.

    1985-01-01

    Diagnostic measurements on a fusion plasma with high resolution in space, time and signal amplitude involve handling large amounts of data. In the design of the soft-X-ray pinhole camera diagnostic for JET (100 detectors in 2 cameras), a new approach to this problem was found. The analogue-to-digital conversion is performed continuously at the highest sample rate of 200 kHz; lower sample rates (10 kHz, 1 kHz, 100 Hz) are obtained by real-time digital filters which calculate weighted averages over consecutive samples and whose outputs are undersampled to reduce the data rate. At any time, the signals from all detectors are available at all possible data rates in ring buffers, so the appropriate data rate can always be recorded on demand (preprogrammed or triggered by the experiment). With this system a reduction of the raw data by a factor of up to 2000 (typically 200) is possible without severe loss of information.
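
    The filter-and-undersample stage can be sketched as a boxcar average per output sample (a simplification of the weighted averages described above; the rates and the test signal are illustrative):

```python
def decimate(samples, factor):
    """Boxcar low-pass + undersampling: average each non-overlapping
    block of `factor` samples and keep one output value per block."""
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples) - factor + 1, factor)]

# emulate the cascade 200 kHz -> 10 kHz -> 1 kHz on a toy record:
# a DC level of 2.0 plus a full-rate alternating +/-1 disturbance
raw = [2.0 + (1.0 if i % 2 == 0 else -1.0) for i in range(4000)]
at_10khz = decimate(raw, 20)       # 200 kHz / 20
at_1khz = decimate(at_10khz, 10)   # 10 kHz / 10
print(len(at_10khz), len(at_1khz), at_1khz[0])
```

    The alternating disturbance averages out exactly while the DC level survives, and each stage shrinks the data rate by its decimation factor.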

  3. A programmable Gaussian random pulse generator for automated performance measurements

    International Nuclear Information System (INIS)

    Abdel-Aal, R.E.

    1989-01-01

    This paper describes a versatile random signal generator which produces logic pulses with a Gaussian distribution for the pulse spacing. The average rate at the pulse generator output can be software-programmed, which makes it useful in performing automated measurements of dead time and CPU time performance of data acquisition systems and modules over a wide range of data rates. Hardware and software components are described and data on the input-output characteristics and the statistical properties of the pulse generator are given. Typical applications are discussed together with advantages over using radioactive test sources. Results obtained from an automated performance run on a VAX 11/785 data acquisition system are presented. (orig.)
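
    A software model of such a generator is straightforward: draw Gaussian inter-pulse spacings, clip at a minimum spacing, and accumulate. This sketch (parameter values invented) checks that the programmed average rate is actually realized:

```python
import random

random.seed(0)

def pulse_times(mean_spacing_s, sigma_s, n, dead_time_s=1e-6):
    """Event times whose successive spacings are Gaussian distributed;
    dead_time_s imposes a minimum spacing, as real hardware would."""
    t, out = 0.0, []
    for _ in range(n):
        t += max(random.gauss(mean_spacing_s, sigma_s), dead_time_s)
        out.append(t)
    return out

# program a 1 kHz average rate with 20% jitter on the spacing
times = pulse_times(1e-3, 2e-4, 10000)
avg_rate_hz = len(times) / times[-1]
print(avg_rate_hz)
```
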

  4. Receiver Signal to Noise Ratios for IPDA Lidars Using Sine-wave and Pulsed Laser Modulation and Direct Detections

    Science.gov (United States)

    Sun, Xiaoli; Abshire, James B.

    2011-01-01

    Integrated path differential absorption (IPDA) lidar can be used to remotely measure the column density of gases in the path to a scattering target [1]. The total column gas molecular density can be derived from the ratio of the laser echo signal power with the laser wavelength on the gas absorption line (on-line) to that off the line (off-line). Both coherent detection and direct detection IPDA lidar have been used successfully in the past in horizontal path and airborne remote sensing measurements. However, for space-based measurements, the signal propagation losses are often orders of magnitude higher, and it is important to use the most efficient laser modulation and detection technique to minimize the average laser power and the electrical power drawn from the spacecraft. This paper gives an analysis of the receiver signal-to-noise ratio (SNR) of several laser modulation and detection techniques versus the average received laser power under similar operating environments. Coherent detection [2] can give the best receiver performance when the local oscillator laser is relatively strong and the heterodyne mixing losses are negligible. Coherent detection has a high signal gain and a very narrow bandwidth for the background light and detector dark noise. However, coherent detection must maintain a high degree of coherence between the local oscillator laser and the received signal in both temporal and spatial modes. This often results in a high system complexity and low overall measurement efficiency. For measurements through the atmosphere, the coherence diameter of the received signal also limits the useful size of the receiver telescope. Direct detection IPDA lidars are simpler to build and have fewer constraints on the transmitter and receiver components. They can use much larger 'photon-bucket' type telescopes to reduce the demands on the laser transmitter. Here we consider the two most widely used direct detection IPDA lidar techniques.
The first technique uses two CW

  5. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  6. The application of empirical mode decomposition for the enhancement of cardiotocograph signals

    International Nuclear Information System (INIS)

    Krupa, B N; Mohd Ali, M A; Zahedi, E

    2009-01-01

    The cardiotocograph (CTG) is widely used in everyday clinical practice for fetal surveillance, where it records fetal heart rate (FHR) and uterine activity (UA). These two biosignals can be used for antepartum and intrapartum fetal monitoring and are, in fact, nonlinear and non-stationary. CTG recordings are often corrupted by artifacts such as missing beats in the FHR and high-frequency noise in the FHR and UA signals. In this paper, an empirical mode decomposition (EMD) method is applied to CTG signals. A recursive algorithm is first utilized to eliminate missing beats. High-frequency noise is then reduced using EMD followed by partial reconstruction (PAR), where the noise order is identified by a statistical method. The signal enhancement obtained with the proposed method is validated by comparing the resulting traces with the output obtained by applying classical signal processing methods such as Butterworth low-pass filtering, linear interpolation and a moving average filter to 12 CTG signals. Three obstetricians evaluated all 12 sets of traces and rated the proposed method, on average, 3.8 out of 5 on a scale of 1 (lowest) to 5 (highest).
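
    The classical methods used as the baseline above are simple to sketch. The fragment below (toy data, not the paper's EMD/PAR pipeline) shows linear interpolation across missing beats and a moving-average smoother:

```python
def fill_missing(fhr, missing=0.0):
    """Linear interpolation across runs of dropped beats (marked `missing`)."""
    out = list(fhr)
    i = 0
    while i < len(out):
        if out[i] == missing:
            j = i
            while j < len(out) and out[j] == missing:
                j += 1
            left = out[i - 1] if i > 0 else (out[j] if j < len(out) else missing)
            right = out[j] if j < len(out) else left
            for k in range(i, j):
                frac = (k - i + 1) / (j - i + 1)
                out[k] = left + frac * (right - left)
            i = j
        else:
            i += 1
    return out

def moving_average(x, window):
    """Centered moving average with shrinking windows at the edges."""
    half = window // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

fhr = [140, 141, 0, 0, 144, 145]   # two dropped beats marked as 0
filled = fill_missing(fhr)
smoothed = moving_average([1, 2, 3, 4, 5], 3)
print(filled, smoothed)
```
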

  7. Development of quick-response area-averaged void fraction meter

    International Nuclear Information System (INIS)

    Watanabe, Hironori; Iguchi, Tadashi; Kimura, Mamoru; Anoda, Yoshinari

    2000-11-01

    The authors are performing experiments to investigate BWR thermal-hydraulic instability under the coupling of neutronics and thermal-hydraulics. To perform the experiment, it is necessary to measure instantaneously the area-averaged void fraction in a rod bundle under high-temperature/high-pressure gas-liquid two-phase flow conditions. Since no existing void fraction meters were suitable for these requirements, we newly developed a practical void fraction meter. The principle of the meter is based on the electrical conductance changing with void fraction in gas-liquid two-phase flow. In this meter, the metal flow channel wall is used as one electrode and an L-shaped line electrode installed at the center of the flow channel is used as the other electrode. This electrode arrangement makes possible the instantaneous measurement of area-averaged void fraction even inside the metal flow channel. We performed experiments with air/water two-phase flow to clarify the void fraction meter performance. Experimental results indicated that the void fraction was approximated by α = 1 − I/I₀, where α is the void fraction, I is the measured current and I₀ is the current at α = 0. This relation holds over the wide void fraction range of 0∼70%. The difference between α and 1 − I/I₀ was approximately 10% at maximum. The major reasons for the difference are the void distribution over the measurement area and electrical insulation of the center electrode by bubbles. The principle and structure of this void fraction meter are very basic and simple. Therefore, the meter can be applied to various fields of gas-liquid two-phase flow studies. (author)
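
    The measurement relation is a one-liner; the sketch below (illustrative calibration values, not from the paper) just encodes α = 1 − I/I₀:

```python
def void_fraction(current_a, current_full_liquid_a):
    """alpha = 1 - I/I0, where I0 is the current measured with the
    channel full of liquid (alpha = 0); per the abstract the relation
    holds to roughly +/-10% for void fractions up to ~70%."""
    return 1.0 - current_a / current_full_liquid_a

# hypothetical calibration I0 = 50 mA; a 15 mA reading implies ~70% void
print(void_fraction(0.015, 0.050))
```
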

  8. A Divergence Median-based Geometric Detector with A Weighted Averaging Filter

    Science.gov (United States)

    Hua, Xiaoqiang; Cheng, Yongqiang; Li, Yubo; Wang, Hongqiang; Qin, Yuliang

    2018-01-01

    To overcome the performance degradation of the classical fast Fourier transform (FFT)-based constant false alarm rate detector with limited sample data, a divergence median-based geometric detector on the Riemannian manifold of Hermitian positive definite matrices is proposed in this paper. In particular, an autocorrelation matrix is used to model the correlation of the sample data. This modeling avoids the poor Doppler resolution as well as the energy spread of the Doppler filter banks resulting from the FFT. Moreover, a weighted averaging filter, conceived from the philosophy of bilateral filtering in image denoising, is proposed and combined within the geometric detection framework. As the weighted averaging filter acts as clutter suppression, the performance of the geometric detector is improved. Numerical experiments are given to validate the effectiveness of our proposed method.

  9. Aperture averaging and BER for Gaussian beam in underwater oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-03-01

    In an underwater wireless optical communication (UWOC) link, power fluctuations over a finite-sized collecting lens are investigated for a horizontally propagating Gaussian beam wave. The power scintillation index, also known as the irradiance flux variance, of the received irradiance is evaluated in weak oceanic turbulence by using the Rytov method. This lets us further quantify the associated performance indicators, namely the aperture averaging factor and the average bit-error rate (BER). The effects on UWOC link performance of the oceanic turbulence parameters, i.e., the rate of dissipation of kinetic energy per unit mass of fluid, the rate of dissipation of mean-squared temperature, the Kolmogorov microscale and the ratio of temperature to salinity contributions to the refractive index spectrum, as well as of the system parameters, i.e., the receiver aperture diameter, Gaussian source size, laser wavelength and link distance, are investigated.

  10. Bayesian Recovery of Clipped OFDM Signals: A Receiver-based Approach

    KAUST Repository

    Al-Rabah, Abdullatif R.

    2013-05-01

    Recently, orthogonal frequency-division multiplexing (OFDM) has been adopted for high-speed wireless communications due to its robustness against multipath fading. However, one of the fundamental drawbacks of OFDM systems is the high peak-to-average power ratio (PAPR). Several techniques have been proposed for PAPR reduction. Most of these techniques require transmitter-based (pre-compensated) processing. On the other hand, receiver-based alternatives would save power and reduce transmitter complexity. With this in mind, a possible approach is to limit the amplitude of the OFDM signal to a predetermined threshold, which is equivalent to adding a sparse clipping signal; this clipping signal is then estimated at the receiver to recover the original signal. In this work, we propose a Bayesian receiver-based low-complexity clipping signal recovery method for PAPR reduction. The method is able to i) effectively reduce the PAPR via a simple clipping scheme at the transmitter side, ii) use a Bayesian recovery algorithm to reconstruct the clipping signal at the receiver side by measuring part of the subcarriers, iii) perform well in the absence of statistical information about the signal (e.g. clipping level) and the noise (e.g. noise variance), and at the same time iv) remain energy efficient due to its low complexity. Specifically, the proposed recovery technique is implemented in a data-aided fashion: it collects clipping information by measuring reliable data subcarriers, thus making full use of the spectrum for data transmission without the need for tone reservation. The study is extended further to discuss how to improve the recovery of the clipping signal by utilizing features of practical OFDM systems, i.e., oversampling and the presence of multiple receivers. Simulation results demonstrate the superiority of the proposed technique over other recovery algorithms. The overall objective is to show that the receiver-based Bayesian technique is highly
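
    The transmitter side of this scheme, amplitude clipping and the resulting sparse clipping signal, can be sketched as follows (a toy 64-subcarrier QPSK symbol and an invented 1.5× RMS clipping level; the Bayesian recovery step is not reproduced):

```python
import cmath
import math
import random

random.seed(3)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband block, in dB."""
    peak = max(abs(v) ** 2 for v in x)
    avg = sum(abs(v) ** 2 for v in x) / len(x)
    return 10 * math.log10(peak / avg)

def idft(X):
    """Direct O(n^2) inverse DFT: subcarriers -> time-domain samples."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

# one OFDM symbol: 64 random QPSK subcarriers -> time domain
n = 64
X = [complex(random.choice((-1, 1)), random.choice((-1, 1))) for _ in range(n)]
x = idft(X)

# clip the magnitude at 1.5x RMS; the difference c is the sparse
# clipping signal the receiver would try to reconstruct
rms = math.sqrt(sum(abs(v) ** 2 for v in x) / n)
threshold = 1.5 * rms
clipped = [v if abs(v) <= threshold else v / abs(v) * threshold for v in x]
c = [xv - cv for xv, cv in zip(x, clipped)]

print(papr_db(x), papr_db(clipped))
```

    Clipping caps the PAPR at roughly 20·log10(1.5) ≈ 3.5 dB while leaving most samples, and hence most of the clipping signal's support, untouched.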

  11. Performance Analysis of Ultra-Wideband Channel for Short-Range Monopulse Radar at Ka-Band

    Directory of Open Access Journals (Sweden)

    Naohiko Iwakiri

    2012-01-01

    High range resolution is inherently provided by Ka-band ultra-wideband (UWB) vehicular radars. The authors have developed a prototype UWB monopulse radar equipped with a two-element receiving antenna array and reported its measurement results. In this paper, a more detailed verification using these measurements is presented. The measurements were analyzed employing matched filtering and eigendecomposition, and multipath components were then extracted to examine the behavior of the received UWB monopulse signals. Next, conventional direction-finding algorithms based on the narrowband assumption were evaluated using the extracted multipath components, yielding acceptable angle-of-arrival (AOA) estimates from the UWB monopulse signal despite the wide bandwidth of the signals. Performance degradation as a function of the number of received monopulses averaged was also examined in order to design suitable radar waveforms.

  12. Methods and systems for the processing of physiological signals

    International Nuclear Information System (INIS)

    Cosnac, B. de; Gariod, R.; Max, J.; Monge, V.

    1975-01-01

    This note is a general survey of the processing of physiological signals. After an introduction on electrodes and their limitations, the physiological nature of the main signals is briefly recalled. Different methods (signal averaging, spectral analysis, morphological shape analysis) are described through applications in magnetocardiography, electroencephalography, cardiography and electronystagmography. Processing means (stand-alone portable instruments and programmable systems) are described through the example of their application to rheography and to the Plurimat'S general system. In conclusion, the methods of signal processing are dominated by the morphological analysis of curves and by the need for wider use of statistical classification. As for the instruments, microprocessors will appear, but specific operators linked to computers will certainly grow. [fr]
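
    Signal averaging, the first of the methods listed above, is easily demonstrated: averaging N repetitions of a response locked to a trigger reduces uncorrelated noise by roughly √N. A minimal sketch with a synthetic evoked response (all parameters invented):

```python
import math
import random

random.seed(5)

n, sweeps = 256, 64
template = [math.sin(2 * math.pi * i / n) for i in range(n)]  # repeatable response

def noisy_sweep():
    # one acquisition: the response buried in unit-variance noise
    return [s + random.gauss(0.0, 1.0) for s in template]

# coherent (stable) averaging of `sweeps` triggered repetitions
avg = [0.0] * n
for _ in range(sweeps):
    avg = [a + s / sweeps for a, s in zip(avg, noisy_sweep())]

def rms_error(x):
    return math.sqrt(sum((a - t) ** 2 for a, t in zip(x, template)) / n)

single_err = rms_error(noisy_sweep())
avg_err = rms_error(avg)
print(single_err, avg_err)  # residual noise shrinks roughly as 1/sqrt(sweeps)
```
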

  13. Topological signal processing

    CERN Document Server

    Robinson, Michael

    2014-01-01

    Signal processing is the discipline of extracting information from collections of measurements. To be effective, the measurements must be organized and then filtered, detected, or transformed to expose the desired information.  Distortions caused by uncertainty, noise, and clutter degrade the performance of practical signal processing systems. In aggressively uncertain situations, the full truth about an underlying signal cannot be known.  This book develops the theory and practice of signal processing systems for these situations that extract useful, qualitative information using the mathematics of topology -- the study of spaces under continuous transformations.  Since the collection of continuous transformations is large and varied, tools which are topologically-motivated are automatically insensitive to substantial distortion. The target audience comprises practitioners as well as researchers, but the book may also be beneficial for graduate students.

  14. Averaging scheme for atomic resolution off-axis electron holograms.

    Science.gov (United States)

    Niermann, T; Lehmann, M

    2014-08-01

    All micrographs are limited by shot noise, which is intrinsic to the detection process of electrons. For beam-insensitive specimens this limitation can in principle easily be circumvented by prolonged exposure times. However, in the high-resolution regime several instrumental instabilities limit the applicable exposure time. Particularly in the case of off-axis holography, the holograms are highly sensitive to the position and voltage of the electron-optical biprism. We present a novel reconstruction algorithm to average series of off-axis holograms while compensating for specimen drift, biprism drift, drift of the biprism voltage and drift of defocus, all of which can cause problematic changes from exposure to exposure. We show an application of the algorithm that also exploits the possibilities of double-biprism holography, resulting in a high-quality exit-wave reconstruction with 75 pm resolution at a very high signal-to-noise ratio.

  15. Moving Average Filter-Based Phase-Locked Loops: Performance Analysis and Design Guidelines

    DEFF Research Database (Denmark)

    Golestan, Saeed; Ramezani, Malek; Guerrero, Josep M.

    2014-01-01

    this challenge, incorporating moving average filter(s) (MAF) into the PLL structure has been proposed in some recent literature. A MAF is a linear-phase finite impulse response filter which can act as an ideal low-pass filter, if certain conditions hold. The main aim of this paper is to present the control...... design guidelines for a typical MAF-based PLL. The paper starts with the general description of MAFs. The main challenge associated with using the MAFs is then explained, and its possible solutions are discussed. The paper then proceeds with a brief overview of the different MAF-based PLLs. In each case......, the PLL block diagram description is shown, the advantages and limitations are briefly discussed, and the tuning approach (if available) is evaluated. The paper then presents two systematic methods to design the control parameters of a typical MAF-based PLL: one for the case of using a proportional...
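
    The property that makes the MAF attractive as a PLL in-loop filter is its exact nulls: an averaging window of length T has unity gain at dc and zero gain at every multiple of 1/T. A sketch of the ideal magnitude response (the window length below is chosen for the 100 Hz double-frequency ripple of a 50 Hz grid; illustrative only):

```python
import math

def maf_gain(freq_hz, window_s):
    """Magnitude response |sin(pi*f*T) / (pi*f*T)| of a moving-average
    filter with averaging window T (continuous-time idealization)."""
    x = math.pi * freq_hz * window_s
    return 1.0 if x == 0 else abs(math.sin(x) / x)

T = 0.01  # one period of 100 Hz ripple
print(maf_gain(0, T), maf_gain(100, T), maf_gain(200, T), maf_gain(50, T))
```

    The exact nulls at 100 Hz, 200 Hz, ... are what let the MAF behave as an almost ideal low-pass filter for the PLL's disturbance harmonics.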

  16. An averaging battery model for a lead-acid battery operating in an electric car

    Science.gov (United States)

    Bozek, J. M.

    1979-01-01

    A battery model based on time averaging of the current or power is developed and shown to be an effective means of predicting the performance of a lead-acid battery. The effectiveness of this battery model was tested on battery discharge profiles expected during the operation of an electric vehicle following the various SAE J227a driving schedules. The averaging model predicts the performance of a battery that is periodically charged (regenerated) if the regeneration energy is assumed to be converted to retrievable electrochemical energy on a one-to-one basis.

  17. Multiple logistic regression model of signalling practices of drivers on urban highways

    Science.gov (United States)

    Puan, Othman Che; Ibrahim, Muttaka Na'iya; Zakaria, Rozana

    2015-05-01

    Giving a signal is a way of informing other road users, especially conflicting drivers, of a driver's intention to change his or her course of movement. Other users are exposed to hazardous situations and the risk of accidents if a driver who changes course fails to signal as required. This paper describes the application of a logistic regression model to the analysis of drivers' signalling practices on multilane highways based on possible factors affecting the driver's decision, such as the driver's gender, vehicle type, vehicle speed and traffic flow intensity. Data pertaining to the analysis of these factors were collected manually. More than 2000 drivers who performed a lane-changing manoeuvre while driving on two sections of multilane highways were observed. Findings from the study show that a relatively large proportion of drivers failed to give any signal when changing lane. The results of the analysis indicate that although the proportion of drivers who failed to signal prior to a lane-changing manoeuvre is high, the degree of compliance of female drivers is better than that of male drivers. A binary logistic model was developed to represent the probability of a driver providing a signal indication prior to a lane-changing manoeuvre. The model indicates that the driver's gender, the type of vehicle driven, the speed of the vehicle and the traffic volume influence the driver's decision to signal prior to a lane-changing manoeuvre on a multilane urban highway. In terms of vehicle type, about 97% of motorcyclists failed to comply with the signal indication requirement. The proportion of non-compliant drivers under stable traffic flow conditions is much higher than when the flow is relatively heavy. This is consistent with the data, which indicate a high degree of non-compliance when the average speed of the traffic stream is relatively high.
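
    A binary logistic model of this kind can be sketched with synthetic data (the coefficients and factor effects below are invented, not the paper's fitted values): the fit should recover a positive effect for female drivers and a negative effect for speed:

```python
import math
import random

random.seed(2)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# synthetic observations: x = [1 (intercept), female, speed_kmh/100];
# assumed tendency: female drivers and slower traffic signal more often
def observation():
    female = 1.0 if random.random() < 0.4 else 0.0
    speed = random.uniform(40.0, 110.0) / 100.0
    z = 1.2 + 0.8 * female - 2.5 * speed       # assumed "true" model
    y = 1 if random.random() < sigmoid(z) else 0
    return [1.0, female, speed], y

data = [observation() for _ in range(2000)]

# fit by plain gradient ascent on the mean log-likelihood
beta = [0.0, 0.0, 0.0]
for _ in range(200):
    grad = [0.0, 0.0, 0.0]
    for x, y in data:
        err = y - sigmoid(sum(b * xi for b, xi in zip(beta, x)))
        for j in range(3):
            grad[j] += err * x[j]
    beta = [b + g / len(data) for b, g in zip(beta, grad)]

p_female_slow = sigmoid(beta[0] + beta[1] + beta[2] * 0.5)   # 50 km/h
p_male_fast = sigmoid(beta[0] + beta[2] * 1.0)               # 100 km/h
print(beta, p_female_slow, p_male_fast)
```
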

  18. Flow measurements using noise signals of axially displaced thermocouples

    Energy Technology Data Exchange (ETDEWEB)

    Kozma, R.; Hoogenboom, J.E. (Interuniversitair Reactor Inst., Delft (Netherlands))

    1990-01-01

    Determination of the coolant flow rate in the cooling channels of nuclear reactors is an important aspect of core monitoring. It is usually impossible to measure the flow with flowmeters in the individual channels due to lack of space and for safety reasons. An alternative method is based on the analysis of noise signals from the available in-core detectors. In such a noise method, a transit time which characterizes the propagation of thermohydraulic fluctuations (density or temperature fluctuations) in the coolant is determined from the correlation between the noise signals of axially displaced detectors. In this paper, the results of flow measurements using axially displaced thermocouples in the channel wall are presented. The experiments were performed in a simulated MTR-type fuel assembly located in the research reactor HOR of the Interfaculty Reactor Institute, Delft. It was found that the velocities obtained via temperature noise correlation methods are significantly larger than the area-averaged velocity of the single-phase coolant flow. Model calculations show that the observed phenomenon can be explained by effects due to the radial velocity distribution in the channel. (author).
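
    The transit-time estimate underlying this method is the lag that maximizes the cross-correlation of the two thermocouple signals. A sketch with synthetic temperature noise (sampling rate, delay and noise level invented); note that, as the abstract cautions, the velocity of fluctuation propagation obtained this way need not equal the area-averaged flow velocity:

```python
import random

random.seed(4)

fs_hz = 1000.0
true_delay = 37            # samples -> 37 ms transit time
n = 8000

# temperature fluctuation: white noise smoothed to give it correlation
raw = [random.gauss(0.0, 1.0) for _ in range(n + true_delay + 8)]
smooth = [sum(raw[i:i + 8]) / 8 for i in range(n + true_delay)]

upstream = smooth[true_delay:true_delay + n]
downstream = [s + 0.2 * random.gauss(0.0, 1.0) for s in smooth[:n]]
# downstream[i] == upstream[i - true_delay] + sensor noise

def xcorr_peak(a, b, max_lag):
    """Lag maximizing sum_i a[i]*b[i+lag]: the transit-time estimate."""
    best_lag, best = 0, float("-inf")
    for lag in range(max_lag + 1):
        c = sum(a[i] * b[i + lag] for i in range(len(a) - max_lag))
        if c > best:
            best, best_lag = c, lag
    return best_lag

est = xcorr_peak(upstream, downstream, 100)
transit_s = est / fs_hz
print(est, transit_s)
```

    Dividing the axial sensor spacing by the recovered transit time then gives the propagation velocity of the fluctuations.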

  19. Global 21 cm Signal Extraction from Foreground and Instrumental Effects. I. Pattern Recognition Framework for Separation Using Training Sets

    Science.gov (United States)

    Tauscher, Keith; Rapetti, David; Burns, Jack O.; Switzer, Eric

    2018-02-01

    The sky-averaged (global) highly redshifted 21 cm spectrum from neutral hydrogen is expected to appear in the VHF range of ∼20–200 MHz and its spectral shape and strength are determined by the heating properties of the first stars and black holes, by the nature and duration of reionization, and by the presence or absence of exotic physics. Measurements of the global signal would therefore provide us with a wealth of astrophysical and cosmological knowledge. However, the signal has not yet been detected because it must be seen through strong foregrounds weighted by a large beam, instrumental calibration errors, and ionospheric, ground, and radio-frequency-interference effects, which we collectively refer to as “systematics.” Here, we present a signal extraction method for global signal experiments which uses Singular Value Decomposition of “training sets” to produce systematics basis functions specifically suited to each observation. Instead of requiring precise absolute knowledge of the systematics, our method effectively requires precise knowledge of how the systematics can vary. After calculating eigenmodes for the signal and systematics, we perform a weighted least-squares fit of the corresponding coefficients and select the number of modes to include by minimizing an information criterion. We compare the performance of the signal extraction when minimizing various information criteria and find that minimizing the Deviance Information Criterion most consistently yields unbiased fits. The methods used here are built into our widely applicable, publicly available Python package, pylinex, which analytically calculates constraints on signals and systematics from given data, errors, and training sets.
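
    The pipeline of training-set eigenmodes plus a least-squares coefficient fit can be shown in miniature (a deliberately simplified toy, not pylinex: foregrounds varying only in amplitude so a single eigenmode suffices, a Gaussian stand-in for the 21 cm profile, uniform weights, and no information-criterion step):

```python
import math
import random

random.seed(6)

freqs = [40.0 + 2.0 * i for i in range(61)]        # 40-160 MHz band

def foreground(amp, beta=2.5):
    return [amp * (f / 80.0) ** (-beta) for f in freqs]

def signal_shape(center, width):
    return [math.exp(-0.5 * ((f - center) / width) ** 2) for f in freqs]

# training set: foreground curves varying (here) only in amplitude,
# so one eigenmode captures the systematics exactly
training = [foreground(random.uniform(800.0, 1200.0)) for _ in range(200)]

def top_mode(rows, iters=50):
    """Dominant right-singular vector of the training matrix, by power
    iteration on A^T A (a stand-in for a full SVD)."""
    v = [1.0] * len(rows[0])
    for _ in range(iters):
        coeffs = [sum(r[j] * v[j] for j in range(len(v))) for r in rows]
        w = [sum(c * r[j] for c, r in zip(coeffs, rows)) for j in range(len(v))]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

mode = top_mode(training)
template = signal_shape(78.0, 10.0)                # toy absorption profile

# simulated observation: foreground + (negative) signal + noise
true_amp = -0.5
data = [fg + true_amp * s + random.gauss(0.0, 0.05)
        for fg, s in zip(foreground(1000.0), template)]

def fit2(b1, b2, d):
    """Closed-form least squares for two basis vectors."""
    a11 = sum(u * u for u in b1)
    a12 = sum(u * v for u, v in zip(b1, b2))
    a22 = sum(v * v for v in b2)
    r1 = sum(u * z for u, z in zip(b1, d))
    r2 = sum(v * z for v, z in zip(b2, d))
    det = a11 * a22 - a12 * a12
    return (a22 * r1 - a12 * r2) / det, (a11 * r2 - a12 * r1) / det

c_fg, c_sig = fit2(mode, template, data)
print(c_fg, c_sig)
```

    The signal coefficient is recovered despite a foreground four orders of magnitude brighter, because the systematics eigenmode absorbs the foreground exactly.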

  20. Deblurring of class-averaged images in single-particle electron microscopy

    International Nuclear Information System (INIS)

    Park, Wooram; Chirikjian, Gregory S; Madden, Dean R; Rockmore, Daniel N

    2010-01-01

    This paper proposes a method for the deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. An inaccurate class average due to alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate of the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid-body motions of the plane, SE(2), we use the Fourier transform on SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using the Laguerre–Fourier expansions, and both the Hermite and Laguerre–Fourier expansions retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method.

  1. Traffic signal design and simulation for vulnerable road users safety and bus preemption

    International Nuclear Information System (INIS)

    Lo, Shih-Ching; Huang, Hsieh-Chu

    2015-01-01

    Most pedestrian-car accidents at signalized intersections occur because pedestrians cannot cross the intersection safely within the green light. From the pedestrian's viewpoint, there may be two reasons. The first is that some pedestrians, such as the elderly, cannot speed up to cross the intersection in time. The other is that pedestrians do not sense that the signal phase is about to change and that their right-of-way is about to be lost. Developing signal logic to protect pedestrians crossing an intersection is the first purpose of this study. In addition, improving the reliability and reducing the delay of public transportation service is the second purpose. Therefore, bus preemption is also considered in the designed signal logic. In this study, traffic data from the intersection of Chong-Qing North Road and Min-Zu West Road, Taipei, Taiwan, are employed to calibrate and validate the signal logic by simulation. VISSIM 5.20, a microscopic traffic simulation package, is employed to simulate the signal logic. The simulated results show that the signal logic presented in this study can successfully protect pedestrians crossing the intersection. The design of bus preemption can reduce the average bus delay. However, the pedestrian safety and bus preemption signals strongly influence the average delay of cars. Thus, whether to apply the pedestrian safety and bus preemption signal logic to an intersection should be evaluated carefully.

  2. Traffic signal design and simulation for vulnerable road users safety and bus preemption

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Shih-Ching; Huang, Hsieh-Chu [Department of Transportation Technology and Logistics Management, Chung Hua University, No. 707, Sec. 2, WuFu Rd., Hsinchu, 300, Taiwan (China)

    2015-01-22

    Most pedestrian-car accidents at signalized intersections occur because pedestrians cannot cross the intersection safely within the green light. From the pedestrian's viewpoint, there may be two reasons. The first is that some pedestrians, such as the elderly, cannot speed up to cross the intersection in time. The other is that pedestrians do not sense that the signal phase is about to change and that their right-of-way is about to be lost. Developing signal logic to protect pedestrians crossing an intersection is the first purpose of this study. In addition, improving the reliability and reducing the delay of public transportation service is the second purpose. Therefore, bus preemption is also considered in the designed signal logic. In this study, traffic data from the intersection of Chong-Qing North Road and Min-Zu West Road, Taipei, Taiwan, are employed to calibrate and validate the signal logic by simulation. VISSIM 5.20, a microscopic traffic simulation package, is employed to simulate the signal logic. The simulated results show that the signal logic presented in this study can successfully protect pedestrians crossing the intersection. The design of bus preemption can reduce the average bus delay. However, the pedestrian safety and bus preemption signals strongly influence the average delay of cars. Thus, whether to apply the pedestrian safety and bus preemption signal logic to an intersection should be evaluated carefully.

  3. Signal existence verification (SEV) for GPS low received power signal detection using the time-frequency approach.

    Science.gov (United States)

    Jan, Shau-Shiun; Sun, Chih-Cheng

    2010-01-01

    The detection of low received power global positioning system (GPS) signals in the signal acquisition process is an important issue for GPS applications. Mitigating the missed detection of low received power signals is crucial, especially in urban or indoor environments. This paper proposes a signal existence verification (SEV) process to detect and subsequently verify low received power GPS signals. The SEV process is based on the time-frequency representation of the GPS signal, and it can capture the characteristics of the GPS signal in the time-frequency plane to enhance GPS signal acquisition performance. Several simulations and experiments are conducted to show the effectiveness of the proposed method for low received power signal detection. The contribution of this work is that the SEV process is an additional scheme that assists the GPS signal acquisition process in low received power signal detection, without changing the original signal acquisition or tracking algorithms.

  4. Gearbox fault diagnosis based on time-frequency domain synchronous averaging and feature extraction technique

    Science.gov (United States)

    Zhang, Shengli; Tang, Jiong

    2016-04-01

    Gearbox is one of the most vulnerable subsystems in wind turbines. Its health status significantly affects the efficiency and function of the entire system. Vibration-based fault diagnosis methods are widely applied nowadays. However, vibration signals are always contaminated by noise that comes from data acquisition errors, structure geometric errors, operation errors, etc. As a result, it is difficult to identify potential gear failures directly from vibration signals, especially for early-stage faults. This paper utilizes the synchronous averaging technique in the time-frequency domain to remove non-synchronous noise and enhance the fault-related time-frequency features. The enhanced time-frequency information is further employed in gear fault classification and identification through feature extraction algorithms including Kernel Principal Component Analysis (KPCA), Multilinear Principal Component Analysis (MPCA), and Locally Linear Embedding (LLE). Results show that the LLE approach is the most effective at classifying and identifying different gear faults.
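
    The synchronous-averaging idea underlying this paper can be sketched in a few lines: segments spanning one rotation period are averaged, so rotation-synchronous components reinforce while non-synchronous noise cancels. The mesh-tone frequency, period, and noise level below are illustrative assumptions, and this sketch works directly in the time domain, whereas the paper applies averaging in the time-frequency domain:

```python
import math
import random

def synchronous_average(signal, period):
    """Average consecutive segments of length `period` (one per shaft
    revolution): rotation-synchronous components reinforce, while
    non-synchronous noise averages toward zero as 1/sqrt(n_rev)."""
    n_rev = len(signal) // period
    return [sum(signal[r * period + i] for r in range(n_rev)) / n_rev
            for i in range(period)]

# Toy stand-in for a gear vibration: a mesh tone at 5 cycles/revolution
# buried in Gaussian noise (all values here are illustrative).
period, n_rev = 100, 200
rng = random.Random(0)
sig = [math.sin(2 * math.pi * 5 * i / period) + rng.gauss(0.0, 1.0)
       for i in range(period * n_rev)]
avg = synchronous_average(sig, period)
```

    After averaging 200 revolutions, the residual noise amplitude drops by roughly a factor of sqrt(200) ≈ 14, leaving the mesh tone clearly visible.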

  5. Real time pressure signal system for a rotary engine

    Science.gov (United States)

    Rice, W. J. (Inventor)

    1984-01-01

    A real-time IMEP signal which is a composite of those produced in any one chamber of a three-lobed rotary engine is developed by processing the signals of four transducers positioned in a Wankel engine housing such that the rotor overlaps two of the transducers for a brief period during each cycle. During the overlap period of any two transducers, their output is compared and sampled for 10 microseconds per 0.18 degree of rotation by a sampling switch and capacitive circuit. When the switch is closed, the instantaneous difference between the value of the transducer signals is provided while with the switch open the average difference is produced. This combined signal, along with the original signal of the second transducer, is fed through a multiplexer to a pressure output terminal. Timing circuits, controlled by a crank angle encoder on the engine, determine which compared transducer signals are applied to the output terminal and when, as well as the open and closed periods of the switches.

  6. Object detection by correlation coefficients using azimuthally averaged reference projections.

    Science.gov (United States)

    Nicholson, William V

    2004-11-01

    A method of computing correlation coefficients for object detection that takes advantage of azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function and a local correlation coefficient in object detection from simulated images with a range of levels of additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.
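
    The basic ingredients, collapsing a reference to a 1-D radial profile and correlating an image region against the radially symmetric template that profile implies, can be sketched as follows. The Gaussian test blob and the integer-radius binning are illustrative assumptions, not details of the paper's method:

```python
import math

def azimuthal_average(img):
    """Collapse a square image to a 1-D radial profile by averaging
    all pixels that share the same integer radius from the centre."""
    n = len(img)
    c = (n - 1) / 2.0
    sums, counts = {}, {}
    for y in range(n):
        for x in range(n):
            r = int(round(math.hypot(x - c, y - c)))
            sums[r] = sums.get(r, 0.0) + img[y][x]
            counts[r] = counts.get(r, 0) + 1
    return {r: sums[r] / counts[r] for r in sums}

def correlation_vs_radial_profile(img, profile):
    """Pearson correlation between image pixels and the radially
    symmetric template implied by `profile` (a stand-in for an
    azimuthally averaged reference projection)."""
    n = len(img)
    c = (n - 1) / 2.0
    xs, ys = [], []
    for y in range(n):
        for x in range(n):
            r = int(round(math.hypot(x - c, y - c)))
            if r in profile:
                xs.append(img[y][x])
                ys.append(profile[r])
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / math.sqrt(vx * vy)

# A radially symmetric blob correlates strongly with its own profile.
n = 21
blob = [[math.exp(-((x - 10) ** 2 + (y - 10) ** 2) / 20.0)
         for x in range(n)] for y in range(n)]
cc = correlation_vs_radial_profile(blob, azimuthal_average(blob))
```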

  7. Vibration Signal Forecasting on Rotating Machinery by means of Signal Decomposition and Neurofuzzy Modeling

    Directory of Open Access Journals (Sweden)

    Daniel Zurita-Millán

    2016-01-01

    Full Text Available Vibration monitoring plays a key role in industrial machinery reliability since it allows enhancing the performance of the machinery under supervision through the detection of failure modes. Thus, vibration monitoring schemes that give information regarding future condition, that is, prognosis approaches, are of growing interest for the scientific and industrial communities. This work proposes a vibration signal prognosis methodology, applied to a rotating electromechanical system and its associated kinematic chain. The method combines the adaptability of neurofuzzy modeling with a signal decomposition strategy to model the patterns of the vibration signal under different fault scenarios. The model tuning is performed by means of Genetic Algorithms along with a correlation-based interval selection procedure. The performance and effectiveness of the proposed method are validated experimentally with an electromechanical test bench containing a kinematic chain. The results of the study indicate the suitability of the method for vibration forecasting in complex electromechanical systems and their associated kinematic chains.

  8. Biomechanics of the Peacock’s Display: How Feather Structure and Resonance Influence Multimodal Signaling

    Science.gov (United States)

    Dakin, Roslyn; McCrossan, Owen; Hare, James F.; Montgomerie, Robert; Amador Kane, Suzanne

    2016-01-01

    Courtship displays may serve as signals of the quality of motor performance, but little is known about the underlying biomechanics that determines both their signal content and costs. Peacocks (Pavo cristatus) perform a complex, multimodal “train-rattling” display in which they court females by vibrating the iridescent feathers in their elaborate train ornament. Here we study how feather biomechanics influences the performance of this display using a combination of field recordings and laboratory experiments. Using high-speed video, we find that train-rattling peacocks stridulate their tail feathers against the train at 25.6 Hz, on average, generating a broadband, pulsating mechanical sound at that frequency. Laboratory measurements demonstrate that arrays of peacock tail and train feathers have a broad resonant peak in their vibrational spectra at the range of frequencies used for train-rattling during the display, and the motion of the feathers is just as expected for feathers shaking near resonance. This indicates that peacocks are able to drive feather vibrations energetically efficiently over a relatively broad range of frequencies, enabling them to modulate the feather vibration frequency of their displays. Using our field data, we show that peacocks with longer trains use slightly higher vibration frequencies on average, even though longer train feathers are heavier and have lower resonant frequencies. Based on these results, we propose hypotheses for future studies of the function and energetics of this display that ask why its dynamic elements might attract and maintain female attention. Finally, we demonstrate how the mechanical structure of the train feathers affects the peacock’s visual display by allowing the colorful iridescent eyespots, which strongly influence female mate choice, to remain nearly stationary against a dynamic iridescent background. PMID:27119380

  9. Fast Decentralized Averaging via Multi-scale Gossip

    Science.gov (United States)

    Tsianos, Konstantinos I.; Rabbat, Michael G.

    We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has a communication cost of O(n log log n log ε^{-1}) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
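
    Plain single-scale pairwise gossip, the primitive that Multi-scale Gossip builds on, can be sketched as a small simulation. The ring topology, node count, and round budget below are illustrative assumptions, not details of the paper's hierarchical scheme:

```python
import random

def pairwise_gossip(values, edges, n_rounds, seed=0):
    """Randomized gossip: each round picks one edge and replaces both
    endpoint values by their mean. Pairwise averaging preserves the
    global sum, so the only fixed point is the exact average at every
    node. (Multi-scale Gossip layers a hierarchical graph decomposition
    on top of this primitive to reduce the number of exchanges.)"""
    rng = random.Random(seed)
    vals = list(values)
    for _ in range(n_rounds):
        i, j = rng.choice(edges)
        m = (vals[i] + vals[j]) / 2.0
        vals[i] = vals[j] = m
    return vals

# Illustrative setup: 16 nodes on a ring, initial values 0..15.
n = 16
ring = [(i, (i + 1) % n) for i in range(n)]
final = pairwise_gossip(list(range(n)), ring, 50000)
```

    After enough rounds every node holds (approximately) the global average 7.5; on a ring this takes many exchanges, which is exactly the inefficiency the multi-scale decomposition targets.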

  10. Performance improvement of two-dimensional EUV spectroscopy based on high frame rate CCD and signal normalization method

    International Nuclear Information System (INIS)

    Zhang, H.M.; Morita, S.; Ohishi, T.; Goto, M.; Huang, X.L.

    2014-01-01

    In the Large Helical Device (LHD), the performance of two-dimensional (2-D) extreme ultraviolet (EUV) spectroscopy with a wavelength range of 30-650 Å has been improved by installing a high frame rate CCD and applying a signal intensity normalization method. With the upgraded 2-D space-resolved EUV spectrometer, measurement of 2-D impurity emission profiles with high horizontal resolution is possible in high-density NBI discharges. The variation in intensities of EUV emission among a few discharges is significantly reduced by normalizing the signal to the spectral intensity from the EUV_Long spectrometer, which works as an impurity monitor with high time resolution. As a result, high-resolution 2-D intensity distributions have been obtained from CIV (384.176 Å), CV (2×40.27 Å), CVI (2×33.73 Å) and HeII (303.78 Å). (author)

  11. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  12. Influence of Wilbraham-Gibbs Phenomenon on Digital Stochastic Measurement of EEG Signal Over an Interval

    Directory of Open Access Journals (Sweden)

    Sovilj P.

    2014-10-01

    Full Text Available Measurement methods based on the approach named Digital Stochastic Measurement have been introduced, and several prototype and small-series commercial instruments have been developed based on these methods. These methods have been investigated mostly for various types of stationary signals, but also for non-stationary signals. This paper presents, analyzes and discusses digital stochastic measurement of the electroencephalography (EEG) signal in the time domain, emphasizing the influence of the Wilbraham-Gibbs phenomenon. An increase of measurement error related to the Wilbraham-Gibbs phenomenon is found. If the EEG signal is measured over a 20 ms wide measurement interval, the average maximal error relative to the range of the input signal is 16.84 %. If the measurement interval is extended to 2 s, the average maximal error relative to the range of the input signal is significantly lowered, down to 1.37 %. Absolute errors are compared with the error limit recommended by the Organisation Internationale de Métrologie Légale (OIML) and with the quantization steps of advanced EEG instruments with 24-bit A/D conversion.
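
    The Wilbraham-Gibbs overshoot behind this error is easy to reproduce: the Fourier partial sum of a square wave overshoots each jump by roughly 9% of the jump height no matter how many terms are kept; adding terms only pushes the overshoot closer to the jump. The test wave, term counts, and sampling grid below are illustrative:

```python
import math

def square_wave_partial_sum(t, n_terms):
    """Fourier partial sum of a unit square wave:
    (4/pi) * sum_{k=1..n} sin((2k-1) t) / (2k-1)."""
    return (4.0 / math.pi) * sum(math.sin((2 * k - 1) * t) / (2 * k - 1)
                                 for k in range(1, n_terms + 1))

# Scan just to the right of the jump at t = 0: the partial sum peaks
# near (2/pi) * Si(pi) ~= 1.179 instead of 1.0, and the overshoot does
# not shrink as more terms are added.
peak_100 = max(square_wave_partial_sum(i * 1e-4, 100) for i in range(1, 1000))
peak_400 = max(square_wave_partial_sum(i * 1e-4, 400) for i in range(1, 1000))
```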

  13. Numerical and experimental research on pentagonal cross-section of the averaging Pitot tube

    Science.gov (United States)

    Zhang, Jili; Li, Wei; Liang, Ruobing; Zhao, Tianyi; Liu, Yacheng; Liu, Mingsheng

    2017-07-01

    Averaging Pitot tubes have been widely used in many fields because of their simple structure and stable performance. This paper introduces a new shape of the cross-section of an averaging Pitot tube. Firstly, the structure of the averaging Pitot tube and the distribution of pressure taps are given. Then, a mathematical model of the airflow around it is formulated. After that, a series of numerical simulations are carried out to optimize the geometry of the tube. The distribution of the streamline and pressures around the tube are given. To test its performance, a test platform was constructed in accordance with the relevant national standards and is described in this paper. Curves are provided, linking the values of flow coefficient with the values of Reynolds number. With a maximum deviation of only  ±3%, the results of the flow coefficient obtained from the numerical simulations were in agreement with those obtained from experimental methods. The proposed tube has a stable flow coefficient and favorable metrological characteristics.

  14. Improvement of the characterization of ultrasonic data by means of digital signal processing

    International Nuclear Information System (INIS)

    Bieth, M.; Romy, D.; Weigel, D.

    1985-01-01

    The digital signal processing method for averaging using minima, developed by Framatome, improves the signal-to-noise ratio by up to 7 dB during ultrasonic testing of cast stainless steel structures (primary pipes of PWR power plants). Applying digital signal processing under industrial testing conditions requires a fast analog-to-digital converter capable of real-time processing, which has been developed by CGR. (in French)

  15. Assessment of General Public Exposure to LTE signals compared to other Cellular Networks Present in Thessaloniki, Greece.

    Science.gov (United States)

    Gkonis, Fotios; Boursianis, Achilles; Samaras, Theodoros

    2017-07-01

    To assess general public exposure to electromagnetic fields from Long Term Evolution (LTE) base stations, measurements at 10 sites in Thessaloniki, Greece were performed. Results are compared with other mobile cellular networks currently in use. All exposure values satisfy the guidelines for general public exposure of the International Commission on Non-Ionizing Radiation Protection (ICNIRP), as well as the reference levels of the Greek legislation, at all sites. LTE electric field measurements were recorded up to 0.645 V/m. By applying the ICNIRP guidelines, the exposure ratio for all LTE signals is between 2.9 × 10⁻⁵ and 2.8 × 10⁻². From the measurement results it is concluded that the average and maximum power density contributions of LTE downlink signals to the overall cellular network signals are 7.8% and 36.7%, respectively. © The Author 2016. Published by Oxford University Press.
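
    Exposure ratios of this kind come from comparing the measured field against the frequency-dependent reference level; a minimal sketch, assuming the ICNIRP (1998) general-public E-field formula for the 400-2000 MHz band and a hypothetical LTE1800 measurement (the band assignment is an assumption, not taken from the paper):

```python
import math

def icnirp_ref_field(f_mhz):
    """ICNIRP (1998) general-public reference level for the electric
    field in the 400-2000 MHz band: E_ref = 1.375 * sqrt(f) V/m."""
    if not 400.0 <= f_mhz <= 2000.0:
        raise ValueError("formula valid only for 400-2000 MHz")
    return 1.375 * math.sqrt(f_mhz)

def exposure_ratio(e_vm, f_mhz):
    """Fraction of the power-density limit used by a measured field:
    (E / E_ref)^2, since power density scales with the field squared."""
    return (e_vm / icnirp_ref_field(f_mhz)) ** 2

# Hypothetical example: the paper's maximum LTE field of 0.645 V/m,
# assumed here to lie at 1800 MHz.
ratio = exposure_ratio(0.645, 1800.0)
```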

  16. Analysis of Arm Movement Prediction by Using the Electroencephalography Signal

    Directory of Open Access Journals (Sweden)

    Reza Darmakusuma

    2016-04-01

    Full Text Available Various technological approaches have been developed to help people who are unfortunate enough to be afflicted with types of paralysis that limit them in performing their daily life activities independently. One of the proposed technologies is the Brain-Computer Interface (BCI). The BCI system uses electroencephalography (EEG) generated by the subject’s mental activity as input and converts it into commands. Some previous experiments have shown the capability of the BCI system to predict movement intention before the actual movement onset. Such research has predicted movement by discriminating between data in the “rest” condition, where there is no movement intention, and the “pre-movement” condition, where movement intention is detected before actual movement occurs. This experiment, however, was done to analyze a system in which machine learning was applied to data obtained over a continuous time interval, from 3 seconds before the movement was detected until 1 second after the actual movement onset. This experiment shows that the system can discriminate the “pre-movement” and “rest” conditions by using the EEG signal in the 7-30 Hz band, where the Mu and Beta rhythms can be found, with an average True Positive Rate (TPR) of 0.64 ± 0.11 and an average False Positive Rate (FPR) of 0.17 ± 0.08. It also shows that by using EEG signals obtained nearing the movement onset, the system has a higher TPR, or detection rate, in predicting movement intention.

  17. Perceptual learning in Williams syndrome: looking beyond averages.

    Directory of Open Access Journals (Sweden)

    Patricia Gervan

    Full Text Available Williams Syndrome is a genetically determined neurodevelopmental disorder characterized by an uneven cognitive profile and surprisingly large neurobehavioral differences among individuals. Previous studies have already shown different forms of memory deficiencies and learning difficulties in WS. Here we studied the capacity of WS subjects to improve their performance in a basic visual task. We employed a contour integration paradigm that addresses occipital visual function, and analyzed the initial (i.e. baseline and after-learning performance of WS individuals. Instead of pooling the very inhomogeneous results of WS subjects together, we evaluated individual performance by expressing it in terms of the deviation from the average performance of the group of typically developing subjects of similar age. This approach helped us to reveal information about the possible origins of poor performance of WS subjects in contour integration. Although the majority of WS individuals showed both reduced baseline and reduced learning performance, individual analysis also revealed a dissociation between baseline and learning capacity in several WS subjects. In spite of impaired initial contour integration performance, some WS individuals presented learning capacity comparable to learning in the typically developing population, and vice versa, poor learning was also observed in subjects with high initial performance levels. These data indicate a dissociation between factors determining initial performance and perceptual learning.

  18. On the performance of multiuser scheduling with post-examining under non-identical fading

    KAUST Repository

    Gaaloul, Fakhreddine

    2012-06-11

    We investigate the performance of a multiuser downlink access scheme based on a post-selection switch-and-examine algorithm. The studied scheme sequentially switches over users that experience independent and non-identically distributed fading conditions, and selects a single user with an acceptable channel quality as compared to a pre-selected signal-to-noise ratio (SNR) threshold. If none of the users satisfies the target channel quality, the base station (BS) takes advantage of its knowledge of all users' channels and serves the user with the best channel quality among all users. This scheme considerably reduces the feedback load but offers a lower average spectral efficiency (ASE) than the full feedback system with instantaneous best user selection. On the other hand, it improves system performance measures, such as outage probability and average bit error rate (BER), compared to a system based on a standard switching scheme without post-selection. Numerical results for the ASE, average BER, and average feedback load are presented for the cases of outdated and non-outdated rate-adaptive modulation schemes operating over independent and non-identically distributed users.
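
    The switch-and-examine logic with post-selection can be sketched as a small Monte Carlo simulation; the exponential (Rayleigh-fading) SNR model, user count, threshold, and trial count below are illustrative assumptions, not the paper's analytical setup:

```python
import random

def switch_examine_post_select(snrs, threshold):
    """Examine users sequentially and serve the first whose SNR clears
    the threshold; if none does, fall back to the best user overall
    (the post-selection step). Returns (served_snr, users_examined)."""
    for k, s in enumerate(snrs):
        if s >= threshold:
            return s, k + 1
    return max(snrs), len(snrs)

# 8 users with i.i.d. unit-mean exponential SNRs, arbitrary threshold 1.0.
rng = random.Random(1)
n_users, thr = 8, 1.0
trials = [[rng.expovariate(1.0) for _ in range(n_users)] for _ in range(2000)]
results = [switch_examine_post_select(t, thr) for t in trials]
avg_snr = sum(s for s, _ in results) / len(results)
avg_feedback = sum(k for _, k in results) / len(results)
avg_best = sum(max(t) for t in trials) / len(trials)
```

    The simulation reproduces the qualitative trade-off described above: the average feedback load stays well below full feedback (8 probes per slot), at the cost of a lower average served SNR than always selecting the best user.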

  20. Team decision problems with classical and quantum signals.

    Science.gov (United States)

    Brandenburger, Adam; La Mura, Pierfrancesco

    2016-01-13

    We study team decision problems where communication is not possible, but coordination among team members can be realized via signals in a shared environment. We consider a variety of decision problems that differ in what team members know about one another's actions and knowledge. For each type of decision problem, we investigate how different assumptions on the available signals affect team performance. Specifically, we consider the cases of perfectly correlated, i.i.d., and exchangeable classical signals, as well as the case of quantum signals. We find that, whereas in perfect-recall trees (Kuhn 1950 Proc. Natl Acad. Sci. USA 36, 570-576; Kuhn 1953 In Contributions to the theory of games, vol. II (eds H Kuhn, A Tucker), pp. 193-216) no type of signal improves performance, in imperfect-recall trees quantum signals may bring an improvement. Isbell (Isbell 1957 In Contributions to the theory of games, vol. III (eds M Drescher, A Tucker, P Wolfe), pp. 79-96) proved that, in non-Kuhn trees, classical i.i.d. signals may improve performance. We show that further improvement may be possible by use of classical exchangeable or quantum signals. We include an example of the effect of quantum signals in the context of high-frequency trading. © 2015 The Authors.

  1. On-line signal trend identification

    International Nuclear Information System (INIS)

    Tambouratzis, T.; Antonopoulos-Domis, M.

    2004-01-01

    An artificial neural network, based on the self-organizing map, is proposed for on-line signal trend identification. Trends are categorized upon each incoming signal sample as steady-state, increasing or decreasing, and are further classified according to characteristics such as signal shape and rate of change. Tests with model-generated signals illustrate the ability of the self-organizing map to accurately and reliably perform on-line trend identification in terms of both detection and classification. The proposed methodology has been found robust to the presence of white noise.
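
    The three-way categorization task can be illustrated with a simple least-squares-slope baseline; this is a thresholding sketch for comparison, not the paper's self-organizing map, and the dead-band `eps` is an assumed tuning parameter:

```python
def classify_trend(window, eps=0.05):
    """Label a signal window 'increasing', 'decreasing' or
    'steady-state' from the sign of its least-squares slope."""
    n = len(window)
    t_mean = (n - 1) / 2.0
    x_mean = sum(window) / n
    num = sum((t - t_mean) * (x - x_mean) for t, x in enumerate(window))
    den = sum((t - t_mean) ** 2 for t in range(n))
    slope = num / den
    if slope > eps:
        return "increasing"
    if slope < -eps:
        return "decreasing"
    return "steady-state"

# Illustrative windows of 20 samples each.
ramp_up = [0.5 * t for t in range(20)]
flat = [5.0] * 20
```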

  2. A paradoxical signal intensity increase in fatty livers using opposed-phase gradient echo imaging with fat-suppression pulses

    International Nuclear Information System (INIS)

    Mulkern, Robert V.; Voss, Stephan; Loeb Salsberg, Sandra; Krauel, Marta Ramon; Ludwig, David S.

    2008-01-01

    With the increase in obese and overweight children, nonalcoholic fatty liver disease has become more prevalent in the pediatric population. Appreciating subtleties of magnetic resonance (MR) signal intensity behavior from fatty livers under different imaging conditions thus becomes important to pediatric radiologists. We report an initially confusing signal behavior - increased signal from fatty livers when fat-suppression pulses are applied in an opposed-phase gradient echo imaging sequence - and seek to explain the physical mechanisms for this paradoxical signal intensity behavior. Abdominal MR imaging at 3 T with a 3-D volumetric interpolated breath-hold (VIBE) sequence in the opposed-phase condition (TR/TE 3.3/1.3 ms) was performed in five obese boys (14±2 years of age, body mass index >95th percentile for age and sex) with spectroscopically confirmed fatty livers. Two VIBE acquisitions were performed, one with and one without the use of chemical shift selective (CHESS) pulse fat suppression. The ratios of fat-suppressed over non-fat-suppressed signal intensities were assessed in regions-of-interest (ROIs) in five tissues: subcutaneous fat, liver, vertebral marrow, muscle and spleen. The boys had spectroscopically estimated hepatic fat levels between 17% and 48%. CHESS pulse fat suppression decreased subcutaneous fat signals dramatically, by more than 85% within regions of optimal fat suppression. Fatty liver signals, in contrast, were elevated by an average of 87% with CHESS pulse fat suppression. Vertebral marrow signal was also significantly elevated with CHESS pulse fat suppression, while spleen and muscle signals demonstrated only small signal increases on the order of 10%. We demonstrated that CHESS pulse fat suppression actually increases the signal intensity from fatty livers in opposed-phase gradient echo imaging conditions. The increase can be attributed to suppression of one partner of the opposed-phase pair that normally contributes to the opposed-phase signal cancellation.

  4. LMD Based Features for the Automatic Seizure Detection of EEG Signals Using SVM.

    Science.gov (United States)

    Zhang, Tao; Chen, Wanzhong

    2017-08-01

    Achieving the goal of detecting seizure activity automatically using electroencephalogram (EEG) signals is of great importance and significance for the treatment of epileptic seizures. To realize this aim, a newly-developed time-frequency analytical algorithm, namely local mean decomposition (LMD), is employed in the presented study. LMD is able to decompose an arbitrary signal into a series of product functions (PFs). Primarily, the raw EEG signal is decomposed into several PFs, and then the temporal statistical and non-linear features of the first five PFs are calculated. The features of each PF are fed into five classifiers, including back propagation neural network (BPNN), K-nearest neighbor (KNN), linear discriminant analysis (LDA), un-optimized support vector machine (SVM) and SVM optimized by genetic algorithm (GA-SVM), for five classification cases, respectively. Confluent features of all PFs and the raw EEG are further passed into the high-performance GA-SVM for the same classification tasks. Experimental results on the international public Bonn epilepsy EEG dataset show that the average classification accuracy of the presented approach is equal to or higher than 98.10% in all five cases, which indicates the effectiveness of the proposed approach for automated seizure detection.

  5. Programmable delay circuit for sparker signal analysis

    Digital Repository Service at National Institute of Oceanography (India)

    Pathak, D.

    The sparker echo signal had been recorded along with the EPC recorder trigger on audio cassettes in a dual channel analog recorder. The sparker signal in the analog form had to be digitised for further signal processing techniques to be performed...

  6. Signal processing for boiling noise detection

    International Nuclear Information System (INIS)

    Ledwidge, T.J.; Black, J.L.

    1989-01-01

    The present paper deals with investigations of acoustic signals from a boiling experiment performed on the KNS I loop at KfK Karlsruhe. Signals have been analysed in frequency as well as in time domain. Signal characteristics successfully used to detect the boiling process have been found in time domain. (author). 6 refs, figs

  7. Non-linear dynamical signal characterization for prediction of defibrillation success through machine learning

    Directory of Open Access Journals (Sweden)

    Shandilya Sharad

    2012-10-01

    Full Text Available Abstract Background Ventricular Fibrillation (VF) is a common presenting dysrhythmia in the setting of cardiac arrest whose main treatment is defibrillation through direct current countershock to achieve return of spontaneous circulation. However, defibrillation is often unsuccessful and may even lead to the transition of VF to more nefarious rhythms such as asystole or pulseless electrical activity. Multiple methods have been proposed for predicting defibrillation success based on examination of the VF waveform. To date, however, no analytical technique has been widely accepted. We developed a unique approach of computational VF waveform analysis, with and without addition of the signal of end-tidal carbon dioxide (PetCO2), using advanced machine learning algorithms. We compare these results with those obtained using the Amplitude Spectral Area (AMSA) technique. Methods A total of 90 pre-countershock ECG signals were analyzed from an accessible prehospital cardiac arrest database. A unified predictive model, based on signal processing and machine learning, was developed with time-series and dual-tree complex wavelet transform features. Upon selection of correlated variables, a parametrically optimized support vector machine (SVM) model was trained for predicting outcomes on the test sets. Training and testing were performed with nested 10-fold cross validation and 6-10 features for each test fold. Results The integrative model performs real-time, short-term (7.8-second) analysis of the electrocardiogram (ECG). For a total of 90 signals, 34 successful and 56 unsuccessful defibrillations were classified with an average Accuracy and Receiver Operating Characteristic (ROC) Area Under the Curve (AUC) of 82.2% and 85%, respectively. Incorporation of the end-tidal carbon dioxide signal boosted Accuracy and ROC AUC to 83.3% and 93.8%, respectively, for a smaller dataset containing 48 signals. VF analysis using AMSA resulted in accuracy and ROC AUC of 64
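
    The AMSA baseline the paper compares against weights each spectral amplitude by its frequency and sums over a band of the VF spectrum; a minimal sketch using a plain DFT, where the 4-48 Hz band edges, sampling rate, and test tones are illustrative assumptions commonly seen in the AMSA literature rather than details from this abstract:

```python
import math

def amsa(signal, fs, f_lo=4.0, f_hi=48.0):
    """Amplitude Spectral Area: sum of spectral amplitude times
    frequency over [f_lo, f_hi] Hz, computed with a plain O(n^2) DFT."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                     for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                     for t in range(n))
            amp = 2.0 * math.hypot(re, im) / n
            total += amp * f
    return total

# Two equal-amplitude test tones: AMSA weights by frequency, so the
# 10 Hz tone scores about twice the 5 Hz tone.
fs, n = 250, 500
tone = lambda f: [math.sin(2 * math.pi * f * t / fs) for t in range(n)]
a10, a5 = amsa(tone(10.0), fs), amsa(tone(5.0), fs)
```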

  8. Time-averaged molluscan death assemblages: Palimpsests of richness, snapshots of abundance

    Science.gov (United States)

    Kidwell, Susan M.

    2002-09-01

    Field tests that compare living communities to associated dead remains are the primary means of estimating the reliability of biological information in the fossil record; such tests also provide insights into the dynamics of skeletal accumulation. Contrary to expectations, molluscan death assemblages capture a strong signal of living species' rank-order abundances. This finding, combined with independent evidence for exponential postmortem destruction of dead cohorts, argues that, although the species richness of a death assemblage may be a time-averaged palimpsest of the habitat (molluscan death assemblages contain, on average, ˜25% more species than any single census of the local live community, after sample-size standardization), species' relative-abundance data from the same assemblage probably constitute a much higher acuity record dominated by the most recent dead cohorts (e.g., from the past few hundred years or so, rather than the several thousand years recorded by the total assemblage and usually taken as the acuity of species-richness information). The pervasive excess species richness of molluscan death assemblages requires further analysis and modeling to discriminate among possible sources. However, time averaging alone cannot be responsible unless rare species (species with low rates of dead-shell production) are collectively more durable (have longer taphonomic half-lives) than abundant species. Species richness and abundance data thus appear to present fundamentally different taphonomic qualities for paleobiological analysis. Relative-abundance information is more snapshot-like and thus taphonomically more straightforward than expected, especially compared to the complex origins of dead-species richness.

  9. Multimodal signalling in estrildid finches

    DEFF Research Database (Denmark)

    Gomes, A. C. R.; Funghi, C.; Soma, M.

    2017-01-01

    Sexual traits (e.g. visual ornaments, acoustic signals, courtship behaviour) are often displayed together as multimodal signals. Some hypotheses predict joint evolution of different sexual signals (e.g. to increase the efficiency of communication) or that different signals trade off with each other...... (e.g. due to limited resources). Alternatively, multiple signals may evolve independently for different functions, or to communicate different information (multiple message hypothesis). We evaluated these hypotheses with a comparative study in the family Estrildidae, one of the largest songbird...... compromise, but generally courtship dance also evolved independently from other signals. Instead of correlated evolution, we found that song, dance and colour are each related to different socio-ecological traits. Song complexity evolved together with ecological generalism, song performance with investment...

  10. Indirect MRI of 17O-labeled water using steady-state sequences: Signal simulation and preclinical experiment.

    Science.gov (United States)

    Kudo, Kohsuke; Harada, Taisuke; Kameda, Hiroyuki; Uwano, Ikuko; Yamashita, Fumio; Higuchi, Satomi; Yoshioka, Kunihiro; Sasaki, Makoto

    2018-05-01

    Few studies have been reported for T2-weighted indirect 17O imaging. To evaluate the feasibility of steady-state sequences for indirect 17O brain imaging, signal simulation, phantom measurements, and prospective animal experiments were performed in accordance with the institutional guidelines for animal experiments. Signal simulations of balanced steady-state free precession (bSSFP) were performed for 17O concentrations ranging from 0.037-1.600%. Phantom measurements with 17O water concentrations ranging from 0.037-1.566% were also conducted. Six healthy beagle dogs were scanned with intravenous administration of 20% 17O-labeled water (1 mL/kg). Dynamic 3D-bSSFP scans were performed at 3T MRI. 17O-labeled water was injected 60 seconds after the scan start, and the total scan duration was 5 minutes. Based on the results of signal simulation and phantom measurement, signal changes in the beagle dogs were measured and converted into 17O concentrations. The 17O concentrations were averaged for every 15 seconds and compared to the baseline (30-45 sec) with Dunnett's multiple comparison tests. Signal simulation revealed that the relationship between 17O concentration and the natural logarithm of the relative signal was linear. The intraclass correlation coefficient between relative signals in phantom measurement and signal simulations was 0.974. In the animal experiments, significant increases in 17O concentration (P < 0.05) were observed after administration of 17O. At the end of scanning, mean respective 17O concentrations of 0.084 ± 0.026%, 0.117 ± 0.038%, 0.082 ± 0.037%, and 0.049 ± 0.004% were noted for the cerebral cortex, cerebellar cortex, cerebral white matter, and ventricle. Dynamic steady-state sequences were feasible for indirect 17O imaging, and absolute quantification was possible. This method can be applied for the measurement of permeability and blood flow in the brain, and for kinetic analysis of cerebrospinal fluid. Evidence Level: 2. Technical Efficacy: Stage 1. J. Magn. Reson
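The conversion step described above (17O concentration linear in the natural logarithm of the relative signal) can be sketched as a simple calibration-and-invert routine. The calibration response below is a made-up exponential, not the study's measured bSSFP curve; only the linear-in-log structure is taken from the abstract.

```python
import numpy as np

# Hypothetical calibration: relative signal decays exponentially with 17O
# concentration, so concentration is linear in ln(relative signal).
conc = np.array([0.037, 0.1, 0.4, 0.8, 1.6])      # % 17O in phantoms
rel_signal = np.exp(-2.0 * (conc - 0.037))        # assumed signal response

# Fit concentration = a * ln(S) + b on the calibration points.
a, b = np.polyfit(np.log(rel_signal), conc, 1)

def signal_to_conc(relative_signal):
    """Map a measured relative signal to an absolute 17O concentration."""
    return a * np.log(relative_signal) + b

# A signal generated at 0.5% concentration maps back to 0.5%.
print(round(signal_to_conc(np.exp(-2.0 * (0.5 - 0.037))), 3))  # → 0.5
```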

  11. Dynamics in atomic signaling games

    KAUST Repository

    Fox, Michael J.

    2015-04-08

    We study an atomic signaling game under stochastic evolutionary dynamics. There are a finite number of players who repeatedly update from a finite number of available languages/signaling strategies. Players imitate the most fit agents with high probability or mutate with low probability. We analyze the long-run distribution of states and show that, for sufficiently small mutation probability, its support is limited to efficient communication systems. We find that this behavior is insensitive to the particular choice of evolutionary dynamic, a property that is due to the game having a potential structure with a potential function corresponding to average fitness. Consequently, the model supports conclusions similar to those found in the literature on language competition. That is, we show that efficient languages eventually predominate the society while reproducing the empirical phenomenon of linguistic drift. The emergence of efficiency in the atomic case can be contrasted with results for non-atomic signaling games that establish the non-negligible possibility of convergence, under replicator dynamics, to states of unbounded efficiency loss.

  12. Dynamics in atomic signaling games

    KAUST Repository

    Fox, Michael J.; Touri, Behrouz; Shamma, Jeff S.

    2015-01-01

    We study an atomic signaling game under stochastic evolutionary dynamics. There are a finite number of players who repeatedly update from a finite number of available languages/signaling strategies. Players imitate the most fit agents with high probability or mutate with low probability. We analyze the long-run distribution of states and show that, for sufficiently small mutation probability, its support is limited to efficient communication systems. We find that this behavior is insensitive to the particular choice of evolutionary dynamic, a property that is due to the game having a potential structure with a potential function corresponding to average fitness. Consequently, the model supports conclusions similar to those found in the literature on language competition. That is, we show that efficient languages eventually predominate the society while reproducing the empirical phenomenon of linguistic drift. The emergence of efficiency in the atomic case can be contrasted with results for non-atomic signaling games that establish the non-negligible possibility of convergence, under replicator dynamics, to states of unbounded efficiency loss.
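A toy imitate-or-mutate dynamic in the spirit of the model above can illustrate how an efficient language comes to predominate. The payoffs, population size, and update rule here are illustrative assumptions, not the paper's exact specification: each agent's fitness is its language's communication efficiency scaled by how widely that language is shared.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_langs, mu, steps = 60, 3, 0.01, 4000
efficiency = np.array([0.2, 0.6, 1.0])        # language 2 is most efficient
state = np.repeat(np.arange(n_langs), n_agents // n_langs)   # equal start

for _ in range(steps):
    counts = np.bincount(state, minlength=n_langs)
    fitness = efficiency[state] * counts[state] / n_agents
    i = rng.integers(n_agents)                # one agent revises per step
    if rng.random() < mu:
        state[i] = rng.integers(n_langs)      # rare mutation
    else:
        state[i] = state[np.argmax(fitness)]  # imitate the fittest agent

share_efficient = np.mean(state == 2)
print(share_efficient > 0.9)   # the efficient language predominates
```

Mutations keep reintroducing other languages (the "linguistic drift" of the abstract), but imitation of the fittest agent repeatedly pulls the population back to the efficient communication system.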

  13. Transmission of Voice Signal: BER Performance Analysis of Different FEC Schemes Based OFDM System over Various Channels

    OpenAIRE

    Rashed, Md. Golam; Kabir, M. Hasnat; Reza, Md. Selim; Islam, Md. Matiqul; Shams, Rifat Ara; Masum, Saleh; Ullah, Sheikh Enayet

    2012-01-01

    In this paper, we investigate the impact of Forward Error Correction (FEC) codes, namely Cyclic Redundancy Code and Convolution Code, on the performance of an OFDM wireless communication system for speech signal transmission over both AWGN and fading (Rayleigh and Rician) channels in terms of bit error probability. The simulation has been done in conjunction with QPSK digital modulation and compared with uncoded results. In the fading channels, it is found via computer simulation that...
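The kind of BER measurement described above can be sketched with a minimal Monte-Carlo simulation for the simplest case only: uncoded QPSK over AWGN (no OFDM, no FEC, no fading). The simulated bit error rate should track the standard theoretical curve; all parameters below are arbitrary choices for illustration.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(42)
n_bits = 200_000
ebn0_db = 6.0

bits = rng.integers(0, 2, n_bits)
# Gray-mapped QPSK: one bit on I, one on Q, unit symbol energy.
symbols = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)

ebn0 = 10 ** (ebn0_db / 10)
# Es = 1 and 2 bits/symbol, so Eb = 1/2, N0 = Eb/ebn0, noise var/dim = N0/2.
sigma = np.sqrt(1 / (4 * ebn0))
rx = symbols + sigma * (rng.standard_normal(symbols.size)
                        + 1j * rng.standard_normal(symbols.size))

# Hard-decision demapping: sign of I and Q recover the two bits.
bits_hat = np.empty(n_bits, dtype=int)
bits_hat[0::2] = (rx.real > 0).astype(int)
bits_hat[1::2] = (rx.imag > 0).astype(int)

ber = np.mean(bits != bits_hat)
ber_theory = 0.5 * erfc(sqrt(ebn0))   # per-bit QPSK BER in AWGN
print(ber, ber_theory)
```

At 6 dB Eb/N0 both values come out near 2×10⁻³; adding FEC and fading channels on top of this skeleton is what the paper's comparison amounts to.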

  14. Signal and image processing algorithm performance in a virtual and elastic computing environment

    Science.gov (United States)

    Bennett, Kelly W.; Robertson, James

    2013-05-01

    The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development. The resulting large amount of data, and its associated high-performance computing needs, challenges existing computing infrastructures. Purchasing computer power as a commodity using a cloud service offers low-cost, pay-as-you-go pricing models, scalability, and elasticity that may provide solutions for developing and optimizing algorithms without having to procure additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results will provide performance comparisons with existing infrastructure. A discussion on using cloud computing with government data will address the best security practices that exist within cloud services, such as AWS.

  15. A Front End for Multipetawatt Lasers Based on a High-Energy, High-Average-Power Optical Parametric Chirped-Pulse Amplifier

    International Nuclear Information System (INIS)

    Bagnoud, V.

    2004-01-01

    We report on a high-energy, high-average-power optical parametric chirped-pulse amplifier developed as the front end for the OMEGA EP laser. The amplifier provides a gain larger than 10⁹ in two stages, leading to a total energy of 400 mJ with a pump-to-signal conversion efficiency higher than 25%

  16. Signal Processing

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Signal processing techniques, extensively used nowadays to maximize the performance of audio and video equipment, have been a key part in the design of hardware and software for high energy physics detectors since pioneering applications in the UA1 experiment at CERN in 1979

  17. Measurement of transient two-phase flow velocity using statistical signal analysis of impedance probe signals

    International Nuclear Information System (INIS)

    Leavell, W.H.; Mullens, J.A.

    1981-01-01

    A computational algorithm has been developed to measure transient phase-interface velocity in two-phase, steam-water systems. The algorithm will be used to measure the transient velocity of the steam-water mixture during simulated PWR reflood experiments. Utilizing signals produced by two spatially separated impedance probes immersed in a two-phase mixture, the algorithm computes the average transit time of mixture fluctuations moving between the two probes. This transit time is computed by first measuring the phase shift between the two probe signals after transformation to the frequency domain, and then computing the phase-shift slope by a weighted least-squares fitting technique. Our algorithm, which has been tested with both simulated and real data, is able to accurately track velocity transients as fast as 4 m/s/s
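The phase-slope estimate described above can be sketched as follows: a pure delay between the two probe signals appears as a linear phase ramp in their cross-spectrum, and a weighted least-squares fit of phase versus frequency recovers the transit time. Weighting by cross-spectral magnitude and the low-frequency cutoff are assumptions here; the report's exact weighting scheme is not given in the abstract.

```python
import numpy as np

def transit_time(x, y, fs):
    """Delay (s) of y relative to x, from the cross-spectrum phase slope."""
    X = np.fft.rfft(x - np.mean(x))
    Y = np.fft.rfft(y - np.mean(y))
    cross = X * np.conj(Y)                     # phase = +2*pi*f*delay
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    keep = (freqs > 0) & (freqs < 3.0)         # stay below phase-wrap limit
    phase = np.angle(cross[keep])
    w = np.abs(cross[keep])                    # magnitude-weighted fit
    slope = np.sum(w * freqs[keep] * phase) / np.sum(w * freqs[keep] ** 2)
    return slope / (2 * np.pi)

fs = 200.0
t = np.arange(0, 20, 1.0 / fs)
rng = np.random.default_rng(3)
# Band-limited random fluctuations, as from a bubbly two-phase flow.
fluct = np.convolve(rng.standard_normal(t.size), np.ones(40) / 40, mode="same")
x = fluct
y = np.roll(fluct, 30)                         # true delay: 30/fs = 0.15 s
print(transit_time(x, y, fs))   # → 0.15 (to numerical precision)
```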

  18. Electronic devices for analog signal processing

    CERN Document Server

    Rybin, Yu K

    2012-01-01

    Electronic Devices for Analog Signal Processing is intended for engineers and postgraduates and considers electronic devices applied to process analog signals in instrument making, automation, measurements, and other branches of technology. They perform various transformations of electrical signals: scaling, integration, logarithming, etc. The need for their deeper study is caused, on the one hand, by the extension of the forms of the input signal and the increasing accuracy and performance of such devices, and on the other hand, by new devices that constantly emerge and are already widely used in practice but about which no information is written in books on electronics. The basic approach to presenting the material in Electronic Devices for Analog Signal Processing can be formulated as follows: study with help from self-education. The book is divided into seven chapters, each containing theoretical material, examples of practical problems, questions, and tests. The most difficult questions are marked by a diamon...

  19. MAMAP – a new spectrometer system for column-averaged methane and carbon dioxide observations from aircraft: instrument description and performance analysis

    Directory of Open Access Journals (Sweden)

    K. Gerilowski

    2011-02-01

    Full Text Available Carbon dioxide (CO2) and methane (CH4) are the two most important anthropogenic greenhouse gases. CH4 is furthermore one of the most potent present and future contributors to global warming because of its large global warming potential (GWP). Our knowledge of CH4 and CO2 source strengths is based primarily on bottom-up scaling of sparse in-situ local point measurements of emissions and up-scaling of emission factor estimates, or on top-down modeling incorporating data from surface networks and, more recently for CH4, also data from low-spatial-resolution satellite observations. There is a need to measure and retrieve the dry columns of CO2 and CH4 with high spatial resolution and spatial coverage. In order to fill this gap, a new passive airborne 2-channel grating spectrometer instrument for remote sensing of small-scale and mesoscale column-averaged CH4 and CO2 observations has been developed. This Methane Airborne MAPper (MAMAP) instrument measures reflected and scattered solar radiation in the short-wave infrared (SWIR) and near-infrared (NIR) parts of the electromagnetic spectrum at moderate spectral resolution. The SWIR channel yields measurements of atmospheric absorption bands of CH4 and CO2 in the spectral range between 1.59 and 1.69 μm at a spectral resolution of 0.82 nm. The NIR channel around 0.76 μm measures the atmospheric O2-A-band absorption with a resolution of 0.46 nm. MAMAP has been designed for flexible operation aboard a variety of airborne platforms. The instrument design and the performance of the SWIR channel, together with some results from on-ground and in-flight engineering tests, are presented. The SWIR channel performance has been analyzed using a retrieval algorithm applied to the nadir-measured spectra. Dry air column-averaged mole fractions are obtained from SWIR

  20. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    Science.gov (United States)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods: the simple arithmetic mean (SAM), Akaike information criterion averaging (AICA), Bates-Granger averaging (BGA), Bayes information criterion averaging (BICA), Bayesian model averaging (BMA), Granger-Ramanathan averaging variants A, B and C (GRA, GRB and GRC), and averaging by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighting method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighting methods perform better than the individual members. Model averages from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan averaging variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
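The recommended GRC scheme can be sketched on synthetic data. In the Granger-Ramanathan family, variant C is usually defined as unconstrained ordinary least squares of observations on the member simulations with an intercept; weights are fitted on a calibration period and then applied unchanged in validation. The "observed" flow and the three imperfect members below are made up for illustration.

```python
import numpy as np

def grc_weights(sims, obs):
    """GRC: OLS with intercept. sims: (n_time, n_models); obs: (n_time,)."""
    X = np.column_stack([np.ones(len(obs)), sims])   # intercept + members
    coef, *_ = np.linalg.lstsq(X, obs, rcond=None)
    return coef                                       # [intercept, w_1..w_m]

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

rng = np.random.default_rng(7)
t = np.arange(400)
truth = 10 + 5 * np.sin(2 * np.pi * t / 50)          # synthetic observed flow
# Three imperfect "models": biased, mis-scaled, noisy.
sims = np.column_stack([truth * 0.8 + 2 + rng.normal(0, 1.0, t.size),
                        truth * 1.2 - 3 + rng.normal(0, 1.5, t.size),
                        truth + rng.normal(0, 2.0, t.size)])

cal, val = slice(0, 200), slice(200, 400)
coef = grc_weights(sims[cal], truth[cal])             # fit on calibration
blend = coef[0] + sims[val] @ coef[1:]                # apply in validation

nse_blend = nse(blend, truth[val])
nse_best = max(nse(sims[val, j], truth[val]) for j in range(3))
print(nse_blend > nse_best)   # the weighted average beats the best member
```

Because OLS both corrects each member's bias/scale and down-weights noisy members, the blend's validation NSE exceeds that of any single model, mirroring the study's finding.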

  1. Predicting the performance of a power amplifier using large-signal circuit simulations of an AlGaN/GaN HFET model

    Science.gov (United States)

    Bilbro, Griff L.; Hou, Danqiong; Yin, Hong; Trew, Robert J.

    2009-02-01

    We have quantitatively modeled the conduction current and charge storage of an HFET in terms of its physical dimensions and material properties. For DC or small-signal RF operation, no adjustable parameters are necessary to predict the terminal characteristics of the device. Linear performance measures such as small-signal gain and input admittance can be predicted directly from the geometric structure and material properties assumed for the device design. We have validated our model at low frequency against experimental I-V measurements and against two-dimensional device simulations. We discuss our recent extension of our model to include a larger class of electron velocity-field curves. We also discuss the recent reformulation of our model to facilitate its implementation in commercial large-signal high-frequency circuit simulators. Large-signal RF operation is more complex. First, the highest CW microwave power is fundamentally bounded by a brief, reversible channel breakdown in each RF cycle. Second, the highest experimental measurements of efficiency, power, or linearity always require harmonic load pull and possibly also harmonic source pull. Presently, our model accounts for these facts with an adjustable breakdown voltage and with adjustable load and source impedances for the fundamental frequency and its harmonics. This has allowed us to validate our model for large-signal RF conditions by simultaneously fitting experimental measurements of output power, gain, and power-added efficiency of real devices. We show that the resulting model can be used to compare alternative device designs in terms of their large-signal performance, such as their output power at 1 dB gain compression or their third-order intercept points. In addition, the model provides insight into new device physics features enabled by the unprecedented current and voltage levels of AlGaN/GaN HFETs, including non-ohmic resistance in the source access regions and partial depletion of

  2. A virtual pebble game to ensemble average graph rigidity.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies are sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most

  3. Performance of a novel multiple-signal luminescence sediment tracing method

    Science.gov (United States)

    Reimann, Tony

    2014-05-01

    Optically Stimulated Luminescence (OSL) is commonly used for dating sediments. Luminescence signals build up due to exposure of mineral grains to natural ionizing radiation, and are reset when these grains are exposed to (sun)light during sediment transport and deposition. Generally, luminescence signals can be read in two ways, potentially providing information on the burial history (dating) or the transport history (sediment tracing) of mineral grains. In this study we use a novel luminescence measurement procedure (Reimann et al., submitted) that simultaneously monitors six different luminescence signals from the same sub-sample (aliquot) to infer the transport history of sand grains. Daylight exposure experiments reveal that each of these six signals resets (bleaches) at a different rate, thus allowing the bleaching history of the sediment to be traced in six different observation windows. To test the feasibility of luminescence sediment tracing in shallow-marine coastal settings we took eight sediment samples from the pilot mega-nourishment Zandmotor in Kijkduin (South-Holland). This site provides relatively controlled conditions, as the morphological evolution of this nourishment is densely monitored (Stive et al., 2013). After sampling the original nourishment source we took samples along the seaward-facing contour of the spit that was formed from August 2011 (start of nourishment) to June 2012 (sampling). It is presumed that these samples originate from the source and were transported and deposited within the first year after construction. The measured luminescence of a sediment sample was interpolated onto the daylight bleaching curve of each signal to assign an Equivalent Exposure Time (EET) to the sample. The EET is a quantitative measure of the full daylight equivalent a sample was exposed to during sediment transport, i.e. the higher the EET, the longer the sample has been transported or the more efficiently it has been exposed to daylight during sediment
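The EET interpolation step described above can be illustrated with a toy model: each signal bleaches at its own rate under daylight, and a sample's measured residual signal is interpolated onto the laboratory bleaching curve to read off the equivalent exposure time. The exponential decay form, the two signal names, and the rates below are illustrative assumptions, not the study's calibration.

```python
import numpy as np

exposure_h = np.linspace(0, 100, 1001)               # daylight exposure (hours)
rates = {"fast-bleaching signal": 0.50,              # per-hour bleaching rates
         "slow-bleaching signal": 0.05}

def eet(measured_residual, rate):
    """Equivalent Exposure Time from a normalized residual luminescence."""
    curve = np.exp(-rate * exposure_h)               # normalized bleaching curve
    # np.interp needs increasing sample points, so reverse the decaying curve.
    return float(np.interp(measured_residual, curve[::-1], exposure_h[::-1]))

true_exposure = 12.0                                 # hours of daylight "seen"
for name, k in rates.items():
    residual = np.exp(-k * true_exposure)            # what we would measure
    print(name, round(eet(residual, k), 1))          # both signals agree: 12.0
```

Agreement (or disagreement) of EETs across the six observation windows is what carries the transport-history information.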

  4. Novel Signal Noise Reduction Method through Cluster Analysis, Applied to Photoplethysmography.

    Science.gov (United States)

    Waugh, William; Allen, John; Wightman, James; Sims, Andrew J; Beale, Thomas A W

    2018-01-01

    Physiological signals can often become contaminated by noise from a variety of origins. In this paper, an algorithm is described for the reduction of sporadic noise from a continuous periodic signal. The design can be used where a sample of a periodic signal is required, for example, when an average pulse is needed for pulse wave analysis and characterization. The algorithm is based on cluster analysis for selecting similar repetitions or pulses from a periodic signal. This method selects individual pulses without noise, returns a clean pulse signal, and terminates when a sufficiently clean and representative signal is received. The algorithm is designed to be sufficiently compact to be implemented on a microcontroller embedded within a medical device. It has been validated through the removal of noise from an exemplar photoplethysmography (PPG) signal, showing increasing benefit as the noise contamination of the signal increases. The algorithm design is generalised to be applicable for a wide range of physiological (physical) signals.
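A minimal sketch of the cluster-selection idea above (not the paper's exact algorithm): slice a periodic signal into beats, group them by correlation with a robust template, keep the large "clean" cluster, and average it into one representative pulse. The pulse shape, noise levels, and correlation threshold are assumptions for illustration.

```python
import numpy as np

def clean_average_pulse(signal, period, corr_threshold=0.9):
    """Average the cluster of mutually similar beats; reject outliers."""
    n_pulses = len(signal) // period
    pulses = signal[:n_pulses * period].reshape(n_pulses, period)
    template = np.median(pulses, axis=0)       # robust reference beat
    corr = np.array([np.corrcoef(p, template)[0, 1] for p in pulses])
    keep = corr > corr_threshold               # the "clean" cluster
    return pulses[keep].mean(axis=0), keep

fs, period = 100, 100                          # 1-s beats at 100 Hz
t = np.linspace(0, 1, period, endpoint=False)
beat = np.sin(np.pi * t) ** 3                  # stylized PPG-like pulse
rng = np.random.default_rng(5)
pulses = np.tile(beat, (20, 1)) + 0.02 * rng.standard_normal((20, period))
pulses[3] += 0.8 * rng.standard_normal(period)     # two beats hit by
pulses[11] += 0.8 * rng.standard_normal(period)    # motion-artifact noise

avg, keep = clean_average_pulse(pulses.ravel(), period)
print(keep.sum())   # 18 of 20 beats kept; both corrupted beats rejected
```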

  5. On the performance of diagonal lattice space-time codes for the quasi-static MIMO channel

    KAUST Repository

    Abediseid, Walid

    2013-06-01

    There has been tremendous work done on designing space-time codes for the quasi-static multiple-input multiple-output (MIMO) channel. All coding designs to date focus on either high performance, high rates, or low-complexity encoding and decoding, or target a combination of these criteria. In this paper, we analyze in detail the performance of diagonal lattice space-time codes under lattice decoding. We present both upper and lower bounds on the average error probability. We derive a new closed-form expression for the lower bound using the so-called sphere-packing bound. This bound represents the ultimate performance limit a diagonal lattice space-time code can achieve at any signal-to-noise ratio (SNR). The upper bound is derived using the union bound and demonstrates how the average error probability can be minimized by maximizing the minimum product distance of the code. © 2013 IEEE.

  6. Evaluation of the dark signal performance of different SiPM-technologies under irradiation with cold neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Durini, Daniel, E-mail: d.durini@fz-juelich.de [Central Institute of Engineering, Electronics and Analytics ZEA-2 – Electronic Systems, Forschungszentrum Jülich GmbH, D-52425 Jülich (Germany); Degenhardt, Carsten; Rongen, Heinz [Central Institute of Engineering, Electronics and Analytics ZEA-2 – Electronic Systems, Forschungszentrum Jülich GmbH, D-52425 Jülich (Germany); Feoktystov, Artem [Jülich Centre for Neutron Science (JCNS) at Heinz Maier-Leibnitz Zentrum (MLZ), Forschungszentrum Jülich GmbH, Lichtenbergstr. 1, D-85748 Garching (Germany); Schlösser, Mario; Palomino-Razo, Alejandro [Central Institute of Engineering, Electronics and Analytics ZEA-2 – Electronic Systems, Forschungszentrum Jülich GmbH, D-52425 Jülich (Germany); Frielinghaus, Henrich [Jülich Centre for Neutron Science (JCNS) at Heinz Maier-Leibnitz Zentrum (MLZ), Forschungszentrum Jülich GmbH, Lichtenbergstr. 1, D-85748 Garching (Germany); Waasen, Stefan van [Central Institute of Engineering, Electronics and Analytics ZEA-2 – Electronic Systems, Forschungszentrum Jülich GmbH, D-52425 Jülich (Germany)

    2016-11-01

    In this paper we report the results of the assessment of changes in the dark signal delivered by three silicon photomultiplier (SiPM) detector arrays, fabricated by three different manufacturers, when irradiated with cold neutrons (wavelength λn = 5 Å, or neutron energy En = 3.27 meV) up to a neutron dose of 6×10¹² n/cm². The dark signals as well as the breakdown voltages (Vbr) of the SiPM detectors were monitored during the irradiation. The system was characterized at room temperature. The analog SiPM detectors, with and without a 1 mm thick cerium-doped ⁶Li-glass scintillator material located in front of them, were operated using a bias voltage recommended by the respective manufacturer for proper detector performance. Iout-Vbias measurements, used to determine the breakdown voltage of the devices, were repeated every 30 s during the first hour and every 300 s during the rest of the irradiation time. The digital SiPM detectors were held at the advised bias voltage above the respective breakdown voltage, with dark count mappings repeated every 4 min. The measurements were performed on the KWS-1 instrument of the Heinz Maier-Leibnitz Zentrum (MLZ) in Garching, Germany. The two analog and one digital SiPM detector modules under investigation were respectively fabricated by SensL (Ireland), Hamamatsu Photonics (Japan), and Philips Digital Photon Counting (Germany).

  7. The use of difference spectra with a filtered rolling average background in mobile gamma spectrometry measurements

    International Nuclear Information System (INIS)

    Cresswell, A.J.; Sanderson, D.C.W.

    2009-01-01

    The use of difference spectra, with filtering of a rolling-average background, as a variation on the more common rainbow plots to aid the visual identification of radiation anomalies in mobile gamma spectrometry systems is presented. This method requires minimal assumptions about the radiation environment and is not computationally intensive. Some case studies are presented to illustrate the method. It is shown that difference spectra produced in this manner can improve the signal-to-background ratio, estimate shielding or mass depth using scattered spectral components, and locate point sources. This approach could be a useful addition to the methods available for locating point sources and mapping dispersed activity in real time. Further possible developments of the procedure, utilising more intelligent filters and spatial averaging of the background, are identified.
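The difference-spectrum idea above can be sketched as follows: keep a rolling average of the last N background spectra and subtract it from each new spectrum, so a localized source shows up as a clear excess. The spectra here are Poisson toys, the "filter" is just the windowed mean, and the channel count, window length, and photopeak location are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(9)
n_channels, window = 256, 20
background_shape = np.exp(-np.arange(n_channels) / 60.0) * 50

buffer = []                                   # rolling background store
def difference_spectrum(spectrum):
    """Return spectrum minus the rolling-average background, then update."""
    bg = np.mean(buffer, axis=0) if buffer else np.zeros(n_channels)
    diff = spectrum - bg
    buffer.append(spectrum)
    if len(buffer) > window:
        buffer.pop(0)                         # drop the oldest spectrum
    return diff

# 30 background-only spectra, then one with a photopeak near channel 120.
for _ in range(30):
    difference_spectrum(rng.poisson(background_shape))
anomaly = background_shape.copy()
anomaly[115:125] += 200                       # point-source photopeak
diff = difference_spectrum(rng.poisson(anomaly))

peak_channel = int(np.argmax(diff))
print(115 <= peak_channel < 125)   # → True: the excess stands out clearly
```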

  8. Determining of the Parking Manoeuvre and the Taxi Blockage Adjustment Factor for the Saturation Flow Rate at the Outlet Legs of Signalized Intersections: Case Study from Rasht City (Iran)

    Science.gov (United States)

    Behbahani, Hamid; Jahangir Samet, Mehdi; Najafi Moghaddam Gilani, Vahid; Amini, Amir

    2017-10-01

    The presence of taxi stops within the area of the outlet legs of signalized intersections, due to unpredictable taxi behaviour, sudden lane changes, parking manoeuvres, and stopping to discharge or pick up passengers, leads to a reduction of the saturation flow rate at the outlet legs of signalized intersections and to increased delay, affecting the performance of a crossing lane. Various studies have evaluated the adjustment factors affecting the saturation flow rate at the inlet legs of signalized intersections; however, there have been no studies of the corresponding adjustment factors at the outlet legs. Hence, evaluating the traffic effects of these behaviours on the saturation flow rate of the outlet leg is very important. In this research, the parking manoeuvre time and taxi blockage time were evaluated and analyzed based on the available lane width, and the effective adjustment factors on the saturation flow rate were determined using data recorded at four signalized intersections in Rasht city. The results show that the average parking manoeuvre time is a function of the lane width and increases as the lane width is reduced. It is also suggested to use values of 7.37 and 11.31 seconds, respectively, for the average parking manoeuvre time and the average taxi blockage time at the outlet legs of signalized intersections for traffic design in Rasht city.

  9. Averaging, not internal noise, limits the development of coherent motion processing

    Directory of Open Access Journals (Sweden)

    Catherine Manning

    2014-10-01

    The development of motion processing is a critical part of visual development, allowing children to interact with moving objects and navigate within a dynamic environment. However, global motion processing, which requires pooling motion information across space, develops late, reaching adult-like levels only by mid-to-late childhood. The reasons underlying this protracted development are not yet fully understood. In this study, we sought to determine whether the development of motion coherence sensitivity is limited by internal noise (i.e., imprecision in estimating the directions of individual elements) and/or global pooling across local estimates. To this end, we presented equivalent noise direction discrimination tasks and motion coherence tasks at both slow (1.5°/s) and fast (6°/s) speeds to children aged 5, 7, 9 and 11 years, and adults. We show that, as children get older, their levels of internal noise reduce, and they are able to average across more local motion estimates. Regression analyses indicated, however, that age-related improvements in coherent motion perception are driven solely by improvements in averaging and not by reductions in internal noise. Our results suggest that the development of coherent motion sensitivity is primarily limited by developmental changes within brain regions involved in integrating motion signals (e.g., MT/V5).
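    The equivalent noise paradigm mentioned above models the observed direction-discrimination threshold in terms of internal noise and the number of local estimates averaged; a minimal sketch of the standard two-parameter model (parameter values are invented for illustration, not the study's fits):

```python
import math

def equivalent_noise_threshold(sigma_ext, sigma_int, n_samples):
    """Standard equivalent-noise model: the observed threshold
    reflects internal noise added to the external (stimulus)
    noise, reduced by averaging over n_samples local estimates."""
    return math.sqrt((sigma_int**2 + sigma_ext**2) / n_samples)

# At low external noise the threshold is limited by internal noise;
# at high external noise it is limited by sampling efficiency.
child = [equivalent_noise_threshold(s, sigma_int=6.0, n_samples=4)  for s in (0.0, 30.0)]
adult = [equivalent_noise_threshold(s, sigma_int=6.0, n_samples=16) for s in (0.0, 30.0)]
print(child, adult)  # a larger n_samples lowers thresholds throughout
```

In this toy comparison only the number of averaged samples differs, mirroring the paper's finding that averaging, not internal noise, drives the developmental improvement.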

  10. Fundamentals of statistical signal processing

    CERN Document Server

    Kay, Steven M

    1993-01-01

    A unified presentation of parameter estimation for those involved in the design and implementation of statistical signal processing algorithms. Covers important approaches to obtaining an optimal estimator and analyzing its performance, and includes numerous examples as well as applications to real-world problems. MARKETS: For practicing engineers and scientists who design and analyze signal processing systems, i.e., who extract information from noisy signals: radar engineers, sonar engineers, geophysicists, oceanographers, biomedical engineers, communications engineers, economists, statisticians, physicists, etc.

  11. Fundamentals of an Optimal Multirate Subband Coding of Cyclostationary Signals

    Directory of Open Access Journals (Sweden)

    D. Kula

    2000-06-01

    Full Text Available A consistent theory of optimal subband coding of zero mean wide-sense cyclostationary signals, with N-periodic statistics, is presented in this article. An M-channel orthonormal uniform filter bank, employing N-periodic analysis and synthesis filters, is used while an average variance condition is applied to evaluate the output distortion. In three lemmas and final theorem, the necessity of decorrelation of blocked subband signals and requirement of specific ordering of power spectral densities are proven.

  12. Measurements of the global 21-cm signal from the Cosmic Dawn

    Science.gov (United States)

    Bernardi, Gianni

    2018-05-01

    The sky-averaged (global) 21-cm signal is a very promising probe of the Cosmic Dawn, when the first luminous sources formed and started to shine in a substantially neutral intergalactic medium. I here report on the status and early results of the Large-Aperture Experiment to Detect the Dark Ages, which focuses on observations of the global 21-cm signal in the 16 ≲ z ≲ 30 range.

  13. An electromagnetic signals monitoring and analysis wireless platform employing personal digital assistants and pattern analysis techniques

    Science.gov (United States)

    Ninos, K.; Georgiadis, P.; Cavouras, D.; Nomicos, C.

    2010-05-01

    This study presents the design and development of a mobile wireless platform for monitoring and analysis of seismic events and related electromagnetic (EM) signals, employing Personal Digital Assistants (PDAs). A prototype custom-developed application was deployed on a 3G-enabled PDA that could connect to the FTP server of the Institute of Geodynamics of the National Observatory of Athens and receive and display EM signals at four receiver frequencies (3 kHz (E-W, N-S), 10 kHz (E-W, N-S), 41 MHz and 46 MHz). Signals may originate from any one of the 16 field stations located around the Greek territory. Employing continuous recordings of EM signals gathered from January 2003 to December 2007, a Support Vector Machine (SVM)-based classification system was designed to distinguish EM precursor signals within a noisy background. EM signals corresponding to recordings preceding major seismic events (Ms≥5R) were segmented by an experienced scientist, and five features (mean, variance, skewness, kurtosis, and a wavelet-based feature) were calculated from the EM signals. These features were used to train the SVM-based classification scheme. The performance of the system was evaluated by the exhaustive search and leave-one-out methods, giving 87.2% overall classification accuracy in correctly identifying EM precursor signals within a noisy background employing all calculated features. Due to the insufficient processing power of the PDAs, this task was performed on a typical desktop computer. The optimally trained context of the SVM classifier was then integrated in the PDA-based application, rendering the platform capable of discriminating between EM precursor signals and noise. The system's efficiency was evaluated by an expert who reviewed (1) multiple EM signals, up to 18 days prior to corresponding past seismic events, and (2) the possible EM activity of a specific region employing the trained SVM classifier. Additionally, the proposed architecture can form a
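    Four of the five features named (mean, variance, skewness, kurtosis) can be computed directly from a segment; a numpy sketch, with the wavelet-based feature and the SVM training itself omitted, and a synthetic background-like segment as an illustrative assumption:

```python
import numpy as np

def segment_features(x):
    """Mean, variance, skewness and kurtosis of one EM-signal
    segment: four of the five features used to train the SVM
    classifier (the wavelet-based feature is omitted here)."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    var = x.var()
    sd = np.sqrt(var)
    skew = ((x - mu) ** 3).mean() / sd**3
    kurt = ((x - mu) ** 4).mean() / var**2
    return np.array([mu, var, skew, kurt])

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 100_000)       # background-like segment
print(segment_features(noise).round(2))     # skew ~ 0, kurtosis ~ 3
```

A precursor-like segment with bursts or asymmetry would shift the skewness and kurtosis away from these Gaussian-background values, which is what gives such features discriminative power.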

  14. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  15. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross-section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example of 168 Er data. 19 figures, 2 tables

  16. A study on hybrid split-spectrum processing technique for enhanced reliability in ultrasonic signal analysis

    International Nuclear Information System (INIS)

    Huh, Hyung; Koo, Kil Mo; Cheong, Yong Moo; Kim, G. J.

    1995-01-01

    Many signal-processing techniques have been found useful in ultrasonic nondestructive evaluation. Among the most popular are signal averaging, spatial compounding, matched filters, and homomorphic processing. One significant newer process is split-spectrum processing (SSP), which can be equally useful for signal-to-noise ratio (SNR) improvement and for grain characterization in several engineering materials. The purpose of this paper is to explore the utility of SSP in ultrasonic NDE. A wide variety of engineering problems are reviewed and suggestions for implementation of the technique are provided. SSP exploits the frequency-dependent response of the interfering coherent noise produced by unresolvable scatterers in the resolution range cell of a transducer. It is implemented by splitting the frequency spectrum of the received signal using Gaussian bandpass filters. The theoretical basis for the potential of SSP for grain characterization in SUS 304 material is discussed, and some experimental evidence for the feasibility of the approach is presented. Results of SNR enhancement in signals obtained from four real samples of SUS 304 are given. The influence of various processing parameters on the performance of the technique is also discussed. The minimization algorithm, which provides excellent SNR enhancement when used either in conjunction with other SSP algorithms such as polarity check or by itself, is also presented.
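    The minimization algorithm named above can be sketched as a Gaussian bandpass filter bank followed by a pointwise minimum across band envelopes; in this hedged sketch the sampling rate, band centres, bandwidth and synthetic echo are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def ssp_minimization(signal, fs, centers, sigma):
    """Split-spectrum processing with a minimization stage:
    filter the signal with Gaussian bandpass filters centred at
    several frequencies, take the magnitude envelope of each band
    (the one-sided Gaussian also suppresses negative frequencies,
    giving an analytic-signal-like output), then keep the pointwise
    minimum across bands. A coherent echo survives in every band;
    frequency-sensitive grain noise does not."""
    n = len(signal)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    spectrum = np.fft.fft(signal)
    envelopes = []
    for fc in centers:
        gauss = np.exp(-0.5 * ((freqs - fc) / sigma) ** 2)
        envelopes.append(np.abs(np.fft.ifft(spectrum * gauss)))
    return np.min(envelopes, axis=0)

fs = 100e6                                   # 100 MHz sampling (illustrative)
t = np.arange(2048) / fs
echo = np.exp(-(((t - 5e-6) / 0.5e-6) ** 2)) * np.sin(2 * np.pi * 5e6 * t)
rng = np.random.default_rng(1)
noisy = echo + 0.2 * rng.normal(size=t.size)
out = ssp_minimization(noisy, fs, centers=[4.5e6, 5e6, 5.5e6], sigma=0.5e6)
print(int(out.argmax()))                     # peak lands near sample 500 (t = 5 us)
```

Taking the minimum is the most aggressive of the SSP recombination rules; the polarity-check variant mentioned above instead zeroes samples whose sign disagrees across bands.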

  17. A Study on Hybrid Split-Spectrum Processing Technique for Enhanced Reliability in Ultrasonic Signal Analysis

    International Nuclear Information System (INIS)

    Huh, H.; Koo, K. M.; Kim, G. J.

    1996-01-01

    Many signal-processing techniques have been found useful in ultrasonic nondestructive evaluation. Among the most popular are signal averaging, spatial compounding, matched filters and homomorphic processing. One significant newer process is split-spectrum processing (SSP), which can be equally useful for signal-to-noise ratio (SNR) improvement and for grain characterization in several specimens. The purpose of this paper is to explore the utility of SSP in ultrasonic NDE. A wide variety of engineering problems are reviewed, and suggestions for implementation of the technique are provided. SSP exploits the frequency-dependent response of the interfering coherent noise produced by unresolvable scatterers in the resolution range cell of a transducer. It is implemented by splitting the frequency spectrum of the received signal using Gaussian bandpass filters. The theoretical basis for the potential of SSP for grain characterization in SUS 304 material is discussed, and some experimental evidence for the feasibility of the approach is presented. Results of SNR enhancement in signals obtained from four real samples of SUS 304 are given. The influence of various processing parameters on the performance of the technique is also discussed. The minimization algorithm, which provides an excellent SNR enhancement when used either in conjunction with other SSP algorithms like polarity check or by itself, is also presented.

  18. Assessment of the Speech Intelligibility Performance of Post Lingual Cochlear Implant Users at Different Signal-to-Noise Ratios Using the Turkish Matrix Test

    Directory of Open Access Journals (Sweden)

    Zahra Polat

    2016-10-01

    Background: Spoken word recognition and speech perception tests in quiet are used routinely to assess the benefit that children and adult cochlear implant users receive from their devices. Cochlear implant users generally demonstrate high-level performance on these test materials, as they are able to achieve high-level speech perception in quiet situations. Although these materials provide valuable information regarding Cochlear Implant (CI) users' performance in optimal listening conditions, they do not give realistic information regarding performance in adverse listening conditions, which is the case in the everyday environment. Aims: The aim of this study was to assess the speech intelligibility performance of post-lingual CI users in the presence of noise at different signal-to-noise ratios with the Matrix Test developed for the Turkish language. Study Design: Cross-sectional study. Methods: Thirty post-lingual implant user adult subjects, who had been using their implants for a minimum of one year, were evaluated with the Turkish Matrix Test. Subjects' speech intelligibility was measured using the adaptive and non-adaptive Matrix Test in quiet and noisy environments. Results: The results of the study show a correlation between the Pure Tone Average (PTA) values of the subjects and the Matrix Test Speech Reception Threshold (SRT) values in quiet. Hence, it is possible to assess the PTA values of CI users using the Matrix Test as well. However, no correlation was found between Matrix SRT values in quiet and Matrix SRT values in noise. Similarly, the correlation between PTA values and intelligibility scores in noise was also not significant. Therefore, it may not be possible to assess the intelligibility performance of CI users using test batteries performed in quiet conditions. Conclusion: The Matrix Test can be used to assess the benefit of CI users from their systems in everyday life, since it is possible to perform

  19. Ultra-Fast Optical Signal Processing in Nonlinear Silicon Waveguides

    DEFF Research Database (Denmark)

    Oxenløwe, Leif Katsuo; Galili, Michael; Pu, Minhao

    2011-01-01

    We describe recent demonstrations of exploiting highly nonlinear silicon nanowires for processing Tbit/s optical data signals. We perform demultiplexing and optical waveform sampling of 1.28 Tbit/s and wavelength conversion of 640 Gbit/s data signals.

  20. Error performance analysis in K-tier uplink cellular networks using a stochastic geometric approach

    KAUST Repository

    Afify, Laila H.

    2015-09-14

    In this work, we develop an analytical paradigm to analyze the average symbol error probability (ASEP) performance of uplink traffic in a multi-tier cellular network. The analysis is based on the recently developed Equivalent-in-Distribution approach that utilizes stochastic geometric tools to account for the network geometry in the performance characterization. Different from the other stochastic geometry models adopted in the literature, the developed analysis accounts for important communication system parameters and goes beyond signal-to-interference-plus-noise ratio characterization. That is, the presented model accounts for the modulation scheme, constellation type, and signal recovery techniques to model the ASEP. To this end, we derive single integral expressions for the ASEP for different modulation schemes due to aggregate network interference. Finally, all theoretical findings of the paper are verified via Monte Carlo simulations.
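    The paper validates its closed-form ASEP expressions against Monte Carlo simulation; as a hedged toy illustration of that workflow, here is a QPSK symbol-error check in plain AWGN (no aggregate network interference, so this is far simpler than the paper's multi-tier setting, and the SNR value is illustrative):

```python
import numpy as np
from math import erfc, sqrt

def qpsk_ser_theory(es_n0):
    """Closed-form QPSK symbol error rate in AWGN:
    SER = 2*Q(sqrt(Es/N0)) - Q(sqrt(Es/N0))**2."""
    q = 0.5 * erfc(sqrt(es_n0 / 2))   # Q(sqrt(Es/N0))
    return 2 * q - q * q

def qpsk_ser_monte_carlo(es_n0, n_symbols=200_000, seed=0):
    """Estimate the same SER by simulating Gray-mapped QPSK
    symbols (Es = 1) through a complex AWGN channel."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, size=(n_symbols, 2))
    symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / sqrt(2)
    noise = rng.normal(size=n_symbols) + 1j * rng.normal(size=n_symbols)
    noise *= sqrt(1 / (2 * es_n0))    # per-dimension variance N0/2
    received = symbols + noise
    errors = (np.sign(received.real) != np.sign(symbols.real)) | \
             (np.sign(received.imag) != np.sign(symbols.imag))
    return errors.mean()

es_n0 = 10 ** (8 / 10)                # Es/N0 of 8 dB
print(qpsk_ser_theory(es_n0), qpsk_ser_monte_carlo(es_n0))
```

The paper's contribution is precisely that such closed-form/simulation agreement is obtained under aggregate interference from a stochastic-geometry network model, not just AWGN.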

  1. Numerical and experimental research on pentagonal cross-section of the averaging Pitot tube

    International Nuclear Information System (INIS)

    Zhang, Jili; Li, Wei; Liang, Ruobing; Zhao, Tianyi; Liu, Yacheng; Liu, Mingsheng

    2017-01-01

    Averaging Pitot tubes have been widely used in many fields because of their simple structure and stable performance. This paper introduces a new cross-sectional shape for an averaging Pitot tube. Firstly, the structure of the averaging Pitot tube and the distribution of pressure taps are given. Then, a mathematical model of the airflow around it is formulated. After that, a series of numerical simulations is carried out to optimize the geometry of the tube, and the distribution of streamlines and pressures around the tube is given. To test its performance, a test platform was constructed in accordance with the relevant national standards and is described in this paper. Curves linking the values of the flow coefficient with the values of the Reynolds number are provided. With a maximum deviation of only ±3%, the flow coefficients obtained from the numerical simulations were in agreement with those obtained experimentally. The proposed tube has a stable flow coefficient and favorable metrological characteristics. (paper)
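    The flow coefficient being calibrated relates the measured differential pressure to the duct velocity through the standard averaging-Pitot working equation v = K·sqrt(2Δp/ρ); a minimal sketch, with the coefficient value, duct size and air density invented for illustration rather than taken from the paper's calibration:

```python
from math import sqrt, pi

def volumetric_flow(delta_p, flow_coeff, duct_diameter, rho=1.2):
    """Averaging-Pitot working equation: v = K * sqrt(2*dp/rho),
    Q = v * A.  `flow_coeff` (K) comes from a calibration such as
    the experiments described above; rho is air density in kg/m^3,
    delta_p the averaged differential pressure in Pa."""
    velocity = flow_coeff * sqrt(2.0 * delta_p / rho)
    area = pi * duct_diameter**2 / 4.0
    return velocity * area

# Illustrative: 120 Pa differential pressure, K = 0.8, 0.3 m duct
q = volumetric_flow(120.0, 0.8, 0.3)
print(round(q, 3), "m^3/s")
```

A flow coefficient that stays constant over the working Reynolds-number range, as reported above, is what makes a single K usable across the meter's span.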

  2. Noise-aware dictionary-learning-based sparse representation framework for detection and removal of single and combined noises from ECG signal.

    Science.gov (United States)

    Satija, Udit; Ramkumar, Barathram; Sabarimalai Manikandan, M

    2017-02-01

    Automatic electrocardiogram (ECG) signal enhancement has become a crucial pre-processing step in most ECG signal analysis applications. In this Letter, the authors propose an automated noise-aware dictionary-learning-based generalised ECG signal enhancement framework which can automatically learn the dictionaries based on the ECG noise type for effective representation of ECG signal and noises, and can reduce the computational load of sparse-representation-based ECG enhancement systems. The proposed framework consists of noise detection and identification, noise-aware dictionary learning, and sparse signal decomposition and reconstruction. The noise detection and identification is performed based on the moving average filter, first-order difference, and temporal features such as the number of turning points, maximum absolute amplitude, zero-crossings, and autocorrelation features. The representation dictionary is learned based on the type of noise identified in the previous stage. The proposed framework is evaluated using noise-free and noisy ECG signals. Results demonstrate that the proposed method can significantly reduce the computational load as compared with conventional dictionary-learning-based ECG denoising approaches. Further, comparative results show that the method outperforms existing methods in automatically removing noises such as baseline wander, power-line interference, muscle artefacts and their combinations without distorting the morphological content of local waves of the ECG signal.
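    Several of the temporal features used in the noise detection stage (turning points, zero-crossings, maximum absolute amplitude, autocorrelation) are straightforward to compute; a numpy sketch, with the synthetic "clean" and "muscle-artefact-like" segments as illustrative assumptions rather than real ECG data:

```python
import numpy as np

def temporal_features(x):
    """A few of the temporal features used in the noise detection
    stage: turning points, zero-crossings, maximum absolute
    amplitude, and lag-1 autocorrelation of the mean-removed
    segment."""
    x = np.asarray(x, dtype=float)
    d = np.diff(x)
    turning_points = int(np.sum(d[:-1] * d[1:] < 0))
    zero_crossings = int(np.sum(x[:-1] * x[1:] < 0))
    max_abs = float(np.abs(x).max())
    xc = x - x.mean()
    autocorr1 = float(np.dot(xc[:-1], xc[1:]) / np.dot(xc, xc))
    return turning_points, zero_crossings, max_abs, autocorr1

t = np.linspace(0, 1, 500, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)              # smooth, structured segment
rng = np.random.default_rng(2)
noisy = clean + 0.5 * rng.normal(size=t.size)  # muscle-artefact-like segment
print(temporal_features(clean)[0], temporal_features(noisy)[0])
```

A broadband artefact multiplies the turning-point count and drops the lag-1 autocorrelation, which is exactly the kind of separation a noise-type identifier can threshold on.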

  3. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  4. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    Science.gov (United States)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, antennas are required to meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide-fed slots. In addition to their inherently narrow frequency band performance, the problem is exacerbated by modeling errors and manufacturing tolerances. A design methodology was therefore needed to solve the problem. An iterative design procedure was developed, starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by a full-wave method-of-moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced into the aperture distribution by modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, the array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. A separable Taylor distribution with nbar=4 and a 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E

  5. A New Second-Order Generalized Integrator Based Quadrature Signal Generator With Enhanced Performance

    DEFF Research Database (Denmark)

    Xin, Zhen; Qin, Zian; Lu, Minghui

    2016-01-01

    Due to the simplicity and flexibility of the structure of the Second-Order Generalized Integrator based Quadrature Signal Generator (SOGI-QSG), it has been widely used over the past decade for many applications such as frequency estimation, grid synchronization, and harmonic extraction. However, the SOGI-QSG will produce errors when its input signal contains a dc component or harmonic components with unknown frequencies. The accuracy of the signal detection methods using it may hence be compromised. To overcome the drawback, the First-Order System (FOS) concept is first used to illustrate
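    The SOGI-QSG itself reduces to two coupled integrators; a minimal forward-Euler simulation of the standard state equations (the gain k, grid frequency and step size are illustrative assumptions, and the dc sensitivity discussed above would appear if a dc offset were added to the input):

```python
import math

def sogi_qsg(u, omega, k=1.41, dt=1e-5):
    """Simulate the standard SOGI-QSG state equations
        v'  = k*omega*(u - v) - omega*qv
        qv' = omega*v
    returning the in-phase (v) and quadrature (qv) outputs.
    In steady state v tracks a sinusoidal input at omega and
    qv lags it by 90 degrees."""
    v, qv = 0.0, 0.0
    vs, qvs = [], []
    for sample in u:
        dv = k * omega * (sample - v) - omega * qv
        dqv = omega * v
        v += dv * dt                      # forward-Euler step
        qv += dqv * dt
        vs.append(v)
        qvs.append(qv)
    return vs, qvs

f = 50.0                                  # 50 Hz grid frequency
omega = 2 * math.pi * f
n = 40_000                                # 0.4 s at dt = 1e-5
u = [math.sin(omega * i * 1e-5) for i in range(n)]
v, qv = sogi_qsg(u, omega)
print(round(v[-1], 2), round(qv[-1], 2))  # after settling: v ~ sin, qv ~ -cos
```

At the tuned frequency the in-phase transfer function has unity gain and the quadrature one has gain 1 with a 90 degree lag, which is what the simulation converges to after a few damping time constants.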

  6. Outage performance of cognitive radio systems with Improper Gaussian signaling

    KAUST Repository

    Amin, Osama; Abediseid, Walid; Alouini, Mohamed-Slim

    2015-01-01

    design the SU signal by adjusting its transmitted power and the circularity coefficient to minimize the SU outage probability while maintaining a certain PU quality-of-service. Finally, we evaluate the proposed bounds and adaptive algorithms by numerical

  7. Hybrid Large-Eddy/Reynolds-Averaged Simulation of a Supersonic Cavity Using VULCAN

    Science.gov (United States)

    Quinlan, Jesse; McDaniel, James; Baurle, Robert A.

    2013-01-01

    Simulations of a supersonic recessed-cavity flow are performed using a hybrid large-eddy/Reynolds-averaged simulation approach utilizing an inflow turbulence recycling procedure and hybridized inviscid flux scheme. Calorically perfect air enters a three-dimensional domain at a free stream Mach number of 2.92. Simulations are performed to assess grid sensitivity of the solution, efficacy of the turbulence recycling, and the effect of the shock sensor used with the hybridized inviscid flux scheme. Analysis of the turbulent boundary layer upstream of the rearward-facing step for each case indicates excellent agreement with theoretical predictions. Mean velocity and pressure results are compared to Reynolds-averaged simulations and experimental data for each case and indicate good agreement on the finest grid. Simulations are repeated on a coarsened grid, and results indicate strong grid density sensitivity. Simulations are performed with and without inflow turbulence recycling on the coarse grid to isolate the effect of the recycling procedure, which is demonstrably critical to capturing the relevant shear layer dynamics. Shock sensor formulations of Ducros and Larsson are found to predict mean flow statistics equally well.

  8. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  9. On the performance of shared access control strategy for femtocells

    KAUST Repository

    Magableh, Amer M.

    2013-02-18

    Femtocells can be employed in cellular systems to enhance indoor coverage, especially in areas with high capacity growth demands and high traffic rates. In this paper, we propose an efficient resource utilization protocol, named shared access protocol (SAP), to enable unauthorized macrocell user equipment to communicate with a partially closed-access femtocell base station and thereby improve system performance. The system model considers a femtocell equipped with a total of N separate antennas or channels to multiplex independent traffic. A set of N1 channels is used for closed access only by the authorized users, and the remaining channels can be used for open access by either authorized or unauthorized users according to their demands and spatial locations. For this system model, we obtain the signal-to-interference ratio characteristics, such as the distribution and the moment generating function, in closed form for two fading models of indoor and outdoor environments. The signal-to-interference ratio statistics are then used to derive some important performance measures of the proposed SAP in closed form, such as the average bit error rate, outage probability, and average channel capacity for the two fading models under consideration. Numerical results for the obtained expressions are provided and supported by Monte Carlo simulations to validate the analytical development and study the effectiveness of the proposed SAP under different conditions. Copyright © 2012 John Wiley and Sons, Ltd.

  10. Screening applicants for risk of poor academic performance: a novel scoring system using preadmission grade point averages and graduate record examination scores.

    Science.gov (United States)

    Luce, David

    2011-01-01

    The purpose of this study was to develop an effective screening tool for identifying physician assistant (PA) program applicants at highest risk for poor academic performance. Prior to reviewing applications for the class of 2009, a retrospective analysis of preadmission data was performed for the classes of 2006, 2007, and 2008. A single composite score was calculated for each student who matriculated (number of subjects, N=228), incorporating the total undergraduate grade point average (UGPA), the science GPA (SGPA), and the three component Graduate Record Examination (GRE) scores: verbal (GRE-V), quantitative (GRE-Q), and analytical (GRE-A). Individual applicant scores for each of the five parameters were ranked in descending quintiles. Each applicant's five quintile scores were then added, yielding a total quintile score ranging from 25, indicating excellent performance, down to 5, indicating poorer performance. Thirteen of the 228 students had academic difficulty (dismissal, suspension, or one quarter on academic warning or probation). Twelve of the 13 students having academic difficulty had a preadmission total quintile score of 12 or lower (range, 6-14). In response to this descriptive analysis, when selecting applicants for the class of 2009, the admissions committee used the total quintile score for screening applicants for interviews. Analysis of correlations in preadmission, graduate, and postgraduate performance data for the classes of 2009-2013 will continue and may help identify those applicants at risk for academic difficulty. Establishing a threshold total quintile score of applicant GPA and GRE scores may significantly decrease the number of entering PA students at risk for poor academic performance.

  11. A simple signaling rule for variable life-adjusted display derived from an equivalent risk-adjusted CUSUM chart.

    Science.gov (United States)

    Wittenberg, Philipp; Gan, Fah Fatt; Knoth, Sven

    2018-04-17

    The variable life-adjusted display (VLAD) is the first risk-adjusted graphical procedure proposed in the literature for monitoring the performance of a surgeon. It displays the cumulative sum of expected minus observed deaths. It has since become highly popular because the statistic plotted is easy to understand. But it is also easy to misinterpret a surgeon's performance by utilizing the VLAD, potentially leading to grave consequences. The problem of misinterpretation is essentially caused by the variance of the VLAD's statistic that increases with sample size. In order for the VLAD to be truly useful, a simple signaling rule is desperately needed. Various forms of signaling rules have been developed, but they are usually quite complicated. Without signaling rules, making inferences using the VLAD alone is difficult if not misleading. In this paper, we establish an equivalence between a VLAD with V-mask and a risk-adjusted cumulative sum (RA-CUSUM) chart based on the difference between the estimated probability of death and surgical outcome. Average run length analysis based on simulation shows that this particular RA-CUSUM chart has similar performance as compared to the established RA-CUSUM chart based on the log-likelihood ratio statistic obtained by testing the odds ratio of death. We provide a simple design procedure for determining the V-mask parameters based on a resampling approach. Resampling from a real data set ensures that these parameters can be estimated appropriately. Finally, we illustrate the monitoring of a real surgeon's performance using VLAD with V-mask. Copyright © 2018 John Wiley & Sons, Ltd.
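    The two statistics being related can be sketched side by side: the VLAD accumulates expected minus observed deaths, while an upper RA-CUSUM on the difference between outcome and estimated risk resets at zero and signals on crossing a limit. The risks, outcomes, allowance and limit below are invented for illustration; the paper's V-mask design procedure is not reproduced:

```python
def vlad(probs, outcomes):
    """Variable life-adjusted display: cumulative sum of expected
    deaths (estimated risk) minus observed deaths (0 or 1)."""
    total, path = 0.0, []
    for p, y in zip(probs, outcomes):
        total += p - y
        path.append(total)
    return path

def ra_cusum(probs, outcomes, allowance=0.05, limit=2.0):
    """Upper RA-CUSUM on (outcome - estimated risk): resets at 0,
    signals deteriorating performance when it crosses `limit`,
    then restarts. Returns the indices of signalling cases."""
    s, signals = 0.0, []
    for i, (p, y) in enumerate(zip(probs, outcomes)):
        s = max(0.0, s + (y - p) - allowance)
        if s > limit:
            signals.append(i)
            s = 0.0
    return signals

# 30 low-risk operations with a bad run of deaths at cases 10-14
probs = [0.1] * 30
outcomes = [0] * 30
for i in range(10, 15):
    outcomes[i] = 1
print(vlad(probs, outcomes)[-1])          # net expected minus observed deaths
print(ra_cusum(probs, outcomes))          # the chart signals during the run
```

This is the point of the paper: the raw VLAD path needs a signalling rule like the CUSUM (or its V-mask equivalent) before a downward drift can be declared meaningful.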

  12. Feeding of Whitefly on Tobacco Decreases Aphid Performance via Increased Salicylate Signaling.

    Directory of Open Access Journals (Sweden)

    Haipeng Zhao

    The feeding of Bemisia tabaci nymphs triggers the SA pathway in some plant species. A previous study showed that B. tabaci nymphs induced defense against aphids (Myzus persicae) in tobacco. However, the mechanism underlying this defense response is not well understood. Here, the effect of activating the SA signaling pathway in tobacco plants through B. tabaci nymph infestation on subsequent M. persicae colonization is investigated. Performance assays showed that pre-infestation with B. tabaci nymphs significantly reduced M. persicae survival and fecundity systemically in wild-type (WT) but not salicylate-deficient (NahG) plants compared with the respective controls. However, pre-infestation had no obvious local effect on subsequent M. persicae in either WT or NahG tobacco. SA quantification indicated that the highest accumulation of SA was induced by B. tabaci nymphs in WT plants after 15 days of infestation; these levels were 8.45- and 6.14-fold higher in the local and systemic leaves, respectively, than in controls. Meanwhile, no significant changes in SA levels were detected in NahG plants. Further, biochemical analysis of the defense enzymes polyphenol oxidase (PPO), peroxidase (POD), β-1,3-glucanase, and chitinase demonstrated that B. tabaci nymph infestation increased these enzymes' activity locally and systemically in WT plants, and there was more chitinase and β-1,3-glucanase activity systemically than locally, which was opposite to the trend for PPO. However, B. tabaci nymph infestation caused no obvious increase in enzyme activity in NahG plants except for POD. In conclusion, these results underscore the important role that induction of the SA signaling pathway by B. tabaci nymphs plays in defeating aphids. They also indicate that the activity of β-1,3-glucanase and chitinase may be positively correlated with resistance to aphids.

  13. Real-time digital signal recovery for a multi-pole low-pass transfer function system.

    Science.gov (United States)

    Lee, Jhinhwan

    2017-08-01

    In order to solve the problems of waveform distortion and signal delay caused by many physical and electrical systems with multi-pole linear low-pass transfer characteristics, a simple digital-signal-processing (DSP)-based method for real-time recovery of the original source waveform from the distorted output waveform is proposed. A mathematical analysis of the convolution-kernel representation of the single-pole low-pass transfer function shows that the original source waveform can be accurately recovered in real time using a particular moving average algorithm applied to the input stream of the distorted waveform, which can also significantly reduce the overall delay time constant. The method is generalized to multi-pole low-pass systems and has noise characteristics given by the inverse of the low-pass filter characteristics. It can be applied to most sensors and amplifiers operating close to their frequency response limits to improve the overall performance of data acquisition systems and digital feedback control systems.
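    For the single-pole case, the recovery described reduces to an exact two-term inverse of the familiar first-order IIR low-pass; a minimal sketch, where the pole value and step waveform are illustrative, and on real data the inverse would amplify high-frequency noise, as the noise characteristics noted above imply:

```python
def lowpass_1pole(x, a):
    """Single-pole low-pass filter: y[n] = a*x[n] + (1-a)*y[n-1]."""
    y, prev = [], 0.0
    for v in x:
        prev = a * v + (1 - a) * prev
        y.append(prev)
    return y

def recover_1pole(y, a):
    """Exact real-time inverse of the filter above, computable from
    the current and previous output samples only:
        x[n] = (y[n] - (1-a)*y[n-1]) / a."""
    x, prev = [], 0.0
    for v in y:
        x.append((v - (1 - a) * prev) / a)
        prev = v
    return x

step = [0.0] * 5 + [1.0] * 10           # a sharp edge, badly smeared below
blurred = lowpass_1pole(step, a=0.2)
restored = recover_1pole(blurred, a=0.2)
print(max(abs(r - s) for r, s in zip(restored, step)))   # ~0 (exact up to rounding)
```

Because the inverse needs only the current and previous samples, it adds essentially no latency, which is the delay-reduction property claimed above; the multi-pole generalization cascades one such inverse per pole.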

  14. Development and Comparative Study of Effects of Training Algorithms on Performance of Artificial Neural Network Based Analog and Digital Automatic Modulation Recognition

    Directory of Open Access Journals (Sweden)

    Jide Julius Popoola

    2015-11-01

    Full Text Available This paper proposes two new classifiers that automatically recognise twelve combined analog and digital modulated signals without any a priori knowledge of the modulation schemes or parameters. The classifiers are developed using a pattern recognition approach. Feature keys extracted from the instantaneous amplitude, instantaneous phase and spectrum symmetry of the simulated signals are used as inputs to the artificial neural networks employed in developing the classifiers. The two classifiers are trained using the scaled conjugate gradient (SCG) and conjugate gradient (CONJGRAD) training algorithms. Sample results show good recognition performance, with an average overall recognition rate above 99.50% at signal-to-noise ratio (SNR) values of 0 dB and above for both training algorithms, and average overall recognition rates slightly above 99.00% and 96.40% at -5 dB SNR for the SCG and CONJGRAD algorithms, respectively. The comparative performance evaluation shows that the two training algorithms have different effects on both the response rate and the efficiency of the developed artificial neural network classifiers. In addition, a comparison of overall success recognition rates between the two classifiers developed in this study using the pattern recognition approach and a classifier reported in the surveyed literature using a decision-theoretic approach shows that the classifiers developed here perform favourably with regard to accuracy and performance probability.

  15. Using exponentially weighted moving average algorithm to defend against DDoS attacks

    CSIR Research Space (South Africa)

    Machaka, P

    2016-11-01

    Full Text Available This paper seeks to investigate the performance of the Exponentially Weighted Moving Average (EWMA) for mining big data and detection of DDoS attacks in Internet of Things (IoT) infrastructure. The paper will investigate the tradeoff between...
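
The abstract is truncated, but the core EWMA detection idea can still be illustrated. In this hypothetical sketch (the function name, thresholding rule and parameter values are illustrative, not taken from the paper), one EWMA tracks the normal traffic level and a second tracks its typical deviation; a sample far outside the deviation band is flagged as a possible attack:

```python
def ewma_detector(series, alpha=0.2, k=3.0, warmup=10):
    """Flag samples whose deviation from the EWMA baseline exceeds k times
    the EWMA of past absolute deviations (a rough variability proxy)."""
    avg, dev, flags = float(series[0]), 0.0, []
    for i, x in enumerate(series):
        err = abs(x - avg)
        flags.append(i >= warmup and err > k * dev)
        avg = (1.0 - alpha) * avg + alpha * x    # EWMA of the traffic level
        dev = (1.0 - alpha) * dev + alpha * err  # EWMA of the deviation
    return flags

# Steady (slightly varying) request rate, then a sudden flood at index 30.
traffic = [100 + (n % 5) for n in range(30)] + [1000, 1100, 1050]
flags = ewma_detector(traffic)
```

The tradeoff the abstract alludes to is visible in `alpha`: a small weight smooths out benign bursts but reacts slowly, a large weight reacts quickly but raises more false alarms.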

  16. The Signalling Value of Education across Genders

    DEFF Research Database (Denmark)

    Nielsson, Ulf; Steingrimsdottir, Herdis

    2018-01-01

    This study examines gender discrimination and the possibility that education is more important for signalling ability among women than men. As social networks tend to run along gender lines and managers in the labour market are predominantly male, it may be more difficult for women to signal...... their ability without college credentials. The Lang and Manove (Am Econ Rev 101(4):1467-1496, 2011) model of racial discrimination and educational sorting is applied to examine the gender gap in schooling attainment. The model is empirically estimated for whites, blacks and Hispanics separately......, with the results among whites consistent with education being more valuable to women due to signalling. For 90% of the whites in the sample women choose a higher level of education, given their ability, than men. Women on average obtain 0.5-0.7 extra years of schooling compared to men with the same ability score....

  17. Digitally generated excitation and near-baseband quadrature detection of rapid scan EPR signals.

    Science.gov (United States)

    Tseitlin, Mark; Yu, Zhelin; Quine, Richard W; Rinard, George A; Eaton, Sandra S; Eaton, Gareth R

    2014-12-01

    The use of multiple synchronized outputs from an arbitrary waveform generator (AWG) provides the opportunity to perform EPR experiments differently than by conventional EPR. We report a method for reconstructing the quadrature EPR spectrum from periodic signals that are generated with sinusoidal magnetic field modulation such as continuous wave (CW), multiharmonic, or rapid scan experiments. The signal is down-converted to an intermediate frequency (IF) that is less than the field scan or field modulation frequency and then digitized in a single channel. This method permits use of a high-pass analog filter before digitization to remove the strong non-EPR signal at the IF that might otherwise overwhelm the digitizer. The IF is the difference between two synchronized X-band outputs from a Tektronix AWG 70002A, one of which is for excitation and the other is the reference for down-conversion. To permit signal averaging, the timing was selected to give an exact integer number of full cycles for each frequency. In the experiments reported here the IF was 5 kHz and the scan frequency was 40 kHz. To produce sinusoidal rapid scans with a scan frequency eight times the IF, a third synchronized output generated a square wave that was converted to a sine wave. The timing of the data acquisition with a Bruker SpecJet II was synchronized by an external clock signal from the AWG. The baseband quadrature signal in the frequency domain was reconstructed. This approach has the advantages that (i) the non-EPR response at the carrier frequency is eliminated, (ii) both real and imaginary EPR signals are reconstructed from a single physical channel to produce an ideal quadrature signal, and (iii) the signal bandwidth does not increase relative to baseband detection. Spectra were obtained by deconvolution of the reconstructed signals for solid BDPA (1,3-bisdiphenylene-2-phenylallyl) in air, 0.2 mM trityl OX63 in water, 15N-perdeuterated tempone, and a nitroxide with a 0.5 G partially-resolved proton
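
The single-channel quadrature recovery described above can be illustrated in a simplified form. Assuming a pure tone at the IF acquired over an exact integer number of IF cycles (the constraint the experiment's timing enforces), correlating the one digitized channel with cosine and sine references yields both quadrature components; the function name and parameter values here are illustrative, not the instrument's:

```python
import math

def quadrature(samples, f_if, fs):
    """Recover the in-phase/quadrature components of a single-channel tone
    digitized at rate fs by correlating it with cos/sin references at the
    intermediate frequency f_if over an integer number of cycles."""
    n = len(samples)
    i_acc = q_acc = 0.0
    for k, s in enumerate(samples):
        phase = 2.0 * math.pi * f_if * k / fs
        i_acc += s * math.cos(phase)
        q_acc += s * math.sin(phase)
    return 2.0 * i_acc / n, 2.0 * q_acc / n

# A 5 kHz IF tone sampled at 320 kHz for exactly 8 full cycles (512 samples).
fs, f_if, amp, phi = 320_000.0, 5_000.0, 0.7, 0.4
n = int(8 * fs / f_if)   # integer number of full IF cycles
sig = [amp * math.cos(2 * math.pi * f_if * k / fs + phi) for k in range(n)]
i, q = quadrature(sig, f_if, fs)
# amplitude = hypot(i, q), phase = -atan2(q, i) in this sign convention
```

Integrating over whole cycles makes the double-frequency cross terms vanish exactly, which is why both components come out of one physical channel without bandwidth expansion.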

  18. Analysis of defense signals in Arabidopsis thaliana leaves by ultra-performance liquid chromatography/tandem mass spectrometry: jasmonates, salicylic acid, abscisic acid.

    Science.gov (United States)

    Stingl, Nadja; Krischke, Markus; Fekete, Agnes; Mueller, Martin J

    2013-01-01

    Defense signaling compounds and phytohormones play an essential role in the regulation of plant responses to various environmental abiotic and biotic stresses. Among the most severe stresses are herbivory, pathogen infection, and drought stress. The major hormones involved in the regulation of these responses are 12-oxo-phytodienoic acid (OPDA), the pro-hormone jasmonic acid (JA) and its biologically active isoleucine conjugate (JA-Ile), salicylic acid (SA), and abscisic acid (ABA). These signaling compounds are present and biologically active at very low concentrations from ng/g to μg/g dry weight. Accurate and sensitive quantification of these signals has made a significant contribution to the understanding of plant stress responses. Ultra-performance liquid chromatography (UPLC) coupled with a tandem quadrupole mass spectrometer (MS/MS) has become an essential technique for the analysis and quantification of these compounds.

  19. Primary Study about Intensity Signal of Electron Paramagnetic Resonance in vivo Tooth Dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hoon; Gang, Seo Gon; Kim, Jeong In; Lee, Byung Il [KHNP Radiation Health Institute, Gyeongju (Korea, Republic of)

    2017-04-15

    Electron paramagnetic resonance (EPR) dosimetry using human teeth is well established as an efficient tool for evaluating radiation exposure. However, EPR dosimetry has been regarded as suffering from large signal fluctuations, even for the classical in vitro approach using extracted molar samples. One reason for the difficulty in obtaining accurate intensities is the strong contribution of organic material mixed into the enamel of the tooth samples. In vivo measurements add further fluctuations, mainly caused by the adaptation of the measurement system itself to the movement of the human subject. Generally, when human teeth (incisors) are measured in vivo, five or six spectra are collected and averaged for the actual evaluation, and these spectra are acquired under quite different conditions, such as varying angles between the external magnet, which creates the magnetic field, and the tooth. The random variation of these signals must therefore be considered from a different viewpoint when interpreting and comparing individual in vivo EPR spectra. The peak-to-peak values of the five or six spectra averaged to obtain the final quantity of free radicals in the hydroxyapatite crystal structure of the enamel appear to vary randomly, without regularity. In an overall view, however, the EPR signal, especially at the zero-irradiation level, is almost the same for every measurement trial and is composed mainly of large noise and a very small signal from the actual free radicals.

  20. Primary Study about Intensity Signal of Electron Paramagnetic Resonance in vivo Tooth Dosimetry

    International Nuclear Information System (INIS)

    Choi, Hoon; Gang, Seo Gon; Kim, Jeong In; Lee, Byung Il

    2017-01-01

    Electron paramagnetic resonance (EPR) dosimetry using human teeth is well established as an efficient tool for evaluating radiation exposure. However, EPR dosimetry has been regarded as suffering from large signal fluctuations, even for the classical in vitro approach using extracted molar samples. One reason for the difficulty in obtaining accurate intensities is the strong contribution of organic material mixed into the enamel of the tooth samples. In vivo measurements add further fluctuations, mainly caused by the adaptation of the measurement system itself to the movement of the human subject. Generally, when human teeth (incisors) are measured in vivo, five or six spectra are collected and averaged for the actual evaluation, and these spectra are acquired under quite different conditions, such as varying angles between the external magnet, which creates the magnetic field, and the tooth. The random variation of these signals must therefore be considered from a different viewpoint when interpreting and comparing individual in vivo EPR spectra. The peak-to-peak values of the five or six spectra averaged to obtain the final quantity of free radicals in the hydroxyapatite crystal structure of the enamel appear to vary randomly, without regularity. In an overall view, however, the EPR signal, especially at the zero-irradiation level, is almost the same for every measurement trial and is composed mainly of large noise and a very small signal from the actual free radicals.

  1. On the choice of the number of samples in laser Doppler anemometry signal processing

    Science.gov (United States)

    Dios, Federico; Comeron, Adolfo; Garcia-Vizcaino, David

    2001-05-01

    The minimum number of samples that must be taken from a sinusoidal signal affected by white Gaussian noise, in order to find its frequency with a predetermined maximum error, is derived. This analysis is of interest in evaluating the performance of velocity-measurement systems based on the Doppler effect. Specifically, in laser Doppler anemometry (LDA) it is usual to receive bursts with a poor signal-to-noise ratio, yet high accuracy is required for the measurement. In recent years special attention has been paid to the problem of monitoring the temporal evolution of turbulent flows. In this kind of situation averaging or filtering the data sequences cannot be allowed: in a rapidly changing environment each one of the measurements should rather be performed within a maximum permissible error, and the bursts strongly affected by noise removed. The method for velocity extraction considered here is spectral analysis through the squared discrete Fourier transform, or periodogram, of the received bursts. This paper has two parts. In the first, an approximate expression for the error committed in LDA is derived and discussed. In the second, a mathematical formalism for the exact calculation of the error as a function of the signal-to-noise ratio is obtained, and some universal curves for the expected error are provided. The results presented here appear to represent a fundamental limitation on the accuracy of LDA measurements, yet, to our knowledge, they have not been reported in the literature so far.
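
The velocity-extraction step the paper analyzes, locating the periodogram peak of a noisy burst, can be sketched as follows (a brute-force DFT for clarity; the sample rate, burst length and noise level are illustrative):

```python
import cmath, math, random

def periodogram_peak(x, fs):
    """Estimate the frequency of a noisy sinusoid from the peak of the
    squared DFT magnitude (the periodogram). Resolution is fs / len(x)."""
    n = len(x)
    best_k, best_p = 0, -1.0
    for k in range(1, n // 2):              # skip DC, positive bins only
        s = sum(x[m] * cmath.exp(-2j * math.pi * k * m / n) for m in range(n))
        p = abs(s) ** 2
        if p > best_p:
            best_k, best_p = k, p
    return best_k * fs / n

# A short noisy "burst": 125 Hz tone plus Gaussian noise, 256 samples.
random.seed(1)
fs, f0, n = 1000.0, 125.0, 256
burst = [math.sin(2 * math.pi * f0 * m / fs) + random.gauss(0.0, 0.5)
         for m in range(n)]
f_hat = periodogram_peak(burst, fs)
```

With too few samples or too low an SNR the peak can jump to a noise bin, which is exactly the failure mode whose probability the paper's error analysis quantifies.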

  2. Dual fiber Bragg gratings configuration-based fiber acoustic sensor for low-frequency signal detection

    Science.gov (United States)

    Yang, Dong; Wang, Shun; Lu, Ping; Liu, Deming

    2014-11-01

    We propose and fabricate a new type of fiber acoustic sensor based on a dual fiber Bragg grating (FBG) configuration. The acoustic sensor head is constructed by enclosing the sensing cells in an aluminum cylinder space built by two C-band FBGs and a titanium diaphragm of 50 μm thickness. One end of each FBG is longitudinally adhered to the diaphragm by UV glue. Both FBGs are employed for reflecting light; they serve not only as the signal transmission system but also as the sensing component, and they demodulate each other's optical signal mutually during the measurement. Both FBGs are pre-strained, and the output optical power fluctuates in a linear relationship with the variation of axial strain and surrounding acoustic interference, so a precise approach to measuring the frequency and sound pressure of the acoustic disturbance is achieved. Experiments show a relatively flat frequency response in the range from 200 Hz to 1 kHz, with an average signal-to-noise ratio (SNR) above 21 dB. A maximum sound pressure sensitivity of 11.35 mV/Pa is achieved, with an R-squared value of 0.99131, for sound pressures in the range of 87.7-106.6 dB. The sensor has potential applications in low-frequency signal detection. Owing to its direct self-demodulation method, the sensing system offers easy demodulation, good temperature stability and measurement reliability. Performance could be further improved by optimizing the parameters of the sensor, especially the diaphragm.

  3. Sleep staging with movement-related signals.

    Science.gov (United States)

    Jansen, B H; Shankar, K

    1993-05-01

    Body-movement-related signals (i.e., activity due to postural changes and the ballistocardiac effect) were recorded from six normal volunteers using the static-charge-sensitive bed (SCSB). Visual sleep staging was performed on the basis of simultaneously recorded EEG, EMG and EOG signals. A statistical classification technique was used to determine whether reliable sleep staging could be performed using only the SCSB signal. Classification rates between 52% and 75% were obtained for staging into the five conventional sleep stages and the awake state. These rates improved to 78-89% for classification between awake, REM and non-REM sleep, and to 86-98% for awake-versus-asleep classification.

  4. Effects of Soft Drinks on Resting State EEG and Brain-Computer Interface Performance.

    Science.gov (United States)

    Meng, Jianjun; Mundahl, John; Streitz, Taylor; Maile, Kaitlin; Gulachek, Nicholas; He, Jeffrey; He, Bin

    2017-01-01

    Motor imagery-based (MI-based) brain-computer interfaces (BCIs) using electroencephalography (EEG) allow users to directly control a computer or external device by modulating and decoding their brain waves. A variety of factors could potentially affect BCI performance, such as the health status of subjects or the environment. In this study, we investigated the effects of soft drinks and regular coffee on resting-state EEG signals and on the performance of MI-based BCI. Twenty-six healthy human subjects participated in three or four BCI sessions, each including a resting period. During each session, the subjects drank an unlabeled soft drink containing either sugar (Caffeine Free Coca-Cola), caffeine (Diet Coke) or neither ingredient (Caffeine Free Diet Coke), or a regular coffee if there was a fourth session. Comparison of the resting-state spectral power across conditions showed that power in the alpha and beta bands was decreased substantially after caffeine consumption compared with the control and sugar conditions. Although powers in the frequency range used for the online BCI control signal were attenuated, group-averaged online BCI performance after consuming caffeine was similar to that of the other conditions. This work, for the first time, shows the effects of caffeine and sugar intake on online BCI performance and the resting-state brain signal.

  5. Comparative analysis of the performance of One-Way and Two-Way urban road networks

    Science.gov (United States)

    Gheorghe, Carmen

    2017-10-01

    The number of vehicles increases year after year, which poses a challenge for road traffic management: traffic must be adjusted to prevent incidents while using mostly the same road infrastructure. At present a one-way road network provides efficient traffic flow for vehicles, but it is not ideal for pedestrians. Therefore, a proper solution must be found and applied when and where necessary. Replacing a one-way road network with a two-way network may be a viable solution, especially in areas with heavy pedestrian traffic. This paper aims to highlight the influence of both one-way and two-way urban road networks through an experimental study performed with traffic data collected in the field. The two scenarios analyzed were based on the same traffic data, the same road geometry (lane width, total road width, road slopes, total length of the network) and the same signaling conditions (signalized intersection or roundabout). The analysis of the two-way scenario reveals changes in performance parameters such as average delay, average number of stops, average stopped delay and average vehicle speed. Based on the values obtained, a comparative analysis between the real one-way scenario and the theoretical two-way scenario was performed.

  6. Combined peak-to-average power ratio reduction and physical layer security enhancement in optical orthogonal frequency division multiplexing visible-light communication systems

    Science.gov (United States)

    Wang, Zhongpeng; Chen, Shoufa

    2016-07-01

    A physical encryption scheme for discrete Hartley transform (DHT) precoded orthogonal frequency division multiplexing (OFDM) visible-light communication (VLC) systems using frequency-domain chaos scrambling is proposed. In the scheme, chaos scrambling, generated by a modified logistic mapping, is utilized to enhance physical-layer security, and DHT precoding is employed to reduce the peak-to-average power ratio (PAPR) of the OFDM signal in OFDM-based VLC. The influence of the chaos scrambling on the PAPR and bit error rate (BER) of the system is studied. Simulation results prove the efficiency of the proposed encryption method for DHT-precoded, OFDM-based VLC systems. The results show that the proposed security scheme can protect the DHT-precoded, OFDM-based VLC system from eavesdroppers while keeping the good BER performance of DHT-precoded systems: the BER performance of the encrypted, DHT-precoded system is almost the same as that of the conventional DHT-precoded system without encryption.
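
The frequency-domain scrambling step can be illustrated with a plain logistic map (the paper uses a modified logistic mapping whose exact form is not reproduced here). Sorting a key-seeded chaotic sequence yields a key-dependent permutation that scrambles the symbols and is exactly invertible by the legitimate receiver:

```python
def logistic_sequence(n, x0=0.3456, r=3.9999):
    """Chaotic sequence from the logistic map x -> r*x*(1-x)."""
    seq, x = [], x0
    for _ in range(n + 100):          # discard the initial transient
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq[100:]

def permutation_from_chaos(n, key=0.3456):
    """Key-dependent permutation: the rank order of the chaotic values."""
    chaos = logistic_sequence(n, x0=key)
    return sorted(range(n), key=lambda i: chaos[i])

def scramble(symbols, key=0.3456):
    perm = permutation_from_chaos(len(symbols), key)
    return [symbols[p] for p in perm]

def descramble(scrambled, key=0.3456):
    perm = permutation_from_chaos(len(scrambled), key)
    out = [None] * len(scrambled)
    for i, p in enumerate(perm):
        out[p] = scrambled[i]
    return out

frame = list(range(64))              # stand-in for frequency-domain symbols
tx = scramble(frame, key=0.3456)
rx = descramble(tx, key=0.3456)      # the legitimate receiver shares the key
```

An eavesdropper without the exact initial value cannot regenerate the permutation, since the map's sensitivity to initial conditions makes nearby keys produce unrelated orderings.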

  7. A RED modified weighted moving average for soft real-time application

    Directory of Open Access Journals (Sweden)

    Domanśka Joanna

    2014-09-01

    Full Text Available The popularity of TCP/IP has resulted in an increase in the usage of best-effort networks for real-time communication. Much effort has been spent to ensure quality of service for soft real-time traffic over IP networks. The Internet Engineering Task Force has proposed some architecture components, such as Active Queue Management (AQM). The paper investigates the influence of the weighted moving average on packet waiting time reduction for an AQM mechanism: the RED algorithm. The proposed method for computing the average queue length is based on a difference equation (a recursive equation). Depending on a particular optimality criterion, proper parameters of the modified weighted moving average function can be chosen. This change allows reducing the number of violations of timing constraints and makes better use of this mechanism for soft real-time transmissions. The optimization problem is solved through simulations performed in OMNeT++ and later verified experimentally on a Linux implementation.
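
For reference, the weighted moving average that the paper modifies is itself a simple difference equation in classic RED; a minimal sketch of that standard recursion (not the authors' modified function):

```python
def red_average(queue_samples, w=0.002):
    """RED's exponentially weighted moving average of the queue length:
    avg <- (1 - w) * avg + w * q, evaluated recursively per arrival."""
    avg, history = 0.0, []
    for q in queue_samples:
        avg = (1.0 - w) * avg + w * q
        history.append(avg)
    return history

# With a constant queue of 50 packets the average converges towards 50,
# but only slowly for the classic small weight w = 0.002.
hist = red_average([50] * 5000, w=0.002)
```

The slow convergence for small `w` is precisely what delays RED's reaction to congestion, and hence what motivates tuning the averaging function when timing constraints matter.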

  8. Endogenous Information, Risk Characterization, and the Predictability of Average Stock Returns

    Directory of Open Access Journals (Sweden)

    Pradosh Simlai

    2012-09-01

    Full Text Available In this paper we provide a new type of risk characterization of the predictability of two widely known abnormal patterns in average stock returns: momentum and reversal. The purpose is to illustrate the relative importance of common risk factors and endogenous information. Our results demonstrate that in the presence of zero-investment factors, spreads in average momentum and reversal returns correspond to spreads in the slopes of the endogenous information. The empirical findings support the view that various classes of firms react differently to volatility risk, and that endogenous information harbors important sources of potential risk loadings. Taken together, our results suggest that returns are influenced by random endogenous information flow, which is asymmetric in nature and can be used as a performance attribution factor. If one fails to incorporate the asymmetric endogenous information hidden in the historical behavior, any attempt to explore average stock return predictability will be subject to an unquantified specification bias.

  9. Parametric modelling of cardiac system multiple measurement signals: an open-source computer framework for performance evaluation of ECG, PCG and ABP event detectors.

    Science.gov (United States)

    Homaeinezhad, M R; Sabetian, P; Feizollahi, A; Ghaffari, A; Rahmani, R

    2012-02-01

    The major focus of this study is to present a performance accuracy assessment framework based on mathematical modelling of cardiac system multiple measurement signals. Three mathematical algebraic subroutines with simple structural functions for synthetic generation of the synchronously triggered electrocardiogram (ECG), phonocardiogram (PCG) and arterial blood pressure (ABP) signals are described. In the case of ECG signals, normal and abnormal PQRST cycles in complicated conditions such as fascicular ventricular tachycardia, rate-dependent conduction block and acute Q-wave infarctions of the inferior and anterolateral walls can be simulated. Also, a continuous ABP waveform with corresponding individual events such as systolic, diastolic and dicrotic pressures with normal or abnormal morphologies can be generated by another part of the model. In addition, the mathematical synthetic PCG framework is able to generate the S4-S1-S2-S3 cycles in normal and in cardiac disorder conditions such as stenosis, insufficiency, regurgitation and gallop. In the PCG model, the amplitude and frequency content (5-700 Hz) of each sound and its variation patterns can be specified. The three proposed models were implemented to generate artificial signals with various abnormality types and signal-to-noise ratios (SNRs), for quantitative detection-delineation performance assessment of several ECG, PCG and ABP individual event detectors designed based on the Hilbert transform, the discrete wavelet transform, geometric features such as the area curve length metric (ACLM), the multiple higher order moments (MHOM) metric, and the principal components analysed geometric index (PCAGI). For each method the detection-delineation operating characteristics were obtained automatically in terms of sensitivity, positive predictivity and delineation (segmentation) error RMS, and checked by a cardiologist. The Matlab m-file scripts of the synthetic ECG, ABP and PCG signal generators are available in the Appendix.
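
The paper's algebraic subroutines are available as Matlab m-files; as a loose illustration of the idea only (not the authors' model), a single PQRST cycle can be approximated by a sum of Gaussian bumps with hand-picked centers, widths and amplitudes:

```python
import math

def gaussian(t, center, width, amp):
    return amp * math.exp(-((t - center) / width) ** 2)

def pqrst_cycle(t):
    """One synthetic PQRST cycle on t in [0, 1) s as a sum of Gaussian bumps.
    The centers/widths/amplitudes below are illustrative, not fitted data."""
    waves = [                  # (center s, width s, amplitude mV)
        (0.20, 0.025, 0.15),   # P wave
        (0.27, 0.010, -0.10),  # Q dip
        (0.30, 0.012, 1.00),   # R spike
        (0.33, 0.010, -0.20),  # S dip
        (0.45, 0.040, 0.30),   # T wave
    ]
    return sum(gaussian(t, c, w, a) for c, w, a in waves)

fs = 500.0                                       # sample rate, Hz
ecg = [pqrst_cycle(n / fs) for n in range(int(fs))]
r_index = max(range(len(ecg)), key=lambda i: ecg[i])   # locate the R peak
```

A generator like this is what makes quantitative detector assessment possible: because the true event locations are known by construction, sensitivity and delineation error can be computed exactly after noise is added.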

  10. Application of Data Smoothing Method in Signal Processing for Vortex Flow Meters

    Directory of Open Access Journals (Sweden)

    Zhang Jun

    2017-01-01

    Full Text Available The vortex flow meter is a typical flow-measurement device. Its output signals can easily be impaired by environmental conditions. In order to obtain an improved estimate of the time-averaged velocity from the vortex flow meter, a signal filtering method is applied in this paper. The method is based on a simple Savitzky-Golay smoothing filter algorithm. Following the algorithm, a numerical program was developed in Python with the scientific library NumPy. Two sample data sets were processed through the program. The results demonstrate that the processed data are acceptable compared with the original data, and an improved estimate of the time-averaged velocity is obtained from the smoothed curves. The simple data-smoothing program proves usable and stable for this filtering task.
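
The Savitzky-Golay step can be sketched without any library: for window length 5 and polynomial order 2 the filter reduces to a fixed convolution with weights (-3, 12, 17, 12, -3)/35 (in practice `scipy.signal.savgol_filter` provides the general case):

```python
def savgol5(y):
    """Savitzky-Golay smoothing, window 5, polynomial order 2.
    The closed-form convolution weights are (-3, 12, 17, 12, -3)/35;
    the two samples at each end are left unsmoothed for simplicity."""
    c = (-3.0, 12.0, 17.0, 12.0, -3.0)
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(c[j] * y[i - 2 + j] for j in range(5)) / 35.0
    return out

# The filter reproduces low-order polynomials exactly while averaging noise.
t = [n * 0.1 for n in range(50)]
clean = [u * u for u in t]
smoothed = savgol5(clean)
```

Because the weights come from a local least-squares polynomial fit, the filter passes signal features up to the fit order through unchanged while attenuating high-frequency noise, which is why it suits the slowly varying mean of a vortex-shedding signal.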

  11. A speed guidance strategy for multiple signalized intersections based on car-following model

    Science.gov (United States)

    Tang, Tie-Qiao; Yi, Zhi-Yan; Zhang, Jian; Wang, Tao; Leng, Jun-Qiang

    2018-04-01

    Signalized intersections play a major role in urban traffic systems. The signal infrastructure and the driving behavior near an intersection are paramount factors with significant impacts on traffic flow and energy consumption. In this paper, a speed guidance strategy is introduced into a car-following model to study the driving behavior and fuel consumption on a single-lane road with multiple signalized intersections. The numerical results indicate that the proposed model can reduce fuel consumption and the average number of stops. The findings provide insightful guidance for eco-driving strategies near signalized intersections.
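
The abstract does not reproduce the paper's specific car-following and guidance equations; as a generic stand-in, an optimal-velocity car-following update of the kind such studies build on (all parameter values illustrative) can be sketched as:

```python
import math

def optimal_velocity(gap, v_max=15.0, d_safe=2.0):
    """Optimal-velocity function: desired speed rises with the headway gap."""
    return v_max * (math.tanh(gap - d_safe) + math.tanh(d_safe)) \
        / (1.0 + math.tanh(d_safe))

def step(positions, speeds, dt=0.1, sens=0.6):
    """One update of a single-lane platoon; the leader keeps its speed.
    Each follower relaxes toward the optimal velocity for its current gap."""
    n = len(positions)
    new_v = speeds[:]
    for i in range(1, n):
        gap = positions[i - 1] - positions[i]
        new_v[i] = speeds[i] + sens * (optimal_velocity(gap) - speeds[i]) * dt
    new_p = [p + v * dt for p, v in zip(positions, new_v)]
    return new_p, new_v

# A follower slightly out of equilibrium settles to the leader's speed.
pos, vel = [50.0, 47.0], [10.0, 10.0]
for _ in range(2000):
    pos, vel = step(pos, vel)
gap = pos[0] - pos[1]
```

A guidance strategy of the kind the paper proposes would additionally adjust the target speed upstream of each signal so that vehicles arrive during green, trading a lower cruise speed for fewer full stops.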

  12. Multi-GNSS signal-in-space range error assessment - Methodology and results

    Science.gov (United States)

    Montenbruck, Oliver; Steigenberger, Peter; Hauschild, André

    2018-06-01

    The positioning accuracy of global and regional navigation satellite systems (GNSS/RNSS) depends on a variety of influence factors. For constellation-specific performance analyses it has become common practice to separate a geometry-related quality factor (the dilution of precision, DOP) from the measurement and modeling errors of the individual ranging measurements (known as user equivalent range error, UERE). The latter is further divided into user equipment errors and contributions related to the space and control segment. The present study reviews the fundamental concepts and underlying assumptions of signal-in-space range error (SISRE) analyses and presents a harmonized framework for multi-GNSS performance monitoring based on the comparison of broadcast and precise ephemerides. The implications of inconsistent geometric reference points, non-common time systems, and signal-specific range biases are analyzed, and strategies for coping with these issues in the definition and computation of SIS range errors are developed. The presented concepts are, furthermore, applied to current navigation satellite systems, and representative results are presented along with a discussion of constellation-specific problems in their determination. Based on data for the January to December 2017 time frame, representative global average root-mean-square (RMS) SISRE values of 0.2 m, 0.6 m, 1 m, and 2 m are obtained for Galileo, GPS, BeiDou-2, and GLONASS, respectively. Roughly two times larger values apply for the corresponding 95th-percentile values. Overall, the study contributes to a better understanding and harmonization of multi-GNSS SISRE analyses and their use as key performance indicators for the various constellations.
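
The SISRE combination such studies build on projects the broadcast-minus-precise orbit error onto the line of sight. A common formulation is sketched below with the frequently quoted GPS weight factors; the exact weights are an assumption here, as they depend on the constellation's orbit altitude:

```python
import math

def sisre(dr, da, dc, dclk, w_r=0.98, w_ac2=1.0 / 49.0):
    """Signal-in-space range error from orbit errors (radial dr, along-track
    da, cross-track dc; metres) and the clock error dclk (metres).
    w_r weights the radial error's line-of-sight projection; w_ac2 strongly
    down-weights along/cross-track errors, which barely affect the range."""
    return math.sqrt((w_r * dr - dclk) ** 2 + w_ac2 * (da ** 2 + dc ** 2))

# Example: radial and clock errors partly cancel in the line-of-sight term.
err = sisre(dr=0.5, da=2.0, dc=1.0, dclk=0.3)
```

The cancellation between radial and clock terms is why SISRE must be computed from the combined difference rather than from orbit and clock statistics separately.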

  13. Understanding signal integrity

    CERN Document Server

    Thierauf, Stephen C

    2010-01-01

    This unique book provides you with practical guidance on understanding and interpreting signal integrity (SI) performance to help you with your challenging circuit board design projects. You will find high-level discussions of important SI concepts presented in a clear and easily accessible format, including question-and-answer sections and bulleted lists. This valuable resource features rules of thumb and simple equations to help you estimate critical signal integrity parameters without using circuit simulators or CAD (computer-aided design) tools. The book is supported with over 120 illustrations.

  14. Measurement of the Average $B^{0}_{s}$ Lifetime in the Decay $B^{0}_{s} \\to J/\\Psi\\Phi$

    Energy Technology Data Exchange (ETDEWEB)

    Pauly, Thilo [Keble College, Oxford (United Kingdom)

    2003-01-01

    The lifetime difference between the long-lived (CP odd) and short-lived (CP even) components of the B_s^0 meson is currently predicted to be of the order of 10% in the Standard Model. It has been suggested that the decay B_s^0 → J/ψ φ is predominantly CP even, and thus the measured average lifetime could be shorter than the lifetime measured in the inclusive decay modes. We present a measurement of the average lifetime of the B_s^0 meson in its decay B_s^0 → J/ψ φ, with J/ψ → μ+μ− and φ → K+K−. Between January 2002 and August 2003 the CDF experiment at the Tevatron was exposed to about 135 pb−1 of pp̄ collisions at a centre-of-mass energy of √s = 1.96 TeV. In the data sample collected with the J/ψ dimuon trigger we fully reconstruct about 125 B_s^0 → J/ψ φ candidates with precision silicon information. This is currently the largest exclusive B_s^0 sample. We perform a fit to the proper decay time information to extract the average B_s^0 lifetime and simultaneously use the mass information to disentangle signal from background. For cross-checks we measure the lifetime in the higher-statistics modes B+ → J/ψ K+ and B0 → J/ψ K*0, which both have similar decay topologies and kinematics. We obtain τ(B_s^0 → J/ψ φ) = (1.31 ± 0.13 (stat.) ± 0.02 (syst.)) ps, which is currently the best single measurement of the B_s^0 lifetime and is consistent with other measurements. This result is not accurate enough to establish the existence of a possible significant lifetime difference between the CP odd and even states.

  15. On the use of peripheral autonomic signals for binary control of body–machine interfaces

    International Nuclear Information System (INIS)

    Falk, Tiago H; Guirgis, Mirna; Power, Sarah; Blain, Stefanie; Chau, Tom

    2010-01-01

    In this work, the potential of using peripheral autonomic (PA) responses as control signals for body–machine interfaces that require no physical movement was investigated. Electrodermal activity, skin temperature, heart rate and respiration rate were collected from six participants, and hidden Markov models (HMMs) were used to automatically detect when a subject was performing music imagery as opposed to being at rest. Experiments were performed under controlled silent conditions as well as in the presence of continuous and startle (e.g. door slamming) ambient noise. With subject-specific HMMs, music imagery was detected under silent conditions with average sensitivity and specificity of 94.2% and 93.3%, respectively. In the presence of startle noise stimuli, sensitivity and specificity of 78.8% and 80.2% were attained, respectively. In environments corrupted by both continuous ambient and startle noise, the specificity further decreased to 75.9%. To improve robustness against environmental noise, a startle noise detection and compensation strategy was proposed. Once in place, performance levels were shown to be comparable to those observed in silence. The obtained results suggest that PA signals, combined with HMMs, can be useful tools for the development of body–machine interfaces that allow individuals with severe motor impairments to communicate and/or interact with their environment
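
The HMM detection step can be illustrated with a minimal discrete-observation Viterbi decoder (a toy two-state model with made-up probabilities, not the subject-specific models trained in the study):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for a discrete-observation HMM."""
    v = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        v.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (v[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            v[t][s] = prob
            back[t][s] = prev
    last = max(v[-1], key=v[-1].get)          # best final state
    path = [last]
    for t in range(len(obs) - 1, 0, -1):      # backtrack
        last = back[t][last]
        path.append(last)
    return path[::-1]

# Toy model: "rest" vs "imagery" states emitting low/high arousal features.
states = ("rest", "imagery")
start = {"rest": 0.6, "imagery": 0.4}
trans = {"rest": {"rest": 0.8, "imagery": 0.2},
         "imagery": {"rest": 0.2, "imagery": 0.8}}
emit = {"rest": {"low": 0.9, "high": 0.1},
        "imagery": {"low": 0.2, "high": 0.8}}
decoded = viterbi(["low", "low", "high", "high"], states, start, trans, emit)
```

In the study's setting the observations would be (quantized or Gaussian-modelled) autonomic features rather than symbolic labels, and the sticky self-transitions play the same smoothing role, discouraging spurious single-sample state flips.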

  16. A Novel Method for Control Performance Assessment with Fractional Order Signal Processing and Its Application to Semiconductor Manufacturing

    Directory of Open Access Journals (Sweden)

    Kai Liu

    2018-06-01

    The central task of control performance assessment (CPA) is to review and evaluate the performance of the control system. Control systems in the semiconductor industry exhibit complex dynamic behavior that is hard to analyze. This paper investigates the crossover properties of Hurst exponent estimates and proposes a novel method for feature extraction from nonlinear multi-input multi-output (MIMO) systems. First, coupled data from real industry are analyzed by multifractal detrended fluctuation analysis (MFDFA) and the resulting multifractal spectrum is obtained. Second, the crossover points in the scale-law curve are located with a spline fit and used to segment the curve into several scaling regions, in each of which a single Hurst exponent can be estimated. Third, to further ascertain the origin of the multifractality of the control signals, the generalized Hurst exponents of the original series are compared with those of shuffled data. Finally, non-Gaussian statistical properties, multifractal properties and Hurst exponents of the process control variables are derived and compared across different sets of tuning parameters. The results show that CPA of MIMO systems can be carried out more effectively with the help of fractional order signal processing (FOSP).
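    MFDFA generalizes the monofractal DFA used within each scaling region; as a minimal, assumption-laden sketch (fixed window sizes, linear detrending, not the paper's MFDFA pipeline), a single Hurst exponent can be estimated from the slope of the log-log fluctuation curve:

```python
import numpy as np

def dfa_hurst(x, scales=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent of series x by detrended fluctuation analysis."""
    y = np.cumsum(x - np.mean(x))              # integrated profile of the series
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        f2 = []
        for seg in segs:                       # detrend each window with a linear fit
            a, b = np.polyfit(t, seg, 1)
            f2.append(np.mean((seg - (a * t + b)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    # The slope of log F(s) versus log s is the scaling (Hurst) exponent.
    h, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return h

rng = np.random.default_rng(1)
h = dfa_hurst(rng.standard_normal(4096))
print(round(h, 2))  # uncorrelated white noise scales with H close to 0.5
```

    A crossover in the fitted slope between scale ranges is exactly the feature the paper exploits to segment the scale-law curve.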

  17. Advanced signal processing based on support vector regression for lidar applications

    Science.gov (United States)

    Gelfusa, M.; Murari, A.; Malizia, A.; Lungaroni, M.; Peluso, E.; Parracino, S.; Talebzadeh, S.; Vega, J.; Gaudio, P.

    2015-10-01

    The LIDAR technique has recently found many applications in atmospheric physics and remote sensing. One of the main issues in the deployment of systems based on LIDAR is the filtering of the backscattered signal to alleviate the problems generated by noise. Improvement in the signal to noise ratio is typically achieved by averaging quite a large number (of the order of hundreds) of successive laser pulses. This approach can be effective but has significant limitations. First, it places great stress on the laser source, particularly in systems for automatic monitoring of large areas over long periods. Second, this solution can become difficult to implement in applications characterised by rapid variations of the atmosphere, for example in the case of pollutant emissions, or by abrupt changes in the noise. In this contribution, a new method for the software filtering and denoising of LIDAR signals is presented. The technique is based on support vector regression. The proposed method is insensitive to the statistics of the noise and is therefore fully general and quite robust. The developed numerical tool has been systematically compared with the most powerful techniques available, using both synthetic and experimental data. Its performance has been tested for various statistical distributions of the noise and also for other disturbances of the acquired signal, such as outliers. The competitive advantages of the proposed method are fully documented. The potential of the proposed approach to widen the capability of the LIDAR technique, particularly in the detection of widespread smoke, is discussed in detail.
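    Training an SVR requires a quadratic-programming solver; as a stand-in illustration of the same regression-based denoising idea (a Nadaraya-Watson Gaussian kernel regressor, not the authors' SVR), the snippet below smooths a toy decaying return and compares the error against the clean signal:

```python
import numpy as np

def kernel_smooth(r, y, bandwidth):
    """Nadaraya-Watson Gaussian kernel regression: a simple stand-in for SVR."""
    w = np.exp(-0.5 * ((r[:, None] - r[None, :]) / bandwidth) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(2)
r = np.linspace(0.1, 5.0, 400)                   # range axis (km), hypothetical
clean = np.exp(-r / 2)                           # toy decaying lidar-like return
noisy = clean + rng.normal(0, 0.05, r.size)      # additive acquisition noise
denoised = kernel_smooth(r, noisy, bandwidth=0.08)

print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```

    Like the SVR approach, this operates on a single acquisition in software, so it avoids the hardware cost of averaging hundreds of laser pulses; unlike SVR, it has no epsilon-insensitive loss and is less robust to outliers.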

  18. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...
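    The fleet average prescribed by this part is a production-weighted harmonic mean of model-type fuel economies, so low-mpg vehicles weigh more heavily than in an arithmetic mean. A minimal sketch with illustrative numbers (not values from the regulation):

```python
def fleet_average_mpg(volumes, mpgs):
    """Production-weighted harmonic mean, the form used for fleet fuel economy."""
    return sum(volumes) / sum(v / m for v, m in zip(volumes, mpgs))

# Two hypothetical model types: 1000 units at 20 mpg and 1000 units at 30 mpg.
avg = fleet_average_mpg([1000, 1000], [20.0, 30.0])
print(avg)  # -> 24.0, below the arithmetic mean of 25.0
```

    The harmonic form reflects fuel consumed per distance: gallons add linearly, so averaging in gallons-per-mile and inverting gives the fleet figure.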

  19. Characterisation and correction of signal fluctuations in successive acquisitions of microarray images

    Directory of Open Access Journals (Sweden)

    François Nicolas

    2009-03-01

    Background: There are many sources of variation in dual-labelled microarray experiments, including data acquisition and image processing. The final interpretation of experiments relies strongly on the accuracy of the signal intensity measurement. For low-intensity spots in particular, accurately estimating gene expression variations remains a challenge, as signal measurement is in this case highly subject to fluctuations. Results: To evaluate the fluctuations in the fluorescence intensities of spots, we used series of successive scans, at the same settings, of whole genome arrays. We measured the decrease in fluorescence and evaluated the influence of different parameters (PMT gain, resolution and chemistry of the slide) on the signal variability, both for the array as a whole and by intensity interval. Moreover, we assessed the effect of averaging scans on the fluctuations. We found that the extent of photo-bleaching was low and established that (1) the fluorescence fluctuation is linked to the resolution, i.e. it depends on the number of pixels in the spot; (2) the fluorescence fluctuation increases as the scanner voltage increases and is higher for the red than for the green fluorescence, which can introduce bias into the analysis; (3) the signal variability is linked to the intensity level and is higher for low intensities; (4) the heterogeneity of the spots and the variability of the signal and the intensity ratios decrease when two or three scans are averaged. Conclusion: Protocols consisting of two scans, one at low and one at high PMT gain, or of multiple scans (ten scans), can introduce bias or be difficult to implement. We found that averaging two, or at most three, acquisitions of microarrays scanned at moderate photomultiplier settings (PMT gain) is sufficient to significantly improve the accuracy (quality) of the data, particularly for spots having low intensities, and we propose this as a general
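    The effect of averaging acquisitions can be sketched with a toy simulation (made-up intensities and a simple multiplicative noise model, not the paper's data): averaging three scans shrinks the per-spot coefficient of variation by roughly 1/sqrt(3):

```python
import numpy as np

rng = np.random.default_rng(3)
true_intensity = np.array([50.0, 500.0, 5000.0])   # low, mid, high intensity spots

def scan(noise_frac=0.2):
    """One simulated acquisition: multiplicative fluctuation around the true value."""
    return true_intensity * (1 + rng.normal(0, noise_frac, true_intensity.size))

single = np.array([scan() for _ in range(1000)])
avg3 = np.array([(scan() + scan() + scan()) / 3 for _ in range(1000)])

# Coefficient of variation per spot, for single scans and three-scan averages.
cv_single = single.std(axis=0) / true_intensity
cv_avg3 = avg3.std(axis=0) / true_intensity
print(cv_single.round(3), cv_avg3.round(3))
```

    In this purely multiplicative toy model the relative gain is the same at every intensity; the paper's finding that low-intensity spots benefit most reflects the additive noise floor of real scanners, which this sketch omits.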

  20. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time-varying disturbances such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high-density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large- or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large-area defect in the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large- and small-area phase defects. It identifies and rejects phase maps containing large-area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time parameters for tuning the rejection criteria for bad data; however, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
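    A minimal sketch of the two-stage idea, under simplifying assumptions (NaN-marked voids, a MAD-based per-pixel outlier test, and made-up thresholds; not the authors' algorithm):

```python
import numpy as np

def robust_phase_average(maps, max_void_frac=0.2, max_dev=3.0):
    """Average a stack of phase maps, rejecting defective maps and unreliable pixels.

    maps: array of shape (n_maps, H, W) with NaN marking voids / dropouts.
    """
    maps = np.asarray(maps, dtype=float)
    # Stage 1: reject whole maps whose void area is too large (e.g. unwrap failures).
    void_frac = np.isnan(maps).mean(axis=(1, 2))
    kept = maps[void_frac <= max_void_frac]
    # Stage 2: prune per-pixel outliers relative to the stack median (robust to
    # small-area defects such as isolated unwrapping spikes).
    med = np.nanmedian(kept, axis=0)
    mad = np.nanmedian(np.abs(kept - med), axis=0) + 1e-12
    kept = np.where(np.abs(kept - med) > max_dev * mad, np.nan, kept)
    return np.nanmean(kept, axis=0), np.nanstd(kept, axis=0)

rng = np.random.default_rng(4)
stack = rng.normal(0.0, 0.01, (10, 8, 8))   # ten quiet 8x8 phase maps
stack[0, :, :5] = np.nan                    # one map with a large void: rejected
stack[3, 2, 2] = 3.14                       # a small unwrapping spike: pruned
avg, std = robust_phase_average(stack)
print(np.isnan(avg).any(), float(np.abs(avg).max()))
```

    The real algorithm additionally removes alignment drift before computing the variance; here the median/MAD test alone keeps both the large void and the spike out of the average and its variability estimate.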