Averaging scheme for atomic resolution off-axis electron holograms.
Niermann, T; Lehmann, M
2014-08-01
All micrographs are limited by shot noise, which is intrinsic to the detection process of electrons. For beam-insensitive specimens this limitation can in principle be circumvented by prolonged exposure times. In the high-resolution regime, however, several instrumental instabilities limit the applicable exposure time. Off-axis holography in particular is highly sensitive to the position and voltage of the electron-optical biprism. We present a novel reconstruction algorithm to average series of off-axis holograms while compensating for drift of the specimen, the biprism, the biprism voltage, and the defocus, all of which may cause problematic changes from exposure to exposure. We show an application of the algorithm that also exploits the possibilities of double-biprism holography, resulting in a high-quality exit-wave reconstruction with 75 pm resolution at a very high signal-to-noise ratio.
Reducing Noise by Repetition: Introduction to Signal Averaging
Hassan, Umer; Anwar, Muhammad Sabieh
2010-01-01
This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
A high speed digital signal averager for pulsed NMR
International Nuclear Information System (INIS)
Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.
1978-01-01
A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal on a CRT at all times during the averaging process. It has a maximum sampling rate of 2.5 μs per point and a memory capacity of 256 × 12-bit words. The number of sweeps is selectable through a front-panel control in binary steps from 2^3 to 2^12. The enhanced signal can be displayed either on a CRT or on a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB.
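The quoted 36 dB maximum is consistent with averaging the full 2^12 sweeps, since power S/N grows linearly with the number of sweeps (10·log10(4096) ≈ 36 dB). A minimal sketch of such a "stable" running average, with illustrative signal and noise levels rather than the instrument's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 2 * np.pi, 256))  # one 256-channel sweep
n_sweeps = 2 ** 12                               # front-panel maximum

avg = np.zeros_like(signal)
for k in range(1, n_sweeps + 1):
    sweep = signal + rng.normal(0.0, 1.0, signal.size)  # noisy acquisition
    avg += (sweep - avg) / k  # "stable" running mean: calibrated after every sweep

gain_db = 10 * np.log10(n_sweeps)     # power S/N grows linearly with N
residual = np.std(avg - signal)       # leftover noise after averaging
print(round(gain_db, 1), round(residual, 3))
```

The incremental update is what keeps the displayed average calibrated at all times, which is the point of "stable averaging".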
Application of NMR circuit for superconducting magnet using signal averaging
International Nuclear Information System (INIS)
Yamada, R.; Ishimoto, H.; Shea, M.F.; Schmidt, E.E.; Borer, K.
1977-01-01
An NMR circuit was used to measure the absolute field values of Fermilab Energy Doubler magnets up to 44 kG. A signal averaging method to improve the S/N ratio was implemented by means of a Tektronix Digital Processing Oscilloscope, followed by the development of an inexpensive microprocessor based system contained in a NIM module. Some of the data obtained from measuring two superconducting dipole magnets are presented
Ultrasonic correlator versus signal averager as a signal to noise enhancement instrument
Kishoni, Doron; Pietsch, Benjamin E.
1989-01-01
Ultrasonic inspection of thick and attenuating materials is hampered by the reduced amplitudes of the propagated waves to a degree that the noise is too high to enable meaningful interpretation of the data. In order to overcome the low Signal to Noise (S/N) ratio, a correlation technique has been developed. In this method, a continuous pseudo-random pattern generated digitally is transmitted and detected by piezoelectric transducers. A correlation is performed in the instrument between the received signal and a variable delayed image of the transmitted one. The result is shown to be proportional to the impulse response of the investigated material, analogous to a signal received from a pulsed system, with an improved S/N ratio. The degree of S/N enhancement depends on the sweep rate. This paper describes the correlator, and compares it to the method of enhancing S/N ratio by averaging the signals. The similarities and differences between the two are highlighted and the potential advantage of the correlator system is explained.
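The correlator's principle can be sketched as follows: cross-correlating the received signal with delayed copies of a pseudo-random ±1 excitation approximates the material's impulse response, because the code's autocorrelation is nearly a delta function. The echo positions and amplitudes below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
# Pseudo-random +/-1 excitation, standing in for the transmitted pattern
code = rng.choice([-1.0, 1.0], size=4096)

# Hypothetical impulse response of the inspected material: two echoes
h = np.zeros(64)
h[10], h[40] = 1.0, 0.5

received = np.convolve(code, h)[: code.size] + rng.normal(0, 2.0, code.size)

# Correlating the received signal against delayed copies of the code
# estimates the impulse response (scaled by the code length)
est = np.array([np.dot(received, np.roll(code, d)) for d in range(64)]) / code.size
```

Even with noise twice the echo amplitude, the two echoes emerge clearly, which is the S/N advantage the abstract describes.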
Signal-averaged P wave duration and the dimensions of the atria
DEFF Research Database (Denmark)
Dixen, Ulrik; Joens, Christian; Rasmussen, Bo V
2004-01-01
Delay of atrial electrical conduction measured as prolonged signal-averaged P wave duration (SAPWD) could be due to atrial enlargement. Here, we aimed to compare different atrial size parameters obtained from echocardiography with the SAPWD measured with a signal-averaged electrocardiogram (SAECG)....
Signal detection without finite-energy limits to quantum resolution
Luis Aina, Alfredo
2013-01-01
We show that there are extremely simple signal-detection schemes where the finiteness of energy resources places no limit on the resolution. On the contrary, larger resolution can be obtained with lower energy. To this end, the generator of the signal-dependent transformation encoding the signal information on the probe state must be different from the energy. We show that the larger the deviation of the probe state from the minimum-uncertainty state, the better the resolution.
Large-signal analysis of DC motor drive system using state-space averaging technique
International Nuclear Information System (INIS)
Bekir Yildiz, Ali
2008-01-01
The analysis of a separately excited DC motor driven by a DC-DC converter is realized using the state-space averaging technique. First, a general and unified large-signal averaged circuit model for DC-DC converters is given. The method converts power-electronic systems, which are periodically time-variant because of their switching operation, into unified, time-independent systems. Using the averaged circuit model makes it possible to combine the different converter topologies. Thus, all analysis and design processes for the DC motor can easily be carried out using the unified averaged model, which is valid over the whole switching period. Large-signal variations such as motor speed and current, the steady-state analysis, and the large- and small-signal transfer functions are easily obtained from the averaged circuit model.
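As a minimal illustration of state-space averaging (using an ideal buck converter with illustrative component values, not the paper's motor-drive model), the switched topology is replaced by one averaged model weighted by the duty cycle d, whose steady state gives v = d·Vin:

```python
# Large-signal state-space averaged model of an ideal buck converter
# driving a resistive load; all component values are illustrative.
Vin, d = 12.0, 0.5          # input voltage and duty cycle
L, C, R = 1e-3, 1e-4, 10.0  # inductance, capacitance, load resistance

i, v = 0.0, 0.0             # averaged inductor current and capacitor voltage
dt = 1e-6
for _ in range(100_000):    # 0.1 s of simulated time, forward Euler
    di = (d * Vin - v) / L  # averaged inductor equation: duty-weighted input
    dv = (i - v / R) / C    # averaged capacitor equation
    i += di * dt
    v += dv * dt

# In steady state the averaged (time-independent) model predicts v = d * Vin
print(round(v, 2))
```

The switching ripple disappears from the averaged model; only the slow, large-signal dynamics remain, which is what makes the unified analysis tractable.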
Manipulating cell signaling with subcellular spatial resolution
Czech Academy of Sciences Publication Activity Database
Yushchenko, Dmytro A.; Nadler, A.; Schultz, C.
2016-01-01
Roč. 15, č. 8 (2016), s. 1023-1024 ISSN 1538-4101 Institutional support: RVO:61388963 Keywords : arachidonic acid * caging group * insulin secretion * photorelease * signaling lipids Subject RIV: CE - Biochemistry Impact factor: 3.530, year: 2016
Han, Sheng; Xi, Shi-qiong; Geng, Wei-dong
2017-11-01
To address the low recognition rate of traditional feature-extraction operators on low-resolution images, a novel expression-recognition algorithm is proposed, named central oblique average center-symmetric local binary pattern (CS-LBP) with adaptive threshold (ATCS-LBP). Firstly, features are extracted from the preprocessed face images by the proposed operator. Secondly, the obtained feature image is divided into blocks. Thirdly, the histogram of each block is computed independently, and all histograms are concatenated to form the final feature vector. Finally, expression classification is performed with a support vector machine (SVM) classifier. Experimental results on the Japanese female facial expression (JAFFE) database show that the proposed algorithm achieves a recognition rate of 81.9% at a resolution as low as 16×16, which is much better than that of traditional feature-extraction operators.
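The center-symmetric LBP core of such an operator compares the four opposite pixel pairs around a center, yielding a 4-bit code per pixel; a sketch with a fixed threshold (illustrative, not the paper's adaptive one):

```python
import numpy as np

def cs_lbp(patch, threshold=0.01):
    """Center-symmetric LBP code for a 3x3 patch: compare the four
    opposite-pixel pairs around the center, giving a 4-bit code (0-15).
    The fixed threshold is an illustrative stand-in for the adaptive one."""
    # Eight neighbours in clockwise order starting top-left
    n = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
         patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for i in range(4):
        if n[i] - n[i + 4] > threshold:  # compare diametrically opposite pixels
            code |= 1 << i
    return code

patch = np.array([[0.9, 0.2, 0.1],
                  [0.4, 0.5, 0.6],
                  [0.1, 0.8, 0.3]])
print(cs_lbp(patch))
```

Comparing opposite pairs rather than each neighbour against the center halves the code length (16 vs 256 bins), which keeps block histograms dense even on 16×16 images.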
Accurate measurement of imaging photoplethysmographic signals based camera using weighted average
Pang, Zongguang; Kong, Lingqin; Zhao, Yuejin; Sun, Huijuan; Dong, Liquan; Hui, Mei; Liu, Ming; Liu, Xiaohua; Liu, Lingling; Li, Xiaohui; Li, Rongji
2018-01-01
Imaging photoplethysmography (IPPG) is an emerging technique for extracting the vital signs of a human subject from video recordings. With its advantages of non-contact measurement, low cost, and easy operation, IPPG has become a research hot spot in biomedicine. However, noise from non-micro-arterial areas cannot simply be removed: the micro-arterial distribution is uneven and signal strength differs between regions, which results in a low signal-to-noise ratio of IPPG signals and low heart-rate accuracy. In this paper, we propose a method of improving the signal-to-noise ratio of camera-based IPPG signals using a weighted average over sub-regions of the face. Firstly, we obtain the regions of interest (ROI) of the subject's face from the camera. Secondly, each region of interest is tracked and feature-matched in each frame of the video, and each tracked region is divided into 60×60-pixel blocks. Thirdly, the weight of the PPG signal of each sub-region is calculated from that sub-region's signal-to-noise ratio. Finally, we combine the IPPG signals from all tracked ROI using the weighted average. Compared with existing approaches, the results show that the proposed method yields a modest but significant improvement in the signal-to-noise ratio of the camera-based PPG estimate and in the accuracy of heart-rate measurement.
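The weighting step can be sketched as follows: each sub-region's signal is weighted by its estimated (power) SNR before combining, which is the minimum-variance choice for independent noise. The noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 2000)
pulse = np.sin(2 * np.pi * 1.2 * t)       # 72 bpm "ground truth" pulse wave

# Sub-regions with very different noise levels (uneven micro-arterial density)
noise_levels = [0.2, 0.5, 1.0, 3.0]
regions = [pulse + rng.normal(0, s, t.size) for s in noise_levels]

# Weight each sub-region by its power SNR (here known; in practice estimated)
snr = np.array([1.0 / s**2 for s in noise_levels])
w = snr / snr.sum()
weighted = sum(wi * r for wi, r in zip(w, regions))
uniform = sum(regions) / len(regions)

err_w = np.std(weighted - pulse)   # residual noise, SNR-weighted combination
err_u = np.std(uniform - pulse)    # residual noise, plain average
print(round(err_w, 3), round(err_u, 3))
```

A plain average lets the noisiest block dominate; SNR weighting suppresses it, which is why the per-block weights matter.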
Energy Technology Data Exchange (ETDEWEB)
Yun, Yuxing [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences, Beijing China; Fan, Jiwen [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Xiao, Heng [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Zhang, Guang J. [Scripps Institution of Oceanography, University of California, San Diego CA USA; Ghan, Steven J. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Xu, Kuan-Man [NASA Langley Research Center, Hampton VA USA; Ma, Po-Lun [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Gustafson, William I. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA
2017-11-01
Realistic modeling of cumulus convection at fine model resolutions (a few to a few tens of km) is problematic, since it requires cumulus schemes to adapt to higher resolutions than they were originally designed for (~100 km). To solve this problem, we implement the spatial averaging method proposed in Xiao et al. (2015) and also propose a temporal averaging method for the large-scale convective available potential energy (CAPE) tendency in the Zhang-McFarlane (ZM) cumulus parameterization. The resolution adaptability of the original ZM scheme, the scheme with spatial averaging, and the scheme with both spatial and temporal averaging at 4-32 km resolution is assessed using the Weather Research and Forecasting (WRF) model, by comparing with cloud-resolving model (CRM) results. We find that the original ZM scheme has very poor resolution adaptability, with sub-grid convective transport and precipitation increasing significantly as the resolution increases. The spatial averaging method improves the resolution adaptability of the ZM scheme and better conserves the total transport of moist static energy and total precipitation. With the temporal averaging method, the resolution adaptability of the scheme is further improved, with sub-grid convective precipitation becoming smaller than resolved precipitation for resolutions higher than 8 km, consistent with the results from the CRM simulation. Both the spatial distribution and the time series of precipitation are improved with the spatial and temporal averaging methods. The results may be helpful for developing resolution adaptability for other cumulus parameterizations that are based on the quasi-equilibrium assumption.
Inverse methods for estimating primary input signals from time-averaged isotope profiles
Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.
2005-08-01
Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
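A sketch of the inverse step, assuming a known averaging matrix A built from a simple moving-average window (a stand-in for the real amelogenesis and sampling kernel): the minimum-length solution of A m = d is the minimum-norm least-squares solution, available via the pseudoinverse.

```python
import numpy as np

# Each measured sample d_i is a moving average of the input signal m
# over a window (rows of A); window and sizes are illustrative.
n_in, win = 40, 8
A = np.zeros((n_in - win + 1, n_in))
for i in range(A.shape[0]):
    A[i, i:i + win] = 1.0 / win

m_true = np.sin(np.linspace(0, 3 * np.pi, n_in))   # seasonal input signal
d = A @ m_true                                     # time-averaged profile

# Minimum-length (minimum-norm) solution of A m = d
m_est = np.linalg.pinv(A) @ d
```

Because the system is underdetermined, the pseudoinverse picks the shortest of all input histories consistent with the measured profile, exactly reproducing d while never exceeding the norm of the true input.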
Improving sensitivity in micro-free flow electrophoresis using signal averaging
Turgeon, Ryan T.; Bowser, Michael T.
2009-01-01
Microfluidic free-flow electrophoresis (μFFE) is a separation technique that separates continuous streams of analytes as they travel through an electric field in a planar flow channel. The continuous nature of the μFFE separation suggests that approaches more commonly applied in spectroscopy and imaging may be effective in improving sensitivity. The current paper describes the S/N improvements that can be achieved by simply averaging multiple images of a μFFE separation; 20–24-fold improvements in S/N were observed by averaging the signal from 500 images recorded over 2 min. Up to an 80-fold improvement in S/N was observed by averaging 6500 images. Detection limits as low as 14 pM were achieved for fluorescein, which is impressive considering the non-ideal optical set-up used in these experiments. The limitation of this signal-averaging approach was the stability of the μFFE separation. At separation times longer than 20 min, bubbles began to form at the electrodes, which disrupted the flow profile through the device, giving rise to erratic peak positions. PMID:19319908
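The reported 20-24-fold gain from 500 images matches the sqrt(N) scaling expected when averaging frames with independent noise (sqrt(500) ≈ 22.4), which a quick simulation with synthetic frames reproduces:

```python
import numpy as np

rng = np.random.default_rng(3)
truth = np.linspace(0, 1, 100)       # one synthetic row of the separation image
frames = truth + rng.normal(0, 1, (500, truth.size))  # 500 noisy exposures

avg = frames.mean(axis=0)
# Ratio of single-frame noise to averaged-frame noise ~ sqrt(500) ~ 22.4
gain = np.std(frames[0] - truth) / np.std(avg - truth)
print(round(gain, 1))
```

The sqrt(N) law also explains the diminishing returns at 6500 images (sqrt(6500) ≈ 81, matching the observed ~80-fold gain) once drift is excluded.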
High Resolution of the ECG Signal by Polynomial Approximation
Directory of Open Access Journals (Sweden)
G. Rozinaj
2006-04-01
Averaging techniques such as temporal averaging and space averaging have been successfully used in many applications for attenuating interference [6], [7], [8], [9], [10]. In this paper we introduce interference removal for the ECG signal by polynomial approximation, with smoothing of discrete dependencies, to complement averaging methods. The method is suitable for low-level signals of the electrical activity of the heart, often less than 10 mV. Most low-level signals arise from the PR, ST and TP segments; these can eventually be detected and their physiologic meaning appreciated. Of special importance for the diagnosis of the electrical activity of the heart is the activity of the bundle of His between the P and R waveforms. We have added an artificial sine wave to the ECG signal between the P and R waves. The aim is to verify the smoothing method by polynomial approximation when the SNR (signal-to-noise ratio) is negative (i.e., the signal is lower than the noise).
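A minimal sketch of the idea, with an invented low-level segment whose amplitude sits below the noise (negative SNR): a least-squares polynomial fit acts as the smoother.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 400)
segment = 0.05 * np.sin(2 * np.pi * 2 * t)       # low-level ST-like wave
noisy = segment + rng.normal(0, 0.1, t.size)     # noise above signal (SNR < 0 dB)

coeffs = np.polyfit(t, noisy, deg=7)             # smoothing polynomial fit
smoothed = np.polyval(coeffs, t)

err_raw = np.std(noisy - segment)                # error before smoothing
err_fit = np.std(smoothed - segment)             # error after smoothing
print(round(err_raw, 3), round(err_fit, 3))
```

A low-order polynomial cannot follow the broadband noise, so the fit recovers the slow waveform even though each individual sample is noise-dominated; the degree is an illustrative choice.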
Real-time traffic signal optimization model based on average delay time per person
Directory of Open Access Journals (Sweden)
Pengpeng Jiao
2015-10-01
Real-time traffic signal control is very important for relieving urban traffic congestion. Many existing traffic control models were formulated using an optimization approach, with objective functions that minimize vehicle delay time. To improve people's trip efficiency, this article aims to minimize delay time per person. Based on time-varying traffic flow data at intersections, the article first fits curves of cumulative arriving and departing vehicles, as well as the corresponding functions. Moreover, this article converts vehicle delay time to personal delay time using the average passenger loads of cars and buses, employs this time as the objective function, and proposes a signal-timing optimization model for intersections to obtain real-time signal parameters, including cycle length and green time. This research further implements a case study based on practical data collected at an intersection in Beijing, China. The average delay time per person and the queue length are employed as evaluation indices to show the performance of the model. The results show that the proposed methodology is capable of improving traffic efficiency and is very effective for real-world applications.
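The conversion from vehicle delay to personal delay reduces to an occupancy-weighted average; the passenger loads below are illustrative assumptions, not the paper's calibrated values:

```python
# Converting vehicle delay to per-person delay using average passenger
# loads (the occupancy figures below are illustrative assumptions)
def delay_per_person(car_delay_s, car_count, bus_delay_s, bus_count,
                     car_occ=1.5, bus_occ=30.0):
    total_person_delay = (car_delay_s * car_count * car_occ
                          + bus_delay_s * bus_count * bus_occ)
    total_persons = car_count * car_occ + bus_count * bus_occ
    return total_person_delay / total_persons

# 200 cars delayed 40 s each, 10 buses delayed 60 s each
print(round(delay_per_person(40, 200, 60, 10), 1))
```

With these loads, the ten buses carry as many people as the 200 cars, so the per-person objective weights bus delay far more heavily than a per-vehicle objective would.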
Hayashi, Risa; Nakai, Kenji; Fukushima, Akimune; Itoh, Manabu; Sugiyama, Toru
2009-03-01
Although ultrasonic diagnostic imaging and fetal heart monitors have undergone great technological improvements, the development and use of fetal electrocardiograms to evaluate fetal arrhythmias and autonomic nervous activity have not been fully established. We verified the clinical significance of the novel signal-averaged vector-projected high amplification ECG (SAVP-ECG) method in fetuses from 48 gravidas at 32-41 weeks of gestation and in 34 neonates. SAVP-ECGs from fetuses and newborns were recorded using a modified XYZ-leads system. Once noise and maternal QRS waves were removed, the P, QRS, and T wave intervals were measured from the signal-averaged fetal ECGs. We also compared fetal and neonatal heart rates (HRs), coefficients of variation of heart rate variability (CV) as a parasympathetic nervous activity, and the ratio of low to high frequency (LF/HF ratio) as a sympathetic nervous activity. The rate of detection of a fetal ECG by SAVP-ECG was 72.9%, and the fetal and neonatal QRS and QTc intervals were not significantly different. The neonatal CVs and LF/HF ratios were significantly increased compared with those in the fetus. In conclusion, we have developed a fetal ECG recording method using the SAVP-ECG system, which we used to evaluate autonomic nervous system development.
Brown-Steiner, B.; Selin, N. E.; Prinn, R. G.; Monier, E.; Garcia-Menendez, F.; Tilmes, S.; Emmons, L. K.; Lamarque, J. F.; Cameron-Smith, P. J.
2017-12-01
We summarize two methods to aid in the identification of ozone signals from underlying spatially and temporally heterogeneous data in order to help research communities avoid the sometimes burdensome computational costs of high-resolution high-complexity models. The first method utilizes simplified chemical mechanisms (a Reduced Hydrocarbon Mechanism and a Superfast Mechanism) alongside a more complex mechanism (MOZART-4) within CESM CAM-Chem to extend the number of simulated meteorological years (or add additional members to an ensemble) for a given modeling problem. The Reduced Hydrocarbon mechanism is twice as fast, and the Superfast mechanism is three times faster than the MOZART-4 mechanism. We show that simplified chemical mechanisms are largely capable of simulating surface ozone across the globe as well as the more complex chemical mechanisms, and where they are not capable, a simple standardized anomaly emulation approach can correct for their inadequacies. The second method uses strategic averaging over both temporal and spatial scales to filter out the highly heterogeneous noise that underlies ozone observations and simulations. This method allows for a selection of temporal and spatial averaging scales that match a particular signal strength (between 0.5 and 5 ppbv), and enables the identification of regions where an ozone signal can rise above the ozone noise over a given region and a given period of time. In conjunction, these two methods can be used to "scale down" chemical mechanism complexity and quantitatively determine spatial and temporal scales that could enable research communities to utilize simplified representations of atmospheric chemistry and thereby maximize their productivity and efficiency given computational constraints. While this framework is here applied to ozone data, it could also be applied to a broad range of geospatial data sets (observed or modeled) that have spatial and temporal coverage.
Improvement of the energy resolution via an optimized digital signal processing in GERDA Phase I
Agostini, M.; Allardt, M.; Bakalyarov, A. M.; Balata, M.; Barabanov, I.; Barros, N.; Baudis, L.; Bauer, C.; Becerici-Schmidt, N.; Bellotti, E.; Belogurov, S.; Belyaev, S. T.; Benato, G.; Bettini, A.; Bezrukov, L.; Bode, T.; Borowicz, D.; Brudanin, V.; Brugnera, R.; Budjáš, D.; Caldwell, A.; Cattadori, C.; Chernogorov, A.; D'Andrea, V.; Demidova, E. V.; Vacri, A. di; Domula, A.; Doroshkevich, E.; Egorov, V.; Falkenstein, R.; Fedorova, O.; Freund, K.; Frodyma, N.; Gangapshev, A.; Garfagnini, A.; Grabmayr, P.; Gurentsov, V.; Gusev, K.; Hegai, A.; Heisel, M.; Hemmer, S.; Heusser, G.; Hofmann, W.; Hult, M.; Inzhechik, L. V.; Janicskó Csáthy, J.; Jochum, J.; Junker, M.; Kazalov, V.; Kihm, T.; Kirpichnikov, I. V.; Kirsch, A.; Klimenko, A.; Knöpfle, K. T.; Kochetov, O.; Kornoukhov, V. N.; Kuzminov, V. V.; Laubenstein, M.; Lazzaro, A.; Lebedev, V. I.; Lehnert, B.; Liao, H. Y.; Lindner, M.; Lippi, I.; Lubashevskiy, A.; Lubsandorzhiev, B.; Lutter, G.; Macolino, C.; Majorovits, B.; Maneschg, W.; Medinaceli, E.; Misiaszek, M.; Moseev, P.; Nemchenok, I.; Palioselitis, D.; Panas, K.; Pandola, L.; Pelczar, K.; Pullia, A.; Riboldi, S.; Rumyantseva, N.; Sada, C.; Salathe, M.; Schmitt, C.; Schneider, B.; Schönert, S.; Schreiner, J.; Schütz, A.-K.; Schulz, O.; Schwingenheuer, B.; Selivanenko, O.; Shirchenko, M.; Simgen, H.; Smolnikov, A.; Stanco, L.; Stepaniuk, M.; Ur, C. A.; Vanhoefer, L.; Vasenko, A. A.; Veresnikova, A.; von Sturm, K.; Wagner, V.; Walter, M.; Wegmann, A.; Wester, T.; Wilsenach, H.; Wojcik, M.; Yanovich, E.; Zavarise, P.; Zhitnikov, I.; Zhukov, S. V.; Zinatulina, D.; Zuber, K.; Zuzel, G.
2015-06-01
An optimized digital shaping filter has been developed for the Gerda experiment, which searches for neutrinoless double beta decay in ⁷⁶Ge. The Gerda Phase I energy calibration data have been reprocessed, and an average improvement of 0.3 keV in energy resolution (FWHM), corresponding to 10% at the Q value for neutrinoless double beta decay in ⁷⁶Ge, is obtained. This is possible thanks to the enhanced low-frequency noise rejection of this Zero Area Cusp (ZAC) signal shaping filter.
Abboud, S.; Blatt, C. M.; Lown, B.; Graboys, T. B.; Sadeh, D.; Cohen, R. J.
1987-01-01
An advanced non-invasive signal averaging technique was used to detect late potentials in two groups of patients: Group A (24 patients) with coronary artery disease (CAD) and without sustained ventricular tachycardia (VT), and Group B (8 patients) with CAD and sustained VT. Recorded analog data were digitized and aligned using a cross-correlation function with a fast Fourier transform scheme, averaged, and band-pass filtered between 60 and 200 Hz with a non-recursive digital filter. Averaged filtered waveforms were analyzed by computer program for 3 parameters: (1) filtered QRS (fQRS) duration; (2) interval between the peak of the R wave and the end of fQRS (R-LP); (3) RMS value of the last 40 msec of fQRS (RMS). Significant differences were found between Groups A and B in fQRS (101 ± 13 msec vs 123 ± 15 msec; p < .0005) and in R-LP (52 ± 11 msec vs 71 ± 18 msec; p < .002). We conclude that (1) the use of a cross-correlation triggering method and a non-recursive digital filter enables reliable recording of late potentials from the body surface; and (2) fQRS and R-LP durations are sensitive indicators of CAD patients susceptible to VT.
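The alignment-then-average pipeline can be sketched as follows: each beat is shifted by the lag that maximizes its cross-correlation with a template before averaging, so trigger jitter does not smear the late, low-amplitude components. The waveform and noise levels are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
template = np.exp(-0.5 * ((np.arange(200) - 100) / 5.0) ** 2)  # QRS-like spike

# Beats arrive with random trigger jitter plus noise
beats = []
for _ in range(50):
    shift = int(rng.integers(-20, 21))
    beats.append(np.roll(template, shift) + rng.normal(0, 0.3, 200))

# Align each beat to the template by the lag maximizing cross-correlation
aligned = []
for b in beats:
    xc = np.correlate(b, template, mode="full")
    lag = int(np.argmax(xc)) - (len(template) - 1)
    aligned.append(np.roll(b, -lag))

avg = np.mean(aligned, axis=0)
peak = int(np.argmax(avg))        # aligned average peaks at the template peak
```

Without alignment, ±20-sample jitter would broaden a 5-sample-wide feature beyond recognition; with it, averaging suppresses the noise while keeping the waveform sharp.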
Brain Network Analysis from High-Resolution EEG Signals
de Vico Fallani, Fabrizio; Babiloni, Fabio
Over the last decade, there has been a growing interest in the detection of functional connectivity in the brain from different neuroelectromagnetic and hemodynamic signals recorded by several neuro-imaging devices such as the functional Magnetic Resonance Imaging (fMRI) scanner, electroencephalography (EEG) and magnetoencephalography (MEG) apparatus. Many methods have been proposed and discussed in the literature with the aim of estimating the functional relationships among different cerebral structures. However, an objective comprehension of the network composed by the functional links of different brain regions is assuming an essential role in neuroscience. Consequently, there is wide interest in the development and validation of mathematical tools appropriate for spotting significant features that could describe concisely the structure of the estimated cerebral networks. The extraction of salient characteristics from brain connectivity patterns is an open and challenging topic, since the estimated cerebral networks often have a relatively large size and complex structure. Recently, it was realized that the functional connectivity networks estimated from actual brain-imaging technologies (MEG, fMRI and EEG) can be analyzed by means of graph theory. Since a graph is a mathematical representation of a network, which is essentially reduced to nodes and connections between them, the use of a theoretical graph approach seems relevant and useful, as first demonstrated on a set of anatomical brain networks. In those studies, the authors employed two characteristic measures, the average shortest path L and the clustering index C, to extract the global and local properties of the network structure, respectively. They found that anatomical brain networks exhibit many local connections (i.e. a high C) and few random long-distance connections (i.e. a low L). These values identify a particular model that interpolates between a regular
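The two measures can be computed directly from an adjacency structure; a self-contained sketch on a toy graph (two triangles bridged by one edge), not on actual brain-network data:

```python
from collections import deque
from itertools import combinations

# Toy undirected network: two triangles (high clustering) joined by a
# single long-range edge, illustrating the C and L measures
N = 6
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
adj = {n: set() for n in range(N)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def clustering(n):
    """Fraction of a node's neighbour pairs that are themselves linked."""
    nbrs = sorted(adj[n])
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2.0 * links / (len(nbrs) * (len(nbrs) - 1))

def bfs_dist(src):
    """Hop distances from src to every reachable node (BFS)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

C = sum(clustering(n) for n in range(N)) / N                      # clustering index
L = sum(d for n in range(N) for d in bfs_dist(n).values()) / (N * (N - 1))
print(round(C, 3), round(L, 3))
```

The triangles drive C up while the single bridge keeps path lengths short, the small-world pattern the abstract describes for anatomical networks.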
DEFF Research Database (Denmark)
Lenzi, Amanda; Pinson, Pierre; Clemmensen, Line Katrine Harder
2017-01-01
…average wind power generation, and for a high temporal resolution (typically wind power averages over 15-min time steps). In both cases, we use a spatial hierarchical statistical model in which spatial correlation is captured by a latent Gaussian field. We explore how such models can be handled… with stochastic partial differential approximations of Matérn Gaussian fields together with Integrated Nested Laplace Approximations. We demonstrate the proposed methods on wind farm data from Western Denmark, and compare the results to those obtained with standard geostatistical methods. The results show…
International Nuclear Information System (INIS)
Kobayashi, T.; Yoshinuma, M.; Ohdachi, S.; Ida, K.; Itoh, K.; Moon, C.; Yamada, I.; Funaba, H.; Yasuhara, R.; Tsuchiya, H.; Yoshimura, Y.; Igami, H.; Shimozuma, T.; Kubo, S.; Tsujimura, T. I.; Inagaki, S.
2016-01-01
This paper provides a software application of the sampling scope concept for fusion research. The time evolution of Thomson scattering data is reconstructed with a high temporal resolution during a modulated electron cyclotron resonance heating (MECH) phase. The amplitude profile and the delay time profile of the heat pulse propagation are obtained from the reconstructed signal for discharges having on-axis and off-axis MECH depositions. The results are found to be consistent with the MECH deposition.
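The sampling-scope idea applied here can be sketched generically: samples of a periodic response taken far below its repetition rate are folded modulo the modulation period, yielding one densely sampled cycle. The waveform and timing values below are illustrative, not Thomson-scattering data:

```python
import numpy as np

# Equivalent-time ("sampling scope") reconstruction: a periodic waveform
# sampled far below its repetition rate is folded modulo its period,
# recovering a densely sampled single cycle.
period = 1.0e-3                  # 1 kHz modulation period (illustrative)
t_sample = 9.7e-5                # slow, incommensurate sampling step
n = 2000

t = np.arange(n) * t_sample
samples = np.sin(2 * np.pi * t / period)     # measured periodic response

phase = t % period                           # fold onto one modulation cycle
order = np.argsort(phase)
phase, wave = phase[order], samples[order]

# The folded record resolves the cycle with ~n points instead of ~10
err = np.max(np.abs(wave - np.sin(2 * np.pi * phase / period)))
```

Each slow sweep lands at a different phase of the modulation, so many sweeps together fill in the cycle, which is how a slow diagnostic gains high effective temporal resolution.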
Kamath, Ganesh S.; Zareba, Wojciech; Delaney, Jessica; Koneru, Jayanthi N.; McKenna, William; Gear, Kathleen; Polonsky, Slava; Sherrill, Duane; Bluemke, David; Marcus, Frank; Steinberg, Jonathan S.
2011-01-01
Background Arrhythmogenic right ventricular cardiomyopathy/dysplasia (ARVC/D) is an inherited disease causing structural and functional abnormalities of the right ventricle (RV). The presence of late potentials as assessed by the signal averaged electrocardiogram (SAECG) is a minor Task Force criterion. Objective The purpose of this study was to examine the diagnostic and clinical value of the SAECG in a large population of genotyped ARVC/D probands. Methods We compared the SAECGs of 87 ARVC/D probands (age 37 ± 13 years, 47 males) diagnosed as affected or borderline by Task Force criteria without using the SAECG criterion with 103 control subjects. The association of SAECG abnormalities was also correlated with clinical presentation; surface ECG; VT inducibility at electrophysiologic testing; ICD therapy for VT; and RV abnormalities as assessed by cardiac magnetic resonance imaging (cMRI). Results When compared with controls, all 3 components of the SAECG were highly associated with the diagnosis of ARVC/D (p<0.001). These include the filtered QRS duration (fQRSD) (97.8 ± 8.7 msec vs. 119.6 ± 23.8 msec), low amplitude signal (LAS) (24.4 ± 9.2 msec vs. 46.2 ± 23.7 msec) and root mean square amplitude of the last 40 msec of late potentials (RMS-40) (50.4 ± 26.9 µV vs. 27.9 ± 36.3 µV). The sensitivity of using SAECG for diagnosis of ARVC/D was increased from 47% using the established 2 of 3 criteria (i.e. late potentials) to 69% by using a modified criterion of any 1 of the 3 criteria, while maintaining a high specificity of 95%. Abnormal SAECG as defined by this modified criteria was associated with a dilated RV volume and decreased RV ejection fraction detected by cMRI (p<0.05). SAECG abnormalities did not vary with clinical presentation or reliably predict spontaneous or inducible VT, and had limited correlation with ECG findings. Conclusion Using 1 of 3 SAECG criteria contributed to increased sensitivity and specificity for the diagnosis of ARVC/D. This
Energy Technology Data Exchange (ETDEWEB)
Yu, Lifeng, E-mail: yu.lifeng@mayo.edu; Vrieze, Thomas J.; Leng, Shuai; Fletcher, Joel G.; McCollough, Cynthia H. [Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States)
2015-05-15
Purpose: The spatial resolution of iterative reconstruction (IR) in computed tomography (CT) is contrast- and noise-dependent because of the nonlinear regularization. Due to the severe noise contamination, it is challenging to perform precise spatial-resolution measurements at very low-contrast levels. The purpose of this study was to measure the spatial resolution of a commercially available IR method using ensemble-averaged images acquired from repeated scans. Methods: A low-contrast phantom containing three rods (7, 14, and 21 HU below background) was scanned on a 128-slice CT scanner at three dose levels (CTDI{sub vol} = 16, 8, and 4 mGy). Images were reconstructed using two filtered-backprojection (FBP) kernels (B40 and B20) and a commercial IR method (sinogram affirmed iterative reconstruction, SAFIRE, Siemens Healthcare) with two strength settings (I40-3 and I40-5). The same scan was repeated 100 times at each dose level. The modulation transfer function (MTF) was calculated based on the edge profile measured on the ensemble-averaged images. Results: The spatial resolution of the two FBP kernels, B40 and B20, remained relatively constant across contrast and dose levels. However, the spatial resolution of the two IR kernels degraded relative to FBP as contrast or dose level decreased. At a fixed dose level of 16 mGy, the MTF{sub 50%} value normalized to the B40 kernel decreased from 98.4% at 21 HU to 88.5% at 7 HU for I40-3 and from 97.6% to 82.1% for I40-5. At 21 HU, the relative MTF{sub 50%} value decreased from 98.4% at 16 mGy to 90.7% at 4 mGy for I40-3 and from 97.6% to 85.6% for I40-5. Conclusions: A simple technique using ensemble averaging from repeated CT scans can be used to measure the spatial resolution of IR techniques in CT at very low contrast levels. The evaluated IR method degraded the spatial resolution at low contrast and high noise levels.
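The measurement pipeline in this record (repeated scans → ensemble average → edge spread function → MTF) can be sketched numerically. The phantom edge, blur, noise level, and repeat count below are invented stand-ins, not the study's acquisition parameters:

```python
import math
import random

def edge_profile(n=64, blur=2.0):
    # ideal low-contrast edge blurred by the system PSF (error-function shape)
    return [0.5 * (1 + math.erf((i - n / 2) / (blur * math.sqrt(2)))) for i in range(n)]

def ensemble_average(profiles):
    # pixel-wise mean over repeated scans: noise drops as 1/sqrt(N)
    n = len(profiles[0])
    return [sum(p[i] for p in profiles) / len(profiles) for i in range(n)]

def mtf_from_esf(esf):
    # edge spread function -> line spread function -> |DFT| -> normalized MTF
    lsf = [esf[i + 1] - esf[i] for i in range(len(esf) - 1)]
    n = len(lsf)
    mags = []
    for k in range(n // 2):
        re = sum(lsf[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(lsf[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mags.append(math.hypot(re, im))
    return [m / mags[0] for m in mags]

random.seed(1)
clean = edge_profile()
# 100 repeated noisy "scans" of the same edge, as in the repeated-scan protocol
scans = [[v + random.gauss(0, 0.1) for v in clean] for _ in range(100)]
avg = ensemble_average(scans)
mtf = mtf_from_esf(avg)
```

Averaging 100 repeats reduces the noise standard deviation tenfold, which is what makes the edge profile clean enough for an MTF measurement at low contrast.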
Feng, Lei; Jeon, Tina; Yu, Qiaowen; Ouyang, Minhui; Peng, Qinmu; Mishra, Virendra; Pletikos, Mihovil; Sestan, Nenad; Miller, Michael I; Mori, Susumu; Hsiao, Steven; Liu, Shuwei; Huang, Hao
2017-12-01
Animal models of the rhesus macaque (Macaca mulatta), the most widely used nonhuman primate, have been irreplaceable in neurobiological studies. However, a population-averaged macaque brain diffusion tensor imaging (DTI) atlas, including comprehensive gray and white matter labeling as well as bony and facial landmarks guiding invasive experimental procedures, is not available. The macaque white matter tract pathways and microstructures have been rarely recorded. Here, we established a population-averaged macaque brain atlas with high-resolution ex vivo DTI integrated into in vivo space incorporating bony and facial landmarks, and delineated microstructures and three-dimensional pathways of major white matter tracts. In vivo MRI/DTI and ex vivo (postmortem) DTI of ten rhesus macaque brains were acquired. A single-subject macaque brain DTI template was obtained by transforming the postmortem high-resolution DTI data into in vivo space. Ex vivo DTI of the ten macaque brains was then averaged in the in vivo single-subject template space to generate the population-averaged macaque brain DTI atlas. The white matter tracts were traced with DTI-based tractography. One hundred and eighteen neural structures, including all cortical gyri, white matter tracts, and subcortical nuclei, were labeled manually on the population-averaged DTI-derived maps. The in vivo microstructural metrics of fractional anisotropy and axial, radial, and mean diffusivity of the traced white matter tracts were measured. The population-averaged digital atlas integrated into in vivo space can be used to label the experimental macaque brain automatically. Bony and facial landmarks will be available for guiding invasive procedures. The DTI metric measurements offer unique insights into the heterogeneous microstructural profiles of different white matter tracts.
Improvement of the energy resolution via an optimized digital signal processing in GERDA Phase I
International Nuclear Information System (INIS)
Agostini, M.; Allardt, M.; Bakalyarov, A. M.; Balata, M.
2015-01-01
An optimized digital shaping filter has been developed for the Gerda experiment, which searches for neutrinoless double beta decay in 76Ge. The Gerda Phase I energy calibration data have been reprocessed and an average improvement of 0.3 keV in energy resolution (FWHM), corresponding to 10% at the Q value for 0νββ decay in 76Ge, is obtained. This is possible thanks to the enhanced low-frequency noise rejection of this Zero Area Cusp (ZAC) signal shaping filter.
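The key property of a zero-area shaping filter, rejection of DC and low-frequency baseline noise, can be illustrated with a toy cusp-plus-negative-parabola weighting function. The shape and constants here are illustrative only; the actual ZAC filter is tuned per detector and applied to digitized preamplifier traces:

```python
import math

def zac_weights(half=100, tau=50.0):
    # central cusp flanked by two negative parabolic lobes; the lobes are scaled
    # so the total area is exactly zero, which suppresses DC / low-frequency noise
    cusp = [math.sinh((half - abs(i - half)) / tau) for i in range(2 * half + 1)]
    lobe = [i * (half - 1 - i) for i in range(half)]  # parabola, zero at both ends
    scale = sum(cusp) / (2 * sum(lobe))
    left = [-scale * v for v in lobe]
    return left + cusp + list(left)

w = zac_weights()

# a zero-area filter ignores any constant baseline offset ...
baseline_response = sum(wi * 5.0 for wi in w)
# ... but still responds strongly to a localized pulse aligned with the cusp
center = len(w) // 2
pulse_response = sum(wi * math.exp(-abs(i - center) / 50.0)
                     for i, wi in enumerate(w))
```

Because the weights sum to zero, any slowly varying baseline contributes almost nothing to the filter output, while the cusp still accumulates the pulse amplitude.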
Yilmaz, Ferkan
2014-04-01
The main idea in the moment generating function (MGF) approach is to alternatively express the conditional bit error probability (BEP) in a desired exponential form so that possibly multi-fold performance averaging is readily converted into a computationally efficient single-fold averaging - sometimes into a closed-form - by means of using the MGF of the signal-to-noise ratio. However, as presented in [1] and specifically indicated in [2] and also to the best of our knowledge, there does not exist an MGF-based approach in the literature to represent Wojnar's generic BEP expression in a desired exponential form. This paper presents novel MGF-based expressions for calculating the average BEP of binary signalling over generalized fading channels, specifically by expressing Wojnar's generic BEP expression in a desirable exponential form. We also propose MGF-based expressions to explore the amount of dispersion in the BEP for binary signalling over generalized fading channels.
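The core of the MGF approach, turning the fading average into a single finite integral of the SNR's MGF, can be checked in a few lines for the classic case of coherent BPSK over Rayleigh fading, where a closed form exists. This textbook case is an illustration of the single-fold averaging idea, not Wojnar's generic expression:

```python
import math

def mgf_rayleigh(s, avg_snr):
    # MGF of the instantaneous SNR for Rayleigh fading: E[e^{s*gamma}] = 1/(1 - s*avg_snr)
    return 1.0 / (1.0 - s * avg_snr)

def avg_bep_bpsk(avg_snr, n=2000):
    # MGF approach: the fading average collapses to one finite integral,
    # Pb = (1/pi) * Integral_0^{pi/2} M(-1/sin^2 t) dt   (coherent BPSK)
    h = (math.pi / 2) / n
    acc = 0.0
    for k in range(n):
        t = (k + 0.5) * h  # midpoint rule
        acc += mgf_rayleigh(-1.0 / math.sin(t) ** 2, avg_snr)
    return acc * h / math.pi

gbar = 10.0
numeric = avg_bep_bpsk(gbar)
# known closed-form Rayleigh result, used here as a cross-check
closed_form = 0.5 * (1.0 - math.sqrt(gbar / (1.0 + gbar)))
```

The single-fold integral reproduces the closed form to numerical precision, which is exactly the computational benefit the abstract describes.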
Rheineck-Leyssius, A T; Kalkman, C J
1999-05-01
To determine the effect of a new signal processing technique (Oxismart, Nellcor, Inc., Pleasanton, CA) on the incidence of false pulse oximeter alarms in the operating room (OR). Prospective observational study. Nonuniversity hospital. 53 ASA physical status I, II, and III consecutive patients undergoing general anesthesia with tracheal intubation. In the OR we compared the number of alarms produced by a recently developed third-generation pulse oximeter (Nellcor Symphony N-3000) with the Oxismart signal processing technique and a conventional pulse oximeter (Criticare 504). Three pulse oximeters were used simultaneously in each patient: a Nellcor pulse oximeter, a Criticare with the signal averaging time set at 3 seconds (Criticareaverage3s) and a similar unit with the signal averaging time set at 21 seconds (Criticareaverage21s). For each pulse oximeter, the number of false (artifact) alarms was counted. One false alarm was produced by the Nellcor (duration 55 sec) and one false alarm by the Criticareaverage21s monitor (5 sec). The incidence of false alarms was higher with Criticareaverage3s: in eight patients it produced 20 false alarms, compared with one each for the Nellcor monitor with Oxismart signal processing and the Criticare monitor with the longer averaging time of 21 seconds.
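The effect of the averaging time on artifact-induced alarms can be reproduced with a toy moving-average model. The trace, threshold, and window lengths below are invented for illustration and do not model the commercial instruments:

```python
def moving_average(trace, window):
    # running mean over the last `window` samples (shorter at the start)
    out = []
    for i in range(len(trace)):
        lo = max(0, i - window + 1)
        out.append(sum(trace[lo:i + 1]) / (i - lo + 1))
    return out

def alarm_triggered(trace, threshold=90.0):
    # alarm fires if the displayed saturation ever drops below the threshold
    return any(v < threshold for v in trace)

# simulated SpO2 trace at 1 Hz: stable at 98% with a 10 s motion artifact at 85%
spo2 = [98.0] * 60 + [85.0] * 10 + [98.0] * 120

short_avg = moving_average(spo2, 3)   # ~3 s averaging: artifact reaches the display
long_avg = moving_average(spo2, 21)   # ~21 s averaging: artifact is diluted away
```

With a 21-sample window the brief artifact never pulls the displayed value below the alarm threshold, mirroring the lower false-alarm rate of the long-averaging monitor.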
Lobmaier, Silvia M.; Mensing van Charante, Nico; Ferrazzi, Enrico; Giussani, Dino A.; Shaw, Caroline J.; Müller, Alexander; Ortiz, Javier U.; Ostermayer, Eva; Haller, Bernhard; Prefumo, Federico; Frusca, Tiziana; Hecher, Kurt; Arabin, Birgit; Thilaganathan, Baskaran; Papageorghiou, Aris T.; Bhide, Amarnath; Martinelli, Pasquale; Duvekot, Johannes J.; van Eyck, Jim; Visser, Gerard H A; Schmidt, Georg; Ganzevoort, Wessel; Lees, Christoph C.; Schneider, Karl T M; Bilardo, Caterina M.; Brezinka, Christoph; Diemert, Anke; Derks, Jan B.; Schlembach, Dietmar; Todros, Tullia; Valcamonico, Adriana; Marlow, Neil; van Wassenaer-Leemhuis, Aleid
2016-01-01
Background Phase-rectified signal averaging, an innovative signal processing technique, can be used to investigate quasi-periodic oscillations in noisy, nonstationary signals that are obtained from fetal heart rate. Phase-rectified signal averaging is currently the best method to predict survival
A Signal Averager Interface between a Biomation 6500 Transient Recorder and a LSI-11 Microcomputer.
1980-06-01
decode the proper bus synchronizing signals. SA data lines 1 and 2 are decoded to produce SEL0 L - SEL4 L, which select one of four SA registers.
Directory of Open Access Journals (Sweden)
MEHDI AMIAN
2013-10-01
Full Text Available Functional near infrared spectroscopy (fNIRS) is a technique used for noninvasive measurement of the oxyhemoglobin (HbO2) and deoxyhemoglobin (HHb) concentrations in brain tissue. Since the ratio of the concentrations of these two agents is correlated with neuronal activity, fNIRS can be used for monitoring and quantifying cortical activity. The portability of fNIRS makes it a good candidate for studies involving subject movement. The fNIRS measurements, however, are sensitive to artifacts generated by head motion, which makes fNIRS signals less reliable in such applications. In this paper, autoregressive moving average (ARMA) modeling of the fNIRS signal is proposed for a state-space representation of the signal, which is then fed to a Kalman filter to estimate the motion-free signal from the motion-corrupted signal. Results are compared with the previously reported autoregressive (AR) model based approach and show that ARMA models outperform AR models, which we attribute to their richer structure (they contain more terms). We show that the signal-to-noise ratio (SNR) is about 2 dB higher for the ARMA-based method.
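A minimal sketch of the state-space/Kalman idea, using a scalar AR(1) signal model instead of the paper's richer ARMA models; all constants here are invented for illustration:

```python
import math
import random

def kalman_filter(obs, phi, q, r):
    # scalar Kalman filter for a hidden AR(1) signal x_t = phi*x_{t-1} + w_t
    # (process variance q) observed through added motion/measurement noise (variance r)
    x, p = 0.0, 1.0
    estimates = []
    for y in obs:
        x, p = phi * x, phi * phi * p + q          # predict
        k = p / (p + r)                            # Kalman gain
        x, p = x + k * (y - x), (1.0 - k) * p      # update
        estimates.append(x)
    return estimates

random.seed(7)
phi, q, r = 0.95, 0.05, 1.0
truth, x = [], 0.0
for _ in range(2000):                 # simulate the slowly varying "clean" signal
    x = phi * x + random.gauss(0.0, math.sqrt(q))
    truth.append(x)
obs = [t + random.gauss(0.0, math.sqrt(r)) for t in truth]   # corrupted channel
est = kalman_filter(obs, phi, q, r)

mse = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)
```

The filter exploits the fitted time-series model to suppress measurement noise; in the paper the same machinery runs with ARMA dynamics fitted to real fNIRS traces.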
Ambiguity Function and Resolution Characteristic Analysis of DVB-S Signal for Passive Radar
Directory of Open Access Journals (Sweden)
Jin Wei
2012-12-01
Full Text Available This paper investigates the ambiguity function and resolution performance of passive radar based on the DVB-S (Digital Video Broadcasting-Satellite) signal. The radar system structure and the DVB-S signal model are first studied; the ambiguity function of the DVB-S signal is then analyzed. Finally, the impact of the bistatic radar geometry on resolution is derived. Theoretical analysis and computer simulation show that the DVB-S signal is applicable as an illuminator for passive radar.
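The discrete ambiguity function itself is straightforward to compute. The sketch below evaluates it for a noise-like waveform standing in for the wideband DVB-S signal, showing the thumbtack-like peak at zero delay and Doppler that makes such signals attractive illuminators:

```python
import cmath
import random

def ambiguity(sig, max_delay, max_doppler):
    # discrete ambiguity surface |sum_n s[n] * conj(s[n-d]) * e^{-j*2*pi*f*n/N}|
    n_len = len(sig)
    surface = {}
    for d in range(max_delay):
        for f in range(max_doppler):
            acc = 0j
            for n in range(d, n_len):
                acc += (sig[n] * sig[n - d].conjugate()
                        * cmath.exp(-2j * cmath.pi * f * n / n_len))
            surface[(d, f)] = abs(acc)
    return surface

random.seed(3)
# noise-like complex baseband waveform, a stand-in for the pseudo-random DVB-S signal
sig = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(512)]
amb = ambiguity(sig, max_delay=8, max_doppler=8)
peak = amb[(0, 0)]
```

For a noise-like waveform the sidelobes sit roughly a factor of sqrt(N) below the peak, which is the "thumbtack" property exploited in passive radar.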
Soury, Hamza
2012-06-01
This letter considers the average bit error probability of binary coherent signaling over flat fading channels subject to additive generalized Gaussian noise. More specifically, a generic closed form expression in terms of Fox's H function is offered for the extended generalized-K fading case. Simplifications for some special fading distributions such as generalized-K fading and Nakagami-m fading and special additive noise distributions such as Gaussian and Laplacian noise are then presented. Finally, the mathematical formalism is illustrated by some numerical examples verified by computer based simulations for a variety of fading and additive noise parameters. © 2012 IEEE.
International Nuclear Information System (INIS)
Phelps, M.E.; Huang, S.C.; Hoffman, E.J.; Plummer, D.; Carson, R.
1981-01-01
Spatial resolution improvements in computed tomography (CT) have been limited by the large and unique error propagation properties of this technique. The desire to provide maximum image resolution has resulted in the use of reconstruction filter functions designed to produce tomographic images with resolution as close as possible to the intrinsic detector resolution. Thus, many CT systems produce images with excessive noise, with the system resolution determined by the detector resolution rather than the reconstruction algorithm. CT is a rigorous mathematical technique which applies an increasing amplification to increasing spatial frequencies in the measured data. This mathematical approach to spatial frequency amplification cannot distinguish between signal and noise, and therefore both are amplified equally. We report here a method in which tomographic resolution is improved by using very small detectors to selectively amplify the signal and not the noise. Thus, this approach is referred to as the signal amplification technique (SAT). SAT can provide dramatic improvements in image resolution without increases in statistical noise or dose because increases in the cutoff frequency of the reconstruction algorithm are not required to improve image resolution. Alternatively, in cases where image counts are low, such as in rapid dynamic or receptor studies, statistical noise can be reduced by lowering the cutoff frequency while still maintaining the best possible image resolution. A possible system design for a positron CT system with SAT is described.
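The noise side of this argument, the reconstruction filter's high-frequency amplification acting on noise and the noise reduction bought by lowering the cutoff frequency, can be demonstrated with a 1-D ramp-filter sketch (a simplified stand-in for a full 2-D reconstruction; all sizes and seeds are invented):

```python
import math
import random

def ramp_filter(signal, cutoff):
    # frequency-domain ramp |f| (the CT reconstruction filter) truncated at
    # `cutoff` (fraction of Nyquist); the ramp amplifies signal AND noise alike
    n = len(signal)
    re = [sum(signal[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
          for k in range(n)]
    im = [sum(signal[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
          for k in range(n)]
    out = []
    for i in range(n):
        acc = 0.0
        for k in range(n):
            freq = min(k, n - k) / (n / 2.0)       # normalized |frequency| in [0, 1]
            gain = freq if freq <= cutoff else 0.0
            acc += gain * (re[k] * math.cos(2 * math.pi * k * i / n)
                           + im[k] * math.sin(2 * math.pi * k * i / n))
        out.append(acc / n)
    return out

random.seed(5)
noise = [random.gauss(0.0, 1.0) for _ in range(64)]  # pure detector noise
variance = lambda x: sum(v * v for v in x) / len(x)

full = ramp_filter(noise, cutoff=1.0)  # resolution limited only by the detector
soft = ramp_filter(noise, cutoff=0.5)  # lower cutoff trades resolution for noise
```

Because the ramp gain grows with frequency, truncating it at half Nyquist removes exactly the bins where noise is amplified most, which is the trade-off the abstract describes.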
DEFF Research Database (Denmark)
Dixen, Ulrik; Wallevik, Laura; Hansen, Maja
2003-01-01
To evaluate the prognostic roles of prolonged signal-averaged P wave duration (SAPWD), raised levels of natriuretic peptides, and clinical characteristics in patients with stable congestive heart failure (CHF).
International Nuclear Information System (INIS)
Nigmatullin, Raoul R.; Toboev, Vyacheslav A.; Lino, Paolo; Maione, Guido
2015-01-01
Highlights: •A new approach describes fractal-branched systems with long-range fluctuations. •A reduced fractal model is proposed. •The approach is used to characterize blow-like signals. •The approach is tested on data from different fields. -- Abstract: It has been shown that many micromotions in the mesoscale region are averaged in accordance with their self-similar (geometrical/dynamical) structure. This distinctive feature helps to reduce a wide set of different micromotions describing relaxation/exchange processes to an averaged collective motion, expressed mathematically in a rather general form. This reduction opens new perspectives in the description of different blow-like signals (BLS) in many complex systems. The main characteristic of these signals is their finite duration, also when the generalized reduced function is used for their quantitative fitting. As an example, we quantitatively describe available signals generated by people with bronchial asthma, songs of queen bees, and car engine valves operating in the idling regime. We develop a special treatment procedure based on the eigen-coordinates (ECs) method that allows us to justify the generalized reduced fractal model (RFM) for the description of BLS that can propagate in different complex systems. The obtained describing function is based on the self-similar properties of the different considered micromotions. This kind of cooperative model is proposed here for the first time. Although the nature of the dynamic processes that take place in fractal structures on the mesoscale level is not well understood, the parameters of the RFM fitting function can be used for the construction of calibration curves, affected by various external/random factors. Then, the calculated set of fitting parameters of these calibration curves can characterize the BLS of different complex systems affected by those factors. Though the method to construct and analyze the calibration curves goes beyond the scope
High-resolution imaging methods in array signal processing
DEFF Research Database (Denmark)
Xenaki, Angeliki
in active sonar signal processing for detection and imaging of submerged oil contamination in sea water from a deep-water oil leak. The submerged oil field is modeled as a fluid medium exhibiting spatial perturbations in the acoustic parameters from their mean ambient values, which cause weak scattering of the incident acoustic energy. A high-frequency active sonar is selected to insonify the medium and receive the backscattered waves. High-frequency acoustic methods can both overcome the optical opacity of water (unlike methods based on electromagnetic waves) and resolve the small-scale structure of the submerged oil field (unlike low-frequency acoustic methods). The study shows that high-frequency acoustic methods are suitable not only for large-scale localization of the oil contamination in the water column but also for statistical characterization of the submerged oil field through inference
DEFF Research Database (Denmark)
Karlsen, Brian; Jakobsen, Kaj Bjarne; Larsen, Jan
2001-01-01
Proper clutter reduction is essential for Ground Penetrating Radar data, since a low signal-to-clutter ratio prevents correct detection of mine objects. A signal processing approach for resolution enhancement and clutter reduction used on Stepped-Frequency Ground Penetrating Radar (SF-GPR) data is presented. The clutter reduction method is based on basis-function decomposition of the SF-GPR time series, from which the clutter and the signal are separated.
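A minimal instance of separating clutter from signal in a B-scan is mean-trace subtraction, which exploits the lateral coherence of clutter; the paper's basis-function decomposition is a richer version of the same separation idea, and the synthetic data here are invented:

```python
import math

def remove_clutter(bscan):
    # clutter (antenna ringing, ground bounce) is nearly identical in every trace;
    # subtracting the ensemble-mean trace keeps only laterally localized reflectors
    n_traces, n_samples = len(bscan), len(bscan[0])
    mean_trace = [sum(tr[i] for tr in bscan) / n_traces for i in range(n_samples)]
    return [[tr[i] - mean_trace[i] for i in range(n_samples)] for tr in bscan]

# synthetic B-scan: 10 traces of identical background ringing, with a buried
# target adding energy only to traces 4-5 at time samples 10-13
background = [math.sin(0.5 * i) for i in range(32)]
bscan = [list(background) for _ in range(10)]
for t in (4, 5):
    for i in range(10, 14):
        bscan[t][i] += 2.0

cleaned = remove_clutter(bscan)
```

After subtraction the laterally coherent ringing vanishes exactly, while the localized target survives (minus a small leakage proportional to how many traces it occupies).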
Signal yields, energy resolution, and recombination fluctuations in liquid xenon
Akerib, D. S.; Alsum, S.; Araújo, H. M.; Bai, X.; Bailey, A. J.; Balajthy, J.; Beltrame, P.; Bernard, E. P.; Bernstein, A.; Biesiadzinski, T. P.; Boulton, E. M.; Bramante, R.; Brás, P.; Byram, D.; Cahn, S. B.; Carmona-Benitez, M. C.; Chan, C.; Chiller, A. A.; Chiller, C.; Currie, A.; Cutter, J. E.; Davison, T. J. R.; Dobi, A.; Dobson, J. E. Y.; Druszkiewicz, E.; Edwards, B. N.; Faham, C. H.; Fiorucci, S.; Gaitskell, R. J.; Gehman, V. M.; Ghag, C.; Gibson, K. R.; Gilchriese, M. G. D.; Hall, C. R.; Hanhardt, M.; Haselschwardt, S. J.; Hertel, S. A.; Hogan, D. P.; Horn, M.; Huang, D. Q.; Ignarra, C. M.; Ihm, M.; Jacobsen, R. G.; Ji, W.; Kamdin, K.; Kazkaz, K.; Khaitan, D.; Knoche, R.; Larsen, N. A.; Lee, C.; Lenardo, B. G.; Lesko, K. T.; Lindote, A.; Lopes, M. I.; Manalaysay, A.; Mannino, R. L.; Marzioni, M. F.; McKinsey, D. N.; Mei, D.-M.; Mock, J.; Moongweluwan, M.; Morad, J. A.; Murphy, A. St. J.; Nehrkorn, C.; Nelson, H. N.; Neves, F.; O'Sullivan, K.; Oliver-Mallory, K. C.; Palladino, K. J.; Pease, E. K.; Phelps, P.; Reichhart, L.; Rhyne, C.; Shaw, S.; Shutt, T. A.; Silva, C.; Solmaz, M.; Solovov, V. N.; Sorensen, P.; Stephenson, S.; Sumner, T. J.; Szydagis, M.; Taylor, D. J.; Taylor, W. C.; Tennyson, B. P.; Terman, P. A.; Tiedt, D. R.; To, W. H.; Tripathi, M.; Tvrznikova, L.; Uvarov, S.; Verbus, J. R.; Webb, R. C.; White, J. T.; Whitis, T. J.; Witherell, M. S.; Wolfs, F. L. H.; Xu, J.; Yazdani, K.; Young, S. K.; Zhang, C.; LUX Collaboration
2017-01-01
This work presents an analysis of monoenergetic electronic recoil peaks in the dark-matter-search and calibration data from the first underground science run of the Large Underground Xenon (LUX) detector. Liquid xenon charge and light yields for electronic recoil energies between 5.2 and 661.7 keV are measured, as well as the energy resolution for the LUX detector at those same energies. Additionally, there is an interpretation of existing measurements and descriptions of electron-ion recombination fluctuations in liquid xenon as limiting cases of a more general liquid xenon recombination fluctuation model. Measurements of the standard deviation of these fluctuations at monoenergetic electronic recoil peaks exhibit a linear dependence on the number of ions for energy deposits up to 661.7 keV, consistent with previous LUX measurements between 2 and 16 keV with 3H. We highlight similarities in liquid xenon recombination for electronic and nuclear recoils with a comparison of recombination fluctuations measured with low-energy calibration data.
Digital signal processors for cryogenic high-resolution x-ray detector readout
International Nuclear Information System (INIS)
Friedrich, Stephan; Drury, Owen B.; Bechstein, Sylke; Hennig, Wolfgang; Momayezi, Michael
2003-01-01
We are developing fast digital signal processors (DSPs) to read out superconducting high-resolution X-ray detectors with on-line pulse processing. For superconducting tunnel junction (STJ) detector read-out, the DSPs offer online filtering, rise time discrimination and pile-up rejection. Compared to analog pulse processing, DSP readout somewhat degrades the detector resolution, but improves the spectral purity of the detector response. We discuss DSP performance with our 9-channel STJ array for synchrotron-based high-resolution X-ray spectroscopy. (author)
Shturman, Alexander; Bickel, Amitai; Atar, Shaul
2012-08-01
The prognostic value of P-wave duration has been previously evaluated by signal-averaged ECG (SAECG) in patients with various arrhythmias not associated with acute myocardial infarction (AMI). To investigate the clinical correlates and prognostic value of P-wave duration in patients with ST elevation AMI (STEMI). The patients (n = 89) were evaluated on the first, second and third day after admission, as well as one week and one month post-AMI. Survival was determined 2 years after the index STEMI. In comparison with the upper normal range of P-wave duration ( 40% (128.79 +/- 28 msec) (P = 0.001). P-wave duration above 120 msec was significantly correlated with increased complication rate; namely, sustained ventricular tachyarrhythmia (36%), congestive heart failure (41%), atrial fibrillation (11%), recurrent angina (14%), and re-infarction (8%) (P = 0.012, odds ratio 4.267, 95% confidence interval 1.37-13.32). P-wave duration of 126 msec on the day of admission was found to have the highest predictive value for in-hospital complications including LVEF 40% (area under the curve 0.741, P < 0.001). However, we did not find a significant correlation between P-wave duration and mortality after multivariate analysis. P-wave duration as evaluated by SAECG correlates negatively with LVEF post-STEMI, and P-wave duration above 126 msec can be utilized as a non-invasive predictor of in-hospital complications and low LVEF following STEMI.
Czarkowski, Marek; Oreziak, Artur; Radomski, Dariusz
2006-04-01
Coexistence of goitre, proptosis, and palpitations was first observed in the 19th century. Sinus tachyarrhythmias and atrial fibrillation are typical cardiac symptoms of hyperthyroidism. Atrial fibrillation occurs more often in patients with toxic goiter than in young patients with Graves' disease. These findings suggest that the causes of atrial fibrillation might be multifactorial in the elderly. The aims of our study were to evaluate correlations between the parameters of the atrial signal-averaged ECG (SAECG) and the serum concentration of free thyroid hormones. 25 patients with untreated Graves' disease (G-B) (age 29.6 +/- 9.0 y.o.) and 26 control patients (age 29.3 +/- 6.9 y.o.) were enrolled in our study. None of them had a history of atrial fibrillation, which was confirmed by 24-hour ECG Holter monitoring. Serum fT3, fT4, and TSH were determined in venous blood by the immunoenzymatic method. Atrial SAECG recording with filtration by a zero-phase Butterworth filter (45-150 Hz) was done in all subjects. The duration of the atrial vector magnitude (hfP) and the root mean square of the terminal 20 ms of the atrial vector magnitude (RMS20) were analysed. There were no significant differences in the values of the SAECG parameters (hfP, RMS20) between the investigated groups. A positive correlation between hfP and serum fT3 concentration in group G-B was observed (Spearman's correlation coefficient R = 0.462), suggesting that atrial conduction in Graves' disease depends not only on hyperthyroidism but also on the serum concentration of fT3.
Study on the ratio of signal to noise for single photon resolution time spectrometer
International Nuclear Information System (INIS)
Wang Zhaomin; Huang Shengli; Xu Zizong; Wu Chong
2001-01-01
The signal-to-noise ratio of a single-photon-resolution time spectrometer and its influencing factors were studied. A method to suppress the background, shorten the measurement time, and increase the signal-to-noise ratio is discussed. Results show that the signal-to-noise ratio is proportional to the solid angle subtended by the detector at the source and to the detection efficiency, and inversely proportional to the electronics noise. Choosing the source activity appropriately was important for decreasing the random coincidence counting. Using a coincidence gate and a single-photon discriminator was an effective way of increasing measurement accuracy and detection efficiency.
Pushing the limits of signal resolution to make coupling measurement easier.
Herbert Pucheta, José Enrique; Pitoux, Daisy; Grison, Claire M; Robin, Sylvie; Merlet, Denis; Aitken, David J; Giraud, Nicolas; Farjon, Jonathan
2015-05-07
Probing scalar couplings is essential for structural elucidation in molecular (bio)chemistry. While the measurement of JHH couplings is facilitated by SERF experiments, overcrowded signals represent a significant limitation. Here, a new band-selective pure shift SERF experiment allows access to δ(1)H and JHH with ultrahigh spectral resolution.
Hasegawa, Hideyuki
2017-07-01
The range spatial resolution is an important factor determining the image quality in ultrasonic imaging. The range spatial resolution in ultrasonic imaging depends on the ultrasonic pulse length, which is determined by the mechanical response of the piezoelectric element in an ultrasonic probe. To improve the range spatial resolution without replacing the transducer element, in the present study, methods based on maximum likelihood (ML) estimation and multiple signal classification (MUSIC) were proposed. The proposed methods were applied to echo signals received by individual transducer elements in an ultrasonic probe. The basic experimental results showed that the axial full width at half maximum of the echo from a string phantom was improved from 0.21 mm (conventional method) to 0.086 mm (ML) and 0.094 mm (MUSIC).
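The MUSIC step relies on splitting the eigenvectors of a signal covariance matrix into signal and noise subspaces. The sketch below is not the authors' echo-signal implementation; it illustrates the principle on an assumed 1-D example, resolving two tones spaced more closely than the 32-sample analysis window's Fourier resolution:

```python
import numpy as np

def music_spectrum(x, n_sources, m, freqs):
    """MUSIC pseudospectrum from sliding m-sample snapshots of x."""
    N = len(x)
    # Snapshot matrix: each column is one m-sample window of the signal.
    X = np.array([x[i:i + m] for i in range(N - m + 1)]).T
    R = X @ X.conj().T / X.shape[1]              # sample covariance estimate
    w, V = np.linalg.eigh(R)                     # eigenvalues in ascending order
    En = V[:, :m - n_sources]                    # noise subspace
    p = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * np.arange(m))    # steering vector
        # Pseudospectrum is large where a is orthogonal to the noise subspace.
        p[k] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return p

rng = np.random.default_rng(0)
n = np.arange(256)
# Two complex tones only 0.01 apart in normalized frequency, i.e. closer than
# the 1/32 resolution of a 32-sample window, plus weak noise.
x = (np.exp(2j * np.pi * 0.20 * n) + np.exp(2j * np.pi * 0.21 * n)
     + 0.05 * (rng.standard_normal(256) + 1j * rng.standard_normal(256)))
freqs = np.linspace(0.15, 0.26, 1101)
p = music_spectrum(x, n_sources=2, m=32, freqs=freqs)
peak = freqs[np.argmax(p)]
```

The sliding-window snapshots also act as a smoothing step that decorrelates the two coherent tones before the eigendecomposition.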
Signal Characteristics of Super-Resolution Near-Field Structure Disks with 100 GB Capacity
Kim, Jooho; Hwang, Inoh; Kim, Hyunki; Park, Insik; Tominaga, Junji
2005-05-01
We report the basic characteristics of super resolution near-field structure (Super-RENS) media in a blue laser optical system (laser wavelength 405 nm, numerical aperture 0.85). Using a novel write once read many (WORM) structure for a blue laser system, we obtained a carrier-to-noise ratio (CNR) above 33 dB from the signal of the 37.5 nm mark length, which is equivalent to a 100 GB capacity with a 0.32 micrometer track pitch, and an eye pattern for 50 GB (2T: 75 nm) capacity using a patterned signal. Using a novel super-resolution material (tellurium, Te) with low super-resolution readout power, we also improved the read stability.
Lim, Wansu; Cho, Tae-Sik; Yun, Changho; Kim, Kiseon
2009-11-09
In this paper, we derive the average bit error rate (BER) of subcarrier multiplexing (SCM)-based free space optics (FSO) systems using a dual-drive Mach-Zehnder modulator (DD-MZM) for optical single-sideband (OSSB) signals under atmospheric turbulence channels. In particular, we consider the third-order intermodulation (IM3), a significant performance degradation factor, in the case of high input signal power systems. The derived average BER, as a function of the input signal power and the scintillation index, is employed to determine the optimum number of SCM users when designing FSO systems. For instance, when the user number doubles, the input signal power decreases by almost 2 dB under the log-normal and exponential turbulence channels at a given average BER.
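The averaging over turbulence can be illustrated numerically. The sketch below is not the paper's closed-form derivation: it Monte Carlo averages a generic on-off-keying bit error probability over unit-mean log-normal irradiance at a given scintillation index (the conditional BER model and SNR scaling are assumptions):

```python
import numpy as np
from math import erfc

def average_ber(snr0, scint_index, n=50_000, seed=1):
    """Average BER over unit-mean log-normal fading.

    snr0: electrical SNR without fading; scint_index: sigma_I^2.
    Conditional BER is taken as Q(sqrt(snr0) * I), an assumed OOK model.
    """
    sigma2 = np.log(1.0 + scint_index)       # variance of log-irradiance
    rng = np.random.default_rng(seed)
    # Unit-mean log-normal irradiance samples.
    I = rng.lognormal(mean=-sigma2 / 2, sigma=np.sqrt(sigma2), size=n)
    # Q(x) = 0.5 * erfc(x / sqrt(2)); average the conditional BER over fading.
    return float(np.mean([0.5 * erfc(v) for v in np.sqrt(snr0 / 2) * I]))

ber_fade = average_ber(snr0=9.0, scint_index=0.5)    # turbulent channel
ber_clear = average_ber(snr0=9.0, scint_index=1e-12)  # essentially no fading
```

By Jensen's inequality (the Q-function is convex on positive arguments), fading can only raise the average BER at fixed mean irradiance, which the two numbers above reproduce.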
Directory of Open Access Journals (Sweden)
Shuyan Wang
2016-05-01
This paper proposes a new method, based on matching pursuit, to improve the resolution of the seismic signal and to compensate the energy of weak seismic signals. With a dictionary of Morlet wavelets, the matching pursuit algorithm decomposes a seismic trace into a series of wavelets. We abstract complex-trace attributes from analytical expressions to shrink the search ranges of amplitude, frequency and phase. In addition, considering the level of correlation between constituent wavelets and the average wavelet abstracted from well-seismic calibration, we obtain the search range of scale, an important adaptive parameter that controls the width of the wavelet in time and its bandwidth in frequency. Hence, the efficiency of selecting proper wavelets is improved by first making a preliminary estimate and then refining a local search range. After removal of noise wavelets, we integrate the useful wavelets, to which an adaptive spectral whitening technique is first applied. This approach improves the resolution of the seismic signal and enhances the energy of weak wavelets simultaneously. Application to real seismic data shows that the method has good prospects for practical use.
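The greedy decomposition at the core of this approach can be sketched in a few lines. The dictionary below (Gaussian-windowed cosines standing in for Morlet wavelets, on an illustrative grid of scales, frequencies and shifts) and the synthetic trace are assumptions, not the authors' data or parameter-search refinements:

```python
import numpy as np

n = 128
t = np.arange(n)
# Dictionary of unit-norm Gaussian-windowed cosines over a coarse parameter grid.
atoms, labels = [], []
for scale in (5.0, 10.0, 20.0):
    for freq in (0.05, 0.10, 0.20):
        for shift in range(0, n, 4):
            a = (np.exp(-0.5 * ((t - shift) / scale) ** 2)
                 * np.cos(2 * np.pi * freq * (t - shift)))
            norm = np.linalg.norm(a)
            if norm > 1e-9:
                atoms.append(a / norm)
                labels.append((scale, freq, shift))
atoms = np.array(atoms)

def matching_pursuit(x, atoms, n_iter):
    """Greedily pick the best-correlated atom and subtract its projection."""
    r = x.astype(float).copy()
    picks, res_norms = [], [np.linalg.norm(r)]
    for _ in range(n_iter):
        c = atoms @ r                        # correlations with unit-norm atoms
        k = int(np.argmax(np.abs(c)))
        picks.append((k, float(c[k])))
        r -= c[k] * atoms[k]                 # remove the selected component
        res_norms.append(np.linalg.norm(r))
    return picks, res_norms

rng = np.random.default_rng(4)
# Synthetic trace built from two dictionary atoms plus weak noise.
x = 1.0 * atoms[10] + 0.7 * atoms[150] + 0.01 * rng.standard_normal(n)
picks, res_norms = matching_pursuit(x, atoms, n_iter=4)
```

The residual norm is non-increasing by construction; noise wavelets would be rejected afterwards by thresholding the picked coefficients.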
Signal Tracking Beyond the Time Resolution of an Atomic Sensor by Kalman Filtering
Jiménez-Martínez, Ricardo; Kołodyński, Jan; Troullinou, Charikleia; Lucivero, Vito Giovanni; Kong, Jia; Mitchell, Morgan W.
2018-01-01
We study causal waveform estimation (tracking) of time-varying signals in a paradigmatic atomic sensor, an alkali vapor monitored by Faraday rotation probing. We use Kalman filtering, which optimally tracks known linear Gaussian stochastic processes, to estimate stochastic input signals that we generate by optical pumping. Comparing the known input to the estimates, we confirm the accuracy of the atomic statistical model and the reliability of the Kalman filter, allowing recovery of waveform details far briefer than the sensor's intrinsic time resolution. With proper filter choice, we obtain similar benefits when tracking partially known and non-Gaussian signal processes, as are found in most practical sensing applications. The method evades the trade-off between sensitivity and time resolution in coherent sensing.
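The tracking principle can be sketched with a scalar Kalman filter on a synthetic AR(1) process standing in for the optically pumped input; the process and measurement variances below are illustrative assumptions, not the sensor's parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
T, phi, q, r = 500, 0.99, 0.1, 1.0   # steps, AR(1) coefficient, process/measurement variances

# Simulate the hidden signal x and its noisy readout y.
x = np.zeros(T)
for k in range(1, T):
    x[k] = phi * x[k - 1] + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), T)

# Scalar Kalman filter: predict with the known dynamics, correct with each sample.
xhat = np.empty(T)
est, P = 0.0, 1.0                    # state estimate and its variance
for k in range(T):
    est, P = phi * est, phi * phi * P + q    # predict
    K = P / (P + r)                          # Kalman gain
    est += K * (y[k] - est)                  # update with the new measurement
    P *= (1.0 - K)
    xhat[k] = est

mse_kf = float(np.mean((xhat - x) ** 2))
mse_raw = float(np.mean((y - x) ** 2))
```

Because the filter fuses the measurement with the model prediction, its error is below the raw single-sample measurement noise, which is the sense in which waveform details briefer than the intrinsic time resolution become recoverable.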
Time resolution improvement of Schottky CdTe PET detectors using digital signal processing
International Nuclear Information System (INIS)
Nakhostin, M.; Ishii, K.; Kikuchi, Y.; Matsuyama, S.; Yamazaki, H.; Torshabi, A. Esmaili
2009-01-01
We present the results of our study on the timing performance of Schottky CdTe PET detectors using the technique of digital signal processing. The coincidence signals between a CdTe detector (15x15x1 mm³) and a fast liquid scintillator detector were digitized by a fast digital oscilloscope and analyzed. In the analysis, digital versions of the elements of timing circuits, including pulse shaper and time discriminator, were created and a digital implementation of the Amplitude and Rise-time Compensation (ARC) mode of timing was performed. Owing to a very fine adjustment of the parameters of timing measurement, a good time resolution of less than 9.9 ns (FWHM) at an energy threshold of 150 keV was achieved. In the next step, a new method of time pickoff for improvement of timing resolution without loss in the detection efficiency of CdTe detectors was examined. In the method, signals from a CdTe detector are grouped by their rise-times and different procedures of time pickoff are applied to the signals of each group. Then, the time pickoffs are synchronized by compensating the fixed time offset, caused by the different time pickoff procedures. This method leads to an improved time resolution of ∼7.2 ns (FWHM) at an energy threshold of as low as 150 keV. The methods presented in this work are computationally fast enough to be used for online processing of data in an actual PET system.
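The digital ARC-style pickoff can be illustrated as follows: a delayed copy of the pulse minus an attenuated prompt copy forms a bipolar signal whose zero crossing is, for a pure amplitude change, independent of pulse height. The pulse shape, fraction and delay below are illustrative assumptions, not the paper's detector parameters:

```python
import numpy as np

def arc_time(v, frac=0.3, delay=6):
    """Sub-sample zero-crossing time of the bipolar signal (delayed - frac * prompt)."""
    b = np.zeros_like(v, dtype=float)
    b[delay:] = v[:-delay]                   # delayed copy
    b -= frac * v                            # minus attenuated prompt copy
    i0 = int(np.argmin(b))                   # bottom of the bipolar dip
    i = i0 + int(np.argmax(b[i0:] > 0))      # first sample back above zero
    # Linear interpolation between samples i-1 and i for sub-sample precision.
    return (i - 1) + (0.0 - b[i - 1]) / (b[i] - b[i - 1])

t = np.arange(200.0)
t0 = 50.0
# Simple detector-like pulse: fast rise, slow exponential decay, starting at t0.
pulse = np.where(t >= t0,
                 (1 - np.exp(-(t - t0) / 3.0)) * np.exp(-(t - t0) / 30.0), 0.0)

t_small = arc_time(0.1 * pulse)   # low-amplitude event
t_large = arc_time(1.0 * pulse)   # high-amplitude event, same true arrival time
```

Since the bipolar signal scales linearly with amplitude, the zero crossing (and hence the pickoff) is unchanged between the two amplitudes, which is the walk suppression that a leading-edge threshold lacks.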
Using hyperentanglement to enhance resolution, signal-to-noise ratio, and measurement time
Smith, James F.
2017-03-01
A hyperentanglement-based atmospheric imaging/detection system involving only a signal and an ancilla photon will be considered for optical and infrared frequencies. Only the signal photon will propagate in the atmosphere and its loss will be classical. The ancilla photon will remain within the sensor experiencing low loss. Closed form expressions for the wave function, normalization, density operator, reduced density operator, symmetrized logarithmic derivative, quantum Fisher information, quantum Cramer-Rao lower bound, coincidence probabilities, probability of detection, probability of false alarm, probability of error after M measurements, signal-to-noise ratio, quantum Chernoff bound, time-on-target expressions related to probability of error, and resolution will be provided. The effect of noise in every mode will be included as well as loss. The system will provide the basic design for an imaging/detection system functioning at optical or infrared frequencies that offers better than classical angular and range resolution. Optimization for enhanced resolution will be included. The signal-to-noise ratio will be increased by a factor equal to the number of modes employed during the hyperentanglement process. Likewise, the measurement time can be reduced by the same factor. The hyperentanglement generator will typically make use of entanglement in polarization, energy-time, orbital angular momentum and so on. Mathematical results will be provided describing the system's performance as a function of loss mechanisms and noise.
Optimization of High-Resolution Continuous Flow Analysis for Transient Climate Signals in Ice Cores
DEFF Research Database (Denmark)
Bigler, Matthias; Svensson, Anders; Kettner, Ernesto
2011-01-01
Over the past two decades, continuous flow analysis (CFA) systems have been refined and widely used to measure aerosol constituents in polar and alpine ice cores at very high depth resolution. Here we present a newly designed system consisting of sodium, ammonium, dust particle, and electrolytic meltwater conductivity detection modules. The system is optimized for high-resolution determination of transient signals in thin layers of deep polar ice cores. Based on standard measurements and by comparing sections of early Holocene and glacial ice from Greenland, we find that the new system features...
Modulation, resolution and signal processing in radar, sonar and related systems
Benjamin, R; Costrell, L
1966-01-01
Electronics and Instrumentation, Volume 35: Modulation, Resolution and Signal Processing in Radar, Sonar and Related Systems presents the practical limitations and potentialities of advanced modulation systems. This book discusses the concepts and techniques in the radar context, but they are equally essential to sonar and to a wide range of signaling and data-processing applications, including seismology, radio astronomy, and band-spread communications. Organized into 15 chapters, this volume begins with an overview of the principal developments sought in pulse radar. This text then provides a
Carlos Ferrer; Eduardo González; María E. Hernández-Díaz; Diana Torres; Anesto del Toro
2009-01-01
Harmonics-to-noise ratios (HNRs) are affected by general aperiodicity in voiced speech signals. To specifically reflect a signal-to-additive-noise ratio, the measurement should be insensitive to other periodicity perturbations, like jitter, shimmer, and waveform variability. The ensemble averaging technique is a time-domain method which has been gradually refined in terms of its sensitivity to jitter and waveform variability and required number of pulses. In this paper, shimmer is introduced ...
International Nuclear Information System (INIS)
Cho, Sanghee; Grazioso, Ron; Zhang Nan; Aykac, Mehmet; Schmand, Matthias
2011-01-01
The main focus of our study is to investigate how the performance of digital timing methods is affected by sampling rate, anti-aliasing and signal interpolation filters. We used the Nyquist sampling theorem to address some basic questions, such as: What is the minimum sampling frequency? How accurate will the signal interpolation be? How do we validate the timing measurements? The preferred sampling rate would be as low as possible, considering the high cost and power consumption of high-speed analog-to-digital converters. However, when the sampling rate is too low, due to the aliasing effect, some artifacts are produced in the timing resolution estimations; the shape of the timing profile is distorted and the FWHM values of the profile fluctuate as the source location changes. Anti-aliasing filters are required in this case to avoid the artifacts, but the timing is degraded as a result. When the sampling rate is marginally above the Nyquist rate, proper signal interpolation is important. A sharp roll-off (higher-order) filter is required to separate the baseband signal from its replicates to avoid aliasing, but in return the computational cost will be higher. We demonstrated the analysis through a digital timing study using fast LSO scintillation crystals as used in time-of-flight PET scanners. From the study, we observed that there is no significant timing resolution degradation down to a 1.3 GHz sampling frequency, and the computational requirement for the signal interpolation is reasonably low. A so-called sliding test is proposed as a validation tool that checks for constant timing resolution of a given timing pick-off method regardless of the source location. Lastly, a performance comparison for several digital timing methods is also shown.
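Signal interpolation just above the Nyquist rate can be sketched with FFT zero-padding, which is exact band-limited interpolation for periodic data. The pulse model and sub-sample position below are illustrative assumptions, not the LSO pulse shapes of the study:

```python
import numpy as np

def fft_upsample(x, up):
    """Band-limited interpolation by zero-padding the spectrum (even-length x)."""
    N = len(x)
    M = up * N
    X = np.fft.fft(x)
    Y = np.zeros(M, dtype=complex)
    Y[:N // 2] = X[:N // 2]                  # positive frequencies
    Y[M - N // 2 + 1:] = X[N // 2 + 1:]      # negative frequencies
    # Split the Nyquist bin between positive and negative frequencies.
    Y[N // 2] = X[N // 2] / 2
    Y[M - N // 2] = X[N // 2] / 2
    return np.real(np.fft.ifft(Y)) * up

t0 = 30.37                                   # true pulse position, between samples
n = np.arange(64)
x = np.sinc(0.8 * (n - t0))                  # pulse band-limited to 0.4 cycles/sample

up = 16
y = fft_upsample(x, up)
est = np.argmax(y) / up                      # peak position on the fine grid
```

The raw sample grid can only locate the peak to the nearest integer (error ≈ 0.37 samples here), while the interpolated grid recovers the sub-sample position, which is exactly what a marginally-sampled timing pick-off needs.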
Directory of Open Access Journals (Sweden)
Carlos Ferrer
2009-01-01
Harmonics-to-noise ratios (HNRs) are affected by general aperiodicity in voiced speech signals. To specifically reflect a signal-to-additive-noise ratio, the measurement should be insensitive to other periodicity perturbations, like jitter, shimmer, and waveform variability. The ensemble averaging technique is a time-domain method which has been gradually refined in terms of its sensitivity to jitter and waveform variability and required number of pulses. In this paper, shimmer is introduced in the model of the ensemble average, and a formula is derived which allows the reduction of shimmer effects in HNR calculation. The validity of the technique is evaluated using synthetically shimmered signals, and the prerequisites (glottal pulse positions and amplitudes) are obtained by means of fully automated methods. The results demonstrate the feasibility and usefulness of the correction.
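The basic ensemble-average HNR estimator can be sketched as follows. This is the uncorrected estimator only, on assumed synthetic cycles; the paper's shimmer-correction formula is not reproduced:

```python
import numpy as np

def ensemble_hnr(pulses):
    """HNR (dB) from aligned, equal-length voice cycles: the periodic part is
    the ensemble mean, the additive noise is the residual about it."""
    p = np.asarray(pulses, dtype=float)
    n = p.shape[0]
    avg = p.mean(axis=0)                     # harmonic (periodic) estimate
    resid = p - avg                          # additive-noise estimate
    noise_energy = np.mean(resid ** 2) * n / (n - 1)   # unbiased noise energy
    return 10.0 * np.log10(np.mean(avg ** 2) / noise_energy)

rng = np.random.default_rng(5)
cycle = np.sin(2 * np.pi * np.arange(50) / 50)     # one idealized glottal cycle
sigma = np.sqrt(np.mean(cycle ** 2) / 10.0)        # noise level for a true 10 dB HNR
pulses = cycle + sigma * rng.standard_normal((200, 50))
hnr = ensemble_hnr(pulses)
```

With 200 aligned cycles the estimate lands close to the true 10 dB; shimmer (cycle-to-cycle amplitude variation) would inflate the residual and bias this estimator low, which is the effect the paper's correction removes.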
Belkić, Dževad; Belkić, Karen
2018-01-01
This paper on molecular imaging emphasizes improving specificity of magnetic resonance spectroscopy (MRS) for early cancer diagnostics by high-resolution data analysis. Sensitivity of magnetic resonance imaging (MRI) is excellent, but specificity is insufficient. Specificity is improved with MRS by going beyond morphology to assess the biochemical content of tissue. This is contingent upon accurate data quantification of diagnostically relevant biomolecules. Quantification is spectral analysis which reconstructs chemical shifts, amplitudes and relaxation times of metabolites. Chemical shifts inform on electronic shielding of resonating nuclei bound to different molecular compounds. Oscillation amplitudes in time signals retrieve the abundance of MR sensitive nuclei whose number is proportional to metabolite concentrations. Transverse relaxation times, the reciprocal of decay probabilities of resonances, arise from spin-spin coupling and reflect local field inhomogeneities. In MRS single voxels are used. For volumetric coverage, multi-voxels are employed within a hybrid of MRS and MRI called magnetic resonance spectroscopic imaging (MRSI). Common to MRS and MRSI is encoding of time signals and subsequent spectral analysis. Encoded data do not provide direct clinical information. Spectral analysis of time signals can yield the quantitative information, of which metabolite concentrations are the most clinically important. This information is equivocal with standard data analysis through the non-parametric, low-resolution fast Fourier transform and post-processing via fitting. By applying the fast Padé transform (FPT) with high-resolution, noise suppression and exact quantification via quantum mechanical signal processing, advances are made, presented herein, focusing on four areas of critical public health importance: brain, prostate, breast and ovarian cancers.
Macromolecular 3D SEM reconstruction strategies: Signal to noise ratio and resolution
International Nuclear Information System (INIS)
Woodward, J.D.; Wepf, R.A.
2014-01-01
Three-dimensional scanning electron microscopy generates quantitative volumetric structural data from SEM images of macromolecules. This technique provides a quick and easy way to define the quaternary structure and handedness of protein complexes. Here, we apply a variety of preparation and imaging methods to filamentous actin in order to explore the relationship between resolution, signal-to-noise ratio, structural preservation and dataset size. This information can be used to define successful imaging strategies for different applications. - Highlights: • F-actin SEM datasets were collected using 8 different preparation/ imaging techniques. • Datasets were reconstructed by back projection and compared/analyzed • 3DSEM actin reconstructions can be produced with <100 views of the asymmetric unit. • Negatively stained macromolecules can be reconstructed by 3DSEM to ∼3 nm resolution
Data-driven gating in PET: Influence of respiratory signal noise on motion resolution.
Büther, Florian; Ernst, Iris; Frohwein, Lynn Johann; Pouw, Joost; Schäfers, Klaus Peter; Stegger, Lars
2018-05-21
Data-driven gating (DDG) approaches for positron emission tomography (PET) are interesting alternatives to conventional hardware-based gating methods. In DDG, the measured PET data themselves are utilized to calculate a respiratory signal that is subsequently used for gating purposes. The success of gating is then highly dependent on the statistical quality of the PET data. In this study, we investigate how this quality determines signal noise and thus motion resolution in clinical PET scans using a center-of-mass-based (COM) DDG approach, specifically with regard to motion management of target structures in future radiotherapy planning applications. PET list mode datasets acquired in one bed position of 19 different radiotherapy patients undergoing pretreatment [18F]FDG PET/CT or [18F]FDG PET/MRI were included in this retrospective study. All scans were performed over a region with organs (myocardium, kidneys) or tumor lesions of high tracer uptake and under free breathing. Aside from the original list mode data, datasets with progressively decreasing PET statistics were generated. From these, COM DDG signals were derived for subsequent amplitude-based gating of the original list mode file. The apparent respiratory shift d from end-expiration to end-inspiration was determined from the gated images and expressed as a function of the signal-to-noise ratio SNR of the determined gating signals. This relation was tested against an additional 25 [18F]FDG PET/MRI list mode datasets, where high-precision MR navigator-like respiratory signals were available as reference signals for respiratory gating of the PET data, and against data from a dedicated thorax phantom scan. All 19 original high-quality list mode datasets demonstrated the same behavior in terms of motion resolution when reducing the amount of list mode events for DDG signal generation. Ratios and directions of respiratory shifts between end-respiratory gates and the respective nongated image were constant over all
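The COM principle can be sketched on synthetic data: counts from a "hot" structure are binned into short frames, the axial centre of mass of each frame forms the respiratory trace, and the trace noise shrinks as counts grow. Geometry, frame length and count levels below are illustrative assumptions, not the clinical protocol:

```python
import numpy as np

def com_trace(counts_per_frame, n_frames=600, nz=40, seed=6):
    """Axial centre-of-mass respiratory trace from Poisson frame data."""
    rng = np.random.default_rng(seed)
    z = np.arange(nz)
    # Breathing moves a Gaussian hot structure up and down the axial axis.
    resp = 1.5 * np.sin(2 * np.pi * np.arange(n_frames) / 60.0)  # shift in bins
    trace = np.empty(n_frames)
    for k in range(n_frames):
        profile = np.exp(-0.5 * ((z - (20.0 + resp[k])) / 3.0) ** 2)
        lam = counts_per_frame * profile / profile.sum()  # expected counts per bin
        c = rng.poisson(lam)
        trace[k] = (z * c).sum() / max(c.sum(), 1)        # axial centre of mass
    return trace, 20.0 + resp

hi, truth = com_trace(5000)     # high-statistics frames
lo, _ = com_trace(50)           # low-statistics frames
err_hi = float(np.std(hi - truth))
err_lo = float(np.std(lo - truth))
```

The COM noise scales roughly as one over the square root of the frame counts, which is why the achievable motion resolution degrades as list mode statistics are reduced.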
Dudok, Barna; Barna, László; Ledri, Marco; Szabó, Szilárd I; Szabadits, Eszter; Pintér, Balázs; Woodhams, Stephen G; Henstridge, Christopher M; Balla, Gyula Y; Nyilas, Rita; Varga, Csaba; Lee, Sang-Hun; Matolcsi, Máté; Cervenak, Judit; Kacskovics, Imre; Watanabe, Masahiko; Sagheddu, Claudia; Melis, Miriam; Pistis, Marco; Soltesz, Ivan; Katona, István
2015-01-01
A major challenge in neuroscience is to determine the nanoscale position and quantity of signaling molecules in a cell type- and subcellular compartment-specific manner. We developed a new approach to this problem by combining cell-specific physiological and anatomical characterization with super-resolution imaging and studied the molecular and structural parameters shaping the physiological properties of synaptic endocannabinoid signaling in the mouse hippocampus. We found that axon terminals of perisomatically projecting GABAergic interneurons possessed increased CB1 receptor number, active-zone complexity and receptor/effector ratio compared with dendritically projecting interneurons, consistent with higher efficiency of cannabinoid signaling at somatic versus dendritic synapses. Furthermore, chronic Δ(9)-tetrahydrocannabinol administration, which reduces cannabinoid efficacy on GABA release, evoked marked CB1 downregulation in a dose-dependent manner. Full receptor recovery required several weeks after the cessation of Δ(9)-tetrahydrocannabinol treatment. These findings indicate that cell type-specific nanoscale analysis of endogenous protein distribution is possible in brain circuits and identify previously unknown molecular properties controlling endocannabinoid signaling and cannabis-induced cognitive dysfunction.
Simulating return signals of a spaceborne high-spectral resolution lidar channel at 532 nm
Xiao, Yu; Binglong, Chen; Min, Min; Xingying, Zhang; Lilin, Yao; Yiming, Zhao; Lidong, Wang; Fu, Wang; Xiaobo, Deng
2018-06-01
A high spectral resolution lidar (HSRL) system employs a narrow spectral filter to separate the particulate (cloud/aerosol) and molecular scattering components in lidar return signals, which improves the quality of the retrieved cloud/aerosol optical properties. To better develop a future spaceborne HSRL system, a novel simulation technique was developed to simulate spaceborne HSRL return signals at 532 nm using the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) cloud/aerosol extinction coefficient product and numerical weather prediction data. To validate the simulated data, a mathematical particulate extinction coefficient retrieval method for spaceborne HSRL return signals is described here. We compare particulate extinction coefficient profiles from the CALIPSO operational product with those retrieved from the simulated spaceborne HSRL data; the final results demonstrate that the two agree well with each other. Further uncertainty analysis shows that the relative uncertainties are acceptable for retrieving the optical properties of cloud and aerosol. This indicates that the return signals of the spaceborne HSRL molecular channel at 532 nm will be suitable for developing operational algorithms supporting a future spaceborne HSRL system.
Influence of Signal-to-Noise Ratio and Point Spread Function on Limits of Super-Resolution
Pham, T.Q.; Vliet, L.J. van; Schutte, K.
2005-01-01
This paper presents a method to predict the limit of possible resolution enhancement given a sequence of low resolution images. Three important parameters influence the outcome of this limit: the total Point Spread Function (PSF), the Signal-to-Noise Ratio (SNR) and the number of input images.
Chemyakin, Eduard; Müller, Detlef; Burton, Sharon; Kolgotin, Alexei; Hostetler, Chris; Ferrare, Richard
2014-11-01
We present the results of a feasibility study in which a simple, automated, and unsupervised algorithm, which we call the arrange and average algorithm, is used to infer microphysical parameters (complex refractive index, effective radius, total number, surface area, and volume concentrations) of atmospheric aerosol particles. The algorithm uses backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm as input information. Testing of the algorithm is based on synthetic optical data that are computed from prescribed monomodal particle size distributions and complex refractive indices that describe spherical, primarily fine mode pollution particles. We tested the performance of the algorithm for the "3 backscatter (β)+2 extinction (α)" configuration of a multiwavelength aerosol high-spectral-resolution lidar (HSRL) or Raman lidar. We investigated the degree to which the microphysical results retrieved by this algorithm depend on the number of input backscatter and extinction coefficients. For example, we tested "3β+1α," "2β+1α," and "3β" lidar configurations. This arrange and average algorithm can be used in two ways. First, it can be applied for quick data processing of experimental data acquired with lidar. Fast automated retrievals of microphysical particle properties are needed in view of the enormous amount of data that can be acquired by the NASA Langley Research Center's airborne "3β+2α" High-Spectral-Resolution Lidar (HSRL-2). It would prove useful for the growing number of ground-based multiwavelength lidar networks, and it would provide an option for analyzing the vast amount of optical data acquired with a future spaceborne multiwavelength lidar. The second potential application is to improve the microphysical particle characterization with our existing inversion algorithm that uses Tikhonov's inversion with regularization. This advanced algorithm has recently undergone development to allow automated and
Energy Technology Data Exchange (ETDEWEB)
Bernardi, G. [SKA SA, 3rd Floor, The Park, Park Road, Pinelands, 7405 (South Africa); McQuinn, M. [Department of Astronomy, University of California, Berkeley, CA 94720 (United States); Greenhill, L. J., E-mail: gbernardi@ska.ac.za [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)
2015-01-20
The most promising near-term observable of the cosmic dark age prior to widespread reionization (z ∼ 15-200) is the sky-averaged λ21 cm background arising from hydrogen in the intergalactic medium. Though an individual antenna could in principle detect the line signature, data analysis must separate foregrounds that are orders of magnitude brighter than the λ21 cm background (but that are anticipated to vary monotonically and gradually with frequency, e.g., they are considered 'spectrally smooth'). Using more physically motivated models for foregrounds than in previous studies, we show that the intrinsic spectral smoothness of the foregrounds is likely not a concern, and that data analysis for an ideal antenna should be able to detect the λ21 cm signal after subtracting a ∼fifth-order polynomial in log ν. However, we find that the foreground signal is corrupted by the angular and frequency-dependent response of a real antenna. The frequency dependence complicates modeling of foregrounds commonly based on the assumption of spectral smoothness. Our calculations focus on the Large-aperture Experiment to detect the Dark Age, which combines both radiometric and interferometric measurements. We show that statistical uncertainty remaining after fitting antenna gain patterns to interferometric measurements is not anticipated to compromise extraction of the λ21 cm signal for a range of cosmological models after fitting a seventh-order polynomial to radiometric data. Our results generalize to most efforts to measure the sky-averaged spectrum.
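The polynomial foreground subtraction can be sketched numerically. The foreground model, band and toy absorption trough below are illustrative assumptions, not the experiment's data: a smooth power law with a mildly running index is exactly low-order in log ν, so a fifth-order fit removes it while a narrow spectral feature survives in the residuals:

```python
import numpy as np

nu = np.linspace(40.0, 120.0, 400)             # frequency grid, MHz
x = np.log(nu / 75.0)
# Smooth synchrotron-like foreground (thousands of K) with running spectral index.
T_fg = 2000.0 * (nu / 75.0) ** (-2.5 + 0.1 * x)
# Toy Gaussian absorption trough standing in for the sky-averaged 21 cm signature.
T_21 = -0.5 * np.exp(-0.5 * ((nu - 78.0) / 4.0) ** 2)
T = T_fg + T_21

# Fit a fifth-order polynomial in log(nu) to log(T) and subtract the model.
coef = np.polyfit(x, np.log(T), 5)
resid = T - np.exp(np.polyval(coef, x))
```

The residuals are orders of magnitude below the foreground brightness yet retain a clear dip near the trough frequency; a frequency-dependent antenna response would break the smoothness assumption this relies on, which is the paper's central caveat.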
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
Directory of Open Access Journals (Sweden)
Beatriz Díez-Dacal
2010-01-01
Prostanoids are products of cyclooxygenase biosynthetic pathways and constitute a family of lipidic mediators of widely diverse structures and biological actions. Besides their known proinflammatory role, numerous works have revealed the anti-inflammatory effects of various prostanoids and established their role in the resolution of inflammation. Among these, prostaglandins with cyclopentenone structure (cyPG) are electrophilic lipids that may act through various mechanisms, including the activation of nuclear and membrane receptors and, importantly, direct addition to protein cysteine residues and modification of protein function. Due to their ability to influence cysteine modification–mediated signaling, cyPG may play a critical role in the interplay between redox and inflammatory signaling pathways. Moreover, cellular redox status modulates cyPG addition to proteins; thus, a reciprocal regulation exists between these two factors. After initial controversy, it is becoming clear that endogenous cyPG are generated at concentrations sufficient to promote inflammatory resolution. As for other prostanoids, cyPG effects are highly dependent on context factors and they may exert pro- or anti-inflammatory actions in a cell type–dependent manner, or even biphasic or dual actions in a given cell type or tissue. In light of the growing number of cyPG protein targets identified, cyPG resemble other pleiotropic mediators acting through protein modification. However, their complex structure results in an inter- and intramolecular selectivity of the residues being modified, thus opening the way for structure-activity and drug discovery studies. Detailed characterization of cyPG interactions with cellular proteins will help us to understand their mechanism of action fully and establish their therapeutic potential in inflammation.
Rheineck-Leyssius, A T; Kalkman, C J
1999-05-01
To determine the effect of a new pulse oximeter (Nellcor Symphony N-3000, Pleasanton, CA) with signal processing technique (Oxismart) on the incidence of false alarms in the postanesthesia care unit (PACU). Prospective study. Nonuniversity hospital. 603 consecutive ASA physical status I, II, and III patients recovering from general or regional anesthesia in the PACU. We compared the number of alarms produced by a recently developed "third"-generation pulse oximeter (Nellcor Symphony N-3000) with Oxismart signal processing technique and a conventional pulse oximeter (Criticare 504, Waukesha, WI). Patients were randomly assigned to either a Nellcor pulse oximeter or a Criticare with the signal averaging time set at either 12 or 21 seconds. For each patient the number of false (artifact) alarms was counted. The Nellcor generated one false alarm in 199 patients and 36 (in 31 patients) "loss of pulse" alarms. The conventional pulse oximeter with the averaging time set at 12 seconds generated a total of 32 false alarms in 17 of 197 patients [compared with the Nellcor, relative risk (RR) 0.06, confidence interval (CI) 0.01 to 0.25] and a total of 172 "loss of pulse" alarms in 79 patients (RR 0.39, CI 0.28 to 0.55). The conventional pulse oximeter with the averaging time set at 21 seconds generated 12 false alarms in 11 of 207 patients (compared with the Nellcor, RR 0.09, CI 0.02 to 0.48) and a total of 204 "loss of pulse" alarms in 81 patients (RR 0.40, CI 0.28 to 0.56). The lower incidence of false alarms of the conventional pulse oximeter with the longest averaging time compared with the shorter averaging time did not reach statistical significance (false alarms RR 0.62, CI 0.3 to 1.27; "loss of pulse" alarms RR 0.98, CI 0.77 to 1.3). To date, this is the first report of a pulse oximeter that produced almost no false alarms in the PACU.
Estimation of red-light running frequency using high-resolution traffic and signal data.
Chen, Peng; Yu, Guizhen; Wu, Xinkai; Ren, Yilong; Li, Yueguang
2017-05-01
Red-light running (RLR) emerges as a major cause of intersection-related crashes and endangers intersection safety. To reduce RLR violations, it is critical to identify the influential factors associated with RLR and to estimate RLR frequency. Without resorting to video camera recordings, this study investigates this important issue by utilizing high-resolution traffic and signal event data collected from loop detectors at five intersections on Trunk Highway 55, Minneapolis, MN. First, a simple method is proposed to identify RLR by fully utilizing the information obtained from stop bar detectors, downstream entrance detectors and advance detectors. Using 12 months of event data, a total of 6550 RLR cases were identified. With RLR frequency defined as the conditional probability of RLR under a certain traffic or signal condition (veh/1000 veh), the relationships between RLR frequency and influential factors including arrival time at the advance detector, approaching speed, headway, gap to the preceding vehicle in the adjacent lane, cycle length, geometric characteristics and even snowy weather were empirically investigated. Statistical analysis shows good agreement with traffic engineering practice, e.g., RLR is most likely to occur on weekdays during peak periods under large traffic demands and longer signal cycles, and 95.24% of RLR events occurred within the first 1.5 s after the onset of the red phase. The findings confirmed that vehicles tend to run the red light when they are close to the intersection during the phase transition, and that vehicles following a leading vehicle with short headways are also likely to run the red light. Last, a simplified nonlinear regression model is proposed to estimate RLR frequency based on the data from the advance detector. The study is expected to help better understand RLR occurrence and further contribute to the future improvement of intersection safety. Copyright © 2017 Elsevier Ltd. All rights reserved.
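The detector-based identification logic can be sketched as follows. This is a simplified reconstruction, not the authors' exact rules: a stop-bar actuation during a red phase that is followed by a downstream entrance actuation within an assumed travel-time window is counted as RLR (the vehicle crossed rather than stopped):

```python
import bisect

def is_red(t, red_intervals):
    """red_intervals: time-sorted list of (start, end) red phases."""
    starts = [s for s, _ in red_intervals]
    i = bisect.bisect_right(starts, t) - 1
    return i >= 0 and t < red_intervals[i][1]

def count_rlr(stopbar_hits, entrance_hits, red_intervals, max_travel=2.5):
    """Count stop-bar hits during red that reach the downstream entrance
    detector within max_travel seconds (i.e. the vehicle did not stop)."""
    ent = sorted(entrance_hits)
    rlr = 0
    for t in stopbar_hits:
        if not is_red(t, red_intervals):
            continue
        j = bisect.bisect_left(ent, t)       # next entrance hit at or after t
        if j < len(ent) and ent[j] - t <= max_travel:
            rlr += 1
    return rlr

# One red phase from t=10 s to t=40 s; the hit at t=11 s continues downstream
# (RLR), the hit at t=20 s never reaches the entrance detector (stopped).
n_rlr = count_rlr([5.0, 11.0, 20.0], [12.5, 50.0], [(10.0, 40.0)])
```

Dividing such counts by the total vehicle count under each traffic or signal condition gives the conditional RLR frequency (veh/1000 veh) used in the analysis.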
International Nuclear Information System (INIS)
Lee, Z.; Rose, H.; Lehtinen, O.; Biskupek, J.; Kaiser, U.
2014-01-01
In order to achieve the highest resolution in aberration-corrected (AC) high-resolution transmission electron microscopy (HRTEM) images, high electron doses are required which only a few samples can withstand. In this paper we perform dose-dependent AC-HRTEM image calculations, and study the dependence of the signal-to-noise ratio, atom contrast and resolution on electron dose and sampling. We introduce dose-dependent contrast, which can be used to evaluate the visibility of objects under different dose conditions. Based on our calculations, we determine optimum samplings for high and low electron dose imaging conditions. - Highlights: • The definition of dose-dependent atom contrast is introduced. • The dependence of the signal-to-noise ratio, atom contrast and specimen resolution on electron dose and sampling is explored. • The optimum sampling can be determined according to different dose conditions
Hosoya, Y; Kubota, I; Shibata, T; Yamaki, M; Ikeda, K; Tomoike, H
1992-06-01
Few studies have examined the relation between the body surface distribution of high- and low-frequency components within the QRS complex and ventricular tachycardia (VT). Eighty-seven signal-averaged ECGs were obtained from 30 normal subjects (N group) and 30 patients with previous anterior myocardial infarction (MI), with VT (MI-VT[+] group, n = 10) or without VT (MI-VT[-] group, n = 20). The onset and offset of the QRS complex were determined from 87-lead root mean square values computed from the averaged (but not filtered) ECG waveforms. Fast Fourier transform analysis was performed on the signal-averaged ECG. The resulting Fourier coefficients were attenuated by use of the transfer function, and the inverse transform was then performed for five frequency ranges (0-25, 25-40, 40-80, 80-150, and 150-250 Hz). From the QRS onset to the QRS offset, the time integral of the absolute value of the reconstructed waveforms was calculated for each of the five frequency ranges. The body surface distributions of these areas were expressed as QRS area maps. The maximal values of the QRS area maps were compared among the three groups. In the frequency ranges of 0-25 and 150-250 Hz, there were no significant differences in the maximal values among the three groups. Both MI groups had significantly smaller maximal values of the QRS area maps in the frequency ranges of 25-40 and 40-80 Hz compared with the N group. The MI-VT(+) group had significantly smaller maximal values in the frequency ranges of 40-80 and 80-150 Hz than the MI-VT(-) group. The three groups were clearly differentiated by the maximal values of the 40-80-Hz QRS area map. These results suggest that the maximal value of the 40-80-Hz QRS area map is a new marker for VT after anterior MI.
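The band-wise processing described in this record (Fourier transform, restriction to a frequency band, inverse transform, then time integration of the absolute value) can be sketched as below. The sampling rate, test waveform, and half-open band edges are illustrative choices, not taken from the paper.

```python
import numpy as np

# Sketch of a band-limited "QRS area": FFT the averaged waveform, keep one
# frequency band, inverse-transform, and integrate the absolute value over
# the analysis window. All numeric parameters are illustrative.

def band_area(signal, fs, f_lo, f_hi):
    """Time integral of |band-limited signal| via a simple Riemann sum."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs < f_hi)       # half-open band [f_lo, f_hi)
    band = np.fft.irfft(spec * mask, n=len(signal))
    return np.sum(np.abs(band)) / fs

fs = 1000.0
t = np.arange(0, 0.5, 1.0 / fs)
wave = np.sin(2 * np.pi * 30.0 * t)   # energy entirely inside the 25-40 Hz band
a_mid = band_area(wave, fs, 25.0, 40.0)
a_high = band_area(wave, fs, 80.0, 150.0)  # essentially zero for this waveform
```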
Directory of Open Access Journals (Sweden)
S. P. Arunachalam
2018-01-01
Analysis of biomedical signals can yield invaluable information for prognosis, diagnosis, therapy evaluation, risk assessment, and disease prevention. Such signals are often recorded as short time series data that challenge existing complexity classification algorithms such as Shannon entropy (SE) and other techniques. The purpose of this study was to improve the previously developed multiscale entropy (MSE) technique by incorporating a nearest-neighbor moving-average kernel, which can be used for analysis of nonlinear and non-stationary short time series physiological data. The approach was tested for robustness with respect to noise using simulated sinusoidal and ECG waveforms. The feasibility of MSE to discriminate between normal sinus rhythm (NSR) and atrial fibrillation (AF) was tested on a single-lead ECG. In addition, the MSE algorithm was applied to identify pivot points of rotors that were induced in ex vivo isolated rabbit hearts. The improved MSE technique robustly estimated the complexity of the signal compared to SE under various noise conditions, discriminated NSR and AF on a single-lead ECG, and precisely identified the pivot points of ex vivo rotors by providing better contrast between the rotor core and the peripheral region. The improved MSE technique can provide efficient complexity analysis of a variety of nonlinear and non-stationary short-time biomedical signals.
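A minimal sketch of multiscale entropy with a moving-average kernel follows. The sample-entropy parameters (m, r) follow the usual convention, the tolerance r is treated as an absolute value for simplicity, and none of this is the authors' code.

```python
import math

# Toy multiscale entropy: sample entropy computed on moving-average
# coarse-grainings of the series. Parameters m=2, r=0.2 are conventional
# defaults; r is absolute here (often it is scaled by the signal's std).

def sample_entropy(x, m=2, r=0.2):
    n = len(x)
    def count(mm):
        # number of template pairs of length mm within tolerance r (Chebyshev)
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    c += 1
        return c
    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def moving_average(x, w):
    return [sum(x[i:i + w]) / w for i in range(len(x) - w + 1)]

def multiscale_entropy(x, scales, m=2, r=0.2):
    return [sample_entropy(moving_average(x, s), m, r) for s in scales]
```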
Zhang, Fangzheng; Guo, Qingshui; Pan, Shilong
2017-10-23
Real-time and high-resolution target detection is highly desirable in modern radar applications. Electronic techniques have encountered grave difficulties in the development of such radars, which strictly rely on a large instantaneous bandwidth. In this article, a photonics-based real-time high-range-resolution radar is proposed with optical generation and processing of broadband linear frequency modulation (LFM) signals. A broadband LFM signal is generated in the transmitter by photonic frequency quadrupling, and the received echo is de-chirped to a low-frequency signal by photonic frequency mixing. The system can operate at a high frequency and a large bandwidth while enabling real-time processing by low-speed analog-to-digital conversion and digital signal processing. A conceptual radar is established. Real-time processing of an 8-GHz LFM signal is achieved with a sampling rate of 500 MSa/s. Accurate distance measurement is implemented with a maximum error of 4 mm within a range of ~3.5 meters. Detection of two targets is demonstrated with a range resolution as high as 1.875 cm. We believe the proposed radar architecture is a reliable solution to overcome the limitations of current radars on operation bandwidth and processing speed, and we hope it will be used in future radars for real-time and high-resolution target detection and imaging.
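The quoted range resolution can be checked from the standard de-chirped LFM relations, assuming an ideal system: dR = c/(2B), and a beat frequency f_b maps to range R = c·f_b·T/(2B) for sweep time T. The sweep time and beat frequency below are illustrative, not the paper's parameters.

```python
# Back-of-the-envelope check of the numbers quoted above for an ideal
# de-chirped LFM radar. With B = 8 GHz, c/(2B) gives roughly 1.87 cm,
# matching the ~1.875 cm range resolution reported.

C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    return C / (2.0 * bandwidth_hz)

def beat_to_range(f_beat_hz, sweep_s, bandwidth_hz):
    return C * f_beat_hz * sweep_s / (2.0 * bandwidth_hz)

dr = range_resolution(8e9)              # ~0.0187 m for the 8-GHz LFM signal
r = beat_to_range(1e6, 1e-3, 8e9)       # illustrative 1-MHz beat, 1-ms sweep
```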
High-resolution mass spectrometry driven discovery of peptidic danger signals in insect immunity.
Directory of Open Access Journals (Sweden)
Arton Berisha
The 'danger model' is an alternative concept for immune response postulating that the immune system reacts to entities that do damage (danger-associated molecular patterns, DAMPs) and not only to entities that are foreign (pathogen-associated molecular patterns, PAMPs), as proposed by classical immunology concepts. In this study we used Galleria mellonella to validate the danger model in insects. Hemolymph of G. mellonella was digested with thermolysin (as a representative of the virulence-associated metalloproteinases produced by human pathogens), followed by chromatographic fractionation. Immune-stimulatory activity was tested by measuring lysozyme activity with lytic zone assays against Micrococcus luteus cell wall components. Peptides were analyzed by nano-scale liquid chromatography coupled to high-resolution Fourier transform mass spectrometers. Addressing the lack of a genome sequence, we complemented the rudimentary NCBI protein database with a recently established transcriptome and de novo sequencing methods for peptide identification. This approach led to the identification of 127 peptides, 9 of which were identified in bioactive fractions. Detailed MS/MS experiments in comparison with synthetic analogues confirmed the amino acid sequences of all 9 peptides. To test the potential of these putative danger signals to induce immune responses, we injected the synthetic analogues into G. mellonella and monitored the anti-bacterial activity against living Micrococcus luteus. Six out of 9 peptides identified in the bioactive fractions exhibited immune-stimulatory activity when injected. Hence, we provide evidence that small peptides resulting from thermolysin-mediated digestion of hemolymph proteins function as endogenous danger signals which can set the immune system into alarm. Consequently, our study indicates that the danger model also plays a role in insect immunity.
Energy Technology Data Exchange (ETDEWEB)
Zheng, Xiaoqing; Cheng, Zeng [Department of Electrical and Computer Engineering, McMaster University (Canada); Deen, M. Jamal, E-mail: jamal@mcmaster.ca [Department of Electrical and Computer Engineering, McMaster University (Canada); School of Biomedical Engineering, McMaster University (Canada); Peng, Hao, E-mail: penghao@mcmaster.ca [Department of Electrical and Computer Engineering, McMaster University (Canada); School of Biomedical Engineering, McMaster University (Canada); Department of Medical Physics, McMaster University, Ontario L8S 4K1, Hamilton (Canada)
2016-02-01
Cadmium Zinc Telluride (CZT) semiconductor detectors are capable of providing superior energy resolution and three-dimensional position information of gamma-ray interactions in a large variety of fields, including nuclear physics, gamma-ray imaging and nuclear medicine. Some dedicated Positron Emission Tomography (PET) systems, for example for breast cancer detection, require higher contrast recovery and more accurate event location compared with a whole-body PET system. The spatial resolution is currently limited by the electrode pitch in CZT detectors. A straightforward approach to increase the spatial resolution is to decrease the detector electrode pitch, but this leads to higher fabrication cost and a larger number of readout channels. In addition, inter-electrode charge spreading can negate any improvement in spatial resolution. In this work, we studied the feasibility of achieving sub-pitch spatial resolution in CZT detectors using two methods: the charge sharing effect and transient signal analysis. We noted that their valid ranges of usage are complementary. The dependence of their corresponding valid ranges on electrode design, depth-of-interaction (DOI), voltage bias and signal triggering threshold was investigated. The implementation of these two methods in both pixelated and cross-strip configurations of CZT detectors was discussed. Our results show that the valid range of the charge sharing effect increases as a function of DOI, but decreases with increasing gap width and bias voltage. For a CZT detector of 5 mm thickness, 100 µm gap and biased at 400 V, the valid range of the charge sharing effect was found to be about 112.3 µm around the gap center. This result complements the valid range of the transient signal analysis within one electrode pitch. For a signal-to-noise ratio (SNR) of ~17 and preliminary measurements, the sub-pitch spatial resolution is expected to be ~30 µm and ~250 µm for the charge sharing and transient signal analysis methods, respectively.
High-resolution moisture profiles from full-waveform probabilistic inversion of TDR signals
Laloy, Eric; Huisman, Johan Alexander; Jacques, Diederik
2014-11-01
This study presents a novel Bayesian inversion scheme for high-dimensional underdetermined TDR waveform inversion. The methodology quantifies uncertainty in the moisture content distribution, using a Gaussian Markov random field (GMRF) prior as regularization operator. A spatial resolution of 1 cm along a 70-cm long TDR probe is considered for the inferred moisture content. Numerical testing shows that the proposed inversion approach works very well in the case of a perfect model and Gaussian measurement errors. Real-world application results are generally satisfying. For a series of TDR measurements made during imbibition and evaporation from a laboratory soil column, the average root-mean-square error (RMSE) between the maximum a posteriori (MAP) moisture distribution and reference TDR measurements is 0.04 cm3 cm-3. This RMSE value reduces to less than 0.02 cm3 cm-3 for a field application in a podzol soil. The observed model-data discrepancies are primarily due to model inadequacy, such as our simplified modeling of the bulk soil electrical conductivity profile. Among the important issues that should be addressed in future work are the explicit inference of the soil electrical conductivity profile along with the other sampled variables, the modeling of the temperature-dependence of the coaxial cable properties and the definition of an appropriate statistical model of the residual errors.
International Nuclear Information System (INIS)
Clergeau, Jean-Francois; Ferraton, Matthieu; Guerard, Bruno; Khaplanov, Anton; Piscitelli, Francesco; Platz, Martin; Rigal, Jean-Marie; Van Esch, Patrick; Daulle, Thibault
2013-06-01
1D or 2D neutron imaging detectors with individual wire or strip readout using discriminators have the advantage of being able to treat several neutron impacts partially overlapping in time, hence reducing global dead time. A single neutron impact usually gives rise to several discriminator signals. In this paper, we introduce an information-theoretical definition of image resolution. Two point-like spots of neutron impacts with a given distance between them act as a source of information (each neutron hit belongs to one spot or the other), and the detector plus signal treatment is regarded as an imperfect communication channel that transmits this information. The maximal mutual information obtained from this channel as a function of the distance between the spots allows us to define a calibration-independent measure of resolution. We then apply this measure to quantify the resolving power of different algorithms treating these individual discriminator signals, which can be implemented in firmware. The method is then applied to different detectors existing at the ILL. Center-of-gravity methods usually improve the resolution over best-wire algorithms, which are the standard way of treating these signals. (authors)
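The channel view of resolution described above can be illustrated with the simplest possible case: treat the two spots as a 1-bit source and the detector as a binary symmetric channel with error probability p, so the transmitted information is I = 1 - H(p). This toy model illustrates the framework only; it is not the paper's detector model.

```python
import math

# Two spots as a 1-bit source through a binary symmetric channel: fully
# separated spots give p = 0 (1 bit transmitted, i.e. resolved); completely
# overlapping spots give p = 0.5 (0 bits, i.e. unresolved).

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_mutual_information(p_error):
    return 1.0 - h2(p_error)
```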
Influence of signal-to-noise ratio and point spread function on limits of super-resolution
Pham, T.Q.; Van Vliet, L.; Schutte, K.
2005-01-01
This paper presents a method to predict the limit of possible resolution enhancement given a sequence of low-resolution images. Three important parameters influence the outcome of this limit: the total Point Spread Function (PSF), the Signal-to-Noise Ratio (SNR) and the number of input images.
Signal processing techniques were applied to high-resolution time series data obtained from conductivity loggers placed upstream and downstream of an oil and gas wastewater treatment facility along a river. Data was collected over 14-60 days, and several seasons. The power spectral density was us...
Mossetti, Stefano; de Bartolo, Daniela; Veronese, Ivan; Cantone, Marie Claire; Cosenza, Cristina; Nava, Elisa
2017-04-01
International and national organizations have formulated guidelines establishing limits for occupational and residential electromagnetic field (EMF) exposure at high frequencies. Italian legislation fixed 20 V/m as a limit for public protection from exposure to EMFs in the frequency range 0.1 MHz-3 GHz and 6 V/m as a reference level. Recently, the law was changed and the reference level must now be evaluated as the 24-hour average value, instead of the previous highest 6 minutes in a day. The law refers to a technical guide (CEI 211-7/E, published in 2013) for the extrapolation techniques that public authorities have to use when assessing exposure for compliance with limits. In this work, we present measurements carried out with a vectorial spectrum analyzer to identify critical technical aspects of these extrapolation techniques when applied to UMTS and LTE signals. We also focused on finding a good balance between statistically significant values and manageable logistics in control activity, as the signal trend in situ is not known. Measurements were repeated several times over several months and for different mobile companies. The outcome presented in this article allowed us to evaluate the reliability of the extrapolation results obtained and provides a starting point for defining operating procedures. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
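The change from the "highest 6 minutes in a day" metric to a 24-hour average can be illustrated numerically. The one-sample-per-minute rate and the burst profile below are invented for illustration.

```python
# Compare the old compliance metric (highest mean over any 6 consecutive
# minutes) with the new one (24-hour average) on an invented day of field
# strength samples: a single 6-minute burst in an otherwise quiet day.

def max_6min_mean(samples_per_min):
    return max(
        sum(samples_per_min[i:i + 6]) / 6.0
        for i in range(len(samples_per_min) - 5)
    )

def daily_mean(samples_per_min):
    return sum(samples_per_min) / len(samples_per_min)

day = [2.0] * 1434 + [10.0] * 6   # 1440 one-minute samples, V/m (invented)
# The burst dominates the old metric (10 V/m) but barely moves the 24-hour
# average (~2.03 V/m), which is why the extrapolation procedure matters.
```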
Ronacher, Bernhard; Wohlgemuth, Sandra; Vogel, Astrid; Krahe, Rüdiger
2008-08-01
A characteristic feature of hearing systems is their ability to resolve both fast and subtle amplitude modulations of acoustic signals. This applies also to grasshoppers, which for mate identification rely mainly on the characteristic temporal patterns of their communication signals. Usually the signals arriving at a receiver are contaminated by various kinds of noise. In addition to extrinsic noise, intrinsic noise caused by stochastic processes within the nervous system contributes to making signal recognition a difficult task. The authors asked to what degree intrinsic noise affects temporal resolution and, particularly, the discrimination of similar acoustic signals. This study aims at exploring the neuronal basis for sexual selection, which depends on exploiting subtle differences between basically similar signals. Applying a metric, by which the similarities of spike trains can be assessed, the authors investigated how well the communication signals of different individuals of the same species could be discriminated and correctly classified based on the responses of auditory neurons. This spike train metric yields clues to the optimal temporal resolution with which spike trains should be evaluated. (c) 2008 APA, all rights reserved
Go, Sabine C.P.J.
2017-01-01
The resolution of commercial conflicts was an important issue for the municipality of Amsterdam. The city’s government realised that commerce and trade could be hampered if commercial disputes were not dealt with quickly and effectively. A special type of commercial disputes centred on marine
Widjaja, E; Mahmoodabadi, S Z; Rea, D; Moineddin, R; Vidarsson, L; Nilsson, D
2009-01-01
Tensor estimation can be improved by increasing the number of gradient directions (NGD) or increasing the number of signal averages (NSA), but at the cost of increased scan time. To evaluate the effects of NGD and NSA on fractional anisotropy (FA) and fiber density index (FDI) in vivo, ten healthy adults were scanned on a 1.5T system using nine different diffusion tensor sequences. Combinations of 7 NGD, 15 NGD, and 25 NGD with 1 NSA, 2 NSA, and 3 NSA were used, with scan times varying from 2 to 18 min. Regions of interest (ROIs) were placed in the internal capsules, middle cerebellar peduncles, and splenium of the corpus callosum, and FA and FDI were calculated. Analysis of variance was used to assess whether there was a difference in FA and FDI among the different combinations of NGD and NSA. There was no significant difference in FA among the different combinations of NGD and NSA in the ROIs (P>0.005). There was a significant difference in FDI between 7 NGD/1 NSA and 25 NGD/3 NSA in all three ROIs (P<0.005), but no significant difference among 25 NGD/1 NSA, 25 NGD/2 NSA, and 25 NGD/3 NSA in all ROIs (P>0.005). We have not found any significant difference in FA with varying NGD and NSA in vivo in areas with relatively high anisotropy. However, lower NGD resulted in reduced FDI in vivo. With larger NGD, NSA has less influence on FDI. The optimal sequence among the nine sequences tested with the shortest scan time was 25 NGD/1 NSA.
International Nuclear Information System (INIS)
French, Doug; Huang Zun; Pao, H.-Y.; Jovanovic, Igor
2009-01-01
A quantum phase amplifier operated in the spatial domain can improve the signal-to-noise ratio in imaging beyond the classical limit. The scaling of the signal-to-noise ratio with the gain of the quantum phase amplifier is derived from classical information theory.
International Nuclear Information System (INIS)
Nagaoka, Tomoaki; Watanabe, Soichi; Sakurai, Kiyoko; Kunieda, Etsuo; Watanabe, Satoshi; Taki, Masao; Yamanaka, Yukio
2004-01-01
With advances in computer performance, the use of high-resolution voxel models of the entire human body has become more frequent in numerical dosimetry of electromagnetic waves. Using magnetic resonance imaging, we have developed realistic high-resolution whole-body voxel models for Japanese adult males and females of average height and weight. The developed models consist of cubic voxels of 2 mm on each side; the models are segmented into 51 anatomic regions. The adult female model is the first of its kind in the world, and both are the first Asian voxel models (representing average Japanese) that enable numerical evaluation of electromagnetic dosimetry at high frequencies of up to 3 GHz. In this paper, we also describe the basic SAR characteristics of the developed models for the VHF/UHF bands, calculated using the finite-difference time-domain method.
Xu, Feinan; Wang, Weizhen; Wang, Jiemin; Xu, Ziwei; Qi, Yuan; Wu, Yueru
2017-08-01
The determination of area-averaged evapotranspiration (ET) at the satellite pixel scale/model grid scale over a heterogeneous land surface plays a significant role in developing and improving the parameterization schemes of remote-sensing-based ET estimation models and general hydro-meteorological models. The Heihe Watershed Allied Telemetry Experimental Research (HiWATER) flux matrix provided a unique opportunity to build an aggregation scheme for area-averaged fluxes. On the basis of the HiWATER flux matrix dataset and a high-resolution land-cover map, this study focused on estimating the area-averaged ET over a heterogeneous landscape with footprint analysis and multivariate regression. The procedure is as follows. Firstly, quality control and uncertainty estimation for the data of the flux matrix, including 17 eddy-covariance (EC) sites and four groups of large-aperture scintillometers (LASs), were carefully performed. Secondly, the representativeness of each EC site was quantitatively evaluated; footprint analysis was also performed for each LAS path. Thirdly, based on the high-resolution land-cover map derived from aircraft remote sensing, a flux aggregation method was established combining footprint analysis and multiple-linear regression. Then, the area-averaged sensible heat fluxes obtained from the EC flux matrix were validated by the LAS measurements. Finally, the area-averaged ET of the kernel experimental area of HiWATER was estimated. Compared with the formerly used and rather simple approaches, such as the arithmetic average and area-weighted methods, the present scheme not only rests on a much better database, but also has a solid grounding in physics and mathematics for the integration of area-averaged fluxes over a heterogeneous surface. Results from this study, both instantaneous and daily ET at the satellite pixel scale, can be used for the validation of relevant remote sensing models and land surface process models. Furthermore, this work will be
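The simple area-weighted baseline that the study compares against can be sketched as follows. Land-cover classes and flux values are invented; the paper's actual scheme adds footprint analysis and multiple-linear regression on top of this baseline.

```python
# Sketch of plain area-weighted flux aggregation: pixel-scale ET as the
# land-cover-fraction-weighted mean of per-class tower fluxes. Class names
# and values are invented for illustration.

def area_weighted_et(class_fractions, class_et):
    """class_fractions: {land_cover: fraction of pixel}, summing to 1.
    class_et: {land_cover: ET measured by a representative EC tower}."""
    return sum(f * class_et[c] for c, f in class_fractions.items())

pixel = {"maize": 0.6, "village": 0.1, "orchard": 0.3}
et = {"maize": 4.2, "village": 1.0, "orchard": 3.5}   # mm/day, illustrative
et_pixel = area_weighted_et(pixel, et)                # 0.6*4.2 + 0.1*1.0 + 0.3*3.5
```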
Spatial resolution of cAMP signaling by soluble adenylyl cyclase
Caldieri, Giusi
2016-01-01
G protein–coupled receptor signaling starts at the plasma membrane and continues at endosomal stations. In this issue, Inda et al. (2016. J. Cell Biol. http://dx.doi.org/10.1083/jcb.201512075) show that different forms of adenylyl cyclase are activated at the plasma membrane versus endosomes, providing a rationale for the spatial encoding of cAMP signaling. PMID:27402955
International Nuclear Information System (INIS)
Geraci, A.; Zambusi, M.; Ripamonti, G.
1996-01-01
Interest in digital processing of signals from radiation detectors is growing due to its intrinsic adaptivity, ease of calibration, etc. This work compares two digital processing methods, a multiple-delay-line (DL) filter and a least-mean-squares (LMS) adaptive filter, for applications in high-resolution X-ray spectroscopy. The signal pulse, as it appears at the output of a proper analog conditioning circuit, is digitized, and the samples undergo a digital filtering procedure. Both digital filters take advantage of the possibility of synthesizing the best possible weighting function with respect to the actual noise conditions. A noticeable improvement of more than 10% in energy resolution has been achieved with both systems with respect to state-of-the-art systems based on analog circuitry. In particular, the two digital processors are shown to be the best choice, respectively, for on-line use under critical ballistic deficit conditions and for very-high-resolution spectroscopy systems ultimately limited by 1/f noise.
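The LMS branch of the comparison adapts a weighting function to the observed signal; the generic LMS update (w ← w + μ·e·x) can be sketched on a toy system-identification problem. This is a textbook LMS filter, not the authors' spectroscopy processor.

```python
# Textbook LMS adaptive filter: the weights adapt to minimize the mean
# squared error between the filter output and a desired signal, here
# identifying a known 3-tap FIR system from noiseless data.

def lms_identify(x, d, n_taps, mu):
    w = [0.0] * n_taps
    for n in range(n_taps - 1, len(x)):
        window = x[n - n_taps + 1:n + 1][::-1]         # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, window))  # filter output
        e = d[n] - y                                   # error signal
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]
    return w

# "Unknown" system and a deterministic pseudo-random excitation:
h = [0.5, -0.3, 0.1]
x = [((i * 37) % 11) / 11.0 - 0.5 for i in range(2000)]
d = [sum(h[k] * x[n - k] for k in range(3) if n - k >= 0) for n in range(len(x))]
w = lms_identify(x, d, 3, mu=0.1)   # w converges toward h
```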
Sotomi, Yohei; Okamura, Atsunori; Iwakura, Katsuomi; Date, Motoo; Nagai, Hiroyuki; Yamasaki, Tomohiro; Koyama, Yasushi; Inoue, Koichi; Sakata, Yasushi; Fujii, Kenshi
2017-06-01
The present study aimed to assess the mechanisms of the effects of percutaneous coronary intervention (PCI) for chronic total occlusion (CTO) from two different aspects: left ventricular (LV) systolic function assessed by two-dimensional speckle tracking echocardiography (2D-STE), and electrical stability evaluated by the late potential on signal-averaged electrocardiography (SAECG). We conducted a prospective observational study of consecutive CTO-PCI patients. 2D-STE and SAECG were performed before PCI, and 1 day and 3 months after the procedure. 2D-STE computed global longitudinal strain (GLS) and regional longitudinal strain (RLS) in the CTO area, the collateral blood-supplying donor artery area, and the non-CTO/non-donor area. A total of 37 patients (66 ± 11 years, 78% male) were analyzed. RLS in the CTO and donor areas and GLS were significantly improved 1 day after the procedure, but these improvements diminished over 3 months. The improvement of RLS in the donor area remained significant 3 months after the index procedure (pre-PCI -13.4 ± 4.8% vs. post-3M -15.1 ± 4.5%, P = 0.034). RLS in the non-CTO/non-donor area and LV ejection fraction were not influenced. Mitral annulus velocity was improved at the 3-month follow-up (5.0 ± 1.4 vs. 5.6 ± 1.7 cm/s, P = 0.049). Before the procedure, 12 patients (35%) had a late potential. No component of the late potential (filtered QRS duration, root-mean-square voltage in the terminal 40 ms, and duration of the low-amplitude signal <40 μV) was improved. CTO-PCI improved RLS in the donor area at the 3-month follow-up without changes in LV ejection fraction. Although a higher prevalence of late potential was observed in the current population compared to a healthy population, the late potential as a surrogate of arrhythmogenic substrate was not influenced by CTO-PCI.
Improvement of the GERDA Ge Detectors Energy Resolution by an Optimized Digital Signal Processing
Benato, G.; D'Andrea, V.; Cattadori, C.; Riboldi, S.
GERDA is a new-generation experiment searching for the neutrinoless double beta decay of 76Ge, operating at the INFN Gran Sasso Laboratories (LNGS) since 2010. Coaxial and Broad Energy Germanium (BEGe) detectors were operated in liquid argon (LAr) in GERDA Phase I. In the framework of the second GERDA experimental phase, the contacting technique, the connection to, and the location of the front-end readout devices are all novel compared to those previously adopted, and several tests have been performed. In this work, starting from considerations on the energy scale stability of the GERDA Phase I calibration and physics data sets, an optimized pulse filtering method has been developed and applied to the Phase II pilot test data sets and to a few GERDA Phase I data sets. In this contribution, the detector performance in terms of energy resolution and time stability is presented. The improvement of the energy resolution, compared to the standard Gaussian shaping adopted for Phase I data analysis, is discussed and related to the optimized noise filtering capability. The result is an energy resolution better than 0.1% at 2.6 MeV for the BEGe detectors operated in the Phase II pilot tests, and an improvement of the energy resolution in LAr of about 8% achieved on the GERDA Phase I calibration runs, compared to previous analysis algorithms.
Multi-Sensor Fusion of Infrared and Electro-Optic Signals for High Resolution Night Images
Directory of Open Access Journals (Sweden)
Victor Lawrence
2012-07-01
Electro-optic (EO) image sensors exhibit high resolution and low noise levels in daytime, but they do not work in dark environments. Infrared (IR) image sensors exhibit poor resolution and cannot separate objects with similar temperatures. Therefore, we propose a novel framework of IR image enhancement based on information (e.g., edges) from EO images, which improves the resolution of IR images and helps us distinguish objects at night. Our framework improves resolution by superimposing/blending the edges of the EO image onto the corresponding transformed IR image. In this framework, we adopt the theoretical point spread function (PSF) proposed by Hardie et al. for the IR image, which has the modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we design an inverse filter for the proposed PSF and use it for the IR image transformation. The framework requires four main steps: (1) inverse filter-based IR image transformation; (2) EO image edge detection; (3) registration; and (4) blending/superimposing of the obtained image pair. Simulation results show both blended and superimposed IR images, and demonstrate that blended IR images have better quality than the superimposed images. Additionally, based on the same steps, a simulation shows a blended IR image of better quality when only the original IR image is available.
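Step (4) above, blending EO edges into the transformed IR image, can be sketched with a standard Sobel edge detector and linear blending. The kernels and the blending weight alpha are common choices, not necessarily those of the paper, and registration is assumed already done.

```python
import numpy as np

# Toy EO-edge-into-IR blending: detect EO edges with Sobel kernels, normalize
# the edge magnitude, and mix it linearly into the (already registered) IR image.

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)

def convolve2d(img, k):
    """Naive 3x3 correlation with zero padding (illustrative, not fast)."""
    out = np.zeros_like(img)
    pad = np.pad(img, 1)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * k)
    return out

def blend(ir, eo, alpha=0.3):
    gx = convolve2d(eo, SOBEL_X)
    gy = convolve2d(eo, SOBEL_X.T)
    edges = np.hypot(gx, gy)
    if edges.max() > 0:
        edges = edges / edges.max()      # normalize edge magnitude to [0, 1]
    return (1 - alpha) * ir + alpha * edges
```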
Development of Signal Processing Algorithms for High Resolution Airborne Millimeter Wave FMCW SAR
Meta, A.; Hoogeboom, P.
2005-01-01
For airborne earth observation applications, there is a special interest in lightweight, cost effective, imaging sensors of high resolution. The combination of Frequency Modulated Continuous Wave (FMCW) technology and Synthetic Aperture Radar (SAR) techniques can lead to such a sensor. In this
Energy Technology Data Exchange (ETDEWEB)
Tregillis, Ian Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-09-22
This document examines the performance of a generic flat-mirror multimonochromatic imager (MMI), with special emphasis on existing instruments at NIF and Omega. We begin by deriving the standard equation for the mean number of photons detected per resolution element. The pinhole energy bandwidth is a contributing factor; this is dominated by the finite size of the source and may be considerable. The most common method for estimating the spatial resolution of such a system (quadrature addition) is, technically, mathematically invalid for this case. However, under the proper circumstances it may produce good estimates compared to a rigorous calculation based on the convolution of point-spread functions. Diffraction is an important contribution to the spatial resolution. Common approximations based on Fraunhofer (far-field) diffraction may be inappropriate and misleading, as the instrument may reside in multiple regimes depending upon its configuration or the energy of interest. It is crucial to identify the correct diffraction regime; Fraunhofer and Fresnel (near-field) diffraction profiles are substantially different, the latter being considerably wider. Finally, we combine the photonics and resolution analyses to derive an expression for the minimum signal level such that the resulting images are not dominated by photon statistics. This analysis is consistent with observed performance of the NIF MMI.
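The regime check the author emphasizes boils down to the Fresnel number N_F = a²/(λL): N_F ≪ 1 puts the aperture in the Fraunhofer (far-field) regime, while N_F of order one or larger puts it in the Fresnel (near-field) regime. The sketch below uses invented numbers, not the NIF/Omega geometry, and a simple 0.1 cutoff as the classification threshold.

```python
import math

def fresnel_number(aperture_radius_m, wavelength_m, distance_m):
    """Fresnel number N_F = a^2 / (lambda * L) for a circular aperture."""
    return aperture_radius_m**2 / (wavelength_m * distance_m)

def diffraction_regime(n_f, threshold=0.1):
    """Rough classification: N_F << 1 -> far-field (Fraunhofer), else near-field (Fresnel)."""
    return "Fraunhofer" if n_f < threshold else "Fresnel"

# Illustrative numbers (hypothetical): a 5 um pinhole radius with ~1.24 keV
# x-rays (lambda ~ 1 nm), evaluated at two very different standoff distances.
wavelength = 1.0e-9            # 1 nm
a = 5.0e-6                     # 5 um pinhole radius
for distance in (0.01, 10.0):  # 1 cm vs 10 m
    nf = fresnel_number(a, wavelength, distance)
    print(distance, round(nf, 4), diffraction_regime(nf))
```

The same aperture sits in the Fresnel regime at 1 cm (N_F = 2.5) but in the Fraunhofer regime at 10 m (N_F = 0.0025), which is exactly the configuration dependence the abstract warns about.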
International Nuclear Information System (INIS)
Takahashi, Hiroki; Bardet, Michel; De Paepe, Gael; Hediger, Sabine; Ayala, Isabel; Simorre, Jean-Pierre
2013-01-01
Dynamic nuclear polarization (DNP) enhanced solid-state nuclear magnetic resonance (NMR) has recently emerged as a powerful technique for the study of material surfaces. In this study, we demonstrate its potential to investigate the cell surface of intact cells. Using Bacillus subtilis bacterial cells as an example, it is shown that the polarizing agent 1-(TEMPO-4-oxy)-3-(TEMPO-4-amino)propan-2-ol (TOTAPOL) has a strong binding affinity to cell wall polymers (peptidoglycan). This particular interaction is thoroughly investigated with a systematic study on extracted cell wall materials, disrupted cells, and entire cells, which showed that TOTAPOL accumulates mainly in the cell wall. This property is used on the one hand to selectively enhance or suppress cell wall signals by controlling radical concentrations, and on the other hand to improve spectral resolution by means of a difference spectrum. Comparing DNP-enhanced and conventional solid-state NMR, an absolute sensitivity ratio of 24 was obtained on the entire cell sample. This important increase in sensitivity, together with the possibility of specifically enhancing cell wall signals and improving resolution, opens new avenues for the use of DNP-enhanced solid-state NMR as an on-cell investigation tool. (authors)
Energy Technology Data Exchange (ETDEWEB)
Nichols, Charles E. [Division of Structural Biology, Henry Wellcome Building for Genomic Medicine, University of Oxford, Roosevelt Drive, Oxford OX3 7BN (United Kingdom); Sainsbury, Sarah; Berrow, Nick S.; Alderton, David [The Oxford Protein Production Facility, Henry Wellcome Building for Genomic Medicine, University of Oxford, Roosevelt Drive, Oxford OX3 7BN (United Kingdom); Saunders, Nigel J. [The Bacterial Pathogenesis and Functional Genomics Group, The Sir William Dunn School of Pathology, University of Oxford, South Parks Road, Oxford OX1 3RE (United Kingdom); Stammers, David K. [Division of Structural Biology, Henry Wellcome Building for Genomic Medicine, University of Oxford, Roosevelt Drive, Oxford OX3 7BN (United Kingdom); The Oxford Protein Production Facility, Henry Wellcome Building for Genomic Medicine, University of Oxford, Roosevelt Drive, Oxford OX3 7BN (United Kingdom); Owens, Raymond J., E-mail: ray@strubi.ox.ac.uk [The Oxford Protein Production Facility, Henry Wellcome Building for Genomic Medicine, University of Oxford, Roosevelt Drive, Oxford OX3 7BN (United Kingdom); Division of Structural Biology, Henry Wellcome Building for Genomic Medicine, University of Oxford, Roosevelt Drive, Oxford OX3 7BN (United Kingdom)
2006-06-01
The structure of the P{sub II} signal transduction protein of N. meningitidis at 1.85 Å resolution is described. The P{sub II} signal transduction proteins GlnB and GlnK are implicated in the regulation of nitrogen assimilation in Escherichia coli and other enteric bacteria. P{sub II}-like proteins are widely distributed in bacteria, archaea and plants. In contrast to other bacteria, Neisseria are limited to a single P{sub II} protein (NMB 1995), which shows a high level of sequence identity to GlnB and GlnK from Escherichia coli (73 and 62%, respectively). The structure of the P{sub II} protein from N. meningitidis (serotype B) has been solved by molecular replacement to a resolution of 1.85 Å. Comparison of the structure with those of other P{sub II} proteins shows that the overall fold is tightly conserved across the whole population of related proteins, in particular the positions of the residues implicated in ATP binding. It is proposed that the Neisseria P{sub II} protein shares functions with GlnB/GlnK of enteric bacteria.
Cabanski, Wolfgang A.; Breiter, Rainer; Koch, R.; Mauk, Karl-Heinz; Rode, Werner; Ziegler, Johann; Eberhardt, Kurt; Oelmaier, Reinhard; Schneider, Harald; Walther, Martin
2000-07-01
Full-video-format focal plane array (FPA) modules with up to 640 × 512 pixels have been developed for high-resolution imaging applications, in mercury cadmium telluride (MCT) mid-wave infrared (MWIR) technology, and in platinum silicide (PtSi) and quantum well infrared photodetector (QWIP) technology as low-cost alternatives to MCT for high-performance IR imaging in the MWIR or long-wave (LWIR) spectral band. For the QWIPs, a new photovoltaic technology was introduced for improved NETD performance and higher dynamic range. MCT units provide fast frame rates > 100 Hz together with state-of-the-art thermal resolution (NETD). Hardware platforms and software for image visualization and nonuniformity correction, including scene-based self-learning algorithms, had to be developed to accommodate the high data rates of up to 18 Mpixels/s with 14-bit-deep data, allowing nonlinear effects to be taken into account and the full NETD to be reached by accurate reduction of residual fixed-pattern noise. The main features of these modules are summarized together with measured performance data for long-range detection systems with moderately fast to slow F-numbers such as F/2.0 - F/3.5. An outlook describes the most recent activities at AIM, heading for multicolor and faster-frame-rate detector modules based on MCT devices.
Directory of Open Access Journals (Sweden)
F. Xu
2017-08-01
Full Text Available The determination of area-averaged evapotranspiration (ET) at the satellite pixel scale/model grid scale over a heterogeneous land surface plays a significant role in developing and improving the parameterization schemes of remote-sensing-based ET estimation models and general hydro-meteorological models. The Heihe Watershed Allied Telemetry Experimental Research (HiWATER) flux matrix provided a unique opportunity to build an aggregation scheme for area-averaged fluxes. On the basis of the HiWATER flux matrix dataset and a high-resolution land-cover map, this study focused on estimating the area-averaged ET over a heterogeneous landscape with footprint analysis and multivariate regression. The procedure is as follows. Firstly, quality control and uncertainty estimation were carefully performed for the data of the flux matrix, including 17 eddy-covariance (EC) sites and four groups of large-aperture scintillometers (LASs). Secondly, the representativeness of each EC site was quantitatively evaluated, and footprint analysis was also performed for each LAS path. Thirdly, based on the high-resolution land-cover map derived from aircraft remote sensing, a flux aggregation method was established combining footprint analysis and multiple linear regression. Then, the area-averaged sensible heat fluxes obtained from the EC flux matrix were validated against the LAS measurements. Finally, the area-averaged ET of the kernel experimental area of HiWATER was estimated. Compared with the formerly used and rather simple approaches, such as the arithmetic-average and area-weighted methods, the present scheme not only rests on a much better database but also has a solid grounding in physics and mathematics for the integration of area-averaged fluxes over a heterogeneous surface. Results from this study, both instantaneous and daily ET at the satellite pixel scale, can be used for the validation of relevant remote sensing models and land surface process models. Furthermore, this
Getter, Nir; Kaplan, Zeev; Todder, Doron
2015-10-01
Electroencephalography (EEG) source-localization neurofeedback, i.e. standardized low-resolution brain electromagnetic tomography (sLORETA) neurofeedback, is a non-invasive method for altering region-specific brain activity. This is an improvement over traditional neurofeedback, which was based on recordings from a single scalp electrode. We proposed three criteria clusters as a methodological framework to evaluate EEG source-localization neurofeedback and present relevant data. Our objective was to evaluate sLORETA neurofeedback by examining how training one neuroanatomical area affects the mental rotation task (which is related to the activity of bilateral parietal regions) and the stop-signal test (which is related to frontal structures). Twelve healthy participants were enrolled in a single-session sLORETA neurofeedback protocol. The participants completed both the mental rotation task and the stop-signal test before and after one sLORETA neurofeedback session. During the sLORETA neurofeedback session participants watched one sitcom episode while the picture quality co-varied with activity in the superior parietal lobule. Participants were rewarded for increasing activity in this region only. Results showed a significant decrease in reaction time and an increase in accuracy on the mental rotation task after sLORETA neurofeedback, but not on the stop-signal task. Together with the behavioral changes, a significant activity increase was found in the left parietal region after sLORETA neurofeedback compared with baseline. We concluded that the activity increase in the parietal region had a specific effect on the mental rotation task. Tasks unrelated to parietal brain activity were unaffected. Therefore, sLORETA neurofeedback could be used as a research or clinical tool for cognitive disorders. Copyright © 2015 Elsevier B.V. All rights reserved.
SOLVING THE LINEAR PREDICTION PROBLEM BY THE ULV METHOD: APPLICATION TO THE FID SIGNAL
Directory of Open Access Journals (Sweden)
M KHELIF
2003-06-01
Full Text Available In the context of NMR spectroscopy, our objective is to determine the absorption spectrum of the free induction decay (FID) signal by the linear prediction (LP) method. This amounts to solving the linear prediction problem by the correlation method, using the singular value decomposition (SVD) to invert the correlation matrix. However, this technique raises a number of problems when the signal is buried in noise, and it becomes computationally expensive when the dimensions of the correlation matrix are large. To address this, we exploit the properties of a new technique derived from the SVD, the ULV decomposition, to reduce the processing cost and ensure a correct inversion of the correlation matrix. To this end, we determine the absorption spectrum with the ULV technique and compare it with the spectra obtained by SVD and by FFT. We then compare the quality of the resulting spectra against the ideal absorption spectrum determined by FFT.
Chen, Meng-Yun; Liang, Dan; Zhang, Peng
2017-08-01
The interordinal relationships of Laurasiatherian mammals are currently one of the most controversial questions in mammalian phylogenetics. Previous studies mainly relied on coding sequences (CDS) and seldom used noncoding sequences. Here, by mining public genome data, we compiled an intron data set of 3,638 genes (all introns from a protein-coding gene are considered one gene) (19,055,073 bp) and a CDS data set of 10,259 genes (20,994,285 bp), covering all major lineages of Laurasiatheria (except Pholidota). We found that the intron data contained stronger and more congruent phylogenetic signals than the CDS data. In agreement with this observation, concatenation and species-tree analyses of the intron data set yielded well-resolved and identical phylogenies, whereas the CDS data set produced weakly supported and incongruent results. Further analyses showed that the phylogeny inferred from the intron data is highly robust to data subsampling and change of outgroup, whereas the CDS data produced unstable results under the same conditions. Interestingly, gene-tree statistics showed that the most frequently observed gene tree topologies for the CDS and intron data are identical, suggesting that the major phylogenetic signal within the CDS data is actually congruent with that within the intron data. Our final result for the Laurasiatheria phylogeny is (Eulipotyphla,((Chiroptera, Perissodactyla),(Carnivora, Cetartiodactyla))), favoring a close relationship between Chiroptera and Perissodactyla. Our study 1) provides a well-supported phylogenetic framework for Laurasiatheria, representing a step towards ending the long-standing "hard" polytomy, and 2) argues that introns within genome data are a promising resource for resolving rapid radiation events across the tree of life. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Weatherly, Lisa M; Nelson, Andrew J; Shim, Juyoung; Riitano, Abigail M; Gerson, Erik D; Hart, Andrew J; de Juan-Sanz, Jaime; Ryan, Timothy A; Sher, Roger; Hess, Samuel T; Gosse, Julie A
2018-06-15
The antimicrobial agent triclosan (TCS) is used in products such as toothpaste and surgical soaps and is readily absorbed into oral mucosa and human skin. These and many other tissues contain mast cells, which are involved in numerous physiologies and diseases. Mast cells release chemical mediators through a process termed degranulation, which is inhibited by TCS. Investigation into the underlying mechanisms led to the finding that TCS is a mitochondrial uncoupler at non-cytotoxic, low-micromolar doses in several cell types and live zebrafish. Our aim was to determine the mechanisms underlying TCS disruption of mitochondrial function and of mast cell signaling. We combined super-resolution (fluorescence photoactivation localization) microscopy and multiple fluorescence-based assays to detail triclosan's effects in living mast cells, fibroblasts, and primary human keratinocytes. TCS disrupts mitochondrial nanostructure, causing mitochondria to undergo fission and to form a toroidal, "donut" shape. TCS increases reactive oxygen species production, decreases mitochondrial membrane potential, and disrupts ER and mitochondrial Ca2+ levels, processes that cause mitochondrial fission. TCS is 60× more potent than the banned uncoupler 2,4-dinitrophenol. TCS inhibits mast cell degranulation by decreasing mitochondrial membrane potential, disrupting microtubule polymerization, and inhibiting mitochondrial translocation, which reduces Ca2+ influx into the cell. Our findings provide mechanisms for both triclosan's inhibition of mast cell signaling and its universal disruption of mitochondria. These mechanisms provide partial explanations for triclosan's adverse effects on human reproduction, immunology, and development. This study is the first to utilize super-resolution microscopy in the field of toxicology. Copyright © 2018 Elsevier Inc. All rights reserved.
Beamer, J.; Hill, D. F.; Arendt, A. A.; Luthcke, S. B.; Liston, G. E.
2015-12-01
A comprehensive study of the Gulf of Alaska (GOA) drainage basin was carried out to improve understanding of the coastal freshwater discharge (FWD) and surface mass balance (SMB) of glaciers. Coastal FWD and SMB for all glacier surfaces were modeled using a suite of physically based, spatially distributed weather, energy-balance snow/ice melt, soil water balance, and runoff routing models at a high resolution (1 km horizontal grid; daily time step). A 35-year hindcast was performed, providing complete records of precipitation, runoff, snow water equivalent (SWE) depth, evapotranspiration, coastal FWD and glacier SMB. Meteorological forcing was provided by the North American Regional Reanalysis (NARR), Modern Era Retrospective Analysis for Research and Applications (MERRA), and NCEP Climate Forecast System Reanalysis (CFSR) datasets. A fourth dataset was created by bias-correcting the NARR data to recently developed monthly weather grids based on PRISM climatologies (NARR-BC). Each weather dataset and model combination was individually calibrated using PRISM climatologies, streamflow, and glacier mass balance measurements from four locations in the study domain. Simulated mean annual FWD into the GOA ranged from 600 km3 yr-1 using NARR to 850 km3 yr-1 from NARR-BC. The CFSR-forced simulations with optimized model parameters produced a simulated regional water storage that compared favorably to data from the NASA/DLR Gravity Recovery and Climate Experiment (GRACE) high-resolution mascon solutions (Figure). Glacier runoff, taken as the sum of rainfall, snow and ice melt occurring on glacier surfaces, ranged from 260 km3 yr-1 from MERRA to 400 km3 yr-1 from NARR-BC, approximately one half of the signal from both glaciers and surrounding terrain. The large contribution from non-glacier surfaces to the seasonal water balance is likely not being fully removed from GRACE solutions aimed at isolating the glacier signal alone. We will discuss methods to use our simulations
Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M; Bai, Ruibin
2016-11-16
Twelve GPS Block IIF satellites, out of the current constellation, can transmit signals on three frequencies (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring considerable benefits for ambiguity resolution. One of the research areas is to find the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) modes. However, existing studies have selected the signals through either pure theoretical analysis or testing with simulated data, which might be biased because real observation conditions can differ from theoretical prediction or simulation. In this paper, we propose an integrated theoretical and empirical method, which first selects possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data observed at eleven baselines of different lengths. An interpolation technique is also adopted in order to show how the ambiguity resolution (AR) performance changes as the baseline length increases. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode over certain intervals of baseline length. Therefore, TCAR performs better with the combined signals proposed in this paper when the baseline meets the length condition.
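The benefit of combining three carriers comes from the long effective wavelengths of integer combinations i·L1 + j·L2 + k·L5. The sketch below computes the wavelengths of two standard textbook combinations (extra-widelane and widelane); these are classic candidates for cascaded ambiguity resolution, not necessarily the optimal combinations the paper ultimately selects.

```python
C = 299_792_458.0          # speed of light, m/s
F0 = 10.23e6               # GPS fundamental frequency, Hz
F1, F2, F5 = 154 * F0, 120 * F0, 115 * F0   # L1, L2, L5 carrier frequencies

def combined_wavelength(i, j, k):
    """Wavelength of the phase combination i*L1 + j*L2 + k*L5 (integer coefficients)."""
    f = i * F1 + j * F2 + k * F5
    return C / f

# Two classic candidates for cascaded (TCAR-style) ambiguity resolution:
ewl = combined_wavelength(0, 1, -1)   # extra-widelane, ~5.86 m
wl  = combined_wavelength(1, -1, 0)   # widelane, ~0.86 m
print(round(ewl, 3), round(wl, 3), round(combined_wavelength(1, 0, 0), 3))
```

The ~5.86 m extra-widelane ambiguity is trivially fixed even with noisy code observations, after which progressively shorter wavelengths (widelane, then the ~0.19 m L1 carrier) can be resolved.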
Autoregressive Moving Average Graph Filtering
Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert
2016-01-01
One of the cornerstones of the field of signal processing on graphs is the graph filter, a direct analogue of classical filters but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
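The simplest member of such a family can be sketched as follows (a toy under assumptions, not the authors' exact design): a first-order ARMA recursion y ← ψ·L·y + φ·x on a 4-node path graph, which converges when |ψ|·λ_max < 1 to the solution of (I − ψL)y = φx, i.e. to the graph frequency response φ/(1 − ψλ) on each Laplacian eigenvalue λ.

```python
import numpy as np

# Small undirected path graph on 4 nodes and its Laplacian L.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# ARMA(1) graph filter: y_{t+1} = psi * L @ y_t + phi * x.
# Converges (here |psi| * lambda_max < 1, since lambda_max <= 4) to
# y = phi * (I - psi*L)^{-1} x, i.e. response phi / (1 - psi*lam) per eigenvalue.
psi, phi = 0.2, 1.0
x = np.array([1.0, 0.0, 0.0, 1.0])   # graph signal to filter

y = np.zeros_like(x)
for _ in range(200):                 # distributed iterations: nodes mix neighbours
    y = psi * (L @ y) + phi * x

y_exact = np.linalg.solve(np.eye(4) - psi * L, phi * x)
print(np.allclose(y, y_exact))       # the recursion reaches the exact filter output
```

Each iteration only needs local (neighbour) exchanges, which is what makes the recursion distributable.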
Liu, Weixin; Jin, Ningde; Han, Yunfeng; Ma, Jing
2018-06-01
In the present study, a multi-scale entropy algorithm was used to characterise the complex flow phenomena of turbulent droplets in high-water-cut oil-water two-phase flow. First, we compared multi-scale weighted permutation entropy (MWPE), multi-scale approximate entropy (MAE), multi-scale sample entropy (MSE) and multi-scale complexity measure (MCM) for typical nonlinear systems. The results show that MWPE exhibits satisfactory variability with scale and robustness to noise. Accordingly, we conducted an experiment on vertical upward oil-water two-phase flow with high water-cut and collected the signals of a high-resolution microwave resonant sensor, from which two indexes, the entropy rate and the mean value of MWPE, were extracted. Besides, the effects of total flow rate and water-cut on these two indexes were analysed. Our research shows that MWPE is an effective method for uncovering the dynamic instability of oil-water two-phase flow with high water-cut.
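The multi-scale idea can be sketched with the plain (unweighted) permutation entropy; the weighted variant used in the paper additionally weights each ordinal pattern, e.g. by the local variance of its window. The series is coarse-grained by non-overlapping window averaging at each scale; all parameters below are illustrative.

```python
import math
import random
from collections import Counter

def coarse_grain(x, scale):
    """Non-overlapping window averages: standard multi-scale coarse-graining."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def permutation_entropy(x, m=3):
    """Normalized permutation entropy of ordinal patterns of length m, in [0, 1]."""
    patterns = Counter(
        tuple(sorted(range(m), key=lambda k: x[i + k])) for i in range(len(x) - m + 1)
    )
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(m))

def multiscale_pe(x, m=3, scales=(1, 2, 3)):
    return [permutation_entropy(coarse_grain(x, s), m) for s in scales]

random.seed(0)
noise = [random.random() for _ in range(3000)]   # white noise: PE near 1
ramp = [0.001 * i for i in range(3000)]          # monotone signal: PE exactly 0
print([round(v, 2) for v in multiscale_pe(noise)], multiscale_pe(ramp)[0])
```

The contrast between the two test signals is the point: fully disordered data saturate the entropy at every scale, while a deterministic monotone trend yields a single ordinal pattern and zero entropy.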
International Nuclear Information System (INIS)
Poussier, E.; Rambaut, M.
1986-01-01
Detection consists of a measurement of a counting rate. A probability of false detection is associated with this counting rate and with an average estimated noise rate. Detection also consists in comparing the false-detection probability to a predetermined false-detection rate. The comparison can use tabulated values. An application is made to particle radiation detection [fr]
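The comparison against a predetermined false-detection rate can be sketched with Poisson counting statistics (an assumption here, since the record does not give the exact statistical model): find the smallest count threshold whose probability under the background alone stays below the allowed false-detection rate.

```python
import math

def poisson_sf(n, mu):
    """P(N >= n) for N ~ Poisson(mu): chance the background alone reaches n counts."""
    if n <= 0:
        return 1.0
    term, cdf = math.exp(-mu), math.exp(-mu)   # k = 0 term of the CDF
    for k in range(1, n):
        term *= mu / k                         # Poisson pmf, term by term
        cdf += term
    return 1.0 - cdf                           # 1 - P(N <= n - 1)

def detection_threshold(mu_background, alpha):
    """Smallest count n whose false-detection probability does not exceed alpha."""
    n = 0
    while poisson_sf(n, mu_background) > alpha:
        n += 1
    return n

# Background of 3 counts per counting interval, 1% allowed false-detection rate:
n_min = detection_threshold(3.0, 0.01)
print(n_min, round(poisson_sf(n_min, 3.0), 5))   # threshold is 9 counts
```

This is effectively the content of the tabulated values the record mentions: a lookup from background rate and allowed false-detection probability to a count threshold.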
Wang, Feng-Fei; Luo, A-Li; Zhao, Yong-Heng
2014-02-01
The radial velocity of a star is very important for the study of the dynamical structure and chemical evolution of the Milky Way, and is also a useful tool for finding variable or peculiar objects. In the present work, we focus on calculating the radial velocity of low-resolution stellar spectra of different spectral types by adopting a template matching method, so as to provide an effective and reliable reference for different areas of scientific research. We choose high signal-to-noise ratio (SNR) spectra of stars of different spectral types from the Sloan Digital Sky Survey (SDSS), and add different amounts of noise to simulate stellar spectra with different SNR. We then obtain the radial velocity measurement accuracy of different spectral types at different SNR by employing the template matching method. Meanwhile, the radial velocity measurement accuracy of white dwarfs is analyzed as well. We conclude that the accuracy of radial velocity measurements of early-type stars is much lower than that of late-type ones. For example, the 1-sigma standard error of radial velocity measurements of A-type stars is 5-8 times as large as that of K-type and M-type stars. We discuss the reason and suggest that the very narrow lines of late-type stars ensure the accuracy of their radial velocity measurements, while early-type stars with very wide Balmer lines, such as A-type stars, are sensitive to noise and yield low radial velocity accuracy. For the spectra of white dwarfs, the standard error of the radial velocity measurement can exceed 50 km/s because of their extremely wide Balmer lines. These conclusions provide a good reference for stellar studies.
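A minimal sketch of template matching for radial velocities (illustrative, not the authors' pipeline): on a uniform log-wavelength grid a Doppler shift becomes a pure translation, since ln λ_obs = ln λ_rest + ln(1 + v/c), so the lag of the cross-correlation peak between observed spectrum and template gives the velocity. The line shape, noise level, and grid are all invented.

```python
import numpy as np

C_KM_S = 299_792.458
np.random.seed(1)

# Template and "observed" spectrum on a common log-wavelength grid.
dlnl = 1e-5                                  # grid step in ln(lambda)
lnl = np.arange(0, 0.01, dlnl)
template = 1.0 - 0.8 * np.exp(-0.5 * ((lnl - 0.005) / 2e-4) ** 2)  # one absorption line

shift_pixels = 12                            # true shift: 12 grid steps
observed = np.roll(template, shift_pixels) + np.random.normal(0, 0.01, lnl.size)

# Cross-correlate (mean-subtracted) and read the best lag off the peak:
t = template - template.mean()
o = observed - observed.mean()
cc = np.correlate(o, t, mode="full")
best_lag = cc.argmax() - (t.size - 1)
rv = (np.exp(best_lag * dlnl) - 1.0) * C_KM_S   # ~36 km/s for a 12-pixel shift
print(best_lag, round(rv, 1))
```

The abstract's SNR dependence shows up directly here: a broad line (like an A-star Balmer line) flattens the cross-correlation peak, so noise moves the argmax further, while a narrow late-type line keeps the peak sharp.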
International Nuclear Information System (INIS)
Ivannikov, Alexander I.; Khailov, Artem M.; Orlenko, Sergey P.; Skvortsov, Valeri G.; Stepanenko, Valeri F.; Zhumadilov, Kassym Sh.; Williams, Benjamin B.; Flood, Ann B.; Swartz, Harold M.
2016-01-01
The aim of the study is to determine the average intensity and variation of the native background signal amplitude (NSA) and of the solar light-induced signal amplitude (LSA) in electron paramagnetic resonance (EPR) spectra of tooth enamel for different kinds of teeth and different groups of people. These values are necessary for determination of the intensity of the radiation-induced signal amplitude (RSA) by subtraction of the expected NSA and LSA from the total signal amplitude measured in L-band for in vivo EPR dosimetry. Variation of these signals should be taken into account when estimating the uncertainty of the estimated RSA. A new analysis of several hundred EPR spectra that were measured earlier at X-band in a large-scale examination of the population of the Central Russia was performed. Based on this analysis, the average values and the variation (standard deviation, SD) of the amplitude of the NSA for the teeth from different positions, as well as LSA in outer enamel of the front teeth for different population groups, were determined. To convert data acquired at X-band to values corresponding to the conditions of measurement at L-band, the experimental dependencies of the intensities of the RSA, LSA and NSA on the m.w. power, measured at both X- and L-band, were analysed. For the two central upper incisors, which are mainly used in in vivo dosimetry, the mean LSA annual rate induced only in the outer side enamel and its variation were obtained as 10 ± 2 (SD = 8) mGy y⁻¹, the same for X- and L-bands (results are presented as the mean ± error of mean). Mean NSA in enamel and its variation for the upper incisors was calculated at 2.0 ± 0.2 (SD = 0.5) Gy, relative to the calibrated RSA dose-response to gamma radiation measured under non-power saturation conditions at X-band. Assuming the same value for L-band under non-power saturating conditions, then for in vivo measurements at L-band at 25 mW (power saturation conditions), a mean NSA and its
Chabdarov, Shamil M.; Nadeev, Adel F.; Chickrin, Dmitry E.; Faizullin, Rashid R.
2011-04-01
In this paper we discuss an unconventional detection technique also known as the «full resolution receiver». This receiver uses Gaussian probability mixtures to adapt to the interference structure. The full resolution receiver is an alternative to conventional matched-filter receivers in the case of non-Gaussian interference. For the DS-CDMA forward channel in the presence of complex interference, a substantial performance increase was shown.
How to average logarithmic retrievals?
Directory of Open Access Journals (Sweden)
B. Funke
2012-04-01
Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly from the fact that in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found for which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps associated with averaging of mixing ratios obtained from logarithmic retrievals.
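The core linear-versus-logarithmic bias is easy to reproduce for a lognormal ensemble (a stand-in for "large natural variability"; the paper's simulator is more elaborate): exponentiating the mean of the logs yields the geometric mean, which undershoots the arithmetic mean by a factor exp(σ²/2).

```python
import math
import random

random.seed(42)

# An ensemble of "true" abundances with large natural variability,
# modelled as lognormal: ln(x) ~ Normal(mu, sigma).
mu, sigma = 0.0, 1.0
xs = [math.exp(random.gauss(mu, sigma)) for _ in range(200_000)]

linear_mean = sum(xs) / len(xs)                              # average the abundances
log_mean = math.exp(sum(math.log(x) for x in xs) / len(xs))  # average the logs, then exp

# For a lognormal, E[x] = exp(mu + sigma^2/2) while exp(E[ln x]) = exp(mu):
# the log-averaged value is biased low by exp(sigma^2/2) ~ 1.65 here.
print(round(linear_mean, 2), round(log_mean, 2), round(linear_mean / log_mean, 2))
```

With σ = 1 the discrepancy is about 65%, far above the "ten percent or more" the abstract warns about; shrinking σ (small natural variability) shrinks the gap accordingly.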
Cheng, Bingbing; Bandi, Venugopal; Wei, Ming-Yuan; Pei, Yanbo; D'Souza, Francis; Nguyen, Kytai T; Hong, Yi; Yuan, Baohong
2016-01-01
For many years, investigators have sought high-resolution fluorescence imaging in centimeter-deep tissue, because many interesting in vivo phenomena, such as the presence of immune system cells, tumor angiogenesis, and metastasis, may be located deep in tissue. Previously, we developed a new imaging technique, continuous-wave ultrasound-switchable fluorescence (CW-USF), to achieve high spatial resolution in sub-centimeter-deep tissue phantoms. The principle is to use a focused ultrasound wave to externally and locally switch the fluorophore emission on and off in a small volume (close to the ultrasound focal volume). By making improvements in three aspects of this technique, excellent near-infrared USF contrast agents, a sensitive frequency-domain USF imaging system, and an effective signal processing algorithm, this study has for the first time achieved high spatial resolution (~900 μm) in 3-centimeter-deep tissue phantoms with high signal-to-noise ratio (SNR) and high sensitivity (3.4 picomoles of fluorophore in a volume of 68 nanoliters can be detected). We have achieved these results in both tissue-mimicking phantoms and porcine muscle tissue. We have also demonstrated multi-color USF to image and distinguish two fluorophores with different wavelengths, which might be very useful for simultaneous imaging of multiple targets and observation of their interactions in the future. This work has opened the door for future studies of high-resolution centimeter-deep tissue fluorescence imaging.
Directory of Open Access Journals (Sweden)
Y. Narita
2011-02-01
Full Text Available A new analysis method is presented that provides a high-resolution power spectrum in a broad wave number domain based on multi-point measurements. The analysis technique is referred to as the Multi-point Signal Resonator (MSR), and it benefits from Capon's minimum variance method for obtaining the proper power spectral density of the signal as well as from the MUSIC (Multiple Signal Classification) algorithm for considerably reducing the noise part of the spectrum. The mathematical foundation of the analysis method is presented, and it is applied to synthetic data as well as to Cluster observations of the interplanetary magnetic field. Using the MSR technique on Cluster data, we find a wave in the solar wind propagating parallel to the mean magnetic field with relatively small amplitude, which is not identified by the Capon spectrum. The Cluster data analysis shows the potential of the MSR technique for studying waves and turbulence using multi-point measurements.
Wavelet analysis for nonstationary signals
International Nuclear Information System (INIS)
Penha, Rosani Maria Libardi da
1999-01-01
Mechanical vibration signals play an important role in identifying anomalies resulting from equipment malfunction. Traditionally, Fourier spectral analysis is used, where the signals are assumed to be stationary. However, occasional transient impulses and start-up processes are examples of nonstationary signals that can be found in mechanical vibrations. These signals can provide important information about the equipment condition, such as early fault detection. Fourier analysis cannot adequately be applied to nonstationary signals because the results provide only the frequency composition averaged over the duration of the signal. In this work, two methods for nonstationary signal analysis are used: the Short Time Fourier Transform (STFT) and the wavelet transform. The STFT adapts Fourier spectral analysis to nonstationary applications in the time-frequency domain; its main limitation is a single, fixed resolution throughout the entire time-frequency domain. The wavelet transform is a newer analysis technique suited to nonstationary signals, which overcomes the STFT's drawbacks by providing multi-resolution frequency analysis and time localization in a single time-scale graphic. The multiple frequency resolutions are obtained by scaling (dilation/compression) the wavelet function. A comparison of the conventional Fourier transform, the STFT and the wavelet transform is made by applying these techniques to simulated signals, a rotor rig vibration signal and a rotating machine vibration signal. A Hanning window was used for the STFT analysis. Daubechies and harmonic wavelets were used for the continuous, discrete and multi-resolution wavelet analyses. The results show that Fourier analysis was not able to detect changes in the signal frequencies or discontinuities. The STFT analysis detected the changes in the signal frequencies, but with time-frequency resolution problems. The continuous and discrete wavelet transforms proved to be highly efficient tools to detect
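The core point above, that whole-record Fourier analysis averages away the timing of frequency changes while windowed analysis localizes them, can be shown with a minimal numpy sketch (a crude fixed-window STFT; the wavelet's variable resolution is omitted, and the signal and frame length are illustrative choices, not from the thesis):

```python
import numpy as np

fs = 1000.0                      # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)

# Nonstationary test signal: 50 Hz for the first half, 120 Hz after.
x = np.where(t < 0.5,
             np.sin(2 * np.pi * 50 * t),
             np.sin(2 * np.pi * 120 * t))

def dominant_freq(seg):
    """Frequency of the strongest FFT bin of a Hanning-windowed segment."""
    spec = np.abs(np.fft.rfft(seg * np.hanning(len(seg))))
    return np.fft.rfftfreq(len(seg), 1 / fs)[spec.argmax()]

# Whole-signal Fourier analysis: both tones appear in the spectrum,
# but the information about *when* each occurs is averaged away.
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
print("global peaks near:", sorted(freqs[spec.argsort()[-2:]]))

# Crude STFT: short windowed frames localize the frequency jump in time.
frame = 250                      # 0.25 s window
for k in range(0, len(x) - frame + 1, frame):
    f = dominant_freq(x[k:k + frame])
    print(f"t = {k/fs:4.2f}-{(k + frame)/fs:4.2f} s -> {f:6.1f} Hz")
```

Note the fixed trade-off: the 0.25 s frame gives 4 Hz bins everywhere, which is exactly the uniform-resolution limitation of the STFT that the wavelet transform addresses.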
International Nuclear Information System (INIS)
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of the application of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.
Evaluations of average level spacings
International Nuclear Information System (INIS)
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, it is extremely difficult, if not totally impossible, to detect a complete sequence of levels without mixing in other parities. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables
Gkinis, Vasileios; Møllesøe Vinther, Bo; Terkelsen Holme, Christian; Capron, Emilie; Popp, Trevor James; Olander Rasmussen, Sune
2017-04-01
The continuity and high resolution available in polar ice core records make them an excellent tool for the study of stadial-interstadial transitions, notably through the water isotopic composition of polar precipitation (δ18O, δD). The quest for the highest possible resolution has resulted in experimental sampling and analysis techniques that have yielded data sets with the potential to change the current picture of the climatic signals of the last Glacial. Specifically, the ultra-high resolution δ18O signals from the NorthGRIP and NEEM ice cores present a variability at multi-annual and decadal time scales whose interpretation gives rise to further puzzling though interesting questions and an obvious paradox. By means of simple firn isotope diffusion and densification calculations, we first demonstrate that the variability of the observed signals is unlikely to be due to post-depositional effects that are known to occur on the surface of the Greenland ice cap and alter the δ18O composition of the precipitated snow. Assuming specific values for the δ18O sensitivity to temperature (commonly referred to as the δ18O slope), we estimate that the temperature signal during the stadials has a variability that extends from interstadial to extremely cold levels, with peak-to-peak fluctuations of almost 35 K occurring in a few years. Similarly, during interstadial phases the temperature varies rapidly from stadial to Holocene levels, while the signal variability shows a maximum during the LGM, with magnitudes of up to 15‰ that translate to ≈ 50 K when a δ18O slope of 0.3‰ K-1 is used. We assess the validity of these results and comment on the stability of the δ18O slope. Driven by a simple line of reasoning, we conclude that the observed δ18O variability reflects a climatic signal, although not necessarily attributed 100% to temperature changes. From this we can assume that there occur climatic mechanisms during the previously thought stable
DEFF Research Database (Denmark)
Knop, Filip Krag
2009-01-01
Certain types of bariatric surgical procedures have proved not only to be effective with regard to treating obesity, but they also seem to be associated with endocrine changes which independently of weight loss give rise to remission of type 2 diabetes. Currently, it is speculated that surgical re......-derived glucagonotropic signalling as putative diabetogenic signals of the foregut hypothesis. In the present paper the hypotheses describing the glucose-lowering mechanisms of bariatric surgical procedures sharing the common feature of a bypass of the duodenum and the proximal jejunum are outlined and a possible role...
International Nuclear Information System (INIS)
Liu, Hanghui; Lam, Lily; Yan, Lin; Chi, Bert; Dasgupta, Purnendu K.
2014-01-01
Highlights: • Less abundant isotopologue ions were utilized to decrease detector saturation. • A 25–50 fold increase in the upper limit of the dynamic range was demonstrated. • The linear dynamic range was expanded without compromising mass resolution. - Abstract: The linear dynamic range (LDR) for quantitative liquid chromatography–mass spectrometry can be extended until ionization saturation is reached by using a number of target isotopologue ions in addition to the normally used target ion that provides the highest sensitivity. Less abundant isotopologue ions extend the LDR: the lower ion abundance decreases the probability of ion detector saturation. Effectively, the sensitivity decreases and the upper limit of the LDR increases. We show in this paper that the technique is particularly powerful with a high-resolution time-of-flight mass spectrometer, because the data for all ions are acquired automatically, and we demonstrate this for four small organic molecules; the upper limits of the LDRs increased by 25–50 times.
DEFF Research Database (Denmark)
Maulucci, Giuseppe; Labate, Valentina; Mele, Marina
2008-01-01
We present the application of a redox-sensitive mutant of the yellow fluorescent protein (rxYFP) to image, with elevated sensitivity and high temporal and spatial resolution, oxidative responses of eukaryotic cells to pathophysiological stimuli. The method presented, based on the ratiometric...... quantitation of the distribution of fluorescence by confocal microscopy, allows us to draw real-time "redox maps" of adherent cells and to score subtle changes in the intracellular redox state, such as those induced by overexpression of redox-active proteins. This strategy for in vivo imaging of redox...
Banas, Krzysztof; Banas, Agnieszka M.; Heussler, Sascha P.; Breese, Mark B. H.
2018-01-01
In contemporary spectroscopy there is a trend to record spectra with the highest possible spectral resolution. This is clearly justified if the spectral features in the spectrum are very narrow (for example, infrared spectra of gas samples). However, there is a plethora of samples (in liquid and especially in solid form) where there is natural spectral peak broadening, due predominantly to collisions and proximity. Additionally, there are a number of portable devices (spectrometers) with inherently restricted spectral resolution, spectral range, or both, which are extremely useful in some field applications (archaeology, agriculture, the food industry, cultural heritage, forensic science). In this paper, the influence of spectral resolution, spectral range and signal-to-noise ratio on the identification of high-explosive substances is investigated by applying multivariate statistical methods to Fourier transform infrared spectral data sets. All mathematical procedures on spectral data for dimension reduction, clustering and validation were implemented within the R open-source environment.
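A toy sketch of the kind of question studied above: does degrading the spectral resolution destroy the ability to discriminate two substances? Everything here is synthetic and hypothetical (Gaussian bands, nearest-reference classification rather than the paper's multivariate methods in R), meant only to make the trade-off concrete:

```python
import numpy as np

rng = np.random.default_rng(3)
wn = np.linspace(600, 4000, 1700)          # wavenumber axis, ~2 cm^-1 steps

def spectrum(centers, width=25.0):
    """Synthetic absorbance spectrum: a sum of Gaussian bands."""
    return sum(np.exp(-0.5 * ((wn - c) / width) ** 2) for c in centers)

# Two hypothetical "substances" with overlapping but distinct bands.
ref_a = spectrum([1350, 1600, 2900])
ref_b = spectrum([1380, 1540, 2950])

def classify(s, coarsen=1):
    """Nearest-reference classification after degrading the resolution
    by block-averaging `coarsen` adjacent spectral points."""
    def down(v):
        n = len(v) // coarsen * coarsen
        return v[:n].reshape(-1, coarsen).mean(axis=1)
    d_a = np.linalg.norm(down(s) - down(ref_a))
    d_b = np.linalg.norm(down(s) - down(ref_b))
    return "A" if d_a < d_b else "B"

noisy_a = ref_a + rng.normal(0, 0.05, wn.size)
for c in (1, 8, 64):                        # ~2, 16, 128 cm^-1 resolution
    print(f"coarsen={c:3d}: classified as {classify(noisy_a, c)}")
```

Because the bands here are broad, identification survives aggressive coarsening, which mirrors the paper's motivation for using restricted-resolution portable instruments on condensed-phase samples.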
Flexible time domain averaging technique
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract specific harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency-domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
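For reference, the conventional TDA that the FTDA improves on is simply a sample-by-sample average over whole periods. A minimal sketch, assuming the period is known exactly in samples (the idealized case in which no period cutting error arises; the synthetic gear signal is illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

period = 128                       # samples per shaft revolution (assumed known)
n_periods = 200

# Synthetic gear signal: two harmonics of the rotation, buried in noise.
n = np.arange(period * n_periods)
clean = (np.sin(2 * np.pi * n / period)
         + 0.5 * np.sin(2 * np.pi * 3 * n / period))
noisy = clean + rng.normal(0.0, 1.0, clean.size)

# Conventional time domain averaging: cut the record into whole periods
# and average them sample-by-sample.  Periodic components add coherently
# while the noise power drops by a factor of n_periods (~23 dB here).
tda = noisy.reshape(n_periods, period).mean(axis=0)

def snr_db(est, ref):
    """SNR of an estimate against the known clean reference."""
    return 10 * np.log10(np.sum(ref ** 2) / np.sum((est - ref) ** 2))

print(f"raw SNR: {snr_db(noisy[:period], clean[:period]):6.2f} dB")
print(f"TDA SNR: {snr_db(tda, clean[:period]):6.2f} dB")
```

When the true period is not an integer number of samples, the reshape above forces a rounded period, which is exactly the PCE the paper sets out to eliminate via continuous-domain reconstruction.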
Weng, Yi; Wang, Junyi; He, Xuan; Pan, Zhongqi
2018-02-01
Nyquist spectral shaping techniques offer a promising solution for enhancing spectral efficiency (SE) and further reducing the cost per bit in high-speed wavelength-division multiplexing (WDM) transmission systems. In principle, Nyquist WDM signals with arbitrary shapes can be generated using digital signal processing (DSP) based electrical filters (E-filters). Nonetheless, in actual 100G/200G coherent systems, both performance and DSP complexity are increasingly restricted by cost and power consumption. Hence it is indispensable to optimize the DSP to accomplish the preferred performance at the least complexity. In this paper, we systematically investigated the minimum requirements and challenges of Nyquist WDM signal generation, particularly for higher-order modulation formats such as 16 quadrature amplitude modulation (16QAM) and 64QAM. A variety of interrelated parameters, such as channel spacing and roll-off factor, were evaluated to optimize the requirements on the digital-to-analog converter (DAC) resolution and the transmitter E-filter bandwidth. The impact of spectral pre-emphasis was enhanced via the proposed interleaved DAC architecture by at least 4%, thereby reducing the required optical signal-to-noise ratio (OSNR) at a bit error rate (BER) of 10⁻³ by over 0.45 dB at a channel spacing of 1.05 times the symbol rate and an optimized roll-off factor of 0.1. Furthermore, the sampling-rate requirements for different types of super-Gaussian E-filters are discussed for 64QAM Nyquist WDM transmission systems. Finally, the impact of non-50% duty-cycle error between sub-DACs on the quality of the signals generated with the interleaved DAC structure is analyzed.
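The roll-off factor optimized above parameterizes the raised-cosine family of Nyquist pulses. As a self-contained illustration of the zero-ISI property underlying the shaping (not the authors' transmitter DSP chain), the raised-cosine impulse response with their roll-off of 0.1 can be evaluated at the symbol instants:

```python
import numpy as np

def raised_cosine(t, T=1.0, beta=0.1):
    """Raised-cosine impulse response with roll-off beta (0 <= beta <= 1)."""
    t = np.asarray(t, dtype=float)
    h = np.sinc(t / T) * np.cos(np.pi * beta * t / T)
    denom = 1.0 - (2.0 * beta * t / T) ** 2
    # Handle the removable singularity at |t| = T / (2 * beta).
    sing = np.isclose(denom, 0.0)
    h = np.where(sing,
                 (np.pi / 4) * np.sinc(1.0 / (2.0 * beta)),
                 h / np.where(sing, 1.0, denom))
    return h

# Zero-ISI (Nyquist) property: the pulse is 1 at t = 0 and 0 at every
# other symbol instant, for any roll-off factor.
k = np.arange(-8, 9)
h = raised_cosine(k, T=1.0, beta=0.1)
print(np.round(h, 6))
```

A smaller `beta` narrows the occupied spectrum, enabling the tight 1.05-symbol-rate channel spacing above, at the cost of slower time-domain decay and hence tighter filtering and DAC requirements.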
International Nuclear Information System (INIS)
Thitaikumar, Arun; Krouskop, Thomas A; Ophir, Jonathan
2007-01-01
In axial-shear strain elastography, the local axial-shear strain resulting from the application of quasi-static axial compression to an inhomogeneous material is imaged. In this paper, we investigated the image quality of the axial-shear strain estimates in terms of the signal-to-noise ratio (SNRasse) and contrast-to-noise ratio (CNRasse) using simulations and experiments. Specifically, we investigated the influence of the system parameters (beamwidth, transducer element pitch and bandwidth), signal processing parameters (correlation window length and axial window shift) and mechanical parameters (Young's modulus contrast, applied axial strain) on the SNRasse and CNRasse. The results of the study show that the CNRasse (SNRasse) is maximum for axial-shear strain values in the range of 0.005-0.03. For the inclusion/background modulus contrast range considered in this study, the CNRasse (SNRasse) is maximum for applied axial compressive strain values in the range of 0.005%-0.03%. This suggests that the RF data acquired during axial elastography can be used to obtain axial-shear strain elastograms, since this range is typically used in axial elastography as well. The CNRasse (SNRasse) remains almost constant with an increase in the beamwidth, while it increases as the pitch increases. As expected, the axial shift had only a weak influence on the CNRasse (SNRasse) of the axial-shear strain estimates. We observed that the differential estimates of the axial-shear strain involve a trade-off between the CNRasse (SNRasse) and the spatial resolution only with respect to pitch, and not with respect to the signal processing parameters. Simulation studies were performed to confirm this observation. The results demonstrate a trade-off between CNRasse and resolution with respect to pitch.
Trépanier, Marc-Olivier; Eiden, Michael; Morin-Rivron, Delphine; Bazinet, Richard P; Masoodi, Mojgan
2017-03-01
The field of lipidomics has evolved vastly since its creation 15 years ago. Advancements in mass spectrometry have allowed for the identification of hundreds of intact lipids and lipid mediators. However, because of the release of fatty acids from the phospholipid membrane in the brain caused by ischemia, identifying the neurolipidome has been challenging. Microwave fixation has been shown to reduce the ischemia-induced release of several lipid mediators. Therefore, this study aimed to develop a method combining high-resolution tandem mass spectrometry (MS/MS), high-energy head-focused microwave fixation and statistical modeling, allowing for the measurement of intact lipids and lipid mediators in order to eliminate the ischemia-induced release of fatty acids and identify the rat neurolipidome. In this study, we demonstrated the ischemia-induced production of bioactive lipid mediators, and the reduction in variability using microwave fixation in combination with liquid chromatography (LC)-MS/MS. We have also illustrated for the first time that microwave fixation eliminates the alterations in intact lipid species following ischemia. While many phospholipid species were unchanged by ischemia, other intact lipid classes, such as diacylglycerol, were lower in concentration following microwave fixation compared to ischemia. © 2016 International Society for Neurochemistry.
DEFF Research Database (Denmark)
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
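A minimal sketch of the quaternion-barycenter approach mentioned above (the simple estimate the article compares against the Riemannian mean; the example rotations are hypothetical). One practical subtlety is that q and -q represent the same rotation, so the quaternions must be sign-aligned before averaging:

```python
import numpy as np

def quat_barycenter(quats):
    """Naive rotation average: re-normalized barycenter of unit quaternions.

    Each quaternion is first sign-aligned with the first one (q and -q are
    the same rotation).  The normalized barycenter is a first-order
    approximation to the Riemannian (geodesic) mean for nearby rotations.
    """
    quats = np.asarray(quats, dtype=float)
    aligned = quats * np.sign(quats @ quats[0])[:, None]
    mean = aligned.mean(axis=0)
    return mean / np.linalg.norm(mean)

# Hypothetical example: several small rotations about the z-axis,
# as unit quaternions (w, x, y, z).
angles = np.deg2rad([8.0, 12.0, 10.0, 9.0, 11.0])
quats = np.stack([[np.cos(a / 2), 0.0, 0.0, np.sin(a / 2)] for a in angles])

q_mean = quat_barycenter(quats)
mean_angle = np.rad2deg(2 * np.arctan2(q_mean[3], q_mean[0]))
print(f"averaged rotation angle: {mean_angle:.3f} deg")
```

For tightly clustered rotations like these, the barycenter is very close to the geodesic mean; the article's point is that for widely spread rotations the two diverge, because rotations live on a curved manifold rather than in a vector space.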
International Nuclear Information System (INIS)
Ichiguchi, Katsuji
1998-01-01
A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)
Determining average yarding distance.
Roger H. Twito; Charles N. Mann
1979-01-01
Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...
Watson, Jane; Chick, Helen
2012-01-01
This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…
Averaging operations on matrices
Indian Academy of Sciences (India)
2014-07-03
Jul 3, 2014 ... Role of Positive Definite Matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices. Tanvi Jain. Averaging operations on matrices ...
Directory of Open Access Journals (Sweden)
Patricia Bouyer
2015-09-01
Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity energy storage. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
Effects of NMR spectral resolution on protein structure calculation.
Directory of Open Access Journals (Sweden)
Suhas Tikole
Full Text Available Adequate digital resolution and signal sensitivity are two critical factors for protein structure determinations by solution NMR spectroscopy. The prime objective for obtaining high digital resolution is to resolve peak overlap, especially in NOESY spectra with thousands of signals where the signal analysis needs to be performed on a large scale. Achieving maximum digital resolution is usually limited by the practically available measurement time. We developed a method utilizing non-uniform sampling for balancing digital resolution and signal sensitivity, and performed a large-scale analysis of the effect of the digital resolution on the accuracy of the resulting protein structures. Structure calculations were performed as a function of digital resolution for about 400 proteins with molecular sizes ranging between 5 and 33 kDa. The structural accuracy was assessed by atomic coordinate RMSD values from the reference structures of the proteins. In addition, we monitored also the number of assigned NOESY cross peaks, the average signal sensitivity, and the chemical shift spectral overlap. We show that high resolution is equally important for proteins of every molecular size. The chemical shift spectral overlap depends strongly on the corresponding spectral digital resolution. Thus, knowing the extent of overlap can be a predictor of the resulting structural accuracy. Our results show that for every molecular size a minimal digital resolution, corresponding to the natural linewidth, needs to be achieved for obtaining the highest accuracy possible for the given protein size using state-of-the-art automated NOESY assignment and structure calculation methods.
Image compression using moving average histogram and RBF network
International Nuclear Information System (INIS)
Khowaja, S.; Ismaili, I.A.
2015-01-01
Modernization and globalization have made multimedia technology one of the fastest growing fields in recent times, but the optimal use of bandwidth and storage remains one of the topics attracting the research community. Considering that images have the lion's share in multimedia communication, efficient image compression techniques have become a basic need for the optimal use of bandwidth and space. This paper proposes a novel method for image compression based on the fusion of a moving average histogram and an RBF (Radial Basis Function) network. The proposed technique employs the concept of reducing the color intensity levels using a moving average histogram technique, followed by correction of the color intensity levels using RBF networks at the reconstruction phase. Existing methods have used low-resolution images for testing purposes, but the proposed method has been tested on various image resolutions to give a clear assessment of the technique. The proposed method has been tested on 35 images of varying resolution and compared with existing algorithms in terms of CR (compression ratio), MSE (mean square error), PSNR (peak signal-to-noise ratio) and computational complexity. The outcome shows that the proposed methodology is a better trade-off in terms of compression ratio, PSNR, which determines the quality of the image, and computational complexity. (author)
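The abstract does not spell out the moving-average-histogram procedure, so the sketch below is only one plausible interpretation of the level-reduction stage: smooth the intensity histogram over fixed windows and re-map each pixel to the histogram centroid of its window (the paper's RBF-network correction at reconstruction is omitted entirely):

```python
import numpy as np

def reduce_levels_moving_average(img, window=8):
    """Hypothetical sketch of intensity-level reduction via a windowed
    histogram average: each 8-bit level is replaced by the histogram
    centroid of its `window`-wide bin, shrinking 256 levels to 256/window."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    levels = np.arange(256)
    lut = np.empty(256, dtype=np.uint8)          # level look-up table
    for lo in range(0, 256, window):
        sl = slice(lo, lo + window)
        w = hist[sl]
        # Centroid of the histogram mass in this window (fallback: middle).
        c = (levels[sl] * w).sum() / w.sum() if w.sum() > 0 else lo + window // 2
        lut[sl] = int(round(c))
    return lut[img]

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
small = reduce_levels_moving_average(img, window=8)
print("levels before:", np.unique(img).size, "after:", np.unique(small).size)
```

Fewer distinct levels mean shorter symbol codes and hence compression; the residual quantization error (here bounded by the window width) is what the paper's RBF network is trained to correct on reconstruction.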
International Nuclear Information System (INIS)
Cosma, C.; Heikkinen, P.; Pekonen, S.
1991-05-01
The purpose of the high resolution borehole seismics project has been to improve the reliability and resolution of seismic methods in the particular environment of nuclear waste repository sites. The results obtained, especially the data processing and interpretation methods developed, are applicable also to other geophysical methods (e.g. Georadar). The goals of the seismic development project have been the development of processing and interpretation techniques for mapping fractured zones, and the design and construction of a seismic source complying with the requirements of repository site characterization programs. Because these two aspects of the work are very different in nature, we have structured the report as two self-contained parts. Part 1 describes the development of interpretive techniques. To demonstrate the effect of the different methods, we have used a VSP data set collected at the SCV site during Stage 1 of the project. Five techniques have been studied: FK filtering, three versions of Tau-p filtering, and a new technique that we have developed lately, Image Space filtering. Part 2 covers the construction of the piezoelectric source. Earlier results obtained over short distances with low-energy piezoelectric transmitters led us to believe that the same principle could be applied to seismic signal transmitters if solutions for higher energy and lower frequency output were found. The instrument we have constructed is a cylindrical unit which can be placed in a borehole and is able to produce a radial strain when excited axially. The minimum borehole diameter is 56 mm. (au)
Hilberath, Jan N; Carlo, Troy; Pfeffer, Michael A; Croze, Roxanne H; Hastrup, Frantz; Levy, Bruce D
2011-06-01
The purpose of this study was to investigate roles for Toll-like receptor 4 (TLR4) in host responses to sterile tissue injury. Hydrochloric acid was instilled into the left mainstem bronchus of TLR4-defective (both C3H/HeJ and congenic C.C3-Tlr4(Lps-d)/J) and control mice to initiate mild, self-limited acute lung injury (ALI). Outcome measures included respiratory mechanics, barrier integrity, leukocyte accumulation, and levels of select soluble mediators. TLR4-defective mice were more resistant to ALI, with significantly decreased perturbations in lung elastance and resistance, resulting in faster resolution of these parameters [resolution interval (R(i)); ∼6 vs. 12 h]. Vascular permeability changes and oxidative stress were also decreased in injured HeJ mice. These TLR4-defective mice paradoxically displayed increased lung neutrophils [(HeJ) 24×10(3) vs. (control) 13×10(3) cells/bronchoalveolar lavage]. Proresolving mechanisms for TLR4-defective animals included decreased eicosanoid biosynthesis, including cysteinyl leukotrienes (80% mean decrease) that mediated CysLT1 receptor-dependent vascular permeability changes; and induction of lung suppressor of cytokine signaling 3 (SOCS3) expression that decreased TLR4-driven oxidative stress. Together, these findings indicate pivotal roles for TLR4 in promoting sterile ALI and suggest downstream provocative roles for cysteinyl leukotrienes and protective roles for SOCS3 in the intensity and duration of host responses to ALI.
Energy Technology Data Exchange (ETDEWEB)
Quigley, B; Smith, C; La Riviere, P [Department of Radiology, University of Chicago, Chicago, IL (United States)
2016-06-15
Purpose: To evaluate the resolution and sensitivity of XIL imaging using a surface radiance simulation based on optical diffusion and maximum likelihood expectation maximization (MLEM) image reconstruction. XIL imaging seeks to determine the distribution of luminescent nanophosphors, which could be used as nanodosimeters or radiosensitizers. Methods: The XIL simulation generated a homogeneous slab with optical properties similar to tissue. X-ray activated nanophosphors were placed at 1.0 cm depth in the tissue, in concentrations of 10⁻⁴ g/mL, in two volumes of 10 mm³ with varying separations. An analytical optical diffusion model determined the surface radiance from the photon distributions generated at depth in the tissue by the nanophosphors. The simulation then determined the luminescent signal collected with an f/1.0 aperture lens and a back-illuminated EMCCD camera. The surface radiance was deconvolved using an MLEM algorithm to estimate the nanophosphor distribution and the resolution. To account for both Poisson and Gaussian noise, a shifted-Poisson imaging model was used in the deconvolution. The deconvolved distributions were fitted to a Gaussian after radial averaging to measure the full width at half maximum (FWHM), and the peak-to-peak distance between distributions was measured to determine the resolving power. Results: Simulated surface radiances for doses from 1 mGy to 100 cGy were computed. Each image was deconvolved using 1000 iterations. At 1 mGy, deconvolution reduced the FWHM of the nanophosphor distribution by 65% and gave a resolving power of 3.84 mm. Decreasing the dose from 100 cGy to 1 mGy increased the FWHM by 22% but allowed for a dose reduction by a factor of 1000. Conclusion: Deconvolving the detected surface radiance allows for dose reduction while maintaining the resolution of the nanophosphors. It proves to be a useful technique in overcoming the resolution limitations of diffuse optical imaging in
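The MLEM deconvolution step above has a compact multiplicative update (the Richardson-Lucy form for pure Poisson data). A minimal 1-D sketch with two point-like sources blurred by a broad diffusion-like kernel (the study's problem is 2-D with a shifted-Poisson model; kernel width and source layout here are illustrative):

```python
import numpy as np

def mlem_deconvolve(observed, psf, n_iter=200):
    """1-D MLEM (Richardson-Lucy) deconvolution for Poisson-distributed data.

    Multiplicative update: est *= PSF^T (observed / (PSF est)),
    which preserves non-negativity of the estimate at every iteration.
    """
    est = np.full_like(observed, observed.mean(), dtype=float)
    psf_flip = psf[::-1]                      # adjoint of the blur operator
    for _ in range(n_iter):
        blur = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blur, 1e-12)
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est

# Two point-like sources blurred by a broad Gaussian "diffusion" kernel,
# mimicking how deep-tissue diffusion smears the surface radiance.
x = np.arange(200)
truth = np.zeros(200)
truth[80] = 100.0
truth[120] = 100.0
psf = np.exp(-0.5 * (np.arange(-50, 51) / 12.0) ** 2)
psf /= psf.sum()
observed = np.convolve(truth, psf, mode="same")

est = mlem_deconvolve(observed, psf, n_iter=500)
print("brightest bins:", np.sort(np.argsort(est)[-2:]))
```

The blurred record barely shows two bumps (the kernel FWHM is comparable to the separation), but the iterated MLEM estimate re-concentrates the signal near the true source positions, which is the resolution recovery the study reports.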
DEFF Research Database (Denmark)
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
International Nuclear Information System (INIS)
Michel, J.
1993-02-01
This doctoral thesis studies an integrated architecture designed for massively parallel processing of analogue signals supplied by silicon detectors of very high spatial resolution. The first chapter is an introduction presenting the general outline and the triggering conditions of the spectrometer. Chapter two describes the operational structure of a microvertex detector made of Si micro-plates associated with the measuring chains. Information preconditioning is related to the pre-amplification stage, to pile-up effects and to the reduction of the time characteristic due to the high counting rates. Chapter three describes the architecture of the analogue delay buffer, analyses its intrinsic noise and presents the operational tests and input/output control operations. The fourth chapter is devoted to the description of the analogue pulse-shape processor, together with the tests and the corresponding measurements on the circuit. Finally, chapter five deals with a simple model of the entire conditioning chain and discusses the testing and measuring procedures. In conclusion the author presents some prospects for improving the signal-to-noise ratio by summation of the de-convoluted micro-paths. 78 refs., 78 figs., 1 annexe
Liu, Yu; Gruber, Nicolas; Brunner, Dominik
2017-11-01
The emission of CO2 from the burning of fossil fuel is a prime determinant of variations in atmospheric CO2. Here, we simulate this fossil-fuel signal together with the natural and background components with a regional high-resolution atmospheric transport model for central and southern Europe considering separately the emissions from different sectors and countries on the basis of emission inventories and hourly emission time functions. The simulated variations in atmospheric CO2 agree very well with observation-based estimates, although the observed variance is slightly underestimated, particularly for the fossil-fuel component. Despite relatively rapid atmospheric mixing, the simulated fossil-fuel signal reveals distinct annual mean structures deep into the troposphere, reflecting the spatially dense aggregation of most emissions. The fossil-fuel signal accounts for more than half of the total (fossil fuel + biospheric + background) temporal variations in atmospheric CO2 in most areas of northern and western central Europe, with the largest variations occurring on diurnal timescales owing to the combination of diurnal variations in emissions and atmospheric mixing and transport out of the surface layer. The covariance of the fossil-fuel emissions and atmospheric transport on diurnal timescales leads to a diurnal fossil-fuel rectifier effect of up to 9 ppm compared to a case with time-constant emissions. The spatial pattern of CO2 from the different sectors largely reflects the distribution and relative magnitude of the corresponding emissions, with power plant emissions leaving the most distinguished mark. An exception is southern and western Europe, where the emissions from the transportation sector dominate the fossil-fuel signal. Most of the fossil-fuel CO2 remains within the country responsible for the emission, although in smaller countries up to 80 % of the fossil-fuel signal can come from abroad. A fossil-fuel emission reduction of 30 % is clearly
Sensitivity of GRETINA position resolution to hole mobility
Energy Technology Data Exchange (ETDEWEB)
Prasher, V.S. [Department of Physics, University of Massachusetts Lowell, Lowell, MA 01854 (United States); Cromaz, M. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Merchan, E.; Chowdhury, P. [Department of Physics, University of Massachusetts Lowell, Lowell, MA 01854 (United States); Crawford, H.L. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Lister, C.J. [Department of Physics, University of Massachusetts Lowell, Lowell, MA 01854 (United States); Campbell, C.M.; Lee, I.Y.; Macchiavelli, A.O. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Radford, D.C. [Physics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Wiens, A. [Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)
2017-02-21
The sensitivity of the position resolution of the gamma-ray tracking array GRETINA to the hole charge-carrier mobility parameter is investigated. The χ² results from a fit of averaged signal (“superpulse”) data exhibit a shallow minimum for hole mobilities 15% lower than the currently adopted values. Calibration data on position resolution are analyzed, together with simulations that isolate the hole mobility dependence of signal decomposition from other effects such as electronics cross-talk. The results effectively exclude hole mobility as a dominant parameter for improving the position resolution for reconstruction of gamma-ray interaction points in GRETINA.
Eliazar, Iddo
2018-02-01
The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.
Average nuclear surface properties
International Nuclear Information System (INIS)
Groote, H. von.
1979-01-01
The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also to the case where there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction of this model were determined by a least squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry to beyond the neutron-drip line, until the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)
Americans' Average Radiation Exposure
International Nuclear Information System (INIS)
2000-01-01
We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
Regression filter for signal resolution
International Nuclear Information System (INIS)
Matthes, W.
1975-01-01
The problem considered is that of resolving a measured pulse height spectrum of a material mixture, e.g. gamma ray spectrum, Raman spectrum, into a weighted sum of the spectra of the individual constituents. The model on which the analytical formulation is based is described. The problem reduces to that of a multiple linear regression. A stepwise linear regression procedure was constructed. The efficiency of this method was then tested by transforming the procedure into a computer programme, which was used to unfold test spectra obtained by mixing some spectra from a library of arbitrarily chosen spectra and adding a noise component. (U.K.)
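The unfolding step reduces to multiple linear regression, and its simplest member can be sketched directly. This is an illustrative ordinary-least-squares version (the paper uses a stepwise procedure; the library spectra here are invented Gaussian peaks):

```python
import numpy as np

def unfold(mixed, library):
    """Resolve a measured spectrum into a weighted sum of library spectra.

    Columns of `library` are the pure constituent spectra; returns the
    estimated mixing weights from a linear least-squares fit.
    """
    weights, *_ = np.linalg.lstsq(library, mixed, rcond=None)
    return weights

rng = np.random.default_rng(0)
channels = np.arange(200.0)
# Library of three synthetic constituent spectra (Gaussian peaks).
lib = np.column_stack([
    np.exp(-0.5 * ((channels - c) / 8.0) ** 2) for c in (50, 90, 150)
])
true_w = np.array([2.0, 0.5, 1.2])
mixed = lib @ true_w + 0.01 * rng.standard_normal(200)  # noise component
est_w = unfold(mixed, lib)
```

With well-separated library spectra and modest noise, the fitted weights recover the true mixture closely; a stepwise variant would additionally select which library spectra to include.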
Comparison of Interpolation Methods as Applied to Time Synchronous Averaging
National Research Council Canada - National Science Library
Decker, Harry
1999-01-01
Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...
Beamforming using subspace estimation from a diagonally averaged sample covariance.
Quijano, Jorge E; Zurk, Lisa M
2017-08-01
The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
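The diagonal-averaging step of the estimator can be sketched as follows (the maximum-entropy extrapolation stage is omitted). The function name and toy steering vector are illustrative, not the authors' code:

```python
import numpy as np

def toeplitz_average(scm):
    """Toeplitz-constrain a sample covariance by diagonal averaging.

    Entries along each subdiagonal are averaged, which is statistically
    consistent for far-field plane waves in isotropic noise on a uniform
    line array; the upper triangle is filled by Hermitian symmetry.
    """
    n = scm.shape[0]
    first_col = np.array([np.mean(np.diag(scm, -k)) for k in range(n)])
    i, j = np.indices((n, n))
    out = np.empty_like(scm)
    out[i >= j] = first_col[(i - j)[i >= j]]
    out[i < j] = first_col[(j - i)[i < j]].conj()
    return out

# A clairvoyant single-target covariance is already Toeplitz, so the
# diagonal average must reproduce it exactly.
phi = np.pi * np.sin(0.3)
a = np.exp(1j * phi * np.arange(8))         # far-field steering vector
R = np.outer(a, a.conj()) + np.eye(8)       # signal + isotropic noise
T = toeplitz_average(R)
```

Applied to a snapshot-starved sample covariance instead of R, the same averaging reduces the variance of each lag estimate, which is what stabilizes the signal-subspace projectors used in the beamformer.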
Assessing resolution in live cell structured illumination microscopy
Pospíšil, Jakub; Fliegel, Karel; Klíma, Miloš
2017-12-01
Structured Illumination Microscopy (SIM) is a powerful super-resolution technique, which is able to enhance the resolution of an optical microscope beyond the Abbe diffraction limit. In the last decade, numerous SIM methods that achieve a resolution of 100 nm in the lateral dimension have been developed. SIM setups with new high-speed cameras and illumination pattern generators allow rapid acquisition of live specimens. Therefore, SIM is widely used for investigation of live structures in molecular and live-cell biology. Quantitative evaluation of resolution enhancement in a real sample is essential to describe the efficiency of a super-resolution microscopy technique. However, measuring the resolution of a live-cell sample is a challenging task. Based on our experimental findings, the widely used Fourier ring correlation (FRC) method does not seem to be well suited for measuring the resolution of SIM live-cell video sequences. Therefore, resolution-assessment methods based on Fourier spectrum analysis are often used. We introduce a measure based on circular average power spectral density (PSDca) estimated from a single SIM image (one video frame). PSDca describes the distribution of the power of a signal with respect to its spatial frequency. Spatial resolution corresponds to the cut-off frequency in Fourier space. In order to estimate the cut-off frequency from a noisy signal, we use a spectral subtraction method for noise suppression. In the future, this resolution assessment approach might prove useful also for single-molecule localization microscopy (SMLM) live-cell imaging.
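The circularly averaged PSD measure can be sketched for a synthetic frame. The function name and test image are invented, and the spectral-subtraction noise-suppression step is omitted:

```python
import numpy as np

def radial_psd(image):
    """Circularly averaged power spectral density of a 2-D image.

    The 2-D power spectrum is binned by integer radial spatial
    frequency and averaged within each bin; the cut-off frequency of
    the resulting 1-D curve indicates the spatial resolution.
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2
    ny, nx = image.shape
    y, x = np.indices(image.shape)
    r = np.hypot(y - ny // 2, x - nx // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

# A pure spatial tone concentrates radial power at its frequency ring.
n = 64
y, x = np.indices((n, n))
tone = np.cos(2 * np.pi * 10 * x / n)   # 10 cycles across the frame
psd = radial_psd(tone)
```

For the synthetic tone the radial PSD peaks exactly at radial frequency 10; for a real SIM frame the useful quantity is instead where the curve falls into the noise floor.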
Niessen, F.; Magens, D.; Kuhn, G.; Helling, D.
2008-12-01
Within the ANDRILL-MIS Project, a more than 1200 m long sediment core, dating back to about 13 Ma, was drilled beneath McMurdo Ice Shelf near Ross Island (Antarctica) in austral summer 2006/07 with the purpose of contributing to a better understanding of the Late Cenozoic history of the Antarctic Ice Sheet. One way to approach past ice dynamics and changes in the paleoenvironment quantitatively is the analysis of high-resolution physical properties obtained from whole-core multi-sensor core logger measurements, in which lithologic changes are expressed numerically. This is especially applicable for the repeating sequences of diatomites and diamictites in the upper half of the core, with a prominent cyclicity between 140-300 mbsf. Rather abrupt high-amplitude variations in wet-bulk density (WBD) and magnetic susceptibility (MS) reflect a highly dynamic depositional system, oscillating between two main end-member types: a grounded ice sheet and open marine conditions. For the whole core, the WBD signal, ranging from 1.4 g/cm³ in the diatomites to 2.3 g/cm³ in diamictites from the lower part of the core, represents the influence of three variables: (i) the degree of compaction, seen as a reduction of porosities with depth of about 30% from top to bottom; (ii) the clast content, with clasts being almost absent in diatomite deposits; and (iii) the individual grain density (GD). GD itself strongly reflects the variety of lithologies as well as the influence of cement (mainly pyrite and carbonate) on the matrix grain density. The calculation of residual porosities demonstrates the strong imprint of glacial loading, especially for diamictites from the upper 150 m, pointing to a significant thickness of the overriding Pleistocene ice sheet. MS, on the other hand, mainly documents a marine vs. terrestrial source of sediments, where the latter can be divided into younger local material from the McMurdo Volcanic Province and basement clasts from the Transantarctic Mountains
Directory of Open Access Journals (Sweden)
Kravtsenyuk Olga V
2007-01-01
The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT) method is substantiated. The PAT method recently presented by us is based on a concept of an average statistical trajectory for transfer of light energy, the photon average trajectory (PAT). The inverse problem of diffuse optical tomography is reduced to a solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs the images blurred due to averaging over spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of the spatially invariant point spread functions simulated for the different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for least squares problem and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a gain in spatial resolution can be obtained.
Directory of Open Access Journals (Sweden)
Vladimir V. Lyubimov
2007-01-01
The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT) method is substantiated. The PAT method recently presented by us is based on a concept of an average statistical trajectory for transfer of light energy, the photon average trajectory (PAT). The inverse problem of diffuse optical tomography is reduced to a solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs the images blurred due to averaging over spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of the spatially invariant point spread functions simulated for the different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for least squares problem and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a 27% gain in spatial resolution can be obtained.
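Of the two deblurring solvers mentioned, the conjugate-gradient least-squares approach can be sketched in one dimension with a spatially invariant PSF. The spatially variant blur model is omitted, and all names and data below are illustrative, not the authors' code:

```python
import numpy as np

def cgls(apply_A, apply_At, b, n_iter=100):
    """Conjugate gradient for the least-squares problem min ||A x - b||^2.

    CG is applied to the normal equations A^T A x = A^T b;
    `apply_A`/`apply_At` are the blur operator and its adjoint, so no
    matrix is ever formed explicitly.
    """
    x = np.zeros_like(b)
    r = apply_At(b)                 # residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        if rs == 0.0:
            break                   # already converged exactly
        q = apply_At(apply_A(p))
        alpha = rs / (p @ q)
        x += alpha * p
        r -= alpha * q
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Deblur a 1-D profile blurred by a known Gaussian point spread function.
psf = np.exp(-0.5 * (np.arange(-4, 5) / 0.8) ** 2)
psf /= psf.sum()
A = lambda v: np.convolve(v, psf, mode="same")
At = A                              # symmetric PSF => self-adjoint blur
truth = np.zeros(80)
truth[30:35] = 1.0
blurred = A(truth)
restored = cgls(A, At, blurred)
```

The same solver applies unchanged to the spatially variant case once `apply_A` interpolates between the per-subregion point spread functions.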
Benchmarking statistical averaging of spectra with HULLAC
Klapisch, Marcel; Busquet, Michel
2008-11-01
Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).
The difference between alternative averages
Directory of Open Access Journals (Sweden)
James Vaupel
2012-09-01
BACKGROUND: Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE: We want to deepen understanding of how compositional change affects population averages. RESULTS: The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS: Other uses of covariances in formal demography are worth exploring.
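The stated covariance relationship can be verified numerically. A sketch with arbitrary random weights (all variable names are illustrative): with r = w / v, the gap between the two weighted averages equals Cov_v(x, r) / E_v[r], where the covariance and expectation are taken with respect to the weights v.

```python
import numpy as np

def weighted_mean(x, w):
    """Average of x under weighting function w."""
    return np.sum(w * x) / np.sum(w)

rng = np.random.default_rng(2)
x = rng.random(1000)          # the variable (e.g. age-specific rates)
v = rng.random(1000) + 0.1    # first weighting function
w = rng.random(1000) + 0.1    # alternative weighting function
r = w / v                     # ratio of the weighting functions

gap = weighted_mean(x, w) - weighted_mean(x, v)
cov_v = weighted_mean(x * r, v) - weighted_mean(x, v) * weighted_mean(r, v)
identity = cov_v / weighted_mean(r, v)
```

The two quantities agree to machine precision, since the relationship is an algebraic identity rather than an approximation.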
Resolution of axial shear strain elastography
International Nuclear Information System (INIS)
Thitaikumar, Arun; Righetti, Raffaella; Krouskop, Thomas A; Ophir, Jonathan
2006-01-01
The technique of mapping the local axial component of the shear strain due to quasi-static axial compression is defined as axial shear strain elastography. In this paper, the spatial resolution of axial shear strain elastography is investigated through simulations, using an elastically stiff cylindrical lesion embedded in a homogeneously softer background. Resolution was defined as the smallest size of the inclusion for which the strain value at the inclusion/background interface was greater than the average of the axial shear strain values at the interface and inside the inclusion. The resolution was measured from the axial shear strain profile oriented at 45° to the axis of beam propagation, due to the absence of axial shear strain along the normal directions. The effects of the ultrasound system parameters such as bandwidth, beamwidth and transducer element pitch along with signal processing parameters such as correlation window length (W) and axial shift (ΔW) on the estimated resolution were investigated. The results show that the resolution (at 45° orientation) is determined by the bandwidth and the beamwidth. However, the upper bound on the resolution is limited by the larger of the beamwidth and the window length, which is scaled inversely to the bandwidth. The results also show that the resolution is proportional to the pitch and not significantly affected by the axial window shift.
Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system
Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi
2010-05-01
Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed the example-based super-resolution method to enhance an image through pixel-based texton substitution to reduce the computational cost. In this method, however, we only considered the enhancement of a texture image. In this study, we modified this texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The result showed that the fine detail of the low-resolution video can be reproduced compared with bicubic interpolation and the required bandwidth could be reduced to about 1/5 in a video camera. It was also shown that the peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame, as determined by comparison with the processed image using bicubic interpolation, and the average PSNRs were higher than those obtained by the well-known Freeman’s patch-based super-resolution method. Compared with that of the Freeman’s patch-based super-resolution method, the computational time of our method was reduced to almost 1/10.
International Nuclear Information System (INIS)
Tholomier, M.
1985-01-01
In a scanning electron microscope, whatever signal is measured, the same chain is found: incident beam, sample, signal detection, signal amplification. The resulting signal is used to control the spot luminosity of the observer's cathodoscope, which is synchronized with the beam scanning over the sample; on the cathodoscope, the image of the sample surface in secondary electrons, backscattered electrons, etc. is reconstituted. The best compromise must be found between a recording time short enough to avoid variations (under the incident beam) of the nature of the observed phenomenon, a good spatial resolution of the image, and a sufficiently high signal-to-noise ratio. Noise is one of the basic limitations of scanning electron microscope performance, and the whole measurement chain must be optimized to reduce it [fr]
International Nuclear Information System (INIS)
Yang Yongfeng; Dokhale, Purushottam A; Silverman, Robert W; Shah, Kanai S; McClish, Mickel A; Farrell, Richard; Entine, Gerald; Cherry, Simon R
2006-01-01
We explore dual-ended readout of LSO arrays with two position-sensitive avalanche photodiodes (PSAPDs) as a high resolution, high efficiency depth-encoding detector for PET applications. Flood histograms, energy resolution and depth of interaction (DOI) resolution were measured for unpolished LSO arrays with individual crystal sizes of 1.0, 1.3 and 1.5 mm, and for a polished LSO array with 1.3 mm pixels. The thickness of the crystal arrays was 20 mm. Good flood histograms were obtained for all four arrays, and crystals in all four arrays can be clearly resolved. Although the amplitude of each PSAPD signal decreases as the interaction depth moves further from the PSAPD, the sum of the two PSAPD signals is essentially constant with irradiation depth for all four arrays. The energy resolutions were similar for all four arrays, ranging from 14.7% to 15.4%. A DOI resolution of 3-4 mm (including the width of the irradiation band, which is ∼2 mm) was obtained for all the unpolished arrays. The best DOI resolution was achieved with the unpolished 1 mm array (average 3.5 mm). The DOI resolution for the 1.3 mm and 1.5 mm unpolished arrays was 3.7 and 4.0 mm respectively. For the polished array, the DOI resolution was only 16.5 mm. Summing the DOI profiles across all crystals for the 1 mm array only degraded the DOI resolution from 3.5 mm to 3.9 mm, indicating that it may not be necessary to calibrate the DOI response separately for each crystal within an array. The DOI response of individual crystals in the array confirms this finding. These results provide a detailed characterization of the DOI response of these PSAPD-based PET detectors, which will be important in the design and calibration of a PET scanner making use of this detector approach.
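A common way to turn the reported signal behaviour (amplitudes falling off with distance while their sum stays constant) into a depth estimate is the ratio of the two PSAPD amplitudes. The linear mapping below is a hypothetical sketch, not the paper's calibration:

```python
import numpy as np

def doi_estimate(a_top, a_bottom, thickness_mm=20.0):
    """Depth-of-interaction estimator for dual-ended readout.

    The ratio a_top / (a_top + a_bottom) varies monotonically with
    depth; a linear ratio-to-depth mapping is assumed here, whereas a
    real detector would use a measured calibration curve.
    """
    ratio = a_top / (a_top + a_bottom)
    return ratio * thickness_mm

# Toy light-sharing model: each signal varies linearly with depth and
# the sum is constant, consistent with the behaviour in the abstract.
depth = np.linspace(0.0, 20.0, 21)
a_top = 20.0 + 3.0 * depth       # grows toward the top PSAPD
a_bottom = 80.0 - 3.0 * depth    # shrinks away from the bottom PSAPD
est = doi_estimate(a_top, a_bottom)
```

Even this crude estimator is monotone in depth, which is all the flood-histogram calibration needs; the residual offset and scale are what a per-array (or, as the paper suggests, array-wide) calibration removes.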
Lagrangian averaging with geodesic mean.
Oliver, Marcel
2017-11-01
This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.
Averaging in spherically symmetric cosmology
International Nuclear Information System (INIS)
Coley, A. A.; Pelavas, N.
2007-01-01
The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis
Averaging models: parameters estimation with the R-Average procedure
Directory of Open Access Journals (Sweden)
S. Noventa
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
Ergodic averages via dominating processes
DEFF Research Database (Denmark)
Møller, Jesper; Mengersen, Kerrie
2006-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.
High average power supercontinuum sources
Indian Academy of Sciences (India)
The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.
When good = better than average
Directory of Open Access Journals (Sweden)
Don A. Moore
2007-10-01
People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.
Averaging Robertson-Walker cosmologies
International Nuclear Information System (INIS)
Brown, Iain A.; Robbers, Georg; Behrend, Juliane
2009-01-01
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω⁰_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models.
High-resolution mapping and ablation of recurrent left lateral accessory pathway conduction
Directory of Open Access Journals (Sweden)
Francesco Solimene, MD
2017-08-01
Proper localization of the anatomical target during ablation of accessory pathways (APs) and the ability to detect clear AP potentials on the ablation catheter are crucial for successful AP ablation. We report a case of recurring AP conduction that was finally eliminated using a novel ablation catheter equipped with high-resolution mini-electrodes. Smaller, more closely spaced electrodes yield higher mapping resolution with less signal averaging and fewer cancellation effects. Owing to improved sensitivity, the new catheter seems effective in detecting fragmented and high-frequency signals, thus allowing more effective radiofrequency application and improving ablation success.
Energy Technology Data Exchange (ETDEWEB)
Júnior, Décio Brandes M.F.; Oliveira, Mônica Georgia N.; Silva, Cristiano da, E-mail: deciobr@eletronuclear.gov.br, E-mail: mongeor@eletronuclear.gov.br, E-mail: cdsilva@eletronuclear.gov.br [Eletrobrás Termonuclear S.A. (ELETRONUCLEAR), Angra dos Reis, RJ (Brazil). Departamento DDD.O - Física de Reatores
2017-07-01
The goal of this work is to present the new System of Acquisition and Signal Processing for the execution of the initial criticality after refueling and of the low-power physical tests, incorporating real-time resolution of the Inverse Point Kinetics (IPK) equations. The system was developed using cRIO 9082 (CompactRIO) hardware, a programmable logic controller (PLC), and the LabVIEW programming language from National Instruments. The developed system enables better visualization and monitoring of the neutron-flux evolution during the first criticality of the cycle and during the low-power physical tests, allowing the Reactor Physics Group and the Reactor Operators of Angra 2 to follow reactivity variations more quickly and accurately. The digital reactivity meter developed reinforces Angra 2's set of operational reactivity-management practices. (author)
Light-cone averaging in cosmology: formalism and applications
International Nuclear Information System (INIS)
Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.
2011-01-01
We present a general gauge-invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted "geodesic light-cone" coordinates, we give explicit expressions for averaging the redshift-to-luminosity-distance relation and the so-called "redshift drift" in a generic inhomogeneous Universe.
Topological quantization of ensemble averages
International Nuclear Information System (INIS)
Prodan, Emil
2009-01-01
We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states.
A Novel Robust Audio Watermarking Algorithm by Modifying the Average Amplitude in Transform Domain
Directory of Open Access Journals (Sweden)
Qiuling Wu
2018-05-01
In order to improve the robustness and imperceptibility in practical application, a novel audio watermarking algorithm with strong robustness is proposed by exploring the multi-resolution characteristic of the discrete wavelet transform (DWT) and the energy compaction capability of the discrete cosine transform (DCT). The human auditory system is insensitive to minor changes in the frequency components of the audio signal, so the watermarks can be embedded by slightly modifying these frequency components. The audio fragments segmented from the cover audio signal are decomposed by DWT to obtain several groups of wavelet coefficients with different frequency bands, and then the fourth-level detail coefficient is selected and divided into a former packet and a latter packet, each of which is transformed by DCT to get two sets of transform domain coefficients (TDC). Finally, the average amplitudes of the two sets of TDC are modified to embed the binary image watermark according to the special embedding rule. The watermark extraction is blind, without the carrier audio signal. Experimental results confirm that the proposed algorithm has good imperceptibility, large payload capacity and strong robustness when resisting various attacks such as MP3 compression, low-pass filtering, re-sampling, re-quantization, amplitude scaling, echo addition and noise corruption.
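The core of such average-amplitude embedding can be sketched numerically. The toy below is pure Python and deliberately omits the DWT/DCT stages; the `delta` margin and the sign-preserving shift rule are illustrative assumptions, not the paper's exact embedding rule. A bit is signalled by forcing the difference of the two packets' average absolute amplitudes to a fixed margin:

```python
def embed_bit(former, latter, bit, delta=1.0):
    """Embed one bit by adjusting the average absolute amplitudes of two
    coefficient packets (simplified stand-in for the paper's rule)."""
    a1 = sum(abs(c) for c in former) / len(former)
    a2 = sum(abs(c) for c in latter) / len(latter)
    target = delta if bit else -delta          # desired a1 - a2
    shift = (target - (a1 - a2)) / 2
    # Shift magnitudes symmetrically, preserving each coefficient's sign.
    former = [c + shift * (1 if c >= 0 else -1) for c in former]
    latter = [c - shift * (1 if c >= 0 else -1) for c in latter]
    return former, latter

def extract_bit(former, latter):
    """Blind extraction: compare the two average absolute amplitudes."""
    a1 = sum(abs(c) for c in former) / len(former)
    a2 = sum(abs(c) for c in latter) / len(latter)
    return 1 if a1 >= a2 else 0
```

Because only the relative averages matter, extraction needs no reference to the original audio, which is what makes the scheme blind.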
The average Indian female nose.
Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh
2011-12-01
This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.
A Martian PFS average spectrum: Comparison with ISO SWS
Formisano, V.; Encrenaz, T.; Fonti, S.; Giuranna, M.; Grassi, D.; Hirsh, H.; Khatuntsev, I.; Ignatiev, N.; Lellouch, E.; Maturilli, A.; Moroz, V.; Orleanski, P.; Piccioni, G.; Rataj, M.; Saggin, B.; Zasova, L.
2005-08-01
The evaluation of the Planetary Fourier Spectrometer performance at Mars is presented by comparing an average spectrum with the ISO spectrum published by Lellouch et al. [2000. Planet. Space Sci. 48, 1393.]. First, the average conditions of the Mars atmosphere are compared, then the mixing ratios of the major gases are evaluated. Major and minor bands of CO₂ are compared, from the point of view of feature characteristics and band depths. The spectral resolution is also compared using several solar lines. The result indicates that PFS radiance is valid to better than 1% in the wavenumber range 1800-4200 cm⁻¹ for the average spectrum considered (1680 measurements). The PFS monochromatic transfer function generates an overshooting on the left-hand side of strong narrow lines (solar or atmospheric). The spectral resolution of PFS is of the order of 1.3 cm⁻¹ or better. A large number of narrow features, still to be identified, are discovered.
Energy Technology Data Exchange (ETDEWEB)
Walz, H.V.
1980-07-01
An experimental, general purpose adaptive signal processor system has been developed, utilizing a quantized (clipped) version of the Widrow-Hoff least-mean-square adaptive algorithm developed by Moschner. The system accommodates 64 adaptive weight channels with 8-bit resolution for each weight. Internal weight update arithmetic is performed with 16-bit resolution, and the system error signal is measured with 12-bit resolution. An adapt cycle of adjusting all 64 weight channels is accomplished in 8 μs. Hardware of the signal processor utilizes primarily Schottky-TTL type integrated circuits. A prototype system with 24 weight channels has been constructed and tested. This report presents details of the system design and describes basic experiments performed with the prototype signal processor. Finally some system configurations and applications for this adaptive signal processor are discussed.
Averaging Bias Correction for Future IPDA Lidar Mission MERLIN
Directory of Open Access Journals (Sweden)
Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien
2018-01-01
The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.
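The bias mechanism is easy to demonstrate: because the IPDA retrieval takes a logarithm of measured powers, averaging noisy shot-wise retrievals differs from retrieving from averaged signals. The sketch below uses synthetic numbers (true ratio, noise level and the −½·ln form are illustrative assumptions, not MERLIN's actual processing chain):

```python
import math, random

random.seed(0)

# Noisy on-line / off-line lidar echo pairs over one averaging segment.
p_off = 100.0
true_ratio = 0.8                      # noiseless P_on / P_off
shots = [(true_ratio * p_off + random.gauss(0, 5),
          p_off + random.gauss(0, 5)) for _ in range(5000)]

# Estimate 1: average the non-linear per-shot logs (biased by Jensen's
# inequality, since log is concave).
daod_shotwise = -0.5 * sum(math.log(on / off) for on, off in shots) / len(shots)

# Estimate 2: average the signals first, then apply the log once.
mean_on = sum(on for on, _ in shots) / len(shots)
mean_off = sum(off for _, off in shots) / len(shots)
daod_avged = -0.5 * math.log(mean_on / mean_off)

daod_true = -0.5 * math.log(true_ratio)
```

Averaging the signals before the logarithm suppresses the non-linearity bias, which is the motivation for the 50 km signal averaging and the residual-bias corrections the paper studies.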
DEFF Research Database (Denmark)
Dierking, Wolfgang; Dall, Jørgen
2008-01-01
C- and L-band airborne synthetic aperture radar (SAR) imagery acquired at like- and cross-polarization over sea ice under winter conditions is examined with the objective to study the discrimination between level ice and ice deformation features. High-resolution, low-noise data were analysed in the first paper. In this second paper, the main topics are the effects of spatial resolution and signal-to-noise ratio. Airborne high-resolution SAR scenes are used to generate a sequence of images with increasingly coarser spatial resolution from 5 m to 25 m, keeping the number of looks constant. The signal-to-noise ratio is varied between typical noise levels for airborne imagery and satellite data. Areal fraction of deformed ice and average deformation distance are determined for each image product. At L-band, the retrieved values of the areal fraction get larger as the image resolution is degraded…
Averaging and sampling for magnetic-observatory hourly data
Directory of Open Access Journals (Sweden)
J. J. Love
2010-11-01
A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
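The spot-versus-boxcar distinction can be illustrated on a synthetic 1-min series (a toy signal, not observatory data): a fast oscillation passes into hourly spot samples at essentially full amplitude, while the 1-h boxcar average strongly damps it.

```python
import math

base = 50.0
# One day of synthetic 1-min values: baseline plus a 7-min oscillation
# that hourly values cannot resolve.
signal = [base + 5.0 * math.sin(2 * math.pi * m / 7)
          for m in range(24 * 60)]

# Hourly "spot" values: instantaneous sample on the hour.
spot = [signal[h * 60] for h in range(24)]

# Hourly "boxcar" values: simple averages of the 60 1-min values.
boxcar = [sum(signal[h * 60:(h + 1) * 60]) / 60.0 for h in range(24)]
```

The spot series carries the full ±5 amplitude of the unresolvable oscillation (aliased into the hourly record), whereas the boxcar values stay within a small fraction of that, which is the amplitude-distortion/aliasing trade-off the paper quantifies.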
International Nuclear Information System (INIS)
2003-05-01
The purpose of this text is to put a resolution before the meeting concerning the use of depleted-uranium weapons. The facts of France's use of depleted uranium during the Gulf War and other recent conflicts are to be established. The resolution makes the strictest recommendations in view of the possible health and environmental risks of using such weapons. (N.C.)
Average monthly and annual climate maps for Bolivia
Vicente-Serrano, Sergio M.
2015-02-24
This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
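The Hargreaves step mentioned above can be sketched as follows. This is a minimal sketch of the standard Hargreaves formulation (coefficient 0.0023, radiation expressed in mm/day evaporation equivalent); the study's exact implementation details are not given in the abstract:

```python
import math

def hargreaves_et0(tmean, tmax, tmin, ra):
    """Daily reference evapotranspiration (mm/day) by the Hargreaves
    equation. tmean/tmax/tmin in deg C; ra is extraterrestrial solar
    radiation expressed as mm/day of evaporation equivalent."""
    return 0.0023 * ra * (tmean + 17.8) * math.sqrt(tmax - tmin)
```

Applied cell by cell to the 1 km temperature and radiation grids, this yields the atmospheric-evaporative-demand layer that is then subtracted from precipitation to produce the water-balance maps.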
Averaging of nonlinearity-managed pulses
International Nuclear Information System (INIS)
Zharnitsky, Vadim; Pelinovsky, Dmitry
2005-01-01
We consider the nonlinear Schroedinger equation with nonlinearity management, which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons.
Directory of Open Access Journals (Sweden)
Ernani de Sousa Grell
2006-09-01
OBJECTIVE: To evaluate the frequency, clinical correlations and prognostic influence of late potentials on the signal-averaged electrocardiogram in patients with heart failure of different etiologies. METHODS: The signal-averaged electrocardiogram was studied over 42 months in 288 heart-failure patients of different etiologies: 215 men (74.65%) and 73 women (25.35%), aged 16 to 70 years (mean 51.5, standard deviation 11.24). The etiologies of heart failure were: hypertensive cardiomyopathy, 78 (27.1%); idiopathic dilated cardiomyopathy, 73 (25.4%); ischemic cardiomyopathy, 65 (22.6%); Chagas disease cardiomyopathy, 42 (14.6%); alcoholic cardiomyopathy, 9 (3.1%); peripartum cardiomyopathy, 6 (2.1%); valvular disease, 2 (4.2%); and viral myocarditis, 3 (1.04%). The standard QRS duration, filtered QRS duration, duration of the signal below 40 µV and root-mean-square voltage of the last 40 ms were evaluated with respect to age, sex, etiology, findings of the 12-lead resting electrocardiogram, the echocardiogram, the long-duration (Holter) electrocardiogram, and mortality. Statistical analysis used Fisher's exact test, Student's t test, the Mann-Whitney test, analysis of variance, the log-rank test and the Kaplan-Meier method. RESULTS: Late potentials were diagnosed in 90 (31.3%) patients, with no correlation with etiology. Their presence was associated with lower maximum oxygen consumption on cycle-ergometer spirometry (p=0.001), sustained and non-sustained ventricular tachycardia on Holter monitoring (p=0.001), and sudden death and mortality (p…
High resolution hadron calorimetry
International Nuclear Information System (INIS)
Wigmans, R.
1987-01-01
The components that contribute to the signal of a hadron calorimeter and the factors that affect its performance are discussed, concentrating on two aspects: energy resolution and signal linearity. Both are decisively dependent on the relative response to the electromagnetic and the non-electromagnetic shower components, the e/h signal ratio, which should be equal to 1.0 for optimal performance. The factors that determine the value of this ratio are examined. The calorimeter performance is crucially determined by its response to the abundantly present soft neutrons in the shower. The presence of a considerable fraction of hydrogen atoms in the active medium is essential for achieving the best possible results. Firstly, this allows one to tune e/h to the desired value by choosing the appropriate sampling fraction. And secondly, the efficient neutron detection via recoil protons in the readout medium itself considerably reduces the effect of fluctuations in binding-energy losses at the nuclear level, which dominate the intrinsic energy resolution. Signal equalization, or compensation (e/h = 1.0), does not seem to be a property unique to ²³⁸U, but can also be achieved with lead and probably even iron absorbers. 21 refs.; 19 figs.
Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.
Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel
2018-06-05
In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
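A minimal sketch of a migration-time-adaptive window follows. The linear window growth and its parameters are illustrative assumptions; the published algorithm derives the optimal window from the measured migration velocity rather than the simple rule used here. The key idea survives: late, slow analytes give wider, lower-frequency peaks and tolerate (and benefit from) a longer averaging window.

```python
def adaptive_moving_average(data, base_window=3, growth=0.05):
    """Smooth an electropherogram with a moving-average window that
    widens with migration time (illustrative linear growth rule)."""
    out = []
    for i in range(len(data)):
        half = int(base_window + growth * i) // 2
        lo, hi = max(0, i - half), min(len(data), i + half + 1)
        window = data[lo:hi]
        out.append(sum(window) / len(window))
    return out
```

Because each point is processed with a window matched to the local peak width, fast early peaks are not over-smoothed while late baseline noise is averaged aggressively, which is how the method gains sensitivity without distorting peak shapes.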
An application of commercial data averaging techniques in pulsed photothermal experiments
International Nuclear Information System (INIS)
Grozescu, I.V.; Moksin, M.M.; Wahab, Z.A.; Yunus, W.M.M.
1997-01-01
We present an application of the data averaging technique commonly implemented in many commercial digital oscilloscopes and waveform digitizers. The technique was used for transient data averaging in pulsed photothermal radiometry experiments. Photothermal signals are accompanied by a significant amount of noise, which affects the precision of the measurements. The effect of the noise level on photothermal signal parameters, in our particular case the fitted decay time, is shown. The results of the analysis can be used in choosing the most effective averaging technique and estimating the averaging parameter values. This helps to reduce the data acquisition time while improving the signal-to-noise ratio.
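Transient (sweep) averaging as performed by such digitizers can be sketched as below; the exponential "photothermal" decay, its time constant and the noise level are invented for illustration. The RMS noise on the averaged transient falls roughly as 1/√N with the number of sweeps:

```python
import math, random

random.seed(1)

N_SAMPLES = 200

def noisy_transient(tau=40.0, noise=0.3):
    """One synthetic decay transient buried in Gaussian noise."""
    return [math.exp(-t / tau) + random.gauss(0, noise)
            for t in range(N_SAMPLES)]

def average_sweeps(n_sweeps):
    """Sample-by-sample accumulation and normalization, as a
    digitizer's averaging mode does."""
    acc = [0.0] * N_SAMPLES
    for _ in range(n_sweeps):
        for i, v in enumerate(noisy_transient()):
            acc[i] += v
    return [a / n_sweeps for a in acc]

clean = [math.exp(-t / 40.0) for t in range(N_SAMPLES)]

def rms_error(trace):
    return (sum((a - b) ** 2 for a, b in zip(trace, clean))
            / len(trace)) ** 0.5
```

With 64 sweeps the residual RMS error is roughly an eighth of the single-sweep value, which directly tightens the confidence interval on a fitted decay time.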
The average size of ordered binary subgraphs
van Leeuwen, J.; Hartel, Pieter H.
To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a
DEFF Research Database (Denmark)
Forouzesh, Mojtaba; Siwakoti, Yam Prasad; Blaabjerg, Frede
2016-01-01
Magnetically coupled Y-source impedance network is a newly proposed structure with versatile features intended for various power converter applications e.g. in the renewable energy technologies. The voltage gain of the Y-source impedance network rises exponentially as a function of turns ratio, w...
Time resolution research in liquid scintillating detection
International Nuclear Information System (INIS)
He Hongkun; Shi Haoshan
2006-01-01
The signal-processing design method is introduced into the design of a liquid scintillation detection system. Analysis of the liquid scintillation detection signal shows that improving the time resolution helps raise the detection efficiency. The implementation scheme and satisfactory experimental data are presented. The same applies to other types of liquid scintillation detection, which simply require higher-speed digital signal processing techniques and components. (authors)
High resolution photoelectron spectroscopy
International Nuclear Information System (INIS)
Arko, A.J.
1988-01-01
Photoelectron spectroscopy (PES) covers a very broad range of measurements, disciplines, and interests. As the next-generation light source, the FEL will result in improvements over the undulator that are larger than the undulator's improvements over bending magnets. The combination of high flux and high inherent resolution will result in several orders of magnitude gain in signal to noise over measurements using synchrotron-based undulators. The latter still require monochromators. Their resolution is invariably strongly energy-dependent, so that in the regions of interest for many experiments (hν > 100 eV) they will not have a resolving power much over 1000. In order to study some of the interesting phenomena in actinides (e.g. heavy fermions) one would need resolving powers of 10⁴ to 10⁵. These values are only reachable with the FEL.
Macia, Didac; Pujol, Jesus; Blanco-Hinojo, Laura; Martínez-Vilavella, Gerard; Martín-Santos, Rocío; Deus, Joan
2018-04-24
There is ample evidence from basic research in neuroscience of the importance of local cortico-cortical networks. Millimetric resolution is achievable with current functional MRI (fMRI) scanners and sequences, and consequently a number of "local" activity similarity measures have been defined to describe patterns of segregation and integration at this spatial scale. We have introduced the use of Iso-Distant local Average Correlation (IDAC), easily defined as the average fMRI temporal correlation of a given voxel with other voxels placed at increasingly separated iso-distant intervals, to characterize the curve of local fMRI signal similarities. IDAC curves can be statistically compared using parametric multivariate statistics. Furthermore, by using RGB color-coding to display jointly IDAC values belonging to three different distance lags, IDAC curves can also be displayed as multi-distance IDAC maps. We applied IDAC analysis to a sample of 41 subjects scanned under two different conditions, a resting state and an auditory-visual continuous stimulation. Multi-distance IDAC mapping was able to discriminate between gross anatomo-functional cortical areas and, moreover, was sensitive to modulation between the two brain conditions in areas known to activate and de-activate during audio-visual tasks. Unlike previous fMRI local similarity measures already in use, our approach draws special attention to the continuous smooth pattern of local functional connectivity.
Ultra-low noise miniaturized neural amplifier with hardware averaging.
Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M
2015-08-01
Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small. We apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise, with at least two parallel operational transconductance amplifiers leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of exploiting hardware averaging for noise improvement in neural recording with cuff electrodes, and can accommodate the high source impedances that are associated with miniaturized contacts and high channel counts in electrode arrays. This technique can be adopted for other applications where miniaturized and implantable multichannel acquisition systems with ultra-low noise and low power are required.
Averaging for solitons with nonlinearity management
International Nuclear Information System (INIS)
Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.
2003-01-01
We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations.
Perceived Average Orientation Reflects Effective Gist of the Surface.
Cha, Oakyoon; Chong, Sang Chul
2018-03-01
The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.
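Averaging orientations requires circular statistics, since orientation is defined modulo 180°. A common construction (assumed here for illustration; the paper does not specify its estimator) doubles the angles so 0° and 180° coincide, takes the circular mean, then halves the result:

```python
import math

def average_orientation(angles_deg):
    """Mean of orientations (mod 180 deg) via the doubled-angle
    circular mean."""
    s = sum(math.sin(math.radians(2 * a)) for a in angles_deg)
    c = sum(math.cos(math.radians(2 * a)) for a in angles_deg)
    return (math.degrees(math.atan2(s, c)) / 2) % 180
```

A naive arithmetic mean of 170° and 10° would give 90°, but the circular mean correctly returns the orientation near 0°/180°, which is the kind of ensemble statistic an observer would need to compute over the surface elements.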
DSCOVR Magnetometer Level 2 One Minute Averages
National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data
DSCOVR Magnetometer Level 2 One Second Averages
National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data
Spacetime averaging of exotic singularity universes
International Nuclear Information System (INIS)
Dabrowski, Mariusz P.
2011-01-01
Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.
NOAA Average Annual Salinity (3-Zone)
California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...
GPR Signal Denoising and Target Extraction With the CEEMD Method
Li, Jing
2015-04-17
In this letter, we apply a time and frequency analysis method based on the complete ensemble empirical mode decomposition (CEEMD) method in ground-penetrating radar (GPR) signal processing. It decomposes the GPR signal into a sum of oscillatory components, with guaranteed positive and smoothly varying instantaneous frequencies. The key idea of this method relies on averaging the modes obtained by empirical mode decomposition (EMD) applied to several realizations of Gaussian white noise added to the original signal. It can solve the mode-mixing problem in the EMD method and improve the resolution of ensemble EMD (EEMD) when the signal has a low signal-to-noise ratio. First, we analyze the difference between the basic theory of EMD, EEMD, and CEEMD. Then, we compare the time and frequency analysis with Hilbert-Huang transform to test the results of different methods. The synthetic and real GPR data demonstrate that CEEMD promises higher spectral-spatial resolution than the other two EMD methods in GPR signal denoising and target extraction. Its decomposition is complete, with a numerically negligible error.
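The noise-cancellation premise behind CEEMD, that averaging over independent realizations of added white noise suppresses the noise while preserving the signal, can be illustrated without an EMD implementation. A hedged numpy sketch of plain ensemble averaging and its roughly sigma/sqrt(N) residual (all signal parameters here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)   # underlying signal
sigma = 1.0                         # noise standard deviation

# N independent noisy realizations of the same signal.
N = 400
realizations = clean + rng.normal(0.0, sigma, size=(N, t.size))

avg = realizations.mean(axis=0)     # ensemble average

# Residual noise shrinks roughly as sigma / sqrt(N).
resid = np.std(avg - clean)
print(resid, sigma / np.sqrt(N))    # both around 0.05
```

CEEMD applies this idea per decomposition mode: EMD is run on many noise-perturbed copies of the signal, and the modes are averaged so the added noise cancels.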
Improving consensus structure by eliminating averaging artifacts
Directory of Open Access Journals (Sweden)
KC Dukka B
2009-03-01
Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which
40 CFR 76.11 - Emissions averaging.
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...
Determinants of College Grade Point Averages
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. A low GPA can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…
Picosecond resolution programmable delay line
International Nuclear Information System (INIS)
Suchenek, Mariusz
2009-01-01
The note presents the implementation of a programmable delay line for digital signals. The tested circuit has a subnanosecond delay range, programmable with picosecond resolution. The implementation is based on low-cost components that are easily available on the market. (technical design note)
Gap Resolution
Energy Technology Data Exchange (ETDEWEB)
2017-04-25
Gap Resolution is a software package that was developed to improve Newbler genome assemblies by automating the closure of sequence gaps caused by repetitive regions in the DNA. This is done by performing the following steps: 1) Identify and distribute the data for each gap into sub-projects. 2) Assemble the data associated with each sub-project using a secondary assembler, such as Newbler or PGA. 3) Determine if any gaps are closed after reassembly, and either design fakes (consensus of a closed gap) for those that closed or lab experiments for those that require additional data. The software requires as input a genome assembly produced by the Newbler assembler provided by Roche, and 454 data containing paired-end reads.
High resolution mid-infrared spectroscopy based on frequency upconversion
DEFF Research Database (Denmark)
Dam, Jeppe Seidelin; Hu, Qi; Tidemand-Lichtenberg, Peter
2013-01-01
signals can be analyzed. The obtainable frequency resolution is usually in the nm range where sub nm resolution is preferred in many applications, like gas spectroscopy. In this work we demonstrate how to obtain sub nm resolution when using upconversion. In the presented realization one object point...... high resolution spectral performance by observing emission from hot water vapor in a butane gas burner....
Stochastic Optimal Prediction with Application to Averaged Euler Equations
Energy Technology Data Exchange (ETDEWEB)
Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chorin, Alexandre J. [Univ. of California, Berkeley, CA (United States); Crutchfield, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2017-04-24
Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.
Computation of the bounce-average code
International Nuclear Information System (INIS)
Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.
1977-01-01
The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended
Rotational averaging of multiphoton absorption cross sections
Energy Technology Data Exchange (ETDEWEB)
Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
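For the simplest case, a rank-2 tensor, the rotational average reduces to the isotropic part, (tr A / 3)·I. A Monte Carlo sketch over Haar-random rotations illustrates the averaging operation itself (this is a numerical illustration, not the paper's closed-form scheme for higher even-rank tensors):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_rotation():
    """Haar-distributed orthogonal 3x3 matrix via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.sign(np.diag(r))  # fix column signs for uniformity

A = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 0.5],
              [0.0, 0.5, 1.0]])

# Monte Carlo rotational average of R A R^T over N random orientations.
N = 5000
avg = sum(R @ A @ R.T for R in (random_rotation() for _ in range(N))) / N

print(avg)  # approaches (trace(A)/3) * I = 2 * identity
```

The analytic result the paper generalizes is exactly this limit: only the rotation-invariant part of the tensor survives isotropic averaging.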
Sea Surface Temperature Average_SST_Master
National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic
Should the average tax rate be marginalized?
Czech Academy of Sciences Publication Activity Database
Feldman, N. E.; Katuščák, Peter
-, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf
A practical guide to averaging functions
Beliakov, Gleb; Calvo Sánchez, Tomasa
2016-01-01
This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
MN Temperature Average (1961-1990) - Line
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
MN Temperature Average (1961-1990) - Polygon
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Average Bandwidth Allocation Model of WFQ
Directory of Open Access Journals (Sweden)
Tomáš Balogh
2012-01-01
Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model outcome with examples and simulation results using the NS2 simulator.
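A hedged sketch of the kind of iterative weighted allocation the abstract describes: each flow's average share is proportional to its WFQ weight, flows whose input rate is below their share are capped at that rate, and the surplus is redistributed among the remaining flows. The function and its parameters are illustrative assumptions, not the paper's exact model:

```python
def wfq_average_bandwidth(link_rate, weights, input_rates):
    """Iteratively assign bandwidth proportional to WFQ weights.

    Flows whose input rate is below their weighted share keep only
    what they need; the surplus is redistributed among the rest.
    """
    n = len(weights)
    alloc = [0.0] * n
    active = set(range(n))
    capacity = link_rate
    while active:
        total_w = sum(weights[i] for i in active)
        share = {i: capacity * weights[i] / total_w for i in active}
        bottlenecked = {i for i in active if input_rates[i] <= share[i]}
        if not bottlenecked:           # all remaining flows are saturated
            for i in active:
                alloc[i] = share[i]
            break
        for i in bottlenecked:         # cap low-rate flows at their input
            alloc[i] = input_rates[i]
            capacity -= input_rates[i]
        active -= bottlenecked
    return alloc

# 10 Mbit/s link, weights 1:2:2; flow 0 only sends 1 Mbit/s.
print(wfq_average_bandwidth(10.0, [1, 2, 2], [1.0, 8.0, 8.0]))
```

With flow 0 capped at 1 Mbit/s, the remaining 9 Mbit/s splits 4.5/4.5 between the two equally weighted saturated flows.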
Nonequilibrium statistical averages and thermo field dynamics
International Nuclear Information System (INIS)
Marinaro, A.; Scarpetta, Q.
1984-01-01
An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion, it is pointed out that the proposed procedure for computing time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.
An approximate analytical approach to resampling averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
SIGNAL WORDS TOPIC FACT SHEET
NPIC fact sheets are designed to answer questions that are commonly asked by the ... making decisions about pesticide use. What are Signal Words? Signal words are found on pesticide product labels, ...
Energy Technology Data Exchange (ETDEWEB)
Reimold, M.; Mueller-Schauenburg, W.; Dohmen, B.M.; Bares, R. [Department of Nuclear Medicine, University of Tuebingen, Otfried-Mueller-Strasse 14, 72076, Tuebingen (Germany); Becker, G.A. [Nuclear Medicine, University of Leipzig, Leipzig (Germany); Reischl, G. [Radiopharmacy, University of Tuebingen, Tuebingen (Germany)
2004-04-01
Due to the stochastic nature of radioactive decay, any measurement of radioactivity concentration requires spatial averaging. In pharmacokinetic analysis of time-activity curves (TAC), such averaging over heterogeneous tissues may introduce a systematic error (heterogeneity error) but may also improve the accuracy and precision of parameter estimation. In addition to spatial averaging (inevitable due to limited scanner resolution and intended in ROI analysis), interindividual averaging may theoretically be beneficial, too. The aim of this study was to investigate the effect of such averaging on the binding potential (BP) calculated with Logan's non-invasive graphical analysis and the "simplified reference tissue method" (SRTM) proposed by Lammertsma and Hume, on the basis of simulated and measured positron emission tomography data from [¹¹C]d-threo-methylphenidate (dMP) and [¹¹C]raclopride (RAC) PET. dMP was not quantified with SRTM since the low k₂ (washout rate constant from the first tissue compartment) introduced a high noise sensitivity. Even for considerably different shapes of TAC (dMP PET in parkinsonian patients and healthy controls, [¹¹C]raclopride in patients with and without haloperidol medication) and a high variance in the rate constants (e.g. simulated standard deviation of K₁ = 25%), the BP obtained from the average TAC was close to the mean BP (<5%). However, unfavourably distributed parameters, especially a correlated large variance in two or more parameters, may lead to larger errors. In Monte Carlo simulations, interindividual averaging before quantification reduced the variance from the SRTM (beyond a critical signal-to-noise ratio) and the bias in Logan's method. Interindividual averaging may further increase accuracy when there is an error term in the reference tissue assumption E = DV₂ − DV′ (DV₂ = distribution volume of the first tissue compartment, DV′ ...
Improved averaging for non-null interferometry
Fleig, Jon F.; Murphy, Paul E.
2013-09-01
Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
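The two layers of rejection described above, discarding whole maps dominated by defects and then ignoring unreliable pixels in the survivors, can be sketched with NaN masking in numpy. This is a simplified illustration rather than the authors' algorithm; the rejection threshold is an assumed parameter:

```python
import numpy as np

def robust_phase_average(maps, max_bad_fraction=0.2):
    """Average phase maps, rejecting defective maps and unreliable pixels.

    maps: array-like of shape (n_maps, H, W), with NaN marking voids
    and unwrapping defects. Maps whose NaN fraction exceeds
    max_bad_fraction are discarded entirely; the remainder are
    averaged per pixel, ignoring NaNs.
    """
    maps = np.asarray(maps, dtype=float)
    bad_fraction = np.isnan(maps).mean(axis=(1, 2))
    kept = maps[bad_fraction <= max_bad_fraction]
    avg = np.nanmean(kept, axis=0)      # per-pixel mean over valid samples
    std = np.nanstd(kept, axis=0)       # per-pixel variability estimate
    return avg, std, len(kept)
```

A map that is mostly void never enters the average, so it cannot spoil the result; isolated bad pixels simply reduce the sample count at that location.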
Ocean tides in GRACE monthly averaged gravity fields
DEFF Research Database (Denmark)
Knudsen, Per
2003-01-01
The GRACE mission will map the Earth's gravity fields and its variations with unprecedented accuracy during its 5-year lifetime. Unless ocean tide signals and their load upon the solid earth are removed from the GRACE data, their long period aliases obscure more subtle climate signals which GRACE aims at. In this analysis the results of Knudsen and Andersen (2002) have been verified using actual post-launch orbit parameters of the GRACE mission. The current ocean tide models are not accurate enough to correct GRACE data at harmonic degrees lower than 47. The accumulated tidal errors may affect the GRACE data up to harmonic degree 60. A study of the revised alias frequencies confirms that the ocean tide errors will not cancel in the GRACE monthly averaged temporal gravity fields. The S-2 and the K-2 terms have alias frequencies much longer than 30 days, so they remain almost unreduced...
Asynchronous Gossip for Averaging and Spectral Ranking
Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh
2014-08-01
We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
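A minimal simulation of classical pairwise gossip averaging, the baseline whose asynchronous variant the paper shows can fail to reach the true average: at each step two random nodes replace their values with the pair's mean, and all nodes converge to the global average. This is an illustration of the unconstrained case, not the paper's reinforcement-learning scheme:

```python
import random

def gossip_average(values, steps=20000, seed=0):
    """Pairwise gossip: at each step two random nodes replace their
    values with the pair's average; the per-step update preserves the
    global sum, so all values converge to the initial mean."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        m = 0.5 * (x[i] + x[j])
        x[i] = x[j] = m
    return x

vals = gossip_average([0.0, 4.0, 8.0, 12.0])
print(vals)  # every entry close to the mean, 6.0
```

Each update conserves the sum of the values while shrinking their spread, which is why the fixed point is the average; the difficulty the paper highlights arises when updates are asynchronous and one-sided.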
An approach to averaging digitized plantagram curves.
Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B
1994-07-01
The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
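The ray construction can be sketched in simplified form: resample each digitized outline as radius versus equiangular polar angle about a centre, then average the radial profiles across feet. A hedged illustration using a single centroid rather than the paper's alignment axis and two ray centres:

```python
import numpy as np

def radial_profile(points, n_rays=90):
    """Resample a closed 2-D outline as radii along equiangular rays
    from its centroid, interpolating radius versus polar angle."""
    pts = np.asarray(points, dtype=float)
    centre = pts.mean(axis=0)
    d = pts - centre
    theta = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    r = np.hypot(d[:, 0], d[:, 1])
    order = np.argsort(theta)
    theta, r = theta[order], r[order]
    rays = np.linspace(0, 2 * np.pi, n_rays, endpoint=False)
    return np.interp(rays, theta, r, period=2 * np.pi)

def average_profile(outlines, n_rays=90):
    """Mean radial profile over outlines of similar overall length."""
    return np.mean([radial_profile(o, n_rays) for o in outlines], axis=0)

# A circle of radius 2 maps to a constant profile of 2.
angles = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.c_[2 * np.cos(angles), 2 * np.sin(angles)]
print(radial_profile(circle)[:3])  # ~[2. 2. 2.]
```

Grouping profiles by foot length before averaging, as the paper does in ±2.25 mm bins, keeps size variation from smearing the shape information.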
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
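The core computation, a trailing moving average of the driver series and a scan of window lengths for the best correlation, can be sketched on synthetic data. All series and parameters below are invented for illustration, not the study's data:

```python
import numpy as np

def trailing_average(x, w):
    """Trailing moving average: mean of each window of w values."""
    x = np.asarray(x, dtype=float)
    return np.convolve(x, np.ones(w) / w, mode="valid")

# A 'response' series that echoes an 11-year average of the driver.
rng = np.random.default_rng(2)
driver = rng.normal(size=200)
response = trailing_average(driver, 11) + 0.1 * rng.normal(size=200 - 10)

def corr_at(w):
    """Correlation between the response and a w-year trailing average,
    aligned so both series end at the same time index."""
    m = trailing_average(driver, w)
    k = min(len(m), len(response))
    return np.corrcoef(m[-k:], response[-k:])[0, 1]

best = max(range(2, 21), key=corr_at)
print(best)  # the goodness-of-fit scan peaks near the true window, 11
```

The study's peak at 11 years is exactly this kind of maximum in a correlation-versus-window scan.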
Exploiting scale dependence in cosmological averaging
International Nuclear Information System (INIS)
Mattsson, Teppo; Ronkainen, Maria
2008-01-01
We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
Aperture averaging in strong oceanic turbulence
Gökçe, Muhsin Caner; Baykal, Yahya
2018-04-01
Receiver aperture averaging is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression for the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor is also presented for strong oceanic turbulence.
Construction of average adult Japanese voxel phantoms for dose assessment
International Nuclear Information System (INIS)
Sato, Kaoru; Takahashi, Fumiaki; Satoh, Daiki; Endo, Akira
2011-12-01
The International Commission on Radiological Protection (ICRP) adopted the adult reference voxel phantoms based on the physiological and anatomical reference data of Caucasians in October 2007. The organs and tissues of these phantoms were segmented on the basis of ICRP Publication 103. In the future, the dose coefficients for internal dose and dose conversion coefficients for external dose calculated using the adult reference voxel phantoms will be widely used in the radiation protection field. On the other hand, the body sizes and organ masses of adult Japanese are generally smaller than those of adult Caucasians. In addition, there are cases in which anatomical characteristics such as body size, organ mass and posture of subjects influence the organ doses in dose assessment for medical treatments and radiation accidents. Therefore, human phantoms with the average anatomical characteristics of Japanese were needed. The authors constructed averaged adult Japanese male and female voxel phantoms by modifying the previously developed high-resolution adult male (JM) and female (JF) voxel phantoms in three respects: (1) the heights and weights were matched to the Japanese averages; (2) the masses of organs and tissues were adjusted to the Japanese averages within 10%; (3) the organs and tissues newly added for evaluation of the effective dose in ICRP Publication 103 were modeled. In this study, the organ masses, distances between organs, specific absorbed fractions (SAFs) and dose conversion coefficients of these phantoms were compared with those evaluated using the ICRP adult reference voxel phantoms. This report provides valuable information on the anatomical and dosimetric characteristics of the averaged adult Japanese male and female voxel phantoms developed as reference phantoms for adult Japanese. (author)
Regional averaging and scaling in relativistic cosmology
International Nuclear Information System (INIS)
Buchert, Thomas; Carfora, Mauro
2002-01-01
Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that the cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_B̄^m + Ω̄_B̄^R + Ω̄_B̄^Λ + Ω̄_B̄^Q = 1, where Ω̄_B̄^m, Ω̄_B̄^R and Ω̄_B̄^Λ correspond to the standard Friedmannian parameters, while Ω̄_B̄^Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias
Average: the juxtaposition of procedure and context
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
Average-case analysis of numerical problems
2000-01-01
The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.
Grassmann Averages for Scalable Robust PCA
DEFF Research Database (Denmark)
Hauberg, Søren; Feragen, Aasa; Black, Michael J.
2014-01-01
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
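The element-wise trimmed average underlying TGA can be sketched directly: sort each coordinate across observations, drop the extremes, and average the rest. A hedged numpy illustration of why this resists pixel outliers (not the full Grassmann averaging algorithm, which applies the idea inside a subspace-estimation loop):

```python
import numpy as np

def trimmed_average(X, trim=0.2):
    """Element-wise trimmed mean over observations (axis 0): discard the
    largest and smallest `trim` fraction of values before averaging."""
    X = np.sort(np.asarray(X, dtype=float), axis=0)
    k = int(trim * X.shape[0])
    return X[k:X.shape[0] - k].mean(axis=0)

# One corrupted pixel in one observation barely moves the trimmed mean,
# while the plain mean is dragged far off.
data = np.ones((10, 4))
data[0, 2] = 1000.0
print(trimmed_average(data))  # [1. 1. 1. 1.]
print(data.mean(axis=0))      # [1. 1. 100.9 1.]
```

Because the trim operates per element, a single bad pixel only affects its own coordinate, which is what makes the estimator suitable for images with scattered outliers.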
Estimating average tree crown size using high-resolution airborne data
Czech Academy of Sciences Publication Activity Database
Brovkina, Olga; Latypov, I.; Cienciala, E.
2015-01-01
Roč. 9, may 13 (2015), 096053-1-096053-13 ISSN 1931-3195 R&D Projects: GA MŠk(CZ) LO1415; GA MŠk OC09001 Institutional support: RVO:67179843 Keywords : crown size * airborne data * spruce * granulometry Subject RIV: GK - Forestry Impact factor: 0.937, year: 2015
Model averaging, optimal inference and habit formation
Directory of Open Access Journals (Sweden)
Thomas H B FitzGerald
2014-06-01
Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
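Evidence-weighted averaging of model predictions, as described above, can be sketched numerically (a minimal illustration with assumed log-evidence values; equal model priors are assumed):

```python
import numpy as np

def bma_weights(log_evidences):
    """Posterior model probabilities from log model evidences,
    assuming equal model priors. Uses the log-sum-exp trick for
    numerical stability."""
    le = np.asarray(log_evidences, dtype=float)
    w = np.exp(le - le.max())
    return w / w.sum()

def bma_predict(log_evidences, model_predictions):
    """Evidence-weighted average of each model's prediction."""
    w = bma_weights(log_evidences)
    return w @ np.asarray(model_predictions, dtype=float)

# Two models: the one with higher evidence dominates the average.
print(bma_weights([-10.0, -12.0]))   # ≈ [0.881, 0.119]
```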
Generalized Jackknife Estimators of Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...
Average beta measurement in EXTRAP T1
International Nuclear Information System (INIS)
Hedin, E.R.
1988-12-01
Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_θ in Extrap T1 is described. The results of a series of measurements yielding β_θ as a function of externally applied toroidal field are presented. (author)
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS
International Nuclear Information System (INIS)
2005-01-01
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.
Bayesian Averaging is Well-Temperated
DEFF Research Database (Denmark)
Hansen, Lars Kai
2000-01-01
Bayesian predictions are stochastic just like predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution, the situation is l...
Gibbs equilibrium averages and Bogolyubov measure
International Nuclear Information System (INIS)
Sankovich, D.P.
2011-01-01
Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure
High average-power induction linacs
International Nuclear Information System (INIS)
Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.
1989-01-01
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs
Function reconstruction from noisy local averages
International Nuclear Information System (INIS)
Chen Yu; Huang Jianguo; Han Weimin
2008-01-01
A regularization method is proposed for function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies.
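A regularized reconstruction of this kind can be sketched with Tikhonov regularization (a one-dimensional toy with an assumed window size and regularization parameter; the paper's actual method and error bounds are more general):

```python
import numpy as np

# Recover f on a grid from noisy local (window) averages by minimizing
# ||A f - b||^2 + lam ||f||^2, i.e. solving (A^T A + lam I) f = A^T b.
# Window size w and lam are illustrative choices.
n, w = 50, 5
A = np.zeros((n - w + 1, n))
for i in range(n - w + 1):
    A[i, i:i + w] = 1.0 / w          # each row is one local average

rng = np.random.default_rng(0)
f_true = np.sin(np.linspace(0, np.pi, n))
b = A @ f_true + 0.01 * rng.standard_normal(A.shape[0])  # noisy averages

lam = 1e-3
f_rec = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
print(np.mean(np.abs(f_rec - f_true)))   # reconstruction error stays modest
```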
A singularity theorem based on spatial averages
Indian Academy of Sciences (India)
Journal of Physics, July 2007, pp. 31–47. In this paper I would like to present a result which confirms – at least partially – ... A detailed analysis of how the model fits in with the ... Further, the statement that the spatial average ... Financial support under grants FIS2004-01626 and ...
Multiphase averaging of periodic soliton equations
International Nuclear Information System (INIS)
Forest, M.G.
1979-01-01
The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations
A dynamic analysis of moving average rules
Chiarella, C.; He, X.Z.; Hommes, C.H.
2006-01-01
The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
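A minimal version of such a technical trading rule, the moving-average crossover, can be sketched as follows (window lengths and prices are illustrative only):

```python
import numpy as np

def moving_average(prices, window):
    """Simple trailing moving average (valid part only)."""
    kernel = np.ones(window) / window
    return np.convolve(prices, kernel, mode="valid")

def ma_crossover_signal(prices, short=3, long=5):
    """+1 (long position) when the short MA is above the long MA,
    else -1; a toy version of the rules studied in such models."""
    ma_s = moving_average(prices, short)[long - short:]  # align lengths
    ma_l = moving_average(prices, long)
    return np.where(ma_s > ma_l, 1, -1)

prices = np.array([1, 2, 3, 4, 5, 4, 3, 2, 1.0])
print(ma_crossover_signal(prices))   # [ 1  1  1 -1 -1]
```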
Essays on model averaging and political economics
Wang, W.
2013-01-01
This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple
2010-01-01
7 CFR § 1209.12 (2010): On average. Agriculture Regulations of the Department of Agriculture (Continued), Agricultural Marketing Service (Marketing Agreements...), Mushroom Promotion, Research, and Consumer Information Order, Definitions § 1209...
High average-power induction linacs
International Nuclear Information System (INIS)
Prono, D.S.; Barrett, D.; Bowles, E.
1989-01-01
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs
Average Costs versus Net Present Value
E.A. van der Laan (Erwin); R.H. Teunter (Ruud)
2000-01-01
While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives
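The AC approach for the EOQ model can be illustrated directly (parameter values are made up; Q* = sqrt(2KD/h) is the classical result):

```python
import math

def eoq_average_cost(Q, demand, order_cost, holding_cost):
    """Classical EOQ average cost per unit time: ordering cost K*D/Q
    plus holding cost h*Q/2 (purchase cost omitted)."""
    return order_cost * demand / Q + holding_cost * Q / 2.0

def eoq_optimal_quantity(demand, order_cost, holding_cost):
    """AC-minimizing lot size, Q* = sqrt(2*K*D/h)."""
    return math.sqrt(2.0 * order_cost * demand / holding_cost)

D, K, h = 1000.0, 50.0, 2.0          # illustrative parameters
q_star = eoq_optimal_quantity(D, K, h)
print(q_star)                              # ≈ 223.6
print(eoq_average_cost(q_star, D, K, h))   # ≈ 447.2
```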
Average beta-beating from random errors
Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department
2018-01-01
The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors, quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.
Reliability Estimates for Undergraduate Grade Point Average
Westrick, Paul A.
2017-01-01
Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…
General and Local: Averaged k-Dependence Bayesian Classifiers
Directory of Open Access Journals (Sweden)
Limin Wang
2015-06-01
The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can construct at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify the changes of interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, averaged k-dependence Bayesian (AKDB) classifiers, will average the output of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree-augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.
Tendon surveillance requirements - average tendon force
International Nuclear Information System (INIS)
Fulton, J.F.
1982-01-01
Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)
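The normalize-then-average step can be sketched schematically (this multiplicative correction is only a hypothetical stand-in for the correction factor actually derived in the paper; all force values are made up):

```python
def group_average_force(measured, predicted, predicted_group_mean):
    """Hypothetical sketch: scale each sampled tendon's measured
    lift-off force by the ratio of the group-mean predicted force to
    that tendon's own predicted force, then average the sample. The
    real correction factor is derived in the paper; this multiplicative
    form is purely illustrative."""
    corrected = [m * predicted_group_mean / p
                 for m, p in zip(measured, predicted)]
    return sum(corrected) / len(corrected)

# Two sampled tendons with different predicted losses:
print(group_average_force([980.0, 1015.0], [1000.0, 1050.0], 1025.0))
```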
On the Averaging of Cardiac Diffusion Tensor MRI Data: The Effect of Distance Function Selection
Giannakidis, Archontis; Melkus, Gerd; Yang, Guang; Gullberg, Grant T.
2016-01-01
Diffusion tensor magnetic resonance imaging (DT-MRI) allows a unique insight into the microstructure of highly-directional tissues. The selection of the most proper distance function for the space of diffusion tensors is crucial in enhancing the clinical application of this imaging modality. Both linear and nonlinear metrics have been proposed in the literature over the years. The debate on the most appropriate DT-MRI distance function is still ongoing. In this paper, we presented a framework to compare the Euclidean, affine-invariant Riemannian and log-Euclidean metrics using actual high-resolution DT-MRI rat heart data. We employed temporal averaging at the diffusion tensor level of three consecutive and identically-acquired DT-MRI datasets from each of five rat hearts as a means to rectify the background noise-induced loss of myocyte directional regularity. This procedure is applied here for the first time in the context of tensor distance function selection. When compared with previous studies that used a different concrete application to juxtapose the various DT-MRI distance functions, this work is unique in that it combined the following: (i) Metrics were judged by quantitative –rather than qualitative– criteria, (ii) the comparison tools were non-biased, (iii) a longitudinal comparison operation was used on a same-voxel basis. The statistical analyses of the comparison showed that the three DT-MRI distance functions tend to provide equivalent results. Hence, we came to the conclusion that the tensor manifold for cardiac DT-MRI studies is a curved space of almost zero curvature. The signal to noise ratio dependence of the operations was investigated through simulations. Finally, the “swelling effect” occurrence following Euclidean averaging was found to be too unimportant to be worth consideration. PMID:27754986
On the averaging of cardiac diffusion tensor MRI data: the effect of distance function selection
Giannakidis, Archontis; Melkus, Gerd; Yang, Guang; Gullberg, Grant T.
2016-11-01
Diffusion tensor magnetic resonance imaging (DT-MRI) allows a unique insight into the microstructure of highly-directional tissues. The selection of the most proper distance function for the space of diffusion tensors is crucial in enhancing the clinical application of this imaging modality. Both linear and nonlinear metrics have been proposed in the literature over the years. The debate on the most appropriate DT-MRI distance function is still ongoing. In this paper, we presented a framework to compare the Euclidean, affine-invariant Riemannian and log-Euclidean metrics using actual high-resolution DT-MRI rat heart data. We employed temporal averaging at the diffusion tensor level of three consecutive and identically-acquired DT-MRI datasets from each of five rat hearts as a means to rectify the background noise-induced loss of myocyte directional regularity. This procedure is applied here for the first time in the context of tensor distance function selection. When compared with previous studies that used a different concrete application to juxtapose the various DT-MRI distance functions, this work is unique in that it combined the following: (i) metrics were judged by quantitative—rather than qualitative—criteria, (ii) the comparison tools were non-biased, (iii) a longitudinal comparison operation was used on a same-voxel basis. The statistical analyses of the comparison showed that the three DT-MRI distance functions tend to provide equivalent results. Hence, we came to the conclusion that the tensor manifold for cardiac DT-MRI studies is a curved space of almost zero curvature. The signal to noise ratio dependence of the operations was investigated through simulations. Finally, the ‘swelling effect’ occurrence following Euclidean averaging was found to be too unimportant to be worth consideration.
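The contrast between Euclidean and log-Euclidean tensor averaging can be sketched for 2x2 SPD matrices (illustrative tensors; the log-Euclidean mean avoids the determinant 'swelling' that the Euclidean mean exhibits):

```python
import numpy as np

def _logm_spd(S):
    """Matrix logarithm of a symmetric positive-definite matrix
    via eigendecomposition."""
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

def _expm_sym(S):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(np.exp(vals)) @ vecs.T

def euclidean_mean(tensors):
    return np.mean(tensors, axis=0)

def log_euclidean_mean(tensors):
    """Average in the log domain, then map back to the SPD cone."""
    return _expm_sym(np.mean([_logm_spd(T) for T in tensors], axis=0))

# Two illustrative diffusion tensors with equal determinant 3:
A = np.diag([3.0, 1.0])
B = np.diag([1.0, 3.0])
print(euclidean_mean([A, B]))       # diag(2, 2): determinant swells to 4
print(log_euclidean_mean([A, B]))   # diag(sqrt(3), sqrt(3)): det stays 3
```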
Statistics on exponential averaging of periodograms
Energy Technology Data Exchange (ETDEWEB)
Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering
1994-11-01
The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
Statistics on exponential averaging of periodograms
International Nuclear Information System (INIS)
Peeters, T.T.J.M.; Ciftcioglu, Oe.
1994-11-01
The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
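The exponential averaging update itself is simple to sketch, S_k = (1 - alpha) S_{k-1} + alpha P_k (alpha and segment sizes are illustrative; alpha plays the role of the inverse time constant discussed above):

```python
import numpy as np

def exp_avg_psd(segments, alpha=0.1):
    """Exponentially averaged PSD estimate: for each new segment's raw
    periodogram P_k, update S_k = (1 - alpha) * S_{k-1} + alpha * P_k."""
    S = None
    for seg in segments:
        P = np.abs(np.fft.rfft(seg)) ** 2 / len(seg)   # raw periodogram
        S = P if S is None else (1.0 - alpha) * S + alpha * P
    return S

rng = np.random.default_rng(1)
segs = [rng.standard_normal(256) for _ in range(200)]
S = exp_avg_psd(segs, alpha=0.05)
print(S.shape)   # (129,) — one-sided spectrum of a 256-sample segment
```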
ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE
Directory of Open Access Journals (Sweden)
Carmen BOGHEAN
2013-12-01
Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity across the factors affecting it is conducted by means of the u-substitution method.
Weighted estimates for the averaging integral operator
Czech Academy of Sciences Publication Activity Database
Opic, Bohumír; Rákosník, Jiří
2010-01-01
Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231
Average Transverse Momentum Quantities Approaching the Lightfront
Boer, Daniel
2015-01-01
In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...
Time-averaged MSD of Brownian motion
Andreanov, Alexei; Grebenkov, Denis
2012-01-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...
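The TAMSD itself is a short computation (a 1D NumPy sketch; for Brownian motion with unit-variance steps, the TAMSD at lag n is close to n for a long trajectory):

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged mean-square displacement of a 1D trajectory x
    at the given lag: the mean over k of (x[k + lag] - x[k])**2."""
    d = x[lag:] - x[:-lag]
    return np.mean(d ** 2)

# Brownian motion as a cumulative sum of unit-variance Gaussian steps:
rng = np.random.default_rng(2)
traj = np.cumsum(rng.standard_normal(100_000))
print(tamsd(traj, 10))   # close to 10 for this long trajectory
```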
Scintillation camera with second order resolution
International Nuclear Information System (INIS)
Muehllehner, G.
1976-01-01
A scintillation camera for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area is described in which means is provided for second order positional resolution. The phototubes, which normally provide only a single order of resolution, are modified to provide second order positional resolution of radiation within an object positioned for viewing by the scintillation camera. The phototubes are modified in that multiple anodes are provided to receive signals from the photocathode in a manner such that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing as an output of this scintillation camera is thereby improved
Average configuration of the geomagnetic tail
International Nuclear Information System (INIS)
Fairfield, D.H.
1979-01-01
Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed.
Unscrambling The "Average User" Of Habbo Hotel
Directory of Open Access Journals (Sweden)
Mikael Johnson
2007-01-01
The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.
Changing mortality and average cohort life expectancy
Directory of Open Access Journals (Sweden)
Robert Schoen
2005-10-01
Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.
Signal processing in microdosimetry
International Nuclear Information System (INIS)
Arbel, A.
1984-01-01
Signals occurring in microdosimetric measurements cover a dynamic range of 100 dB at a counting rate which normally stays below 10⁴ but could increase significantly in case of an accident. The need for high resolution at low energies, non-linear signal processing to accommodate the specified dynamic range, easy calibration and thermal stability are conflicting requirements which pose formidable design problems. These problems are reviewed, and a practical approach to their solution is given employing a single processing channel. (author)
Hegazy, Maha A.; Lotfy, Hayam M.; Mowaka, Shereen; Mohamed, Ekram Hany
2016-07-01
Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of the continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). The univariate CWT, in contrast, failed to simultaneously determine the quaternary mixture components; it was able to determine only PAR and PAP, and the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. During the calculations of CWT, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and the absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and concentration matrices, and validation was performed by both cross-validation and external validation sets. Both methods were successfully applied for the determination of the studied drugs in pharmaceutical formulations.
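A bare-bones CWT with a Ricker (Mexican hat) wavelet can be sketched as follows (a direct-convolution stand-in for library routines such as pywt.cwt; the scales and test signal are illustrative):

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican hat) wavelet of width parameter a, sampled on
    `points` points centred at zero."""
    t = np.arange(points) - (points - 1) / 2.0
    norm = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return norm * (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt(signal, scales):
    """Continuous wavelet transform by direct convolution: one row of
    coefficients per scale."""
    out = np.empty((len(scales), len(signal)))
    for i, a in enumerate(scales):
        wavelet = ricker(min(10 * int(a), len(signal)), a)
        out[i] = np.convolve(signal, wavelet, mode="same")
    return out

t = np.linspace(0, 1, 400)
sig = np.sin(2 * np.pi * 25 * t)
coefs = cwt(sig, scales=np.arange(1, 31))
print(coefs.shape)   # (30, 400): scales x time
```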
Applications of ordered weighted averaging (OWA) operators in environmental problems
Directory of Open Access Journals (Sweden)
Carlos Llopis-Albert
2017-04-01
This paper presents an application of a prioritized weighted aggregation operator based on ordered weighted averaging (OWA) to deal with stakeholders' constructive participation in water resources projects. They have different degrees of acceptance or preference regarding the measures and policies to be carried out, which lead to different environmental and socio-economic outcomes, and hence, to different levels of stakeholders' satisfaction. The methodology establishes a prioritization relationship upon the stakeholders, whose preferences are aggregated by means of weights depending on the satisfaction of the higher-priority policy maker. The methodology has been successfully applied to a Public Participation Project (PPP) in watershed management, thus obtaining efficient environmental measures in conflict resolution problems under actors' preference uncertainties.
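The basic OWA aggregation step can be sketched in a few lines (scores and weight vectors are illustrative; the prioritized weighting of the paper is not reproduced here):

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: sort the arguments in descending
    order, then take the weighted sum with the fixed weight vector."""
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0), "OWA weights must sum to 1"
    return np.sort(np.asarray(values, dtype=float))[::-1] @ w

# Satisfaction scores of three stakeholders, aggregated three ways:
scores = [0.9, 0.4, 0.6]
print(owa(scores, [1.0, 0.0, 0.0]))      # 0.9  (max: optimistic)
print(owa(scores, [1/3, 1/3, 1/3]))      # ≈ 0.633  (plain average)
print(owa(scores, [0.0, 0.0, 1.0]))      # 0.4  (min: pessimistic)
```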
DEFF Research Database (Denmark)
Novak, Ivana
2016-01-01
The Department of Biology at the University of Copenhagen explains the function of ATP signalling in the pancreas.
arXiv Time resolution of silicon pixel sensors
Riegler, W.
2017-11-21
We derive expressions for the time resolution of silicon detectors, using the Landau theory and a PAI model for describing the charge deposit of high energy particles. First we use the centroid time of the induced signal and derive analytic expressions for the three components contributing to the time resolution, namely charge deposit fluctuations, noise and fluctuations of the signal shape due to weighting field variations. Then we derive expressions for the time resolution using leading edge discrimination of the signal for various electronics shaping times. Time resolution of silicon detectors with internal gain is discussed as well.
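The centroid-time discriminant mentioned above is a first moment of the induced signal (a minimal sketch with an assumed Gaussian pulse; real detector signals are asymmetric and noisy):

```python
import numpy as np

def centroid_time(t, s):
    """Signal centroid time: first moment of the induced signal,
    t_c = sum(t_i * s_i) / sum(s_i)."""
    t = np.asarray(t, dtype=float)
    s = np.asarray(s, dtype=float)
    return (t * s).sum() / s.sum()

# A symmetric pulse centred at t = 5 ns has centroid time 5 ns:
t = np.linspace(0, 10, 101)
s = np.exp(-0.5 * ((t - 5.0) / 1.0) ** 2)
print(centroid_time(t, s))   # ≈ 5.0
```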
Optimization of super-resolution processing using incomplete image sets in PET imaging.
Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R
2008-12-01
Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used defines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (two sides and diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N x N matrix of low-resolution images, ISR-1 would be generated using images from the two sides of the N x N matrix, while ISR-2 would be generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve similar performance in contrast and signal-to-noise ratio (SNR) to the SR image generated from a complete set of low-resolution images (CSR) using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 x 2) or 16 (4 x 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all as well as two different subsets of the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purposes, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and the SNR difference between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively. Line profiles of
Operator product expansion and its thermal average
Energy Technology Data Exchange (ETDEWEB)
Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)
1998-05-01
QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules, when the temperature is not too low. (orig.) 7 refs.
Fluctuations of wavefunctions about their classical average
International Nuclear Information System (INIS)
Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H
2003-01-01
Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.
Phase-averaged transport for quasiperiodic Hamiltonians
Bellissard, J; Schulz-Baldes, H
2002-01-01
For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.
Baseline-dependent averaging in radio interferometry
Wijnholds, S. J.; Willis, A. G.; Salvini, S.
2018-05-01
This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
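The "well-defined decorrelation loss" mentioned above can be illustrated with the classic sinc-type time-smearing approximation. This is a sketch under stated assumptions (a 21 cm observing wavelength and a worst-case east-west fringe rate are illustrative choices of ours), not the paper's exact closed-form expressions:

```python
import math

def decorrelation_loss(baseline_m, avg_time_s, wavelength_m=0.21,
                       omega_e=7.292e-5):
    """Fractional amplitude retained after time-averaging visibilities.

    Classic smearing approximation: the fringe phase drifts roughly
    linearly during the averaging interval, so the averaged visibility
    is attenuated by sinc(dphi/2), where dphi is the phase swept in one
    averaging interval.  Illustrative only.
    """
    # worst-case fringe rate for an east-west baseline (rad/s)
    fringe_rate = 2 * math.pi * (baseline_m / wavelength_m) * omega_e
    dphi = fringe_rate * avg_time_s
    x = dphi / 2
    return math.sin(x) / x if x else 1.0

# Longer baselines decorrelate faster, which is exactly why averaging
# intervals can be made baseline-dependent:
short = decorrelation_loss(baseline_m=1e3, avg_time_s=1.0)
long_ = decorrelation_loss(baseline_m=100e3, avg_time_s=1.0)
```

Short baselines tolerate long averaging intervals with negligible loss, so the bulk of the data volume (dominated by the many short baselines of a centrally condensed array) can be compressed aggressively.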
Multistage parallel-serial time averaging filters
International Nuclear Information System (INIS)
Theodosiou, G.E.
1980-01-01
Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered as a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)
Time-averaged MSD of Brownian motion
International Nuclear Information System (INIS)
Andreanov, Alexei; Grebenkov, Denis S
2012-01-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.
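The TAMSD estimator analyzed in this abstract can be written down in a few lines. A minimal sketch (variable names are ours); for Brownian motion with diffusion coefficient D in one dimension, the TAMSD at lag n fluctuates around 2·D·n·dt:

```python
import random

def tamsd(traj, lag):
    """Time-averaged mean-square displacement at a given lag:
    TAMSD(lag) = (1/(N-lag)) * sum_i (x[i+lag] - x[i])^2,
    the standard single-trajectory estimator."""
    n = len(traj)
    if not 0 < lag < n:
        raise ValueError("lag must satisfy 0 < lag < len(traj)")
    return sum((traj[i + lag] - traj[i]) ** 2 for i in range(n - lag)) / (n - lag)

# Brownian-motion trajectory with D = 0.5 and unit time step,
# so E[TAMSD(lag=1)] = 2*D = 1.0
random.seed(0)
D = 0.5
steps = [random.gauss(0.0, (2 * D) ** 0.5) for _ in range(10000)]
x = [0.0]
for s in steps:
    x.append(x[-1] + s)
estimate = tamsd(x, lag=1)
```

The paper's point is precisely that `estimate` is itself a random variable whose distribution (not just its mean) can be characterized exactly for Brownian motion.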
Time-dependent angularly averaged inverse transport
International Nuclear Information System (INIS)
Bal, Guillaume; Jollivet, Alexandre
2009-01-01
This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain
Independence, Odd Girth, and Average Degree
DEFF Research Database (Denmark)
Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter
2011-01-01
We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.
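The bound (4n−m−1)/7 is easy to check on a small example. The brute-force search below is our own illustration (feasible only for tiny graphs); on the 5-cycle, which is connected and triangle-free, the bound is attained exactly:

```python
from itertools import combinations

def independence_number(n, edges):
    """Brute-force maximum independent set size (tiny graphs only)."""
    edge_set = {frozenset(e) for e in edges}
    for k in range(n, 0, -1):
        for subset in combinations(range(n), k):
            if all(frozenset(p) not in edge_set
                   for p in combinations(subset, 2)):
                return k
    return 0

def independence_lower_bound(n, m):
    # Bound from the abstract: alpha(G) >= (4n - m - 1) / 7
    # for connected triangle-free G of order n and size m.
    return (4 * n - m - 1) / 7

# 5-cycle: connected, triangle-free, n = m = 5
c5 = [(i, (i + 1) % 5) for i in range(5)]
alpha = independence_number(5, c5)       # alpha(C5) = 2
bound = independence_lower_bound(5, 5)   # (20 - 5 - 1)/7 = 2.0
```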
Bootstrapping Density-Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...
Average Nuclear properties based on statistical model
International Nuclear Information System (INIS)
El-Jaick, L.J.
1974-01-01
The rough properties of nuclei were investigated by the statistical model, in systems with the same and different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semi-empirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good adjustment of the beta stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.) [pt
De Luca, G.; Magnus, J.R.
2011-01-01
In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.
Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.
Dirks, Jean; And Others
1983-01-01
Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
Averaged null energy condition from causality
Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein
2017-07-01
Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_uu, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_uuu···u ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.
Beta-energy averaging and beta spectra
International Nuclear Information System (INIS)
Stamatelatos, M.G.; England, T.R.
1976-07-01
A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality
Asymptotic Time Averages and Frequency Distributions
Directory of Open Access Journals (Sweden)
Muhammad El-Taha
2016-01-01
Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t≥0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S=(−∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.
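The equality between a time average and the expectation under the empirical frequency distribution, which the conditions above extend to long-run limits, holds as an exact identity for any finite discrete-time path. An illustrative sketch (toy data and names are ours):

```python
from collections import Counter

def time_average(f, path):
    """Time average of f along a finite discrete-time sample path."""
    return sum(f(x) for x in path) / len(path)

def frequency_expectation(f, path):
    """Expectation of f under the path's empirical frequency distribution."""
    freq = Counter(path)
    n = len(path)
    return sum(f(state) * count / n for state, count in freq.items())

path = [0, 1, 1, 2, 0, 2, 2, 1, 0, 1]   # a toy sample path on {0, 1, 2}
f = lambda s: s * s

ta = time_average(f, path)              # both equal 1.6 for this path
fe = frequency_expectation(f, path)
```

The substance of the paper is the infinite-horizon case: when does this identity survive the limit as the horizon grows, without assuming uniform integrability.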
Chaotic Universe, Friedmannian on the average 2
Energy Technology Data Exchange (ETDEWEB)
Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij
1980-11-01
The cosmological solutions are found for the equations for correlators describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor lies higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.
Averaging in the presence of sliding errors
International Nuclear Information System (INIS)
Yost, G.P.
1991-08-01
In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
Digital signal processing the Tevatron BPM signals
International Nuclear Information System (INIS)
Cancelo, G.; James, E.; Wolbers, S.
2005-01-01
The Beam Position Monitor (TeV BPM) readout system at Fermilab's Tevatron has been updated and is currently being commissioned. The new BPMs use new analog and digital hardware to achieve better beam position measurement resolution. The new system reads signals from both ends of the existing directional stripline pickups to provide simultaneous proton and antiproton measurements. The signals provided by the two ends of the BPM pickups are processed by analog band-pass filters and sampled by 14-bit ADCs at 74.3 MHz. A crucial part of this work has been the design of digital filters that process the signal. This paper describes the digital processing and estimation techniques used to optimize the beam position measurement. The BPM electronics must operate in narrow-band and wide-band modes to enable measurements of closed-orbit and turn-by-turn positions. The filtering and timing conditions of the signals are tuned accordingly for the operational modes. The analysis and the optimized result for each mode are presented
High average power linear induction accelerator development
International Nuclear Information System (INIS)
Bayless, J.R.; Adler, R.J.
1987-07-01
There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs
FEL system with homogeneous average output
Energy Technology Data Exchange (ETDEWEB)
Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph
2018-01-16
A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest. Accelerating the particles to full energy to result in distinct and independently controlled, by the choice of phase offset, phase-energy correlations or chirps on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M_56, which are selected to compress all three bunch trains at the FEL with higher order terms managed.
Quetelet, the average man and medical knowledge.
Caponi, Sandra
2013-01-01
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.
Asymmetric network connectivity using weighted harmonic averages
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the Netflix prize, and find a significant improvement using our method over a baseline.
Angle-averaged Compton cross sections
International Nuclear Information System (INIS)
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m_0c^2; α_s = scattered photon energy in units of m_0c^2; β = initial electron velocity in units of c; phi = angle between photon direction and electron direction in the laboratory frame (LF); theta = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and tau = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over phi, theta, and tau. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
Average Gait Differential Image Based Human Recognition
Directory of Open Access Journals (Sweden)
Jinyan Chen
2014-01-01
The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by the accumulation of the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it can preserve both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with less memory consumption in gait-based recognition.
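The AGDI accumulation described above is straightforward to sketch on toy binary silhouettes (pure Python, with illustrative data and variable names of ours):

```python
def average_gait_differential_image(frames):
    """AGDI: accumulate absolute silhouette differences between adjacent
    frames and average them.  `frames` is a list of equally sized 2-D
    grids (lists of rows) of 0/1 silhouette pixels."""
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for r in range(h):
            for c in range(w):
                acc[r][c] += abs(cur[r][c] - prev[r][c])
    n = len(frames) - 1  # number of adjacent-frame differences
    return [[v / n for v in row] for row in acc]

# Three tiny 2x2 "silhouettes"; pixels that change often score high.
frames = [
    [[0, 1], [0, 0]],
    [[0, 0], [1, 0]],
    [[0, 1], [1, 0]],
]
agdi = average_gait_differential_image(frames)
```

Static pixels average to 0 while frequently changing pixels approach 1, which is how the AGDI encodes the kinetics of the walk alongside the static silhouette.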
Reynolds averaged simulation of unsteady separated flow
International Nuclear Information System (INIS)
Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.
2003-01-01
The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flow around square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better concurrence with available experimental data than has been achieved with steady computation
The balanced survivor average causal effect.
Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken
2013-05-07
Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
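The proposed estimator, comparing mean outcomes between equivalent fractions of the longest-surviving patients in each arm, can be sketched as follows. This is a naive illustration with hypothetical data of ours, ignoring ties, censoring, and the bias corrections discussed in the paper:

```python
def balanced_sace_estimate(treated, control, fraction=0.5):
    """Naive sketch of the balanced-SACE estimator.

    `treated` and `control` are lists of (survival_time, outcome)
    pairs.  Compare the mean longitudinal outcome of the
    longest-surviving `fraction` of each arm.
    """
    def top_mean(arm):
        k = max(1, int(len(arm) * fraction))
        top = sorted(arm, key=lambda p: p[0], reverse=True)[:k]
        return sum(outcome for _, outcome in top) / k
    return top_mean(treated) - top_mean(control)

# Hypothetical toy data: (survival_time, outcome)
treated = [(10, 5), (8, 4), (2, 1), (1, 0)]
control = [(9, 3), (7, 2), (2, 1), (1, 0)]
effect = balanced_sace_estimate(treated, control, fraction=0.5)
```

Unlike monotonicity-based SACE estimators, no assumption is needed here that treatment never shortens survival; the balancing is done directly on the ranked survival times.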
International Nuclear Information System (INIS)
Wortis, R.; Song Yun; Atkinson, W.A.
2008-01-01
With the goal of measuring localization in disordered interacting systems, we examine the finite-size scaling of the geometrically averaged density of states calculated from the local Green's function with finite energy resolution. Our results show that, unlike in a simple energy binning procedure, there is no limit in which the finite energy resolution is irrelevant
van Staaden, Moira J; Searcy, William A; Hanlon, Roger T
2011-01-01
From psychological and sociological standpoints, aggression is regarded as intentional behavior aimed at inflicting pain and manifested by hostility and attacking behaviors. In contrast, biologists define aggression as behavior associated with attack or escalation toward attack, omitting any stipulation about intentions and goals. Certain animal signals are strongly associated with escalation toward attack and have the same function as physical attack in intimidating opponents and winning contests, and ethologists therefore consider them an integral part of aggressive behavior. Aggressive signals have been molded by evolution to make them ever more effective in mediating interactions between the contestants. Early theoretical analyses of aggressive signaling suggested that signals could never be honest about fighting ability or aggressive intentions because weak individuals would exaggerate such signals whenever they were effective in influencing the behavior of opponents. More recent game theory models, however, demonstrate that given the right costs and constraints, aggressive signals are both reliable about strength and intentions and effective in influencing contest outcomes. Here, we review the role of signaling in lieu of physical violence, considering threat displays from an ethological perspective as an adaptive outcome of evolutionary selection pressures. Fighting prowess is conveyed by performance signals whose production is constrained by physical ability and thus limited to just some individuals, whereas aggressive intent is encoded in strategic signals that all signalers are able to produce. We illustrate recent advances in the study of aggressive signaling with case studies of charismatic taxa that employ a range of sensory modalities, viz. visual and chemical signaling in cephalopod behavior, and indicators of aggressive intent in the territorial calls of songbirds. Copyright © 2011 Elsevier Inc. All rights reserved.
Fast digitizing and digital signal processing of detector signals
International Nuclear Information System (INIS)
Hannaske, Roland
2008-01-01
A fast-digitizer data acquisition system recently installed at the neutron time-of-flight experiment nELBE, which is located at the superconducting electron accelerator ELBE of Forschungszentrum Dresden-Rossendorf, is tested with two different detector types. Preamplifier signals from a high-purity germanium detector are digitized, stored and finally processed. For a precise determination of the energy of the detected radiation, the moving-window deconvolution algorithm is used to compensate the ballistic deficit, and different shaping algorithms are applied. The energy resolution is determined in an experiment with γ-rays from a 22Na source and is compared to the energy resolution achieved with analogously processed signals. On the other hand, signals from the photomultipliers of barium fluoride and plastic scintillation detectors are digitized. These signals have rise times of only a few nanoseconds. The moment of interaction of the radiation with the detector is determined by methods of digital signal processing. To this end, different timing algorithms are implemented and tested with data from an experiment at nELBE. The time resolutions achieved with these algorithms are compared to each other as well as to reference values from analog signal processing. In addition to these experiments, some properties of the digitizing hardware are measured and a program for the analysis of stored, digitized data is developed. The analysis of the signals shows that the energy resolution achieved with the 10-bit digitizer system used here is not competitive with a 14-bit peak-sensing ADC, although the ballistic deficit can be fully corrected. However, digital methods give better results in sub-ns timing than analog signal processing. (orig.)
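The ballistic-deficit correction described in this abstract starts from deconvolving the preamplifier's exponential decay. The following is a stdlib-only sketch of that first step, a pole-zero deconvolution; the single-exponential pulse model, decay constant, and function name are illustrative assumptions, not the nELBE implementation (the full moving-window deconvolution additionally integrates over a shaping window):

```python
import math

def deconvolve_exp(x, tau):
    """Pole-zero deconvolution: turns an exponential preamplifier decay
    exp(-n/tau) into a single-sample impulse, so the deposited charge can
    be read off without loss from the decaying tail."""
    a = math.exp(-1.0 / tau)
    return [x[0]] + [x[n] - a * x[n - 1] for n in range(1, len(x))]

tau = 50.0  # decay constant in samples (illustrative)
# preamplifier-style signal: a step of amplitude 3.0 decaying with tau
pulse = [3.0 * math.exp(-n / tau) for n in range(200)]
d = deconvolve_exp(pulse, tau)  # impulse of height 3.0, then near-zeros
```

With a matched `tau`, the tail cancels exactly and only the deposited amplitude survives in the first sample.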
Industrial Applications of High Average Power FELS
Shinn, Michelle D
2005-01-01
The use of lasers for material processing continues to expand, and annual sales of such lasers exceed $1B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, and such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scalable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...
Calculating Free Energies Using Average Force
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
Geographic Gossip: Efficient Averaging for Sensor Networks
Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.
Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log n}} \log \epsilon^{-1})$ radio transmissions, which yields a $\sqrt{\frac{n}{\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
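The "standard gossip" baseline that this abstract improves upon can be simulated in a few lines: at each step a random node averages its value with a random neighbor, and the global average is conserved while the spread shrinks. A minimal sketch on the slow-mixing ring topology discussed above (node count, round count, and function name are illustrative assumptions, and the geographic-routing variant itself is not shown):

```python
import random

def gossip_average(values, neighbors, rounds, seed=0):
    """Standard pairwise gossip: each round, a random node and a random
    neighbor replace both of their values with the pair's mean.
    The global sum (hence the average) is conserved at every step."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i = rng.randrange(len(x))
        j = rng.choice(neighbors[i])
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

# ring of 20 nodes: each node talks only to its two ring neighbors
n = 20
ring = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
values = [float(i) for i in range(n)]  # true average = 9.5
result = gossip_average(values, ring, rounds=5000)
```

The slow convergence on the ring (many rounds for a modest spread reduction) is exactly the inefficiency the geographic scheme targets.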
High-average-power solid state lasers
International Nuclear Information System (INIS)
Summers, M.A.
1989-01-01
In 1987, a broad-based, aggressive R&D program was initiated, aimed at developing the technologies necessary to make possible the use of solid state lasers that are capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and applications of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high-quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wave front aberrations in zig-zag slabs, understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs, and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs
The concept of average LET values determination
International Nuclear Information System (INIS)
Makarewicz, M.
1981-01-01
The concept of average LET (linear energy transfer) values determination, i.e. ordinary moments of LET in absorbed dose distribution vs. LET of ionizing radiation of any kind and any spectrum (even the unknown ones) has been presented. The method is based on measurement of ionization current with several values of voltage supplying an ionization chamber operating in conditions of columnar recombination of ions or ion recombination in clusters while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one can obtain coefficients of the expression which can be interpreted as values of LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on LET of radiation it is not necessary to know the dose distribution but only a number of parameters of the distribution, i.e. the LET moments. (author)
On spectral averages in nuclear spectroscopy
International Nuclear Information System (INIS)
Verbaarschot, J.J.M.
1982-01-01
In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method for transforming fixed angular momentum projection traces into fixed angular momentum traces for the configuration space is developed. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)
The consequences of time averaging for measuring temporal species turnover in the fossil record
Tomašových, Adam; Kidwell, Susan
2010-05-01
Modeling time averaging effects with simple simulations allows us to evaluate the magnitude of change in temporal species turnover that is expected to occur in long (paleoecological) time series with fossil assemblages. Distinguishing different modes of metacommunity dynamics (such as neutral, density-dependent, or trade-off dynamics) with time-averaged fossil assemblages requires scaling up time-averaging effects, because the decrease in temporal resolution and the decrease in temporal inter-sample separation (i.e., the two main effects of time averaging) substantially increase community stability relative to assemblages without or with weak time averaging. Large changes in temporal scale that cover centuries to millennia can lead to unprecedented effects on the temporal rate of change in species composition. Temporal variation in species composition monotonically decreases with increasing duration of time averaging in simulated fossil assemblages. Time averaging is also associated with a reduction of species dominance owing to temporal switching in the identity of dominant species. High degrees of time averaging can cause the community parameters of local fossil assemblages to converge to the parameters of the metacommunity rather than to those of individual local non-averaged communities. We find that the low variation in species composition observed among mollusk and ostracod subfossil assemblages can be explained by time averaging alone; low temporal resolution and reduced temporal separation among assemblages in time series can thus explain a substantial part of the reduced variation in species composition relative to unscaled predictions of the neutral model (i.e., species do not differ in birth, death, and immigration rates on a per capita basis). The structure of time-averaged assemblages can thus provide important insights into processes that act over larger temporal scales, such as evolution of niches and dispersal, range-limit dynamics, taxon cycles, and
Clustering method to process signals from a CdZnTe detector
International Nuclear Information System (INIS)
Zhang, Lan; Takahashi, Hiroyuki; Fukuda, Daiji; Nakazawa, Masaharu
2001-01-01
The poor mobility of holes in a compound semiconductor detector results in imperfect collection of the primary charge deposited in the detector. Furthermore, the fluctuation of the charge loss efficiency due to changes in the hole collection path length seriously degrades the energy resolution of the detector. Since the charge collection efficiency varies with the signal waveform, we can expect an improvement of the energy resolution through a proper waveform signal processing method. We developed a new digital signal processing technique, a clustering method, which derives typical patterns containing information on the real situation inside a detector from measured signals. The obtained typical patterns for the detector are then used for the pattern matching method. Measured signals are classified by analyzing the practical waveform variation due to charge trapping, the electric field, crystal defects, etc. Signals with similar shapes are placed in the same cluster. For each cluster we calculate an average waveform as a reference pattern. Using these reference patterns obtained from all the clusters, we can classify other measured signal waveforms from the same detector. The signals are then independently processed according to the assigned category and form corresponding spectra. Finally, these spectra are merged into one spectrum by multiplying normalization coefficients. The effectiveness of this method was verified with a 2 mm thick CdZnTe detector and a 137Cs gamma-ray source. The obtained energy resolution was improved to about 8 keV (FWHM). Because the clustering method is only related to the measured waveforms, it can be applied to any type and size of detector and is compatible with any type of filtering method. (author)
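The cluster-then-match idea in this abstract can be sketched compactly: form a reference pattern as the samplewise average of each waveform cluster, then assign new pulses to the reference with the highest normalized cross-correlation. The toy pulse model, rise-time values, and function names below are illustrative assumptions, not the authors' CdZnTe processing chain:

```python
import math

def normalized(v):
    """Zero-mean, unit-norm copy of a waveform."""
    mean = sum(v) / len(v)
    c = [s - mean for s in v]
    norm = math.sqrt(sum(s * s for s in c)) or 1.0
    return [s / norm for s in c]

def cluster_average(group):
    """Reference pattern: samplewise average of a cluster's waveforms."""
    return [sum(col) / len(col) for col in zip(*group)]

def classify(waveform, templates):
    """Assign a waveform to the reference pattern with the highest
    normalized cross-correlation (the pattern-matching step)."""
    p = normalized(waveform)
    scores = [sum(a * b for a, b in zip(p, normalized(t)))
              for t in templates]
    return max(range(len(scores)), key=scores.__getitem__)

def toy_pulse(rise, n=64):
    # toy detector pulse: exponential rise toward a flat top
    return [1.0 - math.exp(-t / rise) for t in range(n)]

# two shape clusters, e.g. fast vs slow charge collection
fast = [toy_pulse(3.0), toy_pulse(3.5), toy_pulse(2.8)]
slow = [toy_pulse(12.0), toy_pulse(11.0), toy_pulse(13.0)]
templates = [cluster_average(fast), cluster_average(slow)]
```

Each classified group would then be histogrammed into its own spectrum before the normalization-and-merge step the abstract describes.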
Track resolution in the RPC chamber
International Nuclear Information System (INIS)
Cardarelli, R.; Aielli, G.; Camarri, P.; Di Ciaccio, A.; Liberti, B.; Santonico, R.
2007-01-01
A new, very promising readout, in addition to the well-known charge centroid method, is proposed for improving the space resolution of the Resistive Plate Chamber (RPC) into the sub-millimeter range. The method is based on the readout of the signal propagating in the graphite electrode, which was simulated using a distributed resistance-capacitance model in SPICE. The results show that a good space-time correlation in the diffusion process is only possible with suitable signal processing. Three RPC detectors with the new layout and dedicated electronics were tested. The measured space resolution was on the order of a few hundred μm.
PEAK-TO-AVERAGE POWER RATIO REDUCTION USING CODING AND HYBRID TECHNIQUES FOR OFDM SYSTEM
Bahubali K. Shiragapur; Uday Wali
2016-01-01
In this article, error-correction coding techniques are investigated as a way to reduce the undesirable Peak-to-Average Power Ratio (PAPR). The Golay code (24, 12), Reed-Muller code (16, 11), Hamming code (7, 4) and a hybrid technique (a combination of signal scrambling and signal distortion) proposed by us are used as coding techniques; the simulation results show that the hybrid technique reduces PAPR significantly as compared to Conve...
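PAPR itself is a simple statistic of the time-domain OFDM symbol: peak instantaneous power over average power. A stdlib-only sketch of how it is measured (the BPSK mapping, 64 subcarriers, and hand-rolled inverse DFT are illustrative assumptions; real systems use an IFFT, and the coding/scrambling reduction techniques themselves are not shown):

```python
import cmath
import math

def papr_db(samples):
    """PAPR = peak instantaneous power / average power, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    return 10.0 * math.log10(max(powers) / (sum(powers) / len(powers)))

def ofdm_symbol(bits, n=64):
    """Toy OFDM time-domain symbol: BPSK-map the bits onto n subcarriers,
    then take a hand-rolled inverse DFT."""
    sym = [1.0 if b else -1.0 for b in bits]
    return [sum(sym[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

# worst case: all subcarriers in phase add to a single spike,
# giving PAPR = 10*log10(n), about 18 dB for n = 64
worst = papr_db(ofdm_symbol([1] * 64))
```

Coding constrains which bit patterns are transmitted, excluding the symbol combinations that produce such in-phase spikes.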
Signal-averaged P wave duration and the long-term risk of permanent atrial fibrillation
DEFF Research Database (Denmark)
Dixen, Ulrik; Larsen, Mette Vang; Ravn, Lasse Steen
2008-01-01
of permanent AF. The risk of permanent AF after 3 years follow-up was 0.72 with an SAPWD equal to 180 ms versus 0.39 with a normal SAPWD (130 ms). We found no prognostic effect of age, gender, dilated left atrium, long duration of AF history, or long duration of the most recent episode of AF. Co...
An Investigation of Vibration Signal Averaging of Individual Components in an Epicyclic Gearbox
1991-06-01
kHz. These were mounted on a set of small steel blocks bonded to the gearbox casing at various positions. A six channel PCB charge amplifying power... callable library of routines, ATLAB, available with the data acquisition card was used to acquire the digitised data into Fortran 2-byte integer arrays. A
Aarthi, G.; Ramachandra Reddy, G.
2018-03-01
In our paper, the impact of adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE, we incorporate aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and could achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with coherent OWC modulation a favorable candidate for improving the ASE of the FSO communication system.
On the resolution of ECG acquisition systems for the reliable analysis of the P-wave
International Nuclear Information System (INIS)
Censi, Federica; Calcagnini, Giovanni; Mattei, Eugenio; Triventi, Michele; Bartolini, Pietro; Corazza, Ivan; Boriani, Giuseppe
2012-01-01
The analysis of the P-wave on the surface ECG is widely used to assess the risk of atrial arrhythmias. In order to provide reliable results, the automatic analysis of the P-wave must be precise and reliable and must take into account technical aspects, one of which is the resolution of the acquisition system. The aim of this note is to investigate the effects of the amplitude resolution of ECG acquisition systems on the P-wave analysis. Starting from ECG recorded by an acquisition system with a least significant bit (LSB) of 31 nV (24 bit on an input range of 524 mVpp), we reproduced an ECG signal as acquired by systems with lower resolution (16, 15, 14, 13 and 12 bit). We found that, when the LSB is of the order of 128 µV (12 bit), a single P-wave is not recognizable on the ECG. However, when averaging is applied, a P-wave template can be extracted, apparently suitable for the P-wave analysis. Results obtained in terms of P-wave duration and morphology revealed that the analysis of ECG at the lowest resolutions (from 12 to 14 bit, LSB higher than 30 µV) could lead to misleading results. However, the resolution used nowadays in modern electrocardiographs (15 and 16 bit, LSB <10 µV) is sufficient for the reliable analysis of the P-wave. (note)
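The resolution figures in this note follow directly from the ADC input range and bit depth (LSB = range / 2^bits). A small sketch reproducing them; the `requantize` helper for mimicking a lower-resolution acquisition is an illustrative assumption, not the authors' procedure:

```python
def lsb_volts(input_range_vpp, bits):
    """Least-significant-bit size of an ideal ADC: full range / 2^bits."""
    return input_range_vpp / (2 ** bits)

def requantize(samples, lsb):
    """Mimic a lower-resolution acquisition by snapping each sample
    (in volts) to the coarser quantization grid."""
    return [round(s / lsb) * lsb for s in samples]

lsb_24 = lsb_volts(0.524, 24)  # ~31 nV, the note's 24-bit system
lsb_12 = lsb_volts(0.524, 12)  # ~128 uV, where single P-waves vanish
```

Requantizing a 24-bit recording at each candidate LSB is a direct way to reproduce the note's comparison across 12- to 16-bit systems.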
Signal transduction by growth factor receptors: signaling in an instant
DEFF Research Database (Denmark)
Dengjel, Joern; Akimov, Vyacheslav; Blagoev, Blagoy
2007-01-01
Phosphorylation-based signaling events happening within the first minute of receptor stimulation have so far only been analyzed by classical cell biological approaches like live-cell microscopy. The development of a quench flow system with a time resolution of one second coupled to a read...
Multibeam swath bathymetry signal processing techniques
Digital Repository Service at National Institute of Oceanography (India)
Ranade, G.; Sudhakar, T.
Mathematical advances and the advances in the real time signal processing techniques in the recent times, have considerably improved the state of art in the bathymetry systems. These improvements have helped in developing high resolution swath...
Direct measurement of fast transients by using boot-strapped waveform averaging
Olsson, Mattias; Edman, Fredrik; Karki, Khadga Jung
2018-03-01
An approximation to coherent sampling, also known as boot-strapped waveform averaging, is presented. The method uses digital cavities to determine the condition for coherent sampling. It can be used to increase the effective sampling rate of a repetitive signal and the signal-to-noise ratio simultaneously. The method is demonstrated by using it to directly measure the fluorescence lifetime of Rhodamine 6G by digitizing the signal from a fast avalanche photodiode. The obtained lifetime of 4.0 ns is in agreement with known values.
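The SNR gain underlying this abstract is the standard averaging result: summing N repetitions of a repetitive signal grows the coherent part linearly while uncorrelated noise grows only as sqrt(N). A generic stdlib sketch of plain sweep averaging (the decay model, noise level, and sweep count are assumptions for illustration; the paper's boot-strapped method additionally realigns sweeps via digital cavities to raise the effective sampling rate):

```python
import math
import random

def averaged_waveform(acquire, sweeps):
    """Plain sweep averaging of a repetitive signal: the noise on the
    mean falls as 1/sqrt(sweeps)."""
    acc = None
    for _ in range(sweeps):
        sweep = acquire()
        if acc is None:
            acc = [0.0] * len(sweep)
        for i, s in enumerate(sweep):
            acc[i] += s
    return [a / sweeps for a in acc]

rng = random.Random(1)
true = [math.exp(-t / 10.0) for t in range(50)]  # e.g. a ns-scale decay
acquire = lambda: [s + rng.gauss(0.0, 0.5) for s in true]

single = acquire()
avg = averaged_waveform(acquire, 400)

def rms_error(v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v, true)) / len(v))
```

With 400 sweeps, the residual noise should drop by roughly a factor of 20 relative to a single acquisition.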
Wielicki, Bruce A. (Principal Investigator)
The Monthly TOA/Surface Averages (SRBAVG) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SRBAVG is also produced for combinations of scanner instruments. The monthly average regional flux is estimated using diurnal models and the 1-degree regional fluxes at the hour of observation from the CERES SFC product. A second set of monthly average fluxes are estimated using concurrent diurnal information from geostationary satellites. These fluxes are given for both clear-sky and total-sky scenes and are spatially averaged from 1-degree regions to 1-degree zonal averages and a global average. For each region, the SRBAVG also contains hourly average fluxes for the month and an overall monthly average. The cloud properties from SFC are column averaged and are included on the SRBAVG. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-02-01; Stop_Date=2000-03-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].
Wielicki, Bruce A. (Principal Investigator)
The Monthly TOA/Surface Averages (SRBAVG) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SRBAVG is also produced for combinations of scanner instruments. The monthly average regional flux is estimated using diurnal models and the 1-degree regional fluxes at the hour of observation from the CERES SFC product. A second set of monthly average fluxes are estimated using concurrent diurnal information from geostationary satellites. These fluxes are given for both clear-sky and total-sky scenes and are spatially averaged from 1-degree regions to 1-degree zonal averages and a global average. For each region, the SRBAVG also contains hourly average fluxes for the month and an overall monthly average. The cloud properties from SFC are column averaged and are included on the SRBAVG. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-02-01; Stop_Date=2004-05-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].
Wielicki, Bruce A. (Principal Investigator)
The Monthly TOA/Surface Averages (SRBAVG) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SRBAVG is also produced for combinations of scanner instruments. The monthly average regional flux is estimated using diurnal models and the 1-degree regional fluxes at the hour of observation from the CERES SFC product. A second set of monthly average fluxes are estimated using concurrent diurnal information from geostationary satellites. These fluxes are given for both clear-sky and total-sky scenes and are spatially averaged from 1-degree regions to 1-degree zonal averages and a global average. For each region, the SRBAVG also contains hourly average fluxes for the month and an overall monthly average. The cloud properties from SFC are column averaged and are included on the SRBAVG. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-02-01; Stop_Date=2003-02-28] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].
DEFF Research Database (Denmark)
Nasrollahi, Kamal; Moeslund, Thomas B.
2014-01-01
Super-resolution, the process of obtaining one or more high-resolution images from one or more low-resolution observations, has been a very attractive research topic over the last two decades. It has found practical applications in many real world problems in different fields, from satellite...
To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space
International Nuclear Information System (INIS)
Khrennikov, Andrei
2007-01-01
We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'
Determining average path length and average trapping time on generalized dual dendrimer
Li, Ling; Guan, Jihong
2015-03-01
Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on the trapping efficiency.
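The APL quantity used above is standard: the mean shortest-path distance over all pairs of nodes, computable by a breadth-first search from every node. A sketch on small illustrative graphs (a 4-cycle and a triangle, not the dendrimer/Husimi construction itself):

```python
from collections import deque

def average_path_length(adj):
    """APL = mean shortest-path distance over all ordered node pairs,
    computed by one BFS per source node (unweighted graph)."""
    n = len(adj)
    total = 0
    for src in range(n):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

# 4-cycle: from each node the distances are 1, 1, 2, so APL = 4/3
cycle4 = [[1, 3], [0, 2], [1, 3], [2, 0]]
# complete graph K3: every pair at distance 1, so APL = 1
triangle = [[1, 2], [0, 2], [0, 1]]
```

For analytic families like the dual dendrimer, the paper derives APL in closed form; BFS is the brute-force check.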
Directory of Open Access Journals (Sweden)
G. H. de Rooij
2009-07-01
Current theories for water flow in porous media are valid for scales much smaller than those at which problems of public interest manifest themselves. This provides a drive for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions on the flow conditions, which limits the practical applicability. Here, the derivation of a closed expression for the effective hydraulic conductivity is forfeited to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to the general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.
A Framework for Control System Design Subject to Average Data-Rate Constraints
DEFF Research Database (Denmark)
Silva, Eduardo; Derpich, Milan; Østergaard, Jan
2011-01-01
This paper studies discrete-time control systems subject to average data-rate limits. We focus on a situation where a noisy linear system has been designed assuming transparent feedback and, due to implementation constraints, a source-coding scheme (with unity signal transfer function) has to be ...
Optimized high energy resolution in γ-ray spectroscopy with AGATA triple cluster detectors
Energy Technology Data Exchange (ETDEWEB)
Wiens, Andreas
2011-06-20
The AGATA demonstrator consists of five AGATA Triple Cluster (ATC) detectors. Each triple cluster detector contains three asymmetric, 36-fold segmented, encapsulated high-purity germanium detectors. The purpose of the demonstrator is to show the feasibility of position-dependent γ-ray detection by means of γ-ray tracking, which is based on pulse shape analysis. This thesis describes the optimization procedure for the first triple cluster detectors. Here, a high signal quality is mandatory for the energy resolution and the pulse shape analysis. The signal quality was optimized and the energy resolution was improved through modification of the electronic properties, in particular the grounding scheme of the detector. The first part of the work was the successful installation of the first four triple cluster detectors at INFN (National Institute of Nuclear Physics) in Legnaro, Italy, in the demonstrator frame prior to the AGATA commissioning experiments and the first physics campaign. The four ATC detectors combine 444 high-resolution spectroscopy channels. This number, combined with a high density, was achieved for the first time for in-beam γ-ray spectroscopy experiments. The high quality of the ATC detectors is characterized by the average energy resolutions achieved for the segments of each crystal, in the range of 1.943 to 2.131 keV at a γ-ray energy of 1.33 MeV for the first 12 crystals. The crosstalk level between individual detectors in the ATC is negligible. The crosstalk within one crystal is at a level of 10^-3. In the second part of the work, new methods for enhanced energy resolution in highly segmented and position-sensitive detectors were developed. The signal-to-noise ratio was improved through averaging of the core and the segment signals, which improved the energy resolution at γ-energies of 60 keV by 21%, to a FWHM of 870 eV. In combination with crosstalk correction, a clearly improved energy resolution was
20 CFR 404.221 - Computing your average monthly wage.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...
Average and local structure of α-CuI by configurational averaging
International Nuclear Information System (INIS)
Mohn, Chris E; Stoelen, Svein
2007-01-01
Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom-density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond-angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 A in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 A. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion-pathways is expected due to the large variation in the local motifs
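The partial pair radial distribution functions discussed above can be estimated from atomic configurations by histogramming minimum-image pair distances. The following is a minimal sketch (not the authors' code; the function name, cubic-box assumption, and normalization convention are my own):

```python
import numpy as np

def pair_rdf(positions, box, r_max, n_bins=100):
    """Histogram of pair distances -> radial distribution function g(r).

    positions: (N, 3) array of atomic coordinates (e.g. Cu in alpha-CuI)
    box: cubic box edge length, used for periodic boundary conditions
    """
    n = len(positions)
    # minimum-image pair separation vectors
    diff = positions[None, :, :] - positions[:, None, :]
    diff -= box * np.round(diff / box)
    dist = np.linalg.norm(diff, axis=-1)
    dist = dist[np.triu_indices(n, k=1)]           # unique pairs only
    hist, edges = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[:-1] + edges[1:])             # bin centers
    shell_vol = 4.0 * np.pi * r**2 * (edges[1] - edges[0])
    rho = n / box**3                               # number density
    # normalize by the ideal-gas pair count per spherical shell
    g = hist / (shell_vol * rho * n / 2.0)
    return r, g
```

Applied to Cu coordinates from the configurational ensemble, a peak in g(r) near 2.7 Å would reproduce the prominent Cu-Cu feature described above.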
Measuring displacement signal with an accelerometer
International Nuclear Information System (INIS)
Han, Sang Bo
2010-01-01
An effective and simple way to reconstruct displacement signal from a measured acceleration signal is proposed in this paper. To reconstruct displacement signal by means of double-integrating the time domain acceleration signal, the Nyquist frequency of the digital sampling of the acceleration signal should be much higher than the highest frequency component of the signal. On the other hand, to reconstruct displacement signal by taking the inverse Fourier transform, the magnitude of the significant frequency components of the Fourier transform of the acceleration signal should be greater than the 6 dB increment line along the frequency axis. With a predetermined resolution in time and frequency domain, determined by the sampling rate to measure and record the original signal, reconstructing high-frequency signals in the time domain and reconstructing low-frequency signals in the frequency domain will produce biased errors. Furthermore, because of the DC components inevitably included in the sampling process, low-frequency components of the signals are overestimated when displacement signals are reconstructed from the Fourier transform of the acceleration signal. The proposed method utilizes curve-fitting around the significant frequency components of the Fourier transform of the acceleration signal before it is inverse-Fourier transformed. Curve-fitting around the dominant frequency components provides much better results than simply ignoring the insignificant frequency components of the signal
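The frequency-domain route described above, with suppression of the problematic DC and low-frequency components, can be sketched as follows (a minimal illustration, not the authors' implementation; the function name and the cutoff choice are assumptions):

```python
import numpy as np

def displacement_freq_domain(acc, fs, f_cut=0.5):
    """Reconstruct displacement from acceleration via the Fourier transform.

    Dividing A(w) by -w^2 inverts the double differentiation. Components
    below f_cut (including DC) are zeroed, because offset errors there
    would be amplified without bound by the 1/w^2 factor.
    """
    n = len(acc)
    freq = np.fft.rfftfreq(n, d=1.0 / fs)
    A = np.fft.rfft(acc)
    w = 2.0 * np.pi * freq
    X = np.zeros_like(A)
    keep = freq > f_cut                 # drop DC / low-frequency bias
    X[keep] = A[keep] / (-w[keep] ** 2)
    return np.fft.irfft(X, n)
```

Curve-fitting around the dominant spectral peaks, as the paper proposes, would replace the simple bin-wise division used in this sketch.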
A benchmark test of computer codes for calculating average resonance parameters
International Nuclear Information System (INIS)
Ribon, P.; Thompson, A.
1983-01-01
A set of resonance parameters has been generated from known, but secret, average values; the parameters have then been adjusted to mimic experimental data by including the effects of Doppler broadening, resolution broadening and statistical fluctuations. Average parameters calculated from the dataset by various computer codes are compared with each other, and also with the true values. The benchmark test is fully described in the report NEANDC160-U (NEA Data Bank Newsletter No. 27 July 1982); the present paper is a summary of this document. (Auth.)
Talsma, D.
2008-01-01
The auto-adaptive averaging procedure proposed here classifies artifacts in event-related potential data by optimizing the signal-to-noise ratio. This method rank orders single trials according to the impact of each trial on the ERP average. Then, the minimum residual background noise level in the
Scintillation camera with second order resolution
International Nuclear Information System (INIS)
1975-01-01
A scintillation camera is described for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area, in which means is provided for second-order positional resolution. The phototubes, which normally provide only a single order of resolution, are modified to provide second-order positional resolution of radiation within an object positioned for viewing by the scintillation camera. The phototubes are modified in that multiple anodes are provided to receive signals from the photocathode in such a manner that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing as an output of this scintillation camera is thereby improved
International Nuclear Information System (INIS)
Anon.
1992-01-01
Signal processing techniques, extensively used nowadays to maximize the performance of audio and video equipment, have been a key part in the design of hardware and software for high energy physics detectors since pioneering applications in the UA1 experiment at CERN in 1979
International Nuclear Information System (INIS)
Minehara, E.; Kutschera, W.; Hartog, P.D.; Billquist, P.
1985-01-01
The ANL (Argonne National Laboratory) high-resolution injector has been installed to obtain higher mass resolution and higher preacceleration, and to utilize effectively the full mass range of ATLAS (Argonne Tandem Linac Accelerator System). Preliminary results of the first beam test are reported briefly. The design and performance, in particular a high-mass-resolution magnet with aberration compensation, are discussed. 7 refs., 5 figs., 2 tabs
Directory of Open Access Journals (Sweden)
Moath Kassim
2018-05-01
Full Text Available To maintain the safety and reliability of reactors, redundant sensors are usually used to measure critical variables and estimate their averaged time-dependency. Unhealthy sensors can badly influence the estimation result of the process variable. Since online condition monitoring was introduced, the online cross-calibration method has been widely used to detect any anomaly in sensor readings among the redundant group. The cross-calibration method has four main averaging techniques: simple averaging, band averaging, weighted averaging, and parity space averaging (PSA). PSA weighs redundant signals based on their error bounds and their band consistency. Using the consistency weighting factor (C), PSA assigns more weight to consistent signals that have shared bands, based on how many bands they share, and gives inconsistent signals very low weight. In this article, three approaches are introduced for improving the PSA technique: the first is to add another consistency factor, called trend consistency (TC), to take into account the preservation of any characteristic edge that reflects the behavior of the equipment/component measured by the process parameter; the second proposes replacing the error-bound/accuracy-based weighting factor (Wa) with a weighting factor based on the Euclidean distance (Wd); and the third proposes applying Wd, TC, and C all together. Cold neutron source data sets of four redundant hydrogen pressure transmitters from a research reactor were used to perform the validation and verification. Results showed that the second and third modified approaches lead to reasonable improvement of the PSA technique. All approaches implemented in this study were similar in that they have the capability to (1) identify and isolate a drifted sensor that should undergo calibration, (2) identify a faulty sensor due to a long and continuous range of missing data, and (3) identify a healthy sensor. Keywords: Nuclear Reactors
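The band-consistency weighting idea behind PSA can be sketched as follows (a hedged illustration of the general principle, not the article's exact formulation; the function name, the overlap-counting form of C, and the fallback rule are my own assumptions):

```python
import numpy as np

def parity_space_average(readings, error_bounds):
    """Consistency-weighted average of redundant sensor readings.

    Each sensor gets a consistency weight C equal to the number of other
    sensors whose error bands overlap its own; the estimate is the
    C-weighted mean, so an outlying (drifted) sensor contributes little.
    """
    readings = np.asarray(readings, dtype=float)
    bounds = np.asarray(error_bounds, dtype=float)
    lo, hi = readings - bounds, readings + bounds
    n = len(readings)
    C = np.zeros(n)
    for i in range(n):
        for j in range(n):
            # bands i and j overlap if each lower edge is below the
            # other band's upper edge
            if i != j and lo[i] <= hi[j] and lo[j] <= hi[i]:
                C[i] += 1.0
    if C.sum() == 0.0:
        return float(readings.mean())  # no consistency info: plain mean
    return float(np.sum(C * readings) / C.sum())
```

With three consistent transmitters near 10.0 and one drifted to 14.0, the drifted unit shares no band and receives zero weight, so the estimate stays at the healthy consensus.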
Energy Technology Data Exchange (ETDEWEB)
Chen, Z. [School of Physics and Astronomy, Monash University, Clayton, Victoria 3800 (Australia); Weyland, M. [Monash Centre for Electron Microscopy, Monash University, Clayton, Victoria 3800 (Australia); Department of Materials Science and Engineering, Monash University, Clayton, Victoria 3800 (Australia); Sang, X.; Xu, W.; Dycus, J.H.; LeBeau, J.M. [Department of Materials Science and Engineering, North Carolina State University, Raleigh, NC 27695 (United States); D'Alfonso, A.J.; Allen, L.J. [School of Physics, University of Melbourne, Parkville, Victoria 3010 (Australia); Findlay, S.D., E-mail: scott.findlay@monash.edu [School of Physics and Astronomy, Monash University, Clayton, Victoria 3800 (Australia)
2016-09-15
Quantitative agreement on an absolute scale is demonstrated between experiment and simulation for two-dimensional, atomic-resolution elemental mapping via energy dispersive X-ray spectroscopy. This requires all experimental parameters to be carefully characterized. The agreement is good, but some discrepancies remain. The most likely contributing factors are identified and discussed. Previous predictions that increasing the probe forming aperture helps to suppress the channelling enhancement in the average signal are confirmed experimentally. It is emphasized that simple column-by-column analysis requires a choice of sample thickness that compromises between being thick enough to yield a good signal-to-noise ratio while being thin enough that the overwhelming majority of the EDX signal derives from the column on which the probe is placed, despite strong electron scattering effects. - Highlights: • Absolute scale quantification of 2D atomic-resolution EDX maps is demonstrated. • Factors contributing to remaining small quantitative discrepancies are identified. • Experiment confirms large probe-forming apertures suppress channelling enhancement. • The thickness range suitable for reliable column-by-column analysis is discussed.
Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie
2018-02-01
There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have attempted to determine whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels, daily and hourly; the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.
Shedding light on endocytosis with optimized super-resolution microscopy
Leyton Puig, D.M.
2017-01-01
Super-resolution microscopy is a relatively new microscopy technique that is still under optimization. In this thesis we focus on the improvement of the quality of super-resolution images, to apply them to the study of the processes of cell signaling and endocytosis. First, we show that the use of a
Signal processing for liquid ionization calorimeters
International Nuclear Information System (INIS)
Cleland, W.E.; Stern, E.G.
1992-01-01
We present the results of a study of the effects of thermal and pileup noise in liquid ionization calorimeters operating in a high-luminosity environment. The method of optimal filtering of multiply-sampled signals, which may be used to improve the timing and amplitude resolution of calorimeter signals, is described, and its implications for signal shaping functions are examined. The dependence of the time and amplitude resolution on the relative strength of the pileup and thermal noise, which varies with such parameters as luminosity, rapidity and calorimeter cell size, is examined
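The core of optimal filtering, estimating a pulse amplitude from several samples with weights built from the noise autocorrelation, can be sketched generically (a simplified illustration under textbook assumptions, not the authors' formulation; names and the unit-gain constraint form are mine):

```python
import numpy as np

def optimal_filter_weights(pulse_shape, noise_cov):
    """Minimum-variance amplitude weights for multiply-sampled pulses.

    Minimizes the estimator variance subject to unit gain on the known
    pulse shape g: w = R^-1 g / (g^T R^-1 g), where R is the noise
    autocorrelation matrix (thermal + pileup contributions).
    """
    g = np.asarray(pulse_shape, dtype=float)
    Rinv_g = np.linalg.solve(np.asarray(noise_cov, dtype=float), g)
    return Rinv_g / (g @ Rinv_g)

# amplitude estimate for one sampled pulse: a_hat = w @ samples
```

For white noise (R proportional to the identity) this reduces to a matched filter; correlated pileup noise reshapes the weights, which is why the relative pileup/thermal strength matters for the achievable resolution.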
Relationship of signal-to-noise ratio with acquisition parameters in MRI for a given contrast
International Nuclear Information System (INIS)
Bittoun, J.; Leroy-Willig, A.; Idy, I.; Halimi, P.; Syrota, A.; Desgrez, A.; Saint-Jalmes, H.
1987-01-01
The signal-to-noise ratio (SNR) is certainly the most important characteristic of medical images, since the spatial resolution and the visualization of contrast depend on its value. On the other hand, modifying an acquisition variable in magnetic resonance imaging, for example in order to improve spatial resolution, may induce an SNR loss and finally alter the image quality. We have studied a theoretical relation between SNR and the acquisition variables of the 2DFT method, with the exception of parameters such as TR, TE and TI, which are determined by the contrast desired to confirm a diagnosis. According to this relation, SNR is proportional to each dimension of the slice and to the square root of the number of averaged signals; it is inversely proportional to the number of frequency points and to the square root of the number of phase points. This relation was experimentally verified with phantoms on an MR system at 1.5 T. It was then plotted as a multiple-entry graph on which operators at the console can read the number of averaged signals necessary to compensate for the SNR loss induced by a modification of the other parameters
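The stated proportionality can be written compactly as follows (the symbols are my own shorthand, not notation taken from the paper):

```latex
\mathrm{SNR} \;\propto\; \frac{\Delta x \,\Delta y \,\Delta z \;\sqrt{N_{\mathrm{ex}}}}{N_{f}\,\sqrt{N_{p}}}
```

where Δx, Δy, Δz are the slice (voxel) dimensions, N_ex is the number of averaged signals, N_f the number of frequency-encoding points, and N_p the number of phase-encoding points. Halving the slice thickness, for instance, must be offset by quadrupling N_ex to keep the SNR constant.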
Average combination difference morphological filters for fault feature extraction of bearing
Lv, Jingxiang; Yu, Jianbo
2018-02-01
In order to extract impulse components from vibration signals with much noise and many harmonics, a new morphological filter called the average combination difference morphological filter (ACDIF) is proposed in this paper. ACDIF first constructs several new combination difference (CDIF) operators, and then integrates the best two CDIFs as the final morphological filter. This design scheme enables ACDIF to extract the positive and negative impulses existing in vibration signals to enhance the accuracy of bearing fault diagnosis. The length of the structure element (SE), which affects the performance of ACDIF, is determined adaptively by a new indicator called Teager energy kurtosis (TEK). TEK further improves the effectiveness of ACDIF for fault feature extraction. Experimental results on simulated and real bearing vibration signals demonstrate that ACDIF can effectively suppress noise and extract periodic impulses from bearing vibration signals.
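The CDIF construction itself is more elaborate, but the basic building block, a difference of morphological operators with a flat structuring element, can be sketched as follows (an illustrative dilation-minus-erosion gradient, not the paper's ACDIF; the function name and odd-length-SE convention are assumptions):

```python
import numpy as np

def morph_difference(signal, se_length):
    """Flat-SE grayscale dilation minus erosion (morphological gradient).

    Dilation is a moving maximum and erosion a moving minimum over the
    structuring element; their difference passes sharp impulses while
    suppressing slowly varying harmonics and baseline. se_length must
    be odd so the window is centered on each sample.
    """
    x = np.asarray(signal, dtype=float)
    pad = se_length // 2
    xp = np.pad(x, pad, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(xp, se_length)
    dilation = windows.max(axis=1)   # moving maximum
    erosion = windows.min(axis=1)    # moving minimum
    return dilation - erosion
```

In the paper's scheme, the SE length would be chosen adaptively by maximizing the TEK indicator rather than fixed in advance.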
High resolution SETI: Experiences and prospects
Horowitz, Paul; Clubok, Ken
Megachannel spectroscopy with sub-Hertz resolution constitutes an attractive strategy for a microwave search for extraterrestrial intelligence (SETI), assuming the transmission of a narrowband radiofrequency beacon. Such resolution matches the properties of the interstellar medium, and the necessary Doppler corrections provide a high degree of interference rejection. We have constructed a frequency-agile receiver with an FFT-based 8 megachannel digital spectrum analyzer, on-line signal recognition, and multithreshold archiving. We are using it to conduct a meridian transit search of the northern sky at the Harvard-Smithsonian 26-m antenna, with a second identical system scheduled to begin observations in Argentina this month. Successive 400 kHz spectra, at 0.05 Hz resolution, are searched for features characteristic of an intentional narrowband beacon transmission. These spectra are centered on guessable frequencies (such as λ21 cm), referenced successively to the local standard of rest, the galactic barycenter, and the cosmic blackbody rest frame. This search has rejected interference admirably, but is greatly limited both in total frequency coverage and sensitivity to signals other than carriers. We summarize five years of high resolution SETI at Harvard, in the context of answering the questions "How useful is narrowband SETI, how serious are its limitations, what can be done to circumvent them, and in what direction should SETI evolve?" Increasingly powerful signal processing hardware, combined with ever-higher memory densities, are particularly relevant, permitting the construction of compact and affordable gigachannel spectrum analyzers covering hundreds of megahertz of instantaneous bandwidth.
A compact high resolution ion mobility spectrometer for fast trace gas analysis.
Kirk, Ansgar T; Allers, Maria; Cochems, Philipp; Langejuergen, Jens; Zimmermann, Stefan
2013-09-21
Drift tube ion mobility spectrometers (IMS) are widely used for fast trace gas detection in air, but portable compact systems are typically very limited in their resolving power. Decreasing the initial ion packet width improves the resolution, but is generally associated with a reduced signal-to-noise-ratio (SNR) due to the lower number of ions injected into the drift region. In this paper, we present a refined theory of IMS operation which employs a combined approach for the analysis of the ion drift and the subsequent amplification to predict both the resolution and the SNR of the measured ion current peak. This theoretical analysis shows that the SNR is not a function of the initial ion packet width, meaning that compact drift tube IMS with both very high resolution and extremely low limits of detection can be designed. Based on these implications, an optimized combination of a compact drift tube with a length of just 10 cm and a transimpedance amplifier has been constructed with a resolution of 183 measured for the positive reactant ion peak (RIP(+)), which is sufficient to e.g. separate the RIP(+) from the protonated acetone monomer, even though their drift times only differ by a factor of 1.007. Furthermore, the limits of detection (LODs) for acetone are 180 pptv within 1 s of averaging time and 580 pptv within only 100 ms.
An Experimental Observation of Axial Variation of Average Size of Methane Clusters in a Gas Jet
International Nuclear Information System (INIS)
Ji-Feng, Han; Chao-Wen, Yang; Jing-Wei, Miao; Jian-Feng, Lu; Meng, Liu; Xiao-Bing, Luo; Mian-Gong, Shi
2010-01-01
Axial variation of the average size of methane clusters in a gas jet produced by supersonic expansion of methane through a cylindrical nozzle 0.8 mm in diameter is observed using a Rayleigh scattering method. The scattered light intensity exhibits a power scaling on the backing pressure ranging from 16 to 50 bar, and the power is strongly Z dependent, varying from 8.4 (Z = 3 mm) to 5.4 (Z = 11 mm), which is much larger than that of the argon cluster. The scattered light intensity versus axial position shows that the position of 5 mm has the maximum signal intensity. Estimation of the average cluster size as a function of axial position Z indicates that the cluster growth process goes forward until the maximum average cluster size is reached at Z = 9 mm, and the average cluster size decreases gradually for Z > 9 mm
Fishman, M. M.
1985-01-01
The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
Generation of earthquake signals
International Nuclear Information System (INIS)
Kjell, G.
1994-01-01
Seismic verification can be performed either as a full scale test on a shaker table or as numerical calculations. In both cases it is necessary to have an earthquake acceleration time history. This report describes generation of such time histories by filtering white noise. Analogue and digital filtering methods are compared. Different methods of predicting the response spectrum of a white noise signal filtered by a band-pass filter are discussed. Prediction of both the average response level and the statistical variation around this level are considered. Examples with both the IEEE 301 standard response spectrum and a ground spectrum suggested for Swedish nuclear power stations are included in the report
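The filtered-white-noise approach described above can be sketched with a simple digital band-pass realized in the Fourier domain (an illustrative toy, not the report's analogue or digital filters; the band edges, sampling rate, and function name are my own choices, and matching a target response spectrum would additionally require iterative scaling of the history):

```python
import numpy as np

def earthquake_like_signal(fs=200.0, duration=20.0, band=(1.0, 10.0), seed=0):
    """Band-pass filtered white noise as a synthetic acceleration history.

    White Gaussian noise is generated, then digitally filtered by
    zeroing all FFT bins outside the pass band, concentrating the
    signal energy in the seismically relevant frequency range.
    """
    rng = np.random.default_rng(seed)
    n = int(fs * duration)
    white = rng.standard_normal(n)
    spec = np.fft.rfft(white)
    freq = np.fft.rfftfreq(n, d=1.0 / fs)
    spec[(freq < band[0]) | (freq > band[1])] = 0.0   # ideal band-pass
    return np.fft.irfft(spec, n)
```

The statistical variation around the average response level, discussed in the report, can then be studied by repeating the generation over many random seeds.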
Automated conflict resolution issues
Wike, Jeffrey S.
1991-01-01
A discussion is presented of how conflicts for Space Network resources should be resolved in the ATDRSS era. The following topics are presented: a description of how resource conflicts are currently resolved; a description of issues associated with automated conflict resolution; present conflict resolution strategies; and topics for further discussion.
Signal-noise separation based on self-similarity testing in 1D-timeseries data
Bourdin, Philippe A.
2015-08-01
The continuous improvement of the resolution delivered by modern instrumentation is a cost-intensive part of any new space- or ground-based observatory. Typically, scientists later reduce the resolution of the obtained raw data, for example in the spatial, spectral, or temporal domain, in order to suppress the effects of noise in the measurements. In practice, only simple methods are used that just smear out the noise, instead of trying to remove it, so that the noise can no longer be seen. In high-precision 1D-timeseries data, this usually results in an unwanted quality loss and corruption of power spectra in selected frequency ranges. Novel methods exist that are based on non-local averaging, which would conserve much of the initial resolution, but these methods have so far focused on 2D or 3D data. We present here a method specialized for 1D-timeseries, e.g. as obtained by magnetic field measurements from the recently launched MMS satellites. To identify the noise, we use a self-similarity testing and non-local averaging method in order to separate different types of noise and signals, like the instrument noise, non-correlated fluctuations in the signal from heliospheric sources, and correlated fluctuations such as harmonic waves or shock fronts. In power spectra of test data, we are able to restore significant parts of a previously known signal from a noisy measurement. This method also works for high frequencies, where the background noise may contribute more to the spectral power than the signal itself. We offer an easy-to-use software tool set, which enables scientists to apply this novel technique to their own noisy data. This allows the maximum possible capacity of the instrumental hardware to be used and helps to enhance the quality of the obtained scientific results.
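The self-similarity idea, averaging each sample with other samples whose local neighborhoods look alike, can be sketched in the style of non-local means adapted to 1D (a generic illustration, not the authors' tool set; the function name and all parameter values are assumptions):

```python
import numpy as np

def nonlocal_average_1d(x, patch=5, search=50, h=0.5):
    """Self-similarity based denoising of a 1D timeseries.

    Each sample is replaced by a weighted mean of nearby samples whose
    surrounding patches are similar; weights decay with patch distance,
    so correlated structure (waves, shock fronts) is preserved while
    incoherent instrument noise averages out.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    half = patch // 2
    xp = np.pad(x, half, mode='reflect')
    # patches[i] is the length-`patch` neighborhood centered on x[i]
    patches = np.lib.stride_tricks.sliding_window_view(xp, patch)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - search), min(n, i + search + 1)
        d2 = np.mean((patches[lo:hi] - patches[i]) ** 2, axis=1)
        w = np.exp(-d2 / (h ** 2))        # self-similarity weights
        out[i] = np.sum(w * x[lo:hi]) / np.sum(w)
    return out
```

Unlike a moving average, dissimilar neighborhoods receive nearly zero weight, which is what preserves sharp fronts while still suppressing the noise floor.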
Ultrasound imaging using coded signals
DEFF Research Database (Denmark)
Misaridis, Athanasios
Modulated (or coded) excitation signals can potentially improve the quality and increase the frame rate in medical ultrasound scanners. The aim of this dissertation is to investigate systematically the applicability of modulated signals in medical ultrasound imaging and to suggest appropriate...... methods for coded imaging, with the goal of making better anatomic and flow images and three-dimensional images. On the first stage, it investigates techniques for doing high-resolution coded imaging with improved signal-to-noise ratio compared to conventional imaging. Subsequently it investigates how...... coded excitation can be used for increasing the frame rate. The work includes both simulated results using Field II, and experimental results based on measurements on phantoms as well as clinical images. Initially a mathematical foundation of signal modulation is given. Pulse compression based...
Analytical expressions for conditional averages: A numerical test
DEFF Research Database (Denmark)
Pécseli, H.L.; Trulsen, J.
1991-01-01
Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...
Experimental demonstration of squeezed-state quantum averaging
DEFF Research Database (Denmark)
Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin
2010-01-01
We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...
Detectors for high resolution dynamic pet
International Nuclear Information System (INIS)
Derenzo, S.E.; Budinger, T.F.; Huesman, R.H.
1983-05-01
This report reviews the motivation for high spatial resolution in dynamic positron emission tomography of the head and the technical problems in realizing this objective. We present recent progress in using small silicon photodiodes to measure the energy deposited by 511 keV photons in small BGO crystals with an energy resolution of 9.4% full-width at half-maximum. In conjunction with a suitable phototube coupled to a group of crystals, the photodiode signal to noise ratio is sufficient for the identification of individual crystals both for conventional and time-of-flight positron tomography
Parameter-free resolution of the superposition of stochastic signals
Energy Technology Data Exchange (ETDEWEB)
Scholz, Teresa, E-mail: tascholz@fc.ul.pt [Center for Theoretical and Computational Physics, University of Lisbon (Portugal); Raischel, Frank [Center for Geophysics, IDL, University of Lisbon (Portugal); Closer Consulting, Av. Eng. Duarte Pacheco Torre 1 15º, 1070-101 Lisboa (Portugal); Lopes, Vitor V. [DEIO-CIO, University of Lisbon (Portugal); UTEC–Universidad de Ingeniería y Tecnología, Lima (Peru); Lehle, Bernd; Wächter, Matthias; Peinke, Joachim [Institute of Physics and ForWind, Carl-von-Ossietzky University of Oldenburg, Oldenburg (Germany); Lind, Pedro G. [Institute of Physics and ForWind, Carl-von-Ossietzky University of Oldenburg, Oldenburg (Germany); Institute of Physics, University of Osnabrück, Osnabrück (Germany)
2017-01-30
This paper presents a direct method to obtain the deterministic and stochastic contribution of the sum of two independent stochastic processes, one of which is an Ornstein–Uhlenbeck process and the other a general (non-linear) Langevin process. The method is able to distinguish between the stochastic processes, retrieving their corresponding stochastic evolution equations. This framework is based on a recent approach for the analysis of multidimensional Langevin-type stochastic processes in the presence of strong measurement (or observational) noise, which is here extended to impose neither constraints nor parameters and extract all coefficients directly from the empirical data sets. Using synthetic data, it is shown that the method yields satisfactory results.
Bullen, A; Patel, S S; Saggau, P
1997-07-01
The design and implementation of a high-speed, random-access, laser-scanning fluorescence microscope configured to record fast physiological signals from small neuronal structures with high spatiotemporal resolution is presented. The laser-scanning capability of this nonimaging microscope is provided by two orthogonal acousto-optic deflectors under computer control. Each scanning point can be randomly accessed and has a positioning time of 3-5 microseconds. Sampling time is also computer-controlled and can be varied to maximize the signal-to-noise ratio. Acquisition rates up to 200k samples/s at 16-bit digitizing resolution are possible. The spatial resolution of this instrument is determined by the minimal spot size at the level of the preparation (i.e., 2-7 microns). Scanning points are selected interactively from a reference image collected with differential interference contrast optics and a video camera. Frame rates up to 5 kHz are easily attainable. Intrinsic variations in laser light intensity and scanning spot brightness are overcome by an on-line signal-processing scheme. Representative records obtained with this instrument by using voltage-sensitive dyes and calcium indicators demonstrate the ability to make fast, high-fidelity measurements of membrane potential and intracellular calcium at high spatial resolution (2 microns) without any temporal averaging.
Noise and resolution with digital filtering for nuclear spectrometry
International Nuclear Information System (INIS)
Lakatos, T.
1991-01-01
Digital noise filtering looks very promising for semiconductor spectrometry. The resolution and conversion speed of the analog-to-digital converter (ADC) used at the input of a digital signal processor and analyzer can strongly influence the signal-to-noise ratio, the peak position and the peak shape. The article deals with the investigation of these effects using computer modelling. (orig.)
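As a concrete example of the kind of digital filtering modelled in such studies, a trapezoidal shaper, a standard choice in semiconductor spectrometry, can be built from two moving sums of the digitized signal (a generic sketch, not the filter analyzed in the article; the function name and parameter values are mine):

```python
import numpy as np

def trapezoidal_shaper(x, rise=20, flat=10):
    """Trapezoidal shaping of a digitized detector step signal.

    Takes the difference of two length-`rise` moving averages separated
    by rise+flat samples: a noisy step of height A becomes a trapezoid
    whose flat-top value estimates A, while white ADC/preamp noise is
    averaged down over the rise time.
    """
    x = np.asarray(x, dtype=float)
    c = np.concatenate(([0.0], np.cumsum(x)))   # prefix sums

    def moving_sum(delay):
        # sum of `rise` samples ending `delay` samples in the past
        idx_hi = np.arange(len(x)) + 1 - delay
        idx_lo = np.clip(idx_hi - rise, 0, len(x))
        idx_hi = np.clip(idx_hi, 0, len(x))
        return c[idx_hi] - c[idx_lo]

    return (moving_sum(0) - moving_sum(rise + flat)) / rise
```

A coarser or slower ADC at the shaper input degrades the flat-top estimate, which is exactly the coupling between ADC properties and resolution that the article investigates by modelling.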
The flattening of the average potential in models with fermions
International Nuclear Information System (INIS)
Bornholdt, S.
1993-01-01
The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)
Directory of Open Access Journals (Sweden)
Zhou G Tong
2007-01-01
Many modern communication signal formats, such as orthogonal frequency-division multiplexing (OFDM) and code-division multiple access (CDMA), have high peak-to-average power ratios (PARs). A signal with a high PAR not only is vulnerable in the presence of nonlinear components such as power amplifiers (PAs), but also leads to low transmission power efficiency. Selected mapping (SLM) and clipping are well-known PAR reduction techniques. We propose to combine SLM with threshold clipping and digital baseband predistortion to improve the overall efficiency of the transmission system. Testbed experiments demonstrate the effectiveness of the proposed approach.
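As a concrete illustration of the selected mapping (SLM) idea in the abstract above, the sketch below picks the lowest-PAPR candidate among random ±1 phase rotations of one OFDM symbol. The subcarrier count, candidate count, and QPSK mapping are illustrative assumptions, not the paper's testbed parameters, and the clipping/predistortion stages are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def slm(x_freq, n_candidates=8):
    """Selected mapping: multiply the subcarriers by random +/-1 phase
    sequences and keep the time-domain candidate with the lowest PAPR.
    The identity sequence is included so the PAPR can never get worse."""
    candidates = [np.ones(x_freq.size)]
    candidates += [rng.choice([-1.0, 1.0], x_freq.size) for _ in range(n_candidates - 1)]
    return min((np.fft.ifft(x_freq * ph) for ph in candidates), key=papr_db)

# 64 random QPSK subcarriers -> one OFDM symbol (illustrative sizes)
qpsk = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
plain = np.fft.ifft(qpsk)
reduced = slm(qpsk)
print(papr_db(plain), papr_db(reduced))  # SLM keeps the flattest candidate
```

In practice the chosen phase sequence index must be signaled to the receiver as side information, which is the usual cost of SLM.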
20 CFR 404.220 - Average-monthly-wage method.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...
A time-averaged cosmic ray propagation theory
International Nuclear Information System (INIS)
Klimas, A.J.
1975-01-01
An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.)
7 CFR 51.2561 - Average moisture content.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...
Averaging in SU(2) open quantum random walk
International Nuclear Information System (INIS)
Ampadu Clement
2014-01-01
We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of the distribution is shown to be even in the CLT.
Noise Reduction for Nonlinear Nonstationary Time Series Data using Averaging Intrinsic Mode Function
Directory of Open Access Journals (Sweden)
Christofer Toumazou
2013-07-01
A novel noise filtering algorithm based on the averaging Intrinsic Mode Function (aIMF), a derivative of Empirical Mode Decomposition (EMD), is proposed to remove white Gaussian noise from foreign currency exchange rates, which are nonlinear nonstationary time series signals. Noise patterns with different amplitudes and frequencies were randomly mixed into the five exchange rates. A number of filters, namely the Extended Kalman Filter (EKF), Wavelet Transform (WT), Particle Filter (PF), and the averaging Intrinsic Mode Function (aIMF) algorithm, were used to compare filtering and smoothing performance. The aIMF algorithm demonstrated the best noise reduction performance among these filters.
Schelfaut, Roselien
2005-01-01
Integrins are receptors present on most cells. By binding ligand they can initiate signalling pathways inside the cell; those pathways link to proteins in the cytosol. It is known that tumour cells can survive and proliferate in the absence of a solid support, while normal cells need to be bound to ligand. To understand why tumour cells act that way, we first have to know how ligand binding to integrins affects the cell. This research field includes studies on activation of proteins b...
DEFF Research Database (Denmark)
N. Gordon, Jeffery; Ringe, Georg
2015-01-01
Bank resolution is a key pillar of the European Banking Union. This column argues that the current structure of large EU banks is not conducive to an effective and unbiased resolution procedure. The authors would require systemically important banks to reorganise into a 'holding company' structure, where the parent company holds unsecured term debt sufficient to cover losses at its operating financial subsidiaries. This would facilitate a 'single point of entry' resolution procedure, minimising the risk of creditor runs and destructive ring-fencing by national regulators.
Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing
2017-09-01
The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders the timing to buy or sell, the moving average cannot tell the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy, in which a fuzzy logic rule is used to determine the strength of the trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and utilize crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experiment data. Each experiment is repeated 20 times. The results show that, firstly, the fuzzy moving average strategy can obtain a more stable rate of return than the moving average strategies. Secondly, the holding amount series is highly sensitive to the price series. Thirdly, simple moving average methods are more efficient. Lastly, the fuzzy extents of extremely low, high, and very high are more popular. These results are helpful in investment decisions.
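The idea of grading signal strength (and hence trading volume) rather than emitting a binary buy/sell can be sketched roughly as follows. The crossover rule, the triangular membership, and the `fuzzy_extent` knob are hypothetical stand-ins for the paper's GA-optimized rule set:

```python
import numpy as np

def sma(prices, n):
    """Simple moving average over the last n prices (NaN before n samples)."""
    out = np.full(len(prices), np.nan)
    c = np.cumsum(np.insert(prices, 0, 0.0))
    out[n - 1:] = (c[n:] - c[:-n]) / n
    return out

def fuzzy_signal(prices, short=5, long=20, fuzzy_extent=0.02):
    """Signed trading signal in [-1, 1]: direction from the short/long SMA
    crossover, strength (a stand-in for trading volume) from a triangular
    membership of the relative gap. All parameters here are illustrative."""
    s, l = sma(prices, short), sma(prices, long)
    gap = (s - l) / l
    strength = np.clip(np.abs(gap) / fuzzy_extent, 0.0, 1.0)  # fuzzy "how strong"
    return np.sign(gap) * strength                            # signed volume hint

prices = np.linspace(100.0, 120.0, 60)  # a steady uptrend
sig = fuzzy_signal(prices)
# In an uptrend the short SMA sits above the long SMA, so late signals are buys
```

A small gap between the averages thus yields a small position, while a decisive crossover commits full volume, which is the intuition behind letting fuzzy membership decide the trade size.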
Chelton, Dudley B.; Schlax, Michael G.
1991-01-01
The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.
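A minimal sketch of the contrast between the composite average and a minimum-MSE (Gauss-Markov) weighted estimate is given below. The exponential autocorrelation model and all parameter values are assumptions made for illustration, not taken from the paper:

```python
import numpy as np

def composite_average(y):
    """Composite average: the unweighted mean of all samples in the window."""
    return float(np.mean(y))

def optimal_weights(t_obs, t_target, signal_var, noise_var, tau):
    """Gauss-Markov (minimum mean-squared-error) weights for estimating the
    signal at t_target from irregularly spaced samples, assuming an
    exponential autocorrelation exp(-|dt|/tau). The estimate is w @ y."""
    t_obs = np.asarray(t_obs, dtype=float)
    C_yy = signal_var * np.exp(-np.abs(t_obs[:, None] - t_obs[None, :]) / tau)
    C_yy = C_yy + noise_var * np.eye(t_obs.size)   # add measurement-error variance
    c_xy = signal_var * np.exp(-np.abs(t_obs - t_target) / tau)
    return np.linalg.solve(C_yy, c_xy)

# Sanity check: a noise-free sample exactly at the target time gets all the weight
w = optimal_weights([0.0, 2.0, 7.0], 2.0, 1.0, 0.0, 3.0)
```

The composite average corresponds to fixing all weights to 1/N regardless of sample timing or noise level, which is why the weighted estimate can only do better when the assumed covariances are roughly right.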
Microbes make average 2 nanometer diameter crystalline UO2 particles.
Suzuki, Y.; Kelly, S. D.; Kemner, K. M.; Banfield, J. F.
2001-12-01
It is well known that phylogenetically diverse groups of microorganisms are capable of catalyzing the reduction of highly soluble U(VI) to highly insoluble U(IV), which rapidly precipitates as uraninite (UO2). Because biological uraninite is highly insoluble, microbial uranyl reduction is being intensively studied as the basis for a cost-effective in-situ bioremediation strategy. Previous studies have described UO2 biomineralization products as amorphous or poorly crystalline. The objective of this study is to characterize the nanocrystalline uraninite in detail in order to determine the particle size, crystallinity, and size-related structural characteristics, and to examine the implications of these for reoxidation and transport. In this study, we obtained U-contaminated sediment and water from an inactive U mine and incubated them anaerobically with nutrients to stimulate reductive precipitation of UO2 by indigenous anaerobic bacteria, mainly Gram-positive spore-forming Desulfosporosinus and Clostridium spp., as revealed by RNA-based phylogenetic analysis. A Desulfosporosinus sp. was isolated from the sediment, and UO2 was precipitated by this isolate from a simple solution containing only U and electron donors. We characterized the UO2 formed in both experiments by high-resolution TEM (HRTEM) and X-ray absorption fine structure analysis (XAFS). The results from HRTEM showed that both the pure and the mixed cultures of microorganisms precipitated around 1.5-3 nm crystalline UO2 particles. Some particles as small as around 1 nm could be imaged. Rare particles around 10 nm in diameter were also present. Particles adhere to cells and form colloidal aggregates with low fractal dimension. In some cases, coarsening by oriented attachment on {111} is evident. Our preliminary results from XAFS for the incubated U-contaminated sample also indicated an average UO2 diameter of 2 nm. In nanoparticles, the U-U distance obtained by XAFS was 0.373 nm, 0.012 nm
On the construction of a time base and the elimination of averaging errors in proxy records
Beelaerts, V.; De Ridder, F.; Bauwens, M.; Schmitz, N.; Pintelon, R.
2009-04-01
Proxies are sources of climate information which are stored in natural archives (e.g. ice-cores, sediment layers on ocean floors and animals with calcareous marine skeletons). Measuring these proxies produces very short records and mostly involves sampling solid substrates, which is subject to the following two problems. Problem 1: Natural archives are equidistantly sampled on a distance grid along their accretion axis. Starting from these distance series, a time series needs to be constructed, as comparison of different data records is only meaningful on a time grid. The time series will be non-equidistant, as the accretion rate is non-constant. Problem 2: A typical example of sampling solid substrates is drilling. Because of the dimensions of the drill, the holes drilled will not be infinitesimally small. Consequently, samples are not taken at a point in distance, but rather over a volume in distance. This holds for most sampling methods in solid substrates. As a consequence, when the continuous proxy signal is sampled, it will be averaged over the volume of the sample, resulting in an underestimation of the amplitude. Whether this averaging effect is significant depends on the volume of the sample and the variations of interest of the proxy signal. Starting from the measured signal, the continuous signal needs to be reconstructed in order to eliminate these averaging errors. The aim is to provide an efficient identification algorithm to identify the non-linearities in the distance-time relationship, called time base distortions, and to correct for the averaging effects. Because this is a parametric method, an assumption about the proxy signal needs to be made: the proxy record on a time base is assumed to be harmonic; this is an obvious assumption, because natural archives often exhibit a seasonal cycle. In a first approach the averaging effects are assumed to be in one direction only, i.e. the direction of the axis on which the measurements were performed. The
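The amplitude underestimation described under Problem 2 can be reproduced with a toy model: box-averaging a harmonic proxy of wavelength λ over a sample width w attenuates its amplitude by |sinc(w/λ)|. A sketch with illustrative numbers (not the paper's identification algorithm, which additionally estimates time base distortions):

```python
import numpy as np

def box_average(signal, width):
    """Moving average over `width` samples -- a crude model of a drill of
    finite diameter integrating the proxy over the sampled volume."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

# A harmonic proxy: one "seasonal" cycle per 100 samples, unit amplitude
wavelength, width = 100.0, 30
x = np.arange(2000)
proxy = np.sin(2 * np.pi * x / wavelength)
measured = box_average(proxy, width)

# Theory: averaging a sinusoid over a box of width w scales its amplitude by
# |sinc(w / lambda)|  (numpy's sinc already includes the factor of pi)
predicted_amplitude = abs(np.sinc(width / wavelength))
observed_amplitude = measured[500:1500].max()  # interior, away from edge effects
```

Since the attenuation factor depends only on w/λ, correcting for it amounts to dividing each harmonic component by its known sinc factor once the time base has been identified.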
High Resolution Elevation Contours
Minnesota Department of Natural Resources — This dataset contains contours generated from high resolution data sources such as LiDAR. Generally speaking this data is 2 foot or less contour interval.
Microscopic resolution broadband dielectric spectroscopy
International Nuclear Information System (INIS)
Mukherjee, S; Watson, P; Prance, R J
2011-01-01
Results are presented for a non-contact measurement system capable of micron level spatial resolution. It utilises the novel electric potential sensor (EPS) technology, invented at Sussex, to image the electric field above a simple composite dielectric material. EP sensors may be regarded as analogous to a magnetometer and require no adjustments or offsets during either setup or use. The sample consists of a standard glass/epoxy FR4 circuit board, with linear defects machined into the surface by a PCB milling machine. The sample is excited with an a.c. signal over a range of frequencies from 10 kHz to 10 MHz, from the reverse side, by placing it on a conducting sheet connected to the source. The single sensor is raster scanned over the surface at a constant working distance, consistent with the spatial resolution, in order to build up an image of the electric field, with respect to the reference potential. The results demonstrate that both the surface defects and the internal dielectric variations within the composite may be imaged in this way, with good contrast being observed between the glass mat and the epoxy resin.
Super Resolution Algorithm for CCTVs
Gohshi, Seiichi
2015-03-01
Recently, security cameras and CCTV systems have become an important part of our daily lives. The rising demand for such systems has created business opportunities in this field, especially in big cities. Analogue CCTV systems are being replaced by digital systems, and HDTV CCTV has become quite common. HDTV CCTV can achieve images with high contrast and decent quality if they are captured in daylight. However, an image captured at night does not always have sufficient contrast and resolution because of poor lighting conditions. CCTV systems depend on infrared light at night to compensate for insufficient lighting, thereby producing monochrome images and videos. However, these images and videos do not have high contrast and are blurred. We propose a nonlinear signal processing technique that significantly improves the visual and image qualities (contrast and resolution) of low-contrast infrared images. The proposed method enables the use of infrared cameras for purposes such as night shots and other poor-lighting environments.
International Nuclear Information System (INIS)
McMurray, J. S.; Williams, C. C.
1998-01-01
Scanning Capacitance Microscopy (SCM) is capable of providing two-dimensional information about dopant and carrier concentrations in semiconducting devices. This information can be used to calibrate models used in the simulation of these devices prior to manufacturing and to develop and optimize the manufacturing processes. To provide information for future generations of devices, ultra-high spatial accuracy (<10 nm) will be required. One method, which potentially provides a means to obtain these goals, is inverse modeling of SCM data. Current semiconducting devices have large dopant gradients. As a consequence, the capacitance probe signal represents an average over the local dopant gradient. Conversion of the SCM signal to dopant density has previously been accomplished with a physical model which assumes that no dopant gradient exists in the sampling area of the tip. The conversion of data using this model produces results for abrupt profiles which do not have adequate resolution and accuracy. A new inverse model and iterative method has been developed to obtain higher resolution and accuracy from the same SCM data. This model has been used to simulate the capacitance signal obtained from one and two-dimensional ideal abrupt profiles. This simulated data has been input to a new iterative conversion algorithm, which has recovered the original profiles in both one and two dimensions. In addition, it is found that the shape of the tip can significantly impact resolution. Currently SCM tips are found to degrade very rapidly. Initially the apex of the tip is approximately hemispherical, but quickly becomes flat. This flat region often has a radius of about the original hemispherical radius. This change in geometry causes the silicon directly under the disk to be sampled with approximately equal weight. In contrast, a hemispherical geometry samples most strongly the silicon centered under the SCM tip and falls off quickly with distance from the tip's apex. Simulation
Wacyk, Ihor; Prache, Olivier; Ghosh, Amal
2011-06-01
AMOLED microdisplays continue to show improvement in resolution and optical performance, enhancing their appeal for a broad range of near-eye applications such as night vision, simulation and training, situational awareness, augmented reality, medical imaging, and mobile video entertainment and gaming. eMagin's latest development of an HDTV+ resolution technology integrates an OLED pixel of 3.2 × 9.6 microns in size on a 0.18 micron CMOS backplane to deliver significant new functionality as well as the capability to implement a 1920×1200 microdisplay in a 0.86" diagonal area. In addition to the conventional matrix addressing circuitry, the HDTV+ display includes a very low-power, low-voltage-differential-signaling (LVDS) serialized interface to minimize cable and connector size as well as electromagnetic emissions (EMI), an on-chip set of look-up tables for digital gamma correction, and a novel pulse-width-modulation (PWM) scheme that together with the standard analog control provides a total dimming range of 0.05 cd/m2 to 2000 cd/m2 in the monochrome version. The PWM function also enables an impulse drive mode of operation that significantly reduces motion artifacts in high-speed scene changes. An internal 10-bit DAC ensures that a full 256 gamma-corrected gray levels are available across the entire dimming range, resulting in a measured dynamic range exceeding 20 bits. This device has been successfully tested for operation at frame rates ranging from 30 Hz up to 85 Hz. This paper describes the operational features and detailed optical and electrical test results for the new AMOLED WUXGA resolution microdisplay.
Ultra high resolution tomography
Energy Technology Data Exchange (ETDEWEB)
Haddad, W.S.
1994-11-15
Recent work and results on ultra high resolution three-dimensional imaging with soft x-rays will be presented. This work is aimed at determining the microscopic three-dimensional structure of biological and material specimens. Three-dimensional reconstructed images of a microscopic test object will be presented; the reconstruction has a resolution on the order of 1000 Å in all three dimensions. Preliminary work with biological samples will also be shown, and the experimental and numerical methods used will be discussed.
High resolution positron tomography
International Nuclear Information System (INIS)
Brownell, G.L.; Burnham, C.A.
1982-01-01
The limits of spatial resolution in practical positron tomography are examined. The four factors that limit spatial resolution are: positron range; small angle deviation; detector dimensions and properties; statistics. Of these factors, positron range may be considered the fundamental physical limitation, since it is independent of instrument properties. The other factors are to a greater or lesser extent dependent on the design of the tomograph.
Scalable Resolution Display Walls
Leigh, Jason; Johnson, Andrew; Renambot, Luc; Peterka, Tom; Jeong, Byungil; Sandin, Daniel J.; Talandis, Jonas; Jagodic, Ratko; Nam, Sungwon; Hur, Hyejung; Sun, Yiwen
2013-01-01
This article will describe the progress since 2000 on research and development in 2-D and 3-D scalable resolution display walls that are built from tiling individual lower resolution flat panel displays. The article will describe approaches and trends in display hardware construction, middleware architecture, and user-interaction design. The article will also highlight examples of use cases and the benefits the technology has brought to their respective disciplines. © 1963-2012 IEEE.
Flame Motion In Gas Turbine Burner From Averages Of Single-Pulse Flame Fronts
Energy Technology Data Exchange (ETDEWEB)
Tylli, N.; Hubschmid, W.; Inauen, A.; Bombach, R.; Schenker, S.; Guethe, F. [Alstom (Switzerland); Haffner, K. [Alstom (Switzerland)
2005-03-01
Thermoacoustic instabilities of a gas turbine burner were investigated by flame front localization from measured OH laser-induced fluorescence single-pulse signals. The average position of the flame was obtained from the superposition of the single-pulse flame fronts at constant phase of the dominant acoustic oscillation. One observes that the flame position varies periodically with the phase angle of the dominant acoustic oscillation. (author)
Resolution 1540 (2004) overview
International Nuclear Information System (INIS)
Kasprzyk, N.
2013-01-01
This series of slides presents Resolution 1540, its features and its status of implementation. Resolution 1540 is a response to the risk that non-State actors may acquire, develop, or traffic in weapons of mass destruction and their means of delivery. Resolution 1540 was adopted unanimously by the U.N. Security Council on 28 April 2004. Resolution 1540 deals with the 3 kinds of weapons of mass destruction (nuclear, chemical and biological weapons) as well as 'related materials'. This resolution implies 3 sets of obligations: first, no support of non-State actors concerning weapons of mass destruction; secondly, to enact national laws that prohibit any non-State actor from dealing with weapons of mass destruction; and thirdly, to enforce domestic controls to prevent the proliferation of nuclear, chemical or biological weapons and their means of delivery. Four working groups operated by the 1540 Committee have been established: - Implementation (coordinator: Germany); - Assistance (coordinator: France); - International cooperation (interim coordinator: South Africa); and - Transparency and media outreach (coordinator: USA). The status of implementation of the resolution has continued to improve since 2004; much work remains to be done, and the gravity of the threat remains considerable. (A.C.)
Enhancement of Lamb Wave Imaging Resolution by Step Pulse Excitation and Prewarping
Directory of Open Access Journals (Sweden)
Shangchen Fu
2015-01-01
For the purpose of improving damage localization accuracy, a prewarping technique is combined with step pulse excitation and applied to Lamb wave imaging of plate structures with adjacent damages. Based on the step pulse excitation, various narrowband or burst responses can be derived by signal processing, which provides flexibility for the subsequent prewarping approach. A narrowband signal warped with a preselected distance is then designed, and the dispersion in the response of this prewarped signal is greatly reduced. However, in order to calculate the distance for prewarping, the first arrival needs to be estimated from the burst response. From the step-pulse response, narrowband responses at different central frequencies can be obtained, and by averaging the peak-value times of their first arrivals, a more accurate estimate can be calculated. By applying the prewarping method to the damage scattering signals before imaging, the imaging resolution of the delay-and-sum method can be greatly enhanced. An experiment carried out on an aluminum plate with adjacent damages proves the efficiency of this method.
Zhang, Shengli; Tang, J.
2018-01-01
Gear fault diagnosis relies heavily on the scrutiny of vibration responses measured. In reality, gear vibration signals are noisy and dominated by meshing frequencies as well as their harmonics, which oftentimes overlay the fault related components. Moreover, many gear transmission systems, e.g., those in wind turbines, constantly operate under non-stationary conditions. To reduce the influences of non-synchronous components and noise, a fault signature enhancement method that is built upon angle-frequency domain synchronous averaging is developed in this paper. Instead of being averaged in the time domain, the signals are processed in the angle-frequency domain to solve the issue of phase shifts between signal segments due to uncertainties caused by clearances, input disturbances, and sampling errors, etc. The enhanced results are then analyzed through feature extraction algorithms to identify the most distinct features for fault classification and identification. Specifically, Kernel Principal Component Analysis (KPCA) targeting at nonlinearity, Multilinear Principal Component Analysis (MPCA) targeting at high dimensionality, and Locally Linear Embedding (LLE) targeting at local similarity among the enhanced data are employed and compared to yield insights. Numerical and experimental investigations are performed, and the results reveal the effectiveness of angle-frequency domain synchronous averaging in enabling feature extraction and classification.
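For intuition, the plain time-domain form of synchronous averaging can be sketched as below; the paper's angle-frequency variant additionally resamples to the shaft angle and corrects the phase shifts between segments that make naive time-domain averaging fail under non-stationary operation. The period and noise level here are illustrative:

```python
import numpy as np

def synchronous_average(signal, period, n_rev):
    """Classical synchronous averaging: slice the record into n_rev segments
    of one (integer-sample) period each and average them. Components locked
    to the period reinforce; noise and non-synchronous components shrink
    roughly as 1/sqrt(n_rev)."""
    segments = signal[:n_rev * period].reshape(n_rev, period)
    return segments.mean(axis=0)

rng = np.random.default_rng(1)
period, n_rev = 200, 50
t = np.arange(period * n_rev)
mesh = np.sin(2 * np.pi * 4 * t / period)     # a 4th-order "meshing" tone
noisy = mesh + rng.normal(0.0, 1.0, t.size)   # tone buried in additive noise
avg = synchronous_average(noisy, period, n_rev)
```

The averaged record then feeds the feature-extraction stage (KPCA, MPCA, LLE in the paper) with a far cleaner fault signature than any single revolution provides.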
Digital storage of repeated signals
International Nuclear Information System (INIS)
Prozorov, S.P.
1984-01-01
An independent digital storage system designed for discriminating repeated signals from background noise is described. The signal averaging is performed off-line in real time by means of multiple selection of the investigated signal and integration at each point. Digital values are added in a simple summator, and the result is recorded in a storage device with a capacity of 1024 × 20 bit words, from where it can be output to an oscillograph or a plotter, or transmitted to a computer for subsequent processing. The described storage is a reliable and simple device, on the basis of which systems for nuclear magnetic resonance signal acquisition in different experiments have been developed.
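A running ("stable") mean update, valid after every sweep so the calibrated average can be displayed at any time during acquisition, can be sketched as follows; the channel count, sweep count, and noise level are illustrative:

```python
import numpy as np

def stable_average(sweeps):
    """'Stable averaging': after sweep n the buffer already holds the
    calibrated mean of the first n sweeps, so the running result can be
    displayed at any point during the averaging process."""
    avg = np.zeros_like(np.asarray(sweeps[0], dtype=float))
    for n, sweep in enumerate(sweeps, start=1):
        avg += (sweep - avg) / n   # incremental mean update
    return avg

rng = np.random.default_rng(2)
true_signal = np.sin(np.linspace(0.0, 2.0 * np.pi, 256))  # 256 "channels"
sweeps = [true_signal + rng.normal(0.0, 0.5, 256) for _ in range(4096)]
avg = stable_average(sweeps)
# 2**12 sweeps reduce the noise amplitude by sqrt(4096) = 64x, about 36 dB
```

The sqrt(N) scaling is why a 2^12-sweep averager tops out at roughly 36 dB of signal-to-noise improvement.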
International Nuclear Information System (INIS)
Wu, Pei-Hsin; Chung, Hsiao-Wen; Tsai, Ping-Huei; Wu, Ming-Long; Chuang, Tzu-Chao; Shih, Yi-Yu; Huang, Teng-Yi
2013-01-01
Purpose: One of the technical advantages of functional magnetic resonance imaging (fMRI) is its precise localization of changes from neuronal activities. While current practice of fMRI acquisition at a voxel size around 3 × 3 × 3 mm³ achieves satisfactory results in studies of basic brain functions, higher spatial resolution is required in order to resolve finer cortical structures. This study investigated spatial resolution effects on brain fMRI experiments using balanced steady-state free precession (bSSFP) imaging with 0.37 mm³ voxel volume at 3.0 T. Methods: In fMRI experiments, full and unilateral visual field 5 Hz flashing checkerboard stimulations were given to healthy subjects. The bSSFP imaging experiments were performed at three different frequency offsets to widen the coverage, with functional activations in the primary visual cortex analyzed using the general linear model. Variations of the spatial resolution were achieved by removing outer k-space data components. Results: The results show that a reduction in voxel volume from 3.44 × 3.44 × 2 mm³ to 0.43 × 0.43 × 2 mm³ resulted in an increase of the functional activation signals from (7.7 ± 1.7)% to (20.9 ± 2.0)% at 3.0 T, despite the threefold SNR decrease in the original images, leading to nearly invariant functional contrast-to-noise ratios (fCNR) even at high spatial resolution. Activation signals aligning nicely with gray matter sulci at high spatial resolution would, on the other hand, possibly have been mistaken as noise at low spatial resolution. Conclusions: It is concluded that the bSSFP sequence is a plausible technique for fMRI investigations at submillimeter voxel widths without compromising fCNR. The reduction of partial volume averaging with nonactivated brain tissues to retain fCNR is uniquely suitable for high spatial resolution applications such as the resolving of columnar organization in the brain.
High speed, High resolution terahertz spectrometers
International Nuclear Information System (INIS)
Kim, Youngchan; Yee, Dae Su; Yi, Miwoo; Ahn, Jaewook
2008-01-01
A variety of sources and methods have been developed for terahertz spectroscopy over almost two decades. Terahertz time domain spectroscopy (THz TDS) has attracted particular attention as a basic measurement method in the fields of THz science and technology. Recently, asynchronous optical sampling (AOS) THz TDS has been demonstrated, featuring rapid data acquisition and high spectral resolution. Also, terahertz frequency comb spectroscopy (TFCS) possesses attractive features for high precision terahertz spectroscopy. In this presentation, we report on these two types of terahertz spectrometer. Our high speed, high resolution terahertz spectrometer is demonstrated using two mode-locked femtosecond lasers with slightly different repetition frequencies, without a mechanical delay stage. The repetition frequencies of the two femtosecond lasers are stabilized by use of two phase-locked loops sharing the same reference oscillator. The time resolution of our terahertz spectrometer is measured using the cross-correlation method to be 270 fs. AOS THz TDS is presented in Fig. 1, which shows a time domain waveform rapidly acquired on a 10 ns time window. The inset shows a zoom into the signal with a 100 ps time window. The spectrum obtained by the fast Fourier transformation (FFT) of the time domain waveform has a frequency resolution of 100 MHz. The dependence of the signal-to-noise ratio (SNR) on the measurement time is also investigated.
1994 Average Monthly Sea Surface Temperature for California
National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA/ NASA AVHRR Oceans Pathfinder sea surface temperature data are derived from the 5-channel Advanced Very High Resolution Radiometers (AVHRR) on board the...
1993 Average Monthly Sea Surface Temperature for California
National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA/NASA AVHRR Oceans Pathfinder sea surface temperature data are derived from the 5-channel Advanced Very High Resolution Radiometers (AVHRR) on board the NOAA...
Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum
2017-04-01
We analyze the relations of the parameters in the moving average method to enhance the event detectability of a phase-sensitive optical time domain reflectometer (OTDR). If the external events have a unique vibration frequency, the control parameters of the moving average method should be optimized in order to detect these events efficiently. A phase-sensitive OTDR was implemented with a pulsed light source, composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier and a fiber Bragg grating filter, and a light-receiving part, which has a photo-detector and a high speed data acquisition system. The moving average method is operated with the control parameters: total number of raw traces, M; number of averaged traces, N; and step size of moving, n. The raw traces are obtained by the phase-sensitive OTDR with sound signals generated by a speaker. Using these trace data, the relation of the control parameters is analyzed. As a result, if the event signal has one frequency, optimal values of N and n exist to detect the event efficiently.
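The windowed averaging described above (M raw traces, N averaged traces, moving step n) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation, and the trace data are synthetic.

```python
import numpy as np

def moving_average_traces(raw, N, n):
    """Average N consecutive traces, advancing the window by n traces.

    raw : 2-D array of shape (M, samples) -- M raw OTDR traces.
    Returns one averaged trace per window position (illustrative sketch).
    """
    M = raw.shape[0]
    starts = range(0, M - N + 1, n)
    return np.array([raw[s:s + N].mean(axis=0) for s in starts])

# Synthetic example: 16 noisy traces of 8 samples each
rng = np.random.default_rng(0)
raw = 1.0 + 0.1 * rng.standard_normal((16, 8))
avg = moving_average_traces(raw, N=4, n=2)
print(avg.shape)  # (7, 8): (16 - 4)//2 + 1 = 7 window positions
```

Averaging N traces suppresses uncorrelated noise by roughly sqrt(N), while the step n trades time resolution against computation, which is why the abstract treats N and n as tuning parameters for a given event frequency.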
MARD—A moving average rose diagram application for the geosciences
Munro, Mark A.; Blenkinsop, Thomas G.
2012-12-01
MARD 1.0 is a computer program for generating smoothed rose diagrams by using a moving average, which is designed for use across the wide range of disciplines encompassed within the Earth Sciences. Available in MATLAB®, Microsoft® Excel and GNU Octave formats, the program is fully compatible with both Microsoft® Windows and Macintosh operating systems. Each version has been implemented in a user-friendly way that requires no prior experience in programming with the software. MARD conducts a moving average smoothing, a form of signal processing low-pass filter, upon the raw circular data according to a set of pre-defined conditions selected by the user. This form of signal processing filter smoothes the angular dataset, emphasising significant circular trends whilst reducing background noise. Customisable parameters include whether the data is uni- or bi-directional, the angular range (or aperture) over which the data is averaged, and whether an unweighted or weighted moving average is to be applied. In addition to the uni- and bi-directional options, the MATLAB® and Octave versions also possess a function for plotting 2-dimensional dips/pitches in a single, lower, hemisphere. The rose diagrams from each version are exportable as one of a selection of common graphical formats. Frequently employed statistical measures that determine the vector mean, mean resultant (or length), circular standard deviation and circular variance are also included. MARD's scope is demonstrated via its application to a variety of datasets within the Earth Sciences.
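The aperture-based smoothing that MARD applies to circular data can be illustrated with a minimal unweighted moving average over binned directional frequencies. This sketch assumes equal-width bins and omits MARD's weighting and bi-directional options.

```python
import numpy as np

def circular_moving_average(counts, aperture_bins):
    """Unweighted moving average of binned directional frequencies,
    wrapping around 0/360 degrees (illustrative; MARD offers more options)."""
    k = aperture_bins // 2
    n = len(counts)
    smoothed = np.empty(n)
    for i in range(n):
        # Indices wrap, so bins near 0 and 360 degrees average together
        idx = [(i + j) % n for j in range(-k, k + 1)]
        smoothed[i] = np.mean([counts[j] for j in idx])
    return smoothed

counts = np.array([0, 0, 10, 0, 0, 0, 0, 0], dtype=float)  # 45-degree bins
print(circular_moving_average(counts, 3))
```

Because the window wraps around the circle, every observation contributes to exactly `aperture_bins` windows, so the total frequency is preserved while isolated spikes are spread over the aperture.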
Safety Impact of Average Speed Control in the UK
DEFF Research Database (Denmark)
Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert
2016-01-01
of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers’ average travel speed over selected sections of the road and is normally called average speed control...... in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control....
on the performance of Autoregressive Moving Average Polynomial
African Journals Online (AJOL)
Timothy Ademakinwa
Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ..... Global Journal of Mathematics and Statistics. Vol. 1. ... Business and Economic Research Center.
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan M.
2015-11-19
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
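The quoted value 620160/8! can be checked with exact rational arithmetic; the reduced fraction 323/21 below is my own simplification, not a figure from the paper.

```python
from fractions import Fraction
from math import factorial

# Minimum average depth of a decision tree sorting 8 distinct elements
avg_depth = Fraction(620160, factorial(8))
print(avg_depth, "=", float(avg_depth))  # about 15.38 comparisons on average
```

For comparison, the information-theoretic lower bound is log2(8!) ≈ 15.3 comparisons, so the optimal tree's average depth sits just above it.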
Ocean circulation generated magnetic signals
DEFF Research Database (Denmark)
Manoj, C.; Kuvshinov, A.; Maus, S.
2006-01-01
Conducting ocean water, as it flows through the Earth's magnetic field, generates secondary electric and magnetic fields. An assessment of the ocean-generated magnetic fields and their detectability may be of importance for geomagnetism and oceanography. Motivated by the clear identification...... of ocean tidal signatures in the CHAMP magnetic field data we estimate the ocean magnetic signals of steady flow using a global 3-D EM numerical solution. The required velocity data are from the ECCO ocean circulation experiment and alternatively from the OCCAM model for higher resolution. We assume...... of the magnetic field, as compared to the ECCO simulation. Besides the expected signatures of the global circulation patterns, we find significant seasonal variability of ocean magnetic signals in the Indian and Western Pacific Oceans. Compared to seasonal variation, interannual variations produce weaker signals....
Delineation of facial archetypes by 3d averaging.
Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G
2004-10-01
The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups (European and Japanese) and from children with three genetic disorders (Williams syndrome, achondroplasia and Sotos syndrome) as well as a normal control group. The method included averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face averaging techniques there was not any warping or filling in of spaces by interpolation; however, this facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
Signal processing for distributed readout using TESs
International Nuclear Information System (INIS)
Smith, Stephen J.; Whitford, Chris H.; Fraser, George W.
2006-01-01
We describe optimal filtering algorithms for determining energy and position resolution in position-sensitive Transition Edge Sensor (TES) Distributed Read-Out Imaging Devices (DROIDs). Improved algorithms, developed using a small-signal finite-element model, are based on least-squares minimisation of the total noise power in the correlated dual-TES DROID. Through numerical simulations we show that significant improvements in energy and position resolution are theoretically possible over existing methods.
Time-optimized high-resolution readout-segmented diffusion tensor imaging.
Directory of Open Access Journals (Sweden)
Gernot Reishofer
Full Text Available Readout-segmented echo planar imaging with 2D navigator-based reacquisition is an emerging technique enabling the sampling of high-resolution diffusion images with reduced susceptibility artifacts. However, low signal from the small voxels and long scan times hamper its clinical applicability. Therefore, we introduce a regularization algorithm based on total variation that is applied directly to the entire diffusion tensor. The spatially varying regularization parameter is determined automatically, dependent on spatial variations in signal-to-noise ratio, thus avoiding over- or under-regularization. Information about the noise distribution in the diffusion tensor is extracted from the diffusion-weighted images by means of complex independent component analysis. Moreover, the combination of these features enables fully user-independent processing of the diffusion data. Tractography from in vivo data and from a software phantom demonstrates the advantage of the spatially varying regularization compared to un-regularized data with respect to parameters relevant for fiber tracking such as mean fiber length, track count, volume and voxel count. Specifically, for in vivo data the findings suggest that tractography from the regularized diffusion tensor based on one measurement (16 min) generates results comparable to the un-regularized data with three averages (48 min). This significant reduction in scan time renders high resolution (1 × 1 × 2.5 mm{sup 3}) diffusion tensor imaging of the entire brain applicable in a clinical context.
In-depth study of single photon time resolution for the Philips digital silicon photomultiplier
International Nuclear Information System (INIS)
Liu, Z.; Pizzichemi, M.; Ghezzi, A.; Paganoni, M.; Gundacker, S.; Auffray, E.; Lecoq, P.
2016-01-01
The digital silicon photomultiplier (SiPM) has been commercialised by Philips as an innovative technology compared to analog silicon photomultiplier devices. The Philips digital SiPM has a pair of time-to-digital converters (TDCs) connected to 12800 single photon avalanche diodes (SPADs). Detailed measurements were performed to understand the low-photon time response of the Philips digital SiPM. The single photon time resolution (SPTR) of every single SPAD in a pixel consisting of 3200 SPADs was measured and an average value of 85 ps full width at half maximum (FWHM) was observed. Each SPAD sends its signal to the TDC with a different signal propagation time, resulting in a so-called trigger network skew. The distribution of the trigger network skew for a pixel (3200 SPADs) has been measured and a variation of 50 ps FWHM was extracted. The SPTR of the whole pixel is the combination of SPAD jitter, trigger network skew and SPAD non-uniformity. The SPTR of a complete pixel was 103 ps FWHM at 3.3 V above breakdown voltage. Further, the effect of crosstalk at low photon levels has been studied, with the two-photon time resolution degrading if the events are a combination of detected (true) photons and crosstalk events. Finally, the time response to multiple photons was investigated.
High resolution solar observations
International Nuclear Information System (INIS)
Title, A.
1985-01-01
Currently there is a world-wide effort to develop the optical technology required for large diffraction-limited telescopes that must operate with high optical fluxes. These developments can be used to significantly improve high resolution solar telescopes both on the ground and in space. When looking at the problem of high resolution observations it is essential to keep in mind that a diffraction-limited telescope is an interferometer. Even a 30 cm aperture telescope, which is small for high resolution observations, is a big interferometer. Meter-class and larger diffraction-limited telescopes can be expected to be very unforgiving of inattention to detail. Unfortunately, even when an Earth-based telescope has perfect optics there are still problems with the quality of its optical path. The optical path includes not only the interior of the telescope, but also the immediate interface between the telescope and the atmosphere, and finally the atmosphere itself.
Directory of Open Access Journals (Sweden)
Adina FOLTIŞ
2012-01-01
Full Text Available The resolution, termination and reduction of labour conscription are regulated by articles 1549-1554 of the new Civil Code, which represents the common law in this matter. We consider that the new regulation does not conclusively clarify whether fault is necessary in order to invoke resolution: under the previous regulation this condition was inferred from the fact that, in the absence of fault, the question of non-performance shifts to the domain of fortuitous impossibility of execution, in which case it is not the resolution of the contract that is at issue but the risk it implies.
Effect of temporal averaging of meteorological data on predictions of groundwater recharge
Directory of Open Access Journals (Sweden)
Batalha Marcia S.
2018-06-01
Full Text Available Accurate estimates of infiltration and groundwater recharge are critical for many hydrologic, agricultural and environmental applications. Anticipated climate change in many regions of the world, especially in tropical areas, is expected to increase the frequency of high-intensity, short-duration precipitation events, which in turn will affect the groundwater recharge rate. Estimates of recharge are often obtained using monthly or even annually averaged meteorological time series data. In this study we employed the HYDRUS-1D software package to assess the sensitivity of groundwater recharge calculations to meteorological time series of different temporal resolutions (i.e., hourly, daily, weekly, monthly and yearly averaged precipitation and potential evaporation rates). Calculations were applied to three sites in Brazil having different climatological conditions: a tropical savanna (the Cerrado), a humid subtropical area (the temperate southern part of Brazil), and a very wet tropical area (Amazonia). To simplify the current analysis, we did not consider any land use effects, ignoring root water uptake. Temporal averaging of meteorological data was found to lead to significant bias in predictions of groundwater recharge, with much greater estimated recharge rates in the case of very uneven temporal rainfall distributions during the year involving distinct wet and dry seasons. For example, at the Cerrado site, using daily averaged data produced recharge rates up to 9 times greater than using yearly averaged data. In all cases, an increase in the averaging time of meteorological data led to lower estimates of groundwater recharge, especially at sites having coarse-textured soils. Our results show that temporal averaging limits the ability of simulations to predict deep penetration of moisture in response to precipitation, so that water remains in the upper part of the vadose zone, subject to upward flow and evaporation.
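The flattening effect of temporal averaging on high-intensity events can be illustrated with a toy rainfall series. The storm pattern and simple block-averaging scheme below are invented for illustration and are unrelated to the HYDRUS-1D simulations in the study.

```python
import numpy as np

def block_average(series, window):
    """Average a time series over non-overlapping blocks of `window` steps."""
    usable = len(series) - len(series) % window
    return series[:usable].reshape(-1, window).mean(axis=1)

# Toy daily rainfall: mostly dry, with one intense 60 mm storm per "month"
rain = np.zeros(360)
rain[::30] = 60.0

for window in (1, 7, 30):  # daily, weekly, monthly resolution
    avg = block_average(rain, window)
    # peak intensity shrinks with averaging; total water is conserved
    print(window, round(avg.max(), 2), round(avg.sum() * window, 1))
```

The total rainfall is identical at every resolution, but the peak daily intensity of 60 mm/day collapses to about 8.6 mm/day weekly and 2 mm/day monthly, which is the mechanism by which averaged forcing suppresses simulated deep percolation.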
Interpreting Bivariate Regression Coefficients: Going beyond the Average
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
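The note's central idea, that familiar averages fall out of an intercept-only regression, possibly after transforming the data, can be sketched as follows. The transformations chosen here (log for the geometric mean, reciprocal for the harmonic mean) are standard devices; the note's own worked examples may differ.

```python
import numpy as np

def ols_on_constant(y):
    """OLS of y on an intercept only: the fitted coefficient is the sample mean."""
    X = np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[0]

y = np.array([1.0, 2.0, 4.0, 8.0])
arith = ols_on_constant(y)                     # arithmetic mean
geo = np.exp(ols_on_constant(np.log(y)))       # geometric mean via log transform
harm = 1.0 / ols_on_constant(1.0 / y)          # harmonic mean via reciprocals
print(arith, geo, harm)
```

Weighted averages follow the same pattern by running weighted least squares on the constant regressor, which is presumably why a regression framework unifies all of these cases.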
Average stress in a Stokes suspension of disks
Prosperetti, Andrea
2004-01-01
The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is
47 CFR 1.959 - Computation of average terrain elevation.
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Computation of average terrain elevation. 1.959 Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.959 Computation of average terrain elevation. Except a...
47 CFR 80.759 - Average terrain elevation.
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw radials...
The average covering tree value for directed graph games
Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf
We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering
The Average Covering Tree Value for Directed Graph Games
Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.
2012-01-01
Abstract: We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all
18 CFR 301.7 - Average System Cost methodology functionalization.
2010-04-01
... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER...
Analytic computation of average energy of neutrons inducing fission
International Nuclear Information System (INIS)
Clark, Alexander Rich
2016-01-01
The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.
An alternative scheme of the Bogolyubov's average method
International Nuclear Information System (INIS)
Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.
1990-01-01
In this paper the average energy and the magnetic moment conservation laws in the Drift Theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws and afterwards the average is performed. This scheme is more economical, in terms of time and algebraic calculations, than the usual procedure of Bogolyubov's method. (Author)
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail
2015-01-01
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any
A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, Manfred
2003-01-01
We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...
A high-resolution atlas of composite Sloan Digital Sky Survey galaxy spectra
Dobos, László; Csabai, István.; Yip, Ching-Wa; Budavári, Tamás.; Wild, Vivienne; Szalay, Alexander S.
2012-02-01
In this work we present an atlas of composite spectra of galaxies based on the data of the Sloan Digital Sky Survey Data Release 7 (SDSS DR7). Galaxies are classified by colour, nuclear activity and star formation activity to calculate average spectra of high signal-to-noise ratio (S/N) and resolution (? at Δλ= 1 Å), using an algorithm that is robust against outliers. Besides composite spectra, we also compute the first five principal components of the distributions in each galaxy class to characterize the nature of variations of individual spectra around the averages. The continua of the composite spectra are fitted with BC03 stellar population synthesis models to extend the wavelength coverage beyond the coverage of the SDSS spectrographs. Common derived parameters of the composites are also calculated: integrated colours in the most popular filter systems, line-strength measurements and continuum absorption indices (including Lick indices). These derived parameters are compared with the distributions of parameters of individual galaxies, and it is shown on many examples that the composites of the atlas cover much of the parameter space spanned by SDSS galaxies. By co-adding thousands of spectra, a total integration time of several months can be reached, which results in extremely low noise composites. The variations in redshift not only allow for extending the spectral coverage bluewards to the original wavelength limit of the SDSS spectrographs, but also make higher spectral resolution achievable. The composite spectrum atlas is available online at .
International Nuclear Information System (INIS)
Kozlovskii, Andrei V
2007-01-01
The scheme of an active interferometer for the amplification of small optical signals prior to photodetection is proposed. The scheme provides considerable amplification of signals while preserving their quantum-statistical properties (ideal amplification) and can also improve these properties under certain conditions. The two-mode squeezed state of light produced upon four-wave mixing, which is used for signal amplification, can be transformed into a non-classical output field state squeezed in the number of photons. The scheme is phase-sensitive upon amplification of the input coherent signal. It is shown that in the case of an incoherent input signal with an average number of photons ⟨n_s⟩ ∼ 1, the amplification process introduces no additional quantum noise, however large the amplification. A scheme is also proposed for cascade small-signal amplification (⟨n_s⟩ ∼ 1) in the coherent state, producing the amplified signal in a squeezed sub-Poissonian state, which can be used for the high-resolution detection of weak and ultraweak optical signals. (quantum optics)
Self-similarity of higher-order moving averages
Arianos, Sergio; Carbone, Anna; Türk, Christian
2011-10-01
In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by the simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets with the same size. These polynomials could be interesting for those applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
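For the first-order (standard) moving average, the detrending moving average method can be sketched as follows: compute the variance of the series about its n-point moving average for several window sizes and read the Hurst exponent from the log-log slope, since the detrending variance scales as n^{2H}. The random-walk input (H = 0.5) and the window choices are illustrative, not taken from the paper.

```python
import numpy as np

def dma_variance(x, n):
    """Variance of the series around its n-point (first-order) moving average."""
    kernel = np.ones(n) / n
    trend = np.convolve(x, kernel, mode="valid")
    resid = x[n - 1:] - trend  # align each point with its trailing moving average
    return np.mean(resid ** 2)

rng = np.random.default_rng(42)
walk = np.cumsum(rng.standard_normal(20000))  # Brownian-like series, H = 0.5

windows = np.array([8, 16, 32, 64, 128])
sigma2 = np.array([dma_variance(walk, n) for n in windows])
# sigma^2(n) ~ n^{2H}; the slope of the log-log fit estimates 2H
slope = np.polyfit(np.log(windows), np.log(sigma2), 1)[0]
H = slope / 2
print(round(H, 2))
```

The higher-order variants in the paper replace the plain moving average by a moving polynomial fit, but the variance-versus-window scaling analysis proceeds the same way.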
Anomalous behavior of q-averages in nonextensive statistical mechanics
International Nuclear Information System (INIS)
Abe, Sumiyoshi
2009-01-01
A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, it has however been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L1-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in any of these cases.
Bootstrapping pre-averaged realized volatility under market microstructure noise
DEFF Research Database (Denmark)
Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour
The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre......-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995......)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...
Front-end data reduction of diagnostic signals by real-time digital filtering
International Nuclear Information System (INIS)
Zasche, D.; Fahrbach, H.U.; Harmeyer, E.
1984-01-01
Diagnostic measurements on a fusion plasma with high resolution in space, time and signal amplitude involve handling large amounts of data. In the design of the soft-X-ray pinhole camera diagnostic for JET (100 detectors in 2 cameras) a new approach to this problem was found. The analogue-to-digital conversion is performed continuously at the highest sample rate of 200 kHz, lower sample rates (10 kHz, 1 kHz, 100 Hz) are obtained by real-time digital filters which calculate weighted averages over consecutive samples and are undersampled at their outputs to reduce the data rate. At any time, the signals from all detectors are available at all possible data rates in ring buffers. The appropriate data rate can always be recorded on demand. (author)
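The cascade of rates quoted above (200 kHz sampling reduced to 10 kHz, 1 kHz and 100 Hz) can be sketched with a boxcar stage that averages consecutive samples and keeps one output per block. This is a simplified stand-in for the weighted real-time filters described, not the JET front-end implementation.

```python
import numpy as np

def decimate_by_averaging(signal, factor):
    """One filter stage: average `factor` consecutive samples, then keep
    one output per block (boxcar low-pass followed by undersampling)."""
    usable = len(signal) - len(signal) % factor
    return signal[:usable].reshape(-1, factor).mean(axis=1)

# Cascade 200 kHz -> 10 kHz -> 1 kHz -> 100 Hz, as in the abstract
fs = 200_000
x = np.sin(2 * np.pi * 50 * np.arange(fs) / fs)  # one second of a 50 Hz tone
r10k = decimate_by_averaging(x, 20)    # 200 kHz / 20 = 10 kHz
r1k = decimate_by_averaging(r10k, 10)  # 10 kHz / 10 = 1 kHz
r100 = decimate_by_averaging(r1k, 10)  # 1 kHz / 10 = 100 Hz
print(len(r10k), len(r1k), len(r100))  # 10000 1000 100
```

Keeping all stages running continuously in ring buffers, as the abstract describes, means any rate can be recorded on demand without re-filtering the raw stream.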
High efficiency processing for reduced amplitude zones detection in the HRECG signal
Dugarte, N.; Álvarez, A.; Balacco, J.; Mercado, G.; Gonzalez, A.; Dugarte, E.; Olivares, A.
2016-04-01
Summary - This article presents part of a broader research programme planned for the medium to long term, with the intention of establishing a new philosophy of surface electrocardiogram analysis. This research aims to find indicators of cardiovascular disease at an early stage that may go unnoticed with conventional electrocardiography. This paper reports the development of processing software which combines some existing techniques and incorporates novel methods for the detection of reduced amplitude zones (RAZ) in the high resolution electrocardiographic signal (HRECG). The algorithm consists of three stages: efficient processing for QRS detection, an averaging filter using correlation techniques, and a step for RAZ detection. Preliminary results show the efficiency of the system and point to the incorporation of new signal analysis techniques involving all 12 leads.
Non-sky-averaged sensitivity curves for space-based gravitational-wave observatories
International Nuclear Information System (INIS)
Vallisneri, Michele; Galley, Chad R
2012-01-01
The signal-to-noise ratio (SNR) is used in gravitational-wave observations as the basic figure of merit for detection confidence and, together with the Fisher matrix, for the amount of physical information that can be extracted from a detected signal. SNRs are usually computed from a sensitivity curve, which describes the gravitational-wave amplitude needed by a monochromatic source of given frequency to achieve a threshold SNR. Although the term 'sensitivity' is used loosely to refer to the detector's noise spectral density, the two quantities are not the same: the sensitivity includes also the frequency- and orientation-dependent response of the detector to gravitational waves and takes into account the duration of observation. For interferometric space-based detectors similar to LISA, which are sensitive to long-lived signals and have constantly changing position and orientation, exact SNRs need to be computed on a source-by-source basis. For convenience, most authors prefer to work with sky-averaged sensitivities, accepting inaccurate SNRs for individual sources and giving up control over the statistical distribution of SNRs for source populations. In this paper, we describe a straightforward end-to-end recipe to compute the non-sky-averaged sensitivity of interferometric space-based detectors of any geometry. This recipe includes the effects of spacecraft motion and of seasonal variations in the partially subtracted confusion foreground from Galactic binaries, and it can be used to generate a sampling distribution of sensitivities for a given source population. In effect, we derive error bars for the sky-averaged sensitivity curve, which provide a stringent statistical interpretation for previously unqualified statements about sky-averaged SNRs. As a worked-out example, we consider isotropic and Galactic-disk populations of monochromatic sources, as observed with the 'classic LISA' configuration. We confirm that the (standard) inverse-rms average sensitivity
Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis
Energy Technology Data Exchange (ETDEWEB)
Nguyen, Hoa T. [Univ. of Utah, Salt Lake City, UT (United States); Stone, Daithi [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2016-01-01
An ongoing challenge in visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for specific visualization tasks. Typical approaches to this problem have proposed reduced-resolution versions of data, projections of data, or both. These approaches still have limitations, such as high computational cost or susceptibility to error. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait in the data, namely variation. We use two case studies to explore this idea: one that uses a synthetic dataset, and another that uses a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that a statistical measure preserves the variation signal inherent in the data more faithfully, across both multi-dimensional projections and multi-resolution representations, than a methodology based upon averaging.
International Nuclear Information System (INIS)
Fick, S.; Shilts, E.
2008-01-01
Nitrogen dioxide (NO2) gas is created when fossil fuels are burned. Hot spots of NO2 pollution in the troposphere have been identified by researchers at the Royal Netherlands Meteorological Institute. In addition to traffic, the biggest emitters of NO2 include power plants, heavy industry and oil refineries. The City of Shanghai in China ranks with Los Angeles and Mexico City as the urban areas with the highest NO2 concentrations in the world. NO2 combines with particles in the air to create a smog that hangs over larger cities in the summer and plays a role in the production of ground-level ozone, both of which cause a variety of respiratory problems. According to the World Health Organization, such air pollution reduces the life of the average European by 8.6 months. This article included a map indicating NO2 concentrations around the world. The high levels at Fort McMurray, Alberta can be attributed to the NO2 emitted by oil sands plants. Individual power plants in Chandrapur and Ramagundam in India and oil refineries around the Persian Gulf also revealed high levels, as did the Highveld area outside of Johannesburg in South Africa, where a number of power plants sit on a plateau. At high altitude, NO2 lingers longer in the air. In the past decade in Europe and eastern North America, cleaner technology in cars and power plants has led to declines in NO2 in those regions. However, huge increases in emissions in East Asia mean the air will remain smoggy. 1 fig.
Average values of 235U resonance parameters up to 500 eV
International Nuclear Information System (INIS)
Leal, L.C.
1991-01-01
An R-matrix analysis of 235U neutron cross sections was recently completed. The analysis was performed with the multilevel-multichannel Reich-Moore computer code SAMMY and extended the resolved resonance region up to 500 eV. Several high-resolution measurements, namely transmission, fission and capture data, as well as spin-separated fission data, were analyzed in a consistent manner, and a very accurate parametrization of these data up to 500 eV was obtained. The aim of this paper is to present the results of average values of the resonance parameters. 9 refs., 1 tab
A sub-millimeter resolution PET detector module using a multi-pixel photon counter array
International Nuclear Information System (INIS)
Song, Tae Yong; Wu Heyu; Komarov, Sergey; Tai, Yuan-Chuan; Siegel, Stefan B
2010-01-01
A PET block detector module using an array of sub-millimeter lutetium oxyorthosilicate (LSO) crystals read out by an array of surface-mount, semiconductor photosensors has been developed. The detector consists of a LSO array, a custom acrylic light guide, a 3 x 3 multi-pixel photon counter (MPPC) array (S10362-11-050P, Hamamatsu Photonics, Japan) and a readout board with a charge division resistor network. The LSO array consists of 100 crystals, each measuring 0.8 x 0.8 x 3 mm³ and arranged in 0.86 mm pitches. A Monte Carlo simulation was used to aid the design and fabrication of a custom light guide to control the distribution of scintillation light over the surface of the MPPC array. The output signals of the nine MPPCs are multiplexed by a charge division resistor network to generate four position-encoded analog outputs. Flood image, energy resolution and timing resolution measurements were performed using standard NIM electronics. The linearity of the detector response was investigated using gamma-ray sources of different energies. The 10 x 10 array of 0.8 mm LSO crystals was clearly resolved in the flood image. The average energy resolution and standard deviation were 20.0% full-width at half-maximum (FWHM) and ±5.0%, respectively, at 511 keV. The timing resolution of a single MPPC coupled to a LSO crystal was found to be 857 ps FWHM, and the value for the central region of the detector module was 1182 ps FWHM when a ±10% energy window was applied. The nonlinear response of a single MPPC when used to read out a single LSO was observed among the corner crystals of the proposed detector module. However, the central region of the detector module exhibits significantly less nonlinearity (6.5% for 511 keV). These results demonstrate that (1) a charge-sharing resistor network can effectively multiplex MPPC signals and reduce the number of output signals without significantly degrading the performance of a PET detector and (2) a custom light guide to permit light sharing
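The charge-division readout described above reduces nine MPPC signals to four position-encoded outputs. A minimal sketch of the standard Anger-style decoding such networks enable is below; the corner assignment and the numbers are assumptions for illustration, not the module's actual wiring:

```python
# Sketch of Anger-style position decoding from four position-encoded
# outputs (A, B, C, D) of a charge-division resistor network.
# The corner-to-signal mapping here is an illustrative assumption.
def decode_position(A, B, C, D):
    E = A + B + C + D                # total (energy) signal
    x = (A + B - C - D) / E          # normalized x coordinate, in [-1, 1]
    y = (A - B + C - D) / E          # normalized y coordinate, in [-1, 1]
    return x, y, E

# An event depositing more charge on the "top" corners sits at positive y.
x, y, E = decode_position(2.0, 1.0, 2.0, 1.0)
```

The flood image used to resolve individual crystals is just a 2-D histogram of these (x, y) pairs over many events.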
High-Resolution PET Detector. Final report
International Nuclear Information System (INIS)
Karp, Joel
2014-01-01
The objective of this project was to develop an understanding of the limits of performance for a high resolution PET detector using an approach based on continuous scintillation crystals rather than pixelated crystals. The overall goal was to design a high-resolution detector, which requires both high spatial resolution and high sensitivity for 511 keV gammas. Continuous scintillation detectors (Anger cameras) have been used extensively for both single-photon and PET scanners; however, these instruments were based on NaI(Tl) scintillators using relatively large, individual photo-multipliers. In this project we investigated the potential of this type of detector technology to achieve higher spatial resolution through the use of improved scintillator materials and photo-sensors, and modification of the detector surface to optimize the light response function. We achieved an average spatial resolution of 3 mm for a 25-mm thick, LYSO continuous detector using a maximum likelihood position algorithm and shallow slots cut into the entrance surface
High resolution drift chambers
International Nuclear Information System (INIS)
Va'vra, J.
1985-07-01
High precision drift chambers capable of achieving resolutions of 50 μm or better are discussed. In particular, we compare so-called cool and hot gases, various charge collection geometries and several timing techniques, and we also discuss some systematic problems. We also present what we would consider an "ultimate" design of the vertex chamber. 50 refs., 36 figs., 6 tabs
A Divergence Median-based Geometric Detector with A Weighted Averaging Filter
Hua, Xiaoqiang; Cheng, Yongqiang; Li, Yubo; Wang, Hongqiang; Qin, Yuliang
2018-01-01
To overcome the performance degradation of the classical fast Fourier transform (FFT)-based constant false alarm rate detector with limited sample data, a divergence median-based geometric detector on the Riemannian manifold of Hermitian positive definite matrices is proposed in this paper. In particular, an autocorrelation matrix is used to model the correlation of the sample data. This modeling avoids the poor Doppler resolution as well as the energy spread of the Doppler filter banks resulting from the FFT. Moreover, a weighted averaging filter, conceived from the philosophy of bilateral filtering in image denoising, is proposed and combined within the geometric detection framework. As the weighted averaging filter acts as a clutter suppressor, the performance of the geometric detector is improved. Numerical experiments are given to validate the effectiveness of our proposed method.
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
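The entropy lower bound mentioned above (entropy divided by log2 k for k-valued attributes) can be computed directly; a minimal sketch:

```python
import math

# Sketch: entropy lower bound on the minimum average depth of a decision
# tree for a diagnostic problem with outcome probabilities p_i, using
# attributes that take k possible values (bound = H(p) / log2(k)).
def entropy_lower_bound(probs, k=2):
    H = -sum(p * math.log2(p) for p in probs if p > 0)  # Shannon entropy, bits
    return H / math.log2(k)

# Uniform distribution over 8 outcomes with binary attributes: bound = 3,
# matching the 3 yes/no questions needed to distinguish 8 equally likely cases.
bound = entropy_lower_bound([1 / 8] * 8, k=2)
```

For problems with a complete set of attributes, the chapter's result says the true minimum average depth exceeds this bound by at most one.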
Lateral dispersion coefficients as functions of averaging time
International Nuclear Information System (INIS)
Sheih, C.M.
1980-01-01
Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations for other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need in discriminating various processes in studies of plume dispersion
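The power-law adjustment attributed to Turner in the abstract can be sketched as follows; the functional form C(t) = C15 · (15/t)^p and the exponent value are assumptions quoted from common air-quality practice, not taken from this paper:

```python
# Sketch of a Turner-style power-law relating a concentration averaged
# over t minutes to the 15-min average: C(t) = C_15min * (15 / t)**p.
# The exponent p = 0.17 is a commonly quoted illustrative value, not a
# number from this paper.
def adjust_concentration(c_15min, t_minutes, p=0.17):
    return c_15min * (15.0 / t_minutes) ** p

# Longer averaging smooths out peaks, so a 60-min average is lower than
# the 15-min average for the same release.
c60 = adjust_concentration(100.0, 60.0)
```

The abstract's point is precisely that such a single power law only holds over a limited range of averaging and travel times.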
2010-07-01
... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
2010-07-01
... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...
Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.
Directory of Open Access Journals (Sweden)
Jacinta Chan Phooi M'ng
Full Text Available The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA'), in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.
Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.
Chan Phooi M'ng, Jacinta; Zainudin, Rozaimah
2016-01-01
The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA') in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.
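The paper's Efficacy Ratio is its own construction; as a hedged stand-in, the sketch below contrasts a fixed-length simple moving average with a Kaufman-style adaptive moving average, whose efficiency-ratio weighting is in the same spirit of adapting the indicator to prevailing volatility:

```python
# Sketch: fixed-length SMA vs. a Kaufman-style adaptive moving average.
# This is NOT the paper's AMA'; the efficiency ratio and smoothing
# constants below are illustrative stand-ins.
def sma(prices, n):
    """Simple moving average of window n (one value per full window)."""
    return [sum(prices[i - n + 1:i + 1]) / n for i in range(n - 1, len(prices))]

def adaptive_ma(prices, fast=2, slow=30):
    out = [prices[0]]
    for i in range(1, len(prices)):
        # stand-in efficiency ratio: |net move| / total path length
        w = prices[max(0, i - 10):i + 1]
        path = sum(abs(w[j + 1] - w[j]) for j in range(len(w) - 1))
        er = abs(w[-1] - w[0]) / path if path > 0 else 0.0
        # smoothing constant interpolates between fast and slow EMAs
        alpha = (er * (2 / (fast + 1) - 2 / (slow + 1)) + 2 / (slow + 1)) ** 2
        out.append(out[-1] + alpha * (prices[i] - out[-1]))
    return out

prices = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5]
smoothed = adaptive_ma(prices)
```

In a steady trend the efficiency ratio is high and the adaptive average tracks price closely; in choppy, directionless trading it is low and the average flattens, which is what suppresses whipsaws.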
Average and local structure of selected metal deuterides
Energy Technology Data Exchange (ETDEWEB)
Soerby, Magnus H.
2005-07-01
deuterides at 1 bar D2 and elevated temperatures (373-573 K) is presented in Paper 1. Deuterium atoms occupy chiefly three types of tetrahedral interstitial sites: two coordinated by 4 Zr atoms and one coordinated by 3 Zr and 1 Ni atoms. The site preference is predominantly ruled by sample composition and less by temperature. On the other hand, the spatial deuterium distribution among the preferred sites is strongly temperature dependent, as the long-range correlations break down on heating. The sample is fully decomposed into tetragonal ZrD2 and Zr7Ni10 at 873 K. Th2AlD4 was the only metal deuteride with a reported D-D separation substantially below 2 Å (1.79 Å) prior to the discovery of RENiInD1.33. However, as the first ternary deuteride ever studied by PND, the original structure solution was based on very low-resolution data. The present reinvestigation (Paper 2) shows that the site preference was correctly determined, but the deuterium atoms are slightly shifted compared to the earlier report, now yielding acceptable interatomic separations. Solely Th4 tetrahedra are occupied in the various Th2Al deuterides. Th8Al4D11 (Th2AlD2.75) takes a superstructure with a tripled c-axis due to deuterium ordering. Th2AlD2.3 is disordered, and the average distance between partly occupied sites appears as just 1.55 Å in Rietveld refinements. However, short-range order is expected to prevent D-D distances under 2 Å. Paper 3 presents the first Reverse Monte Carlo (RMC) study of a metal deuteride. RMC is used in combination with total neutron scattering to model short-range deuterium correlations in disordered c-VD0.77. A practically complete blocking of interstitial sites closer than 2 Å to any occupied deuterium site is observed. The short-range correlations resemble those of the fully ordered low-temperature phase c-VD0.75 at length scales up to about 3 Å, i.e. for the first two coordination spheres.
Paper 4 concerns RMC modelling of short-range deuterium correlations in ZrCr2D4
Average and local structure of selected metal deuterides
International Nuclear Information System (INIS)
Soerby, Magnus H.
2004-01-01
elevated temperatures (373-573 K) is presented in Paper 1. Deuterium atoms occupy chiefly three types of tetrahedral interstitial sites: two coordinated by 4 Zr atoms and one coordinated by 3 Zr and 1 Ni atoms. The site preference is predominantly ruled by sample composition and less by temperature. On the other hand, the spatial deuterium distribution among the preferred sites is strongly temperature dependent, as the long-range correlations break down on heating. The sample is fully decomposed into tetragonal ZrD2 and Zr7Ni10 at 873 K. Th2AlD4 was the only metal deuteride with a reported D-D separation substantially below 2 Å (1.79 Å) prior to the discovery of RENiInD1.33. However, as the first ternary deuteride ever studied by PND, the original structure solution was based on very low-resolution data. The present reinvestigation (Paper 2) shows that the site preference was correctly determined, but the deuterium atoms are slightly shifted compared to the earlier report, now yielding acceptable interatomic separations. Solely Th4 tetrahedra are occupied in the various Th2Al deuterides. Th8Al4D11 (Th2AlD2.75) takes a superstructure with a tripled c-axis due to deuterium ordering. Th2AlD2.3 is disordered, and the average distance between partly occupied sites appears as just 1.55 Å in Rietveld refinements. However, short-range order is expected to prevent D-D distances under 2 Å. Paper 3 presents the first Reverse Monte Carlo (RMC) study of a metal deuteride. RMC is used in combination with total neutron scattering to model short-range deuterium correlations in disordered c-VD0.77. A practically complete blocking of interstitial sites closer than 2 Å to any occupied deuterium site is observed. The short-range correlations resemble those of the fully ordered low-temperature phase c-VD0.75 at length scales up to about 3 Å, i.e. for the first two coordination spheres. Paper 4 concerns RMC modelling of short-range deuterium correlations in ZrCr2D4 at ambient and low
Average inactivity time model, associated orderings and reliability properties
Kayid, M.; Izadkhah, S.; Abouammoh, A. M.
2018-02-01
In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable for handling the heterogeneity of the time of failure of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.
Average L-shell fluorescence, Auger, and electron yields
International Nuclear Information System (INIS)
Krause, M.O.
1980-01-01
The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically for 40 3 subshell yields in most cases of inner-shell ionization
Simultaneous inference for model averaging of derived parameters
DEFF Research Database (Denmark)
Jensen, Signe Marie; Ritz, Christian
2015-01-01
Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family…
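A common concrete instance of model averaging (much simpler than the simultaneous-inference machinery proposed above) weights per-model estimates by information-criterion weights; a minimal sketch with illustrative numbers:

```python
import math

# Sketch of model averaging with AIC weights: each candidate model's
# estimate is weighted by exp(-0.5 * (AIC_i - AIC_min)), normalized.
# This illustrates the basic idea only; the paper's method additionally
# delivers asymptotically correct standard errors.
def aic_weights(aics):
    d = [a - min(aics) for a in aics]
    w = [math.exp(-0.5 * x) for x in d]
    s = sum(w)
    return [x / s for x in w]

def model_averaged(estimates, aics):
    return sum(wi * ei for wi, ei in zip(aic_weights(aics), estimates))

# Two models with equal AIC contribute equally: the average is the mean.
est = model_averaged([1.0, 2.0], [10.0, 10.0])
```

The hard part, which the abstract addresses, is attaching valid (and simultaneous) confidence intervals to such averaged estimates.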
Salecker-Wigner-Peres clock and average tunneling times
International Nuclear Information System (INIS)
Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.
2011-01-01
The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated with the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).
Time average vibration fringe analysis using Hilbert transformation
International Nuclear Information System (INIS)
Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad
2010-01-01
Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
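The Hilbert-transform phase extraction described above can be sketched with an FFT-based analytic signal; the synthetic cosine fringe below is an illustrative stand-in for the Bessel fringes of the paper:

```python
import numpy as np

# Sketch: extract phase from a single fringe record via the analytic
# signal (Hilbert transform computed with the FFT). The cosine carrier
# here is a synthetic stand-in for a measured fringe pattern.
def analytic_signal(x):
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0          # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0              # Nyquist bin kept once
    return np.fft.ifft(X * h)        # negative frequencies suppressed

t = np.linspace(0, 1, 1000, endpoint=False)
fringe = np.cos(2 * np.pi * 20 * t)  # 20-cycle synthetic fringe carrier
phase = np.unwrap(np.angle(analytic_signal(fringe)))
f_est = (phase[-1] - phase[0]) / (2 * np.pi * (t[-1] - t[0]))
```

The unwrapped phase is recovered from the single record, which is the advantage over multi-frame bias-modulation methods noted in the abstract.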
Average multiplications in deep inelastic processes and their interpretation
International Nuclear Information System (INIS)
Kiselev, A.V.; Petrov, V.A.
1983-01-01
Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of nonperturbative character. With the increase of the final hadron state energy, the leading contribution to the average multiplicity comes from a parton subprocess due to the production of massive quark and gluon jets and their further fragmentation, as the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e+e− annihilation at high energies tends to unity.
Fitting a function to time-dependent ensemble averaged data
DEFF Research Database (Denmark)
Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders
2018-01-01
Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion… method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software…
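The core of any weighted least-squares fit of correlated data is the generalized least-squares estimator; a minimal sketch (not the WLS-ICE implementation, whose error estimation is more involved):

```python
import numpy as np

# Sketch of generalized least squares with a full error covariance C:
# beta = (X^T C^-1 X)^-1 X^T C^-1 y. Fitting correlated time-averaged
# data with such a weighting is the basic idea behind methods like the
# paper's WLS-ICE; the example data below are synthetic.
def gls_fit(X, y, C):
    Ci = np.linalg.inv(C)
    A = X.T @ Ci @ X
    b = X.T @ Ci @ y
    return np.linalg.solve(A, b)

t = np.arange(1.0, 6.0)
X = np.column_stack([t])       # model: msd(t) = 2*D*t, fit the slope 2*D
y = 2.0 * 0.5 * t              # noiseless synthetic data with D = 0.5
C = np.eye(len(t))             # identity covariance for this demo
slope = gls_fit(X, y, C)[0]    # recovers 2*D = 1.0
```

In real trajectory data C is far from diagonal (successive time lags share the same underlying trajectories), which is exactly why naive unweighted fits mislead.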
Average wind statistics for SRP area meteorological towers
International Nuclear Information System (INIS)
Laurinat, J.E.
1987-01-01
A quality-assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982-1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975-1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated by the averaged statistics
A simple consensus algorithm for distributed averaging in random ...
Indian Academy of Sciences (India)
Random geographical networks are realistic models for wireless sensor ... work are cheap, unreliable, with limited computational power and limited .... signal xj from node j, j does not need to transmit its degree to i in order to let i compute.
International Nuclear Information System (INIS)
Huh, Hyung; Koo, Kil Mo; Cheong, Yong Moo; Kim, G. J.
1995-01-01
Many signal-processing techniques have been found to be useful in ultrasonic and nondestructive evaluation. Among the most popular techniques are signal averaging, spatial compounding, matched filters, and homomorphic processing. One significant new process is split-spectrum processing (SSP), which can be equally useful for signal-to-noise ratio (SNR) improvement and grain characterization in several engineering materials. The purpose of this paper is to explore the utility of SSP in ultrasonic NDE. A wide variety of engineering problems are reviewed and suggestions for implementation of the technique are provided. SSP exploits the frequency-dependent response of the interfering coherent noise produced by unresolvable scatterers in the resolution range cell of a transducer. It is implemented by splitting the frequency spectrum of the received signal using Gaussian bandpass filters. The theoretical basis for the potential of SSP for grain characterization in SUS 304 material is discussed, and some experimental evidence for the feasibility of the approach is presented, including results of SNR enhancement in signals obtained from four real samples of SUS 304. The influence of various processing parameters on the performance of the processing technique is also discussed. The minimization algorithm, which provides an excellent SNR enhancement when used either in conjunction with other SSP algorithms like polarity-check or by itself, is also presented.
International Nuclear Information System (INIS)
Huh, H.; Koo, K. M.; Kim, G. J.
1996-01-01
Many signal-processing techniques have been found to be useful in ultrasonic and nondestructive evaluation. Among the most popular techniques are signal averaging, spatial compounding, matched filters and homomorphic processing. One significant new process is split-spectrum processing (SSP), which can be equally useful for signal-to-noise ratio (SNR) improvement and grain characterization in several specimens. The purpose of this paper is to explore the utility of SSP in ultrasonic NDE. A wide variety of engineering problems are reviewed, and suggestions for implementation of the technique are provided. SSP exploits the frequency-dependent response of the interfering coherent noise produced by unresolvable scatterers in the resolution range cell of a transducer. It is implemented by splitting the frequency spectrum of the received signal using Gaussian bandpass filters. The theoretical basis for the potential of SSP for grain characterization in SUS 304 material is discussed, and some experimental evidence for the feasibility of the approach is presented, including results of SNR enhancement in signals obtained from four real samples of SUS 304. The influence of various processing parameters on the performance of the processing technique is also discussed. The minimization algorithm, which provides an excellent SNR enhancement when used either in conjunction with other SSP algorithms like polarity-check or by itself, is also presented
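The splitting-and-recombination idea in the two abstracts above can be sketched as follows; the filter centers, bandwidth, sampling rate, and the minimum-magnitude recombination rule are illustrative assumptions:

```python
import numpy as np

# Sketch of split-spectrum processing: split the received spectrum with
# Gaussian bandpass filters, then recombine the sub-band signals with a
# minimization rule (take the minimum magnitude across bands at each
# sample). All numeric parameters are illustrative assumptions.
def split_spectrum_min(signal, fs, centers, bandwidth):
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    S = np.fft.rfft(signal)
    bands = []
    for fc in centers:
        g = np.exp(-0.5 * ((freqs - fc) / bandwidth) ** 2)  # Gaussian bandpass
        bands.append(np.fft.irfft(S * g, n))
    # coherent grain noise decorrelates across bands; a true echo does not,
    # so the per-sample minimum suppresses noise more than signal
    return np.min(np.abs(np.array(bands)), axis=0)

fs = 100e6                                   # 100 MHz sampling (illustrative)
t = np.arange(1024) / fs
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 5e6 * t) + 0.3 * rng.standard_normal(1024)
out = split_spectrum_min(sig, fs, centers=[4e6, 5e6, 6e6], bandwidth=0.5e6)
```

Polarity-check recombination, mentioned in the abstracts, instead zeroes samples where the sub-band signals disagree in sign.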
Directory of Open Access Journals (Sweden)
François Nicolas
2009-03-01
Full Text Available Abstract Background: There are many sources of variation in dual-labelled microarray experiments, including data acquisition and image processing. The final interpretation of experiments strongly relies on the accuracy of the measurement of the signal intensity. For low-intensity spots in particular, accurately estimating gene expression variations remains a challenge, as signal measurement is in this case highly subject to fluctuations. Results: To evaluate the fluctuations in the fluorescence intensities of spots, we used series of successive scans, at the same settings, of whole-genome arrays. We measured the decrease in fluorescence and evaluated the influence of different parameters (PMT gain, resolution and chemistry of the slide) on the signal variability, at the level of the array as a whole and by intensity interval. Moreover, we assessed the effect of averaging scans on the fluctuations. We found that the extent of photo-bleaching was low, and we established that (1) the fluorescence fluctuation is linked to the resolution, i.e. it depends on the number of pixels in the spot; (2) the fluorescence fluctuation increases as the scanner voltage increases and, moreover, is higher for the red as opposed to the green fluorescence, which can introduce bias in the analysis; (3) the signal variability is linked to the intensity level, being higher for low intensities; (4) the heterogeneity of the spots and the variability of the signal and the intensity ratios decrease when two or three scans are averaged. Conclusion: Protocols consisting of two scans, one at low and one at high PMT gain, or of multiple scans (ten scans) can introduce bias or be difficult to implement. We found that averaging two, or at most three, acquisitions of microarrays scanned at moderate photomultiplier settings (PMT gain) is sufficient to significantly improve the accuracy (quality) of the data, particularly for spots having low intensities, and we propose this as a general
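The benefit of averaging two or three scans reported above follows from the usual sqrt(N) suppression of uncorrelated noise; a minimal numerical sketch with purely illustrative numbers:

```python
import numpy as np

# Sketch: averaging N repeated acquisitions of the same spot reduces
# uncorrelated noise of standard deviation sigma to sigma/sqrt(N),
# so SNR grows as sqrt(N). All numbers are illustrative.
rng = np.random.default_rng(42)
truth = 100.0                                  # "true" spot intensity
sigma = 10.0                                   # per-scan noise level
frames = truth + sigma * rng.standard_normal((10, 5000))  # 10 noisy scans

single_sd = frames[0].std()                    # ~ sigma
avg3_sd = frames[:3].mean(axis=0).std()        # ~ sigma / sqrt(3)
```

Beyond two or three frames the gain per extra scan shrinks, which is consistent with the authors' recommendation to stop at three.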
Signals, systems, transforms, and digital signal processing with Matlab
Corinthios, Michael
2009-01-01
Continuous-Time and Discrete-Time Signals and Systems: Introduction; Continuous-Time Signals; Periodic Functions; Unit Step Function; Graphical Representation of Functions; Even and Odd Parts of a Function; Dirac-Delta Impulse; Basic Properties of the Dirac-Delta Impulse; Other Important Properties of the Impulse; Continuous-Time Systems; Causality, Stability; Examples of Electrical Continuous-Time Systems; Mechanical Systems; Transfer Function and Frequency Response; Convolution and Correlation; A Right-Sided and a Left-Sided Function; Convolution with an Impulse and Its Derivatives; Additional Convolution Properties; Correlation Function; Properties of the Correlation Function; Graphical Interpretation; Correlation of Periodic Functions; Average, Energy and Power of Continuous-Time Signals; Discrete-Time Signals; Periodicity; Difference Equations; Even/Odd Decomposition; Average Value, Energy and Power Sequences; Causality, Stability; Problems; Answers to Selected Problems. Fourier Series Expansion: Trigonometric Fourier Series; Exponential Fourier Series; Exponential versus ...
Parameter restoration of a soft X radiation spectrum by the signals of X-ray vacuum diodes
International Nuclear Information System (INIS)
Branitskij, A.V.; Olejnik, G.M.
2000-01-01
The multichannel measurement complex based on vacuum X-ray diodes with various filters, and the associated signal-treatment methodology, are described. They make it possible to obtain, with nanosecond time resolution, the radiation power in several spectral intervals ranging from 0.1 up to 4 keV, the radiation power within the whole spectral range, and the average quantum energy. The method of linear combinations and the two-parametric models, which provide close power values, are the most suitable algorithms. This methodology was used at the Angara-5-1 facility in experiments on cascade liner implosion and with the Z-pinch
Directory of Open Access Journals (Sweden)
Shelley Mo
Full Text Available To assess the effect of image registration and averaging on the visualization and quantification of the radial peripapillary capillary (RPC) network on optical coherence tomography angiography (OCTA). Twenty-two healthy controls were imaged with a commercial OCTA system (AngioVue, Optovue, Inc.). Ten 10x10° scans of the optic disc were obtained, and the most superficial layer (a 50-μm slab extending from the inner limiting membrane) was extracted for analysis. Rigid registration was achieved using ImageJ, and averaging of every 2 to 10 frames was performed in five ~2x2° regions of interest (ROIs) located 1° from the optic disc margin. The ROIs were automatically skeletonized. Signal-to-noise ratio (SNR), number of endpoints and mean capillary length from the skeleton, capillary density, and mean intercapillary distance (ICD) were measured for the reference and each averaged ROI. Repeated measures analysis of variance was used to assess statistical significance. Three patients with primary open angle glaucoma were also imaged to compare RPC density to controls. Qualitatively, vessels appeared smoother and closer to histologic descriptions with an increasing number of averaged frames. Quantitatively, the number of endpoints decreased by 51%, while SNR, mean capillary length, capillary density, and ICD increased by 44%, 91%, 11%, and 4.5%, respectively, from single frame to 10-frame averaged. The 10-frame averaged images from the glaucomatous eyes revealed decreased density correlating with visual field defects and retinal nerve fiber layer thinning. OCTA image registration and averaging is a viable and accessible method to enhance the visualization of RPCs, with significant improvements in image quality and RPC quantitative parameters. With this technique, we will be able to non-invasively and reliably study RPC involvement in diseases such as glaucoma.
Lenses and effective spatial resolution in macroscopic optical mapping
International Nuclear Information System (INIS)
Bien, Harold; Parikh, Puja; Entcheva, Emilia
2007-01-01
Optical mapping of excitation dynamically tracks electrical waves travelling through cardiac or brain tissue by the use of fluorescent dyes. Several characteristics set optical mapping apart from other imaging modalities: dynamically changing signals requiring short exposure times, dim fluorescence demanding sensitive sensors, and wide fields of view (low magnification) resulting in poor optical performance. These conditions necessitate the use of optics with good light-gathering ability, i.e. lenses having a high numerical aperture. Previous optical mapping studies often used sensor resolution to estimate the minimum spatial feature resolvable, assuming perfect optics and infinite contrast. We examine here the influence of finite contrast and real optics on the effective spatial resolution in optical mapping under broad-field illumination, for both the lateral (in-plane) and axial (depth) resolution of collected fluorescence signals.
Medicare Part B Drug Average Sales Pricing Files
U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturer's ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...
High Average Power Fiber Laser for Satellite Communications, Phase I
National Aeronautics and Space Administration — Very high average power lasers with high electrical-to-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...
A time averaged background compensator for Geiger-Mueller counters
International Nuclear Information System (INIS)
Bhattacharya, R.C.; Ghosh, P.K.
1983-01-01
The GM tube compensator described here stores background counts and cancels an equal number of pulses from the measuring channel, providing time-averaged compensation. The method suits portable instruments. (orig.)
Time averaging, ageing and delay analysis of financial time series
Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf
2017-06-01
We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
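The time averaged MSD used above has a simple definition, δ²(Δ) = (1/(T−Δ)) Σₜ (x(t+Δ) − x(t))². A minimal Python sketch follows; it is exercised on a plain Brownian-motion surrogate (for which the TA-MSD grows linearly in the lag), not on actual Dow Jones data.

```python
import random

def time_averaged_msd(x, lag):
    """Time averaged mean squared displacement of series x at a given lag."""
    n = len(x) - lag
    return sum((x[t + lag] - x[t]) ** 2 for t in range(n)) / n

# Brownian-motion surrogate: cumulative sum of unit-variance Gaussian increments
random.seed(1)
x = [0.0]
for _ in range(20_000):
    x.append(x[-1] + random.gauss(0.0, 1.0))

msd1 = time_averaged_msd(x, 1)
msd10 = time_averaged_msd(x, 10)
# For Brownian motion the TA-MSD scales linearly with the lag: msd10 ≈ 10 * msd1
print(msd1, msd10)
```

For geometric Brownian motion, as in the Black-Scholes-Merton model discussed in the abstract, the same estimator is typically applied to log-prices.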
Historical Data for Average Processing Time Until Hearing Held
Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...
GIS Tools to Estimate Average Annual Daily Traffic
2012-06-01
This project presents five tools that were created for a geographical information system to estimate Average Annual Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...
The average-shadowing property and topological ergodicity for flows
International Nuclear Information System (INIS)
Gu Rongbao; Guo Wenjing
2005-01-01
In this paper, the transitive property of a flow without sensitive dependence on initial conditions is studied, and it is shown that a Lyapunov stable flow with the average-shadowing property on a compact metric space is topologically ergodic.
Energy Technology Data Exchange (ETDEWEB)
Jaeger, M; Preisser, S; Kitz, M; Frenz, M [Institute of Applied Physics, University of Bern, Sidlerstrasse 5, CH-3012 Bern (Switzerland); Ferrara, D; Senegas, S; Schweizer, D, E-mail: frenz@iap.unibe.ch [Fukuda Denshi Switzerland AG, Reinacherstrasse 131, CH-4002 Basel (Switzerland)
2011-09-21
For real-time optoacoustic (OA) imaging of the human body, a linear array transducer and reflection-mode optical irradiation are usually preferred. Such a setup, however, results in significant image background, which prevents imaging structures at the ultimate depth determined by the light distribution and the signal noise level. Therefore, we previously proposed a method for image background reduction based on displacement-compensated averaging (DCA) of image series obtained while the tissue sample under investigation is gradually deformed. OA signals and background signals are differently affected by the deformation and can thus be distinguished. The proposed method is now experimentally applied to image artificial tumours embedded inside breast phantoms. OA images are acquired alternately with pulse-echo images using a combined OA/echo-ultrasound device. Tissue deformation is estimated via speckle tracking in the pulse-echo images and used to compensate the OA images for the local tissue displacement. In that way, OA sources are highly correlated between subsequent images, while the background is decorrelated and can therefore be reduced by averaging. We show that image contrast in breast phantoms is strongly improved and the detectability of embedded tumours significantly increased using the DCA method.
Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin
2017-08-01
Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter and exit a bridge deck, due to the low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by incorporating moving average concepts. Firstly, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with stable average value (DFS-SAV). Secondly, this signal feature of the DFS-SAV is quantified and introduced to improve the penalty function (‖x‖₂²) defined in classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed to assess the accuracy and feasibility of the proposed method. The illustrated results show that the moving forces can be accurately identified with strong robustness. Related issues, such as the selection of the moving window length and the effects of different penalty functions and car speeds, are discussed as well.
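Classical Tikhonov regularization, whose penalty ‖x‖₂² the paper modifies, solves min ‖Ax − b‖² + λ‖x‖² via the normal equations (AᵀA + λI)x = Aᵀb. A small self-contained Python sketch on a toy ill-conditioned system follows; the matrix, data and λ are illustrative, not the paper's bridge model.

```python
def solve(M, v):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def tikhonov(A, b, lam):
    """Solve min ||Ax - b||^2 + lam * ||x||^2 via (A^T A + lam I) x = A^T b."""
    n = len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(n)]
    return solve(AtA, Atb)

# Nearly collinear columns make the unregularized solution unstable to noise;
# a small lam damps the solution toward smaller norm.
A = [[1.0, 1.0], [1.0, 1.001], [1.0, 0.999]]
b = [2.0, 2.001, 1.999]
x = tikhonov(A, b, 1e-3)
print(x)
```

The paper's contribution replaces the plain ‖x‖₂² penalty with one encoding the stable-average (DFS-SAV) feature; the solve step above stays structurally the same.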
Application of Bayesian approach to estimate average level spacing
International Nuclear Information System (INIS)
Huang Zhongfu; Zhao Zhixiang
1991-01-01
A method is given to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach. Using the information contained in the distributions of both level spacings and neutron widths, the levels missing from a measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained. The calculation for s-wave resonances has been done and a comparison with other work carried out
Annual average equivalent dose of workers from the health area
International Nuclear Information System (INIS)
Daltro, T.F.L.; Campos, L.L.
1992-01-01
Personnel monitoring data between 1985 and 1991 for workers in the health area were studied, giving a general overview of the changes in the annual average equivalent dose. Two different aspects are presented: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses for the same sectors in different hospitals. (C.G.C.)
A precise measurement of the average b hadron lifetime
Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; 
Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; 
Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G
1996-01-01
An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH with the silicon vertex detector fully operational. It uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.
Bivariate copulas on the exponentially weighted moving average control chart
Directory of Open Access Journals (Sweden)
Sasigarn Kuvattana
2016-10-01
Full Text Available This paper evaluates four types of copulas for the Exponentially Weighted Moving Average (EWMA) control chart when observations come from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is assessed via the Average Run Length (ARL), which is compared for each copula. Copula functions specifying the dependence between random variables are used, with dependence measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
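The EWMA statistic underlying such charts is the recursion z_t = λ·x_t + (1 − λ)·z_{t−1}. A minimal Python sketch with exponential observations follows; the smoothing constant, starting value and control limit below are illustrative choices, not the paper's ARL-calibrated values, and no copula structure is modelled.

```python
import random

def ewma(observations, lam=0.2, start=1.0):
    """EWMA recursion: z_t = lam * x_t + (1 - lam) * z_{t-1}."""
    z, out = start, []
    for x in observations:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out

random.seed(2)
# 100 in-control exponential observations (mean 1), then a shift to mean 2
data = [random.expovariate(1.0) for _ in range(100)] + \
       [random.expovariate(0.5) for _ in range(100)]

z = ewma(data, lam=0.2, start=1.0)
ucl = 1.6  # illustrative upper control limit (not ARL-calibrated)
signal_at = next(i for i, v in enumerate(z) if v > ucl)
print("chart signals at observation", signal_at)
```

The ARL studied in the paper is the average of this first-signal index over many simulated runs, under a given shift and copula-induced dependence.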
The average action for scalar fields near phase transitions
International Nuclear Information System (INIS)
Wetterich, C.
1991-08-01
We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one-loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)
Wave function collapse implies divergence of average displacement
Marchewka, A.; Schuss, Z.
2005-01-01
We show that propagating a truncated discontinuous wave function by Schrödinger's equation, as asserted by the collapse axiom, implies that the average displacement of the particle on the line does not exist. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.
Average geodesic distance of skeleton networks of Sierpinski tetrahedron
Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao
2018-04-01
The average distance is a central quantity in the study of complex networks and is related to the Wiener sum, a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain an asymptotic formula for their average distance. To derive the formula, we develop a technique of finite patterns for the integral of the geodesic distance with respect to the self-similar measure on the Sierpinski tetrahedron.
Computed tomography with selectable image resolution
International Nuclear Information System (INIS)
Dibianca, F.A.; Dallapiazza, D.G.
1981-01-01
A computed tomography system x-ray detector has a central group of half-width detector elements and groups of full-width elements on each side of the central group. To obtain x-ray attenuation data for whole body layers, the half-width elements are switched effectively into paralleled pairs so all elements act like full-width elements and an image of normal resolution is obtained. For narrower head layers, the elements in the central group are used as half-width elements so resolution which is twice as great as normal is obtained. The central group is also used in the half-width mode and the outside groups are used in the full-width mode to obtain a high resolution image of a body zone within a full body layer. In one embodiment data signals from the detector are switched by electronic multiplexing and in another embodiment a processor chooses the signals for the various kinds of images that are to be reconstructed. (author)
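The half-width/full-width switching described above amounts to summing adjacent detector elements: paired, the central group behaves like full-width elements (normal resolution); unpaired, it delivers twice the sampling density. A toy sketch of the pairing step, with hypothetical readout values rather than the patent's electronics:

```python
def pair_elements(half_width_samples):
    """Combine adjacent half-width detector elements into full-width ones,
    halving the channel count (normal-resolution mode)."""
    return [half_width_samples[i] + half_width_samples[i + 1]
            for i in range(0, len(half_width_samples) - 1, 2)]

readout = [3, 5, 2, 4, 6, 1]       # six half-width elements (illustrative)
print(pair_elements(readout))       # → [8, 6, 7]: three full-width channels
```

Leaving the central samples unpaired while pairing the outer groups corresponds to the mixed high-resolution-zone mode the abstract describes.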
Muon Signals at a Low Signal-to-Noise Ratio Environment
Zakareishvili, Tamar; The ATLAS collaboration
2017-01-01
Calorimeters provide high-resolution energy measurements for particle detection. Muon signals are important for evaluating electronics performance, since they produce a signal that is close to electronic noise levels. This work provides a noise RMS analysis for the Demonstrator drawer at the 2016 Tile Calorimeter (TileCal) test beam, in order to help reconstruct events in a low signal-to-noise environment. Muon signals were then found for a beam penetrating all three layers of the drawer. The Demonstrator drawer is a candidate electronics upgrade for TileCal, part of the ATLAS experiment at the Large Hadron Collider operated by the European Organization for Nuclear Research (CERN).
High resolution data acquisition
Thornton, Glenn W.; Fuller, Kenneth R.
1993-01-01
A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided from a clock (38) pulse train (37) and analog circuitry (44) for generating a triangular wave (46) synchronously with the pulse train (37). The triangular wave (46) has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter (18, 32) forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter (26) counts the clock pulse train (37) during the interval to form a gross event interval time. A computer (52) then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.
Particle detector spatial resolution
International Nuclear Information System (INIS)
Perez-Mendez, V.
1992-01-01
Method and apparatus for producing separated columns of scintillation layer material, for use in detection of X-rays and high energy charged particles with improved spatial resolution is disclosed. A pattern of ridges or projections is formed on one surface of a substrate layer or in a thin polyimide layer, and the scintillation layer is grown at controlled temperature and growth rate on the ridge-containing material. The scintillation material preferentially forms cylinders or columns, separated by gaps conforming to the pattern of ridges, and these columns direct most of the light produced in the scintillation layer along individual columns for subsequent detection in a photodiode layer. The gaps may be filled with a light-absorbing material to further enhance the spatial resolution of the particle detector. 12 figs
Directory of Open Access Journals (Sweden)
A. Ziemann
2017-11-01
30 % for a single measurement. Instantaneous wind components can be derived with a maximum uncertainty of 0.3 m s−1, depending on sampling, signal analysis, and environmental influences on sound propagation. Averaging over a period of 30 min, the standard error of the mean values can be decreased by a factor of at least 0.5 for OP-FTIR and 0.1 for A-TOM, depending on the required spatial resolution. The presented validation of the joint application of these two independent, nonintrusive methods focuses on their ability to quantify advective fluxes.
Czech Academy of Sciences Publication Activity Database
Bonacina, I.; Galesi, N.; Thapen, Neil
2016-01-01
Vol. 45, No. 5 (2016), pp. 1894-1909. ISSN 0097-5397. R&D Projects: GA ČR GBP202/12/G061. EU Projects: European Commission(XE) 339691 - FEALORA. Institutional support: RVO:67985840. Keywords: total space; resolution random CNFs; proof complexity. Subject RIV: BA - General Mathematics. Impact factor: 1.433, year: 2016. http://epubs.siam.org/doi/10.1137/15M1023269
High resolution (transformers.
Garcia-Souto, Jose A; Lamela-Rivera, Horacio
2006-10-16
A novel fiber-optic interferometric sensor is presented for vibration measurement and analysis. In this approach, it is shown applied to the vibrations of electrical structures within power transformers. A main feature of the sensor is that an unambiguous optical phase measurement is performed using direct detection of the interferometer output, without external modulation, for a more compact and stable implementation. High resolution of the interferometric measurement is obtained with this technique; measurements on transformers are also highlighted.
ALTERNATIVE DISPUTE RESOLUTION
Directory of Open Access Journals (Sweden)
Mihaela Irina IONESCU
2016-05-01
Full Text Available Alternative dispute resolution (ADR) includes dispute resolution processes and techniques that act as a means for disagreeing parties to come to an agreement short of litigation. It is a collective term for the ways in which parties can settle disputes, with or without the help of a third party. Despite historic resistance to ADR by many parties and their advocates, ADR has gained widespread acceptance among both the general public and the legal profession in recent years. In fact, some courts now require some parties to resort to ADR of some type before permitting the parties' cases to be tried. The rising popularity of ADR can be explained by the increasing caseload of traditional courts, the perception that ADR imposes fewer costs than litigation, a preference for confidentiality, and the desire of some parties to have greater control over the selection of the individual or individuals who will decide their dispute. Directive 2013/11/EU of the European Parliament and of the Council on alternative dispute resolution for consumer disputes, amending Regulation (EC) No 2006/2004 and Directive 2009/22/EC (hereinafter "Directive 2013/11/EU"), aims to ensure a high level of consumer protection and the proper functioning of the internal market by ensuring that complaints against traders can be submitted by consumers, on a voluntary basis, to alternative dispute resolution entities which are independent, impartial, transparent, effective, simple, quick and fair. Directive 2013/11/EU establishes harmonized quality requirements for entities applying alternative dispute resolution procedures (hereinafter "ADR entities") in order to provide the same protection and the same rights to consumers in all Member States. The present study also tries to present broadly how all this is transposed into Romanian legislation.
High-resolution axial MR imaging of tibial stress injuries
Directory of Open Access Journals (Sweden)
Mammoto Takeo
2012-05-01
Full Text Available Abstract Purpose: To evaluate the relative involvement of tibial stress injuries using high-resolution axial MR imaging and the correlation with MR and radiographic images. Methods: A total of 33 patients with exercise-induced tibial pain were evaluated. All patients underwent radiograph and high-resolution axial MR imaging. Radiographs were taken at initial presentation and 4 weeks later. High-resolution MR axial images were obtained using a microscopy surface coil with 60 × 60 mm field of view on a 1.5T MR unit. All images were evaluated for abnormal signals of the periosteum, cortex and bone marrow. Results: Nineteen patients showed no periosteal reaction at initial and follow-up radiographs. MR imaging showed abnormal signals in the periosteal tissue and partially abnormal signals in the bone marrow. In 7 patients, periosteal reaction was not seen at initial radiograph, but was detected at follow-up radiograph. MR imaging showed abnormal signals in the periosteal tissue and entire bone marrow. Abnormal signals in the cortex were found in 6 patients. The remaining 7 showed periosteal reactions at initial radiograph. MR imaging showed abnormal signals in the periosteal tissue in 6 patients. Abnormal signals were seen in the partial and entire bone marrow in 4 and 3 patients, respectively. Conclusions: Bone marrow abnormalities in high-resolution axial MR imaging were related to periosteal reactions at follow-up radiograph. Bone marrow abnormalities might predict later periosteal reactions, suggesting shin splints or stress fractures. High-resolution axial MR imaging is useful in early discrimination of tibial stress injuries.
Adaptive optics improves multiphoton super-resolution imaging
Zheng, Wei; Wu, Yicong; Winter, Peter; Shroff, Hari
2018-02-01
Three dimensional (3D) fluorescence microscopy has been essential for biological studies. It allows interrogation of structure and function at spatial scales spanning the macromolecular, cellular, and tissue levels. Critical factors to consider in 3D microscopy include spatial resolution, signal-to-noise ratio (SNR), signal-to-background ratio (SBR), and temporal resolution. Maintaining high quality imaging becomes progressively more difficult at increasing depth (where optical aberrations, induced by inhomogeneities of refractive index in the sample, degrade resolution and SNR) and in thick or densely labeled samples (where out-of-focus background can swamp the valuable in-focus signal from each plane). In this report, we introduce new instrumentation to address these problems. A multiphoton structured illumination microscope was modified to integrate an adaptive optics system for optical aberration correction. First, the optical aberrations are determined using direct wavefront sensing with a nonlinear guide star, and subsequently corrected using a deformable mirror, restoring super-resolution information. We demonstrate the flexibility of our adaptive optics approach on a variety of semi-transparent samples, including bead phantoms, cultured cells in collagen gels, and biological tissues. The performance of our super-resolution microscope is improved in all of these samples, as peak intensity is increased (up to 40-fold) and resolution recovered (up to 176 ± 10 nm laterally and 729 ± 39 nm axially) at depths up to 250 μm from the coverslip surface.
International Nuclear Information System (INIS)
Wu, Yunfeng; Yang, Shanshan; Zheng, Fang; Cai, Suxian; Lu, Meng; Wu, Meihong
2014-01-01
High-resolution knee joint vibroarthrographic (VAG) signals can help physicians accurately evaluate the pathological condition of a degenerative knee joint, in order to prevent unnecessary exploratory surgery. Artifact cancellation is vital to preserve the quality of VAG signals prior to further computer-aided analysis. This paper describes a novel method that effectively utilizes ensemble empirical mode decomposition (EEMD) and detrended fluctuation analysis (DFA) algorithms for the removal of baseline wander and white noise in VAG signal processing. The EEMD method first successively decomposes the raw VAG signal into a set of intrinsic mode functions (IMFs) with fast and low oscillations, until the monotonic baseline wander remains in the last residue. Then, the DFA algorithm computes the fractal scaling index for each IMF, in order to distinguish the anti-correlated from the long-range correlated components; this separation assists in reconstructing the artifact-reduced VAG signals. Our experimental results showed that the combination of EEMD and DFA algorithms was able to provide averaged signal-to-noise ratio (SNR) values of 20.52 dB (standard deviation: 1.14 dB) and 20.87 dB (standard deviation: 1.89 dB) for 45 normal signals in healthy subjects and 20 pathological signals in symptomatic patients, respectively. The combination of EEMD and DFA algorithms can ameliorate the quality of VAG signals with great SNR improvements over the raw signal, and the results were also superior to those achieved by wavelet matching pursuit decomposition and time-delay neural filtering. (paper)
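The DFA scaling exponent used above to separate anti-correlated from long-range correlated IMFs can be sketched as follows. This is a minimal first-order DFA in Python, not the authors' implementation; the scale range and segment handling are assumptions:

```python
import numpy as np

def dfa_alpha(x, scales=None):
    """Detrended fluctuation analysis: estimate the scaling exponent alpha.

    alpha < 0.5 suggests an anti-correlated component, alpha > 0.5 a
    long-range correlated one (the criterion used to sort IMFs above).
    """
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())  # integrated profile of the signal
    n = len(y)
    if scales is None:
        scales = np.unique(np.logspace(1, np.log10(n // 4), 10).astype(int))
    flucts = []
    for s in scales:
        nseg = n // s
        segs = y[:nseg * s].reshape(nseg, s)
        t = np.arange(s)
        # root-mean-square fluctuation after local linear detrending
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    # alpha is the slope of log F(s) versus log s
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]
```

For white noise the estimate should come out near 0.5, which is the boundary the abstract uses between noise-like and structure-bearing IMFs.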
Müller, M.; Graus, M.; Wisthaler, A.; Hansel, A.; Metzger, A.; Dommen, J.; Baltensperger, U.
2012-01-01
A series of 1,3,5-trimethylbenzene (TMB) photo-oxidation experiments was performed in the 27-m3 Paul Scherrer Institute environmental chamber under various NOx conditions. A University of Innsbruck prototype high resolution Proton Transfer Reaction Time-of-Flight Mass Spectrometer (PTR-TOF) was used for measurements of gas and particulate phase organics. The gas phase mass spectrum displayed ~200 ion signals during the TMB photo-oxidation experiments. Molecular formulas CmHnNoOp were determined and ion signals were separated and grouped according to their C, O and N numbers. This allowed determination of the time evolution of the O:C ratio and of the average carbon oxidation state (OSC) of the reaction mixture. Both quantities were compared with master chemical mechanism (MCMv3.1) simulations. The O:C ratio in the particle phase was about twice the O:C ratio in the gas phase. Average carbon oxidation states of secondary organic aerosol (SOA) samples (OSC,SOA) were in the range of -0.34 to -0.31, in agreement with the expected average carbon oxidation states of fresh SOA (OSC = -0.5 to 0).
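The average carbon oxidation state is commonly approximated from elemental ratios as OSC ≈ 2·(O:C) − (H:C). A minimal sketch of that relation, ignoring nitrogen for simplicity even though the paper's CmHnNoOp formulas include it:

```python
def average_oxidation_state(counts):
    """Approximate average carbon oxidation state from a CHO elemental
    composition: OS_C ~= 2*O/C - H/C (nitrogen neglected in this sketch)."""
    c, h, o = counts["C"], counts["H"], counts["O"]
    return 2.0 * o / c - h / c

# The precursor itself, 1,3,5-trimethylbenzene (C9H12):
osc_tmb = average_oxidation_state({"C": 9, "H": 12, "O": 0})
```

Oxidation drives O:C up and H:C down, so OSC climbs from the precursor's negative value toward the -0.5 to 0 range quoted for fresh SOA.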
Average Soil Water Retention Curves Measured by Neutron Radiography
Energy Technology Data Exchange (ETDEWEB)
Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD
2011-01-01
Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel-by-pixel basis using the Beer-Lambert law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
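The pixel-wise Beer-Lambert inversion described above can be sketched as below. This minimal version omits the beam hardening, geometric, and scattering corrections the authors apply, and the attenuation coefficient value in the usage line is an assumption, not a measured constant:

```python
import numpy as np

def volumetric_water_content(I, I_dry, mu_w, thickness_cm):
    """Per-pixel volumetric water content from neutron transmission via the
    Beer-Lambert law: I = I_dry * exp(-mu_w * theta * d), inverted as
    theta = -ln(I / I_dry) / (mu_w * d)."""
    I = np.asarray(I, dtype=float)
    return -np.log(I / I_dry) / (mu_w * thickness_cm)

# Hypothetical attenuation coefficient (cm^-1) and 1 cm column thickness:
theta_map = volumetric_water_content(np.full((4, 4), 350.0), 1000.0, 3.5, 1.0)
```

Normalizing such maps by the fully saturated image, as the abstract does, cancels most pixel-wise calibration errors and yields relative saturation directly.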
Resolution enhancement in medical ultrasound imaging.
Ploquin, Marie; Basarab, Adrian; Kouamé, Denis
2015-01-01
Image resolution enhancement is a problem of considerable interest in all medical imaging modalities. Unlike general purpose imaging or video processing, for a very long time, medical image resolution enhancement has been based on optimization of the imaging devices. Although some recent works purport to deal with image postprocessing, much remains to be done regarding medical image enhancement via postprocessing, especially in ultrasound imaging. We address the resolution improvement problem in medical ultrasound imaging. We propose to investigate this problem using multidimensional autoregressive (AR) models. Noting that the estimation of the envelope of an ultrasound radio frequency (RF) signal is very similar to classical Fourier-based power spectrum estimation, we theoretically show that a domain change and a multidimensional AR model can be used to achieve super-resolution in ultrasound imaging, provided the order is estimated correctly. Here, this is done by means of a technique that simultaneously estimates the order and the parameters of a multidimensional model using relevant regression matrix factorization. Doing so, the proposed method specifically fits ultrasound imaging and provides an estimated envelope. Moreover, an expression is derived that links the theoretical image resolution to both the image acquisition features (such as the point spread function) and a postprocessing feature (the AR model order). The overall contribution of this work is threefold. First, it allows for automatic resolution improvement. Through a simple model and without any specific manual algorithmic parameter tuning, as is used in common methods, the proposed technique simply and exclusively uses the ultrasound RF signal as input and provides the improved B-mode as output. Second, it allows for the a priori prediction of the improvement in resolution via the knowledge of the parametric model order before actual processing. Finally, to achieve the
PEAK-TO-AVERAGE POWER RATIO REDUCTION USING CODING AND HYBRID TECHNIQUES FOR OFDM SYSTEM
Directory of Open Access Journals (Sweden)
Bahubali K. Shiragapur
2016-03-01
In this article, error correction coding techniques are investigated to reduce the undesirable peak-to-average power ratio (PAPR). The Golay code (24, 12), Reed-Muller code (16, 11), Hamming code (7, 4) and a hybrid technique (a combination of signal scrambling and signal distortion) proposed by us are used as coding techniques; simulation results show that the hybrid technique reduces PAPR significantly compared to conventional and modified selective mapping techniques. The simulation results are validated through statistical properties: the proposed technique's autocorrelation value is maximal, indicating a reduction in PAPR. Symbol preference based on Hamming distance is the key idea used to reduce PAPR. The simulation results are discussed in detail in this article.
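The PAPR quantity being minimized above can be computed for one OFDM block as follows. The oversampled-IFFT evaluation is a standard convention for approximating the continuous-time peak, not necessarily the exact procedure used in the article:

```python
import numpy as np

def papr_db(symbols, oversample=4):
    """Peak-to-average power ratio (dB) of the OFDM time-domain signal
    obtained from one block of frequency-domain symbols.

    Zero-padding in the middle of the spectrum oversamples the waveform so
    the discrete peak approximates the analog peak."""
    symbols = np.asarray(symbols, dtype=complex)
    n = len(symbols)
    padded = np.zeros(n * oversample, dtype=complex)
    padded[: n // 2] = symbols[: n // 2]
    padded[-(n - n // 2):] = symbols[n // 2:]
    x = np.fft.ifft(padded)
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())
```

An all-ones symbol block is the worst case (all subcarriers add coherently), which is exactly the kind of block that coding or scrambling schemes exclude.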
Zhang, Shengli; Tang, Jiong
2016-04-01
The gearbox is one of the most vulnerable subsystems in a wind turbine. Its health status significantly affects the efficiency and function of the entire system. Vibration based fault diagnosis methods are prevalently applied nowadays. However, vibration signals are always contaminated by noise that comes from data acquisition errors, structure geometric errors, operation errors, etc. As a result, it is difficult to identify potential gear failures directly from vibration signals, especially for early stage faults. This paper utilizes the synchronous averaging technique in the time-frequency domain to remove non-synchronous noise and enhance the fault related time-frequency features. The enhanced time-frequency information is further employed in gear fault classification and identification through feature extraction algorithms including Kernel Principal Component Analysis (KPCA), Multilinear Principal Component Analysis (MPCA), and Locally Linear Embedding (LLE). Results show that the LLE approach is the most effective to classify and identify different gear faults.
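Plain time-domain synchronous averaging, the basis of the time-frequency variant used above, can be sketched as follows (assuming the vibration data have already been resampled so that each shaft revolution spans an integer number of samples):

```python
import numpy as np

def time_synchronous_average(signal, samples_per_rev):
    """Time-synchronous average: split the signal into one-revolution
    segments and average them, attenuating components that are not
    synchronous with shaft rotation (e.g. broadband noise)."""
    signal = np.asarray(signal, dtype=float)
    n_rev = len(signal) // samples_per_rev
    segs = signal[: n_rev * samples_per_rev].reshape(n_rev, samples_per_rev)
    return segs.mean(axis=0)
```

Averaging over R revolutions reduces the standard deviation of non-synchronous noise by a factor of sqrt(R), while gear-mesh components locked to the rotation survive intact.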
DEFF Research Database (Denmark)
Chon, K H; Hoyer, D; Armoundas, A A
1999-01-01
In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic...... part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction...... error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate...
Estimating average glandular dose by measuring glandular rate in mammograms
International Nuclear Information System (INIS)
Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru
2003-01-01
The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)
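The per-pixel conversion from pixel value to glandular rate, followed by averaging over the breast region, can be sketched as below. A simple interpolated calibration curve stands in for the paper's neural-network conversion, and the calibration points are hypothetical:

```python
import numpy as np

def average_glandular_rate(pixel_values, calib_pv, calib_rate):
    """Convert mammogram pixel values to glandular rates (%) via a
    monotonic calibration curve (phantom-derived in the paper, here a
    plain interpolation) and average over the region of interest."""
    rates = np.interp(pixel_values, calib_pv, calib_rate)
    return float(rates.mean())

# Hypothetical calibration: pixel values 200..1000 map to 0..100% glandular
calib_pv = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])
calib_rate = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
```

The resulting average glandular rate then enters the standard dosimetry conversion (together with tube voltage, HVL and breast thickness) to give the individual average glandular dose.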