WorldWideScience

Sample records for adaptive thresholding method

  1. Time-efficient multidimensional threshold tracking method

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Kowalewski, Borys; Dau, Torsten

    2015-01-01

    Traditionally, adaptive methods have been used to reduce the time it takes to estimate psychoacoustic thresholds. However, even with adaptive methods, there are many cases where the testing time is too long to be clinically feasible, particularly when estimating thresholds as a function of anothe...

  2. A method of camera calibration with adaptive thresholding

    Science.gov (United States)

    Gao, Lei; Yan, Shu-hua; Wang, Guo-chao; Zhou, Chun-lei

    2009-07-01

    In order to calculate the parameters of the camera correctly, we must figure out the accurate coordinates of certain points in the image plane. Corners are important features in 2D images. Generally speaking, they are points of high curvature that lie at the junction of image regions of different brightness, so corner detection is already widely used in many fields. In this paper we use the pinhole camera model and the SUSAN corner detection algorithm to calibrate the camera. When using the SUSAN corner detection algorithm, we propose an approach to set the gray difference threshold adaptively. That makes it possible to pick out the correct chessboard inner corners under all kinds of gray contrast. Experiments based on this method showed it to be feasible.

  3. Comparison of an adaptive local thresholding method on CBCT and µCT endodontic images

    Science.gov (United States)

    Michetti, Jérôme; Basarab, Adrian; Diemer, Franck; Kouame, Denis

    2018-01-01

    Root canal segmentation on cone beam computed tomography (CBCT) images is difficult because of the noise level, resolution limitations, beam hardening and dental morphological variations. An image processing framework, based on an adaptive local threshold method, was evaluated on CBCT images acquired on extracted teeth. A comparison with high-quality segmented endodontic images on micro computed tomography (µCT) images acquired from the same teeth was carried out using a dedicated registration process. Each segmented tooth was evaluated according to volume and, for root canal sections, the area and the Feret's diameter. The proposed method is shown to overcome the limitations of CBCT and to provide an automated and adaptive complete endodontic segmentation. Despite a slight underestimation (-4.08%), the local threshold segmentation method based on edge detection was shown to be fast and accurate. Strong correlations between CBCT and µCT segmentations were found for both the root canal area and diameter (respectively 0.98 and 0.88). Our findings suggest that combining CBCT imaging with this image processing framework may benefit experimental endodontology and teaching, and could represent a first development step towards the clinical use of endodontic CBCT segmentation during pulp cavity treatment.

  4. Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image

    Science.gov (United States)

    Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.

    2017-12-01

    Steel strips are used extensively for white goods, auto bodies and other applications where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For defect detection, the use of gradients is very popular for highlighting, and subsequently segmenting, areas of interest in a surface inspection system. Most of the time, segmentation by a fixed threshold value leads to unsatisfactory results. As defects can be both very small and very large in size, segmentation of a gradient image based on a fixed percentile threshold can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water deposits (a pseudo defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above specific gray-level values of the gradient image. The method is able to segment defective regions selectively, preserving the characteristics of defects irrespective of their size. The developed method performs better than the Otsu method of thresholding and an adaptive thresholding method based on local properties.
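
    As a rough sketch of this idea (not the authors' calibrated rule), the Python fragment below picks the thresholding percentile from the fraction of strong-gradient pixels: many strong pixels suggest a large defect and call for a lower percentile, few suggest a small defect and call for a higher one. The cutoff value, percentile bounds and mapping are assumptions for illustration.

      # Hedged sketch of global adaptive percentile thresholding of a
      # gradient image; all constants below are illustrative assumptions.
      import numpy as np
      from scipy import ndimage

      def segment_defects(image, strong_level=100.0, lo_pct=95.0, hi_pct=99.5):
          gx = ndimage.sobel(image.astype(float), axis=0)
          gy = ndimage.sobel(image.astype(float), axis=1)
          grad = np.hypot(gx, gy)
          # fraction of strong-gradient pixels drives the percentile choice
          frac_strong = np.mean(grad > strong_level)
          pct = hi_pct - (hi_pct - lo_pct) * min(frac_strong / 0.05, 1.0)
          return grad > np.percentile(grad, pct)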

  5. Spike-threshold adaptation predicted by membrane potential dynamics in vivo.

    Directory of Open Access Journals (Sweden)

    Bertrand Fontaine

    2014-04-01

    Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in the spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in responses of auditory neurons recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential on a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo.
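
    A minimal toy model of this mechanism, assuming a first-order adaptation law in which the threshold relaxes toward a linear function of the membrane potential (all constants and the Ornstein-Uhlenbeck input below are illustrative placeholders, not parameters fitted to the owl recordings):

      # Toy simulation: the spike threshold tracks V on a short timescale,
      # so slow depolarizations are filtered and only fast ones trigger spikes.
      import numpy as np

      rng = np.random.default_rng(0)
      dt, tau_v, tau_th = 0.1, 20.0, 5.0        # ms
      v_rest, theta0, a = -60.0, -54.0, 0.5     # mV; a couples threshold to V
      v, theta, spikes = v_rest, theta0, []
      for i in range(200000):
          v += dt / tau_v * (v_rest - v) + 2.5 * np.sqrt(dt) * rng.normal()
          theta += dt / tau_th * (theta0 + a * (v - v_rest) - theta)
          if v >= theta:
              spikes.append(i * dt)
              theta += 4.0                      # transient rise after a spike
              v = v_rest                        # simple reset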

  6. Robust Adaptive Thresholder For Document Scanning Applications

    Science.gov (United States)

    Hsing, To R.

    1982-12-01

    In document scanning applications, thresholding is used to obtain binary data from a scanner. However, due to (1) a wide range of different color backgrounds, (2) density variations of printed text information, and (3) the shading effect caused by the optical systems, the use of adaptive thresholding to enhance the useful information is highly desired. This paper describes a new robust adaptive thresholder for obtaining valid binary images. It is basically a memory-type algorithm which can dynamically update the black and white reference levels to optimize a local adaptive threshold function. High image quality was obtained by this algorithm for different types of simulated test patterns. The software algorithm is described and experimental results are presented to describe the procedures. Results also show that the techniques described here can be used for real-time signal processing in a variety of applications.
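
    A minimal sketch of such a memory-type rule, assuming exponential updates of the two reference levels with the local threshold placed at their midpoint (the update constant and midpoint rule are assumptions, not the paper's exact function):

      # Running black/white reference levels track slow shading and
      # background changes along a scanline.
      import numpy as np

      def binarize_scanline(line, alpha=0.05):
          black, white = float(line.min()), float(line.max())
          out = np.empty(len(line), dtype=np.uint8)
          for i, p in enumerate(line):
              t = (black + white) / 2.0         # local adaptive threshold
              out[i] = 255 if p > t else 0
              if p > t:
                  white += alpha * (p - white)  # update white reference
              else:
                  black += alpha * (p - black)  # update black reference
          return out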

  7. Statistical Algorithm for the Adaptation of Detection Thresholds

    DEFF Research Database (Denmark)

    Stotsky, Alexander A.

    2008-01-01

    Many event detection mechanisms in spark ignition automotive engines are based on comparing the engine signals to detection threshold values. Different signal qualities for new and aged engines necessitate the development of an adaptation algorithm for the detection thresholds... remains constant regardless of engine age and changing detection threshold values. This, in turn, guarantees the same event detection performance for new and aged engines/sensors. Adaptation of the engine knock detection threshold is given as an example. Publication date: 2008.

  8. Simplified Threshold RSA with Adaptive and Proactive Security

    DEFF Research Database (Denmark)

    Almansa Guerra, Jesus Fernando; Damgård, Ivan Bjerre; Nielsen, Jesper Buus

    2006-01-01

    We present the currently simplest, most efficient, optimally resilient, adaptively secure, and proactive threshold RSA scheme. A main technical contribution is a new rewinding strategy for analysing threshold signature schemes. This new rewinding strategy makes it possible to prove adaptive security... of a proactive threshold signature scheme which was previously assumed to be only statically secure. As a separate contribution we prove that our protocol is secure in the UC framework.

  9. QRS Detection Based on Improved Adaptive Threshold

    Directory of Open Access Journals (Sweden)

    Xuanyu Lu

    2018-01-01

    Cardiovascular disease is the leading cause of death around the world. In accomplishing quick and accurate diagnosis, automatic electrocardiogram (ECG) analysis algorithms play an important role, and their first step is QRS detection. The threshold algorithm for QRS complex detection is known for its high-speed computation and minimal memory storage. In the mobile era, threshold algorithms can easily be ported into portable, wearable, and wireless ECG systems. However, the detection rate of the threshold algorithm still calls for improvement. An improved adaptive threshold algorithm for QRS detection is reported in this paper. The main steps of this algorithm are preprocessing, peak finding, and adaptive-threshold QRS detection. The detection rate is 99.41%, the sensitivity (Se) is 99.72%, and the specificity (Sp) is 99.69% on the MIT-BIH Arrhythmia database. A comparison is also made with two other algorithms to demonstrate the superiority of the proposed one. Suspicious abnormal areas are flagged at the end of the algorithm and an RR-Lorenz plot is drawn for doctors and cardiologists to use as an aid for diagnosis.
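
    The abstract does not spell out the threshold update, but the classic adaptive rule in this family (Pan-Tompkins) maintains running signal-peak and noise-peak estimates; the sketch below shows that stand-in, assuming x is an already preprocessed (band-passed and integrated) ECG sampled at fs Hz:

      import numpy as np

      def detect_qrs(x, fs):
          refractory = int(0.2 * fs)                  # 200 ms lockout
          spk, npk = float(np.max(x[:2 * fs])), float(np.mean(x[:2 * fs]))
          thresh = npk + 0.25 * (spk - npk)
          beats, last = [], -refractory
          for i in range(1, len(x) - 1):
              if x[i] >= x[i - 1] and x[i] > x[i + 1]:      # local peak
                  if x[i] > thresh and i - last > refractory:
                      beats.append(i)
                      last = i
                      spk = 0.125 * x[i] + 0.875 * spk      # signal estimate
                  else:
                      npk = 0.125 * x[i] + 0.875 * npk      # noise estimate
                  thresh = npk + 0.25 * (spk - npk)         # adaptive threshold
          return beats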

  10. Thresholding methods for PET imaging: A review

    International Nuclear Information System (INIS)

    Dewalle-Vignion, A.S.; Betrouni, N.; Hossein-Foucher, C.; Huglo, D.; Vermandel, M.; El Abiad, A.

    2010-01-01

    This work deals with positron emission tomography segmentation methods for tumor volume determination. We present a state of the art of techniques based on fixed or adaptive thresholds. Methods found in the literature are analysed from an objective point of view with regard to their methodology, advantages and limitations. Finally, a comparative study is presented. (authors)

  11. Passive Sonar Target Detection Using Statistical Classifier and Adaptive Threshold

    Directory of Open Access Journals (Sweden)

    Hamed Komari Alaie

    2018-01-01

    This paper presents the results of an experimental investigation of target detection with passive sonar in the Persian Gulf. Detecting propagated sounds in the water is one of the basic challenges for researchers in the sonar field. This challenge becomes more complex in shallow water (like the Persian Gulf) and with quiet vessels. Generally, in passive sonar, targets are detected by the sonar equation (with a constant threshold), which increases the detection error in shallow water. The purpose of this study is to propose a new method for detecting targets in passive sonar using an adaptive threshold. In this method, the target signal (sound) is processed in the time and frequency domains. For classification, a Bayesian classifier is used and the posterior distribution is estimated by a Maximum Likelihood Estimation algorithm. Finally, targets are detected by combining the detection points in both domains using a Least Mean Square (LMS) adaptive filter. Results show that the proposed method improved the true detection rate by about 24% compared with the best existing detection method.
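
    A minimal sketch of the fusion step, assuming an LMS filter is trained to combine a time-domain and a frequency-domain detection score into one decision variable (signal names, labels and step size are placeholders):

      import numpy as np

      def lms_fuse(score_time, score_freq, desired, mu=0.01):
          w = np.zeros(2)
          fused = np.zeros(len(desired))
          for n in range(len(desired)):
              x = np.array([score_time[n], score_freq[n]])
              fused[n] = w @ x
              e = desired[n] - fused[n]     # error against training labels
              w += mu * e * x               # LMS weight update
          return fused, w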

  12. Adaptive Wavelet Threshold Denoising Method for Machinery Sound Based on Improved Fruit Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2016-07-01

    As the sound signal of a machine contains abundant information and is easy to measure, acoustic-based monitoring or diagnosis systems exhibit obvious superiority, especially in some extreme conditions. However, the sound directly collected in an industrial field is always polluted by noise. In order to eliminate noise components from machinery sound, a wavelet threshold denoising method optimized by an improved fruit fly optimization algorithm (WTD-IFOA) is proposed in this paper. The sound is first decomposed by wavelet transform (WT) to obtain the coefficients of each level. As the wavelet threshold functions proposed by Donoho were discontinuous, many modified functions with continuous first and second order derivatives have been presented to realize adaptive denoising. However, the function-based denoising process is time-consuming and it is difficult to find optimal thresholds. To overcome these problems, the fruit fly optimization algorithm (FOA) was introduced into the process. Moreover, to avoid falling into local extremes, an improved fly distance range obeying a normal distribution was proposed on the basis of the original FOA. Then, the sound signal of a motor was recorded in a soundproof laboratory, and Gaussian white noise was added to the signal. The simulation results illustrated the effectiveness and superiority of the proposed approach through a comprehensive comparison among five typical methods. Finally, an industrial application on a shearer in a coal mining working face was performed to demonstrate the practical effect.
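
    The denoising pipeline itself is standard; the sketch below shows it with PyWavelets, substituting the universal threshold for the FOA-optimized, continuously differentiable threshold function of the paper:

      import numpy as np
      import pywt

      def wavelet_denoise(signal, wavelet="db4", level=4):
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          # robust noise estimate from the finest detail coefficients
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          t = sigma * np.sqrt(2 * np.log(len(signal)))   # universal threshold
          denoised = [coeffs[0]] + [pywt.threshold(d, t, mode="soft")
                                    for d in coeffs[1:]]
          return pywt.waverec(denoised, wavelet)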

  13. Consumption of vitamin A rich foods and dark adaptation threshold ...

    African Journals Online (AJOL)

    BACKGROUND: More than 7.2 million pregnant women in developing countries suffer from vitamin A deficiency. The objective of this study was to assess dark adaptation threshold of pregnant women and related socio-demographic factors in Damot Sore District, Wolayita Zone, Southern Ethiopia. METHODS: A ...

  14. Hierarchical Threshold Adaptive for Point Cloud Filter Algorithm of Moving Surface Fitting

    Directory of Open Access Journals (Sweden)

    ZHU Xiaoxiao

    2018-02-01

    In order to improve the accuracy, efficiency and adaptability of point cloud filtering algorithms, a hierarchical threshold-adaptive point cloud filter algorithm based on moving surface fitting is proposed. Firstly, the noisy points are removed using a statistical histogram method. Secondly, a grid index is established by grid segmentation, and the surface equation is set up from the lowest points among the neighborhood grids. The real height and the fitted height are calculated, and the difference between the elevations is compared with the threshold. Finally, in order to improve the filtering accuracy, hierarchical filtering is used to change the grid size and automatically set the neighborhood size and threshold until the filtering result reaches the accuracy requirement. Test data provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) is used to verify the algorithm. The type I error, type II error and total error are 7.33%, 10.64% and 6.34%, respectively. The algorithm is compared with the eight classical filtering algorithms published by ISPRS. The experimental results show that the method adapts well and produces highly accurate filtering results.

  15. Adaptive thresholding and dynamic windowing method for automatic centroid detection of digital Shack-Hartmann wavefront sensor

    International Nuclear Information System (INIS)

    Yin Xiaoming; Li Xiang; Zhao Liping; Fang Zhongping

    2009-01-01

    A Shack-Hartmann wavefront sensor (SHWS) splits the incident wavefront into many subsections and transforms distorted-wavefront detection into centroid measurement, so the accuracy of the centroid measurement determines the accuracy of the SHWS. Many methods have been presented to improve the accuracy of wavefront centroid measurement. However, most of these methods are discussed from the point of view of optics, based on the assumption that the spot intensity of the SHWS has a Gaussian distribution, which is not applicable to the digital SHWS. In this paper, we present a centroid measurement algorithm based on adaptive thresholding and a dynamic windowing method, utilizing image processing techniques for practical application of the digital SHWS in surface profile measurement. The method can detect the centroid of each focal spot precisely and robustly by eliminating the influence of various noises, such as diffraction of the digital SHWS, unevenness and instability of the light source, as well as deviation between the centroid of the focal spot and the center of the detection area. The experimental results demonstrate that the algorithm has better precision, repeatability, and stability compared with other commonly used centroid methods, such as the statistical averaging, thresholding, and windowing algorithms.
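
    A sketch of the two ingredients for one subaperture, assuming a k-sigma threshold above the local background and a window shrunk to the above-threshold bounding box (both rules are simplifications of the paper's procedure):

      import numpy as np

      def spot_centroid(sub, k=3.0):
          bg, noise = np.median(sub), np.std(sub)
          level = bg + k * noise                    # adaptive threshold
          mask = sub > level
          if not mask.any():
              return None
          ys, xs = np.nonzero(mask)
          y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
          win = np.clip(sub[y0:y1, x0:x1].astype(float) - level, 0, None)
          yy, xx = np.mgrid[y0:y1, x0:x1]           # dynamic window coordinates
          m = win.sum()
          return (yy * win).sum() / m, (xx * win).sum() / m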

  16. Convergence acceleration of Navier-Stokes equation using adaptive wavelet method

    International Nuclear Information System (INIS)

    Kang, Hyung Min; Ghafoor, Imran; Lee, Do Hyung

    2010-01-01

    An efficient adaptive wavelet method is proposed for enhancing the computational efficiency of the Navier-Stokes equations. The method is based on sparse point representation (SPR), which uses wavelet decomposition and thresholding to obtain a sparsely distributed dataset. The threshold mechanism is modified in order to maintain the spatial accuracy of a conventional Navier-Stokes solver by adapting the threshold value to the order of the spatial truncation error. The computational grid can be dynamically adapted to a transient solution to reflect local changes in the solution. The flux evaluation is then carried out only at the points of the adapted dataset, which reduces the computational effort and memory requirements. A stabilization technique is also implemented to avoid the additional numerical errors introduced by the thresholding procedure. The numerical results of the adaptive wavelet method are compared with those of a conventional solver to validate the enhancement in computational efficiency of the Navier-Stokes equations without degrading the numerical accuracy of the conventional solver.

  17. A Fast Method for Measuring Psychophysical Thresholds Across the Cochlear Implant Array

    Directory of Open Access Journals (Sweden)

    Julie A. Bierer

    2015-02-01

    A rapid threshold measurement procedure, based on Bekesy tracking, is proposed and evaluated for use with cochlear implants (CIs). Fifteen postlingually deafened adult CI users participated. Absolute thresholds for 200-ms trains of biphasic pulses were measured using the new tracking procedure and were compared with thresholds obtained with a traditional forced-choice adaptive procedure under both monopolar and quadrupolar stimulation. Virtual spectral sweeps across the electrode array were implemented in the tracking procedure via current steering, which divides the current between two adjacent electrodes and varies the proportion of current directed to each electrode. Overall, no systematic differences were found between threshold estimates with the new channel sweep procedure and estimates using the adaptive forced-choice procedure. Test–retest reliability for the thresholds from the sweep procedure was somewhat poorer than for thresholds from the forced-choice procedure. However, the new method was about 4 times faster for the same number of repetitions. Overall, the reliability and speed of the new tracking procedure give it the potential to estimate thresholds in a clinical setting. Rapid methods for estimating thresholds could be of particular clinical importance in combination with focused stimulation techniques that result in larger threshold variations between electrodes.

  18. Automatic video shot boundary detection using k-means clustering and improved adaptive dual threshold comparison

    Science.gov (United States)

    Sa, Qila; Wang, Zhihui

    2018-03-01

    At present, content-based video retrieval (CBVR) is the most mainstream video retrieval method, using a video's own features to perform automatic identification and retrieval. This method involves a key technology, i.e., shot segmentation. In this paper, a method for automatic video shot boundary detection using K-means clustering and improved adaptive dual-threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm, namely frames with significant change and frames with no significant change. Then, based on the classification results, the improved adaptive dual-threshold comparison method is used to determine both abrupt and gradual shot boundaries. Finally, an automatic video shot boundary detection system is achieved.
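
    For the dual-threshold stage, the classic twin-comparison logic can serve as a sketch: a high threshold flags abrupt cuts, while a low one opens a candidate gradual transition whose accumulated change is then tested. Here both thresholds are set from the mean and standard deviation of a frame-difference signal d, a simplification of the clustering-derived adaptive thresholds:

      import numpy as np

      def shot_boundaries(d, k_hi=5.0, k_lo=2.0):
          t_hi = d.mean() + k_hi * d.std()      # abrupt-cut threshold
          t_lo = d.mean() + k_lo * d.std()      # gradual-transition threshold
          cuts, graduals, acc, start = [], [], 0.0, None
          for n, dn in enumerate(d):
              if dn >= t_hi:
                  cuts.append(n)
                  acc, start = 0.0, None
              elif dn >= t_lo:
                  if start is None:
                      start, acc = n, 0.0
                  acc += dn                     # accumulate candidate change
                  if acc >= t_hi:               # behaves like a spread-out cut
                      graduals.append((start, n))
                      acc, start = 0.0, None
              else:
                  acc, start = 0.0, None
          return cuts, graduals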

  19. Self-Tuning Threshold Method for Real-Time Gait Phase Detection Based on Ground Contact Forces Using FSRs

    Directory of Open Access Journals (Sweden)

    Jing Tang

    2018-02-01

    This paper presents a novel methodology for detecting the gait phase of human walking on level ground. The previous threshold method (TM) sets a threshold to divide the ground contact forces (GCFs) into on-ground and off-ground states. However, previous methods for gait phase detection demonstrate no adaptability to different people and different walking speeds. Therefore, this paper presents a self-tuning triple threshold algorithm (STTTA) that calculates adjustable thresholds to adapt to human walking. Two force-sensitive resistors (FSRs) were placed on the ball and heel to measure GCFs. Three thresholds (i.e., high-threshold, middle-threshold and low-threshold) were used to search out the maximum and minimum GCFs for the self-adjustment of the thresholds. The high-threshold was the main threshold used to divide the GCFs into on-ground and off-ground statuses. Then, the gait phases were obtained through the gait phase detection algorithm (GPDA), which provides the rules that determine the calculations for the STTTA. Finally, the STTTA reliability is determined by comparing the results of the STTTA with the Mariani method, referenced as the timing analysis module (TAM), and the Lopez–Meyer method. Experimental results show that the proposed method can be used to detect gait phases in real time and obtains high reliability compared with the previous methods in the literature. In addition, the proposed method exhibits strong adaptability to different wearers walking at different walking speeds.
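
    A simplified stand-in for the self-tuning idea on a single FSR channel: the on/off threshold is re-derived from the recent GCF extremes instead of being fixed (window length and ratio are assumptions; the actual STTTA uses three thresholds and the GPDA rules):

      import numpy as np

      def gait_on_ground(gcf, ratio=0.3, window=200):
          on = np.zeros(len(gcf), dtype=bool)
          for n in range(len(gcf)):
              seg = gcf[max(0, n - window):n + 1]
              lo, hi = seg.min(), seg.max()             # recent GCF extremes
              on[n] = gcf[n] > lo + ratio * (hi - lo)   # self-tuned threshold
          return on                                     # True = foot on ground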

  20. Intelligent Mechanical Fault Diagnosis Based on Multiwavelet Adaptive Threshold Denoising and MPSO

    Directory of Open Access Journals (Sweden)

    Hao Sun

    2014-01-01

    The condition diagnosis of rotating machinery depends largely on feature analysis of the measured vibration signals. However, the signals measured from rotating machinery are usually nonstationary and nonlinear and contain noise; the useful fault features are hidden in heavy background noise. In this paper, a novel fault diagnosis method for rotating machinery based on multiwavelet adaptive threshold denoising and mutation particle swarm optimization (MPSO) is proposed. The Geronimo, Hardin, and Massopust (GHM) multiwavelet is employed for extracting weak fault features under background noise, and a method of adaptively selecting an appropriate multiwavelet threshold from the energy ratio of the multiwavelet coefficients is presented. Six nondimensional symptom parameters (SPs) in the frequency domain are defined to reflect the features of the vibration signals measured in each state. A detection index (DI) based on statistical theory is also defined to evaluate the sensitivity of each SP for condition diagnosis. An MPSO algorithm with adaptive inertia weight adjustment and particle mutation is proposed for condition identification. The MPSO algorithm effectively solves the local optimum and premature convergence problems of the conventional particle swarm optimization (PSO) algorithm, and it can provide a more accurate estimate for fault diagnosis. Practical examples of fault diagnosis for rolling element bearings are given to verify the effectiveness of the proposed method.

  1. On the limitations of fixed-step-size adaptive methods with response confidence.

    Science.gov (United States)

    Hsu, Yung-Fong; Chin, Ching-Lan

    2014-05-01

    The family of (non-parametric, fixed-step-size) adaptive methods, also known as 'up-down' or 'staircase' methods, has been used extensively in psychophysical studies for threshold estimation. Extensions of adaptive methods to non-binary responses have also been proposed. An example is the three-category weighted up-down (WUD) method (Kaernbach, 2001) and its four-category extension (Klein, 2001). Such an extension, however, is somewhat restricted, and in this paper we discuss its limitations. To facilitate the discussion, we characterize the extension of WUD by an algorithm that incorporates response confidence into a family of adaptive methods. This algorithm can also be applied to two other adaptive methods, namely Derman's up-down method and the biased-coin design, which are suitable for estimating any threshold quantile. We then discuss, via simulations of the above three methods, the limitations of the algorithm. To illustrate, we conduct a small-scale experiment using the extended WUD under different response confidence formats to evaluate the consistency of threshold estimation. © 2013 The British Psychological Society.
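
    For reference, the core of Kaernbach's weighted up-down rule is compact: steps down and up are sized in the ratio (1 - p) : p so that the track equilibrates at the target quantile p. A sketch with a simulated logistic observer (all parameters invented for illustration):

      import numpy as np

      def wud_threshold(p=0.75, step=2.0, trials=200,
                        true_thresh=10.0, slope=1.0, seed=0):
          rng = np.random.default_rng(seed)
          x, track = 20.0, []
          s_down, s_up = step * (1 - p), step * p   # expected drift 0 at p
          for _ in range(trials):
              p_yes = 1.0 / (1.0 + np.exp(-(x - true_thresh) / slope))
              if rng.random() < p_yes:
                  x -= s_down               # "yes": make the task harder
              else:
                  x += s_up                 # "no": make it easier
              track.append(x)
          return np.mean(track[trials // 2:])   # average of the late track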

  2. Segmentasi Pembuluh Darah Retina Pada Citra Fundus Menggunakan Gradient Based Adaptive Thresholding Dan Region Growing [Segmentation of Retinal Blood Vessels in Fundus Images Using Gradient Based Adaptive Thresholding and Region Growing]

    Directory of Open Access Journals (Sweden)

    Deni Sutaji

    2016-07-01

    Segmentation of blood vessels in retinal fundus images is substantial in medicine, because it can be used to detect diseases such as diabetic retinopathy, hypertension, and cardiovascular disease. A doctor takes about two hours to trace the blood vessels of the retina, so faster screening methods are needed. Previous methods are able to segment blood vessels with sensitivity to variations in vessel width, but they produce over-segmentation in pathological areas. Therefore, this study aims to develop a segmentation method for blood vessels in retinal fundus images which can reduce over-segmentation in pathological areas using Gradient Based Adaptive Thresholding and Region Growing. The proposed method consists of three stages, namely segmentation of the main blood vessels, detection of pathological areas, and segmentation of thin blood vessels. Main blood vessel segmentation uses high-pass filtering and top-hat reconstruction on the contrast-adjusted green channel, which yields a clear separation between object and background. Pathological areas are detected using the Gradient Based Adaptive Thresholding method. Thin blood vessel segmentation uses Region Growing based on information from the main blood vessel segmentation and the detected pathological areas. The outputs of the main and thin blood vessel segmentations are then combined to reconstruct an image of the blood vessels as the system output. This method is able to segment blood vessels in the DRIVE retinal fundus image database with an accuracy of 95.25% and an Area Under Curve (AUC) value in the relative operating characteristic (ROC) curve of 74.28%. Keywords: Blood vessel, fundus retina image, gradient based adaptive thresholding, pathology, region growing, segmentation.

  3. Comparisons of adaptive TIN modelling filtering method and threshold segmentation filtering method of LiDAR point cloud

    International Nuclear Information System (INIS)

    Chen, Lin; Fan, Xiangtao; Du, Xiaoping

    2014-01-01

    Point cloud filtering is the basic and key step in LiDAR data processing. The Adaptive Triangulated Irregular Network Modelling (ATINM) algorithm and the Threshold Segmentation on Elevation Statistics (TSES) algorithm are among the mature algorithms. However, few studies concentrate on the parameter selection of ATINM and the iteration condition of TSES, which can greatly affect the filtering results. The paper first examines these two key problems in two different terrain environments. For a flat area, a small height parameter and angle parameter perform well, while for areas with complex feature changes, a large height parameter and angle parameter perform well. One segmentation pass is enough for flat areas, whereas repeated segmentations are essential for complex areas. The paper then compares and analyses the results of the two methods. ATINM has a larger type I error in both data sets as it sometimes removes excessive points. TSES has a larger type II error in both data sets as it ignores topological relations between points. ATINM performs well even over a large region with dramatic terrain, while TSES is more suitable for small regions with flat terrain. Different parameters and iterations can thus cause relatively large filtering differences.

  4. Modern Adaptive Analytics Approach to Lowering Seismic Network Detection Thresholds

    Science.gov (United States)

    Johnson, C. E.

    2017-12-01

    Modern seismic networks present a number of challenges, perhaps most notably those related to 1) extreme variation in station density, 2) temporal variation in station availability, and 3) the need to achieve detectability for much smaller events of strategic importance. The first of these has been reasonably addressed in the development of modern seismic associators, such as GLASS 3.0 by the USGS/NEIC, though some work still remains to be done in this area. However, the latter two challenges demand special attention. Station availability is impacted by weather, equipment failure, and the adding or removing of stations, and while thresholds have been pushed to increasingly smaller magnitudes, new algorithms are needed to achieve even lower thresholds. Station availability can be addressed by a modern, adaptive architecture that maintains specified performance envelopes using adaptive analytics coupled with complexity theory. Finally, detection thresholds can be lowered using a novel approach that tightly couples waveform analytics with the event detection and association processes, based on a principled repicking algorithm that uses particle realignment for enhanced phase discrimination.

  5. Image reconstruction with an adaptive threshold technique in electrical resistance tomography

    International Nuclear Information System (INIS)

    Kim, Bong Seok; Khambampati, Anil Kumar; Kim, Sin; Kim, Kyung Youn

    2011-01-01

    In electrical resistance tomography, electrical currents are injected through electrodes placed on the surface of a domain and the corresponding voltages are measured. Based on these current and voltage data, the cross-sectional resistivity distribution is reconstructed. Electrical resistance tomography shows high temporal resolution for monitoring fast transient processes, but improving the spatial resolution of the reconstructed images remains a challenging problem. In this paper, a novel image reconstruction technique is proposed to improve the spatial resolution by applying an adaptive threshold method to the iterative Gauss–Newton method. Numerical simulations and phantom experiments have been performed to illustrate the superior performance of the proposed scheme in terms of spatial resolution.

  6. Kinetics of the early adaptive response and adaptation threshold dose

    International Nuclear Information System (INIS)

    Mendiola C, M.T.; Morales R, P.

    2003-01-01

    The expression kinetics of the adaptive response (RA) in mouse leukocytes in vivo and the minimum dose of gamma radiation that induces it were determined. The mice were exposed to 0.005 or 0.02 Gy of 137Cs as the adaptation dose and 1 h later to the challenge dose (1.0 Gy); another group was exposed only to 1.0 Gy, and the DNA damage was evaluated. The treatment with 0.005 Gy did not induce an RA, and 0.02 Gy caused an effect similar to that obtained with 0.01 Gy. The RA was shown from an interval of 0.5 h, with maximum expression at 5.0 h. The threshold dose to induce the RA is 0.01 Gy, and at 5.0 h the largest quantity of molecules presumably related to the protection of the DNA is present. (Author)

  7. Threshold-adaptive Canny operator based on cross-zero points

    Science.gov (United States)

    Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu

    2018-03-01

    Canny edge detection [1] is a technique to extract useful structural information from different vision objects and dramatically reduce the amount of data to be processed, and it has been widely applied in various computer vision systems. Two thresholds have to be set before the edges are separated from the background. Usually, two static values are set as the thresholds based on the experience of developers [2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is deduced to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantage for stable edge detection under changing illumination.
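
    The paper's cross-zero interpolation rule is not reproduced here, but as a point of comparison the widely used median-based heuristic also sets both hysteresis thresholds automatically from image statistics (the sigma value below is an assumption of this sketch):

      import cv2
      import numpy as np

      def auto_canny(gray, sigma=0.33):
          v = float(np.median(gray))
          lower = int(max(0, (1.0 - sigma) * v))    # low hysteresis threshold
          upper = int(min(255, (1.0 + sigma) * v))  # high hysteresis threshold
          return cv2.Canny(gray, lower, upper)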

  8. Adaptive threshold control for auto-rate fallback algorithm in IEEE 802.11 multi-rate WLANs

    Science.gov (United States)

    Wu, Qilin; Lu, Yang; Zhu, Xiaolin; Ge, Fangzhen

    2012-03-01

    The IEEE 802.11 standard supports multiple rates for data transmission in the physical layer. Nowadays, to improve network performance, a rate adaptation scheme called auto-rate fallback (ARF) is widely adopted in practice. However, the ARF scheme suffers performance degradation in environments with multiple contending nodes. In this article, we propose a novel rate adaptation scheme called ARF with adaptive threshold control. In a multiple-contending-node environment, the proposed scheme can effectively mitigate the effect of frame collisions on rate adaptation decisions by adaptively adjusting the rate-up and rate-down thresholds according to the current collision level. Simulation results show that the proposed scheme achieves significantly higher throughput than other existing rate adaptation schemes. Furthermore, the simulation results also demonstrate that the proposed scheme responds effectively to varying channel conditions.

  9. Impact of sub and supra-threshold adaptation currents in networks of spiking neurons.

    Science.gov (United States)

    Colliaux, David; Yger, Pierre; Kaneko, Kunihiko

    2015-12-01

    Neuronal adaptation is the intrinsic capacity of the brain to change, by various mechanisms, its dynamical responses as a function of the context. Such a phenomenon, widely observed in vivo and in vitro, is known to be crucial in homeostatic regulation of activity and in gain control. The effects of adaptation have already been studied at the single-cell level, resulting from either voltage- or calcium-gated channels, both activated by spiking activity and modulating the dynamical responses of the neurons. In this study, by disentangling those effects into a linear (sub-threshold) and a non-linear (supra-threshold) part, we focus on the functional role of these two distinct components of adaptation on neuronal activity at various scales, starting from single-cell responses up to recurrent network dynamics, under stationary or non-stationary stimulation. The effects of slow currents on collective dynamics, like the modulation of population oscillations and the reliability of spike patterns, are quantified for various types of adaptation in sparse recurrent networks.

  10. A New Multistage Lattice Vector Quantization with Adaptive Subband Thresholding for Image Compression

    Directory of Open Access Journals (Sweden)

    Salleh MFM; J. Soraghan

    2007-01-01

    Lattice vector quantization (LVQ) reduces coding complexity and computation due to its regular structure. A new multistage LVQ (MLVQ) using an adaptive subband thresholding technique is presented and applied to image compression. The technique concentrates on reducing the quantization error of the quantized vectors by “blowing out” the residual quantization errors with an LVQ scale factor. The significant coefficients of each subband are identified using an optimum adaptive thresholding scheme for each subband. A variable length coding procedure using Golomb codes is used to compress the codebook index, which produces a very efficient and fast technique for entropy coding. Experimental results using MLVQ are shown to be significantly better than JPEG 2000 and recent VQ techniques for various test images.

  12. Adaptive and non-adaptive data hiding methods for grayscale images based on modulus function

    Directory of Open Access Journals (Sweden)

    Najme Maleki

    2014-07-01

    This paper presents two data hiding methods for grayscale images, one adaptive and one non-adaptive, based on the modulus function. Our adaptive scheme is based on the concept of human visual sensitivity: pixels in edge areas can tolerate many more changes than those in smooth areas without producing distortion visible to human eyes. In our adaptive scheme, the average difference value of the four neighborhood pixels in a block, compared via a threshold secret key, determines whether the current block is located in an edge or a smooth area. Pixels in edge areas are embedded with Q bits of secret data, with a larger value of Q than for pixels placed in smooth areas. We also present a non-adaptive data hiding algorithm which, via an error reduction procedure, produces a high visual quality for the stego-image. The proposed schemes present several advantages: 1) the embedding capacity and the visual quality of the stego-image are scalable; in other words, the embedding rate as well as the image quality can be scaled for practical applications; 2) high embedding capacity with minimal visual distortion can be achieved; 3) our methods require little memory space for the secret data embedding and extraction phases; 4) secret keys are used to protect the embedded secret data, so the level of security is high; 5) the problem of overflow or underflow does not occur. Experimental results indicate that the proposed adaptive scheme is significantly superior to the currently existing scheme in terms of stego-image visual quality, embedding capacity and level of security, and that our non-adaptive method is better than other non-adaptive methods in terms of stego-image quality. Results also show that our adaptive algorithm can resist the RS steganalysis attack.
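
    The modulus-function core is easy to sketch: Q secret bits replace the pixel value modulo 2^Q, choosing the congruent value nearest the original pixel. The fragment below shows only that kernel; the adaptive edge/smooth choice of Q and the error-reduction procedure of the paper are omitted:

      def embed_pixel(p, bits):
          q = len(bits)
          m = 1 << q                        # modulus 2**Q
          target = int(bits, 2)
          base = p - (p % m) + target
          candidates = [base - m, base, base + m]
          valid = [c for c in candidates if 0 <= c <= 255]
          return min(valid, key=lambda c: abs(c - p))   # nearest stego value

      def extract_pixel(p, q):
          return format(p % (1 << q), f"0{q}b")

      stego = embed_pixel(150, "101")       # -> 149, nearest value with mod 8 == 5
      assert extract_pixel(stego, 3) == "101"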

  13. FOXP3-stained image analysis for follicular lymphoma: optimal adaptive thresholding with maximal nucleus coverage

    Science.gov (United States)

    Senaras, C.; Pennell, M.; Chen, W.; Sahiner, B.; Shana'ah, A.; Louissaint, A.; Hasserjian, R. P.; Lozanski, G.; Gurcan, M. N.

    2017-03-01

    Immunohistochemical detection of the FOXP3 antigen is a useful marker for the detection of regulatory T lymphocytes (TR) in formalin-fixed and paraffin-embedded sections of different types of tumor tissue. TR play a major role in the homeostasis of normal immune systems, where they prevent auto-reactivity of the immune system towards the host. This beneficial effect of TR is frequently "hijacked" by malignant cells, where tumor-infiltrating regulatory T cells are recruited to inhibit the beneficial immune response of the host against the tumor cells. In the majority of human solid tumors, an increased number of tumor-infiltrating FOXP3-positive TR is associated with worse outcome. However, in follicular lymphoma (FL) the impact of the number and distribution of TR on the outcome still remains controversial. In this study, we present a novel method to detect and enumerate nuclei from FOXP3-stained images of FL biopsies. The proposed method defines a new adaptive thresholding procedure, namely the optimal adaptive thresholding (OAT) method, which aims to minimize under-segmented and over-segmented nuclei for coarse segmentation. Next, we integrate a parameter-free elliptical arc and line segment detector (ELSD) as additional information to refine the segmentation results and to split most of the merged nuclei. Finally, we utilize a state-of-the-art superpixel method, Simple Linear Iterative Clustering (SLIC), to split the rest of the merged nuclei. Our dataset consists of 13 region-of-interest images containing 769 negative and 88 positive nuclei. Three expert pathologists evaluated the method and reported sensitivity values in detecting negative and positive nuclei ranging from 83-100% and 90-95%, and precision values of 98-100% and 99-100%, respectively. The proposed solution can be used to investigate the impact of FOXP3-positive nuclei on outcome and prognosis in FL.

  14. An Adaptive Threshold Image Reconstruction Algorithm of Oil-Water Two-Phase Flow in Electrical Capacitance Tomography System

    International Nuclear Information System (INIS)

    Qin, M; Chen, D Y; Wang, L L; Yu, X Y

    2006-01-01

    The subject investigated in this paper is an 8-electrode ECT system for oil-water two-phase flow, and the measuring principle is analysed. Within the ART image-reconstruction algorithm, an adaptive threshold image reconstruction is presented to improve the quality of the reconstructed image and the calculation accuracy of the concentration; generally, the measurement error is about 1%. This method avoids many defects of other measurement methods, such as slow speed, high cost and poor safety. Therefore, it offers a new method for concentration measurement of oil-water two-phase flow.

  15. Adaptive local thresholding for robust nucleus segmentation utilizing shape priors

    Science.gov (United States)

    Wang, Xiuzhong; Srinivas, Chukka

    2016-03-01

    This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied to the initial edge pixels, using a non-symmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, based on that, the refined edges. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. The saliency-weighted foreground and background histograms are then created. These two histograms are used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then used to generate the threshold for each pixel via interpolation. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.
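
    A minimal sketch of the block-then-interpolate structure, with per-block Otsu standing in for the saliency-weighted histogram error minimization (block size and the libraries used are assumptions):

      import numpy as np
      from scipy.ndimage import zoom
      from skimage.filters import threshold_otsu

      def foreground_mask(img, block=64):
          h, w = img.shape
          by, bx = h // block, w // block
          t = np.zeros((by, bx))
          for i in range(by):
              for j in range(bx):
                  t[i, j] = threshold_otsu(img[i * block:(i + 1) * block,
                                               j * block:(j + 1) * block])
          tmap = zoom(t, (h / by, w / bx), order=1)   # per-pixel threshold map
          return img > tmap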

  16. Rapid Estimation of Gustatory Sensitivity Thresholds with SIAM and QUEST

    Directory of Open Access Journals (Sweden)

    Richard Höchenberger

    2017-06-01

    Adaptive methods provide quick and reliable estimates of sensory sensitivity. Yet these procedures are typically developed for and applied to the non-chemical senses only, i.e., to vision, audition, and somatosensation. The relatively long inter-stimulus intervals in gustatory studies, which are required to minimize adaptation and habituation, call for time-efficient threshold estimation. We therefore tested the suitability of two adaptive yes-no methods based on SIAM and QUEST for rapid estimation of taste sensitivity by comparing test-retest reliability for sucrose, citric acid, sodium chloride, and quinine hydrochloride thresholds. We show that taste thresholds can be obtained in a time-efficient manner with both methods (within only 6.5 min on average using QUEST and about 9.5 min using SIAM). QUEST yielded higher test-retest correlations than SIAM for three of the four tastants. Either method allows taste threshold estimation with low strain on participants, rendering them particularly advantageous for use in subjects with limited attentional or mnemonic capacities, and for time-constrained applications in cohort studies or in the testing of patients and children.

  17. Mass Detection in Mammographic Images Using Wavelet Processing and Adaptive Threshold Technique.

    Science.gov (United States)

    Vikhe, P S; Thool, V R

    2016-04-01

    Detection of masses in mammograms for early diagnosis of breast cancer is a significant task in reducing the mortality rate. However, in some cases, screening for masses is difficult for the radiologist due to variations in contrast, fuzzy edges and noisy mammograms. Masses and micro-calcifications are the distinctive signs for diagnosis of breast cancer. This paper presents a method for mass enhancement using a piecewise linear operator in combination with wavelet processing of mammographic images. The method includes artifact suppression and pectoral muscle removal based on morphological operations. Finally, mass segmentation using an adaptive threshold technique is carried out to separate the mass from the background. The proposed method has been tested on 130 (45 + 85) images with 90.9 and 91% True Positive Fraction (TPF) at 2.35 and 2.1 average False Positives Per Image (FP/I) from two different databases, namely the Mammographic Image Analysis Society (MIAS) and the Digital Database for Screening Mammography (DDSM). The obtained results show that the proposed technique gives improved diagnosis in early breast cancer detection.

  18. ‘Soglitude’- introducing a method of thinking thresholds

    Directory of Open Access Journals (Sweden)

    Tatjana Barazon

    2010-04-01

    ‘Soglitude’ is an invitation to acknowledge the existence of thresholds in thought. A threshold in thought designates the indetermination, the passage, the evolution of every state the world is in. The creation we add to it, and the objectivity we suppose: on the border of those two ideas lies our perceptive threshold. No state will ever be permanent, and in order to stress the temporary, fluent character of the world and our perception of it, we want to introduce a new, suitable method to think change and transformation, once we acknowledge our own threshold nature. The contributions gathered in this special issue come from various disciplines: anthropology, philosophy, critical theory, film studies, political science, literature and history. The variety of these insights shows the resonance of the idea of the threshold in every category of thought. We hope to enlarge the notion in further issues on physics and chemistry, as well as mathematics. The articles in this issue introduce the method of threshold thinking by showing the importance of the in-between, of the changing of perspective, in their respective domains. The ‘Documents’ section, named INTERSTICES, includes a selection of poems, two essays, a philosophical-artistic project called ‘infraphysique’, a performance on thresholds in the soul, and a dialogue with Israel Rosenfield. This issue presents a kaleidoscope of possible threshold thinking and hopes to initiate new ways of looking at things. For every change that occurs in reality there is a subjective counterpart in our perception, and this needs to be acknowledged as such. What we name objective is reflected in our own personal perception in its own personal manner, in such a way that the objectivity of an event might altogether be questioned. The absolute point of view, the view from "nowhere", could well be the projection that causes dogmatism. By introducing the method of thinking thresholds into a system, be it

  19. A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient electromagnetic signal

    Science.gov (United States)

    Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun

    2016-05-01

    The ground electrical source airborne transient electromagnetic (GREATEM) system on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency, and in recent years it has become an important technical means of rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics noise, aircraft engine noise and other man-made electromagnetic noise). These noises degrade the imaging quality for data interpretation. Based on the characteristics of the GREATEM data and the major noises, we propose a de-noising algorithm utilizing the wavelet threshold method and exponential adaptive window width-fitting. Firstly, the white noise is filtered from the measured data using the wavelet threshold method. Then, the data are segmented using data windows whose step lengths follow even logarithmic intervals. The data polluted by electromagnetic noise are identified within each window based on the discriminating principle of energy detection, and the attenuation characteristics of the data slope are extracted. Eventually, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with their fitting results. Thus the non-stationary electromagnetic noise can be effectively removed. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that both stationary white noise and non-stationary electromagnetic noise in GREATEM signals can be effectively filtered using the wavelet threshold-exponential adaptive window width-fitting algorithm, which enhances the imaging quality.

  20. Adaptive threshold-based shadow masking for across-date settlement classification of panchromatic quickBird images

    CSIR Research Space (South Africa)

    Luus, FPS

    2014-06-01

    Published in IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 6, June 2014, p. 1153. Authors: F. P. S. Luus, F. van den Bergh, and B. T. J. Maharaj.

  1. Alternative method for determining anaerobic threshold in rowers

    Directory of Open Access Journals (Sweden)

    Giovani Dos Santos Cunha

    2008-01-01

    http://dx.doi.org/10.5007/1980-0037.2008v10n4p367 In rowing, the standard breathing that athletes are trained to use makes it difficult, or even impossible, to detect ventilatory limits, due to the coupling of the breath with the technical movement. For this reason, some authors have proposed determining the anaerobic threshold from the respiratory exchange ratio (RER), but there is not yet consensus on what value of RER should be used. The objective of this study was to test what value of RER corresponds to the anaerobic threshold and whether this value can be used as an independent parameter for determining the anaerobic threshold of rowers. The sample comprised 23 male rowers. They were submitted to a maximal cardiorespiratory test on a rowing ergometer with concurrent ergospirometry in order to determine VO2max and the physiological variables corresponding to their anaerobic threshold. The anaerobic threshold was determined using the Dmax (maximal distance) method. The physiological variables were classified into maximum values and anaerobic threshold values. At the maximal state the rowers reached VO2 of 58.2±4.4 ml.kg-1.min-1, lactate of 8.2±2.1 mmol.L-1, power of 384±54.3 W and RER of 1.26±0.1. At the anaerobic threshold they reached VO2 of 46.9±7.5 ml.kg-1.min-1, lactate of 4.6±1.3 mmol.L-1, power of 300±37.8 W and RER of 0.99±0.1. Conclusions: the RER can be used as an independent method for determining the anaerobic threshold of rowers, adopting a value of 0.99; however, RER should exhibit a non-linear increase above this figure.
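
    The Dmax rule itself is compact in vector form: the threshold is the test stage whose (power, lactate) point lies farthest from the chord joining the first and last points of the lactate curve. A sketch on made-up data (the numbers below are illustrative, not the study's):

      import numpy as np

      power   = np.array([150., 200., 250., 300., 350., 384.])   # W
      lactate = np.array([1.2, 1.6, 2.3, 3.6, 5.9, 8.2])         # mmol/L

      p0 = np.array([power[0], lactate[0]])
      p1 = np.array([power[-1], lactate[-1]])
      u = (p1 - p0) / np.linalg.norm(p1 - p0)        # unit chord direction
      rel = np.column_stack([power, lactate]) - p0
      dist = np.abs(rel[:, 0] * u[1] - rel[:, 1] * u[0])   # perpendicular distance
      i = int(np.argmax(dist))
      print(f"Dmax threshold at {power[i]:.0f} W, {lactate[i]:.1f} mmol/L")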

  2. A New Wavelet Threshold Determination Method Considering Interscale Correlation in Signal Denoising

    Directory of Open Access Journals (Sweden)

    Can He

    2015-01-01

    Due to its simple calculation and good denoising effect, the wavelet threshold denoising method has been widely used in signal denoising. In this method, the threshold is an important parameter that affects the denoising effect. In order to improve the denoising effect of existing methods, a new threshold considering interscale correlation is presented. Firstly, a new correlation index is proposed based on the propagation characteristics of the wavelet coefficients. Then, a threshold determination strategy is obtained using the new index. At the end of the paper, a simulation experiment is given to verify the effectiveness of the proposed method. In the experiment, four benchmark signals are used as test signals. Simulation results show that the proposed method can achieve a good denoising effect under various signal types, noise intensities, and thresholding functions.

  3. Threshold selection for classification of MR brain images by clustering method

    Energy Technology Data Exchange (ETDEWEB)

    Moldovanu, Simona (Faculty of Sciences and Environment, Department of Chemistry, Physics and Environment, Dunărea de Jos University of Galaţi, 47 Domnească St., 800008, Galaţi, Romania; Dumitru Moţoc High School, 15 Milcov St., 800509, Galaţi, Romania); Obreja, Cristian; Moraru, Luminita, E-mail: luminita.moraru@ugal.ro (Faculty of Sciences and Environment, Department of Chemistry, Physics and Environment, Dunărea de Jos University of Galaţi, Romania)

    2015-12-07

    Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool to separate objects from the background and, further, in classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not use the well-known method for binarization. Instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis groups. The dissimilarity (or the distance between classes) has been established using a clustering method based on dendrograms. We tested our method using two classes of images: the first consists of 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and each threshold, the number of white pixels (i.e., the area of white objects in the binary image) was determined. These pixel counts represent the objects in the clustering operation. The following optimum threshold values were obtained: T = 80 for PD images and T = 30 for T2w images. Each threshold clearly separates the clusters belonging to the studied groups, healthy subjects and patients with multiple sclerosis.

  4. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics.

    Science.gov (United States)

    Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo

    2013-06-01

    In femtosecond laser ophthalmic surgery, tissue dissection is achieved by photodisruption based on laser-induced optical breakdown. In order to minimize collateral damage to the eye, laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile, which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as an eye model and determined the breakdown threshold in single-pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wave front error of only one third of the wavelength, the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at 5 kilohertz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both applied and transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery.

  5. Evaluation of Maryland abutment scour equation through selected threshold velocity methods

    Science.gov (United States)

    Benedict, S.T.

    2010-01-01

    The U.S. Geological Survey, in cooperation with the Maryland State Highway Administration, used field measurements of scour to evaluate the sensitivity of the Maryland abutment scour equation to the critical (or threshold) velocity variable. Four selected methods for estimating threshold velocity were applied to the Maryland abutment scour equation, and the predicted scour was compared to the field measurements. Results indicated that the performance of the Maryland abutment scour equation was sensitive to the threshold velocity, with some threshold velocity methods producing better estimates of predicted scour than others. In addition, results indicated that regional stream characteristics can affect the performance of the Maryland abutment scour equation, with moderate-gradient streams performing differently from low-gradient streams. On the basis of the findings of the investigation, guidance for selecting threshold velocity methods for application to the Maryland abutment scour equation is provided, and limitations are noted.

  6. A NDVI assisted remote sensing image adaptive scale segmentation method

    Science.gov (United States)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multiscale segmentation of images can effectively form the boundaries of different objects at different scales. However, for remote sensing images, which cover wide areas with complicated ground objects, the number of suitable segmentation scales and the size of each scale are still difficult to determine accurately, which severely restricts rapid information extraction from remote sensing images. A great deal of experimentation has shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive-scale segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. According to the different regions, which consist of different targets, different segmentation scale boundaries can be created. The experimental results showed that the adaptive segmentation method based on NDVI can effectively create object boundaries for the different ground objects of remote sensing images.
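    A hedged sketch of the NDVI-similarity idea: compute NDVI from the red and near-infrared bands and mask the pixels whose NDVI lies within a tolerance of a seed pixel's value. The paper's iterative scale selection is reduced to this single-pass illustration, and the band arrays and tolerance are placeholders.

      import numpy as np

      def ndvi(nir, red, eps=1e-6):
          # Standard NDVI definition; eps guards against division by zero.
          return (nir - red) / (nir + red + eps)

      def ndvi_similar_mask(ndvi_img, seed_rc, tol=0.1):
          # Pixels whose NDVI is within tol of the seed pixel's NDVI.
          return np.abs(ndvi_img - ndvi_img[seed_rc]) < tol

      nir = np.random.rand(64, 64)   # stand-in near-infrared band
      red = np.random.rand(64, 64)   # stand-in red band
      mask = ndvi_similar_mask(ndvi(nir, red), seed_rc=(32, 32), tol=0.1)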

  7. Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality.

    Science.gov (United States)

    Gosling, Simon N; Hondula, David M; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer

    2017-08-16

    Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to "adaptation uncertainty" (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. This study had three aims: a) compare the range in projected impacts that arises from using different adaptation modeling methods; b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c) recommend modeling method(s) to use in future impact assessments. We estimated impacts for 2070-2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634.

  8. Variable threshold method for ECG R-peak detection.

    Science.gov (United States)

    Kew, Hsein-Ping; Jeong, Do-Un

    2011-10-01

    In this paper, a wearable belt-type ECG electrode, worn around the chest to measure the ECG in real time, is produced in order to minimize the inconvenience of wearing it. The ECG signal is detected using a potential instrumentation system. The measured ECG signal is transmitted to a personal computer via an ultra-low-power wireless data communication unit based on a Zigbee-compatible wireless sensor node. ECG signals carry a lot of clinical information for a cardiologist, especially the R-peaks in the ECG. R-peak detection generally uses a fixed threshold value. There will be errors in peak detection when the baseline changes due to motion artifacts and when the signal size changes. Preprocessing, which includes a differentiation step and the Hilbert transform, is used as the signal preprocessing algorithm. Thereafter, a variable threshold method is used to detect the R-peaks, which is more accurate and efficient than a fixed-threshold method. R-peak detection using the MIT-BIH databases and long-term real-time ECG was performed in this research in order to evaluate performance.
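    A minimal sketch of a variable-threshold R-peak detector is given below. The differentiation plus Hilbert-transform envelope follows the record's preprocessing description, but the threshold update rule (a running fraction of recent peak amplitudes) and the parameter values are illustrative assumptions, not the authors' exact formula.

      import numpy as np
      from scipy.signal import hilbert

      def detect_r_peaks(ecg, fs, frac=0.6, refractory=0.25):
          # Envelope of the differentiated signal via the Hilbert transform.
          env = np.abs(hilbert(np.diff(ecg, prepend=ecg[0])))
          thr = frac * env[: int(2 * fs)].max()           # initial threshold
          peaks, last = [], -np.inf
          for i in range(1, len(env) - 1):
              is_peak = env[i] > thr and env[i] >= env[i - 1] and env[i] >= env[i + 1]
              if is_peak and i - last > refractory * fs:  # outside refractory period
                  peaks.append(i)
                  last = i
                  thr = frac * env[i] + (1 - frac) * thr  # adapt the threshold
          return np.array(peaks)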

  9. Viral Diversity Threshold for Adaptive Immunity in Prokaryotes

    Science.gov (United States)

    Weinberger, Ariel D.; Wolf, Yuri I.; Lobkovsky, Alexander E.; Gilmore, Michael S.; Koonin, Eugene V.

    2012-01-01

    Bacteria and archaea face continual onslaughts of rapidly diversifying viruses and plasmids. Many prokaryotes maintain adaptive immune systems known as clustered regularly interspaced short palindromic repeats (CRISPR) and CRISPR-associated genes (Cas). CRISPR-Cas systems are genomic sensors that serially acquire viral and plasmid DNA fragments (spacers) that are utilized to target and cleave matching viral and plasmid DNA in subsequent genomic invasions, offering critical immunological memory. Only 50% of sequenced bacteria possess CRISPR-Cas immunity, in contrast to over 90% of sequenced archaea. To probe why half of bacteria lack CRISPR-Cas immunity, we combined comparative genomics and mathematical modeling. Analysis of hundreds of diverse prokaryotic genomes shows that CRISPR-Cas systems are substantially more prevalent in thermophiles than in mesophiles. With sequenced bacteria disproportionately mesophilic and sequenced archaea mostly thermophilic, the presence of CRISPR-Cas appears to depend more on environmental temperature than on bacterial-archaeal taxonomy. Mutation rates are typically severalfold higher in mesophilic prokaryotes than in thermophilic prokaryotes. To quantitatively test whether accelerated viral mutation leads microbes to lose CRISPR-Cas systems, we developed a stochastic model of virus-CRISPR coevolution. The model competes CRISPR-Cas-positive (CRISPR-Cas+) prokaryotes against CRISPR-Cas-negative (CRISPR-Cas−) prokaryotes, continually weighing the antiviral benefits conferred by CRISPR-Cas immunity against its fitness costs. Tracking this cost-benefit analysis across parameter space reveals viral mutation rate thresholds beyond which CRISPR-Cas cannot provide sufficient immunity and is purged from host populations. These results offer a simple, testable viral diversity hypothesis to explain why mesophilic bacteria disproportionately lack CRISPR-Cas immunity. More generally, fundamental limits on the adaptability of biological

  10. Alternative method for determining anaerobic threshold in rowers

    Directory of Open Access Journals (Sweden)

    Giovani dos Santos Cunha

    2008-12-01

    Full Text Available In rowing, the standard breathing that athletes are trained to use makes it difficult, or even impossible, to detect ventilatory limits, due to the coupling of the breath with the technical movement. For this reason, some authors have proposed determining the anaerobic threshold from the respiratory exchange ratio (RER), but there is not yet consensus on what value of RER should be used. The objective of this study was to test what value of RER corresponds to the anaerobic threshold and whether this value can be used as an independent parameter for determining the anaerobic threshold of rowers. The sample comprised 23 male rowers. They were submitted to a maximal cardiorespiratory test on a rowing ergometer with concurrent ergospirometry in order to determine VO2máx and the physiological variables corresponding to their anaerobic threshold. The anaerobic threshold was determined using the Dmax (maximal distance) method. The physiological variables were classified into maximum values and anaerobic threshold values. At their maximal state, these rowers reached VO2 (58.2±4.4 ml.kg-1.min-1), lactate (8.2±2.1 mmol.L-1), power (384±54.3 W) and RER (1.26±0.1). At the anaerobic threshold they reached VO2 (46.9±7.5 ml.kg-1.min-1), lactate (4.6±1.3 mmol.L-1), power (300±37.8 W) and RER (0.99±0.1). Conclusions: the RER can be used as an independent method for determining the anaerobic threshold of rowers, adopting a value of 0.99; however, RER should exhibit a non-linear increase above this figure.

  11. A Fiber Bragg Grating Interrogation System with Self-Adaption Threshold Peak Detection Algorithm.

    Science.gov (United States)

    Zhang, Weifang; Li, Yingwu; Jin, Bo; Ren, Feifei; Wang, Hongxun; Dai, Wei

    2018-04-08

    A Fiber Bragg Grating (FBG) interrogation system with a self-adaption threshold peak detection algorithm is proposed and experimentally demonstrated in this study. The system is composed of a field programmable gate array (FPGA) and advanced RISC machine (ARM) platform, a tunable Fabry-Perot (F-P) filter and an optical switch. To improve system resolution, the F-P filter was employed. As this filter is non-linear, it causes a shift of the central wavelengths, a deviation that is compensated by parts of the circuit. Time-division multiplexing (TDM) of the FBG sensors is achieved by the optical switch, with the system able to accommodate a combination of 256 FBG sensors. A wavelength scanning speed of 800 Hz can be achieved by the FPGA+ARM platform. In addition, a peak detection algorithm based on a self-adaption threshold is designed, and the peak recognition rate is 100%. Experiments at different temperatures were conducted to demonstrate the effectiveness of the system. Four FBG sensors were examined in a thermal chamber without stress. When the temperature changed from 0 °C to 100 °C, the degree of linearity between central wavelengths and temperature was about 0.999, with a temperature sensitivity of 10 pm/°C. The static interrogation precision was able to reach 0.5 pm. Through the comparison of different peak detection algorithms and interrogation approaches, the system was verified to have optimum comprehensive performance in terms of precision, capacity and speed.

  12. A Fiber Bragg Grating Interrogation System with Self-Adaption Threshold Peak Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Weifang Zhang

    2018-04-01

    Full Text Available A Fiber Bragg Grating (FBG) interrogation system with a self-adaption threshold peak detection algorithm is proposed and experimentally demonstrated in this study. The system is composed of a field programmable gate array (FPGA) and advanced RISC machine (ARM) platform, a tunable Fabry–Perot (F–P) filter and an optical switch. To improve system resolution, the F–P filter was employed. As this filter is non-linear, it causes a shift of the central wavelengths, a deviation that is compensated by parts of the circuit. Time-division multiplexing (TDM) of the FBG sensors is achieved by the optical switch, with the system able to accommodate a combination of 256 FBG sensors. A wavelength scanning speed of 800 Hz can be achieved by the FPGA+ARM platform. In addition, a peak detection algorithm based on a self-adaption threshold is designed, and the peak recognition rate is 100%. Experiments at different temperatures were conducted to demonstrate the effectiveness of the system. Four FBG sensors were examined in a thermal chamber without stress. When the temperature changed from 0 °C to 100 °C, the degree of linearity between central wavelengths and temperature was about 0.999, with a temperature sensitivity of 10 pm/°C. The static interrogation precision was able to reach 0.5 pm. Through the comparison of different peak detection algorithms and interrogation approaches, the system was verified to have optimum comprehensive performance in terms of precision, capacity and speed.

  13. Investigation of Adaptive-threshold Approaches for Determining Area-Time Integrals from Satellite Infrared Data to Estimate Convective Rain Volumes

    Science.gov (United States)

    Smith, Paul L.; VonderHaar, Thomas H.

    1996-01-01

    The principal goal of this project is to establish relationships that would allow application of area-time integral (ATI) calculations based upon satellite data to estimate rainfall volumes. The research is being carried out as a collaborative effort between the two participating organizations, with the satellite data analysis to determine values for the ATIs being done primarily by the STC-METSAT scientists and the associated radar data analysis to determine the 'ground-truth' rainfall estimates being done primarily at the South Dakota School of Mines and Technology (SDSM&T). Synthesis of the two separate kinds of data and investigation of the resulting rainfall-versus-ATI relationships is then carried out jointly. The research has been pursued using two different approaches, which for convenience can be designated as the 'fixed-threshold approach' and the 'adaptive-threshold approach'. In the former, an attempt is made to determine a single temperature threshold in the satellite infrared data that would yield ATI values for identifiable cloud clusters which are closely related to the corresponding rainfall amounts as determined by radar. Work on the second, or 'adaptive-threshold', approach for determining the satellite ATI values has explored two avenues: (1) one attempt involved choosing IR thresholds to match the satellite ATI values with ones separately calculated from the radar data on a case-by-case basis; and (2) another attempt involved a straightforward screening analysis to determine the (fixed) offset that would lead to the strongest correlation and lowest standard error of estimate in the relationship between the satellite ATI values and the corresponding rainfall volumes.

  14. Is there a minimum intensity threshold for resistance training-induced hypertrophic adaptations?

    Science.gov (United States)

    Schoenfeld, Brad J

    2013-12-01

    In humans, regimented resistance training has been shown to promote substantial increases in skeletal muscle mass. With respect to traditional resistance training methods, the prevailing opinion is that an intensity of greater than ~60 % of 1 repetition maximum (RM) is necessary to elicit significant increases in muscular size. It has been surmised that this is the minimum threshold required to activate the complete spectrum of fiber types, particularly those associated with the largest motor units. There is emerging evidence, however, that low-intensity resistance training performed with blood flow restriction (BFR) can promote marked increases in muscle hypertrophy, in many cases equal to that of traditional high-intensity exercise. The anabolic effects of such occlusion-based training have been attributed to increased levels of metabolic stress that mediate hypertrophy at least in part by enhancing recruitment of high-threshold motor units. Recently, several researchers have put forth the theory that low-intensity exercise (≤50 % 1RM) performed without BFR can promote increases in muscle size equal, or perhaps even superior, to that at higher intensities, provided training is carried out to volitional muscular failure. Proponents of the theory postulate that fatiguing contractions at light loads are simply a milder form of BFR and thus ultimately result in maximal muscle fiber recruitment. Current research indicates that low-load exercise can indeed promote increases in muscle growth in untrained subjects, and that these gains may be functionally, metabolically, and/or aesthetically meaningful. However, whether hypertrophic adaptations can equal those achieved with higher-intensity resistance exercise (≥60 % 1RM) remains to be determined. Furthermore, it is not clear as to what, if any, hypertrophic effects are seen with low-intensity exercise in well-trained subjects, as experimental studies on the topic in this population are lacking. Practical

  15. Cost-effectiveness thresholds: methods for setting and examples from around the world.

    Science.gov (United States)

    Santos, André Soares; Guerra-Junior, Augusto Afonso; Godman, Brian; Morton, Alec; Ruas, Cristina Mariano

    2018-06-01

    Cost-effectiveness thresholds (CETs) are used to judge if an intervention represents sufficient value for money to merit adoption in healthcare systems. The study was motivated by the Brazilian context of HTA, where meetings are being conducted to decide on the definition of a threshold. Areas covered: An electronic search was conducted on Medline (via PubMed), Lilacs (via BVS) and ScienceDirect followed by a complementary search of references of included studies, Google Scholar and conference abstracts. Cost-effectiveness thresholds are usually calculated through three different approaches: the willingness-to-pay, representative of welfare economics; the precedent method, based on the value of an already funded technology; and the opportunity cost method, which links the threshold to the volume of health displaced. An explicit threshold has never been formally adopted in most places. Some countries have defined thresholds, with some flexibility to consider other factors. An implicit threshold could be determined by research of funded cases. Expert commentary: CETs have had an important role as a 'bridging concept' between the world of academic research and the 'real world' of healthcare prioritization. The definition of a cost-effectiveness threshold is paramount for the construction of a transparent and efficient Health Technology Assessment system.

  16. A rule based method for context sensitive threshold segmentation in SPECT using simulation

    International Nuclear Information System (INIS)

    Fleming, John S.; Alaamer, Abdulaziz S.

    1998-01-01

    Robust techniques for automatic or semi-automatic segmentation of objects in single photon emission computed tomography (SPECT) are still the subject of development. This paper describes a threshold based method which uses empirical rules derived from analysis of computer simulated images of a large number of objects. The use of simulation allowed the factors affecting the threshold which correctly segmented objects to be investigated systematically. Rules could then be derived from these data to define the threshold in any particular context. The technique operated iteratively and calculated local context sensitive thresholds along radial profiles from the centre of gravity of the object. It was evaluated in a further series of simulated objects and in human studies, and compared to the use of a global fixed threshold. The method was capable of improving accuracy of segmentation and volume assessment compared to the global threshold technique. The improvements were greater for small volumes, shapes with large surface area to volume ratio, variable surrounding activity and non-uniform distributions. The method was applied successfully to simulated objects and human studies and is considered to be a significant advance on global fixed threshold techniques. (author)

  17. Adaptive optics for reduced threshold energy in femtosecond laser induced optical breakdown in water based eye model

    Science.gov (United States)

    Hansen, Anja; Krueger, Alexander; Ripken, Tammo

    2013-03-01

    In ophthalmic microsurgery tissue dissection is achieved using femtosecond laser pulses to create an optical breakdown. For vitreo-retinal applications the irradiance distribution in the focal volume is distorted by the anterior components of the eye causing a raised threshold energy for breakdown. In this work, an adaptive optics system enables spatial beam shaping for compensation of aberrations and investigation of wave front influence on optical breakdown. An eye model was designed to allow for aberration correction as well as detection of optical breakdown. The eye model consists of an achromatic lens for modeling the eye's refractive power, a water chamber for modeling the tissue properties, and a PTFE sample for modeling the retina's scattering properties. Aberration correction was performed using a deformable mirror in combination with a Hartmann-Shack-sensor. The influence of an adaptive optics aberration correction on the pulse energy required for photodisruption was investigated using transmission measurements for determination of the breakdown threshold and video imaging of the focal region for study of the gas bubble dynamics. The threshold energy is considerably reduced when correcting for the aberrations of the system and the model eye. Also, a raise in irradiance at constant pulse energy was shown for the aberration corrected case. The reduced pulse energy lowers the potential risk of collateral damage which is especially important for retinal safety. This offers new possibilities for vitreo-retinal surgery using femtosecond laser pulses.

  18. Thresholds for Coral Bleaching: Are Synergistic Factors and Shifting Thresholds Changing the Landscape for Management? (Invited)

    Science.gov (United States)

    Eakin, C.; Donner, S. D.; Logan, C. A.; Gledhill, D. K.; Liu, G.; Heron, S. F.; Christensen, T.; Rauenzahn, J.; Morgan, J.; Parker, B. A.; Hoegh-Guldberg, O.; Skirving, W. J.; Strong, A. E.

    2010-12-01

    As carbon dioxide rises in the atmosphere, climate change and ocean acidification are modifying important physical and chemical parameters in the oceans with resulting impacts on coral reef ecosystems. Rising CO2 is warming the world’s oceans and causing corals to bleach, with both alarming frequency and severity. The frequent return of stressful temperatures has already resulted in major damage to many of the world’s coral reefs and is expected to continue in the foreseeable future. Warmer oceans also have contributed to a rise in coral infectious diseases. Both bleaching and infectious disease can result in coral mortality and threaten one of the most diverse ecosystems on Earth and the important ecosystem services they provide. Additionally, ocean acidification from rising CO2 is reducing the availability of carbonate ions needed by corals to build their skeletons and perhaps depressing the threshold for bleaching. While thresholds vary among species and locations, it is clear that corals around the world are already experiencing anomalous temperatures that are too high, too often, and that warming is exceeding the rate at which corals can adapt. This is despite a complex adaptive capacity that involves both the coral host and the zooxanthellae, including changes in the relative abundance of the latter in their coral hosts. The safe upper limit for atmospheric CO2 is probably somewhere below 350ppm, a level we passed decades ago, and for temperature is a sustained global temperature increase of less than 1.5°C above pre-industrial levels. How much can corals acclimate and/or adapt to the unprecedented fast changing environmental conditions? Any change in the threshold for coral bleaching as the result of acclimation and/or adaption may help corals to survive in the future but adaptation to one stress may be maladaptive to another. There also is evidence that ocean acidification and nutrient enrichment modify this threshold. What do shifting thresholds mean

  19. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    Science.gov (United States)

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1). This paper develops multiple sub-dictionary sparse transform strategies for the two typical cases p∈{1/2, 2/3} based on an iterative Lp thresholding algorithm and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, we conduct some applications concerning sparse image recovery and obtain good results by comparison with related work.
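    To make the thresholding idea concrete, here is a generic iteratively reweighted thresholding sketch for Lp-style sparsity with p < 1. It is not the authors' SAITA (there are no sub-dictionaries, and a plain soft-threshold stands in for the exact Lp operator); it only shows the shared skeleton of gradient step, per-coefficient weighting and shrinkage.

      import numpy as np

      def soft(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def reweighted_lp_threshold(A, y, lam=0.1, p=0.5, iters=200, eps=1e-3):
          step = 1.0 / np.linalg.norm(A, 2) ** 2       # ISTA-style step size
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              z = x - step * (A.T @ (A @ x - y))       # gradient step
              w = (np.abs(x) + eps) ** (p - 1.0)       # per-coefficient weights
              x = soft(z, step * lam * w)              # weighted shrinkage
          return x

      A = np.random.randn(50, 100)
      x0 = np.zeros(100); x0[[3, 40, 77]] = [2.0, -1.0, 1.5]
      x_hat = reweighted_lp_threshold(A, A @ x0)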

  20. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    Science.gov (United States)

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-08-12

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary, but no smaller than the threshold value, number of classical participants with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channels, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.

  1. Model Threshold untuk Pembelajaran Memproduksi Pantun Kelas XI

    Directory of Open Access Journals (Sweden)

    Fitri Nura Murti

    2017-03-01

    Full Text Available Abstract: The pantun learning methods used in schools have provided little opportunity to develop students' creativity in producing pantun. This situation is supported by the results of an observation conducted on eleventh graders at SMAN 2 Bondowoso, which showed that the students tend to plagiarize their pantun. The general objective of this research and development is to develop the Threshold Pantun model for learning to produce pantun for eleventh graders. The product is presented as a guidance book for teachers entitled "Pembelajaran Memproduksi Pantun Menggunakan Model Threshold Pantun untuk Kelas XI". This study adapted the design method of Borg and Gall's R&D procedure. Based on the validation results, the Threshold Pantun model is appropriate to be implemented for learning to produce pantun. Key words: Threshold Pantun model, producing pantun.

  2. Watershed safety and quality control by safety threshold method

    Science.gov (United States)

    Da-Wei Tsai, David; Mengjung Chou, Caroline; Ramaraj, Rameshprabu; Liu, Wen-Cheng; Honglay Chen, Paris

    2014-05-01

    Taiwan has been warned to be one of the most dangerous countries by the IPCC and the World Bank. On such an exceptional and perilous island, we launched strategic research on land-use management for catastrophe prevention and environmental protection. This study applied the "Safety Threshold Method" to watershed management in order to restore degraded areas and to prevent disasters and pollution on the island. For deluge prevention, this study applied a restoration strategy to reduce total runoff to the equilibrium level of 59.4% of the annual infiltration. For sediment management, safety threshold management could reduce the sediment load below the equilibrium of the natural sediment cycle. Regarding water quality, the best strategies exhibited significant total load reductions of 10% in carbon (BOD5), 15% in nitrogen (nitrate) and 9% in phosphorus (TP). We found that the water quality could meet the BOD target with a 50% peak reduction under management. All the simulations demonstrated that the safety threshold method was helpful for keeping loadings within the safe range for disasters and environmental quality. Moreover, the historical data for the whole island showed that past deforestation policy and mistaken economic projects were the prime culprits. Consequently, this study presents a practical method to manage both disasters and pollution at the watershed scale through land-use management.

  3. An Advanced Method to Apply Multiple Rainfall Thresholds for Urban Flood Warnings

    Directory of Open Access Journals (Sweden)

    Jiun-Huei Jang

    2015-11-01

    Full Text Available Issuing warning information to the public when rainfall exceeds given thresholds is a simple and widely-used method to minimize flood risk; however, this method lacks sophistication when compared with hydrodynamic simulation. In this study, an advanced methodology is proposed to improve the warning effectiveness of the rainfall threshold method for urban areas through deterministic-stochastic modeling, without sacrificing simplicity and efficiency. With regards to flooding mechanisms, rainfall thresholds of different durations are divided into two groups accounting for flooding caused by drainage overload and disastrous runoff, which help in grading the warning level in terms of emergency and severity when the two are observed together. A flood warning is then classified into four levels distinguished by green, yellow, orange, and red lights in ascending order of priority that indicate the required measures, from standby, flood defense, evacuation to rescue, respectively. The proposed methodology is tested according to 22 historical events in the last 10 years for 252 urbanized townships in Taiwan. The results show satisfactory accuracy in predicting the occurrence and timing of flooding, with a logical warning time series for taking progressive measures. For systems with multiple rainfall thresholds already in place, the methodology can be used to ensure better application of rainfall thresholds in urban flood warnings.
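    A toy version of the four-level rule reads directly off the two threshold groups described above; the threshold values here are hypothetical placeholders, not the calibrated thresholds used for the Taiwanese townships.

      def warning_level(rain_1h, rain_24h, thr_1h=40.0, thr_24h=200.0):
          over_short = rain_1h >= thr_1h    # drainage-overload indicator
          over_long = rain_24h >= thr_24h   # disastrous-runoff indicator
          if over_short and over_long:
              return "red"      # rescue
          if over_long:
              return "orange"   # evacuation
          if over_short:
              return "yellow"   # flood defense
          return "green"        # standby

      print(warning_level(rain_1h=55.0, rain_24h=120.0))  # -> "yellow"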

  4. Integrating adaptive governance and participatory multicriteria methods: a framework for climate adaptation governance

    Directory of Open Access Journals (Sweden)

    Stefania Munaretto

    2014-06-01

    Full Text Available Climate adaptation is a dynamic social and institutional process where the governance dimension is receiving growing attention. Adaptive governance is an approach that promises to reduce uncertainty by improving the knowledge base for decision making. As uncertainty is an inherent feature of climate adaptation, adaptive governance seems to be a promising approach for improving climate adaptation governance. However, the adaptive governance literature has so far paid little attention to decision-making tools and methods, and the literature on the governance of adaptation is in its infancy in this regard. We argue that climate adaptation governance would benefit from systematic and yet flexible decision-making tools and methods such as participatory multicriteria methods for the evaluation of adaptation options, and that these methods can be linked to key adaptive governance principles. Moving from these premises, we propose a framework that integrates key adaptive governance features into participatory multicriteria methods for the governance of climate adaptation.

  5. Low-Threshold Active Teaching Methods for Mathematic Instruction

    Science.gov (United States)

    Marotta, Sebastian M.; Hargis, Jace

    2011-01-01

    In this article, we present a large list of low-threshold active teaching methods categorized so the instructor can efficiently access and target the deployment of conceptually based lessons. The categories include teaching strategies for lecture on large and small class sizes; student action individually, in pairs, and groups; games; interaction…

  6. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods

    International Nuclear Information System (INIS)

    Berthiau, G.

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can eventually be specified. A similar problem consists in fitting component models. In this case, the optimization variables are the model parameters and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, provided by the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted with analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with an open electrical simulator, SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. We proposed, for high-dimensional problems, a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we have adapted three other methods coming from the combinatorial optimization domain (the threshold method, a genetic algorithm and the Tabu search method). The tests have been performed on the same set of test functions and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program

  7. Kinetics of the early adaptive response and adaptation threshold dose; Cinetica de la respuesta adaptativa temprana y dosis umbral de adaptacion

    Energy Technology Data Exchange (ETDEWEB)

    Mendiola C, M.T.; Morales R, P. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)

    2003-07-01

    The expression kinetics of the adaptive response (AR) in mouse leukocytes in vivo and the minimum dose of gamma radiation that induces it were determined. The mice were exposed to 0.005 or 0.02 Gy of 137Cs as the adaptation dose and, 1 h later, to the challenge dose (1.0 Gy); another group was exposed only to 1.0 Gy, and the DNA damage was evaluated with the comet assay. The treatment with 0.005 Gy did not induce an AR, and 0.02 Gy caused an effect similar to that obtained with 0.01 Gy. The AR was shown from an interval of 0.5 h, with the maximum expression obtained at 5.0 h. The threshold dose to induce the AR is 0.01 Gy, and at 5.0 h the largest quantity of molecules presumably related to the protection of the DNA is present. (Author)

  8. METHOD OF ADAPTIVE MAGNETOTHERAPY

    OpenAIRE

    Rudyk, Valentine Yu.; Tereshchenko, Mykola F.; Rudyk, Tatiana A.

    2016-01-01

    Practical realization of adaptive control in magnetotherapy apparatus is acquiring real importance at the current stage of magnetotherapy development. The structural scheme of the method of adaptive pulsed magnetotherapy and the algorithm for adaptive feedback control during a magnetotherapy procedure are presented. Feedback in the magnetotherapy complex is realized through control of the magnetic induction and analysis of the patient's physiological indices (temperature, pulse, blood pressure, ...

  9. An Adaptive S-Method to Analyze Micro-Doppler Signals for Human Activity Classification.

    Science.gov (United States)

    Li, Fangmin; Yang, Chao; Xia, Yuqing; Ma, Xiaolin; Zhang, Tao; Zhou, Zhou

    2017-11-29

    In this paper, we propose the multiwindow Adaptive S-method (AS-method) distribution approach for use in the time-frequency analysis of radar signals. Building on orthogonal Hermite functions, which have good time-frequency resolution, we vary the length of the window to suppress the oscillating components caused by cross-terms. This method offers a better compromise between auto-term concentration and cross-term suppression, which contributes to multi-component signal separation. Finally, the effective micro-Doppler signal is extracted by threshold segmentation and envelope extraction. To verify the proposed method, six states of motion are separated by a support vector machine (SVM) classifier trained on the extracted features. The trained SVM can detect a human subject with an accuracy of 95.4% for two cases without interference.
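    For reference, the basic (fixed-window) S-method underlying the paper's adaptive variant can be sketched as the spectrogram plus symmetric cross-products of neighbouring STFT bins; the window parameters below are arbitrary assumptions.

      import numpy as np
      from scipy.signal import stft

      def s_method(x, fs, L=3, nperseg=128):
          f, t, F = stft(x, fs=fs, nperseg=nperseg)
          SM = np.abs(F) ** 2                       # start from the spectrogram
          for i in range(1, L + 1):
              # Cross-products of bins k+i and k-i sharpen the auto-terms.
              SM[i:-i, :] += 2.0 * np.real(F[2 * i:, :] * np.conj(F[:-2 * i, :]))
          return f, t, SM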

  10. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods; La methode du recuit simule pour la conception des circuits electroniques: adaptation et comparaison avec d'autres methodes d'optimisation

    Energy Technology Data Exchange (ETDEWEB)

    Berthiau, G

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can eventually be specified. A similar problem consists in fitting component models. In this case, the optimization variables are the model parameters and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, provided by the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted with analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with an open electrical simulator, SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. We proposed, for high-dimensional problems, a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we have adapted three other methods coming from the combinatorial optimization domain (the threshold method, a genetic algorithm and the Tabu search method). The tests have been performed on the same set of test functions and the results allow a first comparison between these methods applied to continuous optimization variables. (Abstract Truncated)

  11. Constructing financial network based on PMFG and threshold method

    Science.gov (United States)

    Nie, Chun-Xiao; Song, Fu-Tie

    2018-04-01

    Based on the planar maximally filtered graph (PMFG) and the threshold method, we introduced a correlation-based network named the PMFG-based threshold network (PTN). We studied the community structure of the PTN and applied the ISOMAP algorithm to represent the PTN in low-dimensional Euclidean space. The results show that the communities correspond well to the clusters in the Euclidean space. Further, we studied the dynamics of the community structure and constructed the normalized mutual information (NMI) matrix. Based on real market data, we found that the volatility of the market can lead to dramatic changes in the community structure, and that the structure is more stable during the financial crisis.
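    The threshold step of such a network is easy to reproduce; the PMFG filtering stage is omitted in this sketch, and the returns, labels and cutoff are placeholders rather than the paper's data.

      import numpy as np
      import networkx as nx

      def threshold_network(corr, labels, cutoff=0.6):
          # Keep an edge wherever the correlation exceeds the cutoff.
          G = nx.Graph()
          G.add_nodes_from(labels)
          n = corr.shape[0]
          for i in range(n):
              for j in range(i + 1, n):
                  if corr[i, j] >= cutoff:
                      G.add_edge(labels[i], labels[j], weight=corr[i, j])
          return G

      rets = np.random.randn(250, 5)          # stand-in daily returns
      G = threshold_network(np.corrcoef(rets.T), ["A", "B", "C", "D", "E"])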

  12. Torque-onset determination: Unintended consequences of the threshold method.

    Science.gov (United States)

    Dotan, Raffy; Jenkins, Glenn; O'Brien, Thomas D; Hansen, Steve; Falk, Bareket

    2016-12-01

    Compared with visual torque-onset detection (TOD), threshold-based TOD produces an onset bias, which increases with lower torques or rates of torque development (RTD). Our aim was to compare the effects of differential TOD bias on common contractile parameters in two torque-disparate groups. Fifteen boys and 12 men performed maximal, explosive, isometric knee-extensions. Torque and EMG were recorded for each contraction. The best contractions were selected by peak torque (MVC) and peak RTD. Visual-TOD-based torque-time traces, electromechanical delays (EMD), and times to peak RTD (tRTD) were compared with corresponding data derived from fixed 4-Nm and relative 5%-MVC thresholds. The 5%-MVC TOD biases were similar for boys and men, but the corresponding 4-Nm-based biases were markedly different (40.3±14.1 vs. 18.4±7.1 ms, respectively; p<0.001). The men's torque kinetics tended to be faster than the boys' (NS), but the 4-Nm-based kinetics erroneously depicted the boys as being much faster to any given %MVC (p<0.001). When comparing contractile properties of dissimilar groups, e.g., children vs. adults, threshold-based TOD methods can misrepresent reality and lead to erroneous conclusions. Relative thresholds (e.g., 5% MVC) still introduce error, but group comparisons are not confounded. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Adaptive thresholding with inverted triangular area for real-time detection of the heart rate from photoplethysmogram traces on a smartphone.

    Science.gov (United States)

    Jiang, Wen Jun; Wittek, Peter; Zhao, Li; Gao, Shi Chao

    2014-01-01

    Photoplethysmogram (PPG) signals acquired by smartphone cameras are weaker than those acquired by dedicated pulse oximeters. Furthermore, the signals have lower sampling rates, have notches in the waveform and are more severely affected by baseline drift, leading to specific morphological characteristics. This paper introduces a new feature, the inverted triangular area, to address these specific characteristics. The new feature enables real-time adaptive waveform detection using an algorithm of linear time complexity. It can also recognize notches in the waveform and it is inherently robust to baseline drift. An implementation of the algorithm on Android is available for free download. We collected data from 24 volunteers and compared our algorithm in peak detection with two competing algorithms designed for PPG signals, Incremental-Merge Segmentation (IMS) and Adaptive Thresholding (ADT). A sensitivity of 98.0% and a positive predictive value of 98.8% were obtained, which were 7.7% higher than the IMS algorithm in sensitivity, and 8.3% higher than the ADT algorithm in positive predictive value. The experimental results confirmed the applicability of the proposed method.
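    A hedged sketch of a triangular-area feature for PPG peak candidates follows; the paper's exact definition of the inverted triangular area differs in detail, and the lag and acceptance fraction here are illustrative assumptions.

      import numpy as np

      def triangular_area(x, lag):
          # Signed area of the triangle through (i-lag, x[i-lag]), (i, x[i]),
          # (i+lag, x[i+lag]); large positive values mark sharp local maxima.
          a = np.zeros_like(x, dtype=float)
          for i in range(lag, len(x) - lag):
              a[i] = 0.5 * lag * (2.0 * x[i] - x[i - lag] - x[i + lag])
          return a

      ppg = np.sin(np.linspace(0.0, 20.0 * np.pi, 2000))   # stand-in trace
      feat = triangular_area(ppg, lag=15)
      candidates = np.where(feat > 0.6 * feat.max())[0]    # peak candidates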

  14. Adaptive scalarization methods in multiobjective optimization

    CERN Document Server

    Eichfelder, Gabriele

    2008-01-01

    This book presents adaptive solution methods for multiobjective optimization problems based on parameter dependent scalarization approaches. Readers will benefit from the new adaptive methods and ideas for solving multiobjective optimization.

  15. Robust Optimal Adaptive Control Method with Large Adaptive Gain

    Science.gov (United States)

    Nguyen, Nhan T.

    2009-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain so as to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations seen with standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time-delay margin.

  16. Key Parameters Estimation and Adaptive Warning Strategy for Rear-End Collision of Vehicle

    Directory of Open Access Journals (Sweden)

    Xiang Song

    2015-01-01

    Full Text Available The rear-end collision warning system requires reliable warning decision mechanism to adapt the actual driving situation. To overcome the shortcomings of existing warning methods, an adaptive strategy is proposed to address the practical aspects of the collision warning problem. The proposed strategy is based on the parameter-adaptive and variable-threshold approaches. First, several key parameter estimation algorithms are developed to provide more accurate and reliable information for subsequent warning method. They include a two-stage algorithm which contains a Kalman filter and a Luenberger observer for relative acceleration estimation, a Bayesian theory-based algorithm of estimating the road friction coefficient, and an artificial neural network for estimating the driver’s reaction time. Further, the variable-threshold warning method is designed to achieve the global warning decision. In the method, the safety distance is employed to judge the dangerous state. The calculation method of the safety distance in this paper can be adaptively adjusted according to the different driving conditions of the leading vehicle. Due to the real-time estimation of the key parameters and the adaptive calculation of the warning threshold, the strategy can adapt to various road and driving conditions. Finally, the proposed strategy is evaluated through simulation and field tests. The experimental results validate the feasibility and effectiveness of the proposed strategy.
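    A simplified variable-threshold safety distance can be written from the estimated quantities the record lists (driver reaction time, road friction coefficient); the formula below is a textbook braking-distance sketch with an assumed constant margin, not the paper's calibrated model.

      def safety_distance(v_follow, v_lead, t_react, mu, g=9.81, margin=2.0):
          # Distance covered during the driver's reaction time plus the
          # difference of the two braking distances, with a fixed margin (m).
          d_brake_f = v_follow ** 2 / (2.0 * mu * g)
          d_brake_l = v_lead ** 2 / (2.0 * mu * g)
          return v_follow * t_react + d_brake_f - d_brake_l + margin

      print(safety_distance(v_follow=25.0, v_lead=20.0, t_react=1.2, mu=0.7))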

  17. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    International Nuclear Information System (INIS)

    Prieto, Elena; Peñuelas, Iván; Martí-Climent, Josep M; Lecumberri, Pablo; Gómez, Marisol; Pagola, Miguel; Bilbao, Izaskun; Ecay, Margarita

    2012-01-01

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms that are classical in the fields of optical character recognition, tissue engineering or non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information about the segmented object or any special calibration of the tomograph, as opposed to the usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on a clinical PET/CT and on a small-animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and the results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools. (paper)
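    One of the evaluated algorithms, Ridler's clustering (isodata) threshold, is compact enough to sketch: iterate the threshold to the midpoint of the mean intensities above and below it until it stabilises. The convergence tolerance is an arbitrary assumption.

      import numpy as np

      def ridler_threshold(img, tol=0.5):
          t = img.mean()                      # initial guess
          while True:
              low, high = img[img <= t], img[img > t]
              if low.size == 0 or high.size == 0:
                  return t                    # degenerate split; stop here
              t_new = 0.5 * (low.mean() + high.mean())
              if abs(t_new - t) < tol:
                  return t_new
              t = t_new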

  18. An Adaptive S-Method to Analyze Micro-Doppler Signals for Human Activity Classification

    Directory of Open Access Journals (Sweden)

    Fangmin Li

    2017-11-01

    Full Text Available In this paper, we propose the multiwindow Adaptive S-method (AS-method) distribution approach for use in the time-frequency analysis of radar signals. Building on orthogonal Hermite functions, which have good time-frequency resolution, we vary the length of the window to suppress the oscillating components caused by cross-terms. This method offers a better compromise between auto-term concentration and cross-term suppression, which contributes to multi-component signal separation. Finally, the effective micro-Doppler signal is extracted by threshold segmentation and envelope extraction. To verify the proposed method, six states of motion are separated by a support vector machine (SVM) classifier trained on the extracted features. The trained SVM can detect a human subject with an accuracy of 95.4% for two cases without interference.

  19. Particle identification using the time-over-threshold method in the ATLAS Transition Radiation Tracker

    International Nuclear Information System (INIS)

    Akesson, T.; Arik, E.; Assamagan, K.; Baker, K.; Barberio, E.; Barberis, D.; Bertelsen, H.; Bytchkov, V.; Callahan, J.; Catinaccio, A.; Danielsson, H.; Dittus, F.; Dolgoshein, B.; Dressnandt, N.; Ebenstein, W.L.; Eerola, P.; Farthouat, P.; Froidevaux, D.; Grichkevitch, Y.; Hajduk, Z.; Hansen, J.R.; Keener, P.T.; Kekelidze, G.; Konovalov, S.; Kowalski, T.; Kramarenko, V.A.; Krivchitch, A.; Laritchev, A.; Lichard, P.; Lucotte, A.; Lundberg, B.; Luehring, F.; Mailov, A.; Manara, A.; McFarlane, K.; Mitsou, V.A.; Morozov, S.; Muraviev, S.; Nadtochy, A.; Newcomer, F.M.; Olszowska, J.; Ogren, H.; Oh, S.H.; Peshekhonov, V.; Rembser, C.; Romaniouk, A.; Rousseau, D.; Rust, D.R.; Schegelsky, V.; Sapinski, M.; Shmeleva, A.; Smirnov, S.; Smirnova, L.N.; Sosnovtsev, V.; Soutchkov, S.; Spiridenkov, E.; Tikhomirov, V.; Van Berg, R.; Vassilakopoulos, V.; Wang, C.; Williams, H.H.

    2001-01-01

    Test-beam studies of the ATLAS Transition Radiation Tracker (TRT) straw tube performance in terms of electron-pion separation using a time-over-threshold method are described. The test-beam data are compared with Monte Carlo simulations of charged particles passing through the straw tubes of the TRT. For energies below 10 GeV, the time-over-threshold method combined with the standard transition-radiation cluster-counting technique significantly improves the electron-pion separation in the TRT. The use of the time-over-threshold information also provides some kaon-pion separation, thereby significantly enhancing the B-physics capabilities of the ATLAS detector
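    In its simplest digital form, time-over-threshold is just the width of the interval a pulse spends above the discriminator level; the sampled waveform and bin width below are hypothetical stand-ins, not TRT readout values.

      import numpy as np

      def time_over_threshold(samples, threshold, dt):
          # Assumes a single contiguous pulse within the sampled window.
          return (samples > threshold).sum() * dt

      pulse = np.array([0.0, 0.2, 0.9, 1.4, 1.1, 0.6, 0.1])  # a.u., stand-in
      print(time_over_threshold(pulse, threshold=0.5, dt=3.125))  # ns per bin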

  20. Adaptive method of lines

    CERN Document Server

    Saucez, Ph

    2001-01-01

    The general Method of Lines (MOL) procedure provides a flexible format for the solution of all the major classes of partial differential equations (PDEs) and is particularly well suited to evolutionary, nonlinear wave PDEs. Despite its utility, however, there are relatively few texts that explore it at a more advanced level and reflect the method's current state of development. Written by distinguished researchers in the field, Adaptive Method of Lines reflects the diversity of techniques and applications related to the MOL. Most of its chapters focus on a particular application but also provide a discussion of underlying philosophy and technique. Particular attention is paid to the concept of both temporal and spatial adaptivity in solving time-dependent PDEs. Many important ideas and methods are introduced, including moving grids and grid refinement, static and dynamic gridding, the equidistribution principle and the concept of a monitor function, the minimization of a functional, and the moving finite elem...

  1. Using ecological thresholds to inform resource management: current options and future possibilities

    Directory of Open Access Journals (Sweden)

    Melissa M Foley

    2015-11-01

    Full Text Available In the face of growing human impacts on ecosystems, scientists and managers recognize the need to better understand thresholds and nonlinear dynamics in ecological systems to help set management targets. However, our understanding of the factors that drive threshold dynamics, and when and how rapidly thresholds will be crossed is currently limited in many systems. In spite of these limitations, there are approaches available to practitioners today—including ecosystem monitoring, statistical methods to identify thresholds and indicators, and threshold-based adaptive management—that can be used to help avoid ecological thresholds or restore systems that have crossed them. We briefly review the current state of knowledge and then use real-world examples to demonstrate how resource managers can use available approaches to avoid crossing ecological thresholds. We also highlight new tools and indicators being developed that have the potential to enhance our ability to detect change, predict when a system is approaching an ecological threshold, or restore systems that have already crossed a tipping point.

  2. Adaptive Method Using Controlled Grid Deformation

    Directory of Open Access Journals (Sweden)

    Florin FRUNZULICA

    2011-09-01

    Full Text Available The paper presents an adaptive method using controlled grid deformation over an elastic, isotropic and continuous domain. The adaptive process is controlled by the principal strains and principal strain directions and uses the finite element method. Numerical results are presented for several test cases.

  3. Adaptive thresholding algorithm based on SAR images and wind data to segment oil spills along the northwest coast of the Iberian Peninsula

    International Nuclear Information System (INIS)

    Mera, David; Cotos, José M.; Varela-Pet, José; Garcia-Pineda, Oscar

    2012-01-01

    Highlights: ► We present an adaptive thresholding algorithm to segment oil spills. ► The segmentation algorithm is based on SAR images and wind field estimations. ► A Database of oil spill confirmations was used for the development of the algorithm. ► Wind field estimations have demonstrated to be useful for filtering look-alikes. ► Parallel programming has been successfully used to minimize processing time. - Abstract: Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean’s surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time.
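
    A toy version of the wind-adaptive idea is sketched below: the dark-spot threshold applied to the backscatter image rises with wind speed, since stronger winds brighten the surrounding sea and reduce look-alikes. The baseline and coefficients are illustrative assumptions, not the calibrated values from the paper:

```python
import numpy as np

def segment_dark_spots(sigma0_db, wind_speed_ms, offset=-6.0, slope=0.3):
    """Wind-adaptive dark-spot segmentation of a SAR scene (sigma0, dB).

    Pixels darker than the scene mean by a wind-dependent margin are
    flagged as possible oil. Coefficients are illustrative only."""
    threshold = sigma0_db.mean() + offset + slope * wind_speed_ms
    return sigma0_db < threshold          # True where a slick is suspected
```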

  4. Clinical feasibility of a myocardial signal intensity threshold-based semi-automated cardiac magnetic resonance segmentation method

    Energy Technology Data Exchange (ETDEWEB)

    Varga-Szemes, Akos; Schoepf, U.J.; Suranyi, Pal; De Cecco, Carlo N.; Fox, Mary A. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Muscogiuri, Giuseppe [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Rome ' ' Sapienza' ' , Department of Medical-Surgical Sciences and Translational Medicine, Rome (Italy); Wichmann, Julian L. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University Hospital Frankfurt, Department of Diagnostic and Interventional Radiology, Frankfurt (Germany); Cannao, Paola M. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Milan, Scuola di Specializzazione in Radiodiagnostica, Milan (Italy); Renker, Matthias [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Kerckhoff Heart and Thorax Center, Bad Nauheim (Germany); Mangold, Stefanie [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Eberhard-Karls University Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany); Ruzsics, Balazs [Royal Liverpool and Broadgreen University Hospitals, Department of Cardiology, Liverpool (United Kingdom)

    2016-05-15

    To assess the accuracy and efficiency of a threshold-based, semi-automated cardiac MRI segmentation algorithm in comparison with conventional contour-based segmentation and aortic flow measurements. Short-axis cine images of 148 patients (55 ± 18 years, 81 men) were used to evaluate left ventricular (LV) volumes and mass (LVM) using conventional and threshold-based segmentations. Phase-contrast images were used to independently measure stroke volume (SV). LV parameters were evaluated by two independent readers. Evaluation times using the conventional and threshold-based methods were 8.4 ± 1.9 and 4.2 ± 1.3 min, respectively (P < 0.0001). LV parameters measured by the conventional and threshold-based methods, respectively, were end-diastolic volume (EDV) 146 ± 59 and 134 ± 53 ml; end-systolic volume (ESV) 64 ± 47 and 59 ± 46 ml; SV 82 ± 29 and 74 ± 28 ml (flow-based 74 ± 30 ml); ejection fraction (EF) 59 ± 16 and 58 ± 17 %; and LVM 141 ± 55 and 159 ± 58 g. Significant differences between the conventional and threshold-based methods were observed in EDV, ESV, and LVM measurements; SV from threshold-based and flow-based measurements were in agreement (P > 0.05) but were significantly different from conventional analysis (P < 0.05). Excellent inter-observer agreement was observed. Threshold-based LV segmentation provides improved accuracy and faster assessment compared to conventional contour-based methods. (orig.)

  5. Reliability and validity of a brief method to assess nociceptive flexion reflex (NFR) threshold.

    Science.gov (United States)

    Rhudy, Jamie L; France, Christopher R

    2011-07-01

    The nociceptive flexion reflex (NFR) is a physiological tool to study spinal nociception. However, NFR assessment can take several minutes and expose participants to repeated suprathreshold stimulations. The 4 studies reported here assessed the reliability and validity of a brief method to assess NFR threshold that uses a single ascending series of stimulations (Peak 1 NFR), by comparing it to a well-validated method that uses 3 ascending/descending staircases of stimulations (Staircase NFR). Correlations between the NFR definitions were high, were on par with test-retest correlations of Staircase NFR, and were not affected by participant sex or chronic pain status. Results also indicated the test-retest reliabilities for the 2 definitions were similar. Using larger stimulus increments (4 mAs) to assess Peak 1 NFR tended to result in higher NFR threshold estimates than using the Staircase NFR definition, whereas smaller stimulus increments (2 mAs) tended to result in lower NFR threshold estimates than the Staircase NFR definition. Neither NFR definition was correlated with anxiety, pain catastrophizing, or anxiety sensitivity. In sum, a single ascending series of electrical stimulations results in a reliable and valid estimate of NFR threshold. However, caution may be warranted when comparing NFR thresholds across studies that differ in the ascending stimulus increments. This brief method to assess NFR threshold is reliable and valid; therefore, it should be useful to clinical pain researchers interested in quickly assessing inter- and intra-individual differences in spinal nociceptive processes. Copyright © 2011 American Pain Society. Published by Elsevier Inc. All rights reserved.
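
    The single-ascending-series procedure is simple to express in code. A minimal sketch, with the reflex detector abstracted as a callable (the EMG scoring window and criteria are not modeled here):

```python
def peak1_nfr_threshold(evokes_reflex, start_ma=0.0, step_ma=2.0, max_ma=50.0):
    """Single ascending series ('Peak 1 NFR'): raise the stimulation
    current in fixed increments until the first reflex is detected.

    evokes_reflex(current) should return True when an NFR is observed
    (e.g., from biceps femoris EMG). Returns the first current that
    evoked a reflex, or None if max_ma is reached without one."""
    current = start_ma
    while current <= max_ma:
        if evokes_reflex(current):
            return current
        current += step_ma
    return None
```

    Consistent with the abstract's caveat, the choice of step_ma biases the resulting estimate, so thresholds obtained with different increments are not directly comparable.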

  6. Error signals driving locomotor adaptation

    DEFF Research Database (Denmark)

    Choi, Julia T; Jensen, Peter; Nielsen, Jens Bo

    2016-01-01

    Locomotor patterns must be adapted to external forces encountered during daily activities. The contribution of different sensory inputs to detecting perturbations and adapting movements during walking is unclear. Here we examined the role of cutaneous feedback in adapting walking patterns to force...... walking (Choi et al. 2013). Sensory tests were performed to measure cutaneous touch threshold and perceptual threshold of force perturbations. Ankle movement were measured while subjects walked on the treadmill over three periods: baseline (1 min), adaptation (1 min) and post-adaptation (3 min). Subjects...

  7. Evaluation of the threshold trimming method for micro inertial fluidic switch based on electrowetting technology

    Directory of Open Access Journals (Sweden)

    Tingting Liu

    2014-03-01

    Full Text Available The switch based on electrowetting technology has the advantages of no moving parts, low contact resistance, long life and an adjustable acceleration threshold. The acceleration threshold of the switch can be fine-tuned by adjusting the applied voltage. This paper focuses on the electrowetting properties of the switch and the influence of microchannel structural parameters, applied voltage and droplet volume on the acceleration threshold. Because of process errors in the micro inertial fluidic switch and measuring errors in the droplet volume, there is a deviation between the tested and target acceleration thresholds. Considering these process and measuring errors, worst-case analysis is used to analyze the influence of parameter tolerances on the acceleration threshold. Under the worst-case condition the total acceleration threshold tolerance caused by the various errors is 9.95%. The target acceleration threshold can be achieved by fine-tuning the applied voltage. The acceleration threshold trimming method of the micro inertial fluidic switch is thereby verified.

  8. A lower dose threshold for the in vivo protective adaptive response to radiation. Tumorigenesis in chronically exposed normal and Trp53 heterozygous C57BL/6 mice

    International Nuclear Information System (INIS)

    Mitchel, R.E.J.; Burchart, P.; Wyatt, H.

    2008-01-01

    Low doses of ionizing radiation to cells and animals may induce adaptive responses that reduce the risk of cancer. However, there are upper dose thresholds above which these protective adaptive responses do not occur. We have now tested the hypothesis that there are similar lower dose thresholds that must be exceeded in order to induce protective effects in vivo. We examined the effects of low dose/low dose rate fractionated exposures on cancer formation in Trp53 normal or cancer-prone Trp53 heterozygous female C57BL/6 mice. Beginning at 6 weeks of age, mice were exposed 5 days/week to single daily doses (0.33 mGy, 0.7 mGy/h) totaling 48, 97 or 146 mGy over 30, 60 or 90 weeks. The exposures for shorter times (up to 60 weeks) appeared to be below the level necessary to induce overall protective adaptive responses in Trp53 normal mice, and detrimental effects (shortened lifespan, increased frequency) evident for only specific tumor types (B- and T-cell lymphomas), were produced. Only when the exposures were continued for 90 weeks did the dose become sufficient to induce protective adaptive responses, balancing the detrimental effects for these specific cancers, and reducing the risk level back to that of the unexposed animals. Detrimental effects were not seen for other tumor types, and a protective effect was seen for sarcomas after 60 weeks of exposure, which was then lost when the exposure continued for 90 weeks. As previously shown for the upper dose threshold for protection by low doses, the lower dose boundary between protection and harm was influenced by Trp53 functionality. Neither protection nor harm was observed in exposed Trp53 heterozygous mice, indicating that reduced Trp53 function raises the lower dose/dose rate threshold for both detrimental and protective tumorigenic effects. (author)

  9. An adaptive method for γ spectra smoothing

    International Nuclear Information System (INIS)

    Xiao Gang; Zhou Chunlin; Li Tiantuo; Han Feng; Di Yuming

    2001-01-01

    An adaptive wavelet method and a multinomial fitting gliding method are used for smoothing γ spectra, respectively, and the FWHM of the 1332 keV peak of 60Co and the activities of a 238U standard specimen are then calculated. The calculated results show that the adaptive wavelet method is better than the other.
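
    For illustration, a standard wavelet soft-thresholding smoother could stand in for the adaptive wavelet method; the abstract does not describe the adaptation rule, so the basis ('sym8') and the universal threshold below are our assumptions:

```python
import numpy as np
import pywt  # PyWavelets

def smooth_spectrum(counts, wavelet="sym8", level=4):
    """Wavelet soft-threshold smoothing of a gamma-ray spectrum."""
    counts = np.asarray(counts, dtype=float)
    coeffs = pywt.wavedec(counts, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(counts.size))     # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:counts.size]
```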

  10. Comparison on taste threshold between adult male white cigarette and clove cigarette smokers using Murphy clinical test method

    OpenAIRE

    Ronald Reyses Tapilatu; Edeh Rolleta Haroen; Rosiliwati Wihardja

    2008-01-01

    The habit of smoking white cigarettes and clove cigarettes may affect the gustatory function, that is, it will cause damage to taste buds, resulting in an increase in gustatory threshold. This research used the descriptive comparative method and had the purpose of obtaining an illustration of gustatory threshold and compare gustatory threshold in white cigarette smokers and clove cigarette smokers in young, male adults. For gustatory threshold evaluation, the Murphy method was used to obtain ...

  11. Bayesian methods for jointly estimating genomic breeding values of one continuous and one threshold trait.

    Directory of Open Access Journals (Sweden)

    Chonglong Wang

    Full Text Available Genomic selection has become a useful tool for animal and plant breeding. Currently, genomic evaluation is usually carried out using a single-trait model. However, a multi-trait model has the advantage of using information on the correlated traits, leading to more accurate genomic prediction. To date, joint genomic prediction for a continuous and a threshold trait using a multi-trait model is scarce and needs more attention. Based on the previously proposed methods BayesCπ for single continuous trait and BayesTCπ for single threshold trait, we developed a novel method based on a linear-threshold model, i.e., LT-BayesCπ, for joint genomic prediction of a continuous trait and a threshold trait. Computing procedures of LT-BayesCπ using Markov Chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the advantages of LT-BayesCπ over BayesCπ and BayesTCπ with regard to the accuracy of genomic prediction on both traits. Factors affecting the performance of LT-BayesCπ were addressed. The results showed that, in all scenarios, the accuracy of genomic prediction obtained from LT-BayesCπ was significantly increased for the threshold trait compared to that from single trait prediction using BayesTCπ, while the accuracy for the continuous trait was comparable with that from single trait prediction using BayesCπ. The proposed LT-BayesCπ could be a method of choice for joint genomic prediction of one continuous and one threshold trait.

  12. Incorporating adaptive responses into future projections of coral bleaching.

    Science.gov (United States)

    Logan, Cheryl A; Dunne, John P; Eakin, C Mark; Donner, Simon D

    2014-01-01

    Climate warming threatens to increase mass coral bleaching events, and several studies have projected the demise of tropical coral reefs this century. However, recent evidence indicates corals may be able to respond to thermal stress though adaptive processes (e.g., genetic adaptation, acclimatization, and symbiont shuffling). How these mechanisms might influence warming-induced bleaching remains largely unknown. This study compared how different adaptive processes could affect coral bleaching projections. We used the latest bias-corrected global sea surface temperature (SST) output from the NOAA/GFDL Earth System Model 2 (ESM2M) for the preindustrial period through 2100 to project coral bleaching trajectories. Initial results showed that, in the absence of adaptive processes, application of a preindustrial climatology to the NOAA Coral Reef Watch bleaching prediction method overpredicts the present-day bleaching frequency. This suggests that corals may have already responded adaptively to some warming over the industrial period. We then modified the prediction method so that the bleaching threshold either permanently increased in response to thermal history (e.g., simulating directional genetic selection) or temporarily increased for 2-10 years in response to a bleaching event (e.g., simulating symbiont shuffling). A bleaching threshold that changes relative to the preceding 60 years of thermal history reduced the frequency of mass bleaching events by 20-80% compared with the 'no adaptive response' prediction model by 2100, depending on the emissions scenario. When both types of adaptive responses were applied, up to 14% more reef cells avoided high-frequency bleaching by 2100. However, temporary increases in bleaching thresholds alone only delayed the occurrence of high-frequency bleaching by ca. 10 years in all but the lowest emissions scenario. Future research should test the rate and limit of different adaptive responses for coral species across latitudes and
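
    A simplified sketch of a thermal-history-dependent bleaching threshold follows. The trailing 60-year maximum-monthly-mean baseline plus a fixed 1 °C excess is a stand-in inspired by the NOAA Coral Reef Watch rule, not the study's exact formulation:

```python
import numpy as np

def bleaching_flags(sst_monthly, window_years=60, excess_c=1.0):
    """Flag months whose SST exceeds an adaptive bleaching threshold.

    The threshold is the warmest-month mean of the trailing
    'window_years' climatology plus 'excess_c' degrees C, so it drifts
    upward as the baseline warms (mimicking an adaptive response)."""
    sst = np.asarray(sst_monthly, dtype=float)
    months = 12 * window_years
    flags = []
    for t in range(months, sst.size):
        hist = sst[t - months:t]
        mmm = max(hist[m::12].mean() for m in range(12))   # warmest month
        flags.append(sst[t] > mmm + excess_c)
    return np.array(flags)
```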

  13. Comparison on taste threshold between adult male white cigarette and clove cigarette smokers using Murphy clinical test method

    Directory of Open Access Journals (Sweden)

    Ronald Reyses Tapilatu

    2008-03-01

    Full Text Available The habit of smoking white cigarettes and clove cigarettes may affect the gustatory function, that is, it will cause damage to taste buds, resulting in an increase in gustatory threshold. This research used the descriptive comparative method and had the purpose of obtaining an illustration of gustatory threshold and compare gustatory threshold in white cigarette smokers and clove cigarette smokers in young, male adults. For gustatory threshold evaluation, the Murphy method was used to obtain a value for perception threshold and taste identification threshold using sucrose solution of 0.0006 M-0.06 M concentration. Research results indicate that the perception threshold and identification threshold of young, male adult smokers are 0.0119 M and 0.0292 M. Young, male adult clove cigarette smokers have a perception threshold and identification threshold of 0.0151 M and 0.0348 M. The conclusion of this research is that the perception threshold of young, male adult white cigarette smokers and clove cigarette smokers are the same, whereas the identification threshold of young, male adult white cigarette smokers and clove cigarette smokers are different, that is, the identification threshold of clove cigarette smokers is higher than that of white cigarette smokers.

  14. Adaptive threshold hunting for the effects of transcranial direct current stimulation on primary motor cortex inhibition.

    Science.gov (United States)

    Mooney, Ronan A; Cirillo, John; Byblow, Winston D

    2018-06-01

    Primary motor cortex excitability can be modulated by anodal and cathodal transcranial direct current stimulation (tDCS). These neuromodulatory effects may, in part, be dependent on modulation within gamma-aminobutyric acid (GABA)-mediated inhibitory networks. GABAergic function can be quantified non-invasively using adaptive threshold hunting paired-pulse transcranial magnetic stimulation (TMS). Previous studies have used TMS with posterior-anterior (PA) induced current to assess tDCS effects on inhibition. However, TMS with anterior-posterior (AP) induced current in the brain provides a more robust measure of GABA-mediated inhibition. The aim of the present study was to assess the modulation of corticomotor excitability and inhibition after anodal and cathodal tDCS using TMS with PA- and AP-induced current. In 16 young adults (26 ± 1 years), we investigated the response to anodal, cathodal, and sham tDCS in a repeated-measures double-blinded crossover design. Adaptive threshold hunting paired-pulse TMS with PA- and AP-induced current was used to examine separate interneuronal populations within M1 and their influence on corticomotor excitability and short- and long-interval inhibition (SICI and LICI) for up to 60 min after tDCS. Unexpectedly, cathodal tDCS increased corticomotor excitability assessed with AP (P = 0.047) but not PA stimulation (P = 0.74). SICI AP was reduced after anodal tDCS compared with sham (P = 0.040). Pearson's correlations indicated that SICI AP and LICI AP modulation was associated with corticomotor excitability after anodal (P = 0.027) and cathodal tDCS (P = 0.042). The after-effects of tDCS on corticomotor excitability may depend on the direction of the TMS-induced current used to make assessments, and on modulation within GABA-mediated inhibitory circuits.

  15. Designing adaptive intensive interventions using methods from engineering.

    Science.gov (United States)

    Lagoa, Constantino M; Bekiroglu, Korkut; Lanza, Stephanie T; Murphy, Susan A

    2014-10-01

    Adaptive intensive interventions are introduced, and new methods from the field of control engineering for use in their design are illustrated. A detailed step-by-step explanation of how control engineering methods can be used with intensive longitudinal data to design an adaptive intensive intervention is provided. The methods are evaluated via simulation. Simulation results illustrate how the designed adaptive intensive intervention can result in improved outcomes with less treatment by providing treatment only when it is needed. Furthermore, the methods are robust to model misspecification as well as the influence of unobserved causes. These new methods can be used to design adaptive interventions that are effective yet reduce participant burden. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  16. A novel EMD selecting thresholding method based on multiple iteration for denoising LIDAR signal

    Science.gov (United States)

    Li, Meng; Jiang, Li-hui; Xiong, Xing-long

    2015-06-01

    The empirical mode decomposition (EMD) approach has been believed to be potentially useful for processing nonlinear and non-stationary LIDAR signals. To shed further light on its performance, we propose the EMD selecting thresholding method based on multiple iteration, which essentially acts as a development of EMD interval thresholding (EMD-IT): the samples in the noisy parts of all the corrupted intrinsic mode functions are randomly altered at each iteration, so that the iterations can be combined for a better denoising effect. Simulations on both synthetic signals and LIDAR signals from the real world support this method.
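
    A per-sample simplification of the EMD thresholding idea is sketched below (true EMD-IT zeroes whole zero-crossing intervals of an IMF, and the paper's contribution, randomizing the noisy samples over multiple iterations, is omitted here). It assumes the PyEMD package (distributed on PyPI as EMD-signal):

```python
import numpy as np
from PyEMD import EMD  # pip install EMD-signal

def emd_hard_threshold_denoise(signal, noisy_imfs=3):
    """Denoise by hard-thresholding the first (noise-dominated) IMFs.

    The noise level is estimated from the first IMF; samples of the
    leading IMFs below the universal threshold are zeroed before
    reconstruction. A per-sample simplification of EMD-IT."""
    signal = np.asarray(signal, dtype=float)
    imfs = EMD().emd(signal)
    sigma = np.median(np.abs(imfs[0])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(signal.size))
    for k in range(min(noisy_imfs, len(imfs))):
        imfs[k] = np.where(np.abs(imfs[k]) > thr, imfs[k], 0.0)
    return imfs.sum(axis=0)
```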

  17. Threshold-based Adaptive Detection for WSN

    KAUST Repository

    Abuzaid, Abdulrahman I.

    2014-01-06

    Efficient receiver designs for wireless sensor networks (WSNs) are becoming increasingly important. Cooperative WSNs communicated with the use of L sensors. As the receiver is constrained, it can only process U out of L sensors. Channel shortening and reduced-rank techniques were employed to design the preprocessing matrix. In this work, a receiver structure is proposed which combines the joint iterative optimization (JIO) algorithm and our proposed threshold selection criteria. This receiver structure assists in determining the optimal Uopt. It also provides the freedom to choose U

  18. Threshold-based Adaptive Detection for WSN

    KAUST Repository

    Abuzaid, Abdulrahman I.; Ahmed, Qasim Zeeshan; Alouini, Mohamed-Slim

    2014-01-01

    Efficient receiver designs for wireless sensor networks (WSNs) are becoming increasingly important. Cooperative WSNs communicated with the use of L sensors. As the receiver is constrained, it can only process U out of L sensors. Channel shortening and reduced-rank techniques were employed to design the preprocessing matrix. In this work, a receiver structure is proposed which combines the joint iterative optimization (JIO) algorithm and our proposed threshold selection criteria. This receiver structure assists in determining the optimal Uopt. It also provides the freedom to choose U

  19. Low-resolution expression recognition based on central oblique average CS-LBP with adaptive threshold

    Science.gov (United States)

    Han, Sheng; Xi, Shi-qiong; Geng, Wei-dong

    2017-11-01

    In order to solve the problem of low recognition rate of traditional feature extraction operators under low-resolution images, a novel algorithm of expression recognition is proposed, named central oblique average center-symmetric local binary pattern (CS-LBP) with adaptive threshold (ATCS-LBP). Firstly, the features of face images can be extracted by the proposed operator after pretreatment. Secondly, the obtained feature image is divided into blocks. Thirdly, the histogram of each block is computed independently and all histograms can be connected serially to create a final feature vector. Finally, expression classification is achieved by using support vector machine (SVM) classifier. Experimental results on Japanese female facial expression (JAFFE) database show that the proposed algorithm can achieve a recognition rate of 81.9% when the resolution is as low as 16×16, which is much better than that of the traditional feature extraction operators.
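
    The center-symmetric LBP core of such an operator is easy to sketch. The version below compares the four center-symmetric pixel pairs of each 3x3 neighbourhood against a fixed threshold; adapting that threshold (and adding the central oblique average) is precisely the paper's contribution and is omitted here:

```python
import numpy as np

def cs_lbp(img, threshold=0.01):
    """4-bit center-symmetric LBP codes for an image scaled to [0, 1].

    A bit is set when a center-symmetric pair differs by more than
    'threshold' (fixed here; adaptive in the paper)."""
    i = np.asarray(img, dtype=float)
    pairs = [
        (i[:-2, :-2], i[2:, 2:]),      # NW vs SE
        (i[:-2, 1:-1], i[2:, 1:-1]),   # N  vs S
        (i[:-2, 2:], i[2:, :-2]),      # NE vs SW
        (i[1:-1, 2:], i[1:-1, :-2]),   # E  vs W
    ]
    code = np.zeros(i[1:-1, 1:-1].shape, dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        code |= ((a - b) > threshold).astype(np.uint8) << bit
    return code
```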

  20. Ventilatory thresholds determined from HRV: comparison of 2 methods in obese adolescents.

    Science.gov (United States)

    Quinart, S; Mourot, L; Nègre, V; Simon-Rigaud, M-L; Nicolet-Guénat, M; Bertrand, A-M; Meneveau, N; Mougin, F

    2014-03-01

    The development of personalised training programmes is crucial in the management of obesity. We evaluated the ability of 2 heart rate variability analyses to determine ventilatory thresholds (VT) in obese adolescents. 20 adolescents (mean age 14.3±1.6 years and body mass index z-score 4.2±0.1) performed an incremental test to exhaustion before and after a 9-month multidisciplinary management programme. The first (VT1) and second (VT2) ventilatory thresholds were identified by the reference method (gas exchanges). We recorded RR intervals to estimate VT1 and VT2 from heart rate variability using time-domain analysis and time-varying spectral-domain analysis. The correlation coefficients between thresholds were higher with spectral-domain than with time-domain analysis (heart rate at VT1: r=0.91 vs. r=0.66; at VT2: r=0.91 vs. r=0.66; power at VT1: r=0.91 vs. r=0.74; at VT2: r=0.93 vs. r=0.78; spectral-domain vs. time-domain, respectively). No systematic bias in heart rate at VT1 and VT2 was found, with standard deviations <6 bpm, confirming that spectral-domain analysis could replace the reference method for the detection of ventilatory thresholds. Furthermore, this technique is sensitive to rehabilitation and re-training, which underlines its utility in clinical practice. This inexpensive and non-invasive tool is promising for prescribing physical activity programs in obese adolescents. © Georg Thieme Verlag KG Stuttgart · New York.

  1. Estimating resting motor thresholds in transcranial magnetic stimulation research and practice: a computer simulation evaluation of best methods.

    Science.gov (United States)

    Borckardt, Jeffrey J; Nahas, Ziad; Koola, Jejo; George, Mark S

    2006-09-01

    Resting motor threshold is the basic unit of dosing in transcranial magnetic stimulation (TMS) research and practice. There is little consensus on how best to estimate resting motor threshold with TMS, and only a few tools and resources are readily available to TMS researchers. The current study investigates the accuracy and efficiency of 5 different approaches to motor threshold assessment for TMS research and practice applications. Computer simulation models are used to test the efficiency and accuracy of 5 different adaptive parameter estimation by sequential testing (PEST) procedures. For each approach, data are presented with respect to the mean number of TMS trials necessary to reach the motor threshold estimate as well as the mean accuracy of the estimates. A simple nonparametric PEST procedure appears to provide the most accurate motor threshold estimates, but takes slightly longer (on average, 3.48 trials) to complete than a popular parametric alternative (maximum likelihood PEST). Recommendations are made for the best starting values for each of the approaches to maximize both efficiency and accuracy. In light of the computer simulation data provided in this article, the authors review and suggest which techniques might best fit different TMS research and clinical situations. Lastly, a free user-friendly software package is described and made available on the world wide web that allows users to run all of the motor threshold estimation procedures discussed in this article for clinical and research applications.
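
    A minimal staircase-style threshold hunt in the spirit of the nonparametric approaches evaluated here is sketched below; the step-halving rule, intensity bounds, and trial cap are our simplifications, not any of the five studied procedures exactly:

```python
def hunt_motor_threshold(evokes_mep, start=50, step=16, min_step=1,
                         max_trials=40):
    """Staircase threshold hunt for TMS resting motor threshold.

    evokes_mep(intensity) returns True if a pulse at that intensity
    (% maximum stimulator output) evokes a motor evoked potential.
    The step size is halved at each response reversal."""
    intensity, prev = start, None
    for _ in range(max_trials):
        hit = evokes_mep(intensity)
        if prev is not None and hit != prev:      # response reversal
            if step == min_step:
                break                             # converged
            step = max(step // 2, min_step)
        prev = hit
        intensity = max(0, min(100, intensity + (-step if hit else step)))
    return intensity
```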

  2. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    Directory of Open Access Journals (Sweden)

    Yunyi Li

    2017-12-01

    Full Text Available Both L1/2 and L2/3 are typical non-convex regularizations of Lp (0 < p < 1), which can be employed to obtain a sparser solution than L1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity of L1 regularization for sparse signal recovery, combined with iterative reweighted algorithms. To further exploit the sparse structure of signals and images, this paper adopts multiple-dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative Lp thresholding algorithm and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter rule is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse-transform-based Lp case. Moreover, we conduct some applications on sparse image recovery and obtain good results in comparison with related work.

  3. Evidence accumulator or decision threshold - which cortical mechanism are we observing?

    Directory of Open Access Journals (Sweden)

    Patrick eSimen

    2012-06-01

    Full Text Available Most psychological models of perceptual decision making are of the accumulation-to-threshold variety. The neural basis of accumulation in parietal and prefrontal cortex is therefore a topic of great interest in neuroscience. In contrast, threshold mechanisms have received less attention, and their neural basis has usually been sought in subcortical structures. Here I analyze a model of a decision threshold that can be implemented in the same cortical areas as evidence accumulators, and whose behavior bears on two open questions in decision neuroscience: (1) When ramping activity is observed in a brain region during decision making, does it reflect evidence accumulation? (2) Are changes in speed-accuracy tradeoffs and response biases more likely to be achieved by changes in thresholds, or in accumulation rates and starting points? The analysis suggests that task-modulated ramping activity, by itself, is weak evidence that a brain area mediates evidence accumulation as opposed to threshold readout; and that signs of modulated accumulation are as likely to indicate threshold adaptation as adaptation of starting points and accumulation rates. These conclusions imply that how thresholds are modeled can dramatically impact accumulator-based interpretations of these data.

  4. Adaptive finite element methods for differential equations

    CERN Document Server

    Bangerth, Wolfgang

    2003-01-01

    These Lecture Notes discuss concepts of `self-adaptivity' in the numerical solution of differential equations, with emphasis on Galerkin finite element methods. The key issues are a posteriori error estimation and automatic mesh adaptation. Besides the traditional approach of energy-norm error control, a new duality-based technique, the Dual Weighted Residual method for goal-oriented error estimation, is discussed in detail. This method aims at economical computation of arbitrary quantities of physical interest by properly adapting the computational mesh. This is typically required in the design cycles of technical applications. For example, the drag coefficient of a body immersed in a viscous flow is computed, then it is minimized by varying certain control parameters, and finally the stability of the resulting flow is investigated by solving an eigenvalue problem. `Goal-oriented' adaptivity is designed to achieve these tasks with minimal cost. At the end of each chapter some exercises are posed in order ...

  5. Adaptive discrete cosine transform coding algorithm for digital mammography

    Science.gov (United States)

    Baskurt, Atilla M.; Magnin, Isabelle E.; Goutte, Robert

    1992-09-01

    The need for storage, transmission, and archiving of medical images has led researchers to develop adaptive and efficient data compression techniques. Among medical images, x-ray radiographs of the breast are especially difficult to process because of their particularly low contrast and very fine structures. A block adaptive coding algorithm based on the discrete cosine transform to compress digitized mammograms is described. A homogeneous repartition of the degradation in the decoded images is obtained using a spatially adaptive threshold. This threshold depends on the coding error associated with each block of the image. The proposed method is tested on a limited number of pathological mammograms including opacities and microcalcifications. A comparative visual analysis is performed between the original and the decoded images. Finally, it is shown that data compression with rather high compression rates (11 to 26) is possible in the mammography field.
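
    A block-adaptive DCT thresholding pass is sketched below. Deriving each block's threshold from its AC energy is a simple stand-in for the paper's coding-error-driven threshold:

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_adaptive_dct(img, block=8, base_thr=10.0):
    """Discard small DCT coefficients with a per-block threshold.

    Blocks with more AC energy (fine structures such as
    microcalcifications) get a slightly higher threshold, spreading
    the degradation homogeneously. The threshold rule is illustrative."""
    img = np.asarray(img, dtype=float)
    out = np.zeros_like(img)
    for y in range(0, img.shape[0] - block + 1, block):
        for x in range(0, img.shape[1] - block + 1, block):
            c = dctn(img[y:y + block, x:x + block], norm="ortho")
            ac_energy = (c ** 2).sum() - c[0, 0] ** 2
            thr = base_thr * (1.0 + ac_energy / (c.size * 255.0 ** 2))
            c[np.abs(c) < thr] = 0.0      # drop negligible coefficients
            out[y:y + block, x:x + block] = idctn(c, norm="ortho")
    return out
```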

  6. Laying the Groundwork for NCLEX Success: An Exploration of Adaptive Quizzing as an Examination Preparation Method.

    Science.gov (United States)

    Cox-Davenport, Rebecca A; Phelan, Julia C

    2015-05-01

    First-time NCLEX-RN pass rates are an important indicator of nursing school success and quality. Nursing schools use different methods to anticipate NCLEX outcomes and help prevent student failure and possible threat to accreditation. This study evaluated the impact of a shift in NCLEX preparation policy at a BSN program in the southeast United States. The policy shifted from the use of predictor score thresholds to determine graduation eligibility to a more proactive remediation strategy involving adaptive quizzing. A descriptive correlational design evaluated the impact of an adaptive quizzing system designed to give students ongoing active practice and feedback and explored the relationship between predictor examinations and NCLEX success. Data from student usage of the system as well as scores on predictor tests were collected for three student cohorts. Results revealed a positive correlation between adaptive quizzing system usage and content mastery. Two of the 69 students in the sample did not pass the NCLEX. With so few students failing the NCLEX, predictability of any course variables could not be determined. The power of predictor examinations to predict NCLEX failure could also not be supported. The most consistent factor among students, however, was their content mastery level within the adaptive quizzing system. Implications of these findings are discussed.

  7. Adaptive 4d Psi-Based Change Detection

    Science.gov (United States)

    Yang, Chia-Hsiang; Soergel, Uwe

    2018-04-01

    In a previous work, we proposed a PSI-based 4D change detection to detect disappearing and emerging PS points (3D) along with their occurrence dates (1D). Such change points are usually caused by anthropic events, e.g., building constructions in cities. This method first divides an entire SAR image stack into several subsets by a set of break dates. The PS points, which are selected based on their temporal coherences before or after a break date, are regarded as change candidates. Change points are then extracted from these candidates according to their change indices, which are modelled from their temporal coherences of divided image subsets. Finally, we check the evolution of the change indices for each change point to detect the break date that this change occurred. The experiment validated both feasibility and applicability of our method. However, two questions still remain. First, selection of temporal coherence threshold associates with a trade-off between quality and quantity of PS points. This selection is also crucial for the amount of change points in a more complex way. Second, heuristic selection of change index thresholds brings vulnerability and causes loss of change points. In this study, we adapt our approach to identify change points based on statistical characteristics of change indices rather than thresholding. The experiment validates this adaptive approach and shows increase of change points compared with the old version. In addition, we also explore and discuss optimal selection of temporal coherence threshold.

  8. The Method of Adaptive Comparative Judgement

    Science.gov (United States)

    Pollitt, Alastair

    2012-01-01

    Adaptive Comparative Judgement (ACJ) is a modification of Thurstone's method of comparative judgement that exploits the power of adaptivity, but in scoring rather than testing. Professional judgement by teachers replaces the marking of tests; a judge is asked to compare the work of two students and simply to decide which of them is the better.…

  9. Estimating extremes in climate change simulations using the peaks-over-threshold method with a non-stationary threshold

    Czech Academy of Sciences Publication Activity Database

    Kyselý, Jan; Picek, J.; Beranová, Romana

    2010-01-01

    Roč. 72, 1-2 (2010), s. 55-68 ISSN 0921-8181 R&D Projects: GA ČR GA205/06/1535; GA ČR GAP209/10/2045 Grant - others:GA MŠk(CZ) LC06024 Institutional research plan: CEZ:AV0Z30420517 Keywords : climate change * extreme value analysis * global climate models * peaks-over-threshold method * peaks-over-quantile regression * quantile regression * Poisson process * extreme temperatures Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 3.351, year: 2010

  10. An Adaptive Reordered Method for Computing PageRank

    Directory of Open Access Journals (Sweden)

    Yi-Ming Bu

    2013-01-01

    Full Text Available We propose an adaptive reordered method to deal with the PageRank problem. It has been shown that one can reorder the hyperlink matrix of the PageRank problem to calculate a reduced system and get the full PageRank vector through forward substitutions. This method can provide a speedup for calculating the PageRank vector. We observe that in the existing reordered method, the cost of the recursive reordering procedure could offset the computational reduction brought by minimizing the dimension of the linear system. With this observation, we introduce an adaptive reordered method to accelerate the total calculation, in which we terminate the reordering procedure appropriately instead of reordering to the end. Numerical experiments show the effectiveness of this adaptive reordered method.

  11. Large Covariance Estimation by Thresholding Principal Orthogonal Complements.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2013-09-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented.
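
    The POET construction itself is compact: remove the top-K principal components, threshold the residual covariance, and add the low-rank part back. A sketch with a single constant soft-threshold (the paper uses an adaptive, entry-dependent threshold):

```python
import numpy as np

def poet(X, K, tau):
    """POET covariance estimate from an n x p data matrix X.

    K  : number of principal components (factors) removed;
    tau: soft-thresholding level for the residual covariance
         (constant here; adaptive and entry-dependent in the paper)."""
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)                    # ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]
    low_rank = (vecs[:, :K] * vals[:K]) @ vecs[:, :K].T
    R = S - low_rank                                  # residual covariance
    R_thr = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)
    np.fill_diagonal(R_thr, np.diag(R))               # never threshold diagonal
    return low_rank + R_thr
```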

  12. Can adaptive threshold-based metabolic tumor volume (MTV) and lean body mass corrected standard uptake value (SUL) predict prognosis in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy?

    Science.gov (United States)

    Akagunduz, Ozlem Ozkaya; Savas, Recep; Yalman, Deniz; Kocacelebi, Kenan; Esassolak, Mustafa

    2015-11-01

    To evaluate the predictive value of adaptive threshold-based metabolic tumor volume (MTV), maximum standardized uptake value (SUVmax) and maximum lean body mass corrected SUV (SULmax) measured on pretreatment positron emission tomography and computed tomography (PET/CT) imaging in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy. Pretreatment PET/CT of the 62 patients with locally advanced head and neck cancer who were treated consecutively between May 2010 and February 2013 were reviewed retrospectively. The maximum FDG uptake of the primary tumor was defined according to SUVmax and SULmax. Multiple threshold levels between 60% and 10% of the SUVmax and SULmax were tested with intervals of 5% to 10% in order to define the most suitable threshold value for the metabolic activity of each patient's tumor (adaptive threshold). MTV was calculated according to this value. We evaluated the relationship of mean values of MTV, SUVmax and SULmax with treatment response, local recurrence, distant metastasis and disease-related death. Receiver-operating characteristic (ROC) curve analysis was done to obtain optimal predictive cut-off values for MTV and SULmax which were found to have a predictive value. Local recurrence-free (LRFS), disease-free (DFS) and overall survival (OS) were examined according to these cut-offs. Forty six patients had complete response, 15 had partial response, and 1 had stable disease 6 weeks after the completion of treatment. Median follow-up of the entire cohort was 18 months. Of 46 complete responders 10 had local recurrence, and of 16 partial or no responders 10 had local progression. Eighteen patients died. Adaptive threshold-based MTV had significant predictive value for treatment response (p=0.011), local recurrence/progression (p=0.050), and disease-related death (p=0.024). SULmax had a predictive value for local recurrence/progression (p=0.030). ROC curves analysis revealed a cut-off value of 14.00 mL for

  13. Circuit and method for controlling the threshold voltage of transistors.

    NARCIS (Netherlands)

    2008-01-01

    A control unit, for controlling a threshold voltage of a circuit unit having transistor devices, includes a reference circuit and a measuring unit. The measuring unit is configured to measure a threshold voltage of at least one sensing transistor of the circuit unit, and to measure a threshold

  14. Methods of scaling threshold color difference using printed samples

    Science.gov (United States)

    Huang, Min; Cui, Guihua; Liu, Haoxue; Luo, M. Ronnier

    2012-01-01

    A series of printed samples on a semi-gloss paper substrate, with color differences of threshold magnitude, were prepared for scaling the visual color difference and evaluating the performance of different methods. The probabilities of perceptibility were normalized to Z-scores, and the different color differences were scaled against these Z-scores. The visual color differences were thus obtained and checked with the STRESS factor. The results indicated that only the scales changed while the relative scales between pairs in the data were preserved.

  15. Track and vertex reconstruction: From classical to adaptive methods

    International Nuclear Information System (INIS)

    Strandlie, Are; Fruehwirth, Rudolf

    2010-01-01

    This paper reviews classical and adaptive methods of track and vertex reconstruction in particle physics experiments. Adaptive methods have been developed to meet the experimental challenges at high-energy colliders, in particular, the CERN Large Hadron Collider. They can be characterized by the obliteration of the traditional boundaries between pattern recognition and statistical estimation, by the competition between different hypotheses about what constitutes a track or a vertex, and by a high level of flexibility and robustness achieved with a minimum of assumptions about the data. The theoretical background of some of the adaptive methods is described, and it is shown that there is a close connection between the two main branches of adaptive methods: neural networks and deformable templates, on the one hand, and robust stochastic filters with annealing, on the other hand. As both classical and adaptive methods of track and vertex reconstruction presuppose precise knowledge of the positions of the sensitive detector elements, the paper includes an overview of detector alignment methods and a survey of the alignment strategies employed by past and current experiments.

  16. Adaptation of an Agile Information System Development Method

    NARCIS (Netherlands)

    Aydin, M.N.; Harmsen, A.F.; van Hillegersberg, Jos; Stegwee, R.A.; Siau, K.

    2007-01-01

    Little specific research has been conducted to date on the adaptation of agile information systems development (ISD) methods. This chapter presents the work practice in dealing with the adaptation of such a method in the ISD department of one of the leading financial institutes in Europe. The

  17. Critical review and hydrologic application of threshold detection methods for the generalized Pareto (GP) distribution

    Science.gov (United States)

    Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto

    2016-04-01

    Estimation of extreme rainfall from data constitutes one of the most important issues in statistical hydrology, as it is associated with the design of hydraulic structures and flood water management. To that extent, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing methods to fit a generalized Pareto (GP) distribution model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data, graphical methods where one studies the dependence of GP distribution parameters (or related metrics) on the threshold level u, and Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u that a GP distribution model is applicable. In this work, we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 daily rainfall records from the NOAA-NCDC open-access database, with more than 110 years of data. We find that non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while methods that are based on asymptotic properties of the upper distribution tail lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low; i.e. on the order of 0.1 ÷ 0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on pre-asymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2÷12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the
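
    As one concrete example of a GoF-based selector, the sketch below returns the lowest candidate threshold at which a GP distribution fitted to the excesses is not rejected. Note that using the fitted parameters inside a plain Kolmogorov-Smirnov test is only approximate; the studies reviewed rely on corrected GoF statistics:

```python
import numpy as np
from scipy import stats

def select_gp_threshold(x, candidates, alpha=0.05, min_excesses=30):
    """Lowest threshold u whose excesses pass a KS test for the GP fit."""
    x = np.asarray(x, dtype=float)
    for u in sorted(candidates):
        exc = x[x > u] - u
        if exc.size < min_excesses:
            break                                   # too few points left
        c, _, scale = stats.genpareto.fit(exc, floc=0.0)
        p = stats.kstest(exc, "genpareto", args=(c, 0.0, scale)).pvalue
        if p > alpha:
            return u
    return None
```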

  18. Adaptive θ-methods for pricing American options

    Science.gov (United States)

    Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran

    2008-12-01

    We develop adaptive θ-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of θ-methods would require a Newton-type iterative procedure at each time step thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which θ-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
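
    A bare-bones θ-method discretization for an American put is sketched below; here the early-exercise constraint is enforced by projecting onto the payoff after each time step, a simpler device than the paper's linearly implicit scheme:

```python
import numpy as np

def american_put_theta(S_max=200.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                       M=200, N=200, theta=0.5):
    """Theta-method finite differences for an American put (projected)."""
    dt = T / N
    S = np.linspace(0.0, S_max, M + 1)
    payoff = np.maximum(K - S, 0.0)
    V = payoff.copy()
    i = np.arange(1, M)                       # interior node indices
    a = 0.5 * (sigma**2 * i**2 - r * i)       # sub-diagonal coefficients
    b = -(sigma**2 * i**2 + r)                # diagonal
    c = 0.5 * (sigma**2 * i**2 + r * i)       # super-diagonal
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    I = np.eye(M - 1)
    lhs = I - theta * dt * A
    rhs_mat = I + (1.0 - theta) * dt * A
    for _ in range(N):                        # march in time-to-maturity
        rhs = rhs_mat @ V[1:-1]
        rhs[0] += dt * a[0] * K               # boundary V(0, t) = K
        V[1:-1] = np.linalg.solve(lhs, rhs)
        V = np.maximum(V, payoff)             # early-exercise projection
        V[0], V[-1] = K, 0.0
    return S, V
```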

  19. An NMR log echo data de-noising method based on the wavelet packet threshold algorithm

    International Nuclear Information System (INIS)

    Meng, Xiangning; Xie, Ranhong; Li, Changxi; Hu, Falong; Li, Chaoliu; Zhou, Cancan

    2015-01-01

    To improve the de-noising of low signal-to-noise ratio (SNR) nuclear magnetic resonance (NMR) log echo data, this paper applies the wavelet packet threshold algorithm to the data. The principle of the algorithm is elaborated in detail. By comparing the properties of a series of wavelet packet bases and their relevance to the NMR log echo train signal, ‘sym7’ is found to be the optimal wavelet packet basis of the wavelet packet threshold algorithm for de-noising the NMR log echo train signal. A new method is presented to determine the optimal wavelet packet decomposition scale: within the scope of its maximum, the modulus maxima and Shannon entropy minimum criteria are used to determine the global and local optimal wavelet packet decomposition scales, respectively. The results of applying the method to simulated and actual NMR log echo data indicate that, compared with the wavelet threshold algorithm, the wavelet packet threshold algorithm shows higher decomposition accuracy and a better de-noising effect, and is much more suitable for de-noising low-SNR NMR log echo data. (paper)
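
    With PyWavelets, a basic wavelet packet thresholding pass using the ‘sym7’ basis reported as optimal might look as follows. The universal soft threshold and the noise estimate from the last leaf are standard stand-ins; the paper's modulus-maxima / Shannon-entropy scale selection is not reproduced:

```python
import numpy as np
import pywt  # PyWavelets

def denoise_echo_train(echoes, wavelet="sym7", maxlevel=4):
    """Wavelet packet soft-thresholding of an NMR log echo train."""
    echoes = np.asarray(echoes, dtype=float)
    wp = pywt.WaveletPacket(echoes, wavelet, maxlevel=maxlevel)
    leaves = wp.get_level(maxlevel, order="natural")
    sigma = np.median(np.abs(leaves[-1].data)) / 0.6745   # noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(echoes.size))
    for node in leaves:
        node.data = pywt.threshold(node.data, thr, "soft")
    return wp.reconstruct(update=False)[:echoes.size]
```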

  20. Methods used in adaptation of health-related guidelines: A systematic survey.

    Science.gov (United States)

    Abdul-Khalek, Rima A; Darzi, Andrea J; Godah, Mohammad W; Kilzar, Lama; Lakis, Chantal; Agarwal, Arnav; Abou-Jaoude, Elias; Meerpohl, Joerg J; Wiercioch, Wojtek; Santesso, Nancy; Brax, Hneine; Schünemann, Holger; Akl, Elie A

    2017-12-01

    Adaptation refers to the systematic approach for considering the endorsement or modification of recommendations produced in one setting for application in another as an alternative to de novo development. To describe and assess the methods used for adapting health-related guidelines published in peer-reviewed journals, and to assess the quality of the resulting adapted guidelines. We searched Medline and Embase up to June 2015. We assessed the method of adaptation, and the quality of included guidelines. Seventy-two papers were eligible. Most adapted guidelines and their source guidelines were published by professional societies (71% and 68% respectively), and in high-income countries (83% and 85% respectively). Of the 57 adapted guidelines that reported any detail about adaptation method, 34 (60%) did not use a published adaptation method. The number (and percentage) of adapted guidelines fulfilling each of the ADAPTE steps ranged between 2 (4%) and 57 (100%). The quality of adapted guidelines was highest for the "scope and purpose" domain and lowest for the "editorial independence" domain (respective mean percentages of the maximum possible scores were 93% and 43%). The mean score for "rigor of development" was 57%. Most adapted guidelines published in peer-reviewed journals do not report using a published adaptation method, and their adaptation quality was variable.

  1. Adaptive Spot Detection With Optimal Scale Selection in Fluorescence Microscopy Images.

    Science.gov (United States)

    Basset, Antoine; Boulanger, Jérôme; Salamero, Jean; Bouthemy, Patrick; Kervrann, Charles

    2015-11-01

    Accurately detecting subcellular particles in fluorescence microscopy is of primary interest for further quantitative analysis such as counting, tracking, or classification. Our primary goal is to segment vesicles likely to share nearly the same size in fluorescence microscopy images. Our method termed adaptive thresholding of Laplacian of Gaussian (LoG) images with autoselected scale (ATLAS) automatically selects the optimal scale corresponding to the most frequent spot size in the image. Four criteria are proposed and compared to determine the optimal scale in a scale-space framework. Then, the segmentation stage amounts to thresholding the LoG of the intensity image. In contrast to other methods, the threshold is locally adapted given a probability of false alarm (PFA) specified by the user for the whole set of images to be processed. The local threshold is automatically derived from the PFA value and local image statistics estimated in a window whose size is not a critical parameter. We also propose a new data set for benchmarking, consisting of six collections of one hundred images each, which exploits backgrounds extracted from real microscopy images. We have carried out an extensive comparative evaluation on several data sets with ground-truth, which demonstrates that ATLAS outperforms existing methods. ATLAS does not need any fine parameter tuning and requires very low computation time. Convincing results are also reported on real total internal reflection fluorescence microscopy images.
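
    A condensed sketch of the pipeline described (LoG filtering, automatic scale selection, PFA-driven local threshold) is given below. The scale-selection criterion shown, the largest response spread, is a crude stand-in for the four criteria compared in the paper:

```python
import numpy as np
from scipy import ndimage, stats

def detect_spots(img, scales=(1, 2, 3, 4), pfa=1e-3, win=15):
    """ATLAS-style spot detection (simplified).

    The LoG response is thresholded locally: threshold = local mean +
    z(PFA) * local std, with statistics from a 'win' x 'win' window."""
    img = np.asarray(img, dtype=float)
    logs = [-s**2 * ndimage.gaussian_laplace(img, s) for s in scales]
    L = logs[int(np.argmax([l.std() for l in logs]))]  # crude scale choice
    mu = ndimage.uniform_filter(L, win)                # local mean
    var = np.maximum(ndimage.uniform_filter(L**2, win) - mu**2, 1e-12)
    z = stats.norm.isf(pfa)                            # PFA -> z-score
    return L > mu + z * np.sqrt(var)                   # binary spot mask
```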

  2. [A cloud detection algorithm for MODIS images combining Kmeans clustering and multi-spectral threshold method].

    Science.gov (United States)

    Wang, Wei; Song, Wei-Guo; Liu, Shi-Xing; Zhang, Yong-Ming; Zheng, Hong-Yang; Tian, Wei

    2011-04-01

    An improved method for detecting cloud combining Kmeans clustering and the multi-spectral threshold approach is described. On the basis of landmark spectrum analysis, MODIS data are initially categorized into two major classes by the Kmeans method. The first class includes clouds, smoke and snow, and the second class includes vegetation, water and land. A multi-spectral threshold detection is then applied to the first class to eliminate interference such as smoke and snow. The method was tested with MODIS data acquired at different times under different underlying surface conditions. Visual inspection of the results showed that the algorithm can effectively detect small areas of cloud pixels and exclude interference from the underlying surface, which provides a good foundation for a subsequent fire detection approach.
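
    A two-stage sketch in that spirit is shown below: K-means first separates the bright/cold scene classes, and fixed spectral tests then remove non-cloud members such as smoke and snow. The band choices and threshold values are illustrative, not MODIS-tuned:

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_cloud(bands, ir_bt, vis_ref, bt_max=270.0, ref_min=0.4):
    """Cloud mask from multi-band imagery (h x w x nb reflectances),
    an IR brightness-temperature band, and a visible reflectance band."""
    h, w, nb = bands.shape
    km = KMeans(n_clusters=2, n_init=10).fit(bands.reshape(-1, nb))
    bright = int(np.argmax(km.cluster_centers_.mean(axis=1)))
    candidate = (km.labels_ == bright).reshape(h, w)   # cloud/smoke/snow
    # Multi-spectral tests: clouds are cold in the IR and bright in the
    # visible; smoke is warmer, and snow would need an NDSI-type test.
    return candidate & (ir_bt < bt_max) & (vis_ref > ref_min)
```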

  3. Post-analysis methods for lactate threshold depend on training intensity and aerobic capacity in runners. An experimental laboratory study

    Directory of Open Access Journals (Sweden)

    Tiago Lazzaretti Fernandes

Full Text Available ABSTRACT CONTEXT AND OBJECTIVE: This study aimed to evaluate different mathematical post-analysis methods of determining lactate threshold in highly and lowly trained endurance runners. DESIGN AND SETTING: Experimental laboratory study, in a tertiary-level public university hospital. METHOD: Twenty-seven male endurance runners were divided into two training load groups: lowly trained (frequency < 4 times per week, < 6 consecutive months, training velocity ≥ 5.0 min/km) and highly trained (frequency ≥ 4 times per week, ≥ 6 consecutive months, training velocity < 5.0 min/km). The subjects performed an incremental treadmill protocol, with 1 km/h increases at each subsequent 4-minute stage. Fingertip blood-lactate analysis was performed at the end of each stage. The lactate threshold (i.e. the running velocity at which blood lactate levels began to exponentially increase) was measured using three different methods: increase in blood lactate of 1 mmol/l between stages (DT1), absolute 4 mmol/l blood lactate concentration (4 mmol), and the semi-log method (semi-log). ANOVA was used to compare the different lactate threshold methods and training groups. RESULTS: Highly trained athletes showed significantly greater lactate thresholds than lowly trained runners, regardless of the calculation method used. When all the subject data were combined, DT1 and semi-log were not different, while 4 mmol was significantly lower than the other two methods. The same trends were observed when comparing lactate threshold methods in the lowly trained group. However, 4 mmol was only significantly lower than DT1 in the highly trained group. CONCLUSION: The 4 mmol protocol did not show lactate threshold measurements comparable with the DT1 and semi-log protocols among lowly trained athletes.
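
    The DT1 and 4 mmol criteria lend themselves to a compact implementation; the sketch below interpolates stage velocities against measured lactate, assuming a monotonically rising lactate curve. The semi-log method (regression in log-lactate space) is omitted for brevity.

```python
import numpy as np

def lactate_thresholds(velocity, lactate):
    """DT1 and 4 mmol lactate-threshold estimates, as simple sketches.

    velocity: running speed per stage (km/h); lactate: blood lactate (mmol/l),
    assumed non-decreasing across stages.
    """
    v = np.asarray(velocity, float)
    la = np.asarray(lactate, float)

    # DT1: first stage whose lactate exceeds the previous stage by >= 1 mmol/l.
    jumps = np.diff(la) >= 1.0
    dt1 = float(v[1:][jumps][0]) if jumps.any() else None

    # 4 mmol: velocity at which lactate crosses 4 mmol/l (linear interpolation).
    v4 = float(np.interp(4.0, la, v)) if la.max() >= 4.0 else None
    return dt1, v4

# Example: 4-minute stages with 1 km/h increments (illustrative data).
print(lactate_thresholds([8, 9, 10, 11, 12, 13], [1.1, 1.3, 1.8, 2.6, 4.1, 6.5]))
```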

  4. Interocular transfer of spatial adaptation is weak at low spatial frequencies.

    Science.gov (United States)

    Baker, Daniel H; Meese, Tim S

    2012-06-15

    Adapting one eye to a high contrast grating reduces sensitivity to similar target gratings shown to the same eye, and also to those shown to the opposite eye. According to the textbook account, interocular transfer (IOT) of adaptation is around 60% of the within-eye effect. However, most previous studies on this were limited to using high spatial frequencies, sustained presentation, and criterion-dependent methods for assessing threshold. Here, we measure IOT across a wide range of spatiotemporal frequencies, using a criterion-free 2AFC method. We find little or no IOT at low spatial frequencies, consistent with other recent observations. At higher spatial frequencies, IOT was present, but weaker than previously reported (around 35%, on average, at 8c/deg). Across all conditions, monocular adaptation raised thresholds by around a factor of 2, and observers showed normal binocular summation, demonstrating that they were not binocularly compromised. These findings prompt a reassessment of our understanding of the binocular architecture implied by interocular adaptation. In particular, the output of monocular channels may be available to perceptual decision making at low spatial frequencies. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Adaptive Tuning of Frequency Thresholds Using Voltage Drop Data in Decentralized Load Shedding

    DEFF Research Database (Denmark)

    Hoseinzadeh, Bakhtyar; Faria Da Silva, Filipe Miguel; Bak, Claus Leth

    2015-01-01

Load shedding (LS) is the last firewall and the most expensive control action against power system blackout. In conventional under-frequency LS (UFLS) schemes, the load drop locations are determined in advance, independently of the event location. Furthermore, the frequency thresholds of LS relays are prespecified, constant values, which may not be a comprehensive solution for the wide range of possible events. This paper addresses decentralized LS in which the instantaneous voltage deviation of load buses is used to determine the frequency thresholds of LS relays. The higher frequency thresholds...

  6. Shrinkage-thresholding enhanced born iterative method for solving 2D inverse electromagnetic scattering problem

    KAUST Repository

    Desmal, Abdulla; Bagci, Hakan

    2014-01-01

    A numerical framework that incorporates recently developed iterative shrinkage thresholding (IST) algorithms within the Born iterative method (BIM) is proposed for solving the two-dimensional inverse electromagnetic scattering problem. IST
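
    The record is truncated, but the iterative shrinkage-thresholding step it refers to is the classical ISTA update; a generic sketch for a sparse linear inverse problem follows. Within a Born iterative scheme, the matrix A below would stand in for the linearized scattering operator of the current iteration (an assumption for illustration, not the authors' exact formulation).

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """ISTA for min_x (1/2)||Ax - y||^2 + lam * ||x||_1.

    Generic dense-matrix sketch of iterative shrinkage thresholding.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the smooth data term
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```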

  7. Sub-Volumetric Classification and Visualization of Emphysema Using a Multi-Threshold Method and Neural Network

    Science.gov (United States)

    Tan, Kok Liang; Tanaka, Toshiyuki; Nakamura, Hidetoshi; Shirahata, Toru; Sugiura, Hiroaki

    Chronic Obstructive Pulmonary Disease is a disease in which the airways and tiny air sacs (alveoli) inside the lung are partially obstructed or destroyed. Emphysema is what occurs as more and more of the walls between air sacs get destroyed. The goal of this paper is to produce a more practical emphysema-quantification algorithm that has higher correlation with the parameters of pulmonary function tests compared to classical methods. The use of the threshold range from approximately -900 Hounsfield Unit to -990 Hounsfield Unit for extracting emphysema from CT has been reported in many papers. From our experiments, we realize that a threshold which is optimal for a particular CT data set might not be optimal for other CT data sets due to the subtle radiographic variations in the CT images. Consequently, we propose a multi-threshold method that utilizes ten thresholds between and including -900 Hounsfield Unit and -990 Hounsfield Unit for identifying the different potential emphysematous regions in the lung. Subsequently, we divide the lung into eight sub-volumes. From each sub-volume, we calculate the ratio of the voxels with the intensity below a certain threshold. The respective ratios of the voxels below the ten thresholds are employed as the features for classifying the sub-volumes into four emphysema severity classes. Neural network is used as the classifier. The neural network is trained using 80 training sub-volumes. The performance of the classifier is assessed by classifying 248 test sub-volumes of the lung obtained from 31 subjects. Actual diagnoses of the sub-volumes are hand-annotated and consensus-classified by radiologists. The four-class classification accuracy of the proposed method is 89.82%. The sub-volumetric classification results produced in this study encompass not only the information of emphysema severity but also the distribution of emphysema severity from the top to the bottom of the lung. We hypothesize that besides emphysema severity, the
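
    The feature-extraction step described here reduces, per sub-volume, to the fractions of voxels below each of the ten HU thresholds; a minimal sketch follows (training the neural-network classifier on these vectors is a separate step and not shown).

```python
import numpy as np

def emphysema_features(subvolume_hu):
    """Fraction of voxels below each of ten HU thresholds for one sub-volume.

    Returns a 10-element feature vector, following the multi-threshold
    scheme described above (-900 HU down to -990 HU).
    """
    thresholds = np.linspace(-900.0, -990.0, 10)
    vox = subvolume_hu[np.isfinite(subvolume_hu)].ravel()
    return np.array([(vox < t).mean() for t in thresholds])
```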

  8. New adaptive sampling method in particle image velocimetry

    International Nuclear Information System (INIS)

    Yu, Kaikai; Xu, Jinglei; Tang, Lan; Mo, Jianwei

    2015-01-01

    This study proposes a new adaptive method to enable the number of interrogation windows and their positions in a particle image velocimetry (PIV) image interrogation algorithm to become self-adapted according to the seeding density. The proposed method can relax the constraint of uniform sampling rate and uniform window size commonly adopted in the traditional PIV algorithm. In addition, the positions of the sampling points are redistributed on the basis of the spring force generated by the sampling points. The advantages include control of the number of interrogation windows according to the local seeding density and smoother distribution of sampling points. The reliability of the adaptive sampling method is illustrated by processing synthetic and experimental images. The synthetic example attests to the advantages of the sampling method. Compared with that of the uniform interrogation technique in the experimental application, the spatial resolution is locally enhanced when using the proposed sampling method. (technical design note)

  9. Adaptive design methods in clinical trials – a review

    Directory of Open Access Journals (Sweden)

    Chang Mark

    2008-05-01

Full Text Available Abstract In recent years, the use of adaptive design methods in clinical research and development based on accrued data has become very popular due to its flexibility and efficiency. Based on the adaptations applied, adaptive designs can be classified into three categories: prospective, concurrent (ad hoc), and retrospective adaptive designs. An adaptive design allows modifications to be made to the trial and/or statistical procedures of ongoing clinical trials. However, it is a concern that the actual patient population after the adaptations could deviate from the original target patient population, and consequently the overall type I error rate (the probability of erroneously claiming efficacy for an ineffective drug) may not be controlled. In addition, major adaptations of trial and/or statistical procedures of on-going trials may result in a totally different trial that is unable to address the scientific/medical questions the trial intends to answer. In this article, several commonly considered adaptive designs in clinical trials are reviewed. Impacts of ad hoc adaptations (protocol amendments), challenges in by-design (prospective) adaptations, and obstacles of retrospective adaptations are described. Strategies for the use of adaptive design in clinical development of rare diseases are discussed. Some examples concerning the development of Velcade intended for multiple myeloma and non-Hodgkin's lymphoma are given. Practical issues that are commonly encountered when implementing adaptive design methods in clinical trials are also discussed.

  10. Large Covariance Estimation by Thresholding Principal Orthogonal Complements

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088
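
    A much-simplified sketch of the POET construction follows: remove the top-K principal components from the sample covariance, threshold the off-diagonal entries of the residual, and add the pieces back. A hard universal threshold is used here for brevity, whereas the paper analyzes adaptive thresholding; K and tau are user-chosen in this sketch.

```python
import numpy as np

def poet(X, K, tau):
    """Minimal POET-style estimator: PCA factors + thresholded residual covariance.

    X: n x p data matrix; K: number of factors; tau: threshold level.
    Simplified sketch (hard universal threshold) of the estimator above.
    """
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(vals)[::-1][:K]            # leading principal components
    low_rank = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T
    resid = S - low_rank                        # principal orthogonal complement
    off = resid - np.diag(np.diag(resid))
    off[np.abs(off) < tau] = 0.0                # threshold off-diagonal entries
    return low_rank + np.diag(np.diag(resid)) + off
```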

  11. Implementing Adaptive Educational Methods with IMS Learning Design

    NARCIS (Netherlands)

    Specht, Marcus; Burgos, Daniel

    2006-01-01

    Please, cite this publication as: Specht, M. & Burgos, D. (2006). Implementing Adaptive Educational Methods with IMS Learning Design. Proceedings of Adaptive Hypermedia. June, Dublin, Ireland. Retrieved June 30th, 2006, from http://dspace.learningnetworks.org

  12. Threshold Signature Schemes Application

    Directory of Open Access Journals (Sweden)

    Anastasiya Victorovna Beresneva

    2015-10-01

Full Text Available This work is devoted to an investigation of threshold signature schemes. The threshold signature schemes were systematized, and cryptographic constructions based on Lagrange interpolation polynomials, elliptic curves and bilinear pairings were examined. Different methods of generating and verifying threshold signatures were explored, and the practical usability of threshold schemes in mobile agents, Internet banking and e-currency was shown. Topics for further investigation were given, which could reduce the level of counterfeit electronic documents signed by a group of users.
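
    The Lagrange-interpolation construction mentioned here is the building block of Shamir-style threshold schemes; the sketch below shares a secret among n parties so that any t of them can reconstruct it. This illustrates the threshold mechanism only, not a full threshold signature protocol; the field modulus is an illustrative choice, and the modular inverse via pow requires Python 3.8+.

```python
import random

P = 2**127 - 1  # a large prime field modulus (illustrative)

def share(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it (Shamir)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(12345, t=3, n=5)
assert reconstruct(shares[:3]) == 12345   # any 3 of the 5 shares suffice
```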

  13. The adaptive value of gluttony: predators mediate the life history trade-offs of satiation threshold.

    Science.gov (United States)

    Pruitt, J N; Krauel, J J

    2010-10-01

    Animals vary greatly in their tendency to consume large meals. Yet, whether or how meal size influences fitness in wild populations is infrequently considered. Using a predator exclusion, mark-recapture experiment, we estimated selection on the amount of food accepted during an ad libitum feeding bout (hereafter termed 'satiation threshold') in the wolf spider Schizocosa ocreata. Individually marked, size-matched females of known satiation threshold were assigned to predator exclusion and predator inclusion treatments and tracked for a 40-day period. We also estimated the narrow-sense heritability of satiation threshold using dam-on-female-offspring regression. In the absence of predation, high satiation threshold was positively associated with larger and faster egg case production. However, these selective advantages were lost when predators were present. We estimated the heritability of satiation threshold to be 0.56. Taken together, our results suggest that satiation threshold can respond to selection and begets a life history trade-off in this system: high satiation threshold individuals tend to produce larger egg cases but also suffer increased susceptibility to predation. © 2010 The Authors. Journal Compilation © 2010 European Society For Evolutionary Biology.

  14. Adaptive Control Methods for Soft Robots

    Data.gov (United States)

    National Aeronautics and Space Administration — I propose to develop methods for soft and inflatable robots that will allow the control system to adapt and change control parameters based on changing conditions...

  15. Self-adapted sliding scale spectroscopy ADC

    International Nuclear Information System (INIS)

    Xu Qichun; Wang Jingjin

    1992-01-01

The traditional sliding scale technique causes a disabled range equal to the sliding length, which reduces the analysis range of an MCA. A method for reducing an ADC's DNL, called the self-adapted sliding scale method, has been designed and tested. With this method, the disabled range caused by the traditional sliding scale technique is eliminated by a random trial scale, and no additional amplitude discriminator with a swinging threshold is needed. A special trial-and-correct logic is presented. The tested DNL of the spectroscopy ADC described here is less than 0.5%.

  16. Determining lower threshold concentrations for synergistic effects

    DEFF Research Database (Denmark)

    Bjergager, Maj-Britt Andersen; Dalhoff, Kristoffer; Kretschmann, Andreas

    2017-01-01

    which proven synergists cease to act as synergists towards the aquatic crustacean Daphnia magna. To do this, we compared several approaches and test-setups to evaluate which approach gives the most conservative estimate for the lower threshold for synergy for three known azole synergists. We focus...... on synergistic interactions between the pyrethroid insecticide, alpha-cypermethrin, and one of the three azole fungicides prochloraz, propiconazole or epoxiconazole measured on Daphnia magna immobilization. Three different experimental setups were applied: A standard 48h acute toxicity test, an adapted 48h test...... of immobile organisms increased more than two-fold above what was predicted by independent action (vertical assessment). All three tests confirmed the hypothesis of the existence of a lower azole threshold concentration below which no synergistic interaction was observed. The lower threshold concentration...

  17. Development and testing of methods for adaptive image processing in odontology and medicine

    International Nuclear Information System (INIS)

    Sund, Torbjoern

    2005-01-01

Medical diagnostic imaging has undergone radical changes during the last ten years. In the early 1990s, the medical imaging department was almost exclusively film-based. Today, all major hospitals have converted to digital acquisition and handling of their diagnostic imaging, or are in the process of conversion. It is therefore important to investigate whether diagnostic reading of digitally acquired images on computer display screens can match or even surpass film recording and viewing. At the same time, the digitalisation opens new possibilities for image processing, which may challenge the traditional way of studying medical images. The current work explores some of the possibilities of digital processing techniques, and evaluates the results both by quantitative methods (ROC analysis) and by subjective qualification by real users. Summary of papers: Paper I: Locally adaptive image binarization with a sliding window threshold was used for the detection of bone ridges in radiotherapy portal images. A new thresholding criterion suitable for incremental update within the sliding window was developed, and it was shown that the algorithm gave better results on difficult portal images than various publicly available adaptive thresholding routines. For small windows the routine was also faster than an adaptive implementation of the Otsu algorithm that uses interpolation between fixed tiles, and the resulting images had equal quality. Paper II: It was investigated whether contrast enhancement by non-interactive, sliding window adaptive histogram equalization could enhance the diagnostic quality of intra-oral radiographs in the dental clinic. Three dentists read 22 periapical and 12 bitewing storage phosphor (SP) radiographs. For the periapical readings they graded the quality of the examination with regard to visually locating the root apex. For the bitewing readings they registered all occurrences of approximal caries on a confidence scale. Each reading was first
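
    Paper I's incremental thresholding criterion is not reproduced in the record, but a basic sliding-window local-mean binarization conveys the general approach; the window size and offset below are illustrative, not the thesis's values.

```python
import numpy as np
from scipy import ndimage

def adaptive_binarize(image, win=31, offset=0.0):
    """Sliding-window local-mean binarization.

    A simple stand-in for the incremental sliding-window criterion of
    Paper I: each pixel is compared against the mean of its neighborhood.
    """
    img = image.astype(float)
    local_mean = ndimage.uniform_filter(img, size=win)
    return img > local_mean + offset
```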

  18. Development and testing of methods for adaptive image processing in odontology and medicine

    Energy Technology Data Exchange (ETDEWEB)

    Sund, Torbjoern

    2005-07-01

Medical diagnostic imaging has undergone radical changes during the last ten years. In the early 1990s, the medical imaging department was almost exclusively film-based. Today, all major hospitals have converted to digital acquisition and handling of their diagnostic imaging, or are in the process of conversion. It is therefore important to investigate whether diagnostic reading of digitally acquired images on computer display screens can match or even surpass film recording and viewing. At the same time, the digitalisation opens new possibilities for image processing, which may challenge the traditional way of studying medical images. The current work explores some of the possibilities of digital processing techniques, and evaluates the results both by quantitative methods (ROC analysis) and by subjective qualification by real users. Summary of papers: Paper I: Locally adaptive image binarization with a sliding window threshold was used for the detection of bone ridges in radiotherapy portal images. A new thresholding criterion suitable for incremental update within the sliding window was developed, and it was shown that the algorithm gave better results on difficult portal images than various publicly available adaptive thresholding routines. For small windows the routine was also faster than an adaptive implementation of the Otsu algorithm that uses interpolation between fixed tiles, and the resulting images had equal quality. Paper II: It was investigated whether contrast enhancement by non-interactive, sliding window adaptive histogram equalization could enhance the diagnostic quality of intra-oral radiographs in the dental clinic. Three dentists read 22 periapical and 12 bitewing storage phosphor (SP) radiographs. For the periapical readings they graded the quality of the examination with regard to visually locating the root apex. For the bitewing readings they registered all occurrences of approximal caries on a confidence scale. Each reading was

  19. Comparison of Threshold Detection Methods for the Generalized Pareto Distribution (GPD): Application to the NOAA-NCDC Daily Rainfall Dataset

    Science.gov (United States)

    Deidda, Roberto; Mamalakis, Antonis; Langousis, Andreas

    2015-04-01

    One of the most crucial issues in statistical hydrology is the estimation of extreme rainfall from data. To that extent, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing methods to fit a Generalized Pareto Distribution (GPD) model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches that can be grouped into three basic classes: a) non-parametric methods that locate the changing point between extreme and non-extreme regions of the data, b) graphical methods where one studies the dependence of the GPD parameters (or related metrics) to the threshold level u, and c) Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u that a GPD model is applicable. In this work, we review representative methods for GPD threshold detection, discuss fundamental differences in their theoretical bases, and apply them to daily rainfall records from the NOAA-NCDC open-access database (http://www.ncdc.noaa.gov/oa/climate/ghcn-daily/). We find that non-parametric methods that locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while graphical methods and GoF metrics that rely on limiting arguments for the upper distribution tail lead to unrealistically high thresholds u. The latter is expected, since one checks the validity of the limiting arguments rather than the applicability of a GPD distribution model. Better performance is demonstrated by graphical methods and GoF metrics that rely on GPD properties. Finally, we discuss the effects of data quantization (common in hydrologic applications) on the estimated thresholds. Acknowledgments: The research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General
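
    One of the graphical ("class b") approaches described here can be sketched as a parameter-stability scan: fit a GPD to the excesses over a grid of candidate thresholds and look for the lowest threshold beyond which the shape estimate stabilizes. The quantile grid and minimum sample size below are illustrative choices, not those of the study.

```python
import numpy as np
from scipy import stats

def gpd_threshold_scan(x, quantiles=np.linspace(0.80, 0.99, 20)):
    """Fit a GPD to excesses over a grid of candidate thresholds.

    Returns (threshold, shape, scale, n_excesses) tuples; plotting shape
    vs. threshold and picking the lowest u where the shape stabilizes is
    one of the graphical selection approaches discussed above.
    """
    x = np.asarray(x, float)
    rows = []
    for q in quantiles:
        u = np.quantile(x, q)
        excess = x[x > u] - u
        if excess.size < 30:       # too few excesses for a meaningful fit
            continue
        shape, _, scale = stats.genpareto.fit(excess, floc=0.0)
        rows.append((u, shape, scale, excess.size))
    return rows
```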

  20. Thresholding magnetic resonance images of human brain

    Institute of Scientific and Technical Information of China (English)

    Qing-mao HU; Wieslaw L NOWINSKI

    2005-01-01

    In this paper, methods are proposed and validated to determine low and high thresholds to segment out gray matter and white matter for MR images of different pulse sequences of human brain. First, a two-dimensional reference image is determined to represent the intensity characteristics of the original three-dimensional data. Then a region of interest of the reference image is determined where brain tissues are present. The non-supervised fuzzy c-means clustering is employed to determine: the threshold for obtaining head mask, the low threshold for T2-weighted and PD-weighted images, and the high threshold for T1-weighted, SPGR and FLAIR images. Supervised range-constrained thresholding is employed to determine the low threshold for T1-weighted, SPGR and FLAIR images. Thresholding based on pairs of boundary pixels is proposed to determine the high threshold for T2- and PD-weighted images. Quantification against public data sets with various noise and inhomogeneity levels shows that the proposed methods can yield segmentation robust to noise and intensity inhomogeneity. Qualitatively the proposed methods work well with real clinical data.

1. An n-material thresholding method for improving integerness of solutions in topology optimization

    International Nuclear Information System (INIS)

Watts, Seth; Tortorelli, Daniel A.

    2016-01-01

It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits "fictitious" mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example, the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. Finally, we present results which show synergistic effects when used with Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions.

  2. Automatic luminous reflections detector using global threshold with increased luminosity contrast in images

    Science.gov (United States)

    Silva, Ricardo Petri; Naozuka, Gustavo Taiji; Mastelini, Saulo Martiello; Felinto, Alan Salvany

    2018-01-01

The incidence of luminous reflections (LR) in captured images can interfere with the color of the affected regions. These regions tend to oversaturate, becoming whitish and, consequently, losing the original color information of the scene. Decision processes that employ images acquired from digital cameras can be impaired by LR incidence. Such applications include real-time video surgeries and facial and ocular recognition. This work proposes an algorithm called contrast enhancement of potential LR regions, a preprocessing step that increases the contrast of potential LR regions in order to improve the performance of automatic LR detectors. In addition, three automatic detectors were compared with and without the employment of our preprocessing method. The first is a technique already consolidated in the literature, the Chang-Tseng threshold. We propose two automatic detectors called adapted histogram peak and global threshold. We employed four performance metrics to evaluate the detectors, namely accuracy, precision, exactitude, and root mean square error. The exactitude metric was developed in this work; to support it, a manually defined reference model was created. The global threshold detector combined with our preprocessing method presented the best results, with an average exactitude rate of 82.47%.

  3. Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling

    Science.gov (United States)

    Rastigejev, Y.

    2011-12-01

Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties, associated with simulating a wide range of time and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts the pollutant mixing and transport dynamics for typically used grid resolutions. These numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will only grow over the next decade with the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding finer levels of resolution in locations of fine-scale development and removing them in locations of smooth solution behavior. The algorithm is based on the mathematically well-established wavelet theory. This allows us to provide error estimates of the solution that are used in conjunction with an appropriate threshold criterion to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. The method has been tested for a variety of benchmark problems.

  4. Wavelet methods in multi-conjugate adaptive optics

    International Nuclear Information System (INIS)

    Helin, T; Yudytskiy, M

    2013-01-01

    The next generation ground-based telescopes rely heavily on adaptive optics for overcoming the limitation of atmospheric turbulence. In the future adaptive optics modalities, like multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on using locality properties of compactly supported wavelets, both in the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on the official end-to-end simulation tool OCTOPUS of European Southern Observatory. (paper)

  5. Adaptative mixed methods to axisymmetric shells

    International Nuclear Information System (INIS)

    Malta, S.M.C.; Loula, A.F.D.; Garcia, E.L.M.

    1989-09-01

    The mixed Petrov-Galerkin method is applied to axisymmetric shells with uniform and non uniform meshes. Numerical experiments with a cylindrical shell showed a significant improvement in convergence and accuracy with adaptive meshes. (A.C.A.S.) [pt

  6. Recovering from a bad start: rapid adaptation and tradeoffs to growth below a threshold density

    Directory of Open Access Journals (Sweden)

    Marx Christopher J

    2012-07-01

    Full Text Available Abstract Background Bacterial growth in well-mixed culture is often assumed to be an autonomous process only depending upon the external conditions under control of the investigator. However, increasingly there is awareness that interactions between cells in culture can lead to surprising phenomena such as density-dependence in the initiation of growth. Results Here I report the unexpected discovery of a density threshold for growth of a strain of Methylobacterium extorquens AM1 used to inoculate eight replicate populations that were evolved in methanol. Six of these populations failed to grow to the expected full density during the first couple transfers. Remarkably, the final cell number of six populations crashed to levels 60- to 400-fold smaller than their cohorts. Five of these populations recovered to full density soon after, but one population remained an order of magnitude smaller for over one hundred generations. These variable dynamics appeared to be due to a density threshold for growth that was specific to both this particular ancestral strain and to growth on methanol. When tested at full density, this population had become less fit than its ancestor. Simply increasing the initial dilution 16-fold reversed this result, revealing that this population had more than a 3-fold advantage when tested at this lower density. As this population evolved and ultimately recovered to the same final density range as the other populations this low-density advantage waned. Conclusions These results demonstrate surprisingly strong tradeoffs during adaptation to growth at low absolute densities that manifest over just a 16-fold change in density. Capturing laboratory examples of transitions to and from growth at low density may help us understand the physiological and evolutionary forces that have led to the unusual properties of natural bacteria that have specialized to low-density environments such as the open ocean.

  7. Recovering from a bad start: rapid adaptation and tradeoffs to growth below a threshold density.

    Science.gov (United States)

    Marx, Christopher J

    2012-07-04

    Bacterial growth in well-mixed culture is often assumed to be an autonomous process only depending upon the external conditions under control of the investigator. However, increasingly there is awareness that interactions between cells in culture can lead to surprising phenomena such as density-dependence in the initiation of growth. Here I report the unexpected discovery of a density threshold for growth of a strain of Methylobacterium extorquens AM1 used to inoculate eight replicate populations that were evolved in methanol. Six of these populations failed to grow to the expected full density during the first couple transfers. Remarkably, the final cell number of six populations crashed to levels 60- to 400-fold smaller than their cohorts. Five of these populations recovered to full density soon after, but one population remained an order of magnitude smaller for over one hundred generations. These variable dynamics appeared to be due to a density threshold for growth that was specific to both this particular ancestral strain and to growth on methanol. When tested at full density, this population had become less fit than its ancestor. Simply increasing the initial dilution 16-fold reversed this result, revealing that this population had more than a 3-fold advantage when tested at this lower density. As this population evolved and ultimately recovered to the same final density range as the other populations this low-density advantage waned. These results demonstrate surprisingly strong tradeoffs during adaptation to growth at low absolute densities that manifest over just a 16-fold change in density. Capturing laboratory examples of transitions to and from growth at low density may help us understand the physiological and evolutionary forces that have led to the unusual properties of natural bacteria that have specialized to low-density environments such as the open ocean.

  8. Noninvasive method to estimate anaerobic threshold in individuals with type 2 diabetes

    Directory of Open Access Journals (Sweden)

    Sales Marcelo M

    2011-01-01

Full Text Available Abstract Background While several studies have identified the anaerobic threshold (AT) through the responses of blood lactate, ventilation and blood glucose, others have suggested the response of heart rate variability (HRV) as a method to identify the AT in young healthy individuals. However, the validity of HRV in estimating the lactate threshold (LT) and ventilatory threshold (VT) for individuals with type 2 diabetes (T2D) has not yet been investigated. Aim To analyze the possibility of identifying the heart rate variability threshold (HRVT) by considering the responses of parasympathetic indicators during an incremental exercise test in type 2 diabetic subjects (T2D) and non-diabetic individuals (ND). Methods Nine T2D (55.6 ± 5.7 years, 83.4 ± 26.6 kg, 30.9 ± 5.2 kg·m⁻²) and ten ND (50.8 ± 5.1 years, 76.2 ± 14.3 kg, 26.5 ± 3.8 kg·m⁻²) underwent an incremental exercise test (IT) on a cycle ergometer. Heart rate (HR), rate of perceived exertion (RPE), blood lactate and expired gas concentrations were measured at the end of each stage. HRVT was identified through the responses of the root mean square of successive differences between adjacent R-R intervals (RMSSD) and the standard deviation of instantaneous beat-to-beat R-R interval variability (SD1), considering the last 60 s of each incremental stage; these were termed HRVT by RMSSD and by SD1 (HRVT-RMSSD and HRVT-SD1, respectively). Results No differences were observed within groups for the exercise intensities corresponding to LT, VT, HRVT-RMSSD and HRVT-SD1. Furthermore, strong relationships were verified among the studied parameters both for T2D (r = 0.68 to 0.87) and ND (r = 0.91 to 0.98), and the Bland & Altman technique confirmed the agreement among them. Conclusion HRVT identification by the proposed autonomic indicators (SD1 and RMSSD) was demonstrated to be valid for estimating the LT and VT in both T2D and ND.
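
    The two autonomic indicators used here are straightforward to compute from an R-R interval series; a minimal sketch follows. Note that Poincaré SD1 equals RMSSD/√2 up to the mean of the successive differences, so the two indices carry essentially the same beat-to-beat information.

```python
import numpy as np

def rmssd_sd1(rr_ms):
    """RMSSD and Poincaré SD1 from a series of R-R intervals (ms).

    Both indices are computed per stage (e.g., the last 60 s) when
    locating the HRVT as described above.
    """
    d = np.diff(np.asarray(rr_ms, float))
    rmssd = np.sqrt(np.mean(d**2))
    sd1 = np.sqrt(np.var(d) / 2.0)   # ≈ rmssd / sqrt(2) when mean(d) ≈ 0
    return rmssd, sd1
```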

  9. Selective Extraction of Entangled Textures via Adaptive PDE Transform

    Directory of Open Access Journals (Sweden)

    Yang Wang

    2012-01-01

Full Text Available Texture and feature extraction is an important research area with a wide range of applications in science and technology. Selective extraction of entangled textures is a challenging task due to spatial entanglement, orientation mixing, and high-frequency overlapping. The partial differential equation (PDE) transform is an efficient method for functional mode decomposition. The present work introduces an adaptive PDE transform algorithm that appropriately thresholds the statistical variance of the local variation of functional modes. The proposed adaptive PDE transform is applied to the selective extraction of entangled textures. Successful separations of human face, clothes, background, natural landscape, text, forest, camouflaged sniper and neuron skeletons have validated the proposed method.

  10. Incompressible Navier-Stokes inverse design method based on adaptive unstructured meshes

    International Nuclear Information System (INIS)

    Rahmati, M.T.; Charlesworth, D.; Zangeneh, M.

    2005-01-01

    An inverse method for blade design based on Navier-Stokes equations on adaptive unstructured meshes has been developed. In the method, unlike the method based on inviscid equations, the effect of viscosity is directly taken into account. In the method, the pressure (or pressure loading) is prescribed. The design method then computes the blade shape that would accomplish the target prescribed pressure distribution. The method is implemented using a cell-centered finite volume method, which solves the incompressible Navier-Stokes equations on unstructured meshes. An adaptive unstructured mesh method based on grid subdivision and local adaptive mesh method is utilized for increasing the accuracy. (author)

  11. Near threshold fatigue testing

    Science.gov (United States)

    Freeman, D. C.; Strum, M. J.

    1993-01-01

Measurement of the near-threshold fatigue crack growth rate (FCGR) behavior provides a basis for the design and evaluation of components subjected to high cycle fatigue. Typically, the near-threshold fatigue regime describes crack growth rates below approximately 10⁻⁵ mm/cycle (4 × 10⁻⁷ inch/cycle). One such evaluation was recently performed for the binary alloy U-6Nb. The procedures developed for this evaluation are described in detail to provide a general test method for near-threshold FCGR testing. In particular, techniques for high-resolution measurements of crack length performed in-situ through a direct current, potential drop (DCPD) apparatus, and a method which eliminates crack closure effects through the use of loading cycles with constant maximum stress intensity are described.

  12. Cultural adaptation and translation of measures: an integrated method.

    Science.gov (United States)

    Sidani, Souraya; Guruge, Sepali; Miranda, Joyal; Ford-Gilboe, Marilyn; Varcoe, Colleen

    2010-04-01

    Differences in the conceptualization and operationalization of health-related concepts may exist across cultures. Such differences underscore the importance of examining conceptual equivalence when adapting and translating instruments. In this article, we describe an integrated method for exploring conceptual equivalence within the process of adapting and translating measures. The integrated method involves five phases including selection of instruments for cultural adaptation and translation; assessment of conceptual equivalence, leading to the generation of a set of items deemed to be culturally and linguistically appropriate to assess the concept of interest in the target community; forward translation; back translation (optional); and pre-testing of the set of items. Strengths and limitations of the proposed integrated method are discussed. (c) 2010 Wiley Periodicals, Inc.

  13. Stochastic analysis of epidemics on adaptive time varying networks

    Science.gov (United States)

    Kotnis, Bhushan; Kuri, Joy

    2013-06-01

Many studies investigating the effect of human social connectivity structures (networks) and human behavioral adaptations on the spread of infectious diseases have assumed either a static connectivity structure or a network which adapts itself in response to the epidemic (adaptive networks). However, human social connections are inherently dynamic or time varying. Furthermore, the spread of many infectious diseases occurs on a time scale comparable to the time scale of the evolving network structure. Here we aim to quantify the effect of human behavioral adaptations on the spread of asymptomatic infectious diseases on time varying networks. We perform a full stochastic analysis using a continuous time Markov chain approach for calculating the outbreak probability, mean epidemic duration, epidemic reemergence probability, etc. Additionally, we use mean-field theory for calculating epidemic thresholds. Theoretical predictions are verified using extensive simulations. Our studies have uncovered the existence of an "adaptive threshold," i.e., when the ratio of susceptibility (or infectivity) rate to recovery rate is below the threshold value, adaptive behavior can prevent the epidemic. However, if it is above the threshold, no amount of behavioral adaptations can prevent the epidemic. Our analyses suggest that the interaction patterns of the infected population play a major role in sustaining the epidemic. Our results have implications for epidemic containment policies, as awareness campaigns and human behavioral responses can be effective only if the interaction levels of the infected populace are kept in check.

  14. Dynamic-thresholding level set: a novel computer-aided volumetry method for liver tumors in hepatic CT images

    Science.gov (United States)

    Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.

    2007-03-01

    Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, the computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.

  15. Adaptive finite element method for shape optimization

    KAUST Repository

    Morin, Pedro; Nochetto, Ricardo H.; Pauletti, Miguel S.; Verani, Marco

    2012-01-01

    We examine shape optimization problems in the context of inexact sequential quadratic programming. Inexactness is a consequence of using adaptive finite element methods (AFEM) to approximate the state and adjoint equations (via the dual weighted residual method), update the boundary, and compute the geometric functional. We present a novel algorithm that equidistributes the errors due to shape optimization and discretization, thereby leading to coarse resolution in the early stages and fine resolution upon convergence, and thus optimizing the computational effort. We discuss the ability of the algorithm to detect whether or not geometric singularities such as corners are genuine to the problem or simply due to lack of resolution - a new paradigm in adaptivity. © EDP Sciences, SMAI, 2012.

  16. Adaptive finite element method for shape optimization

    KAUST Repository

    Morin, Pedro

    2012-01-16

    We examine shape optimization problems in the context of inexact sequential quadratic programming. Inexactness is a consequence of using adaptive finite element methods (AFEM) to approximate the state and adjoint equations (via the dual weighted residual method), update the boundary, and compute the geometric functional. We present a novel algorithm that equidistributes the errors due to shape optimization and discretization, thereby leading to coarse resolution in the early stages and fine resolution upon convergence, and thus optimizing the computational effort. We discuss the ability of the algorithm to detect whether or not geometric singularities such as corners are genuine to the problem or simply due to lack of resolution - a new paradigm in adaptivity. © EDP Sciences, SMAI, 2012.

  17. Music effect on pain threshold evaluated with current perception threshold

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

AIM: Music relieves anxiety and psychological tension. This effect of music is applied to surgical operations in hospitals and dental offices. It is still unclear whether the effect of music is limited to the psychological aspect rather than the physical one, or whether the effect is influenced by the mood or emotion of the audience. To elucidate these issues, we evaluated the effect of music on pain threshold using the current perception threshold (CPT) and the profile of mood states (POMS) test. METHODS: Thirty healthy subjects (12 men, 18 women, 25-49 years old, mean age 34.9) were tested. (1) After the POMS test, the pain threshold of all subjects was evaluated with CPT using a Neurometer (Radionics, USA) under 6 conditions: silence, and listening to slow-tempo classical music, nursery music, hard rock music, classical piano music and relaxation music, with 30-second intervals. (2) After a Stroop color-word test as the stressor, pain threshold was evaluated with CPT under 2 conditions: silence and listening to slow-tempo classical music. RESULTS: While listening to music, CPT scores increased, especially at the 2000 Hz level related to compression, warmth and pain sensation. Type of music, preference for the music and stress also affected CPT scores. CONCLUSION: The present study demonstrated that concentration on music raises the pain threshold and that stress and mood influence the effect of music on pain threshold.

  18. Fixed or adapted conditioning intensity for repeated conditioned pain modulation.

    Science.gov (United States)

    Hoegh, M; Petersen, K K; Graven-Nielsen, T

    2017-12-29

Aims Conditioned pain modulation (CPM) is used to assess descending pain modulation through a test stimulation (TS) and a conditioning stimulation (CS). Due to potential carry-over effects, sequential CPM paradigms might alter the intensity of the CS, which potentially can alter the CPM-effect. This study aimed to investigate the difference between a fixed and an adaptive CS intensity on the CPM-effect. Methods On the dominant leg of 20 healthy subjects the cuff pressure detection threshold (PDT) was recorded as TS, and the pain tolerance threshold (PTT) was assessed on the non-dominant leg for estimating the CS. The difference in PDT before and during CS defined the CPM-effect. The CPM-effect was assessed four times using a CS with intensities of 70% of baseline PTT (fixed) or 70% of PTT measured throughout the session (adaptive). Pain intensity of the conditioning stimulus was assessed on a numeric rating scale (NRS). Data were analyzed with repeated-measures ANOVA. Results No difference was found comparing the four PDTs assessed before CSs for the fixed and the adaptive paradigms. The CS pressure intensity for the adaptive paradigm increased significantly over the four repeated assessments. The CPM-effect was significantly higher using the fixed condition compared with the adaptive condition. Conclusions CPM paradigms using a fixed conditioning stimulus produced an increased CPM-effect compared with adaptive and increasing conditioning intensities.

  19. Automatic Threshold Determination for a Local Approach of Change Detection in Long-Term Signal Recordings

    Directory of Open Access Journals (Sweden)

    David Hewson

    2007-01-01

Full Text Available CUSUM (cumulative sum) is a well-known method that can be used to detect changes in a signal when the parameters of this signal are known. This paper presents an adaptation of the CUSUM-based change detection algorithms to long-term signal recordings where the various hypotheses contained in the signal are unknown. The starting point of the work was the dynamic cumulative sum (DCS) algorithm, previously developed for application to long-term electromyography (EMG) recordings. DCS has been improved in two ways. The first was a new procedure to estimate the distribution parameters to ensure the respect of the detectability property. The second was the definition of two separate, automatically determined thresholds. One of them (lower threshold) acted to stop the estimation process; the other one (upper threshold) was applied to the detection function. The automatic determination of the thresholds was based on the Kullback-Leibler distance, which gives information about the distance between the detected segments (events). Tests on simulated data demonstrated the efficiency of these improvements of the DCS algorithm.
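
    The DCS algorithm itself is not spelled out in the record, but the underlying CUSUM recursion it adapts is standard; a fixed-parameter, one-sided version for a Gaussian mean shift is sketched below. DCS replaces the known means with running estimates and derives its two thresholds automatically, which is not reproduced here.

```python
import numpy as np

def cusum(x, mu0, mu1, sigma, h):
    """One-sided CUSUM for a mean shift mu0 -> mu1 in Gaussian noise.

    Classical fixed-parameter formulation: accumulate the log-likelihood
    ratio, reflect at zero, and raise an alarm when it crosses h.
    """
    # Log-likelihood ratio increment for each sample.
    s = (mu1 - mu0) / sigma**2 * (np.asarray(x, float) - (mu0 + mu1) / 2.0)
    g, alarms = 0.0, []
    for k, sk in enumerate(s):
        g = max(0.0, g + sk)      # reflect at zero
        if g > h:                 # decision threshold
            alarms.append(k)
            g = 0.0               # restart after detection
    return alarms
```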

  20. Adaptive sampling method in deep-penetration particle transport problem

    International Nuclear Information System (INIS)

    Wang Ruihong; Ji Zhicheng; Pei Lucheng

    2012-01-01

The deep-penetration problem has been one of the difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, a particle-transport random-walk system that treats the emission point as a sampling station is built. An adaptive sampling scheme is then derived to obtain a better solution from the accumulated information. The main advantage of the adaptive scheme is that it chooses the most suitable number of samples from the emission-point station so as to minimize the total cost of the random walk. Further, a related importance sampling method is introduced. Its main principle is to define an importance function based on the particle state and to make the number of samples of the emitted particles proportional to this importance function. The numerical results show that the adaptive scheme with the emission point as a station can overcome, to some degree, the difficulty of underestimating the result, and the adaptive importance sampling method gives satisfactory results as well. (authors)

  1. On the Adaptation of an Agile Information Systems Development Method

    NARCIS (Netherlands)

    Aydin, M.N.; Harmsen, F.; van Slooten, C.; Stegwee, R.A.

    2005-01-01

    Little specific research has been conducted to date on the adaptation of agile information systems development (ISD) methods. This article presents the work practice in dealing with the adaptation of such a method in the ISD department of one of the leading financial institutes in Europe. Two forms

  2. The adaptive collision source method for discrete ordinates radiation transport

    International Nuclear Information System (INIS)

    Walters, William J.; Haghighat, Alireza

    2017-01-01

Highlights: • A new adaptive quadrature method to solve the discrete ordinates transport equation. • The adaptive collision source (ACS) method splits the flux into n’th-collided components. • Uncollided flux requires a high quadrature order; this is lowered with the number of collisions. • ACS automatically applies the appropriate quadrature order to each collided component. • The adaptive quadrature is 1.5–4 times more efficient than uniform quadrature. - Abstract: A novel collision source method has been developed to solve the Linear Boltzmann Equation (LBE) more efficiently by adaptation of the angular quadrature order. The angular adaptation method is unique in that the flux from each scattering source iteration is obtained separately, with potentially a different quadrature order used for each. Traditionally, the flux from every iteration is combined, with the same quadrature applied to the combined flux. Since the scattering process tends to distribute the radiation more evenly over angles (i.e., make it more isotropic), the quadrature requirements generally decrease with each iteration. This method allows for an optimal use of processing power, by using a high-order quadrature for the first iterations that need it, before shifting to lower-order quadratures for the remaining iterations. This is essentially an extension of the first collision source method, and is referred to as the adaptive collision source (ACS) method. The ACS methodology has been implemented in the 3-D, parallel, multigroup discrete ordinates code TITAN. The code was tested on several simple and complex fixed-source problems. The ACS implementation in TITAN has shown a reduction in computation time by a factor of 1.5–4 on the fixed-source test problems, for the same desired level of accuracy, as compared to the standard TITAN code.

  3. Relationships Between Vestibular Measures as Potential Predictors for Spaceflight Sensorimotor Adaptation

    Science.gov (United States)

    Clark, T. K.; Peters, B.; Gadd, N. E.; De Dios, Y. E.; Wood, S.; Bloomberg, J. J.; Mulavara, A. P.

    2016-01-01

Introduction: During space exploration missions astronauts are exposed to a series of novel sensorimotor environments, requiring sensorimotor adaptation. Until adaptation is complete, sensorimotor decrements occur, affecting critical tasks such as piloted landing or docking. Of particular interest are locomotion tasks such as emergency vehicle egress or extra-vehicular activity. While nearly all astronauts eventually adapt sufficiently, it appears there are substantial individual differences in how quickly and effectively this adaptation occurs. These individual differences in capacity for sensorimotor adaptation are poorly understood. Broadly, we aim to identify measures that may serve as pre-flight predictors of an individual's adaptation capacity to spaceflight-induced sensorimotor changes. As a first step, since spaceflight is thought to involve a reinterpretation of graviceptor cues (e.g. otolith cues from the vestibular system) we investigate the relationships between various measures of vestibular function in humans. Methods: In a set of 15 ground-based control subjects, we quantified individual differences in vestibular function using three measures: 1) ocular vestibular evoked myogenic potential (oVEMP), 2) computerized dynamic posturography and 3) vestibular perceptual thresholds. oVEMP responses are elicited using a mechanical stimulus approach. Computerized dynamic posturography was used to quantify Sensory Organization Tests (SOTs), including SOT5M which involved performing pitching head movements while balancing on a sway-reference support surface with eyes closed. We implemented a vestibular perceptual threshold task using the tilt capabilities of the Tilt-Translation Sled (TTS) at JSC. On each trial, the subject was passively roll-tilted left ear down or right ear down in the dark and verbally provided a forced-choice response regarding which direction they felt tilted. The motion profile was a single-cycle sinusoid of angular acceleration with a

  4. Epidemic spreading on preferred degree adaptive networks.

    Science.gov (United States)

    Jolad, Shivakumar; Liu, Wenjia; Schmittmann, B; Zia, R K P

    2012-01-01

    We study the standard SIS model of epidemic spreading on networks where individuals have a fluctuating number of connections around a preferred degree κ. Using very simple rules for forming such preferred degree networks, we find some unusual statistical properties not found in familiar Erdös-Rényi or scale free networks. By letting κ depend on the fraction of infected individuals, we model the behavioral changes in response to how the extent of the epidemic is perceived. In our models, the behavioral adaptations can be either 'blind' or 'selective', depending on whether a node adapts by cutting or adding links to randomly chosen partners, or selectively, based on the state of the partner. For a frozen preferred network, we find that the infection threshold follows the heterogeneous mean field result λ_c/μ = ⟨k⟩/⟨k²⟩ and the phase diagram matches the predictions of the annealed adjacency matrix (AAM) approach. With 'blind' adaptations, although the epidemic threshold remains unchanged, the infection level is substantially affected, depending on the details of the adaptation. The 'selective' adaptive SIS models are most interesting. Both the threshold and the level of infection change, controlled not only by how the adaptations are implemented but also by how often the nodes cut/add links (compared to the time scales of the epidemic spreading). A simple mean field theory is presented for the selective adaptations which captures the qualitative and some of the quantitative features of the infection phase diagram.
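
    A minimal simulation sketch in the spirit of the 'blind' adaptation variant (all parameters and update rules are my own simplifications, not the authors'): nodes add or cut random links so their degree tracks a preferred degree κ that shrinks with the infected fraction, while a standard SIS update runs on top.

      import random

      N, T = 200, 2000
      lam, mu = 0.08, 0.05             # infection / recovery rates per step
      kappa0 = 10                      # preferred degree when no infection
      adj = {i: set() for i in range(N)}
      infected = set(random.sample(range(N), 10))

      for _ in range(T):
          f = len(infected) / N
          kappa = kappa0 * (1 - f)     # blind adaptation: fewer contacts as epidemic grows
          i = random.randrange(N)      # network update: one node adds or cuts a link
          if len(adj[i]) < kappa:
              j = random.randrange(N)
              if j != i:
                  adj[i].add(j); adj[j].add(i)
          elif adj[i]:
              j = random.choice(tuple(adj[i]))
              adj[i].discard(j); adj[j].discard(i)
          # epidemic update: one node recovers or is infected by a neighbor
          i = random.randrange(N)
          if i in infected:
              if random.random() < mu:
                  infected.discard(i)
          elif any(n in infected for n in adj[i]) and random.random() < lam:
              infected.add(i)

      print(f"stationary infected fraction ~ {len(infected)/N:.2f}")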

  5. Speech perception at positive signal-to-noise ratios using adaptive adjustment of time compression.

    Science.gov (United States)

    Schlueter, Anne; Brand, Thomas; Lemke, Ulrike; Nitzschner, Stefan; Kollmeier, Birger; Holube, Inga

    2015-11-01

    Positive signal-to-noise ratios (SNRs) characterize listening situations most relevant for hearing-impaired listeners in daily life and should therefore be considered when evaluating hearing aid algorithms. For this, a speech-in-noise test was developed and evaluated in which the background noise is presented at fixed positive SNRs and the speech rate (i.e., the time compression of the speech material) is adaptively adjusted. In total, 29 younger and 12 older normal-hearing listeners, as well as 24 older hearing-impaired listeners, took part in repeated measurements. Younger normal-hearing and older hearing-impaired listeners completed one of two adaptive methods, which differed in adaptive procedure and step size. Analysis of the measurements with regard to list length and threshold estimation strategy resulted in a practical method measuring the time compression yielding 50% recognition. This method uses time-compression adjustment and step sizes according to Versfeld and Dreschler [(2002). J. Acoust. Soc. Am. 111, 401-408], with sentence scoring, lists of 30 sentences, and a maximum likelihood method for threshold estimation. Evaluation of the procedure showed that older participants obtained higher test-retest reliability than younger participants. Depending on the group of listeners, one or two lists are required for training prior to data collection.

  6. Data-Driven Jump Detection Thresholds for Application in Jump Regressions

    Directory of Open Access Journals (Sweden)

    Robert Davies

    2018-03-01

    Full Text Available This paper develops a method to select the threshold in threshold-based jump detection methods. The method is motivated by an analysis of threshold-based jump detection methods in the context of jump-diffusion models. We show that, over the range of sampling frequencies a researcher is most likely to encounter, the usual in-fill asymptotics provide a poor guide for selecting the jump threshold. Because of this we develop a sample-based method. Our method estimates the number of jumps over a grid of thresholds and selects the optimal threshold at what we term the ‘take-off’ point in the estimated number of jumps. We show that this method consistently estimates the jumps and their indices as the sampling interval goes to zero. In several Monte Carlo studies we evaluate the performance of our method based on its ability to accurately locate jumps and its ability to distinguish between true jumps and large diffusive moves. In one of these Monte Carlo studies we evaluate the performance of our method in a jump regression context. Finally, we apply our method in two empirical studies. In one we estimate the number of jumps and report the jump threshold our method selects for three commonly used market indices. In the other empirical application we perform a series of jump regressions using our method to select the jump threshold.
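
    The sketch below mimics the workflow on simulated jump-diffusion increments: count threshold exceedances over a grid of thresholds, then pick a 'take-off' point. The elbow rule used here (the first plateau in the count curve) is a crude stand-in for the authors' estimator.

      import numpy as np

      rng = np.random.default_rng(0)
      n, dt = 5000, 1 / 5000
      sigma = 0.2
      dx = sigma * np.sqrt(dt) * rng.standard_normal(n)   # diffusive moves
      jump_idx = rng.choice(n, 8, replace=False)
      dx[jump_idx] += rng.choice([-1, 1], 8) * 0.05        # true jumps

      grid = np.linspace(0.5, 8.0, 60) * sigma * np.sqrt(dt)
      counts = np.array([(np.abs(dx) > u).sum() for u in grid])

      # crude 'take-off' point: first threshold at which the count stops falling
      flat = np.nonzero(np.diff(counts) == 0)[0]
      u_star = grid[flat[0]] if flat.size else grid[-1]
      detected = np.nonzero(np.abs(dx) > u_star)[0]
      print(f"threshold={u_star:.4f}, detected {detected.size} jumps")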

  7. Resonances, cusp effects and a virtual state in e/sup -/-He scattering near the n = 3 thresholds. [Variational methods, resonance, threshold structures

    Energy Technology Data Exchange (ETDEWEB)

    Nesbet, R K [International Business Machines Corp., San Jose, Calif. (USA). Research Lab.]

    1978-01-14

    Variational calculations locate and identify resonances and new threshold structures in electron impact excitation of He metastable states, in the region of the 3/sup 3/S and 3/sup 1/S excitation thresholds. A virtual state is found at the 3/sup 3/S threshold.

  8. Identification of chaotic memristor systems based on piecewise adaptive Legendre filters

    International Nuclear Information System (INIS)

    Zhao, Yibo; Zhang, Xiuzai; Xu, Jin; Guo, Yecai

    2015-01-01

    Memristor is a nonlinear device, which plays an important role in the design and implementation of chaotic systems. In order to understand in depth the complex nonlinear dynamic behaviors of chaotic memristor systems, modeling or identification of the nonlinear model is a very important prerequisite. This paper presents a chaotic memristor system identification method based on piecewise adaptive Legendre filters. Threshold decomposition is carried out on the input vector, so that the resulting input signal subintervals satisfy the convergence condition of the adaptive Legendre filters. Then the adaptive Legendre filter structure and adaptive weight update algorithm are derived. Finally, computer simulation results show the effectiveness as well as the fast convergence characteristics of the method.
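
    As a hedged sketch of the building block involved (omitting the threshold decomposition, and with an invented target nonlinearity), the following fits a memoryless system with an NLMS update on a Legendre polynomial basis:

      import numpy as np
      from numpy.polynomial import legendre

      rng = np.random.default_rng(1)
      order = 5
      x = rng.uniform(-1, 1, 4000)                 # input confined to [-1, 1]
      d = 0.7 * x - 0.4 * x**3 + 0.1 * x**5        # unknown memoryless nonlinearity

      w = np.zeros(order + 1)
      mu, eps = 0.5, 1e-6
      for n in range(x.size):
          phi = legendre.legvander(x[n:n+1], order)[0]   # Legendre basis vector
          e = d[n] - w @ phi
          w += mu * e * phi / (phi @ phi + eps)          # NLMS weight update
      print("learned Legendre coefficients:", np.round(w, 3))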

  9. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    Science.gov (United States)

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-06

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
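
    A much-simplified sketch of the segment-statistics idea (not the published SFT algorithm): tile the image, treat the flattest tiles as background, and derive the threshold from the background statistics. The tile size, background rule, and factor k are assumptions.

      import numpy as np

      def sft_like_threshold(img, tile=16, k=4.0):
          """Crude segment-and-fit style threshold: estimate background from
          low-variance tiles, then flag pixels far above that background."""
          h, w = (s - s % tile for s in img.shape)
          tiles = img[:h, :w].reshape(h // tile, tile, w // tile, tile)
          means = tiles.mean(axis=(1, 3)).ravel()
          stds = tiles.std(axis=(1, 3)).ravel()
          bg = stds <= np.median(stds)          # flattest tiles ~ background
          thr = means[bg].mean() + k * stds[bg].mean()
          return img > thr

      rng = np.random.default_rng(2)
      img = rng.normal(100, 5, (128, 128))
      img[40:60, 40:60] += 40                    # a bright "spot" signal
      mask = sft_like_threshold(img)
      print("signal pixels found:", int(mask.sum()))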

  10. An h-adaptive finite element method for turbulent heat transfer

    Energy Technology Data Exchange (ETDEWEB)

    Carrington, David B [Los Alamos National Laboratory

    2009-01-01

    A two-equation turbulence closure model (k-{omega}) using an h-adaptive grid technique and finite element method (FEM) has been developed to simulate low Mach number flow and heat transfer. These flows are applicable to many flows in engineering and environmental sciences. Of particular interest in the engineering modeling areas are: combustion, solidification, and heat exchanger design. Flows for indoor air quality modeling and atmospheric pollution transport are typical types of environmental flows modeled with this method. The numerical method is based on a hybrid finite element model using an equal-order projection process. The model includes thermal and species transport, localized mesh refinement (h-adaptive) and Petrov-Galerkin weighting for stabilizing the advection. This work develops the continuum model of a two-equation turbulence closure method. The fractional step solution method is stated along with the h-adaptive grid method (Carrington and Pepper, 2002). Solutions are presented for 2-D flow over a backward-facing step.

  11. The Translation and Adaptation of Agile Methods

    DEFF Research Database (Denmark)

    Pries-Heje, Jan; Baskerville, Richard

    2017-01-01

    Purpose The purpose of this paper is to use translation theory to develop a framework (called FTRA) that explains how companies adopt agile methods in a discourse of fragmentation and articulation. Design/methodology/approach A qualitative multiple case study of six firms using the Scrum agile...... (Scrum). This limits the confidence that the framework is suitable for other kinds of methodologies. Practical implications The FTRA framework and the technological rules are promising for use in practice as a prescriptive or even normative frame for governing methodology adaptation. Social implications....../value The use of translation theory and the FTRA framework to explain how agile adaptation (in particular Scrum) emerges continuously in a process where method fragments are articulated and re-articulated to momentarily suit the local setting. Complete agility that rapidly and elegantly changes its own...

  12. Adaptive BDDC Deluxe Methods for H(curl)

    KAUST Repository

    Zampini, Stefano

    2017-01-01

    The work presents numerical results using adaptive BDDC deluxe methods for preconditioning the linear systems arising from finite element discretizations of the time-domain, quasi-static approximation of the Maxwell’s equations. The provided results

  13. Resilience thinking: integrating resilience, adaptability and transformability

    Science.gov (United States)

    Carl Folke; Stephen R. Carpenter; Brian Walker; Marten Scheffer; Terry Chapin; Johan. Rockstrom

    2010-01-01

    Resilience thinking addresses the dynamics and development of complex social-ecological systems (SES). Three aspects are central: resilience, adaptability and transformability. These aspects interrelate across multiple scales. Resilience in this context is the capacity of a SES to continually change and adapt yet remain within critical thresholds. Adaptability is part...

  14. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorting the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. Contactless conductivity detection was used as a model for the development of the signal processing method and the demonstration of its impact on sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified: higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on the migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for a sampling frequency of 4.6 Hz and up to 22 times for a sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
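
    A hedged sketch of the core idea, with an assumed linear law for how the averaging window grows with migration time (the paper derives the window from the measured migration velocity):

      import numpy as np

      def velocity_adaptive_ma(signal, fs, w0=0.5, slope=0.05):
          """Moving average whose window (in seconds) grows with migration time:
          late, slow analytes give broader peaks and tolerate wider smoothing."""
          out = np.empty_like(signal, dtype=float)
          for i, _ in enumerate(signal):
              t = i / fs
              half = max(1, int(0.5 * (w0 + slope * t) * fs))
              lo, hi = max(0, i - half), min(signal.size, i + half + 1)
              out[i] = signal[lo:hi].mean()
          return out

      fs = 25.0                                   # Hz, as in the paper
      t = np.arange(0, 600, 1 / fs)
      sig = np.exp(-((t - 120) / 3) ** 2) + np.exp(-((t - 480) / 12) ** 2)
      noisy = sig + 0.2 * np.random.default_rng(3).standard_normal(t.size)
      smooth = velocity_adaptive_ma(noisy, fs)
      print("signal std before/after smoothing:", noisy.std(), smooth.std())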

  15. Effects of tubing length and coupling method on hearing threshold and real-ear to coupler difference measures.

    Science.gov (United States)

    Gustafson, Samantha; Pittman, Andrea; Fanning, Robert

    2013-06-01

    This tutorial demonstrates the effects of tubing length and coupling type (i.e., foam tip or personal earmold) on hearing threshold and real-ear-to-coupler difference (RECD) measures. Hearing thresholds from 0.25 kHz through 8 kHz are reported at various tubing lengths for 28 normal-hearing adults between the ages of 22 and 31 years. RECD values are reported for 14 of the adults. All measures were made with an insert earphone coupled to a standard foam tip and with an insert earphone coupled to each participant's personal earmold. On repeated-measures analyses of variance, threshold and RECD measures obtained with a personal earmold were significantly different from those obtained with a foam tip. One-sample t tests showed these differences to vary systematically with increasing tubing length, with the largest average differences (7-8 dB) occurring at 4 kHz. This systematic examination demonstrates the equal and opposite effects of tubing length on threshold and acoustic measures. Specifically, as tubing length increased, sound pressure level in the ear canal decreased, affecting both hearing thresholds and the real-ear portion of the RECDs. This demonstration shows that when the same coupling method is used to obtain the hearing thresholds and RECD, equal and accurate estimates of real-ear sound pressure level are obtained.

  16. Metastability Thresholds for Anisotropic Bootstrap Percolation in Three Dimensions

    NARCIS (Netherlands)

    Enter, Aernout C.D. van; Fey, Anne

    In this paper we analyze several anisotropic bootstrap percolation models in three dimensions. We present the order of magnitude for the metastability thresholds for a fairly general class of models. In our proofs, we use an adaptation of the technique of dimensional reduction. We find that the

  17. An Adaptively Accelerated Bayesian Deblurring Method with Entropy Prior

    Directory of Open Access Journals (Sweden)

    Yong-Hoon Kim

    2008-05-01

    Full Text Available The development of an efficient adaptively accelerated iterative deblurring algorithm based on Bayesian statistical concepts is reported. The entropy of the image is used as a “prior” distribution and, instead of the additive form used in conventional acceleration methods, an exponent form of the relaxation constant is used for acceleration. Thus the proposed method is hereafter called adaptively accelerated maximum a posteriori with entropy prior (AAMAPE). Based on empirical observations in different experiments, the exponent is computed adaptively using first-order derivatives of the deblurred image from the previous two iterations. This exponent improves the speed of the AAMAPE method in early stages and ensures stability at later stages of iteration. In the AAMAPE method, we also consider the constraints of nonnegativity and flux conservation. The paper discusses the fundamental idea of Bayesian image deblurring with the use of entropy as a prior, and gives an analytical analysis of the superresolution and noise amplification characteristics of the proposed method. The experimental results show that the proposed AAMAPE method gives lower RMSE and higher SNR in 44% fewer iterations as compared to the nonaccelerated maximum a posteriori with entropy prior (MAPE) method. Moreover, AAMAPE followed by wavelet Wiener filtering gives better results than the state-of-the-art methods.

  18. Use of dynamic grid adaption in the ASWR-method

    International Nuclear Information System (INIS)

    Graf, U.; Romstedt, P.; Werner, W.

    1985-01-01

    A dynamic grid adaption method has been developed for use with the ASWR-method. The method automatically adapts the number and position of the spatial meshpoints as the solution of hyperbolic or parabolic vector partial differential equations progresses in time. The mesh selection algorithm is based on the minimization of the L2-norm of the spatial discretization error. The method permits accurate calculation of the evolution of inhomogeneities like wave fronts, shock layers and other sharp transitions, while generally using a coarse computational grid. The number of required mesh points is significantly reduced, relative to a fixed Eulerian grid. Since the mesh selection algorithm is computationally inexpensive, a corresponding reduction of computing time results.

  19. Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons.

    Science.gov (United States)

    Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian

    2016-02-01

    The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. To this end, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter (describing somatic integration) and the spike-history filter (accounting for spike-frequency adaptation) dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights into the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations.
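
    The following toy integrate-and-fire simulation (all constants invented, and far simpler than the paper's conductance-based GIF model) illustrates the qualitative mechanism of a firing threshold that both tracks the membrane voltage and jumps after each spike:

      import numpy as np

      rng = np.random.default_rng(4)
      dt, T = 0.1, 2000.0                  # ms
      n = int(T / dt)
      tau_v, tau_th = 20.0, 50.0           # membrane / threshold time constants
      v, th = 0.0, 1.0
      spikes = []
      I = 0.06 + 0.05 * rng.standard_normal(n)   # fluctuating input current

      for k in range(n):
          v += dt * (-v / tau_v + I[k])
          # threshold relaxes to its baseline while tracking depolarization
          th += dt * (-(th - 1.0) / tau_th + 0.5 * max(v, 0) / tau_th)
          if v >= th:
              spikes.append(k * dt)
              v = 0.0
              th += 0.5                    # spike-triggered threshold jump
      print(f"{len(spikes)} spikes, mean rate {1000 * len(spikes) / T:.1f} Hz")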

  20. Final Report: Symposium on Adaptive Methods for Partial Differential Equations

    Energy Technology Data Exchange (ETDEWEB)

    Pernice, M.; Johnson, C.R.; Smith, P.J.; Fogelson, A.

    1998-12-10

    OAK-B135 Final Report: Symposium on Adaptive Methods for Partial Differential Equations. Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.

  1. Adaptive SLICE method: an enhanced method to determine nonlinear dynamic respiratory system mechanics

    International Nuclear Information System (INIS)

    Zhao, Zhanqi; Möller, Knut; Guttmann, Josef

    2012-01-01

    The objective of this paper is to introduce and evaluate the adaptive SLICE method (ASM) for continuous determination of intratidal nonlinear dynamic compliance and resistance. The tidal volume is subdivided into a series of volume intervals called slices. For each slice, one compliance and one resistance are calculated by applying a least-squares-fit method. The volume window (width) covered by each slice is determined based on the confidence interval of the parameter estimation. The method was compared to the original SLICE method and evaluated using simulation and animal data. The ASM was also challenged with separate analysis of dynamic compliance during inspiration. If the signal-to-noise ratio (SNR) in the respiratory data decreased from +∞ to 10 dB, the relative errors of compliance increased from 0.1% to 22% for the ASM and from 0.2% to 227% for the SLICE method. Fewer differences were found in resistance. When the SNR was larger than 40 dB, the ASM delivered over 40 parameter estimates (42.2 ± 1.3). When analyzing the compliance during inspiration separately, the estimates calculated with the ASM were more stable. The adaptive determination of slice bounds results in consistent and reliable parameter values. Online analysis of nonlinear respiratory mechanics will profit from such an adaptive selection of interval size. (paper)
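
    A simplified sketch of the slice-fitting step on synthetic data, using the single-compartment equation P = V/C + R·V̇ + P0; the slice bounds here are fixed-width for brevity, whereas the ASM widens each slice adaptively based on the confidence interval of the estimates:

      import numpy as np

      def fit_slice(P, V, Vdot):
          """Least-squares fit of P = (1/C)*V + R*Vdot + P0 on one volume slice."""
          A = np.column_stack([V, Vdot, np.ones_like(V)])
          coef, *_ = np.linalg.lstsq(A, P, rcond=None)
          elastance, R, P0 = coef
          return 1.0 / elastance, R

      # synthetic inspiration: volume-dependent compliance plus noise
      rng = np.random.default_rng(5)
      t = np.linspace(0, 1, 500)
      V = 0.5 * t                          # L
      Vdot = np.full_like(t, 0.5)          # L/s
      C_true = 0.05 - 0.02 * V             # L/cmH2O, stiffens with volume
      P = V / C_true + 10.0 * Vdot + 5.0 + 0.2 * rng.standard_normal(t.size)

      i, pts = 0, 100                      # fixed slice width for this sketch
      while i + pts <= t.size:
          j = i + pts
          C, R = fit_slice(P[i:j], V[i:j], Vdot[i:j])
          print(f"slice {V[i]:.2f}-{V[j-1]:.2f} L: C={C*1000:.1f} mL/cmH2O, R={R:.1f}")
          i = j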

  2. Method of Anti-Virus Protection Based on (n, t) Threshold Proxy Signature with an Arbitrator

    Directory of Open Access Journals (Sweden)

    E. A. Tolyupa

    2014-01-01

    Full Text Available The article suggests a method of anti-virus protection for mobile devices based on the usage of proxy digital signatures and an (n, t)-threshold proxy signature scheme with an arbitrator. The unique feature of the suggested method is that there is no need to install anti-virus software on the mobile device; it is enough to have software for verifying digital signatures and an Internet connection. The method builds on a public key infrastructure (PKI), thus minimizing implementation expenses.

  3. Evaluation framework based on fuzzy measured method in adaptive learning systems

    OpenAIRE

    Houda Zouari Ounaies, ,; Yassine Jamoussi; Henda Hajjami Ben Ghezala

    2008-01-01

    Currently, e-learning systems are mainly web-based applications that serve a wide range of users all over the world. Fitting learners’ needs is considered a key issue in guaranteeing the success of these systems. Much research has been done on providing adaptive systems. Nevertheless, evaluation of the adaptivity is still in an exploratory phase. Adaptation methods are a basic factor in guaranteeing effective adaptation. This issue is referred to as meta-adaptation in numerous studies. In our research...

  4. The adaptation method in the Monte Carlo simulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)

    2015-06-15

    The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which proved that the adaptive method is highly effective for simulations that require a large number of iterations. Assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.

  5. An adaptive EFG-FE coupling method for elasto-plastic contact of rough surfaces

    International Nuclear Information System (INIS)

    Liu Lan; Liu Geng; Tong Ruiting; Jin Saiying

    2010-01-01

    Differing from the Finite Element Method, the meshless method does not need any mesh information and can arrange nodes freely, which makes it perfectly suitable for adaptive analysis. In order to simulate the contact condition realistically and improve computational efficiency, an adaptive procedure for an Element-free Galerkin-Finite Element (EFG-FE) coupling contact model is established and developed to investigate the elastoplastic contact performance of engineering rough surfaces. A local adaptive refinement strategy combined with a strain-energy-gradient-based error estimation model is employed. The schemes, including principle explanation, arithmetic analysis and programming realization, are introduced and discussed. Furthermore, some related parameters of the adaptive convergence criterion are examined, including the adaptation-stop criterion and the refinement or coarsening criterion, which are guided by the relative error in total strain energy between two adjacent stages. Based on pioneering works on the EFG-FE coupling method for contact problems, an adaptive EFG-FE model for asperity contact is studied. Compared with the solutions obtained from the uniform refinement model, the adaptation results indicate that the adaptive method presented in this paper is capable of solving asperity contact problems with excellent calculation accuracy and computational efficiency.

  6. LEACH-A: An Adaptive Method for Improving LEACH Protocol

    Directory of Open Access Journals (Sweden)

    Jianli ZHAO

    2014-01-01

    Full Text Available Energy has become one of the most important constraints on wireless sensor networks. Hence, many researchers in this field focus on how to design a routing protocol to prolong the lifetime of the network. The classical hierarchical protocols such as LEACH and LEACH-C perform well in reducing energy consumption. However, a choosing strategy based only on the largest residual energy or the shortest distance will still consume more energy. In this paper an adaptive routing protocol named “LEACH-A”, which has an energy threshold E0, is proposed. If there are cluster nodes whose residual energy is greater than E0, the node with the largest residual energy is selected to communicate with the base station; when the energy of all cluster nodes is less than E0, the node nearest to the base station is selected to communicate with the base station. Simulations show that our improved protocol LEACH-A performs better than LEACH and LEACH-C.
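
    The selection rule itself is simple; here is a sketch with invented node structures and numbers, following the abstract's description:

      import random

      E0 = 0.5   # energy threshold (J), from the protocol description

      def pick_communicator(cluster_heads, base_station):
          """LEACH-A style choice: highest-energy head if any exceeds E0,
          otherwise the head closest to the base station."""
          strong = [n for n in cluster_heads if n["energy"] > E0]
          if strong:
              return max(strong, key=lambda n: n["energy"])
          bx, by = base_station
          return min(cluster_heads,
                     key=lambda n: (n["x"] - bx) ** 2 + (n["y"] - by) ** 2)

      heads = [{"id": i, "energy": random.uniform(0.1, 1.0),
                "x": random.uniform(0, 100), "y": random.uniform(0, 100)}
               for i in range(5)]
      chosen = pick_communicator(heads, base_station=(50.0, 175.0))
      print("node", chosen["id"], "talks to the base station")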

  7. Thresholds in chemical respiratory sensitisation.

    Science.gov (United States)

    Cochrane, Stella A; Arts, Josje H E; Ehnes, Colin; Hindle, Stuart; Hollnagel, Heli M; Poole, Alan; Suto, Hidenori; Kimber, Ian

    2015-07-03

    There is a continuing interest in determining whether it is possible to identify thresholds for chemical allergy. Here, allergic sensitisation of the respiratory tract by chemicals is considered in this context. This is an important occupational health problem, being associated with rhinitis and asthma, and in addition provides toxicologists and risk assessors with a number of challenges. In common with all forms of allergic disease, chemical respiratory allergy develops in two phases. In the first (induction) phase, exposure to a chemical allergen (by an appropriate route of exposure) causes immunological priming and sensitisation of the respiratory tract. The second (elicitation) phase is triggered if a sensitised subject is exposed subsequently to the same chemical allergen via inhalation. A secondary immune response will be provoked in the respiratory tract, resulting in inflammation and the signs and symptoms of a respiratory hypersensitivity reaction. In this article attention has focused on the identification of threshold values during the acquisition of sensitisation. Current mechanistic understanding of allergy is such that it can be assumed that the development of sensitisation (and also the elicitation of an allergic reaction) is a threshold phenomenon; there will be levels of exposure below which sensitisation will not be acquired. That is, all immune responses, including allergic sensitisation, have a threshold requirement for the availability of antigen/allergen, below which a response will fail to develop. The issue addressed here is whether there are methods available or clinical/epidemiological data that permit the identification of such thresholds. This document reviews briefly relevant human studies of occupational asthma, and experimental models that have been developed (or are being developed) for the identification and characterisation of chemical respiratory allergens. The main conclusion drawn is that although there is evidence that the

  8. Adaptation Method for Overall and Local Performances of Gas Turbine Engine Model

    Science.gov (United States)

    Kim, Sangjo; Kim, Kuisoon; Son, Changmin

    2018-04-01

    An adaptation method was proposed to improve the modeling accuracy of the overall and local performance of a gas turbine engine. The adaptation method was divided into two steps. First, the overall performance parameters such as engine thrust, thermal efficiency, and pressure ratio were adapted by calibrating compressor maps, and second, the local performance parameters such as the temperature at component intersections and shaft speed were adjusted by additional adaptation factors. An optimization technique was used to find the correlation equation of adaptation factors for compressor performance maps. The multi-island genetic algorithm (MIGA) was employed in the present optimization. The correlations of local adaptation factors were generated based on the difference between the first adapted engine model and the performance test data. The proposed adaptation method was applied to a low-bypass-ratio turbofan engine of 12,000 lb thrust. The gas turbine engine model was generated and validated based on the performance test data in the sea-level static condition. In flight condition at 20,000 ft and 0.9 Mach number, the adapted engine model showed improved prediction of engine thrust (an overall performance parameter), reducing the difference from 14.5 to 3.3%. Moreover, there was further improvement in the comparison of low-pressure turbine exit temperature (a local performance parameter), as the difference was reduced from 3.2 to 0.4%.

  9. Acrolein-stressed threshold adaptation alters the molecular and metabolic bases of an engineered Saccharomyces cerevisiae to improve glutathione production.

    Science.gov (United States)

    Zhou, Wenlong; Yang, Yan; Tang, Liang; Cheng, Kai; Li, Changkun; Wang, Huimin; Liu, Minzhi; Wang, Wei

    2018-03-14

    Acrolein (Acr) was used as a selection agent to improve the glutathione (GSH) overproduction of the prototrophic strain W303-1b/FGP PT . After two rounds of adaptive laboratory evolution (ALE), an unexpected result was obtained wherein identical GSH production was observed in the selected isolates. A threshold selection mechanism of the Acr-stressed adaptation was then clarified based on the formation of an Acr-GSH adduct, and a diffusion coefficient (0.36 ± 0.02 μmol·min⁻¹·OD600⁻¹) was calculated. Metabolomic analysis was carried out to reveal the molecular bases that triggered GSH overproduction. The results indicated that all three precursors (glutamic acid (Glu), glycine (Gly) and cysteine (Cys)) needed for GSH synthesis were at relatively higher concentrations in the evolved strain and that the accumulation of homocysteine (Hcy) and cystathionine might promote Cys synthesis and then improve GSH production. In addition to GSH and Cys, it was observed that other non-protein thiols and molecules related to ATP generation were at obviously different levels. To divert the accumulated thiols to GSH biosynthesis, combinatorial strategies, including deletion of cystathionine β-lyase (STR3), overexpression of cystathionine γ-lyase (CYS3) and cystathionine β-synthase (CYS4), and reduction of the unfolded protein response (UPR) through up-regulation of protein disulphide isomerase (PDI), were also investigated.

  10. Identification of Molecular Fingerprints in Human Heat Pain Thresholds by Use of an Interactive Mixture Model R Toolbox (AdaptGauss).

    Science.gov (United States)

    Ultsch, Alfred; Thrun, Michael C; Hansen-Goos, Onno; Lötsch, Jörn

    2015-10-28

    Biomedical data obtained during cell experiments, laboratory animal research, or human studies often display a complex distribution. Statistical identification of subgroups in research data poses an analytical challenge. Here we introduce an interactive R-based bioinformatics tool, called "AdaptGauss". It enables a valid identification of a biologically-meaningful multimodal structure in the data by fitting a Gaussian mixture model (GMM) to the data. The interface allows a supervised selection of the number of subgroups. This enables the expectation maximization (EM) algorithm to adapt a more complex GMM than usually obtained with a noninteractive approach. Interactively fitting a GMM to heat pain threshold data acquired from human volunteers revealed a distribution pattern with four Gaussian modes located at temperatures of 32.3, 37.2, 41.4, and 45.4 °C. Noninteractive fitting was unable to identify a meaningful data structure. The obtained results are compatible with known activity temperatures of different TRP ion channels, suggesting the mechanistic contribution of different heat sensors to the perception of thermal pain. Thus, sophisticated analysis of the modal structure of biomedical data provides a basis for the mechanistic interpretation of the observations. As it may reflect the involvement of different TRP thermosensory ion channels, the analysis provides a starting point for hypothesis-driven laboratory experiments.
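
    A noninteractive stand-in for the fit (using scikit-learn rather than the AdaptGauss R tool, with the number of modes fixed at four as the interactive analysis found, and synthetic data placed at the reported mode locations):

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(6)
      # synthetic heat pain thresholds (deg C) around the four reported modes
      modes, sds, sizes = [32.3, 37.2, 41.4, 45.4], [1.0, 1.0, 1.0, 1.0], [60, 80, 90, 70]
      data = np.concatenate([rng.normal(m, s, n) for m, s, n in zip(modes, sds, sizes)])

      gmm = GaussianMixture(n_components=4, n_init=10, random_state=0).fit(data[:, None])
      order = np.argsort(gmm.means_.ravel())
      print("fitted modes:", np.round(gmm.means_.ravel()[order], 1))
      print("weights:", np.round(gmm.weights_[order], 2))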

  11. A two-dimensional adaptive numerical grids generation method and its realization

    International Nuclear Information System (INIS)

    Xu Tao; Shui Hongshou

    1998-12-01

    A two-dimensional adaptive numerical grid generation method and a particular realization of it are discussed. The method is effective and easy to realize if the control functions are given continuously, and grids for some regions are shown for this case. In Computational Fluid Dynamics, because the control values of the adaptive grids (the numerical solution) are given in discrete form, these values must be interpolated to obtain continuous control functions. These interpolation techniques are discussed, and some efficient adaptive grids are given. A two-dimensional fluid dynamics example is also given.

  12. Detection thresholds of macaque otolith afferents.

    Science.gov (United States)

    Yu, Xiong-Jie; Dickman, J David; Angelaki, Dora E

    2012-06-13

    The vestibular system is our sixth sense and is important for spatial perception functions, yet the sensory detection and discrimination properties of vestibular neurons remain relatively unexplored. Here we have used signal detection theory to measure detection thresholds of otolith afferents using 1 Hz linear accelerations delivered along three cardinal axes. Direction detection thresholds were measured by comparing mean firing rates centered on response peak and trough (full-cycle thresholds) or by comparing peak/trough firing rates with spontaneous activity (half-cycle thresholds). Thresholds were similar for utricular and saccular afferents, as well as for lateral, fore/aft, and vertical motion directions. When computed along the preferred direction, full-cycle direction detection thresholds were 7.54 and 3.01 cm/s(2) for regular and irregular firing otolith afferents, respectively. Half-cycle thresholds were approximately double, with excitatory thresholds being half as large as inhibitory thresholds. The variability in threshold among afferents was directly related to neuronal gain and did not depend on spike count variance. The exact threshold values depended on both the time window used for spike count analysis and the filtering method used to calculate mean firing rate, although differences between regular and irregular afferent thresholds were independent of analysis parameters. The fact that minimum thresholds measured in macaque otolith afferents are of the same order of magnitude as human behavioral thresholds suggests that the vestibular periphery might determine the limit on our ability to detect or discriminate small differences in head movement, with little noise added during downstream processing.

  13. Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes

    Science.gov (United States)

    Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak

    2004-01-01

    High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction of the number of elements used and the CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel

  14. Solving delay differential equations in S-ADAPT by method of steps.

    Science.gov (United States)

    Bauer, Robert J; Mo, Gary; Krzyzanski, Wojciech

    2013-09-01

    S-ADAPT is a version of the ADAPT program that contains additional simulation and optimization abilities such as parametric population analysis. S-ADAPT utilizes LSODA to solve ordinary differential equations (ODEs), an algorithm designed for large-dimension non-stiff and stiff problems. However, S-ADAPT does not have a solver for delay differential equations (DDEs). Our objective was to implement in S-ADAPT a DDE solver using the method of steps. The method of steps allows one to solve virtually any DDE system by transforming it to an ODE system. The solver was validated for scalar linear DDEs with one delay and bolus and infusion inputs, for which explicit analytic solutions were derived. Solutions of nonlinear DDE problems coded in S-ADAPT were validated by comparing them with ones obtained by the MATLAB DDE solver dde23. The estimation of parameters was tested on MATLAB-simulated population pharmacodynamics data. The S-ADAPT solutions of the DDE problems agreed with the explicit solutions and with the MATLAB-produced solutions to at least 7 significant digits. The population parameter estimates obtained using importance sampling expectation-maximization in S-ADAPT agreed with the ones used to generate the data. Published by Elsevier Ireland Ltd.
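
    A textbook illustration of the method of steps (not the S-ADAPT implementation), solving x'(t) = -x(t-τ) with a constant history by chaining ODE solves over intervals of length τ using SciPy:

      import numpy as np
      from scipy.integrate import solve_ivp

      tau, t_end = 1.0, 6.0
      history = lambda t: 1.0                     # constant history for t <= 0

      segments = []                               # dense solutions, one per step
      def x_delayed(t):
          if t <= 0:
              return history(t)
          k = int(np.ceil(t / tau)) - 1           # which segment holds time t
          return segments[k].sol(t)[0]

      for k in range(int(t_end / tau)):
          t0, t1 = k * tau, (k + 1) * tau
          x0 = history(0.0) if k == 0 else segments[-1].sol(t0)[0]
          # on [t0, t1] the delayed value is known, so this is a plain ODE
          seg = solve_ivp(lambda t, x: [-x_delayed(t - tau)],
                          (t0, t1), [x0], dense_output=True, max_step=0.01)
          segments.append(seg)

      print("x(6) ≈", segments[-1].sol(t_end)[0])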

  15. An adaptive tensor voting algorithm combined with texture spectrum

    Science.gov (United States)

    Wang, Gang; Su, Qing-tang; Lü, Gao-huan; Zhang, Xiao-feng; Liu, Yu-huan; He, An-zhi

    2015-01-01

    An adaptive tensor voting algorithm combined with texture spectrum is proposed. The image texture spectrum is used to obtain the adaptive scale parameter of the voting field. The texture information then modifies both the attenuation coefficient and the attenuation field, so that the algorithm can create more significant and correct structures in the original image according to human visual perception. At the same time, the proposed method improves the edge extraction quality, which includes decreasing flocculent regions efficiently and making the image clearer. In the experiment on extracting pavement cracks, the original pavement image is processed by the proposed method combined with a significant-curve-feature threshold procedure, and the resulting image displays the faint crack signals submerged in the complicated background efficiently and clearly.

  16. Entropy-Based Method of Choosing the Decomposition Level in Wavelet Threshold De-noising

    Directory of Open Access Journals (Sweden)

    Yan-Fang Sang

    2010-06-01

    Full Text Available In this paper, the energy distributions of various noises following normal, log-normal and Pearson-III distributions are first described quantitatively using the wavelet energy entropy (WEE, and the results are compared and discussed. Then, on the basis of these analytic results, a method for use in choosing the decomposition level (DL in wavelet threshold de-noising (WTD is put forward. Finally, the performance of the proposed method is verified by analysis of both synthetic and observed series. Analytic results indicate that the proposed method is easy to operate and suitable for various signals. Moreover, contrary to traditional white noise testing which depends on “autocorrelations”, the proposed method uses energy distributions to distinguish real signals and noise in noisy series, therefore the chosen DL is reliable, and the WTD results of time series can be improved.
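
    A minimal sketch of computing the wavelet energy entropy across candidate decomposition levels with PyWavelets; the actual rule the paper uses for picking the DL from these WEE values is richer than simply printing them:

      import numpy as np
      import pywt

      def wavelet_energy_entropy(coeffs):
          """Shannon entropy of the relative energy in each decomposition level."""
          energies = np.array([np.sum(c ** 2) for c in coeffs])
          p = energies / energies.sum()
          return -np.sum(p * np.log2(p + 1e-12))

      rng = np.random.default_rng(7)
      t = np.linspace(0, 1, 1024)
      series = np.sin(2 * np.pi * 8 * t) + 0.5 * rng.standard_normal(t.size)

      for dl in range(1, 7):
          coeffs = pywt.wavedec(series, "db4", level=dl)
          print(f"DL={dl}: WEE={wavelet_energy_entropy(coeffs):.3f}")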

  17. Adaptive multiresolution method for MAP reconstruction in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Acar, Erman, E-mail: erman.acar@tut.fi [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland); Peltonen, Sari; Ruotsalainen, Ulla [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland)

    2016-11-15

    3D image reconstruction with electron tomography holds problems due to the severely limited range of projection angles and low signal to noise ratio of the acquired projection images. The maximum a posteriori (MAP) reconstruction methods have been successful in compensating for the missing information and suppressing noise with their intrinsic regularization techniques. There are two major problems in MAP reconstruction methods: (1) selection of the regularization parameter that controls the balance between the data fidelity and the prior information, and (2) long computation time. One aim of this study is to provide an adaptive solution to the regularization parameter selection problem without having additional knowledge about the imaging environment and the sample. The other aim is to realize the reconstruction using sequences of resolution levels to shorten the computation time. The reconstructions were analyzed in terms of accuracy and computational efficiency using a simulated biological phantom and publically available experimental datasets of electron tomography. The numerical and visual evaluations of the experiments show that the adaptive multiresolution method can provide more accurate results than the weighted back projection (WBP), simultaneous iterative reconstruction technique (SIRT), and sequential MAP expectation maximization (sMAPEM) method. The method is superior to sMAPEM also in terms of computation time and usability since it can reconstruct 3D images significantly faster without requiring any parameter to be set by the user. - Highlights: • An adaptive multiresolution reconstruction method is introduced for electron tomography. • The method provides more accurate results than the conventional reconstruction methods. • The missing wedge and noise problems can be compensated by the method efficiently.

  18. Adaptively optimizing stochastic resonance in visual system

    Science.gov (United States)

    Yang, Tao

    1998-08-01

    A recent psychophysics experiment showed that noise strength can affect perceived image quality. This work gives an adaptive process for achieving the optimal perceived image quality in a simple image perception array, which is a simple model of an image sensor. A reference image from memory is used for constructing a cost function and defining the optimal noise strength, at which the cost function attains its minimum. The reference image is a binary image, which is used to define the background and the object. Finally, an adaptive algorithm is proposed for searching for the optimal noise strength. Computer experiments show that if the reference image is a thresholded version of the sub-threshold input image, then the sensor array gives an optimal output, in which the background and the object have the largest contrast. If the reference image differs from a thresholded version of the sub-threshold input image, then the output usually gives a sub-optimal contrast between the object and the background.
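
    A hedged sketch of the adaptive search described: an array of threshold sensors receives a sub-threshold image plus noise, and the noise strength minimizing a cost against a binary reference image is selected by grid search (the cost function and all parameters are assumptions):

      import numpy as np

      rng = np.random.default_rng(8)
      theta = 1.0                               # sensor threshold
      img = 0.6 * np.ones((64, 64))             # sub-threshold background
      img[20:44, 20:44] = 0.9                   # sub-threshold object
      ref = img > 0.7                           # binary reference from memory

      def perceive(image, sigma, trials=30):
          """Average output of threshold sensors with additive noise."""
          out = np.zeros(image.shape)
          for _ in range(trials):
              out += (image + sigma * rng.standard_normal(image.shape)) > theta
          return out / trials

      best = min(np.linspace(0.05, 1.5, 30),
                 key=lambda s: np.mean((perceive(img, s) - ref) ** 2))
      print(f"optimal noise strength ~ {best:.2f}")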

  19. Comparison between intensity- duration thresholds and cumulative rainfall thresholds for the forecasting of landslide

    Science.gov (United States)

    Lagomarsino, Daniela; Rosi, Ascanio; Rossi, Guglielmo; Segoni, Samuele; Catani, Filippo

    2014-05-01

    This work makes a quantitative comparison between the results of landslide forecasting obtained using two different rainfall threshold models, one using intensity-duration thresholds and the other based on cumulative rainfall thresholds, in an area of northern Tuscany of 116 km2. The first methodology identifies rainfall intensity-duration thresholds by means of software called MaCumBA (Massive CUMulative Brisk Analyzer) that analyzes rain-gauge records, extracts the intensities (I) and durations (D) of the rainstorms associated with the initiation of landslides, plots these values on a diagram, and identifies thresholds that define the lower bounds of the I-D values. A back analysis using data from past events can be used to identify the threshold conditions associated with the fewest false alarms. The second method (SIGMA) is based on the hypothesis that anomalous or extreme values of rainfall are responsible for landslide triggering: the statistical distribution of the rainfall series is analyzed, and multiples of the standard deviation (σ) are used as thresholds to discriminate between ordinary and extraordinary rainfall events. The name of the model, SIGMA, reflects the central role of the standard deviations in the proposed methodology. The definition of intensity-duration rainfall thresholds requires the combined use of rainfall measurements and an inventory of dated landslides, whereas the SIGMA model can be implemented using only rainfall data. These two methodologies were applied in an area of 116 km2 where a database of 1200 landslides was available for the period 2000-2012. The results obtained are compared and discussed. Although several examples of visual comparisons between different intensity-duration rainfall thresholds are reported in the international literature, a quantitative comparison between thresholds obtained in the same area using different techniques and approaches is a relatively undebated research topic.
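
    The σ-multiple idea at the heart of the SIGMA model can be shown in a few lines on a synthetic rainfall series (the real model works on cumulative rainfall over many durations, not single daily values):

      import numpy as np

      rng = np.random.default_rng(9)
      # stand-in daily rainfall record (mm); a real application would use gauge data
      rain = rng.gamma(shape=0.4, scale=12.0, size=365 * 13)

      mu, sigma = rain.mean(), rain.std()
      for k in (1, 2, 3):                       # multiples of the standard deviation
          thr = mu + k * sigma
          exceed = (rain > thr).mean() * 100
          print(f"sigma x {k}: threshold {thr:.1f} mm, exceeded on {exceed:.1f}% of days")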

  20. Dynamic multiple thresholding breast boundary detection algorithm for mammograms

    International Nuclear Information System (INIS)

    Wu, Yi-Ta; Zhou Chuan; Chan, Heang-Ping; Paramagul, Chintana; Hadjiiski, Lubomir M.; Daly, Caroline Plowden; Douglas, Julie A.; Zhang Yiheng; Sahiner, Berkman; Shi Jiazheng; Wei Jun

    2010-01-01

    Purpose: Automated detection of breast boundary is one of the fundamental steps for computer-aided analysis of mammograms. In this study, the authors developed a new dynamic multiple thresholding based breast boundary (MTBB) detection method for digitized mammograms. Methods: A large data set of 716 screen-film mammograms (442 CC view and 274 MLO view) obtained from consecutive cases of an Institutional Review Board approved project were used. An experienced breast radiologist manually traced the breast boundary on each digitized image using a graphical interface to provide a reference standard. The initial breast boundary (MTBB-Initial) was obtained by dynamically adapting the threshold to the gray level range in local regions of the breast periphery. The initial breast boundary was then refined by using gradient information from horizontal and vertical Sobel filtering to obtain the final breast boundary (MTBB-Final). The accuracy of the breast boundary detection algorithm was evaluated by comparison with the reference standard using three performance metrics: The Hausdorff distance (HDist), the average minimum Euclidean distance (AMinDist), and the area overlap measure (AOM). Results: In comparison with the authors' previously developed gradient-based breast boundary (GBB) algorithm, it was found that 68%, 85%, and 94% of images had HDist errors less than 6 pixels (4.8 mm) for GBB, MTBB-Initial, and MTBB-Final, respectively. 89%, 90%, and 96% of images had AMinDist errors less than 1.5 pixels (1.2 mm) for GBB, MTBB-Initial, and MTBB-Final, respectively. 96%, 98%, and 99% of images had AOM values larger than 0.9 for GBB, MTBB-Initial, and MTBB-Final, respectively. The improvement by the MTBB-Final method was statistically significant for all the evaluation measures by the Wilcoxon signed rank test (p<0.0001). Conclusions: The MTBB approach that combined dynamic multiple thresholding and gradient information provided better performance than the breast boundary

  1. Automatic Multi-Level Thresholding Segmentation Based on Multi-Objective Optimization

    Directory of Open Access Journals (Sweden)

    L. DJEROU,

    2012-01-01

    Full Text Available In this paper, we present a new multi-level image thresholding technique, called Automatic Threshold based on Multi-objective Optimization (ATMO), that combines the flexibility of multi-objective fitness functions with the power of a Binary Particle Swarm Optimization (BPSO) algorithm for searching for the "optimum" number of thresholds and, simultaneously, the optimal thresholds under three criteria: the between-class variance criterion, the minimum error criterion and the entropy criterion. Some examples of test images are presented to compare our segmentation method, based on the multi-objective optimization approach, with Otsu’s, Kapur’s and Kittler’s methods. Our experimental results show that the thresholding method based on multi-objective optimization is more efficient than the classical Otsu’s, Kapur’s and Kittler’s methods.
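
    For reference, one of ATMO's three criteria, the between-class variance, can be maximized by brute force for two thresholds; the sketch below does exactly that on a synthetic trimodal histogram, in place of the BPSO search:

      import numpy as np

      def between_class_variance(hist, thresholds):
          """Otsu's criterion generalized to several thresholds."""
          bins = np.arange(hist.size)
          total = hist.sum()
          mu_total = (bins * hist).sum() / total
          edges = [0, *thresholds, hist.size]
          var = 0.0
          for lo, hi in zip(edges[:-1], edges[1:]):
              w = hist[lo:hi].sum() / total
              if w > 0:
                  mu = (bins[lo:hi] * hist[lo:hi]).sum() / hist[lo:hi].sum()
                  var += w * (mu - mu_total) ** 2
          return var

      rng = np.random.default_rng(10)
      img = np.concatenate([rng.normal(60, 10, 3000), rng.normal(120, 10, 3000),
                            rng.normal(200, 10, 3000)]).clip(0, 255).astype(int)
      hist = np.bincount(img, minlength=256)

      best = max(((t1, t2) for t1 in range(10, 250, 5) for t2 in range(t1 + 5, 255, 5)),
                 key=lambda th: between_class_variance(hist, th))
      print("optimal thresholds:", best)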

  2. A novel fusion method of improved adaptive LTP and two-directional two-dimensional PCA for face feature extraction

    Science.gov (United States)

    Luo, Yuan; Wang, Bo-yu; Zhang, Yi; Zhao, Li-ming

    2018-03-01

    In this paper, considering that under different illuminations and random noise the local texture features of a face image cannot be completely described because the threshold of the local ternary pattern (LTP) cannot be calculated adaptively, a local three-value model called the improved adaptive local ternary pattern (IALTP) is proposed. Firstly, the difference function between the center pixel and the neighborhood pixel weight is established to obtain the statistical characteristics of the central pixel and the neighborhood pixels. Secondly, an adaptive gradient descent iterative function is established to calculate the difference coefficient, which is defined as the threshold of the IALTP operator. Finally, the mean and standard deviation of the pixel weights of the local region are used as the coding mode of IALTP. In order to reflect the overall properties of the face and reduce the dimension of the features, two-directional two-dimensional PCA ((2D)2PCA) is adopted. The IALTP is used to extract local texture features of the eye and mouth areas. After combining the global features and local features, the fusion features (IALTP+) are obtained. The experimental results on the Extended Yale B and AR standard face databases indicate that under different illuminations and random noise, the algorithm proposed in this paper is more robust than others, and the feature dimension is smaller. The shortest running time reaches 0.3296 s, and the highest recognition rate reaches 97.39%.
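
    A basic LTP coder with a per-pixel adaptive threshold is sketched below; the local standard deviation rule stands in for the paper's gradient-descent threshold estimation, so treat it as illustrative only:

      import numpy as np

      def ltp_codes(img):
          """Local ternary pattern with an adaptive per-pixel threshold:
          each neighbor is coded +1/0/-1, split into upper/lower binary codes."""
          h, w = img.shape
          upper = np.zeros((h - 2, w - 2), dtype=np.uint8)
          lower = np.zeros((h - 2, w - 2), dtype=np.uint8)
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                     (1, 1), (1, 0), (1, -1), (0, -1)]
          for i in range(1, h - 1):
              for j in range(1, w - 1):
                  patch = img[i - 1:i + 2, j - 1:j + 2].astype(float)
                  t = patch.std()              # adaptive threshold (assumed rule)
                  c = patch[1, 1]
                  for bit, (di, dj) in enumerate(offsets):
                      d = img[i + di, j + dj] - c
                      if d > t:
                          upper[i - 1, j - 1] |= 1 << bit
                      elif d < -t:
                          lower[i - 1, j - 1] |= 1 << bit
          return upper, lower

      img = np.random.default_rng(11).integers(0, 256, (32, 32)).astype(float)
      u, l = ltp_codes(img)
      print("distinct upper/lower codes:", np.unique(u).size, np.unique(l).size)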

  3. Adaptive Subband Filtering Method for MEMS Accelerometer Noise Reduction

    Directory of Open Access Journals (Sweden)

    Piotr PIETRZAK

    2008-12-01

    Full Text Available Silicon microaccelerometers can be considered as an alternative to high-priced piezoelectric sensors. Unfortunately, the relatively high noise floor of commercially available MEMS (Micro-Electro-Mechanical Systems) sensors limits the possibility of their usage in condition monitoring systems for rotating machines. A solution to this problem is the signal filtering method described in this paper. It is based on adaptive subband filtering employing an Adaptive Line Enhancer. For the adaptation of the filter weights, two novel algorithms based on the NLMS algorithm have been developed. Both of them significantly simplify its software and hardware implementation and accelerate the adaptation process. The paper also presents the software (Matlab) and hardware (FPGA) implementations of the proposed noise filter. In addition, the results of the performed tests are reported. They confirm the high efficiency of the solution.
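
    The standard adaptive line enhancer with an NLMS update, on which the paper's two modified algorithms build, can be sketched as follows (tap count, delay, and step size are invented):

      import numpy as np

      rng = np.random.default_rng(12)
      fs, n = 1000, 5000
      t = np.arange(n) / fs
      x = np.sin(2 * np.pi * 50 * t) + 0.8 * rng.standard_normal(n)  # tone + noise

      delay, taps, mu, eps = 5, 32, 0.1, 1e-8
      w = np.zeros(taps)
      y = np.zeros(n)
      for k in range(delay + taps, n):
          u = x[k - delay - taps + 1:k - delay + 1][::-1]  # delayed reference
          y[k] = w @ u                                     # predicted (periodic) part
          e = x[k] - y[k]
          w += mu * e * u / (u @ u + eps)                  # NLMS update

      ratio = np.var(y[n // 2:]) / np.var((x - y)[n // 2:])
      print("periodic vs residual power:", round(10 * np.log10(ratio), 1), "dB")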

  4. A Robust Threshold for Iterative Channel Estimation in OFDM Systems

    Directory of Open Access Journals (Sweden)

    A. Kalaycioglu

    2010-04-01

    Full Text Available A novel threshold computation method for pilot-symbol-assisted iterative channel estimation in OFDM systems is considered. As the bits are transmitted in packets, the proposed technique calculates a particular threshold for each data packet in order to select the reliable decoder output symbols and improve the channel estimation performance. Iteratively, additional pilot symbols are established according to the threshold, and the channel is re-estimated with the new pilots added to the known channel estimation pilot set. In simulations of poor HF channels, the proposed threshold calculation method for selecting additional pilots performs better than non-iterative channel estimation and than no-threshold and fixed-threshold techniques.

  5. A Regular k-Shrinkage Thresholding Operator for the Removal of Mixed Gaussian-Impulse Noise

    Directory of Open Access Journals (Sweden)

    Han Pan

    2017-01-01

    Full Text Available The removal of mixed Gaussian-impulse noise plays an important role in many areas, such as remote sensing. However, traditional methods may fail to promote the degree of sparsity adaptively after decomposing the data into a low-rank component and a sparse component. In this paper, a new problem formulation with a regular spectral k-support norm and a regular k-support l1 norm is proposed. A unified framework is developed to capture the intrinsic sparsity structure of both components. To address the resulting problem, an efficient minimization scheme within the framework of accelerated proximal gradient is proposed. This scheme is achieved by alternating the regular k-shrinkage thresholding operator. Experimental comparison with other state-of-the-art methods demonstrates the efficacy of the proposed method.

  6. Effects of temperature on heat pain adaptation and habituation in men and women.

    Science.gov (United States)

    Hashmi, Javeria A; Davis, Karen D

    2010-12-01

    We recently reported that women report greater pain adaptation and habituation to moderately painful heat stimuli than men (Hashmi and Davis [16]), although slightly lower temperatures were needed to evoke moderate pain in the women. Hardy et al. (1962) and LaMotte (1979) suggested that pain adaptation is most prominent at modest noxious heat temperatures and may occur at temperatures close to pain thresholds. Thus, as a follow-up to our previous study, we examined the role of absolute temperature in pain adaptation and habituation in men and women and assessed whether pain threshold impacts these findings. We hypothesized that pain adaptation and habituation would be more prominent at low and moderate temperatures, and that higher temperatures would induce pain adaptation and habituation in women but not in men. We further hypothesized that pain adaptation would not be correlated with pain thresholds. To test this, we obtained continuous ratings of pain evoked by 44.5-47.5°C stimuli applied to the dorsal foot of men and women. Each run consisted of three 30-s stimuli at the same temperature with a 60-s inter-stimulus interval. Women showed within-stimulus adaptation of total pain at all temperatures, whereas men showed significant adaptation only at temperatures below 47°C. There were no sex differences in inter-stimulus habituation, and both men and women reported habituation at temperatures below 46°C. Pain thresholds did not correlate with pain adaptation. These data highlight the temperature sensitivity and sex differences of pain adaptation and habituation. Copyright © 2010 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  7. Extended generalized Lagrangian multipliers for magnetohydrodynamics using adaptive multiresolution methods

    Directory of Open Access Journals (Sweden)

    Domingues M. O.

    2013-12-01

    Full Text Available We present a new adaptive multiresolution method for the numerical simulation of ideal magnetohydrodynamics. The governing equations, i.e., the compressible Euler equations coupled with the Maxwell equations, are discretized using a finite volume scheme on a two-dimensional Cartesian mesh. Adaptivity in space is obtained via Harten's cell-average multiresolution analysis, which allows the reliable introduction of a locally refined mesh while controlling the error. The explicit time discretization uses a compact Runge-Kutta method for local time stepping and an embedded Runge-Kutta scheme for automatic time step control. An extended generalized Lagrangian multiplier approach with a mixed hyperbolic-parabolic correction is used to control the divergence-free constraint on the magnetic field. Applications to a two-dimensional problem illustrate the properties of the method. Memory savings and the numerical divergence of the magnetic field are reported, and the accuracy of the adaptive computations is assessed by comparison with the available exact solution.

  8. Comparative Study of Retinal Vessel Segmentation Based on Global Thresholding Techniques

    Directory of Open Access Journals (Sweden)

    Temitope Mapayi

    2015-01-01

    Full Text Available Due to noise from uneven contrast and illumination during the acquisition process of retinal fundus images, the use of efficient preprocessing techniques is highly desirable to produce good retinal vessel segmentation results. This paper develops and compares the performance of different vessel segmentation techniques based on global thresholding, using phase congruency and contrast limited adaptive histogram equalization (CLAHE) for the preprocessing of the retinal images. The results obtained show that the combination of preprocessing technique, global thresholding, and postprocessing techniques must be carefully chosen to achieve a good segmentation performance.
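
    A minimal sketch of one such pipeline, CLAHE preprocessing followed by Otsu global thresholding via scikit-image, is given below. The phase congruency option and the postprocessing stages from the record are omitted, and the clip limit is an assumed value.

```python
# Sketch: one of the pipelines compared in the record -- CLAHE preprocessing
# followed by a global threshold -- using scikit-image. Vessels appear
# darkest in the green channel, hence the final "darker than threshold" mask.
import numpy as np
from skimage import exposure, filters

def segment_vessels(green_channel):
    """green_channel: 2-D float array in [0, 1]."""
    enhanced = exposure.equalize_adapthist(green_channel, clip_limit=0.03)
    t = filters.threshold_otsu(enhanced)        # one possible global threshold
    return enhanced < t                         # vessel mask
```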

  9. An adaptive sampling and windowing interrogation method in PIV

    Science.gov (United States)

    Theunissen, R.; Scarano, F.; Riethmuller, M. L.

    2007-01-01

    This study proposes a cross-correlation based PIV image interrogation algorithm that adapts the number of interrogation windows and their size to the image properties and to the flow conditions. The proposed methodology releases the constraints of a uniform sampling rate (Cartesian mesh) and uniform spatial resolution (uniform window size) commonly adopted in PIV interrogation. Especially in non-optimal experimental conditions where the flow seeding is inhomogeneous, these constraints lead to a loss of either robustness (too few particles per window) or measurement precision (too large or too coarsely spaced interrogation windows). Two criteria are investigated, namely adaptation to the local signal content in the image and adaptation to the local flow conditions. The implementation of the adaptive criteria within a recursive interrogation method is described. The location and size of the interrogation windows are locally adapted to the image signal (i.e., seeding density). The local window spacing (commonly set by the overlap factor) is also related to the spatial variation of the velocity field. The viability of the method is illustrated in two experimental cases where the limitations of a uniform interrogation approach appear clearly: a shock-wave/boundary-layer interaction and an aircraft vortex wake. The examples show that the spatial sampling rate can be adapted to the actual flow features and that the interrogation window size can be arranged so as to follow the spatial distribution of seeding particle images and flow velocity fluctuations. In comparison with the uniform interrogation technique, the spatial resolution is locally enhanced, while in poorly seeded regions the robustness of the analysis (signal-to-noise ratio) is kept almost constant.

  10. A new iterative triclass thresholding technique in image segmentation.

    Science.gov (United States)

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new method in image segmentation that is based on Otsu's method but iteratively searches subregions of the image for segmentation, instead of treating the full image as a single region. The iterative method starts with Otsu's threshold and computes the mean values of the two classes separated by that threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of the two produced by the standard Otsu's method. The first two classes are determined as foreground and background, and they are not processed further. The third class is denoted a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied to the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely foreground, background, and a new TBD region, which by definition is smaller than the previous TBD region. Then, the new TBD region is processed in a similar manner. The process stops when the difference between Otsu's thresholds at two successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
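
    A compact sketch of this triclass iteration, written against scikit-image's Otsu implementation, might look as follows; the stopping tolerance and the handling of the final TBD band are assumptions.

```python
# Sketch of the iterative triclass idea described above. Pixels above the
# upper class mean are committed to foreground, pixels below the lower class
# mean to background; the band in between is re-thresholded until successive
# Otsu thresholds differ by less than a preset tolerance.
import numpy as np
from skimage.filters import threshold_otsu

def triclass_otsu(img, tol=1e-3):
    fg = np.zeros(img.shape, dtype=bool)
    tbd = np.ones(img.shape, dtype=bool)        # to-be-determined region
    t_prev = None
    while True:
        vals = img[tbd]
        if vals.size < 2 or vals.min() == vals.max():
            break
        t = threshold_otsu(vals)
        if t_prev is not None and abs(t - t_prev) < tol:
            break
        m0 = vals[vals <= t].mean()             # lower (background) class mean
        m1 = vals[vals > t].mean()              # upper (foreground) class mean
        fg |= tbd & (img >= m1)                 # commit clear foreground
        tbd &= (img > m0) & (img < m1)          # shrink the TBD band
        t_prev = t
    if t_prev is not None:
        fg |= tbd & (img > t_prev)              # split whatever band remains
    return fg
```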

  11. Identifying Threshold Concepts for Information Literacy: A Delphi Study

    Directory of Open Access Journals (Sweden)

    Lori Townsend

    2016-06-01

    Full Text Available This study used the Delphi method to engage expert practitioners on the topic of threshold concepts for information literacy. A panel of experts considered two questions. First, is the threshold concept approach useful for information literacy instruction? The panel unanimously agreed that the threshold concept approach holds potential for information literacy instruction. Second, what are the threshold concepts for information literacy instruction? The panel proposed and discussed over fifty potential threshold concepts, finally settling on six information literacy threshold concepts.

  12. New method to evaluate the 7Li(p, n)7Be reaction near threshold

    International Nuclear Information System (INIS)

    Herrera, María S.; Moreno, Gustavo A.; Kreiner, Andrés J.

    2015-01-01

    In this work a complete description of the 7Li(p, n)7Be reaction near threshold is given using center-of-mass and relative coordinates. It is shown that this standard approach, not used before in this context, leads to a simple mathematical representation which gives easy access to all relevant quantities in the reaction and allows a precise numerical implementation. It also allows proton beam-energy spread effects to be included in a simple way. The method, implemented as a C++ code, was validated against both numerical and experimental data, finding good agreement. The tool is also used here to analyze scattered published measurements, such as (p, n) cross sections and differential and total neutron yields for thick targets. Using these data, we derive a consistent set of parameters to evaluate neutron production near threshold. The sensitivity of the results to data uncertainty and the possibility of incorporating new measurements are also discussed.

  13. A non-parametric framework for estimating threshold limit values

    Directory of Open Access Journals (Sweden)

    Ulm Kurt

    2005-11-01

    Full Text Available Abstract Background To estimate a threshold limit value for a compound known to have harmful health effects, an 'elbow' threshold model is usually applied. We are interested in flexible non-parametric alternatives. Methods We describe how a step function model fitted by isotonic regression can be used to estimate threshold limit values. This method returns a set of candidate locations, and we discuss two algorithms to select the threshold among them: reduced isotonic regression and an algorithm based on the closed family of hypotheses. We assess the performance of these two alternative approaches under different scenarios in a simulation study. We illustrate the framework by analysing data from a study conducted by the German Research Foundation aiming to set a threshold limit value for exposure to total dust at the workplace, as a causal agent for developing chronic bronchitis. Results In the paper we demonstrate the use and the properties of the proposed methodology along with the results from an application. The method appears to detect the threshold with satisfactory success. However, its performance can be compromised by the low power to reject the constant-risk assumption when the true dose-response relationship is weak. Conclusion The estimation of thresholds in the isotonic framework is conceptually simple and sufficiently powerful. Given that there is no gold-standard method for threshold value estimation, the proposed model provides a useful non-parametric alternative to the standard approaches and can corroborate or challenge their findings.
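
    As a rough illustration of the isotonic building block (not the record's selection algorithms), the sketch below fits a monotone step function with scikit-learn and reads off the first jump as a candidate threshold; the simulated dose-response data are invented for the demo.

```python
# Sketch: isotonic regression as a threshold detector. The fitted step
# function is scanned for its first jump, giving one candidate threshold;
# reduced isotonic regression and closed testing are not reproduced here.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
dose = np.sort(rng.uniform(0, 10, 200))
risk = (dose > 4).astype(float) * 0.3 + rng.normal(0, 0.1, 200)  # true threshold ~4

iso = IsotonicRegression(increasing=True).fit(dose, risk)
fitted = iso.predict(dose)
jumps = np.nonzero(np.diff(fitted) > 0)[0]      # candidate threshold locations
print(dose[jumps[0]] if jumps.size else "no threshold detected")
```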

  14. Color image Segmentation using automatic thresholding techniques

    International Nuclear Information System (INIS)

    Harrabi, R.; Ben Braiek, E.

    2011-01-01

    In this paper, entropy-based and between-class-variance-based thresholding methods for color image segmentation are studied. The maximization of the between-class variance (MVI) and of the entropy (ME) have been used as criterion functions to determine an optimal threshold for segmenting images into nearly homogeneous regions. Segmentation results from the two methods are validated, the segmentation sensitivity for the available test data is evaluated, and a comparative study between these methods in different color spaces is presented. The experimental results demonstrate the superiority of the MVI method for color image segmentation.

  15. A Dynamic and Adaptive Selection Radar Tracking Method Based on Information Entropy

    Directory of Open Access Journals (Sweden)

    Ge Jianjun

    2017-12-01

    Full Text Available Nowadays, the battlefield environment has become much more complex and variable. Based on the principle of information entropy, this paper presents a quantitative measure, with a lower bound, of the amount of target information acquired from multiple radar observations, used to organize the detection resources of the battlefield adaptively and dynamically. Furthermore, to minimize this information-entropy lower bound for the target measurement at every moment, a method is proposed that dynamically and adaptively selects the radars carrying a high amount of information for target tracking. The simulation results indicate that the proposed method yields higher tracking accuracy than tracking without entropy-based adaptive radar selection.

  16. Self-adaptive demodulation for polarization extinction ratio in distributed polarization coupling.

    Science.gov (United States)

    Zhang, Hongxia; Ren, Yaguang; Liu, Tiegen; Jia, Dagong; Zhang, Yimo

    2013-06-20

    A self-adaptive method for distributed polarization extinction ratio (PER) demodulation is demonstrated. It is characterized by a dynamic PER threshold coupling intensity (TCI) and a nonuniform PER iteration step length (ISL). Based on the preset PER calculation accuracy and the original distributed coupling intensity, the TCI and ISL self-adapt to determine the contributing coupling points inside the polarizing devices. The distributed PER is calculated by accumulating those coupling points automatically and selectively. Two different kinds of polarization-maintaining fibers are tested, and PERs are obtained after merely 3-5 iterations of the proposed method. Comparison experiments with a Thorlabs commercial instrument are also conducted, and the results show high consistency. In addition, an optimum preset PER calculation accuracy of 0.05 dB is obtained through many repeated experiments.

  17. Final Report: Symposium on Adaptive Methods for Partial Differential Equations

    Energy Technology Data Exchange (ETDEWEB)

    Pernice, Michael; Johnson, Christopher R.; Smith, Philip J.; Fogelson, Aaron

    1998-12-08

    Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.

  18. An h-adaptive mesh method for Boltzmann-BGK/hydrodynamics coupling

    International Nuclear Information System (INIS)

    Cai Zhenning; Li Ruo

    2010-01-01

    We introduce a coupled method for hydrodynamic and kinetic equations on 2-dimensional h-adaptive meshes. We adopt the Euler equations with a fast kinetic solver in the region near thermodynamic equilibrium, while using the Boltzmann-BGK equation in kinetic regions where the fluid is far from equilibrium. A buffer zone is created around the kinetic regions, on which a gradually varying numerical flux is adopted. Based on the property of a continuously discretized cut-off function describing how the flux varies, the coupling is conservative. In order for the conservative 2-dimensional specularly reflective boundary condition to be implemented conveniently, the discrete Maxwellian is approximated by a high-order continuous formula with improved accuracy on a disc instead of on a square domain. The h-adaptive method works smoothly with a time-split numerical scheme. Through h-adaptation, the cell count is greatly reduced. The method is particularly suitable for problems where hydrodynamics breaks down on only a small part of the whole domain, so that the total efficiency of the algorithm can be greatly improved. Three numerical examples are presented to validate the proposed method and demonstrate its efficiency.

  19. Adaptive L1/2 Shooting Regularization Method for Survival Analysis Using Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Xiao-Ying Liu

    2013-01-01

    Full Text Available A new adaptive L1/2 shooting regularization method for variable selection based on the Cox proportional hazards model is proposed. This adaptive L1/2 shooting algorithm is easily obtained by optimizing a reweighted iterative series of L1 penalties together with a shooting strategy for the L1/2 penalty. Simulation results based on high-dimensional artificial data show that the adaptive L1/2 shooting regularization method can be more accurate for variable selection than the Lasso and adaptive Lasso methods. The results from a real gene expression dataset (DLBCL) also indicate that the L1/2 regularization method performs competitively.
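
    The L1 "shooting" core that such a method reweights can be sketched as a plain coordinate-descent Lasso with soft thresholding. The Cox partial-likelihood setting and the L1/2 reweighting scheme are not reproduced, and standardized predictor columns are assumed.

```python
# Sketch: "shooting" (coordinate descent) for 0.5*||y - X b||^2 + lam*||b||_1.
# Each coordinate is updated by soft-thresholding its partial residual fit.
import numpy as np

def soft(z, g):
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def shooting_lasso(X, y, lam, n_iter=100):
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)               # assumes no zero column
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual excluding j
            beta[j] = soft(X[:, j] @ r, lam) / col_sq[j]
    return beta
```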

  20. Adaptive BDDC Deluxe Methods for H(curl)

    KAUST Repository

    Zampini, Stefano

    2017-03-17

    The work presents numerical results using adaptive BDDC deluxe methods for preconditioning the linear systems arising from finite element discretizations of the time-domain, quasi-static approximation of the Maxwell’s equations. The provided results, obtained using the BDDC implementation of the PETSc library, show that these methods are poly-logarithmic in the polynomial degree of the Nédélec elements of first and second kind, and robust with respect to arbitrary distributions of the magnetic permeability and the conductivity of the medium.

  1. ECG-derived respiration methods: adapted ICA and PCA.

    Science.gov (United States)

    Tiinanen, Suvi; Noponen, Kai; Tulppo, Mikko; Kiviniemi, Antti; Seppänen, Tapio

    2015-05-01

    Respiration is an important signal in early diagnostics, prediction, and treatment of several diseases. Moreover, a growing trend toward ambulatory measurements outside laboratory environments encourages the development of indirect measurement methods such as ECG-derived respiration (EDR). Recently, decomposition techniques like principal component analysis (PCA) and its nonlinear version, kernel PCA (KPCA), have been used to derive a surrogate respiration signal from single-channel ECG. In this paper, we propose an adapted independent component analysis (AICA) algorithm to obtain the EDR signal, and extend the normal linear PCA technique based on best principal component (PC) selection (APCA, adapted PCA) to improve its performance further. We also demonstrate that the use of smoothing spline resampling and bandpass filtering improves the performance of all EDR methods. Compared with other recent EDR methods using the correlation coefficient and magnitude squared coherence, the proposed AICA and APCA yield a statistically significant improvement, with correlations 0.84, 0.82, 0.76 and coherences 0.90, 0.91, 0.85 between the reference respiration and AICA, APCA and KPCA, respectively. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
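
    A minimal PCA-based EDR baseline, of the kind the adapted variants improve on, can be sketched as follows; the spline resampling and band-pass filtering from the record are omitted, and the window half-width is an assumed value.

```python
# Sketch: basic PCA-based EDR. Beat segments aligned on given R-peak indices
# are stacked, and the first principal-component score per beat is taken as
# the respiratory surrogate (one sample per heartbeat).
import numpy as np

def edr_pca(ecg, r_peaks, half_win=40):
    beats = np.array([ecg[r - half_win:r + half_win]
                      for r in r_peaks
                      if half_win <= r < len(ecg) - half_win])
    beats = beats - beats.mean(axis=0)           # center the beat matrix
    _, _, vt = np.linalg.svd(beats, full_matrices=False)
    return beats @ vt[0]                         # first PC score per beat
```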

  2. A constrained Delaunay discretization method for adaptively meshing highly discontinuous geological media

    Science.gov (United States)

    Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo

    2017-12-01

    A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and a smooth quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct a piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive characteristic of the resulting mesh: the mesh is adaptive not only along fractures but also in space. The quality of the elements is compared with results from an existing method, and it is verified that the present method generates smoother elements and a better distribution of element aspect ratios. Two numerical simulations are implemented to demonstrate that the present method can be applied to simulations of complex geological media containing a large number of discontinuities.

  3. Mouse epileptic seizure detection with multiple EEG features and simple thresholding technique

    Science.gov (United States)

    Tieng, Quang M.; Anbazhagan, Ashwin; Chen, Min; Reutens, David C.

    2017-12-01

    Objective. Epilepsy is a common neurological disorder characterized by recurrent, unprovoked seizures. The search for new treatments for seizures and epilepsy relies upon studies in animal models of epilepsy. To capture data on seizures, many applications require prolonged electroencephalography (EEG) with recordings that generate voluminous data. The desire for efficient evaluation of these recordings motivates the development of automated seizure detection algorithms. Approach. A new seizure detection method is proposed, based on multiple features and a simple thresholding technique. The features are derived from chaos theory, information theory and the power spectrum of EEG recordings and optimally exploit both linear and nonlinear characteristics of EEG data. Main result. The proposed method was tested with real EEG data from an experimental mouse model of epilepsy and distinguished seizures from other patterns with high sensitivity and specificity. Significance. The proposed approach introduces two new features: negative logarithm of adaptive correlation integral and power spectral coherence ratio. The combination of these new features with two previously described features, entropy and phase coherence, improved seizure detection accuracy significantly. Negative logarithm of adaptive correlation integral can also be used to compute the duration of automatically detected seizures.
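
    The "multiple features plus simple thresholding" structure can be sketched with two illustrative window features. The record's actual features (negative logarithm of adaptive correlation integral, entropy, phase coherence, power spectral coherence ratio) are more elaborate and not reproduced here; the threshold values are assumed to be tuned on training data.

```python
# Sketch: per-window EEG features combined with a simple AND of thresholds.
# Line length and 3-30 Hz band power stand in for the record's features.
import numpy as np

def detect_windows(eeg, fs, thr_ll, thr_bp, win_s=2.0):
    w = int(win_s * fs)
    flags = []
    for i in range(0, len(eeg) - w + 1, w):
        seg = eeg[i:i + w]
        ll = np.abs(np.diff(seg)).sum()              # line length
        spec = np.abs(np.fft.rfft(seg)) ** 2
        f = np.fft.rfftfreq(w, 1.0 / fs)
        bp = spec[(f >= 3) & (f <= 30)].sum()        # band power, 3-30 Hz
        flags.append(bool(ll > thr_ll and bp > thr_bp))
    return np.array(flags)
```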

  4. Closed-loop adaptation of neurofeedback based on mental effort facilitates reinforcement learning of brain self-regulation.

    Science.gov (United States)

    Bauer, Robert; Fels, Meike; Royter, Vladislav; Raco, Valerio; Gharabaghi, Alireza

    2016-09-01

    Considering self-rated mental effort during neurofeedback may improve training of brain self-regulation. Twenty-one healthy, right-handed subjects performed kinesthetic motor imagery of opening their left hand, while threshold-based classification of beta-band desynchronization resulted in proprioceptive robotic feedback. The experiment consisted of two blocks in a cross-over design. The participants rated their perceived mental effort nine times per block. In the adaptive block, the threshold was adjusted on the basis of these ratings whereas adjustments were carried out at random in the other block. Electroencephalography was used to examine the cortical activation patterns during the training sessions. The perceived mental effort was correlated with the difficulty threshold of neurofeedback training. Adaptive threshold-setting reduced mental effort and increased the classification accuracy and positive predictive value. This was paralleled by an inter-hemispheric cortical activation pattern in low frequency bands connecting the right frontal and left parietal areas. Optimal balance of mental effort was achieved at thresholds significantly higher than maximum classification accuracy. Rating of mental effort is a feasible approach for effective threshold-adaptation during neurofeedback training. Closed-loop adaptation of the neurofeedback difficulty level facilitates reinforcement learning of brain self-regulation. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  5. A multilevel correction adaptive finite element method for Kohn-Sham equation

    Science.gov (United States)

    Hu, Guanghui; Xie, Hehu; Xu, Fei

    2018-02-01

    In this paper, an adaptive finite element method is proposed for solving the Kohn-Sham equation with the multilevel correction technique. In the method, the Kohn-Sham equation is solved on a fixed and appropriately coarse mesh with the finite element method, and the finite element space is successively improved by solving derived boundary value problems on a series of adaptively refined meshes. A main feature of the method is that solving the large-scale Kohn-Sham system is effectively avoided, while the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, significant acceleration is obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.

  6. THRESHOLD OF SIGNIFICANCE IN STRESS MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Elena RUSE

    2015-12-01

    Full Text Available Stress management is the individual's ability to handle any situation or external condition and to match the demands of the external environment. Researchers have revealed several stages in the stress response. The first phase, called the 'alert reaction' or 'immediate reaction to stress', is the phase in which physiological modifications occur together with psychological manifestations. The adaptation phase is the phase in which the reactions from the first phase diminish or disappear. The exhaustion phase is related to the diversity of stress factors and to time, and may exceed the human body's resources to adapt. Influencing factors may be limited, cognitive, perceptual, or a priori. But there is a threshold of significance in stress management: once the reaction to external stimuli occurs, awareness is needed. The capability effect then occurs, side effects fade away, and the 'I AM' effect emerges.

  7. Optimal and adaptive methods of processing hydroacoustic signals (review)

    Science.gov (United States)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. The advantages and drawbacks of the classical adaptive algorithms (Capon, MUSIC, and Johnson) and of "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. Results are presented from simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering. An automatic detector is analyzed that is based on classical or fast projection algorithms and estimates the background using median filtering or the method of bilateral spatial contrast.

  8. HAM-Based Adaptive Multiscale Meshless Method for Burgers Equation

    Directory of Open Access Journals (Sweden)

    Shu-Li Mei

    2013-01-01

    Full Text Available Based on multilevel interpolation theory, we constructed a meshless adaptive multiscale interpolation operator (MAMIO) with a radial basis function. Using this operator, any nonlinear partial differential equation, such as the Burgers equation, can be discretized adaptively in physical space as a nonlinear matrix ordinary differential equation. In order to obtain the analytical solution of the system of ODEs, the homotopy analysis method (HAM) proposed by Shijun Liao was developed by combining it with the precise integration method (PIM), which can be employed to obtain the analytical solution of a linear system of ODEs. The numerical experiments show that HAM is not sensitive to the time step, so the arithmetic error derives mainly from the discretization in physical space.

  9. Adaptive Multiresolution Methods: Practical issues on Data Structures, Implementation and Parallelization*

    Directory of Open Access Journals (Sweden)

    Bachmann M.

    2011-12-01

    Full Text Available The concept of fully adaptive multiresolution finite volume schemes has been developed and investigated during the past decade. Here grid adaptation is realized by performing a multiscale decomposition of the discrete data at hand. By means of hard thresholding the resulting multiscale data are compressed. From the remaining data a locally refined grid is constructed. The aim of the present work is to give a self-contained overview of the construction of an appropriate multiresolution analysis using biorthogonal wavelets, its efficient realization by means of hash maps using global cell identifiers, and the parallelization of the multiresolution-based grid adaptation via MPI using space-filling curves.

  10. Fusion of Thresholding Rules During Wavelet-Based Noisy Image Compression

    Directory of Open Access Journals (Sweden)

    Bekhtin Yury

    2016-01-01

    Full Text Available A new method for combining semisoft thresholding rules during wavelet-based compression of images corrupted by multiplicative noise is suggested. The method chooses the best thresholding rule and threshold value using the proposed criteria, which provide the best nonlinear approximations and take quantization errors into consideration. The results of computer modeling have shown that the suggested method provides relatively good restored image quality in the sense of criteria such as PSNR, SSIM, etc.
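
    For reference, the three classical thresholding rules among which such a fusion method chooses can be sketched as below; the selection criterion and the quantization-error term from the record are not reproduced.

```python
# Sketch: hard, soft, and semisoft (firm) thresholding of wavelet
# coefficients c. Semisoft interpolates between hard and soft behaviour
# via two thresholds t1 < t2.
import numpy as np

def hard(c, t):
    return np.where(np.abs(c) > t, c, 0.0)

def soft(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def semisoft(c, t1, t2):
    ramp = np.sign(c) * t2 * (np.abs(c) - t1) / (t2 - t1)   # linear ramp zone
    out = np.where(np.abs(c) <= t1, 0.0, ramp)
    return np.where(np.abs(c) > t2, c, out)
```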

  11. Adaptive Mesh Iteration Method for Trajectory Optimization Based on Hermite-Pseudospectral Direct Transcription

    Directory of Open Access Journals (Sweden)

    Humin Lei

    2017-01-01

    Full Text Available An adaptive mesh iteration method based on Hermite pseudospectral direct transcription is described for trajectory optimization. The method uses the Legendre-Gauss-Lobatto points as interpolation points, so the state equations are approximated by Hermite interpolating polynomials. The method allows for changes in both the number of mesh points and the number of mesh intervals, and produces significantly smaller meshes for a given accuracy tolerance. The derived relative error estimate is then used to trade the number of mesh points against the number of mesh intervals. The adaptive mesh iteration method is applied successfully to trajectory optimization of a Maneuverable Reentry Research Vehicle, and the simulation results show that the method has many advantages.

  12. Methods for the estimation of the National Institute for Health and Care Excellence cost-effectiveness threshold.

    Science.gov (United States)

    Claxton, Karl; Martin, Steve; Soares, Marta; Rice, Nigel; Spackman, Eldon; Hinde, Sebastian; Devlin, Nancy; Smith, Peter C; Sculpher, Mark

    2015-02-01

    when PCTs are under more financial pressure and are more likely to be disinvesting than investing. This indicates that the central estimate of the threshold is likely to be an overestimate for all technologies which impose net costs on the NHS and the appropriate threshold to apply should be lower for technologies which have a greater impact on NHS costs. The central estimate is based on identifying a preferred analysis at each stage based on the analysis that made the best use of available information, whether or not the assumptions required appeared more reasonable than the other alternatives available, and which provided a more complete picture of the likely health effects of a change in expenditure. However, the limitation of currently available data means that there is substantial uncertainty associated with the estimate of the overall threshold. The methods go some way to providing an empirical estimate of the scale of opportunity costs the NHS faces when considering whether or not the health benefits associated with new technologies are greater than the health that is likely to be lost elsewhere in the NHS. Priorities for future research include estimating the threshold for subsequent waves of expenditure and outcome data, for example by utilising expenditure and outcomes available at the level of Clinical Commissioning Groups as well as additional data collected on QoL and updated estimates of incidence (by age and gender) and duration of disease. Nonetheless, the study also starts to make the other NHS patients, who ultimately bear the opportunity costs of such decisions, less abstract and more 'known' in social decisions. The National Institute for Health Research-Medical Research Council Methodology Research Programme.

  13. Discrete linear canonical transform computation by adaptive method.

    Science.gov (United States)

    Zhang, Feng; Tao, Ran; Wang, Yue

    2013-07-29

    The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, the computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches of the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have the inherent parallel structures which make them suitable for efficient VLSI implementations, and are robust to the propagation of possible errors in the computation process.

  14. Models, methods and software tools for building complex adaptive traffic systems

    International Nuclear Information System (INIS)

    Alyushin, S.A.

    2011-01-01

    The paper studies modern methods and tools for simulating the behavior of complex adaptive systems (CAS), together with existing traffic modeling systems in simulators and their characteristics, and proposes requirements for assessing a system's suitability for simulating CAS behavior in simulators. The author has developed a model of adaptive agent representation and of its operating environment to meet the requirements set above, and presents methods for agent interaction and for conflict resolution in simulated traffic situations. A simulation system realizing computer modeling of CAS behavior in traffic situations has been created

  15. Adaptive variational mode decomposition method for signal processing based on mode characteristic

    Science.gov (United States)

    Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng

    2018-07-01

    Variational mode decomposition is a completely non-recursive decomposition model in which all modes are extracted concurrently. However, the model requires a preset mode number, which limits its adaptability, since a large deviation in the preset mode number causes modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) is proposed to determine the mode number automatically based on the characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and measured signals from a hydropower plant. Comparisons with VMD, EMD and EWT were also conducted to evaluate its performance. The results indicate that the proposed method has strong adaptability and is robust to noise; it can determine the mode number appropriately, without mode mixing, even when the signal frequencies are relatively close.
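
    One plausible reading of the adaptive mode-number loop is sketched below. Here `vmd` is a hypothetical placeholder for any VMD routine returning modes and their center frequencies (this signature is assumed, not the paper's code), and both stopping tests and tolerances are assumptions rather than the paper's actual criterion.

```python
# Sketch: grow the mode number K until the reconstruction residual is small
# enough, backing off one step if two center frequencies collide (a symptom
# of choosing K too large). `vmd` is a user-supplied placeholder function.
import numpy as np

def adaptive_vmd(signal, vmd, k_max=12, res_tol=0.01, freq_tol=0.02):
    prev_modes = None
    for k in range(2, k_max + 1):
        modes, freqs = vmd(signal, k)            # hypothetical VMD call
        resid = (np.linalg.norm(signal - modes.sum(axis=0))
                 / np.linalg.norm(signal))
        if np.min(np.diff(np.sort(freqs))) < freq_tol:   # modes too close
            return prev_modes if prev_modes is not None else modes
        if resid < res_tol:
            return modes
        prev_modes = modes
    return prev_modes
```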

  16. A method for the deliberate and deliberative selection of policy instrument mixes for climate change adaptation

    Directory of Open Access Journals (Sweden)

    Heleen L. P. Mees

    2014-06-01

    Full Text Available Policy instruments can help put climate adaptation plans into action. Here, we propose a method for the systematic assessment and selection of policy instruments for stimulating adaptation action. The multi-disciplinary set of six assessment criteria is derived from economics, policy, and legal studies. These criteria are specified for the purpose of climate adaptation by taking into account four challenges to the governance of climate adaptation: uncertainty, spatial diversity, controversy, and social complexity. The six criteria and four challenges are integrated into a stepwise method that enables the selection of instruments, starting from a generic assessment and ending with a specific assessment of policy instrument mixes for stimulating a specific adaptation measure. We then apply the method to three examples of adaptation measures. The method's merits lie in enabling deliberate choices through a holistic and comprehensive set of adaptation-specific criteria, as well as deliberative choices by offering a stepwise method that structures an informed dialog on instrument selection. Although the method was created and applied by scientific experts, policy-makers can also use it.

  17. Identifying Threshold Concepts for Information Literacy: A Delphi Study

    OpenAIRE

    Lori Townsend; Amy R. Hofer; Silvia Lin Hanick; Korey Brunetti

    2016-01-01

    This study used the Delphi method to engage expert practitioners on the topic of threshold concepts for information literacy. A panel of experts considered two questions. First, is the threshold concept approach useful for information literacy instruction? The panel unanimously agreed that the threshold concept approach holds potential for information literacy instruction. Second, what are the threshold concepts for information literacy instruction? The panel proposed and discussed over fifty potential threshold concepts, finally settling on six information literacy threshold concepts.

  18. A comparison of performance of automatic cloud coverage assessment algorithm for Formosat-2 image using clustering-based and spatial thresholding methods

    Science.gov (United States)

    Hsu, Kuo-Hsien

    2012-11-01

    Formosat-2 imagery is high-spatial-resolution (2 m GSD) remote sensing satellite data comprising one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistics of each image using the Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistics are subsequently recorded as important metadata in the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. For pre-processing analysis, unsupervised K-means classification, Sobel's method, thresholding, non-cloudy pixel reexamination, and a cross-band filter method are applied in sequence to determine the cloud statistics. For post-processing analysis, the box-counting fractal method is applied. In other words, the cloud statistics are first determined via pre-processing analysis, and the correctness of the cloud statistics across spectral bands is then cross-examined qualitatively and quantitatively via post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work, we first conduct a series of experiments on clustering-based and spatial thresholding methods, including Otsu's, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The results show that Otsu's and GE methods both perform better than the others for Formosat-2 images. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, successfully extracts the cloudy pixels of Formosat-2 images for accurate cloud statistics estimation.

  19. When do Indians feel hot? Internet searches indicate seasonality suppresses adaptation to heat

    Science.gov (United States)

    Singh, Tanya; Siderius, Christian; Van der Velde, Ype

    2018-05-01

    In a warming world an increasing number of people are being exposed to heat, making a comfortable thermal environment an important need. This study explores the potential of using Regional Internet Search Frequencies (RISF) for air conditioning devices as an indicator of thermal discomfort (i.e. dissatisfaction with the thermal environment), with the aim of quantifying the adaptation potential of individuals living across different climate zones and at the high end of the temperature range, in India, where access to health data is limited. We related RISF for the years 2011–2015 to daily daytime outdoor temperature in 17 states and determined at which temperature RISF for air conditioning starts to peak, i.e. crosses a 'heat threshold', in each state. Using the spatial variation in heat thresholds, we explored whether people continuously exposed to higher temperatures show a lower response to heat extremes through adaptation (e.g. physiological, behavioural or psychological). State-level heat thresholds ranged from 25.9 °C in Madhya Pradesh to 31.0 °C in Orissa. Local adaptation was found to occur at state level: the higher the average temperature in a state, the higher the heat threshold; and the higher the intra-annual temperature range (warmest minus coldest month), the lower the heat threshold. These results indicate there is potential within India to adapt to warmer temperatures, but that large intra-annual temperature variability attenuates this potential to adapt to extreme heat. This winter 'reset' mechanism should be taken into account when assessing the impact of global warming, with changes in minimum temperatures being an important factor in addition to the change in maximum temperatures itself. Our findings contribute to a better understanding of local heat thresholds and people's adaptive capacity, which can support the design of local thermal comfort standards and early heat warning systems.

  20. Rainfall thresholds for the possible occurrence of landslides in Italy

    Directory of Open Access Journals (Sweden)

    M. T. Brunetti

    2010-03-01

    Full Text Available In Italy, rainfall is the primary trigger of landslides that frequently cause fatalities and large economic damage. Using a variety of information sources, we have compiled a catalogue listing 753 rainfall events that have resulted in landslides in Italy. For each event in the catalogue, the exact or approximate location of the landslide and the time or period of initiation of the slope failure are known, together with the rainfall duration D and the mean rainfall intensity I that resulted in the slope failure. The catalogue represents the single largest collection of information on rainfall-induced landslides in Italy, and was exploited to determine the minimum rainfall conditions necessary for landslide occurrence in Italy and in the Abruzzo Region, central Italy. For this purpose, new national rainfall thresholds for Italy and new regional rainfall thresholds for the Abruzzo Region were established, using two independent statistical methods: a Bayesian inference method and a new Frequentist approach. The two methods proved complementary, with the Bayesian method more suited to analyzing small data sets and the Frequentist method performing better when applied to large data sets. The new regional thresholds for the Abruzzo Region are lower than the new national thresholds for Italy, and lower than the regional thresholds proposed in the literature for the Piedmont and Lombardy Regions in northern Italy and for the Campania Region in southern Italy. This is important, because it shows that landslides in Italy can be triggered by less severe rainfall conditions than previously recognized. The Frequentist method experimented with in this work allows for the definition of multiple minimum rainfall thresholds, each based on a different exceedance probability level. This makes the thresholds suited for the design of probabilistic schemes for the prediction of rainfall-induced landslides. A scheme based on four
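
    A toy frequentist intensity-duration threshold fit, in the spirit of (but not identical to) the method described above, can be sketched as follows; the event data in the demo are invented.

```python
# Sketch: fit log(I) = log(a) + b*log(D) to landslide-triggering rainfall
# events, then shift the intercept to a chosen residual quantile so that,
# e.g., 5% of triggering events lie below the resulting I = alpha * D**beta.
import numpy as np

def id_threshold(D, I, exceed=0.05):
    """D: rainfall durations (h); I: mean intensities (mm/h) of trigger events."""
    x, y = np.log(D), np.log(I)
    b, a = np.polyfit(x, y, 1)                   # slope, intercept of the fit
    resid = y - (a + b * x)
    a_thr = a + np.quantile(resid, exceed)       # lower-envelope intercept
    return np.exp(a_thr), b

alpha, beta = id_threshold(np.array([1, 3, 6, 12, 24, 48]),
                           np.array([30, 15, 9, 6, 3.5, 2.2]))
print(f"I = {alpha:.2f} * D^{beta:.2f}")         # invented demo data
```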

  1. Defining indoor heat thresholds for health in the UK.

    Science.gov (United States)

    Anderson, Mindy; Carmichael, Catriona; Murray, Virginia; Dengel, Andy; Swainson, Michael

    2013-05-01

    It has been recognised that as outdoor ambient temperatures increase past a particular threshold, so do mortality/morbidity rates. However, similar thresholds for indoor temperatures have not yet been identified. Due to a warming climate, the non-sustainability of air conditioning as a solution, and the desire for more energy-efficient airtight homes, thresholds for indoor temperature should be defined as a public health issue. The aim of this paper is to outline the need for indoor heat thresholds and to establish whether they can be identified. Our objectives include: describing how indoor temperature is measured; highlighting threshold measurements and indices; describing adaptation to heat; summarising the risk of heat to susceptible groups; reviewing the current evidence on the link between sleep, heat and health; exploring current heat and health warning systems and thresholds; exploring the built environment and the risk of overheating; and identifying the gaps in current knowledge and research. A global literature search of key databases was conducted using a pre-defined set of keywords to retrieve peer-reviewed and grey literature. The paper applies the findings to the context of the UK. In total, 96 articles, reports, government documents and textbooks were analysed and a gap analysis was conducted. Evidence on the effects of indoor heat on health implies that buildings are modifiers of the effect of climate on health outcomes. Personal exposure and place-based heat studies showed the most significant correlations between indoor heat and health outcomes. However, the data are sparse and inconclusive in terms of identifying evidence-based definitions for thresholds. Further research needs to be conducted in order to provide an evidence base for threshold determination. Indoor and outdoor heat are related but differ in terms of language and measurement. Future collaboration between the health and building sectors is needed to develop a common

  2. A NEW MULTI-SPECTRAL THRESHOLD NORMALIZED DIFFERENCE WATER INDEX (MST-NDWI WATER EXTRACTION METHOD – A CASE STUDY IN YANHE WATERSHED

    Directory of Open Access Journals (Sweden)

    Y. Zhou

    2018-05-01

    Full Text Available Accurate remote sensing water extraction is one of the primary tasks of watershed ecological environment studies. The Yanhe water system has the typical characteristics of a small water volume and narrow river channels, which make conventional water extraction methods such as the Normalized Difference Water Index (NDWI) difficult to apply. A new Multi-Spectral Threshold segmentation of the NDWI (MST-NDWI) water extraction method is proposed to achieve accurate water extraction in the Yanhe watershed. In the MST-NDWI method, the spectral characteristics of water bodies and typical backgrounds on Landsat/TM images of the Yanhe watershed are evaluated. Multi-spectral thresholds (TM1, TM4, TM5) based on maximum likelihood are applied before NDWI water extraction to separate built-up lands from small linear rivers. With the proposed method, a water map is extracted from Landsat/TM images of 2010 in China. An accuracy assessment is conducted to compare the proposed method with conventional water indexes such as NDWI, the Modified NDWI (MNDWI), the Enhanced Water Index (EWI), and the Automated Water Extraction Index (AWEI). The results show that the MST-NDWI method achieves better water extraction accuracy in the Yanhe watershed and can effectively suppress confusing background objects compared to the conventional water indexes. The MST-NDWI method integrates NDWI and multi-spectral threshold segmentation algorithms, yielding richer valuable information and remarkable results for accurate water extraction in the Yanhe watershed.
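
    The core of such a method can be sketched as plain NDWI plus per-band pre-masks on the TM1 (blue), TM4 (NIR) and TM5 (SWIR) bands; the threshold values below are placeholders, not the maximum-likelihood values derived in the paper.

```python
# Sketch: NDWI = (green - NIR) / (green + NIR), gated by per-band pre-masks
# intended to suppress built-up land before the index threshold is applied.
import numpy as np

def mst_ndwi(blue, green, nir, swir, t_blue, t_nir, t_swir, t_ndwi=0.0):
    ndwi = (green - nir) / (green + nir + 1e-12)
    # Multi-spectral pre-segmentation (assumed form; thresholds are placeholders)
    mask = (blue < t_blue) & (nir < t_nir) & (swir < t_swir)
    return mask & (ndwi > t_ndwi)                # final water map
```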

  3. Adaptive endpoint detection of seismic signal based on auto-correlated function

    International Nuclear Information System (INIS)

    Fan Wanchun; Shi Ren

    2001-01-01

    Based on the analysis of the auto-correlation function, the notion of the distance between auto-correlation functions is introduced, and the characteristics of noise and of a signal with noise are discussed using this distance. An adaptive endpoint detection method for seismic signals based on auto-correlated similarity is then summarized. The implementation steps and the determination of the thresholds are presented in detail. Experimental results, compared with methods based on manual detection, show that this method has higher sensitivity even at a low signal-to-noise ratio

  4. Adaptive Finite Volume Method for the Shallow Water Equations on Triangular Grids

    Directory of Open Access Journals (Sweden)

    Sudi Mungkasi

    2016-01-01

    Full Text Available This paper presents a numerical entropy production (NEP) scheme for the two-dimensional shallow water equations on unstructured triangular grids. We implement NEP as the error indicator for adaptive mesh refinement or coarsening in solving the shallow water equations using a finite volume method. Numerical simulations show that NEP is successful as a refinement/coarsening indicator in the adaptive-mesh finite volume method, as the method refines the mesh around nonsmooth regions and coarsens it around smooth regions.

  5. Regional rainfall thresholds for landslide occurrence using a centenary database

    Science.gov (United States)

    Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Garcia, Ricardo A. C.; Quaresma, Ivânia

    2018-04-01

    This work proposes a comprehensive method to assess rainfall thresholds for landslide initiation using a centenary landslide database associated with a single centenary daily rainfall data set. The method is applied to the Lisbon region and includes the rainfall return period analysis that was used to identify the critical rainfall combination (cumulated rainfall duration) related to each landslide event. The spatial representativeness of the reference rain gauge is evaluated and the rainfall thresholds are assessed and calibrated using the receiver operating characteristic (ROC) metrics. Results show that landslide events located up to 10 km from the rain gauge can be used to calculate the rainfall thresholds in the study area; however, these thresholds may be used with acceptable confidence up to 50 km from the rain gauge. The rainfall thresholds obtained using linear and potential regression perform well in ROC metrics. However, the intermediate thresholds based on the probability of landslide events established in the zone between the lower-limit threshold and the upper-limit threshold are much more informative as they indicate the probability of landslide event occurrence given rainfall exceeding the threshold. This information can be easily included in landslide early warning systems, especially when combined with the probability of rainfall above each threshold.

  6. Block Compressed Sensing of Images Using Adaptive Granular Reconstruction

    Directory of Open Access Journals (Sweden)

    Ran Li

    2016-01-01

    Full Text Available In the framework of block compressed sensing (CS), reconstruction based on the Smoothed Projected Landweber (SPL) iteration can achieve better rate-distortion performance with a low computational complexity, especially when Principal Components Analysis (PCA) is used to perform adaptive hard-thresholding shrinkage. However, when learning the PCA matrix, neglecting the stationary local structural characteristics of the image degrades the reconstruction performance of the Landweber iteration. To solve this problem, this paper first uses Granular Computing (GrC) to decompose an image into several granules depending on the structural features of patches. Then, PCA is performed to learn the sparse representation basis corresponding to each granule. Finally, hard-thresholding shrinkage is employed to remove the noise in the patches. Since the patches in a granule share stationary local structural characteristics, our method can effectively improve the performance of hard-thresholding shrinkage. Experimental results indicate that the image reconstructed by the proposed algorithm has better objective quality than several traditional algorithms. The edge and texture details of the reconstructed image are better preserved, which guarantees better visual quality. Besides, our method still has a low reconstruction complexity.

  7. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2016-01-01

    Full Text Available Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches.

  8. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Science.gov (United States)

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068
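    Records 7 and 8 describe the same EWISTARS algorithm. Its core component, iterative shrinkage-thresholding, can be sketched generically as below; identity-domain sparsity stands in for the exponential wavelet transform and the random shift is omitted, so this is a minimal ISTA sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def ista(y, A, soft_tau, n_iter=100, step=None):
    """Generic iterative shrinkage-thresholding (ISTA) for y = A x + noise.
    In EWISTARS the shrinkage would act on exponential-wavelet coefficients
    with a random shift; here the signal itself is assumed sparse."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                 # gradient of 0.5*||Ax - y||^2
        z = x - step * grad
        # Soft-thresholding (shrinkage) step.
        x = np.sign(z) * np.maximum(np.abs(z) - soft_tau * step, 0.0)
    return x

# Illustrative compressed-sensing recovery of a sparse vector.
rng = np.random.default_rng(2)
n, m = 200, 80
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.normal(0, 3, 8)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)
x_hat = ista(y, A, soft_tau=0.05, n_iter=500)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```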

  9. Analysis of Similar Image Selection Utilizing the CBIR Concept and a Threshold Algorithm

    Directory of Open Access Journals (Sweden)

    Abdul Haris Rangkuti

    2011-12-01

    Full Text Available Content-based image retrieval (CBIR) is the concept of retrieving images by comparing a sample image to those in the database (query by example). The CBIR process based on color is carried out using an adaptive color histogram concept, while the process based on shape uses a moment concept. The results are then sorted against a threshold value of the sample image using a threshold algorithm. The displayed images are ordered from the one most similar to the query image (the example) to the one with the lowest resemblance (aggregation value). The threshold value of the query image, used as a reference, is compared with the aggregation value of each database image. If the comparison, computed with a fuzzy-logic similarity measure, approaches 1, the threshold value and the aggregation value are almost the same; if it approaches 0, they differ considerably.

  10. Study on Method of Geohazard Change Detection Based on Integrating Remote Sensing and GIS

    International Nuclear Information System (INIS)

    Zhao, Zhenzhen; Yan, Qin; Liu, Zhengjun; Luo, Chengfeng

    2014-01-01

    Following a comprehensive literature review, this paper analyzes geohazards using remote sensing information. It compares the basic types and methods of change detection, explores the principles of common methods, and analyzes the characteristics and shortcomings of the methods commonly applied to geohazards. Using the JieGu earthquake as a case study, this paper proposes a geohazard change detection method integrating RS and GIS. When comparing pre-earthquake and post-earthquake remote sensing images from different phases, it is crucial to set an appropriate threshold. The method adopts a self-adapting algorithm to determine the threshold: a training region is selected after pixel-level comparison, and the threshold value that best separates changed pixels is chosen. This threshold is then applied to the entire image, which maximizes change detection accuracy. Finally, the result is output to the GIS system for change analysis. The experimental results show that this geohazard change detection method, based on integrating remote sensing and GIS information, has higher accuracy and obvious advantages compared with traditional methods.
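    The paper's self-adapting threshold algorithm is not specified in the abstract; a common stand-in for thresholding a pre/post-event absolute difference image is Otsu's method, used here purely as an illustrative assumption.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: threshold maximizing between-class variance.
    Used here as a stand-in for the paper's self-adapting threshold
    on a pre/post-event absolute difference image."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                        # cumulative class probability
    mu = np.cumsum(p * centers)              # cumulative class mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(sigma_b)]

# Illustrative difference image: mostly unchanged, one bright changed block.
rng = np.random.default_rng(3)
diff = np.abs(rng.normal(0, 5, (128, 128)))
diff[40:60, 40:60] += 40                     # changed region
t = otsu_threshold(diff.ravel())
changed = diff > t
print(f"threshold {t:.1f}, changed pixels: {changed.sum()}")
```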

  11. Multivariate Self-Exciting Threshold Autoregressive Models with eXogenous Input

    OpenAIRE

    Addo, Peter Martey

    2014-01-01

    This study defines multivariate Self-Exciting Threshold Autoregressive with eXogenous input (MSETARX) models and presents an estimation procedure for their parameters. Conditions for stationarity of the nonlinear MSETARX models are provided. In particular, the efficiency of an adaptive parameter estimation algorithm and of the LSE (least squares estimate) algorithm for this class of models is demonstrated via simulations.

  12. Heat-related deaths in hot cities: estimates of human tolerance to high temperature thresholds.

    Science.gov (United States)

    Harlan, Sharon L; Chowell, Gerardo; Yang, Shuo; Petitti, Diana B; Morales Butler, Emmanuel J; Ruddell, Benjamin L; Ruddell, Darren M

    2014-03-20

    In this study we characterized the relationship between temperature and mortality in central Arizona desert cities that have an extremely hot climate. Relationships between daily maximum apparent temperature (ATmax) and mortality for eight condition-specific causes and all-cause deaths were modeled for all residents and separately for males and females in younger and older age groups. Heat thresholds for deaths directly attributed to heat in all gender and age groups (ATmax = 90-97 °F; 32.2-36.1 °C) were below local median seasonal temperatures in the study period (ATmax = 99.5 °F; 37.5 °C). The heat threshold was defined as the ATmax at which the mortality ratio begins an exponential upward trend. Thresholds were identified in younger and older females for cardiac disease/stroke mortality (ATmax = 106 and 108 °F; 41.1 and 42.2 °C) with a one-day lag. Thresholds were also identified for mortality from respiratory diseases in older people (ATmax = 109 °F; 42.8 °C) and for all-cause mortality in females (ATmax = 107 °F; 41.7 °C) and males. Heat-related mortality in a region that has already made some adaptations to predictable periods of extremely high temperatures suggests that more extensive and targeted heat-adaptation plans for climate change are needed in cities worldwide.

  13. Managing ecological thresholds in coupled environmental–human systems

    Science.gov (United States)

    Horan, Richard D.; Fenichel, Eli P.; Drury, Kevin L. S.; Lodge, David M.

    2011-01-01

    Many ecosystems appear subject to regime shifts—abrupt changes from one state to another after crossing a threshold or tipping point. Thresholds and their associated stability landscapes are determined within a coupled socioeconomic–ecological system (SES) where human choices, including those of managers, are feedback responses. Prior work has made one of two assumptions about managers: that they face no institutional constraints, in which case the SES may be managed to be fairly robust to shocks and tipping points are of little importance, or that managers are rigidly constrained with no flexibility to adapt, in which case the inferred thresholds may poorly reflect actual managerial flexibility. We model a multidimensional SES to investigate how alternative institutions affect SES stability landscapes and alter tipping points. With institutionally dependent human feedbacks, the stability landscape depends on institutional arrangements. Strong institutions that account for feedback responses create the possibility for desirable states of the world and can cause undesirable states to cease to exist. Intermediate institutions interact with ecological relationships to determine the existence and nature of tipping points. Finally, weak institutions can eliminate tipping points so that only undesirable states of the world remain. PMID:21502517

  14. Threshold quantum cryptography

    International Nuclear Information System (INIS)

    Tokunaga, Yuuki; Okamoto, Tatsuaki; Imoto, Nobuyuki

    2005-01-01

    We present the concept of threshold collaborative unitary transformation, or threshold quantum cryptography, which is a kind of quantum version of threshold cryptography. In threshold quantum cryptography, classical shared secrets are distributed to several parties, and a subset of them, whose number is greater than a threshold, collaborates to compute a quantum cryptographic function while keeping each share secret within each party. The shared secrets are reusable if no cheating is detected. As a concrete example of this concept, we show a distributed protocol (with threshold) for conjugate coding.

  15. MEthods of ASsessing blood pressUre: identifying thReshold and target valuEs (MeasureBP): a review & study protocol.

    Science.gov (United States)

    Blom, Kimberly C; Farina, Sasha; Gomez, Yessica-Haydee; Campbell, Norm R C; Hemmelgarn, Brenda R; Cloutier, Lyne; McKay, Donald W; Dawes, Martin; Tobe, Sheldon W; Bolli, Peter; Gelfer, Mark; McLean, Donna; Bartlett, Gillian; Joseph, Lawrence; Featherstone, Robin; Schiffrin, Ernesto L; Daskalopoulou, Stella S

    2015-04-01

    Despite progress in automated blood pressure measurement (BPM) technology, there is limited research linking hard outcomes to automated office BPM (OBPM) treatment targets and thresholds. Equivalences for automated BPM devices have been estimated from approximations of standardized manual measurements of 140/90 mmHg. Until outcome-driven targets and thresholds become available for automated measurement methods, deriving evidence-based equivalences between automated methods and standardized manual OBPM is the next best solution. The MeasureBP study group was initiated by the Canadian Hypertension Education Program to close this critical knowledge gap. MeasureBP aims to define evidence-based equivalent values between standardized manual OBPM and automated BPM methods by synthesizing available evidence using a systematic review and individual subject-level data meta-analyses. This manuscript provides a review of the literature and the MeasureBP study protocol. These results will lay the evidence-based foundation to resolve uncertainties within blood pressure guidelines which, in turn, will improve the management of hypertension.

  16. On Self-Adaptive Method for General Mixed Variational Inequalities

    Directory of Open Access Journals (Sweden)

    Abdellah Bnouhachem

    2008-01-01

    Full Text Available We suggest and analyze a new self-adaptive method for solving general mixed variational inequalities, which can be viewed as an improvement of the method of Noor (2003). Global convergence of the new method is proved under the same assumptions as Noor's method. Some preliminary computational results are given to illustrate the efficiency of the proposed method. Since general mixed variational inequalities include general variational inequalities, quasivariational inequalities, and nonlinear (implicit) complementarity problems as special cases, the results proved in this paper continue to hold for these problems.

  17. Calculation of left ventricular volume and ejection fraction from ECG-gated myocardial SPECT. Automatic detection of endocardial borders by threshold method

    International Nuclear Information System (INIS)

    Fukushi, Shoji; Teraoka, Satomi.

    1997-01-01

    A new method has been designed to calculate end-diastolic volume (EDV), end-systolic volume (ESV), and ejection fraction (LVEF) of the left ventricle from myocardial short-axis images of ECG-gated SPECT using a 99mTc myocardial perfusion tracer. Eight-frames-per-cardiac-cycle ECG-gated 180-degree SPECT was performed. A threshold method was used to detect myocardial borders automatically; the optimal threshold, determined with a myocardial SPECT phantom, was 45%. To determine whether EDV, ESV, and LVEF can be calculated reliably by this method, results for 12 patients were correlated with left ventriculography (LVG) performed within 10 days. The correlation coefficients with LVG were 0.918 (EDV), 0.935 (ESV), and 0.900 (LVEF). This method offers excellent objectivity and reproducibility because myocardial borders are detected automatically. It also provides useful information on cardiac function in addition to myocardial perfusion. (author)
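    The abstract's 45% threshold rule lends itself to a compact sketch. The paper's border-detection and contouring steps are more involved; this sketch only counts supra-threshold voxels at the reported cutoff, and the spherical phantoms and voxel volume are illustrative assumptions.

```python
import numpy as np

def lv_volume_ml(short_axis_stack, voxel_ml, threshold_frac=0.45):
    """Threshold method sketch: voxels whose counts exceed
    threshold_frac * max are counted toward the volume. The paper
    detects endocardial borders, which needs an extra contouring step."""
    cutoff = threshold_frac * short_axis_stack.max()
    return np.sum(short_axis_stack > cutoff) * voxel_ml

# Illustrative phantoms: bright spheres of different radii stand in for
# the end-diastolic and end-systolic count distributions.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
r = np.sqrt(x**2 + y**2 + z**2)
ed = np.where(r < 20, 100.0, 5.0)      # larger cavity at end-diastole
es = np.where(r < 16, 100.0, 5.0)      # smaller cavity at end-systole

voxel_ml = 0.004                       # illustrative, ~1.6 mm isotropic voxels
edv = lv_volume_ml(ed, voxel_ml)
esv = lv_volume_ml(es, voxel_ml)
lvef = 100.0 * (edv - esv) / edv       # ejection fraction in percent
print(f"EDV={edv:.0f} mL, ESV={esv:.0f} mL, LVEF={lvef:.0f}%")
```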

  18. An adaptive multi-element probabilistic collocation method for statistical EMC/EMI characterization

    KAUST Repository

    Yücel, Abdulkadir C.

    2013-12-01

    An adaptive multi-element probabilistic collocation (ME-PC) method for quantifying uncertainties in electromagnetic compatibility and interference phenomena involving electrically large, multi-scale, and complex platforms is presented. The method permits the efficient and accurate statistical characterization of observables (i.e., quantities of interest such as coupled voltages) that potentially vary rapidly and/or are discontinuous in the random variables (i.e., parameters that characterize uncertainty in a system's geometry, configuration, or excitation). The method achieves its efficiency and accuracy by recursively and adaptively dividing the domain of the random variables into subdomains using as a guide the decay rate of relative error in a polynomial chaos expansion of the observables. While constructing local polynomial expansions on each subdomain, a fast integral-equation-based deterministic field-cable-circuit simulator is used to compute the observable values at the collocation/integration points determined by the adaptive ME-PC scheme. The adaptive ME-PC scheme requires far fewer (computationally costly) deterministic simulations than traditional polynomial chaos collocation and Monte Carlo methods for computing averages, standard deviations, and probability density functions of rapidly varying observables. The efficiency and accuracy of the method are demonstrated via its applications to the statistical characterization of voltages in shielded/unshielded microwave amplifiers and magnetic fields induced on car tire pressure sensors. © 2013 IEEE.

  19. The dynamic time-over-threshold method for multi-channel APD based gamma-ray detectors

    Energy Technology Data Exchange (ETDEWEB)

    Orita, T., E-mail: orita.tadashi@jaea.go.jp [Japan Atomic Energy Agency, Fukushima (Japan); Shimazoe, K.; Takahashi, H. [Department of Nuclear Management and Engineering, The University of Tokyo, Bunkyō (Japan)]

    2015-03-01

    Recent advances in manufacturing technology have enabled the use of multi-channel pixelated detectors in gamma-ray imaging applications. When obtaining gamma-ray measurements, it is important to obtain pulse height information in order to avoid unnecessary events such as scattering. However, as the number of channels increases, more electronics are needed to process each channel's signal, and the corresponding increases in circuit size and power consumption can result in practical problems. The time-over-threshold (ToT) method, which has recently become popular in the medical field, is a signal processing technique that can effectively avoid such problems. However, ToT suffers from poor linearity and its dynamic range is limited. We therefore propose a new ToT technique called the dynamic time-over-threshold (dToT) method [4]. A new signal processing system using dToT and CR-RC shaping demonstrated much better linearity than that of a conventional ToT. Using a test circuit with a new Gd3Al2Ga3O12 (GAGG) scintillator and an avalanche photodiode, the pulse height spectra of 137Cs and 22Na sources were measured with high linearity. Based on these results, we designed a new application-specific integrated circuit (ASIC) for this multi-channel dToT system, measured the spectra of a 22Na source, and investigated the linearity of the system.
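    The ToT principle is easy to demonstrate numerically. The sketch below measures conventional ToT on a CR-RC shaped pulse and contrasts it with a crude dynamic threshold; the linear ramp stands in for the paper's dynamic threshold, whose actual waveform is not given in the abstract, so all parameters here are illustrative assumptions.

```python
import numpy as np

def time_over_threshold(pulse, threshold, dt):
    """Width of the interval during which a sampled pulse exceeds the
    threshold. `threshold` may be a scalar (conventional ToT) or an
    array sampled at the same times (a crude stand-in for the dynamic
    threshold of the dToT method)."""
    above = pulse > threshold
    return np.count_nonzero(above) * dt

# CR-RC shaped pulse: A * (t/tau) * exp(1 - t/tau), peak height A at t = tau.
dt, tau = 1e-9, 100e-9
t = np.arange(0, 2000e-9, dt)
shape = (t / tau) * np.exp(1 - t / tau)

for amplitude in (0.5, 1.0, 2.0, 4.0):
    pulse = amplitude * shape
    fixed = time_over_threshold(pulse, 0.2, dt)        # conventional ToT
    ramp = 0.05 + (t / 2000e-9) * 0.5                  # rising threshold
    dynamic = time_over_threshold(pulse, ramp, dt)     # dToT-style
    print(f"A={amplitude}: ToT={fixed*1e9:.0f} ns, dToT={dynamic*1e9:.0f} ns")
```

Running the loop over amplitudes illustrates the abstract's point: with a fixed threshold the ToT grows only logarithmically with pulse height, while a time-varying threshold can reshape that relationship toward better linearity.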

  20. THRESHOLD PARAMETER OF THE EXPECTED LOSSES

    Directory of Open Access Journals (Sweden)

    Josip Arnerić

    2012-12-01

    Full Text Available The objective of extreme value analysis is to quantify the probabilistic behavior of unusually large losses using only extreme values above some high threshold rather than all of the data, which gives a better fit to the tail distribution in comparison to traditional methods that assume normality. In our case we estimate market risk using daily returns of the CROBEX index at the Zagreb Stock Exchange. It is therefore necessary to define the excess distribution above some threshold; the Generalized Pareto Distribution (GPD) is used as much more reliable than the normal distribution because it emphasizes the extreme values. Parameters of the GPD will be estimated using the maximum likelihood method (MLE). The contribution of this paper is to specify a threshold which is large enough that the GPD approximation is valid but low enough that a sufficient number of observations are available for a precise fit.
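    A minimal peaks-over-threshold sketch follows, assuming SciPy is available; the heavy-tailed synthetic losses stand in for CROBEX returns, and the 95th-percentile threshold choice is an illustrative assumption, not the paper's selection rule.

```python
import numpy as np
from scipy.stats import genpareto

# Fit a Generalized Pareto Distribution to losses exceeding a threshold u,
# as in peaks-over-threshold analysis of daily index returns.
rng = np.random.default_rng(5)
losses = rng.standard_t(df=4, size=5000)   # heavy-tailed stand-in for -returns

u = np.quantile(losses, 0.95)              # threshold: 95th percentile
excesses = losses[losses > u] - u

# MLE of GPD shape (xi) and scale (beta); location fixed at 0 for excesses.
xi, _, beta = genpareto.fit(excesses, floc=0.0)

# Tail quantile (Value at Risk) at level q, from the standard POT formula.
q = 0.99
n, n_u = losses.size, excesses.size
var_q = u + (beta / xi) * ((n / n_u * (1 - q)) ** (-xi) - 1)
print(f"u={u:.3f}, xi={xi:.3f}, beta={beta:.3f}, VaR(99%)={var_q:.3f}")
```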

  1. Adaptative Techniques to Reduce Power in Digital Circuits

    Directory of Open Access Journals (Sweden)

    Bharadwaj Amrutur

    2011-07-01

    Full Text Available CMOS chips are engineered with sufficient performance margins to ensure that they meet the target performance under worst-case operating conditions. Consequently, excess power is consumed in most cases, when the operating conditions are more benign. This article reviews a suite of dynamic power minimization techniques that have been developed recently to reduce power consumption based on actual operating conditions. We discuss commonly used techniques like Dynamic Power Switching (DPS), Dynamic Voltage and Frequency Scaling (DVS and DVFS), and Adaptive Voltage Scaling (AVS). Recent efforts to extend these to cover threshold voltage adaptation via Dynamic Voltage and Threshold Scaling (DVTS) are also presented. The computation rate is also adapted to actual workload requirements by dynamically changing the hardware parallelism or by controlling the number of operations performed. These are explained with examples from the application domains of media and wireless signal processing.

  2. Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement

    Science.gov (United States)

    Shervani-Tabar, Navid; Vasilyev, Oleg V.

    2016-11-01

    This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods which rely on the unit normal vector, the Stabilized Conservative Level Set (SCLS) method uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the direction normal to the interface, thus preserving the conservative level set properties, while away from the interface the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to the need for a finer resolution in the vicinity of the interface in comparison with the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt on steep gradients in the solution while retaining a predetermined order of accuracy.

  3. Successful adaptation of a research methods course in South America.

    Science.gov (United States)

    Tamariz, Leonardo; Vasquez, Diego; Loor, Cecilia; Palacio, Ana

    2017-01-01

    South America has low research productivity. The lack of a structured research curriculum is one of the barriers to conducting research. We report our experience adapting an active-learning-based research methods curriculum to improve research productivity at a university in Ecuador. We used a mixed-methods approach to test the adaptation of the research curriculum at Universidad Catolica Santiago de Guayaquil. The curriculum uses a flipped-classroom and active-learning approach to teach research methods. As adapted, it was longitudinal, with a 16-hour programme of in-person teaching and a six-month online follow-up component. Learners were organized in theme groups according to interest, and each group had a faculty leader. Our primary outcome was research productivity, measured by the successful presentation of the research project at a national meeting or publication in a peer-reviewed journal. Our secondary outcomes were knowledge and perceived competence before and after course completion. We conducted qualitative interviews of faculty members and students to evaluate themes related to participation in research. Fifty university students and 10 faculty members attended the course, in a total of 15 groups. Both knowledge and perceived competence increased by 17 and 18 percentage points, respectively. The presentation or publication rate for the entire group was 50%. The qualitative analysis showed that a lack of research culture and curriculum were common barriers to research. A US-based curriculum can be successfully adapted in low-middle-income countries. A research curriculum aids in achieving pre-determined milestones. UCSG: Universidad Catolica Santiago de Guayaquil; UM: University of Miami.

  4. The Sensory Difference Threshold of Menthol Odor in Flavored Tobacco Determined by Combining Sensory and Chemical Analysis.

    Science.gov (United States)

    Krüsemann, Erna J Z; Cremers, Johannes W J M; Visser, Wouter F; Punter, Pieter H; Talhout, Reinskje

    2017-03-01

    Cigarettes are an often-used consumer product, and flavor is an important determinant of their product appeal. Cigarettes with strong nontobacco flavors are popular among young people and may facilitate smoking initiation. Discriminating flavors in tobacco is important for regulation purposes, for instance to set upper limits on the levels of important flavor additives. We provide a simple and fast method to determine the human odor difference threshold for flavor additives in a tobacco matrix, using a combination of chemical and sensory analysis. As an example, the human difference threshold for menthol odor, one of the most frequently used tobacco flavors, was determined. A consumer panel consisting of 20 women compared different concentrations of menthol-flavored tobacco to unflavored cigarette tobacco using the 2-alternative forced choice method. Components contributing to menthol odor were quantified using headspace GC-MS. The sensory difference threshold of menthol odor corresponded to a mixture of 43 (37-50)% menthol-flavored tobacco, containing 1.8 (1.6-2.1) mg menthol, 2.7 (2.3-3.1) µg menthone, and 1.0 (0.9-1.2) µg neomenthyl acetate per gram of tobacco. Such a method is important in the context of the European Tobacco Product Directive and the US Food and Drug Administration Tobacco Control Act, which both prohibit cigarettes and roll-your-own tobacco with a characterizing flavor other than tobacco. Our method can also be adapted for matrices other than tobacco, such as food. © The Author 2016. Published by Oxford University Press.

  5. [Comparative adaptation of crowns of selective laser melting and wax-lost-casting method].

    Science.gov (United States)

    Li, Guo-qiang; Shen, Qing-yi; Gao, Jian-hua; Wu, Xue-ying; Chen, Li; Dai, Wen-an

    2012-07-01

    To investigate the marginal adaptation of crowns fabricated by selective laser melting (SLM) and the wax-lost-casting method, so as to provide an experimental basis for clinical use, Co-Cr alloy full crowns were fabricated by SLM and wax-lost-casting, with 24 samples in each group. All crowns were cemented with zinc phosphate cement and cut along the longitudinal axis by a wire-cutting machine. The gap between the crown tissue surface and the die was measured by a 6-point measuring method with scanning electron microscopy (SEM). The marginal adaptation of crowns fabricated by SLM and wax-lost-casting was compared statistically. The gaps for SLM crowns were (36.51 ± 2.94), (49.36 ± 3.31), (56.48 ± 3.35), and (42.20 ± 3.60) µm, and those for wax-lost-casting crowns were (68.86 ± 5.41), (58.86 ± 6.10), (70.62 ± 5.79), and (69.90 ± 6.00) µm. There were significant differences between the two groups (P < 0.05). Both the wax-lost-casting method and the SLM method provide acceptable marginal adaptation in clinic, and the marginal adaptation of SLM is better than that of wax-lost-casting.

  6. Rejection thresholds in solid chocolate-flavored compound coating.

    Science.gov (United States)

    Harwood, Meriel L; Ziegler, Gregory R; Hayes, John E

    2012-10-01

    Classical detection thresholds do not predict liking, as they focus on the presence or absence of a sensation. Recently, however, Prescott and colleagues described a new method, the rejection threshold, in which a series of forced-choice preference tasks is used to generate a dose-response function that determines hedonically acceptable concentrations. That is, how much is too much? To date, this approach has been used exclusively in liquid foods. Here, we determined group rejection thresholds for bitterness in solid chocolate-flavored compound coating. The influences of self-identified preferences for milk or dark chocolate, as well as eating style (chewers compared to melters), on rejection thresholds were investigated. Stimuli included milk chocolate-flavored compound coating spiked with increasing amounts of sucrose octaacetate, a bitter and generally recognized as safe additive. Paired preference tests (blank compared to spike) were used to determine the proportion of the group that preferred the blank. Across pairs, spiked samples were presented in ascending concentration. We were able to quantify and compare differences between 2 self-identified market segments. The rejection threshold for the dark chocolate preferring group was significantly higher than for the milk chocolate preferring group (P = 0.01). Conversely, eating style did not affect group rejection thresholds (P = 0.14), although this may reflect the amount of chocolate given to participants. Additionally, there was no association between chocolate preference and eating style (P = 0.36). The present work supports the contention that this method can be used to examine preferences within specific market segments and potentially individual differences as they relate to ingestive behavior. This work makes use of the rejection threshold method to study market segmentation, extending its use to solid foods. We believe this method has broad applicability to the sensory specialist and product developer by providing a
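    A group rejection threshold can be estimated from paired-preference data as sketched below. The 75% preference-for-blank criterion and the log-linear fit are common operational choices, assumed here for illustration; the counts and concentrations are invented, not the study's data.

```python
import numpy as np

# Paired-preference data: at each spike level, the number of panelists
# (out of n) who preferred the unspiked (blank) coating. Illustrative.
conc = np.array([0.1, 0.3, 0.9, 2.7, 8.1])      # g additive / kg, ascending
prefer_blank = np.array([9, 11, 13, 16, 19])
n = 20
p = prefer_blank / n

# Fit proportion vs. log-concentration with a line and solve for the
# concentration at which preference for the blank reaches 75%, one
# common operational definition of the group rejection threshold.
slope, intercept = np.polyfit(np.log10(conc), p, 1)
log_rt = (0.75 - intercept) / slope
print(f"group rejection threshold ~ {10**log_rt:.2f} g/kg")
```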

  7. Adaptive Change Detection for Long-Term Machinery Monitoring Using Incremental Sliding-Window

    Science.gov (United States)

    Wang, Teng; Lu, Guo-Liang; Liu, Jie; Yan, Peng

    2017-11-01

    Detection of structural changes in an operational process is a major goal in machine condition monitoring. Existing methods for this purpose are mainly based on retrospective analysis, resulting in a large detection delay that limits their use in real applications. This paper presents a new adaptive real-time change detection algorithm, extending recent research by combining it with an incremental sliding-window strategy, to handle multi-change detection in long-term monitoring of machine operations. In this framework, Hilbert space embedding of distributions is used to map the original data into the Reproducing Kernel Hilbert Space (RKHS) for change detection; then, a new adaptive threshold strategy is developed for making the change decision, in which a global factor (used to control the coarse-to-fine level of detection) replaces the fixed threshold value. Through experiments on a range of real test data collected from an experimental rotating machinery system, the excellent detection performance of the algorithm for engineering applications was demonstrated. Compared with state-of-the-art methods, the proposed algorithm is more suitable for long-term machinery condition monitoring without any manual re-calibration, and is thus promising in modern industries.
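    The combination of a kernel embedding distance with an adaptive threshold can be sketched as follows. An MMD statistic between adjacent windows stands in for the paper's embedding-based statistic, and the mean-plus-g-standard-deviations rule is an assumed stand-in for its global-factor threshold; all parameters are illustrative.

```python
import numpy as np

def mmd2(x, y, gamma=0.5):
    """Squared Maximum Mean Discrepancy with an RBF kernel, a simple
    Hilbert-space-embedding distance between two sample windows."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-gamma * d ** 2)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Signal with a change in distribution at t = 500.
rng = np.random.default_rng(6)
sig = np.concatenate([rng.normal(0, 1, 500), rng.normal(2, 1, 500)])

w, g = 50, 3.0                       # window size; global sensitivity factor
scores, changes = [], []
for t in range(2 * w, len(sig)):
    s = mmd2(sig[t - 2 * w:t - w], sig[t - w:t])
    # Adaptive threshold: mean + g * std of previously seen scores.
    if len(scores) > 10 and s > np.mean(scores) + g * np.std(scores):
        changes.append(t - w)        # change located near window boundary
    scores.append(s)
print("detected change points near:", changes[:3])
```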

  8. A generalized adaptive mathematical morphological filter for LIDAR data

    Science.gov (United States)

    Cui, Zheng

    Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes, such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous LIDAR point dataset using a filter, because the DTM is created by interpolating ground points. As one of the widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying the LIDAR data at the point level, a linear computational complexity, and preserving the geometric shapes of terrain features. The filter works well in an urban setting with a gentle slope and a mixture of vegetation and buildings. However, the PM filter often removes ground measurements incorrectly at topographic highs, along with large non-ground objects, because it uses a constant threshold slope, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes of topographic slopes and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points removed incorrectly by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that the GAPM filter reduces the filtering error by 20% on average, compared with the method used by the popular commercial software TerraScan. The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in

  9. A Novel Unsupervised Adaptive Learning Method for Long-Term Electromyography (EMG) Pattern Recognition

    Science.gov (United States)

    Huang, Qi; Yang, Dapeng; Jiang, Li; Zhang, Huajie; Liu, Hong; Kotani, Kiyoshi

    2017-01-01

    Performance degradation will be caused by a variety of interfering factors for pattern recognition-based myoelectric control methods in the long term. This paper proposes an adaptive learning method with low computational cost to mitigate this effect in unsupervised adaptive learning scenarios. We present a particle adaptive classifier (PAC), constructed from a particle adaptive learning strategy and a universal incremental least squares support vector classifier (LS-SVC). We compared PAC performance with an incremental support vector classifier (ISVC) and a non-adapting SVC (NSVC) in a long-term pattern recognition task in both unsupervised and supervised adaptive learning scenarios. Retraining time cost and recognition accuracy were compared by validating the classification performance on both simulated and realistic long-term EMG data. The classification results on realistic long-term EMG data showed that the PAC significantly decreased the performance degradation in unsupervised adaptive learning scenarios compared with NSVC (9.03% ± 2.23%, p < 0.05) and ISVC (13.38% ± 2.62%, p = 0.001), and reduced the retraining time cost compared with ISVC (2 ms per updating cycle vs. 50 ms per updating cycle). PMID:28608824

  10. A Novel Unsupervised Adaptive Learning Method for Long-Term Electromyography (EMG Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Qi Huang

    2017-06-01

    Full Text Available Performance degradation will be caused by a variety of interfering factors for pattern recognition-based myoelectric control methods in the long term. This paper proposes an adaptive learning method with low computational cost to mitigate this effect in unsupervised adaptive learning scenarios. We present a particle adaptive classifier (PAC), constructed from a particle adaptive learning strategy and a universal incremental least squares support vector classifier (LS-SVC). We compared PAC performance with an incremental support vector classifier (ISVC) and a non-adapting SVC (NSVC) in a long-term pattern recognition task in both unsupervised and supervised adaptive learning scenarios. Retraining time cost and recognition accuracy were compared by validating the classification performance on both simulated and realistic long-term EMG data. The classification results on realistic long-term EMG data showed that the PAC significantly decreased the performance degradation in unsupervised adaptive learning scenarios compared with NSVC (9.03% ± 2.23%, p < 0.05) and ISVC (13.38% ± 2.62%, p = 0.001), and reduced the retraining time cost compared with ISVC (2 ms per updating cycle vs. 50 ms per updating cycle).

  11. Genotoxic thresholds, DNA repair, and susceptibility in human populations

    International Nuclear Information System (INIS)

    Jenkins, Gareth J.S.; Zair, Zoulikha; Johnson, George E.; Doak, Shareen H.

    2010-01-01

    It has long been assumed that DNA damage is induced in a linear manner with respect to the dose of a direct-acting genotoxin. Thus, it is implied that direct-acting genotoxic agents induce DNA damage at even the lowest of concentrations and that no 'safe' dose range exists. The linear (non-threshold) paradigm has led to the one-hit model being developed. This 'one-hit' scenario can be interpreted such that a single DNA damaging event in a cell has the capability to induce a single point mutation in that cell, which could (if positioned in a key growth-controlling gene) lead to increased proliferation, leading ultimately to the formation of a tumour. Many groups (including our own) have argued for a decade or more that low-dose exposures to direct-acting genotoxins may be tolerated by cells through homeostatic mechanisms such as DNA repair. This argument stems from the existence of evolutionary adaptive mechanisms that allow organisms to adapt to low levels of exogenous sources of genotoxins. We have been particularly interested in the genotoxic effects of known mutagens at low-dose exposures in human cells and have identified, for the first time, in vitro genotoxic thresholds for several mutagenic alkylating agents (Doak et al., 2007). Our working hypothesis is that DNA repair is primarily responsible for these thresholded effects at low doses, by removing low levels of DNA damage but becoming saturated at higher doses. We are currently assessing the roles of base excision repair (BER) and methylguanine-DNA methyltransferase (MGMT) in the identified thresholds (Doak et al., 2008). This research area is currently important as it assesses whether 'safe' exposure levels to mutagenic chemicals can exist and allows risk assessment using appropriate safety factors to define such exposure levels. Given human variation, the mechanistic basis for genotoxic thresholds (e.g. DNA repair) has to be well defined in order that susceptible individuals are

  12. EbayesThresh: R Programs for Empirical Bayes Thresholding

    Directory of Open Access Journals (Sweden)

    Iain Johnstone

    2005-04-01

    Full Text Available Suppose that a sequence of unknown parameters is observed subject to independent Gaussian noise. The EbayesThresh package in the S language implements a class of Empirical Bayes thresholding methods that can take advantage of possible sparsity in the sequence to improve the quality of estimation. The prior for each parameter in the sequence is a mixture of an atom of probability at zero and a heavy-tailed density. Within the package, this can be either a Laplace (double exponential) density or else a mixture of normal distributions with tail behavior similar to the Cauchy distribution. The mixing weight, or sparsity parameter, is chosen automatically by marginal maximum likelihood. If estimation is carried out using the posterior median, this is a random thresholding procedure; the estimation can also be carried out using other thresholding rules with the same threshold, and the package provides the posterior mean, and hard and soft thresholding, as additional options. This paper reviews the method and gives details (far beyond those previously published) of the calculations needed for implementing the procedures. It explains and motivates both the general methodology and the use of the EbayesThresh package, through simulated and real data examples. When estimating the wavelet transform of an unknown function, it is appropriate to apply the method level by level to the transform of the observed data. The package can carry out these calculations for wavelet transforms obtained using various packages in R and S-PLUS. Details, including a motivating example, are presented, and the application of the method to image estimation is also explored. The final topic considered is the estimation of a single sequence that may become progressively sparser along the sequence. An iterated least squares isotone regression method allows for the choice of a threshold that depends monotonically on the order in which the observations are made. An alternative
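    EbayesThresh itself is an R/S package; for consistency with the other sketches here, the hard and soft thresholding rules it offers are shown in Python. The universal threshold sqrt(2 log n) is a simple stand-in for the package's marginal-maximum-likelihood threshold choice, an assumption made only to keep the sketch self-contained.

```python
import numpy as np

def hard_threshold(x, t):
    """Keep observations whose magnitude exceeds t; zero the rest."""
    return np.where(np.abs(x) > t, x, 0.0)

def soft_threshold(x, t):
    """Shrink toward zero by t; exactly zero within [-t, t]."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Sparse normal-means problem: most coordinates are exactly zero.
rng = np.random.default_rng(7)
theta = np.zeros(1000)
theta[:25] = rng.normal(0, 7, 25)
x = theta + rng.normal(0, 1, 1000)

t = np.sqrt(2 * np.log(x.size))          # universal threshold, sigma = 1
for name, est in [("hard", hard_threshold(x, t)),
                  ("soft", soft_threshold(x, t))]:
    mse = np.mean((est - theta) ** 2)
    print(f"{name}: MSE={mse:.4f} vs raw {np.mean((x - theta) ** 2):.4f}")
```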

  13. Sealing Clay Text Segmentation Based on Radon-Like Features and Adaptive Enhancement Filters

    Directory of Open Access Journals (Sweden)

    Xia Zheng

    2015-01-01

    Full Text Available Text extraction is a key issue in sealing clay research. The traditional method based on rubbings increases the risk of damage to the sealing clay and is unfavorable to its protection. Therefore, using digital images of sealing clay, a new method for text segmentation based on Radon-like features and adaptive enhancement filters is proposed in this paper. First, an adaptive enhancement LM filter bank is used to get the maximum energy image; second, the edge image of the maximum energy image is calculated; finally, Radon-like feature images are generated by combining the maximum energy image and its edge image. The average of the Radon-like feature images is segmented by the image thresholding method. Compared with 2D Otsu, GA, and FastFCM, the experimental results show that this method performs better in terms of accuracy and completeness of the text.

  14. Solving point reactor kinetic equations by time step-size adaptable numerical methods

    International Nuclear Information System (INIS)

    Liao Chaqing

    2007-01-01

    Based on an analysis of the effects of time step-size on numerical solutions, this paper showed the necessity of step-size adaptation. Based on the relationship between error and step-size, two step-size adaptation methods for solving initial value problems (IVPs) were introduced: the Two-Step Method and the Embedded Runge-Kutta Method. The PRKEs were solved by the implicit Euler method with step sizes optimized using the Two-Step Method. It was observed that the control error has an important influence on the step size and the accuracy of solutions. With suitable control errors, the solutions of the PRKEs computed by the above-mentioned method are reasonably accurate. The accuracy and usage of the MATLAB built-in ODE solvers ode23 and ode45, both of which adopt the Runge-Kutta-Fehlberg method, were also studied and discussed. (authors)
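    The two-step (step-doubling) idea compares one full step against two half steps and adapts the step size from the estimated local error. A minimal sketch follows, using implicit Euler on a scalar decay equation as a stand-in for the point reactor kinetic equations; the tolerance, safety factor, and growth limits are illustrative assumptions.

```python
import numpy as np

def implicit_euler_step(y, h, lam):
    """One implicit Euler step for y' = lam * y (linear, solved exactly)."""
    return y / (1.0 - h * lam)

def adaptive_integrate(y0, lam, t_end, tol, h0=0.1):
    """Two-Step (step-doubling) adaptation: estimate the local error as the
    difference between one step of size h and two steps of size h/2, then
    grow or shrink h to keep the estimate near the control error `tol`."""
    t, y, h = 0.0, y0, h0
    while t < t_end:
        h = min(h, t_end - t)
        y_full = implicit_euler_step(y, h, lam)
        y_half = implicit_euler_step(implicit_euler_step(y, h / 2, lam),
                                     h / 2, lam)
        err = abs(y_full - y_half)
        if err <= tol:                       # accept, keep the better value
            t, y = t + h, y_half
        # First-order method: local error ~ h^2, so scale h by sqrt(tol/err).
        h *= min(2.0, max(0.2, 0.9 * np.sqrt(tol / max(err, 1e-16))))
    return y

y = adaptive_integrate(1.0, lam=-5.0, t_end=1.0, tol=1e-6)
print(f"y(1) = {y:.6e}, exact = {np.exp(-5.0):.6e}")
```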

  15. An adaptive image denoising method based on local parameters

    Indian Academy of Sciences (India)

    term, i.e., individual pixels, or block-by-block, i.e., a group of pixels, using a suitable shrinkage factor and threshold function. The shrinkage factor is generally a function of the threshold and some other characteristics of the neighbouring pixels of the ...

  16. Optimal threshold functions for fault detection and isolation

    DEFF Research Database (Denmark)

    Stoustrup, J.; Niemann, Hans Henrik; Cour-Harbo, A. la

    2003-01-01

    Fault diagnosis systems usually comprise two parts: a filtering part and a decision part, the latter typically based on threshold functions. In this paper, systematic ways to choose the threshold values are proposed. Two different test functions for the filtered signals are discussed and a method

  17. Identifying thresholds for ecosystem-based management.

    Directory of Open Access Journals (Sweden)

    Jameal F Samhouri

    Full Text Available BACKGROUND: One of the greatest obstacles to moving ecosystem-based management (EBM) from concept to practice is the lack of a systematic approach to defining ecosystem-level decision criteria, or reference points that trigger management action. METHODOLOGY/PRINCIPAL FINDINGS: To assist resource managers and policymakers in developing EBM decision criteria, we introduce a quantitative, transferable method for identifying utility thresholds. A utility threshold is the level of human-induced pressure (e.g., pollution) at which small changes produce substantial improvements toward the EBM goal of protecting an ecosystem's structural (e.g., diversity) and functional (e.g., resilience) attributes. The analytical approach is based on the detection of nonlinearities in relationships between ecosystem attributes and pressures. We illustrate the method with a hypothetical case study of (1) fishing and (2) nearshore habitat pressure using an empirically validated marine ecosystem model for British Columbia, Canada, and derive numerical threshold values in terms of the density of two empirically tractable indicator groups, sablefish and jellyfish. We also describe how to incorporate uncertainty into the estimation of utility thresholds and highlight their value in the context of understanding EBM trade-offs. CONCLUSIONS/SIGNIFICANCE: For any policy scenario, an understanding of utility thresholds provides insight into the amount and type of management intervention required to make significant progress toward improved ecosystem structure and function. The approach outlined in this paper can be applied in the context of single or multiple human-induced pressures, to any marine, freshwater, or terrestrial ecosystem, and should facilitate more effective management.
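    Detecting nonlinearities in a pressure-response relationship can be approximated with a broken-stick fit, used below as a simple proxy for a utility threshold; the breakpoint-scan approach and the synthetic ecosystem data are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def breakpoint_fit(pressure, response):
    """Broken-stick regression: scan candidate breakpoints and keep the one
    minimizing the total squared error of two independent line fits. The
    chosen breakpoint is a simple proxy for a utility threshold."""
    order = np.argsort(pressure)
    x, y = pressure[order], response[order]
    best_sse, best_bp = np.inf, None
    for k in range(3, len(x) - 3):           # need >= 3 points per segment
        sse = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coef = np.polyfit(xs, ys, 1)
            sse += np.sum((np.polyval(coef, xs) - ys) ** 2)
        if sse < best_sse:
            best_sse, best_bp = sse, x[k]
    return best_bp

# Illustrative ecosystem attribute: flat, then declining past a threshold.
rng = np.random.default_rng(9)
pressure = np.sort(rng.uniform(0, 10, 60))
attribute = np.where(pressure < 6, 1.0, 1.0 - 0.3 * (pressure - 6))
attribute += rng.normal(0, 0.05, 60)
print(f"estimated utility threshold: {breakpoint_fit(pressure, attribute):.2f}")
```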

  18. Multilevel Thresholding Method Based on Electromagnetism for Accurate Brain MRI Segmentation to Detect White Matter, Gray Matter, and CSF

    Directory of Open Access Journals (Sweden)

    G. Sandhya

    2017-01-01

    Full Text Available This work explains an advanced and accurate brain MRI segmentation method. MR brain image segmentation is used to learn the anatomical structure, to identify abnormalities, and to detect the various tissues that inform treatment planning prior to radiation therapy. The proposed technique is a Multilevel Thresholding (MT) method based on the phenomenon of electromagnetism, and it segments the image into three tissues: White Matter (WM), Gray Matter (GM), and CSF. The approach incorporates skull stripping and filtering using an anisotropic diffusion filter in the preprocessing stage. The thresholding method uses the force of attraction-repulsion between charged particles to increase the population; it combines the Electromagnetism-Like optimization algorithm with the Otsu and Kapur objective functions. The results obtained using the proposed method are compared with ground-truth images and give the best values for sensitivity, specificity, and segmentation accuracy. Results on 10 MR brain images show that the proposed method segments the three brain tissues more accurately than existing segmentation methods such as K-means, fuzzy C-means, Otsu MT, Particle Swarm Optimization (PSO), Bacterial Foraging Algorithm (BFA), Genetic Algorithm (GA), and the Fuzzy Local Gaussian Mixture Model (FLGMM).
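    Two-threshold Otsu segmentation into three classes can be sketched directly; an exhaustive search stands in for the paper's electromagnetism-like optimizer, and the three Gaussian intensity classes are invented stand-ins for CSF/GM/WM.

```python
import numpy as np
from itertools import combinations

def two_level_otsu(image, nbins=64):
    """Exhaustive two-threshold Otsu: maximize between-class variance over
    all (t1, t2) pairs, partitioning intensities into three classes."""
    hist, edges = np.histogram(image, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    mu_total = (p * centers).sum()
    best, best_pair = -1.0, None
    for i, j in combinations(range(1, nbins), 2):
        var_b = 0.0
        for lo, hi in ((0, i), (i, j), (j, nbins)):
            w = p[lo:hi].sum()
            if w > 0:
                mu = (p[lo:hi] * centers[lo:hi]).sum() / w
                var_b += w * (mu - mu_total) ** 2
        if var_b > best:
            best, best_pair = var_b, (centers[i], centers[j])
    return best_pair

# Illustrative "brain" intensities: three Gaussian tissue classes.
rng = np.random.default_rng(8)
img = np.concatenate([rng.normal(50, 8, 3000),    # CSF-like
                      rng.normal(110, 8, 4000),   # GM-like
                      rng.normal(170, 8, 3000)])  # WM-like
t1, t2 = two_level_otsu(img)
print(f"thresholds: {t1:.0f}, {t2:.0f}")   # expect near the class boundaries
```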

  19. AUTHENTICATION ARCHITECTURE USING THRESHOLD CRYPTOGRAPHY IN KERBEROS FOR MOBILE AD HOC NETWORKS

    Directory of Open Access Journals (Sweden)

    Hadj Gharib

    2014-06-01

    Full Text Available The use of wireless technologies is gradually increasing, and the risks related to these technologies are considerable. Due to their dynamically changing topology and open environment without the centralized policy control of a traditional network, mobile ad hoc networks (MANETs) are vulnerable to the presence of malicious nodes and to attacks. The ideal solution to overcome a myriad of security concerns in MANETs is the use of a reliable authentication architecture. In this paper we propose a new key management scheme based on threshold cryptography in Kerberos for MANETs; the proposed scheme uses the elliptic curve cryptography method, which consumes fewer resources and is well adapted to the wireless environment. Our approach shows strength and effectiveness against attacks.

  20. Adaptive and dynamic meshing methods for numerical simulations

    Science.gov (United States)

    Acikgoz, Nazmiye

    ad-hoc application of the simulated annealing technique, which improves the likelihood of removing poor elements from the grid. Moreover, a local implementation of the simulated annealing is proposed to reduce the computational cost. Many challenging multi-physics and multi-field problems that are unsteady in nature are characterized by moving boundaries and/or interfaces. When the boundary displacements are large, which typically occurs when implicit time marching procedures are used, degenerate elements are easily formed in the grid such that frequent remeshing is required. To deal with this problem, in the second part of this work, we propose a new r-adaptation methodology. The new technique is valid for both simplicial (e.g., triangular, tet) and non-simplicial (e.g., quadrilateral, hex) deforming grids that undergo large imposed displacements at their boundaries. A two- or three-dimensional grid is deformed using a network of linear springs composed of edge springs and a set of virtual springs. The virtual springs are constructed in such a way as to oppose element collapsing. This is accomplished by confining each vertex to its ball through springs that are attached to the vertex and its projection on the ball entities. The resulting linear problem is solved using a preconditioned conjugate gradient method. The new method is compared with the classical spring analogy technique in two- and three-dimensional examples, highlighting the performance improvements achieved by the new method. Meshes are an important part of numerical simulations. Depending on the geometry and flow conditions, the most suitable mesh for each particular problem is different. Meshes are usually generated by either using a suitable software package or solving a PDE. In both cases, engineering intuition plays a significant role in deciding where clusterings should take place. In addition, for unsteady problems, the gradients vary for each time step, which requires frequent remeshing during simulations

  1. Adaptive implicit method for thermal compositional reservoir simulation

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, A.; Tchelepi, H.A. [Society of Petroleum Engineers, Richardson, TX (United States); Stanford Univ., Palo Alto (United States)]

    2008-10-15

    As the global demand for oil increases, thermal enhanced oil recovery techniques are becoming increasingly important. Numerical reservoir simulation of thermal methods such as steam assisted gravity drainage (SAGD) is complex and requires a solution of nonlinear mass and energy conservation equations on a fine reservoir grid. The technique currently most used for solving these equations is the Fully IMplicit (FIM) method, which is unconditionally stable, allowing for large timesteps in simulation. However, it is computationally expensive. On the other hand, the method known as IMplicit pressure explicit saturations, temperature and compositions (IMPEST) is computationally inexpensive, but it is only conditionally stable and restricts the timestep size. To improve the balance between the timestep size and computational cost, the thermal adaptive IMplicit (TAIM) method uses stability criteria and a switching algorithm, where some simulation variables such as pressure, saturations, temperature, and compositions are treated implicitly while others are treated with explicit schemes. This presentation described ongoing research on TAIM with particular reference to thermal displacement processes, including: the stability criteria that dictate the maximum allowed timestep size for simulation, based on the von Neumann linear stability analysis method; the switching algorithm that adapts the labeling of reservoir variables as implicit or explicit as a function of space and time; and complex physical behaviors such as heat and fluid convection, thermal conduction, and compressibility. Key numerical results obtained by enhancing Stanford's General Purpose Research Simulator (GPRS) were also presented along with a list of research challenges. 14 refs., 2 tabs., 11 figs., 1 appendix.

  2. Adaptive decoupled power control method for inverter connected DG

    DEFF Research Database (Denmark)

    Sun, Xiaofeng; Tian, Yanjun; Chen, Zhe

    2014-01-01

    an adaptive droop control method based on online evaluation of the power decoupling matrix for inverter-connected distributed generations in distribution systems. Traditional decoupled power control is based simply on the line impedance parameters, but the load characteristics also cause power coupling, and alter

  3. Control of beam halo-chaos using neural network self-adaptation method

    International Nuclear Information System (INIS)

    Fang Jinqing; Huang Guoxian; Luo Xiaoshu

    2004-11-01

    Taking advantage of neural network control methods for nonlinear complex systems, control of beam halo-chaos in the periodic focusing channels (networks) of high-intensity accelerators is studied by a feed-forward back-propagating neural network self-adaptation method. The envelope radius of a high-intensity proton beam is brought to the matched beam radius by suitably selecting the control structure of the neural network and the linear feedback coefficient, and by adjusting the weight coefficients of the neural network. The beam halo-chaos is obviously suppressed and the shaking size is greatly reduced after the neural network self-adaptation control is applied. (authors)

  4. An adaptive phase space method with application to reflection traveltime tomography

    International Nuclear Information System (INIS)

    Chung, Eric; Qian, Jianliang; Uhlmann, Gunther; Zhao, Hongkai

    2011-01-01

    In this work, an adaptive strategy for the phase space method for traveltime tomography (Chung et al 2007 Inverse Problems 23 309–29) is developed. The method first uses those geodesics/rays that produce smaller mismatch with the measurements and continues on in the spirit of layer stripping without defining the layers explicitly. The adaptive approach improves stability, efficiency and accuracy. We then extend our method to reflection traveltime tomography by incorporating broken geodesics/rays for which a jump condition has to be imposed at the broken point for the geodesic flow. In particular, we show that our method can distinguish non-broken and broken geodesics in the measurement and utilize them accordingly in reflection traveltime tomography. We demonstrate that our method can recover the convex hull (with respect to the underlying metric) of unknown obstacles as well as the metric outside the convex hull. (paper)

  5. Cool, warm, and heat-pain detection thresholds: testing methods and inferences about anatomic distribution of receptors.

    Science.gov (United States)

    Dyck, P J; Zimmerman, I; Gillen, D A; Johnson, D; Karnes, J L; O'Brien, P C

    1993-08-01

    We recently found that the vibratory detection threshold is greatly influenced by the algorithm of testing. Here, we study the influence of stimulus characteristics and the algorithm of testing and estimating threshold on cool (CDT), warm (WDT), and heat-pain (HPDT) detection thresholds. We show that continuously decreasing (for CDT) or increasing (for WDT) the thermode temperature to the point at which cooling or warming is perceived and signaled by depressing a response key ("appearance" threshold) overestimates threshold at rapid rates of thermal change. The mean of the appearance and disappearance thresholds also does not perform well for insensitive sites and patients. Pyramidal (or flat-topped pyramidal) stimuli ranging in magnitude, in 25 steps, from near skin temperature to 9 degrees C for 10 seconds (for CDT), from near skin temperature to 45 degrees C for 10 seconds (for WDT), and from near skin temperature to 49 degrees C for 10 seconds (for HPDT) provide ideal stimuli for use in several algorithms of testing and estimating threshold. Near threshold, only the initial direction of thermal change from skin temperature is perceived, and not its return to baseline. Use of steps of stimulus intensity allows the subject or patient to take the needed time to decide whether the stimulus was felt or not (in 4, 2, and 1 stepping algorithms), or whether it occurred in stimulus interval 1 or 2 (in two-alternative forced-choice testing). Thermal thresholds were generally significantly lower with a large (10 cm2) than with a small (2.7 cm2) thermode. (ABSTRACT TRUNCATED AT 250 WORDS)

  6. An adaptive angle-doppler compensation method for airborne bistatic radar based on PAST

    Science.gov (United States)

    Hang, Xu; Jun, Zhao

    2018-05-01

    The adaptive angle-Doppler compensation method extracts the requisite information adaptively, based on the data itself, thus avoiding the performance degradation caused by inertial system errors. However, this method requires estimation and eigendecomposition of the sample covariance matrix, which has a high computational complexity and limits its real-time application. In this paper, an adaptive angle-Doppler compensation method based on projection approximation subspace tracking (PAST) is studied. The method uses cyclic iterative processing to quickly estimate the position of the spectral center of the maximum eigenvector of each range cell, so that the computational burden of matrix estimation and eigendecomposition is avoided; the spectral centers of all range cells are then aligned by two-dimensional compensation. Simulation results show the proposed method can effectively reduce the non-homogeneity of airborne bistatic radar, and its performance is similar to that of eigendecomposition algorithms, but the computational load is obviously reduced and the method is easy to implement.

  7. Panchromatic cooperative hyperspectral adaptive wide band deletion repair method

    Science.gov (United States)

    Jiang, Bitao; Shi, Chunyu

    2018-02-01

    In hyperspectral data, the phenomenon of stripe deletion often occurs, which seriously affects the efficiency and accuracy of data analysis and application. Narrow-band deletion can be repaired directly by interpolation, but this method is not ideal for wide-band deletion. In this paper, an adaptive method for restoring missing wide spectral bands based on panchromatic information is proposed, and the effectiveness of the algorithm is verified by experiments.

  8. The Threshold Temperature and Lag Effects on Daily Excess Mortality in Harbin, China: A Time Series Analysis

    Directory of Open Access Journals (Sweden)

    Hanlu Gao

    2017-04-01

    Full Text Available Background: A large number of studies have reported the relationship between ambient temperature and mortality. However, few studies have focused on the effects of high temperatures on cardio-cerebrovascular disease mortality (CCVDM) and its acute events (ACCVDM). Objective: To assess the threshold temperature and time-lag effects on daily excess mortality in Harbin, China. Methods: A generalized additive model (GAM) with a Poisson distribution was used to investigate the relative risk of mortality for each 1 °C increase above the threshold temperature and the associated time-lag effects in Harbin, China. Results: The high-temperature threshold was 26 °C in Harbin. Heat effects were immediate and lasted for 0–6 and 0–4 days for CCVDM and ACCVDM, respectively. Acute cardiovascular disease mortality (ACVDM) seemed to be more sensitive to temperature than cardiovascular disease mortality (CVDM), with higher death risk and shorter time-lag effects. The lag effects lasted longer for cerebrovascular disease mortality (CBDM) than for CVDM, and likewise for ACBDM compared to ACVDM. Conclusion: Hot temperatures increased CCVDM and ACCVDM in Harbin, China. Public health intervention strategies for adaptation to hot temperatures should be considered.
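
    The model family named above is easy to sketch. Below is an illustrative threshold-plus-distributed-lag Poisson regression with statsmodels, not the authors' fitted GAM (a full GAM would also include smooth terms for season and long-term trend); the data frame and all values are hypothetical.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # Hypothetical daily series of deaths and mean temperature
        rng = np.random.default_rng(1)
        df = pd.DataFrame({
            "deaths": rng.poisson(20, 365),
            "temp": 15 + 12 * np.sin(np.linspace(0, 2 * np.pi, 365)),
        })

        THRESHOLD = 26.0  # deg C, the heat threshold reported for Harbin
        df["excess"] = np.clip(df["temp"] - THRESHOLD, 0, None)

        # Distributed lags 0-6 days of the above-threshold exposure
        for lag in range(7):
            df[f"excess_lag{lag}"] = df["excess"].shift(lag)
        df = df.dropna()

        X = sm.add_constant(df[[f"excess_lag{lag}" for lag in range(7)]])
        fit = sm.GLM(df["deaths"], X, family=sm.families.Poisson()).fit()

        # Cumulative relative risk per 1 deg C above threshold over lags 0-6
        rr = np.exp(fit.params.drop("const").sum())
        print(f"cumulative RR per 1 deg C above {THRESHOLD} deg C: {rr:.3f}")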

  9. A Sequential Kriging reliability analysis method with characteristics of adaptive sampling regions and parallelizability

    International Nuclear Information System (INIS)

    Wen, Zhixun; Pei, Haiqing; Liu, Hai; Yue, Zhufeng

    2016-01-01

    The sequential Kriging reliability analysis (SKRA) method has been developed in recent years for nonlinear implicit response functions that are expensive to evaluate. This type of method includes EGRA, the efficient global reliability analysis method, and AK-MCS, the active-learning reliability method combining a Kriging model with Monte Carlo simulation. The purpose of this paper is to improve SKRA through adaptive sampling regions and parallelizability. The adaptive sampling regions strategy is proposed to avoid selecting samples in regions where the probability density is so low that their accuracy has a negligible effect on the results. The size of the sampling regions is adapted according to the failure probability calculated in the last iteration. Two parallel strategies, aimed at selecting multiple sample points at a time, are introduced and compared. The improvement is verified on several challenging examples. - Highlights: • The ISKRA method improves the efficiency of SKRA. • The adaptive sampling regions strategy reduces the number of needed samples. • The two parallel strategies reduce the number of needed iterations. • The accuracy of the optimal value impacts the number of samples significantly.
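
    A rough sketch of one active-learning iteration in the AK-MCS spirit, with the Monte Carlo candidate pool truncated to an adaptively sized region so that no samples are selected where the probability density is negligible. The toy limit-state function, the radius rule tied to the current failure-probability estimate, and all parameter values are assumptions for illustration, not the paper's ISKRA algorithm.

        import numpy as np
        from scipy import stats
        from sklearn.gaussian_process import GaussianProcessRegressor

        def g(x):  # toy limit-state function; failure when g(x) < 0
            return 3.0 - x[:, 0] ** 2 - x[:, 1]

        rng = np.random.default_rng(2)
        X_train = rng.standard_normal((12, 2))
        gp = GaussianProcessRegressor(normalize_y=True).fit(X_train, g(X_train))

        # Candidate pool in standard-normal space, truncated to a region whose
        # radius shrinks or grows with the current failure-probability estimate
        pool = rng.standard_normal((20000, 2))
        mu, _ = gp.predict(pool, return_std=True)
        pf = max(np.mean(mu < 0), 1e-4)                  # crude Pf estimate
        radius = np.sqrt(stats.chi2.ppf(1 - 0.05 * pf, df=2))
        pool = pool[np.linalg.norm(pool, axis=1) <= radius]

        # U learning function: the point whose sign of g is most uncertain
        mu, sigma = gp.predict(pool, return_std=True)
        U = np.abs(mu) / np.maximum(sigma, 1e-12)
        x_next = pool[np.argmin(U)]   # evaluate g exactly here, refit, repeat
        print(x_next, U.min())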

  10. Approach to DOE threshold guidance limits

    International Nuclear Information System (INIS)

    Shuman, R.D.; Wickham, L.E.

    1984-01-01

    The need for less restrictive criteria governing disposal of extremely low-level radioactive waste has long been recognized. The Low-Level Waste Management Program has been directed by the Department of Energy (DOE) to aid in the development of a threshold guidance limit for DOE low-level waste facilities. Project objectives are concerned with the definition of a threshold limit dose and pathway analysis of radionuclide transport within selected exposure scenarios at DOE sites. Results of the pathway analysis will be used to determine waste radionuclide concentration guidelines that meet the defined threshold limit dose. Methods of measurement and verification of concentration limits round out the project's goals. Work on defining a threshold limit dose is nearing completion. Pathway analysis of sanitary landfill operations at the Savannah River Plant and the Idaho National Engineering Laboratory is in progress using the DOSTOMAN computer code. Concentration limit calculations and determination of implementation procedures will follow completion of the pathways work. 4 references

  11. Sparse Pseudo Spectral Projection Methods with Directional Adaptation for Uncertainty Quantification

    KAUST Repository

    Winokur, J.; Kim, D.; Bisetti, Fabrizio; Le Maî tre, O. P.; Knio, Omar

    2015-01-01

    We investigate two methods to build a polynomial approximation of a model output depending on some parameters. The two approaches are based on pseudo-spectral projection (PSP) methods on adaptively constructed sparse grids, and aim at providing a finer control of the resolution along two distinct subsets of model parameters.

  12. Application of Improved Wavelet Thresholding Function in Image Denoising Processing

    Directory of Open Access Journals (Sweden)

    Hong Qi Zhang

    2014-07-01

    Full Text Available Wavelet analysis is a time-frequency analysis method that solves time-frequency localization problems well. This paper analyzes the basic principles of the wavelet transform and the relationship between the Lipschitz exponent of a signal singularity and the local maxima of the moduli of the wavelet transform coefficients, and reviews the principles of the wavelet transform in image denoising. The disadvantages of traditional wavelet thresholding functions are studied: the discontinuity of the hard threshold and the constant deviation of the soft threshold are improved, and the image is denoised using the improved threshold function.
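
    The two weaknesses named above, the jump of the hard threshold at ±T and the constant bias of the soft threshold, are easy to see in code. The improved function used by the paper is not reproduced in this record, so sigmoid_threshold below is only one plausible smooth compromise.

        import numpy as np

        def hard_threshold(w, t):
            """Keep coefficients with |w| > t; discontinuous at +/- t."""
            return np.where(np.abs(w) > t, w, 0.0)

        def soft_threshold(w, t):
            """Shrink magnitudes by t; continuous but biased by t for large |w|."""
            return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

        def sigmoid_threshold(w, t, k=10.0):
            """Illustrative compromise: ~0 well below t, ~identity (no constant
            deviation) well above t, and continuous everywhere; k sets the
            steepness of the transition."""
            return w / (1.0 + np.exp(-k * (np.abs(w) - t)))

        w = np.linspace(-3, 3, 7)
        print(hard_threshold(w, 1.0))
        print(soft_threshold(w, 1.0))
        print(sigmoid_threshold(w, 1.0))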

  13. A dual-adaptive support-based stereo matching algorithm

    Science.gov (United States)

    Zhang, Yin; Zhang, Yun

    2017-07-01

    Many stereo matching algorithms use fixed color thresholds and a rigid cross skeleton to segment supports (viz., the Cross method), which, however, does not work well for different images. To address this issue, this paper proposes a novel dual adaptive support (viz., DAS)-based stereo matching method, which uses both appearance and shape information of a local region to segment supports automatically, and then integrates the DAS-based cost aggregation with the absolute-difference-plus-census-transform cost, scanline optimization, and disparity refinement to develop a stereo matching system. The performance of the DAS method is also evaluated on the Middlebury benchmark and by comparison with the Cross method. The results show that the average error for the DAS method is 25.06% lower than that for the Cross method, indicating that the proposed method is more accurate, with fewer parameters, and suitable for parallel computing.

  14. Bilevel thresholding of sliced image of sludge floc.

    Science.gov (United States)

    Chu, C P; Lee, D J

    2004-02-15

    This work examined the feasibility of employing various thresholding algorithms to determine the optimal bilevel thresholding value for estimating the geometric parameters of sludge flocs from microtome-sliced images and from confocal laser scanning microscope images. Morphological information extracted from images depends on the bilevel thresholding value. According to the evaluation on luminescence-inverted images and fractal curves (the quadric Koch curve and the Sierpinski carpet), Otsu's method yields more stable performance than other histogram-based algorithms and is chosen to obtain the porosity. The maximum convex perimeter method, however, can probe the shapes and spatial distribution of the pores among the biomass granules in real sludge flocs. A combined algorithm is recommended for probing the sludge floc structure.
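
    Otsu's method, which the evaluation above found most stable among the histogram-based algorithms, selects the gray level that maximizes the between-class variance. A minimal NumPy sketch on a toy bimodal "slice", with porosity read off as the below-threshold pixel fraction; the bin count and the synthetic data are illustrative.

        import numpy as np

        def otsu_threshold(image, bins=256):
            """Return the gray level maximizing the between-class variance."""
            hist, edges = np.histogram(image, bins=bins, range=(0, 1))
            p = hist.astype(float) / hist.sum()
            levels = 0.5 * (edges[:-1] + edges[1:])
            w0 = np.cumsum(p)                  # class-0 (background) probability
            m = np.cumsum(p * levels)          # cumulative first moment
            mg = m[-1]                         # global mean
            with np.errstate(divide="ignore", invalid="ignore"):
                sigma_b = (mg * w0 - m) ** 2 / (w0 * (1.0 - w0))
            return levels[np.nanargmax(sigma_b)]

        # Toy bimodal image: dark pores and bright biomass plus noise
        rng = np.random.default_rng(3)
        img = np.concatenate([rng.normal(0.25, 0.05, 5000),
                              rng.normal(0.75, 0.05, 5000)]).clip(0, 1)
        t = otsu_threshold(img)
        porosity = np.mean(img < t)            # fraction classified as pore
        print(t, porosity)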

  15. Use of a dynamic grid adaptation in the asymmetric weighted residual method

    International Nuclear Information System (INIS)

    Graf, V.; Romstedt, P.; Werner, W.

    1986-01-01

    A dynamic grid adaptation method has been developed for use with the asymmetric weighted residual method. The method automatically adapts the number and position of the spatial mesh points as the solution of hyperbolic or parabolic vector partial differential equations progresses in time. The mesh selection algorithm is based on the minimization of the L2 norm of the spatial discretization error. The method permits the accurate calculation of the evolution of inhomogeneities, like wave fronts, shock layers, and other sharp transitions, while generally using a coarse computational grid. The number of required mesh points is significantly reduced, relative to a fixed Eulerian grid. Since the mesh selection algorithm is computationally inexpensive, a corresponding reduction of computing time results.

  16. AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov–Poisson equation

    International Nuclear Information System (INIS)

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin

    2016-01-01

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  17. Resilience Thinking: Integrating Resilience, Adaptability and Transformability

    Directory of Open Access Journals (Sweden)

    Carl Folke

    2010-12-01

    Full Text Available Resilience thinking addresses the dynamics and development of complex social-ecological systems (SES). Three aspects are central: resilience, adaptability and transformability. These aspects interrelate across multiple scales. Resilience in this context is the capacity of an SES to continually change and adapt yet remain within critical thresholds. Adaptability is part of resilience. It represents the capacity to adjust responses to changing external drivers and internal processes and thereby allow for development along the current trajectory (stability domain). Transformability is the capacity to cross thresholds into new development trajectories. Transformational change at smaller scales enables resilience at larger scales. The capacity to transform at smaller scales draws on resilience at multiple scales, making use of crises as windows of opportunity for novelty and innovation, and recombining sources of experience and knowledge to navigate social-ecological transitions. Society must seriously consider ways to foster resilience of smaller, more manageable SESs that contribute to Earth System resilience and to explore options for deliberate transformation of SESs that threaten Earth System resilience.

  18. Threshold Assessment of Gear Diagnostic Tools on Flight and Test Rig Data

    Science.gov (United States)

    Dempsey, Paula J.; Mosher, Marianne; Huff, Edward M.

    2003-01-01

    A method for defining thresholds for vibration-based algorithms that provides the minimum number of false alarms while maintaining sensitivity to gear damage was developed. This analysis focused on two vibration-based gear damage detection algorithms, FM4 and MSA. The method was developed using vibration data collected during surface fatigue tests performed in a spur gearbox rig. The thresholds were defined based on damage progression during tests with damage. The thresholds' false-alarm rates were then evaluated on spur gear tests without damage. Next, the same thresholds were applied to flight data from an OH-58 helicopter transmission. Results showed that thresholds defined in test rigs can be used to define thresholds in flight to correctly classify the transmission operation as normal.
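
    FM4, one of the metrics thresholded above, is conventionally the normalized fourth moment (kurtosis) of the difference signal, i.e., the time-synchronous average with the regular gear-mesh components removed; it sits near 3 for an undamaged gear. The sketch below shows the metric and a threshold check; the threshold value is an illustrative assumption, not a level derived in the paper.

        import numpy as np

        def fm4(difference_signal):
            """FM4 = N * sum((d - mean)^4) / (sum((d - mean)^2))^2, i.e., the
            kurtosis of the difference signal; ~3 when it is near-Gaussian."""
            d = np.asarray(difference_signal, dtype=float)
            d = d - d.mean()
            return d.size * np.sum(d ** 4) / np.sum(d ** 2) ** 2

        FM4_THRESHOLD = 4.5   # illustrative alarm level, not the paper's value

        def classify(difference_signal, threshold=FM4_THRESHOLD):
            return "damage indicated" if fm4(difference_signal) > threshold else "normal"

        rng = np.random.default_rng(4)
        healthy = rng.standard_normal(4096)
        damaged = healthy.copy()
        damaged[2000:2010] += 8.0              # local fault-like transient
        print(fm4(healthy), classify(healthy))
        print(fm4(damaged), classify(damaged))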

  19. New method to evaluate the {sup 7}Li(p, n){sup 7}Be reaction near threshold

    Energy Technology Data Exchange (ETDEWEB)

    Herrera, María S., E-mail: herrera@tandar.cnea.gov.ar [Comisión Nacional de Energía Atómica, Av. Gral. Paz 1499, Buenos Aires B1650KNA (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas, Av. Rivadavia 1917, Ciudad Autónoma de Buenos Aires C1033AAJ (Argentina); Escuela de Ciencia y Tecnología, UNSAM, 25 de Mayo y Francia, Buenos Aires B1650KNA (Argentina); Moreno, Gustavo A. [YPF Tecnología, Baradero S/N, Buenos Aires 1925 (Argentina); Departamento de Física J. J. Giambiagi, Facultad de Ciencias Exactas y Naturales, UBA, Ciudad Universitaria, Ciudad Autónoma de Buenos Aires 1428 (Argentina); Kreiner, Andrés J. [Comisión Nacional de Energía Atómica, Av. Gral. Paz 1499, Buenos Aires B1650KNA (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas, Av. Rivadavia 1917, Ciudad Autónoma de Buenos Aires C1033AAJ (Argentina); Escuela de Ciencia y Tecnología, UNSAM, 25 de Mayo y Francia, Buenos Aires B1650KNA (Argentina)

    2015-04-15

    In this work a complete description of the {sup 7}Li(p, n){sup 7}Be reaction near threshold is given using center-of-mass and relative coordinates. It is shown that this standard approach, not used before in this context, leads to a simple mathematical representation which gives easy access to all relevant quantities in the reaction and allows a precise numerical implementation. It also allows proton beam-energy spread effects to be included in a simple way. The method, implemented as a C++ code, was validated against both numerical and experimental data, finding good agreement. This tool is also used here to analyze scattered published measurements such as (p, n) cross sections and differential and total neutron yields for thick targets. Using these data we derive a consistent set of parameters to evaluate neutron production near threshold. Sensitivity of the results to data uncertainty and the possibility of incorporating new measurements are also discussed.

  20. CARA Risk Assessment Thresholds

    Science.gov (United States)

    Hejduk, M. D.

    2016-01-01

    Warning remediation threshold (Red threshold): Pc level at which warnings are issued, and active remediation considered and usually executed. Analysis threshold (Green to Yellow threshold): Pc level at which analysis of event is indicated, including seeking additional information if warranted. Post-remediation threshold: Pc level to which remediation maneuvers are sized in order to achieve event remediation and obviate any need for immediate follow-up maneuvers. Maneuver screening threshold: Pc compliance level for routine maneuver screenings (more demanding than regular Red threshold due to additional maneuver uncertainty).

  1. A practical threshold concept for simple and reasonable radiation protection

    International Nuclear Information System (INIS)

    Kaneko, Masahito

    2002-01-01

    A half century ago it was assumed, for the purpose of protection, that radiation risks are linearly proportional to dose at all levels. The Linear No-Threshold (LNT) hypothesis has greatly contributed to the minimization of doses received by workers and members of the public, but it has also brought about 'radiophobia' and unnecessary over-regulation. Now that the existence of bio-defensive mechanisms such as DNA repair, apoptosis and adaptive response is well recognized, the linearity assumption can be called 'unscientific'. Evidence increasingly implies that there are thresholds in radiation risk. A concept of 'practical' thresholds is proposed, and the classification of radiation effects into 'stochastic' and 'deterministic' should be abandoned. 'Practical' thresholds are dose levels below which induction of detectable radiogenic cancers or hereditary effects is not expected. There seems to be no evidence of deleterious health effects from radiation exposures at the current dose limits (50 mSv/y for workers and 5 mSv/y for members of the public), which were adopted worldwide in the latter half of the 20th century. Those limits are assumed to have been set below certain 'practical' thresholds. As workers and members of the public do not gain benefits from being exposed, excepting intentional irradiation for medical purposes, their radiation exposures should be kept below 'practical' thresholds. There is no use for the 'justification' and 'optimization' (ALARA) principles, because there are no 'radiation detriments' as far as exposures are maintained below 'practical' thresholds. Accordingly, the ethical issue of 'justification', allowing benefit to society to offset radiation detriments to individuals, can be resolved, and so can the ethical issue of 'optimization', exchanging health or safety for economic gain. The ALARA principle should be applied to the probability (risk) of exceeding relevant dose limits instead of to normal exposures.

  2. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    Science.gov (United States)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable block-size transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed in detail; it can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.

  3. Adaptive-mesh zoning by the equipotential method

    Energy Technology Data Exchange (ETDEWEB)

    Winslow, A.M.

    1981-04-01

    An adaptive mesh method is proposed for the numerical solution of differential equations which causes the mesh lines to move closer together in regions where higher resolution in some physical quantity T is desired. A coefficient D > 0 is introduced into the equipotential zoning equations, where D depends on the gradient of T. The equations are inverted, leading to nonlinear elliptic equations for the mesh coordinates with source terms which depend on the gradient of D. A functional form of D is proposed.
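
    A plausible rendering of this construction in formulas (the record gives only a verbal summary, so the exact source terms in the original report may differ): equipotential zoning solves Laplace-type equations for the logical coordinates, with the variable coefficient D concentrating mesh lines where the gradient of T is large.

        % Variable-diffusion equipotential equations (schematic sketch)
        \nabla \cdot \left( D \, \nabla \xi \right) = 0, \qquad
        \nabla \cdot \left( D \, \nabla \eta \right) = 0, \qquad
        D = D\left( \lvert \nabla T \rvert \right) > 0 .

        % Interchanging dependent and independent variables gives nonlinear
        % elliptic equations for the mesh coordinates x(\xi,\eta), y(\xi,\eta):
        \alpha x_{\xi\xi} - 2\beta x_{\xi\eta} + \gamma x_{\eta\eta} = S_x(\nabla D), \qquad
        \alpha y_{\xi\xi} - 2\beta y_{\xi\eta} + \gamma y_{\eta\eta} = S_y(\nabla D),

        % with \alpha = x_\eta^2 + y_\eta^2, \beta = x_\xi x_\eta + y_\xi y_\eta,
        % \gamma = x_\xi^2 + y_\xi^2; the source terms S_x, S_y depending on
        % \nabla D are left schematic here.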

  4. Thresholds for statistical and clinical significance in systematic reviews with meta-analytic methods

    DEFF Research Database (Denmark)

    Jakobsen, Janus Christian; Wetterslev, Jorn; Winkel, Per

    2014-01-01

    BACKGROUND: Thresholds for statistical significance when assessing meta-analysis results are being insufficiently demonstrated by traditional 95% confidence intervals and P-values. Assessment of intervention effects in systematic reviews with meta-analysis deserves greater rigour. METHODS: Methodologies for assessing statistical and clinical significance of intervention effects in systematic reviews were considered. Balancing simplicity and comprehensiveness, an operational procedure was developed, based mainly on The Cochrane Collaboration methodology and the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) guidelines. RESULTS: We propose an eight-step procedure for better validation of meta-analytic results in systematic reviews: (1) obtain the 95% confidence intervals and the P-values from both fixed-effect and random-effects meta-analyses and report the most

  5. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    Science.gov (United States)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  6. A novel adaptive force control method for IPMC manipulation

    International Nuclear Information System (INIS)

    Hao, Lina; Sun, Zhiyong; Su, Yunquan; Gao, Jianchao; Li, Zhi

    2012-01-01

    IPMC is a type of electro-active polymer material, also called artificial muscle, which can generate a relatively large deformation under a relatively low input voltage (generally speaking, less than 5 V) and can be operated in a water environment. Due to these advantages, IPMC can be used in many fields such as biomimetics, service robots, bio-manipulation, etc. Until now, most existing methods for IPMC manipulation have been displacement control, not direct force control; however, under most conditions the success rate of manipulations of tiny fragile objects is limited by the contact force, for example when using an IPMC gripper to hold cells. Like most EAPs, IPMC exhibits a creep phenomenon: the generated force changes with time, and the creep model is influenced by changes in water content or other environmental factors, so a proper force control method is urgently needed. This paper presents a novel adaptive force control method (AIPOF control, adaptive integral periodic output feedback control), based on a creep model whose parameters are obtained using the FRLS on-line identification method. The AIPOF control method can achieve an arbitrary pole configuration as long as the plant is controllable and observable. This paper also designs POF and IPOF controllers and compares their test results. Simulations and experiments of micro-force-tracking tests are carried out, with results confirming that the proposed control method is viable. (paper)

  7. Adaptive Detection and ISI Mitigation for Mobile Molecular Communication.

    Science.gov (United States)

    Chang, Ge; Lin, Lin; Yan, Hao

    2018-03-01

    Current studies on modulation and detection schemes in molecular communication mainly focus on scenarios with static transmitters and receivers. However, mobile molecular communication is needed in many envisioned applications, such as target tracking and drug delivery. Until now, investigations of mobile molecular communication have been limited. In this paper, a static transmitter and a mobile bacterium-based receiver performing a random walk are considered. In this mobile scenario, the channel impulse response changes due to the dynamic change of the distance between the transmitter and the receiver. Detection schemes based on a fixed distance fail to detect the signal in such a scenario. Furthermore, the intersymbol interference (ISI) effect becomes more complex due to the dynamic character of the signal, which makes estimation and mitigation of the ISI even more difficult. In this paper, an adaptive ISI mitigation method and two adaptive detection schemes are proposed for this mobile scenario. In the proposed scheme, adaptive ISI mitigation, estimation of the dynamic distance, and the corresponding impulse-response reconstruction are performed in each symbol interval. Based on the dynamic channel impulse response in each interval, two adaptive detection schemes, concentration-based adaptive threshold detection and peak-time-based adaptive detection, are proposed for signal detection. Simulations demonstrate that the ISI effect is significantly reduced and that the adaptive detection schemes are reliable and robust for mobile molecular communication.
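
    The concentration-based adaptive threshold idea can be sketched for the common free-diffusion channel model: re-estimate the transmitter-receiver distance from the observed peak time, rebuild the impulse response, and set the detection threshold as a fraction of the expected peak. The channel model, the fraction alpha, and all parameter values below are illustrative assumptions, not the paper's scheme.

        import numpy as np

        def impulse_response(t, d, D=1e-10, N=1e4):
            """Expected concentration at distance d and time t for an impulsive
            release of N molecules in free 3-D diffusion."""
            t = np.maximum(t, 1e-12)
            return N / (4 * np.pi * D * t) ** 1.5 * np.exp(-d ** 2 / (4 * D * t))

        def distance_from_peak_time(t_peak, D=1e-10):
            """Invert t_peak = d^2 / (6 D) to re-estimate the current distance."""
            return np.sqrt(6 * D * t_peak)

        def adaptive_threshold(d, D=1e-10, N=1e4, alpha=0.5):
            """Threshold as a fraction alpha of the expected peak concentration
            c_peak = N * (3 / (2 pi e d^2))^(3/2) at the estimated distance."""
            return alpha * N * (3.0 / (2 * np.pi * np.e * d ** 2)) ** 1.5

        # Per-symbol loop: observe peak time, update distance, update threshold
        d_true = 5e-6
        t = np.linspace(0.0, 1.0, 2000)
        c = impulse_response(t, d_true)
        d_hat = distance_from_peak_time(t[np.argmax(c)])
        thr = adaptive_threshold(d_hat)
        print(d_hat, thr, c.max() > thr)       # decide "1" if the peak clears thr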

  8. Appropriate threshold levels of cardiac beat-to-beat variation in semi-automatic analysis of equine ECG recordings

    DEFF Research Database (Denmark)

    Madsen, Mette Flethøj; Kanters, Jørgen K.; Pedersen, Philip Juul

    2016-01-01

    considerably with heart rate (HR), and an adaptable model consisting of three different HR ranges with separate threshold levels of maximum acceptable RR deviation was consequently defined. For resting HRs

  9. Spatially adaptive hp refinement approach for PN neutron transport equation using spectral element method

    International Nuclear Information System (INIS)

    Nahavandi, N.; Minuchehr, A.; Zolfaghari, A.; Abbasi, M.

    2015-01-01

    Highlights: • A powerful hp-SEM refinement approach for the PN neutron transport equation is presented. • The method provides great geometrical flexibility and lower computational cost. • Arbitrarily high-order and non-uniform meshes can be used. • Both a posteriori and a priori local error estimation approaches have been employed. • Highly accurate results are compared against other common adaptive and uniform grids. - Abstract: In this work we present the adaptive hp-SEM approach, obtained from the incorporation of the spectral element method (SEM) and adaptive hp refinement. The SEM nodal discretization and hp-adaptive grid refinement for the even-parity Boltzmann neutron transport equation create a powerful refinement approach with highly accurate solutions. In this regard a computer code has been developed to solve the multi-group neutron transport equation in one-dimensional geometry using even-parity transport theory. The spatial dependence of the flux is represented via the SEM with Lobatto orthogonal polynomials. Two common error estimation approaches, a posteriori and a priori, have been implemented. The incorporation of the SEM nodal discretization method and adaptive hp grid refinement leads to highly accurate solutions. The efficiency of coarser meshes and the significant reduction of program runtime, in comparison with other common refinement methods and uniform meshing approaches, are demonstrated on several well-known transport benchmarks.

  10. Role of extrinsic noise in the sensitivity of the rod pathway: rapid dark adaptation of nocturnal vision in humans.

    Science.gov (United States)

    Reeves, Adam; Grayhem, Rebecca

    2016-03-01

    Rod-mediated 500 nm test spots were flashed in Maxwellian view at 5 deg eccentricity, both on steady 10.4 deg fields of intensities (I) from 0.00001 to 1.0 scotopic troland (sc td) and from 0.2 s to 1 s after extinguishing the field. On dim fields, thresholds of tiny (5') tests were proportional to √I (Rose-DeVries law), while thresholds after extinction fell within 0.6 s to the fully dark-adapted absolute threshold. Thresholds of large (1.3 deg) tests were proportional to I (Weber law), and extinction thresholds to √I. Thus, rod thresholds are elevated by photon-driven noise from dim fields that disappears at field extinction; large-spot thresholds are additionally elevated by neural light adaptation proportional to √I. At night, recovery from dimly lit fields is fast, not slow.

  11. Adapting a perinatal empathic training method from South Africa to Germany.

    Science.gov (United States)

    Knapp, Caprice; Honikman, Simone; Wirsching, Michael; Husni-Pascha, Gidah; Hänselmann, Eva

    2018-01-01

    Maternal mental health conditions are prevalent across the world. For women, the perinatal period is associated with increased rates of depression and anxiety. At the same time, there is widespread documentation of disrespectful care for women by maternity health staff. Improving the empathic engagement skills of maternity healthcare workers may enable them to respond to the mental health needs of their clients more effectively. In South Africa, a participatory empathic training method, the "Secret History", has been used as part of a national Department of Health training program with maternity staff and has shown promising results. For this paper, we aimed to describe an adaptation of the Secret History empathic training method from the South African to the German setting and to evaluate the adapted training. The pilot study occurred in an academic medical center in Germany. A focus group (n = 8) was used to adapt the training by describing the local context and changing the materials to be relevant to Germany. After adapting the materials, the pilot training was conducted with a mixed group of professionals (n = 15), many of whom were trainers themselves. A pre-post survey assessed the participants' empathy levels and attitudes towards the training method. In adapting the materials, the focus group discussion generated several experiences that were considered to be typical interpersonal and structural challenges facing healthcare workers in maternal care in Germany. These experiences were crafted into case scenarios that then formed the basis of the activities used in the Secret History empathic training pilot. Evaluation of the pilot training showed that although the participants had high levels of empathy in the pre-phase (100% estimated their empathic ability as high or very high), 69% became more aware of their own emotional experiences with patients and the need for self-care after the training. A majority, or 85%, indicated that the training

  12. Levels of alarm thresholds of meningitis outbreaks in Hamadan Province, west of Iran.

    Science.gov (United States)

    Faryadres, Mohammad; Karami, Manoochehr; Moghimbeigi, Abbas; Esmailnasab, Nader; Pazhouhi, Khabat

    2015-01-01

    Few studies have focused on syndromic data to determine alarm-threshold levels for the detection of meningitis outbreaks. The purpose of this study was to determine threshold levels for meningitis outbreaks in Hamadan Province, west of Iran. Data on both confirmed and suspected cases of meningitis (fever and neurological symptoms) from 21 March 2010 to 20 March 2012 were used in Hamadan Province, Iran. Alarm-threshold levels for meningitis outbreaks were determined using four different methods: absolute values (the standard method), relative increase, statistical cut-off points, and the upper control limit of the exponentially weighted moving average (EWMA) algorithm. Among 723 reported cases, 41 were diagnosed to have meningitis. The standard alarm-threshold level for a meningitis outbreak was determined as an incidence of 5/100,000 persons. An increase of 1.5 to 2 times in reported cases of suspected meningitis per week was taken as the threshold level according to the relative-increase method. An occurrence of four cases of suspected meningitis per week, equal to the 90th percentile, was chosen as the alarm threshold by the statistical cut-off point method. The corresponding value according to the EWMA algorithm was 2.57, i.e., three cases. Policy makers and staff of syndromic surveillance systems are highly recommended to apply the above methods to determine alarm-threshold levels.
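
    The EWMA rule, the fourth method above, is simple to sketch: smooth the weekly counts and raise an alarm when the smoothed series crosses the upper control limit mu0 + L * sigma0 * sqrt(lambda / (2 - lambda)). The smoothing constant, multiplier L, and baseline length below are illustrative choices; the study reports that its EWMA control limit worked out to 2.57, i.e., about three cases per week.

        import numpy as np

        def ewma_alarms(counts, lam=0.3, L=3.0, baseline_weeks=26):
            """Return the control limit and the weeks whose EWMA exceeds it."""
            counts = np.asarray(counts, dtype=float)
            mu0 = counts[:baseline_weeks].mean()
            sigma0 = counts[:baseline_weeks].std(ddof=1)
            ucl = mu0 + L * sigma0 * np.sqrt(lam / (2.0 - lam))
            z, alarms = mu0, []
            for week, x in enumerate(counts):
                z = lam * x + (1.0 - lam) * z   # exponentially weighted average
                if z > ucl:
                    alarms.append(week)
            return ucl, alarms

        rng = np.random.default_rng(5)
        weekly = rng.poisson(1.0, 104)
        weekly[80:83] += 4                      # injected outbreak-like excess
        print(ewma_alarms(weekly))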

  13. Structural performance evaluation on aging underground reinforced concrete structures. Part 6. An estimation method of threshold value in performance verification taking reinforcing steel corrosion

    International Nuclear Information System (INIS)

    Matsuo, Toyofumi; Matsumura, Takuro; Miyagawa, Yoshinori

    2009-01-01

    This paper discusses the applicability of a material degradation model for reinforcing steel corrosion in RC box culverts with corroded reinforcement, and an estimation method for the threshold value used in performance verification that reflects reinforcing steel corrosion. First, the FEM analyses considered the loss of reinforcement section area, the initial tension strain arising from reinforcing steel corrosion, and the deteriorated bond characteristics between reinforcement and concrete. Full-scale loading tests using corroded RC box culverts were numerically analyzed. The analyzed crack patterns and load-strain relationships were in close agreement with the experimental results up to a maximum corrosion ratio of 15% of the primary reinforcement. We thus showed that this modeling can estimate the load-carrying capacity of corroded RC box culverts. Second, a parametric study was carried out for corroded RC box culverts with various sizes, reinforcement ratios, levels of steel corrosion, etc. Furthermore, applying the analytical results and various experimental investigations, we suggest allowable degradation ratios for modifying the threshold value, corresponding to the chloride-induced deterioration progression that is widely accepted in maintenance practice for civil engineering reinforced concrete structures. Finally, based on these findings, we developed two estimation methods for the threshold value in performance verification: 1) a structural analysis method using nonlinear FEM that includes modeling of material degradation; 2) a practical method in which a threshold value, determined by structural analyses of RC box culverts in sound condition, is multiplied by the allowable degradation ratio. (author)

  14. Finite element method for solving Kohn-Sham equations based on self-adaptive tetrahedral mesh

    International Nuclear Information System (INIS)

    Zhang Dier; Shen Lihua; Zhou Aihui; Gong Xingao

    2008-01-01

    A finite element (FE) method with a self-adaptive mesh-refinement technique is developed for solving the density functional Kohn-Sham equations. The FE method adopts local piecewise polynomial basis functions, which produce sparsely structured Hamiltonian matrices. The method is well suited for parallel implementation without using the Fourier transform. In addition, the self-adaptive mesh-refinement technique can control the computational accuracy and efficiency with optimal mesh density in different regions.

  15. Method and system for training dynamic nonlinear adaptive filters which have embedded memory

    Science.gov (United States)

    Rabinowitz, Matthew (Inventor)

    2002-01-01

    Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system 5, as well as a nonlinear amplifier 6.

  16. 3D spatially-adaptive canonical correlation analysis: Local and global methods.

    Science.gov (United States)

    Yang, Zhengshi; Zhuang, Xiaowei; Sreenivasan, Karthik; Mishra, Virendra; Curran, Tim; Byrd, Richard; Nandy, Rajesh; Cordes, Dietmar

    2018-04-01

    Local spatially-adaptive canonical correlation analysis (local CCA) with spatial constraints has been introduced to fMRI multivariate analysis for improved modeling of activation patterns. However, current algorithms require complicated spatial constraints that have only been applied to 2D local neighborhoods, because the computational time would increase exponentially if the same method were applied to 3D spatial neighborhoods. In this study, an efficient and accurate line-search sequential quadratic programming (SQP) algorithm has been developed to solve the 3D local CCA problem with spatial constraints. In addition, a spatially-adaptive kernel CCA (KCCA) method is proposed to increase the accuracy of fMRI activation maps. With oriented 3D spatial filters, anisotropic shapes can be estimated during the KCCA analysis of fMRI time courses. These filters are orientation-adaptive, providing rotational invariance that better matches arbitrarily oriented fMRI activation patterns and resulting in improved sensitivity of activation detection while significantly reducing spatial blurring artifacts. The kernel method in its basic form does not require any spatial constraints and analyzes the whole-brain fMRI time series to construct an activation map. Finally, we have developed a penalized kernel CCA model that involves spatial low-pass filter constraints to increase the specificity of the method. The kernel CCA methods are compared with the standard univariate method and with two different local CCA methods that were solved by the SQP algorithm. Results show that SQP is the most efficient algorithm for solving the local constrained CCA problem, and the proposed kernel CCA methods outperformed the univariate and local CCA methods in detecting activations for both simulated and real fMRI episodic memory data. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Fatigue crack growth thresholds measurements in structural materials

    International Nuclear Information System (INIS)

    Lindstroem, R.; Lidar, P.; Rosborg, B.

    1999-05-01

    Fatigue crack growth thresholds and da/dN data at low ΔK_I values (in MPa·m^1/2) have been determined for type 304 stainless steel, nickel-base weld metal Alloy 182, nickel-base metal Alloy 600, and low-alloy steel in air at ambient temperature and in high-temperature water and steam. The stainless alloys have been tested in water with 0.2 ppm O2 at 288 deg C and the low-alloy steel in steam at 286 deg C. The fatigue crack growth threshold was defined as the ΔK_I value resulting in a crack growth rate of 10^-7 mm per cycle. The measured fatigue crack growth thresholds (at frequencies from 0.5 to 20 Hz) are quite similar, independent of the material and the environment. A relatively inexpensive and time-saving method for measuring fatigue crack growth thresholds, and fatigue crack growth rates at low ΔK_I values, has been used in the tests. The method is a ΔK_I-decreasing test with constant K_I,max.

  18. Threshold-driven optimization for reference-based auto-planning

    Science.gov (United States)

    Long, Troy; Chen, Mingli; Jiang, Steve; Lu, Weiguo

    2018-02-01

    We study threshold-driven optimization methodology for automatically generating a treatment plan that is motivated by a reference DVH for IMRT treatment planning. We present a framework for threshold-driven optimization for reference-based auto-planning (TORA). Commonly used voxel-based quadratic penalties have two components for penalizing under- and over-dosing of voxels: a reference dose threshold and associated penalty weight. Conventional manual- and auto-planning using such a function involves iteratively updating the preference weights while keeping the thresholds constant, an unintuitive and often inconsistent method for planning toward some reference DVH. However, driving a dose distribution by threshold values instead of preference weights can achieve similar plans with less computational effort. The proposed methodology spatially assigns reference DVH information to threshold values, and iteratively improves the quality of that assignment. The methodology effectively handles both sub-optimal and infeasible DVHs. TORA was applied to a prostate case and a liver case as a proof-of-concept. Reference DVHs were generated using a conventional voxel-based objective, then altered to be either infeasible or easy-to-achieve. TORA was able to closely recreate reference DVHs in 5-15 iterations of solving a simple convex sub-problem. TORA has the potential to be effective for auto-planning based on reference DVHs. As dose prediction and knowledge-based planning becomes more prevalent in the clinical setting, incorporating such data into the treatment planning model in a clear, efficient way will be crucial for automated planning. A threshold-focused objective tuning should be explored over conventional methods of updating preference weights for DVH-guided treatment planning.
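
    The objective structure described above is easy to write down: voxel-wise quadratic penalties for under- and over-dosing, each driven by a threshold, with the thresholds rather than the weights updated toward the reference DVH. The rank-matching assignment and the update step below are crude illustrations, not the TORA algorithm.

        import numpy as np

        def voxel_penalty(dose, t_under, t_over, w_under=1.0, w_over=1.0):
            """Quadratic penalty with separate under- and over-dose thresholds."""
            under = np.maximum(t_under - dose, 0.0)
            over = np.maximum(dose - t_over, 0.0)
            return np.sum(w_under * under ** 2 + w_over * over ** 2)

        def update_thresholds(dose, t_under, t_over, reference_dvh, step=0.5):
            """Move each voxel's thresholds toward the reference-DVH dose with
            the same rank: the i-th coldest voxel targets the i-th lowest
            reference dose (one crude spatial assignment of DVH information)."""
            target = np.sort(reference_dvh)[np.argsort(np.argsort(dose))]
            return (t_under + step * (target - t_under),
                    t_over + step * (target - t_over))

        dose = np.array([58.0, 60.5, 61.0, 63.2])
        ref = np.array([59.0, 60.0, 61.0, 62.0])     # reference DVH samples
        t_u = np.full_like(dose, 60.0)
        t_o = np.full_like(dose, 60.0)
        print(voxel_penalty(dose, t_u, t_o))
        t_u, t_o = update_thresholds(dose, t_u, t_o, ref)
        print(voxel_penalty(dose, t_u, t_o))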

  19. Rainfall thresholds for the triggering of landslides in Slovenia

    Science.gov (United States)

    Peternel, Tina; Jemec Auflič, Mateja; Rosi, Ascanio; Segoni, Samuele; Komac, Marko; Casagli, Nicola

    2017-04-01

    Both worldwide and in Slovenia, precipitation and related phenomena represent one of the most important triggering factors for the occurrence of slope mass movements. In the past decade, extreme rainfall events, in which a very high amount of precipitation falls in a relatively short period, have become increasingly important and more frequent, causing numerous undesirable consequences. Intense rainstorms cause flash floods and mostly trigger shallow landslides and soil slips. On the other hand, the damage caused by long-lasting rainstorms depends on the region's adaptation and its capacity to store or infiltrate the excess rainwater. The amount, and consequently the intensity, of daily precipitation that can cause floods in the eastern part of Slovenia is a rather common event for the north-western part of the country. Likewise, the effect of rainfall depends strongly on prior soil moisture, periods of full soil saturation, and shifts in groundwater levels due to slow snowmelt, the growing season, etc. Landslides could be identified, and to some extent prevented, with better knowledge of the relation between landslides and rainfall. In this paper the definition of rainfall thresholds for rainfall-induced landslides in Slovenia is presented. The thresholds have been calculated from approximately 900 landslide records and the corresponding rainfall amounts, collected from 41 rain gauges all over the country. The thresholds have been defined by (1) use of an existing procedure characterized by a high degree of objectivity and (2) software that was developed for a test site with very different geological and climatic characteristics (Tuscany, central Italy). First, a single national threshold was defined; the country was then divided into four zones on the basis of the major river basins, and a threshold was calculated for each of them. Validation of the calculated

  20. An Adaptive Laboratory Evolution Method to Accelerate Autotrophic Metabolism

    DEFF Research Database (Denmark)

    Zhang, Tian; Tremblay, Pier-Luc

    2018-01-01

    Adaptive laboratory evolution (ALE) is an approach enabling the development of novel characteristics in microbial strains via the application of a constant selection pressure. This method is also an efficient tool to acquire insights into the molecular mechanisms responsible for specific phenotypes. Here, ALE was used to develop strains of Sporomusa ovata growing autotrophically and reducing CO2 into acetate more efficiently. Strains developed via this ALE method were also used to gain knowledge of the autotrophic metabolism of S. ovata as well as of other acetogenic bacteria.

  1. Method and system for environmentally adaptive fault tolerant computing

    Science.gov (United States)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.

  2. Electrocardiogram signal denoising based on a new improved wavelet thresholding

    Science.gov (United States)

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good-quality electrocardiograms (ECG) are utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may be mixed with various noises, such as baseline wander, power-line interference, and electromagnetic interference, during the gathering and recording process. As ECG signals are non-stationary physiological signals, the wavelet transform has proven to be an effective tool for discarding noise from corrupted signals. A new compromise threshold function, a sigmoid-function-based thresholding scheme, is adopted for processing ECG signals. Compared with other methods such as hard/soft thresholding or other existing thresholding functions, the new algorithm has many advantages in the noise reduction of ECG signals: it overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet-thresholding denoising proves more efficient than existing algorithms in ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root-mean-square difference are calculated as quantitative measures of denoising performance. The experimental results reveal that the waves of the ECG signals after denoising, including the P, Q, R, and S waves, coincide with the original ECG signals when employing the newly proposed method.
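
    A compact sketch of the pipeline described, assuming the PyWavelets package: decompose, shrink the detail coefficients with a sigmoid-style compromise function (the paper's exact function is not given in this record), reconstruct, and score with the signal-to-noise ratio. The universal threshold, wavelet choice, and toy signal are illustrative.

        import numpy as np
        import pywt

        def sigmoid_threshold(w, t, k=10.0):
            """Smooth compromise between hard and soft thresholding (illustrative)."""
            return w / (1.0 + np.exp(-k * (np.abs(w) - t)))

        def denoise(signal, wavelet="db4", level=4):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate (MAD)
            t = sigma * np.sqrt(2.0 * np.log(len(signal)))   # universal threshold
            coeffs[1:] = [sigmoid_threshold(c, t) for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(signal)]

        def snr_db(clean, estimate):
            return 10 * np.log10(np.sum(clean ** 2) /
                                 np.sum((clean - estimate) ** 2))

        fs = 360
        time = np.arange(0, 4, 1 / fs)
        clean = np.sin(2 * np.pi * 1.2 * time) + 0.3 * np.sin(2 * np.pi * 8 * time)
        noisy = clean + 0.2 * np.random.default_rng(6).standard_normal(time.size)
        print(snr_db(clean, noisy), snr_db(clean, denoise(noisy)))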

  3. HERITABILITY AND BREEDING VALUE OF SHEEP FERTILITY ESTIMATED BY MEANS OF THE GIBBS SAMPLING METHOD USING THE LINEAR AND THRESHOLD MODELS

    Directory of Open Access Journals (Sweden)

    Dariusz Piwczynski

    2013-03-01

    Full Text Available The research was carried out on 4,030 Polish Merino ewes born in the years 1991-2001, kept in 15 flocks in the Pomorze and Kujawy region. The fertility of ewes in subsequent reproduction seasons was analysed with the use of multiple logistic regression. The research showed a statistically significant influence of the flock, year of birth, age of dam, and flock × year of birth interaction on ewe fertility. In order to estimate the genetic parameters, the Gibbs sampling method was applied, using univariate animal models, both linear and threshold. Heritability estimates of fertility, depending on the model, equalled 0.067 to 0.104, whereas the estimates of repeatability equalled 0.076 and 0.139, respectively. The obtained genetic parameters were then used to estimate the breeding values of the animals for the controlled trait (Best Linear Unbiased Prediction method), using linear and threshold models. The animal breeding-value rankings obtained for the same trait with the linear and threshold models were strongly correlated with each other (rs = 0.972). Negative genetic trends in fertility (0.01-0.08% per year) were found.

  4. A thresholding based technique to extract retinal blood vessels from fundus images

    Directory of Open Access Journals (Sweden)

    Jyotiprava Dash

    2017-12-01

    Full Text Available Retinal imaging has become a significant tool among all the medical imaging technologies, due to its capability to extract much data linked to various eye diseases. Accurate extraction of blood vessels is therefore necessary, as it helps eye-care specialists and ophthalmologists to identify diseases at early stages. In this paper, we propose a computerized technique for the extraction of blood vessels from fundus images. The process is conducted in three phases: (i) pre-processing, where the image is enhanced using contrast limited adaptive histogram equalization and a median filter; (ii) segmentation, using mean-C thresholding to extract the retinal blood vessels; (iii) post-processing, where a morphological cleaning operation is used to remove isolated pixels. The performance of the proposed method is tested on the Digital Retinal Images for Vessel Extraction (DRIVE) and Child Heart and Health Study in England (CHASE_DB1) databases; experimental results show that our method achieves accuracies of 0.955 and 0.954, respectively.
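
    The mean-C segmentation step at the heart of the pipeline is a local threshold: mark a pixel as vessel when it is darker than its local mean by more than a constant C. The sketch below includes a median-filter pre-processing step and a cleaning pass in the spirit of the three phases above; the window size, C, minimum component size, and toy image are illustrative.

        import numpy as np
        from scipy.ndimage import label, median_filter, uniform_filter

        def mean_c_threshold(channel, window=25, C=0.03):
            """Mark pixels darker than their local mean by more than C."""
            smoothed = median_filter(channel.astype(float), size=3)  # denoise
            local_mean = uniform_filter(smoothed, size=window)
            return (local_mean - smoothed) > C

        def clean_mask(mask, min_size=30):
            """Morphological cleaning: drop small isolated components."""
            lab, _ = label(mask)
            sizes = np.bincount(lab.ravel())
            keep = sizes >= min_size
            keep[0] = False                    # background label stays off
            return keep[lab]

        # Toy green channel: a dark curvilinear "vessel" on a bright background
        img = np.ones((128, 128))
        rows = (np.arange(128) * 0.7).astype(int)
        img[rows, np.arange(128)] = 0.4
        img += 0.02 * np.random.default_rng(7).standard_normal(img.shape)
        vessels = clean_mask(mean_c_threshold(img))
        print(vessels.sum())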

  5. An adaptive proper orthogonal decomposition method for model order reduction of multi-disc rotor system

    Science.gov (United States)

    Jin, Yulin; Lu, Kuan; Hou, Lei; Chen, Yushu

    2017-12-01

    The proper orthogonal decomposition (POD) method is a principal and efficient tool for order reduction of high-dimensional complex systems in many research fields. However, the robustness problem of this method remains unsolved, although some modified POD methods have been proposed to address it. In this paper, a new adaptive POD method called the interpolation Grassmann manifold (IGM) method is proposed to address the weakness of the local character of the interpolation tangent-space of Grassmann manifold (ITGM) method in a wider parametric region. The method is demonstrated on a nonlinear rotor system of 33 degrees of freedom (DOFs) with a pair of liquid-film bearings and a pedestal looseness fault. The motion region of the rotor system is divided into two parts: a simple-motion region and a complex-motion region. The adaptive POD method is compared with the ITGM method for large and small parameter spans in the two parametric regions to show the advantage of the proposed method and the disadvantage of the ITGM method. The comparisons of the responses verify the accuracy and robustness of the adaptive POD method, and the computational efficiency is also analyzed. As a result, the new adaptive POD method has strong robustness and high computational efficiency and accuracy over a wide range of parameters.

  6. Hybrid Adaptive Flight Control with Model Inversion Adaptation

    Science.gov (United States)

    Nguyen, Nhan

    2011-01-01

    This study investigates a hybrid adaptive flight control method as a design possibility for a flight control system that can enable an effective adaptation strategy to deal with off-nominal flight conditions. The hybrid adaptive control blends both direct and indirect adaptive control in a model inversion flight control architecture. The blending of both direct and indirect adaptive control provides a much more flexible and effective adaptive flight control architecture than that with either direct or indirect adaptive control alone. The indirect adaptive control is used to update the model inversion controller by an on-line parameter estimation of uncertain plant dynamics based on two methods. The first parameter estimation method is an indirect adaptive law based on the Lyapunov theory, and the second method is a recursive least-squares indirect adaptive law. The model inversion controller is therefore made to adapt to changes in the plant dynamics due to uncertainty. As a result, the modeling error is reduced that directly leads to a decrease in the tracking error. In conjunction with the indirect adaptive control that updates the model inversion controller, a direct adaptive control is implemented as an augmented command to further reduce any residual tracking error that is not entirely eliminated by the indirect adaptive control.

  7. Sparse Pseudo Spectral Projection Methods with Directional Adaptation for Uncertainty Quantification

    KAUST Repository

    Winokur, J.

    2015-12-19

    We investigate two methods to build a polynomial approximation of a model output depending on some parameters. The two approaches are based on pseudo-spectral projection (PSP) methods on adaptively constructed sparse grids, and aim at providing a finer control of the resolution along two distinct subsets of model parameters. The control of the error along different subsets of parameters may be needed, for instance, in the case of a model depending on uncertain parameters and deterministic design variables. We first consider a nested approach where an independent adaptive sparse-grid PSP is performed along the first set of directions only, and at each point a sparse grid is constructed adaptively in the second set of directions. We then consider the application of adaptive PSP (aPSP) in the space of all parameters, and introduce directional refinement criteria to provide a tighter control of the projection error along individual dimensions. Specifically, we use a Sobol decomposition of the projection surpluses to tune the sparse-grid adaptation. The behavior and performance of the two approaches are compared for a simple two-dimensional test problem and for a shock-tube ignition model involving 22 uncertain parameters and 3 design parameters. The numerical experiments indicate that whereas both methods provide effective means for tuning the quality of the representation along distinct subsets of parameters, PSP in the global parameter space generally requires fewer model evaluations than the nested approach to achieve similar projection error. In addition, the global approach is better suited for generalization to more than two subsets of directions.

  8. Adaptive wavelet method for pricing two-asset Asian options with floating strike

    Science.gov (United States)

    Černá, Dana

    2017-12-01

    Asian options are path-dependent option contracts whose payoff depends on the average value of the asset price over some period of time. We focus on the pricing of Asian options on two assets. The model for pricing these options is represented by a parabolic equation with a time variable and three state variables, but by a substitution it can be reduced to an equation with only two state variables. For time discretization we use the θ-scheme. We propose a wavelet basis that is adapted to the boundary conditions and use an adaptive scheme with this basis for the discretization on each time level. The main advantage of this scheme is the small number of degrees of freedom. We present numerical experiments for the Asian put option with floating strike and compare the results of the proposed adaptive method and the Galerkin method.

  9. Genetic variation in threshold reaction norms for alternative reproductive tactics in male Atlantic salmon, Salmo salar.

    Science.gov (United States)

    Piché, Jacinthe; Hutchings, Jeffrey A; Blanchard, Wade

    2008-07-07

    Alternative reproductive tactics may be a product of adaptive phenotypic plasticity, such that discontinuous variation in life history depends on both the genotype and the environment. Phenotypes that fall below a genetically determined threshold adopt one tactic, while those exceeding the threshold adopt the alternative tactic. We report evidence of genetic variability in maturation thresholds for male Atlantic salmon (Salmo salar) that mature either as large (more than 1 kg) anadromous males or as small (10-150 g) parr. Using a common-garden experimental protocol, we find that the growth rate at which the sneaker parr phenotype is expressed differs among pure- and mixed-population crosses. Maturation thresholds of hybrids were intermediate to those of pure crosses, consistent with the hypothesis that the life-history switch points are heritable. Our work provides evidence, for a vertebrate, that thresholds for alternative reproductive tactics differ genetically among populations and can be modelled as discontinuous reaction norms for age and size at maturity.

  10. The adaptive problems of female teenage refugees and their behavioral adjustment methods for coping

    Directory of Open Access Journals (Sweden)

    Mhaidat F

    2016-04-01

    Full Text Available Fatin Mhaidat Department of Educational Psychology, Faculty of Educational Sciences, The Hashemite University, Zarqa, Jordan Abstract: This study aimed at identifying the levels of adaptive problems among teenage female refugees in the government schools and explored the behavioral methods that were used to cope with the problems. The sample was composed of 220 Syrian female students (seventh to first secondary grades) enrolled at government schools within the Zarqa Directorate and who came to Jordan due to the war conditions in their home country. The study used the scale of adaptive problems that consists of four dimensions (depression, anger and hostility, low self-esteem, and feeling insecure) and a questionnaire of the behavioral adjustment methods for dealing with the problem of asylum. The results indicated that the Syrian teenage female refugees suffer a moderate degree of adaptation problems, and that they used positive adjustment methods more than negative ones. Keywords: adaptive problems, female teenage refugees, behavioral adjustment

  11. Theory of threshold phenomena

    International Nuclear Information System (INIS)

    Hategan, Cornel

    2002-01-01

    Theory of Threshold Phenomena in Quantum Scattering is developed in terms of the Reduced Scattering Matrix. Relationships of different types of threshold anomalies both to nuclear reaction mechanisms and to nuclear reaction models are established. The magnitude of a threshold effect is related to the spectroscopic factor of the zero-energy neutron state. The Theory of Threshold Phenomena, based on the Reduced Scattering Matrix, establishes relationships between different types of threshold effects and nuclear reaction mechanisms: the cusp and non-resonant potential scattering, the s-wave threshold anomaly and compound nucleus resonant scattering, the p-wave anomaly and quasi-resonant scattering. A threshold anomaly related to resonant or quasi-resonant scattering is enhanced provided the neutron threshold state has a large spectroscopic amplitude. The Theory contains, as limit cases, Cusp Theories and also the results of different nuclear reaction models such as the Charge Exchange, Weak Coupling, Bohr and Hauser-Feshbach models. (author)

  12. A Remote Sensing Image Fusion Method based on adaptive dictionary learning

    Science.gov (United States)

    He, Tongdi; Che, Zongxi

    2018-01-01

    This paper discusses a remote sensing fusion method based on 'adaptive sparse representation (ASP)' to provide improved spectral information, reduce data redundancy and decrease system complexity. First, the training sample set is formed by taking random blocks from the images to be fused, the dictionary is then constructed using the training samples, and the remaining terms are clustered to obtain the complete dictionary by iterated processing at each step. Second, the self-adaptive weighted coefficient rule of regional energy is used to select the feature fusion coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and an average is taken to obtain the final fused images. Experimental results show that the proposed method is superior to other traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.
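
    A hedged sketch of this patch-based pipeline, using scikit-learn's dictionary learner as a stand-in for the paper's clustering-based dictionary construction; the per-block l1-energy rule below is a simplified stand-in for the self-adaptive regional-energy weighting, and all parameters are illustrative:

    ```python
    # Hedged sketch of a sparse-representation fusion pipeline; patch size,
    # atom count and the l1-energy selection rule are illustrative stand-ins
    # for the paper's dictionary construction and regional-energy weighting.
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import (extract_patches_2d,
                                                  reconstruct_from_patches_2d)

    def sparse_fuse(img_a, img_b, patch=8, atoms=64):
        """Fuse two registered grayscale images via a learned dictionary."""
        pa = extract_patches_2d(img_a, (patch, patch)).reshape(-1, patch * patch)
        pb = extract_patches_2d(img_b, (patch, patch)).reshape(-1, patch * patch)
        # Train the dictionary on random blocks taken from both inputs.
        rng = np.random.default_rng(0)
        idx = rng.choice(len(pa), size=min(2000, len(pa)), replace=False)
        dico = MiniBatchDictionaryLearning(n_components=atoms, alpha=1.0,
                                           max_iter=20, random_state=0)
        dico.fit(np.vstack([pa[idx], pb[idx]]))
        ca, cb = dico.transform(pa), dico.transform(pb)
        # Per block, keep the code with larger l1 energy (fusion-rule stand-in).
        keep_a = np.abs(ca).sum(axis=1) >= np.abs(cb).sum(axis=1)
        codes = np.where(keep_a[:, None], ca, cb)
        blocks = (codes @ dico.components_).reshape(-1, patch, patch)
        # Rearrange the reconstructed blocks and average the overlaps.
        return reconstruct_from_patches_2d(blocks, img_a.shape)
    ```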

  13. An Adaptive Privacy Protection Method for Smart Home Environments Using Supervised Learning

    Directory of Open Access Journals (Sweden)

    Jingsha He

    2017-03-01

    Full Text Available In recent years, smart home technologies have started to be widely used, bringing a great deal of convenience to people’s daily lives. At the same time, privacy issues have become particularly prominent. Traditional encryption methods can no longer meet the needs of privacy protection in smart home applications, since attacks can be launched even without the need for access to the cipher. Rather, attacks can be successfully realized through analyzing the frequency of radio signals, as well as the timestamp series, so that the daily activities of the residents in the smart home can be learnt. Such types of attacks can achieve a very high success rate, making them a great threat to users’ privacy. In this paper, we propose an adaptive method based on sample data analysis and supervised learning (SDASL), to hide the patterns of daily routines of residents, that would adapt to dynamically changing network loads. Compared to some existing solutions, our proposed method exhibits advantages such as low energy consumption, low latency, strong adaptability, and effective privacy protection.

  14. Is the bitter rejection response always adaptive?

    Science.gov (United States)

    Glendinning, J I

    1994-12-01

    The bitter rejection response consists of a suite of withdrawal reflexes and negative affective responses. It is generally assumed to have evolved as a way to facilitate avoidance of poisonous foods, which usually taste bitter to humans. Using previously published studies, the present paper examines the relationship between bitterness and toxicity in mammals, and then assesses the ecological costs and benefits of the bitter rejection response in carnivorous, omnivorous, and herbivorous (grazing and browsing) mammals. If the bitter rejection response accurately predicts the potential toxicity of foods, then one would expect the threshold for the response to be lower for highly toxic compounds than for nontoxic compounds. The data revealed no such relationship. Bitter taste thresholds varied independently of toxicity thresholds, indicating that the bitter rejection response is just as likely to be elicited by a harmless bitter food as it is by a harmful one. Thus, it is not necessarily in an animal's best interest to have an extremely high or low bitter threshold. Based on this observation, it was hypothesized that the adaptiveness of the bitter rejection response depends upon the relative occurrence of bitter and potentially toxic compounds in an animal's diet. Animals with a relatively high occurrence of bitter and potentially toxic compounds in their diet (e.g., browsing herbivores) were predicted to have evolved a high bitter taste threshold and tolerance to dietary poisons. Such an adaptation would be necessary because a browser cannot "afford" to reject all foods that are bitter and potentially toxic without unduly restricting its dietary options. At the other extreme, animals that rarely encounter bitter and potentially toxic compounds in their diet (e.g., carnivores) were predicted to have evolved a low bitter threshold. Carnivores could "afford" to utilize such a stringent rejection mechanism because foods containing bitter and potentially

  15. A class of discontinuous Petrov–Galerkin methods. Part III: Adaptivity

    KAUST Repository

    Demkowicz, Leszek

    2012-04-01

    We continue our theoretical and numerical study on the Discontinuous Petrov-Galerkin method with optimal test functions in the context of 1D and 2D convection-dominated diffusion problems and hp-adaptivity. With a proper choice of the norm for the test space, we prove robustness (uniform stability with respect to the diffusion parameter) and mesh-independence of the energy norm of the FE error for the 1D problem. With hp-adaptivity and a proper scaling of the norms for the test functions, we establish new limits for solving convection-dominated diffusion problems numerically: ε = 10^-11 for 1D and ε = 10^-7 for 2D problems. The adaptive process is fully automatic and starts with a mesh consisting of only a few elements. © 2011 IMACS. Published by Elsevier B.V. All rights reserved.

  16. Adaptive Methods for Permeability Estimation and Smart Well Management

    Energy Technology Data Exchange (ETDEWEB)

    Lien, Martha Oekland

    2005-04-01

    The main focus of this thesis is on adaptive regularization methods. We consider two different applications, the inverse problem of absolute permeability estimation and the optimal control problem of estimating smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem. Hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse-scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts, where Part I gives a theoretical background for a collection of research papers that have been written by the candidate in collaboration with others. These constitute the most important part of the thesis, and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning calculations of derivatives will also be discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement

  17. PARALLEL AND ADAPTIVE UNIFORM-DISTRIBUTED REGISTRATION METHOD FOR CHANG’E-1 LUNAR REMOTE SENSED IMAGERY

    Directory of Open Access Journals (Sweden)

    X. Ning

    2012-08-01

    To resolve the above-mentioned registration difficulties, a parallel and adaptive uniform-distributed registration method for CE-1 lunar remote sensed imagery is proposed in this paper. Based on 6 pairs of randomly selected images, both the standard SIFT algorithm and the parallel and adaptive uniform-distributed registration method were executed, and their versatility and effectiveness were assessed. The experimental results indicate that, by applying the parallel and adaptive uniform-distributed registration method, the efficiency of CE-1 lunar remote sensed imagery registration was increased dramatically. Therefore, the proposed method can acquire uniformly distributed registration results more effectively, and the registration difficulties, including unobtainable results, long processing times and non-uniform distribution, can be successfully resolved.

  18. Highly accurate adaptive TOF determination method for ultrasonic thickness measurement

    Science.gov (United States)

    Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing

    2018-04-01

    Determining the time of flight (TOF) is critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals can induce significant TOF determination errors. In this paper, an adaptive time delay estimation method has been developed to improve the accuracy of TOF determination. An improved variable-step-size adaptive algorithm with a comprehensive step-size control function is proposed. Meanwhile, a cubic spline fitting approach is also employed to alleviate the restriction of the finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis. The simulation results demonstrated the performance advantage of the proposed TOF determination method over existing methods. Compared with the conventional fixed-step-size algorithm and the Kwong and Aboulnasr algorithms, the steady-state mean square deviation of the proposed algorithm was generally lower, which makes the proposed algorithm more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates with various thicknesses. They indicated that the proposed TOF determination method was more robust even under low SNR conditions, and that the ultrasonic thickness measurement accuracy could be significantly improved.
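
    The spline-refinement idea can be illustrated independently of the authors' adaptive filter. A minimal sketch, assuming the TOF is taken as the lag of a cross-correlation peak and refined below the sampling interval with a cubic spline (window sizes are illustrative):

    ```python
    # Illustrative sketch (not the authors' adaptive filter): coarse TOF from
    # a cross-correlation peak, refined below the sampling interval with a
    # cubic spline through the samples around the peak.
    import numpy as np
    from scipy.signal import correlate
    from scipy.interpolate import CubicSpline
    from scipy.optimize import minimize_scalar

    def tof_estimate(echo1, echo2, fs):
        """Time of flight (s) between two echoes sampled at fs (Hz)."""
        xc = correlate(echo2, echo1, mode="full")
        lags = np.arange(-len(echo1) + 1, len(echo2))
        k = int(np.argmax(xc))                      # coarse, sample-accurate peak
        sl = slice(max(k - 3, 0), min(k + 4, len(xc)))
        spline = CubicSpline(lags[sl], xc[sl])      # local continuous model
        res = minimize_scalar(lambda t: -spline(t), method="bounded",
                              bounds=(lags[sl][0], lags[sl][-1]))
        return res.x / fs                           # sub-sample lag in seconds
    ```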

  19. Effects of threshold on the topology of gene co-expression networks.

    Science.gov (United States)

    Couto, Cynthia Martins Villar; Comin, César Henrique; Costa, Luciano da Fontoura

    2017-09-26

    Several developments regarding the analysis of gene co-expression profiles using complex network theory have been reported recently. Such approaches usually start with the construction of an unweighted gene co-expression network, therefore requiring the selection of a suitable threshold defining which pairs of vertices will be connected. We aimed at addressing such an important problem by suggesting and comparing five different approaches for threshold selection. Each of the methods considers a respective biologically-motivated criterion for electing a potentially suitable threshold. A set of 21 microarray experiments from different biological groups was used to investigate the effect of applying the five proposed criteria to several biological situations. For each experiment, we used the Pearson correlation coefficient to measure the relationship between each gene pair, and the resulting weight matrices were thresholded considering several values, generating respective adjacency matrices (co-expression networks). Each of the five proposed criteria was then applied in order to select the respective threshold value. The effects of these thresholding approaches on the topology of the resulting networks were compared by using several measurements, and we verified that, depending on the database, the impact on the topological properties can be large. However, a group of databases was verified to be similarly affected by most of the considered criteria. Based on such results, it can be suggested that when the generated networks present similar measurements, the thresholding method can be chosen with greater freedom. If the generated networks are markedly different, the thresholding method that better suits the interests of each specific research study represents a reasonable choice.
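
    The network-construction step studied here is compact enough to sketch. A minimal example, with synthetic expression data standing in for the microarray experiments:

    ```python
    # Minimal sketch of the construction step: Pearson correlations between
    # expression profiles are thresholded into a binary adjacency matrix, so
    # topological measures can be compared across threshold values.
    import numpy as np

    def coexpression_network(expr, threshold):
        """expr: (genes x samples) array; returns a boolean adjacency matrix."""
        corr = np.corrcoef(expr)              # gene-gene Pearson correlations
        adj = np.abs(corr) >= threshold
        np.fill_diagonal(adj, False)          # drop self-loops
        return adj

    expr = np.random.default_rng(0).normal(size=(50, 12))  # synthetic data
    for t in (0.5, 0.7, 0.9):
        print(t, coexpression_network(expr, t).sum() // 2, "edges")
    ```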

  20. Computer prediction of subsurface radionuclide transport: an adaptive numerical method

    International Nuclear Information System (INIS)

    Neuman, S.P.

    1983-01-01

    Radionuclide transport in the subsurface is often modeled with the aid of the advection-dispersion equation. A review of existing computer methods for the solution of this equation shows that there is need for improvement. To answer this need, a new adaptive numerical method is proposed based on an Eulerian-Lagrangian formulation. The method is based on a decomposition of the concentration field into two parts, one advective and one dispersive, in a rigorous manner that does not leave room for ambiguity. The advective component of steep concentration fronts is tracked forward with the aid of moving particles clustered around each front. Away from such fronts the advection problem is handled by an efficient modified method of characteristics called single-step reverse particle tracking. When a front dissipates with time, its forward tracking stops automatically and the corresponding cloud of particles is eliminated. The dispersion problem is solved by an unconventional Lagrangian finite element formulation on a fixed grid which involves only symmetric and diagonal matrices. Preliminary tests against analytical solutions of ne- and two-dimensional dispersion in a uniform steady state velocity field suggest that the proposed adaptive method can handle the entire range of Peclet numbers from 0 to infinity, with Courant numbers well in excess of 1

  1. Adaptation of chemical methods of analysis to the matrix of pyrite-acidified mining lakes

    International Nuclear Information System (INIS)

    Herzsprung, P.; Friese, K.

    2000-01-01

    Owing to the unusual matrix of pyrite-acidified mining lakes, the analysis of chemical parameters may be difficult. A number of methodological improvements have been developed so far, and a comprehensive validation of methods is envisaged. The adaptation of the available methods to small-volume samples of sediment pore waters and the adaptation of sensitivity to the expected concentration ranges is an important element of the methods applied in analyses of biogeochemical processes in mining lakes.

  2. Moving finite elements: A continuously adaptive method for computational fluid dynamics

    International Nuclear Information System (INIS)

    Glasser, A.H.; Miller, K.; Carlson, N.

    1991-01-01

    Moving Finite Elements (MFE), a recently developed method for computational fluid dynamics, promises major advances in the ability of computers to model the complex behavior of liquids, gases, and plasmas. Applications of computational fluid dynamics occur in a wide range of scientifically and technologically important fields. Examples include meteorology, oceanography, global climate modeling, magnetic and inertial fusion energy research, semiconductor fabrication, biophysics, automobile and aircraft design, industrial fluid processing, chemical engineering, and combustion research. The improvements made possible by the new method could thus have substantial economic impact. Moving Finite Elements is a moving node adaptive grid method which has a tendency to pack the grid finely in regions where it is most needed at each time and to leave it coarse elsewhere. It does so in a manner which is simple and automatic, and does not require a large amount of human ingenuity to apply it to each particular problem. At the same time, it often allows the time step to be large enough to advance a moving shock by many shock thicknesses in a single time step, moving the grid smoothly with the solution and minimizing the number of time steps required for the whole problem. For 2D problems (two spatial variables) the grid is composed of irregularly shaped and irregularly connected triangles which are very flexible in their ability to adapt to the evolving solution. While other adaptive grid methods have been developed which share some of these desirable properties, this is the only method which combines them all. In many cases, the method can save orders of magnitude of computing time, equivalent to several generations of advancing computer hardware

  3. An Adaptive Dense Matching Method for Airborne Images Using Texture Information

    Directory of Open Access Journals (Sweden)

    ZHU Qing

    2017-01-01

    Full Text Available Semi-global matching (SGM) is essentially a discrete optimization for the disparity value of each pixel, under the assumption of disparity continuities. SGM overcomes the influence of disparity discontinuities by a set of parameters. Using smaller parameters, the continuity constraint is weakened, which will cause significant noise in planar and textureless areas, reflected as fluctuations on the final surface reconstruction. On the other hand, larger parameters will impose too many constraints on continuities, which may lead to losses of sharp features. To address this problem, this paper proposes an adaptive dense stereo matching method for airborne images using texture information. Firstly, the texture is quantified, and under the assumption that disparity variation is directly proportional to the texture information, the adaptive parameters are gauged accordingly. Secondly, SGM is adopted to optimize the discrete disparities using the adaptively tuned parameters. Experimental evaluations using the ISPRS benchmark dataset and images obtained by the SWDC-5 have revealed that the proposed method significantly improves the visual quality of the point clouds.
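
    A hedged sketch of the core idea with OpenCV's SGBM implementation: a texture measure (here a single Laplacian-variance proxy rather than the paper's per-region quantification) scales the continuity penalties P1/P2; the mapping constants are assumptions:

    ```python
    # Hedged sketch: a Laplacian-variance texture proxy (global here, unlike
    # the paper's per-region quantification) scales the SGM continuity
    # penalties P1/P2; the mapping constants are assumptions.
    import cv2
    import numpy as np

    def adaptive_sgm(left, right, block=5):
        texture = cv2.Laplacian(left, cv2.CV_64F).var()
        scale = np.clip(texture / 100.0, 0.5, 2.0)    # assumed texture-to-penalty map
        p1 = int(8 * block * block / scale)           # weaker smoothing when textured
        p2 = int(32 * block * block / scale)
        sgm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                    blockSize=block, P1=p1, P2=p2)
        # SGBM returns fixed-point disparities scaled by 16.
        return sgm.compute(left, right).astype(np.float32) / 16.0
    ```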

  4. Modeling of processes of an adaptive business management

    Directory of Open Access Journals (Sweden)

    Karev Dmitry Vladimirovich

    2011-04-01

    Full Text Available Based on an analysis of adaptive business management systems, an original version of a real adaptive management system is proposed, whose basis is a dynamic recursive cash-flow forecast model and real data. Definitions and simulations of the scales and intervals of model time in the control system are proposed, as well as observation thresholds and the conditions for changing (correcting) administrative decisions. The process of adaptive management is illustrated with a business development scenario proposed by the author.

  5. Test plan: Gas-threshold-pressure testing of the Salado Formation in the WIPP underground facility

    International Nuclear Information System (INIS)

    Saulnier, G.J. Jr.

    1992-03-01

    Performance assessment for the disposal of radioactive waste from the United States defense program in the WIPP underground facility must assess the role of post-closure gas generation by waste degradation and the subsequent pressurization of the facility. Whether or not the generated gas can be assimilated by the host formation will determine the ability of the gas to reach or exceed lithostatic pressure within the repository. The purpose of this test plan is (1) to present a test design to obtain realistic estimates of gas-threshold pressure for the Salado Formation in the WIPP underground facility, including parts of the formation disturbed by the underground excavations and in the far-field or undisturbed part of the Salado, and (2) to provide a framework for changes and amendments to test objectives, practices, and procedures. Because in situ determinations of gas-threshold pressure in low-permeability media are not standard practice, the methods recommended in this test plan are adapted from permeability-testing and hydrofracture procedures. Therefore, as the gas-threshold-pressure testing program progresses, personnel assigned to the program and outside observers and reviewers will be asked for comments regarding the testing procedures. New and/or improved test procedures will be documented as amendments to this test plan, and subject to similar review procedures

  6. Adaptive designs based on the truncated product method

    Directory of Open Access Journals (Sweden)

    Neuhäuser Markus

    2005-09-01

    Full Text Available Abstract Background Adaptive designs are becoming increasingly important in clinical research. One approach subdivides the study into several (two or more) stages and combines the p-values of the different stages using Fisher's combination test. Methods As an alternative to Fisher's test, the recently proposed truncated product method (TPM) can be applied to combine the p-values. The TPM uses the product of only those p-values that do not exceed some fixed cut-off value. Here, these two competing analyses are compared. Results When an early termination due to insufficient effects is not appropriate, such as in dose-response analyses, the probability of stopping the trial early with the rejection of the null hypothesis is increased when the TPM is applied. Therefore, the expected total sample size is decreased. This decrease in sample size is not connected with a loss in power. The TPM turns out to be less advantageous when an early termination of the study due to insufficient effects is possible. This is due to a decrease of the probability of stopping the trial early. Conclusion It is recommended to apply the TPM rather than Fisher's combination test whenever an early termination due to insufficient effects is not suitable within the adaptive design.
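
    The TPM statistic itself is simple: the product of all stage-wise p-values not exceeding a cut-off τ. A minimal sketch with a Monte Carlo null distribution, assuming independent uniform p-values under the null (τ = 0.05 and the simulation size are illustrative):

    ```python
    # Minimal sketch of the TPM: the statistic is the product of all p-values
    # not exceeding tau, and its null distribution is approximated by Monte
    # Carlo under independent uniform p-values (constants are illustrative).
    import numpy as np

    def tpm_statistic(pvals, tau=0.05):
        p = np.asarray(pvals)
        kept = p[p <= tau]
        return float(np.prod(kept)) if kept.size else 1.0

    def tpm_pvalue(pvals, tau=0.05, n_sim=100_000, seed=0):
        rng = np.random.default_rng(seed)
        w_obs = tpm_statistic(pvals, tau)
        sims = rng.uniform(size=(n_sim, len(pvals)))
        w_sim = np.where(sims <= tau, sims, 1.0).prod(axis=1)
        return float(np.mean(w_sim <= w_obs))  # smaller product = stronger evidence

    print(tpm_pvalue([0.01, 0.20, 0.03]))      # combining three stage-wise p-values
    ```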

  7. Particles near threshold

    International Nuclear Information System (INIS)

    Bhattacharya, T.; Willenbrock, S.

    1993-01-01

    We propose returning to the definition of the width of a particle in terms of the pole in the particle's propagator. Away from thresholds, this definition of width is equivalent to the standard perturbative definition, up to next-to-leading order; however, near a threshold, the two definitions differ significantly. The width as defined by the pole position provides more information in the threshold region than the standard perturbative definition and, in contrast with the perturbative definition, does not vanish when a two-particle s-wave threshold is approached from below

  8. A Matching Method for Vehicle-borne Panoramic Image Sequence Based on Adaptive Structure from Motion Feature

    Directory of Open Access Journals (Sweden)

    ZHANG Zhengpeng

    2015-10-01

    Full Text Available Panoramic image matching under the constraint of local structure-from-motion similarity features is an important method; the process requires multivariate kernel density estimation of the structure-from-motion features using nonparametric mean shift. Proper selection of the kernel bandwidth is a critical step for the convergence speed and accuracy of the matching method. A variable-bandwidth matching method for panoramic images with adaptive structure-from-motion features is proposed in this work. First, the bandwidth matrix is defined using the locally adaptive spatial structure of the sampling point in the spatial domain and the optical flow domain. The relaxation diffusion process of the structure-from-motion similarity feature is described by a distance weighting method on local optical flow feature vectors. Then the expression of the adaptive multivariate kernel density function is given, and the solution of the mean shift vector, the termination conditions, and the seed point selection method are discussed. Finally, multi-scale SIFT features and structure features are fused to establish a unified panoramic image matching framework. Spherical panoramic images from a vehicle-borne mobile measurement system are chosen, and a detailed comparative analysis between fixed and adaptive bandwidths is carried out. The results show that the adaptive bandwidth copes well with changes in inlier ratio and object-space scale. The proposed method realizes an adaptive similarity measure for structure-from-motion features and increases the number of correct matching points and the matching rate; experimental results show the method to be robust.

  9. The Adapted Ordering Method for Lie algebras and superalgebras and their generalizations

    Energy Technology Data Exchange (ETDEWEB)

    Gato-Rivera, Beatriz [Instituto de Matematicas y Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); NIKHEF-H, Kruislaan 409, NL-1098 SJ Amsterdam (Netherlands)

    2008-02-01

    In 1998 the Adapted Ordering Method was developed for the representation theory of the superconformal algebras in two dimensions. It allows us to determine maximal dimensions for a given type of space of singular vectors, to identify all singular vectors by only a few coefficients, to spot subsingular vectors and to set the basis for constructing embedding diagrams. In this paper we present the Adapted Ordering Method for general Lie algebras and superalgebras and their generalizations, provided they can be triangulated. We also review briefly the results obtained for the Virasoro algebra and for the N = 2 and Ramond N = 1 superconformal algebras.

  10. Visual perception system and method for a humanoid robot

    Science.gov (United States)

    Wells, James W. (Inventor); Mc Kay, Neil David (Inventor); Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor)

    2012-01-01

    A robotic system includes a humanoid robot with robotic joints each moveable using an actuator(s), and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.

  11. Tactile arousal threshold of sleeping king penguins in a breeding colony.

    Science.gov (United States)

    Dewasmes, G; Telliez, F

    2000-09-01

    The tactile arousal threshold of sleeping birds has not been investigated to date. In this study, the characteristics of this threshold were assessed by stimulating either the upper back or a foot of two groups (one cutaneous site per group) of 60 sleeping king penguins (Aptenodytes patagonica) in the breeding colony of Baie du Marin (Crozet Archipelago). Increasing weights were put onto one of the feet or the upper back of individuals that had been sleeping for more than 5 min until they showed behavioural signs of arousal (head raising). The weight applied to the upper back that was needed to awaken a sleeper (837 +/- 73 g) was 20 times greater than that applied to a foot (38 +/- 6 g). In terms of pressure, the difference remained five times higher for the back (209 +/- 18 g/cm(2)) than the foot (40 g +/- 7 g/cm(2)). Because the king penguin incubates its single egg and rears its young chick on its feet, the low threshold measured at this level could be viewed as an adaptation against progeny predation. Sleepers are frequently bumped by conspecifics walking through the colony. The increased arousal threshold associated with tactile stimulation of the back may help to preserve sleep continuity under these conditions.

  12. Adaptive endpoint detection of seismic signal based on auto-correlated function

    International Nuclear Information System (INIS)

    Fan Wanchun; Shi Ren

    2000-01-01

    There are certain shortcomings in endpoint detection by time-waveform envelope and/or by checking the travel-time table (both labelled as artificial detection methods). Based on an analysis of the auto-correlation function, the notion of the distance between auto-correlation functions was introduced, and the characteristics of noise and of signal with noise were discussed using this distance. Then, a method of adaptive endpoint detection of seismic signals based on auto-correlation similarity was developed. The implementation steps and the determination of the thresholds are presented in detail. Experimental results, compared with methods based on artificial detection, show that this method has higher sensitivity even in low-SNR circumstances
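
    A minimal sketch of the underlying idea, assuming a noise-only segment at the start of the record serves as the reference; window length, lag count and the distance threshold are illustrative values, not the paper's:

    ```python
    # Minimal sketch: normalized autocorrelation functions are computed in
    # sliding windows, and the Euclidean distance from a noise-only reference
    # autocorrelation flags the signal onset once it exceeds a threshold.
    import numpy as np

    def autocorr(x, nlags=32):
        x = x - x.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:len(x) - 1 + nlags]
        return ac / ac[0] if ac[0] != 0 else ac

    def detect_onset(sig, win=256, nlags=32, thresh=2.0):
        ref = autocorr(sig[:win], nlags)             # noise-only reference
        for start in range(win, len(sig) - win, win // 2):
            d = np.linalg.norm(autocorr(sig[start:start + win], nlags) - ref)
            if d > thresh:                           # distance between ACFs jumps
                return start                         # first window holding signal
        return None
    ```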

  13. Night Vision Image De-Noising of Apple Harvesting Robots Based on the Wavelet Fuzzy Threshold

    Directory of Open Access Journals (Sweden)

    Chengzhi Ruan

    2015-12-01

    Full Text Available In this paper, the de-noising problem of night vision images is studied for apple harvesting robots working at night. The wavelet threshold method is applied to the de-noising of night vision images. Since the choice of the wavelet threshold function restricts the effect of the wavelet threshold method, fuzzy theory is introduced to construct a fuzzy threshold function. We then propose a de-noising algorithm based on the wavelet fuzzy threshold. This new method can reduce image noise interference, which is conducive to further image segmentation and recognition. To demonstrate the performance of the proposed method, we conducted simulation experiments and compared it with the median filtering and the wavelet soft-threshold de-noising methods. It is shown that this new method achieves the highest relative PSNR. Compared with the original images, the median filtering de-noising method and the classical wavelet threshold de-noising method, the relative PSNR increases by 24.86%, 13.95%, and 11.38%, respectively. We carry out comparisons from various aspects, such as intuitive visual evaluation, objective data evaluation, edge evaluation and artificial light evaluation. The experimental results show that the proposed method has unique advantages for the de-noising of night vision images, which lays the foundation for apple harvesting robots working at night.
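
    The classical wavelet soft-threshold baseline the paper compares against can be sketched with PyWavelets; the fuzzy threshold function itself is the paper's contribution and is not reproduced here, and the wavelet, level and universal-threshold rule are conventional choices rather than the authors':

    ```python
    # Sketch of the classical wavelet soft-threshold baseline (not the
    # paper's fuzzy variant); wavelet, level and the universal threshold are
    # conventional choices.
    import numpy as np
    import pywt

    def wavelet_denoise(img, wavelet="db4", level=2):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        # Noise scale from the finest diagonal detail band (robust MAD estimate).
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        t = sigma * np.sqrt(2.0 * np.log(img.size))      # universal threshold
        denoised = [coeffs[0]] + [
            tuple(pywt.threshold(c, t, mode="soft") for c in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(denoised, wavelet)
    ```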

  14. Adaptive Event-Triggered Control Based on Heuristic Dynamic Programming for Nonlinear Discrete-Time Systems.

    Science.gov (United States)

    Dong, Lu; Zhong, Xiangnan; Sun, Changyin; He, Haibo

    2017-07-01

    This paper presents the design of a novel adaptive event-triggered control method based on the heuristic dynamic programming (HDP) technique for nonlinear discrete-time systems with unknown system dynamics. In the proposed method, the control law is only updated when the event-triggered condition is violated. Compared with the periodic updates in the traditional adaptive dynamic programming (ADP) control, the proposed method can reduce the computation and transmission cost. An actor-critic framework is used to learn the optimal event-triggered control law and the value function. Furthermore, a model network is designed to estimate the system state vector. The main contribution of this paper is to design a new trigger threshold for discrete-time systems. A detailed Lyapunov stability analysis shows that our proposed event-triggered controller can asymptotically stabilize the discrete-time systems. Finally, we test our method on two different discrete-time systems, and the simulation results are included.

  15. Salt taste adaptation: the psychophysical effects of adapting solutions and residual stimuli from prior tastings on the taste of sodium chloride.

    Science.gov (United States)

    O'Mahony, M

    1979-01-01

    The paper reviews how adaptation to sodium chloride, changing in concentration as a result of various experimental procedures, affects measurements of the sensitivity, intensity, and quality of the salt taste. The development of and evidence for the current model that the salt taste depends on an adaptation level (taste zero) determined by the sodium cation concentration is examined and found to be generally supported, despite great methodological complications. It would seem that lower adaptation levels elicit lower thresholds, higher intensity estimates, and altered quality descriptions with predictable effects on psychophysical measures.

  16. A multilevel adaptive reaction-splitting method for SRNs

    KAUST Repository

    Moraes, Alvaro; Tempone, Raul; Vilanova, Pedro

    2016-01-01

    In [5], we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks (SRNs) specifically designed for systems in which the set of reaction channels can be adaptively partitioned into two subsets characterized by either high or low activity. To estimate expected values of observables of the system, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This is achieved with a computational complexity of order O(TOL^-2). We also present a novel control variate technique which may dramatically reduce the variance of the coarsest level at a negligible computational cost.

  17. A wavelet-MRA-based adaptive semi-Lagrangian method for the relativistic Vlasov-Maxwell system

    International Nuclear Information System (INIS)

    Besse, Nicolas; Latu, Guillaume; Ghizzo, Alain; Sonnendruecker, Eric; Bertrand, Pierre

    2008-01-01

    In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and of the regularity of the distribution function. The multiscale expansion of the distribution function therefore allows one to obtain a sparse representation of the data and thus save memory space and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. The interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wavebreaking. However, the wavelet-based adaptive method that we developed here does not yield significant improvements compared to Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless, it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase space where the development of thin filaments, strongly amplified by relativistic effects, requires an important increase of the total number of points of the phase-space grid as they get finer as time goes on. The adaptive method could be more useful in cases where the thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments. Moreover, the main way to improve the efficiency of the adaptive method is to

  18. Adaptation Tipping Points of a Wetland under a Drying Climate

    Directory of Open Access Journals (Sweden)

    Amar Nanda

    2018-02-01

    Full Text Available Wetlands experience considerable alteration to their hydrology, which typically contributes to a decline in their overall ecological integrity. Wetland management strategies aim to repair wetland hydrology and attenuate wetland loss that is associated with climate change. However, decision makers often lack the data needed to support complex social-environmental systems models, making it difficult to assess the effectiveness of current or past practices. Adaptation Tipping Points (ATPs) is a policy-oriented method that can be useful in these situations. Here, a modified ATP framework is presented to assess the suitability of ecosystem management when rigorous ecological data are lacking. We define the effectiveness of the wetland management strategy by its ability to maintain sustainable minimum water levels that are required to support ecological processes. These minimum water requirements are defined in water management and environmental policy of the wetland. Here, we trial the method on Forrestdale Lake, a wetland in a region experiencing a markedly drying climate. ATPs were defined by linking key ecological objectives identified by policy documents to threshold values for water depth. We then used long-term hydrologic data (1978-2012) to assess if and when thresholds were breached. We found that from the mid-1990s, declining wetland water depth breached ATPs for the majority of the wetland objectives. We conclude that the wetland management strategy has been ineffective from the mid-1990s, when the region’s climate dried markedly. The extent of legislation, policies, and management authorities across different scales and levels of governance need to be understood to adapt ecosystem management strategies. Empirical verification of the ATP assessment is required to validate the suitability of the method. However, in general we consider ATPs to be a useful desktop method to assess the suitability of management when rigorous ecological data

  19. An object-oriented decomposition of the adaptive-hp finite element method

    Energy Technology Data Exchange (ETDEWEB)

    Wiley, J.C.

    1994-12-13

    Adaptive-hp methods are those which use a refinement control strategy driven by a local error estimate to locally modify the element size, h, and polynomial order, p. The result is an unstructured mesh, in which each node may be associated with a different polynomial order, that generally requires complex data structures to implement. Object-oriented design strategies and languages which support them, e.g., C++, help control the complexity of these methods. Here an overview of the major classes and the class structure of an adaptive-hp finite element code is described. The essential finite element structure is described in terms of four areas of computation, each with its own dynamic characteristics. Implications of converting the code for a distributed-memory parallel environment are also discussed.

  1. Performance study of Active Queue Management methods: Adaptive GRED, REDD, and GRED-Linear analytical model

    Directory of Open Access Journals (Sweden)

    Hussein Abdel-jaber

    2015-10-01

    Full Text Available Congestion control is one of the hot research topics that helps maintain the performance of computer networks. This paper compares three Active Queue Management (AQM) methods, namely, Adaptive Gentle Random Early Detection (Adaptive GRED), Random Early Dynamic Detection (REDD), and a GRED Linear analytical model, with respect to different performance measures. Adaptive GRED and REDD are implemented based on simulation, whereas GRED Linear is implemented as a discrete-time analytical model. Several performance measures are used to evaluate the effectiveness of the compared methods, mainly mean queue length, throughput, average queueing delay, overflow packet loss probability, and packet dropping probability. The ultimate aim is to identify the method that offers the most satisfactory performance in non-congestion or congestion scenarios. The first comparison results, based on different packet arrival probability values, show that GRED Linear provides better mean queue length, average queueing delay and packet overflow probability than the Adaptive GRED and REDD methods in the presence of congestion. Further, using the same evaluation measures, Adaptive GRED offers more satisfactory performance than REDD when heavy congestion is present. When the finite queue capacity varies, the GRED Linear model provides the most satisfactory performance with reference to mean queue length and average queueing delay, and all the compared methods provide similar throughput performance. However, when the finite capacity value is large, the compared methods have similar results with regard to the probabilities of both packet overflow and packet dropping.
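
    The RED mechanism underlying all three variants maps an exponentially weighted average queue length to a drop probability. A minimal sketch of that rule (threshold and weight constants are illustrative; the "gentle" ramp above the upper threshold used by GRED is omitted for brevity):

    ```python
    # Minimal sketch of the RED-style dropping rule underlying these AQM
    # variants: an EWMA of the queue length is mapped to a drop probability
    # between two thresholds (all constants illustrative).
    class RedQueue:
        def __init__(self, min_th=5, max_th=15, max_p=0.1, wq=0.002):
            self.min_th, self.max_th, self.max_p, self.wq = min_th, max_th, max_p, wq
            self.avg = 0.0

        def drop_probability(self, queue_len):
            self.avg = (1 - self.wq) * self.avg + self.wq * queue_len  # EWMA
            if self.avg < self.min_th:
                return 0.0                      # no congestion: never drop
            if self.avg >= self.max_th:
                return 1.0                      # heavy congestion: drop everything
            # Linear ramp between the thresholds.
            return self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
    ```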

  2. The role of glacier changes and threshold definition in the characterisation of future streamflow droughts in glacierised catchments

    Science.gov (United States)

    Van Tiel, Marit; Teuling, Adriaan J.; Wanders, Niko; Vis, Marc J. P.; Stahl, Kerstin; Van Loon, Anne F.

    2018-01-01

    Glaciers are essential hydrological reservoirs, storing and releasing water at various timescales. Short-term variability in glacier melt is one of the causes of streamflow droughts, here defined as deficiencies from the flow regime. Streamflow droughts in glacierised catchments have a wide range of interlinked causing factors related to precipitation and temperature on short and long timescales. Climate change affects glacier storage capacity, with resulting consequences for discharge regimes and streamflow drought. Future projections of streamflow drought in glacierised basins can, however, strongly depend on the modelling strategies and analysis approaches applied. Here, we examine the effect of different approaches, concerning the glacier modelling and the drought threshold, on the characterisation of streamflow droughts in glacierised catchments. Streamflow is simulated with the Hydrologiska Byråns Vattenbalansavdelning (HBV-light) model for two case study catchments, the Nigardsbreen catchment in Norway and the Wolverine catchment in Alaska, and two future climate change scenarios (RCP4.5 and RCP8.5). Two types of glacier modelling are applied, a constant and dynamic glacier area conceptualisation. Streamflow droughts are identified with the variable threshold level method and their characteristics are compared between two periods, a historical (1975-2004) and future (2071-2100) period. Two existing threshold approaches to define future droughts are employed: (1) the threshold from the historical period; (2) a transient threshold approach, whereby the threshold adapts every year in the future to the changing regimes. Results show that drought characteristics differ among the combinations of glacier area modelling and thresholds. The historical threshold combined with a dynamic glacier area projects extreme increases in drought severity in the future, caused by the regime shift due to a reduction in glacier area. The historical threshold combined with a

  3. The relationship of VOI threshold, volume and B/S on DISA images

    International Nuclear Information System (INIS)

    Song Liejing; Wang Mingming; Si Hongwei; Li Fei

    2011-01-01

    Objective: To explore the relationship of VOI threshold, volume and B/S on DISA phantom images. Methods: Ten hollow spheres were placed in a cylinder phantom. According to B/S ratios of 1:7, 1:5 and 1:4, 99mTcO4- and 18F-FDG were filled into the container and the spheres, simultaneously and separately. Images were acquired by DISA and SIDA protocols. The volume of interest (VOI) for each sphere was analyzed by the threshold method, and the expression was fitted individually to validate the relationship. Results: The equation for the estimation of the optimal threshold was Tm = d + c × Bm/(e + f × Vm) + b/Vm. For the majority of the data, the calculated threshold fell within the 1% interval that actually contained the optimal threshold; those that did not fell in the adjacent lower or upper intervals. Conclusions: For both DISA and SIDA images, based on the relationship of VOI threshold, volume and B/S, this method could accurately calculate the optimal threshold with an error of less than 1% for spheres whose volumes ranged from 3.3 to 30.8 ml. (authors)

  4. Recruitment dynamics in adaptive social networks

    Science.gov (United States)

    Shkarayev, Maxim S.; Schwartz, Ira B.; Shaw, Leah B.

    2013-06-01

    We model recruitment in adaptive social networks in the presence of birth and death processes. Recruitment is characterized by nodes changing their status to that of the recruiting class as a result of contact with recruiting nodes. Only a susceptible subset of nodes can be recruited. The recruiting individuals may adapt their connections in order to improve recruitment capabilities, thus changing the network structure adaptively. We derive a mean-field theory to predict the dependence of the growth threshold of the recruiting class on the adaptation parameter. Furthermore, we investigate the effect of adaptation on the recruitment level, as well as on network topology. The theoretical predictions are compared with direct simulations of the full system. We identify two parameter regimes with qualitatively different bifurcation diagrams depending on whether nodes become susceptible frequently (multiple times in their lifetime) or rarely (much less than once per lifetime).

  5. Recruitment dynamics in adaptive social networks

    International Nuclear Information System (INIS)

    Shkarayev, Maxim S; Shaw, Leah B; Schwartz, Ira B

    2013-01-01

    We model recruitment in adaptive social networks in the presence of birth and death processes. Recruitment is characterized by nodes changing their status to that of the recruiting class as a result of contact with recruiting nodes. Only a susceptible subset of nodes can be recruited. The recruiting individuals may adapt their connections in order to improve recruitment capabilities, thus changing the network structure adaptively. We derive a mean-field theory to predict the dependence of the growth threshold of the recruiting class on the adaptation parameter. Furthermore, we investigate the effect of adaptation on the recruitment level, as well as on network topology. The theoretical predictions are compared with direct simulations of the full system. We identify two parameter regimes with qualitatively different bifurcation diagrams depending on whether nodes become susceptible frequently (multiple times in their lifetime) or rarely (much less than once per lifetime). (paper)

  6. A threshold auto-adjustment algorithm of feature points extraction based on grid

    Science.gov (United States)

    Yao, Zili; Li, Jun; Dong, Gaojie

    2018-02-01

    When dealing with high-resolution digital images, detection of feature points is usually the very first important step. Valid feature points depend on the threshold. If the threshold is too low, plenty of feature points will be detected, and they may aggregate in richly textured regions, which consequently not only affects the speed of feature description, but also aggravates the burden of subsequent processing; if the threshold is set high, feature points will be lacking in poorly textured areas. To solve these problems, this paper proposes a grid-based threshold auto-adjustment method for feature extraction. By dividing the image into a number of grid cells, a threshold is set in every local cell for extracting the feature points. When the number of feature points does not meet the requirement, the threshold is adjusted automatically to change the final number of feature points. The experimental results show that the feature points produced by our method are more uniform and representative, which avoids the aggregation of feature points and greatly reduces the complexity of subsequent work.
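
    A hedged sketch of the per-cell adjustment loop with OpenCV's FAST detector; the detector choice, target counts and step sizes are assumptions, since the abstract does not specify them:

    ```python
    # Hedged sketch of a grid-wise threshold adjustment: each cell starts from
    # a common FAST threshold, lowered when a cell yields too few corners and
    # raised when it yields too many (target counts and steps are assumed).
    import cv2

    def grid_features(gray, rows=4, cols=4, target=(20, 60), t0=40):
        h, w = gray.shape
        keypoints = []
        for i in range(rows):
            for j in range(cols):
                y0, x0 = i * h // rows, j * w // cols
                cell = gray[y0:(i + 1) * h // rows, x0:(j + 1) * w // cols]
                t = t0
                for _ in range(5):                   # a few adjustment rounds
                    kps = cv2.FastFeatureDetector_create(threshold=t).detect(cell)
                    if len(kps) < target[0] and t > 5:
                        t -= 5                       # too few points: be permissive
                    elif len(kps) > target[1]:
                        t += 5                       # too many points: be strict
                    else:
                        break
                # Shift cell coordinates back into full-image coordinates.
                keypoints += [cv2.KeyPoint(k.pt[0] + x0, k.pt[1] + y0, k.size)
                              for k in kps]
        return keypoints
    ```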

  7. A multilevel adaptive reaction-splitting method for SRNs

    KAUST Repository

    Moraes, Alvaro

    2015-01-07

    In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks specifically designed for systems in which the set of reaction channels can be adaptively partitioned into two subsets characterized by either “high” or “low” activity. To estimate expected values of observables of the system, our method bounds the global computational error to be below a prescribed tolerance, within a given confidence level. This is achieved with a computational complexity of order O (TOL-2).We also present a novel control variate technique which may dramatically reduce the variance of the coarsest level at a negligible computational cost. Our numerical examples show substantial gains with respect to the standard Stochastic Simulation Algorithm (SSA) by Gillespie and also our previous hybrid Chernoff tau-leap method.

  8. Is heart rate variability a feasible method to determine anaerobic threshold in progressive resistance exercise in coronary artery disease?

    Science.gov (United States)

    Sperling, Milena P R; Simões, Rodrigo P; Caruso, Flávia C R; Mendes, Renata G; Arena, Ross; Borghi-Silva, Audrey

    2016-01-01

    Recent studies have shown that the magnitude of the metabolic and autonomic responses during progressive resistance exercise (PRE) is associated with the determination of the anaerobic threshold (AT). AT is an important parameter to determine intensity in dynamic exercise. To investigate the metabolic and cardiac autonomic responses during dynamic resistance exercise in patients with Coronary Artery Disease (CAD). Twenty men (age = 63±7 years) with CAD [Left Ventricular Ejection Fraction (LVEF) = 60±10%] underwent a PRE protocol on a leg press until maximal exertion. The protocol began at 10% of One Repetition Maximum Test (1-RM), with subsequent increases of 10% until maximal exhaustion. Heart Rate Variability (HRV) indices from Poincaré plots (SD1, SD2, SD1/SD2) and time domain (rMSSD and RMSM), and blood lactate were determined at rest and during PRE. Significant alterations in HRV and blood lactate were observed starting at 30% of 1-RM (p<0.05). Bland-Altman plots revealed a consistent agreement between blood lactate threshold (LT) and rMSSD threshold (rMSSDT) and between LT and SD1 threshold (SD1T). Relative values of 1-RM in all LT, rMSSDT and SD1T did not differ (29%±5 vs 28%±5 vs 29%±5 Kg, respectively). HRV during PRE could be a feasible noninvasive method of determining AT in CAD patients to plan intensities during cardiac rehabilitation.

  9. Variable threshold algorithm for division of labor analyzed as a dynamical system.

    Science.gov (United States)

    Castillo-Cagigal, Manuel; Matallanas, Eduardo; Navarro, Iñaki; Caamaño-Martín, Estefanía; Monasterio-Huelin, Félix; Gutiérrez, Álvaro

    2014-12-01

    Division of labor is a widely studied aspect of colony behavior of social insects. Division of labor models indicate how individuals distribute themselves in order to perform different tasks simultaneously. However, models that study division of labor from a dynamical system point of view cannot be found in the literature. In this paper, we define a division of labor model as a discrete-time dynamical system, in order to study the equilibrium points and their properties related to convergence and stability. By making use of this analytical model, an adaptive algorithm based on division of labor can be designed to satisfy dynamic criteria. In this way, we have designed and tested an algorithm that varies the response thresholds in order to modify the dynamic behavior of the system. This behavior modification allows the system to adapt to specific environmental and collective situations, making the algorithm a good candidate for distributed control applications. The variable threshold algorithm is based on specialization mechanisms. It is able to achieve an asymptotically stable behavior of the system in different environments and independently of the number of individuals. The algorithm has been successfully tested under several initial conditions and number of individuals.
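
    A minimal sketch in the spirit of the abstract, using the classical response-threshold engagement probability s²/(s² + θ²); the learning/forgetting increments and stimulus dynamics below are assumed values, not the paper's:

    ```python
    # Sketch of a variable response-threshold rule: individual i engages with
    # probability s^2 / (s^2 + theta_i^2), and thresholds adapt so that
    # specialization emerges (update constants are assumed values).
    import numpy as np

    rng = np.random.default_rng(1)
    n, steps = 20, 200
    theta = rng.uniform(1.0, 10.0, size=n)     # individual response thresholds
    s = 5.0                                    # task-associated stimulus

    for _ in range(steps):
        engage_p = s**2 / (s**2 + theta**2)    # threshold response function
        working = rng.uniform(size=n) < engage_p
        theta = np.clip(np.where(working, theta - 0.1, theta + 0.05), 0.1, 20.0)
        s = max(s + 1.0 - 2.0 * working.mean(), 0.0)  # demand grows, work shrinks it

    print("specialists (low threshold):", int((theta < 1.0).sum()))
    ```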

  10. Review and Analysis of Cryptographic Schemes Implementing Threshold Signature

    Directory of Open Access Journals (Sweden)

    Anastasiya Victorovna Beresneva

    2015-03-01

    Full Text Available This work is devoted to the study of threshold signature schemes. A systematization of threshold signature schemes was carried out, and cryptographic constructions based on Lagrange interpolation polynomials, elliptic curves and bilinear pairings were investigated. Different methods of generation and verification of threshold signatures were explored, e.g. those used in mobile agents, Internet banking and e-currency. The significance of the work lies in reducing the level of counterfeit electronic documents signed by a certain group of users.

  11. Detecting wood surface defects with fusion algorithm of visual saliency and local threshold segmentation

    Science.gov (United States)

    Wang, Xuejuan; Wu, Shuhang; Liu, Yunpeng

    2018-04-01

    This paper presents a new method for wood defect detection. It can solve the over-segmentation problem existing in local threshold segmentation methods. This method effectively takes advantage of visual saliency and local threshold segmentation. Firstly, defect areas are coarsely located by using the spectral residual method to calculate their global visual saliency. Then, threshold segmentation by the maximum inter-class variance (Otsu) method is adopted for precisely positioning and segmenting the wood surface defects around the coarsely located areas. Lastly, we use mathematical morphology to process the binary images after segmentation, which reduces the noise and removes small false objects. Experiments on test images of insect holes, dead knots and sound knots show that the proposed method obtains ideal segmentation results and is superior to existing segmentation methods based on edge detection, Otsu thresholding and threshold segmentation.
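
    The spectral residual step is a well-known saliency detector (Hou and Zhang) and is compact enough to sketch; the filter sizes below are conventional choices, not necessarily the paper's:

    ```python
    # Compact sketch of spectral residual saliency for coarse defect
    # localization; the subsequent Otsu-based local segmentation can then be
    # restricted to the salient regions.
    import numpy as np
    from scipy.ndimage import uniform_filter, gaussian_filter

    def spectral_residual_saliency(gray):
        f = np.fft.fft2(gray.astype(float))
        log_amp = np.log(np.abs(f) + 1e-8)
        phase = np.angle(f)
        residual = log_amp - uniform_filter(log_amp, size=3)  # drop smooth spectrum
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        return gaussian_filter(sal, sigma=2.5)                # smoothed saliency map
    ```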

  12. Ultrafuzziness Optimization Based on Type II Fuzzy Sets for Image Thresholding

    Directory of Open Access Journals (Sweden)

    Hudan Studiawan

    2010-11-01

    Full Text Available Image thresholding is one of the processing techniques used to provide a high quality preprocessed image. Image vagueness and bad illumination are common obstacles that yield poor thresholding output. By treating an image as a fuzzy set, several fuzzy thresholding techniques have been proposed to overcome these obstacles during threshold selection. In this paper, we propose an algorithm for thresholding images using ultrafuzziness optimization, which decreases the uncertainty of ordinary fuzzy sets by employing type II fuzzy sets. Optimization is conducted by measuring ultrafuzziness for the background and object fuzzy sets separately. Experimental results demonstrate that the proposed image thresholding method performs well on images with high vagueness, low contrast, and grayscale ambiguity.

  13. A Least Square-Based Self-Adaptive Localization Method for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Baoguo Yu

    2016-01-01

    Full Text Available In wireless sensor network (WSN) localization methods based on the Received Signal Strength Indicator (RSSI), it is usually required to determine the parameters of the radio signal propagation model before estimating the distance between an anchor node and an unknown node from their communication RSSI value; a localization algorithm then estimates the location of the unknown node. This approach, though high in localization accuracy, has weaknesses such as a complex working procedure and poor system versatility. To address these defects, a self-adaptive WSN localization method based on least squares is proposed, which uses the least-squares criterion to estimate the parameters of the radio signal propagation model, reducing the computation required in the estimation process. The experimental results show that the proposed self-adaptive localization method achieves high processing efficiency while satisfying the localization accuracy requirement, making it of practical value.
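
    The abstract does not spell out the propagation model, but RSSI ranging conventionally uses the log-distance model RSSI(d) = A - 10·n·log10(d), whose parameters are linear in the unknowns and therefore fit naturally by least squares. The sketch below shows that step under this assumption; the variable names and numbers are illustrative only.

```python
import numpy as np

def fit_path_loss(d, rssi):
    """Least-squares fit of RSSI(d) = A - 10*n*log10(d).
    Returns (A, n): A = received power at 1 m, n = path-loss exponent."""
    X = np.column_stack([np.ones_like(d), -10.0 * np.log10(d)])
    (A, n), *_ = np.linalg.lstsq(X, rssi, rcond=None)
    return A, n

def estimate_distance(rssi, A, n):
    """Invert the fitted model to range an unknown node."""
    return 10.0 ** ((A - rssi) / (10.0 * n))

# Example: calibrate from known anchor-to-anchor distances, then range a node
d = np.array([1.0, 2.0, 4.0, 8.0, 16.0])              # metres (illustrative)
rssi = np.array([-40.1, -46.3, -52.2, -58.0, -63.9])  # dBm (illustrative)
A, n = fit_path_loss(d, rssi)
print(estimate_distance(-55.0, A, n))                 # estimated distance in metres
```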

  14. Electrophysiological gap detection thresholds: effects of age and comparison with a behavioral measure.

    Science.gov (United States)

    Palmer, Shannon B; Musiek, Frank E

    2014-01-01

    Temporal processing ability has been linked to speech understanding ability, and older adults often complain of difficulty understanding speech in difficult listening situations. Temporal processing can be evaluated using gap detection procedures. There is some research showing that gap detection can be evaluated using an electrophysiological procedure. However, there is currently no research establishing a gap detection threshold using the N1-P2 response. The purposes of the current study were to 1) determine gap detection thresholds in younger and older normal-hearing adults using an electrophysiological measure, 2) compare the electrophysiological gap detection threshold and behavioral gap detection threshold within each group, and 3) investigate the effect of age on each gap detection measure. This study utilized an older adult group and a younger adult group to compare performance on an electrophysiological and a behavioral gap detection procedure. The subjects in this study were 11 younger, normal-hearing adults (mean = 22 yrs) and 11 older, normal-hearing adults (mean = 64.36 yrs). All subjects completed an adaptive behavioral gap detection procedure in order to determine their behavioral gap detection threshold (BGDT). Subjects also completed an electrophysiologic gap detection procedure to determine their electrophysiologic gap detection threshold (EGDT). Older adults demonstrated significantly larger gap detection thresholds than the younger adults. However, EGDT and BGDT were not significantly different in either group. The mean difference between EGDT and BGDT for all subjects was 0.43 msec. Older adults show poorer gap detection ability when compared to younger adults. However, this study shows that gap detection thresholds can be measured using evoked potential recordings and yield results similar to a behavioral measure. American Academy of Audiology.

  15. Trunk muscle activation during golf swing: Baseline and threshold.

    Science.gov (United States)

    Silva, Luís; Marta, Sérgio; Vaz, João; Fernandes, Orlando; Castro, Maria António; Pezarat-Correia, Pedro

    2013-10-01

    There is a lack of studies regarding EMG temporal analysis during dynamic and complex motor tasks, such as the golf swing. The aim of this study is to analyze EMG onset during the golf swing by comparing two different threshold methods. The Method A threshold was determined using the baseline activity recorded between two maximum voluntary contractions (MVCs). The Method B threshold was calculated using the mean EMG activity for 1000 ms before the 500 ms prior to the start of the backswing. Two different clubs were also studied. Three-way repeated measures ANOVA was used to compare methods, muscles and clubs. A two-way mixed Intraclass Correlation Coefficient (ICC) with absolute agreement was used to determine the methods' reliability. Club type showed no influence on onset detection. Rectus abdominis (RA) showed the highest agreement between methods. Erector spinae (ES), on the other hand, showed very low agreement, which might be related to postural activity before the swing. External oblique (EO) is the first muscle activated, at 1295 ms prior to impact. Activation times were similar between the right and left muscle sides, although the right EO showed better agreement between methods than the left side. Therefore, algorithm usage is task- and muscle-dependent. Copyright © 2013 Elsevier Ltd. All rights reserved.
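
    Method A's idea, thresholding on baseline activity, is a standard EMG onset rule: onset is declared when the envelope exceeds the baseline mean plus k standard deviations for a minimum duration. A minimal sketch under those assumptions follows; k and the window lengths are illustrative values, not the study's settings.

```python
import numpy as np

def emg_onset(emg, fs, base_start, base_end, k=3.0, min_ms=25):
    """Return onset time (s): first instant where the rectified, smoothed
    EMG stays above baseline mean + k*SD for at least `min_ms` ms.
    `base_start`/`base_end` delimit the baseline window in seconds."""
    rect = np.abs(emg - emg.mean())                  # demean, then rectify
    win = max(1, int(0.05 * fs))                     # 50 ms moving-average envelope
    env = np.convolve(rect, np.ones(win) / win, mode="same")
    base = env[int(base_start * fs):int(base_end * fs)]
    thr = base.mean() + k * base.std()               # Method A-style threshold
    above = env > thr
    need = int(min_ms / 1000 * fs)                   # samples the signal must hold
    for i in range(len(above) - need):
        if above[i:i + need].all():
            return i / fs
    return None                                      # no onset detected
```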

  16. Threshold secret sharing scheme based on phase-shifting interferometry.

    Science.gov (United States)

    Deng, Xiaopeng; Shi, Zhengang; Wen, Wei

    2016-11-01

    We propose a new method for secret image sharing with a (3,N) threshold scheme based on phase-shifting interferometry. The secret image, which is multiplied with an encryption key in advance, is first encrypted by using a Fourier transformation. Then, the encoded image is shared into N shadow images based on the recording principle of phase-shifting interferometry. Based on the reconstruction principle of phase-shifting interferometry, any three or more shadow images can retrieve the secret image, while any two or fewer shadow images cannot obtain any information about the secret image. Thus, a (3,N) threshold secret sharing scheme can be implemented. Compared with our previously reported method, the algorithm of this paper is suited not only to binary images but also to gray-scale images. Moreover, the proposed algorithm can obtain a larger threshold value t. Simulation results are presented to demonstrate the feasibility of the proposed method.

  17. Image registration method for medical image sequences

    Science.gov (United States)

    Gee, Timothy F.; Goddard, James S.

    2013-03-26

    Image registration of low contrast image sequences is provided. In one aspect, a desired region of an image is automatically segmented and only the desired region is registered. Active contours and adaptive thresholding of intensity or edge information may be used to segment the desired regions. A transform function is defined to register the segmented region, and sub-pixel information may be determined using one or more interpolation methods.

  18. Adapted Method for Separating Kinetic SZ Signal from Primary CMB Fluctuations

    Directory of Open Access Journals (Sweden)

    Forni Olivier

    2005-01-01

    Full Text Available In this first attempt to extract a map of the kinetic Sunyaev-Zel'dovich (KSZ) temperature fluctuations from the cosmic microwave background (CMB) anisotropies, we use a method based on simple and minimal assumptions. We first focus on the intrinsic limitations of the method due to the cosmological signal itself. We demonstrate using simulated maps that the reconstructed KSZ maps are in quite good agreement with the original input signal, both in terms of the average correlation coefficient between original and reconstructed maps and of the average error on the standard deviation of the reconstructed KSZ map. To achieve these results, our method relies on a first-step component separation providing us with (i) a map of Compton parameters for the thermal Sunyaev-Zel'dovich (TSZ) effect of galaxy clusters, and (ii) a map of temperature fluctuations which is the sum of the primary CMB and KSZ signals. Our method benefits from the spatial correlation between the KSZ and TSZ effects, which are both due to the same galaxy clusters. This correlation allows us to use the TSZ map as a spatial template in order to mask, in the map, the pixels where the clusters must have imprinted an SZ fluctuation. In practice, a series of TSZ thresholds is defined and, for each threshold, we estimate the corresponding KSZ signal by interpolating the CMB fluctuations on the masked pixels. The series of estimated KSZ maps is finally used to reconstruct the KSZ map through the minimisation of a criterion taking into account two statistical properties of the KSZ signal (KSZ dominates over primary anisotropies at small scales; KSZ fluctuations are non-Gaussian distributed). We show that the results are quite sensitive to the effect of beam convolution, especially for large beams, and to corruption by instrumental noise.

  19. Adaptive control method for core power control in TRIGA Mark II reactor

    Science.gov (United States)

    Sabri Minhat, Mohd; Selamat, Hazlina; Subha, Nurul Adilla Mohd

    2018-01-01

    The 1 MWth TRIGA PUSPATI Reactor (RTP), a Mark II type, has undergone more than 35 years of operation. The existing core power control uses a feedback control algorithm (FCA). Due to the sensitivity of nuclear research reactor operation, it is challenging to keep the core power stable at the desired value within acceptable error bands to meet the safety demands of the RTP. The current power tracking performance is unsatisfactory and can be improved. Therefore, a new core power control design is important to improve tracking performance and to regulate reactor power by controlling the movement of the control rods. In this paper, adaptive controllers, specifically Model Reference Adaptive Control (MRAC) and Self-Tuning Control (STC), were applied to the control of the core power. The model for core power control was based on mathematical models of the reactor core, an adaptive controller model, and control rod selection programming. The mathematical models of the reactor core were based on the point kinetics model, thermal hydraulic models, and reactivity models. The adaptive control model was derived using the Lyapunov method to ensure a stable closed-loop system, and the STC Generalised Minimum Variance (GMV) controller does not require exact knowledge of the plant transfer function when designing the core power control. The performance of the proposed adaptive control and the FCA were compared via computer simulation, and the simulation results demonstrate the effectiveness and good performance of the proposed control method for core power control.
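
    The full reactor model (point kinetics plus thermal hydraulics) is beyond the scope of an abstract, but the MRAC idea can be illustrated with the textbook gain-adaptation example using the MIT rule. All plant numbers below are hypothetical and unrelated to the RTP; this is a sketch of the adaptation principle, not the paper's controller.

```python
def mrac_gain_adaptation(t_end=100.0, dt=0.001, gamma=1.0):
    """MIT-rule adaptation of a feedforward gain (classic MRAC example).
    Plant:  dy/dt  = -a*y + b*theta*r   with unknown gain b
    Model:  dym/dt = -a*ym + bm*r
    Law:    dtheta/dt = -gamma * e * ym,  e = y - ym
    theta converges toward bm/b."""
    a, b, bm = 2.0, 0.5, 2.0      # b would be unknown on the real plant
    y = ym = 0.0
    theta = 0.0                   # adaptive feedforward gain
    for k in range(int(t_end / dt)):
        t = k * dt
        r = 1.0 if t % 20 < 10 else -1.0      # square-wave reference
        e = y - ym                            # model-following error
        theta += -gamma * e * ym * dt         # MIT-rule update
        y += (-a * y + b * theta * r) * dt    # plant (Euler integration)
        ym += (-a * ym + bm * r) * dt         # reference model
    return theta                              # approaches bm/b = 4

print(mrac_gain_adaptation())
```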

  20. Heat-Related Deaths in Hot Cities: Estimates of Human Tolerance to High Temperature Thresholds

    Directory of Open Access Journals (Sweden)

    Sharon L. Harlan

    2014-03-01

    Full Text Available In this study we characterized the relationship between temperature and mortality in central Arizona desert cities that have an extremely hot climate. Relationships between daily maximum apparent temperature (ATmax) and mortality for eight condition-specific causes and all-cause deaths were modeled for all residents and separately for males and females ages <65 and ≥65 during the months May–October for years 2000–2008. The most robust relationship was between ATmax on day of death and mortality from direct exposure to high environmental heat. For this condition-specific cause of death, the heat thresholds in all gender and age groups (ATmax = 90–97 °F; 32.2‒36.1 °C) were below local median seasonal temperatures in the study period (ATmax = 99.5 °F; 37.5 °C). Heat threshold was defined as the ATmax at which the mortality ratio begins an exponential upward trend. Thresholds were identified in younger and older females for cardiac disease/stroke mortality (ATmax = 106 and 108 °F; 41.1 and 42.2 °C) with a one-day lag. Thresholds were also identified for mortality from respiratory diseases in older people (ATmax = 109 °F; 42.8 °C) and for all-cause mortality in females (ATmax = 107 °F; 41.7 °C) and males <65 years (ATmax = 102 °F; 38.9 °C). Heat-related mortality in a region that has already made some adaptations to predictable periods of extremely high temperatures suggests that more extensive and targeted heat-adaptation plans for climate change are needed in cities worldwide.

  1. The use of the spectral method within the fast adaptive composite grid method

    Energy Technology Data Exchange (ETDEWEB)

    McKay, S.M.

    1994-12-31

    The use of efficient algorithms for the solution of partial differential equations has been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low cost solutions to partial differential equations. The FAC method achieves fast solution by combining solutions on different grids with varying discretizations, using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed, which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers which construct a function for the solution, e.g., spectral and finite element methods. In this discussion, spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the ensuing accuracy of this hybrid method outside of the subdomain will be investigated.

  2. Structured decision making as a conceptual framework to identify thresholds for conservation and management

    Science.gov (United States)

    Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.

    2009-01-01

    component, and ecological thresholds may be embedded in models projecting consequences of management actions. Decision thresholds are determined by the above-listed components of a structured decision process. These components may themselves vary over time, inducing variation in the decision thresholds inherited from them. These dynamic decision thresholds can then be determined using adaptive management. We provide numerical examples (that are based on patch occupancy models) of structured decision processes that include all three kinds of thresholds. © 2009 by the Ecological Society of America.

  3. What Temperature of Coffee Exceeds the Pain Threshold? Pilot Study of a Sensory Analysis Method as Basis for Cancer Risk Assessment.

    Science.gov (United States)

    Dirler, Julia; Winkler, Gertrud; Lachenmeier, Dirk W

    2018-06-01

    The International Agency for Research on Cancer (IARC) evaluates "very hot (>65 °C) beverages" as probably carcinogenic to humans. However, there is a lack of research regarding what temperatures consumers actually perceive as "very hot" or as "too hot". A method for sensory analysis of such threshold temperatures was developed. The participants were asked to mix a very hot coffee step by step into a cooler coffee, so that the coffee to be tasted incrementally increased in temperature during the test. The participants took a sip at every addition, until they perceived the beverage as too hot for consumption. The protocol was evaluated in the form of a pilot study using 87 participants. Interestingly, the average pain threshold of the test group (67 °C) and the preferred drinking temperature (63 °C) iterated around the IARC threshold for carcinogenicity. The developed methodology was found fit for purpose and may be applied in larger studies.

  4. What Temperature of Coffee Exceeds the Pain Threshold? Pilot Study of a Sensory Analysis Method as Basis for Cancer Risk Assessment

    Directory of Open Access Journals (Sweden)

    Julia Dirler

    2018-06-01

    Full Text Available The International Agency for Research on Cancer (IARC) evaluates “very hot (>65 °C) beverages” as probably carcinogenic to humans. However, there is a lack of research regarding what temperatures consumers actually perceive as “very hot” or as “too hot”. A method for sensory analysis of such threshold temperatures was developed. The participants were asked to mix a very hot coffee step by step into a cooler coffee, so that the coffee to be tasted incrementally increased in temperature during the test. The participants took a sip at every addition, until they perceived the beverage as too hot for consumption. The protocol was evaluated in the form of a pilot study using 87 participants. Interestingly, the average pain threshold of the test group (67 °C) and the preferred drinking temperature (63 °C) iterated around the IARC threshold for carcinogenicity. The developed methodology was found fit for purpose and may be applied in larger studies.

  5. Comparison of anaerobic threshold determined by visual and mathematical methods in healthy women.

    Science.gov (United States)

    Higa, M N; Silva, E; Neves, V F C; Catai, A M; Gallo, L; Silva de Sá, M F

    2007-04-01

    Several methods are used to estimate the anaerobic threshold (AT) during exercise. The aim of the present study was to compare AT obtained by a graphic visual method for the estimate of ventilatory and metabolic variables (gold standard) to a bi-segmental linear regression mathematical model based on Hinkley's algorithm applied to heart rate (HR) and carbon dioxide output (VCO2) data. Thirteen young (24 +/- 2.63 years old) and 16 postmenopausal (57 +/- 4.79 years old) healthy and sedentary women were submitted to a continuous ergospirometric incremental test on an electromagnetically braked cycloergometer with 10 to 20 W/min increases until physical exhaustion. The ventilatory variables were recorded breath-to-breath and HR was obtained beat-to-beat in real time. Data were analyzed by the nonparametric Friedman test and Spearman correlation test with the level of significance set at 5%. Power output (W), HR (bpm), oxygen uptake (VO2; mL kg(-1) min(-1)), VO2 (mL/min), VCO2 (mL/min), and minute ventilation (VE; L/min) data observed at the AT level were similar for both methods and groups studied (P > 0.05). The VO2 (mL kg(-1) min(-1)) data showed a significant correlation between the two methods (P < 0.05), suggesting that the mathematical model provides an automatic, non-invasive and objective AT measurement.
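
    The paper applies Hinkley's algorithm; as a rough stand-in, a brute-force bi-segmental linear regression conveys the same idea: fit one line on each side of every candidate breakpoint and keep the breakpoint with the lowest total residual. The sketch below assumes the workload axis is sorted and is not the authors' implementation.

```python
import numpy as np

def bisegmental_threshold(x, y):
    """Two-segment linear fit: return the x at the breakpoint that
    minimizes the combined residual sum of squares of both segments."""
    best_i, best_sse = None, np.inf
    for i in range(3, len(x) - 3):              # keep >= 3 points per segment
        sse = 0.0
        for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
            coef = np.polyfit(xs, ys, 1)        # straight-line fit per segment
            sse += ((ys - np.polyval(coef, xs)) ** 2).sum()
        if sse < best_sse:
            best_i, best_sse = i, sse
    return x[best_i]                            # workload at the estimated AT

# e.g. bisegmental_threshold(power_output, vco2)
#   or bisegmental_threshold(power_output, heart_rate)
```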

  6. Comparison of memory thresholds for planar qudit geometries

    Science.gov (United States)

    Marks, Jacob; Jochym-O'Connor, Tomas; Gheorghiu, Vlad

    2017-11-01

    We introduce and analyze a new type of decoding algorithm called general color clustering, based on renormalization group methods, to be used in qudit color codes. The performance of this decoder is analyzed under a generalized bit-flip error model, and is used to obtain the first memory threshold estimates for qudit 6-6-6 color codes. The proposed decoder is compared with similar decoding schemes for qudit surface codes as well as the current leading qubit decoders for both sets of codes. We find that, as with surface codes, clustering performs sub-optimally for qubit color codes, giving a threshold of 5.6% compared to the 8.0% obtained through surface projection decoding methods. However, the threshold rate increases by up to 112% for large qudit dimensions, plateauing around 11.9%. All the analysis is performed using QTop, a new open-source software for simulating and visualizing topological quantum error correcting codes.

  7. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    Science.gov (United States)

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.

  8. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method

    Directory of Open Access Journals (Sweden)

    Jure Tuta

    2018-03-01

    Full Text Available This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.

  9. Perioperative transfusion threshold and ambulation after hip revision surgery

    DEFF Research Database (Denmark)

    Nielsen, Kamilla; Johansson, Pär I; Dahl, Benny

    2014-01-01

    BACKGROUND: Transfusion with red blood cells (RBC) may be needed during hip revision surgery but the appropriate haemoglobin concentration (Hb) threshold for transfusion has not been well established. We hypothesized that a higher transfusion threshold would improve ambulation after hip revision surgery. METHODS: The trial was registered at Clinicaltrials.gov (NCT00906295). Sixty-six patients aged 18 years or older undergoing hip revision surgery were randomized to receive RBC at a Hb threshold of either 7.3 g/dL (restrictive group) or 8.9 g/dL (liberal group). Postoperative ambulation … received RBC. CONCLUSIONS: A Hb transfusion threshold of 8.9 g/dL was associated with a statistically significantly faster TUG after hip revision surgery compared to a threshold of 7.3 g/dL, but the clinical importance is questionable and the groups did not differ in Hb at the time of testing.

  10. Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery

    Directory of Open Access Journals (Sweden)

    Lingjun Liu

    2017-01-01

    Full Text Available This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. With increasing iterations, IST usually yields an over-smoothed solution and converges prematurely. To add back more detail, the BAIST method backtracks to the previous noisy image using L2 norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous one. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. In addition, BAIST uses a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques.
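
    For orientation, the IST core that BAIST builds on is the classic soft-thresholding iteration shown below; the backtracking step and the adaptive nonlocal regularizer described in the abstract are omitted, and the step-size choice is a standard assumption rather than the paper's.

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, iters=200):
    """Iterative shrinkage-thresholding for
       min_x  0.5 * ||A x - y||^2 + lam * ||x||_1
    (the IST core only; BAIST adds backtracking and an adaptive regularizer)."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)                    # gradient of the data term
        z = x - step * g                         # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x
```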

  11. A New Integrated Threshold Selection Methodology for Spatial Forecast Verification of Extreme Events

    Science.gov (United States)

    Kholodovsky, V.

    2017-12-01

    Extreme weather and climate events such as heavy precipitation, heat waves and strong winds can cause extensive damage to society in terms of human lives and financial losses. As the climate changes, it is important to understand how extreme weather events may change as a result. Climate and statistical models are often used independently to model these phenomena. To better assess the performance of climate models, a variety of spatial forecast verification methods have been developed. However, spatial verification metrics that are widely used in comparing mean states, in most cases, do not have an adequate theoretical justification for benchmarking extreme weather events. We propose a new integrated threshold selection methodology for spatial forecast verification of extreme events that couples existing pattern recognition indices with high threshold choices. This integrated approach has three main steps: 1) dimension reduction; 2) geometric domain mapping; and 3) threshold clustering. We apply this approach to an observed precipitation dataset over CONUS. The results are evaluated by displaying the threshold distribution seasonally, monthly and annually. The method offers the user the flexibility of selecting a high threshold that is linked to desired geometrical properties. The proposed high threshold methodology could either complement existing spatial verification methods, where threshold selection is arbitrary, or be directly applicable in extreme value theory.

  12. A simple method to adapt time sampling of the analog signal

    International Nuclear Information System (INIS)

    Kalinin, Yu.G.; Martyanov, I.S.; Sadykov, Kh.; Zastrozhnova, N.N.

    2004-01-01

    In this paper we briefly describe a time sampling method which adapts to the speed of the signal change. In principle, this method is based on a simple idea--the combination of discrete integration with differentiation of the analog signal. This method can be used in nuclear electronics research into the characteristics of detectors and the shape of the pulse signal, the pulse and transient characteristics of inertial signal-processing systems, etc.

  13. Local Stereo Matching Using Adaptive Local Segmentation

    NARCIS (Netherlands)

    Damjanovic, S.; van der Heijden, Ferdinand; Spreeuwers, Lieuwe Jan

    We propose a new dense local stereo matching framework for gray-level images based on an adaptive local segmentation using a dynamic threshold. We define a new validity domain of the fronto-parallel assumption based on the local intensity variations in the 4-neighborhood of the matching pixel. The

  14. SU-G-BRA-09: Estimation of Motion Tracking Uncertainty for Real-Time Adaptive Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yan, H [Capital Medical University, Beijing, Beijing (China); Chen, Z [Yale New Haven Hospital, New Haven, CT (United States); Nath, R; Liu, W [Yale University School of Medicine, New Haven, CT (United States)

    2016-06-15

    Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty by analyzing acquired data in real-time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point’s corresponding features, such as tumor motion speed and the 2D tracking error of previous time points. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For the conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding the 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the

  15. SU-G-BRA-09: Estimation of Motion Tracking Uncertainty for Real-Time Adaptive Imaging

    International Nuclear Information System (INIS)

    Yan, H; Chen, Z; Nath, R; Liu, W

    2016-01-01

    Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty by analyzing acquired data in real-time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point’s corresponding features, such as tumor motion speed and the 2D tracking error of previous time points. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For the conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding the 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the

  16. Self-adaptive method to distinguish inner and outer contours of industrial computed tomography image for rapid prototype

    International Nuclear Information System (INIS)

    Duan Liming; Ye Yong; Zhang Xia; Zuo Jian

    2013-01-01

    A self-adaptive identification method is proposed for more accurate and efficient judgment of whether contours in industrial computed tomography (CT) slice images are inner or outer contours. The convexity-concavity of each single-pixel-wide closed contour is first identified with the angle method. Then, contours with concave vertices are distinguished as inner or outer contours with the ray method, and contours without concave vertices are distinguished with the extreme coordinate value method. The distinguishing method is thus chosen automatically according to the convexity-concavity of each contour. In this way, the disadvantages of the single distinguishing methods, such as the ray method's long computation time and the extreme coordinate method's fallibility, can be avoided. The experiments prove the adaptability, efficiency, and accuracy of the self-adaptive method. (authors)
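
    At its core, the ray method referred to above reduces to a point-in-polygon test: cast a ray from a point on one contour, count crossings with another contour, and an odd count means containment; a contained contour is an inner contour. A minimal sketch of that core follows (it omits the authors' angle and extreme-coordinate logic, and assumes contours are non-intersecting simple polygons given as vertex lists).

```python
def point_in_polygon(pt, poly):
    """Ray casting: count crossings of a horizontal ray from `pt` with the
    polygon's edges; an odd count means `pt` lies inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def classify_contours(contours):
    """A contour lying inside any other contour is an inner contour."""
    labels = []
    for i, c in enumerate(contours):
        inside_any = any(point_in_polygon(c[0], other)
                         for j, other in enumerate(contours) if j != i)
        labels.append("inner" if inside_any else "outer")
    return labels
```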

  17. Verification of the coupled space-angle adaptivity algorithm for the finite element-spherical harmonics method via the method of manufactured solutions

    International Nuclear Information System (INIS)

    Park, H.; De Oliveira, C. R. E.

    2007-01-01

    This paper describes the verification of the recently developed space-angle self-adaptive algorithm for the finite element-spherical harmonics method via the Method of Manufactured Solutions. This method provides a simple, yet robust way for verifying the theoretical properties of the adaptive algorithm and interfaces very well with the underlying second-order, even-parity transport formulation. Simple analytic solutions in both spatial and angular variables are manufactured to assess the theoretical performance of the a posteriori error estimates. The numerical results confirm reliability of the developed space-angle error indicators. (authors)

  18. A resilience perspective to water risk management: case-study application of the adaptation tipping point method

    Science.gov (United States)

    Gersonius, Berry; Ashley, Richard; Jeuken, Ad; Nasruddin, Fauzy; Pathirana, Assela; Zevenbergen, Chris

    2010-05-01

    In a context of high uncertainty about hydrological variables due to climate change and other factors, the development of updated risk management approaches is as important as—if not more important than—the provision of improved data and forecasts of the future. Traditional approaches to adaptation attempt to manage future water risks to cities with the use of the predict-then-adapt method. This method uses hydrological change projections as the starting point to identify adaptive strategies, which is followed by analysing the cause-effect chain based on some sort of Pressures-State-Impact-Response (PSIR) scheme. The predict-then-adapt method presumes that it is possible to define a singular (optimal) adaptive strategy according to a most likely or average projection of future change. A key shortcoming of the method is, however, that the planning of water management structures is typically decoupled from forecast uncertainties and is, as such, inherently inflexible. This means that there is an increased risk of under- or over-adaptation, resulting in either malfunctioning or unnecessary costs. Rather than taking a traditional approach, responsible water risk management requires an alternative approach to adaptation that recognises and cultivates resiliency for change. The concept of resiliency relates to the capability of complex socio-technical systems to make aspirational levels of functioning attainable despite the occurrence of possible changes. Focusing on resiliency does not attempt to reduce uncertainty associated with future change, but rather to develop better ways of managing it. This makes it a particularly relevant perspective for adaptation to long-term hydrological change. Although resiliency is becoming more refined as a theory, the application of the concept to water risk management is still in an initial phase. Different methods are used in practice to support the implementation of a resilience-focused approach. Typically these approaches

  19. A Headset Method for Measuring the Visual Temporal Discrimination Threshold in Cervical Dystonia

    Directory of Open Access Journals (Sweden)

    Anna Molloy

    2014-07-01

    Full Text Available Background: The visual temporal discrimination threshold (TDT) is the shortest time interval at which one can determine two stimuli to be asynchronous, and it meets criteria for a valid endophenotype in adult-onset idiopathic focal dystonia, a poorly penetrant disorder. Temporal discrimination is assessed in the hospital laboratory; in unaffected relatives of multiplex adult-onset dystonia patients, distance from the hospital is a barrier to data acquisition. We devised a portable headset method for visual temporal discrimination determination, and our aim was to validate this portable tool against the traditional laboratory-based method in a group of patients and in a large cohort of healthy controls. Methods: Visual TDTs were examined in two groups, (1) 96 healthy control participants divided by age and gender, and (2) 33 cervical dystonia patients, using two methods of data acquisition: the traditional table-top laboratory-based system and the novel portable headset method. The order of assessment was randomized in the control group. The results obtained by each technique were compared. Results: Visual temporal discrimination in healthy control participants demonstrated similar age and gender effects by the headset method as found by the table-top examination. There were no significant differences between visual TDTs obtained using the two methods, both for the control participants and for the cervical dystonia patients. Bland–Altman testing showed good concordance between the two methods in both patients and in controls. Discussion: The portable headset device is a reliable and accurate method for visual temporal discrimination testing for use outside the laboratory, and will facilitate increased TDT data collection outside of the hospital setting. This is of particular importance in multiplex families, where data collection in all available members of the pedigree is important for exome sequencing studies.

  20. Adaptation in the innate immune system and heterologous innate immunity.

    Science.gov (United States)

    Martin, Stefan F

    2014-11-01

    The innate immune system recognizes deviation from homeostasis caused by infectious or non-infectious assaults. The threshold for its activation seems to be established by a calibration process that includes sensing of microbial molecular patterns from commensal bacteria and of endogenous signals. It is becoming increasingly clear that adaptive features, a hallmark of the adaptive immune system, can also be identified in the innate immune system. Such adaptations can result in the manifestation of a primed state of immune and tissue cells with a decreased activation threshold. This keeps the system poised to react quickly. Moreover, the fact that the innate immune system recognizes a wide variety of danger signals via pattern recognition receptors that often activate the same signaling pathways allows for heterologous innate immune stimulation. This implies that, for example, the innate immune response to an infection can be modified by co-infections or other innate stimuli. This "design feature" of the innate immune system has many implications for our understanding of individual susceptibility to diseases or responsiveness to therapies and vaccinations. In this article, adaptive features of the innate immune system as well as heterologous innate immunity and their implications are discussed.

  1. ADAPTIVE METHODS FOR STOCHASTIC DIFFERENTIAL EQUATIONS VIA NATURAL EMBEDDINGS AND REJECTION SAMPLING WITH MEMORY.

    Science.gov (United States)

    Rackauckas, Christopher; Nie, Qing

    2017-01-01

    Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides efficient approaches for solving differential equations. While many such methods exist for solving deterministic systems, little progress has been made for stochastic variants. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods with strong order 1.5 which have a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate which requires no additional function evaluations. Next we derive a general method to reject the time steps without losing information about the future Brownian path termed Rejection Sampling with Memory (RSwM). This method utilizes a stack data structure to do rejection sampling, costing only a few floating point calculations. We show numerically that the methods generate statistically-correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed timestep algorithms. Our approach only requires the solution to a bridging problem and thus lends itself to natural generalizations beyond SDEs.

  2. Adaptive mixed finite element methods for Darcy flow in fractured porous media

    KAUST Repository

    Chen, Huangxin; Salama, Amgad; Sun, Shuyu

    2016-01-01

    In this paper, we propose adaptive mixed finite element methods for simulating the single-phase Darcy flow in two-dimensional fractured porous media. The reduced model that we use for the simulation is a discrete fracture model coupling Darcy flows in the matrix and the fractures, and the fractures are modeled by one-dimensional entities. The Raviart-Thomas mixed finite element methods are utilized for the solution of the coupled Darcy flows in the matrix and the fractures. In order to improve the efficiency of the simulation, we use adaptive mixed finite element methods based on novel residual-based a posteriori error estimators. In addition, we develop an efficient upscaling algorithm to compute the effective permeability of the fractured porous media. Several interesting examples of Darcy flow in the fractured porous media are presented to demonstrate the robustness of the algorithm.

  3. Adaptive mixed finite element methods for Darcy flow in fractured porous media

    KAUST Repository

    Chen, Huangxin

    2016-09-21

    In this paper, we propose adaptive mixed finite element methods for simulating the single-phase Darcy flow in two-dimensional fractured porous media. The reduced model that we use for the simulation is a discrete fracture model coupling Darcy flows in the matrix and the fractures, and the fractures are modeled by one-dimensional entities. The Raviart-Thomas mixed finite element methods are utilized for the solution of the coupled Darcy flows in the matrix and the fractures. In order to improve the efficiency of the simulation, we use adaptive mixed finite element methods based on novel residual-based a posteriori error estimators. In addition, we develop an efficient upscaling algorithm to compute the effective permeability of the fractured porous media. Several interesting examples of Darcy flow in the fractured porous media are presented to demonstrate the robustness of the algorithm.

  4. Development and evaluation of a method of calibrating medical displays based on fixed adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Sund, Patrik, E-mail: patrik.sund@vgregion.se; Månsson, Lars Gunnar; Båth, Magnus [Department of Medical Physics and Biomedical Engineering, Sahlgrenska University Hospital, Gothenburg SE-41345, Sweden and Department of Radiation Physics, University of Gothenburg, Gothenburg SE-41345 (Sweden)

    2015-04-15

    Purpose: The purpose of this work was to develop and evaluate a new method for calibration of medical displays that includes the effect of fixed adaptation, using equipment and luminance levels typical of a modern radiology department. Methods: Low contrast sinusoidal test patterns were derived at nine luminance levels from 2 to 600 cd/m² and used in a two alternative forced choice observer study, where the adaptation level was fixed at the logarithmic average of 35 cd/m². The contrast sensitivity at each luminance level was derived by establishing a linear relationship between the ten pattern contrast levels used at every luminance level and a detectability index (d′) calculated from the fraction of correct responses. A Gaussian function was fitted to the data and normalized to the adaptation level. The corresponding equation was used in a display calibration method that included the grayscale standard display function (GSDF) but compensated for fixed adaptation. In the evaluation study, the contrast of circular objects with a fixed pixel contrast was displayed using both calibration methods and was rated on a five-grade scale. Results were calculated using a visual grading characteristics method. Error estimations in both observer studies were derived using a bootstrap method. Results: The contrast sensitivities for the darkest and brightest patterns compared to the contrast sensitivity at the adaptation luminance were 37% and 56%, respectively. The obtained Gaussian fit corresponded well with similar studies. The evaluation study showed a higher degree of equally distributed contrast throughout the luminance range with the calibration method compensated for fixed adaptation than for the GSDF. The two lowest scores for the GSDF were obtained for the darkest and brightest patterns. These scores were significantly lower than the lowest score obtained for the compensated GSDF. For the GSDF, the scores for all luminance levels were statistically

  5. Some Observations about the Nearest-Neighbor Model of the Error Threshold

    International Nuclear Information System (INIS)

    Gerrish, Philip J.

    2009-01-01

    I explore some aspects of the 'error threshold' - a critical mutation rate above which a population is nonviable. The phase transition that occurs as mutation rate crosses this threshold has been shown to be mathematically equivalent to the loss of ferromagnetism that occurs as temperature exceeds the Curie point. I will describe some refinements and new results based on the simplest of these mutation models, will discuss the commonly unperceived robustness of this simple model, and I will show some preliminary results comparing qualitative predictions with simulations of finite populations adapting at high mutation rates. I will talk about how these qualitative predictions are relevant to biomedical science and will discuss how my colleagues and I are looking for phase-transition signatures in real populations of Escherichia coli that go extinct as a result of excessive mutation.

  6. Effects of pulse duration on magnetostimulation thresholds

    Energy Technology Data Exchange (ETDEWEB)

    Saritas, Emine U., E-mail: saritas@ee.bilkent.edu.tr [Department of Bioengineering, University of California, Berkeley, Berkeley, California 94720-1762 (United States); Department of Electrical and Electronics Engineering, Bilkent University, Bilkent, Ankara 06800 (Turkey); National Magnetic Resonance Research Center (UMRAM), Bilkent University, Bilkent, Ankara 06800 (Turkey); Goodwill, Patrick W. [Department of Bioengineering, University of California, Berkeley, Berkeley, California 94720-1762 (United States); Conolly, Steven M. [Department of Bioengineering, University of California, Berkeley, Berkeley, California 94720-1762 (United States); Department of EECS, University of California, Berkeley, California 94720-1762 (United States)

    2015-06-15

    Purpose: Medical imaging techniques such as magnetic resonance imaging and magnetic particle imaging (MPI) utilize time-varying magnetic fields that are subject to magnetostimulation limits, which often limit the speed of the imaging process. Various human-subject experiments have studied the amplitude and frequency dependence of these thresholds for gradient or homogeneous magnetic fields. Another contributing factor was shown to be number of cycles in a magnetic pulse, where the thresholds decreased with longer pulses. The latter result was demonstrated on two subjects only, at a single frequency of 1.27 kHz. Hence, whether the observed effect was due to the number of cycles or due to the pulse duration was not specified. In addition, a gradient-type field was utilized; hence, whether the same phenomenon applies to homogeneous magnetic fields remained unknown. Here, the authors investigate the pulse duration dependence of magnetostimulation limits for a 20-fold range of frequencies using homogeneous magnetic fields, such as the ones used for the drive field in MPI. Methods: Magnetostimulation thresholds were measured in the arms of six healthy subjects (age: 27 ± 5 yr). Each experiment comprised testing the thresholds at eight different pulse durations between 2 and 125 ms at a single frequency, which took approximately 30–40 min/subject. A total of 34 experiments were performed at three different frequencies: 1.2, 5.7, and 25.5 kHz. A solenoid coil providing homogeneous magnetic field was used to induce stimulation, and the field amplitude was measured in real time. A pre-emphasis based pulse shaping method was employed to accurately control the pulse durations. Subjects reported stimulation via a mouse click whenever they felt a twitching/tingling sensation. A sigmoid function was fitted to the subject responses to find the threshold at a specific frequency and duration, and the whole procedure was repeated at all relevant frequencies and pulse durations

  7. Effects of pulse duration on magnetostimulation thresholds

    International Nuclear Information System (INIS)

    Saritas, Emine U.; Goodwill, Patrick W.; Conolly, Steven M.

    2015-01-01

    Purpose: Medical imaging techniques such as magnetic resonance imaging and magnetic particle imaging (MPI) utilize time-varying magnetic fields that are subject to magnetostimulation limits, which often limit the speed of the imaging process. Various human-subject experiments have studied the amplitude and frequency dependence of these thresholds for gradient or homogeneous magnetic fields. Another contributing factor was shown to be number of cycles in a magnetic pulse, where the thresholds decreased with longer pulses. The latter result was demonstrated on two subjects only, at a single frequency of 1.27 kHz. Hence, whether the observed effect was due to the number of cycles or due to the pulse duration was not specified. In addition, a gradient-type field was utilized; hence, whether the same phenomenon applies to homogeneous magnetic fields remained unknown. Here, the authors investigate the pulse duration dependence of magnetostimulation limits for a 20-fold range of frequencies using homogeneous magnetic fields, such as the ones used for the drive field in MPI. Methods: Magnetostimulation thresholds were measured in the arms of six healthy subjects (age: 27 ± 5 yr). Each experiment comprised testing the thresholds at eight different pulse durations between 2 and 125 ms at a single frequency, which took approximately 30–40 min/subject. A total of 34 experiments were performed at three different frequencies: 1.2, 5.7, and 25.5 kHz. A solenoid coil providing homogeneous magnetic field was used to induce stimulation, and the field amplitude was measured in real time. A pre-emphasis based pulse shaping method was employed to accurately control the pulse durations. Subjects reported stimulation via a mouse click whenever they felt a twitching/tingling sensation. A sigmoid function was fitted to the subject responses to find the threshold at a specific frequency and duration, and the whole procedure was repeated at all relevant frequencies and pulse durations
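
    The threshold extraction step described in the Methods, fitting a sigmoid to binary stimulation reports, can be sketched with SciPy as below; the parameterization and the example numbers are illustrative, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(b, b50, slope):
    """Probability of reporting stimulation at field amplitude b."""
    return 1.0 / (1.0 + np.exp(-(b - b50) / slope))

def fit_threshold(amplitudes, responses):
    """Fit the sigmoid and return the 50%-response amplitude (the threshold)."""
    p0 = [np.median(amplitudes), np.std(amplitudes)]   # rough starting guess
    (b50, slope), _ = curve_fit(psychometric, amplitudes, responses, p0=p0)
    return b50

# amplitudes: tested field strengths; responses: 0/1 "felt it" reports
b = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
r = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0])
print(fit_threshold(b, r))   # threshold estimate, same units as b
```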

  8. Microarchitectural adaptations in aging and osteoarthrotic subchondral bone tissues

    DEFF Research Database (Denmark)

    Ding, Ming

    2010-01-01

    These diseases are among the major health care problems in terms of socio-economic costs. The overall goals of the current series of studies were to investigate the age-related and osteoarthrosis (OA) related changes in the 3-D microarchitectural properties, mechanical properties, collagen and mineral quality … firstly, the age-related development of guinea pig OA; secondly, the potential effects of hyaluronan on OA subchondral bone tissues; and thirdly, the effects on OA progression of an increase in subchondral bone density by inhibition of bone remodeling with a bisphosphonate. These investigations aimed to obtain more insight into the age-related and OA-related subchondral bone adaptations. Microarchitectural adaptation in human aging cancellous bone: The precision of micro-CT measurement is excellent. Accurate 3-D micro-CT image datasets can be generated by applying an appropriate segmentation threshold. A fixed threshold may

  9. Image Segmentation using a Refined Comprehensive Learning Particle Swarm Optimizer for Maximum Tsallis Entropy Thresholding

    OpenAIRE

    L. Jubair Ahmed; A. Ebenezer Jeyakumar

    2013-01-01

    Thresholding is one of the most important techniques for performing image segmentation. In this paper, to compute optimum thresholds for the maximum Tsallis entropy thresholding (MTET) model, a new hybrid algorithm is proposed by integrating the Comprehensive Learning Particle Swarm Optimizer (CPSO) with the Powell's Conjugate Gradient (PCG) method. Here the CPSO acts as the main optimizer for searching the near-optimal thresholds, while the PCG method is used to fine-tune the best solution...
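
    For a single threshold the MTET objective can be evaluated exhaustively, which makes the role of the CPSO+PCG hybrid (needed for multilevel thresholding, where brute force explodes combinatorially) easier to see. Below is a sketch of the Tsallis criterion under the usual pseudo-additivity combination; the entropic index q is chosen arbitrarily and the image is assumed to be 8-bit grayscale.

```python
import numpy as np

def tsallis_threshold(image, q=0.8):
    """Exhaustive maximum Tsallis entropy thresholding (the objective a
    CPSO+PCG hybrid would optimize; brute force suffices for one threshold)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                        # gray-level probabilities
    best_t, best_s = 0, -np.inf
    for t in range(1, 255):
        pa, pb = p[:t].sum(), p[t:].sum()        # class probabilities
        if pa <= 0 or pb <= 0:
            continue
        sa = (1.0 - ((p[:t] / pa) ** q).sum()) / (q - 1.0)   # Tsallis entropy, class A
        sb = (1.0 - ((p[t:] / pb) ** q).sum()) / (q - 1.0)   # Tsallis entropy, class B
        s = sa + sb + (1.0 - q) * sa * sb        # pseudo-additivity combination
        if s > best_s:
            best_t, best_s = t, s
    return best_t
```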

  10. A novel perceptually adaptive image watermarking scheme by ...

    African Journals Online (AJOL)

    Threshold and modification values were selected adaptively for each image block, which improved robustness and transparency. The proposed algorithm was able to withstand a variety of attacks and image processing operations such as rotation, cropping, noise addition, resizing, and lossy compression. The experimental ...

  11. Development of parallel implementation of adaptive numerical methods with industrial applications in fluid mechanics

    International Nuclear Information System (INIS)

    Laucoin, E.

    2008-10-01

    Numerical resolution of partial differential equations can be made reliable and efficient through the use of adaptive numerical methods. We present here the work we have done for the design, the implementation and the validation of such a method within an industrial software platform with applications in thermohydraulics. From the geometric point of view, this method can deal both with mesh refinement and mesh coarsening, while ensuring the quality of the mesh cells. Numerically, we use the mortar elements formalism in order to extend the Finite Volumes-Elements method implemented in the Trio-U platform and to deal with the non-conforming meshes arising from the adaptation procedure. Finally, we present an implementation of this method using concepts from domain decomposition methods for ensuring its efficiency while running in a parallel execution context. (author)

  12. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    Science.gov (United States)

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low contrast images adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors, namely threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.
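
    The global transform itself is compact. Below is a hedged sketch in which SciPy's regularized incomplete Beta function stands in for the paper's transformation; the shape parameters a and b, which CS-PSO would tune against the fitness criterion, are passed in directly, and the normalization details are assumptions.

    ```python
    import numpy as np
    from scipy.special import betainc  # regularized incomplete Beta function I_x(a, b)

    def beta_transform(image, a, b):
        # Normalize gray levels to [0, 1], apply the monotone S-shaped Beta
        # mapping, and rescale back to 8 bits. a, b > 0 control the curve shape.
        x = (image - image.min()) / max(image.max() - image.min(), 1e-12)
        y = betainc(a, b, x)
        return (y * 255).astype(np.uint8)

    # usage (hypothetical): enhanced = beta_transform(gray.astype(float), a=2.0, b=3.0)
    ```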

  14. Wavelet and adaptive methods for time dependent problems and applications in aerosol dynamics

    Science.gov (United States)

    Guo, Qiang

    Time dependent partial differential equations (PDEs) are widely used as mathematical models of environmental problems. Aerosols are now clearly identified as an important factor in many environmental aspects of climate and radiative forcing processes, as well as in the health effects of air quality. The mathematical models for aerosol dynamics with respect to size distribution are nonlinear partial differential and integral equations, which describe processes of condensation, coagulation and deposition. Simulating the general aerosol dynamic equations in time, particle size and space presents serious difficulties, because the size dimension ranges from a few nanometers to several micrometers while the spatial dimension is usually described in kilometers. Therefore, it is an important and challenging task to develop efficient techniques for solving time dependent dynamic equations. In this thesis, we develop and analyze efficient wavelet and adaptive methods for the time dependent dynamic equations on particle size and further apply them to the spatial aerosol dynamic systems. A wavelet Galerkin method is proposed to solve the aerosol dynamic equations on time and particle size, since the aerosol distribution changes strongly along the size direction and the wavelet technique can resolve it very efficiently. Daubechies' wavelets are considered in the study because they possess useful properties like orthogonality, compact support, and exact representation of polynomials up to a certain degree. Another problem encountered in the solution of the aerosol dynamic equations results from the hyperbolic form due to the condensation growth term. We propose a new characteristic-based fully adaptive multiresolution numerical scheme for solving the aerosol dynamic equation, which combines the attractive advantages of the adaptive multiresolution technique and the method of characteristics. On the aspect of theoretical analysis, the global existence and uniqueness of

  15. Automated and Adaptable Quantification of Cellular Alignment from Microscopic Images for Tissue Engineering Applications

    Science.gov (United States)

    Xu, Feng; Beyazoglu, Turker; Hefner, Evan; Gurkan, Umut Atakan

    2011-01-01

    Cellular alignment plays a critical role in functional, physical, and biological characteristics of many tissue types, such as muscle, tendon, nerve, and cornea. Current efforts toward regeneration of these tissues include replicating the cellular microenvironment by developing biomaterials that facilitate cellular alignment. To assess the functional effectiveness of the engineered microenvironments, one essential criterion is quantification of cellular alignment. Therefore, there is a need for rapid, accurate, and adaptable methodologies to quantify cellular alignment for tissue engineering applications. To address this need, we developed an automated method, binarization-based extraction of alignment score (BEAS), to determine cell orientation distribution in a wide variety of microscopic images. This method combines a sequenced application of median and band-pass filters, locally adaptive thresholding approaches, and image processing techniques. The cellular alignment score is obtained by applying a robust scoring algorithm to the orientation distribution. We validated the BEAS method by comparing the results with existing approaches reported in the literature (i.e., manual, radial fast Fourier transform-radial sum, and gradient-based approaches). Validation results indicated that the BEAS method resulted in statistically comparable alignment scores with the manual method (coefficient of determination R2=0.92). Therefore, the BEAS method introduced in this study could enable accurate, convenient, and adaptable evaluation of engineered tissue constructs and biomaterials in terms of cellular alignment and organization. PMID:21370940
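
    A much-simplified orientation-distribution score can be sketched as follows. This is not the BEAS pipeline (the filtering stages and the robust scoring algorithm are omitted); it only illustrates the final step of condensing an orientation distribution into one number, using the circular mean resultant length as an assumed scoring rule.

    ```python
    import numpy as np

    def alignment_score(binary):
        # Local orientations from image gradients, taken at pixels with
        # non-negligible gradient magnitude (roughly, cell edges).
        gy, gx = np.gradient(binary.astype(float))
        mag = np.hypot(gx, gy)
        theta = np.arctan2(gy, gx)[mag > 0]
        if theta.size == 0:
            return 0.0
        # Orientations are axial (theta and theta + pi are the same direction),
        # so double the angles before computing the mean resultant length.
        return float(np.abs(np.mean(np.exp(2j * theta))))  # 1 = aligned, 0 = isotropic

    # usage (hypothetical): score = alignment_score(cell_image > threshold)
    ```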

  16. A method and apparatus for the manufacture of glass microspheres adapted to contain a thermonuclear fuel

    International Nuclear Information System (INIS)

    Budrick, R.G.; Nolen, R.L. Jr.; Solomon, D.E.; King, F.T.

    1975-01-01

    The invention relates to the manufacture of glass microspheres. It refers to a method according to which a sintered glass-powder, whose particles are calibrated, is introduced into a blow-pipe adapted to project said glass-powder particles into a heated flue, said sintered glass-powder containing a pore-forming agent adapted to expand the glass particles into microspheres which are collected in a chamber situated above said flue. The method can be applied to the manufacture of microspheres adapted to contain a thermonuclear fuel. [fr]

  17. The role of glacier changes and threshold definition in the characterisation of future streamflow droughts in glacierised catchments

    Directory of Open Access Journals (Sweden)

    M. Van Tiel

    2018-01-01

    Full Text Available Glaciers are essential hydrological reservoirs, storing and releasing water at various timescales. Short-term variability in glacier melt is one of the causes of streamflow droughts, here defined as deficiencies from the flow regime. Streamflow droughts in glacierised catchments have a wide range of interlinked causal factors related to precipitation and temperature on short and long timescales. Climate change affects glacier storage capacity, with resulting consequences for discharge regimes and streamflow drought. Future projections of streamflow drought in glacierised basins can, however, strongly depend on the modelling strategies and analysis approaches applied. Here, we examine the effect of different approaches, concerning the glacier modelling and the drought threshold, on the characterisation of streamflow droughts in glacierised catchments. Streamflow is simulated with the Hydrologiska Byråns Vattenbalansavdelning (HBV-light) model for two case study catchments, the Nigardsbreen catchment in Norway and the Wolverine catchment in Alaska, and two future climate change scenarios (RCP4.5 and RCP8.5). Two types of glacier modelling are applied, a constant and a dynamic glacier area conceptualisation. Streamflow droughts are identified with the variable threshold level method and their characteristics are compared between two periods, a historical (1975–2004) and a future (2071–2100) period. Two existing threshold approaches to define future droughts are employed: (1) the threshold from the historical period; (2) a transient threshold approach, whereby the threshold adapts every year in the future to the changing regimes. Results show that drought characteristics differ among the combinations of glacier area modelling and thresholds. The historical threshold combined with a dynamic glacier area projects extreme increases in drought severity in the future, caused by the regime shift due to a reduction in glacier area. The historical
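
    The variable threshold level method can be sketched with pandas: for each day of the year, the threshold is a low-flow quantile of the (smoothed) flow record, so the threshold follows the seasonal regime. The quantile, window length, and variable names are illustrative; the paper's exact smoothing choices may differ.

    ```python
    import pandas as pd

    def variable_threshold(flow, quantile=0.2, window=30):
        # flow: daily pd.Series with a DatetimeIndex.
        # Smooth, then take the flow quantile per calendar day over all years.
        smoothed = flow.rolling(window, center=True, min_periods=1).mean()
        return smoothed.groupby(smoothed.index.dayofyear).quantile(quantile)

    def drought_days(flow, thresholds):
        # Map each date to its day-of-year threshold and flag deficit days.
        daily_thr = pd.Series(flow.index.dayofyear, index=flow.index).map(thresholds)
        return flow < daily_thr

    # The paper's "historical threshold" applies variable_threshold(historical_flow)
    # to the future series, while the "transient threshold" is recomputed from the
    # changing future regime itself.
    ```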

  18. A method for online verification of adapted fields using an independent dose monitor

    International Nuclear Information System (INIS)

    Chang Jina; Norrlinger, Bernhard D.; Heaton, Robert K.; Jaffray, David A.; Cho, Young-Bin; Islam, Mohammad K.; Mahon, Robert

    2013-01-01

    Purpose: Clinical implementation of online adaptive radiotherapy requires generation of modified fields and a method of dosimetric verification in a short time. We present a method of treatment field modification to account for patient setup error, and an online method of verification using an independent monitoring system. Methods: The fields are modified by translating each multileaf collimator (MLC) defined aperture in the direction of the patient setup error, and magnifying to account for distance variation to the marked isocentre. A modified version of a previously reported online beam monitoring system, the integral quality monitoring (IQM) system, was investigated for validation of adapted fields. The system consists of a large area ion-chamber with a spatial gradient in electrode separation to provide a spatially sensitive signal for each beam segment, mounted below the MLC, and a calculation algorithm to predict the signal. IMRT plans of ten prostate patients have been modified in response to six randomly chosen setup errors in three orthogonal directions. Results: A total of approximately 49 beams for the modified fields were verified by the IQM system, of which 97% of measured IQM signals agree with the predicted value to within 2%. Conclusions: The modified IQM system was found to be suitable for online verification of adapted treatment fields.

  19. Vibration-Based Adaptive Novelty Detection Method for Monitoring Faults in a Kinematic Chain

    Directory of Open Access Journals (Sweden)

    Jesus Adolfo Cariño-Corrales

    2016-01-01

    Full Text Available This paper presents an adaptive novelty detection methodology applied to a kinematic chain for the monitoring of faults. The proposed approach is based on the premise that only information about the healthy operation of the machine is initially available and fault scenarios will eventually develop. This approach aims to cover some of the challenges presented when condition monitoring is applied under a continuous learning framework. The structure of the method is divided into two recursive stages: first, an offline stage for initialization and retraining of the feature reduction and novelty detection modules and, second, an online monitoring stage to continuously assess the condition of the machine. Contrary to classical static feature reduction approaches, the proposed method reformulates the features by employing first a Laplacian Score ranking and then the Fisher Score ranking for retraining. The proposed methodology is validated experimentally by monitoring the vibration measurements of a kinematic chain driven by an induction motor. Two faults are induced in the motor to validate the method performance to detect anomalies and adapt the feature reduction and novelty detection modules to the new information. The obtained results show the advantages of employing an adaptive approach for novelty detection and feature reduction, making the proposed method suitable for industrial machinery diagnosis applications.

  20. Biological Dosimetry Methods Employed at the Boris Kidric Institute of Nuclear Sciences; Application de Quelques Methodes Particulieres de Dosimetrie Biologique a l'Institut des Sciences Nucleaires Boris Kidric

    Energy Technology Data Exchange (ETDEWEB)

    Aleksic, B.; Veljkovic, D.; Djordjevic, O.; Djukic, Z. [Institut des Sciences Nucleaires Boris Kidric, Belgrade, Yugoslavia (Serbia)

    1971-06-15

    In addition to the more usual methods, the following methods are used at the Boris Kidric Institute of Nuclear Sciences in the medical supervision of occupationally exposed staff: analysis of binucleated lymphocytes and chromosome aberrations; physical examination (for example, determination of the pain sensitivity threshold, adaptation to pain, discrimination of sensitivity); capillaroscopy. These methods are described briefly and their practical application discussed. (author)

  1. Psychophysical thresholds of face visibility during infancy

    DEFF Research Database (Denmark)

    Gelskov, Sofie; Kouider, Sid

    2010-01-01

    The ability to detect and focus on faces is a fundamental prerequisite for developing social skills. But how well can infants detect faces? Here, we address this question by studying the minimum duration at which faces must appear to trigger a behavioral response in infants. We used a preferential looking method in conjunction with masking and brief presentations (300 ms and below) to establish the temporal thresholds of visibility at different stages of development. We found that 5 and 10 month-old infants have remarkably similar visibility thresholds, about three times higher than those of adults. By contrast, 15 month-olds not only revealed adult-like thresholds, but also improved their performance through memory-based strategies. Our results imply that the development of face visibility follows a non-linear course and is determined by a radical improvement occurring between 10 and 15 months.

  2. Grid - a fast threshold tracking procedure

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Dau, Torsten; MacDonald, Ewen

    2016-01-01

    A new procedure, called “grid”, is evaluated that allows rapid acquisition of threshold curves for psychophysics and, in particular, psychoacoustic experiments. In this method, the parameter-response space is sampled in two dimensions within a single run. This allows the procedure to focus more e...

  3. Standard test method for determining a threshold stress intensity factor for environment-assisted cracking of metallic materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2003-01-01

    1.1 This test method covers the determination of the environment-assisted cracking threshold stress intensity factor parameters, KIEAC and KEAC, for metallic materials from constant-force testing of fatigue precracked beam or compact fracture specimens and from constant-displacement testing of fatigue precracked bolt-load compact fracture specimens. 1.2 This test method is applicable to environment-assisted cracking in aqueous or other aggressive environments. 1.3 Materials that can be tested by this test method are not limited by thickness or by strength as long as specimens are of sufficient thickness and planar size to meet the size requirements of this test method. 1.4 A range of specimen sizes with proportional planar dimensions is provided, but size may be variable and adjusted for yield strength and applied force. Specimen thickness is a variable independent of planar size. 1.5 Specimen configurations other than those contained in this test method may be used, provided that well-established stress ...

  4. Chromatic discrimination: differential contributions from two adapting fields

    Science.gov (United States)

    Cao, Dingcai; Lu, Yolanda H.

    2012-01-01

    To test whether a retinal or cortical mechanism sums contributions from two adapting fields to chromatic discrimination, L/M discrimination was measured with a test annulus surrounded by an inner circular field and an outer rectangular field. A retinal summation mechanism predicted that the discrimination pattern would not change with a change in the fixation location. Therefore, the fixation was set either in the inner or the outer field in two experiments. When one of the adapting fields was “red” and the other was “green,” the adapting field where the observer fixated always had a stronger influence on chromatic discrimination. However, when one adapting field was “white” and the other was red or green, the white field was always weighted more heavily than the other adapting field in determining discrimination thresholds, whether the white field or the fixation was in the inner or outer adapting field. These results suggest that a cortical mechanism determines the relative contributions from different adapting fields. PMID:22330364

  5. Realistic Realizations Of Threshold Circuits

    Science.gov (United States)

    Razavi, Hassan M.

    1987-08-01

    Threshold logic, in which each input is weighted, has many theoretical advantages over the standard gate realization, such as reducing the number of gates, interconnections, and power dissipation. However, because of the difficult synthesis procedure and complicated circuit implementation, its use in the design of digital systems is almost nonexistent. In this study, three methods of NMOS realization are discussed, and their advantages and shortcomings are explored. Also, the possibility of using the methods to realize multi-valued logic is examined.

  6. An Automatic Multilevel Image Thresholding Using Relative Entropy and Meta-Heuristic Algorithms

    Directory of Open Access Journals (Sweden)

    Josue R. Cuevas

    2013-06-01

    Full Text Available Multilevel thresholding has long been considered one of the most popular techniques for image segmentation. Multilevel thresholding outputs a gray scale image in which more details from the original picture can be kept, while binary thresholding can only analyze the image in two colors, usually black and white. However, two major existing problems with the multilevel thresholding technique are: it is a time-consuming approach, i.e., finding appropriate threshold values could take an exceptionally long computation time; and defining a proper number of thresholds or levels that will keep most of the relevant details from the original image is a difficult task. In this study a new evaluation function based on the Kullback-Leibler information distance, also known as relative entropy, is proposed. A property of this new function helps determine the number of thresholds automatically. To offset the expensive computational effort of traditional exhaustive search methods, this study establishes a procedure that combines the relative entropy and meta-heuristics. From the experiments performed in this study, the proposed procedure not only provides good segmentation results when compared with a well known technique such as Otsu’s method, but also constitutes a very efficient approach.
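
    One way to read the relative-entropy evaluation is sketched below: quantize the image by collapsing each threshold class to its mean gray level, then measure the Kullback-Leibler distance between the original histogram and the quantized one. Adding thresholds lowers the distance, so the number of thresholds can be chosen where the gain flattens. This is an illustration of the idea under that assumed reading, not the paper's exact function.

    ```python
    import numpy as np

    def kl_after_thresholding(p, thresholds):
        # p: normalized 256-bin histogram; thresholds: sorted gray-level cut points.
        # q: histogram of the quantized image (each class collapsed to its mean level).
        edges = [0] + sorted(thresholds) + [len(p)]
        q = np.full_like(p, 1e-12)
        for lo, hi in zip(edges[:-1], edges[1:]):
            mass = p[lo:hi].sum()
            if mass > 0:
                mean_level = int(round((np.arange(lo, hi) * p[lo:hi]).sum() / mass))
                q[mean_level] += mass
        nz = p > 0
        return float((p[nz] * np.log(p[nz] / q[nz])).sum())  # D(p || q)

    # Scanning 1, 2, 3, ... thresholds (positions found by a meta-heuristic, as in
    # the paper) and stopping at the elbow of this distance selects the level count.
    ```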

  7. Landscape structure and the speed of adaptation

    International Nuclear Information System (INIS)

    Claudino, Elder S.; Campos, Paulo R.A.

    2014-01-01

    The role of fragmentation in the adaptive process is addressed. We investigate how landscape structure affects the speed of adaptation in a spatially structured population model. As models of fragmented landscapes, we simulate percolation maps and fractal landscapes. In the latter, the degree of spatial autocorrelation can be tuned. We verified that fragmentation can effectively affect the adaptive process. The examination of the fixation rates and speed of adaptation discloses the dichotomy exhibited by percolation maps and fractal landscapes. In the latter, there is a smooth change in the pace of the adaptation process: as the landscapes become more aggregated, higher fixation rates and speeds of adaptation are obtained. On the other hand, in random percolation the geometry of the percolating cluster matters. Thus, the scenario depends on whether the system is below or above the percolation threshold. - Highlights: • The role of fragmentation on the adaptive process is addressed. • Our approach makes the linkage between population genetics and landscape ecology. • Fragmentation affects gene flow and thus influences the speed of adaptation. • The level of clumping determines how the speed of adaptation is influenced.

  8. Adaptation of the TCLP and SW-846 methods to radioactive mixed waste

    International Nuclear Information System (INIS)

    Griest, W.H.; Schenley, R.L.; Caton, J.E.; Wolfe, P.F.

    1994-01-01

    Modifications of conventional sample preparation and analytical methods are necessary to provide radiation protection and to meet sensitivity requirements for regulated constituents when working with radioactive samples. Adaptations of regulatory methods for determining "total" Toxicity Characteristic Leaching Procedure (TCLP) volatile and semivolatile organics and pesticides, and for conducting aqueous leaching are presented.

  9. An improved adaptive wavelet shrinkage for ultrasound despeckling

    Indian Academy of Sciences (India)

    Edge Preservation Index (EPI). A comparison of the results shows that the proposed filter achieves an improvement in terms of quantitative measures and in terms of visual quality of the images. Keywords: wavelet; translation invariance; inter- and intra-scale dependency; speckle; adaptive thresholding; ultrasound images.
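
    For reference, a generic wavelet soft-thresholding despeckler looks roughly like the sketch below, which assumes the PyWavelets package and a VisuShrink-style universal threshold; the paper's filter additionally exploits translation invariance and inter/intra-scale dependencies, which are omitted here.

    ```python
    import numpy as np
    import pywt  # PyWavelets, assumed available

    def despeckle(img, wavelet="db4", level=2):
        # Log transform turns multiplicative speckle into (roughly) additive noise.
        log_img = np.log1p(img.astype(float))
        coeffs = pywt.wavedec2(log_img, wavelet, level=level)
        # Noise level from the finest diagonal subband (median absolute deviation).
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(log_img.size))  # universal threshold
        denoised = [coeffs[0]] + [
            tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
            for detail in coeffs[1:]
        ]
        return np.expm1(pywt.waverec2(denoised, wavelet))
    ```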

  10. Using Critical Thresholds to Customize Climate Projections of Extreme Events to User Needs and Support Decisions

    Science.gov (United States)

    Garfin, G. M.; Petersen, A.; Shafer, M.; MacClune, K.; Hayhoe, K.; Riley, R.; Nasser, E.; Kos, L.; Allan, C.; Stults, M.; LeRoy, S. R.

    2016-12-01

    Many communities in the United States are already vulnerable to extreme events; many of these vulnerabilities are likely to increase with climate change. In order to promote the development of effective community responses to climate change, we tested a participatory process for developing usable climate science, in which our project team worked with decision-makers to identify extreme event parameters and critical thresholds associated with policy development and adaptation actions. Our hypothesis is that conveying climate science and data through user-defined parameters and thresholds will help develop capacity to streamline the use of climate projections in developing strategies and actions, and motivate participation by a variety of preparedness planners. Our team collaborated with urban decision-makers, in departments that included resilience, planning, public works, public health, emergency management, and others, in four cities in the semi-arid south-central plains and intermountain areas of Colorado, New Mexico, Oklahoma, and Texas. Through an iterative process, we homed in on both simple and hybrid indicators for which we could develop credible city-specific projections, to stimulate discussion about adaptation actions; throughout the process, we communicated information about confidence and uncertainty, in order to develop a blend of historic and projected climate data, as appropriate, depending on levels of uncertainty. Our collaborations have resulted in (a) the identification of more than 50 unique indicators and thresholds across the four communities, (b) the development of adaptation action strategies in each community, and (c) the implementation of actions, ranging from a climate leadership training program for city staff members, to a rainwater capture project to improve responses to expected increases in both stormwater runoff and water capture for drought episodes.

  11. A projection-adapted cross entropy (PACE) method for transmission network planning

    Energy Technology Data Exchange (ETDEWEB)

    Eshragh, Ali; Filar, Jerzy [University of South Australia, School of Mathematics and Statistics, Mawson Lakes, SA (Australia); Nazar, Asef [University of South Australia, Institute for Sustainable Systems Technologies, School of Mathematics and Statistics, Mawson Lakes, SA (Australia)

    2011-06-15

    In this paper, we propose an adaptation of the cross entropy (CE) method called projection-adapted CE (PACE) to solve a transmission expansion problem that arises in the management of national and provincial electricity grids. The aim of the problem is to find an expansion policy that is both economical and operational from the technical perspective. Often, the transmission network expansion problem is mathematically formulated as a mixed integer nonlinear program that is very challenging algorithmically. The challenge originates from the fact that a global optimum should be found despite the presence of a possibly huge number of local optima. The PACE method shows promise in solving global optimization problems regardless of continuity or other assumptions. In our approach, we sample the integer variables using the CE mechanism, and solve LPs to obtain matching continuous variables. Numerical results, on selected test systems, demonstrate the potential of this approach. (orig.)

  12. An adaptive wavelet stochastic collocation method for irregular solutions of stochastic partial differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Webster, Clayton G [ORNL; Zhang, Guannan [ORNL; Gunzburger, Max D [ORNL

    2012-10-01

    Accurate predictive simulations of complex real world applications require numerical approximations that, first, oppose the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and thus optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.

  13. Pressure Systems Stored-Energy Threshold Risk Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Paulsen, Samuel S.

    2009-08-25

    Federal Regulation 10 CFR 851, which became effective February 2007, brought to light potential weaknesses regarding the Pressure Safety Program at the Pacific Northwest National Laboratory (PNNL). The definition of a pressure system in 10 CFR 851 does not contain a limit based upon pressure or any other criteria. Therefore, the need for a method to determine an appropriate risk-based hazard level for pressure safety was identified. The Laboratory has historically used a stored energy of 1000 lbf-ft to define a pressure hazard; however, an analytical basis for this value had not been documented. This document establishes the technical basis by evaluating the use of stored energy as an appropriate criterion to establish a pressure hazard, exploring a suitable risk threshold for pressure hazards, and reviewing the methods used to determine stored energy. The literature review and technical analysis conclude that the use of stored energy as a method for determining potential risk, the 1000 lbf-ft threshold, and the methods used by PNNL to calculate stored energy are all appropriate. Recommendations for further program improvements are also discussed.
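
    As an illustration of the screening idea, the stored energy of a compressed-gas volume can be estimated with the common isentropic-expansion approximation shown below. This is one textbook formula, not necessarily the one PNNL adopted; the function and all parameter values are assumptions.

    ```python
    def stored_energy_lbf_ft(pressure_psig, volume_ft3, gamma=1.4, p_atm=14.7):
        # Isentropic expansion energy of an ideal gas from absolute pressure p1
        # down to atmospheric pressure, in lbf-ft (144 converts psi to lbf/ft^2).
        p1 = pressure_psig + p_atm
        return (p1 * 144.0 * volume_ft3 / (gamma - 1.0)) * (
            1.0 - (p_atm / p1) ** ((gamma - 1.0) / gamma))

    # usage (hypothetical): flag a pressure hazard when 1000 lbf-ft is exceeded
    # is_hazard = stored_energy_lbf_ft(150.0, 0.5) > 1000.0
    ```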

  14. Thresholds in radiobiology

    International Nuclear Information System (INIS)

    Katz, R.; Hofmann, W.

    1982-01-01

    Interpretations of biological radiation effects frequently use the word 'threshold'. The meaning of this word is explored together with its relationship to the fundamental character of radiation effects and to the question of perception. It is emphasised that although the existence of either a dose or an LET threshold can never be settled by experimental radiobiological investigations, it may be argued on fundamental statistical grounds that for all statistical processes, and especially where the number of observed events is small, the concept of a threshold is logically invalid. (U.K.)

  15. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    Science.gov (United States)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.

  16. Quantification of organ motion based on an adaptive image-based scale invariant feature method

    Energy Technology Data Exchange (ETDEWEB)

    Paganelli, Chiara [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133 (Italy); Peroni, Marta [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Paul Scherrer Institut, Zentrum für Protonentherapie, WMSA/C15, CH-5232 Villigen PSI (Italy); Baroni, Guido; Riboldi, Marco [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, strada Campeggi 53, Pavia 27100 (Italy)

    2013-11-15

    Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast. Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained in adaptive-SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale vs exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets. Results: The phantom study confirmed the method invariance and robustness properties to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak-to-peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT

  17. An Optimal Control Modification to Model-Reference Adaptive Control for Fast Adaptation

    Science.gov (United States)

    Nguyen, Nhan T.; Krishnakumar, Kalmanje; Boskovic, Jovan

    2008-01-01

    This paper presents a method that can achieve fast adaptation for a class of model-reference adaptive control. It is well known that standard model-reference adaptive control exhibits high-gain control behaviors when a large adaptive gain is used to achieve fast adaptation in order to reduce tracking error rapidly. High-gain control creates high-frequency oscillations that can excite unmodeled dynamics and can lead to instability. The fast adaptation approach is based on the minimization of the squares of the tracking error, which is formulated as an optimal control problem. The necessary condition of optimality is used to derive an adaptive law using the gradient method. This adaptive law is shown to result in uniform boundedness of the tracking error by means of Lyapunov's direct method. Furthermore, this adaptive law allows a large adaptive gain to be used without causing undesired high-gain control effects. The method is shown to be more robust than standard model-reference adaptive control. Simulations demonstrate the effectiveness of the proposed method.

  18. Adaptive e-learning methods and IMS Learning Design. An integrated approach

    NARCIS (Netherlands)

    Burgos, Daniel; Specht, Marcus

    2006-01-01

    Please, cite this publication as: Burgos, D., & Specht, M. (2006). Adaptive e-learning methods and IMS Learning Design. In Kinshuk, R. Koper, P. Kommers, P. Kirschner, D. G. Sampson & W. Didderen (Eds.), Proceedings of the 6th IEEE International Conference on Advanced Learning Technologies (pp.

  19. Shrinkage-thresholding enhanced born iterative method for solving 2D inverse electromagnetic scattering problem

    KAUST Repository

    Desmal, Abdulla

    2014-07-01

    A numerical framework that incorporates recently developed iterative shrinkage thresholding (IST) algorithms within the Born iterative method (BIM) is proposed for solving the two-dimensional inverse electromagnetic scattering problem. IST algorithms minimize a cost function weighted between measurement-data misfit and a zeroth/first-norm penalty term and therefore promote "sharpness" in the solution. Consequently, when applied to domains with sharp variations, discontinuities, or sparse content, the proposed framework is more efficient and accurate than the "classical" BIM that minimizes a cost function with a second-norm penalty term. Indeed, numerical results demonstrate the superiority of the IST-BIM over the classical BIM when they are applied to sparse domains: Permittivity and conductivity profiles recovered using the IST-BIM are sharper and more accurate and converge faster. © 1963-2012 IEEE.
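
    The shrinkage-thresholding building block is plain ISTA. The sketch below solves the generic sparsity-regularized least-squares subproblem min_x 0.5*||Ax - y||^2 + lam*||x||_1; in the IST-BIM this kind of solve would stand in for the least-squares step inside each Born iteration. The matrix, step-size rule, and parameters are illustrative, not the authors' code.

    ```python
    import numpy as np

    def ista(A, y, lam, n_iter=200):
        # Step size 1/L, where L is the Lipschitz constant of the misfit gradient.
        L = np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = x - (A.T @ (A @ x - y)) / L                      # gradient step
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)  # soft threshold
        return x
    ```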

  20. Patched based methods for adaptive mesh refinement solutions of partial differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Saltzman, J.

    1997-09-02

    This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, which is "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes. These schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundary and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods and elliptically constrained flow solution methods with diffusion.

  1. Discriminating the precipitation phase based on different temperature thresholds in the Songhua River Basin, China

    Science.gov (United States)

    Zhong, Keyuan; Zheng, Fenli; Xu, Ximeng; Qin, Chao

    2018-06-01

    Different precipitation phases (rain, snow or sleet) differ greatly in their hydrological and erosional processes. Therefore, accurate discrimination of the precipitation phase is highly important when researching hydrologic processes and climate change at high latitudes and in mountainous regions. The objective of this study was to identify suitable temperature thresholds for discriminating the precipitation phase in the Songhua River Basin (SRB), based on 20 years of daily precipitation data collected from 60 meteorological stations located in and around the basin. Two methods, the air temperature method (AT method) and the wet bulb temperature method (WBT method), were used to discriminate the precipitation phase. Thirteen temperature thresholds were used to discriminate snowfall in the SRB: air temperatures from 0 to 5.5 °C at intervals of 0.5 °C, and the wet bulb temperature (WBT). Three evaluation indices, the error percentage of discriminated snowfall days (Ep), the relative error of discriminated snowfall (Re) and the determination coefficient (R2), were applied to assess the discrimination accuracy. The results showed that 2.5 °C was the optimum threshold temperature for discriminating snowfall at the scale of the entire basin. Due to differences in the landscape conditions at the different stations, the optimum threshold varied by station. The optimal threshold ranged from 1.5 to 4.0 °C, and 19 stations, 17 stations and 18 stations had optimal thresholds of 2.5 °C, 3.0 °C, and 3.5 °C respectively, together accounting for 90% of all stations. Compared with using a single temperature threshold to discriminate snowfall throughout the basin, it was more accurate to use the optimum threshold at each station to estimate snowfall in the basin. In addition, snowfall was underestimated when the temperature threshold was the WBT and when the temperature threshold was below 2.5 °C, whereas snowfall was overestimated when the temperature threshold exceeded 4
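
    Evaluating one candidate threshold against observed phases is straightforward; the sketch below computes an error percentage of discriminated snowfall days (Ep) and a relative error of discriminated snowfall (Re). The exact index definitions in the study may differ in detail, and all variable names are illustrative.

    ```python
    import numpy as np

    def evaluate_threshold(temp_c, precip_mm, is_snow_obs, thr):
        # Discriminate: a day is classified as snow when air temperature <= threshold.
        pred_snow = temp_c <= thr
        ep = 100.0 * np.mean(pred_snow != is_snow_obs)   # % of misclassified days
        obs_total = precip_mm[is_snow_obs].sum()
        re = (precip_mm[pred_snow].sum() - obs_total) / max(obs_total, 1e-12)
        return ep, re

    # usage (hypothetical): scan the thirteen candidates, e.g. the 0-5.5 degC grid
    # for thr in np.arange(0.0, 6.0, 0.5):
    #     print(thr, evaluate_threshold(t_air, precip, snow_flag, thr))
    ```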

  2. Ear surgery techniques results on hearing threshold improvement

    Directory of Open Access Journals (Sweden)

    Farhad Mokhtarinejad

    2013-01-01

    Full Text Available Background: Bone conduction (BC) threshold depression is not always a sign of sensorineural hearing loss; sometimes it is an artifact caused by middle ear pathologies and ossicular chain problems. In this research, the influence of ear surgeries on bone conduction was evaluated. Materials and Methods: This study was conducted as a clinical trial. Ear surgery was performed on 83 patients classified in four categories: stapedectomy, tympanomastoid surgery, and ossicular reconstruction, either partial or total: Partial Ossicular Replacement Prosthesis (PORP) and Total Ossicular Replacement Prosthesis (TORP). Bone conduction thresholds were assessed at frequencies of 250, 500, 1000, 2000 and 4000 Hz before and after surgery. Results: In the stapedectomy group, the average BC threshold improved by approximately 6 dB at 2000 Hz. In the tympanomastoid group, the BC threshold at 500, 1000 and 2000 Hz changed by 4 dB (P-value < 0.05). Moreover, in the PORP group, a 5 dB enhancement was seen at 1000 and 2000 Hz. In the TORP group, the results confirmed that the BC threshold improved at all frequencies, especially at 4000 Hz, by about 6.5 dB. Conclusion: According to the results of this study, a BC threshold shift was seen after several ear surgeries such as stapedectomy, tympanoplasty, PORP and TORP. The average BC improvement was approximately 5 dB. It must be considered that BC depression might occur because of ossicular chain problems; therefore, by resolving middle ear pathologies, a better BC threshold is obtained and fewer hearing problems are faced.

  3. [Relationship between Occlusal Discomfort Syndrome and Occlusal Threshold].

    Science.gov (United States)

    Munakata, Motohiro; Ono, Yumie; Hayama, Rika; Kataoka, Kanako; Ikuta, Ryuhei; Tamaki, Katsushi

    2016-03-01

    Occlusal dysesthesia has been defined as persistent uncomfortable feelings of intercuspal position continuing for more than 6 months without evidence of a physical occlusal discrepancy. The problem often occurs after occlusal intervention in dental care. Although various dental treatments (e.g., occlusal adjustment, orthodontic treatment and prosthetic reconstruction) are attempted to resolve occlusal dysesthesia, they rarely reach a satisfactory result, for either patients or dentists. In Japan, these symptoms are defined by the term "Occlusal discomfort syndrome" (ODS). The aim of this study was to investigate the characteristics of ODS with a simple occlusal sensory perceptive and discriminative test. Twenty-one female dental patients with ODS (mean age 55.8 ± 19.2 years) and 21 age- and gender-matched dental patients without ODS (mean age 53.1 ± 16.8 years) participated in the study. Upon grinding occlusal registration foils that were stacked to different thicknesses, participants reported the thicknesses at which they recognized the foils (recognition threshold) and felt discomfort (discomfort threshold). Although there was no significant difference in occlusal recognition thresholds between the two patient groups, the discomfort threshold was significantly smaller in the patients with ODS than in those without ODS. Moreover, the recognition threshold showed an age-dependent increase in patients without ODS, whereas it remained comparable between the younger and older patient subgroups with ODS. These results suggest that the occlusal discomfort threshold, rather than the recognition threshold, is the issue in ODS. The foil grinding procedure is a simple and useful method to evaluate occlusal perceptive and discriminative abilities in patients with ODS.

  4. Influence of porcelain firing and cementation on the marginal adaptation of metal-ceramic restorations prepared by different methods.

    Science.gov (United States)

    Kaleli, Necati; Saraç, Duygu

    2017-05-01

    Marginal adaptation plays an important role in the survival of metal-ceramic restorations. Porcelain firings and cementation may affect the adaptation of restorations. Moreover, conventional casting procedures and casting imperfections may cause deterioration in the marginal adaptation of metal-ceramic restorations. The purpose of this in vitro study was to compare the marginal adaptation after fabrication of the framework, porcelain application, and cementation of metal-ceramic restorations prepared by using the conventional lost-wax technique, milling, direct metal laser sintering (DMLS), and LaserCUSING, a direct process powder-bed system. Alterations in the marginal adaptation of the metal frameworks during the fabrication stages and the precision of the fabrication methods were evaluated. Forty-eight metal dies simulating prepared premolar and molar abutment teeth were fabricated to investigate marginal adaptation. They were divided into 4 groups (n=12) according to the fabrication method used (group C serving as the control group: lost-wax method; group M: milling method; group LS: DMLS method; group DP: direct process powder-bed method). Sixty marginal discrepancy measurements were recorded separately on each abutment tooth after fabrication of the framework, porcelain application, and cementation by using a stereomicroscope. Thereafter, each group was divided into 3 subgroups according to the measurements recorded in each fabrication stage: subgroup F (framework), subgroup P (porcelain application), and subgroup C (cementation). Data were statistically analyzed with univariate analysis of variance, followed by 1-way ANOVA and the Tamhane T2 test (α=.05). The lowest marginal discrepancy values were observed in restorations prepared by using the direct process powder-bed method, and the difference was significant (P<.05); the direct process powder-bed method is quite successful in terms of marginal adaptation. The marginal discrepancy increased after porcelain application

  5. Data-adaptive Robust Optimization Method for the Economic Dispatch of Active Distribution Networks

    DEFF Research Database (Denmark)

    Zhang, Yipu; Ai, Xiaomeng; Fang, Jiakun

    2018-01-01

    Due to the restricted mathematical description of the uncertainty set, current two-stage robust optimization is usually over-conservative, which has drawn concern from power system operators. This paper proposes a novel data-adaptive robust optimization method for the economic dispatch of active distribution networks with renewables. The scenario-generation method and two-stage robust optimization are combined in the proposed method. To reduce the conservativeness, a few extreme scenarios selected from the historical data are used to replace the conventional uncertainty set. The proposed extreme-scenario selection algorithm takes advantage of considering the correlations and can be adaptive to different historical data sets. A theoretical proof is given that the constraints will be satisfied under all the possible scenarios if they hold in the selected extreme scenarios, which...

  6. An Adaptive Multiobjective Particle Swarm Optimization Based on Multiple Adaptive Methods.

    Science.gov (United States)

    Han, Honggui; Lu, Wei; Qiao, Junfei

    2017-09-01

    Multiobjective particle swarm optimization (MOPSO) algorithms have attracted much attention for their promising performance in solving multiobjective optimization problems (MOPs). In this paper, an adaptive MOPSO (AMOPSO) algorithm, based on a hybrid framework of solution distribution entropy and population spacing (SP) information, is developed to improve the search performance in terms of convergence speed and precision. First, an adaptive global best (gBest) selection mechanism, based on the solution distribution entropy, is introduced to analyze the evolutionary tendency and balance the diversity and convergence of nondominated solutions in the archive. Second, an adaptive flight parameter adjustment mechanism, using the population SP information, is proposed to obtain a distribution of particles with suitable diversity and convergence, which can balance the global exploration and local exploitation abilities of the particles. Third, based on the gBest selection mechanism and the adaptive flight parameter mechanism, the proposed AMOPSO algorithm not only has high accuracy but also attains a set of optimal solutions with better diversity. Finally, the performance of the proposed AMOPSO algorithm is validated and compared with five other state-of-the-art algorithms on a number of benchmark problems and a water distribution system. The experimental results validate the effectiveness of the proposed AMOPSO algorithm and demonstrate that AMOPSO outperforms other MOPSO algorithms in solving MOPs.

  7. Investigation of the Adaptability of Transient Stability Assessment Methods to Real-Time Operation

    DEFF Research Database (Denmark)

    Weckesser, Johannes Tilman Gabriel; Jóhannsson, Hjörtur; Sommer, Stefan

    2012-01-01

    In this paper, an investigation of the adaptability of available transient stability assessment methods to real-time operation, and of their real-time performance, is carried out. Two approaches, based on Lyapunov’s method and the equal area criterion, are analyzed. The results allow one to determine

  8. An adaptive reentry guidance method considering the influence of blackout zone

    Science.gov (United States)

    Wu, Yu; Yao, Jianyao; Qu, Xiangju

    2018-01-01

    Reentry guidance has been researched as a popular topic because it is critical for a successful flight. Given that existing guidance methods do not take into account the accumulated navigation error of the Inertial Navigation System (INS) in the blackout zone, in this paper an adaptive reentry guidance method is proposed to obtain the optimal reentry trajectory quickly with the objective of minimizing the aerodynamic heating rate. The terminal error in position and attitude can also be reduced with the proposed method. In this method, the whole reentry guidance task is divided into two phases, i.e., the trajectory updating phase and the trajectory planning phase. In the first phase, the idea of model predictive control (MPC) is used, and the receding optimization procedure ensures the optimal trajectory over the next few seconds. In the trajectory planning phase, after the vehicle has flown out of the blackout zone, the optimal reentry trajectory is obtained by online planning to adapt to the navigation information. An effective swarm intelligence algorithm, the pigeon-inspired optimization (PIO) algorithm, is applied to obtain the optimal reentry trajectory in both phases. Compared to the trajectory updating method, the proposed method can reduce the terminal error by about 30% considering both position and attitude; in particular, the terminal error in height has almost been eliminated. Besides, the PIO algorithm performs better than the particle swarm optimization (PSO) algorithm in both the trajectory updating and trajectory planning phases.

  9. Shifts in the relationship between motor unit recruitment thresholds versus derecruitment thresholds during fatigue.

    Science.gov (United States)

    Stock, Matt S; Mota, Jacob A

    2017-12-01

    Muscle fatigue is associated with diminished twitch force amplitude. We examined changes in the motor unit recruitment versus derecruitment threshold relationship during fatigue. Nine men (mean age = 26 years) performed repeated isometric contractions at 50% maximal voluntary contraction (MVC) knee extensor force until exhaustion. Surface electromyographic signals were detected from the vastus lateralis, and were decomposed into their constituent motor unit action potential trains. Motor unit recruitment and derecruitment thresholds and firing rates at recruitment and derecruitment were evaluated at the beginning, middle, and end of the protocol. On average, 15 motor units were studied per contraction. For the initial contraction, three subjects showed greater recruitment thresholds than derecruitment thresholds for all motor units. Five subjects showed greater recruitment thresholds than derecruitment thresholds for only low-threshold motor units at the beginning, with a mean cross-over of 31.6% MVC. As the muscle fatigued, many motor units were derecruited at progressively higher forces. In turn, decreased slopes and increased y-intercepts were observed. These shifts were complemented by increased firing rates at derecruitment relative to recruitment. As the vastus lateralis fatigued, the central nervous system's compensatory adjustments resulted in a shift of the regression line of the recruitment versus derecruitment threshold relationship. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  10. Adaptive EWMA Method Based on Abnormal Network Traffic for LDoS Attacks

    Directory of Open Access Journals (Sweden)

    Dan Tang

    2014-01-01

    Full Text Available Low-rate denial of service (LDoS) attacks reduce network service capability by periodically sending high intensity pulse data flows. Because of their concealed behavior, it is more difficult for traditional DoS detection methods to detect LDoS attacks; at the same time, the accuracy of current detection methods for LDoS attacks is relatively low. Since LDoS attacks lead to an abnormal distribution of ACK traffic, LDoS attacks can be detected by analyzing the distribution characteristics of ACK traffic. The traditional EWMA algorithm, which smooths the accidental error but does the same to the exceptional mutation, may cause misjudgment; therefore a new LDoS detection method based on an adaptive EWMA (AEWMA) algorithm is proposed. The AEWMA algorithm, which uses an adaptive weighting function instead of the constant weighting of the EWMA algorithm, can smooth the accidental error while retaining the exceptional mutation. Thus the AEWMA method is better suited than the EWMA method for analyzing and measuring the abnormal distribution of ACK traffic. NS2 simulations show that the AEWMA method can detect LDoS attacks effectively and has low false negative and false positive rates. Based on the DARPA99 datasets, experimental results show that the AEWMA method is more efficient than the EWMA method.
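
    The abstract does not give the weighting function, but a standard choice for an adaptive EWMA is a Huber-type score: small prediction errors are smoothed as in ordinary EWMA, while large errors (candidate attack-induced mutations) pass through almost unattenuated. The sketch below uses that assumption; the parameters and the alarm rule are illustrative.

    ```python
    import numpy as np

    def aewma(x, lam=0.1, k=3.0):
        # x: time series of an ACK-traffic statistic; returns the smoothed series.
        z = np.empty(len(x))
        z_prev = float(x[0])
        for i, xi in enumerate(x):
            e = xi - z_prev                             # one-step prediction error
            if abs(e) <= k:
                phi = lam * e                           # ordinary EWMA smoothing
            else:
                phi = e - (1.0 - lam) * k * np.sign(e)  # retain most of the mutation
            z_prev += phi
            z[i] = z_prev
        return z

    # An alarm can then be raised when the smoothed statistic leaves a control band
    # around its baseline, signalling the periodic LDoS pulses.
    ```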

  11. Computer simulation of threshold radiation damage in rutile, TiO2

    International Nuclear Information System (INIS)

    Richardson, D.D.

    1983-01-01

    Computer simulation methods have been used to study threshold radiation damage structures in rutile. It was found that Ti ions have threshold energies much larger than those of O ions. Basal plane displacements for oxygen were shown to be complex, and focuson behaviour was found only at energies several times the threshold energy. Oxygen ions do not form simple interstitials or vacancies; rather, a three-ion crowdion and a divacancy-interstitial combination were found, respectively. Threshold energies were found to be highly dependent on crystallographic direction, being as low as 10 eV in one instance, but often much higher. Oxygen ions were seen to defocus along the c-axis. (author)

  12. Salicylate-induced changes in auditory thresholds of adolescent and adult rats.

    Science.gov (United States)

    Brennan, J F; Brown, C A; Jastreboff, P J

    1996-01-01

    Shifts in auditory intensity thresholds after salicylate administration were examined in postweanling and adult pigmented rats at frequencies ranging from 1 to 35 kHz. A total of 132 subjects from both age levels were tested under two-way active avoidance or one-way active avoidance paradigms. Estimated thresholds were inferred from behavioral responses to presentations of descending and ascending series of intensities for each test frequency value. Reliable threshold estimates were found under both avoidance conditioning methods, and, compared to controls, subjects at both age levels showed threshold shifts at selected higher frequency values after salicylate injection; the extent of the shifts was related to salicylate dose level.

  13. Differential equation models for sharp threshold dynamics.

    Science.gov (United States)

    Schramm, Harrison C; Dimitrov, Nedialko B

    2014-01-01

    We develop an extension to differential equation models of dynamical systems to allow us to analyze probabilistic threshold dynamics that fundamentally and globally change system behavior. We apply our novel modeling approach to two cases of interest: a model of infectious disease modified for malware where a detection event drastically changes dynamics by introducing a new class in competition with the original infection; and the Lanchester model of armed conflict, where the loss of a key capability drastically changes the effectiveness of one of the sides. We derive and demonstrate a step-by-step, repeatable method for applying our novel modeling approach to an arbitrary system, and we compare the resulting differential equations to simulations of the system's random progression. Our work leads to a simple and easily implemented method for analyzing probabilistic threshold dynamics using differential equations. Published by Elsevier Inc.
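
    A minimal sketch of the idea under assumed dynamics: integrate one vector field until a threshold event fires, then switch permanently to another. Forward Euler keeps the sketch short, the paper treats the trigger probabilistically, and the Lanchester coefficients and the 40% capability-loss trigger below are hypothetical numbers.

```python
import numpy as np

def simulate_switched(f_pre, f_post, x0, t_end, trigger, dt=1e-3):
    """Integrate dx/dt = f_pre(x) until trigger(x) first returns True,
    then switch permanently to f_post(x)."""
    x, t, switched = np.asarray(x0, dtype=float), 0.0, False
    traj = [(t, x.copy())]
    while t < t_end:
        if not switched and trigger(x):
            switched = True
        f = f_post if switched else f_pre
        x = x + dt * np.asarray(f(x), dtype=float)
        t += dt
        traj.append((t, x.copy()))
    return traj

# Lanchester-style example with hypothetical coefficients: side y loses a
# key capability once its strength drops below 40% of its initial value,
# after which it attrits three times faster.
pre  = lambda x: [-0.8 * x[1], -0.6 * x[0]]
post = lambda x: [-0.8 * x[1], -1.8 * x[0]]
trajectory = simulate_switched(pre, post, [1.0, 1.0], 5.0, lambda x: x[1] < 0.4)
```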

  14. The Impact of Dynamic RTS Threshold Adjustment for IEEE 802.11 MAC Protocol

    Directory of Open Access Journals (Sweden)

    Mostafa Mjidi

    2009-01-01

    Full Text Available In recent years, wireless technologies and applications have received great attention. The Medium Access Control (MAC) protocol is the main element that determines the efficiency of sharing the limited communication bandwidth of the wireless channel in wireless local area networks (WLANs). IEEE 802.11 introduced the optional RTS/CTS handshaking mechanism to address the hidden terminal problem as well as to reduce the chance of collision in cases of higher node density and traffic. The RTS Threshold (RT) determines when the RTS/CTS mechanism should be used and has proved to be an important parameter for performance characteristics in data transmission. We first investigate a meaningful threshold value according to the network situation, determine the impact of using or disengaging the optional RTS/CTS mechanism, and dynamically adjust the RTS Threshold to maximize data transmission. The results show a significant improvement over existing CSMA/CA and RTS/CTS schemes. Our adaptive scheme performs even better as the data rate increases. We verify our proposed scheme both analytically and with extensive network simulation using ns-2.

  15. Free testosterone as marker of adaptation to medium-intensive exercise.

    Science.gov (United States)

    Shkurnikov, M U; Donnikov, A E; Akimov, E B; Sakharov, D A; Tonevitsky, A G

    2008-09-01

    A 4-week study of the adaptation reserves of the body was carried out during medium-intensity exercise (medium-intensity training: 60-80% of the anaerobic metabolism threshold). Two groups of athletes were singled out by the results of pulsometry analysis: one with less than 20% of work duration above the 80% anaerobic metabolism threshold level and one with more than 20%. No appreciable differences between the concentrations of total testosterone, growth hormone, and cortisol before and after exercise were detected between the groups with different percentages of anaerobic work duration. In group 1, the concentration of free testosterone did not change throughout the period of observation in comparison with the levels before training. In group 2, the level of free testosterone increased in comparison with the basal level: from 0.61+/-0.12 nmol/liter at the end of week 1 to 0.98+/-0.11 nmol/liter at the end of week 4 (p<0.01). The results indicate that the level of free testosterone can be used for evaluating the degree of an athlete's adaptation to medium-intensity exercise.

  16. A Framework for Optimizing Phytosanitary Thresholds in Seed Systems.

    Science.gov (United States)

    Choudhury, Robin Alan; Garrett, Karen A; Klosterman, Steven J; Subbarao, Krishna V; McRoberts, Neil

    2017-10-01

    Seedborne pathogens and pests limit production in many agricultural systems. Quarantine programs help prevent the introduction of exotic pathogens into a country, but few regulations directly apply to reducing the reintroduction and spread of endemic pathogens. Use of phytosanitary thresholds helps limit the movement of pathogen inoculum through seed, but the costs associated with rejected seed lots can be prohibitive for voluntary implementation of phytosanitary thresholds. In this paper, we outline a framework to optimize thresholds for seedborne pathogens, balancing the cost of rejected seed lots and benefit of reduced inoculum levels. The method requires relatively small amounts of data, and the accuracy and robustness of the analysis improves over time as data accumulate from seed testing. We demonstrate the method first and illustrate it with a case study of seedborne oospores of Peronospora effusa, the causal agent of spinach downy mildew. A seed lot threshold of 0.23 oospores per seed could reduce the overall number of oospores entering the production system by 90% while removing 8% of seed lots destined for distribution. Alternative mitigation strategies may result in lower economic losses to seed producers, but have uncertain efficacy. We discuss future challenges and prospects for implementing this approach.
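
    The core bookkeeping in such a framework is the tradeoff between the fraction of lots rejected and the fraction of inoculum removed at a candidate threshold. The sketch below computes that pair for synthetic data; the lognormal contamination model and all numbers are hypothetical, and the paper's economic optimization layer is not shown.

```python
import numpy as np

def threshold_tradeoff(oospores_per_seed, seeds_per_lot, threshold):
    """For a candidate per-seed threshold, return the fraction of seed
    lots rejected and the fraction of total inoculum removed."""
    o = np.asarray(oospores_per_seed, dtype=float)
    n = np.asarray(seeds_per_lot, dtype=float)
    rejected = o > threshold
    lots_rejected = rejected.mean()
    inoculum_removed = (o * n)[rejected].sum() / (o * n).sum()
    return lots_rejected, inoculum_removed

# Hypothetical lot data: heavy-tailed contamination across 500 lots.
rng = np.random.default_rng(0)
contamination = rng.lognormal(mean=-3.0, sigma=1.5, size=500)
lot_sizes = rng.integers(100_000, 1_000_000, size=500)
print(threshold_tradeoff(contamination, lot_sizes, threshold=0.23))
```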

  17. The distribution choice for the threshold of solid state relay

    International Nuclear Information System (INIS)

    Sun Beiyun; Zhou Hui; Cheng Xiangyue; Mao Congguang

    2009-01-01

    Either the normal distribution or the Weibull distribution can be accepted as the sample distribution of the threshold of a solid state relay. Using the goodness-of-fit, bootstrap, and Bayesian methods, the Weibull distribution is ultimately chosen. (authors)

  18. Planetary gearbox fault feature enhancement based on combined adaptive filter method

    Directory of Open Access Journals (Sweden)

    Shuangshu Tian

    2015-12-01

    Full Text Available The reliability of vibration signals acquired from a planetary gear system (the indispensable part of a wind turbine gearbox) is directly related to the accuracy of fault diagnosis. The complex operating environment introduces many interference signals into the vibration signals. Furthermore, both the multiple gears meshing with each other and the differences in transmission routes produce strong nonlinearity in the vibration signals, which makes the noise difficult to eliminate. This article presents a combined adaptive filter method: taking a delayed copy of the signal as the reference signal, the self-adaptive noise cancellation method is adopted to eliminate the white noise. Meanwhile, by applying a Gaussian function to transform the input signal into a high-dimensional feature-space signal, the kernel least mean square algorithm is used to cancel the nonlinear interference. The effectiveness of the method has been verified on simulation signals and test rig signals. On the simulation signal, the signal-to-noise ratio is improved by around 30 dB (white noise) and the amplitude of the nonlinear interference signal is depressed by up to 50%. Experimental results show remarkable improvements and enhanced gear fault features.
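
    A minimal sketch of the linear stage only: self-adaptive noise cancellation with a delayed reference and an LMS update. The kernel least mean square stage for nonlinear interference is omitted, and the delay, tap count, and step size are assumed values.

```python
import numpy as np

def sanc_lms(x, delay=50, taps=32, mu=1e-3):
    """Self-adaptive noise cancellation with a delayed reference: the LMS
    filter can only predict components that stay correlated across the
    delay (the periodic gear-mesh part), so y is the enhanced periodic
    signal and the residual e carries the uncorrelated noise."""
    x = np.asarray(x, dtype=float)
    w = np.zeros(taps)
    y = np.zeros_like(x)
    e = np.zeros_like(x)
    for n in range(delay + taps, len(x)):
        u = x[n - delay - taps:n - delay][::-1]   # delayed tap vector
        y[n] = w @ u
        e[n] = x[n] - y[n]
        w += 2.0 * mu * e[n] * u                  # LMS weight update
    return y, e
```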

  19. Modeling, Simulation, and Analysis of Novel Threshold Voltage Definition for Nano-MOSFET

    Directory of Open Access Journals (Sweden)

    Yashu Swami

    2017-01-01

    Full Text Available Threshold voltage (VTH is an indispensable parameter in MOSFET designing, modeling, and operation. Diverse definitions and extraction methods exist to model the on-off transition characteristics of the device. The governing criteria for an efficient threshold voltage definition and extraction method can be itemized as clarity, simplicity, precision, and stability throughout the operating conditions and technology nodes. The outcomes of extraction methods diverge from the exact values due to various short-channel effects (SCEs and nonidealities present in the device. A new approach to define and extract the real value of VTH of a MOSFET is proposed in the manuscript. The resulting novel, SCE-independent VTH extraction method, named the “hybrid extrapolation VTH extraction method” (HEEM, is elaborated, modeled, and compared with several prevalent MOSFET threshold voltage extraction methods to validate the results. All results are verified by extensive 2D TCAD simulation and confirmed analytically at various technology nodes.

  20. Threshold factorization redux

    Science.gov (United States)

    Chay, Junegone; Kim, Chul

    2018-05-01

    We reanalyze the factorization theorems for the Drell-Yan process and for deep inelastic scattering near threshold, as constructed in the framework of the soft-collinear effective theory (SCET), from a new, consistent perspective. In order to formulate the factorization near threshold in SCET, we should include an additional degree of freedom with small energy, collinear to the beam direction. The corresponding collinear-soft mode is included to describe the parton distribution function (PDF) near threshold. The soft function is modified by subtracting the contribution of the collinear-soft modes in order to avoid double counting on the overlap region. As a result, the proper soft function becomes infrared finite, and all the factorized parts are free of rapidity divergence. Furthermore, the separation of the relevant scales in each factorized part becomes manifest. We apply the same idea to the dihadron production in e+e- annihilation near threshold, and show that the resultant soft function is also free of infrared and rapidity divergences.

  1. Influence of one- or two-stage methods for polymerizing complete dentures on adaptation and teeth movements

    Directory of Open Access Journals (Sweden)

    Moises NOGUEIRA

    Full Text Available Introduction: The quality of complete dentures might be influenced by the fabrication method. Objective: To evaluate the influence of two different methods of processing muco-supported complete dentures on their adaptation and teeth movements. Material and method: Denture fabrication was assigned to two groups (n=10) for upper and lower arches according to the polymerization method: (1) conventional one-stage, in which a wax trial base was made, teeth were arranged, and the denture was polymerized; (2) two-stage method, in which the base was waxed and polymerized first; with the denture base polymerized, the teeth were arranged and the final polymerization was then performed. Teeth movements were evaluated as the distances between incisors (I-I), premolars (P-P), molars (M-M), left incisor to left molar (LI-LM), and right incisor to right molar (RI-RM). For the adaptation analysis, dentures were cut at three different positions: (A) the distal face of the canines, (B) the mesial face of the first molars, and (C) the distal face of the second molars. Result: Denture bases showed significantly better adaptation when polymerized in the one-stage procedure for both the upper (p=0.000) and the lower (p=0.000) arches, with region A presenting significantly better adaptation than region C. In the upper arch, a significant reduction in the I-I distance was observed with the one-stage technique, while the two-stage technique promoted a significant reduction in the RI-RM distance. In the lower arch, the one-stage technique promoted a significant reduction in the RI-RM distance and the two-stage technique in the LI-LM distance. Conclusion: The conventional one-stage method presented the better results for denture adaptation. Both fabrication methods presented some alteration in teeth movements.

  2. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    Science.gov (United States)

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
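
    BARISTA itself builds majorizing matrices from the coil sensitivities; as a simplified stand-in, the sketch below shows the generic ingredients named in the abstract (iterative soft thresholding, momentum, and adaptive restart) using a single Lipschitz constant and a real-valued forward matrix. Function and parameter names are ours.

```python
import numpy as np

def ista_momentum_restart(A, b, lam, n_iter=200):
    """Iterative soft thresholding with Nesterov momentum and
    function-value adaptive restart. A single Lipschitz constant stands
    in for BARISTA's B1-based diagonal majorizer."""
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t, f_old = x.copy(), 1.0, np.inf
    for _ in range(n_iter):
        x_new = soft(z - A.T @ (A @ z - b) / L, lam / L)
        f = 0.5 * np.sum((A @ x_new - b) ** 2) + lam * np.abs(x_new).sum()
        if f > f_old:                      # cost went up: kill the momentum
            z, t = x.copy(), 1.0
            continue
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t, f_old = x_new, t_new, f
    return x
```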

  3. Denoising of Mechanical Vibration Signals Using Quantum-Inspired Adaptive Wavelet Shrinkage

    Directory of Open Access Journals (Sweden)

    Yan-long Chen

    2014-01-01

    Full Text Available The potential application of a quantum-inspired adaptive wavelet shrinkage (QAWS technique to mechanical vibration signals with a focus on noise reduction is studied in this paper. This quantum-inspired shrinkage algorithm combines three elements: an adaptive non-Gaussian statistical model of dual-tree complex wavelet transform (DTCWT coefficients proposed to improve practicability of prior information, the quantum superposition introduced to describe the interscale dependencies of DTCWT coefficients, and the quantum-inspired probability of noise defined to shrink wavelet coefficients in a Bayesian framework. By combining all these elements, this signal processing scheme incorporating the DTCWT with quantum theory can both reduce noise and preserve signal details. A practical vibration signal measured from a power-shift steering transmission is utilized to evaluate the denoising ability of QAWS. Application results demonstrate the effectiveness of the proposed method. Moreover, it achieves better performance than hard and soft thresholding.

  4. An adaptive segment method for smoothing lidar signal based on noise estimation

    Science.gov (United States)

    Wang, Yuzhao; Luo, Pingping

    2014-10-01

    An adaptive segmentation smoothing method (ASSM) is introduced in this paper to smooth the signal and suppress the noise. In the ASSM, the noise level is defined as 3σ of the background signal. An integer N is defined for finding the changing positions in the signal curve: if the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows are fixed; the ASSM instead derives changing end points from each signal, so the smoothing windows are set adaptively. The windows are always set to half of the segment length, and average smoothing is then applied within each segment. An iterative process is required to reduce the end-point aberration effect of average smoothing; two or three iterations are enough. In the ASSM, the signals are smoothed in the spatial domain rather than the frequency domain, so frequency-domain disturbances are avoided. A lidar echo was simulated in the experimental work. The echo was assumed to be produced by a space-borne lidar (e.g., CALIOP), and white Gaussian noise was added to represent the random noise from the environment and the detector. The novel method, ASSM, was applied to the noisy echo to filter the noise. In the test, N was set to 3 and the iteration count to two. The results show that the signal can be smoothed adaptively by the ASSM, but N and the iteration count might need to be optimized when the ASSM is applied to a different lidar.
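
    A sketch of the procedure as described: 3Nσ breakpoint detection, per-segment moving-average smoothing with a half-segment window, iterated two or three times. The convolution edge handling is an implementation assumption.

```python
import numpy as np

def assm(signal, background, N=3, n_iter=2):
    """Adaptive segment smoothing sketch: breakpoints are placed where the
    jump between adjacent samples exceeds 3*N*sigma (sigma from a
    background region); each segment is moving-average smoothed with a
    window of half its length, iterated to reduce end-point aberration."""
    signal = np.asarray(signal, dtype=float)
    sigma = np.std(background)
    jumps = np.abs(np.diff(signal))
    ends = [0] + list(np.where(jumps > 3 * N * sigma)[0] + 1) + [len(signal)]
    out = signal.copy()
    for a, b in zip(ends[:-1], ends[1:]):
        seg = out[a:b]
        win = max(1, (b - a) // 2)
        kernel = np.ones(win) / win
        for _ in range(n_iter):
            seg = np.convolve(seg, kernel, mode="same")  # edge handling assumed
        out[a:b] = seg
    return out
```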

  5. Adaptive Breast Radiation Therapy Using Modeling of Tissue Mechanics: A Breast Tissue Segmentation Study

    International Nuclear Information System (INIS)

    Juneja, Prabhjot; Harris, Emma J.; Kirby, Anna M.; Evans, Philip M.

    2012-01-01

    Purpose: To validate and compare the accuracy of breast tissue segmentation methods applied to computed tomography (CT) scans used for radiation therapy planning and to study the effect of tissue distribution on the segmentation accuracy for the purpose of developing models for use in adaptive breast radiation therapy. Methods and Materials: Twenty-four patients receiving postlumpectomy radiation therapy for breast cancer underwent CT imaging in prone and supine positions. The whole-breast clinical target volume was outlined. Clinical target volumes were segmented into fibroglandular and fatty tissue using the following algorithms: physical density thresholding; interactive thresholding; fuzzy c-means with 3 classes (FCM3) and 4 classes (FCM4); and k-means. The segmentation algorithms were evaluated in 2 stages: first, an approach based on the assumption that the breast composition should be the same in both prone and supine positions; and second, comparison of segmentation with tissue outlines from 3 experts using the Dice similarity coefficient (DSC). Breast datasets were grouped into nonsparse and sparse fibroglandular tissue distributions according to expert assessment and used to assess the accuracy of the segmentation methods and the agreement between experts. Results: Prone and supine breast composition analysis showed differences between the methods. Validation against expert outlines found significant differences (P<.001) between FCM3 and FCM4. Fuzzy c-means with 3 classes generated segmentation results (mean DSC = 0.70) closest to the experts' outlines. There was good agreement (mean DSC = 0.85) among experts for breast tissue outlining. Segmentation accuracy and expert agreement were significantly higher (P<.005) in the nonsparse group than in the sparse group. Conclusions: The FCM3 gave the most accurate segmentation of breast tissues on CT data and could therefore be used in adaptive radiation therapy based on tissue modeling. Breast tissue segmentation

  7. Barrier heights of plutonium isotopes from (n,n'f)-thresholds

    International Nuclear Information System (INIS)

    Knitter, H.-H.; Budtz-Joergensen, C.

    1983-01-01

    The neutron-induced second chance fission cross sections for the isotopes 238Pu, 239Pu, 240Pu, 241Pu, 242Pu and 244Pu are studied in the region of the threshold using a simple model. Numerical values are obtained for the inner fission barrier heights of the mentioned isotopes and for the nuclear temperatures governing the neutron evaporation process at incident neutron energies around the second chance fission threshold. Comparisons of the present parameters with those obtained by other methods give hints to possible insufficiencies in experimental cross section data in the region of the second chance fission threshold. (Auth.)

  8. Establishing seasonal and alert influenza thresholds in Cambodia using the WHO method: implications for effective utilization of influenza surveillance in the tropics and subtropics.

    Science.gov (United States)

    Ly, Sovann; Arashiro, Takeshi; Ieng, Vanra; Tsuyuoka, Reiko; Parry, Amy; Horwood, Paul; Heng, Seng; Hamid, Sarah; Vandemaele, Katelijn; Chin, Savuth; Sar, Borann; Arima, Yuzo

    2017-01-01

    To establish seasonal and alert thresholds and transmission intensity categories for influenza to provide timely triggers for preventive measures or upscaling control measures in Cambodia. Using Cambodia's influenza-like illness (ILI) and laboratory-confirmed influenza surveillance data from 2009 to 2015, three parameters were assessed to monitor influenza activity: the proportion of ILI patients among all outpatients, proportion of ILI samples positive for influenza and the product of the two. With these parameters, four threshold levels (seasonal, moderate, high and alert) were established and transmission intensity was categorized based on a World Health Organization alignment method. Parameters were compared against their respective thresholds. Distinct seasonality was observed using the two parameters that incorporated laboratory data. Thresholds established using the composite parameter, combining syndromic and laboratory data, had the least number of false alarms in declaring season onset and were most useful in monitoring intensity. Unlike in temperate regions, the syndromic parameter was less useful in monitoring influenza activity or for setting thresholds. Influenza thresholds based on appropriate parameters have the potential to provide timely triggers for public health measures in a tropical country where monitoring and assessing influenza activity has been challenging. Based on these findings, the Ministry of Health plans to raise general awareness regarding influenza among the medical community and the general public. Our findings have important implications for countries in the tropics/subtropics and in resource-limited settings, and categorized transmission intensity can be used to assess severity of potential pandemic influenza as well as seasonal influenza.

  9. Adaptive geodesic transform for segmentation of vertebrae on CT images

    Science.gov (United States)

    Gaonkar, Bilwaj; Shu, Liao; Hermosillo, Gerardo; Zhan, Yiqiang

    2014-03-01

    Vertebral segmentation is a critical first step in any quantitative evaluation of vertebral pathology using CT images. This is especially challenging because bone marrow tissue has the same intensity profile as the muscle surrounding the bone. Thus simple methods such as thresholding or adaptive k-means fail to accurately segment vertebrae. While several other algorithms such as level sets may be used for segmentation, any algorithm that is clinically deployable has to work in under a few seconds. To address these dual challenges, we present here a new algorithm based on the geodesic distance transform that is capable of segmenting the spinal vertebrae in under one second. To achieve this we extend the theory of the geodesic distance transforms proposed in [1] to incorporate high-level anatomical knowledge through adaptive weighting of image gradients. Such knowledge may be provided by the user directly or may be automatically generated by another algorithm. We incorporate information 'learnt' using a previously published machine learning algorithm [2] to segment the L1 to L5 vertebrae. While we present a particular application here, the adaptive geodesic transform is a generic concept which can be applied to the segmentation of other organs as well.

  10. The morphing method as a flexible tool for adaptive local/non-local simulation of static fracture

    KAUST Repository

    Azdoud, Yan

    2014-04-19

    We introduce a framework that adapts local and non-local continuum models to simulate static fracture problems. Non-local models based on the peridynamic theory are promising for the simulation of fracture, as they allow discontinuities in the displacement field. However, they remain computationally expensive. As an alternative, we develop an adaptive coupling technique based on the morphing method to restrict the non-local model adaptively during the evolution of the fracture. The rest of the structure is described by local continuum mechanics. We conduct all simulations in three dimensions, using the relevant discretization scheme in each domain, i.e., the discontinuous Galerkin finite element method in the peridynamic domain and the continuous finite element method in the local continuum mechanics domain. © 2014 Springer-Verlag Berlin Heidelberg.

  11. Identification of a Threshold Value for the DEMATEL Method: Using the Maximum Mean De-Entropy Algorithm

    Science.gov (United States)

    Chung-Wei, Li; Gwo-Hshiung, Tzeng

    To deal with complex problems, structuring them through graphical representations and analyzing causal influences can aid in illuminating complex issues, systems, or concepts. The DEMATEL method is a methodology which can be used for researching and solving complicated and intertwined problem groups. The end product of the DEMATEL process is a visual representation—the impact-relations map—by which respondents organize their own actions in the world. The applicability of the DEMATEL method is widespread, ranging from analyzing world problematique decision making to industrial planning. The most important property of the DEMATEL method used in the multi-criteria decision making (MCDM) field is to construct interrelations between criteria. In order to obtain a suitable impact-relations map, an appropriate threshold value is needed to obtain adequate information for further analysis and decision making. In this paper, we propose a method based on the entropy approach, the maximum mean de-entropy algorithm, to achieve this purpose. Using real cases of finding the interrelationships between the criteria for evaluating effects in E-learning programs as an example, we compare the results obtained from the respondents and from our method, and discuss the different impact-relations maps produced by the two methods.
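
    For context, the standard DEMATEL computation that the threshold is applied to can be sketched as follows. The maximum mean de-entropy criterion itself is not reproduced here; pruning below the mean of T is a simple hypothetical stand-in, and the direct-influence matrix is illustrative.

```python
import numpy as np

def dematel_total_relation(D):
    """Standard DEMATEL steps: normalize the direct-influence matrix and
    compute the total-relation matrix T = N (I - N)^-1. Entries of T
    below the chosen threshold are pruned from the impact-relations map."""
    D = np.asarray(D, dtype=float)
    N = D / max(D.sum(axis=0).max(), D.sum(axis=1).max())
    T = N @ np.linalg.inv(np.eye(D.shape[0]) - N)
    return T

# The paper picks the threshold by the maximum mean de-entropy criterion;
# as a simple hypothetical stand-in, prune links below the mean of T.
D = np.array([[0, 3, 2], [1, 0, 4], [2, 1, 0]], dtype=float)
T = dematel_total_relation(D)
print(T > T.mean())
```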

  12. Sparse electromagnetic imaging using nonlinear iterative shrinkage thresholding

    KAUST Repository

    Desmal, Abdulla; Bagci, Hakan

    2015-01-01

    A sparse nonlinear electromagnetic imaging scheme is proposed for reconstructing dielectric contrast of investigation domains from measured fields. The proposed approach constructs the optimization problem by introducing the sparsity constraint to the data misfit between the scattered fields expressed as a nonlinear function of the contrast and the measured fields and solves it using the nonlinear iterative shrinkage thresholding algorithm. The thresholding is applied to the result of every nonlinear Landweber iteration to enforce the sparsity constraint. Numerical results demonstrate the accuracy and efficiency of the proposed method in reconstructing sparse dielectric profiles.
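
    A minimal real-valued sketch of a thresholded nonlinear Landweber scheme of the kind described: a gradient step on the data misfit through a user-supplied forward model and Jacobian, followed by soft thresholding. The step size and regularization weight are assumed values.

```python
import numpy as np

def sparse_landweber(forward, jacobian, y, x0, lam=0.05, step=0.1, n_iter=200):
    """Thresholded nonlinear Landweber iteration: a gradient step on the
    data misfit 0.5*||forward(x) - y||^2 through the user-supplied forward
    model and its Jacobian, followed by soft thresholding to enforce a
    sparse contrast. Real-valued for brevity."""
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        r = forward(x) - y                          # residual in data space
        x = soft(x - step * jacobian(x).T @ r, step * lam)
    return x
```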

  14. Threshold guidance update

    International Nuclear Information System (INIS)

    Wickham, L.E.

    1986-01-01

    The Department of Energy (DOE) is developing the concept of threshold quantities for use in determining which waste materials must be handled as radioactive waste and which may be disposed of as nonradioactive waste at its sites. Waste above this concentration level would be managed as radioactive or mixed waste (if hazardous chemicals are present); waste below this level would be handled as sanitary waste. The previous year's activities (1984) included the development of a threshold guidance dose, the development of threshold concentrations corresponding to the guidance dose, the development of supporting documentation, review by a technical peer review committee, and review by the DOE community. As a result of the comments, areas have been identified for more extensive analysis, including an alternative basis for selection of the guidance dose and the development of quality assurance guidelines. Development of quality assurance guidelines will provide a reasonable basis for determining that a given waste stream qualifies as a threshold waste stream and can then be the basis for a more extensive cost-benefit analysis. The threshold guidance and supporting documentation will be revised, based on the comments received. The revised documents will be provided to DOE by early November. DOE-HQ has indicated that the revised documents will be available for review by DOE field offices and their contractors.

  15. Adaptive measurements of urban runoff quality

    Science.gov (United States)

    Wong, Brandon P.; Kerkez, Branko

    2016-11-01

    An approach to adaptively measure runoff water quality dynamics is introduced, focusing specifically on characterizing the timing and magnitude of urban pollutographs. Rather than relying on a static schedule or flow-weighted sampling, which can miss important water quality dynamics if parameterized inadequately, novel Internet-enabled sensor nodes are used to autonomously adapt their measurement frequency to real-time weather forecasts and hydrologic conditions. This dynamic approach has the potential to significantly improve the use of constrained experimental resources, such as automated grab samplers, which continue to provide a strong alternative for sampling water quality dynamics when in situ sensors are not available. Compared to conventional flow-weighted or time-weighted sampling schemes, which rely on preset thresholds, a major benefit of the approach is the ability to dynamically adapt to features of an underlying hydrologic signal. A 28 km2 urban watershed was studied to characterize concentrations of total suspended solids (TSS) and total phosphorus. Water quality samples were autonomously triggered in response to features in the underlying hydrograph and real-time weather forecasts. The study watershed did not exhibit a strong first flush, and intraevent concentration variability was driven by flow acceleration, wherein the largest loadings of TSS and total phosphorus corresponded with the steepest rising limbs of the storm hydrograph. The scalability of the proposed method is discussed in the context of larger sensor network deployments, as well as the potential to improve control of urban water quality.

  16. Integrating adaptive governance and participatory multicriteria methods: a framework for climate adaptation governance

    NARCIS (Netherlands)

    Munaretto, S.; Siciliano, G.; Turvani, M.

    2014-01-01

    Climate adaptation is a dynamic social and institutional process where the governance dimension is receiving growing attention. Adaptive governance is an approach that promises to reduce uncertainty by improving the knowledge base for decision making. As uncertainty is an inherent feature of climate

  17. A NOISE ADAPTIVE FUZZY EQUALIZATION METHOD FOR PROCESSING SOLAR EXTREME ULTRAVIOLET IMAGES

    Energy Technology Data Exchange (ETDEWEB)

    Druckmueller, M., E-mail: druckmuller@fme.vutbr.cz [Institute of Mathematics, Faculty of Mechanical Engineering, Brno University of Technology, Technicka 2, 616 69 Brno (Czech Republic)

    2013-08-15

    A new image enhancement tool ideally suited for the visualization of fine structures in extreme ultraviolet images of the corona is presented in this paper. The Noise Adaptive Fuzzy Equalization method is particularly suited for the exceptionally high dynamic range images from the Atmospheric Imaging Assembly instrument on the Solar Dynamics Observatory. This method produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform which are often used for that purpose.

  18. Adaptive Elastic Net for Generalized Methods of Moments.

    Science.gov (United States)

    Caner, Mehmet; Zhang, Hao Helen

    2014-01-30

    Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least-squares-based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique due to the estimators' lack of closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity as well as collinearity among a large number of variables, with the redundant parameters set to zero via a data-dependent technique. This method has the oracle property, meaning that the nonzero parameters are estimated with their standard limit distribution and the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.

  19. Comparison of anaerobic threshold determined by visual and mathematical methods in healthy women

    Directory of Open Access Journals (Sweden)

    M.N. Higa

    2007-04-01

    Full Text Available Several methods are used to estimate the anaerobic threshold (AT) during exercise. The aim of the present study was to compare the AT obtained by a graphic visual method based on ventilatory and metabolic variables (the gold standard) to a bi-segmental linear regression mathematical model (Hinkley's algorithm) applied to heart rate (HR) and carbon dioxide output (VCO2) data. Thirteen young (24 ± 2.63 years old) and 16 postmenopausal (57 ± 4.79 years old) healthy and sedentary women were submitted to a continuous ergospirometric incremental test on an electromagnetically braked cycloergometer with 10 to 20 W/min increases until physical exhaustion. The ventilatory variables were recorded breath-to-breath and HR was obtained beat-to-beat in real time. Data were analyzed by the nonparametric Friedman test and the Spearman correlation test with the level of significance set at 5%. Power output (W), HR (bpm), oxygen uptake (VO2; mL kg-1 min-1), VO2 (mL/min), VCO2 (mL/min), and minute ventilation (VE; L/min) at the AT level were similar for both methods and both groups studied (P > 0.05). The VO2 (mL kg-1 min-1) data showed significant correlations (P < 0.05) between the gold standard method and the mathematical model applied to HR (rs = 0.75) and VCO2 (rs = 0.78) data for the subjects as a whole (N = 29). The proposed mathematical method for detecting changes in the response patterns of VCO2 and HR was adequate and promising for AT detection in young and middle-aged women, representing a semi-automatic, non-invasive and objective AT measurement.
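
    The bi-segmental model can be sketched as a brute-force two-segment linear fit: try each candidate breakpoint, fit a straight line on each side, and keep the breakpoint with the lowest total squared error. Hinkley's algorithm is more refined than this, and continuity of the two segments is not enforced here.

```python
import numpy as np

def bisegmental_breakpoint(t, y):
    """Brute-force two-segment linear fit: try each candidate breakpoint,
    fit a straight line on each side, and return the breakpoint with the
    lowest total squared error."""
    t, y = np.asarray(t, dtype=float), np.asarray(y, dtype=float)
    best_sse, best_k = np.inf, None
    for k in range(3, len(t) - 3):                 # candidate breakpoints
        p1 = np.polyfit(t[:k], y[:k], 1)
        p2 = np.polyfit(t[k:], y[k:], 1)
        sse = ((np.polyval(p1, t[:k]) - y[:k]) ** 2).sum() + \
              ((np.polyval(p2, t[k:]) - y[k:]) ** 2).sum()
        if sse < best_sse:
            best_sse, best_k = sse, k
    return t[best_k]
```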

  20. Influence of different contributions of scatter and attenuation on the threshold values in contrast-based algorithms for volume segmentation.

    Science.gov (United States)

    Matheoud, Roberta; Della Monica, Patrizia; Secco, Chiara; Loi, Gianfranco; Krengli, Marco; Inglese, Eugenio; Brambilla, Marco

    2011-01-01

    The aim of this work is to evaluate the role of different amounts of attenuation and scatter on FDG-PET image volume segmentation using a contrast-oriented method based on the target-to-background (TB) ratio and target dimensions. A phantom study was designed employing 3 phantom sets, which provided a clinical range of attenuation and scatter conditions, equipped with 6 spheres of different volumes (0.5-26.5 ml). The phantoms were: (1) the Hoffman 3-dimensional brain phantom, (2) a modified International Electrotechnical Commission (IEC) phantom with an annular ring of water bags of 3 cm thickness fit over the IEC phantom, and (3) a modified IEC phantom with an annular ring of water bags of 9 cm. The phantom cavities were filled with a solution of FDG at 5.4 kBq/ml activity concentration, and the spheres with activity concentration ratios of about 16, 8, and 4 times the background activity concentration. Images were acquired with a Biograph 16 HI-REZ PET/CT scanner. Thresholds (TS) were determined as a percentage of the maximum intensity in the cross-section area of the spheres. To reduce statistical fluctuations, a nominal maximum value is calculated as the mean of all voxels > 95%. To find the TS value that yielded an area A best matching the true value, the cross sections were auto-contoured in the attenuation-corrected slices, varying TS in steps of 1%, until the area so determined differed by less than 10 mm² from its known physical value. Multiple regression methods were used to derive an adaptive thresholding algorithm and to test its dependence on different conditions of attenuation and scatter. The errors of scatter and attenuation correction increased with increasing amounts of attenuation and scatter in the phantoms. Despite these increasing inaccuracies, the PET threshold segmentation algorithms turned out not to be influenced by the different conditions of attenuation and scatter. The test of the hypothesis of coincident regression lines for the three phantoms used
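
    The threshold search described above can be sketched as follows: the nominal maximum is the mean of voxels above 95% of the raw maximum, and TS is decreased in 1% steps until the contoured area matches the known cross section within 10 mm². Function and parameter names are ours.

```python
import numpy as np

def find_threshold(slice_img, true_area_mm2, pixel_area_mm2, tol_mm2=10.0):
    """Decrease TS in 1% steps until the thresholded cross-section area
    matches the known sphere cross section within tol_mm2."""
    img = np.asarray(slice_img, dtype=float)
    # noise-robust nominal maximum: mean of all voxels above 95% of the max
    nominal_max = img[img > 0.95 * img.max()].mean()
    for ts in range(100, 0, -1):
        area = (img >= ts / 100.0 * nominal_max).sum() * pixel_area_mm2
        if abs(area - true_area_mm2) < tol_mm2:
            return ts
    return None
```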

  1. An adaptive surface filter for airborne laser scanning point clouds by means of regularization and bending energy

    Science.gov (United States)

    Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng

    2014-06-01

    The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering processes are difficult because of the complex configuration of the terrain features. The classical filtering algorithms rely on the cautious tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset and to surmount the sensitivity of the parameters, in this study, we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance to the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure to self-adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and the ASF gradually interpolates thin plate spline surfaces toward the ground with regularization to handle noise. Using the progressive densification strategy, regularization and self-adaption, both performance improvement and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by ISPRS, the ASF performs the best in comparison with all other filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.

  2. Cloud cover over the equatorial eastern Pacific derived from July 1983 International Satellite Cloud Climatology Project data using a hybrid bispectral threshold method

    Science.gov (United States)

    Minnis, Patrick; Harrison, Edwin F.; Gibson, Gary G.

    1987-01-01

    A set of visible and IR data obtained with GOES from July 17-31, 1983 is analyzed using a modified version of the hybrid bispectral threshold method developed by Minnis and Harrison (1984). This methodology can be divided into a set of procedures or optional techniques to determine the proper clear-sky temperature or IR threshold. The various optional techniques are described; the options are: standard, low-temperature limit, high-reflectance limit, low-reflectance limit, coldest pixel and thermal adjustment limit, IR-only low-cloud temperature limit, IR clear-sky limit, and IR overcast limit. Variations in the cloud parameters and the characteristics and diurnal cycles of trade cumulus and stratocumulus clouds over the eastern equatorial Pacific are examined. It is noted that the new method produces substantial changes in about one third of the cloud amount retrievals, and that low cloud retrievals are affected most by the new constraints.

  3. Real Time Adaptive Stream-oriented Geo-data Filtering

    Directory of Open Access Journals (Sweden)

    A. A. Golovkov

    2016-01-01

    Full Text Available Modern maintenance software systems for various engineering objects are aimed at processing geo-location data coming from employees' mobile devices in real time. To reduce the amount of transmitted data, such systems usually apply various filtering methods to the geo-coordinates recorded directly on the mobile devices. The paper identifies the sources of error in geo-data coming from different sources and proposes an adaptive dynamic method for filtering geo-location data. Compared with the static method previously described in the literature [1], the approach adaptively aligns the filtering threshold with the changing characteristics of coordinates from many sources of geo-location data. To evaluate the efficiency of the developed filtering method, about 400 thousand points were used, representing motion paths of different types (on foot, by car, and by high-speed train) and parking (indoors, outdoors, near high-rise buildings), taken from different mobile devices. Analysis of the results has shown that the benefits of the proposed method are more precise locations for long parking (up to 6 hours) and for coordinates recorded while the user is in motion, as well as the capability to provide stream-oriented filtering of data from different sources, which allows the approach to be used in geo-information systems providing continuous monitoring of location with stream-oriented data processing in real time. The disadvantages are a slightly higher computational complexity and an increased number of points in the final track compared to other filtering techniques. In general, the developed approach enables a significant improvement in the quality of displayed paths of moving mobile objects.

  4. Seizure threshold and the half-age method in bilateral electroconvulsive therapy in Japanese patients.

    Science.gov (United States)

    Yasuda, Kazuyuki; Kobayashi, Kaoru; Yamaguchi, Masayasu; Tanaka, Koichi; Fujii, Tomokazu; Kitahara, Yuichi; Tamaoki, Toshio; Matsushita, Yutaka; Nunomura, Akihiko; Motohashi, Nobutaka

    2015-01-01

    Seizure threshold (ST) in electroconvulsive therapy (ECT) has not been reported previously in Japanese patients. We investigated ST in bilateral ECT in Japanese patients using the dose-titration method. The associations between demographic and clinical characteristics and ST were analyzed to identify the predictors of ST. Finally, the validity of the half-age method for the stimulus dose was evaluated. Fifty-four Japanese patients with mood disorder, schizophrenia, and other psychotic disorders received an acute course of bilateral ECT using a brief-pulse device. ST was determined at the first session using a fixed titration schedule. ST was correlated with age, sex, body mass index, history of previous ECT, and psychotropic drugs on multiple regression analysis. Furthermore, the rate of accomplished seizures was calculated using the half-age method. Mean ST was 136 mC. ST was influenced by age, sex, history of previous ECT, and medication with benzodiazepines. The accomplished seizure rate using the half-age method was 72%, which was significantly lower in men and in subjects on benzodiazepines. ST in Japanese patients was equal to or slightly higher than that previously reported in other ethnic groups, which might be attributable, at least in part, to the high prevalence and large doses of benzodiazepine prescription. Higher age, male gender, no history of ECT, and benzodiazepines were related to higher ST. The half-age method was especially useful in female patients and subjects without benzodiazepine medication. © 2014 The Authors. Psychiatry and Clinical Neurosciences © 2014 Japanese Society of Psychiatry and Neurology.

  5. An experimental test of the linear no-threshold theory of radiation carcinogenesis

    International Nuclear Information System (INIS)

    Cohen, B.L.

    1990-01-01

    There is a substantial body of quantitative information on radiation-induced cancer at high dose, but there are no data at low dose. The usual method for estimating the effects of low-level radiation is to assume a linear no-threshold dependence. If this linear no-threshold assumption were not used, essentially all fears about radiation would disappear. Since these fears are costing tens of billions of dollars, it is most important that the linear no-threshold theory be tested at low dose. An opportunity for testing the linear no-threshold concept at low dose is now available due to radon in homes. The purpose of this paper is to attempt to use these data to test the linear no-threshold theory.

  6. Beef Quality Identification Using Thresholding Method and Decision Tree Classification Based on Android Smartphone

    Directory of Open Access Journals (Sweden)

    Kusworo Adi

    2017-01-01

    Full Text Available Beef is one of the animal food products that have high nutrition because it contains carbohydrates, proteins, fats, vitamins, and minerals. Therefore, the quality of beef should be maintained so that consumers get good beef quality. Determination of beef quality is commonly conducted visually by comparing the actual beef and reference pictures of each beef class. This process presents weaknesses, as it is subjective in nature and takes a considerable amount of time. Therefore, an automated system based on image processing that is capable of determining beef quality is required. This research aims to develop an image segmentation method for processing digital images. The designed system consists of image acquisition at varied distances, resolutions, and angles. Image segmentation is done to separate the images of fat and meat using the Otsu thresholding method. Classification was carried out using the decision tree algorithm, and the best accuracies obtained were 90% for training and 84% for testing. Once developed, this system was embedded into an Android application. Results show that the image processing technique is capable of proper marbling score identification.
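
    For reference, a compact NumPy implementation of the Otsu threshold used in the fat/meat segmentation step; it assumes an 8-bit grayscale image and is a generic sketch, not the authors' code.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the gray level maximizing the between-class
    variance of the histogram. Assumes an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(256))         # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))
```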

  7. A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro; Tempone, Raul; Vilanova, Pedro

    2016-01-01

    In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reactions channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable for coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.

  9. Treatment of threshold retinopathy of prematurity

    Directory of Open Access Journals (Sweden)

    Deshpande Dhanashree

    1998-01-01

    Full Text Available This report deals with our experience in the management of threshold retinopathy of prematurity (ROP). A total of 45 eyes of 23 infants were subjected to treatment of threshold ROP; 26.1% of these infants had a birth weight of >1,500 gm. The preferred modality of treatment was laser indirect photocoagulation, which was facilitated by scleral depression. Cryopexy was done in cases with nondilating pupils or medial haze and was always under general anaesthesia. Retreatment with either modality, covering the skip areas, was needed in 42.2% of eyes. Total regression of disease was achieved in 91.1% of eyes with no sequelae. All 4 eyes that progressed to stage 5 despite treatment had zone 1 disease. Major treatment-induced complications did not occur in this series. This study underscores the importance of routine screening for ROP of infants up to 2,000 gm birth weight and the excellent response that is achieved with laser photocoagulation in inducing regression of threshold ROP. Laser is the preferred method of treatment in view of the absence of treatment-related morbidity in these premature infants.

  10. Epidemic spreading on contact networks with adaptive weights.

    Science.gov (United States)

    Zhu, Guanghu; Chen, Guanrong; Xu, Xin-Jian; Fu, Xinchu

    2013-01-21

    The heterogeneous patterns of interactions within a population are often described by contact networks, but the variety and adaptivity of contact strengths are usually ignored. This paper proposes a modified epidemic SIS model with a birth-death process and nonlinear infectivity on an adaptive and weighted contact network. The link weights, termed 'adaptive weights', which indicate the intimacy or familiarity between two connected individuals, decrease as the disease develops. Through mathematical and numerical analyses, conditions are established for population extermination, disease extinction and infection persistence. In particular, it is found that a fixed-weights setting can trigger epidemic incidence, and that the adaptivity of weights cannot change the epidemic threshold but can accelerate the disease decay and lower the endemic level. Finally, some corresponding control measures are suggested. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. An efficient cloud detection method for high resolution remote sensing panchromatic imagery

    Science.gov (United States)

    Li, Chaowei; Lin, Zaiping; Deng, Xinpu

    2018-04-01

    In order to increase the accuracy of cloud detection for remote sensing satellite imagery, we propose an efficient cloud detection method for remote sensing satellite panchromatic images. This method includes three main steps. First, an adaptive intensity threshold combined with a median filter is adopted to extract the coarse cloud regions. Second, a guided filtering process is conducted to strengthen differences in textural features, and texture is then detected via a gray-level co-occurrence matrix computed on the acquired texture-detail image. Finally, the candidate cloud regions are extracted as the intersection of the two coarse cloud regions above and refined by an adaptive morphological dilation to recover thin clouds at the boundaries. The experimental results demonstrate the effectiveness of the proposed method.
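
    A hedged sketch of the first stage only: the record does not give the exact form of the adaptive intensity threshold, so mean + k·std on a median-filtered image is our assumption, and the guided-filter/GLCM texture stage and the intersection step are omitted.

```python
import numpy as np
from scipy.ndimage import binary_dilation, median_filter

def coarse_cloud_mask(img, k=1.0, size=5):
    """First stage only: global adaptive intensity threshold (mean + k*std,
    an assumed form) on a median-filtered image, then a morphological
    dilation to recover thin cloud boundaries."""
    smooth = median_filter(np.asarray(img, dtype=float), size=size)
    threshold = smooth.mean() + k * smooth.std()
    return binary_dilation(smooth > threshold, iterations=2)
```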

  12. Lower versus Higher Hemoglobin Threshold for Transfusion in Septic Shock

    DEFF Research Database (Denmark)

    Holst, Lars B; Haase, Nicolai; Wetterslev, Jørn

    2014-01-01

    BACKGROUND: Blood transfusions are frequently given to patients with septic shock. However, the benefits and harms of different hemoglobin thresholds for transfusion have not been established. METHODS: In this multicenter, parallel-group trial, we randomly assigned patients in the intensive care...... unit (ICU) who had septic shock and a hemoglobin concentration of 9 g per deciliter or less to receive 1 unit of leukoreduced red cells when the hemoglobin level was 7 g per deciliter or less (lower threshold) or when the level was 9 g per deciliter or less (higher threshold) during the ICU stay...... were similar in the two intervention groups. CONCLUSIONS: Among patients with septic shock, mortality at 90 days and rates of ischemic events and use of life support were similar among those assigned to blood transfusion at a higher hemoglobin threshold and those assigned to blood transfusion...

  13. Vestibular thresholds for yaw rotation about an earth-vertical axis as a function of frequency.

    Science.gov (United States)

    Grabherr, Luzia; Nicoucar, Keyvan; Mast, Fred W; Merfeld, Daniel M

    2008-04-01

    Perceptual direction detection thresholds for yaw rotation about an earth-vertical axis were measured at seven frequencies (0.05, 0.1, 0.2, 0.5, 1, 2, and 5 Hz) in seven subjects in the dark. Motion stimuli consisted of single cycles of sinusoidal acceleration and were generated by a motion platform. An adaptive two-alternative categorical forced-choice procedure was used. The subjects had to indicate by button presses whether they perceived yaw rotation to the left or to the right. Thresholds were measured using a 3-down, 1-up staircase paradigm. Mean yaw rotation velocity thresholds were 2.8 deg s⁻¹ for 0.05 Hz, 2.5 deg s⁻¹ for 0.1 Hz, 1.7 deg s⁻¹ for 0.2 Hz, 0.7 deg s⁻¹ for 0.5 Hz, 0.6 deg s⁻¹ for 1 Hz, 0.4 deg s⁻¹ for 2 Hz, and 0.6 deg s⁻¹ for 5 Hz. The results show that motion thresholds increase at 0.2 Hz and below and plateau at 0.5 Hz and above. Increasing velocity thresholds at lower frequencies qualitatively mimic the high-pass characteristics of the semicircular canals, since the increase at 0.2 Hz and below would be consistent with the decreased gain/sensitivity observed in the VOR at lower frequencies. In fact, the measured dynamics are consistent with a high-pass filter having a threshold plateau of 0.71 deg s⁻¹ and a cut-off frequency of 0.23 Hz, which corresponds to a time constant of approximately 0.70 s. These findings provide no evidence for an influence of velocity storage on perceptual yaw rotation thresholds.
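
    The 3-down, 1-up rule converges on the stimulus level yielding roughly 79.4% correct responses. Below is a minimal sketch of such a staircase run against a simulated observer; the psychometric function, step factor, starting level and stopping rule are illustrative assumptions, not the study's settings.

        # 3-down/1-up adaptive staircase against a simulated observer.
        import numpy as np

        rng = np.random.default_rng(1)
        true_threshold = 0.7                  # deg/s, e.g. the 0.5 Hz condition
        stim, factor, run = 4.0, 1.25, 0      # start level, step factor, correct run
        reversals, last_dir = [], 0

        while len(reversals) < 8:
            # simulated 2AFC observer: % correct grows with stimulus/threshold ratio
            p_correct = 0.5 + 0.5 / (1.0 + (true_threshold / stim) ** 2)
            if rng.random() < p_correct:
                run += 1
                if run == 3:                  # three correct in a row -> step down
                    if last_dir == +1:
                        reversals.append(stim)
                    stim, run, last_dir = stim / factor, 0, -1
            else:                             # any error -> step up
                if last_dir == -1:
                    reversals.append(stim)
                stim, run, last_dir = stim * factor, 0, +1

        # averaging the last reversals is a common threshold estimate
        print(f"estimated threshold ~ {np.mean(reversals[-6:]):.2f} deg/s")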

  14. Solution verification, goal-oriented adaptive methods for stochastic advection–diffusion problems

    KAUST Repository

    Almeida, Regina C.; Oden, J. Tinsley

    2010-08-01

    A goal-oriented analysis of linear, stochastic advection-diffusion models is presented which provides both a method for solution verification as well as a basis for improving results through adaptation of both the mesh and the way random variables are approximated. A class of model problems with random coefficients and source terms is cast in a variational setting. Specific quantities of interest are specified which are also random variables. A stochastic adjoint problem associated with the quantities of interest is formulated and a posteriori error estimates are derived. These are used to guide an adaptive algorithm which adjusts the sparse probabilistic grid so as to control the approximation error. Numerical examples are given to demonstrate the methodology for a specific model problem. © 2010 Elsevier B.V.
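
    The adaptive machinery the abstract describes can be summarized as a dual-weighted loop. The schematic below is a generic sketch of that structure, with every problem-specific piece (solvers, error indicator, sparse-grid refinement) left as a placeholder callable; the marking rule is an illustrative Dörfler-style choice, not the paper's.

        def adapt_goal_oriented(grid, tol, solve, solve_adjoint, indicator, refine):
            """Generic goal-oriented adaptive loop; all callables are placeholders."""
            while True:
                u = solve(grid)                # primal stochastic problem on grid
                z = solve_adjoint(grid)        # adjoint weights error by effect on Q
                eta = indicator(u, z, grid)    # dict: grid entity -> a posteriori indicator
                estimate = sum(eta.values())   # estimated error in quantity of interest
                if estimate <= tol:
                    return u, estimate
                cutoff = 0.5 * max(eta.values())
                grid = refine(grid, [k for k, v in eta.items() if v >= cutoff])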

  16. A robust bi-orthogonal/dynamically-orthogonal method using the covariance pseudo-inverse with application to stochastic flow problems

    Science.gov (United States)

    Babaee, Hessam; Choi, Minseok; Sapsis, Themistoklis P.; Karniadakis, George Em

    2017-09-01

    We develop a new robust methodology for the stochastic Navier-Stokes equations based on the dynamically-orthogonal (DO) and bi-orthogonal (BO) methods [1-3]. Both approaches are variants of a generalized Karhunen-Loève (KL) expansion in which both the stochastic coefficients and the spatial basis evolve according to system dynamics, hence capturing the low-dimensional structure of the solution. The DO and BO formulations are mathematically equivalent [3], but they exhibit computationally complementary properties. Specifically, the BO formulation may fail due to crossing of the eigenvalues of the covariance matrix, while both BO and DO become unstable when the covariance matrix has a high condition number or zero eigenvalues. To this end, we combine the two methods into a robust hybrid framework and, in addition, we employ a pseudo-inverse technique to invert the covariance matrix. The robustness of the proposed method stems from addressing the following issues in the DO/BO formulation: (i) eigenvalue crossing: we resolve the issue of eigenvalue crossing in the BO formulation by switching to the DO near eigenvalue crossing using the equivalence theorem and switching back to BO when the distance between eigenvalues is larger than a threshold value; (ii) ill-conditioned covariance matrix: we utilize a pseudo-inverse strategy to invert the covariance matrix; (iii) adaptivity: we utilize an adaptive strategy to add/remove modes to resolve the covariance matrix up to a threshold value. In particular, we introduce a soft-threshold criterion to allow the system to adapt to the newly added/removed mode and therefore avoid repetitive and unnecessary mode addition/removal. When the total variance approaches zero, we show that the DO/BO formulation becomes equivalent to the evolution equation of the Optimally Time-Dependent modes [4]. We demonstrate the capability of the proposed methodology with several numerical examples, namely (i) stochastic Burgers equation: we
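
    Two of the listed ingredients lend themselves to a compact illustration: truncating small eigenvalues when pseudo-inverting the covariance matrix, and monitoring the eigenvalue spectrum to decide when to add or remove a mode. The sketch below is a minimal rendering of those ideas; the tolerance values and the add/remove rule are illustrative assumptions, not the paper's criteria.

        # Truncated pseudo-inverse and mode-count bookkeeping for a covariance matrix.
        import numpy as np

        def covariance_pinv(C, rel_tol=1e-8):
            """Eigen-based pseudo-inverse; small eigenvalues are truncated, keeping
            the DO/BO evolution equations well defined when C is nearly singular."""
            lam, V = np.linalg.eigh(C)            # C symmetric positive semi-definite
            keep = lam > rel_tol * lam.max()
            safe = np.where(keep, lam, 1.0)       # avoid division by ~zero
            inv_lam = np.where(keep, 1.0 / safe, 0.0)
            return (V * inv_lam) @ V.T            # V diag(1/lam_kept) V^T

        def mode_adaptation(C, add_tol=1e-4, remove_tol=1e-10):
            """Soft-threshold adaptivity: enrich when all retained modes stay
            energetic, prune when one mode's variance has collapsed."""
            lam = np.linalg.eigvalsh(C)
            total = lam.sum()
            if lam.min() > add_tol * total:
                return "add"
            if lam.min() < remove_tol * total:
                return "remove"
            return "keep"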

  17. Pepsi-SAXS : an adaptive method for rapid and accurate computation of small-angle X-ray scattering profiles

    OpenAIRE

    Grudinin , Sergei; Garkavenko , Maria; Kazennov , Andrei

    2017-01-01

    A new method called Pepsi-SAXS is presented that calculates small-angle X-ray scattering profiles from atomistic models. The method is based on the multipole expansion scheme and is significantly faster compared with other tested methods. In particular, using the Nyquist–Shannon–Kotelnikov sampling theorem, the multipole expansion order is adapted to the size of the model and the resolution of the experimental data. It is argued that by using the adaptive expansion ord...
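
    To illustrate the idea of adapting the expansion order to model size and data resolution: a generic sampling-theorem argument bounds the required order by roughly the product of the maximum scattering vector and the particle radius. The rule of thumb and margin below are illustrative assumptions, not necessarily the exact formula used in Pepsi-SAXS.

        # Pick a multipole expansion order L from particle size and data resolution.
        import math

        def expansion_order(coords, q_max, margin=3, l_min=5):
            """coords: (x, y, z) atomic positions in angstroms;
            q_max: largest scattering vector in the data (1/angstrom)."""
            cx = [sum(c[i] for c in coords) / len(coords) for i in range(3)]
            radius = max(math.dist(c, cx) for c in coords)  # bounding-sphere radius
            return max(l_min, math.ceil(q_max * radius) + margin)

        # three dummy "atoms" spanning ~50 angstroms, data to q_max = 0.5 / angstrom
        print(expansion_order([(0, 0, 0), (30, 0, 0), (0, 40, 0)], q_max=0.5))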

  18. An adaptive mesh refinement approach for average current nodal expansion method in 2-D rectangular geometry

    International Nuclear Information System (INIS)

    Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.

    2013-01-01

    Highlights: ► A new adaptive h-refinement approach has been developed for a class of nodal methods. ► The resulting system of nodal equations is more amenable to efficient numerical solution. ► The benefit of the approach is reduced computational effort relative to uniform fine-mesh modeling. ► The spatially adaptive approach greatly enhances the accuracy of the solution. - Abstract: The aim of this work is to develop a spatially adaptive coarse-mesh strategy that progressively refines the nodes in appropriate regions of the domain in order to solve the neutron balance equation by the zeroth-order nodal expansion method. A flux-gradient-based a posteriori estimation scheme has been utilized to check the approximate solutions for the various nodes, with the relative surface net leakage of each node taken as the assessment criterion. In this approach, the core module is called by the adaptive mesh generator to determine the gradients of the node surface fluxes and explore the possibility of node refinement in appropriate regions and directions of the problem. The benefit of the approach is reduced computational effort relative to uniform fine-mesh modeling. For this purpose, a computer program, ANRNE-2D (Adaptive Node Refinement Nodal Expansion), has been developed to solve the neutron diffusion equation using the average current nodal expansion method for 2-D rectangular geometries. Implementing the adaptive algorithm confirms its superiority in enhancing the accuracy of the solution without using fine nodes throughout the domain or increasing the number of solution unknowns. Some well-known benchmarks have been investigated and improvements are reported.
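
    The refinement criterion is simple to state in code. The following schematic, with placeholder solver and leakage routines and an illustrative criterion value, shows how nodes whose relative surface net leakage exceeds the criterion would be split.

        def refine_mesh(nodes, solve_nodal, surface_net_leakage, criterion=0.05):
            """One adaptive pass: split nodes with steep flux gradients.
            solve_nodal and surface_net_leakage are placeholder callables."""
            flux = solve_nodal(nodes)                   # zeroth-order NEM sweep
            new_nodes = []
            for node in nodes:
                leakage = surface_net_leakage(node, flux)  # net current over faces
                relative = abs(leakage) / max(abs(flux[node]), 1e-30)
                if relative > criterion:                # steep gradient -> refine
                    new_nodes.extend(node.split_into_quadrants())  # 2-D h-refinement
                else:
                    new_nodes.append(node)
            return new_nodes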

  19. Analyzing Sub-Threshold Bitcell Topologies and the Effects of Assist Methods on SRAM VMIN

    Directory of Open Access Journals (Sweden)

    James Boley

    2012-04-01

    The need for ultra-low-power circuits has forced circuit designers to scale voltage supplies into the sub-threshold region, where energy per operation is minimized [1]. The problem is that the traditional 6T SRAM bitcell, used for data storage, becomes unreliable at voltages below about 700 mV due to process variations and decreased device drive strength [2]. In order to achieve reliable operation, new bitcell topologies and assist methods have been proposed. This paper provides a comparison of four different bitcell topologies using read and write VMIN as the metrics for evaluation. In addition, read and write assist methods were tested using the periphery voltage scaling techniques discussed in [4–13]. Measurements taken from a 180 nm test chip show read functionality (without assist methods) down to 500 mV and write functionality down to 600 mV. Using assist methods can reduce both read and write VMIN by 100 mV relative to the unassisted case.
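
    To make the VMIN metric concrete, the sketch below shows one common way such a minimum operating voltage can be extracted on silicon: a binary search over the supply, assuming pass/fail is monotonic in voltage. The array_passes callable and the voltage bounds are hypothetical stand-ins for the tester's actual routine, not anything described in the paper.

        def find_vmin(array_passes, v_lo=0.3, v_hi=1.2, resolution=0.005):
            """Smallest supply voltage (V) at which every bitcell passes the test;
            assumes functionality is monotonic in supply voltage."""
            while v_hi - v_lo > resolution:
                v_mid = 0.5 * (v_lo + v_hi)
                if array_passes(v_mid):   # all cells read/write correctly at v_mid
                    v_hi = v_mid          # still functional -> try lower
                else:
                    v_lo = v_mid          # failures appeared -> back off
            return v_hi

        # e.g. read VMIN with a hypothetical wordline-boost assist on vs. off:
        # vmin_assist = find_vmin(lambda v: chip.read_test(v, wl_boost=True))
        # vmin_base   = find_vmin(lambda v: chip.read_test(v, wl_boost=False))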

  20. Intermediate structure and threshold phenomena

    International Nuclear Information System (INIS)

    Hategan, Cornel

    2004-01-01

    The Intermediate Structure, evidenced through microstructures of the neutron strength function, is reflected in open reaction channels as fluctuations in the excitation functions of nuclear threshold effects. The intermediate state supporting both the neutron strength function and the nuclear threshold effect is a micro-giant neutron threshold state. (author)