A method of camera calibration with adaptive thresholding
Gao, Lei; Yan, Shu-hua; Wang, Guo-chao; Zhou, Chun-lei
2009-07-01
To calculate camera parameters correctly, the accurate coordinates of certain points in the image plane must be determined. Corners are important features in 2D images: generally speaking, they are points of high curvature lying at the junction of image regions of different brightness, and corner detection is already widely used in many fields. In this paper we use the pinhole camera model and the SUSAN corner detection algorithm to calibrate the camera. When applying the SUSAN algorithm, we propose an approach that retrieves the gray-difference threshold adaptively, which makes it possible to pick up the correct chessboard inner corners under all kinds of gray contrast. Experimental results proved this method to be feasible.
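The adaptive gray-difference idea can be sketched as follows. This is a toy illustration, not the paper's implementation: the contrast factor `k` and the USAN window radius are assumptions, and a real SUSAN detector uses a circular mask and a corner response function.

```python
import numpy as np

def adaptive_gray_threshold(img, k=0.5):
    """Pick a SUSAN gray-difference threshold from image contrast.

    Scaling the threshold with the spread of gray levels lets the
    detector cope with chessboards of varying contrast. The factor
    k = 0.5 is an illustrative choice, not the paper's rule.
    """
    return max(1.0, k * float(np.std(img)))

def usan_area(img, y, x, t, radius=3):
    """USAN area at (y, x): neighborhood pixels whose gray value is
    within t of the nucleus pixel. Corners have a small USAN area."""
    h, w = img.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    patch = img[y0:y1, x0:x1].astype(float)
    return int(np.sum(np.abs(patch - float(img[y, x])) <= t))

# Toy chessboard junction: two bright quadrants meet two dark ones.
img = np.zeros((12, 12))
img[:6, :6] = 100.0
img[6:, 6:] = 100.0
t = adaptive_gray_threshold(img)          # adapts to this image's contrast
corner_usan = usan_area(img, 6, 6, t)     # at the four-quadrant junction
flat_usan = usan_area(img, 2, 2, t)       # inside a uniform region
```

Because the threshold is derived from the image's own gray-level spread, the same code works on low-contrast and high-contrast boards without retuning.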
Comparison of an adaptive local thresholding method on CBCT and µCT endodontic images
Michetti, Jérôme; Basarab, Adrian; Diemer, Franck; Kouame, Denis
2018-01-01
Root canal segmentation on cone beam computed tomography (CBCT) images is difficult because of the noise level, resolution limitations, beam hardening and dental morphological variations. An image processing framework, based on an adaptive local threshold method, was evaluated on CBCT images acquired from extracted teeth. A comparison with high-quality segmented endodontic images from micro computed tomography (µCT) acquired on the same teeth was carried out using a dedicated registration process. Each segmented tooth was evaluated according to volume and root canal sections through the area and the Feret diameter. The proposed method is shown to overcome the limitations of CBCT and to provide an automated and adaptive complete endodontic segmentation. Despite a slight underestimation (−4.08%), the local threshold segmentation method based on edge detection was shown to be fast and accurate. Strong correlations between CBCT and µCT segmentations were found for both the root canal area and diameter (0.98 and 0.88, respectively). Our findings suggest that combining CBCT imaging with this image processing framework may benefit experimental endodontology and teaching, and could represent a first development step towards the clinical use of endodontic CBCT segmentation during pulp cavity treatment.
Directory of Open Access Journals (Sweden)
Jing Xu
2016-07-01
As the sound signal of a machine contains abundant information and is easy to measure, acoustic-based monitoring and diagnosis systems exhibit obvious advantages, especially in extreme conditions. However, sound collected directly in the industrial field is always polluted by noise. To eliminate noise components from machinery sound, a wavelet threshold denoising method optimized by an improved fruit fly optimization algorithm (WTD-IFOA) is proposed in this paper. The sound is first decomposed by the wavelet transform (WT) to obtain the coefficients of each level. Because the wavelet threshold functions proposed by Donoho are discontinuous, many modified functions with continuous first- and second-order derivatives have been presented to realize adaptive denoising. However, the function-based denoising process is time-consuming, and it is difficult to find optimal thresholds. To overcome these problems, the fruit fly optimization algorithm (FOA) is introduced into the process. Moreover, to avoid falling into local extrema, an improved fly distance range obeying a normal distribution is proposed on the basis of the original FOA. The sound signal of a motor was then recorded in a soundproof laboratory, and Gaussian white noise was added to the signal. The simulation results illustrate the effectiveness and superiority of the proposed approach through a comprehensive comparison among five typical methods. Finally, an industrial application on a shearer at a coal mining working face demonstrates the practical effect.
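The baseline this paper improves on, Donoho-style wavelet threshold denoising, can be sketched with a single-level Haar transform and the universal threshold. This is a minimal sketch under stated assumptions (one decomposition level, Haar wavelet, no FOA optimization); the paper's method optimizes the thresholds instead of fixing them.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform: approximation and detail bands."""
    x = np.asarray(x, float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_idwt(a, d):
    """Inverse one-level Haar transform (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(c, t):
    """Donoho's soft-threshold function (continuous, unlike hard)."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(x):
    a, d = haar_dwt(x)
    sigma = np.median(np.abs(d)) / 0.6745        # robust noise estimate
    t = sigma * np.sqrt(2.0 * np.log(len(x)))    # universal threshold
    return haar_idwt(a, soft_threshold(d, t))

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.3 * rng.standard_normal(256)
out = denoise(noisy)
```

The detail band of a smooth signal is dominated by noise, so shrinking it toward zero reduces the mean squared error; the FOA-based method in the paper searches for better per-level thresholds than the fixed universal rule used here.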
International Nuclear Information System (INIS)
Chen, Lin; Fan, Xiangtao; Du, Xiaoping
2014-01-01
Point cloud filtering is the basic and key step in LiDAR data processing. The Adaptive Triangle Irregular Network Modelling (ATINM) algorithm and the Threshold Segmentation on Elevation Statistics (TSES) algorithm are among the mature algorithms. However, little research has concentrated on the parameter selection of ATINM and the iteration condition of TSES, which can greatly affect the filtering results. The paper first presents these two key problems under two different terrain environments: for a flat area, small height and angle parameters perform well, while for areas with complex feature changes, large height and angle parameters perform well. One segmentation pass is enough for flat areas, while repeated segmentations are essential for complex areas. The paper then compares and analyses the results of the two methods. ATINM has a larger type I error on both data sets, as it sometimes removes excessive points; TSES has a larger type II error on both data sets, as it ignores topological relations between points. ATINM performs well even on a large region with dramatic terrain, while TSES is more suitable for small regions with flat terrain. Different parameters and iterations can cause relatively large filtering differences.
International Nuclear Information System (INIS)
Yin Xiaoming; Li Xiang; Zhao Liping; Fang Zhongping
2009-01-01
A Shack-Hartmann wavefront sensor (SHWS) splits the incident wavefront into many subsections and transforms distorted-wavefront detection into a centroid measurement, so the accuracy of the centroid measurement determines the accuracy of the SHWS. Many methods have been presented to improve the accuracy of the wavefront centroid measurement. However, most of these methods are discussed from the point of view of optics, based on the assumption that the spot intensity of the SHWS has a Gaussian distribution, which is not applicable to the digital SHWS. In this paper, we present a centroid measurement algorithm based on adaptive thresholding and dynamic windowing, using image processing techniques, for practical application of the digital SHWS in surface profile measurement. The method detects the centroid of each focal spot precisely and robustly by eliminating the influence of various noise sources, such as diffraction of the digital SHWS, unevenness and instability of the light source, and deviation between the centroid of the focal spot and the center of the detection area. The experimental results demonstrate that the algorithm has better precision, repeatability, and stability than other commonly used centroid methods, such as the statistical averaging, thresholding, and windowing algorithms.
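The core of such a thresholded centroid can be sketched in a few lines. This is a simplified illustration, not the paper's algorithm: the fraction `k` and the single-spot assumption are ours, and the paper additionally uses a dynamic window around each spot.

```python
import numpy as np

def spot_centroid(img, k=0.3):
    """Centroid of one focal spot after adaptive thresholding.

    The threshold sits a fraction k of the way from the background
    level to the peak, and pixels below it get zero weight, so stray
    light and sensor noise do not pull the centroid. k = 0.3 is an
    illustrative value, not the one used in the paper.
    """
    img = np.asarray(img, float)
    t = img.min() + k * (img.max() - img.min())
    w = np.where(img > t, img - t, 0.0)            # thresholded weights
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return np.sum(ys * w) / np.sum(w), np.sum(xs * w) / np.sum(w)

# Symmetric Gaussian spot centered at (10, 14) on a constant background.
ys, xs = np.mgrid[0:21, 0:29]
spot = np.exp(-((ys - 10) ** 2 + (xs - 14) ** 2) / 20.0) + 0.05
cy, cx = spot_centroid(spot)
```

Subtracting the threshold before weighting removes the bias that a uniform background pedestal would otherwise add to the centroid estimate.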
Time-efficient multidimensional threshold tracking method
DEFF Research Database (Denmark)
Fereczkowski, Michal; Kowalewski, Borys; Dau, Torsten
2015-01-01
Traditionally, adaptive methods have been used to reduce the time it takes to estimate psychoacoustic thresholds. However, even with adaptive methods, there are many cases where the testing time is too long to be clinically feasible, particularly when estimating thresholds as a function of another...
Robust Adaptive Thresholder For Document Scanning Applications
Hsing, To R.
1982-12-01
In document scanning applications, thresholding is used to obtain binary data from a scanner. However, due to (1) the wide range of different colored backgrounds, (2) density variations of printed text, and (3) shading effects caused by the optical system, adaptive thresholding is highly desirable for enhancing the useful information. This paper describes a new robust adaptive thresholder for obtaining valid binary images. It is basically a memory-type algorithm that dynamically updates the black and white reference levels to optimize a local adaptive threshold function. High image quality was obtained with this algorithm on different types of simulated test patterns. The software algorithm is described, and experimental results are presented to illustrate the procedure. The results also show that the techniques described here can be used for real-time signal processing in varied applications.
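A memory-type thresholder of this flavor can be sketched for a single scanline. The update constant `alpha` and the threshold fraction `frac` are illustrative assumptions, not the paper's tuned values; the point is only that the black/white references carry memory from pixel to pixel.

```python
import numpy as np

def binarize_scanline(line, alpha=0.1, frac=0.5):
    """Memory-type adaptive binarization of one scanline.

    Running black and white reference levels are updated
    exponentially (the 'memory'), and the local threshold sits a
    fraction `frac` of the way between them, so the decision level
    follows slow background and density drift along the line.
    """
    black = float(np.min(line))
    white = float(np.max(line))
    out = []
    for p in line:
        t = black + frac * (white - black)
        bit = 1 if p >= t else 0              # 1 = white/background
        if bit:
            white = (1 - alpha) * white + alpha * p
        else:
            black = (1 - alpha) * black + alpha * p
        out.append(bit)
    return out
```

Because each pixel nudges only one reference level, an isolated ink blob cannot drag the white reference down, which is what makes this style of thresholder robust to shading.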
Thresholding methods for PET imaging: A review
International Nuclear Information System (INIS)
Dewalle-Vignion, A.S.; Betrouni, N.; Hossein-Foucher, C.; Huglo, D.; Vermandel, M.; El Abiad, A.
2010-01-01
This work deals with positron emission tomography (PET) segmentation methods for tumor volume determination. We present a state-of-the-art review of techniques based on fixed or adaptive thresholds. Methods found in the literature are analyzed objectively with respect to their methodology, advantages, and limitations. Finally, a comparative study is presented. (authors)
QRS Detection Based on Improved Adaptive Threshold
Directory of Open Access Journals (Sweden)
Xuanyu Lu
2018-01-01
Cardiovascular disease is the leading cause of death worldwide. Automatic electrocardiogram (ECG) analysis algorithms play an important role in quick and accurate diagnosis, and their first step is QRS detection. The threshold algorithm for QRS complex detection is known for its high-speed computation and minimal memory storage. In the current mobile era, threshold algorithms can easily be ported to portable, wearable, and wireless ECG systems. However, the detection rate of the threshold algorithm still calls for improvement. An improved adaptive threshold algorithm for QRS detection is reported in this paper. Its main steps are preprocessing, peak finding, and adaptive-threshold QRS detection. On the MIT-BIH Arrhythmia database, the detection rate is 99.41%, the sensitivity (Se) is 99.72%, and the specificity (Sp) is 99.69%. A comparison with two other algorithms demonstrates its superiority. Finally, suspicious abnormal areas are marked, and an RR-Lorenz plot is drawn to aid doctors and cardiologists in diagnosis.
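The adaptive-threshold idea behind such detectors can be sketched in the style of Pan-Tompkins. This is a generic sketch, not the paper's algorithm: the 0.875/0.125 blending constants, the 200 ms refractory period, and the bootstrap from the first second are standard assumptions, and a real detector would first band-pass, differentiate, square, and integrate the ECG.

```python
import numpy as np

def detect_qrs(x, fs, frac=0.5):
    """Toy adaptive-threshold QRS peak detector (Pan-Tompkins flavor).

    The threshold is a running blend of a signal-peak estimate and a
    noise-peak estimate, so it rises after strong beats and falls in
    noisy quiet stretches; a refractory period suppresses double
    detections of the same beat.
    """
    npk = 0.0
    spk = float(np.max(x[:int(fs)]))       # bootstrap from the first second
    thr = npk + frac * (spk - npk)
    refractory = int(0.2 * fs)             # 200 ms
    peaks, last = [], -refractory
    for i in range(1, len(x) - 1):
        if x[i - 1] < x[i] >= x[i + 1]:            # local maximum
            if x[i] > thr and i - last > refractory:
                peaks.append(i)
                last = i
                spk = 0.875 * spk + 0.125 * x[i]   # signal-peak estimate
            else:
                npk = 0.875 * npk + 0.125 * x[i]   # noise-peak estimate
            thr = npk + frac * (spk - npk)
    return peaks

fs = 250                                   # Hz, synthetic example
x = np.zeros(800)
x[100] = x[350] = x[600] = 1.0             # synthetic R peaks
x[200] = 0.05                              # small noise bumps
x[450] = 0.08
peaks = detect_qrs(x, fs)
```

The noise bumps update only the noise-peak estimate, so the threshold stays well above them while still tracking beat amplitude.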
Consumption of vitamin a rich foods and dark adaptation threshold ...
African Journals Online (AJOL)
BACKGROUND: More than 7.2 million pregnant women in developing countries suffer from vitamin A deficiency. The objective of this study was to assess dark adaptation threshold of pregnant women and related socio-demographic factors in Damot Sore District, Wolayita Zone, Southern Ethiopia. METHODS: A ...
Statistical Algorithm for the Adaptation of Detection Thresholds
DEFF Research Database (Denmark)
Stotsky, Alexander A.
2008-01-01
Many event detection mechanisms in spark ignition automotive engines are based on the comparison of engine signals to detection threshold values. Different signal qualities for new and aged engines necessitate the development of an adaptation algorithm for the detection thresholds...... remains constant regardless of engine age and changing detection threshold values. This, in turn, guarantees the same event detection performance for new and aged engines/sensors. Adaptation of the engine knock detection threshold is given as an example. Publication date: 2008...
Kinetics of the early adaptive response and adaptation threshold dose
International Nuclear Information System (INIS)
Mendiola C, M.T.; Morales R, P.
2003-01-01
The expression kinetics of the adaptive response (AR) in mouse leukocytes in vivo and the minimum dose of gamma radiation that induces it were determined. Mice were exposed to 0.005 or 0.02 Gy of ¹³⁷Cs as the adaptation dose and 1 h later to the challenge dose (1.0 Gy); another group was exposed only to 1.0 Gy, and DNA damage was evaluated with the comet assay. Treatment with 0.005 Gy did not induce an AR, and 0.02 Gy caused an effect similar to that obtained with 0.01 Gy. The AR was shown from an interval of 0.5 h, with maximum expression at 5.0 h. The threshold dose to induce the AR is 0.01 Gy, and at 5.0 h the largest quantity of molecules presumably related to DNA protection is present. (Author)
Simplified Threshold RSA with Adaptive and Proactive Security
DEFF Research Database (Denmark)
Almansa Guerra, Jesus Fernando; Damgård, Ivan Bjerre; Nielsen, Jesper Buus
2006-01-01
We present the currently simplest, most efficient, optimally resilient, adaptively secure, and proactive threshold RSA scheme. A main technical contribution is a new rewinding strategy for analysing threshold signature schemes. This new rewinding strategy makes it possible to prove adaptive security...... of a proactive threshold signature scheme which was previously assumed to be only statically secure. As a separate contribution, we prove that our protocol is secure in the UC framework....
Passive Sonar Target Detection Using Statistical Classifier and Adaptive Threshold
Directory of Open Access Journals (Sweden)
Hamed Komari Alaie
2018-01-01
This paper presents the results of an experimental investigation of target detection with passive sonar in the Persian Gulf. Detecting propagated sounds in the water is one of the basic challenges for researchers in the sonar field, and it becomes harder in shallow water (like the Persian Gulf) and with low-noise vessels. Generally, in passive sonar, targets are detected by the sonar equation (with a constant threshold), which increases the detection error in shallow water. The purpose of this study is to propose a new method for detecting targets in passive sonar using an adaptive threshold. In this method, the target signal (sound) is processed in the time and frequency domains. For classification, a Bayesian classifier is used, and the posterior distribution is estimated by a Maximum Likelihood Estimation algorithm. Finally, targets are detected by combining the detection points in both domains using a Least Mean Square (LMS) adaptive filter. The results show that the proposed method improves the true detection rate by about 24% compared with the best previous detection method.
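The LMS adaptive filter used for the fusion step is standard; a minimal sketch follows. The filter order, step size, and the system-identification demo are our assumptions, not the paper's configuration, which fuses time- and frequency-domain detection points.

```python
import numpy as np

def lms_filter(d, x, order=4, mu=0.05):
    """Least Mean Square (LMS) adaptive filter.

    Weights w adapt so that w . [x[n-1], ..., x[n-order]] tracks the
    desired signal d[n]; returns output, error, and final weights.
    """
    n = len(d)
    w = np.zeros(order)
    y, e = np.zeros(n), np.zeros(n)
    for i in range(order, n):
        xi = x[i - order:i][::-1]          # most recent sample first
        y[i] = w @ xi
        e[i] = d[i] - y[i]
        w = w + 2 * mu * e[i] * xi         # stochastic gradient step
    return y, e, w

# System identification demo: d is x passed through a 2-tap system.
rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
d = np.zeros(2000)
d[2:] = 0.5 * x[1:-1] - 0.3 * x[:-2]       # d[n] = 0.5 x[n-1] - 0.3 x[n-2]
_, e, w = lms_filter(d, x)
```

With a stable step size (mu small relative to the inverse input power times the filter order), the weights converge to the unknown system's taps and the error decays toward zero.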
Threshold-based Adaptive Detection for WSN
Abuzaid, Abdulrahman I.; Ahmed, Qasim Zeeshan; Alouini, Mohamed-Slim
2014-01-06
Efficient receiver designs for wireless sensor networks (WSNs) are becoming increasingly important. Cooperative WSNs communicate with the use of L sensors. As the receiver is constrained, it can only process U out of L sensors. Channel shortening and reduced-rank techniques were employed to design the preprocessing matrix. In this work, a receiver structure is proposed which combines the joint iterative optimization (JIO) algorithm and our proposed threshold selection criteria. This receiver structure assists in determining the optimal Uopt. It also provides the freedom to choose U
Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image
Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.
2017-12-01
Steel strips are used extensively for white goods, auto bodies and other purposes where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For defect detection, the use of gradients is very popular for highlighting and subsequently segmenting areas of interest in a surface inspection system. Most of the time, segmentation with a fixed threshold value leads to unsatisfactory results. As defects can be both very small and very large, segmentation of a gradient image based on percentile thresholding can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water deposits (a pseudo defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above specific gray levels of the gradient image. The method is able to segment defective regions selectively, preserving the characteristics of defects irrespective of their size. The developed method performs better than the Otsu thresholding method and an adaptive thresholding method based on local properties.
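The adaptive-percentile idea can be sketched as follows. All constants here (base percentile, "strong gradient" level, adjustment rate, clip range) are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def adaptive_percentile_threshold(grad, base_pct=99.0, strong_level=64.0,
                                  pct_per_kpixel=0.5):
    """Global adaptive percentile thresholding of a gradient image.

    The percentile is lowered in proportion to how many pixels exceed
    a 'strong gradient' level, so a large defect is not truncated by
    a fixed high percentile, while a clean image keeps a high
    percentile and is not over-segmented.
    """
    n_strong = int(np.sum(grad > strong_level))
    pct = base_pct - pct_per_kpixel * (n_strong / 1000.0)
    pct = float(np.clip(pct, 50.0, base_pct))
    t = np.percentile(grad, pct)
    return grad > t, pct

# A clean image keeps the base percentile; a defective one lowers it.
flat = np.zeros((200, 200))
mask_flat, pct_flat = adaptive_percentile_threshold(flat)
defect = np.zeros((200, 200))
defect[:100, :] = 100.0                    # 20,000 strong-gradient pixels
mask_def, pct_def = adaptive_percentile_threshold(defect)
```

Tying the percentile to the count of strong-gradient pixels is what makes one global threshold serve both small and large defects.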
METHOD OF ADAPTIVE MAGNETOTHERAPY
Rudyk, Valentine Yu.; Tereshchenko, Mykola F.; Rudyk, Tatiana A.
2016-01-01
Practical realization of adaptive control in magnetotherapy apparatus is of real importance at the current stage of magnetotherapy development. The structural scheme of a method of adaptive pulsed magnetotherapy and an algorithm for adaptive feedback control during the magnetotherapy procedure are presented. Feedback in the magnetotherapy complex is realized through control of the magnetic induction and analysis of the patient's physiological indices (temperature, pulse, blood pressure, ...
Modern Adaptive Analytics Approach to Lowering Seismic Network Detection Thresholds
Johnson, C. E.
2017-12-01
Modern seismic networks present a number of challenges, perhaps most notably those related to (1) extreme variation in station density, (2) temporal variation in station availability, and (3) the need to achieve detectability for much smaller events of strategic importance. The first of these has been reasonably addressed in the development of modern seismic associators, such as GLASS 3.0 by the USGS/NEIC, though some work remains to be done in this area. The latter two challenges, however, demand special attention. Station availability is impacted by weather, equipment failure, and the addition or removal of stations, and while thresholds have been pushed to increasingly smaller magnitudes, new algorithms are needed to achieve even lower thresholds. Station availability can be addressed by a modern, adaptive architecture that maintains specified performance envelopes using adaptive analytics coupled with complexity theory. Finally, detection thresholds can be lowered using a novel approach that tightly couples waveform analytics with the event detection and association processes, based on a principled repicking algorithm that uses particle realignment for enhanced phase discrimination.
Adaptive local thresholding for robust nucleus segmentation utilizing shape priors
Wang, Xiuzhong; Srinivas, Chukka
2016-03-01
This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied on the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, based on that, the refined edge. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. The saliency weighted foreground histogram and background histogram are then created. These two histograms are used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then used to generate the threshold for each pixel via interpolation. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.
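The per-block minimal-error threshold step can be sketched from two sampled histograms. This is a simplified, unweighted version of the step described above (no saliency weights, no per-pixel interpolation), with a dark-foreground convention assumed.

```python
import numpy as np

def min_error_threshold(fg, bg, bins=64, lo=0.0, hi=1.0):
    """Threshold minimizing foreground/background classification error.

    fg, bg: sampled foreground and background pixel intensities for
    one block (dark-nuclei convention: foreground lies below the
    threshold). The cut that misclassifies the fewest samples is
    returned.
    """
    edges = np.linspace(lo, hi, bins + 1)
    hf, _ = np.histogram(fg, bins=edges)
    hb, _ = np.histogram(bg, bins=edges)
    # misclassified = foreground above the cut + background below it
    err = [hf[i:].sum() + hb[:i].sum() for i in range(bins + 1)]
    return float(edges[int(np.argmin(err))])

fg = [0.10, 0.15, 0.20, 0.25]     # sampled nucleus intensities
bg = [0.70, 0.75, 0.80]           # sampled background intensities
t = min_error_threshold(fg, bg)
```

In the full method each block's threshold is computed this way from saliency-weighted histograms, and the block thresholds are then interpolated to give every pixel its own threshold.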
Saucez, Ph
2001-01-01
The general Method of Lines (MOL) procedure provides a flexible format for the solution of all the major classes of partial differential equations (PDEs) and is particularly well suited to evolutionary, nonlinear wave PDEs. Despite its utility, however, there are relatively few texts that explore it at a more advanced level and reflect the method's current state of development. Written by distinguished researchers in the field, Adaptive Method of Lines reflects the diversity of techniques and applications related to the MOL. Most of its chapters focus on a particular application but also provide a discussion of underlying philosophy and technique. Particular attention is paid to the concept of both temporal and spatial adaptivity in solving time-dependent PDEs. Many important ideas and methods are introduced, including moving grids and grid refinement, static and dynamic gridding, the equidistribution principle and the concept of a monitor function, the minimization of a functional, and the moving finite elem...
Sa, Qila; Wang, Zhihui
2018-03-01
At present, content-based video retrieval (CBVR) is the mainstream video retrieval approach, using the video's own features to perform automatic identification and retrieval. This approach relies on a key technology: shot segmentation. In this paper, a method for automatic video shot boundary detection with K-means clustering and improved adaptive dual-threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm, namely frames with significant change and frames without. Then, based on the classification results, the improved adaptive dual-threshold comparison method is used to determine abrupt as well as gradual shot boundaries. Finally, an automatic video shot boundary detection system is achieved.
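The classic dual-threshold comparison on frame-difference scores can be sketched as follows. The fixed thresholds here are illustrative; the paper's contribution is precisely to adapt them (with K-means pre-classification) rather than fix them.

```python
def detect_shots(diffs, t_high=0.5, t_low=0.2):
    """Dual-threshold shot boundary detection on frame-difference scores.

    A single difference above t_high marks an abrupt cut; a run of
    differences between t_low and t_high whose accumulated sum exceeds
    t_high marks a gradual transition starting at the run's first frame.
    """
    cuts, graduals = [], []
    run_start, run_sum = None, 0.0
    for i, d in enumerate(diffs):
        if d >= t_high:                      # abrupt cut
            cuts.append(i)
            run_start, run_sum = None, 0.0
        elif d >= t_low:                     # candidate gradual frame
            if run_start is None:
                run_start, run_sum = i, 0.0
            run_sum += d
            if run_sum >= t_high:
                graduals.append(run_start)
                run_start, run_sum = None, 0.0
        else:                                # quiet frame: reset the run
            run_start, run_sum = None, 0.0
    return cuts, graduals

cuts, graduals = detect_shots([0.05, 0.9, 0.05, 0.3, 0.3, 0.05, 0.1])
```

Accumulating mid-range differences is what lets the low threshold catch dissolves and fades that never produce a single large inter-frame jump.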
Viral Diversity Threshold for Adaptive Immunity in Prokaryotes
Weinberger, Ariel D.; Wolf, Yuri I.; Lobkovsky, Alexander E.; Gilmore, Michael S.; Koonin, Eugene V.
2012-01-01
Bacteria and archaea face continual onslaughts of rapidly diversifying viruses and plasmids. Many prokaryotes maintain adaptive immune systems known as clustered regularly interspaced short palindromic repeats (CRISPR) and CRISPR-associated genes (Cas). CRISPR-Cas systems are genomic sensors that serially acquire viral and plasmid DNA fragments (spacers) that are utilized to target and cleave matching viral and plasmid DNA in subsequent genomic invasions, offering critical immunological memory. Only 50% of sequenced bacteria possess CRISPR-Cas immunity, in contrast to over 90% of sequenced archaea. To probe why half of bacteria lack CRISPR-Cas immunity, we combined comparative genomics and mathematical modeling. Analysis of hundreds of diverse prokaryotic genomes shows that CRISPR-Cas systems are substantially more prevalent in thermophiles than in mesophiles. With sequenced bacteria disproportionately mesophilic and sequenced archaea mostly thermophilic, the presence of CRISPR-Cas appears to depend more on environmental temperature than on bacterial-archaeal taxonomy. Mutation rates are typically severalfold higher in mesophilic prokaryotes than in thermophilic prokaryotes. To quantitatively test whether accelerated viral mutation leads microbes to lose CRISPR-Cas systems, we developed a stochastic model of virus-CRISPR coevolution. The model competes CRISPR-Cas-positive (CRISPR-Cas+) prokaryotes against CRISPR-Cas-negative (CRISPR-Cas−) prokaryotes, continually weighing the antiviral benefits conferred by CRISPR-Cas immunity against its fitness costs. Tracking this cost-benefit analysis across parameter space reveals viral mutation rate thresholds beyond which CRISPR-Cas cannot provide sufficient immunity and is purged from host populations. These results offer a simple, testable viral diversity hypothesis to explain why mesophilic bacteria disproportionately lack CRISPR-Cas immunity. More generally, fundamental limits on the adaptability of biological
Hierarchical Threshold Adaptive for Point Cloud Filter Algorithm of Moving Surface Fitting
Directory of Open Access Journals (Sweden)
ZHU Xiaoxiao
2018-02-01
To improve the accuracy, efficiency, and adaptability of point cloud filtering algorithms, a hierarchical-threshold adaptive point cloud filter algorithm based on moving surface fitting is proposed. First, noisy points are removed using a statistical histogram method. Second, a grid index is established by grid segmentation, and the surface equation is set up through the lowest points among the neighborhood grids; the real and fitted heights are calculated, and the difference between the elevation and the threshold determines the classification. Finally, to improve the filtering accuracy, hierarchical filtering is used to change the grid size and automatically set the neighborhood size and threshold until the filtering result reaches the accuracy requirement. Test data provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) are used to verify the algorithm. The type I, type II, and total errors are 7.33%, 10.64%, and 6.34% respectively. The algorithm is compared with the eight classical filtering algorithms published by ISPRS. The experimental results show that the method is well adapted and yields highly accurate filtering results.
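The core grid-and-threshold test can be sketched in simplified form. This sketch keeps only the binning, per-cell minimum, and elevation-difference threshold; the paper additionally fits a moving surface through neighboring minima and iterates hierarchically with a shrinking grid, which is omitted here.

```python
import numpy as np

def ground_filter(points, cell=1.0, dz=0.3):
    """One pass of a grid-based ground filter (simplified sketch).

    Points are binned into square grid cells; the lowest point in each
    cell approximates the local ground surface, and points within dz
    of that minimum are classified as ground.
    """
    pts = np.asarray(points, float)
    ix = np.floor(pts[:, 0] / cell).astype(int)
    iy = np.floor(pts[:, 1] / cell).astype(int)
    zmin = {}
    for cx, cy, z in zip(ix, iy, pts[:, 2]):
        key = (cx, cy)
        zmin[key] = min(z, zmin.get(key, np.inf))
    keep = np.array([z - zmin[(cx, cy)] <= dz
                     for cx, cy, z in zip(ix, iy, pts[:, 2])])
    return keep

# Three points share cell (0, 0); the 12 m point sits 2 m above the
# local minimum and is rejected as non-ground.
pts = [[0.1, 0.1, 10.0], [0.2, 0.3, 10.1], [0.5, 0.5, 12.0], [1.5, 0.5, 10.05]]
keep = ground_filter(pts)
```

Shrinking `cell` and `dz` over successive passes, as the hierarchical scheme does, is what lets the same test work on both flat and complex terrain.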
Spike-threshold adaptation predicted by membrane potential dynamics in vivo.
Directory of Open Access Journals (Sweden)
Bertrand Fontaine
2014-04-01
Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in its spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in auditory neuron responses recorded in vivo in barn owls. We found that the spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential on a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered out by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo.
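A minimal model of this kind of threshold adaptation can be written as a first-order relaxation toward a voltage-dependent target. All constants below are illustrative, not the fitted values from the paper.

```python
import numpy as np

def adaptive_threshold_trace(v, dt=0.1, tau=5.0, theta0=-50.0,
                             alpha=0.5, v_rest=-70.0):
    """Spike threshold tracking the membrane potential (sketch).

    d(theta)/dt = (theta0 + alpha*(v - v_rest) - theta) / tau :
    theta relaxes toward a value that rises with depolarization, so
    slow voltage drift raises the threshold along with the voltage,
    and only fast-rising inputs can cross it.
    """
    theta = np.empty(len(v))
    theta[0] = theta0
    for i in range(1, len(v)):
        target = theta0 + alpha * (v[i - 1] - v_rest)
        theta[i] = theta[i - 1] + dt * (target - theta[i - 1]) / tau
    return theta

# A slow ramp never crosses the adapting threshold; a fast step does.
slow = np.linspace(-70.0, -50.0, 5000)
th_slow = adaptive_threshold_trace(slow)
fast = np.full(100, -70.0)
fast[50:] = -45.0
th_fast = adaptive_threshold_trace(fast)
```

Because the threshold lags the voltage only by roughly `tau`, depolarizations slower than `tau` are filtered out while coincident, millisecond-scale inputs still trigger spikes, which is the behavior reported above.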
Intelligent Mechanical Fault Diagnosis Based on Multiwavelet Adaptive Threshold Denoising and MPSO
Directory of Open Access Journals (Sweden)
Hao Sun
2014-01-01
The condition diagnosis of rotating machinery depends largely on feature analysis of the vibration signals measured for diagnosis. However, the signals measured from rotating machinery are usually nonstationary and nonlinear and contain noise, so the useful fault features are hidden in heavy background noise. In this paper, a novel fault diagnosis method for rotating machinery based on multiwavelet adaptive threshold denoising and mutation particle swarm optimization (MPSO) is proposed. The Geronimo, Hardin, and Massopust (GHM) multiwavelet is employed for extracting weak fault features from background noise, and a method of adaptively selecting an appropriate multiwavelet threshold based on the energy ratio of the multiwavelet coefficients is presented. Six nondimensional symptom parameters (SPs) in the frequency domain are defined to reflect the features of the vibration signals measured in each state. A detection index (DI) based on statistical theory is also defined to evaluate the sensitivity of each SP for condition diagnosis. An MPSO algorithm with adaptive inertia weight adjustment and particle mutation is proposed for condition identification. The MPSO algorithm effectively solves the local optimum and premature convergence problems of the conventional particle swarm optimization (PSO) algorithm and can provide more accurate fault diagnosis. Practical examples of fault diagnosis for rolling element bearings verify the effectiveness of the proposed method.
A Fast Method for Measuring Psychophysical Thresholds Across the Cochlear Implant Array
Directory of Open Access Journals (Sweden)
Julie A. Bierer
2015-02-01
A rapid threshold measurement procedure, based on Bekesy tracking, is proposed and evaluated for use with cochlear implants (CIs). Fifteen postlingually deafened adult CI users participated. Absolute thresholds for 200-ms trains of biphasic pulses were measured using the new tracking procedure and compared with thresholds obtained with a traditional forced-choice adaptive procedure under both monopolar and quadrupolar stimulation. Virtual spectral sweeps across the electrode array were implemented in the tracking procedure via current steering, which divides the current between two adjacent electrodes and varies the proportion of current directed to each electrode. Overall, no systematic differences were found between threshold estimates with the new channel sweep procedure and estimates using the adaptive forced-choice procedure. Test-retest reliability for thresholds from the sweep procedure was somewhat poorer than for thresholds from the forced-choice procedure. However, the new method was about 4 times faster for the same number of repetitions. Overall, the reliability and speed of the new tracking procedure give it the potential to estimate thresholds in a clinical setting. Rapid threshold estimation could be of particular clinical importance in combination with focused stimulation techniques, which produce larger threshold variations between electrodes.
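The Bekesy-tracking principle the procedure is built on can be sketched with a fixed-step tracker and a simulated listener. The step size, trial count, and reversal-averaging rule are illustrative assumptions; the actual procedure additionally sweeps the stimulus across electrodes via current steering.

```python
def bekesy_track(audible, start=50.0, step=2.0, n_trials=60):
    """Bekesy-style threshold tracking (sketch).

    The level decreases while the listener reports hearing the
    stimulus and increases while they do not; the threshold estimate
    is the mean of the reversal levels (discarding the first).
    `audible(level)` stands in for the listener's response.
    """
    level, direction = start, -1          # start descending
    reversals = []
    for _ in range(n_trials):
        new_dir = -1 if audible(level) else +1
        if new_dir != direction:          # response flipped: a reversal
            reversals.append(level)
            direction = new_dir
        level += direction * step
    return sum(reversals[1:]) / max(1, len(reversals) - 1)

# Deterministic simulated listener with a true threshold of 30 units:
estimate = bekesy_track(lambda level: level >= 30.0)
```

Because every trial after convergence straddles the true threshold, the reversal levels bracket it and their mean is a fast, unbiased estimate for this idealized listener.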
Directory of Open Access Journals (Sweden)
Deni Sutaji
2016-07-01
Segmentation of blood vessels in retinal fundus images is important in medicine because it can be used to detect diseases such as diabetic retinopathy, hypertension, and cardiovascular disease. A doctor takes about two hours to trace the blood vessels of the retina, so faster screening methods are needed. Previous methods can segment blood vessels while remaining sensitive to variations in vessel width, but they over-segment in areas of pathology. Therefore, this study aims to develop a segmentation method for blood vessels in retinal fundus images that reduces over-segmentation in pathological areas using gradient-based adaptive thresholding and region growing. The proposed method consists of three stages: segmentation of the main blood vessels, detection of pathological areas, and segmentation of thin blood vessels. Main blood vessel segmentation uses high-pass filtering and top-hat reconstruction on the contrast-adjusted green channel, which yields a clear separation between object and background. Pathological areas are detected using the gradient-based adaptive thresholding method. Thin blood vessel segmentation uses region growing based on the main blood vessel segmentation and the detected pathological areas. The outputs of the main and thin blood vessel segmentations are then combined to reconstruct a blood vessel image as the system output. This method segments blood vessels in the DRIVE retinal fundus image set with an accuracy of 95.25% and an area under the receiver operating characteristic (ROC) curve (AUC) of 74.28%. Keywords: blood vessel, retinal fundus image, gradient-based adaptive thresholding, pathology, region growing, segmentation.
Directory of Open Access Journals (Sweden)
Jing Tang
2018-02-01
Full Text Available This paper presents a novel methodology for detecting the gait phase of human walking on level ground. The previous threshold method (TM) sets a threshold to divide the ground contact forces (GCFs) into on-ground and off-ground states. However, previous gait phase detection methods do not adapt to different people and different walking speeds. Therefore, this paper presents a self-tuning triple threshold algorithm (STTTA) that calculates adjustable thresholds to adapt to human walking. Two force-sensitive resistors (FSRs) were placed on the ball and heel to measure GCFs. Three thresholds (high, middle, and low) were used to search out the maximum and minimum GCFs for the self-adjustment of the thresholds. The high threshold was the main threshold used to divide the GCFs into on-ground and off-ground statuses. The gait phases were then obtained through the gait phase detection algorithm (GPDA), which provides the rules that govern the STTTA calculations. Finally, the reliability of the STTTA was determined by comparing its results with the Mariani method (referenced as the timing analysis module, TAM) and the Lopez-Meyer method. Experimental results show that the proposed method can detect gait phases in real time and achieves high reliability compared with previous methods in the literature. In addition, the proposed method exhibits strong adaptability to different wearers walking at different speeds.
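A minimal sketch of the self-tuning idea: the three thresholds are recomputed from the maximum and minimum GCFs seen in a recent window, so they track each wearer and speed. The function names and the 0.75/0.5/0.25 fractions are illustrative assumptions, not the paper's constants:

```python
def update_thresholds(gcf_window):
    """Recompute adjustable thresholds from the recent ground-contact
    -force (GCF) samples, in the spirit of a self-tuning triple
    threshold. Fractions of the observed span are placeholders."""
    g_max, g_min = max(gcf_window), min(gcf_window)
    span = g_max - g_min
    high = g_min + 0.75 * span   # main on-ground / off-ground divider
    mid = g_min + 0.5 * span
    low = g_min + 0.25 * span
    return high, mid, low

def on_ground(gcf, high):
    """The high threshold alone decides on-ground vs. off-ground."""
    return gcf >= high
```

Because the thresholds are fractions of the currently observed force range rather than fixed values, a heavier wearer or a faster gait simply shifts the range and the dividers follow.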
Turbidity threshold sampling: Methods and instrumentation
Rand Eads; Jack Lewis
2001-01-01
Traditional methods for determining the frequency of suspended sediment sample collection often rely on measurements, such as water discharge, that are not well correlated to sediment concentration. Stream power is generally not a good predictor of sediment concentration for rivers that transport the bulk of their load as fines, due to the highly variable routing of...
Senaras, C.; Pennell, M.; Chen, W.; Sahiner, B.; Shana'ah, A.; Louissaint, A.; Hasserjian, R. P.; Lozanski, G.; Gurcan, M. N.
2017-03-01
Immunohistochemical detection of the FOXP3 antigen is a useful marker for detection of regulatory T lymphocytes (TR) in formalin-fixed and paraffin-embedded sections of different types of tumor tissue. TR play a major role in the homeostasis of normal immune systems, where they prevent autoreactivity of the immune system towards the host. This beneficial effect of TR is frequently "hijacked" by malignant cells, where tumor-infiltrating regulatory T cells are recruited by the malignant cells to inhibit the beneficial immune response of the host against the tumor cells. In the majority of human solid tumors, an increased number of tumor-infiltrating FOXP3-positive TR is associated with worse outcome. However, in follicular lymphoma (FL) the impact of the number and distribution of TR on the outcome still remains controversial. In this study, we present a novel method to detect and enumerate nuclei from FOXP3-stained images of FL biopsies. The proposed method defines a new adaptive thresholding procedure, namely the optimal adaptive thresholding (OAT) method, which aims to minimize under-segmented and over-segmented nuclei in coarse segmentation. Next, we integrate a parameter-free elliptical arc and line segment detector (ELSD) as additional information to refine segmentation results and to split most of the merged nuclei. Finally, we utilize a state-of-the-art superpixel method, Simple Linear Iterative Clustering (SLIC), to split the remaining merged nuclei. Our dataset consists of 13 region-of-interest images containing 769 negative and 88 positive nuclei. Three expert pathologists evaluated the method and reported sensitivity values in detecting negative and positive nuclei ranging from 83-100% and 90-95%, and precision values of 98-100% and 99-100%, respectively. The proposed solution can be used to investigate the impact of FOXP3-positive nuclei on the outcome and prognosis in FL.
Image reconstruction with an adaptive threshold technique in electrical resistance tomography
International Nuclear Information System (INIS)
Kim, Bong Seok; Khambampati, Anil Kumar; Kim, Sin; Kim, Kyung Youn
2011-01-01
In electrical resistance tomography, electrical currents are injected through electrodes placed on the surface of a domain and the corresponding voltages are measured. Based on these current and voltage data, the cross-sectional resistivity distribution is reconstructed. Electrical resistance tomography offers high temporal resolution for monitoring fast transient processes, but improving the spatial resolution of the reconstructed images remains a challenging problem. In this paper, a novel image reconstruction technique is proposed to improve the spatial resolution by applying an adaptive threshold method within the iterative Gauss–Newton method. Numerical simulations and phantom experiments have been performed to illustrate the superior performance of the proposed scheme in terms of spatial resolution.
‘Soglitude’- introducing a method of thinking thresholds
Directory of Open Access Journals (Sweden)
Tatjana Barazon
2010-04-01
Full Text Available ‘Soglitude’ is an invitation to acknowledge the existence of thresholds in thought. A threshold in thought designates the indetermination, the passage, the evolution of every state the world is in. The creation we add to it, and the objectivity we suppose, on the border of those two ideas lies our perceptive threshold. No state will ever be permanent, and in order to stress the temporary, fluent character of the world and our perception of it, we want to introduce a new suitable method to think change and transformation, when we acknowledge our own threshold nature. The contributions gathered in this special issue come from various disciplines: anthropology, philosophy, critical theory, film studies, political science, literature and history. The variety of these insights shows the resonance of the idea of threshold in every category of thought. We hope to enlarge the notion in further issues on physics and chemistry, as well as mathematics. The articles in this issue introduce the method of threshold thinking by showing the importance of the in-between, of the changing of perspective in their respective domain. The ‘Documents’ section named INTERSTICES, includes a selection of poems, two essays, a philosophical-artistic project called ‘infraphysique’, a performance on thresholds in the soul, and a dialogue with Israel Rosenfield. This issue presents a kaleidoscope of possible threshold thinking and hopes to initiate new ways of looking at things.For every change that occurs in reality there is a subjective counterpart in our perception and this needs to be acknowledged as such. What we name objective is reflected in our own personal perception in its own personal manner, in such a way that the objectivity of an event might altogether be questioned. The absolute point of view, the view from “nowhere”, could well be the projection that causes dogmatism. By introducing the method of thinking thresholds into a system, be it
Alternative method for determining anaerobic threshold in rowers
Directory of Open Access Journals (Sweden)
Giovani Dos Santos Cunha
2008-01-01
Full Text Available http://dx.doi.org/10.5007/1980-0037.2008v10n4p367 In rowing, the standard breathing that athletes are trained to use makes it difficult, or even impossible, to detect ventilatory limits, due to the coupling of the breath with the technical movement. For this reason, some authors have proposed determining the anaerobic threshold from the respiratory exchange ratio (RER), but there is not yet consensus on what value of RER should be used. The objective of this study was to test which value of RER corresponds to the anaerobic threshold and whether this value can be used as an independent parameter for determining the anaerobic threshold of rowers. The sample comprised 23 male rowers. They were submitted to a maximal cardiorespiratory test on a rowing ergometer with concurrent ergospirometry in order to determine VO2max and the physiological variables corresponding to their anaerobic threshold. The anaerobic threshold was determined using the Dmax (maximal distance) method. The physiological variables were classified into maximum values and anaerobic threshold values. At maximal effort the rowers reached VO2 of 58.2±4.4 ml.kg-1.min-1, lactate of 8.2±2.1 mmol.L-1, power of 384±54.3 W and RER of 1.26±0.1. At the anaerobic threshold they reached VO2 of 46.9±7.5 ml.kg-1.min-1, lactate of 4.6±1.3 mmol.L-1, power of 300±37.8 W and RER of 0.99±0.1. Conclusions: the RER can be used as an independent method for determining the anaerobic threshold of rowers, adopting a value of 0.99; however, RER should exhibit a non-linear increase above this figure.
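The Dmax method itself is easy to state: find the point on the lactate-power curve farthest from the straight line joining its end points. A pure-Python sketch that operates directly on the measured points (the original method fits a polynomial through them first, which this sketch skips):

```python
import math

def dmax_threshold(power, lactate):
    """Dmax variant on raw points: return the power at which the
    lactate point lies farthest (perpendicularly) from the straight
    line joining the first and last measurements."""
    x0, y0, x1, y1 = power[0], lactate[0], power[-1], lactate[-1]
    norm = math.hypot(x1 - x0, y1 - y0)

    def dist(x, y):
        # standard point-to-line distance for the end-point chord
        return abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0) / norm

    return max(zip(power, lactate), key=lambda pt: dist(*pt))[0]
```

For a convex lactate curve the maximum-distance point falls where the curve bends away from the chord, which is exactly the inflection the Dmax convention takes as the threshold.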
Fault evaluation and adaptive threshold detection of helicopter pilot ...
African Journals Online (AJOL)
Hitherto, in aerospace science and industry, acceptable results on the control behavior of the human operator (pilot) have been obtained using conventional methods. However, far less research has been done on personal characteristics. The investigations performed show that many of the faults that occur (especially in ...
Low-Threshold Active Teaching Methods for Mathematic Instruction
Marotta, Sebastian M.; Hargis, Jace
2011-01-01
In this article, we present a large list of low-threshold active teaching methods categorized so the instructor can efficiently access and target the deployment of conceptually based lessons. The categories include teaching strategies for lecture on large and small class sizes; student action individually, in pairs, and groups; games; interaction…
Constructing financial network based on PMFG and threshold method
Nie, Chun-Xiao; Song, Fu-Tie
2018-04-01
Based on the planar maximally filtered graph (PMFG) and the threshold method, we introduced a correlation-based network named the PMFG-based threshold network (PTN). We studied the community structure of PTN and applied the ISOMAP algorithm to represent PTN in a low-dimensional Euclidean space. The results show that the communities correspond well to the clusters in the Euclidean space. Further, we studied the dynamics of the community structure and constructed the normalized mutual information (NMI) matrix. Based on real market data, we found that market volatility can lead to dramatic changes in the community structure, and that the structure is more stable during the financial crisis.
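The threshold half of the construction is straightforward: keep an edge wherever the correlation between two assets exceeds a cutoff. A minimal sketch (the PMFG planarity filtering step is omitted):

```python
def threshold_network(corr, theta):
    """Build the edge list of a threshold network: node pair (i, j)
    is connected when its correlation exceeds theta. `corr` is a
    symmetric matrix given as nested lists."""
    n = len(corr)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if corr[i][j] > theta]
```

Sweeping theta from high to low grows the network from isolated strong pairs toward a dense graph, which is what makes the community structure depend on the chosen threshold.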
Threshold-adaptive canny operator based on cross-zero points
Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu
2018-03-01
Canny edge detection [1] is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed, and it has been widely applied in computer vision systems. Two thresholds have to be set before edges are separated from the background. Usually, two static values are chosen as the thresholds based on developer experience [2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is deduced to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantage for stable edge detection under changing illumination.
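The paper's cross-zero interpolation is not reproduced here, but a widely used baseline for automatic Canny thresholds, shown for contrast, derives both values from the median intensity, so they track overall illumination:

```python
def auto_canny_thresholds(gray_values, sigma=0.33):
    """Median-based heuristic for Canny's low/high thresholds (a
    common rule of thumb, not the paper's cross-zero method): place
    them a fixed fraction below and above the median gray level."""
    vals = sorted(gray_values)
    med = vals[len(vals) // 2]
    low = max(0, round((1.0 - sigma) * med))
    high = min(255, round((1.0 + sigma) * med))
    return low, high
```

Because the median rises and falls with scene brightness, the hysteresis band shifts with illumination instead of staying pinned to fixed values.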
Mass Detection in Mammographic Images Using Wavelet Processing and Adaptive Threshold Technique.
Vikhe, P S; Thool, V R
2016-04-01
Detection of masses in mammograms for early diagnosis of breast cancer is a significant task in the reduction of the mortality rate. However, in some cases, screening for masses is difficult for the radiologist due to variation in contrast, fuzzy edges and noisy mammograms. Masses and micro-calcifications are the distinctive signs for diagnosis of breast cancer. This paper presents a method for mass enhancement using a piecewise linear operator in combination with wavelet processing of mammographic images. The method includes artifact suppression and pectoral muscle removal based on morphological operations. Finally, mass segmentation using an adaptive threshold technique is carried out to separate the mass from the background. The proposed method has been tested on 130 (45 + 85) images with 90.9 and 91% True Positive Fraction (TPF) at 2.35 and 2.1 average False Positives Per Image (FP/I) on two different databases, namely the Mammographic Image Analysis Society (MIAS) and the Digital Database for Screening Mammography (DDSM). The obtained results show that the proposed technique improves diagnosis in early breast cancer detection.
International Nuclear Information System (INIS)
Qin, M; Chen, D Y; Wang, L L; Yu, X Y
2006-01-01
The subject investigated in this paper is an 8-electrode ECT system for oil-water two-phase flow, and the measuring principle is analysed. Within the ART image-reconstruction algorithm, an adaptive threshold image reconstruction is presented to improve the quality of image reconstruction and the accuracy of concentration calculation; the measurement error is generally about 1%. This method avoids many defects of other measurement methods, such as slow speed, high cost and poor security, and therefore offers a new approach to concentration measurement of oil-water two-phase flow.
Torque-onset determination: Unintended consequences of the threshold method.
Dotan, Raffy; Jenkins, Glenn; O'Brien, Thomas D; Hansen, Steve; Falk, Bareket
2016-12-01
Compared with visual torque-onset detection (TOD), threshold-based TOD produces an onset bias, which increases at lower torques or rates of torque development (RTD). The aim was to compare the effects of differential TOD bias on common contractile parameters in two torque-disparate groups. Fifteen boys and 12 men performed maximal, explosive, isometric knee extensions. Torque and EMG were recorded for each contraction. The best contractions were selected by peak torque (MVC) and peak RTD. Visual-TOD-based torque-time traces, electromechanical delays (EMD), and times to peak RTD (tRTD) were compared with corresponding data derived from fixed 4-Nm and relative 5%-MVC thresholds. The 5%-MVC TOD biases were similar for boys and men, but the corresponding 4-Nm-based biases were markedly different (40.3±14.1 vs. 18.4±7.1 ms, respectively; p<0.001). The men's torque kinetics tended to be faster than the boys' (NS), but the 4-Nm-based kinetics erroneously depicted the boys as being much faster to any given %MVC (p<0.001). When comparing contractile properties of dissimilar groups, e.g., children vs. adults, threshold-based TOD methods can misrepresent reality and lead to erroneous conclusions. Relative thresholds (e.g., 5% MVC) still introduce error, but group comparisons are not confounded. Copyright © 2016 Elsevier Ltd. All rights reserved.
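The core bias effect is simple to state: if torque rises roughly linearly after onset, a fixed threshold is crossed threshold/RTD seconds late, so the bias grows as RTD falls. A one-line illustration of that idealization (not the paper's measurement model):

```python
def onset_bias(threshold_nm, rtd_nm_per_s):
    """Idealized onset-detection lag for a linear torque rise: a
    fixed threshold (Nm) is crossed threshold/RTD seconds after the
    true onset, so lower RTD means a larger detection bias."""
    return threshold_nm / rtd_nm_per_s
```

At a 4-Nm threshold, a group with half the RTD incurs twice the onset bias, which is why fixed-threshold TOD confounds comparisons between weaker and stronger groups.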
Is there a minimum intensity threshold for resistance training-induced hypertrophic adaptations?
Schoenfeld, Brad J
2013-12-01
In humans, regimented resistance training has been shown to promote substantial increases in skeletal muscle mass. With respect to traditional resistance training methods, the prevailing opinion is that an intensity of greater than ~60 % of 1 repetition maximum (RM) is necessary to elicit significant increases in muscular size. It has been surmised that this is the minimum threshold required to activate the complete spectrum of fiber types, particularly those associated with the largest motor units. There is emerging evidence, however, that low-intensity resistance training performed with blood flow restriction (BFR) can promote marked increases in muscle hypertrophy, in many cases equal to that of traditional high-intensity exercise. The anabolic effects of such occlusion-based training have been attributed to increased levels of metabolic stress that mediate hypertrophy at least in part by enhancing recruitment of high-threshold motor units. Recently, several researchers have put forth the theory that low-intensity exercise (≤50 % 1RM) performed without BFR can promote increases in muscle size equal, or perhaps even superior, to that at higher intensities, provided training is carried out to volitional muscular failure. Proponents of the theory postulate that fatiguing contractions at light loads are simply a milder form of BFR and thus ultimately result in maximal muscle fiber recruitment. Current research indicates that low-load exercise can indeed promote increases in muscle growth in untrained subjects, and that these gains may be functionally, metabolically, and/or aesthetically meaningful. However, whether hypertrophic adaptations can equal those achieved with higher-intensity resistance exercise (>60 % 1RM) remains to be determined. Furthermore, it is not clear what, if any, hypertrophic effects are seen with low-intensity exercise in well-trained subjects, as experimental studies on this topic in this population are lacking. Practical
Watershed safety and quality control by safety threshold method
Da-Wei Tsai, David; Mengjung Chou, Caroline; Ramaraj, Rameshprabu; Liu, Wen-Cheng; Honglay Chen, Paris
2014-05-01
Taiwan has been flagged as one of the most dangerous countries by the IPCC and the World Bank. On such an exceptional and perilous island, we launched strategic research on land-use management for catastrophe prevention and environmental protection. This study applied watershed management by the "Safety Threshold Method" to restore watersheds and to prevent disasters and pollution on the island. For flood prevention, the restoration strategy reduced total runoff by an amount equivalent to 59.4% of annual infiltration. For sediment management, safety threshold management could reduce sediment below the equilibrium of the natural sediment cycle. For water quality, the best strategies exhibited significant total load reductions of 10% in carbon (BOD5), 15% in nitrogen (nitrate) and 9% in phosphorus (TP). We found that water quality could meet the BOD target with a 50% peak reduction under management. All the simulations demonstrated that the safety threshold method helps keep loadings within safe ranges for disasters and environmental quality. Moreover, the island-wide historical data show that past deforestation policy and mistaken economic projects were the prime culprits. Consequently, this study demonstrates a practical method to manage both disasters and pollution at the watershed scale through land-use management.
Variable threshold method for ECG R-peak detection.
Kew, Hsein-Ping; Jeong, Do-Un
2011-10-01
In this paper, a wearable belt-type ECG electrode, worn around the chest to measure real-time ECG, is produced in order to minimize the inconvenience of wearing. The ECG signal is detected using a potential-measurement instrument system and transmitted to a personal computer via an ultra-low-power wireless data communication unit built on a Zigbee-compatible wireless sensor node. ECG signals carry a great deal of clinical information for a cardiologist, and R-peak detection is especially important. R-peak detection generally uses a fixed threshold value, which produces detection errors when the baseline changes due to motion artifacts or when the signal amplitude changes. A preprocessing stage comprising differentiation and the Hilbert transform is used. Thereafter, a variable threshold method is used to detect the R-peaks, which is more accurate and efficient than the fixed-threshold method. R-peak detection on the MIT-BIH databases and on long-term real-time ECG was performed in this research in order to evaluate performance.
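A toy variable-threshold peak detector illustrates the general idea behind such methods (this is not the paper's algorithm; the seed fraction, decay constant, and 200 ms refractory period are assumptions):

```python
def detect_r_peaks(signal, fs, init_frac=0.6, decay=0.8):
    """Toy variable-threshold R-peak detector: the threshold tracks a
    fraction of recently detected peak amplitudes, so it follows
    baseline and amplitude drift instead of staying fixed."""
    threshold = init_frac * max(signal[:fs])   # seed from first second
    refractory = int(0.2 * fs)                 # 200 ms lockout
    peaks, last = [], -refractory
    for i in range(1, len(signal) - 1):
        is_local_max = signal[i - 1] < signal[i] >= signal[i + 1]
        if is_local_max and signal[i] > threshold and i - last >= refractory:
            peaks.append(i)
            last = i
            # exponentially blend the new peak into the threshold
            threshold = decay * threshold + (1 - decay) * init_frac * signal[i]
    return peaks
```

When R-wave amplitude slowly shrinks, the blended threshold shrinks with it, which is precisely where a fixed threshold starts missing beats.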
Directory of Open Access Journals (Sweden)
J. Soraghan
2007-01-01
Full Text Available Lattice vector quantization (LVQ) reduces coding complexity and computation due to its regular structure. A new multistage LVQ (MLVQ) using an adaptive subband thresholding technique is presented and applied to image compression. The technique concentrates on reducing the quantization error of the quantized vectors by “blowing out” the residual quantization errors with an LVQ scale factor. The significant coefficients of each subband are identified using an optimum adaptive thresholding scheme for each subband. A variable length coding procedure using Golomb codes is used to compress the codebook index, which produces a very efficient and fast technique for entropy coding. Experimental results using the MLVQ are shown to be significantly better than JPEG 2000 and recent VQ techniques for various test images.
Impact of sub and supra-threshold adaptation currents in networks of spiking neurons.
Colliaux, David; Yger, Pierre; Kaneko, Kunihiko
2015-12-01
Neuronal adaptation is the intrinsic capacity of the brain to change, by various mechanisms, its dynamical responses as a function of the context. Such a phenomenon, widely observed in vivo and in vitro, is known to be crucial in homeostatic regulation of activity and in gain control. The effects of adaptation have already been studied at the single-cell level, resulting from either voltage- or calcium-gated channels, both activated by spiking activity and modulating the dynamical responses of the neurons. In this study, by disentangling those effects into a linear (sub-threshold) and a non-linear (supra-threshold) part, we focus on the functional role of those two distinct components of adaptation in neuronal activity at various scales, starting from single-cell responses up to recurrent network dynamics, under stationary or non-stationary stimulation. The effects of slow currents on collective dynamics, such as modulation of population oscillations and reliability of spike patterns, are quantified for various types of adaptation in sparse recurrent networks.
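The two components distinguished above, a linear sub-threshold coupling and a non-linear spike-triggered increment, can both be seen in a toy adaptive integrate-and-fire neuron. All parameters are illustrative, in arbitrary units:

```python
def lif_adapt(T=500, dt=0.1, I=2.0, a=0.02, b=0.5):
    """Leaky integrate-and-fire neuron with an adaptation current w:
    `a` couples w to the sub-threshold voltage (linear part), while
    `b` increments w at each spike (supra-threshold part). Returns
    the spike times; adaptation lengthens the inter-spike intervals."""
    v, w, spikes = 0.0, 0.0, []
    for step in range(int(T / dt)):
        dv = (-v - w + I) * dt            # leaky integration, w opposes I
        dw = (a * v - w) * dt / 20.0      # slow, voltage-coupled adaptation
        v += dv
        w += dw
        if v >= 1.0:                      # threshold crossing
            spikes.append(step * dt)
            v = 0.0                       # reset
            w += b                        # spike-triggered adaptation jump
    return spikes
```

Because each spike adds to w, the effective drive I - w falls after every spike, producing the characteristic slowing of the firing rate under constant input.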
Robust Optimal Adaptive Control Method with Large Adaptive Gain
Nguyen, Nhan T.
2009-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain so as to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations of standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time-delay margin.
Methods of scaling threshold color difference using printed samples
Huang, Min; Cui, Guihua; Liu, Haoxue; Luo, M. Ronnier
2012-01-01
A series of printed samples on a semi-gloss paper substrate, with color differences of threshold magnitude, were prepared for scaling visual color difference and evaluating the performance of different methods. The probabilities of perceptibility were normalized to Z-scores, and the different color differences were scaled against these Z-scores. The visual color differences were thus obtained and checked with the STRESS factor. The results indicated that only the scales changed, while the relative scales between pairs in the data were preserved.
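The probability-to-Z-score step described above is the standard probit transform, and with Python's standard library it is a single call:

```python
from statistics import NormalDist

def prob_to_zscore(p_perceived):
    """Convert the proportion of observers who perceived a color
    difference into a Z-score via the inverse normal CDF (probit),
    placing perceptibility probabilities on an interval scale."""
    return NormalDist().inv_cdf(p_perceived)
```

A pair perceived by exactly half the observers maps to Z = 0 (the threshold point), while higher proportions map to positive Z-scores that grow non-linearly as the proportion approaches 1.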
Adaptive scalarization methods in multiobjective optimization
Eichfelder, Gabriele
2008-01-01
This book presents adaptive solution methods for multiobjective optimization problems based on parameter dependent scalarization approaches. Readers will benefit from the new adaptive methods and ideas for solving multiobjective optimization.
Circuit and method for controlling the threshold voltage of transistors.
2008-01-01
A control unit, for controlling a threshold voltage of a circuit unit having transistor devices, includes a reference circuit and a measuring unit. The measuring unit is configured to measure a threshold voltage of at least one sensing transistor of the circuit unit, and to measure a threshold
A Fiber Bragg Grating Interrogation System with Self-Adaption Threshold Peak Detection Algorithm.
Zhang, Weifang; Li, Yingwu; Jin, Bo; Ren, Feifei; Wang, Hongxun; Dai, Wei
2018-04-08
A Fiber Bragg Grating (FBG) interrogation system with a self-adaption threshold peak detection algorithm is proposed and experimentally demonstrated in this study. The system is composed of a field programmable gate array (FPGA) and advanced RISC machine (ARM) platform, a tunable Fabry-Perot (F-P) filter and an optical switch. To improve system resolution, the F-P filter was employed; as this filter is non-linear, it causes shifting of the central wavelengths, with the deviation compensated by parts of the circuit. Time-division multiplexing (TDM) of FBG sensors is achieved by the optical switch, allowing the system to combine up to 256 FBG sensors. A wavelength scanning speed of 800 Hz is achieved by the FPGA+ARM platform. In addition, a peak detection algorithm based on a self-adaption threshold is designed, and the peak recognition rate is 100%. Experiments at different temperatures were conducted to demonstrate the effectiveness of the system. Four FBG sensors were examined in a thermal chamber without stress. When the temperature changed from 0 °C to 100 °C, the degree of linearity between central wavelengths and temperature was about 0.999, with a temperature sensitivity of 10 pm/°C. The static interrogation precision reached 0.5 pm. Through comparison of different peak detection algorithms and interrogation approaches, the system was verified to have optimum comprehensive performance in terms of precision, capacity and speed.
Adaptive threshold control for auto-rate fallback algorithm in IEEE 802.11 multi-rate WLANs
Wu, Qilin; Lu, Yang; Zhu, Xiaolin; Ge, Fangzhen
2012-03-01
The IEEE 802.11 standard supports multiple rates for data transmission in the physical layer. Nowadays, to improve network performance, a rate adaptation scheme called auto-rate fallback (ARF) is widely adopted in practice. However, the ARF scheme suffers performance degradation in environments with multiple contending nodes. In this article, we propose a novel rate adaptation scheme called ARF with adaptive threshold control. In an environment with multiple contending nodes, the proposed scheme can effectively mitigate the effect of frame collisions on rate adaptation decisions by adaptively adjusting the rate-up and rate-down thresholds according to the current collision level. Simulation results show that the proposed scheme achieves significantly higher throughput than other existing rate adaptation schemes. Furthermore, the simulation results also demonstrate that the proposed scheme can effectively respond to varying channel conditions.
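A sketch of the adaptive-threshold idea on top of ARF: when a rate-up attempt fails immediately (suggesting collisions rather than an improved channel), the number of consecutive successes required for the next rate-up is raised. The constants and the doubling rule here are illustrative assumptions, not the article's algorithm:

```python
class AdaptiveARF:
    """ARF-style rate controller with an adaptive rate-up threshold."""

    def __init__(self, rates):
        self.rates = rates          # e.g. 802.11b rates in Mbit/s
        self.idx = 0                # start at the lowest rate
        self.up_threshold = 10      # consecutive successes to rate up
        self.successes = 0
        self.failures = 0
        self.just_raised = False

    def on_result(self, ok):
        """Feed one transmission result; returns the rate to use next."""
        if ok:
            self.successes += 1
            self.failures = 0
            self.just_raised = False
            if self.successes >= self.up_threshold and self.idx < len(self.rates) - 1:
                self.idx += 1
                self.successes = 0
                self.just_raised = True
        else:
            self.failures += 1
            self.successes = 0
            if self.just_raised:
                # failure right after a rate-up: likely collisions, so
                # demand more evidence before the next attempt
                self.up_threshold = min(self.up_threshold * 2, 50)
                self.just_raised = False
            if self.failures >= 2 and self.idx > 0:
                self.idx -= 1
                self.failures = 0
        return self.rates[self.idx]
```

Plain ARF would keep probing upward at a fixed cadence under heavy contention; raising the threshold after probe failures is one way to keep collisions from being misread as channel degradation.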
An adaptive method for γ spectra smoothing
International Nuclear Information System (INIS)
Xiao Gang; Zhou Chunlin; Li Tiantuo; Han Feng; Di Yuming
2001-01-01
An adaptive wavelet method and a multinomial fitting gliding method are used for smoothing γ spectra, respectively; the FWHM of the 1332 keV peak of 60Co and the activities of a 238U standard specimen are then calculated. The calculated results show that the adaptive wavelet method is better than the multinomial fitting gliding method.
Adaptive and non-adaptive data hiding methods for grayscale images based on modulus function
Directory of Open Access Journals (Sweden)
Najme Maleki
2014-07-01
This paper presents two data hiding methods, one adaptive and one non-adaptive, for grayscale images based on the modulus function. The adaptive scheme is based on human visual sensitivity: pixels in edge areas can tolerate far more change without visible distortion than pixels in smooth areas. In the adaptive scheme, the average differencing value of the four neighborhood pixels in a block, compared against a threshold secret key, determines whether the current block lies in an edge or a smooth area. Pixels in edge areas are embedded with Q bits of secret data, with a larger value of Q than for pixels in smooth areas. We also present a non-adaptive data hiding algorithm which, via an error reduction procedure, produces high visual quality in the stego-image. The proposed schemes have several advantages: (1) the embedding capacity and the visual quality of the stego-image are scalable, i.e. the embedding rate and the image quality can be tuned for practical applications; (2) high embedding capacity is achieved with minimal visual distortion; (3) the methods require little memory space in the embedding and extraction phases; (4) secret keys protect the embedded data, so the level of security is high; (5) the problem of overflow or underflow does not occur. Experimental results indicate that the proposed adaptive scheme is significantly superior to the currently existing scheme in terms of stego-image visual quality, embedding capacity and level of security, and that the non-adaptive method is better than other non-adaptive methods in terms of stego-image quality. Results also show that the adaptive algorithm can resist the RS steganalysis attack.
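The modulus-function embedding and the edge/smooth block test can be sketched as follows; the block size, threshold key and Q values are illustrative assumptions, not the paper's parameters.

```python
def block_q(block, threshold_key=8, q_edge=3, q_smooth=1):
    """Choose embedding depth Q for a small pixel block: edge blocks
    (large average difference from the block mean) hide more bits than
    smooth blocks. Threshold key and Q values are assumptions."""
    mean = sum(block) / len(block)
    avg_diff = sum(abs(p - mean) for p in block) / len(block)
    return q_edge if avg_diff > threshold_key else q_smooth

def embed(pixel, secret_bits, q):
    """Modulus-function hiding: replace pixel mod 2**q with the secret."""
    return pixel - (pixel % (1 << q)) + (secret_bits % (1 << q))

def extract(stego_pixel, q):
    """Recover the hidden bits from a stego pixel."""
    return stego_pixel % (1 << q)
```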
Directory of Open Access Journals (Sweden)
Tingting Liu
2014-03-01
A switch based on electrowetting technology has the advantages of no moving parts, low contact resistance, long life and an adjustable acceleration threshold; the threshold can be fine-tuned by adjusting the applied voltage. This paper focuses on the electrowetting properties of the switch and the influence of microchannel structural parameters, applied voltage and droplet volume on the acceleration threshold. Because of process errors in the micro inertial fluidic switch and measurement errors in the droplet volume, there is a deviation between the tested and the target acceleration threshold. Considering these process and measurement errors, worst-case analysis is used to evaluate the influence of parameter tolerances on the acceleration threshold. Under the worst-case condition, the total acceleration threshold tolerance caused by the various errors is 9.95%. The target acceleration threshold can then be reached by fine-tuning the applied voltage, verifying the acceleration threshold trimming method for the micro inertial fluidic switch.
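The worst-case tolerance accumulation described above can be sketched as a sum of absolute sensitivity-weighted tolerances; the sensitivities and tolerance values below are made-up numbers for illustration, not the switch's actual parameters.

```python
def worst_case_tolerance(sensitivities, tolerances):
    """Worst-case analysis: every parameter sits at its worst corner, so
    the threshold tolerance contributions add with absolute sensitivities.

    sensitivities: d(threshold)/d(parameter), normalized (unitless)
    tolerances:    fractional tolerance of each parameter
    """
    return sum(abs(s) * t for s, t in zip(sensitivities, tolerances))
```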
Han, Sheng; Xi, Shi-qiong; Geng, Wei-dong
2017-11-01
To address the low recognition rate of traditional feature extraction operators on low-resolution images, a novel expression recognition algorithm is proposed: central oblique average center-symmetric local binary pattern (CS-LBP) with adaptive threshold (ATCS-LBP). First, features of face images are extracted by the proposed operator after preprocessing. Second, the resulting feature image is divided into blocks. Third, the histogram of each block is computed independently and all histograms are concatenated into a final feature vector. Finally, expression classification is performed with a support vector machine (SVM) classifier. Experimental results on the Japanese female facial expression (JAFFE) database show that the proposed algorithm achieves a recognition rate of 81.9% at a resolution as low as 16×16, much better than that of traditional feature extraction operators.
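The center-symmetric comparison at the heart of CS-LBP, with a threshold adapted to the local neighborhood, can be sketched as follows; the adaptive rule (a fraction of the local mean) is an assumption, not the paper's exact formula.

```python
def cs_lbp(neighbors, threshold):
    """Center-symmetric LBP code for 8 neighbors: compare the four
    center-symmetric pairs (i, i+4) and pack the results into 4 bits."""
    code = 0
    for i in range(4):
        if neighbors[i] - neighbors[i + 4] > threshold:
            code |= 1 << i
    return code

def adaptive_threshold(neighbors, k=0.1):
    """A simple adaptive rule: scale the local mean by k (illustrative)."""
    return k * sum(neighbors) / len(neighbors)
```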
Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.
Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A
2016-08-12
With prevalent attacks on communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes at low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. Specifically, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping, and we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations and that it permits secret sharing for an arbitrary number of classical participants (no fewer than the threshold value) with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it continues to work under dynamic changes, such as the unavailability of a quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.
Mooney, Ronan A; Cirillo, John; Byblow, Winston D
2018-06-01
Primary motor cortex excitability can be modulated by anodal and cathodal transcranial direct current stimulation (tDCS). These neuromodulatory effects may, in part, depend on modulation within gamma-aminobutyric acid (GABA)-mediated inhibitory networks. GABAergic function can be quantified non-invasively using adaptive threshold hunting paired-pulse transcranial magnetic stimulation (TMS). Previous studies have used TMS with posterior-anterior (PA) induced current to assess tDCS effects on inhibition. However, TMS with anterior-posterior (AP) induced current in the brain provides a more robust measure of GABA-mediated inhibition. The aim of the present study was to assess the modulation of corticomotor excitability and inhibition after anodal and cathodal tDCS using TMS with PA- and AP-induced current. In 16 young adults (26 ± 1 years), we investigated the responses to anodal, cathodal, and sham tDCS in a repeated-measures, double-blinded crossover design. Adaptive threshold hunting paired-pulse TMS with PA- and AP-induced current was used to examine separate interneuronal populations within M1 and their influence on corticomotor excitability and short- and long-interval inhibition (SICI and LICI) for up to 60 min after tDCS. Unexpectedly, cathodal tDCS increased corticomotor excitability assessed with AP (P = 0.047) but not PA stimulation (P = 0.74). SICI AP was reduced after anodal tDCS compared with sham (P = 0.040). Pearson's correlations indicated that SICI AP and LICI AP modulation was associated with corticomotor excitability after anodal (P = 0.027) and cathodal tDCS (P = 0.042). The after-effects of tDCS on corticomotor excitability may depend on the direction of the TMS-induced current used for assessment, and on modulation within GABA-mediated inhibitory circuits.
Adaptive Method Using Controlled Grid Deformation
Directory of Open Access Journals (Sweden)
Florin FRUNZULICA
2011-09-01
The paper presents an adaptive method using controlled grid deformation over an elastic, isotropic and continuous domain. The adaptive process is controlled by the principal strains and principal strain directions and uses the finite element method. Numerical results are presented for several test cases.
The Method of Adaptive Comparative Judgement
Pollitt, Alastair
2012-01-01
Adaptive Comparative Judgement (ACJ) is a modification of Thurstone's method of comparative judgement that exploits the power of adaptivity, but in scoring rather than testing. Professional judgement by teachers replaces the marking of tests; a judge is asked to compare the work of two students and simply to decide which of them is the better.…
Recovering from a bad start: rapid adaptation and tradeoffs to growth below a threshold density
Directory of Open Access Journals (Sweden)
Marx Christopher J
2012-07-01
Background: Bacterial growth in well-mixed culture is often assumed to be an autonomous process depending only upon the external conditions under control of the investigator. However, there is increasing awareness that interactions between cells in culture can lead to surprising phenomena such as density-dependence in the initiation of growth. Results: Here I report the unexpected discovery of a density threshold for growth of a strain of Methylobacterium extorquens AM1 used to inoculate eight replicate populations that were evolved in methanol. Six of these populations failed to grow to the expected full density during the first couple of transfers. Remarkably, the final cell numbers of these six populations crashed to levels 60- to 400-fold smaller than those of their cohorts. Five of these populations recovered to full density soon after, but one population remained an order of magnitude smaller for over one hundred generations. These variable dynamics appeared to be due to a density threshold for growth that was specific both to this particular ancestral strain and to growth on methanol. When tested at full density, this population had become less fit than its ancestor. Simply increasing the initial dilution 16-fold reversed this result, revealing that this population had more than a 3-fold advantage when tested at this lower density. As this population evolved and ultimately recovered to the same final density range as the other populations, this low-density advantage waned. Conclusions: These results demonstrate surprisingly strong tradeoffs during adaptation to growth at low absolute densities that manifest over just a 16-fold change in density. Capturing laboratory examples of transitions to and from growth at low density may help us understand the physiological and evolutionary forces that have led to the unusual properties of natural bacteria that have specialized to low-density environments such as the open ocean.
Adaptive mixed methods for axisymmetric shells
International Nuclear Information System (INIS)
Malta, S.M.C.; Loula, A.F.D.; Garcia, E.L.M.
1989-09-01
The mixed Petrov-Galerkin method is applied to axisymmetric shells with uniform and non-uniform meshes. Numerical experiments with a cylindrical shell showed a significant improvement in convergence and accuracy with adaptive meshes. (A.C.A.S.)
Adaptive Control Methods for Soft Robots
National Aeronautics and Space Administration — I propose to develop methods for soft and inflatable robots that will allow the control system to adapt and change control parameters based on changing conditions...
CSIR Research Space (South Africa)
Luus, FPS
2014-06-01
Adaptive Threshold-Based Shadow Masking for Across-Date Settlement Classification of Panchromatic QuickBird Images. F. P. S. Luus, F. van den Bergh, and B. T. J. Maharaj. IEEE Geoscience and Remote Sensing Letters, Vol. 11, No. 6, June 2014, p. 1153.
A NDVI assisted remote sensing image adaptive scale segmentation method
Zhang, Hong; Shen, Jinxiang; Ma, Yanmei
2018-03-01
Multiscale segmentation of images can effectively form the boundaries of objects at different scales. However, for remote sensing images with wide coverage and complicated ground objects, the number of suitable segmentation scales and the size of each scale remain difficult to determine accurately, which severely restricts rapid information extraction from remote sensing imagery. Many experiments have shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For different regions consisting of different targets, boundaries at different segmentation scales can be created. The experimental results show that the NDVI-based adaptive segmentation method can effectively create object boundaries for the different ground objects in remote sensing images.
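The NDVI value and the similarity test that drives the scale selection can be sketched as follows; the similarity threshold of 0.1 is an assumed value for illustration.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index for one pixel:
    NDVI = (NIR - R) / (NIR + R), in [-1, 1]."""
    total = nir + red
    return (nir - red) / total if total else 0.0

def similar(ndvi_a, ndvi_b, threshold=0.1):
    """NDVI similarity test deciding whether two adjacent regions may be
    merged at the current segmentation scale (threshold is an assumption)."""
    return abs(ndvi_a - ndvi_b) <= threshold
```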
Adaptive finite element methods for differential equations
Bangerth, Wolfgang
2003-01-01
These lecture notes discuss concepts of 'self-adaptivity' in the numerical solution of differential equations, with emphasis on Galerkin finite element methods. The key issues are a posteriori error estimation and automatic mesh adaptation. Besides the traditional approach of energy-norm error control, a new duality-based technique, the Dual Weighted Residual method for goal-oriented error estimation, is discussed in detail. This method aims at economical computation of arbitrary quantities of physical interest by properly adapting the computational mesh. This is typically required in the design cycles of technical applications. For example, the drag coefficient of a body immersed in a viscous flow is computed, then minimized by varying certain control parameters, and finally the stability of the resulting flow is investigated by solving an eigenvalue problem. 'Goal-oriented' adaptivity is designed to achieve these tasks with minimal cost. At the end of each chapter some exercises are posed in order ...
A New Wavelet Threshold Determination Method Considering Interscale Correlation in Signal Denoising
Directory of Open Access Journals (Sweden)
Can He
2015-01-01
Due to its simple calculation and good performance, the wavelet threshold denoising method has been widely used in signal denoising. In this method, the threshold is an important parameter that affects the denoising effect. To improve on existing methods, a new threshold that considers interscale correlation is presented. First, a new correlation index is proposed based on the propagation characteristics of the wavelet coefficients. Then, a threshold determination strategy is obtained using the new index. Finally, a simulation experiment verifies the effectiveness of the proposed method. In the experiment, four benchmark signals are used as test signals. Simulation results show that the proposed method achieves a good denoising effect for various signal types, noise intensities, and thresholding functions.
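A classical wavelet threshold denoiser, here a one-level Haar transform with the universal soft threshold, shows where the paper's interscale-correlation threshold would slot in; the universal threshold sqrt(2 ln N) * sigma is the standard baseline, not the proposed one.

```python
import math

def haar_denoise(signal, sigma):
    """One-level Haar wavelet soft-threshold denoising sketch.

    Assumes an even-length signal. Detail coefficients below the universal
    threshold are shrunk to zero; a more refined method (like the paper's)
    would replace `lam` with a coefficient-dependent threshold.
    """
    n = len(signal)
    r2 = math.sqrt(2.0)
    approx = [(signal[2 * i] + signal[2 * i + 1]) / r2 for i in range(n // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / r2 for i in range(n // 2)]
    lam = sigma * math.sqrt(2.0 * math.log(n))  # universal threshold

    def soft(x):  # soft-thresholding function
        return math.copysign(max(abs(x) - lam, 0.0), x)

    detail = [soft(d) for d in detail]
    out = []
    for a, d in zip(approx, detail):  # inverse one-level Haar transform
        out.append((a + d) / r2)
        out.append((a - d) / r2)
    return out
```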
Czech Academy of Sciences Publication Activity Database
Kyselý, Jan; Picek, J.; Beranová, Romana
2010-01-01
Roč. 72, 1-2 (2010), s. 55-68 ISSN 0921-8181 R&D Projects: GA ČR GA205/06/1535; GA ČR GAP209/10/2045 Grant - others:GA MŠk(CZ) LC06024 Institutional research plan: CEZ:AV0Z30420517 Keywords : climate change * extreme value analysis * global climate models * peaks-over-threshold method * peaks-over-quantile regression * quantile regression * Poisson process * extreme temperatures Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 3.351, year: 2010
Evaluation of Maryland abutment scour equation through selected threshold velocity methods
Benedict, S.T.
2010-01-01
The U.S. Geological Survey, in cooperation with the Maryland State Highway Administration, used field measurements of scour to evaluate the sensitivity of the Maryland abutment scour equation to the critical (or threshold) velocity variable. Four selected methods for estimating threshold velocity were applied to the Maryland abutment scour equation, and the predicted scour was compared to the field measurements. Results indicated that the performance of the Maryland abutment scour equation was sensitive to the threshold velocity, with some threshold velocity methods producing better estimates of predicted scour than others. In addition, results indicated that regional stream characteristics can affect the performance of the equation, with moderate-gradient streams performing differently from low-gradient streams. On the basis of these findings, guidance for selecting threshold velocity methods for application to the Maryland abutment scour equation is provided, and limitations are noted.
Twelve automated thresholding methods for segmentation of PET images: a phantom study
International Nuclear Information System (INIS)
Prieto, Elena; Peñuelas, Iván; Martí-Climent, Josep M; Lecumberri, Pablo; Gómez, Marisol; Pagola, Miguel; Bilbao, Izaskun; Ecay, Margarita
2012-01-01
Tumor volume delineation on positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator-dependent and time-consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering and non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information about the segmented object or any special calibration of the tomograph, as opposed to the usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on a clinical PET/CT and on a small-animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and the results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools. (paper)
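The Ridler-Calvard (ISODATA) algorithm singled out above is easy to sketch: iterate the midpoint between the two class means until it stabilizes.

```python
def ridler_threshold(values, tol=1e-6):
    """Ridler-Calvard (ISODATA) automated threshold.

    Start at the global mean, split the values into two classes, and move
    the threshold to the midpoint of the two class means until convergence.
    One of the 12 automated algorithms the study compares against the fixed
    42 %-of-maximum threshold.
    """
    t = sum(values) / len(values)  # start at the global mean
    while True:
        low = [v for v in values if v <= t]
        high = [v for v in values if v > t]
        if not low or not high:   # degenerate split: keep current threshold
            return t
        new_t = 0.5 * (sum(low) / len(low) + sum(high) / len(high))
        if abs(new_t - t) < tol:
            return new_t
        t = new_t
```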
Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun
2016-05-01
The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency, and in recent years it has become an important technique for rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics, aircraft engine noise and other man-made electromagnetic noise). These noises degrade the imaging quality for data interpretation. Based on the characteristics of GREATEM data and the major noises, we propose a de-noising algorithm that combines the wavelet threshold method with exponential adaptive window-width fitting. First, white noise is filtered from the measured data using the wavelet threshold method. Then, the data are segmented using windows whose step lengths follow even logarithmic intervals. Within each window, data polluted by electromagnetic noise are identified according to an energy-detection criterion, and the attenuation characteristics of the data slope are extracted. Finally, an exponential fitting algorithm fits the attenuation curve of each window, and data polluted by non-stationary electromagnetic noise are replaced with the fitting results, effectively removing the non-stationary noise. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that both stationary white noise and non-stationary electromagnetic noise can be effectively filtered from GREATEM signals using the wavelet threshold-exponential adaptive window-width-fitting algorithm, which enhances imaging quality.
Pruitt, J N; Krauel, J J
2010-10-01
Animals vary greatly in their tendency to consume large meals. Yet, whether or how meal size influences fitness in wild populations is infrequently considered. Using a predator exclusion, mark-recapture experiment, we estimated selection on the amount of food accepted during an ad libitum feeding bout (hereafter termed 'satiation threshold') in the wolf spider Schizocosa ocreata. Individually marked, size-matched females of known satiation threshold were assigned to predator exclusion and predator inclusion treatments and tracked for a 40-day period. We also estimated the narrow-sense heritability of satiation threshold using dam-on-female-offspring regression. In the absence of predation, high satiation threshold was positively associated with larger and faster egg case production. However, these selective advantages were lost when predators were present. We estimated the heritability of satiation threshold to be 0.56. Taken together, our results suggest that satiation threshold can respond to selection and begets a life history trade-off in this system: high satiation threshold individuals tend to produce larger egg cases but also suffer increased susceptibility to predation. © 2010 The Authors. Journal Compilation © 2010 European Society For Evolutionary Biology.
Threshold selection for classification of MR brain images by clustering method
Energy Technology Data Exchange (ETDEWEB)
Moldovanu, Simona [Faculty of Sciences and Environment, Department of Chemistry, Physics and Environment, Dunărea de Jos University of Galaţi, 47 Domnească St., 800008, Romania, Phone: +40 236 460 780 (Romania); Dumitru Moţoc High School, 15 Milcov St., 800509, Galaţi (Romania); Obreja, Cristian; Moraru, Luminita, E-mail: luminita.moraru@ugal.ro [Faculty of Sciences and Environment, Department of Chemistry, Physics and Environment, Dunărea de Jos University of Galaţi, 47 Domnească St., 800008, Romania, Phone: +40 236 460 780 (Romania)
2015-12-07
Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from those belonging to the background. Threshold optimization is an effective tool for separating objects from the background and, further, in classification applications. This paper gives a detailed investigation of the selection of thresholds. Instead of a standard binarization method, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy subjects and patients with multiple sclerosis. The dissimilarity (the distance between classes) was established using a clustering method based on dendrograms. We tested our method on two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and each threshold, the number of white pixels (the area of white objects in the binary image) was determined; these pixel counts are the objects in the clustering operation. The following optimum threshold values are obtained: T = 80 for PD images and T = 30 for T2w images. Each threshold clearly separates the clusters belonging to the two studied groups, healthy subjects and patients with multiple sclerosis.
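The per-threshold white-pixel count used as the clustering feature above can be sketched as a simple binarize-and-count pass over the image.

```python
def binarize_count(image, threshold):
    """Binarize a grey-level image (list of rows) at the given threshold and
    return the white-pixel count -- the feature fed to the dendrogram-based
    clustering in the study. Pixels >= threshold are counted as white."""
    return sum(1 for row in image for p in row if p >= threshold)
```

Sweeping `threshold` over the grey range and clustering the resulting counts is how the optimal values (e.g. T = 80 for PD, T = 30 for T2w) would be selected.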
Adaptive Tuning of Frequency Thresholds Using Voltage Drop Data in Decentralized Load Shedding
DEFF Research Database (Denmark)
Hoseinzadeh, Bakhtyar; Faria Da Silva, Filipe Miguel; Bak, Claus Leth
2015-01-01
Load shedding (LS) is the last firewall and the most expensive control action against power system blackout. In conventional under-frequency LS (UFLS) schemes, the load drop locations are determined independently of the event location. Furthermore, the frequency thresholds of LS relays are prespecified, constant values, which may not be a comprehensive solution for the wide range of possible events. This paper addresses decentralized LS in which the instantaneous voltage deviation of load buses is used to determine the frequency thresholds of LS relays. The higher frequency thresholds...
Cost-effectiveness thresholds: methods for setting and examples from around the world.
Santos, André Soares; Guerra-Junior, Augusto Afonso; Godman, Brian; Morton, Alec; Ruas, Cristina Mariano
2018-06-01
Cost-effectiveness thresholds (CETs) are used to judge whether an intervention represents sufficient value for money to merit adoption in healthcare systems. The study was motivated by the Brazilian context of health technology assessment (HTA), where meetings are being conducted to decide on the definition of a threshold. Areas covered: An electronic search was conducted on Medline (via PubMed), Lilacs (via BVS) and ScienceDirect, followed by a complementary search of the references of included studies, Google Scholar and conference abstracts. Cost-effectiveness thresholds are usually calculated through three different approaches: willingness-to-pay, representative of welfare economics; the precedent method, based on the value of an already funded technology; and the opportunity cost method, which links the threshold to the volume of health displaced. An explicit threshold has never been formally adopted in most places. Some countries have defined thresholds with some flexibility to consider other factors. An implicit threshold could be determined by researching funded cases. Expert commentary: CETs have had an important role as a 'bridging concept' between the world of academic research and the 'real world' of healthcare prioritization. The definition of a cost-effectiveness threshold is paramount for the construction of a transparent and efficient health technology assessment system.
Desmal, Abdulla; Bagci, Hakan
2014-01-01
A numerical framework that incorporates recently developed iterative shrinkage thresholding (IST) algorithms within the Born iterative method (BIM) is proposed for solving the two-dimensional inverse electromagnetic scattering problem. IST
A rule based method for context sensitive threshold segmentation in SPECT using simulation
International Nuclear Information System (INIS)
Fleming, John S.; Alaamer, Abdulaziz S.
1998-01-01
Robust techniques for the automatic or semi-automatic segmentation of objects in single photon emission computed tomography (SPECT) are still under development. This paper describes a threshold-based method that uses empirical rules derived from the analysis of computer-simulated images of a large number of objects. The use of simulation allowed the factors affecting the threshold that correctly segments an object to be investigated systematically; rules could then be derived from these data to define the threshold in any particular context. The technique operates iteratively and calculates local context-sensitive thresholds along radial profiles from the centre of gravity of the object. It was evaluated on a further series of simulated objects and on human studies, and compared to the use of a global fixed threshold. The method improved the accuracy of segmentation and volume assessment compared with the global threshold technique. The improvements were greater for small volumes, shapes with large surface-area-to-volume ratios, variable surrounding activity and non-uniform distributions. The method was applied successfully to simulated objects and human studies and is considered a significant advance on global fixed threshold techniques. (author)
Convergence acceleration of Navier-Stokes equation using adaptive wavelet method
International Nuclear Information System (INIS)
Kang, Hyung Min; Ghafoor, Imran; Lee, Do Hyung
2010-01-01
An efficient adaptive wavelet method is proposed to enhance the computational efficiency of solving the Navier-Stokes equations. The method is based on sparse point representation (SPR), which uses wavelet decomposition and thresholding to obtain a sparsely distributed dataset. The threshold mechanism is modified to maintain the spatial accuracy of a conventional Navier-Stokes solver by adapting the threshold value to the order of the spatial truncation error. The computational grid can be dynamically adapted to a transient solution to reflect local changes in the solution. Flux evaluation is then carried out only at the points of the adapted dataset, which reduces the computational effort and memory requirements. A stabilization technique is also implemented to avoid the additional numerical errors introduced by the thresholding procedure. The numerical results of the adaptive wavelet method are compared with those of a conventional solver to validate the enhancement in computational efficiency for the Navier-Stokes equations without degrading the numerical accuracy of a conventional solver.
Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A
2015-02-01
Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
Hansen, Anja; Krueger, Alexander; Ripken, Tammo
2013-03-01
In ophthalmic microsurgery, tissue dissection is achieved using femtosecond laser pulses to create an optical breakdown. For vitreo-retinal applications the irradiance distribution in the focal volume is distorted by the anterior components of the eye, raising the threshold energy for breakdown. In this work, an adaptive optics system enables spatial beam shaping to compensate for aberrations and to investigate the influence of the wave front on optical breakdown. An eye model was designed to allow for aberration correction as well as detection of optical breakdown. The eye model consists of an achromatic lens modeling the eye's refractive power, a water chamber modeling the tissue properties, and a PTFE sample modeling the retina's scattering properties. Aberration correction was performed using a deformable mirror in combination with a Hartmann-Shack sensor. The influence of adaptive optics aberration correction on the pulse energy required for photodisruption was investigated using transmission measurements to determine the breakdown threshold and video imaging of the focal region to study the gas bubble dynamics. The threshold energy is considerably reduced when correcting for the aberrations of the system and the model eye. A rise in irradiance at constant pulse energy was also shown for the aberration-corrected case. The reduced pulse energy lowers the potential risk of collateral damage, which is especially important for retinal safety. This offers new possibilities for vitreo-retinal surgery using femtosecond laser pulses.
Adaptive finite element method for shape optimization
Morin, Pedro; Nochetto, Ricardo H.; Pauletti, Miguel S.; Verani, Marco
2012-01-01
We examine shape optimization problems in the context of inexact sequential quadratic programming. Inexactness is a consequence of using adaptive finite element methods (AFEM) to approximate the state and adjoint equations (via the dual weighted residual method), update the boundary, and compute the geometric functional. We present a novel algorithm that equidistributes the errors due to shape optimization and discretization, thereby leading to coarse resolution in the early stages and fine resolution upon convergence, and thus optimizing the computational effort. We discuss the ability of the algorithm to detect whether or not geometric singularities such as corners are genuine to the problem or simply due to lack of resolution - a new paradigm in adaptivity. © EDP Sciences, SMAI, 2012.
Adaptive finite element method for shape optimization
Morin, Pedro
2012-01-16
We examine shape optimization problems in the context of inexact sequential quadratic programming. Inexactness is a consequence of using adaptive finite element methods (AFEM) to approximate the state and adjoint equations (via the dual weighted residual method), update the boundary, and compute the geometric functional. We present a novel algorithm that equidistributes the errors due to shape optimization and discretization, thereby leading to coarse resolution in the early stages and fine resolution upon convergence, and thus optimizing the computational effort. We discuss the ability of the algorithm to detect whether or not geometric singularities such as corners are genuine to the problem or simply due to lack of resolution - a new paradigm in adaptivity. © EDP Sciences, SMAI, 2012.
The Translation and Adaptation of Agile Methods
DEFF Research Database (Denmark)
Pries-Heje, Jan; Baskerville, Richard
2017-01-01
Purpose The purpose of this paper is to use translation theory to develop a framework (called FTRA) that explains how companies adopt agile methods in a discourse of fragmentation and articulation. Design/methodology/approach A qualitative multiple case study of six firms using the Scrum agile...... (Scrum). This limits the confidence that the framework is suitable for other kinds of methodologies. Practical implications The FTRA framework and the technological rules are promising for use in practice as a prescriptive or even normative frame for governing methodology adaptation. Social implications....../value The use of translation theory and the FTRA framework to explain how agile adaptation (in particular Scrum) emerges continuously in a process where method fragments are articulated and re-articulated to momentarily suit the local setting. Complete agility that rapidly and elegantly changes its own...
Directory of Open Access Journals (Sweden)
Stefania Munaretto
2014-06-01
Full Text Available Climate adaptation is a dynamic social and institutional process where the governance dimension is receiving growing attention. Adaptive governance is an approach that promises to reduce uncertainty by improving the knowledge base for decision making. As uncertainty is an inherent feature of climate adaptation, adaptive governance seems to be a promising approach for improving climate adaptation governance. However, the adaptive governance literature has so far paid little attention to decision-making tools and methods, and the literature on the governance of adaptation is in its infancy in this regard. We argue that climate adaptation governance would benefit from systematic and yet flexible decision-making tools and methods such as participatory multicriteria methods for the evaluation of adaptation options, and that these methods can be linked to key adaptive governance principles. Moving from these premises, we propose a framework that integrates key adaptive governance features into participatory multicriteria methods for the governance of climate adaptation.
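The participatory multicriteria evaluation the authors advocate can be illustrated in its simplest form: stakeholders agree on criterion weights, each adaptation option is scored per criterion, and a weighted sum ranks the options. The options, criteria, and numbers below are invented placeholders; real multicriteria methods (e.g. outranking approaches) are richer than a weighted sum.

```python
# Minimal weighted-sum multicriteria ranking of adaptation options.
# All names and scores are invented for illustration.

def rank_options(scores, weights):
    """scores: {option: {criterion: value}}; weights: {criterion: weight}."""
    totals = {opt: sum(weights[c] * v for c, v in crit.items())
              for opt, crit in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

scores = {
    "dike upgrade":    {"cost": 0.2, "flexibility": 0.3, "robustness": 0.9},
    "early warning":   {"cost": 0.8, "flexibility": 0.9, "robustness": 0.5},
    "managed retreat": {"cost": 0.5, "flexibility": 0.6, "robustness": 0.7},
}
weights = {"cost": 0.3, "flexibility": 0.3, "robustness": 0.4}
print(rank_options(scores, weights))  # ['early warning', 'managed retreat', 'dike upgrade']
```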
Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan
2017-12-15
Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 &lt; p &lt; 1), which can be employed to obtain a sparser solution than the L1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity of the L1 regularization for sparse signal recovery, combining iterative reweighted algorithms. To further exploit the sparse structure of signals and images, this paper adopts multiple-dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative Lp thresholding algorithm, and then proposes a sparse adaptive iterative-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results show that the proposed SAITA not only performs better than the corresponding L1 algorithms but also obtains better recovery performance and faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, we apply the method to sparse image recovery and obtain good results in comparison with related work.
On the limitations of fixed-step-size adaptive methods with response confidence.
Hsu, Yung-Fong; Chin, Ching-Lan
2014-05-01
The family of (non-parametric, fixed-step-size) adaptive methods, also known as 'up-down' or 'staircase' methods, has been used extensively in psychophysical studies for threshold estimation. Extensions of adaptive methods to non-binary responses have also been proposed. An example is the three-category weighted up-down (WUD) method (Kaernbach, 2001) and its four-category extension (Klein, 2001). Such an extension, however, is somewhat restricted, and in this paper we discuss its limitations. To facilitate the discussion, we characterize the extension of WUD by an algorithm that incorporates response confidence into a family of adaptive methods. This algorithm can also be applied to two other adaptive methods, namely Derman's up-down method and the biased-coin design, which are suitable for estimating any threshold quantile. We then discuss, via simulations of the above three methods, the limitations of the algorithm. To illustrate, we conduct a small-scale experiment using the extended WUD under different response-confidence formats to evaluate the consistency of threshold estimation. © 2013 The British Psychological Society.
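The mechanics of a fixed-step-size staircase are easy to demonstrate: lower the stimulus after a detection, raise it after a miss, and estimate the threshold from the levels visited once the track oscillates. The sketch below runs a 1-up/1-down rule against a noise-free simulated observer (real procedures face noisy binary or confidence-rated responses); the numbers are invented.

```python
# Fixed-step-size 1-up/1-down staircase against a deterministic observer.
# Real psychophysical responses are stochastic; this shows the mechanics only.

def staircase_estimate(true_threshold, start, step, trials=40):
    level, levels = start, []
    for _ in range(trials):
        detected = level >= true_threshold      # simulated (noise-free) response
        levels.append(level)
        level += -step if detected else step    # 1-down / 1-up rule
    tail = levels[trials // 2:]                 # discard the initial descent
    return sum(tail) / len(tail)

print(staircase_estimate(5.3, start=10, step=1))  # 5.5 -- brackets the true 5.3
```

With a noise-free observer the track settles into an oscillation around the threshold, so the tail average lands within one step of it.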
Ronald Reyses Tapilatu; Edeh Rolleta Haroen; Rosiliwati Wihardja
2008-01-01
The habit of smoking white cigarettes and clove cigarettes may affect gustatory function, that is, it can damage taste buds, resulting in an increase in the gustatory threshold. This research used a descriptive comparative method and aimed to describe and compare the gustatory thresholds of white cigarette smokers and clove cigarette smokers in young, male adults. For gustatory threshold evaluation, the Murphy method was used to obtain ...
An Advanced Method to Apply Multiple Rainfall Thresholds for Urban Flood Warnings
Directory of Open Access Journals (Sweden)
Jiun-Huei Jang
2015-11-01
Full Text Available Issuing warning information to the public when rainfall exceeds given thresholds is a simple and widely-used method to minimize flood risk; however, this method lacks sophistication when compared with hydrodynamic simulation. In this study, an advanced methodology is proposed to improve the warning effectiveness of the rainfall threshold method for urban areas through deterministic-stochastic modeling, without sacrificing simplicity and efficiency. With regards to flooding mechanisms, rainfall thresholds of different durations are divided into two groups accounting for flooding caused by drainage overload and disastrous runoff, which help in grading the warning level in terms of emergency and severity when the two are observed together. A flood warning is then classified into four levels distinguished by green, yellow, orange, and red lights in ascending order of priority that indicate the required measures, from standby, flood defense, evacuation to rescue, respectively. The proposed methodology is tested according to 22 historical events in the last 10 years for 252 urbanized townships in Taiwan. The results show satisfactory accuracy in predicting the occurrence and timing of flooding, with a logical warning time series for taking progressive measures. For systems with multiple rainfall thresholds already in place, the methodology can be used to ensure better application of rainfall thresholds in urban flood warnings.
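The four-level warning logic described above can be sketched as a decision over two threshold groups: a short-duration threshold standing for drainage overload and a long-duration threshold standing for disastrous runoff. The mapping of exceedance combinations to colors below, and the threshold values, are invented placeholders, not the paper's calibrated thresholds.

```python
# Sketch of a four-level (green/yellow/orange/red) urban flood warning based
# on two rainfall-threshold groups. Threshold values are invented.

def warning_level(rain_1h, rain_24h, t1h=40.0, t24h=200.0):
    short_exceeded = rain_1h >= t1h     # drainage-overload indicator
    long_exceeded = rain_24h >= t24h    # disastrous-runoff indicator
    if short_exceeded and long_exceeded:
        return "red"      # rescue
    if long_exceeded:
        return "orange"   # evacuation
    if short_exceeded:
        return "yellow"   # flood defense
    return "green"        # standby

print(warning_level(55, 120))   # yellow
print(warning_level(55, 230))   # red
```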
A novel EMD selecting thresholding method based on multiple iteration for denoising LIDAR signal
Li, Meng; Jiang, Li-hui; Xiong, Xing-long
2015-06-01
Empirical mode decomposition (EMD) approach has been believed to be potentially useful for processing the nonlinear and non-stationary LIDAR signals. To shed further light on its performance, we proposed the EMD selecting thresholding method based on multiple iteration, which essentially acts as a development of EMD interval thresholding (EMD-IT), and randomly alters the samples of noisy parts of all the corrupted intrinsic mode functions to generate a better effect of iteration. Simulations on both synthetic signals and LIDAR signals from real world support this method.
Directory of Open Access Journals (Sweden)
R. Bozovic
2017-12-01
Full Text Available Spectrum sensing is the most important process in cognitive radio, since it ensures interference avoidance to primary users. For optimal performance, a cognitive radio must monitor and promptly react to dynamic changes in its operating environment. In this paper, energy-detector-based spectrum sensing is considered. Under the assumption that the detected signal can be modelled by an autoregressive model, the noise variance is estimated from the noisy signal, as well as the primary-user signal power. A closed-form solution for the optimal decision threshold in a dynamic electromagnetic environment is proposed and analyzed.
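The core of an energy detector is a one-line decision: compare the average energy of the received samples against a threshold derived from the noise variance. The sketch below uses a simple noise_var * factor rule in place of the paper's closed-form optimal threshold; samples and values are invented.

```python
# Energy-detector sketch: declare the channel occupied when the average
# sample energy exceeds a threshold set from the estimated noise variance.
# The 'factor' rule stands in for the paper's closed-form threshold.

def occupied(samples, noise_var, factor=2.0):
    energy = sum(s * s for s in samples) / len(samples)
    return energy > factor * noise_var

noise_only = [0.1, -0.2, 0.15, -0.05]
with_signal = [1.1, -0.9, 1.2, -1.0]
print(occupied(noise_only, noise_var=0.02))   # False
print(occupied(with_signal, noise_var=0.02))  # True
```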
International Nuclear Information System (INIS)
Akesson, T.; Arik, E.; Assamagan, K.; Baker, K.; Barberio, E.; Barberis, D.; Bertelsen, H.; Bytchkov, V.; Callahan, J.; Catinaccio, A.; Danielsson, H.; Dittus, F.; Dolgoshein, B.; Dressnandt, N.; Ebenstein, W.L.; Eerola, P.; Farthouat, P.; Froidevaux, D.; Grichkevitch, Y.; Hajduk, Z.; Hansen, J.R.; Keener, P.T.; Kekelidze, G.; Konovalov, S.; Kowalski, T.; Kramarenko, V.A.; Krivchitch, A.; Laritchev, A.; Lichard, P.; Lucotte, A.; Lundberg, B.; Luehring, F.; Mailov, A.; Manara, A.; McFarlane, K.; Mitsou, V.A.; Morozov, S.; Muraviev, S.; Nadtochy, A.; Newcomer, F.M.; Olszowska, J.; Ogren, H.; Oh, S.H.; Peshekhonov, V.; Rembser, C.; Romaniouk, A.; Rousseau, D.; Rust, D.R.; Schegelsky, V.; Sapinski, M.; Shmeleva, A.; Smirnov, S.; Smirnova, L.N.; Sosnovtsev, V.; Soutchkov, S.; Spiridenkov, E.; Tikhomirov, V.; Van Berg, R.; Vassilakopoulos, V.; Wang, C.; Williams, H.H.
2001-01-01
Test-beam studies of the ATLAS Transition Radiation Tracker (TRT) straw tube performance in terms of electron-pion separation using a time-over-threshold method are described. The test-beam data are compared with Monte Carlo simulations of charged particles passing through the straw tubes of the TRT. For energies below 10 GeV, the time-over-threshold method combined with the standard transition-radiation cluster-counting technique significantly improves the electron-pion separation in the TRT. The use of the time-over-threshold information also provides some kaon-pion separation, thereby significantly enhancing the B-physics capabilities of the ATLAS detector.
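The time-over-threshold observable itself is simple: it is the duration for which a straw-tube pulse stays above a discriminator threshold, which grows with the deposited ionization. A minimal sketch on an invented sampled waveform:

```python
# Time-over-threshold of a sampled pulse: count samples above the threshold
# times the sampling interval. Waveform and threshold are invented numbers.

def time_over_threshold(waveform, threshold, dt=1.0):
    return dt * sum(1 for v in waveform if v > threshold)

pulse = [0.0, 0.2, 0.8, 1.5, 1.9, 1.4, 0.7, 0.1]
print(time_over_threshold(pulse, threshold=0.5))  # 5.0 time units
```

In the detector this quantity discriminates particle species because heavier, slower particles deposit more ionization and produce wider pulses.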
Directory of Open Access Journals (Sweden)
Ronald Reyses Tapilatu
2008-03-01
Full Text Available The habit of smoking white cigarettes and clove cigarettes may affect gustatory function, that is, it can damage taste buds, resulting in an increase in the gustatory threshold. This research used a descriptive comparative method and aimed to describe and compare the gustatory thresholds of white cigarette smokers and clove cigarette smokers in young, male adults. For gustatory threshold evaluation, the Murphy method was used to obtain values for the perception threshold and the taste identification threshold using sucrose solutions of 0.0006 M-0.06 M concentration. The results indicate that the perception threshold and identification threshold of young, male adult white cigarette smokers are 0.0119 M and 0.0292 M, respectively, while young, male adult clove cigarette smokers have a perception threshold and identification threshold of 0.0151 M and 0.0348 M. The conclusion of this research is that the perception thresholds of young, male adult white cigarette smokers and clove cigarette smokers are the same, whereas the identification thresholds differ: the identification threshold of clove cigarette smokers is higher than that of white cigarette smokers.
Reliability and validity of a brief method to assess nociceptive flexion reflex (NFR) threshold.
Rhudy, Jamie L; France, Christopher R
2011-07-01
The nociceptive flexion reflex (NFR) is a physiological tool to study spinal nociception. However, NFR assessment can take several minutes and expose participants to repeated suprathreshold stimulations. The 4 studies reported here assessed the reliability and validity of a brief method to assess NFR threshold that uses a single ascending series of stimulations (Peak 1 NFR), by comparing it to a well-validated method that uses 3 ascending/descending staircases of stimulations (Staircase NFR). Correlations between the NFR definitions were high, were on par with test-retest correlations of Staircase NFR, and were not affected by participant sex or chronic pain status. Results also indicated the test-retest reliabilities for the 2 definitions were similar. Using larger stimulus increments (4 mAs) to assess Peak 1 NFR tended to result in higher NFR threshold estimates than using the Staircase NFR definition, whereas smaller stimulus increments (2 mAs) tended to result in lower NFR threshold estimates than the Staircase NFR definition. Neither NFR definition was correlated with anxiety, pain catastrophizing, or anxiety sensitivity. In sum, a single ascending series of electrical stimulations results in a reliable and valid estimate of NFR threshold. However, caution may be warranted when comparing NFR thresholds across studies that differ in the ascending stimulus increments. This brief method to assess NFR threshold is reliable and valid; therefore, it should be useful to clinical pain researchers interested in quickly assessing inter- and intra-individual differences in spinal nociceptive processes. Copyright © 2011 American Pain Society. Published by Elsevier Inc. All rights reserved.
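The 'single ascending series' idea studied above reduces to stepping the stimulus up in fixed increments and taking the first intensity that evokes a response. The sketch below uses a deterministic stand-in for the EMG-based response detection, and illustrates the paper's point that the estimate overshoots the true threshold by less than one increment, so larger increments bias it upward.

```python
# Single ascending series threshold estimate (Peak-1 style sketch).
# The response criterion is a deterministic stand-in, not EMG detection.

def ascending_threshold(responds, start=0.0, increment=2.0, max_ma=50.0):
    ma = start
    while ma <= max_ma:
        if responds(ma):
            return ma          # first responding intensity
        ma += increment
    return None                # no response within the tested range

true_threshold = 13.0
est = ascending_threshold(lambda ma: ma >= true_threshold)
print(est)  # 14.0 with 2-mA steps: overshoots the true 13.0 by < one increment
```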
Directory of Open Access Journals (Sweden)
Chonglong Wang
Full Text Available Genomic selection has become a useful tool for animal and plant breeding. Currently, genomic evaluation is usually carried out using a single-trait model. However, a multi-trait model has the advantage of using information on the correlated traits, leading to more accurate genomic prediction. To date, joint genomic prediction for a continuous and a threshold trait using a multi-trait model is scarce and needs more attention. Based on the previously proposed methods BayesCπ for single continuous trait and BayesTCπ for single threshold trait, we developed a novel method based on a linear-threshold model, i.e., LT-BayesCπ, for joint genomic prediction of a continuous trait and a threshold trait. Computing procedures of LT-BayesCπ using Markov Chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the advantages of LT-BayesCπ over BayesCπ and BayesTCπ with regard to the accuracy of genomic prediction on both traits. Factors affecting the performance of LT-BayesCπ were addressed. The results showed that, in all scenarios, the accuracy of genomic prediction obtained from LT-BayesCπ was significantly increased for the threshold trait compared to that from single trait prediction using BayesTCπ, while the accuracy for the continuous trait was comparable with that from single trait prediction using BayesCπ. The proposed LT-BayesCπ could be a method of choice for joint genomic prediction of one continuous and one threshold trait.
Online Adaptive Replanning Method for Prostate Radiotherapy
International Nuclear Information System (INIS)
Ahunbay, Ergun E.; Peng Cheng; Holmes, Shannon; Godley, Andrew; Lawton, Colleen; Li, X. Allen
2010-01-01
Purpose: To report the application of an adaptive replanning technique for prostate cancer radiotherapy (RT), consisting of two steps: (1) segment aperture morphing (SAM), and (2) segment weight optimization (SWO), to account for interfraction variations. Methods and Materials: The new 'SAM+SWO' scheme was retroactively applied to the daily CT images acquired for 10 prostate cancer patients on a linear accelerator and CT-on-Rails combination during the course of RT. Doses generated by the SAM+SWO scheme based on the daily CT images were compared with doses generated after patient repositioning using the current planning target volume (PTV) margin (5 mm, 3 mm toward rectum) and a reduced margin (2 mm), along with full reoptimization scans based on the daily CT images to evaluate dosimetry benefits. Results: For all cases studied, the online replanning method provided significantly better target coverage when compared with repositioning with reduced PTV (13% increase in minimum prostate dose) and improved organ sparing when compared with repositioning with regular PTV (13% decrease in the generalized equivalent uniform dose of rectum). The time required to complete the online replanning process was 6 ± 2 minutes. Conclusion: The proposed online replanning method can be used to account for interfraction variations for prostate RT with a practically acceptable time frame (5-10 min) and with significant dosimetric benefits. On the basis of this study, the developed online replanning scheme is being implemented in the clinic for prostate RT.
Wang, Wei; Song, Wei-Guo; Liu, Shi-Xing; Zhang, Yong-Ming; Zheng, Hong-Yang; Tian, Wei
2011-04-01
An improved cloud detection method combining K-means clustering and a multi-spectral threshold approach is described. On the basis of landmark spectrum analysis, MODIS data are initially categorized into two major classes by the K-means method. The first class includes clouds, smoke and snow, and the second class includes vegetation, water and land. A multi-spectral threshold detection is then applied to eliminate interference such as smoke and snow from the first class. The method was tested with MODIS data at different times under different underlying surface conditions. Visual inspection of the results showed that the algorithm can effectively detect small areas of cloud pixels and exclude the interference of the underlying surface, which provides a good foundation for subsequent fire detection.
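The two-stage structure (cluster first, threshold second) can be shown on a toy 1-D brightness channel: k-means with k=2 separates bright pixels (cloud/smoke/snow) from dark ones, then a secondary threshold keeps only the cloud-like members of the bright class. Values and the secondary threshold are invented; real MODIS processing uses many spectral bands.

```python
# Two-stage sketch: 1-D k-means (k=2) followed by a secondary threshold.
# All pixel values and thresholds are invented for illustration.

def kmeans_1d(values, iters=20):
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return g0, g1

brightness = [0.1, 0.15, 0.2, 0.7, 0.75, 0.9, 0.85]
dark, bright = kmeans_1d(brightness)
# second stage: keep only bright pixels that also pass a 'cloud' band test
cloud = [v for v in bright if v > 0.72]   # invented secondary threshold
print(sorted(cloud))  # [0.75, 0.85, 0.9]
```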
Directory of Open Access Journals (Sweden)
Yunyi Li
2017-12-01
Full Text Available Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 &lt; p &lt; 1), which can be employed to obtain a sparser solution than the L1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity of the L1 regularization for sparse signal recovery, combining iterative reweighted algorithms. To further exploit the sparse structure of signals and images, this paper adopts multiple-dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative Lp thresholding algorithm, and then proposes a sparse adaptive iterative-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results show that the proposed SAITA not only performs better than the corresponding L1 algorithms but also obtains better recovery performance and faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, we apply the method to sparse image recovery and obtain good results in comparison with related work.
An NMR log echo data de-noising method based on the wavelet packet threshold algorithm
International Nuclear Information System (INIS)
Meng, Xiangning; Xie, Ranhong; Li, Changxi; Hu, Falong; Li, Chaoliu; Zhou, Cancan
2015-01-01
To improve the de-noising of low signal-to-noise ratio (SNR) nuclear magnetic resonance (NMR) log echo data, this paper applies the wavelet packet threshold algorithm to the data. The principle of the algorithm is elaborated in detail. By comparing the properties of a series of wavelet packet bases and their relevance to the NMR log echo train signal, 'sym7' is found to be the optimal wavelet packet basis for de-noising the NMR log echo train signal. A new method is presented to determine the optimal wavelet packet decomposition scale: within the range up to its maximum, the modulus-maxima and Shannon-entropy-minimum criteria are used to determine the global and local optimal wavelet packet decomposition scales, respectively. The results of applying the method to simulated and actual NMR log echo data indicate that, compared with the wavelet threshold algorithm, the wavelet packet threshold algorithm shows higher decomposition accuracy and a better de-noising effect, and is much more suitable for de-noising low-SNR NMR log echo data.
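The Shannon-entropy-minimum criterion mentioned above rewards coefficient sets whose energy is concentrated in few coefficients. A minimal sketch of that selection rule, with invented candidate coefficient sets standing in for wavelet packet decompositions at different scales:

```python
# Shannon-entropy criterion for choosing between candidate coefficient sets:
# lower entropy = energy concentrated in fewer coefficients. Candidates are
# invented stand-ins for wavelet packet decompositions.

import math

def shannon_entropy(coeffs):
    energy = sum(c * c for c in coeffs)
    probs = [c * c / energy for c in coeffs if c != 0]
    return -sum(p * math.log(p) for p in probs)

concentrated = [5.0, 0.1, 0.1, 0.1]   # most energy in one coefficient
spread = [1.0, 1.0, 1.0, 1.0]         # energy spread evenly
best = min([concentrated, spread], key=shannon_entropy)
print(best is concentrated)  # True: the concentrated set has lower entropy
```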
Robust Ordering of Anaphase Events by Adaptive Thresholds and Competing Degradation Pathways.
Kamenz, Julia; Mihaljev, Tamara; Kubis, Armin; Legewie, Stefan; Hauf, Silke
2015-11-05
The splitting of chromosomes in anaphase and their delivery into the daughter cells needs to be accurately executed to maintain genome stability. Chromosome splitting requires the degradation of securin, whereas the distribution of the chromosomes into the daughter cells requires the degradation of cyclin B. We show that cells encounter and tolerate variations in the abundance of securin or cyclin B. This makes the concurrent onset of securin and cyclin B degradation insufficient to guarantee that early anaphase events occur in the correct order. We uncover that the timing of chromosome splitting is not determined by reaching a fixed securin level, but that this level adapts to the securin degradation kinetics. In conjunction with securin and cyclin B competing for degradation during anaphase, this provides robustness to the temporal order of anaphase events. Our work reveals how parallel cell-cycle pathways can be temporally coordinated despite variability in protein concentrations. Copyright © 2015 Elsevier Inc. All rights reserved.
Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto
2016-04-01
Estimation of extreme rainfall from data constitutes one of the most important issues in statistical hydrology, as it is associated with the design of hydraulic structures and flood water management. To that extent, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a generalized Pareto (GP) distribution model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data, graphical methods where one studies the dependence of GP distribution parameters (or related metrics) on the threshold level u, and Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GP distribution model is applicable. In this work, we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 daily rainfall records from the NOAA-NCDC open-access database, with more than 110 years of data. We find that non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while methods that are based on asymptotic properties of the upper distribution tail lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low; i.e. on the order of 0.1–0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on pre-asymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2 and 12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the
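One of the graphical methods alluded to above is the mean-excess (mean residual life) plot: for a GP distribution the mean excess is linear in the threshold u, so an approximately linear region of the plot suggests a valid threshold. A minimal sketch of the underlying computation, on invented data:

```python
# Mean-excess (mean residual life) function: average exceedance above u.
# For GP-distributed excesses this is linear in u; data below are invented.

def mean_excess(data, u):
    exc = [x - u for x in data if x > u]
    return sum(exc) / len(exc) if exc else float("nan")

rainfall = [1, 2, 3, 5, 8, 13, 21, 34]
for u in (0, 5, 10):
    print(u, round(mean_excess(rainfall, u), 2))
```

In practice one computes this over a grid of candidate thresholds and looks for the onset of linearity.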
Implementing Adaptive Educational Methods with IMS Learning Design
Specht, Marcus; Burgos, Daniel
2006-01-01
Please, cite this publication as: Specht, M. & Burgos, D. (2006). Implementing Adaptive Educational Methods with IMS Learning Design. Proceedings of Adaptive Hypermedia. June, Dublin, Ireland. Retrieved June 30th, 2006, from http://dspace.learningnetworks.org
Energy Technology Data Exchange (ETDEWEB)
Mota, Hilton de Oliveira; Rocha, Leonardo Chaves Dutra da [Department of Computer Science, Federal University of Sao Joao del-Rei, Visconde do Rio Branco Ave., Colonia do Bengo, Sao Joao del-Rei, MG, 36301-360 (Brazil); Salles, Thiago Cunha de Moura [Department of Computer Science, Federal University of Minas Gerais, 6627 Antonio Carlos Ave., Pampulha, Belo Horizonte, MG, 31270-901 (Brazil); Vasconcelos, Flavio Henrique [Department of Electrical Engineering, Federal University of Minas Gerais, 6627 Antonio Carlos Ave., Pampulha, Belo Horizonte, MG, 31270-901 (Brazil)
2011-02-15
In this paper an improved method to denoise partial discharge (PD) signals is presented. The method is based on the wavelet transform (WT) and support vector machines (SVM) and is distinct from other WT-based denoising strategies in the sense that it exploits the high spatial correlations present in PD wavelet decompositions as a way to identify and select the relevant coefficients. PD spatial correlations are characterized by WT modulus maxima propagating along decomposition levels (scales), which is a strong indication of their time of occurrence. Denoising is performed by identification and separation of PD-related maxima lines by an SVM pattern classifier. The results obtained confirm that this method has superior denoising capabilities when compared to other WT-based methods found in the literature for the processing of Gaussian and discrete spectral interferences. Moreover, its greatest advantages become clear when the interference has a pulsating or localized shape, a situation in which traditional methods usually fail. (author)
Zhou, Wenlong; Yang, Yan; Tang, Liang; Cheng, Kai; Li, Changkun; Wang, Huimin; Liu, Minzhi; Wang, Wei
2018-03-14
Acrolein (Acr) was used as a selection agent to improve the glutathione (GSH) overproduction of the prototrophic strain W303-1b/FGP PT. After two rounds of adaptive laboratory evolution (ALE), an unexpected result was obtained wherein identical GSH production was observed in the selected isolates. A threshold selection mechanism of Acr-stress adaptation was then clarified based on the formation of an Acr-GSH adduct, and a diffusion coefficient (0.36 ± 0.02 μmol·min⁻¹·OD₆₀₀⁻¹) was calculated. Metabolomic analysis was carried out to reveal the molecular bases that triggered GSH overproduction. The results indicated that all three precursors (glutamic acid (Glu), glycine (Gly) and cysteine (Cys)) needed for GSH synthesis were at relatively higher concentrations in the evolved strain and that the accumulation of homocysteine (Hcy) and cystathionine might promote Cys synthesis and thereby improve GSH production. In addition to GSH and Cys, other non-protein thiols and molecules related to ATP generation were observed at clearly different levels. To divert the accumulated thiols to GSH biosynthesis, combinatorial strategies, including deletion of cystathionine β-lyase (STR3), overexpression of cystathionine γ-lyase (CYS3) and cystathionine β-synthase (CYS4), and reduction of the unfolded protein response (UPR) through up-regulation of protein disulphide isomerase (PDI), were also investigated.
Energy Technology Data Exchange (ETDEWEB)
Mendiola C, M.T.; Morales R, P. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)
2003-07-01
The expression kinetics of the adaptive response (AR) in mouse leukocytes in vivo and the minimum dose of gamma radiation that induces it were determined. The mice were exposed to 0.005 or 0.02 Gy of ¹³⁷Cs as the adaptive dose and 1 h later to the challenge dose (1.0 Gy); another group was exposed only to 1.0 Gy, and the DNA damage was evaluated with the comet assay. The treatment with 0.005 Gy did not induce the AR, and 0.02 Gy caused an effect similar to that obtained with 0.01 Gy. The AR was shown from an interval of 0.5 h, with maximum expression at 5.0 h. The threshold dose to induce the AR is 0.01 Gy, and at 5.0 h the largest quantity of molecules presumably related to the protection of the DNA is present. (Author)
Method of Anti-Virus Protection Based on an (n, t) Threshold Proxy Signature with an Arbitrator
Directory of Open Access Journals (Sweden)
E. A. Tolyupa
2014-01-01
Full Text Available The article suggests a method of anti-virus protection for mobile devices based on the usage of proxy digital signatures and an (n, t)-threshold proxy signature scheme with an arbitrator. The unique feature of the suggested method is that no anti-virus software needs to be installed on the mobile device; it is enough to have software that verifies digital signatures, and an Internet connection. The method is based on a public key infrastructure (PKI), thus minimizing implementation expenses.
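The (n, t)-threshold property underlying such signature schemes can be illustrated with plain Shamir secret sharing: any t of n shares reconstruct the secret, while fewer reveal nothing. This toy over a small prime field shows only the threshold mechanism, not the paper's proxy-signature protocol; the field size and coefficients are invented (a real scheme uses cryptographically large parameters and random coefficients).

```python
# Toy (t, n) threshold sharing via Shamir's scheme over GF(P).
# Illustrates the threshold property only; parameters are toy-sized.

P = 2087  # small prime field for illustration

def make_shares(secret, t, n, coeffs):
    """coeffs: the t-1 polynomial coefficients (fixed here for determinism)."""
    poly = [secret] + coeffs
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(poly)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=123, t=3, n=5, coeffs=[17, 42])
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 123
```

With only t-1 shares the interpolation yields an unrelated value, which is what makes the threshold a hard cutoff rather than a gradual one.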
International Nuclear Information System (INIS)
Berthiau, G.
1995-10-01
The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can eventually be specified. A similar problem consists in fitting component models: there, the optimization variables are the model parameters, and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is simulated annealing. This method, which comes from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted with analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we proposed a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we adapted three other methods from the combinatorial optimization domain: the threshold method, a genetic algorithm and the Tabu search method. The tests have been performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program
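The adaptation of simulated annealing to continuous variables described above can be sketched as: propose a random move in a neighborhood that shrinks with the temperature, accept it with the Metropolis criterion, and cool. A 1-D toy objective stands in for the SPICE-driven circuit cost function; all parameter values are invented.

```python
# Minimal simulated annealing for a continuous variable in a box [lo, hi].
# The objective is a toy stand-in for a simulator-driven cost function.

import math, random

def anneal(f, x0, lo, hi, t0=1.0, cooling=0.95, steps=500, seed=0):
    rng = random.Random(seed)
    x, t = x0, t0
    best, best_val = x0, f(x0)
    for _ in range(steps):
        # candidate move in a temperature-scaled neighborhood, clipped to the box
        cand = min(hi, max(lo, x + rng.uniform(-1, 1) * t))
        delta = f(cand) - f(x)
        # Metropolis criterion: always accept downhill, sometimes uphill
        if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
            x = cand
        if f(x) < best_val:
            best, best_val = x, f(x)
        t *= cooling  # geometric cooling schedule
    return best

best = anneal(lambda x: (x - 3.0) ** 2, x0=-8.0, lo=-10.0, hi=10.0)
```

Real schedules, neighborhood strategies, and the discretization described in the abstract are more elaborate than this geometric cooling sketch.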
Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B
2015-10-06
Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
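The segment-statistics idea behind SFT can be sketched in a few lines. This is not the authors' SFT implementation — the tile size, the "flattest half" background heuristic, and the k-sigma rule below are illustrative assumptions, not the published algorithm:

```python
import statistics

def segment_stats(image, seg):
    """Split a 2-D list of pixel values into seg x seg tiles and
    return (mean, stdev) per tile."""
    h, w = len(image), len(image[0])
    stats = []
    for r in range(0, h, seg):
        for c in range(0, w, seg):
            px = [image[i][j] for i in range(r, min(r + seg, h))
                               for j in range(c, min(c + seg, w))]
            stats.append((statistics.mean(px), statistics.pstdev(px)))
    return stats

def background_threshold(image, seg=4, k=3.0):
    """Estimate a global threshold from low-variance (background) tiles:
    mean of the flattest half of the tiles plus k standard deviations."""
    stats = segment_stats(image, seg)
    flat = sorted(stats, key=lambda s: s[1])[:max(1, len(stats) // 2)]
    mu = statistics.mean(s[0] for s in flat)
    sd = statistics.mean(s[1] for s in flat)
    return mu + k * sd
```

Pixels above the returned threshold would then be labeled signal; the actual SFT additionally fits trends between the tile statistics to classify tiles before thresholding.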
New method to evaluate the 7Li(p, n)7Be reaction near threshold
International Nuclear Information System (INIS)
Herrera, María S.; Moreno, Gustavo A.; Kreiner, Andrés J.
2015-01-01
In this work a complete description of the 7Li(p,n)7Be reaction near threshold is given using center-of-mass and relative coordinates. It is shown that this standard approach, not used before in this context, leads to a simple mathematical representation which gives easy access to all relevant quantities in the reaction and allows a precise numerical implementation. It also allows proton beam-energy spread effects to be included in a simple way. The method, implemented as a C++ code, was validated with both numerical and experimental data, finding good agreement. This tool is also used here to analyze scattered published measurements such as (p,n) cross sections and differential and total neutron yields for thick targets. Using these data we derive a consistent set of parameters to evaluate neutron production near threshold. Sensitivity of the results to data uncertainty and the possibility of incorporating new measurements are also discussed.
Ventilatory thresholds determined from HRV: comparison of 2 methods in obese adolescents.
Quinart, S; Mourot, L; Nègre, V; Simon-Rigaud, M-L; Nicolet-Guénat, M; Bertrand, A-M; Meneveau, N; Mougin, F
2014-03-01
The development of personalised training programmes is crucial in the management of obesity. We evaluated the ability of 2 heart rate variability analyses to determine ventilatory thresholds (VT) in obese adolescents. 20 adolescents (mean age 14.3±1.6 years and body mass index z-score 4.2±0.1) performed an incremental test to exhaustion before and after a 9-month multidisciplinary management programme. The first (VT1) and second (VT2) ventilatory thresholds were identified by the reference method (gas exchanges). We recorded RR intervals to estimate VT1 and VT2 from heart rate variability using time-domain analysis and time-varying spectral-domain analysis. The correlation coefficients between thresholds were higher with spectral-domain than with time-domain analysis (heart rate at VT1: r=0.91 vs. r=0.66, and at VT2: r=0.91 vs. r=0.66; power at VT1: r=0.91 vs. r=0.74, and at VT2: r=0.93 vs. r=0.78; spectral-domain vs. time-domain analysis, respectively). No systematic bias in heart rate at VT1 and VT2 was found, and standard deviations were <6 bpm, confirming that spectral-domain analysis could replace the reference method for the detection of ventilatory thresholds. Furthermore, this technique is sensitive to rehabilitation and re-training, which underlines its utility in clinical practice. This inexpensive and non-invasive tool is promising for prescribing physical activity programs in obese adolescents. © Georg Thieme Verlag KG Stuttgart · New York.
Adaptive design methods in clinical trials – a review
Directory of Open Access Journals (Sweden)
Chang Mark
2008-05-01
Full Text Available Abstract In recent years, the use of adaptive design methods in clinical research and development based on accrued data has become very popular due to its flexibility and efficiency. Based on the adaptations applied, adaptive designs can be classified into three categories: prospective, concurrent (ad hoc), and retrospective adaptive designs. An adaptive design allows modifications to be made to trial and/or statistical procedures of ongoing clinical trials. However, it is a concern that the actual patient population after the adaptations could deviate from the originally targeted patient population, and consequently the overall type I error rate (the probability of erroneously claiming efficacy for an ineffective drug) may not be controlled. In addition, major adaptations of trial and/or statistical procedures of on-going trials may result in a totally different trial that is unable to address the scientific/medical questions the trial intends to answer. In this article, several commonly considered adaptive designs in clinical trials are reviewed. Impacts of ad hoc adaptations (protocol amendments), challenges of by-design (prospective) adaptations, and obstacles of retrospective adaptations are described. Strategies for the use of adaptive design in clinical development of rare diseases are discussed. Some examples concerning the development of Velcade intended for multiple myeloma and non-Hodgkin's lymphoma are given. Practical issues that are commonly encountered when implementing adaptive design methods in clinical trials are also discussed.
An n -material thresholding method for improving integerness of solutions in topology optimization
International Nuclear Information System (INIS)
Watts, Seth; Engineering); Tortorelli, Daniel A.; Engineering)
2016-01-01
It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits "fictitious" mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example, the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. Finally, we present results which show synergistic effects when used with Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions.
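One simple smooth projection of a volume-fraction vector toward a simplex vertex is a power-law sharpening with renormalization. This is only an illustrative stand-in for the authors' barycentric thresholding function (the exponent p is a made-up sharpness parameter), but it shows the desired properties: smooth, differentiable, and converging to a 0/1 selection as p grows:

```python
def threshold_fractions(v, p=4.0):
    """Smoothly push a volume-fraction vector (nonnegative entries
    summing to 1) toward the nearest simplex vertex: raise each entry
    to the power p and renormalize. As p -> infinity this recovers a
    hard integer (0/1) material selection; p = 1 leaves v unchanged."""
    powered = [x ** p for x in v]
    s = sum(powered)
    return [x / s for x in powered]
```

For example, a mixture dominated by one phase is driven close to that phase while remaining a valid point on the simplex.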
Jiang, Wen Jun; Wittek, Peter; Zhao, Li; Gao, Shi Chao
2014-01-01
Photoplethysmogram (PPG) signals acquired by smartphone cameras are weaker than those acquired by dedicated pulse oximeters. Furthermore, the signals have lower sampling rates, have notches in the waveform and are more severely affected by baseline drift, leading to specific morphological characteristics. This paper introduces a new feature, the inverted triangular area, to address these specific characteristics. The new feature enables real-time adaptive waveform detection using an algorithm of linear time complexity. It can also recognize notches in the waveform and it is inherently robust to baseline drift. An implementation of the algorithm on Android is available for free download. We collected data from 24 volunteers and compared our algorithm in peak detection with two competing algorithms designed for PPG signals, Incremental-Merge Segmentation (IMS) and Adaptive Thresholding (ADT). A sensitivity of 98.0% and a positive predictive value of 98.8% were obtained, which were 7.7% higher than the IMS algorithm in sensitivity, and 8.3% higher than the ADT algorithm in positive predictive value. The experimental results confirmed the applicability of the proposed method.
Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo
2013-06-01
In femtosecond laser ophthalmic surgery, tissue dissection is achieved by photodisruption based on laser-induced optical breakdown. In order to minimize collateral damage to the eye, laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile, which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as an eye model and determined the breakdown threshold in single-pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced by using adaptive optics to improve the irradiance distribution through spatial beam shaping. We found that for initial aberrations with a root-mean-square wavefront error of only one third of the wavelength, the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at a 5 kilohertz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both applied and transmitted pulse energy could be reduced during ophthalmic surgery by correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery.
Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality.
Gosling, Simon N; Hondula, David M; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer
2017-08-16
Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to "adaptation uncertainty" (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. This study had three aims: a) compare the range in projected impacts that arises from using different adaptation modeling methods; b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c) recommend modeling method(s) to use in future impact assessments. We estimated impacts for 2070-2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634.
Entropy-Based Method of Choosing the Decomposition Level in Wavelet Threshold De-noising
Directory of Open Access Journals (Sweden)
Yan-Fang Sang
2010-06-01
In this paper, the energy distributions of various noises following normal, log-normal and Pearson-III distributions are first described quantitatively using the wavelet energy entropy (WEE), and the results are compared and discussed. Then, on the basis of these analytic results, a method for choosing the decomposition level (DL) in wavelet threshold de-noising (WTD) is put forward. Finally, the performance of the proposed method is verified by analysis of both synthetic and observed series. Analytic results indicate that the proposed method is easy to operate and suitable for various signals. Moreover, contrary to traditional white noise testing, which depends on “autocorrelations”, the proposed method uses energy distributions to distinguish real signals from noise in noisy series; therefore the chosen DL is reliable, and the WTD results of time series can be improved.
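A toy illustration of the wavelet energy entropy using a hand-rolled Haar transform. The paper's full DL-selection criterion is more involved; here only the Shannon entropy of the detail-energy distribution across levels is computed, which is the quantity the selection is based on (the Haar filter and the entropy normalization are assumed, minimal choices):

```python
import math

def haar_step(x):
    """One level of the Haar transform: (approximation, detail)."""
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def wavelet_energy_entropy(x, levels):
    """Shannon entropy of the relative energies of the detail sub-bands
    over the requested number of decomposition levels."""
    energies = []
    a = list(x)
    for _ in range(levels):
        a, d = haar_step(a)
        energies.append(sum(c * c for c in d))
    total = sum(energies) or 1.0
    p = [e / total for e in energies if e > 0]
    return -sum(q * math.log(q) for q in p)
```

A signal whose energy is concentrated in a single sub-band (e.g., a pure alternation) has zero entropy, while broadband noise spreads energy across levels and yields a high entropy — the contrast the method exploits when picking the DL.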
On the Adaptation of an Agile Information Systems Development Method
Aydin, M.N.; Harmsen, F.; van Slooten, C.; Stegwee, R.A.
2005-01-01
Little specific research has been conducted to date on the adaptation of agile information systems development (ISD) methods. This article presents the work practice in dealing with the adaptation of such a method in the ISD department of one of the leading financial institutes in Europe. Two forms
Adaptation of an Agile Information System Development Method
Aydin, M.N.; Harmsen, A.F.; van Hillegersberg, Jos; Stegwee, R.A.; Siau, K.
2007-01-01
Little specific research has been conducted to date on the adaptation of agile information systems development (ISD) methods. This chapter presents the work practice in dealing with the adaptation of such a method in the ISD department of one of the leading financial institutes in Europe. The
Adaptive integral equation methods in transport theory
International Nuclear Information System (INIS)
Kelley, C.T.
1992-01-01
In this paper, an adaptive multilevel algorithm for integral equations is described that has been developed with the Chandrasekhar H equation and its generalizations in mind. The algorithm maintains good performance when the Frechet derivative of the nonlinear map is singular at the solution, as happens in radiative transfer with conservative scattering and in critical neutron transport. Numerical examples that demonstrate the algorithm's effectiveness are presented
Noninvasive method to estimate anaerobic threshold in individuals with type 2 diabetes
Directory of Open Access Journals (Sweden)
Sales Marcelo M
2011-01-01
Abstract Background: While several studies have identified the anaerobic threshold (AT) through the responses of blood lactate, ventilation and blood glucose, others have suggested the response of heart rate variability (HRV) as a method to identify the AT in young healthy individuals. However, the validity of HRV in estimating the lactate threshold (LT) and ventilatory threshold (VT) for individuals with type 2 diabetes (T2D) has not yet been investigated. Aim: To analyze the possibility of identifying the heart rate variability threshold (HRVT) by considering the responses of parasympathetic indicators during an incremental exercise test in type 2 diabetic subjects (T2D) and non-diabetic individuals (ND). Methods: Nine T2D (55.6 ± 5.7 years, 83.4 ± 26.6 kg, 30.9 ± 5.2 kg.m-2) and ten ND (50.8 ± 5.1 years, 76.2 ± 14.3 kg, 26.5 ± 3.8 kg.m-2) underwent an incremental exercise test (IT) on a cycle ergometer. Heart rate (HR), rating of perceived exertion (RPE), blood lactate and expired gas concentrations were measured at the end of each stage. HRVT was identified through the responses of the root mean square of successive differences between adjacent R-R intervals (RMSSD) and the standard deviation of instantaneous beat-to-beat R-R interval variability (SD1), considering the last 60 s of each incremental stage (HRVT-RMSSD and HRVT-SD1, respectively). Results: No differences were observed within groups for the exercise intensities corresponding to LT, VT, HRVT-RMSSD and HRVT-SD1. Furthermore, strong relationships were verified among the studied parameters both for T2D (r = 0.68 to 0.87) and ND (r = 0.91 to 0.98), and the Bland-Altman technique confirmed the agreement among them. Conclusion: The HRVT identification by the proposed autonomic indicators (SD1 and RMSSD) was demonstrated to be valid to estimate the LT and VT for both T2D and ND.
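The two HRV indices used above are standard and easy to compute from an R-R interval series; a minimal sketch follows (the threshold-detection step itself, which scans these indices across exercise stages, is not reproduced). SD1 is computed via its known identity with RMSSD rather than from the Poincaré plot directly:

```python
import math

def rmssd(rr):
    """Root mean square of successive differences of R-R intervals (ms)."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def sd1(rr):
    """Poincare-plot short-axis dispersion; equals RMSSD / sqrt(2)."""
    return rmssd(rr) / math.sqrt(2)
```

In a threshold search, one would evaluate these over the last 60 s of each stage and flag the intensity at which the index stops decreasing and stabilizes at a low value.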
Borckardt, Jeffrey J; Nahas, Ziad; Koola, Jejo; George, Mark S
2006-09-01
Resting motor threshold is the basic unit of dosing in transcranial magnetic stimulation (TMS) research and practice. There is little consensus on how best to estimate resting motor threshold with TMS, and only a few tools and resources are readily available to TMS researchers. The current study investigates the accuracy and efficiency of 5 different approaches to motor threshold assessment for TMS research and practice applications. Computer simulation models are used to test the efficiency and accuracy of 5 different adaptive parameter estimation by sequential testing (PEST) procedures. For each approach, data are presented with respect to the mean number of TMS trials necessary to reach the motor threshold estimate as well as the mean accuracy of the estimates. A simple nonparametric PEST procedure appears to provide the most accurate motor threshold estimates, but takes slightly longer (on average, 3.48 trials) to complete than a popular parametric alternative (maximum likelihood PEST). Recommendations are made for the best starting values for each of the approaches to maximize both efficiency and accuracy. In light of the computer simulation data provided in this article, the authors review and suggest which techniques might best fit different TMS research and clinical situations. Lastly, a free user-friendly software package is described and made available on the world wide web that allows users to run all of the motor threshold estimation procedures discussed in this article for clinical and research applications.
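The five PEST variants are not specified in the abstract; as a generic illustration of adaptive threshold estimation, here is a simple up-down staircase with step halving at reversals. The starting intensity, step sizes, and stopping rule are illustrative assumptions, not recommended TMS parameters:

```python
def staircase_threshold(responds, start=60.0, step=8.0, min_step=1.0):
    """Simple adaptive up-down staircase: lower the intensity after a
    response (e.g., a motor evoked potential), raise it after a
    non-response, and halve the step at each reversal until the step
    falls below min_step. Returns (estimate, number_of_trials)."""
    intensity, last, trials = start, None, 0
    while step >= min_step:
        r = responds(intensity)          # one stimulation trial
        trials += 1
        if last is not None and r != last:
            step /= 2.0                  # reversal: refine the search
        intensity += -step if r else step
        last = r
    return intensity, trials
```

Against a deterministic responder this converges in a handful of trials; real motor thresholds are probabilistic, which is why the compared PEST procedures fit a psychometric function instead of assuming a sharp cutoff.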
Designing adaptive intensive interventions using methods from engineering.
Lagoa, Constantino M; Bekiroglu, Korkut; Lanza, Stephanie T; Murphy, Susan A
2014-10-01
Adaptive intensive interventions are introduced, and new methods from the field of control engineering for use in their design are illustrated. A detailed step-by-step explanation of how control engineering methods can be used with intensive longitudinal data to design an adaptive intensive intervention is provided. The methods are evaluated via simulation. Simulation results illustrate how the designed adaptive intensive intervention can result in improved outcomes with less treatment by providing treatment only when it is needed. Furthermore, the methods are robust to model misspecification as well as the influence of unobserved causes. These new methods can be used to design adaptive interventions that are effective yet reduce participant burden. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Examining Key Notions for Method Adaption
Aydin, Mehmet N.; Ralyté, Jolita; Brinkkemper, Sjaak; Henderson-Sellers, Brian
2007-01-01
It is a well-known fact that IS development methods are not used as prescribed in actual development projects. That is, every ISD method in a development project is subject to modification, because its peculiarities and emerging situations cannot be adequately understood in a prescribed manner.
Danish pedagogical methodics: adaption on Belarusian ground
DEFF Research Database (Denmark)
Andryieuski, Andrei; Skryhan, K.; Andryieuskaya, M.
2009-01-01
On the basis of our experience of studies and work at Danish universities and the Belarusian State University, we present a range of methods that can be easily applied to the Belarusian higher school education system to increase its efficiency.
International Nuclear Information System (INIS)
Mitchel, R.E.J.; Burchart, P.; Wyatt, H.
2008-01-01
Low doses of ionizing radiation to cells and animals may induce adaptive responses that reduce the risk of cancer. However, there are upper dose thresholds above which these protective adaptive responses do not occur. We have now tested the hypothesis that there are similar lower dose thresholds that must be exceeded in order to induce protective effects in vivo. We examined the effects of low dose/low dose rate fractionated exposures on cancer formation in Trp53 normal or cancer-prone Trp53 heterozygous female C57BL/6 mice. Beginning at 6 weeks of age, mice were exposed 5 days/week to single daily doses (0.33 mGy, 0.7 mGy/h) totaling 48, 97 or 146 mGy over 30, 60 or 90 weeks. The exposures for shorter times (up to 60 weeks) appeared to be below the level necessary to induce overall protective adaptive responses in Trp53 normal mice, and detrimental effects (shortened lifespan, increased tumor frequency), evident only for specific tumor types (B- and T-cell lymphomas), were produced. Only when the exposures were continued for 90 weeks did the dose become sufficient to induce protective adaptive responses, balancing the detrimental effects for these specific cancers and reducing the risk level back to that of the unexposed animals. Detrimental effects were not seen for other tumor types, and a protective effect was seen for sarcomas after 60 weeks of exposure, which was then lost when the exposure continued for 90 weeks. As previously shown for the upper dose threshold for protection by low doses, the lower dose boundary between protection and harm was influenced by Trp53 functionality. Neither protection nor harm was observed in exposed Trp53 heterozygous mice, indicating that reduced Trp53 function raises the lower dose/dose rate threshold for both detrimental and protective tumorigenic effects. (author)
Adaptive BDDC Deluxe Methods for H(curl)
Zampini, Stefano
2017-01-01
The work presents numerical results using adaptive BDDC deluxe methods for preconditioning the linear systems arising from finite element discretizations of the time-domain, quasi-static approximation of the Maxwell’s equations. The provided results
DEFF Research Database (Denmark)
Jakobsen, Janus Christian; Wetterslev, Jorn; Winkel, Per
2014-01-01
BACKGROUND: Thresholds for statistical significance when assessing meta-analysis results are being insufficiently demonstrated by traditional 95% confidence intervals and P-values. Assessment of intervention effects in systematic reviews with meta-analysis deserves greater rigour. METHODS: Methodologies for assessing statistical and clinical significance of intervention effects in systematic reviews were considered. Balancing simplicity and comprehensiveness, an operational procedure was developed, based mainly on The Cochrane Collaboration methodology and the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) guidelines. RESULTS: We propose an eight-step procedure for better validation of meta-analytic results in systematic reviews: (1) obtain the 95% confidence intervals and the P-values from both fixed-effect and random-effects meta-analyses and report the most...
Desmal, Abdulla
2014-07-01
A numerical framework that incorporates recently developed iterative shrinkage thresholding (IST) algorithms within the Born iterative method (BIM) is proposed for solving the two-dimensional inverse electromagnetic scattering problem. IST algorithms minimize a cost function weighted between measurement-data misfit and a zeroth/first-norm penalty term and therefore promote "sharpness" in the solution. Consequently, when applied to domains with sharp variations, discontinuities, or sparse content, the proposed framework is more efficient and accurate than the "classical" BIM that minimizes a cost function with a second-norm penalty term. Indeed, numerical results demonstrate the superiority of the IST-BIM over the classical BIM when they are applied to sparse domains: Permittivity and conductivity profiles recovered using the IST-BIM are sharper and more accurate and converge faster. © 1963-2012 IEEE.
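The shrinkage step at the heart of IST algorithms is the soft-thresholding proximal operator of the l1 norm; a minimal dense-matrix ISTA sketch follows. The step size and regularization weight are illustrative, and the electromagnetic scattering operators of the actual BIM coupling are of course not included:

```python
def soft(x, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1) for v in x]

def ista(A, y, lam=0.1, step=0.05, iters=500):
    """Iterative shrinkage-thresholding for
    min_x 0.5 * ||A x - y||^2 + lam * ||x||_1,
    with A a dense matrix given as a list of rows."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - y, then gradient g = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(len(y))]
        g = [sum(A[i][j] * r[i] for i in range(len(y))) for j in range(n)]
        x = soft([x[j] - step * g[j] for j in range(n)], step * lam)
    return x
```

The shrinkage zeroes out small coefficients at every iteration, which is exactly why the recovered profiles stay sparse and sharp rather than smeared, as they would be with a second-norm penalty.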
A Headset Method for Measuring the Visual Temporal Discrimination Threshold in Cervical Dystonia
Directory of Open Access Journals (Sweden)
Anna Molloy
2014-07-01
Background: The visual temporal discrimination threshold (TDT) is the shortest time interval at which one can determine two stimuli to be asynchronous, and it meets criteria for a valid endophenotype in adult-onset idiopathic focal dystonia, a poorly penetrant disorder. Temporal discrimination is assessed in the hospital laboratory; in unaffected relatives of multiplex adult-onset dystonia patients, distance from the hospital is a barrier to data acquisition. We devised a portable headset method for visual temporal discrimination determination, and our aim was to validate this portable tool against the traditional laboratory-based method in a group of patients and in a large cohort of healthy controls. Methods: Visual TDTs were examined in two groups, (1) 96 healthy control participants divided by age and gender, and (2) 33 cervical dystonia patients, using two methods of data acquisition: the traditional table-top laboratory-based system and the novel portable headset method. The order of assessment was randomized in the control group. The results obtained by each technique were compared. Results: Visual temporal discrimination in healthy control participants demonstrated similar age and gender effects by the headset method as found by the table-top examination. There were no significant differences between visual TDTs obtained using the two methods, either for the control participants or for the cervical dystonia patients. Bland-Altman testing showed good concordance between the two methods in both patients and controls. Discussion: The portable headset device is a reliable and accurate method for visual temporal discrimination testing for use outside the laboratory, and will facilitate increased TDT data collection outside of the hospital setting. This is of particular importance in multiplex families, where data collection in all available members of the pedigree is important for exome sequencing studies.
Adaptive upscaling with the dual mesh method
Energy Technology Data Exchange (ETDEWEB)
Guerillot, D.; Verdiere, S.
1997-08-01
The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance the a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity to consider different average relative permeability values depending on the direction in space. Moreover, these values could be different for the same average saturation. This proves that an a priori upscaling cannot be the answer even in homogeneous cases because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous media and to an actual field case in South America.
Comparison of anaerobic threshold determined by visual and mathematical methods in healthy women.
Higa, M N; Silva, E; Neves, V F C; Catai, A M; Gallo, L; Silva de Sá, M F
2007-04-01
Several methods are used to estimate the anaerobic threshold (AT) during exercise. The aim of the present study was to compare the AT obtained by a graphic visual method for the estimation of ventilatory and metabolic variables (gold standard) to that obtained by a bi-segmental linear regression mathematical model (Hinkley's algorithm) applied to heart rate (HR) and carbon dioxide output (VCO2) data. Thirteen young (24 +/- 2.63 years old) and 16 postmenopausal (57 +/- 4.79 years old) healthy and sedentary women were submitted to a continuous ergospirometric incremental test on an electromagnetically braked cycloergometer with 10 to 20 W/min increases until physical exhaustion. The ventilatory variables were recorded breath-to-breath and HR was obtained beat-to-beat in real time. Data were analyzed by the nonparametric Friedman test and the Spearman correlation test with the level of significance set at 5%. Power output (W), HR (bpm), oxygen uptake (VO2; mL kg(-1) min(-1)), VO2 (mL/min), VCO2 (mL/min), and minute ventilation (VE; L/min) data observed at the AT level were similar for both methods and groups studied (P > 0.05). The VO2 (mL kg(-1) min(-1)) data showed significant correlation between the two methods, supporting the mathematical model as an automatic, non-invasive and objective AT measurement.
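Bi-segmental fitting of the kind attributed to Hinkley's algorithm can be sketched as an exhaustive search over candidate breakpoints, fitting one least-squares line on each side and keeping the split with the smallest combined residual. This is a simplification of the actual algorithm (the minimum segment length is an assumed parameter):

```python
def linfit_sse(xs, ys):
    """Least-squares line fit; returns the sum of squared residuals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def breakpoint(xs, ys, min_pts=3):
    """Index that minimizes the combined SSE of two linear segments."""
    best, best_sse = None, float("inf")
    for k in range(min_pts, len(xs) - min_pts + 1):
        sse = linfit_sse(xs[:k], ys[:k]) + linfit_sse(xs[k:], ys[k:])
        if sse < best_sse:
            best, best_sse = k, sse
    return best
```

Applied to HR or VCO2 versus workload, the returned index marks the abrupt slope change taken as the AT estimate.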
Smith, Paul L.; VonderHaar, Thomas H.
1996-01-01
The principal goal of this project is to establish relationships that would allow application of area-time integral (ATI) calculations based upon satellite data to estimate rainfall volumes. The research is being carried out as a collaborative effort between the two participating organizations, with the satellite data analysis to determine values for the ATIs being done primarily by the STC-METSAT scientists and the associated radar data analysis to determine the 'ground-truth' rainfall estimates being done primarily at the South Dakota School of Mines and Technology (SDSM&T). Synthesis of the two separate kinds of data and investigation of the resulting rainfall-versus-ATI relationships is then carried out jointly. The research has been pursued using two different approaches, which for convenience can be designated as the 'fixed-threshold approach' and the 'adaptive-threshold approach'. In the former, an attempt is made to determine a single temperature threshold in the satellite infrared data that would yield ATI values for identifiable cloud clusters which are closely related to the corresponding rainfall amounts as determined by radar. Work on the second, or 'adaptive-threshold', approach for determining the satellite ATI values has explored two avenues: (1) one attempt involved choosing IR thresholds to match the satellite ATI values with ones separately calculated from the radar data on a case-by-case basis; and (2) another involved a straightforward screening analysis to determine the (fixed) offset that would lead to the strongest correlation and lowest standard error of estimate in the relationship between the satellite ATI values and the corresponding rainfall volumes.
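The fixed-threshold ATI itself is a simple double sum over frames and pixels; a sketch with made-up pixel area and time step (the actual satellite pixel footprint and image cadence would replace these illustrative values):

```python
def area_time_integral(frames, threshold, pixel_area=16.0, dt=0.5):
    """ATI = sum over frames of (cloud area at or below the IR
    temperature threshold) * time step. Colder pixels indicate
    higher, potentially raining cloud tops, so the test is <=.
    Units: pixel_area in km^2, dt in hours -> ATI in km^2*h."""
    ati = 0.0
    for frame in frames:
        cold = sum(1 for row in frame for temp in row if temp <= threshold)
        ati += cold * pixel_area * dt
    return ati
```

The fixed-threshold approach amounts to choosing a single `threshold` that makes this quantity correlate best with radar-derived rainfall volume; the adaptive variant re-picks it per case.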
An Adaptive Reordered Method for Computing PageRank
Directory of Open Access Journals (Sweden)
Yi-Ming Bu
2013-01-01
We propose an adaptive reordered method to deal with the PageRank problem. It has been shown that one can reorder the hyperlink matrix of PageRank problem to calculate a reduced system and get the full PageRank vector through forward substitutions. This method can provide a speedup for calculating the PageRank vector. We observe that in the existing reordered method, the cost of the recursively reordering procedure could offset the computational reduction brought by minimizing the dimension of linear system. With this observation, we introduce an adaptive reordered method to accelerate the total calculation, in which we terminate the reordering procedure appropriately instead of reordering to the end. Numerical experiments show the effectiveness of this adaptive reordered method.
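For context, the baseline computation that the reordered method accelerates is the PageRank fixed point; a plain power-iteration sketch follows. Dangling nodes are handled by spreading their rank uniformly, a common convention; the reordering trick itself, which eliminates zero-outdegree rows so part of the system can be solved by forward substitution, is not shown:

```python
def pagerank(links, d=0.85, tol=1e-10):
    """Power iteration for PageRank on an adjacency dict
    {node: [outlinked nodes]}. Returns {node: rank}, summing to 1."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    while True:
        new = {u: (1 - d) / n for u in nodes}
        for u in nodes:
            out = links[u]
            share = rank[u] / (len(out) if out else n)
            targets = out if out else nodes   # dangling: spread uniformly
            for v in targets:
                new[v] += d * share
        if max(abs(new[u] - rank[u]) for u in nodes) < tol:
            return new
        rank = new
```

Reordering pays off when many pages are dangling or lowly linked, since the reduced system handled iteratively becomes much smaller than n.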
New adaptive sampling method in particle image velocimetry
International Nuclear Information System (INIS)
Yu, Kaikai; Xu, Jinglei; Tang, Lan; Mo, Jianwei
2015-01-01
This study proposes a new adaptive method to enable the number of interrogation windows and their positions in a particle image velocimetry (PIV) image interrogation algorithm to become self-adapted according to the seeding density. The proposed method can relax the constraint of uniform sampling rate and uniform window size commonly adopted in the traditional PIV algorithm. In addition, the positions of the sampling points are redistributed on the basis of the spring force generated by the sampling points. The advantages include control of the number of interrogation windows according to the local seeding density and smoother distribution of sampling points. The reliability of the adaptive sampling method is illustrated by processing synthetic and experimental images. The synthetic example attests to the advantages of the sampling method. Compared with that of the uniform interrogation technique in the experimental application, the spatial resolution is locally enhanced when using the proposed sampling method. (technical design note)
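The spring-force redistribution described above can be illustrated in one dimension: neighboring sample points are linked by springs whose rest lengths shrink where seeding density is high, so relaxation pulls samples toward dense regions. This is a hypothetical 1D sketch for intuition only; the paper operates on 2D PIV interrogation windows and its force model is not specified here:

```python
import numpy as np

def relax_sample_points(x, density, iters=200, k=0.1):
    """Relax interior sample points with springs between neighbors.
    Rest lengths are inversely proportional to seeding density at the
    gap midpoints; endpoints stay fixed. (Illustrative 1D model.)"""
    x = np.array(x, dtype=float)
    for _ in range(iters):
        mid = (x[:-1] + x[1:]) / 2.0
        inv = 1.0 / density(mid)
        rest = inv / inv.sum() * (x[-1] - x[0])   # rest lengths sum to the span
        f = np.diff(x) - rest                     # stretched (+) or compressed (-)
        x[1:-1] += k * (f[1:] - f[:-1])           # net spring force on interior points
    return x
```

Starting from uniform positions, the gaps equilibrate to be inversely proportional to the local density, which is the qualitative behavior the abstract describes.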
Energy Technology Data Exchange (ETDEWEB)
Nesbet, R K [International Business Machines Corp., San Jose, Calif. (USA). Research Lab.]
1978-01-14
Variational calculations locate and identify resonances and new threshold structures in electron impact excitation of He metastable states, in the region of the 3³S and 3¹S excitation thresholds. A virtual state is found at the 3³S threshold.
Yasuda, Kazuyuki; Kobayashi, Kaoru; Yamaguchi, Masayasu; Tanaka, Koichi; Fujii, Tomokazu; Kitahara, Yuichi; Tamaoki, Toshio; Matsushita, Yutaka; Nunomura, Akihiko; Motohashi, Nobutaka
2015-01-01
Seizure threshold (ST) in electroconvulsive therapy (ECT) has not been reported previously in Japanese patients. We investigated ST in bilateral ECT in Japanese patients using the dose-titration method. The associations between demographic and clinical characteristics and ST were analyzed to identify the predictors of ST. Finally, the validity of the half-age method for the stimulus dose was evaluated. Fifty-four Japanese patients with mood disorder, schizophrenia, and other psychotic disorders received an acute course of bilateral ECT using a brief-pulse device. ST was determined at the first session using a fixed titration schedule. ST was correlated with age, sex, body mass index, history of previous ECT, and psychotropic drugs on multiple regression analysis. Furthermore, the rate of accomplished seizures was calculated using the half-age method. Mean ST was 136 mC. ST was influenced by age, sex, history of previous ECT, and medication with benzodiazepines. The accomplished seizure rate using the half-age method was 72%, which was significantly lower in men and in subjects on benzodiazepines. ST in Japanese patients was equal to or slightly higher than that previously reported in other ethnic groups, which might be attributable, at least in part, to the high prevalence and large doses of benzodiazepine prescriptions. Higher age, male gender, no history of ECT, and benzodiazepine use were related to higher ST. The half-age method was especially useful in female patients and subjects without benzodiazepine medication. © 2014 The Authors. Psychiatry and Clinical Neurosciences © 2014 Japanese Society of Psychiatry and Neurology.
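For readers unfamiliar with the half-age method evaluated above: the stimulus is set to half the patient's age, expressed as a percentage of the device's maximum output charge. The sketch below assumes a 504 mC maximum (typical of some brief-pulse devices); that constant is an assumption for illustration, not a value taken from this study:

```python
def half_age_stimulus(age_years, max_charge_mc=504.0):
    """Half-age method sketch: stimulus charge = (age / 2) percent of the
    device maximum (max_charge_mc is an assumed device constant)."""
    percent = age_years / 2.0
    return percent / 100.0 * max_charge_mc
```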
Comparison of anaerobic threshold determined by visual and mathematical methods in healthy women
Directory of Open Access Journals (Sweden)
M.N. Higa
2007-04-01
Full Text Available Several methods are used to estimate the anaerobic threshold (AT) during exercise. The aim of the present study was to compare AT obtained by a graphic visual method for the estimate of ventilatory and metabolic variables (gold standard) to a bi-segmental linear regression mathematical model of Hinkley's algorithm applied to heart rate (HR) and carbon dioxide output (VCO2) data. Thirteen young (24 ± 2.63 years old) and 16 postmenopausal (57 ± 4.79 years old) healthy and sedentary women were submitted to a continuous ergospirometric incremental test on an electromagnetically braked cycloergometer with 10 to 20 W/min increases until physical exhaustion. The ventilatory variables were recorded breath-to-breath and HR was obtained beat-to-beat in real time. Data were analyzed by the nonparametric Friedman test and Spearman correlation test with the level of significance set at 5%. Power output (W), HR (bpm), oxygen uptake VO2 (mL kg-1 min-1 and mL/min), VCO2 (mL/min), and minute ventilation VE (L/min) data observed at the AT level were similar for both methods and groups studied (P > 0.05). The VO2 (mL kg-1 min-1) data showed significant correlation (P < 0.05) between the gold standard method and the mathematical model when applied to HR (r_s = 0.75) and VCO2 (r_s = 0.78) data for the subjects as a whole (N = 29). The proposed mathematical method for the detection of changes in response patterns of VCO2 and HR was adequate and promising for AT detection in young and middle-aged women, representing a semi-automatic, non-invasive and objective AT measurement.
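A bi-segmental linear regression model of the kind attributed above to Hinkley's algorithm can be sketched by scanning candidate breakpoints and keeping the one that minimizes the summed squared error of two independent least-squares lines fitted on either side. This is an illustrative reconstruction of the general technique, not the authors' implementation:

```python
import numpy as np

def bisegmental_breakpoint(x, y):
    """Return the x-value of the breakpoint that minimizes the total SSE
    of two independent least-squares lines (bi-segmental regression)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    best_sse, best_bp = np.inf, None
    for i in range(2, len(x) - 2):                  # keep >= 3 points per segment
        sse = 0.0
        for xs, ys in ((x[:i + 1], y[:i + 1]), (x[i:], y[i:])):
            A = np.vstack([xs, np.ones_like(xs)]).T
            coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
            resid = ys - A @ coef
            sse += resid @ resid
        if sse < best_sse:
            best_sse, best_bp = sse, x[i]
    return best_bp
```

Applied to a VCO2-versus-work-rate or HR-versus-work-rate series, the detected breakpoint plays the role of the AT estimate.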
Track and vertex reconstruction: From classical to adaptive methods
International Nuclear Information System (INIS)
Strandlie, Are; Fruehwirth, Rudolf
2010-01-01
This paper reviews classical and adaptive methods of track and vertex reconstruction in particle physics experiments. Adaptive methods have been developed to meet the experimental challenges at high-energy colliders, in particular, the CERN Large Hadron Collider. They can be characterized by the obliteration of the traditional boundaries between pattern recognition and statistical estimation, by the competition between different hypotheses about what constitutes a track or a vertex, and by a high level of flexibility and robustness achieved with a minimum of assumptions about the data. The theoretical background of some of the adaptive methods is described, and it is shown that there is a close connection between the two main branches of adaptive methods: neural networks and deformable templates, on the one hand, and robust stochastic filters with annealing, on the other hand. As both classical and adaptive methods of track and vertex reconstruction presuppose precise knowledge of the positions of the sensitive detector elements, the paper includes an overview of detector alignment methods and a survey of the alignment strategies employed by past and current experiments.
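One concrete example of the "competition between hypotheses" mentioned in this review is the annealed assignment weight used in deterministic annealing filters, where each hit competes for a track through a soft, temperature-controlled probability. The sketch below follows the common DAF parameterization; it is an illustration of the idea, not a formula quoted from this paper:

```python
import math

def daf_weight(chi2, chi2_cut, temperature):
    """Soft hit-assignment weight: ~1 for hits well inside the chi2 cut,
    ~0 far outside, and non-committal (0.5) at high temperature."""
    a = math.exp(-chi2 / (2.0 * temperature))
    b = math.exp(-chi2_cut / (2.0 * temperature))
    return a / (a + b)
```

Annealing lowers the temperature over iterations, hardening the soft assignments into near-binary decisions, which is exactly the blending of pattern recognition and estimation the review highlights.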
Directory of Open Access Journals (Sweden)
Tiago Lazzaretti Fernandes
Full Text Available ABSTRACT CONTEXT AND OBJECTIVE: This study aimed to evaluate different mathematical post-analysis methods of determining lactate threshold in highly and lowly trained endurance runners. DESIGN AND SETTING: Experimental laboratory study, in a tertiary-level public university hospital. METHOD: Twenty-seven male endurance runners were divided into two training load groups: lowly trained (frequency < 4 times per week, < 6 consecutive months, training velocity ≥ 5.0 min/km) and highly trained (frequency ≥ 4 times per week, ≥ 6 consecutive months, training velocity < 5.0 min/km). The subjects performed an incremental treadmill protocol, with 1 km/h increases at each subsequent 4-minute stage. Finger-prick blood lactate analysis was performed at the end of each stage. The lactate threshold (i.e. the running velocity at which blood lactate levels began to exponentially increase) was measured using three different methods: increase in blood lactate of 1 mmol/l between stages (DT1), absolute 4 mmol/l blood lactate concentration (4 mmol), and the semi-log method (semi-log). ANOVA was used to compare the different lactate threshold methods and training groups. RESULTS: Highly trained athletes showed significantly greater lactate thresholds than lowly trained runners, regardless of the calculation method used. When all the subject data were combined, DT1 and semi-log were not different, while 4 mmol was significantly lower than the other two methods. These same trends were observed when comparing lactate threshold methods in the lowly trained group. However, 4 mmol was only significantly lower than DT1 in the highly trained group. CONCLUSION: The 4 mmol protocol did not show lactate threshold measurements comparable with the DT1 and semi-log protocols among lowly trained athletes.
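Of the three methods compared, the fixed 4 mmol/l criterion is the simplest to reproduce: linearly interpolate the running velocity at which the blood lactate curve crosses 4 mmol/l. A minimal sketch, assuming lactate increases monotonically across the stages:

```python
import numpy as np

def lactate_threshold_4mmol(velocity_kmh, lactate_mmol):
    """Velocity at which linearly interpolated blood lactate reaches
    4 mmol/l (lactate_mmol must be monotonically increasing)."""
    return float(np.interp(4.0, lactate_mmol, velocity_kmh))
```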
Directory of Open Access Journals (Sweden)
Kusworo Adi
2017-01-01
Full Text Available Beef is one of the animal food products that have high nutrition because it contains carbohydrates, proteins, fats, vitamins, and minerals. Therefore, the quality of beef should be maintained so that consumers get good beef quality. Determination of beef quality is commonly conducted visually by comparing the actual beef with reference pictures of each beef class. This process presents weaknesses, as it is subjective in nature and takes a considerable amount of time. Therefore, an automated system based on image processing that is capable of determining beef quality is required. This research aims to develop an image segmentation method by processing digital images. The system designed consists of image acquisition processes with varied distance, resolution, and angle. Image segmentation is done to separate the images of fat and meat using the Otsu thresholding method. Classification was carried out using the decision tree algorithm, and the best accuracies obtained were 90% for training and 84% for testing. Once developed, this system was embedded into an Android application. Results show that the image processing technique is capable of proper marbling score identification.
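Otsu's method, used above to separate fat from meat, chooses the gray level that maximizes the between-class variance of the image histogram. A minimal NumPy sketch for an 8-bit grayscale image (a generic textbook implementation, not the authors' code):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the 8-bit threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))        # cumulative mean up to level t
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)          # empty classes contribute zero
    return int(np.argmax(sigma_b))
```

Pixels above the returned level would be labeled as one class (e.g. fat) and the rest as the other, after which region features feed the decision tree classifier.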
Energy Technology Data Exchange (ETDEWEB)
Varga-Szemes, Akos; Schoepf, U.J.; Suranyi, Pal; De Cecco, Carlo N.; Fox, Mary A. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Muscogiuri, Giuseppe [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Rome ''Sapienza'', Department of Medical-Surgical Sciences and Translational Medicine, Rome (Italy); Wichmann, Julian L. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University Hospital Frankfurt, Department of Diagnostic and Interventional Radiology, Frankfurt (Germany); Cannao, Paola M. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Milan, Scuola di Specializzazione in Radiodiagnostica, Milan (Italy); Renker, Matthias [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Kerckhoff Heart and Thorax Center, Bad Nauheim (Germany); Mangold, Stefanie [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Eberhard-Karls University Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany); Ruzsics, Balazs [Royal Liverpool and Broadgreen University Hospitals, Department of Cardiology, Liverpool (United Kingdom)
2016-05-15
To assess the accuracy and efficiency of a threshold-based, semi-automated cardiac MRI segmentation algorithm in comparison with conventional contour-based segmentation and aortic flow measurements. Short-axis cine images of 148 patients (55 ± 18 years, 81 men) were used to evaluate left ventricular (LV) volumes and mass (LVM) using conventional and threshold-based segmentations. Phase-contrast images were used to independently measure stroke volume (SV). LV parameters were evaluated by two independent readers. Evaluation times using the conventional and threshold-based methods were 8.4 ± 1.9 and 4.2 ± 1.3 min, respectively (P < 0.0001). LV parameters measured by the conventional and threshold-based methods, respectively, were end-diastolic volume (EDV) 146 ± 59 and 134 ± 53 ml; end-systolic volume (ESV) 64 ± 47 and 59 ± 46 ml; SV 82 ± 29 and 74 ± 28 ml (flow-based 74 ± 30 ml); ejection fraction (EF) 59 ± 16 and 58 ± 17 %; and LVM 141 ± 55 and 159 ± 58 g. Significant differences between the conventional and threshold-based methods were observed in EDV, ESV, and LVM measurements; SV from threshold-based and flow-based measurements were in agreement (P > 0.05) but were significantly different from conventional analysis (P < 0.05). Excellent inter-observer agreement was observed. Threshold-based LV segmentation provides improved accuracy and faster assessment compared to conventional contour-based methods. (orig.)
Directory of Open Access Journals (Sweden)
William E Stutz
Full Text Available Genes of the vertebrate major histocompatibility complex (MHC) are of great interest to biologists because of their important role in immunity and disease, and their extremely high levels of genetic diversity. Next generation sequencing (NGS) technologies are quickly becoming the method of choice for high-throughput genotyping of multi-locus templates like MHC in non-model organisms. Previous approaches to genotyping MHC genes using NGS technologies suffer from two problems: (1) a "gray zone", where low-frequency alleles and high-frequency artifacts can be difficult to disentangle, and (2) a similar-sequence problem, where very similar alleles can be difficult to distinguish as two distinct alleles. Here we present a new method for genotyping MHC loci--Stepwise Threshold Clustering (STC)--that addresses these problems by taking full advantage of the increase in sequence data provided by NGS technologies. Unlike previous approaches for genotyping MHC with NGS data that attempt to classify individual sequences as alleles or artifacts, STC uses a quasi-Dirichlet clustering algorithm to cluster similar sequences at increasing levels of sequence similarity. By applying frequency- and similarity-based criteria to clusters rather than individual sequences, STC is able to successfully identify clusters of sequences that correspond to individual or similar alleles present in the genomes of individual samples. Furthermore, STC does not require duplicate runs of all samples, increasing the number of samples that can be genotyped in a given project. We show how the STC method works using a single sample library. We then apply STC to 295 threespine stickleback (Gasterosteus aculeatus) samples from four populations and show that neighboring populations differ significantly in MHC allele pools. We show that STC is a reliable, accurate, efficient, and flexible method for genotyping MHC that will be of use to biologists interested in a variety of downstream applications.
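The core clustering step can be illustrated with single-linkage merging at a fixed similarity threshold; STC as described applies such a step repeatedly at increasing thresholds and evaluates frequency criteria on the resulting clusters. The sketch below uses simple per-position identity of equal-length sequences and a union-find structure; it illustrates threshold clustering in general, not the quasi-Dirichlet algorithm itself:

```python
def cluster_at_threshold(seqs, threshold):
    """Single-linkage clustering: merge any two equal-length sequences
    whose fraction of matching positions is at least `threshold`."""
    parent = list(range(len(seqs)))

    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def similarity(a, b):
        return sum(x == y for x, y in zip(a, b)) / len(a)

    for i in range(len(seqs)):
        for j in range(i + 1, len(seqs)):
            if similarity(seqs[i], seqs[j]) >= threshold:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    clusters = {}
    for i in range(len(seqs)):
        clusters.setdefault(find(i), []).append(seqs[i])
    return list(clusters.values())
```

Stepping the threshold upward splits coarse clusters into finer ones; STC's contribution is deciding, from read frequencies, which of those clusters represent true alleles.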
Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.
Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel
2018-06-05
In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
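The migration velocity-adaptive idea can be sketched as a moving average whose half-window grows with migration time, so broad, slowly migrating late peaks are smoothed more aggressively than sharp early ones. The linear window-growth rule and its parameters below are assumptions for illustration, not the published algorithm:

```python
import numpy as np

def velocity_adaptive_smooth(signal, times, base_window=1, growth=0.5):
    """Moving average whose half-window increases with migration time
    (base_window and growth are hypothetical tuning parameters)."""
    signal = np.asarray(signal, dtype=float)
    out = np.empty_like(signal)
    for i, t in enumerate(times):
        half = base_window + int(growth * t)       # wider window for later peaks
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out[i] = signal[lo:hi].mean()
    return out
```

Matching the window to the peak's frequency content is what avoids both under-smoothing (noise left in) and over-smoothing (peaks distorted), the two failure modes the abstract lists.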
Gustafson, Samantha; Pittman, Andrea; Fanning, Robert
2013-06-01
This tutorial demonstrates the effects of tubing length and coupling type (i.e., foam tip or personal earmold) on hearing threshold and real-ear-to-coupler difference (RECD) measures. Hearing thresholds from 0.25 kHz through 8 kHz are reported at various tubing lengths for 28 normal-hearing adults between the ages of 22 and 31 years. RECD values are reported for 14 of the adults. All measures were made with an insert earphone coupled to a standard foam tip and with an insert earphone coupled to each participant's personal earmold. Threshold and RECD measures obtained with a personal earmold were significantly different from those obtained with a foam tip on repeated measures analyses of variance. One-sample t tests showed these differences to vary systematically with increasing tubing length, with the largest average differences (7-8 dB) occurring at 4 kHz. This systematic examination demonstrates the equal and opposite effects of tubing length on threshold and acoustic measures. Specifically, as tubing length increased, sound pressure level in the ear canal decreased, affecting both hearing thresholds and the real-ear portion of the RECDs. This demonstration shows that when the same coupling method is used to obtain the hearing thresholds and RECD, equal and accurate estimates of real-ear sound pressure level are obtained.
The adaptive collision source method for discrete ordinates radiation transport
International Nuclear Information System (INIS)
Walters, William J.; Haghighat, Alireza
2017-01-01
Highlights: • A new adaptive quadrature method to solve the discrete ordinates transport equation. • The adaptive collision source (ACS) method splits the flux into n'th-collided components. • The uncollided flux requires a high quadrature order; this is lowered with the number of collisions. • ACS automatically applies an appropriate quadrature order to each collided component. • The adaptive quadrature is 1.5–4 times more efficient than uniform quadrature. - Abstract: A novel collision source method has been developed to solve the Linear Boltzmann Equation (LBE) more efficiently by adaptation of the angular quadrature order. The angular adaptation method is unique in that the flux from each scattering source iteration is obtained, with potentially a different quadrature order used for each. Traditionally, the flux from every iteration is combined, with the same quadrature applied to the combined flux. Since the scattering process tends to distribute the radiation more evenly over angles (i.e., make it more isotropic), the quadrature requirements generally decrease with each iteration. This method allows for an optimal use of processing power, by using a high-order quadrature for the first iterations that need it, before shifting to lower-order quadratures for the remaining iterations. This is essentially an extension of the first collision source method, and is referred to as the adaptive collision source (ACS) method. The ACS methodology has been implemented in the 3-D, parallel, multigroup discrete ordinates code TITAN. This code was tested on several simple and complex fixed-source problems. The ACS implementation in TITAN has shown a reduction in computation time by a factor of 1.5–4 on the fixed-source test problems, for the same desired level of accuracy, as compared to the standard TITAN code.
Susrama, I. G.; Purnama, K. E.; Purnomo, M. H.
2016-01-01
Oligospermia is a male fertility issue defined as a low sperm concentration in the ejaculate. The normal sperm concentration is 20-120 million/ml, while oligospermia patients have a sperm concentration of less than 20 million/ml. Sperm tests to determine oligospermia are done in the fertility laboratory by checking fresh sperm according to WHO 2010 standards [9]. The sperm are viewed under a microscope using a Neubauer improved counting chamber, and the number of sperm is counted manually. To perform the count automatically, this research built an automated system to analyse and count the sperm concentration, called the Automated Analysis of Sperm Concentration Counters (A2SC2), using Otsu threshold segmentation and morphology. The data used were fresh sperm samples from 10 people, analyzed directly in the laboratory. Test results using the A2SC2 method achieved an accuracy of 91%. Thus, in this study, A2SC2 can be used to calculate the number and concentration of sperm automatically.
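After Otsu binarization, counting sperm reduces to labeling connected foreground components. Below is a dependency-free sketch of that counting stage using an iterative 4-connected flood fill; the actual A2SC2 pipeline also applies morphological cleanup before counting:

```python
import numpy as np

def count_components(binary):
    """Count 4-connected foreground components in a boolean image
    using an explicit-stack flood fill (no recursion, no SciPy)."""
    seen = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    count = 0
    for sr in range(h):
        for sc in range(w):
            if binary[sr, sc] and not seen[sr, sc]:
                count += 1                      # new component found
                stack = [(sr, sc)]
                seen[sr, sc] = True
                while stack:
                    r, c = stack.pop()
                    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                        if 0 <= nr < h and 0 <= nc < w and binary[nr, nc] and not seen[nr, nc]:
                            seen[nr, nc] = True
                            stack.append((nr, nc))
    return count
```

Dividing the count by the known chamber volume would then yield the concentration in million/ml.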
Directory of Open Access Journals (Sweden)
Domingues M. O.
2013-12-01
Full Text Available We present a new adaptive multiresolution method for the numerical simulation of ideal magnetohydrodynamics. The governing equations, i.e., the compressible Euler equations coupled with the Maxwell equations, are discretized using a finite volume scheme on a two-dimensional Cartesian mesh. Adaptivity in space is obtained via Harten's cell average multiresolution analysis, which allows the reliable introduction of a locally refined mesh while controlling the error. The explicit time discretization uses a compact Runge-Kutta method for local time stepping and an embedded Runge-Kutta scheme for automatic time step control. An extended generalized Lagrangian multiplier approach with the mixed hyperbolic-parabolic correction type is used to control the incompressibility of the magnetic field. Applications to a two-dimensional problem illustrate the properties of the method. Memory savings and the numerical divergence of the magnetic field are reported, and the accuracy of the adaptive computations is assessed by comparison with the available exact solution.
Cultural adaptation and translation of measures: an integrated method.
Sidani, Souraya; Guruge, Sepali; Miranda, Joyal; Ford-Gilboe, Marilyn; Varcoe, Colleen
2010-04-01
Differences in the conceptualization and operationalization of health-related concepts may exist across cultures. Such differences underscore the importance of examining conceptual equivalence when adapting and translating instruments. In this article, we describe an integrated method for exploring conceptual equivalence within the process of adapting and translating measures. The integrated method involves five phases including selection of instruments for cultural adaptation and translation; assessment of conceptual equivalence, leading to the generation of a set of items deemed to be culturally and linguistically appropriate to assess the concept of interest in the target community; forward translation; back translation (optional); and pre-testing of the set of items. Strengths and limitations of the proposed integrated method are discussed. (c) 2010 Wiley Periodicals, Inc.
Adaptive Subband Filtering Method for MEMS Accelerometer Noise Reduction
Directory of Open Access Journals (Sweden)
Piotr PIETRZAK
2008-12-01
Full Text Available Silicon microaccelerometers can be considered as an alternative to high-priced piezoelectric sensors. Unfortunately, relatively high noise floor of commercially available MEMS (Micro-Electro-Mechanical Systems sensors limits the possibility of their usage in condition monitoring systems of rotating machines. The solution of this problem is the method of signal filtering described in the paper. It is based on adaptive subband filtering employing Adaptive Line Enhancer. For filter weights adaptation, two novel algorithms have been developed. They are based on the NLMS algorithm. Both of them significantly simplify its software and hardware implementation and accelerate the adaptation process. The paper also presents the software (Matlab and hardware (FPGA implementation of the proposed noise filter. In addition, the results of the performed tests are reported. They confirm high efficiency of the solution.
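An Adaptive Line Enhancer of the kind described predicts the current sample from delayed past samples: the periodic (machine-vibration) component is predictable while broadband sensor noise is not, so the predictor output is a denoised estimate. Below is a plain-NumPy NLMS sketch; the filter order, step size, and delay are illustrative defaults, and this is the textbook NLMS update, not the paper's two simplified hardware-oriented variants:

```python
import numpy as np

def nlms_line_enhancer(x, order=8, mu=0.1, delay=1, eps=1e-8):
    """NLMS adaptive line enhancer: y approximates the narrowband
    (predictable) part of x; e = x - y is the broadband residual."""
    w = np.zeros(order)
    y = np.zeros(len(x))
    for n in range(order + delay, len(x)):
        u = x[n - delay - order + 1:n - delay + 1][::-1]   # delayed tap vector
        y[n] = w @ u
        e = x[n] - y[n]
        w += mu * e * u / (u @ u + eps)                    # normalized LMS update
    return y
```

The normalization by `u @ u` is what makes the step size robust to signal power, a property that matters for fixed-point FPGA implementations like the one reported.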
An Adaptive S-Method to Analyze Micro-Doppler Signals for Human Activity Classification.
Li, Fangmin; Yang, Chao; Xia, Yuqing; Ma, Xiaolin; Zhang, Tao; Zhou, Zhou
2017-11-29
In this paper, we propose the multiwindow Adaptive S-method (AS-method) distribution approach used in the time-frequency analysis for radar signals. Based on the results of orthogonal Hermite functions that have good time-frequency resolution, we vary the length of window to suppress the oscillating component caused by cross-terms. This method can bring a better compromise in the auto-terms concentration and cross-terms suppressing, which contributes to the multi-component signal separation. Finally, the effective micro signal is extracted by threshold segmentation and envelope extraction. To verify the proposed method, six states of motion are separated by a classifier of a support vector machine (SVM) trained to the extracted features. The trained SVM can detect a human subject with an accuracy of 95.4% for two cases without interference.
International Nuclear Information System (INIS)
Mera, David; Cotos, José M.; Varela-Pet, José; Garcia-Pineda, Oscar
2012-01-01
Highlights: ► We present an adaptive thresholding algorithm to segment oil spills. ► The segmentation algorithm is based on SAR images and wind field estimations. ► A Database of oil spill confirmations was used for the development of the algorithm. ► Wind field estimations have demonstrated to be useful for filtering look-alikes. ► Parallel programming has been successfully used to minimize processing time. - Abstract: Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean’s surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time.
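The adaptive element here is that the dark-spot threshold depends on the wind field, which helps reject low-wind look-alikes. The sketch below uses a simple rule, pixels more than a wind-dependent offset below the local mean backscatter are flagged; the linear wind dependence and its constants are hypothetical stand-ins for the paper's trained thresholding, included only to show the structure of such an algorithm:

```python
import numpy as np

def segment_dark_spots(img_db, wind_speed, base_offset=-2.0, k=0.15, win=7):
    """Flag a pixel as a candidate slick when its backscatter (dB) falls
    below local_mean + base_offset - k*wind_speed. All constants are
    hypothetical illustration values, not the paper's parameters."""
    pad = win // 2
    padded = np.pad(img_db, pad, mode='edge')
    h, w = img_db.shape
    local_mean = np.empty((h, w))
    for r in range(h):                      # brute-force local mean (clarity over speed)
        for c in range(w):
            local_mean[r, c] = padded[r:r + win, c:c + win].mean()
    threshold = local_mean + base_offset - k * wind_speed
    return img_db < threshold
```

A production version would vectorize the local mean (e.g. with an integral image) and, as the abstract notes, parallelize across image tiles.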
Final Report: Symposium on Adaptive Methods for Partial Differential Equations
Energy Technology Data Exchange (ETDEWEB)
Pernice, M.; Johnson, C.R.; Smith, P.J.; Fogelson, A.
1998-12-10
OAK-B135 Final Report: Symposium on Adaptive Methods for Partial Differential Equations. Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.
Time over threshold readout method of SiPM based small animal PET detector
International Nuclear Information System (INIS)
Valastyan, I.; Gal, J.; Hegyesi, G.; Kalinka, G.; Nagy, F.; Kiraly, B.; Imrek, J.; Molnar, J.
2012-01-01
Complete text of publication follows. The aim of the work was to design a readout concept for the silicon photomultiplier (SiPM) sensor array used in a small animal PET scanner. The detector module consists of 35x35 LYSO scintillation crystals, 324 SiPM sensors (arranged in 2x2 blocks and those quads in a 9x9 configuration) and FPGA-based readout electronics. The SiPM matrix has an area of 48x48 mm², and each SiPM sensor measures 1.95x2.2 mm². Due to the high dark current of the SiPM, the conventional Anger-based readout method does not provide sufficient crystal position maps. Digitizing all 324 SiPM channels is a straightforward way to obtain proper crystal position maps; however, handling hundreds of analogue input channels and the required DSP resources would demand large racks of data acquisition electronics. Therefore, coding of the readout channels is required. Proposed readout method: the coding of the 324 SiPMs consists of two steps. Step 1) Reduction of the channels from 324 to 36: in a row-column readout, SiPMs are connected together column by column and row by row, so only 36 channels are required. The dark current of 18 connected SiPMs is small enough to allow identification of pulses coming from scintillation events. Step 2) Reduction of the 18 rows and columns to 4 channels: comparators were connected to each row and column, with their levels set above the dark-noise level, so only a few comparators are active when scintillation light enters the tile. The outputs of the comparator rows and columns are divided into two parts using resistor chains. The outputs of the resistor chains are then digitized by a 4-channel ADC; however, instead of the Anger method, time over threshold (ToT) was used. Figure 1 shows the readout concept of the SiPM matrix. In order to validate the new method and optimize the front-end electronics of the detector, the analogue signals were digitized before the comparators using a CAEN DT5740 32 channel digitizer, then the
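The time-over-threshold measurement itself is simple: instead of integrating the pulse amplitude as in the Anger method, one measures how long the digitized waveform stays above the comparator level. A minimal sketch for a sampled pulse:

```python
def time_over_threshold(samples, level, dt=1.0):
    """ToT of a digitized pulse: number of samples above the comparator
    level, scaled by the sampling period dt."""
    return dt * sum(1 for s in samples if s > level)
```

Because ToT grows monotonically with pulse energy for a fixed pulse shape, it serves as a cheap energy estimate that needs only a comparator and a timer rather than a full ADC per channel.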
Wavelet methods in multi-conjugate adaptive optics
International Nuclear Information System (INIS)
Helin, T; Yudytskiy, M
2013-01-01
The next generation ground-based telescopes rely heavily on adaptive optics for overcoming the limitation of atmospheric turbulence. In the future adaptive optics modalities, like multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on using locality properties of compactly supported wavelets, both in the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on the official end-to-end simulation tool OCTOPUS of European Southern Observatory. (paper)
An Adaptively Accelerated Bayesian Deblurring Method with Entropy Prior
Directory of Open Access Journals (Sweden)
Yong-Hoon Kim
2008-05-01
Full Text Available The development of an efficient adaptively accelerated iterative deblurring algorithm based on the Bayesian statistical concept is reported. The entropy of an image is used as a "prior" distribution and, instead of the additive form used in conventional acceleration methods, an exponent form of the relaxation constant is used for acceleration. The proposed method is hereafter called adaptively accelerated maximum a posteriori with entropy prior (AAMAPE). Based on empirical observations in different experiments, the exponent is computed adaptively using first-order derivatives of the deblurred image from the previous two iterations. This exponent improves the speed of the AAMAPE method in the early stages and ensures stability at later stages of iteration. In the AAMAPE method, we also consider the constraints of nonnegativity and flux conservation. The paper discusses the fundamental idea of Bayesian image deblurring with the use of entropy as a prior, and analyzes the superresolution and noise-amplification characteristics of the proposed method. The experimental results show that the proposed AAMAPE method gives lower RMSE and higher SNR in 44% fewer iterations than the nonaccelerated maximum a posteriori with entropy prior (MAPE) method. Moreover, AAMAPE followed by wavelet Wiener filtering gives better results than the state-of-the-art methods.
On Self-Adaptive Method for General Mixed Variational Inequalities
Directory of Open Access Journals (Sweden)
Abdellah Bnouhachem
2008-01-01
Full Text Available We suggest and analyze a new self-adaptive method for solving general mixed variational inequalities, which can be viewed as an improvement of the method of Noor (2003). Global convergence of the new method is proved under the same assumptions as Noor's method. Some preliminary computational results are given to illustrate the efficiency of the proposed method. Since general mixed variational inequalities include general variational inequalities, quasivariational inequalities, and nonlinear (implicit) complementarity problems as special cases, the results proved in this paper continue to hold for these problems.
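As a concrete illustration of the projection-type iterations this class of methods builds on, here is a minimal self-adaptive projection sketch for a variational inequality over a box constraint. The shrink/grow step-size rule is a generic heuristic, not Bnouhachem's actual update, and the operator and bounds are made-up test data.

```python
import numpy as np

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

def self_adaptive_projection(F, x0, lo, hi, lam=1.0, tol=1e-8, max_iter=1000):
    """Projection method with a self-adaptive step size: lam is shrunk until a
    residual-based acceptance test holds, then cautiously enlarged after each
    accepted step (a common self-adaptive rule, details differ from Noor-type
    schemes)."""
    x = x0.astype(float)
    for _ in range(max_iter):
        y = project_box(x - lam * F(x), lo, hi)
        r = x - y                                  # projected residual
        if np.linalg.norm(r) < tol:
            break
        # acceptance test: Lipschitz-like condition on F along the step
        while lam * np.linalg.norm(F(x) - F(y)) > 0.9 * np.linalg.norm(r):
            lam *= 0.5
            y = project_box(x - lam * F(x), lo, hi)
            r = x - y
        x = y
        lam *= 1.1                                 # try a larger step next time
    return x

# monotone affine operator F(x) = A x + b over the box [0, 10]^2
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-8.0, -6.0])
x_star = self_adaptive_projection(lambda x: A @ x + b, np.zeros(2), 0.0, 10.0)
```

For this toy problem the unconstrained solution A x = -b lies inside the box, so the iterates converge to x = (18/11, 16/11).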
Adaptive decoupled power control method for inverter connected DG
DEFF Research Database (Denmark)
Sun, Xiaofeng; Tian, Yanjun; Chen, Zhe
2014-01-01
This paper proposes an adaptive droop control method based on online evaluation of the power decoupling matrix for inverter-connected distributed generations in distribution systems. Traditional decoupled power control is based simply on the line impedance parameters, but the load characteristics also cause power coupling, and alter...
Use of dynamic grid adaption in the ASWR-method
International Nuclear Information System (INIS)
Graf, U.; Romstedt, P.; Werner, W.
1985-01-01
A dynamic grid adaption method has been developed for use with the ASWR-method. The method automatically adapts the number and position of the spatial mesh points as the solution of hyperbolic or parabolic vector partial differential equations progresses in time. The mesh selection algorithm is based on minimization of the L2-norm of the spatial discretization error. The method permits accurate calculation of the evolution of inhomogeneities such as wave fronts, shock layers and other sharp transitions, while generally using a coarse computational grid. The number of required mesh points is significantly reduced relative to a fixed Eulerian grid. Since the mesh selection algorithm is computationally inexpensive, a corresponding reduction of computing time results.
Cox-Davenport, Rebecca A; Phelan, Julia C
2015-05-01
First-time NCLEX-RN pass rates are an important indicator of nursing school success and quality. Nursing schools use different methods to anticipate NCLEX outcomes and help prevent student failure and possible threat to accreditation. This study evaluated the impact of a shift in NCLEX preparation policy at a BSN program in the southeast United States. The policy shifted from the use of predictor score thresholds to determine graduation eligibility to a more proactive remediation strategy involving adaptive quizzing. A descriptive correlational design evaluated the impact of an adaptive quizzing system designed to give students ongoing active practice and feedback and explored the relationship between predictor examinations and NCLEX success. Data from student usage of the system as well as scores on predictor tests were collected for three student cohorts. Results revealed a positive correlation between adaptive quizzing system usage and content mastery. Two of the 69 students in the sample did not pass the NCLEX. With so few students failing the NCLEX, predictability of any course variables could not be determined. The power of predictor examinations to predict NCLEX failure could also not be supported. The most consistent factor among students, however, was their content mastery level within the adaptive quizzing system. Implications of these findings are discussed.
Discrete linear canonical transform computation by adaptive method.
Zhang, Feng; Tao, Ran; Wang, Yue
2013-07-29
The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, the computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches of the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have the inherent parallel structures which make them suitable for efficient VLSI implementations, and are robust to the propagation of possible errors in the computation process.
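The LMS recursion at the heart of the proposed computation scheme is the standard stochastic-gradient filter update. The sketch below applies plain LMS to a toy system-identification task rather than to the discrete LCT itself; the filter length, step size and "unknown" FIR taps are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lms_identify(x, d, n_taps, mu):
    """LMS adaptation: nudge the weights w along the instantaneous error
    gradient so that the filter output w @ u tracks the desired signal d."""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]   # most recent sample first
        e = d[k] - w @ u                    # a-priori error
        w += mu * e * u                     # stochastic-gradient update
    return w

h_true = np.array([0.5, -0.3, 0.2])         # "unknown" system (toy FIR)
x = rng.standard_normal(20000)
d = np.convolve(x, h_true)[:len(x)]         # noise-free desired output
w_hat = lms_identify(x, d, n_taps=3, mu=0.01)
```

With white input and no observation noise, the weights converge to the true taps; in a transform-computation setting the same update adapts the coefficients that realize the desired linear map.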
Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.
2007-03-01
Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, the computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
Panchromatic cooperative hyperspectral adaptive wide band deletion repair method
Jiang, Bitao; Shi, Chunyu
2018-02-01
In hyperspectral data, stripe deletion often occurs, which seriously affects the efficiency and accuracy of data analysis and application. Narrow-band deletion can be repaired directly by interpolation, but this method is not ideal for wide-band deletion repair. In this paper, an adaptive spectral wide-band missing-data restoration method based on panchromatic information is proposed, and the effectiveness of the algorithm is verified by experiments.
An Adaptive Laboratory Evolution Method to Accelerate Autotrophic Metabolism
DEFF Research Database (Denmark)
Zhang, Tian; Tremblay, Pier-Luc
2018-01-01
Adaptive laboratory evolution (ALE) is an approach enabling the development of novel characteristics in microbial strains via the application of a constant selection pressure. This method is also an efficient tool to acquire insights on molecular mechanisms responsible for specific phenotypes. ALE...... autotrophically and reducing CO2 into acetate more efficiently. Strains developed via this ALE method were also used to gain knowledge on the autotrophic metabolism of S. ovata as well as other acetogenic bacteria....
Method and system for environmentally adaptive fault tolerant computing
Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)
2010-01-01
A method and system for adapting fault-tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment and determining the on-board processing system's sensitivity to the measured condition. It is then determined whether to reconfigure the fault tolerance of the on-board processing system based in part on the measured environmental condition, and the fault tolerance of the on-board processing system may be reconfigured accordingly.
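The decision logic the patent describes can be sketched as a simple policy: combine a measured environmental condition with the system's sensitivity to it, then pick a fault-tolerance configuration. The flux values, sensitivity and thresholds below are invented for illustration and are not from the patent.

```python
from dataclasses import dataclass

@dataclass
class FTConfig:
    redundancy: int          # number of redundant processing lanes
    scrub_interval_s: float  # memory-scrubbing period

def reconfigure(flux, sensitivity):
    """Hypothetical policy: expected upset rate = measured particle flux
    times device sensitivity; stronger fault tolerance as the rate grows."""
    upset_rate = flux * sensitivity                # upsets per second (model)
    if upset_rate > 1e-2:
        return FTConfig(redundancy=3, scrub_interval_s=1.0)    # full TMR
    if upset_rate > 1e-4:
        return FTConfig(redundancy=2, scrub_interval_s=10.0)   # duplex
    return FTConfig(redundancy=1, scrub_interval_s=60.0)       # simplex

print(reconfigure(flux=5e3, sensitivity=1e-5))
```

A benign environment keeps the system in cheap simplex mode; a radiation spike triggers triple modular redundancy and faster scrubbing.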
Adaptive sampling method in deep-penetration particle transport problem
International Nuclear Information System (INIS)
Wang Ruihong; Ji Zhicheng; Pei Lucheng
2012-01-01
The deep-penetration problem has been one of the difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, a particle-transport random-walk system that treats the emission point as a sampling station is built. Then, an adaptive sampling scheme is derived to obtain a better solution from the information already gathered. The main advantage of the adaptive scheme is that it chooses the most suitable sampling number at the emission-point station so as to minimize the total cost of the random-walk process. Further, a related importance sampling method is introduced. Its main principle is to define an importance function of the particle state and to ensure that the sampling number of emission particles is proportional to this importance function. The numerical results show that the adaptive scheme overcomes, to some degree, the difficulty of underestimated results, and that the adaptive importance sampling method also gives satisfactory results. (authors)
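The importance-sampling principle described above (bias the walk toward the rare deep-penetration events and carry a likelihood-ratio weight) can be shown on a one-dimensional toy problem: estimating the probability that an exponentially distributed path length exceeds ten mean free paths. The stretch factor is an arbitrary choice, and the setup is far simpler than the paper's transport system.

```python
import numpy as np

rng = np.random.default_rng(1)

def analog_estimate(n):
    """Analog Monte Carlo: score the rare event directly."""
    x = rng.exponential(1.0, n)        # path lengths, mean free path = 1
    return np.mean(x > 10.0)

def importance_estimate(n, stretch=10.0):
    """Importance sampling: sample stretched path lengths (mean free path
    multiplied by `stretch`) and carry the likelihood-ratio weight f/g."""
    x = rng.exponential(stretch, n)
    w = np.exp(-x) / (np.exp(-x / stretch) / stretch)   # f(x)/g(x)
    return np.mean((x > 10.0) * w)

p_true = np.exp(-10.0)                 # exact answer, about 4.5e-5
print(analog_estimate(100000), importance_estimate(100000), p_true)
```

With 10^5 histories, the analog estimator scores only a handful of events (large relative error), while the biased estimator hits the deep region constantly and the weights keep it unbiased.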
Penfield, Randall D.; Myers, Nicholas D.; Wolfe, Edward W.
2008-01-01
Measurement invariance in the partial credit model (PCM) can be conceptualized in several different but compatible ways. In this article the authors distinguish between three forms of measurement invariance in the PCM: step invariance, item invariance, and threshold invariance. Approaches for modeling these three forms of invariance are proposed,…
An adaptive sampling and windowing interrogation method in PIV
Theunissen, R.; Scarano, F.; Riethmuller, M. L.
2007-01-01
This study proposes a cross-correlation based PIV image interrogation algorithm that adapts the number of interrogation windows and their size to the image properties and to the flow conditions. The proposed methodology releases the constraint of uniform sampling rate (Cartesian mesh) and spatial resolution (uniform window size) commonly adopted in PIV interrogation. Especially in non-optimal experimental conditions where the flow seeding is inhomogeneous, this leads either to loss of robustness (too few particles per window) or measurement precision (too large or coarsely spaced interrogation windows). Two criteria are investigated, namely adaptation to the local signal content in the image and adaptation to local flow conditions. The implementation of the adaptive criteria within a recursive interrogation method is described. The location and size of the interrogation windows are locally adapted to the image signal (i.e., seeding density). Also the local window spacing (commonly set by the overlap factor) is put in relation with the spatial variation of the velocity field. The viability of the method is illustrated over two experimental cases where the limitation of a uniform interrogation approach appears clearly: a shock-wave-boundary layer interaction and an aircraft vortex wake. The examples show that the spatial sampling rate can be adapted to the actual flow features and that the interrogation window size can be arranged so as to follow the spatial distribution of seeding particle images and flow velocity fluctuations. In comparison with the uniform interrogation technique, the spatial resolution is locally enhanced while in poorly seeded regions the level of robustness of the analysis (signal-to-noise ratio) is kept almost constant.
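The core interrogation step, locating the displacement peak of the window cross-correlation, can be sketched with an FFT correlator; a crude seeding-density rule then grows the window until it contains enough particle images. Window sizes, the brightness test and the target particle count are illustrative assumptions, not the authors' criteria.

```python
import numpy as np

def cross_correlate(win_a, win_b):
    """Displacement of win_b relative to win_a from the FFT cross-correlation peak."""
    spec = np.fft.fft2(win_a) * np.conj(np.fft.fft2(win_b))
    corr = np.fft.fftshift(np.real(np.fft.ifft2(spec)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return center - np.array(peak)      # (dy, dx), circular-shift convention

def adaptive_window_size(image, x, y, min_size=16, max_size=64, target=10):
    """Grow the window at (x, y) until it holds roughly `target` bright
    particle images, a crude stand-in for the seeding-density criterion."""
    size = min_size
    while size < max_size:
        win = image[y:y + size, x:x + size]
        if np.count_nonzero(win > win.mean() + 2.0 * win.std()) >= target:
            break
        size *= 2
    return size

rng = np.random.default_rng(2)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, (3, -2), axis=(0, 1))   # uniform shift of (3, -2)
print(cross_correlate(frame_a, frame_b))
```

In a full interrogation loop, the window size and spacing returned by rules like `adaptive_window_size` would vary across the image, which is exactly the non-uniform sampling the paper advocates.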
Energy Technology Data Exchange (ETDEWEB)
Berthiau, G
1995-10-01
The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times...). This task is equivalent to a multidimensional and/or multiobjective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can eventually be specified. A similar problem consists in fitting component models: the optimization variables are then the model parameters, and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is simulated annealing. This method, which comes from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted with analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. We proposed, for high-dimensional problems, a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we adapted three other methods from the combinatorial optimization domain: the threshold method, a genetic algorithm and the Tabu search method. The tests were performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. (Abstract Truncated)
Final Report: Symposium on Adaptive Methods for Partial Differential Equations
Energy Technology Data Exchange (ETDEWEB)
Pernice, Michael; Johnson, Christopher R.; Smith, Philip J.; Fogelson, Aaron
1998-12-08
Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.
Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes
Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak
2004-01-01
High-order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the nonconforming interfaces between elements. A new technique is introduced to efficiently implement MEM on 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps; in each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming or deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. The method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction of the number of elements used and the CPU time spent. MEM and mesh adaptation bring, respectively, irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel
Deidda, Roberto; Mamalakis, Antonis; Langousis, Andreas
2015-04-01
One of the most crucial issues in statistical hydrology is the estimation of extreme rainfall from data. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a Generalized Pareto Distribution (GPD) model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches that can be grouped into three basic classes: a) non-parametric methods that locate the changing point between the extreme and non-extreme regions of the data, b) graphical methods where one studies the dependence of the GPD parameters (or related metrics) on the threshold level u, and c) Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u above which a GPD model is applicable. In this work, we review representative methods for GPD threshold detection, discuss fundamental differences in their theoretical bases, and apply them to daily rainfall records from the NOAA-NCDC open-access database (http://www.ncdc.noaa.gov/oa/climate/ghcn-daily/). We find that non-parametric methods that locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while graphical methods and GoF metrics that rely on limiting arguments for the upper distribution tail lead to unrealistically high thresholds u. The latter is expected, since one checks the validity of the limiting arguments rather than the applicability of a GPD distribution model. Better performance is demonstrated by graphical methods and GoF metrics that rely on GPD properties. Finally, we discuss the effects of data quantization (common in hydrologic applications) on the estimated thresholds. Acknowledgments: The research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General
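A minimal version of the graphical shape-stability check is easy to script: fit a GPD to the excesses over a range of candidate thresholds and look for a region where the shape estimate settles. The sketch below uses synthetic GPD data and a closed-form method-of-moments fit (valid for shape < 1/2), not the estimators compared in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def gpd_rvs(c, s, n):
    """Draw GPD(shape c, scale s) samples by inverting the CDF."""
    u = rng.random(n)
    return s / c * ((1.0 - u) ** (-c) - 1.0)

def gpd_moments(excesses):
    """Method-of-moments fit: mean = s/(1-c) and var = s^2/((1-c)^2 (1-2c))
    invert to c = (1 - mean^2/var)/2 and s = mean*(1-c)."""
    m, v = excesses.mean(), excesses.var()
    c = 0.5 * (1.0 - m * m / v)
    return c, m * (1.0 - c)

def shape_stability(data, thresholds):
    """Refit the shape over candidate thresholds u; by the threshold
    stability of the GPD, estimates should flatten once the limit applies."""
    return [gpd_moments(data[data > u] - u)[0] for u in thresholds]

data = gpd_rvs(c=0.1, s=1.0, n=20000)
shapes = shape_stability(data, [0.0, 0.5, 1.0, 2.0])
print(shapes)
```

Because the synthetic data are exactly GPD, the shape estimates hover near the true value 0.1 at every threshold; on real rainfall data, the flat region only begins where the asymptotic regime is reached.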
Adaptive multiresolution method for MAP reconstruction in electron tomography
Energy Technology Data Exchange (ETDEWEB)
Acar, Erman, E-mail: erman.acar@tut.fi [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland); Peltonen, Sari; Ruotsalainen, Ulla [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland)
2016-11-15
3D image reconstruction with electron tomography poses problems due to the severely limited range of projection angles and the low signal-to-noise ratio of the acquired projection images. Maximum a posteriori (MAP) reconstruction methods have been successful in compensating for the missing information and suppressing noise through their intrinsic regularization techniques. There are two major problems in MAP reconstruction methods: (1) selection of the regularization parameter that controls the balance between the data fidelity and the prior information, and (2) long computation time. One aim of this study is to provide an adaptive solution to the regularization parameter selection problem without additional knowledge about the imaging environment and the sample. The other aim is to realize the reconstruction using sequences of resolution levels to shorten the computation time. The reconstructions were analyzed in terms of accuracy and computational efficiency using a simulated biological phantom and publicly available experimental datasets of electron tomography. The numerical and visual evaluations of the experiments show that the adaptive multiresolution method can provide more accurate results than weighted back projection (WBP), the simultaneous iterative reconstruction technique (SIRT), and the sequential MAP expectation maximization (sMAPEM) method. The method is superior to sMAPEM also in terms of computation time and usability, since it can reconstruct 3D images significantly faster without requiring any parameter to be set by the user. - Highlights: • An adaptive multiresolution reconstruction method is introduced for electron tomography. • The method provides more accurate results than the conventional reconstruction methods. • The missing wedge and noise problems can be compensated by the method efficiently.
Adaptive θ-methods for pricing American options
Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran
2008-12-01
We develop adaptive θ-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of θ-methods would require a Newton-type iterative procedure at each time step, thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which θ-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
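The θ-method family interpolates between explicit Euler (θ = 0), Crank-Nicolson (θ = 1/2) and implicit Euler (θ = 1). As a hedged sketch of the time-stepping machinery only, the block below applies θ-steps to the heat equation u_t = u_xx, a stand-in for the transformed Black-Scholes operator; the linearly implicit American-option treatment itself is not reproduced, and the grid sizes are arbitrary.

```python
import numpy as np

def theta_step(u, r, theta):
    """One theta-method step for u_t = u_xx on a uniform grid with zero
    boundary values: (I - theta*r*A) u_new = (I + (1-theta)*r*A) u_old,
    where A is the second-difference matrix and r = dt/dx^2."""
    n = len(u)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    lhs = np.eye(n) - theta * r * A
    rhs = (np.eye(n) + (1.0 - theta) * r * A) @ u
    return np.linalg.solve(lhs, rhs)

# demo: damp the lowest sine mode on [0, 1] with Crank-Nicolson (theta = 1/2)
n, dt = 49, 1e-4
dx = 1.0 / (n + 1)
x = np.linspace(dx, 1.0 - dx, n)       # interior grid points
u = np.sin(np.pi * x)
r = dt / dx ** 2
for _ in range(100):
    u = theta_step(u, r, theta=0.5)
```

After 100 steps (t = 0.01) the peak value should sit near the exact decay factor exp(-pi^2 * 0.01); an adaptive variant would adjust dt based on a local error estimate between two θ-values.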
Optimal and adaptive methods of processing hydroacoustic signals (review)
Malyshkin, G. S.; Sidel'nikov, G. B.
2014-09-01
Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector is analyzed, which is based on classical or fast projection algorithms, which estimates the background proceeding from median filtering or the method of bilateral spatial contrast.
HAM-Based Adaptive Multiscale Meshless Method for Burgers Equation
Directory of Open Access Journals (Sweden)
Shu-Li Mei
2013-01-01
Full Text Available Based on multilevel interpolation theory, we constructed a meshless adaptive multiscale interpolation operator (MAMIO) with the radial basis function. Using this operator, any nonlinear partial differential equation, such as the Burgers equation, can be discretized adaptively in physical space as a nonlinear matrix ordinary differential equation. In order to obtain the analytical solution of the system of ODEs, the homotopy analysis method (HAM) proposed by Shijun Liao was developed to solve the system of ODEs, in combination with the precise integration method (PIM), which can be employed to get the analytical solution of a linear system of ODEs. The numerical experiments show that HAM is not sensitive to the time step, so the arithmetic error is mainly derived from the discretization in physical space.
LEACH-A: An Adaptive Method for Improving LEACH Protocol
Directory of Open Access Journals (Sweden)
Jianli ZHAO
2014-01-01
Full Text Available Energy has become one of the most important constraints on wireless sensor networks. Hence, many researchers in this field focus on how to design a routing protocol that prolongs the lifetime of the network. Classical hierarchical protocols such as LEACH and LEACH-C perform well in reducing energy consumption. However, a selection strategy based only on the largest residual energy or the shortest distance will still consume more energy. In this paper an adaptive routing protocol named "LEACH-A", which has an energy threshold E0, is proposed. If there are cluster nodes whose residual energy is greater than E0, the node with the largest residual energy is selected to communicate with the base station; when the energy of all cluster nodes is less than E0, the node nearest to the base station is selected to communicate with the base station. Simulations show that our improved protocol LEACH-A performs better than LEACH and LEACH-C.
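The E0 decision rule is simple enough to state in a few lines. This is a schematic of the selection step only; the coordinates, energies and threshold are made up, and no full LEACH round (clustering, TDMA, energy accounting) is simulated.

```python
import math

def choose_relay(cluster_heads, base_station, e0):
    """LEACH-A relay choice: if any cluster head has residual energy above
    the threshold e0, pick the most energetic one; otherwise pick the head
    closest to the base station. Each head is (energy, (x, y))."""
    above = [h for h in cluster_heads if h[0] > e0]
    if above:
        return max(above, key=lambda h: h[0])
    bx, by = base_station
    return min(cluster_heads,
               key=lambda h: math.hypot(h[1][0] - bx, h[1][1] - by))

heads = [(0.8, (10, 10)), (0.5, (60, 40)), (0.3, (95, 90))]
print(choose_relay(heads, base_station=(100, 100), e0=0.4))  # most energetic
print(choose_relay(heads, base_station=(100, 100), e0=0.9))  # closest to BS
```

The two branches capture the protocol's trade-off: spend the energy of well-charged heads first, and fall back to the cheapest (shortest) link once everyone is below the threshold.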
Adaptive-mesh zoning by the equipotential method
Energy Technology Data Exchange (ETDEWEB)
Winslow, A.M.
1981-04-01
An adaptive mesh method is proposed for the numerical solution of differential equations which causes the mesh lines to move closer together in regions where higher resolution in some physical quantity T is desired. A coefficient D > 0 is introduced into the equipotential zoning equations, where D depends on the gradient of T. The equations are inverted, leading to nonlinear elliptic equations for the mesh coordinates with source terms which depend on the gradient of D. A functional form of D is proposed.
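A one-dimensional analogue of gradient-weighted zoning makes the idea concrete: equidistribute a monitor function W = 1 + alpha*|dT/dx| so that mesh points crowd where T varies rapidly. The monitor form and alpha are illustrative stand-ins for the paper's coefficient D, and the 2-D elliptic inversion is not reproduced.

```python
import numpy as np

def adapt_mesh_1d(x_uniform, T, n_cells, alpha=5.0):
    """Place mesh points so each cell carries an equal share of the monitor
    W = 1 + alpha*|dT/dx|, pulling points together where T varies rapidly."""
    w = 1.0 + alpha * np.abs(np.gradient(T, x_uniform))
    # cumulative integral of the monitor (trapezoid rule)
    W = np.concatenate([[0.0],
                        np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x_uniform))])
    targets = np.linspace(0.0, W[-1], n_cells + 1)
    return np.interp(targets, W, x_uniform)

x = np.linspace(0.0, 1.0, 401)
T = np.tanh(20.0 * (x - 0.5))          # sharp front at x = 0.5
mesh = adapt_mesh_1d(x, T, n_cells=20)
print(np.diff(mesh).min(), np.diff(mesh).max())
```

The resulting 20-cell mesh is strongly nonuniform: cells near the tanh front are an order of magnitude smaller than those in the flat regions, which is exactly the clustering behavior the coefficient D induces in the equipotential equations.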
Sperling, Milena P R; Simões, Rodrigo P; Caruso, Flávia C R; Mendes, Renata G; Arena, Ross; Borghi-Silva, Audrey
2016-01-01
Recent studies have shown that the magnitude of the metabolic and autonomic responses during progressive resistance exercise (PRE) is associated with the determination of the anaerobic threshold (AT), an important parameter for setting intensity in dynamic exercise. The aim was to investigate the metabolic and cardiac autonomic responses during dynamic resistance exercise in patients with coronary artery disease (CAD). Twenty men (age 63±7 years) with CAD [left ventricular ejection fraction (LVEF) 60±10%] underwent a PRE protocol on a leg press until maximal exertion. The protocol began at 10% of the one-repetition maximum (1-RM), with subsequent increases of 10% until maximal exhaustion. Heart rate variability (HRV) indices from Poincaré plots (SD1, SD2, SD1/SD2) and the time domain (rMSSD and RMSM), together with blood lactate, were determined at rest and during PRE. Significant alterations in HRV and blood lactate were observed starting at 30% of 1-RM (p<0.05). Bland-Altman plots revealed consistent agreement between the blood lactate threshold (LT) and the rMSSD threshold (rMSSDT), and between LT and the SD1 threshold (SD1T). The relative 1-RM values at LT, rMSSDT and SD1T did not differ (29±5% vs 28±5% vs 29±5%, respectively). HRV during PRE could be a feasible noninvasive method of determining the AT in CAD patients in order to plan intensities during cardiac rehabilitation.
Singh, Amritpal; Saini, Barjinder Singh; Singh, Dilbag
2016-06-01
Multiscale approximate entropy (MAE) is used to quantify the complexity of a time series as a function of the time scale τ. Selection of the approximate entropy (ApEn) tolerance threshold r is based on either: (1) arbitrary selection in the recommended range (0.1-0.25) times the standard deviation of the time series; (2) finding the maximum ApEn (ApEnmax), i.e., the point where self-matches start to prevail over other matches, and choosing the corresponding r (rmax) as the threshold; or (3) computing rchon by empirically finding the relation between rmax, the SD1/SD2 ratio and N using curve fitting, where SD1 and SD2 are the short-term and long-term variability of a time series, respectively. None of these methods is a gold standard for the selection of r. In our previous study [1], an adaptive procedure for the selection of r was proposed for approximate entropy (ApEn). In this paper, it is extended to multiple time scales using MAEbin and multiscale cross-MAEbin (XMAEbin). We applied this to simulations, i.e., 50 realizations (n = 50) of random number series, fractional Brownian motion (fBm) and MIX (P) [1] series of data length N = 300, and to short-term recordings of HRV and SBPV performed under postural stress from supine to standing. MAEbin and XMAEbin analysis was performed on laboratory-recorded data of 50 healthy young subjects experiencing postural stress from supine to upright. The study showed that (i) ApEnbin of HRV is higher than that of SBPV in the supine position but lower than that of SBPV in the upright position; (ii) ApEnbin of HRV decreases from supine (1.7324 ± 0.112, mean ± SD) to upright (1.4916 ± 0.108) due to vagal inhibition; (iii) ApEnbin of SBPV increases from supine (1.5535 ± 0.098) to upright (1.6241 ± 0.101) due to sympathetic activation; (iv) the individual and cross complexities of the RRi and systolic blood pressure (SBP) series depend on the time scale under consideration; (v) XMAEbin calculated using ApEnmax is correlated with cross-MAE calculated using ApEn (0.1-0.26) in steps of 0
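For readers unfamiliar with the base statistic, a compact ApEn implementation is sketched below (fixed r as a fraction of the series SD, self-matches included as in Pincus' original definition). The adaptive threshold selection and multiscale machinery of the paper are not reproduced; the two test signals are arbitrary examples.

```python
import numpy as np

def apen(x, m=2, r=0.2):
    """Approximate entropy ApEn(m, r): regularity statistic comparing the
    prevalence of repeated m-point patterns with (m+1)-point patterns;
    r is the tolerance threshold as a fraction of the series SD."""
    x = np.asarray(x, float)
    tol = r * x.std()

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between all template pairs (self-matches included)
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = np.mean(d <= tol, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(4)
regular = np.sin(np.linspace(0.0, 40.0 * np.pi, 1000))   # highly regular
noisy = rng.standard_normal(1000)                        # white noise
print(apen(regular), apen(noisy))
```

The periodic signal yields a small ApEn while white noise yields a large one, which is the ordering the HRV/SBPV comparisons above rely on.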
Claxton, Karl; Martin, Steve; Soares, Marta; Rice, Nigel; Spackman, Eldon; Hinde, Sebastian; Devlin, Nancy; Smith, Peter C; Sculpher, Mark
2015-02-01
when PCTs are under more financial pressure and are more likely to be disinvesting than investing. This indicates that the central estimate of the threshold is likely to be an overestimate for all technologies which impose net costs on the NHS and the appropriate threshold to apply should be lower for technologies which have a greater impact on NHS costs. The central estimate is based on identifying a preferred analysis at each stage based on the analysis that made the best use of available information, whether or not the assumptions required appeared more reasonable than the other alternatives available, and which provided a more complete picture of the likely health effects of a change in expenditure. However, the limitation of currently available data means that there is substantial uncertainty associated with the estimate of the overall threshold. The methods go some way to providing an empirical estimate of the scale of opportunity costs the NHS faces when considering whether or not the health benefits associated with new technologies are greater than the health that is likely to be lost elsewhere in the NHS. Priorities for future research include estimating the threshold for subsequent waves of expenditure and outcome data, for example by utilising expenditure and outcomes available at the level of Clinical Commissioning Groups as well as additional data collected on QoL and updated estimates of incidence (by age and gender) and duration of disease. Nonetheless, the study also starts to make the other NHS patients, who ultimately bear the opportunity costs of such decisions, less abstract and more 'known' in social decisions. The National Institute for Health Research-Medical Research Council Methodology Research Programme.
Directory of Open Access Journals (Sweden)
Yudong Zhang
2016-01-01
Full Text Available Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches.
Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping
2016-01-01
Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068
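The iterative shrinkage-thresholding core of such CS reconstructions can be illustrated in isolation. The sketch below is plain ISTA for the generic sparse recovery problem min ½‖Ax − b‖² + λ‖x‖₁, not the authors' EWISTARS pipeline (no exponential wavelet transform or random shift); the problem sizes and λ are arbitrary demonstration values:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=2000):
    """Plain ISTA for min 0.5*||Ax-b||^2 + lam*||x||_1 (sparse-recovery core of CS)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # gradient step on the data-fit term
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200))           # underdetermined measurement matrix
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.5, -2.0, 1.0]      # 3-sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=1.0)
```

Each iteration is a gradient step on the least-squares term followed by soft thresholding; EWISTARS wraps this kind of iteration in a wavelet-domain sparsifying transform with random shifts.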
Tan, Kok Liang; Tanaka, Toshiyuki; Nakamura, Hidetoshi; Shirahata, Toru; Sugiura, Hiroaki
Chronic Obstructive Pulmonary Disease is a disease in which the airways and tiny air sacs (alveoli) inside the lung are partially obstructed or destroyed. Emphysema is what occurs as more and more of the walls between air sacs get destroyed. The goal of this paper is to produce a more practical emphysema-quantification algorithm that correlates more strongly with the parameters of pulmonary function tests than classical methods. The use of a threshold range from approximately -900 Hounsfield Units to -990 Hounsfield Units for extracting emphysema from CT has been reported in many papers. From our experiments, we realize that a threshold which is optimal for a particular CT data set might not be optimal for other CT data sets due to the subtle radiographic variations in the CT images. Consequently, we propose a multi-threshold method that utilizes ten thresholds between and including -900 Hounsfield Units and -990 Hounsfield Units for identifying the different potential emphysematous regions in the lung. Subsequently, we divide the lung into eight sub-volumes. From each sub-volume, we calculate the ratio of the voxels with intensity below a certain threshold. The respective ratios of the voxels below the ten thresholds are employed as the features for classifying the sub-volumes into four emphysema severity classes. A neural network is used as the classifier. The neural network is trained using 80 training sub-volumes. The performance of the classifier is assessed by classifying 248 test sub-volumes of the lung obtained from 31 subjects. Actual diagnoses of the sub-volumes are hand-annotated and consensus-classified by radiologists. The four-class classification accuracy of the proposed method is 89.82%. The sub-volumetric classification results produced in this study encompass not only the information of emphysema severity but also the distribution of emphysema severity from the top to the bottom of the lung. We hypothesize that besides emphysema severity, the
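The per-sub-volume feature computation described above reduces to counting voxels below each threshold. A minimal sketch follows; the synthetic Hounsfield-Unit values and the even spacing of the ten thresholds are assumptions, and the paper's lung segmentation and sub-volume partitioning are not reproduced:

```python
import numpy as np

# Ten thresholds between and including -900 HU and -990 HU.
thresholds = np.linspace(-900, -990, 10)

def emphysema_features(subvolume_hu):
    """Fraction of voxels at or below each threshold -> 10-dim classifier input."""
    v = np.asarray(subvolume_hu).ravel()
    return np.array([(v <= t).mean() for t in thresholds])

rng = np.random.default_rng(2)
# Synthetic sub-volume: mostly normal lung tissue around -850 HU plus a
# low-attenuation (emphysema-like) cluster around -950 HU.
sub = np.concatenate([rng.normal(-850, 30, 8000), rng.normal(-950, 15, 2000)])
features = emphysema_features(sub)
```

Because the thresholds step downward from -900 HU to -990 HU, the feature vector is non-increasing; the classifier sees how quickly the low-attenuation fraction falls off, rather than a single fixed-threshold ratio.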
A novel adaptive force control method for IPMC manipulation
International Nuclear Information System (INIS)
Hao, Lina; Sun, Zhiyong; Su, Yunquan; Gao, Jianchao; Li, Zhi
2012-01-01
IPMC is a type of electro-active polymer material, also called artificial muscle, which can generate a relatively large deformation under a relatively low input voltage (generally speaking, less than 5 V) and can be operated in a water environment. Due to these advantages, IPMC can be used in many fields such as biomimetics, service robots, bio-manipulation, etc. Until now, most existing methods for IPMC manipulation have used displacement control rather than direct force control; however, under most conditions, the success rate of manipulating tiny fragile objects is limited by the contact force, for example when using an IPMC gripper to fix cells. Like most EAPs, IPMC exhibits a creep phenomenon, in which the generated force changes with time and the creep model is influenced by changes in water content or other environmental factors, so a proper force control method is urgently needed. This paper presents a novel adaptive force control method (AIPOF control: adaptive integral periodic output feedback control), based on a creep model whose parameters are obtained with the FRLS on-line identification method. The AIPOF control method can achieve an arbitrary pole configuration as long as the plant is controllable and observable. This paper also designs POF and IPOF controllers to compare their test results. Simulation and experiments of micro-force-tracking tests are carried out, with results confirming that the proposed control method is viable. (paper)
Successful adaptation of a research methods course in South America.
Tamariz, Leonardo; Vasquez, Diego; Loor, Cecilia; Palacio, Ana
2017-01-01
South America has low research productivity. The lack of a structured research curriculum is one of the barriers to conducting research. To report our experience adapting an active learning-based research methods curriculum to improve research productivity at a university in Ecuador. We used a mixed-method approach to test the adaptation of the research curriculum at Universidad Catolica Santiago de Guayaquil. The curriculum uses a flipped classroom and active learning approach to teach research methods. When adapted, it was longitudinal and had a 16-hour programme of in-person teaching and a six-month follow-up online component. Learners were organized in theme groups according to interest, and each group had a faculty leader. Our primary outcome was research productivity, which was measured by the successful presentation of the research project at a national meeting or publication in a peer-reviewed journal. Our secondary outcomes were knowledge and perceived competence before and after course completion. We conducted qualitative interviews of faculty members and students to evaluate themes related to participation in research. Fifty university students and 10 faculty members attended the course. We had a total of 15 groups. Both knowledge and perceived competence increased by 17 and 18 percentage points, respectively. The presentation or publication rate for the entire group was 50%. The qualitative analysis showed that a lack of research culture and curriculum were common barriers to research. A US-based curriculum can be successfully adapted in low-middle income countries. A research curriculum aids in achieving pre-determined milestones. UCSG: Universidad Catolica Santiago de Guayaquil; UM: University of Miami.
A multilevel adaptive reaction-splitting method for SRNs
Moraes, Alvaro; Tempone, Raul; Vilanova, Pedro
2016-01-01
In [5], we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks (SRNs) specifically designed for systems in which the set of reaction channels can be adaptively partitioned into two subsets characterized by either high or low activity. To estimate expected values of observables of the system, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This is achieved with a computational complexity of order O(TOL^-2). We also present a novel control variate technique which may dramatically reduce the variance of the coarsest level at a negligible computational cost.
Adaptive BDDC Deluxe Methods for H(curl)
Zampini, Stefano
2017-03-17
The work presents numerical results using adaptive BDDC deluxe methods for preconditioning the linear systems arising from finite element discretizations of the time-domain, quasi-static approximation of the Maxwell’s equations. The provided results, obtained using the BDDC implementation of the PETSc library, show that these methods are poly-logarithmic in the polynomial degree of the Nédélec elements of first and second kind, and robust with respect to arbitrary distributions of the magnetic permeability and the conductivity of the medium.
A multilevel adaptive reaction-splitting method for SRNs
Moraes, Alvaro
2016-01-06
In [5], we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks (SRNs) specifically designed for systems in which the set of reaction channels can be adaptively partitioned into two subsets characterized by either high or low activity. To estimate expected values of observables of the system, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This is achieved with a computational complexity of order O(TOL^-2). We also present a novel control variate technique which may dramatically reduce the variance of the coarsest level at a negligible computational cost.
Directory of Open Access Journals (Sweden)
Dominique Dardevet
2012-01-01
Full Text Available Skeletal muscle loss is observed in several physiopathological situations. Strategies to prevent, slow down, or increase recovery of muscle have already been tested. Besides exercise, nutrition, and more particularly protein nutrition based on increased amino acid, leucine, or protein intake quality, has generated a positive acute postprandial effect on muscle protein anabolism. However, in the long term, these nutritional strategies have often failed to improve muscle mass even when given for long periods of time in both humans and rodent models. Muscle mass loss situations have often been correlated with a resistance of muscle protein anabolism to food intake, which may be explained by an increase of the anabolic threshold toward the stimulatory effect of amino acids. In this paper, we emphasize how this anabolic resistance may affect the intensity and the duration of the muscle anabolic response in the postprandial state and how it may explain the negative long-term results obtained in the prevention of muscle mass loss. Sarcopenia, the muscle mass loss observed during aging, has been chosen to illustrate this concept, but it should be kept in mind that it could be extended to any other catabolic state or recovery situation.
Dardevet, Dominique; Rémond, Didier; Peyron, Marie-Agnès; Papet, Isabelle; Savary-Auzeloux, Isabelle; Mosoni, Laurent
2012-01-01
Skeletal muscle loss is observed in several physiopathological situations. Strategies to prevent, slow down, or increase recovery of muscle have already been tested. Besides exercise, nutrition, and more particularly protein nutrition based on increased amino acid, leucine, or protein intake quality, has generated a positive acute postprandial effect on muscle protein anabolism. However, in the long term, these nutritional strategies have often failed to improve muscle mass even when given for long periods of time in both humans and rodent models. Muscle mass loss situations have often been correlated with a resistance of muscle protein anabolism to food intake, which may be explained by an increase of the anabolic threshold toward the stimulatory effect of amino acids. In this paper, we emphasize how this anabolic resistance may affect the intensity and the duration of the muscle anabolic response in the postprandial state and how it may explain the negative long-term results obtained in the prevention of muscle mass loss. Sarcopenia, the muscle mass loss observed during aging, has been chosen to illustrate this concept, but it should be kept in mind that it could be extended to any other catabolic state or recovery situation.
Computer prediction of subsurface radionuclide transport: an adaptive numerical method
International Nuclear Information System (INIS)
Neuman, S.P.
1983-01-01
Radionuclide transport in the subsurface is often modeled with the aid of the advection-dispersion equation. A review of existing computer methods for the solution of this equation shows that there is need for improvement. To answer this need, a new adaptive numerical method is proposed based on an Eulerian-Lagrangian formulation. The method is based on a decomposition of the concentration field into two parts, one advective and one dispersive, in a rigorous manner that does not leave room for ambiguity. The advective component of steep concentration fronts is tracked forward with the aid of moving particles clustered around each front. Away from such fronts the advection problem is handled by an efficient modified method of characteristics called single-step reverse particle tracking. When a front dissipates with time, its forward tracking stops automatically and the corresponding cloud of particles is eliminated. The dispersion problem is solved by an unconventional Lagrangian finite element formulation on a fixed grid which involves only symmetric and diagonal matrices. Preliminary tests against analytical solutions of one- and two-dimensional dispersion in a uniform steady-state velocity field suggest that the proposed adaptive method can handle the entire range of Peclet numbers from 0 to infinity, with Courant numbers well in excess of 1.
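The split between a Lagrangian advection step and a grid-based dispersion step can be illustrated on the 1D advection-dispersion equation c_t + v c_x = D c_xx. The sketch below is a toy analogue under assumed parameters: uniform velocity, linear interpolation for the back-tracking, and a simple implicit finite-difference dispersion step instead of the paper's Lagrangian finite element formulation:

```python
import numpy as np

n = 400
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
v, D, dt, steps = 1.0, 1e-3, 0.005, 40          # Courant number v*dt/dx ~ 2
c = np.exp(-0.5 * ((x - 0.2) / 0.03) ** 2)      # initial concentration plume

# Implicit dispersion operator (I - dt*D*Laplacian) with zero-flux boundaries.
r = D * dt / dx ** 2
A = (1 + 2 * r) * np.eye(n) - r * np.eye(n, k=1) - r * np.eye(n, k=-1)
A[0, 0] = A[-1, -1] = 1 + r
A_inv = np.linalg.inv(A)                        # factor once, reuse every step

for _ in range(steps):
    c = np.interp(x - v * dt, x, c)             # reverse particle tracking (advection)
    c = A_inv @ c                               # dispersion on the fixed grid

center = (x * c).sum() / c.sum()                # plume center, expected near 0.4
```

Because each node is traced back along the characteristic and the diffusion solve is implicit, the scheme stays stable at a Courant number of about 2, which is the practical advantage the abstract highlights.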
Adaptive implicit method for thermal compositional reservoir simulation
Energy Technology Data Exchange (ETDEWEB)
Agarwal, A.; Tchelepi, H.A. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Stanford Univ., Palo Alto (United States)
2008-10-15
As the global demand for oil increases, thermal enhanced oil recovery techniques are becoming increasingly important. Numerical reservoir simulation of thermal methods such as steam assisted gravity drainage (SAGD) is complex and requires a solution of nonlinear mass and energy conservation equations on a fine reservoir grid. The most commonly used technique for solving these equations is the fully IMplicit (FIM) method, which is unconditionally stable, allowing for large timesteps in simulation. However, it is computationally expensive. On the other hand, the method known as IMplicit pressure explicit saturations, temperature and compositions (IMPEST) is computationally inexpensive, but it is only conditionally stable and restricts the timestep size. To improve the balance between the timestep size and computational cost, the thermal adaptive IMplicit (TAIM) method uses stability criteria and a switching algorithm, where some simulation variables such as pressure, saturations, temperature and compositions are treated implicitly while others are treated with explicit schemes. This presentation described ongoing research on TAIM with particular reference to thermal displacement processes, such as the stability criteria that dictate the maximum allowed timestep size for simulation based on the von Neumann linear stability analysis method; the switching algorithm that adapts labeling of reservoir variables as implicit or explicit as a function of space and time; and complex physical behaviors such as heat and fluid convection, thermal conduction and compressibility. Key numerical results obtained by enhancing Stanford's General Purpose Research Simulator (GPRS) were also presented along with a list of research challenges. 14 refs., 2 tabs., 11 figs., 1 appendix.
Adaptive Methods for Permeability Estimation and Smart Well Management
Energy Technology Data Exchange (ETDEWEB)
Lien, Martha Oekland
2005-04-01
The main focus of this thesis is on adaptive regularization methods. We consider two different applications, the inverse problem of absolute permeability estimation and the optimal control problem of estimating smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem. Hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse-scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts, where Part I gives a theoretical background for a collection of research papers written by the candidate in collaboration with others. These constitute the most important part of the thesis and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning calculation of derivatives will also be discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement
Highly accurate adaptive TOF determination method for ultrasonic thickness measurement
Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing
2018-04-01
Determining the time of flight (TOF) is critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals can induce significant TOF determination errors. In this paper, an adaptive time delay estimation method is developed to improve the accuracy of TOF determination. An improved variable step size adaptive algorithm with a comprehensive step size control function is proposed. Meanwhile, a cubic spline fitting approach is employed to alleviate the restriction of the finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis. The simulation results demonstrated the advantage of the proposed TOF determination method over existing methods. Compared with the conventional fixed step size algorithm and the Kwong and Aboulnasr algorithms, the steady-state mean square deviation of the proposed algorithm was generally lower, which makes it more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates of various thicknesses. They indicated that the proposed TOF determination method is more robust even under low SNR conditions, and that ultrasonic thickness measurement accuracy can be significantly improved.
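To see why sub-sample refinement matters, consider a common cross-correlation baseline; this is not the paper's adaptive-filter method, and the sampling rate, pulse shape, and delay below are assumed demonstration values. A cubic-spline fit around the correlation peak recovers a TOF that lies between sample instants:

```python
import numpy as np
from scipy.interpolate import CubicSpline

fs = 1.0e6                                   # assumed 1 MHz sampling rate
t = np.arange(0, 200e-6, 1 / fs)
true_tof = 37.4e-6                           # delay between sample instants

def echo(t0):
    # Gaussian-envelope pulse as a stand-in for an ultrasonic echo.
    return np.exp(-0.5 * ((t - t0) / 5e-6) ** 2)

tx, rx = echo(20e-6), echo(20e-6 + true_tof)

xc = np.correlate(rx, tx, mode="full")
lags = np.arange(-len(t) + 1, len(t))
k = int(np.argmax(xc))

# Cubic spline through 7 samples around the peak, searched on a fine grid:
# this refines the lag beyond the 1-sample (1 us) resolution of the raw argmax.
spline = CubicSpline(lags[k - 3:k + 4], xc[k - 3:k + 4])
fine = np.linspace(lags[k] - 1.0, lags[k] + 1.0, 4001)
tof = fine[np.argmax(spline(fine))] / fs
```

The raw argmax quantizes the delay to the sampling grid; the spline fit recovers the fractional part, illustrating the restriction of the finite sampling interval that the paper's fitting step alleviates.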
New method to evaluate the ⁷Li(p, n)⁷Be reaction near threshold
Energy Technology Data Exchange (ETDEWEB)
Herrera, María S., E-mail: herrera@tandar.cnea.gov.ar [Comisión Nacional de Energía Atómica, Av. Gral. Paz 1499, Buenos Aires B1650KNA (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas, Av. Rivadavia 1917, Ciudad Autónoma de Buenos Aires C1033AAJ (Argentina); Escuela de Ciencia y Tecnología, UNSAM, 25 de Mayo y Francia, Buenos Aires B1650KNA (Argentina); Moreno, Gustavo A. [YPF Tecnología, Baradero S/N, Buenos Aires 1925 (Argentina); Departamento de Física J. J. Giambiagi, Facultad de Ciencias Exactas y Naturales, UBA, Ciudad Universitaria, Ciudad Autónoma de Buenos Aires 1428 (Argentina); Kreiner, Andrés J. [Comisión Nacional de Energía Atómica, Av. Gral. Paz 1499, Buenos Aires B1650KNA (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas, Av. Rivadavia 1917, Ciudad Autónoma de Buenos Aires C1033AAJ (Argentina); Escuela de Ciencia y Tecnología, UNSAM, 25 de Mayo y Francia, Buenos Aires B1650KNA (Argentina)
2015-04-15
In this work a complete description of the ⁷Li(p, n)⁷Be reaction near threshold is given using center-of-mass and relative coordinates. It is shown that this standard approach, not used before in this context, leads to a simple mathematical representation which gives easy access to all relevant quantities in the reaction and allows a precise numerical implementation. It also allows proton beam-energy spread effects to be included in a simple way. The method, implemented as a C++ code, was validated against both numerical and experimental data, finding good agreement. This tool is also used here to analyze scattered published measurements such as (p, n) cross sections and differential and total neutron yields for thick targets. Using these data we derive a consistent set of parameters to evaluate neutron production near threshold. Sensitivity of the results to data uncertainty and the possibility of incorporating new measurements are also discussed.
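As a quick orientation to the near-threshold kinematics, the threshold proton energy follows from the reaction Q-value in the center-of-mass frame. This is a back-of-the-envelope check, not the paper's C++ implementation; the mass and Q values below are rounded literature numbers:

```python
# Nonrelativistic threshold for 7Li(p,n)7Be: the center of mass carries away
# kinetic energy, so the lab-frame threshold exceeds |Q| by the factor
# (m_p + m_Li) / m_Li.
m_p, m_li7 = 1.007276, 7.016004      # masses in atomic mass units (rounded)
q_value = -1.644                     # MeV, 7Li(p,n)7Be

e_threshold = -q_value * (m_p + m_li7) / m_li7
print(f"threshold proton energy ~ {e_threshold:.3f} MeV")
```

This reproduces the well-known threshold near 1.880 MeV, just above the 1.644 MeV magnitude of the Q-value, which locates the "near threshold" region the paper evaluates.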
Directory of Open Access Journals (Sweden)
G. Sandhya
2017-01-01
Full Text Available This work explains an advanced and accurate brain MRI segmentation method. MR brain image segmentation is used to know the anatomical structure, to identify abnormalities, and to detect the various tissues which help in treatment planning prior to radiation therapy. The proposed technique is a Multilevel Thresholding (MT) method based on the phenomenon of electromagnetism, and it segments the image into three tissues: White Matter (WM), Gray Matter (GM), and CSF. The approach incorporates skull stripping and filtering using an anisotropic diffusion filter in the preprocessing stage. The thresholding method uses the force of attraction-repulsion between charged particles to increase the population. It is the combination of the Electromagnetism-Like optimization algorithm with the Otsu and Kapur objective functions. The results obtained using the proposed method are compared with the ground-truth images and gave the best values for the measures sensitivity, specificity, and segmentation accuracy. The results using 10 MR brain images proved that the proposed method accurately segmented the three brain tissues compared to existing segmentation methods such as K-means, fuzzy C-means, Otsu MT, Particle Swarm Optimization (PSO), Bacterial Foraging Algorithm (BFA), Genetic Algorithm (GA), and Fuzzy Local Gaussian Mixture Model (FLGMM).
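The Otsu objective that such optimizers maximize can be shown directly. Below is a brute-force two-threshold version run on a synthetic trimodal intensity sample; this is illustrative only, and the paper's electromagnetism-like search, skull stripping, and Kapur entropy variant are not reproduced:

```python
import numpy as np

def otsu_two_thresholds(values, levels=256):
    """Exhaustive two-threshold Otsu: maximize between-class variance over all
    (t1, t2) pairs, splitting intensities into three classes."""
    hist = np.bincount(values.ravel(), minlength=levels)
    p = hist / hist.sum()
    g = np.arange(levels)
    best, best_t = -1.0, (0, 0)
    for t1 in range(1, levels - 1):
        for t2 in range(t1 + 1, levels):
            score = 0.0
            for lo, hi in ((0, t1), (t1, t2), (t2, levels)):
                w = p[lo:hi].sum()
                if w > 0:
                    mu = (g[lo:hi] * p[lo:hi]).sum() / w
                    # maximizing sum(w * mu^2) is equivalent to maximizing
                    # between-class variance (total mean is fixed)
                    score += w * mu * mu
            if score > best:
                best, best_t = score, (t1, t2)
    return best_t

rng = np.random.default_rng(0)
# Synthetic trimodal "image": three tissue classes with distinct mean intensities.
img = np.clip(np.concatenate([rng.normal(40, 8, 3000),
                              rng.normal(120, 8, 3000),
                              rng.normal(200, 8, 3000)]), 0, 255).astype(int)
t1, t2 = otsu_two_thresholds(img)
```

Exhaustive search over threshold pairs is feasible at 256 gray levels, but the cost grows combinatorially with the number of thresholds, which is why population-based optimizers such as the electromagnetism-like algorithm are used for multilevel thresholding.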
Cutler, Timothy D.; Wang, Chong; Hoff, Steven J.; Zimmerman, Jeffrey J.
2013-01-01
In aerobiology, dose-response studies are used to estimate the risk of infection to a susceptible host presented by exposure to a specific dose of an airborne pathogen. In the research setting, host- and pathogen-specific factors that affect the dose-response continuum can be accounted for by experimental design, but the requirement to precisely determine the dose of infectious pathogen to which the host was exposed is often challenging. By definition, quantification of viable airborne pathogens is based on the culture of micro-organisms, but some airborne pathogens are transmissible at concentrations below the threshold of quantification by culture. In this paper we present an approach to the calculation of exposure dose at microbiologically unquantifiable levels using an application of the “continuous-stirred tank reactor (CSTR) model” and the validation of this approach using rhodamine B dye as a surrogate for aerosolized microbial pathogens in a dynamic aerosol toroid (DAT). PMID:24082399
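The CSTR idea referenced above treats the aerosol chamber as well mixed, so concentration follows a first-order balance between generation and ventilation: dC/dt = S/V - (Q/V)C, giving C(t) = (S/Q)(1 - exp(-Qt/V)). A minimal sketch under assumed parameter values follows; the numbers are illustrative, not from the study:

```python
import numpy as np

S = 2.0e4   # aerosol emission rate, units/min (assumed)
Q = 0.5     # ventilation flow, m^3/min (assumed)
V = 1.0     # chamber (toroid) volume, m^3 (assumed)

def concentration(t_min):
    # Closed-form solution of the well-mixed (CSTR) balance, starting from C(0) = 0.
    return (S / Q) * (1.0 - np.exp(-Q * np.asarray(t_min, dtype=float) / V))

def inhaled_dose(T, minute_volume=0.008):
    # Exposure dose = breathing rate times the time-integral of concentration
    # (trapezoidal quadrature over the exposure interval).
    t = np.linspace(0.0, T, 10001)
    c = concentration(t)
    dt = t[1] - t[0]
    return minute_volume * ((c[:-1] + c[1:]) / 2.0).sum() * dt

dose = inhaled_dose(30.0)   # inhaled units over a 30-minute exposure
```

Integrating the modeled concentration over the exposure time and multiplying by the breathing rate yields an exposure dose even when airborne concentrations fall below the threshold of quantification by culture, which is the gap the paper's approach addresses.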
ECG-derived respiration methods: adapted ICA and PCA.
Tiinanen, Suvi; Noponen, Kai; Tulppo, Mikko; Kiviniemi, Antti; Seppänen, Tapio
2015-05-01
Respiration is an important signal in early diagnostics, prediction, and treatment of several diseases. Moreover, a growing trend toward ambulatory measurements outside laboratory environments encourages developing indirect measurement methods such as ECG-derived respiration (EDR). Recently, decomposition techniques like principal component analysis (PCA), and its nonlinear version, kernel PCA (KPCA), have been used to derive a surrogate respiration signal from single-channel ECG. In this paper, we propose an adapted independent component analysis (AICA) algorithm to obtain the EDR signal, and extend the normal linear PCA technique based on best principal component (PC) selection (APCA, adapted PCA) to improve its performance further. We also demonstrate that the use of smoothing spline resampling and bandpass filtering improves the performance of all EDR methods. Compared with other recent EDR methods using the correlation coefficient and magnitude squared coherence, the proposed AICA and APCA yield a statistically significant improvement, with correlations of 0.84, 0.82, 0.76 and coherences of 0.90, 0.91, 0.85 between reference respiration and AICA, APCA and KPCA, respectively. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
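The PCA building block of these EDR methods can be sketched on synthetic data: respiration modulates beat amplitude, and the score of the dominant principal component over a matrix of aligned beats recovers that modulation. This is a toy illustration with assumed sampling and modulation parameters, not the paper's AICA/APCA pipeline with its component selection, spline resampling, and bandpass filtering:

```python
import numpy as np

rng = np.random.default_rng(3)
n_beats, beat_len = 60, 150
resp = np.sin(2 * np.pi * 0.25 * np.arange(n_beats))   # respiration, cycles per beat

# R-wave-like Gaussian bump used as the beat template.
template = np.exp(-0.5 * ((np.arange(beat_len) - beat_len // 2) / 5.0) ** 2)

# One row per aligned beat: amplitude modulated by respiration, plus noise.
beats = np.array([(1.0 + 0.2 * r) * template + 0.01 * rng.standard_normal(beat_len)
                  for r in resp])

X = beats - beats.mean(axis=0)          # center, then PCA via SVD
_, _, vt = np.linalg.svd(X, full_matrices=False)
edr = X @ vt[0]                         # beat-by-beat surrogate respiration signal

corr = np.corrcoef(edr, resp)[0, 1]     # sign is arbitrary for PCA components
```

The first principal component aligns with the beat template, so its per-beat score tracks the amplitude modulation; APCA's contribution in the paper is choosing the best component rather than always taking the first.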
A Fast Adaptive Receive Antenna Selection Method in MIMO System
Directory of Open Access Journals (Sweden)
Chaowei Wang
2013-01-01
Full Text Available Antenna selection has been regarded as an effective method to acquire the diversity benefits of multiple antennas while potentially reducing hardware costs. This paper focuses on receive antenna selection. Based on the proportion between the numbers of total receive antennas and selected antennas, and on the influence of each antenna on system capacity, we propose a fast adaptive antenna selection algorithm for wireless multiple-input multiple-output (MIMO) systems. Mathematical analysis and numerical results show that our algorithm significantly reduces the computational complexity and memory requirement while achieving system capacity comparable to the optimal selection technique.
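A standard fast baseline for this problem is greedy capacity-based selection, sketched below. This is an assumed illustrative algorithm, not the adaptive scheme proposed in the paper, and the SNR and array sizes are arbitrary:

```python
import numpy as np

def capacity(h_rows, snr, n_tx):
    """MIMO capacity log2 det(I + (snr/n_tx) * H H^H) for the selected rows of H."""
    hs = np.atleast_2d(h_rows)
    gram = hs @ hs.conj().T
    return np.log2(np.linalg.det(np.eye(len(hs)) + (snr / n_tx) * gram).real)

def greedy_select(H, n_sel, snr=10.0):
    """Greedily add the receive antenna with the largest marginal capacity gain."""
    n_rx, n_tx = H.shape
    chosen = []
    for _ in range(n_sel):
        remaining = [i for i in range(n_rx) if i not in chosen]
        best = max(remaining, key=lambda i: capacity(H[chosen + [i]], snr, n_tx))
        chosen.append(best)
    return chosen

rng = np.random.default_rng(4)
# Rayleigh-fading channel: 8 receive antennas, 4 transmit antennas.
H = (rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))) / np.sqrt(2)
sel = greedy_select(H, n_sel=4)
```

Each round adds the antenna with the largest marginal capacity gain, reducing the search from C(Nr, k) subsets for exhaustive optimal selection to O(Nr·k) capacity evaluations, the kind of complexity reduction the abstract refers to.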
A multilevel adaptive reaction-splitting method for SRNs
Moraes, Alvaro
2015-01-07
In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks specifically designed for systems in which the set of reaction channels can be adaptively partitioned into two subsets characterized by either “high” or “low” activity. To estimate expected values of observables of the system, our method bounds the global computational error to be below a prescribed tolerance, within a given confidence level. This is achieved with a computational complexity of order O(TOL^-2). We also present a novel control variate technique which may dramatically reduce the variance of the coarsest level at a negligible computational cost. Our numerical examples show substantial gains with respect to the standard Stochastic Simulation Algorithm (SSA) by Gillespie and also our previous hybrid Chernoff tau-leap method.
Multiple centroid method to evaluate the adaptability of alfalfa genotypes
Directory of Open Access Journals (Sweden)
Moysés Nascimento
2015-02-01
Full Text Available This study aimed to evaluate the efficiency of multiple centroids for studying the adaptability of alfalfa genotypes (Medicago sativa L.). In this method, the genotypes are compared with ideotypes defined by the bi-segmented regression model, according to the researcher's interest. Thus, genotype classification is carried out as determined by the objective of the researcher and the proposed recommendation strategy. Despite the great potential of the method, it needs to be evaluated in a biological context (with real data). In this context, we used data on the evaluation of dry matter production of 92 alfalfa cultivars, with 20 cuttings, from an experiment in randomized blocks with two repetitions carried out from November 2004 to June 2006. The multiple centroid method proved efficient for classifying alfalfa genotypes. Moreover, it produced no ambiguous indications and, provided that ideotypes are defined according to the researcher's interest, it facilitates data interpretation.
MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.
Tuta, Jure; Juric, Matjaz B
2018-03-24
This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to the changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals; we have used 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We have performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
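The model-based idea can be reduced to a minimal sketch: predict the received signal strength at candidate positions from a propagation model, then pick the position that best explains the measurements. The sketch below assumes a bare log-distance path-loss model on a 2D grid; MFAM's architectural model, multi-frequency fusion, and online adaptation are not reproduced, and all coordinates and radio parameters are invented for illustration:

```python
import numpy as np

def rssi(d, tx_dbm=-30.0, n=2.5):
    # Log-distance path-loss model (assumed); clamp distance to avoid log(0).
    return tx_dbm - 10.0 * n * np.log10(np.maximum(d, 0.1))

aps = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 6.0], [8.0, 6.0]])  # AP positions, m
true_pos = np.array([3.0, 4.0])
measured = rssi(np.linalg.norm(aps - true_pos, axis=1))           # "observed" RSSI

# Grid-search the apartment floor plan for the best-matching position.
xs, ys = np.meshgrid(np.linspace(0, 8, 161), np.linspace(0, 6, 121))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
d = np.linalg.norm(grid[:, None, :] - aps[None, :, :], axis=2)
err = ((rssi(d) - measured) ** 2).sum(axis=1)
estimate = grid[np.argmin(err)]
```

Adding a second frequency in this framework simply appends more predicted-versus-measured residuals per candidate position, which is one way to read the paper's reported accuracy gain from fusing 2.4 GHz and 868 MHz measurements.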
Ultsch, Alfred; Thrun, Michael C; Hansen-Goos, Onno; Lötsch, Jörn
2015-10-28
Biomedical data obtained during cell experiments, laboratory animal research, or human studies often display a complex distribution. Statistical identification of subgroups in research data poses an analytical challenge. Here we introduce an interactive R-based bioinformatics tool, called "AdaptGauss". It enables valid identification of a biologically meaningful multimodal structure in the data by fitting a Gaussian mixture model (GMM) to the data. The interface allows a supervised selection of the number of subgroups. This enables the expectation maximization (EM) algorithm to fit more complex GMMs than are usually obtained with a noninteractive approach. Interactively fitting a GMM to heat pain threshold data acquired from human volunteers revealed a distribution pattern with four Gaussian modes located at temperatures of 32.3, 37.2, 41.4, and 45.4 °C. Noninteractive fitting was unable to identify a meaningful data structure. The obtained results are compatible with known activity temperatures of different TRP ion channels, suggesting the mechanistic contribution of different heat sensors to the perception of thermal pain. Thus, sophisticated analysis of the modal structure of biomedical data provides a basis for the mechanistic interpretation of the observations. As it may reflect the involvement of different TRP thermosensory ion channels, the analysis provides a starting point for hypothesis-driven laboratory experiments.
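A minimal, noninteractive sketch of the underlying GMM fit (plain EM on synthetic bimodal "threshold" data; this is neither the AdaptGauss tool nor the actual four-mode pain data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic bimodal data standing in for threshold measurements.
data = np.concatenate([rng.normal(33.0, 1.0, 400), rng.normal(42.0, 1.5, 600)])

def fit_gmm_1d(x, k, iters=200):
    """Plain EM for a 1-D Gaussian mixture; returns (means, stds, weights)."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread initial means out
    sd = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibility-weighted data
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return mu, sd, w

mu, sd, w = fit_gmm_1d(data, 2)
```

The interactive step the paper argues for amounts to a human choosing `k` and sanity-checking initial values; on well-separated modes like these, EM alone already recovers the component means.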
An adaptive image denoising method based on local parameters
Indian Academy of Sciences (India)
term-by-term, i.e., individual pixels, or block-by-block, i.e., groups of pixels, using a suitable shrinkage factor and threshold function. The shrinkage factor is generally a function of the threshold and some other characteristics of the neighbouring pixels of the ...
Adaptive designs based on the truncated product method
Directory of Open Access Journals (Sweden)
Neuhäuser Markus
2005-09-01
Abstract Background Adaptive designs are becoming increasingly important in clinical research. One approach subdivides the study into several (two or more) stages and combines the p-values of the different stages using Fisher's combination test. Methods As an alternative to Fisher's test, the recently proposed truncated product method (TPM) can be applied to combine the p-values. The TPM uses the product of only those p-values that do not exceed some fixed cut-off value. Here, these two competing analyses are compared. Results When an early termination due to insufficient effects is not appropriate, such as in dose-response analyses, the probability of stopping the trial early with rejection of the null hypothesis is increased when the TPM is applied; therefore, the expected total sample size is decreased. This decrease in sample size is not connected with a loss in power. The TPM turns out to be less advantageous when an early termination of the study due to insufficient effects is possible, owing to a decreased probability of stopping the trial early. Conclusion It is recommended to apply the TPM rather than Fisher's combination test whenever an early termination due to insufficient effects is not suitable within the adaptive design.
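The TPM statistic itself is simple to compute; a sketch follows (the null distribution of the statistic, which the actual test needs to produce a combined p-value, is not implemented here):

```python
def truncated_product(pvalues, tau=0.05):
    """Truncated product statistic: the product of only those p-values that do
    not exceed the cut-off tau. Returns 1.0 when no p-value falls below tau,
    i.e. no stage contributes evidence."""
    w = 1.0
    for p in pvalues:
        if p <= tau:
            w *= p
    return w
```

For example, with stage p-values 0.01, 0.2, and 0.04 and the default cut-off 0.05, only the first and third enter the product, so the statistic is 0.0004; Fisher's combination test, by contrast, would multiply all three.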
Numerical and adaptive grid methods for ideal magnetohydrodynamics
Loring, Burlen
2008-02-01
In this thesis, numerical finite difference methods for ideal magnetohydrodynamics (MHD) are investigated. A review of the relevant physics, essential for interpreting the results of numerical solutions and constructing validation cases, is presented. This review includes a discussion of the propagation of small-amplitude waves in the MHD system as well as a thorough discussion of MHD shocks, contacts and rarefactions and how they can be pieced together to obtain solutions to the MHD Riemann problem. Numerical issues relevant to the MHD system, such as the loss of nonlinear numerical stability in the presence of discontinuous solutions, the introduction of spurious forces due to the growth of the divergence of the magnetic flux density, the loss of pressure positivity, and the effects of non-conservative numerical methods, are discussed, along with the practical approaches which can be used to remedy or minimize the negative consequences of each. The use of block-structured adaptive mesh refinement is investigated in the context of a divergence-free MHD code. A new method for conserving magnetic flux across AMR grid interfaces is developed, and a detailed discussion of our implementation of this method using the CHOMBO AMR framework is given. A preliminary validation of the new method for conserving magnetic flux density across AMR grid interfaces illustrates that the method works. Finally, a number of code validation cases are examined, spurring a discussion of the strengths and weaknesses of the numerics employed.
Adaptive discontinuous Galerkin methods for non-linear reactive flows
Uzunca, Murat
2016-01-01
The focus of this monograph is the development of space-time adaptive methods to solve the convection/reaction dominated non-stationary semi-linear advection diffusion reaction (ADR) equations with internal/boundary layers in an accurate and efficient way. After introducing the ADR equations and the discontinuous Galerkin discretization, robust residual-based a posteriori error estimators in space and time are derived. The elliptic reconstruction technique is then utilized to derive the a posteriori error bounds for the fully discrete system and to obtain optimal orders of convergence. As coupled surface and subsurface flow over large space and time scales is described by ADR equations, the methods described in this book are of high importance in many areas of the Geosciences, including oil and gas recovery, groundwater contamination and sustainable use of groundwater resources, and the storage of greenhouse gases or radioactive waste in the subsurface.
Nalon, E; Maes, D; Piepers, S; van Riet, M M J; Janssens, G P J; Millet, S; Tuyttens, F A M
2013-11-01
Lameness is a frequently occurring, painful condition of breeding sows that may result in hyperalgesia, i.e., an increased sensitivity to pain. In this study a mechanical nociception threshold (MT) test was used (1) to determine if hyperalgesia occurs in sows with naturally occurring lameness; (2) to compare measurements obtained with a hand-held probe and a limb-mounted actuator connected to a digital algometer; and (3) to investigate systematic left-to-right and cranial-to-caudal differences in MT. Twenty-eight pregnant sows were investigated, of which 14 were moderately lame and 14 were not lame. Over three testing sessions, repeated measurements were taken at 5 min intervals on the dorsal aspects of the metatarsi and metacarpi of all limbs. The MT was defined as the force in Newtons (N) that elicited an avoidance response; this parameter was found to be lower in limbs affected by lameness than in normal limbs, and to differ between testing sessions (P<0.001) as well as between days (P<0.001). The findings provide evidence that lame sows experience hyperalgesia. Systematic differences between forelimb and hindlimb MT must be taken into account when such assessments are performed. Copyright © 2013 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Arkhipchuk, V.V.; Romanenko, V.D.; Arkhipchuk, M.V.; Kipnis, L.S.
1993-01-01
The use of nucleolar characteristics to assess the action of physical and chemical factors on living objects is a promising trend in the creation of new, highly sensitive biological tests. The advantage of this approach is that the effect of threshold values of anthropogenic factors is recorded as a change in the functional activity of the cell genome rather than as a restructuring of the karyotype. The aim of this research was to test a cytogenetic method for determining the modifying action of various factors on the plant and animal genome, based on analysis of quantitative characteristics of the nucleoli, and to extend its use to different groups of organisms
Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling
Rastigejev, Y.
2011-12-01
Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties associated with simulating a wide range of time and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. Such computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts the pollutant mixing and transport dynamics for typically used grid resolutions. These numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will be exacerbated over the next decade by the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive, multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding finer levels of resolution where fine scales develop and removing them where the solution behaves smoothly. The algorithm is based on mathematically well-established wavelet theory. This allows us to provide error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. The method has been tested for a variety of benchmark problems.
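The wavelet-thresholding idea behind WAMR can be sketched in one dimension with a Haar transform: detail coefficients above a threshold mark where finer resolution is needed. This is a generic illustration on an invented test profile, not the WAMR algorithm:

```python
import numpy as np

def haar_detail(u):
    """One level of Haar detail coefficients for an even-length signal."""
    return (u[0::2] - u[1::2]) / np.sqrt(2.0)

def refine_flags(u, eps):
    """Flag cell pairs whose wavelet amplitude exceeds the threshold eps;
    these are the locations a WAMR-style scheme would refine."""
    return np.abs(haar_detail(u)) > eps

x = np.linspace(0.0, 1.0, 256)
u = np.tanh((x - 0.5) / 0.01)   # sharp front: fine scales exist only near x = 0.5
flags = refine_flags(u, 1e-3)
```

On this profile only the handful of cell pairs straddling the front are flagged, which is exactly the property that lets an adaptive grid drop most degrees of freedom away from steep features.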
[The Confusion Assessment Method: Transcultural adaptation of a French version].
Antoine, V; Belmin, J; Blain, H; Bonin-Guillaume, S; Goldsmith, L; Guerin, O; Kergoat, M-J; Landais, P; Mahmoudi, R; Morais, J A; Rataboul, P; Saber, A; Sirvain, S; Wolfklein, G; de Wazieres, B
2018-04-03
The Confusion Assessment Method (CAM) is a validated key tool in clinical practice and research programs to diagnose delirium and assess its severity. There is no validated French version of the CAM training manual and coding guide (Inouye SK). The aim of this study was to establish a consensual French version of the CAM and its manual. Cross-cultural adaptation was performed to achieve equivalence between the original version and a French adapted version of the CAM manual. A rigorous process was conducted, including control of the cultural adequacy of the tool's components, double forward and back translations, reconciliation, expert committee review (including bilingual translators of different nationalities, a linguist, highly qualified clinicians, and methodologists) and pretesting. A consensual French version of the CAM was achieved. Implementation of the French version of the CAM in daily clinical practice will enable optimal diagnosis of delirium and enhance communication between health professionals in French-speaking countries. Validity and psychometric properties are being tested in a French multicenter cohort, opening up new perspectives for improved quality of care and research programs in French-speaking countries. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
An adaptive image denoising method based on local parameters ...
Indian Academy of Sciences (India)
noise-free that are used to obtain the variances corresponding to the noise-free .... of too many noisy coefficients completely because the threshold value is on the higher side. .... The wavelet coefficients are shrunk using the following expression.
Adaptive and dynamic meshing methods for numerical simulations
Acikgoz, Nazmiye
ad-hoc application of the simulated annealing technique, which improves the likelihood of removing poor elements from the grid. Moreover, a local implementation of simulated annealing is proposed to reduce the computational cost. Many challenging multi-physics and multi-field problems that are unsteady in nature are characterized by moving boundaries and/or interfaces. When the boundary displacements are large, which typically occurs when implicit time marching procedures are used, degenerate elements are easily formed in the grid, such that frequent remeshing is required. To deal with this problem, in the second part of this work we propose a new r-adaptation methodology. The new technique is valid for both simplicial (e.g., triangular, tet) and non-simplicial (e.g., quadrilateral, hex) deforming grids that undergo large imposed displacements at their boundaries. A two- or three-dimensional grid is deformed using a network of linear springs composed of edge springs and a set of virtual springs. The virtual springs are constructed in such a way as to oppose element collapsing. This is accomplished by confining each vertex to its ball through springs that are attached to the vertex and its projection on the ball entities. The resulting linear problem is solved using a preconditioned conjugate gradient method. The new method is compared with the classical spring analogy technique in two- and three-dimensional examples, highlighting the performance improvements achieved by the new method. Meshes are an important part of numerical simulations. Depending on the geometry and flow conditions, the most suitable mesh for each particular problem is different. Meshes are usually generated either by using a suitable software package or by solving a PDE. In both cases, engineering intuition plays a significant role in deciding where clustering should take place.
In addition, for unsteady problems, the gradients vary for each time step, which requires frequent remeshing during simulations
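A stripped-down sketch of the spring-analogy deformation discussed above, with equal-stiffness edge springs only (no virtual anti-collapse springs) relaxed by Jacobi iteration instead of a preconditioned conjugate gradient solver:

```python
import numpy as np

# A small structured grid. With equal edge-spring stiffness, each interior
# node's equilibrium is the average of its four neighbours.
ny, nx = 11, 11
gx, gy = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
pts = np.stack([gx, gy], axis=-1)

# Impose a boundary displacement: shift the whole top edge to the right.
pts[-1, :, 0] += 0.3

for _ in range(500):   # Jacobi relaxation of the linear spring system
    interior = 0.25 * (pts[:-2, 1:-1] + pts[2:, 1:-1]
                       + pts[1:-1, :-2] + pts[1:-1, 2:])
    pts[1:-1, 1:-1] = interior
```

The interior smoothly absorbs the boundary motion (the centre node picks up roughly a quarter of the top-edge shift, as expected for a harmonic displacement field); the thesis's virtual springs address the element-collapse failures this plain version suffers under much larger displacements.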
Learning Unknown Structure in CRFs via Adaptive Gradient Projection Method
Directory of Open Access Journals (Sweden)
Wei Xue
2016-08-01
We study the problem of fitting probabilistic graphical models to given data when the structure is not known. More specifically, we focus on learning unknown structure in conditional random fields, i.e., learning both the structure and the parameters of a conditional random field model simultaneously. To do this, we first formulate the learning problem as a convex minimization problem by adding an l_2-regularization to the node parameters and a group l_1-regularization to the edge parameters; a gradient-based projection method, combining an adaptive stepsize selection strategy with a nonmonotone line search, is then proposed to solve it. Extensive simulation experiments are presented to show the performance of our approach in solving unknown structure learning problems.
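The group l_1 penalty on the edge parameters is what zeroes out whole edges and thereby selects structure; in proximal/projection schemes it is applied via group soft-thresholding after each gradient step. A sketch of that operator (generic, not the paper's exact projection):

```python
import numpy as np

def prox_group_l1(v, groups, step, lam):
    """Proximal operator of step*lam*sum_g ||v_g||_2 (group soft-thresholding).
    Groups whose norm falls below the threshold are set to zero entirely,
    which is how whole edges get pruned from the graph."""
    out = v.copy()
    for g in groups:
        norm = np.linalg.norm(v[g])
        out[g] = 0.0 if norm <= step * lam else (1 - step * lam / norm) * v[g]
    return out

# Two edge-parameter groups: one strong, one weak.
v = np.array([3.0, 4.0, 0.1, -0.1])
w = prox_group_l1(v, [[0, 1], [2, 3]], step=1.0, lam=0.5)
```

Here the strong group is shrunk proportionally while the weak group is eliminated, illustrating edge selection.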
An adaptive finite element method for steady and transient problems
International Nuclear Information System (INIS)
Benner, R.E. Jr.; Davis, H.T.; Scriven, L.E.
1987-01-01
Distributing integral error uniformly over variable subdomains, or finite elements, is an attractive criterion by which to subdivide a domain for the Galerkin/finite element method when localized steep gradients and high curvatures are to be resolved. Examples are fluid interfaces, shock fronts and other internal layers, as well as fluid mechanical and other boundary layers, e.g. thin-film states at solid walls. The uniform distribution criterion is developed into an adaptive technique for one-dimensional problems. Nodal positions can be updated simultaneously with nodal values during Newton iteration, but it is usually better to adopt nearly optimal nodal positions before Newton iteration on the nodal values. Three illustrative problems are solved: steady convection with diffusion, gradient theory of fluid wetting on a solid surface, and Buckley-Leverett theory of two-phase Darcy flow in porous media
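The uniform-distribution criterion can be sketched in one dimension: nodes are placed by inverting the cumulative integral of a monitor function, so every element carries an equal share of it. The monitor below (a Gaussian bump standing in for a steep interior layer) is a hypothetical choice for illustration:

```python
import numpy as np

def equidistribute(monitor, a, b, n_nodes, n_fine=2000):
    """Place nodes so each element carries an equal share of the integral of
    the monitor function -- the uniform-distribution criterion in 1-D."""
    x = np.linspace(a, b, n_fine)
    w = monitor(x)
    # Cumulative integral by the trapezoid rule, normalized to [0, 1]
    cdf = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    cdf /= cdf[-1]
    # Invert: equal steps in the cdf map back to clustered nodes in x
    return np.interp(np.linspace(0.0, 1.0, n_nodes), cdf, x)

# Steep interior layer at x = 0.5: nodes cluster there automatically.
nodes = equidistribute(lambda x: 1.0 + 200.0 * np.exp(-((x - 0.5) / 0.02) ** 2),
                       0.0, 1.0, 21)
```

The resulting spacing is orders of magnitude finer inside the layer than away from it, which is the intended effect of the criterion.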
Adaptive mesh refinement and adjoint methods in geophysics simulations
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear what the most suitable criteria for adaptation are. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times.
Adaptive Elastic Net for Generalized Methods of Moments.
Caner, Mehmet; Zhang, Hao Helen
2014-01-30
Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least-squares-based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique due to the estimators' lack of closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow for the number of parameters to diverge to infinity as well as collinearity among a large number of variables, while redundant parameters are set to zero via a data-dependent technique. This method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit distribution while the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.
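The adaptive-weighting idea (a first-stage estimator sets coefficient-specific penalties, so small coefficients are penalized hard and large ones barely at all) can be sketched in the simplest linear, orthonormal-design case, where the elastic net has a closed form. This is far from the GMM setting of the paper; it only illustrates the weighting step, and every number below is invented:

```python
import numpy as np

rng = np.random.default_rng(1)
# Orthonormal design keeps the estimator in closed form:
# beta_j = soft(z_j, lam1 * w_j) / (1 + lam2), with z = X^T y.
n, p = 100, 5
X, _ = np.linalg.qr(rng.normal(size=(n, p)))   # columns are orthonormal
beta_true = np.array([3.0, 0.0, -2.0, 0.0, 0.0])
y = X @ beta_true + 0.01 * rng.normal(size=n)

z = X.T @ y
ridge = z / (1 + 0.1)                     # first-stage (ridge-like) estimator
wts = 1.0 / (np.abs(ridge) + 1e-8)        # adaptive weights: small first-stage
                                          # coefficients get large penalties
lam1, lam2 = 0.05, 0.1
beta = np.sign(z) * np.maximum(np.abs(z) - lam1 * wts, 0.0) / (1 + lam2)
```

The three truly-zero coefficients are thresholded exactly to zero while the two signal coefficients survive nearly unshrunk, which is the oracle-type behavior the abstract refers to.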
An Adaptive Pseudospectral Method for Fractional Order Boundary Value Problems
Directory of Open Access Journals (Sweden)
Mohammad Maleki
2012-01-01
An adaptive pseudospectral method is presented for solving a class of multiterm fractional boundary value problems (FBVPs) which involve Caputo-type fractional derivatives. The multiterm FBVP is first converted into a singular Volterra integrodifferential equation (SVIDE). By dividing the interval of the problem into subintervals, the unknown function is approximated using a piecewise interpolation polynomial with unknown coefficients, based on shifted Legendre-Gauss (ShLG) collocation points. The problem is thus reduced to a system of algebraic equations, greatly simplifying its solution. Further, some additional conditions are considered to maintain the continuity of the approximate solution and its derivatives at the interfaces of subintervals. In order to convert the singular integrals of the SVIDE into nonsingular ones, integration by parts is utilized. In the method developed in this paper, the accuracy can be improved either by increasing the number of subintervals or by increasing the degree of the polynomial on each subinterval. Using several examples, including the Bagley-Torvik equation, the proposed method is shown to be efficient and accurate.
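Shifted Legendre-Gauss (ShLG) points are the collocation nodes mentioned above: standard Legendre-Gauss nodes mapped from [-1, 1] onto the subinterval of interest. A small sketch constructing them with NumPy and checking quadrature exactness:

```python
import numpy as np

def shifted_leggauss(n, a=0.0, b=1.0):
    """Legendre-Gauss nodes and weights mapped from [-1, 1] to [a, b]
    (the ShLG points used as collocation nodes)."""
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (b - a) * (x + 1.0) + a, 0.5 * (b - a) * w

# Sanity check: an n-point Gauss rule is exact for degree <= 2n - 1,
# so 2 nodes integrate x^3 on [0, 1] exactly (the answer is 1/4).
nodes, weights = shifted_leggauss(2)
integral = np.sum(weights * nodes ** 3)
```

In the paper's scheme these nodes serve as collocation points on each subinterval, so "increasing the degree of the polynomial" corresponds to increasing `n` per subinterval.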
Methods used in adaptation of health-related guidelines: A systematic survey.
Abdul-Khalek, Rima A; Darzi, Andrea J; Godah, Mohammad W; Kilzar, Lama; Lakis, Chantal; Agarwal, Arnav; Abou-Jaoude, Elias; Meerpohl, Joerg J; Wiercioch, Wojtek; Santesso, Nancy; Brax, Hneine; Schünemann, Holger; Akl, Elie A
2017-12-01
Adaptation refers to the systematic approach of considering the endorsement or modification of recommendations produced in one setting for application in another, as an alternative to de novo development. Our aims were to describe and assess the methods used for adapting health-related guidelines published in peer-reviewed journals, and to assess the quality of the resulting adapted guidelines. We searched Medline and Embase up to June 2015. We assessed the method of adaptation and the quality of the included guidelines. Seventy-two papers were eligible. Most adapted guidelines and their source guidelines were published by professional societies (71% and 68% respectively), and in high-income countries (83% and 85% respectively). Of the 57 adapted guidelines that reported any detail about the adaptation method, 34 (60%) did not use a published adaptation method. The number (and percentage) of adapted guidelines fulfilling each of the ADAPTE steps ranged between 2 (4%) and 57 (100%). The quality of the adapted guidelines was highest for the "scope and purpose" domain and lowest for the "editorial independence" domain (respective mean percentages of the maximum possible scores were 93% and 43%). The mean score for "rigor of development" was 57%. Most adapted guidelines published in peer-reviewed journals do not report using a published adaptation method, and their adaptation quality was variable.
Evaluation framework based on fuzzy measured method in adaptive learning systems
Houda Zouari Ounaies, ,; Yassine Jamoussi; Henda Hajjami Ben Ghezala
2008-01-01
Currently, e-learning systems are mainly web-based applications that reach a wide range of users all over the world. Fitting learners' needs is considered a key issue in guaranteeing the success of these systems. Much research has been devoted to providing adaptive systems. Nevertheless, evaluation of adaptivity is still in an exploratory phase. Adaptation methods are a basic factor in guaranteeing effective adaptation. This issue is referred to as meta-adaptation in numerous studies. In our research...
Adaptive Finite Element Methods for Elliptic Problems with Discontinuous Coefficients
Bonito, Andrea; DeVore, Ronald A.; Nochetto, Ricardo H.
2013-01-01
Elliptic PDEs with discontinuous diffusion coefficients occur in application domains such as diffusion through porous media, electromagnetic field propagation in heterogeneous media, and diffusion processes on rough surfaces. The standard approach to numerically treating such problems using finite element methods is to assume that the discontinuities lie on the boundaries of the cells in the initial triangulation. However, this does not match applications where discontinuities occur on curves, surfaces, or manifolds, and could even be unknown beforehand. One of the obstacles to treating such discontinuity problems is that the usual perturbation theory for elliptic PDEs assumes bounds for the distortion of the coefficients in the L∞ norm, and this in turn requires that the discontinuities are matched exactly when the coefficients are approximated. We present a new approach based on distortion of the coefficients in an Lq norm with q < ∞, which therefore does not require the exact matching of the discontinuities. We then use this new distortion theory to formulate new adaptive finite element methods (AFEMs) for such discontinuity problems. We show that such AFEMs are optimal in the sense of distortion versus number of computations, and report insightful numerical results supporting our analysis. © 2013 Society for Industrial and Applied Mathematics.
Adaptive two-regime method: Application to front propagation
Energy Technology Data Exchange (ETDEWEB)
Robinson, Martin, E-mail: martin.robinson@maths.ox.ac.uk; Erban, Radek, E-mail: erban@maths.ox.ac.uk [Mathematical Institute, University of Oxford, Andrew Wiles Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG (United Kingdom); Flegg, Mark, E-mail: mark.flegg@monash.edu [School of Mathematical Sciences, Faculty of Science, Monash University Wellington Road, Clayton, Victoria 3800 (Australia)
2014-03-28
The Adaptive Two-Regime Method (ATRM) is developed for hybrid (multiscale) stochastic simulation of reaction-diffusion problems. It efficiently couples detailed Brownian dynamics simulations with coarser lattice-based models. The ATRM is a generalization of the previously developed Two-Regime Method [Flegg et al., J. R. Soc. Interface 9, 859 (2012)] to multiscale problems which require a dynamic selection of the regions where detailed Brownian dynamics simulation is used. Typical applications include front propagation or spatio-temporal oscillations. In this paper, the ATRM is used for an in-depth study of front propagation in a stochastic reaction-diffusion system whose mean-field model is given in terms of the Fisher equation [R. Fisher, Ann. Eugen. 7, 355 (1937)], which exhibits a travelling reaction front that is sensitive to stochastic fluctuations at the leading edge of the wavefront. Previous studies of stochastic effects on the Fisher wave propagation speed have focused on lattice-based models; there has been limited progress using off-lattice (Brownian dynamics) models, which suffer from their high computational cost, particularly at the high molecular numbers that are necessary to approach the Fisher mean-field model. By modelling only the wavefront itself with the off-lattice model, it is shown that the ATRM leads to the same Fisher wave results as purely off-lattice models, but at a fraction of the computational cost. An error analysis of the ATRM is also presented for a morphogen gradient model.
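The deterministic (mean-field) limit referred to above is the Fisher equation, whose front travels at speed 2 in nondimensional form. A simple finite-difference check of that reference solution (no stochastic or hybrid ATRM coupling here; grid and time-step values are illustrative choices):

```python
import numpy as np

# Explicit finite-difference solution of the Fisher equation
# u_t = u_xx + u(1 - u); a pulled front emerging from steep initial
# data approaches the asymptotic speed 2.
nx, dx, dt = 600, 0.5, 0.1
x = np.arange(nx) * dx
u = (x < 10.0).astype(float)          # steep initial front near x = 10

def front_position(u, x):
    """Leftmost location where u falls below 0.5."""
    return x[np.argmax(u < 0.5)]

pos = []
for step in range(1, 1001):           # integrate to t = 100
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u += dt * (lap + u * (1 - u))
    u[0], u[-1] = 1.0, 0.0            # invaded / uninvaded boundary states
    if step in (500, 1000):
        pos.append(front_position(u, x))

speed = (pos[1] - pos[0]) / 50.0      # average speed between t = 50 and t = 100
```

The measured speed sits just below 2, consistent with the known slow logarithmic approach to the asymptotic value; the stochastic corrections to this speed are precisely what the ATRM study resolves at the wavefront.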
An adaptive finite element method for turbulent flow simulations
International Nuclear Information System (INIS)
Arnoux-Guisse, F.; Bonnin, O.; Leal de Sousa, L.; Nicolas, G.
1995-05-01
After outlining the space and time discretization methods used in the N3S thermal hydraulic code developed at EDF/NHL, we describe the possibilities of the peripheral version, the Adaptive Mesh, which comprises two separate parts: the error indicator computation and the development of a module for subdividing elements, usable by the solid dynamics code ASTER and the electromagnetism code TRIFOU, also developed by R and DD. The error indicators implemented in N3S are described. They consist of a projection indicator quantifying the space error in laminar or turbulent flow calculations and a Navier-Stokes residue indicator calculated on each element. The method for subdividing triangles into four sub-triangles and tetrahedra into eight sub-tetrahedra is then presented, with its advantages and drawbacks. It is illustrated by examples showing the efficiency of the module. The last example concerns the 2D case of flow behind a backward-facing step. (authors). 9 refs., 5 figs., 1 tab
A novel partitioning method for block-structured adaptive meshes
Fu, Lin; Litvinov, Sergej; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-07-01
We propose a novel partitioning method for block-structured adaptive meshes utilizing the meshless Lagrangian particle concept. With the observation that an optimum partitioning has high analogy to the relaxation of a multi-phase fluid to steady state, physically motivated model equations are developed to characterize the background mesh topology and are solved by multi-phase smoothed-particle hydrodynamics. In contrast to well-established partitioning approaches, all optimization objectives are implicitly incorporated and achieved during the particle relaxation to a stationary state. Distinct partitioning sub-domains are represented by colored particles and separated by a sharp interface with a surface tension model. In order to obtain the particle relaxation, special viscous and skin friction models, coupled with a tailored time integration algorithm, are proposed. Numerical experiments show that the present method has several important properties: generation of approximately equal-sized partitions without dependence on the mesh-element type, optimized interface communication between distinct partitioning sub-domains, and continuous domain decomposition which is physically localized and implicitly incremental. Therefore it is particularly suitable for load-balancing of high-performance CFD simulations.
Nonlinear microwave imaging using Levenberg-Marquardt method with iterative shrinkage thresholding
Desmal, Abdulla; Bagci, Hakan
2014-01-01
Development of microwave imaging methods applicable in sparse investigation domains is becoming a research focus in computational electromagnetics (D.W. Winters and S.C. Hagness, IEEE Trans. Antennas Propag., 58(1), 145-154, 2010). This is simply due to the fact that sparse/sparsified domains naturally exist in many applications including remote sensing, medical imaging, crack detection, hydrocarbon reservoir exploration, and see-through-the-wall imaging.
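The combination the title describes, a Levenberg-Marquardt step followed by a shrinkage (soft-thresholding) step to enforce sparsity, can be sketched on a toy nonlinear model. The quadratic forward model, damping factor and threshold below are illustrative assumptions, not the paper's electromagnetic formulation.

```python
import numpy as np

def soft_threshold(x, tau):
    """Shrinkage operator applied after each Levenberg-Marquardt step."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# Toy nonlinear forward model y = Ax + 0.1*(Ax)^2 standing in for the
# scattering operator; sizes, damping and threshold are illustrative.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = 1.0                        # sparse ground truth
y = A @ x_true + 0.1 * (A @ x_true) ** 2

def forward(x):
    z = A @ x
    return z + 0.1 * z ** 2

def jacobian(x):
    z = A @ x
    return A + 0.2 * z[:, None] * A          # derivative of the quadratic term

x = np.zeros(20)
lam, tau = 1.0, 0.05                         # LM damping, shrinkage threshold
for _ in range(30):
    r = forward(x) - y
    J = jacobian(x)
    step = np.linalg.solve(J.T @ J + lam * np.eye(20), J.T @ r)
    x = soft_threshold(x - step, tau)        # LM update, then thresholding
```

The thresholding after each damped Gauss-Newton update is what keeps the iterate sparse, matching the sparse-domain motivation of the abstract.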
Chung-Wei, Li; Gwo-Hshiung, Tzeng
To deal with complex problems, structuring them through graphical representations and analyzing causal influences can help illuminate complex issues, systems, or concepts. The DEMATEL method is a methodology for researching and solving complicated and intertwined problem groups. The end product of the DEMATEL process is a visual representation, the impact-relations map, by which respondents organize their own actions in the world. The applicability of the DEMATEL method is widespread, ranging from analyzing world problematique decision making to industrial planning. The most important property of the DEMATEL method used in the multi-criteria decision making (MCDM) field is to construct interrelations between criteria. In order to obtain a suitable impact-relations map, an appropriate threshold value is needed to extract adequate information for further analysis and decision-making. In this paper, we propose a method based on the entropy approach, the maximum mean de-entropy algorithm, to achieve this purpose. Using real cases of finding the interrelationships between the criteria for evaluating effects in E-learning programs as an example, we compare the results obtained from the respondents with those obtained from our method, and discuss the different impact-relations maps produced by the two methods.
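A minimal DEMATEL sketch shows where the threshold enters: the total-relation matrix is computed from a normalized direct-influence matrix, and only entries above the threshold become edges of the impact-relations map. The influence scores are invented, and the simple matrix-mean threshold stands in for the paper's maximum mean de-entropy algorithm.

```python
import numpy as np

# Minimal DEMATEL sketch: total-relation matrix T = N(I - N)^-1 from a
# normalized direct-influence matrix, then a threshold prunes the
# impact-relations map. Scores are invented; the matrix-mean threshold
# stands in for the paper's maximum mean de-entropy algorithm.
D = np.array([[0, 3, 2, 1],
              [1, 0, 2, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)

N = D / D.sum(axis=1).max()              # normalize by the largest row sum
T = N @ np.linalg.inv(np.eye(4) - N)     # total-relation matrix
threshold = T.mean()                     # simple mean threshold
edges = [(i, j) for i in range(4) for j in range(4)
         if i != j and T[i, j] > threshold]
```

Raising the threshold sparsifies the map; the paper's contribution is choosing that value from the entropy of the T entries rather than from a simple mean.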
International Nuclear Information System (INIS)
Zhao, Zhanqi; Möller, Knut; Guttmann, Josef
2012-01-01
The objective of this paper is to introduce and evaluate the adaptive SLICE method (ASM) for continuous determination of intratidal nonlinear dynamic compliance and resistance. The tidal volume is subdivided into a series of volume intervals called slices. For each slice, one compliance and one resistance are calculated by applying a least-squares-fit method. The volume window (width) covered by each slice is determined based on the confidence interval of the parameter estimation. The method was compared to the original SLICE method and evaluated using simulation and animal data. The ASM was also challenged with separate analysis of dynamic compliance during inspiration. If the signal-to-noise ratio (SNR) in the respiratory data decreased from +∞ to 10 dB, the relative errors of compliance increased from 0.1% to 22% for the ASM and from 0.2% to 227% for the SLICE method. Fewer differences were found in resistance. When the SNR was larger than 40 dB, the ASM delivered over 40 parameter estimates (42.2 ± 1.3). When analyzing the compliance during inspiration separately, the estimates calculated with the ASM were more stable. The adaptive determination of slice bounds results in consistent and reliable parameter values. Online analysis of nonlinear respiratory mechanics will profit from such an adaptive selection of interval size. (paper)
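The per-slice estimation step can be sketched as an ordinary least-squares fit of the single-compartment equation Paw = V/C + R*flow + P0 on one volume interval. The signals and parameter values below are synthetic; the ASM additionally chooses the slice width from the confidence interval of this fit, which the sketch omits.

```python
import numpy as np

# One SLICE-style fit on synthetic ventilation data: within a volume slice,
# estimate compliance C and resistance R from Paw = V/C + R*flow + P0.
# All signal shapes and parameter values are illustrative.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)
flow = np.sin(np.pi * t)                     # synthetic inspiratory flow, L/s
volume = np.cumsum(flow) * (t[1] - t[0])     # integrated volume, L
C_true, R_true, P0 = 0.05, 3.0, 5.0          # L/cmH2O, cmH2O*s/L, cmH2O
paw = volume / C_true + R_true * flow + P0 + 0.01 * rng.standard_normal(t.size)

def fit_slice(sl):
    """Least-squares fit of one compliance/resistance pair on a volume slice."""
    X = np.column_stack([volume[sl], flow[sl], np.ones(sl.stop - sl.start)])
    elastance, resistance, _ = np.linalg.lstsq(X, paw[sl], rcond=None)[0]
    return 1.0 / elastance, resistance       # compliance, resistance

C_est, R_est = fit_slice(slice(50, 150))
```

Repeating this fit over adjacent slices gives the intratidal compliance/resistance profile; adapting the slice bounds trades bias against the variance quantified by the confidence intervals.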
Munaretto, S.; Siciliano, G.; Turvani, M.
2014-01-01
Climate adaptation is a dynamic social and institutional process where the governance dimension is receiving growing attention. Adaptive governance is an approach that promises to reduce uncertainty by improving the knowledge base for decision making. As uncertainty is an inherent feature of climate
American Society for Testing and Materials. Philadelphia
2003-01-01
1.1 This test method covers the determination of the environment-assisted cracking threshold stress intensity factor parameters, KIEAC and KEAC, for metallic materials from constant-force testing of fatigue precracked beam or compact fracture specimens and from constant-displacement testing of fatigue precracked bolt-load compact fracture specimens. 1.2 This test method is applicable to environment-assisted cracking in aqueous or other aggressive environments. 1.3 Materials that can be tested by this test method are not limited by thickness or by strength as long as specimens are of sufficient thickness and planar size to meet the size requirements of this test method. 1.4 A range of specimen sizes with proportional planar dimensions is provided, but size may be variable and adjusted for yield strength and applied force. Specimen thickness is a variable independent of planar size. 1.5 Specimen configurations other than those contained in this test method may be used, provided that well-established stress ...
International Nuclear Information System (INIS)
Roney, A.; Frigon, C.; Larzilliere, M.
1999-01-01
The optical translational spectroscopy technique, based on the principles of fast ion beam laser spectroscopy (FIBLAS) and translational spectroscopy, allows the study of the kinetic energies of neutral fragments released through free dissociation of a neutral molecule. This method presents interesting features such as near-threshold energy measurements and selection of a specific dissociation limit. The fragments resulting from free (not induced) dissociation of neutral molecules, produced by charge-exchange processes with a fast ion beam, are probed by laser radiation. Monitoring of the laser-induced fluorescence yields high-resolution spectra due to the kinematic compression of the velocity spread. Measurements of the kinetic energies released to the second dissociation limit, H(1s) + H(2l), of H2 are put forth and compared with those obtained by means of off-axis translational spectroscopy.
International Nuclear Information System (INIS)
Yang, R.X.; Li, C.; Sun, Y.J.; Liu, Z.; Wang, X.Z.; Heng, Y.K.; Sun, S.S.; Dai, H.L.; Wu, Z.; An, F.F.
2017-01-01
The Beijing Spectrometer (BESIII) has recently upgraded its end-cap time-of-flight (ETOF) system, using multi-gap resistive plate chambers (MRPCs) to replace the previous scintillator detectors. These MRPCs show multi-peak phenomena in their time-over-threshold (TOT) distributions, which were also observed in the long-strip MRPCs built for the RHIC-STAR Muon Telescope Detector (MTD). After carefully investigating the correlation between the multi-peak distribution and the incident hit positions along the strips, we find that it can be semi-quantitatively explained by signal reflections at the ends of the readout strips. A new offline calibration method was therefore implemented on the MRPC ETOF data in BESIII, significantly improving the T-TOT correlation used to evaluate the time resolution.
Threshold Signature Schemes Application
Directory of Open Access Journals (Sweden)
Anastasiya Victorovna Beresneva
2015-10-01
This work is devoted to an investigation of threshold signature schemes. The threshold signature schemes were systematized, and cryptographic constructions based on Lagrange interpolation polynomials, elliptic curves and bilinear pairings were examined. Different methods of generating and verifying threshold signatures were explored, and the practical applicability of threshold schemes to mobile agents, Internet banking and e-currency was shown. Topics for further investigation were given, which could help reduce the level of counterfeit electronic documents signed by a group of users.
The dynamic time-over-threshold method for multi-channel APD based gamma-ray detectors
Energy Technology Data Exchange (ETDEWEB)
Orita, T., E-mail: orita.tadashi@jaea.go.jp [Japan Atomic Energy Agency, Fukushima (Japan); Shimazoe, K.; Takahashi, H. [Department of Nuclear Management and Engineering, The University of Tokyo, Bunkyō (Japan)
2015-03-01
Recent advances in manufacturing technology have enabled the use of multi-channel pixelated detectors in gamma-ray imaging applications. When obtaining gamma-ray measurements, it is important to obtain pulse-height information in order to avoid unwanted events such as scattering. However, as the number of channels increases, more electronics are needed to process each channel's signal, and the corresponding increases in circuit size and power consumption can cause practical problems. The time-over-threshold (ToT) method, which has recently become popular in the medical field, is a signal processing technique that can effectively avoid such problems. However, ToT suffers from poor linearity and its dynamic range is limited. We therefore propose a new ToT technique called the dynamic time-over-threshold (dToT) method [4]. A new signal processing system using dToT and CR-RC shaping demonstrated much better linearity than conventional ToT. Using a test circuit with a new Gd₃Al₂Ga₃O₁₂ (GAGG) scintillator and an avalanche photodiode, the pulse-height spectra of ¹³⁷Cs and ²²Na sources were measured with high linearity. Based on these results, we designed a new application-specific integrated circuit (ASIC) for this multi-channel dToT system, measured the spectra of a ²²Na source, and investigated the linearity of the system.
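The linearity problem that motivates dToT can be reproduced numerically: for an ideal CR-RC pulse, the width of the interval above a fixed threshold grows only roughly logarithmically with pulse height. The pulse shape, threshold and amplitudes below are illustrative.

```python
import numpy as np

def crrc_pulse(t, amplitude, tau=1.0):
    """Ideal CR-RC shaped pulse, peaking at t = tau with height `amplitude`."""
    return amplitude * (t / tau) * np.exp(1.0 - t / tau)

def time_over_threshold(t, pulse, threshold):
    """Width of the interval during which the pulse exceeds the threshold."""
    idx = np.flatnonzero(pulse > threshold)
    return t[idx[-1]] - t[idx[0]] if idx.size else 0.0

t = np.linspace(0.0, 20.0, 4000)
tots = [time_over_threshold(t, crrc_pulse(t, a), threshold=0.1)
        for a in (0.5, 1.0, 2.0, 4.0)]
# An 8x increase in pulse height widens the fixed-threshold ToT by well under
# 2x: the logarithmic response that the dynamic-threshold variant linearizes.
```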
Object-Oriented Support for Adaptive Methods on Parallel Machines
Directory of Open Access Journals (Sweden)
Sandeep Bhatt
1993-01-01
This article reports on experiments from our ongoing project whose goal is to develop a C++ library which supports adaptive and irregular data structures on distributed-memory supercomputers. We demonstrate the use of our abstractions in implementing "tree codes" for large-scale N-body simulations. These algorithms require dynamically evolving tree-like data structures as well as load balancing, both of which are widely believed to make the application difficult and cumbersome to program for distributed-memory machines. The ease of writing the application code on top of our C++ library abstractions (which themselves are application-independent), and the low overhead of the resulting C++ code (over hand-crafted C code), support our belief that object-oriented approaches are eminently suited to programming distributed-memory machines in a manner that, to the applications programmer, is architecture-independent. Our contribution to parallel programming methodology is to identify and encapsulate general classes of communication and load-balancing strategies useful across applications and MIMD architectures. This article reports experimental results from simulations of half a million particles using multiple methods.
Analyzing Sub-Threshold Bitcell Topologies and the Effects of Assist Methods on SRAM V_{MIN}
Directory of Open Access Journals (Sweden)
James Boley
2012-04-01
The need for ultra-low-power circuits has forced circuit designers to scale voltage supplies into the sub-threshold region where energy per operation is minimized [1]. The problem with this is that the traditional 6T SRAM bitcell, used for data storage, becomes unreliable at voltages below about 700 mV due to process variations and decreased device drive strength [2]. In order to achieve reliable operation, new bitcell topologies and assist methods have been proposed. This paper provides a comparison of four different bitcell topologies using read and write V_{MIN} as the metrics for evaluation. In addition, read and write assist methods were tested using the periphery voltage scaling techniques discussed in [4–13]. Measurements taken from a 180 nm test chip show read functionality (without assist methods) down to 500 mV and write functionality down to 600 mV. Using assist methods can reduce both read and write V_{MIN} by 100 mV over the unassisted test case.
LDRD Final Report: Adaptive Methods for Laser Plasma Simulation
International Nuclear Information System (INIS)
Dorr, M R; Garaizar, F X; Hittinger, J A
2003-01-01
The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are "hydrodynamically large", i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. AMR provides an
Incompressible Navier-Stokes inverse design method based on adaptive unstructured meshes
International Nuclear Information System (INIS)
Rahmati, M.T.; Charlesworth, D.; Zangeneh, M.
2005-01-01
An inverse method for blade design based on the Navier-Stokes equations on adaptive unstructured meshes has been developed. Unlike methods based on the inviscid equations, the effect of viscosity is directly taken into account. The pressure (or pressure loading) is prescribed, and the design method then computes the blade shape that would produce the target pressure distribution. The method is implemented using a cell-centered finite volume scheme that solves the incompressible Navier-Stokes equations on unstructured meshes. An adaptive unstructured mesh technique based on grid subdivision and local mesh adaptation is utilized to increase the accuracy. (author)
The Economics of Adaptation: Concepts, Methods and Examples
DEFF Research Database (Denmark)
Callaway, John MacIntosh; Naswa, Prakriti; Trærup, Sara Lærke Meltofte
and sectoral level strategies, plans and policies. Furthermore, we see it at the local level, where people are already adapting to the early impacts of climate change that affect livelihoods through, for example, changing rainfall patterns, drought, and frequency and intensity of extreme events. Analyses of the costs and benefits of climate change impacts and adaptation measures are important to inform future action. Despite the growth in the volume of research and studies on the economics of climate change adaptation over the past 10 years, there are still important gaps and weaknesses in the existing knowledge that limit effective and efficient decision-making and implementation of adaptation measures. Much of the literature to date has focussed on aggregate (national, regional and global) estimates of the economic costs of climate change impacts. There has been much less attention to the economics...
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
Development and testing of methods for adaptive image processing in odontology and medicine
Energy Technology Data Exchange (ETDEWEB)
Sund, Torbjoern
2005-07-01
Medical diagnostic imaging has undergone radical changes during the last ten years. In the early 1990s, the medical imaging department was almost exclusively film-based. Today, all major hospitals have converted to digital acquisition and handling of their diagnostic imaging, or are in the process of conversion. It is therefore important to investigate whether diagnostic reading of digitally acquired images on computer display screens can match or even surpass film recording and viewing. At the same time, the digitalisation opens new possibilities for image processing, which may challenge the traditional way of studying medical images. The current work explores some of the possibilities of digital processing techniques, and evaluates the results both by quantitative methods (ROC analysis) and by subjective qualification by real users. Summary of papers: Paper I: Locally adaptive image binarization with a sliding window threshold was used for the detection of bone ridges in radiotherapy portal images. A new thresholding criterion suitable for incremental update within the sliding window was developed, and it was shown that the algorithm gave better results on difficult portal images than various publicly available adaptive thresholding routines. For small windows the routine was also faster than an adaptive implementation of the Otsu algorithm that uses interpolation between fixed tiles, and the resulting images had equal quality. Paper II: It was investigated whether contrast enhancement by non-interactive, sliding window adaptive histogram equalization could enhance the diagnostic quality of intra-oral radiographs in the dental clinic. Three dentists read 22 periapical and 12 bitewing storage phosphor (SP) radiographs. For the periapical readings they graded the quality of the examination with regard to visually locating the root apex. For the bitewing readings they registered all occurrences of approximal caries on a confidence scale. Each reading was
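The sliding-window binarization of Paper I can be sketched with a local-mean threshold computed via an integral image; the incremental criterion developed in the paper is more elaborate, and the window size, offset and synthetic test image below are illustrative assumptions.

```python
import numpy as np

def local_threshold(image, window=15, offset=0.0):
    """Binarize with a sliding-window mean threshold, a simplified analogue
    of the incremental sliding-window criterion described in Paper I."""
    pad = window // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    # integral-image trick: each window sum costs O(1)
    integral = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = image.shape
    out = np.zeros((h, w), bool)
    for i in range(h):
        for j in range(w):
            s = (integral[i + window, j + window] - integral[i, j + window]
                 - integral[i + window, j] + integral[i, j])
            out[i, j] = image[i, j] > s / window ** 2 + offset
    return out

# Synthetic "portal image": a bright ridge on a smooth illumination gradient
y, x = np.mgrid[0:64, 0:64]
img = 0.5 * x / 63 + (np.abs(y - 32) < 2) * 0.3
mask = local_threshold(img, window=15, offset=0.05)
```

A single global threshold cannot separate this ridge from the gradient, while the local mean tracks the background, which is the motivation for adaptive binarization of portal images.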
Ma, M.; Wang, H.; Chen, Y.; Tang, G.; Hong, Z.; Zhang, K.; Hong, Y.
2017-12-01
Flash floods rank among the deadliest natural hazards worldwide in terms of heavy damage and casualties. In the United States, for example, flash floods are the No. 1 cause of death among storm-related hazards and the No. 2 most deadly weather-related hazard, with approximately 100 lives lost each year. According to the China Floods and Droughts Disasters Bulletin 2015 (http://www.mwr.gov.cn/zwzc/hygb/zgshzhgb), about 935 deaths per year on average were caused by flash floods from 2000 to 2015, accounting for 73% of the fatalities due to floods. Although significant efforts have been made toward understanding flash flood processes and toward modeling and forecasting them, flash flood prediction remains challenging because of the short response time and limited monitoring capacity. This study advances the use of high-resolution Global Precipitation Measurement forecasts (GPMs), disaster data obtained from government officials in 2011 and 2016, and an improved Distributed Flash Flood Guidance (DFFG) method combining a distributed hydrologic model with Soil Conservation Service Curve Numbers. The objectives of this paper are (1) to examine changes in flash flood occurrence, (2) to estimate the effect of rainfall spatial variability, (3) to improve the lead time of flash flood warnings and derive the rainfall threshold, (4) to assess the applicability of the DFFG method in the Dongchuan catchments, and (5) to yield probabilistic information about the forecast hydrologic response that accounts for the locational uncertainties of the GPMs. Results indicate: (1) flash flood occurrence increased in the study region, (2) the occurrence of predicted flash floods shows high sensitivity to total infiltration and soil water content, (3) the DFFG method is generally capable of making accurate predictions of flash flood events in terms of their locations and time of occurrence, and (4) the accumulative rainfall over a certain time span is an
Adaptive L1/2 Shooting Regularization Method for Survival Analysis Using Gene Expression Data
Directory of Open Access Journals (Sweden)
Xiao-Ying Liu
2013-01-01
A new adaptive L1/2 shooting regularization method for variable selection based on the Cox proportional hazards model is proposed. This adaptive L1/2 shooting algorithm can be easily obtained by optimizing a reweighted iterative series of L1 penalties with a shooting strategy for the L1/2 penalty. Simulation results based on high-dimensional artificial data show that the adaptive L1/2 shooting regularization method can be more accurate for variable selection than the Lasso and adaptive Lasso methods. The results from a real gene expression dataset (DLBCL) also indicate that the L1/2 regularization method performs competitively.
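Without access to the paper's Cox-model formulation, the flavor of the algorithm can be sketched on a linear model: a coordinate-descent ("shooting") lasso whose per-coefficient weights are refreshed from the previous iterate, a standard reweighted-L1 surrogate for the L1/2 penalty. All constants below are illustrative.

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def reweighted_shooting(X, y, lam=0.5, iters=50, eps=1e-3):
    """Coordinate-descent ("shooting") lasso whose weights are refreshed from
    the previous iterate, a reweighted-L1 surrogate for the L1/2 penalty.
    Linear-model sketch only; the paper works with the Cox likelihood."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # warm start
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        w = lam / (2.0 * np.sqrt(np.abs(beta) + eps))  # L1/2 surrogate weights
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]     # partial residual
            beta[j] = soft(X[:, j] @ r_j, n * w[j]) / col_sq[j]
    return beta

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 10))
beta_true = np.array([2.0, 0, 0, -1.5, 0, 0, 0, 0, 0, 0])
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta_hat = reweighted_shooting(X, y)
```

Because small coefficients receive large weights, they are thresholded to exactly zero, which is the stronger sparsity the L1/2 penalty offers over plain L1.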
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 This test method establishes a procedure to measure the susceptibility of steel to a time-delayed failure such as that caused by hydrogen. It does so by measuring the threshold for the onset of subcritical crack growth using standard fracture mechanics specimens, irregular-shaped specimens such as notched round bars, or actual product such as fasteners (2) (threaded or unthreaded) springs or components as identified in SAE J78, J81, and J1237. 1.2 This test method is used to evaluate quantitatively: 1.2.1 The relative susceptibility of steels of different composition or a steel with different heat treatments; 1.2.2 The effect of residual hydrogen in the steel as a result of processing, such as melting, thermal mechanical working, surface treatments, coatings, and electroplating; 1.2.3 The effect of hydrogen introduced into the steel caused by external environmental sources of hydrogen, such as fluids and cleaners maintenance chemicals, petrochemical products, and galvanic coupling in an aqueous enviro...
An Adaptive Multiobjective Particle Swarm Optimization Based on Multiple Adaptive Methods.
Han, Honggui; Lu, Wei; Qiao, Junfei
2017-09-01
Multiobjective particle swarm optimization (MOPSO) algorithms have attracted much attention for their promising performance in solving multiobjective optimization problems (MOPs). In this paper, an adaptive MOPSO (AMOPSO) algorithm, based on a hybrid framework of the solution distribution entropy and population spacing (SP) information, is developed to improve the search performance in terms of convergence speed and precision. First, an adaptive global best (gBest) selection mechanism, based on the solution distribution entropy, is introduced to analyze the evolutionary tendency and balance the diversity and convergence of nondominated solutions in the archive. Second, an adaptive flight parameter adjustment mechanism, using the population SP information, is proposed to obtain a distribution of particles with suitable diversity and convergence, which can balance the global exploration and local exploitation abilities of the particles. Third, based on the gBest selection mechanism and the adaptive flight parameter mechanism, the proposed AMOPSO algorithm not only has high accuracy but also attains a set of optimal solutions with better diversity. Finally, the performance of the proposed AMOPSO algorithm is validated and compared with five other state-of-the-art algorithms on a number of benchmark problems and a water distribution system. The experimental results validate the effectiveness of the proposed AMOPSO algorithm, as well as demonstrate that AMOPSO outperforms other MOPSO algorithms in solving MOPs.
The use of the spectral method within the fast adaptive composite grid method
Energy Technology Data Exchange (ETDEWEB)
McKay, S.M.
1994-12-31
The use of efficient algorithms for the solution of partial differential equations has been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low-cost solutions to partial differential equations. The FAC method achieves fast solution by combining solutions on different grids with varying discretizations and using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed, which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers which construct a function for the solution, e.g., spectral and finite element methods. In this discussion, spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the resulting accuracy of this hybrid method outside of the subdomain will be investigated.
Dyck, P J; Zimmerman, I; Gillen, D A; Johnson, D; Karnes, J L; O'Brien, P C
1993-08-01
We recently found that vibratory detection threshold is greatly influenced by the algorithm of testing. Here, we study the influence of stimulus characteristics and algorithm of testing and estimating threshold on cool (CDT), warm (WDT), and heat-pain (HPDT) detection thresholds. We show that continuously decreasing (for CDT) or increasing (for WDT) thermode temperature to the point at which cooling or warming is perceived and signaled by depressing a response key ("appearance" threshold) overestimates threshold with rapid rates of thermal change. The mean of the appearance and disappearance thresholds also does not perform well for insensitive sites and patients. Pyramidal (or flat-topped pyramidal) stimuli ranging in magnitude, in 25 steps, from near skin temperature to 9 degrees C for 10 seconds (for CDT), from near skin temperature to 45 degrees C for 10 seconds (for WDT), and from near skin temperature to 49 degrees C for 10 seconds (for HPDT) provide ideal stimuli for use in several algorithms of testing and estimating threshold. Near threshold, only the initial direction of thermal change from skin temperature is perceived, and not its return to baseline. Use of steps of stimulus intensity allows the subject or patient to take the needed time to decide whether the stimulus was felt or not (in 4, 2, and 1 stepping algorithms), or whether it occurred in stimulus interval 1 or 2 (in two-alternative forced-choice testing). Thermal thresholds were generally significantly lower with a large (10 cm2) than with a small (2.7 cm2) thermode.(ABSTRACT TRUNCATED AT 250 WORDS)
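The 4-2-1 stepping idea, in which the step size shrinks from 4 to 2 to 1 stimulus levels after reversals, can be sketched with a simulated perfectly reliable subject. The reversal rules and stopping condition below are a simplified reading of such algorithms, not the authors' exact protocol (which also handles null stimuli and forced-choice variants).

```python
# Simplified 4-2-1 stepping staircase over 25 stimulus levels (level 25 is
# the strongest). The reversal and stopping rules are an illustrative
# reading; real protocols also interleave null stimuli.
def stepping_421(felt, start=13, max_trials=100):
    level, step = start, 4
    last_felt = None
    for _ in range(max_trials):
        response = felt(level)
        if response:
            last_felt = level
        if step == 4 and not response:
            step = 2                     # first "not felt": refine step size
        elif step == 2 and response:
            step = 1                     # felt again: finest step size
        if step == 1 and not response and last_felt == level + 1:
            return last_felt             # smallest level that was still felt
        level += -step if response else step
        level = max(1, min(25, level))   # stay on the 25-step scale
    return last_felt

# A perfectly reliable simulated subject with a true threshold of 7
estimate = stepping_421(lambda lvl: lvl >= 7)
```

The shrinking step size is what lets the staircase bracket the threshold quickly and then pin it to a single level.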
Ly, Sovann; Arashiro, Takeshi; Ieng, Vanra; Tsuyuoka, Reiko; Parry, Amy; Horwood, Paul; Heng, Seng; Hamid, Sarah; Vandemaele, Katelijn; Chin, Savuth; Sar, Borann; Arima, Yuzo
2017-01-01
To establish seasonal and alert thresholds and transmission intensity categories for influenza to provide timely triggers for preventive measures or upscaling control measures in Cambodia. Using Cambodia's influenza-like illness (ILI) and laboratory-confirmed influenza surveillance data from 2009 to 2015, three parameters were assessed to monitor influenza activity: the proportion of ILI patients among all outpatients, proportion of ILI samples positive for influenza and the product of the two. With these parameters, four threshold levels (seasonal, moderate, high and alert) were established and transmission intensity was categorized based on a World Health Organization alignment method. Parameters were compared against their respective thresholds. Distinct seasonality was observed using the two parameters that incorporated laboratory data. Thresholds established using the composite parameter, combining syndromic and laboratory data, had the least number of false alarms in declaring season onset and were most useful in monitoring intensity. Unlike in temperate regions, the syndromic parameter was less useful in monitoring influenza activity or for setting thresholds. Influenza thresholds based on appropriate parameters have the potential to provide timely triggers for public health measures in a tropical country where monitoring and assessing influenza activity has been challenging. Based on these findings, the Ministry of Health plans to raise general awareness regarding influenza among the medical community and the general public. Our findings have important implications for countries in the tropics/subtropics and in resource-limited settings, and categorized transmission intensity can be used to assess severity of potential pandemic influenza as well as seasonal influenza.
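The composite parameter is the product of the syndromic proportion and the laboratory positivity, compared against ordered threshold levels. The sketch below assumes hypothetical weekly counts and cut-off values; the paper's actual thresholds are not reproduced here.

```python
def composite_activity(ili_patients, all_outpatients, positive, tested):
    """Composite parameter: proportion of ILI patients among outpatients
    multiplied by the proportion of ILI samples positive for influenza."""
    return (ili_patients / all_outpatients) * (positive / tested)

def intensity(value, thresholds):
    """Return the highest threshold level reached by a weekly value.
    `thresholds` is an ordered list of (label, cut-off) pairs."""
    label = "below seasonal"
    for name, cut in thresholds:
        if value >= cut:
            label = name
    return label

# hypothetical cut-offs and weekly counts (not the paper's values)
THRESHOLDS = [("seasonal", 0.010), ("moderate", 0.020),
              ("high", 0.035), ("alert", 0.050)]
week_value = composite_activity(180, 2000, 45, 120)
level = intensity(week_value, THRESHOLDS)
```

Here the week's composite value (0.09 × 0.375 ≈ 0.034) crosses the seasonal and moderate cut-offs but not the high one.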
Akagunduz, Ozlem Ozkaya; Savas, Recep; Yalman, Deniz; Kocacelebi, Kenan; Esassolak, Mustafa
2015-11-01
To evaluate the predictive value of adaptive threshold-based metabolic tumor volume (MTV), maximum standardized uptake value (SUVmax) and maximum lean body mass corrected SUV (SULmax) measured on pretreatment positron emission tomography and computed tomography (PET/CT) imaging in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy. Pretreatment PET/CT of the 62 patients with locally advanced head and neck cancer who were treated consecutively between May 2010 and February 2013 were reviewed retrospectively. The maximum FDG uptake of the primary tumor was defined according to SUVmax and SULmax. Multiple threshold levels between 60% and 10% of the SUVmax and SULmax were tested with intervals of 5% to 10% in order to define the most suitable threshold value for the metabolic activity of each patient's tumor (adaptive threshold). MTV was calculated according to this value. We evaluated the relationship of mean values of MTV, SUVmax and SULmax with treatment response, local recurrence, distant metastasis and disease-related death. Receiver-operating characteristic (ROC) curve analysis was done to obtain optimal predictive cut-off values for MTV and SULmax which were found to have a predictive value. Local recurrence-free (LRFS), disease-free (DFS) and overall survival (OS) were examined according to these cut-offs. Forty-six patients had complete response, 15 had partial response, and 1 had stable disease 6 weeks after the completion of treatment. Median follow-up of the entire cohort was 18 months. Of 46 complete responders 10 had local recurrence, and of 16 partial or no responders 10 had local progression. Eighteen patients died. Adaptive threshold-based MTV had significant predictive value for treatment response (p=0.011), local recurrence/progression (p=0.050), and disease-related death (p=0.024). SULmax had a predictive value for local recurrence/progression (p=0.030). ROC curve analysis revealed a cut-off value of 14.00 mL for
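Once a threshold fraction is fixed, MTV is just the total volume of voxels whose uptake reaches that fraction of SUVmax. A minimal sketch; the voxel values, voxel volume and 40% fraction below are illustrative, not the study's per-patient adaptive threshold.

```python
def metabolic_tumor_volume(suv_voxels, threshold_fraction, voxel_volume_ml):
    """MTV: total volume of voxels at or above threshold_fraction * SUVmax."""
    cut = threshold_fraction * max(suv_voxels)
    return sum(1 for v in suv_voxels if v >= cut) * voxel_volume_ml

# synthetic SUV values for a handful of tumour voxels (illustrative only)
voxels = [1.2, 3.5, 8.0, 9.6, 12.0, 11.4, 2.2, 7.1]
mtv_40 = metabolic_tumor_volume(voxels, 0.40, voxel_volume_ml=0.5)
```

With SUVmax = 12.0 the 40% cut is 4.8, so five of the eight voxels count toward the volume.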
Adaptive Maneuvering Frequency Method of Current Statistical Model
Institute of Scientific and Technical Information of China (English)
Wei Sun; Yongjian Yang
2017-01-01
The current statistical model (CSM) performs well in maneuvering target tracking. However, a fixed maneuvering frequency deteriorates the tracking results, causing serious dynamic delay, slow convergence and limited precision when the Kalman filter (KF) algorithm is used. In this study, a new current statistical model and a new Kalman filter are proposed to improve the performance of maneuvering target tracking. The new model, which employs an innovation-dominated subjection function to adaptively adjust the maneuvering frequency, performs better in step-maneuvering target tracking, although a fluctuation phenomenon appears. To address this problem, a new adaptive fading Kalman filter is proposed as well. In the new Kalman filter, the prediction values are amended in time by judgment and amendment rules, so that the tracking precision and the fluctuation phenomenon of the new current statistical model are improved. Simulation results indicate the effectiveness of the new algorithm and its practical significance.
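The fading idea can be seen in a scalar sketch: a factor λ > 1 inflates the predicted covariance so the gain does not collapse, and the filter recovers faster after a step manoeuvre. This is a generic fading-memory Kalman filter on a random-walk state model, not the authors' amended CSM filter; all parameters are illustrative.

```python
def fading_kalman(measurements, q=1.0, r=4.0, lam=1.05):
    """Scalar random-walk Kalman filter with fading factor lam > 1.

    Inflating the predicted covariance by lam keeps the gain from collapsing,
    so the filter re-weights fresh measurements during a manoeuvre."""
    x, p = measurements[0], 1.0
    estimates = []
    for z in measurements:
        p = lam * p + q          # faded covariance prediction
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # innovation update
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

track = fading_kalman([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])  # step manoeuvre at sample 4
```

After the step from 0.2 to 5.0 the estimate climbs toward the new level within a few samples.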
Beyond Low Rank: A Data-Adaptive Tensor Completion Method
Zhang, Lei; Wei, Wei; Shi, Qinfeng; Shen, Chunhua; Hengel, Anton van den; Zhang, Yanning
2017-01-01
Low rank tensor representation underpins much of recent progress in tensor completion. In real applications, however, this approach is confronted with two challenging problems, namely (1) tensor rank determination; (2) handling real tensor data which only approximately fulfils the low-rank requirement. To address these two issues, we develop a data-adaptive tensor completion model which explicitly represents both the low-rank and non-low-rank structures in a latent tensor. Representing the no...
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
Sparse Pseudo Spectral Projection Methods with Directional Adaptation for Uncertainty Quantification
Winokur, J.; Kim, D.; Bisetti, Fabrizio; Le Maître, O. P.; Knio, Omar
2015-01-01
We investigate two methods to build a polynomial approximation of a model output depending on some parameters. The two approaches are based on pseudo-spectral projection (PSP) methods on adaptively constructed sparse grids, and aim at providing a
Adaptive calibration method with on-line growing complexity
Directory of Open Access Journals (Sweden)
Šika Z.
2011-12-01
This paper describes a modified variant of a kinematical calibration algorithm. A brief review of the calibration algorithm and a simple modification of it are given first. As the described modification uses ideas from the Lolimot algorithm, that algorithm is also described and explained. The main topic of the paper is the synthesis of the Lolimot-based calibration into an adaptive algorithm whose complexity grows on-line. The paper contains a comparison of results on simple examples and a discussion. A note about future research topics is also included.
International Nuclear Information System (INIS)
Matsuo, Toyofumi; Matsumura, Takuro; Miyagawa, Yoshinori
2009-01-01
This paper discusses the applicability of a material degradation model for reinforcing steel corrosion to RC box-culverts with corroded reinforcement, and an estimation method for the threshold value in performance verification that reflects reinforcing steel corrosion. First, the FEM analyses considered the loss of reinforcement section area and initial tension strain arising from reinforcing steel corrosion, and the deteriorated bond characteristics between reinforcement and concrete. Full-scale loading tests on corroded RC box-culverts were numerically analyzed. The analyzed crack patterns and load-strain relationships were in close agreement with the experimental results up to a maximum corrosion ratio of 15% of the primary reinforcement, showing that this modeling can estimate the load carrying capacity of corroded RC box-culverts. Second, a parametric study was carried out for corroded RC box-culverts with various sizes, reinforcement ratios and levels of steel corrosion. Furthermore, drawing on the analytical results and various experimental investigations, we suggested allowable degradation ratios for modifying the threshold value, corresponding to the chloride-induced deterioration progress that is widely accepted in maintenance practice for civil engineering reinforced concrete structures. Finally, based on these findings, we developed two estimation methods for the threshold value in performance verification: 1) a structural analysis method using nonlinear FEM that includes the material degradation model; 2) a practical method in which a threshold value, determined by structural analyses of RC box-culverts in sound condition, is multiplied by the allowable degradation ratio. (author)
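The second estimation method above reduces to a single multiplication of a sound-condition threshold by an allowable degradation ratio. A minimal sketch with hypothetical numbers (the report's actual capacities and ratios are not given here):

```python
def practical_threshold(sound_threshold_kn, allowable_degradation_ratio):
    """Practical method: threshold value of the sound structure scaled by the
    allowable degradation ratio for the assessed corrosion level."""
    return sound_threshold_kn * allowable_degradation_ratio

# hypothetical values: 500 kN sound-condition capacity, ratio 0.85
limit_kn = practical_threshold(500.0, 0.85)
```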
Gonzalez Lazo, Eduardo; Cruz Inclán, Carlos M.; Rodríguez Rodríguez, Arturo; Guzmán Martínez, Fernando; Abreu Alfonso, Yamiel; Piñera Hernández, Ibrahin; Leyva Fabelo, Antonio
2017-09-01
A primary approach for evaluating the influence of point defects such as vacancies on atom displacement threshold energy values Td in BaTiO3 is attempted. For this purpose Molecular Dynamics (MD) methods were applied, based on previous Td calculations on an ideal tetragonal crystalline structure. This is an important step toward more realistic simulations of radiation damage effects in BaTiO3 ceramic materials, including samples under severe radiation damage due to high-fluence exposures. In addition to the atom displacement events initiated by a single primary knock-on atom (PKA), a new mechanism was introduced: the simultaneous excitation of two close primary knock-on atoms in BaTiO3, which might take place under high-flux irradiation. Therefore, two different BaTiO3 Td MD calculation trials were carried out. Firstly, single PKA excitations in a defective BaTiO3 tetragonal crystalline structure, consisting of a 2×2×2 BaTiO3 perovskite-like super cell, were considered; it contains vacancies on Ba and O atomic positions under the requirements of electrical charge balance. Alternatively, double PKA excitations in a perfect BaTiO3 tetragonal unit cell were also simulated. On this basis, the corresponding PKA defect formation probability functions were calculated along the principal crystal directions and compared with the ones we previously calculated and reported for an ideal BaTiO3 tetragonal crystal structure. As a general result, the present calculations show a diminution of Td values in comparison with those calculated for single PKA excitation in an ideal BaTiO3 crystal structure.
Pasquarella, Cesira; Veronesi, Licia; Napoli, Christian; Castiglia, Paolo; Liguori, Giorgio; Rizzetto, Rolando; Torre, Ida; Righi, Elena; Farruggia, Patrizia; Tesauro, Marina; Torregrossa, Maria V; Montagna, Maria T; Colucci, Maria E; Gallè, Francesca; Masia, Maria D; Strohmenger, Laura; Bergomi, Margherita; Tinteri, Carola; Panico, Manuela; Pennino, Francesca; Cannova, Lucia; Tanzi, Marialuisa
2012-03-15
A microbiological environmental investigation was carried out in ten dental clinics in Italy. Microbial contamination of water, air and surfaces was assessed in each clinic during the five working days, for one week per month, for a three-month period. Water and surfaces were sampled before and after clinical activity; air was sampled before, after, and during clinical activity. A wide variation was found in microbial environmental contamination, both within the participating clinics and for the different sampling times. Before clinical activity, microbial water contamination in tap water reached 51,200 cfu/mL (colony forming units per milliliter), and that in Dental Unit Water Systems (DUWSs) reached 872,000 cfu/mL. After clinical activity, there was a significant decrease in the Total Viable Count (TVC) in tap water and in DUWSs. Pseudomonas aeruginosa was found in 2.38% (7/294) of tap water samples and in 20.06% (59/294) of DUWS samples; Legionella spp. was found in 29.96% (89/297) of tap water samples and 15.82% (47/297) of DUWS samples, with no significant difference between pre- and post-clinical activity. Microbial air contamination was highest during dental treatments, and decreased significantly at the end of the working activity (p<0.05). The microbial buildup on surfaces increased significantly during the working hours. This study provides data for the establishment of standardized sampling methods, and threshold values for contamination monitoring in dentistry. Some very critical situations have been observed which require urgent intervention. Furthermore, the study emphasizes the need for research aimed at defining effective managing strategies for dental clinics. Copyright © 2012 Elsevier B.V. All rights reserved.
Mitigation and adaptation cost assessment: Concepts, methods and appropriate use
Energy Technology Data Exchange (ETDEWEB)
NONE
1999-12-31
The present report on mitigation and adaptation costs addresses the complex issue of identifying synergies and tradeoffs between national priorities and mitigation policies, an issue that requires the integration of various disciplines so as to provide a comprehensive overview of future development trends, available technologies and economic policies. Further, the report suggests a new conceptual framework for treating the social aspects in assessing mitigation and adaptation costs in climate change studies. The impacts of certain sustainability indicators such as employment and poverty reduction on mitigation costing are also discussed in the report. Among the topics to be considered by over 120 distinguished international experts, are the elements of costing methodologies at both the micro and macro levels. Special effort will be made to include the impacts of such parameters as income, equity, poverty, employment and trade. Hence, the contents of this report are highly relevant to the authors of the Third Working Group in the development of the TAR. The report contains a chapter on Special Issues and Problems Related to Cost Assessment for Developing Countries. This chapter will provide valuable background in the further development of these concepts in the TAR because it is an area that has not received due attention in previous work. (au)
Mitigation and adaptation cost assessment: Concepts, methods and appropriate use
Energy Technology Data Exchange (ETDEWEB)
NONE
1998-12-31
The present report on mitigation and adaptation costs addresses the complex issue of identifying synergies and tradeoffs between national priorities and mitigation policies, an issue that requires the integration of various disciplines so as to provide a comprehensive overview of future development trends, available technologies and economic policies. Further, the report suggests a new conceptual framework for treating the social aspects in assessing mitigation and adaptation costs in climate change studies. The impacts of certain sustainability indicators such as employment and poverty reduction on mitigation costing are also discussed in the report. Among the topics to be considered by over 120 distinguished international experts, are the elements of costing methodologies at both the micro and macro levels. Special effort will be made to include the impacts of such parameters as income, equity, poverty, employment and trade. Hence, the contents of this report are highly relevant to the authors of the Third Working Group in the development of the TAR. The report contains a chapter on Special Issues and Problems Related to Cost Assessment for Developing Countries. This chapter will provide valuable background in the further development of these concepts in the TAR because it is an area that has not received due attention in previous work. (au)
Mitigation and adaptation cost assessment: Concepts, methods and appropriate use
International Nuclear Information System (INIS)
1998-01-01
The present report on mitigation and adaptation costs addresses the complex issue of identifying synergies and tradeoffs between national priorities and mitigation policies, an issue that requires the integration of various disciplines so as to provide a comprehensive overview of future development trends, available technologies and economic policies. Further, the report suggests a new conceptual framework for treating the social aspects in assessing mitigation and adaptation costs in climate change studies. The impacts of certain sustainability indicators such as employment and poverty reduction on mitigation costing are also discussed in the report. Among the topics to be considered by over 120 distinguished international experts, are the elements of costing methodologies at both the micro and macro levels. Special effort will be made to include the impacts of such parameters as income, equity, poverty, employment and trade. Hence, the contents of this report are highly relevant to the authors of the Third Working Group in the development of the TAR. The report contains a chapter on Special Issues and Problems Related to Cost Assessment for Developing Countries. This chapter will provide valuable background in the further development of these concepts in the TAR because it is an area that has not received due attention in previous work. (au)
Dirler, Julia; Winkler, Gertrud; Lachenmeier, Dirk W
2018-06-01
The International Agency for Research on Cancer (IARC) evaluates "very hot (>65 °C) beverages" as probably carcinogenic to humans. However, there is a lack of research on what temperatures consumers actually perceive as "very hot" or as "too hot". A method for sensory analysis of such threshold temperatures was developed. The participants were asked to mix very hot coffee, step by step, into a cooler coffee, so that the coffee to be tasted increased incrementally in temperature during the test. The participants took a sip after every addition until they perceived the beverage as too hot for consumption. The protocol was evaluated in a pilot study with 87 participants. Interestingly, the average pain threshold of the test group (67 °C) and the preferred drinking temperature (63 °C) bracketed the IARC threshold for carcinogenicity. The developed methodology was found to be fit for purpose and may be applied in larger studies.
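The incremental mixing protocol can be modelled as a mass-weighted mean temperature, assuming identical liquids and no heat loss. The quantities below (150 g at 55 °C, five 10 g additions at 85 °C) are illustrative, not the study's actual protocol parameters.

```python
def mix_temperature(t_cool, m_cool, t_hot, m_hot):
    """Mass-weighted mean temperature after stirring two portions of the
    same liquid together (no heat loss assumed)."""
    return (t_cool * m_cool + t_hot * m_hot) / (m_cool + m_hot)

t, m = 55.0, 150.0               # start: 150 g of coffee at 55 deg C
history = []
for _ in range(5):               # five additions of 10 g at 85 deg C
    t = mix_temperature(t, m, 85.0, 10.0)
    m += 10.0
    history.append(t)
```

Each addition nudges the cup temperature upward by a little over one degree, mimicking the stepwise warming the participants tasted.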
Santos-Concejero, Jordan; Tucker, Ross; Granados, Cristina; Irazusta, Jon; Bidaurrazaga-Letona, Iraia; Zabala-Lili, Jon; Gil, Susana María
2014-01-01
This study investigated the influence of the regression model and initial intensity during an incremental test on the relationship between the lactate threshold estimated by the maximal-deviation method and performance in elite-standard runners. Twenty-three well-trained runners completed a discontinuous incremental running test on a treadmill. Speed started at 9 km · h(-1) and increased by 1.5 km · h(-1) every 4 min until exhaustion, with a minute of recovery for blood collection. Lactate-speed data were fitted by exponential and polynomial models. The lactate threshold was determined for both models, using all the co-ordinates, excluding the first and excluding the first and second points. The exponential lactate threshold was greater than the polynomial equivalent in any co-ordinate condition (P performance and is independent of the initial intensity of the test.
Adaptation Method for Overall and Local Performances of Gas Turbine Engine Model
Kim, Sangjo; Kim, Kuisoon; Son, Changmin
2018-04-01
An adaptation method is proposed to improve the modeling accuracy of the overall and local performance of a gas turbine engine. The method has two steps. First, the overall performance parameters such as engine thrust, thermal efficiency and pressure ratio are adapted by calibrating the compressor maps; second, the local performance parameters such as component-intersection temperatures and shaft speed are adjusted by additional adaptation factors. An optimization technique is used to find the correlation equation of the adaptation factors for the compressor performance maps, employing the multi-island genetic algorithm (MIGA). The correlations of the local adaptation factors are generated from the difference between the first-step adapted engine model and performance test data. The proposed method was applied to a low-bypass-ratio turbofan engine of 12,000 lb thrust. The engine model was generated and validated against performance test data in the sea-level static condition. In flight at 20,000 ft and Mach 0.9, the adapted engine model improved the prediction of engine thrust (an overall performance parameter), reducing the difference from 14.5 to 3.3%. The comparison of low-pressure turbine exit temperature (a local performance parameter) improved further, with the difference reduced from 3.2 to 0.4%.
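The first adaptation step can be caricatured as a scaling factor chosen to close the gap between model and test thrust. The numbers below are illustrative, chosen only to reproduce the stated 14.5% initial difference; they are not the paper's data, and the real method calibrates compressor maps rather than a single scalar.

```python
def percent_diff(model, test):
    """Absolute model-vs-test difference as a percentage of the test value."""
    return abs(model - test) / test * 100.0

# hypothetical thrust values chosen to reproduce a 14.5 % initial gap
test_thrust = 12000.0
model_thrust = 10260.0
factor = test_thrust / model_thrust     # overall adaptation factor (step 1)
adapted_thrust = model_thrust * factor
gap_before = percent_diff(model_thrust, test_thrust)
gap_after = percent_diff(adapted_thrust, test_thrust)
```

In the paper the residual gap after adaptation is 3.3% rather than zero, because the adaptation acts through map calibration, not a direct rescaling at the flight point.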
Directory of Open Access Journals (Sweden)
Heleen L. P. Mees
2014-06-01
Policy instruments can help put climate adaptation plans into action. Here, we propose a method for the systematic assessment and selection of policy instruments for stimulating adaptation action. The multi-disciplinary set of six assessment criteria is derived from economics, policy and legal studies. These criteria are specified for the purpose of climate adaptation by taking into account four challenges to the governance of climate adaptation: uncertainty, spatial diversity, controversy, and social complexity. The six criteria and four challenges are integrated into a step-wise method that enables the selection of instruments, starting from a generic assessment and ending with a specific assessment of policy-instrument mixes for stimulating a specific adaptation measure. We then apply the method to three examples of adaptation measures. The method's merits lie in enabling deliberate choices through a holistic and comprehensive set of adaptation-specific criteria, as well as deliberative choices by offering a stepwise method that structures an informed dialogue on instrument selection. Although the method was created and applied by scientific experts, policy-makers can also use it.
On Round-off Error for Adaptive Finite Element Methods
Alvarez-Aramberri, J.
2012-06-02
Round-off error has historically been studied by analyzing the condition number of the associated matrix. By controlling the size of the condition number, it is possible to guarantee a prescribed round-off error tolerance. The converse is not true, however, since a system of linear equations with an arbitrarily large condition number can still deliver a small round-off error. In this paper, we perform a round-off error analysis in the context of 1D and 2D hp-adaptive Finite Element simulations of the Poisson equation. We conclude that boundary conditions play a fundamental role in the round-off error analysis, especially for the so-called ‘radical meshes’. Moreover, we illustrate the importance of the right-hand side when analyzing the round-off error, which is independent of the condition number of the matrix.
On Round-off Error for Adaptive Finite Element Methods
Alvarez-Aramberri, J.; Pardo, David; Paszynski, Maciej; Collier, Nathan; Dalcin, Lisandro; Calo, Victor M.
2012-01-01
Round-off error has historically been studied by analyzing the condition number of the associated matrix. By controlling the size of the condition number, it is possible to guarantee a prescribed round-off error tolerance. The converse is not true, however, since a system of linear equations with an arbitrarily large condition number can still deliver a small round-off error. In this paper, we perform a round-off error analysis in the context of 1D and 2D hp-adaptive Finite Element simulations of the Poisson equation. We conclude that boundary conditions play a fundamental role in the round-off error analysis, especially for the so-called ‘radical meshes’. Moreover, we illustrate the importance of the right-hand side when analyzing the round-off error, which is independent of the condition number of the matrix.
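The caveat that a huge condition number need not imply a large round-off error is easy to see with a diagonal system, where each unknown costs one floating-point division regardless of how the diagonal entries are scaled. A minimal sketch:

```python
def solve_diagonal(diag, rhs):
    """Solve D x = b for diagonal D: one floating-point division per unknown."""
    return [b / d for d, b in zip(diag, rhs)]

# an arbitrarily ill-conditioned system that is still solved accurately
diag = [1.0, 1e-16]              # cond(D) = max|d| / min|d| = 1e16
rhs = [2.0, 3e-16]
x = solve_diagonal(diag, rhs)
cond = max(map(abs, diag)) / min(map(abs, diag))
```

Despite a condition number of 1e16, both components of the solution are accurate to machine precision, because the scaling of each equation matches the scaling of its right-hand side.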
Energy Technology Data Exchange (ETDEWEB)
Delattre, P [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1961-07-01
We propose to examine all the methods by which fast neutron spectra can be determined from the responses of threshold detectors (activation detectors or fission chambers). Most of these methods have already been proposed, and often used, by various authors, a list of whom will be found in the bibliography. The aim of the present report is thus not to present original work, but rather to gather into a single document, and to present in a consistent form, a whole series of methods that have already been described in articles scattered throughout the specialised literature. Up to the present, each author has in general studied one or two methods, and no comparative study of all the possible methods seems to have been made. The most comprehensive study on this topic is that of P.M. UTHE, from whose article much has been borrowed. We have tried here to develop a useful tool to facilitate the systematic experimental study leading to recognition of the respective merits of the methods proposed. (author)
Adaptation of chemical methods of analysis to the matrix of pyrite-acidified mining lakes
International Nuclear Information System (INIS)
Herzsprung, P.; Friese, K.
2000-01-01
Owing to the unusual matrix of pyrite-acidified mining lakes, the analysis of chemical parameters may be difficult. A number of methodological improvements have been developed so far, and a comprehensive validation of methods is envisaged. The adaptation of the available methods to small-volume samples of sediment pore waters, and the adaptation of sensitivity to the expected concentration ranges, are important elements of the methods applied in analyses of biogeochemical processes in mining lakes.
Finite element method for solving Kohn-Sham equations based on self-adaptive tetrahedral mesh
International Nuclear Information System (INIS)
Zhang Dier; Shen Lihua; Zhou Aihui; Gong Xingao
2008-01-01
A finite element (FE) method with a self-adaptive mesh-refinement technique is developed for solving the density-functional Kohn-Sham equations. The FE method adopts local piecewise-polynomial basis functions, which produce sparse, structured Hamiltonian matrices. The method is well suited to parallel implementation without using Fourier transforms. In addition, the self-adaptive mesh-refinement technique controls the computational accuracy and efficiency by providing optimal mesh density in different regions.
Directory of Open Access Journals (Sweden)
Y. Zhou
2018-05-01
Accurate water extraction from remote sensing imagery is one of the primary tasks in watershed ecological environment studies. Because the Yanhe water system is characterized by a small water volume and narrow river channels, conventional water extraction methods such as the Normalized Difference Water Index (NDWI) perform poorly there. A new Multi-Spectral Threshold segmentation of the NDWI (MST-NDWI) water extraction method is proposed to achieve accurate water extraction in the Yanhe watershed. In the MST-NDWI method, the spectral characteristics of water bodies and typical backgrounds in Landsat/TM images of the Yanhe watershed were evaluated. Multi-spectral thresholds (TM1, TM4, TM5) based on maximum likelihood were applied before NDWI water extraction to separate built-up land and small linear rivers. With the proposed method, a water map was extracted from 2010 Landsat/TM images of China. An accuracy assessment compared the proposed method with conventional water indexes such as the NDWI, Modified NDWI (MNDWI), Enhanced Water Index (EWI), and Automated Water Extraction Index (AWEI). The results show that the MST-NDWI method achieves better water extraction accuracy in the Yanhe watershed and effectively suppresses confusing background objects compared with the conventional water indexes. By integrating NDWI with multi-spectral threshold segmentation, the MST-NDWI method yields richer information and markedly better accuracy for water extraction in the Yanhe watershed.
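A per-pixel sketch of an MST-NDWI-style rule: band thresholds screen out confusing backgrounds before the NDWI test is applied. All cut-off values and reflectances here are hypothetical, not the maximum-likelihood thresholds derived in the paper, and the band roles (TM1 for bright built-up surfaces, TM5 for dry land) are only indicative.

```python
def ndwi(green, nir):
    """McFeeters NDWI = (Green - NIR) / (Green + NIR); water trends toward +1."""
    return (green - nir) / (green + nir)

def is_water(green, nir, tm1, tm5,
             ndwi_cut=0.0, tm1_cut=0.25, tm5_cut=0.10):
    """MST-NDWI-style rule: band thresholds reject bright built-up (TM1)
    and dry (TM5) pixels before the NDWI test is applied."""
    return tm1 < tm1_cut and tm5 < tm5_cut and ndwi(green, nir) > ndwi_cut

# synthetic reflectances: a narrow-river pixel versus a built-up pixel
river = is_water(green=0.12, nir=0.05, tm1=0.10, tm5=0.04)
built = is_water(green=0.18, nir=0.22, tm1=0.30, tm5=0.25)
```

The pre-screening is what lets the method keep small linear rivers while rejecting built-up pixels that a plain NDWI threshold can confuse with water.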
International Nuclear Information System (INIS)
Fukushi, Shoji; Teraoka, Satomi.
1997-01-01
A new method has been designed that calculates end-diastolic volume (EDV), end-systolic volume (ESV) and ejection fraction (LVEF) of the left ventricle from myocardial short-axis images of ECG-gated SPECT using a 99mTc myocardial perfusion tracer. ECG-gated 180-degree SPECT was performed at eight frames per cardiac cycle. A threshold method was used to detect myocardial borders automatically; the optimal threshold, determined with a myocardial SPECT phantom, was 45%. To determine whether EDV, ESV and LVEF can be calculated by this method, results in 12 patients were correlated with left ventriculography (LVG) performed within 10 days. The correlation coefficient with LVG was 0.918 (EDV), 0.935 (ESV) and 0.900 (LVEF). The method offers excellent objectivity and reproducibility because the myocardial borders are detected automatically. It also provides useful information on heart function in addition to myocardial perfusion. (author)
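Once the threshold method has segmented the cavity in each gated frame, EDV, ESV and LVEF follow directly: EDV and ESV are the largest and smallest frame volumes, and LVEF = (EDV − ESV)/EDV. A sketch with synthetic eight-frame volumes (illustrative values, not patient data):

```python
def ventricular_volumes(frame_volumes_ml):
    """EDV, ESV and LVEF from per-frame left-ventricular cavity volumes
    over one gated cardiac cycle."""
    edv = max(frame_volumes_ml)
    esv = min(frame_volumes_ml)
    lvef = (edv - esv) / edv * 100.0
    return edv, esv, lvef

# synthetic 8-frame volumes (mL) from a threshold-segmented short-axis stack
volumes = [120.0, 110.0, 80.0, 60.0, 55.0, 70.0, 95.0, 115.0]
edv, esv, lvef = ventricular_volumes(volumes)
```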
International Nuclear Information System (INIS)
Del C Valdes Hernandez, Maria; Ferguson, Karen J.; Chappell, Francesca M.; Wardlaw, Joanna M.
2010-01-01
Brain tissue segmentation by conventional threshold-based techniques may have limited accuracy and repeatability in older subjects. We present a new multispectral magnetic resonance (MR) image analysis approach for segmenting normal and abnormal brain tissue, including white matter lesions (WMLs). We modulated two 1.5T MR sequences in the red/green colour space and calculated the tissue volumes using minimum variance quantisation. We tested it on 14 subjects, mean age 73.3 ± 10 years, representing the full range of WMLs and atrophy. We compared the results of WML segmentation with those using FLAIR-derived thresholds, examined the effect of sampling location, WML amount and field inhomogeneities, and tested observer reliability and accuracy. FLAIR-derived thresholds were significantly affected by the location used to derive the threshold (P = 0.0004) and by WML volume (P = 0.0003), and had higher intra-rater variability than the multispectral technique (mean difference ± SD: 759 ± 733 versus 69 ± 326 voxels respectively). The multispectral technique misclassified 16 times fewer WMLs. Initial testing suggests that the multispectral technique is highly reproducible and accurate with the potential to be applied to routinely collected clinical MRI data. (orig.)
An adaptive EFG-FE coupling method for elasto-plastic contact of rough surfaces
International Nuclear Information System (INIS)
Liu Lan; Liu Geng; Tong Ruiting; Jin Saiying
2010-01-01
Unlike the Finite Element Method, meshless methods need no mesh information and can arrange nodes freely, which makes them well suited to adaptive analysis. In order to simulate contact conditions realistically and improve computational efficiency, an adaptive procedure for an Element-free Galerkin-Finite Element (EFG-FE) coupled contact model is established and developed to investigate the elasto-plastic contact performance of engineering rough surfaces. A local adaptive refinement strategy combined with a strain-energy-gradient-based error estimation model is employed. The schemes, including the underlying principles, the numerical analysis and the programming realization, are introduced and discussed. Furthermore, the parameters of the adaptive convergence criterion are examined in detail, including the adaptation-stop criterion and the refinement or coarsening criterion, both guided by the relative error in total strain energy between two adjacent stages. Building on earlier work on the EFG-FE coupling method for contact problems, an adaptive EFG-FE model for asperity contact is studied. Compared with solutions obtained from a uniformly refined model, the adaptation results indicate that the adaptive method presented in this paper solves asperity contact problems with excellent accuracy and computational efficiency.
A Dynamic and Adaptive Selection Radar Tracking Method Based on Information Entropy
Directory of Open Access Journals (Sweden)
Ge Jianjun
2017-12-01
Nowadays, the battlefield environment has become much more complex and variable. This paper presents a quantitative measure, and a lower bound, for the amount of target information acquired from multiple radar observations, based on the principle of information entropy, so that battlefield detection resources can be organized adaptively and dynamically. Furthermore, to minimize the lower bound of the information entropy of the target measurement at each moment, a method is proposed that dynamically and adaptively selects the radars providing the largest amount of information for target tracking. Simulation results indicate that the proposed method achieves higher tracking accuracy than tracking without entropy-based adaptive radar selection.
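The selection principle can be illustrated with a deliberately simplified scalar case: if the target state and radar measurements are Gaussian, the posterior entropy after a measurement is a monotone function of the posterior variance, so picking the radar that minimizes posterior entropy reduces to a one-line Kalman-style variance update. This is a sketch of the idea only, not the paper's multi-radar formulation:

```python
import math

def gaussian_entropy(var):
    # differential entropy of a 1-D Gaussian: 0.5 * ln(2*pi*e*var)
    return 0.5 * math.log(2 * math.pi * math.e * var)

def select_radar(prior_var, radar_vars):
    """Pick the radar whose measurement minimizes the posterior
    entropy of a scalar Gaussian target state (illustrative only)."""
    post = [prior_var * r / (prior_var + r) for r in radar_vars]
    best = min(range(len(post)), key=lambda i: gaussian_entropy(post[i]))
    return best, post[best]
```

The radar with the smallest measurement variance always wins in this scalar setting; the value of the entropy criterion appears when geometry and multiple state dimensions make the comparison non-trivial.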
Adaptive ACMS: A robust localized Approximated Component Mode Synthesis Method
Madureira, Alexandre L.; Sarkis, Marcus
2017-01-01
We consider finite element methods of multiscale type to approximate solutions of two-dimensional symmetric elliptic partial differential equations with heterogeneous $L^\infty$ coefficients. The methods are of Galerkin type and follow the Variational Multiscale and Localized Orthogonal Decomposition (LOD) approaches in the sense that they decouple the approximation space into multiscale and fine subspaces. In a first method, the multiscale basis functions are obtained by mapping coarse basis functions, based...
Model Threshold untuk Pembelajaran Memproduksi Pantun Kelas XI
Directory of Open Access Journals (Sweden)
Fitri Nura Murti
2017-03-01
Abstract: The pantun teaching methods used in schools have given students little opportunity to develop creativity in producing pantun. This was confirmed by observations of eleventh graders at SMAN 2 Bondowoso, which showed that students tend to plagiarize their pantun. The general objective of this research and development study is to develop the Threshold Pantun model for teaching eleventh graders to produce pantun. The product is presented as a guidance book for teachers entitled "Pembelajaran Memproduksi Pantun Menggunakan Model Threshold Pantun untuk Kelas XI". The study adapted the research and development procedure of Borg and Gall. The results showed that the Threshold Pantun model is appropriate for teaching the production of pantun. Key Words: Threshold Pantun model, producing pantun. Abstrak: Pantun learning in schools has so far done little to develop students' creativity in producing pantun. This is supported by observations of eleventh-grade students at SMAN 2 Bondowoso, which show a tendency toward plagiarism in student work. The general aim of this research and development is to develop the Threshold Pantun model for teaching eleventh graders to produce pantun. The product is presented as a guidance book for teachers entitled "Pembelajaran Memproduksi Pantun Menggunakan Model Threshold Pantun untuk Kelas XI". The study used a design adapted from the research and development procedure of Borg and Gall. Validation showed the Threshold Pantun model to be suitable for implementation in teaching the production of pantun. Kata kunci: model Threshold Pantun, memproduksi pantun
Mixed Methods in Intervention Research: Theory to Adaptation
Nastasi, Bonnie K.; Hitchcock, John; Sarkar, Sreeroopa; Burkholder, Gary; Varjas, Kristen; Jayasena, Asoka
2007-01-01
The purpose of this article is to demonstrate the application of mixed methods research designs to multiyear programmatic research and development projects whose goals include integration of cultural specificity when generating or translating evidence-based practices. The authors propose a set of five mixed methods designs related to different…
Comparison of parameter-adapted segmentation methods for fluorescence micrographs.
Held, Christian; Palmisano, Ralf; Häberle, Lothar; Hensel, Michael; Wittenberg, Thomas
2011-11-01
Interpreting images from fluorescence microscopy is often a time-consuming task with poor reproducibility. Image processing routines that can help investigators evaluate the images are therefore useful. The critical aspect of a reliable automatic image analysis system is a robust segmentation algorithm that can perform accurate segmentation for different cell types. In this study, several image segmentation methods were compared and evaluated in order to identify the segmentation schemes that are usable with little new parameterization and that work robustly with different types of fluorescence-stained cells across various biological and biomedical tasks. The study investigated, compared, and enhanced four different methods for segmentation of cultured epithelial cells: the maximum-intensity linking (MIL) method, an improved MIL, a watershed method, and an improved watershed method based on morphological reconstruction. Three manually annotated datasets consisting of 261, 817, and 1,333 HeLa or L929 cells were used to compare the algorithms. The comparisons and evaluations showed that the segmentation performance of the watershed-transform-based methods was significantly superior to that of the MIL method. The results also indicate that morphological opening by reconstruction can improve the segmentation of cells stained with a marker that produces a dotted pattern on the cell surface. Copyright © 2011 International Society for Advancement of Cytometry.
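The morphological opening by reconstruction mentioned above can be sketched with SciPy's grayscale operators; this is a generic textbook implementation (erode to get a marker, then geodesically dilate under the original image until stable), not the authors' code:

```python
import numpy as np
from scipy import ndimage

def opening_by_reconstruction(img, size=3):
    """Grayscale opening by reconstruction: removes bright features
    smaller than `size` while preserving the exact shape of larger
    structures (unlike a plain opening, which rounds them off)."""
    marker = ndimage.grey_erosion(img, size=(size, size))
    while True:
        # geodesic dilation: dilate the marker, clamp under the image
        dilated = np.minimum(ndimage.grey_dilation(marker, size=(3, 3)), img)
        if np.array_equal(dilated, marker):
            return marker
        marker = dilated
```

On a dotted-staining image this suppresses isolated bright dots while keeping whole-cell blobs intact.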
An adaptation of Krylov subspace methods to path following
Energy Technology Data Exchange (ETDEWEB)
Walker, H.F. [Utah State Univ., Logan, UT (United States)
1996-12-31
Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
Hsu, Kuo-Hsien
2012-11-01
Formosat-2 imagery is high-spatial-resolution (2-meter GSD) remote sensing satellite data comprising one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistics of each image using an Automatic Cloud Coverage Assessment (ACCA) algorithm; the cloud statistics are then recorded as important metadata in the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. In the pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, re-examination of non-cloudy pixels, and a cross-band filter method are applied in sequence to determine the cloud statistics. In the post-processing analysis, a box-counting fractal method is applied. In other words, the cloud statistics are first determined via the pre-processing analysis, and their correctness across the spectral bands is then cross-examined qualitatively and quantitatively via the post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, we first conducted a series of experiments on clustering-based and spatial thresholding methods, including Otsu's method and the Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The results show that Otsu's and the GE methods both perform better than the others on Formosat-2 images. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, successfully extracted the cloudy pixels of Formosat-2 images for accurate cloud statistics estimation.
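Since Otsu's method is singled out as the best-performing thresholding choice, here is a from-scratch sketch of it: choose the gray level that maximizes the between-class variance of the histogram. This is the standard textbook formulation, not the paper's implementation:

```python
import numpy as np

def otsu_threshold(gray, nbins=256):
    """Otsu's method: pick the threshold maximizing between-class
    variance sigma_b^2(k) = (mg*w0 - m)^2 / (w0*(1 - w0))."""
    hist, edges = np.histogram(gray, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)              # probability of class 0 up to bin k
    m = np.cumsum(p * centers)     # cumulative mean up to bin k
    mg = m[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mg * w0 - m) ** 2 / (w0 * (1 - w0))
    sigma_b = np.nan_to_num(sigma_b)   # empty classes contribute nothing
    return centers[np.argmax(sigma_b)]
```

For a bimodal image the returned threshold falls between the two modes, separating cloud from background.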
Adapting Western research methods to indigenous ways of knowing.
Simonds, Vanessa W; Christopher, Suzanne
2013-12-01
Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid.
Adaptive Coarse Spaces for FETI-DP and BDDC Methods
Radtke, Patrick
2015-01-01
Iterative substructuring methods are well suited for the parallel iterative solution of elliptic partial differential equations. These methods are based on subdividing the computational domain into smaller nonoverlapping subdomains and solving smaller problems on these subdomains. The solutions are then joined to a global solution in an iterative process. In case of a scalar diffusion equation or the equations of linear elasticity with a diffusion coefficient or Young modulus, respectively, ...
The adaptation of methods in multilayer optics for the calculation of specular neutron reflection
International Nuclear Information System (INIS)
Penfold, J.
1988-10-01
The adaptation of standard methods in multilayer optics to the calculation of specular neutron reflection is described. Their application is illustrated with examples which include a glass optical flat and a deuterated Langmuir-Blodgett film. (author)
NEURAL NETWORKS CONTROL OF THE HYBRID POWER UNIT BASED ON THE METHOD OF ADAPTIVE CRITICS
Directory of Open Access Journals (Sweden)
S. Serikov
2012-01-01
A formal statement of the optimization problem of hybrid vehicle power unit control is given, and its solution by a neural network method based on adaptive critics is considered.
Robust and Adaptive Block Tracking Method Based on Particle Filter
Directory of Open Access Journals (Sweden)
Bin Sun
2015-10-01
In the field of video analysis and processing, object tracking is attracting more and more attention, especially in traffic management and digital surveillance. However, problems such as abrupt object motion, occlusion and complex target structures create difficulties for academic study and engineering application. In this paper, a fragments-based tracking method using a block relationship coefficient is proposed. The method uses a particle filter algorithm, and the object region is initially divided into blocks. The contribution of this method is that object features are not extracted from a single block alone; the relationships between the current block and its neighboring blocks are also extracted to describe the variation of the block. Each block is weighted according to its block relationship coefficient when it votes for the best-matched region in the next frame. This approach makes full use of the relationships between blocks. The experimental results demonstrate that our method performs well under occlusion and abrupt posture variation.
A two-dimensional adaptive numerical grids generation method and its realization
International Nuclear Information System (INIS)
Xu Tao; Shui Hongshou
1998-12-01
A two-dimensional adaptive numerical grid generation method and a particular realization of it are discussed. The method is effective and easy to realize when the control functions are given continuously, and grids for several regions are shown for this case. In Computational Fluid Dynamics, because the control values of the adaptive grids (the numerical solution) are given in discrete form, these values must be interpolated to obtain continuous control functions. These interpolation techniques are discussed, and some efficient adaptive grids are given. A two-dimensional fluid dynamics example is also given.
International Nuclear Information System (INIS)
Budrick, R.G.; Nolen, R.L. Jr.; Solomon, D.E.; King, F.T.
1975-01-01
The invention relates to the manufacture of glass microspheres. It refers to a method in which a sintered glass powder with calibrated particles is introduced into a blow-pipe adapted to project the glass powder particles into a heated flue, the sintered glass powder containing a pore-forming agent adapted to expand the glass particles into microspheres, which are collected in a chamber situated above the flue. The method can be applied to the manufacture of microspheres adapted to contain a thermonuclear fuel. [fr]
Adaptive cluster sampling: An efficient method for assessing inconspicuous species
Andrea M. Silletti; Joan Walker
2003-01-01
Restorationists typically evaluate the success of a project by estimating the population sizes of species that have been planted or seeded. Because a total census is rarely feasible, they must rely on sampling methods for population estimates. However, traditional random sampling designs may be inefficient for species that, for one reason or another, are challenging to...
Machado, Fabiana Andrade; Nakamura, Fábio Yuzo; Moraes, Solange Marta Franzói De
2012-01-01
This study examined the influence of the regression model and the initial intensity of an incremental test on the relationship between the lactate threshold estimated by the maximal-deviation method and endurance performance. Sixteen non-competitive, recreational female runners performed a discontinuous incremental treadmill test. The initial speed was set at 7 km · h⁻¹ and increased every 3 min by 1 km · h⁻¹, with a 30-s rest between stages used for earlobe capillary blood sample collection. Lactate-speed data were fitted by an exponential-plus-constant and a third-order polynomial equation. The lactate threshold was determined for both regression equations using all the coordinates, excluding the first point, and excluding the first and second points. The mean speed of a 10-km road race was the performance index (3.04 ± 0.22 m · s⁻¹). The exponentially derived lactate threshold had a higher correlation (0.98 ≤ r ≤ 0.99) and a smaller standard error of estimate (SEE) (0.04 ≤ SEE ≤ 0.05 m · s⁻¹) with performance than the polynomially derived equivalent (0.83 ≤ r ≤ 0.89; 0.10 ≤ SEE ≤ 0.13 m · s⁻¹). The exponential lactate threshold was greater than the polynomial equivalent (P < 0.05) and provides a performance index that is independent of the initial intensity of the incremental test and better than the polynomial equivalent.
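The maximal-deviation ("Dmax") idea, in which the threshold is the point on the fitted lactate curve farthest from the chord joining the first and last data points, can be sketched with the third-order polynomial fit. (The exponential-plus-constant fit would need a nonlinear optimizer; everything below, including the synthetic data in the test, is illustrative.)

```python
import numpy as np

def dmax_threshold(speed, lactate):
    """Fit lactate vs. speed with a 3rd-order polynomial, then return
    the speed at which the fitted curve deviates most (perpendicular
    distance) from the line joining the first and last data points."""
    coef = np.polyfit(speed, lactate, 3)
    s = np.linspace(speed[0], speed[-1], 1000)
    lac = np.polyval(coef, s)
    dx, dy = s[-1] - s[0], lac[-1] - lac[0]
    # perpendicular distance of each curve point from the endpoint chord
    dist = np.abs(dx * (lac - lac[0]) - dy * (s - s[0])) / np.hypot(dx, dy)
    return s[np.argmax(dist)]
```

Because the distance vanishes at both endpoints, the returned threshold always lies strictly inside the tested speed range.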
Solving delay differential equations in S-ADAPT by method of steps.
Bauer, Robert J; Mo, Gary; Krzyzanski, Wojciech
2013-09-01
S-ADAPT is a version of the ADAPT program that contains additional simulation and optimization abilities such as parametric population analysis. S-ADAPT uses LSODA, an algorithm designed for large-dimension non-stiff and stiff problems, to solve ordinary differential equations (ODEs). However, S-ADAPT does not have a solver for delay differential equations (DDEs). Our objective was to implement a DDE solver in S-ADAPT using the method of steps, which allows one to solve virtually any DDE system by transforming it into an ODE system. The solver was validated for scalar linear DDEs with one delay and bolus and infusion inputs, for which explicit analytic solutions were derived. Solutions of nonlinear DDE problems coded in S-ADAPT were validated by comparing them with those obtained by the MATLAB DDE solver dde23. Parameter estimation was tested on population pharmacodynamics data simulated in MATLAB. The S-ADAPT solutions of the DDE problems agreed with the explicit solutions and with the MATLAB solutions to at least 7 significant digits. The population parameter estimates obtained using importance sampling expectation-maximization in S-ADAPT agreed with those used to generate the data. Published by Elsevier Ireland Ltd.
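The method of steps works because on each interval [k·τ, (k+1)·τ] the delayed argument falls where the solution is already known, so the DDE reduces to an ordinary ODE. A toy sketch for a scalar equation with a single delay (forward Euler for simplicity, far cruder than the S-ADAPT/LSODA implementation):

```python
import numpy as np

def solve_dde(f, history, tau, t_end, dt=1e-3):
    """Method of steps for y'(t) = f(y(t), y(t - tau)) with a given
    history function on [-tau, 0]. The delayed value is read off the
    already-computed grid, so each step is an ordinary Euler step."""
    n_hist = int(round(tau / dt))
    ts = np.arange(-tau, t_end + dt / 2, dt)
    ys = np.empty_like(ts)
    ys[:n_hist + 1] = [history(t) for t in ts[:n_hist + 1]]  # prehistory
    for i in range(n_hist, len(ts) - 1):
        ys[i + 1] = ys[i] + dt * f(ys[i], ys[i - n_hist])    # y(t - tau)
    return ts, ys
```

For y'(t) = -y(t-1) with history y ≡ 1, the exact solution on [0, 1] is y(t) = 1 - t, which the sketch reproduces.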
Jin, Yulin; Lu, Kuan; Hou, Lei; Chen, Yushu
2017-12-01
The proper orthogonal decomposition (POD) method is a principal and efficient tool for order reduction of high-dimensional complex systems in many research fields. However, the robustness problem of this method remains unsolved, despite modified POD methods that have been proposed to address it. In this paper, a new adaptive POD method, the interpolation Grassmann manifold (IGM) method, is proposed to address the locality weakness of the interpolation tangent-space of Grassmann manifold (ITGM) method over a wider parametric region. The method is demonstrated on a nonlinear rotor system of 33 degrees of freedom (DOFs) with a pair of liquid-film bearings and a pedestal looseness fault. The motion region of the rotor system is divided into a simple motion region and a complex motion region. The adaptive POD method is compared with the ITGM method for large and small parameter spans in the two regions to present the advantage of the proposed method and the disadvantage of the ITGM method. Comparisons of the responses verify the accuracy and robustness of the adaptive POD method, and the computational efficiency is also analyzed. As a result, the new adaptive POD method shows strong robustness and high computational efficiency and accuracy over a wide parameter range.
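As background, the core POD step shared by IGM and ITGM alike is an SVD of the snapshot matrix: the leading left singular vectors are the POD modes, ordered by energy. A minimal sketch with synthetic rank-2 snapshots (the rotor-dynamics application itself is not reproduced here):

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD via SVD: columns of U are the modes, singular values give
    the energy ranking; keep the r most energetic modes."""
    U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], S

# synthetic rank-2 snapshot matrix (rows: "space", columns: "time")
t = np.linspace(0.0, 1.0, 50)
X = (np.outer(np.sin(2 * np.pi * t), np.ones(20))
     + np.outer(np.cos(2 * np.pi * t), np.linspace(0.0, 1.0, 20)))
Phi, S = pod_basis(X, 2)
X_red = Phi @ (Phi.T @ X)   # Galerkin projection onto the 2 modes
```

Because the data is exactly rank 2, two modes reconstruct it to machine precision; on real trajectories the tail singular values measure the truncation error.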
A high-throughput multiplex method adapted for GMO detection.
Chaouachi, Maher; Chupeau, Gaëlle; Berard, Aurélie; McKhann, Heather; Romaniuk, Marcel; Giancola, Sandra; Laval, Valérie; Bertheau, Yves; Brunel, Dominique
2008-12-24
A high-throughput multiplex assay for the detection of genetically modified organisms (GMO) was developed on the basis of the existing SNPlex method designed for SNP genotyping. This SNPlex assay allows the simultaneous detection of up to 48 short DNA sequences (approximately 70 bp; "signature sequences") from taxa endogenous reference genes, from GMO constructions, screening targets, construct-specific, and event-specific targets, and finally from donor organisms. This assay avoids certain shortcomings of multiplex PCR-based methods already in widespread use for GMO detection. The assay demonstrated high specificity and sensitivity. The results suggest that this assay is reliable, flexible, and cost- and time-effective for high-throughput GMO detection.
The Pilates method and cardiorespiratory adaptation to training.
Tinoco-Fernández, Maria; Jiménez-Martín, Miguel; Sánchez-Caravaca, M Angeles; Fernández-Pérez, Antonio M; Ramírez-Rodrigo, Jesús; Villaverde-Gutiérrez, Carmen
2016-01-01
Although all authors report beneficial health changes following training based on the Pilates method, no explicit analysis has been performed of its cardiorespiratory effects. The objective of this study was to evaluate possible changes in cardiorespiratory parameters with the Pilates method. A total of 45 university students aged 18-35 years (77.8% female and 22.2% male), who did not routinely practice physical exercise or sports, volunteered for the study and signed informed consent. The Pilates training was conducted over 10 weeks, with three 1-hour sessions per week. Physiological cardiorespiratory responses were assessed using a MasterScreen CPX apparatus. After the 10-week training, statistically significant improvements were observed in mean heart rate (135.4-124.2 beats/min), respiratory exchange ratio (1.1-0.9) and oxygen equivalent (30.7-27.6) values, among other spirometric parameters, in submaximal aerobic testing. These findings indicate that practice of the Pilates method has a positive influence on cardiorespiratory parameters in healthy adults who do not routinely practice physical exercise activities.
Minnis, Patrick; Harrison, Edwin F.; Gibson, Gary G.
1987-01-01
A set of visible and IR data obtained with GOES from July 17-31, 1983 is analyzed using a modified version of the hybrid bispectral threshold method developed by Minnis and Harrison (1984). This methodology can be divided into a set of procedures, or optional techniques, to determine the proper uncontaminated clear-sky temperature or IR threshold. The various optional techniques are described; the options are: standard, low-temperature limit, high-reflectance limit, low-reflectance limit, coldest pixel and thermal adjustment limit, IR-only low-cloud temperature limit, IR clear-sky limit, and IR overcast limit. Variations in the cloud parameters and the characteristics and diurnal cycles of trade cumulus and stratocumulus clouds over the eastern equatorial Pacific are examined. The new method produces substantial changes in about one third of the cloud amount retrievals, and low-cloud retrievals are affected most by the new constraints.
The adaptive problems of female teenage refugees and their behavioral adjustment methods for coping
Directory of Open Access Journals (Sweden)
Mhaidat F
2016-04-01
Fatin Mhaidat Department of Educational Psychology, Faculty of Educational Sciences, The Hashemite University, Zarqa, Jordan Abstract: This study aimed to identify the levels of adaptive problems among teenage female refugees in government schools and explored the behavioral methods used to cope with these problems. The sample was composed of 220 Syrian female students (seventh to first secondary grades) enrolled at government schools within the Zarqa Directorate who came to Jordan because of the war in their home country. The study used a scale of adaptive problems consisting of four dimensions (depression, anger and hostility, low self-esteem, and feeling insecure) and a questionnaire on behavioral adjustment methods for dealing with the problems of asylum. The results indicated that the Syrian teenage female refugees suffer a moderate degree of adaptation problems, and that they used positive adjustment methods more often than negative ones. Keywords: adaptive problems, female teenage refugees, behavioral adjustment
Near threshold fatigue testing
Freeman, D. C.; Strum, M. J.
1993-01-01
Measurement of near-threshold fatigue crack growth rate (FCGR) behavior provides a basis for the design and evaluation of components subjected to high-cycle fatigue. Typically, the near-threshold fatigue regime describes crack growth rates below approximately 10^-5 mm/cycle (4 x 10^-7 inch/cycle). One such evaluation was recently performed for the binary alloy U-6Nb. The procedures developed for this evaluation are described in detail to provide a general test method for near-threshold FCGR testing. In particular, we describe techniques for high-resolution crack length measurement performed in situ with a direct-current potential drop (DCPD) apparatus, and a method that eliminates crack closure effects by using loading cycles with constant maximum stress intensity.
Directory of Open Access Journals (Sweden)
Banna Hasanul
2016-03-01
This paper assesses farmers' willingness to pay for an efficient climate change adaptation programme for Malaysian agriculture. We used the contingent valuation method to obtain a monetary assessment of farmers' preferences for an adaptation programme, distributing a structured questionnaire to farmers in Selangor, Malaysia. Based on the survey, 74% of respondents are willing to pay for the adaptation programme, with socio-economic and motivational factors exerting the greatest influence on their willingness to pay. However, a significant number of respondents are not willing to pay. The Malaysian government, along with social institutions, banks, NGOs and the media, could develop awareness programmes to motivate financing of the programme. Financial institutions such as banks, insurers and leasing firms, together with the government and farmers, could also contribute a substantial portion to the adaptation programme as part of their corporate social responsibility (CSR).
Denoising imaging polarimetry by adapted BM3D method.
Tibbs, Alexander B; Daly, Ilse M; Roberts, Nicholas W; Bull, David R
2018-04-01
In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method, based on BM3D (Block Matching 3D). This algorithm, Polarization-BM3D (PBM3D), gives visual quality superior to the state of the art across all images and noise standard deviations tested. We show that denoising polarization images using PBM3D allows the degree of polarization to be more accurately calculated by comparing it with spectral polarimetry measurements.
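PBM3D itself is not reproduced here, but the quantity it protects, the degree of linear polarization, is computed from Stokes parameters in the standard way. A sketch assuming intensity images at the four usual polarizer orientations (0°, 45°, 90°, 135°):

```python
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135):
    """Stokes parameters from four polarizer orientations, then
    DoLP = sqrt(S1^2 + S2^2) / S0. Noise in the intensity images
    propagates directly into DoLP, which is why denoising matters."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical
    s2 = i45 - i135                      # diagonal components
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
```

Fully linearly polarized light gives DoLP = 1, unpolarized light gives 0.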
Rapid Estimation of Gustatory Sensitivity Thresholds with SIAM and QUEST
Directory of Open Access Journals (Sweden)
Richard Höchenberger
2017-06-01
Adaptive methods provide quick and reliable estimates of sensory sensitivity. Yet these procedures are typically developed for and applied to the non-chemical senses only, i.e., to vision, audition, and somatosensation. The relatively long inter-stimulus intervals in gustatory studies, which are required to minimize adaptation and habituation, call for time-efficient threshold estimation. We therefore tested the suitability of two adaptive yes-no methods, based on SIAM and QUEST, for rapid estimation of taste sensitivity by comparing test-retest reliability for sucrose, citric acid, sodium chloride, and quinine hydrochloride thresholds. We show that taste thresholds can be obtained in a time-efficient manner with both methods (within only 6.5 min on average using QUEST and ~9.5 min using SIAM). QUEST yielded higher test-retest correlations than SIAM for three of the four tastants. Either method allows taste threshold estimation with little strain on participants, rendering them particularly advantageous for use in subjects with limited attentional or mnemonic capacities, and for time-constrained applications in cohort studies or in the testing of patients and children.
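The yes-no logic of a SIAM-style track can be sketched as a staircase whose step depends on the trial outcome. The adjustment values below (hit -1, miss +1, false alarm +2, correct rejection 0, in step units) are one common choice and are assumed here for illustration, as is the whole interface:

```python
import random

def siam_track(respond, start, step, n_trials, p_signal=0.5, seed=1):
    """Single-interval yes-no staircase sketch in the spirit of SIAM.
    `respond(stim)` is the observer's yes/no answer; catch trials
    pass stim=None. Returns the final concentration level."""
    rng = random.Random(seed)
    level = start
    for _ in range(n_trials):
        signal = rng.random() < p_signal
        says_yes = respond(level if signal else None)
        if signal and says_yes:
            level -= step            # hit: make the task harder
        elif signal and not says_yes:
            level += step            # miss: make it easier
        elif says_yes:
            level += 2 * step        # false alarm: penalize guessing
        # correct rejection: no change
    return level
```

With a deterministic observer the track descends to the true threshold and then oscillates around it, which is the behavior the test below checks.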
Directory of Open Access Journals (Sweden)
Humin Lei
2017-01-01
Full Text Available An adaptive mesh iteration method based on Hermite-Pseudospectral is described for trajectory optimization. The method uses the Legendre-Gauss-Lobatto points as interpolation points; then the state equations are approximated by Hermite interpolating polynomials. The method allows for changes in both number of mesh points and the number of mesh intervals and produces significantly smaller mesh sizes with a higher accuracy tolerance solution. The derived relative error estimate is then used to trade the number of mesh points with the number of mesh intervals. The adaptive mesh iteration method is applied successfully to the examples of trajectory optimization of Maneuverable Reentry Research Vehicle, and the simulation experiment results show that the adaptive mesh iteration method has many advantages.
An adaptive angle-Doppler compensation method for airborne bistatic radar based on PAST
Hang, Xu; Jun, Zhao
2018-05-01
Adaptive angle-Doppler compensation methods extract the requisite information adaptively from the data themselves, thus avoiding the performance degradation caused by inertial system errors. However, such methods require estimation and eigendecomposition of the sample covariance matrix, which has a high computational complexity and limits real-time application. In this paper, an adaptive angle-Doppler compensation method based on projection approximation subspace tracking (PAST) is studied. The method uses cyclic iterative processing to quickly estimate the position of the spectral center of the maximum eigenvector of each range cell, so the computational burden of covariance matrix estimation and eigendecomposition is avoided; the spectral centers of all range cells are then aligned by two-dimensional compensation. Simulation results show that the proposed method can effectively reduce the non-homogeneity of airborne bistatic radar, and its performance is similar to that of eigendecomposition-based algorithms, but with a clearly reduced computational load that makes it easy to implement.
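The core PAST recursion is compact. The rank-one sketch below (tracking a single dominant direction, real-valued data, pure Python) follows the standard projection approximation subspace tracking update; the forgetting factor and the test signal are illustrative choices, not parameters from the paper.

```python
def past_rank1(samples, beta=0.95):
    """Rank-one PAST: track the dominant subspace direction of a stream
    of vectors. Returns the tracked direction w (not normalized)."""
    dim = len(samples[0])
    w = [1.0] + [0.0] * (dim - 1)   # initial guess for the direction
    p = 1.0                          # scalar inverse-correlation term
    for x in samples:
        y = sum(wi * xi for wi, xi in zip(w, x))   # project x onto w
        h = p * y
        g = h / (beta + y * h)                     # gain
        p = (p - g * h) / beta                     # update inverse correlation
        w = [wi + (xi - y * wi) * g for wi, xi in zip(w, x)]
    return w
```

For a noiseless rank-one stream along a fixed direction, w aligns with that direction after a few dozen samples, with no covariance matrix ever formed or eigendecomposed.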
Impedance adaptation methods of the piezoelectric energy harvesting
Kim, Hyeoungwoo
In this study, the important issues of energy recovery were addressed and a comprehensive investigation was performed on harvesting electrical power from an ambient mechanical vibration source. Also discussed are the impedance matching methods used to increase the efficiency of energy transfer from the environment to the application. Initially, the mechanical impedance matching method was investigated to increase the mechanical energy transferred to the transducer from the environment. This was done by reducing the mechanical impedance, such as the damping factor and the energy reflection ratio. The vibration source and the transducer were modeled as a two-degree-of-freedom dynamic system with mass, spring constant, and damper. The transmissibility, employed to show how much mechanical energy was transferred in this system, was affected by the damping ratio and the stiffness of the elastic materials. The mechanical impedance of the system was described by an electrical analogy between the two systems in order to simplify the total mechanical impedance. Secondly, the transduction rate from mechanical to electrical energy was improved by using a PZT material with a high figure of merit and a high electromechanical coupling factor for electrical power generation, and a piezoelectric transducer with a high transduction rate was designed and fabricated. A high-g material (g33 = 40 x 10^-3 Vm/N) was developed to improve the figure of merit of the PZT ceramics. The cymbal composite transducer has been found to be a promising structure for piezoelectric energy harvesting under high force at cyclic conditions (10-200 Hz), because its effective strain coefficient is almost 40 times higher than that of PZT ceramics. The endcap of the cymbal also enhances the endurance of the ceramic to sustain ac loads, along with stress amplification. In addition, a macro fiber composite (MFC) was employed as a strain component because of its flexibility and the high electromechanical coupling
Huang, Qi; Yang, Dapeng; Jiang, Li; Zhang, Huajie; Liu, Hong; Kotani, Kiyoshi
2017-01-01
Performance degradation will be caused by a variety of interfering factors for pattern recognition-based myoelectric control methods in the long term. This paper proposes an adaptive learning method with low computational cost to mitigate this effect in unsupervised adaptive learning scenarios. We present a particle adaptive classifier (PAC), constructed from a particle adaptive learning strategy and a universal incremental least squares support vector classifier (LS-SVC). We compared PAC performance with an incremental support vector classifier (ISVC) and a non-adapting SVC (NSVC) in a long-term pattern recognition task in both unsupervised and supervised adaptive learning scenarios. Retraining time cost and recognition accuracy were compared by validating the classification performance on both simulated and realistic long-term EMG data. The classification results on realistic long-term EMG data showed that the PAC significantly decreased the performance degradation in unsupervised adaptive learning scenarios compared with NSVC (9.03% ± 2.23%, p < 0.05) and ISVC (13.38% ± 2.62%, p = 0.001), and reduced the retraining time cost compared with ISVC (2 ms per updating cycle vs. 50 ms per updating cycle). PMID:28608824
Directory of Open Access Journals (Sweden)
Qi Huang
2017-06-01
Full Text Available Performance degradation will be caused by a variety of interfering factors for pattern recognition-based myoelectric control methods in the long term. This paper proposes an adaptive learning method with low computational cost to mitigate this effect in unsupervised adaptive learning scenarios. We present a particle adaptive classifier (PAC), constructed from a particle adaptive learning strategy and a universal incremental least squares support vector classifier (LS-SVC). We compared PAC performance with an incremental support vector classifier (ISVC) and a non-adapting SVC (NSVC) in a long-term pattern recognition task in both unsupervised and supervised adaptive learning scenarios. Retraining time cost and recognition accuracy were compared by validating the classification performance on both simulated and realistic long-term EMG data. The classification results on realistic long-term EMG data showed that the PAC significantly decreased the performance degradation in unsupervised adaptive learning scenarios compared with NSVC (9.03% ± 2.23%, p < 0.05) and ISVC (13.38% ± 2.62%, p = 0.001), and reduced the retraining time cost compared with ISVC (2 ms per updating cycle vs. 50 ms per updating cycle).
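The incremental least-squares machinery that makes such updating cheap can be illustrated with the classical recursive least squares (RLS) recursion, which refines a linear model one sample at a time instead of refitting from scratch. This is a generic sketch, not the paper's PAC/LS-SVC implementation; the initialization and forgetting factor are illustrative.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def rls_step(w, P, x, y, lam=1.0):
    """One recursive least squares update: a new sample (x, y) refines the
    weights w and inverse-correlation matrix P in O(n^2), with forgetting
    factor lam < 1 discounting old samples."""
    Px = [dot(row, x) for row in P]            # P x  (P is symmetric)
    k = [v / (lam + dot(x, Px)) for v in Px]   # gain vector
    e = y - dot(w, x)                          # prediction error on new sample
    w = [wi + ki * e for wi, ki in zip(w, k)]
    n = len(x)
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    return w, P
```

Streaming exact samples of y = 2*x1 - x2 through the update recovers the weights without ever solving a batch least-squares problem.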
Directory of Open Access Journals (Sweden)
Hussein Abdel-jaber
2015-10-01
Full Text Available Congestion control is one of the hot research topics that helps maintain the performance of computer networks. This paper compares three Active Queue Management (AQM) methods, namely Adaptive Gentle Random Early Detection (Adaptive GRED), Random Early Dynamic Detection (REDD), and a GRED Linear analytical model, with respect to different performance measures. Adaptive GRED and REDD are implemented based on simulation, whereas GRED Linear is implemented as a discrete-time analytical model. Several performance measures are used to evaluate the effectiveness of the compared methods, mainly mean queue length, throughput, average queueing delay, overflow packet loss probability, and packet dropping probability. The ultimate aim is to identify the method that offers the most satisfactory performance in non-congestion or congestion scenarios. The first comparison results, based on different packet arrival probability values, show that GRED Linear provides better mean queue length, average queueing delay, and packet overflow probability than the Adaptive GRED and REDD methods in the presence of congestion. Further, using the same evaluation measures, Adaptive GRED offers a more satisfactory performance than REDD when heavy congestion is present. When the finite queue capacity varies, the GRED Linear model provides the most satisfactory performance with reference to mean queue length and average queueing delay, and all the compared methods provide similar throughput performance. However, when the finite capacity value is large, the compared methods have similar results with regard to the probabilities of both packet overflow and packet dropping.
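The gentle RED dropping profile that GRED-style methods build on is easy to state: the drop probability ramps linearly from 0 to p_max between the two queue thresholds, and then from p_max to 1 between max_th and 2*max_th. A minimal sketch (using the instantaneous average; real RED maintains an exponentially weighted moving average, and the parameter values here are illustrative):

```python
def gred_drop_prob(avg, min_th, max_th, p_max):
    """Gentle RED drop probability as a function of the average queue length."""
    if avg < min_th:
        return 0.0                                         # no drops
    if avg < max_th:
        return p_max * (avg - min_th) / (max_th - min_th)  # 0 -> p_max
    if avg < 2 * max_th:
        return p_max + (1 - p_max) * (avg - max_th) / max_th  # p_max -> 1
    return 1.0                                             # drop everything
```

The "gentle" second ramp is what distinguishes GRED from plain RED, which jumps straight to dropping every packet once the average exceeds max_th.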
Directory of Open Access Journals (Sweden)
DARIUSZ Piwczynski
2013-03-01
Full Text Available The research was carried out on 4,030 Polish Merino ewes born in the years 1991-2001, kept in 15 flocks in the Pomorze and Kujawy region. Fertility of ewes in subsequent reproduction seasons was analysed with the use of multiple logistic regression. The research showed a statistically significant influence of flock, year of birth, age of dam, and the flock x year of birth interaction on ewe fertility. In order to estimate the genetic parameters, the Gibbs sampling method was applied, using univariate animal models, both linear and threshold. Heritability estimates of fertility, depending on the model, equalled 0.067 to 0.104, whereas the repeatability estimates were 0.076 and 0.139, respectively. The obtained genetic parameters were then used to estimate the breeding values of the animals for the controlled trait (Best Linear Unbiased Prediction method) using linear and threshold models. The animal breeding value rankings for the same trait obtained with the linear and threshold models were strongly correlated with each other (rs = 0.972). Negative genetic trends in fertility (0.01-0.08% per year) were found.
The method of adaptation under the parameters of the subject of the information interaction
Directory of Open Access Journals (Sweden)
Инесса Анатольевна Воробьёва
2014-12-01
Full Text Available To ensure the effective tuning (adaptation) of newly created software and hardware to a particular subject, a method was developed for adaptation to the parameters of the subject of information interaction. The method takes the form of a set of operations for building a network of dialog procedures based on the subject's entry-level qualification, assessment of the subject's current skill level, and operational restructuring of the network in accordance with that assessment.
Adaptive variational mode decomposition method for signal processing based on mode characteristic
Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng
2018-07-01
Variational mode decomposition is a completely non-recursive decomposition model in which all the modes are extracted concurrently. However, the model requires a preset mode number, which limits the adaptability of the method, since a large deviation in the preset mode number will cause modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) was proposed to automatically determine the mode number based on the characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and measured signals from a hydropower plant. Comparisons were also conducted to evaluate its performance against VMD, EMD and EWT. The results indicate that the proposed method has strong adaptability and is robust to noise, and that it can determine the mode number appropriately even when the signal frequencies are relatively close.
Adaptive Finite Volume Method for the Shallow Water Equations on Triangular Grids
Directory of Open Access Journals (Sweden)
Sudi Mungkasi
2016-01-01
Full Text Available This paper presents a numerical entropy production (NEP) scheme for the two-dimensional shallow water equations on unstructured triangular grids. We implement NEP as the error indicator for adaptive mesh refinement or coarsening in solving the shallow water equations using a finite volume method. Numerical simulations show that NEP is successful as a refinement/coarsening indicator in the adaptive mesh finite volume method, as the method refines the mesh around nonsmooth regions and coarsens it around smooth regions.
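The refine/coarsen loop driven by an error indicator can be sketched independently of the flow solver. In the sketch below, the per-cell indicator is an arbitrary function standing in for the numerical entropy production, the mesh is a 1-D list of cells rather than a triangulation, and the thresholds are illustrative; it shows only the adaptation logic, not the paper's scheme.

```python
def adapt_mesh(cells, indicator, refine_tol, coarsen_tol):
    """One adaptation pass over a 1-D mesh given as (left, right) cells.
    Cells whose indicator exceeds refine_tol are bisected; adjacent cell
    pairs that are both below coarsen_tol are merged."""
    refined = []
    for a, b in cells:
        if indicator(0.5 * (a + b)) > refine_tol:
            m = 0.5 * (a + b)
            refined += [(a, m), (m, b)]        # bisect a "rough" cell
        else:
            refined.append((a, b))
    out, i = [], 0
    while i < len(refined):                    # coarsening pass
        if (i + 1 < len(refined)
                and indicator(0.5 * sum(refined[i])) < coarsen_tol
                and indicator(0.5 * sum(refined[i + 1])) < coarsen_tol):
            out.append((refined[i][0], refined[i + 1][1]))  # merge smooth pair
            i += 2
        else:
            out.append(refined[i])
            i += 1
    return out
```

With an indicator concentrated near x = 0.5 (mimicking a shock), one pass produces fine cells around the feature and coarse cells elsewhere, while conserving the total domain length.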
Energy Technology Data Exchange (ETDEWEB)
Webster, Clayton G [ORNL; Zhang, Guannan [ORNL; Gunzburger, Max D [ORNL
2012-10-01
Accurate predictive simulations of complex real-world applications require numerical approximations that, first, counter the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting, and thus optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.
Models, methods and software tools for building complex adaptive traffic systems
International Nuclear Information System (INIS)
Alyushin, S.A.
2011-01-01
The paper studies modern methods and tools for simulating the behavior of complex adaptive systems (CAS), together with existing traffic modeling systems in simulators and their characteristics, and proposes requirements for assessing the suitability of a system for simulating CAS behavior in simulators. The author has developed a model of an adaptive agent and of the environment in which it functions to meet the requirements set above, and has presented methods for agents' interactions and for conflict resolution in simulated traffic situations. A simulation system realizing computer modeling of CAS behavior in traffic situations has been created.
The adaptation method in the Monte Carlo simulation for computed tomography
Energy Technology Data Exchange (ETDEWEB)
Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)
2015-06-15
The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons, such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York, NY, USA) and a human-like voxel phantom (KTMAN-2; Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated, and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which proved that the adaptive method is highly effective for simulations that require a large number of iterations. Assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.
International Nuclear Information System (INIS)
Nahavandi, N.; Minuchehr, A.; Zolfaghari, A.; Abbasi, M.
2015-01-01
Highlights: • A powerful hp-SEM refinement approach for the P_N neutron transport equation is presented. • The method provides great geometrical flexibility and lower computational cost. • It is capable of using arbitrarily high orders and non-uniform meshes. • Both a posteriori and a priori local error estimation approaches have been employed. • Highly accurate results are compared against other common adaptive and uniform grids. - Abstract: In this work we present the adaptive hp-SEM approach, which is obtained from the incorporation of the Spectral Element Method (SEM) and adaptive hp refinement. The SEM nodal discretization and hp-adaptive grid refinement for the even-parity Boltzmann neutron transport equation create a powerful grid-refinement approach with highly accurate solutions. In this regard, a computer code has been developed to solve the multi-group neutron transport equation in one-dimensional geometry using even-parity transport theory. The spatial dependence of the flux has been developed via the SEM with Lobatto orthogonal polynomials. Two common error estimation approaches, a posteriori and a priori, have been implemented. The incorporation of the SEM nodal discretization method and adaptive hp grid refinement leads to highly accurate solutions. The efficiency of coarser meshes and the significant reduction in program runtime in comparison with other common refinement methods and uniform meshing approaches are tested on several well-known transport benchmarks.
Investigation of the Adaptability of Transient Stability Assessment Methods to Real-Time Operation
Weckesser, Johannes Tilman Gabriel; Jóhannsson, Hjörtur; Sommer, Stefan; Østergaard, Jacob
2012-01-01
In this paper, an investigation of the adaptability of available transient stability assessment methods to real-time operation, and of their real-time performance, is carried out. Two approaches, based on Lyapunov’s method and the equal area criterion, are analyzed. The results allow determining the runtime of each method with respect to the number of inputs. Furthermore, they allow identifying which method is preferable in case of changes in the power system such as the integration of distributed ...
Wang, Ze; Rohrer, David; Chuang, Chi-ching; Fujiki, Mayo; Herman, Keith; Reinke, Wendy
2015-01-01
This study compared 5 scoring methods in terms of their statistical assumptions. They were then used to score the Teacher Observation of Classroom Adaptation Checklist, a measure consisting of 3 subscales and 21 Likert-type items. The 5 methods used were (a) sum/average scores of items, (b) latent factor scores with continuous indicators, (c)…
Investigation of the Adaptability of Transient Stability Assessment Methods to Real-Time Operation
DEFF Research Database (Denmark)
Weckesser, Johannes Tilman Gabriel; Jóhannsson, Hjörtur; Sommer, Stefan
2012-01-01
In this paper, an investigation of the adaptability of available transient stability assessment methods to real-time operation, and of their real-time performance, is carried out. Two approaches, based on Lyapunov’s method and the equal area criterion, are analyzed. The results allow determining...
Adapting the mode profile of planar waveguides to single-mode fibers: a novel method
Smit, M.K.; Vreede, De A.H.
1991-01-01
A novel method for coupling single-mode fibers to planar optical circuits with small waveguide dimensions is proposed. The method eliminates the need to apply micro-optics or to adapt the waveguide dimensions within the planar circuit to the fiber dimensions. Alignment tolerances are comparable to
Adaptation of the TCLP and SW-846 methods to radioactive mixed waste
International Nuclear Information System (INIS)
Griest, W.H.; Schenley, R.L.; Caton, J.E.; Wolfe, P.F.
1994-01-01
Modifications of conventional sample preparation and analytical methods are necessary to provide radiation protection and to meet sensitivity requirements for regulated constituents when working with radioactive samples. Adaptations of regulatory methods for determining "total" Toxicity Characteristic Leaching Procedure (TCLP) volatile and semivolatile organics and pesticides, and for conducting aqueous leaching, are presented
Solving point reactor kinetic equations by time step-size adaptable numerical methods
International Nuclear Information System (INIS)
Liao Chaqing
2007-01-01
Based on an analysis of the effects of time step-size on numerical solutions, this paper shows the necessity of step-size adaptation. Based on the relationship between error and step-size, two step-size adaptation methods for solving initial value problems (IVPs) are introduced: the Two-Step Method and the Embedded Runge-Kutta Method. The point reactor kinetic equations (PRKEs) were solved by the implicit Euler method with step-sizes optimized using the Two-Step Method. It was observed that the control error has an important influence on the step-size and on the accuracy of the solutions. With suitable control errors, the solutions of the PRKEs computed by the above-mentioned method are reasonably accurate. The accuracy and usage of the MATLAB built-in ODE solvers ode23 and ode45, both of which adopt the Runge-Kutta-Fehlberg method, were also studied and discussed. (authors)
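The two-step idea of estimating local error from the step-size/error relationship can be shown on a minimal explicit integrator. The sketch below compares one full Euler step with two half steps, accepts the step when their difference is below a tolerance, and rescales the step size from the error ratio; it is an illustrative scheme (explicit rather than the implicit Euler used in the paper, with illustrative safety factors).

```python
def adaptive_euler(f, y0, t0, t1, h0=0.1, tol=1e-6):
    """Integrate y' = f(t, y) with step-doubling error control: compare one
    full Euler step with two half steps, accept if the difference is below
    tol, and rescale the step size either way."""
    t, y, h = t0, y0, h0
    while t < t1:
        h = min(h, t1 - t)
        y_full = y + h * f(t, y)                       # one step of size h
        y_half = y + 0.5 * h * f(t, y)                 # two steps of size h/2
        y_two = y_half + 0.5 * h * f(t + 0.5 * h, y_half)
        err = abs(y_two - y_full)                      # local error estimate
        if err <= tol:
            t += h
            y = 2.0 * y_two - y_full                   # local extrapolation
        # shrink on rejection, grow cautiously on acceptance
        h *= max(0.2, min(5.0, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
        if t1 - t < 1e-12:
            break
    return y
```

On the test problem y' = -y, y(0) = 1, the controller picks steps small enough that the result at t = 1 matches exp(-1) to well under the requested tolerance per step.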
A multilevel correction adaptive finite element method for Kohn-Sham equation
Hu, Guanghui; Xie, Hehu; Xu, Fei
2018-02-01
In this paper, an adaptive finite element method is proposed for solving the Kohn-Sham equation with a multilevel correction technique. In the method, the Kohn-Sham equation is solved on a fixed, appropriately coarse mesh with the finite element method, while the finite element space is successively improved by solving derived boundary value problems on a series of adaptively and successively refined meshes. A main feature of the method is that solving the large-scale Kohn-Sham system is avoided effectively, and solving the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, significant acceleration can be obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.
Directory of Open Access Journals (Sweden)
Julia Dirler
2018-06-01
Full Text Available The International Agency for Research on Cancer (IARC) evaluates “very hot (>65 °C) beverages” as probably carcinogenic to humans. However, there is a lack of research regarding what temperatures consumers actually perceive as “very hot” or as “too hot”. A method for sensory analysis of such threshold temperatures was developed. The participants were asked to mix a very hot coffee step by step into a cooler coffee, so that the coffee to be tasted incrementally increased in temperature during the test. The participants took a sip at every addition until they perceived the beverage as too hot for consumption. The protocol was evaluated in a pilot study with 87 participants. Interestingly, the average pain threshold of the test group (67 °C) and the preferred drinking temperature (63 °C) bracketed the IARC threshold for carcinogenicity. The developed methodology was found fit for purpose and may be applied in larger studies.
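The step-by-step mixing protocol implies a simple heat balance at each addition: assuming ideal mixing and no heat loss, the new cup temperature is the volume-weighted mean of the current coffee and the added hot coffee. The volumes and temperatures below are illustrative values, not the study's protocol parameters.

```python
def mixing_temperatures(t_cool, t_hot, v_cool, v_step, n_steps):
    """Cup temperature after each incremental addition of hot coffee,
    assuming ideal mixing and no heat loss to the cup or air."""
    temps = []
    v, t = float(v_cool), float(t_cool)
    for _ in range(n_steps):
        t = (v * t + v_step * t_hot) / (v + v_step)  # volume-weighted mean
        v += v_step
        temps.append(t)
    return temps
```

Each sip temperature rises monotonically toward the hot-coffee temperature but never reaches it, which is why the incremental design can walk a participant smoothly up to their "too hot" threshold.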
International Nuclear Information System (INIS)
Han, Jaeyoung; Jung, Mooncheong; Yu, Sangseok; Yi, Sun
2016-01-01
In this study, a model reference adaptive controller is developed to regulate the outlet air flow rate of a centrifugal compressor for an automotive supercharger. The centrifugal compressor model is developed using an analytically based method to predict the transient operating behavior, and the designed model is validated with experimental data to confirm its accuracy. The model reference adaptive control structure consists of a compressor model and an MRAC (model reference adaptive control) mechanism. Feedback control is not robust to variations of the system parameters, but the applied adaptive control is robust even if the system parameters change. As a result, the MRAC regulated the air flow rate to the reference value, and was found to be more robust than feedback control under system parameter changes.
Energy Technology Data Exchange (ETDEWEB)
Han, Jaeyoung; Jung, Mooncheong; Yu, Sangseok [Chungnam Nat’l Univ., Daejeon (Korea, Republic of); Yi, Sun [North Carolina A and T State Univ., Raleigh (United States)
2016-08-15
In this study, a model reference adaptive controller is developed to regulate the outlet air flow rate of a centrifugal compressor for an automotive supercharger. The centrifugal compressor model is developed using an analytically based method to predict the transient operating behavior, and the designed model is validated with experimental data to confirm its accuracy. The model reference adaptive control structure consists of a compressor model and an MRAC (model reference adaptive control) mechanism. Feedback control is not robust to variations of the system parameters, but the applied adaptive control is robust even if the system parameters change. As a result, the MRAC regulated the air flow rate to the reference value, and was found to be more robust than feedback control under system parameter changes.
Control of beam halo-chaos using neural network self-adaptation method
International Nuclear Information System (INIS)
Fang Jinqing; Huang Guoxian; Luo Xiaoshu
2004-11-01
Taking advantage of neural network control methods for nonlinear complex systems, control of beam halo-chaos in the periodic focusing channels (networks) of high-intensity accelerators is studied by a feed-forward back-propagating neural network self-adaptation method. The envelope radius of the high-intensity proton beam is brought to the matched beam radius by suitably selecting the control structure of the neural network and the linear feedback coefficient, and by adjusting the weight coefficients of the neural network. The beam halo-chaos is clearly suppressed and the shaking size is greatly reduced after the neural network self-adaptation control is applied. (authors)
The Adapted Ordering Method for Lie algebras and superalgebras and their generalizations
Energy Technology Data Exchange (ETDEWEB)
Gato-Rivera, Beatriz [Instituto de Matematicas y Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); NIKHEF-H, Kruislaan 409, NL-1098 SJ Amsterdam (Netherlands)
2008-02-01
In 1998 the Adapted Ordering Method was developed for the representation theory of the superconformal algebras in two dimensions. It allows us to determine maximal dimensions for a given type of space of singular vectors, to identify all singular vectors by only a few coefficients, to spot subsingular vectors and to set the basis for constructing embedding diagrams. In this paper we present the Adapted Ordering Method for general Lie algebras and superalgebras and their generalizations, provided they can be triangulated. We also review briefly the results obtained for the Virasoro algebra and for the N = 2 and Ramond N = 1 superconformal algebras.
International Nuclear Information System (INIS)
Laucoin, E.
2008-10-01
Numerical resolution of partial differential equations can be made reliable and efficient through the use of adaptive numerical methods. We present here the work we have done on the design, implementation and validation of such a method within an industrial software platform with applications in thermohydraulics. From the geometric point of view, this method can deal with both mesh refinement and mesh coarsening, while ensuring the quality of the mesh cells. Numerically, we use the mortar element formalism in order to extend the Finite Volume-Element method implemented in the Trio-U platform and to deal with the non-conforming meshes arising from the adaptation procedure. Finally, we present an implementation of this method using concepts from domain decomposition methods to ensure its efficiency in a parallel execution context. (author)
Vieira Dias, Juliana; Gloaguen, Celine; Kereselidze, Dimitri; Manens, Line; Tack, Karine; Ebrahimian, Teni G
2018-01-01
A central question in radiation protection research is whether low-dose and low-dose-rate (LDR) exposures to ionizing radiation play a role in progression of cardiovascular disease. The response of endothelial cells to different LDR exposures may help estimate risk of cardiovascular disease by providing the biological mechanism involved. We investigated the effect of chronic LDR radiation on functional and molecular responses of human aorta endothelial cells (HAoECs). Human aorta endothelial cells were continuously irradiated at LDR (6 mGy/h) for 15 days and analyzed at time points when the cumulative dose reached 0.05, 0.5, 1.0, and 2.0 Gy. The same doses were administered acutely at high-dose rate (HDR; 1 Gy/min). The threshold for the loss of angiogenic capacity for both LDR and HDR radiations was between 0.5 and 1.0 Gy. At 2.0 Gy, angiogenic capacity returned to normal only for HAoEC exposed to LDR radiation, associated with increased expression of antioxidant and anti-inflammatory genes. Pre-LDR, but not pre-HDR, radiation, followed by a single acute 2.0 Gy challenge dose sustained the expression of antioxidant and anti-inflammatory genes and stimulated angiogenesis. Our results suggest that dose rate is important in cellular response and that a radioadaptive response is involved for a 2.0 Gy dose at LDR.
Vieira Dias, Juliana; Gloaguen, Celine; Kereselidze, Dimitri; Manens, Line; Tack, Karine; Ebrahimian, Teni G
2018-01-01
A central question in radiation protection research is whether low-dose and low-dose-rate (LDR) exposures to ionizing radiation play a role in progression of cardiovascular disease. The response of endothelial cells to different LDR exposures may help estimate risk of cardiovascular disease by providing the biological mechanism involved. We investigated the effect of chronic LDR radiation on functional and molecular responses of human aorta endothelial cells (HAoECs). Human aorta endothelial cells were continuously irradiated at LDR (6 mGy/h) for 15 days and analyzed at time points when the cumulative dose reached 0.05, 0.5, 1.0, and 2.0 Gy. The same doses were administered acutely at high-dose rate (HDR; 1 Gy/min). The threshold for the loss of angiogenic capacity for both LDR and HDR radiations was between 0.5 and 1.0 Gy. At 2.0 Gy, angiogenic capacity returned to normal only for HAoEC exposed to LDR radiation, associated with increased expression of antioxidant and anti-inflammatory genes. Pre-LDR, but not pre-HDR, radiation, followed by a single acute 2.0 Gy challenge dose sustained the expression of antioxidant and anti-inflammatory genes and stimulated angiogenesis. Our results suggest that dose rate is important in cellular response and that a radioadaptive response is involved for a 2.0 Gy dose at LDR. PMID:29531508
A NOISE ADAPTIVE FUZZY EQUALIZATION METHOD FOR PROCESSING SOLAR EXTREME ULTRAVIOLET IMAGES
Energy Technology Data Exchange (ETDEWEB)
Druckmueller, M., E-mail: druckmuller@fme.vutbr.cz [Institute of Mathematics, Faculty of Mechanical Engineering, Brno University of Technology, Technicka 2, 616 69 Brno (Czech Republic)
2013-08-15
A new image enhancement tool ideally suited for the visualization of fine structures in extreme ultraviolet images of the corona is presented in this paper. The Noise Adaptive Fuzzy Equalization method is particularly suited for the exceptionally high dynamic range images from the Atmospheric Imaging Assembly instrument on the Solar Dynamics Observatory. This method produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform which are often used for that purpose.
Face Recognition by Bunch Graph Method Using a Group Based Adaptive Tolerant Neural Network
Aradhana D.; Girish H.; Karibasappa K.; Reddy A. Chennakeshava
2011-01-01
This paper presents a new method for feature extraction from facial images using the bunch graph method. The extracted geometric features of the face are subsequently used for face recognition by utilizing a group-based adaptive neural network. This method is suitable when the facial images are rotation- and translation-invariant. Furthermore, the technique is also free from the size variance of the facial image and is capable of identifying facial images correctly when corrupted w...
A simple method to adapt time sampling of the analog signal
International Nuclear Information System (INIS)
Kalinin, Yu.G.; Martyanov, I.S.; Sadykov, Kh.; Zastrozhnova, N.N.
2004-01-01
In this paper we briefly describe a time sampling method that adapts to the speed of the signal change. In principle, this method is based on a simple idea: the combination of discrete integration with differentiation of the analog signal. The method can be used in nuclear electronics research into the characteristics of detectors and the shape of the pulse signal, the pulse and transient characteristics of inertial signal-processing systems, etc.
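One common way to realize sampling that adapts to the speed of signal change is a send-on-delta scheme: a sample is kept only when the signal has moved by more than a set amount since the last kept sample, so fast edges are sampled densely and flat regions sparsely. This is a generic illustration of the idea, not the circuit described in the paper.

```python
def send_on_delta(samples, delta):
    """Indices of samples kept: a sample survives only if it differs from
    the previously kept sample by at least delta."""
    kept = [0]                      # always keep the first sample
    last = samples[0]
    for i in range(1, len(samples)):
        if abs(samples[i] - last) >= delta:
            kept.append(i)
            last = samples[i]
    return kept
```

A flat signal collapses to a single sample, while a steadily changing one is sampled at a rate proportional to its slope.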
Method and system for training dynamic nonlinear adaptive filters which have embedded memory
Rabinowitz, Matthew (Inventor)
2002-01-01
Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system 5, as well as a nonlinear amplifier 6.
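The core idea of making updates closer to the Newton direction with an adaptive step length can be sketched on an ordinary nonlinear least-squares problem. This is not the patented modified Gauss-Newton algorithm for dynamic filters; it is a generic Gauss-Newton iteration with a simple backtracking rule standing in for the adaptive learning rate:

```python
import numpy as np

def gauss_newton_adaptive(residual, jacobian, theta0, n_iter=50):
    """Gauss-Newton with a simple adaptive step size (backtracking).

    Illustrative sketch only: the step direction comes from the normal
    equations (close to the Newton direction), and the step magnitude
    is halved until the cost decreases.
    """
    theta = np.asarray(theta0, float)
    cost = lambda p: 0.5 * np.sum(residual(p) ** 2)
    for _ in range(n_iter):
        r, J = residual(theta), jacobian(theta)
        # Newton-like direction from the (regularized) normal equations.
        step = np.linalg.solve(J.T @ J + 1e-9 * np.eye(len(theta)), J.T @ r)
        lr = 1.0
        while cost(theta - lr * step) > cost(theta) and lr > 1e-6:
            lr *= 0.5  # adapt the learning-rate magnitude
        theta = theta - lr * step
    return theta
```

For example, fitting `a * exp(b * x)` to clean data converges in a handful of iterations, far faster than a fixed-rate steepest-descent loop would.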
An adaptive multi-element probabilistic collocation method for statistical EMC/EMI characterization
Yücel, Abdulkadir C.
2013-12-01
An adaptive multi-element probabilistic collocation (ME-PC) method for quantifying uncertainties in electromagnetic compatibility and interference phenomena involving electrically large, multi-scale, and complex platforms is presented. The method permits the efficient and accurate statistical characterization of observables (i.e., quantities of interest such as coupled voltages) that potentially vary rapidly and/or are discontinuous in the random variables (i.e., parameters that characterize uncertainty in a system's geometry, configuration, or excitation). The method achieves its efficiency and accuracy by recursively and adaptively dividing the domain of the random variables into subdomains using as a guide the decay rate of relative error in a polynomial chaos expansion of the observables. While constructing local polynomial expansions on each subdomain, a fast integral-equation-based deterministic field-cable-circuit simulator is used to compute the observable values at the collocation/integration points determined by the adaptive ME-PC scheme. The adaptive ME-PC scheme requires far fewer (computationally costly) deterministic simulations than traditional polynomial chaos collocation and Monte Carlo methods for computing averages, standard deviations, and probability density functions of rapidly varying observables. The efficiency and accuracy of the method are demonstrated via its applications to the statistical characterization of voltages in shielded/unshielded microwave amplifiers and magnetic fields induced on car tire pressure sensors. © 2013 IEEE.
International Nuclear Information System (INIS)
Wen, Zhixun; Pei, Haiqing; Liu, Hai; Yue, Zhufeng
2016-01-01
The sequential Kriging reliability analysis (SKRA) method has been developed in recent years for nonlinear implicit response functions which are expensive to evaluate. This type of method includes EGRA, the efficient global reliability analysis method, and AK-MCS, the active learning reliability method combining the Kriging model and Monte Carlo simulation. The purpose of this paper is to improve SKRA by adaptive sampling regions and parallelizability. The adaptive sampling regions strategy is proposed to avoid selecting samples in regions where the probability density is so low that the accuracy of these regions has negligible effects on the results. The size of the sampling regions is adapted according to the failure probability calculated in the last iteration. Two parallel strategies are introduced and compared, aimed at selecting multiple sample points at a time. The improvement is verified through several challenging examples. - Highlights: • The ISKRA method improves the efficiency of SKRA. • The adaptive sampling regions strategy reduces the number of needed samples. • The two parallel strategies reduce the number of needed iterations. • The accuracy of the optimal value impacts the number of samples significantly.
[Comparative adaptation of crowns of selective laser melting and wax-lost-casting method].
Li, Guo-qiang; Shen, Qing-yi; Gao, Jian-hua; Wu, Xue-ying; Chen, Li; Dai, Wen-an
2012-07-01
To investigate the marginal adaptation of crowns fabricated by selective laser melting (SLM) and the wax-lost-casting method, so as to provide an experimental basis for the clinic. Co-Cr alloy full crowns were fabricated by SLM and wax-lost-casting, with 24 samples in each group. All crowns were cemented with zinc phosphate cement and cut along the longitudinal axis by a wire cutting machine. The gap between the crown tissue surface and the die was measured by a 6-point measuring method with scanning electron microscopy (SEM). The marginal adaptation of the crowns fabricated by SLM and wax-lost-casting was compared statistically. The gaps of the SLM crowns were (36.51 ± 2.94), (49.36 ± 3.31), (56.48 ± 3.35), (42.20 ± 3.60) µm, and those of the wax-lost-casting crowns were (68.86 ± 5.41), (58.86 ± 6.10), (70.62 ± 5.79), (69.90 ± 6.00) µm. There were significant differences between the two groups (P < 0.05). Both the wax-lost-casting method and the SLM method provide clinically acceptable marginal adaptation, and the marginal adaptation of SLM is better than that of wax-lost-casting.
Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo
2017-12-01
A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and smooth-quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive characteristic of the resulted mesh: the mesh is adaptive not only along fractures but also in space. The quality of elements is compared with the results from an existing method. It is verified that the present method can generate smoother elements and a better distribution of element aspect ratios. Two numerical simulations are implemented to demonstrate that the present method can be applied to various simulations of complex geological media that contain a large number of discontinuities.
Zhou, Qiuling; Tang, Chen; Li, Biyuan; Wang, Linlin; Lei, Zhenkun; Tang, Shuwei
2018-01-01
The filtering of discontinuous optical fringe patterns is a challenging problem. This paper is concerned with oriented partial differential equation (OPDE)-based image filtering methods for discontinuous optical fringe patterns. We redefine a new controlling speed function to depend on the orientation coherence. The orientation coherence can be used to distinguish continuous regions from discontinuous regions, and can be calculated using the fringe orientation. We introduce the new controlling speed function into the previous OPDEs and propose adaptive OPDE filtering models. According to our proposed adaptive OPDE filtering models, the filtering in the continuous and discontinuous regions can be carried out selectively. We demonstrate the performance of the proposed adaptive OPDEs via application to simulated and experimental fringe patterns, and compare our methods with the previous OPDEs.
Adaptive wavelet method for pricing two-asset Asian options with floating strike
Černá, Dana
2017-12-01
Asian options are path-dependent option contracts whose payoff depends on the average value of the asset price over some period of time. We focus on the pricing of Asian options on two assets. The model for pricing these options is represented by a parabolic equation with a time variable and three state variables, but using a substitution it can be reduced to an equation with only two state variables. For time discretization we use the θ-scheme. We propose a wavelet basis that is adapted to the boundary conditions and use an adaptive scheme with this basis for discretization on the given time level. The main advantage of this scheme is the small number of degrees of freedom. We present numerical experiments for the Asian put option with floating strike and compare the results for the proposed adaptive method and the Galerkin method.
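The θ-scheme used for the time discretization can be illustrated on the simplest parabolic model problem. This sketch applies one θ-step to the 1D heat equation with a dense solver, not the paper's two-asset pricing operator or wavelet basis:

```python
import numpy as np

def theta_step(u, dt, dx, theta=0.5):
    """One theta-scheme step for u_t = u_xx with homogeneous Dirichlet BCs.

    theta = 0 gives explicit Euler, theta = 1 implicit Euler, and
    theta = 0.5 Crank-Nicolson (a usual choice on each time level).
    """
    n = len(u)
    # Standard second-difference matrix on the interior grid.
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx ** 2
    lhs = np.eye(n) - theta * dt * A
    rhs = (np.eye(n) + (1 - theta) * dt * A) @ u
    return np.linalg.solve(lhs, rhs)
```

For the eigenmode `u(x) = sin(pi*x)` the step reproduces the exact decay factor `exp(-pi^2*dt)` to within discretization error, which is a quick sanity check of any θ-scheme implementation.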
A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates
Huang, Weizhang; Kamenski, Lennard; Lang, Jens
2010-03-01
A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
An adaptive phase space method with application to reflection traveltime tomography
International Nuclear Information System (INIS)
Chung, Eric; Qian, Jianliang; Uhlmann, Gunther; Zhao, Hongkai
2011-01-01
In this work, an adaptive strategy for the phase space method for traveltime tomography (Chung et al 2007 Inverse Problems 23 309–29) is developed. The method first uses those geodesics/rays that produce smaller mismatch with the measurements and continues on in the spirit of layer stripping without defining the layers explicitly. The adaptive approach improves stability, efficiency and accuracy. We then extend our method to reflection traveltime tomography by incorporating broken geodesics/rays for which a jump condition has to be imposed at the broken point for the geodesic flow. In particular, we show that our method can distinguish non-broken and broken geodesics in the measurement and utilize them accordingly in reflection traveltime tomography. We demonstrate that our method can recover the convex hull (with respect to the underlying metric) of unknown obstacles as well as the metric outside the convex hull. (paper)
A Least Square-Based Self-Adaptive Localization Method for Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Baoguo Yu
2016-01-01
In wireless sensor network (WSN) localization methods based on the Received Signal Strength Indicator (RSSI), it is usually required to determine the parameters of the radio signal propagation model before estimating the distance between an anchor node and an unknown node from their communication RSSI value; a localization algorithm is then used to estimate the location of the unknown node. However, this localization method, though high in localization accuracy, has weaknesses such as a complex working procedure and poor system versatility. To address these defects, a self-adaptive WSN localization method based on least squares is proposed, which uses the least-squares criterion to estimate the parameters of the radio signal propagation model, substantially reducing the computation in the estimation process. The experimental results show that the proposed self-adaptive localization method achieves high processing efficiency while satisfying the high localization accuracy requirement. Conclusively, the proposed method is of definite practical value.
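The least-squares parameter-estimation step can be sketched for the standard log-distance path-loss model. The model form `RSSI = A - 10*n*log10(d)` and the function names are assumptions for illustration, not necessarily the paper's exact formulation:

```python
import numpy as np

def fit_path_loss(dists, rssi):
    """Least-squares fit of the log-distance model RSSI = A - 10*n*log10(d).

    Returns (A, n): the RSSI at 1 m and the path-loss exponent.  The
    "self-adaptive" aspect: model parameters come from the measured
    data instead of an offline calibration campaign.
    """
    X = np.column_stack([np.ones(len(dists)), -10.0 * np.log10(dists)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(rssi, float), rcond=None)
    return coef[0], coef[1]

def rssi_to_distance(rssi, A, n):
    """Invert the fitted model to estimate the anchor-node distance."""
    return 10.0 ** ((A - rssi) / (10.0 * n))
```

The fitted `(A, n)` pair then feeds distance estimates into whatever localization algorithm (trilateration, least-squares positioning) the system uses.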
Blom, Kimberly C; Farina, Sasha; Gomez, Yessica-Haydee; Campbell, Norm R C; Hemmelgarn, Brenda R; Cloutier, Lyne; McKay, Donald W; Dawes, Martin; Tobe, Sheldon W; Bolli, Peter; Gelfer, Mark; McLean, Donna; Bartlett, Gillian; Joseph, Lawrence; Featherstone, Robin; Schiffrin, Ernesto L; Daskalopoulou, Stella S
2015-04-01
Despite progress in automated blood pressure measurement (BPM) technology, there is limited research linking hard outcomes to automated office BPM (OBPM) treatment targets and thresholds. Equivalences for automated BPM devices have been estimated from approximations of standardized manual measurements of 140/90 mmHg. Until outcome-driven targets and thresholds become available for automated measurement methods, deriving evidence-based equivalences between automated methods and standardized manual OBPM is the next best solution. The MeasureBP study group was initiated by the Canadian Hypertension Education Program to close this critical knowledge gap. MeasureBP aims to define evidence-based equivalent values between standardized manual OBPM and automated BPM methods by synthesizing available evidence using a systematic review and individual subject-level data meta-analyses. This manuscript provides a review of the literature and MeasureBP study protocol. These results will lay the evidenced-based foundation to resolve uncertainties within blood pressure guidelines which, in turn, will improve the management of hypertension.
Development and evaluation of a method of calibrating medical displays based on fixed adaptation
Energy Technology Data Exchange (ETDEWEB)
Sund, Patrik, E-mail: patrik.sund@vgregion.se; Månsson, Lars Gunnar; Båth, Magnus [Department of Medical Physics and Biomedical Engineering, Sahlgrenska University Hospital, Gothenburg SE-41345, Sweden and Department of Radiation Physics, University of Gothenburg, Gothenburg SE-41345 (Sweden)
2015-04-15
Purpose: The purpose of this work was to develop and evaluate a new method for calibration of medical displays that includes the effect of fixed adaptation and by using equipment and luminance levels typical for a modern radiology department. Methods: Low contrast sinusoidal test patterns were derived at nine luminance levels from 2 to 600 cd/m{sup 2} and used in a two alternative forced choice observer study, where the adaptation level was fixed at the logarithmic average of 35 cd/m{sup 2}. The contrast sensitivity at each luminance level was derived by establishing a linear relationship between the ten pattern contrast levels used at every luminance level and a detectability index (d′) calculated from the fraction of correct responses. A Gaussian function was fitted to the data and normalized to the adaptation level. The corresponding equation was used in a display calibration method that included the grayscale standard display function (GSDF) but compensated for fixed adaptation. In the evaluation study, the contrast of circular objects with a fixed pixel contrast was displayed using both calibration methods and was rated on a five-grade scale. Results were calculated using a visual grading characteristics method. Error estimations in both observer studies were derived using a bootstrap method. Results: The contrast sensitivities for the darkest and brightest patterns compared to the contrast sensitivity at the adaptation luminance were 37% and 56%, respectively. The obtained Gaussian fit corresponded well with similar studies. The evaluation study showed a higher degree of equally distributed contrast throughout the luminance range with the calibration method compensated for fixed adaptation than for the GSDF. The two lowest scores for the GSDF were obtained for the darkest and brightest patterns. These scores were significantly lower than the lowest score obtained for the compensated GSDF. For the GSDF, the scores for all luminance levels were statistically
Adaptive control method for core power control in TRIGA Mark II reactor
Sabri Minhat, Mohd; Selamat, Hazlina; Subha, Nurul Adilla Mohd
2018-01-01
The 1 MWth Reactor TRIGA PUSPATI (RTP) Mark II has undergone more than 35 years of operation. The existing core power control uses a feedback control algorithm (FCA). It is challenging to keep the core power stable at the desired value within acceptable error bands to meet the safety demands of RTP, owing to the sensitivity of nuclear research reactor operation. Currently, the power tracking performance of the system is unsatisfactory and can be improved. Therefore, a new core power control design is very important to improve the current tracking performance and to regulate reactor power by controlling the movement of the control rods. In this paper, adaptive controllers, namely Model Reference Adaptive Control (MRAC) and Self-Tuning Control (STC), were applied to the control of the core power. The model for core power control was based on mathematical models of the reactor core, an adaptive controller model, and control rod selection programming. The mathematical models of the reactor core were based on point kinetics, thermal hydraulic, and reactivity models. The MRAC model was designed using the Lyapunov method to ensure a stable closed-loop system, while the STC Generalised Minimum Variance (GMV) controller does not require exact knowledge of the plant transfer function. The performance of the proposed adaptive control and the FCA is compared via computer simulation, and the simulation results demonstrate the effectiveness and good performance of the proposed control method for core power control.
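The MRAC idea can be illustrated with the classical MIT-rule gain-adaptation example on a first-order plant. This toy sketch is not the RTP reactor model; the plant, reference model, and all numerical values are illustrative assumptions:

```python
import numpy as np

def mrac_mit(gamma=0.5, dt=0.01, steps=20000, kp=2.0, km=1.0, a=1.0):
    """MIT-rule MRAC for a first-order plant dy/dt = -a*y + kp*theta*u.

    Reference model: dym/dt = -a*ym + km*u.  The adjustable feedforward
    gain theta should converge to km/kp so the plant tracks the model.
    """
    y = ym = theta = 0.0
    for k in range(steps):
        u = np.sign(np.sin(0.5 * k * dt))   # square-wave command
        e = y - ym                          # tracking error
        theta += dt * (-gamma * e * ym)     # MIT adaptation law
        y += dt * (-a * y + kp * theta * u)     # plant (Euler step)
        ym += dt * (-a * ym + km * u)           # reference model
    return theta
```

Lyapunov-based designs replace the MIT sensitivity term with one derived from a Lyapunov function, guaranteeing closed-loop stability rather than relying on a small adaptation gain.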
Tariba, N.; Bouknadel, A.; Haddou, A.; Ikken, N.; Omari, Hafsa El; Omari, Hamid El
2017-01-01
The photovoltaic generator (PVG) has a nonlinear characteristic relating current to voltage, I = f(U), which depends on the variation of solar irradiation and temperature; in addition, its operating point depends directly on the load that it supplies. To overcome this drawback and extract the maximum power available at the terminals of the generator, an adaptation stage is introduced between the generator and the load to couple the two elements as perfectly as possible. The adaptation stage is associated with a command called MPPT (Maximum Power Point Tracking), which forces the PVG to operate at the MPP (Maximum Power Point) under varying climatic conditions and load variation. This paper presents a comparative study between adaptive controllers for PV systems using the MIT rule and the Lyapunov method to regulate the PV voltage. The Incremental Conductance (IC) algorithm is used to extract the maximum power from the PVG by calculating the reference voltage Vref, and the adaptive controller is used to regulate and quickly track the PV voltage. The two adaptive controller methods are compared to prove their performance using PSIM tools and experimental tests, and the mathematical model of the step-up converter with the PVG model is presented.
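The Incremental Conductance rule for updating Vref can be sketched as follows. The PV curve used in the test and the fixed step size are illustrative assumptions, not the paper's PSIM model; at the MPP, dP/dV = 0, i.e. dI/dV = -I/V, and the sign of the mismatch says on which side of the MPP the operating point lies:

```python
def inc_cond_step(v, i, v_prev, i_prev, v_ref, dv=0.1):
    """One Incremental Conductance update of the reference voltage.

    Compares the incremental conductance dI/dV with the instantaneous
    conductance -I/V and nudges Vref toward the maximum power point.
    """
    dV, dI = v - v_prev, i - i_prev
    if abs(dV) < 1e-9:
        if abs(dI) > 1e-9:           # irradiance changed at fixed V
            v_ref += dv if dI > 0 else -dv
    else:
        slope = dI / dV
        if abs(slope + i / v) > 1e-9:    # not yet at the MPP
            v_ref += dv if slope > -i / v else -dv
    return v_ref
```

In the full system this Vref is then tracked by the adaptive voltage controller driving the step-up converter; here a test simply lets the operating voltage follow Vref exactly.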
Directory of Open Access Journals (Sweden)
Bachmann M.
2011-12-01
The concept of fully adaptive multiresolution finite volume schemes has been developed and investigated during the past decade. Here grid adaptation is realized by performing a multiscale decomposition of the discrete data at hand. By means of hard thresholding the resulting multiscale data are compressed. From the remaining data a locally refined grid is constructed. The aim of the present work is to give a self-contained overview on the construction of an appropriate multiresolution analysis using biorthogonal wavelets, its efficient realization by means of hash maps using global cell identifiers, and the parallelization of the multiresolution-based grid adaptation via MPI using space-filling curves.
Adaptive e-learning methods and IMS Learning Design. An integrated approach
Burgos, Daniel; Specht, Marcus
2006-01-01
Please, cite this publication as: Burgos, D., & Specht, M. (2006). Adaptive e-learning methods and IMS Learning Design. In Kinshuk, R. Koper, P. Kommers, P. Kirschner, D. G. Sampson & W. Didderen (Eds.), Proceedings of the 6th IEEE International Conference on Advanced Learning Technologies (pp.
When Smokey says "No": Fire-less methods for growing plants adapted to cultural fire regimes
Daniela Shebitz; Justine E. James
2010-01-01
Two culturally-significant plants (sweetgrass [Anthoxanthum nitens] and beargrass [Xerophyllum tenax]) are used as case studies for investigating methods of restoring plant populations that are adapted to indigenous burning practices without using fire. Reports from tribal members that the plants of interest were declining in traditional gathering areas provided the...
Rackauckas, Christopher; Nie, Qing
2017-01-01
Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides efficient approaches for solving differential equations. While many such methods exist for solving deterministic systems, little progress has been made for stochastic variants. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods with strong order 1.5 which have a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate which requires no additional function evaluations. Next we derive a general method to reject the time steps without losing information about the future Brownian path termed Rejection Sampling with Memory (RSwM). This method utilizes a stack data structure to do rejection sampling, costing only a few floating point calculations. We show numerically that the methods generate statistically-correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed timestep algorithms. Our approach only requires the solution to a bridging problem and thus lends itself to natural generalizations beyond SDEs.
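The embedded-pair error estimate and step-rejection logic carry over from the deterministic setting. The sketch below shows the idea with a Heun/Euler pair on an ODE; it is not the strong order 1.5/1.0 SRK pair or the RSwM stack of the paper:

```python
def integrate_adaptive(f, y0, t0, t1, tol=1e-6, h=0.1):
    """Adaptive integration with an embedded Heun/Euler pair.

    The difference between the 2nd- and 1st-order solutions is a free
    local error estimate, used to accept/reject and rescale the step.
    """
    t, y = t0, float(y0)
    while t < t1:
        h = min(h, t1 - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_high = y + 0.5 * h * (k1 + k2)   # Heun, order 2
        y_low = y + h * k1                 # Euler, order 1
        err = abs(y_high - y_low)
        if err <= tol:                     # accept the step
            t, y = t + h, y_high
        # rescale the step either way (safety factor 0.9, bounded)
        h *= min(5.0, max(0.2, 0.9 * (tol / (err + 1e-16)) ** 0.5))
    return y
```

For SDEs the subtlety the paper addresses is that a rejected step cannot simply discard the sampled Brownian increment; RSwM keeps it on a stack so the future Brownian path stays consistent.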
International Nuclear Information System (INIS)
Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.
2013-01-01
Highlights: ► A new adaptive h-refinement approach has been developed for a class of nodal methods. ► The resulting system of nodal equations is more amenable to efficient numerical solution. ► The benefit of the approach is reducing computational effort relative to uniform fine mesh modeling. ► The spatially adaptive approach greatly enhances the accuracy of the solution. - Abstract: The aim of this work is to develop a spatially adaptive coarse mesh strategy that progressively refines the nodes in appropriate regions of the domain to solve the neutron balance equation by the zeroth-order nodal expansion method. A flux-gradient-based a posteriori estimation scheme has been utilized for checking the approximate solutions for various nodes. The relative surface net leakage of nodes has been considered as an assessment criterion. In this approach, the core module is called by the adaptive mesh generator to determine the gradients of node surface fluxes and to explore the possibility of node refinements in appropriate regions and directions of the problem. The benefit of the approach is reducing computational effort relative to uniform fine mesh modeling. For this purpose, a computer program, ANRNE-2D (Adaptive Node Refinement Nodal Expansion), has been developed to solve the neutron diffusion equation using the average current nodal expansion method for 2D rectangular geometries. Implementing the adaptive algorithm confirms its superiority in enhancing the accuracy of the solution without using fine nodes throughout the domain or increasing the number of unknowns. Some well-known benchmarks have been investigated and improvements are reported.
Use of a dynamic grid adaptation in the asymmetric weighted residual method
International Nuclear Information System (INIS)
Graf, V.; Romstedt, P.; Werner, W.
1986-01-01
A dynamic grid adaptation method has been developed for use with the asymmetric weighted residual method. The method automatically adapts the number and position of the spatial mesh points as the solution of hyperbolic or parabolic vector partial differential equations progresses in time. The mesh selection algorithm is based on the minimization of the L2 norm of the spatial discretization error. The method permits the accurate calculation of the evolution of inhomogeneities, like wave fronts, shock layers, and other sharp transitions, while generally using a coarse computational grid. The number of required mesh points is significantly reduced relative to a fixed Eulerian grid. Since the mesh selection algorithm is computationally inexpensive, a corresponding reduction of computing time results.
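The mesh-point selection idea can be sketched with the classical equidistribution principle. This uses an arc-length monitor function rather than the paper's L2 discretization-error criterion, so it is an assumption-laden illustration of how points concentrate at sharp transitions:

```python
import numpy as np

def equidistribute(x, u, n_new):
    """Place n_new mesh points so each cell carries equal monitor mass.

    Monitor w = sqrt(1 + u_x^2) (arc length) concentrates points where
    the solution has wave fronts or other sharp transitions.
    """
    ux = np.gradient(u, x)
    w = np.sqrt(1.0 + ux ** 2)
    # Cumulative monitor integral (trapezoid rule), normalized to [0, 1].
    c = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    c /= c[-1]
    # Invert the cumulative map at equally spaced levels.
    return np.interp(np.linspace(0.0, 1.0, n_new), c, x)
```

In a moving-mesh computation this redistribution would be repeated as the front moves, keeping the coarse grid concentrated where the discretization error would otherwise be largest.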
International Nuclear Information System (INIS)
Wang, Ruihong; Yang, Shulin; Pei, Lucheng
2011-01-01
Deep penetration problem has been one of the difficult problems in shielding calculation with Monte Carlo method for several decades. In this paper, an adaptive technique under the emission point as a sampling station is presented. The main advantage is to choose the most suitable sampling number from the emission point station to get the minimum value of the total cost in the process of the random walk. Further, the related importance sampling method is also derived. The main principle is to define the importance function of the response due to the particle state and ensure the sampling number of the emission particle is proportional to the importance function. The numerical results show that the adaptive method under the emission point as a station could overcome the difficulty of underestimation to the result in some degree, and the related importance sampling method gets satisfied results as well. (author)
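The principle of concentrating samples where the response matters and reweighting by the density ratio can be sketched in its simplest form: a Gaussian tail probability, not the authors' deep-penetration transport setting. The proposal center and sample counts are illustrative assumptions:

```python
import numpy as np

def importance_estimate(threshold=4.0, n=20000, seed=0):
    """Importance sampling of p = P(X > t) for X ~ N(0, 1).

    Samples are drawn from a proposal shifted to the threshold and
    reweighted by the density ratio N(0,1)/N(t,1), so the rare region
    is sampled often instead of almost never.
    """
    rng = np.random.default_rng(seed)
    z = rng.normal(threshold, 1.0, n)                    # biased proposal
    log_w = -0.5 * z ** 2 + 0.5 * (z - threshold) ** 2   # density ratio
    return np.mean(np.exp(log_w) * (z > threshold))
```

A plain Monte Carlo estimate of P(X > 4) ≈ 3.2e-5 would need millions of samples for the same relative accuracy; the reweighted estimator gets there with a few thousand.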
An h-adaptive finite element method for turbulent heat transfer
Energy Technology Data Exchange (ETDEWEB)
Carrington, David B [Los Alamos National Laboratory
2009-01-01
A two-equation turbulence closure model (k-ω) using an h-adaptive grid technique and the finite element method (FEM) has been developed to simulate low Mach number flow and heat transfer. These flows are applicable to many flows in engineering and environmental sciences. Of particular interest in the engineering modeling areas are combustion, solidification, and heat exchanger design. Flows for indoor air quality modeling and atmospheric pollution transport are typical types of environmental flows modeled with this method. The numerical method is based on a hybrid finite element model using an equal-order projection process. The model includes thermal and species transport, localized mesh refinement (h-adaptive), and Petrov-Galerkin weighting for stabilizing the advection. This work develops the continuum model of a two-equation turbulence closure method. The fractional step solution method is stated along with the h-adaptive grid method (Carrington and Pepper, 2002). Solutions are presented for 2D flow over a backward-facing step.
A class of discontinuous Petrov–Galerkin methods. Part III: Adaptivity
Demkowicz, Leszek
2012-04-01
We continue our theoretical and numerical study on the Discontinuous Petrov-Galerkin method with optimal test functions in the context of 1D and 2D convection-dominated diffusion problems and hp-adaptivity. With a proper choice of the norm for the test space, we prove robustness (uniform stability with respect to the diffusion parameter) and mesh-independence of the energy norm of the FE error for the 1D problem. With hp-adaptivity and a proper scaling of the norms for the test functions, we establish new limits for solving convection-dominated diffusion problems numerically: ε = 10⁻¹¹ for 1D and ε = 10⁻⁷ for 2D problems. The adaptive process is fully automatic and starts with a mesh consisting of only a few elements. © 2011 IMACS. Published by Elsevier B.V. All rights reserved.
Adaptive mixed finite element methods for Darcy flow in fractured porous media
Chen, Huangxin; Salama, Amgad; Sun, Shuyu
2016-01-01
In this paper, we propose adaptive mixed finite element methods for simulating the single-phase Darcy flow in two-dimensional fractured porous media. The reduced model that we use for the simulation is a discrete fracture model coupling Darcy flows in the matrix and the fractures, and the fractures are modeled by one-dimensional entities. The Raviart-Thomas mixed finite element methods are utilized for the solution of the coupled Darcy flows in the matrix and the fractures. In order to improve the efficiency of the simulation, we use adaptive mixed finite element methods based on novel residual-based a posteriori error estimators. In addition, we develop an efficient upscaling algorithm to compute the effective permeability of the fractured porous media. Several interesting examples of Darcy flow in the fractured porous media are presented to demonstrate the robustness of the algorithm.
Data-adaptive Robust Optimization Method for the Economic Dispatch of Active Distribution Networks
DEFF Research Database (Denmark)
Zhang, Yipu; Ai, Xiaomeng; Fang, Jiakun
2018-01-01
Due to the restricted mathematical description of the uncertainty set, current two-stage robust optimization is usually over-conservative, which has drawn concerns from power system operators. This paper proposes a novel data-adaptive robust optimization method for the economic dispatch of active distribution networks with renewables. The scenario-generation method and two-stage robust optimization are combined in the proposed method. To reduce the conservativeness, a few extreme scenarios selected from the historical data are used to replace the conventional uncertainty set. The proposed extreme-scenario selection algorithm takes advantage of considering the correlations and can be adaptive to different historical data sets. A theoretical proof is given that the constraints will be satisfied under all the possible scenarios if they hold in the selected extreme scenarios, which
Fuzzy adaptive Kalman filter for indoor mobile target positioning with INS/WSN integrated method
Institute of Scientific and Technical Information of China (English)
杨海; 李威; 罗成名
2015-01-01
A pure inertial navigation system (INS) has divergent localization errors after a long time. To compensate for this disadvantage, a wireless sensor network (WSN) associated with the INS was applied to estimate the mobile target's position. Taking the traditional Kalman filter (KF) as the framework, the system equation of the KF was established by the INS and the observation equation of position errors was built by the WSN. Meanwhile, the observation equation of velocity errors was established from the velocity difference between the INS and WSN; the covariance matrix of the Kalman filter measurement noise was then adjusted with a fuzzy inference system (FIS), yielding the proposed fuzzy adaptive Kalman filter (FAKF) based on the INS/WSN. The simulation results show that the FAKF method has better accuracy and robustness than the KF and EKF methods and shows good adaptive capacity with time-varying system noise. Finally, experimental results further prove that the FAKF converges faster than the KF and EKF methods.
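The noise-covariance adaptation idea behind such filters can be sketched in a few lines. This is a minimal 1-D illustration, not the paper's INS/WSN filter: a simple piecewise rule stands in for the fuzzy inference system, and the process noise, initial covariance, and adaptation factors are assumed values.

```python
def fuzzy_adaptive_kf(zs, q=0.01, r0=1.0):
    """1-D Kalman filter with innovation-driven adaptation of the
    measurement-noise covariance R (a crude stand-in for a fuzzy
    inference system acting on the innovation statistics)."""
    x, p, r = zs[0], 1.0, r0
    estimates = [x]
    for z in zs[1:]:
        p = p + q                        # predict (random-walk state model)
        ratio = (z - x) ** 2 / (p + r)   # normalized squared innovation
        # adaptation rule: the ratio should be ~1 if R is consistent
        if ratio > 2.0:
            r *= 1.2                     # measurements noisier than assumed
        elif ratio < 0.5:
            r *= 0.9                     # measurements cleaner than assumed
        k = p / (p + r)                  # Kalman gain
        x = x + k * (z - x)              # update state estimate
        p = (1.0 - k) * p                # update error covariance
        estimates.append(x)
    return estimates
```

On a noisy constant signal the estimate settles near the true value while R tracks the observed measurement scatter.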
International Nuclear Information System (INIS)
Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing
2012-01-01
In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information about the variables is extracted with some pre-sampling of points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with a combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
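The two-stage idea (pre-sample to locate the failure region, then sample from a density fitted there) can be illustrated with a scalar toy problem. This is a sketch of the general technique, not the AP1000 study: the standard-normal input, Gaussian importance density, and sample sizes are all assumptions.

```python
import math
import random

def failure_prob_ais(limit_state, n_pre=2000, n_is=2000, seed=1):
    """Adaptive importance sampling sketch for a scalar standard-normal
    input: pre-sample to locate the failure region (limit_state < 0),
    fit a Gaussian importance density to the failure points, then
    re-estimate the failure probability with importance weights."""
    rng = random.Random(seed)

    def norm_pdf(x, m, s):
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

    # pre-sampling: crude Monte Carlo to find points in the failure region
    fails = [x for x in (rng.gauss(0.0, 1.0) for _ in range(n_pre))
             if limit_state(x) < 0.0]
    if not fails:
        return 0.0
    mu = sum(fails) / len(fails)
    sd = max((sum((f - mu) ** 2 for f in fails) / len(fails)) ** 0.5, 0.5)

    # importance sampling from the density centred on the failure region
    total = 0.0
    for _ in range(n_is):
        x = rng.gauss(mu, sd)
        if limit_state(x) < 0.0:
            total += norm_pdf(x, 0.0, 1.0) / norm_pdf(x, mu, sd)
    return total / n_is

# toy limit state: failure when x > 2 (true probability ≈ 0.0228 for x ~ N(0,1))
p = failure_prob_ais(lambda x: 2.0 - x)
```

Because most importance samples land in the failure region, far fewer evaluations are needed than with crude Monte Carlo for the same accuracy.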
A Remote Sensing Image Fusion Method based on adaptive dictionary learning
He, Tongdi; Che, Zongxi
2018-01-01
This paper discusses a remote sensing fusion method based on 'adaptive sparse representation' (ASP), intended to provide improved spectral information, reduce data redundancy and decrease system complexity. First, the training sample set is formed by taking random blocks from the images to be fused; the dictionary is then constructed using the training samples, and the remaining terms are clustered to obtain the complete dictionary by iterated processing at each step. Second, the self-adaptive weighted coefficient rule of regional energy is used to select the feature fusion coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and an average is taken to obtain the final fused images. Experimental results show that the proposed method is superior to other traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.
Adaptive mixed finite element methods for Darcy flow in fractured porous media
Chen, Huangxin
2016-09-21
In this paper, we propose adaptive mixed finite element methods for simulating the single-phase Darcy flow in two-dimensional fractured porous media. The reduced model that we use for the simulation is a discrete fracture model coupling Darcy flows in the matrix and the fractures, and the fractures are modeled by one-dimensional entities. The Raviart-Thomas mixed finite element methods are utilized for the solution of the coupled Darcy flows in the matrix and the fractures. In order to improve the efficiency of the simulation, we use adaptive mixed finite element methods based on novel residual-based a posteriori error estimators. In addition, we develop an efficient upscaling algorithm to compute the effective permeability of the fractured porous media. Several interesting examples of Darcy flow in the fractured porous media are presented to demonstrate the robustness of the algorithm.
Liu, Yong-Kuo; Chao, Nan; Xia, Hong; Peng, Min-Jun; Ayodeji, Abiodun
2018-05-17
This paper presents an improved and efficient virtual reality-based adaptive dose assessment method (VRBAM) applicable to the cutting and dismantling tasks in nuclear facility decommissioning. The method combines the modeling strength of virtual reality with the flexibility of adaptive technology. The initial geometry is designed with the three-dimensional computer-aided design tools, and a hybrid model composed of cuboids and a point-cloud is generated automatically according to the virtual model of the object. In order to improve the efficiency of dose calculation while retaining accuracy, the hybrid model is converted to a weighted point-cloud model, and the point kernels are generated by adaptively simplifying the weighted point-cloud model according to the detector position, an approach that is suitable for arbitrary geometries. The dose rates are calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression formula in the fitting function. The geometric modeling capability of VRBAM was verified by simulating basic geometries, which included a convex surface, a concave surface, a flat surface and their combination. The simulation results show that the VRBAM is more flexible and superior to other approaches in modeling complex geometries. In this paper, the computation time and dose rate results obtained from the proposed method were also compared with those obtained using the MCNP code and an earlier virtual reality-based method (VRBM) developed by the same authors. © 2018 IOP Publishing Ltd.
Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.
2016-01-01
18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo cancer imaging. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The previously proposed PET segmentation strategies were validated under ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ phantom. Validation of the method was performed both under ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and a non-homogeneous uptake) consisted in combining commercially available anthropomorphic phantoms with irregular molds generated using 3D-printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
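The threshold-plus-background scheme can be illustrated compactly. This is a sketch only: the median stands in for the paper's k-means background estimate, and the 41% fraction is an illustrative constant, not the value calibrated on the NEMA IQ phantom.

```python
def adaptive_threshold_segment(image, frac=0.41):
    """Threshold-based segmentation sketch: estimate the background level
    (median here, standing in for a k-means background step) and keep
    voxels above background + frac * (peak - background)."""
    flat = sorted(v for row in image for v in row)
    background = flat[len(flat) // 2]            # crude background estimate
    peak = flat[-1]                              # hottest voxel
    threshold = background + frac * (peak - background)
    return [[1 if v > threshold else 0 for v in row] for row in image]
```

On a toy 2-D slice with a hot insert, only the insert survives the adaptive cutoff.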
Eiber, Calvin D; Dokos, Socrates; Lovell, Nigel H; Suaning, Gregg J
2017-05-01
The capacity to quickly and accurately simulate extracellular stimulation of neurons is essential to the design of next-generation neural prostheses. Existing platforms for simulating neurons are largely based on finite-difference techniques; due to the complex geometries involved, the more powerful spectral or differential quadrature techniques cannot be applied directly. This paper presents a mathematical basis for the application of a spectral element method to the problem of simulating the extracellular stimulation of retinal neurons, which is readily extensible to neural fibers of any kind. The activating function formalism is extended to arbitrary neuron geometries, and a segmentation method to guarantee an appropriate choice of collocation points is presented. Differential quadrature may then be applied to efficiently solve the resulting cable equations. The capacity for this model to simulate action potentials propagating through branching structures and to predict minimum extracellular stimulation thresholds for individual neurons is demonstrated. The presented model is validated against published values for extracellular stimulation threshold and conduction velocity for realistic physiological parameter values. This model suggests that convoluted axon geometries are more readily activated by extracellular stimulation than linear axon geometries, which may have ramifications for the design of neural prostheses.
Adapting a perinatal empathic training method from South Africa to Germany.
Knapp, Caprice; Honikman, Simone; Wirsching, Michael; Husni-Pascha, Gidah; Hänselmann, Eva
2018-01-01
Maternal mental health conditions are prevalent across the world. For women, the perinatal period is associated with increased rates of depression and anxiety. At the same time, there is widespread documentation of disrespectful care for women by maternity health staff. Improving the empathic engagement skills of maternity healthcare workers may enable them to respond to the mental health needs of their clients more effectively. In South Africa, a participatory empathic training method, the "Secret History", has been used as part of a national Department of Health training program with maternity staff and has shown promising results. For this paper, we aimed to describe an adaptation of the Secret History empathic training method from the South African to the German setting and to evaluate the adapted training. The pilot study occurred in an academic medical center in Germany. A focus group (n = 8) was used to adapt the training by describing the local context and changing the materials to be relevant to Germany. After adapting the materials, the pilot training was conducted with a mixed group of professionals (n = 15), many of whom were trainers themselves. A pre-post survey assessed the participants' empathy levels and attitudes towards the training method. In adapting the materials, the focus group discussion generated several experiences that were considered to be typical interpersonal and structural challenges facing healthcare workers in maternal care in Germany. These experiences were crafted into case scenarios that then formed the basis of the activities used in the Secret History empathic training pilot. Evaluation of the pilot training showed that although the participants had high levels of empathy in the pre-phase (100% estimated their empathic ability as high or very high), 69% became more aware of their own emotional experiences with patients and the need for self-care after the training. A majority, or 85%, indicated that the training
Wavelet and adaptive methods for time dependent problems and applications in aerosol dynamics
Guo, Qiang
Time dependent partial differential equations (PDEs) are widely used as mathematical models of environmental problems. Aerosols are now clearly identified as an important factor in many environmental aspects of climate and radiative forcing processes, as well as in the health effects of air quality. The mathematical models for aerosol dynamics with respect to size distribution are nonlinear partial differential and integral equations, which describe processes of condensation, coagulation and deposition. Simulating the general aerosol dynamic equations over time, particle size and space presents serious difficulties because the size dimension ranges from a few nanometers to several micrometers, while the spatial dimension is usually described in kilometers. Therefore, it is an important and challenging task to develop efficient techniques for solving time dependent dynamic equations. In this thesis, we develop and analyze efficient wavelet and adaptive methods for the time dependent dynamic equations on particle size and further apply them to the spatial aerosol dynamic systems. A wavelet Galerkin method is proposed to solve the aerosol dynamic equations in time and particle size, since the aerosol distribution changes strongly along the size direction and the wavelet technique can resolve it very efficiently. Daubechies' wavelets are considered in the study because they possess useful properties such as orthogonality, compact support, and exact representation of polynomials up to a certain degree. Another problem encountered in the solution of the aerosol dynamic equations results from the hyperbolic form due to the condensation growth term. We propose a new characteristic-based fully adaptive multiresolution numerical scheme for solving the aerosol dynamic equation, which combines the attractive advantages of the adaptive multiresolution technique and the method of characteristics. On the aspect of theoretical analysis, the global existence and uniqueness of
3D spatially-adaptive canonical correlation analysis: Local and global methods.
Yang, Zhengshi; Zhuang, Xiaowei; Sreenivasan, Karthik; Mishra, Virendra; Curran, Tim; Byrd, Richard; Nandy, Rajesh; Cordes, Dietmar
2018-04-01
Local spatially-adaptive canonical correlation analysis (local CCA) with spatial constraints has been introduced to fMRI multivariate analysis for improved modeling of activation patterns. However, current algorithms require complicated spatial constraints that have only been applied to 2D local neighborhoods because the computational time would be exponentially increased if the same method is applied to 3D spatial neighborhoods. In this study, an efficient and accurate line search sequential quadratic programming (SQP) algorithm has been developed to efficiently solve the 3D local CCA problem with spatial constraints. In addition, a spatially-adaptive kernel CCA (KCCA) method is proposed to increase accuracy of fMRI activation maps. With oriented 3D spatial filters anisotropic shapes can be estimated during the KCCA analysis of fMRI time courses. These filters are orientation-adaptive leading to rotational invariance to better match arbitrary oriented fMRI activation patterns, resulting in improved sensitivity of activation detection while significantly reducing spatial blurring artifacts. The kernel method in its basic form does not require any spatial constraints and analyzes the whole-brain fMRI time series to construct an activation map. Finally, we have developed a penalized kernel CCA model that involves spatial low-pass filter constraints to increase the specificity of the method. The kernel CCA methods are compared with the standard univariate method and with two different local CCA methods that were solved by the SQP algorithm. Results show that SQP is the most efficient algorithm to solve the local constrained CCA problem, and the proposed kernel CCA methods outperformed univariate and local CCA methods in detecting activations for both simulated and real fMRI episodic memory data. Copyright © 2017 Elsevier Inc. All rights reserved.
A simple and inexpensive method for determining cold sensitivity and adaptation in mice.
Brenner, Daniel S; Golden, Judith P; Vogt, Sherri K; Gereau, Robert W
2015-03-17
Cold hypersensitivity is a serious clinical problem, affecting a broad subset of patients and causing significant decreases in quality of life. The cold plantar assay allows the objective and inexpensive assessment of cold sensitivity in mice, and can quantify both analgesia and hypersensitivity. Mice are acclimated on a glass plate, and a compressed dry ice pellet is held against the glass surface underneath the hindpaw. The latency to withdrawal from the cooling glass is used as a measure of cold sensitivity. Cold sensation is also important for survival in regions with seasonal temperature shifts, and in order to maintain sensitivity animals must be able to adjust their thermal response thresholds to match the ambient temperature. The Cold Plantar Assay (CPA) also allows the study of adaptation to changes in ambient temperature by testing the cold sensitivity of mice at temperatures ranging from 30 °C to 5 °C. Mice are acclimated as described above, but the glass plate is cooled to the desired starting temperature using aluminum boxes (or aluminum foil packets) filled with hot water, wet ice, or dry ice. The temperature of the plate is measured at the center using a filament T-type thermocouple probe. Once the plate has reached the desired starting temperature, the animals are tested as described above. This assay allows testing of mice at temperatures ranging from innocuous to noxious. The CPA yields unambiguous and consistent behavioral responses in uninjured mice and can be used to quantify both hypersensitivity and analgesia. This protocol describes how to use the CPA to measure cold hypersensitivity, analgesia, and adaptation in mice.
A method for online verification of adapted fields using an independent dose monitor
International Nuclear Information System (INIS)
Chang Jina; Norrlinger, Bernhard D.; Heaton, Robert K.; Jaffray, David A.; Cho, Young-Bin; Islam, Mohammad K.; Mahon, Robert
2013-01-01
Purpose: Clinical implementation of online adaptive radiotherapy requires generation of modified fields and a method of dosimetric verification in a short time. We present a method of treatment field modification to account for patient setup error, and an online method of verification using an independent monitoring system.Methods: The fields are modified by translating each multileaf collimator (MLC) defined aperture in the direction of the patient setup error, and magnifying to account for distance variation to the marked isocentre. A modified version of a previously reported online beam monitoring system, the integral quality monitoring (IQM) system, was investigated for validation of adapted fields. The system consists of a large area ion-chamber with a spatial gradient in electrode separation to provide a spatially sensitive signal for each beam segment, mounted below the MLC, and a calculation algorithm to predict the signal. IMRT plans of ten prostate patients have been modified in response to six randomly chosen setup errors in three orthogonal directions.Results: A total of approximately 49 beams for the modified fields were verified by the IQM system, of which 97% of measured IQM signal agree with the predicted value to within 2%.Conclusions: The modified IQM system was found to be suitable for online verification of adapted treatment fields
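The field-modification step described above (translate each MLC-defined aperture by the setup error, then magnify for the change in isocentre distance) can be sketched as follows. The 2-D point representation of the aperture and the order of operations are assumptions for illustration, not the authors' implementation.

```python
def adapt_aperture(leaf_edges, setup_error, planned_iso_dist, actual_iso_dist):
    """Translate MLC-defined aperture points by the in-plane setup error,
    then magnify by the ratio of actual to planned isocentre distance."""
    mag = actual_iso_dist / planned_iso_dist
    return [((x + setup_error[0]) * mag, (y + setup_error[1]) * mag)
            for (x, y) in leaf_edges]
```

With no distance change the aperture simply shifts; a 10% longer isocentre distance magnifies the shifted aperture by 1.1.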
An h-adaptive mesh method for Boltzmann-BGK/hydrodynamics coupling
International Nuclear Information System (INIS)
Cai Zhenning; Li Ruo
2010-01-01
We introduce a coupled method for hydrodynamic and kinetic equations on 2-dimensional h-adaptive meshes. We adopt the Euler equations with a fast kinetic solver in the region near thermodynamical equilibrium, while use the Boltzmann-BGK equation in kinetic regions where fluids are far from equilibrium. A buffer zone is created around the kinetic regions, on which a gradually varying numerical flux is adopted. Based on the property of a continuously discretized cut-off function which describes how the flux varies, the coupling will be conservative. In order for the conservative 2-dimensional specularly reflective boundary condition to be implemented conveniently, the discrete Maxwellian is approximated by a high order continuous formula with improved accuracy on a disc instead of on a square domain. The h-adaptive method can work smoothly with a time-split numerical scheme. Through h-adaptation, the cell number is greatly reduced. This method is particularly suitable for problems with hydrodynamics breakdown on only a small part of the whole domain, so that the total efficiency of the algorithm can be greatly improved. Three numerical examples are presented to validate the proposed method and demonstrate its efficiency.
Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement
Shervani-Tabar, Navid; Vasilyev, Oleg V.
2016-11-01
This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods which rely on the unit normal vector, Stabilized Conservative Level Set (SCLS) uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the normal direction to the interface, thus, preserving the conservative level set properties, while away from the interfaces the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to need for a finer resolution in the vicinity of the interface in comparison with the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt on steep gradients in the solution while retaining a predetermined order of accuracy.
Vibration-Based Adaptive Novelty Detection Method for Monitoring Faults in a Kinematic Chain
Directory of Open Access Journals (Sweden)
Jesus Adolfo Cariño-Corrales
2016-01-01
This paper presents an adaptive novelty detection methodology applied to a kinematic chain for the monitoring of faults. The proposed approach is premised on the assumption that only information about the healthy operation of the machine is initially available and fault scenarios will eventually develop. This approach aims to cover some of the challenges presented when condition monitoring is applied under a continuous learning framework. The structure of the method is divided into two recursive stages: first, an offline stage for initialization and retraining of the feature reduction and novelty detection modules and, second, an online monitoring stage to continuously assess the condition of the machine. Contrary to classical static feature reduction approaches, the proposed method reformulates the features by employing first a Laplacian Score ranking and then the Fisher Score ranking for retraining. The proposed methodology is validated experimentally by monitoring the vibration measurements of a kinematic chain driven by an induction motor. Two faults are induced in the motor to validate the method's performance in detecting anomalies and adapting the feature reduction and novelty detection modules to the new information. The obtained results show the advantages of employing an adaptive approach for novelty detection and feature reduction, making the proposed method suitable for industrial machinery diagnosis applications.
Adaptive EWMA Method Based on Abnormal Network Traffic for LDoS Attacks
Directory of Open Access Journals (Sweden)
Dan Tang
2014-01-01
Low-rate denial of service (LDoS) attacks reduce network service capabilities by periodically sending high-intensity pulse data flows. Because of their concealed behavior, LDoS attacks are more difficult for traditional DoS detection methods to detect, and the accuracy of current detection methods for LDoS attacks is relatively low. Since LDoS attacks lead to an abnormal distribution of ACK traffic, they can be detected by analyzing the distribution characteristics of that traffic. The traditional EWMA algorithm, however, smooths the exceptional mutation along with the accidental error and may therefore cause misjudgment; a new LDoS detection method based on an adaptive EWMA (AEWMA) algorithm is thus proposed. The AEWMA algorithm, which uses an adaptive weighting function instead of the constant weighting of the EWMA algorithm, can smooth the accidental error while retaining the exceptional mutation, so it is better suited than EWMA for analyzing and measuring the abnormal distribution of ACK traffic. NS2 simulations show that the AEWMA method can detect LDoS attacks effectively and has low false negative and false positive rates. Experiments based on the DARPA99 datasets show that the AEWMA method is more efficient than the EWMA method.
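The core contrast between a constant-weight EWMA and an adaptive one can be sketched directly. The logistic weighting function and all its constants below are illustrative assumptions, not the paper's exact weighting function: the weight stays small for small normalized deviations (smoothing accidental error) and grows for large ones (retaining exceptional mutations).

```python
import math

def aewma(series, lam_min=0.05, lam_max=0.9, k=2.0, center=1.5):
    """Adaptive EWMA: the smoothing weight lam grows with the normalized
    deviation of the new sample, so accidental noise is smoothed while
    sustained mutations (e.g. an abnormal ACK-traffic shift) are tracked."""
    mean = series[0]
    var = 1e-6                      # running variance used for normalization
    out = [mean]
    for x in series[1:]:
        dev = abs(x - mean) / math.sqrt(var + 1e-12)
        # logistic weighting: lam_min for dev << center, lam_max for dev >> center
        lam = lam_min + (lam_max - lam_min) / (1.0 + math.exp(k * (center - dev)))
        mean = lam * x + (1.0 - lam) * mean
        var = 0.9 * var + 0.1 * (x - mean) ** 2
        out.append(mean)
    return out
```

A flat series stays flat, while a sustained step is picked up within a few samples instead of being averaged away.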
Kou, Jisheng; Sun, Shuyu
2014-01-01
The gradient theory for the surface tension of simple fluids and mixtures is rigorously analyzed based on mathematical theory. The finite element approximation of surface tension is developed and analyzed, and moreover, an adaptive finite element method based on a physical-based estimator is proposed and it can be coupled efficiently with Newton's method as well. The numerical tests are carried out both to verify the proposed theory and to demonstrate the efficiency of the proposed method. © 2013 Elsevier B.V. All rights reserved.
Janssen, Bä rbel; Kanschat, Guido
2011-01-01
A multilevel method on adaptive meshes with hanging nodes is presented, and the additional matrices appearing in the implementation are derived. Smoothers of overlapping Schwarz type are discussed; smoothing is restricted to the interior of the subdomains refined to the current level; thus it has optimal computational complexity. When applied to conforming finite element discretizations of elliptic problems and Maxwell equations, the method's convergence rates are very close to those for the nonadaptive version. Furthermore, the smoothers remain efficient for high order finite elements. We discuss the implementation in a general finite element code using the example of the deal.II library. © 2011 Society for Industrial and Applied Mathematics.
A projection-adapted cross entropy (PACE) method for transmission network planning
Energy Technology Data Exchange (ETDEWEB)
Eshragh, Ali; Filar, Jerzy [University of South Australia, School of Mathematics and Statistics, Mawson Lakes, SA (Australia); Nazar, Asef [University of South Australia, Institute for Sustainable Systems Technologies, School of Mathematics and Statistics, Mawson Lakes, SA (Australia)
2011-06-15
In this paper, we propose an adaptation of the cross entropy (CE) method called projection-adapted CE (PACE) to solve a transmission expansion problem that arises in the management of national and provincial electricity grids. The aim of the problem is to find an expansion policy that is both economical and operational from the technical perspective. Often, the transmission network expansion problem is mathematically formulated as a mixed integer nonlinear program that is very challenging algorithmically. The challenge originates from the fact that a global optimum should be found despite the presence of a possibly huge number of local optima. The PACE method shows promise in solving global optimization problems regardless of continuity or other assumptions. In our approach, we sample the integer variables using the CE mechanism, and solve LPs to obtain matching continuous variables. Numerical results, on selected test systems, demonstrate the potential of this approach. (orig.)
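The CE sampling loop over the integer decisions can be sketched on a toy problem. In PACE each sampled integer decision would additionally be completed by an LP over the continuous variables; here a plain cost function stands in for that inner evaluation, and all parameter values are illustrative assumptions.

```python
import random

def cross_entropy_binary(cost, n_bits, n_samples=200, n_elite=20,
                         n_iters=30, seed=0):
    """Cross-entropy method sketch for a binary expansion-decision vector:
    sample candidate plans from independent Bernoulli distributions, keep
    the elite (lowest-cost) samples, and refit the sampling probabilities."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                      # Bernoulli sampling probabilities
    for _ in range(n_iters):
        samples = [[1 if rng.random() < pi else 0 for pi in p]
                   for _ in range(n_samples)]
        samples.sort(key=cost)              # rank candidate expansion plans
        elite = samples[:n_elite]           # keep the best-scoring decisions
        # update the sampling distribution toward the elite set
        p = [sum(s[i] for s in elite) / n_elite for i in range(n_bits)]
    best = [1 if pi >= 0.5 else 0 for pi in p]
    return best, cost(best)

# toy inner cost: Hamming distance to a known-good plan
target = [1, 0, 1, 1, 0]
best, c = cross_entropy_binary(lambda s: sum(a != b for a, b in zip(s, target)),
                               n_bits=5)
```

The sampling distribution quickly concentrates on the optimum even though the cost function is evaluated as a black box.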
Directory of Open Access Journals (Sweden)
Hui Liu
2015-01-01
The key problem in computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues quickly and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between central pixels and single neighboring pixels and the spatial similarity between central pixels and their neighborhood, effectively improving the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method achieves more accurate segmentation of vascular-adhesion, pleural-adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms.
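The role of a spatial function in FCM-style clustering can be illustrated with one membership-update step. This is a generic spatially-constrained FCM sketch, not the paper's enhanced function: the 8-neighbourhood, and the exponents p and q, are the usual sFCM-style assumptions.

```python
def spatial_fcm_step(u, p=1, q=2):
    """One spatial-enhancement step: re-weight each pixel's membership
    u[c][i][j] by the summed class membership over its 8-neighbourhood,
    then renormalize across classes, so isolated noisy memberships are
    pulled toward the locally dominant class."""
    C, H, W = len(u), len(u[0]), len(u[0][0])
    new = [[[0.0] * W for _ in range(H)] for _ in range(C)]
    for i in range(H):
        for j in range(W):
            weighted = []
            for c in range(C):
                # spatial function: membership mass of class c around (i, j)
                h = sum(u[c][ii][jj]
                        for ii in range(max(0, i - 1), min(H, i + 2))
                        for jj in range(max(0, j - 1), min(W, j + 2)))
                weighted.append((u[c][i][j] ** p) * (h ** q))
            total = sum(weighted) or 1.0
            for c in range(C):
                new[c][i][j] = weighted[c] / total
    return new
```

An ambiguous centre pixel surrounded by confident class-0 pixels is reassigned firmly to class 0 after a single step.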
An integration time adaptive control method for atmospheric composition detection of occultation
Ding, Lin; Hou, Shuai; Yu, Fei; Liu, Cheng; Li, Chao; Zhe, Lin
2018-01-01
When the sun is used as the light source for atmospheric composition detection, it must be imaged for accurate identification and stable tracking. During the roughly 180 seconds of an occultation, the intensity of sunlight transmitted through the atmosphere changes greatly: the illumination varies by a factor of nearly 1100 between the maximum and minimum atmospheric paths, and the change can reach a factor of 2.9 per second. It is therefore difficult to control the integration time of the sun-imaging camera. In this paper, a novel adaptive integration-time control method for occultation is presented. Using the distribution of gray values in the image as the reference variable, together with a speed-integral PID control concept, the method solves the adaptive integration-time control problem of high-frequency imaging. Large-dynamic-range automatic control of the integration time during occultation can thus be achieved.
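The control loop can be sketched as follows. This is a proportional-only sketch of the idea (the paper uses a speed-integral PID), and the gray-level target, gains, hardware limits, and toy brightness model are all assumptions.

```python
def make_exposure_controller(target=128.0, kp=0.5, t_min=1e-5, t_max=0.1):
    """Return an update rule that drives the mean image gray level toward
    `target` by scaling the integration time multiplicatively; the
    multiplicative step keeps the time positive across a large dynamic range."""
    def update(t_int, gray_mean):
        err = (target - gray_mean) / target   # normalized brightness error
        t_new = t_int * (1.0 + kp * err)      # multiplicative P-step
        return min(max(t_new, t_min), t_max)  # clamp to hardware limits
    return update

# closed-loop toy: sensor gray level is proportional to integration time
ctrl = make_exposure_controller()
t = 1e-3
for _ in range(40):
    gray = min(255.0, 20000.0 * t)   # hypothetical scene brightness, saturating at 255
    t = ctrl(t, gray)
```

Starting either under- or over-exposed (including fully saturated frames), the loop settles where the mean gray level matches the target.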
An object-oriented decomposition of the adaptive-hp finite element method
Energy Technology Data Exchange (ETDEWEB)
Wiley, J.C.
1994-12-13
Adaptive-hp methods are those which use a refinement control strategy driven by a local error estimate to locally modify the element size, h, and polynomial order, p. The result is an unstructured mesh in which each node may be associated with a different polynomial order and which generally require complex data structures to implement. Object-oriented design strategies and languages which support them, e.g., C++, help control the complexity of these methods. Here an overview of the major classes and class structure of an adaptive-hp finite element code is described. The essential finite element structure is described in terms of four areas of computation each with its own dynamic characteristics. Implications of converting the code for a distributed-memory parallel environment are also discussed.
Directory of Open Access Journals (Sweden)
Hailun Wang
2017-01-01
Support vector regression is widely used in fault diagnosis of rolling bearings. A new model-parameter selection method for support vector regression, based on adaptive fusion of a mixed kernel function, is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined together as the parameters of the state vector, so the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters, thereby realizing adaptive selection of the mixed kernel function's weighting coefficients, the kernel parameters, and the regression parameters. Compared with a single kernel function, unscented Kalman filter (UKF) support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.
Energy Technology Data Exchange (ETDEWEB)
Sheng, Qin, E-mail: Qin_Sheng@baylor.edu [Department of Mathematics and Center for Astrophysics, Space Physics and Engineering Research, Baylor University, One Bear Place, Waco, TX 76798-7328 (United States); Sun, Hai-wei, E-mail: hsun@umac.mo [Department of Mathematics, University of Macau (Macao)
2016-11-15
This study concerns the asymptotic stability of an eikonal, or ray, transformation based Peaceman–Rachford splitting method for solving the paraxial Helmholtz equation with high wave numbers. Arbitrary nonuniform grids are considered in transverse and beam propagation directions. The differential equation targeted has been used for modeling propagations of high intensity laser pulses over a long distance without diffractions. Self-focusing of high intensity beams may be balanced with the de-focusing effect of created ionized plasma channel in the situation, and applications of grid adaptations are frequently essential. It is shown rigorously that the fully discretized oscillation-free decomposition method on arbitrary adaptive grids is asymptotically stable with a stability index one. Simulation experiments are carried out to illustrate our concern and conclusions.
Adapted Method for Separating Kinetic SZ Signal from Primary CMB Fluctuations
Directory of Open Access Journals (Sweden)
Forni Olivier
2005-01-01
In this first attempt to extract a map of the kinetic Sunyaev-Zel'dovich (KSZ) temperature fluctuations from the cosmic microwave background (CMB) anisotropies, we use a method which is based on simple and minimal assumptions. We first focus on the intrinsic limitations of the method due to the cosmological signal itself. We demonstrate using simulated maps that the reconstructed KSZ maps are in quite good agreement with the original input signal, with a correlation coefficient between original and reconstructed maps of on average, and an error on the standard deviation of the reconstructed KSZ map of only % on average. To achieve these results, our method relies on the fact that a first-step component separation provides us with (i) a map of Compton parameters for the thermal Sunyaev-Zel'dovich (TSZ) effect of galaxy clusters, and (ii) a map of temperature fluctuations which is the sum of the primary CMB and KSZ signals. Our method benefits from the spatial correlation between the KSZ and TSZ effects, which are both due to the same galaxy clusters. This correlation allows us to use the TSZ map as a spatial template to mask, in the map, the pixels where the clusters must have imprinted an SZ fluctuation. In practice, a series of TSZ thresholds is defined and, for each threshold, we estimate the corresponding KSZ signal by interpolating the CMB fluctuations on the masked pixels. The series of estimated KSZ maps is finally used to reconstruct the KSZ map through the minimisation of a criterion taking into account two statistical properties of the KSZ signal (KSZ dominates over primary anisotropies at small scales; KSZ fluctuations are non-Gaussian distributed). We show that the results are quite sensitive to the effect of beam convolution, especially for large beams, and to corruption by instrumental noise.
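The masking-and-interpolation step can be sketched as follows. This is a 1D toy (real maps are 2D) with made-up values: pixels where the TSZ template exceeds a threshold are masked, and the CMB+KSZ map is re-estimated there by interpolating from unmasked neighbours.

```python
import numpy as np

# Toy of the per-threshold step: mask pixels flagged by the TSZ
# template and fill them by interpolating the surrounding fluctuations.
def interpolate_masked(signal, tsz, threshold):
    x = np.arange(len(signal), dtype=float)
    keep = tsz <= threshold           # unmasked pixels
    return np.interp(x, x[keep], signal[keep])

cmb_ksz = np.array([1.0, 2.0, 10.0, 4.0, 5.0])   # cluster imprint at index 2
tsz = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
filled = interpolate_masked(cmb_ksz, tsz, 0.5)
```

Repeating this for a series of thresholds yields the family of estimated KSZ maps that the criterion minimisation then combines.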
Quantification of organ motion based on an adaptive image-based scale invariant feature method
Energy Technology Data Exchange (ETDEWEB)
Paganelli, Chiara [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133 (Italy); Peroni, Marta [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Paul Scherrer Institut, Zentrum für Protonentherapie, WMSA/C15, CH-5232 Villigen PSI (Italy); Baroni, Guido; Riboldi, Marco [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, strada Campeggi 53, Pavia 27100 (Italy)
2013-11-15
Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter- and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast. Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained in adaptive-SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale vs exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets. Results: The phantom study confirmed the method invariance and robustness properties to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak to peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT
A fully general and adaptive inverse analysis method for cementitious materials
DEFF Research Database (Denmark)
Jepsen, Michael S.; Damkilde, Lars; Lövgren, Ingemar
2016-01-01
The paper presents an adaptive method for inverse determination of the tensile σ - w relationship, direct tensile strength and Young’s modulus of cementitious materials. The method facilitates an inverse analysis with a multi-linear σ - w function. Usually, simple bi- or tri-linear functions...... are applied when modeling the fracture mechanisms in cementitious materials, but the vast development of pseudo-strain hardening, fiber reinforced cementitious materials requires inverse methods capable of treating multi-linear σ - w functions. The proposed method is fully general in the sense that it relies...... of notched specimens and simulated data from a nonlinear hinge model. The paper shows that the results obtained by means of the proposed method are independent of the initial shape of the σ - w function and the initial guess of the tensile strength. The method provides very accurate fits, and the increased...
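A multi-linear σ - w function of the kind the inverse method fits can be represented as piecewise-linear interpolation through a set of control points. The control points below are illustrative assumptions, not fitted values from the paper.

```python
import numpy as np

# Multi-linear sigma-w (bridging stress vs crack opening) function:
# piecewise-linear interpolation through (w, sigma) control points.
def sigma_w(w, w_pts, sigma_pts):
    return float(np.interp(w, w_pts, sigma_pts))

w_pts = [0.0, 0.05, 0.2, 0.5]      # crack opening w [mm] (illustrative)
sigma_pts = [3.0, 1.5, 0.6, 0.0]   # bridging stress sigma [MPa] (illustrative)
```

The inverse analysis then amounts to adjusting the sigma_pts values (and possibly w_pts) until the simulated load-deflection response matches the measured one.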
CARA Risk Assessment Thresholds
Hejduk, M. D.
2016-01-01
Warning remediation threshold (Red threshold): Pc level at which warnings are issued, and active remediation considered and usually executed. Analysis threshold (Green to Yellow threshold): Pc level at which analysis of event is indicated, including seeking additional information if warranted. Post-remediation threshold: Pc level to which remediation maneuvers are sized in order to achieve event remediation and obviate any need for immediate follow-up maneuvers. Maneuver screening threshold: Pc compliance level for routine maneuver screenings (more demanding than regular Red threshold due to additional maneuver uncertainty).
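The tiered thresholds above can be sketched as a simple classification of a collision probability Pc. The numeric values of RED and YELLOW below are placeholders for illustration, not operational CARA settings.

```python
# Illustrative tiered Pc assessment. Threshold values are assumed.
RED = 1e-4      # warning/remediation threshold (assumed value)
YELLOW = 1e-5   # analysis threshold (assumed value)

def assess(pc):
    if pc >= RED:
        return "red: issue warning, consider active remediation"
    if pc >= YELLOW:
        return "yellow: analyze event, seek additional information"
    return "green: no action indicated"
```

The post-remediation and maneuver-screening thresholds described above would sit below and above RED respectively, sizing maneuvers and screening them with extra margin.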
An Adaptive Privacy Protection Method for Smart Home Environments Using Supervised Learning
Directory of Open Access Journals (Sweden)
Jingsha He
2017-03-01
In recent years, smart home technologies have started to be widely used, bringing a great deal of convenience to people’s daily lives. At the same time, privacy issues have become particularly prominent. Traditional encryption methods can no longer meet the needs of privacy protection in smart home applications, since attacks can be launched even without the need for access to the cipher. Rather, attacks can be successfully realized through analyzing the frequency of radio signals, as well as the timestamp series, so that the daily activities of the residents in the smart home can be learnt. Such types of attacks can achieve a very high success rate, making them a great threat to users’ privacy. In this paper, we propose an adaptive method based on sample data analysis and supervised learning (SDASL), to hide the patterns of daily routines of residents that would adapt to dynamically changing network loads. Compared to some existing solutions, our proposed method exhibits advantages such as low energy consumption, low latency, strong adaptability, and effective privacy protection.
An Adaptive Physics-Based Method for the Solution of One-Dimensional Wave Motion Problems
Directory of Open Access Journals (Sweden)
Masoud Shafiei
2015-12-01
In this paper, an adaptive physics-based method is developed for solving wave motion problems in one dimension (i.e., wave propagation in strings, rods and beams). The solution of the problem includes two main parts. In the first part, after discretization of the domain, a physics-based method is developed considering the conservation of mass and the balance of momentum. In the second part, adaptive points are determined using wavelet theory, employing the Deslauriers-Dubuc (D-D) wavelets. In the first step the domain of the problem is discretized into identical cells, taking into consideration the load and characteristics of the structure. After the first trial solution, the D-D interpolation shows the lack and redundancy of points in the domain. These points will be added or eliminated for the next solution. This process may be repeated to obtain an adaptive mesh for each step. Also, a smoothing spline fit is used to eliminate the noisy portion of the solution. Finally, the results of the proposed method are compared with results available in the literature. The comparison shows excellent agreement between the obtained results and those already reported.
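The D-D adaptivity criterion can be sketched as follows: predict each midpoint value from the 4-point Deslauriers-Dubuc interpolating subdivision stencil, and add or keep a point only where the prediction error (the wavelet detail) is large. The tolerance logic is omitted; only the detail computation is shown.

```python
# Deslauriers-Dubuc 4-point detail: the midpoint is predicted as
# (-f[-1] + 9 f[0] + 9 f[1] - f[2]) / 16; the detail is the prediction
# error at the actual midpoint value.
def dd_detail(f_m1, f0, f1, f2, midpoint_value):
    predicted = (-f_m1 + 9.0 * f0 + 9.0 * f1 - f2) / 16.0
    return abs(midpoint_value - predicted)

# the 4-point stencil reproduces cubics exactly, so the detail vanishes
vals = [x ** 3 for x in (0.0, 1.0, 2.0, 3.0)]
detail_smooth = dd_detail(*vals, midpoint_value=1.5 ** 3)
```

Where the solution is smooth the detail is negligible and points can be removed; near sharp wave fronts the detail grows and points are added.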
Han, Dongmei; Xu, Xinyi; Yan, Denghua
2016-04-01
In recent years, global climate change has caused a serious crisis of water resources throughout the world. Climate change affects crop water requirements mainly through variations in temperature: rising temperature directly affects the growing period and phenological stages of crops, and thus changes crop water demand quotas. Methods including an accumulated temperature threshold and climatic tendency rates were adopted, making up for the weakness of phenological observations, to reveal the response of crop phenology during the growing period. Then, using the Penman-Monteith model and crop coefficients from the United Nations Food and Agriculture Organization (FAO), the paper first explored crop water requirements in different growth periods, and further quantitatively forecasted crop water requirements in the Heihe River Basin, China under different climate change scenarios. Results indicate that: (i) the crop phenological changes established with the accumulated temperature threshold method agreed with measured results; (ii) the impacts of climate warming differed among crops, with the growth periods of wheat and corn tending to shorten; (iii) under the climate change scenarios, a temperature increase of 1°C moved the start of the wheat growth period 2 days earlier and shortened the total growth period by 2 days, while wheat water requirements increased by 1.4 mm; corn water requirements decreased by almost 0.9 mm for the same 1°C increase, with the start of the corn growth period moving 3 days earlier and the total growth period shortening by 4 days. Therefore, the contradiction between water supply and water demand will be more pronounced under future climate warming in the Heihe River Basin, China.
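The accumulated-temperature-threshold idea can be sketched as follows: a phenological stage is reached once degree-days above a base temperature accumulate past a crop-specific threshold, so warmer series reach the stage earlier. The base and threshold values below are illustrative assumptions, not the study's calibrated values.

```python
# First day on which accumulated degree-days above `base` reach
# `threshold`; None if the threshold is never reached.
def stage_day(daily_mean_temps, base=10.0, threshold=120.0):
    total = 0.0
    for day, t in enumerate(daily_mean_temps, start=1):
        total += max(t - base, 0.0)   # degree-days contributed today
        if total >= threshold:
            return day
    return None

warm = [15.0] * 30   # 5 degree-days per day
```

Shifting the whole series up by 1°C makes each day contribute more degree-days, which is exactly the earlier-onset, shorter-growth-period effect the results describe.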
Directory of Open Access Journals (Sweden)
Mehdi Neshat
2015-11-01
In this article, the objective is to present effective and optimal strategies for improving the Swallow Swarm Optimization (SSO) method. SSO is one of the best swarm intelligence optimization methods and is inspired by the intelligent behaviors of swallows; it offers a relatively strong method for solving optimization problems. Despite its many advantages, however, SSO suffers from two shortcomings. Firstly, the particles' movement speed is not controlled satisfactorily during the search due to the lack of an inertia weight. Secondly, the acceleration coefficients are not able to strike a balance between local and global search because they are not sufficiently flexible in complex environments. As a result, the SSO algorithm does not provide adequate results on functions such as the Step or Quadric function. Hence, the fuzzy adaptive Swallow Swarm Optimization (FASSO) method is introduced to deal with these problems. Highly accurate results are obtained by using an adaptive inertia weight and by combining two fuzzy logic systems to accurately calculate the acceleration coefficients. High convergence speed, avoidance of falling into local extrema, and a high level of error tolerance are the advantages of the proposed method. FASSO was compared with eleven of the best PSO methods and with SSO on 18 benchmark functions. Finally, significant results were obtained.
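As a stand-in for the adaptive inertia weight FASSO introduces: in the paper a fuzzy logic system sets the weight from search feedback, while here a simple linearly decreasing schedule (an assumption, not the paper's rule) illustrates how an inertia weight damps particle speed over the run.

```python
# Linearly decreasing inertia weight: large early (exploration),
# small late (exploitation). A fuzzy system would set this adaptively.
def inertia_weight(iteration, max_iter, w_max=0.9, w_min=0.4):
    return w_max - (w_max - w_min) * iteration / max_iter

w_start = inertia_weight(0, 100)
w_end = inertia_weight(100, 100)
```

In a velocity update such as v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x), this w is the term SSO lacks and FASSO supplies adaptively.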
An adaptive reentry guidance method considering the influence of blackout zone
Wu, Yu; Yao, Jianyao; Qu, Xiangju
2018-01-01
Reentry guidance has been a popular research topic because it is critical for a successful flight. Because existing guidance methods do not take into account the accumulated navigation error of the Inertial Navigation System (INS) in the blackout zone, this paper proposes an adaptive reentry guidance method to quickly obtain the optimal reentry trajectory with the objective of minimum aerodynamic heating rate; the terminal errors in position and attitude are also reduced. In this method, the whole reentry guidance task is divided into two phases, i.e., the trajectory updating phase and the trajectory planning phase. In the first phase, the idea of model predictive control (MPC) is used, and the receding-horizon optimization procedure ensures an optimal trajectory over the next few seconds. In the trajectory planning phase, after the vehicle has flown out of the blackout zone, the optimal reentry trajectory is obtained by online planning to adapt to the navigation information. An effective swarm intelligence algorithm, the pigeon-inspired optimization (PIO) algorithm, is applied to obtain the optimal reentry trajectory in both phases. Compared to the trajectory updating method alone, the proposed method reduces the terminal error by about 30% considering both position and attitude; in particular, the terminal height error is almost eliminated. Moreover, the PIO algorithm outperforms the particle swarm optimization (PSO) algorithm in both the trajectory updating and trajectory planning phases.
Errors in the estimation method for the rejection of vibrations in adaptive optics systems
Kania, Dariusz
2017-06-01
In recent years the problem of the impact of mechanical vibrations in adaptive optics (AO) systems has been revisited. These signals are damped sinusoidal signals and have a deleterious effect on the system. One software solution to reject the vibrations is an adaptive method called AVC (Adaptive Vibration Cancellation), whose procedure has three steps: estimation of the perturbation parameters, estimation of the frequency response of the plant, and updating of the reference signal to reject or minimize the vibration. In the first step, the choice of estimation method is very important. A very accurate and fast (below 10 ms) method for estimating these three parameters has been presented in several publications in recent years. The method is based on spectrum interpolation and MSD time windows, and it can be used to estimate multifrequency signals. In this paper the estimation method is used in the AVC method to increase the system performance. Several parameters affect the accuracy of the results, e.g., CiR (number of signal periods in a measurement window), N (number of samples in the FFT procedure), H (time window order), SNR, b (number of ADC bits), and γ (damping ratio of the tested signal). Systematic errors increase when N, CiR or H decrease and when γ increases. The systematic error is approximately 10^-10 Hz/Hz for N = 2048 and CiR = 0.1. This paper presents equations that can be used to estimate the maximum systematic errors for given values of H, CiR and N before the start of the estimation process.
Removing damped sinusoidal vibrations in adaptive optics systems using a DFT-based estimation method
Kania, Dariusz
2017-06-01
The problem of vibration rejection in adaptive optics systems is still present in the literature. These undesirable signals emerge from shaking of the system structure, the tracking process, etc., and they are usually damped sinusoidal signals. There are some mechanical solutions to reduce these signals, but they are not very effective. One popular class of software solutions is adaptive methods. An AVC (Adaptive Vibration Cancellation) method has been presented and developed in recent years. The method is based on the estimation of three vibration parameters; the values of frequency, amplitude and phase are essential to produce and adjust a proper signal to reduce or eliminate the vibration signals. This paper presents a fast (below 10 ms) and accurate method for estimating the frequency, amplitude and phase of a multifrequency signal that can be used in the AVC method to increase the AO system performance. The method's accuracy depends on several parameters: CiR (number of signal periods in a measurement window), N (number of samples in the FFT procedure), H (time window order), SNR, THD, b (number of A/D converter bits in a real-time system), γ (the damping ratio of the tested signal), and φ (the phase of the tested signal). Systematic errors increase when N, CiR or H decrease and when γ increases. The systematic error for γ = 0.1%, CiR = 1.1 and N = 32 is approximately 10^-4 Hz/Hz. This paper focuses on systematic errors and on the effect of the signal phase and the value of γ on the results.
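A generic sketch of DFT-based frequency estimation of a damped sinusoid: window the signal, locate the FFT magnitude peak, then refine the frequency by quadratic interpolation of the log-magnitude around the peak bin. The paper's MSD time windows and interpolation formulas are more accurate; this stand-in only illustrates the idea, and the signal parameters are made up.

```python
import numpy as np

# Coarse bin from the FFT peak, then a fractional-bin correction from
# a parabola fitted to the log-magnitudes of the three bins around it.
def estimate_frequency(x, fs):
    n = len(x)
    spectrum = np.abs(np.fft.rfft(x * np.hanning(n)))
    k = int(np.argmax(spectrum[1:-1])) + 1       # skip DC and Nyquist bins
    a, b, c = np.log(spectrum[k - 1:k + 2])
    delta = 0.5 * (a - c) / (a - 2 * b + c)      # fractional bin offset
    return (k + delta) * fs / n

fs = 1000.0                                      # sampling rate [Hz]
t = np.arange(2048) / fs
x = np.exp(-2.0 * t) * np.sin(2 * np.pi * 123.4 * t)   # damped sinusoid
f_hat = estimate_frequency(x, fs)
```

As the abstract notes, the damping ratio γ biases such estimators: heavier damping broadens the spectral peak and increases the systematic error.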
Moving finite elements: A continuously adaptive method for computational fluid dynamics
International Nuclear Information System (INIS)
Glasser, A.H.; Miller, K.; Carlson, N.
1991-01-01
Moving Finite Elements (MFE), a recently developed method for computational fluid dynamics, promises major advances in the ability of computers to model the complex behavior of liquids, gases, and plasmas. Applications of computational fluid dynamics occur in a wide range of scientifically and technologically important fields. Examples include meteorology, oceanography, global climate modeling, magnetic and inertial fusion energy research, semiconductor fabrication, biophysics, automobile and aircraft design, industrial fluid processing, chemical engineering, and combustion research. The improvements made possible by the new method could thus have substantial economic impact. Moving Finite Elements is a moving node adaptive grid method which has a tendency to pack the grid finely in regions where it is most needed at each time and to leave it coarse elsewhere. It does so in a manner which is simple and automatic, and does not require a large amount of human ingenuity to apply it to each particular problem. At the same time, it often allows the time step to be large enough to advance a moving shock by many shock thicknesses in a single time step, moving the grid smoothly with the solution and minimizing the number of time steps required for the whole problem. For 2D problems (two spatial variables) the grid is composed of irregularly shaped and irregularly connected triangles which are very flexible in their ability to adapt to the evolving solution. While other adaptive grid methods have been developed which share some of these desirable properties, this is the only method which combines them all. In many cases, the method can save orders of magnitude of computing time, equivalent to several generations of advancing computer hardware.
Sparse Pseudo Spectral Projection Methods with Directional Adaptation for Uncertainty Quantification
Winokur, J.
2015-12-19
We investigate two methods to build a polynomial approximation of a model output depending on some parameters. The two approaches are based on pseudo-spectral projection (PSP) methods on adaptively constructed sparse grids, and aim at providing a finer control of the resolution along two distinct subsets of model parameters. The control of the error along different subsets of parameters may be needed for instance in the case of a model depending on uncertain parameters and deterministic design variables. We first consider a nested approach where an independent adaptive sparse grid PSP is performed along the first set of directions only, and at each point a sparse grid is constructed adaptively in the second set of directions. We then consider the application of a PSP in the space of all parameters, and introduce directional refinement criteria to provide a tighter control of the projection error along individual dimensions. Specifically, we use a Sobol decomposition of the projection surpluses to tune the sparse grid adaptation. The behavior and performance of the two approaches are compared for a simple two-dimensional test problem and for a shock-tube ignition model involving 22 uncertain parameters and 3 design parameters. The numerical experiments indicate that whereas both methods provide effective means for tuning the quality of the representation along distinct subsets of parameters, PSP in the global parameter space generally requires fewer model evaluations than the nested approach to achieve similar projection error. In addition, the global approach is better suited for generalization to more than two subsets of directions.
A local adaptive method for the numerical approximation in seismic wave modelling
Directory of Open Access Journals (Sweden)
Galuzzi Bruno G.
2017-12-01
We propose a new numerical approach for the solution of the 2D acoustic wave equation to model the predicted data in the field of active-source seismic inverse problems. The method consists of using an explicit finite difference technique with an adaptive order of approximation of the spatial derivatives that takes into account the local velocity at the grid nodes. Testing our method by simulating the recorded seismograms in a marine seismic acquisition, we found that the low computational time and the low approximation error of the proposed approach make it suitable in the context of seismic inversion problems.
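A toy version of the adaptive-order idea: choose the spatial finite-difference order at each grid node from the local velocity, using a longer (higher-order) stencil where the velocity is low and the wavefield varies more rapidly per grid spacing. The two-level mapping and the 2000 m/s cutoff are assumptions for illustration; the paper's rule is more refined.

```python
import numpy as np

# Per-node stencil order selected from the local velocity model:
# low-velocity nodes get a higher-order (longer) stencil.
def local_fd_order(velocity, v_ref=2000.0):
    return np.where(velocity < v_ref, 8, 4)

v_model = np.array([1500.0, 1800.0, 2500.0, 3000.0])   # node velocities [m/s]
orders = local_fd_order(v_model)
```

Keeping the stencil short where the velocity is high is what buys the low computational time the abstract reports.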
A wavelet domain adaptive image watermarking method based on chaotic encryption
Wei, Fang; Liu, Jian; Cao, Hanqiang; Yang, Jun
2009-10-01
Digital watermarking is a specific branch of steganography that can be used in various applications and provides a novel way to solve security problems for multimedia information. In this paper, we propose a wavelet-domain adaptive image digital watermarking method using chaotic stream encryption and human visual properties. The secret information, which can be seen as a watermark, is hidden into a host image that can be publicly accessed, so the transportation of the secret information will not attract the attention of illegal receivers. The experimental results show that the method is imperceptible and robust against some image processing operations.
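A chaotic stream cipher of the kind used to encrypt the watermark can be sketched with a logistic map: iterating the map from a secret initial condition yields a bit stream that is XORed with the watermark bits. The map parameter and seed below are illustrative, and a real scheme would also embed the result into wavelet coefficients, which is omitted here.

```python
# Logistic-map keystream: x -> r*x*(1-x), thresholded to bits.
def logistic_keystream(x0, n, r=3.99):
    x, bits = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

def xor_bits(data, key):
    return [d ^ k for d, k in zip(data, key)]

watermark = [1, 0, 1, 1, 0, 0, 1, 0]
key = logistic_keystream(0.31415926, len(watermark))
cipher = xor_bits(watermark, key)      # encrypted watermark
recovered = xor_bits(cipher, key)      # XOR with the same key decrypts
```

Because the map is sensitive to the initial condition, only a receiver knowing the exact seed can regenerate the keystream and recover the watermark.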
An Adaptive Dense Matching Method for Airborne Images Using Texture Information
Directory of Open Access Journals (Sweden)
ZHU Qing
2017-01-01
Semi-global matching (SGM) is essentially a discrete optimization of the disparity value of each pixel under the assumption of disparity continuity. SGM controls the influence of disparity discontinuities through a set of penalty parameters. With smaller parameters, the continuity constraint is weakened, which causes significant noise in planar and textureless areas, reflected as fluctuations in the final surface reconstruction. On the other hand, larger parameters impose too strong a continuity constraint, which may lead to loss of sharp features. To address this problem, this paper proposes an adaptive dense stereo matching method for airborne images using texture information. Firstly, the texture is quantified, and under the assumption that disparity variation is directly proportional to the texture information, the adaptive parameters are gauged accordingly. Secondly, SGM is adopted to optimize the discrete disparities using the adaptively tuned parameters. Experimental evaluations using the ISPRS benchmark dataset and images obtained by the SWDC-5 revealed that the proposed method significantly improves the visual quality of the point clouds.
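The adaptive-parameter idea can be sketched as follows: quantify texture as the local intensity standard deviation and scale the SGM smoothness penalties P1/P2 inversely with it, so textureless areas get stronger continuity constraints. The inverse scaling law and the cap are assumptions for illustration, not the paper's calibrated mapping.

```python
import numpy as np

# Texture-adaptive SGM penalties: low texture (low local std) yields
# larger P1/P2 (stronger smoothing), high texture yields smaller ones.
def adaptive_penalties(patch, p1_base=8.0, p2_base=32.0, eps=1e-3):
    texture = float(np.std(patch))
    scale = 1.0 / (texture + eps)
    scale = min(scale, 4.0)          # cap the boost in flat regions
    return p1_base * scale, p2_base * scale
```

In a full pipeline these per-pixel penalties replace the global constants in the SGM path-cost aggregation.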
Luo, Yuan; Wang, Bo-yu; Zhang, Yi; Zhao, Li-ming
2018-03-01
Under different illuminations and random noise, the local texture features of a face image cannot be completely described because the threshold of the local ternary pattern (LTP) cannot be calculated adaptively; to address this, a local three-value model, the improved adaptive local ternary pattern (IALTP), is proposed. Firstly, the difference function between the center pixel and the neighborhood pixel weights is established to obtain the statistical characteristics of the center pixel and the neighborhood pixels. Secondly, an adaptive gradient descent iterative function is established to calculate the difference coefficient, which is defined as the threshold of the IALTP operator. Finally, the mean and standard deviation of the pixel weights of the local region are used as the coding mode of IALTP. To reflect the overall properties of the face and reduce the feature dimension, two-directional two-dimensional PCA ((2D)2PCA) is adopted. The IALTP is used to extract local texture features of the eye and mouth areas. After combining the global features and local features, the fused features (IALTP+) are obtained. The experimental results on the Extended Yale B and AR standard face databases indicate that under different illuminations and random noise the proposed algorithm is more robust than others, and the feature dimension is smaller. The shortest running time reaches 0.3296 s, and the highest recognition rate reaches 97.39%.
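Basic LTP coding, which the IALTP operator builds on, can be sketched as follows: neighbours within ±t of the center pixel code 0, those above center+t code +1, those below center-t code -1. In the paper the threshold t is learned adaptively; here it is fixed for illustration.

```python
import numpy as np

# Ternary codes of the neighbours of a center pixel for threshold t.
def ltp_codes(center, neighbours, t):
    codes = np.zeros(len(neighbours), dtype=int)
    codes[neighbours > center + t] = 1
    codes[neighbours < center - t] = -1
    return codes

nb = np.array([90, 98, 103, 120])   # illustrative neighbour intensities
codes = ltp_codes(100, nb, 5)
```

The adaptive step the paper contributes amounts to replacing the fixed t=5 above with a value obtained from the gradient-descent iteration on the local statistics.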
An adaptive segment method for smoothing lidar signal based on noise estimation
Wang, Yuzhao; Luo, Pingping
2014-10-01
An adaptive segmentation smoothing method (ASSM) is introduced in this paper to smooth the signal and suppress the noise. In the ASSM, the noise level is defined as 3σ of the background signal. An integer N is defined for finding the changing positions in the signal curve: if the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows are fixed; the ASSM instead derives changing end points for different signals, so the smoothing windows can be set adaptively. The windows are always set to half of the segment length, and then average smoothing is applied within each segment. An iterative process is required to reduce the end-point aberration effect of average smoothing; two or three iterations are enough. In the ASSM, the signals are smoothed in the spatial domain rather than the frequency domain, which avoids frequency-domain disturbance. In the experimental work, a lidar echo was simulated, as if created by a spaceborne lidar (e.g., CALIOP), and white Gaussian noise was added to the echo to represent the random noise from the environment and the detector. The ASSM was applied to the noisy echo to filter the noise. In the test, N was set to 3 and two iterations were used. The results show that the signal can be smoothed adaptively by the ASSM, but N and the number of iterations might need to be optimized when the ASSM is applied to a different lidar.
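The ASSM segmentation rule described above can be sketched as follows: given the background noise level σ, mark a segment end point wherever the jump between adjacent samples exceeds 3Nσ. The example signal is made up; the segment-wise averaging and iteration are omitted.

```python
import numpy as np

# Indices where |x[i] - x[i-1]| > 3*N*sigma, i.e. the segment end
# points used to split the signal before segment-wise smoothing.
def segment_endpoints(signal, sigma, n=3):
    jumps = np.abs(np.diff(signal))
    return list(np.where(jumps > 3 * n * sigma)[0] + 1)

sig = np.array([0.0, 0.1, 0.05, 5.0, 5.1, 5.02])   # step at index 3
```

With σ = 0.05 and N = 3 the threshold is 0.45, so only the large step is detected and the two flat regions are smoothed independently, preserving the edge.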
Janssen, Bärbel
2011-01-01
A multilevel method on adaptive meshes with hanging nodes is presented, and the additional matrices appearing in the implementation are derived. Smoothers of overlapping Schwarz type are discussed; smoothing is restricted to the interior of the subdomains refined to the current level; thus it has optimal computational complexity. When applied to conforming finite element discretizations of elliptic problems and Maxwell equations, the method's convergence rates are very close to those for the nonadaptive version. Furthermore, the smoothers remain efficient for high order finite elements. We discuss the implementation in a general finite element code using the example of the deal.II library. © 2011 Society for Industrial and Applied Mathematics.
An accurate anisotropic adaptation method for solving the level set advection equation
International Nuclear Information System (INIS)
Bui, C.; Dapogny, C.; Frey, P.
2012-01-01
In the present paper, a mesh adaptation process for solving the advection equation on a fully unstructured computational mesh is introduced, with particular interest in the case where it implicitly describes an evolving surface. This process mainly relies on a numerical scheme based on the method of characteristics. Although low order, this scheme lends itself to a thorough analysis on the theoretical side: it gives rise to an anisotropic error estimate which enjoys a very natural interpretation in terms of the Hausdorff distance between the exact and approximated surfaces. The computational mesh is then adapted according to the metric supplied by this estimate. The whole process achieves good accuracy as far as the interface resolution is concerned. Some numerical features are discussed and several classical examples are presented and commented on in two and three dimensions. (authors)
An adaptive two-stage dose-response design method for establishing proof of concept.
Franchetti, Yoko; Anderson, Stewart J; Sampson, Allan R
2013-01-01
We propose an adaptive two-stage dose-response design where a prespecified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship or proof of concept (PoC) via model-associated statistics. The stage-wise test results are then combined to establish "global" PoC using a conditional error function. Our simulation studies showed good and more robust power in our design method compared to conventional and fixed designs.
A DAFT DL_POLY distributed memory adaptation of the Smoothed Particle Mesh Ewald method
Bush, I. J.; Todorov, I. T.; Smith, W.
2006-09-01
The Smoothed Particle Mesh Ewald method [U. Essmann, L. Perera, M.L. Berkowtz, T. Darden, H. Lee, L.G. Pedersen, J. Chem. Phys. 103 (1995) 8577] for calculating long ranged forces in molecular simulation has been adapted for the parallel molecular dynamics code DL_POLY_3 [I.T. Todorov, W. Smith, Philos. Trans. Roy. Soc. London 362 (2004) 1835], making use of a novel 3D Fast Fourier Transform (DAFT) [I.J. Bush, The Daresbury Advanced Fourier transform, Daresbury Laboratory, 1999] that perfectly matches the Domain Decomposition (DD) parallelisation strategy [W. Smith, Comput. Phys. Comm. 62 (1991) 229; M.R.S. Pinches, D. Tildesley, W. Smith, Mol. Sim. 6 (1991) 51; D. Rapaport, Comput. Phys. Comm. 62 (1991) 217] of the DL_POLY_3 code. In this article we describe software adaptations undertaken to import this functionality and provide a review of its performance.
Solution verification, goal-oriented adaptive methods for stochastic advection–diffusion problems
Almeida, Regina C.; Oden, J. Tinsley
2010-08-01
A goal-oriented analysis of linear, stochastic advection-diffusion models is presented which provides both a method for solution verification as well as a basis for improving results through adaptation of both the mesh and the way random variables are approximated. A class of model problems with random coefficients and source terms is cast in a variational setting. Specific quantities of interest are specified which are also random variables. A stochastic adjoint problem associated with the quantities of interest is formulated and a posteriori error estimates are derived. These are used to guide an adaptive algorithm which adjusts the sparse probabilistic grid so as to control the approximation error. Numerical examples are given to demonstrate the methodology for a specific model problem. © 2010 Elsevier B.V.
Adaptive collocation method for simultaneous heat and mass diffusion with phase change
International Nuclear Information System (INIS)
Chawla, T.C.; Leaf, G.; Minkowycz, W.J.; Pedersen, D.R.; Shouman, A.R.
1983-01-01
The present study is carried out to determine melting rates of a lead slab of various thicknesses by contact with sodium coolant and to evaluate the extent of penetration and the mixing rates of molten lead into liquid sodium by molecular diffusion alone. The study shows that these two calculations cannot be performed simultaneously without the use of adaptive coordinates, which cause considerable stretching of the physical coordinates for mass diffusion. Because of the large difference in densities of these two liquid metals, the traditional constant-density approximation for the calculation of mass diffusion cannot be used for studying their interdiffusion. The use of the orthogonal collocation method along with adaptive coordinates produces extremely accurate results, as ascertained by comparison with the existing analytical solutions for the concentration distribution in the case of the constant-density approximation and for the melting rates in the case of an infinite lead slab.
Modeling of heterogeneous elastic materials by the multiscale hp-adaptive finite element method
Klimczak, Marek; Cecot, Witold
2018-01-01
We present an enhancement of the multiscale finite element method (MsFEM) by combining it with the hp-adaptive FEM. Such a discretization-based homogenization technique is a versatile tool for modeling heterogeneous materials with fast oscillating elasticity coefficients. No assumption on periodicity of the domain is required. In order to avoid direct, so-called overkill mesh computations, a coarse mesh with effective stiffness matrices is used and special shape functions are constructed to account for the local heterogeneities at the micro resolution. The automatic adaptivity (hp-type at the macro resolution and h-type at the micro resolution) increases computational efficiency. In this paper, details of the modified MsFEM are given and a numerical test on a Fichera corner domain is presented in order to validate the proposed approach.
A novel ECG data compression method based on adaptive Fourier decomposition
Tan, Chunyu; Zhang, Liming
2017-12-01
This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.
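The two figures of merit quoted above can be computed as follows; this is a minimal sketch of the standard CR and PRD definitions, and the AFD decomposition itself is not reproduced here:

```python
import math

def prd_percent(x, x_rec):
    """Percentage root-mean-square difference between the original
    signal x and its reconstruction x_rec."""
    num = sum((a - b) ** 2 for a, b in zip(x, x_rec))
    den = sum(a ** 2 for a in x)
    return 100.0 * math.sqrt(num / den)

def compression_ratio(original_bits, compressed_bits):
    """CR: size of the raw record divided by the size of the coded record."""
    return original_bits / compressed_bits
```

A perfect reconstruction gives PRD = 0%, and discarding the signal entirely gives PRD = 100%, which bounds the scale the 1.47% average sits on.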
A Cartesian Adaptive Level Set Method for Two-Phase Flows
Ham, F.; Young, Y.-N.
2003-01-01
In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented, and its strengths and weaknesses relative to other free surface methods are reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases, including 3D drop breakup in an impulsively accelerated free stream and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.
Adaptive moving grid methods for two-phase flow in porous media
Dong, Hao
2014-08-01
In this paper, we present an application of the moving mesh method for approximating numerical solutions of the two-phase flow model in porous media. The numerical schemes combine a mixed finite element method and a finite volume method, which can handle the nonlinearities of the governing equations in an efficient way. The adaptive moving grid method is then used to distribute more grid points near the sharp interfaces, which enables us to obtain accurate numerical solutions with fewer computational resources. The numerical experiments indicate that the proposed moving mesh strategy could be an effective way to approximate two-phase flows in porous media. © 2013 Elsevier B.V. All rights reserved.
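Concentrating grid points near sharp interfaces is typically done by equidistributing a monitor function over the mesh cells; here is a minimal 1-D sketch of that idea (the monitor function and node counts are illustrative, not taken from the paper):

```python
import bisect
import math

def equidistribute(monitor, a, b, n_nodes, n_fine=2000):
    """Place n_nodes mesh points in [a, b] so that each cell carries an
    equal share of the integral of the monitor function (trapezoid rule
    on a fine background grid, then inversion of the cumulative sum)."""
    xs = [a + (b - a) * i / n_fine for i in range(n_fine + 1)]
    m = [monitor(x) for x in xs]
    cum = [0.0]
    for i in range(n_fine):
        cum.append(cum[-1] + 0.5 * (m[i] + m[i + 1]) * (xs[i + 1] - xs[i]))
    total = cum[-1]
    nodes = []
    for k in range(n_nodes):
        target = total * k / (n_nodes - 1)
        j = min(bisect.bisect_left(cum, target), n_fine)
        if j == 0:
            nodes.append(xs[0])
        else:
            # linear interpolation inside the fine cell containing target
            t = (target - cum[j - 1]) / (cum[j] - cum[j - 1])
            nodes.append(xs[j - 1] + t * (xs[j] - xs[j - 1]))
    return nodes
```

A constant monitor reproduces a uniform mesh, while a monitor peaked at an interface location pulls the interior nodes toward it, which is exactly the clustering behavior the paper exploits for the saturation fronts.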
Directory of Open Access Journals (Sweden)
Linhai Gan
2017-01-01
Full Text Available The random matrix (RM) method is widely applied for group target tracking. The assumption in the conventional RM method that the group extension remains invariant is no longer valid, as the orientation of the group varies rapidly while it is maneuvering; thus, a new approach with predicted group extension is derived here. To match the group maneuvering, a best model augmentation (BMA) method is introduced. The existing BMA method uses a fixed basic model set, which may lead to poor performance when it cannot ensure basic coverage of the true motion modes. Here, a maneuvering group target tracking algorithm is proposed in which the group extension prediction and the BMA adaptation are exploited. The performance of the proposed algorithm is illustrated by simulation.
Threshold quantum cryptography
International Nuclear Information System (INIS)
Tokunaga, Yuuki; Okamoto, Tatsuaki; Imoto, Nobuyuki
2005-01-01
We present the concept of threshold collaborative unitary transformation or threshold quantum cryptography, which is a kind of quantum version of threshold cryptography. Threshold quantum cryptography states that classical shared secrets are distributed to several parties and a subset of them, whose number is greater than a threshold, collaborates to compute a quantum cryptographic function, while keeping each share secret inside each party. The shared secrets are reusable if no cheating is detected. As a concrete example of this concept, we show a distributed protocol (with threshold) of conjugate coding.
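The classical ingredient here, distributing a secret among n parties so that any t of them can reconstruct it, is Shamir's threshold scheme; the small sketch below works over a prime field and does not, of course, capture the quantum collaboration layer:

```python
import random

P = 2**61 - 1  # a Mersenne prime modulus (illustrative field size)

def share_secret(secret, n, t, rng=random.Random(0)):
    """Shamir (t, n) threshold sharing: a random degree-(t-1)
    polynomial with constant term `secret`, evaluated at x = 1..n."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = (num * (-xj)) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any t of the n shares reconstruct the secret exactly, while fewer than t reveal nothing, which is the "greater than a threshold" collaboration property the abstract refers to.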
Gupta, Joyeeta; Termeer, Catrien; Klostermann, Judith; Meijerink, Sander; van den Brink, Margo; Jong, Pieter; Nooteboom, Sibout; Bergsma, Emmy
2010-01-01
Climate change potentially brings continuous and unpredictable changes in weather patterns. Consequently, it calls for institutions that promote the adaptive capacity of society and allow society to modify its institutions at a rate commensurate with the rate of environmental change. Institutions,
Directory of Open Access Journals (Sweden)
González Delgado Ángel
2012-06-01
Full Text Available In the biodiesel production process from microalgae, the cell disruption and lipid extraction stages are important for obtaining triglycerides that can be transesterified to biodiesel and glycerol. In this work, the Bligh & Dyer method was adapted for lipid extraction from native microalgae, using organosolv pretreatment or acid hydrolysis as the cell disruption mechanism to improve the extraction process. Chloroform, methanol and water are the solvents employed in the Bligh & Dyer extraction method. The microalgae species Botryococcus braunii, Nannocloropsis, Closterium, Guinardia and Amphiprora were used in the experimental work. By adapting the method, the best extraction conditions were found to be: a 1:20 biomass/solvent ratio, an initial solvent ratio of CHCl3:CH3OH:H2O of 1:2:0, stirring at 5000 rpm for 14 minutes and centrifugation at 3400 rpm for 15 minutes. The cell disruption mechanisms made it possible to obtain extracts with high lipid content after performing the extraction with the Bligh & Dyer method, but significantly decreased the total extraction yield. Finally, the fatty acid profiles showed that the Botryococcus braunii species contains the highest acylglycerol percentage area, suitable for the production of biodiesel.
Patched based methods for adaptive mesh refinement solutions of partial differential equations
Energy Technology Data Exchange (ETDEWEB)
Saltzman, J.
1997-09-02
This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, which was "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs, with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes. These schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundaries and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods, and elliptically constrained flow solution methods with diffusion.
AK-SYS: An adaptation of the AK-MCS method for system reliability
International Nuclear Information System (INIS)
Fauriat, W.; Gayton, N.
2014-01-01
A great deal of research work has been proposed over the last two decades to evaluate the probability of failure of a structure involving a very time-consuming mechanical model. Surrogate model approaches based on Kriging, such as the Efficient Global Reliability Analysis (EGRA) or the Active learning and Kriging-based Monte-Carlo Simulation (AK-MCS) methods, are very efficient and each has advantages of its own. EGRA is well suited to evaluating small probabilities, as the surrogate can be used to classify any population. AK-MCS is built in relation to a given population and requires no optimization program for the active learning procedure to be performed. It is therefore easier to implement and more likely to spend computational effort on areas with a significant probability content. When assessing system reliability, analytical approaches and first-order approximations are widely used in the literature. However, in the present paper we rather focus on sampling techniques and, considering the recent adaptation of the EGRA method for systems, a strategy is presented to adapt the AK-MCS method for system reliability. The AK-SYS method, "Active learning and Kriging-based SYStem reliability method", is presented. Its high efficiency and accuracy are illustrated via various examples.
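The active-learning step shared by these Kriging-based methods ranks the Monte-Carlo points by how uncertain the surrogate's safe/fail classification is; a sketch of the AK-MCS learning function U follows, where the Kriging predictor (means and standard deviations) is assumed given and the stopping cutoff of 2 is the commonly used value:

```python
def u_learning_step(means, stds, u_stop=2.0):
    """Given Kriging means and standard deviations at the Monte-Carlo
    population, pick the point with the smallest U = |mu| / sigma (the
    most doubtful sign); learning stops once min U >= u_stop."""
    us = [abs(m) / s for m, s in zip(means, stds)]
    best = min(range(len(us)), key=us.__getitem__)
    return best, us[best], us[best] >= u_stop
```

The selected point is then evaluated with the expensive mechanical model and added to the Kriging design, and the loop repeats until the stopping criterion holds.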
Data-adapted moving least squares method for 3-D image interpolation
International Nuclear Information System (INIS)
Jang, Sumi; Lee, Yeon Ju; Jeong, Byeongseon; Nam, Haewon; Lee, Rena; Yoon, Jungho
2013-01-01
In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that can improve the classical one. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998). (Each slice is first assumed to be unknown then interpolated by each method. The performance of each interpolation method is assessed statistically.) The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons. (paper)
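The local best approximation by low-degree polynomials can be sketched in one dimension: a weighted least-squares line centered at the evaluation point with a Gaussian weight. The bandwidth and linear basis here are illustrative; the paper's data-adapted modification of the weights is not reproduced:

```python
import math

def mls_eval(x0, xs, ys, h=1.0):
    """Moving least squares with linear basis [1, x - x0] and Gaussian
    weights exp(-((x - x0)/h)^2); the value at x0 is the constant
    coefficient a0 of the local weighted fit (2x2 normal equations)."""
    w = [math.exp(-((x - x0) / h) ** 2) for x in xs]
    d = [x - x0 for x in xs]
    s0 = sum(w)
    s1 = sum(wi * di for wi, di in zip(w, d))
    s2 = sum(wi * di * di for wi, di in zip(w, d))
    b0 = sum(wi * yi for wi, yi in zip(w, ys))
    b1 = sum(wi * di * yi for wi, di, yi in zip(w, d, ys))
    det = s0 * s2 - s1 * s1
    return (b0 * s2 - b1 * s1) / det
```

Because the basis is linear, the scheme reproduces linear data exactly (the polynomial-reproduction property mentioned in the abstract), which is easy to verify at an off-grid point.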
Grudinin , Sergei; Garkavenko , Maria; Kazennov , Andrei
2017-01-01
A new method called Pepsi-SAXS is presented that calculates small-angle X-ray scattering profiles from atomistic models. The method is based on the multipole expansion scheme and is significantly faster compared with other tested methods. In particular, using the Nyquist–Shannon–Kotelnikov sampling theorem, the multipole expansion order is adapted to the size of the model and the resolution of the experimental data. It is argued that by using the adaptive expansion ord...
Fraisier, V; Clouvel, G; Jasaitis, A; Dimitrov, A; Piolot, T; Salamero, J
2015-09-01
Multiconfocal microscopy gives a good compromise between fast imaging and reasonable resolution. However, the low intensity of live fluorescent emitters is a major limitation to this technique. Aberrations induced by the optical setup, especially the mismatch of the refractive index and the biological sample itself, distort the point spread function and further reduce the amount of detected photons. Altogether, this leads to impaired image quality, preventing accurate analysis of molecular processes in biological samples and imaging deep in the sample. The amount of detected fluorescence can be improved with adaptive optics. Here, we used a compact adaptive optics module (adaptive optics box for sectioning optical microscopy), which was specifically designed for spinning disk confocal microscopy. The module overcomes undesired anomalies by correcting for most of the aberrations in confocal imaging. Existing aberration detection methods require prior illumination, which bleaches the sample. To avoid multiple exposures of the sample, we established an experimental model describing the depth dependence of major aberrations. This model allows us to correct for those aberrations when performing a z-stack, gradually increasing the amplitude of the correction with depth. It does not require illumination of the sample for aberration detection, thus minimizing photobleaching and phototoxicity. With this model, we improved both signal-to-background ratio and image contrast. Here, we present comparative studies on a variety of biological samples. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
SYSTEM ANALYSIS OF MAJOR TRENDS IN DEVELOPMENT OF ADAPTIVE TRAFFIC FLOW MANAGEMENT METHODS
Directory of Open Access Journals (Sweden)
A. N. Klimovich
2017-01-01
Full Text Available The adaptive algorithms on which current traffic systems are based have existed for many decades. Information technologies have developed significantly over this period, which makes their application in the field of transport all the more relevant. This paper analyses modern trends in the development of adaptive traffic flow control methods. The most promising directions in the field of intelligent transport systems are reviewed, such as high-speed wireless communication between vehicles and road infrastructure based on technologies such as DSRC and WAVE; traffic jam prediction from features such as traffic flow information, congestion and vehicle velocity using machine learning, fuzzy logic rules and genetic algorithms; and the application of driver assistance systems to increase vehicle autonomy. The advantages of these technologies for the safety, efficiency and usability of transport are shown. A multi-agent approach is described that uses V2I communication between vehicles and an intersection controller to improve control efficiency, thanks to more complete traffic flow information and the possibility of giving orders to individual vehicles. A number of algorithms that use this approach to create a new generation of adaptive transport systems are presented.
A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks
Moraes, Alvaro; Tempone, Raul; Vilanova, Pedro
2016-01-01
In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reactions channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable for coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.
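The "level of activity" classification can be sketched very simply: a channel whose expected number of firings over the step is large is treated with tau-leap, and the rest exactly. The cutoff below is an illustrative stand-in for the paper's heuristic:

```python
def split_channels(propensities, tau, n_min=10.0):
    """Classify reaction channels by their expected number of firings
    a_j * tau over the time step: high-activity channels go to the
    tau-leap set, the rest to the exact-simulation set (the cutoff
    n_min is illustrative, not the paper's heuristic)."""
    fast, slow = [], []
    for j, a in enumerate(propensities):
        (fast if a * tau >= n_min else slow).append(j)
    return fast, slow
```

The split is recomputed as the state evolves, so a channel can migrate between the fast and slow sets during a simulation, which is what makes the classification adaptive.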
Teichmann, A Lina; Nieuwenstein, Mark R; Rich, Anina N
2017-08-01
For digit-color synaesthetes, digits elicit vivid experiences of color that are highly consistent for each individual. The conscious experience of synaesthesia is typically unidirectional: Digits evoke colors but not vice versa. There is an ongoing debate about whether synaesthetes have a memory advantage over non-synaesthetes. One key question in this debate is whether synaesthetes have a general superiority or whether any benefit is specific to a certain type of material. Here, we focus on immediate serial recall and ask digit-color synaesthetes and controls to memorize digit and color sequences. We developed a sensitive staircase method manipulating presentation duration to measure participants' serial recall of both overlearned and novel sequences. Our results show that synaesthetes can activate digit information to enhance serial memory for color sequences. When color sequences corresponded to ascending or descending digit sequences, synaesthetes encoded these sequences at a faster rate than their non-synaesthete counterparts and faster than non-structured color sequences. However, encoding color sequences is approximately 200 ms slower than encoding digit sequences directly, independent of group and condition, which shows that the translation process is time consuming. These results suggest memory advantages in synaesthesia require a modified dual-coding account, in which secondary (synaesthetically linked) information is useful only if it is more memorable than the primary information to be recalled. Our study further shows that duration thresholds are a sensitive method to measure subtle differences in serial recall performance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Music effect on pain threshold evaluated with current perception threshold
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
AIM: Music relieves anxiety and psychotic tension. This effect of music is applied to surgical operations in hospitals and dental offices. It is still unclear whether this effect of music is limited to the psychological aspect rather than the physical aspect, or whether the effect is influenced by the mood or emotion of the listener. To elucidate these issues, we evaluated the effect of music on pain threshold by current perception threshold (CPT) and the profile of mood states (POMS) test. METHODS: Thirty healthy subjects (12 men, 18 women, 25-49 years old, mean age 34.9) were tested. (1) After the POMS test, the pain thresholds of all subjects were evaluated with CPT by Neurometer (Radionics, USA) under 6 conditions: silence, and listening to slow-tempo classical music, nursery music, hard rock music, classical piano music and relaxation music, with 30-second intervals. (2) After the Stroop color word test as the stressor, pain threshold was evaluated with CPT under 2 conditions: silence and listening to slow-tempo classical music. RESULTS: While listening to music, CPT scores increased, especially at the 2 000 Hz level, which is related to compression, warmth and pain sensation. Type of music, preference for the music and stress also affected the CPT score. CONCLUSION: The present study demonstrated that concentration on music raises the pain threshold and that stress and mood influence the effect of music on pain threshold.
Technical Note: A fast online adaptive replanning method for VMAT using flattening filter free beams
Energy Technology Data Exchange (ETDEWEB)
Ates, Ozgur; Ahunbay, Ergun E.; Li, X. Allen, E-mail: ali@mcw.edu [Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin 53226 (United States); Moreau, Michel [Elekta, Inc., Maryland Heights, Missouri 63043 (United States)
2016-06-15
Purpose: To develop a fast replanning algorithm based on segment aperture morphing (SAM) for online replanning of volumetric modulated arc therapy (VMAT) with flattening filter free (FFF) beams. Methods: A software tool was developed to interface with a VMAT research planning system, which enables the input and output of beam and machine parameters of VMAT plans. The SAM algorithm was used to modify multileaf collimator positions for each segment aperture based on the changes of the target from the planning (CT/MR) to daily image [CT/CBCT/magnetic resonance imaging (MRI)]. The leaf travel distance was controlled for large shifts to prevent the increase of VMAT delivery time. The SAM algorithm was tested for 11 patient cases including prostate, pancreatic, and lung cancers. For each daily image set, three types of VMAT plans, image-guided radiation therapy (IGRT) repositioning, SAM adaptive, and full-scope reoptimization plans, were generated and compared. Results: The SAM adaptive plans were found to have improved the plan quality in target and/or critical organs when compared to the IGRT repositioning plans and were comparable to the reoptimization plans based on the data of planning target volume (PTV)-V100 (volume covered by 100% of prescription dose). For the cases studied, the average PTV-V100 was 98.85% ± 1.13%, 97.61% ± 1.45%, and 92.84% ± 1.61% with FFF beams for the reoptimization, SAM adaptive, and repositioning plans, respectively. The execution of the SAM algorithm takes less than 10 s using 16-CPU (2.6 GHz dual core) hardware. Conclusions: The SAM algorithm can generate adaptive VMAT plans using FFF beams with comparable plan qualities as those from the full-scope reoptimization plans based on daily CT/CBCT/MRI and can be used for online replanning to address interfractional variations.
International Nuclear Information System (INIS)
Hategan, Cornel
2002-01-01
Theory of Threshold Phenomena in Quantum Scattering is developed in terms of the Reduced Scattering Matrix. Relationships of different types of threshold anomalies both to nuclear reaction mechanisms and to nuclear reaction models are established. The magnitude of a threshold effect is related to the spectroscopic factor of the zero-energy neutron state. The Theory of Threshold Phenomena, based on the Reduced Scattering Matrix, establishes relationships between different types of threshold effects and nuclear reaction mechanisms: the cusp and non-resonant potential scattering, the s-wave threshold anomaly and compound nucleus resonant scattering, the p-wave anomaly and quasi-resonant scattering. A threshold anomaly related to resonant or quasi-resonant scattering is enhanced provided the neutron threshold state has a large spectroscopic amplitude. The Theory contains, as limiting cases, cusp theories and also results of different nuclear reaction models such as the Charge Exchange, Weak Coupling, Bohr and Hauser-Feshbach models. (author)
Adaptive control system having hedge unit and related apparatus and methods
Johnson, Eric Norman (Inventor); Calise, Anthony J. (Inventor)
2007-01-01
The invention includes an adaptive control system used to control a plant. The adaptive control system includes a hedge unit that receives at least one control signal and a plant state signal. The hedge unit generates a hedge signal based on the control signal, the plant state signal, and a hedge model including a first model having one or more characteristics to which the adaptive control system is not to adapt, and a second model not having the characteristic(s) to which the adaptive control system is not to adapt. The hedge signal is used in the adaptive control system to remove the effect of the characteristic from a signal supplied to an adaptation law unit of the adaptive control system so that the adaptive control system does not adapt to the characteristic in controlling the plant.
International Nuclear Information System (INIS)
Duan Liming; Ye Yong; Zhang Xia; Zuo Jian
2013-01-01
A self-adaptive identification method is proposed for realizing more accurate and efficient judgment about the inner and outer contours of industrial computed tomography (CT) slice images. The convexity-concavity of the single-pixel-wide closed contour is first identified with the angle method. Then, contours with concave vertices are distinguished as inner or outer contours with the ray method, and contours without concave vertices are distinguished with the extreme coordinate value method. The method automatically selects the distinguishing technique by identifying the convexity and concavity of the contours. Thus, the disadvantages of the single distinguishing methods, such as the time consumption of the ray method and the fallibility of the extreme coordinate value method, can be avoided. The experiments prove the adaptability, efficiency, and accuracy of the self-adaptive method. (authors)
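The ray method for deciding whether one closed contour lies inside another reduces to a point-in-polygon test by counting ray crossings; a standard even-odd sketch follows (the coordinates in the usage test are illustrative, not from the paper's CT data):

```python
def point_in_polygon(pt, poly):
    """Even-odd ray casting: cast a horizontal ray to the right of pt
    and count how many polygon edges it crosses; an odd count means
    the point is inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside
```

Applying this test to a vertex of one contour against another contour tells you whether the first is an inner contour of the second, which is the costly step the self-adaptive method reserves for contours with concave vertices.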
Planetary gearbox fault feature enhancement based on combined adaptive filter method
Directory of Open Access Journals (Sweden)
Shuangshu Tian
2015-12-01
Full Text Available The reliability of vibration signals acquired from a planetary gear system (an indispensable part of a wind turbine gearbox) is directly related to the accuracy of fault diagnosis. The complex operating environment leads to many interference signals being included in the vibration signals. Furthermore, both the multiple gears meshing with each other and the differences in transmission route produce strong nonlinearity in the vibration signals, which makes it difficult to eliminate the noise. This article presents a combined adaptive filter method. Taking a delayed signal as the reference signal, the self-adaptive noise cancellation method is adopted to eliminate the white noise. Meanwhile, by applying a Gaussian function to transform the input signal into a high-dimensional feature-space signal, the kernel least mean square algorithm is used to cancel the nonlinear interference. The effectiveness of the method has been verified by simulation signals and test rig signals. For the simulation signal, the signal-to-noise ratio is improved by around 30 dB (white noise) and the amplitude of the nonlinear interference signal is depressed by up to 50%. Experimental results show remarkable improvements and enhanced gear fault features.
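The noise cancellation step is essentially a least-mean-squares (LMS) adaptive filter driven by the delayed reference; here is a linear sketch (tap count and step size are illustrative, and the kernel extension for nonlinear interference is omitted):

```python
import math

def lms_cancel(d, x, n_taps=4, mu=0.05):
    """LMS adaptive noise canceller: the weights adapt so the filtered
    reference x tracks the component of the primary signal d that is
    correlated with it; the error sequence is the cleaned output."""
    w = [0.0] * n_taps
    buf = [0.0] * n_taps
    errors = []
    for n in range(len(d)):
        buf = [x[n]] + buf[:-1]                 # shift reference into the tap line
        y = sum(wi * bi for wi, bi in zip(w, buf))
        e = d[n] - y                            # canceller output (error)
        w = [wi + mu * e * bi for wi, bi in zip(w, buf)]
        errors.append(e)
    return errors, w
```

When the primary signal is fully correlated with the reference, the error converges toward zero, which is how the filter removes the white-noise component while leaving uncorrelated fault features in place.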
Directory of Open Access Journals (Sweden)
Taochang Li
2014-01-01
Automatic steering control is the key factor and essential condition for realizing automatic navigation control of agricultural vehicles. To obtain satisfactory steering control performance, an adaptive sliding mode control method based on a nonlinear integral sliding surface is proposed in this paper for agricultural vehicle steering control. First, the vehicle steering system is modeled as a second-order mathematical model; the system uncertainties and unmodeled dynamics, as well as the external disturbances, are regarded as equivalent disturbances satisfying a certain bound. Second, a transient process of the desired system response is constructed in each navigation control period, and a nonlinear integral sliding surface is designed based on this transient process. The corresponding sliding mode control law is then derived to guarantee fast response with no overshoot in the closed-loop steering control system. Meanwhile, the switching gain of the sliding mode control is adaptively adjusted with a fuzzy control method to alleviate control input chattering. Finally, the effectiveness and superiority of the proposed method are verified by a series of simulations and actual steering control experiments.
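The control structure can be illustrated on a toy second-order steering model. This is a simplified sketch: the gains, the linear sliding surface (the paper uses a nonlinear integral one), and the fixed tanh-smoothed switching term (standing in for the fuzzy gain adaptation) are all our assumptions:

```python
import numpy as np

def simulate(K=5.0, lam=2.0, dt=0.001, T=3.0, target=0.3):
    """Sliding mode steering of th'' = u + d(t) toward a target angle,
    with sliding surface s = e' + lam*e. The switching gain K must
    dominate the disturbance bound (here |d| <= 0.5 < K); tanh smooths
    the sign function to limit chattering."""
    th, dth = 0.0, 0.0
    for k in range(int(T / dt)):
        d = 0.5 * np.sin(10 * k * dt)           # bounded equivalent disturbance
        e = th - target
        s = dth + lam * e                       # sliding surface
        u = -lam * dth - K * np.tanh(s / 0.05)  # equivalent + switching control
        dth += (u + d) * dt                     # integrate th'' = u + d
        th += dth * dt
    return th
```

On the surface s = 0 the error obeys e' = -lam*e, so the steering angle converges to the target without overshoot despite the unknown (but bounded) disturbance.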
A systematic review of gait analysis methods based on inertial sensors and adaptive algorithms.
Caldas, Rafael; Mundt, Marion; Potthast, Wolfgang; Buarque de Lima Neto, Fernando; Markert, Bernd
2017-09-01
The conventional methods to assess human gait are either too expensive or too complex to be applied regularly in clinical practice. To reduce cost and simplify the evaluation, inertial sensors and adaptive algorithms have been utilized, respectively. This paper summarizes studies that applied adaptive, also called artificial intelligence (AI), algorithms to gait analysis based on inertial sensor data, verifying whether they can support clinical evaluation. Articles were identified through searches of the main databases, covering the period from 1968 to October 2016. We identified 22 studies that met the inclusion criteria. The included papers were analyzed with respect to their data acquisition and processing methods using specific questionnaires. Concerning data acquisition, the mean score is 6.1±1.62, which implies that 13 of the 22 papers failed to report relevant outcomes. The quality assessment of the AI algorithms presents an above-average rating (8.2±1.84). Therefore, AI algorithms seem able to support gait analysis based on inertial sensor data. Further research, however, is necessary to enhance and standardize the application in patients, since most of the studies used distinct methods to evaluate healthy subjects. Copyright © 2017 Elsevier B.V. All rights reserved.
Adaptive algorithms for a self-shielding wavelet-based Galerkin method
International Nuclear Information System (INIS)
Fournier, D.; Le Tellier, R.
2009-01-01
The treatment of the energy variable in deterministic neutron transport methods is based on a multigroup discretization, considering the flux and cross-sections to be constant within a group. In this case, a self-shielding calculation is mandatory to correct the cross-sections of resonant isotopes. In this paper, a different approach based on a finite element discretization on a wavelet basis is used. We propose adaptive algorithms constructed from error estimates. Such an approach is applied to within-group scattering source iterations. A first implementation is presented in the special case of the fine structure equation for an infinite homogeneous medium. Extension to spatially-dependent cases is discussed. (authors)
Adaptive grouping for the higher-order multilevel fast multipole method
DEFF Research Database (Denmark)
Borries, Oscar Peter; Jørgensen, Erik; Meincke, Peter
2014-01-01
An alternative parameter-free adaptive approach for the grouping of the basis function patterns in the multilevel fast multipole method is presented, yielding significant memory savings compared to the traditional octree grouping for most discretizations, particularly when using higher-order basis functions. Results from both a uniformly and a nonuniformly meshed scatterer are presented, showing how the technique is worthwhile even for regular meshes, and demonstrating that there is no loss of accuracy in spite of the large reduction in memory requirements and the relatively low computational cost.
A review of some a posteriori error estimates for adaptive finite element methods
Czech Academy of Sciences Publication Activity Database
Segeth, Karel
2010-01-01
Roč. 80, č. 8 (2010), s. 1589-1600 ISSN 0378-4754. [European Seminar on Coupled Problems. Jetřichovice, 08.06.2008-13.06.2008] R&D Projects: GA AV ČR(CZ) IAA100190803 Institutional research plan: CEZ:AV0Z10190503 Keywords : hp-adaptive finite element method * a posteriori error estimators * computational error estimates Subject RIV: BA - General Mathematics Impact factor: 0.812, year: 2010 http://www.sciencedirect.com/science/article/pii/S0378475408004230
Directory of Open Access Journals (Sweden)
I. C. Ramos
2015-10-01
We present the adaptation to non-free boundary conditions of a pseudospectral method based on the (complex) Fourier transform. The method is applied to the numerical integration of the Oberbeck-Boussinesq equations in a Rayleigh-Bénard cell with no-slip boundary conditions for velocity and Dirichlet boundary conditions for temperature. We show the first results of a 2D numerical simulation of dry air convection at high Rayleigh number. These results are the basis for a later study, by the same method, of wet convection in a solar still. Received: 20 November 2014; Accepted: 15 September 2015; Edited by: C. A. Condat, G. J. Sibona; DOI: http://dx.doi.org/10.4279/PIP.070015. Cite as: I C Ramos, C B Briozzo, Papers in Physics 7, 070015 (2015).
Directory of Open Access Journals (Sweden)
Songjun Zeng
2010-01-01
A method for three-dimensional (3D) reconstruction of macromolecular assemblies, the octahedral symmetry-adapted functions (OSAF) method, is introduced in this paper, and a series of formulations for reconstruction by the OSAF method are derived. To verify the feasibility and advantages of the method, two octahedral symmetrical macromolecules, the heat shock protein Degp24 and red-cell L ferritin, were used as examples for reconstruction by the OSAF method. The simulation was designed as follows: 2000 randomly oriented projections of single particles with predefined Euler angles and centers of origin were generated, and then different levels of noise (signal-to-noise ratio S/N = 0.1, 0.5, and 0.8) were added. The structures reconstructed by the OSAF method were in good agreement with the standard models, and the relative errors with respect to the standard structures were very small even at the highest noise level. These results show that the OSAF method is a feasible and efficient approach to reconstructing macromolecular structures and has the ability to suppress the influence of noise.
Directory of Open Access Journals (Sweden)
Saeed Daneshmand
2016-10-01
The use of antenna arrays in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its superior capability to suppress both narrowband and wideband interference. However, the phase distortions resulting from array processing may limit the applicability of these methods for high-precision applications using carrier-phase-based positioning techniques. This paper studies the phase distortions occurring with the adaptive blind beamforming method, in which satellite angle of arrival (AoA) information is not employed in the optimization problem. To cater to non-stationary interference scenarios, the array weights of the adaptive beamformer are continuously updated. The effects of these continuous updates on the tracking parameters of a GNSS receiver are analyzed. The second part of this paper focuses on reducing the phase distortions during the blind beamforming process, in order to allow the receiver to perform carrier-phase-based positioning, by applying a constraint on the structure of the array configuration and by compensating for the array uncertainties. Limitations of previous methods are studied, and a new method is proposed that keeps the simplicity of the blind beamformer structure and, at the same time, reduces tracking degradations while achieving millimetre-level positioning accuracy in interference environments. To verify the applicability of the proposed method and analyze the degradations, array signals corresponding to the GPS L1 band are generated using a combination of hardware and software simulators. Furthermore, the amount of degradation and the performance of the proposed method under different conditions are evaluated based on Monte Carlo simulations.
Directory of Open Access Journals (Sweden)
ZHANG Zhengpeng
2015-10-01
Panoramic image matching with the constraint of local structure-from-motion similarity features is an important method; the process requires multivariate kernel density estimation of the structure-from-motion features using nonparametric mean shift. Proper selection of the kernel bandwidth is a critical step for the convergence speed and accuracy of the matching method. A variable-bandwidth panoramic image matching method with adaptive structure-from-motion features is proposed in this work. First, the bandwidth matrix is defined using the locally adaptive spatial structure of the sampling point in the spatial domain and the optical flow domain. The relaxation diffusion process of the structure-from-motion similarity feature is described by distance weighting of the local optical flow feature vectors. Then the expression of the adaptive multivariate kernel density function is given, and the solution of the mean shift vector, the termination conditions, and the seed point selection method are discussed. Finally, multi-scale SIFT features and structure features are fused to establish a unified panoramic image matching framework. Sphere panoramic images from a vehicle-borne mobile measurement system are chosen for a detailed comparison between fixed and adaptive bandwidths. The results show that the adaptive bandwidth copes well with changes in the inlier ratio and in object space scale. The proposed method realizes an adaptive similarity measure for structure-from-motion features and improves the number of correct matching points and the matching rate; experimental results show our method to be robust.
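A sketch of mean shift with a per-point adaptive bandwidth; the k-nearest-neighbour bandwidth rule below is a common stand-in for the paper's spatial/optical-flow bandwidth matrix, and the 2D point data are illustrative:

```python
import numpy as np

def mean_shift(points, start, k=16, iters=100, tol=1e-6):
    """Mean shift with adaptive bandwidths: each sample's bandwidth is
    its distance to its k-th nearest neighbour, so the kernel is narrow
    in dense regions and wide in sparse ones."""
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    h = np.sort(d, axis=1)[:, k]                 # per-sample bandwidths
    x = np.asarray(start, float)
    for _ in range(iters):
        w = np.exp(-np.sum((pts - x) ** 2, axis=1) / (2 * h ** 2))
        x_new = (w[:, None] * pts).sum(axis=0) / w.sum()  # mean shift step
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

rng = np.random.default_rng(1)
data = np.vstack([rng.normal([0, 0], 0.3, (100, 2)),
                  rng.normal([5, 5], 0.3, (100, 2))])
mode = mean_shift(data, start=[0.5, 0.5])
```

Started near the first cluster, the iteration converges to that cluster's density mode rather than drifting to the distant one, which is the seed-point behaviour the matching framework relies on.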
Identifying Threshold Concepts for Information Literacy: A Delphi Study
Directory of Open Access Journals (Sweden)
Lori Townsend
2016-06-01
This study used the Delphi method to engage expert practitioners on the topic of threshold concepts for information literacy. A panel of experts considered two questions. First, is the threshold concept approach useful for information literacy instruction? The panel unanimously agreed that the threshold concept approach holds potential for information literacy instruction. Second, what are the threshold concepts for information literacy instruction? The panel proposed and discussed over fifty potential threshold concepts, finally settling on six information literacy threshold concepts.
Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P
2015-03-01
Near-field scanning microwave microscopy offers great potential to facilitate the characterization, development, and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with other modalities), one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, and by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the adaptive weights smoothing method, which smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. Published by Elsevier B.V.
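The leveling step can be sketched as iteratively reweighted planar regression: surface features end up with large residuals and are automatically downweighted, so the fitted trend follows the background. The 1/(1+r²) weight and the MAD scale estimate are our choices, not necessarily those of the paper's local regression implementation:

```python
import numpy as np

def robust_level(img, iters=5):
    """Remove a planar trend from an image by iteratively reweighted
    least squares; pixels with large residuals (features, outliers)
    receive small weights and barely influence the fit."""
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    A = np.stack([np.ones(img.size), xx.ravel(), yy.ravel()], axis=1)
    z = img.ravel().astype(float)
    w = np.ones(z.size)
    for _ in range(iters):
        coef = np.linalg.lstsq(A * w[:, None], w * z, rcond=None)[0]
        r = z - A @ coef
        s = np.median(np.abs(r)) * 1.4826 + 1e-12   # robust scale (MAD)
        w = 1.0 / (1.0 + (r / (3 * s)) ** 2)        # downweight features
    return (z - A @ coef).reshape(img.shape)
```

On a tilted image with a bright feature, the returned image has a flat background while the feature's height is preserved, which is the behaviour required before denoising.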
Directory of Open Access Journals (Sweden)
Bingfei Fan
2017-05-01
Magnetic and inertial sensors have been widely used to estimate the orientation of human body segments due to their low cost, compact size and light weight. However, the accuracy of the estimated orientation is easily affected by external factors, especially when the sensor is used in an environment with magnetic disturbances. In this paper, we propose an adaptive method to improve the accuracy of orientation estimation in the presence of magnetic disturbances. The method is based on existing gradient descent algorithms, and it is performed prior to the sensor fusion algorithms. The proposed method includes stationary-state detection and magnetic disturbance severity determination. The stationary-state detection makes the method immune to magnetic disturbances in the stationary state, while the magnetic disturbance severity determination helps to determine the credibility of magnetometer data under dynamic conditions, so as to mitigate the negative effect of the magnetic disturbances. The proposed method was validated through experiments performed on a customized three-axis instrumented gimbal with known orientations. The errors of the proposed method and of the original gradient descent algorithms were calculated and compared. Experimental results demonstrate that in the stationary state the proposed method is completely immune to magnetic disturbances, and that under dynamic conditions the error caused by magnetic disturbance is reduced by 51.2% compared with the original MIMU gradient descent algorithm.
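The disturbance-severity idea can be sketched as a credibility weight computed from field magnitude (and, when gravity is known, dip angle) against local reference values. The reference values, tolerances, linear weighting, and dip-angle sign convention below are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def mag_weight(mag, ref_norm=50.0, ref_dip=65.0, acc=None,
               norm_tol=5.0, dip_tol=10.0):
    """Return a credibility weight in [0, 1] for one magnetometer sample.
    A clean sample matches the local field magnitude (ref_norm, in uT)
    and, if an accelerometer gravity estimate is supplied, the local dip
    angle (ref_dip, degrees); severe disturbance drives the weight to 0,
    so the fusion step can ignore the magnetometer."""
    m = np.linalg.norm(mag)
    err = abs(m - ref_norm) / norm_tol
    if acc is not None:
        g = np.asarray(acc, float) / np.linalg.norm(acc)
        # one common convention: dip = angle between field and horizontal
        dip = np.degrees(np.arcsin(np.clip(np.dot(np.asarray(mag) / m, g),
                                           -1.0, 1.0)))
        err = max(err, abs(dip - ref_dip) / dip_tol)
    return float(np.clip(1.0 - err, 0.0, 1.0))
```

A sensor fusion loop would scale the magnetometer correction term of the gradient descent update by this weight, falling back to gyroscope/accelerometer-only updates when the weight reaches zero.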
An adaptive bin framework search method for a beta-sheet protein homopolymer model
Directory of Open Access Journals (Sweden)
Hoos Holger H
2007-04-01
Background: The problem of protein structure prediction consists of predicting the functional or native structure of a protein given its linear sequence of amino acids. This problem has played a prominent role in the fields of biomolecular physics and algorithm design for over 50 years. Additionally, its importance increases continually as a result of an exponential growth over time in the number of known protein sequences, in contrast to a linear increase in the number of determined structures. Our work focuses on the problem of searching an exponentially large space of possible conformations as efficiently as possible, with the goal of finding a global optimum with respect to a given energy function. This problem plays an important role in the analysis of systems with complex search landscapes, and particularly in the context of ab initio protein structure prediction. Results: In this work, we introduce a novel approach for solving this conformation search problem based on the use of a bin framework for adaptively storing and retrieving promising locally optimal solutions. Our approach provides a rich and general framework within which a broad range of adaptive or reactive search strategies can be realized. Here, we introduce adaptive mechanisms for choosing which conformations should be stored, based on the set of conformations already stored in memory, and for biasing choices when retrieving conformations from memory in order to overcome search stagnation. Conclusion: We show that our bin framework combined with a widely used optimization method, Monte Carlo search, achieves significantly better performance than state-of-the-art generalized ensemble methods for a well-known protein-like homopolymer model on the face-centered cubic lattice.
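The store/retrieve mechanics can be sketched as follows. The acceptance rule (reject near-duplicates unless they improve the energy), the Hamming diversity test, and the rank-biased retrieval are simplified stand-ins for the paper's adaptive mechanisms, with conformations abstracted as strings:

```python
import random

def hamming(a, b):
    """Toy diversity measure between two encoded conformations."""
    return sum(x != y for x, y in zip(a, b))

class BinFramework:
    """Sketch of an adaptive bin of promising conformations: store a
    candidate only if it is diverse or improves on a similar stored
    entry; retrieve with a bias toward low energy to restart a
    stagnated Monte Carlo search."""
    def __init__(self, capacity=10, min_dist=2):
        self.capacity, self.min_dist, self.items = capacity, min_dist, []

    def store(self, conf, energy):
        for c, e in self.items:
            if hamming(c, conf) < self.min_dist and energy >= e:
                return False                   # near-duplicate, no better
        self.items.append((conf, energy))
        self.items.sort(key=lambda ce: ce[1])  # best energy first
        del self.items[self.capacity:]         # enforce capacity
        return True

    def retrieve(self):
        n = len(self.items)
        weights = [n - i for i in range(n)]    # rank-biased selection
        return random.choices(self.items, weights=weights, k=1)[0][0]
```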
Locomotor adaptation to a powered ankle-foot orthosis depends on control method
Directory of Open Access Journals (Sweden)
Gordon Keith E
2007-12-01
Background: We studied human locomotor adaptation to powered ankle-foot orthoses with the intent of identifying differences between two orthosis control methods. The first method used a footswitch to provide bang-bang control (a kinematic control) and the second used a proportional myoelectric signal from the soleus (a physiological control). Both controllers activated an artificial pneumatic muscle providing plantar flexion torque. Methods: Subjects walked on a treadmill for two thirty-minute sessions spaced three days apart under either footswitch control (n = 6) or myoelectric control (n = 6). We recorded lower limb electromyography (EMG), joint kinematics, and orthosis kinetics. We compared stance phase EMG amplitudes, correlation of joint angle patterns, and mechanical work performed by the powered orthosis between the two controllers over time. Results: During steady state at the end of the second session, subjects using proportional myoelectric control had much lower soleus and gastrocnemius activation than the subjects using footswitch control. The substantial decrease in triceps surae recruitment allowed the proportional myoelectric control subjects to walk with ankle kinematics close to normal and to reduce the negative work performed by the orthosis. The footswitch control subjects walked with substantially perturbed ankle kinematics and performed more negative work with the orthosis. Conclusion: These results provide evidence that the choice of orthosis control method can greatly alter how humans adapt to powered orthosis assistance during walking. Specifically, proportional myoelectric control results in larger reductions in muscle activation and in gait kinematics closer to normal compared with footswitch control.
An Efficient Adaptive Window Size Selection Method for Improving Spectrogram Visualization
Directory of Open Access Journals (Sweden)
Shibli Nisar
2016-01-01
The short-time Fourier transform (STFT) is an important technique for the time-frequency analysis of a time-varying signal. The basic approach involves applying a fast Fourier transform (FFT) to the signal multiplied by an appropriate window function with fixed resolution. Selecting an appropriate window size is difficult when no background information about the input signal is known. In this paper, a novel empirical model is proposed that adaptively adjusts the window size for a narrow-band signal using a spectrum sensing technique. For wide-band signals, where a fixed time-frequency resolution is undesirable, the approach adopts the constant-Q transform (CQT). Unlike the STFT, the CQT provides a varying time-frequency resolution, resulting in high spectral resolution at low frequencies and high temporal resolution at high frequencies. A simple but effective framework for switching between the STFT and the CQT is provided. The proposed method also allows the dynamic construction of a filter bank according to user-defined parameters, which helps reduce redundant entries in the filter bank. Results obtained with the proposed method not only improve spectrogram visualization but also reduce the computational cost, and the method achieves 87.71% appropriate window length selection.
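The switching idea can be sketched as a probe-FFT bandwidth estimate driving the front-end choice; the 95% occupied-bandwidth rule, the 200 Hz narrow-band cut-off, and the "four cycles of the occupied band per window" sizing are illustrative assumptions, not the paper's empirical model:

```python
import numpy as np

def pick_window(x, fs, probe=4096, frac=0.95, narrow_hz=200.0):
    """Choose a time-frequency front end from a probe FFT: estimate the
    occupied bandwidth from the cumulative power spectrum and, for a
    narrow-band input, return an STFT window long enough to resolve it;
    wide-band input is routed to a CQT instead."""
    n = min(probe, len(x))
    X = np.abs(np.fft.rfft(x[:n] * np.hanning(n))) ** 2
    f = np.fft.rfftfreq(n, 1.0 / fs)
    c = np.cumsum(X) / X.sum()
    bw = f[np.searchsorted(c, frac)] - f[np.searchsorted(c, 1 - frac)]
    if bw <= narrow_hz:                  # narrow band: long STFT window
        return 'stft', int(2 ** np.ceil(np.log2(4 * fs / max(bw, 1.0))))
    return 'cqt', None                   # wide band: constant-Q transform
```

A pure tone is classified as narrow-band and gets a long power-of-two window, while broadband noise is routed to the CQT branch.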
International Nuclear Information System (INIS)
La Cognata, M.; Spitaleri, C.; Guardo, G. L.; Puglia, S. M. R.; Romano, S.; Spartà, R.; Trippella, O.; Kiss, G. G.; Rogachev, G. V.; Avila, M.; Koshchiy, E.; Kuchera, A.; Santiago, D.; Mukhamedzhanov, A. M.; Lamia, L.
2014-01-01
The 13C(α,n)16O reaction is the neutron source of the main component of the s-process. The astrophysical S(E) factor is dominated by the −3 keV sub-threshold resonance due to the 6.356 MeV level in 17O. Its contribution is still controversial, as extrapolations, e.g., through R-matrix calculations, and indirect techniques, such as the asymptotic normalization coefficient (ANC), yield inconsistent results. Therefore, we have applied the Trojan Horse Method (THM) to the 13C(6Li,n16O)d reaction to measure this contribution. For the first time, the ANC for the 6.356 MeV level has been deduced through the THM, allowing an unprecedented accuracy to be attained. Though a larger ANC for the 6.356 MeV level is measured, our experimental S(E) factor agrees with the most recent extrapolation in the literature in the 140-230 keV energy interval, the accuracy being greatly enhanced thanks to this innovative approach, which merges two well-established indirect techniques, namely the THM and the ANC
International Nuclear Information System (INIS)
Watson, F.V.
1982-01-01
An adaptation of the alternating direction method for coarse-mesh calculations is presented. The algorithm is applicable to two- and three-dimensional problems, the latter being the more interesting case. (E.G.) [pt
Coleman, S.; Hurley, S.; Koliba, C.; Zia, A.; Exler, S.
2014-12-01
Eutrophication and nutrient pollution of surface waters occur within complex governance, social, hydrologic and biophysical basin contexts. The pervasive and perennial nutrient pollution in Lake Champlain Basin, despite decades of efforts, exemplifies problems found across the world's surface waters. Stakeholders with diverse values, interests, and forms of explicit and tacit knowledge determine water quality impacts through land use, agricultural and water resource decisions. Uncertainty, ambiguity and dynamic feedback further complicate the ability to promote the continual provision of water quality and ecosystem services. Adaptive management of water resources and land use requires mechanisms to allow for learning and integration of new information over time. The transdisciplinary Research on Adaptation to Climate Change (RACC) team is working to build regional adaptive capacity in Lake Champlain Basin while studying and integrating governance, land use, hydrological, and biophysical systems to evaluate implications for adaptive management. The RACC team has engaged stakeholders through mediated modeling workshops, online forums, surveys, focus groups and interviews. In March 2014, CSS2CC.org, an interactive online forum to source and identify adaptive interventions from a group of stakeholders across sectors, was launched. The forum, based on the Delphi Method, brings forward the collective wisdom of stakeholders and experts to identify potential interventions and governance designs in response to scientific uncertainty and ambiguity surrounding the effectiveness of any strategy, climate change impacts, and the social and natural systems governing water quality and eutrophication. A Mediated Modeling Workshop followed the forum in May 2014, where participants refined and identified plausible interventions under different governance, policy and resource scenarios. Results from the online forum and workshop can identify emerging consensus across scales and sectors.
Adaptive Finite Element Method Assisted by Stochastic Simulation of Chemical Systems
Cotter, Simon L.; Vejchodský , Tomá š; Erban, Radek
2013-01-01
Stochastic models of chemical systems are often analyzed by solving the corresponding Fokker-Planck equation, which is a drift-diffusion partial differential equation for the probability distribution function. Efficient numerical solution of the Fokker-Planck equation requires adaptive mesh refinements. In this paper, we present a mesh refinement approach which makes use of a stochastic simulation of the underlying chemical system. By observing the stochastic trajectory for a relatively short amount of time, the areas of the state space with nonnegligible probability density are identified. By refining the finite element mesh in these areas, and coarsening elsewhere, a suitable mesh is constructed and used for the computation of the stationary probability density. Numerical examples demonstrate that the presented method is competitive with existing a posteriori methods. © 2013 Society for Industrial and Applied Mathematics.
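The trajectory-guided refinement idea can be sketched on a one-species birth-death process: a short Gillespie simulation reveals where the probability mass lives, and only those state-space cells are flagged for fine meshing. The rate constants, cell width, and probability cut-off are illustrative:

```python
import numpy as np

def ssa_birth_death(k_prod=100.0, k_deg=1.0, x0=0, T=50.0, seed=1):
    """Gillespie (SSA) simulation of 0 -> X (rate k_prod) and X -> 0
    (rate k_deg * x); the stationary law is Poisson(k_prod / k_deg)."""
    rng = np.random.default_rng(seed)
    t, x, states = 0.0, x0, []
    while t < T:
        a_prod, a_deg = k_prod, k_deg * x
        a0 = a_prod + a_deg
        t += rng.exponential(1.0 / a0)          # time to next reaction
        x += 1 if rng.random() < a_prod / a0 else -1
        states.append(x)
    return np.array(states)

def choose_refinement(states, xmax=300, coarse=10):
    """Mark state-space cells visited by the trajectory for fine meshing;
    cells with negligible visit counts keep the coarse resolution."""
    hist, edges = np.histogram(states, bins=xmax // coarse, range=(0, xmax))
    return edges, hist > 0.001 * hist.sum()     # boolean refine mask
```

Cells around the Poisson mean (here 100) are refined, while essentially unvisited cells far in the tail are left coarse, mirroring the paper's refine-where-probability-is-nonnegligible strategy.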
Directory of Open Access Journals (Sweden)
Ghazanfar Shahgholian
2018-01-01
This paper examines the influence of the Static Synchronous Series Compensator (SSSC) on oscillation damping control in the network. The performance of a Flexible AC Transmission System (FACTS) controller depends strongly on its parameters and its location in the network. A new Adaptive Inertia Weight Particle Swarm Optimization (AIWPSO) method is employed to design the parameters of the SSSC-based controller. In the proposed controller, a suitable power system signal, such as the rotor angle, is used as feedback. The AIWPSO technique has high flexibility and a balanced mechanism for local and global search. The proposed controller is compared with a Genetic Algorithm (GA) based controller, which confirms its operation. To show the integrity of the proposed control method, simulations are carried out on single-machine infinite-bus and multi-machine grids under multiple disturbances.
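A sketch of PSO with a linearly decreasing inertia weight, one common adaptive-inertia-weight variant (the paper's exact adaptation rule may differ); the acceleration coefficients, bounds, and the sphere test function are illustrative:

```python
import numpy as np

def aiwpso(f, dim=4, n=20, iters=200, w_max=0.9, w_min=0.4, seed=0):
    """Particle swarm optimization with an inertia weight that decreases
    over the run: large w early favours global exploration, small w late
    favours local refinement around the best known solution."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for it in range(iters):
        w = w_max - (w_max - w_min) * it / iters      # adaptive inertia
        r1, r2 = rng.random((2, n, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

sphere = lambda z: float(np.sum(z ** 2))              # toy objective
best, fmin = aiwpso(sphere)
```

In the paper, the objective would instead be a damping-performance index evaluated from a power system simulation, with the SSSC controller gains as the particle coordinates.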
Le Jeune, L.; Robert, S.; Dumas, P.; Membre, A.; Prada, C.
2015-03-01
In this paper, we propose an ultrasonic adaptive imaging method based on phased-array technology and the synthetic focusing algorithm known as the Total Focusing Method (TFM). The general principle is to image the surface by applying the TFM algorithm in a semi-infinite water medium. The reconstructed surface is then taken into account to form a second TFM image inside the component. In the surface reconstruction step, the TFM algorithm has been optimized to decrease computation time and to limit noise in the water. In the second step, the ultrasonic paths through the reconstructed surface are calculated by Fermat's principle and an iterative algorithm, and the classical TFM is applied to obtain an image inside the component. This paper presents several results of TFM imaging in components of different geometries, and a result obtained with a new type of probe equipped with a flexible wedge filled with water (manufactured by Imasonic).
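For a single homogeneous medium the imaging step reduces to classical delay-and-sum TFM, sketched below; the refracted-path computation through the reconstructed surface via Fermat's principle is omitted, and the array geometry and sampling rate in the test are illustrative:

```python
import numpy as np

def tfm(fmc, elems, grid_x, grid_z, c, fs):
    """Total Focusing Method (single medium): for every image pixel, sum
    the full matrix capture (FMC) signals fmc[tx, rx, sample] at the
    transmit + receive time of flight to that pixel."""
    n = len(elems)
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            d = np.hypot(elems - x, z) / c          # element-to-pixel times
            for tx in range(n):
                for rx in range(n):
                    s = int(round((d[tx] + d[rx]) * fs))
                    if s < fmc.shape[2]:
                        img[iz, ix] += fmc[tx, rx, s]
    return img
```

With synthetic FMC data from a point scatterer, the image maximum lands on the scatterer's pixel, since only there do all transmit-receive pairs sum coherently.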
Using ecological thresholds to inform resource management: current options and future possibilities
Directory of Open Access Journals (Sweden)
Melissa M Foley
2015-11-01
In the face of growing human impacts on ecosystems, scientists and managers recognize the need to better understand thresholds and nonlinear dynamics in ecological systems to help set management targets. However, our understanding of the factors that drive threshold dynamics, and of when and how rapidly thresholds will be crossed, is currently limited in many systems. In spite of these limitations, there are approaches available to practitioners today, including ecosystem monitoring, statistical methods to identify thresholds and indicators, and threshold-based adaptive management, that can be used to help avoid ecological thresholds or restore systems that have crossed them. We briefly review the current state of knowledge and then use real-world examples to demonstrate how resource managers can use available approaches to avoid crossing ecological thresholds. We also highlight new tools and indicators being developed that have the potential to enhance our ability to detect change, predict when a system is approaching an ecological threshold, or restore systems that have already crossed a tipping point.
Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli
2018-01-01
Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient of an aircraft are provided to demonstrate the approximation capability of the proposed approach, in comparison with three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
3D CSEM inversion based on goal-oriented adaptive finite element method
Zhang, Y.; Key, K.
2016-12-01
We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method, which efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt the dual-grid approach, where the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent of the fine meshing generated around the transmitters and receivers by the adaptive finite element method. Moreover, the unstructured inverse mesh efficiently handles multiple-scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the map of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and the potential of the algorithm to deal with
Adaptive spacetime method using Riemann jump conditions for coupled atomistic-continuum dynamics
International Nuclear Information System (INIS)
Kraczek, B.; Miller, S.T.; Haber, R.B.; Johnson, D.D.
2010-01-01
We combine the Spacetime Discontinuous Galerkin (SDG) method for elastodynamics with the mathematically consistent Atomistic Discontinuous Galerkin (ADG) method in a new scheme that concurrently couples continuum and atomistic models of dynamic response in solids. The formulation couples non-overlapping continuum and atomistic models across sharp interfaces by weakly enforcing jump conditions, for both momentum balance and kinematic compatibility, using Riemann values to preserve the characteristic structure of the underlying hyperbolic system. Momentum balances to within machine-precision accuracy over every element, on each atom, and over the coupled system, with small, controllable energy dissipation in the continuum region that ensures numerical stability. When implemented on suitable unstructured spacetime grids, the continuum SDG model offers linear computational complexity in the number of elements and powerful adaptive analysis capabilities that readily bridge between atomic and continuum scales in both space and time. A special trace operator for the atomic velocities and an associated atomistic traction field enter the jump conditions at the coupling interface. The trace operator depends on parameters that specify, at the scale of the atomic spacing, the position of the coupling interface relative to the atoms. In a key finding, we demonstrate that optimizing these parameters suppresses spurious reflections at the coupling interface without the use of non-physical damping or special boundary conditions. We formulate the implicit SDG-ADG coupling scheme in up to three spatial dimensions, and describe an efficient iterative solution scheme that outperforms common explicit schemes, such as the velocity Verlet integrator. Numerical examples, in 1d×time and employing both linear and nonlinear potentials, demonstrate the performance of the SDG-ADG method and show how adaptive spacetime meshing reconciles disparate time steps and resolves atomic-scale signals in
Esophageal cancer prediction based on qualitative features using adaptive fuzzy reasoning method
Directory of Open Access Journals (Sweden)
Raed I. Hamed
2015-04-01
Full Text Available Esophageal cancer is one of the most common cancers worldwide and also the most common cause of cancer death. In this paper, we present an adaptive fuzzy reasoning algorithm for rule-based systems using fuzzy Petri nets (FPNs), where the fuzzy production rules are represented by FPN. We developed an adaptive fuzzy Petri net (AFPN) reasoning algorithm as a prognostic system to predict the outcome for esophageal cancer based on the serum concentrations of C-reactive protein and albumin as a set of input variables. The system can perform fuzzy reasoning automatically to evaluate the degree of truth of the proposition representing the risk degree value, with a weight value to be optimally tuned based on the observed data. In addition, the implementation process for esophageal cancer prediction is fuzzily deduced by the AFPN algorithm. Performance of the composite model is evaluated through a set of experiments. Simulations and experimental results demonstrate the effectiveness and performance of the proposed algorithms. A comparison of the predictive performance of AFPN models with other methods and the analysis of the curves showed consistent results and an intuitive behavior of the AFPN models.
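The core of such a fuzzy Petri net is the firing of a weighted fuzzy production rule. The sketch below shows one transition firing; the min aggregation, the weight handling, and the certainty factor are illustrative operator choices, not necessarily the exact AFPN operators used in the paper:

```python
def fire_rule(antecedents, weights, certainty):
    """Degree of truth produced by one weighted fuzzy production rule
    (one Petri-net transition): each antecedent token is scaled by its
    weight, the scaled values are aggregated by min (fuzzy AND), and
    the rule's certainty factor attenuates the output token."""
    weighted = [a * w for a, w in zip(antecedents, weights)]
    return min(weighted) * certainty

# Example: membership degrees 0.8 (C-reactive protein) and 0.6 (albumin),
# antecedent weights 1.0 and 0.9, rule certainty factor 0.9 (all values
# illustrative) yield an output risk token of min(0.8, 0.54) * 0.9 = 0.486.
risk = fire_rule([0.8, 0.6], [1.0, 0.9], 0.9)
```

In the adaptive variant, the weights and certainty factors would be tuned against observed patient outcomes rather than fixed by hand.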
Water System Adaptation To Hydrological Changes: Module 11, Methods and Tools: Computational Models
This course will introduce students to the fundamental principles of water system adaptation to hydrological changes, with emphasis on data analysis and interpretation, technical planning, and computational modeling. Starting with real-world scenarios and adaptation needs, the co...
SuBSENSE: a universal change detection method with local adaptive sensitivity.
St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert
2015-01-01
Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Moreover, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which uses no low-level or architecture-specific instructions, reached real-time processing speed on a mid-level desktop CPU. A complete C++ implementation based on OpenCV is available online.
An efficient contents-adaptive backlight control method for mobile devices
Chen, Qiao Song; Yan, Ya Xing; Zhang, Xiao Mou; Cai, Hua; Deng, Xin; Wang, Jin
2015-03-01
For most mobile devices with a large screen, image quality and power consumption are two of the major factors affecting consumer preference. A contents-adaptive backlight control (CABC) method can be used to adjust the backlight and improve the performance of mobile devices. Unlike previous works, which mostly focus on reducing power consumption, the proposed method takes both image quality and power consumption into account. First, the region of interest (ROI) is detected to divide the image into two parts: ROI and non-ROI. Then, three attributes of the ROI are calculated: entropy, luminance, and saturation. To achieve high perceived image quality on mobile devices, the optimal backlight value is computed as a linear combination of these attributes. The coefficients of the linear combination are determined by applying linear regression to the subjective scores of human visual experiments and the objective values of the attributes. Based on the optimal backlight value, the displayed image data are brightened and the backlight is correspondingly dimmed to reduce its power consumption. The ratios for increasing the image data and decreasing the backlight depend on the luminance of the displayed image. The proposed method is also implemented in hardware. Experimental results indicate that the proposed technique performs better than conventional methods.
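The attribute-combination step can be sketched as follows. The weights `w` stand in for the regression coefficients the paper fits to subjective quality scores; the values here, and the ROI handling (the whole image is treated as the ROI), are illustrative assumptions:

```python
import numpy as np

def backlight_level(rgb, w=(0.2, 0.6, 0.2), bias=0.0):
    """Estimate a backlight level in [0, 1] from three image attributes:
    entropy, mean luminance, and mean saturation, combined linearly."""
    # Luminance (ITU-R BT.601 weights), normalized to [0, 1]
    lum = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]) / 255.0
    mean_lum = lum.mean()

    # Shannon entropy of the luminance histogram, normalized by 8 bits
    hist, _ = np.histogram(lum, bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum() / 8.0

    # Saturation as in the HSV model: (max - min) / max, averaged
    mx = rgb.max(axis=-1).astype(float)
    mn = rgb.min(axis=-1).astype(float)
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-9), 0.0).mean()

    level = w[0] * entropy + w[1] * mean_lum + w[2] * sat + bias
    return float(np.clip(level, 0.0, 1.0))
```

A bright image yields a higher backlight level than a dark one, which the image-data compensation step then trades off against display brightening.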
Kaleli, Necati; Saraç, Duygu
2017-05-01
Marginal adaptation plays an important role in the survival of metal-ceramic restorations. Porcelain firings and cementation may affect the adaptation of restorations. Moreover, conventional casting procedures and casting imperfections may cause deteriorations in the marginal adaptation of metal-ceramic restorations. The purpose of this in vitro study was to compare the marginal adaptation after fabrication of the framework, porcelain application, and cementation of metal-ceramic restorations prepared by using the conventional lost-wax technique, milling, direct metal laser sintering (DMLS), and LaserCUSING, a direct process powder-bed system. Alterations in the marginal adaptation of the metal frameworks during the fabrication stages and the precision of fabrication methods were evaluated. Forty-eight metal dies simulating prepared premolar and molar abutment teeth were fabricated to investigate marginal adaptation. They were divided into 4 groups (n=12) according to the fabrication method used (group C serving as the control group: lost-wax method; group M: milling method; group LS: DMLS method; group DP: direct process powder-bed method). Sixty marginal discrepancy measurements were recorded separately on each abutment tooth after fabrication of the framework, porcelain application, and cementation by using a stereomicroscope. Thereafter, each group was divided into 3 subgroups according to the measurements recorded in each fabrication stage: subgroup F (framework), subgroup P (porcelain application), and subgroup C (cementation). Data were statistically analyzed with univariate analysis of variance, followed by 1-way ANOVA and the Tamhane T2 test (α=.05). The lowest marginal discrepancy values were observed in restorations prepared by using the direct process powder-bed method, and this difference was statistically significant. The direct process powder-bed method is thus quite successful in terms of marginal adaptation. The marginal discrepancy increased after porcelain application
Park, Eunjeong
2016-01-01
Despite the contribution to economic and social impact on the institutions in the United States, international students' academic adaptation has been always challenging. The study investigated international graduate students' academic adaptation scales via a survey questionnaire and explored how international students are academically adapted in…
Proposal of adaptive human interface and study of interface evaluation method for plant operators
International Nuclear Information System (INIS)
Ujita, Hiroshi; Kubota, Ryuji.
1994-01-01
In this report, a new concept of human interface adaptive to plant operators' mental model, cognitive process and psychological state which change with time is proposed. It is composed of a function to determine information which should be indicated to operators based on the plant situation, a function to estimate operators' internal conditions, and a function to arrange the information amount, position, timing, form etc. based on their conditions. The method to evaluate the fitness of the interface by using the analysis results based on cognitive science, ergonomics, psychology and physiology is developed to achieve such an interface. Fundamental physiological experiments have been performed. Stress and workload can be identified by the ratio of the power average of the α wave fraction of a brain wave and be distinguished by the ratio of the standard deviation of the R-R interval in test and at rest, in the case of low stress such as mouse operation, calculation and walking. (author)
Natarajan, Annamalai; Angarita, Gustavo; Gaiser, Edward; Malison, Robert; Ganesan, Deepak; Marlin, Benjamin M.
2016-01-01
Mobile health research on illicit drug use detection typically involves a two-stage study design where data to learn detectors is first collected in lab-based trials, followed by a deployment to subjects in a free-living environment to assess detector performance. While recent work has demonstrated the feasibility of wearable sensors for illicit drug use detection in the lab setting, several key problems can limit lab-to-field generalization performance. For example, lab-based data collection often has low ecological validity, the ground-truth event labels collected in the lab may not be available at the same level of temporal granularity in the field, and there can be significant variability between subjects. In this paper, we present domain adaptation methods for assessing and mitigating potential sources of performance loss in lab-to-field generalization and apply them to the problem of cocaine use detection from wearable electrocardiogram sensor data. PMID:28090605
Guan, W.; Cheng, X.; Huang, J.; Huber, G.; Li, W.; McCammon, J. A.; Zhang, B.
2018-06-01
RPYFMM is a software package for the efficient evaluation of the potential field governed by the Rotne-Prager-Yamakawa (RPY) tensor interactions in biomolecular hydrodynamics simulations. In our algorithm, the RPY tensor is decomposed as a linear combination of four Laplace interactions, each of which is evaluated using the adaptive fast multipole method (FMM) (Greengard and Rokhlin, 1997) where the exponential expansions are applied to diagonalize the multipole-to-local translation operators. RPYFMM offers a unified execution on both shared and distributed memory computers by leveraging the DASHMM library (DeBuhr et al., 2016, 2018). Preliminary numerical results show that the interactions for a molecular system of 15 million particles (beads) can be computed within one second on a Cray XC30 cluster using 12,288 cores, while achieving approximately 54% strong-scaling efficiency.
Integrable discretizations and self-adaptive moving mesh method for a coupled short pulse equation
International Nuclear Information System (INIS)
Feng, Bao-Feng; Chen, Junchao; Chen, Yong; Maruno, Ken-ichi; Ohta, Yasuhiro
2015-01-01
In the present paper, integrable semi-discrete and fully discrete analogues of a coupled short pulse (CSP) equation are constructed. The key to the construction are the bilinear forms and determinant structure of the solutions of the CSP equation. We also construct N-soliton solutions for the semi-discrete and fully discrete analogues of the CSP equations in the form of Casorati determinants. In the continuous limit, we show that the fully discrete CSP equation converges to the semi-discrete CSP equation, then further to the continuous CSP equation. Moreover, the integrable semi-discretization of the CSP equation is used as a self-adaptive moving mesh method for numerical simulations. The numerical results agree with the analytical results very well. (paper)
Adaptive F.E. method for the shakedown and limit analysis of pressure vessels
International Nuclear Information System (INIS)
Queiroz Franco, J.R.; Bruzzi Barros, F.; Ponter, A.R.S.
2003-01-01
Upper bound estimates of limit and shakedown loads for pressure vessels are calculated by using the technique described in this paper. These have been achieved by applying Koiter's theorem and by discretizing the shell into finite elements. The flow law associated with a hexagonal prism yield surface relates the plastic strain increments and curvatures to plastic multipliers. A suitable matrix also relates such a plastic strain field to a displacement field through a classical relation. A novel method enforces a consistent relationship between nodal displacements and nodal plastic multipliers by minimizing the residual between the two independent descriptions of the plastic increments, measured with respect to the energy norm. The discretized problem is then reduced to a minimization problem and solved by linear programming. An a posteriori error indicator in the energy norm is derived together with an adaptive mesh refinement scheme. (authors)
Directory of Open Access Journals (Sweden)
Kaysar Rahman
2014-01-01
Full Text Available Bone adaptive repair theory considers external load to be the direct driver of bone remodeling; bone maintains itself by remodeling the microscopic damage this load produces. This paper first examines CT data covering the whole self-repairing process of bone defects in the rabbit femur. The experimental results show that during self-repair the volume changes of spongy bone and enamel bone in the defect interact: when the volume of spongy bone increases, that of enamel bone decreases, and vice versa. Based on this feature, a bone remodeling model built on a cross-type reaction-diffusion system driven by mechanical stress is proposed. Finally, this model, coupled with the finite element method through an element adding and removing process, is used to simulate the self-repairing process and engineering optimization problems following the idea of bionic topology optimization.
Proposal of adaptive human interface and study of interface evaluation method for plant operators
Energy Technology Data Exchange (ETDEWEB)
Ujita, Hiroshi [Hitachi Ltd., Ibaraki (Japan). Energy Research Lab.; Kubota, Ryuji
1994-07-01
In this report, a new concept of human interface adaptive to plant operators' mental model, cognitive process and psychological state which change with time is proposed. It is composed of a function to determine information which should be indicated to operators based on the plant situation, a function to estimate operators' internal conditions, and a function to arrange the information amount, position, timing, form etc. based on their conditions. The method to evaluate the fitness of the interface by using the analysis results based on cognitive science, ergonomics, psychology and physiology is developed to achieve such an interface. Fundamental physiological experiments have been performed. Stress and workload can be identified by the ratio of the power average of the α wave fraction of a brain wave and be distinguished by the ratio of the standard deviation of the R-R interval in test and at rest, in the case of low stress such as mouse operation, calculation and walking. (author).
Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan
2018-04-01
Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and velocity perturbation is strongly nonlinear. Therefore, salt inversion can easily get trapped in the local minima. Since the velocity of salt is nearly constant, we can make the most of this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total variation norm constrained convex set, through which the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are first inverted and the perturbations are gradually obtained by successively relaxing the total variation norm constraints. Numerical experiment of the projection of the BP model onto the intersection of the total variation norm and box constraints has demonstrated the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and inverts the complex salt velocity layer by layer.
A Fuzzy Adaptive Tightly-Coupled Integration Method for Mobile Target Localization Using SINS/WSN
Directory of Open Access Journals (Sweden)
Wei Li
2016-11-01
Full Text Available In recent years, mobile target localization in enclosed environments has attracted growing interest. In this paper, we propose a fuzzy adaptive tightly-coupled integration (FATCI) method for positioning and tracking applications using a strapdown inertial navigation system (SINS) and a wireless sensor network (WSN). Wireless signal outages and severe multipath propagation in the WSN often degrade the accuracy of measured distances and complicate WSN positioning, while SINS is known for its error drift over time. Starting from the well-known loosely-coupled integration method, we built a tightly-coupled integrated positioning system for SINS/WSN based on the measured distances between anchor nodes and the mobile node. The measured WSN distance is corrected with a least squares regression (LSR) algorithm to reduce its systematic error. Additionally, the statistical covariance of the measured distance is used to adjust the observation covariance matrix of a Kalman filter through a fuzzy inference system (FIS) based on the measurements' statistical characteristics. The tightly-coupled integration model can then adaptively adjust the confidence level of each measurement according to its distance-measurement accuracy. The FATCI system is thus realized using SINS/WSN. This approach is verified in real scenarios. Experimental results show that the proposed positioning system has better accuracy and stability than the loosely-coupled and traditional tightly-coupled integration models, both during short-term WSN failures and under normal conditions.
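The adaptive-covariance idea can be illustrated in one dimension. The sketch below replaces the paper's fuzzy inference system with a much cruder rule, inflating the measurement variance R in proportion to the recent innovation variance; the constants and the scaling rule are illustrative assumptions, not the FATCI design:

```python
import numpy as np

def adaptive_kalman_1d(z, q=1e-3, r0=1.0, win=5):
    """1-D Kalman filter with an adaptive measurement variance R.
    R is raised when recent innovations (measurement residuals) are
    noisy, so unreliable distance measurements get less confidence."""
    x, p = z[0], 1.0          # state estimate and its variance
    innovations, out = [], []
    for zk in z:
        p = p + q             # predict step (static-state model)
        innov = zk - x        # innovation for this measurement
        innovations.append(innov)
        recent = innovations[-win:]
        # inflate R from the sample variance of recent innovations
        r = max(r0, float(np.var(recent))) if len(recent) > 1 else r0
        k = p / (p + r)       # Kalman gain
        x = x + k * innov     # update step
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)
```

On a noisy constant signal the filtered track is markedly smoother than the raw measurements, which is the qualitative behavior the tightly-coupled SINS/WSN filter relies on.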
Thresholds in chemical respiratory sensitisation.
Cochrane, Stella A; Arts, Josje H E; Ehnes, Colin; Hindle, Stuart; Hollnagel, Heli M; Poole, Alan; Suto, Hidenori; Kimber, Ian
2015-07-03
There is a continuing interest in determining whether it is possible to identify thresholds for chemical allergy. Here allergic sensitisation of the respiratory tract by chemicals is considered in this context. This is an important occupational health problem, being associated with rhinitis and asthma, and in addition provides toxicologists and risk assessors with a number of challenges. In common with all forms of allergic disease chemical respiratory allergy develops in two phases. In the first (induction) phase exposure to a chemical allergen (by an appropriate route of exposure) causes immunological priming and sensitisation of the respiratory tract. The second (elicitation) phase is triggered if a sensitised subject is exposed subsequently to the same chemical allergen via inhalation. A secondary immune response will be provoked in the respiratory tract resulting in inflammation and the signs and symptoms of a respiratory hypersensitivity reaction. In this article attention has focused on the identification of threshold values during the acquisition of sensitisation. Current mechanistic understanding of allergy is such that it can be assumed that the development of sensitisation (and also the elicitation of an allergic reaction) is a threshold phenomenon; there will be levels of exposure below which sensitisation will not be acquired. That is, all immune responses, including allergic sensitisation, have threshold requirement for the availability of antigen/allergen, below which a response will fail to develop. The issue addressed here is whether there are methods available or clinical/epidemiological data that permit the identification of such thresholds. This document reviews briefly relevant human studies of occupational asthma, and experimental models that have been developed (or are being developed) for the identification and characterisation of chemical respiratory allergens. The main conclusion drawn is that although there is evidence that the
International Nuclear Information System (INIS)
Bhattacharya, T.; Willenbrock, S.
1993-01-01
We propose returning to the definition of the width of a particle in terms of the pole in the particle's propagator. Away from thresholds, this definition of width is equivalent to the standard perturbative definition, up to next-to-leading order; however, near a threshold, the two definitions differ significantly. The width as defined by the pole position provides more information in the threshold region than the standard perturbative definition and, in contrast with the perturbative definition, does not vanish when a two-particle s-wave threshold is approached from below.
Ultrasound viscoelasticity assessment using an adaptive torsional shear wave propagation method
Energy Technology Data Exchange (ETDEWEB)
Ouared, Abderrahmane [Laboratory of Biorheology and Medical Ultrasonics, University of Montréal Hospital Research Center (CRCHUM), Montréal, Québec H2X 0A9, Canada and Institute of Biomedical Engineering, University of Montréal, Montréal, Québec H3T 1J4 (Canada); Kazemirad, Siavash; Montagnon, Emmanuel [Laboratory of Biorheology and Medical Ultrasonics, University of Montréal Hospital Research Center (CRCHUM), Montréal, Québec H2X 0A9 (Canada); Cloutier, Guy, E-mail: guy.cloutier@umontreal.ca [Laboratory of Biorheology and Medical Ultrasonics, University of Montréal Hospital Research Center (CRCHUM), Montréal, Québec H2X 0A9 (Canada); Department of Radiology, Radio-Oncology and Nuclear Medicine, University of Montréal, Montréal, Québec H3T 1J4 (Canada); Institute of Biomedical Engineering, University of Montréal, Montréal, Québec H3T 1J4 (Canada)
2016-04-15
Purpose: Different approaches have been used in dynamic elastography to assess mechanical properties of biological tissues. Most techniques are based on a simple inversion based on the measurement of the shear wave speed to assess elasticity, whereas some recent strategies use more elaborated analytical or finite element method (FEM) models. In this study, a new method is proposed for the quantification of both shear storage and loss moduli of confined lesions, in the context of breast imaging, using adaptive torsional shear waves (ATSWs) generated remotely with radiation pressure. Methods: A FEM model was developed to solve the inverse wave propagation problem and obtain viscoelastic properties of interrogated media. The inverse problem was formulated and solved in the frequency domain and its robustness to noise and geometric constraints was evaluated. The proposed model was validated in vitro with two independent rheology methods on several homogeneous and heterogeneous breast tissue-mimicking phantoms over a broad range of frequencies (up to 400 Hz). Results: Viscoelastic properties matched benchmark rheology methods with discrepancies of 8%–38% for the shear modulus G′ and 9%–67% for the loss modulus G″. The robustness study indicated good estimations of storage and loss moduli (maximum mean errors of 19% on G′ and 32% on G″) for signal-to-noise ratios between 19.5 and 8.5 dB. Larger errors were noticed in the case of biases in lesion dimension and position. Conclusions: The ATSW method revealed that it is possible to estimate the viscoelasticity of biological tissues with torsional shear waves when small biases in lesion geometry exist.
Detection thresholds of macaque otolith afferents.
Yu, Xiong-Jie; Dickman, J David; Angelaki, Dora E
2012-06-13
The vestibular system is our sixth sense and is important for spatial perception functions, yet the sensory detection and discrimination properties of vestibular neurons remain relatively unexplored. Here we have used signal detection theory to measure detection thresholds of otolith afferents using 1 Hz linear accelerations delivered along three cardinal axes. Direction detection thresholds were measured by comparing mean firing rates centered on response peak and trough (full-cycle thresholds) or by comparing peak/trough firing rates with spontaneous activity (half-cycle thresholds). Thresholds were similar for utricular and saccular afferents, as well as for lateral, fore/aft, and vertical motion directions. When computed along the preferred direction, full-cycle direction detection thresholds were 7.54 and 3.01 cm/s² for regular and irregular firing otolith afferents, respectively. Half-cycle thresholds were approximately double, with excitatory thresholds being half as large as inhibitory thresholds. The variability in threshold among afferents was directly related to neuronal gain and did not depend on spike count variance. The exact threshold values depended on both the time window used for spike count analysis and the filtering method used to calculate mean firing rate, although differences between regular and irregular afferent thresholds were independent of analysis parameters. The fact that minimum thresholds measured in macaque otolith afferents are of the same order of magnitude as human behavioral thresholds suggests that the vestibular periphery might determine the limit on our ability to detect or discriminate small differences in head movement, with little noise added during downstream processing.
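The signal-detection procedure behind such threshold estimates can be sketched with synthetic data: compute the discriminability d' between stimulus-evoked and spontaneous firing at each stimulus amplitude, then interpolate the amplitude at which d' crosses 1. The linear-gain neuron model, the sample sizes, and the d' = 1 criterion are illustrative assumptions:

```python
import numpy as np

def d_prime(resp, base):
    """Discriminability between response and baseline firing samples."""
    return (np.mean(resp) - np.mean(base)) / np.sqrt(
        0.5 * (np.var(resp) + np.var(base)))

def detection_threshold(amplitudes, gain, noise_sd, n=1000, seed=0):
    """Stimulus amplitude at which d' crosses 1 for a hypothetical
    neuron whose mean rate change is gain * amplitude with Gaussian
    trial-to-trial variability of standard deviation noise_sd."""
    rng = np.random.default_rng(seed)
    base = rng.normal(0.0, noise_sd, n)          # spontaneous activity
    dps = [d_prime(rng.normal(gain * a, noise_sd, n), base)
           for a in amplitudes]
    # linearly interpolate the amplitude where d' = 1
    return float(np.interp(1.0, dps, amplitudes))
```

For a neuron with gain 2 and unit noise, d' = 2a, so the recovered threshold sits near a = 0.5; higher-gain (more irregular) afferents would show lower thresholds, matching the paper's gain-threshold relation.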
Data mining methods application in reflexive adaptation realization in e-learning systems
Directory of Open Access Journals (Sweden)
A. S. Bozhday
2017-01-01
Full Text Available In recent years, e-learning technologies are rapidly gaining momentum in their evolution. In this regard, issues related to improving the quality of software for virtual educational systems are becoming topical: increasing the period of exploitation of programs, increasing their reliability and flexibility. The above characteristics directly depend on the ability of the software system to adapt to changes in the domain, environment and user characteristics. In some cases, this ability is reduced to the timely optimization of the program’s own interfaces and data structure. At present, several approaches to creating mechanisms for self-optimization of software systems are known, but all of them have an insufficient degree of formalization and, as a consequence, weak universality. The purpose of this work is to develop the basics of the technology of self-optimization of software systems in the structure of e-learning. The proposed technology is based on the formulated and formalized principle of reflexive adaptation of software, applicable to a wide class of software systems and based on the discovery of new knowledge in the behavioral products of the system. To solve this problem, methods of data mining were applied. Data mining allows finding regularities in the functioning of software systems, which may not be obvious at the stage of their development. Finding such regularities and their subsequent analysis will make it possible to reorganize the structure of the system in a more optimal way and without human intervention, which will prolong the life cycle of the software and reduce the costs of its maintenance. Achieving this effect is important for e-learning systems, since they are quite expensive. The main results of the work include: the proposed classification of software adaptation mechanisms, taking into account the latest trends in the IT field in general and in the field of e-learning in particular; Formulation and formalization of
International Nuclear Information System (INIS)
Ma Xiang; Zabaras, Nicholas
2009-01-01
A new approach to modeling inverse problems using a Bayesian inference method is introduced. The Bayesian approach considers the unknown parameters as random variables and seeks the probabilistic distribution of the unknowns. By introducing the concept of the stochastic prior state space to the Bayesian formulation, we reformulate the deterministic forward problem as a stochastic one. The adaptive hierarchical sparse grid collocation (ASGC) method is used for constructing an interpolant to the solution of the forward model in this prior space which is large enough to capture all the variability/uncertainty in the posterior distribution of the unknown parameters. This solution can be considered as a function of the random unknowns and serves as a stochastic surrogate model for the likelihood calculation. Hierarchical Bayesian formulation is used to derive the posterior probability density function (PPDF). The spatial model is represented as a convolution of a smooth kernel and a Markov random field. The state space of the PPDF is explored using Markov chain Monte Carlo algorithms to obtain statistics of the unknowns. The likelihood calculation is performed by directly sampling the approximate stochastic solution obtained through the ASGC method. The technique is assessed on two nonlinear inverse problems: source inversion and permeability estimation in flow through porous media
Liu, Xi; Zhou, Mei; Qiu, Song; Sun, Li; Liu, Hongying; Li, Qingli; Wang, Yiting
2017-12-01
Red blood cell counting, as a routine examination, plays an important role in medical diagnoses. Although automated hematology analyzers are widely used, manual microscopic examination by a hematologist or pathologist is still unavoidable, which is time-consuming and error-prone. This paper proposes a fully automatic red blood cell counting method which is based on microscopic hyperspectral imaging of blood smears and combines spatial and spectral information to achieve high precision. The acquired hyperspectral image data of the blood smear in the visible and near-infrared spectral range are first preprocessed, and then a quadratic blind linear unmixing algorithm is used to get endmember abundance images. Based on mathematical morphological operations and an adaptive Otsu's method, a binarization process is performed on the abundance images. Finally, the connected component labeling algorithm with magnification-based parameter setting is applied to automatically select the binary images of red blood cell cytoplasm. Experimental results show that the proposed method performs well and has potential for clinical applications.
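The thresholding and labeling stages can be sketched on a grayscale image: Otsu's method picks the threshold maximizing between-class variance, and a flood-fill pass counts connected bright regions above a minimum area. This is a minimal single-band sketch, not the paper's hyperspectral unmixing pipeline; the `min_area` size filter is an illustrative stand-in for the magnification-based parameter setting:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: choose the gray level maximizing the
    between-class variance of the foreground/background split."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    w0, sum0, best_t, best_var = 0, 0.0, 0, -1.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                       # background mean
        m1 = (sum_all - sum0) / (total - w0)  # foreground mean
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def count_cells(gray, min_area=4):
    """Threshold with Otsu's method, then count 4-connected bright
    components whose area is at least min_area pixels."""
    binary = gray > otsu_threshold(gray)
    visited = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not visited[i, j]:
                stack, area = [(i, j)], 0     # iterative flood fill
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if area >= min_area:
                    count += 1
    return count
```

In practice a library labeling routine (e.g. from an image-processing package) replaces the hand-rolled flood fill, but the logic is the same.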
A versatile embedded boundary adaptive mesh method for compressible flow in complex geometry
Almarouf, Mohamad Abdulilah Alhusain Alali
2017-02-25
We present an embedded ghost-fluid method for numerical solutions of the compressible Navier Stokes (CNS) equations in arbitrary complex domains. A PDE multidimensional extrapolation approach is used to reconstruct the solution in the ghost-fluid regions and imposing boundary conditions on the fluid-solid interface, coupled with a multi-dimensional algebraic interpolation for freshly cleared cells. The CNS equations are numerically solved by the second order multidimensional upwind method. Block-structured adaptive mesh refinement, implemented with the Chombo framework, is utilized to reduce the computational cost while keeping high resolution mesh around the embedded boundary and regions of high gradient solutions. The versatility of the method is demonstrated via several numerical examples, in both static and moving geometry, ranging from low Mach number nearly incompressible flows to supersonic flows. Our simulation results are extensively verified against other numerical results and validated against available experimental results where applicable. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well.
Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing
2016-01-08
A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to extract weak fault features under background noise; it applies statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference (noise) signal and the original signal, and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis; in this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.
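A drastically simplified stand-in for the ASTF idea: compare the spectrum of the measured signal against a noise reference and discard frequency components that are not clearly above the reference. The paper's actual statistical hypothesis test with a PSO-tuned significance level α is replaced here by a fixed magnitude ratio, so this is an illustrative sketch only:

```python
import numpy as np

def stat_similarity_filter(signal, noise_ref, ratio=2.0):
    """Zero out frequency bins where the signal magnitude does not exceed
    `ratio` times the noise-reference magnitude; components judged
    'similar to noise' are removed, as in the ASTF."""
    S = np.fft.rfft(signal)
    N = np.fft.rfft(noise_ref)
    keep = np.abs(S) > ratio * np.abs(N)
    return np.fft.irfft(S * keep, n=len(signal))

rng = np.random.default_rng(0)
t = np.arange(256) / 256.0
clean = np.sin(2 * np.pi * 5 * t)                   # weak periodic fault feature
signal = clean + 0.3 * rng.standard_normal(256)     # buried in background noise
noise_ref = 0.3 * rng.standard_normal(256)          # separate noise recording
filtered = stat_similarity_filter(signal, noise_ref)
```

In the paper, the keep/discard decision per component comes from a hypothesis test at significance level α rather than a hard-coded ratio.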
Kinetics of electron-positron pair plasmas using an adaptive Monte Carlo method
International Nuclear Information System (INIS)
Pilla, R.P.; Shaham, J.
1997-01-01
A new algorithm for implementing the adaptive Monte Carlo method is given. It is used to solve the Boltzmann equations that describe the time evolution of a nonequilibrium electron-positron pair plasma containing high-energy photons. These are coupled nonlinear integro-differential equations. The collision kernels for the photons as well as the pairs are evaluated for Compton scattering, pair annihilation and creation, bremsstrahlung, and Coulomb collisions. They are given as multidimensional integrals which are valid for all energies. For a homogeneous and isotropic plasma with no particle escape, the equilibrium solution is expressed analytically in terms of the initial conditions. For two specific cases, in which the photon and pair spectra are initially constant or follow a power-law distribution within the given limits, the time evolution of the plasma is analyzed using the new method. The final spectra are found to be in good agreement with the analytical solutions. The new algorithm is faster than the Monte Carlo scheme based on uniform sampling and more flexible than the numerical methods used in the past, which do not involve Monte Carlo sampling. It is also found to be very stable. Some astrophysical applications of this technique are discussed. copyright 1997 The American Astronomical Society
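The advantage of adaptive over uniform sampling can be seen in a toy quadrature problem: a stratified Monte Carlo estimator that repeatedly bisects the stratum contributing the most variance concentrates samples where the integrand varies fastest. This is an illustrative scheme only; the paper's algorithm adapts the sampling inside coupled Boltzmann collision integrals:

```python
import numpy as np

def adaptive_mc(f, a, b, n_rounds=8, n_per=200, rng=None):
    """Stratified Monte Carlo on [a, b]: in each round, split the stratum
    with the largest (width * sample std) score, then estimate the
    integral as the sum of per-stratum means times widths."""
    if rng is None:
        rng = np.random.default_rng(0)
    strata = [(a, b)]
    for _ in range(n_rounds):
        scores = []
        for lo, hi in strata:
            x = rng.uniform(lo, hi, n_per)
            scores.append((hi - lo) * f(x).std())
        lo, hi = strata.pop(int(np.argmax(scores)))
        mid = 0.5 * (lo + hi)
        strata += [(lo, mid), (mid, hi)]        # bisect the worst stratum
    total = 0.0
    for lo, hi in strata:
        x = rng.uniform(lo, hi, n_per)
        total += (hi - lo) * f(x).mean()
    return total

# sharply peaked integrand: uniform sampling wastes most of its points
f = lambda x: np.exp(-50.0 * (x - 0.5) ** 2)
est = adaptive_mc(f, 0.0, 1.0)
# exact value: sqrt(pi/50) * erf(0.5*sqrt(50)) ≈ 0.250663
```

The same principle, refine where the sampled variance is largest, underlies adaptive Monte Carlo schemes generally, whatever the integrand.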
Domain-adaptive finite difference methods for collapsing annular liquid jets
Ramos, J. I.
1993-01-01
A domain-adaptive technique which maps a time-dependent, curvilinear geometry into a unit square is used to determine the steady state mass absorption rate and the collapse of annular liquid jets. A method of lines is used to solve the one-dimensional fluid dynamics equations written in weak conservation-law form, and upwind differences are employed to evaluate the axial convective fluxes. The unknown, time-dependent, axial location of the downstream boundary is determined from the solution of an ordinary differential equation which is nonlinearly coupled to the fluid dynamics and gas concentration equations. The equation for the gas concentration in the annular liquid jet is written in strong conservation-law form and solved by means of a method of lines at high Peclet numbers and a line Gauss-Seidel method at low Peclet numbers. The effects of the number of grid points along and across the annular jet, time step, and discretization of the radial convective fluxes on both the steady state mass absorption rate and the jet's collapse rate have been analyzed on staggered and non-staggered grids. The steady state mass absorption rate and the collapse of annular liquid jets are determined as a function of the Froude, Peclet and Weber numbers, annular jet's thickness-to-radius ratio at the nozzle exit, initial pressure difference across the annular jet, nozzle exit angle, temperature of the gas enclosed by the annular jet, pressure of the gas surrounding the jet, solubilities at the inner and outer interfaces of the annular jet, and gas concentration at the nozzle exit. It is shown that the steady state mass absorption rate is proportional to the inverse square root of the Peclet number except for low values of this parameter, and that the possible mathematical incompatibilities in the concentration field at the nozzle exit exert a great influence on the steady state mass absorption rate and on the jet collapse. It is also shown that the steady state mass absorption
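The domain mapping at the heart of the method can be shown for a model advection equation on a shrinking interval [0, L(t)]: the substitution ξ = x/L(t) fixes the domain to the unit interval and introduces a grid-velocity term, which is then discretized with upwind differences. This is a one-dimensional sketch only; the paper solves the full annular-jet fluid dynamics equations in weak conservation-law form:

```python
import numpy as np

def mapped_upwind_step(u, c, L, Ldot, dt):
    """One upwind step for u_t + c u_x = 0 on the moving domain [0, L(t)],
    mapped to xi = x/L(t) in [0, 1]:
        u_t + v(xi) u_xi = 0,   v(xi) = (c - xi*Ldot) / L
    First-order upwind in xi, switching stencil with the sign of v."""
    nx = len(u)
    xi = np.linspace(0.0, 1.0, nx)
    dxi = xi[1] - xi[0]
    v = (c - xi * Ldot) / L
    un = u.copy()
    for i in range(1, nx - 1):
        if v[i] >= 0:
            un[i] = u[i] - dt * v[i] * (u[i] - u[i - 1]) / dxi
        else:
            un[i] = u[i] - dt * v[i] * (u[i + 1] - u[i]) / dxi
    # outflow at the right (moving) boundary; v > 0 there for Ldot < 0
    if v[-1] >= 0:
        un[-1] = u[-1] - dt * v[-1] * (u[-1] - u[-2]) / dxi
    return un

# sanity checks: a uniform state is preserved exactly, and under a CFL
# restriction the upwind update is monotone (no new extrema appear)
u0 = np.ones(50)
u1 = mapped_upwind_step(u0, c=1.0, L=2.0, Ldot=-0.5, dt=0.005)
```

In the paper the downstream boundary location itself evolves through an ODE coupled to the flow; here L and Ldot are simply prescribed.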
Directory of Open Access Journals (Sweden)
X. Ning
2012-08-01
To resolve the above-mentioned registration difficulties, a parallel and adaptive uniform-distributed registration method for CE-1 lunar remote sensed imagery is proposed in this paper. On 6 pairs of randomly selected images, both the standard SIFT algorithm and the parallel and adaptive uniform-distributed registration method were executed, and their versatility and effectiveness were assessed. The experimental results indicate that the parallel and adaptive uniform-distributed registration method increases the efficiency of CE-1 lunar remote sensed imagery registration dramatically. The proposed method therefore acquires uniformly distributed registration results more effectively, resolving the registration difficulties of unobtainable results, long processing times and non-uniform distribution.
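The "uniform-distributed" idea can be sketched as a grid-bucketing post-process on candidate match points: cover the image with a coarse grid and keep only the strongest match per cell. This is an illustrative reduction; the paper's full method also parallelizes the SIFT stage and adapts the selection:

```python
def uniform_select(points, scores, shape, grid=(4, 4)):
    """Bucket candidate match points into a coarse grid over the image and
    keep the highest-scoring point per cell, enforcing a spatially
    uniform distribution of registration points."""
    gy, gx = grid
    h, w = shape
    best = {}
    for (y, x), s in zip(points, scores):
        cell = (int(y * gy / h), int(x * gx / w))
        if cell not in best or s > best[cell][0]:
            best[cell] = (s, (y, x))
    return [p for _, p in best.values()]

# three clustered candidates plus one isolated candidate
pts = [(5, 5), (6, 6), (7, 5), (90, 90)]
scores = [0.9, 0.8, 0.7, 0.6]
kept = uniform_select(pts, scores, shape=(100, 100), grid=(4, 4))
```

Raw SIFT matching tends to cluster on texture-rich regions; bucketing like this spreads the tie points, which stabilizes the subsequent transform estimation.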
A convergent blind deconvolution method for post-adaptive-optics astronomical imaging
International Nuclear Information System (INIS)
Prato, M; Camera, A La; Bertero, M; Bonettini, S
2013-01-01
In this paper, we propose a blind deconvolution method which applies to data perturbed by Poisson noise. The objective function is a generalized Kullback–Leibler (KL) divergence, depending on both the unknown object and unknown point spread function (PSF), without the addition of regularization terms; constrained minimization, with suitable convex constraints on both unknowns, is considered. The problem is non-convex and we propose to solve it by means of an inexact alternating minimization method, whose global convergence to stationary points of the objective function has been recently proved in a general setting. The method is iterative and each iteration, also called outer iteration, consists of alternating an update of the object and the PSF by means of a fixed number of iterations, also called inner iterations, of the scaled gradient projection (SGP) method. Therefore, the method is similar to other proposed methods based on the Richardson–Lucy (RL) algorithm, with SGP replacing RL. The use of SGP has two advantages: first, it allows one to prove global convergence of the blind method; secondly, it allows the introduction of different constraints on the object and the PSF. The specific constraint on the PSF, besides non-negativity and normalization, is an upper bound derived from the so-called Strehl ratio (SR), which is the ratio between the peak value of an aberrated versus a perfect wavefront. Therefore, a typical application, but not a unique one, is to the imaging of modern telescopes equipped with adaptive optics systems for the partial correction of the aberrations due to atmospheric turbulence. In the paper, we describe in detail the algorithm and we recall the results leading to its convergence. Moreover, we illustrate its effectiveness by means of numerical experiments whose results indicate that the method, pushed to convergence, is very promising in the reconstruction of non-dense stellar clusters. The case of more complex astronomical targets
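The alternating structure of the method, a few inner multiplicative updates of the object with the PSF fixed and then vice versa, can be sketched with plain Richardson-Lucy updates in place of SGP. This is a 1-D sketch with circular convolution and no Strehl-ratio constraint, i.e. the classical unconstrained scheme the paper improves on:

```python
import numpy as np

def rl_update(est, other, data):
    """One Richardson-Lucy multiplicative update of `est` for the model
    data = est (*) other under Poisson noise (circular convolution via FFT)."""
    F, iF = np.fft.fft, np.fft.ifft
    conv = np.real(iF(F(est) * F(other)))
    ratio = data / np.maximum(conv, 1e-12)
    corr = np.real(iF(F(ratio) * np.conj(F(other))))   # correlation with `other`
    return est * np.maximum(corr, 0.0) / max(other.sum(), 1e-12)

def blind_rl(data, n_outer=20, n_inner=5):
    """Alternate inner RL iterations on the object and on the PSF (outer
    iterations), renormalizing the PSF after each outer step."""
    obj = np.full_like(data, data.mean())
    psf = np.ones_like(data) / len(data)
    for _ in range(n_outer):
        for _ in range(n_inner):
            obj = rl_update(obj, psf, data)
        for _ in range(n_inner):
            psf = rl_update(psf, obj, data)
        psf /= psf.sum()                               # keep the PSF normalized
    return obj, psf

# simulated sparse "star field" blurred by a narrow Gaussian PSF
n = 32
true_obj = np.zeros(n); true_obj[10] = 5.0; true_obj[20] = 3.0
d = np.minimum(np.arange(n), n - np.arange(n))         # circular distance
true_psf = np.exp(-0.5 * d**2); true_psf /= true_psf.sum()
data = np.real(np.fft.ifft(np.fft.fft(true_obj) * np.fft.fft(true_psf)))
obj, psf = blind_rl(data)
```

The paper's contribution is precisely what this sketch lacks: replacing RL by SGP makes the alternating scheme provably convergent and admits extra constraints, notably the Strehl-ratio bound on the PSF.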
Measuring system and method of determining the Adaptive Force
Directory of Open Access Journals (Sweden)
Laura Schaefer
2017-07-01
Full Text Available The term Adaptive Force (AF) describes the capability of the nerve-muscle system to adapt to externally applied forces during isometric and eccentric muscle action. This ability plays an important role in everyday motions as well as in sports. The focus of this paper is on the specific measurement method for this neuromuscular action, which can be seen as innovative. A measuring system based on the use of compressed air was constructed and evaluated for this neuromuscular function. The force level at which the subject deviates from the quasi-isometric position and merges into eccentric muscle action depends on the subject's physical condition. In contrast to isokinetic systems, the device enables a measurement of strength without forced motion. The scientific quality criteria of the device were evaluated by measurements of intra- and interrater reliability, test-retest reliability and fatigue, and the pneumatic device was also compared with a dynamometer. For the mechanical evaluation, the results show a high level of consistency (r² = 0.94 to 0.96). The parallel test reliability delivers a very high and significant correlation (ρ = 0.976; p = 0.000). Including the biological system, the concordance of three different raters is very high (p = 0.001, Cronbach's alpha α = 0.987). The test-retest with 4 subjects over five weeks supports the reliability of the device, showing no statistically significant differences. These evaluations indicate that the scientific quality criteria are fulfilled. The specific feature of this system is that an isometric position can be maintained while the externally applied force rises. Moreover, the device can capture concentric, static and eccentric strength values. Fields of application are performance diagnostics in sports and medicine.
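The rater-concordance figure quoted above (Cronbach's alpha) is straightforward to compute from an n-subjects × k-raters score matrix. A minimal sketch with illustrative data, not the study's measurements:

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (n_subjects, k_raters) score matrix:
    alpha = k/(k-1) * (1 - sum of per-rater variances / variance of row totals)."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# perfectly concordant raters: all columns identical -> alpha = 1
ratings = np.array([[12, 12, 12],
                    [15, 15, 15],
                    [18, 18, 18],
                    [14, 14, 14]])
alpha = cronbach_alpha(ratings)
```

Disagreement between raters inflates the per-rater variances relative to the total-score variance, pulling alpha below 1.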
Sugiarto, Y.; Perdinan; Atmaja, T.; Wibowo, A.
2017-03-01
Agriculture plays a strategic role in strengthening sustainable development. Under the agropolitan concept, the village becomes the center of economic activities by combining agriculture, agro-industry, agribusiness and tourism, creating a high value-added economy. The impact of climate change on agriculture and water resources may increase the pressure on agropolitan development. An assessment method is therefore required to measure the vulnerability of area-based communities in the agropolitan region to climate change impacts. An analysis of agropolitan vulnerability was conducted in Malang district based on four aspects, with the availability and distribution of water considered as the central problem. The measurement used a vulnerability component, consisting of sensitivity and adaptive capacity, and an exposure component. The study derived 21 indicators from the 115 sets of village-based data. The vulnerability assessments showed that most of the villages were categorised at a moderate level. Around 20% of the 388 villages were categorised at a high to very high level of vulnerability due to their weak agricultural economies. In the agropolitan region within the sub-district of Poncokusumo, the vulnerability of the villages varies between very low and very high. Most villages were vulnerable owing to lower adaptive capacity, even though the levels of sensitivity and exposure of all villages were relatively similar. The existence of water resources was the biggest contributor to the high exposure of the villages in Malang district, while access to credit facilities and the source of family income were among the indicators that lead to a high sensitivity component.
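The component structure described above can be combined into a composite index. The sketch below uses a generic min-max normalization with equal weights; the paper's exact aggregation over its 21 indicators is not reproduced here, so the formula and weights are assumptions:

```python
import numpy as np

def vulnerability_index(sensitivity, adaptive_capacity, exposure,
                        w=(1/3, 1/3, 1/3)):
    """Composite village vulnerability: min-max normalize each component
    to [0, 1], then combine as w1*S + w2*(1 - AC) + w3*E, so higher
    values mean more vulnerable (adaptive capacity enters inverted)."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    s, ac, e = norm(sensitivity), norm(adaptive_capacity), norm(exposure)
    return w[0] * s + w[1] * (1.0 - ac) + w[2] * e

# three hypothetical villages: high-S/low-AC/high-E should rank worst
idx = vulnerability_index(sensitivity=[1, 5, 3],
                          adaptive_capacity=[5, 1, 3],
                          exposure=[1, 5, 3])
```

The inversion of adaptive capacity reflects the definition in the abstract: similar sensitivity and exposure, but lower adaptive capacity, drives villages toward the vulnerable end of the scale.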
A parallel direct solver for the self-adaptive hp Finite Element Method
Paszyński, Maciej R.
2010-03-01
In this paper we present a new parallel multi-frontal direct solver, dedicated to the hp Finite Element Method (hp-FEM). The self-adaptive hp-FEM generates, in a fully automatic mode, a sequence of hp-meshes delivering exponential convergence of the error with respect to the number of degrees of freedom (d.o.f.) as well as the CPU time, by performing a sequence of hp refinements starting from an arbitrary initial mesh. The solver constructs an initial elimination tree for an arbitrary initial mesh, and expands the elimination tree each time the mesh is refined. This allows us to keep track of the order of elimination for the solver. The solver also minimizes memory usage by de-allocating partial LU factorizations computed during the elimination stage, and recomputing them for the backward substitution stage using only about 10% of the computational time necessary for the original computations. The solver has been tested on 3D direct current (DC) borehole resistivity measurement simulation problems. We measure the execution time and memory usage of the solver on a large regular mesh with 1.5 million degrees of freedom as well as on a highly non-regular mesh, generated by the self-adaptive hp-FEM, with finite elements of various sizes and polynomial orders of approximation varying from p = 1 to p = 9. The presented experiments show that the parallel solver scales well up to the maximum number of utilized processors. The limit of the solver's scalability is the maximal sequential part of the algorithm: the computation of the partial LU factorizations over the longest path from the root of the elimination tree down to the deepest leaf. © 2009 Elsevier Inc. All rights reserved.
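The elimination-tree bookkeeping described above can be sketched with a plain dictionary: refining a mesh element expands its leaf into children, and the elimination order is a postorder traversal, so previously established orderings are preserved as the mesh grows. This is a structural sketch only; the actual solver stores frontal matrices and partial LU factors at the nodes:

```python
def refine(tree, leaf, children):
    """h-refinement: the refined element's leaf gains one child node per
    new child element, expanding the elimination tree in place."""
    tree[leaf] = list(children)
    for c in children:
        tree[c] = []
    return tree

def elimination_order(tree, root):
    """Multifrontal elimination order = postorder traversal: children
    (leaves of the refined mesh) are eliminated before their parents."""
    order = []
    def visit(node):
        for child in tree[node]:
            visit(child)
        order.append(node)
    visit(root)
    return order

tree = {"e0": []}                                  # initial one-element mesh
refine(tree, "e0", ["e0.0", "e0.1", "e0.2", "e0.3"])
refine(tree, "e0.1", ["e0.1.0", "e0.1.1"])         # refine one child further
order = elimination_order(tree, "e0")
```

The scalability limit quoted in the abstract corresponds to the longest root-to-leaf path of this tree: eliminations along one path are inherently sequential.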
Özen, Hamit; Turan, Selahattin
2017-01-01
This study was designed to develop the scale of the Complex Adaptive Leadership for School Principals (CAL-SP) and examine its psychometric properties. This was an exploratory mixed method research design (ES-MMD). Both qualitative and quantitative methods were used to develop and assess psychometric properties of the questionnaire. This study…
Osuna, Diego; Barrera, Manuel; Strycker, Lisa A; Toobert, Deborah J; Glasgow, Russell E; Geno, Cristy R; Almeida, Fabio; Perdomo, Malena; King, Diane; Doty, Alyssa Tinley
2011-05-01
Because Latinas experience a high prevalence of type 2 diabetes and its complications, there is an urgent need to reach them with interventions that promote healthful lifestyles. This article illustrates a sequential approach that took an effective multiple-risk-factor behavior-change program and adapted it for Latinas with type 2 diabetes. The adaptation stages include (a) information gathering from the literature and focus groups, (b) preliminary adaptation design, and (c) a preliminary adaptation test. In this third stage, a pilot study found that participants were highly satisfied with the intervention and showed improvement across diverse outcomes. Key implications for applications include the importance of a model for guiding cultural adaptations and the value of procedures for obtaining continuous feedback from staff and participants during the preliminary adaptation test.
International Nuclear Information System (INIS)
Park, H.; De Oliveira, C. R. E.
2007-01-01
This paper describes the verification of the recently developed space-angle self-adaptive algorithm for the finite element-spherical harmonics method via the Method of Manufactured Solutions. This method provides a simple yet robust way of verifying the theoretical properties of the adaptive algorithm and interfaces very well with the underlying second-order, even-parity transport formulation. Simple analytic solutions in both the spatial and angular variables are manufactured to assess the theoretical performance of the a posteriori error estimates. The numerical results confirm the reliability of the developed space-angle error indicators. (authors)
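The Method of Manufactured Solutions can be illustrated on a model 1-D diffusion problem: choose u(x) = sin(πx), derive the forcing term analytically, and confirm the solver's observed order of convergence against its theoretical order. This is a generic MMS example, not the paper's even-parity transport formulation:

```python
import numpy as np

def solve_poisson(n, f, ua, ub):
    """Second-order central differences for -u'' = f on (0, 1) with
    Dirichlet boundary values ua, ub; returns grid and solution."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    b = f(x[1:-1])
    b[0] += ua / h**2
    b[-1] += ub / h**2
    u = np.zeros(n + 1)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, b)
    return x, u

# manufactured solution u = sin(pi x)  =>  forcing f = pi^2 sin(pi x)
exact = lambda x: np.sin(np.pi * x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)
errs = []
for n in (16, 32, 64):
    x, u = solve_poisson(n, f, 0.0, 0.0)
    errs.append(np.abs(u - exact(x)).max())
orders = [np.log2(errs[i] / errs[i + 1]) for i in range(2)]
```

Observed orders near 2 verify the discretization; the paper applies the same manufactured-solution logic to assess its space-angle a posteriori error indicators.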