Maximum Likelihood Blood Velocity Estimator Incorporating Properties of Flow Physics
DEFF Research Database (Denmark)
Schlaikjer, Malene; Jensen, Jørgen Arendt
2004-01-01
)-data under investigation. The flow physics properties are exploited in the second term, as the range of velocity values investigated in the cross-correlation analysis is compared to the velocity estimates in the temporal and spatial neighborhood of the signal segment under investigation. The new estimator has been compared to the cross-correlation (CC) estimator and the previously developed maximum likelihood estimator (MLE). The results show that the CMLE can handle a larger velocity search range and is capable of estimating even low velocity levels from tissue motion. The CC and the MLE produce... for the CC and the MLE. When the velocity search range is set to twice the limit of the CC and the MLE, the number of incorrect velocity estimates is 0, 19.1, and 7.2% for the CMLE, CC, and MLE, respectively. The ability to handle a larger search range and to estimate low velocity levels was confirmed...
A new maximum likelihood blood velocity estimator incorporating spatial and temporal correlation
DEFF Research Database (Denmark)
Schlaikjer, Malene; Jensen, Jørgen Arendt
2001-01-01
and space. This paper presents a new estimator (STC-MLE), which incorporates the correlation property. It is an expansion of the maximum likelihood estimator (MLE) developed by Ferrara et al. With the MLE, a cross-correlation analysis between consecutive RF-lines in complex form is carried out for a range of possible velocities. In the new estimator an additional similarity investigation for each evaluated velocity and the available velocity estimates in a temporal (between frames) and spatial (within frames) neighborhood is performed. An a priori probability density term in the distribution of the observations gives a probability measure of the correlation between the velocities. Both the MLE and the STC-MLE have been evaluated on simulated and in-vivo RF-data obtained from the carotid artery. Using the MLE, 4.1% of the estimates deviate significantly from the true velocities, when the performance...
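As a concrete illustration of the cross-correlation search that these estimators build on, the sketch below picks the candidate velocity whose implied inter-emission shift best correlates two consecutive RF lines. The pulse parameters, the shift-and-correlate search, and the synthetic test signal are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Assumed acquisition parameters (for illustration only)
c = 1540.0      # speed of sound [m/s]
fs = 40e6       # RF sampling rate [Hz]
T_prf = 1/5e3   # pulse repetition period [s]

def xcorr_velocity(line1, line2, v_candidates):
    """Return the candidate axial velocity whose implied shift between
    consecutive RF lines maximizes the inter-line correlation."""
    best_v, best_corr = 0.0, -np.inf
    for v in v_candidates:
        # A scatterer moving at v shifts its echo by 2*v*T_prf/c seconds
        shift = int(np.round(2 * v * T_prf / c * fs))  # in samples
        if shift >= 0:
            seg2 = np.r_[line2[shift:], np.zeros(shift)]
        else:
            seg2 = np.r_[np.zeros(-shift), line2[:shift]]
        corr = float(np.dot(line1, seg2))
        if corr > best_corr:
            best_v, best_corr = v, corr
    return best_v

# Synthetic test: a Gaussian-windowed pulse echo shifted by 2 samples
n = np.arange(256)
line1 = np.exp(-((n - 128) / 20.0) ** 2) * np.sin(2 * np.pi * 0.125 * n)
line2 = np.roll(line1, 2)
v_est = xcorr_velocity(line1, line2, np.linspace(-0.5, 0.5, 101))
```

The velocity resolution here is limited by the sample quantization of the shift; practical estimators interpolate the correlation peak, and the STC-MLE additionally weights each candidate by its consistency with neighboring estimates.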
Middle cerebral artery blood velocity during running
DEFF Research Database (Denmark)
Lyngeraa, Tobias; Pedersen, Lars Møller; Mantoni, T
2013-01-01
for eight subjects, respectively, were excluded from analysis because of insufficient signal quality. Running increased mean arterial pressure and mean MCA velocity and induced rhythmic oscillations in BP and in MCA velocity corresponding to the difference between step rate and heart rate (HR) frequencies... During running, rhythmic oscillations in arterial BP induced by interference between HR and step frequency impact on cerebral blood velocity. For the exercise as a whole, average MCA velocity becomes elevated. These results suggest that running not only induces an increase in regional cerebral blood flow...
Algorithms for estimating blood velocities using ultrasound
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2000-01-01
Ultrasound has been used intensively for the last 15 years for studying the hemodynamics of the human body. Systems for determining both the velocity distribution at one point of interest (spectral systems) and for displaying a map of velocity in real time have been constructed. A number of schemes have been developed for performing the estimation, and the various approaches are described. The current systems only display the velocity along the ultrasound beam direction, and a velocity transverse to the beam is not detected. This is a major problem in these systems, since most blood vessels are parallel to the skin surface. Angling the transducer will often disturb the flow, and new techniques for finding transverse velocities are needed. The various approaches for determining transverse velocities will be explained. This includes techniques using two-dimensional correlation (speckle tracking...
Maximum Likelihood-Based Methods for Target Velocity Estimation with Distributed MIMO Radar
Directory of Open Access Journals (Sweden)
Zhenxin Cao
2018-02-01
The estimation problem for target velocity is addressed for the scenario of a distributed multiple-input multiple-output (MIMO) radar system. A maximum likelihood (ML)-based estimation method is derived with knowledge of the target position. Then, for the scenario without knowledge of the target position, an iterative method is proposed to estimate the target velocity by updating the position information iteratively. Moreover, the Cramér-Rao Lower Bounds (CRLBs) for both scenarios are derived, and the performance degradation of velocity estimation without the position information is also expressed. Simulation results show that the proposed estimation methods can approach the CRLBs, and that the velocity estimation performance can be further improved by increasing either the number of radar antennas or the accuracy of the target position information. Furthermore, compared with existing methods, a better estimation performance is achieved.
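When the target position is known and the Doppler measurement noise is Gaussian, the ML velocity estimate reduces to a linear least-squares fit of the bistatic Doppler shifts; the geometry, wavelength, and sign convention below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

lam = 0.03                                      # wavelength [m] (assumed)
p = np.array([0.0, 0.0])                        # known target position
txs = np.array([[-500.0, 0.0], [0.0, -500.0]])  # transmitter positions (assumed)
rxs = np.array([[500.0, 100.0], [-100.0, 500.0], [300.0, 300.0]])
v_true = np.array([30.0, -10.0])                # target velocity [m/s]

# Each tx-rx pair sees a bistatic Doppler shift proportional to the
# projection of v onto the sum of unit vectors from target to tx and rx.
rows = []
for t in txs:
    for r in rxs:
        u = (t - p) / np.linalg.norm(t - p) + (r - p) / np.linalg.norm(r - p)
        rows.append(u / lam)
A = np.array(rows)
f = A @ v_true                                  # noiseless Doppler shifts [Hz]
v_hat, *_ = np.linalg.lstsq(A, f, rcond=None)   # ML estimate under Gaussian noise
```

Without the position, the paper's iterative scheme alternates between re-estimating the position and re-solving a velocity fit of this form.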
Examples of in-vivo blood vector velocity estimation
DEFF Research Database (Denmark)
Udesen, Jesper; Nielsen, Michael Bachmann; Nielsen, Kristian R.
2007-01-01
In this paper examples of in-vivo blood vector velocity images of the carotid artery are presented. The transverse oscillation (TO) method for blood vector velocity estimation has been used to estimate the vector velocities, and the method is first evaluated in a circulating flow rig where...
Daily rhythm of cerebral blood flow velocity
Directory of Open Access Journals (Sweden)
Spielman Arthur J
2005-03-01
Background: Cerebral blood flow velocity (CBFV) is lower in the morning than in the afternoon and evening. Two hypotheses have been proposed to explain the time-of-day changes in CBFV: (1) CBFV changes are due to sleep-associated processes, or (2) time-of-day changes in CBFV are due to an endogenous circadian rhythm independent of sleep. The aim of this study was to examine CBFV over 30 hours of sustained wakefulness to determine whether CBFV exhibits fluctuations associated with time of day. Methods: Eleven subjects underwent a modified constant routine protocol. CBFV from the middle cerebral artery was monitored by continuous recording of transcranial Doppler (TCD) ultrasonography. Other variables included core body temperature (CBT), end-tidal carbon dioxide (EtCO2), blood pressure, and heart rate. Salivary dim light melatonin onset (DLMO) served as a measure of endogenous circadian phase position. Results: A non-linear multiple regression, cosine fit analysis revealed that both the CBT and CBFV rhythms fit a 24 hour rhythm (R2 = 0.62 and R2 = 0.68, respectively). The circadian phase position of CBT occurred at 6:05 am while that of CBFV occurred at 12:02 pm, revealing a six hour, or 90 degree, difference between these two rhythms (t = 4.9, df = 10, p ...). Conclusion: Time-of-day variations in CBFV follow an approximately 24 hour rhythm under constant conditions, suggesting regulation by a circadian oscillator. The 90 degree phase angle difference between the CBT and CBFV rhythms may help explain previous findings of lower CBFV values in the morning. The phase difference occurs at a time period during which cognitive performance decrements have been observed and when both cardiovascular and cerebrovascular events occur more frequently. The mechanisms underlying this phase angle difference require further exploration.
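The cosine-fit analysis is in essence a cosinor regression, which becomes linear once the 24 h period is fixed. A minimal sketch with synthetic CBFV data (the sampling grid, mesor, and amplitude are invented for illustration):

```python
import numpy as np

w = 2 * np.pi / 24                   # fixed 24 h angular frequency [rad/h]
t = np.arange(0, 30, 0.5)            # 30 h of half-hourly samples (assumed)
y = 60 + 5 * np.cos(w * (t - 12))    # synthetic CBFV [cm/s], peaking at 12:00

# y = M + a*cos(wt) + b*sin(wt) is linear in (M, a, b)
X = np.c_[np.ones_like(t), np.cos(w * t), np.sin(w * t)]
M, a, b = np.linalg.lstsq(X, y, rcond=None)[0]
amplitude = np.hypot(a, b)
acrophase_h = (np.arctan2(b, a) / w) % 24   # clock time of the fitted peak
```

Fitting CBT and CBFV separately and differencing the two acrophases gives the phase-angle difference the study reports.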
Vector blood velocity estimation in medical ultrasound
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Gran, Fredrik; Udesen, Jesper
2006-01-01
Two methods for making vector velocity estimation in medical ultrasound are presented. Both techniques can find the axial and transverse velocity in the image and can be used for displaying both the correct velocity magnitude and direction. The first method uses a transverse oscillation in the ultrasound field to find the transverse velocity. In-vivo examples from the carotid artery are shown, where complex turbulent flow is found in certain parts of the cardiac cycle. The second approach uses directional beamforming along the flow direction to estimate the velocity magnitude. Using a correlation search can also yield the direction, and the full velocity vector is thereby found. An example from a flow rig is shown.
Estimation of blood velocities using ultrasound
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
imaging, and, finally, some of the more recent experimental techniques. The author shows that the Doppler shift, usually considered the way velocity is detected, actually plays a minor role in pulsed systems. Rather, it is the shift in position of signals between pulses that is used in velocity...
Blood flow velocity in migraine attacks - a transcranial Doppler study
International Nuclear Information System (INIS)
Zwetsloot, C.P.; Caekebeke, J.F.V.; Jansen, J.C.; Odink, J.; Ferrari, M.D.
1991-01-01
A pulsed Doppler device was used to measure blood flow velocities in the common carotid artery, the extracranial part of the internal carotid artery, the external carotid artery, the middle cerebral artery, and the anterior cerebral artery in 31 migraineurs without aura (n=27) and with aura (n=4), both during and outside an attack. The aims were to compare blood flow velocity during and between migraine attacks and to study asymmetries of the blood flow velocity. Compared with blood flow velocity values obtained in the attack-free interval, blood flow velocity was lower during attacks without aura in both common carotid arteries, but not in the other extra- and intracranial vessels which were examined. However, during attacks of migraine with aura, blood flow velocity tended to be lower in all examined vessels. There were no asymmetries of the blood flow velocity. It is suggested that during migraine attacks without aura there is a dissociation in blood flow regulation in the common carotid and middle cerebral arteries. 20 refs., 2 tabs
Blood flow velocity in migraine attacks - a transcranial Doppler study
Energy Technology Data Exchange (ETDEWEB)
Zwetsloot, C.P.; Caekebeke, J.F.V.; Jansen, J.C.; Odink, J.; Ferrari, M.D. (Rijksuniversiteit Leiden (Netherlands))
1991-05-01
Middle cerebral artery blood velocity during running
Lyngeraa, T. S.; Pedersen, L. M.; Mantoni, T.; Belhage, B.; Rasmussen, L. S.; van Lieshout, J. J.; Pott, F. C.
2013-01-01
Running induces characteristic fluctuations in blood pressure (BP) of unknown consequence for organ blood flow. We hypothesized that running-induced BP oscillations are transferred to the cerebral vasculature. In 15 healthy volunteers, transcranial Doppler-determined middle cerebral artery (MCA)
Ultrasound systems for blood velocity estimation
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
1998-01-01
Medical ultrasound scanners can be used both for displaying gray-scale images of the anatomy and for visualizing the blood flow dynamically in the body. The systems can interrogate the flow at a single position in the body and there find the velocity distribution over time. They can also show a dynamic...
Velocity Profiles of Slow Blood Flow in a Narrow Tube
Chen, Jinyu; Huang, Zuqia; Zhuang, Fengyuan; Zhang, Hui
1998-04-01
A fractal model is introduced into the slow blood motion. When blood flows slowly in a narrow tube, red cell aggregation results in the formation of an approximately cylindrical core of red cells. By introducing the fractal model and using the power-law relation between area fraction φ and distance from the tube axis ρ, rigorous velocity profiles of the fluid in and outside the aggregated core, and of the core itself, are obtained analytically for different fractal dimensions. The model shows a blunted velocity distribution for a relatively large fractal dimension (D ˜ 2), which can be observed in normal blood; a pathological velocity profile for moderate dimension (D = 1), which is similar to the Segre-Silberberg effect; and a parabolic profile for negligible red cell concentration (D = 0), as in Poiseuille flow. The project was supported by the National Basic Research Project "Nonlinear Science", the National Natural Science Foundation of China, and the State Education Commission through the Foundation of Doctoral Training.
Blood velocity estimation using ultrasound and spectral iterative adaptive approaches
DEFF Research Database (Denmark)
Gudmundson, Erik; Jakobsson, Andreas; Jensen, Jørgen Arendt
2011-01-01
This paper proposes two novel iterative data-adaptive spectral estimation techniques for blood velocity estimation using medical ultrasound scanners. The techniques make no assumption on the sampling pattern of the emissions or the depth samples, allowing for duplex mode transmissions where B-mode images are interleaved with the Doppler emissions. Furthermore, the techniques are shown, using both simplified and more realistic Field II simulations as well as in vivo data, to outperform current state-of-the-art techniques, allowing for accurate estimation of the blood velocity spectrum using only 30...
Data adaptive estimation of transversal blood flow velocities
DEFF Research Database (Denmark)
Pirnia, E.; Jakobsson, A.; Gudmundson, E.
2014-01-01
The examination of blood flow inside the body may yield important information about vascular anomalies, such as possible indications of, for example, stenosis. Current medical ultrasound systems suffer from only allowing for measuring the blood flow velocity along the direction of irradiation, posing natural difficulties due to the complex behaviour of blood flow, and due to the natural orientation of most blood vessels. Recently, a transversal modulation scheme was introduced to induce also an oscillation along the transversal direction, thereby allowing for the measurement of also the transversal blood flow. In this paper, we propose a novel data-adaptive blood flow estimator exploiting this modulation scheme. Using realistic Field II simulations, the proposed estimator is shown to achieve a notable performance improvement as compared to current state-of-the-art techniques.
An Iterative Adaptive Approach for Blood Velocity Estimation Using Ultrasound
DEFF Research Database (Denmark)
Gudmundson, Erik; Jakobsson, Andreas; Jensen, Jørgen Arendt
2010-01-01
This paper proposes a novel iterative data-adaptive spectral estimation technique for blood velocity estimation using medical ultrasound scanners. The technique makes no assumption on the sampling pattern of the slow-time or the fast-time samples, allowing for duplex mode transmissions where B-mode images are interleaved with the Doppler emissions. Furthermore, the technique is shown, using both simplified and more realistic Field II simulations, to outperform current state-of-the-art techniques, allowing for accurate estimation of the blood velocity spectrum using only 30% of the transmissions, thereby allowing for the examination of two separate vessel regions while retaining an adequate updating rate of the B-mode images. In addition, the proposed method also allows for more flexible transmission patterns, as well as exhibits fewer spectral artifacts as compared to earlier techniques.
Tissue motion in blood velocity estimation and its simulation
DEFF Research Database (Denmark)
Schlaikjer, Malene; Torp-Pedersen, Søren; Jensen, Jørgen Arendt
1998-01-01
to the improvement of color flow imaging. Optimization based on in-vivo data is difficult since the blood and tissue signals cannot be accurately distinguished and the correct extent of the vessel under investigation is often unknown. This study introduces a model for the simulation of blood velocity data in which tissue motion is included. Tissue motion from breathing, heart beat, and vessel pulsation was determined based on in-vivo RF-data obtained from 10 healthy volunteers. The measurements were taken at the carotid artery at one condition and in the liver at three conditions. Each measurement was repeated 10... The motion due to the heart, when the volunteer was asked to hold his breath, gave a peak velocity of 4.2±1.7 mm/s. The movement of the carotid artery wall due to changing blood pressure had a peak velocity of 8.9±3.7 mm/s over the cardiac cycle. The variations are due to differences in heart rhythm...
Kyoden, Tomoaki; Akiguchi, Shunsuke; Tajiri, Tomoki; Andoh, Tsugunobu; Hachiga, Tadashi
2017-11-01
The development of a system for in vivo visualization of occluded distal blood vessels for diabetic patients is the main target of our research. We herein describe two-beam multipoint laser Doppler velocimetry (MLDV), which measures the instantaneous multipoint flow velocity and can be used to observe the blood flow velocity in peripheral blood vessels. By including a motorized stage to shift the measurement points horizontally and in the depth direction while measuring the velocity, the path of the blood vessel in the skin could be observed using blood flow velocity in three-dimensional space. The relationship of the signal power density between the blood vessel and the surrounding tissues was shown and helped us identify the position of the blood vessel. Two-beam MLDV can be used to simultaneously determine the absolute blood flow velocity distribution and identify the blood vessel position in skin.
Prediction of Cerebral Hyperperfusion Syndrome with Velocity Blood Pressure Index
Directory of Open Access Journals (Sweden)
Zhi-Chao Lai
2015-01-01
Background: Cerebral hyperperfusion syndrome (CHS) is an important complication of carotid endarterectomy (CEA). An >100% increase in middle cerebral artery velocity (MCAV) after CEA is used to predict CHS development, but the accuracy is limited. The increase in blood pressure (BP) after surgery is a risk factor for CHS, but no study has used it to predict CHS. This study aimed to create a more precise parameter for prediction of CHS by combining the postoperative increases in MCAV and BP. Methods: Systolic MCAV measured by transcranial Doppler and systemic BP were recorded preoperatively and 30 min postoperatively. The new parameter, velocity BP index (VBI), was calculated from the postoperative increase ratios of MCAV and BP. The prediction powers of VBI and the increase ratio of MCAV (velocity ratio [VR]) were compared for predicting CHS occurrence. Results: In total, 6/185 cases suffered CHS. A best-fit cut-off point of 2.0 for VBI was identified, which had 83.3% sensitivity, 98.3% specificity, 62.5% positive predictive value and 99.4% negative predictive value for CHS development. This result is significantly better than VR (33.3%, 97.2%, 28.6% and 97.8%). Area under the curve (AUC) of the receiver operating characteristic: AUC(VBI) = 0.981, 95% confidence interval [CI] 0.949-0.995; AUC(VR) = 0.935, 95% CI 0.890-0.966; P = 0.02. Conclusions: The new parameter VBI can more accurately predict patients at risk of CHS after CEA. This observation needs to be validated by larger studies.
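The reported sensitivity, specificity, PPV, and NPV for the VBI cut-off of 2.0 are ordinary 2x2 screening-test metrics; a sketch with made-up scores and outcomes, not the study's data:

```python
import numpy as np

def screen_metrics(score, truth, cutoff):
    """2x2 screening metrics for the rule `score > cutoff`."""
    pred = np.asarray(score) > cutoff
    truth = np.asarray(truth, bool)
    tp = np.sum(pred & truth)    # true positives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    tn = np.sum(~pred & ~truth)  # true negatives
    return {"sens": tp / (tp + fn), "spec": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

# Hypothetical VBI values and CHS outcomes (illustrative only)
vbi = np.array([2.5, 2.2, 1.0, 0.5])
chs = np.array([True, False, True, False])
m = screen_metrics(vbi, chs, cutoff=2.0)
```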
Prediction of Cerebral Hyperperfusion Syndrome with Velocity Blood Pressure Index.
Lai, Zhi-Chao; Liu, Bao; Chen, Yu; Ni, Leng; Liu, Chang-Wei
2015-06-20
In Vivo Validation of a Blood Vector Velocity Estimator with MR Angiography
DEFF Research Database (Denmark)
Hansen, Kristoffer Lindskov; Udesen, Jesper; Thomsen, Carsten
2009-01-01
Conventional Doppler methods for blood velocity estimation only estimate the velocity component along the ultrasound beam direction. This implies that a Doppler angle close to 90° results in unreliable information about the true blood direction and blood velocity. The novel method... indicate that reliable vector velocity estimates can be obtained in vivo using the presented angle-independent 2-D vector velocity method. The TO method can be a useful alternative to conventional Doppler systems by avoiding the angle artifact, thus giving quantitative velocity information.
Ide, K.; Pott, F.; van Lieshout, J. J.; Secher, N. H.
1998-01-01
We tested the hypothesis that pharmacological reduction of the increase in cardiac output during dynamic exercise with a large muscle mass would influence the cerebral blood velocity/perfusion. We studied the relationship between changes in cerebral blood velocity (transcranial Doppler), rectus
Changes in cerebral artery blood flow velocity after intermittent cerebrospinal fluid drainage.
Kempley, S T; Gamsu, H R
1993-01-01
Doppler ultrasound was used to measure blood flow velocity in the anterior cerebral artery of six premature infants with posthaemorrhagic hydrocephalus, before and after intermittent cerebrospinal fluid (CSF) drainage, on 23 occasions. There was a significant increase in mean blood flow velocity after the drainage procedures (+5.6 cm/s, 95% confidence interval +2.9 to +8.3 cm/s), which was accompanied by a decrease in velocity waveform pulsatility. CSF pressure also fell significantly. In pat...
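The "+5.6 cm/s (95% confidence interval +2.9 to +8.3 cm/s)" result is a paired mean change with a t-based confidence interval; a sketch with invented pre/post differences (the t critical value must match the degrees of freedom of the actual sample):

```python
import numpy as np

def mean_ci(diff, t_crit):
    """Mean of paired differences with a t-based confidence interval."""
    d = np.asarray(diff, float)
    m = d.mean()
    se = d.std(ddof=1) / np.sqrt(d.size)   # standard error of the mean
    return m, m - t_crit * se, m + t_crit * se

# Invented pre/post velocity differences [cm/s]; t_crit = 2.776 for df = 4
m, lo, hi = mean_ci([1.0, 2.0, 3.0, 4.0, 5.0], t_crit=2.776)
```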
Blood flow velocity measurements in chicken embryo vascular network via PIV approach
Kurochkin, Maxim A.; Stiukhina, Elena S.; Fedosov, Ivan V.; Tuchin, Valery V.
2018-04-01
An improved method for measuring blood velocity in the native vasculature of a chick embryo using micro particle image velocimetry (μPIV) is presented. A method for sorting interrogation regions by a mask of the vasculature is proposed, and a method for sorting the velocity field of capillary blood flow is implemented. The accuracy of the method was evaluated in vitro in a glass phantom of a blood vessel with a diameter of 50 μm and in vivo on the bloodstream of a chicken embryo, by comparing the transverse profile of the blood velocity obtained by the PIV method with the theoretical Poiseuille laminar flow profile.
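The in vitro check amounts to comparing a measured transverse profile against the Poiseuille parabola u(r) = u_max(1 - (r/R)^2); the peak velocity and noise level below are assumptions, with the radius taken from the 50 μm phantom:

```python
import numpy as np

R = 25e-6                                # 50 um phantom diameter -> 25 um radius
u_max = 1e-3                             # assumed peak velocity [m/s]
r = np.linspace(-R, R, 21)               # transverse measurement positions
u_theory = u_max * (1 - (r / R) ** 2)    # Poiseuille laminar profile

rng = np.random.default_rng(0)
u_meas = u_theory + rng.normal(0, 2e-5, r.size)   # synthetic "PIV" readings
rmse = np.sqrt(np.mean((u_meas - u_theory) ** 2))  # profile deviation
```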
Variation of velocity profile according to blood viscosity in a microfluidic channel
Yeom, Eunseop; Kang, Yang Jun; Lee, Sang-Joon
2014-11-01
The shear-thinning effect of blood flows is known to change blood viscosity. Since blood viscosity and the motion of red blood cells (RBCs) are closely related, hemorheological variations have a strong influence on hemodynamic characteristics. Therefore, understanding the relationship between hemorheological and hemodynamic properties is important for obtaining more detailed information on blood circulation in microvessels. In this study, the blood viscosity and velocity profiles in a microfluidic channel were systematically investigated. Rat blood was delivered into the microfluidic device, which can measure blood viscosity by monitoring the flow-switching phenomenon. Velocity profiles of blood flows in the microchannel were measured by using a micro-particle image velocimetry (PIV) technique. The shape of the velocity profiles measured at different flow rates was quantified by using a curve-fitting equation. It was observed that the shape of the velocity profiles is highly correlated with blood viscosity. The study of the relation between blood viscosity and velocity profile would be helpful for understanding the roles of hemorheological and hemodynamic properties in cardiovascular diseases. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIP) (No. 2008-0061991).
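One common curve-fitting equation for quantifying profile bluntness is the power-law form u(r) = u_max(1 - (|r|/R)^n), where n = 2 is parabolic and larger n is blunter; the paper's exact fitting equation may differ, so treat this as a generic sketch:

```python
import numpy as np

def fit_bluntness(r, u, R):
    """Fit n in u(r) = u_max*(1 - (|r|/R)**n) via a log-log slope."""
    u_max = u.max()
    mask = (np.abs(r) > 0) & (u < u_max)         # avoid log(0) at the center
    x = np.log(np.abs(r[mask]) / R)
    y = np.log(1 - u[mask] / u_max)
    return float(np.sum(x * y) / np.sum(x * x))  # slope through the origin

# Synthetic blunted profile with n = 3 (normalized radius, R = 1)
r = np.linspace(-1.0, 1.0, 41)
u = 1 - np.abs(r) ** 3
n_fit = fit_bluntness(r, u, R=1.0)
```

A rising fitted n across flow rates would indicate the flattening of the profile that accompanies increased RBC aggregation and viscosity.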
Coded excitation and sub-band processing for blood velocity estimation in medical ultrasound
DEFF Research Database (Denmark)
Gran, Fredrik; Udesen, Jesper; Jensen, Jørgen Arendt
2007-01-01
This paper investigates the use of broadband coded excitation and subband processing for blood velocity estimation in medical ultrasound. In conventional blood velocity estimation a long (narrow-band) pulse is emitted and the blood velocity is estimated using an auto-correlation based approach. However, the axial resolution of the narrow-band pulse is too poor for brightness-mode (B-mode) imaging. Therefore, a separate transmission sequence is used for updating the B-mode image, which lowers the overall frame-rate of the system. By using broad-band excitation signals, the backscattered received signal can be divided into a number of narrow frequency bands. The blood velocity can be estimated in each of the bands and the velocity estimates can be averaged to form an improved estimate. Furthermore, since the excitation signal is broadband, no secondary B-mode sequence is required, and the frame...
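The auto-correlation approach referred to here is commonly the lag-one (Kasai) estimator, which in subband processing is applied per band and the results averaged; a minimal sketch with assumed center frequency and PRF:

```python
import numpy as np

c, f0, f_prf = 1540.0, 5e6, 5e3   # assumed sound speed [m/s], band center [Hz], PRF [Hz]

def kasai_velocity(iq):
    """Axial velocity from complex slow-time samples at one depth,
    via the phase of the lag-one autocorrelation."""
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))      # lag-1 autocorrelation
    return c * f_prf * np.angle(r1) / (4 * np.pi * f0)

# Synthetic slow-time signal for a scatterer moving at 0.3 m/s
phi = 4 * np.pi * f0 * 0.3 / (c * f_prf)         # phase step per emission
iq = np.exp(1j * phi * np.arange(8))
v_est = kasai_velocity(iq)
```

With a broadband code, the received signal would be filtered into several such bands, each with its own f0, and the per-band estimates averaged.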
Middle cerebral artery blood velocity and cerebral blood flow and O2 uptake during dynamic exercise
DEFF Research Database (Denmark)
Madsen, P L; Sperling, B K; Warming, T
1993-01-01
Results obtained by the 133Xe clearance method with external detectors and by transcranial Doppler sonography (TCD) suggest that dynamic exercise causes an increase of global average cerebral blood flow (CBF). These data are contradicted by earlier data obtained during less-well-defined conditions. To investigate this controversy, we applied the Kety-Schmidt technique to measure the global average levels of CBF and cerebral metabolic rate of oxygen (CMRO2) during rest and dynamic exercise. Simultaneously with the determination of CBF and CMRO2, we used TCD to determine mean maximal flow velocity in the middle cerebral artery (MCA Vmean). For values of CBF and MCA Vmean a correction for an observed small drop in arterial PCO2 was carried out. Baseline values for global CBF and CMRO2 were 50.7 and 3.63 ml.100 g-1.min-1, respectively. The same values were found during dynamic exercise, whereas a 22% (P...
Directory of Open Access Journals (Sweden)
Lumu W
2017-02-01
William Lumu,1 Leaticia Kampiire,2 George Patrick Akabwai,3 Daniel Ssekikubo Kiggundu,4 Davis Kibirige5 1Department of Medicine and Diabetes/Endocrine Unit, Mengo Hospital, 2Infectious Disease Research Collaboration, 3Baylor College of Medicine Children's Foundation, 4Nephrology Unit, Mulago National Referral and Teaching Hospital, 5Department of Medicine, Uganda Martyrs Hospital Lubaga, Kampala, Uganda. Background: Hypertension is one of the recognized risk factors for cardiovascular diseases in adult diabetic patients. A high prevalence of suboptimal blood pressure (BP) control has been well documented in the majority of studies assessing BP control in diabetic patients in sub-Saharan Africa. In Uganda, there is a dearth of similar studies. This study evaluated the prevalence and correlates of suboptimal BP control in an adult diabetic population in Uganda. Patients and methods: This was a cross-sectional study that enrolled 425 eligible ambulatory adult diabetic patients attending three urban diabetic outpatient clinics over 11 months. Data about their sociodemographic characteristics and clinical history were collected using pre-tested questionnaires. Suboptimal BP control was defined according to the 2015 American Diabetes Association standards of diabetes care guideline as BP levels ≥140/90 mmHg. Results: The mean age of the study participants was 52.2±14.4 years, with the majority being females (283, 66.9%). Suboptimal BP control was documented in 192 (45.3%) study participants and was independently associated with the study site (private hospitals; odds ratio 2.01, 95% confidence interval 1.18-3.43, P=0.01) and use of statin therapy (odds ratio 0.5, 95% confidence interval 0.26-0.96, P=0.037). Conclusion: Suboptimal BP control was highly prevalent in this study population. Strategies to improve optimal BP control, especially in the private hospitals, and the use of statin therapy should be encouraged in adult diabetic patients.
Effect of head rotation on cerebral blood velocity in the prone position
DEFF Research Database (Denmark)
Højlund, Jakob; Sandmand, Marie; Sonne, Morten
2012-01-01
for cerebral blood flow. We tested in healthy subjects the hypothesis that rotating the head in the prone position reduces cerebral blood flow. Methods: Mean arterial blood pressure (MAP), stroke volume (SV), and CO were determined, together with the middle cerebral artery mean blood velocity (MCA Vmean)... Head rotation reduced MCA Vmean by ~10% in spite of an elevated MAP. Prone positioning with a rotated head affects both CBF and cerebrovenous drainage, indicating that optimal brain perfusion requires head centering.
DEFF Research Database (Denmark)
Hansen, Kristoffer Lindskov; Udesen, Jesper; Gran, Fredrik
2008-01-01
Conventional ultrasound methods for acquiring color flow images of the blood motion are restricted by a relatively low frame rate and angle-dependent velocity estimates. The Plane Wave Excitation (PWE) method has been proposed to solve these limitations. The frame rate can be increased, and the 2-D vector velocity of the blood motion can be estimated. The transmitted pulse is not focused, and a full speckle image of the blood can be acquired for each emission. A 13 bit Barker code is transmitted simultaneously from each transducer element. The 2-D vector velocity of the blood is found using 2-D speckle tracking between segments in consecutive speckle images. The flow patterns of six bifurcations and two veins were investigated in-vivo. It was shown: 1) that a stable vortex in the carotid bulb was present, as opposed to the other examined bifurcations, 2) that retrograde flow was present...
Reference values of fetal peak systolic blood flow Velocity in the ...
African Journals Online (AJOL)
Objectives: The objectives of this prospective cross sectional study are (i) to establish new reference values of peak systolic blood flow velocity measurement in the fetal middle cerebral artery (MCA-PSV) following validated methodological guidelines (ii) to correlate peak systolic velocity with gestational age and (iii) to ...
Velocity estimation using synthetic aperture imaging [blood flow
DEFF Research Database (Denmark)
Nikolov, Svetoslav; Jensen, Jørgen Arendt
2001-01-01
An approach for synthetic aperture blood flow ultrasound imaging is presented. Estimates with a low bias and standard deviation can be obtained with as few as eight emissions. The performance of the new estimator is verified using both simulations and measurements. The results demonstrate that a fully...
Influence of type of aortic valve prosthesis on coronary blood flow velocity.
Jelenc, Matija; Juvan, Katja Ažman; Medvešček, Nadja Tatjana Ružič; Geršak, Borut
2013-02-01
Severe aortic valve stenosis is associated with high resting and reduced hyperemic coronary blood flow. Coronary blood flow increases after aortic valve replacement (AVR); however, the increase depends on the type of prosthesis used. The present study investigates the influence of type of aortic valve prosthesis on coronary blood flow velocity. The blood flow velocity in the left anterior descending coronary artery (LAD) and the right coronary artery (RCA) was measured intraoperatively before and after AVR with a stentless bioprosthesis (Sorin Freedom Solo; n = 11) or a bileaflet mechanical prosthesis (St. Jude Medical Regent; n = 11). Measurements were made with an X-Plore epicardial Doppler probe (Medistim, Oslo, Norway) following induction of hyperemia with an adenosine infusion. Preoperative and postoperative echocardiography evaluations were used to assess valvular and ventricular function. Velocity time integrals (VTI) were measured from the Doppler signals and used to calculate the proportion of systolic VTI (SF), diastolic VTI (DF), and normalized systolic coronary blood flow velocities (NSF) and normalized diastolic coronary blood flow velocities (NDF). The systolic proportion of the LAD VTI increased after AVR with the St. Jude Medical Regent prosthesis, which produced higher LAD SF and NSF values than the Sorin Freedom Solo prosthesis (SF, 0.41 ± 0.09 versus 0.29 ± 0.13 [P = .04]; NSF, 0.88 ± 0.24 versus 0.55 ± 0.17 [P = .01]). No significant changes in the LAD velocity profile were noted after valve replacement with the Sorin Freedom Solo, despite a significant reduction in transvalvular gradient and an increase in the effective orifice area. AVR had no effect on the RCA flow velocity profile. The coronary flow velocity profile in the LAD was significantly influenced by the type of aortic valve prosthesis used. The differences in the LAD velocity profile probably reflect differences in valve design and the systolic transvalvular flow pattern.
International Nuclear Information System (INIS)
Taoka, Yoshiaki; Harada, Masafumi; Nishitani, Hiromu; Yukinaka, Michiko; Nomura, Masahiro
1998-01-01
Coronary arterial blood flow velocity was measured using MRI. Two types of phase contrast methods were used for the measurements, one of which exhibited good resolving power whereas the other provided more distinct images acquired while the subjects held their breath. Before measuring coronary arterial blood flow velocity, the accuracy of the two phase contrast methods was evaluated using a phantom. The results obtained with both methods largely agreed with the phantom values. Using both methods, the patterns of coronary arterial blood flow over one cardiac cycle were essentially identical. A peak was noted in late systole or in early diastole in the right coronary artery, whereas in the left coronary artery a peak was noted somewhat later in diastole. In healthy volunteers, no significant difference in the maximal flow velocity in the coronary arteries was found from one age group to another. Among patients with coronary arterial stenosis, coronary arterial blood flow velocity central to the area of stenosis was lower than that observed in the healthy volunteers. Coronary arterial blood flow velocity was observed to decrease after administration of isosorbide dinitrate and to increase following administration of nifedipine. (author)
Recent advances in blood flow vector velocity imaging
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Nikolov, Svetoslav; Udesen, Jesper
2011-01-01
tracking. The key advantages of these techniques are very fast imaging that can attain an order of magnitude higher precision than conventional methods. SA flow imaging was implemented on the experimental scanner RASMUS using an 8-emission spherical emission sequence and reception of 64 channels on a BK...... investigated using both simulations, flow rig measurements, and in-vivo validation against MR scans. The TO method obtains a relative accuracy of 10% for a fully transverse flow in both simulations and flow rig experiments. In-vivo studies performed on 11 healthy volunteers comparing the TO method...... been acquired using a commercial implementation of the method (BK Medical ProFocus Ultraview scanner). A range of other methods are also presented. This includes synthetic aperture imaging using either spherical or plane waves with velocity estimation performed with directional beamforming or speckle...
[A capillary blood flow velocity detection system based on linear array charge-coupled devices].
Zhou, Houming; Wang, Ruofeng; Dang, Qi; Yang, Li; Wang, Xiang
2017-12-01
In order to detect the flow characteristics of blood samples in the capillary, this paper introduces a blood flow velocity measurement system based on a field-programmable gate array (FPGA), linear charge-coupled devices (CCD) and personal computer (PC) software. Based on analysis of the TCD1703C and AD9826 device data sheets, the Verilog HDL hardware description language was used to design and simulate the driver. Image signal acquisition and extraction of the real-time edge information of the blood sample were carried out synchronously in the FPGA. A differential operation was then performed on the series of discrete edge displacements of each blood sample, so that the sample flow velocity could be obtained. Finally, the feasibility of the blood flow velocity detection system was verified by simulation and debugging. The flow velocity curve was drawn and its characteristics analyzed, and the significance of measuring blood flow velocity is discussed. The results show that the measurement of the system is less time-consuming and less complex than other flow rate monitoring schemes.
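The displacement-differencing step described in this record can be sketched as post-processing in a few lines. This is an illustrative reconstruction only (the paper performs edge extraction and differencing inside the FPGA); the function name and the pixel pitch and line period values are assumptions:

```python
import numpy as np

def flow_velocity_from_edges(edge_positions_px, pixel_pitch_um, line_period_s):
    """Estimate flow velocity from the sample-front edge position detected
    on a linear CCD in consecutive scans (hypothetical post-processing
    stand-in for the FPGA's differential operation)."""
    # Displacement between consecutive scans, converted from pixels to um
    displacements_um = np.diff(edge_positions_px) * pixel_pitch_um
    # Mean displacement per scan divided by the scan period gives velocity
    return float(np.mean(displacements_um) / line_period_s)

# Example: edge advances 2 px per scan, 7 um pixels, 1 ms line period
edges = np.array([100, 102, 104, 106, 108])
v = flow_velocity_from_edges(edges, pixel_pitch_um=7.0, line_period_s=1e-3)
# v = 2 px * 7 um / 1 ms = 14000 um/s (14 mm/s)
```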
DEFF Research Database (Denmark)
Klefter, Oliver Niels; Lauritsen, Anne Øberg; Larsen, Michael
2015-01-01
PURPOSE: To test the oxygen reactivity of a fundus photographic method of measuring macular perfusion velocity and to integrate macular perfusion velocities with measurements of retinal vessel diameters and blood oxygen saturation. METHODS: Sixteen eyes in 16 healthy volunteers were studied at two...... is a valid method for assessing macular perfusion. Results were consistent with previous observations of hyperoxic blood flow reduction using blue field entoptic and laser Doppler velocimetry. Retinal perfusion seemed to be regulated around individual set points according to blood glucose levels. Multimodal...
Joint probability discrimination between stationary tissue and blood velocity signals
DEFF Research Database (Denmark)
Schlaikjer, Malene; Jensen, Jørgen Arendt
2001-01-01
before and after echo-canceling, and (b) the amplitude variations between samples in consecutive RF-signals before and after echo-canceling. The statistical discriminator was obtained by computing the probability density functions (PDFs) for each feature through histogram analysis of data....... This study presents a new statistical discriminator. Investigation of the RF-signals reveals that features can be derived that distinguish the segments of the signal which do and do not carry information on the blood flow. In this study 4 features have been determined: (a) the energy content in the segments....... The discrimination is performed by determining the joint probability of the features for the segment under investigation and choosing the segment type that is most likely. The method was tested on simulated data resembling RF-signals from the carotid artery....
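The joint-probability decision rule described in this record can be sketched as follows. This is a minimal illustration with synthetic feature data and an assumed independence between features; the feature values and distributions are made up and do not reproduce the authors' trained discriminator:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_pdfs(features, bins=30):
    """Histogram-based PDF estimate per feature (features: samples x features)."""
    return [np.histogram(features[:, j], bins=bins, density=True)
            for j in range(features.shape[1])]

def joint_likelihood(x, pdfs):
    """Joint probability of a feature vector, assuming independent features."""
    p = 1.0
    for val, (hist, edges) in zip(x, pdfs):
        if val < edges[0] or val >= edges[-1]:
            return 0.0  # outside the observed range for this segment type
        p *= hist[np.searchsorted(edges, val, side="right") - 1]
    return p

# Synthetic training features: tissue segments have high energy and small
# amplitude variation, blood segments the opposite (illustrative values only)
tissue = rng.normal([10.0, 0.5], 0.5, size=(2000, 2))
blood = rng.normal([2.0, 3.0], 0.5, size=(2000, 2))
pdf_tissue, pdf_blood = fit_pdfs(tissue), fit_pdfs(blood)

segment = np.array([2.1, 2.9])  # features of an unknown segment
label = ("blood" if joint_likelihood(segment, pdf_blood)
         > joint_likelihood(segment, pdf_tissue) else "tissue")
```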
DEFF Research Database (Denmark)
Udesen, Jesper; Gran, Fredrik; Hansen, Kristoffer Lindskov
2008-01-01
) The ultrasound is not focused during the transmissions of the ultrasound signals; 2) A 13-bit Barker code is transmitted simultaneously from each transducer element; and 3) The 2-D vector velocity of the blood is estimated using 2-D cross-correlation. A parameter study was performed using the Field II program......, and performance of the method was investigated when a virtual blood vessel was scanned by a linear array transducer. An improved parameter set for the method was identified from the parameter study, and a flow rig measurement was performed using the same improved setup as in the simulations. Finally, the common...... carotid artery of a healthy male was scanned with a scan sequence that satisfies the limits set by the Food and Drug Administration. Vector velocity images were obtained with a frame-rate of 100 Hz where 40 speckle images are used for each vector velocity image. It was found that the blood flow...
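The 2-D cross-correlation step in this record (matching a segment of one speckle image against a search region in the next) can be illustrated with generic normalized cross-correlation on synthetic speckle. This is a sketch of the technique, not the authors' implementation; the kernel and search sizes are arbitrary:

```python
import numpy as np

def speckle_track(img0, img1, y0, x0, k, search):
    """Find the 2-D displacement of a (2k+1)x(2k+1) segment of img0 inside
    a search region of img1 by normalized cross-correlation."""
    kern = img0[y0 - k:y0 + k + 1, x0 - k:x0 + k + 1]
    kern = (kern - kern.mean()) / (kern.std() + 1e-12)
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img1[y0 + dy - k:y0 + dy + k + 1,
                        x0 + dx - k:x0 + dx + k + 1]
            cand = (cand - cand.mean()) / (cand.std() + 1e-12)
            score = float(np.mean(kern * cand))
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx

# Synthetic speckle image shifted by (2, -1) pixels between frames
rng = np.random.default_rng(1)
img0 = rng.normal(size=(64, 64))
img1 = np.roll(np.roll(img0, 2, axis=0), -1, axis=1)
dy, dx = speckle_track(img0, img1, y0=32, x0=32, k=8, search=4)
# dividing (dy, dx) by the frame interval gives the 2-D velocity vector
```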
Influence of caffeine and caffeine withdrawal on headache and cerebral blood flow velocities
Couturier, EGM; Laman, DM; vanDuijn, MAJ; vanDuijn, H
Caffeine consumption may cause headache, particularly migraine. Its withdrawal also produces headaches and may be related to weekend migraine attacks. Transcranial Doppler sonography (TCD) has shown changes in cerebral blood flow velocities (BFV) during and between attacks of migraine. In order to
Aries, Marcel J; Elting, Jan Willem; Stewart, Roy; De Keyser, Jacques; Kremer, Berry; Vroomen, Patrick
2013-01-01
Objectives: National guidelines recommend mobilisation in bed as early as possible after acute stroke. Little is known about the influence of upright positioning on real-time cerebral flow variables in patients with stroke. We aimed to assess whether cerebral blood flow velocity (CBFV) changes
Effects of ethamsylate on cerebral blood flow velocity in premature babies.
Rennie, J M; Lam, P K
1989-01-01
Cerebral blood flow velocity and cardiac output were measured with ultrasound before and 30 minutes after the administration of ethamsylate in a double blind placebo controlled study of 19 very low birthweight infants. No differences were found before or after treatment in either group.
Fast Blood Vector Velocity Imaging: Simulations and Preliminary In Vivo Results
DEFF Research Database (Denmark)
Udesen, Jesper; Gran, Fredrik; Hansen, Kristoffer Lindskov
2007-01-01
for each pulse emission. 2) The transmitted pulse consists of a 13 bit Barker code which is transmitted simultaneously from each transducer element. 3) The 2-D vector velocity of the blood is found using 2-D speckle tracking between segments in consecutive speckle images. III Results: The method was tested...
Schuurman, P. R.; Albrecht, K. W.
1999-01-01
The association of arterial oxygen content (CaO2) and viscosity with transcranial Doppler (TCD) blood flow velocity in the middle cerebral artery was studied in 20 adults without cerebrovascular disease undergoing abdominal surgery associated with significant fluctuations in hematology. TCD
International Nuclear Information System (INIS)
Li Qiang; Wang Maoqiang; Duan Liuxin; Song Peng; Ao Guokun
2009-01-01
Objective: To investigate the mechanism of action of lipid emulsion (LE), used as a carrier, by observing the effect of intra-arterial infusion of LE at different concentrations and dosages on blood flow velocity. Methods: According to the concentration and dosage used in arterial infusion, the experiments were divided into four groups: group A (20% LE, 2 ml), group B (20% LE, 20 ml), group C (30% LE, 2 ml) and group D (30% LE, 20 ml). Two healthy hybrid dogs were used for the study. Under DSA guidance, a 4 F catheter was placed in the splenic artery and in the hepatic artery respectively. DSA frames were counted in order to calculate the time the contrast took to travel from the catheter tip to the selected tertiary branches of the splenic or hepatic artery. Results: LE infusion, regardless of its concentration or dosage, reduced the blood velocity. The duration and the maximal peak value of the blood velocity reduction were significantly different among the groups (P < 0.05). The duration was 5 minutes, 5-10 minutes, 20 minutes and 20-30 minutes in groups A, B, C and D, respectively. The peak value of the reduction appeared at the 18th frame (1.44 s), 30th frame (2.4 s), 9th frame (0.9 s) and 14th frame (1.12 s) in groups A, B, C and D, respectively. Conclusion: Intra-arterial infusion of LE can reduce the blood flow velocity. The duration of the reduction in the 30% LE groups is longer than that in the 20% LE groups, while the magnitude of the reduction in the 30% LE groups is smaller than that in the 20% LE groups. (authors)
Magnetic particle imaging for in vivo blood flow velocity measurements in mice
Kaul, Michael G.; Salamon, Johannes; Knopp, Tobias; Ittrich, Harald; Adam, Gerhard; Weller, Horst; Jung, Caroline
2018-03-01
Magnetic particle imaging (MPI) is a new imaging technology. It is a potential candidate to be used for angiographic purposes, to study perfusion and cell migration. The aim of this work was to measure velocities of the flowing blood in the inferior vena cava of mice, using MPI, and to evaluate it in comparison with magnetic resonance imaging (MRI). A phantom mimicking the flow within the inferior vena cava with velocities of up to 21 cm/s was used for the evaluation of the applied analysis techniques. Time-density and distance-density analyses for bolus tracking were performed to calculate flow velocities. These findings were compared with the calibrated velocities set by a flow pump, and it can be concluded that velocities of up to 21 cm/s can be measured by MPI. A time-density analysis using an arrival time estimation algorithm showed the best agreement with the preset velocities. In vivo measurements were performed in healthy FVB mice (n = 10). MRI experiments were performed using phase contrast (PC) for velocity mapping. For MPI measurements, a standardized injection of a superparamagnetic iron oxide tracer was applied. In vivo MPI data were evaluated by a time-density analysis and compared to PC MRI. A Bland-Altman analysis revealed good agreement between the in vivo velocities acquired by MRI of 4.0 ± 1.5 cm/s and those measured by MPI of 4.8 ± 1.1 cm/s. Magnetic particle imaging is a new tool with which to measure and quantify flow velocities. It is fast, radiation-free, and produces 3D images. It therefore offers the potential for vascular imaging.
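The time-density bolus-tracking idea in this record (arrival-time difference of the tracer at two positions a known distance apart) can be illustrated with synthetic concentration curves. The half-peak threshold used here is a simple stand-in for the paper's arrival-time estimation algorithm, and all values are invented:

```python
import numpy as np

def bolus_velocity(t, c_prox, c_dist, distance_cm, threshold=0.5):
    """Time-density analysis: flow velocity from tracer arrival times at two
    positions. Arrival = first crossing of threshold * curve peak (a simple
    stand-in for a dedicated arrival-time estimation algorithm)."""
    def arrival(c):
        return t[np.argmax(c >= threshold * c.max())]
    return distance_cm / (arrival(c_dist) - arrival(c_prox))

# Synthetic bolus: a Gaussian that arrives 0.1 s later 2 cm downstream
t = np.linspace(0.0, 1.0, 1000)
c_prox = np.exp(-((t - 0.3) / 0.05) ** 2)
c_dist = np.exp(-((t - 0.4) / 0.05) ** 2)
v = bolus_velocity(t, c_prox, c_dist, distance_cm=2.0)  # ~20 cm/s
```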
Voluntary respiratory control and cerebral blood flow velocity upon ice-water immersion
DEFF Research Database (Denmark)
Mantoni, Teit; Rasmussen, Jakob Højlund; Belhage, Bo
2008-01-01
INTRODUCTION: In non-habituated subjects, cold-shock response to cold-water immersion causes rapid reduction in cerebral blood flow velocity (approximately 50%) due to hyperventilation, increasing risk of syncope, aspiration, and drowning. Adaptation to the response is possible, but requires...... velocity (CBFV) was measured together with ventilatory parameters and heart rate before, during, and after immersion. RESULTS: Within seconds after immersion in ice-water, heart rate increased significantly from 95 +/- 8 to 126 +/- 7 bpm (mean +/- SEM). Immersion was associated with an elevation...
International Nuclear Information System (INIS)
Grosset, D.G.; McDonald, I.; Cockburn, M.; Straiton, J.; Bullock, R.R.
1994-01-01
The predictive value of cranial computed tomography (CT) blood load and serial transcranial Doppler sonography for the development of delayed ischaemic neurological deficit was assessed in 121 patients following subarachnoid haemorrhage. Of the 121 patients, 81 (67 %) had thick layers of blood or haematoma, including intraventricular bleeding. The proportion of patients who developed delayed deficit was higher with increasing amounts of subarachnoid blood on the admission CT (51 % of 53 cases in Fisher grade 3; 35 % of 33 cases in grade 2; 28 % of 7 cases in grade 1, P < 0.01). Doppler velocities obtained from readings at least every 2 days following admission were higher in patients with delayed neurological deficit (peak velocity for grade 3 patients 176 ± 6 cm/s (mean ± SE), versus grade 2: 164 ± 7 cm/s; grade 4 149 ± 9, both P = 0.04, Mann-Whitney). Peak velocity and maximal 24-h rise tended to be higher within different CT grades in patients with a deficit than in those without; this difference was significant for grade 3 patients (P < 0.01). We conclude that a combined approach with CT and Doppler sonography provides greater predictive value for the development of delayed ischaemic neurological deficit than either test considered independently. The value of Doppler sonography may be greatest for patients with Fisher grade 3 blood, in whom the risk of delayed ischaemia is greatest. (orig.)
Owen, Art B
2001-01-01
Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
CERN. Geneva
2015-01-01
Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings, as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the "likelihood-free" setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
Blood velocity estimation using spatio-temporal encoding based on frequency division approach
DEFF Research Database (Denmark)
Gran, Fredrik; Nikolov, Svetoslav; Jensen, Jørgen Arendt
2005-01-01
In this paper a feasibility study of using a spatial encoding technique based on frequency division for blood flow estimation is presented. The spatial encoding is carried out by dividing the available bandwidth of the transducer into a number of narrow frequency bands with approximately disjoint...... spectral support. By assigning one band to one virtual source, all virtual sources can be excited simultaneously. The received echoes are beamformed using Synthetic Transmit Aperture beamforming. The velocity of the moving blood is estimated using a cross-correlation estimator. The simulation tool Field...
MR velocity mapping measurement of renal artery blood flow in patients with impaired kidney function
DEFF Research Database (Denmark)
Cortsen, M; Petersen, L.J.; Stahlberg, F
1996-01-01
Renal blood flow (RBF) was measured in 9 patients with chronic impaired kidney function using MR velocity mapping and compared to PAH clearance and 99mTc-DTPA scintigraphy. An image plane suitable for flow measurement perpendicular to the renal arteries was chosen from 2-dimensional MR angiography....... MR velocity mapping was performed in both renal arteries using an ECG-triggered gradient echo pulse sequence previously validated in normal volunteers. Effective renal plasma flow was calculated from the clearance rate of PAH during constant infusion and the split of renal function was evaluated...... by 99mTc-DTPA scintigraphy. A reduction of RBF was found, and there was a significant correlation between PAH clearance multiplied by 1/(1-hematocrit) and RBF determined by MR velocity mapping. Furthermore, a significant correlation between the distribution of renal function and the percent distribution...
2-D blood vector velocity estimation using a phase shift estimator
DEFF Research Database (Denmark)
Udesen, Jesper
are presented. Here the TO method is tested both in simulations using the Field II program and in flow phantom experiments using the RASMUS scanner. Both simulations and flow phantom experiments indicate that the TO method can estimate the 2-D vector velocity with acceptably low bias and standard deviation...... velocity estimation is discussed. The TO method is introduced, and the basic theory behind the method is explained. This includes the creation of the acoustic fields, beamforming, echo-canceling and the velocity estimator. In the second part of the thesis the eight papers produced during this PhD project...... when the angle between the blood and the ultrasound beam is above 50°. Furthermore, the TO method is tested in-vivo, where the scans are performed by skilled sonographers. The in-vivo scans resulted in a sequence of 2-D vector CFM images which showed 2-D flow patterns in the bifurcation...
Wide-field absolute transverse blood flow velocity mapping in vessel centerline
Wu, Nanshou; Wang, Lei; Zhu, Bifeng; Guan, Caizhong; Wang, Mingyi; Han, Dingan; Tan, Haishu; Zeng, Yaguang
2018-02-01
We propose a wide-field absolute transverse blood flow velocity measurement method in vessel centerline based on absorption intensity fluctuation modulation effect. The difference between the light absorption capacities of red blood cells and background tissue under low-coherence illumination is utilized to realize the instantaneous and average wide-field optical angiography images. The absolute fuzzy connection algorithm is used for vessel centerline extraction from the average wide-field optical angiography. The absolute transverse velocity in the vessel centerline is then measured by a cross-correlation analysis according to instantaneous modulation depth signal. The proposed method promises to contribute to the treatment of diseases, such as those related to anemia or thrombosis.
Cherry, Erica M; Maxim, Peter G; Eaton, John K
2010-01-01
A physics-based model of a general magnetic drug targeting (MDT) system was developed with the goal of realizing the practical limitations of MDT when electromagnets are the source of the magnetic field. The simulation tracks magnetic particles subject to gravity, drag force, magnetic force, and hydrodynamic lift in specified flow fields and external magnetic field distributions. A model problem was analyzed to determine the effect of drug particle size, blood flow velocity, and magnetic field gradient strength on efficiency in holding particles stationary in a laminar Poiseuille flow modeling blood flow in a medium-sized artery. It was found that particle retention rate increased with increasing particle diameter and magnetic field gradient strength and decreased with increasing bulk flow velocity. The results suggest that MDT systems with electromagnets are unsuitable for use in small arteries because it is difficult to control particles smaller than about 20 μm in diameter.
International Nuclear Information System (INIS)
Mesquita, Jayme Alves Jr. de; Bouskela, Eliete; Wajnberg, Eliane; Lopes de Melo, Pedro
2007-01-01
Microcirculation is the generic name of vessels with internal diameter less than 100 μm of the circulatory system, whose main functions are tissue nutrition and oxygen supply. In microcirculatory studies, it is important to know the amount of oxyhemoglobin present in the blood and how fast it is moving. The present work describes improvements introduced in a classical hardware-based instrument that has usually been used to monitor blood flow velocity in the microcirculation of small animals. It consists of a virtual instrument that can be easily incorporated into existing hardware-based systems, contributing to reduce operator related biases and allowing digital processing and storage. The design and calibration of the modified instrument are described as well as in vitro and in vivo results obtained with electrical models and small animals, respectively. Results obtained in in vivo studies showed that this new system is able to detect a small reduction in blood flow velocity comparing arteries and arterioles (p<0.002) and a further reduction in capillaries (p<0.0001). A significant increase in velocity comparing capillaries and venules (p<0.001) and venules and veins (p<0.001) was also observed. These results are in close agreement with biophysical principles. Moreover, the improvements introduced in the device allowed us to clearly observe changes in blood flow introduced by a pharmacological intervention, suggesting that the system has enough temporal resolution to track these microcirculatory events. These results were also in close conformity to physiology, confirming the high scientific potential of the modified system and indicating that this instrument can also be useful for pharmacological evaluations
Relationship of 133Xe cerebral blood flow to middle cerebral arterial flow velocity in men at rest
Clark, J. M.; Skolnick, B. E.; Gelfand, R.; Farber, R. E.; Stierheim, M.; Stevens, W. C.; Beck, G. Jr; Lambertsen, C. J.
1996-01-01
Cerebral blood flow (CBF) was measured by 133Xe clearance simultaneously with the velocity of blood flow through the left middle cerebral artery (MCA) over a wide range of arterial PCO2 in eight normal men. Average arterial PCO2, which was varied by giving 4% and 6% CO2 in O2 and by controlled hyperventilation on O2, ranged from 25.3 to 49.9 mm Hg. Corresponding average values of global CBF15 were 27.2 and 65.0 ml/100 g/min, respectively, whereas MCA blood-flow velocity ranged from 42.8 to 94.2 cm/s. The relationship of CBF to MCA blood-flow velocity over the imposed range of arterial PCO2 was described analytically by a parabola with the equation CBF = 22.8 - 0.17 × velocity + 0.006 × velocity². The observed data indicate that MCA blood-flow velocity is a useful index of CBF response to change in arterial PCO2 during O2 breathing at rest. With respect to baseline values measured while breathing 100% O2 spontaneously, percent changes in velocity were significantly smaller than corresponding percent changes in CBF at increased levels of arterial PCO2 and larger than CBF changes at the lower arterial PCO2. These observed relative changes are consistent with MCA vasodilation at the site of measurement during exposure to progressive hypercapnia and also during extreme hyperventilation hypocapnia.
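The fitted relation quoted in this record is easy to evaluate. A minimal sketch (the function name is ours) that checks it against the velocity extremes reported in the abstract:

```python
def cbf_from_velocity(v_cm_s):
    """Parabola reported in the abstract relating global CBF (ml/100 g/min)
    to MCA blood-flow velocity (cm/s)."""
    return 22.8 - 0.17 * v_cm_s + 0.006 * v_cm_s ** 2

low = cbf_from_velocity(42.8)   # about 26.5, near the measured 27.2
high = cbf_from_velocity(94.2)  # about 60.0, near the measured 65.0
```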
Synchrotron X-ray PIV Technique for Measurement of Blood Flow Velocity
International Nuclear Information System (INIS)
Kim, Guk Bae; Lee, Sang Joon; Je, Jung Ho
2007-01-01
Synchrotron X-ray micro-imaging has been used to observe the internal structures of various organisms, industrial devices, and so on. However, it is not suitable for observing internal flows inside a structure, because the tracers typically employed in conventional optical flow visualization methods cannot be detected with the X-ray micro-imaging method. On the other hand, the PIV (particle image velocimetry) method, which has recently been accepted as a reliable quantitative flow visualization technique, can extract a great deal of flow information by applying digital image processing techniques. However, it is not applicable to opaque fluids such as blood. In this study, we combined the PIV method and the synchrotron X-ray micro-imaging technique to compose a new X-ray PIV technique. Using the X-ray PIV technique, we investigated the optical characteristics of blood for a coherent synchrotron X-ray beam and quantitatively visualized real blood flows inside an opaque tube without any contrast media. The velocity field information acquired would be helpful for investigating hemorheologic characteristics of the blood flow.
Carotid Velocities Determine Cerebral Blood Flow Deficits in Elderly Men with Carotid Stenosis <50%
Directory of Open Access Journals (Sweden)
Arkadiusz Siennicki-Lantz
2012-01-01
To examine whether mild carotid stenosis correlates with silent vascular brain changes, we studied the prospective population-based cohort "Men born in 1914." Data from follow-ups at ages 68 and 81 were used. Carotid ultrasound was performed at age 81, and cerebral blood flow (CBF) was measured with SPECT at age 82. Of 123 stroke-free patients, carotid stenosis <50% was observed in 94% of the right and 89% of the left internal carotid arteries (ICAs). In these subjects, peak systolic velocities in the ICA correlated negatively with CBF in a majority of brain areas, especially the mesial temporal area. The results were confined to men who were normotensive until their seventies and then developed late-onset hypertension with a subsequent rise in blood pressure, pulse pressure, and ankle-brachial index. Elderly subjects with asymptomatic carotid stenosis <50% and peak systolic ICA velocities of 0.7–1.3 m/s should be offered intensified pharmacotherapy to prevent stroke or silent cerebrovascular events.
Influence of hemodialysis on the mean blood flow velocity in the middle cerebral artery.
Stefanidis, I; Bach, R; Mertens, P R; Liakopoulos, V; Liapi, G; Mann, H; Heintz, B
2005-08-01
Several effects of hemodialysis, including hemoconcentration, alterations of hemostasis or hemorheology, and endothelial activation, could potentially interfere with cerebral blood flow (CBF) regulation. These treatment-specific changes may also be crucial for the increased incidence of stroke in uremic patients. Nevertheless, the influence of hemodialysis on CBF has not yet been adequately studied. We registered mean blood flow velocity (MFV) in the middle cerebral artery (MCA) during hemodialysis treatment in order to evaluate its contribution to CBF changes. Transcranial Doppler ultrasonography (TCD) of the MCA was performed continuously during hemodialysis treatment in 18 stable patients (10 males and 8 females, mean age 62 +/- 11 years) with end-stage renal disease of various origin. Blood pressure (mmHg), heart rate (/min), ultrafiltration volume (ml), BV changes (deltaBV by hemoglobinometry, %), arterial blood gases (pO2, blood oxygen content, pCO2), hemostasis activation (thrombin-antithrombin III complex, ELISA) and fibrinogen (Clauss) were measured simultaneously at the beginning of treatment and every hour thereafter. Before the hemodialysis session the MFV in the MCA was within normal range (57.5 +/- 13.0 cm/s, ref. 60 +/- 12) and was mainly dependent on the patients' age (r = -0.697). Changes of the MFV (delta%MFV) were interrelated with the ultrafiltration volume (r = -0.486), with changes in blood oxygen content (delta%caO2, r = -0.420) and with changes in fibrinogen (delta%fibrinogen, r = 0.244, p < 0.05). A significant continuous decrease of the MFV in the MCA was observed during hemodialysis treatment, which correlated inversely with ultrafiltration volume, BV changes and changes of plasma fibrinogen. The ultrafiltration-induced hemoconcentration, with a concomitant rise of hematocrit and oxygen transport capacity, may partly explain the alterations in the cerebral MFV observed during hemodialysis.
Fülesdi, B.; Limburg, M.; Bereczki, D.; Molnár, C.; Michels, R. P.; Leányvári, Z.; Csiba, L.
1999-01-01
Blood glucose and insulin concentrations have been reported to influence cerebral hemodynamics. We studied the relationship between actual blood glucose and insulin concentrations and resting cerebral blood flow velocity in the middle cerebral artery and cerebrovascular reserve capacity after
DEFF Research Database (Denmark)
Kjeld, Thomas; Pott, Frank C; Secher, Niels H
2009-01-01
perfusion evaluated as the middle cerebral artery mean flow velocity (MCA V(mean)) during exercise in nine male subjects. At rest, a breath hold of maximum duration increased the arterial carbon dioxide tension (Pa(CO(2))) from 4.2 to 6.7 kPa and MCA V(mean) from 37 to 103 cm/s (mean; approximately 178%; P...... breath hold increased Pa(CO(2)) from 5.9 to 8.2 kPa (P ... 180-W exercise (from 47 to 53 cm/s), and this increment became larger with facial immersion (76 cm/s, approximately 62%; P breath hold diverts blood toward the brain with a >100% increase in MCA V(mean), largely...
Estimation of the blood velocity spectrum using a recursive lattice filter
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Buelund, Claus; Jørgensen, Allan
1996-01-01
acquired for showing the blood velocity distribution are inherently non-stationary, due to the pulsatility of the flow. All current signal processing schemes assume that the signal is stationary within the window of analysis, although this is an approximation. In this paper a recursive least......-stationarity are incorporated through an exponential decay factor, that sets the exponential horizon of the filter. A factor close to 1 gives a long horizon with low variance estimates, but can not track a highly non-stationary flow. Setting the factor is therefore a compromise between estimate variance and the filter...... with the actual distributions that always will be smooth. Setting the exponential decay factor to 0.99 gives satisfactory results for in-vivo data from the carotid artery. The filter can easily be implemented using a standard fixed-point signal processing chip for real-time processing...
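The exponential-horizon trade-off described above can be illustrated with a toy exponentially weighted autocorrelation estimator. This is a stand-in sketch, not the paper's recursive lattice filter; the decay factor plays the same role: close to 1 gives low-variance estimates but slow tracking of non-stationary flow.

```python
import numpy as np

def ewma_autocorr(x, lag=1, decay=0.99):
    """Exponentially weighted estimate of the complex autocorrelation at a
    given lag. A decay close to 1 gives a long horizon (low variance); a
    smaller decay tracks non-stationary flow faster."""
    r = 0.0 + 0.0j
    estimates = []
    for n in range(lag, len(x)):
        r = decay * r + (1.0 - decay) * x[n] * np.conj(x[n - lag])
        estimates.append(r)
    return np.asarray(estimates)

# Demo: a complex exponential whose frequency (standing in for the mean
# velocity) jumps halfway through the observation.
fs = 1000.0
t = np.arange(2000) / fs
f = np.where(t < 1.0, 50.0, 120.0)           # simulated "velocity" jump
x = np.exp(2j * np.pi * np.cumsum(f) / fs)
r = ewma_autocorr(x, lag=1, decay=0.99)
freq_est = np.angle(r) * fs / (2 * np.pi)    # Kasai-style mean frequency
```

With decay 0.99 the estimator converges to the new frequency within a few hundred samples after the jump, which is the kind of compromise between variance and tracking the abstract refers to.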
Wang, Liang; Yuan, Jin; Jiang, Hong; Yan, Wentao; Cintrón-Colón, Hector R; Perez, Victor L; DeBuc, Delia C; Feuer, William J; Wang, Jianhua
2016-03-01
This study determined (1) how many vessels (i.e., the vessel sampling) are needed to reliably characterize the bulbar conjunctival microvasculature and (2) whether characteristic information can be obtained from the distribution histogram of the blood flow velocity and vessel diameter. A functional slit-lamp biomicroscope was used to image hundreds of venules per subject. The bulbar conjunctiva in five healthy human subjects was imaged at six different locations in the temporal bulbar conjunctiva. The histograms of the diameter and velocity were plotted to examine whether the distribution was normal. Standard errors were calculated from the standard deviation and vessel sample size. The ratio of the standard error of the mean over the population mean was used to determine the sample size cutoff. The velocity was plotted as a function of the vessel diameter to display the distribution of the diameter and velocity. The results showed that the required sampling size was approximately 15 vessels, which generated a standard error equivalent to 15% of the population mean of the total vessel population. The distributions of the diameter and velocity were unimodal but somewhat positively skewed and not normal. The blood flow velocity was related to the vessel diameter (r=0.23). This study established the sampling size of the vessels and the distribution histogram of the blood flow velocity and vessel diameter, which may lead to a better understanding of the human microvascular system of the bulbar conjunctiva.
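The sample-size cutoff described above (smallest n for which the standard error of the mean falls below 15% of the population mean) can be sketched directly. The log-normal "population" below is synthetic, chosen only to mimic the positively skewed, non-normal distributions the study reports.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical positively skewed velocity population (mm/s), standing in
# for the measured vessel velocities.
velocities = rng.lognormal(mean=-0.7, sigma=0.5, size=500)

def min_sample_size(population, max_rel_err=0.15):
    """Smallest n whose standard error of the mean, sd/sqrt(n), is below
    max_rel_err times the population mean (the study's 15% cutoff)."""
    mu = population.mean()
    sd = population.std(ddof=1)
    n = 1
    while sd / np.sqrt(n) > max_rel_err * mu:
        n += 1
    return n

n_needed = min_sample_size(velocities)
```

For a population with a coefficient of variation around 0.5, this yields a required sample on the order of 10-15 vessels, consistent with the ~15 vessels reported.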
Relationship of antral follicular blood flow velocity to superovulatory responses in ewes.
Oliveira, M E F; Bartlewski, P M; Jankowski, N; Padilha-Nakaghi, L C; Oliveira, L G; Bicudo, S D; Fonseca, J F; Vicente, W R R
2017-07-01
The aim of this study was to examine the association between antral follicular blood flow velocity and the response of ewes to hormonal ovarian superstimulation. Ten Santa Inês ewes were subjected to a short- (7 days; Group 1) or long-term (13 days; Group 2) progesterone (CIDR ® ; InterAg, Hamilton, New Zealand) priming, and a superovulatory treatment with porcine follicle-stimulating hormone (pFSH; Folltropin ® -V; Bioniche Animal Health, Belleville, ON, Canada), given twice daily for four consecutive days in decreasing doses and initiated four or ten days after CIDR insertion, respectively. Embryos were recovered surgically seven days after the last pFSH dose. From one day prior to the pFSH regimen until its end (Days -1 to 3), all ewes underwent daily transrectal ultrasonography of the ovaries. The number of high-velocity pixels (HVPs; 0.055-0.11 m/s, or the upper 50% of recordable velocities) on Day 1 correlated directly with the number of corpora lutea (CL; r=0.92, P=0.0002) and of viable embryos (r=0.77, P=0.01). Correlations were also recorded between the number of HVPs on Day 3 and the recovery rate (r=-0.69, P=0.03), viability rate (r=-0.64, P=0.05), and percentage of degenerated embryos (r=0.65, P=0.04). The percentage of HVPs relative to the total area of ovarian cross section on Day 1 was correlated with the number of CL (r=0.95). Antral follicular blood flow thus has the makings of a useful non-invasive method to predict the outcome of the superovulatory treatment in ewes. Copyright © 2017 Elsevier B.V. All rights reserved.
Wilson, Thad E.; Cui, Jian; Zhang, Rong; Witkowski, Sarah; Crandall, Craig G.
2002-01-01
Orthostatic tolerance is reduced in the heat-stressed human. The purpose of this project was to identify whether skin-surface cooling improves orthostatic tolerance. Nine subjects were exposed to 10 min of 60 degrees head-up tilting in each of four conditions: normothermia (NT-tilt), heat stress (HT-tilt), normothermia plus skin-surface cooling 1 min before and throughout tilting (NT-tilt(cool)), and heat stress plus skin-surface cooling 1 min before and throughout tilting (HT-tilt(cool)). Heating and cooling were accomplished by perfusing 46 and 15 degrees C water, respectively, through a tube-lined suit worn by each subject. During HT-tilt, four of nine subjects developed presyncopal symptoms resulting in the termination of the tilt test. In contrast, no subject experienced presyncopal symptoms during NT-tilt, NT-tilt(cool), or HT-tilt(cool). During the HT-tilt procedure, mean arterial blood pressure (MAP) and cerebral blood flow velocity (CBFV) decreased. However, during HT-tilt(cool), MAP, total peripheral resistance, and CBFV were significantly greater relative to HT-tilt. These data indicate that skin-surface cooling improves orthostatic tolerance in heat-stressed humans.
Transesophageal Doppler measurement of renal arterial blood flow velocities and indices in children.
Zabala, Luis; Ullah, Sana; Pierce, Carol D'Ann; Gautam, Nischal K; Schmitz, Michael L; Sachdeva, Ritu; Craychee, Judith A; Harrison, Dale; Killebrew, Pamela; Bornemeier, Renee A; Prodhan, Parthak
2012-06-01
Doppler-derived renal blood flow indices have been used to assess renal pathologies. However, transesophageal ultrasonography (TEE) has not previously been used to assess these renal variables in pediatric patients. In this study, we (a) assessed whether TEE allows adequate visualization of the renal parenchyma and renal artery, and (b) evaluated the concordance of TEE Doppler-derived renal blood flow measurements/indices compared with a standard transabdominal renal ultrasound (TAU) in children. This prospective cohort study enrolled 28 healthy children between the ages of 1 and 17 years without known renal dysfunction who were undergoing atrial septal defect device closure in the cardiac catheterization laboratory. TEE was used to obtain Doppler renal artery blood velocities (peak systolic velocity, end-diastolic velocity, mean diastolic velocity, resistive index, and pulsatility index), and these values were compared with measurements obtained by TAU. The concordance correlation coefficient (CCC) was used to determine clinically significant agreement between the 2 methods. Bland-Altman plots were used to determine whether these 2 methods agree sufficiently to be used interchangeably. Statistical significance was accepted at P ≤ 0.05. Obtaining 2-dimensional images of kidney parenchyma and Doppler-derived measurements using TEE in children is feasible. There was statistically significant agreement between the 2 methods for all measurements. The CCC between the 2 imaging techniques was 0.91 for the pulsatility index and 0.66 for the resistive index. These coefficients were sensitive to outliers. When the highest and lowest data points were removed from the analysis, the CCC between the 2 imaging techniques was 0.62 for the pulsatility index and 0.50 for the resistive index. The 95% confidence interval (CI) for the pulsatility index was 0.35 to 0.98 and for the resistive index was 0.21 to 0.89. The Bland-Altman plots indicate good agreement between the 2 methods.
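The agreement statistic used here, Lin's concordance correlation coefficient, penalizes both poor correlation and systematic offset between two methods. A minimal sketch with made-up paired pulsatility-index readings (not study data):

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between paired readings
    from two measurement methods (e.g. TEE vs. TAU Doppler indices):
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical paired pulsatility-index values from the two methods.
tee = [1.10, 1.25, 0.98, 1.40, 1.15, 1.30]
tau = [1.12, 1.22, 1.01, 1.35, 1.18, 1.28]
ccc = concordance_ccc(tee, tau)
```

Unlike Pearson's r, the CCC drops below 1 even for perfectly correlated data if one method reads systematically higher, which is why it is preferred for method-agreement studies such as this one.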
Plourde, Brian D; Vallez, Lauren J; Sun, Biyuan; Nelson-Cheeseman, Brittany B; Abraham, John P; Staniloae, Cezar S
2016-09-01
Simulations were made of the pressure and velocity fields throughout an artery before and after removal of plaque using orbital atherectomy plus adjunctive balloon angioplasty or stenting. The calculations were carried out with an unsteady computational fluid dynamic solver that allows the fluid to naturally transition to turbulence. The atherectomy procedure leads to increased flow through the stenotic zone with a coincident decrease in pressure drop across the stenosis. The measured effect of atherectomy and adjunctive treatment was a decrease in the systolic pressure drop by a factor of 2.3. Waveforms obtained from measurements were input into a numerical simulation of blood flow through geometry obtained from medical imaging. From the numerical simulations, a detailed investigation of the sources of pressure loss was obtained. It was found that the major sources of pressure drop are related to the acceleration of blood through heavily occluded cross sections and the imperfect flow recovery downstream. This finding suggests that targeting only the most occluded parts of a stenosis would benefit the hemodynamics. The calculated change in systolic pressure drop through the lesion was a factor of 2.4, in excellent agreement with the measured improvement. The systolic and cardiac-cycle-average pressure results were compared with measurements made in a multi-patient study treated with orbital atherectomy and adjunctive treatment. The agreement between the measured and calculated systolic pressure drops before and after the treatment was within 3%. This excellent agreement adds further confidence to the results. This research demonstrates the use of orbital atherectomy to facilitate balloon expansion to restore blood flow and how pressure measurements can be utilized to optimize revascularization of occluded peripheral vessels.
van Amerom, Joshua F P; Kellenberger, Christian J; Yoo, Shi-Joon; Macgowan, Christopher K
2009-01-01
An automated method was evaluated to detect blood flow in small pulmonary arteries and classify each as artery or vein, based on a temporal correlation analysis of their blood-flow velocity patterns. The method was evaluated using velocity-sensitive phase-contrast magnetic resonance data collected in vitro with a pulsatile flow phantom and in vivo in 11 human volunteers. The accuracy of the method was validated in vitro, which showed relative velocity errors of 12% at low spatial resolution (four voxels per diameter), falling to 5% at increased spatial resolution (16 voxels per diameter). The performance of the method was evaluated in vivo according to its reproducibility and agreement with manual velocity measurements by an experienced radiologist. In all volunteers, the correlation analysis was able to detect and segment peripheral pulmonary vessels and distinguish arterial from venous velocity patterns. The intrasubject variability of repeated measurements was approximately 10% of peak velocity, or 2.8 cm/s root-mean-variance, demonstrating the high reproducibility of the method. Excellent agreement was obtained between the correlation analysis and radiologist measurements of pulmonary velocities, with a correlation of R2=0.98 (P<.001) and a slope of 0.99+/-0.01.
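The artery/vein classification idea (correlating each vessel's velocity-time curve with a known arterial pattern) can be sketched as follows. The reference waveform, the venous-like waveform, and the correlation threshold are all assumptions for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical cardiac-cycle waveforms: an arterial reference with an early
# systolic peak, and a damped, phase-shifted venous-like curve.
t = np.linspace(0.0, 1.0, 50, endpoint=False)
arterial_ref = np.exp(-((t - 0.2) / 0.1) ** 2)            # sharp early peak
venous_like = 0.4 + 0.2 * np.sin(2 * np.pi * (t - 0.6))   # damped, shifted

def classify(waveform, ref, threshold=0.5):
    """Label a vessel 'artery' if its velocity pattern correlates strongly
    with the arterial reference, else 'vein' (a toy version of the paper's
    temporal correlation analysis; the threshold is an assumption)."""
    r = np.corrcoef(waveform, ref)[0, 1]
    return ("artery" if r >= threshold else "vein"), r

rng = np.random.default_rng(1)
label_a, _ = classify(arterial_ref + 0.05 * rng.normal(size=t.size), arterial_ref)
label_v, _ = classify(venous_like, arterial_ref)
```

In a phase-contrast dataset the same correlation would be computed per voxel across the cardiac phases, so segmentation and classification fall out of one pass over the data.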
Downing, Janelle; Bollyky, Jenna; Schneider, Jennifer
2017-07-11
The Livongo for Diabetes Program offers members (1) a cellular technology-enabled, two-way messaging device that measures blood glucose (BG), centrally stores the glucose data, and delivers messages back to the individual in real time; (2) unlimited BG test strips; and (3) access to a diabetes coaching team for questions, goal setting, and automated support for abnormal glucose excursions. The program is sponsored by at-risk self-insured employers, health plans, and provider organizations, where it is free to members with diabetes, or it is available directly to persons with diabetes who cover the cost themselves. The objective of our study was to evaluate BG data from 4544 individuals with diabetes who were enrolled in the Livongo program from October 2014 through December 2015. Members used the Livongo glucose meter to measure their BG levels an average of 1.8 times per day. We estimated the probability of having a day with a BG reading outside of the normal range (70-180 mg/dL, or 3.9-10.0 mmol/L) in months 2 to 12 compared with month 1 of the program, using individual fixed effects to control for individual characteristics. Livongo members experienced an average 18.4% decrease in the likelihood of having a day with hypoglycemia (BG < 70 mg/dL) or hyperglycemia (BG > 180 mg/dL) in months 2-12 compared with month 1 as the baseline. The biggest impact was seen on hyperglycemia for nonusers of insulin. We do not know all of the contributing factors such as medication or other treatment changes during the study period. These findings suggest that access to a connected glucose meter and certified diabetes educator coaching is associated with a decrease in the likelihood of abnormal glucose excursions, which can lead to diabetes-related health care savings. ©Janelle Downing, Jenna Bollyky, Jennifer Schneider. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 11.07.2017.
Relations between diabetes, blood pressure and aortic pulse wave velocity in haemodialysis patients
DEFF Research Database (Denmark)
Peters, Christian Daugaard; Kjærgaard, Krista Dybtved; Dzeko, Mirela
(HD) and 32 HD patients with DM (HD+DM). The SphygmoCor system was used for estimation of PWV. HD-duration, age, gender and BP medication were similar in the two groups. Mean DM-duration was 23±11 years and 25 (78%) had type 2 DM. HD+DM had higher BMI (26±5 vs. 29±5 kg/m2, p=0.02), systolic BP (142......Diabetes (DM) is common in haemodialysis (HD) patients and affects both blood pressure (BP) and arterial stiffness. Carotid femoral pulse wave velocity (PWV) reflects the stiffness of the aorta and is regarded as a strong risk factor for cardiovascular (CV) mortality in HD patients. However, PWV......±20 vs. 152±21 mmHg, p=0.02) and pulse pressure (65±17 vs. 80±18 mmHg) than HD. PWV was 9.2±2.5 m/s in HD and 12.3±3.1 m/s in HD+DM. The mean PWV difference HD vs. HD+DM was 3.1 (1.9-4.3) m/s, p
Kjeld, Thomas; Pott, Frank C; Secher, Niels H
2009-04-01
The diving response is initiated by apnea and facial immersion in cold water and includes, besides bradycardia, peripheral vasoconstriction, while cerebral perfusion may be enhanced. This study evaluated whether facial immersion in 10 degrees C water has an independent influence on cerebral perfusion evaluated as the middle cerebral artery mean flow velocity (MCA V(mean)) during exercise in nine male subjects. At rest, a breath hold of maximum duration increased the arterial carbon dioxide tension (Pa(CO(2))) from 4.2 to 6.7 kPa and MCA V(mean) from 37 to 103 cm/s (mean; approximately 178%). During 180-W exercise, breath hold increased Pa(CO(2)) from 5.9 to 8.2 kPa and MCA V(mean) from 47 to 53 cm/s, an increment that became larger with facial immersion (76 cm/s, approximately 62%). In conclusion, a breath hold diverts blood toward the brain with a >100% increase in MCA V(mean), largely because Pa(CO(2)) increases, but the increase in MCA V(mean) becomes larger when combined with facial immersion in cold water independent of Pa(CO(2)).
Altered cerebral blood flow velocity features in fibromyalgia patients in resting-state conditions.
Rodríguez, Alejandro; Tembl, José; Mesa-Gresa, Patricia; Muñoz, Miguel Ángel; Montoya, Pedro; Rey, Beatriz
2017-01-01
The aim of this study is to characterize, in resting-state conditions, the cerebral blood flow velocity (CBFV) signals of fibromyalgia patients. The anterior and middle cerebral arteries of both hemispheres from 15 women with fibromyalgia and 15 healthy women were monitored using Transcranial Doppler (TCD) during a 5-minute eyes-closed resting period. Several signal processing methods based on time, information theory, frequency and time-frequency analyses were used in order to extract different features to characterize the CBFV signals in the different vessels. Main results indicated that, in comparison with control subjects, fibromyalgia patients showed a higher complexity of the envelope CBFV and a different distribution of the power spectral density. In addition, it was observed that complexity and spectral features show correlations with clinical pain parameters and emotional factors. The characterization features were used in a linear model to discriminate between fibromyalgia patients and healthy controls, providing high accuracy. These findings indicate that CBFV signals, specifically their complexity and spectral characteristics, contain information that may be relevant for the assessment of fibromyalgia patients in resting-state conditions.
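Two of the feature families mentioned (frequency-domain and information-theoretic) can be sketched on a synthetic CBFV envelope. The signal below (a ~1 Hz pulsatile envelope plus noise) and the specific feature definitions are illustrative assumptions, simplified relative to the study's full analysis.

```python
import numpy as np

def spectral_features(envelope, fs):
    """Periodogram-style power spectral density and a normalized spectral
    entropy (0 = single-tone, 1 = flat spectrum) of a CBFV envelope; a
    simplified sketch of the study's frequency and information-theory
    features."""
    x = envelope - np.mean(envelope)
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    p = psd / psd.sum()
    p = p[p > 0]
    entropy = -(p * np.log(p)).sum() / np.log(len(psd))
    return freqs, psd, entropy

# Demo: a pulsatile envelope at ~1 Hz (heart rate) sampled at 50 Hz for 60 s.
fs = 50.0
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(2)
cbfv = 60 + 15 * np.sin(2 * np.pi * 1.0 * t) + rng.normal(0, 1, t.size)
freqs, psd, H = spectral_features(cbfv, fs)
peak_freq = freqs[np.argmax(psd)]
```

A more complex (less regular) envelope raises the spectral entropy, which is the direction of the group difference the study reports for fibromyalgia patients.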
Bilardo, C M; Campbell, S; Nicolaides, K H
1988-12-01
A linear array pulsed Doppler duplex scanner was used to establish reference ranges for mean blood velocities and flow impedance (Pulsatility Index = PI) in the descending thoracic aorta and in the common carotid artery from 70 fetuses in normal pregnancies at 17-42 weeks' gestation. The aortic velocity increased with gestation up to 32 weeks, then remained constant until term, when it decreased. In contrast, the velocity in the common carotid artery increased throughout pregnancy. The PI in the aorta remained constant throughout pregnancy, while in the common carotid artery it fell steeply after 32 weeks. These results suggest that with advancing gestation there is a redistribution of the fetal circulation with decreased impedance to flow to the fetal brain, presumably to compensate for the progressive decrease in fetal blood PO2.
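The flow-impedance index used above can be made concrete. Gosling's Pulsatility Index is the standard definition PI = (PSV − EDV) / Vmean; the velocities below are illustrative, not study values.

```python
def pulsatility_index(peak_systolic, end_diastolic, mean_velocity):
    """Gosling's Pulsatility Index, PI = (PSV - EDV) / Vmean, the Doppler
    flow-impedance index referred to in the abstract."""
    return (peak_systolic - end_diastolic) / mean_velocity

# Illustrative (non-study) velocities in cm/s:
pi_example = pulsatility_index(peak_systolic=80.0, end_diastolic=20.0,
                               mean_velocity=40.0)
```

Because PI is a ratio of velocities, it is independent of the insonation angle, which is why it is a convenient impedance measure for fetal vessels where absolute velocities are harder to calibrate.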
Kadoi, Y; Kawauchi, C H; Ide, M; Saito, S; Mizutani, A
2009-07-01
The purpose of this study was to examine the comparative effects of sevoflurane, isoflurane or propofol on cerebral blood flow velocity after tourniquet deflation during orthopaedic surgery. Thirty patients undergoing elective orthopaedic surgery were randomly divided into sevoflurane, isoflurane and propofol groups. Anaesthesia was maintained with sevoflurane, isoflurane or propofol infusion in 33% oxygen and 67% nitrous oxide, in whatever concentrations were necessary to keep bispectral index values between 45 and 50. Ventilatory rate or tidal volume was adjusted to target PaCO2 of 35 mmHg. A 2.0 MHz transcranial Doppler probe was attached to the patient's head at the temporal window and mean blood flow velocity in the middle cerebral artery was continuously measured. The extremity was exsanguinated with an Esmarch bandage and the pneumatic tourniquet was inflated to a pressure of 450 mmHg. Arterial blood pressure, heart rate, velocity in the middle cerebral artery and arterial blood gas analysis were measured every minute for 10 minutes after release of the tourniquet in all three groups. Velocity in the middle cerebral artery in the three groups increased for five minutes after tourniquet deflation. Because of the different cerebrovascular effects of the three agents, the degree of increase in flow velocity in the isoflurane group was greater than in the other two groups, the change in flow velocity in the propofol group being the lowest (at three minutes after deflation 40 +/- 7%, 32 +/- 6% and 28 +/- 10% in the isoflurane, sevoflurane and propofol groups respectively, P < 0.05).
Périard, J D; Racinais, S
2015-06-01
This study examined the influence of hyperthermia on middle cerebral artery mean blood velocity (MCA Vmean). Eleven cyclists undertook a 750 kJ self-paced time trial in HOT (35 °C) and COOL (20 °C) conditions. Exercise time was longer in HOT (56 min) compared with COOL (49 min). Peripheral blood flow and heart rate were higher throughout HOT compared with COOL, whereas arterial blood pressure and oxygen uptake were lower from 50% of work completed onward in HOT compared with COOL. Exercising in the heat appears to have exacerbated the reduction in MCA Vmean, in part via increases in peripheral blood flow and a decrease in arterial blood pressure. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
On the shape of the common carotid artery with implications for blood velocity profiles
International Nuclear Information System (INIS)
Manbachi, Amir; Hoi, Yiemeng; Steinman, David A; Wasserman, Bruce A; Lakatta, Edward G
2011-01-01
Clinical and engineering studies typically assume that the common carotid artery (CCA) is straight enough to assume fully developed flow, yet recent studies have demonstrated the presence of skewed velocity profiles. Toward elucidating the influence of mild vascular curvatures on blood flow patterns and atherosclerosis, this study aimed to characterize the three-dimensional shape of the human CCA. The left and right carotid arteries of 28 participants (63 ± 12 years) in the VALIDATE (Vascular Aging: The Link That Bridges Age to Atherosclerosis) study were digitally segmented from 3D contrast-enhanced magnetic resonance angiograms, from the aortic arch to the carotid bifurcation. Each CCA was divided into nominal cervical and thoracic segments, for which curvatures were estimated by least-squares fitting of the respective centerlines to planar arcs. The cervical CCA had a mean radius of curvature of 127 mm, corresponding to a mean lumen:curvature radius ratio of 1:50. The thoracic CCA was significantly more curved at 1:16, with the plane of curvature tilted by a mean angle of 25° and rotated close to 90° with respect to that of the cervical CCA. The left CCA was significantly longer and slightly more curved than the right CCA, and there was a weak but significant increase in CCA curvature with age. Computational fluid dynamic simulations carried out for idealized CCA geometries derived from these and other measured geometric parameters demonstrated that mild cervical curvature is sufficient to prevent flow from fully developing to axisymmetry, independent of the degree of thoracic curvature. These findings reinforce the idea that fully developed flow may be the exception rather than the rule for the CCA, and perhaps other nominally long and straight vessels.
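The curvature-estimation step (least-squares fitting of a centerline to a planar arc) can be sketched with an algebraic (Kåsa) circle fit. This sketch assumes the centerline points have already been projected onto their best-fit plane, whereas the study fits full 3-D centerlines.

```python
import numpy as np

def fit_circle_radius(points):
    """Algebraic (Kasa) least-squares circle fit to planar centerline
    points; returns the radius of curvature. Solves the linear system for
    x^2 + y^2 = 2*cx*x + 2*cy*y + c, with r^2 = c + cx^2 + cy^2."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(np.sqrt(c + cx ** 2 + cy ** 2))

# Synthetic cervical-CCA-like arc: radius 127 mm (the reported mean),
# spanning a short angular segment, as a mildly curved vessel would.
theta = np.linspace(0.0, 0.3, 40)
arc = np.column_stack([127.0 * np.cos(theta), 127.0 * np.sin(theta)])
radius = fit_circle_radius(arc)
```

For a shallow arc like this, the linear Kåsa fit recovers the radius essentially exactly; with noisy segmented centerlines a geometric (iterative) fit would be more robust.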
Hanouz, Jean-Luc; Fiant, Anne-Lise; Gérard, Jean-Louis
2016-09-01
The goal of the present study was to examine changes in middle cerebral artery blood flow velocity (VMCA) in patients scheduled for shoulder surgery in the beach chair position. Prospective observational study. Operating room, shoulder surgery. Fifty-three consecutive patients scheduled for shoulder surgery in the beach chair position. Transcranial Doppler was performed after induction of general anesthesia (baseline), after beach chair positioning (BC1), during surgery at 20 minutes (BC2), and after return to the supine position before stopping anesthesia (supine). Mean arterial pressure (MAP), end-tidal CO2, volatile anesthetic concentration, and VMCA were recorded at baseline, BC1, BC2, and supine. Postoperative neurologic complications were searched for. The beach chair position induced a decrease in MAP (baseline: 73±10 mm Hg vs lowest MAP recorded: 61±10 mm Hg; P<.0001), requiring vasopressors and fluid challenge in 44 patients (83%). There was a significant decrease in VMCA after beach chair positioning (BC1: 33±10 cm/s vs baseline: 39±14 cm/s; P=.001). The VMCA at baseline (39±2 cm/s), BC2 (35±14 cm/s), and supine (39±14 cm/s) were not different. The minimal alveolar concentration of volatile anesthetics, end-tidal CO2, SpO2, and MAP were not different at baseline, BC1, BC2, and supine. The beach chair position resulted in a transient decrease in MAP, requiring fluid challenge and vasopressors, and a moderate decrease in VMCA. Copyright © 2016 Elsevier Inc. All rights reserved.
Lagunju, IkeOluwa; Sodeinde, Olugbemiro; Brown, Biobele; Akinbami, Felix; Adedokun, Babatunde
2014-02-01
Transcranial Doppler (TCD) sonography of major cerebral arteries is now recommended for routine screening for stroke risk in children with sickle cell disease (SCD). We performed TCD studies on children with sickle cell anemia (SCA) seen at the pediatric hematology clinic over a period of 2 years. TCD scans were repeated yearly in children with normal flow velocities and every 3 months in children with elevated velocities. Findings were correlated with clinical variables, hematologic indices, and arterial oxygen saturation. Predictors of elevated velocities were identified by multiple linear regressions. We enrolled 237 children and performed a total of 526 TCD examinations. Highest time-averaged maximum flow velocities were ≥170 cm/s in 72 (30.3%) cases and ≥200 cm/s in 20 (8.4%). Young age, low hematocrit, low hemoglobin, and arterial oxygen desaturation (<95%) showed significant correlations with the presence of increased cerebral flow velocities. Low hematocrit, low hemoglobin concentration, young age, and arterial oxygen desaturation predicted elevated cerebral blood flow velocities and, invariably, increased stroke risk, in children with SCA. Children who exhibit these features should be given high priority for TCD examination in the setting of limited resources. Copyright © 2013 Wiley Periodicals, Inc.
Pre- and post-processing filters for improvement of blood velocity estimation
DEFF Research Database (Denmark)
Schlaikjer, Malene; Jensen, Jørgen Arendt
2000-01-01
with different signal-to-noise ratios (SNR). The exact extent of the vessel and the true velocities are thereby known. Velocity estimates were obtained by employing Kasai's autocorrelator on the data. The post-processing filter was used on the computed 2D velocity map. An improvement of the RMS error...... velocity in the vessels. Post-processing is beneficial to obtain an image that minimizes the variation, and present the important information to the clinicians. Applying the theory of fluid mechanics introduces restrictions on the variations possible in a flow field. Neighboring estimates in time and space...... should be highly correlated, since transitions should occur smoothly. This idea is the basis of the algorithm developed in this study. From Bayesian image processing theory an a posteriori probability distribution for the velocity field is computed based on constraints on smoothness. An estimate...
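The Bayesian post-processing idea (an a posteriori distribution combining fidelity to the measured velocities with a smoothness prior on the flow field) can be sketched in one dimension. This is a toy stand-in for the paper's 2-D algorithm; the weighting and boundary handling are assumptions.

```python
import numpy as np

def map_smooth(noisy, lam=2.0, iters=200):
    """MAP estimate of a 1-D velocity profile under a Gaussian likelihood
    and a quadratic smoothness prior: minimizes
        sum_i (v[i] - d[i])^2 + lam * sum_i (v[i] - v[i-1])^2
    by Jacobi iterations on the stationarity condition
        (1 + 2*lam) * v[i] = d[i] + lam * (v[i-1] + v[i+1])."""
    v = noisy.copy()
    for _ in range(iters):
        left = np.roll(v, 1);   left[0] = v[0]      # replicate boundaries
        right = np.roll(v, -1); right[-1] = v[-1]
        v = (noisy + lam * (left + right)) / (1.0 + 2.0 * lam)
    return v

# Synthetic profile: still tissue, a shear ramp, then plug-like flow,
# corrupted with estimation noise.
rng = np.random.default_rng(3)
true = np.concatenate([np.zeros(20), np.linspace(0.0, 1.0, 20), np.ones(20)])
noisy = true + rng.normal(0.0, 0.15, true.size)
smooth = map_smooth(noisy)
rms_before = np.sqrt(np.mean((noisy - true) ** 2))
rms_after = np.sqrt(np.mean((smooth - true) ** 2))
```

The prior encodes exactly the fluid-mechanics constraint the abstract invokes: neighboring estimates should vary smoothly, so isolated outliers are pulled toward their neighbors while genuine gradients survive.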
DEFF Research Database (Denmark)
Lassen, L.H.; Jacobsen, V.B.; Haderslev, P.A.
2008-01-01
Calcitonin gene-related peptide (CGRP)-containing nerves are closely associated with cranial blood vessels. CGRP is the most potent vasodilator known in isolated cerebral blood vessels. CGRP can induce migraine attacks, and two selective CGRP receptor antagonists are effective in the treatment...
Bellofiore, Alessandro; Quinlan, Nathan J
2011-09-01
We investigate the potential of prosthetic heart valves to generate abnormal flow and stress patterns, which can contribute to platelet activation and lysis according to blood damage accumulation mechanisms. High-resolution velocity measurements of the unsteady flow field, obtained with a standard particle image velocimetry system and a scaled-up model valve, are used to estimate the shear stresses arising downstream of the valve, accounting for flow features at scales less than one order of magnitude larger than blood cells. Velocity data at effective spatial and temporal resolution of 60 μm and 1.75 kHz, respectively, enabled accurate extraction of Lagrangian trajectories and loading histories experienced by blood cells. Non-physiological stresses up to 10 Pa were detected, while the development of vortex flow in the wake of the valve was observed to significantly increase the exposure time, favouring platelet activation. The loading histories, combined with empirical models for blood damage, reveal that platelet activation and lysis are promoted at different stages of the heart cycle. Shear stress and blood damage estimates are shown to be sensitive to measurement resolution.
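The blood-damage accumulation the abstract refers to is commonly modeled with a power law in shear stress and exposure time, of the Giersiepen/Wurzinger type D = C·τ^a·t^b, summed along a cell's Lagrangian loading history. The sketch below treats the constants as placeholders, not the paper's values, and uses a simple linear (Miner-style) summation.

```python
# Illustrative power-law hemolysis constants (placeholders of the commonly
# cited Giersiepen form, tau in Pa, t in s); not values from the paper.
C, a, b = 3.62e-5, 2.416, 0.785

def damage_index(stress_history, dt):
    """Accumulate damage over a shear-stress loading history (Pa) sampled
    at interval dt (s), summing per-step power-law contributions."""
    return sum(C * tau ** a * dt ** b for tau in stress_history)

# A cell crossing the valve wake: a brief 10 Pa peak (the non-physiological
# level detected in the study) within a lower background stress, sampled at
# the 1.75 kHz effective temporal resolution reported.
dt = 1.0 / 1750.0
history = [1.0] * 50 + [10.0] * 10 + [1.0] * 50
D = damage_index(history, dt)
```

Because the stress exponent a is well above 1, the short 10 Pa excursion dominates the accumulated damage, which is why vortex-induced increases in exposure time at high stress matter so much for platelet activation.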
Cowan, F; Thoresen, M
1985-06-01
A pulsed Doppler bidirectional ultrasound system has been used to measure alterations in the blood velocities in the superior sagittal sinus of the healthy term newborn infant in response to unilateral and bilateral jugular venous occlusion. These maneuvers were performed with the baby lying in different positions: supine, prone, and on the side (both left and right), the neck flexed or extended, and with the head in the midline or turned 90 degrees to the side (both left and right). Transfontanel pressure was also measured in these positions during occlusions. Results show that turning the head effectively occludes the jugular vein on the side to which the head is turned and that occluding the other jugular vein does not force blood through this functional obstruction. The effect of different forms of external pressure to the head on the superior sagittal sinus velocities was also examined. Alterations in velocities were frequently profound although they varied considerably from baby to baby. This work shows how readily large fluctuations in cranial venous velocities and pressures can occur in the course of normal handling of babies.
Choomphon-anomakhun, Natthaphon; Natenapit, Mayuree
2018-02-01
A numerical simulation of three-dimensional (3-D) implant assisted-magnetic drug targeting (IA-MDT) using ferromagnetic spherical targets, including the effect of the vessel wall on the blood flow, is presented. The targets were implanted within arterioles and subjected to an externally applied uniform magnetic field in order to increase the effectiveness of targeting magnetic drug carrier particles (MDCPs). The capture area (As) of the MDCPs was determined by inspection of the particle trajectories simulated from the particle equations of motion. The blood flow velocities at any particle position around the target were obtained by applying bilinear interpolation to the numerical blood velocity data. The effects on As of the type of ferromagnetic materials in the targets and MDCPs, average blood flow rates, mass fraction of the ferromagnetic material in the MDCPs, average radii of MDCPs (Rp) and the externally applied magnetic field strength (μ0H0) were evaluated. Furthermore, the appropriate μ0H0 and Rp for the IA-MDT design are suggested. In the case of the SS409 target and magnetite MDCPs, dimensionless capture areas ranging from 4.1 to 12.4 and corresponding to particle capture efficiencies of 31-94% were obtained with Rp ranging from 100 to 500 nm, a weight fraction of 80%, μ0H0 of 0.6 T and an average blood flow rate of 0.01 m/s. In addition, the more general 3-D modelling of IA-MDT in this work is applicable to IA-MDT using spherical targets implanted within blood vessels for both laminar and potential blood flows including the wall effect.
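The bilinear-interpolation step used to evaluate the blood velocity at arbitrary particle positions between grid nodes can be sketched in two dimensions (the paper works in 3-D, where the analogous operation is trilinear; the field and grid here are illustrative).

```python
import numpy as np

def bilinear(grid, x, y):
    """Bilinear interpolation of a 2-D field sampled on a unit-spaced grid,
    as used to evaluate the blood velocity at a particle position lying
    between the numerical velocity-data nodes."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * grid[y0, x0]
            + fx * (1 - fy) * grid[y0, x0 + 1]
            + (1 - fx) * fy * grid[y0 + 1, x0]
            + fx * fy * grid[y0 + 1, x0 + 1])

# Demo on the linear field u(x, y) = 2x + 3y, which bilinear interpolation
# reproduces exactly at any interior point.
xs, ys = np.meshgrid(np.arange(5), np.arange(5))
u = 2.0 * xs + 3.0 * ys
val = bilinear(u, 1.5, 2.25)   # exact value: 2*1.5 + 3*2.25 = 9.75
```

Inside a trajectory integrator, this lookup is called at every sub-step of the particle equations of motion, so the interpolated field (not the raw grid) is what determines the computed capture area.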
Asano, Kenichiro; Ogata, Ai; Tanaka, Keiko; Ide, Yoko; Sankoda, Akiko; Kawakita, Chieko; Nishikawa, Mana; Ohmori, Kazuyoshi; Kinomura, Masaru; Shimada, Noriaki; Fukushima, Masaki
2014-05-01
The aim of this study was to identify the main influencing factor of the shear wave velocity (SWV) of the kidneys measured by acoustic radiation force impulse elastography. The SWV was measured in the kidneys of 14 healthy volunteers and 319 patients with chronic kidney disease. The estimated glomerular filtration rate was calculated by the serum creatinine concentration and age. As an indicator of arteriosclerosis of large vessels, the brachial-ankle pulse wave velocity was measured in 183 patients. Compared to the degree of interobserver and intraobserver deviation, a large variance of SWV values was observed in the kidneys of the patients with chronic kidney disease. Shear wave velocity values in the right and left kidneys of each patient correlated well, with high correlation coefficients (r = 0.580-0.732). The SWV decreased concurrently with a decline in the estimated glomerular filtration rate. A low SWV was obtained in patients with a high brachial-ankle pulse wave velocity. Despite progression of renal fibrosis in the advanced stages of chronic kidney disease, these results were in contrast to findings for chronic liver disease, in which progression of hepatic fibrosis results in an increase in the SWV. Considering that a high brachial-ankle pulse wave velocity represents the progression of arteriosclerosis in the large vessels, the reduction of elasticity succeeding diminution of blood flow was suspected to be the main influencing factor of the SWV in the kidneys. This study indicates that diminution of blood flow may affect SWV values in the kidneys more than the progression of tissue fibrosis. Future studies for reducing data variance are needed for effective use of acoustic radiation force impulse elastography in patients with chronic kidney disease.
Blood flow velocity in the Popliteal Vein using Transverse Oscillation Ultrasound
DEFF Research Database (Denmark)
Bechsgaard, Thor; Lindskov Hansen, Kristoffer; Brandt, Andreas Hjelm
2016-01-01
. Transverse Oscillation US (TOUS), a non-invasive angle-independent method, has been implemented on a commercial scanner. The advantage of TOUS compared to SDUS is a more elaborate visualization of complex flow. The aim of this study was to evaluate whether TOUS performs equally to SDUS for recording velocities......
Bessems, D.; Rutten, M.C.M.; Vosse, van de F.N.
2007-01-01
Lumped-parameter models (zero-dimensional) and wave-propagation models (one-dimensional) for pressure and flow in large vessels, as well as fully three-dimensional fluid–structure interaction models for pressure and velocity, can contribute valuably to answering physiological and patho-physiological
ter Laan, Mark; van Dijk, J. Marc C.; Elting, Jan-Willem J.; Fidler, Vaclav; Staal, Michiel J.
It has been shown that transcutaneous electrical neurostimulation (TENS) reduces sympathetic tone. Spinal cord stimulation (SCS) has proven qualities to improve coronary, peripheral, and cerebral blood circulation. Therefore, we postulate that TENS and SCS affect the autonomic nervous system in
Hinohara, Hiroshi; Kadoi, Yuji; Ide, Masanobu; Kuroda, Masataka; Saito, Shigeru; Mizutani, Akio
2010-08-01
The purpose of this study was to compare the degree of increase in middle cerebral artery (MCA) blood flow velocity after tourniquet deflation when modulating hyperventilation during orthopedic surgery under sevoflurane, isoflurane, or propofol anesthesia. Twenty-four patients undergoing elective orthopedic surgery were randomly divided into sevoflurane, isoflurane, and propofol groups. Anesthesia was maintained with sevoflurane, isoflurane, or propofol administration with 33% oxygen and 67% nitrous oxide at anesthetic drug concentrations adequate to maintain bispectral values between 45 and 50. A 2.0-MHz transcranial Doppler probe was attached to the patient's head at the temporal window, and mean blood flow velocity in the MCA (V (mca)) was continuously measured. The extremity was exsanguinated with an Esmarch bandage, and the pneumatic tourniquet was inflated to a pressure of 450 mmHg. Arterial blood pressure, heart rate, V (mca) and arterial blood gases were measured every minute for 10 min after release of the tourniquet in all three groups. Immediately after tourniquet release, the patients' respiratory rates were increased to tightly maintain end-tidal carbon dioxide (PetCO(2)) at 35 mmHg. No change in partial pressure of carbon dioxide in arterial blood (PaCO(2)) was observed pre- and posttourniquet deflation in any of the three groups. Increase in V (mca) in the isoflurane group was greater than that in the other two groups after tourniquet deflation. In addition, during the study period, no difference in V (mca) after tourniquet deflation was observed between the propofol and sevoflurane groups. Hyperventilation could prevent an increase in V (mca) in the propofol and sevoflurane groups after tourniquet deflation. However, hyperventilation could not prevent an increase in V (mca) in the isoflurane group.
Directory of Open Access Journals (Sweden)
Siu H. Chan
2012-02-01
Full Text Available Vascular stiffness has been proposed as a simple method to assess the arterial loading conditions of the heart that induce left ventricular hypertrophy (LVH). There is some controversy as to whether the relationship of vascular stiffness to LVH is independent of blood pressure, and which measurement of arterial stiffness, augmentation index (AI) or pulse wave velocity (PWV), is best. Carotid pulse wave contour and pulse wave velocity were measured in patients (n=20) with hypertension whose blood pressure (BP) was under control (<140/90 mmHg) with antihypertensive medications, and who were without valvular heart disease. Left ventricular mass, calculated from 2D echocardiograms, was adjusted for body size using two different methods: body surface area and height. There was a significant (P<0.05) linear correlation between LV mass index and pulse wave velocity. This was not explained by BP level or lower LV mass in women, as there was no significant difference in PWV according to gender (1140.1 ± 67.8 vs. 1110.6 ± 57.7 cm/s). In contrast to PWV, there was no significant correlation between LV mass and AI. In summary, these data suggest that aortic vascular stiffness is an indicator of LV mass even when blood pressure is controlled to less than 140/90 mmHg in hypertensive patients. The data further suggest that PWV is a better proxy or surrogate marker for LV mass than AI, and the measurement of PWV may be useful as a rapid and less expensive assessment of the presence of LVH in this patient population.
International Nuclear Information System (INIS)
Parthimos, D; Osterloh, K; Pries, A R; Griffith, T M
2004-01-01
We have performed a nonlinear analysis of fluctuations in red cell velocity and arteriolar calibre in the mesenteric bed of the anaesthetized rat. Measurements were obtained under control conditions and during local superfusion with NG-nitro-L-arginine (L-NNA, 30 μM) and tetrabutylammonium (TBA, 0.1 mM), which suppress NO synthesis and block Ca2+-activated K+ channels (KCa), respectively. Time series were analysed by calculating correlation dimensions and largest Lyapunov exponents. Both statistics were higher for red cell velocity than for diameter fluctuations, thereby potentially differentiating between global and local mechanisms that regulate microvascular flow. Evidence for underlying nonlinear structure was provided by analysis of surrogate time series generated from the experimental data following randomization of Fourier phase. Complexity indices characterizing time series under control conditions were in general higher than those derived from data obtained during superfusion with L-NNA and TBA
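The correlation-dimension statistic used in such analyses can be illustrated with a minimal Grassberger-Procaccia sketch. The test signal and all parameters below are hypothetical, not the authors' pipeline:

```python
import numpy as np

def correlation_sum(x, r, m=3, tau=1):
    """Grassberger-Procaccia correlation sum C(r): the fraction of pairs
    of delay-embedded points lying closer than r."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(m)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)          # distinct pairs only
    return np.mean(d[iu] < r)

# Hypothetical test signal: a slightly noisy sine, i.e. a closed curve in
# embedding space.  The slope of log C(r) versus log r over small r
# estimates the correlation dimension (near 1 for a curve).
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.01 * rng.standard_normal(1000)
radii = np.array([0.05, 0.1, 0.2, 0.4])
c = np.array([correlation_sum(x, r) for r in radii])
slope = np.polyfit(np.log(radii), np.log(c), 1)[0]
print(f"estimated correlation dimension: {slope:.2f}")
```

Surrogate testing, as in the abstract, would repeat this after randomizing the Fourier phases of `x` and compare the resulting statistics.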
Estimation of blood velocity vectors using transverse ultrasound beam focusing and cross-correlation
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Lacasa, Isabel Rodriguez
1999-01-01
program. Simulations are shown for a parabolic velocity profile for flow-to-beam angles of 30, 45, 60, and 90 degrees using a 64-element linear array with a center frequency of 3 MHz, a pitch of 0.3 mm, and an element height of 5 mm. The peak velocity in the parabolic flow was 0.5 m/s, and the pulse repetition frequency was 3.5 kHz. Using four pulse-echo lines, the parabolic flow profile was found with a standard deviation of 0.028 m/s at 60 degrees and 0.092 m/s at 90 degrees (transverse to the ultrasound beam), corresponding to accuracies of 5.6% and 18.4%. Using ten lines gave standard deviations......
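The cross-correlation step underlying such estimators can be sketched for the purely axial case. The RF signals and parameters below are illustrative only; the actual method recovers the transverse component with specially focused beams, which this toy example does not attempt:

```python
import numpy as np

# Minimal sketch of time-shift estimation by cross-correlation of two
# successive RF lines (all parameters illustrative).
fs = 15e6            # RF sampling frequency [Hz]
f_prf = 3.5e3        # pulse repetition frequency [Hz]
c = 1540.0           # speed of sound [m/s]

t = np.arange(256) / fs
rf1 = np.sin(2 * np.pi * 3e6 * t) * np.exp(-((t - 8e-6) / 2e-6) ** 2)

true_v = 0.5                              # axial velocity [m/s]
shift = 2 * true_v / c / f_prf            # round-trip echo shift [s]
rf2 = np.sin(2 * np.pi * 3e6 * (t - shift)) * \
      np.exp(-((t - shift - 8e-6) / 2e-6) ** 2)

xc = np.correlate(rf2, rf1, mode="full")
lag = (np.argmax(xc) - (len(t) - 1)) / fs  # best-matching shift [s]
v_est = lag * c * f_prf / 2
print(f"estimated axial velocity: {v_est:.3f} m/s")
```

The estimate is quantized to the RF sample period; practical estimators interpolate around the correlation peak (and average several pulse-echo lines, as in the abstract) to reach the reported sub-percent accuracies.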
National Research Council Canada - National Science Library
Cervenansky, F
2001-01-01
...), and arterial blood pressure (ABP). To clarify the links, we compared two frequency methods based on the coherence function to estimate the influence of ICP, ABP, and CBV on the pairs of slow waves CBV-ABP, ICP-CBV and ICP-ABP, respectively...
Mueller, A.A.; Schumann, D.; Reddy, R.R.; Schwenzer-Zimmerer, K.; Mueller-Gerbl, M.; Zeilhofer, H.F.; Sailer, H.F.; Reddy, S.G.
2012-01-01
BACKGROUND: Cleft lip repair aims to normalize the disturbed anatomy and function. The authors determined whether normalization of blood circulation is achieved. METHODS: The authors measured the microcirculatory flow, oxygen saturation, and hemoglobin level in the lip and nose of controls (n = 22)
Improved accuracy in the estimation of blood velocity vectors using matched filtering
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Gori, P.
2000-01-01
the flow and the ultrasound beam (30, 45, 60, and 90 degrees). The parabolic flow has a peak velocity of 0.5 m/s and the pulse repetition frequency is 3.5 kHz. Simulating twenty emissions and calculating the cross-correlation using four pulse-echo lines for each estimate, the parabolic flow profile...... is found with a standard deviation of 0.014 m/s at 45 degrees (corresponding to an accuracy of 2.8%) and 0.022 m/s (corresponding to an accuracy of 4.4%) at 90 degrees, which is transverse to the ultrasound beam....
Wu, Ying-Chin; Hsieh, Wu-Shiun; Hsu, Chyong-Hsin; Chiu, Nan-Chang; Chou, Hung-Chieh; Chen, Chien-Yi; Peng, Shinn-Forng; Hung, Han-Yang; Chang, Jui-Hsing; Chen, Wei J; Jeng, Suh-Fang
2013-05-01
The objective of this study was to examine the relationships of Doppler cerebral blood flow velocity (CBFV) asymmetry measures with developmental outcomes in term infants. Doppler CBFV parameters (peak systolic velocity [PSV] and mean velocity [MV]) of the bilateral middle cerebral arteries of 52 healthy term infants were prospectively examined on postnatal days 1-5, and then their motor, cognitive and language development was evaluated with the Bayley Scales of Infant and Toddler Development, Third Edition, at 6, 12, 18 and 24 months of age. The left CBFV asymmetry measure (PSV or MV) was calculated by subtracting the right-side value from the left-side value. Left CBFV asymmetry measures were significantly positively related to motor scores at 6 months (r = 0.3-0.32) but not to cognitive or language outcomes. Thus, the leftward hemodynamic status of the middle cerebral arteries, as measured by cranial Doppler ultrasound in the neonatal period, predicts early motor outcome in term infants. Copyright © 2013 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Middle cerebral artery flow velocity and blood flow during exercise and muscle ischemia in humans
DEFF Research Database (Denmark)
Jørgensen, L G; Perko, M; Hanel, B
1992-01-01
Changes in middle cerebral artery flow velocity (Vmean), measured by transcranial Doppler ultrasound, were used to determine whether increases in mean arterial pressure (MAP) or brain activation enhance cerebral perfusion during exercise. We also evaluated the role of "central command," mechanoreceptors, and/or muscle "metaboreceptors" on cerebral perfusion. Ten healthy subjects performed two levels of dynamic exercise corresponding to a heart rate of 110 (range 89-134) and 148 (129-170) beats/min, respectively, and exhaustive one-legged static knee extension. Measurements were continued during 2...... The results support the hypothesis that cerebral perfusion during exercise reflects an increase in brain activation that is independent of MAP, central command, and muscle metaboreceptors but is likely to depend on the influence of mechanoreceptors.
Directory of Open Access Journals (Sweden)
Ananya Tripathi
2017-11-01
Full Text Available Background: Constant blood flow despite changes in blood pressure, a phenomenon called autoregulation, has been demonstrated for various organ systems. We hypothesized that by changing hydrostatic pressures in peripheral arteries, we can establish these limits of autoregulation in peripheral arteries based on local pulse wave velocity (PWV). Methods: Electrocardiogram and plethysmograph waveforms were recorded at the left and right index fingers in 18 healthy volunteers. Each subject changed their left arm position, keeping the right arm stationary. Pulse arrival times (PAT) at both fingers were measured and used to calculate PWV. We calculated ΔPAT (ΔPWV), the differences between the left and right PATs (PWVs), and compared them to the respective calculated blood pressure at the left index fingertip to derive the limits of autoregulation. Results: ΔPAT decreased and ΔPWV increased exponentially at low blood pressures in the fingertip up to a blood pressure of 70 mmHg, after which changes in ΔPAT and ΔPWV were minimal. The empirically chosen 20 mmHg window (75-95 mmHg) was confirmed to be within the autoregulatory limit (slope = 0.097, p = 0.56). ΔPAT and ΔPWV within a 20 mmHg moving window were not significantly different from the respective data points within the control 75-95 mmHg window when the pressure at the fingertip was between 56 and 110 mmHg for ΔPAT and between 57 and 112 mmHg for ΔPWV. Conclusions: Changes in hydrostatic pressure due to changes in arm position significantly affect peripheral arterial stiffness as assessed by ΔPAT and ΔPWV, allowing us to estimate peripheral autoregulation limits based on PWV.
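The PAT/PWV bookkeeping described in the Methods can be sketched as follows. The beat timings and the heart-to-fingertip path length below are hypothetical numbers chosen for illustration:

```python
import numpy as np

# Hypothetical beat-by-beat timing data [s]; in the study, R-peak times
# come from the ECG and arrival times from the finger plethysmographs.
r_peaks = np.array([0.00, 0.85, 1.71, 2.55])
left_arrival = np.array([0.22, 1.08, 1.93, 2.78])
right_arrival = np.array([0.20, 1.05, 1.91, 2.75])

path_len = 0.75                     # assumed heart-to-fingertip path [m]
pat_left = left_arrival - r_peaks   # pulse arrival times [s]
pat_right = right_arrival - r_peaks
pwv_left = path_len / pat_left      # local pulse wave velocity [m/s]
pwv_right = path_len / pat_right

delta_pat = pat_left - pat_right    # ΔPAT (left minus right), as in the study
delta_pwv = pwv_left - pwv_right    # ΔPWV
print(delta_pat.mean(), delta_pwv.mean())
```

Because the right arm stays stationary, the right-hand PAT serves as a per-beat reference, so ΔPAT and ΔPWV isolate the hydrostatic effect of left-arm position from beat-to-beat cardiac variability.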
Tanigawa, Shohei; Mano, Kazune; Wada, Kenji; Matsunaka, Toshiyuki; Horinaka, Hiromichi
2016-04-01
Blood vessel plaque with a large lipid core is at risk of becoming a thrombus and is likely to induce acute heart disease. To prevent this, it is necessary to determine not only the plaque's size but also its chemical composition. We therefore made a prototype combination probe to diagnose carotid artery plaque. It is used to differentiate propagation characteristics between light spectra and ultrasonic images. By propagating light and ultrasound along a common direction, it is possible to effectively warm the diagnosis domain. Moreover, the probe is expected to be compact and easy to use for diagnosing human carotid artery plaque. We applied the combination probe to a carotid artery phantom with a lipid area and obtained an image of the ultrasonic velocity change in the fatty area.
International Nuclear Information System (INIS)
Stadlbauer, Andreas; Riet, Wilma van der; Crelier, Gerard; Salomonowitz, Erich
2010-01-01
Purpose: To assess the feasibility and potential limitations of the acceleration techniques SENSE and k-t BLAST for time-resolved three-dimensional (3D) velocity mapping of aortic blood flow. Furthermore, to quantify differences in peak velocity versus heart phase curves. Materials and methods: Time-resolved 3D blood flow patterns were investigated in eleven volunteers and two patients suffering from aortic diseases with accelerated PC-MR sequences in combination with either SENSE (R = 2) or k-t BLAST (6-fold). Both sequences showed similar data acquisition times and hence acceleration efficiency. Flow-field streamlines were calculated and visualized using the GTFlow software tool in order to reconstruct 3D aortic blood flow patterns. Differences between the peak velocities from single-slice PC-MRI experiments using SENSE 2 and k-t BLAST 6 were calculated for the whole cardiac cycle and averaged over all volunteers. Results: Reconstruction of 3D flow patterns in volunteers revealed attenuations in blood flow dynamics for k-t BLAST 6 compared to SENSE 2, in terms of 3D streamlines showing fewer and less distinct vortices and a reduction in peak velocity, which is caused by temporal blurring. Pathologic blood flow patterns in the patients with aortic diseases were detected solely by time-resolved 3D MR velocity mapping in combination with SENSE. For volunteers, we found a broadening and flattening of the peak velocity versus heart phase diagram between the two acceleration techniques, which is evidence of the temporal blurring of the k-t BLAST approach. Conclusion: We demonstrated the feasibility of SENSE and detected potential limitations of k-t BLAST when used for time-resolved 3D velocity mapping. The effects of higher k-t BLAST acceleration factors have to be considered for application in 3D velocity mapping.
Fredriksson, Ingemar; Larsson, Marcus; Nyström, Fredrik H.; Länne, Toste; Östgren, Carl J.; Strömberg, Tomas
2010-01-01
OBJECTIVE To compare the microcirculatory velocity distribution in type 2 diabetic patients and nondiabetic control subjects at baseline and after local heating. RESEARCH DESIGN AND METHODS The skin blood flow response to local heating (44°C for 20 min) was assessed in 28 diabetic patients and 29 control subjects using a new velocity-resolved quantitative laser Doppler flowmetry technique (qLDF). The qLDF estimates erythrocyte (RBC) perfusion (velocity × concentration), in a physiologically relevant unit (grams RBC per 100 g tissue × millimeters per second) in a fixed output volume, separated into three velocity regions: v < 1 mm/s, 1-10 mm/s, and v > 10 mm/s. RESULTS The increased blood flow occurs in vessels with a velocity >1 mm/s. A significantly lower response in qLDF total perfusion was found in diabetic patients than in control subjects after heat provocation because of less high-velocity blood flow (v > 10 mm/s). The RBC concentration in diabetic patients increased sevenfold for v between 1 and 10 mm/s, and 15-fold for v > 10 mm/s, whereas no significant increase was found for v < 1 mm/s. The mean velocity increased from 0.94 to 7.3 mm/s in diabetic patients and from 0.83 to 9.7 mm/s in control subjects. CONCLUSIONS The perfusion increase occurs in larger shunting vessels and not as an increase in capillary flow. Baseline diabetic patient data indicated a redistribution of flow to higher velocity regions, associated with longer duration of diabetes. A lower perfusion was associated with a higher BMI and a lower toe-to-brachial systolic blood pressure ratio. PMID:20393143
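The velocity-resolved bookkeeping behind such results (perfusion = concentration × velocity, accumulated within the three velocity bands) can be sketched with made-up sample data; this is not the qLDF signal processing itself, only the banded summation:

```python
import numpy as np

# Hypothetical per-component data: RBC concentration conc_i moving at
# speed v_i.  Perfusion in each band is the sum of conc * v over the
# components whose speed falls in that band.
v = np.array([0.3, 0.8, 2.5, 6.0, 12.0, 25.0])           # speed [mm/s]
conc = np.array([0.10, 0.05, 0.04, 0.02, 0.01, 0.005])   # g RBC / 100 g tissue

bands = [(0.0, 1.0), (1.0, 10.0), (10.0, np.inf)]
perf_by_band = []
for lo, hi in bands:
    m = (v >= lo) & (v < hi)
    perf = float(np.sum(conc[m] * v[m]))  # g RBC / 100 g tissue * mm/s
    perf_by_band.append(perf)
    print(f"{lo}-{hi} mm/s: perfusion = {perf:.3f}")
```

Splitting perfusion this way is what lets the study attribute the heating response to the high-velocity (shunting) band rather than to capillary-range flow.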
Lima, Rui; Wada, Shigeo; Takeda, Motohiro; Tsubota, Ken-ichi; Yamaguchi, Takami
2007-01-01
A confocal microparticle image velocimetry (micro-PIV) system was used to obtain detailed information on the velocity profiles for the flow of pure water (PW) and in vitro blood (haematocrit up to 17%) in a 100-microm-square microchannel. All the measurements were made in the middle plane of the microchannel at a constant flow rate and low Reynolds number (Re=0.025). The averaged ensemble velocity profiles were found to be markedly parabolic for all the working fluids studied. When comparing the instantaneous velocity profiles of the three fluids, our results indicated that the profile shape depended on the haematocrit. Our confocal micro-PIV measurements demonstrate that the root mean square (RMS) values increase with the haematocrit implying that it is important to consider the information provided by the instantaneous velocity fields, even at low Re. The present study also examines the potential effect of the RBCs on the accuracy of the instantaneous velocity measurements.
Directory of Open Access Journals (Sweden)
Su-Youn Cho
2017-04-01
Full Text Available Although regular Taekwondo (TKD) training has been reported to be effective for improving cognitive function in children, the mechanism underlying this improvement remains unclear. The purpose of the present study was to observe changes in neuroplasticity-related growth factors in the blood, assess cerebral blood flow velocity, and verify the resulting changes in children's cognitive function after TKD training. Thirty healthy elementary school students were randomly assigned to control (n = 15) and TKD (n = 15) groups. The TKD training was conducted for 60 min at a rating of perceived exertion (RPE) of 11-15, 5 times per week, for 16 weeks. Brain-derived neurotrophic factor (BDNF), vascular endothelial growth factor (VEGF), and insulin-like growth factor-1 (IGF-1) levels were measured by blood sampling before and after the training, and the cerebral blood flow velocities (peak systolic [MCAs], end diastolic [MCAd], mean cerebral blood flow velocities [MCAm], and pulsatility index [PI]) of the middle cerebral artery (MCA) were measured using Doppler ultrasonography. For cognitive function assessment, Stroop Color and Word Tests (Word, Color, and Color-Word) were administered along with other measurements. The serum BDNF, VEGF, and IGF-1 levels and the Color-Word test scores among the sub-factors of the Stroop Color and Word Test were significantly higher in the TKD group after the intervention (p < 0.05). On the other hand, no statistically significant differences were found in any factors related to cerebral blood flow velocities, or in the Word test and Color test scores (p > 0.05). Thus, 16-week TKD training did not significantly affect cerebral blood flow velocities, but the training may have been effective in increasing children's cognitive function by inducing an increase in the levels of neuroplasticity-related growth factors.
International Nuclear Information System (INIS)
Wall, M.J.W.
1992-01-01
The notion of "probability" is generalized to that of "likelihood," and a natural logical structure is shown to exist for any physical theory that predicts likelihoods. Two physically based axioms are given for this logical structure to form an orthomodular poset with an order-determining set of states. The results strengthen the basis of the quantum logic approach to axiomatic quantum theory. 25 refs
The phylogenetic likelihood library.
Flouri, T; Izquierdo-Carrasco, F; Darriba, D; Aberer, A J; Nguyen, L-T; Minh, B Q; Von Haeseler, A; Stamatakis, A
2015-03-01
We introduce the Phylogenetic Likelihood Library (PLL), a highly optimized application programming interface for developing likelihood-based phylogenetic inference and postanalysis software. The PLL implements appropriate data structures and functions that allow users to quickly implement common, error-prone, and labor-intensive tasks, such as likelihood calculations, model parameter as well as branch length optimization, and tree space exploration. The highly optimized and parallelized implementation of the phylogenetic likelihood function and a thorough documentation provide a framework for rapid development of scalable parallel phylogenetic software. By example of two likelihood-based phylogenetic codes we show that the PLL improves the sequential performance of current software by a factor of 2-10 while requiring only 1 month of programming time for integration. We show that, when numerical scaling for preventing floating point underflow is enabled, the double precision likelihood calculations in the PLL are up to 1.9 times faster than those in BEAGLE. On an empirical DNA dataset with 2000 taxa the AVX version of PLL is 4 times faster than BEAGLE (scaling enabled and required). The PLL is available at http://www.libpll.org under the GNU General Public License (GPL). © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
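The core computation that such a library optimizes, the phylogenetic likelihood via Felsenstein's pruning, can be sketched in a few lines. This is a minimal Jukes-Cantor example on a star tree with a single internal node; it is unrelated to the PLL's actual API:

```python
import numpy as np

def jc_p(t):
    """Jukes-Cantor transition probability matrix for branch length t
    (expected substitutions per site)."""
    same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
    return np.where(np.eye(4, dtype=bool), same, diff)

def site_likelihood(leaf_states, branch_lengths):
    """Felsenstein pruning for one alignment site on a star tree:
    L = sum_x pi_x * prod_i P(x -> leaf_i | t_i), with pi uniform."""
    partial = np.ones(4)                   # conditional likelihoods at root
    for state, t in zip(leaf_states, branch_lengths):
        partial *= jc_p(t)[:, state]
    return float(np.sum(0.25 * partial))

# Hypothetical 3-taxon site pattern A, A, C (states encoded 0..3 = A,C,G,T).
print(site_likelihood([0, 0, 1], [0.1, 0.1, 0.3]))
```

Production code like the PLL vectorizes this per-site loop across the alignment and rescales the partial likelihoods to avoid the floating-point underflow mentioned in the abstract.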
Tsuji, Bun; Honda, Yasushi; Ikebe, Yusuke; Fujii, Naoto; Kondo, Narihiko; Nishiyasu, Takeshi
2015-04-15
Hyperthermia during prolonged exercise leads to hyperventilation, which can reduce arterial CO2 pressure (PaCO2) and, in turn, cerebral blood flow (CBF) and the thermoregulatory response. We investigated 1) whether humans can voluntarily suppress hyperthermic hyperventilation during prolonged exercise and 2) the effects of voluntary breathing control on PaCO2, CBF, sweating, and skin blood flow. Twelve male subjects performed two exercise trials at 50% of peak oxygen uptake in the heat (37°C, 50% relative humidity) for up to 60 min. Throughout the exercise, subjects breathed normally (normal-breathing trial) or they tried to control their minute ventilation (respiratory frequency was timed with a metronome, and target tidal volumes were displayed on a monitor) to the level reached after 5 min of exercise (controlled-breathing trial). Plotting ventilatory and cerebrovascular responses against esophageal temperature (Tes) showed that minute ventilation increased linearly with rising Tes during normal breathing, whereas controlled breathing attenuated the increased ventilation (increase in minute ventilation from the onset of controlled breathing: 7.4 vs. 1.6 l/min at +1.1°C Tes). Estimated PaCO2 and middle cerebral artery blood flow velocity (MCAV) fell with rising Tes, but controlled breathing attenuated those reductions (estimated PaCO2 -3.4 vs. -0.8 mmHg; MCAV -10.4 vs. -3.9 cm/s at +1.1°C Tes; P = 0.002 and 0.011, respectively). Controlled breathing had no significant effect on chest sweating or forearm vascular conductance (P = 0.67 and 0.91, respectively). Our results indicate that humans can voluntarily suppress hyperthermic hyperventilation during prolonged exercise, and this suppression mitigates changes in PaCO2 and CBF. Copyright © 2015 the American Physiological Society.
Zafar, Haroon; Sharif, Faisal; Leahy, Martin J
2014-12-01
The main objective of this study was to assess the blood flow rate and velocity in coronary artery stenosis using intracoronary frequency domain optical coherence tomography (FD-OCT). A correlation between fractional flow reserve (FFR) and FD-OCT derived blood flow velocity is also included in this study. A total of 20 coronary stenoses in 15 patients were assessed consecutively by quantitative coronary angiography (QCA), FFR and FD-OCT. A percutaneous coronary intervention (PCI) optimization system was used in this study, which combines wireless FFR measurement and FD-OCT imaging in one platform. Stenoses were labelled severe if FFR ≤ 0.8. Blood flow rate and velocity in each stenosis segment were derived from the volumetric analysis of the FD-OCT pullback images. The FFR value was ≤ 0.80 in 5 stenoses (25%). The mean blood flow rate in severe coronary stenosis (n = 5) was 2.54 ± 0.55 ml/s, as compared to 4.81 ± 1.95 ml/s in stenosis with FFR > 0.8 (n = 15). A good and significant correlation between FFR and FD-OCT blood flow velocity in coronary artery stenosis (r = 0.74, p < 0.001) was found. The assessment of stenosis severity using FD-OCT derived blood flow rate and velocity has the ability to overcome many limitations of QCA and intravascular ultrasound (IVUS).
Lee, Jonghwan; Radhakrishnan, Harsha; Wu, Weicheng; Daneshmand, Ali; Climov, Mihail; Ayata, Cenk; Boas, David A
2013-06-01
This paper describes a novel optical method for label-free quantitative imaging of cerebral blood flow (CBF) and intracellular motility (IM) in the rodent cerebral cortex. This method is based on a technique that integrates dynamic light scattering (DLS) and optical coherence tomography (OCT), named DLS-OCT. The technique measures both the axial and transverse velocities of CBF, whereas conventional Doppler OCT measures only the axial one. In addition, the technique produces a three-dimensional map of the diffusion coefficient quantifying nontranslational motions. In the DLS-OCT diffusion map, we observed high-diffusion spots, whose locations highly correspond to neuronal cell bodies and whose diffusion coefficient agreed with that of the motion of intracellular organelles reported in vitro in the literature. Therefore, the present method has enabled, for the first time to our knowledge, label-free imaging of the diffusion-like motion of intracellular organelles in vivo. As an example application, we used the method to monitor CBF and IM during a brief ischemic stroke, where we observed an induced persistent reduction in IM despite the recovery of CBF after stroke. This result supports that the IM measured in this study represent the cellular energy metabolism-related active motion of intracellular organelles rather than free diffusion of intracellular macromolecules.
DEFF Research Database (Denmark)
Tarnow, Inge; Kristensen, Annemarie Thuri; Olsen, Lisbeth Høier
2005-01-01
The purpose of this prospective study was to investigate platelet function using in vitro tests based on both high and low shear rates and von Willebrand factor (vWf) multimeric composition in dogs with cardiac disease and turbulent high-velocity blood flow. Client-owned asymptomatic, untreated dogs...
Cote, Claudia; MacLeod, Jeffrey B; Yip, Alexandra M; Ouzounian, Maral; Brown, Craig D; Forgie, Rand; Pelletier, Marc P; Hassan, Ansar
2015-01-01
Rates of perioperative transfusion vary widely among patients undergoing cardiac surgery. Few studies have examined factors beyond the clinical characteristics of the patients that may be responsible for such variation. The purpose of this study was to determine whether differing practice patterns had an impact on variation in perioperative transfusion at a single center. Patients who underwent cardiac surgery at a single center between 2004 and 2011 were considered. Comparisons were made between patients who had received a perioperative transfusion and those who had not with respect to clinical factors at baseline, intraoperative variables, and differing practice patterns, as defined by the surgeon, anesthesiologist, perfusionist, and the year in which the procedure was performed. The risk-adjusted effect of these factors on perioperative transfusion rates was determined using multivariable regression modeling techniques. The study population comprised 4823 patients, of whom 1929 (40.0%) received a perioperative transfusion. Significant variation in perioperative transfusion rates was noted between surgeons (from 32.4% to 51.5%). Differing practice patterns contribute to significant variation in rates of perioperative transfusion within a single center. Strategies aimed at reducing overall transfusion rates must take into account such variability in practice patterns and account for nonclinical factors as well as known clinical predictors of blood transfusions. Copyright © 2015 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Kyoden, Tomoaki, E-mail: kyouden@nc-toyama.ac.jp; Naruki, Shoji; Akiguchi, Shunsuke; Momose, Noboru; Homae, Tomotaka; Hachiga, Tadashi [National Institute of Technology, Toyama College, 1-2 Ebie-Neriya, Imizu, Toyama 933-0293 (Japan); Ishida, Hiroki [Department of Applied Physics, Faculty of Science, Okayama University of Science, 1-1 Ridai-cho, Okayama 700-0005 (Japan); Andoh, Tsugunobu [Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, 2630 Sugitani, Toyama 930-0194 (Japan); Takada, Yogo [Graduate School of Engineering, Osaka City University, 3-3-138 Sugimoto, Sumiyoshi, Osaka 558-8585 (Japan)
2016-08-28
Two-beam multipoint laser Doppler velocimetry (two-beam MLDV) is a non-invasive imaging technique able to provide an image of two-dimensional blood flow and has potential for observing cancer as previously demonstrated in a mouse model. In two-beam MLDV, the blood flow velocity can be estimated from red blood cells passing through a fringe pattern generated in the skin. The fringe pattern is created at the intersection of two beams in conventional LDV and two-beam MLDV. Being able to choose the depth position is an advantage of two-beam MLDV, and the position of a blood vessel can be identified in a three-dimensional space using this technique. Initially, we observed the fringe pattern in the skin, and the undeveloped or developed speckle pattern generated in a deeper position of the skin. The validity of the absolute velocity value detected by two-beam MLDV was verified while changing the number of layers of skin around a transparent flow channel. The absolute velocity value independent of direction was detected using the developed speckle pattern, which is created by the skin construct and two beams in the flow channel. Finally, we showed the relationship between the signal intensity and the fringe pattern, undeveloped speckle, or developed speckle pattern based on the skin depth. The Doppler signals were not detected at deeper positions in the skin, which qualitatively indicates the depth limit for two-beam MLDV.
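The fringe-model relation that links a particle's speed to the Doppler burst frequency in dual-beam LDV can be sketched numerically. The wavelength, beam angle, and velocity below are assumed for illustration and are not taken from the study:

```python
import numpy as np

# Dual-beam LDV fringe model: two coherent beams crossing at full angle
# 2*theta form fringes with spacing d = lambda / (2 sin(theta)); a red
# blood cell crossing the fringes at speed v scatters light modulated at
# the Doppler burst frequency f_D = v / d.
lam = 632.8e-9                  # He-Ne laser wavelength [m] (assumed)
half_angle = np.deg2rad(5.0)    # half of the beam intersection angle (assumed)
d = lam / (2 * np.sin(half_angle))   # fringe spacing [m]

v = 5e-3                        # assumed blood velocity [m/s]
f_doppler = v / d
print(f"fringe spacing {d * 1e6:.2f} um, Doppler frequency {f_doppler / 1e3:.2f} kHz")
```

Because the fringe pattern exists only at the beam intersection, moving that intersection through the tissue is what gives the two-beam arrangement its depth selectivity, as described in the abstract.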
Calixto, Rd; Verlengia, R; Crisp, Ah; Carvalho, Tb; Crepaldi, Md; Pereira, Aa; Yamada, Ak; da Mota, Gr; Lopes, Cr
2014-12-01
This study aimed to compare the effects of different velocities of eccentric muscle actions on acute blood lactate and serum growth hormone (GH) concentrations following free weight bench press exercises performed by resistance-trained men. Sixteen healthy men were divided into two groups: slow eccentric velocity (SEV; n = 8) and fast eccentric velocity (FEV; n = 8). Both groups performed four sets of eight eccentric repetitions at an intensity of 70% of their one repetition maximum eccentric (1RMecc) test, with 2-minute rest intervals between sets. The eccentric velocity was controlled to 3 seconds per range of motion for SEV and 0.5 seconds for the FEV group. There was a significantly greater GH response following the bench press exercise in the SEV group (1.7 ± 0.6 ng · mL(-1)) relative to the FEV group (0.1 ± 0.0 ng · mL(-1)). In conclusion, the velocity of eccentric muscle action influences acute responses following bench press exercises performed by resistance-trained men, with a slow velocity resulting in greater metabolic stress and hormone response.
Chamuleau, SAJ; Tio, RA; de Cock, CC; de Muinck, ED; Pijls, NHJ; van Eck-Smit, BLF; Koch, KT; Meuwissen, M; Dijkgraaf, MGW; Verberne, HJ; van Liebergen, RAM; Laarman, GJ; Tijssen, JGP; Piek, JJ; de Jong, A.
2002-01-01
OBJECTIVES This study aimed to investigate the roles of intracoronary derived coronary flow velocity reserve (CFVR) and myocardial perfusion scintigraphy (single photon emission computed tomography, or SPECT) for the management of an intermediate lesion in patients with multivessel coronary artery
Examples of Vector Velocity Imaging
DEFF Research Database (Denmark)
Hansen, Peter M.; Pedersen, Mads M.; Hansen, Kristoffer L.
2011-01-01
To measure blood flow velocity in vessels with conventional ultrasound, the velocity is estimated along the direction of the emitted ultrasound wave. It is therefore impossible to obtain accurate information on blood flow velocity and direction, when the angle between blood flow and ultrasound wa...
DEFF Research Database (Denmark)
Thomsen, C; Cortsen, M; Söndergaard, L
1995-01-01
Two important prerequisites for MR velocity mapping of pulsatile motion are synchronization of the sequence execution to the time course of the flow pattern and robustness toward loss of signal in complex flow fields. Synchronization is normally accomplished by using either prospective ECG ... for renal artery flow determination. The protocol uses 16 phase-encoding lines per heart beat during 16 heart cycles and gives a temporal velocity resolution of 160 msec. Comparison with a conventional ECG-triggered velocity mapping protocol was made in phantoms as well as in volunteers. In our study, both methods showed sufficient robustness toward complex flow in a phantom model. In comparison with the ECG technique, the segmentation technique reduced vessel blurring and pulsatility artifacts caused by respiratory motion, and average flow values obtained in vivo in the left renal artery agreed between ...
DEFF Research Database (Denmark)
Højlund Rasmussen, J; Mantoni, T; Belhage, B
2007-01-01
Continuous positive airway pressure (CPAP) is a treatment modality for pulmonary oxygenation difficulties. CPAP impairs venous return to the heart and, in turn, affects cerebral blood flow (CBF) and augments cerebral blood volume (CBV). We considered that during CPAP, elevation of the upper body ...
Earthquake likelihood model testing
Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.
2007-01-01
INTRODUCTION The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
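The binned-rate format described in this record lends itself to a direct likelihood computation. Below is a minimal sketch (not the RELM implementation; bin rates and counts are illustrative assumptions) of the joint Poisson log-likelihood of an observed catalog under a forecast, the quantity on which both a consistency test and a pairwise model comparison can be built:

```python
import math

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Joint log-likelihood of observed earthquake counts under a binned
    rate forecast, assuming independent Poisson counts in each bin:
    log P(n | lam) = sum_i [n_i*log(lam_i) - lam_i - log(n_i!)]."""
    return sum(n * math.log(lam) - lam - math.lgamma(n + 1)
               for lam, n in zip(forecast_rates, observed_counts))

# Pairwise comparison on the same catalog: the model with the higher
# joint log-likelihood is more consistent with the observations.
model_a = [0.5, 1.2, 0.1, 2.0]   # forecast rates per bin (illustrative)
model_b = [1.0, 1.0, 1.0, 1.0]
observed = [1, 1, 0, 2]          # earthquakes observed in each bin
print(poisson_log_likelihood(model_a, observed) >
      poisson_log_likelihood(model_b, observed))  # True
```

A prospective test would fix the bins and rates before the catalog is observed, exactly as the experiment above requires.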
T. Huisman (T.); S. van den Eijnde (Stefan); P.A. Stewart (Patricia); J.W. Wladimiroff (Juriy)
1993-01-01
Breathing movements in the human fetus cause distinct changes in Doppler flow velocity measurements at arterial, venous and cardiac levels. In adults, breathing movements result in a momentary inspiratory collapse of the inferior vena cava vessel wall. The study objective was to quantify
DEFF Research Database (Denmark)
Brandt, Andreas Hjelm; Hansen, Kristoffer Lindskov; Ewertsen, Caroline
2018-01-01
Magnetic resonance phase contrast angiography (MRA) is the gold standard for blood flow evaluation. Spectral Doppler ultrasound (SDU) is the first clinical choice, although the method is angle dependent. Vector flow imaging (VFI) is an angle-independent ultrasound method. The aim of the study...
Oskouian, Rod J; Martin, Neil A; Lee, Jae Hong; Glenn, Thomas C; Guthrie, Donald; Gonzalez, Nestor R; Afari, Arash; Viñuela, Fernando
2002-07-01
The goal of this study was to quantify the effects of endovascular therapy on vasospastic cerebral vessels. We reviewed the medical records for 387 patients with ruptured intracranial aneurysms who were treated at a single institution (University of California, Los Angeles) between May 1, 1993, and March 31, 2001. Patients who developed cerebral vasospasm and underwent cerebral arteriographic, transcranial Doppler ultrasonographic, and cerebral blood flow (CBF) studies before and after endovascular therapy for cerebral arterial spasm (vasospasm) were included in this study. Forty-five patients fulfilled the aforementioned criteria and were treated with either papaverine infusion, papaverine infusion with angioplasty, or angioplasty alone. After balloon angioplasty (12 patients), CBF increased from 27.8 +/- 2.8 ml/100 g/min to 28.4 +/- 3.0 ml/100 g/min (P = 0.87); the middle cerebral artery blood flow velocity was 157.6 +/- 9.4 cm/s and decreased to 76.3 +/- 9.3 cm/s (P < 0.05), with a mean increase in cerebral artery diameters of 24.4%. Papaverine infusion (20 patients) transiently increased the CBF from 27.5 +/- 2.1 ml/100 g/min to 38.7 +/- 2.8 ml/100 g/min (P < 0.05) and decreased the middle cerebral artery blood flow velocity from 109.9 +/- 9.1 cm/s to 82.8 +/- 8.6 cm/s (P < 0.05). There was a mean increase in vessel diameters of 30.1% after papaverine infusion. Combined treatment (13 patients) significantly increased the CBF from 33.3 +/- 3.2 ml/100 g/min to 41.7 +/- 2.8 ml/100 g/min (P < 0.05) and decreased the transcranial Doppler velocities from 148.9 +/- 12.7 cm/s to 111.4 +/- 10.6 cm/s (P < 0.05), with a mean increase in vessel diameters of 42.2%. Balloon angioplasty increased proximal vessel diameters, whereas papaverine treatment effectively dilated distal cerebral vessels. In our small series, we observed no correlation between early clinical improvement or clinical outcomes and any of our quantitative or physiological data (CBF, transcranial Doppler
Vennin, Samuel; Mayer, Alexia; Li, Ye; Fok, Henry; Clapp, Brian; Alastruey, Jordi; Chowienczyk, Phil
2015-09-01
Estimation of aortic and left ventricular (LV) pressure usually requires measurements that are difficult to acquire during the imaging required to obtain concurrent LV dimensions essential for determination of LV mechanical properties. We describe a novel method for deriving aortic pressure from the aortic flow velocity. The target pressure waveform is divided into an early systolic upstroke, determined by the water hammer equation, and a diastolic decay equal to that in the peripheral arterial tree, interposed by a late systolic portion described by a second-order polynomial constrained by conditions of continuity and conservation of mean arterial pressure. Pulse wave velocity (PWV, which can be obtained through imaging), mean arterial pressure, diastolic pressure, and diastolic decay are required inputs for the algorithm. The algorithm was tested using 1) pressure data derived theoretically from prespecified flow waveforms and properties of the arterial tree using a single-tube 1-D model of the arterial tree, and 2) experimental data acquired from a pressure/Doppler flow velocity transducer placed in the ascending aorta in 18 patients (mean ± SD: age 63 ± 11 yr, aortic BP 136 ± 23/73 ± 13 mmHg) at the time of cardiac catheterization. For experimental data, PWV was calculated from measured pressures/flows, and mean and diastolic pressures and diastolic decay were taken from measured pressure (i.e., were assumed to be known). Pressure reconstructed from measured flow agreed well with theoretical pressure: mean ± SD root mean square (RMS) error 0.7 ± 0.1 mmHg. Similarly, for experimental data, pressure reconstructed from measured flow agreed well with measured pressure (mean RMS error 2.4 ± 1.0 mmHg). First systolic shoulder and systolic peak pressures were also accurately rendered (mean ± SD difference 1.4 ± 2.0 mmHg for peak systolic pressure). This is the first noninvasive derivation of aortic pressure based on fluid dynamics (flow and wave speed) in the
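The early systolic portion of the reconstruction described above rests on the water hammer equation, which relates the pressure rise to the flow velocity through the characteristic impedance rho·PWV. A minimal sketch, not the authors' algorithm: the blood density value and the sample numbers below are assumptions for illustration.

```python
RHO_BLOOD = 1060.0     # kg/m^3, typical blood density (assumed value)
PA_PER_MMHG = 133.322  # pascals per mmHg

def early_systolic_pressure(flow_velocity_m_s, pwv_m_s, diastolic_mmhg):
    """Water hammer sketch: during the early systolic upstroke
    dP = rho * c * dU, so P(t) = P_diastolic + rho * PWV * U(t).
    Velocities and PWV in m/s, pressures in mmHg."""
    return [diastolic_mmhg + RHO_BLOOD * pwv_m_s * u / PA_PER_MMHG
            for u in flow_velocity_m_s]

# Illustrative values: PWV 8 m/s, diastolic pressure 73 mmHg (close to
# the cohort mean reported above), velocities up to 0.6 m/s.
pressures = early_systolic_pressure([0.0, 0.3, 0.6], 8.0, 73.0)
print([round(p, 1) for p in pressures])  # [73.0, 92.1, 111.2]
```

The full method additionally needs the diastolic decay and the constrained late systolic polynomial to complete the waveform.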
Pflugradt, Maik; Geissdoerfer, Kai; Goernig, Matthias; Orglmeister, Reinhold
2017-01-14
Automatic detection of ectopic beats has become a thoroughly researched topic, with the literature providing manifold proposals, typically incorporating morphological analysis of the electrocardiogram (ECG). Although well understood, its utilization is often neglected, especially in practical monitoring situations like online evaluation of signals acquired in wearable sensors. Continuous blood pressure estimation based on pulse wave velocity considerations is a prominent example, which depends on careful fiducial point extraction and is therefore seriously affected during periods with an increased occurrence of extrasystoles. In the scope of this work, a novel ectopic beat discriminator with low computational complexity has been developed, which takes advantage of multimodal features derived from ECG and pulse wave related measurements, thereby providing additional information on the underlying cardiac activity. Moreover, the vulnerability of the blood pressure estimation towards ectopic beats is closely examined on records drawn from the Physionet database as well as signals recorded in a small field study conducted in a geriatric facility for the elderly. It turns out that reliable extrasystole identification is essential to unsupervised blood pressure estimation, having a significant impact on the overall accuracy. The proposed method is also well suited to battery-driven hardware systems with limited processing power and is a favorable choice when access to multimodal signal features is given anyway.
Directory of Open Access Journals (Sweden)
Ignacio Farro
2012-01-01
Full Text Available Carotid-femoral pulse wave velocity (PWV) has emerged as the gold standard for non-invasive evaluation of aortic stiffness; the absence of standardized study methodologies and the lack of normal and reference values have limited wider clinical implementation. This work was carried out in a Uruguayan (South American) population in order to characterize normal, reference, and threshold levels of PWV considering normal age-related changes in PWV and the prevailing blood pressure level during the study. A conservative approach was used, and we excluded symptomatic subjects; subjects with a history of cardiovascular (CV) disease, diabetes mellitus or renal failure; subjects with traditional CV risk factors (other than age and gender); asymptomatic subjects with atherosclerotic plaques in carotid arteries; and patients taking anti-hypertensives or lipid-lowering medications. The included subjects (n=429) were categorized according to age decade and blood pressure level (at study time). All subjects represented the “reference population”; the group of subjects with optimal/normal blood pressure levels at study time represented the “normal population.” Results. Normal and reference PWV levels were obtained, and differences in PWV levels and aging-associated changes were characterized. The obtained data could be used to define vascular aging and abnormal or disease-related arterial changes.
Hinohara, Hiroshi; Kadoi, Yuji; Takahashi, Kenichiro; Saito, Shigeru; Kawauchi, Chikara; Mizutani, Akio
2011-06-01
We observed an increase in mean middle cerebral artery blood flow velocity (V(mca)) after tourniquet deflation during orthopedic surgery under sevoflurane anesthesia in patients with diabetes mellitus or previous stroke. Eight controls, seven insulin-treated diabetic patients, and eight previous stroke patients were studied. Arterial blood pressure, heart rate, V(mca), arterial blood gases, and plasma lactate levels were measured every minute for 10 min after tourniquet release in all patients. V(mca) was measured using a transcranial Doppler probe. V(mca) in all three groups increased after tourniquet deflation, the increase lasting for 4 or 5 min. However, the degree of increase in V(mca) in the diabetic patients was smaller than that in the other two groups after tourniquet deflation (at 2 min after tourniquet deflation: control 58.5 ± 3.3, previous stroke 58.4 ± 4.6, diabetes 51.7 ± 2.3; P < 0.05 compared with the other two groups). In conclusion, the degree of increase in V(mca) in diabetic patients is smaller than that in controls and patients with previous stroke.
Likelihood devices in spatial statistics
Zwet, E.W. van
1999-01-01
One of the main themes of this thesis is the application to spatial data of modern semi- and nonparametric methods. Another, closely related theme is maximum likelihood estimation from spatial data. Maximum likelihood estimation is not common practice in spatial statistics. The method of moments
Schönwald, U G; Jorczyk, U; Kipfmüller, B
2011-01-01
Stents are commonly used for the treatment of occlusive artery diseases in carotid arteries. Today, there is a controversial discussion as to whether duplex sonography (DS) displays blood velocities (BV) that are too high in stented areas. The goal of this study was to evaluate the effect of stenting on DS with respect to BV in artificial carotid arteries. The results of computational fluid dynamics (CFD) were also used for the comparison. To analyze BV using DS, a phantom with a constant flow (70 cm/s) was created. Three different types of stents for carotid arteries were selected. The phantom fluid consisted of 67 % water and 33 % glycerol. All BV measurements were carried out on the last third of the stents. Furthermore, all test runs were simulated using CFD. All measurements were statistically analyzed. DS-derived BV values increased significantly after the placement of the Palmaz Genesis stent (77.6 ± 4.92 cm/sec, p = 0.03). A higher increase in BV values was registered when using the Precise RX stent (80.1 ± 2.01 cm/sec, p CFD simulations showed similar results. Stents have a significant impact on BV, but no effect on DS. The main factor of the blood flow acceleration is the material thickness of the stents. Therefore, different stents need different velocity criteria. Furthermore, the results of computational fluid dynamics prove that CFD can be used to simulate BV in stented silicone tubes. © Georg Thieme Verlag KG Stuttgart · New York.
Extended likelihood inference in reliability
International Nuclear Information System (INIS)
Martz, H.F. Jr.; Beckman, R.J.; Waller, R.A.
1978-10-01
Extended likelihood methods of inference are developed in which subjective information in the form of a prior distribution is combined with sampling results by means of an extended likelihood function. The extended likelihood function is standardized for use in obtaining extended likelihood intervals. Extended likelihood intervals are derived for the mean of a normal distribution with known variance, the failure-rate of an exponential distribution, and the parameter of a binomial distribution. Extended second-order likelihood methods are developed and used to solve several prediction problems associated with the exponential and binomial distributions. In particular, such quantities as the next failure-time, the number of failures in a given time period, and the time required to observe a given number of failures are predicted for the exponential model with a gamma prior distribution on the failure-rate. In addition, six types of life testing experiments are considered. For the binomial model with a beta prior distribution on the probability of nonsurvival, methods are obtained for predicting the number of nonsurvivors in a given sample size and for predicting the required sample size for observing a specified number of nonsurvivors. Examples illustrate each of the methods developed. Finally, comparisons are made with Bayesian intervals in those cases where these are known to exist.
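For the exponential model with a gamma prior on the failure rate, combining the prior with the sampling results has a simple closed form. A hedged sketch of that conjugate update and one of the predicted quantities (the mean time to the next failure); the numeric inputs are illustrative, not from the report:

```python
def gamma_exponential_update(a, b, n_failures, total_time):
    """Combine a gamma(a, b) prior on an exponential failure rate with
    sampling results (n_failures observed over total_time): the extended
    likelihood is again gamma-shaped with updated parameters."""
    return a + n_failures, b + total_time

def predictive_mean_next_failure_time(a, b):
    """Mean of the predictive distribution of the next failure time,
    E[T] = E[1/rate] = b / (a - 1); finite only for a > 1."""
    return b / (a - 1.0)

# Illustrative: gamma(2, 100) prior, then 3 failures in 500 time units.
a_post, b_post = gamma_exponential_update(2.0, 100.0, 3, 500.0)
print(a_post, b_post)                                     # 5.0 600.0
print(predictive_mean_next_failure_time(a_post, b_post))  # 150.0
```

The same updated parameters drive the other predictions mentioned above, such as the number of failures in a given time period.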
Obtaining reliable Likelihood Ratio tests from simulated likelihood functions
DEFF Research Database (Denmark)
Andersen, Laura Mørch
It is standard practice by researchers and the default option in many statistical programs to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). This paper shows that when the estimated likelihood functions depend on standard deviations of mixed param...
Zuhur, Sayid Shafi; Ozel, Alper; Velet, Selvinaz; Buğdacı, Mehmet Sait; Cil, Esra; Altuntas, Yüksel
2012-01-01
To determine the role of peak systolic velocity, end-diastolic velocity and resistance indices of both the right and left inferior thyroid arteries measured by color-flow Doppler ultrasonography for a differential diagnosis between gestational transient thyrotoxicosis and Graves' disease during pregnancy. The right and left inferior thyroid artery-peak systolic velocity, end-diastolic velocity and resistance indices of 96 patients with thyrotoxicosis (41 with gestational transient thyrotoxicosis, 31 age-matched pregnant patients with Graves' disease and 24 age- and sex-matched non-pregnant patients with Graves' disease) and 25 age and sex-matched healthy euthyroid subjects were assessed with color-flow Doppler ultrasonography. The right and left inferior thyroid artery-peak systolic and end-diastolic velocities in patients with gestational transient thyrotoxicosis were found to be significantly lower than those of pregnant patients with Graves' disease and higher than those of healthy euthyroid subjects. However, the right and left inferior thyroid artery peak systolic and end-diastolic velocities in pregnant patients with Graves' disease were significantly lower than those of non-pregnant patients with Graves' disease. The right and left inferior thyroid artery peak systolic and end-diastolic velocities were positively correlated with TSH-receptor antibody levels. We found an overlap between the inferior thyroid artery-blood flow velocities in a considerable number of patients with gestational transient thyrotoxicosis and pregnant patients with Graves' disease. This study suggests that the measurement of inferior thyroid artery-blood flow velocities with color-flow Doppler ultrasonography does not have sufficient sensitivity and specificity to be recommended as an initial diagnostic test for a differential diagnosis between gestational transient thyrotoxicosis and Graves' disease during pregnancy.
Miyase, Yuiko; Miura, Shin-Ichiro; Shiga, Yuhei; Yano, Masaya; Suematsu, Yasunori; Adachi, Sen; Norimatsu, Kenji; Nakamura, Ayumi; Saku, Keijiro
2016-01-01
A difference in systolic blood pressure (SBP) ≥10 mmHg between the arms is associated with an increased risk of coronary artery disease (CAD) and mortality in high-risk patients. Four hundred and fourteen patients were divided into three groups according to the percent most severe luminal narrowing of a coronary artery as diagnosed by coronary computed tomography angiography: no or mild coronary stenosis (0-49%), moderate stenosis (50-69%) and severe stenosis (≥70%) groups. The relative difference in SBP between arms in the severe group was significantly lower than those in the no or mild and moderate groups. The brachial-ankle pulse wave velocity (baPWV) significantly increased as the severity of coronary stenosis increased. We confirmed that severe coronary stenosis was independently associated with both the relative difference in SBP between arms and baPWV, in addition to age, gender, hypertension, dyslipidemia, diabetes mellitus and ankle-brachial index by a logistic regression analysis. The group with a relative difference in SBP between arms of difference in SBP between arms and baPWV may be a more effective approach for the non-invasive assessment of the severity of CAD.
Directory of Open Access Journals (Sweden)
Yang Wang
2016-11-01
Full Text Available Background/Aims: This study aimed to investigate the association of renalase with blood pressure (BP) and brachial-ankle pulse wave velocity (baPWV) in order to better understand the role of renalase in the pathogenesis of hypertension and atherosclerosis. Methods: A total of 344 subjects with normal kidney function were recruited from our previously established cohort in Shaanxi Province, China. They were divided into the normotensive (NT) and hypertensive (HT) groups or high baPWV and normal baPWV on the basis of BP levels or baPWV measured with an automatic waveform analyzer. Plasma renalase was determined through an enzyme-linked immunosorbent assay. Results: Plasma renalase did not significantly differ between the HT and NT groups (3.71 ± 0.69 µg/mL vs. 3.72 ± 0.73 μg/mL, P = 0.905) and between subjects with and without high baPWV (3.67 ± 0.66 µg/mL vs. 3.73 ± 0.74 µg/mL, P = 0.505). However, baPWV was significantly higher in the HT group than in the NT group (1460.4 ± 236.7 vs. 1240.7 ± 174.5 cm/s, P Conclusion: Plasma renalase may not be associated with BP and baPWV in Chinese subjects with normal renal function.
Wang, Yang; Lv, Yong-Bo; Chu, Chao; Wang, Man; Xie, Bing-Qing; Wang, Lan; Yang, Fan; Yan, Ding-Yi; Yang, Rui-Hai; Yang, Jun; Ren, Yong; Yuan, Zu-Yi; Mu, Jian-Jun
2016-01-01
This study aimed to investigate the association of renalase with blood pressure (BP) and brachial-ankle pulse wave velocity (baPWV) in order to better understand the role of renalase in the pathogenesis of hypertension and atherosclerosis. A total of 344 subjects with normal kidney function were recruited from our previously established cohort in Shaanxi Province, China. They were divided into the normotensive (NT) and hypertensive (HT) groups or high baPWV and normal baPWV on the basis of BP levels or baPWV measured with an automatic waveform analyzer. Plasma renalase was determined through an enzyme-linked immunosorbent assay. Plasma renalase did not significantly differ between HT and NT groups (3.71 ± 0.69 µg/mL vs. 3.72 ± 0.73 μg/mL, P = 0.905) and between subjects with and without high baPWV (3.67 ± 0.66 µg/mL vs. 3.73 ± 0.74 µg/mL, P = 0.505). However, baPWV was significantly higher in the HT group than in the NT group (1460.4 ± 236.7 vs. 1240.7 ± 174.5 cm/s, P function. © 2016 The Author(s) Published by S. Karger AG, Basel.
Directory of Open Access Journals (Sweden)
Andrzej F Frydrychowski
Full Text Available PURPOSE: The aim of this study was to assess the effect of acute bilateral jugular vein compression on: (1) pial artery pulsation (cc-TQ); (2) cerebral blood flow velocity (CBFV); (3) peripheral blood pressure; and (4) possible relations between the mentioned parameters. METHODS: Experiments were performed on a group of 32 healthy 19-30 years old male subjects. cc-TQ and the subarachnoid width (sas-TQ) were measured using near-infrared transillumination/backscattering sounding (NIR-T/BSS), CBFV in the left anterior cerebral artery using transcranial Doppler, blood pressure was measured using Finapres, while end-tidal CO(2) was measured using a medical gas analyser. Bilateral jugular vein compression was achieved with the use of a sphygmomanometer held on the neck of the participant and pumped at a pressure of 40 mmHg, and was performed in the bend-over (BOPT) and swayed to the back (initial) position. RESULTS: In the first group (n = 10) during BOPT, sas-TQ and pulse pressure (PP) decreased (-17.6% and -17.9%, respectively) and CBFV increased (+35.0%), while cc-TQ did not change (+1.91%). In the second group, in the initial position (n = 22), cc-TQ and CBFV increased (106.6% and 20.1%, respectively), while sas-TQ and PP decreases were not statistically significant (-15.5% and -9.0%, respectively). End-tidal CO(2) remained stable during BOPT and venous compression in both groups. Significant interdependence between changes in cc-TQ and PP after bilateral jugular vein compression in the initial position was found (r = -0.74). CONCLUSIONS: Acute bilateral jugular venous insufficiency leads to hyperkinetic cerebral circulation characterised by augmented pial artery pulsation and CBFV and direct transmission of PP into the brain microcirculation. The Windkessel effect with impaired jugular outflow and more likely increased intracranial pressure is described. This study clarifies the potential mechanism linking jugular outflow insufficiency with arterial small vessel cerebral
Ego involvement increases doping likelihood.
Ring, Christopher; Kavussanu, Maria
2018-08-01
Achievement goal theory provides a framework to help understand how individuals behave in achievement contexts, such as sport. Evidence concerning the role of motivation in the decision to use banned performance enhancing substances (i.e., doping) is equivocal on this issue. The extant literature shows that dispositional goal orientation has been weakly and inconsistently associated with doping intention and use. It is possible that goal involvement, which describes the situational motivational state, is a stronger determinant of doping intention. Accordingly, the current study used an experimental design to examine the effects of goal involvement, manipulated using direct instructions and reflective writing, on doping likelihood in hypothetical situations in college athletes. The ego-involving goal increased doping likelihood compared to no goal and a task-involving goal. The present findings provide the first evidence that ego involvement can sway the decision to use doping to improve athletic performance.
Chowdhury, Abeed H; Cox, Eleanor F; Francis, Susan T; Lobo, Dileep N
2012-07-01
We compared the effects of intravenous infusions of 0.9% saline ([Cl] 154 mmol/L) and Plasma-Lyte 148 ([Cl] 98 mmol/L, Baxter Healthcare) on renal blood flow velocity and perfusion in humans using magnetic resonance imaging (MRI). Animal experiments suggest that hyperchloremia resulting from 0.9% saline infusion may affect renal hemodynamics adversely, a phenomenon not studied in humans. Twelve healthy adult male subjects received 2-L intravenous infusions over 1 hour of 0.9% saline or Plasma-Lyte 148 in a randomized, double-blind manner. Crossover studies were performed 7 to 10 days apart. MRI scanning proceeded for 90 minutes after commencement of infusion to measure renal artery blood flow velocity and renal cortical perfusion. Blood was sampled and weight recorded hourly for 4 hours. Sustained hyperchloremia was seen with saline but not with Plasma-Lyte 148 (P Blood volume changes were identical (P = 0.867), but there was greater expansion of the extravascular fluid volume after saline (P = 0.029). There was a significant reduction in mean renal artery flow velocity (P = 0.045) and renal cortical tissue perfusion (P = 0.008) from baseline after saline, but not after Plasma-Lyte 148. There was no difference in concentrations of urinary neutrophil gelatinase-associated lipocalin after the 2 infusions (P = 0.917). This is the first human study to demonstrate that intravenous infusion of 0.9% saline results in reductions in renal blood flow velocity and renal cortical tissue perfusion. This has implications for intravenous fluid therapy in perioperative and critically ill patients. NCT01087853.
Likelihood estimators for multivariate extremes
Huser, Raphaë l; Davison, Anthony C.; Genton, Marc G.
2015-01-01
The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.
Maximum likelihood of phylogenetic networks.
Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir
2006-11-01
Horizontal gene transfer (HGT) is believed to be ubiquitous among bacteria, and plays a major role in their genome diversification as well as their ability to develop resistance to antibiotics. In light of its evolutionary significance and implications for human health, developing accurate and efficient methods for detecting and reconstructing HGT is imperative. In this article we provide a new HGT-oriented likelihood framework for many problems that involve phylogeny-based HGT detection and reconstruction. Beside the formulation of various likelihood criteria, we show that most of these problems are NP-hard, and offer heuristics for efficient and accurate reconstruction of HGT under these criteria. We implemented our heuristics and used them to analyze biological as well as synthetic data. In both cases, our criteria and heuristics exhibited very good performance with respect to identifying the correct number of HGT events as well as inferring their correct location on the species tree. Implementation of the criteria as well as heuristics and hardness proofs are available from the authors upon request. Hardness proofs can also be downloaded at http://www.cs.tau.ac.il/~tamirtul/MLNET/Supp-ML.pdf
Gnaneswara Reddy, M.
2017-09-01
This communication presents the transport of a third-order hydromagnetic fluid with thermal radiation by peristalsis through an irregular channel configuration filled with a porous medium under the low Reynolds number and large wavelength approximations. Joule heating, Hall current and homogeneous-heterogeneous reaction effects are considered in the energy and species equations. Second-order velocity and energy slip restrictions are invoked. The final dimensionless governing transport equations along with the boundary restrictions are resolved numerically with the help of NDSolve in the Mathematica package. The impact of the involved parameters on the non-dimensional axial velocity, fluid temperature and concentration characteristics has been analyzed via plots and tables. It is manifest that an increasing porosity parameter leads to maximum velocity in the core part of the channel. Fluid velocity is boosted near the walls of the channel, whereas the reverse effect occurs in the central part of the channel for higher values of first-order slip. Larger values of the thermal radiation parameter R reduce the fluid temperature field. Also, an increase in the heterogeneous reaction parameter Ks magnifies the concentration profile. The present study has a crucial application in thermal therapy in biomedical engineering.
van Ooij, Pim; Garcia, Julio; Potters, Wouter V.; Malaisrie, S. Chris; Collins, Jeremy D.; Carr, James C.; Markl, Michael; Barker, Alex J.
2016-01-01
To investigate age-related changes in peak systolic aortic 3D velocity and wall shear stress (WSS) in healthy controls and to investigate the importance of age-matching for 3D mapping of abnormal aortic hemodynamics in bicuspid aortic valve disease (BAV). 4D flow MRI (field strengths = 1.5-3T;
The Laplace Likelihood Ratio Test for Heteroscedasticity
Directory of Open Access Journals (Sweden)
J. Martin van Zyl
2011-01-01
Full Text Available It is shown that the likelihood ratio test for heteroscedasticity, assuming the Laplace distribution, gives good results for Gaussian and fat-tailed data. The likelihood ratio test, assuming normality, is very sensitive to any deviation from normality, especially when the observations are from a distribution with fat tails. Such a likelihood test can also be used as a robust test for a constant variance in residuals or a time series if the data is partitioned into groups.
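The statistic behind the test described above is simple to compute: fit a Laplace scale to each group of median-centred residuals, fit one pooled scale, and compare the maximized log-likelihoods. A minimal sketch under stated assumptions (the grouping of the data and the chi-square reference are illustrative, not taken from the abstract):

```python
import math

def laplace_scale(xs):
    """ML Laplace scale after centring at the sample median."""
    s = sorted(xs)
    n = len(s)
    med = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    return sum(abs(x - med) for x in xs) / n

def laplace_lr_statistic(groups):
    """2 * (loglik with per-group scales - loglik with one pooled scale).

    Each group is centred at its own median; under H0 the centred
    residuals share one Laplace scale.  The statistic is approximately
    chi-square with (k - 1) degrees of freedom for k groups under H0.
    """
    n_total = sum(len(g) for g in groups)
    scales = [laplace_scale(g) for g in groups]
    # Pooled scale: average absolute residual over all groups combined.
    pooled_abs = sum(b * len(g) for g, b in zip(groups, scales))
    b0 = pooled_abs / n_total
    return 2.0 * (n_total * math.log(b0)
                  - sum(len(g) * math.log(b)
                        for g, b in zip(groups, scales)))
```

A large statistic relative to the chi-square quantile indicates heteroscedasticity; identical groups give a statistic of exactly zero.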
MXLKID: a maximum likelihood parameter identifier
International Nuclear Information System (INIS)
Gavel, D.T.
1980-07-01
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
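The core idea — computing a likelihood function from noisy measurements of a dynamic system and maximizing it over the unknown parameters — can be sketched in a few lines. This is not MXLKID itself (which is written in LRLTRAN); it is a generic illustration with a hypothetical one-parameter decay model, known Gaussian measurement noise, and a plain grid search as the maximizer:

```python
import math

def log_likelihood(k, times, ys, sigma=0.05):
    """Gaussian log-likelihood of measurements y_i of the model exp(-k * t_i)."""
    return sum(-0.5 * ((y - math.exp(-k * t)) / sigma) ** 2
               - math.log(sigma * math.sqrt(2.0 * math.pi))
               for t, y in zip(times, ys))

def identify_decay_rate(times, ys, lo=0.0, hi=5.0, steps=5000):
    """Identify the unknown rate by maximizing the LF over a parameter grid."""
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return max(grid, key=lambda k: log_likelihood(k, times, ys))
```

A real identifier would use a gradient-based optimizer and handle vector-valued states, but the maximize-the-LF structure is the same.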
Validation of Transverse Oscillation Vector Velocity Estimation In-Vivo
DEFF Research Database (Denmark)
Hansen, Kristoffer Lindskov; Udesen, Jesper; Thomsen, Carsten
2007-01-01
Conventional Doppler methods for blood velocity estimation only estimate the velocity component along the ultrasound (US) beam direction. This implies that a Doppler angle under examination close to 90deg results in unreliable information about the true blood direction and blood velocity. The novel...... the presented angle independent 2-D vector velocity method. The results give reason to believe that the TO method can be a useful alternative to conventional Doppler systems bringing forth new information to the US examination of blood flow....
International Nuclear Information System (INIS)
Töger, Johannes; Carlsson, Marcus; Söderlind, Gustaf; Arheden, Håkan; Heiberg, Einar
2011-01-01
Functional and morphological changes of the heart influence blood flow patterns. Therefore, flow patterns may carry diagnostic and prognostic information. Three-dimensional, time-resolved, three-directional phase contrast cardiovascular magnetic resonance (4D PC-CMR) can image flow patterns with unique detail, and using new flow visualization methods may lead to new insights. The aim of this study is to present and validate a novel visualization method with a quantitative potential for blood flow from 4D PC-CMR, called Volume Tracking, and investigate if Volume Tracking complements particle tracing, the most common visualization method used today. Eight healthy volunteers and one patient with a large apical left ventricular aneurysm underwent 4D PC-CMR flow imaging of the whole heart. Volume Tracking and particle tracing visualizations were compared visually side-by-side in a visualization software package. To validate Volume Tracking, the number of particle traces that agreed with the Volume Tracking visualizations was counted and expressed as a percentage of total released particles in mid-diastole and end-diastole respectively. Two independent observers described blood flow patterns in the left ventricle using Volume Tracking visualizations. Volume Tracking was feasible in all eight healthy volunteers and in the patient. Visually, Volume Tracking and particle tracing are complementary methods, showing different aspects of the flow. When validated against particle tracing, on average 90.5% and 87.8% of the particles agreed with the Volume Tracking surface in mid-diastole and end-diastole respectively. Inflow patterns in the left ventricle varied between the subjects, with excellent agreement between observers. The left ventricular inflow pattern in the patient differed from the healthy subjects. Volume Tracking is a new visualization method for blood flow measured by 4D PC-CMR. Volume Tracking complements and provides incremental information compared to particle
Essays on empirical likelihood in economics
Gao, Z.
2012-01-01
This thesis intends to exploit the roots of empirical likelihood and its related methods in mathematical programming and computation. The roots will be connected and the connections will induce new solutions for the problems of estimation, computation, and generalization of empirical likelihood.
Directory of Open Access Journals (Sweden)
Castaño-Sánchez Carmen
2010-03-01
Full Text Available Abstract Background Diabetic patients show an increased prevalence of non-dipping arterial pressure pattern, target organ damage and elevated arterial stiffness. These alterations are associated with increased cardiovascular risk. The objectives of this study are the following: to evaluate the prognostic value of central arterial pressure and pulse wave velocity in relation to the incidence and outcome of target organ damage and the appearance of cardiovascular episodes (cardiovascular mortality, myocardial infarction, chest pain and stroke) in patients with type 2 diabetes mellitus or metabolic syndrome. Methods/Design Design: This is an observational prospective study with 5 years duration, of which the first year corresponds to patient inclusion and initial evaluation, and the remaining four years to follow-up. Setting: The study will be carried out in the urban primary care setting. Study population: Consecutive sampling will be used to include patients diagnosed with type 2 diabetes between 20-80 years of age. A total of 110 patients meeting all the inclusion criteria and none of the exclusion criteria will be included. Measurements: Patient age and sex, family and personal history of cardiovascular disease, and cardiovascular risk factors. Height, weight, heart rate and abdominal circumference. Laboratory tests: hemoglobin, lipid profile, creatinine, microalbuminuria, glomerular filtration rate, blood glucose, glycosylated hemoglobin, blood insulin, fibrinogen and high sensitivity C-reactive protein. Clinical and 24-hour ambulatory (home) blood pressure monitoring and self-measured blood pressure. Common carotid artery ultrasound for the determination of mean carotid intima-media thickness. Electrocardiogram for assessing left ventricular hypertrophy. Ankle-brachial index. Retinal vascular study based on funduscopy with non-mydriatic retinography and evaluation of pulse wave morphology and pulse wave velocity using the SphygmoCor system. The
Gómez-Marcos, Manuel A; Recio-Rodríguez, José I; Rodríguez-Sánchez, Emiliano; Castaño-Sánchez, Yolanda; de Cabo-Laso, Angela; Sánchez-Salgado, Benigna; Rodríguez-Martín, Carmela; Castaño-Sánchez, Carmen; Gómez-Sánchez, Leticia; García-Ortiz, Luis
2010-03-18
Diabetic patients show an increased prevalence of non-dipping arterial pressure pattern, target organ damage and elevated arterial stiffness. These alterations are associated with increased cardiovascular risk.The objectives of this study are the following: to evaluate the prognostic value of central arterial pressure and pulse wave velocity in relation to the incidence and outcome of target organ damage and the appearance of cardiovascular episodes (cardiovascular mortality, myocardial infarction, chest pain and stroke) in patients with type 2 diabetes mellitus or metabolic syndrome. This is an observational prospective study with 5 years duration, of which the first year corresponds to patient inclusion and initial evaluation, and the remaining four years to follow-up. The study will be carried out in the urban primary care setting. Consecutive sampling will be used to include patients diagnosed with type 2 diabetes between 20-80 years of age. A total of 110 patients meeting all the inclusion criteria and none of the exclusion criteria will be included. Patient age and sex, family and personal history of cardiovascular disease, and cardiovascular risk factors. Height, weight, heart rate and abdominal circumference. Laboratory tests: hemoglobin, lipid profile, creatinine, microalbuminuria, glomerular filtration rate, blood glucose, glycosylated hemoglobin, blood insulin, fibrinogen and high sensitivity C-reactive protein. Clinical and 24-hour ambulatory (home) blood pressure monitoring and self-measured blood pressure. Common carotid artery ultrasound for the determination of mean carotid intima-media thickness. Electrocardiogram for assessing left ventricular hypertrophy. Ankle-brachial index. Retinal vascular study based on funduscopy with non-mydriatic retinography and evaluation of pulse wave morphology and pulse wave velocity using the SphygmoCor system. The medication used for diabetes, arterial hypertension and hyperlipidemia will be registered, together
Directory of Open Access Journals (Sweden)
Arheden Håkan
2011-04-01
Full Text Available Abstract Background Functional and morphological changes of the heart influence blood flow patterns. Therefore, flow patterns may carry diagnostic and prognostic information. Three-dimensional, time-resolved, three-directional phase contrast cardiovascular magnetic resonance (4D PC-CMR can image flow patterns with unique detail, and using new flow visualization methods may lead to new insights. The aim of this study is to present and validate a novel visualization method with a quantitative potential for blood flow from 4D PC-CMR, called Volume Tracking, and investigate if Volume Tracking complements particle tracing, the most common visualization method used today. Methods Eight healthy volunteers and one patient with a large apical left ventricular aneurysm underwent 4D PC-CMR flow imaging of the whole heart. Volume Tracking and particle tracing visualizations were compared visually side-by-side in a visualization software package. To validate Volume Tracking, the number of particle traces that agreed with the Volume Tracking visualizations was counted and expressed as a percentage of total released particles in mid-diastole and end-diastole respectively. Two independent observers described blood flow patterns in the left ventricle using Volume Tracking visualizations. Results Volume Tracking was feasible in all eight healthy volunteers and in the patient. Visually, Volume Tracking and particle tracing are complementary methods, showing different aspects of the flow. When validated against particle tracing, on average 90.5% and 87.8% of the particles agreed with the Volume Tracking surface in mid-diastole and end-diastole respectively. Inflow patterns in the left ventricle varied between the subjects, with excellent agreement between observers. The left ventricular inflow pattern in the patient differed from the healthy subjects. Conclusion Volume Tracking is a new visualization method for blood flow measured by 4D PC-CMR. Volume Tracking
Chowdhury, Abeed H; Cox, Eleanor F; Francis, Susan T; Lobo, Dileep N
2014-05-01
We compared the effects of intravenous administration of 6% hydroxyethyl starch (maize-derived) in 0.9% saline (Voluven; Fresenius Kabi, Runcorn, United Kingdom) and a "balanced" preparation of 6% hydroxyethyl starch (potato-derived) [Plasma Volume Redibag (PVR); Baxter Healthcare, Thetford, United Kingdom] on renal blood flow velocity and renal cortical tissue perfusion in humans using magnetic resonance imaging. Hyperchloremia resulting from 0.9% saline infusion may adversely affect renal hemodynamics when compared with balanced crystalloids. This phenomenon has not been studied with colloids. Twelve healthy adult male subjects received 1-L intravenous infusions of Voluven or PVR over 30 minutes in a randomized, double-blind manner, with crossover studies 7 to 10 days later. Magnetic resonance imaging proceeded for 60 minutes after commencement of infusion to measure renal artery blood flow velocity and renal cortical perfusion. Blood was sampled, and weight was recorded at 0, 30, 60, 120, 180, and 240 minutes. Mean peak serum chloride concentrations were 108 and 106 mmol/L, respectively, after Voluven and PVR infusion (P = 0.032). Changes in blood volume (P = 0.867), strong ion difference (P = 0.219), and mean renal artery flow velocity (P = 0.319) were similar. However, there was a significant increase in mean renal cortical tissue perfusion after PVR when compared with Voluven (P = 0.033). There was no difference in urinary neutrophil gelatinase-associated lipocalin to creatinine ratios after the infusion (P = 0.164). There was no difference in the blood volume-expanding properties of the 2 preparations of 6% hydroxyethyl starch. The balanced starch produced an increase in renal cortical tissue perfusion, a phenomenon not seen with starch in 0.9% saline.
Asymptotic Likelihood Distribution for Correlated & Constrained Systems
Agarwal, Ujjwal
2016-01-01
This report describes my work as a summer student at CERN. It discusses the asymptotic distribution of the likelihood ratio when the total number of parameters is h and 2 of these are constrained and correlated.
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation (CPM) over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which depends on N.
Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.
Rukhin, Andrew L
2011-01-01
A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.
Demonstration of a Vector Velocity Technique
DEFF Research Database (Denmark)
Hansen, Peter Møller; Pedersen, Mads M.; Hansen, Kristoffer L.
2011-01-01
With conventional Doppler ultrasound it is not possible to estimate direction and velocity of blood flow, when the angle of insonation exceeds 60–70°. Transverse oscillation is an angle independent vector velocity technique which is now implemented on a conventional ultrasound scanner. In this pa......With conventional Doppler ultrasound it is not possible to estimate direction and velocity of blood flow, when the angle of insonation exceeds 60–70°. Transverse oscillation is an angle independent vector velocity technique which is now implemented on a conventional ultrasound scanner...
Directory of Open Access Journals (Sweden)
Oswaldo Luiz Pizzi
2013-01-01
BACKGROUND: Data on noninvasive vascular assessment and their association with cardiovascular risk variables are scarce in young individuals. OBJECTIVE: To evaluate the association between pulse wave velocity and blood pressure, anthropometric and metabolic variables, including adipocytokines, in young adults. METHODS: A total of 96 individuals aged 26 to 35 years (mean 30.09 ± 1.92; 51 males) were assessed in the Rio de Janeiro study. Pulse wave velocity (Complior method), blood pressure, body mass index, glucose, lipid profile, leptin, insulin, adiponectin and insulin resistance index (HOMA-IR) were analyzed. Subjects were stratified into three groups according to the PWV tertile for each gender. RESULTS: The group with the highest pulse wave velocity (PWV) tertile showed higher mean systolic and diastolic blood pressure, mean blood pressure, body mass index, insulin, and HOMA-IR, as well as lower mean adiponectin; higher prevalence of diabetes mellitus/glucose intolerance and hyperinsulinemia. There was a significant positive correlation of PWV with systolic blood pressure, diastolic blood pressure, pulse pressure and mean blood pressure, body mass index, and LDL-cholesterol, and a negative correlation with HDL-cholesterol and adiponectin. In the multiple regression model, after adjustment of HDL-cholesterol, LDL-cholesterol and adiponectin for gender, age, body mass index and mean blood pressure, only the male gender and mean blood pressure remained significantly correlated with PWV. CONCLUSION: PWV in young adults showed a significant association with cardiovascular risk variables, especially in the male gender, and mean blood pressure as important determinant variables. The findings suggest that PWV measurement can be useful for the identification of vascular impairment in this age group.
Likelihood inference for unions of interacting discs
DEFF Research Database (Denmark)
Møller, Jesper; Helisová, Katarina
To the best of our knowledge, this is the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process modelled...... is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results...... of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models....
Maximum likelihood estimation for integrated diffusion processes
DEFF Research Database (Denmark)
Baltazar-Larios, Fernando; Sørensen, Michael
We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated...... EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...... by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated...
A classical model explaining the OPERA velocity paradox
Broda, Boguslaw
2011-01-01
In the context of the paradoxical results of the OPERA Collaboration, we have proposed a classical mechanics model yielding a statistically measured beam velocity higher than the velocity of the particles constituting the beam. The ingredients of our model necessary to obtain this curious result are a non-constant fraction function and the method of maximum-likelihood estimation.
Maintaining symmetry of simulated likelihood functions
DEFF Research Database (Denmark)
Andersen, Laura Mørch
This paper suggests solutions to two different types of simulation errors related to Quasi-Monte Carlo integration. Likelihood functions which depend on standard deviations of mixed parameters are symmetric in nature. This paper shows that antithetic draws preserve this symmetry and thereby...... improves precision substantially. Another source of error is that models testing away mixing dimensions must replicate the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood. These simulation errors are ignored in the standard estimation procedures used today...
Likelihood inference for unions of interacting discs
DEFF Research Database (Denmark)
Møller, Jesper; Helisova, K.
2010-01-01
This is probably the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point...... process, where the germs are the centres and the marks are the associated radii of the discs. We propose to use a recent parametric class of interacting disc process models, where the minimal sufficient statistic depends on various geometric properties of the random set, and the density is specified......-based maximum likelihood inference and the effect of specifying different reference Poisson models....
Composite likelihood estimation of demographic parameters
Directory of Open Access Journals (Sweden)
Garrigan Daniel
2009-11-01
Full Text Available Abstract Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic, or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequence data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable
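The composite-likelihood idea — taking the product of marginal likelihoods over genomic regions as if they were independent — can be illustrated with hypothetical binomial counts of derived alleles per region. The data and the grid maximizer below are illustrative assumptions only, not the paper's MCMCMC machinery:

```python
import math

def log_binom(n, k, p):
    """Binomial log-likelihood: log C(n, k) + k log p + (n - k) log(1 - p)."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1.0 - p))

def composite_loglik(p, regions):
    """Sum of marginal binomial log-likelihoods, one term per region.

    Treating the regions as independent is exactly what makes this a
    composite (pseudo-) likelihood rather than the full joint likelihood.
    """
    return sum(log_binom(n, k, p) for n, k in regions)

def composite_mle(regions, steps=999):
    """Maximize the composite likelihood over a grid of frequencies in (0, 1)."""
    grid = [(i + 1) / (steps + 1) for i in range(steps)]
    return max(grid, key=lambda p: composite_loglik(p, regions))
```

For binomial marginals the maximizer is simply the pooled frequency (total derived count over total sample size), which makes the sketch easy to check.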
Efficient Bit-to-Symbol Likelihood Mappings
Moision, Bruce E.; Nakashima, Michael A.
2010-01-01
This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8-percent reduction in overall area relative to the prior design.
Likelihood-ratio-based biometric verification
Bazen, A.M.; Veldhuis, Raymond N.J.
2002-01-01
This paper presents results on optimal similarity measures for biometric verification based on fixed-length feature vectors. First, we show that the verification of a single user is equivalent to the detection problem, which implies that for single-user verification the likelihood ratio is optimal.
Likelihood Ratio-Based Biometric Verification
Bazen, A.M.; Veldhuis, Raymond N.J.
The paper presents results on optimal similarity measures for biometric verification based on fixed-length feature vectors. First, we show that the verification of a single user is equivalent to the detection problem, which implies that, for single-user verification, the likelihood ratio is optimal.
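The likelihood-ratio rule for single-user verification can be sketched for fixed-length feature vectors. The diagonal-Gaussian models of the genuine user and the impostor population below are an illustrative assumption, not necessarily the densities used in the paper:

```python
import math

def diag_gauss_logpdf(x, mean, var):
    """Log-density of a diagonal-covariance Gaussian at feature vector x."""
    return sum(-0.5 * math.log(2.0 * math.pi * v) - (xi - m) ** 2 / (2.0 * v)
               for xi, m, v in zip(x, mean, var))

def log_likelihood_ratio(x, genuine, impostor):
    """log p(x | genuine user) - log p(x | impostor population).

    Accept the claimed identity when this exceeds a threshold chosen for
    the desired false-accept / false-reject trade-off; 0 corresponds to
    a likelihood ratio of 1.
    """
    g_mean, g_var = genuine
    i_mean, i_var = impostor
    return (diag_gauss_logpdf(x, g_mean, g_var)
            - diag_gauss_logpdf(x, i_mean, i_var))
```

A probe close to the enrolled template yields a positive log-ratio; a probe drawn from the background population yields a negative one.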
Phylogenetic analysis using parsimony and likelihood methods.
Yang, Z
1996-02-01
The assumptions underlying the maximum-parsimony (MP) method of phylogenetic tree reconstruction were intuitively examined by studying the way the method works. Computer simulations were performed to corroborate the intuitive examination. Parsimony appears to involve very stringent assumptions concerning the process of sequence evolution, such as constancy of substitution rates between nucleotides, constancy of rates across nucleotide sites, and equal branch lengths in the tree. For practical data analysis, the requirement of equal branch lengths means similar substitution rates among lineages (the existence of an approximate molecular clock), relatively long interior branches, and also few species in the data. However, a small amount of evolution is neither a necessary nor a sufficient requirement of the method. The difficulties involved in the application of current statistical estimation theory to tree reconstruction were discussed, and it was suggested that the approach proposed by Felsenstein (1981, J. Mol. Evol. 17: 368-376) for topology estimation, as well as its many variations and extensions, differs fundamentally from the maximum likelihood estimation of a conventional statistical parameter. Evidence was presented showing that the Felsenstein approach does not share the asymptotic efficiency of the maximum likelihood estimator of a statistical parameter. Computer simulations were performed to study the probability that MP recovers the true tree under a hierarchy of models of nucleotide substitution; its performance relative to the likelihood method was especially noted. The results appeared to support the intuitive examination of the assumptions underlying MP. When a simple model of nucleotide substitution was assumed to generate data, the probability that MP recovers the true topology could be as high as, or even higher than, that for the likelihood method. When the assumed model became more complex and realistic, e.g., when substitution rates were
Quantification of aortic regurgitation by magnetic resonance velocity mapping
DEFF Research Database (Denmark)
Søndergaard, Lise; Lindvig, K; Hildebrandt, P
1993-01-01
The use of magnetic resonance (MR) velocity mapping in the quantification of aortic valvular blood flow was examined in 10 patients with angiographically verified aortic regurgitation. MR velocity mapping succeeded in identifying and quantifying the regurgitation in all patients, and the regurgit......The use of magnetic resonance (MR) velocity mapping in the quantification of aortic valvular blood flow was examined in 10 patients with angiographically verified aortic regurgitation. MR velocity mapping succeeded in identifying and quantifying the regurgitation in all patients...
Factors Associated with Young Adults’ Pregnancy Likelihood
Kitsantas, Panagiota; Lindley, Lisa L.; Wu, Huichuan
2014-01-01
OBJECTIVES While progress has been made to reduce adolescent pregnancies in the United States, rates of unplanned pregnancy among young adults (18–29 years) remain high. In this study, we assessed factors associated with perceived likelihood of pregnancy (likelihood of getting pregnant/getting partner pregnant in the next year) among sexually experienced young adults who were not trying to get pregnant and had ever used contraceptives. METHODS We conducted a secondary analysis of 660 young adults, 18–29 years old in the United States, from the cross-sectional National Survey of Reproductive and Contraceptive Knowledge. Logistic regression and classification tree analyses were conducted to generate profiles of young adults most likely to report anticipating a pregnancy in the next year. RESULTS Nearly one-third (32%) of young adults indicated they believed they had at least some likelihood of becoming pregnant in the next year. Young adults who believed that avoiding pregnancy was not very important were most likely to report pregnancy likelihood (odds ratio [OR], 5.21; 95% CI, 2.80–9.69), as were young adults for whom avoiding a pregnancy was important but not satisfied with their current contraceptive method (OR, 3.93; 95% CI, 1.67–9.24), attended religious services frequently (OR, 3.0; 95% CI, 1.52–5.94), were uninsured (OR, 2.63; 95% CI, 1.31–5.26), and were likely to have unprotected sex in the next three months (OR, 1.77; 95% CI, 1.04–3.01). DISCUSSION These results may help guide future research and the development of pregnancy prevention interventions targeting sexually experienced young adults. PMID:25782849
Review of Elaboration Likelihood Model of persuasion
藤原, 武弘; 神山, 貴弥
1989-01-01
This article mainly introduces the Elaboration Likelihood Model (ELM), proposed by Petty & Cacioppo, that is, a general attitude change theory. ELM postulates two routes to persuasion: central and peripheral. Attitude change by the central route is viewed as resulting from a diligent consideration of the issue-relevant information presented. On the other hand, attitude change by the peripheral route is viewed as resulting from peripheral cues in the persuasion context. Secondly we compare these tw...
Unbinned likelihood analysis of EGRET observations
International Nuclear Information System (INIS)
Digel, Seth W.
2000-01-01
We present a newly developed likelihood analysis method for EGRET data that defines the likelihood function without binning the photon data or averaging the instrumental response functions. The standard likelihood analysis applied to EGRET data requires the photons to be binned spatially and in energy, and the point-spread functions to be averaged over energy and inclination angle. The full width at half maximum of the point-spread function increases by about 40% from on-axis to 30° inclination, and depending on the binning in energy can vary by more than that within a single energy bin. The new unbinned method avoids the loss of information that binning and averaging cause, and can properly analyze regions where EGRET viewing periods overlap and photons with different inclination angles would otherwise be combined in the same bin. In the poster, we describe the unbinned analysis method and compare its sensitivity with binned analysis for detecting point sources in EGRET data.
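The information an unbinned analysis retains can be illustrated with a toy maximum-likelihood fit in which every event enters the likelihood individually. This is only a hedged sketch, not the EGRET method: the exponential photon model, the true rate, and all numbers are invented for illustration.

```python
import math
import random

def unbinned_nll(data, lam):
    # Unbinned negative log-likelihood for an exponential event model
    # f(x) = lam * exp(-lam * x): each event contributes its own density
    # value, with no histogram binning and no averaging of the response.
    return -sum(math.log(lam) - lam * x for x in data)

random.seed(0)
data = [random.expovariate(2.0) for _ in range(2000)]   # simulated events, true rate 2.0

# Simple grid search over candidate rates
grid = [0.5 + 0.01 * k for k in range(300)]             # 0.50 .. 3.49
best = min(grid, key=lambda lam: unbinned_nll(data, lam))
print(best)  # should land near the true rate 2.0
```

A binned fit would replace the per-event sum with Poisson terms over histogram counts, discarding the within-bin positions that the unbinned sum keeps.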
Dimension-Independent Likelihood-Informed MCMC
Cui, Tiangang; Law, Kody; Marzouk, Youssef
2015-01-01
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters, which in principle can be described as functions. By exploiting low-dimensional structure in the change from prior to posterior [distributions], we introduce a suite of MCMC samplers that can adapt to the complex structure of the posterior distribution, yet are well-defined on function space. Posterior sampling in nonlinear inverse problems arising from various partial differential equations and also a stochastic differential equation are used to demonstrate the efficiency of these dimension-independent likelihood-informed samplers.
Multi-Channel Maximum Likelihood Pitch Estimation
DEFF Research Database (Denmark)
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
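The joint-likelihood idea in the abstract can be sketched for the simplest case of a single sinusoid per channel: each channel is fit separately (its own amplitude, phase, and noise level), and the candidate fundamental frequency minimizing the summed per-channel ML costs is chosen. This is a hedged illustration under a one-harmonic signal model, not the paper's full estimator; all signal parameters below are invented.

```python
import math
import random

def channel_cost(x, f0, fs):
    # Least-squares fit of one sinusoid at f0 to one channel; under Gaussian
    # noise with unknown variance, the ML cost is (N/2) * log(residual power).
    n = len(x)
    c = [math.cos(2 * math.pi * f0 * t / fs) for t in range(n)]
    s = [math.sin(2 * math.pi * f0 * t / fs) for t in range(n)]
    cc = sum(v * v for v in c); ss = sum(v * v for v in s)
    cs = sum(u * v for u, v in zip(c, s))
    xc = sum(u * v for u, v in zip(x, c)); xs = sum(u * v for u, v in zip(x, s))
    det = cc * ss - cs * cs
    a = (xc * ss - xs * cs) / det           # cosine amplitude
    b = (xs * cc - xc * cs) / det           # sine amplitude
    resid = sum((xi - a * ci - b * si) ** 2 for xi, ci, si in zip(x, c, s))
    return 0.5 * n * math.log(resid / n + 1e-12)

def joint_pitch(channels, fs, grid):
    # Channels share f0 but keep their own amplitude, phase, and noise level,
    # so the joint ML cost is simply the sum of per-channel costs.
    return min(grid, key=lambda f0: sum(channel_cost(x, f0, fs) for x in channels))

random.seed(0)
fs, f_true, n = 8000.0, 200.0, 400
mk = lambda amp, ph, sigma: [amp * math.cos(2 * math.pi * f_true * t / fs + ph)
                             + random.gauss(0, sigma) for t in range(n)]
channels = [mk(1.0, 0.3, 0.05), mk(0.4, 1.1, 0.2)]   # different SNR per channel
grid = [150.0 + k for k in range(101)]                # candidates 150..250 Hz
est = joint_pitch(channels, fs, grid)
print(est)  # should be at or very near 200 Hz
```

Summing costs rather than signals is what accommodates channel-specific conditions such as different signal-to-noise ratios.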
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
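On a fixed tree, the small-parsimony subproblem underlying MP can be scored exactly; a minimal sketch using Fitch's algorithm for one character follows (the tree topology and leaf states are invented, and MP proper would additionally search over topologies):

```python
def fitch_score(tree, leaf_states):
    # Parsimony score of one character on a fixed binary tree (Fitch's
    # algorithm): a union at an internal node costs one substitution,
    # an intersection costs nothing.
    def post(node):
        if isinstance(node, str):                 # leaf: singleton state set
            return {leaf_states[node]}, 0
        l_set, l_cost = post(node[0])
        r_set, r_cost = post(node[1])
        inter = l_set & r_set
        if inter:
            return inter, l_cost + r_cost
        return l_set | r_set, l_cost + r_cost + 1

    return post(tree)[1]

tree = (("a", "b"), ("c", "d"))
print(fitch_score(tree, {"a": "A", "b": "A", "c": "G", "d": "G"}))  # 1
```

The Steiner-tree view in the paper generalizes exactly this: internal (ancestral) labels are the Steiner points, and the score is the total Hamming distance along edges.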
A Predictive Likelihood Approach to Bayesian Averaging
Directory of Open Access Journals (Sweden)
Tomáš Jeřábek
2015-01-01
Full Text Available Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is evaluated on historical data covering the domestic economy and the foreign economy, which is represented by countries of the Eurozone. Because the forecast accuracies of the models differ, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix, and model ranks are used to combine the models. The equal-weight scheme is used as a simple combination scheme. The results show that optimally combined densities are comparable to the best individual models.
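The predictive-likelihood weighting scheme reduces to normalizing exponentiated log predictive scores. A minimal sketch, with hypothetical log scores for the four model types named in the abstract (the numbers are invented):

```python
import math

def predictive_weights(log_scores):
    # Weights proportional to exp(log predictive likelihood), normalized to
    # sum to one; subtracting the max is the usual log-sum-exp stabilization.
    m = max(log_scores)
    w = [math.exp(s - m) for s in log_scores]
    total = sum(w)
    return [wi / total for wi in w]

# Hypothetical out-of-sample log predictive scores for
# (BVAR-1, BVAR-2, DSGE, DSGE-VAR); invented for illustration.
scores = [-102.3, -101.1, -105.9, -100.8]
w = predictive_weights(scores)
print([round(x, 3) for x in w])  # the best-scoring model receives the largest weight
```

The combined density forecast is then the weight-averaged mixture of the model densities; the equal-weight benchmark simply replaces `w` with `[1/len(scores)] * len(scores)`.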
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L
2016-08-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
Subtracting and Fitting Histograms using Profile Likelihood
D'Almeida, F M L
2008-01-01
It is known that many interesting signals expected at the LHC are of unknown shape and strongly contaminated by background events. These signals will be difficult to detect during the first years of LHC operation due to the initial low luminosity. In this work, one presents a method of subtracting histograms based on the profile likelihood function when the background is previously estimated by Monte Carlo events and one has low statistics. Estimators for the signal in each bin of the histogram difference are calculated, as well as limits for the signals at the 68.3% confidence level, in a low-statistics case with an exponential background and a Gaussian signal. The method can also be used to fit histograms when the signal shape is known. Our results show a good performance and avoid the problem of negative values when subtracting histograms. 
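The per-bin machinery can be sketched with the common two-Poisson model: the data bin has n ~ Poisson(s + b) and an auxiliary Monte Carlo bin has m ~ Poisson(τb), where τ is the MC-to-data luminosity ratio. This is a hedged sketch of that standard setup, not the paper's exact implementation; the counts are invented.

```python
import math

def log_lik(n, m, tau, s, b):
    # Joint Poisson log-likelihood (additive constants dropped):
    # data bin n ~ Pois(s + b), MC bin m ~ Pois(tau * b)
    return n * math.log(s + b) - (s + b) + m * math.log(tau * b) - tau * b

def profile_b(n, m, tau, s):
    # Conditional MLE of the background b for fixed signal s:
    # positive root of the quadratic from d(log_lik)/db = 0.
    g = n + m - (1 + tau) * s
    return (g + math.sqrt(g * g + 4 * (1 + tau) * m * s)) / (2 * (1 + tau))

n, m, tau = 25, 60, 4.0            # observed count, MC count, MC/data ratio
s_hat = n - m / tau                # unconstrained MLE of the signal
ll_max = log_lik(n, m, tau, s_hat, m / tau)

# 68.3% CL interval: all s with -2 * (profile log-likelihood ratio) <= 1
grid = [0.1 * k for k in range(1, 400)]
interval = [s for s in grid
            if -2 * (log_lik(n, m, tau, s, profile_b(n, m, tau, s)) - ll_max) <= 1.0]
print(s_hat, round(min(interval), 1), round(max(interval), 1))
```

Because the estimate comes from a likelihood in non-negative rates rather than from a raw bin-by-bin subtraction, the negative-content problem the abstract mentions does not arise.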
A maximum likelihood framework for protein design
Directory of Open Access Journals (Sweden)
Philippe Hervé
2006-06-01
Full Text Available Abstract Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces
Modelling maximum likelihood estimation of availability
International Nuclear Information System (INIS)
Waller, R.A.; Tietjen, G.L.; Rock, G.W.
1975-01-01
Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failure and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables, X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter, lambda, and the time-to-repair model for Y is an exponential density with parameter, theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t)=lambda/(lambda+theta)+theta/(lambda+theta)exp[-[(1/lambda)+(1/theta)]t] with t>0. Also, the steady-state availability is A(infinity)=lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X_1, X_2, ..., X_n, Y_1, Y_2, ..., Y_n, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
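Since lambda and theta here parameterize the exponential means, their MLEs are the sample means of the uptimes and repair times, and plugging them into A(t) gives the availability estimator. A hedged sketch with simulated failure-repair cycles (the rates and cycle count are invented):

```python
import math
import random

def availability(t, lam, theta):
    # A(t) from the paper, with lam = mean time-to-failure and
    # theta = mean time-to-repair, both exponentially distributed.
    k = lam / (lam + theta)
    return k + (theta / (lam + theta)) * math.exp(-((1 / lam) + (1 / theta)) * t)

random.seed(1)
# Observations from n failure-repair cycles; for exponential models the
# MLEs of the mean parameters are simply the sample means.
X = [random.expovariate(1 / 100.0) for _ in range(200)]  # uptimes, true mean 100 h
Y = [random.expovariate(1 / 5.0) for _ in range(200)]    # repairs, true mean 5 h
lam_hat, theta_hat = sum(X) / len(X), sum(Y) / len(Y)

print(availability(0.0, lam_hat, theta_hat))             # 1.0: plant starts operational
print(availability(float("inf"), lam_hat, theta_hat))    # steady state lam/(lam+theta)
```

With these parameters the steady-state estimate should sit near 100/105 ≈ 0.952, the true long-run availability.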
Nerve conduction velocity
medlineplus.gov/ency/article/003927.htm
Nerve conduction velocity (NCV) is a test to see ...
International Nuclear Information System (INIS)
Beyer, R.T.
1985-01-01
The paper reviews the work carried out on the velocity of sound in liquid alkali metals. The experimental methods to determine the velocity measurements are described. Tables are presented of reported data on the velocity of sound in lithium, sodium, potassium, rubidium and caesium. A formula is given for alkali metals, in which the sound velocity is a function of shear viscosity, atomic mass and atomic volume. (U.K.)
The Velocity Distribution of Isolated Radio Pulsars
Arzoumanian, Z.; Chernoff, D. F.; Cordes, J. M.; White, Nicholas E. (Technical Monitor)
2002-01-01
We infer the velocity distribution of radio pulsars based on large-scale 0.4 GHz pulsar surveys. We do so by modelling evolution of the locations, velocities, spins, and radio luminosities of pulsars; calculating pulsed flux according to a beaming model and random orientation angles of spin and beam; applying selection effects of pulsar surveys; and comparing model distributions of measurable pulsar properties with survey data using a likelihood function. The surveys analyzed have well-defined characteristics and cover approx. 95% of the sky. We maximize the likelihood in a 6-dimensional space of observables P, Pdot, DM, |b|, mu, F (period, period derivative, dispersion measure, Galactic latitude, proper motion, and flux density). The models we test are described by 12 parameters that characterize a population's birth rate, luminosity, shutoff of radio emission, birth locations, and birth velocities. We infer that the radio beam luminosity (i) is comparable to the energy flux of relativistic particles in models for spin-driven magnetospheres, signifying that radio emission losses reach nearly 100% for the oldest pulsars; and (ii) scales approximately as Edot^(1/2), which, in magnetosphere models, is proportional to the voltage drop available for acceleration of particles. We find that a two-component velocity distribution with characteristic velocities of 90 km/s and 500 km/s is greatly preferred to any one-component distribution; this preference is largely immune to variations in other population parameters, such as the luminosity or distance scale, or the assumed spin-down law. We explore some consequences of the preferred birth velocity distribution: (1) roughly 50% of pulsars in the solar neighborhood will escape the Galaxy, while approx. 15% have velocities greater than 1000 km/s; (2) observational bias against high-velocity pulsars is relatively unimportant for surveys that reach high Galactic |z| distances, but is severe for
Likelihood analysis of the minimal AMSB model
Energy Technology Data Exchange (ETDEWEB)
Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)
2017-04-15
We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, χ{sup 0}{sub 1}, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces m{sub χ{sup 0}{sub 1}}
Likelihood Analysis of Supersymmetric SU(5) GUTs
Bagnaschi, E.
2017-01-01
We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\\mathbf{5}$ and $\\mathbf{\\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\\tan \\beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringi...
Reducing the likelihood of long tennis matches.
Barnett, Tristan; Alan, Brown; Pollard, Graham
2006-01-01
Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative aftereffects from such a match, plays the winner of an average or shorter length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on the slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match. Key Points: The cumulant generating function has nice properties for calculating the parameters of distributions in a tennis match. A final tiebreaker set reduces the length of matches, as currently used in the US Open. A new 50-40 game reduces the length of matches whilst maintaining comparable probabilities for the better player to win the match.
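Analyses of this kind build up match-length and match-win probabilities from the point level. As a hedged illustration of that point-to-game step (for the standard deuce game, not the paper's 50-40 variant), the server's game-win probability has a closed form obtained by summing the geometric deuce series:

```python
def game_win_prob(p):
    # Probability the server wins a standard deuce game, given independent
    # point-win probability p: wins to 0/15/30, plus reaching deuce and
    # winning the (geometric) deuce battle.
    q = 1 - p
    win_before_deuce = p**4 * (1 + 4 * q + 10 * q * q)
    reach_deuce = 20 * p**3 * q**3           # C(6,3) orderings of 3-3
    win_from_deuce = p * p / (1 - 2 * p * q)  # sum of the geometric series
    return win_before_deuce + reach_deuce * win_from_deuce

print(game_win_prob(0.5))   # 0.5 by symmetry
print(game_win_prob(0.6))   # point edges amplify at game level (about 0.74)
```

Changing the scoring rule (tiebreak fifth set, or a 50-40 game) changes these per-game formulas and thereby shifts the whole match-length distribution, which is what the generating-function analysis quantifies.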
Dimension-independent likelihood-informed MCMC
Cui, Tiangang; Law, Kody; Marzouk, Youssef M.
2015-01-01
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
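The dimension-robustness these samplers aim for can be illustrated with the simplest function-space method the DILI construction generalizes: a preconditioned Crank-Nicolson (pCN) chain, whose acceptance ratio involves only the likelihood and so remains well behaved as the discretization is refined. This is a hedged toy sketch with an invented Gaussian likelihood, not the operator-weighted DILI proposal itself.

```python
import math
import random

def pcn_step(u, log_lik, beta):
    # pCN proposal: v = sqrt(1 - beta^2) * u + beta * xi, xi ~ N(0, I) (the
    # Gaussian prior). The prior terms cancel in the acceptance ratio, so
    # only the likelihood difference appears.
    v = [math.sqrt(1 - beta * beta) * ui + beta * random.gauss(0, 1) for ui in u]
    if math.log(random.random()) < log_lik(v) - log_lik(u):
        return v
    return u

random.seed(3)
d = 50                                   # discretization dimension of the "function"
loglik = lambda u: -0.5 * sum((ui - 1.0) ** 2 for ui in u)  # toy Gaussian likelihood

u = [0.0] * d
for _ in range(2000):
    u = pcn_step(u, loglik, beta=0.3)
print(sum(u) / d)   # per-coordinate posterior is N(0.5, 0.5), so near 0.5
```

The DILI samplers of the paper replace the isotropic `beta` by an operator that takes large steps in likelihood-informed directions and prior-scale steps elsewhere.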
Maximum likelihood window for time delay estimation
International Nuclear Information System (INIS)
Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup
2004-01-01
Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the research on the estimation of time delay has been one of the key issues in leak locating with the time-arrival-difference method. In this study, an optimal Maximum Likelihood window is considered to obtain a better estimation of the time delay. This method has been proved in experiments, which can provide much clearer and more precise peaks in cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing has been described. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which offers a weighting of significant frequencies.
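The underlying time-arrival-difference step is a cross-correlation peak search; a minimal sketch with invented signals follows (the ML window of the paper would additionally weight the spectrum before correlating):

```python
def estimate_delay(x, y, max_lag):
    # Estimate the delay of y relative to x as the lag that maximizes the
    # cross-correlation between the two sensor signals.
    def xcorr(lag):
        return sum(x[i] * y[i + lag] for i in range(len(x)) if 0 <= i + lag < len(y))
    return max(range(-max_lag, max_lag + 1), key=xcorr)

x = [0, 0, 0, 1, 2, 1, 0, 0, 0, 0]   # leak pulse seen at sensor 1
y = [0, 0, 0, 0, 0, 1, 2, 1, 0, 0]   # same pulse arriving two samples later
lag = estimate_delay(x, y, 4)
print(lag)  # 2
```

The leak position then follows from the lag: the arrival-time difference (lag divided by the sampling rate) times the elastic wave speed gives the offset from the sensor midpoint.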
Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation
Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.
2015-11-01
We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
Maximum likelihood versus likelihood-free quantum system identification in the atom maser
International Nuclear Information System (INIS)
Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin
2014-01-01
We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser), by performing continuous time measurements in the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics such as waiting times, distribution of successive identical detections, and use them as input of the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle. (paper)
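The ABC side of the comparison can be sketched with the basic rejection sampler: draw parameters from the prior, simulate, and keep draws whose summary statistic lands near the observed one. This is a hedged toy stand-in (exponential waiting times and a sample-mean statistic, all invented), not the atom-maser model or the correlation statistics used in the paper.

```python
import random

def abc_rejection(obs_stat, simulate, stat, prior_draw, n_draws, eps):
    # Likelihood-free inference: accept a prior draw theta whenever the
    # summary statistic of its simulated data is within eps of the
    # observed statistic.
    kept = []
    for _ in range(n_draws):
        theta = prior_draw()
        if abs(stat(simulate(theta)) - obs_stat) <= eps:
            kept.append(theta)
    return kept

random.seed(2)
obs = [random.expovariate(1 / 4.0) for _ in range(200)]        # "measurement record"
stat = lambda xs: sum(xs) / len(xs)                            # chosen summary statistic
simulate = lambda th: [random.expovariate(1 / th) for _ in range(200)]
post = abc_rejection(stat(obs), simulate, stat,
                     lambda: random.uniform(0.1, 10.0), 5000, 0.3)
print(len(post), sum(post) / len(post))   # accepted draws concentrate near the true mean 4
```

As in the paper, the quality of the ABC posterior hinges entirely on how much information the chosen statistics retain; a poor statistic would leave the accepted draws spread across the prior.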
In-vivo studies of new vector velocity and adaptive spectral estimators in medical ultrasound
DEFF Research Database (Denmark)
Hansen, Kristoffer Lindskov
2010-01-01
New ultrasound techniques for blood flow estimation have been investigated in-vivo. These are vector velocity estimators (Transverse Oscillation, Synthetic Transmit Aperture, Directional Beamforming and Plane Wave Excitation) and adaptive spectral estimators (Blood spectral Power Capon and Blood...
Likelihood analysis of supersymmetric SU(5) GUTs
Energy Technology Data Exchange (ETDEWEB)
Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Costa, J.C.; Buchmueller, O.; Citron, M.; Richards, A.; De Vries, K.J. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Sakurai, K. [University of Durham, Science Laboratories, Department of Physics, Institute for Particle Physics Phenomenology, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Borsato, M.; Chobanova, V.; Lucio, M.; Martinez Santos, D. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); Roeck, A. de [CERN, Experimental Physics Department, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, Parkville (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); Theoretical Physics Department, CERN, Geneva 23 (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Cantoblanco, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Isidori, G. [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Olive, K.A. [University of Minnesota, William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, Minneapolis, MN (United States)
2017-02-15
We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has seven parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R} - χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC. (orig.)
Likelihood analysis of supersymmetric SU(5) GUTs
Energy Technology Data Exchange (ETDEWEB)
Bagnaschi, E. [DESY, Hamburg (Germany); Costa, J.C. [Imperial College, London (United Kingdom). Blackett Lab.; Sakurai, K. [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomonology; Warsaw Univ. (Poland). Inst. of Theoretical Physics; Collaboration: MasterCode Collaboration; and others
2016-10-15
We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets+E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R}-χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC.
HLA Match Likelihoods for Hematopoietic Stem-Cell Grafts in the U.S. Registry
Gragert, Loren; Eapen, Mary; Williams, Eric; Freeman, John; Spellman, Stephen; Baitty, Robert; Hartzman, Robert; Rizzo, J. Douglas; Horowitz, Mary; Confer, Dennis; Maiers, Martin
2018-01-01
Background Hematopoietic stem-cell transplantation (HSCT) is a potentially lifesaving therapy for several blood cancers and other diseases. For patients without a suitable related HLA-matched donor, unrelated-donor registries of adult volunteers and banked umbilical cord–blood units, such as the Be the Match Registry operated by the National Marrow Donor Program (NMDP), provide potential sources of donors. Our goal in the present study was to measure the likelihood of finding a suitable donor in the U.S. registry. Methods Using human HLA data from the NMDP donor and cord-blood-unit registry, we built population-based genetic models for 21 U.S. racial and ethnic groups to predict the likelihood of identifying a suitable donor (either an adult donor or a cord-blood unit) for patients in each group. The models incorporated the degree of HLA matching, adult-donor availability (i.e., ability to donate), and cord-blood-unit cell dose. Results Our models indicated that most candidates for HSCT will have a suitable (HLA-matched or minimally mismatched) adult donor. However, many patients will not have an optimal adult donor — that is, a donor who is matched at high resolution at HLA-A, HLA-B, HLA-C, and HLA-DRB1. The likelihood of finding an optimal donor varies among racial and ethnic groups, with the highest probability among whites of European descent, at 75%, and the lowest probability among blacks of South or Central American descent, at 16%. Likelihoods for other groups are intermediate. Few patients will have an optimal cord-blood unit — that is, one matched at the antigen level at HLA-A and HLA-B and matched at high resolution at HLA-DRB1. However, cord-blood units mismatched at one or two HLA loci are available for almost all patients younger than 20 years of age and for more than 80% of patients 20 years of age or older, regardless of racial and ethnic background. Conclusions Most patients likely to benefit from HSCT will have a donor. Public investment in
The behavior of the likelihood ratio test for testing missingness
Hens, Niel; Aerts, Marc; Molenberghs, Geert; Thijs, Herbert
2003-01-01
To assess the sensitivity of conclusions to model choices in the context of selection models for non-random dropout, one can oppose the different missingness mechanisms to each other, e.g. by likelihood ratio tests. The finite-sample behavior of the null distribution and the power of the likelihood ratio test are studied under a variety of missingness mechanisms. missing data; sensitivity analysis; likelihood ratio test; missing mechanisms
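Comparing two nested missingness mechanisms by a likelihood ratio test reduces, computationally, to a deviance and a chi-square tail probability. A minimal sketch in Python; the two maximised log-likelihood values and the single-parameter difference between the models are hypothetical placeholders, not numbers from the paper:

```python
import math

def chi2_sf_df1(x):
    """Survival function of the chi-square distribution with 1 df."""
    return math.erfc(math.sqrt(x / 2.0))

def likelihood_ratio_test(loglik_null, loglik_alt):
    """Deviance 2*(l1 - l0), asymptotically chi-square under the null;
    here the wider missingness model has one extra parameter (df = 1)."""
    stat = 2.0 * (loglik_alt - loglik_null)
    return stat, chi2_sf_df1(stat)

# hypothetical maximised log-likelihoods of a restricted (e.g. MCAR)
# and a wider (e.g. MAR) dropout model fitted to the same data
stat, p = likelihood_ratio_test(-1240.3, -1236.1)
```

The finite-sample point of the paper is precisely that this asymptotic chi-square reference can be unreliable, so the p-value should be read with caution.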
Penalized Maximum Likelihood Estimation for univariate normal mixture distributions
International Nuclear Information System (INIS)
Ridolfi, A.; Idier, J.
2001-01-01
Due to singularities of the likelihood function, the maximum likelihood approach to estimating the parameters of normal mixture models is an acknowledged ill-posed optimization problem. Ill-posedness is resolved by penalizing the likelihood function. In the Bayesian framework, this amounts to incorporating an inverted gamma prior in the likelihood function. A penalized version of the EM algorithm is derived, which is still explicit and which intrinsically ensures that the estimates are not singular. Numerical evidence of the latter property is put forward with a test
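The singularity being penalized can be seen numerically: pin one mixture component's mean on a data point and shrink its variance, and the raw mixture log-likelihood grows without bound, while an inverted-gamma penalty term drives the penalized objective back down. A self-contained sketch; the data, weights, and the prior hyperparameters a and b are illustrative choices, not values from the paper:

```python
import math

def mix_loglik(x, w, mu, var):
    """Log-likelihood of a two-component univariate normal mixture."""
    ll = 0.0
    for xi in x:
        p = sum(wk / math.sqrt(2 * math.pi * vk)
                * math.exp(-(xi - mk) ** 2 / (2 * vk))
                for wk, mk, vk in zip(w, mu, var))
        ll += math.log(p)
    return ll

def ig_log_prior(var, a=2.0, b=0.1):
    """Log inverted-gamma(a, b) prior on each variance, up to a constant;
    the -b/v term diverges to -infinity as a variance collapses to 0."""
    return sum(-(a + 1) * math.log(v) - b / v for v in var)

x = [0.0, 1.1, 1.9, 3.2, 4.0]
raw, pen = [], []
# first component pinned on x[0] with its variance shrinking toward 0
for v in (1e-1, 1e-3, 1e-5):
    ll = mix_loglik(x, [0.5, 0.5], [0.0, 2.5], [v, 1.0])
    raw.append(ll)
    pen.append(ll + ig_log_prior([v, 1.0]))
```

`raw` increases without bound along the shrinking-variance path, while `pen` decreases, which is why the penalized EM iterates cannot drift into a singular estimate.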
Roberts, C. W.; Smith, D. L.
1970-01-01
Simple, inexpensive drag sphere velocity meter with a zero to 6 ft/sec range measures steady-state flow. When combined with appropriate data acquisition system, it is suited to applications where large numbers of simultaneous measurements are needed for current mapping or velocity profile determination.
DEFF Research Database (Denmark)
2000-01-01
Using a pulsed ultrasound field, the two-dimensional velocity vector can be determined with the invention. The method uses a transversally modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new...
NMR blood vessel imaging method and apparatus
International Nuclear Information System (INIS)
Riederer, S.J.
1988-01-01
A high speed method of forming computed images of blood vessels based on measurements of characteristics of a body is described comprising the steps of: subjecting a predetermined body area containing blood vessels of interest to, successively, applications of a short repetition time (TR) NMR pulse sequence during the period of high blood velocity and then to corresponding applications during the period of low blood velocity for successive heart beat cycles; weighting the collected imaging data from each application of the NMR pulse sequence according to whether the data was acquired during the period of high blood velocity or a period of low blood velocity of the corresponding heart beat cycle; accumulating weighted imaging data from a plurality of NMR pulse sequences corresponding to high blood velocity periods and from a plurality of NMR pulse sequences corresponding to low blood velocity periods; subtracting the weighted imaging data corresponding to each specific phase encoding acquired during the high blood velocity periods from the weighted imaging data for the same phase encoding corresponding to low blood velocity periods in order to compute blood vessel imaging data; and forming an image of the blood vessels of interest from the blood vessel imaging data
Efficient Detection of Repeating Sites to Accelerate Phylogenetic Likelihood Calculations.
Kobert, K; Stamatakis, A; Flouri, T
2017-03-01
The phylogenetic likelihood function (PLF) is the major computational bottleneck in several applications of evolutionary biology such as phylogenetic inference, species delimitation, model selection, and divergence times estimation. Given the alignment, a tree and the evolutionary model parameters, the likelihood function computes the conditional likelihood vectors for every node of the tree. Vector entries for which all input data are identical result in redundant likelihood operations which, in turn, yield identical conditional values. Such operations can be omitted for improving run-time and, using appropriate data structures, reducing memory usage. We present a fast, novel method for identifying and omitting such redundant operations in phylogenetic likelihood calculations, and assess the performance improvement and memory savings attained by our method. Using empirical and simulated data sets, we show that a prototype implementation of our method yields up to 12-fold speedups and uses up to 78% less memory than one of the fastest and most highly tuned implementations of the PLF currently available. Our method is generic and can seamlessly be integrated into any phylogenetic likelihood implementation. [Algorithms; maximum likelihood; phylogenetic likelihood function; phylogenetics]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
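The core bookkeeping behind such repeat detection can be sketched in a few lines: at each inner node, two alignment sites share a repeat class exactly when both children carry identical class ids at those sites, so their conditional likelihood vectors are provably equal and need computing only once. This is a simplified illustration of the idea, not the authors' implementation:

```python
def repeat_classes(child_classes_a, child_classes_b):
    """Assign one repeat-class id per alignment site at an inner node.

    Two sites get the same id iff both children show the same pair of
    class ids there; a bottom-up (post-order) pass over the tree applies
    this at every inner node, with tip classes being the observed states.
    """
    seen = {}
    ids = []
    for pa, pb in zip(child_classes_a, child_classes_b):
        key = (pa, pb)
        if key not in seen:
            seen[key] = len(seen)  # first time this pair is seen
        ids.append(seen[key])
    return ids

# at the tips, the class id is simply the observed character
left = list("ACACGT")
right = list("AGACGT")
print(repeat_classes(left, right))  # sites 0 and 2 repeat: [0, 1, 0, 2, 3, 4]
```

Only one conditional likelihood vector entry per distinct class id has to be computed, which is the source of the reported speedups and memory savings.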
Planck intermediate results: XVI. Profile likelihoods for cosmological parameters
DEFF Research Database (Denmark)
Bartlett, J.G.; Cardoso, J.-F.; Delabrouille, J.
2014-01-01
We explore the 2013 Planck likelihood function with a high-precision multi-dimensional minimizer (Minuit). This allows a refinement of the CDM best-fit solution with respect to previously-released results, and the construction of frequentist confidence intervals using profile likelihoods. The agr...
Planck 2013 results. XV. CMB power spectra and likelihood
DEFF Research Database (Denmark)
Tauber, Jan; Bartlett, J.G.; Bucher, M.
2014-01-01
This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best...
The modified signed likelihood statistic and saddlepoint approximations
DEFF Research Database (Denmark)
Jensen, Jens Ledet
1992-01-01
SUMMARY: For a number of tests in exponential families we show that the use of a normal approximation to the modified signed likelihood ratio statistic r * is equivalent to the use of a saddlepoint approximation. This is also true in a large deviation region where the signed likelihood ratio...... statistic r is of order √ n. © 1992 Biometrika Trust....
Likelihood analysis of parity violation in the compound nucleus
International Nuclear Information System (INIS)
Bowman, D.; Sharapov, E.
1993-01-01
We discuss the determination of the root-mean-squared matrix element of the parity-violating interaction between compound-nuclear states using likelihood analysis. We briefly review the relevant features of the statistical model of the compound nucleus and the formalism of likelihood analysis. We then discuss the application of likelihood analysis to data on parity-violating longitudinal asymmetries. The reliability of the extracted value of the matrix element and of the errors assigned to it is stressed. Using experimental data and Monte Carlo techniques, we treat both the situation where the spins of the p-wave resonances are known and the situation where they are not. We conclude that likelihood analysis provides a reliable way to determine M and its confidence interval. We briefly discuss some problems associated with the normalization of the likelihood function
Directory of Open Access Journals (Sweden)
Chiu Choi
2017-02-01
Transient response such as ringing in a control system can be reduced or removed by velocity feedback. It is a useful control technique that should be covered in the relevant engineering laboratory courses. We developed velocity feedback experiments using two different low-cost technologies, viz., operational amplifiers and microcontrollers. These experiments can be easily integrated into laboratory courses on feedback control systems or microcontroller applications. The intent of developing these experiments was to illustrate the ringing problem and to offer effective, low-cost solutions for removing it. In this paper the pedagogical approach for these velocity feedback experiments is described. The advantages and disadvantages of the two different implementations of velocity feedback are also discussed.
The critical ionization velocity
International Nuclear Information System (INIS)
Raadu, M.A.
1980-06-01
The critical ionization velocity effect was first proposed in the context of space plasmas. This effect occurs for a neutral gas moving through a magnetized plasma and leads to rapid ionization and braking of the relative motion when a marginal velocity, 'the critical velocity', is exceeded. Laboratory experiments have clearly established the significance of the critical velocity and have provided evidence for an underlying mechanism which relies on the combined action of electron impact ionization and a collective plasma interaction heating electrons. There is experimental support for such a mechanism based on the heating of electrons by the modified two-stream instability as part of a feedback process. Several applications to space plasmas have been proposed and the possibility of space experiments has been discussed. (author)
1988-01-01
A video tape related to orbital debris research is presented. The video tape covers the process of loading a High Velocity Gas Gun and firing it into a mounted metal plate. The process is then repeated in slow motion.
Vink, H.; Wieringa, P. A.; Spaan, J. A.
1995-01-01
1. From capillary red cell velocity (V)-flux (F) relationships of hamster cremaster muscle a yield velocity (VF = 0) can be derived at which red cell flux is zero. Red cell velocity becomes intermittent and/or red blood cells come to a complete standstill for velocities close to this yield velocity,
Laser-Based Slam with Efficient Occupancy Likelihood Map Learning for Dynamic Indoor Scenes
Li, Li; Yao, Jian; Xie, Renping; Tu, Jinge; Feng, Chen
2016-06-01
Location-Based Services (LBS) have attracted growing attention in recent years, especially in indoor environments. The fundamental technique behind LBS is map building for unknown environments, also known as simultaneous localization and mapping (SLAM) in the robotics community. In this paper, we propose a novel approach for SLAM in dynamic indoor scenes based on a 2D laser scanner mounted on a mobile Unmanned Ground Vehicle (UGV), with the help of a grid-based occupancy likelihood map. Instead of applying scan matching to two adjacent scans, we propose to match the current scan against the occupancy likelihood map learned from all previous scans at multiple scales, to avoid the accumulation of matching errors. Because the points in a scan are acquired sequentially rather than simultaneously, some degree of scan distortion is unavoidable. To compensate for the scan distortion caused by the motion of the UGV, we propose to integrate the velocity of the laser range finder (LRF) into the scan matching optimization framework. Besides, to reduce as much as possible the effect of dynamic objects such as walking pedestrians, which often exist in indoor scenes, we propose a new occupancy likelihood map learning strategy that increases or decreases the probability of each occupancy grid after each scan matching. Experimental results in several challenging indoor scenes demonstrate that our proposed approach is capable of providing high-precision SLAM results.
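The increase/decrease strategy for the occupancy likelihood map can be sketched as a clamped log-odds update per grid cell. The increment, decrement, and clamp values below are illustrative defaults, not the paper's tuning:

```python
def update_occupancy(logodds, hits, misses, l_occ=0.85, l_free=-0.4,
                     l_min=-2.0, l_max=3.5):
    """After each scan matching step, raise the log-odds of cells the
    beam endpoints hit and lower those the beams passed through.

    Clamping the log-odds keeps the map responsive, so grids occupied by
    moving objects (e.g. pedestrians) can be cleared by later scans.
    """
    for cell in hits:
        logodds[cell] = min(l_max, logodds.get(cell, 0.0) + l_occ)
    for cell in misses:
        logodds[cell] = max(l_min, logodds.get(cell, 0.0) + l_free)
    return logodds

# one hypothetical scan: one endpoint cell, two traversed cells
m = update_occupancy({}, hits=[(3, 4)], misses=[(1, 2), (2, 3)])
```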
New technology - demonstration of a vector velocity technique
DEFF Research Database (Denmark)
Møller Hansen, Peter; Pedersen, Mads M; Hansen, Kristoffer L
2011-01-01
With conventional Doppler ultrasound it is not possible to estimate direction and velocity of blood flow, when the angle of insonation exceeds 60-70°. Transverse oscillation is an angle independent vector velocity technique which is now implemented on a conventional ultrasound scanner. In this pa...
DEFF Research Database (Denmark)
Greve, Sara V; Blicher, Marie K; Kruger, Ruan
2016-01-01
BACKGROUND: Carotid-femoral pulse wave velocity (cfPWV) adds significantly to traditional cardiovascular risk prediction, but is not widely available. Therefore, it would be helpful if cfPWV could be replaced by an estimated carotid-femoral pulse wave velocity (ePWV) using age and mean blood pres...... that these traditional risk scores have underestimated the complicated impact of age and blood pressure on arterial stiffness and cardiovascular risk....
Lucewicz, A; Fisher, K; Henry, A; Welsh, A W
2016-02-01
Twin anemia-polycythemia sequence (TAPS) is recognized increasingly antenatally by the demonstration of an anemic twin and a polycythemic cotwin using the middle cerebral artery peak systolic velocity (MCA-PSV). While the MCA-PSV has been shown to correlate well with anemia in singleton fetuses, the evidence to support its use to diagnose fetal polycythemia appears to be less clear-cut. We aimed to evaluate fetal, neonatal and adult literature used to support the use of MCA-PSV for the diagnosis of polycythemia. Comprehensive literature searches were performed for ultrasound evidence of polycythemia in the human fetus, neonate and adult using key search terms. Only manuscripts in the English language with an abstract were considered for the review, performed in June 2014. Fifteen manuscripts were found for the human fetus, including 38 cases of TAPS. Nine of these defined fetal polycythemia as MCA-PSV polycythemia and a consequent increase with hemodilution. In the adult, five studies (57 polycythemic adults) demonstrated increased flow or velocity with hemodilution. Neither neonatal nor adult studies conclusively defined levels for screening for polycythemia. Despite widespread adoption of a cut-off of < 0.8 MoM in the published literature for the polycythemic fetus in TAPS, this is based upon minimal evidence, with unknown sensitivity and specificity. We recommend caution in excluding TAPS based purely upon the absence of a reduced MCA-PSV. Copyright © 2015 ISUOG. Published by John Wiley & Sons Ltd.
Posterior distributions for likelihood ratios in forensic science.
van den Hout, Ardo; Alberink, Ivo
2016-09-01
Evaluation of evidence in forensic science is discussed using posterior distributions for likelihood ratios. Instead of eliminating the uncertainty by integrating (Bayes factor) or by conditioning on parameter values, uncertainty in the likelihood ratio is retained by parameter uncertainty derived from posterior distributions. A posterior distribution for a likelihood ratio can be summarised by the median and credible intervals. Using the posterior mean of the distribution is not recommended. An analysis of forensic data for body height estimation is undertaken. The posterior likelihood approach has been criticised both theoretically and with respect to applicability. This paper addresses the latter and illustrates an interesting application area. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
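Summarising a likelihood ratio by the median and a credible interval of its posterior, rather than by a single integrated value, can be sketched with Monte Carlo draws. The toy Beta posteriors below stand in for whatever parameter posteriors the case at hand provides; they are illustrative, not the paper's body-height data:

```python
import random
import statistics

def lr_posterior_summary(draws_num, draws_den, level=0.95):
    """Posterior median and central credible interval of a likelihood
    ratio, computed from paired posterior draws of its numerator and
    denominator (the paper advises against using the posterior mean)."""
    lrs = sorted(n / d for n, d in zip(draws_num, draws_den))
    k = len(lrs)
    lo = lrs[int((1 - level) / 2 * k)]
    hi = lrs[int((1 + level) / 2 * k) - 1]
    return statistics.median(lrs), (lo, hi)

random.seed(1)
# toy posteriors: match probability under each hypothesis
num = [random.betavariate(80, 20) for _ in range(4000)]  # ~0.8 under Hp
den = [random.betavariate(20, 80) for _ in range(4000)]  # ~0.2 under Hd
med, (lo, hi) = lr_posterior_summary(num, den)
```

Reporting `(lo, hi)` alongside `med` retains the parameter uncertainty that a single Bayes-factor number would integrate away.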
Practical likelihood analysis for spatial generalized linear mixed models
DEFF Research Database (Denmark)
Bonat, W. H.; Ribeiro, Paulo Justiniano
2016-01-01
We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap are......, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov Chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages...... of Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...
Algorithms of maximum likelihood data clustering with applications
Giada, Lorenzo; Marsili, Matteo
2002-12-01
We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.
Generalized empirical likelihood methods for analyzing longitudinal data
Wang, S.; Qian, L.; Carroll, R. J.
2010-01-01
Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes; finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians. The main reason is that maximum likelihood estimation is a powerful statistical method which yields consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is applied in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative effect between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
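A maximum likelihood fit of a two-component normal mixture is typically computed with the EM algorithm. The sketch below is a generic illustration on synthetic data, not the authors' code or their price series:

```python
import math
import random

def em_two_normals(x, iters=100):
    """EM for a two-component univariate normal mixture.

    Returns (weights, means, variances); initialised from the lower and
    upper quartiles of the data.
    """
    n = len(x)
    xs = sorted(x)
    mu = [xs[n // 4], xs[3 * n // 4]]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        r = []
        for xi in x:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            r.append([p[0] / s, p[1] / s])
        # M-step: responsibility-weighted maximum likelihood updates
        for k in (0, 1):
            nk = sum(ri[k] for ri in r)
            mu[k] = sum(ri[k] * xi for ri, xi in zip(r, x)) / nk
            var[k] = sum(ri[k] * (xi - mu[k]) ** 2
                         for ri, xi in zip(r, x)) / nk
            w[k] = nk / n
    return w, mu, var

random.seed(0)
data = ([random.gauss(-1.0, 0.5) for _ in range(300)]
        + [random.gauss(2.0, 0.7) for _ in range(300)])
w, mu, var = em_two_normals(data)
```

On well-separated synthetic data like this, the fitted means recover the two generating centres closely.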
Attitude towards, and likelihood of, complaining in the banking ...
African Journals Online (AJOL)
aims to determine customers' attitudes towards complaining as well as their likelihood of voicing a .... is particularly powerful and impacts greatly on customer satisfaction and retention. ...... 'Cross-national analysis of hotel customers' attitudes ...
Narrow band interference cancelation in OFDM: A structured maximum likelihood approach
Sohail, Muhammad Sadiq; Al-Naffouri, Tareq Y.; Al-Ghadhban, Samir N.
2012-01-01
This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous
Modified circular velocity law
Djeghloul, Nazim
2018-05-01
A modified circular velocity law is presented for a test body orbiting a spherically symmetric mass. This law exhibits a distance scale parameter and allows one to recover both the usual Newtonian behaviour at small distances and a constant velocity limit at large scale. Application to the Galaxy predicts the known behaviour and also leads to a galactic mass in accordance with the measured visible stellar mass, so that additional dark matter inside the Galaxy can be avoided. It is also shown that this circular velocity law can be embedded in a geometrical description of spacetime within the standard general relativity framework upon relaxing the usual asymptotic flatness condition. This formulation allows the introduced Newtonian scale limit to be redefined in terms of the central mass exclusively. Moreover, a satisfactory answer to the galactic escape speed problem can be provided, indicating the possibility that one can also get rid of the dark matter halo outside the Galaxy.
On the likelihood function of Gaussian max-stable processes
Genton, M. G.; Ma, Y.; Sang, H.
2011-01-01
We derive a closed form expression for the likelihood function of a Gaussian max-stable process indexed by ℝd at p≤d+1 sites, d≥1. We demonstrate the gain in efficiency in the maximum composite likelihood estimators of the covariance matrix from p=2 to p=3 sites in ℝ2 by means of a Monte Carlo simulation study. © 2011 Biometrika Trust.
Incorporating Nuisance Parameters in Likelihoods for Multisource Spectra
Conway, J.S.
2011-01-01
We describe here the general mathematical approach to constructing likelihoods for fitting observed spectra in one or more dimensions with multiple sources, including the effects of systematic uncertainties represented as nuisance parameters, when the likelihood is to be maximized with respect to these parameters. We consider three types of nuisance parameters: simple multiplicative factors, source spectra "morphing" parameters, and parameters representing statistical uncertainties in the predicted source spectra.
Tapered composite likelihood for spatial max-stable models
Sang, Huiyan
2014-05-01
Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.
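In its simplest indicator form, the taper-range weighting amounts to zeroing out distant pairs' contributions to the pairwise composite log-likelihood. A sketch; the indicator taper and the dummy bivariate terms are illustrative, and in the paper the range is chosen by maximising Godambe-information criteria rather than fixed by hand:

```python
import math

def tapered_weights(coords, taper_range):
    """Indicator taper: site pairs farther apart than the taper range
    get weight 0 and drop out of the composite likelihood sum."""
    out = []
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(coords[i], coords[j])
            out.append(((i, j), 1.0 if d <= taper_range else 0.0))
    return out

def tapered_composite_loglik(pair_loglik, weights):
    """Weighted sum of bivariate log-likelihood terms; zero-weight
    pairs are skipped, which is where the computational saving lies."""
    return sum(wt * pair_loglik[p] for p, wt in weights if wt > 0.0)

coords = [(0.0, 0.0), (1.0, 0.0), (5.0, 0.0)]
wts = tapered_weights(coords, taper_range=2.0)
# dummy bivariate terms; a real model would evaluate the max-stable
# pair density at the observed extremes
ll = tapered_composite_loglik({(0, 1): -1.2, (0, 2): -3.4, (1, 2): -2.2}, wts)
```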
Dissociating response conflict and error likelihood in anterior cingulate cortex.
Yeung, Nick; Nieuwenhuis, Sander
2009-11-18
Neuroimaging studies consistently report activity in anterior cingulate cortex (ACC) in conditions of high cognitive demand, leading to the view that ACC plays a crucial role in the control of cognitive processes. According to one prominent theory, the sensitivity of ACC to task difficulty reflects its role in monitoring for the occurrence of competition, or "conflict," between responses to signal the need for increased cognitive control. However, a contrasting theory proposes that ACC is the recipient rather than source of monitoring signals, and that ACC activity observed in relation to task demand reflects the role of this region in learning about the likelihood of errors. Response conflict and error likelihood are typically confounded, making the theories difficult to distinguish empirically. The present research therefore used detailed computational simulations to derive contrasting predictions regarding ACC activity and error rate as a function of response speed. The simulations demonstrated a clear dissociation between conflict and error likelihood: fast response trials are associated with low conflict but high error likelihood, whereas slow response trials show the opposite pattern. Using the N2 component as an index of ACC activity, an EEG study demonstrated that when conflict and error likelihood are dissociated in this way, ACC activity tracks conflict and is negatively correlated with error likelihood. These findings support the conflict-monitoring theory and suggest that, in speeded decision tasks, ACC activity reflects current task demands rather than the retrospective coding of past performance.
The Prescribed Velocity Method
DEFF Research Database (Denmark)
Nielsen, Peter Vilhelm
The velocity level in a room ventilated by jet ventilation is strongly influenced by the supply conditions. The momentum flow in the supply jets controls the air movement in the room and, therefore, it is very important that the inlet conditions and the numerical method can generate a satisfactory
Multidisc neutron velocity selector
International Nuclear Information System (INIS)
Rosta, L.; Zsigmond, Gy.; Farago, B.; Mezei, F.; Ban, K.; Perendi, J.
1987-12-01
The prototype of a velocity selector for neutron monochromatization in the 4-20 A wavelength range is presented. The theoretical background of the multidisc rotor system is given together with a description of the mechanical construction and electronic driving system. The first tests and neutron measurements prove easy handling and excellent parameters. (author) 6 refs.; 7 figs.; 2 tabs
Doppler velocity measurements from large and small arteries of mice
Reddy, Anilkumar K.; Madala, Sridhar; Entman, Mark L.; Michael, Lloyd H.; Taffet, George E.
2011-01-01
With the growth of genetic engineering, mice have become increasingly common as models of human diseases, and this has stimulated the development of techniques to assess the murine cardiovascular system. Our group has developed nonimaging and dedicated Doppler techniques for measuring blood velocity in the large and small peripheral arteries of anesthetized mice. We translated technology originally designed for human vessels for use in smaller mouse vessels at higher heart rates by using higher ultrasonic frequencies, smaller transducers, and higher-speed signal processing. With these methods one can measure cardiac filling and ejection velocities, velocity pulse arrival times for determining pulse wave velocity, peripheral blood velocity and vessel wall motion waveforms, jet velocities for the calculation of the pressure drop across stenoses, and left main coronary velocity for the estimation of coronary flow reserve. These noninvasive methods are convenient and easy to apply, but care must be taken in interpreting measurements due to Doppler sample volume size and angle of incidence. Doppler methods have been used to characterize and evaluate numerous cardiovascular phenotypes in mice and have been particularly useful in evaluating the cardiac and vascular remodeling that occur following transverse aortic constriction. Although duplex ultrasonic echo-Doppler instruments are being applied to mice, dedicated Doppler systems are more suitable for some applications. The magnitudes and waveforms of blood velocities from both cardiac and peripheral sites are similar in mice and humans, such that much of what is learned using Doppler technology in mice may be translated back to humans. PMID:21572013
Constraint likelihood analysis for a network of gravitational wave detectors
International Nuclear Information System (INIS)
Klimenko, S.; Rakhmanov, M.; Mitselmakher, G.; Mohanty, S.
2005-01-01
We propose a coherent method for detection and reconstruction of gravitational wave signals with a network of interferometric detectors. The method is derived by using the likelihood ratio functional for unknown signal waveforms. In the likelihood analysis, the global maximum of the likelihood ratio over the space of waveforms is used as the detection statistic. We identify a problem with this approach. In the case of an aligned pair of detectors, the detection statistic depends on the cross correlation between the detectors as expected, but this dependence disappears even for infinitesimally small misalignments. We solve the problem by applying constraints on the likelihood functional and obtain a new class of statistics. The resulting method can be applied to data from a network consisting of any number of detectors with arbitrary detector orientations. The method allows us to reconstruct the source coordinates and the waveforms of the two polarization components of a gravitational wave. We study the performance of the method with numerical simulations and find the reconstruction of the source coordinates to be more accurate than in the standard likelihood method
Multidisk neutron velocity selectors
International Nuclear Information System (INIS)
Hammouda, B.
1992-01-01
Helical multidisk velocity selectors used for neutron scattering applications have been analyzed and tested experimentally. Design and performance considerations are discussed along with a simple explanation of the basic concept. A simple progression is used for the inter-disk spacing in the 'Rosta' design. Ray tracing computer investigations are presented in order to assess the 'coverage' (how many absorbing layers are stacked along the path of 'wrong' wavelength neutrons) and the relative number of neutrons absorbed in each disk (and therefore the relative amount of gamma radiation emitted from each disk). We discuss whether a multidisk velocity selector can be operated in the 'reverse' configuration (i.e. the selector is turned by 180° around a vertical axis with the rotor spun in the reverse direction). Experimental tests and calibration of a multidisk selector are reported together with evidence that a multidisk selector can be operated in the 'reverse' configuration. (orig.)
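The selection principle of a helical selector follows from standard kinematics: a neutron is transmitted if it covers the rotor length in the time the helix turns through its twist angle. The sketch below uses that relation; the rotor dimensions and speed are illustrative, not the 'Rosta' design values:

```python
import math

H = 6.62607015e-34       # Planck constant, J*s
M_N = 1.67492749804e-27  # neutron mass, kg

def selected_wavelength(rpm, length_m, twist_rad):
    # Transmission condition: v = omega * L / phi, then the de Broglie
    # relation lambda = h / (m * v) gives the selected wavelength.
    omega = 2.0 * math.pi * rpm / 60.0   # rotor angular speed, rad/s
    v = omega * length_m / twist_rad     # transmitted neutron speed, m/s
    return H / (M_N * v) * 1e10          # wavelength in Angstrom

# Illustrative geometry: 0.25 m rotor, 30 degree twist, 10000 rpm.
lam = selected_wavelength(10000, 0.25, math.radians(30.0))
print(f"{lam:.2f} A")
```

Spinning the rotor faster selects faster (shorter-wavelength) neutrons, which is why the rotation speed is the tuning knob of such selectors.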
Profile-likelihood Confidence Intervals in Item Response Theory Models.
Chalmers, R Philip; Pek, Jolynn; Liu, Yang
2017-01-01
Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
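A PL CI can be illustrated outside IRT with the simplest two-parameter case: a normal mean with the variance profiled out. Everything below (the data, the bracketing, the chi-square cutoff handling) is a generic sketch, not the authors' implementation:

```python
import math, random

def profile_loglik(mu, xs):
    # For fixed mu, the variance MLE is mean squared deviation about mu.
    n = len(xs)
    s2 = sum((x - mu) ** 2 for x in xs) / n
    return -0.5 * n * (math.log(s2) + 1.0)

def pl_ci(xs, crit=3.841):  # chi-square(1) 95% critical value
    mu_hat = sum(xs) / len(xs)
    l_max = profile_loglik(mu_hat, xs)

    def deficit(mu):  # 2*(l_max - l_p(mu)) - crit crosses 0 at the bounds
        return 2.0 * (l_max - profile_loglik(mu, xs)) - crit

    def bisect(a, b):
        for _ in range(80):
            m = 0.5 * (a + b)
            if deficit(a) * deficit(m) <= 0:
                b = m
            else:
                a = m
        return 0.5 * (a + b)

    spread = 10.0 * (max(xs) - min(xs)) / math.sqrt(len(xs))
    return bisect(mu_hat - spread, mu_hat), bisect(mu_hat, mu_hat + spread)

random.seed(1)
xs = [random.gauss(5.0, 2.0) for _ in range(200)]
lo, hi = pl_ci(xs)
print(f"95% PL CI for the mean: ({lo:.2f}, {hi:.2f})")
```

Unlike a Wald interval, the PL CI needs no standard error and is invariant under reparameterization, which is the attraction for the transformed parameters the article discusses.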
Generalized empirical likelihood methods for analyzing longitudinal data
Wang, S.
2010-02-16
Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived. It is shown that one of the proposed methods is locally efficient among a class of within-subject variance-covariance matrices. A simulation study is conducted to investigate the finite sample properties of the proposed methods and compare them with the block empirical likelihood method by You et al. (2006) and the normal approximation with a correctly estimated variance-covariance. The results suggest that the proposed methods are generally more efficient than existing methods which ignore the correlation structure, and better in coverage compared to the normal approximation with correctly specified within-subject correlation. An application illustrating our methods and supporting the simulation study results is also presented.
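The empirical likelihood ratio behind the Wilks-type result can be sketched in the simplest iid-mean case (not the longitudinal, within-subject-correlation setting of the paper), solving Owen's Lagrange multiplier equation by bisection:

```python
import math

def el_logratio(mu, xs):
    # -2 * log empirical likelihood ratio for the mean: find lam solving
    # sum d_i / (1 + lam * d_i) = 0 with d_i = x_i - mu, then evaluate
    # 2 * sum log(1 + lam * d_i). Asymptotically chi-square(1) under H0.
    d = [x - mu for x in xs]
    if max(d) <= 0 or min(d) >= 0:
        return float("inf")        # mu lies outside the convex hull
    lo = -1.0 / max(d) + 1e-10     # keep all weights 1 + lam*d_i > 0
    hi = -1.0 / min(d) - 1e-10

    def g(lam):
        return sum(di / (1.0 + lam * di) for di in d)

    for _ in range(200):           # g is strictly decreasing on (lo, hi)
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + lam * di) for di in d)

xs = [float(i) for i in range(1, 11)]   # toy sample with mean 5.5
print(el_logratio(5.5, xs), el_logratio(4.5, xs))
```

A 95% confidence region for the mean is then {mu : el_logratio(mu, xs) <= 3.841}, with no variance estimate required.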
Likelihood ratio sequential sampling models of recognition memory.
Osth, Adam F; Dennis, Simon; Heathcote, Andrew
2017-02-01
The mirror effect - a phenomenon whereby a manipulation produces opposite effects on hit and false alarm rates - is a benchmark regularity of recognition memory. A likelihood ratio decision process, basing recognition on the relative likelihood that a stimulus is a target or a lure, naturally predicts the mirror effect, and so has been widely adopted in quantitative models of recognition memory. Glanzer, Hilford, and Maloney (2009) demonstrated that likelihood ratio models, assuming Gaussian memory strength, are also capable of explaining regularities observed in receiver-operating characteristics (ROCs), such as greater target than lure variance. Despite its central place in theorising about recognition memory, however, this class of models has not been tested using response time (RT) distributions. In this article, we develop a linear approximation to the likelihood ratio transformation, which we show predicts the same regularities as the exact transformation. This development enabled us to construct a tractable model of recognition-memory RT based on the diffusion decision model (DDM), with inputs (drift rates) provided by an approximate likelihood ratio transformation. We compared this "LR-DDM" to a standard DDM where all targets and lures receive their own drift rate parameters. Both were implemented as hierarchical Bayesian models and applied to four datasets. Model selection taking into account parsimony favored the LR-DDM, which requires fewer parameters than the standard DDM but still fits the data well. These results support log-likelihood based models as providing an elegant explanation of the regularities of recognition memory, not only in terms of the choices made but also in terms of the times it takes to make them. Copyright © 2016 Elsevier Inc. All rights reserved.
Unbinned likelihood maximisation framework for neutrino clustering in Python
Energy Technology Data Exchange (ETDEWEB)
Coenders, Stefan [Technische Universitaet Muenchen, Boltzmannstr. 2, 85748 Garching (Germany)
2016-07-01
Although an astrophysical neutrino flux has been detected with IceCube, the sources of astrophysical neutrinos remain hidden up to now. A detection of a neutrino point source is a smoking gun for hadronic processes and acceleration of cosmic rays. The search for neutrino sources has many degrees of freedom, for example steady versus transient, point-like versus extended sources, et cetera. Here, we introduce a Python framework designed for unbinned likelihood maximisations as used in searches for neutrino point sources by IceCube. Implementing source scenarios in a modular way, likelihood searches of various kinds can be implemented in a user-friendly way, without sacrificing speed and memory management.
Nearly Efficient Likelihood Ratio Tests of the Unit Root Hypothesis
DEFF Research Database (Denmark)
Jansson, Michael; Nielsen, Morten Ørregaard
Seemingly absent from the arsenal of currently available "nearly efficient" testing procedures for the unit root hypothesis, i.e. tests whose local asymptotic power functions are indistinguishable from the Gaussian power envelope, is a test admitting a (quasi-)likelihood ratio interpretation. We...... show that the likelihood ratio unit root test derived in a Gaussian AR(1) model with standard normal innovations is nearly efficient in that model. Moreover, these desirable properties carry over to more complicated models allowing for serially correlated and/or non-Gaussian innovations....
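In the Gaussian AR(1) model with standard normal innovations the likelihood ratio statistic for the unit root hypothesis reduces to a difference of residual sums of squares. The sketch below illustrates that statistic on simulated data; it is an illustration of the setting, not the authors' code:

```python
import random

def ar1_lr_unit_root(y):
    # LR = 2*[l(rho_hat) - l(1)] in a Gaussian AR(1) with known unit
    # innovation variance, conditioning on y[0]. With l = -0.5 * RSS this
    # is exactly RSS under H0 minus RSS at the OLS/ML estimate.
    pairs = list(zip(y[1:], y[:-1]))
    rho_hat = sum(a * b for a, b in pairs) / sum(b * b for _, b in pairs)
    rss_hat = sum((a - rho_hat * b) ** 2 for a, b in pairs)
    rss_null = sum((a - b) ** 2 for a, b in pairs)
    return rss_null - rss_hat

random.seed(7)
eps = [random.gauss(0, 1) for _ in range(500)]
rw, stat = [0.0], [0.0]
for e in eps:
    rw.append(rw[-1] + e)            # unit root: rho = 1
    stat.append(0.5 * stat[-1] + e)  # stationary: rho = 0.5
print(f"LR(random walk) = {ar1_lr_unit_root(rw):.1f}, "
      f"LR(stationary)  = {ar1_lr_unit_root(stat):.1f}")
```

Under the unit root the statistic has a nonstandard (Dickey-Fuller type) limit rather than a chi-square one, which is what makes the power-envelope comparisons in the abstract nontrivial.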
A note on estimating errors from the likelihood function
International Nuclear Information System (INIS)
Barlow, Roger
2005-01-01
The points at which the log likelihood falls by 1/2 from its maximum value are often used to give the 'errors' on a result, i.e. the 68% central confidence interval. The validity of this is examined for two simple cases: a lifetime measurement and a Poisson measurement. Results are compared with the exact Neyman construction and with the simple Bartlett approximation. It is shown that the accuracy of the log likelihood method is poor, and the Bartlett construction explains why it is flawed
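The rule in question can be made concrete for the Poisson case: find the two values of mu at which log L drops by 1/2 from its maximum at mu = n. A minimal sketch (the bisection brackets and the n = 9 example are illustrative):

```python
import math

def poisson_loglik(mu, n):
    # log L(mu) for one Poisson observation n, dropping the n! constant.
    return n * math.log(mu) - mu

def loglik_interval(n, drop=0.5):
    # Interval where log L falls by `drop` from its maximum at mu = n.
    l_target = poisson_loglik(n, n) - drop
    f = lambda mu: poisson_loglik(mu, n) - l_target

    def bisect(a, b):
        for _ in range(100):
            m = 0.5 * (a + b)
            if f(a) * f(m) <= 0:
                b = m
            else:
                a = m
        return 0.5 * (a + b)

    return bisect(1e-9, float(n)), bisect(float(n), 5.0 * n + 10.0)

lo, hi = loglik_interval(9)
print(f"n = 9: ({lo:.2f}, {hi:.2f})  vs naive 9 +/- 3")
```

The interval comes out asymmetric about n, and Barlow's point is that its actual coverage can differ appreciably from the nominal 68% obtained from the exact Neyman construction.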
Nearly Efficient Likelihood Ratio Tests for Seasonal Unit Roots
DEFF Research Database (Denmark)
Jansson, Michael; Nielsen, Morten Ørregaard
In an important generalization of zero frequency autoregressive unit root tests, Hylleberg, Engle, Granger, and Yoo (1990) developed regression-based tests for unit roots at the seasonal frequencies in quarterly time series. We develop likelihood ratio tests for seasonal unit roots and show...... that these tests are "nearly efficient" in the sense of Elliott, Rothenberg, and Stock (1996), i.e. that their local asymptotic power functions are indistinguishable from the Gaussian power envelope. Currently available nearly efficient testing procedures for seasonal unit roots are regression-based and require...... the choice of a GLS detrending parameter, which our likelihood ratio tests do not....
LDR: A Package for Likelihood-Based Sufficient Dimension Reduction
Directory of Open Access Journals (Sweden)
R. Dennis Cook
2011-03-01
We introduce a new Matlab software package that implements several recently proposed likelihood-based methods for sufficient dimension reduction. Current capabilities include estimation of reduced subspaces with a fixed dimension d, as well as estimation of d by use of likelihood-ratio testing, permutation testing and information criteria. The methods are suitable for preprocessing data for both regression and classification. Implementations of related estimators are also available. Although the software is more oriented to command-line operation, a graphical user interface is also provided for prototype computations.
Likelihood ratio decisions in memory: three implied regularities.
Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T
2009-06-01
We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
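The mirror effect falls out of the likelihood ratio axis almost immediately in the equal-variance Gaussian case, where the LR = 1 criterion sits at x = d/2: strengthening memory (larger d) raises hits and lowers false alarms simultaneously. A minimal sketch, not the authors' four models:

```python
from math import erf, sqrt

def phi_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def rates(d):
    # Equal-variance Gaussian model: targets ~ N(d, 1), lures ~ N(0, 1).
    # The likelihood ratio equals 1 at x = d/2, the symmetric criterion.
    c = d / 2.0
    hit = 1.0 - phi_cdf(c - d)   # P(X > c | target)
    fa = 1.0 - phi_cdf(c)        # P(X > c | lure)
    return hit, fa

# Strengthening the targets (d: 1 -> 2) moves hits up AND false alarms
# down, because the criterion shifts with d: the mirror effect.
weak, strong = rates(1.0), rates(2.0)
print(f"weak:   hit = {weak[0]:.3f}, fa = {weak[1]:.3f}")
print(f"strong: hit = {strong[0]:.3f}, fa = {strong[1]:.3f}")
```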
Can the curriculum be used to estimate critical velocity in young competitive swimmers?
Directory of Open Access Journals (Sweden)
Daniel A. Marinho
2009-03-01
The aims of the present study were to assess critical velocity using the swimmer curriculum in front crawl events and to compare critical velocity to the velocity corresponding to a 4 mmol·l-1 blood lactate concentration and to the velocity of a 30 min test. The sample included 24 high level male swimmers aged between 14 and 16 years. For each subject the critical velocity, the velocity corresponding to a 4 mmol·l-1 blood lactate concentration and the mean velocity of a 30 min test were determined. The critical velocity was also estimated by considering the best performance of a swimmer over several distances based on the swimmer curriculum. Critical velocity including 100, 200 and 400 m events was not different from the velocity of 4 mmol·l-1 blood lactate concentration. Critical velocity including all the swimmer events was not different from the velocity of a 30 min test. The assessment of critical velocity based upon the swimmer curriculum would therefore seem to be a good approach to determine the aerobic ability of a swimmer. The selection of the events to be included in critical velocity assessment must be a main concern in the evaluation of the swimmer.
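Critical velocity is conventionally the slope of the distance-versus-time regression over a swimmer's best performances. The sketch below uses made-up times, not study data:

```python
def critical_velocity(distances_m, times_s):
    # Ordinary least-squares slope of distance against time, in m/s.
    n = len(distances_m)
    mt = sum(times_s) / n
    md = sum(distances_m) / n
    sxy = sum((t - mt) * (d - md) for t, d in zip(times_s, distances_m))
    sxx = sum((t - mt) ** 2 for t in times_s)
    return sxy / sxx

# Hypothetical best times for 100, 200 and 400 m front crawl:
cv = critical_velocity([100, 200, 400], [58.0, 125.0, 262.0])
print(f"CV = {cv:.3f} m/s")
```

The study's point that event selection matters shows up directly here: dropping or adding a distance changes the fitted slope.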
Normal blood pressure is important for proper blood flow to the body's organs and tissues. The force of the blood on the walls of the arteries is called blood pressure. Blood pressure is measured both as the heart ...
... blood, safe blood transfusions depend on careful blood typing and cross-matching. There are four major blood ... cause exceptions to the above patterns. ABO blood typing is not sufficient to prove or disprove paternity ...
Understanding the properties of diagnostic tests - Part 2: Likelihood ratios.
Ranganathan, Priya; Aggarwal, Rakesh
2018-01-01
Diagnostic tests are used to identify subjects with and without disease. In a previous article in this series, we examined some attributes of diagnostic tests - sensitivity, specificity, and predictive values. In this second article, we look at likelihood ratios, which are useful for the interpretation of diagnostic test results in everyday clinical practice.
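The standard definitions can be sketched directly: LR+ = sensitivity / (1 - specificity), LR- = (1 - sensitivity) / specificity, and post-test odds = pre-test odds x LR. The figures below are illustrative only:

```python
def post_test_prob(pretest_p, sens, spec, positive=True):
    # Convert pre-test probability to odds, apply the likelihood ratio
    # for the observed result, and convert back to a probability.
    lr = sens / (1 - spec) if positive else (1 - sens) / spec
    pre_odds = pretest_p / (1 - pretest_p)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# A test with 90% sensitivity and 80% specificity, 30% pre-test probability:
p_pos = post_test_prob(0.30, 0.90, 0.80, positive=True)
p_neg = post_test_prob(0.30, 0.90, 0.80, positive=False)
print(f"positive result: {p_pos:.2f}, negative result: {p_neg:.2f}")
```

This odds form is why likelihood ratios, unlike predictive values, can be carried from one patient population to another with a different disease prevalence.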
Comparison of likelihood testing procedures for parallel systems with covariances
International Nuclear Information System (INIS)
Ayman Baklizi; Isa Daud; Noor Akma Ibrahim
1998-01-01
In this paper we consider and compare the behavior of the likelihood ratio, the Rao and the Wald statistics for testing hypotheses on the parameters of the simple linear regression model based on parallel systems with covariances. These statistics are asymptotically equivalent (Barndorff-Nielsen and Cox, 1994). However, their relative performances in finite samples are generally unknown. A Monte Carlo experiment is conducted to simulate the sizes and the powers of these statistics for complete samples and in the presence of time censoring. Comparisons of the statistics are made according to the attainment of the assumed size of the test and their powers at various points in the parameter space. The results show that the likelihood ratio statistic appears to have the best performance in terms of the attainment of the assumed size of the test. Power comparisons show that the Rao statistic has some advantage over the Wald statistic in almost all of the space of alternatives, while the likelihood ratio statistic occupies either the first or the last position in terms of power. Overall, the likelihood ratio statistic appears to be more appropriate for the model under study, especially for small sample sizes
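The three statistics are easiest to compare in the simplest setting, an iid Poisson mean (not the parallel-systems regression model of the paper); note that they differ only in where the Fisher information is evaluated:

```python
import math

def trio_statistics(xs, theta0):
    # LR, Wald and Rao (score) statistics for H0: theta = theta0 from an
    # iid Poisson(theta) sample. All three are asymptotically chi-square(1)
    # under H0, but they disagree in finite samples.
    n = len(xs)
    xbar = sum(xs) / n
    lr = 2.0 * n * (xbar * math.log(xbar / theta0) - (xbar - theta0))
    wald = n * (xbar - theta0) ** 2 / xbar    # information at the MLE
    rao = n * (xbar - theta0) ** 2 / theta0   # information under H0
    return lr, wald, rao

xs = [2] * 30 + [3] * 20                      # toy counts, mean 2.4
lr, wald, rao = trio_statistics(xs, 2.0)
print(f"LR = {lr:.2f}, Wald = {wald:.2f}, Rao = {rao:.2f}")
```

Even here the Wald and Rao statistics bracket the LR statistic, which is the sort of finite-sample disagreement the Monte Carlo experiment in the paper quantifies.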
Maximum likelihood estimation of the attenuated ultrasound pulse
DEFF Research Database (Denmark)
Rasmussen, Klaus Bolding
1994-01-01
The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...
Planck 2013 results. XV. CMB power spectra and likelihood
Ade, P.A.R.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J.J.; Bonaldi, A.; Bonavera, L.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R.C.; Calabrese, E.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, L.Y.; Chiang, H.C.; Christensen, P.R.; Church, S.; Clements, D.L.; Colombi, S.; Colombo, L.P.L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.M.; Desert, F.X.; Dickinson, C.; Diego, J.M.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Ensslin, T.A.; Eriksen, H.K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A.A.; Franceschi, E.; Gaier, T.C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Heraud, Y.; Gjerlow, E.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Hansen, F.K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hovest, W.; Huffenberger, K.M.; Hurier, G.; Jaffe, T.R.; Jaffe, A.H.; Jewell, J.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T.S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Lattanzi, M.; Laureijs, R.J.; Lawrence, C.R.; Le Jeune, M.; Leach, S.; Leahy, J.P.; Leonardi, R.; Leon-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P.B.; Lindholm, V.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, M.; 
Marshall, D.J.; Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P.R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschenes, M.A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I.J.; Orieux, F.; Osborne, S.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G.W.; Prezeau, G.; Prunet, S.; Puget, J.L.; Rachen, J.P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubino-Martin, J.A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M.D.; Shellard, E.P.S.; Spencer, L.D.; Starck, J.L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Turler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L.A.; Wandelt, B.D.; Wehus, I.K.; White, M.; White, S.D.M.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-01-01
We present the Planck likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations. We use this likelihood to derive the Planck CMB power spectrum over three decades in multipole l, covering 2 <= l <= 2500. At l >= 50, we employ a correlated Gaussian likelihood approximation based on angular cross-spectra derived from the 100, 143 and 217 GHz channels. We validate our likelihood through an extensive suite of consistency tests, and assess the impact of residual foreground and instrumental uncertainties on cosmological parameters. We find good internal agreement among the high-l cross-spectra with residuals of a few uK^2 at l <= 1000. We compare our results with foreground-cleaned CMB maps, and with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. The best-fit LCDM cosmology is in excellent agreement with preliminary Planck polarisation spectra. The standard LCDM cosmology is well constrained b...
Robust Gaussian Process Regression with a Student-t Likelihood
Jylänki, P.P.; Vanhatalo, J.; Vehtari, A.
2011-01-01
This paper considers the robust and efficient implementation of Gaussian process regression with a Student-t observation model, which has a non-log-concave likelihood. The challenge with the Student-t model is the analytically intractable inference, which is why several approximative methods have been proposed.
Maximum-likelihood estimation of the entropy of an attractor
Schouten, J.C.; Takens, F.; van den Bleek, C.M.
In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the estimation.
A simplification of the likelihood ratio test statistic for testing ...
African Journals Online (AJOL)
The traditional likelihood ratio test statistic for testing hypotheses about goodness of fit of multinomial probabilities in one-, two- and multi-dimensional contingency tables was simplified. Advantageously, using the simplified version of the statistic to test the null hypothesis is easier and faster because calculating the expected ...
Adaptive Unscented Kalman Filter using Maximum Likelihood Estimation
DEFF Research Database (Denmark)
Mahmoudi, Zeinab; Poulsen, Niels Kjølstad; Madsen, Henrik
2017-01-01
The purpose of this study is to develop an adaptive unscented Kalman filter (UKF) by tuning the measurement noise covariance. We use the maximum likelihood estimation (MLE) and the covariance matching (CM) method to estimate the noise covariance. The multi-step prediction errors generated...
Likelihood estimation of parameters using simultaneously monitored processes
DEFF Research Database (Denmark)
Friis-Hansen, Peter; Ditlevsen, Ove Dalager
2004-01-01
The topic is maximum likelihood inference from several simultaneously monitored response processes of a structure to obtain knowledge about the parameters of other not monitored but important response processes when the structure is subject to some Gaussian load field in space and time. ... The considered example is a ship sailing with a given speed through a Gaussian wave field....
Likelihood-based inference for clustered line transect data
DEFF Research Database (Denmark)
Waagepetersen, Rasmus; Schweder, Tore
2006-01-01
The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...
Likelihood-based Dynamic Factor Analysis for Measurement and Forecasting
Jungbacker, B.M.J.P.; Koopman, S.J.
2015-01-01
We present new results for the likelihood-based analysis of the dynamic factor model. The latent factors are modelled by linear dynamic stochastic processes. The idiosyncratic disturbance series are specified as autoregressive processes with mutually correlated innovations. The new results lead to
Composite likelihood and two-stage estimation in family studies
DEFF Research Database (Denmark)
Andersen, Elisabeth Anne Wreford
2004-01-01
In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...
Reconceptualizing Social Influence in Counseling: The Elaboration Likelihood Model.
McNeill, Brian W.; Stoltenberg, Cal D.
1989-01-01
Presents Elaboration Likelihood Model (ELM) of persuasion (a reconceptualization of the social influence process) as alternative model of attitude change. Contends ELM unifies conflicting social psychology results and can potentially account for inconsistent research findings in counseling psychology. Provides guidelines on integrating…
Counseling Pretreatment and the Elaboration Likelihood Model of Attitude Change.
Heesacker, Martin
1986-01-01
Results of the application of the Elaboration Likelihood Model (ELM) to a counseling context revealed that more favorable attitudes toward counseling occurred as subjects' ego involvement increased and as intervention quality improved. Counselor credibility affected the degree to which subjects' attitudes reflected argument quality differences.…
Cases in which ancestral maximum likelihood will be confusingly misleading.
Handelman, Tomer; Chor, Benny
2017-05-07
Ancestral maximum likelihood (AML) is a phylogenetic tree reconstruction criterion that "lies between" maximum parsimony (MP) and maximum likelihood (ML). ML has long been known to be statistically consistent. On the other hand, Felsenstein (1978) showed that MP is statistically inconsistent, and even positively misleading: There are cases where the parsimony criterion, applied to data generated according to one tree topology, will be optimized on a different tree topology. The question of whether AML is statistically consistent or not has been open for a long time. Mossel et al. (2009) have shown that AML can "shrink" short tree edges, resulting in a star tree with no internal resolution, which yields a better AML score than the original (resolved) model. This result implies that AML is statistically inconsistent, but not that it is positively misleading, because the star tree is compatible with any other topology. We show that AML is confusingly misleading: For some simple, four taxa (resolved) tree, the ancestral likelihood optimization criterion is maximized on an incorrect (resolved) tree topology, as well as on a star tree (both with specific edge lengths), while the tree with the original, correct topology, has strictly lower ancestral likelihood. Interestingly, the two short edges in the incorrect, resolved tree topology are of length zero, and are not adjacent, so this resolved tree is in fact a simple path. While for MP, the underlying phenomenon can be described as long edge attraction, it turns out that here we have long edge repulsion. Copyright © 2017. Published by Elsevier Ltd.
Multilevel maximum likelihood estimation with application to covariance matrices
Czech Academy of Sciences Publication Activity Database
Turčičová, Marie; Mandel, J.; Eben, Kryštof
Published online: 23 January 2018. ISSN 0361-0926. R&D Projects: GA ČR GA13-34856S. Institutional support: RVO:67985807. Keywords: Fisher information; high dimension; hierarchical maximum likelihood; nested parameter spaces; spectral diagonal covariance model; sparse inverse covariance model. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.311, year: 2016
Outlier Detection in Nonlinear Regression with the Likelihood Displacement Statistical Method
Directory of Open Access Journals (Sweden)
Siti Tabi'atul Hasanah
2012-11-01
An outlier is an observation that differs markedly (is extreme) compared with the other observations, or data that do not follow the general pattern of the model. Sometimes outliers provide information that cannot be provided by other data, which is why outliers should not simply be eliminated; an outlier can also be an influential observation. There are many methods that can be used to detect outliers. Previous studies addressed outlier detection in linear regression; here, outlier detection is developed for nonlinear regression, specifically multiplicative nonlinear regression. Detection uses the likelihood displacement (LD) statistic, a method that detects outliers by removing the suspected outlying data and measuring the resulting change in the maximised likelihood. The parameters are estimated by the maximum likelihood method, giving the maximum likelihood estimates, and observations with a large likelihood displacement are flagged as suspected outliers. The accuracy of the LD method in detecting outliers is then shown by comparing the MSE of LD with the MSE of the regression in general; the test statistic used is Λ. The initial hypothesis is rejected when an observation proves to be an outlier.
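One common form of the likelihood displacement is LD_i = 2[l(theta_hat) - l(theta_hat_(i))]: the drop in the full-data log-likelihood when the case-deleted fit is plugged back in. The sketch below applies it to simple linear regression with a planted outlier and a known unit error variance; it illustrates the idea, not the multiplicative nonlinear model of the paper:

```python
def fit_line(xs, ys):
    # Ordinary least-squares fit y = a + b*x (the Gaussian MLE).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def loglik(a, b, xs, ys):
    # Gaussian log-likelihood with unit variance (constants dropped).
    return -0.5 * sum((y - a - b * x) ** 2 for x, y in zip(xs, ys))

def likelihood_displacement(xs, ys):
    a_hat, b_hat = fit_line(xs, ys)
    l_full = loglik(a_hat, b_hat, xs, ys)
    lds = []
    for i in range(len(xs)):
        xd, yd = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        a_i, b_i = fit_line(xd, yd)
        # Full-data likelihood drop when the case-deleted fit is plugged in.
        lds.append(2.0 * (l_full - loglik(a_i, b_i, xs, ys)))
    return lds

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.1, 2.0, 2.9, 4.2, 5.0, 12.0]   # last point is a planted outlier
lds = likelihood_displacement(xs, ys)
print("flagged case:", max(range(len(lds)), key=lambda i: lds[i]))
```

The planted outlier dominates the displacement values, which is exactly the diagnostic the method relies on.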
Accelerated radial Fourier-velocity encoding using compressed sensing
International Nuclear Information System (INIS)
Hilbert, Fabian; Hahn, Dietbert
2014-01-01
Purpose: Phase Contrast Magnetic Resonance Imaging (MRI) is a tool for non-invasive determination of flow velocities inside blood vessels. Because Phase Contrast MRI only measures a single mean velocity per voxel, it is only applicable to vessels significantly larger than the voxel size. In contrast, Fourier Velocity Encoding measures the entire velocity distribution inside a voxel, but requires a much longer acquisition time. For accurate diagnosis of stenosis in vessels on the scale of spatial resolution, it is important to know the velocity distribution of a voxel. Our aim was to determine velocity distributions with accelerated Fourier Velocity Encoding in an acquisition time required for a conventional Phase Contrast image. Materials and Methods: We imaged the femoral artery of healthy volunteers with ECG-triggered, radial CINE acquisition. Data acquisition was accelerated by undersampling, while missing data were reconstructed by Compressed Sensing. Velocity spectra of the vessel were evaluated by high resolution Phase Contrast images and compared to spectra from fully sampled and undersampled Fourier Velocity Encoding. By means of undersampling, it was possible to reduce the scan time for Fourier Velocity Encoding to the duration required for a conventional Phase Contrast image. Results: Acquisition time for a fully sampled data set with 12 different Velocity Encodings was 40 min. By applying a 12.6-fold retrospective undersampling, a data set was generated equal to 3:10 min acquisition time, which is similar to a conventional Phase Contrast measurement. Velocity spectra from fully sampled and undersampled Fourier Velocity Encoded images are in good agreement and show the same maximum velocities as compared to velocity maps from Phase Contrast measurements. Conclusion: Compressed Sensing proved to reliably reconstruct Fourier Velocity Encoded data. Our results indicate that Fourier Velocity Encoding allows an accurate determination of the velocity
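The distinction the abstract draws can be sketched numerically: Fourier Velocity Encoding samples the voxel signal at several velocity-encoding steps k_v, and an inverse FFT recovers the full in-voxel velocity distribution, whereas Phase Contrast reduces it to a single mean. The grid and the bimodal toy spectrum below are made up:

```python
import numpy as np

n = 64
v = np.linspace(-100.0, 100.0, n, endpoint=False)  # velocity axis, cm/s
dv = v[1] - v[0]
# Toy in-voxel velocity distribution: two flow components (e.g. a jet
# plus slower flow), normalised to unit total signal.
true = np.exp(-0.5 * ((v - 40) / 8) ** 2) + 0.5 * np.exp(-0.5 * ((v - 10) / 5) ** 2)
true /= true.sum()

# Each velocity-encoding step k accumulates phase proportional to v:
kv = np.fft.fftfreq(n, d=dv)
signal = np.array([(true * np.exp(-2j * np.pi * k * v)).sum() for k in kv])

# Demodulate the phase ramp from the nonzero velocity origin, then invert:
recovered = np.fft.ifft(signal * np.exp(2j * np.pi * kv * v[0])).real

mean_v = (v * true).sum()  # the single value Phase Contrast would report
print(f"mean velocity {mean_v:.1f} cm/s; "
      f"max |error| {abs(recovered - true).max():.2e}")
```

The bimodal spectrum is recovered intact, while the single Phase Contrast mean falls between the two components and would hide the fast jet, which is the diagnostic motivation given in the abstract.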
Accelerated radial Fourier-velocity encoding using compressed sensing.
Hilbert, Fabian; Wech, Tobias; Hahn, Dietbert; Köstler, Herbert
2014-09-01
Phase Contrast Magnetic Resonance Imaging (MRI) is a tool for non-invasive determination of flow velocities inside blood vessels. Because Phase Contrast MRI only measures a single mean velocity per voxel, it is only applicable to vessels significantly larger than the voxel size. In contrast, Fourier Velocity Encoding measures the entire velocity distribution inside a voxel, but requires a much longer acquisition time. For accurate diagnosis of stenosis in vessels on the scale of spatial resolution, it is important to know the velocity distribution of a voxel. Our aim was to determine velocity distributions with accelerated Fourier Velocity Encoding in an acquisition time required for a conventional Phase Contrast image. We imaged the femoral artery of healthy volunteers with ECG-triggered, radial CINE acquisition. Data acquisition was accelerated by undersampling, while missing data were reconstructed by Compressed Sensing. Velocity spectra of the vessel were evaluated by high resolution Phase Contrast images and compared to spectra from fully sampled and undersampled Fourier Velocity Encoding. By means of undersampling, it was possible to reduce the scan time for Fourier Velocity Encoding to the duration required for a conventional Phase Contrast image. Acquisition time for a fully sampled data set with 12 different Velocity Encodings was 40 min. By applying a 12.6-fold retrospective undersampling, a data set was generated equal to 3:10 min acquisition time, which is similar to a conventional Phase Contrast measurement. Velocity spectra from fully sampled and undersampled Fourier Velocity Encoded images are in good agreement and show the same maximum velocities as compared to velocity maps from Phase Contrast measurements. Compressed Sensing proved to reliably reconstruct Fourier Velocity Encoded data. Our results indicate that Fourier Velocity Encoding allows an accurate determination of the velocity distribution in vessels in the order of the voxel size. Thus
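The Compressed Sensing reconstruction used above can be illustrated in a toy setting. The sketch below relies on strong simplifying assumptions (a 1-D signal, sparsity in the sample domain, a plain ISTA solver); it is not the authors' radial CINE pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 128, 5, 48                 # signal length, sparsity, measurements

# Sparse complex test signal and an undersampled unitary DFT operator
x_true = np.zeros(n, dtype=complex)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
F = np.fft.fft(np.eye(n)) / np.sqrt(n)          # unitary DFT matrix
A = F[rng.choice(n, m, replace=False)]          # keep m random "measured" rows
y = A @ x_true

def soft(z, t):
    """Complex soft-thresholding, the proximal map of the l1 norm."""
    mag = np.abs(z)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * z, 0.0)

# ISTA: gradient step on ||Ax - y||^2, then shrink (step size 1 is valid
# because the rows come from a unitary matrix, so ||A|| <= 1)
x = np.zeros(n, dtype=complex)
for _ in range(500):
    x = soft(x + A.conj().T @ (y - A @ x), 0.01)

print(np.max(np.abs(x - x_true)))    # small: the sparse signal is recovered
```

With far fewer measurements than signal samples (48 of 128), the sparse signal is still recovered, which is the mechanism that lets the undersampled acquisition above match a fully sampled one.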
Gaussian copula as a likelihood function for environmental models
Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.
2017-12-01
Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function, because of their favourable analytical properties. The Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. A problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an
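The idea of using a Gaussian copula as a likelihood can be sketched as follows. The logistic margin and AR(1) correlation below are illustrative assumptions, not the semiparametric margins or calibrated dependence of the paper:

```python
import numpy as np
from statistics import NormalDist

def logistic_cdf(x):
    return 1.0 / (1.0 + np.exp(-x))

def logistic_logpdf(x):
    return -x - 2.0 * np.log1p(np.exp(-x))

def gaussian_copula_loglik(errors, R):
    """Log-likelihood of an error vector under a Gaussian copula with
    correlation matrix R and a (purely illustrative) logistic margin."""
    nd = NormalDist()
    u = logistic_cdf(errors)                       # probability integral transform
    z = np.array([nd.inv_cdf(ui) for ui in u])     # normal scores
    _, logdet = np.linalg.slogdet(R)
    Rinv = np.linalg.inv(R)
    copula = -0.5 * logdet - 0.5 * z @ (Rinv - np.eye(len(z))) @ z
    return copula + np.sum(logistic_logpdf(errors))

# Simulate AR(1)-dependent model errors from the copula model itself
rng = np.random.default_rng(1)
n, rho = 50, 0.6
idx = np.arange(n)
R = rho ** np.abs(np.subtract.outer(idx, idx))     # assumed AR(1) correlation
nd = NormalDist()
scores = np.linalg.cholesky(R) @ rng.standard_normal(n)
u_sim = np.array([nd.cdf(s) for s in scores])
errors = np.log(u_sim / (1.0 - u_sim))             # logistic quantile transform
ll = gaussian_copula_loglik(errors, R)
print(ll)
```

The copula term handles the autocorrelation, while the margin can be as heavy-tailed or skewed as the past errors suggest, which is why no transformation of the data is needed.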
Modeling gene expression measurement error: a quasi-likelihood approach
Directory of Open Access Journals (Sweden)
Strimmer Korbinian
2003-03-01
Full Text Available Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also
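A minimal sketch of the quasi-likelihood idea: only a variance function is specified, here the constant-CV choice V(t) = t², which yields a gamma-type quasi-deviance. The variance structure and data below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def quasi_deviance(y, mu):
    """Quasi-deviance d(y, mu) = -2 * int_y^mu (y - t) / V(t) dt for the
    variance function V(t) = t^2 (constant coefficient of variation --
    one plausible quadratic structure, not the paper's fitted model)."""
    return 2.0 * (y / mu - 1.0 - np.log(y / mu))

# Toy intensity data with roughly constant CV and mean 100
rng = np.random.default_rng(7)
y = rng.gamma(shape=25.0, scale=4.0, size=500)

# Quasi-likelihood estimation: minimize the total quasi-deviance in mu
grid = np.linspace(50.0, 150.0, 1001)
total = np.array([quasi_deviance(y, m).sum() for m in grid])
mu_hat = grid[np.argmin(total)]
print(mu_hat, y.mean())   # the quasi-score zero is the sample mean here
```

No full distribution was assumed, yet the quasi-deviance behaves like a proper deviance: it is nonnegative, zero at y = mu, and its minimizer solves the quasi-score equation.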
Exclusion probabilities and likelihood ratios with applications to mixtures.
Slooten, Klaas-Jan; Egeland, Thore
2016-01-01
The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework, including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.
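For intuition, the RMNE at a single locus without dropout or drop-in is just the squared sum of the mixture-allele frequencies, and independent loci multiply. A minimal sketch with hypothetical allele frequencies:

```python
from math import prod

def rmne_locus(mixture_alleles, freqs):
    """P(Random Man Not Excluded) at one locus, no dropout or drop-in:
    both alleles of a random person must lie in the mixture's allele set."""
    p = sum(freqs[a] for a in mixture_alleles)
    return p ** 2

# Hypothetical two-locus mixture profile
freqs1 = {"A": 0.1, "B": 0.2, "C": 0.3, "D": 0.4}
freqs2 = {"E": 0.25, "F": 0.25, "G": 0.5}
loci = [({"A", "B", "C"}, freqs1), ({"E", "F"}, freqs2)]
rmne = prod(rmne_locus(m, f) for m, f in loci)
print(rmne)   # 0.6**2 * 0.5**2 = 0.09
```

By the relationship proved by the authors, the expected LR for a true donor is then at least 1/RMNE, about 11.1 for this toy profile.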
No evidence for bulk velocity from type Ia supernovae
Energy Technology Data Exchange (ETDEWEB)
Huterer, Dragan; Shafer, Daniel L. [Department of Physics, University of Michigan, 450 Church Street, Ann Arbor, MI 48109 (United States); Schmidt, Fabian, E-mail: huterer@umich.edu, E-mail: dlshafer@umich.edu, E-mail: fabians@mpa-garching.mpg.de [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85748 Garching (Germany)
2015-12-01
We revisit the effect of peculiar velocities on low-redshift type Ia supernovae. Velocities introduce an additional guaranteed source of correlations between supernova magnitudes that should be considered in all analyses of nearby supernova samples but has largely been neglected in the past. Applying a likelihood analysis to the latest compilation of nearby supernovae, we find no evidence for the presence of these correlations, although, given the significant noise, the data is also consistent with the correlations predicted for the standard ΛCDM model. We then consider the dipolar component of the velocity correlations—the frequently studied "bulk velocity"—and explicitly demonstrate that including the velocity correlations in the data covariance matrix is crucial for drawing correct and unambiguous conclusions about the bulk flow. In particular, current supernova data is consistent with no excess bulk flow on top of what is expected in ΛCDM and effectively captured by the covariance. We further clarify the nature of the apparent bulk flow that is inferred when the velocity covariance is ignored. We show that a significant fraction of this quantity is expected to be noise bias due to uncertainties in supernova magnitudes and not any physical peculiar motion.
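The covariance argument can be sketched with a toy Gaussian likelihood. The "velocity" term below is an arbitrary stand-in with long-range positive correlations, not the actual ΛCDM velocity covariance:

```python
import numpy as np

def gauss_loglike(resid, C):
    """Gaussian log-likelihood of magnitude residuals with covariance C."""
    _, logdet = np.linalg.slogdet(2.0 * np.pi * C)
    return -0.5 * (resid @ np.linalg.solve(C, resid) + logdet)

rng = np.random.default_rng(2)
n = 30
C_noise = 0.15 ** 2 * np.eye(n)       # diagonal magnitude scatter
idx = np.arange(n)
C_vel = 0.05 ** 2 * 0.8 ** np.abs(np.subtract.outer(idx, idx))  # toy correlated term
resid = np.linalg.cholesky(C_noise + C_vel) @ rng.standard_normal(n)

# Evaluating the same residuals with and without the correlated term
ll_full = gauss_loglike(resid, C_noise + C_vel)
ll_diag = gauss_loglike(resid, C_noise)
print(ll_full, ll_diag)
```

Dropping the off-diagonal block changes the likelihood of the very same residuals, which is the mechanism by which ignoring the velocity covariance distorts bulk-flow inference.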
No evidence for bulk velocity from type Ia supernovae
International Nuclear Information System (INIS)
Huterer, Dragan; Shafer, Daniel L.; Schmidt, Fabian
2015-01-01
We revisit the effect of peculiar velocities on low-redshift type Ia supernovae. Velocities introduce an additional guaranteed source of correlations between supernova magnitudes that should be considered in all analyses of nearby supernova samples but has largely been neglected in the past. Applying a likelihood analysis to the latest compilation of nearby supernovae, we find no evidence for the presence of these correlations, although, given the significant noise, the data is also consistent with the correlations predicted for the standard ΛCDM model. We then consider the dipolar component of the velocity correlations—the frequently studied "bulk velocity"—and explicitly demonstrate that including the velocity correlations in the data covariance matrix is crucial for drawing correct and unambiguous conclusions about the bulk flow. In particular, current supernova data is consistent with no excess bulk flow on top of what is expected in ΛCDM and effectively captured by the covariance. We further clarify the nature of the apparent bulk flow that is inferred when the velocity covariance is ignored. We show that a significant fraction of this quantity is expected to be noise bias due to uncertainties in supernova magnitudes and not any physical peculiar motion.
Deterministic hydrodynamics: Taking blood apart
Davis, John A.; Inglis, David W.; Morton, Keith J.; Lawrence, David A.; Huang, Lotien R.; Chou, Stephen Y.; Sturm, James C.; Austin, Robert H.
2006-10-01
We show the fractionation of whole blood components and isolation of blood plasma with no dilution by using a continuous-flow deterministic array that separates blood components by their hydrodynamic size, independent of their mass. We use the deterministic-array technology we developed, which separates white blood cells, red blood cells, and platelets from blood plasma at flow velocities of 1,000 μm/sec and volume rates up to 1 μl/min. We verified by flow cytometry that an array using focused injection removed 100% of the lymphocytes and monocytes from the main red blood cell and platelet stream. Using a second design, we demonstrated the separation of blood plasma from the blood cells (white, red, and platelets) with virtually no dilution of the plasma and no cellular contamination of the plasma. Keywords: cells | plasma | separation | microfabrication
A composite likelihood approach for spatially correlated survival data
Paik, Jane; Ying, Zhiliang
2013-01-01
The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450
A composite likelihood approach for spatially correlated survival data.
Paik, Jane; Ying, Zhiliang
2013-01-01
The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory.
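A hedged sketch of the pairwise composite likelihood with the FGM copula: here the dependence parameter is a single constant with unit pair weights, whereas the paper models it as a function of geographic and demographic distances:

```python
import numpy as np

def fgm_log_density(u, v, theta):
    """Log pair density of the FGM copula with uniform margins:
    c(u, v; theta) = 1 + theta * (1 - 2u) * (1 - 2v),  |theta| <= 1."""
    return np.log(1.0 + theta * (1.0 - 2.0 * u) * (1.0 - 2.0 * v))

def composite_loglik(u, theta, w):
    """Weighted sum of pairwise FGM log densities over all pairs; in the
    paper the dependence would further vary with pairwise distance."""
    total = 0.0
    for i in range(len(u)):
        for j in range(i + 1, len(u)):
            total += w[i, j] * fgm_log_density(u[i], u[j], theta)
    return total

# Toy data: uniforms standing in for survival probabilities S(t_i)
rng = np.random.default_rng(3)
u = rng.uniform(size=40)
w = np.ones((40, 40))                      # unit pair weights for simplicity

# Maximize the composite likelihood in the scalar dependence parameter
thetas = np.linspace(-0.99, 0.99, 199)
theta_hat = thetas[np.argmax([composite_loglik(u, t, w) for t in thetas])]
print(theta_hat)
```

Because only bivariate densities are evaluated, no high-dimensional joint distribution ever needs to be specified, which is what makes the composite approach tractable for spatially dependent failure times.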
Secondary Analysis under Cohort Sampling Designs Using Conditional Likelihood
Directory of Open Access Journals (Sweden)
Olli Saarela
2012-01-01
Full Text Available Under cohort sampling designs, additional covariate data are collected on cases of a specific type and a randomly selected subset of noncases, primarily for the purpose of studying associations with a time-to-event response of interest. With such data available, an interest may arise to reuse them for studying associations between the additional covariate data and a secondary non-time-to-event response variable, usually collected for the whole study cohort at the outset of the study. Following earlier literature, we refer to such a situation as secondary analysis. We outline a general conditional likelihood approach for secondary analysis under cohort sampling designs and discuss the specific situations of case-cohort and nested case-control designs. We also review alternative methods based on full likelihood and inverse probability weighting. We compare the alternative methods for secondary analysis in two simulated settings and apply them in a real-data example.
GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS
Directory of Open Access Journals (Sweden)
S. Sridevi
2013-02-01
Full Text Available Speckle noise is the most prevalent noise in clinical ultrasound images. It appears as light and dark spots and obscures the underlying pixel intensities. In fetal ultrasound images, edges and local fine details are especially important for obstetricians and gynecologists carrying out prenatal diagnosis of congenital heart disease. A robust despeckling filter therefore has to be devised that efficiently suppresses speckle noise while simultaneously preserving these features. The proposed filter generalizes the Rayleigh maximum likelihood filter by exploiting statistical measures as tuning parameters and by using different shapes of quadrilateral kernels to estimate the noise-free pixel from the neighborhood. The performance of several filters, namely the Median, Kuwahara, Frost, Homogeneous mask, and Rayleigh maximum likelihood filters, is compared with that of the proposed filter in terms of PSNR and image profile. The proposed filter surpasses the conventional filters.
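The core Rayleigh ML step can be sketched with a plain square window (the paper's quadrilateral kernel shapes are not reproduced): each pixel is replaced by the maximum likelihood estimate of the Rayleigh scale over its neighborhood:

```python
import numpy as np

def rayleigh_ml_filter(img, k=5):
    """Replace each pixel by the Rayleigh ML scale estimate
    sigma_hat = sqrt(sum(x^2) / (2n)) over a k x k neighborhood.
    (Square kernel only -- a stand-in for the quadrilateral kernels.)"""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + k, j:j + k]
            out[i, j] = np.sqrt(np.sum(win ** 2) / (2 * win.size))
    return out

# Speckled constant patch: Rayleigh noise with scale 10
rng = np.random.default_rng(4)
noisy = rng.rayleigh(scale=10.0, size=(64, 64))
smoothed = rayleigh_ml_filter(noisy, k=5)
print(noisy.std(), smoothed.std())   # variance drops sharply after filtering
```

The ML estimate under the Rayleigh model is the natural "noise-free" value for fully developed speckle; the paper's contribution lies in the kernel shapes and tuning, which this sketch does not attempt.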
Likelihood inference for a nonstationary fractional autoregressive model
DEFF Research Database (Denmark)
Johansen, Søren; Ørregård Nielsen, Morten
2010-01-01
This paper discusses model-based inference in an autoregressive model for fractional processes which allows the process to be fractional of order d or d-b. Fractional differencing involves infinitely many past values and because we are interested in nonstationary processes we model the data X_{1},...,X_{T} given the initial values X_{-n}, n=0,1,..., as is usually done. The initial values are not modeled but assumed to be bounded. This represents a considerable generalization relative to all previous work where it is assumed that initial values are zero. For the statistical analysis we assume...... the conditional Gaussian likelihood and for the probability analysis we also condition on initial values but assume that the errors in the autoregressive model are i.i.d. with suitable moment conditions. We analyze the conditional likelihood and its derivatives as stochastic processes in the parameters, including...
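Fractional differencing itself is easy to make concrete. A sketch of the (1 - L)^d expansion, here with the simpler zero-initial-values convention rather than the paper's bounded nonzero initial values:

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients of (1 - L)^d = sum_k pi_k L^k,
    via the recursion pi_k = pi_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Fractionally difference x, treating pre-sample values as zero."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])

x = np.cumsum(np.ones(6))          # 1, 2, 3, 4, 5, 6
print(frac_diff_weights(1.0, 4))   # [1, -1, 0, 0]: d = 1 is ordinary differencing
print(frac_diff(x, 1.0))           # [1, 1, 1, 1, 1, 1]
```

For non-integer d the weights decay hyperbolically instead of truncating, which is exactly why infinitely many past values enter and why the treatment of initial values matters.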
Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager
Energy Technology Data Exchange (ETDEWEB)
Lowell, A. W.; Boggs, S. E; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C. [Space Sciences Laboratory, University of California, Berkeley (United States); Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y. [Institute of Astronomy, National Tsing Hua University, Taiwan (China); Jean, P.; Ballmoos, P. von [IRAP Toulouse (France); Lin, C.-H. [Institute of Physics, Academia Sinica, Taiwan (China); Amman, M. [Lawrence Berkeley National Laboratory (United States)
2017-10-20
Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
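The unbinned approach can be sketched for an ideal polarimeter: maximize the likelihood of the modulated azimuthal density directly over the events instead of fitting a sinusoid to a histogram. The modulation values and the grid search below are illustrative:

```python
import numpy as np

def neg_loglike(params, phi):
    """Unbinned negative log-likelihood for the azimuthal scattering-angle
    density p(phi) = (1 + mu * cos(2 * (phi - phi0))) / (2 * pi)."""
    mu, phi0 = params
    return -np.sum(np.log((1.0 + mu * np.cos(2.0 * (phi - phi0))) / (2.0 * np.pi)))

# Simulate scattering angles with mu = 0.3, phi0 = 0.5 by rejection sampling
rng = np.random.default_rng(5)
mu_true, phi0_true = 0.3, 0.5
prop = rng.uniform(0.0, 2.0 * np.pi, 20000)
accept = rng.uniform(0.0, 1.0 + mu_true, prop.size) < 1.0 + mu_true * np.cos(2.0 * (prop - phi0_true))
phi = prop[accept]

# Grid-search MLE over (mu, phi0) -- a stand-in for a proper optimizer
mus = np.linspace(0.0, 0.9, 61)
phi0s = np.linspace(0.0, np.pi, 61)
nll = np.array([[neg_loglike((m, p), phi) for p in phi0s] for m in mus])
i, j = np.unravel_index(np.argmin(nll), nll.shape)
print(mus[i], phi0s[j])   # near the true values 0.3 and 0.5
```

Because every event contributes its exact angle, no information is lost to binning, which is the source of the MDP improvement the abstract quotes; a real instrument additionally requires the nonideal response handled in the paper.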
Physical constraints on the likelihood of life on exoplanets
Lingam, Manasvi; Loeb, Abraham
2018-04-01
One of the most fundamental questions in exoplanetology is to determine whether a given planet is habitable. We estimate the relative likelihood of a planet's propensity towards habitability by considering key physical characteristics such as the role of temperature on ecological and evolutionary processes, and atmospheric losses via hydrodynamic escape and stellar wind erosion. From our analysis, we demonstrate that Earth-sized exoplanets in the habitable zone around M-dwarfs seemingly display much lower prospects of being habitable relative to Earth, owing to the higher incident ultraviolet fluxes and closer distances to the host star. We illustrate our results by specifically computing the likelihood (of supporting life) for the recently discovered exoplanets, Proxima b and TRAPPIST-1e, which we find to be several orders of magnitude smaller than that of Earth.
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.
Theobald, Douglas L; Wuttke, Deborah S
2006-09-01
THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.
Deformation of log-likelihood loss function for multiclass boosting.
Kanamori, Takafumi
2010-09-01
The purpose of this paper is to study loss functions in multiclass classification. In classification problems, the decision function is estimated by minimizing an empirical loss function, and then, the output label is predicted by using the estimated decision function. We propose a class of loss functions which is obtained by a deformation of the log-likelihood loss function. There are four main reasons why we focus on the deformed log-likelihood loss function: (1) this is a class of loss functions which has not been deeply investigated so far, (2) in terms of computation, a boosting algorithm with a pseudo-loss is available to minimize the proposed loss function, (3) the proposed loss functions provide a clear correspondence between the decision functions and conditional probabilities of output labels, (4) the proposed loss functions satisfy the statistical consistency of the classification error rate which is a desirable property in classification problems. Based on (3), we show that the deformed log-likelihood loss provides a model of mislabeling which is useful as a statistical model of medical diagnostics. We also propose a robust loss function against outliers in multiclass classification based on our approach. The robust loss function is a natural extension of the existing robust loss function for binary classification. A model of mislabeling and a robust loss function are useful to cope with noisy data. Some numerical studies are presented to show the robustness of the proposed loss function. A mathematical characterization of the deformed log-likelihood loss function is also presented.
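Since the proposed class is built by deforming the ordinary log-likelihood loss, it may help to pin that baseline down; the deformation itself is not reproduced here:

```python
import numpy as np

def log_likelihood_loss(F, y):
    """Plain multiclass log-likelihood loss: -log softmax(F)[y], summed
    over examples. The paper studies deformations of this baseline."""
    F = F - F.max(axis=1, keepdims=True)            # numerical stability
    logp = F - np.log(np.exp(F).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].sum()

# The loss links decision functions to conditional class probabilities
F = np.array([[2.0, 0.0, 0.0]])
p = np.exp(F[0]) / np.exp(F[0]).sum()
loss = log_likelihood_loss(F, np.array([0]))
print(p, loss)   # loss equals -log p[0]
```

Property (3) in the abstract refers exactly to this link: the decision function determines the conditional probabilities through the softmax, and the deformation changes that correspondence in a controlled way.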
Bayesian interpretation of Generalized empirical likelihood by maximum entropy
Rochet , Paul
2011-01-01
We study a parametric estimation problem related to moment condition models. As an alternative to the generalized empirical likelihood (GEL) and the generalized method of moments (GMM), a Bayesian approach to the problem can be adopted, extending the MEM procedure to parametric moment conditions. We show in particular that a large number of GEL estimators can be interpreted as a maximum entropy solution. Moreover, we provide a more general field of applications by proving the method to be rob...
Menyoal Elaboration Likelihood Model (ELM) dan Teori Retorika
Yudi Perbawaningsih
2012-01-01
Abstract: Persuasion is a communication process to establish or change attitudes, which can be understood through the theory of Rhetoric and the theory of the Elaboration Likelihood Model (ELM). This study elaborates these theories in a Public Lecture series which aimed to persuade students in choosing their concentration of study. The result shows that in terms of persuasion effectiveness it is not quite relevant to separate the message and its source. The quality of the source is determined by the quality of the ...
Corporate governance effect on financial distress likelihood: Evidence from Spain
Directory of Open Access Journals (Sweden)
Montserrat Manzaneque
2016-01-01
Full Text Available The paper explores some mechanisms of corporate governance (ownership and board characteristics) in Spanish listed companies and their impact on the likelihood of financial distress. An empirical study was conducted between 2007 and 2012 using a matched-pairs research design with 308 observations, half classified as distressed and half as non-distressed. Based on the previous study by Pindado, Rodrigues, and De la Torre (2008), a broader concept of bankruptcy is used to define business failure. Employing several conditional logistic models, and in line with other previous studies on bankruptcy, the results confirm that in difficult situations prior to bankruptcy, the impact of board ownership and the proportion of independent directors on business failure likelihood is similar to that exerted in more extreme situations. The results go one step further and reveal a negative relationship between board size and the likelihood of financial distress. This result is interpreted as a form of creating diversity and improving access to information and resources, especially in contexts where ownership is highly concentrated and large shareholders have great power to influence the board structure. However, the results confirm that ownership concentration does not have a significant impact on financial distress likelihood in the Spanish context. It is argued that large shareholders are passive as regards enhanced monitoring of management and, alternatively, do not have enough incentives to hold back financial distress. These findings have important implications in the Spanish context, where several changes in the regulatory listing requirements have been carried out with respect to corporate governance, and where there is no empirical evidence regarding this issue.
Maximum Likelihood, Consistency and Data Envelopment Analysis: A Statistical Foundation
Rajiv D. Banker
1993-01-01
This paper provides a formal statistical basis for the efficiency evaluation techniques of data envelopment analysis (DEA). DEA estimators of the best practice monotone increasing and concave production function are shown to be also maximum likelihood estimators if the deviation of actual output from the efficient output is regarded as a stochastic variable with a monotone decreasing probability density function. While the best practice frontier estimator is biased below the theoretical front...
Multiple Improvements of Multiple Imputation Likelihood Ratio Tests
Chan, Kin Wai; Meng, Xiao-Li
2017-01-01
Multiple imputation (MI) inference handles missing data by first properly imputing the missing values $m$ times, and then combining the $m$ analysis results from applying a complete-data procedure to each of the completed datasets. However, the existing method for combining likelihood ratio tests has multiple defects: (i) the combined test statistic can be negative in practice when the reference null distribution is a standard $F$ distribution; (ii) it is not invariant to re-parametrization; ...
Maximum likelihood convolutional decoding (MCD) performance due to system losses
Webster, L.
1976-01-01
A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.
Menyoal Elaboration Likelihood Model (ELM) Dan Teori Retorika
Perbawaningsih, Yudi
2012-01-01
Persuasion is a communication process to establish or change attitudes, which can be understood through the theory of Rhetoric and the theory of the Elaboration Likelihood Model (ELM). This study elaborates these theories in a Public Lecture series which aimed to persuade students in choosing their concentration of study. The result shows that in terms of persuasion effectiveness it is not quite relevant to separate the message and its source. The quality of the source is determined by the quality of the mess...
Penggunaan Elaboration Likelihood Model dalam Menganalisis Penerimaan Teknologi Informasi
vitrian, vitrian2
2010-01-01
This article discusses some technology acceptance models in an organization. A thorough analysis of how technology comes to be accepted helps managers plan the implementation of new technology and make sure that the new technology can enhance the organization's performance. The Elaboration Likelihood Model (ELM) is the one which sheds light on some behavioral factors in the acceptance of information technology. The basic tenet of the ELM states that human behavior in principle can be influenced through central r...
Statistical Bias in Maximum Likelihood Estimators of Item Parameters.
1982-04-01
Examines the bias in the maximum likelihood estimators of item parameters.
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and
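The mechanics of empirical likelihood can be sketched for the simplest estimating equation, a single mean constraint; the paper's combination of several working-model equations follows the same dual computation:

```python
import numpy as np

def el_log_ratio(x, mu, iters=50):
    """-2 log empirical likelihood ratio for the mean constraint
    sum_i w_i * (x_i - mu) = 0, via the dual parameter lam with
    w_i = 1 / (n * (1 + lam * (x_i - mu)))."""
    g = x - mu
    lam = 0.0
    for _ in range(iters):                  # Newton steps on the dual
        denom = 1.0 + lam * g
        score = np.sum(g / denom)
        hess = -np.sum(g ** 2 / denom ** 2)
        lam -= score / hess
    return 2.0 * np.sum(np.log(1.0 + lam * g))

rng = np.random.default_rng(6)
x = rng.standard_normal(200) + 1.0
stat_true = el_log_ratio(x, 1.0)    # approximately chi-square(1) under the truth
stat_off = el_log_ratio(x, 1.3)     # grows as mu moves away from the data
print(stat_true, stat_off)
```

When there are more estimating equations than parameters, as in the paper's setup, the same dual solve runs over a vector lam, and the extra equations sharpen the profile rather than overdetermine it.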
Democracy, Autocracy and the Likelihood of International Conflict
Tangerås, Thomas
2008-01-01
This is a game-theoretic analysis of the link between regime type and international conflict. The democratic electorate can credibly punish the leader for bad conflict outcomes, whereas the autocratic selectorate cannot. For fear of being thrown out of office, democratic leaders are (i) more selective about the wars they initiate and (ii) on average win more of the wars they start. Foreign policy behaviour is found to display strategic complementarities. The likelihood of interstate war, ...
Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood
Directory of Open Access Journals (Sweden)
Yunquan Song
2014-01-01
Full Text Available Empirical likelihood is a very popular method and has been widely used in the fields of artificial intelligence (AI) and data mining, as tablets, mobile applications, and social media dominate the technology landscape. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously, when the model is defined by moment restrictions in which some are possibly misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions and at the same time its estimator is as efficient as the empirical likelihood estimator obtained by all correct moment conditions. Moreover, unlike the GMM, our proposed method allows us to construct confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide some data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite sample properties of the estimators. Also, a real-life example is carried out to illustrate the new methodology.
Approximate maximum likelihood estimation for population genetic inference.
Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas
2017-11-27
In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, these methods such as Approximate Bayesian Computation (ABC) can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable efforts into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
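The simulated-gradient ascent described can be sketched as a Kiefer-Wolfowitz style iteration on a toy model (one parameter, the sample mean as the summary statistic, negative squared distance to the observed summary as the objective). The step-size and finite-difference schedules below are illustrative choices, not the paper's tuning guidelines.

```python
import random

def simulate_summary(theta, n=200, rng=random):
    # toy model: observations ~ Normal(theta, 1); summary statistic = sample mean
    return sum(rng.gauss(theta, 1.0) for _ in range(n)) / n

def stochastic_approx(observed_summary, theta0, iters=500, seed=1):
    """Kiefer-Wolfowitz style ascent on a simulated objective: move along a
    finite-difference estimate of the gradient, with decaying step sizes."""
    rng = random.Random(seed)
    theta = theta0
    for k in range(1, iters + 1):
        a = 0.5 / k            # step-size schedule
        c = 0.5 / k ** 0.25    # finite-difference width schedule
        up = -(simulate_summary(theta + c, rng=rng) - observed_summary) ** 2
        dn = -(simulate_summary(theta - c, rng=rng) - observed_summary) ** 2
        theta += a * (up - dn) / (2 * c)
    return theta
```

Because each iteration only needs two simulations regardless of dimension, this avoids the rejection-sampling cost that makes ABC inefficient in high-dimensional problems.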
Caching and interpolated likelihoods: accelerating cosmological Monte Carlo Markov chains
Energy Technology Data Exchange (ETDEWEB)
Bouland, Adam; Easther, Richard; Rosenfeld, Katherine, E-mail: adam.bouland@aya.yale.edu, E-mail: richard.easther@yale.edu, E-mail: krosenfeld@cfa.harvard.edu [Department of Physics, Yale University, New Haven CT 06520 (United States)
2011-05-01
We describe a novel approach to accelerating Monte Carlo Markov Chains. Our focus is cosmological parameter estimation, but the algorithm is applicable to any problem for which the likelihood surface is a smooth function of the free parameters and computationally expensive to evaluate. We generate a high-order interpolating polynomial for the log-likelihood using the first points gathered by the Markov chains as a training set. This polynomial then accurately computes the majority of the likelihoods needed in the latter parts of the chains. We implement a simple version of this algorithm as a patch (InterpMC) to CosmoMC and show that it accelerates parameter estimation by a factor of between two and four for well-converged chains. The current code is primarily intended as a "proof of concept", and we argue that there is considerable room for further performance gains. Unlike other approaches to accelerating parameter fits, we make no use of precomputed training sets or special choices of variables, and InterpMC is almost entirely transparent to the user.
Caching and interpolated likelihoods: accelerating cosmological Monte Carlo Markov chains
International Nuclear Information System (INIS)
Bouland, Adam; Easther, Richard; Rosenfeld, Katherine
2011-01-01
We describe a novel approach to accelerating Monte Carlo Markov Chains. Our focus is cosmological parameter estimation, but the algorithm is applicable to any problem for which the likelihood surface is a smooth function of the free parameters and computationally expensive to evaluate. We generate a high-order interpolating polynomial for the log-likelihood using the first points gathered by the Markov chains as a training set. This polynomial then accurately computes the majority of the likelihoods needed in the latter parts of the chains. We implement a simple version of this algorithm as a patch (InterpMC) to CosmoMC and show that it accelerates parameter estimation by a factor of between two and four for well-converged chains. The current code is primarily intended as a "proof of concept", and we argue that there is considerable room for further performance gains. Unlike other approaches to accelerating parameter fits, we make no use of precomputed training sets or special choices of variables, and InterpMC is almost entirely transparent to the user.
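The caching idea reduces to: fit a polynomial to log-likelihood values already computed along the chain, then evaluate the cheap surrogate in later steps. Below is a one-parameter sketch with a quadratic stand-in for the expensive log-likelihood; InterpMC itself interpolates in the full parameter space and monitors the surrogate's accuracy, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_loglike(theta):
    # stand-in for a costly likelihood evaluation (here just a quadratic)
    return -0.5 * (theta - 1.3) ** 2

# training set: parameter points visited early in the chain
train = rng.uniform(-3, 3, 40)
coeffs = np.polyfit(train, [expensive_loglike(t) for t in train], deg=4)

def cheap_loglike(theta):
    # interpolated surrogate used in place of the expensive call later on
    return np.polyval(coeffs, theta)
```

Since the surrogate is exact here (a degree-4 fit to quadratic data), it reproduces the expensive function to numerical precision; in practice the fit error sets how far into the chain the surrogate can safely be trusted.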
Maximum likelihood as a common computational framework in tomotherapy
International Nuclear Information System (INIS)
Olivera, G.H.; Shepard, D.M.; Reckwerdt, P.J.; Ruchala, K.; Zachman, J.; Fitchard, E.E.; Mackie, T.R.
1998-01-01
Tomotherapy is a dose delivery technique using helical or axial intensity modulated beams. One of the strengths of the tomotherapy concept is that it can incorporate a number of processes into a single piece of equipment. These processes include treatment optimization planning, dose reconstruction and kilovoltage/megavoltage image reconstruction. A common computational technique that could be used for all of these processes would be very appealing. The maximum likelihood estimator, originally developed for emission tomography, can serve as a useful tool in imaging and radiotherapy. We believe that this approach can play an important role in the processes of optimization planning, dose reconstruction and kilovoltage and/or megavoltage image reconstruction. These processes involve computations that require comparable physical methods. They are also based on equivalent assumptions, and they have similar mathematical solutions. As a result, the maximum likelihood approach is able to provide a common framework for all three of these computational problems. We will demonstrate how maximum likelihood methods can be applied to optimization planning, dose reconstruction and megavoltage image reconstruction in tomotherapy. Results for planning optimization, dose reconstruction and megavoltage image reconstruction will be presented. Strengths and weaknesses of the methodology are analysed. Future directions for this work are also suggested. (author)
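For the emission-tomography case mentioned, the maximum-likelihood estimator is typically iterated with the MLEM update x <- x * A^T(y / Ax) / A^T 1. A dense-matrix sketch follows; the 3x2 system matrix and data in the test are invented for illustration, not a tomotherapy geometry.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM iterations for Poisson data y ~ A @ x.
    A: (m, n) nonnegative system matrix; y: (m,) measured counts."""
    x = np.ones(A.shape[1])       # flat nonnegative start
    sens = A.sum(axis=0)          # sensitivity image: A^T applied to ones
    for _ in range(n_iter):
        proj = A @ x              # forward projection
        x *= (A.T @ (y / proj)) / sens   # multiplicative ML update
    return x
```

The same multiplicative structure is what lets one solver skeleton serve dose reconstruction and megavoltage image reconstruction alike.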
Communicating likelihoods and probabilities in forecasts of volcanic eruptions
Doyle, Emma E. H.; McClure, John; Johnston, David M.; Paton, Douglas
2014-02-01
The issuing of forecasts and warnings of natural hazard events, such as volcanic eruptions, earthquake aftershock sequences and extreme weather, often involves the use of probabilistic terms, particularly when communicated by scientific advisory groups to key decision-makers, who can differ greatly in relative expertise and function in the decision making process. Recipients may also differ in their perception of the relative importance of political and economic influences on interpretation. Consequently, the interpretation of these probabilistic terms can vary greatly due to the framing of the statements, and whether verbal or numerical terms are used. We present a review from the psychology literature on how the framing of information influences communication of these probability terms. It is also unclear how people rate their perception of an event's likelihood throughout a time frame when a forecast time window is stated. Previous research has identified that, when presented with a 10-year time window forecast, participants viewed the likelihood of an event occurring ‘today’ as lower than that in year 10. Here we show that this skew in perception also occurs for short-term time windows (under one week) that are of most relevance for emergency warnings. In addition, unlike the long-time window statements, the use of the phrasing “within the next…” instead of “in the next…” does not mitigate this skew, nor do we observe significant differences between the perceived likelihoods of scientists and non-scientists. This finding suggests that effects occurring due to the shorter time window may be ‘masking’ any differences in perception due to wording or career background observed for long-time window forecasts. These results have implications for scientific advice, warning forecasts, emergency management decision-making, and public information as any skew in perceived event likelihood towards the end of a forecast time window may result in
Comparisons of likelihood and machine learning methods of individual classification
Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.
2002-01-01
Classification methods used in machine learning (e.g., artificial neural networks, decision trees, and k-nearest neighbor clustering) are rarely used with population genetic data. We compare different nonparametric machine learning techniques with parametric likelihood estimations commonly employed in population genetics for purposes of assigning individuals to their population of origin (“assignment tests”). Classifier accuracy was compared across simulated data sets representing different levels of population differentiation (low and high FST), number of loci surveyed (5 and 10), and allelic diversity (average of three or eight alleles per locus). Empirical data for the lake trout (Salvelinus namaycush) exhibiting levels of population differentiation comparable to those used in simulations were examined to further evaluate and compare classification methods. Classification error rates associated with artificial neural networks and likelihood estimators were lower for simulated data sets compared to k-nearest neighbor and decision tree classifiers over the entire range of parameters considered. Artificial neural networks only marginally outperformed the likelihood method for simulated data (0–2.8% lower error rates). The relative performance of each machine learning classifier improved relative to the likelihood estimators for empirical data sets, suggesting an ability to “learn” and utilize properties of empirical genotypic arrays intrinsic to each population. Likelihood-based estimation methods provide a more accessible option for reliable assignment of individuals to the population of origin due to the intricacies in development and evaluation of artificial neural networks. In recent years, characterization of highly polymorphic molecular markers such as mini- and microsatellites and development of novel methods of analysis have enabled researchers to extend investigations of ecological and evolutionary processes below the population level to the level of
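The likelihood "assignment test" compared here scores each candidate population, under Hardy-Weinberg equilibrium and locus independence, by the summed log genotype probabilities of the individual's multilocus genotype. A toy sketch, with allele frequencies and genotypes invented for illustration:

```python
import math

def assign(individual, pop_allele_freqs):
    """Assign a diploid genotype to the population with the highest
    log-likelihood. `individual` is a list of (allele1, allele2) per locus;
    `pop_allele_freqs` maps population -> per-locus allele-frequency dicts."""
    best_pop, best_ll = None, -math.inf
    for pop, freqs in pop_allele_freqs.items():
        ll = 0.0
        for locus, (a1, a2) in enumerate(individual):
            p1, p2 = freqs[locus][a1], freqs[locus][a2]
            # genotype probability: p^2 for a homozygote, 2pq for a heterozygote
            g = p1 * p2 * (1 if a1 == a2 else 2)
            ll += math.log(g)
        if ll > best_ll:
            best_pop, best_ll = pop, ll
    return best_pop
```

The nonparametric classifiers in the comparison replace this explicit genotype model with decision boundaries learned from reference genotypes.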
Development of an optimal velocity selection method with velocity obstacle
Energy Technology Data Exchange (ETDEWEB)
Kim, Min Geuk; Oh, Jun Ho [KAIST, Daejeon (Korea, Republic of)
2015-08-15
The Velocity obstacle (VO) method is one of the most well-known methods for local path planning, allowing consideration of dynamic obstacles and unexpected obstacles. Typical VO methods separate a velocity map into a collision area and a collision-free area. A robot can avoid collisions by selecting its velocity from within the collision-free area. However, if there are numerous obstacles near a robot, the robot will have very few velocity candidates. In this paper, a method for choosing optimal velocity components using the concept of pass-time and vertical clearance is proposed for the efficient movement of a robot. The pass-time is the time required for a robot to pass by an obstacle. By generating a latticized available velocity map for a robot, each velocity component can be evaluated using a cost function that considers the pass-time and other aspects. From the output of the cost function, even a velocity component that will cause a collision in the future can be chosen as a final velocity if the pass-time is sufficiently long.
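The selection step can be sketched minimally: lattice candidates are screened with a closest-approach collision test, and the survivors are ranked by a cost function. The sketch scores only deviation from a preferred velocity; the paper's full cost also weighs pass-time and vertical clearance, and the geometry here is illustrative.

```python
import math

def collides(v, p, radius):
    """Closest-approach test: does a robot at the origin moving with constant
    velocity v ever come within `radius` of a static obstacle at p?"""
    speed2 = v[0] * v[0] + v[1] * v[1]
    t = 0.0 if speed2 == 0 else max(0.0, (p[0] * v[0] + p[1] * v[1]) / speed2)
    dx, dy = p[0] - v[0] * t, p[1] - v[1] * t
    return dx * dx + dy * dy < radius * radius

def choose_velocity(candidates, preferred, obstacle, radius):
    """Pick the collision-free candidate minimizing a simple cost (deviation
    from the preferred velocity; pass-time terms omitted in this sketch)."""
    best, best_cost = None, math.inf
    for v in candidates:
        if collides(v, obstacle, radius):
            continue
        cost = math.dist(v, preferred)
        if cost < best_cost:
            best, best_cost = v, cost
    return best
```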
Nonlinear peculiar-velocity analysis and PCA
Energy Technology Data Exchange (ETDEWEB)
Dekel, A.; and others
2001-02-20
We allow for nonlinear effects in the likelihood analysis of peculiar velocities, and obtain ~35% lower values for the cosmological density parameter and for the amplitude of mass-density fluctuations. The power spectrum in the linear regime is assumed to be of the flat ΛCDM model (h = 0.65, n = 1) with only Ω_m free. Since the likelihood is driven by the nonlinear regime, we break the power spectrum at k_b ≈ 0.2 (h⁻¹ Mpc)⁻¹ and fit a two-parameter power law at k > k_b. This allows for an unbiased fit in the linear regime. Tests using improved mock catalogs demonstrate a reduced bias and a better fit. We find for the Mark III and SFI data Ω_m = 0.35 ± 0.09 with σ_8 Ω_m^0.6 = 0.55 ± 0.10 (90% errors). When allowing deviations from ΛCDM, we find an indication of a wiggle in the power spectrum in the form of an excess near k ≈ 0.05 and a deficiency at k ≈ 0.1 (h⁻¹ Mpc)⁻¹, a cold flow which may be related to a feature indicated by redshift surveys and the second peak in the CMB anisotropy. A χ² test applied to principal modes demonstrates that the nonlinear procedure improves the goodness of fit. The Principal Component Analysis (PCA) helps identifying spatial features of the data and fine-tuning the theoretical and error models. We address the potential for optimal data compression using PCA.
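The break-and-power-law construction described (linear-regime spectrum below k_b, a fitted power law above it, matched at the break) reduces to a short piecewise function. The linear spectrum and slope used in the test are toy placeholders, not the paper's fitted ΛCDM values.

```python
def broken_power_spectrum(k, P_lin, k_b=0.2, alpha=-1.5):
    """Piecewise power spectrum: the linear-regime model below the break k_b,
    a power law matched continuously at k_b above it."""
    if k <= k_b:
        return P_lin(k)
    return P_lin(k_b) * (k / k_b) ** alpha
```

Matching the amplitude at k_b keeps the spectrum continuous, so the two extra parameters (k_b, alpha) only reshape the nonlinear regime while leaving the linear-regime fit unbiased.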
International Nuclear Information System (INIS)
Cearley, J.E.; Carruth, J.C.; Dixon, R.C.; Spencer, S.S.; Zuloaga, J.A. Jr.
1986-01-01
This patent describes a velocity control arrangement for a reciprocable, vertically oriented control rod for use in a nuclear reactor in a fluid medium, the control rod including a drive hub secured to and extending from one end thereof. The control device comprises: a toroidally shaped control member spaced from and coaxially positioned around the hub and secured thereto by a plurality of spaced radial webs thereby providing an annular passage for fluid intermediate the hub and the toroidal member spaced therefrom in coaxial position. The side of the control member toward the control rod has a smooth generally conical surface. The side of the control member away from the control rod is formed with a concave surface constituting a single annular groove. The device also comprises inner and outer annular vanes radially spaced from one another and spaced from the side of the control member away from the control rod and positioned coaxially around and spaced from the hub and secured thereto by spaced radial webs thereby providing an annular passage for fluid intermediate the hub and the vanes. The vanes are angled toward the control member, the outer edge of the inner vane being closer to the control member and the inner edge of the outer vane being closer to the control member. When the control rod moves in the fluid in the direction toward the drive hub, the vanes direct a flow of fluid turbulence which provides greater resistance to movement of the control rod in the direction toward the drive hub than in the other direction.
Velocity Dispersions Across Bulge Types
International Nuclear Information System (INIS)
Fabricius, Maximilian; Bender, Ralf; Hopp, Ulrich; Saglia, Roberto; Drory, Niv; Fisher, David
2010-01-01
We present first results from a long-slit spectroscopic survey of bulge kinematics in local spiral galaxies. Our optical spectra were obtained at the Hobby-Eberly Telescope with the LRS spectrograph and have a velocity resolution of 45 km/s (σ*), which allows us to resolve the velocity dispersions in the bulge regions of most objects in our sample. We find that the velocity dispersion profiles in morphological classical bulge galaxies are always centrally peaked, while the velocity dispersion of morphologically disk-like bulges stays relatively flat towards the center, once strongly barred galaxies are discarded.
On linear relationship between shock velocity and particle velocity
International Nuclear Information System (INIS)
Dandache, H.
1986-11-01
We attempt to derive the linear relationship between shock velocity U_s and particle velocity U_p from thermodynamic considerations, taking into account an ideal gas equation of state and a Mie-Grüneisen equation of state for solids. 23 refs
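In applications, the linear fit U_s = c_0 + s·U_p combines with the Rankine-Hugoniot momentum jump condition P = ρ₀·U_s·U_p (initial pressure neglected) to give the pressure along the Hugoniot. A two-line sketch; the parameter values in the test are illustrative round numbers, not fitted data.

```python
def shock_hugoniot_pressure(up, rho0, c0, s):
    """Hugoniot pressure from the linear Us = c0 + s*Up relation combined with
    the momentum jump condition P = rho0 * Us * Up (initial pressure neglected).
    SI units: up, c0 in m/s; rho0 in kg/m^3; returns P in Pa."""
    us = c0 + s * up
    return rho0 * us * up
```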
Applying exclusion likelihoods from LHC searches to extended Higgs sectors
International Nuclear Information System (INIS)
Bechtle, Philip; Heinemeyer, Sven; Staal, Oscar; Stefaniak, Tim; Weiglein, Georg
2015-01-01
LHC searches for non-standard Higgs bosons decaying into tau lepton pairs constitute a sensitive experimental probe for physics beyond the Standard Model (BSM), such as supersymmetry (SUSY). Recently, the limits obtained from these searches have been presented by the CMS collaboration in a nearly model-independent fashion - as a narrow resonance model - based on the full 8 TeV dataset. In addition to publishing a 95 % C.L. exclusion limit, the full likelihood information for the narrow-resonance model has been released. This provides valuable information that can be incorporated into global BSM fits. We present a simple algorithm that maps an arbitrary model with multiple neutral Higgs bosons onto the narrow resonance model and derives the corresponding value for the exclusion likelihood from the CMS search. This procedure has been implemented into the public computer code HiggsBounds (version 4.2.0 and higher). We validate our implementation by cross-checking against the official CMS exclusion contours in three Higgs benchmark scenarios in the Minimal Supersymmetric Standard Model (MSSM), and find very good agreement. Going beyond validation, we discuss the combined constraints of the ττ search and the rate measurements of the SM-like Higgs at 125 GeV in a recently proposed MSSM benchmark scenario, where the lightest Higgs boson obtains SM-like couplings independently of the decoupling of the heavier Higgs states. Technical details for how to access the likelihood information within HiggsBounds are given in the appendix. The program is available at http://higgsbounds.hepforge.org. (orig.)
Altered velocity processing in schizophrenia during pursuit eye tracking.
Directory of Open Access Journals (Sweden)
Matthias Nagel
Full Text Available Smooth pursuit eye movements (SPEM are needed to keep the retinal image of slowly moving objects within the fovea. Depending on the task, about 50%-80% of patients with schizophrenia have difficulties in maintaining SPEM. We designed a study that comprised different target velocities as well as testing for internal (extraretinal guidance of SPEM in the absence of a visual target. We applied event-related fMRI by presenting four velocities (5, 10, 15, 20°/s both with and without intervals of target blanking. 17 patients and 16 healthy participants were included. Eye movements were registered during scanning sessions. Statistical analysis included mixed ANOVAs and regression analyses of the target velocity on the Blood Oxygen Level Dependency (BOLD signal. The main effect group and the interaction of velocity×group revealed reduced activation in V5 and putamen but increased activation of cerebellar regions in patients. Regression analysis showed that activation in supplementary eye field, putamen, and cerebellum was not correlated to target velocity in patients in contrast to controls. Furthermore, activation in V5 and in intraparietal sulcus (putative LIP bilaterally was less strongly correlated to target velocity in patients than controls. Altered correlation of target velocity and neural activation in the cortical network supporting SPEM (V5, SEF, LIP, putamen implies impaired transformation of the visual motion signal into an adequate motor command in patients. Cerebellar regions seem to be involved in compensatory mechanisms although cerebellar activity in patients was not related to target velocity.
Australian food life style segments and elaboration likelihood differences
DEFF Research Database (Denmark)
Brunsø, Karen; Reid, Mike
As the global food marketing environment becomes more competitive, the international and comparative perspective of consumers' attitudes and behaviours becomes more important for both practitioners and academics. This research employs the Food-Related Life Style (FRL) instrument in Australia...... in order to 1) determine Australian Life Style Segments and compare these with their European counterparts, and to 2) explore differences in elaboration likelihood among the Australian segments, e.g. consumers' interest and motivation to perceive product related communication. The results provide new...
Maximum-likelihood method for numerical inversion of Mellin transform
International Nuclear Information System (INIS)
Iqbal, M.
1997-01-01
A method is described for inverting the Mellin transform which uses an expansion in Laguerre polynomials and converts the Mellin transform to a Laplace transform; the maximum-likelihood regularization method is then used to recover the original function of the Mellin transform. The performance of the method is illustrated by the inversion of the test functions available in the literature (J. Inst. Math. Appl., 20 (1977) 73; Math. Comput., 53 (1989) 589). Effectiveness of the method is shown by results obtained through demonstration by means of tables and diagrams
How to Improve the Likelihood of CDM Approval?
DEFF Research Database (Denmark)
Brandt, Urs Steiner; Svendsen, Gert Tinggaard
2014-01-01
How can the likelihood of Clean Development Mechanism (CDM) approval be improved in the face of institutional shortcomings? To answer this question, we focus on the three institutional shortcomings of income sharing, risk sharing and corruption prevention concerning afforestation/reforestation (A....../R). Furthermore, three main stakeholders are identified, namely investors, governments and agents in a principal-agent model regarding monitoring and enforcement capacity. Developing regions such as West Africa have, despite huge potential, not been integrated in A/R CDM projects yet. Remote sensing, however...
Maximum Likelihood and Bayes Estimation in Randomly Censored Geometric Distribution
Directory of Open Access Journals (Sweden)
Hare Krishna
2017-01-01
Full Text Available In this article, we study the geometric distribution under randomly censored data. Maximum likelihood estimators and confidence intervals based on Fisher information matrix are derived for the unknown parameters with randomly censored data. Bayes estimators are also developed using beta priors under generalized entropy and LINEX loss functions. Also, Bayesian credible and highest posterior density (HPD credible intervals are obtained for the parameters. Expected time on test and reliability characteristics are also analyzed in this article. To compare various estimates developed in the article, a Monte Carlo simulation study is carried out. Finally, for illustration purpose, a randomly censored real data set is discussed.
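For the randomly censored geometric model with pmf p(1-p)^(k-1) on k = 1, 2, ... and survival P(X > k) = (1-p)^k, the likelihood factorizes as p^d (1-p)^(Σx - d), where d counts exact failures, so the MLE has the closed form below. This is a standard derivation sketched for illustration; the Bayes estimators and credible intervals developed in the article are not reproduced here.

```python
def geometric_mle(times, observed):
    """MLE of p for geometric lifetimes (support 1, 2, ...) under random
    censoring. `observed[i]` is True for an exact failure time, False for a
    censored one. Closed form: p_hat = (#failures) / (sum of all times)."""
    d = sum(observed)           # number of uncensored (exact) observations
    return d / sum(times)       # exact + censored times all enter the total
```

The censored times enter only through the total, since each contributes a factor (1-p)^x to the likelihood.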
Elemental composition of cosmic rays using a maximum likelihood method
International Nuclear Information System (INIS)
Ruddick, K.
1996-01-01
We present a progress report on our attempts to determine the composition of cosmic rays in the knee region of the energy spectrum. We have used three different devices to measure properties of the extensive air showers produced by primary cosmic rays: the Soudan 2 underground detector measures the muon flux deep underground, a proportional tube array samples shower density at the surface of the earth, and a Cherenkov array observes light produced high in the atmosphere. We have begun maximum likelihood fits to these measurements with the hope of determining the nuclear mass number A on an event by event basis. (orig.)
Likelihood-Based Inference in Nonlinear Error-Correction Models
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbæk, Anders
We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be established. A simulation study......
Process criticality accident likelihoods, consequences and emergency planning
International Nuclear Information System (INIS)
McLaughlin, T.P.
1992-01-01
Evaluation of criticality accident risks in the processing of significant quantities of fissile materials is both complex and subjective, largely due to the lack of accident statistics. Thus, complying with national and international standards and regulations which require an evaluation of the net benefit of a criticality accident alarm system, is also subjective. A review of guidance found in the literature on potential accident magnitudes is presented for different material forms and arrangements. Reasoned arguments are also presented concerning accident prevention and accident likelihoods for these material forms and arrangements. (Author)
Likelihood Estimation of Gamma Ray Bursts Duration Distribution
Horvath, Istvan
2005-01-01
Two classes of Gamma Ray Bursts have been identified so far, characterized by T90 durations shorter and longer than approximately 2 seconds. It was shown that the BATSE 3B data allow a good fit with three Gaussian distributions in log T90. Another paper in the same ApJ volume suggested that a third class of GRBs may exist. Using the full BATSE catalog, here we present the maximum likelihood estimation, which gives a 0.5% probability of there being only two subclasses. The MC simulation co...
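The multi-Gaussian fit in log T90 referred to here is a 1-D Gaussian-mixture maximum-likelihood problem, which the EM algorithm solves. A compact pure-Python sketch with quantile initialization follows; the durations in the test are synthetic draws, not BATSE data, and the stopping rule (a fixed iteration count) is a simplification.

```python
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def gmm_em(data, k, iters=200):
    """Fit a k-component 1-D Gaussian mixture by EM.
    Returns (weights, means, sds, log-likelihood)."""
    srt = sorted(data)
    mu = [srt[int((j + 0.5) * len(data) / k)] for j in range(k)]  # quantile init
    sd = [1.0] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            ps = [w[j] * normal_pdf(x, mu[j], sd[j]) for j in range(k)]
            s = sum(ps)
            resp.append([p / s for p in ps])
        # M-step: responsibility-weighted moments
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(data)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, data)) / nj
            sd[j] = max(1e-3, math.sqrt(var))  # floor guards against collapse
    ll = sum(math.log(sum(w[j] * normal_pdf(x, mu[j], sd[j]) for j in range(k)))
             for x in data)
    return w, mu, sd, ll
```

Comparing the maximized log-likelihoods for k = 2 and k = 3 components (with an appropriate significance calibration) is the kind of test that yields the quoted probability for the two-subclass hypothesis.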
Process criticality accident likelihoods, consequences, and emergency planning
Energy Technology Data Exchange (ETDEWEB)
McLaughlin, T.P.
1991-01-01
Evaluation of criticality accident risks in the processing of significant quantities of fissile materials is both complex and subjective, largely due to the lack of accident statistics. Thus, complying with standards such as ISO 7753 which mandates that the need for an alarm system be evaluated, is also subjective. A review of guidance found in the literature on potential accident magnitudes is presented for different material forms and arrangements. Reasoned arguments are also presented concerning accident prevention and accident likelihoods for these material forms and arrangements. 13 refs., 1 fig., 1 tab.
Improved Likelihood Function in Particle-based IR Eye Tracking
DEFF Research Database (Denmark)
Satria, R.; Sorensen, J.; Hammoud, R.
2005-01-01
In this paper we propose a log likelihood-ratio function of foreground and background models used in a particle filter to track the eye region in dark-bright pupil image sequences. This model fuses information from both dark and bright pupil images and their difference image into one model. Our...... enhanced tracker overcomes the issues of prior selection of static thresholds during the detection of feature observations in the bright-dark difference images. The auto-initialization process is performed using cascaded classifier trained using adaboost and adapted to IR eye images. Experiments show good...
Estimating likelihood of future crashes for crash-prone drivers
Subasish Das; Xiaoduan Sun; Fan Wang; Charles Leboeuf
2015-01-01
At-fault crash-prone drivers are usually considered as the high risk group for possible future incidents or crashes. In Louisiana, 34% of crashes are repeatedly committed by the at-fault crash-prone drivers who represent only 5% of the total licensed drivers in the state. This research has conducted an exploratory data analysis based on the driver faultiness and proneness. The objective of this study is to develop a crash prediction model to estimate the likelihood of future crashes for the a...
Similar tests and the standardized log likelihood ratio statistic
DEFF Research Database (Denmark)
Jensen, Jens Ledet
1986-01-01
When testing an affine hypothesis in an exponential family the 'ideal' procedure is to calculate the exact similar test, or an approximation to this, based on the conditional distribution given the minimal sufficient statistic under the null hypothesis. By contrast to this there is a 'primitive......' approach in which the marginal distribution of a test statistic considered and any nuisance parameter appearing in the test statistic is replaced by an estimate. We show here that when using standardized likelihood ratio statistics the 'primitive' procedure is in fact an 'ideal' procedure to order O(n -3...
Maximum Likelihood Joint Tracking and Association in Strong Clutter
Directory of Open Access Journals (Sweden)
Leonid I. Perlovsky
2013-01-01
Full Text Available We have developed a maximum likelihood formulation for a joint detection, tracking and association problem. An efficient non-combinatorial algorithm for this problem is developed in case of strong clutter for radar data. By using an iterative procedure of the dynamic logic process “from vague-to-crisp” explained in the paper, the new tracker overcomes the combinatorial complexity of tracking in highly-cluttered scenarios and results in an orders-of-magnitude improvement in signal-to-clutter ratio.
Sodium Velocity Maps on Mercury
Potter, A. E.; Killen, R. M.
2011-01-01
The objective of the current work was to measure two-dimensional maps of sodium velocities on the Mercury surface and examine the maps for evidence of sources or sinks of sodium on the surface. The McMath-Pierce Solar Telescope and the Stellar Spectrograph were used to measure Mercury spectra that were sampled at 7 milliAngstrom intervals. Observations were made each day during the period October 5-9, 2010. The dawn terminator was in view during that time. The velocity shift of the centroid of the Mercury emission line was measured relative to the solar sodium Fraunhofer line corrected for radial velocity of the Earth. The difference between the observed and calculated velocity shift was taken to be the velocity vector of the sodium relative to Earth. For each position of the spectrograph slit, a line of velocities across the planet was measured. Then, the spectrograph slit was stepped over the surface of Mercury at 1 arc second intervals. The position of Mercury was stabilized by an adaptive optics system. The collection of lines was assembled into images of surface reflection, sodium emission intensities, and Earthward velocities over the surface of Mercury. The velocity map shows patches of higher velocity in the southern hemisphere, suggesting the existence of sodium sources there. The peak earthward velocity occurs in the equatorial region, and extends to the terminator. Since this was a dawn terminator, this might be an indication of dawn evaporation of sodium. Leblanc et al. (2008) have published a velocity map that is similar.
Identifying Clusters with Mixture Models that Include Radial Velocity Observations
Czarnatowicz, Alexis; Ybarra, Jason E.
2018-01-01
The study of stellar clusters plays an integral role in the study of star formation. We present a cluster mixture model that considers radial velocity data in addition to spatial data. Maximum likelihood estimation through the Expectation-Maximization (EM) algorithm is used for parameter estimation. Our mixture model analysis can be used to distinguish adjacent or overlapping clusters, and to estimate properties for each cluster. Work supported by awards from the Virginia Foundation for Independent Colleges (VFIC) Undergraduate Science Research Fellowship and The Research Experience @Bridgewater (TREB).
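A minimal sketch of the idea, assuming diagonal-covariance Gaussian components over (x, y, radial-velocity) features and synthetic data; this illustrates EM for such a mixture and is not the authors' code (cluster positions and velocities are invented):

```python
import numpy as np

# Illustrative EM for a two-component Gaussian mixture over (x, y, v_r)
# features, with diagonal covariances and synthetic data.
rng = np.random.default_rng(0)
a = rng.normal([0.0, 0.0, -10.0], 1.0, size=(200, 3))   # cluster A: v_r ~ -10
b = rng.normal([3.0, 3.0, +10.0], 1.0, size=(200, 3))   # cluster B: v_r ~ +10
X = np.vstack([a, b])

def em_gmm(X, iters=100):
    n, d = X.shape
    mu = np.array([X[0], X[-1]], dtype=float)   # one seed point per cluster
    var = np.ones((2, d))
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities under diagonal Gaussians (log-domain)
        logp = (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                        + np.log(2 * np.pi * var)).sum(-1) + np.log(pi))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted mixing proportions, means, and variances
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ X ** 2) / nk[:, None] - mu ** 2 + 1e-6
    return pi, mu, var

pi, mu, var = em_gmm(X)
print(np.round(np.sort(mu[:, 2]), 1))   # recovered mean radial velocities
```

Spatially overlapping clusters that differ in radial velocity remain separable here, which is the point of adding the velocity feature.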
Preliminary evaluation of vector flow and spectral velocity estimation
DEFF Research Database (Denmark)
Pedersen, Mads Møller; Pihl, Michael Johannes; Haugaard, Per
Spectral estimation is considered as the golden standard in ultrasound velocity estimation. For spectral velocity estimation the blood flow angle is set by the ultrasound operator. Vector flow provides temporal and spatial estimates of the blood flow angle and velocity. A comparison of vector flow...... line covering the vessel diameter. A commercial ultrasound scanner (ProFocus 2202, BK Medical, Denmark) and a 7.6 MHz linear transducer was used (8670, BK Medical). The mean vector blood flow angle estimations were calculated {52(18);55(23);60(16)}°. For comparison the fixed angles for spectral...... estimation were obtained {52;56;52}°. The mean vector velocity estimates at PS {76(15);95(17);77(16)}cm/s and at end diastole (ED) {17(6);18(6);24(6)}cm/s were calculated. For comparison spectral velocity estimates at PS {77;110;76}cm/s and ED {18;18;20}cm/s were obtained. The mean vector angle estimates...
Introduction to vector velocity imaging
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Udesen, Jesper; Hansen, Kristoffer Lindskov
Current ultrasound scanners can only estimate the velocity along the ultrasound beam, and this gives rise to the cos(θ) factor on all velocity estimates. This is a major limitation, as most vessels are close to perpendicular to the beam. Also, the angle varies as a function of space and time, making ...
Velocity Estimation of the Main Portal Vein with Transverse Oscillation
DEFF Research Database (Denmark)
Brandt, Andreas Hjelm; Hansen, Kristoffer Lindskov; Nielsen, Michael Bachmann
2015-01-01
This study evaluates whether Transverse Oscillation (TO) can provide reliable and accurate peak velocity estimates of blood flow in the main portal vein. TO was evaluated against the recommended and most widely used technique for portal flow estimation, Spectral Doppler Ultrasound (SDU). The main portal...
Likelihood Approximation With Parallel Hierarchical Matrices For Large Spatial Datasets
Litvinenko, Alexander
2017-11-01
The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M\times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating unknown parameters such as the covariance length, variance and smoothness parameter of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra arising from large, dense covariance matrices. Therefore, covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n/p)$ and storage $\mathcal{O}(kn \log n)$, where the rank $k$ is a small integer (typically $k<25$), $p$ is the number of cores and $n$ is the number of locations on a fairly general mesh. We demonstrate a synthetic example in which the true parameter values are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
Likelihood Approximation With Parallel Hierarchical Matrices For Large Spatial Datasets
Litvinenko, Alexander; Sun, Ying; Genton, Marc G.; Keyes, David E.
2017-01-01
The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M\times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating unknown parameters such as the covariance length, variance and smoothness parameter of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra arising from large, dense covariance matrices. Therefore, covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n/p)$ and storage $\mathcal{O}(kn \log n)$, where the rank $k$ is a small integer (typically $k<25$), $p$ is the number of cores and $n$ is the number of locations on a fairly general mesh. We demonstrate a synthetic example in which the true parameter values are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
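For scale, the dense computation that the hierarchical-matrix machinery approximates can be sketched in a few lines. Below is a small illustrative 1-D example (sizes and parameter values invented) using the exponential kernel, i.e. the Matérn family at smoothness ν = 1/2, and the Cholesky-based joint Gaussian log-likelihood whose O(n³) cost motivates the H-matrix format:

```python
import numpy as np

# Dense-reference sketch of what HLIBCov approximates (illustrative sizes;
# exponential kernel = Matérn with smoothness nu = 1/2).
def exp_cov(locs, variance, length):
    d = np.abs(locs[:, None] - locs[None, :])       # pairwise 1-D distances
    return variance * np.exp(-d / length)

def gaussian_loglik(z, C):
    """Joint Gaussian log-likelihood via Cholesky; O(n^3) for dense C."""
    n = len(z)
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L, z)
    logdet = 2.0 * np.log(np.diag(L)).sum()
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + alpha @ alpha)

rng = np.random.default_rng(1)
locs = np.sort(rng.uniform(0.0, 10.0, 50))
C_true = exp_cov(locs, variance=2.0, length=1.5)
z = np.linalg.cholesky(C_true) @ rng.standard_normal(50)   # one sample
print(round(float(gaussian_loglik(z, C_true)), 2))
```

Maximizing this function over (variance, length, smoothness) is exactly the estimation task the abstract describes; the H-matrix approximation replaces the dense Cholesky step.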
Superfast maximum-likelihood reconstruction for quantum tomography
Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon
2017-06-01
Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
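As a toy illustration of the projected-gradient idea (plain, un-accelerated gradient ascent rather than the paper's accelerated scheme, and for a single qubit rather than eight; the POVM, target state, and step size are invented), one can maximize the log-likelihood over density matrices by alternating gradient steps with projection onto the unit-trace positive-semidefinite set:

```python
import numpy as np

# Toy sketch: projected gradient ascent for single-qubit ML tomography.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# POVM: +/- projectors of each Pauli, each basis chosen with probability 1/3.
EFFECTS = [(I2 + s * P) / 6.0 for P in (X, Y, Z) for s in (+1, -1)]

def project_to_states(rho):
    """Project a Hermitian matrix onto {rho >= 0, tr(rho) = 1}."""
    w, V = np.linalg.eigh(rho)
    u = np.sort(w)[::-1]                             # eigenvalues, descending
    css = np.cumsum(u)
    idx = np.arange(1, len(u) + 1)
    k = np.max(np.nonzero(u - (css - 1) / idx > 0)[0]) + 1
    lam = np.maximum(w - (css[k - 1] - 1) / k, 0)    # simplex projection
    return (V * lam) @ V.conj().T

def mle(freqs, steps=3000, lr=0.1):
    rho = I2 / 2
    for _ in range(steps):
        grad = sum(f * E / max(np.real(np.trace(E @ rho)), 1e-12)
                   for f, E in zip(freqs, EFFECTS))
        rho = project_to_states(rho + lr * grad)
    return rho

true = np.array([[0.85, 0.3], [0.3, 0.15]], dtype=complex)   # a valid qubit state
freqs = [np.real(np.trace(E @ true)) for E in EFFECTS]       # ideal frequencies
rho_hat = mle(freqs)
print(round(float(np.linalg.norm(rho_hat - true)), 3))       # recovery error
```

With ideal (infinite-data) frequencies the maximizer is the true state, so the printed Frobenius error should be near zero; the paper's contribution is making this kind of iteration fast for many qubits.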
Likelihood inference for a fractionally cointegrated vector autoregressive model
DEFF Research Database (Denmark)
Johansen, Søren; Ørregård Nielsen, Morten
2012-01-01
such that the process X_{t} is fractional of order d and cofractional of order d-b; that is, there exist vectors ß for which ß'X_{t} is fractional of order d-b, and no other fractionality order is possible. We define the statistical model by 0......inference when the true values satisfy b0≥1/2 and d0-b0......We consider model-based inference in a fractionally cointegrated (or cofractional) vector autoregressive model with a restricted constant term, ¿, based on the Gaussian likelihood conditional on initial values. The model nests the I(d) VAR model. We give conditions on the parameters...... process in the parameters when errors are i.i.d. with suitable moment conditions and initial values are bounded. When the limit is deterministic this implies uniform convergence in probability of the conditional likelihood function. If the true value b0>1/2, we prove that the limit distribution of (ß...
Likelihood-Based Inference of B Cell Clonal Families.
Directory of Open Access Journals (Sweden)
Duncan K Ralph
2016-10-01
Full Text Available The human immune system depends on a highly diverse collection of antibody-making B cells. B cell receptor sequence diversity is generated by a random recombination process called "rearrangement" forming progenitor B cells, then a Darwinian process of lineage diversification and selection called "affinity maturation." The resulting receptors can be sequenced in high throughput for research and diagnostics. Such a collection of sequences contains a mixture of various lineages, each of which may be quite numerous, or may consist of only a single member. As a step to understanding the process and result of this diversification, one may wish to reconstruct lineage membership, i.e. to cluster sampled sequences according to which came from the same rearrangement events. We call this clustering problem "clonal family inference." In this paper we describe and validate a likelihood-based framework for clonal family inference based on a multi-hidden Markov Model (multi-HMM) framework for B cell receptor sequences. We describe an agglomerative algorithm to find a maximum likelihood clustering, two approximate algorithms with various trade-offs of speed versus accuracy, and a third, fast algorithm for finding specific lineages. We show that under simulation these algorithms greatly improve upon existing clonal family inference methods, and that they also give significantly different clusters than previous methods when applied to two real data sets.
Simulation-based marginal likelihood for cluster strong lensing cosmology
Killedar, M.; Borgani, S.; Fabjan, D.; Dolag, K.; Granato, G.; Meneghetti, M.; Planelles, S.; Ragone-Figueroa, C.
2018-01-01
Comparisons between observed and predicted strong lensing properties of galaxy clusters have been routinely used to claim either tension or consistency with Λ cold dark matter cosmology. However, standard approaches to such cosmological tests are unable to quantify the preference for one cosmology over another. We advocate approximating the relevant Bayes factor using a marginal likelihood that is based on the following summary statistic: the posterior probability distribution function for the parameters of the scaling relation between Einstein radii and cluster mass, α and β. We demonstrate, for the first time, a method of estimating the marginal likelihood using the X-ray selected z > 0.5 Massive Cluster Survey clusters as a case in point and employing both N-body and hydrodynamic simulations of clusters. We investigate the uncertainty in this estimate and consequential ability to compare competing cosmologies, which arises from incomplete descriptions of baryonic processes, discrepancies in cluster selection criteria, redshift distribution and dynamical state. The relation between triaxial cluster masses at various overdensities provides a promising alternative to the strong lensing test.
Risk factors and likelihood of Campylobacter colonization in broiler flocks
Directory of Open Access Journals (Sweden)
SL Kuana
2007-09-01
Full Text Available Campylobacter was investigated in cecal droppings, feces, and cloacal swabs of 22 flocks of 3- to 5-week-old broilers. Risk factors and the likelihood of the presence of this agent in these flocks were determined. Management practices, such as cleaning and disinfection, feeding, drinkers, and litter treatments, were assessed. Results were evaluated using the Odds Ratio (OR) test, and their significance was tested by Fisher's test (p<0.05). A Campylobacter prevalence of 81.8% was found in the broiler flocks (18/22), and within positive flocks it varied between 85 and 100%. Campylobacter incidence among sample types was homogeneous, being 81.8% in cecal droppings, 80.9% in feces, and 80.4% in cloacal swabs (230). Flocks fed by automatic feeding systems presented a higher incidence of Campylobacter as compared to those fed by tube feeders. Litter was reused on 63.6% of the farms and, despite the lack of statistical significance, there was a higher likelihood of Campylobacter incidence when litter was reused. Foot baths were not used in 45.5% of the flocks, whereas the use of foot baths associated with deficient lime management increased the number of positive flocks, although with no statistical significance. The evaluated parameters were not significantly associated with Campylobacter colonization in the assessed broiler flocks.
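The statistics used in this abstract (an odds ratio with significance from Fisher's exact test) are easy to reproduce from a 2×2 table. A self-contained sketch on an invented table, not the paper's data:

```python
from math import comb

# Hypothetical 2x2 example (invented counts): Campylobacter-positive and
# -negative flocks by feeder type, summarised by an odds ratio and a
# two-sided Fisher's exact test.
def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact p-value for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p_table(x):  # hypergeometric probability of the table with cell (0,0) = x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_table(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    # sum probabilities of all tables at least as extreme as the observed one
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

a, b, c, d = 10, 1, 8, 3                  # positive/negative by feeder type
odds_ratio = (a * d) / (b * c)
print(round(odds_ratio, 2), round(fisher_exact_2x2(a, b, c, d), 3))  # -> 3.75 0.586
```

As in the abstract, an odds ratio well above 1 can still fail to reach significance when the table is small.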
Menyoal Elaboration Likelihood Model (ELM) dan Teori Retorika
Directory of Open Access Journals (Sweden)
Yudi Perbawaningsih
2012-06-01
Full Text Available Abstract: Persuasion is a communication process to establish or change attitudes, which can be understood through the theory of Rhetoric and the theory of the Elaboration Likelihood Model (ELM). This study elaborates these theories in a public lecture series used to persuade students in choosing their concentration of study, based on how they process information. Using a survey method, the results show that, in terms of persuasion effectiveness, it is not quite relevant to separate the message from its source: the two merge, meaning that the quality of the source is determined by the quality of the message it delivers, and vice versa. Separating the persuasion process into the two routes described in ELM theory would therefore not be relevant.
Corporate brand extensions based on the purchase likelihood: governance implications
Directory of Open Access Journals (Sweden)
Spyridon Goumas
2018-03-01
Full Text Available This paper examines the purchase likelihood of hypothetical service brand extensions from product companies focusing on consumer electronics, based on sector categorization and perceptions of fit between the existing product category and the image of the company. Prior research has recognized that levels of brand knowledge ease the transference of associations and affect to the new products. Similarity to the existing products of the parent company and perceived image also influence the success of brand extensions. However, sector categorization may interfere with this relationship. The purpose of this study is to examine Greek consumers' attitudes towards hypothetical brand extensions, and how these are affected by consumers' existing knowledge about the brand, sector categorization, and perceptions of image and category fit of cross-sector extensions. This aim is examined in the context of technological categories, where lesser-known companies exhibited significance in purchase likelihood and, contrary to the existing literature, service companies did not perform as positively as expected. Additional insights to the existing literature about sector categorization are provided. The effect of both image and category fit is also examined, and predictions regarding the effect of each are made.
Gauging the likelihood of stable cavitation from ultrasound contrast agents.
Bader, Kenneth B; Holland, Christy K
2013-01-07
The mechanical index (MI) was formulated to gauge the likelihood of adverse bioeffects from inertial cavitation. However, the MI formulation did not consider bubble activity from stable cavitation. This type of bubble activity can be readily nucleated from ultrasound contrast agents (UCAs) and has the potential to promote beneficial bioeffects. Here, the presence of stable cavitation is determined numerically by tracking the onset of subharmonic oscillations within a population of bubbles for frequencies up to 7 MHz and peak rarefactional pressures up to 3 MPa. In addition, the acoustic pressure rupture threshold of an UCA population was determined using the Marmottant model. The threshold for subharmonic emissions of optimally sized bubbles was found to be lower than the inertial cavitation threshold for all frequencies studied. The rupture thresholds of optimally sized UCAs were found to be lower than the threshold for subharmonic emissions for either single cycle or steady state acoustic excitations. Because the thresholds of both subharmonic emissions and UCA rupture are linearly dependent on frequency, an index of the form I(CAV) = P(r)/f (where P(r) is the peak rarefactional pressure in MPa and f is the frequency in MHz) was derived to gauge the likelihood of subharmonic emissions due to stable cavitation activity nucleated from UCAs.
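The derived index is a one-line formula; for clarity, here it is beside the mechanical index it complements (the operating point chosen below is an invented example):

```python
# The abstract's derived index I_CAV = P_r / f, beside the mechanical index
# MI = P_r / sqrt(f), where P_r is peak rarefactional pressure in MPa and
# f is frequency in MHz.
def cavitation_index(p_r_mpa, f_mhz):
    """Gauges likelihood of subharmonic emissions from stable cavitation."""
    return p_r_mpa / f_mhz

def mechanical_index(p_r_mpa, f_mhz):
    """Gauges likelihood of adverse bioeffects from inertial cavitation."""
    return p_r_mpa / f_mhz ** 0.5

# An illustrative 0.5 MPa rarefactional pulse at 2 MHz:
print(cavitation_index(0.5, 2.0), round(mechanical_index(0.5, 2.0), 3))  # -> 0.25 0.354
```

The different frequency dependence (1/f versus 1/√f) reflects the finding that both subharmonic-emission and rupture thresholds scale linearly with frequency.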
Safe semi-supervised learning based on weighted likelihood.
Kawakita, Masanori; Takeuchi, Jun'ichi
2014-05-01
We are interested in developing a safe semi-supervised learning method that works in any situation. Semi-supervised learning postulates that n′ unlabeled data are available in addition to n labeled data. However, almost all of the previous semi-supervised methods require additional assumptions (not only unlabeled data) to make improvements on supervised learning. If such assumptions are not met, then the methods possibly perform worse than supervised learning. Sokolovska, Cappé, and Yvon (2008) proposed a semi-supervised method based on a weighted likelihood approach. They proved that this method asymptotically never performs worse than supervised learning (i.e., it is safe) without any assumption. Their method is attractive because it is easy to implement and is potentially general. Moreover, it is deeply related to a certain statistical paradox. However, the method of Sokolovska et al. (2008) assumes a very limited situation, i.e., classification, discrete covariates, n′→∞, and a maximum likelihood estimator. In this paper, we extend their method by modifying the weight. We prove that our proposal is safe in a significantly wide range of situations as long as n ≤ n′. Further, we give a geometrical interpretation of the proof of safety through the relationship with the above-mentioned statistical paradox. Finally, we show that the above proposal is asymptotically safe even when n′
A novel velocity estimator using multiple frequency carriers
DEFF Research Database (Denmark)
Zhang, Zhuo; Jakobsson, Andreas; Nikolov, Svetoslav
2004-01-01
. In this paper, we propose a nonlinear least squares (NLS) estimator. Typically, NLS estimators are computationally cumbersome, in general requiring the minimization of a multidimensional and often multimodal cost function. Here, by noting that the unknown velocity will result in a common known frequency......Most modern ultrasound scanners use the so-called pulsed-wave Doppler technique to estimate the blood velocities. Among the narrowband-based methods, the autocorrelation estimator and the Fourier-based method are the most commonly used approaches. Due to the low level of the blood echo, the signal......-to-noise ratio is low, and some averaging in depth is applied to improve the estimate. Further, due to velocity gradients in space and time, the spectrum may get smeared. An alternative approach is to use a pulse with multiple frequency carriers, and do some form of averaging in the frequency domain. However...
Diffraction imaging and velocity analysis using oriented velocity continuation
Decker, Luke
2014-08-05
We perform seismic diffraction imaging and velocity analysis by separating diffractions from specular reflections and decomposing them into slope components. We image slope components using extrapolation in migration velocity in time-space-slope coordinates. The extrapolation is described by a convection-type partial differential equation and implemented efficiently in the Fourier domain. Synthetic and field data experiments show that the proposed algorithm is able to detect accurate time-migration velocities by automatically measuring the flatness of events in dip-angle gathers.
Collision Based Blood Cell Distribution of the Blood Flow
Cinar, Yildirim
2003-11-01
Introduction: The goal of the study is the determination of the energy-transfer process between colliding masses and the application of the results to the distribution of cells, velocity, and kinetic energy in arterial blood flow. Methods: Mathematical methods and models were used to explain the collision between two moving systems, and the distribution of linear momentum, rectilinear velocity, and kinetic energy in a collision. Results: As the mass of the second system decreases, the velocity and momentum of the constant-mass first system decrease, and the linearly decreasing mass of the second system captures a larger share of the kinetic energy and rectilinear velocity of the collision system on a logarithmic scale. Discussion: The concentration of blood cells at the center of blood flow in an artery is not explained by the Bernoulli principle alone; the kinetic energy and velocity distribution due to collisions between the large mass of the arterial wall and the small mass of blood cells must be considered as well.
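The mass-dependence described in the Results can be seen from the standard one-dimensional elastic-collision formulas (textbook relations, not the paper's model): as the second mass shrinks, it carries away ever more rectilinear velocity.

```python
# Illustrative 1-D elastic collision conserving momentum and kinetic energy.
def elastic_1d(m1, v1, m2, v2):
    """Post-collision velocities of masses m1, m2 with initial v1, v2."""
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

for m2 in (1.0, 0.1, 0.01):                 # decreasing second mass
    v1p, v2p = elastic_1d(1.0, 1.0, m2, 0.0)
    print(m2, round(v1p, 3), round(v2p, 3))
```

In the limit m2 → 0 the light mass leaves with nearly twice the incident velocity, while the heavy mass is barely slowed, which is the qualitative behavior the abstract invokes for wall-cell collisions.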
230Th and 234Th as coupled tracers of particle cycling in the ocean: A maximum likelihood approach
Wang, Wei-Lei; Armstrong, Robert A.; Cochran, J. Kirk; Heilbrun, Christina
2016-05-01
We applied maximum likelihood estimation to measurements of Th isotopes (234,230Th) in Mediterranean Sea sediment traps that separated particles according to settling velocity. This study contains two unique aspects. First, it relies on settling velocities that were measured using sediment traps, rather than on measured particle sizes and an assumed relationship between particle size and sinking velocity. Second, because of the labor and expense involved in obtaining these data, they were obtained at only a few depths, and their analysis required constructing a new type of box-like model, which we refer to as a "two-layer" model, that we then analyzed using likelihood techniques. Likelihood techniques were developed in the 1930s by statisticians, and form the computational core of both Bayesian and non-Bayesian statistics. Their use has recently become very popular in ecology, but they are relatively unknown in geochemistry. Our model was formulated by assuming steady state and first-order reaction kinetics for thorium adsorption and desorption, and for particle aggregation, disaggregation, and remineralization. We adopted a cutoff settling velocity (49 m/d) from Armstrong et al. (2009) to separate particles into fast- and slow-sinking classes. A unique set of parameters with no dependence on prior values was obtained. Adsorption rate constants for both slow- and fast-sinking particles are slightly higher in the upper layer than in the lower layer. Slow-sinking particles have higher adsorption rate constants than fast-sinking particles. Desorption rate constants are higher in the lower layer (slow-sinking particles: 13.17 ± 1.61 y-1, fast-sinking particles: 13.96 ± 0.48 y-1) than in the upper layer (slow-sinking particles: 7.87 ± 0.60 y-1, fast-sinking particles: 1.81 ± 0.44 y-1). Aggregation rate constants were higher, 1.88 ± 0.04 y-1, in the upper layer and just 0.07 ± 0.01 y-1 in the lower layer. Disaggregation rate constants were just 0.30 ± 0.10 y-1 in the upper
Radial velocity asymmetries from jets with variable velocity profiles
International Nuclear Information System (INIS)
Cerqueira, A. H.; Vasconcelos, M. J.; Velazquez, P. F.; Raga, A. C.; De Colle, F.
2006-01-01
We have computed a set of 3-D numerical simulations of radiatively cooling jets including variabilities in both the ejection direction (precession) and the jet velocity (intermittence), using the Yguazu-a code. In order to investigate the effects of jet rotation on the shape of the line profiles, we also introduce an initial toroidal rotation velocity profile. Since the Yguazu-a code includes an atomic/ionic network, we are able to compute the emission coefficients for several emission lines, and we generate line profiles for the Hα, [O I]λ6300, [S II]λ6716 and [N II]λ6548 lines. Using initial parameters that are suitable for the DG Tau microjet, we show that the computed radial velocity shift for the medium-velocity component of the line profile as a function of distance from the jet axis is strikingly similar for rotating and non-rotating jet models
Maximum likelihood positioning algorithm for high-resolution PET scanners
International Nuclear Information System (INIS)
Gross-Weege, Nicolas; Schug, David; Hallen, Patrick; Schulz, Volkmar
2016-01-01
Purpose: In high-resolution positron emission tomography (PET), lightsharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the size of the readout channel. In order to identify the hit crystal by means of the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods: The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions given by probability density functions (PDFs) with the measured light distribution. Instead of modeling the PDFs by using an analytical model, the PDFs of the proposed ML algorithm are generated assuming a single-gamma-interaction model from measured data. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II D PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and the COG algorithm regarding incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of using a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison to the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms. Additionally, the authors demonstrated that the performance of the ML
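The contrast between the two positioning schemes can be sketched as follows. This is an illustrative toy, not the scanner's implementation: Gaussian channel templates stand in for the measured single-gamma PDFs, and the channel count, crystal count, and widths are invented.

```python
import numpy as np

# Toy ML crystal identification from a measured light distribution,
# with a centre-of-gravity (COG) readout for comparison.
n_channels, n_crystals = 8, 16
chan = np.arange(n_channels)
centers = np.linspace(0.0, n_channels - 1.0, n_crystals)   # crystal positions
templates = np.exp(-0.5 * ((chan[None, :] - centers[:, None]) / 0.8) ** 2)
templates /= templates.sum(axis=1, keepdims=True)          # light fraction per channel

def ml_position(light, sigma=0.02):
    """Crystal index maximizing a Gaussian likelihood of the light distribution."""
    loglik = -((light[None, :] - templates) ** 2).sum(axis=1) / (2 * sigma ** 2)
    return int(np.argmax(loglik))

def cog_position(light):
    """Centre-of-gravity readout in channel units."""
    return float((chan * light).sum() / light.sum())

light = templates[5]                      # noise-free hit in crystal 5
print(ml_position(light))                 # -> 5
print(round(cog_position(light), 2))      # close to centers[5] (about 2.33)
```

The ML estimate returns a discrete crystal index by comparing against expected distributions, which is what lets it tolerate noise and missing channels better than the raw COG; in the paper the templates are built from measured data rather than an analytical model.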
Fractals control in particle's velocity
International Nuclear Information System (INIS)
Zhang Yongping; Liu Shutang; Shen Shulan
2009-01-01
The Julia set, a fractal set from the literature of nonlinear physics, has significance for engineering applications. For example, the fractal structure characteristics of the generalized M-J set can visually reflect how a particle's velocity changes. According to real-world requirements, the system needs to exhibit various particle velocities in some cases. Thus, the control of this nonlinear behavior, i.e., of the Julia set, has attracted broad attention. In this work, an auxiliary feedback control is introduced to effectively control the Julia set that visually reflects the change rule of a particle's velocity. It satisfies the performance requirements of real-world problems.
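The object being controlled is the familiar escape-time construction. A minimal membership test for the filled Julia set of f(z) = z² + c (the paper's auxiliary feedback-control term is beyond this sketch; the parameter c below is a standard example):

```python
# Escape-time membership test for the filled Julia set of f(z) = z*z + c.
def escapes(z: complex, c: complex, max_iter: int = 100, radius: float = 2.0) -> bool:
    """True if the orbit of z under z -> z*z + c leaves the disk |z| <= radius."""
    for _ in range(max_iter):
        if abs(z) > radius:
            return True
        z = z * z + c
    return False

c = complex(-1.0, 0.0)          # classic parameter with a bounded 2-cycle 0 -> -1 -> 0
print(escapes(0.0, c), escapes(2.5, c))   # -> False True
```

A feedback control in this setting perturbs the iteration so that the boundary between bounded and escaping orbits, and hence the depicted velocity structure, takes a desired shape.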
International Nuclear Information System (INIS)
Augensen, H.J.; Buscombe, W.
1978-01-01
Using the model of the Galaxy presented by Eggen, Lynden-Bell and Sandage (1962), plane galactic orbits have been calculated for 800 southern high-velocity stars which possess parallax, proper motion, and radial velocity data. The stars with trigonometric parallaxes were selected from Buscombe and Morris (1958), supplemented by more recent spectroscopic data. Photometric parallaxes from infrared color indices were used for bright red giants studied by Eggen (1970), and for red dwarfs for which Rodgers and Eggen (1974) determined radial velocities. A color-color diagram based on published values of (U-B) and (B-V) for most of these stars is shown. (Auth.)
CSIR Research Space (South Africa)
Kok, S
2012-07-01
Full Text Available continuously as the correlation function hyper-parameters approach zero. Since the global minimizer of the maximum likelihood function is an asymptote in this case, it is unclear if maximum likelihood estimation (MLE) remains valid. Numerical ill...
Transfer Entropy as a Log-Likelihood Ratio
Barnett, Lionel; Bossomaier, Terry
2012-09-01
Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
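In the Gaussian case the equivalence with Granger causality can be seen directly: the transfer entropy equals half the log-ratio of restricted to full residual variances from autoregressions. A synthetic sketch (the coupling coefficients are invented for illustration):

```python
import numpy as np

# Gaussian special case: transfer entropy Y->X as half the log-ratio of
# restricted vs. full AR residual variances (Granger causality).
rng = np.random.default_rng(7)
n = 5000
y = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):                        # X is driven by lagged Y
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.standard_normal()

def residual_var(target, regressors):
    """Mean squared residual of an OLS fit of target on the regressors."""
    A = np.column_stack(regressors)
    beta, *_ = np.linalg.lstsq(A, target, rcond=None)
    return float(np.mean((target - A @ beta) ** 2))

full = residual_var(x[1:], [x[:-1], y[:-1]])        # predict x from past x and y
restricted = residual_var(x[1:], [x[:-1]])          # predict x from past x only
te = 0.5 * np.log(restricted / full)                # transfer entropy in nats
print(round(float(te), 2))
```

Adding the lagged Y regressor can only shrink the residual variance, so the estimate is non-negative by construction, and for uncoupled series it concentrates near zero, which is what the χ² null distribution in the abstract formalizes.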
A Non-standard Empirical Likelihood for Time Series
DEFF Research Database (Denmark)
Nordman, Daniel J.; Bunzel, Helle; Lahiri, Soumendra N.
Standard blockwise empirical likelihood (BEL) for stationary, weakly dependent time series requires specifying a fixed block length as a tuning parameter for setting confidence regions. This aspect can be difficult and impacts coverage accuracy. As an alternative, this paper proposes a new version...... of BEL based on a simple, though non-standard, data-blocking rule which uses a data block of every possible length. Consequently, the method involves no block selection and is also anticipated to exhibit better coverage performance. Its non-standard blocking scheme, however, induces non......-standard asymptotics and requires a significantly different development compared to standard BEL. We establish the large-sample distribution of log-ratio statistics from the new BEL method for calibrating confidence regions for mean or smooth function parameters of time series. This limit law is not the usual chi...
Neutron spectra unfolding with maximum entropy and maximum likelihood
International Nuclear Information System (INIS)
Itoh, Shikoh; Tsunoda, Toshiharu
1989-01-01
A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression for the covariance matrix of the resultant spectra is also presented. An efficient algorithm has been established to solve the nonlinear system that appears in the present study. Results of computer simulation showed the effectiveness of the present theory. (author)
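The key property claimed above, that a Poisson maximum-likelihood treatment always yields a positive solution, is shared by the classic multiplicative EM (MLEM) update for unfolding. The sketch below is that generic algorithm, not the authors' combined maximum-entropy method, and the 3-bin response matrix is invented for illustration:

```python
import numpy as np

def mlem_unfold(R, counts, n_iter=1000):
    """Maximum-likelihood (EM) unfolding for counts ~ Poisson(R @ phi).
    The multiplicative update keeps the unfolded spectrum positive at
    every iteration, for both over- and underdetermined systems."""
    phi = np.full(R.shape[1], counts.sum() / R.sum())  # flat initial guess
    norm = R.sum(axis=0)                               # per-bin efficiency
    for _ in range(n_iter):
        expected = R @ phi                             # predicted counts
        phi *= (R.T @ (counts / expected)) / norm      # EM update
    return phi

# toy 3-bin spectrum observed through an invented smearing response
R = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.2, 0.8]])
true_phi = np.array([100.0, 50.0, 25.0])
counts = R @ true_phi             # noise-free counts for illustration
phi_hat = mlem_unfold(R, counts)
```

With consistent, noise-free data and an invertible response, the iteration converges to the exact spectrum while remaining positive throughout.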
Narrow band interference cancelation in OFDM: A structured maximum likelihood approach
Sohail, Muhammad Sadiq
2012-06-01
This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure based technique uses the fact that the NBI signal is sparse as compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.
Calibration of two complex ecosystem models with different likelihood functions
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments, or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can result if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems (in this research a further developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via a likelihood function (the degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of the different model
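The likelihood-based calibration step described above can be sketched with a toy one-parameter stand-in model, a Gaussian goodness-of-fit log-likelihood, and a Metropolis sampler. The stand-in "model", noise level, and tuning constants are illustrative assumptions, not the PaSim/Biome-BGC setup:

```python
import numpy as np

def gaussian_loglik(sim, obs, sigma):
    """Gaussian goodness-of-fit log-likelihood between simulated and
    measured series (one of several formulations such studies compare)."""
    return -0.5 * np.sum(((sim - obs) / sigma) ** 2)

def model(rate, t):
    """Hypothetical one-parameter stand-in for an ecosystem model."""
    return rate * t

# Metropolis sampling of the unknown parameter given noisy observations
rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 50)
obs = model(2.0, t) + 0.5 * rng.standard_normal(50)  # synthetic "measurements"
theta, chain = 1.0, []
cur = gaussian_loglik(model(theta, t), obs, 0.5)
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal()       # random-walk proposal
    new = gaussian_loglik(model(prop, t), obs, 0.5)
    if np.log(rng.random()) < new - cur:             # Metropolis accept step
        theta, cur = prop, new
    chain.append(theta)
theta_hat = np.mean(chain[1000:])                    # posterior mean after burn-in
```

Working in log-likelihoods, as here, avoids the numerical underflow that the raw likelihood would suffer for poorly fitting parameter values.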
Preliminary attempt on maximum likelihood tomosynthesis reconstruction of DEI data
International Nuclear Information System (INIS)
Wang Zhentian; Huang Zhifeng; Zhang Li; Kang Kejun; Chen Zhiqiang; Zhu Peiping
2009-01-01
Tomosynthesis is a three-dimensional reconstruction method that can remove the effect of superimposition using limited-angle projections. It is especially promising in mammography, where radiation dose is a concern. In this paper, we propose a maximum likelihood tomosynthesis reconstruction algorithm (ML-TS) for the apparent absorption data of diffraction enhanced imaging (DEI). The motivation of this contribution is to develop a tomosynthesis algorithm for low-dose or noisy circumstances and to bring DEI closer to clinical application. The statistical models of DEI data are analyzed from the underlying physics, and the proposed algorithm is validated with experimental data from the Beijing Synchrotron Radiation Facility (BSRF). The results of ML-TS show better contrast than those of the well-known 'shift-and-add' algorithm and the FBP algorithm. (authors)
H.264 SVC Complexity Reduction Based on Likelihood Mode Decision
Directory of Open Access Journals (Sweden)
L. Balaji
2015-01-01
Full Text Available H.264 Advanced Video Coding (AVC) was extended to Scalable Video Coding (SVC). SVC runs on a variety of electronic devices such as personal computers, HDTV, SDTV, IPTV, and full-HDTV, on which users demand various scalings of the same content. These scalings include resolution, frame rate, quality, heterogeneous networks, bandwidth, and so forth. Scaling consumes more encoding time and computational complexity during mode selection. In this paper, to reduce encoding time and computational complexity, a fast mode decision algorithm based on likelihood mode decision (LMD) is proposed. LMD is evaluated in both temporal and spatial scaling. From the results, we conclude that LMD performs well when compared to previous fast mode decision algorithms. The comparison parameters are time, PSNR, and bit rate. LMD achieves a time saving of 66.65% with a 0.05% loss in PSNR and a 0.17% increase in bit rate compared with the full search method.
Likelihood Approximation With Hierarchical Matrices For Large Spatial Datasets
Litvinenko, Alexander
2017-09-03
We use available measurements to estimate the unknown parameters (variance, smoothness parameter, and covariance length) of a covariance function by maximizing the joint Gaussian log-likelihood function. To overcome cubic complexity in the linear algebra, we approximate the discretized covariance function in the hierarchical (H-) matrix format. The H-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer and n is the number of locations. The H-matrix technique allows us to work with general covariance matrices in an efficient way, since H-matrices can approximate inhomogeneous covariance functions, with a fairly general mesh that is not necessarily axes-parallel, and neither the covariance matrix itself nor its inverse have to be sparse. We demonstrate our method with Monte Carlo simulations and an application to soil moisture data. The C, C++ codes and data are freely available.
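As a dense reference for the quantity the H-matrix machinery approximates, the joint Gaussian log-likelihood can be evaluated with an O(n³) Cholesky factorization and maximized over the covariance length. The exponential covariance, 1-D locations, and grid search below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def exp_cov(locs, sigma2, ell):
    """Exponential covariance C_ij = sigma2 * exp(-|s_i - s_j| / ell)."""
    d = np.abs(locs[:, None] - locs[None, :])
    return sigma2 * np.exp(-d / ell)

def gauss_loglik(z, locs, sigma2, ell, nugget=1e-8):
    """Joint Gaussian log-likelihood via a dense Cholesky factorization,
    i.e. the O(n^3) computation the H-matrix format replaces with an
    O(k n log n) approximation."""
    C = exp_cov(locs, sigma2, ell) + nugget * np.eye(len(z))
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L, z)                 # solves L @ alpha = z
    logdet = 2.0 * np.log(np.diag(L)).sum()       # log det C from the factor
    return -0.5 * (logdet + alpha @ alpha + len(z) * np.log(2 * np.pi))

# recover the covariance length of a simulated 1-D field by grid search
rng = np.random.default_rng(1)
locs = np.sort(rng.uniform(0.0, 10.0, 300))
L_true = np.linalg.cholesky(exp_cov(locs, 1.0, 1.5) + 1e-8 * np.eye(300))
z = L_true @ rng.standard_normal(300)             # one field realization
grid = np.linspace(0.3, 4.0, 38)
ell_hat = grid[np.argmax([gauss_loglik(z, locs, 1.0, g) for g in grid])]
```

The Cholesky factor delivers both the log-determinant and the quadratic form in one pass, which is exactly the pair of terms whose cubic cost motivates the hierarchical-matrix approximation.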
Music genre classification via likelihood fusion from multiple feature models
Shiu, Yu; Kuo, C.-C. J.
2005-01-01
Music genre provides an efficient way to index songs in a music database and can be used as an effective means to retrieve music of a similar type, i.e. content-based music retrieval. A new two-stage scheme for music genre classification is proposed in this work. At the first stage, we examine several different features, construct their corresponding parametric models (e.g. GMM and HMM), and compute their likelihood functions to yield soft classification results. In particular, the timbre, rhythm, and temporal variation features are considered. Then, at the second stage, these soft classification results are integrated into a hard decision for final music genre classification. Experimental results are given to demonstrate the performance of the proposed scheme.
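Assuming per-feature class-conditional models and feature independence (a deliberate simplification of the paper's GMM/HMM setup, with toy single-Gaussian models and invented parameter values), the two-stage fuse-then-decide scheme can be sketched as:

```python
import numpy as np

def gaussian_loglik(x, mu, var):
    """Log-likelihood of a 1-D feature under a single-Gaussian class model,
    standing in for the per-feature GMM/HMM likelihoods of the paper."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

# hypothetical per-genre, per-feature models: {genre: [(mu, var), ...]}
models = {
    "classical": [(0.2, 0.05), (0.3, 0.1)],   # (timbre, rhythm) -- toy values
    "rock":      [(0.8, 0.05), (0.7, 0.1)],
}

def classify(features):
    """Stage 1: soft per-feature log-likelihoods per genre.
    Stage 2: fuse by summation (independence assumption) and take
    the hard arg-max decision."""
    scores = {genre: sum(gaussian_loglik(x, mu, var)
                         for x, (mu, var) in zip(features, params))
              for genre, params in models.items()}
    return max(scores, key=scores.get), scores

label, scores = classify([0.75, 0.65])   # features near the "rock" means
```

Summing log-likelihoods across feature models is the simplest fusion rule; the paper's second stage can be read as a learned generalization of this step.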
Marginal Maximum Likelihood Estimation of Item Response Models in R
Directory of Open Access Journals (Sweden)
Matthew S. Johnson
2007-02-01
Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
Maximum likelihood estimation of phase-type distributions
DEFF Research Database (Denmark)
Esparza, Luz Judith R
for both univariate and multivariate cases. Methods like the EM algorithm and Markov chain Monte Carlo are applied for this purpose. Furthermore, this thesis provides explicit formulae for computing the Fisher information matrix for discrete and continuous phase-type distributions, which is needed to find......This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions...... confidence regions for their estimated parameters. Finally, a new general class of distributions, called bilateral matrix-exponential distributions, is defined. These distributions have the entire real line as domain and can be used, for instance, for modelling. In addition, this class of distributions...
The elaboration likelihood model and communication about food risks.
Frewer, L J; Howard, C; Hedderley, D; Shepherd, R
1997-12-01
Factors such as hazard type and source credibility have been identified as important in the establishment of effective strategies for risk communication. The elaboration likelihood model was adapted to investigate the potential impact of hazard type, information source, and persuasive content of information on individual engagement in elaborative, or thoughtful, cognitions about risk messages. One hundred sixty respondents were allocated to one of eight experimental groups, and the effects of source credibility, persuasive content of information, and hazard type were systematically varied. The impact of the different factors on beliefs about the information and on elaborative processing was examined. Low credibility was particularly important in reducing risk perceptions, although persuasive content and hazard type were also influential in determining whether elaborative processing occurred.
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems
Directory of Open Access Journals (Sweden)
Daigle Bernie J
2012-05-01
Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM²): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM² substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods
CONSTRUCTING A FLEXIBLE LIKELIHOOD FUNCTION FOR SPECTROSCOPIC INFERENCE
International Nuclear Information System (INIS)
Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.; Green, Gregory M.; Hogg, David W.
2015-01-01
We present a modular, extensible likelihood framework for spectroscopic inference based on synthetic model spectra. The subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints (pixels) into the residual spectrum. For the high signal-to-noise data with large spectral range that is commonly employed in stellar astrophysics, that covariant structure can lead to dramatically underestimated parameter uncertainties (and, in some cases, biases). We construct a likelihood function that accounts for the structure of the covariance matrix, utilizing the machinery of Gaussian process kernels. This framework specifically addresses the common problem of mismatches in model spectral line strengths (with respect to data) due to intrinsic model imperfections (e.g., in the atomic/molecular databases or opacity prescriptions) by developing a novel local covariance kernel formalism that identifies and self-consistently downweights pathological spectral line “outliers.” By fitting many spectra in a hierarchical manner, these local kernels provide a mechanism to learn about and build data-driven corrections to synthetic spectral libraries. An open-source software implementation of this approach is available at http://iancze.github.io/Starfish, including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters. We demonstrate some salient features of the framework by fitting the high-resolution V-band spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate-resolution K-band spectrum of Gliese 51, an M5 field dwarf
Likelihood of illegal alcohol sales at professional sport stadiums.
Toomey, Traci L; Erickson, Darin J; Lenk, Kathleen M; Kilian, Gunna R
2008-11-01
Several studies have assessed the propensity for illegal alcohol sales at licensed alcohol establishments and community festivals, but no previous studies examined the propensity for these sales at professional sport stadiums. In this study, we assessed the likelihood of alcohol sales to both underage youth and obviously intoxicated patrons at professional sports stadiums across the United States, and assessed the factors related to the likelihood of both types of alcohol sales. We conducted pseudo-underage (i.e., persons age 21 or older who appear under 21) and pseudo-intoxicated (i.e., persons feigning intoxication) alcohol purchase attempts at stadiums that house professional hockey, basketball, baseball, and football teams. We conducted the purchase attempts at 16 sport stadiums located in 5 states. We measured 2 outcome variables: pseudo-underage sale (yes, no) and pseudo-intoxicated sale (yes, no), and 3 types of independent variables: (1) seller characteristics, (2) purchase attempt characteristics, and (3) event characteristics. Following univariate and bivariate analyses, we fit a separate series of logistic generalized mixed regression models for each outcome variable. The overall sales rates to the pseudo-underage and pseudo-intoxicated buyers were 18% and 74%, respectively. In the multivariate logistic analyses, we found that the odds of a sale to a pseudo-underage buyer in the stands were 2.9 times as large as the odds of a sale at the concession booths (30% vs. 13%; p = 0.01). The odds of a sale to an obviously intoxicated buyer in the stands were 2.9 times as large as the odds of a sale at the concession booths (89% vs. 73%; p = 0.02). Similar to studies assessing illegal alcohol sales at licensed alcohol establishments and community festivals, findings from this study show the need for interventions specifically focused on illegal alcohol sales at professional sporting events.
Targeted maximum likelihood estimation for a binary treatment: A tutorial.
Luque-Fernandez, Miguel Angel; Schomaker, Michael; Rachet, Bernard; Schnitzer, Mireille E
2018-04-23
When estimating the average effect of a binary treatment (or exposure) on an outcome, methods that incorporate propensity scores, the G-formula, or targeted maximum likelihood estimation (TMLE) are preferred over naïve regression approaches, which are biased under misspecification of a parametric outcome model. In contrast, propensity score methods require the correct specification of an exposure model. Double-robust methods only require correct specification of either the outcome or the exposure model. Targeted maximum likelihood estimation is a semiparametric double-robust method that improves the chances of correct model specification by allowing for flexible estimation using (nonparametric) machine-learning methods. It therefore requires weaker assumptions than its competitors. We provide a step-by-step guided implementation of TMLE and illustrate it in a realistic scenario based on cancer epidemiology where assumptions about correct model specification and positivity (i.e., when a study participant had 0 probability of receiving the treatment) are nearly violated. This article provides a concise and reproducible educational introduction to TMLE for a binary outcome and exposure. The reader should gain sufficient understanding of TMLE from this introductory tutorial to be able to apply the method in practice. Extensive R-code is provided in easy-to-read boxes throughout the article for replicability. Stata users will find a testing implementation of TMLE and additional material in the Appendix S1 and at the following GitHub repository: https://github.com/migariane/SIM-TMLE-tutorial. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
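Under the simplifying assumption that both working models are plain logistic regressions (TMLE proper would use flexible machine-learning fits, as the abstract stresses), the TMLE steps for a binary treatment and outcome can be sketched on simulated data. The data-generating mechanism below is invented for illustration:

```python
import numpy as np

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

def logit(p):
    return np.log(p / (1.0 - p))

def logistic_fit(X, y, offset=None, n_iter=25):
    """Maximum-likelihood logistic regression by Newton-Raphson, with an
    optional fixed offset (needed for the TMLE fluctuation step)."""
    if offset is None:
        offset = np.zeros(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = expit(offset + X @ beta)
        hess = (X * (p * (1 - p))[:, None]).T @ X
        beta += np.linalg.solve(hess, X.T @ (y - p))
    return beta

# simulated data with a known generating mechanism (illustrative only)
rng = np.random.default_rng(2)
n = 20_000
W = rng.standard_normal(n)                   # confounder
A = rng.binomial(1, expit(0.5 * W))          # binary treatment
Y = rng.binomial(1, expit(-0.5 + A + W))     # binary outcome

# Step 1: initial outcome model Q(A, W)
bq = logistic_fit(np.column_stack([np.ones(n), A, W]), Y)
Q1 = expit(np.column_stack([np.ones(n), np.ones(n), W]) @ bq)
Q0 = expit(np.column_stack([np.ones(n), np.zeros(n), W]) @ bq)
QA = np.where(A == 1, Q1, Q0)

# Step 2: propensity score g(W) = P(A = 1 | W)
bg = logistic_fit(np.column_stack([np.ones(n), W]), A)
g = expit(np.column_stack([np.ones(n), W]) @ bg)

# Step 3: targeting -- fluctuate Q along the clever covariate H
H = A / g - (1 - A) / (1 - g)
eps = logistic_fit(H[:, None], Y, offset=logit(QA))[0]

# Step 4: updated counterfactual means and the average treatment effect
Q1s = expit(logit(Q1) + eps / g)
Q0s = expit(logit(Q0) - eps / (1 - g))
ate = np.mean(Q1s - Q0s)
```

The targeting step (the one-parameter fluctuation through the clever covariate) is what distinguishes TMLE from a plain G-formula plug-in and is the source of its double robustness.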
González-Badillo, Juan José; Rodríguez-Rosell, David; Sánchez-Medina, Luis; Gorostiaga, Esteban M; Pareja-Blanco, Fernando
2014-01-01
The purpose of this study was to compare the effect on strength gains of two isoinertial resistance training (RT) programmes that only differed in actual concentric velocity: maximal (MaxV) vs. half-maximal (HalfV) velocity. Twenty participants were assigned to a MaxV (n = 9) or HalfV (n = 11) group and trained 3 times per week during 6 weeks using the bench press (BP). Repetition velocity was controlled using a linear velocity transducer. A complementary study (n = 10) aimed to analyse whether the acute metabolic (blood lactate and ammonia) and mechanical response (velocity loss) was different between the MaxV and HalfV protocols used. Both groups improved strength performance from pre- to post-training, but MaxV resulted in significantly greater gains than HalfV in all variables analysed: one-repetition maximum (1RM) strength (18.2 vs. 9.7%), velocity developed against all (20.8 vs. 10.0%), light (11.5 vs. 4.5%) and heavy (36.2 vs. 17.3%) loads common to pre- and post-tests. Light and heavy loads were identified with those moved faster or slower than 0.80 m·s⁻¹ (∼60% 1RM in BP). Lactate tended to be significantly higher for MaxV vs. HalfV, with no differences observed for ammonia which was within resting values. Both groups obtained the greatest improvements at the training velocities (≤0.80 m·s⁻¹). Movement velocity can be considered a fundamental component of RT intensity, since, for a given %1RM, the velocity at which loads are lifted largely determines the resulting training effect. BP strength gains can be maximised when repetitions are performed at maximal intended velocity.
International Nuclear Information System (INIS)
Wu Gong-Tao; Lü Yong-Jun; Liu Peng-Fei; Li Yi-Ning; Shi Qing-Fan
2012-01-01
The velocity of sound in soap foams at high gas volume fractions is experimentally studied by using the time difference method. It is found that the sound velocities increase with increasing bubble diameter and asymptotically approach the value in air when the diameter is larger than 12.5 mm. We propose a simple theoretical model for sound propagation in a disordered foam. In this model, the attenuation of a sound wave due to scattering off the bubble walls is equivalently described as the effect of an additional length. This simplification reasonably reproduces the sound velocity in foams, and the predicted results are in good agreement with the experiments. Further measurements indicate that an increase in frequency markedly slows down the sound velocity, whereas the latter does not display a strong dependence on the solution concentration.
Settling velocities in batch sedimentation
International Nuclear Information System (INIS)
Fricke, A.M.; Thompson, B.E.
1982-10-01
The sedimentation of mixtures containing one and two sizes of spherical particles (44 and 62 μm in diameter) was studied. Radioactive tracing with ⁵⁷Co was used to measure the settling velocities. The ratio of the settling velocity U of uniformly sized particles to the velocity U₀ predicted by Stokes' law was correlated with an expression of the form U/U₀ = ε^α, where ε is the liquid volume fraction and α is an empirical constant, determined experimentally to be 4.85. No effect of viscosity on the ratio U/U₀ was observed as the viscosity of the liquid medium was varied from 1×10⁻³ to 5×10⁻³ Pa·s. The settling velocities of particles in a bimodal mixture were fit by the same correlation; the ratio U/U₀ was independent of the concentrations of the different-sized particles.
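The reported correlation U/U₀ = ε^4.85 turns directly into a small calculator; the particle density and fluid properties below are illustrative assumptions, not values from the study:

```python
def stokes_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Stokes settling velocity U0 of an isolated sphere (SI units):
    U0 = (rho_p - rho_f) * g * d^2 / (18 * mu)."""
    return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu)

def hindered_velocity(u0, eps, alpha=4.85):
    """Hindered settling U = U0 * eps**alpha, with the empirically
    determined exponent alpha = 4.85 from the study."""
    return u0 * eps ** alpha

# 44-micron sphere in water; density 2500 kg/m^3 is an assumed value.
# eps = 0.7 corresponds to a 30% solids volume fraction.
u0 = stokes_velocity(44e-6, 2500.0, 1000.0, 1.0e-3)   # about 1.6e-3 m/s
u = hindered_velocity(u0, 0.7)
```

At 30% solids the hindrance factor 0.7^4.85 is roughly 0.18, so crowding slows the particles to under a fifth of their isolated Stokes velocity.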
Maximum-likelihood estimation of the hyperbolic parameters from grouped observations
DEFF Research Database (Denmark)
Jensen, Jens Ledet
1988-01-01
a least-squares problem. The second procedure Hypesti first approaches the maximum-likelihood estimate by iterating in the profile-log likelihood function for the scale parameter. Close to the maximum of the likelihood function, the estimation is brought to an end by iteration, using all four parameters...
A short proof that phylogenetic tree reconstruction by maximum likelihood is hard.
Roch, Sebastien
2006-01-01
Maximum likelihood is one of the most widely used techniques to infer evolutionary histories. Although it is thought to be intractable, a proof of its hardness has been lacking. Here, we give a short proof that computing the maximum likelihood tree is NP-hard by exploiting a connection between likelihood and parsimony observed by Tuffley and Steel.
Online Wavelet Complementary velocity Estimator.
Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin
2018-02-01
In this paper, we have proposed a new online Wavelet Complementary velocity Estimator (WCE) operating on position and acceleration data gathered from an electro-hydraulic servo shaking table. This is a batch-type estimator based on wavelet filter banks, which extract the high- and low-resolution content of the data. The proposed complementary estimator combines these two velocity resolutions, acquired from numerical differentiation of the position signal and numerical integration of the acceleration signal, using a fixed moving-horizon window as input to the wavelet filter. Because wavelet filters are used, the method can be implemented as a parallel procedure. In this way the velocity is estimated without the high noise of numerical differentiators or the drifting bias of integration, and with less delay, which makes it suitable for active vibration control in high-precision mechatronic systems via Direct Velocity Feedback (DVF) methods. The method allows velocity sensors to be built with fewer mechanically moving parts, which makes them suitable for fast miniature structures. We have compared this method with Kalman and Butterworth filters in terms of stability and delay, and benchmarked them by long-time integration of the estimated velocity to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
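A classic single-band complementary filter conveys the core idea of fusing differentiated position (reliable at low frequency) with integrated acceleration (reliable at high frequency); the paper's estimator replaces this fixed crossover with wavelet filter banks. The sinusoidal test motion, noise levels, and time constant below are illustrative assumptions:

```python
import numpy as np

def complementary_velocity(pos, acc, dt, tau=0.05):
    """First-order complementary velocity estimate: integrate the
    accelerometer (good at high frequency) and continuously correct its
    drift with differentiated position (good at low frequency)."""
    alpha = tau / (tau + dt)          # crossover at ~1/(2*pi*tau) Hz
    v = np.zeros(len(pos))
    for k in range(1, len(pos)):
        v_diff = (pos[k] - pos[k - 1]) / dt          # noisy but drift-free
        v_int = v[k - 1] + acc[k] * dt               # smooth but drifting
        v[k] = alpha * v_int + (1 - alpha) * v_diff  # complementary blend
    return v

# 2 Hz sinusoidal motion sampled at 1 kHz, with noisy sensors
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
true_v = 4 * np.pi * np.cos(2 * np.pi * 2 * t)
rng = np.random.default_rng(3)
pos = np.sin(2 * np.pi * 2 * t) + 1e-3 * rng.standard_normal(len(t))
acc = -(4 * np.pi) * (2 * np.pi * 2) * np.sin(2 * np.pi * 2 * t) \
      + 0.5 * rng.standard_normal(len(t))
v_est = complementary_velocity(pos, acc, dt)
```

The blended estimate suppresses the amplified position noise that pure numerical differentiation produces, while the position path cancels the slow drift that pure integration of the accelerometer would accumulate.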
Planck 2013 results. XV. CMB power spectra and likelihood
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Gaier, T. C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jewell, J.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Laureijs, R. J.; Lawrence, C. R.; Le Jeune, M.; Leach, S.; Leahy, J. P.; Leonardi, R.; León-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. 
M.; Macías-Pérez, J. F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I. J.; Orieux, F.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; White, M.; White, S. D. M.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-11-01
This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best estimate of the CMB angular power spectrum from Planck over three decades in multipole moment, ℓ, covering 2 ≤ ℓ ≤ 2500. The main source of uncertainty at ℓ ≲ 1500 is cosmic variance. Uncertainties in small-scale foreground modelling and instrumental noise dominate the error budget at higher ℓs. We assess the impact of residual foreground and instrumental uncertainties on the final cosmological parameters. We find good internal agreement among the high-ℓ cross-spectra with residuals below a few μK² at ℓ ≲ 1000, in agreement with estimated calibration uncertainties. We compare our results with foreground-cleaned CMB maps derived from all Planck frequencies, as well as with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. We further show that the best-fit ΛCDM cosmology is in excellent agreement with preliminary Planck EE and TE polarisation spectra. We find that the standard ΛCDM cosmology is well constrained by Planck from the measurements at ℓ ≲ 1500. One specific example is the spectral index of scalar perturbations, for which we report a 5.4σ deviation from scale invariance, ns = 1. Increasing the multipole range beyond ℓ ≃ 1500 does not increase our accuracy for the ΛCDM parameters, but instead allows us to study extensions beyond the standard model. We find no indication of significant departures from the ΛCDM framework. Finally, we report a tension between the Planck best-fit ΛCDM model and the low-ℓ spectrum in the form of a power deficit of 5-10% at ℓ ≲ 40, with a statistical significance of 2.5-3σ. Without a theoretically motivated model for
Likelihood ratio model for classification of forensic evidence
Energy Technology Data Exchange (ETDEWEB)
Zadora, G., E-mail: gzadora@ies.krakow.pl [Institute of Forensic Research, Westerplatte 9, 31-033 Krakow (Poland); Neocleous, T., E-mail: tereza@stats.gla.ac.uk [University of Glasgow, Department of Statistics, 15 University Gardens, Glasgow G12 8QW (United Kingdom)
2009-05-29
One of the problems in the analysis of forensic evidence such as glass fragments is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and could be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio, LR = p(E|H{sub 1})/p(E|H{sub 2}). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and/or within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The performed analysis showed that the best likelihood model was the one which included information about both between- and within-object variability, with variables derived from elemental compositions measured by SEM-EDX, and refractive index values determined before (RI{sub b}) and after (RI{sub a}) the annealing process, in the form of dRI = log{sub 10}|RI{sub a} - RI{sub b}|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this
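The two-level LR construction described in this record can be sketched numerically. Below is a minimal, hypothetical illustration for a single feature (a dRI-like quantity): between-object variability is modelled with a Gaussian kernel density over object means, within-object variability with a normal distribution folded into the kernel width, and the LR compares two use-type categories. The data, bandwidths, and numbers are invented for illustration, not taken from the paper:

```python
import numpy as np

def category_density(y, object_means, within_sd, bandwidth):
    """Density of measurement y under one use-type category:
    between-object variability as a Gaussian KDE over the category's
    object means, within-object variability as a normal distribution
    folded into the kernel width (sd^2 = bandwidth^2 + within_sd^2)."""
    sd = np.sqrt(bandwidth**2 + within_sd**2)
    z = (y - np.asarray(object_means)) / sd
    return np.mean(np.exp(-0.5 * z**2) / (sd * np.sqrt(2.0 * np.pi)))

rng = np.random.default_rng(1)
windows    = rng.normal(1.520, 0.004, size=50)   # hypothetical object means
containers = rng.normal(1.510, 0.004, size=50)

y = 1.520                                        # recovered fragment
lr = (category_density(y, windows, 0.001, 0.002)
      / category_density(y, containers, 0.001, 0.002))
```

An LR well above 1 supports the "window" category; the multivariate version in the paper replaces the scalar kernels with multivariate ones and factorises the problem using the estimated dependence structure.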
Transverse Oscillations for Phased Array Vector Velocity Imaging
DEFF Research Database (Denmark)
Pihl, Michael Johannes; Jensen, Jørgen Arendt
2010-01-01
of superficial blood vessels. To broaden the usability of the method, it should be expanded to a phased array geometry enabling vector velocity imaging of the heart. Therefore, the scan depth has to be increased to 10-15 cm. This paper presents suitable pulse echo fields (PEF). Two lines are beamformed...... (correlation coefficient, R: -0.76), and therefore predict estimator performance. CV is correlated with the standard deviation (R=0.74). The results demonstrate the potential for using a phased array for vector velocity imaging at larger depths, and potentially for imaging the heart....
Spectral velocity estimation using autocorrelation functions for sparse data sets
DEFF Research Database (Denmark)
2006-01-01
The distribution of velocities of blood or tissue is displayed using ultrasound scanners by finding the power spectrum of the received signal. This is currently done by making a Fourier transform of the received signal and then showing spectra in an M-mode display. It is desired to show a B......-mode image for orientation, and data for this has to be acquired interleaved with the flow data. The power spectrum can be calculated from the Fourier transform of the autocorrelation function Ry (k), where its span of lags k is given by the number of emissions N in the data segment for velocity estimation......
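The relation used in this record, obtaining the power spectrum as the Fourier transform of the autocorrelation estimate Ry(k) with the span of lags limited by the number of emissions N, can be sketched as follows. The pulse repetition frequency, segment length, and single-tone signal are hypothetical toy values:

```python
import numpy as np

fs = 5000.0   # pulse repetition frequency, Hz (toy value)
N  = 32       # number of emissions in the data segment
f0 = 600.0    # frequency of the toy scatterer signal
n  = np.arange(N)
x  = np.cos(2.0 * np.pi * f0 * n / fs)

# Biased autocorrelation estimate; the span of lags k = -(N-1)..(N-1)
# is set by the number of emissions N.
Ry = np.correlate(x, x, mode="full") / N

# Power spectrum as the Fourier transform of Ry (zero-padded for a
# denser frequency grid; the magnitude is unaffected by the lag offset).
nfft   = 1024
P      = np.abs(np.fft.rfft(Ry, nfft))
freqs  = np.fft.rfftfreq(nfft, d=1.0 / fs)
f_peak = freqs[np.argmax(P)]
```

The peak of the spectrum recovers the tone near 600 Hz; the finite lag span acts as a triangular window, whose main-lobe width (roughly fs/N) sets the velocity resolution.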
A simulation study of likelihood inference procedures in rayleigh distribution with censored data
International Nuclear Information System (INIS)
Baklizi, S. A.; Baker, H. M.
2001-01-01
Inference procedures based on the likelihood function are considered for the one-parameter Rayleigh distribution with type 1 and type 2 censored data. Using simulation techniques, the finite sample performances of the maximum likelihood estimator and the large sample likelihood interval estimation procedures based on the Wald, the Rao, and the likelihood ratio statistics are investigated. It appears that the maximum likelihood estimator is unbiased. The approximate variance estimates obtained from the asymptotic normal distribution of the maximum likelihood estimator are accurate under type 2 censored data, while they tend to be smaller than the actual variances when considering type 1 censored data of small size. It also appears that interval estimation based on the Wald and Rao statistics requires a much larger sample size than interval estimation based on the likelihood ratio statistic to attain reasonable accuracy. (authors). 15 refs., 4 tabs
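As a minimal sketch of the quantities compared in this record, the following fits the one-parameter Rayleigh distribution by maximum likelihood (uncensored data, for simplicity) and contrasts a Wald interval, based on the asymptotic variance, with a likelihood ratio interval. Sample size and seed are arbitrary:

```python
import numpy as np

def rayleigh_loglik(sigma, x):
    """Log-likelihood of the one-parameter Rayleigh distribution,
    f(x; sigma) = (x / sigma^2) exp(-x^2 / (2 sigma^2))."""
    return np.sum(np.log(x) - 2 * np.log(sigma) - x**2 / (2 * sigma**2))

rng = np.random.default_rng(2)
x = rng.rayleigh(scale=2.0, size=500)

# Closed-form MLE of the scale parameter.
n = len(x)
sigma_hat = np.sqrt(np.sum(x**2) / (2 * n))

# Wald interval: Fisher information is 4n / sigma^2, so se = sigma_hat / (2 sqrt(n)).
se = sigma_hat / (2 * np.sqrt(n))
wald = (sigma_hat - 1.96 * se, sigma_hat + 1.96 * se)

# Likelihood ratio interval: sigma with 2 [ll(sigma_hat) - ll(sigma)] <= 3.84.
grid = np.linspace(0.5 * sigma_hat, 2.0 * sigma_hat, 2000)
ll_hat = rayleigh_loglik(sigma_hat, x)
keep = grid[2 * (ll_hat - np.array([rayleigh_loglik(s, x) for s in grid])) <= 3.84]
lr_interval = (keep.min(), keep.max())
```

With a large uncensored sample the two intervals nearly coincide; the record's point is that under small-sample type 1 censoring the Wald (and Rao) intervals lose accuracy faster than the likelihood ratio interval.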
Velocity distribution in snow avalanches
Nishimura, K.; Ito, Y.
1997-12-01
In order to investigate the detailed structure of snow avalanches, we have made snow flow experiments at the Miyanomori ski jump in Sapporo and systematic observations in the Shiai-dani, Kurobe Canyon. In the winter of 1995-1996, a new device to measure static pressures was used to estimate velocities in the snow cloud that develops above the flowing layer of avalanches. Measurements during a large avalanche in the Shiai-dani which damaged and destroyed some instruments indicate velocities increased rapidly to more than 50 m/s soon after the front. Velocities decreased gradually in the following 10 s. Velocities of the lower flowing layer were also calculated by differencing measurements of impact pressure. Both recordings in the snow cloud and in the flowing layer changed with a similar trend and suggest a close interaction between the two layers. In addition, the velocity showed a periodic change. Power spectrum analysis of the impact pressure and the static pressure depression showed a strong peak at a frequency between 4 and 6 Hz, which might imply the existence of either ordered structure or a series of surges in the flow.
Velocity measurement accuracy in optical microhemodynamics: experiment and simulation
International Nuclear Information System (INIS)
Chayer, Boris; Cloutier, Guy; L Pitts, Katie; Fenech, Marianne
2012-01-01
Micro particle image velocimetry (µPIV) is a common method to assess flow behavior in blood microvessels in vitro as well as in vivo. The use of red blood cells (RBCs) as tracer particles, as generally considered in vivo, creates a large depth of correlation (DOC), even as large as the vessel itself, which decreases the accuracy of the method. The limitations of µPIV for blood flow measurements based on RBC tracking still have to be evaluated. In this study, in vitro and in silico models were used to understand the effect of the DOC on blood flow measurements using µPIV RBC tracer particles. We therefore employed a µPIV technique to assess blood flow in a 15 µm radius glass tube with a high-speed CMOS camera. The tube was perfused with a sample of 40% hematocrit blood. The flow measured by a cross-correlating speckle tracking technique was compared to the flow rate of the pump. In addition, a three-dimensional mechanical RBC-flow model was used to simulate optical moving speckle at 20% and 40% hematocrits, in 15 and 20 µm radius circular tubes, at different focus planes, flow rates and for various velocity profile shapes. The velocity profiles extracted from the simulated pictures agreed well with the corresponding velocity profiles implemented in the mechanical model. The flow rates from both the in vitro flow phantom and the mathematical model were accurately measured, with errors below 10%. Simulation results demonstrated that the hematocrit (paired t tests, p = 0.5) and the tube radius (p = 0.1) do not influence the precision of the measured flow rate, whereas the shape of the velocity profile (p < 0.001) and the location of the focus plane (p < 0.001) do, as indicated by measured errors ranging from 3% to 97%. In conclusion, the use of RBCs as tracer particles creates a large DOC and affects the image processing required to estimate the flow velocities. We found that the current µPIV method is acceptable to estimate the flow rate
Efficient algorithms for maximum likelihood decoding in the surface code
Bravyi, Sergey; Suchara, Martin; Vargo, Alexander
2014-09-01
We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n²), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction however requires a special noise model with independent bit-flip and phase-flip errors. Secondly, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ³), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder, observing a significant reduction of the logical error probability for χ ≥ 4.
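The coset-probability definition of MLD in this record can be illustrated by brute force on a tiny code. The paper's matchgate and MPS algorithms compute the same quantities efficiently; exhaustive enumeration, as below, is only feasible for a handful of bits, and the 5-bit repetition code is a stand-in for the surface code:

```python
import numpy as np
from itertools import product

def mld(H, L, syndrome, p):
    """Brute-force maximum-likelihood decoding for a binary linear code:
    sum the probabilities of every bit-flip error consistent with the
    observed syndrome, grouped by logical class, and return the most
    probable class.  (The paper computes these coset probabilities
    efficiently with matchgates/MPS; enumeration is exponential in n.)"""
    n = H.shape[1]
    coset_prob = {}
    for err in product([0, 1], repeat=n):
        e = np.array(err)
        if not np.array_equal(H @ e % 2, syndrome):
            continue
        cls = tuple(L @ e % 2)                     # logical class of the error
        w = int(e.sum())
        coset_prob[cls] = coset_prob.get(cls, 0.0) + p**w * (1 - p)**(n - w)
    return max(coset_prob, key=coset_prob.get), coset_prob

# Toy code: 5-bit repetition code, with checks on neighbouring pairs
# and a logical operator that flips all bits.
H = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]])
L = np.array([[1, 1, 1, 1, 1]])
best, probs = mld(H, L, np.array([1, 0, 0, 0]), p=0.1)
```

For this syndrome the two logical cosets contain the weight-1 error (1,0,0,0,0) and its weight-4 complement, so the decoder returns the class of the light error; the surface code differs in that each coset contains exponentially many degenerate errors, which is what the coset sum handles.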
Maximum likelihood sequence estimation for optical complex direct modulation.
Che, Di; Yuan, Feng; Shieh, William
2017-04-17
Semiconductor lasers are inherently versatile optical transmitters. Through direct modulation (DM), intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters by suppressing or simply ignoring the other modulation. Nevertheless, through digital coherent detection, simultaneous intensity and angle modulation (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of the frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability that significantly enhances the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.
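As a generic illustration of MLSE, the sketch below runs the Viterbi algorithm over a toy one-symbol-memory channel with a Gaussian transition metric; the paper's actual transition probability comes from its chirp model, which is not reproduced here. The channel taps, noise level, and PAM-4 alphabet are illustrative assumptions:

```python
import numpy as np

def viterbi_mlse(received, symbols, h, sigma):
    """MLSE via the Viterbi algorithm for a toy 2-tap channel
    y_k = h[0]*s_k + h[1]*s_{k-1} + noise; the transition metric is the
    Gaussian negative log-likelihood.  (Stand-in for the paper's chirp
    model, which supplies the transition probability instead.)"""
    M, n = len(symbols), len(received)
    cost = np.zeros(M)                     # best cost ending in each state
    back = np.zeros((n, M), dtype=int)     # back-pointers for traceback
    for k in range(n):
        new_cost = np.full(M, np.inf)
        for cur in range(M):               # hypothesised current symbol
            for prev in range(M):          # hypothesised previous symbol
                pred = h[0] * symbols[cur] + h[1] * symbols[prev]
                c = cost[prev] + (received[k] - pred) ** 2 / (2 * sigma**2)
                if c < new_cost[cur]:
                    new_cost[cur], back[k, cur] = c, prev
        cost = new_cost
    path = [int(np.argmin(cost))]          # traceback of the ML sequence
    for k in range(n - 1, 0, -1):
        path.append(back[k, path[-1]])
    return [symbols[i] for i in reversed(path)]

rng = np.random.default_rng(3)
alphabet = [-3.0, -1.0, 1.0, 3.0]          # PAM-4 levels
tx = [alphabet[i] for i in rng.integers(0, 4, size=20)]
h = [1.0, 0.4]
rx = [h[0] * tx[k] + (h[1] * tx[k - 1] if k else 0.0) + 0.05 * rng.normal()
      for k in range(20)]
est = viterbi_mlse(rx, alphabet, h, sigma=0.05)
```

At this noise level the decoded sequence matches the transmitted one; the point of MLSE is that each decision uses the whole sequence through the accumulated path metrics rather than symbol-by-symbol thresholds.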
Maximum likelihood estimation for cytogenetic dose-response curves
International Nuclear Information System (INIS)
Frome, E.L; DuFrain, R.J.
1983-10-01
In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ[γd + g(t, τ)d²], where t is the time and d is dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure
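The acute-exposure special case, where the recovery term equals one and the yield reduces to a·d + b·d², can be fitted by maximum likelihood as sketched below. The doses, cell counts, and coefficients are invented toy values, and a least-squares-started Newton-Raphson stands in for the paper's general-purpose procedure:

```python
import numpy as np

def fit_linear_quadratic(dose, cells, dicentrics, iters=25):
    """ML fit of the acute-exposure yield lambda(d) = a*d + b*d^2 for
    Poisson counts dicentrics_i ~ Poisson(cells_i * lambda(dose_i)).
    Least-squares starting values, then Newton-Raphson on the
    log-likelihood.  Toy sketch; the paper's model also covers
    split-dose and continuous exposure via the g(t, tau) factor."""
    d, n, y = map(np.asarray, (dose, cells, dicentrics))
    X = np.column_stack([d, d**2])
    a, b = np.linalg.lstsq(X, y / n, rcond=None)[0]   # starting values
    for _ in range(iters):
        lam = n * (a * d + b * d**2)
        r = y / lam - 1.0
        ga, gb = np.sum(n * d * r), np.sum(n * d**2 * r)   # score vector
        Iaa = np.sum(y * (n * d / lam)**2)                 # observed
        Iab = np.sum(y * n**2 * d**3 / lam**2)             # information
        Ibb = np.sum(y * (n * d**2 / lam)**2)
        det = Iaa * Ibb - Iab**2
        a += ( Ibb * ga - Iab * gb) / det                  # Newton step
        b += (-Iab * ga + Iaa * gb) / det
    return a, b

rng = np.random.default_rng(4)
dose  = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0])   # Gy (toy design)
cells = np.full(6, 2000)
y = rng.poisson(cells * (0.03 * dose + 0.06 * dose**2))
a_hat, b_hat = fit_linear_quadratic(dose, cells, y)
```

With a few thousand cells per dose point the fitted coefficients land close to the simulating values, illustrating why Poisson regression also yields standard errors (from the information matrix) for hypothesis testing and diagnostics.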
Ringing Artefact Reduction By An Efficient Likelihood Improvement Method
Fuderer, Miha
1989-10-01
In MR imaging, the extent of the acquired spatial frequencies of the object is necessarily finite. The resulting image shows artefacts caused by "truncation" of its Fourier components. These are known as Gibbs artefacts or ringing artefacts. These artefacts are particularly visible when the time-saving reduced acquisition method is used, say, when scanning only the lowest 70% of the 256 data lines. Filtering the data results in loss of resolution. A method is described that estimates the high frequency data from the low-frequency data lines, with the likelihood of the image as criterion. It is a computationally very efficient method, since it requires practically only two extra Fourier transforms in addition to the normal reconstruction. The results of this method on MR images of human subjects are promising. Evaluations on a 70% acquisition image show about 20% decrease of the error energy after processing. "Error energy" is defined as the total power of the difference to a 256-data-lines reference image. The elimination of ringing artefacts then appears almost complete.
Scale invariant for one-sided multivariate likelihood ratio tests
Directory of Open Access Journals (Sweden)
Samruam Chongcharoen
2010-07-01
Suppose X1, X2, ..., Xn is a random sample from a Np(μ, V) distribution. Consider H0: μ1 = μ2 = ... = μp = 0 and H1: μi ≥ 0 for i = 1, 2, ..., p; let H1 − H0 denote the hypothesis that H1 holds but H0 does not, and let ~H0 denote the hypothesis that H0 does not hold. Because the likelihood ratio test (LRT) of H0 versus H1 − H0 is complicated, several ad hoc tests have been proposed. Tang, Gnecco and Geller (1989) proposed an approximate LRT, Follmann (1996) suggested rejecting H0 if the usual test of H0 versus ~H0 rejects H0 with significance level 2α and a weighted sum of the sample means is positive, and Chongcharoen, Singh and Wright (2002) modified Follmann's test to include information about the correlation structure in the sum of the sample means. Chongcharoen and Wright (2007, 2006) give versions of the Tang-Gnecco-Geller tests and Follmann-type tests, respectively, with invariance properties. Given the desired scale-invariance property of the LRT, we investigate its power by Monte Carlo techniques and compare it with the tests recommended in Chongcharoen and Wright (2007, 2006).
Maximum-likelihood estimation of recent shared ancestry (ERSA).
Huff, Chad D; Witherspoon, David J; Simonson, Tatum S; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W; Burt, Randall W; Guthery, Stephen L; Woodward, Scott R; Jorde, Lynn B
2011-05-01
Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package.
Quantifying uncertainty, variability and likelihood for ordinary differential equation models
LENUS (Irish Health Repository)
Weisse, Andrea Y
2010-10-28
Background: In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results: The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions: While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate the performance and accuracy of the approach and its limitations.
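The extended-ODE idea in this record, integrating one extra dimension carrying the density value along each characteristic, can be sketched for a scalar ODE dx/dt = f(x), where the Liouville equation gives d(log p)/dt = −f′(x) along trajectories. The logistic example and step sizes below are arbitrary choices:

```python
import numpy as np

def propagate_density(x0, logp0, f, dfdx, t_end, steps=2000):
    """Characteristics of the Liouville equation for dx/dt = f(x):
    integrate the state together with one extra dimension carrying the
    log-density, d(log p)/dt = -f'(x).  Plain Euler steps, for clarity."""
    dt = t_end / steps
    x, logp = x0, logp0
    for _ in range(steps):
        x, logp = x + dt * f(x), logp - dt * dfdx(x)
    return x, logp

# Logistic growth with an uncertain initial condition.
r = 1.5
f = lambda x: r * x * (1.0 - x)
dfdx = lambda x: r * (1.0 - 2.0 * x)

x_t, logp_t = propagate_density(0.1, 0.0, f, dfdx, t_end=2.0)

# Consistency check: p(x(t)) = p0(x0) / |dx(t)/dx0|; estimate the
# Jacobian of the flow map from two nearby characteristics.
eps = 1e-6
x_t2, _ = propagate_density(0.1 + eps, 0.0, f, dfdx, t_end=2.0)
jac = (x_t2 - x_t) / eps
```

Because the density rides along a single trajectory, evaluating p at a low-probability endpoint costs one ODE solve, where Monte Carlo would need enormous sample sizes to place any samples there.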
Affective mapping: An activation likelihood estimation (ALE) meta-analysis.
Kirby, Lauren A J; Robinson, Jennifer L
2017-11-01
Functional neuroimaging has the spatial resolution to explain the neural basis of emotions. Activation likelihood estimation (ALE), as opposed to traditional qualitative meta-analysis, quantifies convergence of activation across studies within affective categories. Others have used ALE to investigate a broad range of emotions, but without the convenience of the BrainMap database. We used the BrainMap database and analysis resources to run separate meta-analyses on coordinates reported for anger, anxiety, disgust, fear, happiness, humor, and sadness. Resultant ALE maps were compared to determine areas of convergence between emotions, as well as to identify affect-specific networks. Five out of the seven emotions demonstrated consistent activation within the amygdala, whereas all emotions consistently activated the right inferior frontal gyrus, which has been implicated as an integration hub for affective and cognitive processes. These data provide the framework for models of affect-specific networks, as well as emotional processing hubs, which can be used for future studies of functional or effective connectivity. Copyright © 2015 Elsevier Inc. All rights reserved.
Dark matter CMB constraints and likelihoods for poor particle physicists
Energy Technology Data Exchange (ETDEWEB)
Cline, James M.; Scott, Pat, E-mail: jcline@physics.mcgill.ca, E-mail: patscott@physics.mcgill.ca [Department of Physics, McGill University, 3600 rue University, Montréal, QC, H3A 2T8 (Canada)
2013-03-01
The cosmic microwave background provides constraints on the annihilation and decay of light dark matter at redshifts between 100 and 1000, the strength of which depends upon the fraction of energy ending up in the form of electrons and photons. The resulting constraints are usually presented for a limited selection of annihilation and decay channels. Here we provide constraints on the annihilation cross section and decay rate, at discrete values of the dark matter mass m{sub χ}, for all the annihilation and decay channels whose secondary spectra have been computed using PYTHIA in arXiv:1012.4515 (''PPPC 4 DM ID: a poor particle physicist cookbook for dark matter indirect detection''), namely e, μ, τ, V → e, V → μ, V → τ, u, d, s, c, b, t, γ, g, W, Z and h. By interpolating in mass, these can be used to find the CMB constraints and likelihood functions from WMAP7 and Planck for a wide range of dark matter models, including those with annihilation or decay into a linear combination of different channels.
Maximum likelihood estimation for cytogenetic dose-response curves
International Nuclear Information System (INIS)
Frome, E.L.; DuFrain, R.J.
1986-01-01
In vitro dose-response curves are used to describe the relation between chromosome aberrations and radiation dose for human lymphocytes. The lymphocytes are exposed to low-LET radiation, and the resulting dicentric chromosome aberrations follow the Poisson distribution. The expected yield depends on both the magnitude and the temporal distribution of the dose. A general dose-response model that describes this relation has been presented by Kellerer and Rossi (1972, Current Topics on Radiation Research Quarterly 8, 85-158; 1978, Radiation Research 75, 471-488) using the theory of dual radiation action. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting dose-time-response models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described, and estimation for the nonlinear models is illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure
Physical activity may decrease the likelihood of children developing constipation.
Seidenfaden, Sandra; Ormarsson, Orri Thor; Lund, Sigrun H; Bjornsson, Einar S
2018-01-01
Childhood constipation is common. We evaluated children diagnosed with constipation, who were referred to an Icelandic paediatric emergency department, and determined the effect of lifestyle factors on its aetiology. The parents of children who were diagnosed with constipation and participated in a phase IIB clinical trial on laxative suppositories answered an online questionnaire about their children's lifestyle and constipation in March-April 2013. The parents of nonconstipated children that visited the paediatric department of Landspitali University Hospital or an Icelandic outpatient clinic answered the same questionnaire. We analysed responses regarding 190 children aged one year to 18 years: 60 with constipation and 130 without. We found that 40% of the constipated children had recurrent symptoms, 27% had to seek medical attention more than once and 33% received medication per rectum. The 47 of 130 control group subjects aged 10-18 were much more likely to exercise more than three times a week (72%) and for more than an hour (62%) than the 26 of 60 constipated children of the same age (42% and 35%, respectively). Constipation risk factors varied with age and many children diagnosed with constipation had recurrent symptoms. Physical activity may affect the likelihood of developing constipation in older children. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
Maximum likelihood pedigree reconstruction using integer linear programming.
Cussens, James; Bartlett, Mark; Jones, Elinor M; Sheehan, Nuala A
2013-01-01
Large population biobanks of unrelated individuals have been highly successful in detecting common genetic variants affecting diseases of public health concern. However, they lack the statistical power to detect more modest gene-gene and gene-environment interaction effects or the effects of rare variants for which related individuals are ideally required. In reality, most large population studies will undoubtedly contain sets of undeclared relatives, or pedigrees. Although a crude measure of relatedness might sometimes suffice, having a good estimate of the true pedigree would be much more informative if this could be obtained efficiently. Relatives are more likely to share longer haplotypes around disease susceptibility loci and are hence biologically more informative for rare variants than unrelated cases and controls. Distant relatives are arguably more useful for detecting variants with small effects because they are less likely to share masking environmental effects. Moreover, the identification of relatives enables appropriate adjustments of statistical analyses that typically assume unrelatedness. We propose to exploit an integer linear programming optimisation approach to pedigree learning, which is adapted to find valid pedigrees by imposing appropriate constraints. Our method is not restricted to small pedigrees and is guaranteed to return a maximum likelihood pedigree. With additional constraints, we can also search for multiple high-probability pedigrees and thus account for the inherent uncertainty in any particular pedigree reconstruction. The true pedigree is found very quickly by comparison with other methods when all individuals are observed. Extensions to more complex problems seem feasible. © 2012 Wiley Periodicals, Inc.
Kinnear, John; Jackson, Ruth
2017-07-01
Although physicians are highly trained in the application of evidence-based medicine, and are assumed to make rational decisions, there is evidence that their decision making is prone to biases. One of the biases that has been shown to affect the accuracy of judgements is that of representativeness and base-rate neglect, where the saliency of a person's features leads to overestimation of their likelihood of belonging to a group. This results in the substitution of 'subjective' probability for statistical probability. This study examines clinicians' propensity to make estimations of subjective probability when presented with clinical information that is considered typical of a medical condition. The strength of the representativeness bias is tested by presenting choices in textual and graphic form. Understanding of statistical probability is also tested by omitting all clinical information. For the textual and graphic questions that included clinical information, 46.7% and 45.5% of clinicians, respectively, made judgements of statistical probability. Where the question omitted clinical information, 79.9% of clinicians made a judgement consistent with statistical probability. There was a statistically significant difference in responses to the questions with and without representativeness information (χ² (1, n=254) = 54.45, p < 0.001), with clinicians more likely to substitute subjective for statistical probability when representativeness information was present. One of the causes for this representativeness bias may be the way clinical medicine is taught, where stereotypic presentations are emphasised in diagnostic decision making. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Race of source effects in the elaboration likelihood model.
White, P H; Harkins, S G
1994-11-01
In a series of experiments, we investigated the effect of race of source on persuasive communications in the Elaboration Likelihood Model (R.E. Petty & J.T. Cacioppo, 1981, 1986). In Experiment 1, we found no evidence that White participants responded to a Black source as a simple negative cue. Experiment 2 suggested the possibility that exposure to a Black source led to low-involvement message processing. In Experiments 3 and 4, a distraction paradigm was used to test this possibility, and it was found that participants under low involvement were highly motivated to process a message presented by a Black source. In Experiment 5, we found that attitudes toward the source's ethnic group, rather than violations of expectancies, accounted for this processing effect. Taken together, the results of these experiments are consistent with S.L. Gaertner and J.F. Dovidio's (1986) theory of aversive racism, which suggests that Whites, because of a combination of egalitarian values and underlying negative racial attitudes, are very concerned about not appearing unfavorable toward Blacks, leading them to be highly motivated to process messages presented by a source from this group.
Maximum likelihood approach for several stochastic volatility models
International Nuclear Information System (INIS)
Camprodon, Jordi; Perelló, Josep
2012-01-01
Volatility measures the amplitude of price fluctuations. Despite being one of the most important quantities in finance, volatility is not directly observable. Here we apply a maximum likelihood method which assumes that price and volatility follow a two-dimensional diffusion process in which volatility is the stochastic diffusion coefficient of the log-price dynamics. We apply this method to the simplest versions of the expOU, the OU, and the Heston stochastic volatility models, and we study their performance in terms of the log-price probability, the volatility probability, and the volatility's mean first-passage time. The approach has some predictive power on the future return amplitude from knowledge of the current volatility alone. The assumed models do not consider long-range volatility autocorrelation or the asymmetric return-volatility cross-correlation, but the method still yields these two important stylized facts very naturally. We apply the method to different market indices, with good performance in all cases. (paper)
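The core ingredient, an exact discrete-time likelihood for a mean-reverting (OU) diffusion, can be illustrated on its own. All parameter values below are hypothetical, and the sketch fits an observed OU path directly rather than treating volatility as latent, so it shows only the likelihood machinery, not the full method of the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulate an OU path dX = -k(X - m)dt + s dW using its exact discrete
# transition: X_{t+dt} = m + (X_t - m)*exp(-k*dt) + Gaussian noise.
k_true, m_true, s_true, dt, n = 1.0, 0.0, 0.5, 0.1, 5000
phi = np.exp(-k_true * dt)
sd = s_true * np.sqrt((1 - phi**2) / (2 * k_true))
x = np.empty(n)
x[0] = m_true
for t in range(1, n):
    x[t] = m_true + (x[t - 1] - m_true) * phi + sd * rng.standard_normal()

def neg_log_lik(theta):
    k, m, s = theta
    if k <= 0 or s <= 0:
        return np.inf
    ph = np.exp(-k * dt)
    var = s**2 * (1 - ph**2) / (2 * k)       # exact transition variance
    resid = x[1:] - m - (x[:-1] - m) * ph    # one-step prediction errors
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

fit = minimize(neg_log_lik, x0=[0.5, 0.1, 0.3], method="Nelder-Mead")
k_hat, m_hat, s_hat = fit.x
```

In the expOU model the same transition density would apply to log-volatility, which is only observed through the returns, hence the extra work in the paper.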
Velocity Estimate Following Air Data System Failure
National Research Council Canada - National Science Library
McLaren, Scott A
2008-01-01
.... A velocity estimator (VEST) algorithm was developed to combine the inertial and wind velocities to provide an estimate of the aircraft's current true velocity to be used for command path gain scheduling and for display in the cockpit...
DEFF Research Database (Denmark)
Olsen, Michael; Greve, Sara; Blicher, Marie
2016-01-01
OBJECTIVE: Carotid-femoral pulse wave velocity (cfPWV) adds significantly to traditional cardiovascular (CV) risk prediction, but is not widely available. Therefore, it would be helpful if cfPWV could be replaced by an estimated carotid-femoral pulse wave velocity (ePWV) using age and mean blood...... pressure and previously published equations. The aim of this study was to investigate whether ePWV could predict CV events independently of traditional cardiovascular risk factors and/or cfPWV. DESIGN AND METHOD: cfPWV was measured and ePWV calculated in 2366 apparently healthy subjects from four age...
Hemodynamic study of ischemic limb by velocity measurement in foot
International Nuclear Information System (INIS)
Shionoya, S.; Hirai, M.; Kawai, S.; Ohta, T.; Seko, T.
1981-01-01
Arterial flow velocity in the foot during reactive hyperemia was measured by means of a tracer technique with 99mTc-pertechnetate, using seven zonal regions of interest, 6 mm in width, placed at equal spacings of 18 mm from the toe tip to the midfoot at a right angle to the long axis of the foot. The mean velocity in the foot was 5.66 ± 1.78 cm/sec in 14 normal limbs, 1.58 ± 1.07 cm/sec in 29 limbs with distal thromboangiitis obliterans (TAO), 0.89 ± 0.61 cm/sec in 13 limbs with proximal TAO, and 0.97 ± 0.85 cm/sec in 15 limbs with arteriosclerosis obliterans (ASO). The velocity returned to normal in all 12 limbs after successful arterial reconstruction, whereas the foot or toe blood pressure remained pathologic in 9 of the 12 limbs postoperatively; the velocity reverted to normal in 4 of 13 limbs after lumbar sympathectomy. When the velocity normalized after operation, the ulceration healed favorably and the ischemic limb was salvaged. The most characteristic feature of peripheral arterial occlusive disease of the lower extremity was a stagnation of arterial circulation in the foot, and the flow velocity in the foot was a sensitive predictive index of limb salvage
Cosmic string induced peculiar velocities
International Nuclear Information System (INIS)
van Dalen, A.; Schramm, D.N.
1987-02-01
We calculate analytically the probability distribution for peculiar velocities on scales from 10h⁻¹ to 60h⁻¹ Mpc with cosmic string loops as the dominant source of primordial gravitational perturbations. We consider a range of parameters βGμ appropriate for both hot (HDM) and cold (CDM) dark matter scenarios. An Ω = 1 CDM Universe is assumed, with the loops randomly placed on a smooth background. It is shown how the effects of loops breaking up and being born with a spectrum of sizes can be estimated. It is found that to obtain large-scale streaming velocities of at least 400 km/s, either a large value of βGμ or a considerable effect from loop fissioning and production details is necessary. Specifically, for optimal CDM string parameters Gμ = 10⁻⁶, β = 9, h = 0.5, and scales of 60h⁻¹ Mpc, the parent size spectrum must be 36 times larger than the evolved daughter spectrum to achieve peculiar velocities of at least 400 km/s with a probability of 63%. With this scenario the microwave background dipole will be less than 800 km/s with only a 10% probability. The string-induced velocity spectrum is relatively flat out to scales of about 2t_eq/a_eq and then drops off rather quickly. The flatness is a signature of string models of galaxy formation. With HDM a larger value of βGμ is necessary for galaxy formation, since accretion on small scales starts later. Hence, with HDM, the peculiar velocity spectrum will be larger on large scales and the flat region will extend to larger scales. If large-scale peculiar velocities greater than 400 km/s are real, then it is concluded that strings plus CDM have difficulties. The advantages of strings plus HDM in this regard will be explored in greater detail in a later paper. 27 refs., 4 figs., 1 tab
Angle independent velocity spectrum determination
DEFF Research Database (Denmark)
2014-01-01
An ultrasound imaging system (100) includes a transducer array (102) that emits an ultrasound beam, produces at least one transverse pulse-echo field that oscillates in a direction transverse to the emitted ultrasound beam, and receives echoes produced in response thereto, and a spectral velocity estimator (110) that determines a velocity spectrum for flowing structure, which flows at an angle of 90 degrees and at angles less than 90 degrees with respect to the emitted ultrasound beam, based on the received echoes....
... this page: //medlineplus.gov/ency/patientinstructions/000431.htm Blood transfusions ... There are many reasons you may need a blood transfusion: After knee or hip replacement surgery, or other ...
... positive or Rh-negative blood may be given to Rh-positive patients. The rules for plasma are the reverse: ... ethnic and racial groups have different frequency of the main blood types in their populations. Approximately ...
Martin, Jeremiah T; Ferraris, Victor A
2015-01-01
Patient blood management requires multi-modality and multidisciplinary collaboration to identify patients who are at increased risk of requiring blood transfusion and therefore decrease exposure to blood products. Transfusion is associated with poor postoperative outcomes, and guidelines exist to minimize transfusion requirements. This review highlights recent studies and efforts to apply patient blood management across disease processes and health care systems. Copyright © 2015 Elsevier Inc. All rights reserved.
Estimating likelihood of future crashes for crash-prone drivers
Directory of Open Access Journals (Sweden)
Subasish Das
2015-06-01
Full Text Available At-fault crash-prone drivers are usually considered the high-risk group for possible future incidents or crashes. In Louisiana, 34% of crashes are repeatedly committed by at-fault crash-prone drivers, who represent only 5% of the total licensed drivers in the state. This research conducted an exploratory data analysis based on driver faultiness and proneness. The objective of this study is to develop a crash prediction model to estimate the likelihood of future crashes for at-fault drivers. The logistic regression method is used, employing eight years of traffic crash data (2004–2011) in Louisiana. Crash predictors such as the driver's crash involvement, crash and road characteristics, human factors, collision type, and environmental factors are considered in the model. The at-fault and not-at-fault status of the crashes is used as the response variable. The developed model identified a few important variables and correctly classifies at-fault crashes up to 62.40%, with a specificity of 77.25%. This model can identify as many as 62.40% of the crash incidences of at-fault drivers in the upcoming year. Traffic agencies can use the model for monitoring the performance of at-fault crash-prone drivers and making roadway improvements meant to reduce crash proneness. From the findings, it is recommended that crash-prone drivers be targeted for special safety programs regularly through education and regulations.
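The modeling step can be sketched as a logistic regression fitted by maximum likelihood, followed by the same sensitivity/specificity summary the abstract reports. The predictors, effect sizes, and data below are synthetic and unrelated to the Louisiana data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Synthetic stand-in for crash data: an intercept, one continuous predictor
# (e.g. prior crash involvement score) and one binary predictor
# (e.g. adverse conditions); binary at-fault outcome.
n = 4000
X = np.column_stack([np.ones(n), rng.standard_normal(n), rng.binomial(1, 0.3, n)])
beta_true = np.array([-0.5, 1.2, 0.8])
p = 1 / (1 + np.exp(-X @ beta_true))
y = rng.binomial(1, p)

def neg_log_lik(beta):
    z = X @ beta
    # numerically stable Bernoulli log-likelihood: log(1+e^z) - y*z
    return np.sum(np.logaddexp(0, z) - y * z)

beta_hat = minimize(neg_log_lik, np.zeros(3), method="BFGS").x
pred = (1 / (1 + np.exp(-X @ beta_hat))) >= 0.5
sensitivity = (pred & (y == 1)).sum() / (y == 1).sum()
specificity = (~pred & (y == 0)).sum() / (y == 0).sum()
```

Sensitivity here corresponds to the share of at-fault crashes correctly classified, specificity to correctly classified not-at-fault crashes.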
Smoking increases the likelihood of Helicobacter pylori treatment failure.
Itskoviz, David; Boltin, Doron; Leibovitzh, Haim; Tsadok Perets, Tsachi; Comaneshter, Doron; Cohen, Arnon; Niv, Yaron; Levi, Zohar
2017-07-01
Data regarding the impact of smoking on the success of Helicobacter pylori (H. pylori) eradication are conflicting, partially due to the fact that sociodemographic status is associated with both smoking and H. pylori treatment success. We aimed to assess the effect of smoking on H. pylori eradication rates after controlling for sociodemographic confounders. Included were subjects aged 15 years or older with a first-time positive ¹³C-urea breath test (¹³C-UBT) between 2007 and 2014 who underwent a second ¹³C-UBT after receiving clarithromycin-based triple therapy. Data regarding age, gender, socioeconomic status (SES), smoking (current smokers or "never smoked"), and drug use were extracted from the Clalit health maintenance organization database. Out of 120,914 subjects with a positive first-time ¹³C-UBT, 50,836 (42.0%) underwent a second ¹³C-UBT test. After excluding former smokers, 48,130 subjects remained who were eligible for analysis. The mean age was 44.3 ± 18.2 years, 69.2% were females, 87.8% were Jewish and 12.2% Arabs, and 25.5% were current smokers. The overall eradication failure rate was 33.3%: 34.8% in current smokers and 32.8% in subjects who never smoked. In a multivariate analysis, eradication failure was positively associated with current smoking (odds ratio [OR] 1.15, 95% CI 1.10-1.20). In conclusion, smoking was found to significantly increase the likelihood of unsuccessful first-line treatment for H. pylori infection. Copyright © 2017 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
Obstetric History and Likelihood of Preterm Birth of Twins.
Easter, Sarah Rae; Little, Sarah E; Robinson, Julian N; Mendez-Figueroa, Hector; Chauhan, Suneet P
2018-01-05
The objective of this study was to investigate the relationship between preterm birth in a prior pregnancy and preterm birth in a twin pregnancy. We performed a secondary analysis of a randomized controlled trial evaluating 17-α-hydroxyprogesterone caproate in twins. Women were classified as nulliparous, multiparous with a prior term birth, or multiparous with a prior preterm birth. We used logistic regression to examine the odds of spontaneous preterm birth of twins before 35 weeks according to past obstetric history. Of the 653 women analyzed, 294 were nulliparas, 310 had a prior term birth, and 49 had a prior preterm birth. Prior preterm birth increased the likelihood of spontaneous delivery before 35 weeks (adjusted odds ratio [aOR]: 2.44, 95% confidence interval [CI]: 1.28-4.66), whereas prior term delivery decreased these odds (aOR: 0.55, 95% CI: 0.38-0.78) in the current twin pregnancy compared with the nulliparous reference group. This translated into a lower odds of composite neonatal morbidity (aOR: 0.38, 95% CI: 0.27-0.53) for women with a prior term delivery. For women carrying twins, a history of preterm birth increases the odds of spontaneous preterm birth, whereas a prior term birth decreases odds of spontaneous preterm birth and neonatal morbidity for the current twin pregnancy. These results offer risk stratification and reassurance for clinicians.
ROTATIONAL VELOCITIES FOR M DWARFS
International Nuclear Information System (INIS)
Jenkins, J. S.; Ramsey, L. W.; Jones, H. R. A.; Pavlenko, Y.; Barnes, J. R.; Pinfield, D. J.; Gallardo, J.
2009-01-01
We present spectroscopic rotation velocities (v sin i) for 56 M dwarf stars using high-resolution Hobby-Eberly Telescope High Resolution Spectrograph red spectroscopy. In addition, we have also determined photometric effective temperatures, masses, and metallicities ([Fe/H]) for some stars observed here and in the literature where we could acquire accurate parallax measurements and relevant photometry. We have increased the number of known v sin i values for mid-M stars by around 80% and can confirm a weakly increasing rotation velocity with decreasing effective temperature. Our sample of v sin i values peaks at low velocities (∼3 km s⁻¹). We find a change in the rotational velocity distribution between early-M and late-M stars, which is likely due to the changing field topology between partially and fully convective stars. There is also a possible further change in the rotational distribution toward the late-M dwarfs, where dust begins to play a role in the stellar atmospheres. We also link v sin i to age and show how it can be used to provide mid-M star age limits. When all literature velocities for M dwarfs are added to our sample, there are 198 with v sin i ≤ 10 km s⁻¹ and 124 in the mid-to-late-M star regime (M3.0-M9.5), where measuring precision optical radial velocities is difficult. In addition, we also search the spectra for any significant Hα emission or absorption. Forty-three percent were found to exhibit such emission and could represent young, active objects with high levels of radial-velocity noise. We acquired two epochs of spectra for the star GJ1253, separated by almost one month, and the Hα profile changed from showing no clear signs of emission to exhibiting a clear emission peak. Four stars in our sample appear to be low-mass binaries (GJ1080, GJ3129, Gl802, and LHS3080), with both GJ3129 and Gl802 exhibiting double Hα emission features. The tables presented here will aid any future M star planet search target selection to extract stars with low v
Spectral velocity estimation in ultrasound using sparse data sets
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2006-01-01
Velocity distributions in blood vessels can be displayed using ultrasound scanners by making a Fourier transform of the received signal and then showing spectra in an M-mode display. It is desired to show a B-mode image for orientation, and data for this have to be acquired interleaved with the flow data. This either halves the effective pulse repetition frequency fprf or gaps appear in the spectrum from B-mode emissions. This paper presents a technique for maintaining the highest possible fprf and at the same time showing a B-mode image. The power spectrum can be calculated from the Fourier......
High-order Composite Likelihood Inference for Max-Stable Distributions and Processes
Castruccio, Stefano; Huser, Raphaë l; Genton, Marc G.
2015-01-01
In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of locations is a very challenging problem in computational statistics, and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely-used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.
Directory of Open Access Journals (Sweden)
Saeed Abroun
2014-05-01
Full Text Available Stem cells are naïve or master cells. This means they can transform into about 200 specialized cell types as needed by the body, and each of these cells has just one function. Stem cells are found in many parts of the human body, although some sources have richer concentrations than others. Excellent sources of stem cells include bone marrow, peripheral blood, cord blood, other tissue stem cells, and human embryos; the last is controversial and its use can be illegal in some countries. Cord blood is a sample of blood taken from a newborn baby's umbilical cord. It is a rich source of stem cells, and umbilical cord blood and tissue are collected from material that normally has no use following a child's birth. Umbilical cord blood and tissue cells are rich sources of stem cells, which have been used in the treatment of over 80 diseases, including leukemia, lymphoma, and anemia, with potency comparable to bone marrow stem cells. The most common disease category has been leukemia. The next largest group is inherited diseases. Patients with lymphoma, myelodysplasia, and severe aplastic anemia have also been successfully transplanted with cord blood. Cord blood is obtained by syringing out the placenta through the umbilical cord at the time of childbirth, after the cord has been detached from the newborn. Collecting stem cells from umbilical blood and tissue is ethical, pain-free, safe, and simple. When they are needed to treat the child later in life, there will be no rejection or incompatibility issues, as the procedure will be using the child's own cells. In contrast, stem cells from donors do have these potential problems. Given this potency of cord blood, cord blood banks (familial or public) were established. In Iran, four cord blood banks are active: the Shariati BMT center cord blood bank, the Royan familial cord blood bank, the Royan public cord blood bank, and the Iranian Blood Transfusion Organization cord blood bank. Despite the 50,000 samples stored in these banks, the
Cost-effectiveness of private umbilical cord blood banking.
Kaimal, Anjali J; Smith, Catherine C; Laros, Russell K; Caughey, Aaron B; Cheng, Yvonne W
2009-10-01
To investigate the cost-effectiveness of private umbilical cord blood banking. A decision-analytic model was designed comparing private umbilical cord blood banking with no umbilical cord blood banking. Baseline assumptions included a cost of $3,620 for umbilical cord blood banking and storage for 20 years, a 0.04% chance of requiring an autologous stem cell transplant, a 0.07% chance of a sibling requiring an allogenic stem cell transplant, and a 50% reduction in risk of graft-versus-host disease if a sibling uses banked umbilical cord blood. Private cord blood banking is not cost-effective because it costs an additional $1,374,246 per life-year gained. In sensitivity analysis, if the cost of umbilical cord blood banking is less than $262 or the likelihood of a child needing a stem cell transplant is greater than 1 in 110, private umbilical cord blood banking becomes cost-effective. Currently, private umbilical cord blood banking is cost-effective only for children with a very high likelihood of needing a stem cell transplant. Patients considering private blood banking should be informed of the remote likelihood that a unit will be used for a child or another family member. III.
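The sensitivity-analysis logic can be sketched as an incremental cost-effectiveness calculation. The life-year gain per transplant (LY_GAIN) and the willingness-to-pay threshold below are hypothetical placeholders, not values from the paper, so the sketch reproduces only the qualitative conclusions, not the published ratios.

```python
# Hedged sketch of the cost-per-life-year-gained logic behind the
# sensitivity analysis. LY_GAIN (discounted life-years gained per use of a
# banked unit) and WTP are invented placeholders; the published decision
# tree derives these quantities from survival data.
WTP = 100_000.0     # hypothetical willingness-to-pay per life-year ($)
LY_GAIN = 10.0      # hypothetical life-years gained per transplant

def cost_per_life_year(cost_banking, p_use):
    """Incremental cost per life-year gained, banking vs. no banking."""
    return cost_banking / (p_use * LY_GAIN)

p_base = 0.0004 + 0.0007          # autologous + sibling use, from the abstract
base = cost_per_life_year(3620.0, p_base)      # baseline: not cost-effective
cheap = cost_per_life_year(262.0, p_base)      # low-cost scenario
likely = cost_per_life_year(3620.0, 1 / 110)   # high-likelihood scenario
```

Under these placeholder values the baseline ratio exceeds the threshold while both sensitivity scenarios fall below it, mirroring the paper's qualitative findings.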
Maximum Likelihood Estimation and Inference With Examples in R, SAS and ADMB
Millar, Russell B
2011-01-01
This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statis
Estimation of vessel diameter and blood flow dynamics from laser speckle images
DEFF Research Database (Denmark)
Postnov, Dmitry D.; Tuchin, Valery V.; Sosnovtseva, Olga
2016-01-01
Laser speckle imaging is a rapidly developing method to study changes of blood velocity in the vascular networks. However, to assess blood flow and vascular responses it is crucial to measure vessel diameter in addition to blood velocity dynamics. We suggest an algorithm that allows for dynamical...
Miehls, Scott M.; Johnson, Nicholas; Haro, Alexander
2017-01-01
We tested the efficacy of a vertically oriented field of pulsed direct current (VEPDC) created by an array of vertical electrodes for guiding downstream-moving juvenile Sea Lampreys Petromyzon marinus to a bypass channel in an artificial flume at water velocities of 10–50 cm/s. Sea Lampreys were more likely to be captured in the bypass channel than in other sections of the flume regardless of electric field status (on or off) or water velocity. Additionally, Sea Lampreys were more likely to be captured in the bypass channel when the VEPDC was active; however, an interaction between the effects of VEPDC and water velocity was observed, as the likelihood of capture decreased with increases in water velocity. The distribution of Sea Lampreys shifted from right to left across the width of the flume toward the bypass channel when the VEPDC was active at water velocities less than 25 cm/s. The VEPDC appeared to have no effect on Sea Lamprey distribution in the flume at water velocities greater than 25 cm/s. We also conducted separate tests to determine the threshold at which Sea Lampreys would become paralyzed. Individuals were paralyzed at a mean power density of 37.0 µW/cm3. Future research should investigate the ability of juvenile Sea Lampreys to detect electric fields and their specific behavioral responses to electric field characteristics so as to optimize the use of this technology as a nonphysical guidance tool across variable water velocities.
Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15
Zhang, Jinming
2005-01-01
Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…
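For context, the quantity being debiased can be sketched directly: the maximum likelihood estimate of an examinee's ability under a 2PL item response model with known item parameters. All item parameters and responses below are hypothetical, and neither Lord's bias function nor the weighted likelihood correction is implemented here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)

# Hypothetical known 2PL item parameters: discrimination a, difficulty b.
n_items = 30
a = rng.uniform(0.8, 2.0, n_items)
b = rng.normal(0.0, 1.0, n_items)

# Simulate one examinee's responses at a true ability of 1.0.
theta_true = 1.0
p = 1 / (1 + np.exp(-a * (theta_true - b)))
resp = rng.binomial(1, p)

def neg_log_lik(theta):
    # Bernoulli log-likelihood of the response pattern at ability theta
    pr = 1 / (1 + np.exp(-a * (theta - b)))
    return -np.sum(resp * np.log(pr) + (1 - resp) * np.log(1 - pr))

theta_mle = minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x
```

With short tests this estimate is biased outward, which is exactly what the bias function and the weighted likelihood method are designed to correct.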
Jeon, Jihyoun; Hsu, Li; Gorfine, Malka
2012-07-01
Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
DEFF Research Database (Denmark)
Nielsen, Jan; Parner, Erik
2010-01-01
In this paper, we model multivariate time-to-event data by composite likelihood of pairwise frailty likelihoods and marginal hazards using natural cubic splines. Both right- and interval-censored data are considered. The suggested approach is applied on two types of family studies using the gamma...
Ros, B.P.; Bijma, F.; de Munck, J.C.; de Gunst, M.C.M.
2016-01-01
This paper deals with multivariate Gaussian models for which the covariance matrix is a Kronecker product of two matrices. We consider maximum likelihood estimation of the model parameters, in particular of the covariance matrix. There is no explicit expression for the maximum likelihood estimator
Likelihood functions for the analysis of single-molecule binned photon sequences
Energy Technology Data Exchange (ETDEWEB)
Gopich, Irina V., E-mail: irinag@niddk.nih.gov [Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD 20892 (United States)
2012-03-02
Graphical abstract: folding of a protein with attached fluorescent dyes, the underlying conformational trajectory of interest, and the observed binned photon trajectory. Highlights: • A sequence of photon counts can be analyzed using a likelihood function. • The exact likelihood function for a two-state kinetic model is provided. • Several approximations are considered for an arbitrary kinetic model. • Improved likelihood functions are obtained to treat sequences of FRET efficiencies. Abstract: We consider the analysis of a class of experiments in which the number of photons in consecutive time intervals is recorded. Sequences of photon counts or, alternatively, of FRET efficiencies can be studied using likelihood-based methods. For a kinetic model of the conformational dynamics and state-dependent Poisson photon statistics, the formalism to calculate the exact likelihood that this model describes such sequences of photons or FRET efficiencies is developed. Explicit analytic expressions for the likelihood function for a two-state kinetic model are provided. The important special case when conformational dynamics are so slow that at most a single transition occurs in a time bin is considered. By making a series of approximations, we eventually recover the likelihood function used in hidden Markov models. In this way, not only is insight gained into the range of validity of this procedure, but also an improved likelihood function can be obtained.
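The hidden-Markov limit recovered at the end can be sketched with a standard forward algorithm over binned Poisson photon counts. The rates and transition probabilities below are hypothetical, and this is the generic HMM likelihood, not the exact likelihood derived in the paper.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(4)

# Two-state model: mean photons per bin in each conformational state
# (hypothetical values) and a slowly switching Markov chain for the state.
rates = np.array([2.0, 10.0])
P = np.array([[0.95, 0.05],
              [0.05, 0.95]])
pi0 = np.array([0.5, 0.5])

# Simulate a binned photon trajectory.
n_bins, state = 500, 0
counts = np.empty(n_bins, dtype=int)
for t in range(n_bins):
    counts[t] = rng.poisson(rates[state])
    state = rng.choice(2, p=P[state])

def log_likelihood(counts, rates, P, pi0):
    """Scaled forward algorithm: log-likelihood of the photon sequence."""
    alpha = pi0 * poisson.pmf(counts[0], rates)
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for c in counts[1:]:
        alpha = (alpha @ P) * poisson.pmf(c, rates)
        s = alpha.sum()
        ll += np.log(s)
        alpha /= s
    return ll

ll_true = log_likelihood(counts, rates, P, pi0)
ll_wrong = log_likelihood(counts, np.array([5.0, 6.0]), P, pi0)
```

Comparing the log-likelihood under the generating rates against a misspecified pair shows how such a function discriminates kinetic models.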
Use of deterministic sampling for exploring likelihoods in linkage analysis for quantitative traits.
Mackinnon, M.J.; Beek, van der S.; Kinghorn, B.P.
1996-01-01
Deterministic sampling was used to numerically evaluate the expected log-likelihood surfaces of QTL-marker linkage models in large pedigrees with simple structures. By calculating the expected values of likelihoods, questions of power of experimental designs, bias in parameter estimates, approximate
Likelihood ratio data to report the validation of a forensic fingerprint evaluation method
Ramos, Daniel; Haraksim, Rudolf; Meuwly, Didier
2017-01-01
Data to which the authors refer throughout this article are likelihood ratios (LR) computed from the comparison of 5–12 minutiae fingermarks with fingerprints. These LR data are used for the validation of a likelihood ratio (LR) method in forensic evidence evaluation. These data present a
A guideline for the validation of likelihood ratio methods used for forensic evidence evaluation
Meuwly, Didier; Ramos, Daniel; Haraksim, Rudolf
2017-01-01
This Guideline proposes a protocol for the validation of forensic evaluation methods at the source level, using the Likelihood Ratio framework as defined within the Bayes’ inference model. In the context of the inference of identity of source, the Likelihood Ratio is used to evaluate the strength of
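At the source level, the Likelihood Ratio contrasts the probability of the evidence under the same-source and different-source hypotheses. Below is a minimal score-based sketch assuming Gaussian score distributions, which is one common calibration choice rather than anything mandated by the Guideline; all scores are synthetic.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Hypothetical comparison scores from a biometric/forensic recognition
# system: same-source pairs tend to score higher than different-source pairs.
same = rng.normal(6.0, 1.0, 1000)    # same-source training scores
diff = rng.normal(2.0, 1.5, 1000)    # different-source training scores

def log10_lr(score):
    """log10 likelihood ratio for a new comparison score under a
    Gaussian model of each score distribution (an assumption here)."""
    ll_same = norm.logpdf(score, same.mean(), same.std())
    ll_diff = norm.logpdf(score, diff.mean(), diff.std())
    return (ll_same - ll_diff) / np.log(10)

lr_high = log10_lr(7.0)   # score typical of same-source pairs
lr_low = log10_lr(1.0)    # score typical of different-source pairs
```

A positive log10 LR supports the same-source hypothesis and a negative one the different-source hypothesis; validation then checks such outputs against ground truth.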
Predictors of Self-Reported Likelihood of Working with Older Adults
Eshbaugh, Elaine M.; Gross, Patricia E.; Satrom, Tatum
2010-01-01
This study examined the self-reported likelihood of working with older adults in a future career among 237 college undergraduates at a midsized Midwestern university. Although aging anxiety was not significantly related to likelihood of working with older adults, those students who had a greater level of death anxiety were less likely than other…
Krings, Franciska; Facchin, Stephanie
2009-01-01
This study demonstrated relations between men's perceptions of organizational justice and increased sexual harassment proclivities. Respondents reported higher likelihood to sexually harass under conditions of low interactional justice, suggesting that sexual harassment likelihood may increase as a response to perceived injustice. Moreover, the…
Sampling variability in forensic likelihood-ratio computation: A simulation study
Ali, Tauseef; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.; Meuwly, Didier
2015-01-01
Recently, in the forensic biometric community, there is a growing interest in computing a metric called "likelihood ratio" when a pair of biometric specimens is compared using a biometric recognition system. Generally, a biometric recognition system outputs a score and therefore a likelihood-ratio
Statistical modelling of survival data with random effects h-likelihood approach
Ha, Il Do; Lee, Youngjo
2017-01-01
This book provides a groundbreaking introduction to the likelihood inference for correlated survival data via the hierarchical (or h-) likelihood in order to obtain the (marginal) likelihood and to address the computational difficulties in inferences and extensions. The approach presented in the book overcomes shortcomings in the traditional likelihood-based methods for clustered survival data such as intractable integration. The text includes technical materials such as derivations and proofs in each chapter, as well as recently developed software programs in R (“frailtyHL”), while the real-world data examples together with an R package, “frailtyHL” in CRAN, provide readers with useful hands-on tools. Reviewing new developments since the introduction of the h-likelihood to survival analysis (methods for interval estimation of the individual frailty and for variable selection of the fixed effects in the general class of frailty models) and guiding future directions, the book is of interest to research...
The likelihood principle and its proof – a never-ending story…
DEFF Research Database (Denmark)
Jørgensen, Thomas Martini
2015-01-01
An ongoing controversy in philosophy of statistics is the so-called “likelihood principle” essentially stating that all evidence which is obtained from an experiment about an unknown quantity θ is contained in the likelihood function of θ. Common classical statistical methodology, such as the use...... of significance tests, and confidence intervals, depends on the experimental procedure and unrealized events and thus violates the likelihood principle. The likelihood principle was identified by that name and proved in a famous paper by Allan Birnbaum in 1962. However, ever since both the principle itself...... as well as the proof has been highly debated. This presentation will illustrate the debate of both the principle and its proof, from 1962 and up to today. An often-used experiment to illustrate the controversy between classical interpretation and evidential confirmation based on the likelihood principle...
International Nuclear Information System (INIS)
Chandy, Mammen
1998-01-01
Viable lymphocytes are present in blood and cellular blood components used for transfusion. If the patient who receives a blood transfusion is immunocompetent, these lymphocytes are destroyed immediately. However, if the patient is immunodeficient or immunosuppressed, the transfused lymphocytes survive, recognize the recipient as foreign and react, producing a devastating and most often fatal syndrome of transfusion graft-versus-host disease [T-GVHD]. Even immunocompetent individuals can develop T-GVHD if the donor is a first-degree relative since, like the Trojan horse, the transfused lymphocytes escape detection by the recipient's immune system, multiply and attack recipient tissues. T-GVHD can be prevented by irradiating the blood, and different centers use doses ranging from 1.5 to 4.5 Gy. All transfusions where the donor is a first-degree relative, and transfusions to neonates, immunosuppressed patients and bone marrow transplant recipients, need to be irradiated. Commercial irradiators specifically designed for irradiation of blood and cellular blood components are available; however, they are expensive. India needs to have blood irradiation facilities available in all large tertiary institutions where immunosuppressed patients are treated. The Atomic Energy Commission of India needs to develop a blood irradiator which meets international standards for use in tertiary medical institutions in the country. (author)
SC Unit
2008-01-01
A blood donation, organized by the EFS (Etablissement Français du Sang) of Annemasse, will take place on Wednesday 12 November 2008, from 8:30 to 16:00, at CERN Restaurant 2. If possible, please bring your blood group card.
GS Department
2009-01-01
A blood donation is organised by the Cantonal Hospital of Geneva on Thursday 19 March 2009, from 9 a.m. to 5 p.m., at CERN Restaurant 2. Number of donors during the last blood donations: 135 in July 2008, 122 in November 2008. Let's do better in 2009!!! Give 30 minutes of your time to save lives...
Velocity distribution of fragments of catastrophic impacts
Takagi, Yasuhiko; Kato, Manabu; Mizutani, Hitoshi
1992-01-01
Three-dimensional velocities of fragments produced by laboratory impact experiments were measured for basalts and pyrophyllites. The velocity distribution of fragments obtained shows that the velocity range of the major fragments is rather narrow, at most within a factor of 3, and that no clear dependence of velocity on fragment mass is observed. The Non-Dimensional Impact Stress (NDIS) defined by Mizutani et al. (1990) is found to be an appropriate scaling parameter to describe the overall fragment velocity as well as the antipodal velocity.
Gayet-Ageron, Angèle; Lautenschlager, Stephan; Ninet, Béatrice; Perneger, Thomas V; Combescure, Christophe
2013-05-01
To systematically review and estimate the pooled sensitivity and specificity of the polymerase chain reaction (PCR) technique compared to recommended reference tests in the diagnosis of suspected syphilis at various stages and in various biological materials. Systematic review and meta-analysis. Search of three electronic bibliographic databases from January 1990 to January 2012 and the abstract books of five congresses specialized in the infectious diseases field (1999-2011). Search key terms included syphilis, Treponema pallidum or neurosyphilis and molecular amplification, polymerase chain reaction or PCR. We included studies that used both reference tests to diagnose syphilis plus PCR, and we present pooled estimates of PCR sensitivity, specificity, and positive and negative likelihood ratios (LR) per syphilis stage and biological material. Of 1160 identified abstracts, 69 were selected and 46 studies used adequate reference tests to diagnose syphilis. Sensitivity was highest in swabs from primary genital or anal chancres (78.4%; 95% CI: 68.2-86.0) and in blood from neonates with congenital syphilis (83.0%; 55.0-95.2). Most pooled specificities were ∼95%, except those in blood. A positive PCR is highly informative, with a positive LR around 20 in ulcers or skin lesions. In the blood, the positive LR was lower, making PCR most informative for syphilis diagnosis in lesions. PCR is a useful diagnostic tool in ulcers, especially when serology is still negative and in medical settings with a high prevalence of syphilis.
Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
International Nuclear Information System (INIS)
Helgesson, P.; Sjöstrand, H.; Koning, A.J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.
2016-01-01
In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In the practical cases studied, the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. In cases with more experimental points, the computational time is also estimated to be greater than for matrix inversion. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to interpret intuitively than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivate the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
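The equivalence described above can be sketched numerically: for experimental points with independent random errors plus one shared systematic error, the likelihood computed with the full covariance matrix is reproduced, in the large-sample limit, by averaging conditionally independent likelihoods over sampled systematic shifts. The numbers below are illustrative, not taken from the study.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(0)

# Toy experiment: 5 measured points, a model prediction t, an independent
# (random) error sigma_r per point and one fully correlated (systematic)
# error sigma_s shared by all points.
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # model predictions
sigma_r, sigma_s = 0.2, 0.3
y = np.array([1.1, 2.3, 3.2, 4.4, 5.1])        # "measured" values

# Conventional likelihood: multivariate Gaussian with full covariance
# C = sigma_r^2 * I + sigma_s^2 * J (inverting C internally).
C = sigma_r**2 * np.eye(5) + sigma_s**2 * np.ones((5, 5))
L_exact = multivariate_normal(mean=t, cov=C).pdf(y)

# Likelihood by sampling the systematic error: for each sampled shift s_k,
# the points are conditionally independent with variance sigma_r^2.
K = 200_000
s = rng.normal(0.0, sigma_s, size=K)
cond = norm.pdf(y[None, :], loc=t[None, :] + s[:, None], scale=sigma_r)
L_sampled = cond.prod(axis=1).mean()

print(L_exact, L_sampled)   # the two estimates agree as K grows
```

As the abstract notes, the sampled estimate converges to the matrix-inversion value, but only slowly in the sample size K.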
Electron velocity and momentum density
International Nuclear Information System (INIS)
Perkins, G.A.
1978-01-01
A null 4-vector η†σ_μη, based on Dirac's relativistic electron equation, is shown explicitly for a plane wave and various Coulomb states. This 4-vector constitutes a mechanical ''model'' for the electron in those states, and expresses the important spinor quantities represented conventionally by n, f, g, m, j, κ, l, and s. The model for a plane wave agrees precisely with the relation between velocity and phase gradient customarily used in quantum theory, but the models for Coulomb states contradict that relation
A microfluidic chip for direct and rapid trapping of white blood cells from whole blood
Chen, Jingdong; Chen, Di; Yuan, Tao; Xie, Yao; Chen, Xiang
2013-01-01
Blood analysis plays a major role in medical and scientific applications, and white blood cells (WBCs) are an important target of analysis. We propose an integrated microfluidic chip for direct and rapid trapping of WBCs from whole blood. The microfluidic chip consists of two basic functional units: a winding channel for mixing and arrays of two-layer trapping structures to trap WBCs. Red blood cells (RBCs) were eliminated by lysis while moving through the winding channel, and the WBCs were then trapped by the arrays of trapping structures. We fabricated the PDMS (polydimethylsiloxane) chip using soft lithography and determined the critical flow velocities for mixing tartrazine and brilliant blue in water and for mixing whole blood with red blood cell lysis buffer in the winding channel; they are 0.25 μl/min and 0.05 μl/min, respectively. The critical flow velocity for whole blood and red blood cell lysis buffer is lower due to the larger volume of the RBCs and the higher kinematic viscosity of whole blood. The time taken for complete lysis of whole blood was about 85 s at a flow velocity of 0.05 μl/min. The RBCs were lysed completely by mixing, and the WBCs were trapped by the trapping structures. The chip trapped about 2.0 × 10³ of 3.3 × 10³ WBCs. PMID:24404026
Directory of Open Access Journals (Sweden)
Cătălina Lionte
2016-12-01
Purpose: Acute exposure to a systemic poison represents an important segment of medical emergencies. We aimed to estimate the likelihood of systemic poison-induced morbidity in a population admitted to a tertiary referral center in North East Romania, based on its determinant factors. Methodology: This was a prospective observational cohort study of adult poisoned patients. Demographic, clinical and laboratory characteristics were recorded in all patients. We analyzed three groups of patients, based on the associated morbidity during hospitalization. We identified significant differences between groups and predictors with significant effects on morbidity using multiple multinomial logistic regressions. ROC analysis proved that a combination of tests could improve the diagnostic accuracy of poison-related morbidity. Main findings: Of the 180 patients included, aged 44.7 ± 17.2 years, 51.1% males, 49.4% had no poison-related morbidity, 28.9% developed a mild morbidity, and 21.7% had a severe morbidity, followed by death in 16 patients (8.9%). Multiple complications and deaths were recorded in patients aged 53.4 ± 17.6 years (p < .001), with a lower Glasgow Coma Scale (GCS) score upon admission and a significantly higher heart rate (101 ± 32 beats/min, p = .011). Routine laboratory tests were significantly higher in patients with a recorded morbidity. Multiple logistic regression analysis demonstrated that a GCS < 8 and high white blood cell count (WBC), alanine aminotransferase (ALAT), myoglobin, glycemia and brain natriuretic peptide (BNP) are strongly predictive of in-hospital severe morbidity. Originality: This is the first Romanian prospective study of adult poisoned patients which identifies the factors responsible for in-hospital morbidity using logistic regression analyses, with resulting receiver operating characteristic (ROC) curves. Conclusion: In acute intoxication with systemic poisons, we identified several clinical and laboratory variables
Instrument for measuring flow velocities
International Nuclear Information System (INIS)
Griffo, J.
1977-01-01
The design described here aims to produce a 'more satisfying instrument at less cost' than comparable instruments known up to now. Instead of one single turbine rotor, two similar ones, but with opposite blade inclination and sense of rotation, are used. A cylindrical measuring body carries on its axis two bearing blocks whose shape offers little flow resistance. On the shaft supported by them, the two rotors run in opposite directions a relatively small axial distance apart. The speed of each rotor is picked up as a pulse recurrence frequency by a transmitter and fed to an electronic measuring unit. Measuring errors, as caused for single rotors by turbulent flow, distortion of the velocity profile, or viscous flow, are eliminated by means of the contra-rotating turbines and the subsequent electronic unit, because in these cases the spurious increase of the angular velocity of one rotor is compensated by a corresponding deceleration of the other rotor. The mean value then indicated by the electronic unit has high measurement accuracy. (RW) [de
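The compensation principle described above can be sketched in a few lines: any disturbance that speeds one rotor up and slows the other by the same amount cancels when the electronics average the two pulse rates. The calibration factor and error term below are illustrative, not taken from the design.

```python
# Sketch of why contra-rotating rotors cancel a symmetric flow disturbance:
# the disturbance adds to one rotor's pulse rate and subtracts from the
# other's, so the averaged rate depends only on the axial flow velocity.
def indicated_velocity(v_axial: float, disturbance: float, k: float = 8.0) -> float:
    f1 = k * v_axial + disturbance      # rotor 1: disturbance speeds it up
    f2 = k * v_axial - disturbance      # rotor 2: same disturbance slows it
    return (f1 + f2) / (2 * k)          # electronic unit averages the rates

print(indicated_velocity(2.5, disturbance=3.0))   # -> 2.5, bias cancelled
```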
The fine-tuning cost of the likelihood in SUSY models
Ghilencea, D M
2013-01-01
In SUSY models, the fine tuning of the electroweak (EW) scale with respect to their parameters gamma_i={m_0, m_{1/2}, mu_0, A_0, B_0,...} and the maximal likelihood L to fit the experimental data are usually regarded as two different problems. We show that, if one regards the EW minimum conditions as constraints that fix the EW scale, this commonly held view is not correct and that the likelihood contains all the information about fine-tuning. In this case we show that the corrected likelihood is equal to the ratio L/Delta of the usual likelihood L and the traditional fine tuning measure Delta of the EW scale. A similar result is obtained for the integrated likelihood over the set {gamma_i}, that can be written as a surface integral of the ratio L/Delta, with the surface in gamma_i space determined by the EW minimum constraints. As a result, a large likelihood actually demands a large ratio L/Delta or equivalently, a small chi^2_{new}=chi^2_{old}+2*ln(Delta). This shows the fine-tuning cost to the likelihood ...
Zeng, X.
2015-12-01
A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated from each model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment on a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: repeated TIE estimates of a conceptual model's marginal likelihood show significantly less variability than those of the other estimators. In addition, the SG surrogates efficiently facilitate BMA predictions, especially for BMA-TIE. The number of model executions needed to build the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
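For readers unfamiliar with the estimators compared above, a minimal sketch on a conjugate toy model with a known marginal likelihood (not the groundwater model of the study) shows the arithmetic mean estimator averaging the likelihood over prior draws, and the harmonic mean estimator averaging inverse likelihoods over posterior draws:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Conjugate toy model with a known marginal likelihood:
#   y | theta ~ N(theta, sigma^2),  theta ~ N(0, tau^2)
#   =>  p(y) = N(y | 0, sigma^2 + tau^2)
y, sigma, tau = 0.5, 1.0, 0.5
ml_true = norm.pdf(y, 0.0, np.hypot(sigma, tau))

n = 500_000
# Arithmetic mean estimator (AME): average the likelihood over prior draws.
theta_prior = rng.normal(0.0, tau, n)
ame = norm.pdf(y, theta_prior, sigma).mean()

# Harmonic mean estimator (HME): harmonic average of the likelihood over
# posterior draws (unstable in general; the tight prior keeps this toy
# case well-behaved).
v = 1.0 / (1.0 / tau**2 + 1.0 / sigma**2)       # posterior variance
theta_post = rng.normal(v * y / sigma**2, np.sqrt(v), n)
hme = 1.0 / (1.0 / norm.pdf(y, theta_post, sigma)).mean()

print(ml_true, ame, hme)
```

Both estimates approach the analytic marginal likelihood here; the study's point is that their variability and cost differ greatly on realistic models.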
Application of Vectors to Relative Velocity
Tin-Lam, Toh
2004-01-01
The topic 'relative velocity' has recently been introduced into the Cambridge Ordinary Level Additional Mathematics syllabus under the application of Vectors. In this note, the results of relative velocity and the 'reduction to rest' technique of teaching relative velocity are derived mathematically from vector algebra, in the hope of providing…
Questions Students Ask: About Terminal Velocity.
Meyer, Earl R.; Nelson, Jim
1984-01-01
If a ball were given an initial velocity in excess of its terminal velocity, would the upward force of air resistance (a function of velocity) be greater than the downward force of gravity and thus push the ball back upwards? An answer to this question is provided. (JN)
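A quick numerical sketch of the answer (quadratic drag assumed; the terminal velocity and time step are illustrative): a ball moving downward faster than its terminal velocity feels a net upward force, but that force only decelerates it toward the terminal velocity and can never push it back upward.

```python
# Integrate dv/dt = g - g*(v/v_t)**2 (quadratic drag) for 10 s.
# Starting at twice the terminal velocity, the speed relaxes toward v_t
# from above and never drops below it.
g, v_t, dt = 9.81, 30.0, 0.001
v = 60.0                       # downward speed, twice terminal velocity
for _ in range(10_000):
    v += (g - g * (v / v_t) ** 2) * dt
print(round(v, 2))             # relaxes to ~30, never below
```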
Balance velocities of the Greenland ice sheet
DEFF Research Database (Denmark)
Joughin, I.; Fahnestock, M.; Ekholm, Simon
1997-01-01
We present a map of balance velocities for the Greenland ice sheet. The resolution of the underlying DEM, which was derived primarily from radar altimetry data, yields far greater detail than earlier balance velocity estimates for Greenland. The velocity contours reveal in striking detail......, the balance map is useful for ice-sheet modelling, mass balance studies, and field planning....
Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.
Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram
2017-02-01
In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero cell counts; some of them are "true zeros", indicating that the drug-adverse event pair cannot occur, and these are distinguished from the other zero counts, which are modeled zeros and simply indicate that the drug-adverse event pair has not occurred yet or has not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation-maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
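A stripped-down sketch of the core ingredients: fitting a single zero-inflated Poisson by expectation-maximization and forming a likelihood ratio against a plain Poisson fit. The simulated counts, and the reduction to one count vector rather than a full drug-by-event matrix, are illustrative simplifications of the paper's method.

```python
import numpy as np
from scipy.stats import poisson

def fit_zip(x, iters=200):
    """EM for a zero-inflated Poisson: x_i = 0 w.p. pi, else Poisson(lam)."""
    pi, lam = 0.5, max(x.mean(), 1e-6)
    for _ in range(iters):
        # E-step: posterior probability that each observed zero is a true zero
        p0 = pi / (pi + (1 - pi) * np.exp(-lam))
        z = np.where(x == 0, p0, 0.0)
        # M-step: update mixture weight and Poisson mean
        pi = z.mean()
        lam = ((1 - z) * x).sum() / (1 - z).sum()
    return pi, lam

def zip_loglik(x, pi, lam):
    ll0 = np.log(pi + (1 - pi) * np.exp(-lam))       # zeros (either source)
    llp = np.log(1 - pi) + poisson.logpmf(x, lam)    # nonzero counts
    return np.where(x == 0, ll0, llp).sum()

rng = np.random.default_rng(2)
n = 5000
x = np.where(rng.random(n) < 0.3, 0, rng.poisson(2.5, n))  # 30% true zeros

pi_hat, lam_hat = fit_zip(x)
# Likelihood ratio statistic of the ZIP fit against a plain Poisson fit:
lr = 2 * (zip_loglik(x, pi_hat, lam_hat) - poisson.logpmf(x, x.mean()).sum())
print(pi_hat, lam_hat, lr)
```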
Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen
2018-03-01
Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data-space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper we use massive asymptotically-optimal data compression to reduce the dimensionality of the data-space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parameterized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate Density Estimation Likelihood-Free Inference with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ∼10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological datasets.
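The idea of compressing a data set to one number per parameter can be illustrated on a toy Gaussian model, where score compression at a fiducial point recovers the sample mean as the summary statistic. This is a sketch of the general compression principle, not the DELFI pipeline itself; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Score (MOPED-like) compression to one number per parameter, sketched for
# a toy model: d_i ~ N(mu, sigma^2) with a single parameter mu.
sigma, mu_fid = 1.0, 0.0
d = rng.normal(0.8, sigma, size=1000)          # observed data, true mu = 0.8

# Compressed summary: the score of the Gaussian log-likelihood at the
# fiducial point, scaled by the inverse Fisher information.
score = np.sum(d - mu_fid) / sigma**2
fisher = len(d) / sigma**2
t = mu_fid + score / fisher                    # equals the sample mean here

print(t, d.mean())   # the 1000-point data set is compressed to one number
```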
DEFF Research Database (Denmark)
Deleuran, Ida; Sheikh, Zainab Afshan; Hoeyer, Klaus
2015-01-01
The existing literature on donor screening in transfusion medicine tends to distinguish between social concerns about discrimination and medical concerns about safety. In this article, we argue that the bifurcation into social and medical concerns is problematic. We build our case on a qualitative...... study of the historical rise and current workings of safety practices in the Danish blood system. Here, we identify a strong focus on contamination in order to avoid 'tainted blood', at the expense of working with risks that could be avoided through enhanced blood monitoring practices. Of further...... significance to this focus are the social dynamics found at the heart of safety practices aimed at avoiding contamination. We argue that such dynamics need more attention, in order to achieve good health outcomes in transfusion medicine. Thus, we conclude that, to ensure continuously safe blood systems, we...
Maximal information analysis: I - various Wayne State plots and the most common likelihood principle
International Nuclear Information System (INIS)
Bonvicini, G.
2005-01-01
Statistical analysis using all moments of the likelihood L(y|α) (y being the data and α being the fit parameters) is presented. The relevant plots for various data fitting situations are presented. The goodness of fit (GOF) parameter (currently the χ²) is redefined as the isoprobability level in a multidimensional space. Many useful properties of statistical analysis are summarized in a new statistical principle which states that the most common likelihood, and not the tallest, is the best possible likelihood when comparing experiments or hypotheses
Simplified likelihood for the re-interpretation of public CMS results
The CMS Collaboration
2017-01-01
In this note, a procedure for the construction of simplified likelihoods for the re-interpretation of the results of CMS searches for new physics is presented. The procedure relies on the use of a reduced set of information on the background models used in these searches which can readily be provided by the CMS collaboration. A toy example is used to demonstrate the procedure and its accuracy in reproducing the full likelihood for setting limits in models for physics beyond the standard model. Finally, two representative searches from the CMS collaboration are used to demonstrate the validity of the simplified likelihood approach under realistic conditions.
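A hedged sketch of what such a simplified likelihood looks like in practice: Poisson counts per search region multiplied by a multivariate Gaussian constraint on the background yields, with the signal strength profiled numerically. All yields and the covariance below are invented for illustration and are not CMS numbers.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal, poisson

# Toy "simplified likelihood": Poisson counts per search region, with the
# background model summarized only by central values b and a covariance V.
n = np.array([12, 11, 5])              # observed counts
s = np.array([4.0, 2.0, 1.0])          # signal yields per region
b = np.array([10.0, 9.0, 4.0])         # expected backgrounds
V = np.array([[4.0, 1.0, 0.5],
              [1.0, 2.5, 0.3],
              [0.5, 0.3, 1.0]])        # background covariance

def nll(mu, theta):
    """-log L: Poisson terms times a Gaussian constraint on the shifts."""
    lam = np.clip(mu * s + b + theta, 1e-9, None)
    return (-poisson.logpmf(n, lam).sum()
            - multivariate_normal.logpdf(theta, np.zeros(3), V))

def profiled_nll(mu):
    """Minimize over the background nuisance shifts theta at fixed mu."""
    return minimize(lambda th: nll(mu, th), np.zeros(3),
                    method="Nelder-Mead").fun

# Profile likelihood scan over the signal strength mu:
mus = np.linspace(0.0, 2.0, 21)
scan = np.array([profiled_nll(m) for m in mus])
mu_hat = mus[scan.argmin()]
print(mu_hat, scan.min())
```

A limit-setting procedure would then compare 2·(scan − scan.min()) against the appropriate test-statistic distribution.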
Critical velocities in He II for independently varied superfluid and normal fluid velocities
International Nuclear Information System (INIS)
Baehr, M.L.
1984-01-01
Experiments were performed to measure the critical velocity in pure superflow and compare it to the theoretical prediction; to measure the first critical velocity for independently varied superfluid and normal fluid velocities; and to investigate the propagation of the second critical velocity from the thermal counterflow line through the V_n, -V_s quadrant. The experimental apparatus employed a thermal counterflow heater to adjust the normal fluid velocity, a fountain pump to vary the superfluid velocity, and a level-sensing capacitor to measure the superfluid velocity. The results of the pure superfluid critical velocity measurements indicate that this velocity is temperature independent, contrary to Schwarz's theory. It was found that the first critical velocity for independently varied V_n and V_s could be described by a linear function of V_n and was otherwise temperature independent. It was found that the second critical velocity could only be distinguished near the thermal counterflow line
Pelis, K
1997-01-01
Our internationally acclaimed journalist Sanguinia has returned safely from her historic assignment. Travelling from Homeric Greece to British Romanticism, she was witness to blood drinking, letting, bathing, and transfusion. In this report, she explores connections between the symbolic and the sadistic; the mythic and the medical--all in an effort to appreciate the layered meanings our culture has given to the movement of blood between our bodies.
In-vivo studies of new vector velocity and adaptive spectral estimators in medical ultrasound
DEFF Research Database (Denmark)
Hansen, Kristoffer Lindskov
In this PhD project new ultrasound techniques for blood flow measurements have been investigated in-vivo. The focus has mainly been on vector velocity techniques, and four different approaches have been examined: Transverse Oscillation, Synthetic Transmit Aperture, Directional Beamforming and Plane...... in conventional Doppler ultrasound, that is, angle dependency, reduced temporal resolution and low frame rate. Transverse Oscillation, Synthetic Transmit Aperture and Directional Beamforming can estimate the blood velocity angle independently. The three methods were validated in-vivo against magnetic resonance...... phase contrast angiography when measuring stroke volumes in simple vessel geometry on 11 volunteers. Using linear regression and Bland-Altman analyses good agreements were found, indicating that vector velocity methods can be used for quantitative blood flow measurements. Plane Wave Excitation can......
Continuous Blood Pressure Monitoring in Daily Life
Lopez, Guillaume; Shuzo, Masaki; Ushida, Hiroyuki; Hidaka, Keita; Yanagimoto, Shintaro; Imai, Yasushi; Kosaka, Akio; Delaunay, Jean-Jacques; Yamada, Ichiro
Continuous monitoring of blood pressure in daily life could improve early detection of cardiovascular disorders, as well as promoting healthcare. Conventional ambulatory blood pressure monitoring (ABPM) equipment can measure blood pressure at regular intervals for 24 hours, but is limited by a long measuring time, a low sampling rate, and a constrained measuring posture. In this paper, we demonstrate a new method for continuous real-time measurement of blood pressure during daily activities. Our method is based on blood pressure estimation from pulse wave velocity (PWV) calculation, whose formula we improved to take into account changes in the inner diameter of blood vessels. Blood pressure estimation results using our new method showed greater precision during exercise and better accuracy than the conventional PWV method.
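A minimal sketch of the conventional PWV-based idea (not the authors' improved formula, which additionally models vessel inner-diameter changes): PWV is the path length between two measurement sites divided by the pulse transit time, and blood pressure is then mapped from PWV through a per-subject calibration. The linear calibration constants below are hypothetical.

```python
# Cuffless blood-pressure estimation from pulse wave velocity (PWV), sketch.
# The constants a and b are hypothetical and would be fitted per subject
# against a cuff reference measurement.

def pulse_wave_velocity(distance_m: float, ptt_s: float) -> float:
    """PWV = path length between two measurement sites / pulse transit time."""
    return distance_m / ptt_s

def systolic_bp(pwv: float, a: float = 14.0, b: float = 30.0) -> float:
    """Hypothetical linear calibration: SBP [mmHg] ~ a * PWV [m/s] + b."""
    return a * pwv + b

# Example: ECG R-peak to fingertip pulse arrival, 0.55 m path, 90 ms transit.
pwv = pulse_wave_velocity(0.55, 0.090)
print(round(pwv, 2), round(systolic_bp(pwv), 1))
```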
Ma, Yin-Zhe; Gong, Guo-Dong; Sui, Ning; He, Ping
2018-03-01
We calculate the cross-correlation function between the kinetic Sunyaev-Zeldovich (kSZ) effect and the reconstructed peculiar velocity field using linear perturbation theory, with the aim of constraining the optical depth τ and peculiar velocity bias of central galaxies with Planck data. We vary the optical depth τ and the velocity bias function b_v(k) = 1 + b(k/k_0)^n, and fit the model to the data, with and without varying the calibration parameter y_0 that controls the vertical shift of the correlation function. By constructing a likelihood function and constraining the τ, b and n parameters, we find that the quadratic power-law model of velocity bias, b_v(k) = 1 + b(k/k_0)^2, provides the best fit to the data. The best-fit values are τ = (1.18 ± 0.24) × 10^{-4}, b = -0.84^{+0.16}_{-0.20} and y_0 = (12.39^{+3.65}_{-3.66}) × 10^{-9} (68 per cent confidence level). The probability of b > 0 is only 3.12 × 10^{-8}, which clearly suggests a detection of scale-dependent velocity bias. The fitting results indicate that the large-scale (k ≤ 0.1 h Mpc^{-1}) velocity bias is unity, while on small scales the bias tends to become negative. The value of τ is consistent with the stellar mass-halo mass and optical depth relationship proposed in the literature, and the negative velocity bias on small scales is consistent with the peak-background split theory. Our method provides a direct tool for studying the gaseous and kinematic properties of galaxies.
Debris Likelihood, based on GhostNet, NASA Aqua MODIS, and GOES Imager, EXPERIMENTAL
National Oceanic and Atmospheric Administration, Department of Commerce — Debris Likelihood Index (Estimated) is calculated from GhostNet, NASA Aqua MODIS Chl a and NOAA GOES Imager SST data. THIS IS AN EXPERIMENTAL PRODUCT: intended...
A biclustering algorithm for binary matrices based on penalized Bernoulli likelihood
Lee, Seokho; Huang, Jianhua Z.
2013-01-01
We propose a new biclustering method for binary data matrices using the maximum penalized Bernoulli likelihood estimation. Our method applies a multi-layer model defined on the logits of the success probabilities, where each layer represents a
Performances of the likelihood-ratio classifier based on different data modelings
Chen, C.; Veldhuis, Raymond N.J.
2008-01-01
The classical likelihood ratio classifier easily collapses in many biometric applications especially with independent training-test subjects. The reason lies in the inaccurate estimation of the underlying user-specific feature density. Firstly, the feature density estimation suffers from
Finite mixture model: A maximum likelihood estimation approach on time series data
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its desirable asymptotic properties: the estimator is consistent and asymptotically unbiased as the sample size increases to infinity, and the parameter estimates it yields have the smallest variance among competing statistical methods as the sample size grows. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
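A sketch of fitting a two-component Gaussian mixture by maximum likelihood via the EM algorithm, on simulated data rather than the rubber price and exchange rate series used in the paper:

```python
import numpy as np
from scipy.stats import norm

def em_two_component(x, iters=300):
    """Maximum likelihood fit of a two-component Gaussian mixture via EM."""
    w, mu1, mu2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        p1 = w * norm.pdf(x, mu1, s1)
        p2 = (1 - w) * norm.pdf(x, mu2, s2)
        r = p1 / (p1 + p2)
        # M-step: responsibility-weighted updates of the mixture parameters
        w = r.mean()
        mu1, mu2 = np.average(x, weights=r), np.average(x, weights=1 - r)
        s1 = np.sqrt(np.average((x - mu1) ** 2, weights=r))
        s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - r))
    return w, mu1, mu2, s1, s2

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2, 0.5, 400), rng.normal(3, 1.0, 600)])
w, mu1, mu2, s1, s2 = em_two_component(x)
print(round(w, 2), round(mu1, 2), round(mu2, 2))
```

Each EM iteration increases the likelihood, which is the monotonicity property underlying the asymptotic guarantees cited in the abstract.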
Moral Identity Predicts Doping Likelihood via Moral Disengagement and Anticipated Guilt.
Kavussanu, Maria; Ring, Christopher
2017-08-01
In this study, we integrated elements of social cognitive theory of moral thought and action and the social cognitive model of moral identity to better understand doping likelihood in athletes. Participants (N = 398) recruited from a variety of team sports completed measures of moral identity, moral disengagement, anticipated guilt, and doping likelihood. Moral identity predicted doping likelihood indirectly via moral disengagement and anticipated guilt. Anticipated guilt about potential doping mediated the relationship between moral disengagement and doping likelihood. Our findings provide novel evidence to suggest that athletes, who feel that being a moral person is central to their self-concept, are less likely to use banned substances due to their lower tendency to morally disengage and the more intense feelings of guilt they expect to experience for using banned substances.
MR flow velocity measurement using 2D phase contrast, assessment of imaging parameters
International Nuclear Information System (INIS)
Akata, Soichi; Fukushima, Akihiro; Abe, Kimihiko; Darkanzanli, A.; Gmitro, A.F.; Unger, E.C.; Capp, M.P.
1999-01-01
The two-dimensional (2D) phase contrast technique using balanced gradient pulses is utilized to measure flow velocities of cerebrospinal fluid and blood. Various imaging parameters affect the accuracy of flow velocity measurements to varying degrees. Assessment of the errors introduced by changing the imaging parameters are presented and discussed in this paper. A constant flow phantom consisting of a pump, a polyethylene tube and a flow meter was assembled. A clinical 1.5 Tesla MR imager was used to perform flow velocity measurements. The phase contrast technique was used to estimate the flow velocity of saline through the phantom. The effects of changes in matrix size, flip angle, flow compensation, and velocity encoding (VENC) value were tested in the pulse sequence. Gd-DTPA doped saline was used to study the effect of changing T1 on the accuracy of flow velocity measurement. Matrix size (within practical values), flip angle, and flow compensation had minimum impact on flow velocity measurements. T1 of the solution also had no effect on the accuracy of measuring the flow velocity. On the other hand, it was concluded that errors as high as 20% can be expected in the flow velocity measurements if the VENC value is not properly chosen. (author)
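The sensitivity to the VENC value reported above follows from phase wrap-around: phase contrast encodes velocity as a phase of pi*v/VENC, and any true velocity above VENC aliases. A minimal numerical sketch (the velocities below are invented for illustration, not the phantom data):

```python
import math

def observed_phase(v, venc):
    """Phase accrued by spins at velocity v: pi * v / VENC, but the scanner
    only sees it wrapped into the interval (-pi, pi]."""
    phi = math.pi * v / venc
    return math.atan2(math.sin(phi), math.cos(phi))

def reported_velocity(v, venc):
    """Velocity the phase-contrast reconstruction reports."""
    return venc * observed_phase(v, venc) / math.pi

v_ok = reported_velocity(30.0, 40.0)        # VENC above the true velocity: exact
v_aliased = reported_velocity(50.0, 40.0)   # VENC too low: phase wraps, sign flips
```

With VENC = 40 a true velocity of 50 is reported as -30, illustrating why a poorly chosen VENC can produce the large errors quoted in the abstract.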
Cash, W.
1979-01-01
Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
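For Poisson-distributed photon counts, the likelihood-ratio machinery discussed here is usually implemented through the C statistic, C = 2 * sum_i (m_i - n_i * ln m_i), where n_i are observed counts and m_i are model predictions; it equals -2 ln(likelihood) up to a model-independent constant. A minimal sketch (the counts and rates below are invented for illustration):

```python
import math

def cash_stat(counts, model):
    """Cash's C statistic for Poisson data: twice the negative log-likelihood
    up to a model-independent constant, C = 2 * sum(m_i - n_i * ln m_i)."""
    return 2.0 * sum(m - n * math.log(m) for n, m in zip(counts, model))

# invented photon counts in five bins, compared with three constant-rate models
counts = [3, 5, 4, 6, 2]
c_mle = cash_stat(counts, [4.0] * 5)    # rate 4 equals the sample mean (the MLE)
c_low = cash_stat(counts, [3.0] * 5)    # underpredicting model
c_high = cash_stat(counts, [10.0] * 5)  # overpredicting model
```

Differences in C between nested models play the role of the likelihood ratio and are asymptotically chi-square distributed, which is how confidence regions on the parameters are set.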
Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors
DEFF Research Database (Denmark)
Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi
2013-01-01
Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio...... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error...
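A sketch of the ML idea for a single framed-slotted ALOHA frame, ignoring the detection errors that the paper models (the frame size and slot counts below are hypothetical): given counts of empty, singleton, and collision slots, the tag population size maximizing the multinomial likelihood can be found by a grid search.

```python
import math

def loglik(n, L, n_empty, n_single, n_coll):
    """Log-likelihood of the slot-outcome counts of one framed-slotted ALOHA
    frame with L slots and n tags (perfect detection assumed)."""
    p0 = (1 - 1 / L) ** n                    # slot empty
    p1 = n / L * (1 - 1 / L) ** (n - 1)      # exactly one tag replies (singleton)
    pc = max(1.0 - p0 - p1, 1e-300)          # two or more tags collide
    return n_empty * math.log(p0) + n_single * math.log(p1) + n_coll * math.log(pc)

def ml_tag_count(L, n_empty, n_single, n_coll, n_max=1000):
    """Grid-search ML estimate of the tag population size."""
    return max(range(1, n_max + 1),
               key=lambda n: loglik(n, L, n_empty, n_single, n_coll))

# invented read result: 64 slots with 13 empty, 21 singleton, 30 collision slots
n_hat = ml_tag_count(64, 13, 21, 30)
```

Incorporating detection errors, as the paper does, changes the slot-outcome probabilities but not the grid-search ML structure.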
Directory of Open Access Journals (Sweden)
Azam Zaka
2014-10-01
This paper is concerned with modifications of the maximum likelihood, moments and percentile estimators of the two-parameter Power function distribution. The sampling behavior of the estimators is indicated by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.
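For reference, the unmodified maximum likelihood estimators of the two-parameter power function distribution f(x) = g * x^(g-1) / t^g on (0, t] have closed forms: t_hat = max(x_i) and g_hat = n / sum(ln(t_hat / x_i)). A quick sanity check on simulated data (the parameter values below are illustrative, and this is the traditional MLE, not the paper's modified estimators):

```python
import math
import random

def power_mle(sample):
    """Closed-form MLEs for the power function distribution
    f(x) = g * x**(g - 1) / t**g on (0, t]:
    t_hat = max(x_i), g_hat = n / sum(ln(t_hat / x_i))."""
    t_hat = max(sample)
    g_hat = len(sample) / sum(math.log(t_hat / x) for x in sample)
    return g_hat, t_hat

# simulate: if U ~ Uniform(0, 1), then t * U**(1/g) follows this distribution
random.seed(7)
g_true, t_true = 2.5, 4.0
xs = [t_true * random.random() ** (1 / g_true) for _ in range(5000)]
g_hat, t_hat = power_mle(xs)
```

Note that t_hat is the sample maximum and so always underestimates t, which is one motivation for the kind of modifications the paper studies.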
Practical Statistics for LHC Physicists: Descriptive Statistics, Probability and Likelihood (1/3)
CERN. Geneva
2015-01-01
These lectures cover those principles and practices of statistics that are most relevant for work at the LHC. The first lecture discusses the basic ideas of descriptive statistics, probability and likelihood. The second lecture covers the key ideas in the frequentist approach, including confidence limits, profile likelihoods, p-values, and hypothesis testing. The third lecture covers inference in the Bayesian approach. Throughout, real-world examples will be used to illustrate the practical application of the ideas. No previous knowledge is assumed.
DEFF Research Database (Denmark)
Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet
2005-01-01
The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented......, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient....
Hyperglycemia - control; Hypoglycemia - control; Diabetes - blood sugar control; Blood glucose - managing ... sugar ( hypoglycemia ) Recognize and treat high blood sugar ( hyperglycemia ) Plan healthy meals Monitor your blood sugar (glucose) ...
In-vivo evaluation of three ultrasound vector velocity techniques with MR angiography
DEFF Research Database (Denmark)
Hansen, Kristoffer Lindskov; Udesen, Jesper; Oddershede, Niels
2008-01-01
In conventional Doppler ultrasound (US) the blood velocity is only estimated along the US beam direction. The estimate is angle corrected assuming laminar flow parallel to the vessel boundaries. As the flow in the vascular system is never purely laminar, the velocities estimated with conventional...... additionally constructed and mean differences for the three comparisons were: DB/MRA = 0.17 ml; STA/MRA = 0.07 ml; TO/MRA = 0.24 ml. The three US vector velocity techniques yield quantitative insight into flow dynamics and can potentially give the clinician a powerful tool in cardiovascular disease assessment....
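The angle-correction assumption criticized above can be made concrete with a toy calculation: the beam measures only the axial component, and dividing by the cosine of an assumed beam-to-flow angle is only exact when the flow really is parallel to the vessel wall (the angles and speeds below are invented, not the study's data):

```python
import math

def axial_component(speed, true_angle_deg):
    """Velocity component along the beam for a given true beam-to-flow angle."""
    return speed * math.cos(math.radians(true_angle_deg))

def angle_corrected(v_axial, assumed_angle_deg):
    """Conventional Doppler estimate: divide the measured axial velocity by
    the cosine of the angle assumed between beam and flow direction."""
    return v_axial / math.cos(math.radians(assumed_angle_deg))

# flow parallel to the vessel wall, as assumed: correction is exact
v_ok = angle_corrected(axial_component(1.0, 60.0), 60.0)

# flow deviating from the assumed direction (e.g. a skewed jet): the same
# correction now badly underestimates the true 1.0 m/s speed
v_biased = angle_corrected(axial_component(1.0, 80.0), 60.0)
```

Vector velocity techniques avoid this bias by estimating two components instead of one.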
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Pihl, Michael Johannes; Udesen, Jesper
2010-01-01
Medical ultrasound systems measure the blood velocity by tracking the motion of the blood cells along the ultrasound field. This is done by pulsing in the same direction a number of times and then finding, e.g., the shift in phase between consecutive pulses. Properly normalized, this is directly proportional...... a double oscillating field. A special estimator is then used for finding both the axial and lateral velocity component, so that both magnitude and phase can be calculated. The method for generating double oscillating ultrasound fields and the special estimator are described and its performance revealed......
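The phase-shift idea described above is commonly implemented as the lag-one autocorrelation (Kasai) estimator; a sketch on synthetic data follows. The carrier frequency, PRF, and velocity are chosen for illustration, and this is the conventional axial estimator, not the paper's double-oscillating-field estimator.

```python
import cmath
import math

def kasai_velocity(iq, f0, prf, c=1540.0):
    """Lag-one autocorrelation (phase-shift) estimator: complex samples taken
    at one depth over successive emissions give an inter-pulse phase shift,
    and v = c * prf * angle(R1) / (4 * pi * f0)."""
    r1 = sum(a.conjugate() * b for a, b in zip(iq[:-1], iq[1:]))
    return c * prf * cmath.phase(r1) / (4 * math.pi * f0)

# synthetic echoes from a scatterer moving at 0.3 m/s (f0 = 5 MHz, PRF = 5 kHz)
f0, prf, v_true = 5e6, 5e3, 0.3
step = 4 * math.pi * f0 * v_true / (prf * 1540.0)
iq = [cmath.exp(1j * n * step) for n in range(16)]
v_est = kasai_velocity(iq, f0, prf)
```

The "properly normalized" phrase in the abstract corresponds to the c * prf / (4 * pi * f0) factor that converts the phase shift to a velocity.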
International Nuclear Information System (INIS)
Bovy, Jo; Hogg, David W.
2010-01-01
The velocity distribution of nearby stars (≲100 pc) contains many overdensities or 'moving groups', clumps of comoving stars, that are inconsistent with the standard assumption of an axisymmetric, time-independent, and steady-state Galaxy. We study the age and metallicity properties of the low-velocity moving groups based on the reconstruction of the local velocity distribution in Paper I of this series. We perform stringent, conservative hypothesis testing to establish for each of these moving groups whether it could conceivably consist of a coeval population of stars. We conclude that they do not: the moving groups are neither trivially associated with their eponymous open clusters nor with any other inhomogeneous star formation event. Concerning a possible dynamical origin of the moving groups, we test whether any of the moving groups has a higher or lower metallicity than the background population of thin disk stars, as would generically be the case if the moving groups are associated with resonances of the bar or spiral structure. We find clear evidence that the Hyades moving group has higher than average metallicity and weak evidence that the Sirius moving group has lower than average metallicity, which could indicate that these two groups are related to the inner Lindblad resonance of the spiral structure. Further, we find weak evidence that the Hercules moving group has higher than average metallicity, as would be the case if it is associated with the bar's outer Lindblad resonance. The Pleiades moving group shows no clear metallicity anomaly, arguing against a common dynamical origin for the Hyades and Pleiades groups. Overall, however, the moving groups are barely distinguishable from the background population of stars, raising the likelihood that the moving groups are associated with transient perturbations.
Anticipating cognitive effort: roles of perceived error-likelihood and time demands.
Dunn, Timothy L; Inzlicht, Michael; Risko, Evan F
2017-11-13
Why are some actions evaluated as effortful? In the present set of experiments we address this question by examining individuals' perception of effort when faced with a trade-off between two putative cognitive costs: how much time a task takes vs. how error-prone it is. Specifically, we were interested in whether individuals anticipate engaging in a small amount of hard work (i.e., low time requirement, but high error-likelihood) vs. a large amount of easy work (i.e., high time requirement, but low error-likelihood) as being more effortful. In between-subject designs, Experiments 1 through 3 demonstrated that individuals anticipate options that are high in perceived error-likelihood (yet less time consuming) as more effortful than options that are perceived to be more time consuming (yet low in error-likelihood). Further, when asked to evaluate which of the two tasks was (a) more effortful, (b) more error-prone, and (c) more time consuming, effort-based and error-based choices closely tracked one another, but this was not the case for time-based choices. Utilizing a within-subject design, Experiment 4 demonstrated overall similar pattern of judgments as Experiments 1 through 3. However, both judgments of error-likelihood and time demand similarly predicted effort judgments. Results are discussed within the context of extant accounts of cognitive control, with considerations of how error-likelihood and time demands may independently and conjunctively factor into judgments of cognitive effort.
The likelihood ratio as a random variable for linked markers in kinship analysis.
Egeland, Thore; Slooten, Klaas
2016-11-01
The likelihood ratio is the fundamental quantity that summarizes the evidence in forensic cases. Therefore, it is important to understand the theoretical properties of this statistic. This paper is the last in a series of three, and the first to study linked markers. We show that for all non-inbred pairwise kinship comparisons, the expected likelihood ratio in favor of a type of relatedness depends on the allele frequencies only via the number of alleles, also for linked markers, and also if the true relationship is another one than is tested for by the likelihood ratio. Exact expressions for the expectation and variance are derived for all these cases. Furthermore, we show that the expected likelihood ratio is a non-increasing function if the recombination rate increases between 0 and 0.5 when the actual relationship is the one investigated by the LR. Besides being of theoretical interest, exact expressions such as obtained here can be used for software validation as they allow to verify the correctness up to arbitrary precision. The paper also presents results and advice of practical importance. For example, we argue that the logarithm of the likelihood ratio behaves in a fundamentally different way than the likelihood ratio itself in terms of expectation and variance, in agreement with its interpretation as weight of evidence. Equipped with the results presented and freely available software, one may check calculations and software and also do power calculations.
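One of the theoretical properties underlying such results is that the expected likelihood ratio equals 1 when the data actually follow the denominator (defense) hypothesis, since E[LR] = sum_x p_den(x) * p_num(x) / p_den(x) = sum_x p_num(x) = 1. A Monte Carlo check with invented allele frequencies (not the paper's kinship-specific expressions):

```python
import random

def expected_lr_under_denominator(p_num, p_den, n=200_000, seed=3):
    """Monte Carlo estimate of E[LR] when the data follow the denominator
    hypothesis: LR(x) = p_num(x) / p_den(x), X ~ p_den. The exact value is 1."""
    rng = random.Random(seed)
    outcomes = list(p_den)
    draws = rng.choices(outcomes, weights=[p_den[o] for o in outcomes], k=n)
    return sum(p_num[x] / p_den[x] for x in draws) / n

# two invented single-marker allele-frequency tables
p_h1 = {"a": 0.7, "b": 0.2, "c": 0.1}   # numerator hypothesis
p_h2 = {"a": 0.3, "b": 0.3, "c": 0.4}   # denominator hypothesis
mean_lr = expected_lr_under_denominator(p_h1, p_h2)
```

By Jensen's inequality the expected log-LR under the same hypothesis is negative, illustrating the abstract's point that log(LR) behaves fundamentally differently from LR itself.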
Zhou, Lihong; Yuan, Liming; Thomas, Rick; Iannacchione, Anthony
2017-01-01
With installations of air velocity sensors in the mining industry for real-time airflow monitoring, a problem arises: how does the air velocity monitored at a fixed location correspond to the average air velocity, which together with the cross-sectional area of the entry determines the volume flow rate of air? Correction factors have been practically employed to convert a measured centerline air velocity to the average air velocity. However, studies on the recommended correction fac...
Characteristic wave velocities in spherical electromagnetic cloaks
International Nuclear Information System (INIS)
Yaghjian, A D; Maci, S; Martini, E
2009-01-01
We investigate the characteristic wave velocities in spherical electromagnetic cloaks, namely, phase, ray, group and energy-transport velocities. After deriving explicit expressions for the phase and ray velocities (the latter defined as the phase velocity along the direction of the Poynting vector), special attention is given to the determination of group and energy-transport velocities, because a cursory application of conventional formulae for local group and energy-transport velocities can lead to a discrepancy between these velocities if the permittivity and permeability dyadics are not equal over a frequency range about the center frequency. In contrast, a general theorem can be proven from Maxwell's equations that the local group and energy-transport velocities are equal in linear, lossless, frequency dispersive, source-free bianisotropic material. This apparent paradox is explained by showing that the local fields of the spherical cloak uncouple into an E wave and an H wave, each with its own group and energy-transport velocities, and that the group and energy-transport velocities of either the E wave or the H wave are equal and thus satisfy the general theorem.
Geotail observations of FTE velocities
Directory of Open Access Journals (Sweden)
G. I. Korotova
2009-01-01
We discuss the plasma velocity signatures expected in association with flux transfer events (FTEs). Events moving faster than or opposite the ambient media should generate bipolar inward/outward (outward/inward) flow perturbations normal to the nominal magnetopause in the magnetosphere (magnetosheath). Flow perturbations directly upstream and downstream from the events should be in the direction of event motion. Flows on the flanks should be in the direction opposite the motion of events moving at subsonic and subAlfvénic speeds relative to the ambient plasma. Events moving with the ambient flow should generate no flow perturbations in the ambient plasma. Alfvén waves propagating parallel (antiparallel) to the axial magnetic field of FTEs may generate anticorrelated (correlated) magnetic field and flow perturbations within the core region of FTEs. We present case studies illustrating many of these signatures. In the examples considered, Alfvén waves propagate along event axes away from the inferred reconnection site. A statistical study of FTEs observed by Geotail over a 3.5-year period reveals that FTEs within the magnetosphere invariably move faster than the ambient flow, while those in the magnetosheath move both faster and slower than the ambient flow.
Reciprocally-Rotating Velocity Obstacles
Giese, Andrew
2014-05-01
© 2014 IEEE. Modern multi-agent systems frequently use high-level planners to extract basic paths for agents, and then rely on local collision avoidance to ensure that the agents reach their destinations without colliding with one another or dynamic obstacles. One state-of-the-art local collision avoidance technique is Optimal Reciprocal Collision Avoidance (ORCA). Despite being fast and efficient for circular-shaped agents, ORCA may deadlock when polygonal shapes are used. To address this shortcoming, we introduce Reciprocally-Rotating Velocity Obstacles (RRVO). RRVO generalizes ORCA by introducing a notion of rotation for polygonally-shaped agents. This generalization permits more realistic motion than ORCA and does not suffer from as much deadlock. In this paper, we present the theory of RRVO and show empirically that it does not suffer from the deadlock issue ORCA has, permits agents to reach goals faster, and has a comparable collision rate at the cost of performance overhead quadratic in the (typically small) user-defined parameter δ.
High velocity impact experiment (HVIE)
Energy Technology Data Exchange (ETDEWEB)
Toor, A.; Donich, T.; Carter, P.
1998-02-01
The HVIE space project was conceived as a way to measure the absolute EOS for approximately 10 materials at pressures up to approximately 30 Mb with order-of-magnitude higher accuracy than obtainable in any comparable experiment conducted on earth. The experiment configuration is such that each of the 10 materials interacts with all of the others, thereby producing one-hundred independent, simultaneous EOS experiments. The materials will be selected to provide critical information to weapons designers, National Ignition Facility target designers and planetary and geophysical scientists. In addition, HVIE will provide important scientific information to other communities, including the Ballistic Missile Defense Organization and the lethality and vulnerability community. The basic HVIE concept is to place two probes in counter-rotating, highly elliptical orbits and collide them at high velocity (20 km/s) at 100 km altitude above the earth. The low altitude of the experiment will provide quick debris strip-out of orbit due to atmospheric drag. The preliminary conceptual evaluation of the HVIE has found no show stoppers. The design has been very easy to keep within the lift capabilities of commonly available rides to low earth orbit, including the space shuttle. The cost of approximately 69 million dollars for 100 EOS experiments that will yield the much needed high-accuracy, absolute measurement data is a bargain!
Group Velocity for Leaky Waves
Rzeznik, Andrew; Chumakova, Lyubov; Rosales, Rodolfo
2017-11-01
In many linear dispersive/conservative wave problems one considers solutions in an infinite medium which is uniform everywhere except for a bounded region. In general, localized inhomogeneities of the medium cause partial internal reflection, and some waves leak out of the domain. Often one only desires the solution in the inhomogeneous region, with the exterior accounted for by radiation boundary conditions. Formulating such conditions requires definition of the direction of energy propagation for leaky waves in multiple dimensions. In uniform media such waves have the form exp (d . x + st) where d and s are complex and related by a dispersion relation. A complex s is required since these waves decay via radiation to infinity, even though the medium is conservative. We present a modified form of Whitham's Averaged Lagrangian Theory along with modulation theory to extend the classical idea of group velocity to leaky waves. This allows for solving on the bounded region by representing the waves as a linear combination of leaky modes, each exponentially decaying in time. This presentation is part of a joint project, and applications of these results to example GFD problems will be presented by L. Chumakova in the talk ``Leaky GFD Problems''. This work is partially supported by NSF Grants DMS-1614043, DMS-1719637, and 1122374, and by the Hertz Foundation.
Computing discharge using the index velocity method
Levesque, Victor A.; Oberg, Kevin A.
2012-01-01
Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression
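The two-rating structure described above can be sketched in a few lines: a linear index rating fitted from concurrent discharge measurements, a stage-area rating, and their product Q = V * A. The calibration numbers and the rectangular stage-area rating below are invented, not from the report.

```python
def fit_index_rating(v_index, v_mean):
    """Least-squares index rating V = a + b * v_index, calibrated from
    concurrent discharge measurements. Returns (a, b)."""
    n = len(v_index)
    mx, my = sum(v_index) / n, sum(v_mean) / n
    b = sum((x - mx) * (y - my) for x, y in zip(v_index, v_mean)) \
        / sum((x - mx) ** 2 for x in v_index)
    return my - b * mx, b

def discharge(stage, v_index, rating, area_of_stage):
    """Index velocity method: Q = V * A, with V from the index rating and
    A from the stage-area rating."""
    a, b = rating
    return (a + b * v_index) * area_of_stage(stage)

# invented calibration pairs: ADVM index velocity vs. measured mean velocity (m/s)
vi = [0.2, 0.5, 0.8, 1.1, 1.4]
vm = [0.18, 0.44, 0.70, 0.96, 1.22]
rating = fit_index_rating(vi, vm)

# toy rectangular stage-area rating: a 20 m wide channel, stage h in metres
q = discharge(1.5, 1.0, rating, lambda h: 20.0 * h)
```

Separating the two ratings is what lets the method return different discharges for the same stage, unlike a stage-discharge rating.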
Middle cerebral artery flow velocity waveforms in fetal hypoxaemia.
Vyas, S; Nicolaides, K H; Bower, S; Campbell, S
1990-09-01
In 81 small-for-gestational age fetuses (SGA) colour flow imaging was used to identify the fetal middle cerebral artery for subsequent pulsed Doppler studies. Impedence to flow (pulsatility index; PI) was significantly lower, and mean blood velocity was significantly higher, than the respective reference ranges with gestation. Fetal blood sampling by cordocentesis was performed in all SGA fetuses and a significant quadratic relation was found between fetal hypoxaemia and the degree of reduction in the PI of FVWs from the fetal middle cerebral artery. Thus, maximum reduction in PI is reached when the fetal PO2 is 2-4 SD below the normal mean for gestation. When the oxygen deficit is greater there is a tendency for the PI to rise, and this presumably reflects the development of brain oedema.
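The pulsatility index referred to above is PI = (v_max - v_min) / v_mean (Gosling). A toy comparison with invented waveform samples, in which raised diastolic velocities (the cerebral redistribution seen in hypoxaemia) lower the PI:

```python
def pulsatility_index(waveform):
    """Gosling pulsatility index of a velocity waveform:
    PI = (v_max - v_min) / v_mean."""
    v_mean = sum(waveform) / len(waveform)
    return (max(waveform) - min(waveform)) / v_mean

# invented middle cerebral artery velocity samples over one cardiac cycle (cm/s)
normal = [60, 55, 45, 35, 30, 28, 26, 25]
redistributing = [55, 52, 48, 44, 42, 40, 39, 38]   # raised diastolic velocities

pi_normal = pulsatility_index(normal)
pi_redistributing = pulsatility_index(redistributing)
```

The reduced PI of the second waveform mirrors the decreased impedance the study reports in hypoxaemic fetuses.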
... of your immune system, which fights infections and diseases. Abnormal white blood cell levels may be a sign ... fall outside the normal range for many reasons. Abnormal results might be a sign of a disorder or disease. Other factors—such as diet, menstrual ...
Self-separation of blood plasma from whole blood during the capillary flow in microchannel
Nunna, Bharath Babu; Zhuang, Shiqiang; Lee, Eon Soo
2017-11-01
Self-separation of blood plasma from whole blood in microchannels is of great importance due to the enormous range of applications in healthcare and diagnostics. Blood is a multiphase complex fluid, composed of cells suspended in blood plasma. RBCs are the suspended particles whose shape changes during the flow of blood. The primary constituents of blood are erythrocytes or red blood cells (RBCs), leukocytes or white blood cells (WBCs), thrombocytes or platelets and blood plasma. The existence of RBCs in blood makes the blood a non-Newtonian fluid. The current study concerns the separation of blood plasma from whole blood during self-driven flow in a single microchannel without bifurcation, achieved by enhancing the capillary effects. The change in the capillary effect results in a change in contact angle, which directly influences the capillary flow. The flow velocity directly influences the net force acting on the RBCs and influences the separation process. The experiments are performed on PDMS microchannels with different contact angles by altering the surface characteristics using plasma treatment. The change in the separation length is studied during the capillary flow of blood in the microchannel. Bharath Babu Nunna is a researcher in mechanical engineering implementing novel and innovative technologies in biomedical devices to enhance the sensitivity of disease diagnosis.
The fine-tuning cost of the likelihood in SUSY models
International Nuclear Information System (INIS)
Ghilencea, D.M.; Ross, G.G.
2013-01-01
In SUSY models, the fine-tuning of the electroweak (EW) scale with respect to their parameters γ_i = {m_0, m_1/2, μ_0, A_0, B_0, …} and the maximal likelihood L to fit the experimental data are usually regarded as two different problems. We show that, if one regards the EW minimum conditions as constraints that fix the EW scale, this commonly held view is not correct and that the likelihood contains all the information about fine-tuning. In this case we show that the corrected likelihood is equal to the ratio L/Δ of the usual likelihood L and the traditional fine-tuning measure Δ of the EW scale. A similar result is obtained for the integrated likelihood over the set {γ_i}, that can be written as a surface integral of the ratio L/Δ, with the surface in γ_i space determined by the EW minimum constraints. As a result, a large likelihood actually demands a large ratio L/Δ or, equivalently, a small χ²_new = χ²_old + 2 ln Δ. This shows the fine-tuning cost to the likelihood (χ²_new) of the EW scale stability enforced by SUSY, which is ignored in data fits. A good χ²_new/d.o.f. ≈ 1 thus demands that SUSY models have a fine-tuning amount Δ ≪ exp(d.o.f./2), which provides a model-independent criterion for acceptable fine-tuning. If this criterion is not met, one can thus rule out SUSY models without a further χ²/d.o.f. analysis. Numerical methods to fit the data can easily be adapted to account for this effect.
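The criterion in the abstract is easy to apply numerically: with chi2_old >= 0, demanding chi2_new/d.o.f. <= 1 bounds the fine-tuning by Delta <= exp(d.o.f./2). A sketch with invented fit numbers (not from the paper):

```python
import math

def chi2_new(chi2_old, delta):
    """Fine-tuning-corrected goodness of fit: chi2_new = chi2_old + 2 ln(Delta)."""
    return chi2_old + 2.0 * math.log(delta)

def delta_bound(dof):
    """Largest Delta compatible with chi2_new / d.o.f. <= 1 even for a
    perfect fit (chi2_old = 0): Delta = exp(d.o.f. / 2)."""
    return math.exp(dof / 2.0)

# invented example: 20 degrees of freedom, a decent raw fit, heavy fine-tuning.
# The point fails the criterion even though chi2_old alone looks acceptable.
acceptable = chi2_new(18.0, 500.0) / 20.0 <= 1.0
```

This is how the 2 ln(Delta) penalty can exclude a parameter point that a plain chi-square fit would accept.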
Remote determination of the velocity index and mean streamwise velocity profiles
Johnson, E. D.; Cowen, E. A.
2017-09-01
When determining volumetric discharge from surface measurements of currents in a river or open channel, the velocity index is typically used to convert surface velocities to depth-averaged velocities. The velocity index is given by k = Ub/Usurf, where Ub is the depth-averaged velocity and Usurf is the local surface velocity. The USGS (United States Geological Survey) standard value for this coefficient, k = 0.85, was determined from a series of laboratory experiments and has been widely used in the field and in laboratory measurements of volumetric discharge despite evidence that the velocity index is site-specific. Numerous studies have documented that the velocity index varies with Reynolds number, flow depth, and relative bed roughness and with the presence of secondary flows. A remote method of determining depth-averaged velocity, and hence the velocity index, is developed here. The technique leverages the findings of Johnson and Cowen (2017) and permits remote determination of the velocity power-law exponent, thereby enabling remote prediction of the vertical structure of the mean streamwise velocity, the depth-averaged velocity, and the velocity index.
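The link between a power-law exponent and the velocity index can be made explicit: for a profile u(z) = Usurf * (z/h)^(1/m), depth-averaging gives Ub = Usurf * m/(m+1), so k = m/(m+1), and the classical 1/6-power profile (m = 6) lands near the USGS default of 0.85. This derivation is a standard textbook result, not the paper's remote-sensing method.

```python
def velocity_index_power_law(m):
    """Velocity index k = Ub / Usurf implied by the power-law profile
    u(z) = Usurf * (z / h)**(1 / m); integrating over the depth h gives
    Ub = Usurf * m / (m + 1), hence k = m / (m + 1)."""
    return m / (m + 1)

k6 = velocity_index_power_law(6)   # classical 1/6-power profile: k = 6/7
```

Since m varies with roughness and Reynolds number, so does k, which is exactly why a site-specific determination of the exponent matters.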
On whistler-mode group velocity
International Nuclear Information System (INIS)
Sazhin, S.S.
1986-01-01
An analytical study of the group velocity of whistler-mode waves propagating parallel to the magnetic field in a hot anisotropic plasma is presented. Some simple approximate formulae, which can be used for magnetospheric applications, are derived. These formulae can predict some properties of this group velocity which were not previously recognized or were obtained by numerical methods. In particular, it is pointed out that the anisotropy tends to compensate for the influence of the electron temperature on the value of the group velocity when the wave frequency is well below the electron gyrofrequency. It is predicted that, under these conditions, the group velocity tends towards zero at frequencies near the electron gyrofrequency.
Velocity measurement of conductor using electromagnetic induction
International Nuclear Information System (INIS)
Kim, Gu Hwa; Kim, Ho Young; Park, Joon Po; Jeong, Hee Tae; Lee, Eui Wan
2002-01-01
A basic technology was investigated to measure the speed of a conductor by a non-contact electromagnetic method. The principle of the velocity sensor is electromagnetic induction. To design the electromagnet for the velocity sensor, 2D electromagnetic analysis was performed using FEM software. The sensor output was analyzed according to the parameters of the velocity sensor, such as the type of magnetizing current and the lift-off. The output of the magnetic sensor depended linearly on the conductor speed and the magnetizing current. To compensate for lift-off changes during the velocity measurement, another magnetic sensor was put at the pole of the electromagnet.
Conduction velocity of antigravity muscle action potentials.
Christova, L; Kosarov, D; Christova, P
1992-01-01
The conduction velocity of the impulses along the muscle fibers is one of the parameters of the extraterritorial potentials of the motor units, allowing for the evaluation of the functional state of the muscles. There are no data about the conduction velocities of antigravity muscle action potentials. In this paper we offer a method for measuring the conduction velocity of potentials of single MUs and the averaged potentials of the interference electromyogram (IEMG) led off by surface electrodes from mm. sternocleidomastoideus, trapezius, deltoideus (caput laterale) and vastus medialis. The measured mean values of the conduction velocity of antigravity muscle potentials can be used for testing the functional state of the muscles.
High-speed video capillaroscopy method for imaging and evaluation of moving red blood cells
Gurov, Igor; Volkov, Mikhail; Margaryants, Nikita; Pimenov, Aleksei; Potemkin, Andrey
2018-05-01
A video capillaroscopy system with a high image recording rate, capable of resolving red blood cells moving at velocities of up to 5 mm/s in a capillary, is considered. The proposed processing of the recorded video sequence evaluates the spatial capillary area, the capillary diameter, and the capillary central line with high accuracy and reliability, independently of the properties of the individual capillary. A two-dimensional inter-frame procedure is applied to find the lateral shift between neighboring images in the blood flow area with moving red blood cells and thereby measure the blood flow velocity along the capillary central line directly. The developed method opens new opportunities for biomedical diagnostics, in particular through long-term continuous monitoring of red blood cell velocity in a capillary. A spatio-temporal representation of capillary blood flow is considered. Experimental results of direct measurements of blood flow velocity in a separate capillary as well as in a capillary net are presented and discussed.
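The inter-frame procedure can be sketched as a brute-force 2D cross-correlation that finds the pixel shift of the red-blood-cell pattern between consecutive frames; the shift times the pixel pitch times the frame rate gives the velocity. Pixel pitch and frame rate below are hypothetical values, not the paper's:

```python
# Illustrative sketch of inter-frame shift estimation; a real system would
# use an FFT-based correlation and sub-pixel interpolation.

def interframe_shift(a, b, max_shift):
    """Integer (dy, dx) that best aligns frame b to frame a (lists of lists)."""
    rows, cols = len(a), len(a[0])
    best, best_val = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            val = 0.0
            for y in range(rows):
                for x in range(cols):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < rows and 0 <= xx < cols:
                        val += a[y][x] * b[yy][xx]
            if val > best_val:
                best_val, best = val, (dy, dx)
    return best

def velocity_mm_per_s(shift_px, pixel_mm, fps):
    """Speed from an inter-frame shift: |shift| * pixel pitch * frame rate."""
    dy, dx = shift_px
    return (dy * dy + dx * dx) ** 0.5 * pixel_mm * fps
```

At, say, 1 µm/pixel and 500 frames/s, a one-to-two-pixel shift per frame already corresponds to velocities on the order of 1 mm/s, consistent with the 5 mm/s range quoted above.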
Thermographic venous blood flow characterization with external cooling stimulation
Saxena, Ashish; Ng, E. Y. K.; Raman, Vignesh
2018-05-01
Experimental characterization of blood flow in a human forearm is performed using an active thermography method based on continuous external cooling. A blood vessel is detected both qualitatively and quantitatively in the thermal image, and the vessel diameter, blood flow direction, and blood flow velocity in the target vessel are evaluated. Subtraction-based image manipulation is performed to enhance the feature contrast of the thermal image acquired after removal of the external cooling. To demonstrate the effect of occlusive disease (obstruction), an external cuff occlusion is applied after removal of the cooling and its effect on skin rewarming is studied. Using the external cooling, a transit-time based estimate of the blood flow velocity is obtained. The results show that an external cooling based active thermography method can be used to develop a diagnostic tool for superficial blood vessel diseases.
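A transit-time estimate of this kind can be sketched as follows, under the assumption (not spelled out in the abstract) that two points along the vessel rewarm at different times after the cooling is removed, and that velocity is their separation divided by the difference in rewarming arrival times. The threshold and temperatures below are hypothetical:

```python
# Illustrative transit-time velocity estimate from two rewarming curves
# sampled at the same instants along the vessel.

def arrival_time(temps, times, threshold):
    """First time at which the temperature recovers above the threshold."""
    for t, temp in zip(times, temps):
        if temp >= threshold:
            return t
    raise ValueError("threshold never reached")

def transit_time_velocity(temps_upstream, temps_downstream, times,
                          distance_m, threshold):
    """Velocity (m/s) = point separation / rewarming-front transit time."""
    dt = (arrival_time(temps_downstream, times, threshold)
          - arrival_time(temps_upstream, times, threshold))
    return distance_m / dt
```

For instance, if a downstream point 4 cm away rewarms 2 s later than the upstream point, the estimated flow velocity is 2 cm/s.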
Expert elicitation on ultrafine particles: likelihood of health effects and causal pathways
Directory of Open Access Journals (Sweden)
Brunekreef Bert
2009-07-01
Full Text Available Abstract Background Exposure to fine ambient particulate matter (PM) has consistently been associated with increased morbidity and mortality. The relationship between exposure to ultrafine particles (UFP) and health effects is less firmly established. If UFP cause health effects independently of coarser fractions, this could affect health impact assessment of air pollution and possibly lead to alternative policy options being considered to reduce the disease burden of PM. Therefore, we organized an expert elicitation workshop to assess the evidence for a causal relationship between exposure to UFP and health endpoints. Methods An expert elicitation on the health effects of ambient ultrafine particle exposure was carried out, focusing on: (1) the likelihood of causal relationships with key health endpoints, and (2) the likelihood of potential causal pathways for cardiac events. Based on a systematic peer-nomination procedure, fourteen European experts (epidemiologists, toxicologists and clinicians) were selected, of whom twelve attended. They were provided with a briefing book containing key literature. After a group discussion, individual expert judgments in the form of ratings of the likelihood of causal relationships and pathways were obtained using a confidence scheme adapted from the one used by the Intergovernmental Panel on Climate Change. Results The likelihood of an independent causal relationship between increased short-term UFP exposure and increased all-cause mortality, hospital admissions for cardiovascular and respiratory diseases, aggravation of asthma symptoms, and lung function decrements was rated medium to high by most experts. The likelihood for long-term UFP exposure to be causally related to all-cause mortality, cardiovascular and respiratory morbidity, and lung cancer was rated slightly lower, mostly medium. The experts rated the likelihood of each of the six identified possible causal pathways separately. Out of these