Energy Technology Data Exchange (ETDEWEB)
Narasimhadhan, A.V.; Rajgopal, Kasi
2011-07-01
This paper presents a new hybrid filtered backprojection (FBP) algorithm for fan-beam and cone-beam scans. The hybrid reconstruction kernel is the sum of the ramp and Hilbert filters. We modify the redundancy weighting function to reduce the inverse-square distance weighting in the backprojection to an inverse distance weight. The modified weight also eliminates the derivative associated with the Hilbert filter kernel, so the proposed reconstruction algorithm retains the advantages of inverse distance weighting in the backprojection. We evaluate the performance of the new algorithm in terms of noise magnitude and uniformity for the fan-beam geometry. Computer simulations show that the spatial resolution is nearly identical to that of the standard fan-beam ramp-filtered algorithm, while the noise is spatially uniform and the noise variance is reduced. (orig.)
Evaluation of an inverse distance weighting method for patching ...
African Journals Online (AJOL)
2016-07-03
Jul 3, 2016 ... 1Agricultural Research Council – Institute for Soil, Climate and Water ... local influence that diminishes with distance and weights change ... most excessively high rainfall events are obtained from convective clouds ...
Assessment of Groundwater Chemical Quality, Using Inverse Distance Weighted Method
Directory of Open Access Journals (Sweden)
Sh. Ashraf
2013-04-01
Full Text Available An interpolation technique, ordinary Inverse Distance Weighted (IDW), was used to obtain the spatial distribution of groundwater quality parameters in the Damghan plain of Iran. According to the Scofield guidelines for TDS, 60% of the water samples were harmful for irrigation purposes. Regarding the EC parameter, more than 60% of the studied area lay in the bad range for irrigation purposes. The most dominant anion was Cl-, and 10% of the water samples fell in the very hazardous class. According to the Doneen guidelines for chloride, 100% of the water collected from the aquifer had slight to moderate problems for irrigation purposes. The predominant cations in the Damghan plain aquifer followed the order Na+ > Ca++ > Mg++ > K+. Sodium was the dominant cation and, according to the Na+ content guidelines, almost all groundwater samples posed a problem for foliar application. The calcium ion distribution was within the usual range. The magnesium ion concentration was generally lower than that of sodium and calcium, and the majority of samples showed Mg++ amounts within the usual range. The K+ value ranged from 0.1 to 0.23 meq/L, and all water samples had potassium values within the permissible limit. Based on the SAR criterion, 80% of the collected water had slight to moderate problems. The SSP values ranged from 2.87 to 6.87%. According to the SAR value, thirty percent of groundwater samples fell in the doubtful class. The estimated RSC values ranged from 0.4 to 2 and, based on the RSC criterion, twenty percent of groundwater samples had slight to moderate problems.
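The ordinary IDW estimator used in studies like this one weights each sampled value by the inverse of its distance to the prediction point, raised to a power. A minimal sketch in Python (the sample coordinates and TDS values below are hypothetical, not taken from the study):

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Ordinary inverse distance weighted interpolation.

    Each query point gets a weighted average of the known values,
    with weights proportional to 1 / distance**power.
    """
    est = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0):                 # query coincides with a sample
            est[i] = values[np.argmin(d)]
            continue
        w = 1.0 / d**power
        est[i] = np.sum(w * values) / np.sum(w)
    return est

# Hypothetical TDS samples: (x, y) in km, value in mg/L
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
tds = np.array([800.0, 1200.0, 900.0, 1500.0])
print(idw(pts, tds, np.array([[0.5, 0.5]])))  # all equidistant -> plain mean
```

At a point equidistant from all samples the weights are equal, so the estimate reduces to the plain mean of the sampled values.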
Assessment of Groundwater Chemical Quality, Using Inverse Distance Weighted Method
Directory of Open Access Journals (Sweden)
Sh. Ashraf
2014-02-01
Full Text Available An interpolation technique, ordinary Inverse Distance Weighted (IDW), was used to obtain the spatial distribution of groundwater quality parameters in the Damghan plain of Iran. According to the Scofield guidelines for TDS, 60% of the water samples were harmful for irrigation purposes. Regarding the EC parameter, more than 60% of the studied area lay in the bad range for irrigation purposes. The most dominant anion was Cl-, and 10% of the water samples fell in the very hazardous class. According to the Doneen guidelines for chloride, 100% of the water collected from the aquifer had slight to moderate problems for irrigation purposes. The predominant cations in the Damghan plain aquifer followed the order Na+ > Ca++ > Mg++ > K+. Sodium was the dominant cation and, according to the Na+ content guidelines, almost all groundwater samples posed a problem for foliar application. The calcium ion distribution was within the usual range. The magnesium ion concentration was generally lower than that of sodium and calcium, and the majority of samples showed Mg++ amounts within the usual range. The K+ value ranged from 0.1 to 0.23 meq/L, and all water samples had potassium values within the permissible limit. Based on the SAR criterion, 80% of the collected water had slight to moderate problems. The SSP values ranged from 2.87 to 6.87%. According to the SAR value, thirty percent of groundwater samples fell in the doubtful class. The estimated RSC values ranged from 0.4 to 2 and, based on the RSC criterion, twenty percent of groundwater samples had slight to moderate problems.
Directory of Open Access Journals (Sweden)
Y. Gholipour
Full Text Available This paper focuses on a metamodel-based design optimization algorithm, with the aim of improving its computational cost and convergence rate. The metamodel-based optimization method introduced here reduces the computational cost of the optimization and improves its convergence rate through a surrogate. The algorithm combines a high-quality approximation technique, Inverse Distance Weighting, with a meta-heuristic algorithm, Harmony Search. The outcome is then polished by a semi-tabu search algorithm. The algorithm adopts a filtering system to determine the solution vectors to which exact simulation should be applied. Its performance is evaluated on standard truss design problems, showing a significant decrease in computational effort and an improved convergence rate.
Comparing ordinary kriging and inverse distance weighting for soil As pollution in Beijing.
Qiao, Pengwei; Lei, Mei; Yang, Sucai; Yang, Jun; Guo, Guanghui; Zhou, Xiaoyong
2018-06-01
The spatial interpolation method is the basis of soil heavy metal pollution assessment and remediation. Existing evaluation indices for interpolation accuracy are not tied to the actual situation; the selection of an interpolation method should be based on the specific research purpose and the characteristics of the research object. In this paper, arsenic (As) pollution in the soils of Beijing was taken as an example. The prediction accuracy of ordinary kriging (OK) and inverse distance weighting (IDW) was evaluated based on cross-validation results and the spatial distribution characteristics of influencing factors. The results showed that, under conditions of specific spatial correlation, the cross-validation results of OK and IDW for every soil point and the prediction accuracy of the spatial distribution trend are similar. However, the prediction accuracy of OK for the maximum and minimum values is lower than that of IDW, and the number of high-pollution areas identified by OK is smaller. OK has difficulty identifying high-pollution areas fully, which shows that its smoothing effect is pronounced. In addition, as the spatial correlation of the As concentration increases, the cross-validation errors of OK and IDW decrease, and the high-pollution areas identified by OK approach the IDW result, identifying high-pollution areas more comprehensively. However, because the semivariogram constructed in OK interpolation is more subjective and requires a larger number of soil samples, IDW is more suitable for spatial prediction of heavy metal pollution in soils.
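The cross-validation used to score interpolators in comparisons like this one can be sketched as leave-one-out prediction error: each sample is predicted from all the others and the errors are aggregated. A simplified Python illustration for IDW (the points and field below are synthetic, not the study's data or exact protocol):

```python
import numpy as np

def idw_predict(xy, vals, q, power=2.0):
    # Single-point IDW prediction; tiny floor avoids division by zero.
    d = np.linalg.norm(xy - q, axis=1)
    w = 1.0 / np.maximum(d, 1e-12)**power
    return np.sum(w * vals) / np.sum(w)

def loo_rmse(xy, vals, power=2.0):
    """Leave-one-out cross-validation: predict each sample from all
    the others and report the root-mean-square error."""
    n = len(vals)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        errs.append(idw_predict(xy[mask], vals[mask], xy[i], power) - vals[i])
    return float(np.sqrt(np.mean(np.square(errs))))

rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(50, 2))
vals = np.sin(xy[:, 0]) + 0.1 * rng.normal(size=50)  # smooth field + noise
print(loo_rmse(xy, vals, power=2.0))
```

The same loop with a kriging predictor in place of `idw_predict` gives the per-point comparison the abstract describes.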
Mei, Gang; Xu, Liangliang; Xu, Nengxiong
2017-09-01
This paper focuses on designing and implementing parallel adaptive inverse distance weighting (AIDW) interpolation algorithms by using the graphics processing unit (GPU). The AIDW is an improved version of the standard IDW, which can adaptively determine the power parameter according to the data points' spatial distribution pattern and achieve more accurate predictions than those predicted by IDW. In this paper, we first present two versions of the GPU-accelerated AIDW, i.e. the naive version without profiting from the shared memory and the tiled version taking advantage of the shared memory. We also implement the naive version and the tiled version using two data layouts, structure of arrays and array of aligned structures, on both single and double precision. We then evaluate the performance of parallel AIDW by comparing it with its corresponding serial algorithm on three different machines equipped with the GPUs GT730M, M5000 and K40c. The experimental results indicate that: (i) there is no significant difference in the computational efficiency when different data layouts are employed; (ii) the tiled version is always slightly faster than the naive version; and (iii) on single precision the achieved speed-up can be up to 763 (on the GPU M5000), while on double precision the obtained highest speed-up is 197 (on the GPU K40c). To benefit the community, all source code and testing data related to the presented parallel AIDW algorithm are publicly available.
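The key idea of AIDW, choosing the distance-decay power from the local spatial pattern around each prediction point, can be sketched as follows. This is a simplified stand-in: the density statistic and the mapping from pattern to power used in the paper differ, and the values here are illustrative.

```python
import numpy as np

def adaptive_power(xy, q, k=5, p_min=1.0, p_max=5.0):
    """Pick the IDW power from the local point pattern around query q:
    a simplified stand-in for AIDW's adaptive power selection."""
    n = len(xy)
    d = np.sort(np.linalg.norm(xy - q, axis=1))[:k]
    area = np.ptp(xy[:, 0]) * np.ptp(xy[:, 1])
    expected = 0.5 / np.sqrt(n / area)          # mean NN spacing, uniform pattern
    r = np.clip(d.mean() / expected, 0.0, 2.0)  # r < 1: locally dense sampling
    return p_min + (p_max - p_min) * r / 2.0    # sparse neighbourhood -> larger power

def aidw(xy, vals, queries, k=5):
    out = np.empty(len(queries))
    for i, q in enumerate(queries):
        d = np.linalg.norm(xy - q, axis=1)
        if np.any(d == 0):
            out[i] = vals[np.argmin(d)]
            continue
        w = 1.0 / d**adaptive_power(xy, q, k)
        out[i] = np.sum(w * vals) / np.sum(w)
    return out

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, size=(200, 2))
vals = xy[:, 0]                        # a simple planar trend
print(aidw(xy, vals, np.array([[0.5, 0.5]])))
```

The GPU versions in the paper parallelize the per-query loop, one thread per prediction point, which is why the data layout (structure of arrays vs. array of structures) matters for throughput.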
Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk
2018-04-20
The threat of detection by infrared (IR) signals is higher than for other signals such as radar or sonar, because an object detected by an IR sensor cannot easily recognize its detection status. Recently, research on actively reducing the IR signal has been conducted, controlling the IR signal by adjusting the surface temperature of the object. In this paper, we propose an active IR stealth algorithm to synchronize the IR signals from an object and the background around it. The proposed method uses the repulsive particle swarm optimization statistical algorithm to estimate the IR stealth surface temperature, synchronizing the IR signals from the object and the surrounding background by setting the inverse distance weighted contrast radiant intensity (CRI) to zero. We tested the IR stealth performance in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for a test plate located at three different positions in a forest scene. Our results show that the proposed inverse distance weighted active IR stealth technique reduces the contrast radiant intensity between the object and the background by up to 32% compared to the previous method, which determines the CRI as the simple signal difference between the object and the background.
Tarmizi, S. N. M.; Asmat, A.; Sumari, S. M.
2014-02-01
PM10 is one of the air contaminants that can be harmful to human health. Meteorological factors and changes of monsoon season may affect the distribution of these particles. The objective of this study is to determine the temporal and spatial particulate matter (PM10) concentration distribution in the Klang Valley, Malaysia, using the Inverse Distance Weighted (IDW) method under different monsoon seasons and meteorological conditions. PM10 and meteorological data were obtained from the Malaysian Department of Environment (DOE). Particle distribution data were added to the geographic database on a seasonal basis. Temporal and spatial patterns of the PM10 concentration distribution were determined using ArcGIS 9.3. Higher PM10 concentrations are observed during the Southwest monsoon season, with lower values during the Northeast monsoon season. Different monsoon seasons show different meteorological conditions that affect the PM10 distribution.
International Nuclear Information System (INIS)
Tarmizi, S N M; Asmat, A; Sumari, S M
2014-01-01
PM10 is one of the air contaminants that can be harmful to human health. Meteorological factors and changes of monsoon season may affect the distribution of these particles. The objective of this study is to determine the temporal and spatial particulate matter (PM10) concentration distribution in the Klang Valley, Malaysia, using the Inverse Distance Weighted (IDW) method under different monsoon seasons and meteorological conditions. PM10 and meteorological data were obtained from the Malaysian Department of Environment (DOE). Particle distribution data were added to the geographic database on a seasonal basis. Temporal and spatial patterns of the PM10 concentration distribution were determined using ArcGIS 9.3. Higher PM10 concentrations are observed during the Southwest monsoon season, with lower values during the Northeast monsoon season. Different monsoon seasons show different meteorological conditions that affect the PM10 distribution.
ORDERED WEIGHTED DISTANCE MEASURE
Institute of Scientific and Technical Information of China (English)
Zeshui XU; Jian CHEN
2008-01-01
The aim of this paper is to develop an ordered weighted distance (OWD) measure, which is a generalization of some widely used distance measures, including the normalized Hamming distance, the normalized Euclidean distance, the normalized geometric distance, the max distance, the median distance and the min distance. Moreover, the ordered weighted averaging operator, the generalized ordered weighted aggregation operator, the ordered weighted geometric operator, the averaging operator, the geometric mean operator, the ordered weighted square root operator, the square root operator, the max operator, the median operator and the min operator are also special cases of the OWD measure. Some methods depending on the input arguments are given to determine the weights associated with the OWD measure. The prominent characteristic of the OWD measure is that it can relieve (or intensify) the influence of unduly large or unduly small deviations on the aggregation results by assigning them low (or high) weights. This desirable characteristic makes the OWD measure suitable for many practical fields, including group decision making, medical diagnosis, data mining and pattern recognition. Finally, based on the OWD measure, we develop a group decision making approach and illustrate it with a numerical example.
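A sketch of the OWD idea: the componentwise deviations are sorted in descending order before being weighted, so the weight vector controls how much influence the largest deviations get, and different weight/exponent choices recover the classical distances. The vectors, weights and exponent λ below are illustrative:

```python
import numpy as np

def owd(a, b, w, lam=1.0):
    """Ordered weighted distance: sort |a_j - b_j| in descending order,
    then take a weighted power mean with weight vector w (summing to 1)."""
    d = np.sort(np.abs(np.asarray(a) - np.asarray(b)))[::-1]
    return float(np.sum(w * d**lam) ** (1.0 / lam))

a, b = [0.2, 0.9, 0.5], [0.6, 0.1, 0.5]
n = len(a)
uniform = np.full(n, 1.0 / n)
print(owd(a, b, uniform, 1))             # normalized Hamming distance
print(owd(a, b, uniform, 2))             # normalized Euclidean distance
print(owd(a, b, np.array([1.0, 0, 0])))  # max distance (all weight on largest)
print(owd(a, b, np.array([0, 0, 1.0])))  # min distance (all weight on smallest)
```

Putting low weights on the first (largest) positions is exactly the "relieve the influence of unduly large deviations" behavior the paper describes.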
Directory of Open Access Journals (Sweden)
Mehmet Arif Özyazıcı
2015-11-01
Full Text Available The aim of this study was to determine the plant nutrient content of agricultural land in the Central and Eastern Black Sea Region, to build a soil database of these variables, and to generate maps of their distribution using a geographical information system (GIS). In this research, a total of 3400 soil samples (0-20 cm depth) were taken at 2.5 x 2.5 km grid points representing agricultural soils. Total nitrogen and extractable calcium, magnesium, sodium, boron, iron, copper, zinc and manganese contents were analysed in the collected soil samples. The analysis results were classified and evaluated for deficiency, sufficiency or excess of each plant nutrient. A GIS soil database and maps of the current status of the study area were then created using the inverse distance weighted (IDW) interpolation method. According to the results, the arable soils of the Central and Eastern Black Sea Region were sufficient in total nitrogen and extractable iron, copper and manganese, while extractable calcium, magnesium and sodium were at good or moderate levels in 66.88%, 81.44% and 64.56% of the soil samples, respectively. In addition, insufficient boron and zinc concentrations were found in 34.35% and 51.36% of the soil samples, respectively.
Phylogenetic inference with weighted codon evolutionary distances.
Criscuolo, Alexis; Michel, Christian J
2009-04-01
We develop a new approach to estimating a matrix of pairwise evolutionary distances from a codon-based alignment, based on a codon evolutionary model. The method first computes a standard distance matrix for each of the three codon positions. These three distance matrices are then weighted according to an estimate of the global evolutionary rate of each codon position and averaged into a unique distance matrix. Using a large set of both real and simulated codon-based alignments of nucleotide sequences, we show that this approach leads to distance matrices with significantly better treelikeness than those obtained from standard nucleotide evolutionary distances. We also propose an alternative weighting to eliminate the part of the noise often associated with some codon positions, particularly the third position, which is known to evolve at a fast rate. Simulation results show that fast distance-based tree reconstruction algorithms applied to distance matrices based on this codon position weighting can lead to phylogenetic trees that are at least as accurate as, if not more accurate than, those inferred by maximum likelihood. Finally, a well-known multigene dataset composed of eight yeast species and 106 codon-based alignments is reanalyzed, showing that our codon evolutionary distances allow building a phylogenetic tree that is similar to those obtained by non-distance-based methods (e.g., maximum parsimony and maximum likelihood) and significantly improved compared to standard nucleotide evolutionary distance estimates.
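The central step, weighting the per-position distance matrices by an estimate of each codon position's evolutionary rate and averaging them, can be sketched as below. The matrices and rate estimates are hypothetical toy values, not taken from the paper:

```python
import numpy as np

def weighted_codon_distance(d_pos, rates):
    """Combine three codon-position distance matrices into one, weighted
    by (hypothetical) estimates of each position's global evolutionary
    rate. Down-weighting a noisy position, often the third, mirrors the
    paper's alternative weighting."""
    w = np.asarray(rates, dtype=float)
    w = w / w.sum()
    return sum(wi * di for wi, di in zip(w, d_pos))

# Toy 3-taxon distance matrices for codon positions 1-3 (hypothetical)
d1 = np.array([[0, .10, .20], [.10, 0, .15], [.20, .15, 0]])
d2 = np.array([[0, .08, .18], [.08, 0, .12], [.18, .12, 0]])
d3 = np.array([[0, .40, .55], [.40, 0, .50], [.55, .50, 0]])
combined = weighted_codon_distance([d1, d2, d3], rates=[1.0, 1.0, 0.2])
```

The combined matrix stays a valid (symmetric, zero-diagonal) distance matrix, so any standard distance-based tree method can be run on it directly.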
Weighted Branching Simulation Distance for Parametric Weighted Kripke Structures
DEFF Research Database (Denmark)
Foshammer, Louise; Larsen, Kim Guldstrand; Mariegaard, Anders
2016-01-01
This paper concerns branching simulation for weighted Kripke structures with parametric weights. Concretely, we consider a weighted extension of branching simulation where a single transition can be matched by a sequence of transitions while preserving the branching behavior. We relax this notion to allow for a small degree of deviation in the matching of weights, inducing a directed distance on states. The distance between two states can be used directly to relate properties of the states within a sub-fragment of weighted CTL. The problem of relating systems thus changes to minimizing the distance, which, in the general parametric case, corresponds to finding suitable parameter valuations such that one system can approximately simulate another. Although the distance considers a potentially infinite set of transition sequences, we demonstrate that there exists an upper bound on the length…
Directory of Open Access Journals (Sweden)
Lixin Li
2014-09-01
Full Text Available Epidemiological studies have identified associations between mortality and changes in the concentration of particulate matter. These studies have highlighted public concern about the health effects of particulate air pollution. Modeling fine particulate matter (PM2.5) exposure risk and monitoring day-to-day changes in PM2.5 concentration are critical steps for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year 2009, at both the census block group and county levels. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a scaling factor under the assumption that the spatial and temporal dimensions are equally important when interpolating a continuously changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores the computational issues (computer processing speed) faced during the implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, the k-d tree, are adopted to address the computational challenges, and significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results.
Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard
2014-01-01
Epidemiological studies have identified associations between mortality and changes in the concentration of particulate matter. These studies have highlighted public concern about the health effects of particulate air pollution. Modeling fine particulate matter (PM2.5) exposure risk and monitoring day-to-day changes in PM2.5 concentration are critical steps for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year 2009, at both the census block group and county levels. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a scaling factor under the assumption that the spatial and temporal dimensions are equally important when interpolating a continuously changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores the computational issues (computer processing speed) faced during the implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, the k-d tree, are adopted to address the computational challenges, and significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation results.
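The extension approach described above, folding time into the distance computation as an extra coordinate scaled by a balancing factor, can be sketched like this. The factor c and the observation data are illustrative, not the paper's calibrated values:

```python
import numpy as np

def st_idw(xyt, vals, queries, c=1.0, power=2.0):
    """Spatiotemporal IDW via the 'extension approach': time is treated
    as a third coordinate, scaled by c so that a single distance metric
    spans both space and time."""
    scale = np.array([1.0, 1.0, c])
    out = np.empty(len(queries))
    for i, q in enumerate(queries):
        d = np.linalg.norm((xyt - q) * scale, axis=1)
        if np.any(d == 0):
            out[i] = vals[np.argmin(d)]
            continue
        w = 1.0 / d**power
        out[i] = np.sum(w * vals) / np.sum(w)
    return out

# Hypothetical PM2.5 readings: (x, y, day), value in ug/m^3
obs = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 2], [1, 1, 2]], dtype=float)
pm = np.array([10.0, 14.0, 12.0, 16.0])
print(st_idw(obs, pm, np.array([[0.5, 0.5, 1.5]]), c=0.5))
```

Setting c larger makes the interpolation favor observations close in time; c smaller favors observations close in space.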
RFDR with Adiabatic Inversion Pulses: Application to Internuclear Distance Measurements
International Nuclear Information System (INIS)
Leppert, Joerg; Ohlenschlaeger, Oliver; Goerlach, Matthias; Ramachandran, Ramadurai
2004-01-01
In the context of the structural characterisation of biomolecular systems via MAS solid-state NMR, the potential utility of homonuclear dipolar recoupling with adiabatic inversion pulses has been assessed via numerical simulations and experimental measurements. The results suggest that reliable estimates of internuclear distances can be obtained from an analysis of the initial cross-peak intensity buildup curves generated in two-dimensional adiabatic inversion pulse driven longitudinal magnetisation exchange experiments.
Distance weighting for improved tomographic reconstructions
International Nuclear Information System (INIS)
Koeppe, R.A.; Holden, J.E.
1984-01-01
An improved method for the reconstruction of emission computed axial tomography images has been developed. The method is a modification of filtered back-projection in which the back-projected values are weighted to reflect the loss of information with distance from the camera that is inherent in gamma camera imaging. This information loss is a result of the loss of spatial resolution with distance, attenuation, and scatter. The weighting scheme is best described by considering the contributions of any two opposing views to the reconstruction image pixels. The weight applied to the projections of one view is set equal to the relative amount of the original activity initially received in that projection, assuming a uniform attenuating medium. This yields a weighting value that is a function of distance into the image, with a value of one for pixels near the camera, a value of 0.5 at the image center, and a value of zero on the opposite side. Tomographic reconstructions produced with this method show improved spatial resolution when compared to conventional 360° reconstructions. The improvement is in the tangential direction, where simulations have indicated a FWHM improvement of 1 to 1.5 millimeters. The resolution in the radial direction is essentially the same for both methods. Visual inspection of the reconstructed images shows improved resolution and contrast.
THE WEIGHTED POINCARÉ DISTANCE IN THE HALF PLANE
Byun, Jisoo; Baek, Seung Min; Cho, Hong Rae; Lee, Han-Wool
2014-01-01
In this paper we introduce the weighted Poincaré distance and the induced distance by the weighted Bloch type space. We prove that the weighted Poincaré distance is identical to the inner distance generated by the induced distance.
Inverse-designed stretchable metalens with tunable focal distance
Callewaert, Francois; Velev, Vesselin; Jiang, Shizhou; Sahakian, Alan Varteres; Kumar, Prem; Aydin, Koray
2018-02-01
In this paper, we present an inverse-designed, 3D-printed, all-dielectric stretchable millimeter-wave metalens with a tunable focal distance. A computational inverse-design method is used to design a flat metalens made of disconnected polymer building blocks with complex shapes, as opposed to conventional monolithic lenses. The proposed metalens provides better performance than a conventional Fresnel lens while using less material and enabling larger focal distance tunability. The metalens is fabricated with a commercial 3D printer and attached to a stretchable platform. Measurements and simulations show that the focal distance can be tuned by a factor of 4 with a stretching factor of only 75%, with a nearly diffraction-limited focal spot and a 70% relative focusing efficiency, defined as the ratio between the power focused in the focal spot and the power going through the focal plane. The proposed platform can be extended to the design and fabrication of multiple electromagnetic devices working from visible to microwave radiation, depending on the scaling of the devices.
Full waveform inversion for time-distance helioseismology
International Nuclear Information System (INIS)
Hanasoge, Shravan M.; Tromp, Jeroen
2014-01-01
Inferring interior properties of the Sun from photospheric measurements of the seismic wavefield constitutes the helioseismic inverse problem. Deviations in seismic measurements (such as wave travel times) from their fiducial values estimated for a given model of the solar interior imply that the model is inaccurate. Contemporary inversions in local helioseismology assume that properties of the solar interior are linearly related to measured travel-time deviations. It is widely known, however, that this assumption is invalid for sunspots and active regions, and is likely invalid for supergranular flows. Here, we introduce nonlinear optimization, executed iteratively, as a means of inverting for the subsurface structure of large-amplitude perturbations. Defining the penalty functional as the L2 norm of wave travel-time deviations, we compute the total misfit gradient of this functional with respect to the relevant model parameters at each iteration around the corresponding model. The model is successively improved using either steepest descent, conjugate gradient, or the quasi-Newton limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Performing nonlinear iterations requires privileging pixels (such as those in the near field of the scatterer), a practice that is not compliant with the standard assumption of translational invariance. Measurements for these inversions, although similar in principle to those used in time-distance helioseismology, require some retooling. For the sake of simplicity in illustrating the method, we consider a two-dimensional inverse problem with only a sound-speed perturbation.
ipw: An R Package for Inverse Probability Weighting
Directory of Open Access Journals (Sweden)
Ronald B. Geskus
2011-10-01
Full Text Available We describe the R package ipw for estimating inverse probability weights. We show how to use the package to fit marginal structural models through inverse probability weighting, to estimate causal effects. Our package can be used with data from a point treatment situation as well as with a time-varying exposure and time-varying confounders. It can be used with binomial, categorical, ordinal and continuous exposure variables.
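The ipw package is for R, but the estimator itself is easy to sketch in Python. In the simulation below the treatment probabilities are known by construction (in practice they would be fitted, e.g. by logistic regression, as ipw does); the data are entirely simulated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.binomial(1, 0.5, n)                    # confounder
p_treat = np.where(x == 1, 0.8, 0.2)           # treatment depends on x
t = rng.binomial(1, p_treat)
y = 2.0 * t + 3.0 * x + rng.normal(0, 1, n)    # true causal effect of t is 2

# Inverse probability weights: 1 / P(T = t_i | X = x_i)
w = np.where(t == 1, 1 / p_treat, 1 / (1 - p_treat))

# Weighted (pseudo-population) means recover the marginal causal effect
mu1 = np.sum(w * y * (t == 1)) / np.sum(w * (t == 1))
mu0 = np.sum(w * y * (t == 0)) / np.sum(w * (t == 0))
print(mu1 - mu0)   # close to the true effect, 2.0
```

The unweighted difference in means would be biased upward here (roughly 3.8), because treated units are more likely to have x = 1; the weights reconstruct a pseudo-population in which treatment is independent of x.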
30 CFR 285.543 - Example of how the inverse distance formula works.
2010-07-01
... 30 Mineral Resources 2 2010-07-01 false Example of how the inverse distance formula works. 285.543 Section 285.543 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR... Financial Assurance Requirements Revenue Sharing with States § 285.543 Example of how the inverse distance...
Effect of Weight Transfer on a Vehicle's Stopping Distance.
Whitmire, Daniel P.; Alleman, Timothy J.
1979-01-01
An analysis of the minimum stopping distance problem is presented taking into account the effect of weight transfer on nonskidding vehicles and front- or rear-wheels-skidding vehicles. Expressions for the minimum stopping distances are given in terms of vehicle geometry and the coefficients of friction. (Author/BB)
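The core of the analysis, deceleration limited by the friction on the braking axle whose normal load itself shifts with deceleration, can be sketched under the standard rigid-vehicle model. The parameter values are illustrative and the paper's exact expressions may differ:

```python
from dataclasses import dataclass

G = 9.81  # m/s^2

@dataclass
class Vehicle:
    L: float  # wheelbase (m)
    b: float  # horizontal distance from CG to the rear axle (m)
    h: float  # CG height above the road (m)

def stopping_distance(v, mu, veh, mode="all"):
    """Minimum stopping distance v**2 / (2a). Under deceleration a,
    a load m*a*h/L transfers from the rear axle to the front, changing
    the friction available to the braking axle."""
    c = veh.L - veh.b                          # CG to front axle
    if mode == "all":                          # all wheels at the limit:
        a = mu * G                             # weight transfer cancels out
    elif mode == "front":                      # only front wheels skidding
        a = mu * G * veh.b / (veh.L - mu * veh.h)
    elif mode == "rear":                       # only rear wheels skidding
        a = mu * G * c / (veh.L + mu * veh.h)
    else:
        raise ValueError(mode)
    return v**2 / (2 * a)

car = Vehicle(L=2.7, b=1.3, h=0.55)
for mode in ("all", "front", "rear"):
    print(mode, round(stopping_distance(27.0, 0.8, car, mode), 1))
```

Weight transfer helps the front-only braking case (the front axle gains load as the vehicle decelerates) and hurts the rear-only case, so the rear-skid stopping distance comes out longest.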
Echocardiographic left ventricular masses in distance runners and weight lifters
Longhurst, J. C.; Gonyea, W. J.; Mitchell, J. H.; Kelly, A. R.
1980-01-01
The relationships of different forms of exercise training to left ventricular mass and body mass are investigated by echocardiographic studies of weight lifters, long-distance runners, and comparatively sized untrained control subjects. Left ventricular mass determinations by the Penn convention reveal increased absolute left ventricular masses in long-distance runners and competitive weight lifters with respect to controls matched for age, body weight, and body surface area, and a significant correlation between ventricular mass and lean body mass. When normalized to lean body mass, the ventricular masses of distance runners are found to be significantly higher than those of the other groups, suggesting that dynamic training elevates left ventricular mass compared to static training and no training, while static training increases ventricular mass only to the extent that lean body mass is increased.
Weighted Distances in Scale-Free Configuration Models
Adriaans, Erwin; Komjáthy, Júlia
2018-01-01
In this paper we study first-passage percolation in the configuration model with an empirical degree distribution that follows a power law with exponent τ ∈ (2,3). We assign independent and identically distributed (i.i.d.) weights to the edges of the graph. We investigate the weighted distance (the length of the shortest weighted path) between two uniformly chosen vertices, called the typical distance. When the underlying age-dependent branching process approximating the local neighborhoods of vertices produces infinitely many individuals in finite time (an explosive branching process), Baroni, Hofstad and the second author showed in Baroni et al. (J Appl Probab 54(1):146-164, 2017) that typical distances converge in distribution to a bounded random variable. The order of magnitude of typical distances remained open in the τ ∈ (2,3) case when the underlying branching process is not explosive. We close this gap by determining the first order of magnitude of typical distances in this regime for arbitrary, not necessarily continuous edge-weight distributions that produce a non-explosive age-dependent branching process with infinite-mean power-law offspring distribution. This sequence tends to infinity with the number of vertices and, by choosing an appropriate weight distribution, can be tuned to be any growing function that is O(log log n), where n is the number of vertices in the graph. We show that the result remains valid for the erased configuration model as well, where we delete loops and any second and further edges between two vertices.
Inverse odds ratio-weighted estimation for causal mediation analysis.
Tchetgen Tchetgen, Eric J
2013-11-20
An important scientific goal of studies in the health and social sciences is increasingly to determine to what extent the total effect of a point exposure is mediated by an intermediate variable on the causal pathway between the exposure and the outcome. A causal framework has recently been proposed for mediation analysis, which gives rise to new definitions, formal identification results and novel estimators of direct and indirect effects. In the present paper, the author describes a new inverse odds ratio-weighted approach to estimate so-called natural direct and indirect effects. The approach, which uses as a weight the inverse of an estimate of the odds ratio function relating the exposure and the mediator, is universal in that it can be used to decompose total effects in a number of regression models commonly used in practice. Specifically, the approach may be used for effect decomposition in generalized linear models with a nonlinear link function, and in a number of other commonly used models such as the Cox proportional hazards regression for a survival outcome. The approach is simple and can be implemented in standard software provided a weight can be specified for each observation. An additional advantage of the method is that it easily incorporates multiple mediators of a categorical, discrete or continuous nature. Copyright © 2013 John Wiley & Sons, Ltd.
Speckle Suppression by Weighted Euclidean Distance Anisotropic Diffusion
Directory of Open Access Journals (Sweden)
Fengcheng Guo
2018-05-01
To better reduce speckle noise while maintaining edge information in synthetic aperture radar (SAR) images, we propose a novel anisotropic diffusion algorithm using weighted Euclidean distance (WEDAD). Presented here is a modified speckle reducing anisotropic diffusion (SRAD) method, which constructs a new edge detection operator using weighted Euclidean distances. The new edge detection operator can adaptively distinguish between homogeneous and heterogeneous image regions, effectively generate anisotropic diffusion coefficients for each image pixel, and filter each pixel at different scales. Additionally, the effects of two different weighting methods (Gaussian weighting and non-linear weighting) on de-noising were analyzed, and the effect of different adjustment coefficient settings on speckle suppression was explored. A series of experiments were conducted using an image with added noise, a GF-3 SAR image, and a YG-29 SAR image. The experimental results demonstrate that the proposed method can not only significantly suppress speckle, improving the visual effect, but also better preserve the edge information of images.
On P-weight and P-distance inequalities
DEFF Research Database (Denmark)
Britz, Thomas Johann
2006-01-01
Jang and Park asked in [On a MacWilliams type identity and a perfectness for a binary linear (n, n - 1, j)-poset code, Discrete Math. 265 (2003) 85-104] whether, for each poset P = {1,..., n}, the P-weights and P-distances satisfy the inequalities w(P)(x) - w(P)(y)...
[Inverse probability weighting (IPW) for evaluating and "correcting" selection bias].
Narduzzi, Silvia; Golini, Martina Nicole; Porta, Daniela; Stafoggia, Massimo; Forastiere, Francesco
2014-01-01
Inverse probability weighting (IPW) is a methodology developed to account for missingness and selection bias caused by non-random selection of observations, or non-random lack of some information in a subgroup of the population. We provide an overview of the IPW methodology and an application in a cohort study of the association between exposure to traffic air pollution (nitrogen dioxide, NO₂) and children's IQ at age 7. The methodology corrects the analysis by weighting the observations by the inverse of the probability of being selected. IPW is based on the assumption that individual information that can predict the probability of inclusion (non-missingness) is available for the entire study population, so that, after taking account of it, inferences about the entire target population can be made from the non-missing observations alone. The procedure for the calculation is the following: first, considering the entire study population, the probability of non-missing information is estimated with a logistic regression model, where the response is non-missingness and the covariates are its possible predictors. The weight of each subject is given by the inverse of the predicted probability. The analysis is then performed only on the non-missing observations, using a weighted model. IPW is a technique that embeds the selection process in the analysis, but its effectiveness in "correcting" the selection bias depends on the availability of enough information, for the entire population, to predict the non-missingness probability. In the example proposed, the IPW application showed that the effect of exposure to NO₂ on children's verbal intelligence quotient is stronger than the effect estimated by an analysis that ignores the selection processes.
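The two-step procedure described above can be sketched on simulated data. For brevity this sketch uses the known selection probabilities directly rather than fitting the logistic regression step; the data and selection model are invented for illustration, not the NO₂/IQ cohort.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated population: a binary covariate x drives both the outcome y
# and the chance of being observed (selected).
x = rng.integers(0, 2, size=n)
y = 2.0 + 3.0 * x + rng.normal(0.0, 1.0, size=n)

# Selection model: x = 1 subjects are far more likely to be observed.
p_obs = np.where(x == 1, 0.9, 0.3)
observed = rng.random(n) < p_obs

true_mean = y.mean()             # target quantity (about 3.5)
naive_mean = y[observed].mean()  # biased toward the over-sampled x = 1 group

# IPW: weight each observed subject by the inverse of its selection
# probability (in practice, the predicted probability from a logistic
# regression of 'observed' on the covariates, as the abstract describes).
w = 1.0 / p_obs[observed]
ipw_mean = np.average(y[observed], weights=w)
```

The naive mean over-represents the highly selected group, while the weighted mean recovers the population value.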
Efficient Underwater RSS Value to Distance Inversion Using the Lambert Function
Directory of Open Access Journals (Sweden)
Majid Hosseini
2014-01-01
There are many applications for wireless sensor networks (WSN) in ocean science; however, identifying the exact location of a sensor by itself (localization) is still a challenging problem, since global positioning system (GPS) devices are not applicable underwater. Precise distance measurement between two sensors is a tool of localization, and received signal strength (RSS), reflecting transmission loss (TL) phenomena, is widely used for that purpose in terrestrial WSNs. It has not been used in underwater acoustic sensor networks (UASN), due to the complexity of the TL function. In this paper, we address these problems by expressing underwater TL via the Lambert W function, for accurate distance inversion by the Halley method, and compare this to Newton-Raphson inversion. Mathematical proof, MATLAB simulation, and real device implementation demonstrate the accuracy and efficiency of the proposed equation in distance calculation, with fewer iterations, computational stability for short and long distances, and remarkably short processing time. The sensitivities of the Lambert W function and Newton-Raphson inversion to alterations in TL were then examined. The simulation results showed that the Lambert W function is more stable to errors than Newton-Raphson inversion. Finally, with a likelihood method, it was shown that RSS is a practical tool for distance measurement in UASN.
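A hedged sketch of the closed-form inversion idea using SciPy's `lambertw`. The TL model below (logarithmic spreading plus linear absorption) and the parameter values are generic assumptions for illustration, not the paper's exact parametrization: writing TL = A·ln(d) + α·d with A = 10k/ln(10) gives (α/A)·e^(TL/A) = (αd/A)·e^(αd/A), so αd/A = W((α/A)·e^(TL/A)).

```python
import numpy as np
from scipy.special import lambertw

def transmission_loss(d, k=1.5, alpha=0.005):
    """Illustrative underwater TL (dB): spreading + absorption.
    k is the spreading factor, alpha the absorption coefficient (dB/m);
    both values are assumptions for this sketch."""
    return 10.0 * k * np.log10(d) + alpha * d

def distance_from_tl(tl, k=1.5, alpha=0.005):
    """Invert TL -> distance in closed form via the Lambert W function."""
    A = 10.0 * k / np.log(10.0)  # TL = A*ln(d) + alpha*d
    return (A / alpha) * lambertw((alpha / A) * np.exp(tl / A)).real
```

Round-tripping a known distance through `transmission_loss` and `distance_from_tl` recovers it to machine precision, with no iteration loop at the caller's level.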
Matching factorization theorems with an inverse-error weighting
Echevarria, Miguel G.; Kasemets, Tomas; Lansberg, Jean-Philippe; Pisano, Cristian; Signori, Andrea
2018-06-01
We propose a new fast method to match factorization theorems applicable in different kinematical regions, such as the transverse-momentum-dependent and the collinear factorization theorems in Quantum Chromodynamics. At variance with well-known approaches relying on their simple addition and subsequent subtraction of double-counted contributions, ours simply builds on their weighting using the theory uncertainties deduced from the factorization theorems themselves. This allows us to estimate the unknown complete matched cross section from an inverse-error-weighted average. The method is simple and provides an evaluation of the theoretical uncertainty of the matched cross section associated with the uncertainties from the power corrections to the factorization theorems (additional uncertainties, such as the nonperturbative ones, should be added for a proper comparison with experimental data). Its usage is illustrated with several basic examples, such as Z boson, W boson, H0 boson and Drell-Yan lepton-pair production in hadronic collisions, and compared to the state-of-the-art Collins-Soper-Sterman subtraction scheme. It is also not limited to the transverse-momentum spectrum, and can straightforwardly be extended to match any (un)polarized cross section differential in other variables, including multi-differential measurements.
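The inverse-error-weighted average at the heart of the method is a precision-weighted mean: each prediction enters with weight proportional to the inverse square of its uncertainty. A minimal numerical sketch:

```python
import numpy as np

def inverse_error_average(values, errors):
    """Combine predictions with weights w_i = 1/sigma_i**2.
    Returns the weighted mean and its propagated uncertainty
    sigma = 1/sqrt(sum(w_i))."""
    values = np.asarray(values, dtype=float)
    errors = np.asarray(errors, dtype=float)
    w = 1.0 / errors**2
    mean = np.sum(w * values) / np.sum(w)
    sigma = 1.0 / np.sqrt(np.sum(w))
    return mean, sigma
```

With equal uncertainties this reduces to the plain average; with unequal ones the more precise prediction dominates, which is how the matched cross section interpolates between the kinematical regions where each factorization theorem is reliable.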
Inverse probability weighting for covariate adjustment in randomized studies.
Shen, Changyu; Li, Xiaochun; Li, Lingling
2014-02-20
Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a 'favorable' model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions that enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed so that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented, along with an application of the proposed method to a real data example. Copyright © 2013 John Wiley & Sons, Ltd.
A new recoil distance technique using low energy coulomb excitation in inverse kinematics
Energy Technology Data Exchange (ETDEWEB)
Rother, W., E-mail: wolfram.rother@googlemail.com [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Dewald, A.; Pascovici, G.; Fransen, C.; Friessner, G.; Hackstein, M. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Ilie, G. [Wright Nuclear Structure Laboratory, Yale University, New Haven, CT 06520 (United States); National Institute of Physics and Nuclear Engineering, P.O. Box MG-6, Bucharest-Magurele (Romania); Iwasaki, H. [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States); Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Jolie, J. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Melon, B. [Dipartimento di Fisica, Universita di Firenze and INFN Sezione di Firenze, Sesto Fiorentino (Firenze) I-50019 (Italy); Petkov, P. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); INRNE-BAS, Sofia (Bulgaria); Pfeiffer, M. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Pissulla, Th. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Bundesumweltministerium, Robert-Schuman-Platz 3, D - 53175 Bonn (Germany); Zell, K.-O. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Jakobsson, U.; Julin, R.; Jones, P.; Ketelhut, S.; Nieminen, P.; Peura, P. [Department of Physics, University of Jyvaeskylae, P.O. Box 35, FI-40014 (Finland); and others
2011-10-21
We report on the first experiment combining the Recoil Distance Doppler Shift technique and multistep Coulomb excitation in inverse kinematics at beam energies of 3-10 A MeV. The setup involves a standard plunger device equipped with a degrader foil instead of the normally used stopper foil. An array of particle detectors is positioned at forward angles to detect target-like recoil nuclei, which are used as a trigger to discriminate against excitations in the degrader foil. The method has been successfully applied to measure lifetimes in ¹²⁸Xe and is suited to be a useful tool for experiments with radioactive ion beams.
Constructing inverse probability weights for continuous exposures: a comparison of methods.
Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S
2014-03-01
Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
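The normal-distribution construction assessed above can be sketched on simulated data (the confounder, exposure model, and sample size below are invented for illustration). The denominator is the conditional density of the exposure given covariates from a fitted linear model; the numerator is the marginal exposure density, which stabilizes the weights:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 50_000

# Simulated confounder L and continuous exposure A (illustrative data).
L = rng.normal(0.0, 1.0, size=n)
A = 0.5 * L + rng.normal(0.0, 1.0, size=n)

# Denominator: conditional density of A given L from a fitted linear model,
# assuming homoscedastic normal residuals.
b1, b0 = np.polyfit(L, A, 1)           # slope, intercept
resid = A - (b0 + b1 * L)
f_cond = norm.pdf(A, loc=b0 + b1 * L, scale=resid.std())

# Numerator: marginal density of A.
f_marg = norm.pdf(A, loc=A.mean(), scale=A.std())

sw = f_marg / f_cond  # stabilized inverse probability weights
```

When the conditional model is correctly specified, the stabilized weights average approximately one, a standard diagnostic for weight construction.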
Application of weighted early-arrival waveform inversion to shallow land data
Yu, Han; Zhang, Dongliang; Wang, Xin
2014-01-01
predictions and shows that the effects of noise and unpredicted amplitude variations in the inversion are reduced using this weighted early arrival waveform inversion (WEWI). We also apply this method to a 2D land data set for estimating the near
Determination of the complexity of distance weights in Mexican city systems
Directory of Open Access Journals (Sweden)
Igor Lugo
2017-03-01
This study tests distance weights based on the economic geography assumption of straight lines and the complex networks approach of empirical road segments in the Mexican system of cities to determine the best distance specification. We generated network graphs by using geospatial data and computed weights by measuring shortest paths, thereby characterizing their probability distributions and comparing them with spatial null models. Findings show that distributions are sufficiently different and are associated with asymmetrical beta distributions. Straight lines over- and underestimated distances compared to the empirical data, and they showed compatibility with random models. Therefore, accurate distance weights depend on the type of the network specification.
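The two distance specifications compared in the study, straight lines versus shortest paths over road segments, can be illustrated on a toy network. The coordinates and edge lengths below are invented for the sketch; the point is only that network (shortest-path) distances are never shorter than the straight line between the same nodes:

```python
import heapq
import math

# Hypothetical road network: nodes have (x, y) positions,
# edges carry road-segment lengths.
coords = {"A": (0, 0), "B": (4, 0), "C": (4, 3), "D": (0, 3)}
edges = {"A": [("B", 4.0), ("D", 3.0)],
         "B": [("A", 4.0), ("C", 3.0)],
         "C": [("B", 3.0), ("D", 4.0)],
         "D": [("A", 3.0), ("C", 4.0)]}

def dijkstra(src, dst):
    """Shortest network (road) distance between two nodes."""
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v, w in edges[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return math.inf

def straight_line(u, v):
    """Euclidean (economic-geography) distance between two nodes."""
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.hypot(x2 - x1, y2 - y1)
```

Here the road distance A to C is 7.0 while the straight line is 5.0, the kind of systematic gap the study quantifies at the scale of a national city system.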
Signed distance computation using the angle weighted pseudonormal
DEFF Research Database (Denmark)
Bærentzen, Jakob Andreas; Aanæs, Henrik
2005-01-01
, the surface is not C^1 continuous; hence, the normal is undefined at these loci. In this paper, we undertake to show that the angle weighted pseudonormal (originally proposed by Thurmer and Wuthrich and independently by Sequin) has the important property that it allows us to discriminate between points...
Application of the Inverse Square Law distance in conventional radiology and mammography
International Nuclear Information System (INIS)
Hoff, Gabriela; Lima, Nathan Willig
2014-01-01
The Inverse Square Law (ISL) is a mathematical rule widely used to adjust KERMA and exposure values to different focal-spot distances, taking a determined point in space as reference. Taking into account the limitations of this law, our main objective is to verify the applicability of the ISL for determining exposure in the radiodiagnostic range (tube voltages between 30 kVp and 150 kVp). Experimental data were collected, and deterministic calculation and Monte Carlo simulation (Geant4 toolkit) were applied. The experimental data were collected using a calibrated TNT 12000 ionization chamber from Fluke. The conventional X-ray equipment was a Siemens Multix Top, with a Tungsten track and total filtration equivalent to 2.5 mm of Aluminum; the mammographic equipment was a Siemens Mammomat Inspiration, with track-additional filtration combinations of Molybdenum-Molybdenum (25 μm), Molybdenum-Rhodium (30 μm), and Tungsten-Rhodium (50 μm). Both units passed the Quality Control tests required by Brazilian regulations. In conventional radiology the measurements were performed at focal spot-detector distances (FsDD) of 40 cm, 50 cm, 60 cm, 70 cm, 80 cm, 90 cm and 100 cm for peak voltages of 66 kVp, 81 kVp and 125 kVp. In mammography the measurements were performed at FsDD of 60 cm, 50 cm, 40 cm and 26 cm for peak voltages of 25 kVp, 30 kVp and 35 kVp. Based on the results, it is possible to conclude that the ISL performs worse for mammography spectra (leading to larger errors in the estimated values), but it can cause significant impact in both areas, depending on the spectral energy and the distance to be corrected. (author)
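The ISL adjustment being tested is a one-line formula; a minimal sketch (the paper's point is precisely that this idealization degrades for real, filtered spectra, especially mammographic ones):

```python
def isl_scale(k_ref, d_ref, d_new):
    """Scale air kerma (or exposure) measured at distance d_ref from the
    focal spot to a new distance d_new, assuming a point source with no
    scatter or filtration effects: K_new = K_ref * (d_ref / d_new)**2."""
    return k_ref * (d_ref / d_new) ** 2
```

Doubling the distance quarters the kerma under this model; the study's measurements quantify how far real beams deviate from it.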
High Girth Column-Weight-Two LDPC Codes Based on Distance Graphs
Directory of Open Access Journals (Sweden)
Gabofetswe Malema
2007-01-01
LDPC codes with column weight two are constructed from minimal distance graphs, or cages. Distance graphs are used to represent LDPC code matrices such that graph vertices represent rows and edges represent columns. The conversion of a distance graph into matrix form produces a matrix with column weight two and girth double that of the graph. The number of 1s in each row (the row weight) equals the degree of the corresponding vertex. By constructing graphs with different vertex degrees, we can vary the rate of the corresponding LDPC code matrices. Cage graphs are used as examples of distance graphs to design codes with different girths and rates. The performance of the obtained codes depends on the girth and structure of the corresponding distance graphs.
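The construction described (vertices as rows, edges as columns, exactly two 1s per column) is the vertex-edge incidence matrix of the graph. A minimal sketch on a 5-cycle, whose graph girth of 5 yields a code girth of 10:

```python
import numpy as np

def incidence_matrix(n_vertices, edge_list):
    """Vertex-edge incidence matrix: rows = vertices, columns = edges.
    Every column has exactly two 1s, i.e. a column-weight-2 LDPC
    parity-check matrix; each row weight equals the vertex degree."""
    H = np.zeros((n_vertices, len(edge_list)), dtype=int)
    for j, (u, v) in enumerate(edge_list):
        H[u, j] = 1
        H[v, j] = 1
    return H

# Example distance graph: the 5-cycle (every vertex has degree 2).
cycle5 = [(i, (i + 1) % 5) for i in range(5)]
H = incidence_matrix(5, cycle5)
```

Swapping in a cage with higher vertex degree raises the row weight, and hence the code rate, exactly as the abstract describes.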
Babier, Aaron; Boutilier, Justin J.; Sharpe, Michael B.; McNiven, Andrea L.; Chan, Timothy C. Y.
2018-05-01
We developed and evaluated a novel inverse optimization (IO) model to estimate objective function weights from clinical dose-volume histograms (DVHs). These weights were used to solve a treatment planning problem to generate ‘inverse plans’ that had similar DVHs to the original clinical DVHs. Our methodology was applied to 217 clinical head and neck cancer treatment plans that were previously delivered at Princess Margaret Cancer Centre in Canada. Inverse plan DVHs were compared to the clinical DVHs using objective function values, dose-volume differences, and frequency of clinical planning criteria satisfaction. Median differences between the clinical and inverse DVHs were within 1.1 Gy. For most structures, the difference in clinical planning criteria satisfaction between the clinical and inverse plans was at most 1.4%. For structures where the two plans differed by more than 1.4% in planning criteria satisfaction, the difference in average criterion violation was less than 0.5 Gy. Overall, the inverse plans were very similar to the clinical plans. Compared with a previous inverse optimization method from the literature, our new inverse plans typically satisfied the same or more clinical criteria, and had consistently lower fluence heterogeneity. Overall, this paper demonstrates that DVHs, which are essentially summary statistics, provide sufficient information to estimate objective function weights that result in high quality treatment plans. However, as with any summary statistic that compresses three-dimensional dose information, care must be taken to avoid generating plans with undesirable features such as hotspots; our computational results suggest that such undesirable spatial features were uncommon. Our IO-based approach can be integrated into the current clinical planning paradigm to better initialize the planning process and improve planning efficiency. It could also be embedded in a knowledge-based planning or adaptive radiation therapy framework to
Directory of Open Access Journals (Sweden)
Lei Chen
2018-01-01
Conflict management in Dempster-Shafer theory (D-S theory) is a hot topic in information fusion. In this paper, a novel weighted evidence combination rule based on evidence distance and an uncertainty measure is proposed. The proposed approach consists of two steps. First, the weight is determined based on the evidence distance. Then, the weight value obtained in the first step is modified by taking advantage of the uncertainty measure. Our proposed method can efficiently handle highly conflicting evidence with better convergence performance. A numerical example and an application based on sensor fusion in fault diagnosis are given to demonstrate the efficiency of the proposed method.
Brown, Malcolm
2009-01-01
Inversions are fascinating phenomena. They are reversals of the normal or expected order. They occur across a wide variety of contexts. What do inversions have to do with learning spaces? The author suggests that they are a useful metaphor for the process that is unfolding in higher education with respect to education. On the basis of…
Fukuda, J.; Johnson, K. M.
2009-12-01
Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is not strong a priori information on the geometry of the fault that produced the earthquake and the data is not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through combination of both analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress
Nguyen, Quynh C.; Osypuk, Theresa L.; Schmidt, Nicole M.; Glymour, M. Maria; Tchetgen Tchetgen, Eric J.
2015-01-01
Despite the recent flourishing of mediation analysis techniques, many modern approaches are difficult to implement or applicable to only a restricted range of regression models. This report provides practical guidance for implementing a new technique utilizing inverse odds ratio weighting (IORW) to estimate natural direct and indirect effects for mediation analyses. IORW takes advantage of the odds ratio's invariance property and condenses information on the odds ratio for the relationship be...
Effect of marital distance on birth weight and length of offspring
Directory of Open Access Journals (Sweden)
Kozieł Sławomir
2017-09-01
Marital distance (MD), the geographical distance between the birthplaces of spouses, is considered an agent favouring the occurrence of heterosis and can be used as a measure of its level. Heterosis itself is the phenomenon of hybrid vigour and seems to be an important factor regulating human growth and development. The main aim of the study is to examine potential effects of MD on the birth weight and length of offspring, controlling for socioeconomic status (SES), mother's age and birth order. Birth weight (2562 boys and 2572 girls) and length (2526 boys, 2542 girls) of children born in Ostrowiec Swietokrzyski (Poland) in 1980, 1983, 1985 and 1988 were recorded during cross-sectional surveys carried out between 1994 and 1999. Data regarding the socio-demographic variables of families were provided by the parents. Analysis of covariance showed that MD significantly affected both birth weight and length, allowing for sex, birth order, mother's age and family SES. For both sexes, a greater marital distance was associated with a higher birth weight and a longer birth length. Our results support the hypothesis that a greater geographical distance between the birthplaces of parents may contribute to heterosis effects in offspring. Better birth outcomes may be one of the manifestations of these effects.
A distance weighted-based approach for self-organized aggregation in robot swarms
Khaldi, Belkacem; Harrou, Fouzi; Cherif, Foudil; Sun, Ying
2017-01-01
topology to keep the robots together. A distance-weighted function based on a Smoothed Particle Hydrodynamic (SPH) interpolation approach is used as a key factor to identify the K-Nearest neighbors taken into account when aggregating the robots. The intra
Cardiovascular responses to static exercise in distance runners and weight lifters
Longhurst, J. C.; Kelly, A. R.; Gonyea, W. J.; Mitchell, J. H.
1980-01-01
Three groups of athletes including long-distance runners, competitive and amateur weight lifters, and age- and sex-matched control subjects have been studied by hemodynamic and echocardiographic methods in order to determine the effect of the training programs on the cardiovascular response to static exercise. Blood pressure, heart rate, and double product data at rest and at fatigue suggest that competitive endurance (dynamic exercise) training alters the cardiovascular response to static exercise. In contrast to endurance exercise, weight lifting (static exercise) training does not alter the cardiovascular response to static exercise: weight lifters responded to static exercise in a manner very similar to that of the control subjects.
Tree-average distances on certain phylogenetic networks have their weights uniquely determined.
Willson, Stephen J
2012-01-01
A phylogenetic network N has vertices corresponding to species and arcs corresponding to direct genetic inheritance from the species at the tail to the species at the head. Measurements of DNA are often made on species in the leaf set, and one seeks to infer properties of the network, possibly including the graph itself. In the case of phylogenetic trees, distances between extant species are frequently used to infer the phylogenetic trees by methods such as neighbor-joining. This paper proposes a tree-average distance for networks more general than trees. The notion requires a weight on each arc measuring the genetic change along the arc. For each displayed tree the distance between two leaves is the sum of the weights along the path joining them. At a hybrid vertex, each character is inherited from one of its parents. We will assume that for each hybrid there is a probability that the inheritance of a character is from a specified parent. Assume that the inheritance events at different hybrids are independent. Then for each displayed tree there will be a probability that the inheritance of a given character follows the tree; this probability may be interpreted as the probability of the tree. The tree-average distance between the leaves is defined to be the expected value of their distance in the displayed trees. For a class of rooted networks that includes rooted trees, it is shown that the weights and the probabilities at each hybrid vertex can be calculated given the network and the tree-average distances between the leaves. Hence these weights and probabilities are uniquely determined. The hypotheses on the networks include that hybrid vertices have indegree exactly 2 and that vertices that are not leaves have a tree-child.
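The tree-average distance itself is an expectation over the displayed trees. A toy sketch for a network with a single hybrid vertex (the path lengths and parent probability below are illustrative numbers, not from the paper):

```python
def tree_average_distance(displayed, probs):
    """Expected leaf-to-leaf distance over the displayed trees of a network.

    displayed: list of path-length sums between the two leaves, one per
               displayed tree; probs: probability that character inheritance
               follows each tree (independent choices at each hybrid vertex,
               so these sum to 1).
    """
    assert abs(sum(probs) - 1.0) < 1e-12, "tree probabilities must sum to 1"
    return sum(d * p for d, p in zip(displayed, probs))
```

With one hybrid vertex inheriting from its first parent with probability 0.7, and leaf-to-leaf distances 5.0 and 8.0 in the two displayed trees, the tree-average distance is 0.7*5.0 + 0.3*8.0 = 5.9. The paper's result is the converse direction: from such tree-average distances, the arc weights and hybrid probabilities are uniquely recoverable for the stated class of networks.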
Borges, Juliano H; Carter, Stephen J; Singh, Harshvardhan; Hunter, Gary R
2018-05-16
The aims of this study were to: (1) determine the relationships between maximum oxygen uptake (V̇O2max) and walking economy during non-graded and graded walking among overweight women and (2) examine potential differences in V̇O2max and walking economy before and after weight loss. One hundred and twenty-four premenopausal women with a body mass index (BMI) between 27 and 30 kg/m² were randomly assigned to one of three groups: (a) diet only; (b) diet and aerobic exercise training; and (c) diet and resistance exercise training. All were furnished with a standard, very-low-calorie diet to reduce BMI to < 25 kg/m². V̇O2max was measured using a modified Bruce protocol, while walking economy (net V̇O2) was obtained during fixed-speed (4.8 km·h⁻¹), steady-state treadmill walking at 0% grade and 2.5% grade. Assessments were conducted before and after achieving the target BMI. Prior to weight loss, V̇O2max was inversely related (P < 0.05) to non-graded and graded walking economy (r = -0.28 to -0.35). Similar results were observed following weight loss (r = -0.22 to -0.28). Additionally, we detected a significant inverse relationship (P < 0.05) between the changes (Δ, after weight loss) in ΔV̇O2max, adjusted for fat-free mass, and non-graded and graded Δwalking economy (r = -0.37 to -0.41). Our results demonstrate that V̇O2max and walking economy are inversely related (cross-sectionally) before and after weight loss. Importantly, ΔV̇O2max and Δwalking economy were also found to be inversely related, suggesting a strong synchrony between maximal aerobic capacity and the metabolic cost of exercise.
Amalia, Junita; Purhadi; Otok, Bambang Widjanarko
2017-11-01
The Poisson distribution is a discrete distribution for count data with a single parameter that defines both the mean and the variance. Poisson regression therefore assumes that the mean and variance are equal (equidispersion). Nonetheless, some count data violate this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion leads to underestimated standard errors and, consequently, to incorrect decisions in statistical tests. Paired count data are correlated and follow a bivariate Poisson distribution. When over-dispersion is present, simple bivariate Poisson regression is not sufficient for modeling paired count data. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling over-dispersed paired count data. The BPIGR model produces a single global model for all locations. On the other hand, each location has different geographic, social, cultural, and economic conditions, so Geographically Weighted Regression (GWR) is needed. The weighting function at each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. Parameter estimation of the GWBPIGR model is obtained by the Maximum Likelihood Estimation (MLE) method, while hypothesis testing of the GWBPIGR model is carried out by the Maximum Likelihood Ratio Test (MLRT) method.
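The abstract does not specify the weighting function; a common choice in GWR-type models is a Gaussian kernel of geographic distance, which gives each regression location its own set of observation weights. The sketch below illustrates only that idea; the coordinates and bandwidth are invented.

```python
import numpy as np

# Hedged sketch of a geographical weighting function as commonly used in GWR:
# observation j's weight at regression location i decays with distance d_ij.
# The Gaussian kernel and the bandwidth value are assumptions for illustration.
def gwr_weights(coords, i, bandwidth):
    d = np.linalg.norm(coords - coords[i], axis=1)   # distances to location i
    return np.exp(-0.5 * (d / bandwidth) ** 2)       # Gaussian decay

coords = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])  # toy locations (d = 0, 5, 10)
w = gwr_weights(coords, 0, bandwidth=5.0)
print(w)  # weight 1.0 at the location itself, decaying with distance
```

Each location's weight vector would then enter a locally weighted fit (here, of the BPIGR likelihood), producing the different local models the abstract describes.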
Lippman, Sheri A.; Shade, Starley B.; Hubbard, Alan E.
2011-01-01
Background Intervention effects estimated from non-randomized intervention studies are plagued by biases, yet social or structural intervention studies are rarely randomized. There are underutilized statistical methods available to mitigate biases due to self-selection, missing data, and confounding in longitudinal, observational data, permitting estimation of causal effects. We demonstrate the use of Inverse Probability Weighting (IPW) to evaluate the effect of participating in a combined clinical and social STI/HIV prevention intervention on the reduction of incident chlamydia and gonorrhea infections among sex workers in Brazil. Methods We demonstrate the step-by-step use of IPW, including presentation of the theoretical background, data set-up, model selection for weighting, application of weights, estimation of effects using varied modeling procedures, and discussion of assumptions for the use of IPW. Results 420 sex workers contributed data on 840 incident chlamydia and gonorrhea infections. Participants were compared to non-participants following application of inverse probability weights to correct for differences in covariate patterns between exposed and unexposed participants and between those who remained in the intervention and those who were lost to follow-up. Estimators using four model selection procedures provided estimates of the intervention effect between odds ratio (OR) 0.43 (95% CI: 0.22-0.85) and 0.53 (95% CI: 0.26-1.1). Conclusions After correcting for selection bias, loss to follow-up, and confounding, our analysis suggests a protective effect of participating in the Encontros intervention. Evaluations of behavioral, social, and multi-level interventions to prevent STI can benefit from the introduction of weighting methods such as IPW. PMID:20375927
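The core IPW step described above can be illustrated on synthetic data: weight each subject by the inverse probability of the exposure actually received, then compare weighted outcome means. Everything below is invented for illustration; a real analysis would estimate the propensity scores from a model (e.g. logistic regression) rather than use known values.

```python
import numpy as np

# Minimal IPW sketch on synthetic data. A binary confounder drives both
# treatment uptake and the outcome; the true treatment effect is 1.0.
rng = np.random.default_rng(0)
n = 10_000
confounder = rng.binomial(1, 0.5, n)
p_treat = np.where(confounder == 1, 0.8, 0.2)        # treatment depends on confounder
treated = rng.binomial(1, p_treat)
outcome = 2.0 * confounder + 1.0 * treated + rng.normal(0, 1, n)

# Inverse-probability weights: 1/P(received the exposure actually received)
w = np.where(treated == 1, 1 / p_treat, 1 / (1 - p_treat))
effect = (np.average(outcome[treated == 1], weights=w[treated == 1])
          - np.average(outcome[treated == 0], weights=w[treated == 0]))
print(round(effect, 2))  # close to the true effect of 1.0 (naive contrast is biased upward)
```

Weighting creates a pseudo-population in which the confounder is balanced across exposure groups, which is what allows the simple weighted contrast to recover the causal effect.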
Inversion recovery RARE: Clinical application of T2-weighted CSF-suppressed rapid sequence
International Nuclear Information System (INIS)
Goetz, G.F.; Hennig, J.; Ziyeh, S.
1995-01-01
Inversion-Recovery RARE is a strongly T2-weighted fast sequence in which the CSF appears dark. This sequence was used in more than 100 patients. A retrospective analysis of 80 patients with cerebrovascular and inflammatory disease was carried out. The IR-RARE sequence proved particularly suitable for identifying small lesions in the neighbourhood of the subarachnoid space. We illustrate the typical contrast provided by this sequence and describe its characteristics, exemplifying the advantages it offers for the diagnosis of multiple sclerosis, cerebral microangiopathy and brain infarction. (orig.)
A distance weighted-based approach for self-organized aggregation in robot swarms
Khaldi, Belkacem
2017-12-14
In this paper, a Distance-Weighted K-Nearest Neighbor (DW-KNN) topology is proposed to study self-organized aggregation as an emergent swarming behavior within robot swarms. A virtual physics approach is applied over the proposed neighborhood topology to keep the robots together. A distance-weighted function based on a Smoothed Particle Hydrodynamics (SPH) interpolation approach is used as the key factor in identifying the K nearest neighbors taken into account when aggregating the robots. The virtual physical connectivity among these neighbors is achieved using a virtual viscoelastic-based proximity model. With the ARGoS-based simulator, we model and evaluate the proposed approach, showing various self-organized aggregations performed by a swarm of N foot-bot robots.
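The neighbor-selection idea can be sketched as follows. This is not the paper's exact formulation: the kernel shape, smoothing length h, and robot positions are all assumptions, chosen only to show K-nearest selection combined with an SPH-style distance weight.

```python
import math

# Illustrative DW-KNN sketch: pick the K nearest neighbors of a robot and
# assign each a smoothing-kernel weight that decays with distance, as in
# SPH-style interpolation. Kernel and h are assumed, not from the paper.
def dw_knn(position, others, k, h):
    ranked = sorted(others, key=lambda p: math.dist(position, p))[:k]
    # Gaussian-like weight; neighbors much farther than h contribute little
    return [(p, math.exp(-(math.dist(position, p) / h) ** 2)) for p in ranked]

robots = [(1.0, 0.0), (0.0, 2.0), (5.0, 5.0), (0.5, 0.5)]
neighbors = dw_knn((0.0, 0.0), robots, k=2, h=2.0)
print(neighbors)  # the two closest robots, each with its distance weight
```

In the paper's setting these weighted neighbors would then feed the virtual viscoelastic proximity forces that hold the aggregate together.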
Radiation polymerization of acrylamide with super-high molecular weight in inverse emulsion
International Nuclear Information System (INIS)
Ye Qiang; Ge Xuewu; Xu Xiangling; Zhang Zhicheng
1998-01-01
The inverse emulsion polymerization of acrylamide initiated by γ-rays has been studied. Polyacrylamide with a super-high molecular weight of over ten million (11 × 10⁶), which is very important in applications as a flocculant, is obtained. In this work, several measures were taken to enhance the molecular weight: (1) To prepare soluble polyacrylamide with super-high molecular weight, the preferred conditions are an emulsifier content of about 2% and a monomer concentration of about 20%∼24% in the monomer emulsion, with an absorbed dose of about 500∼600 Gy. (2) Initiating at a high dose rate and polymerizing at a low dose rate not only enhances the molecular weight of the product but also shortens the polymerization time. (3) Stopping irradiation when the conversion reaches about 10% and post-polymerizing outside the radiation source until the conversion reaches 82% yields polyacrylamide with super-high molecular weight and shortens the irradiation time as well.
Nguyen, Quynh C; Osypuk, Theresa L; Schmidt, Nicole M; Glymour, M Maria; Tchetgen Tchetgen, Eric J
2015-03-01
Despite the recent flourishing of mediation analysis techniques, many modern approaches are difficult to implement or applicable to only a restricted range of regression models. This report provides practical guidance for implementing a new technique utilizing inverse odds ratio weighting (IORW) to estimate natural direct and indirect effects for mediation analyses. IORW takes advantage of the odds ratio's invariance property and condenses information on the odds ratio for the relationship between the exposure (treatment) and multiple mediators, conditional on covariates, by regressing exposure on mediators and covariates. The inverse of the covariate-adjusted exposure-mediator odds ratio association is used to weight the primary analytical regression of the outcome on treatment. The treatment coefficient in such a weighted regression estimates the natural direct effect of treatment on the outcome, and indirect effects are identified by subtracting direct effects from total effects. Weighting renders treatment and mediators independent, thereby deactivating indirect pathways of the mediators. This new mediation technique accommodates multiple discrete or continuous mediators. IORW is easily implemented and is appropriate for any standard regression model, including quantile regression and survival analysis. An empirical example is given using data from the Moving to Opportunity (1994-2002) experiment, testing whether neighborhood context mediated the effects of a housing voucher program on obesity. Relevant Stata code (StataCorp LP, College Station, Texas) is provided. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Regularized Laplace-Fourier-Domain Full Waveform Inversion Using a Weighted l2 Objective Function
Jun, Hyunggu; Kwon, Jungmin; Shin, Changsoo; Zhou, Hongbo; Cogan, Mike
2017-03-01
Full waveform inversion (FWI) can be applied to obtain an accurate velocity model that contains important geophysical and geological information. FWI suffers from the local minimum problem when the starting model is not sufficiently close to the true model. Therefore, an accurate macroscale velocity model is essential for successful FWI, and Laplace-Fourier-domain FWI is appropriate for obtaining such a velocity model. However, conventional Laplace-Fourier-domain FWI remains an ill-posed and ill-conditioned problem, meaning that small errors in the data can result in large differences in the inverted model. This approach also suffers from certain limitations related to the logarithmic objective function. To overcome the limitations of conventional Laplace-Fourier-domain FWI, we introduce a weighted l2 objective function, instead of the logarithmic objective function, as the data-domain objective function, and we also introduce two different model-domain regularizations: first-order Tikhonov regularization and prior model regularization. The weighting matrix for the data-domain objective function is constructed to suitably enhance the far-offset information. Tikhonov regularization smoothes the gradient, and prior model regularization allows reliable prior information to be taken into account. Two hyperparameters are obtained through trial and error and used to control the trade-off and achieve an appropriate balance between the data-domain and model-domain gradients. The application of the proposed regularizations facilitates finding a unique solution via FWI, and the weighted l2 objective function ensures a more reasonable residual, thereby improving the stability of the gradient calculation. Numerical tests performed using the Marmousi synthetic dataset show that the use of the weighted l2 objective function and the model-domain regularizations significantly improves the Laplace-Fourier-domain FWI. Because the Laplace-Fourier-domain FWI is improved, the
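The two ingredients named above, a data-weighting matrix W and a first-order Tikhonov operator L, can be shown on a toy linear problem. This is only a sketch of the weighted-l2-plus-Tikhonov structure; the matrices are random stand-ins, not a waveform modeling operator.

```python
import numpy as np

# Toy illustration: solve min ||W(Gm - d)||^2 + lam * ||Lm||^2 in closed form.
# W puts heavier weight on some rows (standing in for far-offset traces);
# L is the first-difference (first-order Tikhonov) operator.
rng = np.random.default_rng(1)
G = rng.normal(size=(20, 10))                 # stand-in forward operator
m_true = np.linspace(0.0, 1.0, 10)            # smooth "velocity" model
d = G @ m_true + rng.normal(0, 0.01, 20)      # slightly noisy data

W = np.diag(np.linspace(1.0, 3.0, 20))        # heavier weight on later rows
L = np.diff(np.eye(10), axis=0)               # 9x10 first-difference operator
lam = 0.1

# Normal equations: (G^T W^T W G + lam L^T L) m = G^T W^T W d
A = G.T @ W.T @ W @ G + lam * L.T @ L
m_est = np.linalg.solve(A, G.T @ W.T @ W @ d)
print(np.round(m_est, 2))
```

The Tikhonov term pulls the solution toward smooth models, which is what stabilizes the otherwise ill-conditioned inversion; the weighting changes which residuals dominate the fit.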
Douk, Hamid Shafaei; Aghamiri, Mahmoud Reza; Ghorbani, Mahdi; Farhood, Bagher; Bakhshandeh, Mohsen; Hemmati, Hamid Reza
2018-01-01
The aim of this study is to evaluate the accuracy of the inverse square law (ISL) method for determining the location of the virtual electron source (SVir) in a Siemens Primus linac. Various experimental methods have been presented for determining virtual and effective electron source locations, such as the Full Width at Half Maximum (FWHM), Multiple Coulomb Scattering (MCS), Multi Pinhole Camera (MPC), and Inverse Square Law (ISL) methods. Among these, the inverse square law is the most commonly used. First, the Siemens Primus linac was simulated using the MCNPX Monte Carlo code. Then, using dose profiles obtained from the Monte Carlo simulations, the location of SVir was calculated for 5, 7, 8, 10, 12 and 14 MeV electron energies and 10 cm × 10 cm, 15 cm × 15 cm, 20 cm × 20 cm and 25 cm × 25 cm field sizes. Additionally, the location of SVir was obtained by the ISL method for the same electron energies and field sizes. Finally, the values obtained by the ISL method were compared with those resulting from the Monte Carlo simulation. The findings indicate that the calculated SVir values depend on beam energy and field size. For a given energy, the distance of SVir increases with field size in most cases. Furthermore, for a given applicator, the distance of SVir increases with electron energy in most cases. The variation of SVir with field size at a given energy is greater than its variation with electron energy at a given field size. According to the results, the ISL method can be considered a good method for calculating the SVir location at higher electron energies (14 MeV).
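The ISL method rests on the standard relation D = D0 · (SSDvir / (SSDvir + g))², so sqrt(D0/D) is linear in the air gap g and a straight-line fit recovers the virtual source distance. The readings below are synthetic, generated from an assumed virtual source distance of 95 cm purely to show the fit.

```python
import numpy as np

# ISL sketch with synthetic ionization readings (all numbers invented):
# sqrt(D0/D) = 1 + g / SSD_vir, so the fitted slope gives 1 / SSD_vir.
ssd_vir_true = 95.0                                      # assumed, cm
gaps = np.array([0.0, 5.0, 10.0, 15.0, 20.0])            # extra air gap, cm
dose = (ssd_vir_true / (ssd_vir_true + gaps)) ** 2       # relative readings

y = np.sqrt(dose[0] / dose)                              # linear in the gap
slope, intercept = np.polyfit(gaps, y, 1)                # least-squares line
ssd_vir_est = 1.0 / slope
print(round(ssd_vir_est, 1))  # recovers the assumed 95.0 cm
```

With real measurements the points scatter about the line, and the quality of the linear fit is one practical indicator of how well the ISL assumption holds for a given energy and field size.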
Application of weighted early-arrival waveform inversion to shallow land data
Yu, Han
2014-03-01
Seismic imaging of deep land targets is usually difficult because near-surface velocities are not accurately estimated. Recent studies have shown that inverting traces weighted by the energy of the early arrivals can improve the accuracy of estimated shallow velocities. In this work, this is explained by showing that the associated misfit gradient tends to be sensitive to the kinematics of wave propagation and insensitive to the dynamics. A synthetic example verifies the theoretical predictions and shows that the effects of noise and unpredicted amplitude variations on the inversion are reduced using this weighted early-arrival waveform inversion (WEWI). We also apply the method to a 2D land data set to estimate the near-surface velocity distribution. The reverse time migration images suggest that, compared to the tomogram inverted directly from the early-arrival waveforms, the WEWI tomogram provides a more convincing velocity model and more focused reflections in the deeper part of the image. © 2014 Elsevier B.V.
Prasetiyowati, S. S.; Sibaroni, Y.
2018-03-01
Dengue hemorrhagic fever is a disease caused by the dengue virus of the genus Flavivirus, family Flaviviridae. Indonesia is the country with the highest number of dengue cases in Southeast Asia. In addition to mosquitoes as vectors and humans as hosts, environmental and social factors also contribute to the spread of dengue fever. Preventing an epidemic of the disease requires fast and accurate action, which in turn requires appropriate information about the occurrence of the epidemic. Therefore, complete and accurate information on the spread pattern of endemic areas is necessary so that precautions can be taken as early as possible. Information on dispersal patterns can be obtained by various methods based on empirical and theoretical considerations; one of them estimates the number of infected patients in a region over space and time. The first step of this research predicts the number of DHF patients in 2016 through 2018 based on data from 2010 to 2015 using GSTAR(1,1). In the second phase, the distribution pattern of dengue-affected areas is predicted. Furthermore, based on the trend of the DHF epidemic (falling, stable, or rising), the distribution patterns of dengue-affected areas were analyzed with IDW and Kriging (ordinary and universal Kriging). The difference between IDW and Kriging is the initial process that underlies the prediction. The experimental results show that the dispersion patterns of epidemic areas obtained with IDW and ordinary Kriging are similar over the study period.
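IDW itself (used here and in several of the entries above) is compact enough to state in full: the estimate at an unsampled point is a weighted mean of the observations, with weights proportional to 1/d^p. The power p = 2 is the common default, and the sample values below are invented for illustration.

```python
import numpy as np

# Minimal inverse distance weighting (IDW) sketch; p and the data are assumptions.
def idw(x0, xs, zs, p=2.0, eps=1e-12):
    d = np.linalg.norm(xs - x0, axis=1)
    if d.min() < eps:                      # exactly on a sample point: return it
        return float(zs[d.argmin()])
    w = 1.0 / d ** p                       # nearby samples dominate
    return float(np.sum(w * zs) / np.sum(w))

xs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # sample locations
zs = np.array([10.0, 20.0, 30.0])                     # observed values
print(idw(np.array([0.2, 0.2]), xs, zs))  # dominated by the nearest sample (10.0)
```

Unlike Kriging, IDW uses no model of spatial covariance; this is the "initial process" difference the abstract alludes to, since Kriging first fits a variogram before weighting.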
Exploring the Subtleties of Inverse Probability Weighting and Marginal Structural Models.
Breskin, Alexander; Cole, Stephen R; Westreich, Daniel
2018-05-01
Since being introduced to epidemiology in 2000, marginal structural models have become a commonly used method for causal inference in a wide range of epidemiologic settings. In this brief report, we aim to explore three subtleties of marginal structural models. First, we distinguish marginal structural models from the inverse probability weighting estimator, and we emphasize that marginal structural models are not only for longitudinal exposures. Second, we explore the meaning of the word "marginal" in "marginal structural model." Finally, we show that the specification of a marginal structural model can have important implications for the interpretation of its parameters. Each of these concepts has important implications for the use and understanding of marginal structural models, and thus providing detailed explanations of them may lead to better practices for the field of epidemiology.
Bonsu, Kwadwo Osei; Owusu, Isaac Kofi; Buabeng, Kwame Ohene; Reidpath, Daniel D; Kadirvelu, Amudha
2017-04-01
Randomized controlled trials of statins have not demonstrated significant benefits in outcomes of heart failure (HF). However, randomized controlled trials may not always be generalizable. The aim was to determine whether statin treatment, and statin type (lipophilic or hydrophilic), improves long-term outcomes in Africans with HF. This was a retrospective longitudinal study of HF patients aged ≥18 years hospitalized at a tertiary healthcare center between January 1, 2009 and December 31, 2013 in Ghana. Patients were eligible if they were discharged from a first admission for HF (index admission) and followed up to the time of all-cause, cardiovascular, or HF mortality or the end of the study. A multivariable time-dependent Cox model and inverse-probability-of-treatment weighting of a marginal structural model were used to estimate associations between statin treatment and outcomes. Adjusted hazard ratios were also estimated for lipophilic and hydrophilic statin use compared with no statin use. The study included 1488 patients (mean age 60.3±14.2 years) with 9306 person-years of observation. Using the time-dependent Cox model, the 5-year adjusted hazard ratios with 95% CI for statin treatment on all-cause, cardiovascular, and HF mortality were 0.68 (0.55-0.83), 0.67 (0.54-0.82), and 0.63 (0.51-0.79), respectively. Use of inverse-probability-of-treatment weighting resulted in estimates of 0.79 (0.65-0.96), 0.77 (0.63-0.96), and 0.77 (0.61-0.95) for statin treatment on all-cause, cardiovascular, and HF mortality, respectively, compared with no statin use. Among Africans with HF, statin treatment was associated with a significant reduction in mortality. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
Ialongo, S.; Cella, F.; Fedi, M.; Florio, G.
2011-12-01
Most geophysical inversion problems are characterized by a number of data considerably higher than the number of unknown parameters, which corresponds to solving highly underdetermined systems. To obtain a unique solution, a priori information must therefore be introduced. We here analyze the inversion of the gravity gradient tensor (GGT). Previous approaches to inverting several gradient components, jointly or independently, are those of Li (2001), who proposed an algorithm using a depth weighting function, and Zhdanov et al. (2004), who provided a well-focused inversion of gradient data. Both methods give a much-improved solution compared with the minimum-length solution, which is invariably shallow and not representative of the true source distribution. For very underdetermined problems, this feature is due to the role of the depth weighting matrices used by both methods. Recently, however, Cella and Fedi (2011) showed that for magnetic and gravity data the depth weighting function has to be defined carefully, through a preliminary application of the Euler Deconvolution or Depth from Extreme Points methods, which yields the appropriate structural index that is then used as the decay rate of the weighting function. We therefore propose to extend this approach to inverting the GGT, jointly or independently, using the structural index as the decay rate of the weighting function. In the case of a joint inversion, gravity data can be added as well. This multicomponent case is also relevant because the simultaneous use of several components and gravity data increases the number of data and reduces the algebraic ambiguity compared to the inversion of a single component. The reduction of this ambiguity was shown in Fedi et al. (2005) to be decisive in obtaining improved depth resolution in inverse problems, independently of any form of depth weighting function. The method is demonstrated on synthetic cases and applied to real cases, such as the Vredefort impact area (South Africa), characterized by a complex density
Tumour nuclear oestrogen receptor beta 1 correlates inversely with parathyroid tumour weight.
Haglund, Felix; Rosin, Gustaf; Nilsson, Inga-Lena; Juhlin, C Christofer; Pernow, Ylva; Norenstedt, Sophie; Dinets, Andrii; Larsson, Catharina; Hartman, Johan; Höög, Anders
2015-03-01
Primary hyperparathyroidism (PHPT) is a common endocrinopathy, frequently caused by a parathyroid adenoma, rarely by a parathyroid carcinoma that lacks effective oncological treatment. As the majority of cases are present in postmenopausal women, oestrogen signalling has been implicated in the tumourigenesis. Oestrogen receptor beta 1 (ERB1) and ERB2 have been recently identified in parathyroid adenomas, the former inducing genes coupled to tumour apoptosis. We applied immunohistochemistry and slide digitalisation to quantify nuclear ERB1 and ERB2 in 172 parathyroid adenomas, atypical adenomas and carcinomas, and ten normal parathyroid glands. All the normal parathyroid glands expressed ERB1 and ERB2. The majority of tumours expressed ERB1 (70.6%) at varying intensities, and ERB2 (96.5%) at strong intensities. Parathyroid carcinomas expressed ERB1 in three out of six cases and ERB2 in five out of six cases. The intensity of tumour nuclear ERB1 staining significantly correlated inversely with tumour weight (P=0.011), and patients whose tumours were classified as ERB1-negative had significantly greater tumour weight as well as higher serum calcium (P=0.002) and parathyroid hormone levels (P=0.003). Additionally, tumour nuclear ERB1 was not expressed differentially with respect to sex or age of the patient. Levels of tumour nuclear ERB2 did not correlate with clinical characteristics. In conclusion, decreased ERB1 immunoreactivity is associated with increased tumour weight in parathyroid adenomas. Given the previously reported correlation with tumour-suppressive signalling, selective oestrogen receptor modulation (SERMs) may play a role in the treatment of parathyroid carcinomas. Future studies of SERMs and oestrogen treatment in PHPT should consider tumour weight as a potential factor in pharmacological responsiveness. © 2015 The authors.
Convers, Jaime; Custodio, Susana
2016-04-01
Rapid assessment of seismological parameters pertinent to the nucleation and rupture of earthquakes is now routinely performed by local and regional seismic networks. With the increasing number of stations, fast data transmission, and advanced computing power, we can now go beyond accurate magnitudes and epicentral locations to rapid estimation of higher-order earthquake parameters such as the seismic moment tensor. Although an increased number of stations can minimize azimuthal gaps, it also increases computation time and potentially introduces poor-quality data that often lowers the stability of automated inversions. In this presentation, we focus on moment tensor calculations for earthquakes occurring offshore the southwestern Iberian Peninsula. The available regional seismic data in this region have a significant azimuthal gap that results from the geographical setting. In this case, increasing the number of data from stations spanning a small area (and a small azimuthal angle) increases the calculation time without necessarily improving the accuracy of the inversion. Additionally, the limited regional data coverage makes it imperative to exclude poor-quality data, as their negative effect on moment tensor inversions is often significant. In our work, we analyze methods to minimize the effects of large azimuthal gaps in regional station coverage, of potential bias by uneven station distribution, and of poor data quality in moment tensor inversions obtained for earthquakes offshore the southwestern Iberian Peninsula. We calculate moment tensors using the KIWI tools, and we implement different configurations of station weighting and cross-correlation of neighboring stations, with the aim of automatically estimating and selecting high-quality data, improving the accuracy of results, and reducing the computation time of moment tensor inversions. As the available recent intermediate-size events offshore the Iberian peninsula is limited due to the long
International Nuclear Information System (INIS)
Švanda, Michal
2013-01-01
The consistency of time-distance inversions for horizontal components of the plasma flow on supergranular scales in the upper solar convection zone is checked by comparing the results derived using two k-ω filtering procedures—ridge filtering and phase-speed filtering—commonly used in time-distance helioseismology. I show that both approaches result in similar flow estimates when finite-frequency sensitivity kernels are used. I further demonstrate that the performance of the inversion improves (in terms of a simultaneously better averaging kernel and a lower noise level) when the two approaches are combined together in one inversion. Using the combined inversion, I invert for horizontal flows in the upper 10 Mm of the solar convection zone. The flows connected with supergranulation seem to be coherent only for the top ∼5 Mm; deeper down there is a hint of change of the convection scales toward structures larger than supergranules
Inversion of Orkney M5.5 earthquake South Africa using strain meters at very close distances
Yasutomi, T.; Mori, J. J.; Yamada, M.; Ogasawara, H.; Okubo, M.; Ogasawara, H.; Ishida, A.
2017-12-01
The largest event recorded in a South African gold mining region, a M5.5 earthquake, took place near Orkney on 5 August 2014. The mainshock and aftershocks were recorded by 46 geophones at 2-3 km depth, 3 Ishii borehole strainmeters at 2.8-2.9 km depth, and 17 surface strong-motion instruments at close distances. The upper edge of the planar distribution of aftershock activity dips almost vertically and was only several hundred meters below the sites where the strainmeters were installed. In addition to the seismic data, drilling across this fault is now in progress (June 2017 to December 2017) and will contribute valuable geological and stress information. Although the geophone data were saturated during the mainshock, the strainmeters recorded clear near-field waveforms. We try to model the source of the M5.5 mainshock using the near-field strainmeter data. Two strainmeters are located at the same site at 2.8 km depth; the remaining one is at 2.9 km depth, only 150 m away. The strainmeter at 2.9 km depth recorded a large stable strain, whereas those at 2.8 km depth recorded stable strains three to four times smaller. These data indicate that the distance between the M5.5 fault and the strainmeter at 2.9 km depth is on the order of a few hundred meters. The strain Green's functions were calculated assuming an infinite medium and using a finite difference method. We use small aftershocks to verify the Green's functions; matching of the waveforms for the small events validates the Green's functions used for the mainshock inversion. We present a model of the source rupture using these strain data. The near-field data provide good resolution of the nearby earthquake rupture. There are two large subevents, one near the hypocenter and a second several hundred meters to the west.
Classification of EEG Signals using adaptive weighted distance nearest neighbor algorithm
Directory of Open Access Journals (Sweden)
E. Parvinnia
2014-01-01
Electroencephalogram (EEG) signals are often used to diagnose diseases such as seizure disorders, Alzheimer's disease, and schizophrenia. One main problem with recorded EEG samples is that they are not equally reliable, owing to artifacts at the time of recording. EEG signal classification algorithms should have a mechanism to handle this issue, and adaptive classifiers appear useful for biological signals such as EEG. In this paper, a general adaptive method named weighted distance nearest neighbor (WDNN) is applied to EEG signal classification to tackle this problem. This classification algorithm assigns a weight to each training sample to control its influence in classifying test samples. The weights of the training samples are used to find the nearest neighbor of an input query pattern. To assess the performance of this scheme, the EEG signals of thirteen schizophrenic patients and eighteen normal subjects were analyzed to classify these two groups. Several features, including fractal dimension, band power, and autoregressive (AR) model coefficients, are extracted from the EEG signals. The classification results are evaluated using leave-one-subject-out cross-validation for reliable estimation. The results indicate that the combination of WDNN and the selected features can significantly outperform the basic nearest-neighbor method and the other methods proposed in the past for the classification of these two groups. Therefore, this method can be a complementary tool for specialists to distinguish schizophrenia disorder.
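One simple realization of the weighted-distance idea is sketched below. It is an assumption-laden toy, not the paper's learned scheme: here a per-sample weight multiplies the distance, so an unreliable (artifact-laden) training sample is effectively pushed away from every query, while the paper learns its weights from the training data.

```python
import math

# Toy WDNN-style classifier: each training sample carries a reliability penalty
# that scales its distance to the query. Weights and points are invented.
def wdnn_classify(query, samples):
    """samples: list of (feature_vector, label, distance_penalty)."""
    best = min(samples, key=lambda s: s[2] * math.dist(query, s[0]))
    return best[1]

train = [
    ((0.0, 0.0), "normal", 1.0),
    ((0.4, 0.4), "schizophrenic", 10.0),  # artifact-laden recording: large penalty
    ((1.0, 1.0), "schizophrenic", 1.0),
]
# Plain 1-NN would pick the geometrically closest sample (0.4, 0.4);
# the penalty makes the reliable "normal" sample win instead.
print(wdnn_classify((0.45, 0.45), train))
```

The effect is exactly the one the abstract describes: unreliable recordings still participate, but their influence on nearest-neighbor decisions is damped.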
International Nuclear Information System (INIS)
Shin, Ho Cheol; Park, Moon Ghu; You, Skin
2006-01-01
Recently, many on-line approaches to instrument channel surveillance (drift monitoring and fault detection) have been reported worldwide. On-line monitoring (OLM) methods evaluate instrument channel performance by assessing its consistency with other plant indications through parametric or non-parametric models. The heart of an OLM system is the model giving an estimate of the true process parameter value against individual measurements. This model gives a process parameter estimate calculated as a function of other plant measurements, which can be used to identify small sensor drifts that would require the sensor to be manually calibrated or replaced. This paper describes an improvement of auto-associative kernel regression (AAKR) that introduces a correlation coefficient weighting on kernel distances. The prediction performance of the developed method is compared with conventional auto-associative kernel regression.
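Baseline AAKR, which the paper refines, can be sketched in a few lines: the estimate for a query observation is a kernel-weighted average of stored fault-free "memory" vectors, with weights decaying in the query-to-memory distance. The bandwidth and data below are invented; the paper's contribution would additionally scale the distances by inter-channel correlation coefficients.

```python
import numpy as np

# Toy auto-associative kernel regression (AAKR): Gaussian kernel on the
# distance between the query and each memory vector. h is an assumed bandwidth.
def aakr(query, memory, h=1.0):
    d = np.linalg.norm(memory - query, axis=1)
    w = np.exp(-0.5 * (d / h) ** 2)                 # kernel weights
    return (w[:, None] * memory).sum(axis=0) / w.sum()

memory = np.array([[1.0, 2.0], [1.1, 2.1], [5.0, 9.0]])  # fault-free states
drifted = np.array([1.2, 2.0])                           # slightly drifted reading
print(np.round(aakr(drifted, memory), 2))  # estimate pulled toward nearby memories
```

The residual between a channel's measurement and its AAKR estimate is what the OLM system monitors for slow drift.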
Directory of Open Access Journals (Sweden)
Leonardo Oliveira Reis
2013-01-01
Background. Protective factors against Gleason upgrading and its impact on outcomes after surgery warrant better definition. Patients and Methods. 343 consecutive patients were categorized at biopsy (BGS) and prostatectomy (PGS) by Gleason score as ≤6, 7, or ≥8; 94 patients (27.4%) had PSA recurrence, with a mean follow-up of 80.2 months (median 99). Independent predictors of Gleason upgrading (logistic regression) and disease-free survival (DFS) (Kaplan-Meier, log-rank) were determined. Results. Gleason discordance was 45.7% (37.32% upgrading and 8.45% downgrading). The upgrading risk decreased by 2.4% for each 1 g of prostate weight increment, while it increased by 10.2% for every 1 ng/mL of PSA, by 72.0% for every 0.1 unit of PSA density, and was 21 times higher for those with BGS 7. Gleason upgrading was associated with increased clinical stage (P=0.019), higher tumor extent (P=0.009), extraprostatic extension (P=0.04), positive surgical margins (P<0.001), seminal vesicle invasion (P=0.003), fewer "insignificant" tumors (P<0.001), and also worse DFS (χ2=4.28, df=1, P=0.039). However, when setting the final Gleason score (BGS ≤6 to PGS 7 versus BGS 7 to PGS 7), avoiding allocation bias, the DFS impact is not confirmed (χ2=0.40, df=1, P=0.530). Conclusions. Gleason upgrading is substantial and confers worse outcomes. Prostate weight is inversely related to upgrading, and its protective effect warrants further evaluation.
DEFF Research Database (Denmark)
Wong, Monica H T; Holst, Claus; Astrup, Arne
2012-01-01
Successful weight maintenance following weight loss is challenging for many people. Identifying predictors of longer-term success will help target clinical resources more effectively. To date, focus has been predominantly on the identification of predictors of weight loss. The goal of the current...
Directory of Open Access Journals (Sweden)
J Swain
2017-12-01
The Indian Space Research Organisation launched Oceansat-2 on 23 September 2009; the scatterometer onboard was a space-borne sensor capable of providing ocean surface winds (both speed and direction) over the globe for a mission life of 5 years. Observations of ocean surface winds from such a space-borne sensor are a potential source of data covering the global oceans and are useful for driving state-of-the-art numerical models for simulating the ocean state if assimilated or blended with weather prediction model products. In this study, an efficient interpolation technique based on inverse distance and time is demonstrated using the Oceansat-2 wind measurements alone for the selected month of June 2010 to generate gridded outputs. As the data are available only along the satellite tracks, with obvious data gaps due to various other reasons, the Oceansat-2 winds were subjected to spatio-temporal interpolation, and 6-hour wind fields for the global oceans were generated at a 1 × 1 degree grid resolution. Such interpolated wind fields can be used to drive state-of-the-art numerical models to predict or hindcast the ocean state, so as to test the utility and performance of satellite measurements alone in the absence of blended fields. The technique can be tested for other satellites that provide both wind speed and direction data. However, the accuracy of the input winds is expected to have a perceptible influence on the predicted ocean-state parameters. Some attempts are also made here to compare the interpolated Oceansat-2 winds with available buoy measurements; they were found to be in reasonably good agreement, with a correlation coefficient of R > 0.8 and mean deviations of 1.04 m/s and 25° for wind speed and direction, respectively.
International Nuclear Information System (INIS)
Kazama, Toshiki; Nasu, Katsuhiro; Kuroki, Yoshifumi; Nawano, Shigeru; Ito, Hisao
2009-01-01
Fat suppression is essential for diffusion-weighted imaging (DWI) in the body. However, the chemical shift selective (CHESS) pulse often fails to suppress fat signals in the breast. The purpose of this study was to compare DWI using CHESS and DWI using short inversion time inversion recovery (STIR) in terms of fat suppression and the apparent diffusion coefficient (ADC) value. DWI using STIR, DWI using CHESS, and contrast-enhanced T1-weighted images were obtained in 32 patients with breast carcinoma. Uniformity of fat suppression, ADC, signal intensity, and visualization of the breast tumors were evaluated. In 44% (14/32) of patients there was insufficient fat suppression in the breasts on DWI using CHESS, whereas no such failure (0%) was observed on DWI using STIR (P<0.0001). The ADCs obtained with DWI using STIR were 4.3% lower than those obtained with DWI using CHESS (P<0.02), with a strong correlation between the ADC measurements (r=0.93, P<0.001). DWI using STIR may provide excellent fat suppression, and the ADC obtained with this sequence correlates well with that obtained with DWI using CHESS. DWI using STIR may be useful when the fat suppression in DWI using CHESS does not work well. (author)
International Nuclear Information System (INIS)
Al-Saeed, O.; Athyal, R. P.; Ismail, M.; Rudwan, M.; Khafajee, S.
2009-01-01
Full text: T1-weighted fluid-attenuated inversion recovery (FLAIR) sequence is a relatively new pulse sequence for intracranial MR imaging. This study was performed to compare the image quality of T1-weighted FLAIR with the T1-weighted FSE sequence. Twenty patients with brain lesions underwent T1-weighted fast spin-echo (FSE) and T1-weighted FLAIR during the same imaging session. Four quantitative and three qualitative criteria were used to compare the two sequences after contrast. Two of the four quantitative criteria pertained to lesion characteristics: lesion to white matter (WM) contrast-to-noise ratio (CNR) and lesion to cerebrospinal fluid (CSF) CNR; the other two related to signals from normal tissue: grey matter to WM CNR and WM to CSF CNR. The three qualitative criteria were conspicuousness of the lesion, the presence of image artefacts, and the overall image contrast. Both T1-weighted FSE and FLAIR images were effective in demonstrating lesions. Image contrast was superior in T1-weighted FLAIR images, with significantly improved grey matter-WM CNRs and CSF-WM CNRs. The overall image contrast was judged superior on T1-weighted FLAIR images compared with T1-weighted FSE images by all neuroradiologists. Two of three reviewers considered that the FLAIR images had slightly increased imaging artefacts that, however, did not interfere with image interpretation. T1-weighted FLAIR imaging provides improved lesion-to-background and grey matter to WM contrast-to-noise ratios. Superior conspicuity of lesions and overall image contrast is obtained in comparable acquisition times. These findings indicate an important role for T1-weighted FLAIR in intracranial imaging and highlight its advantage over the more widely practiced T1-weighted FSE sequence.
Herbst, Elizabeth B; Unnikrishnan, Sunil; Wang, Shiying; Klibanov, Alexander L; Hossack, John A; Mauldin, Frank William
2017-02-01
The use of ultrasound imaging for cancer diagnosis and screening can be enhanced with the use of molecularly targeted microbubbles. Nonlinear imaging strategies such as pulse inversion (PI) and "contrast pulse sequences" (CPS) can be used to differentiate microbubble signal, but often fail to suppress highly echogenic tissue interfaces. This failure results in false-positive detection and potential misdiagnosis. In this study, a novel acoustic radiation force (ARF)-based approach was developed for superior microbubble signal detection. The feasibility of this technique, termed ARF decorrelation-weighted PI (ADW-PI), was demonstrated in vivo using a subcutaneous mouse tumor model. Tumors were implanted in the hindlimb of C57BL/6 mice by subcutaneous injection of MC38 cells. Lipid-shelled microbubbles were conjugated to anti-VEGFR2 antibody and administered via bolus injection. An image sequence using ARF pulses to generate microbubble motion was combined with PI imaging on a Verasonics Vantage programmable scanner. ADW-PI images were generated by combining PI images with interframe signal decorrelation data. For comparison, CPS images of the same mouse tumor were acquired using a Siemens Sequoia clinical scanner. Microbubble-bound regions in the tumor interior exhibited significantly higher signal decorrelation than static tissue (n = 9, P < 0.001). The application of ARF significantly increased microbubble signal decorrelation (n = 9, P < 0.01). Using these decorrelation measurements, ADW-PI imaging demonstrated significantly improved microbubble contrast-to-tissue ratio when compared with corresponding CPS or PI images (n = 9, P < 0.001). Contrast-to-tissue ratio improved with ADW-PI by approximately 3 dB compared with PI images and 2 dB compared with CPS images. Acoustic radiation force can be used to generate adherent microbubble signal decorrelation without microbubble bursting. When combined with PI, measurements of the resulting microbubble signal
International Nuclear Information System (INIS)
Lavdas, Eleftherios; Vlychou, Marianna; Arikidis, Nikos; Kapsalaki, Eftychia; Roka, Violetta; Fezoulidis, Ioannis V.
2010-01-01
Background: T1-weighted fluid-attenuated inversion recovery (FLAIR) sequence has been reported to provide improved contrast between lesions and normal anatomical structures compared to T1-weighted fast spin-echo (FSE) imaging at 1.5T regarding imaging of the lumbar spine. Purpose: To compare T1-weighted FSE and fast T1-weighted FLAIR imaging in normal anatomic structures and degenerative and metastatic lesions of the lumbar spine at 3.0T. Material and Methods: Thirty-two consecutive patients (19 females, 13 males; mean age 44 years, range 30-67 years) with lesions of the lumbar spine were prospectively evaluated. Sagittal images of the lumbar spine were obtained using T1-weighted FSE and fast T1-weighted FLAIR sequences. Both qualitative and quantitative analyses measuring the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and relative contrast (ReCon) between degenerative and metastatic lesions and normal anatomic structures were conducted, comparing these sequences. Results: On quantitative evaluation, SNRs of cerebrospinal fluid (CSF), nerve root, and fat around the root of fast T1-weighted FLAIR imaging were significantly lower than those of T1-weighted FSE images (P<0.001). CNRs of normal spinal cord/CSF and disc herniation/ CSF for fast T1-weighted FLAIR images were significantly higher than those for T1-weighted FSE images (P<0.001). ReCon of normal spinal cord/CSF, disc herniation/CSF, and vertebral lesions/CSF for fast T1-weighted FLAIR images were significantly higher than those for T1-weighted FSE images (P<0.001). On qualitative evaluation, it was found that CSF nulling and contrast at the spinal cord (cauda equina)/CSF interface for T1-weighted FLAIR images were significantly superior compared to those for T1-weighted FSE images (P<0.001), and the disc/spinal cord (cauda equina) interface was better for T1-weighted FLAIR images (P<0.05). Conclusion: The T1-weighted FLAIR sequence may be considered as the preferred lumbar spine imaging
Melo, Ingrid Sofia Vieira de; Costa, Clara Andrezza Crisóstomo Bezerra; Santos, João Victor Laurindo Dos; Santos, Aldenir Feitosa Dos; Florêncio, Telma Maria de Menezes Toledo; Bueno, Nassib Bezerra
2017-01-01
The consumption of ultra-processed foods may be associated with the development of chronic diseases, both in adults and in children/adolescents. This consumption is growing worldwide, especially in low- and middle-income countries. Nevertheless, its magnitude in small, poor cities in the countryside is not well characterized, especially in adolescents. This study aimed to assess the consumption of minimally processed, processed and ultra-processed foods by adolescents from a poor Brazilian city and to determine whether it was associated with excess weight, high waist circumference and high blood pressure. Cross-sectional study, conducted at a public federal school that offers technical education together with high school, located in the city of Murici. Adolescents of both sexes, aged 14-19 years, were included. Anthropometric characteristics (weight, height, waist circumference), blood pressure, and dietary intake data were assessed. Associations were calculated using Poisson regression models, adjusted by sex and age. In total, 249 adolescents were included, 55.8% of whom were girls, with a mean age of 16 years. The consumption of minimally processed foods was inversely associated with excess weight (Adjusted Prevalence Ratio: 0.61, 95% Confidence Interval: [0.39-0.96], P = 0.03). Although the consumption of ultra-processed foods was not associated with excess weight, high blood pressure or high waist circumference, 46.2% of the sample reported eating these products more than weekly. Consumption of minimally processed food is inversely associated with excess weight in adolescents. Investments in nutritional education aimed at preventing the chronic diseases associated with the consumption of these foods are necessary.
Karim, Mohammad Ehsanul; Platt, Robert W
2017-06-15
Correct specification of the inverse probability weighting (IPW) model is necessary for consistent inference from a marginal structural Cox model (MSCM). In practical applications, researchers are typically unaware of the true specification of the weight model. Nonetheless, IPWs are commonly estimated using parametric models, such as the main-effects logistic regression model. In practice, the assumptions underlying such models may not hold, and data-adaptive statistical learning methods may provide an alternative. Many candidate statistical learning approaches are available in the literature; however, the optimal approach for a given dataset is impossible to predict. Super learner (SL) has been proposed as a tool for selecting an optimal learner from a set of candidates using cross-validation. In this study, we evaluate the usefulness of SL in estimating IPW in four different MSCM simulation scenarios, in which we varied the true weight model specification (linear and/or additive). Our simulations show that, in the presence of weight model misspecification, with a rich and diverse set of candidate algorithms, SL can generally offer a better alternative to the commonly used statistical learning approaches in terms of MSE as well as the coverage probabilities of the estimated effect in an MSCM. The findings from the simulation studies guided the application of the MSCM in a multiple sclerosis cohort from British Columbia, Canada (1995-2008), to estimate the impact of beta-interferon treatment in delaying disability progression. Copyright © 2017 John Wiley & Sons, Ltd.
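As a point of reference for the parametric baseline discussed above, stabilized inverse probability of treatment weights can be sketched for a point treatment with a main-effects logistic model. This is a deliberate simplification: the paper's MSCM setting involves time-varying treatments, and the function name is illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stabilized_ipw(X, treated):
    """Stabilized IPW for a binary point treatment (illustrative).

    X: (n, k) array of measured confounders.
    treated: (n,) array of 0/1 treatment indicators.
    Denominator: main-effects logistic propensity score P(A=1 | X).
    Numerator: marginal treatment probability P(A=1), which stabilizes
    the weights so their mean stays near 1.
    """
    ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
    p_treat = treated.mean()
    return np.where(treated == 1, p_treat / ps, (1 - p_treat) / (1 - ps))
```

Super learner replaces the single logistic fit with a cross-validated ensemble over a library of candidate learners for the same propensity score.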
Measuring distance through dense weighted networks: The case of hospital-associated pathogens.
Directory of Open Access Journals (Sweden)
Tjibbe Donker
2017-08-01
Hospital networks, formed by patients visiting multiple hospitals, affect the spread of hospital-associated infections, resulting in different risks for hospitals depending on their network position. These networks are increasingly used to inform strategies to prevent and control the spread of hospital-associated pathogens. However, many studies only consider patients received directly from the initial hospital, without considering the effect of indirect trajectories through the network. We determine the optimal way to measure the distance between hospitals within the network by reconstructing the English hospital network based on shared patients in 2014-2015 and simulating the spread of a hospital-associated pathogen between hospitals, taking into consideration that each intermediate hospital conveys a delay in the further spread of the pathogen. While the risk of transferring a hospital-associated pathogen between directly neighbouring hospitals is a direct reflection of the number of shared patients, the distance between two hospitals far apart in the network is determined largely by the number of intermediate hospitals. Because the network is dense, most long-distance transmission chains in fact involve only a few intermediate steps, spreading along the many weak links. The dense connectivity of hospital networks, together with a strong regional structure, causes hospital-associated pathogens to spread from the initial outbreak in a two-step process: first, the directly surrounding hospitals are affected through the strong connections; second, all other hospitals receive introductions through the multitude of weaker links. Although the strong connections matter for local spread, weak links in the network can offer ideal routes for hospital-associated pathogens to travel further and faster. This holds important implications for infection prevention and control efforts: if a local outbreak is not controlled in time
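The effect of indirect trajectories can be illustrated by turning shared-patient counts into edge lengths and running a shortest-path search, with a fixed per-hop penalty standing in for the delay each intermediate hospital adds. The cost function below (1/shared patients plus a hop delay) is a hypothetical stand-in for the paper's measure, not its actual definition:

```python
import heapq

def hospital_distance(graph, src, delay=1.0):
    """Dijkstra over a patient-sharing network (illustrative costs).

    graph: {hospital: {neighbour: shared_patient_count}} (symmetric).
    Each edge costs 1/shared_patients (strong links are short) plus a
    fixed `delay` per hop, so paths through many intermediate hospitals
    accumulate delay even when every link is strong.
    """
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, shared in graph[u].items():
            nd = d + 1.0 / shared + delay
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

With these costs, a weak direct link can still beat a chain of strong links once the per-hop delay is counted, mirroring the paper's point about weak links offering fast long-distance routes.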
Directory of Open Access Journals (Sweden)
Javad Nematian
2015-04-01
Vertex center and p-center problems are two well-known types of the center problem. In this paper, a p-center problem with uncertain demand-weighted distance is introduced in which the demands are considered as fuzzy random variables (FRVs) and the objective is to minimize the maximum distance between a node and its nearest facility. Then, by introducing new methods, the proposed problem is converted into deterministic integer programming (IP) problems; these methods are obtained through possibility theory and fuzzy random chance-constrained programming (FRCCP). Finally, the proposed methods are applied to locating bicycle stations in the city of Tabriz, Iran, as a real case study. The computational results of our study show that these methods can be applied to center problems under uncertainty.
Tian, Jinyan; Li, Xiaojuan; Duan, Fuzhou; Wang, Junqian; Ou, Yang
2016-05-10
The rapid development of Unmanned Aerial Vehicle (UAV) remote sensing responds to the increasing demand for low-altitude very high resolution (VHR) image data. However, high processing speed for massive UAV data has become an indispensable prerequisite for its applications in various industry sectors. In this paper, we developed an effective and efficient seam elimination approach for UAV images based on Wallis dodging and Gaussian distance weight enhancement (WD-GDWE). The method encompasses two major steps: first, Wallis dodging is introduced to adjust the difference in brightness between the two matched images, and the parameters of the algorithm are derived in this study. Second, a Gaussian distance weight distribution method is proposed to fuse the two matched images in the overlap region, based on the First Law of Geography, which distributes the partial dislocation at the seam over the whole overlap region with a smooth-transition effect. The method was validated at a study site in Hanwang (Sichuan, China), an area seriously damaged in the 12 May 2008 Wenchuan Earthquake. A performance comparison between WD-GDWE and five classical seam elimination algorithms in terms of efficiency and effectiveness was then conducted. Results showed that WD-GDWE is not only efficient but also satisfactorily effective. This method is promising for advancing UAV applications, especially in emergency situations.
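The fusion step can be sketched as a distance-based Gaussian blend across the overlap strip. This is a strong simplification of WD-GDWE: it omits the Wallis dodging stage, uses 1-D weights along the seam-normal direction only, and assumes a bandwidth tied to the overlap width:

```python
import numpy as np

def gaussian_distance_blend(img_a, img_b, sigma=None):
    """Blend two co-registered overlap strips column by column.

    img_a, img_b: (h, w) arrays covering the same overlap region, with
    img_a's source image lying to the left and img_b's to the right.
    Each column gets Gaussian weights that decay with distance from its
    own image's side, so the mix transitions smoothly across the seam.
    The default bandwidth (one third of the overlap width) is an assumption.
    """
    _, w = img_a.shape
    if sigma is None:
        sigma = w / 3.0
    x = np.arange(w)
    wa = np.exp(-(x ** 2) / (2 * sigma ** 2))              # strong near A's side
    wb = np.exp(-((w - 1 - x) ** 2) / (2 * sigma ** 2))    # strong near B's side
    alpha = wa / (wa + wb)                                  # normalized weight for A
    return alpha * img_a + (1 - alpha) * img_b
```

Because the weights sum to one at every pixel, any residual dislocation at the seam is spread over the whole overlap rather than appearing as a hard edge.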
Directory of Open Access Journals (Sweden)
Mohammad Hassan Ehrampoush
2017-12-01
Conclusion: Given that PM10 concentrations exceeded WHO standard values, particularly in spring, necessary actions and solutions should be taken to reduce pollution. This study indicated that the Kriging model is more efficient for spatial analysis of suspended particles than the IDW method.
Laraia, Barbara A; Downing, Janelle M; Zhang, Y Tara; Dow, William H; Kelly, Maggi; Blanchard, Samuel D; Adler, Nancy; Schillinger, Dean; Moffet, Howard; Warton, E Margaret; Karter, Andrew J
2017-05-01
Associations between neighborhood food environment and adult body mass index (BMI; weight (kg)/height (m)2) derived using cross-sectional or longitudinal random-effects models may be biased due to unmeasured confounding and to measurement and methodological limitations. In this study, we assessed the within-individual association between change in food environment from 2006 to 2011 and change in BMI among adults with type 2 diabetes, using clinical data from the Kaiser Permanente Diabetes Registry collected from 2007 to 2011. Healthy food environment was measured using the kernel density of healthful food venues. Fixed-effects models with a 1-year-lagged BMI were estimated. Separate models were fitted for persons who moved and those who did not. Sensitivity analyses using different lag times and kernel density bandwidths were conducted to establish the consistency of findings. On average, patients lost 1 pound (0.45 kg) for each standard-deviation improvement in their food environment. This relationship held for persons who remained in the same location throughout the 5-year study period but not for persons who moved. Proximity to food venues that promote nutritious foods alone may not translate into clinically meaningful diet-related health changes. Community-level policies for improving the food environment need multifaceted strategies to invoke clinically meaningful change in BMI among adult patients with diabetes. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Energy Technology Data Exchange (ETDEWEB)
Hoff, Gabriela; Lima, Nathan Willig, E-mail: ghoff.gesic@gmail.com [Pontificia Universidade Catolica do Rio Grande do Sul (PUCRS), Porto Alegre, RS (Brazil). Faculdade de Fisica
2014-07-01
The Inverse Square Law (ISL) is a mathematical rule widely used to adjust KERMA and exposure to different focal-spot distances, taking a given point in space as reference. Considering the limitations of this law and its application, our main objective is to verify the applicability of the ISL for determining exposure in the radiodiagnostic range (tube voltages between 30 kVp and 150 kVp). Experimental data were collected, and deterministic calculations and Monte Carlo simulations (Geant4 toolkit) were performed. The experimental data were collected using a calibrated TNT 12000 ionization chamber from Fluke. The conventional X-ray equipment was a Siemens Multix Top, with a tungsten track and total filtration equivalent to 2.5 mm of aluminum; the mammographic equipment was a Siemens Mammomat Inspiration, with track-added filtration combinations of molybdenum-molybdenum (25 μm), molybdenum-rhodium (30 μm), and tungsten-rhodium (50 μm). Both units passed the quality control tests required by Brazilian regulations. In conventional radiology, measurements were performed at focal spot-detector distances (FsDD) of 40 cm, 50 cm, 60 cm, 70 cm, 80 cm, 90 cm, and 100 cm for peak voltages of 66 kVp, 81 kVp, and 125 kVp. In mammography, measurements were performed at FsDD of 60 cm, 50 cm, 40 cm, and 26 cm for peak voltages of 25 kVp, 30 kVp, and 35 kVp. Based on the results, it is possible to conclude that the ISL performs worse for mammography spectra (leading to larger errors in the estimated data), but it can have a significant impact in both areas depending on the spectral energy and the distance to be corrected. (author)
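The ISL correction being tested reduces to a one-line scaling of exposure (or air kerma) by the squared ratio of distances. A minimal sketch (the function name is illustrative):

```python
def isl_correct(exposure_ref, d_ref, d_new):
    """Scale exposure (or air kerma) measured at a reference focal
    spot-detector distance d_ref to a new distance d_new using the
    inverse square law: X(d_new) = X_ref * (d_ref / d_new)**2."""
    return exposure_ref * (d_ref / d_new) ** 2
```

Halving the distance quadruples the predicted exposure; the paper's point is that real spectra (especially in mammography) deviate measurably from this idealized point-source scaling.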
Almirall, Daniel; Griffin, Beth Ann; McCaffrey, Daniel F.; Ramchand, Rajeev; Yuen, Robert A.; Murphy, Susan A.
2014-01-01
This article considers the problem of examining time-varying causal effect moderation using observational, longitudinal data in which treatment, candidate moderators, and possible confounders are time varying. The structural nested mean model (SNMM) is used to specify the moderated time-varying causal effects of interest in a conditional mean model for a continuous response given time-varying treatments and moderators. We present an easy-to-use estimator of the SNMM that combines an existing regression-with-residuals (RR) approach with an inverse-probability-of-treatment weighting (IPTW) strategy. The RR approach has been shown to identify the moderated time-varying causal effects if the time-varying moderators are also the sole time-varying confounders. The proposed IPTW+RR approach provides estimators of the moderated time-varying causal effects in the SNMM in the presence of an additional, auxiliary set of known and measured time-varying confounders. We use a small simulation experiment to compare IPTW+RR versus the traditional regression approach and to compare small and large sample properties of asymptotic versus bootstrap estimators of the standard errors for the IPTW+RR approach. This article clarifies the distinction between time-varying moderators and time-varying confounders. We illustrate the methodology in a case study to assess if time-varying substance use moderates treatment effects on future substance use. PMID:23873437
International Nuclear Information System (INIS)
Lee, Ming-Wei; Chen, Yi-Chun
2014-01-01
In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations or some combinations of both methods. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated by the relations between the Gaussian coefficients and the geometric parameters of the imaging system with distance-weighting factors. The weighting factors are related to the projected centroids of voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed similar profiles as the measured PRFs. OSEM reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability of a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided comparable detectability to that of the H matrix acquired by a full 3D grid-scan experiment. The reduction in the acquisition time of a full 1.0-mm grid H matrix was about 15.2 and 62.2 times with the simplified grid pattern on 2.0-mm and 4.0-mm grid, respectively. A finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would shorten the acquisition time by 8 times, additionally. -- Highlights: • A rapid interpolation method of system matrices (H) is proposed, named DW-GIMGPE. • Reduce H acquisition time by 15.2× with simplified grid scan and 2× interpolation. • Reconstructions of a hot-rod phantom with measured and DW-GIMGPE H were similar. • The imaging study of normal
Herbst, Elizabeth; Unnikrishnan, Sunil; Wang, Shiying; Klibanov, Alexander L.; Hossack, John A.; Mauldin, F. William
2016-01-01
Objectives The use of ultrasound imaging for cancer diagnosis and screening can be enhanced with the use of molecularly targeted microbubbles. Nonlinear imaging strategies such as pulse inversion (PI) and “contrast pulse sequences” (CPS) can be used to differentiate microbubble signal, but often fail to suppress highly echogenic tissue interfaces. This failure results in false positive detection and potential misdiagnosis. In this study, a novel Acoustic Radiation Force (ARF) based approach was developed for superior microbubble signal detection. The feasibility of this technique, termed ARF-decorrelation-weighted PI (ADW-PI), was demonstrated in vivo using a subcutaneous mouse tumor model. Materials and Methods Tumors were implanted in the hindlimb of C57BL/6 mice by subcutaneous injection of MC38 cells. Lipid-shelled microbubbles were conjugated to anti-VEGFR2 antibody and administered via bolus injection. An image sequence using ARF pulses to generate microbubble motion was combined with PI imaging on a Verasonics Vantage programmable scanner. ADW-PI images were generated by combining PI images with inter-frame signal decorrelation data. For comparison, CPS images of the same mouse tumor were acquired using a Siemens Sequoia clinical scanner. Results Microbubble-bound regions in the tumor interior exhibited significantly higher signal decorrelation than static tissue (n = 9, p < 0.001). The application of ARF significantly increased microbubble signal decorrelation (n = 9, p < 0.01). Using these decorrelation measurements, ADW-PI imaging demonstrated significantly improved microbubble contrast-to-tissue ratio (CTR) when compared to corresponding CPS or PI images (n = 9, p < 0.001). CTR improved with ADW-PI by approximately 3 dB compared to PI images and 2 dB compared to CPS images. Conclusions Acoustic radiation force can be used to generate adherent microbubble signal decorrelation without microbubble bursting. When combined with pulse inversion
Wang, Z.; Kato, T.; Wang, Y.
2015-12-01
The spatiotemporal fault slip history of the 2008 Iwate-Miyagi Nairiku earthquake, Japan, is obtained by joint inversion of 1-Hz GPS waveforms and near-field strong motion records. The 1-Hz GPS data from GEONET are processed with GAMIT/GLOBK and then low-pass filtered at 0.05 Hz. The ground surface strong motion records from K-NET and KiK-net stations are band-pass filtered in the range 0.05-0.3 Hz and integrated once to obtain velocity. The joint inversion exploits a broader frequency band of near-field ground motions, which provides excellent constraints on both the detailed slip history and the slip distribution. A fully Bayesian inversion method is used to simultaneously and objectively determine the rupture model, the unknown relative weighting of the multiple data sets, and the unknown smoothing hyperparameters. The preferred rupture model is stable under different choices of velocity structure model and station distribution, with a maximum slip of ~8.0 m and a seismic moment of 2.9 × 10^19 Nm (Mw 6.9). Compared with the inversion of strong motion records alone, the cumulative slip distribution of the joint inversion is sparser, with two slip asperities. One asperity, common to both inversions, extends from the hypocenter southeastward to the surface rupture; the other, unique to the joint inversion and contributed by the 1-Hz GPS waveforms, appears in the deep part of the fault where very few aftershocks occurred. The differential moment rate function of the joint and single inversions indicates that abundant high-frequency waves were radiated in the first three seconds, with little low-frequency radiation.
McCormack, Gavin R; Virk, Jagdeep S
2014-09-01
Higher levels of sedentary behavior are associated with adverse health outcomes, and over-reliance on private motor vehicles for transportation is a potential contributor to the obesity epidemic. The objective of this study was to review evidence on the relationship between motor vehicle travel distance and time and weight status among adults. Keywords associated with driving and weight status were entered into four databases (PubMed, Medline, Transportation Research Information Database, and Web of Science), and retrieved article titles and abstracts were screened for relevance. Relevant articles were assessed for eligibility for inclusion in the review (English-language articles with a sample ≥ 16 years of age that included a measure of time or distance traveled in a motor vehicle and of weight status, and that estimated the association between driving and weight status). The database search yielded 2781 articles, of which 88 were deemed relevant and 10 met the inclusion criteria. Of the 10 studies included in the review, 8 found a statistically significant positive association between time and distance traveled in a motor vehicle and weight status. Multilevel interventions that make alternatives to driving private motor vehicles, such as walking and cycling, more convenient are needed to promote healthy weight in the adult population. Copyright © 2014 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Chang, J; Gu, X; Lu, W; Jiang, S; Song, T
2016-01-01
Purpose: A novel distance-dose weighting method for label fusion was developed to increase segmentation accuracy in dosimetrically important regions for prostate radiation therapy. Methods: Label fusion as implemented in the original SIMPLE (OS) for multi-atlas segmentation relies iteratively on a majority vote to generate an estimated ground truth and on the DICE similarity measure to screen candidates. The proposed distance-dose weighting places more weight on dosimetrically important regions when calculating the similarity measure. Specifically, we introduced the distance-to-dose error (DDE), which converts distance to dosimetric importance, into the performance evaluation. The DDE calculates an estimated DE error derived from surface distance differences between the candidate and the estimated ground truth label by multiplying by a regression coefficient. To determine the coefficient at each simulation point on the rectum, we fitted the DE error with respect to simulated voxel shift. The DEs were calculated with the multi-OAR geometry-dosimetry training model previously developed in our research group. Results: For both the OS and the distance-dose weighted SIMPLE (WS) results, the evaluation metrics for twenty patients were calculated using the ground truth segmentation. The mean differences in DICE, Hausdorff distance, and mean absolute distance (MAD) between OS and WS were 0, 0.10, and 0.11, respectively. For the partial MAD of WS, which calculates MAD within a given PTV expansion voxel distance, lower MADs than with OS were observed at closer distances (1 to 8 voxels). The DE results showed that segmentation with WS produced more accurate results than OS: the mean DE errors of V75, V70, V65, and V60 decreased by 1.16%, 1.17%, 1.14%, and 1.12%, respectively. Conclusion: We have demonstrated that the method can increase segmentation accuracy in rectum regions adjacent to the PTV. As a result, segmentation using WS has shown improved dosimetric accuracy over OS. The WS will
Energy Technology Data Exchange (ETDEWEB)
Chang, J; Gu, X; Lu, W; Jiang, S [UT Southwestern Medical Center, Dallas, TX (United States); Song, T [Southern Medical University, Guangzhou, Guangdong (China)
2016-06-15
Purpose: A novel distance-dose weighting method for label fusion was developed to increase segmentation accuracy in dosimetrically important regions for prostate radiation therapy. Methods: Label fusion as implemented in the original SIMPLE (OS) for multi-atlas segmentation relies iteratively on a majority vote to generate an estimated ground truth and on the DICE similarity measure to screen candidates. The proposed distance-dose weighting places more weight on dosimetrically important regions when calculating the similarity measure. Specifically, we introduced the distance-to-dose error (DDE), which converts distance to dosimetric importance, into the performance evaluation. The DDE calculates an estimated DE error derived from surface distance differences between the candidate and the estimated ground truth label by multiplying by a regression coefficient. To determine the coefficient at each simulation point on the rectum, we fitted the DE error with respect to simulated voxel shift. The DEs were calculated with the multi-OAR geometry-dosimetry training model previously developed in our research group. Results: For both the OS and the distance-dose weighted SIMPLE (WS) results, the evaluation metrics for twenty patients were calculated using the ground truth segmentation. The mean differences in DICE, Hausdorff distance, and mean absolute distance (MAD) between OS and WS were 0, 0.10, and 0.11, respectively. For the partial MAD of WS, which calculates MAD within a given PTV expansion voxel distance, lower MADs than with OS were observed at closer distances (1 to 8 voxels). The DE results showed that segmentation with WS produced more accurate results than OS: the mean DE errors of V75, V70, V65, and V60 decreased by 1.16%, 1.17%, 1.14%, and 1.12%, respectively. Conclusion: We have demonstrated that the method can increase segmentation accuracy in rectum regions adjacent to the PTV. As a result, segmentation using WS has shown improved dosimetric accuracy over OS. The WS will
International Nuclear Information System (INIS)
Erdem, L. Oktay; Erdem, C. Zuhal; Acikgoz, Bektas; Gundogdu, Sadi
2005-01-01
Objective: To compare fast T1-weighted fluid-attenuated inversion recovery (FLAIR) and T1-weighted turbo spin-echo (TSE) imaging of degenerative disc disease of the lumbar spine. Materials and methods: Thirty-five consecutive patients (19 females, 16 males; mean age 41 years, range 31-67 years) with suspected degenerative disc disease of the lumbar spine were prospectively evaluated. Sagittal images of the lumbar spine were obtained using T1-weighted TSE and fast T1-weighted FLAIR sequences. Two radiologists compared these sequences both qualitatively and quantitatively. Results: On qualitative evaluation, CSF nulling and contrast at the disc-CSF, disc-spinal cord (cauda equina), and spinal cord (cauda equina)-CSF interfaces were significantly higher on fast T1-weighted FLAIR images than on T1-weighted TSE images (P < 0.001). On quantitative evaluation of the first 15 patients, signal-to-noise ratios of cerebrospinal fluid were significantly lower on fast T1-weighted FLAIR imaging than on T1-weighted TSE images (P < 0.05). Contrast-to-noise ratios of spinal cord/CSF and normal bone marrow/disc were significantly higher on fast T1-weighted FLAIR images than on T1-weighted TSE images (P < 0.05). Conclusion: Our results show that fast T1-weighted FLAIR imaging may be a valuable addition to the armamentarium of lumbar spinal T1-weighted MR imaging, because it offers definite advantages: CSF nulling, better conspicuity of normal anatomic structures and of lumbar discogenic changes, higher image contrast, and a nearly equal acquisition time.
Energy Technology Data Exchange (ETDEWEB)
Erdem, L. Oktay [Department of Radiology, Zonguldak Karaelmas University, School of Medicine, 6700 Kozlu, Zonguldak (Turkey)]. E-mail: sunarerdem@yahoo.com; Erdem, C. Zuhal [Department of Radiology, Zonguldak Karaelmas University, School of Medicine, 6700 Kozlu, Zonguldak (Turkey); Acikgoz, Bektas [Department of Neurosurgery, Zonguldak Karaelmas University, School of Medicine, Zonguldak (Turkey); Gundogdu, Sadi [Department of Radiology, Zonguldak Karaelmas University, School of Medicine, 6700 Kozlu, Zonguldak (Turkey)
2005-08-01
Objective: To compare fast T1-weighted fluid-attenuated inversion recovery (FLAIR) and T1-weighted turbo spin-echo (TSE) imaging of degenerative disc disease of the lumbar spine. Materials and methods: Thirty-five consecutive patients (19 females, 16 males; mean age 41 years, range 31-67 years) with suspected degenerative disc disease of the lumbar spine were prospectively evaluated. Sagittal images of the lumbar spine were obtained using T1-weighted TSE and fast T1-weighted FLAIR sequences. Two radiologists compared these sequences both qualitatively and quantitatively. Results: On qualitative evaluation, CSF nulling and contrast at the disc-CSF, disc-spinal cord (cauda equina), and spinal cord (cauda equina)-CSF interfaces were significantly higher on fast T1-weighted FLAIR images than on T1-weighted TSE images (P < 0.001). On quantitative evaluation of the first 15 patients, signal-to-noise ratios of cerebrospinal fluid were significantly lower on fast T1-weighted FLAIR imaging than on T1-weighted TSE images (P < 0.05). Contrast-to-noise ratios of spinal cord/CSF and normal bone marrow/disc were significantly higher on fast T1-weighted FLAIR images than on T1-weighted TSE images (P < 0.05). Conclusion: Our results show that fast T1-weighted FLAIR imaging may be a valuable addition to the armamentarium of lumbar spinal T1-weighted MR imaging, because it offers definite advantages: CSF nulling, better conspicuity of normal anatomic structures and of lumbar discogenic changes, higher image contrast, and a nearly equal acquisition time.
DEFF Research Database (Denmark)
Zheng, Miaobing; Rangan, Anna; Allman-Farinelli, Margaret
2015-01-01
The aim of the present study was to examine the associations of sugary drink consumption and its substitution with alternative beverages with body weight gain among young children predisposed to future weight gain. Secondary analysis of the Healthy Start Study, a 1·5-year randomised controlled...... trial designed to prevent overweight among Danish children aged 2-6 years (n 366), was carried out. Multivariate linear regression models were used to investigate the associations of beverage consumption with change in body weight (Δweight) or BMI z-score (ΔBMI z-score). Substitution models were used...... to extrapolate the influence of replacing sugary drinks with alternative beverages (water, milk and diet drinks) on Δweight or ΔBMI z-score. Sugary drink intake at baseline and substitution of sugary drinks with milk were associated with both Δweight and ΔBMI z-score. Every 100 g/d increase in sugary drink
Directory of Open Access Journals (Sweden)
María Fernanda Garcés
2017-04-01
Conclusions: Inversions of intron 22 and intron 1 were found in half of this group of patients. These results are reproducible and useful for identifying the two most frequent mutations in patients with severe hemophilia A.
Energy Technology Data Exchange (ETDEWEB)
Son, Seok Hyun; Chang, Seung Kuk; Eun, Choong Ki [Pusan Paik Hospital, Inje Univ. College of Medicine, Kimhae (Korea, Republic of)
1999-12-01
To determine the usefulness of fluid-attenuated inversion recovery (FLAIR) imaging for the detection of high signal intensity of the hippocampus or amygdala in mesial temporal sclerosis (MTS), compared with turbo spin-echo T2-weighted imaging. Two neuroradiologists independently analyzed, in a blind fashion, randomly mixed MR images of 20 lesions in 17 patients with diagnosed MTS and of ten normal controls; all subjects underwent both FLAIR and turbo spin-echo T2-weighted imaging. To determine hippocampal morphology, oblique coronal images perpendicular to the long axis of the hippocampus were obtained. The detection rate of high signal intensity in the hippocampus or amygdala, the radiologists' preferred imaging sequence, and the intersubject consistency of detection were evaluated. Signal intensity in the hippocampus or amygdala was considered high if it was substantially higher than that in the cortex of the adjacent temporo-parietal lobe. In all normal controls, FLAIR and spin-echo T2-weighted images showed normal signal intensity in the hippocampus or amygdala. In MTS, the mean detection rate of high signal intensity in the hippocampus or amygdala was 93% on FLAIR images, compared with 43% on spin-echo T2-weighted images. In all cases in which signal intensity on FLAIR images was normal, signal intensity on spin-echo T2-weighted images was also normal. The radiologists preferred the contrast properties of FLAIR to those of spin-echo T2-weighted images. In the diagnosis of MTS using MRI, FLAIR images are more useful than spin-echo T2-weighted images for the detection of high signal intensity of the hippocampus or amygdala; FLAIR imaging is therefore a suitable alternative to spin-echo T2-weighted imaging.
International Nuclear Information System (INIS)
Son, Seok Hyun; Chang, Seung Kuk; Eun, Choong Ki
1999-01-01
To determine the usefulness of fluid-attenuated inversion recovery (FLAIR) imaging for the detection of high signal intensity of the hippocampus or amygdala in mesial temporal sclerosis (MTS), compared with turbo spin-echo T2-weighted imaging. Two neuroradiologists independently analyzed, in a blind fashion, randomly mixed MR images of 20 lesions in 17 patients with diagnosed MTS and of ten normal controls; all subjects underwent both FLAIR and turbo spin-echo T2-weighted imaging. To determine hippocampal morphology, oblique coronal images perpendicular to the long axis of the hippocampus were obtained. The detection rate of high signal intensity in the hippocampus or amygdala, the radiologists' preferred imaging sequence, and the intersubject consistency of detection were evaluated. Signal intensity in the hippocampus or amygdala was considered high if it was substantially higher than that in the cortex of the adjacent temporo-parietal lobe. In all normal controls, FLAIR and spin-echo T2-weighted images showed normal signal intensity in the hippocampus or amygdala. In MTS, the mean detection rate of high signal intensity in the hippocampus or amygdala was 93% on FLAIR images, compared with 43% on spin-echo T2-weighted images. In all cases in which signal intensity on FLAIR images was normal, signal intensity on spin-echo T2-weighted images was also normal. The radiologists preferred the contrast properties of FLAIR to those of spin-echo T2-weighted images. In the diagnosis of MTS using MRI, FLAIR images are more useful than spin-echo T2-weighted images for the detection of high signal intensity of the hippocampus or amygdala; FLAIR imaging is therefore a suitable alternative to spin-echo T2-weighted imaging.
Directory of Open Access Journals (Sweden)
Lei Zeng
2016-01-01
Cone beam computed tomography (CBCT) is a new method for 3D nondestructive testing of printed circuit boards (PCBs). However, the resulting 3D images of PCBs exhibit low contrast because of several factors that arise during CBCT imaging, such as metal artifacts and beam hardening. Histogram equalization (HE) algorithms cannot effectively widen the gray difference between substrate and metal in 3D CT images of PCBs, and their reinforcing effects are insignificant. To address this shortcoming, this study proposes an image enhancement algorithm based on gray and distance double-weighted HE. Tailored to the characteristics of 3D CT images of PCBs, the proposed algorithm uses a gray and distance double-weighting strategy to reshape the original image histogram: it suppresses the grayscale of the nonmetallic substrate and expands the grayscale of wires and other metals, thereby enhancing the gray difference between substrate and metal and highlighting metallic materials. The flexibility and advantages of the proposed algorithm are confirmed by analyses and experimental results.
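A weighted histogram equalization, the core idea the double-weighting builds on, can be sketched as follows; the weighting rule here (up-weight bright, metal-like pixels, down-weight dark substrate) is an illustrative stand-in for the paper's gray-and-distance formula, not its exact algorithm:

```python
import numpy as np

def weighted_he(img, weight):
    """Histogram equalization where each pixel contributes `weight[pixel]`
    to its gray bin instead of 1, reshaping the histogram before the
    usual CDF-based remapping."""
    hist = np.bincount(img.ravel(), weights=weight.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf = cdf / cdf[-1]                       # normalize to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
# Illustrative weighting: expand bright (metal-like) grays, suppress dark ones
w = np.where(img > 128, 2.0, 0.5)
out = weighted_he(img, w)
```

With a constant weight array this reduces to ordinary HE; the weights only change how much of the output gray range each input gray level claims.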
Matoza, Robin S.; Chouet, Bernard A.; Dawson, Phillip B.; Shearer, Peter M.; Haney, Matthew M.; Waite, Gregory P.; Moran, Seth C.; Mikesell, T. Dylan
2015-01-01
Long-period (LP, 0.5-5 Hz) seismicity, observed at volcanoes worldwide, is a recognized signature of unrest and eruption. Cyclic LP “drumbeating” was the characteristic seismicity accompanying the sustained dome-building phase of the 2004–2008 eruption of Mount St. Helens (MSH), WA. However, together with the LP drumbeating was a near-continuous, randomly occurring series of tiny LP seismic events (LP “subevents”), which may hold important additional information on the mechanism of seismogenesis at restless volcanoes. We employ template matching, phase-weighted stacking, and full-waveform inversion to image the source mechanism of one multiplet of these LP subevents at MSH in July 2005. The signal-to-noise ratios of the individual events are too low to produce reliable waveform-inversion results, but the events are repetitive and can be stacked. We apply network-based template matching to 8 days of continuous velocity waveform data from 29 June to 7 July 2005 using a master event to detect 822 network triggers. We stack waveforms for 359 high-quality triggers at each station and component, using a combination of linear and phase-weighted stacking to produce clean stacks for use in waveform inversion. The derived source mechanism points to the volumetric oscillation (~10 m³) of a subhorizontal crack located at shallow depth (~30 m) in an area to the south of Crater Glacier in the southern portion of the breached MSH crater. A possible excitation mechanism is the sudden condensation of metastable steam from a shallow pressurized hydrothermal system as it encounters cool meteoric water in the outer parts of the edifice, perhaps supplied from snow melt.
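Phase-weighted stacking modulates the linear stack by the coherence of instantaneous phases across traces, suppressing incoherent noise. A numpy-only sketch (analytic signal via FFT, with synthetic traces standing in for the MSH data) could look like:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal along the last axis via FFT (numpy stand-in for a
    Hilbert-transform routine)."""
    n = x.shape[-1]
    X = np.fft.fft(x, axis=-1)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h, axis=-1)

def phase_weighted_stack(traces, nu=2):
    """Linear stack weighted by the coherence of instantaneous phases
    (in the spirit of Schimmel & Paulssen's phase-weighted stack)."""
    a = analytic_signal(traces)
    phase = a / np.abs(a)                        # unit phasors
    coherence = np.abs(phase.mean(axis=0)) ** nu # 1 = coherent, 0 = random
    return traces.mean(axis=0) * coherence

# Twenty noisy copies of the same 5 Hz signal
t = np.linspace(0, 1, 200)
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 5 * t)
traces = np.stack([signal + 0.3 * rng.standard_normal(t.size) for _ in range(20)])
stacked = phase_weighted_stack(traces)
```

Because the coherence factor lies in [0, 1], the phase-weighted stack can only attenuate the linear stack, never amplify it.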
Lewin, Joel W; O'Rourke, Nicholas A; Chiow, Adrian K H; Bryant, Richard; Martin, Ian; Nathanson, Leslie K; Cavallucci, David J
2016-02-01
This study compares long-term outcomes between intention-to-treat laparoscopic and open approaches to colorectal liver metastases (CLM), using inverse probability of treatment weighting (IPTW) based on propensity scores to control for selection bias. Patients undergoing liver resection for CLM by 5 surgeons at 3 institutions from 2000 to early 2014 were analysed. IPTW based on propensity scores were generated and used to assess the marginal treatment effect of the laparoscopic approach via a weighted Cox proportional hazards model. A total of 298 operations were performed in 256 patients. Seven patients with planned two-stage resections were excluded, leaving 284 operations in 249 patients for analysis. After IPTW, the population was well balanced. With a median follow up of 36 months, 5-year overall survival (OS) and recurrence-free survival (RFS) for the cohort were 59% and 38%. 146 laparoscopic procedures were performed in 140 patients, with weighted 5-year OS and RFS of 54% and 36% respectively. In the open group, 138 procedures were performed in 122 patients, with a weighted 5-year OS and RFS of 63% and 38% respectively. There was no significant difference between the two groups in terms of OS or RFS. In the Brisbane experience, after accounting for bias in treatment assignment, long-term survival after LLR for CLM is equivalent to outcomes in open surgery. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
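Inverse probability of treatment weighting assigns each patient the reciprocal of the probability of the treatment actually received, so that the weighted arms are balanced on measured confounders. A toy sketch (simulated confounder and a known propensity used in place of an estimated one; not the study's data):

```python
import numpy as np

# Toy cohort: treatment assignment depends on a confounder x
rng = np.random.default_rng(2)
n = 1000
x = rng.standard_normal(n)              # confounder
p = 1.0 / (1.0 + np.exp(-x))            # true propensity of treatment
treated = rng.random(n) < p

# IPTW weights: 1/p for treated, 1/(1-p) for controls.  In practice the
# propensity would be estimated, e.g. by logistic regression.
w = np.where(treated, 1.0 / p, 1.0 / (1.0 - p))

# After weighting, the confounder should be balanced between arms
gap_raw = abs(x[treated].mean() - x[~treated].mean())
mean_t = np.average(x[treated], weights=w[treated])
mean_c = np.average(x[~treated], weights=w[~treated])
gap_weighted = abs(mean_t - mean_c)
```

The weighted means of `x` in the two arms should nearly coincide even though the raw means differ sharply; downstream, the same weights would enter a weighted Cox model as in the study.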
Bolch, Charlotte A; Chu, Haitao; Jarosek, Stephanie; Cole, Stephen R; Elliott, Sean; Virnig, Beth
2017-07-10
To illustrate the 10-year risks of urinary adverse events (UAEs) among men diagnosed with prostate cancer and treated with different types of therapy, accounting for the competing risk of death. Prostate cancer is the second most common malignancy among adult males in the United States. Few studies have reported the long-term post-treatment risk of UAEs, and those that have did not appropriately account for competing deaths. This paper conducts an inverse probability of treatment (IPT) weighted competing risks analysis to estimate the effects of different prostate cancer treatments on the risk of UAE, using a matched cohort of prostate cancer patients and non-cancer controls from the Surveillance, Epidemiology and End Results (SEER) Medicare database. The study dataset included men aged 66 years or older, 83% of whom were white, with a median follow-up of 4.14 years. Patients who underwent combined radical prostatectomy and external beam radiotherapy experienced the highest risk of UAE (IPT-weighted competing risks: HR 3.65, 95% CI 3.28-4.07; 10-year cumulative incidence 36.5%). The findings suggest that IPT-weighted competing risks analysis provides an accurate estimator of the cumulative incidence of UAE, taking into account competing deaths as well as measured confounding bias.
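The quantity reported here, the cumulative incidence of UAE in the presence of death as a competing risk, can be estimated nonparametrically. A simplified Aalen-Johansen-style sketch (toy event times, assuming no tied times; not the authors' SEER analysis):

```python
import numpy as np

def cumulative_incidence(time, event, cause, t):
    """Cumulative incidence of `cause` by time t under competing risks.
    Event codes: 0 = censored, 1 = UAE, 2 = death (any nonzero code is an
    event).  Assumes no tied event times."""
    order = np.argsort(time)
    time = np.asarray(time, dtype=float)[order]
    event = np.asarray(event)[order]
    n = len(time)
    surv = 1.0      # overall event-free survival just before current time
    cif = 0.0
    at_risk = n
    for i in range(n):
        if time[i] > t:
            break
        if event[i] == cause:
            cif += surv * (1.0 / at_risk)       # hazard of the cause of interest
        if event[i] != 0:
            surv *= (1.0 - 1.0 / at_risk)       # any event depletes survival
        at_risk -= 1
    return cif

# Four toy patients: UAE at t=1, death at t=2, UAE at t=3, censored at t=4
ci_uae = cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 0], cause=1, t=5)  # 0.5
```

Unlike 1 minus Kaplan-Meier on UAE alone, this estimator never counts probability mass that was already absorbed by death, which is why it stays honest in the presence of competing deaths.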
Bertoli, Simona; Spadafranca, Angela; Bes-Rastrollo, Maira; Martinez-Gonzalez, Miguel Angel; Ponissi, Veronica; Beggio, Valentina; Leone, Alessandro; Battezzati, Alberto
2015-02-01
The key factors influencing the development of Binge Eating Disorder (BED) are not well known. Adherence to the Mediterranean diet (MD) has been suspected to reduce the risk of several mental illnesses, such as depression and anxiety, but no existing studies have examined the relationship between BED and the MD. We conducted a cross-sectional study of 1472 participants (71.3% women; mean age: 44.8 ± 12.7 years) at high risk of BED. An MD score (MED-score) was derived from a validated food frequency questionnaire, and BED was assessed by the Binge Eating Scale questionnaire (BES). Body mass index, waist circumference and total body fat (%) were assessed by anthropometric measurements. 376 (25.5%) cases of self-reported BED were identified; 11.1% of participants had good adherence to the MD (MED-score ≥ 9). After adjustment for age, gender, nutritional status, education, and physical activity level, a high MED-score was associated with lower odds of BED: odds ratios and 95% confidence intervals for successive levels of MED-score were 1 (reference), 0.77 (0.44, 1.36), 0.66 (0.37, 1.15), 0.50 (0.26, 0.96), and 0.45 (0.22, 0.55) (P for trend: …) among binge eaters. These results demonstrate an inverse association between the MD and the development of BED in a clinical setting among subjects at risk of BED. We should therefore be cautious about generalizing the results to the whole population, and reverse causality and confounding cannot be excluded as explanations. Further prospective studies are warranted. Copyright © 2014 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
Lee, Hoseok; Ahn, Joong Mo; Kang, Yusuhn; Oh, Joo Han; Lee, Eugene; Lee, Joon Woo; Kang, Heung Sik
2018-01-01
To compare the T1-weighted spectral presaturation with inversion-recovery sequences (T1 SPIR) with T2-weighted turbo spin-echo sequences (T2 TSE) on 3T magnetic resonance arthrography (MRA) in the evaluation of the subscapularis (SSC) tendon tear with arthroscopic findings as the reference standard. This retrospective study included 120 consecutive patients who had undergone MRA within 3 months between April and December 2015. Two musculoskeletal radiologists blinded to the arthroscopic results evaluated T1 SPIR and T2 TSE images in separate sessions for the integrity of the SSC tendon, examining normal/articular-surface partial-thickness tear (PTTa)/full-thickness tear (FTT). Diagnostic performance of T1 SPIR and T2 TSE was calculated with arthroscopic results as the reference standard, and sensitivity, specificity, and accuracy were compared using the McNemar test. Interobserver agreement was measured with kappa (κ) statistics. There were 74 SSC tendon tears (36 PTTa and 38 FTT) confirmed by arthroscopy. Significant differences were found in sensitivity and accuracy between T1 SPIR and T2 TSE using the McNemar test, with respective rates of 95.9-94.6% vs. 71.6-75.7% and 90.8-91.7% vs. 79.2-83.3% for detecting any tear; 55.3% vs. 31.6-34.2% and 85.8% vs. 78.3-79.2% for FTT; and 91.7-97.2% vs. 58.3-61.1% and 89% vs. 78-79.3% for PTTa. Interobserver agreement was almost perfect for T1 SPIR (κ = 0.839) and substantial for T2 TSE (κ = 0.769). T1-weighted spectral presaturation with inversion-recovery sequences is more sensitive and accurate than T2 TSE in detecting SSC tendon tears on 3T MRA.
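Sensitivity, specificity, and accuracy against an arthroscopic reference reduce to simple 2x2 counts. A sketch with hypothetical reader calls (not the study's data):

```python
import numpy as np

# Hypothetical reader calls vs. arthroscopic reference (1 = tear, 0 = intact)
truth  = np.array([1, 1, 1, 1, 0, 0, 0, 1, 0, 1])
reader = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])

tp = int(np.sum((reader == 1) & (truth == 1)))  # true positives
tn = int(np.sum((reader == 0) & (truth == 0)))  # true negatives
fp = int(np.sum((reader == 1) & (truth == 0)))  # false positives
fn = int(np.sum((reader == 0) & (truth == 1)))  # false negatives

sensitivity = tp / (tp + fn)        # 5/6: fraction of true tears called
specificity = tn / (tn + fp)        # 3/4: fraction of intact tendons called
accuracy = (tp + tn) / truth.size   # 8/10 overall
```

The McNemar comparison used in the study would then contrast two readers' (or two sequences') discordant calls on the same cases, rather than these marginal rates.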
Representing distance, consuming distance
DEFF Research Database (Denmark)
Larsen, Gunvor Riber
Title: Representing Distance, Consuming Distance Abstract: Distance is a condition for corporeal and virtual mobilities, for desired and actual travel, but yet it has received relatively little attention as a theoretical entity in its own right. Understandings of and assumptions about distance...... are being consumed in the contemporary society, in the same way as places, media, cultures and status are being consumed (Urry 1995, Featherstone 2007). An exploration of distance and its representations through contemporary consumption theory could expose what role distance plays in forming...
Guo, Xiaohui; Tresserra-Rimbau, Anna; Estruch, Ramón; Martínez-González, Miguel A.; Medina-Remón, Alexander; Fitó, Montserrat; Corella, Dolores; Salas-Salvadó, Jordi; Portillo, Maria Puy; Moreno, Juan J.; Pi-Sunyer, Xavier; Lamuela-Raventós, Rosa M.
2017-01-01
Overweight and obesity have been steadily increasing in recent years and currently represent a serious threat to public health. Few human studies have investigated the relationship between polyphenol intake and body weight. Our aim was to assess the relationship between urinary polyphenol levels and body weight. A cross-sectional study was performed with 573 participants from the PREDIMED (Prevención con Dieta Mediterránea) trial (ISRCTN35739639). Total polyphenol levels were measured by a reliable biomarker, total urinary polyphenol excretion (TPE), determined by the Folin-Ciocalteu method in urine samples. Participants were categorized into five groups according to their TPE at the fifth year. Multiple linear regression models were used to assess the relationships between TPE and obesity parameters: body weight (BW), body mass index (BMI), waist circumference (WC), and waist-to-height ratio (WHtR). After a five-year follow-up, significant inverse correlations were observed between TPE at the fifth year and BW (β = −1.004; 95% CI: −1.634 to −0.375, p = 0.002), BMI (β = −0.320; 95% CI: −0.541 to −0.098, p = 0.005), WC (β = −0.742; 95% CI: −1.326 to −0.158, p = 0.013), and WHtR (β = −0.408; 95% CI: −0.788 to −0.028, p = 0.036) after adjustments for potential confounders. To conclude, a greater polyphenol intake may thus contribute to reducing body weight in elderly people at high cardiovascular risk. PMID:28467383
International Nuclear Information System (INIS)
Azad, Rajiv; Tayal, Mohit; Azad, Sheenam; Sharma, Garima; Srivastava, Rajendra Kumar
2017-01-01
To compare the contrast-enhanced fluid-attenuated inversion recovery (CE-FLAIR), the CE T1-weighted (CE-T1W) sequence with fat suppression (FS) and magnetization transfer (MT) for early detection and characterization of infectious meningitis. Fifty patients and 10 control subjects were evaluated with the CE-FLAIR and the CE-T1W sequences with FS and MT. Qualitative assessment was done by two observers for presence and grading of abnormal leptomeningeal enhancement. Quantitative assessment included computation of net meningeal enhancement, using single pixel signal intensity software. A newly devised FLAIR based scoring system, based on certain imaging features including ventricular dilatation, ependymal enhancement, infarcts and subdural effusions was used to indicate the etiology. Data were analysed using the Student's t test, Cohen's Kappa coefficient, Pearson's correlation coefficient, the intraclass correlation coefficient, one way analysis of variance, and Fisher's exact test with Bonferroni correction as the post hoc test. The CE-FLAIR sequence demonstrated a better sensitivity (100%), diagnostic accuracy (95%), and a stronger correlation with the cerebrospinal fluid, total leukocyte count (r = 0.75), protein (r = 0.77), adenosine deaminase (r = 0.81) and blood glucose (r = -0.6) values compared to the CE-T1W sequences. Qualitative grades and quantitative meningeal enhancement on the CE-FLAIR sequence were also significantly greater than those on the other sequences. The FLAIR based scoring system yielded a diagnostic accuracy of 91.6% and a sensitivity of 96%. A strong inverse Pearson's correlation (r = -0.95) was found between the assigned score and patient's Glasgow Coma Scale at the time of admission. The CE-FLAIR sequence is better suited for evaluating infectious meningitis and could be included as a part of the routine MR imaging protocol
Energy Technology Data Exchange (ETDEWEB)
Azad, Rajiv; Tayal, Mohit; Azad, Sheenam; Sharma, Garima; Srivastava, Rajendra Kumar [SGRR Institute of Medical and Health Sciences, Patel Nagar, Dehradun (India)
2017-11-15
To compare the contrast-enhanced fluid-attenuated inversion recovery (CE-FLAIR), the CE T1-weighted (CE-T1W) sequence with fat suppression (FS) and magnetization transfer (MT) for early detection and characterization of infectious meningitis. Fifty patients and 10 control subjects were evaluated with the CE-FLAIR and the CE-T1W sequences with FS and MT. Qualitative assessment was done by two observers for presence and grading of abnormal leptomeningeal enhancement. Quantitative assessment included computation of net meningeal enhancement, using single pixel signal intensity software. A newly devised FLAIR based scoring system, based on certain imaging features including ventricular dilatation, ependymal enhancement, infarcts and subdural effusions was used to indicate the etiology. Data were analysed using the Student's t test, Cohen's Kappa coefficient, Pearson's correlation coefficient, the intraclass correlation coefficient, one way analysis of variance, and Fisher's exact test with Bonferroni correction as the post hoc test. The CE-FLAIR sequence demonstrated a better sensitivity (100%), diagnostic accuracy (95%), and a stronger correlation with the cerebrospinal fluid, total leukocyte count (r = 0.75), protein (r = 0.77), adenosine deaminase (r = 0.81) and blood glucose (r = -0.6) values compared to the CE-T1W sequences. Qualitative grades and quantitative meningeal enhancement on the CE-FLAIR sequence were also significantly greater than those on the other sequences. The FLAIR based scoring system yielded a diagnostic accuracy of 91.6% and a sensitivity of 96%. A strong inverse Pearson's correlation (r = -0.95) was found between the assigned score and patient's Glasgow Coma Scale at the time of admission. The CE-FLAIR sequence is better suited for evaluating infectious meningitis and could be included as a part of the routine MR imaging protocol.
Ding, Qianlan; Wu, Xi; Lu, Yeling; Chen, Changming; Shen, Rui; Zhang, Xi; Jiang, Zhengwen; Wang, Xuefeng
2016-07-01
To develop a digitalized intron 22 inversion (Inv22) detection method for patients with severe haemophilia A. The design included two tests: a genotyping test comprising two multiplex pre-amplifications by LD-PCR (PLP), with two combinations of five primers, to amplify wild-type and chimeric int22h alleles; and a carrier mosaicism test, identical to the genotyping test except that only chimeric int22h alleles were amplified, by removing one primer from each of the two combinations. AccuCopy detection was used to quantify the PLP products. The PLP product patterns in the genotyping test allowed all known Inv22 variants to be identified, and the quantitative patterns accurately represented the product patterns. The results for 164 samples detected by the genotyping test were consistent with those obtained by LD-PCR detection. The limit of detection (LOD) of the carrier mosaicism test was at least 2% of heterozygous cells carrying Inv22. When the test was performed in two obligate mothers who were Inv22-negative, from two sporadic pedigrees, the mosaic rates in blood and hair root of the mother from pedigree 1 were 8.3% and >20%, respectively, while negative results were obtained in pedigree 2. The AccuCopy quantification combined with PLP (AQ-PLP) method was confirmed to be rapid and reliable for genotyping Inv22 and highly sensitive for detecting carrier mosaicism. Copyright © 2016 Elsevier B.V. All rights reserved.
Fast Exact Euclidean Distance (FEED) Transformation
Schouten, Theo; Kittler, J.; van den Broek, Egon; Petrou, M.; Nixon, M.
2004-01-01
Fast Exact Euclidean Distance (FEED) transformation is introduced, starting from the inverse of the distance transformation. The prohibitive computational cost of a naive implementation of the traditional Euclidean Distance Transformation is tackled by three operations: restriction of both the number
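For contrast with FEED's speed goal, the naive Euclidean distance transform it improves upon can be written directly; this brute-force sketch scans every object pixel for every background pixel:

```python
import numpy as np

def naive_edt(mask):
    """Brute-force Euclidean distance transform: for each background pixel,
    the exact distance to the nearest object (True) pixel.  Cost is
    O(background pixels x object pixels) -- the expense FEED avoids."""
    obj = np.argwhere(mask)                  # coordinates of object pixels
    out = np.zeros(mask.shape, dtype=float)
    for idx in np.ndindex(mask.shape):
        if not mask[idx]:
            d = np.sqrt(((obj - idx) ** 2).sum(axis=1))
            out[idx] = d.min()
    return out

# Single object pixel at the centre of a 5x5 grid
m = np.zeros((5, 5), dtype=bool)
m[2, 2] = True
d = naive_edt(m)   # d[2, 4] == 2.0, d[0, 0] == sqrt(8)
```

FEED instead propagates distances outward from object pixels and prunes the pixels each one can influence, which is where its speedup comes from.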
Energy Technology Data Exchange (ETDEWEB)
Donmez, F.Y.; Aslan, H.; Coskun, M. (Dept. of Radiology, Faculty of Medicine, Baskent Univ., Ankara (Turkey))
2009-04-15
Background: Acute disseminated encephalomyelitis (ADEM) may be a rapidly progressive disease with different clinical outcomes. Purpose: To investigate the radiological findings of fulminant ADEM on diffusion-weighted imaging (DWI) and fluid-attenuated inversion recovery (FLAIR) images, and to correlate these findings with clinical outcome. Material and Methods: Initial and follow-up magnetic resonance imaging (MRI) scans in eight patients were retrospectively evaluated for the distribution of lesions on FLAIR images and the presence of hemorrhage or contrast enhancement. DWI was evaluated for cytotoxic versus vasogenic edema. The clinical records were analyzed, and MRI results and clinical outcome were correlated. Results: Four of the eight patients died, three had full recovery, and one had residual cortical blindness. The distribution of the hyperintense lesions on the FLAIR sequence was as follows: frontal (37.5%), parietal (50%), temporal (37.5%), occipital (62.5%), basal ganglia (50%), pons (37.5%), mesencephalon (37.5%), and cerebellum (50%). Three of the patients who died had brainstem involvement. Two patients had cytotoxic edema; one of them died and the other developed cortical blindness. Six patients had vasogenic edema; three of them had a rapid progression to coma and died, and three recovered. Conclusion: DWI is not always helpful for evaluating the evolution or predicting the outcome of ADEM. However, the extension of the lesions, particularly brainstem involvement, may have an influence on the prognosis.
International Nuclear Information System (INIS)
Klang, Eyal; Aharoni, Dvora; Rimon, Uri; Eshed, Iris; Hermann, Kay-Geert; Herman, Amir; Shazar, Nachshon
2014-01-01
To assess the contribution of contrast material in detecting and evaluating enthesitis of pelvic entheses by MRI. Sixty-seven hip or pelvic 1.5-T MRIs (30:37 male:female, mean age: 53 years) were retrospectively evaluated for the presence of hamstring and gluteus medius (GM) enthesitis by two readers (a resident and an experienced radiologist). Short tau inversion recovery (STIR) and T1-weighted pre- and post-contrast (T1+Gd) images were evaluated by each reader in two sessions. A consensus reading by two senior radiologists was regarded as the gold standard. Clinical data were retrieved from patients' referral forms and medical files. Cohen's kappa was used to calculate intra- and inter-observer agreement. Diagnostic properties were calculated against the gold standard reading. A total of 228 entheses were evaluated. The gold standard analysis diagnosed 83 (36 %) enthesitis lesions. Intra-reader reliability for the experienced reader was significantly (p = 0.0001) higher on the T1+Gd images than on the STIR images (hamstring: k = 0.84/0.45, GM: k = 0.84/0.47). Sensitivity and specificity increased from 0.74/0.8 on the STIR images to 0.87/0.9 on the T1+Gd sequences. Intra-reader reliability for the inexperienced reader was lower (p > 0.05). Evidence showing that contrast material improves the reliability, sensitivity, and specificity of detecting enthesitis supports its use in this setting. (orig.)
Analytic processing of distance.
Dopkins, Stephen; Galyer, Darin
2018-01-01
How does a human observer extract from the distance between two frontal points the component corresponding to an axis of a rectangular reference frame? To find out we had participants classify pairs of small circles, varying on the horizontal and vertical axes of a computer screen, in terms of the horizontal distance between them. A response signal controlled response time. The error rate depended on the irrelevant vertical as well as the relevant horizontal distance between the test circles, with the relevant distance effect being larger than the irrelevant distance effect. The results implied that the horizontal distance between the test circles was imperfectly extracted from the overall distance between them. The results supported an account, derived from the Exemplar Based Random Walk model (Nosofsky & Palmeri, 1997), under which distance classification is based on the overall distance between the test circles, with relevant distance being extracted from overall distance to the extent that the relevant and irrelevant axes are differentially weighted so as to reduce the contribution of irrelevant distance to overall distance. The results did not support an account, derived from the General Recognition Theory (Ashby & Maddox, 1994), under which distance classification is based on the relevant distance between the test circles, with the irrelevant distance effect arising because a test circle's perceived location on the relevant axis depends on its location on the irrelevant axis, and with relevant distance being extracted from overall distance to the extent that this dependency is absent. Copyright © 2017 Elsevier B.V. All rights reserved.
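The differential axis weighting invoked in the supported account can be expressed as a weighted Euclidean distance in which the irrelevant axis contributes less; a small sketch (weight values chosen purely for illustration):

```python
import numpy as np

def weighted_distance(p, q, w_rel=1.0, w_irr=0.2):
    """Overall distance with differential axis weighting: down-weighting
    the irrelevant (vertical) axis makes overall distance approximate the
    relevant (horizontal) component."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    return np.sqrt(w_rel * dx ** 2 + w_irr * dy ** 2)

a, b = (0.0, 0.0), (3.0, 4.0)
d_equal = weighted_distance(a, b, 1.0, 1.0)  # plain Euclidean: 5.0
d_down = weighted_distance(a, b, 1.0, 0.2)   # pulled toward |dx| = 3.0
```

With equal weights the measure is ordinary Euclidean distance; as the irrelevant weight shrinks toward zero, the measure converges on the relevant horizontal separation, mirroring the extraction process the account describes.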
Fahed, Robert; Lecler, Augustin; Sabben, Candice; Khoury, Naim; Ducroux, Célina; Chalumeau, Vanessa; Botta, Daniele; Kalsoum, Erwah; Boisseau, William; Duron, Loïc; Cabral, Dominique; Koskas, Patricia; Benaïssa, Azzedine; Koulakian, Hasmik; Obadia, Michael; Maïer, Benjamin; Weisenburger-Lile, David; Lapergue, Bertrand; Wang, Adrien; Redjem, Hocine; Ciccio, Gabriele; Smajda, Stanislas; Desilles, Jean-Philippe; Mazighi, Mikaël; Ben Maacha, Malek; Akkari, Inès; Zuber, Kevin; Blanc, Raphaël; Raymond, Jean; Piotin, Michel
2018-01-01
We aimed to study the intrarater and interrater agreement of clinicians attributing DWI-ASPECTS (Diffusion-Weighted Imaging-Alberta Stroke Program Early Computed Tomography Scores) and DWI-FLAIR (Diffusion-Weighted Imaging-Fluid Attenuated Inversion Recovery) mismatch in patients with acute ischemic stroke referred for mechanical thrombectomy. Eighteen raters independently scored anonymized magnetic resonance imaging scans of 30 participants from a multicentre thrombectomy trial, in 2 different reading sessions. Agreement was measured using Fleiss κ and Cohen κ statistics. Interrater agreement for DWI-ASPECTS was slight (κ=0.17 [0.14-0.21]). Four raters (22.2%) had a substantial (or higher) intrarater agreement. Dichotomization of the DWI-ASPECTS (0-5 versus 6-10 or 0-6 versus 7-10) increased the interrater agreement to a substantial level (κ=0.62 [0.48-0.75] and 0.68 [0.55-0.79], respectively) and more raters reached a substantial (or higher) intrarater agreement (17/18 raters [94.4%]). Interrater agreement for DWI-FLAIR mismatch was moderate (κ=0.43 [0.33-0.57]); 11 raters (61.1%) reached a substantial (or higher) intrarater agreement. Agreement between clinicians assessing DWI-ASPECTS and DWI-FLAIR mismatch may not be sufficient to make repeatable clinical decisions in mechanical thrombectomy. The dichotomization of the DWI-ASPECTS (0-5 versus 6-10 or 0-6 versus 7-10) improved interrater and intrarater agreement; however, its relevance for patient selection for mechanical thrombectomy needs to be validated in a randomized trial. © 2017 American Heart Association, Inc.
Vock, David M; Wolfson, Julian; Bandyopadhyay, Sunayan; Adomavicius, Gediminas; Johnson, Paul E; Vazquez-Benitez, Gabriela; O'Connor, Patrick J
2016-06-01
Models for predicting the probability of experiencing various health outcomes or adverse events over a certain time frame (e.g., having a heart attack in the next 5 years) based on individual patient characteristics are important tools for managing patient care. Electronic health data (EHD) are appealing sources of training data because they provide access to large amounts of rich individual-level data from present-day patient populations. However, because EHD are derived by extracting information from administrative and clinical databases, some fraction of subjects will not be under observation for the entire time frame over which one wants to make predictions; this loss to follow-up is often due to disenrollment from the health system. For subjects without complete follow-up, whether or not they experienced the adverse event is unknown, and in statistical terms the event time is said to be right-censored. Most machine learning approaches to the problem have been relatively ad hoc; for example, common approaches for handling observations in which the event status is unknown include (1) discarding those observations, (2) treating them as non-events, or (3) splitting those observations into two observations: one where the event occurs and one where the event does not. In this paper, we present a general-purpose approach to account for right-censored outcomes using inverse probability of censoring weighting (IPCW). We illustrate how IPCW can easily be incorporated into a number of existing machine learning algorithms used to mine big health care data including Bayesian networks, k-nearest neighbors, decision trees, and generalized additive models. We then show that our approach leads to better calibrated predictions than the three ad hoc approaches when applied to predicting the 5-year risk of experiencing a cardiovascular adverse event, using EHD from a large U.S. Midwestern healthcare system. Copyright © 2016 Elsevier Inc. All rights reserved.
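The core IPCW construction is easy to illustrate. The sketch below is our own minimal rendering (function names and the weighting convention, 1/G(t) with G a Kaplan-Meier estimate of the censoring distribution, follow the general IPCW literature and are not taken from the paper; tie handling is deliberately simplified):

```python
import math

def km_censoring_survival(times, events):
    """Kaplan-Meier estimate of the censoring survival curve G(t):
    censoring (events[i] == 0) is treated as the 'event' of interest.
    Returns a step function as a list of (time, G just after time)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    steps = []
    for i in order:
        if events[i] == 0:           # a subject was censored here
            surv *= (at_risk - 1) / at_risk
        steps.append((times[i], surv))
        at_risk -= 1
    return steps

def ipcw_weights(times, events, horizon):
    """IPCW weights for learning a binary 'event by `horizon`' outcome.
    Subjects censored before the horizon have unknown outcome and get
    weight 0; the rest are up-weighted by 1/G to compensate."""
    steps = km_censoring_survival(times, events)

    def G(t):                        # step-function lookup
        g = 1.0
        for s, v in steps:
            if s <= t:
                g = v
            else:
                break
        return g

    weights = []
    for t, e in zip(times, events):
        if t >= horizon:             # followed past the horizon: outcome known
            weights.append(1.0 / G(horizon))
        elif e == 1:                 # event observed before the horizon
            weights.append(1.0 / G(t))
        else:                        # censored before the horizon
            weights.append(0.0)
    return weights
```

These weights can then be passed to any learner that accepts per-sample weights (as the paper does for Bayesian networks, k-NN, trees, and GAMs).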
Shobugawa, Yugo; Wiafe, Seth A; Saito, Reiko; Suzuki, Tsubasa; Inaida, Shinako; Taniguchi, Kiyosu; Suzuki, Hiroshi
2012-06-19
Annual influenza epidemics occur worldwide, resulting in considerable morbidity and mortality. The spreading pattern of influenza is not well understood because analysis is often hampered by the quality of surveillance data, which limits its reliability. In Japan, influenza is reported on a weekly basis from 5,000 hospitals and clinics nationwide under the scheme of the National Infectious Disease Surveillance. The collected data are available to the public as weekly reports, which are summarized into the number of patient visits per hospital or clinic in each of the 47 prefectures. From these surveillance data, we analyzed the spatial spreading patterns of influenza epidemics using the weekly weighted standard distance (WSD) from the 1999/2000 through 2008/2009 influenza seasons in Japan. WSD is a single numerical value representing the spatial compactness of an influenza outbreak; it is small for a clustered distribution and large for a dispersed distribution. We demonstrated that the weekly WSD value, a measure of the spatial compactness of the distribution of reported influenza cases, decreased to its lowest value before each epidemic peak in nine of the ten seasons analyzed. The duration between the lowest WSD week and the peak week of influenza cases ranged from minus one week to twenty weeks. The duration showed a significant negative association with the proportion of influenza A/H3N2 cases in the early phase of each outbreak (correlation coefficient -0.75, P = 0.012) and a significant positive association with the proportion of influenza B cases in the early phase (correlation coefficient 0.64, P = 0.045), but was positively (though not significantly) correlated with the proportion of influenza A/H1N1 strain cases. It is assumed that the lowest WSD values just before influenza peaks are due to local outbreaks, which result in small standard distance values. As influenza cases disperse nationwide and an epidemic reaches its peak, WSD value changed to be a
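The weighted standard distance statistic is simple to compute from case counts and location coordinates. A minimal sketch (function name and toy coordinates are ours; in the study, points would be prefecture locations and weights the weekly case counts):

```python
import math

def weighted_standard_distance(points, weights):
    """Weighted standard distance: the RMS distance of weighted points
    from their weighted centroid. Small for clustered distributions,
    large for dispersed ones."""
    total = sum(weights)
    cx = sum(w * x for (x, _), w in zip(points, weights)) / total
    cy = sum(w * y for (_, y), w in zip(points, weights)) / total
    var = sum(w * ((x - cx) ** 2 + (y - cy) ** 2)
              for (x, y), w in zip(points, weights)) / total
    return math.sqrt(var)
```

Tracking this value week by week yields the WSD time series whose pre-peak minimum the study exploits as an early-warning signal.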
International Nuclear Information System (INIS)
Stehling, C.; Niederstadt, T.; Kraemer, S.; Kugel, H.; Schwindt, W.; Heindel, W.; Bachmann, R.
2005-01-01
Purpose: The increased T1 relaxation times at 3.0 Tesla lead to a reduced T1 contrast, requiring adaptation of imaging protocols for high magnetic fields. This prospective study assesses the performance of three techniques for T1-weighted imaging (T1w) at 3.0 T with regard to gray-white differentiation and contrast-to-noise-ratio (CNR). Materials and Methods: Thirty-one patients were examined at a 3.0 T system with axial T1w inversion recovery (IR), spin-echo (SE) and gradient echo (GE) sequences and after contrast enhancement (CE) with CE-SE and CE-GE sequences. For qualitative analysis, the images were ranked with regard to artifacts, gray-white differentiation, image noise and overall diagnostic quality. For quantitative analysis, the CNR was calculated, and cortex and basal ganglia were compared with the white matter. Results: In the qualitative analysis, IR was judged superior to SE and GE for gray-white differentiation, image noise and overall diagnostic quality, but inferior to the GE sequence with regard to artifacts. CE-GE proved superior to CE-SE in all categories. In the quantitative analysis, CNR of the basal ganglia was highest for IR, followed by GE and SE. For the CNR of the cortex, no significant difference was found between IR (16.9) and GE (15.4), but both were superior to SE (9.4). The CNR of the cortex was significantly higher for CE-GE compared to CE-SE (12.7 vs. 7.6, p<0.001), but the CNR of the basal ganglia was not significantly different. Conclusion: For unenhanced T1w imaging at 3.0 T, the IR technique is, despite increased artifacts, the method of choice due to its superior gray-white differentiation and best overall image quality. For CE-studies, GE sequences are recommended. For cerebral imaging, SE sequences give unsatisfactory results at 3.0 T. (orig.)
Wang, Zheng-Xin; Li, Dan-Dan; Zheng, Hong-Hao
2018-01-30
In China's industrialization process, the effective regulation of energy and environment can promote the positive externality of energy consumption while reducing the negative externality, which is an important means for realizing the sustainable development of an economic society. The study puts forward an improved technique for order preference by similarity to an ideal solution based on entropy weight and Mahalanobis distance (briefly referred to as E-M-TOPSIS). The performance of the approach was verified to be satisfactory. Using the traditional and improved TOPSIS methods separately, the study carried out empirical appraisals of the external performance of China's energy regulation during 1999~2015. The results show that the correlation between the performance indexes causes a significant difference between the appraisal results of E-M-TOPSIS and traditional TOPSIS. E-M-TOPSIS takes the correlation between indexes into account and generally softens the closeness degree compared with traditional TOPSIS. Moreover, it makes the relative closeness degree fluctuate within a small amplitude. The results conform to the practical condition of China's energy regulation, and therefore E-M-TOPSIS is favorably applicable for the external performance appraisal of energy regulation. Additionally, the external economic performance and social responsibility performance (including environmental and energy safety performances) based on E-M-TOPSIS exhibit significantly different fluctuation trends. The external economic performance fluctuates dramatically with a larger amplitude, while the social responsibility performance exhibits a relatively stable interval fluctuation. This indicates that, compared to the social responsibility performance, the fluctuation of external economic performance is more sensitive to energy regulation.
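The two ingredients of the E-M-TOPSIS idea, entropy-based criterion weights and a Mahalanobis (rather than Euclidean) distance to the ideal and anti-ideal solutions, can be sketched minimally. This is our own simplified two-criteria illustration (the paper's full procedure, including normalization and weight incorporation into the distance, is more involved):

```python
import math

def entropy_weights(X):
    """Shannon-entropy criterion weights: criteria whose values vary
    more across alternatives (lower entropy) receive larger weights.
    X is a decision matrix, rows = alternatives, columns = criteria."""
    m, n = len(X), len(X[0])
    w = []
    for j in range(n):
        col = [X[i][j] for i in range(m)]
        total = sum(col)
        p = [v / total for v in col]
        e = -sum(v * math.log(v) for v in p if v > 0) / math.log(m)
        w.append(1.0 - e)            # degree of divergence
    s = sum(w)
    return [v / s for v in w]

def mahalanobis_2d(a, b, cov):
    """Mahalanobis distance between two 2-vectors under covariance
    `cov` (explicit 2x2 inverse keeps the sketch dependency-free)."""
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = [[cov[1][1] / det, -cov[0][1] / det],
           [-cov[1][0] / det, cov[0][0] / det]]
    d = [a[0] - b[0], a[1] - b[1]]
    q = (d[0] * (inv[0][0] * d[0] + inv[0][1] * d[1])
         + d[1] * (inv[1][0] * d[0] + inv[1][1] * d[1]))
    return math.sqrt(q)

def closeness(alt, ideal, anti_ideal, cov):
    """TOPSIS relative closeness: 1 at the ideal, 0 at the anti-ideal."""
    d_pos = mahalanobis_2d(alt, ideal, cov)
    d_neg = mahalanobis_2d(alt, anti_ideal, cov)
    return d_neg / (d_pos + d_neg)
```

With an identity covariance the Mahalanobis distance reduces to the Euclidean one; correlated criteria change the distances and hence, as the paper reports, the closeness-degree ranking.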
National Research Council Canada - National Science Library
Braddock, Joseph
1997-01-01
A study reviewing the existing Army Distance Learning Plan (ADLP) and current Distance Learning practices, with a focus on the Army's training and educational challenges and the benefits of applying Distance Learning techniques...
Directory of Open Access Journals (Sweden)
Eduardo Terrero Matos
2015-04-01
Full Text Available The use of the wind energy potential requires obtaining sufficient and appropriate measurements of wind velocity and direction. From these measurements, the behavior of these variables is modeled and the parameters characterizing the potential are calculated; with these results the wind farms are designed, selecting the most suitable wind turbines, determining their spatial locations and designing the technological infrastructure. The present research aims to solve one of the most common practical problems in this process: the absence of sufficient measured data. The proposed solution is based on the estimation of the missing data by means of the Inverse of a Power of the Distance Method, which is applied to a case study named Colina 4. The results show that the method is viable for any similar case and that the estimated values are consistent with the measured data.
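The inverse-distance-to-a-power estimator that this record (and the interpolation studies above) rely on is compact. A minimal sketch, with names and toy stations of our own choosing:

```python
import math

def idw_estimate(stations, target, power=2.0):
    """Inverse distance weighting: estimate the value at `target`
    from (x, y, value) stations. `power` is the distance exponent;
    larger powers make nearby stations dominate more strongly."""
    num = den = 0.0
    for x, y, v in stations:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return v                 # exact hit: return the observation
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den
```

The same routine fills gaps in a wind-speed series (this record), patches rainfall records, or interpolates groundwater-quality parameters; only the meaning of the station values changes.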
International Nuclear Information System (INIS)
Namatame, Hirofumi; Taniguchi, Masaki
1994-01-01
Photoelectron spectroscopy is regarded as the most powerful means of measuring the occupied electron states almost completely. Inverse photoelectron spectroscopy, on the other hand, is a technique for measuring unoccupied electron states by using the inverse process of photoelectron spectroscopy, and in principle experiments similar to photoelectron spectroscopy become feasible. The experimental technology for inverse photoelectron spectroscopy has been developed vigorously by many research groups. At present, work continues on improving the resolution of inverse photoelectron spectroscopy and on developing inverse photoelectron spectrometers with variable light energy, but inverse photoelectron spectrometers for the vacuum ultraviolet region are not commercially available. This report describes the principle of inverse photoelectron spectroscopy and the present state of the spectrometers, and explores the direction of future development. As experimental equipment, electron guns, light detectors and so on are explained. As examples of experiments, inverse photoelectron spectroscopy of semimagnetic semiconductors and resonance inverse photoelectron spectroscopy are reported. (K.I.)
Inversion: A Most Useful Kind of Transformation.
Dubrovsky, Vladimir
1992-01-01
The transformation assigning to every point its inverse with respect to a circle with given radius and center is called an inversion. Discusses inversion with respect to points, circles, angles, distances, space, and the parallel postulate. Exercises related to these topics are included. (MDH)
Ting-Yu Chen
2014-01-01
Interval type-2 fuzzy sets (T2FSs) with interval membership grades are suitable for dealing with imprecision or uncertainties in many real-world problems. In the Interval type-2 fuzzy context, the aim of this paper is to develop an interactive signed distance-based simple additive weighting (SAW) method for solving multiple criteria group decision-making problems with linguistic ratings and incomplete preference information. This paper first formulates a group decision-making problem with unc...
Ingram, WT
2012-01-01
Inverse limits provide a powerful tool for constructing complicated spaces from simple ones. They also turn the study of a dynamical system consisting of a space and a self-map into a study of a (likely more complicated) space and a self-homeomorphism. In four chapters along with an appendix containing background material the authors develop the theory of inverse limits. The book begins with an introduction through inverse limits on [0,1] before moving to a general treatment of the subject. Special topics in continuum theory complete the book. Although it is not a book on dynamics, the influen
Willis, Erik A; Szabo-Reed, Amanda N; Ptomey, Lauren T; Steger, Felicia L; Honas, Jeffery J; Al-Hihi, Eyad M; Lee, Robert; Vansaghi, Lisa; Washburn, Richard A; Donnelly, Joseph E
2016-03-01
Management of obesity in the context of the primary care physician visit is of limited efficacy in part because of limited ability to engage participants in sustained behavior change between physician visits. Therefore, healthcare systems must find methods to address obesity that reach beyond the walls of clinics and hospitals and address the issues of lifestyle modification in a cost-conscious way. The dramatic increase in technology and online social networks may present healthcare providers with innovative ways to deliver weight management programs that could have an impact on health care at the population level. A randomized study will be conducted on 70 obese adults (BMI 30.0-45.0 kg/m(2)) to determine if weight loss (6 months) is equivalent between weight management interventions utilizing behavioral strategies by either a conference call or social media approach. The primary outcome, body weight, will be assessed at baseline and 6 months. Secondary outcomes including waist circumference, energy and macronutrient intake, and physical activity will be assessed on the same schedule. In addition, a cost analysis and process evaluation will be completed. Copyright © 2016 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Henning Höfig
2014-11-01
Full Text Available Förster resonance energy transfer (FRET is an important tool for studying the structural and dynamical properties of biomolecules. The fact that both the internal dynamics of the biomolecule and the movements of the biomolecule-attached dyes can occur on similar timescales of nanoseconds is an inherent problem in FRET studies. By performing single-molecule FRET-filtered lifetime measurements, we are able to characterize the amplitude of the motions of fluorescent probes attached to double-stranded DNA standards by means of flexible linkers. With respect to previously proposed experimental approaches, we improved the precision and the accuracy of the inter-dye distance distribution parameters by filtering out the donor-only population with pulsed interleaved excitation. A coarse-grained model is employed to reproduce the experimentally determined inter-dye distance distributions. This approach can easily be extended to intrinsically flexible proteins allowing, under certain conditions, to decouple the macromolecule amplitude of motions from the contribution of the dye linkers.
Härkänen, Tommi; Kaikkonen, Risto; Virtala, Esa; Koskinen, Seppo
2014-11-06
To assess the nonresponse rates in a questionnaire survey with respect to administrative register data, and to correct the bias statistically. The Finnish Regional Health and Well-being Study (ATH) in 2010 was based on a national sample and several regional samples. Missing data analysis was based on socio-demographic register data covering the whole sample. Inverse probability weighting (IPW) and doubly robust (DR) methods were estimated using the logistic regression model, which was selected using the Bayesian information criterion. The crude, weighted and true self-reported turnout in the 2008 municipal election and prevalences of entitlements to specially reimbursed medication, and the crude and weighted body mass index (BMI) means were compared. The IPW method appeared to remove a relatively large proportion of the bias compared to the crude prevalence estimates of the turnout and the entitlements to specially reimbursed medication. Several demographic factors were shown to be associated with missing data, but few interactions were found. Our results suggest that the IPW method can improve the accuracy of results of a population survey, and the model selection provides insight into the structure of missing data. However, health-related missing data mechanisms are beyond the scope of statistical methods, which mainly rely on socio-demographic information to correct the results.
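The study estimates response probabilities with a logistic regression model; a simpler weighting-class sketch conveys the same inverse-probability idea (names and toy data are ours, and the cell-based estimator is a stand-in for the paper's model-based one):

```python
from collections import Counter

def ipw_cell_weights(cells, responded):
    """Weighting-class inverse probability weights: each respondent is
    weighted by 1 / (response rate of their socio-demographic cell);
    nonrespondents get weight 0."""
    n_cell = Counter(cells)                               # sampled per cell
    r_cell = Counter(c for c, r in zip(cells, responded) if r)
    return [n_cell[c] / r_cell[c] if r else 0.0
            for c, r in zip(cells, responded)]

def weighted_mean(values, weights):
    """IPW estimate of a population mean from respondents only."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)
```

If, say, young respondents answer at half the rate of older ones, their answers are counted twice as heavily, which is what removes the nonresponse bias in crude prevalence estimates such as election turnout.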
Directory of Open Access Journals (Sweden)
Robert F. Love
2001-01-01
Full Text Available Distance predicting functions may be used in a variety of applications for estimating travel distances between points. To evaluate the accuracy of a distance predicting function and to determine its parameters, a goodness-of-fit criterion is employed. AD (Absolute Deviations), SD (Squared Deviations) and NAD (Normalized Absolute Deviations) are the three criteria most commonly employed in practice. In the literature some assumptions have been made about the properties of each criterion. In this paper, we present statistical analyses performed to compare the three criteria from different perspectives. For this purpose, we employ the ℓkpθ-norm as the distance predicting function, and statistically compare the three criteria by using normalized absolute prediction error distributions in seventeen geographical regions. We find that there exist no significant differences between the criteria. However, since the criterion SD has desirable properties in terms of distance modelling procedures, we suggest its use in practice.
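The three criteria are easy to state concretely. A minimal sketch, using a plain weighted ℓp function as the distance predictor (a simplified stand-in for the paper's ℓkpθ-norm, which also involves a rotation parameter; all names are ours):

```python
def lp_predict(a, b, k=1.0, p=2.0):
    """Simple distance predicting function: k times the l_p norm of a - b."""
    return k * (abs(a[0] - b[0]) ** p + abs(a[1] - b[1]) ** p) ** (1.0 / p)

def fit_criteria(pairs, actual, k, p):
    """AD (absolute), SD (squared) and NAD (normalized absolute)
    deviations of predicted vs. actual travel distances."""
    ad = sd = nad = 0.0
    for (a, b), d in zip(pairs, actual):
        err = lp_predict(a, b, k, p) - d
        ad += abs(err)
        sd += err ** 2
        nad += abs(err) / d          # normalize by the actual distance
    return ad, sd, nad
```

Fitting the function's parameters means minimizing one of these three sums over a sample of known point pairs and actual travel distances; the paper's finding is that the choice among them matters less than assumed.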
DEFF Research Database (Denmark)
Nordlund, David; Heiberg, Einar; Carlsson, Marcus
2016-01-01
Background - Contrast-enhanced steady state free precession (CE-SSFP) and T2-weighted short tau inversion recovery (T2-STIR) have been clinically validated to estimate myocardium at risk (MaR) by cardiovascular magnetic resonance while using myocardial perfusion single-photon emission computed tomography as reference standard. Myocardial perfusion single-photon emission computed tomography has been used to describe the coronary perfusion territories during myocardial ischemia. Compared with myocardial perfusion single-photon emission computed tomography, cardiovascular magnetic resonance offers … to show the main coronary perfusion territories using CE-SSFP and T2-STIR. The good agreement between CE-SSFP and T2-STIR from this study and myocardial perfusion single-photon emission computed tomography from previous studies indicates that these 3 methods depict MaR accurately in individual patients …
Directory of Open Access Journals (Sweden)
Hendriksen Ingrid JM
2006-05-01
Full Text Available Abstract Background The prevalence of overweight is increasing and its consequences will cause a major public health burden in the near future. Cost-effective interventions for weight control among the general population are therefore needed. The ALIFE@Work study is investigating a novel lifestyle intervention, aimed at the working population, with individual counselling through either phone or e-mail. This article describes the design of the study and the participant flow up to and including randomisation. Methods/Design ALIFE@Work is a controlled trial, with randomisation to three arms: a control group, a phone based intervention group and an internet based intervention group. The intervention takes six months and is based on a cognitive behavioural approach, addressing physical activity and diet. It consists of 10 lessons with feedback from a personal counsellor, either by phone or e-mail, between each lesson. Lessons contain educational content combined with behaviour change strategies. Assignments in each lesson teach the participant to apply these strategies to every day life. The study population consists of employees from seven Dutch companies. The most important inclusion criteria are having a body mass index (BMI) ≥ 25 kg/m2 and being an employed adult. Primary outcomes of the study are body weight and BMI, diet and physical activity. Other outcomes are: perceived health; empowerment; stage of change and self-efficacy concerning weight control, physical activity and eating habits; work performance/productivity; waist circumference, sum of skin folds, blood pressure, total blood cholesterol level and aerobic fitness. A cost-utility- and a cost-effectiveness analysis will be performed as well. Physiological outcomes are measured at baseline and after six and 24 months. Other outcomes are measured by questionnaire at baseline and after six, 12, 18 and 24 months. Statistical analyses for short term (six month) results are performed with
Directory of Open Access Journals (Sweden)
Joel Sereno
2010-01-01
Full Text Available Inverse kinematics is the process of converting a Cartesian point in space into a set of joint angles to more efficiently move the end effector of a robot to a desired orientation. This project investigates the inverse kinematics of a robotic hand with fingers under various scenarios. Assuming the parameters of a provided robot, a general equation for the end effector point was calculated and used to plot the region of space that it can reach. Further, the benefits obtained from the addition of a prismatic joint versus an extra variable angle joint were considered. The results confirmed that having more movable parts, such as prismatic joints and changing angles, increases the effective reach of a robotic hand.
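For the simplest instance of such a mechanism, a single planar finger with two revolute joints, the inverse kinematics has a closed form. A minimal sketch (function name, link lengths, and the elbow-down choice are ours; the hand studied in the project, with prismatic joints, is more general):

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar 2-link arm:
    returns joint angles (theta1, theta2) that place the end effector
    at (x, y), choosing the elbow-down solution."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)   # law of cosines
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    t2 = math.acos(c2)
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2
```

Running the forward kinematics on the returned angles recovers the target point, which is the standard sanity check; unreachable targets (outside the annulus |l1 - l2| ≤ r ≤ l1 + l2) raise an error.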
International Nuclear Information System (INIS)
Desesquelles, P.
1997-01-01
Computer Monte Carlo simulations occupy an increasingly important place between theory and experiment. This paper introduces a global protocol for the comparison of model simulations with experimental results. The correlated distributions of the model parameters are determined using an original recursive inversion procedure. Multivariate analysis techniques are used in order to optimally synthesize the experimental information with a minimum number of variables. This protocol is relevant in all fields of physics dealing with event generators and multi-parametric experiments. (authors)
Ganesan, K; Bydder, G M
2014-09-01
This study compared T1 fluid attenuation inversion recovery (FLAIR) and T1 turbo spin echo (TSE) sequences for evaluation of cervical spine degenerative disease at 3 T. 72 patients (44 males and 28 females; mean age of 39 years; age range, 27-75 years) with suspected cervical spine degenerative disease were prospectively evaluated. Sagittal images of the spine were obtained using T1 FLAIR and T1 TSE sequences. Two experienced neuroradiologists compared the sequences qualitatively and quantitatively. On qualitative evaluation, cerebrospinal fluid (CSF) nulling and contrast at cord-CSF, disc-CSF and disc-cord interfaces were significantly higher on fast T1 FLAIR images than on T1 TSE images (p degenerative disease, owing to higher cord-CSF, disc-cord and disc-CSF contrast. However, intrinsic cord contrast is low on T1 FLAIR images. T1 FLAIR is more promising and sensitive than T1 TSE for evaluation of degenerative spondyloarthropathy and may provide a foundation for development of MR protocols for early detection of degenerative and neoplastic diseases.
Directory of Open Access Journals (Sweden)
Roberto A. Cecílio
2003-12-01
Full Text Available Intense precipitation equations represent an excellent alternative for determining the critical rainfalls used in engineering designs, but their determination is laborious. In several Brazilian states, however, a considerable number of these equations are available, which makes it possible to obtain, by interpolation, the parameters of the intense precipitation equation for places where they are not known. In this paper, using the information available for 171 locations in the State of Minas Gerais, 625 different combinations of the four intense precipitation equation parameters ("K", "a", "b" and "c") interpolated by the inverse distance to a power method (using five different powers) were compared. In all combinations, a tendency to overestimate the precipitation intensity was observed. Interpolating "K" and "c" with the inverse fifth power of the distance, "a" with the inverse of the distance and "b" with the inverse cube of the distance gave the best results in estimating the precipitation intensity.
Singleton, John; Sengupta, P.; Middleditch, J.; Graves, T.; Schmidt, A.; Perez, M.; Ardavan, H.; Ardavan, A.; Fasel, J.
2010-01-01
Soon after the discovery of pulsars, it was realized that their unique periodic emissions must be associated with a source that rotates. Despite this insight and forty-one years of subsequent effort, a detailed understanding of the pulsar emission mechanism has proved elusive. Here, using data for 983 pulsars taken from the Parkes Multibeam Survey, we show that their fluxes at 1400 MHz (S(1400)) decay with distance d according to a non-standard power law; we suggest that S(1400) is proportional to 1/d. This distance dependence is revealed by two independent statistical techniques: (i) the maximum likelihood method and (ii) analysis of the distance evolution of the cumulative distribution functions of pulsar flux. Moreover, the derived power law is valid for both millisecond and longer-period pulsars, and is robust against possible errors in the NE2001 method for obtaining pulsar distances from dispersion measure. This observation provides strong support for a mechanism of pulsar emission due to superluminal (faster than light in vacuo) polarization currents. Such superluminal polarization currents have been extensively studied by Bolotovskii, Ginzburg and others, who showed both that they do not violate Special Relativity (since the oppositely-charged particles that make them up move relatively slowly) and that they form a bona-fide source term in Maxwell's equations. Subsequently, emission of radiation by superluminal polarization currents was demonstrated in laboratory experiments. By extending these ideas to a superluminal polarization current whose distribution pattern follows a circular orbit, we can explain the 1/d dependence of the flux suggested by our analyses of the observational data. In addition, we show that a model of pulsar emission due to such a rotating superluminal polarization current can predict the frequency spectrum of nine pulsars quantitatively over 16 orders of magnitude of frequency. This work is supported by the DoE LDRD program at Los Alamos.
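The paper derives the exponent by maximum likelihood; as a simpler illustration of recovering a power-law exponent, the sketch below fits the slope of log S versus log d by ordinary least squares on noise-free synthetic fluxes (invented values, not the Parkes data).

```python
import math

def fit_power_law(d, s):
    """Least-squares slope of log s vs. log d, i.e. the exponent n in s ~ d^n."""
    xs = [math.log(v) for v in d]
    ys = [math.log(v) for v in s]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

# Synthetic fluxes following the suggested S(1400) ~ 1/d law, with no noise;
# real pulsar fluxes would scatter around this trend.
d = [0.5, 1.0, 2.0, 4.0, 8.0]
s = [10.0 / x for x in d]
print(fit_power_law(d, s))  # slope -1, i.e. S proportional to 1/d
```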
Directory of Open Access Journals (Sweden)
Katarina Pucelj
2006-12-01
Full Text Available I would like to underline the role and importance of knowledge, which individuals acquire through a learning process and experience. I have established that a form of learning such as distance learning definitely contributes to higher learning quality and leads to an innovative, dynamic and knowledge-based society. Knowledge and skills enable individuals to cope with and manage change, solve problems and also create new knowledge. Traditional learning practices face new circumstances; new and modern technologies appear, which enable quick and quality-oriented knowledge implementation. The aim of the distance learning process is to increase the quality of life of citizens and their competitiveness on the labour market, and to ensure higher economic growth. Intellectual capital represents the biggest capital of any society, and knowledge is the key factor of success for everybody who is fully aware of this. The flexibility, openness and willingness of people to follow new IT solutions form a suitable environment for developing distance learning and for deciding to take it up.
Austin, Peter C; Schuster, Tibor; Platt, Robert W
2015-10-15
Estimating statistical power is an important component of the design of both randomized controlled trials (RCTs) and observational studies. Methods for estimating statistical power in RCTs have been well described and can be implemented simply. In observational studies, statistical methods must be used to remove the effects of confounding that can occur due to non-random treatment assignment. Inverse probability of treatment weighting (IPTW) using the propensity score is an attractive method for estimating the effects of treatment using observational data. However, sample size and power calculations have not been adequately described for these methods. We used an extensive series of Monte Carlo simulations to compare the statistical power of an IPTW analysis of an observational study with time-to-event outcomes with that of an analysis of a similarly-structured RCT. We examined the impact of four factors on the statistical power function: number of observed events, prevalence of treatment, the marginal hazard ratio, and the strength of the treatment-selection process. We found that, on average, an IPTW analysis had lower statistical power compared to an analysis of a similarly-structured RCT. The difference in statistical power increased as the magnitude of the treatment-selection model increased. The statistical power of an IPTW analysis tended to be lower than the statistical power of a similarly-structured RCT.
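A minimal sketch of the IPTW weights themselves, assuming the propensity scores e_i have already been estimated (e.g., by logistic regression); the treatment indicators and scores below are invented.

```python
def iptw_weights(treated, propensity):
    """Inverse probability of treatment weights from propensity scores:
    w_i = 1/e_i for treated subjects, 1/(1 - e_i) for controls."""
    return [1.0 / e if t == 1 else 1.0 / (1.0 - e)
            for t, e in zip(treated, propensity)]

treated = [1, 1, 0, 0]            # hypothetical treatment indicators
propensity = [0.8, 0.4, 0.4, 0.2]  # hypothetical fitted propensity scores
print(iptw_weights(treated, propensity))
```

Subjects who received a treatment that was unlikely for them (e.g., a treated subject with e = 0.4) get large weights, which is one source of the variance inflation, and hence the power loss, that the simulations quantify.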
Distance Dependent Model for the Delay Power Spectrum of In-room Radio Channels
DEFF Research Database (Denmark)
Steinböck, Gerhard; Pedersen, Troels; Fleury, Bernard Henri
2013-01-01
A model based on experimental observations of the delay power spectrum in closed rooms is proposed. The model includes the distance between the transmitter and the receiver as a parameter, which makes it suitable for range-based radio localization. The experimental observations motivate the proposed model of the delay power spectrum with a primary (early) component and a reverberant component (tail). The primary component is modeled as a Dirac delta function weighted according to an inverse distance power law (d^-n). The reverberant component is an exponentially decaying function with onset equal to the propagation time between transmitter and receiver. Its power decays exponentially with distance. The proposed model allows for the prediction of e.g. the path loss, mean delay, root mean squared (rms) delay spread, and kurtosis versus the distance. The model predictions are validated by measurements.
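A sketch of the mean delay implied by such a two-component model, assuming a Dirac primary component at tau0 = d/C and a shifted-exponential tail with decay constant T (the mean of a shifted exponential is tau0 + T); all parameter values here are illustrative placeholders, not the measured ones from the paper.

```python
import math

C = 3e8  # propagation speed, m/s

def delay_stats(d, g0=1e-4, n=2.0, a0=1e-3, rho=0.1, T=25e-9):
    """Line-of-sight delay and mean delay at distance d (metres).

    Primary component: Dirac at tau0 = d/C, gain g0 * d**-n (inverse
    distance power law). Reverberant tail: onset at tau0, exponential
    decay constant T, integrated power a0 * exp(-rho * d) * T.
    """
    tau0 = d / C
    p_primary = g0 * d ** -n
    p_tail = a0 * math.exp(-rho * d) * T
    mean_delay = (p_primary * tau0 + p_tail * (tau0 + T)) / (p_primary + p_tail)
    return tau0, mean_delay

tau0, md = delay_stats(10.0)
print(tau0, md)  # the reverberant tail pushes the mean delay beyond tau0
```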
Shayganfar, A; Sarrami, A H; Fathi, S; Shaygannejad, V; Shamsian, S
2018-04-22
In primary studies with 3 T magnets, phase-sensitive reconstruction of T1-weighted inversion recovery (PSIR) has shown the ability to depict cervical multiple sclerosis (MS) lesions, some of which may not be detected by short-tau inversion recovery (STIR). Given the wider availability of 1.5 T MRI, this study was designed to evaluate the suitability of PSIR at 1.5 T for detection of spinal cord MS lesions. In a study between September 2016 and March 2017, patients with a proven diagnosis of MS were enrolled. The standard protocol (sagittal STIR and T2W FSE and axial T2W FSE) as well as sagittal PSIR sequences were performed using a 1.5 T magnet. The images were reviewed and the lesions were localized and recorded as sharp or faint on each sequence. In 25 patients (22 females and 3 males, with a mean age of 33.5 ± 9.8 years and a mean disease duration of 5.4 ± 3.9 years), 69 lesions were detected in STIR, 53 in T2W FSE, 47 in the magnitude reconstruction of PSIR, and 30 in the phase-sensitive (real) reconstruction of PSIR. A Wilcoxon signed-rank test showed that STIR has a statistically significantly higher detection rate of the plaques than the other three sequences (e.g., STIR vs. T2W FSE, Z = -4.000), as well as better boundary definition of the plaques. This study shows that in the setting of a 1.5 T magnetic field, STIR is significantly superior to both PSIR reconstructions (i.e., real and magnitude) for the detection as well as the boundary definition of cervical cord MS lesions. These results are directly relevant to clinical practice using routinely available MRI scanners and sequences; however, they are discrepant with other reports performed with 3 T magnetic fields. Copyright © 2018 Elsevier B.V. All rights reserved.
Inverse comptonization vs. thermal synchrotron
International Nuclear Information System (INIS)
Fenimore, E.E.; Klebesadel, R.W.; Laros, J.G.
1983-01-01
There are currently two radiation mechanisms being considered for gamma-ray bursts: thermal synchrotron and inverse comptonization. They are mutually exclusive since thermal synchrotron requires a magnetic field of approx. 10^12 Gauss, whereas inverse comptonization cannot produce a monotonic spectrum if the field is larger than 10^11 Gauss and is too inefficient relative to thermal synchrotron unless the field is less than 10^9 Gauss. Neither mechanism can explain completely the observed characteristics of gamma-ray bursts. However, we conclude that thermal synchrotron is more consistent with the observations if the sources are approx. 40 kpc away, whereas inverse comptonization is more consistent if they are approx. 300 pc away. Unfortunately, the source distance is still not known and, thus, the radiation mechanism is still uncertain.
A Comparison of Weights Matrices on Computation of Dengue Spatial Autocorrelation
Suryowati, K.; Bekti, R. D.; Faradila, A.
2018-04-01
Spatial autocorrelation is a spatial analysis method for identifying patterns of relationship or correlation between locations. This method is very important for obtaining information on the dispersal patterns characteristic of a region and the linkages between locations. In this study, it is applied to the incidence of Dengue Hemorrhagic Fever (DHF) in 17 sub-districts in Sleman, Daerah Istimewa Yogyakarta Province. The links among locations are indicated by a spatial weight matrix, which describes the neighbourhood structure and reflects spatial influence. According to the spatial data, weighting matrices can be divided into two types: point type (distance) and neighbourhood area (contiguity). The choice of weighting function is one determinant of the results of the spatial analysis. This study uses queen contiguity weights based on first-order neighbours, queen contiguity weights based on second-order neighbours, and inverse distance weights. Queen contiguity first order and inverse distance weights show significant spatial autocorrelation in DHF, but queen contiguity second order does not. Queen contiguity first and second order yield 68 and 86 entries in the neighbour list, respectively.
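Moran's I under two of the weight matrices discussed (first-order contiguity and inverse distance) can be sketched as follows; the four sites and case counts are toy values on a line, not the Sleman data.

```python
def morans_i(x, w):
    """Moran's I for values x under spatial weight matrix w (zero diagonal)."""
    n = len(x)
    m = sum(x) / n
    dev = [v - m for v in x]
    s0 = sum(sum(row) for row in w)
    num = sum(w[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * num / den

# Four sites along a line with hypothetical DHF counts that rise smoothly,
# i.e. neighbouring sites have similar values.
coords = [0.0, 1.0, 2.0, 3.0]
cases = [2.0, 4.0, 6.0, 8.0]

# First-order contiguity: weight 1 for adjacent sites, 0 otherwise.
w_contig = [[1.0 if abs(i - j) == 1 else 0.0 for j in range(4)]
            for i in range(4)]
# Inverse-distance weights: 1/d_ij off the diagonal, 0 on it.
w_invd = [[0.0 if i == j else 1.0 / abs(coords[i] - coords[j])
           for j in range(4)] for i in range(4)]

print(morans_i(cases, w_contig))  # clearly above the null expectation -1/(n-1)
print(morans_i(cases, w_invd))
```

The two matrices give different values of I for the same data, which is the point the abstract makes: the choice of spatial weights shapes the autocorrelation result.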
Fast Exact Euclidean Distance (FEED): A new class of adaptable distance transforms
Schouten, Theo E.; van den Broek, Egon
2014-01-01
A new unique class of foldable distance transforms of digital images (DT) is introduced, baptized: Fast Exact Euclidean Distance (FEED) transforms. FEED class algorithms calculate the DT starting directly from the definition or rather its inverse. The principle of FEED class algorithms is introduced.
Inverse problems of geophysics
International Nuclear Information System (INIS)
Yanovskaya, T.B.
2003-07-01
This report gives an overview and the mathematical formulation of geophysical inverse problems. General principles of statistical estimation are explained. The maximum likelihood and least-squares fit methods, the Backus-Gilbert method and general approaches for solving inverse problems are discussed. General formulations of linearized inverse problems, singular value decomposition and properties of pseudo-inverse solutions are given.
Housley, Daniel; Caine, Abby; Cherubini, Giunio; Taeymans, Olivier
2017-07-01
Sagittal T2-weighted sequences (T2-SAG) are the foundation of spinal protocols when screening for the presence of intervertebral disc extrusion. We often utilize sagittal short-tau inversion recovery sequences (STIR-SAG) as an adjunctive screening series, and experience suggests that this combined approach provides superior detection rates. We hypothesized that STIR-SAG would provide higher sensitivity than T2-SAG in the identification and localization of intervertebral disc extrusion. We further hypothesized that the parallel evaluation of paired T2-SAG and STIR-SAG series would provide a higher sensitivity than could be achieved with either sagittal series when viewed in isolation. This retrospective diagnostic accuracy study blindly reviewed T2-SAG and STIR-SAG sequences from dogs (n = 110) with surgically confirmed intervertebral disc extrusion. A consensus between two radiologists found no significant difference in sensitivity between T2-SAG and STIR-SAG during the identification of intervertebral disc extrusion (T2-SAG: 92.7%, STIR-SAG: 94.5%, P = 0.752). Nevertheless, STIR-SAG accurately identified intervertebral disc extrusion in 66.7% of cases where the evaluation of T2-SAG in isolation had provided a false negative diagnosis. Additionally, one radiologist found that the parallel evaluation of paired T2-SAG and STIR-SAG series provided a significantly higher sensitivity than T2-SAG in isolation during the identification of intervertebral disc extrusion (T2-SAG: 78.2%, paired T2-SAG and STIR-SAG: 90.9%, P = 0.017). A similar nonsignificant trend was observed when the consensus of both radiologists was taken into consideration (T2-SAG: 92.7%, paired T2-SAG and STIR-SAG: 97.3%, P = 0.392). We therefore conclude that STIR-SAG is capable of identifying intervertebral disc extrusion that is inconspicuous in T2-SAG, and that STIR-SAG should be considered a useful adjunctive sequence during preliminary sagittal screening for intervertebral disc extrusion.
Directory of Open Access Journals (Sweden)
Longxiang Li
Full Text Available Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind-field. In this study, an interpolation method based on the shortest path distance is developed to characterize the impact of the complex urban wind-field on the distribution of the particulate matter concentration. In this method, the wind-field is incorporated by first interpolating the observed wind-field from a meteorological-station network, then using this continuous wind-field to construct a cost surface based on a Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW) with the shortest wind-field path distances. This proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind-fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both the prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for the assessment of the effects of pollutants on human health.
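A toy sketch of the key substitution: compute shortest-path distances over a cost graph (standing in here for the wind-field cost surface) with Dijkstra's algorithm, then feed them to IDW in place of Euclidean distances. The graph, edge costs, and concentrations below are all invented.

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path costs from src in an adjacency dict {node: {nbr: cost}}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, c in graph[u].items():
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def idw(dists, values, power=2.0):
    """IDW using precomputed (here: shortest-path) distances to monitors."""
    num = den = 0.0
    for site, v in values.items():
        d = dists[site]
        if d == 0.0:
            return v
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Edge costs stand in for the wind-field cost surface: the "distance" from
# the target to monitor m2 follows the cheap path through m1 (cost 2),
# not the direct edge (cost 4).
graph = {
    "target": {"m1": 1.0, "m2": 4.0},
    "m1": {"target": 1.0, "m2": 1.0},
    "m2": {"target": 4.0, "m1": 1.0},
}
pm25 = {"m1": 80.0, "m2": 20.0}  # hypothetical monitored concentrations
d = dijkstra(graph, "target")
print(idw(d, pm25))
```

In the paper the cost surface is a continuous raster derived from the interpolated wind-field, so the shortest paths would be computed over a grid rather than this three-node graph.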
Fuzzy logic guided inverse treatment planning
International Nuclear Information System (INIS)
Yan Hui; Yin Fangfang; Guan Huaiqun; Kim, Jae Ho
2003-01-01
A fuzzy logic technique was applied to optimize the weighting factors in the objective function of an inverse treatment planning system for intensity-modulated radiation therapy (IMRT). Based on this technique, the optimization of weighting factors is guided by the fuzzy rules while the intensity spectrum is optimized by a fast-monotonic-descent method. The resultant fuzzy logic guided inverse planning system is capable of finding the optimal combination of weighting factors for different anatomical structures involved in treatment planning. This system was tested using one simulated (but clinically relevant) case and one clinical case. The results indicate that the optimal balance between the target dose and the critical organ dose is achieved by a refined combination of weighting factors. With the help of fuzzy inference, the efficiency and effectiveness of inverse planning for IMRT are substantially improved
Amirpour Haredasht, Sara; Polson, Dale; Main, Rodger; Lee, Kyuyoung; Holtkamp, Derald; Martínez-López, Beatriz
2017-06-07
Porcine reproductive and respiratory syndrome (PRRS) is one of the most economically devastating infectious diseases for the swine industry. A better understanding of the disease dynamics and the transmission pathways under diverse epidemiological scenarios is key to successful PRRS control and elimination in endemic settings. In this paper we used a two-step parameter-driven (PD) Bayesian approach to model the spatio-temporal dynamics of PRRS and predict the PRRS status on farm in subsequent time periods in an endemic setting in the US. For that purpose we used information from a production system with 124 pig sites that reported 237 PRRS cases from 2012 to 2015 and for which the pig trade network and geographical location of farms (i.e., distance was used as a proxy of airborne transmission) were available. We estimated five PD models with different weights, namely: (i) a geographical distance weight, which contains the inverse distance between each pair of farms in kilometers; (ii) a pig trade weight (PT_ji), which contains the absolute number of pig movements between each pair of farms; (iii) the product of the distance weight and the standardized relative pig trade weight; (iv) the product of the standardized distance weight and the standardized relative pig trade weight; and (v) the product of the distance weight and the pig trade weight. The model that included the pig trade weight matrix provided the best fit to model the dynamics of PRRS cases on a 6-month basis from 2012 to 2015 and was able to predict PRRS outbreaks in the subsequent time period with an area under the ROC curve (AUC) of 0.88 and an accuracy of 85% (105/124). The result of this study reinforces the importance of pig trade in PRRS transmission in the US. Methods and results of this study may be easily adapted to any production system to characterize the PRRS dynamics under diverse epidemic settings to more timely support decision-making.
Distance and Cable Length Measurement System
Hernández, Sergio Elias; Acosta, Leopoldo; Toledo, Jonay
2009-01-01
A simple, economic and successful design for distance and cable length detection is presented. The measurement system is based on the continuous repetition of a pulse that endlessly travels along the distance to be detected. There is a pulse repeater at both ends of the distance or cable to be measured. The endless repetition of the pulse generates a frequency that varies almost inversely with the distance to be measured. The resolution and distance or cable length range could be adjusted by varying the repetition time delay introduced at both ends and the measurement time. With this design a distance can be measured with centimeter resolution using electronic system with microsecond resolution, simplifying classical time of flight designs which require electronics with picosecond resolution. This design was also applied to position measurement. PMID:22303169
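The distance-frequency relation can be inverted directly; the sketch below assumes the repetition period is the round-trip time 2d/v plus one fixed delay per repeater end, with illustrative numbers rather than the paper's hardware parameters.

```python
C = 3e8  # propagation speed, m/s (a signal in a cable travels slower, ~0.7c)

def distance_from_frequency(f, t_delay):
    """Invert f = 1 / (2*d/C + 2*t_delay): round trip plus one delay per end."""
    return C * (1.0 / f - 2.0 * t_delay) / 2.0

# A 50 m span with 1 microsecond of repeater delay at each end:
# the endlessly repeated pulse settles at f = 1/period.
t_d = 1e-6
period = 2.0 * 50.0 / C + 2.0 * t_d
print(distance_from_frequency(1.0 / period, t_d))  # recovers ~50 m
```

Because the measured quantity is a frequency averaged over many repetitions, microsecond-scale electronics resolve centimetre-scale distances, which is the simplification over classical time-of-flight designs that the abstract describes.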
water quality assessment and mapping using inverse distance
African Journals Online (AJOL)
The inverse square law of gravitation
International Nuclear Information System (INIS)
Cook, A.H.
1987-01-01
The inverse square law of gravitation is very well established over the distances of celestial mechanics, while in electrostatics the law has been shown to be followed to very high precision. However, it is only within the last century that any laboratory experiments have been made to test the inverse square law for gravitation, and all but one have been carried out in the last ten years. At the same time, there has been considerable interest in the possibility of deviations from the inverse square law, either because of a possible bearing on unified theories of forces, including gravitation, or, most recently, because of a possible additional fifth force of nature. In this article the various lines of evidence for the inverse square law are summarized, with emphasis upon the recent laboratory experiments. (author)
Laterally constrained inversion for CSAMT data interpretation
Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun
2015-10-01
Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this can improve the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method can recover the true model better compared to the traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are to some extent noise insensitive. Then, we re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison with the previous inversion in the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global searching algorithm of simulated annealing (SA) in the watershed shows that both methods deliver very similar, good results, but the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
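The column-weighting idea, balancing the sensitivity of model parameters, can be illustrated by scaling each Jacobian column to unit norm; the toy Jacobian below is invented, and unit-norm scaling is only one possible choice of preconditioning weight.

```python
import math

def balance_columns(J):
    """Scale each column of J to unit Euclidean norm; a simple stand-in for
    the sensitivity-balancing weighting applied to the Jacobian in LCI."""
    ncols = len(J[0])
    scales = [math.sqrt(sum(row[j] ** 2 for row in J)) for j in range(ncols)]
    Jw = [[row[j] / scales[j] for j in range(ncols)] for row in J]
    return Jw, scales

# Toy Jacobian: the second parameter (e.g. a deep-layer resistivity) is far
# less sensitive than the first; balancing evens out the column norms.
J = [[10.0, 0.01],
     [6.0, 0.02]]
Jw, scales = balance_columns(J)
print([math.sqrt(sum(row[j] ** 2 for row in Jw)) for j in range(2)])  # both ~1
```

After such scaling, an update step treats well- and poorly-resolved parameters on a more equal footing, which is the convergence benefit the abstract reports; the scales must be undone when mapping the solution back to physical units.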
Designing legible fonts for distance reading
DEFF Research Database (Denmark)
Beier, Sofie
2016-01-01
This chapter reviews existing knowledge on the distance legibility of fonts, and finds that for optimal distance reading, letters and numbers benefit from relatively wide shapes, open inner counters and a large x-height; fonts should further be widely spaced, and the weight should not be too heavy or too light.
Deza, Michel Marie
2016-01-01
This 4th edition of the leading reference volume on distance metrics is characterized by updated and rewritten sections on some items suggested by experts and readers, as well as a general streamlining of content and the addition of essential new topics. Though the structure remains unchanged, the new edition also explores recent advances in the use of distances and metrics for e.g. generalized distances, probability theory, graph theory, coding theory, data analysis. New topics in the purely mathematical sections include e.g. the Vitanyi multiset-metric, algebraic point-conic distance, triangular ratio metric, Rossi-Hamming metric, Taneja distance, spectral semimetric between graphs, channel metrization, and Maryland bridge distance. The multidisciplinary sections have also been supplemented with new topics, including: dynamic time warping distance, memory distance, allometry, atmospheric depth, elliptic orbit distance, VLBI distance measurements, the astronomical system of units, and walkability distance.
Acute puerperal uterine inversion
International Nuclear Information System (INIS)
Hussain, M.; Liaquat, N.; Noorani, K.; Bhutta, S.Z; Jabeen, T.
2004-01-01
Objective: To determine the frequency, causes, clinical presentations, management and maternal mortality associated with acute puerperal inversion of the uterus. Materials and Methods: All patients who developed acute puerperal inversion of the uterus either in or outside the JPMC were included in the study. Patients with chronic uterine inversion were not included in the present study. Abdominal and vaginal examinations were done to confirm and classify inversion into first, second or third degree. Results: 57036 deliveries and 36 acute uterine inversions occurred during the study period, so the frequency of uterine inversion was 1 in 1584 deliveries. Mismanagement of the third stage of labour was responsible for uterine inversion in 75% of patients. The majority of the patients presented with shock, either hypovolemic (69%) or neurogenic (13%) in origin. Manual replacement of the uterus under general anaesthesia with 2% halothane was successfully done in 35 patients (97.5%). Abdominal hysterectomy was done in only one patient. There were three maternal deaths due to inversion. Conclusion: Proper education and training regarding placental delivery, diagnosis and management of uterine inversion must be imparted to maternity care providers, especially traditional birth attendants and family physicians, to prevent this potentially life-threatening condition. (author)
Training for Distance Teaching through Distance Learning.
Cadorath, Jill; Harris, Simon; Encinas, Fatima
2002-01-01
Describes a mixed-mode bachelor degree course in English language teaching at the Universidad Autonoma de Puebla (Mexico) that was designed to help practicing teachers write appropriate distance education materials by giving them the experience of being distance students. Includes a course outline and results of a course evaluation. (Author/LRW)
The Distance Standard Deviation
Edelmann, Dominic; Richards, Donald; Vogel, Daniel
2017-01-01
The distance standard deviation, which arises in distance correlation analysis of multivariate data, is studied as a measure of spread. New representations for the distance standard deviation are obtained in terms of Gini's mean difference and in terms of the moments of spacings of order statistics. Inequalities for the distance variance are derived, proving that the distance standard deviation is bounded above by the classical standard deviation and by Gini's mean difference. Further, it is ...
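The sample distance standard deviation can be computed from the double-centred pairwise distance matrix; this sketch follows the usual V_n definition from distance correlation analysis (an assumption, since the paper's exact normalization is not restated in the abstract), and the data are invented.

```python
import math

def distance_stddev(x):
    """Sample distance standard deviation of a univariate sample x."""
    n = len(x)
    a = [[abs(x[i] - x[j]) for j in range(n)] for i in range(n)]
    row = [sum(r) / n for r in a]          # row (= column) means
    grand = sum(row) / n                   # grand mean of distances
    # Double-centre the distance matrix, then average the squared entries.
    v = sum((a[i][j] - row[i] - row[j] + grand) ** 2
            for i in range(n) for j in range(n)) / n ** 2
    return math.sqrt(v)

print(distance_stddev([1.0, 2.0, 4.0]))
```

Like any measure of spread, it is invariant under translation and scales linearly with the data, which the test below checks; the paper's contribution is relating this quantity to Gini's mean difference and to the classical standard deviation.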
Inverse logarithmic potential problem
Cherednichenko, V G
1996-01-01
The Inverse and Ill-Posed Problems Series is a series of monographs publishing postgraduate level information on inverse and ill-posed problems for an international readership of professional scientists and researchers. The series aims to publish works which involve both theory and applications in, e.g., physics, medicine, geophysics, acoustics, electrodynamics, tomography, and ecology.
Inverse Kinematics using Quaternions
DEFF Research Database (Denmark)
Henriksen, Knud; Erleben, Kenny; Engell-Nørregård, Morten
In this project I describe the status of inverse kinematics research, with the focus firmly on the methods that solve the core problem. An overview of the different methods is presented. Three common methods used in inverse kinematics computation have been chosen as subjects for closer inspection.
Deza, Michel Marie
2014-01-01
This updated and revised third edition of the leading reference volume on distance metrics includes new items from very active research areas in the use of distances and metrics such as geometry, graph theory, probability theory and analysis. Among the new topics included are, for example, polyhedral metric space, nearness matrix problems, distances between belief assignments, distance-related animal settings, diamond-cutting distances, natural units of length, Heidegger’s de-severance distance, and brain distances. The publication of this volume coincides with intensifying research efforts into metric spaces and especially distance design for applications. Accurate metrics have become a crucial goal in computational biology, image analysis, speech recognition and information retrieval. Leaving aside the practical questions that arise during the selection of a ‘good’ distance function, this work focuses on providing the research community with an invaluable comprehensive listing of the main available distances.
Predicting objective function weights from patient anatomy in prostate IMRT treatment planning
International Nuclear Information System (INIS)
Lee, Taewoo; Hammad, Muhannad; Chan, Timothy C. Y.; Craig, Tim; Sharpe, Michael B.
2013-01-01
Purpose: Intensity-modulated radiation therapy (IMRT) treatment planning typically combines multiple criteria into a single objective function by taking a weighted sum. The authors propose a statistical model that predicts objective function weights from patient anatomy for prostate IMRT treatment planning. This study provides a proof of concept for geometry-driven weight determination. Methods: A previously developed inverse optimization method (IOM) was used to generate optimal objective function weights for 24 patients using their historical treatment plans (i.e., dose distributions). These IOM weights were around 1% for each of the femoral heads, while bladder and rectum weights varied greatly between patients. A regression model was developed to predict a patient's rectum weight using the ratio of the overlap volume of the rectum and bladder with the planning target volume at a 1 cm expansion as the independent variable. The femoral head weights were fixed to 1% each and the bladder weight was calculated as one minus the rectum and femoral head weights. The model was validated using leave-one-out cross validation. Objective values and dose distributions generated through inverse planning using the predicted weights were compared to those generated using the original IOM weights, as well as an average of the IOM weights across all patients. Results: The IOM weight vectors were on average six times closer to the predicted weight vectors than to the average weight vector, using the l2 distance. Likewise, the bladder and rectum objective values achieved by the predicted weights were more similar to the objective values achieved by the IOM weights. The difference in objective value performance between the predicted and average weights was statistically significant according to a one-sided sign test. For all patients, the difference in rectum V54.3 Gy, rectum V70.0 Gy, bladder V54.3 Gy, and bladder V70.0 Gy values between the dose distributions generated by the
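A sketch of the weight-prediction step: fit a line from the overlap-volume ratio to the rectum weight, fix the femoral head weights at 1% each, and let the bladder weight absorb the remainder so the weights sum to one. The training pairs below are invented, not the 24-patient IOM weights.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

# Hypothetical training pairs: rectum/bladder-PTV overlap ratio vs. the
# IOM-derived rectum weight for each patient.
ratios = [0.5, 1.0, 1.5, 2.0]
rectum_w = [0.20, 0.35, 0.50, 0.65]
a, b = fit_line(ratios, rectum_w)

def predict_weights(ratio):
    rectum = a + b * ratio
    femoral = 0.01                      # fixed at 1% per femoral head
    bladder = 1.0 - rectum - 2 * femoral
    return {"rectum": rectum, "bladder": bladder, "femoral_each": femoral}

print(predict_weights(1.2))
```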
Inverse design of multicomponent assemblies
Piñeros, William D.; Lindquist, Beth A.; Jadrich, Ryan B.; Truskett, Thomas M.
2018-03-01
Inverse design can be a useful strategy for discovering interactions that drive particles to spontaneously self-assemble into a desired structure. Here, we extend an inverse design methodology—relative entropy optimization—to determine isotropic interactions that promote assembly of targeted multicomponent phases, and we apply this extension to design interactions for a variety of binary crystals ranging from compact triangular and square architectures to highly open structures with dodecagonal and octadecagonal motifs. We compare the resulting optimized (self- and cross) interactions for the binary assemblies to those obtained from optimization of analogous single-component systems. This comparison reveals that self-interactions act as a "primer" to position particles at approximately correct coordination shell distances, while cross interactions act as the "binder" that refines and locks the system into the desired configuration. For simpler binary targets, it is possible to successfully design self-assembling systems while restricting one of these interaction types to be a hard-core-like potential. However, optimization of both self- and cross interaction types appears necessary to design for assembly of more complex or open structures.
International Nuclear Information System (INIS)
Burkhard, N.R.
1979-01-01
The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
Directory of Open Access Journals (Sweden)
Leonardo Gomes
2004-06-01
Full Text Available Blowflies utilize discrete and ephemeral sites for breeding and larval nutrition. After the exhaustion of food, the larvae begin dispersing to search for sites to pupate or for additional food sources, a process referred to as postfeeding larval dispersal. Some aspects of this process were investigated in Lucilia cuprina (Wiedemann, 1830), utilizing a circular arena to permit the radial dispersal of larvae from the food source in the center. To determine the localization of each pupa, the arena was split into 72 equal sectors from the center. For each pupa, the distance from the center of the arena, weight and depth were determined. Statistical tests were performed to verify the relation among weight, depth and distance of burying for pupation. It was verified that the larvae that disperse farthest are those with the lowest weights. The majority of individuals reached a depth of burying for pupation between 7 and 18 cm. The study of this dispersal process can be utilized in the estimation of the postmortem interval (PMI) for human corpses in medico-criminal investigations.
Székely, Gábor J.; Rizzo, Maria L.
2010-01-01
Distance correlation is a new class of multivariate dependence coefficients applicable to random vectors of arbitrary and not necessarily equal dimension. Distance covariance and distance correlation are analogous to product-moment covariance and correlation, but generalize and extend these classical bivariate measures of dependence. Distance correlation characterizes independence: it is zero if and only if the random vectors are independent. The notion of covariance with...
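The sample statistic described in this record is straightforward to compute from pairwise Euclidean distance matrices. Below is a minimal sketch following the standard double-centering definition (naive O(n²) memory; a generic illustration, not the authors' own code):

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two samples with n observations each.
    x and y may have different numbers of columns (arbitrary, not
    necessarily equal dimensions) but must share the same number of rows."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)

    def double_centered(z):
        # Pairwise Euclidean distances, centered by row, column and grand mean.
        d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

    A, B = double_centered(x), double_centered(y)
    dcov2 = (A * B).mean()                        # squared distance covariance
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0
```

For exactly linearly related samples the statistic equals 1; the population quantity is 0 if and only if the random vectors are independent, which is the characterization highlighted in the abstract.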
van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime
2016-01-01
This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN'[Brouwer, A.E., Cohen, A.M., Neumaier,
Sharp spatially constrained inversion
DEFF Research Database (Denmark)
Vignoli, Giulio; Fiandaca, Gianluca; Christiansen, Anders Vest
2013-01-01
We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted...... by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes...... inversions are compared against classical smooth results and available boreholes. With the focusing approach, the obtained blocky results agree with the underlying geology and allow for easier interpretation by the end-user....
International Nuclear Information System (INIS)
Rosenwald, J.-C.
2008-01-01
The lecture addressed the following topics: Optimizing radiotherapy dose distribution; IMRT contributes to optimization of energy deposition; Inverse vs direct planning; Main steps of IMRT; Background of inverse planning; General principle of inverse planning; The 3 main components of IMRT inverse planning; The simplest cost function (deviation from prescribed dose); The driving variable : the beamlet intensity; Minimizing a 'cost function' (or 'objective function') - the walker (or skier) analogy; Application to IMRT optimization (the gradient method); The gradient method - discussion; The simulated annealing method; The optimization criteria - discussion; Hard and soft constraints; Dose volume constraints; Typical user interface for definition of optimization criteria; Biological constraints (Equivalent Uniform Dose); The result of the optimization process; Semi-automatic solutions for IMRT; Generalisation of the optimization problem; Driving and driven variables used in RT optimization; Towards multi-criteria optimization; and Conclusions for the optimization phase. (P.A.)
Safety distance between underground natural gas and water pipeline facilities
International Nuclear Information System (INIS)
Mohsin, R.; Majid, Z.A.; Yusof, M.Z.
2014-01-01
A leaking water pipe bursting high pressure water jet in the soil will create slurry erosion which will eventually erode the adjacent natural gas pipe, thus causing its failure. The standard 300 mm safety distance used to place natural gas pipe away from water pipeline facilities needs to be reviewed to consider accidental damage and provide safety cushion to the natural gas pipe. This paper presents a study on underground natural gas pipeline safety distance via experimental and numerical approaches. The pressure–distance characteristic curve obtained from this experimental study showed that the pressure was inversely proportional to the square of the separation distance. Experimental testing using water-to-water pipeline system environment was used to represent the worst case environment, and could be used as a guide to estimate appropriate safety distance. Dynamic pressures obtained from the experimental measurement and simulation prediction mutually agreed along the high-pressure water jetting path. From the experimental and simulation exercises, zero effect distance for water-to-water medium was obtained at an estimated horizontal distance at a minimum of 1500 mm, while for the water-to-sand medium, the distance was estimated at a minimum of 1200 mm. - Highlights: • Safe separation distance of underground natural gas pipes was determined. • Pressure curve is inversely proportional to separation distance. • Water-to-water system represents the worst case environment. • Measured dynamic pressures mutually agreed with simulation results. • Safe separation distance of more than 1200 mm should be applied
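The reported pressure-distance characteristic can be written as p(d) = k/d², with k a jet-specific constant fitted from measurement. A toy sketch under that assumption (the constant below is invented for the example, not a measured value):

```python
def jet_pressure(distance_mm, k=9.0e6):
    """Dynamic pressure of the water jet versus separation distance,
    p(d) = k / d**2 -- the inverse-square relationship reported in the
    study.  The constant k here is illustrative only."""
    return k / distance_mm ** 2

# Doubling the separation distance quarters the dynamic pressure:
p_300 = jet_pressure(300.0)   # at the standard 300 mm safety distance
p_600 = jet_pressure(600.0)
```

On this scaling, moving from the standard 300 mm to the recommended minimum of 1200-1500 mm reduces the jet pressure at the gas pipe by a factor of 16-25.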
Haptic Discrimination of Distance
van Beek, Femke E.; Bergmann Tiest, Wouter M.; Kappers, Astrid M. L.
2014-01-01
While quite some research has focussed on the accuracy of haptic perception of distance, information on the precision of haptic perception of distance is still scarce, particularly regarding distances perceived by making arm movements. In this study, eight conditions were measured to answer four main questions, which are: what is the influence of reference distance, movement axis, perceptual mode (active or passive) and stimulus type on the precision of this kind of distance perception? A discrimination experiment was performed with twelve participants. The participants were presented with two distances, using either a haptic device or a real stimulus. Participants compared the distances by moving their hand from a start to an end position. They were then asked to judge which of the distances was the longer, from which the discrimination threshold was determined for each participant and condition. The precision was influenced by reference distance. No effect of movement axis was found. The precision was higher for active than for passive movements and it was a bit lower for real stimuli than for rendered stimuli, but it was not affected by adding cutaneous information. Overall, the Weber fraction for the active perception of a distance of 25 or 35 cm was about 11% for all cardinal axes. The recorded position data suggest that participants, in order to be able to judge which distance was the longer, tried to produce similar speed profiles in both movements. This knowledge could be useful in the design of haptic devices. PMID:25116638
Interface Simulation Distances
Directory of Open Access Journals (Sweden)
Pavol Černý
2012-10-01
Full Text Available The classical (boolean) notion of refinement for behavioral interfaces of system components is the alternating refinement preorder. In this paper, we define a distance for interfaces, called interface simulation distance. It makes the alternating refinement preorder quantitative by, intuitively, tolerating errors (while counting them) in the alternating simulation game. We show that the interface simulation distance satisfies the triangle inequality, that the distance between two interfaces does not increase under parallel composition with a third interface, and that the distance between two interfaces can be bounded from above and below by distances between abstractions of the two interfaces. We illustrate the framework, and the properties of the distances under composition of interfaces, with two case studies.
DEFF Research Database (Denmark)
Larsen, Gunvor Riber
The environmental impact of tourism mobility is linked to the distances travelled in order to reach a holiday destination, and with tourists travelling more and further than previously, an understanding of how the tourists view the distance they travel across becomes relevant. Based on interviews...... contribute to an understanding of how it is possible to change tourism travel behaviour towards becoming more sustainable. How tourists 'consume distance' is discussed, from the practical level of actually driving the car or sitting in the air plane, to the symbolic consumption of distance that occurs when...... travelling on holiday becomes part of a lifestyle and a social positioning game. Further, different types of tourist distance consumers are identified, ranging from the reluctant to the deliberate and nonchalant distance consumers, who display very differing attitudes towards the distance they all travel...
Moral distance in dictator games
Directory of Open Access Journals (Sweden)
Fernando Aguiar
2008-04-01
Full Text Available We perform an experimental investigation using a dictator game in which individuals must make a moral decision: to give or not to give an amount of money to poor people in the Third World. A questionnaire in which the subjects are asked about the reasons for their decision shows that, at least in this case, moral motivations carry a heavy weight in the decision: the majority of dictators give the money for reasons of a consequentialist nature. Based on the results presented here and on other analogous experiments, we conclude that dictator behavior can be understood in terms of moral distance rather than social distance and that it systematically deviates from the egoism assumption in economic models and game theory. JEL: A13, C72, C91.
Traversing psychological distance.
Liberman, Nira; Trope, Yaacov
2014-07-01
Traversing psychological distance involves going beyond direct experience, and includes planning, perspective taking, and contemplating counterfactuals. Consistent with this view, temporal, spatial, and social distances as well as hypotheticality are associated, affect each other, and are inferred from one another. Moreover, traversing all distances involves the use of abstraction, which we define as forming a belief about the substitutability for a specific purpose of subjectively distinct objects. Indeed, across many instances of both abstraction and psychological distancing, more abstract constructs are used for more distal objects. Here, we describe the implications of this relation for prediction, choice, communication, negotiation, and self-control. We ask whether traversing distance is a general mental ability and whether distance should replace expectancy in expected-utility theories. Copyright © 2014 Elsevier Ltd. All rights reserved.
Ziegler, Gerhard
2011-01-01
Distance protection provides the basis for network protection in transmission systems and meshed distribution systems. This book covers the fundamentals of distance protection and the special features of numerical technology. The emphasis is placed on the application of numerical distance relays in distribution and transmission systems.This book is aimed at students and engineers who wish to familiarise themselves with the subject of power system protection, as well as the experienced user, entering the area of numerical distance protection. Furthermore it serves as a reference guide for s
DEFF Research Database (Denmark)
Mosegaard, Klaus
2012-01-01
For non-linear inverse problems, the mathematical structure of the mapping from model parameters to data is usually unknown or partly unknown. Absence of information about the mathematical structure of this function prevents us from presenting an analytical solution, so our solution depends on our......-heuristics are inefficient for large-scale, non-linear inverse problems, and that the 'no-free-lunch' theorem holds. We discuss typical objections to the relevance of this theorem. A consequence of the no-free-lunch theorem is that algorithms adapted to the mathematical structure of the problem perform more efficiently than...... pure meta-heuristics. We study problem-adapted inversion algorithms that exploit the knowledge of the smoothness of the misfit function of the problem. Optimal sampling strategies exist for such problems, but many of these problems remain hard. © 2012 Springer-Verlag....
Inverse scale space decomposition
DEFF Research Database (Denmark)
Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane
2018-01-01
We investigate the inverse scale space flow as a decomposition method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex and even and positively one-homogeneous regularisation functionals, can decompose data represented...... by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums...... of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range...
Lindstrom, Peter A.; And Others
This document consists of four units. The first of these views calculus applications to work, area, and distance problems. It is designed to help students gain experience in: 1) computing limits of Riemann sums; 2) computing definite integrals; and 3) solving elementary area, distance, and work problems by integration. The second module views…
Generalized inverses theory and computations
Wang, Guorong; Qiao, Sanzheng
2018-01-01
This book begins with the fundamentals of the generalized inverses, then moves to more advanced topics. It presents a theoretical study of the generalization of Cramer's rule, determinant representations of the generalized inverses, reverse order law of the generalized inverses of a matrix product, structures of the generalized inverses of structured matrices, parallel computation of the generalized inverses, perturbation analysis of the generalized inverses, an algorithmic study of the computational methods for the full-rank factorization of a generalized inverse, generalized singular value decomposition, imbedding method, finite method, generalized inverses of polynomial matrices, and generalized inverses of linear operators. This book is intended for researchers, postdocs, and graduate students in the area of the generalized inverses with an undergraduate-level understanding of linear algebra.
Some results on inverse scattering
International Nuclear Information System (INIS)
Ramm, A.G.
2008-01-01
A review of some of the author's results in the area of inverse scattering is given. The following topics are discussed: (1) Property C and applications, (2) Stable inversion of fixed-energy 3D scattering data and its error estimate, (3) Inverse scattering with 'incomplete' data, (4) Inverse scattering for inhomogeneous Schroedinger equation, (5) Krein's inverse scattering method, (6) Invertibility of the steps in Gel'fand-Levitan, Marchenko, and Krein inversion methods, (7) The Newton-Sabatier and Cox-Thompson procedures are not inversion methods, (8) Resonances: existence, location, perturbation theory, (9) Born inversion as an ill-posed problem, (10) Inverse obstacle scattering with fixed-frequency data, (11) Inverse scattering with data at a fixed energy and a fixed incident direction, (12) Creating materials with a desired refraction coefficient and wave-focusing properties. (author)
Angle-domain inverse scattering migration/inversion in isotropic media
Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan
2018-07-01
The classical seismic asymptotic inversion can be transformed into a problem of inversion of the generalized Radon transform (GRT). In such methods, the combined parameters are linearly attached to the scattered wave-field by the Born approximation and recovered by applying an inverse GRT operator to the scattered wave-field data. A typical GRT-style true-amplitude inversion procedure contains an amplitude compensation process after the weighted migration, via division by an illumination-associated matrix whose elements are integrals over scattering angles. To some extent, it is intuitive to perform the generalized linear inversion and the inversion of the GRT together in this process for direct inversion. However, it is imprecise to carry out such an operation when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally eliminates the external integral term related to the illumination in the conventional case. We solve the linearized integral equation for combined parameters at different fixed scattering-angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop, which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. Then we deal with the over-determined problem to solve for each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method can obtain more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate its effectiveness and practicability.
Cohen, A.M.; Beineke, L.W.; Wilson, R.J.; Cameron, P.J.
2004-01-01
In this chapter we investigate the classification of distance-transitive graphs: these are graphs whose automorphism groups are transitive on each of the sets of pairs of vertices at distance i, for i = 0, 1,.... We provide an introduction into the field. By use of the classification of finite
Distance Education in Entwicklungslandern.
German Foundation for International Development, Bonn (West Germany).
Seminar and conference reports and working papers on distance education of adults, which reflect the experiences of many countries, are presented. Contents include the draft report of the 1979 International Seminar on Distance Education held in Addis Ababa, Ethiopia, which was jointly sponsored by the United Nations Economic Commission for Africa…
Deza, Michel Marie
2009-01-01
Distance metrics and distances have become an essential tool in many areas of pure and applied Mathematics. This title offers both independent introductions and definitions, while at the same time making cross-referencing easy through hyperlink-like boldfaced references to original definitions.
Directory of Open Access Journals (Sweden)
Dr. Nursel Selver RUZGAR,
2004-04-01
Full Text Available Many countries of the world are using distance education in various ways: by internet, by post and by TV. In this work, the development of distance education in Turkey is presented from its beginnings. After discussing the types and applications of distance education at different levels in Turkey, distance education is considered from a cultural point of view. Then, in order to assess the tendencies and thoughts of graduates of Higher Education Institutions and Distance Education Institutions about their competitiveness in the job market, the sufficiency of their education level, the advantages for the education system, and continuing education in different institutions, a face-to-face survey was administered to 1284 graduates: 958 from Higher Education Institutions and 326 from Distance Education Institutions. The results were evaluated and discussed. In the last part of this work, suggestions are made for making distance education more widespread in the country and for improving it.
Inversion assuming weak scattering
DEFF Research Database (Denmark)
Xenaki, Angeliki; Gerstoft, Peter; Mosegaard, Klaus
2013-01-01
due to the complex nature of the field. A method based on linear inversion is employed to infer information about the statistical properties of the scattering field from the obtained cross-spectral matrix. A synthetic example based on an active high-frequency sonar demonstrates that the proposed...
Bayesian seismic AVO inversion
Energy Technology Data Exchange (ETDEWEB)
Buland, Arild
2002-07-01
A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
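The backbone of such a linearized inversion is the linear-Gaussian model d = Gm + e, whose posterior has the closed form this record relies on. A generic numerical sketch follows (toy dimensions; the actual operator, convolution combined with the weak-contrast Zoeppritz approximation, is not reproduced here):

```python
import numpy as np

def gaussian_posterior(mu_m, Sigma_m, G, d, Sigma_e):
    """Closed-form posterior for d = G m + e with m ~ N(mu_m, Sigma_m)
    and e ~ N(0, Sigma_e).  This is the generic structure behind a
    linearized Bayesian inversion; G here is a stand-in operator."""
    S = G @ Sigma_m @ G.T + Sigma_e              # marginal data covariance
    K = Sigma_m @ G.T @ np.linalg.inv(S)         # gain matrix
    mu_post = mu_m + K @ (d - G @ mu_m)          # posterior expectation
    Sigma_post = Sigma_m - K @ G @ Sigma_m       # posterior covariance
    return mu_post, Sigma_post
```

As the noise covariance shrinks, the posterior mean reproduces the data, mirroring the synthetic tests in the abstract where all inverted parameters were almost perfectly retrieved as the noise approached zero.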
Calculation of the inverse data space via sparse inversion
Saragiotis, Christos
2011-01-01
The inverse data space provides a natural separation of primaries and surface-related multiples, as the surface multiples map onto the area around the origin while the primaries map elsewhere. However, the calculation of the inverse data is far from trivial, as theory requires infinite time and offset recording. Furthermore, regularization issues arise during inversion. We perform the inversion by minimizing the least-squares norm of the misfit function while constraining the $\ell_1$ norm of the solution, being the inverse data space. In this way a sparse inversion approach is obtained. We show results on field data with an application to surface multiple removal.
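A standard way to solve an l1-constrained least-squares problem of this kind is iterative soft-thresholding (ISTA). The sketch below is generic under that assumption; the paper's actual operator mapping recorded data to the inverse data space is not reproduced:

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Minimise 0.5*||A x - b||_2^2 + lam*||x||_1 by iterative
    soft-thresholding, producing a sparse solution x -- the generic
    form of an l1-regularised (sparse) inversion."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - b))        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # shrink
    return x
```

With A equal to the identity the iteration converges to the soft-thresholded data, which makes the sparsifying effect of the l1 penalty easy to verify.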
First trimester phthalate exposure and anogenital distance in newborns
Swan, S.H.; Sathyanarayana, S.; Barrett, E.S.; Janssen, S.; Liu, F.; Nguyen, R.H.N.; Redmon, J.B.; Liu, Fan; Scher, Erica; Stasenko, Marina; Ayash, Erin; Schirmer, Melissa; Farrell, Jason; Thiet, Mari-Paule; Baskin, Laurence; Gray Chelsea Georgesen, Heather L.; Rody, Brooke J.; Terrell, Carrie A.; Kaur, Kapilmeet; Brantley, Erin; Fiore, Heather; Kochman, Lynda; Parlett, Lauren; Marino, Jessica; Hulbert, William; Mevorach, Robert; Pressman, Eva; Ivicek, Kristy; Salveson, Bobbie; Alcedo, Garry
2015-01-01
STUDY QUESTION Is first trimester phthalate exposure associated with anogenital distance (AGD), a biomarker of prenatal androgen exposure, in newborns? SUMMARY ANSWER Concentrations of diethylhexyl phthalate (DEHP) metabolites in first trimester maternal urine samples are inversely associated with AGD in male, but not female, newborns. WHAT IS KNOWN ALREADY AGD is a sexually dimorphic measure reflecting prenatal androgen exposure. Prenatal phthalate exposure has been associated with shorter male AGD in multiple animal studies. Prior human studies, which have been limited by small sample size and imprecise timing of exposure and/or outcome, have reported conflicting results. STUDY DESIGN, SIZE, DURATION The Infant Development and the Environment Study (TIDES) is a prospective cohort study of pregnant women recruited in prenatal clinics in San Francisco, CA, Minneapolis, MN, Rochester, NY and Seattle, WA in 2010–2012. Participants delivered 787 infants; 753 with complete data are included in this analysis. PARTICIPANTS/MATERIALS, SETTING, METHODS Any woman over 18 years old who was able to read and write English (or Spanish in CA), who was <13 weeks pregnant, whose pregnancy was not medically threatened and who planned to deliver in a study hospital was eligible to participate. Analyses include all infants whose mothers provided a first trimester urine sample and who were examined at or shortly after birth. Specific gravity (SpG) adjusted concentrations of phthalate metabolites in first trimester urine samples were examined in relation to genital measurements. In boys (N = 366), we obtained two measures of anogenital distance (AGD) (anoscrotal distance, or AGDAS and anopenile distance, AGDAP) as well as penile width (PW). In girls (N = 373), we measured anofourchette distance (AGDAF) and anoclitoral distance (AGDAC). We used multivariable regression models that adjusted for the infant's age at exam, gestational age, weight-for-length Z-score, time of day of urine
On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction
International Nuclear Information System (INIS)
Crop, F; Thierens, H; Rompaye, B Van; Paelinck, L; Vakaet, L; Wagter, C De
2008-01-01
The purpose of this study was both putting forward a statistically correct model for film calibration and the optimization of this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiographic (Gafchromic) film. Sometimes, an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve with the dose as a function of OD (inverse regression) or sometimes OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because heteroscedasticity of the data is not taken into account. This could lead to erroneous results originating from the calibration process itself and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method to create calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry
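The contrast between the two fitting strategies can be illustrated with a toy linear calibration. This is a hedged sketch: the real net-OD-to-dose curve is nonlinear, and the point here is only the inverse-variance weighting that corrects for heteroscedasticity.

```python
import numpy as np

def fit_ols_wls(dose, od, od_var):
    """Fit OD = a + b*dose two ways: ordinary least squares (equal
    weights) and weighted least squares with weights 1/Var(OD), the
    heteroscedasticity correction argued for above.  Toy linear model."""
    X = np.column_stack([np.ones_like(dose), dose])
    ols = np.linalg.lstsq(X, od, rcond=None)[0]
    W = np.diag(1.0 / od_var)                     # inverse-variance weights
    wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ od)
    return ols, wls

def invert_for_dose(od_new, coeffs):
    """Inverse prediction: read a dose off the fitted calibration curve."""
    a, b = coeffs
    return (od_new - a) / b
```

On noiseless data both fits recover the same curve; the methods diverge, and the WLS weighting matters, precisely when the OD variance grows with dose as it does for real film.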
Motivation in Distance Learning
Directory of Open Access Journals (Sweden)
Daniela Brečko
1996-12-01
Full Text Available It is estimated that motivation is one of the most important psychological functions, making it possible for people to learn even in conditions that do not meet their needs. In distance learning, a form of autonomous learning, motivation is of utmost importance. When adopting this method of learning, individuals have to motivate themselves and take learning decisions on their own. These specific characteristics of distance learning should be taken into account; thus, all the different factors maintaining the motivation of participants in distance learning are to be considered. Moreover, motivation in distance learning can be stimulated with specific learning materials, clear instructions and guidelines, efficient feedback, personal contact between tutors and participants, stimulating learning letters, telephone calls, encouraging letters, and the maintenance of a positive relationship between tutor and participant.
Electrochemically driven emulsion inversion
Johans, Christoffer; Kontturi, Kyösti
2007-09-01
It is shown that emulsions stabilized by ionic surfactants can be inverted by controlling the electrical potential across the oil-water interface. The potential dependent partitioning of sodium dodecyl sulfate (SDS) was studied by cyclic voltammetry at the 1,2-dichlorobenzene|water interface. In the emulsion the potential control was achieved by using a potential-determining salt. The inversion of a 1,2-dichlorobenzene-in-water (O/W) emulsion stabilized by SDS was followed by conductometry as a function of added tetrapropylammonium chloride. A sudden drop in conductivity was observed, indicating the change of the continuous phase from water to 1,2-dichlorobenzene, i.e. a water-in-1,2-dichlorobenzene emulsion was formed. The inversion potential is well in accordance with that predicted by the hydrophilic-lipophilic deviation if the interfacial potential is appropriately accounted for.
DEFF Research Database (Denmark)
Gale, A.S.; Surlyk, Finn; Anderskouv, Kresten
2013-01-01
Evidence from regional stratigraphical patterns in Santonian−Campanian chalk is used to infer the presence of a very broad channel system (5 km across) with a depth of at least 50 m, running NNW−SSE across the eastern Isle of Wight; only the western part of the channel wall and fill is exposed. … Earlier interpretations of the Santonian−Campanian chalks in the eastern Isle of Wight, involving penecontemporaneous tectonic inversion of the underlying basement structure, are rejected.
Reactivity in inverse micelles
International Nuclear Information System (INIS)
Brochette, Pascal
1987-01-01
This research thesis reports the study of the use of micro-emulsions of water in oil as reaction support. Only the 'inverse micelles' domain of the ternary mixing (water/AOT/isooctane) has been studied. The main addressed issues have been: the micro-emulsion disturbance in presence of reactants, the determination of reactant distribution and the resulting kinetic theory, the effect of the interface on electron transfer reactions, and finally protein solubilization. [fr]
Energy Technology Data Exchange (ETDEWEB)
Lambourne, Robert [Department of Physics and Astronomy, Open University, Milton Keynes (United Kingdom)
2005-11-01
This paper examines the challenges and rewards that can arise when the teaching of Einsteinian physics has to be accomplished by means of distance education. The discussion is mainly based on experiences gathered over the past 35 years at the UK Open University, where special and general relativity, relativistic cosmology and other aspects of Einsteinian physics, have been taught at a variety of levels, and using a range of techniques, to students studying at a distance.
Long distance quantum teleportation
Xia, Xiu-Xiu; Sun, Qi-Chao; Zhang, Qiang; Pan, Jian-Wei
2018-01-01
Quantum teleportation is a core protocol in quantum information science. Besides revealing the fascinating feature of quantum entanglement, quantum teleportation provides an ultimate way to distribute quantum state over extremely long distance, which is crucial for global quantum communication and future quantum networks. In this review, we focus on the long distance quantum teleportation experiments, especially those employing photonic qubits. From the viewpoint of real-world application, both the technical advantages and disadvantages of these experiments are discussed.
International Nuclear Information System (INIS)
Steinhauer, L.C.; Romea, R.D.; Kimura, W.D.
1997-01-01
A new method for laser acceleration is proposed based upon the inverse process of transition radiation. The laser beam intersects an electron-beam traveling between two thin foils. The principle of this acceleration method is explored in terms of its classical and quantum bases and its inverse process. A closely related concept based on the inverse of diffraction radiation is also presented: this concept has the significant advantage that apertures are used to allow free passage of the electron beam. These concepts can produce net acceleration because they do not satisfy the conditions in which the Lawson-Woodward theorem applies (no net acceleration in an unbounded vacuum). Finally, practical aspects such as damage limits at optics are employed to find an optimized set of parameters. For reasonable assumptions an acceleration gradient of 200 MeV/m requiring a laser power of less than 1 GW is projected. An interesting approach to multi-staging the acceleration sections is also presented. copyright 1997 American Institute of Physics
Intersections, ideals, and inversion
International Nuclear Information System (INIS)
Vasco, D.W.
1998-01-01
Techniques from computational algebra provide a framework for treating large classes of inverse problems. In particular, the discretization of many types of integral equations and of partial differential equations with undetermined coefficients leads to systems of polynomial equations. The structure of the solution set of such equations may be examined using algebraic techniques. For example, the existence and dimensionality of the solution set may be determined. Furthermore, it is possible to bound the total number of solutions. The approach is illustrated by a numerical application to the inverse problem associated with the Helmholtz equation. The algebraic methods are used in the inversion of a set of transverse electric (TE) mode magnetotelluric data from Antarctica. The existence of solutions is demonstrated and the number of solutions is found to be finite, bounded from above by 50. The best fitting structure is dominantly one dimensional with a low crustal resistivity of about 2 ohm-m. Such a low value is compatible with studies suggesting lower surface wave velocities than found in typical stable cratons.
Testing earthquake source inversion methodologies
Page, Morgan T.; Mai, Paul Martin; Schorlemmer, Danijel
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data
Voxel inversion of airborne electromagnetic data for improved model integration
Fiandaca, Gianluca; Auken, Esben; Kirkegaard, Casper; Vest Christiansen, Anders
2014-05-01
Inversion of electromagnetic data has migrated from single-site interpretations to inversions covering entire surveys, using spatial constraints to obtain geologically reasonable results. However, the model space is usually linked to the actual observation points: for airborne electromagnetic (AEM) surveys, the spatial discretization of the model space reflects the flight lines. On the contrary, geological and groundwater models most often refer to a regular voxel grid that is not correlated to the geophysical model space, and the geophysical information has to be relocated for integration in (hydro)geological models. We have developed a new geophysical inversion algorithm working directly in a voxel grid disconnected from the actual measuring points, which then allows geological/hydrogeological models to be informed directly. The new voxel model space defines the soil properties (like resistivity) on a set of nodes, and the distribution of the soil properties is computed everywhere by means of an interpolation function (e.g. inverse distance or kriging). Given this definition of the voxel model space, the 1D forward responses of the AEM data are computed as follows: 1) a 1D model subdivision, in terms of model thicknesses, is defined for each 1D data set, creating "virtual" layers; 2) the "virtual" 1D models at the sounding positions are finalized by interpolating the soil properties (the resistivity) in the center of the "virtual" layers; 3) the forward response is computed in 1D for each "virtual" model. We tested the new inversion scheme on an AEM survey carried out with the SkyTEM system close to Odder, in Denmark. The survey comprises 106054 dual-mode AEM soundings and covers an area of approximately 13 km × 16 km. The voxel inversion was carried out on a structured grid of 260 × 325 × 29 xyz nodes (50 m xy spacing), for a total of 2450500 inversion parameters. A classical spatially constrained inversion (SCI) was carried out on the same data set, using 106054 …
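The interpolation in step 2 of the voxel scheme above can be illustrated with a minimal inverse-distance-weighting sketch. The function name, the power parameter and the coincidence tolerance are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def idw(nodes, values, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of node values at a query point.
    nodes: (N, 3) array of xyz node positions; values: (N,) soil property
    (e.g. resistivity) defined on the voxel-grid nodes."""
    d = np.linalg.norm(nodes - query, axis=1)
    if np.any(d < eps):               # query coincides with a node
        return values[np.argmin(d)]
    w = 1.0 / d**power                # closer nodes dominate
    return np.sum(w * values) / np.sum(w)
```

A "virtual" layer is then assigned the value of `idw` evaluated at its center, after which the 1D forward response is computed on the resulting virtual model.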
Waveform inversion of lateral velocity variation from wavefield source location perturbation
Choi, Yun Seok; Alkhalifah, Tariq Ali
2013-01-01
It is a challenge in waveform inversion to define the deep part of the velocity model as precisely as the shallow part. The lateral velocity variation, or what is referred to as the derivative of velocity with respect to the horizontal distance …
Strength Training for Middle- and Long-Distance Performance: A Meta-Analysis.
Berryman, Nicolas; Mujika, Inigo; Arvisais, Denis; Roubeix, Marie; Binet, Carl; Bosquet, Laurent
2018-01-01
To assess the net effects of strength training on middle- and long-distance performance through a meta-analysis of the available literature. Three databases were searched, from which 28 of 554 potential studies met all inclusion criteria. Standardized mean differences (SMDs) were calculated and weighted by the inverse of variance to calculate an overall effect and its 95% confidence interval (CI). Subgroup analyses were conducted to determine whether the strength-training intensity, duration, and frequency and population performance level, age, sex, and sport were outcomes that might influence the magnitude of the effect. The implementation of a strength-training mesocycle in running, cycling, cross-country skiing, and swimming was associated with moderate improvements in middle- and long-distance performance (net SMD [95%CI] = 0.52 [0.33-0.70]). These results were associated with improvements in the energy cost of locomotion (0.65 [0.32-0.98]), maximal force (0.99 [0.80-1.18]), and maximal power (0.50 [0.34-0.67]). Maximal-force training led to greater improvements than other intensities. Subgroup analyses also revealed that beneficial effects on performance were consistent irrespective of the athletes' level. Taken together, these results provide a framework that supports the implementation of strength training in addition to traditional sport-specific training to improve middle- and long-distance performance, mainly through improvements in the energy cost of locomotion, maximal power, and maximal strength.
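The inverse-variance weighting used above to pool standardized mean differences can be sketched generically; this is a plain fixed-effect pooling, not the meta-analysis code used in the study:

```python
import math

def pooled_effect(effects, variances, z=1.96):
    """Fixed-effect meta-analysis: weight each standardized mean difference
    (SMD) by the inverse of its variance, pool, and return the overall
    effect with its 95% confidence interval."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))          # standard error of the pooled SMD
    return pooled, (pooled - z * se, pooled + z * se)
```

Studies with smaller variance (typically larger samples) thus pull the overall SMD toward their own estimate.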
DEFF Research Database (Denmark)
Ackerman, Margareta; Ben-David, Shai; Branzei, Simina
2012-01-01
We investigate a natural generalization of the classical clustering problem, considering clustering tasks in which different instances may have different weights. We conduct the first extensive theoretical analysis on the influence of weighted data on standard clustering algorithms in both the partitional and hierarchical settings, characterizing the conditions under which algorithms react to weights. Extending a recent framework for clustering algorithm selection, we propose intuitive properties that would allow users to choose between clustering algorithms in the weighted setting and classify …
Losada, David E.; Barreiro, Alvaro
2003-01-01
Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…
Introduction to Schroedinger inverse scattering
International Nuclear Information System (INIS)
Roberts, T.M.
1991-01-01
Schroedinger inverse scattering uses scattering coefficients and bound state data to compute underlying potentials. Inverse scattering has been studied extensively for isolated potentials q(x), which tend to zero as |x| → ∞. Inverse scattering for isolated impurities in backgrounds p(x) that are periodic, are Heaviside steps, are constant for x > 0 and periodic for x < 0, or that tend to zero as x → ∞ and tend to ∞ as x → −∞, has also been studied. This paper identifies literature for the five inverse problems just mentioned, and for four other inverse problems. Heaviside-step backgrounds are discussed at length. (orig.)
Epidemic spread over networks with agent awareness and social distancing
Paarporn, Keith; Eksin, Ceyhun; Weitz, Joshua S.; Shamma, Jeff S.
2016-01-01
… with their neighbors (social distancing) when they believe the epidemic is currently prevalent, or resume normal interactions when they believe there is low risk of becoming infected. The information is a weighted combination of three sources: 1) the average states …
DEFF Research Database (Denmark)
Hansen, Finn J. S.; Clausen, Christian
2001-01-01
The case study represents an example of a top-down introduction of distance teaching as part of Danish trials with the introduction of multimedia in education. The study is concerned with the background, aim and context of the trial as well as the role and working of the technology and the organisation …
Theoretical Principles of Distance Education.
Keegan, Desmond, Ed.
This book contains the following papers examining the didactic, academic, analytic, philosophical, and technological underpinnings of distance education: "Introduction"; "Quality and Access in Distance Education: Theoretical Considerations" (D. Randy Garrison); "Theory of Transactional Distance" (Michael G. Moore);…
Self-constrained inversion of potential fields
Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.
2013-11-01
We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential field-based constraints, for example, structural index, source boundaries and others, are usually enough to obtain substantial improvement in the density and magnetization models.
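The depth weighting mentioned above is commonly taken as a power of depth (Li-Oldenburg-style weighting); the exact function used in this paper, and how it is tied to the estimated structural index, may differ, so the following is only an illustrative sketch with assumed parameter values:

```python
import numpy as np

def depth_weights(z, z0=1.0, beta=2.0):
    """Depth weighting for potential-field inversion: counteracts the natural
    decay of the field kernel with depth so that deep cells can acquire
    model amplitude. beta is often chosen to match the field's decay rate
    (related to the structural index); z0 avoids a singularity at z = 0."""
    return (z + z0) ** (-beta / 2.0)
```

The weights enter the objective function as a diagonal weighting matrix on the model parameters, alongside any spatial (horizontal) weighting derived from the estimated source boundaries.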
Inverse Faraday Effect Revisited
Mendonça, J. T.; Ali, S.; Davies, J. R.
2010-11-01
The inverse Faraday effect is usually associated with circularly polarized laser beams. However, it was recently shown that it can also occur for linearly polarized radiation [1]. The quasi-static axial magnetic field generated by a laser beam propagating in plasma can be calculated by considering both the spin and the orbital angular momenta of the laser pulse. A net spin is present when the radiation is circularly polarized, and a net orbital angular momentum is present if there is any deviation from perfect rotational symmetry. This orbital angular momentum has recently been discussed in the plasma context [2], and can give an additional contribution to the axial magnetic field, thus enhancing or reducing the inverse Faraday effect. As a result, this effect that is usually attributed to circular polarization can also be excited by linearly polarized radiation, if the incident laser propagates in a Laguerre-Gauss mode carrying a finite amount of orbital angular momentum. [1] S. Ali, J.R. Davies and J.T. Mendonca, Phys. Rev. Lett. 105, 035001 (2010). [2] J. T. Mendonca, B. Thidé, and H. Then, Phys. Rev. Lett. 102, 185005 (2009).
Fast Computing for Distance Covariance
Huo, Xiaoming; Szekely, Gabor J.
2014-01-01
Distance covariance and distance correlation have been widely adopted in measuring dependence of a pair of random variables or random vectors. If the computation of distance covariance and distance correlation is implemented directly according to its definition, then its computational complexity is O($n^2$), which is a disadvantage compared to other faster methods. In this paper we show that the computation of distance covariance and distance correlation of real-valued random variables can be...
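The direct O(n²) computation referred to above follows from the definition via double-centred pairwise distance matrices (Székely et al.); a minimal sketch for real-valued samples, not the fast algorithm of the paper:

```python
import numpy as np

def distance_covariance(x, y):
    """Naive O(n^2) sample distance covariance of two real-valued samples,
    computed from double-centred pairwise distance matrices."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])          # pairwise |x_i - x_j|
    b = np.abs(y[:, None] - y[None, :])
    # double-centre: subtract row and column means, add back the grand mean
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return np.sqrt(max(np.mean(A * B), 0.0))     # guard tiny negative round-off
```

The fast method of the paper achieves the same quantity in O(n log n) for real-valued variables; this sketch is only the reference definition it speeds up.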
Generalized Distance Transforms and Skeletons in Graphics Hardware
Strzodka, R.; Telea, A.
2004-01-01
We present a framework for computing generalized distance transforms and skeletons of two-dimensional objects using graphics hardware. Our method is based on the concept of footprint splatting. Combining different splats produces weighted distance transforms for different metrics, as well as the …
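For reference, a brute-force (CPU, non-splatting) Euclidean distance transform makes the quantity being computed above concrete; this is not the GPU method of the paper:

```python
import numpy as np

def distance_transform(mask):
    """Brute-force Euclidean distance transform of a 2D boolean mask:
    for every pixel, the distance to the nearest True (object) pixel.
    O(width * height * #object_pixels) -- a reference implementation only."""
    pts = np.argwhere(mask)                      # coordinates of object pixels
    ys, xs = np.indices(mask.shape)
    d = np.full(mask.shape, np.inf)
    for (py, px) in pts:
        d = np.minimum(d, np.hypot(ys - py, xs - px))
    return d
```

Swapping `np.hypot` for another norm yields the weighted transforms for other metrics that the splatting framework produces in hardware.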
Sorting signed permutations by inversions in O(nlogn) time.
Swenson, Krister M; Rajan, Vaibhav; Lin, Yu; Moret, Bernard M E
2010-03-01
The study of genomic inversions (or reversals) has been a mainstay of computational genomics for nearly 20 years. After the initial breakthrough of Hannenhalli and Pevzner, who gave the first polynomial-time algorithm for sorting signed permutations by inversions, improved algorithms have been designed, culminating with an optimal linear-time algorithm for computing the inversion distance and a subquadratic algorithm for providing a shortest sequence of inversions--also known as sorting by inversions. Remaining open was the question of whether sorting by inversions could be done in O(nlogn) time. In this article, we present a qualified answer to this question, by providing two new sorting algorithms, a simple and fast randomized algorithm and a deterministic refinement. The deterministic algorithm runs in time O(nlogn + kn), where k is a data-dependent parameter. We provide the results of extensive experiments showing that both the average and the standard deviation for k are small constants, independent of the size of the permutation. We conclude (but do not prove) that almost all signed permutations can be sorted by inversions in O(nlogn) time.
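For very small permutations, the inversion distance discussed above can be computed by brute-force breadth-first search over signed reversals; this only illustrates the metric and is exponentially slower than the linear-time and O(n log n) algorithms of the literature:

```python
from collections import deque

def inversion_distance(perm):
    """Minimum number of signed reversals (reverse a segment and negate its
    signs) needed to sort perm into (1, 2, ..., n). BFS over the state
    space -- feasible only for tiny n."""
    n = len(perm)
    target = tuple(range(1, n + 1))
    start = tuple(perm)
    if start == target:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        p, d = queue.popleft()
        for i in range(n):
            for j in range(i, n):
                # reverse segment p[i..j] and flip its signs
                q = p[:i] + tuple(-x for x in reversed(p[i:j + 1])) + p[j + 1:]
                if q == target:
                    return d + 1
                if q not in seen:
                    seen.add(q)
                    queue.append((q, d + 1))
```

For example, the signed permutation (+2, +1) needs three reversals, a classic illustration that sign constraints make the problem differ from unsigned sorting.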
Displacement Parameter Inversion for a Novel Electromagnetic Underground Displacement Sensor
Directory of Open Access Journals (Sweden)
Nanying Shentu
2014-05-01
Full Text Available Underground displacement monitoring is an effective method to explore deep into rock and soil masses for execution of subsurface displacement measurements. It is not only an important means of geological hazard prediction and forecasting, but also a forefront, hot and sophisticated subject in current geological disaster monitoring. In previous research, the authors had designed a novel electromagnetic underground horizontal displacement sensor (called the H-type sensor) by combining basic electromagnetic induction principles with modern sensing techniques, and established a mutual voltage measurement theoretical model called the Equation-based Equivalent Loop Approach (EELA). Based on that work, this paper presents an underground displacement inversion approach named "EELA forward modeling-approximate inversion method". Combining the EELA forward simulation approach with approximate optimization inversion theory, it can deduce the underground horizontal displacement through parameter inversion of the H-type sensor. Comprehensive and comparative studies have been conducted between the experimentally measured and theoretically inverted values of horizontal displacement under counterpart conditions. The results show that when the measured horizontal displacements are in the 0–100 mm range, the horizontal displacement inversion discrepancy is generally less than 3 mm under varied tilt angle and initial axial distance conditions, which indicates that our proposed parameter inversion method can predict underground horizontal displacement measurements effectively and robustly for the H-type sensor and that the technique is applicable for practical geo-engineering applications.
Planning with Reachable Distances
Tang, Xinyu; Thomas, Shawna; Amato, Nancy M.
2009-01-01
… reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the robot's number of degrees of freedom. In addition …
DEFF Research Database (Denmark)
Jensen, Hanne Louise; de Neergaard, Maja
2016-01-01
De-severing Distance. This paper draws on the growing body of mobility literature showing that mobility can be viewed as meaningful everyday practice (Freudendal-Pedersen 2007, Cresswell 2006), and examines how Heidegger's term de-severing can help us understand the everyday coping with ...
Draisma, J.; Horobet, E.; Ottaviani, G.; Sturmfels, B.; Thomas, R.R.; Zhi, L.; Watt, M.
2014-01-01
The nearest point map of a real algebraic variety with respect to Euclidean distance is an algebraic function. For instance, for varieties of low-rank matrices, the Eckart-Young Theorem states that this map is given by the singular value decomposition. This article develops a theory of such nearest point maps …
Electromagnetic distance measurement
1967-01-01
This book brings together the work of forty-eight geodesists from twenty-five countries. They discuss various new electromagnetic distance measurement (EDM) instruments - among them the Tellurometer, Geodimeter, and air- and satellite-borne systems - and investigate the complex sources of error.
Determining average yarding distance.
Roger H. Twito; Charles N. Mann
1979-01-01
Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...
Rahman, Monsurur; Karim, Reza; Byramjee, Framarz
2015-01-01
Many educational institutions in the United States are currently offering programs through distance learning, and that trend is rising. In almost all spheres of education a developing country like Bangladesh needs to make available the expertise of the most qualified faculty to her distant people. But the fundamental question remains as to whether…
DEFF Research Database (Denmark)
Pedersen, Knud Ole Helgesen
1999-01-01
A method for implementing a digital distance relay in the power system is described. Instructions are given on how to program this relay on an 80537-based microcomputer system. The problem is used as a practical case study in the course 53113: Microcomputer applications in the power system. The relay …
Koekoek, J.; Koekoek, R.
1999-01-01
We look for differential equations satisfied by the generalized Jacobi polynomials which are orthogonal on the interval [-1,1] with respect to the Jacobi weight function (1-x)^α(1+x)^β supplemented by point masses M and N at x = -1 and x = 1, where α > -1, β > -1, M ≥ 0 and N ≥ 0. In order to find explicit formulas for the coefficients of these differential equations we …
Directory of Open Access Journals (Sweden)
Markus Spiliotis
Full Text Available Inverse fusion PCR cloning (IFPC) is an easy, PCR-based three-step cloning method that allows the seamless and directional insertion of PCR products into virtually all plasmids, with a free choice of the insertion site. The PCR-derived inserts contain a vector-complementary 5'-end that allows a fusion with the vector by an overlap extension PCR, and the resulting amplified insert-vector fusions are then circularized by ligation prior to transformation. A minimal amount of starting material is needed and experimental steps are reduced. Untreated circular plasmid, or alternatively bacteria containing the plasmid, can be used as templates for the insertion, and clean-up of the insert fragment is not urgently required. The whole cloning procedure can be performed within a minimal hands-on time and results in the generation of hundreds to tens of thousands of positive colonies, with a minimal background.
International Nuclear Information System (INIS)
Hicks, H.R.; Dory, R.A.; Holmes, J.A.
1983-01-01
We illustrate in some detail a 2D inverse-equilibrium solver that was constructed to analyze tokamak configurations and stellarators (the latter in the context of the average method). To ensure that the method is suitable not only to determine equilibria, but also to provide appropriately represented data for existing stability codes, it is important to be able to control the Jacobian, J̃ ≡ ∂(R,Z)/∂(ρ,θ). The form chosen is J̃ = J₀(ρ) R^l ρ, where ρ is a flux-surface label and l is an integer. The initial implementation is for a fixed conducting-wall boundary, but the technique can be extended to a free-boundary model.
Transmuted Generalized Inverse Weibull Distribution
Merovci, Faton; Elbatal, Ibrahim; Ahmed, Alaa
2013-01-01
A generalization of the generalized inverse Weibull distribution, the so-called transmuted generalized inverse Weibull distribution, is proposed and studied. We will use the quadratic rank transmutation map (QRTM) in order to generate a flexible family of probability distributions, taking the generalized inverse Weibull distribution as the base value distribution by introducing a new parameter that would offer more distributional flexibility. Various structural properties including explicit expression...
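The quadratic rank transmutation map named above has the closed form G(x) = (1 + λ)F(x) − λF(x)², with |λ| ≤ 1. A sketch applying it to an inverse Weibull (Fréchet-type) base CDF; the base parameterization here is an assumption for illustration, not necessarily the generalized form used in the paper:

```python
import math

def transmuted_cdf(base_cdf, lam):
    """Quadratic rank transmutation map: G(x) = (1 + lam)*F(x) - lam*F(x)^2,
    applied to an arbitrary base CDF F; lam = 0 recovers F itself."""
    if abs(lam) > 1:
        raise ValueError("transmutation parameter must satisfy |lam| <= 1")
    def G(x):
        F = base_cdf(x)
        return (1 + lam) * F - lam * F * F
    return G

def inverse_weibull_cdf(x, alpha=2.0, beta=1.0):
    """Illustrative inverse Weibull base CDF: exp(-(beta/x)^alpha) for x > 0."""
    return math.exp(-((beta / x) ** alpha)) if x > 0 else 0.0
```

Since G is a monotone polynomial in F with G(0) = 0 and G(1) = 1, the transmuted function is again a valid CDF for any admissible λ.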
Mahalanobis Distance Based Iterative Closest Point
DEFF Research Database (Denmark)
Hansen, Mads Fogtmann; Blas, Morten Rufus; Larsen, Rasmus
2007-01-01
… the notion of a Mahalanobis distance map upon a point set with associated covariance matrices which, in addition to providing correlation-weighted distance, implicitly provides a method for assigning correspondence during alignment. This distance map provides an easy formulation of the ICP problem that permits a fast optimization. Initially, the covariance matrices are set to the identity matrix, and all shapes are aligned to a randomly selected shape (equivalent to standard ICP). From this point the algorithm iterates between the steps: (a) obtain the mean shape and new estimates of the covariance matrices from the aligned shapes, (b) align shapes to the mean shape. Three different methods for estimating the mean shape with associated covariance matrices are explored in the paper. The proposed methods are validated experimentally on two separate datasets (IMM face dataset and femur-bones). The superiority of ICP...
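The correlation-weighted distance underlying the map above is the Mahalanobis form; a minimal sketch with the covariance supplied explicitly (the ICP machinery itself is omitted):

```python
import numpy as np

def mahalanobis(p, q, cov):
    """Covariance-weighted distance sqrt((p - q)^T cov^{-1} (p - q)).
    With cov = I it reduces to the Euclidean distance, matching the
    identity-covariance initialization described in the abstract."""
    d = np.asarray(p, float) - np.asarray(q, float)
    # solve instead of explicitly inverting the covariance matrix
    return float(np.sqrt(d @ np.linalg.solve(np.asarray(cov, float), d)))
```

A larger variance along a direction thus down-weights discrepancies along that direction, which is what lets the distance map encode point-wise uncertainty during alignment.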
Gualtieri, J. A.; Le Moigne, J.; Packer, C. V.
1992-01-01
Comparing two binary images and assigning a quantitative measure to this comparison finds its purpose in such tasks as image recognition, image compression, and image browsing. This quantitative measurement may be computed by utilizing the Hausdorff distance of the images represented as two-dimensional point sets. In this paper, we review two algorithms that have been proposed to compute this distance, and we present a parallel implementation of one of them on the MasPar parallel processor. We study their complexity and the results obtained by these algorithms for two different types of images: a set of displaced pairs of images of Gaussian densities, and a comparison of a Canny edge image with several edge images from a hierarchical region growing code.
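The Hausdorff distance between two point sets, as used above, can be computed directly from the pairwise distance matrix; a naive O(nm) serial sketch, not the parallel MasPar implementation reviewed in the paper:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between 2D point sets A and B
    (arrays of shape (n, 2) and (m, 2)): the larger of the two directed
    distances, where the directed distance from A to B is the largest
    nearest-neighbour distance of any point of A to the set B."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # all pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```

Applied to edge images, A and B are the coordinates of the edge pixels, so a small Hausdorff distance means every edge pixel of one image lies near some edge pixel of the other.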
THE EXTRAGALACTIC DISTANCE DATABASE
International Nuclear Information System (INIS)
Tully, R. Brent; Courtois, Helene M.; Jacobs, Bradley A.; Rizzi, Luca; Shaya, Edward J.; Makarov, Dmitry I.
2009-01-01
A database can be accessed on the Web at http://edd.ifa.hawaii.edu that was developed to promote access to information related to galaxy distances. The database has three functional components. First, tables from many literature sources have been gathered and enhanced with links through a distinct galaxy naming convention. Second, comparisons of results both at the levels of parameters and of techniques have begun and are continuing, leading to increasing homogeneity and consistency of distance measurements. Third, new material is presented arising from ongoing observational programs at the University of Hawaii 2.2 m telescope, radio telescopes at Green Bank, Arecibo, and Parkes and with the Hubble Space Telescope. This new observational material is made available in tandem with related material drawn from archives and passed through common analysis pipelines.
Capachi, Casey
2013-01-01
Distance to Cure. A three-part television series by Casey Capachi (www.distancetocure.com). How far would you go for health care? This three-part television series, featuring two introductory segments between each piece, focuses on the physical, cultural, and political obstacles facing rural Native American patients and the potential of health technology to break down those barriers to care. Part one, Telemedici...
Ultrametric Distance in Syntax
Directory of Open Access Journals (Sweden)
Roberts Mark D.
2015-04-01
Full Text Available Phrase structure trees have a hierarchical structure. In many subjects, most notably in taxonomy, such tree structures have been studied using ultrametrics. Here syntactical hierarchical phrase trees are subject to a similar analysis, which is much simpler as the branching structure is more readily discernible and switched. The ambiguity of which branching height to choose is resolved by postulating that branching occurs at the lowest height available. An ultrametric produces a measure of the complexity of sentences: presumably the complexity of sentences increases as a language is acquired, so that this can be tested. All ultrametric triangles are equilateral or isosceles. Here it is shown that X̅ structure implies that there are no equilateral triangles. Restricting attention to simple syntax, a minimum ultrametric distance between lexical categories is calculated. A matrix constructed from this ultrametric distance is shown to be different from the matrix obtained from features. It is shown that the definition of C-COMMAND can be replaced by an equivalent ultrametric definition. The new definition invokes a minimum distance between nodes, and this is more aesthetically satisfying than previous varieties of definitions. From the new definition of C-COMMAND follows a new definition of the central notion in syntax, namely GOVERNMENT.
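The ultrametric (strong triangle) inequality invoked above, d(x, z) ≤ max(d(x, y), d(y, z)), is equivalent to requiring that the two largest sides of every triangle coincide, which is exactly the "equilateral or isosceles" property. A small checker sketch (the metric is passed in as a function):

```python
from itertools import combinations

def is_ultrametric(d, points, tol=1e-9):
    """Check the strong (ultrametric) triangle inequality over all triples:
    every triangle must be isosceles with its two largest sides equal
    (equilateral being the special case of all three equal)."""
    for x, y, z in combinations(points, 3):
        sides = sorted([d(x, y), d(y, z), d(x, z)])
        if sides[2] > sides[1] + tol:   # the two largest sides must coincide
            return False
    return True
```

For a tree metric such as branching height in a phrase tree, the two deepest joins of any three leaves always coincide, which is why such hierarchies are naturally ultrametric.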
Calculation of the inverse data space via sparse inversion
Saragiotis, Christos; Doulgeris, Panagiotis C.; Verschuur, Dirk Jacob Eric
2011-01-01
The inverse data space provides a natural separation of primaries and surface-related multiples, as the surface multiples map onto the area around the origin while the primaries map elsewhere. However, the calculation of the inverse data is far from
Inverse feasibility problems of the inverse maximum flow problems
Indian Academy of Sciences (India)
Inverse feasibility problems of the inverse maximum flow problems. Adrian Deaconu and Eleonor Ciurea, Department of Mathematics and Computer Science, Faculty of Mathematics and Informatics, Transilvania University of Brasov, Iuliu Maniu st. 50, Brasov, Romania. Indian Academy of Sciences, pp. 199–209.
Polymer sol-gel composite inverse opal structures.
Zhang, Xiaoran; Blanchard, G J
2015-03-25
We report on the formation of composite inverse opal structures where the matrix used to form the inverse opal contains both silica, formed using sol-gel chemistry, and poly(ethylene glycol), PEG. We find that the morphology of the inverse opal structure depends on both the amount of PEG incorporated into the matrix and its molecular weight. The extent of organization in the inverse opal structure, which is characterized by scanning electron microscopy and optical reflectance data, is mediated by the chemical bonding interactions between the silica and PEG constituents in the hybrid matrix. Both polymer chain terminus Si-O-C bonding and hydrogen bonding between the polymer backbone oxygens and silanol functionalities can contribute, with the polymer mediating the extent to which Si-O-Si bonds can form within the silica regions of the matrix due to hydrogen-bonding interactions.
AI-guided parameter optimization in inverse treatment planning
International Nuclear Information System (INIS)
Yan Hui; Yin Fangfang; Guan Huaiqun; Kim, Jae Ho
2003-01-01
An artificial intelligence (AI)-guided inverse planning system was developed to optimize the combination of parameters in the objective function for intensity-modulated radiation therapy (IMRT). In this system, the empirical knowledge of inverse planning was formulated with fuzzy if-then rules, which then guide the parameter modification based on the on-line calculated dose. Three kinds of parameters (weighting factor, dose specification, and dose prescription) were automatically modified using the fuzzy inference system (FIS). The performance of the AI-guided inverse planning system (AIGIPS) was examined using simulated and clinical examples. Preliminary results indicate that the expected dose distribution was automatically achieved using the AI-guided inverse planning system, with the complicated compromise between different parameters accomplished by the fuzzy inference technique. The AIGIPS provides a highly promising method to replace the current trial-and-error approach.
Complex networks in the Euclidean space of communicability distances
Estrada, Ernesto
2012-06-01
We study the properties of complex networks embedded in a Euclidean space of communicability distances. The communicability distance between two nodes is defined as the difference between the weighted sum of walks self-returning to the nodes and the weighted sum of walks going from one node to the other. We give some indications that the communicability distance identifies the least crowded routes in networks where simultaneous submission of packages is taking place. We define an index Q based on communicability and shortest path distances, which allows reinterpreting the “small-world” phenomenon as the region of minimum Q in the Watts-Strogatz model. It also allows the classification and analysis of networks with different efficiency of spatial uses. Consequently, the communicability distance displays unique features for the analysis of complex networks in different scenarios.
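The communicability distance described above is commonly computed from the matrix exponential of the adjacency matrix, G = e^A, so that G_pp is the weighted sum of self-returning walks at p and G_pq the weighted sum of walks from p to q. A minimal sketch on a toy path graph (the graph is illustrative, not from the paper):

```python
import numpy as np
from scipy.linalg import expm

# Path graph 0-1-2-3 (illustrative). G = e^A weights walks of length k
# by 1/k!, so xi_pq = G_pp + G_qq - 2*G_pq is the difference between
# self-returning walks and walks between p and q, as described above.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

G = expm(A)
xi = np.diag(G)[:, None] + np.diag(G)[None, :] - 2.0 * G  # squared distances

# xi behaves like a squared Euclidean distance matrix: symmetric,
# zero on the diagonal, and larger for nodes farther apart on the path.
assert np.allclose(xi, xi.T)
assert np.allclose(np.diag(xi), 0.0)
assert xi[0, 3] > xi[0, 1] > 0
```

Taking square roots of `xi` gives the distances used to embed the network in Euclidean space.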
Face inversion increases attractiveness.
Leder, Helmut; Goller, Juergen; Forster, Michael; Schlageter, Lena; Paul, Matthew A
2017-07-01
Assessing facial attractiveness is a ubiquitous, inherent, and hard-wired phenomenon in everyday interactions. As such, it has highly adapted to the default way that faces are typically processed: viewing faces in upright orientation. By inverting faces, we can disrupt this default mode and study how facial attractiveness is assessed. Faces, rotated by 90° (tilting to either side) and 180°, were rated on attractiveness and distinctiveness scales. For both orientations, we found that rotated faces were rated more attractive and less distinctive than upright faces. Importantly, these effects were more pronounced for faces rated low in upright orientation, and smaller for highly attractive faces. In other words, the less attractive a face was, the more it gained in attractiveness by inversion or rotation. Based on these findings, we argue that facial attractiveness assessments might not rely on the presence of attractive facial characteristics, but on the absence of distinctive, unattractive characteristics. These unattractive characteristics are potentially weighed against an individual, attractive prototype in assessing facial attractiveness. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Inverse problem in hydrogeology
Carrera, Jesús; Alcolea, Andrés; Medina, Agustín; Hidalgo, Juan; Slooten, Luit J.
2005-03-01
The state of the groundwater inverse problem is synthesized. Emphasis is placed on aquifer characterization, where modelers have to deal with conceptual model uncertainty (notably spatial and temporal variability), scale dependence, many types of unknown parameters (transmissivity, recharge, boundary conditions, etc.), nonlinearity, and often low sensitivity of state variables (typically heads and concentrations) to aquifer properties. Because of these difficulties, calibration cannot be separated from the modeling process, as is sometimes done in other fields. Instead, it should be viewed as one step in the process of understanding aquifer behavior. In fact, it is shown that actual parameter estimation methods do not differ from each other in essence, though they may differ in the computational details. It is argued that there is ample room for improvement in groundwater inversion: development of user-friendly codes, accommodation of variability through geostatistics, incorporation of geological information and different types of data (temperature, occurrence and concentration of isotopes, age, etc.), proper accounting of uncertainty, etc. Despite this, even with existing codes, automatic calibration greatly facilitates the task of modeling. Therefore, it is contended that its use should become standard practice.
Zhang, Dongliang
2013-01-01
To increase the illumination of the subsurface and to eliminate the dependency of FWI on the source wavelet, we propose multiples waveform inversion (MWI), which transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. These virtual sources are used to numerically generate downgoing wavefields that are correlated with the backprojected surface-related multiples to give the migration image. Since the recorded data are treated as the virtual sources, knowledge of the source wavelet is not required, and the subsurface illumination is greatly enhanced because the entire free surface acts as an extended source compared to the radiation pattern of a traditional point source. Numerical tests on the Marmousi2 model show that MWI converges faster and achieves higher spatial resolution than FWI. The potential pitfall with this method is that the multiples undergo more than one roundtrip to the surface, which increases attenuation and reduces spatial resolution. This can lead to less resolved tomograms compared to conventional FWI. A possible solution is to combine both FWI and MWI in inverting for the subsurface velocity distribution.
An interpretation of signature inversion
International Nuclear Information System (INIS)
Onishi, Naoki; Tajima, Naoki
1988-01-01
An interpretation in terms of the cranking model is presented to explain why signature inversion occurs for positive γ of the axially asymmetric deformation parameter and emerges into specific orbitals. By introducing a continuous variable, the eigenvalue equation can be reduced to a one dimensional Schroedinger equation by means of which one can easily understand the cause of signature inversion. (author)
Inverse problems for Maxwell's equations
Romanov, V G
1994-01-01
The Inverse and Ill-Posed Problems Series is a series of monographs publishing postgraduate level information on inverse and ill-posed problems for an international readership of professional scientists and researchers. The series aims to publish works which involve both theory and applications in, e.g., physics, medicine, geophysics, acoustics, electrodynamics, tomography, and ecology.
Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves
Chen, W.; Ni, S.; Wang, Z.
2011-12-01
In the classical source parameter inversion algorithm of CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanisms, depth, and moment) for earthquakes well recorded on relatively dense seismic networks. However, for regions covered with sparse stations, it is challenging to achieve precise source parameters. In this case, a moderate earthquake of ~M6 is usually recorded on only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, the combination of teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh, and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and explore the efficiency with bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike can be much improved.
Relativistic distances, sizes, lengths
International Nuclear Information System (INIS)
Strel'tsov, V.N.
1992-01-01
Such notions as light or retarded distance, field size, formation way, visible size of a body, relativistic or radar length, and the wavelength of light from a moving atom are considered. The relation between these notions is clarified and their classification is given. It is stressed that the formation way is defined by the field size of a moving particle. In the case of the electromagnetic field, longitudinal sizes increase proportionally to γ² with growing charge velocity (γ is the Lorentz factor). 18 refs
2016-03-02
where B_ψ is any Bregman divergence and η_t is the learning-rate parameter. From (Hall & Willett, 2015) we have: Theorem 1. G_ℓ = max_{θ∈Θ, ℓ∈L} ‖∇f(θ)‖, φ_max = 1 ... Kullback–Leibler divergence between an initial guess of the matrix that parameterizes the Mahalanobis distance and a solution that satisfies a set of ... Bregman divergence and η_t is the learning-rate parameter. M̂_0 and μ̂_0 are initialized to some initial value. In [18] a closed-form algorithm for solving
Algebraic properties of generalized inverses
Cvetković‐Ilić, Dragana S
2017-01-01
This book addresses selected topics in the theory of generalized inverses. Following a discussion of the “reverse order law” problem and certain problems involving completions of operator matrices, it subsequently presents a specific approach to solving the problem of the reverse order law for {1} -generalized inverses. Particular emphasis is placed on the existence of Drazin invertible completions of an upper triangular operator matrix; on the invertibility and different types of generalized invertibility of a linear combination of operators on Hilbert spaces and Banach algebra elements; on the problem of finding representations of the Drazin inverse of a 2x2 block matrix; and on selected additive results and algebraic properties for the Drazin inverse. In addition to the clarity of its content, the book discusses the relevant open problems for each topic discussed. Comments on the latest references on generalized inverses are also included. Accordingly, the book will be useful for graduate students, Ph...
Sakellaridis, Yiannis
2014-01-01
Let H be a split reductive group over a local non-archimedean field, and let H^ denote its Langlands dual group. We present an explicit formula for the generating function of an unramified L-function associated to a highest weight representation of the dual group, considered as a series of elements in the Hecke algebra of H. This offers an alternative approach to a solution of the same problem by Wen-Wei Li. Moreover, we generalize the notion of "Satake transform" and perform the analogous ca...
Deep-Focusing Time-Distance Helioseismology
Duvall, T. L., Jr.; Jensen, J. M.; Kosovichev, A. G.; Birch, A. C.; Fisher, Richard R. (Technical Monitor)
2001-01-01
Much progress has been made by measuring the travel times of solar acoustic waves from a central surface location to points at equal arc distance away. Depth information is obtained from the range of arc distances examined, with the larger distances revealing the deeper layers. This method we will call surface-focusing, as the common point, or focus, is at the surface. To obtain a clearer picture of the subsurface region, it would, no doubt, be better to focus on points below the surface. Our first attempt to do this used the ray theory to pick surface location pairs that would focus on a particular subsurface point. This is not the ideal procedure, as Born approximation kernels suggest that this focus should have zero sensitivity to sound speed inhomogeneities. However, the sensitivity is concentrated below the surface in a much better way than the old surface-focusing method, and so we expect the deep-focusing method to be more sensitive. A large sunspot group was studied by both methods. Inversions based on both methods will be compared.
Time-Distance Helioseismology: Noise Estimation
Gizon, L.; Birch, A. C.
2004-10-01
As in global helioseismology, the dominant source of noise in time-distance helioseismology measurements is realization noise due to the stochastic nature of the excitation mechanism of solar oscillations. Characterizing noise is important for the interpretation and inversion of time-distance measurements. In this paper we introduce a robust definition of travel time that can be applied to very noisy data. We then derive a simple model for the full covariance matrix of the travel-time measurements. This model depends only on the expectation value of the filtered power spectrum and assumes that solar oscillations are stationary and homogeneous on the solar surface. The validity of the model is confirmed through comparison with SOHO MDI measurements in a quiet-Sun region. We show that the correlation length of the noise in the travel times is about half the dominant wavelength of the filtered power spectrum. We also show that the signal-to-noise ratio in quiet-Sun travel-time maps increases roughly as the square root of the observation time and is at maximum for a distance near half the length scale of supergranulation.
PERBANDINGAN EUCLIDEAN DISTANCE DENGAN CANBERRA DISTANCE PADA FACE RECOGNITION
Directory of Open Access Journals (Sweden)
Sendhy Rachmat Wurdianarto
2014-08-01
Full Text Available The development of computer science has been very rapid. One sign of this is that computer science has entered the world of biometrics. Biometrics refers to human characteristics that can be used to distinguish one person from another. One way of using a characteristic or organ present in every human for identification (recognition) is to use the face. Against this background, this work explores a Matlab application for face recognition using the Euclidean Distance and Canberra Distance methods. The application development model used is the waterfall model, which comprises a sequence of process activities: requirements analysis, design using UML (Unified Modeling Language), and processing of input images using Euclidean Distance and Canberra Distance. The conclusion that can be drawn is that the face recognition application using the Euclidean Distance and Canberra Distance methods has advantages and shortcomings for each method. In the future, the application can be extended to use video or other objects as input. Keywords: Euclidean Distance, Face Recognition, Biometrics, Canberra Distance
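A minimal sketch of the two distance measures compared above, with a nearest-neighbour matcher of the kind a face-recognition pipeline would use (the feature vectors and gallery are illustrative, not from the paper):

```python
import numpy as np

def euclidean(u, v):
    """Straight-line distance between feature vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(np.sqrt(np.sum((u - v) ** 2)))

def canberra(u, v):
    """Canberra distance: a sum of coordinate-wise relative differences.
    Terms where |u_i| + |v_i| == 0 contribute zero by convention."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    denom = np.abs(u) + np.abs(v)
    safe = np.where(denom > 0, denom, 1.0)
    terms = np.where(denom > 0, np.abs(u - v) / safe, 0.0)
    return float(np.sum(terms))

# A matcher compares a probe feature vector against a gallery and returns
# the closest identity (vectors here are illustrative).
gallery = {"person_a": [0.9, 0.1, 0.4], "person_b": [0.2, 0.8, 0.7]}
probe = [0.85, 0.15, 0.35]
best = min(gallery, key=lambda k: canberra(probe, gallery[k]))
assert best == "person_a"
```

The Canberra distance normalizes each coordinate by its magnitude, so it is more sensitive to differences in small-valued features than the Euclidean distance, which is one source of the trade-offs the abstract mentions.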
Distance collaborations with industry
Energy Technology Data Exchange (ETDEWEB)
Peskin, A.; Swyler, K.
1998-06-01
The college-industry relationship has been identified as a key policy issue in engineering education. Collaborations between academic institutions and the industrial sector have a long history and a bright future. For Engineering and Engineering Technology programs in particular, industry has played a crucial role in many areas including advisement, financial support, and practical training of both faculty and students. Among the most important and intimate interactions are collaborative projects and formal cooperative education arrangements. Most recently, such collaborations have taken on a new dimension, as advances in technology have made possible meaningful technical collaboration at a distance. There are several obvious technology areas that have contributed significantly to this trend. Foremost is the ubiquitous presence of the Internet. Perhaps almost as important are advances in computer-based imaging. Because visual images offer a compelling user experience, they afford greater knowledge-transfer efficiency than other modes of delivery. Furthermore, the quality of the image appears to have a strongly correlated effect on insight. A good visualization facility offers both a means for communication and a shared information space for the subjects, which are among the essential features of both peer collaboration and distance learning.
International Nuclear Information System (INIS)
Kimura, W.D.
1993-01-01
The final report describes work performed to investigate inverse Cherenkov acceleration (ICA) as a promising method for laser particle acceleration. In particular, an improved configuration of ICA is being tested in an experiment presently underway on the Accelerator Test Facility (ATF). In the experiment, the high peak power (∼10 GW) linearly polarized ATF CO₂ laser beam is converted to a radially polarized beam. This beam is focused with an axicon at the Cherenkov angle onto the ATF 50-MeV e-beam inside a hydrogen gas cell, where the gas acts as the phase-matching medium of the interaction. An energy gain of ∼12 MeV is predicted assuming a delivered laser peak power of 5 GW. The experiment is divided into two phases. The Phase I experiments, which were completed in the spring of 1992, were conducted before the ATF e-beam was available and involved several successful tests of the optical systems. Phase II experiments are with the e-beam and laser beam, and are still in progress. The ATF demonstrated delivery of the e-beam to the experiment in Dec. 1992. A preliminary ''debugging'' run with the e-beam and laser beam occurred in May 1993. This revealed the need for some experimental modifications, which have been implemented. The second run is tentatively scheduled for October or November 1993. In parallel with the experimental efforts, ongoing theoretical work has supported the experiment and investigated improvements and/or offshoots. One exciting offshoot has been theoretical work showing that free-space laser acceleration of electrons is possible using a radially polarized, axicon-focused laser beam, but without any phase-matching gas. The Monte Carlo code used to model the ICA process has been upgraded and expanded to handle different types of laser beam input profiles
TOPSIS with statistical distances: A new approach to MADM
Directory of Open Access Journals (Sweden)
Vijaya Babu Vommi
2017-01-01
Full Text Available Multiple attribute decision making (MADM) methods are very useful in choosing the best alternative among the available finite but conflicting alternatives. TOPSIS is one of the MADM methods, and is simple in its methodology and logic. In TOPSIS, the Euclidean distances of each alternative from the positive and negative ideal solutions are utilized to find the best alternative. In the literature, apart from Euclidean distances, city block distances have also been tried as separation measures. In general, the attribute data are distributed with unequal ranges and also possess moderate to high correlations. Hence, in the present paper, the use of statistical distances is proposed in place of Euclidean distances. Procedures to find the best alternatives are developed using statistical and weighted statistical distances, respectively. The proposed methods are illustrated with some industrial problems taken from the literature. Results show that the proposed methods can be used as new alternatives in MADM for choosing the best solutions.
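The idea above can be sketched by swapping the Euclidean separation measures of standard TOPSIS for a Mahalanobis-type statistical distance, which accounts for unequal ranges and correlations among attributes. The decision matrix, weights, and criteria directions below are illustrative, not taken from the paper:

```python
import numpy as np

def topsis_statistical(X, weights, benefit):
    """Rank alternatives with TOPSIS, using a Mahalanobis-type distance.
    X: (alternatives x attributes); weights sum to 1; benefit[j] is True
    when larger values of attribute j are better (False for cost attributes)."""
    X = np.asarray(X, float)
    # Vector-normalize the columns, then apply weights (standard TOPSIS steps).
    V = weights * X / np.linalg.norm(X, axis=0)
    # Positive and negative ideal solutions, per attribute direction.
    pis = np.where(benefit, V.max(axis=0), V.min(axis=0))
    nis = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # Statistical distance: inverse covariance of the weighted scores
    # replaces the identity matrix of the Euclidean distance.
    S_inv = np.linalg.pinv(np.cov(V, rowvar=False))
    def dist(a, b):
        diff = a - b
        return float(np.sqrt(diff @ S_inv @ diff))
    d_plus = np.array([dist(v, pis) for v in V])
    d_minus = np.array([dist(v, nis) for v in V])
    return d_minus / (d_plus + d_minus)   # relative closeness; higher is better

X = [[250, 16, 12], [200, 16, 8], [300, 32, 16], [275, 32, 8]]
weights = np.array([0.4, 0.35, 0.25])
benefit = np.array([False, True, True])   # cost, benefit, benefit
scores = topsis_statistical(X, weights, benefit)
ranking = np.argsort(-scores)             # best alternative first
```

Replacing `S_inv` with the identity matrix recovers classical Euclidean TOPSIS, which makes the two variants easy to compare on the same data.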
A Generalization of the Spherical Inversion
Ramírez, José L.; Rubiano, Gustavo N.
2017-01-01
In the present article, we introduce a generalization of the spherical inversion. In particular, we define an inversion with respect to an ellipsoid, and prove several properties of this new transformation. The inversion in an ellipsoid is the generalization of the elliptic inversion to the three-dimensional space. We also study the inverse images…
An adaptive distance measure for use with nonparametric models
International Nuclear Information System (INIS)
Garvey, D. R.; Hines, J. W.
2006-01-01
Distance measures perform a critical task in nonparametric, locally weighted regression. Locally weighted regression (LWR) models are a form of 'lazy learning' which construct a local model 'on the fly' by comparing a query vector to historical, exemplar vectors according to a three-step process. First, the distance of the query vector to each of the exemplar vectors is calculated. Next, these distances are passed to a kernel function, which converts the distances to similarities or weights. Finally, the model output or response is calculated by performing locally weighted polynomial regression. To date, traditional distance measures, such as the Euclidean, weighted Euclidean, and L1-norm, have been used as the first step in the prediction process. Since these measures do not take into consideration sensor failures and drift, they are inherently ill-suited for application to 'real world' systems. This paper describes one such LWR model, namely auto-associative kernel regression (AAKR), and describes a new, Adaptive Euclidean distance measure that can be used to dynamically compensate for faulty sensor inputs. In this new distance measure, the query observations that lie outside of the training range (i.e. outside the minimum and maximum input exemplars) are dropped from the distance calculation. This allows the distance calculation to be robust to sensor drifts and failures, in addition to providing a method for managing inputs that exceed the training range. In this paper, AAKR models using the standard and Adaptive Euclidean distances are developed and compared for the pressure system of an operating nuclear power plant. It is shown that when the standard Euclidean distance is used for data with failed inputs, significant errors in the AAKR predictions can result. By using the Adaptive Euclidean distance it is shown that high-fidelity predictions are possible, in spite of the input failure. In fact, it is shown that with the Adaptive Euclidean distance prediction
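The adaptive idea above lends itself to a short sketch: query components falling outside the exemplar training range are dropped before the Euclidean distance and kernel weighting. The Gaussian kernel, bandwidth, and sensor data are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def aakr_predict(X, query, bandwidth=1.0):
    """Auto-associative kernel regression with an adaptive Euclidean
    distance: query components outside the [min, max] range of the
    exemplars (e.g. a drifted or failed sensor) are excluded from the
    distance. X: (exemplars x sensors); returns corrected sensor estimates."""
    X = np.asarray(X, float)
    q = np.asarray(query, float)
    ok = (q >= X.min(axis=0)) & (q <= X.max(axis=0))   # in-range sensors only
    d = np.sqrt(np.sum((X[:, ok] - q[ok]) ** 2, axis=1))
    w = np.exp(-(d / bandwidth) ** 2)                  # Gaussian kernel weights
    return (w @ X) / w.sum()                           # weighted exemplar mean

rng = np.random.default_rng(0)
t = rng.uniform(0, 1, size=200)
X = np.column_stack([t, 2 * t, 3 * t])     # three correlated "sensors"
query = np.array([0.5, 1.0, 9.9])          # sensor 3 has failed high
est = aakr_predict(X, query, bandwidth=0.1)
# The failed channel is reconstructed from the healthy, correlated ones.
assert abs(est[2] - 1.5) < 0.2
```

Because the failed component is excluded from the distance, exemplars are still matched on the healthy sensors, and the weighted mean returns a plausible value for the faulty channel.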
Interactive Distance Learning in Connecticut.
Pietras, Jesse John; Murphy, Robert J.
This paper provides an overview of distance learning activities in Connecticut and addresses the feasibility of such activities. Distance education programs have evolved from the one dimensional electronic mail systems to the use of sophisticated digital fiber networks. The Middlesex Distance Learning Consortium has developed a long-range plan to…
Distance covariance for stochastic processes
DEFF Research Database (Denmark)
Matsui, Muneya; Mikosch, Thomas Valentin; Samorodnitsky, Gennady
2017-01-01
The distance covariance of two random vectors is a measure of their dependence. The empirical distance covariance and correlation can be used as statistical tools for testing whether two random vectors are independent. We propose an analog of the distance covariance for two stochastic processes...
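The empirical distance covariance mentioned above can be sketched with the standard double-centering construction (this is the usual biased V-statistic estimator for 1-D samples; the data below are illustrative):

```python
import numpy as np

def _double_center(D):
    # Subtract row and column means of the distance matrix, add the grand mean.
    return (D - D.mean(axis=0, keepdims=True)
              - D.mean(axis=1, keepdims=True) + D.mean())

def dcov(x, y):
    """Biased (V-statistic) empirical distance covariance of paired samples."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    A = _double_center(np.abs(x[:, None] - x[None, :]))
    B = _double_center(np.abs(y[:, None] - y[None, :]))
    # The mean of A*B equals dCov^2 and is non-negative; clip float noise.
    return float(np.sqrt(max((A * B).mean(), 0.0)))

rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = x ** 2                     # depends on x, yet essentially uncorrelated
z = rng.standard_normal(500)   # independent of x

# Distance covariance is symmetric and non-negative, and it can detect
# the nonlinear x-y dependence that plain correlation would miss.
assert abs(dcov(x, y) - dcov(y, x)) < 1e-12
assert dcov(x, z) >= 0.0
```

For independence testing, the statistic is typically normalized to a distance correlation and compared against a permutation or asymptotic null distribution.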
DISTANCES TO DARK CLOUDS: COMPARING EXTINCTION DISTANCES TO MASER PARALLAX DISTANCES
International Nuclear Information System (INIS)
Foster, Jonathan B.; Jackson, James M.; Stead, Joseph J.; Hoare, Melvin G.; Benjamin, Robert A.
2012-01-01
We test two different methods of using near-infrared extinction to estimate distances to dark clouds in the first quadrant of the Galaxy using large near-infrared (Two Micron All Sky Survey and UKIRT Infrared Deep Sky Survey) surveys. Very long baseline interferometry parallax measurements of masers around massive young stars provide the most direct and bias-free measurement of the distance to these dark clouds. We compare the extinction distance estimates to these maser parallax distances. We also compare these distances to kinematic distances, including recent re-calibrations of the Galactic rotation curve. The extinction distance methods agree with the maser parallax distances (within the errors) between 66% and 100% of the time (depending on method and input survey) and between 85% and 100% of the time outside of the crowded Galactic center. Although the sample size is small, extinction distance methods reproduce maser parallax distances better than kinematic distances; furthermore, extinction distance methods do not suffer from the kinematic distance ambiguity. This validation gives us confidence that these extinction methods may be extended to additional dark clouds where maser parallaxes are not available.
Statistical perspectives on inverse problems
DEFF Research Database (Denmark)
Andersen, Kim Emil
Inverse problems arise in many scientific disciplines and pertain to situations where inference is to be made about a particular phenomenon from indirect measurements. A typical example, arising in diffusion tomography, is the inverse boundary value problem for non-invasive reconstruction of the interior of an object from electrical boundary measurements. One part of this thesis concerns statistical approaches for solving, possibly non-linear, inverse problems. Thus inverse problems are recast in a form suitable for statistical inference. In particular, a Bayesian approach for regularisation ... problem is given in terms of probability distributions. Posterior inference is obtained by Markov chain Monte Carlo methods, and new, powerful simulation techniques based on e.g. coupled Markov chains and simulated tempering are developed to improve the computational efficiency of the overall simulation ...
Size Estimates in Inverse Problems
Di Cristo, Michele
2014-01-01
Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problems very useful in practical applications. When only finite numbers of measurements are available, we try to detect some information on the embedded
Wave-equation dispersion inversion
Li, Jing; Feng, Zongcai; Schuster, Gerard T.
2016-01-01
We present the theory for wave-equation inversion of dispersion curves, where the misfit function is the sum of the squared differences between the wavenumbers along the predicted and observed dispersion curves. The dispersion curves are obtained
Testing earthquake source inversion methodologies
Page, Morgan T.
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.
Parameter estimation and inverse problems
Aster, Richard C; Thurber, Clifford H
2005-01-01
Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web to facilitate use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...
Inversion Therapy: Can It Relieve Back Pain?
Inversion therapy: Can it relieve back pain? Does inversion therapy relieve back pain? Is it safe? Answers from Edward R. Laskowski, M.D. Inversion therapy doesn't provide lasting relief from back ...
Thermal measurements and inverse techniques
Orlande, Helcio RB; Maillet, Denis; Cotta, Renato M
2011-01-01
With its uncommon presentation of instructional material regarding mathematical modeling, measurements, and solution of inverse problems, Thermal Measurements and Inverse Techniques is a one-stop reference for those dealing with various aspects of heat transfer. Progress in mathematical modeling of complex industrial and environmental systems has enabled numerical simulations of most physical phenomena. In addition, recent advances in thermal instrumentation and heat transfer modeling have improved experimental procedures and indirect measurements for heat transfer research of both natural phe
Computation of inverse magnetic cascades
International Nuclear Information System (INIS)
Montgomery, D.
1981-10-01
Inverse cascades of magnetic quantities for turbulent incompressible magnetohydrodynamics are reviewed, for two and three dimensions. The theory is extended to the Strauss equations, a description intermediate between two and three dimensions appropriate to tokamak magnetofluids. Consideration of the absolute equilibrium Gibbs ensemble for the system leads to a prediction of an inverse cascade of magnetic helicity, which may manifest itself as a major disruption. An agenda for computational investigation of this conjecture is proposed.
Blocky inversion of multichannel elastic impedance for elastic parameters
Mozayan, Davoud Karami; Gholami, Ali; Siahkoohi, Hamid Reza
2018-04-01
Petrophysical description of reservoirs requires proper knowledge of elastic parameters like P- and S-wave velocities (Vp and Vs) and density (ρ), which can be retrieved from pre-stack seismic data using the concept of elastic impedance (EI). We propose an inversion algorithm which recovers elastic parameters from pre-stack seismic data in two sequential steps. In the first step, using the multichannel blind seismic inversion method (exploited recently for recovering acoustic impedance from post-stack seismic data), high-resolution blocky EI models are obtained directly from partial angle-stacks. Using an efficient total-variation (TV) regularization, each angle-stack is inverted independently in a multichannel form without prior knowledge of the corresponding wavelet. The second step involves inversion of the resulting EI models for elastic parameters. Mathematically, under some assumptions, the EI's are linearly described by the elastic parameters in the logarithm domain. Thus a linear weighted least squares inversion is employed to perform this step. Accuracy of the concept of elastic impedance in predicting reflection coefficients at low and high angles of incidence is compared with that of exact Zoeppritz elastic impedance and the role of low frequency content in the problem is discussed. The performance of the proposed inversion method is tested using synthetic 2D data sets obtained from the Marmousi model and also 2D field data sets. The results confirm the efficiency and accuracy of the proposed method for inversion of pre-stack seismic data.
International Nuclear Information System (INIS)
Tsuchihashi, Toshio; Maki, Toshio; Suzuki, Takeshi
1997-01-01
The fast inversion recovery (fast IR) pulse sequence was evaluated. We compared the fast fluid attenuated inversion recovery (fast FLAIR) pulse sequence, in which inversion time (TI) was set equal to the water null point to obtain a water-suppressed T2-weighted image, with the fast short TI inversion recovery (fast STIR) pulse sequence, in which TI was set equal to the fat null point for fat suppression. In the fast FLAIR pulse sequence, the water null point was increased by making TR longer. In the FLAIR pulse sequence, the longitudinal magnetization contrast is determined by TI. If TI is increased, T2-weighted contrast improves in the same way as increasing TR does for the SE pulse sequence. Therefore, images should be taken with long TR and long TI, with TI longer than the water null point. On the other hand, the fat null point is not affected by TR in the fast STIR pulse sequence; however, the null point varied with effective TE, increasing in proportion to the increase in effective TE. Our evaluation indicated that the fast STIR pulse sequence can suppress the extensive signals from fat in a short time. (author)
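The TR dependence of the null point described in this record follows from the standard inversion-recovery signal model S ∝ 1 − 2·exp(−TI/T1) + exp(−TR/T1); a minimal sketch (the T1 value used is an illustrative assumption, not a value from the paper):

```python
import math

def ir_null_ti(t1, tr):
    """TI that nulls tissue with relaxation time t1 in an inversion-recovery
    sequence with repetition time tr, solving 1 - 2*exp(-TI/T1) + exp(-TR/T1) = 0."""
    return t1 * math.log(2.0 / (1.0 + math.exp(-tr / t1)))

# Illustrative T1 for a long-T1 fluid (assumed value, ms): as TR grows, the
# null TI grows toward the familiar limit T1 * ln(2).
t1_fluid = 4000.0
print(round(ir_null_ti(t1_fluid, 6000.0)))   # shorter TR -> shorter null TI
print(round(ir_null_ti(t1_fluid, 10000.0)))  # longer TR -> longer null TI
```

This reproduces the behaviour noted above: lengthening TR pushes the water null point to longer TI.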
EDITORIAL: Inverse Problems in Engineering
West, Robert M.; Lesnic, Daniel
2007-01-01
Presented here are 11 noteworthy papers selected from the Fifth International Conference on Inverse Problems in Engineering: Theory and Practice held in Cambridge, UK during 11-15 July 2005. The papers have been peer-reviewed to the usual high standards of this journal and the contributions of reviewers are much appreciated. The conference featured a good balance of the fundamental mathematical concepts of inverse problems with a diverse range of important and interesting applications, which are represented here by the selected papers. Aspects of finite-element modelling and the performance of inverse algorithms are investigated by Autrique et al and Leduc et al. Statistical aspects are considered by Emery et al and Watzenig et al with regard to Bayesian parameter estimation and inversion using particle filters. Electrostatic applications are demonstrated by van Berkel and Lionheart and also Nakatani et al. Contributions to the applications of electrical techniques and specifically electrical tomographies are provided by Wakatsuki and Kagawa, Kim et al and Kortschak et al. Aspects of inversion in optical tomography are investigated by Wright et al and Douiri et al. The authors are representative of the worldwide interest in inverse problems relating to engineering applications and their efforts in producing these excellent papers will be appreciated by many readers of this journal.
Defining functional distances over Gene Ontology
Directory of Open Access Journals (Sweden)
del Pozo Angela
2008-01-01
Full Text Available Abstract Background A fundamental problem when trying to define the functional relationships between proteins is the difficulty in quantifying functional similarities, even when well-structured ontologies exist regarding the activity of proteins (i.e. 'gene ontology', GO). However, functional metrics can overcome the problems in comparing and evaluating functional assignments and predictions. As a reference of proximity, previous approaches to compare GO terms considered linkage in the ontology weighted by a probability distribution that balances the non-uniform 'richness' of different parts of the Directed Acyclic Graph. Here, we have followed a different approach to quantify functional similarities between GO terms. Results We propose a new method to derive 'functional distances' between GO terms that is based on the simultaneous occurrence of terms in the same set of Interpro entries, instead of relying on the structure of the GO. The coincidence of GO terms reveals natural biological links between the GO functions and defines a distance model Df which fulfils the properties of a Metric Space. The distances obtained in this way can be represented as a hierarchical 'Functional Tree'. Conclusion The method proposed provides a new definition of distance that enables the similarity between GO terms to be quantified. Additionally, the 'Functional Tree' defines groups with biological meaning, enhancing its utility for protein function comparison and prediction. Finally, this approach could be used for function-based protein searches in databases, and for analysing the gene clusters produced by DNA array experiments.
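A co-occurrence-based distance of the kind this record describes can be sketched with a Jaccard-style measure over the entry sets two terms appear in. This is an illustration of the general idea, not the authors' exact Df metric, and the entry identifiers are hypothetical:

```python
def cooccurrence_distance(entries_a, entries_b):
    """Jaccard-style distance between two terms, computed from the sets of
    database entries each term is annotated in (hypothetical data)."""
    a, b = set(entries_a), set(entries_b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

# Hypothetical Interpro-entry sets for two GO terms:
kinase   = {"IPR000719", "IPR008271", "IPR011009"}
atp_bind = {"IPR000719", "IPR011009", "IPR017441"}
print(cooccurrence_distance(kinase, atp_bind))  # 0.5
```

Terms that never share an entry get distance 1; identical annotation sets get distance 0.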
[Osteoarthritis from long-distance running?].
Hohmann, E; Wörtler, K; Imhoff, A
2005-06-01
Long distance running has become a fashionable recreational activity. This study investigated the effects of the external impact loading on bone and cartilage introduced by performing a marathon race. Seven beginners were compared to six experienced recreational long distance runners and two professional athletes. All participants underwent magnetic resonance imaging of the hip and knee before and after a marathon run. Coronal T1-weighted and STIR sequences were used. The pre-run MRI served as a baseline investigation and monitored the training effect. All athletes demonstrated normal findings in the pre-run scan. All but one athlete in the beginner group demonstrated joint effusions after the race. The experienced and professional runners failed to demonstrate pathology in the post-run scans. Recreational and professional long distance runners tolerate high impact forces well. Beginners demonstrate significant changes on the post-run scans. Whether those findings are a result of inadequate training (miles and duration) warrants further study. We conclude that adequate endurance training results in adaptation mechanisms that allow the athlete to compensate for the stresses introduced by long distance running and does not predispose to the onset of osteoarthritis. Significant malalignment of the lower extremity may cause increased focal loading of the joint and cartilage.
Distance Magic-Type and Distance Antimagic-Type Labelings of Graphs
Freyberg, Bryan J.
Generally speaking, a distance magic-type labeling of a graph G of order n is a bijection l from the vertex set of the graph to the first n natural numbers or to the elements of a group of order n, with the property that the weight of each vertex is the same. The weight of a vertex x is defined as the sum (or appropriate group operation) of all the labels of vertices adjacent to x. If instead we require that all weights differ, then we refer to the labeling as a distance antimagic-type labeling. This idea can be generalized for directed graphs; the weight will take into consideration the direction of the arcs. In this manuscript, we provide new results for d-handicap labeling, a distance antimagic-type labeling, and introduce a new distance magic-type labeling called orientable Gamma-distance magic labeling. A d-handicap distance antimagic labeling (or just d-handicap labeling for short) of a graph G = (V,E) of order n is a bijection l from V to the set {1,2,...,n} with induced weight function [special characters omitted] such that l(xi) = i and the sequence of weights w(x1),w(x2),...,w(xn) forms an arithmetic sequence with constant difference d at least 1. If a graph G admits a d-handicap labeling, we say G is a d-handicap graph. A d-handicap incomplete tournament, H(n,k,d), is an incomplete tournament of n teams ranked with the first n natural numbers such that each team plays exactly k games and the strength of schedule of the ith ranked team is d more than that of the (i+1)st ranked team. That is, strength of schedule increases arithmetically with strength of team. Constructing an H(n,k,d) is equivalent to finding a d-handicap labeling of a k-regular graph of order n. In Chapter 2 we provide general constructions for every d for large classes of both n and k, providing breadth and depth to the catalog of known H(n,k,d)'s. In Chapters 3 - 6, we introduce a new type of labeling called orientable Gamma-distance magic labeling. Let Gamma be an abelian group of order
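The defining condition above (every vertex weight equal, where a vertex's weight is the sum of its neighbours' labels) is easy to check mechanically; a small sketch on an assumed example graph:

```python
def is_distance_magic(adj, labels):
    """Check whether 'labels' (a bijection vertices -> 1..n) gives every
    vertex the same weight, the weight of a vertex being the sum of the
    labels of its neighbours."""
    weights = {v: sum(labels[u] for u in nbrs) for v, nbrs in adj.items()}
    return len(set(weights.values())) == 1

# The 4-cycle v0-v1-v2-v3-v0: with labels 1,2,4,3 each pair of opposite
# vertices sums to 5, so every vertex weight is 5 and the labeling is magic.
c4 = {0: [3, 1], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(is_distance_magic(c4, {0: 1, 1: 2, 2: 4, 3: 3}))  # True
print(is_distance_magic(c4, {0: 1, 1: 2, 2: 3, 3: 4}))  # False
```

A distance antimagic-type labeling would instead require all the computed weights to be pairwise distinct.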
... such diets limit your nutritional intake, can be unhealthy, and tend to fail in the long run. The key to achieving and maintaining a healthy weight isn't about short-term dietary changes. It's about a lifestyle that includes healthy eating, regular physical activity, and ...
DEFF Research Database (Denmark)
Engelbrechtsen, Line; Iepsen, Eva Pers Winning; Galijatovic, Ehm Astrid Andersson
2016-01-01
increased during weight loss (p = 5.2 × 10⁻¹⁵) and showed inverse correlation with insulin resistance measured by HOMA-IR levels (r = −0.318, p = 0.025). Valine concentrations were lower in the control group compared to the GLP-1RA group during weight maintenance (p = 0.005). Conclusion Weight loss...
Directory of Open Access Journals (Sweden)
Meng-Meng Shan
2016-01-01
Full Text Available With respect to multicriteria supplier selection problems with interval 2-tuple linguistic information, a new decision making approach that uses distance measures is proposed. Motivated by the ordered weighted distance (OWD measures, in this paper, we develop some interval 2-tuple linguistic distance operators such as the interval 2-tuple weighted distance (ITWD, the interval 2-tuple ordered weighted distance (ITOWD, and the interval 2-tuple hybrid weighted distance (ITHWD operators. These aggregation operators are very useful for the treatment of input data in the form of interval 2-tuple linguistic variables. We study some desirable properties of the ITOWD operator and further generalize it by using the generalized and the quasi-arithmetic means. Finally, the new approach is utilized to complete a supplier selection study for an actual hospital from the healthcare industry.
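The ordered weighted distance idea that motivates the ITWD/ITOWD/ITHWD operators can be sketched for plain real vectors (this is the generic OWD construction, not the interval 2-tuple linguistic version developed in the paper):

```python
def owd(a, b, weights, lam=1.0):
    """Ordered weighted distance between equal-length real vectors: sort the
    componentwise absolute differences in decreasing order, then take a
    weighted generalized mean of order lam over them."""
    diffs = sorted((abs(x - y) for x, y in zip(a, b)), reverse=True)
    return sum(w * d ** lam for w, d in zip(weights, diffs)) ** (1.0 / lam)

# With equal weights and lam = 1 this reduces to the mean absolute difference;
# skewing the weights toward the front emphasizes the largest deviations.
print(owd([1.0, 5.0, 3.0], [2.0, 2.0, 3.0], [1/3, 1/3, 1/3]))
print(owd([1.0, 5.0, 3.0], [2.0, 2.0, 3.0], [0.6, 0.3, 0.1]))
```

The interval 2-tuple operators in the paper apply the same reordering-then-aggregating pattern to linguistic assessment values rather than raw numbers.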
Planning with Reachable Distances
Tang, Xinyu
2009-01-01
Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the robot's number of degrees of freedom. In addition to supporting efficient sampling, we show that the RD-space formulation naturally supports planning, and in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed chain planning with multiple loops, restricted end effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed chain systems with 1000 links in time comparable to open chain sampling, and we can generate samples for 1000-link multi-loop systems of varying topology in less than a second. © 2009 Springer-Verlag.
Weighted approximation with varying weight
Totik, Vilmos
1994-01-01
A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof of the strong asymptotics for some L^p extremal problems on the real line with exponential weights, which, for the case p=2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L^p extremal problems with varying weights. Applications are given, relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.
Are contemporary tourists consuming distance?
DEFF Research Database (Denmark)
Larsen, Gunvor Riber
2012
Background The background for this research, which explores how tourists represent distance and whether or not distance can be said to be consumed by contemporary tourists, is the increasing leisure mobility of people. Travelling for the purpose of visiting friends and relatives is increasing...... of understanding mobility at a conceptual level, and distance matters to people's manifest mobility: how they travel and how far they travel are central elements of their movements. Therefore leisure mobility (indeed all mobility) is the activity of relating across distance, either through actual corporeal...... metric representation. These representations are the focus for this research. Research Aim and Questions The aim of this research is thus to explore how distance is being represented within the context of leisure mobility. Further the aim is to explore how or whether distance is being consumed...
Solving inverse problems for biological models using the collage method for differential equations.
Capasso, V; Kunze, H E; La Torre, D; Vrscay, E R
2013-07-01
In the first part of this paper we show how inverse problems for differential equations can be solved using the so-called collage method. Inverse problems can be solved by minimizing the collage distance in an appropriate metric space. We then provide several numerical examples in mathematical biology. We consider applications of this approach to the following areas: population dynamics, mRNA and protein concentration, bacteria and amoeba cells interaction, tumor growth.
Review of the inverse scattering problem at fixed energy in quantum mechanics
Sabatier, P. C.
1972-01-01
Methods of solution of the inverse scattering problem at fixed energy in quantum mechanics are presented. Scattering experiments of a beam of particles at a nonrelativistic energy by a target made up of particles are analyzed. The Schroedinger equation is used to develop the quantum mechanical description of the system and one of several functions depending on the relative distance of the particles. The inverse problem is the construction of the potentials from experimental measurements.
Distance : between deixis and perspectivity
Meermann, Anastasia; Sonnenhauser, Barbara
2015-01-01
Discussing exemplary applications of the notion of distance in linguistic analysis, this paper shows that very different phenomena are described in terms of this concept. It is argued that in order to overcome the problems arising from this mixup, deixis, distance and perspectivity have to be distinguished and their interrelations need to be described. Thereby, distance emerges as part of a recursive process mediating between situation-bound deixis and discourse-level perspectivity. This is i...
Inverse source problems in elastodynamics
Bao, Gang; Hu, Guanghui; Kian, Yavar; Yin, Tao
2018-04-01
We are concerned with time-dependent inverse source problems in elastodynamics. The source term is supposed to be the product of a spatial function and a temporal function with compact support. We present frequency-domain and time-domain approaches to show uniqueness in determining the spatial function from wave fields on a large sphere over a finite time interval. The stability estimate of the temporal function from the data of one receiver and the uniqueness result using partial boundary data are proved. Our arguments rely heavily on the use of the Fourier transform, which motivates inversion schemes that can be easily implemented. A Landweber iterative algorithm for recovering the spatial function and a non-iterative inversion scheme based on the uniqueness proof for recovering the temporal function are proposed. Numerical examples are demonstrated in both two and three dimensions.
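The Landweber iteration mentioned above is a generic fixed-step gradient scheme for linear inverse problems; a minimal sketch on a small dense system (an illustrative operator, not the elastodynamic source problem itself):

```python
import numpy as np

def landweber(A, b, tau, iters):
    """Landweber iteration x_{k+1} = x_k + tau * A^T (b - A x_k) for the
    linear system A x = b; converges for 0 < tau < 2 / ||A||^2."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + tau * A.T @ (b - A @ x)
    return x

# Small well-posed example: recover x_true from exact data b = A x_true.
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true
x_rec = landweber(A, b, tau=0.2, iters=500)
print(np.allclose(x_rec, x_true, atol=1e-6))  # True
```

For ill-posed problems the iteration count itself acts as the regularization parameter: stopping early suppresses noise amplification.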
Inversion of the star transform
International Nuclear Information System (INIS)
Zhao, Fan; Schotland, John C; Markel, Vadim A
2014-01-01
We define the star transform as a generalization of the broken ray transform introduced by us in previous work. The advantages of using the star transform include the possibility to reconstruct the absorption and the scattering coefficients of the medium separately and simultaneously (from the same data) and the possibility to utilize scattered radiation which, in the case of conventional x-ray tomography, is discarded. In this paper, we derive the star transform from physical principles, discuss its mathematical properties and analyze the numerical stability of inversion. In particular, it is shown that stable inversion of the star transform can be obtained only for configurations involving an odd number of rays. Several computationally efficient inversion algorithms are derived and tested numerically. (paper)
Inverse photoemission of uranium oxides
International Nuclear Information System (INIS)
Roussel, P.; Morrall, P.; Tull, S.J.
2009-01-01
Understanding the itinerant-localised bonding role of the 5f electrons in the light actinides will afford an insight into their unusual physical and chemical properties. In recent years, the combination of core and valence band electron spectroscopies with theoretical modelling has already made significant progress in this area. However, information on the unoccupied density of states is still scarce. When compared to forward photoemission techniques, measurements of the unoccupied states suffer from significantly less sensitivity and lower resolution. In this paper, we report on our experimental apparatus, which is designed to measure the inverse photoemission spectra of the light actinides. Inverse photoemission spectra of UO2 and UO2.2, along with the corresponding core and valence electron spectra, are presented in this paper. UO2 has been reported previously, although its inclusion here allows us to compare and contrast results from our experimental apparatus with previous Bremsstrahlung Isochromat Spectroscopy and Inverse Photoemission Spectroscopy investigations.
Optimization for nonlinear inverse problem
International Nuclear Information System (INIS)
Boyadzhiev, G.; Brandmayr, E.; Pinat, T.; Panza, G.F.
2007-06-01
The nonlinear inversion of geophysical data in general does not yield a unique solution, but a single model, representing the investigated field, is preferred for an easy geological interpretation of the observations. The analyzed region is constituted by a number of sub-regions where the multi-valued nonlinear inversion is applied, which leads to a multi-valued solution. Therefore, combining the values of the solution in each sub-region, many acceptable models are obtained for the entire region, and this complicates the geological interpretation of geophysical investigations. This paper presents new methodologies capable of selecting one model, among all acceptable ones, that satisfies different criteria of smoothness in the explored space of solutions. In this work we focus on the non-linear inversion of surface wave dispersion curves, which gives structural models of shear-wave velocity versus depth, but the basic concepts have a general validity. (author)
Some Phenomena on Negative Inversion Constructions
Sung, Tae-Soo
2013-01-01
We examine the characteristics of NDI (negative degree inversion) and its relation with other inversion phenomena such as SVI (subject-verb inversion) and SAI (subject-auxiliary inversion). The negative element in the NDI construction may be "not," a negative adverbial, or a negative verb. In this respect, NDI has similar licensing…
Dynamics of an N-vortex state at small distances
Ovchinnikov, Yu. N.
2013-01-01
We investigate the dynamics of a state of N vortices, placed at the initial instant at small distances from some point close to the "weight center" of the vortices. The general solution of the time-dependent Ginzburg-Landau equation for N vortices in a large time interval is found. For N = 2, the position of the "weight center" of the two vortices is time independent. For N ≥ 3, the position of the "weight center" depends weakly on time and is located in a range of the order of a³, where a is the characteristic distance of a single vortex from the "weight center." For N = 3, the time evolution of the N-vortex state is fixed by the position of the vortices at any time instant and by the values of two small parameters. For N ≥ 4, a new parameter arises in the problem, connected with relative increases in the number of decay modes.
The Inverse of Banded Matrices
2013-01-01
In this paper, generalizing a method of Mallik (1999), we give the LU factorization and the inverse of the banded matrix B_{r,n} (if it exists), with the remaining un-indexed entries all zeros.
Size Estimates in Inverse Problems
Di Cristo, Michele
2014-01-06
Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problem very useful in practical applications. When only a finite number of measurements is available, we try to detect some information on the embedded object, such as its size. In this talk we review some recent results on several inverse problems. The idea is to provide constructive upper and lower estimates of the area/volume of the unknown defect in terms of a quantity related to the work that can be expressed with the available boundary data.
N-Dimensional Fractional Lagrange's Inversion Theorem
Directory of Open Access Journals (Sweden)
F. A. Abd El-Salam
2013-01-01
Full Text Available Using the Riemann-Liouville fractional differential operator, a fractional extension of the Lagrange inversion theorem and related formulas are developed. The required basic definitions, lemmas, and theorems in the fractional calculus are presented. A fractional form of Lagrange's expansion for one implicitly defined independent variable is obtained. Then, a fractional version of Lagrange's expansion in more than one unknown function is generalized. For extending the treatment to higher dimensions, some relevant vector and tensor definitions and notations are presented. A fractional Taylor expansion of a function of N-dimensional polyadics is derived. A fractional N-dimensional Lagrange inversion theorem is proved.
Darwin's "strange inversion of reasoning".
Dennett, Daniel
2009-06-16
Darwin's theory of evolution by natural selection unifies the world of physics with the world of meaning and purpose by proposing a deeply counterintuitive "inversion of reasoning" (according to a 19th century critic): "to make a perfect and beautiful machine, it is not requisite to know how to make it" [MacKenzie RB (1868) (Nisbet & Co., London)]. Turing proposed a similar inversion: to be a perfect and beautiful computing machine, it is not requisite to know what arithmetic is. Together, these ideas help to explain how we human intelligences came to be able to discern the reasons for all of the adaptations of life, including our own.
Inverse transport theory and applications
International Nuclear Information System (INIS)
Bal, Guillaume
2009-01-01
Inverse transport consists of reconstructing the optical properties of a domain from measurements performed at the domain's boundary. This review concerns several types of measurements: time-dependent, time-independent, angularly resolved and angularly averaged measurements. We review recent results on the reconstruction of the optical parameters from such measurements and the stability of such reconstructions. Inverse transport finds applications e.g. in medical imaging (optical tomography, optical molecular imaging) and in geophysical imaging (remote sensing in the Earth's atmosphere). (topical review)
Inverse Interval Matrix: A Survey
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří; Farhadsefat, R.
2011-01-01
Roč. 22, - (2011), s. 704-719 E-ISSN 1081-3810 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords: interval matrix * inverse interval matrix * NP-hardness * enclosure * unit midpoint * inverse sign stability * nonnegative invertibility * absolute value equation * algorithm Subject RIV: BA - General Mathematics Impact factor: 0.808, year: 2010 http://www.math.technion.ac.il/iic/ela/ela-articles/articles/vol22_pp704-719.pdf
Lee, Sun-Min; Lee, Jung-Hoon
2015-01-01
[Purpose] The purpose of this study was to report the effects of ankle inversion taping using kinesiology tape in a patient with a medial ankle sprain. [Subject] A 28-year-old amateur soccer player suffered a Grade 2 medial ankle sprain during a match. [Methods] Ankle inversion taping was applied to the sprained ankle every day for 2 months. [Results] His symptoms were reduced after ankle inversion taping application for 2 months. The self-reported function score, the reach distances in the S...
Inversion of self-potential anomalies caused by 2D inclined sheets using neural networks
International Nuclear Information System (INIS)
El-Kaliouby, Hesham M; Al-Garni, Mansour A
2009-01-01
The modular neural network (MNN) inversion method has been used for inversion of self-potential (SP) data anomalies caused by 2D inclined sheets of infinite horizontal extent. The analysed parameters are the depth (h), the half-width (a), the inclination (α), the zero distance from the origin (x₀) and the polarization amplitude (k). The MNN inversion was first tested on a synthetic example and then applied to two field examples from the Surda area of Rakha mines, India, and the Kalava fault zone, India. The effect of random noise has been studied, and the technique showed satisfactory results. The inversion results show good agreement with the measured field data compared with other inversion techniques in use.
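A forward model relating the parameters listed above to the observed anomaly is what any such inversion is trained against. A sketch using the closed form commonly quoted for the SP anomaly of a 2D inclined sheet (an assumed standard formula, with illustrative parameter values, not data from the paper):

```python
import math

def sp_inclined_sheet(x, k, h, a, alpha, x0):
    """SP anomaly of a 2D inclined sheet of infinite strike at profile
    position x: k is the polarization amplitude, h the depth to the sheet
    centre, a the half-width, alpha the inclination (radians), x0 the
    horizontal offset of the sheet centre from the origin."""
    dx = x - x0
    num = (dx - a * math.cos(alpha)) ** 2 + (h - a * math.sin(alpha)) ** 2
    den = (dx + a * math.cos(alpha)) ** 2 + (h + a * math.sin(alpha)) ** 2
    return k * math.log(num / den)

# Synthetic anomaly profile over hypothetical parameters:
profile = [sp_inclined_sheet(x, k=-50.0, h=10.0, a=5.0,
                             alpha=math.radians(30), x0=0.0)
           for x in range(-50, 51, 10)]
print(len(profile))  # 11 samples; anomaly decays toward zero far from x0
```

Training pairs for an MNN-style inversion would be generated by sampling (h, a, α, x₀, k) and evaluating this forward model, then learning the inverse mapping from profiles back to parameters.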
Two-Dimensional Linear Inversion of GPR Data with a Shifting Zoom along the Observation Line
Directory of Open Access Journals (Sweden)
Raffaele Persico
2017-09-01
Full Text Available Linear inverse scattering problems can be solved by regularized inversion of a matrix, whose calculation and inversion may require significant computing resources, in particular, a significant amount of RAM memory. This effort is dependent on the extent of the investigation domain, which drives a large amount of data to be gathered and a large number of unknowns to be looked for, when this domain becomes electrically large. This leads, in turn, to the problem of inversion of excessively large matrices. Here, we consider the problem of a ground-penetrating radar (GPR survey in two-dimensional (2D geometry, with antennas at an electrically short distance from the soil. In particular, we present a strategy to afford inversion of large investigation domains, based on a shifting zoom procedure. The proposed strategy was successfully validated using experimental radar data.
Testing the gravitational inverse-square law
International Nuclear Information System (INIS)
Adelberger, Eric; Heckel, B.; Hoyle, C.D.
2005-01-01
If the universe contains more than three spatial dimensions, as many physicists believe, our current laws of gravity should break down at small distances. When Isaac Newton realized that the acceleration of the Moon as it orbited around the Earth could be related to the acceleration of an apple as it fell to the ground, it was the first time that two seemingly unrelated physical phenomena had been 'unified'. The quest to unify all the forces of nature is one that still keeps physicists busy today. Newton showed that the gravitational attraction between two point bodies is proportional to the product of their masses and inversely proportional to the square of the distance between them. Newton's theory, which assumes that the gravitational force acts instantaneously, remained essentially unchallenged for roughly two centuries until Einstein proposed the general theory of relativity in 1915. Einstein's radical new theory made gravity consistent with the two basic ideas of relativity: the world is 4D - the three directions of space combined with time - and no physical effect can travel faster than light. The theory of general relativity states that gravity is not a force in the usual sense but a consequence of the curvature of this space-time produced by mass or energy. However, in the limit of low velocities and weak gravitational fields, Einstein's theory still predicts that the gravitational force between two point objects obeys an inverse-square law. One of the outstanding challenges in physics is to finish what Newton started and achieve the ultimate 'grand unification' - to unify gravity with the other three fundamental forces (the electromagnetic force, and the strong and weak nuclear forces) into a single quantum theory. In string theory - one of the leading candidates for an ultimate theory - the fundamental entities of nature are 1D strings and higher-dimensional objects called 'branes', rather than the point-like particles we are familiar with. String
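The inverse-square law under test in such experiments is Newton's familiar expression; a trivial numerical check of the defining scaling:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def newton_force(m1, m2, r):
    """Newtonian attraction between two point masses: F = G * m1 * m2 / r^2."""
    return G * m1 * m2 / r ** 2

# Doubling the separation quarters the force -- the signature that
# short-distance tests look for deviations from.
f_near = newton_force(5.0, 10.0, 2.0)
f_far = newton_force(5.0, 10.0, 4.0)
print(f_near / f_far)  # 4.0
```

Extra compact dimensions would modify this r⁻² scaling below some length scale, which is why the experiments focus on gravity at small separations.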
Energy Technology Data Exchange (ETDEWEB)
McQuinn, Kristen B. W. [University of Texas at Austin, McDonald Observatory, 2515 Speedway, Stop C1400 Austin, TX 78712 (United States); Skillman, Evan D. [Minnesota Institute for Astrophysics, School of Physics and Astronomy, 116 Church Street, SE, University of Minnesota, Minneapolis, MN 55455 (United States); Dolphin, Andrew E. [Raytheon Company, 1151 E. Hermans Road, Tucson, AZ 85756 (United States); Berg, Danielle [Center for Gravitation, Cosmology and Astrophysics, Department of Physics, University of Wisconsin Milwaukee, 1900 East Kenwood Boulevard, Milwaukee, WI 53211 (United States); Kennicutt, Robert, E-mail: kmcquinn@astro.as.utexas.edu [Institute for Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom)
2016-11-01
M104 (NGC 4594; the Sombrero galaxy) is a nearby, well-studied elliptical galaxy included in scores of surveys focused on understanding the details of galaxy evolution. Despite the importance of observations of M104, a consensus distance has not yet been established. Here, we use newly obtained Hubble Space Telescope optical imaging to measure the distance to M104 based on the tip of the red giant branch (TRGB) method. Our measurement yields a distance to M104 of 9.55 ± 0.13 ± 0.31 Mpc, equivalent to a distance modulus of 29.90 ± 0.03 ± 0.07 mag. Our distance is an improvement over previous results as we use a well-calibrated, stable distance indicator, precision photometry in an optimally selected field of view, and a Bayesian maximum likelihood technique that reduces measurement uncertainties. The most discrepant previous results are due to Tully–Fisher method distances, which are likely inappropriate for M104 given its peculiar morphology and structure. Our results are part of a larger program to measure accurate distances to a sample of well-known spiral galaxies (including M51, M74, and M63) using the TRGB method.
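The quoted distance and distance modulus can be cross-checked against the standard relation mu = 5 log10(d / 10 pc); a minimal sketch:

```python
import math

def distance_modulus(d_mpc):
    """Distance modulus mu = 5 log10(d / 10 pc), with d given in megaparsecs."""
    return 5 * math.log10(d_mpc * 1e6) - 5

# 9.55 Mpc corresponds to mu = 29.90 mag, matching the quoted values.
assert abs(distance_modulus(9.55) - 29.90) < 0.01
```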
Energy Technology Data Exchange (ETDEWEB)
McQuinn, Kristen B. W. [University of Texas at Austin, McDonald Observatory, 2515 Speedway, Stop C1400 Austin, TX 78712 (United States); Skillman, Evan D. [Minnesota Institute for Astrophysics, School of Physics and Astronomy, 116 Church Street, S.E., University of Minnesota, Minneapolis, MN 55455 (United States); Dolphin, Andrew E. [Raytheon Company, 1151 E. Hermans Road, Tucson, AZ 85756 (United States); Berg, Danielle [Center for Gravitation, Cosmology and Astrophysics, Department of Physics, University of Wisconsin Milwaukee, 1900 East Kenwood Boulevard, Milwaukee, WI 53211 (United States); Kennicutt, Robert, E-mail: kmcquinn@astro.as.utexas.edu [Institute for Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom)
2016-07-20
Great investments of observing time have been dedicated to the study of nearby spiral galaxies, with diverse goals ranging from understanding the star formation process to characterizing their dark matter distributions. Accurate distances are fundamental to interpreting observations of these galaxies, yet many of the best-studied nearby galaxies have distances based on methods with relatively large uncertainties. We have started a program to derive accurate distances to these galaxies. Here we measure the distance to M51—the Whirlpool galaxy—from newly obtained Hubble Space Telescope optical imaging using the tip of the red giant branch method. We measure the distance to be 8.58 ± 0.10 Mpc (statistical), corresponding to a distance modulus of 29.67 ± 0.02 mag. Our distance is an improvement over previous results as we use a well-calibrated, stable distance indicator, precision photometry in an optimally selected field of view, and a Bayesian maximum likelihood technique that reduces measurement uncertainties.
McQuinn, Kristen. B. W.; Skillman, Evan D.; Dolphin, Andrew E.; Berg, Danielle; Kennicutt, Robert
2016-07-01
Great investments of observing time have been dedicated to the study of nearby spiral galaxies, with diverse goals ranging from understanding the star formation process to characterizing their dark matter distributions. Accurate distances are fundamental to interpreting observations of these galaxies, yet many of the best-studied nearby galaxies have distances based on methods with relatively large uncertainties. We have started a program to derive accurate distances to these galaxies. Here we measure the distance to M51—the Whirlpool galaxy—from newly obtained Hubble Space Telescope optical imaging using the tip of the red giant branch method. We measure the distance to be 8.58 ± 0.10 Mpc (statistical), corresponding to a distance modulus of 29.67 ± 0.02 mag. Our distance is an improvement over previous results as we use a well-calibrated, stable distance indicator, precision photometry in an optimally selected field of view, and a Bayesian maximum likelihood technique that reduces measurement uncertainties. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
Distance criterion for hydrogen bond
Indian Academy of Sciences (India)
Distance criterion for hydrogen bond. In a D-H...A contact, the D...A distance must be less than the sum of the van der Waals radii of the D and A atoms for it to be a hydrogen bond.
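The stated criterion reduces to a one-line comparison. The van der Waals radii below are commonly tabulated (Bondi) values, included here as assumptions rather than values taken from the record:

```python
# Bondi van der Waals radii in angstroms (assumed illustrative values).
VDW_RADII = {"H": 1.20, "N": 1.55, "O": 1.52, "F": 1.47, "S": 1.80}

def is_hydrogen_bond_by_distance(donor, acceptor, d_angstrom):
    """Distance criterion: the D...A separation must be below the sum of
    the van der Waals radii of the donor and acceptor atoms."""
    return d_angstrom < VDW_RADII[donor] + VDW_RADII[acceptor]

# A typical N-H...O contact at 2.9 A passes (1.55 + 1.52 = 3.07 A); 3.2 A does not.
assert is_hydrogen_bond_by_distance("N", "O", 2.9)
assert not is_hydrogen_bond_by_distance("N", "O", 3.2)
```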
Social Distance and Intergenerational Relations
Kidwell, I. Jane; Booth, Alan
1977-01-01
Questionnaires were administered to a sample of adults to assess the extent of social distance between people of different ages. The findings suggest that the greater the age difference (younger or older) between people, the greater the social distance they feel. (Author)
Quality Content in Distance Education
Yildiz, Ezgi Pelin; Isman, Aytekin
2016-01-01
In parallel with technological advances in today's world of education activities can be conducted without the constraints of time and space. One of the most important of these activities is distance education. The success of the distance education is possible with content quality. The proliferation of e-learning environment has brought a need for…
Virtual Bioinformatics Distance Learning Suite
Tolvanen, Martti; Vihinen, Mauno
2004-01-01
Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…
The Psychology of Psychic Distance
DEFF Research Database (Denmark)
Håkanson, Lars; Ambos, Björn; Schuster, Anja
2016-01-01
…and their theoretical underpinnings assume psychic distances to be symmetric. Building on insights from psychology and sociology, this paper demonstrates how national factors and cognitive processes interact in the formation of asymmetric distance perceptions. The results suggest that exposure to other countries...
Cognitive Styles and Distance Education.
Liu, Yuliang; Ginther, Dean
1999-01-01
Considers how to adapt the design of distance education to students' cognitive styles. Discusses cognitive styles, including field dependence versus independence, holistic-analytic, sensory preference, hemispheric preferences, and Kolb's Learning Style Model; and the characteristics of distance education, including technology. (Contains 92…
Distance Learning: Practice and Dilemmas
Tatkovic, Nevenka; Sehanovic, Jusuf; Ruzic, Maja
2006-01-01
In accordance with the European processes of integrated and homogeneous education, the paper presents the essential viewpoints and questions covering the establishment and development of "distance learning" (DL) in Republic of Croatia. It starts from the advantages of distance learning versus traditional education taking into account…
Hierarchical traits distances explain grassland Fabaceae species' ecological niches distances
Fort, Florian; Jouany, Claire; Cruz, Pablo
2015-01-01
Fabaceae species play a key role in ecosystem functioning through their capacity to fix atmospheric nitrogen via their symbiosis with Rhizobium bacteria. To increase benefits of using Fabaceae in agricultural systems, it is necessary to find ways to evaluate species or genotypes having potential adaptations to sub-optimal growth conditions. We evaluated the relevance of phylogenetic distance, absolute trait distance and hierarchical trait distance for comparing the adaptation of 13 grassland Fabaceae species to different habitats, i.e., ecological niches. We measured a wide range of functional traits (root traits, leaf traits, and whole plant traits) in these species. Species phylogenetic and ecological distances were assessed from a species-level phylogenetic tree and species' ecological indicator values, respectively. We demonstrated that differences in ecological niches between grassland Fabaceae species were related more to their hierarchical trait distances than to their phylogenetic distances. We showed that grassland Fabaceae functional traits tend to converge among species with the same ecological requirements. Species with acquisitive root strategies (thin roots, shallow root systems) are competitive species adapted to non-stressful meadows, while conservative ones (coarse roots, deep root systems) are able to tolerate stressful continental climates. In contrast, acquisitive species appeared to be able to tolerate low soil-P availability, while conservative ones need high P availability. Finally we highlight that traits converge along the ecological gradient, providing the assumption that species with similar root-trait values are better able to coexist, regardless of their phylogenetic distance. PMID:25741353
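The abstract does not spell out the distance formulas, but a common reading is that an absolute trait distance discards the sign of each trait difference, while a hierarchical trait distance keeps it, so that which species ranks higher matters. A hypothetical sketch under that assumption:

```python
def absolute_trait_distance(traits_a, traits_b):
    """Mean absolute difference across (standardized) trait values."""
    return sum(abs(a - b) for a, b in zip(traits_a, traits_b)) / len(traits_a)

def hierarchical_trait_distance(traits_a, traits_b):
    """Mean signed difference: preserves the direction (hierarchy) of each
    trait difference instead of only its magnitude."""
    return sum(a - b for a, b in zip(traits_a, traits_b)) / len(traits_a)

# Opposite trait deviations cancel hierarchically but not in absolute terms.
a, b = [1.0, -1.0], [0.0, 0.0]
assert absolute_trait_distance(a, b) == 1.0
assert hierarchical_trait_distance(a, b) == 0.0
```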
Hierarchical traits distances explain grassland Fabaceae species’ ecological niches distances
Directory of Open Access Journals (Sweden)
Florian eFort
2015-02-01
Full Text Available Fabaceae species play a key role in ecosystem functioning through their capacity to fix atmospheric nitrogen via their symbiosis with Rhizobium bacteria. To increase benefits of using Fabaceae in agricultural systems, it is necessary to find ways to evaluate species or genotypes having potential adaptations to sub-optimal growth conditions. We evaluated the relevance of phylogenetic distance, absolute trait distance and hierarchical trait distance for comparing the adaptation of 13 grassland Fabaceae species to different habitats, i.e., ecological niches. We measured a wide range of functional traits (root traits, leaf traits and whole plant traits) in these species. Species phylogenetic and ecological distances were assessed from a species-level phylogenetic tree and species' ecological indicator values, respectively. We demonstrated that differences in ecological niches between grassland Fabaceae species were related more to their hierarchical trait distances than to their phylogenetic distances. We showed that grassland Fabaceae functional traits tend to converge among species with the same ecological requirements. Species with acquisitive root strategies (thin roots, shallow root systems) are competitive species adapted to non-stressful meadows, while conservative ones (coarse roots, deep root systems) are able to tolerate stressful continental climates. In contrast, acquisitive species appeared to be able to tolerate low soil-P availability, while conservative ones need high P availability. Finally we highlight that traits converge along the ecological gradient, providing the assumption that species with similar root-trait values are better able to coexist, regardless of their phylogenetic distance.
Three-dimensional inversion of multisource array electromagnetic data
Tartaras, Efthimios
Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. I have also applied the method to helicopter-borne EM
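The first of the three linear inverse problems above is solved with a weighted regularized linear conjugate gradient method. A generic Tikhonov-regularized conjugate-gradient sketch of that building block (unweighted, and not the author's actual implementation) looks like this:

```python
import numpy as np

def cg_regularized(A, d, lam, n_iter=100, tol=1e-12):
    """Solve min ||A m - d||^2 + lam ||m||^2 by conjugate gradients on the
    normal equations (A^T A + lam I) m = A^T d."""
    n = A.shape[1]
    M = A.T @ A + lam * np.eye(n)   # SPD system matrix
    b = A.T @ d
    m = np.zeros(n)
    r = b - M @ m
    p = r.copy()
    for _ in range(n_iter):
        Mp = M @ p
        alpha = (r @ r) / (p @ Mp)
        m = m + alpha * p
        r_new = r - alpha * Mp
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return m

# With negligible regularization, an overdetermined consistent system is recovered.
A = np.array([[2.0, 0.0], [0.0, 3.0], [1.0, 1.0]])
m_true = np.array([1.0, 2.0])
m_est = cg_regularized(A, A @ m_true, lam=1e-10)
assert np.allclose(m_est, m_true, atol=1e-5)
```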
Superconductivity in Pb inverse opal
International Nuclear Information System (INIS)
Aliev, Ali E.; Lee, Sergey B.; Zakhidov, Anvar A.; Baughman, Ray H.
2007-01-01
Type-II superconducting behavior was observed in highly periodic three-dimensional lead inverse opal prepared by infiltration of melted Pb in blue (D = 160 nm), green (D = 220 nm) and red (D = 300 nm) opals, followed by extraction of the SiO2 spheres by chemical etching. The onset of a broad phase transition (ΔT = 0.3 K) was shifted from Tc = 7.196 K for bulk Pb to Tc = 7.325 K. The upper critical field Hc2 (3150 Oe) measured from high-field hysteresis loops exceeds the critical field for bulk lead (803 Oe) fourfold. Two well resolved peaks observed in the hysteresis loops were ascribed to flux penetration into the cylindrical void space that can be found in the inverse opal structure and into the periodic structure of Pb nanoparticles. The red inverse opal shows pronounced oscillations of magnetic moment in the mixed state at low temperatures; resistivity modulation at temperatures above 0.9Tc has been observed for all of the samples studied. The magnetic field periodicity of the resistivity modulation is in good agreement with the lattice parameter of the inverse opal structure. We attribute the failure to observe pronounced modulation in magneto-resistive measurements to difficulties in the precise orientation of the sample along the magnetic field.
Inverse problem of solar oscillations
International Nuclear Information System (INIS)
Sekii, T.; Shibahashi, H.
1987-01-01
The authors present some preliminary results of numerical simulation to infer the sound velocity distribution in the solar interior from the oscillation data of the Sun as the inverse problem. They analyze the acoustic potential itself by taking into account some factors other than the sound velocity, and infer the sound velocity distribution in the deep interior of the Sun.
Wave-equation dispersion inversion
Li, Jing
2016-12-08
We present the theory for wave-equation inversion of dispersion curves, where the misfit function is the sum of the squared differences between the wavenumbers along the predicted and observed dispersion curves. The dispersion curves are obtained from Rayleigh waves recorded by vertical-component geophones. Similar to wave-equation traveltime tomography, the complicated surface wave arrivals in traces are skeletonized as simpler data, namely the picked dispersion curves in the phase-velocity and frequency domains. Solutions to the elastic wave equation and an iterative optimization method are then used to invert these curves for 2-D or 3-D S-wave velocity models. This procedure, denoted as wave-equation dispersion inversion (WD), does not require the assumption of a layered model and is significantly less prone to the cycle-skipping problems of full waveform inversion. The synthetic and field data examples demonstrate that WD can approximately reconstruct the S-wave velocity distributions in laterally heterogeneous media if the dispersion curves can be identified and picked. The WD method is easily extended to anisotropic data and the inversion of dispersion curves associated with Love waves.
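The misfit function described above, the sum of squared differences between predicted and observed wavenumbers along the picked dispersion curves, is simple to state in code (one wavenumber sample per frequency is assumed):

```python
def dispersion_misfit(k_pred, k_obs):
    """Sum of squared differences between predicted and observed wavenumbers
    along a dispersion curve, sampled at matching frequencies."""
    return sum((kp - ko) ** 2 for kp, ko in zip(k_pred, k_obs))

# Identical curves give zero misfit; a 0.5 rad/m error at one frequency gives 0.25.
assert dispersion_misfit([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) == 0.0
assert dispersion_misfit([1.0, 2.0, 3.0], [1.0, 2.5, 3.0]) == 0.25
```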
Asymptotics of weighted random sums
DEFF Research Database (Denmark)
Corcuera, José Manuel; Nualart, David; Podolskij, Mark
2014-01-01
…of the weight process with respect to the Brownian motion when the distance between observations goes to zero. The result is obtained with the help of fractional calculus, showing the power of this technique. This study, though interesting by itself, is motivated by an error found in the proof of Theorem 4 in Corcuera, J.M., Nualart, D., Woerner, J.H.C. (2006). Power variation of some integral fractional processes, Bernoulli 12(4), 713-735.
Workflows for Full Waveform Inversions
Boehm, Christian; Krischer, Lion; Afanasiev, Michael; van Driel, Martin; May, Dave A.; Rietmann, Max; Fichtner, Andreas
2017-04-01
Despite many theoretical advances and the increasing availability of high-performance computing clusters, full seismic waveform inversions still face considerable challenges regarding data and workflow management. While the community has access to solvers which can harness modern heterogeneous computing architectures, the computational bottleneck has fallen to these often manpower-bounded issues that need to be overcome to facilitate further progress. Modern inversions involve huge amounts of data and require a tight integration between numerical PDE solvers, data acquisition and processing systems, nonlinear optimization libraries, and job orchestration frameworks. To this end we created a set of libraries and applications revolving around Salvus (http://salvus.io), a novel software package designed to solve large-scale full waveform inverse problems. This presentation focuses on solving passive source seismic full waveform inversions from local to global scales with Salvus. We discuss (i) design choices for the aforementioned components required for full waveform modeling and inversion, (ii) their implementation in the Salvus framework, and (iii) how it is all tied together by a usable workflow system. We combine state-of-the-art algorithms ranging from high-order finite-element solutions of the wave equation to quasi-Newton optimization algorithms using trust-region methods that can handle inexact derivatives. All is steered by an automated interactive graph-based workflow framework capable of orchestrating all necessary pieces. This naturally facilitates the creation of new Earth models and hopefully sparks new scientific insights. Additionally, and even more importantly, it enhances reproducibility and reliability of the final results.
Inverse treatment planning based on MRI for HDR prostate brachytherapy
International Nuclear Information System (INIS)
Citrin, Deborah; Ning, Holly; Guion, Peter; Li Guang; Susil, Robert C.; Miller, Robert W.; Lessard, Etienne; Pouliot, Jean; Xie Huchen; Capala, Jacek; Coleman, C. Norman; Camphausen, Kevin; Menard, Cynthia
2005-01-01
Purpose: To develop and optimize a technique for inverse treatment planning based solely on magnetic resonance imaging (MRI) during high-dose-rate brachytherapy for prostate cancer. Methods and materials: Phantom studies were performed to verify the spatial integrity of treatment planning based on MRI. Data were evaluated from 10 patients with clinically localized prostate cancer who had undergone two high-dose-rate prostate brachytherapy boosts under MRI guidance before and after pelvic radiotherapy. Treatment planning MRI scans were systematically evaluated to derive a class solution for inverse planning constraints that would reproducibly result in acceptable target and normal tissue dosimetry. Results: We verified the spatial integrity of MRI for treatment planning. MRI anatomic evaluation revealed no significant displacement of the prostate in the left lateral decubitus position, a mean distance of 14.47 mm from the prostatic apex to the penile bulb, and clear demarcation of the neurovascular bundles on postcontrast imaging. Derivation of a class solution for inverse planning constraints resulted in a mean target volume receiving 100% of the prescribed dose of 95.69%, while maintaining a rectal volume receiving 75% of the prescribed dose of <5% (mean 1.36%) and urethral volume receiving 125% of the prescribed dose of <2% (mean 0.54%). Conclusion: Systematic evaluation of image spatial integrity, delineation uncertainty, and inverse planning constraints in our procedure reduced uncertainty in planning and treatment.
Tracking frequency laser distance gauge
International Nuclear Information System (INIS)
Phillips, J.D.; Reasenberg, R.D.
2005-01-01
Advanced astronomical missions with greatly enhanced resolution and physics missions of unprecedented accuracy will require laser distance gauges of substantially improved performance. We describe a laser gauge, based on Pound-Drever-Hall locking, in which the optical frequency is adjusted to maintain an interferometer's null condition. This technique has been demonstrated with pm performance. Automatic fringe hopping allows it to track arbitrary distance changes. The instrument is intrinsically free of the nm-scale cyclic bias present in traditional (heterodyne) high-precision laser gauges. The output is a radio frequency, readily measured to sufficient accuracy. The laser gauge has operated in a resonant cavity, which improves precision, can suppress the effects of misalignments, and makes possible precise automatic alignment. The measurement of absolute distance requires little or no additional hardware, and has also been demonstrated. The proof-of-concept version, based on a stabilized HeNe laser and operating on a 0.5 m path, has achieved 10 pm precision with 0.1 s integration time, and 0.1 mm absolute distance accuracy. This version has also followed substantial distance changes as fast as 16 mm/s. We show that, if the precision in optical frequency is a fixed fraction of the linewidth, both incremental and absolute distance precision are independent of the distance measured. We discuss systematic error sources, and present plans for a new version of the gauge based on semiconductor lasers and fiber-coupled components.
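Because the gauge reads out distance as an optical frequency, small length changes map to fractional frequency shifts through the standard resonator relation dL/L = -df/f. The numbers below are illustrative (a HeNe line near 473.6 THz is assumed), not values from the record:

```python
def fractional_length_change(delta_f, f0):
    """For a cavity held on resonance, the locked optical frequency shifts
    opposite to the length: dL/L = -df/f (standard resonator relation)."""
    return -delta_f / f0

# A +1 Hz shift on a 473.6 THz HeNe line implies dL/L of about -2.1e-15.
ratio = fractional_length_change(1.0, 473.6e12)
assert ratio < 0
assert abs(ratio * 473.6e12 + 1.0) < 1e-9
```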
Reducing the distance in distance-caregiving by technology innovation
Directory of Open Access Journals (Sweden)
Lazelle E Benefield
2007-07-01
Full Text Available Lazelle E Benefield(1), Cornelia Beck(2). (1)College of Nursing, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA; (2)Pat & Willard Walker Family Memory Research Center, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA. Abstract: Family caregivers are responsible for the home care of over 34 million older adults in the United States. For many, the elder family member lives more than an hour's distance away. Distance caregiving is a growing alternative to more familiar models where: (1) the elder and the family caregiver(s) may reside in the same household; or (2) the family caregiver may live nearby but not in the same household as the elder. The distance caregiving model involves elders and their family caregivers who live at some distance, defined as more than a 60-minute commute, from one another. Evidence suggests that distance caregiving is a distinct phenomenon, differs substantially from on-site family caregiving, and requires additional assistance to support the physical, social, and contextual dimensions of the caregiving process. Technology-based assists could virtually connect the caregiver and elder and provide strong support that addresses the elder's physical, social, cognitive, and/or sensory impairments. Therefore, in today's era of high technology, it is surprising that so few affordable innovations are being marketed for distance caregiving. This article addresses distance caregiving, proposes the use of technology innovation to support caregiving, and suggests a research agenda to better inform policy decisions related to the unique needs of this situation. Keywords: caregiving, family, distance, technology, elders
Equivalence of massive propagator distance and mathematical distance on graphs
International Nuclear Information System (INIS)
Filk, T.
1992-01-01
It is shown in this paper that the assignment of distance according to the massive propagator method and according to the mathematical definition (length of minimal path) on arbitrary graphs with a bound on the degree leads to equivalent large scale properties of the graph. Especially, the internal scaling dimension is the same for both definitions. This result holds for any fixed, non-vanishing mass, so that a really inequivalent definition of distance requires the limit m → 0.
Language distance and tree reconstruction
International Nuclear Information System (INIS)
Petroni, Filippo; Serva, Maurizio
2008-01-01
Languages evolve over time according to a process in which reproduction, mutation and extinction are all possible. This is very similar to haploid evolution for asexual organisms and for the mitochondrial DNA of complex ones. Exploiting this similarity, it is possible, in principle, to verify hypotheses concerning the relationship among languages and to reconstruct their family tree. The key point is the definition of the distances among pairs of languages in analogy with the genetic distances among pairs of organisms. Distances can be evaluated by comparing grammar and/or vocabulary, but while it is difficult, if not impossible, to quantify grammar distance, it is possible to measure a distance from vocabulary differences. The method used by glottochronology computes distances from the percentage of shared 'cognates', which are words with a common historical origin. The weak point of this method is that subjective judgment plays a significant role. Here we define the distance of two languages by considering a renormalized edit distance among words with the same meaning and averaging over the two hundred words contained in a Swadesh list. In our approach the vocabulary of a language is the analogue of DNA for organisms. The advantage is that we avoid subjectivity and, furthermore, reproducibility of results is guaranteed. We apply our method to the Indo-European and the Austronesian groups, considering, in both cases, fifty different languages. The two trees obtained are, in many respects, similar to those found by glottochronologists, with some important differences as regards the positions of a few languages. In order to support these different results we separately analyze the structure of the distances of these languages with respect to all the others.
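The renormalized edit distance averaged over a Swadesh list can be sketched directly. The renormalization by the longer word's length follows the description above; the word pairs in the comments are illustrative, not taken from the paper:

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming (one row at a time)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def normalized_distance(a, b):
    """Edit distance renormalized by the length of the longer word."""
    return edit_distance(a, b) / max(len(a), len(b))

def language_distance(words_a, words_b):
    """Average renormalized distance over paired Swadesh-list entries."""
    return sum(normalized_distance(a, b) for a, b in zip(words_a, words_b)) / len(words_a)

assert edit_distance("kitten", "sitting") == 3
# English/German "mother"/"mutter" differ by two substitutions out of six letters.
assert abs(language_distance(["mother"], ["mutter"]) - 2 / 6) < 1e-12
```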
Language distance and tree reconstruction
Petroni, Filippo; Serva, Maurizio
2008-08-01
Languages evolve over time according to a process in which reproduction, mutation and extinction are all possible. This is very similar to haploid evolution for asexual organisms and for the mitochondrial DNA of complex ones. Exploiting this similarity, it is possible, in principle, to verify hypotheses concerning the relationship among languages and to reconstruct their family tree. The key point is the definition of the distances among pairs of languages in analogy with the genetic distances among pairs of organisms. Distances can be evaluated by comparing grammar and/or vocabulary, but while it is difficult, if not impossible, to quantify grammar distance, it is possible to measure a distance from vocabulary differences. The method used by glottochronology computes distances from the percentage of shared 'cognates', which are words with a common historical origin. The weak point of this method is that subjective judgment plays a significant role. Here we define the distance of two languages by considering a renormalized edit distance among words with the same meaning and averaging over the two hundred words contained in a Swadesh list. In our approach the vocabulary of a language is the analogue of DNA for organisms. The advantage is that we avoid subjectivity and, furthermore, reproducibility of results is guaranteed. We apply our method to the Indo-European and the Austronesian groups, considering, in both cases, fifty different languages. The two trees obtained are, in many respects, similar to those found by glottochronologists, with some important differences as regards the positions of a few languages. In order to support these different results we separately analyze the structure of the distances of these languages with respect to all the others.
Vector Directional Distance Rational Hybrid Filters for Color Image Restoration
Directory of Open Access Journals (Sweden)
L. Khriji
2005-12-01
Full Text Available A new class of nonlinear filters, called vector-directional distance rational hybrid filters (VDDRHF), for multispectral image processing is introduced and applied to color image-filtering problems. These filters are based on rational functions (RF). The VDDRHF filter is a two-stage filter, which exploits the features of the vector directional distance filter (VDDF), the center weighted vector directional distance filter (CWVDDF) and those of the rational operator. The filter output is a result of a vector rational function (VRF) operating on the output of three sub-functions. Two vector directional distance filters (VDDF) and one center weighted vector directional distance filter (CWVDDF) are proposed to be used in the first stage due to their desirable properties, such as noise attenuation, chromaticity retention, and edge and detail preservation. Experimental results show that the new VDDRHF outperforms a number of widely known nonlinear filters for multispectral image processing, such as the vector median filter (VMF), the generalized vector directional filters (GVDF) and distance directional filters (DDF), with respect to all criteria used.
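As one point of comparison, the vector median filter (VMF) the abstract benchmarks against selects the window sample that minimizes the aggregate distance to all other samples; a minimal sketch (the pixel values are illustrative):

```python
import math

def vector_median(window):
    """Vector median filter: return the vector in the window minimizing the
    aggregate Euclidean distance to every other sample in the window."""
    return min(window, key=lambda v: sum(math.dist(v, u) for u in window))

# An impulsive color outlier is rejected in favor of a cluster member.
pixels = [(10, 10, 10), (12, 11, 10), (11, 12, 12), (255, 0, 0)]
assert vector_median(pixels) != (255, 0, 0)
```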
Sub-Millimeter Tests of the Newtonian Inverse Square Law
International Nuclear Information System (INIS)
Adelberger, Eric
2005-01-01
It is remarkable that small-scale experiments can address important open issues in fundamental science such as: 'why is gravity so weak compared to the other interactions?' and 'why is the cosmological constant so small compared to the predictions of quantum mechanics?' String theory ideas (new scalar particles and extra dimensions) and other notions hint that Newton's Inverse-Square Law could break down at distances less than 1 mm. I will review some motivations for testing the Inverse-Square Law, and discuss recent mechanical experiments with torsion balances, small oscillators, micro-cantilevers, and ultra-cold neutrons. Our torsion-balance experiments have probed for gravitational-strength interactions with length scales down to 70 micrometers, which is approximately the diameter of a human hair.
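Tests of this kind conventionally parametrize a breakdown of the inverse-square law with a Yukawa correction of strength alpha and range lambda; the form below is the standard parametrization in this literature, stated as background rather than taken from the abstract:

```python
import math

G = 6.674e-11  # m^3 kg^-1 s^-2

def yukawa_potential(m1, m2, r, alpha, lam):
    """Newtonian potential with a Yukawa correction:
    V(r) = -G m1 m2 / r * (1 + alpha * exp(-r / lambda))."""
    return -G * m1 * m2 / r * (1 + alpha * math.exp(-r / lam))

# For separations r much larger than lambda the correction vanishes
# and the pure Newtonian potential is recovered.
v = yukawa_potential(1.0, 1.0, 1.0, alpha=1.0, lam=1e-3)
assert abs(v - (-G)) < 1e-25
```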
Real time monitoring of moment magnitude by waveform inversion
Lee, J.; Friederich, W.; Meier, T.
2012-01-01
An instantaneous measure of the moment magnitude (Mw) of an ongoing earthquake is estimated from the moment rate function (MRF) determined in real-time from available seismic data using waveform inversion. Integration of the MRF gives the moment function from which an instantaneous Mw is derived. By repeating the inversion procedure at regular intervals while seismic data are coming in, we can monitor the evolution of seismic moment and Mw with time. The final size and duration of a strong earthquake can be obtained within 12 to 15 minutes after the origin time. We show examples of Mw monitoring for three large earthquakes at regional distances. The estimated Mw is only weakly sensitive to changes in the assumed source parameters. Depending on the availability of seismic stations close to the epicenter, a rapid estimation of the Mw as a prerequisite for the assessment of earthquake damage potential appears to be feasible.
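Integrating the moment rate function and converting the scalar moment to Mw uses the standard (IASPEI) relation; a minimal sketch, assuming M0 in newton-meters and a uniformly sampled MRF:

```python
import math

def moment_from_rate(mrf, dt):
    """Integrate a uniformly sampled moment rate function (N m/s) over time
    to obtain the scalar seismic moment M0 (N m)."""
    return sum(mrf) * dt

def moment_magnitude(m0):
    """Mw = (2/3) * (log10(M0) - 9.1), with M0 in N m (IASPEI standard)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# M0 = 10^9.1 N m corresponds to Mw = 0 by construction.
assert abs(moment_magnitude(10 ** 9.1)) < 1e-6
```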
Distance between two binding sites of the same antibody molecule
International Nuclear Information System (INIS)
Cser, L.; Gladkikh, I.A.; Ostanevich, Y.M.; Franek, F.; Novotny, J.; Nezlin, R.S.
1978-01-01
Neutron small-angle scattering experiments are reported, aimed at determining the distance between the two binding sites of the same antibody molecule employing complexes of anti-Dnp antibody with an antigenically univalent, high molecular weight ligand. Although the distance values could be determined only with a large statistical error, the data allowed the conclusion that the geometrical parameters of the complexes formed with the early (i.e., precipitating) antibody are significantly different from those of the complexes formed with the late (i.e., non-precipitating) antibody. The data suggest that the precipitating antibody complexed with a high molecular weight antigen assumes an extended shape with an antigen-to-antigen distance of 35.8 ± 1.3 nm. (Auth.)
3D stochastic inversion and joint inversion of potential fields for multi scale parameters
Shamsipour, Pejman
In this thesis we present the development of new techniques for the interpretation of potential field data (gravity and magnetic), which are the most widespread economical geophysical methods used for oil and mineral exploration. These new techniques help to address the long-standing issue of the intrinsic non-uniqueness of the inversion of these types of data. The thesis takes the form of three papers (four including the Appendix), which have been published, or will soon be published, in respected international journals. The purpose of the thesis is to introduce new methods based on 3D stochastic approaches for: 1) inversion of potential field data (magnetic), 2) multiscale inversion using surface and borehole data, and 3) joint inversion of geophysical potential field data. We first present a stochastic inversion method based on a geostatistical approach to recover 3D susceptibility models from magnetic data. The aim of applying geostatistics is to provide quantitative descriptions of natural variables distributed in space or in time and space. We evaluate the uncertainty of the parameter model by using geostatistical unconditional simulations. The realizations are post-conditioned by cokriging to the observation data. In order to avoid the natural tendency of the estimated structure to lie near the surface, depth weighting is included in the cokriging system. Then, we introduce an algorithm for multiscale inversion that is capable of inverting data on multiple supports. The method involves four main steps: i. upscaling of borehole parameters (density or susceptibility) to block parameters; ii. selection of blocks to use as constraints based on a threshold on the kriging variance; iii. inversion of the observation data with the selected block densities as constraints; and iv. downscaling of the inverted parameters to small prisms. Two modes of application are presented: estimation and simulation. Finally, a novel
Academy Distance Learning Tools (IRIS) -
Department of Transportation — IRIS is a suite of front-end web applications utilizing a centralized back-end Oracle database. The system fully supports the FAA Academy's Distance Learning Program...
Distance labeling schemes for trees
DEFF Research Database (Denmark)
Alstrup, Stephen; Gørtz, Inge Li; Bistrup Halvorsen, Esben
2016-01-01
We consider distance labeling schemes for trees: given a tree with n nodes, label the nodes with binary strings such that, given the labels of any two nodes, one can determine, by looking only at the labels, the distance in the tree between the two nodes. A lower bound by Gavoille et al. [Gavoille…] … variants such as, for example, small distances in trees [Alstrup et al., SODA, 2003]. We improve the known upper and lower bounds of exact distance labeling by showing that (1/4)log²(n) bits are needed and that (1/2)log²(n) bits are sufficient. We also give (1 + ε)-stretch labeling schemes using Θ
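A toy version of such a scheme makes the idea concrete. This sketch is not the paper's bit-optimal construction: it labels each node with its root path (sequence of child indices), which uses more bits than necessary, but it shows how the labels of two nodes alone determine their tree distance.

```python
def make_labels(children, root=0):
    """Label each node of a rooted tree with its root path (the tuple of
    child indices on the path from the root). Simple, not bit-optimal."""
    labels = {}
    stack = [(root, ())]
    while stack:
        node, path = stack.pop()
        labels[node] = path
        for i, child in enumerate(children.get(node, [])):
            stack.append((child, path + (i,)))
    return labels

def distance(lu, lv):
    # dist(u, v) = depth(u) + depth(v) - 2 * depth(LCA); the LCA's depth
    # is the length of the longest common prefix of the two root paths.
    k = 0
    while k < min(len(lu), len(lv)) and lu[k] == lv[k]:
        k += 1
    return len(lu) + len(lv) - 2 * k

# A path 0-1-2 plus a leaf 3 attached to node 1
children = {0: [1], 1: [2, 3]}
labels = make_labels(children)
print(distance(labels[2], labels[3]))  # 2 (through their parent, node 1)
```

Encoding each root path as a binary string gives a labeling scheme in the sense above; the research problem is compressing such labels toward the (1/4 to 1/2)log²(n)-bit bounds.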
Distance Education in Technological Age
Directory of Open Access Journals (Sweden)
R .C. SHARMA
2005-04-01
Distance Education in Technological Age, Romesh Verma (Editor), New Delhi: Anmol Publications, 2005, ISBN 81-261-2210-2, pp. 419. Reviewed by R. C. Sharma, Regional Director, Indira Gandhi National Open University, India. The advancements in information and communication technologies have brought significant changes in the way open and distance learning is provided to learners. The impact of such changes is quite visible in both developed and developing countries. Switching over to online modes, joining hands with private initiatives and making a presence in foreign waters are some of the hallmarks of open and distance education (ODE) institutions in developing countries. The compilation of twenty-six essays on themes applicable to ODE has resulted in the book Distance Education in Technological Age. These essays follow a progressive style of narration, starting by describing the conceptual framework of distance education and how distance education emerged on the global scene and in India, and then going on to discuss the emergence of online distance education and research aspects in ODE. The initial four chapters provide a detailed account of the historical development and growth of distance education in India, and of the State Open University and National Open University models in India. Student support services are pivotal to any distance education effort, and much of its success depends on how well the support services are provided; these are discussed from national and international perspectives. The issues of collaborative learning, learning on demand, lifelong learning, the learning-unlearning and re-learning model, and strategic alliances have also been given due space by the authors. An assortment of technologies, such as communication technology, domestic technology, information technology, mass media and entertainment technology, media technology and educational technology, gives an idea of how these technologies are being adopted in the open universities. The study
The effects of changing exercise levels on weight and age-relatedweight gain
Energy Technology Data Exchange (ETDEWEB)
Williams, Paul T.; Wood, Peter D.
2004-06-01
To determine prospectively whether physical activity can prevent age-related weight gain and whether changing levels of activity affect body weight. DESIGN/SUBJECTS: The study consisted of 8,080 male and 4,871 female runners who completed two questionnaires an average (± standard deviation (s.d.)) of 3.20 ± 2.30 and 2.59 ± 2.17 years apart, respectively, as part of the National Runners' Health Study. RESULTS: Changes in running distance were inversely related to changes in men's and women's body mass indices (BMIs) (slope ± standard error (s.e.): -0.015 ± 0.001 and -0.009 ± 0.001 kg/m² per Δkm/week, respectively), waist circumferences (-0.030 ± 0.002 and -0.022 ± 0.005 cm per Δkm/week, respectively) and percent changes in body weight (-0.062 ± 0.003 and -0.041 ± 0.003 percent per Δkm/week, respectively, all P < 0.0001). The regression slopes were significantly steeper (more negative) in men than women for ΔBMI and Δpercent body weight (P < 0.0001). A longer history of running diminished the impact of changing running distance on men's weights. When adjusted for Δkm/week, years of aging in men and years of aging in women were associated with increases of 0.066 ± 0.005 and 0.056 ± 0.006 kg/m² in BMI, respectively, increases of 0.294 ± 0.019 and 0.279 ± 0.028 percent in Δpercent body weight, respectively, and increases of 0.203 ± 0.016 and 0.271 ± 0.033 cm in waist circumference, respectively (all P < 0.0001). These regression slopes suggest that vigorous exercise may need to increase 4.4 km/week annually in men and 6.2 km/week annually in women to compensate for the expected gain in weight associated with aging (2.7 and 3.9 km/week annually when corrected for the attenuation due to measurement error). CONCLUSIONS: Age-related weight gain occurs even among the most active individuals when exercise is constant. Theoretically, vigorous exercise must increase significantly with age to compensate for the expected gain in weight associated with aging.
COLLAGE-BASED INVERSE PROBLEMS FOR IFSM WITH ENTROPY MAXIMIZATION AND SPARSITY CONSTRAINTS
Directory of Open Access Journals (Sweden)
Herb Kunze
2013-11-01
We consider the inverse problem associated with IFSM: given a target function f, find an IFSM such that its invariant fixed point f̄ is sufficiently close to f in the Lp distance. In this paper, we extend the collage-based method developed by Forte and Vrscay (1995) along two different directions. We first search for a set of mappings that not only minimizes the collage error but also maximizes the entropy of the dynamical system. We then include an extra term in the minimization process which takes into account the sparsity of the set of mappings. In this new formulation, the minimization of collage error is treated as a multi-criteria problem: we consider three different and conflicting criteria, i.e., collage error, entropy and sparsity. To solve this multi-criteria program we proceed by scalarization and reduce the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented. Numerical studies indicate that a maximum entropy principle exists for this approximation problem, i.e., that the suboptimal solutions produced by collage coding can be improved at least slightly by adding a maximum entropy criterion.
A robust probabilistic approach for variational inversion in shallow water acoustic tomography
International Nuclear Information System (INIS)
Berrada, M; Badran, F; Crépon, M; Thiria, S; Hermand, J-P
2009-01-01
This paper presents a variational methodology for inverting shallow water acoustic tomography (SWAT) measurements. The aim is to determine the vertical profile of the speed of sound c(z), knowing the acoustic pressures generated by a frequency source and collected by a sparse vertical hydrophone array (VRA). A variational approach that minimizes a cost function measuring the distance between observations and their modeled equivalents is used. A regularization term in the form of a quadratic restoring term to a background is also added. To avoid inverting the variance-covariance matrix associated with the above-weighted quadratic background, this work proposes to model the sound speed vector using probabilistic principal component analysis (PPCA). The PPCA introduces an optimum reduced number of non-correlated latent variables η, which determine a new control vector and a new regularization term, expressed as ηᵀη. The PPCA represents a rigorous formalism for the use of a priori information and allows an efficient implementation of the variational inverse method.
Inverse photon-photon processes
International Nuclear Information System (INIS)
Carimalo, C.; Crozon, M.; Kesler, P.; Parisi, J.
1981-12-01
We here consider inverse photon-photon processes, i.e. AB → γγX (where A, B are hadrons, in particular protons or antiprotons), at high energies. As regards the production of a γγ continuum, we show that, under specific conditions, the study of such processes might provide some information on the subprocess gg → γγ, involving a quark box. It is also suggested to use those processes in order to systematically look for heavy C = + structures (quarkonium states, gluonia, etc.) showing up in the γγ channel. Inverse photon-photon processes might thus become a new and fertile area of investigation in high-energy physics, provided the difficult problem of discriminating between direct photons and indirect ones can be handled in a satisfactory way
Hedland, D. A.; Degonia, P. K.
1974-01-01
The RAE-1 spacecraft inversion performed October 31, 1972 is described based upon the in-orbit dynamical data in conjunction with results obtained from previously developed computer simulation models. The computer simulations used are predictive of the satellite dynamics, including boom flexing, and are applicable during boom deployment and retraction, inter-phase coast periods, and post-deployment operations. Attitude data, as well as boom tip data, were analyzed in order to obtain a detailed description of the dynamical behavior of the spacecraft during and after the inversion. Runs were made using the computer model and the results were analyzed and compared with the real time data. Close agreement between the actual recorded spacecraft attitude and the computer simulation results was obtained.
Ensemble Weight Enumerators for Protograph LDPC Codes
Divsalar, Dariush
2006-01-01
Recently, LDPC codes with projected-graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. In this paper the derived results on ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.
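For a concrete sense of what a weight enumerator is, it can be brute-forced for any small binary linear code. This sketch uses the (7,4) Hamming code as a stand-in example (it is not a protograph LDPC code, and the paper's ensemble enumerators are derived analytically, not by enumeration):

```python
from itertools import product

def weight_enumerator(H):
    """Brute-force the weight enumerator of the binary linear code with
    parity-check matrix H: count codewords of each Hamming weight."""
    n = len(H[0])
    counts = {}
    for c in product((0, 1), repeat=n):
        # c is a codeword iff H c = 0 over GF(2)
        if all(sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H):
            w = sum(c)
            counts[w] = counts.get(w, 0) + 1
    return counts

# Parity-check matrix of the (7,4) Hamming code
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
A = weight_enumerator(H)
print(sorted(A.items()))          # [(0, 1), (3, 7), (4, 7), (7, 1)]
print(min(w for w in A if w > 0))  # minimum distance: 3
```

The smallest nonzero weight in the enumerator is the minimum distance; the paper's question is how this quantity scales with block size over a protograph ensemble.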
Inverse problem in nuclear physics
International Nuclear Information System (INIS)
Zakhariev, B.N.
1976-01-01
The method of reconstructing the interaction from scattering data is formulated in the frame of the R-matrix theory, in which the potential is determined by the positions of the resonances E_λ and their reduced widths γ²_λ. In a finite-difference approximation for the Schroedinger equation, this new approach makes the logic of the inverse problem (IP) clearer. Possible applications of the IP formalism to various nuclear systems are discussed. (author)
Improving waveform inversion using modified interferometric imaging condition
Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen
2018-02-01
Similar to reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively. Even the boundary-region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct wavefields at other moments through the random boundary avoids the need to store a large number of wavefields; however, the traditional random boundary method is less effective at low frequencies. In this study, we follow a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. The results obtained using the random boundary condition in less illuminated areas are more seriously affected by random scattering than other areas due to the lack of coverage. In this paper, we have replaced the direct correlation used for computing the waveform inversion gradient with modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition is a weighted average of extended imaging gathers and can be directly used in the gradient computation. In this process, we have not changed the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition-based waveform inversion plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.
Crookes, Kate; Hayward, William G.
2012-01-01
Presenting a face inverted (upside down) disrupts perceptual sensitivity to the spacing between the features. Recently, it has been shown that this disruption is greater for vertical than horizontal changes in eye position. One explanation for this effect proposed that inversion disrupts the processing of long-range (e.g., eye-to-mouth distance)…
Inversion of full acoustic wavefield in local helioseismology: A study with synthetic data
Cobden, L.J.; Tong, C.H.; Warner, M.R.
We present the first results from the inversion of full acoustic wavefield in the helioseismic context. In contrast to time-distance helioseismology, which involves analyzing the travel times of seismic waves propagating into the solar interior, wavefield tomography models both the travel times and
Inversion of potential-field data for layers with uneven thickness
Caratori Tontini, F.; Cocchi, L.; Carmisciano, C.; Stefanelli, P.
2008-01-01
Inversion of large-scale potential-field anomalies, aimed at determining density or magnetization, is usually made in the Fourier domain. The commonly adopted geometry is based on a layer of constant thickness, characterized by a bottom surface at a fixed distance from the top surface…
Elastic reflection waveform inversion with variable density
Li, Yuanyuan; Li, Zhenchun; Alkhalifah, Tariq Ali; Guo, Qiang
2017-01-01
Elastic full waveform inversion (FWI) provides a better description of the subsurface than that given by the acoustic assumption. However, it suffers from a more serious cycle-skipping problem compared with the latter. Reflection waveform inversion
On a complete topological inverse polycyclic monoid
Directory of Open Access Journals (Sweden)
S. O. Bardyla
2016-12-01
We give sufficient conditions when a topological inverse $\lambda$-polycyclic monoid $P_{\lambda}$ is absolutely $H$-closed in the class of topological inverse semigroups. For every infinite cardinal $\lambda$ we construct the coarsest semigroup inverse topology $\tau_{mi}$ on $P_\lambda$ and give an example of a topological inverse monoid $S$ which contains the polycyclic monoid $P_2$ as a dense discrete subsemigroup.
INVERSION OF FULL ACOUSTIC WAVEFIELD IN LOCAL HELIOSEISMOLOGY: A STUDY WITH SYNTHETIC DATA
International Nuclear Information System (INIS)
Cobden, L. J.; Warner, M. R.; Tong, C. H.
2011-01-01
We present the first results from the inversion of full acoustic wavefield in the helioseismic context. In contrast to time-distance helioseismology, which involves analyzing the travel times of seismic waves propagating into the solar interior, wavefield tomography models both the travel times and amplitude variations present in the entire seismic record. Unlike the use of ray-based, Fresnel-zone, Born, or Rytov approximations in previous time-distance studies, this method does not require any simplifications to be made to the sensitivity kernel in the inversion. In this study, the acoustic wavefield is simulated for all iterations in the inversion. The sensitivity kernel is therefore updated while lateral variations in sound-speed structure in the model emerge during the course of the inversion. Our results demonstrate that the amplitude-based inversion approach is capable of resolving sound-speed structures defined by relatively sharp vertical and horizontal boundaries. This study therefore provides the foundation for a new type of subsurface imaging in local helioseismology that is based on the inversion of the entire seismic wavefield.
Overweight, Obesity, and Weight Loss
[Crop geometry identification based on inversion of semiempirical BRDF models].
Zhao, Chun-jiang; Huang, Wen-jiang; Mu, Xu-han; Wang, Jin-diz; Wang, Ji-hua
2009-09-01
With the rapid development of remote sensing technology, the application of remote sensing has extended from a single view angle to multiple view angles. The qualitative and quantitative effect of average leaf angle (ALA) on the crop canopy reflected spectrum was studied. The effect of ALA on the canopy reflected spectrum cannot be ignored in the inversion of leaf area index (LAI) and the monitoring of crop growth condition by remote sensing. Investigations of erect and horizontal varieties were conducted using bidirectional canopy reflected spectra and semiempirical bidirectional reflectance distribution function (BRDF) models. A sensitivity analysis was performed on the weight for the volumetric kernel (fvol), the weight for the geometric kernel (fgeo), and the weight for the constant corresponding to isotropic reflectance (fiso) at the red band (680 nm) and near-infrared band (800 nm). By combining the weights of the red and near-infrared bands, the semiempirical models can obtain structural information by retrieving biophysical parameters from the physical BRDF model and a number of bidirectional observations. This allows an on-site, non-sampling mode of crop ALA identification, which is useful for crop growth monitoring by remote sensing and for improving LAI inversion accuracy, and will help farmers in guiding fertilizer and irrigation management in the farmland without a priori knowledge.
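The core of a kernel-driven semiempirical BRDF model is a linear fit: reflectance is modeled as fiso + fvol·Kvol + fgeo·Kgeo, so the three weights can be retrieved from multi-angle observations by least squares. In this sketch the kernel values and "true" weights are synthetic placeholders, not real RossThick/LiSparse computations:

```python
import numpy as np

# Synthetic kernel values for five viewing geometries; columns are the
# isotropic (constant 1), volumetric, and geometric kernels.
K = np.array([[1.0, -0.10,  0.05],
              [1.0,  0.02, -0.12],
              [1.0,  0.15, -0.30],
              [1.0,  0.30, -0.45],
              [1.0, -0.05,  0.20]])
f_true = np.array([0.25, 0.40, 0.10])   # (fiso, fvol, fgeo), invented
r = K @ f_true                          # simulated bidirectional reflectances

# Retrieve the kernel weights from the multi-angle observations
f_hat, *_ = np.linalg.lstsq(K, r, rcond=None)
print(np.round(f_hat, 3))               # recovers f_true = [0.25, 0.4, 0.1]
```

In practice the fit is done per band (e.g., 680 nm and 800 nm), and the band-wise weights are then combined to infer structural parameters such as ALA, as the abstract describes.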
Probabilistic Geoacoustic Inversion in Complex Environments
2015-09-30
Probabilistic Geoacoustic Inversion in Complex Environments. Jan Dettmer, School of Earth and Ocean Sciences, University of Victoria, Victoria, BC. … long-range inversion methods can fail to provide sufficient resolution. For proper quantitative examination of variability, parameter uncertainty must … The project aims to advance probabilistic geoacoustic inversion methods for complex ocean environments for a range of geoacoustic data types. The work is
Measuring distances between complex networks
International Nuclear Information System (INIS)
Andrade, Roberto F.S.; Miranda, Jose G.V.; Pinho, Suani T.R.; Lobao, Thierry Petit
2008-01-01
A previously introduced concept of higher order neighborhoods in complex networks [R.F.S. Andrade, J.G.V. Miranda, T.P. Lobao, Phys. Rev. E 73 (2006) 046101] is used to define a distance between networks with the same number of nodes. With such a measure, expressed in terms of the matrix elements of the neighborhood matrices of each network, it is possible to compare, in a quantitative way, how far apart in the space of neighborhood matrices two networks are. The distance between these matrices depends on both the network topologies and the adopted node numberings. While the numbering of one network is fixed, a Monte Carlo algorithm is used to find the best numbering of the other network, in the sense that it minimizes the distance between the matrices. The minimal value found for the distance reflects differences in the neighborhood structures of the two networks that arise only from distinct topologies. This procedure ends up providing a projection of the first network on the pattern of the second one. Examples are worked out allowing for a quantitative comparison of distances among distinct networks, as well as among distinct realizations of random networks.
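The numbering search can be sketched with a simple stochastic hill-climb. This is an illustration under assumptions: plain adjacency matrices stand in for the paper's higher-order neighborhood matrices, and random transposition moves stand in for whatever proposal scheme the authors' Monte Carlo uses:

```python
import random
import numpy as np

def matrix_distance(A, B, perm):
    """Elementwise distance between A and B after renumbering B's nodes."""
    P = B[np.ix_(perm, perm)]
    return int(np.abs(A - P).sum())

def min_distance(A, B, iters=5000, seed=0):
    """Search over numberings of the second network: start from the
    identity, propose random transpositions, keep non-worsening moves."""
    rng = random.Random(seed)
    n = len(A)
    perm = list(range(n))
    best = matrix_distance(A, B, perm)
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]
        d = matrix_distance(A, B, perm)
        if d <= best:
            best = d
        else:
            perm[i], perm[j] = perm[j], perm[i]  # undo the swap
    return best

# A 4-cycle compared with a relabeled copy of itself: the minimal
# distance over numberings is 0, and the search should find it.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
relabel = [2, 0, 3, 1]
B = A[np.ix_(relabel, relabel)]
print(min_distance(A, B))
```

A nonzero minimal distance would then reflect genuinely different neighborhood structures rather than a mere renumbering, which is the point of the measure.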
Computing Distances between Probabilistic Automata
Directory of Open Access Journals (Sweden)
Mathieu Tracol
2011-07-01
We present relaxed notions of simulation and bisimulation on Probabilistic Automata (PA) that allow some error ε. When ε is zero we retrieve the usual notions of bisimulation and simulation on PAs. We give logical characterisations of these notions by choosing suitable logics which differ from the elementary ones, L with negation and L without negation, by the modal operator. Using flow networks, we show how to compute the relations in PTIME. This allows the definition of an efficiently computable non-discounted distance between the states of a PA. A natural modification of this distance is introduced to obtain a discounted distance, which weakens the influence of long-term transitions. We compare our notions of distance to others previously defined and illustrate our approach on various examples. We also show that our distance is not expansive with respect to process algebra operators. Although L without negation is a suitable logic to characterise ε-(bi)simulation on deterministic PAs, it is not for general PAs; interestingly, we prove that it does characterise weaker notions, called a priori ε-(bi)simulation, which we prove to be NP-hard to decide.
Distance sampling methods and applications
Buckland, S T; Marques, T A; Oedekoven, C S
2015-01-01
In this book, the authors cover the basic methods and advances within distance sampling that are most valuable to practitioners and in ecology more broadly. This is the fourth book dedicated to distance sampling. In the decade since the last book published, there have been a number of new developments. The intervening years have also shown which advances are of most use. This self-contained book covers topics from the previous publications, while also including recent developments in method, software and application. Distance sampling refers to a suite of methods, including line and point transect sampling, in which animal density or abundance is estimated from a sample of distances to detected individuals. The book illustrates these methods through case studies; data sets and computer code are supplied to readers through the book’s accompanying website. Some of the case studies use the software Distance, while others use R code. The book is in three parts. The first part addresses basic methods, the ...
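The basic line-transect estimator at the heart of distance sampling fits a detection function to the observed perpendicular distances and converts it into an effective strip width. This is a minimal sketch assuming a half-normal detection function g(x) = exp(-x²/2σ²), whose textbook MLE is σ̂² = mean(x²); the distances below are invented:

```python
import math

def density_line_transect(distances, L):
    """Line-transect density estimate with a half-normal detection
    function. The effective strip width is ESW = sigma * sqrt(pi/2),
    and the density estimate is D = n / (2 * L * ESW)."""
    n = len(distances)
    sigma = math.sqrt(sum(x * x for x in distances) / n)  # half-normal MLE
    esw = sigma * math.sqrt(math.pi / 2.0)
    return n / (2.0 * L * esw)

# 5 detections along a 1 km transect; perpendicular distances in km
d = [0.01, 0.03, 0.02, 0.05, 0.04]
print(density_line_transect(d, L=1.0))  # animals per km^2
```

The software Distance and the R packages mentioned in the book generalize this in many directions (other detection functions, covariates, point transects), but the n/(2·L·ESW) structure is the same.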
Long-distance asymptotics of temperature correlators of the impenetrable Bose gas
International Nuclear Information System (INIS)
Its, A.R.; Izergin, A.G.; Korepin, V.E.
1989-06-01
The inverse scattering method is applied to the integrable nonlinear system describing temperature correlators of the impenetrable bosons in one space dimension. The corresponding matrix Riemann problems are constructed for two-point as well as for multi-point correlators. Long-distance asymptotics of two-point correlators is calculated. (author). 8 refs
Effect of objective function on multi-objective inverse planning of radiation therapy
International Nuclear Information System (INIS)
Li Guoli; Wu Yican; Song Gang; Wang Shifang
2006-01-01
There are two kinds of objective functions in radiotherapy inverse planning: dose distribution-based and dose-volume histogram (DVH)-based functions. Treatment planning today is still a trial-and-error process because the multi-objective problem is solved by transforming it into a single-objective problem using a specific set of weights for each objective. This work investigates the problem of objective function setting based on Pareto multi-optimization theory, and compares the effect of these two kinds of objective functions on multi-objective inverse planning, including calculation time and convergence speed. The basis of objective function setting in inverse planning is discussed. (authors)
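The weighted-sum transformation the abstract refers to can be sketched for a toy dose-distribution-based case. Everything here is invented for illustration (a 2-beam, 3-voxel dose-deposition model and arbitrary weights); real planning systems optimize far larger problems with DVH constraints:

```python
import numpy as np

def plan_beams(A_target, A_oar, d_target, w_t, w_o, iters=500, lr=0.1):
    """Scalarized two-objective plan: minimize
    w_t * ||A_target x - d_target||^2 + w_o * ||A_oar x||^2
    over nonnegative beam weights x by projected gradient descent."""
    x = np.zeros(A_target.shape[1])
    for _ in range(iters):
        g = (2 * w_t * A_target.T @ (A_target @ x - d_target)
             + 2 * w_o * A_oar.T @ (A_oar @ x))
        x = np.maximum(x - lr * g, 0.0)  # beam weights must stay nonnegative
    return x

# Toy dose-deposition matrices: 2 target voxels, 1 organ-at-risk voxel
A_t = np.array([[1.0, 0.2], [0.2, 1.0]])
A_o = np.array([[0.5, 0.5]])
d_t = np.array([1.0, 1.0])
x = plan_beams(A_t, A_o, d_t, w_t=1.0, w_o=0.1)
print(np.round(A_t @ x, 2))  # approximately [0.97 0.97]: slightly under-dosed
```

The target receives slightly less than the prescription because the OAR term pulls the beam weights down; changing w_t/w_o moves the solution along the Pareto front, which is exactly why weight selection makes planning a trial-and-error process.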
Further results on binary convolutional codes with an optimum distance profile
DEFF Research Database (Denmark)
Johannesson, Rolf; Paaske, Erik
1978-01-01
Fixed binary convolutional codes are considered which are simultaneously optimal or near-optimal according to three criteria: namely, distance profiled, free distanced_{ infty}, and minimum number of weightd_{infty}paths. It is shown how the optimum distance profile criterion can be used to limit...... codes. As a counterpart to quick-look-in (QLI) codes which are not "transparent," we introduce rateR = 1/2easy-look-in-transparent (ELIT) codes with a feedforward inverse(1 + D,D). In general, ELIT codes haved_{infty}superior to that of QLI codes....
Covariant chronogeometry and extreme distances
International Nuclear Information System (INIS)
Segal, I.E.
1981-01-01
A theory for the analysis of major features of the fundamental physical structure of the universe, from micro- to macroscopic is proposed. It indicates that gravity is essentially the transform of the aggregate of the basic microscopic forces under conformal inversion. The theory also suggests a natural form for elementary particle structure that implies a nonparametric cosmological effect and indicates an intrinsic hierarchy among the microscopic forces. (author)
Euclidean distance geometry an introduction
Liberti, Leo
2017-01-01
This textbook, the first of its kind, presents the fundamentals of distance geometry: theory, useful methodologies for obtaining solutions, and real world applications. Concise proofs are given and step-by-step algorithms for solving fundamental problems efficiently and precisely are presented in Mathematica®, enabling the reader to experiment with concepts and methods as they are introduced. Descriptive graphics, examples, and problems, accompany the real gems of the text, namely the applications in visualization of graphs, localization of sensor networks, protein conformation from distance data, clock synchronization protocols, robotics, and control of unmanned underwater vehicles, to name several. Aimed at intermediate undergraduates, beginning graduate students, researchers, and practitioners, the reader with a basic knowledge of linear algebra will gain an understanding of the basic theories of distance geometry and why they work in real life.
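One of the fundamental problems such a book treats is realizing points from a Euclidean distance matrix. A standard solution technique (presented here as an illustrative sketch, not the book's specific algorithm) is classical multidimensional scaling: double-center the squared distances to get a Gram matrix, then eigendecompose it:

```python
import numpy as np

def realize(D, dim=2):
    """Recover coordinates (up to a rigid motion) from a Euclidean
    distance matrix D via double centering and eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J          # Gram matrix of centered points
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]      # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Unit square: its distance matrix is realized exactly in 2D
P = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
X = realize(D)
D2 = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(np.allclose(D, D2))  # True
```

With noisy or incomplete distances (the sensor-localization and protein-conformation applications mentioned above), this exact method gives way to the optimization-based formulations the book develops.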
Geodesic distance in planar graphs
International Nuclear Information System (INIS)
Bouttier, J.; Di Francesco, P.; Guitter, E.
2003-01-01
We derive the exact generating function for planar maps (genus zero fatgraphs) with vertices of arbitrary even valence and with two marked points at a fixed geodesic distance. This is done in a purely combinatorial way based on a bijection with decorated trees, leading to a recursion relation on the geodesic distance. The latter is solved exactly in terms of discrete soliton-like expressions, suggesting an underlying integrable structure. We extract from this solution the fractal dimensions at the various (multi)-critical points, as well as the precise scaling forms of the continuum two-point functions and the probability distributions for the geodesic distance in (multi)-critical random surfaces. The two-point functions are shown to obey differential equations involving the residues of the KdV hierarchy
Inverse Compton gamma-rays from pulsars
International Nuclear Information System (INIS)
Morini, M.
1983-01-01
A model is proposed for pulsar optical and gamma-ray emission where relativistic electron beams: (i) scatter the blackbody photons from the polar cap surface giving inverse Compton gamma-rays and (ii) produce synchrotron optical photons in the light cylinder region which are then inverse Compton scattered giving other gamma-rays. The model is applied to the Vela pulsar, explaining the first gamma-ray pulse by inverse Compton scattering of synchrotron photons near the light cylinder and the second gamma-ray pulse partly by inverse Compton scattering of synchrotron photons and partly by inverse Compton scattering of the thermal blackbody photons near the star surface. (author)
Point-source inversion techniques
Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.
1982-11-01
A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
Inversion of GPS meteorology data
Directory of Open Access Journals (Sweden)
K. Hocke
Full Text Available The GPS meteorology (GPS/MET) experiment, led by the University Corporation for Atmospheric Research (UCAR), consists of a GPS receiver aboard a low earth orbit (LEO) satellite which was launched on 3 April 1995. During a radio occultation the LEO satellite rises or sets relative to one of the 24 GPS satellites at the Earth's horizon. Thereby the atmospheric layers are successively sounded by radio waves which propagate from the GPS satellite to the LEO satellite. From the observed phase path increases, which are due to refraction of the radio waves by the ionosphere and the neutral atmosphere, the atmospheric parameters refractivity, density, pressure and temperature are calculated with high accuracy and resolution (0.5–1.5 km). In the present study, practical aspects of the GPS/MET data analysis are discussed. The retrieval is based on the Abelian integral inversion of the atmospheric bending angle profile into the refractivity profile. The problem of the upper boundary condition of the Abelian integral is illustrated by examples. The statistical optimization approach which is applied to the data above 40 km and the use of topside bending angle profiles from model atmospheres stabilize the inversion. The retrieved temperature profiles are compared with corresponding profiles which have already been calculated by scientists of UCAR and the Jet Propulsion Laboratory (JPL), also using Abelian integral inversion. The comparison shows that in some cases large differences occur (5 K and more), probably due to different treatment of the upper boundary condition, data outliers and noise. Several temperature profiles with wavelike structures at tropospheric and stratospheric heights are shown. While the periodic structures at upper stratospheric heights could be caused by residual errors of the ionospheric correction method, the periodic temperature fluctuations at heights below 30 km are most likely caused by atmospheric waves (vertically …
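The Abelian integral inversion referred to above has a standard closed form in radio occultation. As a sketch in the usual notation (not necessarily the authors'), the refractive index $n$ at impact parameter $a$ follows from the bending angle profile $\alpha$ by

\[
\ln n(a) = \frac{1}{\pi}\int_{a}^{\infty}\frac{\alpha(x)}{\sqrt{x^{2}-a^{2}}}\,dx ,
\]

and refractivity $N=(n-1)\times 10^{6}$ then yields density, pressure and temperature via the hydrostatic equation and the equation of state. The upper boundary problem discussed in the abstract enters through the infinite upper limit of this integral, which in practice must be truncated or extended with model-atmosphere bending angles.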
Waveform inversion of lateral velocity variation from wavefield source location perturbation
Choi, Yun Seok
2013-09-22
It is a challenge in waveform inversion to define the deep part of the velocity model as precisely as the shallow part. The lateral velocity variation, that is, the derivative of velocity with respect to the horizontal distance, combined with well log data can be used to update the deep part of the velocity model more precisely. We develop a waveform inversion algorithm to obtain the lateral velocity variation by inverting the wavefield variation associated with the lateral shot location perturbation. The gradient of the new waveform inversion algorithm is obtained by the adjoint-state method. Our inversion algorithm focuses on resolving the lateral changes of the velocity model with respect to a fixed reference vertical velocity profile given by a well log. We apply the method on a simple dome model to highlight the method's potential.
Adaptive Distance Protection for Microgrids
DEFF Research Database (Denmark)
Lin, Hengwei; Guerrero, Josep M.; Quintero, Juan Carlos Vasquez
2015-01-01
Due to the increasing penetration of distributed generation resources, more and more microgrids can be found in distribution systems. This paper proposes a phasor measurement unit based distance protection strategy for microgrids in distribution systems. At the same time, a transfer tripping scheme is adopted to accelerate the tripping speed of the relays on the weak lines. The protection methodology is tested on a mid-voltage microgrid network in Aalborg, Denmark. The results show that the adaptive distance protection methodology has good selectivity and sensitivity. What is more, this system also has …
Steiner Distance in Graphs--A Survey
Mao, Yaping
2017-01-01
For a connected graph $G$ of order at least $2$ and $S\\subseteq V(G)$, the \\emph{Steiner distance} $d_G(S)$ among the vertices of $S$ is the minimum size among all connected subgraphs whose vertex sets contain $S$. In this paper, we summarize the known results on the Steiner distance parameters, including Steiner distance, Steiner diameter, Steiner center, Steiner median, Steiner interval, Steiner distance hereditary graph, Steiner distance stable graph, average Steiner distance, and Steiner ...
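The definition above can be turned directly into a (brute-force) computation: since a minimal connected subgraph containing $S$ is a tree, $d_G(S)=|W|-1$ for the smallest vertex set $W\supseteq S$ whose induced subgraph is connected. The sketch below is an illustrative exponential-time search, not one of the survey's algorithms; all graph data are made up for the example.

```python
# Brute-force Steiner distance d_G(S): minimum number of edges in a
# connected subgraph of G whose vertex set contains S. A minimal such
# subgraph is a tree, so d_G(S) = |W| - 1 for the smallest W >= S with
# G[W] connected. Exponential in |V|; for illustration only.
from itertools import combinations

def steiner_distance(vertices, edges, S):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def connected(W):
        W = set(W)
        stack, seen = [next(iter(W))], set()
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend((adj[u] & W) - seen)   # stay inside W
        return seen == W

    S = set(S)
    rest = [v for v in vertices if v not in S]
    for extra in range(len(rest) + 1):          # grow candidate sets
        for add in combinations(rest, extra):
            W = S | set(add)
            if connected(W):
                return len(W) - 1               # a tree on W has |W|-1 edges
    return None                                 # S cannot be connected

# Star K_{1,3}: center 'c' joined to 'a', 'b', 'd'; S = {a, b, d}
print(steiner_distance('abcd', [('c','a'), ('c','b'), ('c','d')], 'abd'))  # 3
```

For $|S|=2$ this reduces to the ordinary graph distance, which is the sense in which Steiner distance generalizes it.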
Partial distance correlation with methods for dissimilarities
Székely, Gábor J.; Rizzo, Maria L.
2014-01-01
Distance covariance and distance correlation are scalar coefficients that characterize independence of random vectors in arbitrary dimension. Properties, extensions, and applications of distance correlation have been discussed in the recent literature, but the problem of defining the partial distance correlation has remained an open question of considerable interest. The problem of partial distance correlation is more complex than partial correlation partly because the squared distance covari...
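The (non-partial) sample statistics the abstract builds on are easy to state concretely. A minimal pure-Python sketch of sample distance correlation for 1-D data (double-centered distance matrices, then a normalized mean of products; the partial version in the paper is not reproduced here):

```python
# Sample distance correlation (Szekely-Rizzo) for 1-D samples.
# dCov^2 is the mean of entrywise products of double-centered distance
# matrices; dCor = sqrt(dCov^2 / sqrt(dVar_x * dVar_y)).
# Minimal O(n^2) sketch for illustration.
from math import sqrt

def _centered(xs):
    n = len(xs)
    d = [[abs(a - b) for b in xs] for a in xs]
    row = [sum(r) / n for r in d]
    grand = sum(row) / n
    return [[d[i][j] - row[i] - row[j] + grand for j in range(n)]
            for i in range(n)]

def dcor(xs, ys):
    n = len(xs)
    A, B = _centered(xs), _centered(ys)
    cov = sum(A[i][j] * B[i][j] for i in range(n) for j in range(n)) / n**2
    vx = sum(a * a for r in A for a in r) / n**2
    vy = sum(b * b for r in B for b in r) / n**2
    return sqrt(cov / sqrt(vx * vy)) if vx * vy > 0 else 0.0

print(round(dcor([1, 2, 3, 4], [2, 4, 6, 8]), 6))  # 1.0 for an exact linear relation
```

Unlike Pearson correlation, the population version of this coefficient is zero only under independence, which is what makes the partial extension discussed in the paper interesting.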
A study of metrics of distance and correlation between ranked lists for compositionality detection
DEFF Research Database (Denmark)
Lioma, Christina; Hansen, Niels Dalum
2017-01-01
… affects the measurement of semantic similarity. We propose a new compositionality detection method that represents phrases as ranked lists of term weights. Our method approximates the semantic similarity between two ranked list representations using a range of well-known distance and correlation metrics … of compositionality using any of the distance and correlation metrics considered.
Inverse problems in systems biology
International Nuclear Information System (INIS)
Engl, Heinz W; Lu, James; Müller, Stefan; Flamm, Christoph; Schuster, Peter; Kügler, Philipp
2009-01-01
Systems biology is a new discipline built upon the premise that an understanding of how cells and organisms carry out their functions cannot be gained by looking at cellular components in isolation. Instead, consideration of the interplay between the parts of systems is indispensable for analyzing, modeling, and predicting systems' behavior. Studying biological processes under this premise, systems biology combines experimental techniques and computational methods in order to construct predictive models. Both in building and utilizing models of biological systems, inverse problems arise at several occasions, for example, (i) when experimental time series and steady state data are used to construct biochemical reaction networks, (ii) when model parameters are identified that capture underlying mechanisms or (iii) when desired qualitative behavior such as bistability or limit cycle oscillations is engineered by proper choices of parameter combinations. In this paper we review principles of the modeling process in systems biology and illustrate the ill-posedness and regularization of parameter identification problems in that context. Furthermore, we discuss the methodology of qualitative inverse problems and demonstrate how sparsity enforcing regularization allows the determination of key reaction mechanisms underlying the qualitative behavior. (topical review)
The seismic reflection inverse problem
International Nuclear Information System (INIS)
Symes, W W
2009-01-01
The seismic reflection method seeks to extract maps of the Earth's sedimentary crust from transient near-surface recording of echoes, stimulated by explosions or other controlled sound sources positioned near the surface. Reasonably accurate models of seismic energy propagation take the form of hyperbolic systems of partial differential equations, in which the coefficients represent the spatial distribution of various mechanical characteristics of rock (density, stiffness, etc). Thus the fundamental problem of reflection seismology is an inverse problem in partial differential equations: to find the coefficients (or at least some of their properties) of a linear hyperbolic system, given the values of a family of solutions in some part of their domains. The exploration geophysics community has developed various methods for estimating the Earth's structure from seismic data and is also well aware of the inverse point of view. This article reviews mathematical developments in this subject over the last 25 years, to show how the mathematics has both illuminated innovations of practitioners and led to new directions in practice. Two themes naturally emerge: the importance of single scattering dominance and compensation for spectral incompleteness by spatial redundancy. (topical review)
Inversion theory and conformal mapping
Blair, David E
2000-01-01
It is rarely taught in an undergraduate or even graduate curriculum that the only conformal maps in Euclidean space of dimension greater than two are those generated by similarities and inversions in spheres. This is in stark contrast to the wealth of conformal maps in the plane. The principal aim of this text is to give a treatment of this paucity of conformal maps in higher dimensions. The exposition includes both an analytic proof in general dimension and a differential-geometric proof in dimension three. For completeness, enough complex analysis is developed to prove the abundance of conformal maps in the plane. In addition, the book develops inversion theory as a subject, along with the auxiliary theme of circle-preserving maps. A particular feature is the inclusion of a paper by Carathéodory with the remarkable result that any circle-preserving transformation is necessarily a Möbius transformation; not even the continuity of the transformation is assumed. The text is at the level of advanced undergr...
LHC Report: 2 inverse femtobarns!
Mike Lamont for the LHC Team
2011-01-01
The LHC is enjoying a confluence of twos. This morning (Friday 5 August) we passed 2 inverse femtobarns delivered in 2011; the peak luminosity is now just over 2 x 10^33 cm^-2 s^-1; and recently fill 2000 was in for nearly 22 hours and delivered around 90 inverse picobarns, almost twice 2010's total. In order to increase the luminosity we can increase the number of bunches, increase the number of particles per bunch, or decrease the transverse beam size at the interaction point. The beam size can be tackled in two ways: either reduce the size of the injected bunches or squeeze harder with the quadrupole magnets situated on either side of the experiments. Having increased the number of bunches to 1380, the maximum possible with a 50 ns bunch spacing, a one-day meeting in Crozet decided to explore the other possibilities. The size of the beams coming from the injectors has been reduced to the minimum possible. This has brought an increase in the peak luminosity of about 50% and the 2 x 10^33 cm...
Instrument developments for inverse photoemission
International Nuclear Information System (INIS)
Brenac, A.
1987-02-01
Experimental developments principally concerning electron sources for inverse photoemission are presented. The specifications of the electron beam are derived from experiment requirements, taking into account the limitations encountered (space charge divergence). For a wave vector resolution of 0.2 Å^-1, the maximum current is 25 microA at 20 eV. The design of a gun providing such a beam in the range 5 to 50 eV is presented. Angle-resolved inverse photoemission experiments show angular effects at 30 eV. For an energy of 10 eV, angular effects should be stronger, but the low efficiency of the spectrometer in this range makes the experiments difficult. The total energy resolution of 0.3 eV is the result mainly of electron energy spread, as expected. The electron sources are based on field effect electron emission from a cathode consisting of a large number of microtips. The emission arises from a few atomic cells for each tip. The ultimate theoretical energy spread is 0.1 eV. This value is not attained because of an interface resistance problem. A partial solution of this problem allows measurement of an energy spread of 0.9 eV for a current of 100 microA emitted at 60 eV. These cathodes have a further advantage in that emission can occur at a low temperature.
Gesture Interaction at a Distance
Fikkert, F.W.
2010-01-01
The aim of this work is to explore, from a perspective of human behavior, which gestures are suited to control large display surfaces from a short distance away; why that is so; and, equally important, how such an interface can be made a reality. A well-known example of the type of interface that is
Communication Barriers in Distance Education
Isman, Aytekin; Dabaj, Fahme; Altinay, Fahriye; Altinay, Zehra
2003-01-01
Communication is a key concept as being the major tool for people in order to satisfy their needs. It is an activity which refers as process and effective communication requires qualified communication with the elimination of communication barriers. As it is known, distance education is a new trend by following contemporary facilities and tools…
Distance Education Technologies in Asia
International Development Research Centre (IDRC) Digital Library (Canada)
Video surveillance using distance maps
Schouten, Theo E.; Kuppens, Harco C.; van den Broek, Egon L.
2006-02-01
Human vigilance is limited; hence, automatic motion and distance detection is one of the central issues in video surveillance. Many aspects are of importance; this paper specifically addresses efficiency, achieving real-time performance, accuracy, and robustness against various noise factors. To obtain fully controlled test environments, an artificial development center for robot navigation is introduced in which several parameters can be set (e.g., number of objects, trajectories, and type and amount of noise). In the videos, for each successive frame, movement of stationary objects is detected and pixels of moving objects are located, from which moving objects are identified in a robust way. An Exact Euclidean Distance Map (E2DM) is utilized to determine accurately the distances between moving and stationary objects. Together with the determined distances between moving objects and the detected movement of stationary objects, this provides the input for detecting unwanted situations in the scene. Further, each intelligent object (e.g., a robot) is provided with its E2DM, allowing the object to plan its course of action. Timing results are specified for each program block of the processing chain for 20 different setups. Thus, the current paper presents extensive, experimentally controlled research on real-time, accurate, and robust motion detection for video surveillance, using E2DMs, which makes it a unique approach.
Interaction in Distance Nursing Education
Boz Yuksekdag, Belgin
2012-01-01
The purpose of this study is to determine psychiatry nurses' attitudes toward the interactions in distance nursing education, and also to scrutinize their attitudes based on demographics and computer/Internet usage. The comparative relational scanning model is the method of this study. The research data were collected through "The Scale of Attitudes of…
Student Monitoring in Distance Education.
Holt, Peter; And Others
1987-01-01
Reviews a computerized monitoring system for distance education students at Athabasca University designed to solve the problems of tracking student performance. A pilot project for tutors is described which includes an electronic conferencing system and electronic mail, and an evaluation currently in progress is briefly discussed. (LRW)
A 2D nonlinear inversion of well-seismic data
International Nuclear Information System (INIS)
Métivier, Ludovic; Lailly, Patrick; Delprat-Jannaud, Florence; Halpern, Laurence
2011-01-01
Well-seismic data such as vertical seismic profiles are supposed to provide detailed information about the elastic properties of the subsurface at the vicinity of the well. Heterogeneity of sedimentary terrains can lead to far from negligible multiple scattering, one of the manifestations of the nonlinearity involved in the mapping between elastic parameters and seismic data. We present a 2D extension of an existing 1D nonlinear inversion technique in the context of acoustic wave propagation. In the case of a subsurface with gentle lateral variations, we propose a regularization technique which aims at ensuring the stability of the inversion in a context where the recorded seismic waves provide a very poor illumination of the subsurface. We deal with a huge size inverse problem. Special care has been taken for its numerical solution, regarding both the choice of the algorithms and the implementation on a cluster-based supercomputer. Our tests on synthetic data show the effectiveness of our regularization. They also show that our efforts in accounting for the nonlinearities are rewarded by an exceptional seismic resolution at distances of about 100 m from the well. They also show that the result is not very sensitive to errors in the estimation of the velocity distribution, as far as these errors remain realistic in the context of a medium with gentle lateral variations
Repar, Jelena; Warnecke, Tobias
2017-08-01
Inversions are a major contributor to structural genome evolution in prokaryotes. Here, using a novel alignment-based method, we systematically compare 1,651 bacterial and 98 archaeal genomes to show that inversion landscapes are frequently biased toward (symmetric) inversions around the origin-terminus axis. However, symmetric inversion bias is not a universal feature of prokaryotic genome evolution but varies considerably across clades. At the extremes, inversion landscapes in Bacillus-Clostridium and Actinobacteria are dominated by symmetric inversions, while there is little or no systematic bias favoring symmetric rearrangements in archaea with a single origin of replication. Within clades, we find strong but clade-specific relationships between symmetric inversion bias and different features of adaptive genome architecture, including the distance of essential genes to the origin of replication and the preferential localization of genes on the leading strand. We suggest that heterogeneous selection pressures have converged to produce similar patterns of structural genome evolution across prokaryotes. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Inverse problems and inverse scattering of plane waves
Ghosh Roy, Dilip N
2001-01-01
The purpose of this text is to present the theory and mathematics of inverse scattering, in a simple way, to the many researchers and professionals who use it in their everyday research. While applications range across a broad spectrum of disciplines, examples in this text will focus primarily, but not exclusively, on acoustics. The text will be especially valuable for those applied workers who would like to delve more deeply into the fundamentally mathematical character of the subject matter. Practitioners in this field comprise applied physicists, engineers, and technologists, whereas the theory is almost entirely in the domain of abstract mathematics. This gulf between the two, if bridged, can only lead to improvement in the level of scholarship in this highly important discipline. This is the book's primary focus.
O'Neil, Patrick M; Theim, Kelly R; Boeka, Abbe; Johnson, Gail; Miller-Kovach, Karen
2012-12-01
Greater use of key self-regulatory behaviors (e.g., self-monitoring of food intake and weight) is associated with greater weight loss within behavioral weight loss treatments, although this association is less established within widely-available commercial weight loss programs. Further, high hedonic hunger (i.e., susceptibility to environmental food cues) may present a barrier to successful behavior change and weight loss, although this has not yet been examined. Adult men and women (N=111, body mass index M±SD=31.5±2.7kg/m(2)) were assessed before and after participating in a 12-week commercial weight loss program. From pre- to post-treatment, reported usage of weight control behaviors improved and hedonic hunger decreased, and these changes were inversely associated. A decrease in hedonic hunger was associated with better weight loss. An improvement in reported weight control behaviors (e.g., self-regulatory behaviors) was associated with better weight loss, and this association was even stronger among individuals with high baseline hedonic hunger. Findings highlight the importance of specific self-regulatory behaviors within weight loss treatment, including a commercial weight loss program developed for widespread community implementation. Assessment of weight control behavioral skills usage and hedonic hunger may be useful to further identify mediators of weight loss within commercial weight loss programs. Future interventions might specifically target high hedonic hunger and prospectively examine changes in hedonic hunger during other types of weight loss treatment to inform its potential impact on sustained behavior change and weight control. Copyright © 2012 Elsevier Ltd. All rights reserved.
A Streaming Distance Transform Algorithm for Neighborhood-Sequence Distances
Directory of Open Access Journals (Sweden)
Nicolas Normand
2014-09-01
Full Text Available We describe an algorithm that computes a “translated” 2D Neighborhood-Sequence Distance Transform (DT) using a look-up table approach. It requires a single raster scan of the input image and produces one line of output for every line of input. The neighborhood sequence is specified either by providing one period of some integer periodic sequence or by providing the rate of appearance of neighborhoods. The full algorithm optionally derives the regular (centered) DT from the “translated” DT, providing the result image on-the-fly, with a minimal delay, before the input image is fully processed. Its efficiency can benefit all applications that use neighborhood-sequence distances, particularly when pipelined processing architectures are involved, or when the size of objects in the source image is limited.
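The paper's single-scan look-up-table algorithm is more elaborate than can be shown here; as a point of reference, the classic two-pass raster-scan distance transform for the city-block (d4) metric, a fixed-neighborhood special case of the neighborhood sequences the paper generalizes, can be sketched as:

```python
# Two-pass raster-scan distance transform, city-block (d4) metric:
# a fixed-neighborhood special case of neighborhood-sequence DTs.
# Input: binary image (1 = object pixel). Output: per pixel, the d4
# distance to the nearest object pixel. Sketch, not the paper's LUT method.
def city_block_dt(img):
    h, w = len(img), len(img[0])
    INF = h + w + 1                          # larger than any possible distance
    d = [[0 if img[y][x] else INF for x in range(w)] for y in range(h)]
    for y in range(h):                       # forward pass: top/left neighbors
        for x in range(w):
            if y > 0: d[y][x] = min(d[y][x], d[y-1][x] + 1)
            if x > 0: d[y][x] = min(d[y][x], d[y][x-1] + 1)
    for y in range(h - 1, -1, -1):           # backward pass: bottom/right neighbors
        for x in range(w - 1, -1, -1):
            if y < h - 1: d[y][x] = min(d[y][x], d[y+1][x] + 1)
            if x < w - 1: d[y][x] = min(d[y][x], d[y][x+1] + 1)
    return d

img = [[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]
print(city_block_dt(img))  # [[2, 1, 2], [1, 0, 1], [2, 1, 2]]
```

A neighborhood-sequence DT alternates the neighborhood (e.g., 4- and 8-neighbors) between propagation steps to better approximate Euclidean distance; the two-pass structure above stays the same.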
New weighting methods for phylogenetic tree reconstruction using multiple loci.
Misawa, Kazuharu; Tajima, Fumio
2012-08-01
Efficient determination of evolutionary distances is important for the correct reconstruction of phylogenetic trees. The performance of the pooled distance required for reconstructing a phylogenetic tree can be improved by applying large weights to appropriate distances for reconstructing phylogenetic trees and small weights to inappropriate distances. We developed two weighting methods, the modified Tajima-Takezaki method and the modified least-squares method, for reconstructing phylogenetic trees from multiple loci. By computer simulations, we found that both of the new methods were more efficient in reconstructing correct topologies than the no-weight method. Hence, we reconstructed hominoid phylogenetic trees from mitochondrial DNA using our new methods, and found that the levels of bootstrap support were significantly increased by the modified Tajima-Takezaki and by the modified least-squares method.
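The pooling idea is simple to illustrate, although the paper's specific modified Tajima-Takezaki and least-squares weights are not reproduced here. A generic sketch using inverse-variance weights (a common choice that downweights noisy loci; all numbers are hypothetical):

```python
# Pooling per-locus evolutionary distances with weights. Illustrative
# inverse-variance weighting only; the modified Tajima-Takezaki and
# least-squares weights of the paper are not reproduced here.
def pooled_distance(dists, variances):
    ws = [1.0 / v for v in variances]        # noisy loci get small weights
    total = sum(ws)
    return sum(w * d for w, d in zip(ws, dists)) / total

loci = [0.10, 0.12, 0.30]                    # hypothetical per-locus distances
var  = [0.01, 0.01, 0.10]                    # hypothetical sampling variances
print(round(pooled_distance(loci, var), 4))  # pulled toward the low-variance loci
```

With equal variances this reduces to the unweighted ("no-weight") average that the paper's simulations use as a baseline.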
International Nuclear Information System (INIS)
Švanda, Michal; Roudier, Thierry; Rieutord, Michel; Burston, Raymond; Gizon, Laurent
2013-01-01
We compare measurements of horizontal flows on the surface of the Sun using helioseismic time-distance inversions and coherent structure tracking of solar granules. Tracking provides two-dimensional horizontal flows on the solar surface, whereas the time-distance inversions estimate the full three-dimensional velocity flows in the shallow near-surface layers. Both techniques use Helioseismic and Magnetic Imager observations as input. We find good correlations between the various measurements resulting from the two techniques. Further, we find a good agreement between these measurements and the time-averaged Doppler line-of-sight velocity, and also perform sanity checks on the vertical flow that resulted from the three-dimensional time-distance inversion.
Wake Vortex Inverse Model User's Guide
Lai, David; Delisi, Donald
2008-01-01
NorthWest Research Associates (NWRA) has developed an inverse model for inverting landing aircraft vortex data. The data used for the inversion are the time evolution of the lateral transport position and vertical position of both the port and starboard vortices. The inverse model performs iterative forward model runs using various estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Forward model predictions of lateral transport and altitude are then compared with the observed data. Differences between the data and model predictions guide the choice of vortex parameter values, crosswind profile and circulation evolution in the next iteration. Iterations are performed until a user-defined criterion is satisfied. Currently, the inverse model is set to stop when the improvement in the rms deviation between the data and model predictions is less than 1 percent for two consecutive iterations. The forward model used in this inverse model is a modified version of the Shear-APA model. A detailed description of this forward model, the inverse model, and its validation are presented in a different report (Lai, Mellman, Robins, and Delisi, 2007). This document is a User's Guide for the Wake Vortex Inverse Model. Section 2 presents an overview of the inverse model program. Execution of the inverse model is described in Section 3. When executing the inverse model, a user is requested to provide the name of an input file which contains the inverse model parameters, the various datasets, and directories needed for the inversion. A detailed description of the list of parameters in the inversion input file is presented in Section 4. A user has an option to save the inversion results of each lidar track in a mat-file (a condensed data file in Matlab format). These saved mat-files can be used for post-inversion analysis. A description of the contents of the saved files is given in Section 5. An example of an inversion input
Inverse diffusion theory of photoacoustics
International Nuclear Information System (INIS)
Bal, Guillaume; Uhlmann, Gunther
2010-01-01
This paper analyzes the reconstruction of diffusion and absorption parameters in an elliptic equation from knowledge of internal data. In the application of photoacoustics, the internal data are the amount of thermal energy deposited by high frequency radiation propagating inside a domain of interest. These data are obtained by solving an inverse wave equation, which is well studied in the literature. We show that knowledge of two internal data based on well-chosen boundary conditions uniquely determines two constitutive parameters in diffusion and Schrödinger equations. Stability of the reconstruction is guaranteed under additional geometric constraints of strict convexity. No geometric constraints are necessary when 2n internal data for well-chosen boundary conditions are available, where n is spatial dimension. The set of well-chosen boundary conditions is characterized in terms of appropriate complex geometrical optics solutions
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Fagerberg, Rolf; Mailund, Thomas
2013-01-01
The triplet and quartet distances are distance measures to compare two rooted and two unrooted trees, respectively. The leaves of the two trees should have the same set of n labels. The distances are defined by enumerating all subsets of three labels (triplets) and four labels (quartets), respectively, and counting how often the induced topologies in the two input trees are different. In this paper we present efficient algorithms for computing these distances. We show how to compute the triplet distance in time O(n log n) and the quartet distance in time O(d n log n), where d is the maximal degree of any node in the two trees. Within the same time bounds, our framework also allows us to compute the parameterized triplet and quartet distances, where a parameter is introduced to weight resolved (binary) topologies against unresolved (non-binary) topologies. The previous best algorithm…
Action understanding as inverse planning.
Baker, Chris L; Saxe, Rebecca; Tenenbaum, Joshua B
2009-12-01
Humans are adept at inferring the mental states underlying other agents' actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents' behavior based on the principle of rationality: the expectation that agents will plan approximately rationally to achieve their goals, given their beliefs about the world. The mental states that caused an agent's behavior are inferred by inverting this model of rational planning using Bayesian inference, integrating the likelihood of the observed actions with the prior over mental states. This approach formalizes in precise probabilistic terms the essence of previous qualitative approaches to action understanding based on an "intentional stance" [Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press] or a "teleological stance" [Gergely, G., Nádasdy, Z., Csibra, G., & Biró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165-193]. In three psychophysical experiments using animated stimuli of agents moving in simple mazes, we assess how well different inverse planning models based on different goal priors can predict human goal inferences. The results provide quantitative evidence for an approximately rational inference mechanism in human goal inference within our simplified stimulus paradigm, and for the flexible nature of goal representations that human observers can adopt. We discuss the implications of our experimental results for human action understanding in real-world contexts, and suggest how our framework might be extended to capture other kinds of mental state inferences, such as inferences about beliefs, or inferring whether an entity is an intentional agent.
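The core inference in inverse planning is compact enough to sketch. Below, a hypothetical agent on a 1-D line moves toward one of two goals; observed moves are modeled as noisily rational (softmax over how much each action reduces distance to the goal), and Bayes' rule inverts this to infer the goal. This is a toy illustration of the framework's logic, not the authors' maze models; all positions, goals and the rationality parameter are made up.

```python
# Minimal Bayesian inverse planning sketch: infer an agent's goal from
# observed actions, assuming softmax-rational action choice. Toy example;
# goals, positions and the rationality parameter BETA are illustrative.
from math import exp

GOALS = {"left": 0, "right": 10}
BETA = 2.0                                    # higher = more rational agent

def action_prob(pos, action, goal):           # action in {-1, +1}
    def util(a):
        return -abs((pos + a) - GOALS[goal])  # closer to goal = higher utility
    z = sum(exp(BETA * util(a)) for a in (-1, +1))
    return exp(BETA * util(action)) / z

def posterior(start, actions, prior=None):
    prior = prior or {g: 1.0 / len(GOALS) for g in GOALS}
    post = {}
    for g, p in prior.items():                # P(g | actions) ∝ P(actions | g) P(g)
        pos, like = start, p
        for a in actions:
            like *= action_prob(pos, a, g)
            pos += a
        post[g] = like
    z = sum(post.values())
    return {g: v / z for g, v in post.items()}

p = posterior(start=5, actions=[+1, +1, +1])  # three steps rightward
print(p["right"] > 0.9)                       # strong evidence for the right goal
```

The same structure, with richer state spaces and priors over goals, underlies the maze experiments described in the abstract.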
Optimization and inverse problems in electromagnetism
Wiak, Sławomir
2003-01-01
From 12 to 14 September 2002, the Academy of Humanities and Economics (AHE) hosted the workshop "Optimization and Inverse Problems in Electromagnetism". After this bi-annual event, a large number of papers were assembled and combined in this book. During the workshop recent developments and applications in optimization and inverse methodologies for electromagnetic fields were discussed. The contributions selected for the present volume cover a wide spectrum of inverse and optimal electromagnetic methodologies, ranging from theoretical to practical applications. A number of new optimal and inverse methodologies were proposed. There are contributions related to dedicated software. Optimization and Inverse Problems in Electromagnetism consists of three thematic chapters, covering: -General papers (survey of specific aspects of optimization and inverse problems in electromagnetism), -Methodologies, -Industrial Applications. The book can be useful to students of electrical and electronics engineering, computer sci...
Reverse Universal Resolving Algorithm and inverse driving
DEFF Research Database (Denmark)
Pécseli, Thomas
2012-01-01
Inverse interpretation is a semantics-based, non-standard interpretation of programs. Given a program and a value, an inverse interpreter finds all or one of the inputs that would yield the given value as output under normal forward evaluation. The Reverse Universal Resolving Algorithm is a new variant of the Universal Resolving Algorithm for inverse interpretation. The new variant outperforms the original algorithm in several cases, e.g., when unpacking a list using inverse interpretation of a pack program. It uses inverse driving as its main technique, which has not been described in detail before. Inverse driving may find application with, e.g., supercompilation, thus suggesting a new kind of program inverter.
Inverse analysis of turbidites by machine learning
Naruse, H.; Nakao, K.
2017-12-01
This study aims to propose a method to estimate the paleo-hydraulic conditions of turbidity currents from ancient turbidites using a machine-learning technique. In this method, numerical simulation is repeated under various initial conditions, producing a data set of characteristic features of turbidites. This data set is then used for supervised training of a deep-learning neural network (NN). Quantities of the characteristic features of turbidites in the training data set are given to the input nodes of the NN, and the output nodes are expected to provide estimates of the initial conditions of the turbidity current. The weight coefficients of the NN are then optimized to reduce the root-mean-square difference between the true conditions and the output values of the NN. In this way, the empirical relationship between the numerical results and the initial conditions is explored, and the discovered relationship is used for the inversion of turbidity currents. This machine learning can potentially produce an NN that estimates paleo-hydraulic conditions from data on ancient turbidites. We produced a preliminary implementation of this methodology. A forward model based on 1D shallow-water equations with a correction for the density-stratification effect was employed. This model calculates the behavior of a surge-like turbidity current transporting mixed-size sediment, and outputs the spatial distribution of volume per unit area of each grain-size class on a uniform slope. The grain-size distribution was discretized into 3 classes. The numerical simulation was repeated 1000 times, and the resulting 1000 beds of turbidites were used as training data for an NN with 21000 input nodes, 5 output nodes, and two hidden layers. After the machine learning finished, 200 independent simulations were conducted in order to evaluate the performance of the NN. In this test, the initial conditions of the validation data were successfully reconstructed by the NN. The estimated values show very small
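The learn-the-inverse-from-forward-simulations idea scales down to a few lines. The sketch below is a toy stand-in, not the paper's method: the turbidity-current simulator is replaced by a simple monotone forward function, and the deep NN by a quadratic least-squares fit solved via the normal equations. All function names and constants are illustrative assumptions.

```python
def forward(u):
    """Toy stand-in for the forward simulator: maps an initial condition u
    to an observable deposit feature (monotone, mildly nonlinear)."""
    return u + 0.1 * u * u

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= f * A[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (A[r][3] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return x

def fit_inverse(samples):
    """Least-squares fit of u ~ a + b*y + c*y^2 from (u, y) training pairs:
    the same 'learn the inverse from forward simulations' idea, with a
    polynomial regression in place of a neural network."""
    rows = [(1.0, y, y * y) for _, y in samples]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * u for r, (u, _) in zip(rows, samples)) for i in range(3)]
    return solve3(A, b)

def invert(coeffs, y):
    a, b, c = coeffs
    return a + b * y + c * y * y

# "Simulate" training data over a range of initial conditions.
train = [(u / 10.0, forward(u / 10.0)) for u in range(1, 51)]
coeffs = fit_inverse(train)
```

Given an observation produced by the forward model, the fitted inverse recovers the initial condition to within a few percent over the training range.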
Mindfulness Approaches and Weight Loss, Weight Maintenance, and Weight Regain.
Dunn, Carolyn; Haubenreiser, Megan; Johnson, Madison; Nordby, Kelly; Aggarwal, Surabhi; Myer, Sarah; Thomas, Cathy
2018-03-01
There is an urgent need for effective weight management techniques, as more than one third of US adults are overweight or obese. Recommendations for weight loss include a combination of reducing caloric intake, increasing physical activity, and behavior modification. Behavior modification includes mindful eating or eating with awareness. The purpose of this review was to summarize the literature and examine the impact of mindful eating on weight management. The practice of mindful eating has been applied to the reduction of food cravings, portion control, body mass index, and body weight. Past reviews evaluating the relationship between mindfulness and weight management did not focus on change in mindful eating as the primary outcome or mindful eating as a measured variable. This review demonstrates strong support for inclusion of mindful eating as a component of weight management programs and may provide substantial benefit to the treatment of overweight and obesity.
Managerial Distance and Virtual Ownership
DEFF Research Database (Denmark)
Hansmann, Henry; Thomsen, Steen
Industrial foundations are autonomous nonprofit entities that own and control one or more conventional business firms. These foundations are common in Northern Europe, where they own a number of internationally prominent companies. Previous studies have indicated, surprisingly, that companies con... on differences among the industrial foundations themselves. We work with a rich data set comprising 113 foundation-owned Danish companies over the period 2003-2008. We focus in particular on a composite structural factor that we term "managerial distance." We propose this as a measure of the extent to which ...-seeking outside owners of the company. Consistent with this hypothesis, our empirical analysis shows a positive, significant, and robust association between managerial distance and the economic performance of foundation-owned companies. The findings appear to illuminate not just foundation governance, but corporate governance and fiduciary behavior more generally.
Determining distances using asteroseismic methods
DEFF Research Database (Denmark)
Aguirre, Victor Silva; Casagrande, L.; Basu, Sarbina
2013-01-01
Asteroseismology has been extremely successful in determining the properties of stars in different evolutionary stages with a remarkable level of precision. However, to fully exploit its potential, robust methods for estimating stellar parameters are required and independent verification of the r... fluxes, and thus distances for field stars in a self-consistent manner. Applying our method to a sample of solar-like oscillators in the {\it Kepler} field that have accurate {\it Hipparcos} parallaxes, we find agreement in our distance determinations to better than 5%. Comparison with measurements...
Anisotropic magnetotelluric inversion using a mutual information constraint
Mandolesi, E.; Jones, A. G.
2012-12-01
technique: the MI constraint affects the distance between the images that models draw, regardless of the parameters that build the considered models. Results from a medium-size synthetic test show the MI constraint's ability to drive the inverse problem solution towards a model compatible with the known ones.
Czech Academy of Sciences Publication Activity Database
Bielas, Wojciech; Plewik, S.; Walczyńska, Marta
2018-01-01
Roč. 4, č. 2 (2018), s. 687-698 ISSN 2199-675X R&D Projects: GA ČR GF16-34860L Institutional support: RVO:67985840 Keywords: Cantorval * center of distances * von Neumann's theorem * set of subsums Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics https://link.springer.com/article/10.1007%2Fs40879-017-0199-4
Distance probes of dark energy
Energy Technology Data Exchange (ETDEWEB)
Kim, A. G.; Padmanabhan, N.; Aldering, G.; Allen, S. W.; Baltay, C.; Cahn, R. N.; D’Andrea, C. B.; Dalal, N.; Dawson, K. S.; Denney, K. D.; Eisenstein, D. J.; Finley, D. A.; Freedman, W. L.; Ho, S.; Holz, D. E.; Kasen, D.; Kent, S. M.; Kessler, R.; Kuhlmann, S.; Linder, E. V.; Martini, P.; Nugent, P. E.; Perlmutter, S.; Peterson, B. M.; Riess, A. G.; Rubin, D.; Sako, M.; Suntzeff, N. V.; Suzuki, N.; Thomas, R. C.; Wood-Vasey, W. M.; Woosley, S. E.
2015-03-01
This document presents the results from the Distances subgroup of the Cosmic Frontier Community Planning Study (Snowmass 2013). We summarize the current state of the field as well as future prospects and challenges. In addition to the established probes using Type Ia supernovae and baryon acoustic oscillations, we also consider prospective methods based on clusters, active galactic nuclei, gravitational wave sirens and strong lensing time delays.
Inverse kinematics of OWI-535 robotic arm
DEBENEC, PRIMOŽ
2015-01-01
The thesis aims to calculate the inverse kinematics of the OWI-535 robotic arm. The calculation of the inverse kinematics determines the joint parameters that produce the desired pose of the end effector. The pose consists of position and orientation; however, we focus only on the latter. Due to the arm's limitations, we have created our own method for calculating the inverse kinematics. We first derived it theoretically, and then transferred the derivation into...
Automatic Flight Controller With Model Inversion
Meyer, George; Smith, G. Allan
1992-01-01
Automatic digital electronic control system based on inverse-model-follower concept being developed for proposed vertical-attitude-takeoff-and-landing airplane. Inverse-model-follower control places inverse mathematical model of dynamics of controlled plant in series with control actuators of controlled plant so response of combination of model and plant to command is unity. System includes feedback to compensate for uncertainties in mathematical model and disturbances imposed from without.
Lectures on the inverse scattering method
International Nuclear Information System (INIS)
Zakharov, V.E.
1983-06-01
In a series of six lectures an elementary introduction to the theory of inverse scattering is given. The first four lectures contain a detailed theory of solitons in the framework of the KdV equation, together with the inverse scattering theory of the one-dimensional Schroedinger equation. In the fifth lecture the dressing method is described, while the sixth lecture gives a brief review of the equations soluble by the inverse scattering method. (author)
Weighted conditional least-squares estimation
International Nuclear Information System (INIS)
Booth, J.G.
1987-01-01
A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, more well-known, estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered
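The core of the weighting scheme can be shown in miniature. The sketch below illustrates inverse-variance weighting for the simplest case, estimating a mean; in the paper's two-stage procedure the conditional variances would themselves be estimated in a first stage, whereas here they are assumed given.

```python
def weighted_mean(values, variances):
    """Inverse-variance weighting: minimise sum_i w_i * (y_i - mu)^2
    with w_i = 1 / sigma_i^2; the minimiser is the weighted mean.
    Observations with larger variance receive proportionally less weight."""
    weights = [1.0 / v for v in variances]
    return sum(w * y for w, y in zip(weights, values)) / sum(weights)
```

With equal variances this reduces to the ordinary mean; with unequal variances the noisier observation is down-weighted, e.g. weighted_mean([0.0, 10.0], [1.0, 4.0]) pulls the estimate toward 0.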
Support Services for Distance Education
Directory of Open Access Journals (Sweden)
Sandra Frieden
1999-01-01
Full Text Available The creation and operation of a distance education support infrastructure requires the collaboration of virtually all administrative departments whose activities deal with students and faculty, and all participating academic departments. Implementation can build on where the institution is and design service-oriented strategies that strengthen institutional support and commitment. Issues to address include planning, faculty issues and concerns, policies and guidelines, approval processes, scheduling, training, publicity, information-line operations, informational materials, orientation and registration processes, class coordination and support, testing, evaluations, receive site management, partnerships, budgets, staffing, library and e-mail support, and different delivery modes (microwave, compressed video, radio, satellite, public television/cable, video tape and online. The process is ongoing and increasingly participative as various groups on campus begin to get involved with distance education activities. The distance education unit must continuously examine and revise its processes and procedures to maintain the academic integrity and service excellence of its programs. It's a daunting prospect to revise the way things have been done for many years, but each department has an opportunity to respond to new ways of serving and reaching students.
Teaching Chemistry via Distance Education
Boschmann, Erwin
2003-06-01
This paper describes a chemistry course taught at Indiana University Purdue University, Indianapolis via television, with a Web version added later. The television format is a delivery technology; the Web is an engagement technology and is preferred since it requires student participation. The distance-laboratory component presented the greatest challenge since laboratories via distance education are not a part of the U.S. academic culture. Appropriate experiments have been developed with the consultation of experts from The Open University in the United Kingdom, Athabasca University in Canada, and Monash University in Australia. The criteria used in the development of experiments are: (1) they must be credible academic experiences equal to or better than those used on campus, (2) they must be easy to perform without supervision, (3) they must be safe, and (4) they must meet all legal requirements. An evaluation of the program using three different approaches is described. The paper concludes that technology-mediated distance education students do as well as on-campus students, but drop out at a higher rate. It is very important to communicate with students frequently, and technology tools ought to be used only if good pedagogy is enhanced by their use.
Bayesian approach to inverse statistical mechanics
Habeck, Michael
2014-05-01
Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
Time-reversal and Bayesian inversion
Debski, Wojciech
2017-04-01
The probabilistic inversion technique is superior to the classical optimization-based approach in all but one aspect: it requires quite exhaustive computations, which prohibits its use in very large inverse problems such as global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is an ongoing effort to make large inverse tasks like those mentioned above manageable with the probabilistic inverse approach. One promising possibility for achieving this goal relies on exploiting an internal symmetry of the seismological modeling problems at hand - time reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.
Inverse Kinematics of a Serial Robot
Directory of Open Access Journals (Sweden)
Amici Cinzia
2016-01-01
Full Text Available This work describes a technique to treat the inverse kinematics of a serial manipulator. The inverse kinematics is obtained through the numerical inversion of the Jacobian matrix, which represents the equation of motion of the manipulator. The inversion is affected by numerical errors and, under certain conditions, due to the numerical nature of the solver, it does not converge to a reasonable solution. Thus a soft computing approach is adopted that mixes different traditional methods to improve algorithmic convergence.
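One common way to stabilise the numerical inversion of the Jacobian is damped least squares (Levenberg-Marquardt), which is not the paper's soft-computing mix but illustrates the underlying iteration. The sketch below is for an assumed planar 2-link arm with unit link lengths; the damping factor lam is an illustrative choice.

```python
import math

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm: joint angles -> end effector."""
    x = l1 * math.cos(q[0]) + l2 * math.cos(q[0] + q[1])
    y = l1 * math.sin(q[0]) + l2 * math.sin(q[0] + q[1])
    return x, y

def jacobian(q, l1=1.0, l2=1.0):
    s1, c1 = math.sin(q[0]), math.cos(q[0])
    s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def ik(target, q=(0.3, 0.3), lam=0.1, iters=200):
    """Damped least-squares Jacobian inversion: dq = J^T (J J^T + lam^2 I)^{-1} e.
    The damping term keeps the 2x2 inversion well behaved near singularities."""
    q = list(q)
    for _ in range(iters):
        x, y = fk(q)
        e = [target[0] - x, target[1] - y]
        J = jacobian(q)
        # A = J J^T + lam^2 I, a 2x2 matrix inverted in closed form
        a = J[0][0] ** 2 + J[0][1] ** 2 + lam ** 2
        b = J[0][0] * J[1][0] + J[0][1] * J[1][1]
        d = J[1][0] ** 2 + J[1][1] ** 2 + lam ** 2
        det = a * d - b * b
        u = ( d * e[0] - b * e[1]) / det
        v = (-b * e[0] + a * e[1]) / det
        q[0] += J[0][0] * u + J[1][0] * v   # dq = J^T [u, v]
        q[1] += J[0][1] * u + J[1][1] * v
    return q
```

Iterating from a non-singular initial guess, the end effector converges to a reachable target such as (1.2, 0.8) well within the iteration budget.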
New method for distance-based close following safety indicator.
Sharizli, A A; Rahizar, R; Karim, M R; Saifizul, A A
2015-01-01
The increase in the number of fatalities caused by road accidents involving heavy vehicles every year has raised the level of concern and awareness on road safety in developing countries like Malaysia. Changes in the vehicle dynamic characteristics such as gross vehicle weight, travel speed, and vehicle classification will affect a heavy vehicle's braking performance and its ability to stop safely in emergency situations. As such, the aim of this study is to establish a more realistic new distance-based safety indicator called the minimum safe distance gap (MSDG), which incorporates vehicle classification (VC), speed, and gross vehicle weight (GVW). Commercial multibody dynamics simulation software was used to generate braking distance data for various heavy vehicle classes under various loads and speeds. By applying nonlinear regression analysis to the simulation results, a mathematical expression of MSDG has been established. The results show that MSDG is dynamically changed according to GVW, VC, and speed. It is envisaged that this new distance-based safety indicator would provide a more realistic depiction of the real traffic situation for safety analysis.
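The paper's MSDG is fitted by nonlinear regression on multibody simulations, and its expression is not reproduced here. The sketch below is a simpler physics-based illustration of the same idea: a gap that grows with speed and gross vehicle weight, built from reaction distance plus braking distance. The deceleration model and all constants are assumptions, not the paper's coefficients.

```python
def min_safe_gap(speed_kmh, gvw_tonnes, reaction_time=1.5):
    """Illustrative distance-based gap (NOT the paper's fitted MSDG formula):
    reaction distance plus braking distance v^2 / (2a), with the achievable
    deceleration assumed to degrade with gross vehicle weight (GVW)."""
    v = speed_kmh / 3.6                        # km/h -> m/s
    decel = max(6.5 - 0.15 * gvw_tonnes, 2.0)  # m/s^2, assumed GVW penalty, floored
    return v * reaction_time + v * v / (2.0 * decel)
```

As in the paper's findings, the resulting gap is dynamically sensitive to both speed and load: a 40-tonne vehicle at 90 km/h needs roughly twice the gap of a 10-tonne vehicle at the same speed under these assumed constants.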
A Novel Parallel Algorithm for Edit Distance Computation
Directory of Open Access Journals (Sweden)
Muhammad Murtaza Yousaf
2018-01-01
Full Text Available The edit distance between two sequences is the minimum number of weighted transformation operations required to transform one string into the other. The weighted transformation operations are insert, remove, and substitute. A dynamic programming solution to find the edit distance exists, but it becomes computationally intensive when the lengths of the strings become very large. This work presents a novel parallel algorithm to solve the edit distance problem of string matching. The algorithm is based on resolving dependencies in the dynamic programming solution of the problem, and it is able to compute each row of the edit distance table in parallel. In this way, it becomes possible to compute the complete table in min(m, n) iterations for strings of sizes m and n, whereas the state-of-the-art parallel algorithm solves the problem in max(m, n) iterations. The proposed algorithm also increases the amount of parallelism in each of its iterations. The algorithm is also capable of exploiting spatial locality in its implementation. Additionally, the algorithm works in a load-balanced way that further improves its performance. The algorithm is implemented for multicore systems having shared memory. An implementation of the algorithm in OpenMP shows linear speedup and better execution time as compared to the state-of-the-art parallel approach. The efficiency of the algorithm is also shown to be better than that of its competitor.
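The sequential dynamic programming baseline that the parallel algorithm accelerates is the classic Wagner-Fischer recurrence. A minimal row-by-row sketch with per-operation weights (the parallelisation itself is not shown):

```python
def edit_distance(s, t, w_ins=1, w_del=1, w_sub=1):
    """Classic dynamic-programming (Wagner-Fischer) weighted edit distance.
    Each table row depends only on the previous row, which is the dependency
    structure the parallel algorithm exploits."""
    m, n = len(s), len(t)
    prev = [j * w_ins for j in range(n + 1)]   # row 0: build t by insertions
    for i in range(1, m + 1):
        cur = [i * w_del] + [0] * n            # column 0: delete prefix of s
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else w_sub
            cur[j] = min(prev[j] + w_del,      # delete s[i-1]
                         cur[j - 1] + w_ins,   # insert t[j-1]
                         prev[j - 1] + cost)   # match / substitute
        prev = cur
    return prev[n]
```

The textbook example: edit_distance("kitten", "sitting") is 3 (two substitutions and one insertion).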
Identifying Isotropic Events using an Improved Regional Moment Tensor Inversion Technique
Energy Technology Data Exchange (ETDEWEB)
Dreger, Douglas S. [Univ. of California, Berkeley, CA (United States); Ford, Sean R. [Univ. of California, Berkeley, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Walter, William R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-12-08
Research was carried out investigating the feasibility of using a regional distance seismic waveform moment tensor inverse procedure to estimate source parameters of nuclear explosions and to use the source inversion results to develop a source-type discrimination capability. The results of the research indicate that it is possible to robustly determine the seismic moment tensor of nuclear explosions, and when compared to natural seismicity in the context of the a Hudson et al. (1989) source-type diagram they are found to separate from populations of earthquakes and underground cavity collapse seismic sources.
Fully probabilistic seismic source inversion – Part 1: Efficient parameterisation
Directory of Open Access Journals (Sweden)
S. C. Stähler
2014-11-01
Full Text Available Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters themselves but also estimates of their uncertainties are of great practical importance. Probabilistic source inversion (Bayesian inference) is well suited to this challenge, provided that the parameter space can be chosen small enough to make Bayesian sampling computationally feasible. We propose a framework for PRobabilistic Inference of Seismic source Mechanisms (PRISM) that parameterises and samples earthquake depth, moment tensor, and source time function efficiently by using information from previous non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible.
DISTANCES TO FOUR SOLAR NEIGHBORHOOD ECLIPSING BINARIES FROM ABSOLUTE FLUXES
International Nuclear Information System (INIS)
Wilson, R. E.; Van Hamme, W.
2009-01-01
Eclipsing binary (EB)-based distances are estimated for four solar neighborhood EBs by means of the Direct Distance Estimation (DDE) algorithm. Results are part of a project to map the solar neighborhood EBs in three dimensions, independently of parallaxes, and provide statistical comparisons between EB and parallax distances. Apart from judgments on adopted temperature and interstellar extinction, DDE's simultaneous light-velocity solutions are essentially objective and work as well for semidetached (SD) and overcontact binaries as for detached systems. Here, we analyze two detached and two SD binaries, all double lined. RS Chamaeleontis is a pre-main-sequence (MS), detached EB with weak δ Scuti variations. WW Aurigae is detached and uncomplicated, except for having high metallicity. RZ Cassiopeiae is SD and has very clear δ Scuti variations and several peculiarities. R Canis Majoris (R CMa) is an apparently simple but historically problematic SD system, also with weak δ Scuti variations. Discussions include solution rules and strategies, weighting, convergence, and third light problems. So far there is no indication of systematic band dependence among the derived distances, so the adopted band-calibration ratios seem consistent. Agreement of EB-based and parallax distances is typically within the overlapped uncertainties, with minor exceptions. We also suggest an explanation for the long-standing undermassiveness problem of R CMa's hotter component, in terms of a fortuitous combination of low metallicity and evolution slightly beyond the MS.
Efficient Algorithms for Analyzing Segmental Duplications, Deletions, and Inversions in Genomes
Kahn, Crystal L.; Mozes, Shay; Raphael, Benjamin J.
Segmental duplications, or low-copy repeats, are common in mammalian genomes. In the human genome, most segmental duplications are mosaics consisting of pieces of multiple other segmental duplications. This complex genomic organization complicates analysis of the evolutionary history of these sequences. Earlier, we introduced a genomic distance, called duplication distance, that computes the most parsimonious way to build a target string by repeatedly copying substrings of a source string. We also showed how to use this distance to describe the formation of segmental duplications according to a two-step model that has been proposed to explain human segmental duplications. Here we describe polynomial-time exact algorithms for several extensions of duplication distance including models that allow certain types of substring deletions and inversions. These extensions will permit more biologically realistic analyses of segmental duplications in genomes.
Morphological inversion of complex diffusion
Nguyen, V. A. T.; Vural, D. C.
2017-09-01
Epidemics, neural cascades, power failures, and many other phenomena can be described by a diffusion process on a network. To identify the causal origins of a spread, it is often necessary to identify the triggering initial node. Here, we define a new morphological operator and use it to detect the origin of a diffusive front, given the final state of a complex network. Our method performs better than algorithms based on distance (closeness) and Jordan centrality. More importantly, our method is applicable regardless of the specifics of the forward model, and therefore can be applied to a wide range of systems such as identifying the patient zero in an epidemic, pinpointing the neuron that triggers a cascade, identifying the original malfunction that causes a catastrophic infrastructure failure, and inferring the ancestral species from which a heterogeneous population evolves.
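The morphological operator of the paper is not reproduced here, but the distance-centrality baseline it is compared against is easy to sketch: among the nodes in the final infected set, pick the one with the smallest total distance to all other infected nodes, computed by BFS within the infected subgraph. Graph representation (an adjacency dict) is an assumption for illustration.

```python
from collections import deque

def bfs_dist(adj, src, allowed):
    """BFS distances from src, restricted to the 'allowed' node set."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in allowed and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def likely_origin(adj, infected):
    """Distance-centrality baseline for source detection: the infected node
    with the smallest total distance to all other infected nodes.
    Unreachable nodes are penalised with the graph size."""
    best, best_score = None, float("inf")
    for u in infected:
        d = bfs_dist(adj, u, infected)
        score = sum(d.get(v, len(adj)) for v in infected)
        if score < best_score:
            best, best_score = u, score
    return best
```

On a fully infected path graph 0-1-2-3-4, this baseline returns the middle node 2, which is also the most plausible seed under symmetric diffusion.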
Inverse problem in radionuclide transport
International Nuclear Information System (INIS)
Yu, C.
1988-01-01
The disposal of radioactive waste must comply with the performance objectives set forth in 10 CFR 61 for low-level waste (LLW) and 10 CFR 60 for high-level waste (HLW). To determine probable compliance, the proposed disposal system can be modeled to predict its performance. One of the difficulties encountered in such a study is modeling the migration of radionuclides through a complex geologic medium for the long term. Although many radionuclide transport models exist in the literature, the accuracy of the model prediction is highly dependent on the model parameters used. The problem of using known parameters in a radionuclide transport model to predict radionuclide concentrations is a direct problem (DP); whereas the reverse of DP, i.e., the parameter identification problem of determining model parameters from known radionuclide concentrations, is called the inverse problem (IP). In this study, a procedure to solve IP is tested, using the regression technique. Several nonlinear regression programs are examined, and the best one is recommended. 13 refs., 1 tab
Inversion based on computational simulations
International Nuclear Information System (INIS)
Hanson, K.M.; Cunningham, G.S.; Saquib, S.S.
1998-01-01
A standard approach to solving inversion problems that involve many parameters uses gradient-based optimization to find the parameters that best match the data. The authors discuss enabling techniques that facilitate application of this approach to large-scale computational simulations, which are the only way to investigate many complex physical phenomena. Such simulations may not seem to lend themselves to calculation of the gradient with respect to numerous parameters. However, adjoint differentiation allows one to efficiently compute the gradient of an objective function with respect to all the variables of a simulation. When combined with advanced gradient-based optimization algorithms, adjoint differentiation permits one to solve very large problems of optimization or parameter estimation. These techniques will be illustrated through the simulation of the time-dependent diffusion of infrared light through tissue, which has been used to perform optical tomography. The techniques discussed have a wide range of applicability to modeling including the optimization of models to achieve a desired design goal
MODEL SELECTION FOR SPECTROPOLARIMETRIC INVERSIONS
International Nuclear Information System (INIS)
Asensio Ramos, A.; Manso Sainz, R.; Martínez González, M. J.; Socas-Navarro, H.; Viticchié, B.; Orozco Suárez, D.
2012-01-01
Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles) but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has, sometimes, led to opposing views of the solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.
Inverse transport theory of photoacoustics
International Nuclear Information System (INIS)
Bal, Guillaume; Jollivet, Alexandre; Jugnon, Vincent
2010-01-01
We consider the reconstruction of optical parameters in a domain of interest from photoacoustic data. Photoacoustic tomography (PAT) radiates high-frequency electromagnetic waves into the domain and measures acoustic signals emitted by the resulting thermal expansion. Acoustic signals are then used to construct the deposited thermal energy map. The latter depends on the constitutive optical parameters in a nontrivial manner. In this paper, we develop and use an inverse transport theory with internal measurements to extract information on the optical coefficients from knowledge of the deposited thermal energy map. We consider the multi-measurement setting in which many electromagnetic radiation patterns are used to probe the domain of interest. By developing an expansion of the measurement operator into singular components, we show that the spatial variations of the intrinsic attenuation and the scattering coefficients may be reconstructed. We also reconstruct coefficients describing anisotropic scattering of photons, such as the anisotropy coefficient g(x) in a Henyey–Greenstein phase function model. Finally, we derive stability estimates for the reconstructions
Inverse Problems and Uncertainty Quantification
Litvinenko, Alexander
2014-01-06
In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. Finally, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
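The linear Bayesian update discussed above can be sketched in closed form. The snippet below is a minimal illustration, not the authors' functional-approximation implementation: it applies the standard linear (Kalman-type) update x_a = x_f + K(y - H x_f) to a two-component state, with all matrices and numbers purely illustrative.

```python
import numpy as np

# Sampling-free linear (Kalman-type) Bayesian update:
#   x_a = x_f + K (y - H x_f),  K = C_f H^T (H C_f H^T + R)^{-1}.
# A toy stand-in for the conditional-expectation update described above.
def linear_bayesian_update(x_f, C_f, H, R, y):
    """Posterior mean/covariance for a linear observation y = H x + noise."""
    S = H @ C_f @ H.T + R                   # innovation covariance
    K = C_f @ H.T @ np.linalg.inv(S)        # gain
    x_a = x_f + K @ (y - H @ x_f)           # updated mean
    C_a = (np.eye(len(x_f)) - K @ H) @ C_f  # updated covariance
    return x_a, C_a

x_f = np.array([1.0, 0.0])                  # prior mean (illustrative)
C_f = np.diag([1.0, 2.0])                   # prior covariance
H = np.array([[1.0, 0.0]])                  # observe the first component only
R = np.array([[0.25]])                      # observation noise variance
x_a, C_a = linear_bayesian_update(x_f, C_f, H, R, np.array([2.0]))
print(x_a[0], C_a[0, 0])                    # mean pulled toward the data, variance reduced
```

Note that the update touches only the observed component here: the unobserved second component keeps its prior mean and variance because the prior covariance is diagonal.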
Inverse Problems and Uncertainty Quantification
Litvinenko, Alexander; Matthies, Hermann G.
2014-01-01
In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. Finally, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
Inverse Free Electron Laser accelerator
International Nuclear Information System (INIS)
Fisher, A.; Gallardo, J.; van Steenbergen, A.; Sandweiss, J.
1992-09-01
The study of the INVERSE FREE ELECTRON LASER, as a potential mode of electron acceleration, is being pursued at Brookhaven National Laboratory. Recent studies have focussed on the development of a low energy, high gradient, multi stage linear accelerator. The elementary ingredients for the IFEL interaction are the 50 MeV Linac e⁻ beam and the 10¹¹ W CO₂ laser beam of BNL's Accelerator Test Facility (ATF), Center for Accelerator Physics (CAP), and a wiggler. The latter element is designed as a fast excitation unit making use of alternating stacks of Vanadium Permendur (VaP) ferromagnetic laminations, periodically interspersed with conductive, nonmagnetic laminations, which act as eddy current induced field reflectors. Wiggler parameters and field distribution data will be presented for a prototype wiggler in a constant period and in a ∼1.5 %/cm tapered period configuration. The CO₂ laser beam will be transported through the IFEL interaction region by means of a low loss, dielectric coated, rectangular waveguide. Short waveguide test sections have been constructed and have been tested using a low power cw CO₂ laser. Preliminary results of guide attenuation and mode selectivity will be given, together with a discussion of the optical issues for the IFEL accelerator. The IFEL design is supported by the development and use of 1D and 3D simulation programs. The results of simulation computations, including also wiggler errors, for a single-module accelerator and for a multi-module accelerator will be presented.
Inverse problems and uncertainty quantification
Litvinenko, Alexander
2013-12-18
In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. Finally, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
Distance Measurement Solves Astrophysical Mysteries
2003-08-01
Location, location, and location. The old real-estate adage about what's really important proved applicable to astrophysics as astronomers used the sharp radio "vision" of the National Science Foundation's Very Long Baseline Array (VLBA) to pinpoint the distance to a pulsar. Their accurate distance measurement then resolved a dispute over the pulsar's birthplace, allowed the astronomers to determine the size of its neutron star and possibly solve a mystery about cosmic rays. "Getting an accurate distance to this pulsar gave us a real bonanza," said Walter Brisken, of the National Radio Astronomy Observatory (NRAO) in Socorro, NM. [Image: the Monogem Ring, in an X-ray image by the ROSAT satellite. Credit: Max-Planck Institute, American Astronomical Society.] The pulsar, called PSR B0656+14, is in the constellation Gemini, and appears to be near the center of a circular supernova remnant that straddles Gemini and its neighboring constellation, Monoceros, and is thus called the Monogem Ring. Since pulsars are superdense, spinning neutron stars left over when a massive star explodes as a supernova, it was logical to assume that the Monogem Ring, the shell of debris from a supernova explosion, was the remnant of the blast that created the pulsar. However, astronomers using indirect methods of determining the distance to the pulsar had concluded that it was nearly 2500 light-years from Earth. On the other hand, the supernova remnant was determined to be only about 1000 light-years from Earth. It seemed unlikely that the two were related, but instead appeared nearby in the sky purely by a chance juxtaposition. Brisken and his colleagues used the VLBA to make precise measurements of the sky position of PSR B0656+14 from 2000 to 2002. They were able to detect the slight offset in the object's apparent position when viewed from opposite sides of Earth's orbit around the Sun. This effect, called parallax, provides a direct measurement of
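The parallax-to-distance conversion behind such a VLBA measurement is simple: d [pc] = 1 / p [arcsec], with 1 pc ≈ 3.2616 light-years. A minimal sketch; the 3.5 mas parallax below is illustrative (chosen to land near the remnant's ~1000 light-year distance), not the published value for PSR B0656+14.

```python
# Distance from annual parallax: d [pc] = 1 / p [arcsec].
# The 3.5 mas input is illustrative, not the published measurement.
PC_IN_LY = 3.2616  # light-years per parsec

def parallax_to_distance_ly(parallax_mas):
    """Convert a parallax in milliarcseconds to a distance in light-years."""
    parallax_arcsec = parallax_mas / 1000.0
    distance_pc = 1.0 / parallax_arcsec
    return distance_pc * PC_IN_LY

print(parallax_to_distance_ly(3.5))  # ~932 light-years for a 3.5 mas parallax
```

The inverse relation explains why larger distances are harder to pin down: the measured angle shrinks as 1/d, so milliarcsecond-level astrometry is needed even for relatively nearby pulsars.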
Third Harmonic Imaging using a Pulse Inversion
DEFF Research Database (Denmark)
Rasmussen, Joachim; Du, Yigang; Jensen, Jørgen Arendt
2011-01-01
The pulse inversion (PI) technique can be utilized to separate and enhance harmonic components of a waveform for tissue harmonic imaging. While most ultrasound systems can perform pulse inversion, only a few image the 3rd harmonic component. PI pulse subtraction can isolate and enhance the 3rd...
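The PI summation step can be sketched with a toy quadratic nonlinearity standing in for nonlinear tissue response (an assumption for illustration, not a propagation model): summing the echoes of a pulse and its inverted copy cancels the odd harmonics and leaves the even ones.

```python
import numpy as np

# Pulse-inversion sketch: r(p) = p + a*p^2 is a toy nonlinear echo model.
# Summing the echoes of a pulse and its inverted copy cancels odd harmonics
# (including the fundamental) and leaves the even harmonics.
fs, f0, N = 100e6, 5e6, 400             # sampling rate, transmit frequency, samples
t = np.arange(N) / fs
p = np.sin(2 * np.pi * f0 * t)          # transmitted pulse

def echo(x):
    return x + 0.3 * x**2               # toy nonlinearity (illustrative coefficient)

summed = echo(p) + echo(-p)             # PI summation

spec = np.abs(np.fft.rfft(summed))
freqs = np.fft.rfftfreq(N, 1 / fs)
fundamental = spec[np.argmin(np.abs(freqs - f0))]
second = spec[np.argmin(np.abs(freqs - 2 * f0))]
print(fundamental < 1e-6 * second)      # fundamental cancels; 2nd harmonic survives
```

With a cubic term added to the echo model, the 3rd harmonic would appear in the subtracted (rather than summed) signal, which is the component the abstract targets.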
Resolution analysis in full waveform inversion
Fichtner, A.; Trampert, J.
2011-01-01
We propose a new method for the quantitative resolution analysis in full seismic waveform inversion that overcomes the limitations of classical synthetic inversions while being computationally more efficient and applicable to any misfit measure. The method rests on (1) the local quadratic
Abel inverse transformation applied to plasma diagnostics
International Nuclear Information System (INIS)
Zhu Shiyao
1987-01-01
Two methods of Abel inverse transformation are applied to two different test profiles. The effects of random errors in the input data, position uncertainty and the number of input data points on the accuracy of the inverse transformation have been studied. The two methods are compared with each other.
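One common discretization of the inverse Abel transform used in plasma diagnostics is onion peeling. The sketch below is a minimal illustration on a uniform test profile (an assumed example, not one of the paper's two methods specifically): the chord-length matrix is built ring by ring and the triangular system is solved for the radial emissivity.

```python
import numpy as np

# Onion-peeling Abel inversion sketch: recover a radial emissivity eps(r)
# from its line-of-sight projection F(y).  Test profile: a uniform disc of
# radius 1, whose analytic projection is F(y) = 2*sqrt(1 - y^2).
N = 50
h = 1.0 / N
r = np.arange(N + 1) * h                     # ring edges r_0 .. r_N
y = r[:-1]                                   # chord offsets y_i = i*h

# Path length of chord y_i through ring j (upper-triangular matrix).
A = np.zeros((N, N))
for i in range(N):
    for j in range(i, N):
        A[i, j] = 2 * (np.sqrt(r[j + 1]**2 - y[i]**2)
                       - np.sqrt(max(r[j]**2 - y[i]**2, 0.0)))

F = 2 * np.sqrt(1 - y**2)                    # analytic projection of eps = 1
eps = np.linalg.solve(A, F)                  # solve for the ring emissivities
print(np.allclose(eps, 1.0))                 # recovers the uniform profile
```

Because the matrix is triangular, noise in the outermost chords propagates inward during the solve, which is why the paper's study of random input errors matters in practice.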
Tietze, Kristina; Ritter, Oliver
2013-10-01
3-D inversion techniques have become a widely used tool in magnetotelluric (MT) data interpretation. However, with real data sets, many of the controlling factors for the outcome of 3-D inversion are little explored, such as alignment of the coordinate system, handling and influence of data errors and model regularization. Here we present 3-D inversion results of 169 MT sites from the central San Andreas Fault in California. Previous extensive 2-D inversion and 3-D forward modelling of the data set revealed significant along-strike variation of the electrical conductivity structure. 3-D inversion can recover these features but only if the inversion parameters are tuned in accordance with the particularities of the data set. Based on synthetic 3-D data we explore the model space and test the impacts of a wide range of inversion settings. The tests showed that the recovery of a pronounced regional 2-D structure in inversion of the complete impedance tensor depends on the coordinate system. As interdependencies between data components are not considered in standard 3-D MT inversion codes, 2-D subsurface structures can vanish if data are not aligned with the regional strike direction. A priori models and data weighting, that is, how strongly individual components of the impedance tensor and/or vertical magnetic field transfer functions dominate the solution, are crucial controls for the outcome of 3-D inversion. If deviations from a prior model are heavily penalized, regularization is prone to result in erroneous and misleading 3-D inversion models, particularly in the presence of strong conductivity contrasts. A `good' overall rms misfit is often meaningless or misleading as a huge range of 3-D inversion results exist, all with similarly `acceptable' misfits but producing significantly differing images of the conductivity structures. Reliable and meaningful 3-D inversion models can only be recovered if data misfit is assessed systematically in the frequency
Forward modeling. Route to electromagnetic inversion
Energy Technology Data Exchange (ETDEWEB)
Groom, R; Walker, P [PetRos EiKon Incorporated, Ontario (Canada)
1996-05-01
Inversion of electromagnetic data is a topical subject in the literature, and much time has been devoted to understanding the convergence properties of various inverse methods. The relative lack of success of electromagnetic inversion techniques is partly attributable to difficulties in the kernel forward modeling software. These difficulties come in two broad classes: (1) completeness and robustness, and (2) convergence, execution time and model simplicity. It was demonstrated that if such problems exist in the forward modeling kernel, inversion can fail to generate reasonable results. It was suggested that classical inversion techniques, which are based on minimizing a norm of the error between data and the simulated data, will only be successful when these difficulties in the forward modeling kernels are properly dealt with. 4 refs., 5 figs.
Stochastic Gabor reflectivity and acoustic impedance inversion
Hariri Naghadeh, Diako; Morley, Christopher Keith; Ferguson, Angus John
2018-02-01
Acoustic impedance (AI), the result of seismic inversion, can be used to delineate subsurface lithology and estimate the petrophysical properties of a reservoir. To convert amplitude to AI, it is vital to remove wavelet effects from the seismic signal to obtain a reflection series, and subsequently to transform those reflections to AI. To carry out seismic inversion correctly it is important not to assume that the seismic signal is stationary; however, all stationary deconvolution methods are designed under that assumption. To increase temporal resolution and interpretation ability, amplitude compensation and phase correction are inevitable. Those are pitfalls of stationary reflectivity inversion. Although stationary reflectivity inversion methods attempt to estimate the reflectivity series, because of their incorrect assumptions the estimates will not be exact, though they may still be useful. Converting those reflection series to AI and merging them with the low-frequency initial model can then help. The aim of this study was to apply non-stationary deconvolution to eliminate time-variant wavelet effects from the signal and to convert the estimated reflection series to absolute AI using a bias from well logs. To carry out this aim, stochastic Gabor inversion in the time domain was used. The Gabor transform derived the signal's time-frequency analysis and estimated wavelet properties from different windows. Dealing with different time windows gave the ability to create a time-variant kernel matrix, which was used to remove the wavelet effects from the seismic data. The result was a reflection series that does not follow the stationary assumption. The subsequent step was to convert those reflections to AI using well information. Synthetic and real data sets were used to show the ability of the introduced method. The results highlight that the time cost of the seismic inversion is negligible compared with general Gabor inversion in the frequency domain. Also
Inverse M-matrices and ultrametric matrices
Dellacherie, Claude; San Martin, Jaime
2014-01-01
The study of M-matrices, their inverses and discrete potential theory is now a well-established part of linear algebra and the theory of Markov chains. The main focus of this monograph is the so-called inverse M-matrix problem, which asks for a characterization of nonnegative matrices whose inverses are M-matrices. We present an answer in terms of discrete potential theory based on the Choquet-Deny Theorem. A distinguished subclass of inverse M-matrices is ultrametric matrices, which are important in applications such as taxonomy. Ultrametricity is revealed to be a relevant concept in linear algebra and discrete potential theory because of its relation with trees in graph theory and mean expected value matrices in probability theory. Remarkable properties of Hadamard functions and products for the class of inverse M-matrices are developed and probabilistic insights are provided throughout the monograph.
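The inverse M-matrix property of ultrametric matrices can be checked numerically on a small example. The matrix below is a strictly ultrametric matrix built from a two-level hierarchy (the values are illustrative, not taken from the monograph); per the theory surveyed there, its inverse should show the M-matrix sign pattern: positive diagonal, nonpositive off-diagonal.

```python
import numpy as np

# A small strictly ultrametric matrix: symmetric, U_ij >= min(U_ik, U_kj)
# for all k, with strictly dominant diagonal.  It encodes a two-level
# hierarchy: {0,1} and {2,3} are "close" pairs, cross-pairs are "far".
U = np.array([[3., 2., 1., 1.],
              [2., 3., 1., 1.],
              [1., 1., 3., 2.],
              [1., 1., 2., 3.]])

Uinv = np.linalg.inv(U)
off_diag = Uinv[~np.eye(4, dtype=bool)]
print(np.all(off_diag <= 1e-12),         # off-diagonal entries nonpositive
      np.all(np.diag(Uinv) > 0),         # diagonal entries positive
      np.all(Uinv.sum(axis=1) > 0))      # row sums positive (diagonal dominance)
```

The third check reflects the stronger statement for strictly ultrametric matrices: the inverse is not just an M-matrix but a row diagonally dominant one.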
Recurrent Neural Network for Computing Outer Inverse.
Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin
2016-05-01
Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
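A simplified continuous-time analogue of such matrix-valued dynamics can be sketched as a gradient flow for the Moore-Penrose inverse (an illustrative stand-in under the zero-initial-state condition the abstract mentions, not the networks defined in the paper): dX/dt = -g * A^T (A X - I), integrated with explicit Euler.

```python
import numpy as np

# Gradient-flow dynamics dX/dt = -g * A^T (A X - I), explicit Euler from a
# zero initial state.  For full-column-rank A the state converges to the
# Moore-Penrose inverse pinv(A) = (A^T A)^{-1} A^T.
A = np.array([[1., 0.],
              [0., 2.],
              [1., 1.]])                 # 3x2, full column rank (illustrative)
X = np.zeros((2, 3))                     # zero initial state
step = 0.1                               # must satisfy step < 2 / lambda_max(A^T A)
for _ in range(2000):
    X -= step * A.T @ (A @ X - np.eye(3))

print(np.allclose(X, np.linalg.pinv(A), atol=1e-6))
```

The step-size condition plays the role of the spectrum condition mentioned in the abstract: if the step exceeds 2 divided by the largest eigenvalue of A^T A, the iteration diverges instead of settling on the generalized inverse.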
Fast wavelet based sparse approximate inverse preconditioner
Energy Technology Data Exchange (ETDEWEB)
Wan, W.L. [Univ. of California, Los Angeles, CA (United States)
1996-12-31
Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that the sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically have piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.
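The "piecewise smooth" observation is easy to verify on the 1-D discrete Laplacian: its inverse is dense, yet along any row the second differences vanish everywhere except at the diagonal kink, which is exactly the structure a wavelet basis compresses well. A minimal sketch (the 1-D operator is an assumed example, not the paper's test problem):

```python
import numpy as np

# The tridiagonal 1-D Laplacian has a dense inverse, but its entries are
# piecewise smooth: along a row, second differences vanish except at the
# diagonal kink.  This is what makes wavelet compression of the inverse work.
n = 32
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
G = np.linalg.inv(L)                     # dense: all n*n entries nonzero

row = G[10]                              # any interior row
second_diff = row[:-2] - 2 * row[1:-1] + row[2:]
print(np.count_nonzero(np.abs(second_diff) > 1e-8))  # -> 1: only the diagonal kink
```

This follows from L G = I: the second difference of G along a row reproduces (minus) a Kronecker delta, so away from the diagonal the inverse is locally linear and its wavelet coefficients at fine scales are negligible.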
Dynamics of an N-vortex state at small distances
International Nuclear Information System (INIS)
Ovchinnikov, Yu. N.
2013-01-01
We investigate the dynamics of a state of N vortices, placed at the initial instant at small distances from some point, close to the “weight center” of vortices. The general solution of the time-dependent Ginzburg-Landau equation for N vortices in a large time interval is found. For N = 2, the position of the “weight center” of two vortices is time independent. For N ≥ 3, the position of the “weight center” weakly depends on time and is located in a range of the order of a³, where a is a characteristic distance of a single vortex from the “weight center.” For N = 3, the time evolution of the N-vortex state is fixed by the position of vortices at any time instant and by the values of two small parameters. For N ≥ 4, a new parameter arises in the problem, connected with relative increases in the number of decay modes.
Dynamics of an N-vortex state at small distances
Energy Technology Data Exchange (ETDEWEB)
Ovchinnikov, Yu. N., E-mail: ovc@itp.ac.ru [Max-Planck Institute for Physics of Complex Systems (Germany)
2013-01-15
We investigate the dynamics of a state of N vortices, placed at the initial instant at small distances from some point, close to the 'weight center' of vortices. The general solution of the time-dependent Ginzburg-Landau equation for N vortices in a large time interval is found. For N = 2, the position of the 'weight center' of two vortices is time independent. For N ≥ 3, the position of the 'weight center' weakly depends on time and is located in a range of the order of a³, where a is a characteristic distance of a single vortex from the 'weight center.' For N = 3, the time evolution of the N-vortex state is fixed by the position of vortices at any time instant and by the values of two small parameters. For N ≥ 4, a new parameter arises in the problem, connected with relative increases in the number of decay modes.
Energy Technology Data Exchange (ETDEWEB)
Dobranszky, G.
2005-12-15
Stratigraphic modeling aims at rebuilding the history of sedimentary basins by simulating the processes of erosion, transport and deposition of sediments using physical models. The objective is to determine the location of the bed-rocks likely to contain the organic matter, the location of the porous rocks that could trap the hydrocarbons during their migration and the location of the impermeable rocks likely to seal the reservoir. The model considered within this thesis is based on a multi-lithological diffusive transport model and applies to large scales of time and space. Due to the complexity of the phenomena and scales considered, none of the model parameters is directly measurable, so it is essential to invert for them. The standard approach, which consists in inverting all the parameters by minimizing a cost function using a gradient method, proved very sensitive to the choice of the parameterization, to the weights given to the various terms of the cost function (bearing on data of a very diverse nature) and to the numerical noise. These observations led us to give up this method and to carry out the inversion step by step by decoupling the parameters. This decoupling is not obtained by fixing the parameters but by making several assumptions on the model, resulting in a range of reduced but relevant models. In this thesis, we show how these models enable us to invert all the parameters in a robust and interactive way. (author)
Bayesian ISOLA: new tool for automated centroid moment tensor inversion
Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John
2017-04-01
Focal mechanisms are important for understanding the seismotectonics of a region, and they serve as a basic input for seismic hazard assessment. Usually, the point source approximation and the moment tensor (MT) are used. We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection in which station components with instrumental disturbances or low signal-to-noise ratios are rejected, and full-waveform inversion in a space-time grid around a provided hypocenter. The method is innovative in the following aspects: (i) The CMT inversion is fully automated; no user interaction is required, although the details of the process can be visually inspected later in the many figures that are automatically plotted. (ii) The automated process includes detection of disturbances based on the MouseTrap code, so disturbed recordings do not affect the inversion. (iii) A data covariance matrix calculated from pre-event noise yields an automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequencies. (iv) A Bayesian approach is used, so not only the best solution is obtained but also the posterior probability density function. (v) A space-time grid search, effectively combined with the least-squares inversion of moment tensor components, speeds up the inversion and yields more accurate results than stochastic methods. The method has been tested on synthetic and observed data. It has been tested by comparison with manually processed moment tensors of all events of M≥3 in the Swiss catalogue over 16 years using data available at the Swiss data center (http://arclink.ethz.ch). The quality of the results of the presented automated process is comparable with careful manual processing of data. The software package programmed in Python has been designed to be as versatile as possible in
Two-dimensional analytic weighting functions for limb scattering
Zawada, D. J.; Bourassa, A. E.; Degenstein, D. A.
2017-10-01
Through the inversion of limb scatter measurements it is possible to obtain vertical profiles of trace species in the atmosphere. Many of these inversion methods require what is often referred to as weighting functions, or derivatives of the radiance with respect to concentrations of trace species in the atmosphere. Several radiative transfer models have implemented analytic methods to calculate weighting functions, alleviating the computational burden of traditional numerical perturbation methods. Here we describe the implementation of analytic two-dimensional weighting functions, where derivatives are calculated relative to atmospheric constituents in a two-dimensional grid of altitude and angle along the line of sight direction, in the SASKTRAN-HR radiative transfer model. Two-dimensional weighting functions are required for two-dimensional inversions of limb scatter measurements. Examples are presented where the analytic two-dimensional weighting functions are calculated with an underlying one-dimensional atmosphere. It is shown that the analytic weighting functions are more accurate than ones calculated with a single scatter approximation, and are orders of magnitude faster than a typical perturbation method. Evidence is presented that weighting functions for stratospheric aerosols calculated under a single scatter approximation may not be suitable for use in retrieval algorithms under solar backscatter conditions.
Mechanisms of Weight Regain following Weight Loss.
Blomain, Erik Scott; Dirhan, Dara Anne; Valentino, Michael Anthony; Kim, Gilbert Won; Waldman, Scott Arthur
2013-01-01
Obesity is a worldwide pandemic and its incidence is on the rise along with associated comorbidities. Currently, there are few effective therapies to combat obesity. The use of lifestyle modification therapy, namely, improvements in diet and exercise, is preferable over bariatric surgery or pharmacotherapy due to surgical risks and issues with drug efficacy and safety. Although they are initially successful in producing weight loss, such lifestyle intervention strategies are generally unsuccessful in achieving long-term weight maintenance, with the vast majority of obese patients regaining their lost weight during follow-up. Recently, various compensatory mechanisms have been elucidated by which the body may oppose new weight loss, and this compensation may result in weight regain back to the obese baseline. The present review summarizes the available evidence on these compensatory mechanisms, with a focus on weight loss-induced changes in energy expenditure, neuroendocrine pathways, nutrient metabolism, and gut physiology. These findings have added a major focus to the field of antiobesity research. In addition to investigating pathways that induce weight loss, the present work also focuses on pathways that may instead prevent weight regain. Such strategies will be necessary for improving long-term weight loss maintenance and outcomes for patients who struggle with obesity.
Quantum chromodynamics at large distances
International Nuclear Information System (INIS)
Arbuzov, B.A.
1987-01-01
Properties of QCD at large distances are considered in the framework of traditional quantum field theory. An investigation of the asymptotic behaviour of the lower Green functions in QCD is the starting point of the approach. Recent works are reviewed which confirm the singular infrared behaviour of the gluon propagator, M²/(k²)², at least under some gauge conditions. A special covariant gauge turns out to be the most suitable for description of the infrared region due to the absence of ghost contributions to the infrared asymptotics of the Green functions. Solutions of the Schwinger-Dyson equation for the quark propagator are obtained in this special gauge and are shown to possess the desirable properties: spontaneous breaking of chiral invariance and nonperturbative character. The infrared asymptotics of the lower Green functions are used for calculation of the vacuum expectation values of the gluon and quark fields. These vacuum expectation values are obtained in good agreement with the corresponding phenomenological values needed in the method of QCD sum rules, which confirms the adequacy of the description of the infrared region. The consideration of the behaviour of QCD at large distances leads to the conclusion that at the contemporary stage of theory development one may consider two possibilities. The first one is the well-known confinement hypothesis; the second one, called incomplete confinement, allows open color to be observable. Possible manifestations of incomplete confinement are discussed.
Pareto-Optimal Multi-objective Inversion of Geophysical Data
Schnaidt, Sebastian; Conway, Dennis; Krieger, Lars; Heinson, Graham
2018-01-01
In the process of modelling geophysical properties, jointly inverting different data sets can greatly improve model results, provided that the data sets are compatible, i.e., sensitive to similar features. Such a joint inversion requires a relationship between the different data sets, which can either be analytic or structural. Classically, the joint problem is expressed as a scalar objective function that combines the misfit functions of multiple data sets and a joint term which accounts for the assumed connection between the data sets. This approach suffers from two major disadvantages: first, it can be difficult to assess the compatibility of the data sets and second, the aggregation of misfit terms introduces a weighting of the data sets. We present a pareto-optimal multi-objective joint inversion approach based on an existing genetic algorithm. The algorithm treats each data set as a separate objective, avoiding forced weighting and generating curves of the trade-off between the different objectives. These curves are analysed by their shape and evolution to evaluate data set compatibility. Furthermore, the statistical analysis of the generated solution population provides valuable estimates of model uncertainty.
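The non-dominated filtering at the heart of such a multi-objective scheme can be sketched in a few lines: a generic Pareto filter over a candidate population with two misfit objectives (this is an illustration of the concept, not the genetic algorithm used in the paper).

```python
import numpy as np

# Pareto filtering sketch: given candidate models with one misfit value per
# data set, keep the non-dominated members -- those for which no other
# candidate is at least as good in every misfit and strictly better in one.
def pareto_front(misfits):
    """Return the indices of non-dominated rows of a (population x objectives) array."""
    misfits = np.asarray(misfits, dtype=float)
    keep = []
    for i, m in enumerate(misfits):
        dominated = np.any(np.all(misfits <= m, axis=1) &
                           np.any(misfits < m, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Illustrative (misfit_dataset_1, misfit_dataset_2) pairs for five candidates.
population = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
print(pareto_front(population))  # -> [0, 1, 2]
```

Candidates 3 and 4 are dominated by candidate 1, which fits both data sets at least as well; the surviving trio traces the trade-off curve whose shape the authors use to judge data set compatibility.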
128Xe Lifetime Measurement Using the Coulex-Plunger Technique in Inverse Kinematics
International Nuclear Information System (INIS)
Konstantinopoulos, T.; Lagoyannis, A.; Harissopulos, S.; Dewald, A.; Rother, W.; Ilie, G.; Jones, P.; Rakhila, P.; Greenlees, P.; Grahn, T.; Julin, R.; Balabanski, D. L.
2008-01-01
The lifetimes of the lowest collective yrast and non-yrast states in 128Xe were measured in a Coulomb excitation experiment using the recoil distance method (RDM) in inverse kinematics. Hereby, the Cologne plunger apparatus was employed together with the JUROGAM spectrometer. Excited states in 128Xe were populated using a 128Xe beam impinging on a natFe target with E(128Xe) ≈ 525 MeV. Recoils were detected by means of an array of solar cells placed at forward angles. Recoil-gated γ-spectra were measured at different plunger distances.
128Xe Lifetime Measurement Using the Coulex-Plunger Technique in Inverse Kinematics
Konstantinopoulos, T.; Lagoyannis, A.; Harissopulos, S.; Dewald, A.; Rother, W.; Ilie, G.; Jones, P.; Rakhila, P.; Greenlees, P.; Grahn, T.; Julin, R.; Balabanski, D. L.
2008-05-01
The lifetimes of the lowest collective yrast and non-yrast states in 128Xe were measured in a Coulomb excitation experiment using the recoil distance method (RDM) in inverse kinematics. Hereby, the Cologne plunger apparatus was employed together with the JUROGAM spectrometer. Excited states in 128Xe were populated using a 128Xe beam impinging on a natFe target with E(128Xe)~525 MeV. Recoils were detected by means of an array of solar cells placed at forward angles. Recoil-gated γ-spectra were measured at different plunger distances.
Haematological status in elite long-distance runners
DEFF Research Database (Denmark)
Suetta, C; Kanstrup, I L; Fogh-Andersen, N
1996-01-01
In 10 female and eight male Danish elite middle- and long-distance runners, haematological status, including blood volume, was examined. Haemoglobin, haematocrit and serum (s)-ferritin concentrations were all within the normal range. In both men and women, blood volume, plasma volume and erythrocyte volume were increased in relation to various reference values. However, the runners had a low body weight due to a reduced fat level, 9.5% (7.3-15.1%) fat for the women, 5.9% (5.0-8.8%) fat (median and ranges) for the men, measured by dual-energy X-ray absorptiometry (DEXA) scanning. When the runners' body weights were 'normalized' to a reference population (25% fat for women, 15% fat for men), only plasma volume remained increased in relation to body weight for the women, whereas all the volumes remained increased for the men. This confirms that endurance training induces a true increased...
Spontaneous Self-Distancing and Adaptive Self-Reflection Across Adolescence.
White, Rachel E; Kross, Ethan; Duckworth, Angela L
2015-07-01
Experiments performed primarily with adults show that self-distancing facilitates adaptive self-reflection. However, no research has investigated whether adolescents spontaneously engage in this process or whether doing so is linked to adaptive outcomes. In this study, 226 African American adolescents, aged 11-20, reflected on an anger-related interpersonal experience. As expected, spontaneous self-distancing during reflection predicted lower levels of emotional reactivity by leading adolescents to reconstrue (rather than recount) their experience and blame their partner less. Moreover, the inverse relation between self-distancing and emotional reactivity strengthened with age. These findings highlight the role that self-distancing plays in fostering adaptive self-reflection in adolescence, and begin to elucidate the role that development plays in enhancing the benefits of engaging in this process. © 2015 The Authors. Child Development © 2015 Society for Research in Child Development, Inc.
International Nuclear Information System (INIS)
McMurray, J. S.; Williams, C. C.
1998-01-01
Scanning Capacitance Microscopy (SCM) is capable of providing two-dimensional information about dopant and carrier concentrations in semiconducting devices. This information can be used to calibrate models used in the simulation of these devices prior to manufacturing and to develop and optimize the manufacturing processes. To provide information for future generations of devices, ultra-high spatial accuracy (<10 nm) will be required. One method that can potentially achieve these goals is inverse modeling of SCM data. Current semiconducting devices have large dopant gradients. As a consequence, the capacitance probe signal represents an average over the local dopant gradient. Conversion of the SCM signal to dopant density has previously been accomplished with a physical model which assumes that no dopant gradient exists in the sampling area of the tip. The conversion of data using this model produces results for abrupt profiles which do not have adequate resolution and accuracy. A new inverse model and iterative method has been developed to obtain higher resolution and accuracy from the same SCM data. This model has been used to simulate the capacitance signal obtained from one- and two-dimensional ideal abrupt profiles. The simulated data have been input to a new iterative conversion algorithm, which has recovered the original profiles in both one and two dimensions. In addition, it is found that the shape of the tip can significantly impact resolution. Currently SCM tips are found to degrade very rapidly. Initially the apex of the tip is approximately hemispherical, but quickly becomes flat. This flat region often has a radius of about the original hemispherical radius. This change in geometry causes the silicon directly under the disk to be sampled with approximately equal weight. In contrast, a hemispherical geometry samples most strongly the silicon centered under the SCM tip, and the sampling falls off quickly with distance from the tip's apex. Simulation...
Full-waveform inversion of surface waves in exploration geophysics
Borisov, D.; Gao, F.; Williamson, P.; Tromp, J.
2017-12-01
Full-waveform inversion (FWI) is a data fitting approach to estimate high-resolution properties of the Earth from seismic data by minimizing the misfit between observed and calculated seismograms. In land seismics, the source on the ground generates high-amplitude surface waves, which generally represent most of the energy recorded by ground sensors. Although surface waves are widely used in global seismology and engineering studies, they are typically treated as noise within the seismic exploration community since they mask deeper reflections from the intervals of exploration interest. This is mainly due to the fact that surface waves decay exponentially with depth and for a typical frequency range (≈[5-50] Hz) sample only the very shallow part of the subsurface, but also because they are much more sensitive to S-wave than P-wave velocities. In this study, we invert surface waves in the hope of using them as additional information for updating the near surface. In a heterogeneous medium, the main challenge of surface wave inversion is associated with their dispersive character, which makes it difficult to define a starting model for conventional FWI which can avoid cycle-skipping. The standard approach to dealing with this is by inverting the dispersion curves in the Fourier (f-k) domain to generate locally 1-D models, typically for the shear wavespeeds only. However this requires that the near-surface zone be more or less horizontally invariant over a sufficient distance for the spatial Fourier transform to be applicable. In regions with significant topography, such as foothills, this is not the case, so we revert to the time-space domain, but aim to minimize the differences of envelopes in the early stages of the inversion to resolve the cycle-skipping issue. Once the model is good enough, we revert to the classic waveform-difference inversion. We first present a few synthetic examples. We show that classical FWI might be trapped in a local minimum even for
Developments in inverse photoemission spectroscopy
International Nuclear Information System (INIS)
Sheils, W.; Leckey, R.C.G.; Riley, J.D.
1996-01-01
In the 1950's and 1960's, Photoemission Spectroscopy (PES) established itself as the major technique for the study of the occupied electronic energy levels of solids. During this period the field divided into two branches: X-ray Photoemission Spectroscopy (XPS) for photon energies greater than ∼1000 eV, and Ultra-violet Photoemission Spectroscopy (UPS) for photon energies below ∼100 eV. By the 1970's XPS and UPS had become mature techniques. Like XPS, Bremsstrahlung Isochromat Spectroscopy (BIS), at X-ray energies, does not have the momentum-resolving ability of UPS that has contributed much to the understanding of the occupied band structures of solids. BIS moved into a new energy regime in 1977 when Dose employed a Geiger-Mueller tube to obtain density of unoccupied states data from a tantalum sample at a photon energy of ∼9.7 eV. At similar energies, the technique has since become known as Inverse Photoemission Spectroscopy (IPS), in acknowledgment of its complementary relationship to UPS and to distinguish it from the higher energy BIS. Drawing on decades of UPS expertise, IPS has quickly moved into areas of interest where UPS has been applied: metals, semiconductors, layer compounds, adsorbates, ferromagnets, and superconductors. At La Trobe University an IPS facility has been constructed. This presentation reports on developments in the experimental and analytical techniques of IPS that have been made there. The results of a study of the unoccupied bulk and surface bands of GaAs are presented
Proven Weight Loss Methods (Fact Sheet)
What can weight loss do for you? Losing weight can improve your health in a number of ways. It can lower ... at www.hormone.org/Spanish .
Identification of polymorphic inversions from genotypes
Directory of Open Access Journals (Sweden)
Cáceres Alejandro
2012-02-01
Full Text Available Abstract Background Polymorphic inversions are a source of genetic variability with a direct impact on recombination frequencies. Given the difficulty of their experimental study, computational methods have been developed to infer their existence in a large number of individuals using genome-wide data of nucleotide variation. Methods based on haplotype tagging of known inversions attempt to classify individuals as having a normal or inverted allele. Other methods that measure differences between linkage disequilibrium attempt to identify regions with inversions but are unable to classify subjects accurately, an essential requirement for association studies. Results We present a novel method to both identify polymorphic inversions from genome-wide genotype data and classify individuals as containing a normal or inverted allele. Our method, a generalization of a published method for haplotype data [1], utilizes linkage between groups of SNPs to partition a set of individuals into normal and inverted subpopulations. We employ a sliding window scan to identify regions likely to have an inversion, and accumulation of evidence from neighboring SNPs is used to accurately determine the inversion status of each subject. Further, our approach detects inversions directly from genotype data, thus increasing its usability to current genome-wide association studies (GWAS). Conclusions We demonstrate the accuracy of our method to detect inversions and classify individuals on principled-simulated genotypes, produced by the evolution of an inversion event within a coalescent model [2]. We applied our method to real genotype data from HapMap Phase III to characterize the inversion status of two known inversions within the regions 17q21 and 8p23 across 1184 individuals. Finally, we scan the full genomes of the European-origin (CEU) and Yoruba (YRI) HapMap samples. We find population-based evidence for 9 out of 15 well-established autosomal inversions, and for 52 regions
A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion
CUI, C.; Hou, W.
2017-12-01
Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time, phase and so on. However, high non-linearity and the absence of low frequency information in seismic data lead to the well-known cycle skipping problem and make inversion easily fall into local minima. In addition, those 3D inversion methods that are based on acoustic approximation ignore elastic effects in the real seismic field, and make inversion harder. As a result, the accuracy of final inversion results relies strongly on the quality of the initial model. In order to improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequencies, is applied. But the absence of very low frequencies […] time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques. There were two levels of parallelism. In the first level, the inversion tasks are decomposed and assigned to each computation node by shot number. In the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI could obtain a much more faithful and accurate result than conventional hybrid-domain FWI. The CPU/GPU heterogeneous parallel computation also improved the computation speed.
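As a rough illustration of the envelope extraction step mentioned above, the envelope of a trace can be computed from the analytic signal via an FFT-based Hilbert transform. This NumPy sketch is an assumed standard implementation, not code from the paper:

```python
import numpy as np

def envelope(trace):
    """Envelope |analytic signal| of a real trace, computed with the
    FFT-based Hilbert transform (even-length traces assumed)."""
    n = len(trace)
    spec = np.fft.fft(trace)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0   # double the positive frequencies
    h[n // 2] = 1.0     # keep DC and Nyquist as-is
    return np.abs(np.fft.ifft(spec * h))

t = np.arange(64)
trace = np.cos(2 * np.pi * 4 * t / 64)  # pure tone: its envelope is flat
env = envelope(trace)
```

An envelope misfit for FWI would then compare `envelope(synthetic)` against `envelope(observed)` in a least-squares sense, which is less oscillatory than the raw waveform difference and hence less prone to cycle skipping.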
The distance-decay function of geographical gravity model: Power law or exponential law?
International Nuclear Information System (INIS)
Chen, Yanguang
2015-01-01
Highlights: •The distance-decay exponent of the gravity model is a fractal dimension. •Entropy maximization accounts for the gravity model based on power law decay. •Allometric scaling relations relate gravity models with spatial interaction models. •The four-parameter gravity models have dual mathematical expressions. •The inverse power law is the most probable distance-decay function. -- Abstract: The distance-decay function of the geographical gravity model is originally an inverse power law, which suggests a scaling process in spatial interaction. However, the distance exponent of the model cannot be reasonably explained with the ideas from Euclidean geometry. This results in a dimension dilemma in geographical analysis. Consequently, a negative exponential function was used to replace the inverse power function to serve as the distance-decay function. But a new puzzle arose: the exponential-based gravity model goes against the first law of geography. This paper is devoted to solving these kinds of problems by mathematical reasoning and empirical analysis. New findings are as follows. First, the distance exponent of the gravity model is demonstrated to be a fractal dimension using the geometric measure relation. Second, the similarities and differences between the gravity models and spatial interaction models are revealed using allometric relations. Third, a four-parameter gravity model possesses a symmetrical expression, and we need dual gravity models to describe spatial flows. The observational data of China's cities and regions (29 elements indicative of 841 data points) in 2010 are employed to verify the theoretical inferences. A conclusion can be reached that the geographical gravity model based on power-law decay is more suitable for analyzing large, complex, and scale-free regional and urban systems. This study lends further support to the suggestion that the underlying rationale of fractal structure is entropy maximization. Moreover
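The contrast between the two decay laws can be made concrete with a toy calculation. In this sketch the parameter values (K, b, gamma) are illustrative only, not taken from the paper:

```python
import numpy as np

def gravity_flow(P_i, P_j, d, b=2.0, K=1.0):
    """Power-law gravity model: T_ij = K * P_i * P_j / d^b.
    The paper interprets the decay exponent b as a fractal dimension."""
    return K * P_i * P_j / d ** b

def gravity_flow_exp(P_i, P_j, d, gamma=0.1, K=1.0):
    """Exponential-decay variant: T_ij = K * P_i * P_j * exp(-gamma * d)."""
    return K * P_i * P_j * np.exp(-gamma * d)

# Scale behaviour: doubling the distance always quarters the power-law flow
# (b = 2, scale-free), whereas the exponential flow drops by a factor that
# depends on the absolute distance (a characteristic scale).
f1 = gravity_flow(100.0, 200.0, 10.0)
f2 = gravity_flow(100.0, 200.0, 20.0)
```

The ratio `f1 / f2` is 4 regardless of the starting distance, which is the scale-free property the abstract associates with fractal urban systems.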
Open and Distance Learning Today. Routledge Studies in Distance Education Series.
Lockwood, Fred, Ed.
This book contains the following papers on open and distance learning today: "Preface" (Daniel); "Big Bang Theory in Distance Education" (Hawkridge); "Practical Agenda for Theorists of Distance Education" (Perraton); "Trends, Directions and Needs: A View from Developing Countries" (Koul); "American…
SQUIDs and inverse problem techniques in nondestructive evaluation of metals
Bruno, A C
2001-01-01
Superconducting Quantum Interference Devices coupled to gradiometers were used to detect flaws in metals. We detected flaws in aluminium samples carrying current, measuring fields at lift-off distances up to one order of magnitude larger than the size of the flaw. Configured as a susceptometer, we detected surface-breaking flaws in steel samples, measuring the distortion of the applied magnetic field. We also used spatial filtering techniques to enhance the visualization of the magnetic field due to the flaws. In order to assess flaw severity, we used the generalized inverse method and singular value decomposition to reconstruct small spherical inclusions in steel. In addition, finite elements and optimization techniques were used to image complex-shaped flaws.
STM contrast inversion of the Fe(1 1 0) surface
Energy Technology Data Exchange (ETDEWEB)
Mándi, Gábor [Budapest University of Technology and Economics, Department of Theoretical Physics, Budafoki út 8, H-1111 Budapest (Hungary); Palotás, Krisztián, E-mail: palotas@phy.bme.hu [Budapest University of Technology and Economics, Department of Theoretical Physics, Budafoki út 8, H-1111 Budapest (Hungary); Condensed Matter Research Group of the Hungarian Academy of Sciences, Budafoki út 8, H-1111 Budapest (Hungary)
2014-06-01
We extend the orbital-dependent electron tunneling model implemented within the three-dimensional (3D) Wentzel–Kramers–Brillouin (WKB) atom-superposition approach to simulate spin-polarized scanning tunneling microscopy (SP-STM) above magnetic surfaces. The tunneling model is based on the electronic structure data of the magnetic tip and surface obtained from first principles. Applying our method, we analyze the orbital contributions to the tunneling current, and study the nature of atomic contrast reversals occurring on constant-current SP-STM images above the Fe(1 1 0) surface. We find an interplay of orbital-dependent tunneling and spin-polarization effects responsible for the contrast inversion, and we discuss its dependence on the bias voltage, on the tip-sample distance, and on the tip orbital composition.
Photoinduced localization and decoherence in inversion symmetric molecules
Energy Technology Data Exchange (ETDEWEB)
Langer, Burkhard, E-mail: langer@gpta.de [Physikalische und Theoretische Chemie, Freie Universitaet Berlin, Takustrasse 3, D-14195 Berlin (Germany); Ueda, Kiyoshi [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, Sendai 980-8577 (Japan); Al-Dossary, Omar M. [Physics Department, College of Science, King Saud University, Riyadh 11451 (Saudi Arabia); Becker, Uwe [Physics Department, College of Science, King Saud University, Riyadh 11451 (Saudi Arabia); Fritz-Haber-Institut der Max-Planck-Gesellschaft, Faradayweg 4-6, D-14195 Berlin (Germany)
2011-04-15
Coherence of particles in the form of matter waves is one of the basic properties of nature which distinguishes classical from quantum behavior. This is a direct consequence of the particle-wave dualism. It is the wave-like nature which gives rise to coherence, whereas particle-like behavior results from decoherence. If two quantum objects are coherently coupled with respect to a particular variable, even over long distances, one speaks of entanglement. The study of entanglement is nowadays one of the most exciting research fields in physics, with enormous impact on the most innovative development in information technology, the development of a future quantum computer. The loss of coherence by decoherence processes may occur due to momentum kicks or thermal heating. In this paper we report on a further decoherence process which occurs in dissociating inversion symmetric molecules, due to the superposition of orthogonal symmetry states in the excitation along with freezing of the electron tunneling process afterwards.
Pulsar high energy emission due to inverse Compton scattering
Energy Technology Data Exchange (ETDEWEB)
Lyutikov, Maxim
2013-06-15
We discuss growing evidence that pulsar high-energy emission is generated via the inverse Compton mechanism. We reproduce the broadband spectrum of the Crab pulsar, from UV to very high energy gamma-rays - nearly ten decades in energy - within the framework of the cyclotron-self-Compton model. Emission is produced by two counter-streaming beams within the outer gaps, at distances above ∼ 20 NS radii. The outward moving beam produces UV-X-ray photons via Doppler-boosted cyclotron emission, and GeV photons by Compton scattering the cyclotron photons produced by the inward going beam. The scattering occurs in the deep Klein-Nishina regime, whereby the IC component provides a direct measurement of the particle distribution within the magnetosphere. The required plasma multiplicity is high, ∼10^6 - 10^7, but is consistent with the average particle flux injected into the pulsar wind nebula.
VIRTUAL LABORATORY IN DISTANCE LEARNING SYSTEM
Directory of Open Access Journals (Sweden)
Е. Kozlovsky
2011-11-01
Full Text Available Questions of the design and the choice of technologies for creating a virtual laboratory for a distance learning system are considered. The distance learning system «Kherson Virtual University» is used as an illustration.
Distance Learning Plan Development: Initiating Organizational Structures
National Research Council Canada - National Science Library
Poole, Clifton
1998-01-01
.... Army distance learning plan managers to examine the DLPs they were directing. The analysis showed that neither army nor civilian distance learning plan managers used formalized requirements for organizational structure development (OSD...
When Do Distance Effects Become Empirically Observable?
DEFF Research Database (Denmark)
Beugelsdijk, Sjoerd; Nell, Phillip C.; Ambos, Björn
2017-01-01
Integrating distance research with the behavioral strategy literature on MNC headquarters-subsidiary relations, this paper explores how the distance between headquarters and subsidiaries relates to value added by the headquarters. We show for 124 manufacturing subsidiaries in Europe that...
Institutional Distance and the Internationalization Process
DEFF Research Database (Denmark)
Pogrebnyakov, Nicolai; Maitland, Carleen
2011-01-01
This paper applies the institutional lens to the internationalization process model. It updates the concept of psychic distance in the model with a recently developed, theoretically grounded construct of institutional distance. Institutions are considered simultaneously at the national and industry...
Convex blind image deconvolution with inverse filtering
Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong
2018-03-01
Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
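For context, the classical regularized inverse filter that such methods build on can be sketched in a few lines. This is a plain Tikhonov-regularized Fourier-domain inverse filter with a known kernel - a baseline only, not the star-norm/total-variation blind model proposed in the paper:

```python
import numpy as np

def tikhonov_inverse_filter(blurred, kernel, lam=1e-6):
    """Tikhonov-regularized inverse filtering in the Fourier domain:
    X = conj(H) * B / (|H|^2 + lam), where H is the kernel spectrum.
    The regularizer lam tames frequencies where |H| is small."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    X = np.conj(H) * B / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# Blur a point source with a small kernel (circular convolution), then invert.
img = np.zeros((8, 8))
img[2, 3] = 1.0
kernel = np.array([[0.7, 0.2], [0.1, 0.0]])  # |H| stays bounded away from 0
H = np.fft.fft2(kernel, s=img.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
restored = tikhonov_inverse_filter(blurred, kernel)
```

The oscillatory structure of such inverse filters, visible when `lam` is small and `|H|` has near-zeros, is what motivates the paper's star-norm regularization of the inverse filter itself.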
Photonic Design: From Fundamental Solar Cell Physics to Computational Inverse Design
Miller, Owen Dennis
2012-01-01
Photonic innovation is becoming ever more important in the modern world. Optical systems are dominating shorter and shorter communications distances, LEDs are rapidly emerging for a variety of applications, and solar cells show potential to be a mainstream technology in the energy space. The need for novel, energy-efficient photonic and optoelectronic devices will only increase. This work unites fundamental physics and a novel computational inverse design approach towards such innovation....
Support minimized inversion of acoustic and elastic wave scattering
International Nuclear Information System (INIS)
Safaeinili, A.
1994-01-01
This report discusses the following topics on support minimized inversion of acoustic and elastic wave scattering: Minimum support inversion; forward modelling of elastodynamic wave scattering; minimum support linearized acoustic inversion; support minimized nonlinear acoustic inversion without absolute phase; and support minimized nonlinear elastic inversion
Anxiety and Resistance in Distance Learning
Nazime Tuncay; Huseyin Uzunboylu
2010-01-01
The purpose of this study was to investigate students' anxiety and resistance towards learning through distance education. Specifically, the study sought answers to the following questions: -What are the reasons for students not choosing distance learning courses? -Which symptoms of anxiety, if any, do distance learners exhibit towards distance learning? -Does gender have any significant relationship with distance learners' perception of factors that affect their anxiety and resistance? A total o...
Distance majorization and its applications.
Chi, Eric C; Zhou, Hua; Lange, Kenneth
2014-08-01
The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
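A minimal sketch of the penalty-plus-majorization idea follows, for the special case of projecting a point onto an intersection of convex sets. It omits the quasi-Newton acceleration described in the paper, and the sets, penalty schedule, and iteration counts are illustrative assumptions:

```python
import numpy as np

def project_ball(x, r=1.0):
    """Euclidean projection onto the ball of radius r."""
    n = np.linalg.norm(x)
    return x if n <= r else x * (r / n)

def project_orthant(x):
    """Euclidean projection onto the nonnegative orthant."""
    return np.maximum(x, 0.0)

def project_intersection(y, projections, mu=1.0, iters=500, growth=1.05):
    """Penalty method + majorization-minimization: each squared distance
    dist(x, C_i)^2 is majorized by ||x - P_i(x_k)||^2, giving a closed-form
    update; mu is slowly increased so iterates are driven into the
    intersection while staying anchored to y."""
    x = y.copy()
    m = len(projections)
    for _ in range(iters):
        anchors = sum(P(x) for P in projections)
        x = (y + mu * anchors) / (1.0 + mu * m)
        mu *= growth
    return x

y = np.array([2.0, -1.0])
x = project_intersection(y, [project_ball, project_orthant])
# the nearest point of (unit ball ∩ nonnegative orthant) to (2, -1) is (1, 0)
```

Each inner update is a simple weighted average of the anchor point and the individual projections, which is what makes the scheme scale to high dimensions.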
Elasticity of Long Distance Travelling
DEFF Research Database (Denmark)
Knudsen, Mette Aagaard
2011-01-01
With data from the Danish expenditure survey for 12 years, 1996 through 2007, this study analyses household expenditures for long distance travelling. Household expenditures are examined at two levels of aggregation, with the general expenditures on transportation and leisure relative to five other aggregated commodities at the highest level, and the specific expenditures on plane tickets and travel packages at the lowest level. The Almost Ideal Demand System is applied to determine the relationship between expenditures on transportation and leisure and all other purchased non-durables within ... packages have higher income elasticity of demand than plane tickets, but also higher than transportation and leisure in general. The findings on price sensitivity are less precisely estimated, but the model results indicate that travel packages are far more price elastic than plane tickets, which...
Classification With Truncated Distance Kernel.
Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas
2018-05-01
This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
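Taking the TL1 kernel to be K(x, y) = max(ρ − ‖x − y‖₁, 0), a drop-in kernel evaluation might look like the sketch below (an assumed form consistent with the abstract, not the authors' code):

```python
import numpy as np

def tl1_kernel(X, Y, rho):
    """Truncated L1-distance (TL1) kernel: K(x, y) = max(rho - ||x - y||_1, 0).
    Piecewise linear and compactly supported; note it is not positive
    semidefinite in general, as the abstract points out."""
    d = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=2)  # pairwise L1 distances
    return np.maximum(rho - d, 0.0)

X = np.array([[0.0, 0.0], [1.0, 1.0]])
K = tl1_kernel(X, X, rho=1.5)
# diagonal entries equal rho; points farther apart than rho get kernel value 0
```

Because the kernel value is exactly zero beyond the truncation distance ρ, the resulting Gram matrix is sparse for spread-out data, and the classifier is linear within each subregion where the same set of kernels is active.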
From Moon-fall to motions under inverse square laws
International Nuclear Information System (INIS)
Foong, S K
2008-01-01
The motion of two bodies, along a straight line, under the inverse square law of gravity is considered in detail, progressing from simpler cases to more complex ones: (a) one body fixed and one free, (b) both bodies free and of identical mass, (c) both bodies free and of different masses and (d) the inclusion of electrostatic forces for both bodies free and of different masses. The equations of motion (EOM) are derived starting from Newton's second law or from conservation of energy. They are then reduced to dimensionless EOM using appropriate scales for time and distance. Solutions of the dimensionless EOM as well as the original EOM are given. The time interval for the bodies to fall is expressed as a function of the distance fallen. Formulae for the inverse were obtained. The coalescence times for the different cases are (a) t = (π/(2√2))√(L^3/(G m_1)), where L is the initial separation of the two bodies and m_1 is the mass of the fixed body, (b) and (c) t = (π/(2√2))√(L^3/(G m_T)), where m_T is the total mass of the two bodies, and (d) t = (π/(2√2))√(L^3/[G m_T (1−Λ)]), where Λ = (k q_1 q_2)/(G m_1 m_2) is a measure of the ratio of the electrostatic force to gravity. The last formula may also be used when Λ ≥ 1, with the interpretation that there is no collision if t is infinite or imaginary. We also discuss this motion along the straight line as a special case of the general elliptic motion of two bodies. I believe that this paper will be useful to university tutors as well as undergraduate and even graduate students who prefer to consider the special case before the general case, and their relationship
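The coalescence-time formulae translate directly into code. The Moon-fall example below assumes case (a) with the Earth held fixed; the physical constants are standard values:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def coalescence_time(L, m_total, Lambda=0.0):
    """Time for two bodies released at rest at separation L to collide
    under inverse-square gravity, t = (pi/(2*sqrt(2))) * sqrt(L^3 / (G*m*(1-Lambda))).
    Lambda is the electrostatic-to-gravity ratio of case (d); if Lambda >= 1
    the net force is non-attractive and no collision occurs."""
    if Lambda >= 1.0:
        return math.inf
    return (math.pi / (2.0 * math.sqrt(2.0))) * math.sqrt(
        L**3 / (G * m_total * (1.0 - Lambda)))

# Moon-fall: Moon released at rest at its mean distance from a fixed Earth
t = coalescence_time(3.844e8, 5.972e24)
print(t / 86400.0)  # fall time in days, roughly 4.8
```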
Theim, Kelly R; Brown, Joshua D; Juarascio, Adrienne S; Malcolm, Robert R; O'Neil, Patrick M
2013-11-01
Greater self-regulatory behavior usage is associated with greater weight loss within behavioral weight loss treatments. Hedonic hunger (i.e., susceptibility to environmental food cues) may impede successful behavior change and weight loss. Adult men and women (N = 111, body mass index M ± SD = 35.89 ± 6.97 kg/m(2)) were assessed before and after a 15-week lifestyle change weight loss program with a partial meal-replacement diet. From pre- to post-treatment, reported weight control behavior usage improved and hedonic hunger decreased, and these changes were inversely related. Individuals with higher hedonic hunger scores at baseline showed the greatest weight loss. Similarly, participants with lower baseline use of weight control behaviors lost more weight, and increased weight control behavior usage was associated with greater weight loss-particularly among individuals with low baseline hedonic hunger. Further study is warranted regarding the significance of hedonic hunger in weight loss treatments.
Mitchell, Steven; Cockcroft, Anne; Andersson, Neil
2011-12-21
Maps can portray trends, patterns, and spatial differences that might be overlooked in tabular data and are now widely used in health research. Little has been reported about the process of using maps to communicate epidemiological findings. Population weighted raster maps show colour changes over the study area. Similar to the rasters of barometric pressure in a weather map, data are the health occurrence--a peak on the map represents a higher value of the indicator in question. The population relevance of each sentinel site, as determined in the stratified last stage random sample, combines with geography (inverse-distance weighting) to provide a population-weighted extension of each colour. This transforms the map to show population space rather than simply geographic space. Maps allowed discussion of strategies to reduce violence against women in a context of political sensitivity about quoting summary indicator figures. Time-series maps showed planners how experiences of health services had deteriorated despite a reform programme; where in a country HIV risk behaviours were improving; and how knowledge of an economic development programme quickly fell off across a region. Change maps highlighted where indicators were improving and where they were deteriorating. Maps of potential impact of interventions, based on multivariate modelling, displayed how partial and full implementation of programmes could improve outcomes across a country. Scale depends on context. To support local planning, district maps or local government authority maps of health indicators were more useful than national maps; but multinational maps of outcomes were more useful for regional institutions. Mapping was useful to illustrate in which districts enrolment in religious schools--a rare occurrence--was more prevalent. Population weighted raster maps can present social audit findings in an accessible and compelling way, increasing the use of evidence by planners with limited numeracy
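The inverse-distance weighting underlying these raster maps can be sketched as follows. The population weighting of sentinel sites is omitted here, and the sites, values, and exponent are made-up illustrative inputs:

```python
import numpy as np

def idw(sites, values, grid_points, power=1.0, eps=1e-12):
    """Inverse-distance weighting: each grid point receives a weighted
    average of site values, with weights ~ 1 / distance^power
    (power=1 gives inverse distance, power=2 the classic inverse square)."""
    d = np.linalg.norm(grid_points[:, None, :] - sites[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # eps guards against zero distance
    w /= w.sum(axis=1, keepdims=True)     # normalize weights per grid point
    return w @ values

sites = np.array([[0.0, 0.0], [10.0, 0.0]])  # two sentinel sites
values = np.array([0.0, 100.0])              # indicator values at the sites
grid = np.array([[5.0, 0.0], [1.0, 0.0]])    # two raster cells to estimate
est = idw(sites, values, grid)
# the midpoint averages the two sites; a cell near site 0 stays close to 0
```

Extending each site's colour by a weight that multiplies this distance term with the site's sampled population share is what turns the geographic raster into the "population space" map the abstract describes.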
Analysing designed experiments in distance sampling
Stephen T. Buckland; Robin E. Russell; Brett G. Dickson; Victoria A. Saab; Donal N. Gorman; William M. Block
2009-01-01
Distance sampling is a survey technique for estimating the abundance or density of wild animal populations. Detection probabilities of animals inherently differ by species, age class, habitats, or sex. By incorporating the change in an observer's ability to detect a particular class of animals as a function of distance, distance sampling leads to density estimates...
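A common concrete choice for the detection function in distance sampling is the half-normal g(d) = exp(−d²/2σ²). The sketch below (an assumption for illustration, not code from this paper) computes the effective strip half-width by numerical integration:

```python
import math

def halfnormal_detection(d, sigma):
    """Half-normal detection function g(d) = exp(-d^2 / (2 sigma^2)):
    certain detection at distance 0, decaying with distance."""
    return math.exp(-d * d / (2.0 * sigma * sigma))

def effective_strip_halfwidth(sigma, w, n=10000):
    """Midpoint-rule integral of g over [0, w]: the effective strip
    half-width mu; mu / w is the average detection probability
    within the truncation distance w."""
    h = w / n
    return h * sum(halfnormal_detection((i + 0.5) * h, sigma) for i in range(n))

mu = effective_strip_halfwidth(sigma=20.0, w=100.0)
p_avg = mu / 100.0  # average detectability out to the truncation distance
```

Density is then estimated as the observed count divided by the effectively surveyed area, with σ fitted separately per species, age class, habitat, or sex to capture the differing detectabilities the abstract mentions.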
Distance learning: its advantages and disadvantages
KEGEYAN SVETLANA ERIHOVNA
2016-01-01
Distance learning has become popular in higher institutions because of its flexibility and availability to learners and teachers at anytime, regardless of geographic location. With so many definitions and phases of distance education, this paper only focuses on the delivery mode of distance education (the use of information technology), background, and its disadvantages and advantages for today’s learners.
ETUDE - European Trade Union Distance Education.
Creanor, Linda; Walker, Steve
2000-01-01
Describes transnational distance learning activities among European trade union educators carried out as part of the European Trade Union Distance Education (ETUDE) project, supported by the European Commission. Highlights include the context of international trade union distance education; tutor training course; tutors' experiences; and…
Continuity Properties of Distances for Markov Processes
DEFF Research Database (Denmark)
Jaeger, Manfred; Mao, Hua; Larsen, Kim Guldstrand
2014-01-01
In this paper we investigate distance functions on finite state Markov processes that measure the behavioural similarity of non-bisimilar processes. We consider both probabilistic bisimilarity metrics, and trace-based distances derived from standard Lp and Kullback-Leibler distances. Two desirable...
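The Kullback-Leibler distance mentioned above is, for finite distributions (e.g., over traces of bounded length), the standard divergence sketched below. This is only the elementary building block, not the paper's bisimilarity metric.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for two discrete
    distributions given as equal-length probability lists.
    Terms with p_i = 0 contribute nothing by convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)
```

Note that D(p || q) is asymmetric and is infinite when q assigns zero probability where p does not, which is one reason trace-based process distances built on it require care.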
Inverse bifurcation analysis: application to simple gene systems
Directory of Open Access Journals (Sweden)
Schuster Peter
2006-07-01
Background Bifurcation analysis has proven to be a powerful method for understanding the qualitative behavior of gene regulatory networks. In addition to the more traditional forward problem of determining the mapping from parameter space to the space of model behavior, the inverse problem of determining model parameters to result in certain desired properties of the bifurcation diagram provides an attractive methodology for addressing important biological problems. These include understanding how the robustness of qualitative behavior arises from system design as well as providing a way to engineer biological networks with qualitative properties. Results We demonstrate that certain inverse bifurcation problems of biological interest may be cast as optimization problems involving minimal distances of reference parameter sets to bifurcation manifolds. This formulation allows for an iterative solution procedure based on performing a sequence of eigen-system computations and one-parameter continuations of solutions, the latter being a standard capability in existing numerical bifurcation software. As applications of the proposed method, we show that the problem of maximizing regions of a given qualitative behavior as well as the reverse engineering of bistable gene switches can be modelled and efficiently solved.
Inverse Funnel Effect of Excitons in Strained Black Phosphorus
Directory of Open Access Journals (Sweden)
Pablo San-Jose
2016-09-01
We study the effects of strain on the properties and dynamics of Wannier excitons in monolayer (phosphorene) and few-layer black phosphorus (BP), a promising two-dimensional material for optoelectronic applications due to its high mobility, mechanical strength, and strain-tunable direct band gap. We compare the results to the case of molybdenum disulphide (MoS_{2}) monolayers. We find that the so-called funnel effect, i.e., the possibility of controlling exciton motion by means of inhomogeneous strains, is much stronger in few-layer BP than in MoS_{2} monolayers and, crucially, is of opposite sign. Instead of excitons accumulating isotropically around regions of high tensile strain like in MoS_{2}, excitons in BP are pushed away from said regions. This inverse funnel effect is moreover highly anisotropic, with much larger funnel distances along the armchair crystallographic direction, leading to a directional focusing of exciton flow. A strong inverse funnel effect could enable simpler designs of funnel solar cells and offer new possibilities for the manipulation and harvesting of light.
3rd Annual Workshop on Inverse Problem
2015-01-01
This proceeding volume is based on papers presented on the Third Annual Workshop on Inverse Problems which was organized by the Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, and took place in May 2013 in Stockholm. The purpose of this workshop was to present new analytical developments and numerical techniques for solution of inverse problems for a wide range of applications in acoustics, electromagnetics, optical fibers, medical imaging, geophysics, etc. The contributions in this volume reflect these themes and will be beneficial to researchers who are working in the area of applied inverse problems.
Inverse problems in the Bayesian framework
International Nuclear Information System (INIS)
Calvetti, Daniela; Somersalo, Erkki; Kaipio, Jari P
2014-01-01
The history of Bayesian methods dates back to the original works of Reverend Thomas Bayes and Pierre-Simon Laplace: the former laid down some of the basic principles on inverse probability in his classic article ‘An essay towards solving a problem in the doctrine of chances’ that was read posthumously in the Royal Society in 1763. Laplace, on the other hand, in his ‘Memoirs on inverse probability’ of 1774 developed the idea of updating beliefs and wrote down the celebrated Bayes’ formula in the form we know today. Although not identified yet as a framework for investigating inverse problems, Laplace used the formalism very much in the spirit it is used today in the context of inverse problems, e.g., in his study of the distribution of comets. With the evolution of computational tools, Bayesian methods have become increasingly popular in all fields of human knowledge in which conclusions need to be drawn based on incomplete and noisy data. Needless to say, inverse problems, almost by definition, fall into this category. Systematic work for developing a Bayesian inverse problem framework can arguably be traced back to the 1980s, (the original first edition being published by Elsevier in 1987), although articles on Bayesian methodology applied to inverse problems, in particular in geophysics, had appeared much earlier. Today, as testified by the articles in this special issue, the Bayesian methodology as a framework for considering inverse problems has gained a lot of popularity, and it has integrated very successfully with many traditional inverse problems ideas and techniques, providing novel ways to interpret and implement traditional procedures in numerical analysis, computational statistics, signal analysis and data assimilation. The range of applications where the Bayesian framework has been fundamental goes from geophysics, engineering and imaging to astronomy, life sciences and economy, and continues to grow. There is no question that Bayesian
NLSE: Parameter-Based Inversion Algorithm
Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.
Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
Inverse problems for the Boussinesq system
International Nuclear Information System (INIS)
Fan, Jishan; Jiang, Yu; Nakamura, Gen
2009-01-01
We obtain two results on inverse problems for a 2D Boussinesq system. One is that we prove the Lipschitz stability for the inverse source problem of identifying a time-independent external force in the system with observation data in an arbitrary sub-domain over a time interval of the velocity and the data of velocity and temperature at a fixed positive time t_0 > 0 over the whole spatial domain. The other one is that we prove a conditional stability estimate for an inverse problem of identifying the two initial conditions with a single observation on a sub-domain.
Population inversion in a stationary recombining plasma
International Nuclear Information System (INIS)
Otsuka, M.
1980-01-01
Population inversion, which occurs in a recombining plasma when a stationary He plasma is brought into contact with a neutral gas, is examined. With hydrogen as a contact gas, noticeable inversion between low-lying levels of H has been found. The overpopulation density is of the order of 10⁸ cm⁻³, which is much higher than that (≈10⁵ cm⁻³) obtained previously with He as a contact gas. Relations between these experimental results and the conditions for population inversion are discussed with the CR model.
Inverse Raman effect: applications and detection techniques
International Nuclear Information System (INIS)
Hughes, L.J. Jr.
1980-08-01
The processes underlying the inverse Raman effect are qualitatively described by comparing it to the more familiar phenomena of conventional and stimulated Raman scattering. An expression is derived for the inverse Raman absorption coefficient, and its relationship to the stimulated Raman gain is obtained. The power requirements of the two fields are examined qualitatively and quantitatively. The assumption that the inverse Raman absorption coefficient is constant over the interaction length is examined. Advantages of the technique are discussed and a brief survey of reported studies is presented.
Multiparameter Optimization for Electromagnetic Inversion Problem
Directory of Open Access Journals (Sweden)
M. Elkattan
2017-10-01
Electromagnetic (EM) methods have been extensively used in geophysical investigations such as mineral and hydrocarbon exploration as well as in geological mapping and structural studies. In this paper, we developed an inversion methodology for electromagnetic data to determine physical parameters of a set of horizontal layers. We constructed the forward model using the transmission line method. In the inversion part, we solved a multiparameter optimization problem where the parameters are the conductivity, dielectric constant, and permeability of each layer. The optimization problem was solved by a simulated annealing approach. The inversion methodology was tested using a set of models representing common geological formations.
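The simulated annealing step can be sketched generically. The function below is a minimal illustration of the accept/cool loop, not the authors' implementation; the misfit function, step size, and cooling schedule are all assumed for this example.

```python
import math
import random

def simulated_annealing(misfit, x0, step=0.1, t0=1.0, cooling=0.995,
                        n_iter=5000, seed=1):
    """Minimise a misfit function over a parameter vector.
    Uphill moves are accepted with probability exp(-delta / T),
    and the temperature T is reduced geometrically each iteration."""
    rng = random.Random(seed)
    x = list(x0)
    fx = misfit(x)
    best, fbest = list(x), fx
    t = t0
    for _ in range(n_iter):
        cand = [xi + rng.gauss(0.0, step) for xi in x]  # random perturbation
        fc = misfit(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest
```

In the EM-inversion setting, the parameter vector would hold the conductivity, dielectric constant, and permeability of each layer, and the misfit would compare forward-modelled responses to the measured data.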
Generalized Selection Weighted Vector Filters
Directory of Open Access Journals (Sweden)
Rastislav Lukac
2004-09-01
This paper introduces a class of nonlinear multichannel filters capable of removing impulsive noise in color images. The proposed generalized selection weighted vector filter class constitutes a powerful filtering framework for multichannel signal processing. Previously defined multichannel filters such as the vector median filter, basic vector directional filter, directional-distance filter, weighted vector median filters, and weighted vector directional filters are treated from a global viewpoint using the proposed framework. Robust order-statistic concepts and an increased degree of freedom in filter design make the proposed method attractive for a variety of applications. The introduced multichannel sigmoidal adaptation of the filter parameters and its modifications allow the filter parameters to be accommodated to varying signal and noise statistics. Simulation studies reported in this paper indicate that the proposed filter class is computationally attractive, yields excellent performance, and is able to preserve fine details and color information while efficiently suppressing impulsive noise. This paper is an extended version of the paper by Lukac et al. presented at the 2003 IEEE-EURASIP Workshop on Nonlinear Signal and Image Processing (NSIP '03) in Grado, Italy.
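The weighted vector median filter named above, one special case of the framework, has a simple selection rule: output the window sample minimising the weighted sum of Euclidean distances to all other samples. The sketch below shows only that standard rule, with an illustrative weight vector; the paper's generalized class is richer.

```python
def weighted_vector_median(window, weights):
    """Weighted vector median filter output for one window position:
    the sample minimising the weighted sum of Euclidean (L2)
    distances to every sample in the window."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    best, best_cost = None, float("inf")
    for x in window:               # selection filter: output is a window sample
        cost = sum(w * dist(x, y) for w, y in zip(weights, window))
        if cost < best_cost:
            best, best_cost = x, cost
    return best
```

Because the output is always one of the input vectors, no new colors are introduced, which is why this family preserves color information while rejecting impulses.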
BOOK REVIEW: Inverse Problems. Activities for Undergraduates
Yamamoto, Masahiro
2003-06-01
This book is a valuable introduction to inverse problems. In particular, from the educational point of view, the author addresses the questions of what constitutes an inverse problem and how and why we should study them. Such an approach has been eagerly awaited for a long time. Professor Groetsch, of the University of Cincinnati, is a world-renowned specialist in inverse problems, in particular the theory of regularization. Moreover, he has made a remarkable contribution to educational activities in the field of inverse problems, which was the subject of his previous book (Groetsch C W 1993 Inverse Problems in the Mathematical Sciences (Braunschweig: Vieweg)). For this reason, he is one of the most qualified to write an introductory book on inverse problems. Without question, inverse problems are important, necessary and appear in various aspects. So it is crucial to introduce students to exercises in inverse problems. However, there are not many introductory books which are directly accessible by students in the first two undergraduate years. As a consequence, students often encounter diverse concrete inverse problems before becoming aware of their general principles. The main purpose of this book is to present activities to allow first-year undergraduates to learn inverse theory. To my knowledge, this book is a rare attempt to do this and, in my opinion, a great success. The author emphasizes that it is very important to teach inverse theory in the early years. He writes: 'If students consider only the direct problem, they are not looking at the problem from all sides .... The habit of always looking at problems from the direct point of view is intellectually limiting ...' (page 21). The book is very carefully organized so that teachers will be able to use it as a textbook. After an introduction in chapter 1, successive chapters deal with inverse problems in precalculus, calculus, differential equations and linear algebra. In order to let one gain some insight
Byun, Jinyoung; Han, Younghun; Gorlov, Ivan P; Busam, Jonathan A; Seldin, Michael F; Amos, Christopher I
2017-10-16
Accurate inference of genetic ancestry is of fundamental interest to many biomedical, forensic, and anthropological research areas. Genetic ancestry memberships may relate to genetic disease risks. In a genome association study, failing to account for differences in genetic ancestry between cases and controls may also lead to false-positive results. Although a number of strategies for inferring and taking into account the confounding effects of genetic ancestry are available, applying them to large studies (tens of thousands of samples) is challenging. The goal of this study is to develop an approach for inferring genetic ancestry of samples with unknown ancestry among closely related populations and to provide accurate estimates of ancestry for application to large-scale studies. In this study we developed a novel distance-based approach, Ancestry Inference using Principal component analysis and Spatial analysis (AIPS), that incorporates an Inverse Distance Weighted (IDW) interpolation method from spatial analysis to assign individuals to population memberships. We demonstrate the benefits of AIPS in analyzing population substructure, specifically related to the four most commonly used tools EIGENSTRAT, STRUCTURE, fastSTRUCTURE, and ADMIXTURE, using genotype data from various intra-European panels and European-Americans. While the aforementioned commonly used tools performed poorly in inferring ancestry from a large number of subpopulations, AIPS accurately distinguished variations between and within subpopulations. Our results show that AIPS can be applied to large-scale data sets to discriminate the modest variability among intra-continental populations as well as for characterizing inter-continental variation. The method we developed will protect against spurious associations when mapping the genetic basis of a disease. Our approach is a more accurate and computationally efficient method for inferring genetic ancestry in large-scale genetic studies.
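The IDW interpolation used by AIPS (and by the groundwater and rainfall studies above) has a simple core: a query point's value is a weighted average of known values, with weights 1/dᵖ. The sketch below shows ordinary IDW; the function name and the default power p = 2 are conventional choices, not taken from any of these papers.

```python
import numpy as np

def idw_interpolate(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Ordinary inverse distance weighting: each query point gets a
    weighted average of the known values, with weights 1 / d**power.
    A query coinciding with a sample returns that sample's value."""
    xy_known = np.asarray(xy_known, float)
    values = np.asarray(values, float)
    xy_query = np.atleast_2d(np.asarray(xy_query, float))
    out = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < eps:                    # exact hit on a known sample
            out[i] = values[d.argmin()]
        else:
            w = 1.0 / d ** power
            out[i] = np.dot(w, values) / w.sum()
    return out
```

Larger powers make the interpolation more local (nearby samples dominate); p = 2 is the common default in GIS software.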
DEFF Research Database (Denmark)
Lisonek, Petr
1996-01-01
A two-distance set in E^d is a point set X in the d-dimensional Euclidean space such that the distances between distinct points in X assume only two different non-zero values. Based on results from classical distance geometry, we develop an algorithm to classify, for a given dimension, all maximal (largest possible) two-distance sets in E^d. Using this algorithm we have completed the full classification for all dimensions less than or equal to 7, and we have found one set in E^8 whose maximality follows from Blokhuis' upper bound on sizes of s-distance sets. While in the dimensions less than or equal to 6...
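The defining property is easy to test directly. The helper below only verifies that a given point set realises exactly two distinct pairwise distances; it is an illustration of the definition, not the classification algorithm of the paper.

```python
from itertools import combinations

def is_two_distance_set(points, tol=1e-9):
    """Return True iff the pairwise distances between distinct points
    take exactly two different non-zero values (up to tolerance)."""
    dists = []
    for a, b in combinations(points, 2):
        d = sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
        if not any(abs(d - e) < tol for e in dists):
            dists.append(d)
    return len(dists) == 2
```

The vertices of a square form a two-distance set (side and diagonal), while an equilateral triangle realises only one distance and so does not qualify.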
Interplay between dewetting and layer inversion in poly(4-vinylpyridine)/polystyrene bilayers.
Thickett, Stuart C; Harris, Andrew; Neto, Chiara
2010-10-19
We investigated the morphology and dynamics of the dewetting of metastable poly(4-vinylpyridine) (P4VP) thin films situated on top of polystyrene (PS) thin films as a function of the molecular weight and thickness of both films. We focused on the competition between the dewetting process, occurring as a result of unfavorable intermolecular interactions at the P4VP/PS interface, and layer inversion due to the lower surface energy of PS. By means of optical and atomic force microscopy (AFM), we observed how both the dynamics of the instability and the morphology of the emerging patterns depend on the ratio of the molecular weights of the polymer films. When the bottom PS layer was less viscous than the top P4VP layer (liquid-liquid dewetting), nucleated holes in the P4VP film typically stopped growing at long annealing times because of a combination of viscous dissipation in the bottom layer and partial layer inversion. Full layer inversion was achieved when the viscosity of the top P4VP layer was significantly greater (>10⁴) than the viscosity of the PS layer underneath, which is attributed to strongly different mobilities of the two layers. The density of holes produced by nucleation dewetting was observed for the first time to depend on the thickness of the top film as well as the polymer molecular weight. The final (completely dewetted) morphology of isolated droplets could be achieved only if the time frame of layer inversion was significantly slower than that of dewetting, which was characteristic of high-viscosity PS underlayers that allowed dewetting to fall into a liquid-solid regime. Assuming a simple reptation model for layer inversion occurring at the dewetting front, the observed surface morphologies could be predicted on the basis of the relative rates of dewetting and layer inversion.
Parametric optimization of inverse trapezoid oleophobic surfaces
DEFF Research Database (Denmark)
Cavalli, Andrea; Bøggild, Peter; Okkels, Fridolin
2012-01-01
In this paper, we introduce a comprehensive and versatile approach to the parametric shape optimization of oleophobic surfaces. We evaluate the performance of inverse trapezoid microstructures in terms of three objective parameters: apparent contact angle, maximum sustainable hydrostatic pressure...
An inverse method for radiation transport
Energy Technology Data Exchange (ETDEWEB)
Favorite, J. A. (Jeffrey A.); Sanchez, R. (Richard)
2004-01-01
Adjoint functions have been used with forward functions to compute gradients in implicit (iterative) solution methods for inverse problems in optical tomography, geoscience, thermal science, and other fields, but only once has this approach been used for inverse solutions to the Boltzmann transport equation. In this paper, this approach is used to develop an inverse method that requires only angle-independent flux measurements, rather than angle-dependent measurements as was done previously. The method is applied to a simplified form of the transport equation that does not include scattering. The resulting procedure uses measured values of gamma-ray fluxes of discrete, characteristic energies to determine interface locations in a multilayer shield. The method was implemented with a Newton-Raphson optimization algorithm, and it worked very well in numerical one-dimensional spherical test cases. A more sophisticated optimization method would better exploit the potential of the inverse method.
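The Newton-Raphson step used above can be sketched in one dimension. This is a generic root-finder with a finite-difference derivative, purely illustrative; the paper's implementation computes gradients with adjoint functions rather than by differencing.

```python
def newton_raphson(f, x0, tol=1e-10, max_iter=50, h=1e-6):
    """Find a root of f by Newton-Raphson iteration, approximating
    the derivative with a central finite difference."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = (f(x + h) - f(x - h)) / (2.0 * h)
        x -= fx / dfx          # Newton update: x_{k+1} = x_k - f(x)/f'(x)
    return x
```

In the shield problem, f would be the mismatch between measured and computed characteristic-energy fluxes as a function of an interface location, and the iteration moves the interface until the mismatch vanishes.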
The Transmuted Generalized Inverse Weibull Distribution
Directory of Open Access Journals (Sweden)
Faton Merovci
2014-05-01
A generalization of the generalized inverse Weibull distribution, the so-called transmuted generalized inverse Weibull distribution, is proposed and studied. We use the quadratic rank transmutation map (QRTM) in order to generate a flexible family of probability distributions, taking the generalized inverse Weibull distribution as the base value distribution and introducing a new parameter that offers more distributional flexibility. Various structural properties including explicit expressions for the moments, quantiles, and moment generating function of the new distribution are derived. We propose the method of maximum likelihood for estimating the model parameters and obtain the observed information matrix. A real data set is used to compare the flexibility of the transmuted version versus the generalized inverse Weibull distribution.
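The QRTM itself is a simple CDF transformation: given a base CDF F, the transmuted CDF is G(x) = (1 + λ)F(x) − λF(x)², with |λ| ≤ 1. The sketch below applies that map to any base CDF; the inverse Weibull parameter values shown are hypothetical, for illustration only.

```python
import math

def transmuted_cdf(base_cdf, lam):
    """Quadratic rank transmutation map: given a base CDF F, return
    the transmuted CDF G(x) = (1 + lam) * F(x) - lam * F(x)**2,
    valid for |lam| <= 1."""
    def g(x):
        f = base_cdf(x)
        return (1.0 + lam) * f - lam * f * f
    return g

def inverse_weibull_cdf(x, alpha=1.0, beta=2.0):
    """Inverse Weibull CDF, F(x) = exp(-(alpha / x)**beta) for x > 0.
    The parameter values here are illustrative, not from the paper."""
    return math.exp(-((alpha / x) ** beta))
```

Since G(0) = 0 and G at the upper endpoint equals (1 + λ) − λ = 1, the map always returns a valid CDF, whatever base distribution is supplied.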
Parameterization analysis and inversion for orthorhombic media
Masmoudi, Nabil
2018-01-01
Accounting for azimuthal anisotropy is necessary for the processing and inversion of wide-azimuth and wide-aperture seismic data because wave speeds naturally depend on the wave propagation direction. Orthorhombic anisotropy is considered the most
Wave-equation reflection traveltime inversion
Zhang, Sanzong; Schuster, Gerard T.; Luo, Yi
2011-01-01
The main difficulty with iterative waveform inversion using a gradient optimization method is that it tends to get stuck in local minima associated within the waveform misfit function. This is because the waveform misfit function is highly nonlinear
Voxel inversion of airborne EM data
DEFF Research Database (Denmark)
Fiandaca, Gianluca G.; Auken, Esben; Christiansen, Anders Vest C A.V.C.
2013-01-01
We present a geophysical inversion algorithm working directly in a voxel grid disconnected from the actual measuring points, which allows for straightforward integration of different data types in joint inversion, for informing geological/hydrogeological models directly and for easier incorporation of prior information. Inversion of geophysical data usually refers to a model space being linked to the actual observation points. For airborne surveys the spatial discretization of the model space reflects the flight lines. Often airborne surveys are carried out in areas where other ground-based geophysical data are available. The model space of geophysical inversions is usually referred to the positions of the measurements, and ground-based model positions do not generally coincide with the airborne model positions. Consequently, a model space based on the measuring points is not well suited...
Deep controls on intraplate basin inversion
DEFF Research Database (Denmark)
Nielsen, S.B.; Stephenson, Randell Alexander; Schiffer, Christian
2014-01-01
Basin inversion is an intermediate-scale manifestation of continental intraplate deformation, which produces earthquake activity in the interior of continents. The sedimentary basins of central Europe, inverted in the Late Cretaceous-Paleocene, represent a classic example of this phenomenon. It is known that inversion of these basins occurred in two phases: an initial one of transpressional shortening involving reverse activation of former normal faults and a subsequent one of uplift of the earlier developed inversion axis and a shift of sedimentary depocentres, and that this is a response to changes in the regional intraplate stress field. This European intraplate deformation is considered in the context of a new model of the present-day stress field of Europe (and the North Atlantic) caused by lithospheric potential energy variations. Stresses causing basin inversion of Europe must have been...