Assessment of Groundwater Chemical Quality, Using Inverse Distance Weighted Method
Directory of Open Access Journals (Sweden)
Sh. Ashraf
2013-04-01
Full Text Available An interpolation technique, ordinary Inverse Distance Weighting (IDW), was used to obtain the spatial distribution of groundwater quality parameters in the Damghan plain of Iran. According to the Scofield guidelines for TDS, 60% of the water samples were harmful for irrigation purposes. Regarding the EC parameter, more than 60% of the studied area lay in the bad range for irrigation. The most dominant anion was Cl-, and 10% of the water samples fell into the very hazardous class. According to the Doneen guidelines for chloride, 100% of the water collected from the aquifer had slight to moderate problems for irrigation purposes. The predominant cations in the Damghan plain aquifer followed the order Na+ > Ca++ > Mg++ > K+. Sodium was the dominant cation, and with regard to the Na+ content guidelines, almost all groundwater samples were problematic for foliar application. The calcium ion distribution was within the usual range. The magnesium ion concentration was generally lower than those of sodium and calcium, and the majority of samples showed Mg++ amounts within the usual range. K+ values ranged from 0.1 to 0.23 meq/L, and all water samples had potassium values within the permissible limit. Based on the SAR criterion, 80% of the collected water had slight to moderate problems; according to the SAR value, thirty percent of the groundwater samples fell into the doubtful class. The SSP values ranged from 2.87 to 6.87%. The estimated RSC values ranged from 0.4 to 2, and based on the RSC criterion, twenty percent of the groundwater samples had slight to moderate problems.
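The ordinary IDW estimator used in studies like this one takes each unsampled location's value as a weighted average of the sampled wells, with weights decaying as an inverse power of distance. A minimal sketch (the well coordinates, TDS-like values, and power parameter below are illustrative assumptions, not data from the paper):

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Ordinary inverse distance weighting: each prediction is a
    weighted average of known values, with weights ~ 1/distance**power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # eps avoids division by zero at sample points
    w /= w.sum(axis=1, keepdims=True)     # normalise weights to sum to 1
    return w @ values

# toy example: TDS-like values (mg/L) at four hypothetical wells,
# estimated at a new point equidistant from all of them
wells = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
tds = np.array([800.0, 1200.0, 900.0, 1500.0])
print(idw(wells, tds, np.array([[0.5, 0.5]])))  # prediction lies between min and max
```

Because the query point is equidistant from all four wells, the weights are equal and the estimate reduces to the plain mean; the power parameter only matters when distances differ.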
Directory of Open Access Journals (Sweden)
Y. Gholipour
Full Text Available This paper focuses on a metamodel-based design optimization algorithm, with the aim of improving its computational cost and convergence rate. The metamodel-based optimization method introduced here reduces the computational cost of the optimization through a surrogate. The algorithm combines a high-quality approximation technique, Inverse Distance Weighting, with the meta-heuristic Harmony Search algorithm; the outcome is then polished by a semi-tabu search. The algorithm adopts a filtering system to determine the solution vectors to which exact simulation should be applied. Its performance is evaluated on standard truss design problems, showing a significant decrease in computational effort and an improvement in convergence rate.
Comparing ordinary kriging and inverse distance weighting for soil As pollution in Beijing.
Qiao, Pengwei; Lei, Mei; Yang, Sucai; Yang, Jun; Guo, Guanghui; Zhou, Xiaoyong
2018-03-23
Spatial interpolation is the basis of soil heavy metal pollution assessment and remediation, but existing accuracy metrics for interpolation are rarely tied to the actual situation; the choice of interpolation method needs to be based on the specific research purpose and the characteristics of the research object. In this paper, As (arsenic) pollution in soils of Beijing was taken as an example. The prediction accuracy of ordinary kriging (OK) and inverse distance weighting (IDW) was evaluated based on cross-validation results and the spatial distribution characteristics of influencing factors. The results showed that, under conditions of specific spatial correlation, the cross-validation results of OK and IDW for individual soil points, and their prediction accuracy for the overall spatial distribution trend, are similar. However, OK predicts the maxima and minima less accurately than IDW and identifies fewer high-pollution areas; OK therefore has difficulty identifying high-pollution areas fully, which reflects its well-known smoothing effect. In addition, as the spatial correlation of the As concentration increases, the cross-validation errors of OK and IDW decrease, and the high-pollution areas identified by OK approach those identified by IDW, allowing a more comprehensive identification. Nevertheless, because the semivariogram required by OK is constructed subjectively and demands a larger number of soil samples, IDW is more suitable for spatial prediction of heavy metal pollution in these soils.
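Cross-validation of an interpolator, as used in this comparison, can be sketched for IDW with a leave-one-out loop; kriging would additionally require a geostatistics library to fit the semivariogram, so only the IDW side is shown here, on synthetic data (the coordinates, concentrations, and tested powers are illustrative assumptions, not the Beijing data):

```python
import numpy as np

def idw_predict(xy, vals, q, power):
    """IDW estimate at a single query point q from samples (xy, vals)."""
    d = np.linalg.norm(xy - q, axis=1)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return np.sum(w * vals) / np.sum(w)

def loocv_rmse(xy, vals, power):
    """Leave-one-out cross validation: predict each sample from all others."""
    errs = []
    for i in range(len(vals)):
        mask = np.arange(len(vals)) != i
        errs.append(idw_predict(xy[mask], vals[mask], xy[i], power) - vals[i])
    return float(np.sqrt(np.mean(np.square(errs))))

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(40, 2))
conc = 5 + 0.5 * pts[:, 0] + rng.normal(0, 0.2, 40)   # synthetic As-like field
for p in (1.0, 2.0, 3.0):
    print(p, loocv_rmse(pts, conc, p))
```

The power value with the lowest leave-one-out RMSE would be retained for the final map; the same loop structure applies to any interpolator.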
Zarco-Perello, Salvador; Simões, Nuno
2017-01-01
Information about the distribution and abundance of the habitat-forming sessile organisms in marine ecosystems is of great importance for conservation and natural resource managers. Spatial interpolation methodologies can be useful to generate this information from in situ sampling points, especially in circumstances where remote sensing methodologies cannot be applied due to small-scale spatial variability of the natural communities and low light penetration in the water column. Interpolation methods are widely used in environmental sciences; however, published studies using these methodologies in coral reef science are scarce. We compared the accuracy of the two most commonly used interpolation methods in all disciplines, inverse distance weighting (IDW) and ordinary kriging (OK), to predict the distribution and abundance of hard corals, octocorals, macroalgae, sponges and zoantharians and identify hotspots of these habitat-forming organisms using data sampled at three different spatial scales (5, 10 and 20 m) in Madagascar reef, Gulf of Mexico. The deeper sandy environments of the leeward and windward regions of Madagascar reef were dominated by macroalgae and seconded by octocorals. However, the shallow rocky environments of the reef crest had the highest richness of habitat-forming groups of organisms; here, we registered high abundances of octocorals and macroalgae, with sponges, Millepora alcicornis and zoantharians dominating in some patches, creating high levels of habitat heterogeneity. IDW and OK generated similar maps of distribution for all the taxa; however, cross-validation tests showed that IDW outperformed OK in the prediction of their abundances. When the sampling distance was at 20 m, both interpolation techniques performed poorly, but as the sampling was done at shorter distances prediction accuracies increased, especially for IDW. OK had higher mean prediction errors and failed to correctly interpolate the highest abundance values measured in
Directory of Open Access Journals (Sweden)
Salvador Zarco-Perello
2017-11-01
Full Text Available Information about the distribution and abundance of the habitat-forming sessile organisms in marine ecosystems is of great importance for conservation and natural resource managers. Spatial interpolation methodologies can be useful to generate this information from in situ sampling points, especially in circumstances where remote sensing methodologies cannot be applied due to small-scale spatial variability of the natural communities and low light penetration in the water column. Interpolation methods are widely used in environmental sciences; however, published studies using these methodologies in coral reef science are scarce. We compared the accuracy of the two most commonly used interpolation methods in all disciplines, inverse distance weighting (IDW) and ordinary kriging (OK), to predict the distribution and abundance of hard corals, octocorals, macroalgae, sponges and zoantharians and identify hotspots of these habitat-forming organisms using data sampled at three different spatial scales (5, 10 and 20 m) in Madagascar reef, Gulf of Mexico. The deeper sandy environments of the leeward and windward regions of Madagascar reef were dominated by macroalgae and seconded by octocorals. However, the shallow rocky environments of the reef crest had the highest richness of habitat-forming groups of organisms; here, we registered high abundances of octocorals and macroalgae, with sponges, Millepora alcicornis and zoantharians dominating in some patches, creating high levels of habitat heterogeneity. IDW and OK generated similar maps of distribution for all the taxa; however, cross-validation tests showed that IDW outperformed OK in the prediction of their abundances. When the sampling distance was at 20 m, both interpolation techniques performed poorly, but as the sampling was done at shorter distances prediction accuracies increased, especially for IDW. OK had higher mean prediction errors and failed to correctly interpolate the highest abundance
Directory of Open Access Journals (Sweden)
Mehmet Arif Özyazıcı
2015-11-01
Full Text Available The aim of this study was to determine the plant nutrient content of agricultural land in the Central and Eastern Black Sea Region, to build a soil database, and to generate distribution maps using a geographical information system (GIS). In this research, a total of 3400 soil samples (0-20 cm depth) were taken at 2.5 x 2.5 km grid points representing agricultural soils. Total nitrogen and extractable calcium, magnesium, sodium, boron, iron, copper, zinc and manganese contents were analysed in the collected soil samples. The analysis results were classified and evaluated for deficiency, sufficiency or excess with respect to each plant nutrient. A soil database and maps of the current status of the study area were then created in the GIS using the inverse distance weighted (IDW) interpolation method. According to the results, the arable soils of the Central and Eastern Black Sea Region were sufficient in total nitrogen and extractable iron, copper and manganese, while extractable calcium, magnesium and sodium were at good or moderate levels in 66.88%, 81.44% and 64.56% of the soil samples, respectively. In addition, insufficient boron and zinc concentrations were found in 34.35% and 51.36% of the soil samples, respectively.
Water quality assessment and mapping using inverse distance ...
African Journals Online (AJOL)
Water quality assessment and mapping using inverse distance weighted interpolation: a case of River Kaduna, Nigeria. ... Several researchers have studied the water quality of the upper and lower stretches of River Kaduna with little on the middle stretch of the river. Besides, no work has ever been done on mapping the ...
Phylogenetic inference with weighted codon evolutionary distances.
Criscuolo, Alexis; Michel, Christian J
2009-04-01
We develop a new approach to estimating a matrix of pairwise evolutionary distances from a codon-based alignment, based on a codon evolutionary model. The method first computes a standard distance matrix for each of the three codon positions. These three distance matrices are then weighted according to an estimate of the global evolutionary rate of each codon position and averaged into a single distance matrix. Using a large set of both real and simulated codon-based alignments of nucleotide sequences, we show that this approach leads to distance matrices with significantly better treelikeness than those obtained from standard nucleotide evolutionary distances. We also propose an alternative weighting that eliminates part of the noise often associated with some codon positions, particularly the third position, which is known to evolve at a fast rate. Simulation results show that fast distance-based tree reconstruction algorithms applied to distance matrices based on this codon-position weighting can lead to phylogenetic trees at least as accurate as, if not better than, those inferred by maximum likelihood. Finally, a well-known multigene dataset composed of eight yeast species and 106 codon-based alignments is reanalyzed, showing that our codon evolutionary distances allow building a phylogenetic tree similar to those obtained by non-distance-based methods (e.g., maximum parsimony and maximum likelihood) and significantly improved over standard nucleotide evolutionary distance estimates.
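The core weighting step the abstract describes — averaging the three per-position distance matrices with rate-based weights — can be sketched as follows (the matrices and rate estimates are made-up illustrations, not the authors' data):

```python
import numpy as np

# per-position pairwise distance matrices for 3 taxa (symmetric, zero diagonal)
D1 = np.array([[0, .10, .20], [.10, 0, .15], [.20, .15, 0]])
D2 = np.array([[0, .08, .18], [.08, 0, .12], [.18, .12, 0]])
D3 = np.array([[0, .40, .70], [.40, 0, .55], [.70, .55, 0]])  # noisy, fast third position

def weighted_codon_distance(mats, rates):
    """Average the three codon-position distance matrices, weighting each
    position by a (hypothetical) estimate of its global evolutionary rate."""
    w = np.asarray(rates, dtype=float)
    w /= w.sum()                         # normalise weights to sum to 1
    return sum(wi * Di for wi, Di in zip(w, mats))

# down-weighting the third position relative to equal weighting
D = weighted_codon_distance([D1, D2, D3], rates=[1.0, 1.0, 0.2])
print(D)
```

The resulting matrix is again symmetric with a zero diagonal, so it can be fed directly to any distance-based tree reconstruction algorithm (e.g. neighbor joining).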
Distance-weighted city growth.
Rybski, Diego; García Cantú Ros, Anselmo; Kropp, Jürgen P
2013-04-01
Urban agglomerations exhibit complex emergent features, of which Zipf's law, i.e., a power-law size distribution, and fractality may be regarded as the most prominent. We propose a simplistic model for the generation of city-like structures which is based solely on the assumption that growth is more likely to take place close to inhabited space. The model involves one parameter, an exponent determining how strongly the attraction decays with distance. In addition, the model is run iteratively so that existing clusters can grow (together) and new ones can emerge. The model is capable of reproducing the size distribution and the fractality of the boundary of the largest cluster. Although the power-law distribution depends on both the imposed exponent and the iteration, the fractality appears to be independent of the former and to depend only on the latter. Analyzing land-cover data, we estimate the parameter value γ ≈ 2.5 for Paris and its surroundings.
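A toy version of such a growth model can be sketched on a small lattice: at each step, an empty cell is occupied with probability proportional to the summed attraction of the inhabited cells, decaying as distance to the power −γ (the grid size, seed location, and step count are arbitrary choices for illustration):

```python
import numpy as np

def grow(occupied, shape, gamma, steps, rng):
    """Iteratively occupy cells with probability proportional to the summed
    inverse-power attraction of already-inhabited cells (one cell per step)."""
    occ = set(map(tuple, occupied))
    cells = [(i, j) for i in range(shape[0]) for j in range(shape[1])]
    for _ in range(steps):
        empty = [c for c in cells if c not in occ]
        # attraction of each empty cell: sum over occupied cells of d**(-gamma)
        attr = np.array([
            sum(((c[0] - o[0]) ** 2 + (c[1] - o[1]) ** 2) ** (-gamma / 2) for o in occ)
            for c in empty])
        probs = attr / attr.sum()
        occ.add(empty[rng.choice(len(empty), p=probs)])
    return occ

rng = np.random.default_rng(1)
city = grow([(10, 10)], shape=(21, 21), gamma=2.5, steps=30, rng=rng)
print(len(city))  # 31 cells: the seed plus 30 growth steps
```

Larger γ concentrates growth next to existing cells (compact clusters); smaller γ lets distant cells nucleate new clusters, mimicking the model's two regimes.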
RFDR with Adiabatic Inversion Pulses: Application to Internuclear Distance Measurements
International Nuclear Information System (INIS)
Leppert, Joerg; Ohlenschlaeger, Oliver; Goerlach, Matthias; Ramachandran, Ramadurai
2004-01-01
In the context of the structural characterisation of biomolecular systems via MAS solid-state NMR, the potential utility of homonuclear dipolar recoupling with adiabatic inversion pulses has been assessed via numerical simulations and experimental measurements. The results obtained suggest that it is possible to obtain reliable estimates of internuclear distances via an analysis of the initial cross-peak intensity buildup curves generated from two-dimensional adiabatic-inversion-pulse-driven longitudinal magnetisation exchange experiments.
Distance weighting for improved tomographic reconstructions
International Nuclear Information System (INIS)
Koeppe, R.A.; Holden, J.E.
1984-01-01
An improved method for the reconstruction of emission computed axial tomography images has been developed. The method is a modification of filtered back-projection in which the back-projected values are weighted to reflect the loss of information with distance from the camera that is inherent in gamma camera imaging. This information loss results from the loss of spatial resolution with distance, attenuation, and scatter. The weighting scheme is best described by considering the contributions of any two opposing views to a reconstruction image pixel. The weight applied to the projections of one view is set equal to the relative amount of the original activity initially received in that projection, assuming a uniform attenuating medium. This yields a weighting value that is a function of distance into the image, with a value of one for pixels near the camera, 0.5 at the image center, and zero on the opposite side. Tomographic reconstructions produced with this method show improved spatial resolution compared to conventional 360° reconstructions. The improvement is in the tangential direction, where simulations indicate a FWHM improvement of 1 to 1.5 millimeters; resolution in the radial direction is essentially the same for both methods. Visual inspection of the reconstructed images shows improved resolution and contrast.
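Reading the stated endpoint values (one near the camera, 0.5 at the center, zero on the far side) as a linear ramp in depth, the pair of opposing-view weights can be sketched as follows; the linear form is an assumption for illustration, whereas the paper derives the weights from the activity received under a uniform attenuating medium:

```python
def opposing_view_weights(depth, extent):
    """Depth-dependent weights for a pair of opposing gamma-camera views.
    Linear model (an assumption consistent with the stated endpoint values):
    1 at the near face, 0.5 at the centre, 0 at the far face."""
    w_near = 1.0 - depth / extent
    return w_near, 1.0 - w_near   # the two opposing weights always sum to 1

extent = 20.0                     # hypothetical image depth in cm
for d in (0.0, 10.0, 20.0):
    print(d, opposing_view_weights(d, extent))
```

Because the pair sums to one, total counts are conserved: each pixel's value is still a full combination of the two opposing projections, just redistributed toward the view that saw the activity more directly.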
Weighted Branching Simulation Distance for Parametric Weighted Kripke Structures
DEFF Research Database (Denmark)
Foshammer, Louise; Larsen, Kim Guldstrand; Mariegaard, Anders
2016-01-01
This paper concerns branching simulation for weighted Kripke structures with parametric weights. Concretely, we consider a weighted extension of branching simulation where a single transition can be matched by a sequence of transitions while preserving the branching behavior. We relax this notion...... which, in the general parametric case, corresponds to finding suitable parameter valuations such that one system can approximately simulate another. Although the distance considers a potentially infinite set of transition sequences we demonstrate that there exists an upper bound on the length......
Inverse-designed stretchable metalens with tunable focal distance
Callewaert, Francois; Velev, Vesselin; Jiang, Shizhou; Sahakian, Alan Varteres; Kumar, Prem; Aydin, Koray
2018-02-01
In this paper, we present an inverse-designed, 3D-printed, all-dielectric stretchable millimeter-wave metalens with a tunable focal distance. A computational inverse-design method is used to design a flat metalens made of disconnected polymer building blocks with complex shapes, as opposed to conventional monolithic lenses. The proposed metalens provides better performance than a conventional Fresnel lens, using a smaller amount of material and enabling larger focal distance tunability. The metalens is fabricated using a commercial 3D printer and attached to a stretchable platform. Measurements and simulations show that the focal distance can be tuned by a factor of 4 with a stretching factor of only 75%, with a nearly diffraction-limited focal spot and a 70% relative focusing efficiency, defined as the ratio between the power focused in the focal spot and the power going through the focal plane. The proposed platform can be extended to the design and fabrication of multiple electromagnetic devices working from visible to microwave radiation, depending on the scaling of the devices.
Full waveform inversion for time-distance helioseismology
International Nuclear Information System (INIS)
Hanasoge, Shravan M.; Tromp, Jeroen
2014-01-01
Inferring interior properties of the Sun from photospheric measurements of the seismic wavefield constitutes the helioseismic inverse problem. Deviations in seismic measurements (such as wave travel times) from the fiducial values estimated for a given model of the solar interior imply that the model is inaccurate. Contemporary inversions in local helioseismology assume that properties of the solar interior are linearly related to measured travel-time deviations. It is widely known, however, that this assumption is invalid for sunspots and active regions, and is likely invalid for supergranular flows as well. Here, we introduce nonlinear optimization, executed iteratively, as a means of inverting for the subsurface structure of large-amplitude perturbations. Defining the penalty functional as the L2 norm of wave travel-time deviations, we compute the total misfit gradient of this functional with respect to the relevant model parameters at each iteration around the corresponding model. The model is successively improved using either steepest descent, conjugate gradient, or the quasi-Newton limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Performing nonlinear iterations requires privileging pixels (such as those in the near field of the scatterer), a practice that is not compliant with the standard assumption of translational invariance. Measurements for these inversions, although similar in principle to those used in time-distance helioseismology, require some retooling. For the sake of simplicity in illustrating the method, we consider a two-dimensional inverse problem with only a sound-speed perturbation.
Weighted Chebyshev distance classification method for hyperspectral imaging
Demirci, S.; Erer, I.; Ersoy, O.
2015-06-01
The main objective of classification is to partition the surface materials into non-overlapping regions using decision rules. In supervised classification, the hyperspectral imagery (HSI) is compared with the reflectance spectra of materials with similar spectral characteristics. As a spectral-similarity-based classification method, the Multi-Scale Vector Tunnel Algorithm (MS-VTA) rests on predicting upper and lower spectral boundaries of all class spectral signatures across the spectral bands. The vector tunnel (VT) scaling parameters are obtained from the means and standard deviations of the class references. In this study, the MS-VT method is improved, and a spectral-similarity-based technique referred to as the Weighted Chebyshev Distance (WCD) method is introduced for the supervised classification of HSI. It is shown to be equivalent to using the WCD with weights chosen as an inverse power of the standard deviation per spectral band. The use of WCD measures in terms of inverse powers of the standard deviations, and the optimization of the power parameter, constitute the core of the study. The algorithms are trained with the same kinds of training sets, and their performances are evaluated over a range of values of the power parameter in order to choose the best weights.
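A sketch of the weighted Chebyshev distance as described, with per-band weights taken as an inverse power of each class's per-band standard deviation (the band values, references, and standard deviations below are invented for illustration):

```python
import numpy as np

def weighted_chebyshev(x, ref, band_std, power=1.0):
    """Weighted Chebyshev (L-infinity) distance between a pixel spectrum and a
    class reference, with per-band weights chosen as an inverse power of the
    per-band standard deviation of the class training spectra."""
    w = 1.0 / np.maximum(band_std, 1e-12) ** power
    return np.max(w * np.abs(x - ref))

# toy 4-band example: classify a pixel against two hypothetical class references
pixel = np.array([0.30, 0.55, 0.42, 0.18])
classes = {
    "vegetation": (np.array([0.28, 0.60, 0.40, 0.15]),
                   np.array([0.05, 0.10, 0.04, 0.03])),
    "soil":       (np.array([0.45, 0.40, 0.50, 0.35]),
                   np.array([0.06, 0.05, 0.07, 0.05])),
}
label = min(classes, key=lambda c: weighted_chebyshev(pixel, *classes[c]))
print(label)
```

Dividing each band's deviation by the class spread means a band where the class is tightly clustered contributes more to the max, which is exactly the tunnel-boundary idea restated as a distance.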
ipw: An R Package for Inverse Probability Weighting
Directory of Open Access Journals (Sweden)
Ronald B. Geskus
2011-10-01
Full Text Available We describe the R package ipw for estimating inverse probability weights. We show how to use the package to fit marginal structural models through inverse probability weighting, to estimate causal effects. Our package can be used with data from a point treatment situation as well as with a time-varying exposure and time-varying confounders. It can be used with binomial, categorical, ordinal and continuous exposure variables.
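The estimand behind such weights can be illustrated in a few lines: with each subject weighted by the inverse of the probability of the treatment actually received, a weighted difference in mean outcomes recovers the causal effect despite confounding. The simulation below is a generic sketch in Python (not the ipw package's API, which is R), and uses the true propensities for clarity; in practice, as in ipw, they are estimated, e.g. by logistic regression:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000
conf = rng.normal(size=n)                         # a single confounder
p_treat = 1 / (1 + np.exp(-conf))                 # true propensity score
treat = rng.binomial(1, p_treat)
outcome = 2.0 * treat + conf + rng.normal(size=n) # true causal effect = 2

# inverse probability weights: 1/P(received treatment) or 1/P(received control)
w = treat / p_treat + (1 - treat) / (1 - p_treat)

# weighted difference in means (Hajek-style, weights renormalised per arm)
ate = (np.sum(w * treat * outcome) / np.sum(w * treat)
       - np.sum(w * (1 - treat) * outcome) / np.sum(w * (1 - treat)))
print(ate)   # close to the true effect of 2; the naive difference in means is biased
```

The unweighted difference in means would be inflated here, because subjects with high `conf` are both more likely to be treated and have higher outcomes; the weights undo exactly that imbalance.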
30 CFR 285.543 - Example of how the inverse distance formula works.
2010-07-01
Title 30 (Mineral Resources), 2010-07-01: Example of how the inverse distance formula works. Section 285.543, Minerals Management Service, Department of the Interior; Financial Assurance Requirements, Revenue Sharing with States.
Crestel, Benjamin; Alexanderian, Alen; Stadler, Georg; Ghattas, Omar
2017-07-01
The computational cost of solving an inverse problem governed by PDEs, using multiple experiments, increases linearly with the number of experiments. A recently proposed method to decrease this cost uses only a small number of random linear combinations of all experiments for solving the inverse problem. This approach applies to inverse problems where the PDE solution depends linearly on the right-hand side function that models the experiment. As this method is stochastic in essence, the quality of the obtained reconstructions can vary, in particular when only a small number of combinations are used. We develop a Bayesian formulation for the definition and computation of encoding weights that lead to a parameter reconstruction with the least uncertainty. We call these weights A-optimal encoding weights. Our framework applies to inverse problems where the governing PDE is nonlinear with respect to the inversion parameter field. We formulate the problem in infinite dimensions and follow the optimize-then-discretize approach, devoting special attention to the discretization and the choice of numerical methods in order to achieve a computational cost that is independent of the parameter discretization. We elaborate our method for a Helmholtz inverse problem, and derive the adjoint-based expressions for the gradient of the objective function of the optimization problem for finding the A-optimal encoding weights. The proposed method is potentially attractive for real-time monitoring applications, where one can invest the effort to compute optimal weights offline, to later solve an inverse problem repeatedly, over time, at a fraction of the initial cost.
Evaluation of an inverse distance weighting method for patching ...
African Journals Online (AJOL)
2016-07-03
Jul 3, 2016 ... There are three main techniques for estimating missing meteorological data, namely, empirical methods, statistical methods and function-fitting methods (Xia et al., 1999). The application of patching methods is dependent on the length of the gap, the season, climatic region, density of stations, and the ...
Body weight: relationship to conversational distance and self-actualization.
Clarke, P N
1989-01-01
Body weight is a major concern for women. This study investigated the relationship between body weight deviation, perceived effect of weight, conversational distance, and self-actualization in healthy Caucasian college women (N = 109) between the ages of 18 and 50. The perception of body weight was measured with a structured questionnaire. Conversational distance was measured by having the participant approach the investigator, and self-actualization was determined using the Personal Orientation Inventory (POI). Significant correlations were found between the Time Competence (TC) subscale of the POI and Perceived Effect (PE), and between conversational distance and TC. Further analysis of the data revealed a relationship between body weight deviation, using the actual deviation from the norm, and the combined effect (magnitude and direction) of body weight (r = .54, p < .001). Path analysis revealed the multidimensional nature of the issue of body weight for women. The usual assumptions about body fat for women are questioned and implications for future research are discussed.
Inverse odds ratio-weighted estimation for causal mediation analysis.
Tchetgen Tchetgen, Eric J
2013-11-20
An important scientific goal of studies in the health and social sciences is increasingly to determine to what extent the total effect of a point exposure is mediated by an intermediate variable on the causal pathway between the exposure and the outcome. A causal framework has recently been proposed for mediation analysis, which gives rise to new definitions, formal identification results and novel estimators of direct and indirect effects. In the present paper, the author describes a new inverse odds ratio-weighted approach to estimate so-called natural direct and indirect effects. The approach, which uses as a weight the inverse of an estimate of the odds ratio function relating the exposure and the mediator, is universal in that it can be used to decompose total effects in a number of regression models commonly used in practice. Specifically, the approach may be used for effect decomposition in generalized linear models with a nonlinear link function, and in a number of other commonly used models such as the Cox proportional hazards regression for a survival outcome. The approach is simple and can be implemented in standard software provided a weight can be specified for each observation. An additional advantage of the method is that it easily incorporates multiple mediators of a categorical, discrete or continuous nature. Copyright © 2013 John Wiley & Sons, Ltd.
Signed Distance Computation using the Angle Weighted Pseudo-normal
DEFF Research Database (Denmark)
Bærentzen, Jakob Andreas; Aanæs, Henrik
2005-01-01
The normals of closed, smooth surfaces have long been used to determine whether a point is inside or outside such a surface. It is tempting also to use this method for polyhedra represented as triangle meshes. Unfortunately, this is not possible since at the vertices and edges of a triangle mesh...... that are inside and points that are outside a mesh, regardless of whether a mesh vertex, edge or face is the closest feature. This inside-outside information is usually represented as the sign in the signed distance to the mesh. In effect, our result shows that this sign can be computed as an integral part...... of the distance computation. Moreover, it provides an additional argument in favour of the angle weighted pseudo-normals being the natural extension of the face normals. Apart from the theoretical results, we also propose a simple and efficient algorithm for computing the signed distance to a closed C^0 mesh...
Inverse probability weighting for covariate adjustment in randomized studies.
Shen, Changyu; Li, Xiaochun; Li, Lingling
2014-02-20
Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a 'favorable' model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions that enforce objective inference while improving precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed so that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented, together with an application of the proposed method to a real data example. Copyright © 2013 John Wiley & Sons, Ltd.
DeWeese, Robin; Ohri-Vachaspati, Punam
2015-09-01
Active commuting to school (ACS) increases students' daily physical activity, but associations between student weight and ACS are inconsistent. Few studies examining ACS and weight account for distance commuted. This study examines the association between students' weight status and ACS, taking into account distance to school. In 2009-10 a random digit-dial household survey conducted in low-income minority cities collected information about ACS for 1 randomly selected school-going student per household. Parents provided measured heights and weights. Distance commuted was obtained using geocoded home and school addresses. Multivariate regression analyses assessed associations of ACS and distance commuted with weight status. 36.6% of students were overweight/obese; 47.2% engaged in ACS. Distance walked/biked to school was associated with 7% lower odds of overweight/obesity (OR = 0.93, 95% CI: 0.88- 0.99). Without distance commuted in the model, ACS was not associated with students' weight status. Compared with no ACS, ACS greater than a half-mile was associated with 65% lower odds of a student being overweight/obese (OR = 0.35, 95% CI: 0.16- 0.78); ACS less than a half-mile was not. ACS is significantly inversely associated with overweight/obesity among students who commute beyond a one-half mile threshold.
Non-inverse-square force-distance law for long thin magnets-revisited.
Darvell, Brian W; Gilding, Brian H
2012-05-01
It had previously been shown that the inverse-square law does not apply to the force-distance relationship in the case of a long, thin magnet with one end in close proximity to its image in a permeable plane when simple point-like poles are assumed. Treating the system instead as having a 'polar disc', arising from an assumed bundle of dipoles, led to a double integral that could only be evaluated numerically, and a relationship that still did not match observed behavior. Using an elaborate 'stretched' exponential polynomial to represent the position of an 'elastic' polar disc resulted in a fair representation of the physical response, but this was essentially merely the fitting of an arbitrary function. The present purpose was therefore to find an explicit formula for the force-distance relationship in the polar-disc problem and assess its fit to the previously obtained experimental data. Starting from Coulomb's law a corrected integral formula for the force-distance relationship was derived. The integral in this formula was evaluated explicitly using rescaling, changes of order of integration, reduction by symmetry, and change of variables. The resulting formula was then fitted to data that had been obtained for the force exerted by eighty-five rod-shaped magnets (Alnico V, 3 mm diameter, 170 mm long) perpendicular to a large steel plate, as a function of distance, at small separations (magnet data was found. A key feature remains the marked departure from inverse-square behavior. The failure of the explicit formula to fit the data indicates an inadequate model of the physical system. Nonetheless it constitutes a useful tool for quantifying the force-distance relationship on the premise of polar discs. Given these insights, it may now be possible to address the original motivating problem of the behavior of real dental magnets. Copyright © 2012 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Hofstra, N.; New, M.
2009-01-01
Angular-distance weighting (ADW) is a common approach for interpolation of an irregular network of meteorological observations to a regular grid. A widely used version of ADW employs the correlation decay distance (CDD) to (1) select stations that should contribute to each grid-point estimate and
Directory of Open Access Journals (Sweden)
P. Sande-Fouz
2007-04-01
Full Text Available In this paper, results from three interpolation techniques based on geostatistics (ordinary kriging, kriging with external drift, and conditional simulation) and one deterministic method (inverse distance) for mapping total monthly rainfall are compared. The study data set comprised total monthly rainfall from 1998 to 2001, corresponding to a maximum of 121 meteorological stations irregularly distributed in the region of Galicia (NW Spain). Furthermore, a raster Geographic Information System (GIS) was used for spatial interpolation with a 500×500 m grid digital elevation model. The inverse distance technique was appropriate for a rapid estimation of rainfall at the studied scale. In order to apply geostatistical interpolation techniques, a spatial dependence analysis was performed; rainfall spatial dependence was observed in 33 out of the 48 months analysed, while the remaining rainfall data sets presented random behaviour. Different values of the semivariogram parameters caused the smoothing in the maps obtained by ordinary kriging. Kriging with external drift results were in accordance with former studies showing the influence of topography. Conditional simulation is considered to give more realistic results; however, this consideration must be confirmed with new data.
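The deterministic inverse distance technique compared above can be sketched in a few lines. This is a generic, stdlib-only illustration (station coordinates and rainfall values are invented), not the authors' GIS implementation:

```python
import math

def idw(points, query, power=2):
    """Inverse Distance Weighted estimate at `query` from (x, y, value) samples."""
    num = den = 0.0
    for x, y, v in points:
        d = math.hypot(x - query[0], y - query[1])
        if d == 0.0:
            return v  # query coincides with a sample point
        w = d ** -power
        num += w * v
        den += w
    return num / den

# Toy monthly-rainfall samples (coordinates in km, rainfall in mm)
stations = [(0.0, 0.0, 120.0), (10.0, 0.0, 80.0), (0.0, 10.0, 100.0)]
print(round(idw(stations, (2.0, 2.0)), 1))  # ≈ 114.3
```

At a sample location the method returns the sample itself; elsewhere the estimate is a convex combination of the samples, so IDW never extrapolates outside their range, which is one reason it smooths less gracefully than kriging.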
Directory of Open Access Journals (Sweden)
Monica H T Wong
Full Text Available Successful weight maintenance following weight loss is challenging for many people. Identifying predictors of longer-term success will help target clinical resources more effectively. To date, focus has been predominantly on the identification of predictors of weight loss. The goal of the current study was to determine if changes in anthropometric and clinical parameters during acute weight loss are associated with subsequent weight regain.The study consisted of an 8-week low calorie diet (LCD followed by a 6-month weight maintenance phase. Anthropometric and clinical parameters were analyzed before and after the LCD in the 285 participants (112 men, 173 women who regained weight during the weight maintenance phase. Mixed model ANOVA, Spearman correlation, and linear regression were used to study the relationships between clinical measurements and weight regain.Gender differences were observed for body weight and several clinical parameters at both baseline and during the LCD-induced weight loss phase. LCD-induced changes in BMI (Spearman's ρ = 0.22, p = 0.0002 were inversely associated with weight regain in both men and women. LCD-induced changes in fasting insulin (ρ = 0.18, p = 0.0043 and HOMA-IR (ρ = 0.19, p = 0.0023 were also associated independently with weight regain in both genders. The aforementioned associations remained statistically significant in regression models taking account of variables known to independently influence body weight.LCD-induced changes in BMI, fasting insulin, and HOMA-IR are inversely associated with weight regain in the 6-month period following weight loss.
Application of the Inverse Square Law distance in conventional radiology and mammography
International Nuclear Information System (INIS)
Hoff, Gabriela; Lima, Nathan Willig
2014-01-01
The Inverse Square Law (ISL) is a mathematical rule widely used to adjust KERMA and exposure to different focal-spot distances, taking a determined point in space as reference. Taking into account the limitations of this mathematical law and its application, our main objective was to verify the applicability of the ISL for determining exposure in the radiodiagnostic range (peak tensions between 30 kVp and 150 kVp). Experimental data were collected, and deterministic calculation and simulation using the Monte Carlo method (Geant4 toolkit) were applied. The experimental data were collected using a calibrated TNT 12000 ionizing chamber from Fluke. The conventional X-ray equipment used was a Siemens Multix Top, with a tungsten track and total filtration equivalent to 2.5 mm of aluminum; the mammographic equipment was a Siemens Mammomat Inspiration, presenting the track/added-filtration combinations molybdenum-molybdenum (25 μm), molybdenum-rhodium (30 μm), and tungsten-rhodium (50 μm). Both pieces of equipment had quality control tests in agreement with Brazilian regulations. In conventional radiology the measurements were performed at the following focal spot-detector distances (FsDD): 40 cm, 50 cm, 60 cm, 70 cm, 80 cm, 90 cm and 100 cm, for peak tensions of 66 kVp, 81 kVp and 125 kVp. In mammography the measurements were performed at FsDD of 60 cm, 50 cm, 40 cm and 26 cm, for peak tensions of 25 kVp, 30 kVp and 35 kVp. Based on the results it is possible to conclude that the ISL performs worse on mammography spectra (inducing larger errors in estimated data), but it can cause significant impact in both areas depending on the spectral energy and the distance to be corrected. (author)
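The ISL adjustment that the study tests can be written explicitly. A minimal sketch of the deterministic calculation, with illustrative numbers rather than the paper's measurements:

```python
def isl_correct(kerma_ref, d_ref_cm, d_new_cm):
    """Scale air KERMA measured at d_ref to a new focal-spot distance d_new
    using the Inverse Square Law: K_new = K_ref * (d_ref / d_new)**2."""
    return kerma_ref * (d_ref_cm / d_new_cm) ** 2

# KERMA measured at 100 cm, estimated at 50 cm: four times larger
print(isl_correct(1.0, 100.0, 50.0))  # 4.0
```

The study's point is precisely that this idealized scaling degrades for low-energy (mammography) spectra, where attenuation in air and spectral effects break the pure point-source assumption.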
Determination of the complexity of distance weights in Mexican city systems
Directory of Open Access Journals (Sweden)
Igor Lugo
2017-03-01
Full Text Available This study tests distance weights based on the economic geography assumption of straight lines and the complex networks approach of empirical road segments in the Mexican system of cities to determine the best distance specification. We generated network graphs by using geospatial data and computed weights by measuring shortest paths, thereby characterizing their probability distributions and comparing them with spatial null models. Findings show that distributions are sufficiently different and are associated with asymmetrical beta distributions. Straight lines over- and underestimated distances compared to the empirical data, and they showed compatibility with random models. Therefore, accurate distance weights depend on the type of the network specification.
Khaidir Noor, Muhammad
2018-03-01
Reserve estimation is one of the important tasks in evaluating a mining project: it estimates the quality and quantity of mineral occurrences that have economic value. The reserve calculation method plays an important role in determining the efficiency of commercial exploration of a deposit. This study was intended to calculate the ore reserves contained in the study area, especially Pit Block 3A. The nickel ore reserve was estimated using detailed exploration data, processed in Surpac 6.2 with the Inverse Distance Weighting (squared power) estimation method. The ore estimate obtained from 30 drilling data points was 76,453.5 tons of saprolite with a density of 1.5 ton/m3 and a COG (Cut Off Grade) of Ni ≥ 1.6%, while the overburden was 112,570.8 tons with a waste rock density of 1.2 ton/m3. The resulting stripping ratio (SR) of 1.47:1 was smaller than the SR limit set at 1.60:1.
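The reported stripping ratio can be sanity-checked from the tonnages given in the abstract (waste tonnage divided by ore tonnage):

```python
ore_t = 76453.5     # saprolite ore, tons (COG Ni >= 1.6 %)
waste_t = 112570.8  # overburden, tons
sr = waste_t / ore_t
print(f"SR = {sr:.2f} : 1")  # SR = 1.47 : 1, below the 1.60 : 1 limit
```

A stripping ratio below the limit means less waste must be moved per ton of ore than the economic cutoff allows, supporting the viability of the pit.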
Chen, Xiangdong; He, Liwen; Jeon, Gwanggil; Jeong, Jechang
2014-05-01
In this paper, we present a novel color image demosaicking algorithm based on a directional weighted interpolation method and a gradient inverse-weighted filter-based refinement method. By applying directional weighted interpolation, the missing center pixel is interpolated; then, using the nearest neighboring pixels of the pre-interpolated pixel within the same color channel, the accuracy of the interpolation is refined with the five-point gradient inverse weighted filtering method we propose. The refined interpolated pixel values can be used to estimate the other missing pixel values successively according to the inter-channel correlation. Experimental analysis revealed that our proposed algorithm provides superior performance in terms of both objective and subjective image quality compared with conventional state-of-the-art demosaicking algorithms. Our implementation has very low complexity and is therefore well suited for real-time applications.
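The gradient inverse-weighted refinement idea can be illustrated as follows. This is a deliberately simplified one-sample sketch (weights inversely proportional to the intensity difference from the current estimate), not the five-point filter actually proposed in the paper:

```python
def gradient_inverse_weighted(center, neighbors, eps=1.0):
    """Refine an interpolated value: neighbors close in intensity to the
    current estimate (small local gradient) receive large weights
    w = 1 / (|v - center| + eps), so outliers contribute little."""
    weights = [1.0 / (abs(v - center) + eps) for v in neighbors]
    return sum(w * v for w, v in zip(weights, neighbors)) / sum(weights)

# A pre-interpolated pixel of 100 refined from same-channel neighbors;
# the outlier 140 is strongly down-weighted
print(round(gradient_inverse_weighted(100.0, [98.0, 102.0, 100.0, 140.0]), 1))  # 100.6
```

Compare this with a plain mean of the neighbors (110.0): the inverse-gradient weighting keeps the refined value near the consistent neighbors rather than being dragged toward the outlier.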
Minimum-weight perfect matching for non-intrinsic distances on the line
Delon, Julie; Salomon, Julien; Sobolevski, Andrei
2011-01-01
13 pages, figures in TikZ, uses xcolor package; introduction and the concluding section have been expanded. Consider a real line equipped with a (not necessarily intrinsic) distance. We deal with the minimum-weight perfect matching problem for a complete graph whose points are located on the line and whose edges have weights equal to distances along the line. This problem is closely related to one-dimensional Monge-Kantorovich transport optimization. The main result of the present note is a ...
Energy Technology Data Exchange (ETDEWEB)
Riedel, C; AlegrIa, A; Colmenero, J [Departamento de Fisica de Materiales UPV/EHU, Facultad de Quimica, Apartado 1072, 20080 San Sebastian (Spain); Arinero, R [Institut d' Electronique du Sud (IES), UMR CNRS 5214, Universite Montpellier II, CC 082, Place E Bataillon, 34095 Montpellier Cedex (France); Saenz, J J, E-mail: riedel@ies.univ-montp2.fr [Donostia International Physics Center, Paseo Manuel de Lardizabal 4, 20018 San Sebastian (Spain)
2011-08-26
We present a numerical and analytical study of the behavior of both the electrostatic force and the force gradient created on an atomic force microscope tip by a charge trapped below the surface of a dielectric, as a function of the dielectric constant and the tip-sample distance. As expected, the force decreases monotonically as the dielectric constant increases. However, a maximum is found in the dielectric-constant dependence of the force gradient. This maximum occurs in the typical experimental parameter range and depends on the tip-sample distance and the sample thickness. The analytical study permits us to understand the physical origin of this phenomenon and is in good agreement with the numerical simulation for small tip-sample distances. We also report a study exemplifying a possible contrast inversion in electrostatic force microscopy (EFM) signals while scanning, at different heights, two charges trapped in a sample having heterogeneous dielectric domains. In addition to this particular contrast inversion effect, this study can be considered as a way to gain insight into the mechanisms of EFM image formation as a function of the dielectric constant and tip-sample distance.
Effect of marital distance on birth weight and length of offspring
Directory of Open Access Journals (Sweden)
Kozieł Sławomir
2017-09-01
Full Text Available Marital distance (MD, the geographical distance between birthplaces of spouses, is considered an agent favouring occurrence of heterosis and can be used as a measure of its level. Heterosis itself is a phenomenon of hybrid vigour and seems to be an important factor regulating human growth and development. The main aim of the study is to examine potential effects of MD on birth weight and length of offspring, controlling for socioeconomic status (SES, mother’s age and birth order. Birth weight (2562 boys and 2572 girls and length (2526 boys, 2542 girls of children born in Ostrowiec Swietokrzyski (Poland in 1980, 1983, 1985 and 1988 were recorded during cross-sectional surveys carried out between 1994-1999. Data regarding the socio-demographic variables of families were provided by the parents. Analysis of covariance showed that MD significantly affected both birth weight and length, allowing for sex, birth order, mother’s age and SES of family. For both sexes, a greater marital distance was associated with a higher birth weight and a longer birth length. Our results support the hypothesis that a greater geographical distance between the birth places of parents may contribute to the heterosis effects in offspring. Better birth outcomes may be one of the manifestations of these effects.
Cardiovascular responses to static exercise in distance runners and weight lifters
Longhurst, J. C.; Kelly, A. R.; Gonyea, W. J.; Mitchell, J. H.
1980-01-01
Three groups of athletes including long-distance runners, competitive and amateur weight lifters, and age- and sex-matched control subjects have been studied by hemodynamic and echocardiographic methods in order to determine the effect of the training programs on the cardiovascular response to static exercise. Blood pressure, heart rate, and double product data at rest and at fatigue suggest that competitive endurance (dynamic exercise) training alters the cardiovascular response to static exercise. In contrast to endurance exercise, weight lifting (static exercise) training does not alter the cardiovascular response to static exercise: weight lifters responded to static exercise in a manner very similar to that of the control subjects.
Amalia, Junita; Purhadi, Otok, Bambang Widjanarko
2017-11-01
Poisson distribution is a discrete distribution for count data, with a single parameter that defines both mean and variance. Poisson regression assumes mean and variance to be equal (equidispersion). Nonetheless, some count data do not satisfy this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion causes underestimated standard errors and, consequently, incorrect decisions in statistical tests. Paired count data with correlation follow a bivariate Poisson distribution. When over-dispersion is present, modeling paired count data with simple bivariate Poisson regression is not sufficient. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling paired count data with over-dispersion. The BPIGR model produces a global model for all locations. On the other hand, each location has different geographic, social, cultural, and economic conditions, so Geographically Weighted Regression (GWR) is needed. The weighting function at each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. Parameter estimation of the GWBPIGR model is obtained by the Maximum Likelihood Estimation (MLE) method, while hypothesis testing of the GWBPIGR model is carried out by the Maximum Likelihood Ratio Test (MLRT) method.
Using synchronous distance-education technology to deliver a weight management intervention.
Dunn, Carolyn; Whetstone, Lauren MacKenzie; Kolasa, Kathryn M; Jayaratne, K S U; Thomas, Cathy; Aggarwal, Surabhi; Nordby, Kelly; Riley, Kenisha E M
2014-01-01
To compare the effectiveness of online delivery of a weight management program using synchronous (real-time), distance-education technology to in-person delivery. Synchronous, distance-education technology was used to conduct weekly sessions for participants with a live instructor. Program effectiveness was indicated by changes in weight, body mass index (BMI), waist circumference, and confidence in ability to eat healthy and be physically active. Online class participants (n = 398) had significantly greater reductions in BMI, weight, and waist circumference than in-person class participants (n = 1,313). Physical activity confidence increased more for in-person than online class participants. There was no difference for healthy eating confidence. This project demonstrates the feasibility of using synchronous distance-education technology to deliver a weight management program. Synchronous online delivery could be employed with no loss to improvements in BMI, weight, and waist circumference. Copyright © 2014 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Jiping Liu
2016-08-01
Full Text Available Previous studies have demonstrated that non-Euclidean distance metrics can improve model fit in the geographically weighted regression (GWR) model. However, the GWR model considers only spatial nonstationarity and does not address variation in local temporal effects. Therefore, this paper explores a geographically and temporally weighted regression (GTWR) approach that accounts for both spatial and temporal nonstationarity simultaneously to estimate house prices based on travel-time distance metrics. Using house price data collected between 1980 and 2016, the house price response and explanatory variables are modeled using both the GWR and the GTWR approaches. Compared with the GWR model under either Euclidean or travel distance metrics, the GTWR model with travel distance obtains the highest coefficient of determination (R²) and the lowest Akaike information criterion (AIC). The results show that the GTWR model provides a relatively high goodness of fit and sufficient space-time explanatory power with non-Euclidean distance metrics. The results of this study can be used to formulate more effective policies for real estate management.
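The distance-decay weighting that drives a GWR/GTWR fit can be sketched with a Gaussian kernel; the bandwidth and the travel-time values below are illustrative assumptions, not the paper's calibration:

```python
import math

def gaussian_kernel(d, bandwidth):
    """GWR-style distance-decay weight: observations farther (here, in travel
    time) from the regression point contribute less to the local fit."""
    return math.exp(-0.5 * (d / bandwidth) ** 2)

# Travel-time distances (minutes) from a regression point, bandwidth 10 min
for d in (0.0, 5.0, 10.0, 20.0):
    print(round(gaussian_kernel(d, 10.0), 3))
```

Swapping Euclidean distance for travel time in `d` is exactly the kind of non-Euclidean metric substitution the study evaluates; the kernel itself is unchanged.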
Using synchronous distance education to deliver a weight loss intervention: A randomized trial.
Dunn, Carolyn; Olabode-Dada, Olusola; Whetstone, Lauren; Thomas, Cathy; Aggarwal, Surabhi; Nordby, Kelly; Thompson, Samuel; Johnson, Madison; Allison, Christine
2016-01-01
To implement a randomized trial to evaluate the effectiveness of a weight loss program delivered using synchronous distance education compared with a wait-list control group with 6-month follow-up. Adults with a body mass index (BMI) ≥25 were randomized to the intervention (n = 42) or wait-list control group (n = 38). The intervention group participated in a synchronous, online, 15-week weight loss program; weight loss was the primary outcome. Secondary measures included height, BMI, and confidence in ability to be physically active and eat healthy. Assessments occurred at three and four time points in the intervention and control group, respectively. Participants who completed the program lost significantly more weight (1.8 kg) than those in the wait-list control group (0.25 kg) at week 15 [F(1,61) = 6.19, P = 0.02] and had a greater reduction in BMI (0.71 vs. 0.14 kg/m(2) ), [F(1,61) = 7.45, P = 0.01]. There were no significant differences between the intervention and the wait-list control groups for change in confidence in ability to be physically active or eat healthy. Weight loss was maintained at 6 months. Use of synchronous distance education is a promising approach for weight loss. The results of this study will help to inform future research that employs Web-based interventions. © 2015 The Obesity Society.
Regularized Laplace-Fourier-Domain Full Waveform Inversion Using a Weighted l2 Objective Function
Jun, Hyunggu; Kwon, Jungmin; Shin, Changsoo; Zhou, Hongbo; Cogan, Mike
2017-03-01
Full waveform inversion (FWI) can be applied to obtain an accurate velocity model that contains important geophysical and geological information. FWI suffers from the local minimum problem when the starting model is not sufficiently close to the true model. Therefore, an accurate macroscale velocity model is essential for successful FWI, and Laplace-Fourier-domain FWI is appropriate for obtaining such a velocity model. However, conventional Laplace-Fourier-domain FWI remains an ill-posed and ill-conditioned problem, meaning that small errors in the data can result in large differences in the inverted model. This approach also suffers from certain limitations related to the logarithmic objective function. To overcome the limitations of conventional Laplace-Fourier-domain FWI, we introduce a weighted l2 objective function, instead of the logarithmic objective function, as the data-domain objective function, and we also introduce two different model-domain regularizations: first-order Tikhonov regularization and prior model regularization. The weighting matrix for the data-domain objective function is constructed to suitably enhance the far-offset information. Tikhonov regularization smoothes the gradient, and prior model regularization allows reliable prior information to be taken into account. Two hyperparameters are obtained through trial and error and used to control the trade-off and achieve an appropriate balance between the data-domain and model-domain gradients. The application of the proposed regularizations facilitates finding a unique solution via FWI, and the weighted l2 objective function ensures a more reasonable residual, thereby improving the stability of the gradient calculation. Numerical tests performed using the Marmousi synthetic dataset show that the use of the weighted l2 objective function and the model-domain regularizations significantly improves the Laplace-Fourier-domain FWI. Because the Laplace-Fourier-domain FWI is improved, the
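A weighted l2 data-domain objective of the kind described can be sketched as follows; the offset-dependent weights here are hypothetical stand-ins for the paper's weighting matrix:

```python
def weighted_l2_misfit(observed, predicted, weights):
    """Weighted l2 objective: 0.5 * sum_i w_i * (d_obs_i - d_pred_i)**2.
    Larger weights on far-offset traces enhance their contribution."""
    return 0.5 * sum(w * (o - p) ** 2 for o, p, w in zip(observed, predicted, weights))

obs = [1.0, 0.8, 0.2]      # toy observed trace amplitudes, near to far offset
pred = [0.9, 0.7, 0.1]     # toy modeled amplitudes
w_far = [1.0, 2.0, 4.0]    # hypothetical offset-dependent weights
print(weighted_l2_misfit(obs, pred, w_far))
```

With uniform weights the three residuals contribute equally; the increasing weights make the far-offset residual dominate the misfit and therefore the gradient, which is the stated purpose of the weighting matrix.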
Nguyen, Quynh C; Osypuk, Theresa L; Schmidt, Nicole M; Glymour, M Maria; Tchetgen Tchetgen, Eric J
2015-03-01
Despite the recent flourishing of mediation analysis techniques, many modern approaches are difficult to implement or applicable to only a restricted range of regression models. This report provides practical guidance for implementing a new technique utilizing inverse odds ratio weighting (IORW) to estimate natural direct and indirect effects for mediation analyses. IORW takes advantage of the odds ratio's invariance property and condenses information on the odds ratio for the relationship between the exposure (treatment) and multiple mediators, conditional on covariates, by regressing exposure on mediators and covariates. The inverse of the covariate-adjusted exposure-mediator odds ratio association is used to weight the primary analytical regression of the outcome on treatment. The treatment coefficient in such a weighted regression estimates the natural direct effect of treatment on the outcome, and indirect effects are identified by subtracting direct effects from total effects. Weighting renders treatment and mediators independent, thereby deactivating indirect pathways of the mediators. This new mediation technique accommodates multiple discrete or continuous mediators. IORW is easily implemented and is appropriate for any standard regression model, including quantile regression and survival analysis. An empirical example is given using data from the Moving to Opportunity (1994-2002) experiment, testing whether neighborhood context mediated the effects of a housing voucher program on obesity. Relevant Stata code (StataCorp LP, College Station, Texas) is provided. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
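The IORW weighting step can be illustrated with a toy calculation. The logistic coefficient below stands in for a fitted exposure-mediator model and is purely hypothetical; in practice it comes from regressing exposure on mediators and covariates, as the report describes:

```python
import math

def iorw_weight(exposed, mediator_value, beta_mediator):
    """Inverse odds ratio weight: exposed units receive the inverse of the
    exposure-mediator odds ratio, exp(-beta * M); unexposed units get 1."""
    if not exposed:
        return 1.0
    return math.exp(-beta_mediator * mediator_value)

# Hypothetical fitted log-odds-ratio of exposure per unit of the mediator
beta = 0.5
print(round(iorw_weight(True, 2.0, beta), 3))  # exp(-1.0) ≈ 0.368
print(iorw_weight(False, 2.0, beta))           # 1.0
```

Weighting the outcome regression by these values renders treatment and mediator independent, so the treatment coefficient in the weighted model estimates the natural direct effect; the indirect effect is then the total effect minus this direct effect.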
Application of weighted early-arrival waveform inversion to shallow land data
Yu, Han
2014-03-01
Seismic imaging of deep land targets is usually difficult because the near-surface velocities are not accurately estimated. Recent studies have shown that inverting traces weighted by the energy of the early arrivals can improve the accuracy of estimated shallow velocities. In this work, this is explained by showing that the associated misfit gradient function tends to be sensitive to the kinematics of wave propagation and insensitive to the dynamics. A synthetic example verifies the theoretical predictions and shows that the effects of noise and unpredicted amplitude variations in the inversion are reduced using this weighted early-arrival waveform inversion (WEWI). We also apply this method to a 2D land data set for estimating the near-surface velocity distribution. The reverse time migration images suggest that, compared to the tomogram inverted directly from the early-arrival waveforms, the WEWI tomogram provides a more convincing velocity model and more focused reflections in the deeper part of the image. © 2014 Elsevier B.V.
Hamming Distance Method with Subjective and Objective Weights for Personnel Selection
Directory of Open Access Journals (Sweden)
R. Md Saad
2014-01-01
Full Text Available Multicriteria decision making (MCDM is one of the methods that popularly has been used in solving personnel selection problem. Alternatives, criteria, and weights are some of the fundamental aspects in MCDM that need to be defined clearly in order to achieve a good result. Apart from these aspects, fuzzy data has to take into consideration that it may arise from unobtainable and incomplete information. In this paper, we propose a new approach for personnel selection problem. The proposed approach is based on Hamming distance method with subjective and objective weights (HDMSOW’s. In case of vagueness situation, fuzzy set theory is then incorporated onto the HDMSOW’s. To determine the objective weight for each attribute, the fuzzy Shannon’s entropy is considered. While for the subjective weight, it is aggregated into a comparable scale. A numerical example is presented to illustrate the HDMSOW’s.
Akkaya, Nuray; Akkaya, Semih; Gungor, Harun R; Yaşar, Gokce; Atalay, Nilgun Simsir; Sahin, Fusun
2017-01-01
Although functional results of combined rehabilitation programs have been reported, there have been no reports studying the effects of solo pendulum exercises on ultrasonographic measurements of acromiohumeral distance (AHD). To investigate the effects of weighted and un-weighted pendulum exercises on ultrasonographic AHD and clinical symptoms in patients with subacromial impingement syndrome, patients were randomized to performing weighted (1.5 kilogram hand-held dumbbell, N = 18) or un-weighted (free of weight, N = 16) pendulum exercises for 4 weeks, 3 sessions/day. Exercises were repeated for each direction of shoulder motion in each session (ten minutes). Clinical status was evaluated by the Constant score and the Shoulder Pain Disability Index (SPADI). Ultrasonographic measurements of AHD at 0°, 30° and 60° shoulder abduction were performed. All clinical and ultrasonographic evaluations were performed at the beginning of the exercise program and at the end of the 4-week exercise program. Thirty-four patients (23 females, 11 males; mean age 41.7 ± 8.9 years) were evaluated. Significant clinical improvements were detected in both exercise groups between pre- and post-treatment evaluations, with no significant difference in AHD measurements at 0°, 30° and 60° shoulder abduction between groups (p > 0.05). There was no significant difference in pre- to post-treatment narrowing of AHD (narrowing at 0°-30° and 0°-60°) between groups (p > 0.05). While significant clinical improvements were achieved with both weighted and un-weighted solo pendulum exercises, no significant difference was detected in ultrasonographic AHD measurements between exercise groups.
A Distance-Weighted Graph-Cut Method for the Segmentation of Laser Point Clouds
Dutta, A.; Engels, J.; Hahn, M.
2014-08-01
Normalized Cut according to (Shi and Malik 2000) is a well-established divisive image segmentation method. Here we use Normalized Cut for the segmentation of laser point clouds in urban areas. In particular we propose an edge weight measure which takes local plane parameters, RGB values and eigenvalues of the covariance matrices of the local point distribution into account. Due to its target function, Normalized Cut favours cuts with "small cut lines/surfaces", which appears to be a drawback for our application. We therefore modify the target function, weighting the similarity measures with distance-dependent weights. We call the induced minimization problem "Distance-weighted Cut" (DWCut). The new target function leads to a slightly more complicated generalized eigenvalue problem than in the case of the Normalized Cut; on the other hand, the new target function is easier to interpret and avoids the just-mentioned drawback. DWCut can be beneficially combined with an aggregation in order to reduce the computational effort and to avoid shortcomings due to insufficient plane parameters. Finally we present examples of the successful application of the Distance-weighted Cut principle. The method was implemented as a plugin for the free and open-source geographic information system SAGA; for preprocessing steps the proprietary SAGA-based LiDAR software LIS was applied.
A distance weighted-based approach for self-organized aggregation in robot swarms
Khaldi, Belkacem
2017-12-14
In this paper, a Distance-Weighted K-Nearest Neighboring (DW-KNN) topology is proposed to study self-organized aggregation as an emergent swarming behavior within robot swarms. A virtual physics approach is applied over the proposed neighborhood topology to keep the robots together. A distance-weighted function based on a Smoothed Particle Hydrodynamics (SPH) interpolation approach is used as a key factor to identify the K nearest neighbors taken into account when aggregating the robots. The virtual physical connectivity among these neighbors is achieved using a virtual viscoelastic-based proximity model. With the ARGoS-based simulator, we model and evaluate the proposed approach, showing various self-organized aggregations performed by a swarm of N foot-bot robots.
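The DW-KNN neighbor selection can be sketched as follows. The kernel below is a simple cubic falloff standing in for the SPH interpolation kernel, and the robot positions are invented:

```python
import math

def dw_knn(position, others, k, h):
    """Pick the k neighbors with the largest kernel weight w = W(d, h);
    with a monotonically decreasing kernel this is the k nearest robots,
    and robots beyond the support radius h are excluded entirely."""
    def kernel(d):                      # cubic falloff, illustrative only
        q = d / h
        return max(0.0, (1.0 - q) ** 3)
    scored = sorted(((kernel(math.dist(position, p)), p) for p in others), reverse=True)
    return [p for w, p in scored[:k] if w > 0.0]

flock = [(1.0, 0.0), (3.0, 0.0), (0.5, 0.0), (10.0, 0.0)]
print(dw_knn((0.0, 0.0), flock, k=2, h=5.0))  # [(0.5, 0.0), (1.0, 0.0)]
```

The selected neighbors would then feed the virtual viscoelastic proximity model; the robot at distance 10 falls outside the kernel support and never participates, which bounds each robot's interaction set.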
Prasetiyowati, S. S.; Sibaroni, Y.
2018-03-01
Dengue hemorrhagic fever (DHF) is a disease caused by the Dengue virus of the genus Flavivirus, family Flaviviridae. Indonesia is the country with the highest number of dengue cases in Southeast Asia. In addition to mosquitoes as vectors and humans as hosts, environmental and social factors also contribute to the spread of dengue fever. Preventing an epidemic of the disease requires fast and accurate action, which in turn requires appropriate information on the occurrence of the epidemic. Therefore, complete and accurate information on the spread pattern of endemic areas is necessary, so that precautions can be taken as early as possible. Information on dispersal patterns can be obtained by various methods based on empirical and theoretical considerations; one of them estimates the number of infected patients in a region over space and time. In the first step of this research, the number of DHF patients in 2016 to 2018 was predicted from 2010 to 2015 data using GSTAR(1,1). In the second phase, the distribution pattern of dengue disease areas was predicted. Furthermore, based on the characteristics of DHF epidemic trends, i.e. falling, stable or rising, distribution patterns of dengue areas were analysed with IDW and Kriging (ordinary and universal kriging). The difference between IDW and Kriging lies in the initial process underlying the prediction. Based on the experimental results, the dispersion patterns of epidemic areas of dengue disease obtained with IDW and ordinary kriging are similar over the studied period.
Bonsu, Kwadwo Osei; Owusu, Isaac Kofi; Buabeng, Kwame Ohene; Reidpath, Daniel D; Kadirvelu, Amudha
2017-04-01
Randomized controlled trials of statins have not demonstrated significant benefits in outcomes of heart failure (HF). However, randomized controlled trials may not always be generalizable. The aim was to determine whether statin use, and statin type (lipophilic or hydrophilic), improves long-term outcomes in Africans with HF. This was a retrospective longitudinal study of HF patients aged ≥18 years hospitalized at a tertiary healthcare center between January 1, 2009 and December 31, 2013 in Ghana. Patients were eligible if they were discharged from a first admission for HF (index admission) and followed up to the time of all-cause, cardiovascular, or HF mortality or the end of the study. A multivariable time-dependent Cox model and inverse-probability-of-treatment weighting of a marginal structural model were used to estimate associations between statin treatment and outcomes. Adjusted hazard ratios were also estimated for lipophilic and hydrophilic statins compared with no statin use. The study included 1488 patients (mean age 60.3±14.2 years) with 9306 person-years of observation. Using the time-dependent Cox model, the 5-year adjusted hazard ratios with 95% CI for statin treatment on all-cause, cardiovascular, and HF mortality were 0.68 (0.55-0.83), 0.67 (0.54-0.82), and 0.63 (0.51-0.79), respectively. Use of inverse-probability-of-treatment weighting resulted in estimates of 0.79 (0.65-0.96), 0.77 (0.63-0.96), and 0.77 (0.61-0.95) for statin treatment on all-cause, cardiovascular, and HF mortality, respectively, compared with no statin use. Among Africans with HF, statin treatment was associated with a significant reduction in mortality. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
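The inverse-probability-of-treatment weighting used here reweights each patient by the inverse of the estimated probability of the treatment actually received, so treated and untreated groups become comparable on measured confounders. A minimal sketch of the weight computation (the propensity scores are illustrative; in the study they would come from a fitted treatment model):

```python
def ipt_weight(treated, propensity):
    """Inverse-probability-of-treatment weight for one patient.

    treated: 1 if the patient received a statin, 0 otherwise.
    propensity: estimated P(statin | covariates) for this patient.
    """
    return 1.0 / propensity if treated else 1.0 / (1.0 - propensity)

# Patients whose observed treatment was unlikely given their covariates
# receive the largest weights, balancing the pseudo-population.
cohort = [(1, 0.8), (0, 0.8), (1, 0.2), (0, 0.2)]  # illustrative values
weights = [ipt_weight(t, e) for t, e in cohort]    # ≈ [1.25, 5.0, 5.0, 1.25]
```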
Convers, Jaime; Custodio, Susana
2016-04-01
Rapid assessments of seismological parameters pertinent to the nucleation and rupture of earthquakes are now routinely calculated by local and regional seismic networks. With the increasing number of stations, fast data transmission, and advanced computing power, we can now go beyond accurate magnitudes and epicentral locations to rapid estimations of higher-order earthquake parameters such as the seismic moment tensor. Although an increased number of stations can minimize azimuthal gaps, it also increases computation time and potentially introduces poor-quality data that often lowers the stability of automated inversions. In this presentation, we focus on moment tensor calculations for earthquakes occurring offshore the southwestern Iberian peninsula. The available regional seismic data in this region have a significant azimuthal gap that results from the geographical setting. In this case, increasing the number of data from stations spanning a small area (and at a small azimuthal angle) increases the calculation time without necessarily improving the accuracy of the inversion. Additionally, limited regional data coverage makes it imperative to exclude poor-quality data, as their negative effect on moment tensor inversions is often significant. In our work, we analyze methods to minimize the effects of large azimuthal gaps in regional station coverage, of potential bias by uneven station distribution, and of poor data quality in moment tensor inversions obtained for earthquakes offshore the southwestern Iberian peninsula. We calculate moment tensors using the KIWI tools, and we implement different configurations of station weighting and cross-correlation of neighboring stations, with the aim of automatically estimating and selecting high-quality data, improving the accuracy of results, and reducing the computation time of moment tensor inversions. As the number of available recent intermediate-size events offshore the Iberian peninsula is limited due to the long
Tumour nuclear oestrogen receptor beta 1 correlates inversely with parathyroid tumour weight.
Haglund, Felix; Rosin, Gustaf; Nilsson, Inga-Lena; Juhlin, C Christofer; Pernow, Ylva; Norenstedt, Sophie; Dinets, Andrii; Larsson, Catharina; Hartman, Johan; Höög, Anders
2015-03-01
Primary hyperparathyroidism (PHPT) is a common endocrinopathy, frequently caused by a parathyroid adenoma, rarely by a parathyroid carcinoma that lacks effective oncological treatment. As the majority of cases are present in postmenopausal women, oestrogen signalling has been implicated in the tumourigenesis. Oestrogen receptor beta 1 (ERB1) and ERB2 have been recently identified in parathyroid adenomas, the former inducing genes coupled to tumour apoptosis. We applied immunohistochemistry and slide digitalisation to quantify nuclear ERB1 and ERB2 in 172 parathyroid adenomas, atypical adenomas and carcinomas, and ten normal parathyroid glands. All the normal parathyroid glands expressed ERB1 and ERB2. The majority of tumours expressed ERB1 (70.6%) at varying intensities, and ERB2 (96.5%) at strong intensities. Parathyroid carcinomas expressed ERB1 in three out of six cases and ERB2 in five out of six cases. The intensity of tumour nuclear ERB1 staining significantly correlated inversely with tumour weight (P=0.011), and patients whose tumours were classified as ERB1-negative had significantly greater tumour weight as well as higher serum calcium (P=0.002) and parathyroid hormone levels (P=0.003). Additionally, tumour nuclear ERB1 was not expressed differentially with respect to sex or age of the patient. Levels of tumour nuclear ERB2 did not correlate with clinical characteristics. In conclusion, decreased ERB1 immunoreactivity is associated with increased tumour weight in parathyroid adenomas. Given the previously reported correlation with tumour-suppressive signalling, selective oestrogen receptor modulation (SERMs) may play a role in the treatment of parathyroid carcinomas. Future studies of SERMs and oestrogen treatment in PHPT should consider tumour weight as a potential factor in pharmacological responsiveness. © 2015 The authors.
Improving the accuracy of k-nearest neighbor using local mean based and distance weight
Syaliman, K. U.; Nababan, E. B.; Sitompul, O. S.
2018-03-01
In k-nearest neighbor (kNN), the determination of classes for new data is normally performed by a simple majority vote, which may ignore the similarities among data and allows the occurrence of a double majority class that can lead to misclassification. In this research, we propose an approach to resolve the majority vote issues by calculating the distance weight using a combination of local mean based k-nearest neighbor (LMKNN) and distance weight k-nearest neighbor (DWKNN). The accuracy of results is compared to the accuracy acquired from the original kNN method using several datasets from the UCI Machine Learning Repository, Kaggle and Keel, such as ionosphere, iris, voice genre, lower back pain, and thyroid. In addition, the proposed method is also tested using real data from a public senior high school in the city of Tualang, Indonesia. Results show that the combination of LMKNN and DWKNN was able to increase the classification accuracy of kNN, whereby the average accuracy increase on test data is 2.45%, with the highest increase in accuracy of 3.71% occurring on the lower back pain symptoms dataset. For the real data, the increase in accuracy is as high as 5.16%.
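One way to read the combination is: for each class, take the k nearest same-class training points, form a local mean weighted by inverse distance, and assign the query to the class whose weighted local mean is closest. The sketch below follows that reading; the exact combination is an assumption, and the paper's formulation may differ.

```python
import math
from collections import defaultdict

def lm_dw_knn(query, train, k=3):
    """Classify `query` by inverse-distance-weighted local class means
    (a sketch combining the LMKNN and DWKNN ideas).

    train: list of (features, label); features are equal-length tuples.
    """
    by_class = defaultdict(list)
    for x, y in train:
        by_class[y].append(x)
    best_label, best_dist = None, float("inf")
    for label, points in by_class.items():
        # k nearest same-class neighbors of the query
        nearest = sorted(points, key=lambda p: math.dist(p, query))[:k]
        # inverse-distance weights (epsilon guards exact matches)
        ws = [1.0 / (math.dist(p, query) + 1e-12) for p in nearest]
        total = sum(ws)
        mean = tuple(sum(w * p[i] for w, p in zip(ws, nearest)) / total
                     for i in range(len(query)))
        d = math.dist(mean, query)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(lm_dw_knn((0.2, 0.2), train, k=2))  # prints "a"
```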
Directory of Open Access Journals (Sweden)
Leonardo Oliveira Reis
2013-01-01
Full Text Available Background. Protective factors against Gleason upgrading and its impact on outcomes after surgery warrant better definition. Patients and Methods. 343 consecutive patients were categorized at biopsy (BGS) and prostatectomy (PGS) as Gleason score ≤6, 7, and ≥8; 94 patients (27.4%) had PSA recurrence, mean followup 80.2 months (median 99). Independent predictors of Gleason upgrading (logistic regression) and disease-free survival (DFS) (Kaplan-Meier, log-rank) were determined. Results. Gleason discordance was 45.7% (37.32% upgrading and 8.45% downgrading). Upgrading risk decreased by 2.4% for each 1 g of prostate weight increment, while it increased by 10.2% for every 1 ng/mL of PSA, 72.0% for every 0.1 unit of PSA density, and was 21 times higher for those with BGS 7. Gleason upgrading showed increased clinical stage (P=0.019), higher tumor extent (P=0.009), extraprostatic extension (P=0.04), positive surgical margins (P<0.001), seminal vesicle invasion (P=0.003), fewer "insignificant" tumors (P<0.001), and also worse DFS, χ2=4.28, df=1, P=0.039. However, when setting the final Gleason score (BGS ≤6 to PGS 7 versus BGS 7 to PGS 7), avoiding allocation bias, the DFS impact is not confirmed, χ2=0.40, df=1, P=0.530. Conclusions. Gleason upgrading is substantial and confers worse outcomes. Prostate weight is inversely related to upgrading and its protective effect warrants further evaluation.
International Nuclear Information System (INIS)
Oppenheim, C.; Dormont, D.; Lehericy, S.; Marsault, C.; Logak, M.; Manai, R.; Samson, Y.; Rancurel, G.
2000-01-01
We evaluated the feasibility and use of diffusion-weighted and fluid-attenuated inversion-recovery pulse sequences performed as an emergency for patients with acute ischaemic stroke. A 5-min MRI session was designed as an emergency diagnostic procedure for patients admitted with suspected acute ischaemic stroke. We reviewed routine clinical implementation of the procedure, and its sensitivity and specificity for acute ischaemic stroke over the first 8 months. We imaged 91 patients (80 min to 48 h following the onset of stroke). Clinical deficit had resolved in less than 3 h in 15 patients, and the remaining 76 were classified as stroke (59) or stroke-like (17) after hospital discharge. Sensitivity of MRI for acute ischaemic stroke was 98 %, specificity 100 %. MRI provided an immediate and accurate picture of the number, site, size and age of ischaemic lesions in stroke and simplified diagnosis in stroke-like episodes. The feasibility and high diagnostic accuracy of emergency MRI in acute stroke strongly support its routine use in a stroke centre. (orig.)
Willems, Sjw; Schat, A; van Noorden, M S; Fiocco, M
2018-02-01
Censored data make survival analysis more complicated because exact event times are not observed. Statistical methodology developed to account for censored observations assumes that patients' withdrawal from a study is independent of the event of interest. However, in practice, some covariates might be associated with both lifetime and the censoring mechanism, inducing dependent censoring. In this case, standard survival techniques, like the Kaplan-Meier estimator, give biased results. The inverse probability censoring weighted estimator was developed to correct for bias due to dependent censoring. In this article, we explore the use of inverse probability censoring weighting methodology and describe why it is effective in removing the bias. Since implementing this method is highly time consuming and requires programming and mathematical skills, we propose a user-friendly algorithm in R. Applications to a toy example and to a medical data set illustrate how the algorithm works. A simulation study was carried out to investigate the performance of the inverse probability censoring weighted estimators in situations where dependent censoring is present in the data. In the simulation process, different sample sizes, strengths of the censoring model, and percentages of censored individuals were chosen. Results show that in each scenario inverse probability censoring weighting reduces the bias induced in the traditional Kaplan-Meier approach where dependent censoring is ignored.
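The core of the correction can be sketched as follows: each observed event is up-weighted by the inverse of the probability of having remained uncensored up to its event time, while censored observations receive weight zero. In this sketch the censoring survival function G is assumed known; in practice it is estimated, e.g. by a Kaplan-Meier fit on the censoring indicator (the article's R algorithm automates that step).

```python
import math

def ipc_weights(times, events, censor_surv):
    """Inverse-probability-of-censoring weights (a sketch).

    times: observed times; events: 1 = event observed, 0 = censored;
    censor_surv: G(t) = P(remaining uncensored at t). Censored subjects
    get weight 0; observed events are up-weighted by 1/G(t).
    """
    return [e / censor_surv(t) for t, e in zip(times, events)]

# Illustrative censoring model G(t) = exp(-0.1 t) -- an assumption here,
# estimated from the data in practice.
G = lambda t: math.exp(-0.1 * t)
w = ipc_weights([1.0, 2.0, 3.0], [1, 0, 1], G)
```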
Gasser, T; Ziegler, P; Kneip, A; Prader, A; Molinari, L; Largo, R H
1993-01-01
Based on structural average curves of distance, velocity and acceleration, an analysis of the longitudinally assessed growth of weight, arm and calf circumferences and skinfolds (biceps, triceps, suprailiac, subscapular) was undertaken. The data come from the first Zürich longitudinal growth study and represent a normal sample. In addition to a graphic analysis, timing, intensity and duration of the mid-growth spurt (MS) and of the pubertal spurt (PS) are quantified via descriptive parameters of growth. Mechanisms are different and more complex for these variables, in particular for skinfolds, compared to previously studied somatic variables, such as height. Skinfolds showed a rapid decline to a negative velocity minimum in the first year, recovering to a pre-PS fat spurt, earlier and more pronounced for central (suprailiac, subscapular) than for peripheral skinfolds (biceps, triceps). At age of peak height velocity a drop occurred, stronger for boys, followed by a post-PS spurt. A further analysis demonstrates that these ups and downs in skinfold velocity are mainly due to subjects with thick skinfolds. Weight and circumferences show a distinct MS, with sex-independent characteristics and a strong, sex-dependent PS. Weight and even more arm circumference are delayed compared to height in puberty.
Classification of EEG Signals using adaptive weighted distance nearest neighbor algorithm
Directory of Open Access Journals (Sweden)
E. Parvinnia
2014-01-01
Full Text Available Electroencephalogram (EEG) signals are often used to diagnose diseases such as seizure, Alzheimer's, and schizophrenia. One main problem with the recorded EEG samples is that they are not equally reliable due to artifacts at the time of recording. EEG signal classification algorithms should have a mechanism to handle this issue. It seems that using adaptive classifiers can be useful for biological signals such as EEG. In this paper, a general adaptive method named weighted distance nearest neighbor (WDNN) is applied to EEG signal classification to tackle this problem. This classification algorithm assigns a weight to each training sample to control its influence in classifying test samples. The weights of training samples are used to find the nearest neighbor of an input query pattern. To assess the performance of this scheme, EEG signals of thirteen schizophrenic patients and eighteen normal subjects are analyzed for the classification of these two groups. Several features, including fractal dimension, band power and autoregressive (AR) model parameters, are extracted from the EEG signals. The classification results are evaluated using leave-one-subject-out cross-validation for reliable estimation. The results indicate that the combination of WDNN and the selected features can significantly outperform the basic nearest-neighbor method and the other methods proposed in the past for the classification of these two groups. Therefore, this method can be a complementary tool for specialists to distinguish schizophrenia disorder.
DEFF Research Database (Denmark)
Wong, Monica H T; Holst, Claus; Astrup, Arne
2012-01-01
Successful weight maintenance following weight loss is challenging for many people. Identifying predictors of longer-term success will help target clinical resources more effectively. To date, focus has been predominantly on the identification of predictors of weight loss. The goal of the current study was to determine if changes in anthropometric and clinical parameters during acute weight loss are associated with subsequent weight regain.
International Nuclear Information System (INIS)
Shin, Ho Cheol; Park, Moon Ghu; You, Skin
2006-01-01
Recently, many on-line approaches to instrument channel surveillance (drift monitoring and fault detection) have been reported worldwide. The on-line monitoring (OLM) method evaluates instrument channel performance by assessing its consistency with other plant indications through parametric or non-parametric models. The heart of an OLM system is the model giving an estimate of the true process parameter value against individual measurements. This model gives a process parameter estimate calculated as a function of other plant measurements, which can be used to identify small sensor drifts that would require the sensor to be manually calibrated or replaced. This paper describes an improvement of auto-associative kernel regression (AAKR) by introducing a correlation coefficient weighting on kernel distances. The prediction performance of the developed method is compared with conventional auto-associative kernel regression.
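AAKR predicts each signal as a kernel-weighted average of historical fault-free memory vectors; the improvement described above weights each channel's contribution to the distance. A minimal sketch of that idea (the Gaussian kernel, bandwidth, and per-channel weights below are illustrative assumptions standing in for the paper's correlation-coefficient weighting):

```python
import math

def aakr_predict(query, memory, corr_w, bandwidth=1.0):
    """Auto-associative kernel regression with channel-weighted distances.

    memory: historical fault-free observation vectors; query: the current
    measurement vector; corr_w: per-channel distance weights.
    """
    kernel = []
    for m in memory:
        # channel-weighted squared distance between query and memory vector
        d2 = sum(w * (q - x) ** 2 for w, q, x in zip(corr_w, query, m))
        kernel.append(math.exp(-d2 / (2.0 * bandwidth ** 2)))  # Gaussian kernel
    total = sum(kernel)
    # estimate of the "true" process values: kernel-weighted memory average
    return [sum(k * m[i] for k, m in zip(kernel, memory)) / total
            for i in range(len(query))]

memory = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]   # illustrative history
est = aakr_predict([2.0, 20.0], memory, corr_w=[1.0, 0.25])
```

Comparing `est` against the raw measurement over time is what flags a drifting sensor channel.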
Butcher, Michael T.; Bertram, John E. A.
2004-01-01
This laboratory exercise is designed to provide an understanding of the mechanical concept of impulse as it applies to human movement and athletic performance. Students compare jumps performed with and without handheld weights. Contrary to initial expectation, jump distance is increased with moderate additional weights. This was familiar to…
Directory of Open Access Journals (Sweden)
J Swain
2017-12-01
Full Text Available The Indian Space Research Organisation launched Oceansat-2 on 23 September 2009, and the scatterometer onboard was a space-borne sensor capable of providing ocean surface winds (both speed and direction) over the globe for a mission life of 5 years. The observations of ocean surface winds from such a space-borne sensor are a potential source of data covering the global oceans and useful for driving state-of-the-art numerical models for simulating ocean state if assimilated/blended with weather prediction model products. In this study, an efficient interpolation technique based on inverse distance and time is demonstrated using the Oceansat-2 wind measurements alone for the selected month of June 2010 to generate gridded outputs. As the data are available only along the satellite tracks and there are obvious data gaps due to various other reasons, Oceansat-2 winds were subjected to spatio-temporal interpolation, and 6-hour global wind fields for the global oceans were generated over a 1 × 1 degree grid resolution. Such interpolated wind fields can be used to drive state-of-the-art numerical models to predict/hindcast ocean state so as to experiment with and test the utility/performance of satellite measurements alone in the absence of blended fields. The technique can be tested for other satellites which provide wind speed as well as direction data. However, the accuracy of the input winds is obviously expected to have a perceptible influence on the predicted ocean-state parameters. Here, some attempts are also made to compare the interpolated Oceansat-2 winds with available buoy measurements, and it was found that they are reasonably in good agreement, with a correlation coefficient of R > 0.8 and mean deviations of 1.04 m/s and 25° for wind speed and direction, respectively.
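Inverse distance-and-time interpolation extends spatial IDW with a temporal separation term, so along-track observations close in both space and time dominate the gridded estimate. The combined weight form below is an assumption for illustration; the paper's exact weighting may differ.

```python
import math

def idw_space_time(x, y, t, obs, p=2.0, q=1.0):
    """Inverse distance-and-time weighted estimate at point (x, y), time t.

    obs: (xi, yi, ti, value) tuples, e.g. along-track wind speeds. The
    combined weight w = 1 / (d**p * (1 + dt)**q) is an illustrative form.
    """
    num = den = 0.0
    for xi, yi, ti, v in obs:
        d = math.hypot(x - xi, y - yi)
        dt = abs(t - ti)
        if d == 0.0 and dt == 0.0:
            return v           # exact spatio-temporal match
        w = 1.0 / (max(d, 1e-9) ** p * (1.0 + dt) ** q)
        num += w * v
        den += w
    return num / den

# Two equidistant, simultaneous observations average out
winds = [(0.0, 0.0, 0.0, 10.0), (2.0, 0.0, 0.0, 20.0)]
print(idw_space_time(1.0, 0.0, 0.0, winds))  # 15.0
```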
Dictionary learning based noisy image super-resolution via distance penalty weight model.
Han, Yulan; Zhao, Yongping; Wang, Qisong
2017-01-01
In this study, we address the problem of noisy image super-resolution. A noisy low resolution (LR) image is often all that is available in applications, while most existing algorithms assume that the LR image is noise-free. To address this situation, we present an algorithm for noisy image super-resolution which achieves image super-resolution and denoising simultaneously. In the training stage of our method, the LR example images are noise-free. For different input LR images, even if the noise variance varies, the dictionary pair does not need to be retrained. For each input LR image patch, the corresponding high resolution (HR) image patch is reconstructed through a weighted average of similar HR example patches. To reduce computational cost, we use the atoms of a learned sparse dictionary as the examples instead of the original example patches. We propose a distance penalty model for calculating the weights, which can also perform a second selection on similar atoms at the same time. Moreover, LR example patches with the mean pixel value removed are also used to learn the dictionary, rather than just their gradient features. Based on this, we can reconstruct an initial estimated HR image and a denoised LR image. Combined with iterative back projection, the two reconstructed images are used to obtain the final estimated HR image. We validate our algorithm on natural images and compare with previously reported algorithms. Experimental results show that our proposed method achieves better noise robustness.
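The reconstruction step described above can be sketched as a distance-penalised weighted average: each HR atom contributes in proportion to a weight that decays with the distance between the input LR patch and the corresponding LR atom. The Gaussian penalty form and the toy patches are illustrative assumptions, not the paper's exact model.

```python
import math

def weighted_patch(lr_patch, lr_atoms, hr_atoms, h=1.0):
    """Reconstruct an HR patch as a distance-penalised weighted average of
    HR atoms (a sketch of the idea; the paper's penalty form may differ).

    lr_atoms[i] corresponds to hr_atoms[i]; all patches are flat lists.
    """
    ws = []
    for atom in lr_atoms:
        d2 = sum((a - b) ** 2 for a, b in zip(lr_patch, atom))
        ws.append(math.exp(-d2 / (h * h)))  # penalty grows with LR distance
    total = sum(ws)
    return [sum(w * hr[i] for w, hr in zip(ws, hr_atoms)) / total
            for i in range(len(hr_atoms[0]))]

# Toy 2-pixel LR atoms paired with 4-pixel HR atoms (illustrative only)
lr_atoms = [[0.0, 0.0], [10.0, 10.0]]
hr_atoms = [[0.0, 0.0, 0.0, 0.0], [10.0, 10.0, 10.0, 10.0]]
res = weighted_patch([0.1, 0.1], lr_atoms, hr_atoms)  # close to the first atom
```

Because distant atoms receive near-zero weight, the penalty effectively performs the "second selection" on similar atoms mentioned in the abstract.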
International Nuclear Information System (INIS)
Kazama, Toshiki; Nasu, Katsuhiro; Kuroki, Yoshifumi; Nawano, Shigeru; Ito, Hisao
2009-01-01
Fat suppression is essential for diffusion-weighted imaging (DWI) in the body. However, the chemical shift selective (CHESS) pulse often fails to suppress fat signals in the breast. The purpose of this study was to compare DWI using CHESS and DWI using short inversion time inversion recovery (STIR) in terms of fat suppression and the apparent diffusion coefficient (ADC) value. DWI using STIR, DWI using CHESS, and contrast-enhanced T1-weighted images were obtained in 32 patients with breast carcinoma. Uniformity of fat suppression, ADC, signal intensity, and visualization of the breast tumors were evaluated. In 44% (14/32) of patients there was insufficient fat suppression in the breasts on DWI using CHESS, whereas 0% was observed on DWI using STIR (P<0.0001). The ADCs obtained for DWI using STIR were 4.3% lower than those obtained for DWI using CHESS (P<0.02); there was a strong correlation of the ADC measurement (r=0.93, P<0.001). DWI using STIR may be excellent for fat suppression; and the ADC obtained in this sequence was well correlated with that obtained with DWI using CHESS. DWI using STIR may be useful when the fat suppression technique in DWI using CHESS does not work well. (author)
Water Quality Interpolation Using Various In-Stream Distance Weighting Metrics
Saia, S. M.; Walter, T.; Sullivan, P.; Christie, R.
2012-12-01
Interpolation of water quality samples along the reach of a stream can be used to (1) extend point data to un-sampled locations along the stream network, (2) identify spatial patterns in water quality, and (3) understand how natural and human factors shape these patterns. Kriging, one of the most commonly used geospatial interpolation methods, assumes that nearby sites are spatially auto-correlated; sites closer together have more in common than sites farther away. Studies have introduced kriging methods that weight in-stream distance metrics with either landscape attributes (i.e. topography, land use, temperature, and various soil properties) or stream order. Here we present a weighting scheme that combines surrounding landscape attributes with stream order. We use R, an open-source programming language, to interpolate water quality data collected from the Mianus River in Westchester County, New York. As the major drinking water supply for approximately 100,000 people in Connecticut and New York, the Mianus River watershed community values the cleanliness of its water for recreational activities as well as the sustenance of terrestrial and aquatic wildlife. With the in-stream interpolation results, we can gain a better understanding of factors contributing to water quality issues and observed biogeochemical patterns within the watershed. For example, we can help answer questions such as: How can we target landscape stabilization projects to reduce turbidity? If we find that the most powerful weighting is associated with first order streams and cropland, we know conservation efforts should be focused on agricultural headwaters.
Energy Technology Data Exchange (ETDEWEB)
Lavdas, Eleftherios; Vlychou, Marianna; Arikidis, Nikos; Kapsalaki, Eftychia; Roka, Violetta; Fezoulidis, Ioannis V. (Dept. of Radiology, Univ. Hospital of Larissa, Medical School of Thessaly, Mezourlo (Greece)), e-mail: mvlychou@med.uth.gr
2010-04-15
Background: T1-weighted fluid-attenuated inversion recovery (FLAIR) sequence has been reported to provide improved contrast between lesions and normal anatomical structures compared to T1-weighted fast spin-echo (FSE) imaging at 1.5T regarding imaging of the lumbar spine. Purpose: To compare T1-weighted FSE and fast T1-weighted FLAIR imaging in normal anatomic structures and degenerative and metastatic lesions of the lumbar spine at 3.0T. Material and Methods: Thirty-two consecutive patients (19 females, 13 males; mean age 44 years, range 30-67 years) with lesions of the lumbar spine were prospectively evaluated. Sagittal images of the lumbar spine were obtained using T1-weighted FSE and fast T1-weighted FLAIR sequences. Both qualitative and quantitative analyses measuring the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and relative contrast (ReCon) between degenerative and metastatic lesions and normal anatomic structures were conducted, comparing these sequences. Results: On quantitative evaluation, SNRs of cerebrospinal fluid (CSF), nerve root, and fat around the root of fast T1-weighted FLAIR imaging were significantly lower than those of T1-weighted FSE images (P<0.001). CNRs of normal spinal cord/CSF and disc herniation/ CSF for fast T1-weighted FLAIR images were significantly higher than those for T1-weighted FSE images (P<0.001). ReCon of normal spinal cord/CSF, disc herniation/CSF, and vertebral lesions/CSF for fast T1-weighted FLAIR images were significantly higher than those for T1-weighted FSE images (P<0.001). On qualitative evaluation, it was found that CSF nulling and contrast at the spinal cord (cauda equina)/CSF interface for T1-weighted FLAIR images were significantly superior compared to those for T1-weighted FSE images (P<0.001), and the disc/spinal cord (cauda equina) interface was better for T1-weighted FLAIR images (P<0.05). Conclusion: The T1-weighted FLAIR sequence may be considered as the preferred lumbar spine imaging
Zheng, Miaobing; Rangan, Anna; Allman-Farinelli, Margaret; Rohde, Jeanett Friis; Olsen, Nanna Julie; Heitmann, Berit Lilienthal
2015-11-14
The aim of the present study was to examine the associations of sugary drink consumption, and its substitution with alternative beverages, with body weight gain among young children predisposed to future weight gain. Secondary analysis of the Healthy Start Study, a 1·5-year randomised controlled trial designed to prevent overweight among Danish children aged 2-6 years (n 366), was carried out. Multivariate linear regression models were used to investigate the associations of beverage consumption with change in body weight (Δweight) or BMI (ΔBMI) z-score. Substitution models were used to extrapolate the influence of replacing sugary drinks with alternative beverages (water, milk and diet drinks) on Δweight or ΔBMI z-score. Sugary drink intake at baseline and substitution of sugary drinks with milk were associated with both Δweight and ΔBMI z-score. Every 100 g/d increase in sugary drink intake was associated with 0·10 kg and 0·06 unit increases in body weight (P=0·048) and BMI z-score (P=0·04), respectively. Substitution of 100 g/d sugary drinks with 100 g/d milk was inversely associated with Δweight (β=-0·16 kg; P=0·045) and ΔBMI z-score (β=-0·07 units; P=0·04). The results of this study suggest that sugary drink consumption was associated with body weight gain among young children with a high predisposition for future overweight. In line with current recommendations, sugary drinks, whether high in added or natural sugar, should be discouraged to help prevent childhood obesity. Milk may be a good alternative to sugary drinks with regard to weight management among young obesity-predisposed children.
Melo, Ingrid Sofia Vieira de; Costa, Clara Andrezza Crisóstomo Bezerra; Santos, João Victor Laurindo Dos; Santos, Aldenir Feitosa Dos; Florêncio, Telma Maria de Menezes Toledo; Bueno, Nassib Bezerra
2017-01-01
The consumption of ultra-processed foods may be associated with the development of chronic diseases, both in adults and in children/adolescents. This consumption is growing worldwide, especially in low- and middle-income countries. Nevertheless, its magnitude in small, poor cities of the countryside is not well characterized, especially among adolescents. This study aimed to assess the consumption of minimally processed, processed and ultra-processed foods by adolescents from a poor Brazilian city and to determine whether it was associated with excess weight, high waist circumference and high blood pressure. This was a cross-sectional study conducted at a public federal school that offers technical education together with high school, located in the city of Murici. Adolescents of both sexes, aged between 14 and 19 years, were included. Anthropometric characteristics (weight, height, waist circumference), blood pressure, and dietary intake data were assessed. Associations were calculated using Poisson regression models, adjusted by sex and age. In total, 249 adolescents were included, 55.8% of them girls, with a mean age of 16 years. The consumption of minimally processed foods was inversely associated with excess weight (Adjusted Prevalence Ratio: 0.61, 95% Confidence Interval: [0.39-0.96], P = 0.03). Although the consumption of ultra-processed foods was not associated with excess weight, high blood pressure or high waist circumference, 46.2% of the sample reported eating these products more than weekly. Consumption of minimally processed food is inversely associated with excess weight in adolescents. Investments in nutritional education aimed at the prevention of chronic diseases associated with the consumption of these foods are necessary.
A GRASP-Based Heuristic for the Sorting by Length-Weighted Inversions Problem.
Arruda, Thiago da Silva; Dias, Ulisses; Dias, Zanoni
2018-01-01
Genome rearrangements are large-scale mutational events that affect genomes during the evolutionary process, and thus differ from point mutations. They can move genes from one place to another, change the orientation of some genes, or even change the number of chromosomes. In this work, we deal with inversion events, which occur when a segment of the DNA sequence in the genome is reversed. In our model, each inversion costs the number of elements in the reversed segment. We present a new algorithm for this problem based on the metaheuristic called Greedy Randomized Adaptive Search Procedure (GRASP), which has been routinely used to find solutions for combinatorial optimization problems. In essence, we implemented an iterative process in which each iteration receives a feasible solution whose neighborhood is investigated. Our analysis shows that we outperform other approaches by a significant margin. We also use our algorithm to build phylogenetic trees for a subset of species in the Yersinia genus, and we compare our trees to other results in the literature.
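In this cost model, reversing the segment π[i..j] of a permutation costs j-i+1, the number of elements reversed. A minimal sketch of the operation and of one greedy-randomised GRASP construction step (the breakpoint-per-cost ranking and restricted candidate list below are an illustrative assumption, not the authors' full procedure):

```python
import random

def apply_inversion(perm, i, j):
    """Reverse perm[i..j] (inclusive); the length-weighted cost is the
    number of elements in the reversed segment."""
    new = perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]
    return new, j - i + 1

def breakpoints(p):
    """Adjacencies that are not consecutive in the identity permutation."""
    ext = [0] + list(p) + [len(p) + 1]
    return sum(1 for a, b in zip(ext, ext[1:]) if abs(a - b) != 1)

def grasp_step(perm, rcl_size=3, rng=random):
    """One greedy-randomised step: rank every inversion by breakpoint
    reduction per unit cost, then pick at random from the `rcl_size`
    best candidates (the restricted candidate list)."""
    candidates = []
    for i in range(len(perm)):
        for j in range(i, len(perm)):
            new, cost = apply_inversion(perm, i, j)
            gain = breakpoints(perm) - breakpoints(new)
            candidates.append((gain / cost, new, cost))
    candidates.sort(key=lambda c: -c[0])
    return rng.choice(candidates[:rcl_size])

print(apply_inversion([3, 2, 1, 4], 0, 2))  # ([1, 2, 3, 4], 3)
```

With `rcl_size=1` the step is purely greedy; larger lists trade greed for the randomisation that lets repeated GRASP iterations escape local optima.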
Energy Technology Data Exchange (ETDEWEB)
Hwang, Asiry; Seo, Jeong-Jin; Jeong, Gwang Woo; Chung, Tae Woong; Jeong, Yong Yeon; Kang, Heoung Keun; Kook, Hoon; Woo, Young Jong; Hwang, Tai Joo [Chonnam Univ. Medical School, Seoul (Korea, Republic of)
1999-03-01
The purpose of this study was to evaluate the usefulness of FLAIR (Fluid-Attenuated Inversion Recovery) MR imaging in childhood adrenoleukodystrophy by comparison with T2-weighted FSE imaging, and to correlate MRI findings with clinical manifestations. Axial FLAIR images (TR/TE/TI = 10004/123/2200) and T2-weighted FSE images (TR/TE = 4000/104) of the brain in six male patients (age range: 6-17 years, mean age: 10.2 years) with biochemically confirmed adrenoleukodystrophy were compared visually by two radiologists for detection, conspicuity, and the extent of lesions. Quantitatively, we compared lesion/CSF contrast, lesion/CSF contrast-to-noise ratio (CNR), lesion/white matter (WM) contrast, and lesion/WM CNR between FLAIR and T2-weighted images. We correlated MR findings with the clinical manifestations of neurologic symptoms and evaluated whether MRI could detect white matter lesions in neurologically asymptomatic patients. Visual detection of lesions was better with FLAIR images in 2 of the 6 cases and equal in the remainder. Visual conspicuity and detection of the extent of lesions were better on FLAIR images than on T2-weighted images in all 6 cases. In the quantitative assessment of lesions, FLAIR was superior to T2-weighted imaging for lesion/CSF contrast and lesion/CSF CNR, but inferior for lesion/WM contrast and lesion/WM CNR. In one case, FLAIR images distinguished a region of encephalomalacic change from lesions. MR findings of adrenoleukodystrophy correlated with clinical manifestations in the 4 symptomatic cases, and white matter lesions were also detected in the 2 asymptomatic cases. MR imaging with the FLAIR sequence provided images that were equal or superior to T2-weighted images in the evaluation of childhood adrenoleukodystrophy. MRI findings correlated well with clinical manifestations, and MRI could detect white matter lesions in neurologically asymptomatic adrenoleukodystrophy patients.
Measuring distance through dense weighted networks: The case of hospital-associated pathogens.
Directory of Open Access Journals (Sweden)
Tjibbe Donker
2017-08-01
Full Text Available Hospital networks, formed by patients visiting multiple hospitals, affect the spread of hospital-associated infections, resulting in differences in risks for hospitals depending on their network position. These networks are increasingly used to inform strategies to prevent and control the spread of hospital-associated pathogens. However, many studies only consider patients that are received directly from the initial hospital, without considering the effect of indirect trajectories through the network. We determine the optimal way to measure the distance between hospitals within the network, by reconstructing the English hospital network based on shared patients in 2014-2015, and simulating the spread of a hospital-associated pathogen between hospitals, taking into consideration that each intermediate hospital conveys a delay in the further spread of the pathogen. While the risk of transferring a hospital-associated pathogen between directly neighbouring hospitals is a direct reflection of the number of shared patients, the distance between two hospitals far apart in the network is determined largely by the number of intermediate hospitals. Because the network is dense, most long-distance transmission chains in fact involve only a few intermediate steps, spreading along the many weak links. The dense connectivity of hospital networks, together with a strong regional structure, causes hospital-associated pathogens to spread from the initial outbreak in a two-step process: first, the directly surrounding hospitals are affected through the strong connections; second, all other hospitals receive introductions through the multitude of weaker links. Although the strong connections matter for local spread, weak links in the network can offer ideal routes for hospital-associated pathogens to travel further, faster. This holds important implications for infection prevention and control efforts: if a local outbreak is not controlled in time
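The two ingredients of the distance measure discussed above, edge lengths that shrink as more patients are shared and a fixed delay conveyed at every step, can be combined in a shortest-path computation. The model below is an assumed simplification (edge length 1/shared-patients plus a unit per-hop delay), not the paper's calibrated metric:

```python
import heapq

def hospital_distances(shared, source, hop_delay=1.0):
    """Dijkstra over a weighted hospital network.  `shared` maps hospital
    pairs to the number of shared patients; an edge's length is the inverse
    of that count, plus a fixed delay added at every hop (each hospital on
    the path delays onward spread of the pathogen)."""
    graph = {}
    for (a, b), n in shared.items():
        graph.setdefault(a, []).append((b, 1.0 / n))
        graph.setdefault(b, []).append((a, 1.0 / n))
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w + hop_delay
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Toy network: strong links A-B and B-C, one weak direct link A-C.
dist = hospital_distances({("A", "B"): 100, ("B", "C"): 100, ("A", "C"): 1}, "A")
```

In this toy network the weak direct link A-C (a single shared patient) still beats the two-hop route through B once the per-hop delay is counted, illustrating how weak links can carry long-distance spread.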
Directory of Open Access Journals (Sweden)
Javad Nematian
2015-04-01
Full Text Available Vertex and p-center problems are two well-known types of the center problem. In this paper, a p-center problem with uncertain demand-weighted distance is introduced, in which the demands are considered as fuzzy random variables (FRVs) and the objective is to minimize the maximum distance between a node and its nearest facility. Then, by introducing new methods, the proposed problem is converted into deterministic integer programming (IP) problems; these methods are obtained through the implementation of possibility theory and fuzzy random chance-constrained programming (FRCCP). Finally, the proposed methods are applied to locating bicycle stations in the city of Tabriz, Iran, as a real case study. The computational results of our study show that these methods can be implemented for center problems under uncertainty.
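Stripped of the fuzzy random demands, the underlying p-center objective, minimizing the maximum demand-weighted distance from any node to its nearest open facility, can be written as a brute-force search over candidate facility sets. This deterministic sketch covers only the crisp core of the problem; the paper's FRV demands and chance constraints are not modeled:

```python
from itertools import combinations

def p_center(dist, demand_w, p):
    """Exhaustive p-center solver (fine for small n): choose p facility
    sites minimizing the maximum demand-weighted distance from any node to
    its nearest open facility.  `dist[i][j]` is the node-to-node distance,
    `demand_w[i]` the demand weight of node i."""
    n = len(dist)
    best_val, best_set = float("inf"), None
    for facilities in combinations(range(n), p):
        # worst-off node under this facility set
        worst = max(demand_w[i] * min(dist[i][f] for f in facilities)
                    for i in range(n))
        if worst < best_val:
            best_val, best_set = worst, facilities
    return best_set, best_val
```

The converted IP problems in the paper replace this enumeration with a solver, but the objective evaluated inside the loop is the same min-max expression.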
Bouillon, Lucinda E; Wilhelm, Jacqueline; Eisel, Patricia; Wiesner, Jessica; Rachow, Megan; Hatteberg, Lindsay
2012-12-01
Researchers have observed differences in muscle activity patterns between males and females during functional exercises. The research methods employed have used various step heights and lunge distances to assess functional exercise, making gender comparisons difficult. The purpose of this study was to examine core and lower extremity muscle activity between genders during single-limb exercises using adjusted distances and step heights based on a percentage of the participant's height. Twenty men and 20 women who were recreationally active and healthy participated in the study. Two-dimensional video and surface electromyography (SEMG) were used to assess performance during three exercise maneuvers (step down, forward lunge, and side-step lunge). Eight muscles were assessed using SEMG (rectus abdominus, external oblique, erector spinae, rectus femoris, tensor fascia latae, gluteus medius, gluteus maximus, biceps femoris). Maximal voluntary isometric contractions (MVIC) were recorded for each muscle, and SEMG was expressed as %MVIC to normalize for body mass differences. Exercises were randomized and distances were normalized to the participant's lower limb length. Descriptive statistics, mixed-model ANOVA, and ICCs with 95% confidence intervals were calculated. Males were taller, heavier, and had longer leg length when compared to the females. No differences in %MVIC activity were found between genders by task across the eight muscles. For both males and females, the step down task resulted in higher %MVIC for gluteus maximus compared to lunge (p=0.002). The step down exercise produced higher %MVIC for gluteus medius than lunge (p=0.002) and side step (p=0.006). ICC(3,3) ranged from moderate to high (0.74 to 0.97) for the three tasks. Muscle activation among the eight muscles was similar between females and males during the lunge, side-step, and step down tasks, with distances adjusted to leg length. Both males and females elicited higher muscle activity for gluteus
Stenroos, Matti; Haueisen, Jens
2008-09-01
In electrocardiographic imaging, epicardial potentials are reconstructed computationally from electrocardiographic measurements. The reconstruction is typically done with the help of the boundary element method (BEM), using the point collocation weighting and constant or linear basis functions. In this paper, we evaluated the performance of constant and linear point collocation and Galerkin BEMs in the epicardial potential problem. The integral equations and discretizations were formulated in terms of the single- and double-layer operators. All inner element integrals were calculated analytically. The computational methods were validated against analytical solutions in a simplified geometry. On the basis of the validation, no method was optimal in all testing scenarios. In the forward computation of the epicardial potential, the linear Galerkin (LG) method produced the smallest errors. The LG method also produced the smallest discretization error on the epicardial surface. In the inverse computation of epicardial potential, the electrode-specific transfer matrix performed better than the full transfer matrix. The Tikhonov 2 regularization outperformed the Tikhonov 0. In the optimal modeling conditions, the best BEM technique depended on electrode positions and chosen error measure. When large modeling errors such as omission of the lungs were present, the choice of the basis and weighting functions was not significant.
Directory of Open Access Journals (Sweden)
Mohammad Hassan Ehrampoush
2017-12-01
Conclusion: Given the higher concentrations of PM10 compared with WHO standard values, particularly in spring, necessary actions and solutions should be taken to reduce pollution. This study indicated that the Kriging model is more efficient than the IDW method for the spatial analysis of suspended particles.
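For reference, the IDW interpolator that this study compares against Kriging (and that the Damghan plain assessment above uses for groundwater quality maps) estimates a value at an unsampled point as a distance-weighted mean of the sampled values. A minimal sketch with the common power-2 weighting:

```python
def idw(points, values, x, y, power=2.0):
    """Ordinary inverse distance weighting: the estimate at (x, y) is the
    weighted mean of the sampled values, with weights 1/d**power."""
    num = den = 0.0
    for (px, py), v in zip(points, values):
        d2 = (px - x) ** 2 + (py - y) ** 2
        if d2 == 0.0:
            return v  # query coincides with a sample: return it exactly
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den
```

Because the weights are strictly positive, IDW estimates always stay within the range of the sampled values, one reason it can appear smoother (and sometimes less faithful) than Kriging, which also models spatial correlation.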
Energy Technology Data Exchange (ETDEWEB)
Hoff, Gabriela; Lima, Nathan Willig, E-mail: ghoff.gesic@gmail.com [Pontificia Universidade Catolica do Rio Grande do Sul (PUCRS), Porto Alegre, RS (Brazil). Faculdade de Fisica
2014-07-01
The Inverse Square Law (ISL) is a mathematical rule widely used to adjust KERMA and exposure to different distances from the focal spot, taking a determined point in space as reference. Considering the limitations of this mathematical law and its application, our main objective was to verify the applicability of the ISL for determining exposure in the radiodiagnostic range (peak tensions between 30 kVp and 150 kVp). Experimental data were collected, and deterministic calculation and Monte Carlo simulation (Geant4 toolkit) were applied. The experimental data were collected using a calibrated TNT 12000 ionization chamber from Fluke. The conventional X-ray equipment used was a Siemens Multix Top, with a Tungsten track and total filtration equivalent to 2.5 mm of Aluminum; the mammographic equipment was a Siemens Mammomat Inspiration, offering track-added filtration combinations of Molybdenum-Molybdenum (25 μm), Molybdenum-Rhodium (30 μm), and Tungsten-Rhodium (50 μm). Both units passed the quality control tests required by Brazilian regulations. In conventional radiology, measurements were performed at the following focal spot-detector distances (FsDD): 40 cm, 50 cm, 60 cm, 70 cm, 80 cm, 90 cm and 100 cm, for peak tensions of 66 kVp, 81 kVp and 125 kVp. In mammography, measurements were performed at FsDDs of 60 cm, 50 cm, 40 cm and 26 cm for peak tensions of 25 kVp, 30 kVp and 35 kVp. Based on the results, it is possible to conclude that the ISL performs worse for mammography spectra (it induces larger errors in the estimated data), but it can cause significant impact in both areas depending on the spectral energy and the distance to be corrected. (author)
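The ISL adjustment the authors test states that exposure falls with the square of the distance from the focal spot, so a reading X_ref measured at d_ref predicts X = X_ref * (d_ref / d)**2 at distance d. A one-line sketch:

```python
def exposure_at(distance_new, distance_ref, exposure_ref):
    """Inverse square law: rescale a reference exposure reading to a new
    focal spot-detector distance.  Units cancel, so any consistent pair of
    distance and exposure units works."""
    return exposure_ref * (distance_ref / distance_new) ** 2
```

Doubling the distance quarters the predicted exposure; the paper's point is that real mammography spectra (low energy, heavy filtration, air attenuation and scatter) deviate from this idealized point-source behavior.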
International Nuclear Information System (INIS)
Beaulieu, Frederic; Beaulieu, Luc; Tremblay, Daniel; Roy, Rene
2004-01-01
As an alternative between manual planning and beamlet-based IMRT, we have developed an optimization system for inverse planning with anatomy-based MLC fields. In this system, named Ballista, the beam orientation (table and gantry), the wedge filter and the field weights are simultaneously optimized for every beam. An interesting feature is that the system is coupled to Pinnacle3 by means of the PinnComm interface, and uses its convolution dose calculation engine. A fully automatic MLC segmentation algorithm is also included. Plan evaluation is based on quasi-random sampling and on a quadratic objective function with penalty-like constraints. For efficiency, optimal wedge angles and wedge orientations are determined using the concept of the super-omni wedge. A bound-constrained quasi-Newton algorithm performs field-weight optimization, while a fast simulated annealing algorithm selects the optimal beam orientations. Moreover, in order to generate directly deliverable plans, the following practical considerations have been incorporated into the system: avoidance of collisions between the gantry and the table, and avoidance of the radio-opaque elements of the table top. We illustrate the performance of the new system on two patients. In a rhabdomyosarcoma case, the system generated plans improving both the target coverage and the sparing of the parotid, as compared to a manually designed plan. In the second case presented, the system successfully produced an adequate plan for the treatment of the prostate while avoiding both hip prostheses. For the many cases where full IMRT may not be necessary, the system efficiently generates satisfactory plans meeting the clinical objectives, while keeping treatment verification much simpler.
Fujii, Tsutomu; Satoi, Sohei; Yamada, Suguru; Murotani, Kenta; Yanagimoto, Hiroaki; Takami, Hideki; Yamamoto, Tomohisa; Kanda, Mitsuro; Yamaki, So; Hirooka, Satoshi; Kon, Masanori; Kodera, Yasuhiro
2017-01-01
The efficacy of neoadjuvant chemoradiotherapy (NACRT) and the subset of pancreatic ductal adenocarcinoma (PDAC) patients most likely to benefit from this strategy remain elusive. The aim of this study was to investigate the effects of NACRT in patients with resectable (R) or borderline resectable (BR) adenocarcinoma of the pancreatic head. BR disease was classified into two groups: lesions involving exclusively the portal vein system (BR-PV) and those abutting the major artery (BR-A). A total of 504 patients treated with curative intent for PDAC were analyzed (R, n = 273; BR-PV, n = 129; BR-A, n = 102). Patients who underwent upfront surgery and those who underwent NACRT followed by surgery were compared using propensity score-matched and inverse probability of treatment-weighted analyses (UMIN000019719). No significant differences were noted in the incidences of curative resection among the three categories (R, BR-PV and BR-A). Propensity score-weighted logistic regression analysis revealed that the incidence of pathologically positive resection margins was reduced by NACRT only for BR patients. Among the propensity score-matched patients, NACRT rather than upfront surgery significantly prolonged the median survival time of BR-PV patients (28.4 vs. 20.1 months; P = 0.044) but not that of R-PDAC patients (28.6 vs. 33.7 months; P = 0.960). NACRT prolonged the median survival time of BR-A patients (18.1 vs. 10.0 months; P = 0.046), but the results remained unsatisfactory. These findings suggest that NACRT improves R0 rates and increases the survival of patients with BR-PV adenocarcinoma of the pancreatic head but not that of patients with R-PDAC.
Na, Bing; Lv, Ruihua; Xu, Wenfei; Yu, Pingsheng; Wang, Ke; Fu, Qiang
2007-11-22
Irradiation of ultrahigh molecular weight polyethylene (UHMWPE) with a dose of 150 kGy from an electron beam can effectively increase the entanglement density in the amorphous phase while having little influence on the properties of the crystalline phase, which provides a model system for comparatively investigating the roles of lamellar coupling and entanglement density in determining the strain-hardening effect in semicrystalline polymers. The strain-hardening modulus, deduced from Haward plots of true stress-strain curves, is inversely temperature-dependent and shows a sharp transition around 65 degrees C that corresponds to the mechanical αI-process of the crystalline phase for both nonirradiated and irradiated samples, irrespective of the entanglement density in the amorphous phase. Lamellar coupling plays a larger role in determining the strain-hardening behavior before the mechanical αI-process is activated. With further increasing temperature, lamellar coupling becomes weaker and the role of the entangled amorphous phase gradually emerges. However, the same temperature dependence of the strain-hardening modulus in both nonirradiated and irradiated samples indicates that the strain-hardening behavior in semicrystalline polymers is mostly determined by lamellar coupling rather than by entanglement density.
Laraia, Barbara A; Downing, Janelle M; Zhang, Y Tara; Dow, William H; Kelly, Maggi; Blanchard, Samuel D; Adler, Nancy; Schillinger, Dean; Moffet, Howard; Warton, E Margaret; Karter, Andrew J
2017-05-01
Associations between neighborhood food environment and adult body mass index (BMI; weight (kg)/height (m)²) derived using cross-sectional or longitudinal random-effects models may be biased due to unmeasured confounding and measurement and methodological limitations. In this study, we assessed the within-individual association between change in food environment from 2006 to 2011 and change in BMI among adults with type 2 diabetes, using clinical data from the Kaiser Permanente Diabetes Registry collected from 2007 to 2011. Healthy food environment was measured using the kernel density of healthful food venues. Fixed-effects models with a 1-year-lagged BMI were estimated. Separate models were fitted for persons who moved and those who did not. Sensitivity analyses using different lag times and kernel density bandwidths were conducted to establish the consistency of findings. On average, patients lost 1 pound (0.45 kg) for each standard-deviation improvement in their food environment. This relationship held for persons who remained in the same location throughout the 5-year study period but not among persons who moved. Proximity to food venues that promote nutritious foods alone may not translate into clinically meaningful diet-related health changes. Community-level policies for improving the food environment need multifaceted strategies to invoke clinically meaningful change in BMI among adult patients with diabetes.
Drogendijk, Rian; Martin Martin, Oscar
We investigate how distance and different dimensions of distance between countries explain the outward FDI of firms according to distinct home country contexts. We identify three important dimensions of country distance: socio-economic development distance, cultural and historical distance and
Doidge, James C
2018-02-01
Population-based cohort studies are invaluable to health research because of the breadth of data collection over time, and the representativeness of their samples. However, they are especially prone to missing data, which can compromise the validity of analyses when data are not missing at random. Having many waves of data collection presents opportunity for participants' responsiveness to be observed over time, which may be informative about missing data mechanisms and thus useful as an auxiliary variable. Modern approaches to handling missing data such as multiple imputation and maximum likelihood can be difficult to implement with the large numbers of auxiliary variables and large amounts of non-monotone missing data that occur in cohort studies. Inverse probability-weighting can be easier to implement but conventional wisdom has stated that it cannot be applied to non-monotone missing data. This paper describes two methods of applying inverse probability-weighting to non-monotone missing data, and explores the potential value of including measures of responsiveness in either inverse probability-weighting or multiple imputation. Simulation studies are used to compare methods and demonstrate that responsiveness in longitudinal studies can be used to mitigate bias induced by missing data, even when data are not missing at random.
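The core of inverse probability-weighting is the same regardless of the missingness pattern: each observed unit is up-weighted by the reciprocal of its probability of being observed, so units like those that tend to go missing count for more. The toy simulation below (an illustration with a known response probability, not the paper's method for estimating it from responsiveness) shows the complete-case mean being biased when data are not missing completely at random, while the IPW mean recovers the truth:

```python
import random

def ipw_mean(y_obs, weights):
    """Inverse probability weighted (Hajek) mean: each observed outcome is
    weighted by 1 / Pr(observed)."""
    return sum(w * y for y, w in zip(y_obs, weights)) / sum(weights)

# Toy setup (assumed for illustration): outcome y rises with covariate x,
# and units with large x are less likely to respond, so the complete-case
# mean is biased downward relative to the true mean E[y] = 1.0.
rng = random.Random(0)
data = []
for _ in range(20000):
    x = rng.random()
    y = 2.0 * x + rng.gauss(0.0, 0.1)
    p_obs = 0.9 - 0.6 * x            # known response probability
    data.append((y, p_obs, rng.random() < p_obs))

cc_mean = (lambda ys: sum(ys) / len(ys))([y for y, _, o in data if o])
ipw = ipw_mean([y for y, _, o in data if o],
               [1.0 / p for _, p, o in data if o])
```

In practice the response probabilities are unknown and must be modeled, e.g. by logistic regression on always-observed covariates; the paper's contribution is showing how to fit such models when the missingness is non-monotone.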
International Nuclear Information System (INIS)
Erdem, L. Oktay; Erdem, C. Zuhal; Acikgoz, Bektas; Gundogdu, Sadi
2005-01-01
Objective: To compare fast T1-weighted fluid-attenuated inversion recovery (FLAIR) and T1-weighted turbo spin-echo (TSE) imaging of degenerative disc disease of the lumbar spine. Materials and methods: Thirty-five consecutive patients (19 females, 16 males; mean age 41 years, range 31-67 years) with suspected degenerative disc disease of the lumbar spine were prospectively evaluated. Sagittal images of the lumbar spine were obtained using T1-weighted TSE and fast T1-weighted FLAIR sequences. Two radiologists compared these sequences both qualitatively and quantitatively. Results: On qualitative evaluation, CSF nulling, contrast at the disc-CSF interface, the disc-spinal cord (cauda equina) interface, and the spinal cord (cauda equina)-CSF interface were significantly better on fast T1-weighted FLAIR images than on T1-weighted TSE images (P < 0.001). On quantitative evaluation of the first 15 patients, signal-to-noise ratios of cerebrospinal fluid on fast T1-weighted FLAIR imaging were significantly lower than those on T1-weighted TSE images (P < 0.05). Contrast-to-noise ratios of spinal cord/CSF and normal bone marrow/disc on fast T1-weighted FLAIR images were significantly higher than those on T1-weighted TSE images (P < 0.05). Conclusion: Our results show that fast T1-weighted FLAIR imaging may be a valuable addition to the armamentarium of lumbar spinal T1-weighted MR imaging, because it offers definite advantages such as CSF nulling, better conspicuity of normal anatomic structures and of changes in lumbar discogenic disease, higher image contrast, and nearly equal acquisition times.
DEFF Research Database (Denmark)
Zheng, Miaobing; Rangan, Anna; Allman-Farinelli, Margaret
2015-01-01
trial designed to prevent overweight among Danish children aged 2-6 years (n 366), was carried out. Multivariate linear regression models were used to investigate the associations of beverage consumption with change in body weight (Δweight) or BMI(ΔBMI) z-score. Substitution models were used...... to extrapolate the influence of replacing sugary drinks with alternative beverages (water, milk and diet drinks) on Δweight or ΔBMI z-score. Sugary drink intake at baseline and substitution of sugary drinks with milk were associated with both Δweight and ΔBMI z-score. Every 100 g/d increase in sugary drink...
International Nuclear Information System (INIS)
Chang, J; Gu, X; Lu, W; Jiang, S; Song, T
2016-01-01
Purpose: A novel distance-dose weighting method for label fusion was developed to increase segmentation accuracy in dosimetrically important regions for prostate radiation therapy. Methods: Label fusion as implemented in the original SIMPLE (OS) algorithm for multi-atlas segmentation relies iteratively on a majority vote to generate an estimated ground truth and on the DICE similarity measure to screen candidates. The proposed distance-dose weighting puts more weight on dosimetrically important regions when calculating the similarity measure. Specifically, we introduced the distance-to-dose error (DDE), which converts distance to dosimetric importance, into the performance evaluation. The DDE calculates an estimated DE error derived from surface-distance differences between the candidate and the estimated ground-truth label by multiplying a regression coefficient. To determine the coefficient at each simulation point on the rectum, we fitted the DE error with respect to simulated voxel shift. The DEs were calculated by the multi-OAR geometry-dosimetry training model previously developed in our research group. Results: For both the OS and the distance-dose weighted SIMPLE (WS) results, the evaluation metrics for twenty patients were calculated using the ground-truth segmentation. The mean differences in DICE, Hausdorff distance, and mean absolute distance (MAD) between OS and WS were 0, 0.10, and 0.11, respectively. For the partial MAD of WS, which computes MAD within a given voxel distance of the PTV expansion, lower MADs than those of OS were observed at the closer distances, from 1 to 8. The DE results showed that segmentation from WS produced more accurate results than OS. The mean DE errors of V75, V70, V65, and V60 decreased by 1.16%, 1.17%, 1.14%, and 1.12%, respectively. Conclusion: We have demonstrated that the method can increase segmentation accuracy in rectum regions adjacent to the PTV. As a result, segmentation using WS showed improved dosimetric accuracy over OS. The WS will
Directory of Open Access Journals (Sweden)
María Fernanda Garcés
2017-04-01
Conclusions: Inversions of intron 22 and 1 were found in half of this group of patients. These results are reproducible and useful to identify the two most frequent mutations in severe hemophilia A patients.
Yoshida, Tsukasa; Urikura, Atsushi; Shirata, Kensei; Nakaya, Yoshihiro; Endo, Masahiro; Terashima, Shingo; Hosokawa, Yoichiro
2018-04-01
This study aimed to compare the signal-to-noise ratios (SNRs) and apparent diffusion coefficients (ADCs) obtained using two fat-suppression techniques in breast diffusion-weighted imaging (DWI) of a phantom. The breast phantom comprised agar gels with four different concentrations of granulated sugar (samples 1, 2, 3, and 4). DWI with short tau inversion recovery (STIR-DWI) and with spectral attenuated inversion recovery (SPAIR-DWI) was performed using 3.0-T magnetic resonance imaging, and the obtained SNRs and ADCs were compared. ADCs were also compared between the right and left breast phantoms. For samples 3 and 4, SNRs obtained using STIR-DWI were lower than those obtained using SPAIR-DWI. For samples 2, 3, and 4, overall ADCs obtained using STIR-DWI were significantly higher than those obtained using SPAIR-DWI (p < 0.05), and STIR-DWI showed a larger right-left ADC difference in the phantoms than SPAIR-DWI. SNRs and ADCs obtained using STIR-DWI are influenced by the T1 value; a shorter T1 value decreases SNRs, overestimates ADCs, and induces measurement error in the ADCs. STIR-DWI showed a larger difference in ADCs between the right and left phantoms than SPAIR-DWI.
Rhew, Isaac C; Oesterle, Sabrina; Coffman, Donna; Hawkins, J David
2018-01-01
Earlier intention-to-treat (ITT) findings from a community-randomized trial demonstrated effects of the Communities That Care (CTC) prevention system on reducing problem behaviors among youth. In ITT analyses, youth were analyzed according to their original study community's randomized condition even if they moved away from the community over the course of follow-up and received little to no exposure to intervention activities. Using inverse probability weights (IPWs), this study estimated effects of CTC in the same randomized trial among youth who remained in their original study communities throughout follow-up. Data were from the Community Youth Development Study, a community-randomized trial of 24 small towns in the United States. A cohort of 4,407 youth was followed from fifth grade (prior to CTC implementation) to eighth grade. IPWs for one's own moving status were calculated using fifth- and sixth-grade covariates. Results from inverse probability weighted multilevel models indicated larger effects for youth who remained in their study community for the first 2 years of CTC intervention implementation compared to ITT estimates. These effects included reduced likelihood of alcohol use, binge drinking, smokeless tobacco use, and delinquent behavior. These findings strengthen support for CTC as an efficacious system for preventing youth problem behaviors.
Directory of Open Access Journals (Sweden)
Lei Zeng
2016-01-01
Full Text Available Cone beam computed tomography (CBCT) is a new detection method for 3D nondestructive testing of printed circuit boards (PCBs). However, the obtained 3D images of PCBs exhibit low contrast because of several factors, such as metal artifacts and beam hardening, arising during CBCT imaging. Histogram equalization (HE) algorithms cannot effectively extend the gray difference between substrate and metal in 3D CT images of PCBs, and their enhancement effects are insignificant. To address this shortcoming, this study proposes an image enhancement algorithm based on gray and gray-distance double-weighted HE. Considering the characteristics of 3D CT images of PCBs, the proposed algorithm uses a gray and gray-distance double-weighting strategy to change the shape of the original image histogram: it suppresses the grayscale of the nonmetallic substrate and expands the grayscale of wires and other metals. The algorithm thus enhances the gray difference between substrate and metal and highlights metallic materials in 3D CT images of PCBs. The flexibility and advantages of the proposed algorithm are confirmed by analyses and experimental results.
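A plain HE mapping is built from the image's cumulative histogram; the double-weighting idea re-shapes that histogram before the mapping is computed. The sketch below implements only a single gray-level weighting (the distance weighting and the paper's exact weight functions are omitted) to show where such weights enter the HE pipeline:

```python
def weighted_hist_equalize(img, weight, levels=256):
    """Weighted histogram equalization sketch (an assumed simplification of
    the paper's gray/gray-distance double weighting): each pixel contributes
    weight(gray) to the histogram instead of 1, and the weighted CDF is then
    used as the usual HE intensity mapping."""
    hist = [0.0] * levels
    for row in img:
        for g in row:
            hist[g] += weight(g)        # weighted histogram contribution
    total = sum(hist)
    cdf, acc = [0.0] * levels, 0.0
    for g in range(levels):
        acc += hist[g]
        cdf[g] = acc / total            # weighted cumulative distribution
    # map each gray level through the weighted CDF
    return [[round(cdf[g] * (levels - 1)) for g in row] for row in img]
```

With `weight = lambda g: 1.0` this reduces to ordinary HE; a weight that grows with gray level would compress the (dark) substrate range and stretch the (bright) metal range, in the spirit of the paper's strategy.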
Camilleri, Géraldine M; Méjean, Caroline; Bellisle, France; Andreeva, Valentina A; Kesse-Guyot, Emmanuelle; Hercberg, Serge; Péneau, Sandrine
2016-05-01
To examine the relationship between intuitive eating (IE), which includes eating in response to hunger and satiety cues rather than emotional cues and without having forbidden foods, and weight status in a large sample of adults. A total of 11,774 men and 40,389 women aged ≥18 years participating in the NutriNet-Santé cohort were included in this cross-sectional analysis. Self-reported weight and height were collected, as well as IE levels using the validated French version of the Intuitive Eating Scale-2. The association between IE and weight status was assessed using multinomial logistic regression models. A higher IE score was strongly associated with lower odds of overweight or obesity in both men and women. The strongest associations were observed in women, for both overweight [quartile 4 vs. 1 of IE: odds ratio, 95% confidence interval: 0.19, 0.17-0.20] and obesity (0.09, 0.08-0.10). Associations in men were as follows: overweight (0.43, 0.38-0.48) and obesity (0.14, 0.11-0.18). IE is inversely associated with overweight and obesity, which supports its potential relevance to weight management. Although no causality can be inferred from the reported associations, these data suggest that IE might be relevant for obesity prevention and treatment.
Bertoli, Simona; Spadafranca, Angela; Bes-Rastrollo, Maira; Martinez-Gonzalez, Miguel Angel; Ponissi, Veronica; Beggio, Valentina; Leone, Alessandro; Battezzati, Alberto
2015-02-01
The key factors influencing the development of Binge Eating Disorder (BED) are not well known. Adherence to the Mediterranean diet (MD) is suspected to reduce the risk of several mental illnesses such as depression and anxiety. No existing studies have examined the relationship between BED and the MD. Cross-sectional study of 1472 participants (71.3% women; mean age: 44.8 ± 12.7 years) at high risk of BED. A MD score (MED-score) was derived from a validated food frequency questionnaire, and BED was assessed with the Binge Eating Scale questionnaire (BES). Body mass index, waist circumference and total body fat (%) were assessed by anthropometric measurements. 376 (25.5%) cases of self-reported BED were identified. 11.1% of participants had good adherence to the MD (MED-score ≥ 9). After adjustment for age, gender, nutritional status, education, and physical activity level, a high MED-score was associated with lower odds of BED; odds ratios and 95% confidence intervals of BED for successive levels of MED-score were 1 (reference), 0.77 (0.44, 1.36), 0.66 (0.37, 1.15), 0.50 (0.26, 0.96), and 0.45 (0.22, 0.55) (P for trend < 0.05), also after accounting for depression and anxiety in binge eaters. These results demonstrate an inverse association between the MD and the development of BED in a clinical setting among subjects at risk of BED. Therefore, we should be cautious about generalizing the results to the whole population, although reverse causality and confounding cannot be excluded as explanations. Further prospective studies are warranted.
Guo, Xiaohui; Tresserra-Rimbau, Anna; Estruch, Ramón; Martínez-González, Miguel A; Medina-Remón, Alexander; Fitó, Montserrat; Corella, Dolores; Salas-Salvadó, Jordi; Portillo, Maria Puy; Moreno, Juan J; Pi-Sunyer, Xavier; Lamuela-Raventós, Rosa M
2017-05-03
Overweight and obesity have been steadily increasing in recent years and currently represent a serious threat to public health. Few human studies have investigated the relationship between polyphenol intake and body weight. Our aim was to assess the relationship between urinary polyphenol levels and body weight. A cross-sectional study was performed with 573 participants from the PREDIMED (Prevención con Dieta Mediterránea) trial (ISRCTN35739639). Total polyphenol levels were measured by a reliable biomarker, total urinary polyphenol excretion (TPE), determined by the Folin-Ciocalteu method in urine samples. Participants were categorized into five groups according to their TPE at the fifth year. Multiple linear regression models were used to assess the relationships between TPE and obesity parameters: body weight (BW), body mass index (BMI), waist circumference (WC), and waist-to-height ratio (WHtR). After a five-year follow-up, significant inverse associations were observed between TPE at the 5th year and BW (β = -1.004; 95% CI: -1.634 to -0.375, p = 0.002), BMI (β = -0.320; 95% CI: -0.541 to -0.098, p = 0.005), WC (β = -0.742; 95% CI: -1.326 to -0.158, p = 0.013), and WHtR (β = -0.408; 95% CI: -0.788 to -0.028, p = 0.036) after adjustment for potential confounders. To conclude, a greater polyphenol intake may thus contribute to reducing body weight in elderly people at high cardiovascular risk. PMID:28467383
Representing distance, consuming distance
DEFF Research Database (Denmark)
Larsen, Gunvor Riber
Title: Representing Distance, Consuming Distance Abstract: Distance is a condition for corporeal and virtual mobilities, for desired and actual travel, yet it has received relatively little attention as a theoretical entity in its own right. Understandings of and assumptions about distance... are being consumed in contemporary society, in the same way as places, media, cultures and status are being consumed (Urry 1995, Featherstone 2007). An exploration of distance and its representations through contemporary consumption theory could expose what role distance plays in forming...
Directory of Open Access Journals (Sweden)
R. Venkata Rao
2012-04-01
Full Text Available In response to increasingly inflexible customer demands and to improve their competitive advantage, industrial organizations have to adopt strategies to achieve cost reduction, continual quality improvement, increased customer service and on-time delivery performance. Selection of the most suitable plant or facility layout design for an organization is one of the most important strategic issues in fulfilling all of the above-mentioned objectives. Nowadays, many industrial organizations have come to realize the importance of proper selection of the plant or facility layout design to survive in the global competitive market. Selecting the proper layout design from a given set of candidate alternatives is a difficult task, as many potential qualitative and quantitative criteria need to be considered. This paper proposes a weighted Euclidean distance based approach (WEDBA) as a multiple attribute decision making method to deal with the complex plant or facility layout design problems of the industrial environment. Three examples are included to illustrate the approach.
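The core of a weighted Euclidean distance based ranking can be sketched in a few lines; the decision matrix, criteria weights, and ideal point below are invented for illustration and are not taken from the paper's three examples:

```python
import math

# Hypothetical decision matrix: rows = layout alternatives, values = scores
# on three (benefit) criteria, e.g. flow distance, flexibility, cost.
alternatives = {
    "Layout A": [0.70, 0.60, 0.80],
    "Layout B": [0.90, 0.40, 0.60],
    "Layout C": [0.50, 0.90, 0.70],
}
weights = [0.5, 0.3, 0.2]   # assumed criteria weights (sum to 1)
ideal = [1.0, 1.0, 1.0]     # best attainable value on every criterion

def weighted_euclidean(x, ideal, w):
    """Weighted Euclidean distance from the ideal point."""
    return math.sqrt(sum(wi * (xi - ii) ** 2 for xi, ii, wi in zip(x, ideal, w)))

scores = {name: weighted_euclidean(v, ideal, weights)
          for name, v in alternatives.items()}
best = min(scores, key=scores.get)  # smallest distance to the ideal wins
```

WEDBA proper also standardizes the attribute data and scores each alternative against both the best and worst situations; the sketch keeps only the distance-to-ideal core.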
Piasecki, J; Ireland, A; Piasecki, M; Cameron, J; McPhee, J S; Degens, H
2018-01-30
Regular intense endurance exercise can lead to amenorrhea with possible adverse consequences for bone health. We compared whole body and regional bone strength and skeletal muscle characteristics between amenorrheic (AA: n = 14) and eumenorrheic (EA: n = 15) elite adult female long-distance runners and nonathletic controls (C: n = 15). Participants completed 3-day food diaries, dual-energy X-ray absorptiometry (DXA), magnetic resonance imaging (MRI), peripheral quantitative computed tomography (pQCT), and isometric maximal voluntary knee extension contraction (MVC). Both athlete groups had a higher caloric intake than controls, with no significant difference between athlete groups. DXA revealed lower bone mineral density (BMD) at the trunk, rib, pelvis, and lumbar spine in the AA than EA and C. pQCT showed greater bone size in the radius and tibia in EA and AA than C. The radius and tibia of AA had a larger endocortical circumference than C. Tibia bone mass and moments of inertia (Ix and Iy) were greater in AA and EA than C, whereas in the radius, only the proximal Iy was larger in EA than C. Knee extensor MVC did not differ significantly between groups. Amenorrheic adult female elite long-distance runners had lower BMD in the trunk, lumbar spine, ribs, and pelvis than eumenorrheic athletes and controls. The radius and tibia bone size and strength indicators were similar in amenorrheic and eumenorrheic athletes, suggesting that long bones of the limbs differ in their response to amenorrhea from bones in the trunk. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Energy Technology Data Exchange (ETDEWEB)
Azad, Rajiv; Tayal, Mohit; Azad, Sheenam; Sharma, Garima; Srivastava, Rajendra Kumar [SGRR Institute of Medical and Health Sciences, Patel Nagar, Dehradun (India)
2017-11-15
To compare the contrast-enhanced fluid-attenuated inversion recovery (CE-FLAIR) and the CE T1-weighted (CE-T1W) sequences with fat suppression (FS) and magnetization transfer (MT) for early detection and characterization of infectious meningitis. Fifty patients and 10 control subjects were evaluated with the CE-FLAIR and the CE-T1W sequences with FS and MT. Qualitative assessment was done by two observers for the presence and grading of abnormal leptomeningeal enhancement. Quantitative assessment included computation of net meningeal enhancement using single-pixel signal intensity software. A newly devised FLAIR-based scoring system, built on imaging features including ventricular dilatation, ependymal enhancement, infarcts and subdural effusions, was used to indicate the etiology. Data were analysed using the Student's t test, Cohen's kappa coefficient, Pearson's correlation coefficient, the intraclass correlation coefficient, one-way analysis of variance, and Fisher's exact test with Bonferroni correction as the post hoc test. The CE-FLAIR sequence demonstrated a better sensitivity (100%), diagnostic accuracy (95%), and a stronger correlation with the cerebrospinal fluid total leukocyte count (r = 0.75), protein (r = 0.77), adenosine deaminase (r = 0.81) and blood glucose (r = -0.6) values compared to the CE-T1W sequences. Qualitative grades and quantitative meningeal enhancement on the CE-FLAIR sequence were also significantly greater than those on the other sequences. The FLAIR-based scoring system yielded a diagnostic accuracy of 91.6% and a sensitivity of 96%. A strong inverse Pearson's correlation (r = -0.95) was found between the assigned score and the patient's Glasgow Coma Scale at the time of admission. The CE-FLAIR sequence is better suited for evaluating infectious meningitis and could be included as a part of the routine MR imaging protocol.
Legrand, Laurence; Tisserand, Marie; Turc, Guillaume; Edjlali, Myriam; Calvet, David; Trystram, Denis; Roca, Pauline; Naggara, Olivier; Mas, Jean-Louis; Méder, Jean-Francois; Baron, Jean-Claude; Oppenheim, Catherine
2016-02-01
Fluid-attenuated inversion recovery vascular hyperintensities (FVH) beyond the boundaries of the diffusion-weighted imaging (DWI) lesion (FVH-DWI mismatch) have been proposed as an alternative to perfusion-weighted imaging (PWI)-DWI mismatch. We aimed to establish whether FVH-DWI mismatch can identify patients most likely to benefit from recanalization. FVH-DWI mismatch was assessed in 164 patients with proximal middle cerebral artery occlusion before intravenous thrombolysis. PWI-DWI mismatch (PWI Tmax > 6 s/DWI > 1.8) was assessed in the 104 patients with available PWI data. We tested the associations between 24-hour complete recanalization on magnetic resonance angiography and 3-month favorable outcome (modified Rankin Scale score ≤2), stratified on FVH-DWI (or PWI-DWI) status. FVH-DWI mismatch was present in 121/164 (74%) patients and recanalization in 50/164 (30%) patients. The odds ratio for favorable outcome with recanalization was 16.2 (95% confidence interval, 5.7-46.5) in patients with FVH-DWI mismatch and 2.6 (95% confidence interval, 0.6-12.1; P=0.22) in those without FVH-DWI mismatch (P=0.048 for interaction). Recanalization was associated with favorable outcome in patients with PWI-DWI mismatch (odds ratio, 9.9; 95% confidence interval, 3.1-31.3; P=0.0001) and in patients without PWI-DWI mismatch (odds ratio, 7.0; 95% confidence interval, 1.1-44.1; P=0.047), P=0.76 for interaction. The FVH-DWI mismatch may rapidly identify patients with proximal occlusion most likely to benefit from recanalization. © 2016 American Heart Association, Inc.
Energy Technology Data Exchange (ETDEWEB)
Donmez, F.Y.; Aslan, H.; Coskun, M. (Dept. of Radiology, Faculty of Medicine, Baskent Univ., Ankara (Turkey))
2009-04-15
Background: Acute disseminated encephalomyelitis (ADEM) may be a rapidly progressive disease with different clinical outcomes. Purpose: To investigate the radiological findings of fulminant ADEM on diffusion-weighted imaging (DWI) and fluid-attenuated inversion recovery (FLAIR) images, and to correlate these findings with clinical outcome. Material and Methods: Initial and follow-up magnetic resonance imaging (MRI) scans in eight patients were retrospectively evaluated for the distribution of lesions on FLAIR images and the presence of hemorrhage or contrast enhancement. DWI of the patients was evaluated for cytotoxic versus vasogenic edema. The clinical records were analyzed, and MRI results and clinical outcome were correlated. Results: Four of the eight patients died, three had full recovery, and one had residual cortical blindness. The distribution of the hyperintense lesions on the FLAIR sequence was as follows: frontal (37.5%), parietal (50%), temporal (37.5%), occipital (62.5%), basal ganglia (50%), pons (37.5%), mesencephalon (37.5%), and cerebellum (50%). Three of the patients who died had brainstem involvement. Two patients had cytotoxic edema, one of whom died, and the other developed cortical blindness. Six patients had vasogenic edema: three of these patients had a rapid progression to coma and died; three of them recovered. Conclusion: DWI is not always helpful for evaluating the evolution or predicting the outcome of ADEM. However, extension of the lesions, particularly brainstem involvement, may have an influence on the prognosis.
Energy Technology Data Exchange (ETDEWEB)
Klang, Eyal; Aharoni, Dvora; Rimon, Uri; Eshed, Iris [Tel Aviv University, Department of Diagnostic Imaging, Sheba Medical Center, Tel Aviv (Israel); Hermann, Kay-Geert [Department of Radiology, Charite University Hospital, Berlin (Germany); Herman, Amir [Sheba Medical Center, Department of Orthopedic Surgery, Tel-Hashomer (Israel); Tel Aviv University, The Sackler School of Medicine, Tel Aviv (Israel); Shazar, Nachshon [Sheba Medical Center, Department of Orthopedic Surgery, Tel-Hashomer (Israel)
2014-04-15
To assess the contribution of contrast material in detecting and evaluating enthesitis of pelvic entheses by MRI. Sixty-seven hip or pelvic 1.5-T MRIs (30 male:37 female; mean age: 53 years) were retrospectively evaluated for the presence of hamstring and gluteus medius (GM) enthesitis by two readers (a resident and an experienced radiologist). Short tau inversion recovery (STIR) and T1-weighted pre- and post-contrast (T1+Gd) images were evaluated by each reader at two sessions. A consensus reading by two senior radiologists was regarded as the gold standard. Clinical data were retrieved from patients' referral forms and medical files. Cohen's kappa was used for intra- and inter-observer agreement calculation. Diagnostic properties were calculated against the gold standard reading. A total of 228 entheses were evaluated. Gold standard analysis diagnosed 83 (36%) enthesitis lesions. Intra-reader reliability for the experienced reader was significantly (p = 0.0001) higher on the T1+Gd images than on the STIR images (hamstring: k = 0.84/0.45; GM: k = 0.84/0.47). Sensitivity and specificity increased from 0.74/0.80 on the STIR images to 0.87/0.90 on the T1+Gd sequences. Intra-reader reliability for the inexperienced reader was lower (p > 0.05). Evidence showing that contrast material improves the reliability, sensitivity, and specificity of detecting enthesitis supports its use in this setting.
Analytic processing of distance.
Dopkins, Stephen; Galyer, Darin
2018-01-01
How does a human observer extract from the distance between two frontal points the component corresponding to an axis of a rectangular reference frame? To find out we had participants classify pairs of small circles, varying on the horizontal and vertical axes of a computer screen, in terms of the horizontal distance between them. A response signal controlled response time. The error rate depended on the irrelevant vertical as well as the relevant horizontal distance between the test circles, with the relevant distance effect being larger than the irrelevant distance effect. The results implied that the horizontal distance between the test circles was imperfectly extracted from the overall distance between them. The results supported an account, derived from the Exemplar Based Random Walk model (Nosofsky & Palmeri, 1997), under which distance classification is based on the overall distance between the test circles, with relevant distance being extracted from overall distance to the extent that the relevant and irrelevant axes are differentially weighted so as to reduce the contribution of irrelevant distance to overall distance. The results did not support an account, derived from the General Recognition Theory (Ashby & Maddox, 1994), under which distance classification is based on the relevant distance between the test circles, with the irrelevant distance effect arising because a test circle's perceived location on the relevant axis depends on its location on the irrelevant axis, and with relevant distance being extracted from overall distance to the extent that this dependency is absent. Copyright © 2017 Elsevier B.V. All rights reserved.
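The differential-weighting account can be made concrete with a toy computation in which the irrelevant vertical axis is down-weighted in the overall distance; the weight values below are assumptions for illustration, not estimates from the study:

```python
import math

def weighted_distance(p, q, w_relevant=0.9, w_irrelevant=0.1):
    """Overall distance between two screen points with attention weights
    that down-weight the irrelevant (vertical) axis. Weights are illustrative."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    return math.sqrt(w_relevant * dx ** 2 + w_irrelevant * dy ** 2)

# Same horizontal (relevant) separation, different vertical separation:
d_small = weighted_distance((0, 0), (3, 0))
d_large = weighted_distance((0, 0), (3, 4))
# The irrelevant vertical distance still leaks into the judged distance,
# but far less than it would under equal axis weighting.
```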
Fahed, Robert; Lecler, Augustin; Sabben, Candice; Khoury, Naim; Ducroux, Célina; Chalumeau, Vanessa; Botta, Daniele; Kalsoum, Erwah; Boisseau, William; Duron, Loïc; Cabral, Dominique; Koskas, Patricia; Benaïssa, Azzedine; Koulakian, Hasmik; Obadia, Michael; Maïer, Benjamin; Weisenburger-Lile, David; Lapergue, Bertrand; Wang, Adrien; Redjem, Hocine; Ciccio, Gabriele; Smajda, Stanislas; Desilles, Jean-Philippe; Mazighi, Mikaël; Ben Maacha, Malek; Akkari, Inès; Zuber, Kevin; Blanc, Raphaël; Raymond, Jean; Piotin, Michel
2018-01-01
We aimed to study the intrarater and interrater agreement of clinicians attributing DWI-ASPECTS (Diffusion-Weighted Imaging-Alberta Stroke Program Early Computed Tomography Scores) and DWI-FLAIR (Diffusion-Weighted Imaging-Fluid Attenuated Inversion Recovery) mismatch in patients with acute ischemic stroke referred for mechanical thrombectomy. Eighteen raters independently scored anonymized magnetic resonance imaging scans of 30 participants from a multicentre thrombectomy trial, in 2 different reading sessions. Agreement was measured using Fleiss κ and Cohen κ statistics. Interrater agreement for DWI-ASPECTS was slight (κ=0.17 [0.14-0.21]). Four raters (22.2%) had a substantial (or higher) intrarater agreement. Dichotomization of the DWI-ASPECTS (0-5 versus 6-10 or 0-6 versus 7-10) increased the interrater agreement to a substantial level (κ=0.62 [0.48-0.75] and 0.68 [0.55-0.79], respectively) and more raters reached a substantial (or higher) intrarater agreement (17/18 raters [94.4%]). Interrater agreement for DWI-FLAIR mismatch was moderate (κ=0.43 [0.33-0.57]); 11 raters (61.1%) reached a substantial (or higher) intrarater agreement. Agreement between clinicians assessing DWI-ASPECTS and DWI-FLAIR mismatch may not be sufficient to make repeatable clinical decisions in mechanical thrombectomy. The dichotomization of the DWI-ASPECTS (0-5 versus 6-10 or 0-6 versus 7-10) improved interrater and intrarater agreement; however, its relevance for patient selection for mechanical thrombectomy needs to be validated in a randomized trial. © 2017 American Heart Association, Inc.
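Cohen's κ, the pairwise agreement statistic used above, corrects raw agreement for the agreement expected by chance. A minimal sketch with hypothetical dichotomized ratings (not the trial's data):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters scoring the same items."""
    assert len(r1) == len(r2)
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n      # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    cats = set(c1) | set(c2)
    p_e = sum((c1[c] / n) * (c2[c] / n) for c in cats)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical dichotomized DWI-ASPECTS calls (1 = score 6-10, 0 = score 0-5)
rater1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
kappa = cohens_kappa(rater1, rater2)
```

With these toy ratings the observed agreement is 0.80 against a chance level of 0.58, giving κ ≈ 0.52, a "moderate" level on the conventional scale.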
Thomalla, Götz; Boutitie, Florent; Fiebach, Jochen B; Simonsen, Claus Z; Pedraza, Salvador; Lemmens, Robin; Nighoghossian, Norbert; Roy, Pascal; Muir, Keith W; Ebinger, Martin; Ford, Ian; Cheng, Bastian; Galinovic, Ivana; Cho, Tae-Hee; Puig, Josep; Thijs, Vincent; Endres, Matthias; Fiehler, Jens; Gerloff, Christian
2018-01-01
Background Diffusion-weighted imaging (DWI) and fluid-attenuated inversion recovery (FLAIR) mismatch was suggested to identify stroke patients with unknown time of symptom onset likely to be within the time window for thrombolysis. Aims We aimed to study clinical characteristics associated with DWI-FLAIR mismatch in patients with unknown onset stroke. Methods We analyzed baseline MRI and clinical data from patients with acute ischemic stroke proven by DWI from WAKE-UP, an investigator-initiated, randomized, placebo-controlled trial of MRI-based thrombolysis in stroke patients with unknown time of symptom onset. Clinical characteristics were compared between patients with and without DWI-FLAIR mismatch. Results Of 699 patients included, 418 (59.8%) presented with DWI-FLAIR mismatch. A shorter delay between last seen well and symptom recognition (p = 0.0063), a shorter delay between symptom recognition and arrival at hospital (p = 0.0025), and history of atrial fibrillation (p = 0.19) were predictors of DWI-FLAIR mismatch in multivariate analysis. All other characteristics were comparable between groups. Conclusions There are only minor differences in measured clinical characteristics between unknown symptom onset stroke patients with and without DWI-FLAIR mismatch. DWI-FLAIR mismatch as an indicator of stroke onset within 4.5 h shows no relevant association with commonly collected clinical characteristics of stroke patients. Clinical Trial Registration URL: http://www.clinicaltrials.gov . Unique identifier: NCT01525290; URL: https://www.clinicaltrialsregister.eu . Unique identifier: 2011-005906-32.
Vock, David M; Wolfson, Julian; Bandyopadhyay, Sunayan; Adomavicius, Gediminas; Johnson, Paul E; Vazquez-Benitez, Gabriela; O'Connor, Patrick J
2016-06-01
Models for predicting the probability of experiencing various health outcomes or adverse events over a certain time frame (e.g., having a heart attack in the next 5 years) based on individual patient characteristics are important tools for managing patient care. Electronic health data (EHD) are appealing sources of training data because they provide access to large amounts of rich individual-level data from present-day patient populations. However, because EHD are derived by extracting information from administrative and clinical databases, some fraction of subjects will not be under observation for the entire time frame over which one wants to make predictions; this loss to follow-up is often due to disenrollment from the health system. For subjects without complete follow-up, whether or not they experienced the adverse event is unknown, and in statistical terms the event time is said to be right-censored. Most machine learning approaches to the problem have been relatively ad hoc; for example, common approaches for handling observations in which the event status is unknown include (1) discarding those observations, (2) treating them as non-events, and (3) splitting each such observation into two observations: one where the event occurs and one where it does not. In this paper, we present a general-purpose approach to account for right-censored outcomes using inverse probability of censoring weighting (IPCW). We illustrate how IPCW can easily be incorporated into a number of existing machine learning algorithms used to mine big health care data, including Bayesian networks, k-nearest neighbors, decision trees, and generalized additive models. We then show that our approach leads to better calibrated predictions than the three ad hoc approaches when applied to predicting the 5-year risk of experiencing a cardiovascular adverse event, using EHD from a large U.S. Midwestern healthcare system. Copyright © 2016 Elsevier Inc. All rights reserved.
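A minimal sketch of the IPCW idea described above, with invented follow-up data and a 5-year horizon: subjects censored before the horizon get weight 0, and everyone else is up-weighted by the inverse of a Kaplan-Meier estimate of remaining uncensored. (A common refinement evaluates the censoring survival just before the event time; that detail is omitted here for brevity.)

```python
def km_censoring_survival(times, censored, t):
    """Kaplan-Meier estimate of P(censoring time > t).
    `censored[i]` is True when subject i was lost to follow-up, which acts
    as the 'event' in this reversed-role Kaplan-Meier estimator."""
    s = 1.0
    for u in sorted(set(times)):
        if u > t:
            break
        at_risk = sum(ti >= u for ti in times)
        cens_here = sum(1 for ti, ci in zip(times, censored) if ti == u and ci)
        if at_risk:
            s *= 1 - cens_here / at_risk
    return s

times    = [2, 3, 3, 5, 6, 7, 8, 9]   # follow-up times in years (invented)
censored = [False, True, False, False, True, False, False, False]
horizon  = 5                          # prediction horizon in years

weights = []
for ti, ci in zip(times, censored):
    if ci and ti < horizon:
        weights.append(0.0)           # censored before the horizon: excluded
    else:
        t_w = min(ti, horizon)        # observed to event time or horizon
        weights.append(1.0 / km_censoring_survival(times, censored, t_w))
```

The weights can then be passed to any learner that accepts per-sample weights, which is what makes the approach general purpose.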
International Nuclear Information System (INIS)
Stehling, C.; Niederstadt, T.; Kraemer, S.; Kugel, H.; Schwindt, W.; Heindel, W.; Bachmann, R.
2005-01-01
Purpose: The increased T1 relaxation times at 3.0 Tesla lead to a reduced T1 contrast, requiring adaptation of imaging protocols for high magnetic fields. This prospective study assesses the performance of three techniques for T1-weighted imaging (T1w) at 3.0 T with regard to gray-white differentiation and contrast-to-noise ratio (CNR). Materials and Methods: Thirty-one patients were examined on a 3.0 T system with axial T1w inversion recovery (IR), spin-echo (SE) and gradient echo (GE) sequences, and after contrast enhancement (CE) with CE-SE and CE-GE sequences. For qualitative analysis, the images were ranked with regard to artifacts, gray-white differentiation, image noise and overall diagnostic quality. For quantitative analysis, the CNR was calculated, and cortex and basal ganglia were compared with the white matter. Results: In the qualitative analysis, IR was judged superior to SE and GE for gray-white differentiation, image noise and overall diagnostic quality, but inferior to the GE sequence with regard to artifacts. CE-GE proved superior to CE-SE in all categories. In the quantitative analysis, the CNR of the basal ganglia was highest for IR, followed by GE and SE. For the CNR of the cortex, no significant difference was found between IR (16.9) and GE (15.4), but both were superior to SE (9.4). The CNR of the cortex was significantly higher for CE-GE compared to CE-SE (12.7 vs. 7.6, p<0.001), but the CNR of the basal ganglia was not significantly different. Conclusion: For unenhanced T1w imaging at 3.0 T, the IR technique is, despite increased artifacts, the method of choice due to its superior gray-white differentiation and best overall image quality. For CE studies, GE sequences are recommended. For cerebral imaging, SE sequences give unsatisfactory results at 3.0 T.
International Nuclear Information System (INIS)
Namatame, Hirofumi; Taniguchi, Masaki
1994-01-01
Photoelectron spectroscopy is regarded as one of the most powerful probes of electronic structure, since it can map the occupied electron states almost completely. Inverse photoelectron spectroscopy, by contrast, measures the unoccupied electron states by exploiting the inverse of the photoemission process, so that, in principle, experiments analogous to photoelectron spectroscopy become feasible. The experimental technology for inverse photoelectron spectroscopy has been developed energetically by many research groups. At present, work is under way on improving the resolution of inverse photoelectron spectroscopy and on developing inverse photoelectron spectrometers with tunable photon energy, but no inverse photoelectron spectrometer for the vacuum ultraviolet region is commercially available. This report describes the principle of inverse photoelectron spectroscopy and the present state of the instrumentation, and explores the direction of future development. As experimental equipment, electron guns, photon detectors and so on are explained. As examples of the experiments, inverse photoelectron spectroscopy of semimagnetic semiconductors and resonance inverse photoelectron spectroscopy are reported. (K.I.)
Wang, Zheng-Xin; Li, Dan-Dan; Zheng, Hong-Hao
2018-01-30
In China's industrialization process, the effective regulation of energy and the environment can promote the positive externalities of energy consumption while reducing the negative ones, which is an important means of realizing the sustainable development of an economic society. This study puts forward an improved technique for order preference by similarity to an ideal solution based on entropy weight and Mahalanobis distance (briefly referred to as E-M-TOPSIS). The performance of the approach was verified to be satisfactory. Using the traditional and improved TOPSIS methods separately, the study carried out empirical appraisals of the external performance of China's energy regulation during 1999-2015. The results show that the correlation between the performance indexes causes a significant difference between the appraisal results of E-M-TOPSIS and traditional TOPSIS. The E-M-TOPSIS takes the correlation between indexes into account and generally softens the closeness degree compared with traditional TOPSIS. Moreover, it makes the relative closeness degree fluctuate within a small amplitude. The results conform to the practical conditions of China's energy regulation, and therefore the E-M-TOPSIS is favorably applicable to the external performance appraisal of energy regulation. Additionally, the external economic performance and the social responsibility performance (including environmental and energy safety performances) based on the E-M-TOPSIS exhibit significantly different fluctuation trends. The external economic performance fluctuates dramatically with a larger amplitude, while the social responsibility performance exhibits a relatively stable interval fluctuation. This indicates that, compared to the social responsibility performance, the fluctuation of the external economic performance is more sensitive to energy regulation.
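The entropy-weighting step of such an approach can be sketched as follows; for brevity the final distances are weighted Euclidean ones, whereas the paper's E-M-TOPSIS uses a Mahalanobis distance precisely to account for correlated indexes. All numbers are invented:

```python
import math

# Toy 4x2 decision matrix: rows = alternatives (e.g. years),
# columns = benefit criteria (larger is better). Values are invented.
X = [
    [0.60, 0.30],
    [0.80, 0.50],
    [0.70, 0.90],
    [0.90, 0.70],
]

def entropy_weights(X):
    """Objective criterion weights from the Shannon entropy of each column."""
    m = len(X)
    raw = []
    for col in zip(*X):
        s = sum(col)
        p = [x / s for x in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        raw.append(1.0 - e)            # more dispersion -> larger weight
    total = sum(raw)
    return [w / total for w in raw]

def topsis(X, w):
    """Relative closeness of each alternative to the ideal solution."""
    cols = list(zip(*X))
    ideal = [max(c) for c in cols]     # best value per benefit criterion
    anti = [min(c) for c in cols]      # worst value per benefit criterion
    scores = []
    for row in X:
        d_plus = math.sqrt(sum(wi * (x - b) ** 2 for x, b, wi in zip(row, ideal, w)))
        d_minus = math.sqrt(sum(wi * (x - a) ** 2 for x, a, wi in zip(row, anti, w)))
        scores.append(d_minus / (d_plus + d_minus))
    return scores

w = entropy_weights(X)
closeness = topsis(X, w)   # each value lies in [0, 1]; larger is better
```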
Directory of Open Access Journals (Sweden)
Eduardo Terrero Matos
2015-04-01
Full Text Available The use of the wind energy potential requires obtaining sufficient and appropriate measurements of wind velocity and direction. From these measurements, the behavior of these variables is modeled and the parameters that characterize the potential are calculated; with these results the wind farms are designed, selecting the most suitable wind turbines, determining their spatial locations and designing the technological infrastructure. The present research aims to solve one of the most common practical problems during this process: the absence of sufficient measured data. The proposed solution is based on the estimation of the missing data by means of the Inverse of a Power of the Distance Method, which is applied to a case study named Colina 4. The results show that the method is viable for any similar case and that the estimated values are coherent with the measured data.
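A minimal sketch of the inverse-distance-power estimate described above, with invented station coordinates and wind speeds (the paper's Colina 4 data are not reproduced here):

```python
def idw(target, stations, power=2):
    """Inverse Distance Weighted estimate at `target` from
    (x, y, value) tuples in `stations`."""
    num = den = 0.0
    for x, y, v in stations:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return v                      # exact hit: return that reading
        w = 1.0 / d2 ** (power / 2)       # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den

# Invented neighbouring stations: (x, y, wind speed in m/s)
stations = [(0, 0, 5.0), (10, 0, 7.0), (0, 10, 6.0)]
estimate = idw((2, 2), stations)          # fill in the missing reading
```

With these numbers the nearest station dominates, so the estimate (about 5.29 m/s) stays close to that station's 5.0 m/s reading.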
National Research Council Canada - National Science Library
Braddock, Joseph
1997-01-01
A study reviewing the existing Army Distance Learning Plan (ADLP) and current Distance Learning practices, with a focus on the Army's training and educational challenges and the benefits of applying Distance Learning techniques...
Inversion: A Most Useful Kind of Transformation.
Dubrovsky, Vladimir
1992-01-01
The transformation assigning to every point its inverse with respect to a circle with given radius and center is called an inversion. Discusses inversion with respect to points, circles, angles, distances, space, and the parallel postulate. Exercises related to these topics are included. (MDH)
Ingram, WT
2012-01-01
Inverse limits provide a powerful tool for constructing complicated spaces from simple ones. They also turn the study of a dynamical system consisting of a space and a self-map into a study of a (likely more complicated) space and a self-homeomorphism. In four chapters along with an appendix containing background material the authors develop the theory of inverse limits. The book begins with an introduction through inverse limits on [0,1] before moving to a general treatment of the subject. Special topics in continuum theory complete the book. Although it is not a book on dynamics, the influen
Shu, Hong; Edwards, Geoffrey; Qi, Cuihong
2001-09-01
In geographic space, it is well known that the spatial behaviors of humans are driven by their spatial cognition rather than by physical or geometrical reality. Cognitive distance in spatial cognition is fundamental to intelligent pattern recognition. More precisely, cognitive distance can be used to measure the similarity (or relevance) of cognized geographic objects. In past work, physical or Euclidean distances have been used very often. In practice, many inconsistencies are found between cognitive distance and physical distance: the physical distance is usually overestimated or underestimated in the course of human spatial behavior and pattern recognition. These inconsistencies are termed distance distortions. The aim of this paper is to illustrate the concepts of cognitive distance and distance distortion. If the cognitive distance is two-dimensional, it exists in heterogeneous space and shows the property of a quasi-metric; if it is multi-dimensional, it exists in homogeneous space and shows the property of a metric. We argue that distance distortions arise from the transformation of homogeneous to heterogeneous space and from the transformation of the two-dimensional cognitive distance to the multi-dimensional cognitive distance. In some sense, the physical distance is an instance of cognitive distance.
Codimension zero laminations are inverse limits
Lozano Rojo, Álvaro
2013-01-01
The aim of the paper is to investigate the relation between inverse limits of branched manifolds and codimension zero laminations. We give necessary and sufficient conditions for such an inverse limit to be a lamination. We also show that codimension zero laminations are inverse limits of branched manifolds. The inverse limit structure allows us to show that equicontinuous codimension zero laminations preserve a distance function on transversals.
Guo, Xiaohui; Tresserra-Rimbau, Anna; Estruch, Ramón; Martínez-González, Miguel A.; Medina-Remón, Alexander; Fitó, Montserrat; Corella, Dolores; Salas-Salvadó, Jordi; Portillo, Maria Puy; Moreno, Juan J.; Pi-Sunyer, Xavier; Lamuela-Raventós, Rosa M.
2017-01-01
Overweight and obesity have been steadily increasing in recent years and currently represent a serious threat to public health. Few human studies have investigated the relationship between polyphenol intake and body weight. Our aim was to assess the relationship between urinary polyphenol levels and body weight. A cross-sectional study was performed with 573 participants from the PREDIMED (Prevención con Dieta Mediterránea) trial (ISRCTN35739639). Total polyphenol levels were measured by a re...
Gueye, Aliou B.; Pryslawsky, Yaroslaw; Trigo, Jose M.; Poulia, Nafsika; Delis, Foteini; Antoniou, Katerina; Loureiro, Michael; Laviolette, Steve R.; Vemuri, Kiran; Makriyannis, Alexandros
2016-01-01
Background: Multiple studies suggest a pivotal role of the endocannabinoid system in regulating the reinforcing effects of various substances of abuse. Rimonabant, a CB1 inverse agonist found to be effective for smoking cessation, was associated with an increased risk of anxiety and depression. Here we evaluated the effects of the CB1 neutral antagonist AM4113 on the abuse-related effects of nicotine and its effects on anxiety and depressive-like behavior in rats. Methods: Rats were trained to self-administer nicotine under fixed-ratio 5 or progressive-ratio schedules of reinforcement. A control group was trained to self-administer food. The acute/chronic effects of AM4113 pretreatment were evaluated on nicotine taking, motivation for nicotine, and cue-, nicotine priming- and yohimbine-induced reinstatement of nicotine-seeking. The effects of AM4113 on the basal firing and bursting activity of midbrain dopamine neurons were evaluated in a separate group of animals treated with nicotine. Anxiety/depression-like effects of AM4113 and rimonabant were evaluated 24 h after chronic (21 days) pretreatment (0, 1, 3, and 10 mg/kg, once daily). Results: AM4113 significantly attenuated nicotine taking, motivation for nicotine, as well as cue-, priming- and stress-induced reinstatement of nicotine-seeking behavior. These effects were accompanied by a decrease of the firing and burst rates of ventral tegmental area dopamine neurons in response to nicotine. On the other hand, AM4113 pretreatment did not have effects on operant responding for food. Importantly, AM4113 did not have effects on anxiety and showed antidepressant-like effects. Conclusion: Our results indicate that AM4113 could be a promising therapeutic option for the prevention of relapse to nicotine-seeking while lacking anxiety/depression-like side effects. PMID:27493155
DEFF Research Database (Denmark)
Nordlund, David; Heiberg, Einar; Carlsson, Marcus
2016-01-01
Background - Contrast-enhanced steady state free precession (CE-SSFP) and T2-weighted short tau inversion recovery (T2-STIR) have been clinically validated to estimate myocardium at risk (MaR) by cardiovascular magnetic resonance, using myocardial perfusion single-photon emission computed tomography as the reference standard. Myocardial perfusion single-photon emission computed tomography has been used to describe the coronary perfusion territories during myocardial ischemia. Compared with myocardial perfusion single-photon emission computed tomography, cardiovascular magnetic resonance offers superior image quality and practical advantages. Therefore, the aim was to describe the main coronary perfusion territories using CE-SSFP and T2-STIR cardiovascular magnetic resonance data in patients after acute ST-segment-elevation myocardial infarction. Methods and Results - CE-SSFP and T2-STIR data...
Willis, Erik A; Szabo-Reed, Amanda N; Ptomey, Lauren T; Steger, Felicia L; Honas, Jeffery J; Al-Hihi, Eyad M; Lee, Robert; Vansaghi, Lisa; Washburn, Richard A; Donnelly, Joseph E
2016-03-01
Management of obesity in the context of the primary care physician visit is of limited efficacy in part because of limited ability to engage participants in sustained behavior change between physician visits. Therefore, healthcare systems must find methods to address obesity that reach beyond the walls of clinics and hospitals and address the issues of lifestyle modification in a cost-conscious way. The dramatic increase in technology and online social networks may present healthcare providers with innovative ways to deliver weight management programs that could have an impact on health care at the population level. A randomized study will be conducted on 70 obese adults (BMI 30.0-45.0 kg/m²) to determine if weight loss (6 months) is equivalent between weight management interventions utilizing behavioral strategies by either a conference call or social media approach. The primary outcome, body weight, will be assessed at baseline and 6 months. Secondary outcomes including waist circumference, energy and macronutrient intake, and physical activity will be assessed on the same schedule. In addition, a cost analysis and process evaluation will be completed. Copyright © 2016 Elsevier Inc. All rights reserved.
Wier, M.F. van; Ariëns, G.A.M.; Dekkers, J.C.; Hendriksen, I.J.M.; Pronk, N.P.; Smid, T.; Mechelen, W. van
2006-01-01
Background: The prevalence of overweight is increasing and its consequences will cause a major public health burden in the near future. Cost-effective interventions for weight control among the general population are therefore needed. The ALIFE@Work study is investigating a novel lifestyle
Directory of Open Access Journals (Sweden)
Zaitseva A.
2013-03-01
Full Text Available A novel approach for improving the estimations of existing group contribution methods is developed. Instead of fixed contributions, weighted contributions are optimized for each property estimation using a database. These weighting factors are calculated based on similarities between the compound whose properties are estimated and the other compounds in the database. By this approach, those components which are chemically more similar to the estimated one can systematically be given more weight in the estimation process, while sustaining the general nature of group contribution methods. The new approach was applied to the Joback and Reid (1987) and the Marrero and Gani (2001) group contribution methods. The performances were demonstrated on Normal Boiling Point (NBP) predictions. The absolute average error of NBP estimations was reduced by 6.3 K for the Joback and Reid method and by 4 K for the Marrero and Gani method. Other physical properties such as critical pressure, volume, formation energies, melting temperature and fusion enthalpy were estimated by applying the new technique to the Joback and Reid method.
Ganesan, K; Bydder, G M
2014-09-01
This study compared T1 fluid attenuation inversion recovery (FLAIR) and T1 turbo spin echo (TSE) sequences for evaluation of cervical spine degenerative disease at 3 T. 72 patients (44 males and 28 females; mean age of 39 years; age range, 27-75 years) with suspected cervical spine degenerative disease were prospectively evaluated. Sagittal images of the spine were obtained using T1 FLAIR and T1 TSE sequences. Two experienced neuroradiologists compared the sequences qualitatively and quantitatively. On qualitative evaluation, cerebrospinal fluid (CSF) nulling and contrast at cord-CSF, disc-CSF and disc-cord interfaces were significantly higher on fast T1 FLAIR images than on T1 TSE images. T1 FLAIR was preferred for evaluation of cervical spine degenerative disease, owing to higher cord-CSF, disc-cord and disc-CSF contrast. However, intrinsic cord contrast is low on T1 FLAIR images. T1 FLAIR is more promising and sensitive than T1 TSE for evaluation of degenerative spondyloarthropathy and may provide a foundation for development of MR protocols for early detection of degenerative and neoplastic diseases.
Directory of Open Access Journals (Sweden)
Hendriksen Ingrid JM
2006-05-01
Full Text Available Abstract Background The prevalence of overweight is increasing and its consequences will cause a major public health burden in the near future. Cost-effective interventions for weight control among the general population are therefore needed. The ALIFE@Work study is investigating a novel lifestyle intervention, aimed at the working population, with individual counselling through either phone or e-mail. This article describes the design of the study and the participant flow up to and including randomisation. Methods/Design ALIFE@Work is a controlled trial, with randomisation to three arms: a control group, a phone based intervention group and an internet based intervention group. The intervention takes six months and is based on a cognitive behavioural approach, addressing physical activity and diet. It consists of 10 lessons with feedback from a personal counsellor, either by phone or e-mail, between each lesson. Lessons contain educational content combined with behaviour change strategies. Assignments in each lesson teach the participant to apply these strategies to everyday life. The study population consists of employees from seven Dutch companies. The most important inclusion criteria are having a body mass index (BMI) ≥ 25 kg/m² and being an employed adult. Primary outcomes of the study are body weight and BMI, diet and physical activity. Other outcomes are: perceived health; empowerment; stage of change and self-efficacy concerning weight control, physical activity and eating habits; work performance/productivity; waist circumference, sum of skin folds, blood pressure, total blood cholesterol level and aerobic fitness. A cost-utility and a cost-effectiveness analysis will be performed as well. Physiological outcomes are measured at baseline and after six and 24 months. Other outcomes are measured by questionnaire at baseline and after six, 12, 18 and 24 months. Statistical analyses of the short-term (six-month) results are performed with
Directory of Open Access Journals (Sweden)
Fen Wei
2016-01-01
Full Text Available In order to sufficiently capture the useful fault-related information available from the multiple vibration sensors used in rotating machinery, while avoiding the limitations of high dimensionality, a new fault diagnosis method for rotating machinery based on supervised second-order tensor locality preserving projection (SSTLPP) and a weighted k-nearest neighbor classifier (WKNNC) with an assembled matrix distance metric (AMDM) is presented. Second-order tensor representation of multisensor-fused conditional features is employed to replace the prevailing vector description of features from a single sensor. Then, an SSTLPP algorithm under AMDM (SSTLPP-AMDM) is presented to realize dimension reduction of the original high-dimensional feature tensor. Compared with classical second-order tensor locality preserving projection (STLPP), the SSTLPP-AMDM algorithm not only considers both local neighbor information and class label information but also replaces the existing Frobenius distance measure with AMDM for construction of the similarity weighting matrix. Finally, the obtained low-dimensional feature tensor is input into WKNNC with AMDM to implement the fault diagnosis of the rotating machinery. A fault diagnosis experiment is performed for a gearbox, which demonstrates that the second-order tensor formed from multisensor-fused fault data gives good results for multisensor fusion fault diagnosis and that the formulated fault diagnosis method can effectively improve diagnostic accuracy.
Indian Academy of Sciences (India)
Further Suggested Reading. [1] P C Mahalanobis, A statistical study of the Chinese head, Man in India, 8, 107-122, 1928. [2] P C Mahalanobis, On the generalised distance in statistics, Proceedings of the National Institute of Sciences of India, 2, 49-55, 1936. [3] P C Mahalanobis, Normalisation of statistical variates and the use ...
Inverse comptonization vs. thermal synchrotron
International Nuclear Information System (INIS)
Fenimore, E.E.; Klebesadel, R.W.; Laros, J.G.
1983-01-01
There are currently two radiation mechanisms being considered for gamma-ray bursts: thermal synchrotron and inverse comptonization. They are mutually exclusive, since thermal synchrotron requires a magnetic field of approx. 10^12 gauss, whereas inverse comptonization cannot produce a monotonic spectrum if the field is larger than 10^11 gauss and is too inefficient relative to thermal synchrotron unless the field is less than 10^9 gauss. Neither mechanism can explain completely the observed characteristics of gamma-ray bursts. However, we conclude that thermal synchrotron is more consistent with the observations if the sources are approx. 40 kpc away, whereas inverse comptonization is more consistent if they are approx. 300 pc away. Unfortunately, the source distance is still not known and, thus, the radiation mechanism is still uncertain.
Distance Dependent Model for the Delay Power Spectrum of In-room Radio Channels
DEFF Research Database (Denmark)
Steinböck, Gerhard; Pedersen, Troels; Fleury, Bernard Henri
2013-01-01
A model based on experimental observations of the delay power spectrum in closed rooms is proposed. The model includes the distance between the transmitter and the receiver as a parameter, which makes it suitable for range-based radio localization. The experimental observations motivate the proposed model of the delay power spectrum with a primary (early) component and a reverberant component (tail). The primary component is modeled as a Dirac delta function weighted according to an inverse distance power law (d^-n). The reverberant component is an exponentially decaying function with onset equal to the propagation time between transmitter and receiver. Its power decays exponentially with distance. The proposed model allows for the prediction of e.g. the path loss, mean delay, root mean squared (rms) delay spread, and kurtosis versus distance. The model predictions are validated by measurements.
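The two-component model can be sketched as follows; the constants below are illustrative assumptions, not the fitted values from the measurements:

```python
import numpy as np

c = 3e8         # propagation speed [m/s]
n = 2.0         # primary-component path-gain exponent (assumed)
G0 = 1.0        # primary-component reference gain at 1 m (assumed)
P_rev = 0.01    # reverberant power scale (assumed)
T_rev = 20e-9   # reverberation decay time [s] (assumed)

def _trapz(y, x):
    """Trapezoidal integration (avoids np.trapz, which NumPy 2 removed)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def tail(tau, d):
    """Reverberant component: exponential decay starting at the propagation delay d/c."""
    onset = d / c
    return np.where(tau >= onset, (P_rev / T_rev) * np.exp(-(tau - onset) / T_rev), 0.0)

def mean_delay_and_rms(d, tau=np.linspace(0.0, 500e-9, 20001)):
    """First two delay moments of the full spectrum:
    Dirac primary component of weight G0*d^-n plus the reverberant tail."""
    w_primary = G0 * d**(-n)
    y = tail(tau, d)
    P_tot = w_primary + _trapz(y, tau)
    m1 = (w_primary * d / c + _trapz(tau * y, tau)) / P_tot              # mean delay
    m2 = (w_primary * (d / c) ** 2 + _trapz(tau**2 * y, tau)) / P_tot   # second moment
    return m1, np.sqrt(max(m2 - m1**2, 0.0))                            # rms delay spread
```

As distance grows, the primary component's d^-n weight shrinks, so the rms delay spread climbs toward the tail's decay constant — the distance dependence the model is built to capture.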
Optical inverse-square displacement sensor
Howe, R.D.; Kychakoff, G.
1989-09-12
This invention comprises an optical displacement sensor that uses the inverse-square attenuation of light reflected from a diffused surface to calculate the distance from the sensor to the reflecting surface. Light emerging from an optical fiber or the like is directed onto the surface whose distance is to be measured. The intensity I of reflected light is angle dependent, but within a sufficiently small solid angle it falls off as the inverse square of the distance from the surface. At least a pair of optical detectors are mounted to detect the reflected light within the small solid angle, their ends being at different distances R and R + ΔR from the surface. The distance R can then be found in terms of the ratio of the intensity measurements and the separation length as given in an equation. 10 figs.
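The ratio computation described in the patent abstract follows directly from the inverse-square law: I_near/I_far = ((R + ΔR)/R)², which solves to R = ΔR/(√(I_near/I_far) − 1). A small sketch with synthetic intensities:

```python
import math

def range_from_ratio(i_near, i_far, delta_r):
    """Distance R to the surface from two detectors at R and R + delta_r.
    Inverse-square law: i_near / i_far = ((R + delta_r) / R)**2,
    hence R = delta_r / (sqrt(i_near / i_far) - 1)."""
    ratio = math.sqrt(i_near / i_far)
    if ratio <= 1.0:
        raise ValueError("near detector must read higher intensity than far detector")
    return delta_r / (ratio - 1.0)

# Synthetic check: a surface 0.10 m away, detectors separated by 0.02 m
R, dR = 0.10, 0.02
i1, i2 = 1.0 / R**2, 1.0 / (R + dR)**2    # ideal inverse-square readings
print(range_from_ratio(i1, i2, dR))        # recovers 0.10
```

Note that only the intensity ratio enters, so the surface reflectivity and source power cancel out — the practical appeal of the two-detector arrangement.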
Yao, Xiu-Zhong; Yun, Hong; Zeng, Meng-Su; Wang, He; Sun, Fei; Rao, Sheng-Xiang; Ji, Yuan
2013-05-01
The objective of this paper was to investigate the value of apparent diffusion coefficients (ADCs) for differential diagnosis among solid pancreatic masses using respiratory-triggered diffusion-weighted MR imaging with an inversion-recovery fat-suppression technique (RT-IR-DWI) at 3.0 T. 20 normal volunteers and 72 patients (pancreatic ductal adenocarcinoma [PDCA, n=30], mass-forming pancreatitis [MFP, n=15], solid pseudopapillary neoplasm [SPN, n=12], and pancreatic neuroendocrine tumor [PNET, n=15]) underwent RT-IR-DWI (b values: 0 and 600 s/mm²) at 3.0 T. Results were correlated with histopathologic data and follow-up imaging. ADC values among different types of pancreatic tissue were statistically analyzed and compared. A statistical difference was noticed in ADC values among normal pancreas, MFP, PDCA, SPN and PNET by ANOVA, and among PDCA, MFP and normal pancreas by Least Significant Difference (LSD) testing. ADC of SPN was statistically lower than that of PDCA (p=0.0300×10⁻⁴) and normal pancreas (p=0.0007×10⁻⁴). ADC of PNET was statistically lower than that of normal pancreas (p=0.0360) and higher than that of MFP (p=9.3000×10⁻⁴). ADC measurements using RT-IR-DWI at 3.0 T may help disclose the histopathological pattern of normal pancreas and solid pancreatic masses, which may be helpful in characterizing solid pancreatic lesions. Copyright © 2013 Elsevier Inc. All rights reserved.
Inverse problems of geophysics
International Nuclear Information System (INIS)
Yanovskaya, T.B.
2003-07-01
This report gives an overview and the mathematical formulation of geophysical inverse problems. General principles of statistical estimation are explained. The maximum likelihood and least square fit methods, the Backus-Gilbert method and general approaches for solving inverse problems are discussed. General formulations of linearized inverse problems, singular value decomposition and properties of pseudo-inverse solutions are given
Directory of Open Access Journals (Sweden)
Halis Aygün
2008-01-01
Full Text Available We introduce definitions of fuzzy inverse compactness, fuzzy inverse countable compactness, and fuzzy inverse Lindelöfness on arbitrary -fuzzy sets in -fuzzy topological spaces. We prove that the proposed definitions are good extensions of the corresponding concepts in ordinary topology and obtain different characterizations of fuzzy inverse compactness.
A Comparison of Weights Matrices on Computation of Dengue Spatial Autocorrelation
Suryowati, K.; Bekti, R. D.; Faradila, A.
2018-04-01
Spatial autocorrelation is a spatial analysis method for identifying patterns of relationship or correlation between locations. It is important for characterizing the dispersal patterns of a region and the linkages between locations. In this study, it is applied to the incidence of Dengue Hemorrhagic Fever (DHF) in 17 sub-districts in Sleman, Daerah Istimewa Yogyakarta Province. The links among locations are indicated by a spatial weight matrix, which describes the neighbourhood structure and reflects spatial influence. According to the type of spatial data, weighting matrices can be divided into two types: point type (distance) and neighbourhood area (contiguity). The choice of weighting function is one determinant of the results of a spatial analysis. This study uses queen contiguity first-order neighbour weights, queen contiguity second-order neighbour weights, and inverse distance weights. Queen contiguity first order and inverse distance weights show significant spatial autocorrelation in DHF, but queen contiguity second order does not. The first- and second-order queen contiguity matrices yield neighbour lists of 68 and 86 pairs, respectively.
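One of the weight matrices compared above — inverse distance, row-standardized — together with global Moran's I can be sketched as follows (the coordinates are hypothetical, not the Sleman sub-district geometry):

```python
import numpy as np

def inverse_distance_weights(coords, power=1.0):
    """Row-standardized inverse-distance spatial weight matrix."""
    coords = np.asarray(coords, float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    W = np.zeros_like(d)
    mask = ~np.eye(len(coords), dtype=bool)    # zero weight on the diagonal
    W[mask] = 1.0 / d[mask]**power
    return W / W.sum(axis=1, keepdims=True)    # each row sums to 1

def morans_i(x, W):
    """Global Moran's I: positive for clustered values, near -1/(n-1) for random ones."""
    z = np.asarray(x, float) - np.mean(x)
    return (len(z) / W.sum()) * (z @ W @ z) / (z @ z)

# Two tight clusters with similar incidence values -> strong positive autocorrelation
coords = [(0, 0), (0, 0.1), (10, 0), (10, 0.1)]
incidence = [1.0, 1.2, 5.0, 5.1]
print(morans_i(incidence, inverse_distance_weights(coords)))
```

Switching the weight matrix (e.g. to a contiguity-based one) changes the neighbour structure and hence the value and significance of Moran's I, which is exactly the sensitivity the study investigates.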
Fast Exact Euclidean Distance (FEED): A new class of adaptable distance transforms
Schouten, Theo E.; van den Broek, Egon
2014-01-01
A new unique class of foldable distance transforms of digital images (DT) is introduced, baptized: Fast Exact Euclidean Distance (FEED) transforms. FEED class algorithms calculate the DT starting directly from the definition or rather its inverse. The principle of FEED class algorithms is
Fuzzy logic guided inverse treatment planning
International Nuclear Information System (INIS)
Yan Hui; Yin Fangfang; Guan Huaiqun; Kim, Jae Ho
2003-01-01
A fuzzy logic technique was applied to optimize the weighting factors in the objective function of an inverse treatment planning system for intensity-modulated radiation therapy (IMRT). Based on this technique, the optimization of weighting factors is guided by the fuzzy rules while the intensity spectrum is optimized by a fast-monotonic-descent method. The resultant fuzzy logic guided inverse planning system is capable of finding the optimal combination of weighting factors for different anatomical structures involved in treatment planning. This system was tested using one simulated (but clinically relevant) case and one clinical case. The results indicate that the optimal balance between the target dose and the critical organ dose is achieved by a refined combination of weighting factors. With the help of fuzzy inference, the efficiency and effectiveness of inverse planning for IMRT are substantially improved
Directory of Open Access Journals (Sweden)
Longxiang Li
Full Text Available Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind-field. In this study, an interpolation method based on the shortest path distance is developed to characterize the impact of complex urban wind-field on the distribution of the particulate matter concentration. In this method, the wind-field is incorporated by first interpolating the observed wind-field from a meteorological-station network, then using this continuous wind-field to construct a cost surface based on Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW with the shortest wind-field path distances. This proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind-fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both the prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for the assessment of the effects of pollutants on human health.
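The substitution at the heart of the method — shortest-path distances over a cost surface in place of Euclidean distances inside IDW — can be sketched on a raster grid. The uniform cost grid below is a stand-in for the Gaussian-dispersion cost surface built from the interpolated wind field; station positions and values are hypothetical:

```python
import heapq
import numpy as np

def shortest_path_distances(cost, src):
    """Dijkstra over a 2D cost grid (4-connected): effective distance from cell
    `src` to every cell, where stepping into a cell pays that cell's cost."""
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

def idw_on_paths(stations, values, cost, power=2.0):
    """IDW where Euclidean distances are replaced by shortest-path distances."""
    dists = np.stack([shortest_path_distances(cost, s) for s in stations])
    w = 1.0 / np.maximum(dists, 1e-9)**power
    est = (w * np.asarray(values)[:, None, None]).sum(axis=0) / w.sum(axis=0)
    for s, v in zip(stations, values):    # exact at station cells
        est[s] = v
    return est
```

With a non-uniform cost surface, cells downwind of a source become "closer" in path distance than their Euclidean position suggests, which is how the wind-flow effect enters the interpolation.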
Amirpour Haredasht, Sara; Polson, Dale; Main, Rodger; Lee, Kyuyoung; Holtkamp, Derald; Martínez-López, Beatriz
2017-06-07
Porcine reproductive and respiratory syndrome (PRRS) is one of the most economically devastating infectious diseases for the swine industry. A better understanding of the disease dynamics and the transmission pathways under diverse epidemiological scenarios is key to successful PRRS control and elimination in endemic settings. In this paper we used a two-step parameter-driven (PD) Bayesian approach to model the spatio-temporal dynamics of PRRS and predict the PRRS status on farm in subsequent time periods in an endemic setting in the US. For such purpose we used information from a production system with 124 pig sites that reported 237 PRRS cases from 2012 to 2015, for which the pig trade network and the geographical location of farms (i.e., distance was used as a proxy of airborne transmission) were available. We estimated five PD models with different weights, namely: (i) a geographical distance weight containing the inverse distance between each pair of farms in kilometers, (ii) a pig trade weight (PT_ji) containing the absolute number of pig movements between each pair of farms, (iii) the product of the distance weight and the standardized relative pig trade weight, (iv) the product of the standardized distance weight and the standardized relative pig trade weight, and (v) the product of the distance weight and the pig trade weight. The model that included the pig trade weight matrix provided the best fit to model the dynamics of PRRS cases on a 6-month basis from 2012 to 2015 and was able to predict PRRS outbreaks in the subsequent time period with an area under the ROC curve (AUC) of 0.88 and an accuracy of 85% (105/124). The result of this study reinforces the importance of pig trade in PRRS transmission in the US. Methods and results of this study may be easily adapted to any production system to characterize the PRRS dynamics under diverse epidemic settings to more timely support decision-making.
Water quality assessment and mapping using inverse distance ...
African Journals Online (AJOL)
Water quality parameters were analyzed, namely: temperature, turbidity, pH, dissolved oxygen, biochemical oxygen demand (BOD), chemical oxygen demand (COD), total nitrogen and total phosphorus, using standard methods. Rainy season results were ...
The inverse square law of gravitation
International Nuclear Information System (INIS)
Cook, A.H.
1987-01-01
The inverse square law of gravitation is very well established over the distances of celestial mechanics, while in electrostatics the law has been shown to be followed to very high precision. However, it is only within the last century that any laboratory experiments have been made to test the inverse square law for gravitation, and all but one have been carried out in the last ten years. At the same time, there has been considerable interest in the possibility of deviations from the inverse square law, either because of a possible bearing on unified theories of forces, including gravitation or, most recently, because of a possible additional fifth force of nature. In this article the various lines of evidence for the inverse square law are summarized, with emphasis upon the recent laboratory experiments. (author)
Distance and Cable Length Measurement System
Hernández, Sergio Elias; Acosta, Leopoldo; Toledo, Jonay
2009-01-01
A simple, economic and successful design for distance and cable length detection is presented. The measurement system is based on the continuous repetition of a pulse that endlessly travels along the distance to be detected. There is a pulse repeater at both ends of the distance or cable to be measured. The endless repetition of the pulse generates a frequency that varies almost inversely with the distance to be measured. The resolution and the distance or cable length range can be adjusted by varying the repetition time delay introduced at both ends and the measurement time. With this design a distance can be measured with centimeter resolution using an electronic system with microsecond resolution, simplifying classical time-of-flight designs, which require electronics with picosecond resolution. This design was also applied to position measurement. PMID:22303169
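The inverse relation between repetition frequency and length can be written down directly: one repetition cycle is two traversals of the cable plus a fixed repeater delay at each end. A sketch with assumed values (the propagation speed and delay are illustrative, not the paper's hardware figures):

```python
def distance_from_repetition_rate(freq_hz, repeater_delay_s, v=2e8):
    """Cable length from the measured pulse-repetition frequency.
    One cycle = two traversals of the cable plus one fixed delay at each end:
    T = 1/f = 2*d/v + 2*tau  =>  d = v * (1/f - 2*tau) / 2.
    v is the propagation speed (roughly 2/3 c in typical coax); tau the per-end delay."""
    return v * (1.0 / freq_hz - 2.0 * repeater_delay_s) / 2.0

# Round trip for a hypothetical 100 m cable with a 10 ns repeater delay at each end
tau, d_true, v = 10e-9, 100.0, 2e8
f = 1.0 / (2 * d_true / v + 2 * tau)      # the frequency the counter would measure
print(distance_from_repetition_rate(f, tau, v))   # recovers 100.0
```

Because the length is encoded in a frequency, a counter with only microsecond-scale timing but a long gate time resolves centimeter-scale length changes, which is the simplification over direct time-of-flight the abstract highlights.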
Laterally constrained inversion for CSAMT data interpretation
Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun
2015-10-01
Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this improves the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than traditional single-station inversion; for the noisy data, the true model is recovered even at a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. Then, we re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison for the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global search algorithm (simulated annealing, SA) in the watershed shows that, although both methods deliver similarly good results, the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
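The Jacobian preconditioning described above — a weighting matrix that balances parameter sensitivities — is commonly implemented as column scaling. This sketch assumes that simple form, not necessarily the authors' exact weighting:

```python
import numpy as np

def balance_jacobian(J, eps=1e-12):
    """Right-precondition a Jacobian so each model parameter's column has unit norm,
    evening out sensitivities. Returns the scaled Jacobian and the scaling matrix;
    after solving the scaled system for m', recover the model update as m = W @ m'."""
    norms = np.linalg.norm(J, axis=0)
    W = np.diag(1.0 / np.maximum(norms, eps))   # eps guards dead (zero-sensitivity) columns
    return J @ W, W

# Two parameters with wildly different sensitivities (hypothetical values)
J = np.array([[1e3, 1.0],
              [2e3, 0.5]])
J_s, W = balance_jacobian(J)
print(np.linalg.norm(J_s, axis=0))   # both columns now have unit norm
```

With the columns equilibrated, a gradient-based update no longer steps almost exclusively along the most sensitive parameter, which is why such scaling tends to improve convergence.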
The Inverse-Square Law with Data Loggers
Bates, Alan
2013-01-01
The inverse-square law for the intensity of light received at a distance from a light source has been verified using various experimental techniques. Typical measurements involve a manual variation of the distance between a light source and a light sensor, usually by sliding the sensor or source along a bench, measuring the source-sensor distance…
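The experiment described above amounts to fitting intensity against distance; a minimal sketch with synthetic, noise-free "data logger" readings (the distances and the constant are made up for illustration):

```python
import numpy as np

# Synthetic source-sensor distances (m) and intensities obeying I = k/d^2.
d = np.array([0.2, 0.3, 0.4, 0.5, 0.7, 1.0])
I = 2.5 / d**2

# Check the exponent with a log-log fit: log I = log k - 2 log d.
slope, log_k = np.polyfit(np.log(d), np.log(I), 1)

# Estimate k directly by least squares of I against 1/d^2 (through the origin).
x = 1.0 / d**2
k_est = np.sum(x * I) / np.sum(x * x)
```

With real logged data the slope would only approximate -2; the log-log fit is the usual way to verify the law rather than assume it.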
Inverse Kinematics using Quaternions
DEFF Research Database (Denmark)
Henriksen, Knud; Erleben, Kenny; Engell-Nørregård, Morten
In this project I describe the status of inverse kinematics research, with the focus firmly on the methods that solve the core problem. An overview of the different methods is presented. Three common methods used in inverse kinematics computation have been chosen as subjects for closer inspection....
Inverse logarithmic potential problem
Cherednichenko, V G
1996-01-01
The Inverse and Ill-Posed Problems Series is a series of monographs publishing postgraduate level information on inverse and ill-posed problems for an international readership of professional scientists and researchers. The series aims to publish works which involve both theory and applications in, e.g., physics, medicine, geophysics, acoustics, electrodynamics, tomography, and ecology.
Deza, Michel Marie
2016-01-01
This 4th edition of the leading reference volume on distance metrics is characterized by updated and rewritten sections on some items suggested by experts and readers, as well as a general streamlining of content and the addition of essential new topics. Though the structure remains unchanged, the new edition also explores recent advances in the use of distances and metrics for e.g. generalized distances, probability theory, graph theory, coding theory, data analysis. New topics in the purely mathematical sections include e.g. the Vitanyi multiset-metric, algebraic point-conic distance, triangular ratio metric, Rossi-Hamming metric, Taneja distance, spectral semimetric between graphs, channel metrization, and Maryland bridge distance. The multidisciplinary sections have also been supplemented with new topics, including: dynamic time warping distance, memory distance, allometry, atmospheric depth, elliptic orbit distance, VLBI distance measurements, the astronomical system of units, and walkability distance. Lea...
Canter, David; Tagg, Stephen K.
1975-01-01
The results of eleven distance estimation studies made in seven cities and five countries are reported. Distances were estimated between various points within the cities in which the subjects were resident. In general, undergraduate residents' distance estimates correlated highly with actual distance, but the nonundergraduate group's did not.…
DEFF Research Database (Denmark)
Larsen, Gunvor Riber
2017-01-01
This paper explores how Danish tourists represent distance in relation to their holiday mobility and how these representations of distance are a result of being aero-mobile as opposed to being land-mobile. Based on interviews with Danish tourists, whose holiday mobility ranges from the European...... continent to global destinations, the first part of this qualitative study identifies three categories of representations of distance that show how distance is being ‘translated’ by the tourists into non-geometric forms: distance as resources, distance as accessibility, and distance as knowledge....... The representations of distance articulated by the Danish tourists show that distance is often not viewed in ‘just’ kilometres. Rather, it is understood in forms that express how transcending the physical distance through holiday mobility is dependent on individual social and economic contexts, and on whether...
International Nuclear Information System (INIS)
Burkhard, N.R.
1979-01-01
The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
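The abstract does not give the algorithmic details of TREND and INVERT; a generic sketch of the stabilized linear inversion step it alludes to (Tikhonov-damped least squares, an assumption on my part) might look like:

```python
import numpy as np

def stabilized_inverse(G, d, lam):
    """Damped least squares m = (G^T G + lam*I)^(-1) G^T d.
    The lam*I term stabilizes the otherwise ill-posed linear inverse problem."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)
```

In the workflow described above, a linearization step (TREND) and a solve like this (INVERT) would alternate until the solution converges.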
Inverse design of multicomponent assemblies
Piñeros, William D.; Lindquist, Beth A.; Jadrich, Ryan B.; Truskett, Thomas M.
2018-03-01
Inverse design can be a useful strategy for discovering interactions that drive particles to spontaneously self-assemble into a desired structure. Here, we extend an inverse design methodology—relative entropy optimization—to determine isotropic interactions that promote assembly of targeted multicomponent phases, and we apply this extension to design interactions for a variety of binary crystals ranging from compact triangular and square architectures to highly open structures with dodecagonal and octadecagonal motifs. We compare the resulting optimized (self- and cross) interactions for the binary assemblies to those obtained from optimization of analogous single-component systems. This comparison reveals that self-interactions act as a "primer" to position particles at approximately correct coordination shell distances, while cross interactions act as the "binder" that refines and locks the system into the desired configuration. For simpler binary targets, it is possible to successfully design self-assembling systems while restricting one of these interaction types to be a hard-core-like potential. However, optimization of both self- and cross interaction types appears necessary to design for assembly of more complex or open structures.
The Distance Standard Deviation
Edelmann, Dominic; Richards, Donald; Vogel, Daniel
2017-01-01
The distance standard deviation, which arises in distance correlation analysis of multivariate data, is studied as a measure of spread. New representations for the distance standard deviation are obtained in terms of Gini's mean difference and in terms of the moments of spacings of order statistics. Inequalities for the distance variance are derived, proving that the distance standard deviation is bounded above by the classical standard deviation and by Gini's mean difference. Further, it is ...
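A biased sample version of the statistic can be computed from the double-centred pairwise-distance matrix; the sample below is illustrative, and the inequalities asserted are the bounds stated in the abstract.

```python
import numpy as np

def distance_std(x):
    """Biased sample distance standard deviation of a univariate sample:
    square root of the mean squared entry of the double-centred distance matrix."""
    x = np.asarray(x, dtype=float)
    a = np.abs(x[:, None] - x[None, :])                    # pairwise |x_i - x_j|
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    return np.sqrt((A * A).mean())

def gini_mean_difference(x):
    """Gini's mean difference: average |x_i - x_j| over distinct pairs."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    a = np.abs(x[:, None] - x[None, :])
    return a.sum() / (n * (n - 1))
```

For the sample [0, 1, 2] this gives a distance standard deviation of about 0.703, below both the classical standard deviation (about 0.816) and Gini's mean difference (about 1.333), consistent with the bounds above.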
Sharp spatially constrained inversion
DEFF Research Database (Denmark)
Vignoli, Giulio G.; Fiandaca, Gianluca G.; Christiansen, Anders Vest C A.V.C.
2013-01-01
We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted...... by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes...... inversions are compared against classical smooth results and available boreholes. With the focusing approach, the obtained blocky results agree with the underlying geology and allow for easier interpretation by the end-user....
International Nuclear Information System (INIS)
Rosenwald, J.-C.
2008-01-01
The lecture addressed the following topics: Optimizing radiotherapy dose distribution; IMRT contributes to optimization of energy deposition; Inverse vs direct planning; Main steps of IMRT; Background of inverse planning; General principle of inverse planning; The 3 main components of IMRT inverse planning; The simplest cost function (deviation from prescribed dose); The driving variable : the beamlet intensity; Minimizing a 'cost function' (or 'objective function') - the walker (or skier) analogy; Application to IMRT optimization (the gradient method); The gradient method - discussion; The simulated annealing method; The optimization criteria - discussion; Hard and soft constraints; Dose volume constraints; Typical user interface for definition of optimization criteria; Biological constraints (Equivalent Uniform Dose); The result of the optimization process; Semi-automatic solutions for IMRT; Generalisation of the optimization problem; Driving and driven variables used in RT optimization; Towards multi-criteria optimization; and Conclusions for the optimization phase. (P.A.)
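The lecture's "simplest cost function" with beamlet intensity as the driving variable can be illustrated with a toy gradient method. The dose-influence matrix and prescription below are invented for the sketch; clinical IMRT optimization is far larger and uses the dose-volume and biological constraints listed above.

```python
import numpy as np

# Toy dose-influence matrix: dose in 6 voxels = A @ w for 3 beamlet intensities.
A = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.],
              [1., 1., 0.], [0., 1., 1.], [1., 0., 1.]])
w_true = np.array([1.0, 0.5, 2.0])
prescribed = A @ w_true            # prescription chosen to be achievable

# Gradient method on the cost ||A w - prescribed||^2 (deviation from
# prescribed dose), projecting onto w >= 0: intensities cannot be negative.
w = np.zeros(3)
step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)   # safe step from the Lipschitz bound
for _ in range(500):
    grad = 2.0 * A.T @ (A @ w - prescribed)
    w = np.maximum(w - step * grad, 0.0)
```

This is the "walker descending the slope" analogy made concrete: each iteration moves the beamlet intensities downhill on the quadratic cost surface.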
Speaker independent acoustic-to-articulatory inversion
Ji, An
Acoustic-to-articulatory inversion, the determination of articulatory parameters from acoustic signals, is a difficult but important problem for many speech processing applications, such as automatic speech recognition (ASR) and computer aided pronunciation training (CAPT). In recent years, several approaches have been successfully implemented for speaker dependent models with parallel acoustic and kinematic training data. However, in many practical applications inversion is needed for new speakers for whom no articulatory data is available. In order to address this problem, this dissertation introduces a novel speaker adaptation approach called Parallel Reference Speaker Weighting (PRSW), based on parallel acoustic and articulatory Hidden Markov Models (HMM). This approach uses a robust normalized articulatory space and palate referenced articulatory features combined with speaker-weighted adaptation to form an inversion mapping for new speakers that can accurately estimate articulatory trajectories. The proposed PRSW method is evaluated on the newly collected Marquette electromagnetic articulography -- Mandarin Accented English (EMA-MAE) corpus using 20 native English speakers. Cross-speaker inversion results show that given a good selection of reference speakers with consistent acoustic and articulatory patterns, the PRSW approach gives good speaker independent inversion performance even without kinematic training data.
Submucous Myoma Induces Uterine Inversion
Directory of Open Access Journals (Sweden)
Yu-Li Chen
2006-06-01
Conclusion: Nonpuerperal inversion of the uterus is rarely encountered by gynecologists. Diagnosis of uterine inversion is often not easy and imaging studies might be helpful. Surgical treatment is the method of choice in nonpuerperal uterine inversion.
Deza, Michel Marie
2014-01-01
This updated and revised third edition of the leading reference volume on distance metrics includes new items from very active research areas in the use of distances and metrics such as geometry, graph theory, probability theory and analysis. Among the new topics included are, for example, polyhedral metric space, nearness matrix problems, distances between belief assignments, distance-related animal settings, diamond-cutting distances, natural units of length, Heidegger’s de-severance distance, and brain distances. The publication of this volume coincides with intensifying research efforts into metric spaces and especially distance design for applications. Accurate metrics have become a crucial goal in computational biology, image analysis, speech recognition and information retrieval. Leaving aside the practical questions that arise during the selection of a ‘good’ distance function, this work focuses on providing the research community with an invaluable comprehensive listing of the main available di...
Predicting objective function weights from patient anatomy in prostate IMRT treatment planning.
Lee, Taewoo; Hammad, Muhannad; Chan, Timothy C Y; Craig, Tim; Sharpe, Michael B
2013-12-01
Intensity-modulated radiation therapy (IMRT) treatment planning typically combines multiple criteria into a single objective function by taking a weighted sum. The authors propose a statistical model that predicts objective function weights from patient anatomy for prostate IMRT treatment planning. This study provides a proof of concept for geometry-driven weight determination. A previously developed inverse optimization method (IOM) was used to generate optimal objective function weights for 24 patients using their historical treatment plans (i.e., dose distributions). These IOM weights were around 1% for each of the femoral heads, while bladder and rectum weights varied greatly between patients. A regression model was developed to predict a patient's rectum weight using the ratio of the overlap volume of the rectum and bladder with the planning target volume at a 1 cm expansion as the independent variable. The femoral head weights were fixed to 1% each and the bladder weight was calculated as one minus the rectum and femoral head weights. The model was validated using leave-one-out cross validation. Objective values and dose distributions generated through inverse planning using the predicted weights were compared to those generated using the original IOM weights, as well as an average of the IOM weights across all patients. The IOM weight vectors were on average six times closer to the predicted weight vectors than to the average weight vector, using l2 distance. Likewise, the bladder and rectum objective values achieved by the predicted weights were more similar to the objective values achieved by the IOM weights. The difference in objective value performance between the predicted and average weights was statistically significant according to a one-sided sign test. For all patients, the difference in rectum V54.3 Gy, rectum V70.0 Gy, bladder V54.3 Gy, and bladder V70.0 Gy values between the dose distributions generated by the predicted weights and IOM weights
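The geometry-driven weight model described above can be sketched as a one-variable regression plus the paper's normalization rule (femoral heads fixed at 1% each, bladder taking the remainder). The training numbers below are hypothetical; the actual 24-patient IOM weights are not reproduced here.

```python
import numpy as np

# Hypothetical training data: rectum/bladder PTV-overlap-volume ratio
# versus IOM-derived rectum weight (illustrative values only).
overlap_ratio = np.array([0.3, 0.5, 0.8, 1.2, 1.5])
rectum_w      = np.array([0.20, 0.28, 0.40, 0.55, 0.66])

slope, intercept = np.polyfit(overlap_ratio, rectum_w, 1)

def predict_weights(ratio):
    """Predict a full weight vector from the anatomy-derived overlap ratio."""
    rectum = float(np.clip(slope * ratio + intercept, 0.0, 0.98))
    femoral = 0.01                        # fixed at 1% per femoral head
    bladder = 1.0 - rectum - 2 * femoral  # weights sum to one
    return {"rectum": rectum, "bladder": bladder,
            "femoral_left": femoral, "femoral_right": femoral}
```

Leave-one-out cross validation, as in the study, would refit the regression with each patient held out in turn.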
Fuzzy Weighted Average: Analytical Solution
van den Broek, P.M.; Noppen, J.A.R.
2009-01-01
An algorithm is presented for the computation of analytical expressions for the extremal values of the α-cuts of the fuzzy weighted average, for triangular or trapezoidal weights and attributes. Also, an algorithm for the computation of the inverses of these expressions is given, providing exact
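For a single α-cut, each weight and attribute reduces to an interval, and the extrema of the weighted average are attained with every weight at an interval endpoint (the function is monotone in each weight separately). A brute-force sketch of that fact for small problems, not the paper's analytical algorithm:

```python
from itertools import product

def fwa_alpha_cut(x_intervals, w_intervals):
    """Alpha-cut [lo, hi] of the fuzzy weighted average sum(w*x)/sum(w),
    found by enumerating weight-interval endpoints. The average is monotone
    increasing in each attribute, so lo uses lower x bounds and hi upper."""
    n = len(x_intervals)
    lo = min(
        sum(w[i] * x_intervals[i][0] for i in range(n)) / sum(w)
        for w in product(*w_intervals)
    )
    hi = max(
        sum(w[i] * x_intervals[i][1] for i in range(n)) / sum(w)
        for w in product(*w_intervals)
    )
    return lo, hi
```

The enumeration costs 2^n evaluations per bound; the point of the paper's analytical solution is to avoid exactly this blow-up.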
Safety distance between underground natural gas and water pipeline facilities
International Nuclear Information System (INIS)
Mohsin, R.; Majid, Z.A.; Yusof, M.Z.
2014-01-01
A leaking water pipe bursting a high-pressure water jet into the soil will create slurry erosion, which will eventually erode the adjacent natural gas pipe and cause its failure. The standard 300 mm safety distance used to place natural gas pipes away from water pipeline facilities needs to be reviewed to consider accidental damage and provide a safety cushion for the natural gas pipe. This paper presents a study on underground natural gas pipeline safety distance via experimental and numerical approaches. The pressure-distance characteristic curve obtained from this experimental study showed that the pressure was inversely proportional to the square of the separation distance. Experimental testing in a water-to-water pipeline system environment was used to represent the worst-case environment and could be used as a guide to estimate an appropriate safety distance. Dynamic pressures obtained from the experimental measurements and simulation predictions mutually agreed along the high-pressure water jetting path. From the experimental and simulation exercises, the zero-effect distance for the water-to-water medium was estimated at a minimum horizontal distance of 1500 mm, while for the water-to-sand medium, the distance was estimated at a minimum of 1200 mm. - Highlights: • Safe separation distance of underground natural gas pipes was determined. • Pressure curve is inversely proportional to separation distance. • Water-to-water system represents the worst case environment. • Measured dynamic pressures mutually agreed with simulation results. • Safe separation distance of more than 1200 mm should be applied
Inverse scale space decomposition
DEFF Research Database (Denmark)
Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane
2018-01-01
We investigate the inverse scale space flow as a decomposition method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex and even and positively one-homogeneous regularisation functionals, can decompose data represented...... by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums...... of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range...
Haptic Discrimination of Distance
van Beek, Femke E.; Bergmann Tiest, Wouter M.; Kappers, Astrid M. L.
2014-01-01
While quite some research has focussed on the accuracy of haptic perception of distance, information on the precision of haptic perception of distance is still scarce, particularly regarding distances perceived by making arm movements. In this study, eight conditions were measured to answer four main questions, which are: what is the influence of reference distance, movement axis, perceptual mode (active or passive) and stimulus type on the precision of this kind of distance perception? A discrimination experiment was performed with twelve participants. The participants were presented with two distances, using either a haptic device or a real stimulus. Participants compared the distances by moving their hand from a start to an end position. They were then asked to judge which of the distances was the longer, from which the discrimination threshold was determined for each participant and condition. The precision was influenced by reference distance. No effect of movement axis was found. The precision was higher for active than for passive movements and it was a bit lower for real stimuli than for rendered stimuli, but it was not affected by adding cutaneous information. Overall, the Weber fraction for the active perception of a distance of 25 or 35 cm was about 11% for all cardinal axes. The recorded position data suggest that participants, in order to be able to judge which distance was the longer, tried to produce similar speed profiles in both movements. This knowledge could be useful in the design of haptic devices. PMID:25116638
Haptic discrimination of distance.
Directory of Open Access Journals (Sweden)
Femke E van Beek
Full Text Available While quite some research has focussed on the accuracy of haptic perception of distance, information on the precision of haptic perception of distance is still scarce, particularly regarding distances perceived by making arm movements. In this study, eight conditions were measured to answer four main questions, which are: what is the influence of reference distance, movement axis, perceptual mode (active or passive) and stimulus type on the precision of this kind of distance perception? A discrimination experiment was performed with twelve participants. The participants were presented with two distances, using either a haptic device or a real stimulus. Participants compared the distances by moving their hand from a start to an end position. They were then asked to judge which of the distances was the longer, from which the discrimination threshold was determined for each participant and condition. The precision was influenced by reference distance. No effect of movement axis was found. The precision was higher for active than for passive movements and it was a bit lower for real stimuli than for rendered stimuli, but it was not affected by adding cutaneous information. Overall, the Weber fraction for the active perception of a distance of 25 or 35 cm was about 11% for all cardinal axes. The recorded position data suggest that participants, in order to be able to judge which distance was the longer, tried to produce similar speed profiles in both movements. This knowledge could be useful in the design of haptic devices.
DEFF Research Database (Denmark)
Larsen, Gunvor Riber
contribute to an understanding of how it is possible to change tourism travel behaviour towards becoming more sustainable. How tourists 'consume distance' is discussed, from the practical level of actually driving the car or sitting in the air plane, to the symbolic consumption of distance that occurs when......The environmental impact of tourism mobility is linked to the distances travelled in order to reach a holiday destination, and with tourists travelling more and further than previously, an understanding of how the tourists view the distance they travel across becomes relevant. Based on interviews...... travelling on holiday becomes part of a lifestyle and a social positioning game. Further, different types of tourist distance consumers are identified, ranging from the reluctant to the deliberate and nonchalant distance consumers, who display very differing attitudes towards the distance they all travel...
Interface Simulation Distances
Directory of Open Access Journals (Sweden)
Pavol Černý
2012-10-01
Full Text Available The classical (boolean) notion of refinement for behavioral interfaces of system components is the alternating refinement preorder. In this paper, we define a distance for interfaces, called interface simulation distance. It makes the alternating refinement preorder quantitative by, intuitively, tolerating errors (while counting them) in the alternating simulation game. We show that the interface simulation distance satisfies the triangle inequality, that the distance between two interfaces does not increase under parallel composition with a third interface, and that the distance between two interfaces can be bounded from above and below by distances between abstractions of the two interfaces. We illustrate the framework, and the properties of the distances under composition of interfaces, with two case studies.
Convergence analysis of surrogate-based methods for Bayesian inverse problems
Yan, Liang; Zhang, Yuan-Xiang
2017-12-01
The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations that are performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback–Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least two times faster in the KL sense. An error bound on the Hellinger distance is also provided. To provide concrete examples focusing on the use of surrogate-model-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate the Bayesian inference approach. Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.
Designing legible fonts for distance reading
DEFF Research Database (Denmark)
Beier, Sofie
2016-01-01
This chapter reviews existing knowledge on distance legibility of fonts, and finds that for optimal distance reading, letters and numbers benefit from relatively wide shapes, open inner counters and a large x-height; fonts should further be widely spaced, and the weight should not be too heavy or too light. Research also indicates that serifs on the vertical extremes improve legibility under such reading conditions.
Moral distance in dictator games
Directory of Open Access Journals (Sweden)
Fernando Aguiar
2008-04-01
Full Text Available We perform an experimental investigation using a dictator game in which individuals must make a moral decision --- to give or not to give an amount of money to poor people in the Third World. A questionnaire in which the subjects are asked about the reasons for their decision shows that, at least in this case, moral motivations carry a heavy weight in the decision: the majority of dictators give the money for reasons of a consequentialist nature. Based on the results presented here and of other analogous experiments, we conclude that dictator behavior can be understood in terms of moral distance rather than social distance and that it systematically deviates from the egoism assumption in economic models and game theory. JEL: A13, C72, C91
Inversion assuming weak scattering
DEFF Research Database (Denmark)
Xenaki, Angeliki; Gerstoft, Peter; Mosegaard, Klaus
2013-01-01
due to the complex nature of the field. A method based on linear inversion is employed to infer information about the statistical properties of the scattering field from the obtained cross-spectral matrix. A synthetic example based on an active high-frequency sonar demonstrates that the proposed...
Broekhuis, H.
2005-01-01
This article aims at reformulating in more current terms Hoekstra and Mulder’s (1990) analysis of the Locative Inversion (LI) construction. The new proposal is crucially based on the assumption that Small Clause (SC) predicates agree with their external argument in phi-features, which may be
Lindstrom, Peter A.; And Others
This document consists of four units. The first of these views calculus applications to work, area, and distance problems. It is designed to help students gain experience in: 1) computing limits of Riemann sums; 2) computing definite integrals; and 3) solving elementary area, distance, and work problems by integration. The second module views…
Bayesian seismic AVO inversion
Energy Technology Data Exchange (ETDEWEB)
Buland, Arild
2002-07-01
A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
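The "explicit expressions for the posterior expectation and covariance" mentioned above are those of a linear-Gaussian model. A generic sketch (the notation d = G m + e is an assumption; the paper's operator combines the convolutional model with the linearized Zoeppritz approximation):

```python
import numpy as np

def gaussian_posterior(G, d, mu0, C0, Ce):
    """Posterior mean and covariance for d = G m + e with prior
    m ~ N(mu0, C0) and noise e ~ N(0, Ce): the classic closed form."""
    K = C0 @ G.T @ np.linalg.inv(G @ C0 @ G.T + Ce)   # "Kalman-gain" matrix
    mu = mu0 + K @ (d - G @ mu0)
    C = C0 - K @ G @ C0
    return mu, C
```

The closed form is what makes the inversion computationally fast: no sampling is needed, and exact prediction intervals follow from the posterior covariance.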
Energy Technology Data Exchange (ETDEWEB)
Shin, Chang Soo; Park, Keun Pil [Korea Inst. of Geology Mining and Materials, Taejon (Korea, Republic of); Suh, Jung Hee; Hyun, Byung Koo; Shin, Sung Ryul [Seoul National University, Seoul (Korea, Republic of)
1995-12-01
The seismic reflection exploration technique, one of the geophysical methods for oil exploration, has become effective for imaging subsurface structure with the rapid development of computers. However, imaging based on conventional data processing can hardly recover information on physical properties of the subsurface such as velocity and density. Since seismic data are implicitly a function of subsurface velocities, it is necessary to develop an inversion method that can delineate the velocity structure using seismic tomography and waveform inversion. As a tool to perform seismic inversion, a seismic forward modeling program using ray tracing should be developed. In this study, we have developed an algorithm that calculates the travel time through a complex geologic structure using shooting ray tracing, by subdividing the geologic model into a blocky structure of constant-velocity blocks. With this travel time calculation, the partial derivatives of travel time can be computed efficiently and without difficulty. Since the current ray tracing technique is limited in its ability to calculate travel times for extremely complex geologic models, our future aim is to develop a more powerful ray tracer using the finite element technique. After applying pseudo-waveform inversion to seismic data from offshore Korea, we obtained a subsurface velocity model and used the result to improve the quality of the seismic data processing. If conventional seismic data processing and seismic interpretation are linked with this inversion technique, high-quality imaging of the subsurface structure can be expected. A future research area is to develop a more powerful ray tracer that can calculate travel times for extremely complex geologic models. (author). 39 refs., 32 figs., 2 tabs.
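The travel-time computation through constant-velocity blocks, and the cheap partial derivatives the abstract mentions, can be sketched as follows; the in-block path lengths are assumed to come from the ray tracer.

```python
def travel_time_and_gradient(path_lengths, slowness):
    """Travel time of a ray crossing constant-velocity blocks:
    t = sum(l_i * s_i) with slowness s_i = 1/v_i. Because t is linear
    in the slownesses, the partial derivative dt/ds_i is simply the
    in-block path length l_i -- no extra computation is needed."""
    t = sum(l * s for l, s in zip(path_lengths, slowness))
    return t, list(path_lengths)
```

This linearity in slowness is why travel-time tomography is usually parameterized in slowness rather than velocity.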
Calculation of the inverse data space via sparse inversion
Saragiotis, Christos
2011-01-01
The inverse data space provides a natural separation of primaries and surface-related multiples, as the surface multiples map onto the area around the origin while the primaries map elsewhere. However, the calculation of the inverse data is far from trivial, as theory requires infinite time and offset recording. Furthermore, regularization issues arise during inversion. We perform the inversion by minimizing the least-squares norm of the misfit function while constraining the $\ell_1$ norm of the solution, which is the inverse data space. In this way a sparse inversion approach is obtained. We show results on field data with an application to surface multiple removal.
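An l1-constrained least-squares problem of this kind is commonly solved in its Lagrangian form by iterative soft-thresholding; a minimal sketch (a standard scheme, not the authors' implementation):

```python
import numpy as np

def ista(A, b, lam, n_iter=200):
    """Iterative soft-thresholding for min_x ||A x - b||_2^2 + lam * ||x||_1,
    a standard sparse-inversion scheme."""
    step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * 2.0 * A.T @ (A @ x - b)       # gradient step on the misfit
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # shrinkage
    return x
```

The shrinkage step is what drives small components of the solution exactly to zero, producing the sparsity the abstract relies on.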
Ziegler, Gerhard
2011-01-01
Distance protection provides the basis for network protection in transmission systems and meshed distribution systems. This book covers the fundamentals of distance protection and the special features of numerical technology. The emphasis is placed on the application of numerical distance relays in distribution and transmission systems. This book is aimed at students and engineers who wish to familiarise themselves with the subject of power system protection, as well as at experienced users entering the area of numerical distance protection. Furthermore it serves as a reference guide for s
Function representation with circle inversion map systems
Boreland, Bryson; Kunze, Herb
2017-01-01
The fractals literature develops the now well-known concept of local iterated function systems (using affine maps) with grey-level maps (LIFSM) as an approach to function representation in terms of the associated fixed point of the so-called fractal transform. While originally explored as a method to achieve signal (and 2-D image) compression, more recent work has explored various aspects of signal and image processing using this machinery. In this paper, we develop a similar framework for function representation using circle inversion map systems. Given a circle C with centre õ and radius r, inversion with respect to C transforms the point p̃ to the point p̃′, such that p̃ and p̃′ lie on the same radial half-line from õ and d(õ, p̃) d(õ, p̃′) = r², where d is Euclidean distance. We demonstrate the results with an example.
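The inversion map itself is straightforward to implement. A sketch for a point in the plane (function name assumed):

```python
def invert_point(center, r, p):
    """Inversion of p with respect to the circle of given center and
    radius r: p' lies on the radial half-line from the center through p,
    with d(center, p) * d(center, p') = r**2."""
    cx, cy = center
    dx, dy = p[0] - cx, p[1] - cy
    d2 = dx * dx + dy * dy               # squared distance to the center
    if d2 == 0.0:
        raise ValueError("the center itself has no inverse")
    k = r * r / d2                       # scaling so distances multiply to r^2
    return (cx + k * dx, cy + k * dy)
```

Points on the circle are fixed by the map, while points inside are sent outside and vice versa; iterating such maps over a system of circles produces the fractal-transform machinery the paper builds on.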
On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction
International Nuclear Information System (INIS)
Crop, F; Thierens, H; Rompaye, B Van; Paelinck, L; Vakaet, L; Wagter, C De
2008-01-01
The purpose of this study was both to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiographic (Gafchromic) film. Sometimes, an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve, either with dose as a function of OD (inverse regression) or with OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because the heteroscedasticity of the data is not taken into account. This can lead to erroneous results originating from the calibration process itself and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method to create calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
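A generic sketch of the weighted least-squares idea: each calibration point is weighted by the inverse of its variance, so noisier readings count less. This illustrates WLS versus OLS only; the authors' full inverse-prediction and Monte Carlo optimization procedure is more involved, and the function below is an illustrative assumption.

```python
import numpy as np

def wls_line_fit(x, y, sigma):
    """Weighted least-squares fit of y = a + b*x with per-point standard
    deviations sigma; weights are 1/sigma^2. OLS is the special case of
    equal sigma for every point."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = 1.0 / np.asarray(sigma, float) ** 2
    X = np.column_stack([np.ones_like(x), x])        # design matrix [1, x]
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta                                      # [intercept, slope]
```

With heteroscedastic data the two estimators diverge, which is exactly the source of the OLS prediction bias the study quantifies.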
Electrochemically driven emulsion inversion
International Nuclear Information System (INIS)
Johans, Christoffer; Kontturi, Kyoesti
2007-01-01
It is shown that emulsions stabilized by ionic surfactants can be inverted by controlling the electrical potential across the oil-water interface. The potential dependent partitioning of sodium dodecyl sulfate (SDS) was studied by cyclic voltammetry at the 1,2-dichlorobenzene|water interface. In the emulsion the potential control was achieved by using a potential-determining salt. The inversion of a 1,2-dichlorobenzene-in-water (O/W) emulsion stabilized by SDS was followed by conductometry as a function of added tetrapropylammonium chloride. A sudden drop in conductivity was observed, indicating the change of the continuous phase from water to 1,2-dichlorobenzene, i.e. a water-in-1,2-dichlorobenzene emulsion was formed. The inversion potential is well in accordance with that predicted by the hydrophilic-lipophilic deviation if the interfacial potential is appropriately accounted for
DEFF Research Database (Denmark)
Gale, A.S.; Surlyk, Finn; Anderskouv, Kresten
2013-01-01
Evidence from regional stratigraphical patterns in Santonian−Campanian chalk is used to infer the presence of a very broad channel system (5 km across) with a depth of at least 50 m, running NNW−SSE across the eastern Isle of Wight; only the western part of the channel wall and fill is exposed. W......−Campanian chalks in the eastern Isle of Wight, involving penecontemporaneous tectonic inversion of the underlying basement structure, are rejected....
Intersections, ideals, and inversion
International Nuclear Information System (INIS)
Vasco, D.W.
1998-01-01
Techniques from computational algebra provide a framework for treating large classes of inverse problems. In particular, the discretization of many types of integral equations and of partial differential equations with undetermined coefficients leads to systems of polynomial equations. The structure of the solution set of such equations may be examined using algebraic techniques. For example, the existence and dimensionality of the solution set may be determined. Furthermore, it is possible to bound the total number of solutions. The approach is illustrated by a numerical application to the inverse problem associated with the Helmholtz equation. The algebraic methods are used in the inversion of a set of transverse electric (TE) mode magnetotelluric data from Antarctica. The existence of solutions is demonstrated and the number of solutions is found to be finite, bounded from above at 50. The best fitting structure is dominantly one-dimensional with a low crustal resistivity of about 2 ohm-m. Such a low value is compatible with studies suggesting lower surface wave velocities than found in typical stable cratons
International Nuclear Information System (INIS)
Steinhauer, L.C.; Romea, R.D.; Kimura, W.D.
1997-01-01
A new method for laser acceleration is proposed, based upon the inverse of the transition radiation process. The laser beam intersects an electron beam traveling between two thin foils. The principle of this acceleration method is explored in terms of its classical and quantum bases and its inverse process. A closely related concept based on the inverse of diffraction radiation is also presented: this concept has the significant advantage that apertures are used to allow free passage of the electron beam. These concepts can produce net acceleration because they do not satisfy the conditions under which the Lawson-Woodward theorem applies (no net acceleration in an unbounded vacuum). Finally, practical aspects such as damage limits at the optics are used to find an optimized set of parameters. For reasonable assumptions an acceleration gradient of 200 MeV/m requiring a laser power of less than 1 GW is projected. An interesting approach to multi-staging the acceleration sections is also presented. copyright 1997 American Institute of Physics
Directory of Open Access Journals (Sweden)
Dr. Nursel Selver RUZGAR,
2004-04-01
Full Text Available Distance Education in Turkey Assistant Professor Dr. Nursel Selver RUZGAR Technical Education Faculty Marmara University, TURKEY ABSTRACT Many countries of the world use distance education in various ways: over the Internet, by post, and by TV. In this work, the development of distance education in Turkey is presented from its beginning. After discussing the types and applications of distance education at different levels in Turkey, distance education is considered from a cultural point of view. Then, in order to assess the attitudes and opinions of graduates of Higher Education Institutions and Distance Education Institutions about their competitiveness in job markets, the sufficiency of their education level, advantages for the education system, and continuing education in different institutions, a face-to-face survey was administered to 1284 graduates, 958 from Higher Education Institutions and 326 from Distance Education Institutions. The results were evaluated and discussed. In the last part of this work, suggestions were made for making distance education in the country more widespread and for improving it.
C. Binder (C.); C. Heilmann (Conrad)
2017-01-01
Ever since the publication of Peter Singer's article "Famine, Affluence, and Morality", the question of whether the (geographical) distance to people in need affects our moral duties towards them has been a hotly debated issue. Does geographical distance affect our moral
Information distance in multiples
P.M.B. Vitányi (Paul)
2009-01-01
Information distance is a parameter-free similarity measure based on compression, used in pattern recognition, data mining, phylogeny, clustering, and classification. The notion of information distance is extended from pairs to multiples (finite lists). We study maximal overlap,
Normalized information distance
Vitányi, P.M.B.; Balbach, F.J.; Cilibrasi, R.L.; Li, M.; Emmert-Streib, F.; Dehmer, M.
2009-01-01
The normalized information distance is a universal distance measure for objects of all kinds. It is based on Kolmogorov complexity and thus uncomputable, but there are ways to utilize it. First, compression algorithms can be used to approximate the Kolmogorov complexity if the objects have a string
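In practice the uncomputable Kolmogorov complexity is replaced by the length of a compressed string, which gives the normalized compression distance (NCD), a standard practical approximation of the normalized information distance. A sketch with zlib standing in for the compressor:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable stand-in for the
    normalized information distance, approximating Kolmogorov complexity
    by zlib-compressed length."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Similar inputs compress well together, so their NCD is small; unrelated inputs gain little from joint compression and score near 1. Real compressors only approximate the theoretical distance, so values can fall slightly outside [0, 1].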
DEFF Research Database (Denmark)
Mosegaard, Klaus
2012-01-01
For non-linear inverse problems, the mathematical structure of the mapping from model parameters to data is usually unknown or partly unknown. Absence of information about the mathematical structure of this function prevents us from presenting an analytical solution, so our solution depends on our...... ability to produce efficient search algorithms. Such algorithms may be completely problem-independent (which is the case for the so-called 'meta-heuristics' or 'blind-search' algorithms), or they may be designed with the structure of the concrete problem in mind. We show that pure meta...
Timing of growth inhibition following shoot inversion in Pharbitis nil
Abdel-Rahman, A. M.; Cline, M. G.
1989-01-01
Shoot inversion in Pharbitis nil results in the enhancement of ethylene production and in the inhibition of elongation in the growth zone of the inverted shoot. The initial increase in ethylene production previously was detected within 2 to 2.75 hours after inversion. In the present study, the initial inhibition of shoot elongation was detected within 1.5 to 4 hours with a weighted mean of 2.4 hours. Ethylene treatment of upright shoots inhibited elongation in 1.5 hours. A cause and effect relationship between shoot inversion-enhanced ethylene production and inhibition of elongation cannot be excluded.
Fully probabilistic earthquake source inversion on teleseismic scales
Stähler, Simon; Sigloch, Karin
2017-04-01
Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source
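The decorrelation misfit D = 1 - CC can be sketched for the zero-lag case as follows. This is a simplification of the waveform cross-correlation measurement described in the abstract, and the function name is illustrative:

```python
import numpy as np

def decorrelation(obs, syn):
    """Waveform misfit D = 1 - CC, where CC is the zero-lag normalized
    cross-correlation coefficient of the demeaned traces."""
    obs = obs - np.mean(obs)
    syn = syn - np.mean(syn)
    cc = obs @ syn / (np.linalg.norm(obs) * np.linalg.norm(syn))
    return 1.0 - cc
```

D is 0 for identical waveforms and 2 for waveforms of opposite polarity; unlike an ℓp norm over individual samples, it is insensitive to an overall amplitude scaling of either trace.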
Losada, David E.; Barreiro, Alvaro
2003-01-01
Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…
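The inverse document frequency component mentioned above can be sketched as follows (documents modeled as sets of terms; the function name and signature are illustrative):

```python
import math

def idf(term, documents):
    """Inverse document frequency idf(t) = ln(N / df(t)), where df(t) is
    the number of documents containing t. Common terms score low, rare
    terms score high."""
    df = sum(1 for doc in documents if term in doc)
    return math.log(len(documents) / df) if df else 0.0
```

This is the weighting that lets a retrieval model discount terms that appear everywhere while emphasizing discriminative ones.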
Learning Pullback HMM Distances.
Cuzzolin, Fabio; Sapienza, Michael
2014-07-01
Recent work in action recognition has exposed the limitations of methods which directly classify local features extracted from spatio-temporal video volumes. In opposition, encoding the actions' dynamics via generative dynamical models has a number of attractive features: however, using all-purpose distances for their classification does not necessarily deliver good results. We propose a general framework for learning distance functions for generative dynamical models, given a training set of labelled videos. The optimal distance function is selected among a family of pullback ones, induced by a parametrised automorphism of the space of models. We focus here on hidden Markov models and their model space, and design an appropriate automorphism there. Experimental results are presented which show how pullback learning greatly improves action recognition performances with respect to base distances.
Long distance quantum teleportation
Xia, Xiu-Xiu; Sun, Qi-Chao; Zhang, Qiang; Pan, Jian-Wei
2018-01-01
Quantum teleportation is a core protocol in quantum information science. Besides revealing the fascinating feature of quantum entanglement, quantum teleportation provides an ultimate way to distribute quantum state over extremely long distance, which is crucial for global quantum communication and future quantum networks. In this review, we focus on the long distance quantum teleportation experiments, especially those employing photonic qubits. From the viewpoint of real-world application, both the technical advantages and disadvantages of these experiments are discussed.
Energy Technology Data Exchange (ETDEWEB)
Lambourne, Robert [Department of Physics and Astronomy, Open University, Milton Keynes (United Kingdom)
2005-11-01
This paper examines the challenges and rewards that can arise when the teaching of Einsteinian physics has to be accomplished by means of distance education. The discussion is mainly based on experiences gathered over the past 35 years at the UK Open University, where special and general relativity, relativistic cosmology and other aspects of Einsteinian physics, have been taught at a variety of levels, and using a range of techniques, to students studying at a distance.
2016-03-02
estimation [13, 2], and manifold learning [19]. Such unsupervised methods do not have the benefit of human input on the distance metric, and overly rely... to be defined that is related to the task at hand. Many supervised and semi-supervised distance metric learning approaches have been developed [17... Unsupervised PCA seeks to identify a set of axes that best explain the variance contained in the data. LDA takes a supervised approach, minimizing the intra
On Properties of the Generalized Wasserstein Distance
Piccoli, Benedetto; Rossi, Francesco
2016-12-01
The Wasserstein distances W_p (p ≥ 1), defined in terms of a solution to the Monge-Kantorovich problem, are known to be a useful tool to investigate transport equations. In particular, the Benamou-Brenier formula characterizes the square of the Wasserstein distance W_2 as the infimum of the kinetic energy, or action functional, of all vector fields transporting one measure to the other. Another important property of the Wasserstein distances is the Kantorovich-Rubinstein duality, stating the equality between the distance W_1(μ, ν) of two probability measures μ, ν and the supremum of the integrals in d(μ - ν) of Lipschitz continuous functions with Lipschitz constant bounded by one. An intrinsic limitation of Wasserstein distances is the fact that they are defined only between measures having the same mass. To overcome this limitation, we recently introduced the generalized Wasserstein distances W_p^{a,b}, defined in terms of both the classical Wasserstein distance W_p and the total variation (or L^1) distance, see (Piccoli and Rossi in Archive for Rational Mechanics and Analysis 211(1):335-358, 2014). Here p plays the same role as for the classic Wasserstein distance, while a and b are weights for the transport and the total variation terms. In this paper we prove two important properties of the generalized Wasserstein distances: (1) a generalized Benamou-Brenier formula providing the equality between W_2^{a,b} and the infimum of an action functional, which includes a transport term (kinetic energy) and a source term; (2) a duality à la Kantorovich-Rubinstein establishing the equality between W_1^{1,1} and the flat metric.
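For equal-mass empirical measures on the real line, W_1 has a simple closed form: optimal transport pairs the sorted samples. The sketch below covers only this special case, and in doing so also shows the equal-mass restriction that the generalized distances are designed to remove.

```python
import numpy as np

def wasserstein_1(x, y):
    """W1 between two equal-size empirical measures on the real line.
    In 1-D the optimal transport plan matches the i-th smallest sample
    of x to the i-th smallest sample of y."""
    x = np.sort(np.asarray(x, float))
    y = np.sort(np.asarray(y, float))
    return float(np.mean(np.abs(x - y)))
```

Translating one sample set by a constant c shifts the distance by exactly |c|, consistent with W_1 measuring the average transport cost.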
Displacement parameter inversion for a novel electromagnetic underground displacement sensor.
Shentu, Nanying; Li, Qing; Li, Xiong; Tong, Renyuan; Shentu, Nankai; Jiang, Guoqing; Qiu, Guohua
2014-05-22
Underground displacement monitoring is an effective method to explore deep into rock and soil masses for execution of subsurface displacement measurements. It is not only an important means of geological hazard prediction and forecasting, but also a forefront, hot and sophisticated subject in current geological disaster monitoring. In previous research, the authors had designed a novel electromagnetic underground horizontal displacement sensor (called the H-type sensor) by combining basic electromagnetic induction principles with modern sensing techniques, and established a mutual voltage measurement theoretical model called the Equation-based Equivalent Loop Approach (EELA). Based on that work, this paper presents an underground displacement inversion approach named "EELA forward modeling-approximate inversion method". Combining the EELA forward simulation approach with approximate optimization inversion theory, it can deduce the underground horizontal displacement through parameter inversion of the H-type sensor. Comprehensive and comparative studies have been conducted between the experimentally measured and theoretically inverted values of horizontal displacement under counterpart conditions. The results show that when the measured horizontal displacements are in the 0-100 mm range, the horizontal displacement inversion discrepancy is generally less than 3 mm under varied tilt angle and initial axial distance conditions, which indicates that the proposed parameter inversion method can predict underground horizontal displacements effectively and robustly for the H-type sensor, and that the technique is applicable to practical geo-engineering applications.
Strength Training for Middle- and Long-Distance Performance: A Meta-Analysis.
Berryman, Nicolas; Mujika, Inigo; Arvisais, Denis; Roubeix, Marie; Binet, Carl; Bosquet, Laurent
2018-01-01
To assess the net effects of strength training on middle- and long-distance performance through a meta-analysis of the available literature. Three databases were searched, from which 28 of 554 potential studies met all inclusion criteria. Standardized mean differences (SMDs) were calculated and weighted by the inverse of variance to calculate an overall effect and its 95% confidence interval (CI). Subgroup analyses were conducted to determine whether the strength-training intensity, duration, and frequency and population performance level, age, sex, and sport were outcomes that might influence the magnitude of the effect. The implementation of a strength-training mesocycle in running, cycling, cross-country skiing, and swimming was associated with moderate improvements in middle- and long-distance performance (net SMD [95%CI] = 0.52 [0.33-0.70]). These results were associated with improvements in the energy cost of locomotion (0.65 [0.32-0.98]), maximal force (0.99 [0.80-1.18]), and maximal power (0.50 [0.34-0.67]). Maximal-force training led to greater improvements than other intensities. Subgroup analyses also revealed that beneficial effects on performance were consistent irrespective of the athletes' level. Taken together, these results provide a framework that supports the implementation of strength training in addition to traditional sport-specific training to improve middle- and long-distance performance, mainly through improvements in the energy cost of locomotion, maximal power, and maximal strength.
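The inverse-of-variance weighting used to pool the standardized mean differences is the standard fixed-effect scheme. A minimal sketch (illustrative function only, not the authors' full subgroup-analysis pipeline):

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted pooled effect with a 95% CI
    (fixed-effect model): each study gets weight w_i = 1/v_i, so more
    precise studies pull the pooled estimate harder."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))                 # standard error of the pool
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

With equal variances this reduces to a plain average; unequal variances shift the pooled SMD toward the better-powered studies.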
International Nuclear Information System (INIS)
Hicks, H.R.; Dory, R.A.; Holmes, J.A.
1983-01-01
We illustrate in some detail a 2D inverse-equilibrium solver that was constructed to analyze tokamak configurations and stellarators (the latter in the context of the average method). To ensure that the method is suitable not only for determining equilibria, but also for providing appropriately represented data for existing stability codes, it is important to be able to control the Jacobian, J̃ ≡ ∂(R,Z)/∂(ρ,θ). The form chosen is J̃ = J₀(ρ)R^l ρ, where ρ is a flux surface label and l is an integer. The initial implementation is for a fixed conducting-wall boundary, but the technique can be extended to a free-boundary model
Topologically clean distance fields.
Gyulassy, Attila; Duchaineau, Mark; Natarajan, Vijay; Pascucci, Valerio; Bringa, Eduardo; Higginbotham, Andrew; Hamann, Bernd
2007-01-01
Analysis of the results obtained from material simulations is important in the physical sciences. Our research was motivated by the need to investigate the properties of a simulated porous solid as it is hit by a projectile. This paper describes two techniques for the generation of distance fields containing a minimal number of topological features, and we use them to identify features of the material. We focus on distance fields defined on a volumetric domain, considering the distance to a given surface embedded within the domain. Topological features of the field are characterized by its critical points. Our first method begins with a distance field that is computed using a standard approach, and simplifies this field using ideas from Morse theory. We present a procedure for identifying and extracting a feature set through analysis of the Morse-Smale (MS) complex, and apply it to find the invariants in the clean distance field. Our second method proceeds by advancing a front, beginning at the surface, and locally controlling the creation of new critical points. We demonstrate the value of topologically clean distance fields for the analysis of filament structures in porous solids. Our methods produce a curved skeleton representation of the filaments that helps material scientists to perform a detailed qualitative and quantitative analysis of pores, and hence infer important material properties. Furthermore, we provide a set of criteria for finding the "difference" between two skeletal structures, and use this to examine how the structure of the porous solid changes over several timesteps in the simulation of the particle impact.
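Before any topological simplification, the underlying distance field can be computed by brute force. A small 2-D sketch (illustrative only; the paper's methods then clean up the critical points of such a field):

```python
import numpy as np

def distance_field(shape, surface_points):
    """Brute-force unsigned distance field on a 2-D grid: the distance
    from every grid node (x, y) to the nearest surface sample."""
    ys, xs = np.indices(shape)
    nodes = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    pts = np.asarray(surface_points, float)
    # pairwise node-to-sample distances, reduced to the nearest sample
    d = np.min(np.linalg.norm(nodes[:, None, :] - pts[None, :, :], axis=2), axis=1)
    return d.reshape(shape)
```

Noise in the surface samples creates spurious local minima and saddles in such a field, which is exactly what the Morse-theoretic simplification described above removes.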
Holocaust inversion and contemporary antisemitism.
Klaff, Lesley D
2014-01-01
One of the cruellest aspects of the new antisemitism is its perverse use of the Holocaust as a stick to beat 'the Jews'. This article explains the phenomenon of 'Holocaust Inversion', which involves an 'inversion of reality' (the Israelis are cast as the 'new' Nazis and the Palestinians as the 'new' Jews) and an 'inversion of morality' (the Holocaust is presented as a moral lesson for, or even a moral indictment of, 'the Jews'). Holocaust inversion is a form of soft-core Holocaust denial, yet...
DEFF Research Database (Denmark)
Hansen, Finn J. S.; Clausen, Christian
2001-01-01
The case study represents an example of a top-down introduction of distance teaching as part of Danish trials with the introduction of multimedia in education. The study is concerned with the background, aim and context of the trial as well as the role and working of the technology and the organi......
Inverse feasibility problems of the inverse maximum flow problems
Indian Academy of Sciences (India)
A linear time method to decide if any inverse maximum ﬂow (denoted General Inverse Maximum Flow problems (IMFG)) problem has solution is deduced. If IMFG does not have solution, methods to transform IMFG into a feasible problem are presented. The methods consist of modifying as little as possible the restrictions to ...
Murphy, Elizabeth; Rodriguez-Manzanares, Maria A.
2012-01-01
Rapport has been recognized as important in learning in general but little is known about its importance in distance education (DE). The study we report on in this paper provides insights into the importance of rapport in DE as well as challenges to and indicators of rapport-building in DE. The study relied on interviews with 42 Canadian…
Encyclopedia of Distance Learning
Howard, Caroline, Ed.; Boettecher, Judith, Ed.; Justice, Lorraine, Ed.; Schenk, Karen, Ed.; Rogers, Patricia, Ed.; Berg, Gary, Ed.
2005-01-01
The innovations in computer and communications technologies combined with on-going needs to deliver educational programs to students regardless of their physical locations, have lead to the innovation of distance education programs and technologies. To keep up with recent developments in both areas of technologies and techniques related to…
Electromagnetic distance measurement
1967-01-01
This book brings together the work of forty-eight geodesists from twenty-five countries. They discuss various new electromagnetic distance measurement (EDM) instruments - among them the Tellurometer, Geodimeter, and air- and satellite-borne systems - and investigate the complex sources of error.
DEFF Research Database (Denmark)
Jensen, Hanne Louise; de Neergaard, Maja
2016-01-01
De-severing Distance. This paper draws on the growing body of mobility literature that shows how mobility can be viewed as meaningful everyday practice (Freudendal-Pedersen 2007, Cresswell 2006), and examines how Heidegger's term de-severing can help us understand the everyday coping with ...
Inverse problem in hydrogeology
Carrera, Jesús; Alcolea, Andrés; Medina, Agustín; Hidalgo, Juan; Slooten, Luit J.
2005-03-01
The state of the groundwater inverse problem is synthesized. Emphasis is placed on aquifer characterization, where modelers have to deal with conceptual model uncertainty (notably spatial and temporal variability), scale dependence, many types of unknown parameters (transmissivity, recharge, boundary conditions, etc.), nonlinearity, and often low sensitivity of state variables (typically heads and concentrations) to aquifer properties. Because of these difficulties, calibration cannot be separated from the modeling process, as it is sometimes done in other fields. Instead, it should be viewed as one step in the process of understanding aquifer behavior. In fact, it is shown that actual parameter estimation methods do not differ from each other in the essence, though they may differ in the computational details. It is argued that there is ample room for improvement in groundwater inversion: development of user-friendly codes, accommodation of variability through geostatistics, incorporation of geological information and different types of data (temperature, occurrence and concentration of isotopes, age, etc.), proper accounting of uncertainty, etc. Despite this, even with existing codes, automatic calibration facilitates enormously the task of modeling. Therefore, it is contended that its use should become standard practice.
Face inversion increases attractiveness.
Leder, Helmut; Goller, Juergen; Forster, Michael; Schlageter, Lena; Paul, Matthew A
2017-07-01
Assessing facial attractiveness is a ubiquitous, inherent, and hard-wired phenomenon in everyday interactions. As such, it has highly adapted to the default way that faces are typically processed: viewing faces in upright orientation. By inverting faces, we can disrupt this default mode, and study how facial attractiveness is assessed. Faces rotated by 90° (tilted to either side) or 180° were rated on attractiveness and distinctiveness scales. For both orientations, we found that rotated faces were rated as more attractive and less distinctive than upright faces. Importantly, these effects were more pronounced for faces rated low in upright orientation, and smaller for highly attractive faces. In other words, the less attractive a face was, the more it gained in attractiveness by inversion or rotation. Based on these findings, we argue that facial attractiveness assessments might not rely on the presence of attractive facial characteristics, but on the absence of distinctive, unattractive characteristics. These unattractive characteristics are potentially weighed against an individual, attractive prototype in assessing facial attractiveness. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Zhang, Dongliang
2013-01-01
To increase the illumination of the subsurface and to eliminate the dependency of FWI on the source wavelet, we propose multiples waveform inversion (MWI), which transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. These virtual sources are used to numerically generate downgoing wavefields that are correlated with the backprojected surface-related multiples to give the migration image. Since the recorded data are treated as the virtual sources, knowledge of the source wavelet is not required, and the subsurface illumination is greatly enhanced because the entire free surface acts as an extended source compared to the radiation pattern of a traditional point source. Numerical tests on the Marmousi2 model show that the convergence rate of MWI is faster, and its spatial resolution more accurate, than those of FWI. The potential pitfall with this method is that the multiples undergo more than one roundtrip to the surface, which increases attenuation and reduces spatial resolution. This can lead to less resolved tomograms compared to conventional FWI. A possible solution is to combine both FWI and MWI in inverting for the subsurface velocity distribution.
Effect of Geographic Distance on Distance Education: An Empirical Study
Luo, Heng; Robinson, Anthony C.; Detwiler, Jim
2014-01-01
This study investigates the effect of geographic distance on students' distance learning experience with the aim to provide tentative answers to a fundamental question--does geographic distance matter in distance education? Using educational outcome data collected from an online master's program in Geographic Information Systems, this study…
Coin tossing and Laplace inversion
Indian Academy of Sciences (India)
MS received 5 May 1999; revised 3 April 2000. Abstract. An analysis of exchangeable sequences of coin tossings leads to inversion formulae for Laplace transforms of probability measures. Keywords. Laplace inversion; moment problem; exchangeable probabilities. 1. Introduction. There is an intimate relationship between ...
Inverse problems for Maxwell's equations
Romanov, V G
1994-01-01
The Inverse and Ill-Posed Problems Series is a series of monographs publishing postgraduate level information on inverse and ill-posed problems for an international readership of professional scientists and researchers. The series aims to publish works which involve both theory and applications in, e.g., physics, medicine, geophysics, acoustics, electrodynamics, tomography, and ecology.
Signed distance computation using the angle weighted pseudonormal
DEFF Research Database (Denmark)
Bærentzen, Jakob Andreas; Aanæs, Henrik
2005-01-01
The normals of closed, smooth surfaces have long been used to determine whether a point is inside or outside such a surface. It is tempting to also use this method for polyhedra represented as triangle meshes. Unfortunately, this is not possible since, at the vertices and edges of a triangle mesh...
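The angle-weighted pseudonormal the title refers to can be sketched as follows: at a vertex, the incident face normals are summed with weights equal to each face's interior angle at that vertex, and the sign of the dot product with the query offset then classifies inside versus outside. A minimal NumPy sketch on a tetrahedron (mesh layout and helper names are our own):

```python
import numpy as np

def angle_weighted_pseudonormal(vertices, faces, v_idx):
    """Pseudonormal at vertex v_idx: incident face normals summed,
    each weighted by that face's interior angle at the vertex."""
    n = np.zeros(3)
    for f in faces:
        if v_idx not in f:
            continue
        i = f.index(v_idx)
        v0 = vertices[f[i]]
        e1 = vertices[f[(i + 1) % 3]] - v0
        e2 = vertices[f[(i + 2) % 3]] - v0
        fn = np.cross(e1, e2)            # respects the face winding
        fn /= np.linalg.norm(fn)
        cosang = e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2))
        n += np.arccos(np.clip(cosang, -1.0, 1.0)) * fn
    return n / np.linalg.norm(n)

# Tetrahedron with outward-oriented faces (counter-clockwise from outside).
vertices = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]

n0 = angle_weighted_pseudonormal(vertices, faces, 0)
inside = np.array([0.2, 0.2, 0.2])   # dot with pseudonormal < 0 -> inside
outside = np.array([-1., -1., -1.])  # dot with pseudonormal > 0 -> outside
print((inside - vertices[0]) @ n0 < 0, (outside - vertices[0]) @ n0 > 0)
```

The paper's contribution is proving that this particular weighting gives correct inside/outside classification at vertices and edges, where the plain face normal is undefined.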
Algebraic properties of generalized inverses
Cvetković‐Ilić, Dragana S
2017-01-01
This book addresses selected topics in the theory of generalized inverses. Following a discussion of the “reverse order law” problem and certain problems involving completions of operator matrices, it subsequently presents a specific approach to solving the problem of the reverse order law for {1} -generalized inverses. Particular emphasis is placed on the existence of Drazin invertible completions of an upper triangular operator matrix; on the invertibility and different types of generalized invertibility of a linear combination of operators on Hilbert spaces and Banach algebra elements; on the problem of finding representations of the Drazin inverse of a 2x2 block matrix; and on selected additive results and algebraic properties for the Drazin inverse. In addition to the clarity of its content, the book discusses the relevant open problems for each topic discussed. Comments on the latest references on generalized inverses are also included. Accordingly, the book will be useful for graduate students, Ph...
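The Moore-Penrose inverse is the most familiar generalized inverse, and the "reverse order law" discussed above is precisely the question of when (AB)⁺ = B⁺A⁺. A small NumPy check (matrices chosen purely for illustration) verifies the four Penrose conditions and exhibits a pair for which the reverse order law fails:

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[1.0, 1.0], [1.0, 1.0]])

# The four Penrose conditions characterizing the Moore-Penrose inverse.
Ap = np.linalg.pinv(A)
ok = (np.allclose(A @ Ap @ A, A) and np.allclose(Ap @ A @ Ap, Ap)
      and np.allclose((A @ Ap).T, A @ Ap) and np.allclose((Ap @ A).T, Ap @ A))
print("Penrose conditions hold:", ok)

# The reverse order law (AB)^+ = B^+ A^+ can fail for singular factors.
lhs = np.linalg.pinv(A @ B)
rhs = np.linalg.pinv(B) @ np.linalg.pinv(A)
print("reverse order law holds here:", np.allclose(lhs, rhs))
```

Characterizing exactly when such identities do hold, for various classes of {1}-, Drazin, and other generalized inverses, is the kind of question the book treats.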
Acoustic source inversion to estimate volume flux from volcanic explosions
Kim, Keehoon; Fee, David; Yokoo, Akihiko; Lees, Jonathan M.
2015-07-01
We present an acoustic waveform inversion technique for infrasound data to estimate volume fluxes from volcanic eruptions. Previous inversion techniques have been limited by the use of a 1-D Green's function in a free space or half space, which depends only on the source-receiver distance and neglects volcanic topography. Our method exploits full 3-D Green's functions computed by a numerical method that takes into account realistic topographic scattering. We apply this method to vulcanian eruptions at Sakurajima Volcano, Japan. Our inversion results produce excellent waveform fits to field observations and demonstrate that full 3-D Green's functions are necessary for accurate volume flux inversion. Conventional inversions without consideration of topographic propagation effects may lead to large errors in the source parameter estimate. The presented inversion technique will substantially improve the accuracy of eruption source parameter estimation (cf. mass eruption rate) during volcanic eruptions and provide critical constraints for volcanic eruption dynamics and ash dispersal forecasting for aviation safety. Application of this approach to chemical and nuclear explosions will also provide valuable source information (e.g., the amount of energy released) previously unavailable.
Directory of Open Access Journals (Sweden)
Calvez V.
2010-12-01
Full Text Available We consider the radiative transfer equation (RTE) with reflection in a three-dimensional domain, infinite in two dimensions, and prove an existence result. Then, we study the inverse problem of retrieving the optical parameters from boundary measurements, with the help of existing results by Choulli and Stefanov. This theoretical analysis is the framework of an attempt to model the color of the skin. For this purpose, a code has been developed to solve the RTE and to study the sensitivity of the measurements made by biophysicists with respect to the physiological parameters responsible for the optical properties of this complex, multi-layered material.
Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves
Chen, W.; Ni, S.; Wang, Z.
2011-12-01
In the classical source parameter inversion algorithm of CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithms have proven effective for resolving source parameters (focal mechanisms, depth and moment) for earthquakes well recorded on a relatively dense seismic network. However, for regions with sparse station coverage, it is challenging to achieve precise source parameters. In this case, a moderate earthquake of ~M6 is usually recorded on only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, combining teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and explore the efficiency with bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with a single local station included in the joint inversion, the accuracy of source parameters such as moment and strike is substantially improved.
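The bootstrap stability check mentioned above can be sketched minimally: resample the stations with replacement and re-estimate the parameter of interest, then inspect the spread. The per-station strike values below are purely hypothetical placeholders, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical per-station strike estimates (degrees); illustrative only.
station_estimates = np.array([212.0, 208.5, 215.2, 210.9, 209.3, 213.7])

# Bootstrap: resample stations with replacement and re-average, to judge
# how stable the network-wide estimate is against station selection.
boots = np.array([
    rng.choice(station_estimates, size=station_estimates.size).mean()
    for _ in range(2000)
])
print(f"strike = {boots.mean():.1f} +/- {boots.std():.1f} deg")
```

In the actual method the resampled quantity would be the full waveform misfit inversion, not a simple mean, but the resampling logic is the same.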
Gualtieri, J. A.; Le Moigne, J.; Packer, C. V.
1992-01-01
Comparing two binary images and assigning a quantitative measure to this comparison finds its purpose in such tasks as image recognition, image compression, and image browsing. This quantitative measurement may be computed by utilizing the Hausdorff distance of the images represented as two-dimensional point sets. In this paper, we review two algorithms that have been proposed to compute this distance, and we present a parallel implementation of one of them on the MasPar parallel processor. We study their complexity and the results obtained by these algorithms for two different types of images: a set of displaced pairs of images of Gaussian densities, and a comparison of a Canny edge image with several edge images from a hierarchical region growing code.
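The (undirected) Hausdorff distance between two point sets is the larger of the two directed distances, where each directed distance is the maximum, over one set, of the nearest-neighbor distance to the other set. A brute-force NumPy sketch (the paper's parallel MasPar implementation is not reproduced here):

```python
import numpy as np

def directed_hausdorff(A, B):
    """max over a in A of min over b in B of ||a - b||, brute force."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(A, B):
    """Undirected Hausdorff distance: the larger directed distance."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 3.0]])
print(hausdorff(A, B))  # 3.0: the point (0, 3) is far from all of A
```

The brute-force pairwise matrix is O(|A||B|) in time and memory; the algorithms reviewed in the paper, and their parallelization, exist precisely to avoid that cost on large edge images.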
THE EXTRAGALACTIC DISTANCE DATABASE
International Nuclear Information System (INIS)
Tully, R. Brent; Courtois, Helene M.; Jacobs, Bradley A.; Rizzi, Luca; Shaya, Edward J.; Makarov, Dmitry I.
2009-01-01
A database can be accessed on the Web at http://edd.ifa.hawaii.edu that was developed to promote access to information related to galaxy distances. The database has three functional components. First, tables from many literature sources have been gathered and enhanced with links through a distinct galaxy naming convention. Second, comparisons of results both at the levels of parameters and of techniques have begun and are continuing, leading to increasing homogeneity and consistency of distance measurements. Third, new material is presented arising from ongoing observational programs at the University of Hawaii 2.2 m telescope, radio telescopes at Green Bank, Arecibo, and Parkes and with the Hubble Space Telescope. This new observational material is made available in tandem with related material drawn from archives and passed through common analysis pipelines.
DEFF Research Database (Denmark)
Skillicorn, David; Walther, Olivier; Zheng, Quan
is a combination of the physical geography of the target environment, and the mental and physical cost of following a seemingly random pattern of attacks. Focusing on the distance and time between attacks and taking into consideration the transaction costs that state boundaries impose, we wish to understand what......” of North and West Africa that depicts the permeability to violence. A better understanding of how location, time, and borders condition attacks enables planning, prepositioning, and response....
Anogenital distance and umbilical cord testosterone level in ...
African Journals Online (AJOL)
In this study, the anogenital distance (AGD) and anthropometric measurements such as birth weight, birth length, head circumference and placenta weight of 200 newborns (100 male, 100 female) were taken and umbilical cord serum was assayed for testosterone concentration using Radioimmunoassay (Microwell).
Full waveform inversion of solar interior flows
Energy Technology Data Exchange (ETDEWEB)
Hanasoge, Shravan M. [Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Mumbai 400005 (India)
2014-12-10
The inference of flows of material in the interior of the Sun is a subject of major interest in helioseismology. Here, we apply techniques of full waveform inversion (FWI) to synthetic data to test flow inversions. In this idealized setup, we do not model seismic realization noise, training the focus entirely on the problem of whether a chosen supergranulation flow model can be seismically recovered. We define the misfit functional as a sum of L2-norm deviations in travel times between prediction and observation, as measured using short-distance filtered f and p1 modes and large-distance unfiltered p modes. FWI allows for the introduction of measurements of choice and iteratively improving the background model, while monitoring the evolution of the misfit in all desired categories. Although the misfit is seen to uniformly reduce in all categories, convergence to the true model is very slow, possibly because it is trapped in a local minimum. The primary source of error is inaccurate depth localization, which, due to density stratification, leads to wrong ratios of horizontal and vertical flow velocities ("cross talk"). In the present formulation, the lack of sufficient temporal frequency and spatial resolution makes it difficult to accurately localize flow profiles at depth. We therefore suggest that the most efficient way to discover the global minimum is to perform a probabilistic forward search, involving calculating the misfit associated with a broad range of models (generated, for instance, by a Monte Carlo algorithm) and locating the deepest minimum. Such techniques possess the added advantage of being able to quantify model uncertainty as well as realization noise (data uncertainty).
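The probabilistic forward search suggested above can be illustrated on a toy misfit with many local minima: random candidate models are evaluated and the deepest minimum kept. The one-parameter misfit function here is purely illustrative and stands in for the travel-time misfit of a candidate flow model.

```python
import numpy as np

rng = np.random.default_rng(3)

def misfit(v, v_true=1.2):
    """Toy multimodal misfit in one model parameter v: a quadratic
    well plus an oscillation that creates many local minima."""
    return (v - v_true) ** 2 + 0.3 * np.cos(8 * np.pi * v)

# A gradient-based iteration started far from v_true can stall in a
# local minimum; a Monte Carlo sweep over a broad range simply
# evaluates many candidates and keeps the deepest one found.
candidates = rng.uniform(0.0, 2.5, 5000)
best = candidates[np.argmin(misfit(candidates))]
print(f"best model: v = {best:.3f}")
```

The spread of low-misfit candidates also gives a crude picture of model uncertainty, which is the added advantage noted in the abstract.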
Hybrid mean value of 2k-th power inversion of L-functions and ...
Indian Academy of Sciences (India)
Hybrid mean value of 2k-th power inversion of L-functions and general quartic Gauss sums. Shikha Singh. ∗ and Jagmohan Tanti. Centre for Applied Mathematics, Central University of Jharkhand, Ranchi-835205, India. Abstract. In this paper we find the 2k-th power mean of the inversion of L-functions with the weight of ...
Inversion for seismic moment tensors from 6-component waveform data
Donner, Stefanie; Bernauer, Felix; Wassermann, Joachim; Igel, Heiner
2017-04-01
Waveform inversion for the seismic moment tensor nowadays is a well-established standard method in teleseismic distances. Nevertheless, several difficulties remain, especially for shallow and/or regional/local distances. These difficulties include e.g. the resolution of the mechanism, especially the non-double-couple components and the resolution of the centroid depth but also the uncertainty of a determined moment tensor. During the last decade, the observation of rotational ground motions gained increasing attention amongst seismologists. So far, studies were based on one (vertical) component ring laser data but 3-component ring laser data and even data from portable rotation sensors are in reach. These new developments can contribute to solve the difficulties in waveform inversion for moment tensors. Here, we present results for moment tensors, mainly in the regional distance range, derived from collocated translational and rotational ground motion measurements. These results are based on numerical and real-data studies. We inverted the ground motions recorded by a network of stations but also addressed the question of how reliable the inversion for moment tensors is from a single 6-component measurement.
Bayesian Approach to Inverse Problems
2008-01-01
Many scientific, medical or engineering problems raise the issue of recovering some physical quantities from indirect measurements; for instance, detecting or quantifying flaws or cracks within a material from acoustic or electromagnetic measurements at its surface is an essential problem of non-destructive evaluation. The concept of inverse problems precisely originates from the idea of inverting the laws of physics to recover a quantity of interest from measurable data. Unfortunately, most inverse problems are ill-posed, which means that precise and stable solutions are not easy to devise. Regularization is the key concept to solve inverse problems. The goal of this book is to deal with inverse problems and regularized solutions using the Bayesian statistical tools, with a particular view to signal and image estimation.
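The Bayesian view of regularization can be made concrete in the linear-Gaussian case: under Gaussian noise and a Gaussian prior on the model, the maximum a posteriori (MAP) estimate is exactly Tikhonov-regularized least squares. A minimal sketch with a synthetic ill-conditioned forward operator (all sizes and noise levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
# Ill-conditioned forward operator G (Vandermonde) and a true model.
G = np.vander(np.linspace(0, 1, 20), 8, increasing=True)
m_true = rng.normal(size=8)
d = G @ m_true + rng.normal(0.0, 0.01, 20)  # data with Gaussian noise

# MAP estimate for noise std sigma and prior std tau:
#   m_map = argmin ||G m - d||^2 / sigma^2 + ||m||^2 / tau^2,
# i.e. Tikhonov regularization with lambda = (sigma / tau)^2.
sigma, tau = 0.01, 1.0
lam = (sigma / tau) ** 2
m_map = np.linalg.solve(G.T @ G + lam * np.eye(8), G.T @ d)
print(np.round(m_map, 2))
```

The Bayesian formulation goes further than this point estimate: the full posterior quantifies the stability of the solution, which is what ill-posedness puts at risk.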
Testing earthquake source inversion methodologies
Page, Morgan T.
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.
Parameter estimation and inverse problems
Aster, Richard C; Thurber, Clifford H
2005-01-01
Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web for facilitating use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...
Statistical perspectives on inverse problems
DEFF Research Database (Denmark)
Andersen, Kim Emil
Inverse problems arise in many scientific disciplines and pertain to situations where inference is to be made about a particular phenomenon from indirect measurements. A typical example, arising in diffusion tomography, is the inverse boundary value problem for non-invasive reconstruction of the interior of an object from electrical boundary measurements. One part of this thesis concerns statistical approaches for solving, possibly non-linear, inverse problems. Thus inverse problems are recast in a form suitable for statistical inference. In particular, a Bayesian approach for regularisation ... is given in terms of probability distributions. Posterior inference is obtained by Markov chain Monte Carlo methods, and new, powerful simulation techniques based on e.g. coupled Markov chains and simulated tempering are developed to improve the computational efficiency of the overall simulation ...
DEFF Research Database (Denmark)
Ackerman, Margareta; Ben-David, Shai; Branzei, Simina
2012-01-01
the partitional and hierarchical settings, characterizing the conditions under which algorithms react to weights. Extending a recent framework for clustering algorithm selection, we propose intuitive properties that would allow users to choose between clustering algorithms in the weighted setting and classify...
Computation of inverse magnetic cascades
International Nuclear Information System (INIS)
Montgomery, D.
1981-10-01
Inverse cascades of magnetic quantities for turbulent incompressible magnetohydrodynamics are reviewed, for two and three dimensions. The theory is extended to the Strauss equations, a description intermediate between two and three dimensions appropriate to tokamak magnetofluids. Consideration of the absolute equilibrium Gibbs ensemble for the system leads to a prediction of an inverse cascade of magnetic helicity, which may manifest itself as a major disruption. An agenda for computational investigation of this conjecture is proposed
Thermal measurements and inverse techniques
Orlande, Helcio RB; Maillet, Denis; Cotta, Renato M
2011-01-01
With its uncommon presentation of instructional material regarding mathematical modeling, measurements, and solution of inverse problems, Thermal Measurements and Inverse Techniques is a one-stop reference for those dealing with various aspects of heat transfer. Progress in mathematical modeling of complex industrial and environmental systems has enabled numerical simulations of most physical phenomena. In addition, recent advances in thermal instrumentation and heat transfer modeling have improved experimental procedures and indirect measurements for heat transfer research of both natural phe
Coin tossing and Laplace inversion
Indian Academy of Sciences (India)
of a probability measure μ on [0, 1] via the obvious change of variables e^{-t} = x. An inversion formula for μ in terms of its moments yields an inversion formula for ν in terms of the values of its Laplace transform at n = 0, 1, 2, ... and vice versa. In our discussion we allow μ (respectively ν) to have positive mass at 0 ...
EDITORIAL: Inverse Problems in Engineering
West, Robert M.; Lesnic, Daniel
2007-01-01
Presented here are 11 noteworthy papers selected from the Fifth International Conference on Inverse Problems in Engineering: Theory and Practice held in Cambridge, UK during 11-15 July 2005. The papers have been peer-reviewed to the usual high standards of this journal and the contributions of reviewers are much appreciated. The conference featured a good balance of the fundamental mathematical concepts of inverse problems with a diverse range of important and interesting applications, which are represented here by the selected papers. Aspects of finite-element modelling and the performance of inverse algorithms are investigated by Autrique et al and Leduc et al. Statistical aspects are considered by Emery et al and Watzenig et al with regard to Bayesian parameter estimation and inversion using particle filters. Electrostatic applications are demonstrated by van Berkel and Lionheart and also Nakatani et al. Contributions to the applications of electrical techniques and specifically electrical tomographies are provided by Wakatsuki and Kagawa, Kim et al and Kortschak et al. Aspects of inversion in optical tomography are investigated by Wright et al and Douiri et al. The authors are representative of the worldwide interest in inverse problems relating to engineering applications and their efforts in producing these excellent papers will be appreciated by many readers of this journal.
Distance collaborations with industry
Energy Technology Data Exchange (ETDEWEB)
Peskin, A.; Swyler, K.
1998-06-01
The college industry relationship has been identified as a key policy issue in Engineering Education. Collaborations between academic institutions and the industrial sector have a long history and a bright future. For Engineering and Engineering Technology programs in particular, industry has played a crucial role in many areas including advisement, financial support, and practical training of both faculty and students. Among the most important and intimate interactions are collaborative projects and formal cooperative education arrangements. Most recently, such collaborations have taken on a new dimension, as advances in technology have made possible meaningful technical collaboration at a distance. There are several obvious technology areas that have contributed significantly to this trend. Foremost is the ubiquitous presence of the Internet. Perhaps almost as important are advances in computer-based imaging. Because visual images offer a compelling user experience, they afford greater knowledge-transfer efficiency than other modes of delivery. Furthermore, the quality of the image appears to have a strongly correlated effect on insight. A good visualization facility offers both a means for communication and a shared information space for the subjects, which are among the essential features of both peer collaboration and distance learning.
Blocky inversion of multichannel elastic impedance for elastic parameters
Mozayan, Davoud Karami; Gholami, Ali; Siahkoohi, Hamid Reza
2018-04-01
Petrophysical description of reservoirs requires proper knowledge of elastic parameters like P- and S-wave velocities (Vp and Vs) and density (ρ), which can be retrieved from pre-stack seismic data using the concept of elastic impedance (EI). We propose an inversion algorithm which recovers elastic parameters from pre-stack seismic data in two sequential steps. In the first step, using the multichannel blind seismic inversion method (exploited recently for recovering acoustic impedance from post-stack seismic data), high-resolution blocky EI models are obtained directly from partial angle-stacks. Using an efficient total-variation (TV) regularization, each angle-stack is inverted independently in a multichannel form without prior knowledge of the corresponding wavelet. The second step involves inversion of the resulting EI models for elastic parameters. Mathematically, under some assumptions, the EIs are linearly related to the elastic parameters in the logarithm domain. Thus a linear weighted least squares inversion is employed to perform this step. Accuracy of the concept of elastic impedance in predicting reflection coefficients at low and high angles of incidence is compared with that of exact Zoeppritz elastic impedance, and the role of low frequency content in the problem is discussed. The performance of the proposed inversion method is tested using synthetic 2D data sets obtained from the Marmousi model and also 2D field data sets. The results confirm the efficiency and accuracy of the proposed method for inversion of pre-stack seismic data.
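The log-linear relation between elastic impedance and the elastic parameters can be sketched with Connolly-style exponents: ln EI(θ) = a ln Vp + b ln Vs + c ln ρ, with a = 1 + tan²θ, b = -8K sin²θ, c = 1 - 4K sin²θ. Given log-EI values at a few angles, a weighted least-squares solve recovers Vp, Vs, and ρ. All numbers and weights below are illustrative, and the noise-free data make the recovery exact; this is not the paper's full two-step algorithm.

```python
import numpy as np

# Connolly-style elastic impedance exponents; K is an assumed average
# (Vs/Vp)^2 for the interval (illustrative value).
K = 0.25
angles = np.radians([5.0, 15.0, 25.0, 35.0])
s2, t2 = np.sin(angles) ** 2, np.tan(angles) ** 2
A = np.column_stack([1 + t2, -8 * K * s2, 1 - 4 * K * s2])

# Log-EI "data" generated from known elastic parameters (noise-free).
vp, vs, rho = 3000.0, 1500.0, 2300.0
d = A @ np.log([vp, vs, rho])

# Weighted least squares; here the far angle is downweighted as an
# example of handling noisier far-offset stacks.
W = np.diag([1.0, 1.0, 1.0, 0.5])
m = np.linalg.solve(A.T @ W @ A, A.T @ W @ d)
print(np.round(np.exp(m), 1))  # recovers [3000, 1500, 2300]
```

With real inverted EI logs the system is noisy and only mildly overdetermined, which is why the weighting and the low-frequency content discussed in the abstract matter.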
Chromatid Painting for Chromosomal Inversion Detection Project
National Aeronautics and Space Administration — We propose a novel approach to the detection of chromosomal inversions. Transmissible chromosome aberrations (translocations and inversions) have profound genetic...
TOPSIS with statistical distances: A new approach to MADM
Directory of Open Access Journals (Sweden)
Vijaya Babu Vommi
2017-01-01
Full Text Available Multiple attribute decision making (MADM) methods are very useful in choosing the best alternative among the available finite but conflicting alternatives. TOPSIS is one of the MADM methods, which is simple in its methodology and logic. In TOPSIS, Euclidean distances of each alternative from the positive and negative ideal solutions are utilized to find the best alternative. In the literature, apart from Euclidean distances, city block distances have also been tried for the separation measures. In general, the attribute data are distributed with unequal ranges and also possess moderate to high correlations. Hence, in the present paper, the use of statistical distances is proposed in place of Euclidean distances. Procedures to find the best alternatives are developed using statistical and weighted statistical distances respectively. The proposed methods are illustrated with some industrial problems taken from the literature. Results show that the proposed methods can be used as new alternatives in MADM for choosing the best solutions.
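The baseline TOPSIS procedure the paper builds on can be sketched compactly: vector-normalize and weight the decision matrix, form the positive and negative ideal solutions, measure Euclidean separations from each, and rank by the closeness coefficient. The decision matrix, weights, and attribute types below are made up for illustration.

```python
import numpy as np

def topsis(X, w, benefit):
    """Classical TOPSIS with Euclidean separation measures.
    X: alternatives x attributes; w: attribute weights;
    benefit: True where larger is better, False for cost attributes."""
    V = w * X / np.linalg.norm(X, axis=0)          # normalize, then weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)      # separation from ideal
    d_neg = np.linalg.norm(V - anti, axis=1)       # separation from anti-ideal
    return d_neg / (d_pos + d_neg)                 # closeness coefficient

X = np.array([[250.0, 16.0, 12.0],
              [200.0, 16.0, 8.0],
              [300.0, 32.0, 16.0]])
w = np.array([0.3, 0.4, 0.3])
benefit = np.array([False, True, True])            # cost, benefit, benefit
scores = topsis(X, w, benefit)
print(scores, "best:", int(np.argmax(scores)))
```

The paper's proposal replaces the Euclidean norm in the two separation measures with a statistical (Mahalanobis-type) distance, so that correlated attributes with unequal ranges do not distort the ranking.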
The effect of genomic inversions on estimation of population genetic parameters from SNP data.
Seich Al Basatena, Nafisa-Katrin; Hoggart, Clive J; Coin, Lachlan J; O'Reilly, Paul F
2013-01-01
In recent years it has emerged that structural variants have a substantial impact on genomic variation. Inversion polymorphisms represent a significant class of structural variant, and despite the challenges in their detection, data on inversions in the human genome are increasing rapidly. Statistical methods for inferring parameters such as the recombination rate and the selection coefficient have generally been developed without accounting for the presence of inversions. Here we exploit new software for simulating inversions in population genetic data, invertFREGENE, to assess the potential impact of inversions on such methods. Using data simulated by invertFREGENE, as well as real data from several sources, we test whether large inversions have a disruptive effect on widely applied population genetics methods for inferring recombination rates, for detecting selection, and for controlling for population structure in genome-wide association studies (GWAS). We find that recombination rates estimated by LDhat are biased downward at inversion loci relative to the true contemporary recombination rates at the loci but that recombination hotspots are not falsely inferred at inversion breakpoints as may have been expected. We find that the integrated haplotype score (iHS) method for detecting selection appears robust to the presence of inversions. Finally, we observe a strong bias in the genome-wide results of principal components analysis (PCA), used to control for population structure in GWAS, in the presence of even a single large inversion, confirming the necessity to thin SNPs by linkage disequilibrium at large physical distances to obtain unbiased results.
Are contemporary tourists consuming distance?
DEFF Research Database (Denmark)
Larsen, Gunvor Riber
, because mobilities are not equal (Manderscheid, 2009; Kaufmann 2002; Gogia 2006). A hundred miles ceases to be 'just' a hundred miles when the questions of why and how this distance is to be overcome is being asked, making the social context of the individual important for the impact of having to overcome...... distance. Without the social contexts of those 'performing mobility' distance is indeed just distance, a hundred miles is a hundred miles where-ever it is, but for meaningful social inquiries into mobilities, the social contexts cannot be omitted. It is the distance that is embedded in the social contexts....... Following from the above, distance is not solely understood and represented by euclidean units such as meters or miles, and distance will have varying degrees of impact on individuals according to their social and economic contexts. Tourism mobility and distance are linked through a spatial necessity...
An adaptive distance measure for use with nonparametric models
International Nuclear Information System (INIS)
Garvey, D. R.; Hines, J. W.
2006-01-01
Distance measures perform a critical task in nonparametric, locally weighted regression. Locally weighted regression (LWR) models are a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, exemplar vectors according to a three-step process. First, the distance of the query vector to each of the exemplar vectors is calculated. Next, these distances are passed to a kernel function, which converts the distances to similarities or weights. Finally, the model output or response is calculated by performing locally weighted polynomial regression. To date, traditional distance measures, such as the Euclidean, weighted Euclidean, and L1-norm, have been used as the first step in the prediction process. Since these measures do not take into consideration sensor failures and drift, they are inherently ill-suited for application to 'real world' systems. This paper describes one such LWR model, namely auto-associative kernel regression (AAKR), and introduces a new, Adaptive Euclidean distance measure that can be used to dynamically compensate for faulty sensor inputs. In this new distance measure, the query observations that lie outside of the training range (i.e. outside the minimum and maximum input exemplars) are dropped from the distance calculation. This allows the distance calculation to be robust to sensor drifts and failures, in addition to providing a method for managing inputs that exceed the training range. In this paper, AAKR models using the standard and Adaptive Euclidean distance are developed and compared for the pressure system of an operating nuclear power plant. It is shown that when the standard Euclidean distance is used for data with failed inputs, significant errors in the AAKR predictions can result. By using the Adaptive Euclidean distance it is shown that high-fidelity predictions are possible, in spite of the input failure. In fact, it is shown that with the Adaptive Euclidean distance prediction
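The adaptive distance described above can be sketched in a few lines. This is a hedged reconstruction, not the authors' implementation: dropping out-of-range query dimensions follows the abstract, but the rescaling by the fraction of retained dimensions (so shorter distance vectors do not look artificially close) is an assumption introduced here.

```python
import numpy as np

def adaptive_euclidean(query, exemplars, lo, hi):
    """Euclidean distance from `query` to each exemplar row, dropping any
    query dimension that falls outside the training range [lo, hi].
    Rescaling by n_total/n_valid (an assumption) keeps distances comparable."""
    query = np.asarray(query, dtype=float)
    exemplars = np.asarray(exemplars, dtype=float)
    valid = (query >= lo) & (query <= hi)      # dims inside the training range
    if not valid.any():
        raise ValueError("every query dimension is out of range")
    diff = exemplars[:, valid] - query[valid]
    scale = query.size / valid.sum()
    return np.sqrt(scale * np.sum(diff ** 2, axis=1))

def aakr_predict(query, exemplars, lo, hi, bandwidth=1.0):
    """One-step AAKR correction: kernel-weighted average of the exemplars."""
    d = adaptive_euclidean(query, exemplars, lo, hi)
    w = np.exp(-d ** 2 / (2 * bandwidth ** 2))
    return w @ exemplars / w.sum()
```

With a failed second sensor (reading far outside its training range), the failed dimension is ignored and the prediction stays anchored to the healthy input.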
Kern-Steiner, R; Washecheck, H S; Kelsey, D D
1999-05-01
Case study. To demonstrate how an exercise program can be designed with specific sets, repetitions, and rest periods, and to enhance the healing process in early stages of rehabilitation when injured tissues cannot tolerate full body weight. Our goal was to enhance ankle tissue healing by reducing gravitational force through a prescriptive exercise and unloading program. This report describes a treatment method that we used to rehabilitate a collegiate soccer player with a Grade II inversion ankle sprain. This athlete sprained his ankle 6 weeks before the start of rehabilitation and was unable to participate in soccer due to persistent pain and impaired function. A 2-week functional training program was implemented, consisting of exercises chosen for specific task simulation related to soccer. Gravitational force was mechanically altered by suspending the subject or by supporting the subject on a variable incline plane. Weight-bearing was controlled so that the subject could perform exercises without pain. The outcome measures were ankle range of motion (ROM), maximum pain-free isometric strength, vertical force during unilateral squats, and unilateral hop time and distance. Pain-free weight-bearing capacity increased over the 2-week course of rehabilitation and the subject was able to return to playing soccer without pain. The ratios (involved to uninvolved extremity) at time of discharge from physical therapy were 87% to 103% for ankle ROM, 75% to 93% for isometric ankle strength, 91% for unilateral squats, 88% for unilateral hop time, and 86% for unilateral hop distance. Return to function can be achieved in a short period by exercise that is performed with a gradual increase in pain-free weight-bearing capacity.
Generating Constant Weight Binary Codes
Knight, D.G.
2008-01-01
The determination of bounds for A(n, d, w), the maximum possible number of binary vectors of length n, weight w, and pairwise Hamming distance no less than d, is a classic problem in coding theory. Such sets of vectors have many applications. A description is given of how the problem can be used in a first-year undergraduate computational…
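As a concrete illustration of lower-bounding A(n, d, w), a greedy lexicographic construction (a standard classroom approach, not necessarily the one in the article) collects weight-w vectors whose pairwise Hamming distance stays at least d:

```python
from itertools import combinations

def hamming(a: int, b: int) -> int:
    """Hamming distance between two bitmask-encoded binary vectors."""
    return bin(a ^ b).count("1")

def greedy_constant_weight_code(n: int, d: int, w: int) -> list:
    """Greedily collect length-n, weight-w binary vectors (as bitmasks) whose
    pairwise Hamming distance is at least d. The resulting size is only a
    lower bound on A(n, d, w)."""
    code = []
    for ones in combinations(range(n), w):     # weight-w vectors in lex order
        v = sum(1 << i for i in ones)
        if all(hamming(v, c) >= d for c in code):
            code.append(v)
    return code
```

For small parameters the greedy bound is already tight: `greedy_constant_weight_code(5, 4, 3)` finds 2 codewords, matching A(5, 4, 3) = 2.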
... may become sick in the first days of life or develop infections. Others may suffer from longer-term problems such as delayed motor and social development or learning disabilities. High birth weight babies are often big because ...
Afsar, Baris; Elsurer, Rengin; Soypacaci, Zeki; Kanbay, Mehmet
2016-02-01
Although anthropometric measurements are related to clinical outcomes, these relationships are not universal and differ in some disease states such as chronic kidney disease (CKD). The current study aimed to analyze the relationship of height, weight and BMI with hemodynamic and arterial stiffness parameters in normal and CKD patients separately. This cross-sectional study included 381 hypertensive patients with (N = 226) and without (N = 155) CKD. Routine laboratory tests and 24-h urine collection were performed. The augmentation index (Aix), the ratio of augmentation pressure to pulse pressure, was calculated from the blood pressure waveform after adjustment to a heart rate of 75 [Aix@75 (%)]. Pulse wave velocity (PWV) is a simple measure of the time taken by the pressure wave to travel over a specific distance. Both Aix@75 (%) and PWV, which are measures of arterial stiffness, were measured by validated oscillometric methods using the Mobil-O-Graph device. In patients without CKD, height was inversely correlated with Aix@75 (%), and weight and BMI were positively associated with PWV in multivariate analysis. However, in patients with CKD, weight and BMI were inversely and independently related to PWV: as weight and BMI increased, stiffness parameters such as Aix@75 (%) and PWV decreased. In conclusion, while BMI and weight are positively associated with arterial stiffness in normal patients, this association is negative in patients with CKD; the relationship of height, weight and BMI with hemodynamic and arterial stiffness parameters differs between patients with and without CKD.
Continuing Education for Distance Librarians
Cassner, Mary; Adams, Kate E.
2012-01-01
Distance librarians as engaged professionals work in a complex environment of changes in technologies, user expectations, and institutional goals. They strive to keep current with skills and competencies to support distance learners. This article provides a selection of continuing education opportunities for distance librarians, and is relevant…
Exact expression for information distance
P.M.B. Vitányi (Paul)
2017-01-01
textabstractInformation distance can be defined not only between two strings but also in a finite multiset of strings of cardinality greater than two. We determine a best upper bound on the information distance. It is exact, since the upper bound on the information distance for all multisets is the
DISTANCES TO DARK CLOUDS: COMPARING EXTINCTION DISTANCES TO MASER PARALLAX DISTANCES
International Nuclear Information System (INIS)
Foster, Jonathan B.; Jackson, James M.; Stead, Joseph J.; Hoare, Melvin G.; Benjamin, Robert A.
2012-01-01
We test two different methods of using near-infrared extinction to estimate distances to dark clouds in the first quadrant of the Galaxy using large near-infrared (Two Micron All Sky Survey and UKIRT Infrared Deep Sky Survey) surveys. Very long baseline interferometry parallax measurements of masers around massive young stars provide the most direct and bias-free measurement of the distance to these dark clouds. We compare the extinction distance estimates to these maser parallax distances. We also compare these distances to kinematic distances, including recent re-calibrations of the Galactic rotation curve. The extinction distance methods agree with the maser parallax distances (within the errors) between 66% and 100% of the time (depending on method and input survey) and between 85% and 100% of the time outside of the crowded Galactic center. Although the sample size is small, extinction distance methods reproduce maser parallax distances better than kinematic distances; furthermore, extinction distance methods do not suffer from the kinematic distance ambiguity. This validation gives us confidence that these extinction methods may be extended to additional dark clouds where maser parallaxes are not available.
Asymptotics of weighted random sums
DEFF Research Database (Denmark)
Corcuera, José Manuel; Nualart, David; Podolskij, Mark
2014-01-01
In this paper we study the asymptotic behaviour of weighted random sums when the sum process converges stably in law to a Brownian motion and the weight process has continuous trajectories, more regular than that of a Brownian motion. We show that these sums converge in law to the integral...... of the weight process with respect to the Brownian motion when the distance between observations goes to zero. The result is obtained with the help of fractional calculus showing the power of this technique. This study, though interesting by itself, is motivated by an error found in the proof of Theorem 4...
Modular inverse reinforcement learning for visuomotor behavior.
Rothkopf, Constantin A; Ballard, Dana H
2013-08-01
In a large variety of situations one would like to have an expressive and accurate model of observed animal or human behavior. While general-purpose mathematical models may successfully capture properties of observed behavior, it is desirable to root models in biological facts. Because of ample empirical evidence for reward-based learning in visuomotor tasks, we use a computational model based on the assumption that the observed agent is balancing the costs and benefits of its behavior to meet its goals. This leads to the framework of reinforcement learning, which additionally provides well-established algorithms for learning visuomotor task solutions. To quantify the agent's goals, we propose to use inverse reinforcement learning, which recovers those goals as rewards implicit in the observed behavior. Based on the assumption of a modular cognitive architecture, we introduce a modular inverse reinforcement learning algorithm that estimates the relative reward contributions of the component tasks in navigation, consisting of following a path while avoiding obstacles and approaching targets. It is shown how to recover the component reward weights for individual tasks and that variability in observed trajectories can be explained succinctly through behavioral goals. It is demonstrated through simulations that good estimates can be obtained with modest amounts of observation data, which in turn allows the prediction of behavior in novel configurations.
Information Distances versus Entropy Metric
Directory of Open Access Journals (Sweden)
Bo Hu
2017-06-01
Full Text Available Information distance has become an important tool in a wide variety of applications. Various types of information distance have been made over the years. These information distance measures are different from entropy metric, as the former is based on Kolmogorov complexity and the latter on Shannon entropy. However, for any computable probability distributions, up to a constant, the expected value of Kolmogorov complexity equals the Shannon entropy. We study the similar relationship between entropy and information distance. We also study the relationship between entropy and the normalized versions of information distances.
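Because Kolmogorov complexity is uncomputable, practical applications of information distance usually substitute a real compressor for it, yielding the normalized compression distance. A minimal sketch of that standard proxy (not taken from this article) using `zlib`:

```python
import zlib

def C(x: bytes) -> int:
    """Compressed length: a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(x, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, a practical approximation of the
    normalized information distance: roughly 0 for similar inputs and
    near 1 for unrelated ones."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)
```

The quality of the approximation depends entirely on how well the chosen compressor models the data; `zlib` is used here only because it ships with the standard library.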
Solving inverse problems for biological models using the collage method for differential equations.
Capasso, V; Kunze, H E; La Torre, D; Vrscay, E R
2013-07-01
In the first part of this paper we show how inverse problems for differential equations can be solved using the so-called collage method. Inverse problems can be solved by minimizing the collage distance in an appropriate metric space. We then provide several numerical examples in mathematical biology. We consider applications of this approach to the following areas: population dynamics, mRNA and protein concentration, bacteria and amoeba cells interaction, tumor growth.
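As a minimal illustration of the idea, assume the logistic growth model x' = r·x(1 − x) with unknown rate r: a discretized collage-type residual is linear in r, so minimizing it reduces to one-dimensional least squares. The function name, the finite-difference discretization, and the use of logistic growth are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def fit_logistic_rate(t, x):
    """Estimate r in x' = r * x * (1 - x) from a sampled trajectory by
    minimizing the discretized collage-type residual sum_i (dx_i - r*g_i)^2,
    which is linear least squares in r."""
    t, x = np.asarray(t, float), np.asarray(x, float)
    dx = np.gradient(x, t)      # finite-difference derivative of the data
    g = x * (1 - x)
    return float(np.dot(g, dx) / np.dot(g, g))
```

On a noise-free logistic trajectory the recovered rate matches the true one to within the discretization error of the gradient.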
Review of the inverse scattering problem at fixed energy in quantum mechanics
Sabatier, P. C.
1972-01-01
Methods of solution of the inverse scattering problem at fixed energy in quantum mechanics are presented. Scattering experiments of a beam of particles at a nonrelativistic energy by a target made up of particles are analyzed. The Schroedinger equation is used to develop the quantum mechanical description of the system, together with one of several functions depending on the relative distance of the particles. The inverse problem is the construction of the potentials from experimental measurements.
Inverse comorbidity in multiple sclerosis
DEFF Research Database (Denmark)
Thormann, Anja; Koch-Henriksen, Nils; Laursen, Bjarne
2016-01-01
Background Inverse comorbidity is disease occurring at lower rates than expected among persons with a given index disease. The objective was to identify inverse comorbidity in MS. Methods We performed a combined case-control and cohort study in a total nationwide cohort of cases with clinical onset...... discovery rate and investigated each of eight pre-specified comorbidity categories: psychiatric, cerebrovascular, cardiovascular, lung, and autoimmune comorbidities, diabetes, cancer, and Parkinson's disease. Results A total of 8947 MS-cases and 44,735 controls were eligible for inclusion. We found...... This study showed a decreased risk of cancers and pulmonary diseases after onset of MS. Identification of inverse comorbidity and of its underlying mechanisms may provide important new entry points into the understanding of MS.
Inverse photoemission of uranium oxides
International Nuclear Information System (INIS)
Roussel, P.; Morrall, P.; Tull, S.J.
2009-01-01
Understanding the itinerant-localised bonding role of the 5f electrons in the light actinides will afford an insight into their unusual physical and chemical properties. In recent years, the combination of core and valence band electron spectroscopies with theoretical modelling has already made significant progress in this area. However, information on the unoccupied density of states is still scarce. When compared to the direct photoemission techniques, measurements of the unoccupied states suffer from significantly less sensitivity and lower resolution. In this paper, we report on our experimental apparatus, which is designed to measure the inverse photoemission spectra of the light actinides. Inverse photoemission spectra of UO2 and UO2.2, along with the corresponding core and valence electron spectra, are presented in this paper. UO2 has been reported previously, although its inclusion here allows us to compare and contrast results from our experimental apparatus with the previous Bremsstrahlung Isochromat Spectroscopy and Inverse Photoemission Spectroscopy investigations.
Inverse source problems in elastodynamics
Bao, Gang; Hu, Guanghui; Kian, Yavar; Yin, Tao
2018-04-01
We are concerned with time-dependent inverse source problems in elastodynamics. The source term is supposed to be the product of a spatial function and a temporal function with compact support. We present frequency-domain and time-domain approaches to show uniqueness in determining the spatial function from wave fields on a large sphere over a finite time interval. The stability estimate of the temporal function from the data of one receiver and the uniqueness result using partial boundary data are proved. Our arguments rely heavily on the use of the Fourier transform, which motivates inversion schemes that can be easily implemented. A Landweber iterative algorithm for recovering the spatial function and a non-iterative inversion scheme based on the uniqueness proof for recovering the temporal function are proposed. Numerical examples are demonstrated in both two and three dimensions.
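A Landweber iteration like the one proposed can be sketched for a generic linear forward operator A, used here as a toy stand-in for the elastodynamic source-to-data map (the operator and parameters are illustrative, not the paper's):

```python
import numpy as np

def landweber(A, y, steps=500, omega=None):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (y - A x_k) for the
    linear inverse problem A x = y. It converges for 0 < omega < 2/||A||^2,
    and truncating the iteration early acts as regularization for noisy y."""
    A = np.asarray(A, float)
    y = np.asarray(y, float)
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step from spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x + omega * A.T @ (y - A @ x)
    return x
```

For a well-conditioned overdetermined system the iterates converge to the least-squares solution; with noisy data one would instead stop early by a discrepancy criterion.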
Optimization for nonlinear inverse problem
International Nuclear Information System (INIS)
Boyadzhiev, G.; Brandmayr, E.; Pinat, T.; Panza, G.F.
2007-06-01
The nonlinear inversion of geophysical data in general does not yield a unique solution, but a single model representing the investigated field is preferred for an easy geological interpretation of the observations. The analyzed region is constituted by a number of sub-regions where the multi-valued nonlinear inversion is applied, which leads to a multi-valued solution. Therefore, combining the values of the solution in each sub-region, many acceptable models are obtained for the entire region, and this complicates the geological interpretation of geophysical investigations. This paper presents new methodologies capable of selecting one model, among all acceptable ones, that satisfies different criteria of smoothness in the explored space of solutions. In this work we focus on the non-linear inversion of surface-wave dispersion curves, which gives structural models of shear-wave velocity versus depth, but the basic concepts have a general validity. (author)
Defining functional distances over Gene Ontology
Directory of Open Access Journals (Sweden)
del Pozo Angela
2008-01-01
Full Text Available Abstract Background A fundamental problem when trying to define the functional relationships between proteins is the difficulty in quantifying functional similarities, even when well-structured ontologies exist regarding the activity of proteins (i.e., the Gene Ontology, GO). However, functional metrics can overcome the problems in comparing and evaluating functional assignments and predictions. As a reference of proximity, previous approaches to comparing GO terms considered linkage in terms of the ontology, weighted by a probability distribution that balances the non-uniform 'richness' of different parts of the Directed Acyclic Graph. Here, we have followed a different approach to quantify functional similarities between GO terms. Results We propose a new method to derive 'functional distances' between GO terms that is based on the simultaneous occurrence of terms in the same set of InterPro entries, instead of relying on the structure of the GO. The coincidence of GO terms reveals natural biological links between the GO functions and defines a distance model Df which fulfils the properties of a Metric Space. The distances obtained in this way can be represented as a hierarchical 'Functional Tree'. Conclusion The method proposed provides a new definition of distance that enables the similarity between GO terms to be quantified. Additionally, the 'Functional Tree' defines groups with biological meaning, enhancing its utility for protein function comparison and prediction. Finally, this approach could be used for function-based protein searches in databases, and for analysing the gene clusters produced by DNA array experiments.
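A minimal sketch of a co-occurrence-based distance in the same spirit: a Jaccard-style distance over the sets of entries annotated with each term. The paper's Df is derived differently, so treat this as an illustration of the co-occurrence idea, with made-up annotation data.

```python
def cooccurrence_distance(term_a, term_b, annotations):
    """Distance between two terms from their co-occurrence across entries:
    d = 1 - |A ∩ B| / |A ∪ B|, where A and B are the sets of entries
    annotated with each term. Terms never seen at all get distance 1."""
    A = {e for e, terms in annotations.items() if term_a in terms}
    B = {e for e, terms in annotations.items() if term_b in terms}
    union = A | B
    if not union:
        return 1.0
    return 1.0 - len(A & B) / len(union)
```

Terms that frequently occur in the same entries end up close together, while terms that never co-occur sit at the maximal distance of 1; the Jaccard distance also satisfies the metric axioms, matching the Metric Space property claimed for Df.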
Fast computation of distance estimators
Directory of Open Access Journals (Sweden)
Lagergren Jens
2007-03-01
Full Text Available Abstract Background Distance methods are among the most commonly used methods for reconstructing phylogenetic trees from sequence data. The input to a distance method is a distance matrix containing estimated pairwise distances between all pairs of taxa. Distance methods themselves are often fast, e.g., the famous and popular Neighbor Joining (NJ) algorithm reconstructs a phylogeny of n taxa in time O(n³). Unfortunately, the fastest practical algorithms known for computing the distance matrix from n sequences of length l take time proportional to l·n². Since the sequence length typically is much larger than the number of taxa, the distance estimation is the bottleneck in phylogeny reconstruction. This bottleneck is especially apparent in reconstruction of large phylogenies or in applications where many trees have to be reconstructed, e.g., bootstrapping and genome-wide applications. Results We give an advanced algorithm for computing the number of mutational events between DNA sequences which is significantly faster than both Phylip and Paup. Moreover, we give a new method for estimating pairwise distances between sequences which contain ambiguity symbols. This new method is shown to be more accurate as well as faster than earlier methods. Conclusion Our novel algorithm for computing distance estimators provides a valuable tool in phylogeny reconstruction. Since the running time of our distance estimation algorithm is comparable to that of most distance methods, the previous bottleneck is removed. All distance methods, such as NJ, require a distance matrix as input and, hence, our novel algorithm significantly improves the overall running time of all distance methods. In particular, we show for real-world biological applications how the running time of phylogeny reconstruction using NJ is improved from a matter of hours to a matter of seconds.
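The l·n² distance-matrix bottleneck is easy to see in code. A minimal, unoptimized sketch using the classical Jukes-Cantor correction (a textbook estimator, not the article's faster algorithm):

```python
import numpy as np

def jc_distance(s1: str, s2: str) -> float:
    """Jukes-Cantor corrected distance between two aligned DNA sequences:
    d = -3/4 * ln(1 - 4p/3), where p is the observed mismatch fraction."""
    assert len(s1) == len(s2), "sequences must be aligned"
    p = sum(a != b for a, b in zip(s1, s2)) / len(s1)
    if p >= 0.75:
        return float("inf")      # correction undefined: sites saturated
    return -0.75 * np.log(1 - 4 * p / 3)

def distance_matrix(seqs):
    """Pairwise distance matrix: the input every distance method (e.g. NJ)
    needs. Each of the n*(n-1)/2 pairs costs O(l), hence l*n^2 overall."""
    n = len(seqs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = jc_distance(seqs[i], seqs[j])
    return D
```

Every pairwise comparison walks the full alignment, which is exactly why speeding up this step dominates the total reconstruction time for long sequences.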
Directory of Open Access Journals (Sweden)
Meng-Meng Shan
2016-01-01
Full Text Available With respect to multicriteria supplier selection problems with interval 2-tuple linguistic information, a new decision-making approach that uses distance measures is proposed. Motivated by the ordered weighted distance (OWD) measures, in this paper we develop some interval 2-tuple linguistic distance operators, such as the interval 2-tuple weighted distance (ITWD), the interval 2-tuple ordered weighted distance (ITOWD), and the interval 2-tuple hybrid weighted distance (ITHWD) operators. These aggregation operators are very useful for the treatment of input data in the form of interval 2-tuple linguistic variables. We study some desirable properties of the ITOWD operator and further generalize it by using the generalized and the quasi-arithmetic means. Finally, the new approach is utilized to complete a supplier selection study for an actual hospital from the healthcare industry.
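The OWD idea that motivates these operators can be shown on plain numeric vectors (interval 2-tuple arithmetic omitted): the individual distances are sorted before weighting, so the weights attach to magnitudes rather than to coordinates.

```python
import numpy as np

def owd(x, y, weights, lam=2.0):
    """Ordered weighted distance: componentwise distances |x_i - y_i|^lam are
    sorted in decreasing order and only then combined with the weight vector,
    which must sum to 1. lam generalizes between Manhattan-like (lam=1) and
    Euclidean-like (lam=2) behavior."""
    x, y, w = map(np.asarray, (x, y, weights))
    assert np.isclose(w.sum(), 1.0), "weights must sum to 1"
    d = np.sort(np.abs(x - y) ** lam)[::-1]    # decreasing order of magnitude
    return float((w @ d) ** (1.0 / lam))
```

Putting all weight on the first (largest) position recovers the Chebyshev distance, while uniform weights recover a normalized Minkowski distance, which is why the OWD family subsumes the familiar metrics as special cases.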
Inverse methods in hydrologic optics
Directory of Open Access Journals (Sweden)
Howard R. Gordon
2002-03-01
Full Text Available Methods for solving the hydrologic-optics inverse problem, i.e., estimating the inherent optical properties of a water body based solely on measurements of the apparent optical properties, are reviewed in detail. A new method is developed for the inverse problem in water bodies in which fluorescence is important. It is shown that in principle, given profiles of the spectra of up- and downwelling irradiance, estimation of the coefficient of inelastic scattering from any wave band to any other wave band can be effected.
Inverse Interval Matrix: A Survey
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří; Farhadsefat, R.
2011-01-01
Roč. 22, - (2011), s. 704-719 E-ISSN 1081-3810 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords : interval matrix * inverse interval matrix * NP-hardness * enclosure * unit midpoint * inverse sign stability * nonnegative invertibility * absolute value equation * algorithm Subject RIV: BA - General Mathematics Impact factor: 0.808, year: 2010 http://www.math.technion.ac.il/iic/ela/ela-articles/articles/vol22_pp704-719.pdf
Size Estimates in Inverse Problems
Di Cristo, Michele
2014-01-06
Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problem that is very useful in practical applications. When only a finite number of measurements is available, we try to detect some information on the embedded object, such as its size. In this talk we review some recent results on several inverse problems. The idea is to provide constructive upper and lower estimates of the area/volume of the unknown defect in terms of a quantity related to the work that can be expressed with the available boundary data.
-Dimensional Fractional Lagrange's Inversion Theorem
Directory of Open Access Journals (Sweden)
F. A. Abd El-Salam
2013-01-01
Full Text Available Using Riemann-Liouville fractional differential operator, a fractional extension of the Lagrange inversion theorem and related formulas are developed. The required basic definitions, lemmas, and theorems in the fractional calculus are presented. A fractional form of Lagrange's expansion for one implicitly defined independent variable is obtained. Then, a fractional version of Lagrange's expansion in more than one unknown function is generalized. For extending the treatment in higher dimensions, some relevant vectors and tensors definitions and notations are presented. A fractional Taylor expansion of a function of -dimensional polyadics is derived. A fractional -dimensional Lagrange inversion theorem is proved.
Planning with Reachable Distances
Tang, Xinyu
2009-01-01
Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the robot's number of degrees of freedom. In addition to supporting efficient sampling, we show that the RD-space formulation naturally supports planning, and in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed chain planning with multiple loops, restricted end effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed chain systems with 1000 links in time comparable to open chain sampling, and we can generate samples for 1000-link multi-loop systems of varying topology in less than a second. © 2009 Springer-Verlag.
Encyclopedia of Distance Learning
Directory of Open Access Journals (Sweden)
Tojde
2005-07-01
Full Text Available Encyclopedia of Distance Learning, a reference book published by Idea Group, consists of four volumes. The encyclopedia was edited by six editors, five from the USA and one from Hong Kong, and supported by 19 advisors. The editors and the advisory board are from different institutions. There are more than 400 contributors to the book. Although the majority of contributors are from the USA, several are from Asian, European and Australian countries such as Hong Kong, Malaysia, Germany, Australia, Turkey, Singapore, and Belgium. All contributors are academics and represent a variety of universities. All volumes have an editorial advisory board list, a list of authors, editors' prefaces, a publisher's note, editors' biographies, a foreword, contents, an index and a key terms index. There are more than 3000 terms and definitions and over 6000 additional references across the four volumes. The four-volume set is ordered alphabetically: Volume I covers initials A to C, Volume II covers D to H, Volume III covers I to Q and Volume IV covers R to Z.
DEFF Research Database (Denmark)
Bærenholdt, Jørgen Ole
Coping with distances - Producing Nordic Atlantic Societies. People cope with distances and thereby produce societies. This is the fundamental viewpoint of a dissertation in which society is not taken for granted. On the contrary, society is something that must continually be produced, reproduced and changed, and this...... been decisive. The dissertation takes its point of departure in a theoretical discussion of the concepts of society, coping, social capital, territoriality, mobility, bonding (strong identity-bearing ties) and bridging (weak, bridge-building connections). It cuts across the usual divides between culture...... threat to society-building. But Chapter 6 shows the many ways in which tourism contributes to society-building, a story that opens with a thorough discussion of the fundamental importance, also culturally, of voyages of discovery. Both of these chapters add to the coping approach several analytical......
Testing the gravitational inverse-square law
International Nuclear Information System (INIS)
Adelberger, Eric; Heckel, B.; Hoyle, C.D.
2005-01-01
If the universe contains more than three spatial dimensions, as many physicists believe, our current laws of gravity should break down at small distances. When Isaac Newton realized that the acceleration of the Moon as it orbited around the Earth could be related to the acceleration of an apple as it fell to the ground, it was the first time that two seemingly unrelated physical phenomena had been 'unified'. The quest to unify all the forces of nature is one that still keeps physicists busy today. Newton showed that the gravitational attraction between two point bodies is proportional to the product of their masses and inversely proportional to the square of the distance between them. Newton's theory, which assumes that the gravitational force acts instantaneously, remained essentially unchallenged for roughly two centuries until Einstein proposed the general theory of relativity in 1915. Einstein's radical new theory made gravity consistent with the two basic ideas of relativity: the world is 4D - the three directions of space combined with time - and no physical effect can travel faster than light. The theory of general relativity states that gravity is not a force in the usual sense but a consequence of the curvature of this space-time produced by mass or energy. However, in the limit of low velocities and weak gravitational fields, Einstein's theory still predicts that the gravitational force between two point objects obeys an inverse-square law. One of the outstanding challenges in physics is to finish what Newton started and achieve the ultimate 'grand unification' - to unify gravity with the other three fundamental forces (the electromagnetic force, and the strong and weak nuclear forces) into a single quantum theory. In string theory - one of the leading candidates for an ultimate theory - the fundamental entities of nature are 1D strings and higher-dimensional objects called 'branes', rather than the point-like particles we are familiar with. String
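The inverse-square dependence Newton identified is easy to state in code. A minimal sketch (the Earth-Moon masses and separation below are rounded illustrative values, not precise figures):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity(m1, m2, r):
    # Newton's inverse-square law: F = G * m1 * m2 / r**2 (force in newtons)
    return G * m1 * m2 / r**2

# Approximate Earth-Moon values, treated as point masses
f_near = gravity(5.97e24, 7.35e22, 3.84e8)
f_far = gravity(5.97e24, 7.35e22, 2 * 3.84e8)
print(f_near / f_far)  # doubling the separation quarters the force
```

The ratio of 4 between the two forces is exactly the signature that the short-range experiments described below test for deviations from.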
Lee, Sun-Min; Lee, Jung-Hoon
2015-01-01
[Purpose] The purpose of this study was to report the effects of ankle inversion taping using kinesiology tape in a patient with a medial ankle sprain. [Subject] A 28-year-old amateur soccer player suffered a Grade 2 medial ankle sprain during a match. [Methods] Ankle inversion taping was applied to the sprained ankle every day for 2 months. [Results] His symptoms were reduced after ankle inversion taping application for 2 months. The self-reported function score, the reach distances in the S...
Two-Dimensional Linear Inversion of GPR Data with a Shifting Zoom along the Observation Line
Directory of Open Access Journals (Sweden)
Raffaele Persico
2017-09-01
Full Text Available Linear inverse scattering problems can be solved by regularized inversion of a matrix whose calculation and inversion may require significant computing resources, in particular a significant amount of RAM. This effort depends on the extent of the investigation domain: when the domain becomes electrically large, a large amount of data must be gathered and a large number of unknowns sought. This leads, in turn, to the problem of inverting excessively large matrices. Here, we consider the problem of a ground-penetrating radar (GPR) survey in two-dimensional (2D) geometry, with antennas at an electrically short distance from the soil. In particular, we present a strategy for affording the inversion of large investigation domains, based on a shifting zoom procedure. The proposed strategy was successfully validated using experimental radar data.
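The building block such schemes apply window by window is a regularized least-squares solve. A minimal Tikhonov sketch (the operator A and model below are random stand-ins, not an actual scattering operator):

```python
import numpy as np

def tikhonov_invert(A, d, alpha=1e-3):
    # minimize ||A x - d||^2 + alpha * ||x||^2 via the normal equations
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ d)

rng = np.random.default_rng(0)
A = rng.standard_normal((120, 40))         # toy forward operator
x_true = np.zeros(40)
x_true[15:20] = 1.0                        # a small "target" in the domain
d = A @ x_true + 0.01 * rng.standard_normal(120)
x_est = tikhonov_invert(A, d)
```

The shifting-zoom idea is precisely to keep the matrix in such a solve small by restricting it to a window that slides along the observation line, rather than assembling one matrix for the whole electrically large domain.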
Superconductivity in Pb inverse opal
International Nuclear Information System (INIS)
Aliev, Ali E.; Lee, Sergey B.; Zakhidov, Anvar A.; Baughman, Ray H.
2007-01-01
Type-II superconducting behavior was observed in highly periodic three-dimensional lead inverse opal prepared by infiltration of melted Pb into blue (D = 160 nm), green (D = 220 nm) and red (D = 300 nm) opals, followed by extraction of the SiO2 spheres by chemical etching. The onset of a broad phase transition (ΔT = 0.3 K) was shifted from Tc = 7.196 K for bulk Pb to Tc = 7.325 K. The upper critical field Hc2 (3150 Oe), measured from high-field hysteresis loops, exceeds the critical field for bulk lead (803 Oe) fourfold. Two well-resolved peaks observed in the hysteresis loops were ascribed to flux penetration into the cylindrical void space that can be found in the inverse opal structure and into the periodic structure of Pb nanoparticles. The red inverse opal shows pronounced oscillations of magnetic moment in the mixed state at low temperatures, T 0.9Tc has been observed for all of the samples studied. The magnetic-field periodicity of the resistivity modulation is in good agreement with the lattice parameter of the inverse opal structure. We attribute the failure to observe pronounced modulation in magneto-resistive measurements to difficulties in the precise orientation of the sample along the magnetic field.
Statistical and Computational Inverse Problems
Kaipio, Jari
2005-01-01
Develops the statistical approach to inverse problems with an emphasis on modeling and computations. The book discusses the measurement noise modeling and Bayesian estimation, and uses Markov Chain Monte Carlo methods to explore the probability distributions. It is for researchers and advanced students in applied mathematics.
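The approach the book develops can be illustrated on a toy problem: recover a scalar parameter from noisy indirect data by sampling the Bayesian posterior with a random-walk Metropolis chain (a basic Markov Chain Monte Carlo method). All names and numbers below are illustrative, not taken from the book:

```python
import numpy as np

# Toy Bayesian inverse problem: data = m_true**2 + Gaussian noise;
# explore the posterior over m with random-walk Metropolis.
rng = np.random.default_rng(1)
m_true, sigma = 2.0, 0.1
data = m_true**2 + sigma * rng.standard_normal(50)

def log_posterior(m):
    log_like = -np.sum((data - m**2) ** 2) / (2 * sigma**2)
    log_prior = -((m - 1.5) ** 2) / 2.0          # N(1.5, 1) prior
    return log_like + log_prior

m, lp = 1.5, log_posterior(1.5)
chain = []
for _ in range(5000):
    proposal = m + 0.05 * rng.standard_normal()
    lp_prop = log_posterior(proposal)
    if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
        m, lp = proposal, lp_prop
    chain.append(m)
posterior_mean = float(np.mean(chain[1000:]))    # discard burn-in
```

The chain's samples approximate the full posterior distribution, so measurement-noise modeling and uncertainty quantification come out of the same computation.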
Coin Tossing and Laplace Inversion
Indian Academy of Sciences (India)
An analysis of exchangeable sequences of coin tossings leads to inversion formulae for Laplace transforms of probability measures. Author affiliations: J C Gupta, Indian Statistical Institute, New Delhi 110 016, India; 32, Mirdha Tola, Budaun 243 601, India. Dates: manuscript received 5 May 1999; manuscript revised 3 ...
Givental Graphs and Inversion Symmetry
Dunin-Barkovskiy, P.; Shadrin, S.; Spitz, L.
2013-01-01
Inversion symmetry is a very non-trivial discrete symmetry of Frobenius manifolds. It was obtained by Dubrovin from one of the elementary Schlesinger transformations of a special ODE associated to a Frobenius manifold. In this paper, we review the Givental group action on Frobenius manifolds in
Wave-equation dispersion inversion
Li, Jing
2016-12-08
We present the theory for wave-equation inversion of dispersion curves, where the misfit function is the sum of the squared differences between the wavenumbers along the predicted and observed dispersion curves. The dispersion curves are obtained from Rayleigh waves recorded by vertical-component geophones. Similar to wave-equation traveltime tomography, the complicated surface wave arrivals in traces are skeletonized as simpler data, namely the picked dispersion curves in the phase-velocity and frequency domains. Solutions to the elastic wave equation and an iterative optimization method are then used to invert these curves for 2-D or 3-D S-wave velocity models. This procedure, denoted as wave-equation dispersion inversion (WD), does not require the assumption of a layered model and is significantly less prone to the cycle-skipping problems of full waveform inversion. The synthetic and field data examples demonstrate that WD can approximately reconstruct the S-wave velocity distributions in laterally heterogeneous media if the dispersion curves can be identified and picked. The WD method is easily extended to anisotropic data and the inversion of dispersion curves associated with Love waves.
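The WD misfit described above can be written in a few lines: the sum of squared differences between predicted and observed wavenumbers k(f) = 2*pi*f / c(f) along a picked dispersion curve. The phase-velocity curves below are made-up stand-ins for picked data, not output of a wave-equation solver:

```python
import numpy as np

freqs = np.linspace(5.0, 30.0, 26)           # Hz
c_obs = 800.0 + 5.0 * freqs                  # observed phase velocity, m/s
c_pred = 790.0 + 5.2 * freqs                 # predicted from a trial S-wave model

k_obs = 2.0 * np.pi * freqs / c_obs          # observed wavenumbers, rad/m
k_pred = 2.0 * np.pi * freqs / c_pred
misfit = 0.5 * float(np.sum((k_pred - k_obs) ** 2))
```

Because this skeletonized misfit compares smooth curves rather than oscillatory waveforms, it has no notion of a phase wrapping by one cycle, which is why the method is less prone to cycle skipping than full waveform inversion.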
Adjoint modeling for acoustic inversion
Hursky, Paul; Porter, Michael B.; Cornuelle, B. D.; Hodgkiss, W. S.; Kuperman, W. A.
2004-02-01
The use of adjoint modeling for acoustic inversion is investigated. An adjoint model is derived from a linearized forward propagation model to propagate data-model misfit at the observation points back through the medium to the medium perturbations not being accounted for in the model. This adjoint model can be used to aid in inverting for these unaccounted medium perturbations. Adjoint methods are being applied to a variety of inversion problems, but have not drawn much attention from the underwater acoustic community. This paper presents an application of adjoint methods to acoustic inversion. Inversions are demonstrated in simulation for both range-independent and range-dependent sound speed profiles using the adjoint of a parabolic equation model. Sensitivity and error analyses are discussed showing how the adjoint model enables calculations to be performed in the space of observations, rather than the often much larger space of model parameters. Using an adjoint model enables directions of steepest descent in the model parameters (what we invert for) to be calculated using far fewer modeling runs than if a forward model only were used.
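The cost saving the abstract describes is easiest to see in the linear case: for a forward model d_pred = A @ m, the adjoint operator is A.T, and applying it to the data-space residual yields the full gradient of J(m) = 0.5 * ||A m - d||^2 in one pass, versus one forward run per model parameter for finite differences. A, m and d below are random toy stand-ins, not an acoustic propagation model:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))              # linearized forward operator
m = rng.standard_normal(10)                    # model parameters
d = rng.standard_normal(30)                    # observations

residual = A @ m - d
grad_adjoint = A.T @ residual                  # one adjoint application

def J(model):
    return 0.5 * np.sum((A @ model - d) ** 2)

eps = 1e-6
grad_fd = np.array([(J(m + eps * np.eye(10)[i]) - J(m)) / eps
                    for i in range(10)])       # costs 10 extra forward runs
```

The two gradients agree to finite-difference accuracy, which is the standard sanity check for an adjoint implementation.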
Laboratory Tests of the Inverse Square Law of Gravity
Schlamminger, Stephan
2010-02-01
Newton's inverse square force law of gravity follows directly from the fact that we live in a 3-dimensional world. For sub-millimeter length scales there may be undiscovered, extra dimensions. Such extra dimensions can be detected with inverse square law tests accessible to torsion balances. I will present an overview of two experiments that are being conducted at the University of Washington to search for gravitational-strength deviations from the inverse square law for extra dimension length scales smaller than 50 micrometers. One experiment is designed to measure the distance-dependent force between closely spaced masses, whereas the second experiment is a null experiment and is only sensitive to a deviation from the inverse square law of gravity. The first experiment consists of a torsion pendulum that is suspended above a continuously rotating attractor. The attractor and the pendulum are disks with azimuthal sectors of alternating high and low density. The torque on the pendulum disk varies as a function of the attractor angle with a 3 degree period. The amplitude of the torque signal is analyzed as a function of the separation between the pendulum and the attractor. The second experiment consists of a plate pendulum that is suspended parallel to a larger vertical plate attractor. The pendulum plate has an internal density asymmetry, with a dense inlay on one half facing the attractor and another inlay on the other half on the side away from the attractor. If the inverse square law holds, the gravitational field of the attractor is uniform and the torque on the pendulum is independent of the gap between pendulum and attractor. The attractor position is modulated between a near and a far position, and the torque difference on the pendulum is recorded and analyzed for a possible inverse square law violation.
Workflows for Full Waveform Inversions
Boehm, Christian; Krischer, Lion; Afanasiev, Michael; van Driel, Martin; May, Dave A.; Rietmann, Max; Fichtner, Andreas
2017-04-01
Despite many theoretical advances and the increasing availability of high-performance computing clusters, full seismic waveform inversions still face considerable challenges regarding data and workflow management. While the community has access to solvers which can harness modern heterogeneous computing architectures, the computational bottleneck has fallen to these often manpower-bounded issues that need to be overcome to facilitate further progress. Modern inversions involve huge amounts of data and require a tight integration between numerical PDE solvers, data acquisition and processing systems, nonlinear optimization libraries, and job orchestration frameworks. To this end we created a set of libraries and applications revolving around Salvus (http://salvus.io), a novel software package designed to solve large-scale full waveform inverse problems. This presentation focuses on solving passive source seismic full waveform inversions from local to global scales with Salvus. We discuss (i) design choices for the aforementioned components required for full waveform modeling and inversion, (ii) their implementation in the Salvus framework, and (iii) how it is all tied together by a usable workflow system. We combine state-of-the-art algorithms ranging from high-order finite-element solutions of the wave equation to quasi-Newton optimization algorithms using trust-region methods that can handle inexact derivatives. All is steered by an automated interactive graph-based workflow framework capable of orchestrating all necessary pieces. This naturally facilitates the creation of new Earth models and hopefully sparks new scientific insights. Additionally, and even more importantly, it enhances reproducibility and reliability of the final results.
Hausdorff distance and image processing
International Nuclear Information System (INIS)
Sendov, Bl
2004-01-01
Mathematical methods for image processing make use of function spaces, usually Banach spaces with integral L p norms, and the corresponding mathematical models of the images are functions in these spaces. There is ongoing discussion about the value of p for which the distance between two functions is most natural when they represent images, that is, about the metric in which our eyes measure the distance between images. In this paper we argue that the Hausdorff distance is a more natural measure of the distance (difference) between images than any L p norm.
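For finite point sets (for instance, the object pixels of two binary images) the Hausdorff distance is the larger of the two directed distances max over a of min over b of |a - b|. A minimal brute-force sketch with tiny illustrative point sets:

```python
import numpy as np

def hausdorff(A, B):
    # Pairwise Euclidean distances between point sets A (n x 2) and B (m x 2),
    # then the larger of the two directed Hausdorff distances.
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Two tiny binary "images" represented by their object-pixel coordinates.
img_a = np.array([[0, 0], [0, 1], [1, 0]], dtype=float)
img_b = np.array([[0, 0], [0, 1], [3, 0]], dtype=float)
print(hausdorff(img_a, img_b))   # 2.0: pixel (3, 0) is distance 2 from img_a
```

Unlike an L p norm of the pixelwise difference, this distance is driven by the single worst-matched feature, which is closer to how a viewer judges whether two shapes differ.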
Inverse treatment planning based on MRI for HDR prostate brachytherapy
International Nuclear Information System (INIS)
Citrin, Deborah; Ning, Holly; Guion, Peter; Li Guang; Susil, Robert C.; Miller, Robert W.; Lessard, Etienne; Pouliot, Jean; Xie Huchen; Capala, Jacek; Coleman, C. Norman; Camphausen, Kevin; Menard, Cynthia
2005-01-01
Purpose: To develop and optimize a technique for inverse treatment planning based solely on magnetic resonance imaging (MRI) during high-dose-rate brachytherapy for prostate cancer. Methods and materials: Phantom studies were performed to verify the spatial integrity of treatment planning based on MRI. Data were evaluated from 10 patients with clinically localized prostate cancer who had undergone two high-dose-rate prostate brachytherapy boosts under MRI guidance before and after pelvic radiotherapy. Treatment-planning MRI scans were systematically evaluated to derive a class solution for inverse planning constraints that would reproducibly result in acceptable target and normal-tissue dosimetry. Results: We verified the spatial integrity of MRI for treatment planning. MRI anatomic evaluation revealed no significant displacement of the prostate in the left lateral decubitus position, a mean distance of 14.47 mm from the prostatic apex to the penile bulb, and clear demarcation of the neurovascular bundles on postcontrast imaging. The derived class solution for inverse planning constraints yielded a mean of 95.69% of the target volume receiving 100% of the prescribed dose, while keeping the rectal volume receiving 75% of the prescribed dose below 5% (mean 1.36%) and the urethral volume receiving 125% of the prescribed dose below 2% (mean 0.54%). Conclusion: Systematic evaluation of image spatial integrity, delineation uncertainty, and inverse planning constraints in our procedure reduced uncertainty in planning and treatment.
Weighted filtered backprojection for quantitative fluorescence optical projection tomography
Energy Technology Data Exchange (ETDEWEB)
Darrell, A; Marias, K [BMI Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, Vassilika Vouton, PO Box 1385, GR 711 10 Heraklion (Greece); Meyer, H; Ripoll, J [Institute of Electronic Structure and Laser, Foundation for Research and Technology-Hellas, Vassilika Vouton, PO Box 1385, GR 711 10 Heraklion (Greece); Brady, M [Medical Vision Laboratory, Department of Engineering Science, Oxford University, Parks Road, Oxford OX1 3PJ (United Kingdom)
2008-07-21
Reconstructing images from a set of fluorescence optical projection tomography (OPT) projections is a relatively new problem. Several physical aspects of fluorescence OPT necessitate a different treatment of the inverse problem to that required for non-fluorescence tomography. Given a fluorophore within the depth of field of the imaging system, the power received by the optical system, and therefore the CCD detector, is related to the distance of the fluorophore from the objective entrance pupil. Additionally, due to the slight blurring of images of sources positioned off the focal plane, the CCD image of a fluorophore off the focal plane is lower in intensity than the CCD image of an identical fluorophore positioned on the focal plane. The filtered backprojection (FBP) algorithm does not take these effects into account and so cannot be expected to yield truly quantitative results. A full model of image formation is introduced which takes into account the effects of isotropic emission and defocus. The model is used to obtain a weighting function which is used in a variation of the FBP algorithm called weighted filtered backprojection (WFBP). This new algorithm is tested with simulated data and with experimental data from a phantom consisting of fluorescent microspheres embedded in an agarose gel.
Dynamics of an N-vortex state at small distances
Ovchinnikov, Yu. N.
2013-01-01
We investigate the dynamics of a state of N vortices placed, at the initial instant, at small distances from some point close to the "weight center" of the vortices. The general solution of the time-dependent Ginzburg-Landau equation for N vortices in a large time interval is found. For N = 2, the position of the "weight center" of the two vortices is time independent. For N ≥ 3, the position of the "weight center" depends weakly on time and stays within a range of the order of a^3, where a is the characteristic distance of a single vortex from the "weight center". For N = 3, the time evolution of the N-vortex state is fixed by the position of the vortices at any time instant and by the values of two small parameters. For N ≥ 4, a new parameter arises in the problem, connected with relative increases in the number of decay modes.
Real time monitoring of moment magnitude by waveform inversion
Lee, J.; Friederich, W.; Meier, T.
2012-01-01
An instantaneous measure of the moment magnitude (Mw) of an ongoing earthquake is estimated from the moment rate function (MRF) determined in real-time from available seismic data using waveform inversion. Integration of the MRF gives the moment function from which an instantaneous Mw is derived. By repeating the inversion procedure at regular intervals while seismic data are coming in we can monitor the evolution of seismic moment and Mw with time. The final size and duration of a strong earthquake can be obtained within 12 to 15 minutes after the origin time. We show examples of Mw monitoring for three large earthquakes at regional distances. The estimated Mw is only weakly sensitive to changes in the assumed source parameters. Depending on the availability of seismic stations close to the epicenter, a rapid estimation of the Mw as a prerequisite for the assessment of earthquake damage potential appears to be feasible.
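The magnitude step described above is a direct calculation: integrate the moment rate function (MRF) to the scalar seismic moment M0, then apply the standard moment-magnitude relation Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m. The half-sine MRF below is an illustrative stand-in, not a real source time function:

```python
import numpy as np

t = np.linspace(0.0, 20.0, 2001)       # time since origin, s
dt = t[1] - t[0]
tau = 5.0                              # toy source duration, s
# Toy moment rate function: half-sine pulse of peak 1.2e18 N*m/s
mrf = np.where(t < tau, 1.2e18 * np.sin(np.pi * t / tau), 0.0)

M0 = float(np.sum(mrf) * dt)           # integrated MRF -> seismic moment, N*m
Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)
```

In the monitoring setting, the same integral is simply re-evaluated over the data available so far, so Mw grows toward its final value as the rupture unfolds.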
3D stochastic inversion and joint inversion of potential fields for multi scale parameters
Shamsipour, Pejman
In this thesis we present the development of new techniques for the interpretation of potential field data (gravity and magnetic), which are the most widespread economic geophysical methods used for oil and mineral exploration. These new techniques help to address the long-standing issue with the interpretation of potential fields, namely the intrinsic non-uniqueness of the inversion of these types of data. The thesis takes the form of three papers (four including the Appendix), which have been published, or are soon to be published, in respected international journals. The purpose of the thesis is to introduce new methods based on 3D stochastic approaches for: 1) inversion of potential field data (magnetic), 2) multiscale inversion using surface and borehole data, and 3) joint inversion of geophysical potential field data. We first present a stochastic inversion method based on a geostatistical approach to recover 3D susceptibility models from magnetic data. The aim of applying geostatistics is to provide quantitative descriptions of natural variables distributed in space or in time and space. We evaluate the uncertainty on the parameter model by using geostatistical unconditional simulations. The realizations are post-conditioned by cokriging to observation data. In order to avoid the natural tendency of the estimated structure to lie near the surface, depth weighting is included in the cokriging system. Then, we introduce an algorithm for multiscale inversion, which has the capability of inverting data on multiple supports. The method involves four main steps: i. upscaling of borehole parameters (density or susceptibility) to block parameters; ii. selection of blocks to use as constraints based on a threshold on the kriging variance; iii. inversion of observation data with the selected block densities as constraints; and iv. downscaling of inverted parameters to small prisms. Two modes of application are presented: estimation and simulation. Finally, a novel
Quality Content in Distance Education
Yildiz, Ezgi Pelin; Isman, Aytekin
2016-01-01
In parallel with technological advances, in today's world education activities can be conducted without the constraints of time and space. One of the most important of these activities is distance education. The success of distance education depends on content quality. The proliferation of e-learning environments has brought a need for…
The Psychology of Psychic Distance
DEFF Research Database (Denmark)
Håkanson, Lars; Ambos, Björn; Schuster, Anja
2016-01-01
and their theoretical underpinnings assume psychic distances to be symmetric. Building on insights from psychology and sociology, this paper demonstrates how national factors and cognitive processes interact in the formation of asymmetric distance perceptions. The results suggest that exposure to other countries...
McQuinn, Kristen. B. W.; Skillman, Evan D.; Dolphin, Andrew E.; Berg, Danielle; Kennicutt, Robert
2016-07-01
Great investments of observing time have been dedicated to the study of nearby spiral galaxies with diverse goals ranging from understanding the star formation process to characterizing their dark matter distributions. Accurate distances are fundamental to interpreting observations of these galaxies, yet many of the best studied nearby galaxies have distances based on methods with relatively large uncertainties. We have started a program to derive accurate distances to these galaxies. Here we measure the distance to M51—the Whirlpool galaxy—from newly obtained Hubble Space Telescope optical imaging using the tip of the red giant branch method. We measure the distance to be 8.58 ± 0.10 Mpc (statistical), corresponding to a distance modulus of 29.67 ± 0.02 mag. Our distance is an improvement over previous results as we use a well-calibrated, stable distance indicator, precision photometry in an optimally selected field of view, and a Bayesian maximum likelihood technique that reduces measurement uncertainties. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
Distance transforms: Academics versus industry
van den Broek, Egon; Schouten, Th.E.
In image and video analysis, distance transformations (DT) are frequently used. They provide a distance image (DI) giving, for each background pixel, the distance to the nearest object pixel. DT touches upon the core of many applications; consequently, not only science but also industry has conducted a significant body of
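The distance image described above can be computed with the classic two-pass sequential (Rosenfeld-Pfaltz) algorithm, sketched here for the city-block metric on a toy mask; production code would typically use an optimized library routine instead:

```python
import numpy as np

def chamfer_dt(obj):
    # Two-pass city-block distance transform of a boolean object mask:
    # each background pixel receives its 4-connected distance to the
    # nearest object pixel.
    dist = np.where(obj, 0, obj.size + 1).astype(int)
    rows, cols = dist.shape
    for i in range(rows):                 # forward pass
        for j in range(cols):
            if i > 0:
                dist[i, j] = min(dist[i, j], dist[i - 1, j] + 1)
            if j > 0:
                dist[i, j] = min(dist[i, j], dist[i, j - 1] + 1)
    for i in range(rows - 1, -1, -1):     # backward pass
        for j in range(cols - 1, -1, -1):
            if i < rows - 1:
                dist[i, j] = min(dist[i, j], dist[i + 1, j] + 1)
            if j < cols - 1:
                dist[i, j] = min(dist[i, j], dist[i, j + 1] + 1)
    return dist

mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True                         # a single object pixel
di = chamfer_dt(mask)                     # the "distance image"
```

Two raster sweeps suffice because every shortest 4-connected path can be split into a part answered by the forward scan and a part answered by the backward scan.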
Distance Education and the WWW.
Peraya, Daniel
1995-01-01
Examines the evolution of distance education and learning via the use of communication technology. Focuses on distance education as a way of preparing returning adult students to meet demands of the labor market, and reviews uses of the World Wide Web as a communication tool to create electronic classrooms and deliver instructional materials. (JMV)
Distance criterion for hydrogen bond
Indian Academy of Sciences (India)
Distance criterion for hydrogen bond. In a D-H...A contact, the D...A distance must be less than the sum of the van der Waals radii of the D and A atoms for it to be a hydrogen bond.
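The criterion is a one-line test once van der Waals radii are tabulated. A sketch using Bondi radii in angstroms, shown only for N and O as an illustration:

```python
# Distance criterion for a D-H...A contact: hydrogen bond only if the
# D...A separation is below the sum of the van der Waals radii of D and A.
VDW_RADII = {"N": 1.55, "O": 1.52}   # Bondi values, angstroms (illustrative subset)

def is_hbond_by_distance(donor, acceptor, d_da):
    return d_da < VDW_RADII[donor] + VDW_RADII[acceptor]

print(is_hbond_by_distance("N", "O", 2.9))   # True: 2.9 < 1.55 + 1.52
print(is_hbond_by_distance("N", "O", 3.2))   # False
```

A full implementation would also check the D-H...A angle; the distance test alone is the necessary condition stated here.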
Energy Technology Data Exchange (ETDEWEB)
McQuinn, Kristen B. W. [University of Texas at Austin, McDonald Observatory, 2515 Speedway, Stop C1400 Austin, TX 78712 (United States); Skillman, Evan D. [Minnesota Institute for Astrophysics, School of Physics and Astronomy, 116 Church Street, SE, University of Minnesota, Minneapolis, MN 55455 (United States); Dolphin, Andrew E. [Raytheon Company, 1151 E. Hermans Road, Tucson, AZ 85756 (United States); Berg, Danielle [Center for Gravitation, Cosmology and Astrophysics, Department of Physics, University of Wisconsin Milwaukee, 1900 East Kenwood Boulevard, Milwaukee, WI 53211 (United States); Kennicutt, Robert, E-mail: kmcquinn@astro.as.utexas.edu [Institute for Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom)
2016-11-01
M104 (NGC 4594; the Sombrero galaxy) is a nearby, well-studied elliptical galaxy included in scores of surveys focused on understanding the details of galaxy evolution. Despite the importance of observations of M104, a consensus distance has not yet been established. Here, we use newly obtained Hubble Space Telescope optical imaging to measure the distance to M104 based on the tip of the red giant branch (TRGB) method. Our measurement yields a distance to M104 of 9.55 ± 0.13 ± 0.31 Mpc, equivalent to a distance modulus of 29.90 ± 0.03 ± 0.07 mag. Our distance is an improvement over previous results as we use a well-calibrated, stable distance indicator, precision photometry in an optimally selected field of view, and a Bayesian maximum likelihood technique that reduces measurement uncertainties. The most discrepant previous results are due to Tully–Fisher method distances, which are likely inappropriate for M104 given its peculiar morphology and structure. Our results are part of a larger program to measure accurate distances to a sample of well-known spiral galaxies (including M51, M74, and M63) using the TRGB method.
Energy Technology Data Exchange (ETDEWEB)
McQuinn, Kristen B. W. [University of Texas at Austin, McDonald Observatory, 2515 Speedway, Stop C1400 Austin, TX 78712 (United States); Skillman, Evan D. [Minnesota Institute for Astrophysics, School of Physics and Astronomy, 116 Church Street, S.E., University of Minnesota, Minneapolis, MN 55455 (United States); Dolphin, Andrew E. [Raytheon Company, 1151 E. Hermans Road, Tucson, AZ 85756 (United States); Berg, Danielle [Center for Gravitation, Cosmology and Astrophysics, Department of Physics, University of Wisconsin Milwaukee, 1900 East Kenwood Boulevard, Milwaukee, WI 53211 (United States); Kennicutt, Robert, E-mail: kmcquinn@astro.as.utexas.edu [Institute for Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom)
2016-07-20
Great investments of observing time have been dedicated to the study of nearby spiral galaxies with diverse goals ranging from understanding the star formation process to characterizing their dark matter distributions. Accurate distances are fundamental to interpreting observations of these galaxies, yet many of the best studied nearby galaxies have distances based on methods with relatively large uncertainties. We have started a program to derive accurate distances to these galaxies. Here we measure the distance to M51—the Whirlpool galaxy—from newly obtained Hubble Space Telescope optical imaging using the tip of the red giant branch method. We measure the distance to be 8.58 ± 0.10 Mpc (statistical), corresponding to a distance modulus of 29.67 ± 0.02 mag. Our distance is an improvement over previous results as we use a well-calibrated, stable distance indicator, precision photometry in an optimally selected field of view, and a Bayesian maximum likelihood technique that reduces measurement uncertainties.
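The distance and distance-modulus pairs quoted in these TRGB abstracts are related by the standard formula mu = m - M = 5 log10(d / 10 pc). A quick check of the quoted values:

```python
import math

def distance_modulus(d_mpc):
    # mu = 5 * log10(d / 10 pc), with d supplied in Mpc
    return 5.0 * math.log10(d_mpc * 1.0e6 / 10.0)

def modulus_to_mpc(mu):
    # inverse relation: d = 10**(mu/5 + 1) pc, converted to Mpc
    return 10.0 ** (mu / 5.0 + 1.0) / 1.0e6

print(round(distance_modulus(8.58), 2))   # ~29.67 mag, the M51 values above
print(round(modulus_to_mpc(29.90), 2))    # ~9.55 Mpc, the M104 values
```

Both quoted pairs are mutually consistent under this relation, which is how the "Mpc" and "mag" figures in the abstracts map onto each other.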
Virtual Bioinformatics Distance Learning Suite
Tolvanen, Martti; Vihinen, Mauno
2004-01-01
Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…
Hedland, D. A.; Degonia, P. K.
1974-01-01
The RAE-1 spacecraft inversion performed October 31, 1972 is described based upon the in-orbit dynamical data in conjunction with results obtained from previously developed computer simulation models. The computer simulations used are predictive of the satellite dynamics, including boom flexing, and are applicable during boom deployment and retraction, inter-phase coast periods, and post-deployment operations. Attitude data, as well as boom tip data, were analyzed in order to obtain a detailed description of the dynamical behavior of the spacecraft during and after the inversion. Runs were made using the computer model and the results were analyzed and compared with the real time data. Close agreement between the actual recorded spacecraft attitude and the computer simulation results was obtained.
Validation of OSIRIS Ozone Inversions
Gudnason, P.; Evans, W. F.; von Savigny, C.; Sioris, C.; Halley, C.; Degenstein, D.; Llewellyn, E. J.; Petelina, S.; Gattinger, R. L.; Odin Team
2002-12-01
The OSIRIS instrument onboard the Odin satellite, launched on February 20, 2001, is a combined optical spectrograph and infrared imager that obtains profile sets of atmospheric spectra from 280 to 800 nm as Odin scans the terrestrial limb. A preliminary analysis of the ozone profiles has been possible using the Chappuis absorption feature. Three algorithms have been developed for ozone profile inversions from these limb spectra sets, which we have dubbed the Gattinger, von Savigny-Flittner and DOAS methods. These are being evaluated against POAM and other satellite data; based on performance, one of them will be selected as the operational algorithm. The infrared imager data have been used by Degenstein with the tomographic inversion procedure to derive ozone concentrations above 60 km. This paper will present some of these initial observations and indicate the potential of OSIRIS to make spectacular advances in the study of terrestrial ozone.
Hierarchical traits distances explain grassland Fabaceae species’ ecological niches distances
Directory of Open Access Journals (Sweden)
Florian eFort
2015-02-01
Full Text Available Fabaceae species play a key role in ecosystem functioning through their capacity to fix atmospheric nitrogen via their symbiosis with Rhizobium bacteria. To increase the benefits of using Fabaceae in agricultural systems, it is necessary to find ways to evaluate species or genotypes having potential adaptations to sub-optimal growth conditions. We evaluated the relevance of phylogenetic distance, absolute trait distance and hierarchical trait distance for comparing the adaptation of 13 grassland Fabaceae species to different habitats, i.e. ecological niches. We measured a wide range of functional traits (root traits, leaf traits and whole-plant traits in these species. Species phylogenetic and ecological distances were assessed from a species-level phylogenetic tree and species' ecological indicator values, respectively. We demonstrated that differences in ecological niches between grassland Fabaceae species were related more to their hierarchical trait distances than to their phylogenetic distances. We showed that grassland Fabaceae functional traits tend to converge among species with the same ecological requirements. Species with acquisitive root strategies (thin roots, shallow root systems are competitive species adapted to non-stressful meadows, while conservative ones (coarse roots, deep root systems are able to tolerate stressful continental climates. In contrast, acquisitive species appeared to be able to tolerate low soil-P availability, while conservative ones need high P availability. Finally, we highlight that traits converge along the ecological gradient, supporting the assumption that species with similar root-trait values are better able to coexist, regardless of their phylogenetic distance.
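The distinction between the two trait distances used above is simply whether the sign of the trait difference is kept: absolute distance drops it, hierarchical distance retains which species is "above" the other. A sketch with hypothetical trait values for three species:

```python
import numpy as np

traits = np.array([2.0, 3.5, 1.0])   # e.g. rooting depth for three species (made up)

# Pairwise distance matrices: entry [i, j] compares species i to species j.
absolute = np.abs(traits[:, None] - traits[None, :])
hierarchical = traits[:, None] - traits[None, :]   # signed, hence asymmetric
```

The hierarchical matrix is antisymmetric (hierarchical[i, j] = -hierarchical[j, i]), so it can encode a competitive ordering, which the absolute matrix cannot.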
Inverse problem in transformation optics
Novitsky, Andrey V.
2011-01-01
The straightforward method of transformation optics implies that one starts from the coordinate transformation and determines the Jacobian matrix, the fields and material parameters of the cloak. However, the coordinate transformation appears as an optional function: it is not necessary to know it. We offer the solution of some sort of inverse problem: starting from the fields in the invisibility cloak we directly derive the permittivity and permeability tensors of the cloaking shell. This ap...
Fourier reconstruction with sparse inversions
Zwartjes, P.M.
2005-01-01
In seismic exploration an image of the subsurface is generated from seismic data through various data processing algorithms. When the data is not acquired on an equidistantly spaced grid, artifacts may result in the final image. Fourier reconstruction is an interpolation technique that can reduce these artifacts by generating uniformly sampled data from such non-uniformly sampled data. The method works by estimating via least-squares inversion the Fourier coefficients that describe the non-un...
The Inverse of Banded Matrices
2013-01-01
The numbers of summed or subtracted terms in computing the inverse of a term of an upper (lower) triangular matrix are the generalized order-k Fibonacci ... Fibonacci numbers are the usual Fibonacci numbers, that is, f_2m = F_m (the mth Fibonacci number). When also k = 3, c1 = c2 = c3 = 1, then the generalized order-3
Towards inverse modeling of intratumor heterogeneity
Brutovsky, Branislav; Horvath, Denis
2015-08-01
Development of resistance limits the efficiency of present anticancer therapies, and preventing it remains a big challenge in cancer research. It is accepted, at the intuitive level, that resistance emerges as a consequence of the heterogeneity of cancer cells at the molecular, genetic and cellular levels. Produced by many sources, tumor heterogeneity is an extremely complex, time-dependent statistical characteristic which may be quantified by measures defined in many different ways, most of them coming from statistical mechanics. In this paper, we apply the Markovian framework to relate population heterogeneity to the statistics of the environment. As, from an evolutionary viewpoint, therapy corresponds to a purposeful modification of the cells' fitness landscape, we assume that understanding the general relationship between the spatiotemporal statistics of a tumor microenvironment and intratumor heterogeneity will allow us to conceive the therapy as an inverse problem and to solve it by optimization techniques. To account for the inherent stochasticity of biological processes at the cellular scale, a generalized distance-based concept was applied to express distances between probabilistically described cell states and environmental conditions, respectively.
Inverse-magnetron mass spectrometer
International Nuclear Information System (INIS)
Pakulin, V.N.
1979-01-01
Considered is the operation of a typical magnetron mass spectrometer with an internal ion source and that of an inverse magnetron mass spectrometer with an external ion source. It is found that for discrimination of the same mass, the inverse design allows one to employ either r2/r1 times weaker magnetic fields at equal accelerating source-collector voltages, or r2/r1 times higher accelerating voltages at equal magnetic fields, as compared to the typical design (r1 and r2 being the radii of the internal and external electrodes of the analyser, respectively). The design of an inverse-magnetron mass spectrometer is described. The mass analyzer is formed by a cylindrical electrode of 3 mm diameter and a coaxial tubular cylinder of 55 mm diameter. External to the analyzer is an ionizing chamber at a pressure of up to 5x10^-6 torr. The magnetic field along the chamber axis, produced by a solenoid, was 300 Oe. At an accelerating voltage of 100 V and mass 28, the spectrometer has a resolution of 30 at half-peak height.
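As a numerical illustration of the r2/r1 scaling quoted above, assuming r2/r1 is simply the ratio of the two electrode radii given in the abstract, the inverse geometry would permit a magnetic field roughly eighteen times weaker at equal accelerating voltage:

```python
# Illustrative check of the r2/r1 scaling for the inverse-magnetron design.
# Values from the abstract: inner electrode 3 mm diameter, outer cylinder 55 mm diameter.
r1 = 3.0 / 2   # inner electrode radius, mm
r2 = 55.0 / 2  # outer cylinder radius, mm

ratio = r2 / r1
# At equal accelerating voltage the magnetic field may be ~ratio times weaker,
# or, at equal field, the accelerating voltage ~ratio times higher.
print(f"r2/r1 = {ratio:.1f}")  # r2/r1 = 18.3
```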
Approximating Tree Edit Distance through String Edit Distance
Akutsu, Tatsuya; Fukagawa, Daiji; Takasu, Atsuhiro
2010-01-01
We present an algorithm to approximate the edit distance between two ordered and rooted trees of bounded degree. In this algorithm, each input tree is transformed into a string by computing the Euler string, where the labels of some edges in the input trees are modified so that the structures of small subtrees are reflected in the labels. We show that the edit distance between trees is at least 1/6 and at most O(n^(3/4)) of the edit distance between the transformed strings, where n is the maximum size of t...
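The Euler-string idea can be sketched as follows. This is a simplified illustration (plain Levenshtein distance on unmodified Euler strings), not the authors' full algorithm with modified edge labels:

```python
# Sketch: approximate tree edit distance via string edit distance on Euler strings.
# A rooted ordered tree is given as (label, [children]); the Euler string records
# each label on the way down and an uppercase marker on the way up (simplification).

def euler_string(tree):
    label, children = tree
    s = [label]                      # entering the node
    for child in children:
        s.extend(euler_string(child))
    s.append(label.upper())          # leaving the node
    return s

def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance over two sequences.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[m][n]

t1 = ("a", [("b", []), ("c", [])])
t2 = ("a", [("b", []), ("d", [])])
dist = edit_distance(euler_string(t1), euler_string(t2))
print(dist)  # 2: the single relabel c -> d appears twice in the Euler strings
```

This illustrates why the string distance over-counts tree edits by a bounded factor: each node label occurs twice in its Euler string.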
Inverse problems for difference equations with quadratic ...
African Journals Online (AJOL)
Inverse problems for difference equations with quadratic Eigenparameter dependent boundary conditions. Sonja Currie, Anne D. Love. Abstract. This paper inductively investigates an inverse problem for difference boundary value problems with boundary conditions that depend quadratically on the eigenparameter.
A robust probabilistic approach for variational inversion in shallow water acoustic tomography
International Nuclear Information System (INIS)
Berrada, M; Badran, F; Crépon, M; Thiria, S; Hermand, J-P
2009-01-01
This paper presents a variational methodology for inverting shallow water acoustic tomography (SWAT) measurements. The aim is to determine the vertical profile of the speed of sound c(z), knowing the acoustic pressures generated by a frequency source and collected by a sparse vertical hydrophone array (VRA). A variational approach that minimizes a cost function measuring the distance between observations and their modeled equivalents is used. A regularization term in the form of a quadratic restoring term to a background is also added. To avoid inverting the variance–covariance matrix associated with the above-weighted quadratic background, this work proposes to model the sound speed vector using probabilistic principal component analysis (PPCA). The PPCA introduces an optimum reduced number of non-correlated latent variables η, which determine a new control vector and a new regularization term, expressed as η^T η. The PPCA represents a rigorous formalism for the use of a priori information and allows an efficient implementation of the variational inverse method.
COLLAGE-BASED INVERSE PROBLEMS FOR IFSM WITH ENTROPY MAXIMIZATION AND SPARSITY CONSTRAINTS
Directory of Open Access Journals (Sweden)
Herb Kunze
2013-11-01
Full Text Available We consider the inverse problem associated with IFSM: given a target function f, find an IFSM such that its invariant fixed point is sufficiently close to f in the Lp distance. In this paper, we extend the collage-based method developed by Forte and Vrscay (1995) along two different directions. We first search for a set of mappings that not only minimizes the collage error but also maximizes the entropy of the dynamical system. We then include an extra term in the minimization process which takes into account the sparsity of the set of mappings. In this new formulation, the minimization of collage error is treated as a multi-criteria problem: we consider three different and conflicting criteria, i.e., collage error, entropy and sparsity. To solve this multi-criteria program we proceed by scalarization and reduce the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented. Numerical studies indicate that a maximum entropy principle exists for this approximation problem, i.e., that the suboptimal solutions produced by collage coding can be improved at least slightly by adding a maximum entropy criterion.
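The weighted-sum scalarization described above can be sketched generically. The three criteria below are toy stand-ins, not the paper's actual collage-error, entropy and sparsity functionals:

```python
# Sketch: reducing a three-criteria minimization to a single objective via
# scalarization with trade-off weights, as in the collage/entropy/sparsity
# formulation. The criteria here are illustrative toy functions only.
import numpy as np

def collage_error(x):   # toy stand-in for the collage distance term
    return np.sum((x - 0.5) ** 2)

def neg_entropy(x):     # entropy is maximized, so its negative is minimized
    p = np.abs(x) / (np.sum(np.abs(x)) + 1e-12)
    return np.sum(p * np.log(p + 1e-12))

def sparsity(x):        # l1 penalty promoting a sparse set of mappings
    return np.sum(np.abs(x))

def scalarized(x, w=(1.0, 0.1, 0.01)):
    # One scalar objective from three conflicting criteria.
    return w[0] * collage_error(x) + w[1] * neg_entropy(x) + w[2] * sparsity(x)

# Crude random search stands in for the paper's optimizer.
rng = np.random.default_rng(0)
candidates = rng.uniform(0, 1, size=(500, 4))
best = min(candidates, key=scalarized)
print(scalarized(best))
```

Changing the trade-off weights w traces out different points of the Pareto front, which is exactly why the choice of weights matters in such scalarized formulations.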
Fast computation of the inverse CMH model
Patel, Umesh D.; Della Torre, Edward
2001-12-01
A fast computational method based on a differential equation approach for the inverse Della Torre–Oti–Kádár (DOK) model has been extended to the inverse Complete Moving Hysteresis (CMH) model. A cobweb technique for calculating the inverse CMH model is also presented. The two techniques differ in flexibility, accuracy, and computation time. Simulation results of the inverse computation for both methods are presented.
Indirect Real Estate Investment in Spain
Joan MONTLLOR-SERRATS; Anna M. PANOSA-GUBAU
2013-01-01
This article reviews the vehicles for indirect real estate investment in Spain, from the creation in 1992 of the real estate investment funds and companies (FII and SII) to the creation of the first listed real estate investment company (SOCIMI) in 2013. It analyses their characteristics, as well as the reasons why these investment vehicles have attracted little demand so far in comparison with REITs (Real Estate Investment Trusts)...
Inversion of potential-field data for layers with uneven thickness
Caratori Tontini, F.; Cocchi, L.; Carmisciano, C.; Stefanelli, P.
2008-01-01
Inversion of large-scale potential-field anomalies, aimed at determining density or magnetization, is usually made in the Fourier domain. The commonly adopted geometry is based on a layer of constant thickness, characterized by a bottom surface at a fixed distance from the top surface...
Inversion of full acoustic wavefield in local helioseismology: A study with synthetic data
Cobden, L.J.|info:eu-repo/dai/nl/323068758; Tong, C.H.; Warner, M.R.
We present the first results from the inversion of the full acoustic wavefield in the helioseismic context. In contrast to time-distance helioseismology, which involves analyzing the travel times of seismic waves propagating into the solar interior, wavefield tomography models both the travel times and
Tracking frequency laser distance gauge
International Nuclear Information System (INIS)
Phillips, J.D.; Reasenberg, R.D.
2005-01-01
Advanced astronomical missions with greatly enhanced resolution and physics missions of unprecedented accuracy will require laser distance gauges of substantially improved performance. We describe a laser gauge, based on Pound-Drever-Hall locking, in which the optical frequency is adjusted to maintain an interferometer's null condition. This technique has been demonstrated with pm performance. Automatic fringe hopping allows it to track arbitrary distance changes. The instrument is intrinsically free of the nm-scale cyclic bias present in traditional (heterodyne) high-precision laser gauges. The output is a radio frequency, readily measured to sufficient accuracy. The laser gauge has operated in a resonant cavity, which improves precision, can suppress the effects of misalignments, and makes possible precise automatic alignment. The measurement of absolute distance requires little or no additional hardware, and has also been demonstrated. The proof-of-concept version, based on a stabilized HeNe laser and operating on a 0.5 m path, has achieved 10 pm precision with 0.1 s integration time, and 0.1 mm absolute distance accuracy. This version has also followed substantial distance changes as fast as 16 mm/s. We show that, if the precision in optical frequency is a fixed fraction of the linewidth, both incremental and absolute distance precision are independent of the distance measured. We discuss systematic error sources, and present plans for a new version of the gauge based on semiconductor lasers and fiber-coupled components
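For a gauge that holds an interferometer at a resonance (null) condition, the fractional frequency change mirrors the fractional length change, df/f = -dL/L. A back-of-envelope check of the proof-of-concept numbers quoted above, assuming a 633 nm HeNe line (the wavelength is an assumption, not stated in the abstract):

```python
# Back-of-envelope: frequency shift corresponding to pm-level motion on a 0.5 m path.
# Assumes a 633 nm HeNe line; |df|/f = |dL|/L for a resonance-locked interferometer.
c = 299_792_458.0          # speed of light, m/s
wavelength = 633e-9        # HeNe wavelength, m (assumed)
f_opt = c / wavelength     # optical frequency, ~474 THz

L = 0.5                    # path length, m
dL = 10e-12                # 10 pm displacement precision
df = f_opt * dL / L        # magnitude of the tracked frequency shift, Hz
print(f"{df / 1e3:.1f} kHz")
```

The shift lands in the kHz range, consistent with the abstract's remark that the output is a radio frequency, readily measured to sufficient accuracy.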
Language distance and tree reconstruction
International Nuclear Information System (INIS)
Petroni, Filippo; Serva, Maurizio
2008-01-01
Languages evolve over time according to a process in which reproduction, mutation and extinction are all possible. This is very similar to haploid evolution for asexual organisms and for the mitochondrial DNA of complex ones. Exploiting this similarity, it is possible, in principle, to verify hypotheses concerning the relationship among languages and to reconstruct their family tree. The key point is the definition of the distances among pairs of languages in analogy with the genetic distances among pairs of organisms. Distances can be evaluated by comparing grammar and/or vocabulary, but while it is difficult, if not impossible, to quantify grammar distance, it is possible to measure a distance from vocabulary differences. The method used by glottochronology computes distances from the percentage of shared 'cognates', which are words with a common historical origin. The weak point of this method is that subjective judgment plays a significant role. Here we define the distance of two languages by considering a renormalized edit distance among words with the same meaning and averaging over the two hundred words contained in a Swadesh list. In our approach the vocabulary of a language is the analogue of DNA for organisms. The advantage is that we avoid subjectivity and, furthermore, reproducibility of results is guaranteed. We apply our method to the Indo-European and the Austronesian groups, considering, in both cases, fifty different languages. The two trees obtained are, in many respects, similar to those found by glottochronologists, with some important differences as regards the positions of a few languages. In order to support these different results we separately analyze the structure of the distances of these languages with respect to all the others
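The renormalized-edit-distance definition above can be sketched directly. The two word lists below are tiny hypothetical stand-ins for 200-word Swadesh lists:

```python
# Sketch: language distance as the average, over aligned meanings, of the edit
# distance between word pairs normalized by the length of the longer word.

def levenshtein(a, b):
    # Standard rolling-array edit distance.
    m, n = len(a), len(b)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            cur = d[j]
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[j] = min(d[j] + 1, d[j - 1] + 1, prev + cost)
            prev = cur
    return d[n]

def language_distance(list_a, list_b):
    # Average normalized edit distance over word pairs with the same meaning.
    assert len(list_a) == len(list_b)
    total = sum(levenshtein(wa, wb) / max(len(wa), len(wb))
                for wa, wb in zip(list_a, list_b))
    return total / len(list_a)

italian = ["acqua", "cane", "notte"]   # toy stand-in for a Swadesh list
spanish = ["agua", "can", "noche"]
print(round(language_distance(italian, spanish), 3))  # 0.35
```

Normalizing each pair by the longer word keeps every contribution in [0, 1], so the averaged distance is comparable across language pairs regardless of word length.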
Gawlitza, Matthias; Friedrich, Benjamin; Hobohm, Carsten; Schaudinn, Alexander; Schob, Stefan; Quäschling, Ulf; Hoffmann, Karl-Titus; Lobsien, Donald
2016-02-01
In patients with occlusion of the middle cerebral artery (MCA) treated by intravenous thrombolysis (IVT), the distance to thrombus (DT) has been proposed as a predictor of outcome. The purpose of the present study was to investigate how DT relates to dynamic susceptibility contrast perfusion metrics. A retrospective analysis was undertaken of patients who were diagnosed with acute MCA occlusion by magnetic resonance imaging and treated with IVT. Volumes of time-to-maximum (Tmax) perfusion deficits and diffusion-weighted imaging (DWI) lesions, diffusion-perfusion mismatch volumes, and the presence of target mismatch were determined. Correlations between the above stroke measures and DT were then calculated. Fifty-five patients were included. DT showed significant inverse correlations with Tmax greater than 4, 6, 8, and 10 seconds, respectively, and mismatch volumes. Using the DT group median (14 mm) as a separator, significant intergroup differences were observed for Tmax greater than 4, 6, and 8 seconds, respectively, and for mismatch volumes. Grouping DT into quartiles showed significant intergroup differences regarding mismatch volumes and Tmax values greater than 4 and 6 seconds. Binary logistic regression identified DT (odds ratio [OR] = .89; 95% confidence interval [CI], .81-.99) and DWI lesion volumes (OR = .92; 95% CI, .86-.97) as independent predictors of target mismatch. A low DT predicted target mismatch with an area under the curve of .69. DT correlates inversely with Tmax perfusion deficits and mismatch volumes and acts as an independent predictor of target mismatch. Copyright © 2015 National Stroke Association. Published by Elsevier Inc. All rights reserved.
Issues with time-distance inversions for supergranular flows (Research Note)
Czech Academy of Sciences Publication Activity Database
Švanda, Michal
2015-01-01
Roč. 575, March (2015), A122/1-A122/8 ISSN 0004-6361 R&D Projects: GA ČR GPP209/12/P568 Institutional support: RVO:67985815 Keywords : Sun * helioseismology * convection Subject RIV: AC - Archeology, Anthropology, Ethnology Impact factor: 4.378, year: 2014
Inversion of GPS meteorology data
Directory of Open Access Journals (Sweden)
K. Hocke
Full Text Available The GPS meteorology (GPS/MET) experiment, led by the University Corporation for Atmospheric Research (UCAR), consists of a GPS receiver aboard a low Earth orbit (LEO) satellite which was launched on 3 April 1995. During a radio occultation, the LEO satellite rises or sets relative to one of the 24 GPS satellites at the Earth's horizon. Thereby the atmospheric layers are successively sounded by radio waves which propagate from the GPS satellite to the LEO satellite. From the observed phase path increases, which are due to refraction of the radio waves by the ionosphere and the neutral atmosphere, the atmospheric parameters refractivity, density, pressure and temperature are calculated with high accuracy and resolution (0.5–1.5 km). In the present study, practical aspects of the GPS/MET data analysis are discussed. The retrieval is based on the Abelian integral inversion of the atmospheric bending angle profile into the refractive index profile. The problem of the upper boundary condition of the Abelian integral is described by examples. The statistical optimization approach, which is applied to the data above 40 km, and the use of topside bending angle profiles from model atmospheres stabilize the inversion. The retrieved temperature profiles are compared with corresponding profiles which have already been calculated by scientists of UCAR and the Jet Propulsion Laboratory (JPL), also using Abelian integral inversion. The comparison shows that in some cases large differences occur (5 K and more). This is probably due to different treatment of the upper boundary condition, data outliers and noise. Several temperature profiles with wavelike structures at tropospheric and stratospheric heights are shown. While the periodic structures at upper stratospheric heights could be caused by residual errors of the ionospheric correction method, the periodic temperature fluctuations at heights below 30 km are most likely caused by atmospheric waves (vertically
A Kinematic Source Inversion Scheme With New Parametrisation
Burjanek, J.
2007-12-01
We present a kinematic finite-extent source inversion scheme, introducing an innovative parametrization of the problem. In particular, we assume a spatial slip distribution composed of overlapping 2D Gaussian functions on a regular grid. The temporal evolution of slip is described with a prescribed slip velocity function, with free rupture velocity and rise time parameters. Fixing the values of rupture velocity and rise time for the whole fault makes the problem linear in static slip. The inversion algorithm works as follows. First we fix rise time and rupture velocity and calculate seismograms for each Gaussian function separately. Then a linear inversion is performed to get the optimal weights (amplitudes) of these Gaussian functions. This procedure is done for a number of rupture velocity and rise-time distributions, whose optimal values are obtained by a neighborhood algorithm. The L2 norm is used as an objective function. The linear inversion for static slip is done with a positivity constraint, so quadratic programming was applied to solve this problem. Our method benefits from the simplicity of linear problems and the favourable spectral properties of the Gaussian function. The latter obviate the need for artificial smoothing operators. Thus a direct insight into the spectral properties of the static slip distribution of real earthquakes is obtained. The method has been applied to the 2000 M6.6 Western Tottori, Japan, earthquake.
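The linear step (optimal non-negative amplitudes of the Gaussian slip patches for fixed rupture velocity and rise time) can be sketched with a generic non-negative least-squares solve. The forward matrix below is random synthetic data standing in for per-Gaussian seismograms, not real Green's functions:

```python
# Sketch: linear slip inversion with a positivity constraint. Columns of G play
# the role of seismograms computed for each Gaussian basis function; w are the
# non-negative amplitudes found by the NNLS (quadratic programming) step.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(42)
n_samples, n_gaussians = 60, 5
G = rng.normal(size=(n_samples, n_gaussians))   # synthetic "seismogram" matrix

w_true = np.array([0.0, 1.2, 0.0, 0.7, 0.3])    # non-negative slip weights
d = G @ w_true                                  # noise-free synthetic data

w_est, residual = nnls(G, d)                    # non-negative least squares
print(np.allclose(w_est, w_true, atol=1e-8))    # True: exact recovery, no noise
```

With noisy data the recovery would only be approximate, which is where the favourable spectral properties of the Gaussian basis help regularize the solution.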
Cleator, Sean; Harrison, Sandy P.; Roulstone, Ian; Nichols, Nancy K.; Prentice, Iain Colin
2017-04-01
) and LGM climate derived as an ensemble average of the CMIP5-PMIP3 LGM simulations (Harrison et al, 2015). This assimilation technique will ultimately be extended to include information from adjacent LGM sites through a distance-weighting function and constrained smoothing techniques to produce a spatially consistent reconstruction of LGM climate fields. The use of model inversion together with a 3D-Var assimilation scheme provides a well-formulated way of maximising the use of limited palaeoclimate data.
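A distance-weighting step of the kind mentioned here is often implemented as inverse distance weighting (IDW); that specific choice, and the site coordinates below, are illustrative assumptions, not details from this record:

```python
# Minimal inverse-distance-weighting (IDW) sketch: estimate a value at a target
# point as a weighted mean of nearby site values, with weights ~ 1/distance**p.
import math

def idw(sites, values, target, power=2.0):
    num, den = 0.0, 0.0
    for (x, y), v in zip(sites, values):
        d = math.hypot(x - target[0], y - target[1])
        if d == 0:
            return v          # exactly at a site: return its value
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

sites = [(0, 0), (1, 0), (0, 1)]     # hypothetical site coordinates
values = [10.0, 14.0, 12.0]          # hypothetical site values
print(round(idw(sites, values, (0.1, 0.1)), 2))
```

Because weights fall off with distance, the estimate is pulled toward the nearest site, which is the behavior a spatially consistent reconstruction from sparse sites relies on.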
Designing Instruction for Distance Learning
National Research Council Canada - National Science Library
Main, Robert
1998-01-01
... pressure on the training and education fields to meet the demand. Emerging communication technologies have enabled alternative methods for delivering training via distance learning that are being rapidly adopted by academia and industry...
Interactive Multimedia Distance Learning (IMDL)
National Research Council Canada - National Science Library
Crhistinaz, Daniel
1999-01-01
.... One avenue of investigation has been to evaluate emerging computer and network technologies to determine if training can be delivered at a distance more efficiently than traditional classroom training...
ECONOMICS OF DISTANCE EDUCATION RECONSIDERED
Directory of Open Access Journals (Sweden)
Wolfram LAASER
2008-07-01
Full Text Available According to Gartner, a certain hype around e-learning was followed by a downturn, but e-learning will continue to be an important factor in learning scenarios. However, the economic viability of e-learning projects will be questioned with more scrutiny than in earlier periods. Therefore it seems to be a good opportunity to see what can be learned from past experience in costing distance learning projects and what aspects are added by current attempts to measure economic efficiency. After reviewing early research on costing distance learning, some more recent approaches will be discussed, such as e-learning ROI calculators and the concept of total cost of ownership. Furthermore, some microeconomic effects relating to the localization of distance learning courses are outlined. Finally, several unsolved issues in costing distance education are summarized.
Academy Distance Learning Tools (IRIS) -
Department of Transportation — IRIS is a suite of front-end web applications utilizing a centralized back-end Oracle database. The system fully supports the FAA Academy's Distance Learning Program...
Distance labeling schemes for trees
DEFF Research Database (Denmark)
Alstrup, Stephen; Gørtz, Inge Li; Bistrup Halvorsen, Esben
2016-01-01
We consider distance labeling schemes for trees: given a tree with n nodes, label the nodes with binary strings such that, given the labels of any two nodes, one can determine, by looking only at the labels, the distance in the tree between the two nodes. A lower bound by Gavoille et al. [Gavoille...... variants such as, for example, small distances in trees [Alstrup et al., SODA, 2003]. We improve the known upper and lower bounds of exact distance labeling by showing that 1/4 log2(n) bits are needed and that 1/2 log2(n) bits are sufficient. We also give (1 + ε)-stretch labeling schemes using Theta...
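The labeling interface can be illustrated with a naive (deliberately non-optimal) scheme: label each node with its root-to-node path of node ids; the distance is then recovered from two labels alone via their longest common prefix. This uses far more than the log-factor bits discussed above, but shows what a distance labeling scheme must provide:

```python
# Naive distance labeling for trees: a node's label is its root-to-node path.
# dist(u, v) = len(path_u) + len(path_v) - 2 * (length of common prefix),
# computable from the two labels alone, as the scheme requires.

def make_labels(parent):
    # parent: dict mapping child -> parent; the root maps to None.
    labels = {}
    def path(v):
        if v not in labels:
            p = parent[v]
            labels[v] = (path(p) + [v]) if p is not None else [v]
        return labels[v]
    for v in parent:
        path(v)
    return labels

def dist(label_u, label_v):
    # Distance from labels only: common prefix length gives the LCA depth.
    common = 0
    for a, b in zip(label_u, label_v):
        if a != b:
            break
        common += 1
    return len(label_u) + len(label_v) - 2 * common

#       1
#      / \
#     2   3
#     |
#     4
parent = {1: None, 2: 1, 3: 1, 4: 2}
labels = make_labels(parent)
print(dist(labels[4], labels[3]))  # 3: the path 4-2-1-3 has three edges
```

The research question in the abstract is how short such labels can be made while keeping `dist` computable from labels alone; this sketch trades label size for simplicity.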
Distance Education in Technological Age
Directory of Open Access Journals (Sweden)
R .C. SHARMA
2005-04-01
Full Text Available Distance Education in Technological Age. Romesh Verma (Editor), New Delhi: Anmol Publications, 2005, ISBN 81-261-2210-2, pp. 419. Reviewed by R C SHARMA, Regional Director, Indira Gandhi National Open University, INDIA. The advancements in information and communication technologies have brought significant changes in the way open and distance learning is provided to learners. The impact of such changes is quite visible in both developed and developing countries. Switching over to the online mode, joining hands with private initiatives and making a presence in foreign waters are some of the hallmarks of open and distance education (ODE) institutions in developing countries. The compilation of twenty-six essays on themes applicable to ODE has resulted in the book Distance Education in Technological Age. These essays follow a progressive style of narration, starting from the conceptual framework of distance education and how distance education emerged on the global scene and in India, then going on to discuss the emergence of online distance education and research aspects in ODE. The initial four chapters provide a detailed account of the historical development and growth of distance education in India and the State Open University and National Open University models in India. Student support services are pivotal to any distance education, and much of its success depends on how well the support services are provided. These are discussed from national and international perspectives. The issues of collaborative learning, learning on demand, lifelong learning, the learning-unlearning and re-learning model, and strategic alliances have also been given due space by the authors. An assortment of technologies, such as communication technology, domestic technology, information technology, mass media and entertainment technology, media technology and educational technology, gives an idea of how these technologies are being adopted in the open universities. The study
Designing Instruction for Distance Learning
National Research Council Canada - National Science Library
Main, Robert
1998-01-01
.... While distance learning has been demonstrated to be an effective and efficient tool for increased access it also requires greater emphasis on instructional design and instructor training to obtain satisfactory results...
Inverse problem in transformation optics
DEFF Research Database (Denmark)
Novitsky, Andrey
2011-01-01
The straightforward method of transformation optics implies that one starts from the coordinate transformation and determines the Jacobian matrix, the fields and material parameters of the cloak. However, the coordinate transformation appears as an optional function: it is not necessary to know it. We offer the solution of some sort of inverse problem: starting from the fields in the invisibility cloak we directly derive the permittivity and permeability tensors of the cloaking shell. This approach can be useful for finding material parameters for the specified electromagnetic fields in the cloaking shell without knowing the coordinate transformation.
Iterative optimization in inverse problems
Byrne, Charles L
2014-01-01
Iterative Optimization in Inverse Problems brings together a number of important iterative algorithms for medical imaging, optimization, and statistical estimation. It incorporates recent work that has not appeared in other books and draws on the author's considerable research in the field, including his recently developed class of SUMMA algorithms. Related to sequential unconstrained minimization methods, the SUMMA class includes a wide range of iterative algorithms well known to researchers in various areas, such as statistics and image processing. Organizing the topics from general to more
Effect of objective function on multi-objective inverse planning of radiation therapy
International Nuclear Information System (INIS)
Li Guoli; Wu Yican; Song Gang; Wang Shifang
2006-01-01
There are two kinds of objective functions in radiotherapy inverse planning: dose distribution-based and Dose-Volume Histogram (DVH)-based functions. Treatment planning today is still a trial-and-error process because the multi-objective problem is solved by transforming it into a single-objective problem using a specific set of weights for each objective. This work investigates the problem of objective function setting based on Pareto multi-optimization theory, and compares the effects of those two kinds of objective functions on multi-objective inverse planning, including calculation time, convergence speed, etc. The basis of objective function setting in inverse planning is discussed. (authors)
The effects of changing exercise levels on weight and age-relatedweight gain
Energy Technology Data Exchange (ETDEWEB)
Williams, Paul T.; Wood, Peter D.
2004-06-01
To determine prospectively whether physical activity can prevent age-related weight gain and whether changing levels of activity affect body weight. DESIGN/SUBJECTS: The study consisted of 8,080 male and 4,871 female runners who completed two questionnaires an average (±standard deviation (s.d.)) of 3.20±2.30 and 2.59±2.17 years apart, respectively, as part of the National Runners' Health Study. RESULTS: Changes in running distance were inversely related to changes in men's and women's body mass indices (BMIs) (slope±standard error (s.e.): -0.015±0.001 and -0.009±0.001 kg/m^2 per Δkm/week, respectively), waist circumferences (-0.030±0.002 and -0.022±0.005 cm per Δkm/week, respectively) and percent changes in body weight (-0.062±0.003 and -0.041±0.003 percent per Δkm/week, respectively, all P<0.0001). The regression slopes were significantly steeper (more negative) in men than women for ΔBMI and Δpercent body weight (P<0.0001). A longer history of running diminished the impact of changing running distance on men's weights. When adjusted for Δkm/week, years of aging in men and years of aging in women were associated with increases of 0.066±0.005 and 0.056±0.006 kg/m^2 in BMI, respectively, increases of 0.294±0.019 and 0.279±0.028 percent in Δpercent body weight, respectively, and increases of 0.203±0.016 and 0.271±0.033 cm in waist circumference, respectively (all P<0.0001). These regression slopes suggest that vigorous exercise may need to increase 4.4 km/week annually in men and 6.2 km/week annually in women to compensate for the expected gain in weight associated with aging (2.7 and 3.9 km/week annually when corrected for the attenuation due to measurement error). CONCLUSIONS: Age-related weight gain occurs even among the most active individuals when exercise is constant. Theoretically, vigorous exercise must increase significantly with age to compensate for the expected gain in weight associated with aging.
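The quoted compensation figures follow directly from the regression slopes; a quick arithmetic check using the BMI terms only:

```python
# Check: annual increase in running distance needed to offset age-related BMI
# gain, from the regression slopes quoted in the abstract (BMI terms only).
aging_gain_men = 0.066      # kg/m^2 per year of aging (men)
slope_men = 0.015           # kg/m^2 reduction per extra km/week (men)

aging_gain_women = 0.056    # kg/m^2 per year of aging (women)
slope_women = 0.009         # kg/m^2 reduction per extra km/week (women)

print(round(aging_gain_men / slope_men, 1))      # 4.4 km/week per year
print(round(aging_gain_women / slope_women, 1))  # 6.2 km/week per year
```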
Waveform inversion of lateral velocity variation from wavefield source location perturbation
Choi, Yun Seok
2013-09-22
It is a challenge in waveform inversion to define the deep part of the velocity model as precisely as the shallow part. The lateral velocity variation, that is, the derivative of velocity with respect to horizontal distance, combined with well-log data can be used to update the deep part of the velocity model more precisely. We develop a waveform inversion algorithm to obtain the lateral velocity variation by inverting the wavefield variation associated with a lateral shot-location perturbation. The gradient of the new waveform inversion algorithm is obtained by the adjoint-state method. Our inversion algorithm focuses on resolving the lateral changes of the velocity model with respect to a fixed reference vertical velocity profile given by a well log. We apply the method to a simple dome model to highlight the method's potential.
Prescription weight loss drugs; Diabetes - weight loss drugs; Obesity - weight loss drugs; Overweight - weight loss drugs ... Several weight-loss medicines are available. About 5 to 10 pounds (2 to 4.5 kilograms) can be lost by ...
LHC Report: 2 inverse femtobarns!
Mike Lamont for the LHC Team
2011-01-01
The LHC is enjoying a confluence of twos. This morning (Friday 5 August) we passed 2 inverse femtobarns delivered in 2011; the peak luminosity is now just over 2 x 10^33 cm^-2 s^-1; and recently fill 2000 was in for nearly 22 hours and delivered around 90 inverse picobarns, almost twice 2010's total. In order to increase the luminosity we can increase the number of bunches, increase the number of particles per bunch, or decrease the transverse beam size at the interaction point. The beam size can be tackled in two ways: either reduce the size of the injected bunches or squeeze harder with the quadrupole magnets situated on either side of the experiments. Having increased the number of bunches to 1380, the maximum possible with a 50 ns bunch spacing, a one-day meeting in Crozet decided to explore the other possibilities. The size of the beams coming from the injectors has been reduced to the minimum possible. This has brought an increase in the peak luminosity of about 50% and the 2 x 10^33 cm...
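The three luminosity levers listed above (bunch count, bunch intensity, transverse beam size) enter through the standard Gaussian-beam collider luminosity formula L = f_rev * n_b * N^2 / (4 pi sigma_x sigma_y). A minimal sketch, using illustrative 2011-era parameter values that are assumptions rather than figures taken from this report:

```python
import math

def peak_luminosity(n_bunches, protons_per_bunch, f_rev, sigma_x, sigma_y):
    """Round-beam Gaussian luminosity estimate, L = f_rev * n_b * N^2 / (4 pi sx sy).

    Ignores crossing-angle and hourglass reduction factors, so it slightly
    overestimates the delivered luminosity.
    """
    return f_rev * n_bunches * protons_per_bunch**2 / (4 * math.pi * sigma_x * sigma_y)

# Illustrative 2011-era values (assumed, not quoted in the report):
# 1380 bunches, ~1.45e11 protons per bunch, 11245 Hz revolution frequency,
# ~28 micrometre transverse beam size at the interaction point.
L = peak_luminosity(1380, 1.45e11, 11245.0, 28e-6, 28e-6)  # m^-2 s^-1
L_cgs = L * 1e-4  # convert to cm^-2 s^-1
print(f"{L_cgs:.2e} cm^-2 s^-1")
```

With these assumed numbers the estimate lands in the low 10^33 cm^-2 s^-1 range, consistent with the peak luminosity quoted above.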
Inverse problems in systems biology
International Nuclear Information System (INIS)
Engl, Heinz W; Lu, James; Müller, Stefan; Flamm, Christoph; Schuster, Peter; Kügler, Philipp
2009-01-01
Systems biology is a new discipline built upon the premise that an understanding of how cells and organisms carry out their functions cannot be gained by looking at cellular components in isolation. Instead, consideration of the interplay between the parts of systems is indispensable for analyzing, modeling, and predicting systems' behavior. Studying biological processes under this premise, systems biology combines experimental techniques and computational methods in order to construct predictive models. Both in building and utilizing models of biological systems, inverse problems arise at several occasions, for example, (i) when experimental time series and steady state data are used to construct biochemical reaction networks, (ii) when model parameters are identified that capture underlying mechanisms or (iii) when desired qualitative behavior such as bistability or limit cycle oscillations is engineered by proper choices of parameter combinations. In this paper we review principles of the modeling process in systems biology and illustrate the ill-posedness and regularization of parameter identification problems in that context. Furthermore, we discuss the methodology of qualitative inverse problems and demonstrate how sparsity enforcing regularization allows the determination of key reaction mechanisms underlying the qualitative behavior. (topical review)
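As a minimal illustration of the regularized parameter identification discussed above, the sketch below fits a single rate constant of a toy first-order decay model to noisy time-series data, with a Tikhonov penalty toward a prior value; the model, data, and penalty weight are hypothetical stand-ins, not examples from the review:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy model: species A decays as x(t) = x0 * exp(-k t); identify the rate k
# from noisy time-series data (a minimal stand-in for the biochemical
# network identification problems discussed in the review).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 20)
k_true, x0 = 0.8, 1.0
data = x0 * np.exp(-k_true * t) + 0.02 * rng.standard_normal(t.size)

def objective(k, lam=1e-3, k_prior=1.0):
    """Least-squares data misfit plus a Tikhonov penalty toward a prior rate,
    the standard regularization for an otherwise ill-posed identification."""
    residual = x0 * np.exp(-k * t) - data
    return np.sum(residual**2) + lam * (k - k_prior)**2

res = minimize_scalar(objective, bounds=(0.01, 5.0), method="bounded")
k_est = res.x
```

The penalty term stabilizes the estimate when the data alone constrain the parameter poorly; here the data are informative, so the recovered rate stays close to the true one.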
Inverse problems and inverse scattering of plane waves
Ghosh Roy, Dilip N
2001-01-01
The purpose of this text is to present the theory and mathematics of inverse scattering, in a simple way, to the many researchers and professionals who use it in their everyday research. While applications range across a broad spectrum of disciplines, examples in this text will focus primarily, but not exclusively, on acoustics. The text will be especially valuable for those applied workers who would like to delve more deeply into the fundamentally mathematical character of the subject matter. Practitioners in this field comprise applied physicists, engineers, and technologists, whereas the theory is almost entirely in the domain of abstract mathematics. This gulf between the two, if bridged, can only lead to improvement in the level of scholarship in this highly important discipline. This is the book's primary focus.
Inversion method applied to the rotation curves of galaxies
Márquez-Caicedo, L. A.; Lora-Clavijo, F. D.; Sanabria-Gómez, J. D.
2017-07-01
We used simulated annealing, Monte Carlo and genetic algorithm methods for matching both numerical data of density and velocity profiles in some low surface brightness galaxies with the theoretical Boehmer-Harko, Navarro-Frenk-White and Pseudo Isothermal profiles for galaxies with dark matter halos. We found that the Navarro-Frenk-White model does not fit at all, in contrast with the other two models, which fit very well. Inversion methods have been widely used in various branches of science including astrophysics (Charbonneau 1995, ApJS, 101, 309). In this work we have used three different parametric inversion methods (Monte Carlo, genetic algorithm and simulated annealing) in order to determine the best fit of the observed data of the density and velocity profiles of a set of low surface brightness galaxies (De Block et al. 2001, ApJ, 122, 2396) to three models of galaxies containing dark matter. The parameters adjusted by the inversion methods were the central density and a characteristic distance in the Boehmer-Harko (BH) (Boehmer & Harko 2007, JCAP, 6, 25), Navarro-Frenk-White (NFW) (Navarro et al. 2007, ApJ, 490, 493) and Pseudo Isothermal (PI) profiles (Robles & Matos 2012, MNRAS, 422, 282). The results obtained showed that the BH and PI profile dark matter galaxies fit very well for both the density and the velocity profiles; in contrast, the NFW model did not make good adjustments to the profiles in any analyzed galaxy.
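The kind of parametric inversion described above can be sketched with SciPy's `dual_annealing` on a synthetic pseudo-isothermal (PI) rotation curve; the parametrization, normalized units, and noise level below are illustrative assumptions, not the paper's actual data or code:

```python
import numpy as np
from scipy.optimize import dual_annealing

# Pseudo-isothermal (PI) halo rotation curve in normalized units:
# v(r) = v0 * sqrt(1 - (rc/r) * arctan(r/rc)), parameters (v0, rc).
def v_pi(r, v0, rc):
    return v0 * np.sqrt(1.0 - (rc / r) * np.arctan(r / rc))

rng = np.random.default_rng(1)
r = np.linspace(0.5, 10.0, 25)
v_obs = v_pi(r, 120.0, 2.0) + rng.normal(0.0, 1.0, r.size)  # synthetic "observations"

def misfit(params):
    """Sum of squared residuals between model and observed velocities."""
    v0, rc = params
    return np.sum((v_pi(r, v0, rc) - v_obs) ** 2)

# Simulated annealing search over bounded parameter space.
result = dual_annealing(misfit, bounds=[(10.0, 300.0), (0.1, 8.0)],
                        seed=2, maxiter=200)
v0_fit, rc_fit = result.x
```

The same misfit function could equally be handed to a genetic algorithm or a plain Monte Carlo sampler, which is the comparison the paper carries out.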
INFO-RNA--a fast approach to inverse RNA folding.
Busch, Anke; Backofen, Rolf
2006-08-01
The structure of RNA molecules is often crucial for their function. Therefore, secondary structure prediction has gained much interest. Here, we consider the inverse RNA folding problem, which means designing RNA sequences that fold into a given structure. We introduce a new algorithm for the inverse folding problem (INFO-RNA) that consists of two parts: a dynamic programming method for good initial sequences and a subsequent improved stochastic local search that uses an effective neighbor selection method. During the initialization, we design a sequence that, among all sequences, adopts the given structure with the lowest possible energy. For the selection of neighbors during the search, we use a kind of look-ahead of one selection step, applying an additional energy-based criterion. Afterwards, the pre-ordered neighbors are tested using the actual optimization criterion of minimizing the structure distance between the target structure and the mfe structure of the considered neighbor. We compared our algorithm to RNAinverse and RNA-SSD for artificial and biological test sets. Using INFO-RNA, we performed better than RNAinverse and, in most cases, we gained better results than RNA-SSD, probably the best inverse RNA folding tool on the market. www.bioinf.uni-freiburg.de?Subpages/software.html.
Long-distance asymptotics of temperature correlators of the impenetrable Bose gas
International Nuclear Information System (INIS)
Its, A.R.; Izergin, A.G.; Korepin, V.E.
1989-06-01
The inverse scattering method is applied to the integrable nonlinear system describing temperature correlators of the impenetrable bosons in one space dimension. The corresponding matrix Riemann problems are constructed for two-point as well as for multi-point correlators. Long-distance asymptotics of two-point correlators is calculated. (author). 8 refs
Counting distance: Effects of egocentric distance on numerical perception.
Directory of Open Access Journals (Sweden)
Nurit Gronau
Full Text Available Numerical value has long been known to be associated with a variety of magnitude representations, such as size, time and space. The present study focused on the interactive relations of numerical magnitude with a spatial factor which is dominant in everyday vision and is often overlooked, namely, egocentric distance, or depth. We hypothesized that digits denoting large magnitudes are associated with large perceived distances, and vice versa. While the relations of numerical value and size have long been documented, effects of egocentric distance on numeral perception have been scarcely investigated, presumably due to the difficulty of disentangling size and depth factors within three-dimensional visual displays. The current study aimed to assess the potential linkage between egocentric distance and number magnitude, while neutralizing any perceived and/or physical size parameters of target digits. In Experiment 1, participants conducted a numeral size-classification task ('bigger or smaller than 5'), to which they responded with a near-to-body or a far-from-body key. Results revealed shorter responses for small than for large numbers when responded to with a key positioned close to the body, and for large than small numbers when responded to with a key positioned far from the body (regardless of hand-key mapping). Experiment 2 used verbal stimuli denoting near/remote concepts as irrelevant primes to target digits, further demonstrating a priming effect of conceived distance on numerical value processing. Collectively, our results suggest that distance magnitudes are associatively linked to numerical magnitudes and may affect digit processing independently of the effects of visual size.
Distance sampling methods and applications
Buckland, S T; Marques, T A; Oedekoven, C S
2015-01-01
In this book, the authors cover the basic methods and advances within distance sampling that are most valuable to practitioners and in ecology more broadly. This is the fourth book dedicated to distance sampling. In the decade since the last book published, there have been a number of new developments. The intervening years have also shown which advances are of most use. This self-contained book covers topics from the previous publications, while also including recent developments in method, software and application. Distance sampling refers to a suite of methods, including line and point transect sampling, in which animal density or abundance is estimated from a sample of distances to detected individuals. The book illustrates these methods through case studies; data sets and computer code are supplied to readers through the book’s accompanying website. Some of the case studies use the software Distance, while others use R code. The book is in three parts. The first part addresses basic methods, the ...
Computing Distances between Probabilistic Automata
Directory of Open Access Journals (Sweden)
Mathieu Tracol
2011-07-01
Full Text Available We present relaxed notions of simulation and bisimulation on Probabilistic Automata (PA) that allow some error epsilon. When epsilon is zero we retrieve the usual notions of bisimulation and simulation on PAs. We give logical characterisations of these notions by choosing suitable logics which differ from the elementary ones, L with negation and L without negation, by the modal operator. Using flow networks, we show how to compute the relations in PTIME. This allows the definition of an efficiently computable non-discounted distance between the states of a PA. A natural modification of this distance is introduced to obtain a discounted distance, which weakens the influence of long-term transitions. We compare our notions of distance to others previously defined and illustrate our approach on various examples. We also show that our distance is non-expansive with respect to process algebra operators. Although L without negation is a suitable logic to characterise epsilon-(bi)simulation on deterministic PAs, it is not for general PAs; interestingly, we prove that it does characterise weaker notions, called a priori epsilon-(bi)simulation, which we prove to be NP-hard to decide.
High accuracy absolute distance metrology
Swinkels, Bas L.; Bhattacharya, Nandini; Verlaan, Ad L.; Braat, Joseph J. M.
2017-11-01
One of ESA's future missions is the Darwin Space Interferometer, which aims to detect planets around nearby stars using optical aperture synthesis with free-flying telescopes. Since this involves interfering white (infra-red) light over large distances, the mission is not possible without a complex metrology system that monitors various speeds, distances and angles between the satellites. One of its sub-systems should measure absolute distances with an accuracy of around 70 micrometers over distances up to 250 meters. To enable such measurements, we are investigating a technique called frequency sweeping interferometry, in which a single laser is swept over a large known frequency range. Central to our approach is the use of a very stable, high finesse Fabry-Pérot cavity, to which the laser is stabilized at the endpoints of the frequency sweep. We will discuss the optical set-up, the control system that controls the fast sweeping, the calibration and the data analysis. We tested the system using long fibers and achieved a repeatability of 50 micrometers at a distance of 55 meters. We conclude with some recommendations for further improvements and the adaptation for use in space.
Social interaction distance and stratification.
Bottero, Wendy; Prandy, Kenneth
2003-06-01
There have been calls from several sources recently for a renewal of class analysis that would encompass social and cultural, as well as economic elements. This paper explores a tradition in stratification that is founded on this idea: relational or social distance approaches to mapping hierarchy and inequality which theorize stratification as a social space. The idea of 'social space' is not treated as a metaphor of hierarchy nor is the nature of the structure determined a priori. Rather, the space is identified by mapping social interactions. Exploring the nature of social space involves mapping the network of social interaction--patterns of friendship, partnership and cultural similarity--which gives rise to relations of social closeness and distance. Differential association has long been seen as the basis of hierarchy, but the usual approach is first to define a structure composed of a set of groups and then to investigate social interaction between them. Social distance approaches reverse this, using patterns of interaction to determine the nature of the structure. Differential association can be seen as a way of defining proximity within a social space, from the distances between social groups, or between social groups and social objects (such as lifestyle items). The paper demonstrates how the very different starting point of social distance approaches also leads to strikingly different theoretical conclusions about the nature of stratification and inequality.
Statistical Inversion of Seismic Noise Inversion statistique du bruit sismique
Directory of Open Access Journals (Sweden)
Adler P. M.
2006-11-01
Full Text Available A systematic investigation of wave propagation in random media is presented. Spectral analysis, inversion of codas and attenuation of the direct wave front are studied for synthetic data obtained in isotropic or anisotropic, 2D or 3D media. A coda inversion process is developed and checked on two sets of real data. In both cases, it is possible to compare the correlation lengths obtained by inversion to characteristic lengths measured on seismic logs, for the full scale seismic survey, or on a thin section, for the laboratory experiment. These two experiments prove the feasibility and the efficiency of the statistical inversion of codas. Correct characteristic lengths can be obtained which cannot be determined by another method. The problem of geophysics is the search for information about the subsurface in seismic signals recorded at the surface or in wells. This information is usually sought in deterministic form, that is, as the value of the studied parameter given at each point. Our point of view is different, since our objective is to deduce certain statistical properties of the medium, assumed to be heterogeneous, from the seismograms recorded after propagation. Two means of achieving this objective then emerge. The first is the spectral analysis of the codas; this analysis makes it possible to determine the mean sizes of the heterogeneities of the subsurface. The second possibility is the study of the attenuation of the direct wave front, which also leads to knowledge of the characteristic lengths of the subsurface; unlike the first method, it does not seem to be efficiently transposable to real cases. In the first part, the proportionality between the backscattering factor, related to the statistical properties of the medium, and the spectrum of the codas is tested numerically. The velocity distributions, with value
Solution for Ill-Posed Inverse Kinematics of Robot Arm by Network Inversion
Directory of Open Access Journals (Sweden)
Takehiko Ogawa
2010-01-01
Full Text Available In the context of controlling a robot arm with multiple joints, the method of estimating the joint angles from the given end-effector coordinates is called inverse kinematics, which is a type of inverse problem. Network inversion has been proposed as a method for solving inverse problems by using a multilayer neural network. In this paper, network inversion is introduced as a method to solve the inverse kinematics problem of a robot arm with multiple joints, where the joint angles are estimated from the given end-effector coordinates. In general, inverse problems are affected by ill-posedness, which implies that the existence, uniqueness, and stability of their solutions are not guaranteed. In this paper, we show the effectiveness of applying network inversion with regularization, by which ill-posedness can be reduced, to the ill-posed inverse kinematics of an actual robot arm with multiple joints.
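The inversion-with-regularization idea can be sketched without training a network: below, an analytic two-link planar forward kinematics stands in for the trained forward model, and the "network inversion" step is gradient descent on the joint angles with a Tikhonov penalty that tames the ill-posedness (multiple and unstable solutions). All geometry and parameter values are illustrative assumptions:

```python
import numpy as np

# Forward "model": planar two-link arm with link lengths L1 = L2 = 1.
# Network inversion iterates on the *input* (joint angles) of a fixed
# forward mapping; an analytic kinematics stands in for a trained net here.
L1, L2 = 1.0, 1.0

def forward(theta):
    t1, t2 = theta
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

def jacobian(theta):
    t1, t2 = theta
    return np.array([[-L1 * np.sin(t1) - L2 * np.sin(t1 + t2), -L2 * np.sin(t1 + t2)],
                     [ L1 * np.cos(t1) + L2 * np.cos(t1 + t2),  L2 * np.cos(t1 + t2)]])

def invert(target, lam=1e-4, lr=0.2, iters=2000):
    """Gradient descent on 0.5*||f(theta) - target||^2 + 0.5*lam*||theta||^2.

    The Tikhonov term lam*||theta||^2 plays the role of the regularization
    that reduces the ill-posedness of the inverse kinematics."""
    theta = np.array([0.3, 0.3])  # initial guess
    for _ in range(iters):
        err = forward(theta) - target
        grad = jacobian(theta).T @ err + lam * theta
        theta -= lr * grad
    return theta

target = np.array([1.2, 0.8])
theta_hat = invert(target)
pos_err = np.linalg.norm(forward(theta_hat) - target)
```

With a trained network in place of `forward`, the gradient with respect to the input is obtained by backpropagation instead of the analytic Jacobian; the iteration is otherwise identical.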
Covariant chronogeometry and extreme distances
International Nuclear Information System (INIS)
Segal, I.E.
1981-01-01
A theory for the analysis of major features of the fundamental physical structure of the universe, from micro- to macroscopic is proposed. It indicates that gravity is essentially the transform of the aggregate of the basic microscopic forces under conformal inversion. The theory also suggests a natural form for elementary particle structure that implies a nonparametric cosmological effect and indicates an intrinsic hierarchy among the microscopic forces. (author)
Inverse problem in neutron reflection
International Nuclear Information System (INIS)
Zhou, Xiao-Lin; Felcher, G.P.; Chen, Sow-Hsin
1991-05-01
Reflectance and transmittance of neutrons from a thin film deposited on a bulk substrate are derived from solution of Schroedinger wave equation in the material medium with an optical potential. A closed-form solution for the complex reflectance and transmittance is obtained in an approximation where the curvature of the scattering length density profile in the film is small. This closed-form solution reduces to all the known approximations in various limiting cases and is shown to be more accurate than the existing approximations. The closed-form solution of the reflectance is used as a starting point for an inversion algorithm whereby the reflectance data are inverted by a matrix iteration scheme to obtain the scattering length density distribution in the film. A preliminary test showed that the inverted profile is accurate for the linear scattering length density distribution but falls short in the case of an exponential distribution. 30 refs., 7 figs., 1 tab
Euclidean distance geometry an introduction
Liberti, Leo
2017-01-01
This textbook, the first of its kind, presents the fundamentals of distance geometry: theory, useful methodologies for obtaining solutions, and real world applications. Concise proofs are given and step-by-step algorithms for solving fundamental problems efficiently and precisely are presented in Mathematica®, enabling the reader to experiment with concepts and methods as they are introduced. Descriptive graphics, examples, and problems, accompany the real gems of the text, namely the applications in visualization of graphs, localization of sensor networks, protein conformation from distance data, clock synchronization protocols, robotics, and control of unmanned underwater vehicles, to name several. Aimed at intermediate undergraduates, beginning graduate students, researchers, and practitioners, the reader with a basic knowledge of linear algebra will gain an understanding of the basic theories of distance geometry and why they work in real life.
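The core distance geometry task described above, recovering point coordinates (up to a rigid motion) from pairwise Euclidean distances, has a classical solution via multidimensional scaling. A self-contained sketch in Python rather than the book's Mathematica, with hypothetical sensor positions:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover coordinates (up to rotation/translation/reflection) from a
    matrix of pairwise Euclidean distances via classical MDS: double-center
    the squared distances into a Gram matrix, then take its top eigenpairs."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # Gram matrix of centered points
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Hypothetical sensor positions; only their distances are handed to the solver.
X = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.2, 0.8], [0.5, 0.3]])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = classical_mds(D, dim=2)
D_rec = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
```

With exact distances the reconstruction reproduces the distance matrix to numerical precision; the noisy and partial-distance variants are where the book's more advanced methods come in.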
Wake Vortex Inverse Model User's Guide
Lai, David; Delisi, Donald
2008-01-01
NorthWest Research Associates (NWRA) has developed an inverse model for inverting landing aircraft vortex data. The data used for the inversion are the time evolution of the lateral transport position and vertical position of both the port and starboard vortices. The inverse model performs iterative forward model runs using various estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Forward model predictions of lateral transport and altitude are then compared with the observed data. Differences between the data and model predictions guide the choice of vortex parameter values, crosswind profile and circulation evolution in the next iteration. Iterations are performed until a user-defined criterion is satisfied. Currently, the inverse model is set to stop when the improvement in the rms deviation between the data and model predictions is less than 1 percent for two consecutive iterations. The forward model used in this inverse model is a modified version of the Shear-APA model. A detailed description of this forward model, the inverse model, and its validation are presented in a different report (Lai, Mellman, Robins, and Delisi, 2007). This document is a User's Guide for the Wake Vortex Inverse Model. Section 2 presents an overview of the inverse model program. Execution of the inverse model is described in Section 3. When executing the inverse model, a user is requested to provide the name of an input file which contains the inverse model parameters, the various datasets, and directories needed for the inversion. A detailed description of the list of parameters in the inversion input file is presented in Section 4. A user has an option to save the inversion results of each lidar track in a mat-file (a condensed data file in Matlab format). These saved mat-files can be used for post-inversion analysis. A description of the contents of the saved files is given in Section 5. An example of an inversion input
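The iterate-compare-update loop with the stopping rule described above (rms improvement below 1 percent for two consecutive iterations) can be sketched generically; the toy linear "forward model" and least-squares update below are stand-ins for the Shear-APA model and the vortex-parameter search, not NWRA's actual code:

```python
import numpy as np

# Toy forward model standing in for the Shear-APA trajectory prediction:
# two parameters, a straight-line fit to observed transport data.
def forward_model(param, t):
    return param[0] * t + param[1]

t = np.linspace(0.0, 10.0, 50)
observed = forward_model(np.array([1.5, 0.3]), t)  # synthetic "lidar" data

param = np.array([0.0, 0.0])          # initial parameter estimate
prev_rms, small_gains = np.inf, 0
for iteration in range(1000):
    resid = observed - forward_model(param, t)
    rms = np.sqrt(np.mean(resid ** 2))
    # Stopping rule: improvement < 1% on two consecutive iterations.
    if prev_rms < np.inf and (prev_rms - rms) / prev_rms < 0.01:
        small_gains += 1
        if small_gains == 2:
            break
    else:
        small_gains = 0
    prev_rms = rms
    # Misfit-guided parameter update (a damped least-squares step here).
    A = np.vstack([t, np.ones_like(t)]).T
    param = param + 0.5 * np.linalg.lstsq(A, resid, rcond=None)[0]
```

In the real inverse model the update step perturbs vortex parameters, crosswind profiles, and circulation evolution rather than solving a linear system, but the control flow is the same.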
Overweight, Obesity, and Weight Loss
Accommodating chromosome inversions in linkage analysis.
Chen, Gary K; Slaten, Erin; Ophoff, Roel A; Lange, Kenneth
2006-08-01
This work develops a population-genetics model for polymorphic chromosome inversions. The model precisely describes how an inversion changes the nature of and approach to linkage equilibrium. The work also describes algorithms and software for allele-frequency estimation and linkage analysis in the presence of an inversion. The linkage algorithms implemented in the software package Mendel estimate recombination parameters and calculate the posterior probability that each pedigree member carries the inversion. Application of Mendel to eight Centre d'Etude du Polymorphisme Humain pedigrees in a region containing a common inversion on 8p23 illustrates its potential for providing more-precise estimates of the location of an unmapped marker or trait gene. Our expanded cytogenetic analysis of these families further identifies inversion carriers and increases the evidence of linkage.
Optimization and inverse problems in electromagnetism
Wiak, Sławomir
2003-01-01
From 12 to 14 September 2002, the Academy of Humanities and Economics (AHE) hosted the workshop "Optimization and Inverse Problems in Electromagnetism". After this bi-annual event, a large number of papers were assembled and combined in this book. During the workshop recent developments and applications in optimization and inverse methodologies for electromagnetic fields were discussed. The contributions selected for the present volume cover a wide spectrum of inverse and optimal electromagnetic methodologies, ranging from theoretical to practical applications. A number of new optimal and inverse methodologies were proposed. There are contributions related to dedicated software. Optimization and Inverse Problems in Electromagnetism consists of three thematic chapters, covering: -General papers (survey of specific aspects of optimization and inverse problems in electromagnetism), -Methodologies, -Industrial Applications. The book can be useful to students of electrical and electronics engineering, computer sci...
Inverse analysis of turbidites by machine learning
Naruse, H.; Nakao, K.
2017-12-01
This study proposes a method to estimate the paleo-hydraulic conditions of turbidity currents from ancient turbidites by using a machine-learning technique. In this method, numerical simulation is repeated under various initial conditions, which produces a data set of characteristic features of turbidites. This data set is then used for supervised training of a deep-learning neural network (NN). Quantities of characteristic features of turbidites in the training data set are given to the input nodes of the NN, and the output nodes are expected to provide estimates of the initial conditions of the turbidity current. The weight coefficients of the NN are then optimized to reduce the root-mean-square difference between the true conditions and the output values of the NN. The empirical relationship between the numerical results and the initial conditions is explored in this method, and the discovered relationship is used for inversion of turbidity currents. This machine learning can potentially produce a NN that estimates paleo-hydraulic conditions from data of ancient turbidites. We produced a preliminary implementation of this methodology. A forward model based on 1D shallow-water equations with a correction for the density-stratification effect was employed. This model calculates the behavior of a surge-like turbidity current transporting mixed-size sediment, and outputs the spatial distribution of volume per unit area of each grain-size class on a uniform slope. The grain-size distribution was discretized into 3 classes. The numerical simulation was repeated 1000 times, and thus 1000 beds of turbidites were used as the training data for a NN that has 21000 input nodes and 5 output nodes with two hidden layers. After the machine learning finished, independent simulations were conducted 200 times in order to evaluate the performance of the NN. As a result of this test, the initial conditions of the validation data were successfully reconstructed by the NN. The estimated values show very small
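The simulate-train-invert pipeline can be reproduced in miniature: a toy forward model generates (initial condition, deposit) pairs, and a one-hidden-layer network is trained to map deposits back to the initial condition. The forward model, network size, and training settings below are illustrative assumptions, far smaller than the study's 21000-input setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Toy "forward model": deposit thickness at 5 positions from one scalar
#    initial condition c (a stand-in for the shallow-water turbidite model).
x = np.linspace(0.5, 4.0, 5)
def forward(c):
    return c * np.exp(-x / c)

# 2) Training set: repeated forward runs under varied initial conditions.
c_train = rng.uniform(0.5, 2.0, size=(500, 1))
d_train = np.array([forward(c) for c in c_train[:, 0]])

# 3) One-hidden-layer regression network (deposit -> condition),
#    trained by plain gradient descent with manual backpropagation.
W1 = rng.standard_normal((5, 16)) * 0.5; b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5; b2 = np.zeros(1)
lr, losses = 0.05, []
for epoch in range(3000):
    h = np.tanh(d_train @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - c_train
    losses.append(float(np.mean(err ** 2)))
    g_pred = 2 * err / len(c_train)            # dLoss/dpred
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)       # backprop through tanh
    gW1 = d_train.T @ g_h; gb1 = g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
```

Validation against held-out forward runs, as in the study's 200 independent simulations, would use the trained weights on freshly simulated deposits.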
Identifiability Scaling Laws in Bilinear Inverse Problems
Choudhary, Sunav; Mitra, Urbashi
2014-01-01
A number of ill-posed inverse problems in signal processing, like blind deconvolution, matrix factorization, dictionary learning and blind source separation share the common characteristic of being bilinear inverse problems (BIPs), i.e. the observation model is a function of two variables and conditioned on one variable being known, the observation is a linear function of the other variable. A key issue that arises for such inverse problems is that of identifiability, i.e. whether the observa...
Lectures on the inverse scattering method
International Nuclear Information System (INIS)
Zakharov, V.E.
1983-06-01
In a series of six lectures an elementary introduction to the theory of inverse scattering is given. The first four lectures contain a detailed theory of solitons in the framework of the KdV equation, together with the inverse scattering theory of the one-dimensional Schroedinger equation. In the fifth lecture the dressing method is described, while the sixth lecture gives a brief review of the equations soluble by the inverse scattering method. (author)
Inverse kinematics of OWI-535 robotic arm
DEBENEC, PRIMOŽ
2015-01-01
The thesis aims to calculate the inverse kinematics of the OWI-535 robotic arm. The calculation of the inverse kinematics determines the joint parameters that provide the desired pose of the end effector. The pose consists of the position and the orientation; however, we will focus only on the latter. Due to the arm's limitations, we have devised our own approach to the inverse kinematics calculation. At first we derived it only theoretically, and then we transferred the derivation into...
Automatic Flight Controller With Model Inversion
Meyer, George; Smith, G. Allan
1992-01-01
Automatic digital electronic control system based on inverse-model-follower concept being developed for proposed vertical-attitude-takeoff-and-landing airplane. Inverse-model-follower control places inverse mathematical model of dynamics of controlled plant in series with control actuators of controlled plant so response of combination of model and plant to command is unity. System includes feedback to compensate for uncertainties in mathematical model and disturbances imposed from without.
An optimal transport approach for seismic tomography: application to 3D full waveform inversion
Métivier, L.; Brossier, R.; Mérigot, Q.; Oudet, E.; Virieux, J.
2016-11-01
The use of the optimal transport distance has recently yielded significant progress in image processing for pattern recognition, shape identification, and histogram matching. In this study, the use of this distance is investigated for a seismic tomography problem exploiting the complete waveform: full waveform inversion. In its conventional formulation, this high-resolution seismic imaging method is based on the minimization of the L2 distance between predicted and observed data. Application of this method is generally hampered by the local minima of the associated L2 misfit function, which correspond to velocity models matching the data up to one or several phase shifts. Conversely, the optimal transport distance appears as a more suitable tool to compare the misfit between oscillatory signals, for its ability to detect shifted patterns. However, its application to full waveform inversion is not straightforward, as the mass conservation between the compared data cannot be guaranteed, a crucial assumption for optimal transport. In this study, the use of a distance based on the Kantorovich-Rubinstein norm is introduced to overcome this difficulty. Its mathematical link with the optimal transport distance is made clear. An efficient numerical strategy for its computation, based on a proximal splitting technique, is introduced. We demonstrate that each iteration of the corresponding algorithm requires solving the Poisson equation, for which fast solvers can be used, relying either on the fast Fourier transform or on multigrid techniques. The development of this numerical method makes possible applications to industrial-scale data, involving tens of millions of discrete unknowns. The results we obtain on such large-scale synthetic data illustrate the potential of optimal transport for seismic imaging. Starting from crude initial velocity models, optimal transport based inversion yields significantly better velocity reconstructions than those based on
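The motivation above, that optimal transport "sees" time shifts where the L2 misfit saturates, can be checked directly with SciPy's 1-D Wasserstein distance on two shifted pulses; this is a simplified, equal-mass stand-in for the paper's Kantorovich-Rubinstein machinery:

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two unit-mass Gaussian pulses, one a time-shifted copy of the other.
# The W1 (optimal transport) distance grows linearly with the shift, while
# the L2 misfit saturates once the pulses no longer overlap -- the
# cycle-skipping local-minimum issue motivating OT-based inversion.
t = np.linspace(-10.0, 15.0, 2501)

def pulse(center, sigma=0.5):
    p = np.exp(-0.5 * ((t - center) / sigma) ** 2)
    return p / p.sum()   # normalize: equal total mass, as OT requires

ref = pulse(0.0)
for shift in (3.0, 5.0):
    w1 = wasserstein_distance(t, t, ref, pulse(shift))
    l2 = np.sqrt(np.sum((pulse(shift) - ref) ** 2))
    print(f"shift={shift}: W1={w1:.3f}, L2={l2:.4f}")
```

For identical shifted distributions W1 equals the shift itself, so it remains informative at any offset; the L2 values for the two shifts are essentially identical because the pulses are already disjoint.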
Anisotropic magnetotelluric inversion using a mutual information constraint
Mandolesi, E.; Jones, A. G.
2012-12-01
technique: the MI constraint acts on the distance between the images that the models produce, rather than on the parameters that define the models. Results from a medium-size synthetic test show the MI constraint's ability to drive the inverse problem solution towards a model compatible with the known ones.
Steiner Distance in Graphs--A Survey
Mao, Yaping
2017-01-01
For a connected graph $G$ of order at least $2$ and $S\\subseteq V(G)$, the \\emph{Steiner distance} $d_G(S)$ among the vertices of $S$ is the minimum size among all connected subgraphs whose vertex sets contain $S$. In this paper, we summarize the known results on the Steiner distance parameters, including Steiner distance, Steiner diameter, Steiner center, Steiner median, Steiner interval, Steiner distance hereditary graph, Steiner distance stable graph, average Steiner distance, and Steiner ...
Time-reversal and Bayesian inversion
Debski, Wojciech
2017-04-01
The probabilistic inversion technique is superior to the classical optimization-based approach in all but one aspect: it requires quite exhaustive computations, which prohibits its use in very large inverse problems such as global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is an ongoing effort to make such large inverse tasks manageable with the probabilistic inverse approach. One promising possibility for achieving this goal relies on exploiting the internal symmetries of the seismological modeling problems at hand: time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.
Chromatid Painting for Chromosomal Inversion Detection Project
National Aeronautics and Space Administration — We propose the continued development of a novel approach to the detection of chromosomal inversions. Transmissible chromosome aberrations (translocations and...
Coronal Magnetic Field Profiles from Shock-CME Standoff Distances
Schmidt, J. M.; Cairns, Iver H.; Gopalswamy, N.; Yashiro, S.
2016-01-01
Coronagraphs observe coronal mass ejections (CMEs) and their driven shocks in white-light images. From these observations the shock's speed and the shock's standoff distance from the CME's leading edge can be derived. Using these quantities, theoretical relationships between the shock's Alfvénic Mach number MA and the standoff distance, and empirical radial profiles for the solar wind velocity and number density, the radial magnetic field profile upstream of the shock can be calculated. These profiles cannot be measured directly. We test the accuracy of this method for estimating the radial magnetic field profile upstream of the shock by simulating a sample CME that occurred on 29 November 2013 using the three-dimensional (3-D) magnetohydrodynamic Block-Adaptive-Tree Solarwind Roe Upwind Scheme (BATS-R-US) code, retrieving shock-CME standoff distances from the simulation, and comparing the estimated and simulated radial magnetic field profiles. We find good agreement between the two profiles (within ±30%) between 1.8 and 10 solar radii. Our simulations confirm that a linear relationship exists between the standoff distance and the inverse compression ratio at the shock. We also find very good agreement between the empirical and simulated radial profiles of the number density and speed of the solar wind and inner corona.
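The reconstruction step described above rests on two standard definitions: the Alfvénic Mach number M_A = (v_shock - v_sw)/v_A and the Alfvén speed v_A = B/sqrt(mu0 * rho). A minimal sketch of solving these for the upstream field, with illustrative numbers that are not taken from the paper:

```python
import math

MU0 = 4e-7 * math.pi      # vacuum permeability [H/m]
M_P = 1.6726e-27          # proton mass [kg]

def upstream_b(v_shock, v_sw, mach_alfven, n_p):
    """Upstream radial magnetic field inferred from the shock's Alfvénic
    Mach number: M_A = (v_shock - v_sw) / v_A with v_A = B / sqrt(mu0*rho).
    Assumes a pure-proton plasma of number density n_p [m^-3]."""
    v_a = (v_shock - v_sw) / mach_alfven      # Alfvén speed [m/s]
    rho = n_p * M_P                           # mass density [kg/m^3]
    return v_a * math.sqrt(MU0 * rho)         # B in tesla

# illustrative numbers, not the paper's: a 1000 km/s shock through a
# 300 km/s wind with M_A = 3 and n_p = 1e12 m^-3 (a few solar radii)
b = upstream_b(1.0e6, 3.0e5, 3.0, 1.0e12)
print(f"{b * 1e4:.3f} G")   # -> 0.107 G
```

A field of order 0.1 G at a few solar radii is the right ballpark, which is all this toy calculation is meant to show.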
Approximate distance oracles for planar graphs with improved query time-space tradeoff
DEFF Research Database (Denmark)
Wulff-Nilsen, Christian
2016-01-01
We consider approximate distance oracles for edge-weighted n-vertex undirected planar graphs. Given fixed ϵ > 0, we present a (1 + ϵ)-approximate distance oracle with O(n(log log n)^2) space and O((log log n)^3) query time. This improves the previous best product of query time and space...
Correlates of body mass index, weight goals, and weight-management practices among adolescents.
Paxton, Raheem J; Valois, Robert F; Drane, J Wanzer
2004-04-01
The study examined associations among physical activity, cigarette smoking, body mass index, perceptions of body weight, weight-management goals, and weight-management behaviors of public high school adolescents. The CDC Youth Risk Behavior Survey provided a cross-sectional sample (n = 3,089) of public high school students in South Carolina. Logistic regression models were constructed separately for four race-gender groups. Adjusted odds ratios and 95% confidence intervals were calculated to determine the magnitude of associations. Based on self-reported height and weight, 13% of students were overweight, while 15% were at risk of becoming overweight. However, 42% of students were trying to lose weight, and 22% were trying to maintain current weight. Female students were less likely than male students to be overweight, but more likely to be attempting to lose weight. Extreme weight-control practices were reported by 27% of the sample. Among Black females trying to lose weight, a positive association was observed for strengthening exercises (OR = 1.55), but the association was inverse among Black males (OR = .600). Among White females, attempted weight loss was associated with strengthening exercises (OR = 1.72) and cigarette smoking (OR = 1.54). For White males, attempted weight loss was associated positively with vigorous exercise (OR = 1.41) and inversely related to moderate exercise (OR = .617). Effective weight-management practices for adolescents should focus on appropriate eating behaviors, physical activity, and low-fat/calorie diets. Multicomponent weight-management interventions should be conducted within a coordinated school health framework.
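The adjusted odds ratios quoted above come from logistic regression, but the underlying unadjusted odds-ratio computation with a Woolf 95% confidence interval can be sketched from a 2x2 table; the counts below are hypothetical, not the survey's:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf confidence interval from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: 40 of 100 exercisers vs 25 of 100 non-exercisers
# reported attempted weight loss
or_, lo, hi = odds_ratio_ci(40, 60, 25, 75)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")   # OR = 2.00, 95% CI (1.09, 3.66)
```

A CI excluding 1.0, as here, is what makes an OR such as 1.55 in the abstract reportable as a significant association.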
Video surveillance using distance maps
Schouten, Theo E.; Kuppens, Harco C.; van den Broek, Egon L.
2006-02-01
Human vigilance is limited; hence, automatic motion and distance detection is one of the central issues in video surveillance. Many aspects are of importance here; this paper specifically addresses efficiency, achieving real-time performance, accuracy, and robustness against various noise factors. To obtain fully controlled test environments, an artificial development center for robot navigation is introduced in which several parameters can be set (e.g., number of objects, trajectories, and type and amount of noise). In the videos, for each successive frame, movement of stationary objects is detected and pixels of moving objects are located, from which moving objects are identified in a robust way. An Exact Euclidean Distance Map (E2DM) is utilized to accurately determine the distances between moving and stationary objects. Together with the determined distances between moving objects and the detected movement of stationary objects, this provides the input for detecting unwanted situations in the scene. Further, each intelligent object (e.g., a robot) is provided with its E2DM, allowing the object to plan its course of action. Timing results are specified for each program block of the processing chain for 20 different setups. In sum, the current paper presents extensive, experimentally controlled research on real-time, accurate, and robust motion detection for video surveillance, using E2DMs, which makes it a unique approach.
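The E2DM at the heart of this pipeline assigns every pixel its exact Euclidean distance to the nearest object pixel. A brute-force sketch follows; real E2DM algorithms achieve this in near-linear time, but the output map is identical:

```python
import math

def exact_edm(grid):
    """Exact Euclidean Distance Map: for every cell, the Euclidean distance
    to the nearest obstacle cell (value 1). Quadratic brute force, for
    illustration only."""
    h, w = len(grid), len(grid[0])
    obstacles = [(y, x) for y in range(h) for x in range(w) if grid[y][x]]
    return [[min(math.hypot(y - oy, x - ox) for oy, ox in obstacles)
             for x in range(w)] for y in range(h)]

# one obstacle in the corner of a 4x4 scene
scene = [[1, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
edm = exact_edm(scene)
print(round(edm[3][3], 3))   # opposite corner: 3*sqrt(2) = 4.243
```

Thresholding such a map ("any moving object within distance r of a stationary one") is the kind of rule the paper's unwanted-situation detector builds on.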
Gesture Interaction at a Distance
Fikkert, F.W.
2010-01-01
The aim of this work is to explore, from a perspective of human behavior, which gestures are suited to control large display surfaces from a short distance away; why that is so; and, equally important, how such an interface can be made a reality. A well-known example of the type of interface that is
Cooperative Distance Learning in Mathematics
Guimaraes, Luiz Carlos; Moraes, Thiago Guimaraes; Mattos, Francisco Roberto Pinto
2005-01-01
In this paper we report on two complementary research results. In the first we describe a tool that allows different modes of synchronous distance teaching of mathematics. In the second we report the preliminary results of a pilot study conducted using this tool to teach geometry, both with school students aged 14-15 and with undergraduate…
Adaptive Distance Protection for Microgrids
DEFF Research Database (Denmark)
Lin, Hengwei; Guerrero, Josep M.; Quintero, Juan Carlos Vasquez
2015-01-01
is adopted to accelerate the tripping speed of the relays on the weak lines. The protection methodology is tested on a mid-voltage microgrid network in Aalborg, Denmark. The results show that the adaptive distance protection methodology has good selectivity and sensitivity. What is more, this system also has...
Distance Education Technologies in Asia
International Development Research Centre (IDRC) Digital Library (Canada)
17 schools ... The use of other Asian DE experiences as a basis for planning is a logical move for Mongolia because most Asian countries that are now involved in DE have a .... for distance education in the region, and the pioneering examples these provide for other Asian countries with similar physical and social conditions.
Quality Connection: Going the Distance
Jenney, Timothy R.; Roupas, Eva K.
2003-01-01
In 1999, Virginia Beach City Public Schools launched a completely new distance learning (DL) initiative, Quality Connection. Since that time, through perseverance and creative thinking, the program has become a model of technology as well as a highly successful method of delivering services to a wide variety of stakeholders. Not only do students…
Designing a Distance Learning Facility.
Lambert, Michael P.
1998-01-01
Details the design of a distance-learning facility through analysis of its functions, paper-handling requirements, and current and future communications-technology needs. It also lists special features the facility should have, including up-to-date wiring capacities for telecommunications, uplink and downlink capabilities to satellites, and…
Geometric Spanners for Weighted Point Sets
DEFF Research Database (Denmark)
Abam, Mohammad; de Berg, Mark; Farshi, Mohammad
2009-01-01
Let (S,d) be a finite metric space, where each element p ∈ S has a non-negative weight w(p). We study spanners for the set S with respect to weighted distance function d w , where d w (p,q) is w(p) + d(p,q) + wq if p ≠ q and 0 otherwise. We present a general method for turning spanners with respect...
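The weighted distance d_w above is easy to state in code; a toy example with made-up points and weights:

```python
def weighted_dist(p, q, w, d):
    """d_w(p, q) = w(p) + d(p, q) + w(q) if p != q, and 0 otherwise."""
    return 0.0 if p == q else w[p] + d(p, q) + w[q]

# toy metric space: points on a line with made-up additive weights
pts = {'a': 0.0, 'b': 3.0, 'c': 7.0}
w = {'a': 1.0, 'b': 0.5, 'c': 2.0}
d = lambda p, q: abs(pts[p] - pts[q])

print(weighted_dist('a', 'c', w, d))   # 1.0 + 7.0 + 2.0 = 10.0
# d_w inherits the triangle inequality from d plus the non-negative weights
assert weighted_dist('a', 'c', w, d) <= \
       weighted_dist('a', 'b', w, d) + weighted_dist('b', 'c', w, d)
```

Intuitively, w(p) is a fixed cost paid for entering or leaving p, which is why routing through intermediate points never beats the direct weighted distance.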
Identifying Isotropic Events using an Improved Regional Moment Tensor Inversion Technique
Energy Technology Data Exchange (ETDEWEB)
Dreger, Douglas S. [Univ. of California, Berkeley, CA (United States); Ford, Sean R. [Univ. of California, Berkeley, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Walter, William R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-12-08
Research was carried out investigating the feasibility of using a regional distance seismic waveform moment tensor inverse procedure to estimate source parameters of nuclear explosions and to use the source inversion results to develop a source-type discrimination capability. The results of the research indicate that it is possible to robustly determine the seismic moment tensor of nuclear explosions, and when compared to natural seismicity in the context of the a Hudson et al. (1989) source-type diagram they are found to separate from populations of earthquakes and underground cavity collapse seismic sources.
On Markov Earth Mover's Distance.
Wei, Jie
2014-10-01
In statistics, pattern recognition, and signal processing, it is of utmost importance to have an effective and efficient distance to measure the similarity between two distributions or sequences. In statistics this is referred to as the goodness-of-fit problem. Two leading goodness-of-fit methods are the chi-square and Kolmogorov-Smirnov distances. The strictly localized nature of these two measures hinders their practical utility for patterns and signals where the sample size is usually small. In view of this problem, Rubner and colleagues developed the earth mover's distance (EMD) to allow for cross-bin moves in evaluating the distance between two patterns, which has found a broad spectrum of applications. EMD-L1 was later proposed to reduce the time complexity of EMD from super-cubic by one order of magnitude by exploiting the special L1 metric. EMD-hat was developed to turn the global EMD into a localized one by discarding long-distance earth movements. In this work, we introduce a Markov EMD (MEMD) by treating the source and destination nodes absolutely symmetrically. In MEMD, as in EMD-hat, the earth is only moved locally, as dictated by the degree d of the neighborhood system. Nodes that cannot be matched locally are handled by dummy source and destination nodes. By use of this localized network structure, a greedy algorithm that is linear in the degree d and the number of nodes is then developed to evaluate the MEMD. Empirical studies on the use of MEMD on deterministic and statistical synthetic sequences and on SIFT-based image retrieval suggest encouraging performance.
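The contrast between bin-by-bin and cross-bin distances that motivates the EMD family can be shown in a few lines; a sketch with synthetic histograms (this is plain 1-D EMD, not the MEMD construction itself):

```python
import numpy as np

def chi_square(p, q):
    """Bin-by-bin chi-square distance: blind to *where* the mass went."""
    s = p + q
    m = s > 0
    return 0.5 * float(np.sum((p[m] - q[m]) ** 2 / s[m]))

def emd_1d(p, q):
    """Earth mover's distance for equal-mass 1-D histograms: with an L1
    ground distance it equals the area between the two CDFs."""
    return float(np.sum(np.abs(np.cumsum(p) - np.cumsum(q))))

base = np.zeros(20); base[5] = 1.0
near = np.zeros(20); near[6] = 1.0      # unit mass moved by one bin
far = np.zeros(20); far[15] = 1.0       # unit mass moved by ten bins

print(chi_square(base, near), chi_square(base, far))   # 1.0 1.0 (identical)
print(emd_1d(base, near), emd_1d(base, far))           # 1.0 10.0 (cross-bin aware)
```

Chi-square cannot distinguish a one-bin shift from a ten-bin shift, exactly the "strictly localized" weakness the abstract describes; EMD's cross-bin moves recover the displacement.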
A Streaming Distance Transform Algorithm for Neighborhood-Sequence Distances
Directory of Open Access Journals (Sweden)
Nicolas Normand
2014-09-01
Full Text Available We describe an algorithm that computes a “translated” 2D Neighborhood-Sequence Distance Transform (DT) using a look-up table approach. It requires a single raster scan of the input image and produces one line of output for every line of input. The neighborhood sequence is specified either by providing one period of some integer periodic sequence or by providing the rate of appearance of neighborhoods. The full algorithm optionally derives the regular (centered) DT from the “translated” DT, providing the result image on-the-fly, with a minimal delay, before the input image is fully processed. Its efficiency can benefit all applications that use neighborhood-sequence distances, particularly when pipelined processing architectures are involved, or when the size of objects in the source image is limited.
Extended run distance measurements of shock initiation in PBX 9502
International Nuclear Information System (INIS)
Gustavsen, R. L.; Sheffield, S. A.; Alcon, R. R.
2007-01-01
We have completed a series of shock initiation experiments on two lots of PBX 9502 (95 weight % TATB, 5 weight % Kel-F 800 binder). One PBX 9502 lot contained few fine particles (10 weight % <20 microns) while the second lot contained many fines (38 weight % <20 microns). Large, 71 mm diameter PBX 9502 samples were used and input pressures were 7.5-8.5 GPa, resulting in run distances of 25-35 mm. Buildup to detonation was measured using embedded magnetic particle velocity gauges. An unusual feature of the work was the use of metallic impactors (316 stainless steel) in combination with magnetic gauges. It has previously been assumed that conducting impactors would badly perturb the magnetic gauge measurements. However, we observed only a baseline voltage shift of ≅10% which increased linearly with time. Results include detonation coordinates (x*, t*) vs. initial shock pressure. No lot-to-lot differences in initiation behavior were observed.
Inverse problems and uncertainty quantification
Litvinenko, Alexander
2013-12-18
In a Bayesian setting, inverse problems and uncertainty quantification (UQ)—the propagation of uncertainty through a computational (forward) model—are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
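The linear Bayesian update mentioned at the end can be sketched on a toy scalar problem. For concreteness the sketch uses a sampled (ensemble) approximation of the conditional-expectation update rather than the paper's polynomial/spectral discretisation; the prior, forward model, and observation are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# prior ensemble for a scalar parameter x and a linear forward model y = 2x
x_prior = rng.normal(1.0, 0.5, size=20000)
y_prior = 2.0 * x_prior

y_obs, r = 3.0, 0.1 ** 2            # observation and its error variance

# linear Bayesian update in conditional-expectation form:
# x_post = x_prior + K * (y_obs + eps - y_prior),  K = cov(x, y) / (var(y) + r)
c = np.cov(x_prior, y_prior)
k = c[0, 1] / (c[1, 1] + r)
x_post = x_prior + k * (y_obs + rng.normal(0.0, 0.1, x_prior.size) - y_prior)

# the exact Gaussian posterior mean is (1/0.25 + 4/0.01)^-1 (1/0.25 + 6/0.01),
# approximately 1.495, between the prior mean 1.0 and y_obs/2 = 1.5
print(x_post.mean())
```

In the linear-Gaussian case this update is exact in the limit of a large ensemble; the paper's contribution is doing the same conditional-expectation update without sampling at all.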
Package inspection using inverse diffraction
McAulay, Alastair D.
2008-08-01
More efficient, cost-effective, hand-held methods of inspecting packages without opening them are in demand for security. Recent work on terahertz sources [1] and millimeter waves presents new possibilities. Millimeter waves pass through cardboard and styrofoam, common packing materials, and also pass through most materials except those with high conductivity, like metals, which block the radiation and are easily spotted. Estimating the refractive index along the path of the beam through the package, from observations of the beam passing out of the package, provides the necessary information to inspect the package and is a nonlinear problem. We therefore use a generalized linear inverse technique that we first developed for finding oil by reflection seismology in geophysics [2]. The computation assumes parallel slices of homogeneous material in the package, for which the refractive index is estimated. A beam is propagated through this model in a forward computation. The output is compared with the actual observations for the package, and an update is computed for the refractive indices. The loop is repeated until convergence. The approach can be modified for a reflection system or to include estimation of absorption.
MODEL SELECTION FOR SPECTROPOLARIMETRIC INVERSIONS
International Nuclear Information System (INIS)
Asensio Ramos, A.; Manso Sainz, R.; Martínez González, M. J.; Socas-Navarro, H.; Viticchié, B.; Orozco Suárez, D.
2012-01-01
Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles), but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has sometimes led to opposing views of solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.
Inverse Problems and Uncertainty Quantification
Litvinenko, Alexander
2014-01-06
In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
Inverse problem in radionuclide transport
International Nuclear Information System (INIS)
Yu, C.
1988-01-01
The disposal of radioactive waste must comply with the performance objectives set forth in 10 CFR 61 for low-level waste (LLW) and 10 CFR 60 for high-level waste (HLW). To determine probable compliance, the proposed disposal system can be modeled to predict its performance. One of the difficulties encountered in such a study is modeling the migration of radionuclides through a complex geologic medium over the long term. Although many radionuclide transport models exist in the literature, the accuracy of a model's prediction is highly dependent on the model parameters used. The problem of using known parameters in a radionuclide transport model to predict radionuclide concentrations is a direct problem (DP), whereas the reverse of the DP, i.e., the parameter identification problem of determining model parameters from known radionuclide concentrations, is called the inverse problem (IP). In this study, a procedure to solve the IP is tested, using the regression technique. Several nonlinear regression programs are examined, and the best one is recommended. 13 refs., 1 tab
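The DP/IP pairing described above can be illustrated with the simplest transport-like model: exponential decay with an unknown rate, identified by regression. This is a hedged stand-in for the radionuclide transport codes the abstract compares, not their actual formulation:

```python
import math

def fit_decay(times, conc):
    """Identify (C0, k) in C(t) = C0 * exp(-k t) from observed concentrations
    by ordinary least squares on log C, i.e. a minimal regression-based
    solution of the inverse problem."""
    n = len(times)
    ys = [math.log(c) for c in conc]
    xbar, ybar = sum(times) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(times, ys)) \
            / sum((x - xbar) ** 2 for x in times)
    return math.exp(ybar - slope * xbar), -slope   # C0, k

# direct problem: generate concentrations with known C0 = 5, k = 0.3
times = [0, 1, 2, 4, 8]
conc = [5.0 * math.exp(-0.3 * t) for t in times]

c0, k = fit_decay(times, conc)      # inverse problem recovers both parameters
print(round(c0, 6), round(k, 6))    # 5.0 0.3
```

With noisy concentrations the same regression returns estimates with confidence intervals, which is what the nonlinear regression programs in the abstract compete on.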
DEFF Research Database (Denmark)
Brodal, G. S.; Fagerberg, R.; Mailund, T.
2013-01-01
for computing the triplet and parameterized triplet distances have O(n^2) running time, while the previous best algorithms for computing the quartet distance include an O(9^d n log n) time algorithm and an O(n^2.688) time algorithm, where the latter can also compute the parameterized quartet distance. Since d ≤ n......), respectively, and counting how often the induced topologies in the two input trees are different. In this paper we present efficient algorithms for computing these distances. We show how to compute the triplet distance in time O(n log n) and the quartet distance in time O(d n log n), where d is the maximal...... degree of any node in the two trees. Within the same time bounds, our framework also allows us to compute the parameterized triplet and quartet distances, where a parameter is introduced to weight resolved (binary) topologies against unresolved (non-binary) topologies. The previous best algorithm...
Fully probabilistic seismic source inversion – Part 1: Efficient parameterisation
Directory of Open Access Journals (Sweden)
S. C. Stähler
2014-11-01
Full Text Available Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters themselves but also estimates of their uncertainties are of great practical importance. Probabilistic source inversion (Bayesian inference) is well suited to this challenge, provided that the parameter space can be chosen small enough to make Bayesian sampling computationally feasible. We propose a framework for PRobabilistic Inference of Seismic source Mechanisms (PRISM) that parameterises and samples earthquake depth, moment tensor, and source time function efficiently by using information from previous non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel-time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible.
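The empirical-orthogonal-function parameterisation of the source time function amounts to a principal component analysis. A sketch with a synthetic STF catalogue, where the real catalogue of >1000 observed STFs is replaced by random mixtures of three smooth shapes:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "catalogue": 1200 source time functions built from three
# smooth basis shapes plus small noise (a stand-in for observed STFs)
t = np.linspace(0.0, 1.0, 100)
shapes = np.vstack([np.sin(np.pi * t),
                    np.sin(2 * np.pi * t) * t,
                    t * (1 - t) ** 2])
stf = rng.normal(size=(1200, 3)) @ shapes + 0.01 * rng.normal(size=(1200, 100))

# PCA via SVD of the centred catalogue; rows of vt are the empirical
# orthogonal functions (EOFs)
mean = stf.mean(axis=0)
u, s, vt = np.linalg.svd(stf - mean, full_matrices=False)

# each STF becomes a weighted sum of a small number of EOFs; by
# construction 3 of them capture essentially everything here
n_eof = 3
coeff = (stf - mean) @ vt[:n_eof].T      # the low-dimensional parameters
recon = mean + coeff @ vt[:n_eof]
err = np.linalg.norm(stf - recon) / np.linalg.norm(stf)
print(f"relative reconstruction error with {n_eof} EOFs: {err:.4f}")
```

Sampling a handful of EOF weights instead of a full discretised STF is exactly the dimensionality reduction that keeps Bayesian sampling feasible.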
Reverse Universal Resolving Algorithm and inverse driving
DEFF Research Database (Denmark)
Pécseli, Thomas
2012-01-01
Inverse interpretation is a semantics-based, non-standard interpretation of programs. Given a program and a value, an inverse interpreter finds all or one of the inputs that would yield the given value as output under normal forward evaluation. The Reverse Universal Resolving Algorithm is a new v...
Third Harmonic Imaging using a Pulse Inversion
DEFF Research Database (Denmark)
Rasmussen, Joachim; Du, Yigang; Jensen, Jørgen Arendt
2011-01-01
The pulse inversion (PI) technique can be utilized to separate and enhance harmonic components of a waveform for tissue harmonic imaging. While most ultrasound systems can perform pulse inversion, only a few image the 3rd harmonic component. PI pulse subtraction can isolate and enhance the 3rd...
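Why subtraction of the echoes from a pulse and its inverted copy isolates the odd harmonics (fundamental and 3rd) can be seen with a toy polynomial distortion model; the nonlinearity coefficients below are assumptions for illustration, not a tissue model:

```python
import numpy as np

fs, f0 = 100e6, 5e6                   # sample rate and transmit frequency [Hz]
t = np.arange(0.0, 4e-6, 1 / fs)
pulse = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)

def propagate(x):
    """Toy nonlinear medium: the quadratic term creates even harmonics,
    the cubic term creates odd harmonics (coefficients are assumed)."""
    return x + 0.3 * x ** 2 + 0.1 * x ** 3

echo_pos = propagate(pulse)
echo_neg = propagate(-pulse)

pi_sum = echo_pos + echo_neg          # conventional PI: keeps even terms (2nd harmonic)
pi_diff = echo_pos - echo_neg         # PI subtraction: keeps odd terms (f0 and 3*f0)

spec = np.abs(np.fft.rfft(pi_diff))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def band_energy(f, half_width=1e6):
    return spec[np.abs(freqs - f) < half_width].sum()

# the subtracted signal carries energy at f0 and 3*f0, essentially none at 2*f0
print(band_energy(f0) > band_energy(2 * f0), band_energy(3 * f0) > band_energy(2 * f0))
```

By odd symmetry the subtraction cancels every even-order term exactly, so the 3rd harmonic can be filtered out of `pi_diff` without contamination from the (much stronger) 2nd harmonic.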
Metaheuristic optimization of acoustic inverse problems.
van Leijen, A.V.; Rothkrantz, L.; Groen, F.
2011-01-01
Swift solving of geoacoustic inverse problems strongly depends on the application of a global optimization scheme. Given a particular inverse problem, this work aims to answer the questions of how to select an appropriate metaheuristic search strategy and how to configure it for optimal performance.
Inverse Filtering Techniques in Speech Analysis | Nwachuku ...
African Journals Online (AJOL)
inverse filtering' has been applied. The unifying features of these techniques are presented, namely: 1. a basis in the source-filter theory of speech production, 2. the use of a network whose transfer function is the inverse of the transfer function of ...
O'Neil, Patrick M; Theim, Kelly R; Boeka, Abbe; Johnson, Gail; Miller-Kovach, Karen
2012-12-01
Greater use of key self-regulatory behaviors (e.g., self-monitoring of food intake and weight) is associated with greater weight loss within behavioral weight loss treatments, although this association is less established within widely-available commercial weight loss programs. Further, high hedonic hunger (i.e., susceptibility to environmental food cues) may present a barrier to successful behavior change and weight loss, although this has not yet been examined. Adult men and women (N=111, body mass index M±SD=31.5±2.7kg/m(2)) were assessed before and after participating in a 12-week commercial weight loss program. From pre- to post-treatment, reported usage of weight control behaviors improved and hedonic hunger decreased, and these changes were inversely associated. A decrease in hedonic hunger was associated with better weight loss. An improvement in reported weight control behaviors (e.g., self-regulatory behaviors) was associated with better weight loss, and this association was even stronger among individuals with high baseline hedonic hunger. Findings highlight the importance of specific self-regulatory behaviors within weight loss treatment, including a commercial weight loss program developed for widespread community implementation. Assessment of weight control behavioral skills usage and hedonic hunger may be useful to further identify mediators of weight loss within commercial weight loss programs. Future interventions might specifically target high hedonic hunger and prospectively examine changes in hedonic hunger during other types of weight loss treatment to inform its potential impact on sustained behavior change and weight control. Copyright © 2012 Elsevier Ltd. All rights reserved.
New weighting methods for phylogenetic tree reconstruction using multiple loci.
Misawa, Kazuharu; Tajima, Fumio
2012-08-01
Efficient determination of evolutionary distances is important for the correct reconstruction of phylogenetic trees. The performance of the pooled distance required for reconstructing a phylogenetic tree can be improved by applying large weights to appropriate distances for reconstructing phylogenetic trees and small weights to inappropriate distances. We developed two weighting methods, the modified Tajima-Takezaki method and the modified least-squares method, for reconstructing phylogenetic trees from multiple loci. By computer simulations, we found that both of the new methods were more efficient in reconstructing correct topologies than the no-weight method. Hence, we reconstructed hominoid phylogenetic trees from mitochondrial DNA using our new methods, and found that the levels of bootstrap support were significantly increased by the modified Tajima-Takezaki and by the modified least-squares method.
Inverse m-matrices and ultrametric matrices
Dellacherie, Claude; San Martin, Jaime
2014-01-01
The study of M-matrices, their inverses and discrete potential theory is now a well-established part of linear algebra and the theory of Markov chains. The main focus of this monograph is the so-called inverse M-matrix problem, which asks for a characterization of nonnegative matrices whose inverses are M-matrices. We present an answer in terms of discrete potential theory based on the Choquet-Deny Theorem. A distinguished subclass of inverse M-matrices is ultrametric matrices, which are important in applications such as taxonomy. Ultrametricity is revealed to be a relevant concept in linear algebra and discrete potential theory because of its relation with trees in graph theory and mean expected value matrices in probability theory. Remarkable properties of Hadamard functions and products for the class of inverse M-matrices are developed and probabilistic insights are provided throughout the monograph.
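The inverse M-matrix property for ultrametric matrices is easy to verify numerically on a small example; the matrix below is a minimal hand-picked instance, not one from the monograph:

```python
import numpy as np

# a small ultrametric matrix: symmetric, nonnegative, with
# U[i,j] >= min(U[i,k], U[k,j]) for all i, j, k, and dominant diagonal
U = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

Uinv = np.linalg.inv(U)
print(Uinv)   # diagonal 0.75, off-diagonal -0.25

# M-matrix signature: nonpositive off-diagonal entries ...
off_diag = Uinv[~np.eye(3, dtype=bool)]
assert np.all(off_diag <= 0)
# ... and nonnegative row sums (diagonal dominance)
assert np.all(Uinv.sum(axis=1) >= 0)
```

Here U = I + J (J the all-ones matrix), so its inverse is I - J/4 in closed form, making the sign pattern easy to check by hand as well.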
Fast wavelet based sparse approximate inverse preconditioner
Energy Technology Data Exchange (ETDEWEB)
Wan, W.L. [Univ. of California, Los Angeles, CA (United States)
1996-12-31
Incomplete LU factorization is a robust preconditioner for both general and PDE problems, but is unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that a sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. One reason may be that no good sparse approximation exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically show piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for this kind of matrix.
Solving inverse problems of optical microlithography
Granik, Yuri
2005-05-01
The direct problem of microlithography is to simulate printing features on the wafer under given mask, imaging system, and process characteristics. The goal of inverse problems is to find the best mask and/or imaging system and/or process to print the given wafer features. In this study we describe and compare solutions of inverse mask problems. The pixel-based inverse problem of mask optimization (or "layout inversion") is harder than the inverse source problem, especially for partially coherent systems. It can be stated as a non-linear constrained minimization problem over a complex domain, with a large number of variables. We compare the method of Nashold projections, variants of Fienup phase-retrieval algorithms, coherent approximation with deconvolution, local variations, and descent searches. We propose an electrical field caching technique to substantially speed up the searching algorithms. We demonstrate applications to phase-shifted masks, assist features, and maskless printing.
Recurrent Neural Network for Computing Outer Inverse.
Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin
2016-05-01
Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
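The generalized inverses mentioned above are characterised by the Penrose conditions; the sketch below verifies all four of them for the Moore-Penrose inverse computed by SVD (NumPy's `pinv`), not by the paper's recurrent networks:

```python
import numpy as np

rng = np.random.default_rng(42)
a = rng.normal(size=(4, 3))          # a generic rectangular matrix
x = np.linalg.pinv(a)                # Moore-Penrose inverse via SVD

# the four Penrose conditions that uniquely characterise A^+
assert np.allclose(a @ x @ a, a)          # 1) A X A = A
assert np.allclose(x @ a @ x, x)          # 2) X A X = X
assert np.allclose((a @ x).T, a @ x)      # 3) A X is symmetric
assert np.allclose((x @ a).T, x @ a)      # 4) X A is symmetric
print("all four Penrose conditions hold")
```

Outer inverses with prescribed range and null space, the paper's subject, satisfy condition 2 by definition; the Moore-Penrose and Drazin inverses arise as the special cases named in the abstract.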
Forward modeling. Route to electromagnetic inversion
Energy Technology Data Exchange (ETDEWEB)
Groom, R.; Walker, P. [PetRos EiKon Incorporated, Ontario (Canada)
1996-05-01
Inversion of electromagnetic data is a topical subject in the literature, and much time has been devoted to understanding the convergence properties of various inverse methods. The relative lack of success of electromagnetic inversion techniques is partly attributable to difficulties in the kernel forward modeling software. These difficulties come in two broad classes: (1) completeness and robustness, and (2) convergence, execution time and model simplicity. It was demonstrated that when such problems exist in the forward modeling kernel, inversion can fail to generate reasonable results. It was suggested that classical inversion techniques, which are based on minimizing a norm of the error between data and the simulated data, will only be successful when these difficulties in the forward modeling kernels are properly dealt with. 4 refs., 5 figs.
Stochastic Gabor reflectivity and acoustic impedance inversion
Hariri Naghadeh, Diako; Morley, Christopher Keith; Ferguson, Angus John
2018-02-01
To delineate subsurface lithology and estimate the petrophysical properties of a reservoir, it is possible to use acoustic impedance (AI), which is the result of seismic inversion. To convert amplitude to AI, it is vital to remove wavelet effects from the seismic signal in order to obtain a reflection series, and subsequently to transform those reflections to AI. To carry out seismic inversion correctly, it is important not to assume that the seismic signal is stationary; however, all stationary deconvolution methods are designed under that assumption. To increase temporal resolution and interpretability, amplitude compensation and phase correction are inevitable; these are pitfalls of stationary reflectivity inversion. Although stationary reflectivity inversion methods try to estimate the reflectivity series, because of their incorrect assumptions the estimates will not be correct, though they may still be useful. Converting those reflection series to AI and merging them with a low-frequency initial model can help. The aim of this study was to apply non-stationary deconvolution to eliminate time-variant wavelet effects from the signal and to convert the estimated reflection series to absolute AI, with the bias taken from well logs. To carry out this aim, stochastic Gabor inversion in the time domain was used. The Gabor transform provided the signal's time-frequency analysis, and wavelet properties were estimated from different windows. Dealing with different time windows gave the ability to create a time-variant kernel matrix, which was used to remove the wavelet effects from the seismic data. The result was a reflection series that does not rely on the stationarity assumption. The subsequent step was to convert those reflections to AI using well information. Synthetic and real data sets were used to show the ability of the introduced method. The results highlight that the time cost of the seismic inversion is negligible compared with general Gabor inversion in the frequency domain.
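The time-frequency analysis step can be illustrated with a plain Gaussian-windowed DFT (a sketch, not the authors' inversion code; the window length, hop, and width are arbitrary choices): sliding a Gaussian window along the trace and transforming each segment is what lets wavelet properties be estimated per window rather than globally.

```python
import numpy as np

# A bare-bones Gabor (Gaussian-windowed) transform of a 1-D trace.

def gabor_transform(trace, win_len=64, hop=16, sigma=10.0):
    n = len(trace)
    t = np.arange(win_len)
    window = np.exp(-0.5 * ((t - win_len / 2) / sigma) ** 2)
    frames = []
    for start in range(0, n - win_len + 1, hop):
        segment = trace[start:start + win_len] * window
        frames.append(np.fft.rfft(segment))      # spectrum of this window
    return np.array(frames)                      # (n_windows, win_len//2 + 1)

# a nonstationary toy trace: instantaneous frequency increases with time
t = np.linspace(0, 1, 512)
trace = np.sin(2 * np.pi * (10 + 40 * t) * t)
tf = gabor_transform(trace)
print(tf.shape)
```

Each row of `tf` is the local spectrum of one window; in a time-variant deconvolution these per-window spectra are what populate the columns of the time-variant kernel matrix.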
3D joint inversion of gravity-gradient and borehole gravity data
Geng, Meixia; Yang, Qingjie; Huang, Danian
2017-12-01
Borehole gravity is increasingly used in mineral exploration due to the advent of slim-hole gravimeters. Given the full-tensor gradiometry data available nowadays, joint inversion of surface and borehole data is a logical next step. Here, we base our inversions on cokriging, which is a geostatistical method of estimation where the error variance is minimised by applying cross-correlation between several variables. In this study, the density estimates are derived using gravity-gradient data, borehole gravity and known densities along the borehole as a secondary variable and the density as the primary variable. Cokriging is non-iterative and therefore is computationally efficient. In addition, cokriging inversion provides estimates of the error variance for each model, which allows direct assessment of the inverse model. Examples are shown involving data from a single borehole, from multiple boreholes, and combinations of borehole gravity and gravity-gradient data. The results clearly show that the depth resolution of gravity-gradient inversion can be improved significantly by including borehole data in addition to gravity-gradient data. However, the resolution of borehole data falls off rapidly as the distance between the borehole and the feature of interest increases. In the case where the borehole is far away from the target of interest, the inverted result can be improved by incorporating gravity-gradient data, especially all five independent components for inversion.
Mineral inversion for element capture spectroscopy logging based on optimization theory
Zhao, Jianpeng; Chen, Hui; Yin, Lu; Li, Ning
2017-12-01
Understanding the mineralogical composition of a formation is an essential step in the petrophysical evaluation of petroleum reservoirs. Geochemical logging tools can provide quantitative measurements of a wide range of elements. In this paper, element capture spectroscopy (ECS) was taken as an example and an optimization method was adopted to solve the mineral inversion problem for ECS. The method used the conversion relationships between elements and minerals as response equations, took into account the statistical uncertainty of the element measurements, and established an optimization function for ECS. The objective function value and reconstructed elemental logs were used to check the robustness and reliability of the inversion method. Finally, the inverted mineral results showed good agreement with X-ray diffraction laboratory data. The accurate conversion of elemental dry weights to mineral dry weights forms the foundation for subsequent applications based on ECS.
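The element-to-mineral step can be sketched as a small linear inversion. The response matrix below is hypothetical (illustrative elemental fractions, not tool calibration values), and the real workflow additionally weights each element by its statistical uncertainty and constrains the mineral fractions to be non-negative and sum to one.

```python
import numpy as np

# With a response matrix R giving the dry-weight fraction of each element
# in each mineral, the measured elemental dry weights e satisfy e ≈ R @ m,
# and the mineral dry weights m are recovered by least squares.

# rows: elements (Si, Ca, Fe); columns: minerals (quartz, calcite, pyrite)
R = np.array([
    [0.467, 0.0,   0.0  ],   # Si content of each mineral
    [0.0,   0.400, 0.0  ],   # Ca content
    [0.0,   0.0,   0.465],   # Fe content
])
m_true = np.array([0.6, 0.3, 0.1])     # assumed mineral dry weights
e = R @ m_true                          # "measured" elemental logs

m_est, *_ = np.linalg.lstsq(R, e, rcond=None)
e_rec = R @ m_est                       # reconstructed elemental logs
print(np.allclose(m_est, m_true))
```

Comparing `e_rec` against the measured logs is the consistency check the abstract describes: a large residual flags an inadequate mineral model.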
The intractability of computing the Hamming distance
Manthey, Bodo; Reischuk, Rüdiger
2005-01-01
Given a string x and a language L, the Hamming distance of x to L is the minimum Hamming distance of x to any string in L. The edit distance of a string to a language is analogously defined. First, we prove that there is a language in $AC^0$ such that both Hamming and edit distance to this language
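For a finite, explicitly listed language the definition is straightforward to compute by brute force, as in this sketch; the hardness result concerns succinctly described languages (even ones in AC⁰), where enumerating L is infeasible.

```python
# Hamming distance of a string x to a finite language L:
# the minimum over strings in L of the positionwise mismatch count
# (defined only between equal-length strings).

def hamming(x, y):
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def hamming_to_language(x, L):
    return min(hamming(x, w) for w in L if len(w) == len(x))

L = {"0000", "1111", "0011"}
print(hamming_to_language("0101", L))   # every word in L differs in 2 positions
```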
[Distance learning in medical education].
Kudumović, Mensura; Masić, Izet; Novo, Ahmed; Masic, Zlatan; Omerhodzic, Ibrahim
2004-01-01
Distance learning represents an educational technique that occupies an increasingly significant place in current medical education of healthcare workers internationally, especially in the domains of postgraduate and continuing medical education. It is an educational technique of significant effectiveness, which must have at its disposal an adequate technological infrastructure, prior training of both lecturers and users, adapted teaching plans, and mechanisms for the evaluation of knowledge. Through a rich choice of technological models, and in contrast to traditional methods of learning, it enables the simultaneous education of a great number of students of various profiles, access to all relevant databases, and mechanisms for the evaluation of knowledge by institutions and lecturers.
Tuning the Cepheid distance scale
Mateo, Mario
1992-01-01
Ongoing observational programs (both from the ground and space) will provide a significantly larger sample of galaxies with well-studied Cepheids both within the Local Group and in more distant galaxies. Recent efforts in the calibration of the Cepheid distance scale utilizing Cepheids in star clusters in the Galaxy and in the Magellanic Clouds are described. Some of the significant advantages of utilizing LMC Cepheids in particular are emphasized, and the current status of the field is summarized.
Distance probes of dark energy
Energy Technology Data Exchange (ETDEWEB)
Kim, A. G.; Padmanabhan, N.; Aldering, G.; Allen, S. W.; Baltay, C.; Cahn, R. N.; D’Andrea, C. B.; Dalal, N.; Dawson, K. S.; Denney, K. D.; Eisenstein, D. J.; Finley, D. A.; Freedman, W. L.; Ho, S.; Holz, D. E.; Kasen, D.; Kent, S. M.; Kessler, R.; Kuhlmann, S.; Linder, E. V.; Martini, P.; Nugent, P. E.; Perlmutter, S.; Peterson, B. M.; Riess, A. G.; Rubin, D.; Sako, M.; Suntzeff, N. V.; Suzuki, N.; Thomas, R. C.; Wood-Vasey, W. M.; Woosley, S. E.
2015-03-01
This document presents the results from the Distances subgroup of the Cosmic Frontier Community Planning Study (Snowmass 2013). We summarize the current state of the field as well as future prospects and challenges. In addition to the established probes using Type Ia supernovae and baryon acoustic oscillations, we also consider prospective methods based on clusters, active galactic nuclei, gravitational wave sirens and strong lensing time delays.
Qualitative Visualization of Distance Information
Heitzig, Jobst
2002-01-01
Different types of two- and three-dimensional representations of a finite metric space are studied that focus on the accurate representation of the linear order among the distances rather than their actual values. Lower and upper bounds for representability probabilities are produced by experiments including random generation, a rubber-band algorithm for accuracy optimization, and automatic proof generation. It is proved that both farthest neighbour representations and cluster tree representa...
Nominal Ocular Dazzle Distance (NODD)
2015-02-23
for laser eye dazzle is presented together with calculations for laser safety applications based on the newly defined Maximum Dazzle Exposure (MDE) ... quantify the impact of laser eye dazzle on human performance. This allows the calculation of the MDE, the threshold laser irradiance below which a ... target can be detected, and the NODD, the minimum distance for the visual detection of a target in the presence of laser dazzle. The model is suitable
Support Services for Distance Education
Directory of Open Access Journals (Sweden)
Sandra Frieden
1999-01-01
Full Text Available The creation and operation of a distance education support infrastructure requires the collaboration of virtually all administrative departments whose activities deal with students and faculty, and all participating academic departments. Implementation can build on where the institution is and design service-oriented strategies that strengthen institutional support and commitment. Issues to address include planning, faculty issues and concerns, policies and guidelines, approval processes, scheduling, training, publicity, information-line operations, informational materials, orientation and registration processes, class coordination and support, testing, evaluations, receive site management, partnerships, budgets, staffing, library and e-mail support, and different delivery modes (microwave, compressed video, radio, satellite, public television/cable, video tape and online. The process is ongoing and increasingly participative as various groups on campus begin to get involved with distance education activities. The distance education unit must continuously examine and revise its processes and procedures to maintain the academic integrity and service excellence of its programs. It's a daunting prospect to revise the way things have been done for many years, but each department has an opportunity to respond to new ways of serving and reaching students.
Teaching Chemistry via Distance Education
Boschmann, Erwin
2003-06-01
This paper describes a chemistry course taught at Indiana University Purdue University, Indianapolis via television, with a Web version added later. The television format is a delivery technology; the Web is an engagement technology and is preferred since it requires student participation. The distance-laboratory component presented the greatest challenge since laboratories via distance education are not a part of the U.S. academic culture. Appropriate experiments have been developed with the consultation of experts from The Open University in the United Kingdom, Athabasca University in Canada, and Monash University in Australia. The criteria used in the development of experiments are: (1) they must be credible academic experiences equal to or better than those used on campus, (2) they must be easy to perform without supervision, (3) they must be safe, and (4) they must meet all legal requirements. An evaluation of the program using three different approaches is described. The paper concludes that technology-mediated distance education students do as well as on-campus students, but drop out at a higher rate. It is very important to communicate with students frequently, and technology tools ought to be used only if good pedagogy is enhanced by their use.
Energy Technology Data Exchange (ETDEWEB)
Dobranszky, G.
2005-12-15
Stratigraphic modeling aims at rebuilding the history of sedimentary basins by simulating the processes of erosion, transport and deposit of sediments using physical models. The objective is to determine the location of the bed-rocks likely to contain the organic matter, the location of the porous rocks that could trap the hydrocarbons during their migration and the location of the impermeable rocks likely to seal the reservoir. The model considered within this thesis is based on a multi-lithological diffusive transport model and applies to large scales of time and space. Due to the complexity of the phenomena and scales considered, none of the model parameters is directly measurable. Therefore it is essential to invert them. The standard approach, which consists in inverting all the parameters by minimizing a cost function using a gradient method, proved very sensitive to the choice of the parameterization, to the weights given to the various terms of the cost function (bearing on data of a very diverse nature) and to the numerical noise. These observations led us to give up this method and to carry out the inversion step by step by decoupling the parameters. This decoupling is not obtained by fixing the parameters but by making several assumptions on the model, resulting in a range of reduced but relevant models. In this thesis, we show how these models enable us to invert all the parameters in a robust and interactive way. (author)
Bayesian ISOLA: new tool for automated centroid moment tensor inversion
Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John
2017-04-01
Focal mechanisms are important for understanding the seismotectonics of a region, and they serve as a basic input for seismic hazard assessment. Usually, the point source approximation and the moment tensor (MT) are used. We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection in which station components with instrumental disturbances or low signal-to-noise ratios are rejected, and full-waveform inversion in a space-time grid around a provided hypocenter. The method is innovative in the following aspects: (i) The CMT inversion is fully automated; no user interaction is required, although the details of the process can be visually inspected later on the many figures that are automatically plotted. (ii) The automated process includes detection of disturbances based on the MouseTrap code, so disturbed recordings do not affect the inversion. (iii) A data covariance matrix calculated from pre-event noise yields an automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequencies. (iv) A Bayesian approach is used, so not only the best solution is obtained but also the posterior probability density function. (v) A space-time grid search, effectively combined with least-squares inversion of the moment tensor components, speeds up the inversion and yields more accurate results than stochastic methods. The method has been tested on synthetic and observed data, by comparison with manually processed moment tensors of all events with M ≥ 3 in the Swiss catalogue over 16 years, using data available at the Swiss data center (http://arclink.ethz.ch). The quality of the results of the presented automated process is comparable with careful manual processing of data. The software package, programmed in Python, has been designed to be as versatile as possible in
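Points (iii) and (v) can be sketched together as a noise-weighted least-squares inversion. This is my illustration, not the ISOLA code: the Green's-function matrix is random stand-in data, and the covariance is taken as diagonal for brevity.

```python
import numpy as np

# Given a Green's-function matrix G mapping moment-tensor components m to
# waveform samples d, and a data covariance C estimated from pre-event
# noise, the weighted least-squares solution down-weights noisy recordings:
#   m = (G^T C^-1 G)^-1 G^T C^-1 d

rng = np.random.default_rng(0)
G = rng.normal(size=(40, 6))          # 40 waveform samples, 6 MT components
m_true = np.array([1.0, -0.5, 0.2, 0.3, -0.1, 0.4])

noise_std = np.full(40, 0.01)
noise_std[20:] = 1.0                  # the second "station" is very noisy
d = G @ m_true + rng.normal(size=40) * noise_std

Cinv = np.diag(1.0 / noise_std**2)    # inverse data covariance (diagonal here)
m_est = np.linalg.solve(G.T @ Cinv @ G, G.T @ Cinv @ d)
print(np.allclose(m_est, m_true, atol=0.05))
```

Because the noisy samples carry weights four orders of magnitude smaller, the estimate is effectively controlled by the quiet recordings, which is exactly the automated-weighting behaviour the abstract describes.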
Developments in inverse photoemission spectroscopy
International Nuclear Information System (INIS)
Sheils, W.; Leckey, R.C.G.; Riley, J.D.
1996-01-01
In the 1950's and 1960's, Photoemission Spectroscopy (PES) established itself as the major technique for the study of the occupied electronic energy levels of solids. During this period the field divided into two branches: X-ray Photoemission Spectroscopy (XPS) for photon energies greater than ∼1000 eV, and Ultra-violet Photoemission Spectroscopy (UPS) for photon energies below ∼100 eV. By the 1970's XPS and UPS had become mature techniques. Like XPS, Bremsstrahlung Isochromat Spectroscopy (BIS, at x-ray energies) does not have the momentum-resolving ability of UPS that has contributed much to the understanding of the occupied band structures of solids. BIS moved into a new energy regime in 1977 when Dose employed a Geiger-Mueller tube to obtain density of unoccupied states data from a tantalum sample at a photon energy of ∼9.7 eV. At similar energies, the technique has since become known as Inverse Photoemission Spectroscopy (IPS), in acknowledgment of its complementary relationship to UPS and to distinguish it from the higher energy BIS. Drawing on decades of UPS expertise, IPS has quickly moved into areas of interest where UPS has been applied; metals, semiconductors, layer compounds, adsorbates, ferromagnets, and superconductors. At La Trobe University an IPS facility has been constructed. This presentation reports on developments in the experimental and analytical techniques of IPS that have been made there. The results of a study of the unoccupied bulk and surface bands of GaAs are presented
A Novel Parallel Algorithm for Edit Distance Computation
Directory of Open Access Journals (Sweden)
Muhammad Murtaza Yousaf
2018-01-01
Full Text Available The edit distance between two sequences is the minimum number of weighted transformation operations required to transform one string into the other; the operations are insert, remove, and substitute. A dynamic programming solution to find the edit distance exists, but it becomes computationally intensive when the strings become very long. This work presents a novel parallel algorithm to solve the edit distance problem of string matching. The algorithm is based on resolving dependencies in the dynamic programming solution of the problem, and it is able to compute each row of the edit distance table in parallel. In this way, it becomes possible to compute the complete table in min(m, n) iterations for strings of size m and n, whereas the state-of-the-art parallel algorithm solves the problem in max(m, n) iterations. The proposed algorithm also increases the amount of parallelism in each of its iterations and is capable of exploiting spatial locality in its implementation. Additionally, the algorithm works in a load-balanced way that further improves its performance. The algorithm is implemented for multicore systems with shared memory. An OpenMP implementation shows linear speedup and better execution time compared with the state-of-the-art parallel approach, and its efficiency also proves better than that of its competitor.
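A baseline sequential version makes the dependency structure concrete (this is the standard textbook DP with unit weights, not the paper's parallel algorithm): cell (i, j) needs (i-1, j), (i, j-1) and (i-1, j-1), which is exactly the coupling the paper reworks so that an entire row can be computed in one parallel step.

```python
# Row-by-row dynamic-programming edit distance with unit operation weights.
# Only the previous row is kept, which is also the memory layout a
# row-parallel scheme would distribute across threads.

def edit_distance(s, t):
    m, n = len(s), len(t)
    prev = list(range(n + 1))            # row 0: distance from the empty prefix
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1   # substitute
            curr[j] = min(prev[j] + 1,                # remove from s
                          curr[j - 1] + 1,            # insert into s
                          prev[j - 1] + cost)
        prev = curr
    return prev[n]

print(edit_distance("kitten", "sitting"))  # classic example: 3
```

The inner loop's dependence on `curr[j - 1]` is what prevents naive row parallelism; resolving that dependence is the core contribution the abstract describes.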
Pareto-Optimal Multi-objective Inversion of Geophysical Data
Schnaidt, Sebastian; Conway, Dennis; Krieger, Lars; Heinson, Graham
2018-01-01
In the process of modelling geophysical properties, jointly inverting different data sets can greatly improve model results, provided that the data sets are compatible, i.e., sensitive to similar features. Such a joint inversion requires a relationship between the different data sets, which can either be analytic or structural. Classically, the joint problem is expressed as a scalar objective function that combines the misfit functions of multiple data sets and a joint term which accounts for the assumed connection between the data sets. This approach suffers from two major disadvantages: first, it can be difficult to assess the compatibility of the data sets and second, the aggregation of misfit terms introduces a weighting of the data sets. We present a Pareto-optimal multi-objective joint inversion approach based on an existing genetic algorithm. The algorithm treats each data set as a separate objective, avoiding forced weighting and generating curves of the trade-off between the different objectives. These curves are analysed by their shape and evolution to evaluate data set compatibility. Furthermore, the statistical analysis of the generated solution population provides valuable estimates of model uncertainty.
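The Pareto bookkeeping at the heart of such an algorithm can be sketched in a few lines (an illustration, not the authors' genetic algorithm): given one misfit per data set for each candidate model, keep the non-dominated models, i.e. those for which no other model is at least as good on both objectives and strictly better on one.

```python
# Extract the non-dominated (Pareto) front from a population of
# two-objective misfit pairs (lower is better on both objectives).

def pareto_front(points):
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

population = [(1.0, 5.0), (2.0, 2.0), (3.0, 3.0), (5.0, 1.0), (4.0, 4.0)]
print(pareto_front(population))  # [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0)]
```

The shape and evolution of this front across generations is what the paper analyses to judge data-set compatibility: a front that collapses toward a single point suggests compatible data sets, while a stubbornly broad trade-off curve signals conflict.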
Weighted conditional least-squares estimation
International Nuclear Information System (INIS)
Booth, J.G.
1987-01-01
A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, more well-known, estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered
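The two-stage idea can be illustrated with a toy heteroscedastic regression (my example, not the paper's branching-process setting): stage 1 fits by ordinary least squares, stage 2 estimates the conditional variances from the stage-1 residuals and refits with weights equal to their inverses, as in estimated generalized least squares.

```python
import numpy as np

# Toy two-stage weighted least squares on data whose conditional
# variance grows with x.

rng = np.random.default_rng(1)
x = np.linspace(1.0, 10.0, 200)
y = 3.0 + 2.0 * x + rng.normal(size=200) * (0.2 * x)   # noise std = 0.2*x

X = np.column_stack([np.ones_like(x), x])

# stage 1: ordinary (unweighted) least squares
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_ols

# stage 2: model the conditional std as linear in x from |residuals|,
# then weight each observation by the inverse estimated variance
a = np.linalg.lstsq(X, np.abs(resid), rcond=None)[0]
sigma_hat = np.clip(X @ a, 1e-3, None)                 # keep weights finite
w = 1.0 / sigma_hat**2
Xw = X * w[:, None]
beta_wls = np.linalg.solve(X.T @ Xw, Xw.T @ y)         # (X^T W X) b = X^T W y

print(np.allclose(beta_wls, [3.0, 2.0], atol=0.2))
```

Down-weighting the high-variance observations is what makes the stage-2 estimator more efficient than the stage-1 fit, which is the asymptotic point the abstract makes.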
Oral Lactobacillus Counts Predict Weight Gain Susceptibility
DEFF Research Database (Denmark)
Rosing, Johanne Aviaja; Walker, Karen Christina; Jensen, Benjamin Anderschou Holbech
2017-01-01
the association between the level of oral Lactobacillus and the subsequent 6-year weight change in a healthy population of 322 Danish adults aged 35-65 years at baseline. Design: Prospective observational study. Results: In unadjusted analysis the level of oral Lactobacillus was inversely associated ... with subsequent 6-year change in BMI. A statistically significant interaction between the baseline level of oral Lactobacillus and the consumption of complex carbohydrates was found, e.g. a high oral Lactobacillus count predicted weight loss for those with a low intake of complex carbohydrates, while a medium ... intake of complex carbohydrates predicted diminished weight gain. A closer examination of these relations showed that BMI change and Lactobacillus level were unrelated for those with high complex carbohydrate consumption. Conclusion: A high level of oral Lactobacillus seems related to weight loss among ...
Verhoef, Sanne P M; Camps, Stefan G J A; Gonnissen, Hanne K J; Westerterp, Klaas R; Westerterp-Plantenga, Margriet S
2013-07-01
An inverse relation between sleep duration and body mass index (BMI) has been shown. We assessed the relation between changes in sleep duration and changes in body weight and body composition during weight loss. A total of 98 healthy subjects (25 men), aged 20-50 y and with BMI (in kg/m(2)) from 28 to 35, followed a 2-mo very-low-energy diet that was followed by a 10-mo period of weight maintenance. Body weight, body composition (measured by using deuterium dilution and air-displacement plethysmography), eating behavior (measured by using a 3-factor eating questionnaire), physical activity (measured by using the validated Baecke's questionnaire), and sleep (estimated by using a questionnaire with the Epworth Sleepiness Scale) were assessed before and immediately after weight loss and 3- and 10-mo follow-ups. The average weight loss was 10% after 2 mo of dieting and 9% and 6% after 3- and 10-mo follow-ups, respectively. Daytime sleepiness and time to fall asleep decreased during weight loss. Short (≤7 h) and average (>7 to weight loss. This change in sleep duration was concomitantly negatively correlated with the change in BMI during weight loss and after the 3-mo follow-up and with the change in fat mass after the 3-mo follow-up. Sleep duration benefits from weight loss or vice versa. Successful weight loss, loss of body fat, and 3-mo weight maintenance in short and average sleepers are underscored by an increase in sleep duration or vice versa. This trial was registered at clinicaltrials.gov as NCT01015508.
Modeling birth weight neonates and associated factors
Directory of Open Access Journals (Sweden)
Mansour Rezaei
2017-01-01
Full Text Available Background: Neonates with abnormal weight are at risk of increased mortality and morbidity, and many factors affect pregnancy outcome. Because of the importance and vital role of birth weight, in this study some of the factors associated with birth weight in a sample of Iranian neonates were investigated. Materials and Methods: In this cross-sectional study, 245 newborns in a sample of Iranian neonates in the year 2013 were selected, and characteristics of the neonates and their mothers were recorded. Birth weights were registered using a neonatal scale. To identify the direct and indirect factors affecting birth weight, we used path analysis (PA) with IBM AMOS and SPSS software. Results: The mean ± standard deviation of weight in girls (3200 ± 421 g) was significantly lower than in boys (3310 ± 444 g) (P = 0.04). Gestational age (P < 0.001), birth rank (P = 0.012), interval from the previous pregnancy (P = 0.028), and maternal weight (P = 0.04) had statistically significant relationships with birth weight. In the final PA model, gestational age had the highest total effect, and type of delivery, mediated through gestational age, had the highest indirect effect on birth weight. Conclusion: Gestational age, sex, interval from the previous pregnancy, maternal weight, type of delivery, number of abortions, and birth rank were related to birth weight. The timing of delivery, the avoidance of unnecessary cesarean deliveries, and other related factors should receive further consideration from childbirth experts; in addition, the factors affecting these variables should be carefully identified and, as far as possible, prevented.
International Nuclear Information System (INIS)
McMurray, J. S.; Williams, C. C.
1998-01-01
Scanning Capacitance Microscopy (SCM) is capable of providing two-dimensional information about dopant and carrier concentrations in semiconducting devices. This information can be used to calibrate models used in the simulation of these devices prior to manufacturing and to develop and optimize the manufacturing processes. To provide information for future generations of devices, ultra-high spatial accuracy (<10 nm) will be required. One method, which potentially provides a means to obtain these goals, is inverse modeling of SCM data. Current semiconducting devices have large dopant gradients. As a consequence, the capacitance probe signal represents an average over the local dopant gradient. Conversion of the SCM signal to dopant density has previously been accomplished with a physical model which assumes that no dopant gradient exists in the sampling area of the tip. The conversion of data using this model produces results for abrupt profiles which do not have adequate resolution and accuracy. A new inverse model and iterative method has been developed to obtain higher resolution and accuracy from the same SCM data. This model has been used to simulate the capacitance signal obtained from one and two-dimensional ideal abrupt profiles. This simulated data has been input to a new iterative conversion algorithm, which has recovered the original profiles in both one and two dimensions. In addition, it is found that the shape of the tip can significantly impact resolution. Currently SCM tips are found to degrade very rapidly. Initially the apex of the tip is approximately hemispherical, but quickly becomes flat. This flat region often has a radius of about the original hemispherical radius. This change in geometry causes the silicon directly under the disk to be sampled with approximately equal weight. In contrast, a hemispherical geometry samples most strongly the silicon centered under the SCM tip and falls off quickly with distance from the tip's apex. Simulation
Photoinduced localization and decoherence in inversion symmetric molecules
Energy Technology Data Exchange (ETDEWEB)
Langer, Burkhard, E-mail: langer@gpta.de [Physikalische und Theoretische Chemie, Freie Universitaet Berlin, Takustrasse 3, D-14195 Berlin (Germany); Ueda, Kiyoshi [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, Sendai 980-8577 (Japan); Al-Dossary, Omar M. [Physics Department, College of Science, King Saud University, Riyadh 11451 (Saudi Arabia); Becker, Uwe [Physics Department, College of Science, King Saud University, Riyadh 11451 (Saudi Arabia); Fritz-Haber-Institut der Max-Planck-Gesellschaft, Faradayweg 4-6, D-14195 Berlin (Germany)
2011-04-15
Coherence of particles in form of matter waves is one of the basic properties of nature which distinguishes classical from quantum behavior. This is a direct consequence of the particle-wave dualism. It is the wave-like nature, which gives rise to coherence, whereas particle-like behavior results from decoherence. If two quantum objects are coherently coupled with respect to a particular variable, even over long distances, one speaks of entanglement. The study of entanglement is nowadays one of the most exciting research fields in physics with enormous impact on the most innovative development in information technology, the development of a future quantum computer. The loss of coherence by decoherence processes may occur due to momentum kicks or thermal heating. In this paper we report on a further decoherence process which occurs in dissociating inversion symmetric molecules due to the superposition of orthogonal symmetry states in the excitation along with freezing of the electron tunneling process afterwards.
SQUIDs and inverse problem techniques in nondestructive evaluation of metals
Bruno, A C
2001-01-01
Superconducting quantum interference devices (SQUIDs) coupled to gradiometers were used to detect flaws in metals. We detected flaws in aluminium samples carrying current, measuring fields at lift-off distances up to one order of magnitude larger than the size of the flaw. Configured as a susceptometer, we detected surface-breaking flaws in steel samples by measuring the distortion of the applied magnetic field. We also used spatial filtering techniques to enhance the visualization of the magnetic field due to the flaws. To assess flaw severity, we used the generalized inverse method and singular value decomposition to reconstruct small spherical inclusions in steel. In addition, finite elements and optimization techniques were used to image complex-shaped flaws.
Convex blind image deconvolution with inverse filtering
Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong
2018-03-01
Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
Mindfulness Approaches and Weight Loss, Weight Maintenance, and Weight Regain.
Dunn, Carolyn; Haubenreiser, Megan; Johnson, Madison; Nordby, Kelly; Aggarwal, Surabhi; Myer, Sarah; Thomas, Cathy
2018-03-01
There is an urgent need for effective weight management techniques, as more than one third of US adults are overweight or obese. Recommendations for weight loss include a combination of reducing caloric intake, increasing physical activity, and behavior modification. Behavior modification includes mindful eating or eating with awareness. The purpose of this review was to summarize the literature and examine the impact of mindful eating on weight management. The practice of mindful eating has been applied to the reduction of food cravings, portion control, body mass index, and body weight. Past reviews evaluating the relationship between mindfulness and weight management did not focus on change in mindful eating as the primary outcome or mindful eating as a measured variable. This review demonstrates strong support for inclusion of mindful eating as a component of weight management programs and may provide substantial benefit to the treatment of overweight and obesity.
From Moon-fall to motions under inverse square laws
International Nuclear Information System (INIS)
Foong, S K
2008-01-01
The motion of two bodies, along a straight line, under the inverse square law of gravity is considered in detail, progressing from simpler cases to more complex ones: (a) one body fixed and one free, (b) both bodies free with identical masses, (c) both bodies free with different masses and (d) the inclusion of electrostatic forces for both bodies free with different masses. The equations of motion (EOM) are derived starting from Newton's second law or from conservation of energy. They are then reduced to dimensionless EOM using appropriate scales for time and distance. Solutions of the dimensionless EOM as well as the original EOM are given. The time interval for the bodies to fall is expressed as a function of the distance fallen, and formulae for the inverse relation are obtained. The coalescence times for the different cases are (a) t = (π/(2√2))√(L³/(Gm₁)), where L is the initial separation of the two bodies and m₁ is the mass of the fixed body, (b) and (c) t = (π/(2√2))√(L³/(Gm_T)), where m_T is the total mass of the two bodies, and (d) t = (π/(2√2))√(L³/[Gm_T(1 − Λ)]), where Λ = kq₁q₂/(Gm₁m₂) is a measure of the ratio of the electrostatic force to gravity. The last formula may also be used when Λ ≥ 1, with the interpretation that there is no collision if t is infinite or imaginary. We also discuss this motion along the straight line as a special case of the general elliptic motion of two bodies. I believe that this paper will be useful to university tutors as well as undergraduate and even graduate students who prefer to consider the special case before the general case, and their relationship.
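The case (b)/(c) coalescence-time formula can be checked numerically; plugging in the Earth-Moon separation and masses (standard values, not from the paper) gives the classic "Moon-fall" time of roughly 4.8 days.

```python
import math

# Coalescence time t = (pi / (2*sqrt(2))) * sqrt(L**3 / (G * m_T)) for two
# free bodies released from rest at separation L (cases (b) and (c) above).
G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
L = 3.844e8               # mean Earth-Moon distance, m
m_T = 5.972e24 + 7.35e22  # total Earth + Moon mass, kg

t = (math.pi / (2 * math.sqrt(2))) * math.sqrt(L**3 / (G * m_T))
print(t / 86400.0)        # days until the Moon would "fall" into the Earth
```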
3rd Annual Workshop on Inverse Problems
2015-01-01
This proceedings volume is based on papers presented at the Third Annual Workshop on Inverse Problems, which was organized by the Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, and took place in May 2013 in Stockholm. The purpose of this workshop was to present new analytical developments and numerical techniques for solution of inverse problems for a wide range of applications in acoustics, electromagnetics, optical fibers, medical imaging, geophysics, etc. The contributions in this volume reflect these themes and will be beneficial to researchers who are working in the area of applied inverse problems.
Inverse Raman effect: applications and detection techniques
International Nuclear Information System (INIS)
Hughes, L.J. Jr.
1980-08-01
The processes underlying the inverse Raman effect are qualitatively described by comparing it to the more familiar phenomena of conventional and stimulated Raman scattering. An expression is derived for the inverse Raman absorption coefficient, and its relationship to the stimulated Raman gain is obtained. The power requirements of the two fields are examined qualitatively and quantitatively. The assumption that the inverse Raman absorption coefficient is constant over the interaction length is examined. Advantages of the technique are discussed and a brief survey of reported studies is presented.
Multiparameter Optimization for Electromagnetic Inversion Problem
Directory of Open Access Journals (Sweden)
M. Elkattan
2017-10-01
Full Text Available Electromagnetic (EM) methods have been extensively used in geophysical investigations such as mineral and hydrocarbon exploration as well as in geological mapping and structural studies. In this paper, we developed an inversion methodology for electromagnetic data to determine the physical parameters of a set of horizontal layers. We computed the forward model using the transmission line method. In the inversion part, we solved a multiparameter optimization problem in which the parameters are the conductivity, dielectric constant, and permeability of each layer. The optimization problem was solved by a simulated annealing approach. The inversion methodology was tested using a set of models representing common geological formations.
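A toy version of the simulated annealing step might look as follows; the forward model is an invented smooth function standing in for the transmission-line computation, and the cooling schedule is a common default rather than the authors' choice.

```python
import math
import random

def forward(p):
    # Invented smooth "layer response" standing in for the transmission-line
    # forward model; p = (conductivity-like, permittivity-like) parameters.
    return [math.exp(-p[0] * x) + p[1] * x for x in (0.5, 1.0, 1.5, 2.0)]

def misfit(p, data):
    return sum((f - d) ** 2 for f, d in zip(forward(p), data))

random.seed(0)
data = forward([1.2, 0.3])            # synthetic "observed" responses

p = [2.0, 1.0]                        # initial guess
cost = misfit(p, data)
best, best_cost = p[:], cost
T = 1.0                               # initial temperature
for _ in range(5000):
    q = [v + random.gauss(0.0, 0.1) for v in p]   # random perturbation
    c = misfit(q, data)
    # accept all downhill moves, uphill moves with Boltzmann probability
    if c < cost or random.random() < math.exp(-(c - cost) / T):
        p, cost = q, c
        if c < best_cost:
            best, best_cost = q[:], c
    T *= 0.999                        # geometric cooling

print(best, best_cost)
```

Keeping the best-so-far state is a standard safeguard, since the final accepted state may still be a slightly uphill move.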
Population inversion in a stationary recombining plasma
International Nuclear Information System (INIS)
Otsuka, M.
1980-01-01
Population inversion, which occurs in a recombining plasma when a stationary He plasma is brought into contact with a neutral gas, is examined. With hydrogen as a contact gas, noticeable inversion between low-lying levels of H has been found. The overpopulation density is of the order of 10⁸ cm⁻³, which is much higher than that (≈10⁵ cm⁻³) obtained previously with He as a contact gas. Relations between these experimental results and the conditions for population inversion are discussed with the CR model.
Inverse design methods for radiative transfer systems
International Nuclear Information System (INIS)
Daun, K.J.; Howell, J.R.
2005-01-01
Radiant enclosures used in industrial processes have traditionally been designed by trial-and-error, a technique that usually demands considerable time to find a solution of limited quality. As an alternative, designers have recently adopted optimization and inverse methodologies to solve design problems involving radiative transfer; the optimization methodology solves the inverse problem implicitly by transforming it into a multivariable minimization problem, while the inverse design methodology solves the problem explicitly using regularization. This paper presents the details of both methodologies, and demonstrates them by solving for the optimal heater settings in an industrially relevant radiant enclosure design problem
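The explicit (regularized) inverse methodology can be sketched for a linear system; the exchange-factor matrix below is a random placeholder, not a real radiosity kernel.

```python
import numpy as np

# Explicit inverse design with Tikhonov regularization: given a desired heat
# flux b on the design surface and a (made-up) kernel A mapping heater powers
# to that flux, solve min ||A x - b||^2 + lam * ||x||^2 in closed form.
rng = np.random.default_rng(2)
A = np.abs(rng.standard_normal((30, 8)))     # placeholder exchange-factor matrix
x_true = np.linspace(1.0, 2.0, 8)            # "true" heater settings
b = A @ x_true                               # desired flux on design surface

lam = 1e-6                                   # regularization parameter
x = np.linalg.solve(A.T @ A + lam * np.eye(8), A.T @ b)
print(np.linalg.norm(A @ x - b))
```

The regularization term stabilizes the solve when `A` is ill-conditioned, at the cost of a small bias controlled by `lam`.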
The factorization method for inverse problems
Kirsch, Andreas
2008-01-01
The factorization method is a relatively new method for solving certain types of inverse scattering problems and problems in tomography. Aimed at students and researchers in Applied Mathematics, Physics and Engineering, this text introduces the reader to this promising approach for solving important classes of inverse problems. The wide applicability of this method is discussed by choosing typical examples, such as inverse scattering problems for the scalar Helmholtz equation, a scattering problem for Maxwell's equation, and a problem in impedance and optical tomography. The last section of the
Geoacoustic inversion using combustive sound source signals.
Potty, Gopu R; Miller, James H; Wilson, Preston S; Lynch, James F; Newhall, Arthur
2008-09-01
Combustive sound source (CSS) data collected on single hydrophone receiving units, in water depths ranging from 65 to 110 m, during the Shallow Water 2006 experiment clearly show modal dispersion effects and are suitable for modal geoacoustic inversions. CSS shots were set off at 26 m depth in 100 m of water. The inversions performed are based on an iterative scheme using dispersion-based short time Fourier transform in which each time-frequency tiling is adaptively rotated in the time-frequency plane, depending on the local wave dispersion. Results of the inversions are found to compare favorably to local core data.
BOOK REVIEW: Inverse Problems. Activities for Undergraduates
Yamamoto, Masahiro
2003-06-01
This book is a valuable introduction to inverse problems. In particular, from the educational point of view, the author addresses the questions of what constitutes an inverse problem and how and why we should study them. Such an approach has been eagerly awaited for a long time. Professor Groetsch, of the University of Cincinnati, is a world-renowned specialist in inverse problems, in particular the theory of regularization. Moreover, he has made a remarkable contribution to educational activities in the field of inverse problems, which was the subject of his previous book (Groetsch C W 1993 Inverse Problems in the Mathematical Sciences (Braunschweig: Vieweg)). For this reason, he is one of the most qualified to write an introductory book on inverse problems. Without question, inverse problems are important, necessary and appear in various aspects. So it is crucial to introduce students to exercises in inverse problems. However, there are not many introductory books which are directly accessible by students in the first two undergraduate years. As a consequence, students often encounter diverse concrete inverse problems before becoming aware of their general principles. The main purpose of this book is to present activities to allow first-year undergraduates to learn inverse theory. To my knowledge, this book is a rare attempt to do this and, in my opinion, a great success. The author emphasizes that it is very important to teach inverse theory in the early years. He writes: `If students consider only the direct problem, they are not looking at the problem from all sides .... The habit of always looking at problems from the direct point of view is intellectually limiting ...' (page 21). The book is very carefully organized so that teachers will be able to use it as a textbook. After an introduction in chapter 1, successive chapters deal with inverse problems in precalculus, calculus, differential equations and linear algebra. In order to let one gain some insight
Inverse bifurcation analysis: application to simple gene systems
Directory of Open Access Journals (Sweden)
Schuster Peter
2006-07-01
Full Text Available Abstract Background Bifurcation analysis has proven to be a powerful method for understanding the qualitative behavior of gene regulatory networks. In addition to the more traditional forward problem of determining the mapping from parameter space to the space of model behavior, the inverse problem of determining model parameters to result in certain desired properties of the bifurcation diagram provides an attractive methodology for addressing important biological problems. These include understanding how the robustness of qualitative behavior arises from system design as well as providing a way to engineer biological networks with qualitative properties. Results We demonstrate that certain inverse bifurcation problems of biological interest may be cast as optimization problems involving minimal distances of reference parameter sets to bifurcation manifolds. This formulation allows for an iterative solution procedure based on performing a sequence of eigen-system computations and one-parameter continuations of solutions, the latter being a standard capability in existing numerical bifurcation software. As applications of the proposed method, we show that the problem of maximizing regions of a given qualitative behavior as well as the reverse engineering of bistable gene switches can be modelled and efficiently solved.
Inverse Funnel Effect of Excitons in Strained Black Phosphorus
Directory of Open Access Journals (Sweden)
Pablo San-Jose
2016-09-01
Full Text Available We study the effects of strain on the properties and dynamics of Wannier excitons in monolayer (phosphorene and few-layer black phosphorus (BP, a promising two-dimensional material for optoelectronic applications due to its high mobility, mechanical strength, and strain-tunable direct band gap. We compare the results to the case of molybdenum disulphide (MoS_{2} monolayers. We find that the so-called funnel effect, i.e., the possibility of controlling exciton motion by means of inhomogeneous strains, is much stronger in few-layer BP than in MoS_{2} monolayers and, crucially, is of opposite sign. Instead of excitons accumulating isotropically around regions of high tensile strain like in MoS_{2}, excitons in BP are pushed away from said regions. This inverse funnel effect is moreover highly anisotropic, with much larger funnel distances along the armchair crystallographic direction, leading to a directional focusing of exciton flow. A strong inverse funnel effect could enable simpler designs of funnel solar cells and offer new possibilities for the manipulation and harvesting of light.
Body weight and composition dynamics of fall migrating canvasbacks
Serie, J.R.; Sharp, D.E.
1989-01-01
We studied body weights and composition of canvasbacks (Aythya valisineria) during fall migration 1975-77 on stopover sites along the upper Mississippi River near La Crosse, Wisconsin (Navigational Pools 7 and 8) and Keokuk, Iowa (Navigational Pool 19). Body weights varied significantly by year and increased during migration for each age and sex. Live weight was a good predictor of total body fat. Mean estimated total body fat ranged from 200 to 300 g and comprised 15-20% of live weights among age and sex classes. Temporal weight patterns were less variable for adults than immatures, but generally increased during migration. Length of stopover varied inversely with fat reserves among color-marked adult males. Variation in fat condition of canvasbacks during fall may explain the mechanism regulating population ingress and egress on stopover sites. Fat reserves attained by canvasbacks during fall stopover may have adaptive significance in improving survival by conditioning for winter.
Distance Measurement Solves Astrophysical Mysteries
2003-08-01
Location, location, and location. The old real-estate adage about what's really important proved applicable to astrophysics as astronomers used the sharp radio "vision" of the National Science Foundation's Very Long Baseline Array (VLBA) to pinpoint the distance to a pulsar. Their accurate distance measurement then resolved a dispute over the pulsar's birthplace, allowed the astronomers to determine the size of its neutron star and possibly solve a mystery about cosmic rays. "Getting an accurate distance to this pulsar gave us a real bonanza," said Walter Brisken, of the National Radio Astronomy Observatory (NRAO) in Socorro, NM. [Image: The Monogem Ring, in an X-ray image by the ROSAT satellite. Credit: Max-Planck Institute, American Astronomical Society] The pulsar, called PSR B0656+14, is in the constellation Gemini, and appears to be near the center of a circular supernova remnant that straddles Gemini and its neighboring constellation, Monoceros, and is thus called the Monogem Ring. Since pulsars are superdense, spinning neutron stars left over when a massive star explodes as a supernova, it was logical to assume that the Monogem Ring, the shell of debris from a supernova explosion, was the remnant of the blast that created the pulsar. However, astronomers using indirect methods of determining the distance to the pulsar had concluded that it was nearly 2500 light-years from Earth. On the other hand, the supernova remnant was determined to be only about 1000 light-years from Earth. It seemed unlikely that the two were related, but instead appeared nearby in the sky purely by a chance juxtaposition. Brisken and his colleagues used the VLBA to make precise measurements of the sky position of PSR B0656+14 from 2000 to 2002. They were able to detect the slight offset in the object's apparent position when viewed from opposite sides of Earth's orbit around the Sun. This effect, called parallax, provides a direct measurement of
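The parallax-to-distance conversion the VLBA team relied on is a one-liner: a parallax of p arcseconds corresponds to 1/p parsecs. The numbers below are illustrative, not the measured values for PSR B0656+14.

```python
# Parallax distance: d [parsec] = 1 / p [arcsec]; 1 pc is about 3.26 light-years.
def parallax_distance_ly(p_arcsec: float) -> float:
    return (1.0 / p_arcsec) * 3.26

# A parallax of ~3.4 milliarcseconds corresponds to roughly 960 light-years
# (illustrative numbers only).
print(parallax_distance_ly(0.0034))
```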
Regional W-Phase Source Inversion for Moderate to Large Earthquakes in China and Neighboring Areas
Zhao, Xu; Duputel, Zacharie; Yao, Zhenxing
2017-12-01
Earthquake source characterization has been significantly sped up in the last decade with the development of rapid inversion techniques in seismology. Among these techniques, the W-phase source inversion method quickly provides point source parameters of large earthquakes using very long period seismic waves recorded at teleseismic distances. Although the W-phase method was initially developed to work at global scale (within 20 to 30 min after the origin time), faster results can be obtained when seismological data are available at regional distances (i.e., Δ ≤ 12°). In this study, we assess the use and reliability of regional W-phase source estimates in China and neighboring areas. Our implementation uses broadband records from the Chinese network supplemented by global seismological stations installed in the region. Using this data set and minor modifications to the W-phase algorithm, we show that reliable solutions can be retrieved automatically within 4 to 7 min after the earthquake origin time. Moreover, the method yields stable results down to Mw = 5.0 events, which is well below the size of earthquakes that are rapidly characterized using W-phase inversions at teleseismic distances.
Energy Technology Data Exchange (ETDEWEB)
Thomas, Edward V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stork, Christopher L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mattingly, John K. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-07-01
Inverse radiation transport focuses on identifying the configuration of an unknown radiation source given its observed radiation signatures. The inverse problem is traditionally solved by finding the set of transport model parameter values that minimizes a weighted sum of the squared differences by channel between the observed signature and the signature predicted by the hypothesized model parameters. The weights are inversely proportional to the sum of the variances of the measurement and model errors at a given channel. The traditional implicit (often inaccurate) assumption is that the errors (differences between the modeled and observed radiation signatures) are independent across channels. Here, an alternative method that accounts for correlated errors between channels is described and illustrated using an inverse problem based on the combination of gamma and neutron multiplicity counting measurements.
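The correlated-error weighting amounts to generalized least squares with a full channel covariance matrix. A sketch under the assumption of a linear signature model; the Jacobian and covariance below are invented, not from the Sandia analysis.

```python
import numpy as np

# Generalized least squares: weight residuals by the inverse of a full
# channel covariance matrix instead of per-channel variances. The linear
# "signature model" y = J @ theta is a stand-in for radiation transport.
rng = np.random.default_rng(3)
n_ch, n_par = 12, 3
J = rng.standard_normal((n_ch, n_par))      # invented sensitivity matrix
theta_true = np.array([2.0, -1.0, 0.5])

# Correlated channel errors: build a covariance with off-diagonal structure.
Lf = np.tril(0.3 * rng.standard_normal((n_ch, n_ch)), -1) + np.eye(n_ch)
Sigma = 0.01 * (Lf @ Lf.T)
y = J @ theta_true + rng.multivariate_normal(np.zeros(n_ch), Sigma)

# GLS estimate: minimize (y - J theta)^T Sigma^-1 (y - J theta).
W = np.linalg.inv(Sigma)
theta_gls = np.linalg.solve(J.T @ W @ J, J.T @ W @ y)
print(theta_gls)
```

Setting `Sigma` to a diagonal matrix recovers the traditional per-channel weighting the abstract describes.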
Sensitivity analyses of acoustic impedance inversion with full-waveform inversion
Yao, Gang; da Silva, Nuno V.; Wu, Di
2018-04-01
Acoustic impedance estimation has a significant importance to seismic exploration. In this paper, we use full-waveform inversion to recover the impedance from seismic data, and analyze the sensitivity of the acoustic impedance with respect to the source-receiver offset of seismic data and to the initial velocity model. We parameterize the acoustic wave equation with velocity and impedance, and demonstrate three key aspects of acoustic impedance inversion. First, short-offset data are most suitable for acoustic impedance inversion. Second, acoustic impedance inversion is more compatible with the data generated by density contrasts than velocity contrasts. Finally, acoustic impedance inversion requires the starting velocity model to be very accurate for achieving a high-quality inversion. Based upon these observations, we propose a workflow for acoustic impedance inversion as: (1) building a background velocity model with travel-time tomography or reflection waveform inversion; (2) recovering the intermediate wavelength components of the velocity model with full-waveform inversion constrained by Gardner’s relation; (3) inverting the high-resolution acoustic impedance model with short-offset data through full-waveform inversion. We verify this workflow by the synthetic tests based on the Marmousi model.
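The quantities being inverted for can be made concrete: impedance is Z = ρv, and a layer boundary reflects with normal-incidence coefficient R = (Z₂ − Z₁)/(Z₂ + Z₁). Gardner's relation, the constraint cited in the workflow, is commonly written ρ = 0.31 v^0.25 (ρ in g/cm³, v in m/s); the velocities below are illustrative.

```python
# Acoustic impedance Z = rho * v and the normal-incidence reflection
# coefficient R = (Z2 - Z1) / (Z2 + Z1) that impedance inversion recovers.
def gardner_density(v_mps: float) -> float:
    # Gardner's relation: rho [g/cc] = 0.31 * v**0.25 with v in m/s.
    return 0.31 * v_mps ** 0.25

def impedance(v_mps: float) -> float:
    return gardner_density(v_mps) * v_mps

def reflection_coeff(v1: float, v2: float) -> float:
    z1, z2 = impedance(v1), impedance(v2)
    return (z2 - z1) / (z2 + z1)

# Illustrative two-layer contrast (2000 m/s over 2500 m/s).
print(reflection_coeff(2000.0, 2500.0))
```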
Anxiety and Resistance in Distance Learning
Directory of Open Access Journals (Sweden)
Nazime Tuncay
2010-06-01
Full Text Available The purpose of this study was to investigate students' anxiety and resistance towards learning through distance education. Specifically, the study sought answers to the following questions: What are the reasons for students not choosing distance learning courses? Which symptoms of anxiety, if any, do distance learners exhibit towards distance learning? Does gender have any significant relationship with distance learners' perception of factors that affect their anxiety and resistance? A total of 120 distance education students in Near East University were observed and 96 of them were interviewed. Computer anxiety, language anxiety, and social anxiety were observed to be among the reasons for students' resistance to distance learning.
Weight loss surgery helps people with extreme obesity to lose weight. It may be an option if you ... caused by obesity. There are different types of weight loss surgery. They often limit the amount of food ...
... Thyroid and Weight Thyroid and Weight FAQs THYROID, BMR & WEIGHT WHAT IS THE RELATIONSHIP BETWEEN THYROID AND ... it is known as the basal metabolic rate (BMR). Indeed, measurement of the BMR was one of ...
Energy Technology Data Exchange (ETDEWEB)
De Grijs, Richard [Kavli Institute for Astronomy and Astrophysics, Peking University, Yi He Yuan Lu 5, Hai Dian District, Beijing 100871 (China); Wicker, James E. [National Astronomical Observatories, Chinese Academy of Sciences, 20A Datun Road, Chaoyang District, Beijing 100012 (China); Bono, Giuseppe [Dipartimento di Fisica, Università di Roma Tor Vergata, via Della Ricerca Scientifica 1, I-00133 Roma (Italy)
2014-05-01
The distance to the Large Magellanic Cloud (LMC) represents a key local rung of the extragalactic distance ladder, yet the galaxy's distance modulus has long been an issue of contention, in particular in view of claims that most newly determined distance moduli cluster tightly, and with a small spread, around the 'canonical' distance modulus, (m − M)₀ = 18.50 mag. We compiled 233 separate LMC distance determinations published between 1990 and 2013. Our analysis of the individual distance moduli, as well as of their two-year means and standard deviations resulting from this largest data set of LMC distance moduli available to date, focuses specifically on Cepheid and RR Lyrae variable-star tracer populations, as well as on distance estimates based on features in the observational Hertzsprung-Russell diagram. We conclude that strong publication bias is unlikely to have been the main driver of the majority of published LMC distance moduli. However, for a given distance tracer, the body of publications leading to the tightly clustered distances is based on highly non-independent tracer samples and analysis methods, hence leading to significant correlations among the LMC distances reported in subsequent articles. Based on a careful, weighted combination, in a statistical sense, of the main stellar population tracers, we recommend that a slightly adjusted canonical distance modulus of (m − M)₀ = 18.49 ± 0.09 mag be used for all practical purposes that require a general distance scale without the need for accuracies of better than a few percent.
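The "weighted combination, in a statistical sense" is, at its simplest, an inverse-variance weighted mean. The tracer moduli and errors below are illustrative numbers, not the paper's per-tracer results.

```python
import numpy as np

# Inverse-variance weighted mean of several distance-modulus estimates.
# Values are illustrative stand-ins for per-tracer LMC moduli.
mu = np.array([18.48, 18.52, 18.47, 18.50])   # distance moduli (mag)
sig = np.array([0.05, 0.10, 0.08, 0.12])      # their 1-sigma errors (mag)

w = 1.0 / sig**2                               # inverse-variance weights
mu_hat = np.sum(w * mu) / np.sum(w)            # combined modulus
sig_hat = 1.0 / np.sqrt(np.sum(w))             # formal error of the combination
print(mu_hat, sig_hat)
```

Note the formal error assumes independent estimates; the paper's point is precisely that published LMC moduli are strongly correlated, so this error would be optimistic in practice.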
Dynamics of an N-vortex state at small distances
International Nuclear Information System (INIS)
Ovchinnikov, Yu. N.
2013-01-01
We investigate the dynamics of a state of N vortices, placed at the initial instant at small distances from some point, close to the “weight center” of the vortices. The general solution of the time-dependent Ginsburg-Landau equation for N vortices in a large time interval is found. For N = 2, the position of the “weight center” of two vortices is time independent. For N ≥ 3, the position of the “weight center” weakly depends on time and is located in a range of the order of a³, where a is a characteristic distance of a single vortex from the “weight center.” For N = 3, the time evolution of the N-vortex state is fixed by the position of the vortices at any time instant and by the values of two small parameters. For N ≥ 4, a new parameter arises in the problem, connected with relative increases in the number of decay modes.
Full traveltime inversion in source domain
Liu, Lu
2017-06-01
This paper presents a new method of source-domain full traveltime inversion (FTI). The objective of this study is automatically building a near-surface velocity model using the early arrivals of seismic data. The method generates an inverted velocity model whose reconstructed plane-wave source of early arrivals kinematically best matches the true source in the source domain. It does not require picking first arrivals for tomography, which is one of the most challenging aspects of ray-based tomographic inversion. Besides, this method does not need to estimate the source wavelet, which is a necessity for receiver-domain wave-equation velocity inversion. Furthermore, we applied our method to a synthetic dataset; the results show that our method can generate a reasonable background velocity even when shingling first arrivals exist and can provide a good initial velocity model for conventional full waveform inversion (FWI).
n-Colour self-inverse compositions
Indian Academy of Sciences (India)
colour self-inverse composition. This introduces four new sequences which satisfy the same recurrence relation with different initial conditions like the famous Fibonacci and Lucas sequences. For these new sequences explicit formulas, recurrence ...
Inverse Doppler Effects in Broadband Acoustic Metamaterials.
Zhai, S L; Zhao, X P; Liu, S; Shen, F L; Li, L L; Luo, C R
2016-08-31
The Doppler effect refers to the change in frequency of a wave source as a consequence of the relative motion between the source and an observer. Veselago theoretically predicted that materials with negative refractions can induce inverse Doppler effects. With the development of metamaterials, inverse Doppler effects have been extensively investigated. However, the ideal material parameters prescribed by these metamaterial design approaches are complex and also challenging to obtain experimentally. Here, we demonstrated a method of designing and experimentally characterising arbitrary broadband acoustic metamaterials. These omni-directional, double-negative, acoustic metamaterials are constructed with 'flute-like' acoustic meta-cluster sets with seven double meta-molecules; these metamaterials also overcome the limitations of broadband negative bulk modulus and mass density to provide a region of negative refraction and inverse Doppler effects. It was also shown that inverse Doppler effects can be detected in a flute, which has been popular for thousands of years in Asia and Europe.
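The sign reversal can be illustrated schematically against the classical moving-source Doppler formula; flipping the shift's sign is a simplification of what a negative-index medium does, not a dispersion model of the flute-like meta-clusters.

```python
# Classical Doppler shift for a source moving toward the observer in a medium
# with sound speed c:  f_obs = f_src * c / (c - v_src).
# In a double-negative medium the effect reverses: an approaching source is
# observed at a LOWER frequency. The sign flip below is only a schematic
# illustration of that reversal.
def doppler(f_src: float, c: float, v_src: float, inverse: bool = False) -> float:
    shift = f_src * c / (c - v_src) - f_src
    return f_src - shift if inverse else f_src + shift

c, f, v = 343.0, 1000.0, 30.0   # air-like sound speed, 1 kHz source, 30 m/s
print(doppler(f, c, v), doppler(f, c, v, inverse=True))
```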
Parametric optimization of inverse trapezoid oleophobic surfaces
DEFF Research Database (Denmark)
Cavalli, Andrea; Bøggild, Peter; Okkels, Fridolin
2012-01-01
In this paper, we introduce a comprehensive and versatile approach to the parametric shape optimization of oleophobic surfaces. We evaluate the performance of inverse trapezoid microstructures in terms of three objective parameters: apparent contact angle, maximum sustainable hydrostatic pressure...
An inverse method for radiation transport
Energy Technology Data Exchange (ETDEWEB)
Favorite, J. A. (Jeffrey A.); Sanchez, R. (Richard)
2004-01-01
Adjoint functions have been used with forward functions to compute gradients in implicit (iterative) solution methods for inverse problems in optical tomography, geoscience, thermal science, and other fields, but only once has this approach been used for inverse solutions to the Boltzmann transport equation. In this paper, this approach is used to develop an inverse method that requires only angle-independent flux measurements, rather than angle-dependent measurements as was done previously. The method is applied to a simplified form of the transport equation that does not include scattering. The resulting procedure uses measured values of gamma-ray fluxes of discrete, characteristic energies to determine interface locations in a multilayer shield. The method was implemented with a Newton-Raphson optimization algorithm, and it worked very well in numerical one-dimensional spherical test cases. A more sophisticated optimization method would better exploit the potential of the inverse method.
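The Newton-Raphson step can be sketched on the simplest scatter-free case, recovering a thickness from an exponentially attenuated uncollided flux; the intensities and attenuation coefficient are invented numbers, not from the paper's shield problem.

```python
import math

# Newton-Raphson recovery of a single interface depth x from an attenuated,
# uncollided gamma flux I(x) = I0 * exp(-mu * x). The model matches the
# scatter-free transport assumption above; I0, mu and the "measurement"
# are invented.
I0, mu = 1.0e6, 0.8          # source intensity, attenuation coefficient (1/cm)
x_true = 2.5                 # true interface depth (cm)
I_meas = I0 * math.exp(-mu * x_true)

x = 1.0                      # initial guess
for _ in range(20):
    f = I0 * math.exp(-mu * x) - I_meas     # residual
    fp = -mu * I0 * math.exp(-mu * x)       # derivative df/dx
    x -= f / fp                             # Newton update

print(x)
```

The exponential model makes the iteration converge monotonically and quadratically from a guess below the true depth.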
Quantum chromodynamics at large distances
International Nuclear Information System (INIS)
Arbuzov, B.A.
1987-01-01
Properties of QCD at large distances are considered in the framework of traditional quantum field theory. An investigation of the asymptotic behaviour of the lower Green functions in QCD is the starting point of the approach. Recent works are reviewed which confirm the singular infrared behaviour of the gluon propagator, M²/(k²)², at least under some gauge conditions. A special covariant gauge turns out to be the most suitable for description of the infrared region due to the absence of ghost contributions to the infrared asymptotics of the Green functions. Solutions of the Schwinger-Dyson equation for the quark propagator are obtained in this special gauge and are shown to possess the desirable properties: spontaneous breaking of chiral invariance and a nonperturbative character. The infrared asymptotics of the lower Green functions are used for calculation of vacuum expectation values of the gluon and quark fields. These vacuum expectation values are obtained in good agreement with the corresponding phenomenological values needed in the method of QCD sum rules, which confirms the adequacy of the infrared-region description. Consideration of the behaviour of QCD at large distances leads to the conclusion that at the contemporary stage of theory development one may consider two possibilities. The first is the well-known confinement hypothesis; the second, called incomplete confinement, allows open color to be observable. Possible manifestations of incomplete confinement are discussed.
Determining distances using asteroseismic methods
DEFF Research Database (Denmark)
Aguirre, Victor Silva; Casagrande, L.; Basu, Sarbani
2013-01-01
Asteroseismology has been extremely successful in determining the properties of stars in different evolutionary stages with a remarkable level of precision. However, to fully exploit its potential, robust methods for estimating stellar parameters are required and independent verification of the results is needed. In this talk, I present a new technique developed to obtain stellar properties by coupling asteroseismic analysis with the InfraRed Flux Method. Using two global seismic observables and multi-band photometry, the technique determines masses, radii, effective temperatures, bolometric fluxes, and thus distances for field stars in a self-consistent manner. Applying our method to a sample of solar-like oscillators in the Kepler field that have accurate Hipparcos parallaxes, we find agreement in our distance determinations to better than 5%. Comparison with measurements...
Demirci, İsmail; Candansayar, Mehmet Emin; Vafidis, Antonis; Soupios, Pantelis
2017-04-01
Direct current resistivity, radio-magnetotelluric and seismic refraction methods are widely used in the identification of near-surface structures, with the collected data generally being interpreted separately. In recent decades, the use of joint inversion algorithms in the geosciences has become widespread for identifying near-surface structures. However, no joint inversion algorithm combining direct current resistivity, radio-magnetotelluric and seismic refraction methods had been developed. In this study, we developed a new two-dimensional joint inversion algorithm for direct current resistivity, radio-magnetotelluric and seismic refraction data based on a cross-gradient approach. In addition, we proposed a new data weighting matrix to stabilize the convergence behavior of the joint inversion algorithm. We used synthetic data to show the advantage of the algorithm. The developed joint inversion algorithm found resistivity and velocity models that are better than those from the individual inversion of each data set. We also tested the algorithm with field data collected in the Bafra Plain (Samsun, Turkey) to investigate saltwater intrusion. Comparing the field data inversion results with the sounding log shows that the developed joint inversion algorithm with the proposed data weighting matrix recovered the resistivity and velocity models better than the individual inversion and classical joint inversion of each data set. Our results showed that a more unique hydrogeological scenario might be obtained, especially in highly conductive media, with the joint use of these methods.
Bayesian inversion of refraction seismic traveltime data
Ryberg, T.; Haberland, Ch
2018-03-01
We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when using the far-offset observations, are known for having very poor experimental geometries; they are highly ill-posed and far from ideal. As a consequence, the structural resolution quickly degrades with depth. Conventional inversion techniques, based on regularization, potentially suffer from the choice of appropriate inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and explore the model space only locally. McMC techniques are used for exhaustive sampling of the model space without the need for prior knowledge (or assumptions) about inversion parameters, resulting in a large number of models fitting the observations. Statistical analysis of these models allows us to derive an average (reference) solution and its standard deviation, thus providing uncertainty estimates of the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not allow us to derive a reference solution and error map by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution from the posterior values where ray coverage is poor, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant-slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey from Northern Namibia and compared to conventional tomography. An inversion test...
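The McMC sampling idea can be sketched on a deliberately tiny problem: inferring a single slowness value from noisy traveltimes with a random-walk Metropolis sampler. The one-parameter model and all values below are illustrative, not the authors' 2-D Voronoi parametrization.

```python
import random, math

# Toy Bayesian McMC: infer a single slowness s from noisy traveltimes
# t_i = s * d_i + noise. Offsets, noise level, and proposal width are
# invented for illustration.
random.seed(0)
d = [1.0, 2.0, 3.0, 4.0]            # source-receiver offsets
s_true, sigma = 0.5, 0.01
t = [s_true * di + random.gauss(0, sigma) for di in d]

def log_like(s):
    # Gaussian log-likelihood up to an additive constant
    return -sum((ti - s * di) ** 2 for ti, di in zip(t, d)) / (2 * sigma ** 2)

samples, s = [], 1.0                # deliberately poor starting model
for _ in range(20000):
    prop = s + random.gauss(0, 0.02)              # random-walk proposal
    if math.log(random.random()) < log_like(prop) - log_like(s):
        s = prop                                   # Metropolis accept
    samples.append(s)
post = samples[5000:]                              # discard burn-in
mean = sum(post) / len(post)
print(round(mean, 3))   # posterior mean close to s_true = 0.5
```

Averaging the retained samples gives the reference solution, and their spread gives the uncertainty estimate the abstract describes.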
An Inversion Recovery NMR Kinetics Experiment
Williams, Travis J.; Kershaw, Allan D.; Li, Vincent; Wu, Xinping
2011-01-01
A convenient laboratory experiment is described in which NMR magnetization transfer by inversion recovery is used to measure the kinetics and thermochemistry of amide bond rotation. The experiment utilizes Varian spectrometers with the VNMRJ 2.3 software, but can be easily adapted to any NMR platform. The procedures and sample data sets in this article will enable instructors to use inversion recovery as a laboratory activity in applied NMR classes and provide research students with a conveni...
Population inversion in recombining hydrogen plasma
International Nuclear Information System (INIS)
Furukane, Utaro; Yokota, Toshiaki; Oda, Toshiatsu.
1978-11-01
The collisional-radiative model is applied to a recombining hydrogen plasma in order to investigate the plasma condition in which the population inversion between the energy levels of hydrogen can be generated. The population inversion is expected in a plasma where the three body recombination has a large contribution to the recombining processes and the effective recombination rate is beyond a certain value for a given electron density and temperature. Calculated results are presented in figures and tables. (author)
Approximation of Bayesian Inverse Problems for PDEs
Cotter, S. L.; Dashti, M.; Stuart, A. M.
2010-01-01
Inverse problems are often ill posed, with solutions that depend sensitively on data. In any numerical approach to the solution of such problems, regularization of some form is needed to counteract the resulting instability. This paper is based on an approach to regularization, employing a Bayesian formulation of the problem, which leads to a notion of well posedness for inverse problems, at the level of probability measures. The stability which results from this well posedness may be used as t...
On the Inversion of the Lidar Equation
1984-11-01
sections briefly review the major inversion methods to date and a fourth section describes the development of the modified inversion method. All four...can be seen when it is understood in terms of its physical significance. Equation 17 states that the normalized integrated backscatter has a limit. In...still give significant errors. 4.0 VALIDATION OF AGILE In this chapter, evidence of the success of AGILE will be reviewed and compared with Klett's
Inverse regression for ridge recovery II: Numerics
Glaws, Andrew; Constantine, Paul G.; Cook, R. Dennis
2018-01-01
We investigate the application of sufficient dimension reduction (SDR) to a noiseless data set derived from a deterministic function of several variables. In this context, SDR provides a framework for ridge recovery. In this second part, we explore the numerical subtleties associated with using two inverse regression methods---sliced inverse regression (SIR) and sliced average variance estimation (SAVE)---for ridge recovery. This includes a detailed numerical analysis of the eigenvalues of th...
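A minimal sketch of the SIR step for ridge recovery, assuming a noiseless one-dimensional ridge function: slice on the response, average the standardized inputs within each slice, and take the leading eigenvector of the resulting matrix. The data, slicing scheme, and dimensions are invented for illustration and do not reproduce the paper's numerics.

```python
import numpy as np

# Sliced inverse regression (SIR) on a noiseless ridge function:
# y depends on x only through the direction a.
rng = np.random.default_rng(0)
n, p = 2000, 5
a = np.array([3.0, 4.0, 0.0, 0.0, 0.0]) / 5.0    # true ridge direction
X = rng.standard_normal((n, p))
y = np.tanh(X @ a)                                # deterministic ridge function

# Standardize, slice on y, and average standardized inputs per slice
Z = (X - X.mean(0)) / X.std(0)
order = np.argsort(y)
slices = np.array_split(order, 10)
means = np.array([Z[idx].mean(0) for idx in slices])
M = means.T @ means / len(slices)                 # SIR matrix
w, V = np.linalg.eigh(M)                          # ascending eigenvalues
est = V[:, -1]                                    # leading eigenvector
print(abs(est @ a))                               # close to 1: direction recovered
```

The eigenvalue gap between the leading and remaining eigenvalues of `M` is exactly the kind of quantity whose numerical behaviour the paper analyzes.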
The distance-decay function of geographical gravity model: Power law or exponential law?
International Nuclear Information System (INIS)
Chen, Yanguang
2015-01-01
Highlights: •The distance-decay exponent of the gravity model is a fractal dimension. •Entropy maximization accounts for the gravity model based on power-law decay. •Allometric scaling relations relate gravity models with spatial interaction models. •The four-parameter gravity models have dual mathematical expressions. •The inverse power law is the most probable distance-decay function. -- Abstract: The distance-decay function of the geographical gravity model is originally an inverse power law, which suggests a scaling process in spatial interaction. However, the distance exponent of the model cannot be reasonably explained with ideas from Euclidean geometry. This results in a dimension dilemma in geographical analysis. Consequently, a negative exponential function was used to replace the inverse power function to serve as the distance-decay function. But a new puzzle arose: the exponential-based gravity model goes against the first law of geography. This paper is devoted to solving these problems by mathematical reasoning and empirical analysis. The new findings are as follows. First, the distance exponent of the gravity model is demonstrated to be a fractal dimension using the geometric measure relation. Second, the similarities and differences between the gravity models and spatial interaction models are revealed using allometric relations. Third, a four-parameter gravity model possesses a symmetrical expression, and we need dual gravity models to describe spatial flows. The observational data of China's cities and regions (29 elements indicative of 841 data points) in 2010 are employed to verify the theoretical inferences. A conclusion can be reached that the geographical gravity model based on power-law decay is more suitable for analyzing large, complex, and scale-free regional and urban systems. This study lends further support to the suggestion that the underlying rationale of fractal structure is entropy maximization. Moreover...
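The practical difference between the two candidate decay laws can be seen by fitting both to synthetic flows generated with a power law: the power law is linear in log-log space, the exponential is linear in semi-log space. The data and parameter values below are illustrative, not the Chinese city data used in the paper.

```python
import numpy as np

# Compare power-law vs exponential distance decay on synthetic flows
# generated with a power law (illustrative values only).
rng = np.random.default_rng(1)
r = np.linspace(1, 50, 200)
b_true = 1.8
flow = r ** (-b_true) * rng.lognormal(0, 0.05, r.size)

# Power law:    log F = log k - b log r  (linear in log-log space)
bp, kp = np.polyfit(np.log(r), np.log(flow), 1)
# Exponential:  log F = log k - c r      (linear in semi-log space)
ce, ke = np.polyfit(r, np.log(flow), 1)

def r2(pred):
    resid = np.log(flow) - pred
    return 1 - resid.var() / np.log(flow).var()

print(round(-bp, 2))                               # close to b_true = 1.8
print(r2(kp + bp * np.log(r)) > r2(ke + ce * r))   # power law fits better here
```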
Bijani, Rodrigo; Lelièvre, Peter G.; Ponte-Neto, Cosme F.; Farquharson, Colin G.
2017-05-01
This paper is concerned with the applicability of Pareto Multi-Objective Global Optimization (PMOGO) algorithms for solving different types of geophysical inverse problems. The standard deterministic approach is to combine the multiple objective functions (i.e. data misfit, regularization and joint coupling terms) in a weighted-sum aggregate objective function and minimize using local (descent-based) smooth optimization methods. This approach has some disadvantages: (1) appropriate weights must be determined for the aggregate, (2) the objective functions must be differentiable and (3) local minima entrapment may occur. PMOGO algorithms can overcome these drawbacks but introduce increased computational effort. Previous work has demonstrated how PMOGO algorithms can overcome the first issue for single data set geophysical inversion, that is, the trade-off between data misfit and model regularization. However, joint inversion, which can involve many weights in the aggregate, has seen little study. The advantage of PMOGO algorithms for the other two issues has yet to be addressed in the context of geophysical inversion. In this paper, we implement a PMOGO genetic algorithm and apply it to physical-property-, lithology- and surface-geometry-based inverse problems to demonstrate the advantages of using a global optimization strategy. Lithological inversions work on a mesh but use integer model parameters representing rock unit identifiers instead of continuous physical properties. Surface geometry inversions change the geometry of wireframe surfaces that represent the contacts between discrete rock units. Despite the potentially high computational requirements of global optimization algorithms (compared to local), their application to realistically sized 2-D geophysical inverse problems is within reach of the current capacity of standard computers. Furthermore, they open the door to geophysical inverse problems that could not otherwise be considered through traditional
Fast nonlinear susceptibility inversion with variational regularization.
Milovic, Carlos; Bilgic, Berkin; Zhao, Bo; Acosta-Cabronero, Julio; Tejos, Cristian
2018-01-10
Quantitative susceptibility mapping can be performed through the minimization of a function consisting of data fidelity and regularization terms. For data consistency, a Gaussian phase-noise distribution is often assumed, which breaks down when the signal-to-noise ratio is low. A previously proposed alternative is to use a nonlinear data fidelity term, which reduces streaking artifacts, mitigates noise amplification, and results in more accurate susceptibility estimates. We hereby present a novel algorithm that solves the nonlinear functional while achieving computation speeds comparable to those of a linear formulation. We developed a nonlinear quantitative susceptibility mapping algorithm (fast nonlinear susceptibility inversion) based on variable splitting and the alternating direction method of multipliers, in which the problem is split into simpler subproblems with closed-form solutions and a decoupled nonlinear inversion, which is solved with a Newton-Raphson iterative procedure. Fast nonlinear susceptibility inversion performance was assessed using numerical phantom and in vivo experiments, and was compared against the nonlinear morphology-enabled dipole inversion method. Fast nonlinear susceptibility inversion achieves similar accuracy to nonlinear morphology-enabled dipole inversion but with significantly improved computational efficiency. The proposed method enables accurate reconstructions in a fraction of the time required by state-of-the-art quantitative susceptibility mapping methods. Magn Reson Med, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
Atmospheric inverse modeling via sparse reconstruction
Directory of Open Access Journals (Sweden)
N. Hase
2017-10-01
Full Text Available Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
Atmospheric inverse modeling via sparse reconstruction
Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten
2017-10-01
Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
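A generic sparsity-promoting inversion of the kind discussed here can be sketched with iterative soft thresholding (ISTA) on a toy linear problem with a few point sources. This illustrates the l1-regularization idea only; it is not the authors' dictionary-based, bounded solver, and all dimensions and values are invented.

```python
import numpy as np

# Sparse recovery by iterative soft thresholding (ISTA): minimize
# 0.5 * ||Ax - b||^2 + lam * ||x||_1 for an underdetermined system.
rng = np.random.default_rng(0)
m, n = 60, 120
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 40, 77]] = [3.0, -2.0, 4.0]        # a few "point sources"
b = A @ x_true                                 # noiseless observations

lam = 0.05
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - b)                      # gradient of 0.5 * ||Ax - b||^2
    z = x - g / L                              # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold

print(np.flatnonzero(np.abs(x) > 0.5))         # expected to recover 5, 40, 77
```

A Gaussian (Tikhonov-only) prior would smear these spikes across many cells; the l1 penalty concentrates them, which is the behaviour the abstract highlights for point sources.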
Open and Distance Learning Today. Routledge Studies in Distance Education Series.
Lockwood, Fred, Ed.
This book contains the following papers on open and distance learning today: "Preface" (Daniel); "Big Bang Theory in Distance Education" (Hawkridge); "Practical Agenda for Theorists of Distance Education" (Perraton); "Trends, Directions and Needs: A View from Developing Countries" (Koul); "American…
An application of sparse inversion on the calculation of the inverse data space of geophysical data
Saragiotis, Christos
2011-07-01
Multiple reflections as observed in seismic reflection measurements often hide arrivals from the deeper target reflectors and need to be removed. The inverse data space provides a natural separation of primaries and surface-related multiples, as the surface multiples map onto the area around the origin while the primaries map elsewhere. However, the calculation of the inverse data is far from trivial, as theory requires infinite time and offset recording. Furthermore, regularization issues arise during inversion. We perform the inversion by minimizing the least-squares norm of the misfit function and by constraining the L1 norm of the solution, the solution being the inverse data space. In this way a sparse inversion approach is obtained. We show results on field data with an application to surface multiple removal. © 2011 IEEE.
Relationship between age and body weight on some linear body ...
African Journals Online (AJOL)
Relationship between age and body weight on some linear body measurement of white bornu goats reared under semi-intensive system in a southern Nigeria ... Journal of Agriculture and Food Sciences ... Parameters measured include distance between eyes, ear length, ear width and length, tail, and body weight.
Toward an Internally Consistent Astronomical Distance Scale
de Grijs, Richard; Courbin, Frédéric; Martínez-Vázquez, Clara E.; Monelli, Matteo; Oguri, Masamune; Suyu, Sherry H.
2017-11-01
Accurate astronomical distance determination is crucial for all fields in astrophysics, from Galactic to cosmological scales. Despite, or perhaps because of, significant efforts to determine accurate distances, using a wide range of methods, tracers, and techniques, an internally consistent astronomical distance framework has not yet been established. We review current efforts to homogenize the Local Group's distance framework, with particular emphasis on the potential of RR Lyrae stars as distance indicators, and attempt to extend this in an internally consistent manner to cosmological distances. Calibration based on Type Ia supernovae and distance determinations based on gravitational lensing represent particularly promising approaches. We provide a positive outlook to improvements to the status quo expected from future surveys, missions, and facilities. Astronomical distance determination has clearly reached maturity and near-consistency.
Distance Learning Plan Development: Initiating Organizational Structures
National Research Council Canada - National Science Library
Poole, Clifton
1998-01-01
.... Army distance learning plan managers to examine the DLPs they were directing. The analysis showed that neither army nor civilian distance learning plan managers used formalized requirements for organizational structure development (OSD...
Body weight relationships in early marriage. Weight relevance, weight comparisons, and weight talk.
Bove, Caron F; Sobal, Jeffery
2011-12-01
This investigation uncovered processes underlying the dynamics of body weight and body image among individuals involved in nascent heterosexual marital relationships in Upstate New York. In-depth, semi-structured qualitative interviews conducted with 34 informants, 20 women and 14 men, just prior to marriage and again one year later were used to explore continuity and change in cognitive, affective, and behavioral factors relating to body weight and body image at the time of marriage, an important transition in the life course. Three major conceptual themes operated in the process of developing and enacting informants' body weight relationships with their partner: weight relevance, weight comparisons, and weight talk. Weight relevance encompassed the changing significance of weight during early marriage and included attracting and capturing a mate, relaxing about weight, living healthily, and concentrating on weight. Weight comparisons between partners involved weight relativism, weight competition, weight envy, and weight role models. Weight talk employed pragmatic talk, active and passive reassurance, and complaining and critiquing criticism. Concepts emerging from this investigation may be useful in designing future studies of and approaches to managing body weight in adulthood. Copyright © 2011 Elsevier Ltd. All rights reserved.
Chromatid Painting for Chromosomal Inversion Detection, Phase I
National Aeronautics and Space Administration — We propose a novel approach to the detection of chromosomal inversions. Transmissible chromosome aberrations (translocations and inversions) have profound genetic...
Anxiety and Resistance in Distance Learning
Nazime Tuncay; Huseyin Uzunboylu
2010-01-01
The purpose of this study was to investigate students' anxiety and resistance towards learning through distance education. Specifically, the study sought answers to the following questions: What are the reasons for students not choosing distance learning courses? Which symptoms of anxiety, if any, do distance learners exhibit towards distance learning? Does gender have any significant relationship with distance learners' perception of factors that affect their anxiety and resistance? A total o...
Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods
Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin
2016-04-01
surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one cannot apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues, including parallelization and problem dimension reduction.
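The selection step at the heart of any PMOGO method is extracting the non-dominated (Pareto-optimal) set from the candidate models. A minimal sketch, with invented two-objective scores standing in for data misfit and regularization:

```python
# Extract the Pareto-optimal subset of candidate models scored on two
# objectives (e.g. data misfit, model roughness), both minimized.
# An O(n^2) sketch of the selection step, not the authors' genetic algorithm.
def pareto_front(points):
    """Return points not dominated by any other point (minimization)."""
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (misfit, roughness) scores for six candidate models
models = [(1.0, 9.0), (2.0, 4.0), (3.0, 5.0), (4.0, 2.0), (6.0, 1.0), (7.0, 7.0)]
print(pareto_front(models))
# → [(1.0, 9.0), (2.0, 4.0), (4.0, 2.0), (6.0, 1.0)]
```

Returning this whole front, rather than one weighted-sum minimizer, is what lets the user inspect the misfit-regularization trade-off after the fact instead of choosing weights up front.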
Sparse contrast-source inversion using linear-shrinkage-enhanced inexact Newton method
Desmal, Abdulla
2014-07-01
A contrast-source inversion scheme is proposed for microwave imaging of domains with sparse content. The scheme uses inexact Newton and linear shrinkage methods to account for the nonlinearity and ill-posedness of the electromagnetic inverse scattering problem, respectively. Thresholded shrinkage iterations are accelerated using a preconditioning technique. Additionally, during Newton iterations, the weight of the penalty term is reduced consistently with the quadratic convergence of the Newton method to increase accuracy and efficiency. Numerical results demonstrate the applicability of the proposed method.
Does Institutional Distance Still Matter?
DEFF Research Database (Denmark)
Møller Larsen, Marcus; Manning, Stephan
2015-01-01
This paper adds nuance to our understanding of institutional antecedents of foreign investment, in particular in global services sourcing. While prior research has stressed the various risks and effects associated with home-host country differences in national-level institutions, e.g. legal systems, we argue that industry-specific, yet often transnational, field institutions, e.g. standards, have become critical factors in driving sourcing location decisions. Using data from the Offshoring Research Network, Kaufmann institutional indicators, and data on 'capability maturity model integration' (CMMI) process standard adoption, we show that sourcing location choices are indeed negatively impacted by institutional differences between home and host country, i.e. 'distance' still matters, but they are positively impacted by CMMI standard adoption in host countries, and standard adoption negatively moderates the importance of distance. Findings promote a more contextual, multi-level understanding of institutional antecedents of foreign firm location choices, which also has important policy implications, in particular for emerging economies.
Does Institutional Distance Still Matter?
DEFF Research Database (Denmark)
Møller Larsen, Marcus; Manning, Stephan
This paper adds nuance to our understanding of institutional antecedents of foreign investment, in particular in global services sourcing. While prior research has stressed the various risks and effects associated with home-host country differences in national-level institutions, e.g. legal systems, we argue that industry-specific, yet often transnational, field institutions, e.g. standards, have become critical factors in driving sourcing location decisions. Using data from the Offshoring Research Network, Kaufmann institutional indicators, and data on 'capability maturity model integration' (CMMI) process standard adoption, we show that sourcing location choices are indeed negatively impacted by institutional differences between home and host country, but standard adoption negatively moderates the importance of distance. Findings promote a more contextual, multi-level understanding of institutional antecedents of foreign firm location choices, which also has important policy implications, in particular for emerging economies.
Managerial Distance and Virtual Ownership
DEFF Research Database (Denmark)
Hansmann, Henry; Thomsen, Steen
Industrial foundations are autonomous nonprofit entities that own and control one or more conventional business firms. These foundations are common in Northern Europe, where they own a number of internationally prominent companies. Previous studies have indicated, surprisingly, that companies controlled by industrial foundations are, on average, as profitable as companies with conventional patterns of investor ownership. In this article, we explore the reasons for this performance, not by comparing foundation-owned firms with conventional investor-owned firms, but rather by focusing on differences among the industrial foundations themselves. We work with a rich data set comprising 113 foundation-owned Danish companies over the period 2003-2008. We focus in particular on a composite structural factor that we term "managerial distance." We propose this as a measure of the extent to which...
Elasticity of Long Distance Travelling
DEFF Research Database (Denmark)
Knudsen, Mette Aagaard
2011-01-01
With data from the Danish expenditure survey for the 12 years 1996 through 2007, this study analyses household expenditures for long distance travelling. Household expenditures are examined at two levels of aggregation, with the general expenditures on transportation and leisure relative to five other aggregated commodities at the highest level, and the specific expenditures on plane tickets and travel packages at the lowest level. The Almost Ideal Demand System is applied to determine the relationship between expenditures on transportation and leisure and all other purchased non-durables. ... Travel packages have higher income elasticity of demand than plane tickets, but also higher than transportation and leisure in general. The findings on price sensitivity are less well estimated, but the model results indicate that travel packages are far more price elastic than plane tickets, which...
Analysing designed experiments in distance sampling
Stephen T. Buckland; Robin E. Russell; Brett G. Dickson; Victoria A. Saab; Donal N. Gorman; William M. Block
2009-01-01
Distance sampling is a survey technique for estimating the abundance or density of wild animal populations. Detection probabilities of animals inherently differ by species, age class, habitats, or sex. By incorporating the change in an observer's ability to detect a particular class of animals as a function of distance, distance sampling leads to density estimates...
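The detection-function idea can be sketched with the standard half-normal model g(d) = exp(-d^2 / (2 sigma^2)), a common choice in distance sampling. The sigma, truncation distance, transect length, and counts below are invented for illustration, not from the cited study.

```python
import math

# Half-normal detection function: probability of detecting an animal
# at perpendicular distance d from the transect line.
def g(d, sigma):
    return math.exp(-d ** 2 / (2 * sigma ** 2))

# Effective strip half-width mu = integral of g from 0 to truncation w,
# here by a simple trapezoid-rule quadrature.
def effective_half_width(sigma, w, steps=10000):
    h = w / steps
    interior = sum(g(i * h, sigma) for i in range(1, steps))
    return h * (interior + (g(0.0, sigma) + g(w, sigma)) / 2)

# Line-transect density estimate: D = n / (2 * mu * L),
# with n detections along a transect of length L (units: metres).
mu = effective_half_width(sigma=25.0, w=100.0)
D = 40 / (2 * mu * 1000.0)          # animals per square metre
print(round(mu, 1), D)
```

Modelling how g varies by species, age class, habitat, or sex is exactly the covariate structure the designed experiments in the paper address.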
ETUDE - European Trade Union Distance Education.
Creanor, Linda; Walker, Steve
2000-01-01
Describes transnational distance learning activities among European trade union educators carried out as part of the European Trade Union Distance Education (ETUDE) project, supported by the European Commission. Highlights include the context of international trade union distance education; tutor training course; tutors' experiences; and…
What Is Distance? Popular Lectures in Mathematics.
Shreider, Yu A.
Presented is an elaboration of a course given at Moscow University for pupils in ninth and tenth grades. The development through abstraction of the general definition of distance is discussed and a class of spaces in which the notion of distance is defined, the so-called metric spaces, is introduced. The general concept of distance is related to a…
Continuity Properties of Distances for Markov Processes
DEFF Research Database (Denmark)
Jaeger, Manfred; Mao, Hua; Larsen, Kim Guldstrand
2014-01-01
In this paper we investigate distance functions on finite state Markov processes that measure the behavioural similarity of non-bisimilar processes. We consider both probabilistic bisimilarity metrics, and trace-based distances derived from standard Lp and Kullback-Leibler distances. Two desirable...
Distance learning: its advantages and disadvantages
KEGEYAN SVETLANA ERIHOVNA
2016-01-01
Distance learning has become popular in higher institutions because of its flexibility and availability to learners and teachers at anytime, regardless of geographic location. With so many definitions and phases of distance education, this paper only focuses on the delivery mode of distance education (the use of information technology), background, and its disadvantages and advantages for today’s learners.
Regional Moment Tensor Inversion for Source Type Identification
Dreger, D. S.; Ford, S. R.; Walter, W. R.
2008-12-01
With Green's functions from calibrated seismic velocity models it is possible to use regional-distance moment tensor inversion for source-type identification. The deviatoric and isotropic source components for 17 explosions at the Nevada Test Site, as well as 12 earthquakes and 3 collapses in the surrounding region of the western US, are calculated using a regional time-domain full waveform inversion for the complete moment tensor. The events separate into specific populations according to their deviation from a pure double-couple and the ratio of isotropic to deviatoric energy. The separation allows for anomalous event identification and discrimination between explosions, earthquakes, and collapses. Confidence regions of the model parameters are estimated from the data misfit by assuming normally distributed parameter values. We investigate the sensitivity of the resolved parameters of an explosion to imperfect Earth models, inaccurate event depths, and data with a low signal-to-noise ratio (SNR), assuming a reasonable azimuthal distribution of stations. In the band of interest (0.02-0.10 Hz), the source type calculated from the complete moment tensor inversion is insensitive to velocity-model perturbations that cause less than a half-cycle shift. The explosion source type is insensitive to an incorrect depth assumption (for a true depth of 1 km), and the goodness-of-fit of the inversion result cannot be used to resolve the true depth of the explosion. Noise degrades the explosive character of the result, and a good fit and accurate result are obtained when the SNR is greater than 5. We assess the depth and frequency dependence of the resolved explosive moment. As the depth decreases from 1 km to 200 m, the isotropic moment is no longer accurately resolved and is in error by 50-200%. However, even at the most shallow depth the resultant moment tensor is dominated by the explosive component when the data have a good SNR. Finally, the sensitivity
Beam's-Eye-View Dosimetrics-Guided Inverse Planning for Aperture-Modulated Arc Therapy
International Nuclear Information System (INIS)
Ma Yunzhi; Popple, Richard; Suh, Tae-Suk; Xing Lei
2009-01-01
Purpose: To use angular beam's-eye-view dosimetrics (BEVD) information to improve the computational efficiency and plan quality of inverse planning of aperture-modulated arc therapy (AMAT). Methods and Materials: In BEVD-guided inverse planning, the angular space spanned by a rotational arc is represented by a large number of fixed-gantry beams with angular spacing of ∼2.5 degrees. Each beam is assigned an initial aperture shape determined by the beam's-eye-view (BEV) projection of the planning target volume (PTV) and an initial weight. Instead of setting the beam weights arbitrarily, which slows down the subsequent optimization process and may result in a suboptimal solution, a priori knowledge about the quality of the beam directions derived from a BEVD is adopted to initialize the weights. In the BEVD calculation, a higher score is assigned to directions that allow more dose to be delivered to the PTV without exceeding the dose tolerances of the organs at risk (OARs), and vice versa. Simulated annealing is then used to optimize the segment shapes and weights. The BEVD-guided inverse planning is demonstrated using two clinical cases, and the results are compared with those of a conventional approach without BEVD guidance. Results: An a priori knowledge-guided inverse planning scheme for AMAT is established. The inclusion of BEVD guidance significantly improves the convergence behavior of AMAT inverse planning and results in much better OAR sparing as compared with the conventional approach. Conclusions: BEVD guidance facilitates AMAT treatment planning and provides a comprehensive tool to maximally use the technical capacity of the new arc therapeutic modality.
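The simulated annealing step used to optimize segment weights can be illustrated with a generic sketch; the quadratic objective below is a hypothetical surrogate for a real dose objective, and the cooling schedule and step size are illustrative choices:

```python
import math
import random

def simulated_annealing(cost, x0, step, t0=1.0, cooling=0.995,
                        iters=4000, seed=0):
    """Generic simulated annealing: accept worse moves with probability
    exp(-delta/T), then cool the temperature geometrically."""
    rng = random.Random(seed)
    x, c = list(x0), cost(x0)
    t = t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        cc = cost(cand)
        if cc < c or rng.random() < math.exp((c - cc) / t):
            x, c = cand, cc
        t *= cooling
    return x, c

# Hypothetical plan objective: a quadratic bowl around target beam weights.
target = [0.2, 0.5, 0.3]
def objective(w):
    return sum((wi - ti) ** 2 for wi, ti in zip(w, target))

weights, score = simulated_annealing(objective, [1.0, 1.0, 1.0], step=0.1)
```

A BEVD-informed starting point plays the role of `x0` here: the closer the initial weights are to good directions, the fewer annealing iterations are needed.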
Wu, Han; Jiang, Baofa; Zhu, Ping; Geng, Xingyi; Liu, Zhong; Cui, Liangliang; Yang, Liping
2018-02-01
When discussing the association between birth weight and air pollution, previous studies have mainly focused on maternal trimester-specific exposures during pregnancy, whereas possible associations between birth weight and weekly-specific exposures have been largely neglected. We conducted a nested 1:4 matched case-control study in Jinan, China, to examine the weekly-specific associations during pregnancy between maternal fine particulate matter (PM2.5, aerodynamic diameter < 2.5 μm), NO2, and SO2 exposures and birth weight; participants were recruited from a maternity and child care hospital of this city during 2014–2016. Individual exposures to PM2.5, NO2, and SO2 during pregnancy were estimated using an inverse distance weighting method. The birth weight for gender-, gestational age-, and parity-specific standard score (BWGAP z-score) was calculated as the outcome of interest. Distributed lag non-linear models (DLNMs) were applied to estimate the weekly-specific relationship between maternal air pollutant exposures and birth weight. For each inter-quartile-range increase in maternal PM2.5 exposure concentration during pregnancy, the BWGAP z-score decreased significantly during the 27th–33rd gestational weeks, with the strongest association in the 30th gestational week (decrease in BWGAP z-score: ‑0.049 standard deviation units, 95% CI: ‑0.080 to ‑0.017, in the three-pollutant model). No significant association between maternal weekly NO2 or SO2 exposure and the BWGAP z-score was observed. In conclusion, this study provides evidence that maternal PM2.5 exposure during the 27th–33rd gestational weeks may reduce birth weight in the context of a very high PM2.5 pollution level.
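The inverse distance weighting step used to estimate individual exposures can be sketched as follows; monitor coordinates and readings below are made up for illustration:

```python
import math

def idw(points, values, x, y, power=2.0):
    """Inverse distance weighted estimate at (x, y).

    points: list of (x, y) monitor coordinates
    values: pollutant concentrations at those monitors
    power:  distance exponent (2 is the common choice)
    """
    num = den = 0.0
    for (px, py), v in zip(points, values):
        d = math.hypot(x - px, y - py)
        if d == 0.0:                 # query point coincides with a monitor
            return v
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Hypothetical monitors and PM2.5 readings (ug/m3)
monitors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
readings = [80.0, 120.0, 100.0]
estimate = idw(monitors, readings, 2.0, 3.0)   # maternal residence
```

Each mother's weekly exposure is then the IDW estimate at her residence, averaged over the monitors' readings for that gestational week.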
Byun, Jinyoung; Han, Younghun; Gorlov, Ivan P; Busam, Jonathan A; Seldin, Michael F; Amos, Christopher I
2017-10-16
Accurate inference of genetic ancestry is of fundamental interest to many biomedical, forensic, and anthropological research areas. Genetic ancestry memberships may relate to genetic disease risks. In a genome association study, failing to account for differences in genetic ancestry between cases and controls may also lead to false-positive results. Although a number of strategies for inferring and taking into account the confounding effects of genetic ancestry are available, applying them to large studies (tens of thousands of samples) is challenging. The goal of this study is to develop an approach for inferring the genetic ancestry of samples with unknown ancestry among closely related populations and to provide accurate estimates of ancestry for application to large-scale studies. In this study we developed a novel distance-based approach, Ancestry Inference using Principal component analysis and Spatial analysis (AIPS), that incorporates an Inverse Distance Weighted (IDW) interpolation method from spatial analysis to assign individuals to population memberships. We demonstrate the benefits of AIPS in analyzing population substructure, comparing it with four of the most commonly used tools, EIGENSTRAT, STRUCTURE, fastSTRUCTURE, and ADMIXTURE, using genotype data from various intra-European panels and European-Americans. While these commonly used tools performed poorly in inferring ancestry from a large number of subpopulations, AIPS accurately distinguished variations between and within subpopulations. Our results show that AIPS can be applied to large-scale data sets to discriminate the modest variability among intra-continental populations as well as to characterize inter-continental variation. The method we developed will protect against spurious associations when mapping the genetic basis of a disease. Our approach is a more accurate and computationally efficient method for inferring genetic ancestry in large-scale genetic studies.
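The core AIPS idea can be sketched as IDW interpolation over reference individuals' principal-component coordinates: an unknown sample's population weights are the normalized inverse-distance weights of nearby reference panels. Coordinates and labels below are hypothetical, and this omits the PCA step itself:

```python
import math

def idw_membership(refs, labels, sample, power=2.0):
    """Assign a sample to populations by IDW over reference PC coordinates.

    refs:   list of (pc1, pc2) coordinates of reference individuals
    labels: population label of each reference individual
    Returns {label: weight}, normalized to sum to 1.
    """
    weights = {}
    total = 0.0
    for (x, y), lab in zip(refs, labels):
        d = math.hypot(sample[0] - x, sample[1] - y)
        w = 1.0 / (d ** power + 1e-12)     # guard against zero distance
        weights[lab] = weights.get(lab, 0.0) + w
        total += w
    return {lab: w / total for lab, w in weights.items()}

# Hypothetical 2-D PC coordinates for two reference subpopulations
refs = [(0.0, 0.0), (0.1, 0.1), (5.0, 5.0), (5.1, 4.9)]
labels = ["south", "south", "north", "north"]
member = idw_membership(refs, labels, (0.2, 0.0))
```

A hard assignment, when needed, is simply the label with the largest weight.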
Sleep quality is associated with weight loss maintenance status: the MedWeight study.
Yannakoulia, M; Anastasiou, C A; Karfopoulou, E; Pehlivanidis, A; Panagiotakos, D B; Vgontzas, A
2017-06-01
Sleep duration and quality have been associated with many health outcomes, including weight management. We aimed to investigate the effect of self-reported sleep duration and quality on weight loss maintenance in participants of the MedWeight study, a registry of individuals who lost at least 10% of their body weight in the past and either maintained the loss (maintainers: weight maintenance of at least 10% of initial weight loss) or regained it (regainers: weight ≥95% of their maximum body weight). Study participants included 528 volunteers (61% women). Sleep quantity referred to the reported duration of nocturnal sleep, as well as the frequency of mid-day naps during the last month. Sleep quality was assessed through the Athens Insomnia Scale (AIS). Reported sleep quantity was associated with weight maintenance status, but the association became non-significant when the AIS score entered the model. Specifically, the AIS score was inversely associated with the likelihood of being a maintainer (OR=0.89 per AIS unit, 95% CI: 0.81 - 0.98), even after adjusting for potential confounders. Sex-specific analysis revealed that the association between the AIS score and maintenance status was evident in men but not in women. Future studies are needed to confirm these results in other population groups and reveal underlying mechanisms.
Fact Sheet: Proven Weight Loss Methods. What can weight loss do for you? Losing weight can improve your health in a number of ways. ... limiting calories) usually isn't enough to cause weight loss. But exercise plays an important part in helping ...
Sharp Boundary Inversion of 2D Magnetotelluric Data using Bayesian Method.
Zhou, S.; Huang, Q.
2017-12-01
Normally, magnetotelluric (MT) inversion methods cannot show the distribution of underground resistivity with clear boundaries, even when the subsurface contains obviously distinct blocks. To solve this problem, we develop a Bayesian framework to invert 2D MT data for sharp boundaries, using the boundary locations and interior resistivities as the random variables. First, we use the results of other MT inversions, such as ModEM, to analyze the resistivity distribution roughly. Then, we select suitable random variables and convert them to traditional staggered-grid parameters, which are used in the finite-difference forward modeling. Finally, we construct the posterior probability density (PPD), which contains all the prior information and the model-data correlation, by Markov chain Monte Carlo (MCMC) sampling from the prior distribution. The depths, resistivities, and their uncertainties can then be estimated, and the PPD also supports sensitivity estimation. We applied the method to a synthetic case composed of two large anomalous blocks in a uniform background. When we impose boundary-smoothness and near-true-model weight constraints that mimic joint or constrained inversion, the model yields a more precise and focused depth distribution. We also tested the inversion without constraints and found that the boundaries could still be recovered, though not as well. Both inversions estimate the resistivity well. The constrained result has a lower root-mean-square misfit than the ModEM inversion result. The data sensitivity obtained via the PPD shows that resistivity is the most sensitive parameter, the center depth comes second, and the two side boundaries are the least well resolved.
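The MCMC sampling of the posterior can be illustrated with a minimal random-walk Metropolis sketch on a toy one-parameter forward model; this is a schematic of the sampler only, not the 2D MT finite-difference problem:

```python
import math
import random

def metropolis(log_post, x0, steps=20000, scale=0.5, seed=1):
    """Random-walk Metropolis sampler for a one-dimensional posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(steps):
        xp = x + rng.gauss(0.0, scale)
        lpp = log_post(xp)
        # Metropolis acceptance: always accept uphill, sometimes downhill
        if lpp > lp or rng.random() < math.exp(lpp - lp):
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy forward model d = 2*m with observed d_obs = 4 and unit noise:
# the posterior over m is Gaussian with mean 2 and std 0.5 (flat prior).
def log_post(m):
    resid = 4.0 - 2.0 * m
    return -0.5 * resid * resid

chain = metropolis(log_post, 0.0)
posterior_mean = sum(chain[2000:]) / len(chain[2000:])
```

In the paper's setting the state vector holds boundary locations and block resistivities, and `log_post` wraps a finite-difference forward solve; the acceptance rule is the same.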
Theim, Kelly R; Brown, Joshua D; Juarascio, Adrienne S; Malcolm, Robert R; O'Neil, Patrick M
2013-11-01
Greater self-regulatory behavior usage is associated with greater weight loss within behavioral weight loss treatments. Hedonic hunger (i.e., susceptibility to environmental food cues) may impede successful behavior change and weight loss. Adult men and women (N = 111, body mass index M ± SD = 35.89 ± 6.97 kg/m(2)) were assessed before and after a 15-week lifestyle change weight loss program with a partial meal-replacement diet. From pre- to post-treatment, reported weight control behavior usage improved and hedonic hunger decreased, and these changes were inversely related. Individuals with higher hedonic hunger scores at baseline showed the greatest weight loss. Similarly, participants with lower baseline use of weight control behaviors lost more weight, and increased weight control behavior usage was associated with greater weight loss-particularly among individuals with low baseline hedonic hunger. Further study is warranted regarding the significance of hedonic hunger in weight loss treatments.
Generating Signed Distance Fields From Triangle Meshes
DEFF Research Database (Denmark)
Bærentzen, Jakob Andreas; Aanæs, Henrik
A method for generating a discrete, signed 3D distance field is proposed. Distance fields are used in a number of contexts. In particular, the popular level set method is usually initialized by a distance field. The main focus of our work is on simplifying the computation of the sign when generating… This leads to a method for generating signed distance fields which is a simple and straightforward extension of the method for generating unsigned distance fields. We prove that our choice of pseudo normal leads to a correct technique for computing the sign.
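The sign test the abstract refers to can be sketched: form the angle-weighted pseudo normal (the sum of incident face normals weighted by their incident angles) at the closest mesh feature, then take the sign of its dot product with the vector from the closest point to the query point. A minimal illustration of the idea, using a cube corner as the closest feature:

```python
import math

def angle_weighted_pseudo_normal(face_normals, angles):
    """Sum of incident face normals, each weighted by its incident angle."""
    n = [0.0, 0.0, 0.0]
    for fn, a in zip(face_normals, angles):
        for i in range(3):
            n[i] += a * fn[i]
    return n

def distance_sign(p, c, n):
    """+1 if query point p lies outside the surface, -1 if inside,
    given the closest surface point c and the pseudo normal n at c."""
    d = sum((pi - ci) * ni for pi, ci, ni in zip(p, c, n))
    return 1 if d >= 0.0 else -1

# Corner vertex (1,1,1) of an axis-aligned unit cube: three incident
# faces, each contributing a 90-degree incident angle.
normal = angle_weighted_pseudo_normal(
    [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)],
    [math.pi / 2] * 3)
outside = distance_sign((2.0, 2.0, 2.0), (1.0, 1.0, 1.0), normal)
inside = distance_sign((0.5, 0.5, 0.5), (1.0, 1.0, 1.0), normal)
```

The angle weighting is what makes the test correct at vertices and edges, where a single face normal would be ambiguous.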
Galaxy Cluster Smashes Distance Record
2009-10-01
The most distant galaxy cluster yet has been discovered by combining data from NASA's Chandra X-ray Observatory and optical and infrared telescopes. The cluster is located about 10.2 billion light years away, and is observed as it was when the Universe was only about a quarter of its present age. The galaxy cluster, known as JKCS041, beats the previous record holder by about a billion light years. Galaxy clusters are the largest gravitationally bound objects in the Universe. Finding such a large structure at this very early epoch can reveal important information about how the Universe evolved at this crucial stage. JKCS041 is found at the cusp of when scientists think galaxy clusters can exist in the early Universe based on how long it should take for them to assemble. Therefore, studying its characteristics - such as composition, mass, and temperature - will reveal more about how the Universe took shape. "This object is close to the distance limit expected for a galaxy cluster," said Stefano Andreon of the National Institute for Astrophysics (INAF) in Milan, Italy. "We don't think gravity can work fast enough to make galaxy clusters much earlier." Distant galaxy clusters are often detected first with optical and infrared observations that reveal their component galaxies dominated by old, red stars. JKCS041 was originally detected in 2006 in a survey from the United Kingdom Infrared Telescope (UKIRT). The distance to the cluster was then determined from optical and infrared observations from UKIRT, the Canada-France-Hawaii telescope in Hawaii and NASA's Spitzer Space Telescope. Infrared observations are important because the optical light from the galaxies at large distances is shifted into infrared wavelengths because of the expansion of the universe. The Chandra data were the final - but crucial - piece of evidence as they showed that JKCS041 was, indeed, a genuine galaxy cluster. The extended X-ray emission seen by Chandra shows that hot gas has been detected
QCD-instantons and conformal inversion symmetry
International Nuclear Information System (INIS)
Klammer, D.
2006-07-01
Instantons are an essential and non-perturbative part of Quantum Chromodynamics, the theory of strong interactions. One of the most relevant quantities in the instanton calculus is the instanton-size distribution, which can be described on the one hand within the framework of instanton perturbation theory and on the other hand investigated numerically by means of lattice computations. A rapid onset of a drastic discrepancy between these respective results indicates that the underlying physics is not yet well understood. In this work we investigate the appealing possibility that a symmetry under conformal inversion of space-time leads to this deviation. The motivation is that the lattice data seem to be invariant under an inversion of the instanton size. Since the instanton solution of a given size turns into an anti-instanton solution having an inverted size under conformal inversion of space-time, we ask, in a first investigation, whether this property is transferred to the quantum level. In order to introduce a new scale, which is indicated by the lattice data and corresponds to the average instanton size as inversion radius, we project the instanton calculus onto the four-dimensional surface of a five-dimensional sphere via stereographic projection. The radius of this sphere is associated with the average instanton size. The result for the instanton size distribution projected onto the sphere agrees surprisingly well with the lattice data at the qualitative level. The resulting symmetry under an inversion of the instanton size is almost perfect. (orig.)
Unwrapped phase inversion with an exponential damping
Choi, Yun Seok
2015-07-28
Full-waveform inversion (FWI) suffers from the phase wrapping (cycle skipping) problem when the frequency of the data is not low enough. Unless we obtain a good initial velocity model, the phase wrapping problem in FWI causes the result to correspond to a local minimum, usually far away from the true solution, especially at depth. Thus, we have developed an inversion algorithm based on a space-domain unwrapped phase, and we also used exponential damping to mitigate the nonlinearity associated with the reflections. We construct the 2D phase residual map, which usually contains wrapping discontinuities, especially if the model is complex and the frequency is high. We then unwrap the phase map and remove these cycle-based jumps. However, if the phase map has several residues, the unwrapping process becomes very complicated. We apply a strong exponential damping to the wavefield to eliminate most of the residues in the phase map, thus making the unwrapping process simple. We finally invert the unwrapped phases using the back-propagation algorithm to calculate the gradient. We progressively reduce the damping factor to obtain a high-resolution image. Numerical examples showed that the unwrapped phase inversion with a strong exponential damping generated convergent long-wavelength updates without low-frequency information. This model can be used as a good starting model for a subsequent inversion with a reduced damping, eventually leading to conventional waveform inversion.
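The two key ingredients, unwrapping the phase and damping the wavefield exponentially in time, can be sketched in one dimension; this is a schematic only, not the paper's 2D residue-handling unwrapper:

```python
import math

def unwrap(phases):
    """Remove 2*pi jumps from a sequence of wrapped phases (1-D sketch)."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        # wrap the step into (-pi, pi] before accumulating
        d -= 2.0 * math.pi * round(d / (2.0 * math.pi))
        out.append(out[-1] + d)
    return out

def damp(trace, dt, s):
    """Exponential time damping exp(-s*t), suppressing late arrivals."""
    return [a * math.exp(-s * i * dt) for i, a in enumerate(trace)]

# A linearly increasing true phase, observed modulo 2*pi, is recovered
# exactly as long as successive steps stay below pi.
true_phase = [0.4 * i for i in range(30)]
wrapped = [math.atan2(math.sin(t), math.cos(t)) for t in true_phase]
recovered = unwrap(wrapped)
```

Damping the data before extracting phases reduces the number of residues, which is exactly why the 2D unwrapping step becomes simple; the damping factor `s` is then relaxed stage by stage.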
RUMBLE Technical Report on Inversion Models
Simons, Dick G.; Ainslie, Michael A.; Muller, Simonette H. E.; Boek, Wilco
2002-06-01
The performance of long range low frequency active sonar (LFAS) systems in shallow water is very sensitive to the properties of the sea bed, because of the impact of these on propagation, reverberation and (to a lesser extent) ambient noise. Direct measurement of sea bed parameters using cores or grab samples is impractical for covering a wide area, and instead we consider the possibility of using the LFAS system itself to measure its operating environment. The advantages of this approach are that it exploits existing (or planned) equipment and potentially offers a wide coverage. Geo-acoustic inversion methods are reviewed, with particular consideration for the problems associated with inversion of reverberation data. Three global optimisation methods are described, known as "simulated annealing", "genetic algorithms" and "differential evolution". The Levenberg-Marquardt and downhill simplex local methods are also described. The advantages and disadvantages of each individual method, as well as some hybrid combinations, are discussed in the context of geo-acoustic inversion. A new inversion method has been developed that exploits both the shape and height of the reverberation vs time curve to obtain information about the sea bed reflection loss and scattering strength separately. Tests on synthetic reverberation data show that the inversion method is able to extract parameters representing reflection loss and scattering strength, but cannot always unambiguously separate the effects of sediment sound speed and attenuation. The method is robust to small mismatches in water depth, sonar depth, sediment sound speed gradient and wind speed.
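Of the three global optimizers reviewed, differential evolution is the simplest to sketch (rand/1/bin variant). The misfit below is a hypothetical quadratic stand-in for a real reverberation misfit over sediment sound speed and attenuation:

```python
import random

def differential_evolution(f, bounds, pop_size=20, gens=100,
                           F=0.8, CR=0.9, seed=0):
    """Minimal rand/1/bin differential evolution minimizing misfit f."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)      # at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])
            tc = f(trial)
            if tc <= cost[i]:               # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

# Hypothetical misfit: recover (sound speed, attenuation) = (1600, 0.5)
def misfit(m):
    return (m[0] - 1600.0) ** 2 / 1e4 + (m[1] - 0.5) ** 2

best, err = differential_evolution(misfit,
                                   [(1400.0, 1800.0), (0.0, 1.0)])
```

In a real geo-acoustic inversion `misfit` would compare modeled and measured reverberation curves, which is where the ambiguity between sound speed and attenuation noted in the abstract shows up.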
Full wave-field reflection coefficient inversion.
Dettmer, Jan; Dosso, Stan E; Holland, Charles W
2007-12-01
This paper develops a Bayesian inversion for recovering multilayer geoacoustic (velocity, density, attenuation) profiles from a full wave-field (spherical-wave) seabed reflection response. The reflection data originate from acoustic time series windowed for a single bottom interaction, which are processed to yield reflection coefficient data as a function of frequency and angle. Replica data for inversion are computed using a wave number-integration model to calculate the full complex acoustic pressure field, which is processed to produce a commensurate seabed response function. To address the high computational cost of calculating short range acoustic fields, the inversion algorithms are parallelized and frequency averaging is replaced by range averaging in the forward model. The posterior probability density is interpreted in terms of optimal parameter estimates, marginal distributions, and credibility intervals. Inversion results for the full wave-field seabed response are compared to those obtained using plane-wave reflection coefficients. A realistic synthetic study indicates that the plane-wave assumption can fail, producing erroneous results with misleading uncertainty bounds, whereas excellent results are obtained with the full-wave reflection inversion.
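For contrast with the full wave-field response, the textbook plane-wave (Rayleigh) reflection coefficient at a fluid-fluid interface, the approximation the paper shows can fail, is straightforward to compute; the water and sediment values below are illustrative, not from the paper:

```python
import cmath
import math

def plane_wave_refl(rho1, c1, rho2, c2, grazing):
    """Rayleigh reflection coefficient at a fluid-fluid interface.

    grazing: grazing angle (radians) in the upper medium (water).
    The result becomes complex beyond the critical angle.
    """
    cos2 = (c2 / c1) * math.cos(grazing)       # Snell's law
    sin2 = cmath.sqrt(1.0 - cos2 * cos2)       # transmitted grazing sine
    z1 = rho1 * c1 / math.sin(grazing)         # effective impedances
    z2 = rho2 * c2 / sin2
    return (z2 - z1) / (z2 + z1)

# Water (1000 kg/m3, 1500 m/s) over a sandy sediment (1800 kg/m3,
# 1700 m/s) at normal incidence (grazing angle of 90 degrees).
r = plane_wave_refl(1000.0, 1500.0, 1800.0, 1700.0, math.pi / 2)
```

At short ranges the measured field contains spherical-wave effects this formula cannot capture, which is why the paper inverts the full complex pressure field instead.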