Glaz, Joseph
2009-01-01
Suitable for graduate students and researchers in applied probability and statistics, as well as for scientists in biology, computer science, pharmaceutical science and medicine, this title brings together a collection of chapters illustrating the depth and diversity of theory, methods and applications in the area of scan statistics.
Spatial scan statistics using elliptic windows
Christiansen, Lasse Engbo; Andersen, Jens Strodl; Wegener, Henrik Caspar
2006-01-01
The spatial scan statistic is widely used to search for clusters. This article shows that the usually applied elimination of secondary clusters as implemented in SatScan is sensitive to smooth changes in the shape of the clusters. We present an algorithm for generation of a set of confocal elliptic...
Development of scanning micromirror with discrete steering angles
Wang, Z F; Noell, W; Zickar, M; Rooij, N F de; Lim, S P
2006-01-01
This paper describes the development of a new MEMS-based optical mirror, which can perform an optical switching (or scanning) function with discrete reflection angles in an out-of-plane configuration. The device is fabricated through a Deep Reactive Ion Etching (DRIE) process on a silicon-on-insulator (SOI) wafer, followed by wafer dicing and assembly with two metalised glass dies. The MEMS mirror can be tilted under electrostatic force between the opposite electrodes embedded on the SOI and glass structures. The most outstanding feature of this MEMS mirror is its discrete and therefore reliable tilting angles, which are generated by its unique mechanical structural design and electrostatic drive mechanism. In this paper, the concept of the new scanning mirror is presented, followed by an introduction of the device design, mechanical simulation, microfabrication process, assembly solution, and some testing results. The potential applications of this new MEMS mirror include optical scanning, optical sensing (or detection), and optical switching.
Discrete ellipsoidal statistical BGK model and Burnett equations
Zhang, Yu-Dong; Xu, Ai-Guo; Zhang, Guang-Cai; Chen, Zhi-Hua; Wang, Pei
2018-06-01
A new discrete Boltzmann model, the discrete ellipsoidal statistical Bhatnagar-Gross-Krook (ES-BGK) model, is proposed to simulate nonequilibrium compressible flows. Compared with the original discrete BGK model, the discrete ES-BGK model has a flexible Prandtl number. For the discrete ES-BGK model at the Burnett level, two kinds of discrete velocity models are introduced, and the relations between the nonequilibrium quantities and the viscous stress and heat flux at the Burnett level are established. The model is verified via four benchmark tests. In addition, a new idea is introduced to recover the actual distribution function from the macroscopic quantities and their spatial derivatives. The recovery scheme works not only for discrete Boltzmann simulations but also for hydrodynamic ones, for example, those based on the Navier-Stokes or the Burnett equations.
A nonparametric spatial scan statistic for continuous data.
Jung, Inkyung; Cho, Ho Jin
2015-10-20
Spatial scan statistics are widely used for spatial cluster detection, and several parametric models exist. For continuous data, a normal-based scan statistic can be used. However, the performance of the model has not been fully evaluated for non-normal data. We propose a nonparametric spatial scan statistic based on the Wilcoxon rank-sum test statistic and compare the performance of the method with parametric models via a simulation study under various scenarios. The nonparametric method outperforms the normal-based scan statistic in terms of power and accuracy in almost all cases under consideration in the simulation study. The proposed nonparametric spatial scan statistic is therefore an excellent alternative to the normal model for continuous data and is especially useful for data following skewed or heavy-tailed distributions.
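The rank-based scan idea can be sketched as follows: for each candidate window, compare the values inside against those outside with a standardized Wilcoxon rank-sum statistic, and keep the largest absolute value over all windows. A minimal, dependency-free sketch (the function names and the set-of-indices window representation are illustrative, not from the paper):

```python
from math import sqrt

def rank_sum_z(inside, outside):
    """Standardized Wilcoxon rank-sum statistic for the 'inside' sample."""
    pooled = sorted((v, grp) for grp, vals in ((0, inside), (1, outside)) for v in vals)
    n = len(pooled)
    # assign midranks so that ties are handled correctly
    rank_of = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        mid = (i + j) / 2 + 1
        for k in range(i, j + 1):
            rank_of[k] = mid
        i = j + 1
    w = sum(r for r, (v, g) in zip(rank_of, pooled) if g == 0)
    n1, n2 = len(inside), len(outside)
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return (w - mu) / sigma

def scan_max_z(values, windows):
    """Max |z| over candidate windows; each window is a set of indices."""
    best = 0.0
    for win in windows:
        inside = [values[i] for i in win]
        outside = [values[i] for i in range(len(values)) if i not in win]
        best = max(best, abs(rank_sum_z(inside, outside)))
    return best
```

In a full implementation the maximum would then be compared against its Monte Carlo null distribution, exactly as with the parametric scan statistics.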
Huffman and linear scanning methods with statistical language models.
Roark, Brian; Fried-Oken, Melanie; Gibbons, Chris
2015-03-01
Current scanning access methods for text generation in AAC devices are limited to relatively few options, most notably row/column variations within a matrix. We present Huffman scanning, a new method for applying statistical language models to binary-switch, static-grid typing AAC interfaces, and compare it to other scanning options under a variety of conditions. We present results for 16 adults without disabilities and one 36-year-old man with locked-in syndrome who presents with complex communication needs and uses AAC scanning devices for writing. Huffman scanning with a statistical language model yielded significant typing speedups for the 16 participants without disabilities versus any of the other methods tested, including two row/column scanning methods. A similar pattern of results was found with the individual with locked-in syndrome. Interestingly, faster typing speeds were obtained with Huffman scanning using a more leisurely scan rate than relatively fast individually calibrated scan rates. Overall, the results reported here demonstrate great promise for the usability of Huffman scanning as a faster alternative to row/column scanning.
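Huffman scanning assigns shorter switch sequences to more probable letters. The sketch below builds a binary Huffman code from a hypothetical letter-probability table (the probabilities and symbol set are illustrative; the paper's statistical language models are far richer than a static unigram table):

```python
import heapq
import itertools

def huffman_code(probs):
    """Build a binary Huffman code: dict mapping symbol -> bitstring."""
    counter = itertools.count()  # tie-breaker so the heap never compares dicts
    heap = [(p, next(counter), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        # prepend the branch bit to every codeword in each subtree
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(counter), merged))
    return heap[0][2]
```

For the toy table `{'e': 0.5, 't': 0.25, 'a': 0.15, 'q': 0.10}` the expected code length is 1.75 bits per letter versus 2 bits for a fixed-length code over four symbols, which is the source of the typing speedup.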
Data-driven inference for the spatial scan statistic.
Almeida, Alexandre C L; Duarte, Anderson R; Duczmal, Luiz H; Oliveira, Fernando L P; Takahashi, Ricardo H C
2011-08-02
Kulldorff's spatial scan statistic for aggregated area maps searches for clusters of cases without specifying their size (number of areas) or geographic location in advance. Their statistical significance is tested while adjusting for the multiple testing inherent in such a procedure. However, as is shown in this work, this adjustment is not done in an even manner for all possible cluster sizes. A modification is proposed to the usual inference test of the spatial scan statistic, incorporating additional information about the size of the most likely cluster found. A new interpretation of the results of the spatial scan statistic is given, posing a modified inference question: what is the probability that the null hypothesis is rejected for the original observed cases map with a most likely cluster of size k, taking into account only those most likely clusters of size k found under the null hypothesis for comparison? This question is especially important when the p-value computed by the usual inference process is near the alpha significance level, regarding the correctness of the decision based on this inference. A practical procedure is provided to make more accurate inferences about the most likely cluster found by the spatial scan statistic.
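The modified inference can be sketched with a toy Monte Carlo procedure: find the most likely cluster and its size k, then compute the p-value using only those null replicates whose most likely cluster also has size k. Everything below (the set-of-indices windows and the simple observed/expected score) is a simplified stand-in for Kulldorff's likelihood-ratio machinery:

```python
import random

def most_likely_cluster(counts, windows, expected):
    """Toy 'most likely cluster': the window maximizing observed/expected.
    Returns (score, size in areas)."""
    best_score, best_size = -1.0, 0
    for win in windows:
        obs = sum(counts[i] for i in win)
        exp = sum(expected[i] for i in win)
        score = obs / exp if exp > 0 else 0.0
        if score > best_score:
            best_score, best_size = score, len(win)
    return best_score, best_size

def conditional_p_value(obs_counts, windows, expected, n_rep=200, seed=1):
    """Monte Carlo p-value conditioned on the observed cluster size k."""
    rng = random.Random(seed)
    total = sum(obs_counts)
    obs_score, k = most_likely_cluster(obs_counts, windows, expected)
    regions = list(range(len(obs_counts)))
    same_k_scores = []
    for _ in range(n_rep):
        # multinomial null: each case falls in a region w.p. proportional to expected
        sim = [0] * len(obs_counts)
        for i in rng.choices(regions, weights=expected, k=total):
            sim[i] += 1
        score, size = most_likely_cluster(sim, windows, expected)
        if size == k:
            same_k_scores.append(score)
    exceed = sum(1 for s in same_k_scores if s >= obs_score)
    return (exceed + 1) / (len(same_k_scores) + 1)
```

The only change from the usual inference is the `if size == k` filter; dropping it recovers the standard unconditional Monte Carlo p-value.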
Parametric statistical inference for discretely observed diffusion processes
Pedersen, Asger Roer
Part 1: Theoretical results. Part 2: Statistical applications of Gaussian diffusion processes in freshwater ecology.
A critical look at prospective surveillance using a scan statistic.
Correa, Thais R; Assunção, Renato M; Costa, Marcelo A
2015-03-30
The scan statistic is a very popular surveillance technique for purely spatial, purely temporal, and spatial-temporal disease data. It was extended to the prospective surveillance case, and it has been applied quite extensively in this situation. When the usual signal rules, such as those implemented in SaTScan(TM) (Boston, MA, USA) software, are used, we show that the scan statistic method is not appropriate for the prospective case. The reason is that it does not adjust properly for the sequential and repeated tests carried out during the surveillance. We demonstrate that the nominal significance level α is not meaningful and there is no relationship between α and the recurrence interval or the average run length (ARL). In some cases, the ARL may be equal to ∞, which makes the method ineffective. This lack of control of the type-I error probability and of the ARL leads us to strongly oppose the use of the scan statistic with the usual signal rules in the prospective context. Copyright © 2014 John Wiley & Sons, Ltd.
Universality of correlations of levels with discrete statistics
Brezin, Edouard; Kazakov, Vladimir
1999-01-01
We study the statistics of a system of N random levels with integer values, in the presence of a logarithmic repulsive potential of Dyson type. This problem arises in sums over representations (Young tableaux) of GL(N) in various matrix problems and in the study of statistics of partitions for the permutation group. The model is generalized to include an external source and its correlators are found in closed form for any N. We reproduce the density of levels in the large N and double scalin...
A study on the use of Gumbel approximation with the Bernoulli spatial scan statistic.
Read, S; Bath, P A; Willett, P; Maheswaran, R
2013-08-30
The Bernoulli version of the spatial scan statistic is a well established method of detecting localised spatial clusters in binary labelled point data, a typical application being the epidemiological case-control study. A recent study suggests the inferential accuracy of several versions of the spatial scan statistic (principally the Poisson version) can be improved, at little computational cost, by using the Gumbel distribution, a method now available in SaTScan(TM) (www.satscan.org). We study in detail the effect of this technique when applied to the Bernoulli version and demonstrate that it is highly effective, albeit with some increase in false alarm rates at certain significance thresholds. We explain how this increase is due to the discrete nature of the Bernoulli spatial scan statistic and demonstrate that it can affect even small p-values. Despite this, we argue that the Gumbel method is actually preferable for very small p-values. Furthermore, we extend previous research by running benchmark trials on 12 000 synthetic datasets, thus demonstrating that the overall detection capability of the Bernoulli version (i.e. ratio of power to false alarm rate) is not noticeably affected by the use of the Gumbel method. We also provide an example application of the Gumbel method using data on hospital admissions for chronic obstructive pulmonary disease. Copyright © 2013 John Wiley & Sons, Ltd.
Statistical characterization of discrete conservative systems: The web map
Ruiz, Guiomar; Tirnakli, Ugur; Borges, Ernesto P.; Tsallis, Constantino
2017-10-01
We numerically study the two-dimensional, area-preserving web map. When the map is governed by ergodic behavior, it is, as expected, correctly described by Boltzmann-Gibbs statistics, based on the additive entropic functional $S_{BG}[p(x)] = -k \int dx\, p(x) \ln p(x)$. In contrast, possible ergodicity breakdown and transitory sticky dynamical behavior drag the map into the realm of generalized q-statistics, based on the nonadditive entropic functional $S_q[p(x)] = k\,\frac{1 - \int dx\, [p(x)]^q}{q - 1}$ ($q \in \mathbb{R}$; $S_1 = S_{BG}$). We statistically describe the system (probability distribution of the sum of successive iterates, sensitivity to the initial condition, and entropy production per unit time) for typical values of the parameter that controls the ergodicity of the map. For small (large) values of the external parameter K, we observe q-Gaussian distributions with q = 1.935... (Gaussian distributions), like for the standard map. In contrast, for intermediate values of K, we observe a different scenario, due to the fractal structure of the trajectories embedded in the chaotic sea. Long-standing non-Gaussian distributions are characterized in terms of the kurtosis and the box-counting dimension of the chaotic sea.
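The q-Gaussian distributions mentioned here maximize $S_q$ under suitable constraints and reduce to the ordinary Gaussian as q → 1. A small sketch of the (unnormalized) density via the q-exponential, to make the heavy-tail behavior concrete (the value q = 1.935 is the one reported in the abstract; β is an arbitrary illustrative width):

```python
from math import exp

def q_exponential(x, q):
    """e_q(x) = [1 + (1-q) x]_+^{1/(1-q)}, with e_1(x) = exp(x) as the limit."""
    if abs(q - 1.0) < 1e-12:
        return exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def q_gaussian(x, q, beta=1.0):
    """Unnormalized q-Gaussian density: e_q(-beta x^2)."""
    return q_exponential(-beta * x * x, q)
```

For q > 1 the tails decay as a power law rather than exponentially, which is why sticky, non-ergodic regimes show up so clearly in the distribution of sums of iterates.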
Discrete changes of current statistics in periodically driven stochastic systems
Chernyak, Vladimir Y; Sinitsyn, N A
2010-01-01
We demonstrate that the counting statistics of currents in periodically driven ergodic stochastic systems can show sharp changes of some of its properties in response to continuous changes of the driving protocol. To describe this effect, we introduce a new topological phase factor in the evolution of the moment generating function which is akin to the topological geometric phase in the evolution of a periodically driven quantum mechanical system with time-reversal symmetry. This phase leads to the prediction of a sign change for the difference of the probabilities to find even and odd numbers of particles transferred in a stochastic system in response to cyclic evolution of control parameters. The driving protocols that lead to this sign change should enclose specific degeneracy points in the space of control parameters. The relation between the topology of the paths in the control parameter space and the sign changes can be described in terms of the first Stiefel–Whitney class of topological invariants.
A spatial scan statistic for compound Poisson data.
Rosychuk, Rhonda J; Chang, Hsing-Ming
2013-12-20
The topic of spatial cluster detection gained attention in statistics during the late 1980s and early 1990s. Effort has been devoted to the development of methods for detecting spatial clustering of cases and events in the biological sciences, astronomy and epidemiology. More recently, research has examined detecting clusters of correlated count data associated with health conditions of individuals. Such a method allows researchers to examine spatial relationships of disease-related events rather than just incident or prevalent cases. We introduce a spatial scan test that identifies clusters of events in a study region. Because an individual case may have multiple (repeated) events, we base the test on a compound Poisson model. We illustrate our method for cluster detection on emergency department visits, where individuals may make multiple disease-related visits. Copyright © 2013 John Wiley & Sons, Ltd.
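A compound Poisson count can be simulated directly: draw a Poisson number of individuals, then sum a random number of events per individual. A minimal sketch (the pluggable `events_dist` function is illustrative; the paper's model for repeated emergency-department visits is richer):

```python
import random
from math import exp

def poisson_sample(rng, lam):
    """Poisson draw via Knuth's product method (fine for small lam)."""
    L, k, p = exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_region_events(rng, mean_individuals, events_dist):
    """Compound Poisson total for one region: N ~ Poisson(mean_individuals)
    individuals, each contributing events_dist(rng) events."""
    n = poisson_sample(rng, mean_individuals)
    return sum(events_dist(rng) for _ in range(n))
```

With `events_dist` returning the constant 1, the total reduces to an ordinary Poisson count, which is the special case the classical scan statistic handles.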
Identifying clusters of active transportation using spatial scan statistics.
Huang, Lan; Stinchcomb, David G; Pickle, Linda W; Dill, Jennifer; Berrigan, David
2009-08-01
There is an intense interest in the possibility that neighborhood characteristics influence active transportation such as walking or biking. The purpose of this paper is to illustrate how a spatial cluster identification method can evaluate the geographic variation of active transportation and identify neighborhoods with unusually high/low levels of active transportation. Self-reported walking/biking prevalence, demographic characteristics, street connectivity variables, and neighborhood socioeconomic data were collected from respondents to the 2001 California Health Interview Survey (CHIS; N=10,688) in Los Angeles County (LAC) and San Diego County (SDC). Spatial scan statistics were used to identify clusters of high or low prevalence (with and without age-adjustment) and the quantity of time spent walking and biking. The data, a subset from the 2001 CHIS, were analyzed in 2007-2008. Geographic clusters of significantly high or low prevalence of walking and biking were detected in LAC and SDC. Structural variables such as street connectivity and shorter block lengths are consistently associated with higher levels of active transportation, but associations between active transportation and socioeconomic variables at the individual and neighborhood levels are mixed. Only one cluster with less time spent walking and biking among walkers/bikers was detected in LAC, and this was of borderline significance. Age-adjustment affects the clustering pattern of walking/biking prevalence in LAC, but not in SDC. The use of spatial scan statistics to identify significant clustering of health behaviors such as active transportation adds to the more traditional regression analysis that examines associations between behavior and environmental factors by identifying specific geographic areas with unusual levels of the behavior independent of predefined administrative units.
Brazilian Amazonia Deforestation Detection Using Spatio-Temporal Scan Statistics
Vieira, C. A. O.; Santos, N. T.; Carneiro, A. P. S.; Balieiro, A. A. S.
2012-07-01
The spatio-temporal models developed for analyses of diseases can also be used in other fields of study, including concerns about forests and deforestation. The aim of this paper is to quantitatively identify priority areas for combating deforestation in the Amazon forest, using the space-time scan statistic. The study area is located in the south of Amazonas State and covers around 297,183 square kilometres, including the municipalities of Boca do Acre, Labrea, Canutama, Humaita, Manicore, Novo Aripuana and Apui, in the north region of Brazil. This area has shown significant land-cover change, which has increased the number of deforestation alerts. This situation is therefore a concern and warrants further investigation into the factors that increase the number of cases in the area. The methodology uses the location and year in which each deforestation alert occurred. These alerts are mapped by DETER (Detection System of Deforestation in Real Time in Amazonia), which is carried out by the Brazilian Space Agency (INPE). The software SaTScan(TM) v7.0 was used to define the space-time permutation scan statistic for detection of deforestation cases. The outcome of this experiment shows an efficient model to detect space-time clusters of deforestation alerts. The model was efficient at detecting the location, size, order and characteristics of the clusters by the end of the experiments. Two clusters were considered active and remained active up to the end of the study; these clusters are located in Canutama and Lábrea counties. This quantitative spatial modelling of deforestation warnings allowed: firstly, identifying active clusters of deforestation, on which environmental government officials can concentrate their actions; secondly, identifying historic clusters of deforestation, which officials can monitor in order to prevent them becoming active again; and finally
Statistical inference for discrete-time samples from affine stochastic delay differential equations
Küchler, Uwe; Sørensen, Michael
2013-01-01
Statistical inference for discrete time observations of an affine stochastic delay differential equation is considered. The main focus is on maximum pseudo-likelihood estimators, which are easy to calculate in practice. A more general class of prediction-based estimating functions is investigated...
CROSAT: A digital computer program for statistical-spectral analysis of two discrete time series
Antonopoulos Domis, M.
1978-03-01
The program CROSAT computes directly from two discrete time series auto- and cross-spectra, transfer and coherence functions, using a Fast Fourier Transform subroutine. Statistical analysis of the time series is optional. While of general use the program is constructed to be immediately compatible with the ICL 4-70 and H316 computers at AEE Winfrith, and perhaps with minor modifications, with any other hardware system. (author)
Fan, Tingbo; Liu, Zhenbo; Zhang, Dong; Tang, Mengxing
2013-03-01
Lesion formation and temperature distribution induced by high-intensity focused ultrasound (HIFU) were investigated both numerically and experimentally via two energy-delivering strategies, i.e., sequential discrete and continuous scanning modes. Simulations were presented based on the combination of Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation and bioheat equation. Measurements were performed on tissue-mimicking phantoms sonicated by a 1.12-MHz single-element focused transducer working at an acoustic power of 75 W. Both the simulated and experimental results show that, in the sequential discrete mode, obvious saw-tooth-like contours could be observed for the peak temperature distribution and the lesion boundaries, with the increasing interval space between two adjacent exposure points. In the continuous scanning mode, more uniform peak temperature distributions and lesion boundaries would be produced, and the peak temperature values would decrease significantly with the increasing scanning speed. In addition, compared to the sequential discrete mode, the continuous scanning mode could achieve higher treatment efficiency (lesion area generated per second) with a lower peak temperature. The present studies suggest that the peak temperature and tissue lesion resulting from the HIFU exposure could be controlled by adjusting the transducer scanning speed, which is important for improving the HIFU treatment efficiency.
A flexible spatial scan statistic with a restricted likelihood ratio for detecting disease clusters.
Tango, Toshiro; Takahashi, Kunihiko
2012-12-30
Spatial scan statistics are widely used tools for detection of disease clusters. In particular, the circular spatial scan statistic proposed by Kulldorff (1997) has been utilized in a wide variety of epidemiological studies and disease surveillance. However, as it cannot detect noncircular, irregularly shaped clusters, many authors have proposed different spatial scan statistics, including the elliptic version of Kulldorff's scan statistic. The flexible spatial scan statistic proposed by Tango and Takahashi (2005) has also been used for detecting irregularly shaped clusters. However, this method imposes a practical limit of a maximum of 30 nearest neighbors when searching candidate clusters, because of its heavy computational load. In this paper, we show that a flexible spatial scan statistic implemented with the restricted likelihood ratio proposed by Tango (2008) (1) eliminates the limitation of 30 nearest neighbors and (2) requires surprisingly much less computational time than the original flexible spatial scan statistic. As a side effect, Monte Carlo simulation shows that it can detect clusters of any shape reasonably well as the relative risk of the cluster becomes large. We illustrate the proposed spatial scan statistic with data on mortality from cerebrovascular disease in the Tokyo Metropolitan area, Japan. Copyright © 2012 John Wiley & Sons, Ltd.
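Scan statistics of this family score each candidate window with a Poisson likelihood ratio: for a window with c observed and e expected cases out of C total cases, the log-likelihood ratio (for high-rate clusters) has a closed form. A sketch, assuming Kulldorff's Poisson model; the restricted version of Tango (2008) additionally constrains which regions may join a candidate cluster, which is not shown here:

```python
from math import log

def kulldorff_llr(c, e, C):
    """Poisson log-likelihood ratio for a window with c observed and e
    expected cases, out of C total cases (high-rate clusters only)."""
    if c <= e or e <= 0:
        return 0.0  # no excess inside the window
    llr = c * log(c / e)
    if C > c:
        llr += (C - c) * log((C - c) / (C - e))
    return llr
```

The scan statistic is the maximum of this quantity over all candidate windows, with significance assessed by Monte Carlo replication.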
The Role of Preference Axioms and Respondent Behaviour in Statistical Models for Discrete Choice
Hougaard, Jens Leth; Tjur, Tue; Østerdal, Lars Peter
Discrete choice experiments are widely used in relation to healthcare. A stream of recent literature therefore aims at testing the validity of the underlying preference axioms of completeness and transitivity, and detecting other preference phenomena such as instability, learning/tiredness effects, ordering effects, dominance, etc. Unfortunately there seems to be some confusion about what is actually being tested, and the link between the statistical tests performed and the relevant underlying model of respondent behaviour has not been explored in this literature. The present paper tries to clarify...
Farrell, Patricio; Koprucki, Thomas; Fuhrmann, Jürgen
2017-01-01
We compare three thermodynamically consistent numerical fluxes known in the literature, appearing in a Voronoï finite volume discretization of the van Roosbroeck system with general charge carrier statistics. Our discussion includes an extension of the Scharfetter–Gummel scheme to non-Boltzmann (e.g. Fermi–Dirac) statistics. It is based on the analytical solution of a two-point boundary value problem obtained by projecting the continuous differential equation onto the interval between neighboring collocation points. Hence, it serves as a reference flux. The exact solution of the boundary value problem can be approximated by computationally cheaper fluxes which modify certain physical quantities. One alternative scheme averages the nonlinear diffusion (caused by the non-Boltzmann nature of the problem), another one modifies the effective density of states. To study the differences between these three schemes, we analyze the Taylor expansions, derive an error estimate, visualize the flux error and show how the schemes perform for a carefully designed p-i-n benchmark simulation. We present strong evidence that the flux discretization based on averaging the nonlinear diffusion has an edge over the scheme based on modifying the effective density of states.
Sakhr, Jamal; Nieminen, John M.
2018-03-01
Two decades ago, Wang and Ong, [Phys. Rev. A 55, 1522 (1997)], 10.1103/PhysRevA.55.1522 hypothesized that the local box-counting dimension of a discrete quantum spectrum should depend exclusively on the nearest-neighbor spacing distribution (NNSD) of the spectrum. In this Rapid Communication, we validate their hypothesis by deriving an explicit formula for the local box-counting dimension of a countably-infinite discrete quantum spectrum. This formula expresses the local box-counting dimension of a spectrum in terms of single and double integrals of the NNSD of the spectrum. As applications, we derive an analytical formula for Poisson spectra and closed-form approximations to the local box-counting dimension for spectra having Gaussian orthogonal ensemble (GOE), Gaussian unitary ensemble (GUE), and Gaussian symplectic ensemble (GSE) spacing statistics. In the Poisson and GOE cases, we compare our theoretical formulas with the published numerical data of Wang and Ong and observe excellent agreement between their data and our theory. We also study numerically the local box-counting dimensions of the Riemann zeta function zeros and the alternate levels of GOE spectra, which are often used as numerical models of spectra possessing GUE and GSE spacing statistics, respectively. In each case, the corresponding theoretical formula is found to accurately describe the numerically computed local box-counting dimension.
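Box-counting on a spectrum simply counts how many boxes of size ε contain at least one level and watches how that count scales with ε. A toy two-scale estimator for a 1D point set (a crude numerical stand-in for the paper's analytical local box-counting formulas):

```python
from math import log

def box_counting_dimension(points, eps1, eps2):
    """Estimate the box-counting dimension of a 1D point set from box
    occupancy counts at two scales eps1 > eps2."""
    def n_boxes(eps):
        # each point falls in box number floor(p / eps); count distinct boxes
        return len({int(p / eps) for p in points})
    n1, n2 = n_boxes(eps1), n_boxes(eps2)
    return log(n2 / n1) / log(eps1 / eps2)
```

A set that fills an interval densely gives dimension near 1, while a finite set of isolated points gives dimension 0; spectra with level repulsion interpolate between such regimes locally, which is what the NNSD-based formula captures.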
A Scan Statistic for Continuous Data Based on the Normal Probability Model
Konty, Kevin; Kulldorff, Martin; Huang, Lan
2009-01-01
Abstract Temporal, spatial and space-time scan statistics are commonly used to detect and evaluate the statistical significance of temporal and/or geographical disease clusters, without any prior assumptions on the location, time period or size of those clusters. Scan statistics are mostly used for count data, such as disease incidence or mortality. Sometimes there is an interest in looking for clusters with respect to a continuous variable, such as lead levels in children or low birth weight...
Muhammad Zainuddin Lubis
2017-05-01
The Punggur Sea, located off Batam in the Riau Islands, contains a variety of habitat objects and seabed structures, with very high ocean dynamics. Side scan sonar (SSS) is a sonar-system instrument capable of showing, in a two-dimensional image of the seabed surface, the contour conditions, topography, and targets simultaneously. The Beam Pattern Discrete-Equi-Spaced Unshaded Line Array method is used to compute the two-dimensional beam pattern, which depends on the angle at which the incoming sound wave arrives relative to the array axis. This research was conducted in December 2016 in the Punggur Sea, Batam, Riau Islands, Indonesia, at coordinates 104°08.7102'E, 1°03.2448'N to 1°03.3977'N, 104°08.8133'E, using a Side Scan Sonar C-Max CM2 towfish instrument with a frequency of 325 kHz. The recordings yielded 7 targets, and with the Beam Pattern Discrete-Equi-Spaced Unshaded Line Array method, target 4 had the highest directivity-pattern value, 21.08 dB. For target 6, the centre of the modelled beam pattern (incidence angle, in degrees, versus directivity pattern, in dB) was not at 0 or at the centre of the generated beam pattern; at incidence angles of -1.5° and 1.5° it dropped to -40 dB. The seabed sediment characteristics in the Punggur Sea were found to be mostly sand. The Beam Pattern Discrete-Equi-Spaced Unshaded Line Array method also revealed a sunken shipwreck. Keywords: Side Scan Sonar, Beam Pattern Discrete-Equi-Spaced Unshaded Line Array, Incidence angle, Directivity pattern. (English title: IDENTIFICATION OF SEABED PROFILE USING SIDE SCAN SONAR INSTRUMENT WITH PATTERN DISCRETE-EQUI-SPACED UNSHADED LINE ARRAY METHOD)
The Role of Discrete Global Grid Systems in the Global Statistical Geospatial Framework
Purss, M. B. J.; Peterson, P.; Minchin, S. A.; Bermudez, L. E.
2016-12-01
The United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM) has proposed the development of a Global Statistical Geospatial Framework (GSGF) as a mechanism for the establishment of common analytical systems that enable the integration of statistical and geospatial information. Conventional coordinate reference systems address the globe with a continuous field of points suitable for repeatable navigation and analytical geometry. While this continuous field is represented on a computer in a digitized and discrete fashion by tuples of fixed-precision floating point values, it is a non-trivial exercise to relate point observations spatially referenced in this way to areal coverages on the surface of the Earth. The GSGF states the need to move to gridded data delivery and the importance of using common geographies and geocoding. The challenges associated with meeting these goals are not new, and there has been a significant effort within the geospatial community over many years to develop nested gridding standards to tackle these issues. These efforts have recently culminated in the development of a Discrete Global Grid Systems (DGGS) standard, developed under the auspices of the Open Geospatial Consortium (OGC). DGGS provide a fixed, areal-based geospatial reference frame for the persistent location of measured Earth observations, feature interpretations, and modelled predictions. DGGS address the entire planet by partitioning it into a discrete hierarchical tessellation of progressively finer resolution cells, which are referenced by a unique index that facilitates rapid computation, query and analysis. The geometry and location of the cell is the principal aspect of a DGGS. Data integration, decomposition, and aggregation are optimised in the DGGS hierarchical structure and can be exploited for efficient multi-source data processing, storage, discovery, transmission, visualization, computation, analysis, and modelling. During
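The hierarchical cell indexing described here can be made concrete with a toy sketch. The grid below is purely illustrative (a naive lat/lon quartering, not the OGC DGGS standard): each extra index digit refines the cell, so coarse cells are prefixes of fine ones, which is what makes aggregation and decomposition cheap.

```python
# Illustrative sketch (not the OGC DGGS standard): a toy hierarchical grid
# that indexes a point by recursively quartering a lat/lon extent.
# Cell IDs are digit strings; each extra digit is one finer resolution level.

def cell_index(lat, lon, resolution):
    """Return a hierarchical cell ID for a point at the given resolution."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    digits = []
    for _ in range(resolution):
        lat_mid = (lat_lo + lat_hi) / 2
        lon_mid = (lon_lo + lon_hi) / 2
        quadrant = (2 if lat >= lat_mid else 0) + (1 if lon >= lon_mid else 0)
        digits.append(str(quadrant))
        lat_lo, lat_hi = (lat_mid, lat_hi) if lat >= lat_mid else (lat_lo, lat_mid)
        lon_lo, lon_hi = (lon_mid, lon_hi) if lon >= lon_mid else (lon_lo, lon_mid)
    return "".join(digits)

# Nearby points share an index prefix, and a coarser-resolution cell ID is
# always a prefix of the finer one for the same point.
a = cell_index(51.5, -0.1, 8)   # one point
b = cell_index(51.6, -0.2, 8)   # a nearby point
print(a, b, a[:4] == b[:4])
```

Real DGGS implementations use equal-area cells and carefully chosen tessellations; this sketch only demonstrates the prefix-hierarchy property that enables fast query and aggregation.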
Carter, Jeffrey R.; Simon, Wayne E.
1990-08-01
Neural networks are trained using Recursive Error Minimization (REM) equations to perform statistical classification. Using REM equations with continuous input variables reduces the required number of training experiences by factors of one to two orders of magnitude over standard back propagation. Replacing the continuous input variables with discrete binary representations reduces the number of connections by a factor proportional to the number of variables, reducing the required number of experiences by another order of magnitude. Undesirable effects of using recurrent experience to train neural networks for statistical classification problems are demonstrated, and nonrecurrent experience is used to avoid these undesirable effects. 1. THE I-4I PROBLEM The statistical classification problem which we address is that of assigning points in d-dimensional space to one of two classes. The first class has a covariance matrix of I (the identity matrix); the covariance matrix of the second class is 4I. For this reason the problem is known as the I-4I problem. Both classes have equal probability of occurrence and samples from both classes may appear anywhere throughout the d-dimensional space. Most samples near the origin of the coordinate system will be from the first class while most samples away from the origin will be from the second class. Since the two classes completely overlap it is impossible to have a classifier with zero error. The minimum possible error is known as the Bayes error and
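A minimal sketch of the I-4I setup, assuming the standard Bayes rule for two zero-mean Gaussians with covariances I and 4I (the rule depends only on the squared radius); the dimension and sample counts below are arbitrary:

```python
import math
import random

# Sketch of the "I-4I" classification problem: two zero-mean Gaussian
# classes in d dimensions with covariances I and 4I. The Bayes rule depends
# only on r2 = ||x||^2; class 1 (cov I) is chosen when
# r2 < (4*d/3) * log(4), from equating the two log-densities.

def bayes_classify(x, d):
    r2 = sum(v * v for v in x)
    return 1 if r2 < (4.0 * d / 3.0) * math.log(4.0) else 2

def sample(cls, d, rng):
    sigma = 1.0 if cls == 1 else 2.0   # cov 4I -> std dev 2 per coordinate
    return [rng.gauss(0.0, sigma) for _ in range(d)]

rng = random.Random(0)
d, n = 8, 20000
errors = 0
for _ in range(n):
    cls = rng.choice([1, 2])
    if bayes_classify(sample(cls, d, rng), d) != cls:
        errors += 1
print("estimated Bayes error:", errors / n)  # nonzero: the classes overlap
```

The Monte Carlo estimate illustrates why no classifier, neural or otherwise, can reach zero error on this problem.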
Lynn, Fiona A; Crealey, Grainne E; Alderdice, Fiona A; McElnay, James C
2015-10-01
Establish maternal preferences for a third-trimester ultrasound scan in a healthy, low-risk pregnant population. Cross-sectional study incorporating a discrete choice experiment. A large, urban maternity hospital in Northern Ireland. One hundred and forty-six women in their second trimester of pregnancy. A discrete choice experiment was designed to elicit preferences for four attributes of a third-trimester ultrasound scan: health-care professional conducting the scan, detection rate for abnormal foetal growth, provision of non-medical information, cost. Additional data collected included age, marital status, socio-economic status, obstetric history, pregnancy-specific stress levels, perceived health and whether pregnancy was planned. Analysis was undertaken using a mixed logit model with interaction effects. Women's preferences for, and trade-offs between, the attributes of a hypothetical scan and indirect willingness-to-pay estimates. Women had significant positive preference for higher rate of detection, lower cost and provision of non-medical information, with no significant value placed on scan operator. Interaction effects revealed subgroups that valued the scan most: women experiencing their first pregnancy, women reporting higher levels of stress, an adverse obstetric history and older women. Women were able to trade on aspects of care and place relative importance on clinical, non-clinical outcomes and processes of service delivery, thus highlighting the potential of using health utilities in the development of services from a clinical, economic and social perspective. Specifically, maternal preferences exhibited provide valuable information for designing a randomized trial of effectiveness and insight for clinical and policy decision makers to inform woman-centred care. © 2013 Blackwell Publishing Ltd.
Statistical image reconstruction methods for simultaneous emission/transmission PET scans
Erdogan, H.; Fessler, J.A.
1996-01-01
Transmission scans are necessary for estimating the attenuation correction factors (ACFs) to yield quantitatively accurate PET emission images. To reduce the total scan time, post-injection transmission scans have been proposed in which one can simultaneously acquire emission and transmission data using rod sources and sinogram windowing. However, since the post-injection transmission scans are corrupted by emission coincidences, accurate correction for attenuation becomes more challenging. Conventional methods (emission subtraction) for ACF computation from post-injection scans are suboptimal and require relatively long scan times. We introduce statistical methods based on penalized-likelihood objectives to compute ACFs and then use them to reconstruct lower noise PET emission images from simultaneous transmission/emission scans. Simulations show the efficacy of the proposed methods. These methods improve image quality and SNR of the estimates as compared to conventional methods
Distinguishability notion based on Wootters statistical distance: Application to discrete maps
Gomez, Ignacio S.; Portesi, M.; Lamberti, P. W.
2017-08-01
We study the distinguishability notion given by Wootters for states represented by probability density functions. This presents the particularity that it can also be used for defining a statistical distance in chaotic unidimensional maps. Based on that definition, we provide a metric d̄ for an arbitrary discrete map. Moreover, from d̄, we associate a metric space with each invariant density of a given map, which turns out to be the set of all distinguishable points when the number of iterations of the map tends to infinity. Also, we give a characterization of the wandering set of a map in terms of the metric d̄, which allows us to identify the dissipative regions in the phase space. We illustrate the results in the case of the logistic and the circle maps, numerically and analytically, and we obtain d̄ and the wandering set for some characteristic values of their parameters. Finally, an extension of the associated metric space to arbitrary probability distributions (not necessarily invariant densities) is given along with some consequences. The statistical properties of distributions given by histograms are characterized in terms of the cardinality of the associated metric space. For two conjugate variables, the uncertainty principle is expressed in terms of the diameters of the metric spaces associated with those variables.
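The Wootters distance underlying this construction, d_W(p, q) = arccos(Σ_i √(p_i q_i)), is easy to sketch for histogram distributions. The logistic-map histograms below are an illustration only, not the paper's metric d̄ itself:

```python
import math

# Wootters statistical distance between two discrete probability
# distributions: d_W(p, q) = arccos(sum_i sqrt(p_i * q_i)).
# Here the distributions come from histograms of logistic-map orbits,
# purely as an illustration.

def wootters(p, q):
    overlap = sum(math.sqrt(a * b) for a, b in zip(p, q))
    return math.acos(min(1.0, overlap))  # clamp against rounding error

def logistic_histogram(r, x0, bins=20, n=5000, burn=500):
    """Normalized histogram of a logistic-map orbit x -> r*x*(1-x)."""
    x, counts = x0, [0] * bins
    for i in range(n + burn):
        x = r * x * (1.0 - x)
        if i >= burn:
            counts[min(bins - 1, int(x * bins))] += 1
    return [c / n for c in counts]

p = logistic_histogram(4.0, 0.2)
q = logistic_histogram(4.0, 0.7)
print(wootters(p, p), wootters(p, q))  # 0 for identical distributions
```

Two long orbits at r = 4 both approximate the same invariant density, so d_W(p, q) is small; distributions with disjoint support would be maximally distant at π/2.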
A scan statistic for continuous data based on the normal probability model
Huang Lan
2009-10-01
Temporal, spatial and space-time scan statistics are commonly used to detect and evaluate the statistical significance of temporal and/or geographical disease clusters, without any prior assumptions on the location, time period or size of those clusters. Scan statistics are mostly used for count data, such as disease incidence or mortality. Sometimes there is an interest in looking for clusters with respect to a continuous variable, such as lead levels in children or low birth weight. For such continuous data, we present a scan statistic where the likelihood is calculated using the normal probability model. It may also be used for other distributions, while still maintaining the correct alpha level. In an application of the new method, we look for geographical clusters of low birth weight in New York City.
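The normal-model likelihood ratio for a single candidate cluster can be sketched as follows: separate means inside and outside the cluster versus one common mean, with a common MLE variance. The scan statistic is the maximum of this score over all candidate clusters; the birth-weight figures below are hypothetical.

```python
import math

# Normal-model log-likelihood ratio for one candidate cluster: under H0 all
# observations share one mean; under H1 the cluster and its complement have
# separate means (common variance, estimated by maximum likelihood).
# The statistic reduces to (n/2) * log(var_H0 / var_H1).

def normal_llr(inside, outside):
    n_in, n_out = len(inside), len(outside)
    n = n_in + n_out
    all_vals = inside + outside
    mean_all = sum(all_vals) / n
    mean_in = sum(inside) / n_in
    mean_out = sum(outside) / n_out
    var0 = sum((x - mean_all) ** 2 for x in all_vals) / n
    var1 = (sum((x - mean_in) ** 2 for x in inside)
            + sum((x - mean_out) ** 2 for x in outside)) / n
    return 0.5 * n * math.log(var0 / var1)

low_bw = [2.4, 2.5, 2.3, 2.6]            # hypothetical birth weights (kg)
rest = [3.2, 3.4, 3.1, 3.3, 3.5, 3.0]
print(normal_llr(low_bw, rest))  # large when the cluster mean stands out
```

In the full method, significance of the maximum score is assessed by Monte Carlo permutation of the observations over locations, which is what keeps the alpha level correct.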
De-Marzi, Ludovic
2016-01-01
The main objective of this thesis is to develop and optimize algorithms for intensity modulated proton therapy, taking into account the physical and biological pencil beam properties. A model based on the summation and fluence-weighted division of the pencil beams has been used. A new parameterization of the lateral dose distribution has been developed using a combination of three Gaussian functions. The algorithms have been implemented into a treatment planning system, then experimentally validated and compared with Monte Carlo simulations. Some approximations have been made and validated in order to achieve reasonable calculation times for clinical purposes. In a second phase, a collaboration with Institut Curie radiobiology teams was started in order to incorporate radiobiological parameters and results into the optimization loop of the treatment planning process. Indeed, scanned pencil beams are pulsed and delivered at high dose rates (from 10 to 100 Gy/s), and the relative biological effectiveness of protons is still relatively unknown given the wide diversity of use of these beams: the different models available and their dependence on linear energy transfer have been studied. A good agreement between dose calculations and measurements (deviations lower than 3% and 2 mm) has been obtained. An experimental protocol has been set up in order to qualify pulsed high-dose-rate effects, and preliminary results obtained on one cell line suggested variations of the biological effectiveness of up to 10%, though with large uncertainties. (author)
Farr, J B; Dessy, F; De Wilde, O; Bietzer, O; Schönenberg, D
2013-07-01
The purpose of this investigation was to compare and contrast the measured fundamental properties of two new types of modulated proton scanning systems. This provides a basis for clinical expectations based on the scanned beam quality and a benchmark for computational models. Because the relatively small beam and fast scanning made characterization challenging, a secondary purpose was to develop and apply new approaches where necessary. The following performances of the proton scanning systems were investigated: beamlet alignment, static in-air beamlet size and shape, scanned in-air penumbra, scanned fluence map accuracy, geometric alignment of the scanning system to isocenter, maximum field size, lateral and longitudinal field uniformity of a 1 l cubic uniform field, output stability over time, gantry angle invariance, monitoring system linearity, and reproducibility. A range of detectors was used: film, ionization chambers, lateral multielement and longitudinal multilayer ionization chambers, and a scintillation screen combined with a digital video camera. Characterization of the scanned fluence maps was performed with a software analysis tool. The resulting measurements and analysis indicated that the two types of delivery systems performed within specification for those aspects investigated. Significant differences were observed between the two types of scanning systems: one type exhibits a smaller spot size and associated penumbra than the other. The differential is minimum at maximum energy and increases as energy decreases. Additionally, the large-spot system showed an increase in dose precision to a static target with layer rescanning, whereas the small-spot system did not. The measured results from the two types of modulated scanning system were consistent with their designs under the conditions tested. The most significant difference between the types of system was their proton spot size and associated resolution
Farr, J. B.; Schoenenberg, D. [Westdeutsches Protonentherapiezentrum Essen, Universitaetsklinikum-Essen, Hufelandstrasse 55, 45147 Essen (Germany); Dessy, F.; De Wilde, O.; Bietzer, O. [Ion Beam Applications, Chemin du Cyclotron, 3, 1348 Louvain-la-Neuve (Belgium)
2013-07-15
Purpose: The purpose of this investigation was to compare and contrast the measured fundamental properties of two new types of modulated proton scanning systems. This provides a basis for clinical expectations based on the scanned beam quality and a benchmark for computational models. Because the relatively small beam and fast scanning made characterization challenging, a secondary purpose was to develop and apply new approaches where necessary. Methods: The following performances of the proton scanning systems were investigated: beamlet alignment, static in-air beamlet size and shape, scanned in-air penumbra, scanned fluence map accuracy, geometric alignment of the scanning system to isocenter, maximum field size, lateral and longitudinal field uniformity of a 1 l cubic uniform field, output stability over time, gantry angle invariance, monitoring system linearity, and reproducibility. A range of detectors was used: film, ionization chambers, lateral multielement and longitudinal multilayer ionization chambers, and a scintillation screen combined with a digital video camera. Characterization of the scanned fluence maps was performed with a software analysis tool. Results: The resulting measurements and analysis indicated that the two types of delivery systems performed within specification for those aspects investigated. Significant differences were observed between the two types of scanning systems: one type exhibits a smaller spot size and associated penumbra than the other. The differential is minimum at maximum energy and increases as energy decreases. Additionally, the large-spot system showed an increase in dose precision to a static target with layer rescanning, whereas the small-spot system did not. Conclusions: The measured results from the two types of modulated scanning system were consistent with their designs under the conditions tested. The most significant difference between the types of system was their proton
Bosomprah, Samuel; Dotse-Gborgbortsi, Winfred; Aboagye, Patrick; Matthews, Zoe
2016-11-01
To identify and evaluate clusters of births that occurred outside health facilities in Ghana for targeted intervention. A retrospective study was conducted using a convenience sample of live births registered in Ghanaian health facilities from January 1 to December 31, 2014. Data were extracted from the district health information system. A spatial scan statistic was used to investigate clusters of home births through a discrete Poisson probability model. Scanning with a circular spatial window was conducted only for clusters with high rates of such deliveries. The district was used as the geographic unit of analysis. The likelihood P value was estimated using Monte Carlo simulations. Ten statistically significant clusters with a high rate of home birth were identified. The relative risks ranged from 1.43 ("least likely" cluster; P=0.001) to 1.95 ("most likely" cluster; P=0.001). The relative risks of the top five "most likely" clusters ranged from 1.68 to 1.95; these clusters were located in the Ashanti, Brong Ahafo, Western, Eastern, and Greater Accra regions. Health facility records, geospatial techniques, and geographic information systems provided locally relevant information to assist policy makers in delivering targeted interventions to small geographic areas. Copyright © 2016 International Federation of Gynecology and Obstetrics. Published by Elsevier Ireland Ltd. All rights reserved.
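The discrete Poisson model scores each circular window with Kulldorff's log-likelihood ratio; a sketch follows, with hypothetical counts. In practice the maximum score over all windows is compared against Monte Carlo replicates to obtain the P value.

```python
import math

# Kulldorff's discrete Poisson log-likelihood ratio for one circular window:
# c observed cases inside the window with expected count e, out of C total
# cases with total expectation E. All figures below are hypothetical.

def poisson_llr(c, e, C, E):
    if c <= e:                     # scan only for high-rate clusters
        return 0.0
    llr = c * math.log(c / e)
    if C - c > 0:
        llr += (C - c) * math.log((C - c) / (E - e))
    return llr

# A window with 120 home births where 80 were expected, out of 1000 total
# (E = C under the usual conditioning on the total case count).
print(poisson_llr(120, 80.0, 1000, 1000.0))
```

The relative risk reported for a cluster is simply (c/e) / ((C-c)/(E-e)), the inside rate relative to the outside rate.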
A spatial scan statistic for survival data based on Weibull distribution.
Bhatt, Vijaya; Tiwari, Neeraj
2014-05-20
The spatial scan statistic has been developed as a geographical cluster detection analysis tool for different types of data sets, such as Bernoulli, Poisson, ordinal, normal and exponential. We propose a scan statistic for survival data based on the Weibull distribution. It may also be used for other survival distributions, such as the exponential, gamma, and log-normal. The proposed method is applied to the survival data of tuberculosis patients for the years 2004-2005 in Nainital district of Uttarakhand, India. Simulation studies reveal that the proposed method performs well for different survival distribution functions. Copyright © 2013 John Wiley & Sons, Ltd.
A log-Weibull spatial scan statistic for time to event data.
Usman, Iram; Rosychuk, Rhonda J
2018-06-13
Spatial scan statistics have been used for the identification of geographic clusters of elevated numbers of cases of a condition, such as disease outbreaks. These statistics, accompanied by the appropriate distribution, can also identify geographic areas with either longer or shorter times to events. Other authors have proposed spatial scan statistics based on the exponential and Weibull distributions. We propose the log-Weibull as an alternative distribution for the spatial scan statistic for time-to-event data and compare and contrast the log-Weibull and Weibull distributions through simulation studies. The effects of type I differential censoring and power have been investigated through simulated data. Methods are also illustrated on time to specialist visit data for discharged patients presenting to emergency departments for atrial fibrillation and flutter in Alberta during 2010-2011. We found northern regions of Alberta had longer times to specialist visit than other areas. We proposed the spatial scan statistic for the log-Weibull distribution as a new approach for detecting spatial clusters for time-to-event data. The simulation studies suggest that the test performs well for log-Weibull data.
Local multiplicity adjustment for the spatial scan statistic using the Gumbel distribution.
Gangnon, Ronald E
2012-03-01
The spatial scan statistic is an important and widely used tool for cluster detection. It is based on the simultaneous evaluation of the statistical significance of the maximum likelihood ratio test statistic over a large collection of potential clusters. In most cluster detection problems, there is variation in the extent of local multiplicity across the study region. For example, using a fixed maximum geographic radius for clusters, urban areas typically have many overlapping potential clusters, whereas rural areas have relatively few. The spatial scan statistic does not account for local multiplicity variation. We describe a previously proposed local multiplicity adjustment based on a nested Bonferroni correction and propose a novel adjustment based on a Gumbel distribution approximation to the distribution of a local scan statistic. We compare the performance of all three statistics in terms of power and a novel unbiased cluster detection criterion. These methods are then applied to the well-known New York leukemia dataset and a Wisconsin breast cancer incidence dataset. © 2011, The International Biometric Society.
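The Gumbel idea can be sketched as follows: simulate null maxima, fit a Gumbel(μ, β) by the method of moments, and read a smooth upper-tail p-value off the fitted distribution instead of the raw Monte Carlo rank. The null statistic below is a stand-in, not a real scan statistic.

```python
import math
import random

# Method-of-moments Gumbel fit to simulated null maxima, and the resulting
# smooth upper-tail p-value. The "scan statistic" here is a stand-in
# (maximum of 50 squared normals per replicate), purely for illustration.

def gumbel_fit(maxima):
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi       # scale parameter
    mu = mean - 0.5772156649 * beta             # location (Euler-Mascheroni)
    return mu, beta

def gumbel_pvalue(x, mu, beta):
    # P(Max > x) under Gumbel(mu, beta)
    return 1.0 - math.exp(-math.exp(-(x - mu) / beta))

rng = random.Random(1)
maxima = [max(rng.gauss(0, 1) ** 2 for _ in range(50)) for _ in range(999)]
mu, beta = gumbel_fit(maxima)
print(gumbel_pvalue(max(maxima), mu, beta))  # small for an extreme value
```

The payoff is that p-values smaller than 1/(number of replicates) become available, and the local version of the same fit supplies the multiplicity adjustment discussed above.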
A spatial scan statistic for nonisotropic two-level risk cluster.
Li, Xiao-Zhou; Wang, Jin-Feng; Yang, Wei-Zhong; Li, Zhong-Jie; Lai, Sheng-Jie
2012-01-30
Spatial scan statistic methods are commonly used for geographical disease surveillance and cluster detection. The standard spatial scan statistic does not model any variability in the underlying risks of subregions belonging to a detected cluster. For a multilevel risk cluster, the isotonic spatial scan statistic could model a centralized high-risk kernel in the cluster. Because variations in disease risks are anisotropic owing to different social, economical, or transport factors, the real high-risk kernel will not necessarily take the central place in a whole cluster area. We propose a spatial scan statistic for a nonisotropic two-level risk cluster, which could be used to detect a whole cluster and a noncentralized high-risk kernel within the cluster simultaneously. The performance of the three methods was evaluated through an intensive simulation study. Our proposed nonisotropic two-level method showed better power and geographical precision with two-level risk cluster scenarios, especially for a noncentralized high-risk kernel. Our proposed method is illustrated using the hand-foot-mouth disease data in Pingdu City, Shandong, China in May 2009, compared with two other methods. In this practical study, the nonisotropic two-level method is the only way to precisely detect a high-risk area in a detected whole cluster. Copyright © 2011 John Wiley & Sons, Ltd.
Profe, Jörn; Ohlendorf, Christian
2017-04-01
XRF scanning has been the state-of-the-art technique for geochemical analyses in marine and lacustrine sedimentology for more than a decade. However, little attention has so far been paid to data precision and technical limitations. Using homogenized, dried and powdered samples (certified geochemical reference standards and samples from a lithologically contrasting loess-paleosol sequence) minimizes many adverse effects that influence the XRF signal when analyzing wet sediment cores. This allows the investigation of data precision under ideal conditions and, at the same time, documents a new application of the XRF core-scanner technology. Reliable interpretation of XRF results requires evaluation of the data precision of single elements as a function of X-ray tube, measurement time, sample compaction and quality of peak fitting. Data precision is established by ten-fold measurement of each sample. The precision of XRF measurements theoretically obeys Poisson statistics. Fe and Ca exhibit the largest deviations from Poisson statistics. The same elements show the smallest mean relative standard deviations, in the range from 0.5% to 1%. This represents the technical limit of data precision achievable by the installed detector. Measurement times ≥ 30 s yield mean relative standard deviations below 4% for most elements. The quality of peak fitting is only relevant for elements with overlapping fluorescence lines, such as Ba, Ti and Mn, or for elements with low concentrations, such as Y. Differences in sample compaction are marginal and do not change the mean relative standard deviation considerably. Data precision is in the range reported for geochemical reference standards measured by conventional techniques. Therefore, XRF scanning of discrete samples provides a cost- and time-efficient alternative to conventional multi-element analyses. As the best trade-off between economical operation and data quality, we recommend a measurement time of 30 s, resulting in a total scan time of 30 minutes
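The Poisson-statistics argument is simple to quantify: a peak with N counts has relative standard deviation 1/√N, so precision improves with measurement time until detector effects dominate. The count rates below are hypothetical.

```python
import math

# Under Poisson counting statistics, the relative standard deviation of an
# XRF peak with N counts is 1/sqrt(N). This back-of-the-envelope sketch
# shows why longer measurement times push precision toward the ~0.5-1%
# detector-limited floor reported above. Count rates are hypothetical.

def poisson_rsd(counts):
    return 1.0 / math.sqrt(counts)

for t, rate in [(10, 1000), (30, 1000), (120, 1000)]:  # seconds, counts/s
    n = t * rate
    print(f"{t:4d} s -> {100 * poisson_rsd(n):.2f} % relative std dev")
```

At a (hypothetical) 1000 counts/s, 30 s already brings the Poisson floor below 1%, consistent with the recommendation of 30 s as the precision/throughput trade-off.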
FROM THE CONTINUOUS TO THE DISCRETE MODEL: A LASER SCANNING APPLICATION TO CONSERVATION PROJECTS
A. Cardaci
2012-09-01
This paper aims to demonstrate the use of laser scanning (in particular through a methodology based on the integrated use of the software "FARO© Scene" and "GEXCEL JRC-3D Reconstructor") as a valid alternative to traditional surveying techniques, especially when finalized to the restoration and conservation of historical buildings. The need to recreate the complex and often irregular shapes of ancient architecture, quickly and accurately, as well as the subsequent implementation of FEM (Finite Element Method) structural analysis, have nowadays made the laser scanning survey a very useful technique. The point cloud obtained by laser scanning can be a flexible tool for every need; not a finished product, but a huge database from which it is possible to extract different information at different times. The use of numerical methods in data processing allows wide opportunities for further investigation starting from the fitting equations. The numerical model lends itself to use in many applications, such as modelling and structural analysis software. This paper presents the case study of the Church of the Assumption and Saint Michael the Archangel, located in Borgo di Terzo (Italy), a magnificent 18th-century building that presented several structural problems, such as the overturning of the façade and the cracking of part of the vaulted ceiling. The survey, carried out by laser scanner (FARO© Photon 120), allowed the reconstruction of the exact geometry of the church, offering the basis for performing structural analysis supported by a realistic model (and not an idealized regular one), useful also in the design of repair interventions.
Small nodule detectability evaluation using a generalized scan-statistic model
Popescu, Lucretiu M; Lewitt, Robert M
2006-01-01
This paper investigates the use of the scan statistic for evaluating the detectability of small nodules in medical images. The scan-statistic method is often used in applications in which random fields must be searched for abnormal local features. Several results of detection-with-localization theory are reviewed and a generalization is presented using the noise-nodule distribution obtained by scanning arbitrary areas. One benefit of the noise-nodule model is that it enables determination of the scan-statistic distribution using only a few image samples, in a way suitable for both simulation and experimental setups. Also, based on the noise-nodule model, the case of multiple targets per image is addressed, and an image abnormality test using the likelihood ratio and an alternative test using multiple decision thresholds are derived. The results obtained reveal that in the case of low-contrast nodules or multiple nodules, the usual test strategy based on a single decision threshold underperforms compared with the alternative tests. That is a consequence of the fact that not only the contrast or the size, but also the number of suspicious nodules is a clue indicating image abnormality. In the case of the likelihood ratio test, the multiple clues are unified in a single decision variable. Other tests that process multiple clues differently do not necessarily produce a unique ROC curve, as shown in examples using a test involving two decision thresholds. We present examples with two-dimensional time-of-flight (TOF) and non-TOF PET image sets analysed using the scan statistic for different search areas, as well as the fixed-position observer
Sidky, Emil; Jørgensen, Jakob Heide; Pan, Xiaochuan
2012-01-01
Iterative image reconstruction in computed tomography often employs a discrete-to-discrete (DD) linear data model, and many of the aspects of the image recovery relate directly to the properties of this linear model. While much is known about the properties of the continuous X-ray, the correspond...
Drug safety data mining with a tree-based scan statistic.
Kulldorff, Martin; Dashevsky, Inna; Avery, Taliser R; Chan, Arnold K; Davis, Robert L; Graham, David; Platt, Richard; Andrade, Susan E; Boudreau, Denise; Gunter, Margaret J; Herrinton, Lisa J; Pawloski, Pamala A; Raebel, Marsha A; Roblin, Douglas; Brown, Jeffrey S
2013-05-01
In post-marketing drug safety surveillance, data mining can potentially detect rare but serious adverse events. Assessing an entire collection of drug-event pairs is traditionally performed on a predefined level of granularity. It is unknown a priori whether a drug causes a very specific or a set of related adverse events, such as mitral valve disorders, all valve disorders, or different types of heart disease. This methodological paper evaluates the tree-based scan statistic data mining method to enhance drug safety surveillance. We use a three-million-member electronic health records database from the HMO Research Network. Using the tree-based scan statistic, we assess the safety of selected antifungal and diabetes drugs, simultaneously evaluating overlapping diagnosis groups at different granularity levels, adjusting for multiple testing. Expected and observed adverse event counts were adjusted for age, sex, and health plan, producing a log likelihood ratio test statistic. Out of 732 evaluated disease groupings, 24 were statistically significant, divided among 10 non-overlapping disease categories. Five of the 10 signals are known adverse effects, four are likely due to confounding by indication, while one may warrant further investigation. The tree-based scan statistic can be successfully applied as a data mining tool in drug safety surveillance using observational data. The total number of statistical signals was modest and does not imply a causal relationship. Rather, data mining results should be used to generate candidate drug-event pairs for rigorous epidemiological studies to evaluate the individual and comparative safety profiles of drugs. Copyright © 2013 John Wiley & Sons, Ltd.
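The node-level scoring can be sketched as follows, assuming a Kulldorff-style Poisson log-likelihood ratio applied to counts aggregated up a small diagnosis tree. The tree, codes, and counts below are invented for illustration.

```python
import math

# Sketch of the tree-based scan idea: every node in a diagnosis hierarchy is
# scored on its aggregated observed vs expected counts, so both specific
# codes and broader groupings are evaluated in one pass. Tree and counts
# are hypothetical.

def llr(obs, exp, total_obs, total_exp):
    """Poisson log-likelihood ratio, scanning for excess counts only."""
    if obs <= exp:
        return 0.0
    rest_o, rest_e = total_obs - obs, total_exp - exp
    val = obs * math.log(obs / exp)
    if rest_o > 0:
        val += rest_o * math.log(rest_o / rest_e)
    return val

# leaves: code -> (observed, expected); tree: parent -> children
leaves = {"mitral": (30, 12.0), "aortic": (10, 11.0), "arrhythmia": (25, 24.0)}
tree = {"heart": ["valve", "arrhythmia"], "valve": ["mitral", "aortic"]}

def aggregate(node):
    """Sum (observed, expected) over the subtree rooted at node."""
    if node in leaves:
        return leaves[node]
    o = e = 0.0
    for child in tree[node]:
        co, ce = aggregate(child)
        o, e = o + co, e + ce
    return o, e

TO = sum(o for o, _ in leaves.values())
TE = sum(e for _, e in leaves.values())
scores = {n: llr(*aggregate(n), TO, TE) for n in ["mitral", "valve", "heart"]}
print(max(scores, key=scores.get), scores)
```

Because overlapping groupings at all granularity levels are scored in the same pass, the real method adjusts for multiple testing by comparing the maximum score against Monte Carlo replicates, as in other scan statistics.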
Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.
Kim, Sehwi; Jung, Inkyung
2017-01-01
The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.
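The Gini coefficient itself is straightforward to compute; here is a sketch on invented values (the use made of it above, comparing collections of reported clusters across candidate maximum sizes, follows Han et al.):

```python
# Gini coefficient of a list of nonnegative values, via the standard
# sorted-rank formula. In the cluster-size application, a higher Gini means
# the significant cases are concentrated in fewer reported clusters.
# The input values below are invented for illustration.

def gini(values):
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    # G = (2 * sum_i i*x_(i) ) / (n * total) - (n + 1) / n, with i = 1..n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

print(gini([1, 1, 1, 1]))    # 0.0: perfectly even
print(gini([0, 0, 0, 10]))   # 0.75: concentrated in one cluster
```

The optimal maximum reported cluster size is then the candidate whose resulting cluster collection maximizes this coefficient.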
Precipitate statistics in an Al-Mg-Si-Cu alloy from scanning precession electron diffraction data
Sunde, J. K.; Paulsen, Ø.; Wenner, S.; Holmestad, R.
2017-09-01
The key microstructural feature providing strength to age-hardenable Al alloys is nanoscale precipitates. Alloy development requires a reliable statistical assessment of these precipitates, in order to link the microstructure with material properties. Here, it is demonstrated that scanning precession electron diffraction combined with computational analysis enable the semi-automated extraction of precipitate statistics in an Al-Mg-Si-Cu alloy. Among the main findings is the precipitate number density, which agrees well with a conventional method based on manual counting and measurements. By virtue of its data analysis objectivity, our methodology is therefore seen as an advantageous alternative to existing routines, offering reproducibility and efficiency in alloy statistics. Additional results include improved qualitative information on phase distributions. The developed procedure is generic and applicable to any material containing nanoscale precipitates.
Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe
2017-07-01
This paper introduces a statistical framework for detecting cylindrical shapes in dense point clouds. We target the application of mapping fallen trees in datasets obtained through terrestrial laser scanning. This is a challenging task due to the presence of ground vegetation, standing trees, DTM artifacts, as well as the fragmentation of dead trees into non-collinear segments. Our method shares the concept of voting in parameter space with the generalized Hough transform; however, two of its significant drawbacks are improved upon. First, the need to generate samples on the shape's surface is eliminated. Instead, pairs of nearby input points lying on the surface cast a vote for the cylinder's parameters based on the intrinsic geometric properties of cylindrical shapes. Second, no discretization of the parameter space is required: the voting is carried out in continuous space by means of constructing a kernel density estimator and obtaining its local maxima, using automatic, data-driven kernel bandwidth selection. Furthermore, we show how the detected cylindrical primitives can be efficiently merged to obtain object-level (entire tree) semantic information using graph-cut segmentation and a tailored dynamic algorithm for eliminating cylinder redundancy. Experiments were performed on 3 plots from the Bavarian Forest National Park, with ground truth obtained through visual inspection of the point clouds. It was found that relative to sample consensus (SAC) cylinder fitting, the proposed voting framework can improve the detection completeness by up to 10 percentage points while maintaining the correctness rate.
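The continuous-space voting step can be illustrated in one dimension: votes are pooled into a kernel density estimator with automatic, data-driven bandwidth, and candidate parameters are read off as local maxima of the density. This is a simplified stand-in for the paper's multi-dimensional cylinder parameter space; the grid resolution and the synthetic votes are assumptions of this sketch:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_modes(votes, grid_size=512):
    """Return local maxima of a KDE built over continuous parameter votes.

    The bandwidth is chosen automatically (Scott's rule), so the parameter
    space itself is never discretized; the grid serves only to locate modes.
    """
    kde = gaussian_kde(votes)  # data-driven bandwidth selection
    grid = np.linspace(votes.min(), votes.max(), grid_size)
    dens = kde(grid)
    # A grid point is a mode if its density exceeds both neighbours.
    interior = (dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])
    return grid[1:-1][interior]

# Two vote clusters, e.g. around candidate radii 0.5 and 2.0
rng = np.random.default_rng(0)
votes = np.concatenate([rng.normal(0.5, 0.05, 200),
                        rng.normal(2.0, 0.05, 200)])
modes = kde_modes(votes)
```

Each recovered mode would then correspond to one candidate primitive, to be verified against the data.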
Statistical and Probabilistic Extensions to Ground Operations' Discrete Event Simulation Modeling
Trocine, Linda; Cummings, Nicholas H.; Bazzana, Ashley M.; Rychlik, Nathan; LeCroy, Kenneth L.; Cates, Grant R.
2010-01-01
NASA's human exploration initiatives will invest in technologies, public/private partnerships, and infrastructure, paving the way for the expansion of human civilization into the solar system and beyond. As it has been for the past half century, the Kennedy Space Center will be the embarkation point for humankind's journey into the cosmos. Functioning as a next-generation space launch complex, Kennedy's launch pads, integration facilities, processing areas, and launch and recovery ranges will bustle with the activities of the world's space transportation providers. In developing this complex, KSC teams work through the potential operational scenarios: conducting trade studies, planning and budgeting for expensive and limited resources, and simulating alternative operational schemes. Numerous tools, among them discrete event simulation (DES), were matured during the Constellation Program to conduct such analyses with the purpose of optimizing the launch complex for maximum efficiency, safety, and flexibility while minimizing life cycle costs. Discrete event simulation is a computer-based modeling technique for complex and dynamic systems where the state of the system changes at discrete points in time and whose inputs may include random variables. DES is used to assess timelines and throughput, and to support operability studies and contingency analyses. It is applicable to any space launch campaign and informs decision-makers of the effects of varying numbers of expensive resources and the impact of off-nominal scenarios on measures of performance. In order to develop representative DES models, methods were adopted, exploited, or created to extend traditional uses of DES. The Delphi method was adopted and utilized for task duration estimation. DES software was exploited for probabilistic event variation. A roll-up process was developed to reuse models and model elements in other, less-detailed models. The DES team continues to innovate and expand
Error analysis of terrestrial laser scanning data by means of spherical statistics and 3D graphs.
Cuartero, Aurora; Armesto, Julia; Rodríguez, Pablo G; Arias, Pedro
2010-01-01
This paper presents a complete analysis of the positional errors of terrestrial laser scanning (TLS) data based on spherical statistics and 3D graphs. Spherical statistics are preferred because of the 3D vectorial nature of the spatial error. Error vectors have three metric elements (one module and two angles) that were analyzed by spherical statistics. A study case is presented and discussed in detail. Errors were calculated using 53 check points (CPs), whose coordinates were measured by a digitizer with submillimetre accuracy. The positional accuracy was analyzed by both the conventional method (modular error analysis) and the proposed method (angular error analysis) using 3D graphics and numerical spherical statistics. Two packages in the R programming language were developed to produce the graphics automatically. The results indicate that the proposed method is advantageous, as it offers a more complete analysis of the positional accuracy, including the angular error component, the uniformity of the vector distribution, and error isotropy, in addition to the modular error component given by linear statistics.
Chien-Chou Chen
2016-11-01
Abstract. Background: Cases of dengue fever have increased in areas of Southeast Asia in recent years. Taiwan hit a record-high 42,856 cases in 2015, with the majority in southern Tainan and Kaohsiung Cities. Leveraging spatial statistics and geo-visualization techniques, we aim to design an online analytical tool for local public health workers to prospectively identify ongoing hot spots of dengue fever weekly at the village level. Methods: A total of 57,516 confirmed cases of dengue fever in 2014 and 2015 were obtained from the Taiwan Centers for Disease Control (TCDC). Incorporating demographic information as covariates with cumulative cases (365 days) in a discrete Poisson model, we iteratively applied space-time scan statistics with SaTScan software to detect the currently active cluster of dengue fever (reported as relative risk) in each village of Tainan and Kaohsiung every week. A village with a relative risk >1 and p value <0.05 was identified as a dengue-epidemic area. Assuming an ongoing transmission might continuously spread for two consecutive weeks, we estimated the sensitivity and specificity for detecting outbreaks by comparing the scan-based classification (dengue-epidemic vs. dengue-free village) with the true cumulative case numbers from the TCDC's surveillance statistics. Results: Among the 1648 villages in Tainan and Kaohsiung, the overall sensitivity for detecting outbreaks increases as case numbers grow in a total of 92 weekly simulations. The specificity for detecting outbreaks behaves inversely to the sensitivity. On average, the mean sensitivity and specificity of 2-week hot spot detection were 0.615 and 0.891, respectively (p value <0.001), for the covariate adjustment model, with the maximum spatial and temporal windows specified as 50% of the total population at risk and 28 days. Dengue-epidemic villages were visualized and explored in an interactive map. Conclusions: We designed an online analytical tool for local public health workers to prospectively identify village-level hot spots of dengue fever.
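The discrete Poisson model applied above scores each candidate space-time window with a log-likelihood ratio. A minimal sketch of the commonly cited Poisson form of Kulldorff's scan statistic, as implemented in tools such as SaTScan; the function name and the hot-spot-only convention are our assumptions:

```python
import math

def poisson_llr(c, e, C):
    """Log-likelihood ratio for a candidate zone under the Poisson model.

    c: observed cases inside the zone
    e: expected cases inside the zone (from the population at risk)
    C: total observed cases in the study region
    Only zones with elevated risk (c > e) receive a positive score here.
    """
    if c <= e:
        return 0.0
    inside = c * math.log(c / e)
    outside = 0.0 if c == C else (C - c) * math.log((C - c) / (C - e))
    return inside + outside
```

The zone maximizing this ratio is the most likely cluster; its significance is then assessed by Monte Carlo replication under the null.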
Daisuke Onozuka; Akihito Hagihara [Fukuoka Institute of Health and Environmental Sciences, Fukuoka (Japan). Department of Information Science
2007-07-01
Tuberculosis (TB) has reemerged as a global public health epidemic in recent years. Although evaluating local disease clusters leads to effective prevention and control of TB, there are few, if any, spatiotemporal comparisons for epidemic diseases. TB cases among residents in Fukuoka Prefecture between 1999 and 2004 (n = 9,119) were geocoded at the census tract level (n = 109) based on residence at the time of diagnosis. The spatial and space-time scan statistics were then used to identify clusters of census tracts with elevated proportions of TB cases. In the purely spatial analyses, the most likely clusters were in the Chikuho coal mining area (in 1999, 2002, 2003, 2004), the Kita-Kyushu industrial area (in 2000), and the Fukuoka urban area (in 2001). In the space-time analysis, the most likely cluster was the Kita-Kyushu industrial area (in 2000). The north part of Fukuoka Prefecture was the most likely to have a cluster with a significantly high occurrence of TB. The spatial and space-time scan statistics are effective ways of describing circular disease clusters. Since, in reality, infectious diseases might form other cluster types, the effectiveness of the method may be limited under actual practice. The sophistication of the analytical methodology, however, is a topic for future study. 48 refs., 3 figs., 3 tabs.
Onozuka Daisuke
2007-04-01
Abstract. Background: Tuberculosis (TB) has reemerged as a global public health epidemic in recent years. Although evaluating local disease clusters leads to effective prevention and control of TB, there are few, if any, spatiotemporal comparisons for epidemic diseases. Methods: TB cases among residents in Fukuoka Prefecture between 1999 and 2004 (n = 9,119) were geocoded at the census tract level (n = 109) based on residence at the time of diagnosis. The spatial and space-time scan statistics were then used to identify clusters of census tracts with elevated proportions of TB cases. Results: In the purely spatial analyses, the most likely clusters were in the Chikuho coal mining area (in 1999, 2002, 2003, 2004), the Kita-Kyushu industrial area (in 2000), and the Fukuoka urban area (in 2001). In the space-time analysis, the most likely cluster was the Kita-Kyushu industrial area (in 2000). The north part of Fukuoka Prefecture was the most likely to have a cluster with a significantly high occurrence of TB. Conclusion: The spatial and space-time scan statistics are effective ways of describing circular disease clusters. Since, in reality, infectious diseases might form other cluster types, the effectiveness of the method may be limited under actual practice. The sophistication of the analytical methodology, however, is a topic for future study.
Matsuzaki, Y; Jenkins, C; Yang, Y; Xing, L; Yoshimura, T; Fujii, Y; Umegaki, K
2016-01-01
Purpose: With the growing adoption of proton beam therapy there is an increasing need for effective and user-friendly tools for performing quality assurance (QA) measurements. The speed and versatility of spot-scanning proton beam (PB) therapy systems present unique challenges for traditional QA tools. To address these challenges a proof-of-concept system was developed to visualize, in real-time, the delivery of individual spots from a spot-scanning PB in order to perform QA measurements. Methods: The PB is directed toward a custom phantom with planar faces coated with a radioluminescent phosphor (Gd2O2S:Tb). As the proton beam passes through the phantom visible light is emitted from the coating and collected by a nearby CMOS camera. The images are processed to determine the locations at which the beam impinges on each face of the phantom. By so doing, the location of each beam can be determined relative to the phantom. The cameras are also used to capture images of the laser alignment system. The phantom contains x-ray fiducials so that it can be easily located with kV imagers. Using this data several quality assurance parameters can be evaluated. Results: The proof-of-concept system was able to visualize discrete PB spots with energies ranging from 70 MeV to 220 MeV. Images were obtained with integration times ranging from 20 to 0.019 milliseconds. If not limited by data transmission, this would correspond to a frame rate of 52,000 fps. Such frame rates enabled visualization of individual spots in real time. Spot locations were found to be highly correlated (R² = 0.99) with the nozzle-mounted spot position monitor, indicating excellent spot positioning accuracy. Conclusion: The system was shown to be capable of imaging individual spots for all clinical beam energies. Future development will focus on extending the image processing software to provide automated results for a variety of QA tests.
Matsuzaki, Y [Proton Beam Therapy Center, Hokkaido University Hospital, Sapporo, Hokkaido (Japan); Jenkins, C; Yang, Y; Xing, L [Stanford University, Stanford, California (United States); Yoshimura, T; Fujii, Y [Hokkaido University Graduate School of Medicine, Sapporo, Hokkaido (Japan); Umegaki, K [Global Institution for Collaborative Research and Education (GI-CoRE), Hokkaido University, Sapporo, Hokkaido (Japan)
2016-06-15
Purpose: With the growing adoption of proton beam therapy there is an increasing need for effective and user-friendly tools for performing quality assurance (QA) measurements. The speed and versatility of spot-scanning proton beam (PB) therapy systems present unique challenges for traditional QA tools. To address these challenges a proof-of-concept system was developed to visualize, in real-time, the delivery of individual spots from a spot-scanning PB in order to perform QA measurements. Methods: The PB is directed toward a custom phantom with planar faces coated with a radioluminescent phosphor (Gd2O2S:Tb). As the proton beam passes through the phantom visible light is emitted from the coating and collected by a nearby CMOS camera. The images are processed to determine the locations at which the beam impinges on each face of the phantom. By so doing, the location of each beam can be determined relative to the phantom. The cameras are also used to capture images of the laser alignment system. The phantom contains x-ray fiducials so that it can be easily located with kV imagers. Using this data several quality assurance parameters can be evaluated. Results: The proof-of-concept system was able to visualize discrete PB spots with energies ranging from 70 MeV to 220 MeV. Images were obtained with integration times ranging from 20 to 0.019 milliseconds. If not limited by data transmission, this would correspond to a frame rate of 52,000 fps. Such frame rates enabled visualization of individual spots in real time. Spot locations were found to be highly correlated (R² = 0.99) with the nozzle-mounted spot position monitor, indicating excellent spot positioning accuracy. Conclusion: The system was shown to be capable of imaging individual spots for all clinical beam energies. Future development will focus on extending the image processing software to provide automated results for a variety of QA tests.
On the propagation of a charged particle beam in a random medium. II: Discrete binary statistics
Promraning, G.C.; Prinja, A.K.
1995-01-01
The authors consider the linear transport of energetic charged particles through a background stochastic mixture consisting of two immiscible fluids or solids. The transport model used is the continuous slowing down description in the straight ahead approximation. Under the assumption of homogeneous Markovian mixing statistics and separable (in space and energy) stopping powers with a common energy dependence, the problem of finding the ensemble-averaged intensity and dose is reduced to simple quadrature. The use of the Liouville master equation offers an alternate approach to this problem, and leads to exact differential equations whose solutions give the ensemble-averaged intensity and dose. This master equation approach applies to inhomogeneous Markovian statistics as well as non-separable stopping powers. Both treatments can be extended, in an approximate way, to non-Markovian statistics. Typical numerical results are given, contrasting this stochastic treatment with the standard treatment which ignores the stochastic nature of the problem. 11 refs., 9 figs., 1 tab
Xin Jun; Zhao Zhoushe; Li Hong; Lu Zhe; Wu Wenkai; Guo Qiyong
2013-01-01
Objective: To improve the image quality of low-dose CT in whole-body PET/CT using adaptive statistical iterative reconstruction (ASiR) technology. Methods: CT scans were performed twice on a GE water phantom, with scan parameters of 120 kV and 120 or 300 mA, respectively. In addition, 30 subjects referred for PET/CT were selected randomly; whole-body PET/CT was performed after an 18F-FDG injection of 3.70 MBq/kg, with Sharp IR + time of flight + VUE Point HD technology used for 1.5 min/bed in PET. Spiral CT was performed at 120 kV using automatic exposure control (30-210 mA, noise index 25). Phantom and whole-body patient CT images were reconstructed with the conventional method and with 40% ASiR, respectively, and the CT attenuation value and noise index were measured. Results: The phantom and clinical data showed that the standard deviation with the ASiR method was 33.0% lower than with the conventional CT reconstruction method (t = 27.76, P<0.01). The standard deviation of CT values in normal tissues (brain, lung, mediastinum, liver and vertebral body) and lesions (brain, lung, mediastinum, liver and vertebral body) was reduced by 21.08% (t = 23.35, P<0.01) and 24.43% (t = 16.15, P<0.01), respectively. For normal liver tissue and liver lesions in particular, standard deviations were reduced by 51.33% (t = 34.21, P<0.01) and 49.54% (t = 15.21, P<0.01), respectively. Conclusion: The ASiR reconstruction method significantly reduced the noise of low-dose CT images and improved CT image quality in whole-body PET/CT, making it more suitable for quantitative analysis and clinical applications. (authors)
A statistical pixel intensity model for segmentation of confocal laser scanning microscopy images.
Calapez, Alexandre; Rosa, Agostinho
2010-09-01
Confocal laser scanning microscopy (CLSM) has been widely used in the life sciences for the characterization of cell processes because it allows the recording of the distribution of fluorescence-tagged macromolecules on a section of the living cell. It is in fact the cornerstone of many molecular transport and interaction quantification techniques where the identification of regions of interest through image segmentation is usually a required step. In many situations, because of the complexity of the recorded cellular structures or because of the amounts of data involved, image segmentation either is too difficult or inefficient to be done by hand and automated segmentation procedures have to be considered. Given the nature of CLSM images, statistical segmentation methodologies appear as natural candidates. In this work we propose a model to be used for statistical unsupervised CLSM image segmentation. The model is derived from the CLSM image formation mechanics and its performance is compared to the existing alternatives. Results show that it provides a much better description of the data on classes characterized by their mean intensity, making it suitable not only for segmentation methodologies with known number of classes but also for use with schemes aiming at the estimation of the number of classes through the application of cluster selection criteria.
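The paper's CLSM-specific intensity model is not reproduced in the abstract. As a generic illustration of unsupervised statistical segmentation into classes characterized by their mean intensity, here is a small two-component Gaussian-mixture EM sketch; the class count, initialization, and names are illustrative assumptions, not the paper's model:

```python
import numpy as np

def em_two_class(x, n_iter=50):
    """Unsupervised two-class segmentation of pixel intensities via EM on a
    Gaussian mixture (a generic stand-in for a CLSM intensity model)."""
    mu = np.percentile(x, [25, 75]).astype(float)  # rough class means
    sigma = np.full(2, x.std())
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: per-pixel class responsibilities
        lik = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and standard deviations
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return r.argmax(axis=1), mu

# Synthetic intensities: background around 10, fluorescent signal around 50
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(10.0, 1.0, 500), rng.normal(50.0, 2.0, 500)])
labels, mu = em_two_class(x)
```

Cluster-selection criteria, as mentioned in the abstract, would extend this by comparing fits across different numbers of classes.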
Statistical geological discrete fracture network model. Forsmark modelling stage 2.2
Fox, Aaron; La Pointe, Paul; Simeonov, Assen; Hermanson, Jan; Oehman, Johan
2007-11-01
The Swedish Nuclear Fuel and Waste Management Company (SKB) is performing site characterization at two different locations, Forsmark and Laxemar, in order to locate a site for a final geologic repository for spent nuclear fuel. The program is built upon the development of Site Descriptive Models (SDMs) at specific timed data freezes. Each SDM is formed from discipline-specific reports from across the scientific spectrum. This report describes the methods, analyses, and conclusions of the geological modeling team with respect to a geological and statistical model of fractures and minor deformation zones (henceforth referred to as the geological DFN), version 2.2, at the Forsmark site. The geological DFN builds upon the work of other geological modelers, including the deformation zone (DZ), rock domain (RD), and fracture domain (FD) models. The geological DFN is a statistical model for stochastically simulating rock fractures and minor deformation zones at a scale of less than 1,000 m (the lower cut-off of the DZ models). The geological DFN is valid within four specific fracture domains inside the local model region encompassing the candidate volume at Forsmark: FFM01, FFM02, FFM03, and FFM06. The models are built using data from detailed surface outcrop maps and the cored borehole record at Forsmark. The conceptual model for the Forsmark 2.2 geological DFN revolves around the concept of orientation sets; for each fracture domain, other model parameters such as size and intensity are tied to the orientation sets. Two classes of orientation sets were described: Global sets, which are encountered everywhere in the model region, and Local sets, which represent highly localized stress environments. Orientation sets were described in terms of their general cardinal direction (NE, NW, etc.). Two alternatives are presented for fracture size modeling: - the tectonic continuum approach (TCM, TCMF) described by coupled size-intensity scaling following power law distributions
Statistical geological discrete fracture network model. Forsmark modelling stage 2.2
Fox, Aaron; La Pointe, Paul [Golder Associates Inc (United States); Simeonov, Assen [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden); Hermanson, Jan; Oehman, Johan [Golder Associates AB, Stockholm (Sweden)
2007-11-15
The Swedish Nuclear Fuel and Waste Management Company (SKB) is performing site characterization at two different locations, Forsmark and Laxemar, in order to locate a site for a final geologic repository for spent nuclear fuel. The program is built upon the development of Site Descriptive Models (SDMs) at specific timed data freezes. Each SDM is formed from discipline-specific reports from across the scientific spectrum. This report describes the methods, analyses, and conclusions of the geological modeling team with respect to a geological and statistical model of fractures and minor deformation zones (henceforth referred to as the geological DFN), version 2.2, at the Forsmark site. The geological DFN builds upon the work of other geological modelers, including the deformation zone (DZ), rock domain (RD), and fracture domain (FD) models. The geological DFN is a statistical model for stochastically simulating rock fractures and minor deformation zones at a scale of less than 1,000 m (the lower cut-off of the DZ models). The geological DFN is valid within four specific fracture domains inside the local model region encompassing the candidate volume at Forsmark: FFM01, FFM02, FFM03, and FFM06. The models are built using data from detailed surface outcrop maps and the cored borehole record at Forsmark. The conceptual model for the Forsmark 2.2 geological DFN revolves around the concept of orientation sets; for each fracture domain, other model parameters such as size and intensity are tied to the orientation sets. Two classes of orientation sets were described: Global sets, which are encountered everywhere in the model region, and Local sets, which represent highly localized stress environments. Orientation sets were described in terms of their general cardinal direction (NE, NW, etc.). Two alternatives are presented for fracture size modeling: - the tectonic continuum approach (TCM, TCMF) described by coupled size-intensity scaling following power law distributions
Marinescu, D.C.; Radulescu, T.G.
1977-06-01
The Integral Fourier Transform has a large range of applications in areas such as communication theory, circuit theory, and physics. In order to perform a discrete Fourier Transform, the Finite Fourier Transform is defined; it operates upon N samples of a uniformly sampled continuous function. All the properties known in the continuous case can also be found in the discrete case. The first part of the paper presents the relationship between the Finite Fourier Transform and the Integral one. The computation of a Finite Fourier Transform is a problem in itself, since in order to transform a set of N data we have to perform N² "operations" if the transformation relations are used directly. An algorithm known as the Fast Fourier Transform (FFT) reduces this figure from N² to a more reasonable N log₂ N when N is a power of two. The original Cooley and Tukey algorithm for the FFT can be further improved when higher radices are used; the price to be paid in this case is the increased complexity of such algorithms. The recurrence relations and a comparison among such algorithms are presented. The key point in understanding the applications of the FFT resides in the convolution theorem, which states that the convolution (an N²-type procedure) of the primitive functions is equivalent to the ordinary multiplication of their transforms. Since filtering is actually a convolution process, we present several procedures to perform digital filtering by means of the FFT; the best is the one using segmentation of records and transformation of pairs of records. In the digital processing of signals, besides digital filtering, special attention is paid to the estimation of various statistical characteristics of a signal, such as autocorrelation and correlation functions, periodograms, the power density spectrum, etc. We give several algorithms for the consistent and unbiased estimation of such functions by means of the FFT. (author)
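The convolution theorem described above can be checked directly: multiplying transforms and inverting reproduces the O(N²) direct convolution in O(N log N). A minimal NumPy sketch, where zero-padding to length N+M-1 avoids circular wrap-around:

```python
import numpy as np

def fft_filter(signal, kernel):
    """Linear convolution via the convolution theorem: transform both
    sequences (zero-padded to the full output length), multiply pointwise,
    and invert. An O(N log N) replacement for O(N^2) direct convolution."""
    n = len(signal) + len(kernel) - 1
    return np.real(np.fft.ifft(np.fft.fft(signal, n) * np.fft.fft(kernel, n)))

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([0.5, 0.5])        # simple moving-average filter
fast = fft_filter(x, h)
direct = np.convolve(x, h)      # O(N^2) reference result
```

The segmented-record procedures mentioned in the abstract (overlap-based methods) apply the same identity block by block to filter arbitrarily long signals.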
A scan statistic to extract causal gene clusters from case-control genome-wide rare CNV data
Scherer Stephen W
2011-05-01
Abstract. Background: Several statistical tests have been developed for analyzing genome-wide association data by incorporating gene pathway information in terms of gene sets. Using these methods, hundreds of gene sets are typically tested, and the tested gene sets often overlap. This overlapping greatly increases the probability of generating false positives, and the results obtained are difficult to interpret, particularly when many gene sets show statistical significance. Results: We propose a flexible statistical framework to circumvent these problems. Inspired by spatial scan statistics for detecting clustering of disease occurrence in the field of epidemiology, we developed a scan statistic to extract disease-associated gene clusters from a whole gene pathway. Extracting one or a few significant gene clusters from a global pathway limits the overall false positive probability, which results in increased statistical power, and facilitates the interpretation of test results. In the present study, we applied our method to genome-wide association data for rare copy-number variations, which have been strongly implicated in common diseases. Application of our method to a simulated dataset demonstrated the high accuracy of this method in detecting disease-associated gene clusters in a whole gene pathway. Conclusions: The scan statistic approach proposed here shows a high level of accuracy in detecting gene clusters in a whole gene pathway. This study has provided a sound statistical framework for analyzing genome-wide rare CNV data by incorporating topological information on the gene pathway.
Analysis of health in health centers area in Depok using correspondence analysis and scan statistic
Basir, C.; Widyaningsih, Y.; Lestari, D.
2017-07-01
Hotspots indicate areas that have a higher case intensity than others. In the health context, for example, the number of cases in a region can serve as a parameter characterizing the severity of that area. If this condition is known early, it can be addressed preventively. Many factors affect the severity level of an area. The health factors considered in this study are the number of infants with low birth weight, malnourished children under five years old, under-five mortality, maternal deaths, births without the help of health personnel, infants without infant health care, and infants without basic immunization. Case numbers are tallied for every public health center area in Depok. Correspondence analysis provides graphical information about the relationship between two nominal variables: it creates a plot based on row and column scores and shows strongly related categories at close distances. The scan statistic method is used to detect hotspots based on selected variables in the study area, and correspondence analysis is used to picture the association between the regions and the variables. Using SaTScan software, the Sukatani health center was identified as a point hotspot, and the correspondence analysis shows that the health centers and the seven variables have a very significant relationship, with the majority of health centers close to all variables except Cipayung, which is distantly related to the number of maternal deaths. These results can be used as input for government agencies to upgrade the health level in the area.
Honda, O; Yanagawa, M; Inoue, A; Kikuyama, A; Yoshida, S; Sumikawa, H; Tobino, K; Koyama, M; Tomiyama, N
2011-04-01
We investigated the image quality of multiplanar reconstruction (MPR) using adaptive statistical iterative reconstruction (ASIR). Inflated and fixed lungs were scanned with a garnet detector CT in high-resolution mode (HR mode) or non-high-resolution (non-HR) mode, and MPR images were then reconstructed. Observers compared 15 MPR images of ASIR (40%) and ASIR (80%) with those of ASIR (0%), and assessed image quality using a visual five-point scale (1, definitely inferior; 5, definitely superior), with particular emphasis on normal pulmonary structures, artefacts, noise and overall image quality. The mean overall image quality scores in HR mode were 3.67 with ASIR (40%) and 4.97 with ASIR (80%). Those in non-HR mode were 3.27 with ASIR (40%) and 3.90 with ASIR (80%). The mean artefact scores in HR mode were 3.13 with ASIR (40%) and 3.63 with ASIR (80%), but those in non-HR mode were 2.87 with ASIR (40%) and 2.53 with ASIR (80%). The mean scores of the other parameters were greater than 3, and those in HR mode were higher than those in non-HR mode. There were significant differences between ASIR (40%) and ASIR (80%) in overall image quality. ASIR did not suppress the severe artefacts of contrast medium. In general, MPR image quality with ASIR (80%) was superior to that with ASIR (40%). However, there was an increased incidence of artefacts with ASIR when CT images were obtained in non-HR mode.
Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong
2016-01-01
Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set-proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters.
Ozonoff Al
2010-07-01
Abstract. Background: A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM) which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. Results: This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing log odds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. Conclusions: The GAM
Young, Robin L; Weinberg, Janice; Vieira, Verónica; Ozonoff, Al; Webster, Thomas F
2010-07-19
A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM) which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power, though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing log-odds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three cases. The GAM permutation testing methods provide a regression
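The permutation logic described above — shuffling case/control labels to build a null distribution for a spatial statistic — can be sketched in a few lines of Python. The clustering statistic below is a toy stand-in (inverse mean pairwise distance among cases), not the GAM-based statistic the authors use, and the point data are invented:

```python
import random
from math import hypot

def perm_pvalue(points, labels, stat, n_perm=999, seed=0):
    """Monte Carlo permutation p-value for a spatial association statistic.
    points: list of (x, y); labels: list of 0/1 (control/case)."""
    rng = random.Random(seed)
    observed = stat(points, labels)
    ge = 0
    lab = list(labels)
    for _ in range(n_perm):
        rng.shuffle(lab)                 # break any point-label association
        if stat(points, lab) >= observed:
            ge += 1
    return (ge + 1) / (n_perm + 1)       # add-one Monte Carlo estimate

def case_clustering(points, labels):
    """Toy statistic: inverse of the mean pairwise distance among cases."""
    cases = [p for p, l in zip(points, labels) if l == 1]
    d = [hypot(a[0] - b[0], a[1] - b[1])
         for i, a in enumerate(cases) for b in cases[i + 1:]]
    return 1.0 / (sum(d) / len(d))

# toy data: cases tightly clustered, controls spread out
pts = [(0.1, 0.1), (0.12, 0.11), (0.11, 0.09), (0.13, 0.1),
       (3, 4), (8, 1), (5, 9), (9, 7)]
lab = [1, 1, 1, 1, 0, 0, 0, 0]
p = perm_pvalue(pts, lab, case_clustering, n_perm=199, seed=1)
```

With the cases this tightly grouped, almost no relabelling reproduces the observed clustering, so the p-value comes out small.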
Darcel, C.; Davy, P.; Le Goc, R.; Dreuzy, J.R. de; Bour, O.
2009-11-01
Investigations carried out for several years at Laxemar and Forsmark reveal the large heterogeneity of geological formations and associated fracturing. This project aims at reinforcing the statistical DFN modeling framework adapted to a site scale, and therefore at developing quantitative methods of characterization adapted to the nature of fracturing and data availability. We start from the hypothesis that the maximum likelihood DFN model is a power-law model with a density term depending on orientations. This is supported both by the literature and, specifically here, by former analyses of the SKB data. This assumption is nevertheless thoroughly tested by analyzing the fracture trace and lineament maps. Fracture traces range roughly between 0.5 m and 10 m, i.e. the usual extent of the sampled outcrops. Between the raw data and the final data used to compute the fracture size distribution, from which the size distribution model will arise, several steps are necessary in order to correct the data for finite-size, topographical and sampling effects. More precisely, particular attention is paid to fracture segmentation status and fracture linkage consistent with the expected DFN model. The fracture scaling trend observed over both sites finally displays a shape parameter k_t close to 1.2 with a density term (α_2d) between 1.4 and 1.8. Only two outcrops clearly display a different trend, with k_t close to 3 and a density term (α_2d) between 2 and 3.5. The fracture lineaments span the range between 100 meters and a few kilometers. When compared with fracture trace maps, these datasets are already interpreted, and the linkage process developed previously does not need to be repeated. Except for the subregional lineament map from Forsmark, lineaments display a clear power-law trend with a shape parameter k_t equal to 3 and a density term between 2 and 4.5. The apparent variation in scaling exponent, from the outcrop scale (k_t = 1.2) on one side, to the lineament scale (k_t = 2) on
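Fitting a power-law size distribution of the kind described above is commonly done with the continuous maximum-likelihood (Hill) estimator for the exponent of p(x) ∝ x^(-k) above a cutoff x_min. The sample below is synthetic, not the SKB trace data, and x_min = 0.5 m is taken only because it matches the lower end of the quoted trace range:

```python
import random
from math import log

def powerlaw_mle(lengths, xmin):
    """Continuous power-law MLE (Hill estimator) for the exponent k of
    p(x) ∝ x^(-k) on x >= xmin."""
    tail = [x for x in lengths if x >= xmin]
    return 1.0 + len(tail) / sum(log(x / xmin) for x in tail)

# synthetic trace lengths drawn from p(x) ∝ x^-2.2 above xmin = 0.5 m,
# via the inverse-CDF transform X = xmin * U^(-1/(k-1))
rng = random.Random(42)
k_true, xmin = 2.2, 0.5
sample = [xmin * (1 - rng.random()) ** (-1 / (k_true - 1))
          for _ in range(5000)]
k_hat = powerlaw_mle(sample, xmin)
```

With 5000 samples the estimator recovers the generating exponent to within a few percent; on real trace data the finite-size and censoring corrections discussed in the abstract would be needed first.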
Background Noise Removal in Ultrasonic B-scan Images Using Iterative Statistical Techniques
Wells, I.; Charlton, P. C.; Mosey, S.; Donne, K. E.
2008-01-01
The interpretation of ultrasonic B-scan images can be a time-consuming process and its success depends on operator skills and experience. Removal of the image background will potentially improve its quality and hence improve operator diagnosis. An automatic background noise removal algorithm is
Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P
2015-03-01
Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities) one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line and planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. Published by Elsevier B.V.
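A crude stand-in for the line-by-line leveling step can be written as a per-scan-line median subtraction. The paper itself uses robust local regression rather than a per-line median, so this is only an illustrative sketch of the idea that each scan line carries its own offset:

```python
def level_lines(image):
    """Line-by-line leveling: subtract each scan line's median.
    A crude robust detrending step; the paper's method fits a smooth
    2-d trend with robust local regression instead."""
    out = []
    for row in image:
        s = sorted(row)
        n = len(s)
        median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
        out.append([v - median for v in row])
    return out

# two scan lines with different offsets are brought to a common level
leveled = level_lines([[1, 2, 3], [10, 11, 12]])
```

The median makes the offset estimate insensitive to a few outlier pixels, which is the same motivation as the downweighting in the robust regression approach.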
Faires, Meredith C; Pearl, David L; Ciccotelli, William A; Berke, Olaf; Reid-Smith, Richard J; Weese, J Scott
2014-07-08
In healthcare facilities, conventional surveillance techniques using rule-based guidelines may result in under- or over-reporting of methicillin-resistant Staphylococcus aureus (MRSA) outbreaks, as these guidelines are generally unvalidated. The objectives of this study were to investigate the utility of the temporal scan statistic for detecting MRSA clusters, validate clusters using molecular techniques and hospital records, and determine significant differences in the rate of MRSA cases using regression models. Patients admitted to a community hospital between August 2006 and February 2011, and identified with MRSA >48 hours following hospital admission, were included in this study. Between March 2010 and February 2011, MRSA specimens were obtained for spa typing. MRSA clusters were investigated using a retrospective temporal scan statistic. Tests were conducted on a monthly scale and significant clusters were compared to MRSA outbreaks identified by hospital personnel. Associations between the rate of MRSA cases and the variables year, month, and season were investigated using a negative binomial regression model. During the study period, 735 MRSA cases were identified and 167 MRSA isolates were spa typed. Nine different spa types were identified, with spa type 2/t002 (88.6%) the most prevalent. The temporal scan statistic identified significant MRSA clusters at the hospital (n=2), service (n=16), and ward (n=10) levels (P ≤ 0.05). Seven clusters were concordant with nine MRSA outbreaks identified by hospital staff. For the remaining clusters, seven events may have been equivalent to true outbreaks and six clusters demonstrated possible transmission events. The regression analysis indicated that years 2009-2011, compared to 2006, and the months March and April, compared to January, were associated with an increase in the rate of MRSA cases (P ≤ 0.05). The application of the temporal scan statistic identified several MRSA clusters that were not detected by hospital
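A retrospective temporal scan of the kind described above amounts to maximizing Kulldorff's Poisson log-likelihood ratio over all windows of consecutive months. The monthly counts below are invented toy numbers, not the hospital's MRSA data:

```python
from math import log

def temporal_scan(cases_by_month, expected_by_month):
    """Retrospective temporal scan: return (LLR, (i, j)) for the half-open
    window [i, j) of consecutive months with the largest Poisson
    log-likelihood ratio, considering elevated-risk windows only."""
    C = sum(cases_by_month)
    E = sum(expected_by_month)
    best = (0.0, None)
    n = len(cases_by_month)
    for i in range(n):
        for j in range(i + 1, n + 1):
            c = sum(cases_by_month[i:j])
            e = sum(expected_by_month[i:j]) * C / E  # rescale so totals match
            if e < c < C:
                llr = c * log(c / e) + (C - c) * log((C - c) / (C - e))
                if llr > best[0]:
                    best = (llr, (i, j))
    return best

obs = [3, 2, 4, 15, 14, 3, 2]   # toy monthly counts with a spike in months 3-4
exp = [4.0] * 7                 # flat baseline expectation
llr, window = temporal_scan(obs, exp)
```

In practice the significance of the top window is then assessed by Monte Carlo replication under the null, which is what SaTScan-style software does.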
Darcel, C. (Itasca Consultants SAS (France)); Davy, P.; Le Goc, R.; Dreuzy, J.R. de; Bour, O. (Geosciences Rennes, UMR 6118 CNRS, Univ. de Rennes, Rennes (France))
2009-11-15
Investigations carried out for several years at Laxemar and Forsmark reveal the large heterogeneity of geological formations and associated fracturing. This project aims at reinforcing the statistical DFN modeling framework adapted to a site scale, and therefore at developing quantitative methods of characterization adapted to the nature of fracturing and data availability. We start from the hypothesis that the maximum likelihood DFN model is a power-law model with a density term depending on orientations. This is supported both by the literature and, specifically here, by former analyses of the SKB data. This assumption is nevertheless thoroughly tested by analyzing the fracture trace and lineament maps. Fracture traces range roughly between 0.5 m and 10 m, i.e. the usual extent of the sampled outcrops. Between the raw data and the final data used to compute the fracture size distribution, from which the size distribution model will arise, several steps are necessary in order to correct the data for finite-size, topographical and sampling effects. More precisely, particular attention is paid to fracture segmentation status and fracture linkage consistent with the expected DFN model. The fracture scaling trend observed over both sites finally displays a shape parameter k_t close to 1.2 with a density term (α_2d) between 1.4 and 1.8. Only two outcrops clearly display a different trend, with k_t close to 3 and a density term (α_2d) between 2 and 3.5. The fracture lineaments span the range between 100 meters and a few kilometers. When compared with fracture trace maps, these datasets are already interpreted, and the linkage process developed previously does not need to be repeated. Except for the subregional lineament map from Forsmark, lineaments display a clear power-law trend with a shape parameter k_t equal to 3 and a density term between 2 and 4.5. The apparent variation in scaling exponent, from the outcrop scale (k_t = 1.2) on one side, to
Comparison of statistical sampling methods with ScannerBit, the GAMBIT scanning module
Martinez, Gregory D. [University of California, Physics and Astronomy Department, Los Angeles, CA (United States); McKay, James; Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Farmer, Ben; Conrad, Jan [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Roebber, Elinore [McGill University, Department of Physics, Montreal, QC (Canada); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Collaboration: The GAMBIT Scanner Workgroup
2017-11-15
We introduce ScannerBit, the statistics and sampling module of the public, open-source global fitting framework GAMBIT. ScannerBit provides a standardised interface to different sampling algorithms, enabling the use and comparison of multiple computational methods for inferring profile likelihoods, Bayesian posteriors, and other statistical quantities. The current version offers random, grid, raster, nested sampling, differential evolution, Markov Chain Monte Carlo (MCMC) and ensemble Monte Carlo samplers. We also announce the release of a new standalone differential evolution sampler, Diver, and describe its design, usage and interface to ScannerBit. We subject Diver and three other samplers (the nested sampler MultiNest, the MCMC GreAT, and the native ScannerBit implementation of the ensemble Monte Carlo algorithm T-Walk) to a battery of statistical tests. For this we use a realistic physical likelihood function, based on the scalar singlet model of dark matter. We examine the performance of each sampler as a function of its adjustable settings, and the dimensionality of the sampling problem. We evaluate performance on four metrics: optimality of the best fit found, completeness in exploring the best-fit region, number of likelihood evaluations, and total runtime. For Bayesian posterior estimation at high resolution, T-Walk provides the most accurate and timely mapping of the full parameter space. For profile likelihood analysis in less than about ten dimensions, we find that Diver and MultiNest score similarly in terms of best fit and speed, outperforming GreAT and T-Walk; in ten or more dimensions, Diver substantially outperforms the other three samplers on all metrics. (orig.)
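As an illustration of the differential-evolution family of samplers to which Diver belongs, here is a minimal rand/1/bin scheme. This is a generic textbook sketch, not Diver's actual implementation or its ScannerBit interface, and the likelihood surface is a toy quadratic rather than the scalar singlet likelihood used in the paper:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=100, seed=0):
    """Minimal rand/1/bin differential evolution; minimizes f over a box."""
    rng = random.Random(seed)
    d = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(d)          # force at least one mutated gene
            trial = []
            for j in range(d):
                if rng.random() < CR or j == jr:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the box
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:               # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# toy likelihood surface: quadratic bowl with minimum at (0.5, -0.3)
x, fx = differential_evolution(lambda p: (p[0] - 0.5)**2 + (p[1] + 0.3)**2,
                               [(-2, 2), (-2, 2)], gens=150)
```

The paper's performance metrics (best fit found, completeness, evaluation count, runtime) are exactly the quantities one would log around the `f(trial)` call in a sketch like this.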
Więckowska, Barbara; Marcinkowska, Justyna
2017-11-06
When searching for epidemiological clusters, a useful approach is to compare one's own data against an incidence rate from the literature, used as the reference level. Values exceeding this level may indicate the presence of a cluster in that location. This paper presents a method of searching for clusters that have significantly higher incidence rates than the level specified by the investigator. The proposed method uses the classic exact binomial test for one proportion and an algorithm that joins areas with potential clusters while reducing the number of multiple comparisons needed. The new method preserves sensitivity and specificity while avoiding the Monte Carlo approach, and still delivers results comparable to the commonly used Kulldorff's scan statistic and other similar methods of localising clusters. The supporting statistical software also allows the results to be analysed and presented cartographically.
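The core of the proposed method, an exact binomial test of a local case count against a literature reference rate, can be sketched directly. The counts and the 1.5% reference rate below are hypothetical:

```python
from math import comb

def binom_sf(k, n, p):
    """Exact upper tail P(X >= k) for X ~ Binomial(n, p) — the one-sided
    exact binomial test of an observed count against reference rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# hypothetical area: 12 cases among 400 residents; literature rate 1.5%
# (expected count = 6), one-sided test for an elevated local rate
p_value = binom_sf(12, 400, 0.015)
```

Because the reference rate is fixed from the literature rather than estimated from the map, no Monte Carlo replication is needed; the multiple-comparison burden is then handled by the area-joining algorithm the paper describes.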
Hervind, Widyaningsih, Y.
2017-07-01
Concurrent infection with multiple infectious agents may occur in one patient, and appears frequently in dengue hemorrhagic fever (DHF) and typhoid fever. This paper depicts the association between DHF and typhoid from a spatial point of view. Given the paucity of data on dengue and typhoid co-infection, the data used are the numbers of patients with those diseases in every district (kecamatan) in Jakarta in 2014 and 2015, obtained from the Jakarta surveillance website. The Poisson spatial scan statistic is used to detect DHF and typhoid hotspot districts in Jakarta separately. After obtaining the hotspots, Fisher's exact test is applied to validate the association between the two diseases' hotspots. The results show that the hotspots of DHF and typhoid are located around central Jakarta. A further analysis used the Poisson space-time scan statistic to reveal hotspots in both space and time. DHF and typhoid fever are more likely to occur from January until May, in areas relatively similar to those found in the purely spatial result. Preventive action could be taken especially in the hotspot areas, and further study is required to investigate the causes based on the characteristics of the hotspot areas.
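The validation step, Fisher's exact test on the overlap of the two hotspot maps, amounts to a one-sided hypergeometric tail over a 2x2 table of districts. The counts below are hypothetical, not the Jakarta data:

```python
from math import comb

def fisher_exact_greater(a, b, c, d):
    """One-sided Fisher's exact test (alternative: greater) for the
    2x2 table [[a, b], [c, d]].  Here a = districts that are hotspots
    for both diseases, b = DHF-only, c = typhoid-only, d = neither."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    # sum hypergeometric probabilities for tables at least as extreme
    p = 0.0
    for x in range(a, min(row1, col1) + 1):
        p += comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    return p

# hypothetical 44 districts: 8 hotspots for both diseases, 3 DHF-only,
# 4 typhoid-only, 29 neither (expected overlap under independence ≈ 3)
p = fisher_exact_greater(8, 3, 4, 29)
```

A small p-value here indicates that the two diseases' hotspots co-locate more often than independence would predict.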
Hotspot detection using space-time scan statistics on children under five years of age in Depok
Verdiana, Miranti; Widyaningsih, Yekti
2017-03-01
Among the problems affecting the health level in Depok are the persistently high malnutrition rates from year to year and the increasing spread of infectious and non-communicable diseases in some areas. Children under five years old are a part of the population vulnerable to malnutrition and disease. For this reason, it is important to observe the location and time, where and when, malnutrition in Depok occurred with high intensity. To obtain the locations and times of hotspots of malnutrition and of diseases that attack children under five years old, the space-time scan statistic method can be used. The space-time scan statistic is a hotspot detection method in which area and time information are taken into account simultaneously. This method detects a hotspot with a cylindrical scanning window: the base of the cylinder describes the area, and the height of the cylinder describes the time. Each cylinder formed is a candidate hotspot and requires hypothesis testing to decide whether it can be declared a hotspot. Hotspot detection in this study was carried out by forming combinations of several variables. Some combinations of variables gave hotspot detection results that tend to be the same, forming groups (clusters). For the level of child health in Depok city, the Beji health care center region in 2011-2012 is a hotspot. Across the combinations of variables used in hotspot detection, the Beji health care center is most frequently identified as a hotspot. Hopefully the local government can adopt the right policies to improve the health level of children under five in the city of Depok.
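The test applied to each candidate cylinder is typically Kulldorff's Poisson likelihood ratio, comparing the cases observed inside the window with those expected there. A minimal sketch, with toy numbers and assuming the total expected count equals the total observed count:

```python
from math import log

def poisson_llr(c, e, C):
    """Kulldorff's Poisson log-likelihood ratio for one scanning window.
    c, e: observed and expected cases inside the cylinder (c < C);
    C: total observed cases, with total expected assumed equal to C.
    Returns 0 for windows without elevated risk."""
    if c <= e:
        return 0.0
    return c * log(c / e) + (C - c) * log((C - c) / (C - e))

# toy cylinder: one health-centre area over two years with 30 observed
# cases against 12.5 expected, out of 200 cases in total
llr = poisson_llr(c=30, e=12.5, C=200)
```

The cylinder with the largest LLR is the most likely cluster; its significance is then judged against Monte Carlo replications of the null, as in SaTScan-style software.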
Yurkin, Maxim A; Semyanov, Konstantin A; Tarasov, Peter A; Chernyshev, Andrei V; Hoekstra, Alfons G; Maltsev, Valeri P
2005-09-01
Elastic light scattering by mature red blood cells (RBCs) was theoretically and experimentally analyzed by use of the discrete dipole approximation (DDA) and scanning flow cytometry (SFC), respectively. SFC permits measurement of the angular dependence of the light-scattering intensity (indicatrix) of single particles. A mature RBC is modeled as a biconcave disk in DDA simulations of light scattering. We have studied the effect of RBC orientation related to the direction of the light incident upon the indicatrix. Numerical calculations of indicatrices for several axis ratios and volumes of RBC have been carried out. Comparison of the simulated indicatrices and indicatrices measured by SFC showed good agreement, validating the biconcave disk model for a mature RBC. We simulated the light-scattering output signals from the SFC with the DDA for RBCs modeled as a disk-sphere and as an oblate spheroid. The biconcave disk, the disk-sphere, and the oblate spheroid models have been compared for two orientations, i.e., face-on and rim-on incidence, relative to the direction of the incident beam. Only the oblate spheroid model for rim-on incidence gives results similar to those of the rigorous biconcave disk model.
Tumur, Odgerel; Soon, Kean; Brown, Fraser; Mykytowycz, Marcus
2013-01-01
The aims of our study were to evaluate the effect of application of Adaptive Statistical Iterative Reconstruction (ASIR) algorithm on the radiation dose of coronary computed tomography angiography (CCTA) and its effects on image quality of CCTA and to evaluate the effects of various patient and CT scanning factors on the radiation dose of CCTA. This was a retrospective study that included 347 consecutive patients who underwent CCTA at a tertiary university teaching hospital between 1 July 2009 and 20 September 2011. Analysis was performed comparing patient demographics, scan characteristics, radiation dose and image quality in two groups of patients in whom conventional Filtered Back Projection (FBP) or ASIR was used for image reconstruction. There were 238 patients in the FBP group and 109 patients in the ASIR group. There was no difference between the groups in the use of prospective gating, scan length or tube voltage. In the ASIR group, significantly lower tube current was used compared with the FBP group, 550 mA (450–600) vs. 650 mA (500–711.25) (median (interquartile range)), respectively, P<0.001. There was a 27% effective radiation dose reduction in the ASIR group compared with the FBP group, 4.29 mSv (2.84–6.02) vs. 5.84 mSv (3.88–8.39) (median (interquartile range)), respectively, P<0.001. Although ASIR was associated with increased image noise compared with FBP (39.93 ± 10.22 vs. 37.63 ± 18.79 (mean ± standard deviation), respectively, P<0.001), it did not affect the signal intensity, signal-to-noise ratio, contrast-to-noise ratio or the diagnostic quality of CCTA. Application of ASIR reduces the radiation dose of CCTA without affecting the image quality.
Tumur, Odgerel; Soon, Kean; Brown, Fraser; Mykytowycz, Marcus
2013-06-01
The aims of our study were to evaluate the effect of application of Adaptive Statistical Iterative Reconstruction (ASIR) algorithm on the radiation dose of coronary computed tomography angiography (CCTA) and its effects on image quality of CCTA and to evaluate the effects of various patient and CT scanning factors on the radiation dose of CCTA. This was a retrospective study that included 347 consecutive patients who underwent CCTA at a tertiary university teaching hospital between 1 July 2009 and 20 September 2011. Analysis was performed comparing patient demographics, scan characteristics, radiation dose and image quality in two groups of patients in whom conventional Filtered Back Projection (FBP) or ASIR was used for image reconstruction. There were 238 patients in the FBP group and 109 patients in the ASIR group. There was no difference between the groups in the use of prospective gating, scan length or tube voltage. In the ASIR group, significantly lower tube current was used compared with the FBP group, 550 mA (450-600) vs. 650 mA (500-711.25) (median (interquartile range)), respectively, P<0.001. There was a 27% effective radiation dose reduction in the ASIR group compared with the FBP group, 4.29 mSv (2.84-6.02) vs. 5.84 mSv (3.88-8.39) (median (interquartile range)), respectively, P<0.001. Although ASIR was associated with increased image noise compared with FBP (39.93 ± 10.22 vs. 37.63 ± 18.79 (mean ± standard deviation), respectively, P<0.001), it did not affect the signal intensity, signal-to-noise ratio, contrast-to-noise ratio or the diagnostic quality of CCTA. Application of ASIR reduces the radiation dose of CCTA without affecting the image quality. © 2013 The Authors. Journal of Medical Imaging and Radiation Oncology © 2013 The Royal Australian and New Zealand College of Radiologists.
Whitaker, Thomas J; Beltran, Chris; Tryggestad, Erik; Bues, Martin; Kruse, Jon J; Remmes, Nicholas B; Tasson, Alexandria; Herman, Michael G
2014-08-01
Delayed charge is a small amount of charge that is delivered to the patient after the planned irradiation is halted, which may degrade the quality of the treatment by delivering unwarranted dose to the patient. This study compares two methods for minimizing the effect of delayed charge on the dose delivered with a synchrotron-based discrete spot scanning proton beam. The delivery of several treatment plans was simulated by applying a normally distributed value of delayed charge, with a mean of 0.001 (SD 0.00025) MU, to each spot. Two correction methods were used to account for the delayed charge. Method one (CM1), which is in active clinical use, accounts for the delayed charge by adjusting the MU of the current spot based on the cumulative MU. Method two (CM2) in addition reduces the planned MU by a predicted value. Every fraction of a treatment was simulated using each method and then recomputed in the treatment planning system. The dose difference between the original plan and the sum of the simulated fractions was evaluated. Both methods were tested in a water phantom with a single beam and simple target geometry. Two separate phantom tests were performed. In one test the dose per fraction was varied from 0.5 to 2 Gy using 25 fractions per plan. In the other test the number of fractions was varied from 1 to 25, using 2 Gy per fraction. Three patient plans were used to determine the effect of delayed charge on the delivered dose under realistic clinical conditions. The order of spot delivery using CM1 was investigated by randomly selecting the starting spot for each layer, and by alternating per layer the starting spot from first to last. Only discrete spot scanning was considered in this study. Using the phantom setup and varying the dose per fraction, the maximum dose difference for each plan of 25 fractions was 0.37-0.39 Gy and 0.03-0.05 Gy for CM1 and CM2, respectively. While varying the total number of fractions, the maximum dose difference increased at a rate
Whitaker, Thomas J.; Beltran, Chris; Tryggestad, Erik; Kruse, Jon J.; Remmes, Nicholas B.; Tasson, Alexandria; Herman, Michael G.; Bues, Martin
2014-01-01
Purpose: Delayed charge is a small amount of charge that is delivered to the patient after the planned irradiation is halted, which may degrade the quality of the treatment by delivering unwarranted dose to the patient. This study compares two methods for minimizing the effect of delayed charge on the dose delivered with a synchrotron-based discrete spot scanning proton beam. Methods: The delivery of several treatment plans was simulated by applying a normally distributed value of delayed charge, with a mean of 0.001 (SD 0.00025) MU, to each spot. Two correction methods were used to account for the delayed charge. Method one (CM1), which is in active clinical use, accounts for the delayed charge by adjusting the MU of the current spot based on the cumulative MU. Method two (CM2) in addition reduces the planned MU by a predicted value. Every fraction of a treatment was simulated using each method and then recomputed in the treatment planning system. The dose difference between the original plan and the sum of the simulated fractions was evaluated. Both methods were tested in a water phantom with a single beam and simple target geometry. Two separate phantom tests were performed. In one test the dose per fraction was varied from 0.5 to 2 Gy using 25 fractions per plan. In the other test the number of fractions was varied from 1 to 25, using 2 Gy per fraction. Three patient plans were used to determine the effect of delayed charge on the delivered dose under realistic clinical conditions. The order of spot delivery using CM1 was investigated by randomly selecting the starting spot for each layer, and by alternating per layer the starting spot from first to last. Only discrete spot scanning was considered in this study. Results: Using the phantom setup and varying the dose per fraction, the maximum dose difference for each plan of 25 fractions was 0.37–0.39 Gy and 0.03–0.05 Gy for CM1 and CM2, respectively. While varying the total number of fractions, the maximum dose
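The difference between CM1 and CM2 can be illustrated with a toy cumulative-MU delivery loop. This is an idealized sketch, not the clinical control system: the predicted delayed charge is assumed exactly correct, which the real system cannot guarantee:

```python
def simulate_delivery(planned_mu, delayed_mu, predict):
    """Toy spot-by-spot delivery where a delayed charge lands after each spot.
    predict=False ~ CM1: adjust each spot from the cumulative MU only.
    predict=True  ~ CM2: additionally subtract the predicted delayed charge."""
    delivered, cum_planned, cum_delivered = [], 0.0, 0.0
    for mu, extra in zip(planned_mu, delayed_mu):
        cum_planned += mu
        target = cum_planned - cum_delivered   # CM1: cumulative-MU correction
        if predict:
            target -= extra                    # CM2: pre-subtract prediction
        actual = max(target, 0.0) + extra      # the delayed charge still lands
        delivered.append(actual)
        cum_delivered += actual
    return delivered

plan, delay = [1.0, 1.0, 1.0], [0.001, 0.001, 0.001]
cm1 = simulate_delivery(plan, delay, predict=False)
cm2 = simulate_delivery(plan, delay, predict=True)
```

In this idealization CM2 delivers exactly the planned total, while CM1 is left with the final spot's delayed charge as uncorrectable excess, which is consistent with the smaller residual dose differences the study reports for CM2.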
Hayslett, H T
1991-01-01
Statistics covers the basic principles of statistics. The book starts by tackling the importance of statistics and its two kinds; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses is explained, including tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population. The text the
Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; Maceachren, Alan M
2008-11-07
Kulldorff's spatial scan statistic and its software implementation - SaTScan - are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents comprised of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is known to be optimal to identify clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales. Finally, we propose a logical approach to proceed through the analysis of
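The reliability measure described above — how often each location is flagged across runs with different cluster-scaling parameters — reduces to a per-location flag frequency. A minimal sketch with invented run results, not the cervical cancer case-study data:

```python
def reliability(cluster_runs):
    """Per-location flag frequency across scan runs that vary the
    cluster-scaling parameters; values near 1 indicate locations that
    are stable members of a detected cluster across analysis scales."""
    counts = {}
    for run in cluster_runs:
        for loc in run:
            counts[loc] = counts.get(loc, 0) + 1
    n = len(cluster_runs)
    return {loc: c / n for loc, c in counts.items()}

# toy: clusters detected with the maximum window at, say, 10%, 25% and
# 50% of the population at risk (three hypothetical SaTScan-style runs)
runs = [{"A", "B"}, {"A", "B", "C"}, {"A", "C", "D"}]
rel = reliability(runs)
```

Mapping these frequencies, rather than any single run's cluster, is what lets the geovisual approach separate stable, homogeneous cluster cores from scale-dependent fringe locations.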
Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.
2005-01-01
For the years 2004 and 2005 the figures shown in the tables of Energy Review are partly preliminary. The annual statistics published in Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004.) The applied energy units and conversion coefficients are shown in the back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity GWh, Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes, precautionary stock fees and oil pollution fees
Erfan Ayubi
2017-05-01
OBJECTIVES The aim of this study was to explore the spatial pattern of female breast cancer (BC) incidence at the neighborhood level in Tehran, Iran. METHODS The present study included all registered incident cases of female BC from March 2008 to March 2011. The raw standardized incidence ratio (SIR) of BC for each neighborhood was estimated by comparing observed cases relative to expected cases. The estimated raw SIRs were smoothed by a Besag, York, and Mollie spatial model and the spatial empirical Bayesian method. The purely spatial scan statistic was used to identify spatial clusters. RESULTS There were 4,175 incident BC cases in the study area from 2008 to 2011, of which 3,080 were successfully geocoded to the neighborhood level. Higher than expected rates of BC were found in neighborhoods located in northern and central Tehran, whereas lower rates appeared in southern areas. The most likely cluster of higher than expected BC incidence involved neighborhoods in districts 3 and 6, with an observed-to-expected ratio of 3.92 (p<0.001), whereas the most likely cluster of lower than expected rates involved neighborhoods in districts 17, 18, and 19, with an observed-to-expected ratio of 0.05 (p<0.001). CONCLUSIONS Neighborhood-level inequality in the incidence of BC exists in Tehran. These findings can serve as a basis for resource allocation and preventive strategies in at-risk areas.
2001-01-01
For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions from the use of fossil fuels, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in 2000, Energy exports by recipient country in 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products
2000-01-01
For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-March 2000, Energy exports by recipient country in January-March 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products.
1999-01-01
For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 1999, Energy exports by recipient country in January-June 1999, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products.
2003-01-01
For the year 2002, some of the figures shown in the tables of the Energy Review are preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Excise taxes, precautionary stock fees and oil pollution fees on energy products.
2004-01-01
For the years 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity (GWh), Energy imports by country of origin in January-March 2004, Energy exports by recipient country in January-March 2004, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Excise taxes, precautionary stock fees and oil pollution fees.
2000-01-01
For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 2000, Energy exports by recipient country in January-June 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products.
Hauber, A Brett; González, Juan Marcos; Groothuis-Oudshoorn, Catharina G M; Prior, Thomas; Marshall, Deborah A; Cunningham, Charles; IJzerman, Maarten J; Bridges, John F P
2016-06-01
Conjoint analysis is a stated-preference survey method that can be used to elicit responses that reveal preferences, priorities, and the relative importance of individual features associated with health care interventions or services. Conjoint analysis methods, particularly discrete choice experiments (DCEs), have been increasingly used to quantify preferences of patients, caregivers, physicians, and other stakeholders. Recent consensus-based guidance on good research practices, including two recent task force reports from the International Society for Pharmacoeconomics and Outcomes Research, has aided in improving the quality of conjoint analyses and DCEs in outcomes research. Nevertheless, uncertainty regarding good research practices for the statistical analysis of data from DCEs persists. There are multiple methods for analyzing DCE data. Understanding the characteristics and appropriate use of different analysis methods is critical to conducting a well-designed DCE study. This report will assist researchers in evaluating and selecting among alternative approaches to conducting statistical analysis of DCE data. We first present a simplistic DCE example and a simple method for using the resulting data. We then present a pedagogical example of a DCE and one of the most common approaches to analyzing data from such a question format: conditional logit. We then describe some common alternative methods for analyzing these data and the strengths and weaknesses of each alternative. We present the ESTIMATE checklist, which includes a list of questions to consider when justifying the choice of analysis method, describing the analysis, and interpreting the results. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
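The conditional logit named in this abstract can be fit by maximum likelihood on choice-task data. A minimal sketch on simulated data (design sizes, attribute count, and true part-worths are all invented; real DCE analyses would also report standard errors):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated DCE: 1600 choice tasks, 3 alternatives per task, 2 attributes.
n_tasks, n_alts, n_attr = 1600, 3, 2
beta_true = np.array([1.0, -0.5])                 # assumed part-worths
X = rng.normal(size=(n_tasks, n_alts, n_attr))    # attribute levels
# Random-utility model: systematic utility plus Gumbel error, choose the max.
util = X @ beta_true + rng.gumbel(size=(n_tasks, n_alts))
choice = util.argmax(axis=1)

def neg_loglik(beta):
    v = X @ beta                                  # systematic utilities
    v = v - v.max(axis=1, keepdims=True)          # numerical stability
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(n_tasks), choice]).sum()

fit = minimize(neg_loglik, np.zeros(n_attr), method="BFGS")
```

With the Gumbel error structure, the estimates in `fit.x` should land close to the assumed `beta_true`.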
Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong
2013-01-01
As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for binary outcomes was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, constructing the likelihood function under the null hypothesis is an alternative, indirect way to identify the potential cluster, and the test statistic is the extreme value of the likelihood function. As in Kulldorff's method, we adopt a Monte Carlo test of significance. Both methods are applied to detect spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. A simulation on independent benchmark data indicates that the test statistic based on the hypergeometric model outperforms Kulldorff's statistics for clusters of high population density or large size; otherwise Kulldorff's statistics are superior.
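Kulldorff's Bernoulli-model scan statistic referenced above maximizes a log-likelihood ratio over candidate zones and assesses it by Monte Carlo. A 1-D toy sketch (contiguous windows stand in for circular zones; SaTScan additionally conditions its replicates on the observed case total, which this simplified parametric version does not):

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 regions with a planted high-rate cluster in regions 20-24.
pop = rng.integers(80, 121, 50)                 # persons at risk per region
cases = rng.binomial(pop, 0.05)
cases[20:25] = rng.binomial(pop[20:25], 0.15)   # elevated rate, for illustration

def llr(c, n, C, N):
    """Bernoulli-model log-likelihood ratio for a zone with c cases, n at risk."""
    if c * (N - n) <= (C - c) * n:              # scan only for elevated rates
        return 0.0
    x = lambda a, b: a * np.log(a / b) if a > 0 else 0.0
    return (x(c, n) + x(n - c, n) + x(C - c, N - n)
            + x(N - n - C + c, N - n) - x(C, N) - x(N - C, N))

def scan(cases, pop, max_len=10):
    """Maximum LLR over all contiguous windows of up to max_len regions."""
    C, N = cases.sum(), pop.sum()
    best = 0.0
    for i in range(len(cases)):
        c = n = 0
        for j in range(i, min(i + max_len, len(cases))):
            c, n = c + cases[j], n + pop[j]
            best = max(best, llr(c, n, C, N))
    return best

obs = scan(cases, pop)
C, N = cases.sum(), pop.sum()
# Monte Carlo p-value: rescan data simulated under the common-rate null.
null = [scan(rng.binomial(pop, C / N), pop) for _ in range(99)]
p = (1 + sum(s >= obs for s in null)) / 100
```

The planted cluster should dominate the null distribution, giving a small Monte Carlo p-value.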
Hauber, A. Brett; Gonzalez, Juan Marcos; Groothuis-Oudshoorn, Catharina Gerarda Maria; Prior, Thomas; Marshall, Deborah A.; Cunningham, Charles; IJzerman, Maarten Joost; Bridges, John
2016-01-01
Conjoint analysis is a stated-preference survey method that can be used to elicit responses that reveal preferences, priorities, and the relative importance of individual features associated with health care interventions or services. Conjoint analysis methods, particularly discrete choice
Sørensen, John Aasted
2011-01-01
The objectives of Discrete Mathematics (IDISM2) are: the introduction of the mathematics needed for analysis, design and verification of discrete systems, including the application within programming languages for computer systems. Having passed the IDISM2 course, the student will be able to accomplish the following: understand and apply formal representations in discrete mathematics; understand and apply formal representations in problems within discrete mathematics; understand methods for solving problems in discrete mathematics; apply methods for solving problems in discrete mathematics; construct a finite state machine for a given application; apply these concepts to new problems. The teaching in Discrete Mathematics is a combination of sessions with lectures and students solving problems, either manually or by using Matlab. Furthermore, a selection of projects must be solved and handed...
David Smith
2013-03-01
Background: Drug adverse event (AE) signal detection using the Gamma Poisson Shrinker (GPS) is commonly applied in spontaneous reporting. AE signal detection using large observational health plan databases can expand medication safety surveillance. Methods: Using data from nine health plans, we conducted a pilot study to evaluate the implementation and findings of the GPS approach for two antifungal drugs, terbinafine and itraconazole, and two diabetes drugs, pioglitazone and rosiglitazone. We evaluated 1676 diagnosis codes grouped into 183 different clinical concepts and four levels of granularity. Several signaling thresholds were assessed. GPS results were compared to findings from a companion study using the identical analytic dataset but an alternative statistical method, the tree-based scan statistic (TreeScan). Results: We identified 71 statistical signals across two signaling thresholds and two methods, including closely related signals of overlapping diagnosis definitions. Initial review found that most signals represented known adverse drug reactions or confounding. About 31% of signals met the highest signaling threshold. Conclusions: The GPS method was successfully applied to observational health plan data in a distributed data environment as a drug safety data mining method. There was substantial concordance between the GPS and TreeScan approaches. Key method implementation decisions relate to defining exposures and outcomes and informed choice of signaling thresholds.
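GPS shrinks observed-to-expected reporting ratios through an empirical-Bayes gamma mixture. A minimal sketch of only the unshrunk starting quantity, the relative reporting ratio RRR = N_ij / E_ij with expectations from the table margins (counts invented; the shrinkage step itself is omitted):

```python
import numpy as np

# Toy drug-by-event spontaneous-report counts (rows = drugs, cols = events).
counts = np.array([[20.0, 5.0, 100.0],
                   [ 3.0, 8.0,  60.0]])
row = counts.sum(axis=1, keepdims=True)
col = counts.sum(axis=0, keepdims=True)
expected = row * col / counts.sum()   # E_ij under row/column independence
rrr = counts / expected               # unshrunk relative reporting ratio
```

Cells with RRR well above 1 are the candidates that GPS would then shrink toward the prior before ranking signals.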
Izadi, F A; Bagirov, G
2009-01-01
With its origins stretching back several centuries, discrete calculus is now an increasingly central methodology for many problems related to discrete systems and algorithms. The topics covered here usually arise in many branches of science and technology, especially in discrete mathematics, numerical analysis, statistics and probability theory as well as in electrical engineering, but our viewpoint here is that these topics belong to a much more general realm of mathematics, namely calculus and differential equations, because of the remarkable analogy of the subject to this branch of mathematics.
Faires, Meredith C; Pearl, David L; Ciccotelli, William A; Berke, Olaf; Reid-Smith, Richard J; Weese, J Scott
2014-05-12
In hospitals, Clostridium difficile infection (CDI) surveillance relies on unvalidated guidelines or threshold criteria to identify outbreaks. This can result in false-positive and -negative cluster alarms. The application of statistical methods to identify and understand CDI clusters may be a useful alternative or complement to standard surveillance techniques. The objectives of this study were to investigate the utility of the temporal scan statistic for detecting CDI clusters and determine if there are significant differences in the rate of CDI cases by month, season, and year in a community hospital. Bacteriology reports of patients identified with a CDI from August 2006 to February 2011 were collected. For patients detected with CDI from March 2010 to February 2011, stool specimens were obtained. Clostridium difficile isolates were characterized by ribotyping and investigated for the presence of toxin genes by PCR. CDI clusters were investigated using a retrospective temporal scan test statistic. Statistically significant clusters were compared to known CDI outbreaks within the hospital. A negative binomial regression model was used to identify associations between year, season, month and the rate of CDI cases. Overall, 86 CDI cases were identified. Eighteen specimens were analyzed and nine ribotypes were classified, with ribotype 027 (n = 6) the most prevalent. The temporal scan statistic identified significant CDI clusters at the hospital (n = 5), service (n = 6), and ward (n = 4) levels (P ≤ 0.05). Three clusters were concordant with the one C. difficile outbreak identified by hospital personnel. Two clusters were identified as potential outbreaks. The negative binomial model indicated years 2007-2010 (P ≤ 0.05) had decreased CDI rates compared to 2006 and spring had an increased CDI rate compared to the fall (P = 0.023). Application of the temporal scan statistic identified several clusters, including potential outbreaks not detected by hospital personnel.
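The seasonal rate comparison described above is a count regression of monthly cases on calendar covariates. A minimal sketch using plain Poisson regression fit by IRLS as a simplified stand-in for the paper's negative binomial model (which adds an overdispersion parameter); the data and coefficients are simulated:

```python
import numpy as np

rng = np.random.default_rng(4)

# 55 simulated months with a "spring" indicator (first 3 months of each year).
n = 55
spring = (np.arange(n) % 12 < 3).astype(float)
X = np.column_stack([np.ones(n), spring])
beta_true = np.array([2.0, 0.4])          # assumed log-rate and spring effect
y = rng.poisson(np.exp(X @ beta_true))    # monthly case counts

# Poisson GLM with log link, fit by iteratively reweighted least squares.
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)                 # fitted means
    z = X @ beta + (y - mu) / mu          # working response
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
```

`np.exp(beta[1])` estimates the rate ratio for spring versus the rest of the year, the kind of contrast the study reports.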
Bansal, Ravi; Hao, Xuejun; Peterson, Bradley S
2015-05-01
We hypothesize that coordinated functional activity within discrete neural circuits induces morphological organization and plasticity within those circuits. Identifying regions of morphological covariation that are independent of morphological covariation in other regions may therefore allow us to identify discrete neural systems within the brain. Comparing the magnitude of these variations in individuals who have psychiatric disorders with the magnitude of variations in healthy controls may allow us to identify aberrant neural pathways in psychiatric illnesses. We measured surface morphological features by applying nonlinear, high-dimensional warping algorithms to manually defined brain regions. We transferred those measures onto the surface of a unit sphere via conformal mapping and then used spherical wavelets and their scaling coefficients to simplify the data structure representing these surface morphological features of each brain region. We used principal component analysis (PCA) to calculate covariation in these morphological measures, as represented by their scaling coefficients, across several brain regions. We then assessed whether brain subregions that covaried in morphology, as identified by large eigenvalues in the PCA, identified specific neural pathways of the brain. To do so, we spatially registered the subnuclei for each eigenvector into the coordinate space of a Diffusion Tensor Imaging dataset; we used these subnuclei as seed regions to track and compare fiber pathways with known fiber pathways identified in neuroanatomical atlases. We applied these procedures to anatomical MRI data in a cohort of 82 healthy participants (42 children, 18 males, age 10.5 ± 2.43 years; 40 adults, 22 males, age 32.42 ± 10.7 years) and 107 participants with Tourette's Syndrome (TS) (71 children, 59 males, age 11.19 ± 2.2 years; 36 adults, 21 males, age 37.34 ± 10.9 years). We evaluated the construct validity of the identified covariation in morphology.
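The PCA step described above looks for groups of measures that covary together. A minimal sketch on simulated "scaling coefficients" with two covarying blocks (subject count matches the healthy cohort; everything else is invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# 82 subjects x 40 morphological measures, built from two latent factors so
# that measures 0-19 covary together and measures 20-39 covary together.
latent = rng.normal(size=(82, 2))
loadings = np.zeros((2, 40))
loadings[0, :20] = 1.0
loadings[1, 20:] = 1.0
data = latent @ loadings + 0.1 * rng.normal(size=(82, 40))

# PCA via SVD of the centered data matrix.
centered = data - data.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / (s**2).sum()   # variance fraction per component
```

With two planted factors, the first two components should absorb nearly all the covariation, mirroring how large eigenvalues flagged covarying subregions in the study.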
Sørensen, John Aasted
2011-01-01
Having passed the course, the student will be able to accomplish the following: understand and apply formal representations in discrete mathematics; understand and apply formal representations in problems within discrete mathematics; understand methods for solving problems in discrete mathematics; apply methods for solving problems in discrete mathematics to new problems. Relations and functions: define a product set; define and apply equivalence relations; construct and apply functions. Apply these concepts to new problems. Natural numbers and induction: define the natural numbers; apply the principle of induction to verify a selection of properties; construct a finite state machine for a given application. Apply these concepts to new problems. The teaching in Discrete Mathematics is a combination of sessions with lectures and students solving problems, either manually or by using Matlab. Furthermore, a selection of projects must be solved and handed...
Busch, Peter Andre; Zinner Henriksen, Helle
2018-01-01
This study reviews 44 peer-reviewed articles on digital discretion published in the period from 1998 to January 2017. Street-level bureaucrats have traditionally had a wide ability to exercise discretion, stirring debate since they can add their personal footprint on public policies. Digital discretion is suggested to reduce this footprint by influencing or replacing their discretionary practices using ICT. What is less researched is whether digital discretion can cause changes in public policy outcomes, and under what conditions such changes can occur. Using the concept of public service values, we suggest that digital discretion can strengthen ethical and democratic values but weaken professional and relational values. Furthermore, we conclude that contextual factors such as considerations made by policy makers on the macro-level and the degree of professionalization of street...
Evaluation of the ICS and DEW scatter correction methods for low statistical content scans in 3D PET
Sossi, V.; Oakes, T.R.; Ruth, T.J.
1996-01-01
The performance of the Integral Convolution and the Dual Energy Window scatter correction methods in 3D PET has been evaluated over a wide range of statistical content of acquired data (1M to 400M events). The order in which scatter correction and detector normalization should be applied has also been investigated. Phantom and human neuroreceptor studies were used with the following figures of merit: axial and radial uniformity, sinogram and image noise, contrast accuracy and contrast accuracy uniformity. Both scatter correction methods perform reliably in the range of number of events examined. Normalization applied after scatter correction yields better radial uniformity and fewer image artifacts.
IMANISHI, M.; NEWTON, A. E.; VIEIRA, A. R.; GONZALEZ-AVILES, G.; KENDALL SCOTT, M. E.; MANIKONDA, K.; MAXWELL, T. N.; HALPIN, J. L.; FREEMAN, M. M.; MEDALLA, F.; AYERS, T. L.; DERADO, G.; MAHON, B. E.; MINTZ, E. D.
2016-01-01
SUMMARY Although rare, typhoid fever cases acquired in the United States continue to be reported. Detection and investigation of outbreaks in these domestically acquired cases offer opportunities to identify chronic carriers. We searched surveillance and laboratory databases for domestically acquired typhoid fever cases, used a space–time scan statistic to identify clusters, and classified clusters as outbreaks or non-outbreaks. From 1999 to 2010, domestically acquired cases accounted for 18% of 3373 reported typhoid fever cases; their isolates were less often multidrug-resistant (2% vs. 15%) compared to isolates from travel-associated cases. We identified 28 outbreaks and two possible outbreaks within 45 space–time clusters of ⩾2 domestically acquired cases, including three outbreaks involving ⩾2 molecular subtypes. The approach detected seven of the ten outbreaks published in the literature or reported to CDC. Although this approach did not definitively identify any previously unrecognized outbreaks, it showed the potential to detect outbreaks of typhoid fever that may escape detection by routine analysis of surveillance data. Sixteen outbreaks had been linked to a carrier. Every case of typhoid fever acquired in a non-endemic country warrants thorough investigation. Space–time scan statistics, together with shoe-leather epidemiology and molecular subtyping, may improve outbreak detection. PMID:25427666
Imanishi, M; Newton, A E; Vieira, A R; Gonzalez-Aviles, G; Kendall Scott, M E; Manikonda, K; Maxwell, T N; Halpin, J L; Freeman, M M; Medalla, F; Ayers, T L; Derado, G; Mahon, B E; Mintz, E D
2015-08-01
Although rare, typhoid fever cases acquired in the United States continue to be reported. Detection and investigation of outbreaks in these domestically acquired cases offer opportunities to identify chronic carriers. We searched surveillance and laboratory databases for domestically acquired typhoid fever cases, used a space-time scan statistic to identify clusters, and classified clusters as outbreaks or non-outbreaks. From 1999 to 2010, domestically acquired cases accounted for 18% of 3373 reported typhoid fever cases; their isolates were less often multidrug-resistant (2% vs. 15%) compared to isolates from travel-associated cases. We identified 28 outbreaks and two possible outbreaks within 45 space-time clusters of ⩾2 domestically acquired cases, including three outbreaks involving ⩾2 molecular subtypes. The approach detected seven of the ten outbreaks published in the literature or reported to CDC. Although this approach did not definitively identify any previously unrecognized outbreaks, it showed the potential to detect outbreaks of typhoid fever that may escape detection by routine analysis of surveillance data. Sixteen outbreaks had been linked to a carrier. Every case of typhoid fever acquired in a non-endemic country warrants thorough investigation. Space-time scan statistics, together with shoe-leather epidemiology and molecular subtyping, may improve outbreak detection.
Yih, W Katherine; Maro, Judith C; Nguyen, Michael; Baker, Meghan A; Balsbaugh, Carolyn; Cole, David V; Dashevsky, Inna; Mba-Jonas, Adamma; Kulldorff, Martin
2018-06-01
The self-controlled tree-temporal scan statistic-a new signal-detection method-can evaluate whether any of a wide variety of health outcomes are temporally associated with receipt of a specific vaccine, while adjusting for multiple testing. Neither health outcomes nor postvaccination potential periods of increased risk need be prespecified. Using US medical claims data in the Food and Drug Administration's Sentinel system, we employed the method to evaluate adverse events occurring after receipt of quadrivalent human papillomavirus vaccine (4vHPV). Incident outcomes recorded in emergency department or inpatient settings within 56 days after first doses of 4vHPV received by 9- through 26.9-year-olds in 2006-2014 were identified using International Classification of Diseases, Ninth Revision, diagnosis codes and analyzed by pairing the new method with a standard hierarchical classification of diagnoses. On scanning diagnoses of 1.9 million 4vHPV recipients, 2 statistically significant categories of adverse events were found: cellulitis on days 2-3 after vaccination and "other complications of surgical and medical procedures" on days 1-3 after vaccination. Cellulitis is a known adverse event. Clinically informed investigation of electronic claims records of the patients with "other complications" did not suggest any previously unknown vaccine safety problem. Considering that thousands of potential short-term adverse events and hundreds of potential risk intervals were evaluated, these findings add significantly to the growing safety record of 4vHPV.
Zielinski, Jerzy S.
The dramatic increase in the number and volume of digital images produced in medical diagnostics, and the escalating demand for rapid access to these relevant medical data, along with the need for interpretation and retrieval, has become of paramount importance to a modern healthcare system. Therefore, there is an ever-growing need for processed, interpreted and saved images of various types. Due to the high cost and unreliability of human-dependent image analysis, it is necessary to develop an automated method for feature extraction, using sophisticated mathematical algorithms and reasoning. This work is focused on digital image signal processing of biological and biomedical data in one-, two- and three-dimensional space. Methods and algorithms presented in this work were used to acquire data from genomic sequences, breast cancer, and biofilm images. One-dimensional analysis was applied to DNA sequences, which were presented as a non-stationary sequence and modeled by a time-dependent autoregressive moving average (TD-ARMA) model. Two-dimensional analyses used a 2D-ARMA model and applied it to detect breast cancer from x-ray mammograms or ultrasound images. Three-dimensional detection and classification techniques were applied to biofilm images acquired using confocal laser scanning microscopy. Modern medical images are geometrically arranged arrays of data. The broadening scope of imaging as a way to organize our observations of the biophysical world has led to a dramatic increase in our ability to apply new processing techniques and to combine multiple channels of data into sophisticated and complex mathematical models of physiological function and dysfunction. With the explosion of the amount of data produced in the field of biomedicine, it is crucial to be able to construct accurate mathematical models of the data at hand. The two main purposes of signal modeling are data size conservation and parameter extraction. Specifically, in biomedical imaging we have four key problems.
Sørensen, John Aasted
2010-01-01
The introduction of the mathematics needed for analysis, design and verification of discrete systems, including applications within programming languages for computer systems. Course sessions and project work. Semester: Spring 2010. Extent: 5 ECTS. Class size: 18.
Sørensen, John Aasted
2010-01-01
The introduction of the mathematics needed for analysis, design and verification of discrete systems, including applications within programming languages for computer systems. Course sessions and project work. Semester: Autumn 2010. Extent: 5 ECTS. Class size: 15.
Caltagirone, Jean-Paul
2014-01-01
This book presents the fundamental principles of mechanics to re-establish the equations of Discrete Mechanics. It introduces the physics and thermodynamics associated with physical modeling. The development and complementarity of the sciences lead us today to revisit the old concepts that were the basis for the development of continuum mechanics. Differential geometry is used to review the conservation laws of mechanics. For instance, this formalism requires a different location of vector and scalar quantities in space. The equations of Discrete Mechanics form a system of equations where the H
Lee, T.D.
1985-01-01
This paper reviews the role of time throughout all phases of mechanics: classical mechanics, non-relativistic quantum mechanics, and relativistic quantum theory. As an example of the relativistic quantum field theory, the case of a massless scalar field interacting with an arbitrary external current is discussed. The comparison between the new discrete theory and the usual continuum formalism is presented. An example is given of a two-dimensional random lattice and its dual. The author notes that there is no evidence that the discrete mechanics is more appropriate than the usual continuum mechanics.
Rubino, Corrado; Mazzarello, Vittorio; Faenza, Mario; Montella, Andrea; Santanelli, Fabio; Farace, Francesco
2015-06-01
The aim of this study was to evaluate the effects on adipocyte morphology of 2 techniques of fat harvesting and of fat purification in lipofilling, considering that the number of viable healthy adipocytes is important for fat survival in recipient areas of lipofilling. Fat harvesting was performed in 10 female patients from the flanks, on one side with a 2-mm Coleman cannula and on the other side with a 3-mm Mercedes cannula. Thirty milliliters of fat tissue from each side was collected and divided into three 10 mL syringes: A, B, and C. The fat inside syringe A was left untreated, the fat in syringe B underwent simple sedimentation, and the fat inside syringe C underwent centrifugation at 3000 rpm for 3 minutes. Each fat graft specimen was processed for examination under a low-vacuum scanning electron microscope. Diameter (μm), number of adipocytes per square millimeter, and number of altered adipocytes per square millimeter were evaluated. Untreated specimens harvested with the 2 different techniques were first compared; then sedimented versus centrifuged specimens harvested with the same technique were compared. Statistical analysis was performed using the Wilcoxon signed rank test. The number of adipocytes per square millimeter was statistically higher in specimens harvested with the 3-mm Mercedes cannula (P = 0.0310). The number of altered cells was statistically higher in centrifuged specimens than in sedimented ones with both methods of fat harvesting (P = 0.0080 with a 2-mm Coleman cannula and P = 0.0050 with a 3-mm Mercedes cannula). Alterations in adipocyte morphology consisted of wrinkling of the membrane, opening of pores with leakage of oily material, reduction of cellular diameter, and total collapse of the cellular membrane. Fat harvesting with a 3-mm cannula results in a higher number of adipocytes, and centrifugation of the harvested fat results in a higher number of morphologically altered cells than sedimentation.
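The Wilcoxon signed rank test used above handles exactly this kind of paired, per-patient comparison. A minimal sketch with invented counts (10 hypothetical patients, sedimented versus centrifuged specimens):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired adipocyte counts per square millimeter for 10 patients;
# the numbers are invented for illustration, not taken from the study.
sedimented  = np.array([152, 148, 160, 155, 149, 158, 151, 163, 157, 150])
centrifuged = np.array([140, 139, 150, 147, 138, 151, 145, 150, 152, 146])

stat, p = wilcoxon(sedimented, centrifuged)   # two-sided paired test
```

Because every sedimented count exceeds its centrifuged pair here, the test returns the smallest possible statistic and a small p-value, analogous to the study's significant differences.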
Wang, Xin Lian; He, Wen; Chen, Jian Hong; Hu, Zhi Hai; Zhao, Li Qin
2015-01-01
To evaluate image quality of female pelvic computed tomography (CT) scans reconstructed with the adaptive statistical iterative reconstruction (ASIR) technique combined with low tube voltage and to explore the feasibility of its clinical application. Ninety-four patients were divided into two groups. The study group used 100 kVp, and images were reconstructed with 30%, 50%, 70%, and 90% ASIR. The control group used 120 kVp, and images were reconstructed with 30% ASIR. The noise index was 15 for the study group and 11 for the control group. The CT values and noise levels of different tissues were measured. The contrast-to-noise ratio (CNR) was calculated. A subjective evaluation was carried out by two experienced radiologists. The CT dose index volume (CTDIvol) was recorded. A 44.7% reduction in CTDIvol was observed in the study group (8.18 ± 3.58 mGy) compared with that in the control group (14.78 ± 6.15 mGy). No significant differences were observed in the tissue noise levels and CNR values between the 70% ASIR group and the control group (p = 0.068-1.000). The subjective scores indicated that visibility of small structures, diagnostic confidence, and the overall image quality score in the 70% ASIR group were the best and were similar to those in the control group (1.87 vs. 1.79, 1.26 vs. 1.28, and 4.53 vs. 4.57; p = 0.122-0.585). No significant difference in diagnostic accuracy was detected between the study group and the control group (42/47 vs. 43/47, p = 1.000). Low tube voltage combined with automatic tube current modulation and 70% ASIR allowed the CT radiation dose to be reduced by 44.7% without losing image quality on female pelvic scans.
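The contrast-to-noise ratio used above is simple to compute once ROI statistics are measured; the sketch below uses one common definition (tissue-vs-background contrast divided by background noise) with invented HU values, since the abstract does not spell out the exact formula used.

```python
def cnr(roi_mean, background_mean, background_sd):
    """Contrast-to-noise ratio: tissue-vs-background contrast
    divided by the background noise.  One common definition; the
    paper's exact formula is not given in the abstract."""
    return abs(roi_mean - background_mean) / background_sd

# Hypothetical HU measurements from a pelvic CT series.
print(cnr(roi_mean=55.0, background_mean=35.0, background_sd=8.0))  # 2.5
```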
Exarchakis, Georgios; Lücke, Jörg
2017-11-01
Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.
Nadja Stumberg
2014-05-01
Full Text Available The vegetation in the forest-tundra ecotone zone is expected to be highly affected by climate change and requires effective monitoring techniques. Airborne laser scanning (ALS) has been proposed as a tool for the detection of small pioneer trees over such vast areas using laser height and intensity data. The main objective of the present study was to assess a possible improvement in the performance of classifying tree and nontree laser echoes from high-density ALS data. The data were collected along a 1000 km long transect stretching from southern to northern Norway. Different geostatistical and statistical measures derived from laser height and intensity values were used to extend and potentially improve simpler models that ignore the spatial context. Generalised linear models (GLM) and support vector machines (SVM) were employed as classification methods. Total accuracies and Cohen's kappa coefficients were calculated and compared to those of simpler models from a previous study. For both classification methods, all models revealed total accuracies similar to the results of the simpler models. Concerning classification performance, however, the comparison of the kappa coefficients indicated a significant improvement for some models using both GLM and SVM, with classification accuracies >94%.
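Cohen's kappa, used above to compare classifier performance, corrects raw agreement for chance agreement. A small self-contained sketch for a binary tree/non-tree confusion matrix (the counts are hypothetical):

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa for a binary confusion matrix:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                        # observed agreement
    p_yes = ((tp + fp) / n) * ((tp + fn) / n) # chance both say "tree"
    p_no = ((fn + tn) / n) * ((fp + tn) / n)  # chance both say "non-tree"
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)

# Hypothetical tree/non-tree echo classification counts.
print(round(cohens_kappa(tp=450, fp=20, fn=30, tn=500), 3))
```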
Maté-González, Miguel Ángel; Aramendi, Julia; Yravedra, José; Blasco, Ruth; Rosell, Jordi; González-Aguilera, Diego; Domínguez-Rodrigo, Manuel
2017-09-01
In the last few years, the study of cut marks on bone surfaces has become fundamental for the interpretation of prehistoric butchery practices. Due to the difficulties in the correct identification of cut marks, many criteria for their description and classification have been suggested. Different techniques, such as three-dimensional digital microscopy (3D DM), laser scanning confocal microscopy (LSCM) and micro-photogrammetry (M-PG), have recently been applied to the study of cut marks. Although the 3D DM and LSCM microscopic techniques are the most commonly used for the 3D identification of cut marks, M-PG has also proved to be an efficient, low-cost method. M-PG is a noninvasive technique that allows the study of the cortical surface without any previous preparation of the samples, and that generates high-resolution models. Despite the current application of microscopic and micro-photogrammetric techniques to taphonomy, their reliability has never been tested. In this paper, we compare 3D DM, LSCM and M-PG in order to assess their resolution and results. In this study, we analyse 26 experimental cut marks generated with a metal knife. The quantitative and qualitative information registered is analysed by means of standard multivariate statistics and geometric morphometrics to assess the similarities and differences obtained with the different methodologies. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
Parker, R Gary
1988-01-01
This book treats the fundamental issues and algorithmic strategies emerging as the core of the discipline of discrete optimization in a comprehensive and rigorous fashion. Following an introductory chapter on computational complexity, the basic algorithmic results for the two major models of polynomial algorithms are introduced--models using matroids and linear programming. Further chapters treat the major non-polynomial algorithms: branch-and-bound and cutting planes. The text concludes with a chapter on heuristic algorithms. Several appendixes are included which review the fundamental ideas o
Foster, Guy M.; Graham, Jennifer L.
2016-04-06
The Kansas River is a primary source of drinking water for about 800,000 people in northeastern Kansas. Source-water supplies are treated by a combination of chemical and physical processes to remove contaminants before distribution. Advanced notification of changing water-quality conditions and cyanobacteria and associated toxin and taste-and-odor compounds provides drinking-water treatment facilities time to develop and implement adequate treatment strategies. The U.S. Geological Survey (USGS), in cooperation with the Kansas Water Office (funded in part through the Kansas State Water Plan Fund), and the City of Lawrence, the City of Topeka, the City of Olathe, and Johnson County Water One, began a study in July 2012 to develop statistical models at two Kansas River sites located upstream from drinking-water intakes. Continuous water-quality monitors have been operated and discrete water-quality samples have been collected on the Kansas River at Wamego (USGS site number 06887500) and De Soto (USGS site number 06892350) since July 2012. Continuous and discrete water-quality data collected during July 2012 through June 2015 were used to develop statistical models for constituents of interest at the Wamego and De Soto sites. Logistic models to continuously estimate the probability of occurrence above selected thresholds were developed for cyanobacteria, microcystin, and geosmin. Linear regression models to continuously estimate constituent concentrations were developed for major ions, dissolved solids, alkalinity, nutrients (nitrogen and phosphorus species), suspended sediment, indicator bacteria (Escherichia coli, fecal coliform, and enterococci), and actinomycetes bacteria. These models will be used to provide real-time estimates of the probability that cyanobacteria and associated compounds exceed thresholds and of the concentrations of other water-quality constituents in the Kansas River. The models documented in this report are useful for characterizing changes
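The logistic models described above turn continuously monitored surrogates (e.g., turbidity, chlorophyll) into a probability that a constituent such as microcystin exceeds a threshold. A minimal sketch of how such a fitted model is evaluated in real time; the coefficients and predictor names are hypothetical placeholders, not the USGS-fitted values.

```python
import math

def exceedance_probability(turbidity, chlorophyll, b0=-6.0, b1=0.04, b2=0.08):
    """Logistic-regression estimate of the probability that a
    constituent (e.g., microcystin) exceeds a selected threshold,
    driven by continuously monitored surrogates.  The coefficients
    here are invented for illustration only."""
    z = b0 + b1 * turbidity + b2 * chlorophyll
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical sensor readings: turbidity in FNU, chlorophyll in ug/L.
p = exceedance_probability(turbidity=50.0, chlorophyll=40.0)
print(f"P(exceed) = {p:.3f}")
```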
Zhao Yongxia; Chang Jin; Zuo Ziwei; Zhang Changda; Zhang Tianle
2014-01-01
Objective: To investigate the optimal weighting of the adaptive statistical iterative reconstruction (ASIR) algorithm and optimized low-dose scanning parameters in thoracic aorta CT angiography (CTA). Methods: Totally 120 patients with a body mass index (BMI) of 19-24 were randomly divided into 6 groups. All patients underwent thoracic aorta CTA with a GE Discovery CT 750 HD scanner (scan range 290-330 mm). The default parameters (100 kV, 240 mAs) were applied in Group 1. Reconstructions were performed with different weightings of ASIR (10%-100% in 10% steps), and the signal-to-noise ratio (S/N) and contrast-to-noise ratio (C/N) of the images were calculated. The image series were evaluated by 2 independent radiologists on a 5-point scale, and the best weighting was determined. The mAs in Groups 2-6 were then set to 210, 180, 150, 120, and 90 at 100 kV. The CTDIvol and DLP of every scan series were recorded and the effective dose (E) was calculated. The S/N and C/N were calculated and the image quality was assessed by two radiologists. Results: The best weighting of ASIR was 60% at 100 kV, 240 mAs. With 60% ASIR and 100 kV, the image quality scores from 240 mAs to 90 mAs were (4.78±0.30)-(3.15±0.23). The CTDIvol and DLP were 12.64-4.41 mGy and 331.81-128.27 mGy·cm, and E was 4.98-1.92 mSv. The image quality among Groups 1-5 was not significantly different (F = 5.365, P > 0.05), but the CTDIvol and DLP of Group 5 were reduced by 37.0% and 36.9%, respectively, compared with Group 1. Conclusions: In thoracic aorta CT angiography, the best weighting of ASIR is 60%, and 120 mAs is the optimal mAs at 100 kV in patients with BMI 19-24. (authors)
Discrete gradients in discrete classical mechanics
Renna, L.
1987-01-01
A simple model of discrete classical mechanics is given where, starting from the continuous Hamilton equations, discrete equations of motion are established together with a proper discrete gradient definition. The conservation laws of the total discrete momentum, angular momentum, and energy are demonstrated
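The idea of a discrete gradient that preserves conservation laws can be illustrated with the standard midpoint discrete-gradient scheme for a harmonic oscillator, a textbook construction rather than Renna's specific model: the implicit update conserves the energy H = (p² + q²)/2 exactly, not just approximately.

```python
def step(q, p, h):
    """One step of the midpoint discrete-gradient scheme for the
    harmonic oscillator H = (p^2 + q^2)/2.  The implicit update
    q' = q + h*(p + p')/2, p' = p - h*(q + q')/2 is solved in
    closed form; the resulting map is a rotation, so it conserves
    H exactly (up to floating-point round-off)."""
    a = h / 2.0
    d = 1.0 + a * a
    q_new = ((1.0 - a * a) * q + 2.0 * a * p) / d
    p_new = ((1.0 - a * a) * p - 2.0 * a * q) / d
    return q_new, p_new

q, p = 1.0, 0.0
E0 = 0.5 * (q * q + p * p)
for _ in range(1000):
    q, p = step(q, p, h=0.1)
print(abs(0.5 * (q * q + p * p) - E0))  # ~0, energy conserved exactly
```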
Antonello Sindona
2015-03-01
Full Text Available The sudden introduction of a local impurity in a Fermi sea leads to an anomalous disturbance of its quantum state that represents a local quench, leaving the system out of equilibrium and giving rise to the Anderson orthogonality catastrophe. The statistics of the work done describe the energy fluctuations produced by the quench, providing an accurate and detailed insight into the fundamental physics of the process. We present here a numerical approach to the non-equilibrium work distribution, supported by applications to phenomena occurring at very diverse energy ranges. One of them is the valence electron shake-up induced by photo-ionization of a core state in a fullerene molecule. The other is the response of an ultra-cold gas of trapped fermions to an embedded two-level atom excited by a fast pulse. Working at low thermal energies, we detect the primary role played by many-particle states of the perturbed system with one or two excited fermions. We validate our approach through the comparison with some photoemission data on fullerene films and previous analytical calculations on harmonically trapped Fermi gases.
Firth, Jean M
1992-01-01
The analysis of signals and systems using transform methods is a very important aspect of the examination of processes and problems in an increasingly wide range of applications. Whereas the initial impetus in the development of methods appropriate for handling discrete sets of data occurred mainly in an electrical engineering context (for example in the design of digital filters), the same techniques are in use in such disciplines as cardiology, optics, speech analysis and management, as well as in other branches of science and engineering. This text is aimed at a readership whose mathematical background includes some acquaintance with complex numbers, linear differential equations, matrix algebra, and series. Specifically, a familiarity with Fourier series (in trigonometric and exponential forms) is assumed, and an exposure to the concept of a continuous integral transform is desirable. Such a background can be expected, for example, on completion of the first year of a science or engineering degree cour...
Exact analysis of discrete data
Hirji, Karim F
2005-01-01
Researchers in fields ranging from biology and medicine to the social sciences, law, and economics regularly encounter variables that are discrete or categorical in nature. While there is no dearth of books on the analysis and interpretation of such data, these generally focus on large sample methods. When sample sizes are not large or the data are otherwise sparse, exact methods--methods not based on asymptotic theory--are more accurate and therefore preferable.This book introduces the statistical theory, analysis methods, and computation techniques for exact analysis of discrete data. After reviewing the relevant discrete distributions, the author develops the exact methods from the ground up in a conceptually integrated manner. The topics covered range from univariate discrete data analysis, a single and several 2 x 2 tables, a single and several 2 x K tables, incidence density and inverse sampling designs, unmatched and matched case -control studies, paired binary and trinomial response models, and Markov...
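For the sparse 2 x 2 tables mentioned above, an exact method such as Fisher's exact test avoids the large-sample chi-square approximation entirely. A minimal SciPy sketch with made-up case-control counts:

```python
from scipy.stats import fisher_exact

# A sparse 2 x 2 table (exposed/unexposed vs case/control) where
# large-sample chi-square approximations are unreliable.
table = [[8, 2],
         [1, 9]]

# Exact two-sided p-value from the hypergeometric distribution.
odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.1f}, p = {p:.4f}")
```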
Quantum chaos: statistical relaxation in discrete spectrum
Chirikov, B.V.
1990-01-01
The controversial phenomenon of quantum chaos is discussed using the quantized standard map, or the kicked rotator, as a simple model. The relation to the classical dynamical chaos is tracked down on the basis of the correspondence principle. Several definitions of the quantum chaos are discussed. 27 refs
Quantum chaos: Statistical relaxation in discrete spectrum
Chirikov, B.V.
1991-01-01
The controversial phenomenon of quantum chaos is discussed using the quantized standard map, or the kicked rotator, as a simple model. The relation to the classical dynamical chaos is tracked down on the basis of the correspondence principle. Various mechanisms of the quantum suppression of classical chaos are considered with an application to the excitation and ionization of Rydberg atoms in a microwave field. Several definitions of the quantum chaos are discussed. (author). 27 refs
Discrete Curvatures and Discrete Minimal Surfaces
Sun, Xiang
2012-06-01
This thesis presents an overview of some approaches to compute Gaussian and mean curvature on discrete surfaces and discusses discrete minimal surfaces. The variety of applications of differential geometry in visualization and shape design leads to great interest in studying discrete surfaces. With the rich smooth surface theory in hand, one would hope that this elegant theory can still be applied to the discrete counterpart. Such a generalization, however, is not always successful. While discrete surfaces have the advantage of being finite dimensional, and thus easier to treat, their geometric properties such as curvatures are not well defined in the classical sense. Furthermore, the powerful calculus tools can hardly be applied. The methods in this thesis, including the angular defect formula, the cotangent formula, parallel meshes, relative geometry, etc., are approaches based on offset meshes or generalized offset meshes. As an important application, we discuss discrete minimal surfaces and discrete Koenigs meshes.
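The angular defect formula mentioned above is simple to state: the integrated discrete Gaussian curvature at a mesh vertex is 2π minus the sum of the triangle angles incident at that vertex. A minimal sketch (a flat fan has zero defect; a pyramid apex has positive defect):

```python
import math

def angular_defect(vertex, ring):
    """Integrated discrete Gaussian curvature at a mesh vertex via
    the angular defect formula: 2*pi minus the sum of the incident
    triangle angles.  `ring` lists the neighbor vertices in cyclic
    order around `vertex` (interior vertex assumed)."""
    angle_sum = 0.0
    k = len(ring)
    for i in range(k):
        a = [x - v for x, v in zip(ring[i], vertex)]
        b = [x - v for x, v in zip(ring[(i + 1) % k], vertex)]
        dot = sum(p * q for p, q in zip(a, b))
        na = math.sqrt(sum(p * p for p in a))
        nb = math.sqrt(sum(p * p for p in b))
        angle_sum += math.acos(dot / (na * nb))
    return 2.0 * math.pi - angle_sum

fan = [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]
print(angular_defect((0, 0, 0), fan))  # flat fan: 0 (up to round-off)
print(angular_defect((0, 0, 1), fan))  # pyramid apex: 2*pi/3
```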
Rohatgi, Vijay K
2003-01-01
Unified treatment of probability and statistics examines and analyzes the relationship between the two fields, exploring inferential issues. Numerous problems, examples, and diagrams--some with solutions--plus clear-cut, highlighted summaries of results. Advanced undergraduate to graduate level. Contents: 1. Introduction. 2. Probability Model. 3. Probability Distributions. 4. Introduction to Statistical Inference. 5. More on Mathematical Expectation. 6. Some Discrete Models. 7. Some Continuous Models. 8. Functions of Random Variables and Random Vectors. 9. Large-Sample Theory. 10. General Meth
Miranda, A.B. de; Delmas, A; Sacadura, J F [Institut National des Sciences Appliquees (INSA), 69 - Villeurbanne (France)
1997-12-31
A formulation based on the use of the discrete ordinate method applied to the integral form of the radiant heat transfer equation is proposed for non-grey gases. The correlations between transmittances are neglected and no explicit wall reflection is considered. The configuration analyzed consists of a flat layer of a non-isothermal steam-nitrogen mixture. Cavity walls are grey with diffuse reflection and emission. A narrow-band statistical model is used to represent the radiative properties of the gas. The distribution of the radiative source term inside the cavity is calculated along two temperature profiles at a uniform steam concentration. Results obtained using this simplified approach are in good agreement with those found in the literature for the same temperature and concentration distributions. This preliminary study seems to indicate that the algorithm based on the integration of radiant heat transfer along the luminance path is less sensitive to de-correlation effects than formulations based on the differential form of the radiant heat transfer equation. Thus, a more systematic study of the influence of neglecting the correlations in the integral approach is analyzed in this work. (J.S.) 16 refs.
Discrete Morse functions for graph configuration spaces
Sawicki, A
2012-01-01
We present an alternative application of discrete Morse theory for two-particle graph configuration spaces. In contrast to previous constructions, which are based on discrete Morse vector fields, our approach is through Morse functions, which have a nice physical interpretation as two-body potentials constructed from one-body potentials. We also give a brief introduction to discrete Morse theory. Our motivation comes from the problem of quantum statistics for particles on networks, for which generalized versions of anyon statistics can appear. (paper)
Application of an efficient Bayesian discretization method to biomedical data
Gopalakrishnan Vanathi
2011-07-01
Full Text Available Abstract Background Several data mining methods require data that are discrete, and other methods often perform better with discrete data. We introduce an efficient Bayesian discretization (EBD) method for optimal discretization of variables that runs efficiently on high-dimensional biomedical datasets. The EBD method consists of two components, namely, a Bayesian score to evaluate discretizations and a dynamic programming search procedure to efficiently search the space of possible discretizations. We compared the performance of EBD to Fayyad and Irani's (FI) discretization method, which is commonly used for discretization. Results On 24 biomedical datasets obtained from high-throughput transcriptomic and proteomic studies, the classification performances of the C4.5 classifier and the naïve Bayes classifier were statistically significantly better when the predictor variables were discretized using EBD over FI. EBD was statistically significantly more stable to the variability of the datasets than FI. However, EBD was less robust, though not statistically significantly so, than FI and produced slightly more complex discretizations than FI. Conclusions On a range of biomedical datasets, a Bayesian discretization method (EBD) yielded better classification performance and stability but was less robust than the widely used FI discretization method. The EBD discretization method is easy to implement, permits the incorporation of prior knowledge and belief, and is sufficiently fast for application to high-dimensional data.
Mimetic discretization methods
Castillo, Jose E
2013-01-01
To help solve physical and engineering problems, mimetic or compatible algebraic discretization methods employ discrete constructs to mimic the continuous identities and theorems found in vector calculus. Mimetic Discretization Methods focuses on the recent mimetic discretization method co-developed by the first author. Based on the Castillo-Grone operators, this simple mimetic discretization method is invariably valid for spatial dimensions no greater than three. The book also presents a numerical method for obtaining corresponding discrete operators that mimic the continuum differential and
Time Discretization Techniques
Gottlieb, S.; Ketcheson, David I.
2016-01-01
The time discretization of hyperbolic partial differential equations is typically the evolution of a system of ordinary differential equations obtained by spatial discretization of the original problem. Methods for this time evolution include
Auvinen, Jussi; Bernhard, Jonah E.; Bass, Steffen A.; Karpenko, Iurii
2018-04-01
We determine the probability distributions of the shear viscosity over the entropy density ratio η/s in the quark-gluon plasma formed in Au + Au collisions at √sNN = 19.6, 39, and 62.4 GeV, using Bayesian inference and Gaussian process emulators for a model-to-data statistical analysis that probes the full input parameter space of a transport + viscous hydrodynamics hybrid model. We find the most likely value of η/s to be larger at smaller √sNN, although the uncertainties still allow for a constant value between 0.10 and 0.15 for the investigated collision energy range.
Handbook of Spatial Statistics
Gelfand, Alan E
2010-01-01
Offers an introduction detailing the evolution of the field of spatial statistics. This title focuses on the three main branches of spatial statistics: continuous spatial variation (point referenced data); discrete spatial variation, including lattice and areal unit data; and, spatial point patterns.
Qiao, Xue; Lin, Xiong-hao; Ji, Shuai; Zhang, Zheng-xiang; Bo, Tao; Guo, De-an; Ye, Min
2016-01-05
To fully understand the chemical diversity of an herbal medicine is challenging. In this work, we describe a new approach to globally profile and discover novel compounds from an herbal extract using multiple neutral loss/precursor ion scanning combined with substructure recognition and statistical analysis. Turmeric (the rhizomes of Curcuma longa L.) was used as an example. This approach consists of three steps: (i) multiple neutral loss/precursor ion scanning to obtain substructure information; (ii) targeted identification of new compounds by extracted ion current and substructure recognition; and (iii) untargeted identification using total ion current and multivariate statistical analysis to discover novel structures. Using this approach, 846 terpecurcumins (terpene-conjugated curcuminoids) were discovered from turmeric, including a number of potentially novel compounds. Furthermore, two unprecedented compounds (terpecurcumins X and Y) were purified, and their structures were identified by NMR spectroscopy. This study extended the application of mass spectrometry to global profiling of natural products in herbal medicines and could help chemists to rapidly discover novel compounds from a complex matrix.
Alruwaili, A R; Pannek, K; Coulthard, A; Henderson, R; Kurniawan, N D; McCombe, P
2018-02-01
This study aims to compare the cortical and subcortical deep gray matter (GM) and white matter (WM) of ALS subjects and controls, and to compare ALS subjects with (ALScog) and without (ALSnon-cog) cognitive impairment. The study was performed in 30 ALS subjects and 19 healthy controls. Structural T1- and diffusion-weighted MRI data were analyzed using voxel-based morphometry (VBM) and tract-based spatial statistics (TBSS). All DTI measures and GM volume differed significantly between ALS subjects and controls. Compared to controls, greater DTI changes were present in ALScog than in ALSnon-cog subjects. GM results showed a reduction in caudate nucleus volume in ALScog subjects compared to ALSnon-cog subjects, and comparing all ALS subjects with controls, there were changes on the right side and in a small region of the left middle frontal gyrus. This combined DTI and VBM study showed changes in motor and extra-motor regions. The DTI changes were more extensive in ALScog than in ALSnon-cog subjects. It is likely that the inclusion of ALS subjects with cognitive impairment in previous studies resulted in extra-motor WM abnormalities being reported in ALS subjects. Copyright © 2017. Published by Elsevier Masson SAS.
Kissick, David J.; Muir, Ryan D.; Sullivan, Shane Z.; Oglesbee, Robert A.; Simpson, Garth J.
2013-02-01
Despite the ubiquitous use of multi-photon and confocal microscopy measurements in biology, the core techniques typically suffer from fundamental compromises between signal to noise (S/N) and linear dynamic range (LDR). In this study, direct synchronous digitization of voltage transients coupled with statistical analysis is shown to allow S/N approaching the theoretical maximum throughout an LDR spanning more than 8 decades, limited only by the dark counts of the detector on the low end and by the intrinsic nonlinearities of the photomultiplier tube (PMT) detector on the high end. Synchronous digitization of each voltage transient represents a fundamental departure from established methods in confocal/multi-photon imaging, which are currently based on either photon counting or signal averaging. High information-density data acquisition (up to 3.2 GB/s of raw data) enables the smooth transition between the two modalities on a pixel-by-pixel basis and the ultimate writing of much smaller files (few kB/s). Modeling of the PMT response allows extraction of key sensor parameters from the histogram of voltage peak-heights. Applications in second harmonic generation (SHG) microscopy are described demonstrating S/N approaching the shot-noise limit of the detector over large dynamic ranges.
Llope, W. J.; STAR Collaboration
2013-10-01
Specific products of the statistical moments of the multiplicity distributions of identified particles can be directly compared to susceptibility ratios obtained from lattice QCD calculations. They may also diverge for nuclear systems formed close to a possible QCD critical point due to the phenomenon of critical opalescence. Of particular interest are the moments products for net-protons, net-kaons, and net-charge, as these are considered proxies for conserved quantum numbers. The moments products have been measured by the STAR experiment for Au+Au collisions at seven beam energies ranging from 7.7 to 200 GeV. In this presentation, the experimental results are compared to data-based calculations in which the intra-event correlations of the numbers of positive and negative particles are broken by construction. The importance of intra-event correlations to the moments products values for net-protons, net-kaons, and net-charge can thus be evaluated. Work supported by the U.S. Dept of Energy under grant DE-PS02-09ER09.
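The moments products mentioned in this abstract are cumulant ratios of event-by-event multiplicity distributions. As a sketch only (using a synthetic Skellam-distributed net-proton sample, not STAR data), the quantities Sσ = C3/C2 and κσ² = C4/C2 can be computed as:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic event sample: net-proton number per event, modeled as the
# difference of two Poisson counts (a Skellam distribution, a common
# statistical baseline for conserved-charge fluctuations).
n_events = 1_000_000
net_p = rng.poisson(8.0, n_events) - rng.poisson(5.0, n_events)

def cumulants(x):
    """First four cumulants C1..C4 of a sample."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    d = x - m
    c2 = np.mean(d**2)
    c3 = np.mean(d**3)
    c4 = np.mean(d**4) - 3.0 * c2**2
    return m, c2, c3, c4

c1, c2, c3, c4 = cumulants(net_p)
s_sigma = c3 / c2        # S*sigma      = C3/C2, comparable to chi3/chi2
kappa_sigma2 = c4 / c2   # kappa*sigma^2 = C4/C2, comparable to chi4/chi2
print(round(s_sigma, 3), round(kappa_sigma2, 3))
```

For a Skellam(μ₁, μ₂) baseline, C3 = μ₁ − μ₂ and C2 = C4 = μ₁ + μ₂, so κσ² is exactly 1; deviations from that baseline are what critical-point searches look for.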
Testing Preference Axioms in Discrete Choice experiments
Hougaard, Jens Leth; Østerdal, Lars Peter; Tjur, Tue
Recent studies have tested the preference axioms of completeness and transitivity, and have detected other preference phenomena such as instability, learning and tiredness effects, ordering effects and dominance, in stated preference discrete choice experiments. However, it has not been explicitly...... of the preference axioms and other preference phenomena in the context of stated preference discrete choice experiments, and examine whether or how these can be subject to meaningful (statistical) tests...
Knutson, N; Schmidt, M [Rhode Island Hospital, Providence, RI (United States); University of Rhode Island, Kingston, RI (United States); University of Massachusetts Lowell, Lowell, MA (United States); Nguyen, N [Rhode Island Hospital, Providence, RI (United States); University of Massachusetts Lowell, Lowell, MA (United States); Belley, M [Rhode Island Hospital, Providence, RI (United States); University of Rhode Island, Kingston, RI (United States); Price, M [Rhode Island Hospital, Providence, RI (United States); University of Rhode Island, Kingston, RI (United States); Alpert Medical School of Brown University, Providence, RI (United States)
2016-06-15
Purpose: To develop a method to exploit real-time dynamic machine and couch parameter control during linear accelerator (LINAC) beam delivery to facilitate efficient performance of TG-142-suggested annual LINAC QA tests. Methods: Varian’s TrueBeam Developer Mode (Varian Medical Systems, Palo Alto, CA) facilitates control of Varian’s TrueBeam LINAC via instructions provided in Extensible Markup Language (XML) files. This allows machine and couch parameters to be varied dynamically, in real-time, during beam delivery. Custom XML files were created to allow for the collection of (1) continuous Tissue Maximum Ratios (TMRs), (2) beam profiles, and (3) continuous output factors using a 1D-scanning tank. TMRs were acquired by orienting an ionization chamber (IC) at isocenter (depth=25cm) and synchronizing a depth scan towards the water surface while lowering the couch at 1mm/s. For beam profiles, the couch was driven laterally and longitudinally while logging IC electrometer readings. Output factors (OFs) were collected by continually varying field sizes (4×4 to 30×30 cm²) at a constant speed of 6.66 mm/s. To validate measurements, comparisons were made to data collected using traditional methods (e.g. 1D or 3D tank). Results: All data collected using the proposed methods agreed with traditionally collected data (TMRs within 1%, OFs within 0.5%, and beam profile agreement within 1%/1mm) while taking less time to collect (approximately 1/10 of the time) and with a finer sample resolution. Conclusion: TrueBeam Developer Mode facilitates collection of continuous data with the same accuracy as traditionally collected data, with a finer resolution, in less time. Results demonstrate an order of magnitude increase in sample resolution and an order of magnitude reduction in collection time compared to traditional acquisition methods (e.g. 3D scanning tank). We are currently extending this approach to perform other TG-142 tasks.
Modeling discrete time-to-event data
Tutz, Gerhard
2016-01-01
This book focuses on statistical methods for the analysis of discrete failure times. Failure time analysis is one of the most important fields in statistical research, with applications affecting a wide range of disciplines, in particular, demography, econometrics, epidemiology and clinical research. Although there are a large variety of statistical methods for failure time analysis, many techniques are designed for failure times that are measured on a continuous scale. In empirical studies, however, failure times are often discrete, either because they have been measured in intervals (e.g., quarterly or yearly) or because they have been rounded or grouped. The book covers well-established methods like life-table analysis and discrete hazard regression models, but also introduces state-of-the art techniques for model evaluation, nonparametric estimation and variable selection. Throughout, the methods are illustrated by real life applications, and relationships to survival analysis in continuous time are expla...
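The life-table analysis this abstract mentions rests on the discrete hazard h(t) = P(failure in interval t | survival to t), with survival given by the product of (1 − h). A minimal sketch with hypothetical interval data (not taken from the book):

```python
# Life-table estimate of a discrete hazard and survival function.
# Hypothetical data: at the start of interval t, n_t subjects are at risk
# and d_t of them fail during the interval (e.g. quarterly follow-up).
at_risk  = [100, 80, 55, 30, 12]
failures = [ 10,  8, 11,  6,  3]

survival = []
s = 1.0
for n_t, d_t in zip(at_risk, failures):
    h_t = d_t / n_t       # discrete hazard: P(fail in t | survived to t)
    s *= (1.0 - h_t)      # survival: running product of (1 - hazard)
    survival.append(s)

print([round(v, 3) for v in survival])
```

Discrete hazard regression models extend exactly this quantity by letting h(t) depend on covariates, typically through a logit or complementary log-log link.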
Greene, Gretchen; Greene, Robert; Markvorsen, Steen
1998-01-01
A strategy for the reproduction (by milling in marble using a vertically moving spherical tool) of a given height function is found using a level curve analysis of the largest principal (upward pointing) curvature of the corresponding surface. Data fitting is applied to the discrete data set whic...
Baecklund transformations for discrete Painleve equations: Discrete PII-PV
Sakka, A.; Mugan, U.
2006-01-01
Transformation properties of discrete Painleve equations are investigated by using an algorithmic method. This method yields explicit transformations which relate the solutions of the discrete Painleve equations, discrete PII-PV, with different values of the parameters. The particular solutions of the discrete Painleve equations which are expressible in terms of discrete analogues of the classical special functions can also be obtained from these transformations
Discrete Gabor transform and discrete Zak transform
Bastiaans, M.J.; Namazi, N.M.; Matthews, K.
1996-01-01
Gabor's expansion of a discrete-time signal into a set of shifted and modulated versions of an elementary signal or synthesis window is introduced, along with the inverse operation, i.e. the Gabor transform, which uses an analysis window that is related to the synthesis window and with the help of
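The analysis side of such a discrete Gabor expansion takes inner products of the signal with time-shifted, modulated copies of a window. A minimal numerical sketch (window shape and lattice parameters chosen purely for illustration, not from the paper):

```python
import numpy as np

# Discrete Gabor analysis: coefficients <x, g_{m,k}> where g_{m,k} is the
# window g shifted by k*a samples and modulated to the m-th frequency.
L, a, M = 64, 8, 8                # signal length, time step, modulation count
n = np.arange(L)
g = np.exp(-0.5 * ((n - L // 2) / 6.0) ** 2)   # Gaussian window (illustrative)
g /= np.linalg.norm(g)

x = np.cos(2 * np.pi * 10 * n / L)             # test signal near frequency bin 10

coeffs = np.zeros((M, L // a), dtype=complex)
for k in range(L // a):                         # time shifts
    for m in range(M):                          # modulations
        g_mk = np.roll(g, k * a) * np.exp(2j * np.pi * m * n / M)
        coeffs[m, k] = np.vdot(g_mk, x)         # inner product <x, g_{m,k}>

print(coeffs.shape)
```

The inverse operation (the Gabor transform proper, via a dual analysis window) recovers the expansion coefficients that reproduce x; fast implementations exploit FFTs rather than this direct double loop.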
Discrete Mathematics Re "Tooled."
Grassl, Richard M.; Mingus, Tabitha T. Y.
1999-01-01
Indicates the importance of teaching discrete mathematics. Describes how the use of technology can enhance the teaching and learning of discrete mathematics. Explorations using Excel, Derive, and the TI-92 proved how preservice and inservice teachers experienced a new dimension in problem solving and discovery. (ASK)
Discrete modeling considerations in multiphase fluid dynamics
Ransom, V.H.; Ramshaw, J.D.
1988-01-01
The modeling of multiphase flows plays a fundamental role in light water reactor safety. The main ingredients in our discrete modeling Weltanschauung are the following considerations: (1) Any physical model must be cast into discrete form for a digital computer. (2) The usual approach of formulating models in differential form and then discretizing them is potentially hazardous. It may be preferable to formulate the model in discrete terms from the outset. (3) Computer time and storage constraints limit the resolution that can be employed in practical calculations. These limits effectively define the physical phenomena, length scales, and time scales which cannot be directly represented in the calculation and therefore must be modeled. This information should be injected into the model formulation process at an early stage. (4) Practical resolution limits are generally so coarse that traditional convergence and truncation-error analyses become irrelevant. (5) A discrete model constitutes a reduced description of a physical system, from which fine-scale details are eliminated. This elimination creates a statistical closure problem. Methods from statistical physics may therefore be useful in the formulation of discrete models. In the present paper we elaborate on these themes and illustrate them with simple examples. 48 refs
Homogenization of discrete media
Pradel, F.; Sab, K.
1998-01-01
Materials such as granular media and beam assemblies are easily seen as discrete media. They look like geometrical points linked together by energetic expressions. Our purpose is to extend the discrete kinematics to that of an equivalent continuous material. First we explain how we build the localisation tool for periodic materials according to the assumed continuum medium type (classical Cauchy, or Cosserat media). Once the bridge is built between discrete and continuum media, we exhibit its application to two bidimensional beam assembly structures: the honeycomb and a structurally reinforced variation. The new behavior is then applied to the simple plane shear problem in a Cosserat continuum and compared with the real discrete solution. By means of this example, we establish the agreement of our new model with real structures. The exposed method has a longer range than mechanics and can be applied to any discrete problem, such as electromagnetism, in which the relationship between geometrical points can be summed up by an energetic function. (orig.)
Aydin, Alhun; Sisman, Altug
2016-01-01
By considering the quantum-mechanically minimum allowable energy interval, we exactly count the number of states (NOS) and introduce a discrete density of states (DOS) concept for a particle in a box in various dimensions. Expressions for bounded and unbounded continua are analytically recovered from the discrete ones. Even though substantial fluctuations prevail in the discrete DOS, they are almost completely flattened out after a summation or integration operation. It is seen that the relative errors of the analytical expressions for bounded/unbounded continua rapidly decrease for high NOS values (weak confinement or high energy conditions), while the proposed analytical expressions based on Weyl's conjecture always preserve their lower error characteristic. - Highlights: • Discrete density of states considering minimum energy difference is proposed. • Analytical DOS and NOS formulas based on Weyl conjecture are given. • Discrete DOS and NOS functions are examined for various dimensions. • Relative errors of analytical formulas are much better than the conventional ones.
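The exact state counting described above can be illustrated in the simplest case. The sketch below assumes a 1-D particle in a box with energies expressed in units of the ground-state energy (E_n = n²), and compares the exact discrete NOS with its leading continuum approximation √E; it is an illustration of the idea, not the paper's formulas.

```python
import math

def nos_discrete(E):
    """Exact count of states with E_n = n^2 <= E, for n = 1, 2, ..."""
    return math.isqrt(int(E))

def nos_continuum(E):
    """Leading continuum (Weyl-type) approximation to the count."""
    return math.sqrt(E)

# Relative error of the continuum expression shrinks as more states fit
# below E (the weak-confinement / high-energy regime).
for E in [10, 123, 9_876]:
    exact, approx = nos_discrete(E), nos_continuum(E)
    print(E, exact, round(abs(exact - approx) / exact, 4))
```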
Okuyama, Yoshifumi
2014-01-01
Discrete Control Systems establishes a basis for the analysis and design of discretized/quantized control systemsfor continuous physical systems. Beginning with the necessary mathematical foundations and system-model descriptions, the text moves on to derive a robust stability condition. To keep a practical perspective on the uncertain physical systems considered, most of the methods treated are carried out in the frequency domain. As part of the design procedure, modified Nyquist–Hall and Nichols diagrams are presented and discretized proportional–integral–derivative control schemes are reconsidered. Schemes for model-reference feedback and discrete-type observers are proposed. Although single-loop feedback systems form the core of the text, some consideration is given to multiple loops and nonlinearities. The robust control performance and stability of interval systems (with multiple uncertainties) are outlined. Finally, the monograph describes the relationship between feedback-control and discrete ev...
Discrete repulsive oscillator wavefunctions
Munoz, Carlos A; Rueda-Paz, Juvenal; Wolf, Kurt Bernardo
2009-01-01
For the study of infinite discrete systems on phase space, the three-dimensional Lorentz algebra and group, so(2,1) and SO(2,1), provide a discrete model of the repulsive oscillator. Its eigenfunctions are found in the principal irreducible representation series, where the compact generator, which we identify with the position operator, has the infinite discrete spectrum of the integers Z, while the spectrum of energies is a double continuum. The right- and left-moving wavefunctions are given by hypergeometric functions that form a Dirac basis for ℓ²(Z). Under contraction, the discrete system limits to the well-known quantum repulsive oscillator. Numerical computations of finite approximations raise further questions on the use of Dirac bases for infinite discrete systems.
Morris, J; Johnson, S
2007-12-03
The Distinct Element Method (also frequently referred to as the Discrete Element Method) (DEM) is a Lagrangian numerical technique where the computational domain consists of discrete solid elements which interact via compliant contacts. This can be contrasted with Finite Element Methods where the computational domain is assumed to represent a continuum (although many modern implementations of the FEM can accommodate some Distinct Element capabilities). Often the terms Discrete Element Method and Distinct Element Method are used interchangeably in the literature, although Cundall and Hart (1992) suggested that Discrete Element Methods should be a more inclusive term covering Distinct Element Methods, Displacement Discontinuity Analysis and Modal Methods. In this work, DEM specifically refers to the Distinct Element Method, where the discrete elements interact via compliant contacts, in contrast with Displacement Discontinuity Analysis where the contacts are rigid and all compliance is taken up by the adjacent intact material.
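The compliant contacts that distinguish the Distinct Element Method can be sketched in one dimension. The toy below (all stiffness, damping, and time-step values invented for illustration) advances two unit-mass disks with a linear spring-dashpot contact law, the simplest example of the compliant-contact interaction described above:

```python
# Minimal 1-D DEM-style sketch: two unit-mass disks approach head-on,
# overlap, and rebound under a linear spring-dashpot contact law.
k, c, r = 1000.0, 2.0, 0.5       # contact stiffness, damping, disk radius
x = [0.0, 2.0]                   # positions
v = [1.0, -1.0]                  # velocities (head-on approach)
dt = 1e-4                        # time step, small vs. contact period

for _ in range(20_000):
    gap = (x[1] - x[0]) - 2 * r  # surface separation; negative = overlap
    f = 0.0
    if gap < 0:                  # compliant contact while overlapping
        f = -k * gap - c * (v[1] - v[0])
    v[0] += -f * dt              # equal and opposite contact forces
    v[1] += +f * dt
    x[0] += v[0] * dt            # symplectic Euler update
    x[1] += v[1] * dt

print(round(v[0], 3), round(v[1], 3))
```

The dashpot dissipates energy during contact, so the disks separate with less than their approach speed, while momentum is conserved exactly; production DEM codes add neighbor search, rotations, and frictional tangential contacts on top of this kernel.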
M. Zukowski (Marcin); P.A. Boncz (Peter); M.L. Kersten (Martin)
2004-01-01
Data mining, information retrieval and other application areas exhibit a query load with multiple concurrent queries touching a large fraction of a relation. This leads to individual query plans based on a table scan or large index scan. The implementation of this access path in most
Introduction to Bayesian statistics
Bolstad, William M
2017-01-01
There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts only present frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly-added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, binomial proportion, Poisson, normal mean, and simple linear regression. In addition, newly-developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for Multivariate Normal mean vector; Bayesian inference for Multiple Linear Regression Model; and Computati...
Understanding advanced statistical methods
Westfall, Peter
2013-01-01
Introduction: Probability, Statistics, and Science; Reality, Nature, Science, and Models; Statistical Processes: Nature, Design and Measurement, and Data; Models; Deterministic Models; Variability; Parameters; Purely Probabilistic Statistical Models; Statistical Models with Both Deterministic and Probabilistic Components; Statistical Inference; Good and Bad Models; Uses of Probability Models; Random Variables and Their Probability Distributions; Introduction; Types of Random Variables: Nominal, Ordinal, and Continuous; Discrete Probability Distribution Functions; Continuous Probability Distribution Functions; Some Calculus: Derivatives and Least Squares; More Calculus: Integrals and Cumulative Distribution Functions; Probability Calculation and Simulation; Introduction; Analytic Calculations, Discrete and Continuous Cases; Simulation-Based Approximation; Generating Random Numbers; Identifying Distributions; Introduction; Identifying Distributions from Theory Alone; Using Data: Estimating Distributions via the Histogram; Quantiles: Theoretical and Data-Based Estimate...
Shapiro, B.
1986-01-01
Radionuclide scanning is the production of images of normal and diseased tissues and organs by means of the gamma-ray emissions from radiopharmaceutical agents having specific distributions in the body. The gamma rays are detected at the body surface by a variety of instruments that convert the invisible rays into visible patterns representing the distribution of the radionuclide in the body. The patterns, or images, obtained can be interpreted to provide or to aid diagnoses, to follow the course of disease, and to monitor the management of various illnesses. Scanning is a sensitive technique, but its specificity may be low when interpreted alone. To be used most successfully, radionuclide scanning must be interpreted in conjunction with other techniques, such as bone radiographs with bone scans, chest radiographs with lung scans, and ultrasonic studies with thyroid scans. Interpretation is also enhanced by providing pertinent clinical information because the distribution of radiopharmaceutical agents can be altered by drugs and by various procedures besides physiologic and pathologic conditions. Discussion of the patient with the radionuclide scanning specialist prior to the study and review of the results with that specialist after the study are beneficial
Finite Discrete Gabor Analysis
Søndergaard, Peter Lempel
2007-01-01
frequency bands at certain times. Gabor theory can be formulated for both functions on the real line and for discrete signals of finite length. The two theories are largely the same because many aspects come from the same underlying theory of locally compact Abelian groups. The two types of Gabor systems...... can also be related by sampling and periodization. This thesis extends on this theory by showing new results for window construction. It also provides a discussion of the problems associated to discrete Gabor bases. The sampling and periodization connection is handy because it allows Gabor systems...... on the real line to be well approximated by finite and discrete Gabor frames. This method of approximation is especially attractive because efficient numerical methods exists for doing computations with finite, discrete Gabor systems. This thesis presents new algorithms for the efficient computation of finite...
Adaptive Discrete Hypergraph Matching.
Yan, Junchi; Li, Changsheng; Li, Yin; Cao, Guitao
2018-02-01
This paper addresses the problem of hypergraph matching using higher-order affinity information. We propose a solver that iteratively updates the solution in the discrete domain by linear assignment approximation. The proposed method is guaranteed to converge to a stationary discrete solution and avoids the annealing procedure and ad-hoc post binarization step that are required in several previous methods. Specifically, we start with a simple iterative discrete gradient assignment solver. This solver can be trapped in an m-circle sequence under moderate conditions, where m is the order of the graph matching problem. We then devise an adaptive relaxation mechanism to jump out of this degenerate case and show that the resulting new path will converge to a fixed solution in the discrete domain. The proposed method is tested on both synthetic and real-world benchmarks. The experimental results corroborate the efficacy of our method.
Goodrich, Christopher
2015-01-01
This text provides the first comprehensive treatment of the discrete fractional calculus. Experienced researchers will find the text useful as a reference for discrete fractional calculus and topics of current interest. Students who are interested in learning about discrete fractional calculus will find this text to provide a useful starting point. Several exercises are offered at the end of each chapter and select answers have been provided at the end of the book. The presentation of the content is designed to give ample flexibility for potential use in a myriad of courses and for independent study. The novel approach taken by the authors includes a simultaneous treatment of the fractional- and integer-order difference calculus (on a variety of time scales, including both the usual forward and backwards difference operators). The reader will acquire a solid foundation in the classical topics of the discrete calculus while being introduced to exciting recent developments, bringing them to the frontiers of the...
Williams, Ruth M
2006-01-01
A review is given of a number of approaches to discrete quantum gravity, with a restriction to those likely to be relevant in four dimensions. This paper is dedicated to Rafael Sorkin on the occasion of his sixtieth birthday
Understanding Statistics - Cancer Statistics
Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.
Discrete computational structures
Korfhage, Robert R
1974-01-01
Discrete Computational Structures describes discrete mathematical concepts that are important to computing, covering necessary mathematical fundamentals, computer representation of sets, graph theory, storage minimization, and bandwidth. The book also explains the conceptual framework (Gorn trees, searching, subroutines) and directed graphs (flowcharts, critical paths, information networks). The text discusses algebra, concentrating on semigroups, groups, lattices, and propositional calculus, including a new tabular method of Boolean function minimization. The text emphasize...
Mouse manipulation through single-switch scanning.
Blackstien-Adler, Susie; Shein, Fraser; Quintal, Janet; Birch, Shae; Weiss, Patrice L Tamar
2004-01-01
Given the current extensive reliance on the graphical user interface, independent access to computer software requires that users be able to manipulate a pointing device of some type (e.g., mouse, trackball) or be able to emulate a mouse by some other means (e.g., scanning). The purpose of the present study was to identify one or more optimal single-switch scanning mouse emulation strategies. Four alternative scanning strategies (continuous Cartesian, discrete Cartesian, rotational, and hybrid quadrant/continuous Cartesian) were selected for testing based on current market availability as well as on theoretical considerations of their potential speed and accuracy. Each strategy was evaluated using a repeated measures study design by means of a test program that permitted mouse emulation via any one of four scanning strategies in a motivating environment; response speed and accuracy could be automatically recorded and considered in view of the motor, cognitive, and perceptual demands of each scanning strategy. Ten individuals whose disabilities required them to operate a computer via single-switch scanning participated in the study. Results indicated that Cartesian scanning was the preferred and most effective scanning strategy. There were no significant differences between results from the Continuous Cartesian and Discrete Cartesian scanning strategies. Rotational scanning was quite slow with respect to the other strategies, although it was equally accurate. Hybrid Quadrant scanning improved access time but at the cost of fewer correct selections. These results demonstrated the importance of testing and comparing alternate single-switch scanning strategies.
Homogenization of discrete media
Pradel, F.; Sab, K. [CERAM-ENPC, Marne-la-Vallee (France)
1998-11-01
Material such as granular media, beam assembly are easily seen as discrete media. They look like geometrical points linked together thanks to energetic expressions. Our purpose is to extend discrete kinematics to the one of an equivalent continuous material. First we explain how we build the localisation tool for periodic materials according to estimated continuum medium type (classical Cauchy, and Cosserat media). Once the bridge built between discrete and continuum media, we exhibit its application over two bidimensional beam assembly structures : the honey comb and a structural reinforced variation. The new behavior is then applied for the simple plan shear problem in a Cosserat continuum and compared with the real discrete solution. By the mean of this example, we establish the agreement of our new model with real structures. The exposed method has a longer range than mechanics and can be applied to every discrete problems like electromagnetism in which relationship between geometrical points can be summed up by an energetic function. (orig.) 7 refs.
1960-01-01
Before the invention of wire chambers, particle tracks were analysed on scanning tables like this one. Today, the process is electronic and much faster. Bubble chamber film was used for this analysis of the particle tracks.
DISCRETE MATHEMATICS/NUMBER THEORY
Mrs. Manju Devi
2017-01-01
Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. In contrast to real numbers that have the property of varying "smoothly", the objects studied in discrete mathematics such as integers, graphs, and statements do not vary smoothly in this way, but have distinct, separated values. Discrete mathematics therefore excludes topics in "continuous mathematics" such as calculus and analysis. Discrete objects can often be enumerated by ...
Prateek Sharma
2015-04-01
Simulation can be regarded as the emulation of the behavior of a real-world system over an interval of time. The process of simulation relies upon the generation of the history of a system and then analyzing that history to predict the outcome and improve the working of real systems. Simulations can be of various kinds, but the topic of interest here is one of the most important: discrete-event simulation, which models the system as a discrete sequence of events in time. This paper aims to introduce discrete-event simulation and to analyze how it benefits real-world systems.
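The "discrete sequence of events in time" in the abstract above is conventionally implemented with a future-event list kept in a priority queue: the clock jumps from one scheduled event to the next. A minimal kernel in that spirit (a toy single-server queue; the arrival times and 4.0-unit service time are invented for illustration):

```python
import heapq

# Toy discrete-event simulation: events are (time, action) pairs kept in a
# priority queue (the future event list); the simulation clock jumps from
# one event directly to the next.
events = []
log = []

def schedule(t, name):
    heapq.heappush(events, (t, name))

# Seed the history: three customer arrivals at a single server whose
# service always takes 4.0 time units.
for t in [1.0, 2.5, 9.0]:
    schedule(t, "arrival")

busy_until = 0.0
while events:
    t, name = heapq.heappop(events)      # advance the clock to the next event
    if name == "arrival":
        start = max(t, busy_until)       # wait if the server is busy
        busy_until = start + 4.0
        schedule(busy_until, "departure")
    log.append((t, name))

print(log)
```

Running the loop produces the full event history (arrivals interleaved with the departures they trigger), which is exactly the "history of a system" the abstract says is then analyzed to predict outcomes.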
Discrete systems and integrability
Hietarinta, J; Nijhoff, F W
2016-01-01
This first introductory text to discrete integrable systems introduces key notions of integrability from the vantage point of discrete systems, also making connections with the continuous theory where relevant. While treating the material at an elementary level, the book also highlights many recent developments. Topics include: Darboux and Bäcklund transformations; difference equations and special functions; multidimensional consistency of integrable lattice equations; associated linear problems (Lax pairs); connections with Padé approximants and convergence algorithms; singularities and geometry; Hirota's bilinear formalism for lattices; intriguing properties of discrete Painlevé equations; and the novel theory of Lagrangian multiforms. The book builds the material in an organic way, emphasizing interconnections between the various approaches, while the exposition is mostly done through explicit computations on key examples. Written by respected experts in the field, the numerous exercises and the thoroug...
Introductory discrete mathematics
Balakrishnan, V K
2010-01-01
This concise text offers an introduction to discrete mathematics for undergraduate students in computer science and mathematics. Mathematics educators consider it vital that their students be exposed to a course in discrete methods that introduces them to combinatorial mathematics and to algebraic and logical structures focusing on the interplay between computer science and mathematics. The present volume emphasizes combinatorics, graph theory with applications to some standard network optimization problems, and algorithms to solve these problems. Chapters 0-3 cover fundamental operations involv
Natali, S.
1984-01-01
This chapter reports on the scanning of 1000 holograms taken in HOBC at CERN. Each hologram is triggered by an interaction in the chamber, the primary particles being pions at 340 GeV/c. The aim of the experiment is the study of charm production. The holograms, recorded on 50 mm film with the ''in line'' technique, can be analyzed by shining a parallel expanded laser beam through the film, obtaining immediately above it the real image of the chamber which can then be scanned and measured with a technique half way between emulsions and bubble chambers. The results indicate that holograms can be analyzed as quickly and reliably as in other visual techniques and that to them is open the same order of magnitude of large scale experiments
Hetherington, V.J.
1989-01-01
Oftentimes, in managing podiatric complaints, clinical and conventional radiographic techniques are insufficient in determining a patient's problem. This is especially true in the early stages of bone infection. Bone scanning or imaging can provide additional information in the diagnosis of the disorder. However, bone scans are not specific and must be correlated with clinical, radiographic, and laboratory evaluation. In other words, bone scanning does not provide the diagnosis but is an important bit of information aiding in the process of diagnosis. The more useful radionuclides in skeletal imaging are technetium phosphate complexes and gallium citrate. These compounds are administered intravenously and are detected at specific time intervals postinjection by a rectilinear scanner; when minification is used, the entire skeleton can be imaged from head to toe. Minification allows visualization of the entire skeleton in a single image. A gamma camera can concentrate on an isolated area. However, it requires multiple views to complete the whole skeletal image. Recent advances have allowed computer augmentation of the data received from radionucleotide imaging. The purpose of this chapter is to present the current radionuclides clinically useful in podiatric patients
Discrete Lorentzian quantum gravity
Loll, R.
2000-01-01
Just as for non-abelian gauge theories at strong coupling, discrete lattice methods are a natural tool in the study of non-perturbative quantum gravity. They have to reflect the fact that the geometric degrees of freedom are dynamical, and that therefore also the lattice theory must be formulated
Sharp, Karen Tobey
This paper cites information received from a number of sources, e.g., mathematics teachers in two-year colleges, publishers, and convention speakers, about the nature of discrete mathematics and about what topics a course in this subject should contain. Note is taken of the book edited by Ralston and Young which discusses the future of college…
Discrete Exterior Calculus Discretization of Incompressible Navier-Stokes Equations
Mohamed, Mamdouh S.; Hirani, Anil N.; Samtaney, Ravi
2017-01-01
A conservative discretization of incompressible Navier-Stokes equations over surface simplicial meshes is developed using discrete exterior calculus (DEC). Numerical experiments for flows over surfaces reveal a second order accuracy
Discrete mKdV and discrete sine-Gordon flows on discrete space curves
Inoguchi, Jun-ichi; Kajiwara, Kenji; Matsuura, Nozomu; Ohta, Yasuhiro
2014-01-01
In this paper, we consider the discrete deformation of the discrete space curves with constant torsion described by the discrete mKdV or the discrete sine-Gordon equations, and show that it is formulated as the torsion-preserving equidistant deformation on the osculating plane which satisfies the isoperimetric condition. The curve is reconstructed from the deformation data by using the Sym–Tafel formula. The isoperimetric equidistant deformation of the space curves does not preserve the torsion in general. However, it is possible to construct the torsion-preserving deformation by tuning the deformation parameters. Further, it is also possible to make an arbitrary choice of the deformation described by the discrete mKdV equation or by the discrete sine-Gordon equation at each step. We finally show that the discrete deformation of discrete space curves yields the discrete K-surfaces. (paper)
Discrete mathematics with applications
Koshy, Thomas
2003-01-01
This approachable text studies discrete objects and the relationships that bind them. It helps students understand and apply the power of discrete math to digital computer systems and other modern applications. It provides excellent preparation for courses in linear algebra, number theory, and modern/abstract algebra and for computer science courses in data structures, algorithms, programming languages, compilers, databases, and computation. * Covers all recommended topics in a self-contained, comprehensive, and understandable format for students and new professionals * Emphasizes problem-solving techniques, pattern recognition, conjecturing, induction, applications of varying nature, proof techniques, algorithm development and correctness, and numeric computations * Weaves numerous applications into the text * Helps students learn by doing with a wealth of examples and exercises: - 560 examples worked out in detail - More than 3,700 exercises - More than 150 computer assignments - More than 600 writing projects*...
Discrete and computational geometry
Devadoss, Satyan L
2011-01-01
Discrete geometry is a relatively new development in pure mathematics, while computational geometry is an emerging area in applications-driven computer science. Their intermingling has yielded exciting advances in recent years, yet what has been lacking until now is an undergraduate textbook that bridges the gap between the two. Discrete and Computational Geometry offers a comprehensive yet accessible introduction to this cutting-edge frontier of mathematics and computer science. This book covers traditional topics such as convex hulls, triangulations, and Voronoi diagrams, as well as more recent subjects like pseudotriangulations, curve reconstruction, and locked chains. It also touches on more advanced material, including Dehn invariants, associahedra, quasigeodesics, Morse theory, and the recent resolution of the Poincaré conjecture. Connections to real-world applications are made throughout, and algorithms are presented independently of any programming language. This richly illustrated textbook also fe...
2002-01-01
Discrete geometry investigates combinatorial properties of configurations of geometric objects. To a working mathematician or computer scientist, it offers sophisticated results and techniques of great diversity and it is a foundation for fields such as computational geometry or combinatorial optimization. This book is primarily a textbook introduction to various areas of discrete geometry. In each area, it explains several key results and methods, in an accessible and concrete manner. It also contains more advanced material in separate sections and thus it can serve as a collection of surveys in several narrower subfields. The main topics include: basics on convex sets, convex polytopes, and hyperplane arrangements; combinatorial complexity of geometric configurations; intersection patterns and transversals of convex sets; geometric Ramsey-type results; polyhedral combinatorics and high-dimensional convexity; and lastly, embeddings of finite metric spaces into normed spaces. Jiri Matousek is Professor of Com...
Time Discretization Techniques
Gottlieb, S.
2016-10-12
The time discretization of hyperbolic partial differential equations is typically the evolution of a system of ordinary differential equations obtained by spatial discretization of the original problem. Methods for this time evolution include multistep, multistage, or multiderivative methods, as well as a combination of these approaches. The time step constraint is mainly a result of the absolute stability requirement, as well as additional conditions that mimic physical properties of the solution, such as positivity or total variation stability. These conditions may be required for stability when the solution develops shocks or sharp gradients. This chapter contains a review of some of the methods historically used for the evolution of hyperbolic PDEs, as well as cutting edge methods that are now commonly used.
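The abstract above describes time stepping of the ODE system obtained by spatial discretization (the method of lines), with the time step constrained by absolute stability. As a hedged illustration — the scheme and names below are ours, not taken from the chapter — a two-stage Runge-Kutta step applied to an upwind semi-discretization of linear advection, with a CFL-limited time step:

```python
import numpy as np

# Illustrative method-of-lines sketch (not from the chapter): upwind
# spatial discretization of u_t + a u_x = 0 gives du/dt = L(u), which is
# then advanced in time with a two-stage Runge-Kutta method.
def upwind_rhs(u, a, dx):
    # Periodic first-order upwind difference, valid for a > 0.
    return -a * (u - np.roll(u, 1)) / dx

def rk2_step(u, dt, a, dx):
    k1 = upwind_rhs(u, a, dx)
    k2 = upwind_rhs(u + dt * k1, a, dx)
    return u + 0.5 * dt * (k1 + k2)

a, nx = 1.0, 100
dx = 1.0 / nx
dt = 0.5 * dx / a            # CFL-limited step: the absolute stability constraint
x = np.linspace(0.0, 1.0 - dx, nx)
u = np.sin(2 * np.pi * x)
for _ in range(50):
    u = rk2_step(u, dt, a, dx)
```

The first-order upwind operator is diffusive, so the discrete maximum decays rather than oscillating — a simple instance of the "mimic physical properties" conditions (here, a maximum principle) mentioned in the abstract.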
Mesiar, Radko; Li, J.; Pap, E.
2013-01-01
Roč. 54, č. 3 (2013), s. 357-364 ISSN 0888-613X R&D Projects: GA ČR GAP402/11/0378 Institutional support: RVO:67985556 Keywords : concave integral * pseudo-addition * pseudo-multiplication Subject RIV: BA - General Mathematics Impact factor: 1.977, year: 2013 http://library.utia.cas.cz/separaty/2013/E/mesiar-discrete pseudo-integrals.pdf
Discrete variational Hamiltonian mechanics
Lall, S; West, M
2006-01-01
The main contribution of this paper is to present a canonical choice of a Hamiltonian theory corresponding to the theory of discrete Lagrangian mechanics. We make use of Lagrange duality and follow a path parallel to that used for construction of the Pontryagin principle in optimal control theory. We use duality results regarding sensitivity and separability to show the relationship between generating functions and symplectic integrators. We also discuss connections to optimal control theory and numerical algorithms
Jalnapurkar, Sameer M; Leok, Melvin; Marsden, Jerrold E; West, Matthew
2006-01-01
This paper develops the theory of Abelian Routh reduction for discrete mechanical systems and applies it to the variational integration of mechanical systems with Abelian symmetry. The reduction of variational Runge-Kutta discretizations is considered, as well as the extent to which symmetry reduction and discretization commute. These reduced methods allow the direct simulation of dynamical features such as relative equilibria and relative periodic orbits that can be obscured or difficult to identify in the unreduced dynamics. The methods are demonstrated for the dynamics of an Earth orbiting satellite with a non-spherical J 2 correction, as well as the double spherical pendulum. The J 2 problem is interesting because in the unreduced picture, geometric phases inherent in the model and those due to numerical discretization can be hard to distinguish, but this issue does not appear in the reduced algorithm, where one can directly observe interesting dynamical structures in the reduced phase space (the cotangent bundle of shape space), in which the geometric phases have been removed. The main feature of the double spherical pendulum example is that it has a non-trivial magnetic term in its reduced symplectic form. Our method is still efficient as it can directly handle the essential non-canonical nature of the symplectic structure. In contrast, a traditional symplectic method for canonical systems could require repeated coordinate changes if one is evoking Darboux' theorem to transform the symplectic structure into canonical form, thereby incurring additional computational cost. Our method allows one to design reduced symplectic integrators in a natural way, despite the non-canonical nature of the symplectic structure
Discrete approach to complex planar geometries
Cupini, E.; De Matteis, A.
1974-01-01
Planar regions in Monte Carlo transport problems have been represented by a finite set of points with a corresponding region index for each. The simulation of particle free flight then reduces to the simple operations necessary for scanning appropriate grid points to determine whether a region other than the starting one is encountered. When the complexity of the geometry is restricted to only some regions of the assembly examined, a mixed discrete-continuous philosophy may be adopted. By this approach, the lattice of a thermal reactor has been treated, discretizing only the central regions of the cell containing the fuel rods. Excellent agreement with experimental results has been obtained in the computation of cell parameters in the energy range from fission to thermalization through the 238 U resonance region. (U.S.)
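The grid-scanning idea can be sketched as follows (a hypothetical toy layout and function names of our own, not the paper's code): the planar assembly is an array of region indices, and a free flight is tracked by scanning grid points along the flight direction until the region index changes.

```python
import numpy as np

# Hypothetical planar assembly: region 0 is moderator, region 1 a fuel rod.
region = np.zeros((200, 200), dtype=int)
region[80:120, 80:120] = 1

def first_region_change(x, y, ux, uy, step=0.5):
    """March from (x, y) along unit direction (ux, uy), scanning grid points;
    return the first point whose region index differs from the starting one."""
    start = region[int(y), int(x)]
    while 0 <= x < region.shape[1] and 0 <= y < region.shape[0]:
        if region[int(y), int(x)] != start:
            return (x, y, region[int(y), int(x)])
        x += step * ux
        y += step * uy
    return None  # the particle left the assembly without changing region

hit = first_region_change(10.0, 100.0, 1.0, 0.0)  # flight along +x from the moderator
```

A flight starting at (10, 100) and moving in +x first meets the fuel region near x = 80, which is exactly the "scan until a new region index appears" operation the abstract describes.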
Discrete port-Hamiltonian systems
Talasila, V.; Clemente-Gallardo, J.; Schaft, A.J. van der
2006-01-01
Either from a control theoretic viewpoint or from an analysis viewpoint it is necessary to convert smooth systems to discrete systems, which can then be implemented on computers for numerical simulations. Discrete models can be obtained either by discretizing a smooth model, or by directly modeling
A paradigm for discrete physics
Noyes, H.P.; McGoveran, D.; Etter, T.; Manthey, M.J.; Gefwert, C.
1987-01-01
An example is outlined for constructing a discrete physics using as a starting point the insight from quantum physics that events are discrete, indivisible and non-local. Initial postulates are finiteness, discreteness, finite computability, absolute nonuniqueness (i.e., homogeneity in the absence of specific cause) and additivity
Two new discrete integrable systems
Chen Xiao-Hong; Zhang Hong-Qing
2013-01-01
In this paper, we focus on the construction of new (1+1)-dimensional discrete integrable systems according to a subalgebra of loop algebra Ã 1 . By designing two new (1+1)-dimensional discrete spectral problems, two new discrete integrable systems are obtained, namely, a 2-field lattice hierarchy and a 3-field lattice hierarchy. When deriving the two new discrete integrable systems, we find the generalized relativistic Toda lattice hierarchy and the generalized modified Toda lattice hierarchy. Moreover, we also obtain the Hamiltonian structures of the two lattice hierarchies by means of the discrete trace identity
Topology and statistics in zero dimensions
Aneziris, Charilaos.
1992-05-01
It has been suggested that space-time may be intrinsically not continuous, but discrete. Here we review some topological notions of discrete manifolds, in particular ones made out of a finite number of points, and discuss the possibilities for statistics in such spaces. (author)
Hirsch, M; Peinado, E; Valle, J W F
2010-01-01
We propose a new motivation for the stability of dark matter (DM). We suggest that the same non-abelian discrete flavor symmetry which accounts for the observed pattern of neutrino oscillations spontaneously breaks to a Z2 subgroup which renders DM stable. The simplest scheme leads to a scalar doublet DM potentially detectable in nuclear recoil experiments, an inverse neutrino mass hierarchy, and hence a neutrinoless double beta decay rate accessible to upcoming searches, while a reactor angle equal to zero gives no CP violation in neutrino oscillations.
Wuensche, Andrew
DDLab is interactive graphics software for creating, visualizing, and analyzing many aspects of Cellular Automata, Random Boolean Networks, and Discrete Dynamical Networks in general and studying their behavior, both from the time-series perspective — space-time patterns, and from the state-space perspective — attractor basins. DDLab is relevant to research, applications, and education in the fields of complexity, self-organization, emergent phenomena, chaos, collision-based computing, neural networks, content addressable memory, genetic regulatory networks, dynamical encryption, generative art and music, and the study of the abstract mathematical/physical/dynamical phenomena in their own right.
Lim, Gyeong Hui
2008-03-01
This book consists of 15 chapters, covering the basic concepts and meaning of statistical thermodynamics, Maxwell-Boltzmann statistics, ensembles, thermodynamic functions and fluctuations, statistical dynamics of independent particle systems, ideal molecular systems, chemical equilibrium and chemical reaction rates in ideal gas mixtures, classical statistical thermodynamics, the ideal lattice model, lattice statistics and nonideal lattice models, imperfect gas theory of liquids, theory of solutions, statistical thermodynamics of interfaces, statistical thermodynamics of high-molecular systems, and quantum statistics
Mathematical statistics and stochastic processes
Bosq, Denis
2013-01-01
Generally, books on mathematical statistics are restricted to the case of independent identically distributed random variables. In this book however, both this case AND the case of dependent variables, i.e. statistics for discrete and continuous time processes, are studied. This second case is very important for today's practitioners. Mathematical Statistics and Stochastic Processes is based on decision theory and asymptotic statistics and contains up-to-date information on the relevant topics of theory of probability, estimation, confidence intervals, non-parametric statistics and rob
Scheck, Florian
2016-01-01
Scheck’s textbook starts with a concise introduction to classical thermodynamics, including geometrical aspects. Then a short introduction to probabilities and statistics lays the basis for the statistical interpretation of thermodynamics. Phase transitions, discrete models and the stability of matter are explained in great detail. Thermodynamics has a special role in theoretical physics. Due to the general approach of thermodynamics the field has a bridging function between several areas like the theory of condensed matter, elementary particle physics, astrophysics and cosmology. The classical thermodynamics describes predominantly averaged properties of matter, reaching from few particle systems and state of matter to stellar objects. Statistical Thermodynamics covers the same fields, but explores them in greater depth and unifies classical statistical mechanics with quantum theory of multiple particle systems. The content is presented as two tracks: the fast track for master students, providing the essen...
Souza, Manoelito M. de
1997-01-01
We discuss the physical meaning and the geometric interpretation of implementation in classical field theories. The origin of infinities and other inconsistencies in field theories is traced to fields defined with support on the light cone; a finite and consistent field theory requires a light-cone generator as the field support. We then introduce a classical field theory with support on the light-cone generators. It results in a description of discrete (point-like) interactions in terms of localized particle-like fields. We find the propagators of these particle-like fields and discuss their physical meaning, properties and consequences. They are conformally invariant, singularity-free, and describe a manifestly covariant (1 + 1)-dimensional dynamics in a (3 + 1) spacetime. Remarkably, this conformal symmetry remains even for the propagation of a massive field in four spacetime dimensions. We apply this formalism to classical electrodynamics and to general relativity. The standard formalism with its distributed fields is retrieved in terms of a spacetime average of the discrete field. Singularities are the by-products of the averaging process. This new formalism enlightens the meaning and the problems of field theory, and may allow a softer transition to a quantum theory. (author)
Discrete Exterior Calculus Discretization of Incompressible Navier-Stokes Equations
Mohamed, Mamdouh S.
2017-05-23
A conservative discretization of incompressible Navier-Stokes equations over surface simplicial meshes is developed using discrete exterior calculus (DEC). Numerical experiments for flows over surfaces reveal a second order accuracy for the developed scheme when using structured-triangular meshes, and first order accuracy otherwise. The mimetic character of many of the DEC operators provides exact conservation of both mass and vorticity, in addition to superior kinetic energy conservation. The employment of barycentric Hodge star allows the discretization to admit arbitrary simplicial meshes. The discretization scheme is presented along with various numerical test cases demonstrating its main characteristics.
Ji Wei
2010-10-01
Background: Microarray data discretization is a basic preprocessing step for many algorithms of gene regulatory network inference. Some common discretization methods in informatics are used to discretize microarray data. Selection of the discretization method is often arbitrary, and no systematic comparison of different discretization methods has been conducted in the context of gene regulatory network inference from time-series gene expression data. Results: In this study, we propose a new discretization method, "bikmeans", and compare its performance with four other widely used discretization methods using different datasets, modeling algorithms and numbers of intervals. Sensitivities, specificities and total accuracies were calculated and statistical analysis was carried out. The bikmeans method always gave high total accuracies. Conclusions: Our results indicate that proper discretization methods can consistently improve gene regulatory network inference independent of network modeling algorithms and datasets. Our new method, bikmeans, resulted in significantly better total accuracies than the other methods.
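The bikmeans method itself is not reproduced here; as a hedged stand-in, the following is a basic one-dimensional Lloyd k-means discretizer of the kind such comparisons cover (all names and data are illustrative, not from the paper):

```python
import numpy as np

# Illustrative k-means discretization of a 1-D expression profile into k
# ordered levels (0 = lowest). This is a generic baseline, NOT "bikmeans".
def kmeans_discretize(values, k=3, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each value to its nearest center, then recompute centers.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    order = np.argsort(centers)          # relabel so level 0 = lowest center
    rank = np.empty(k, dtype=int)
    rank[order] = np.arange(k)
    return rank[labels]

expr = np.array([0.1, 0.2, 0.15, 5.0, 5.2, 9.8, 10.1])
levels = kmeans_discretize(expr, k=3)
```

Each expression value is mapped to one of k discrete levels, which is the preprocessing step that network-inference algorithms then consume.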
Advances in discrete differential geometry
2016-01-01
This is one of the first books on a newly emerging field of discrete differential geometry and an excellent way to access this exciting area. It surveys the fascinating connections between discrete models in differential geometry and complex analysis, integrable systems and applications in computer graphics. The authors take a closer look at discrete models in differential geometry and dynamical systems. Their curves are polygonal, surfaces are made from triangles and quadrilaterals, and time is discrete. Nevertheless, the difference between the corresponding smooth curves, surfaces and classical dynamical systems with continuous time can hardly be seen. This is the paradigm of structure-preserving discretizations. Current advances in this field are stimulated to a large extent by its relevance for computer graphics and mathematical physics. This book is written by specialists working together on a common research project. It is about differential geometry and dynamical systems, smooth and discrete theories, ...
Poisson hierarchy of discrete strings
Ioannidou, Theodora; Niemi, Antti J.
2016-01-01
The Poisson geometry of a discrete string in three dimensional Euclidean space is investigated. For this the Frenet frames are converted into a spinorial representation, the discrete spinor Frenet equation is interpreted in terms of a transfer matrix formalism, and Poisson brackets are introduced in terms of the spinor components. The construction is then generalised, in a self-similar manner, into an infinite hierarchy of Poisson algebras. As an example, the classical Virasoro (Witt) algebra that determines reparametrisation diffeomorphism along a continuous string, is identified as a particular sub-algebra, in the hierarchy of the discrete string Poisson algebra. - Highlights: • Witt (classical Virasoro) algebra is derived in the case of discrete string. • Infinite dimensional hierarchy of Poisson bracket algebras is constructed for discrete strings. • Spinor representation of discrete Frenet equations is developed.
Principles of discrete time mechanics
Jaroszkiewicz, George
2014-01-01
Could time be discrete on some unimaginably small scale? Exploring the idea in depth, this unique introduction to discrete time mechanics systematically builds the theory up from scratch, beginning with the historical, physical and mathematical background to the chronon hypothesis. Covering classical and quantum discrete time mechanics, this book presents all the tools needed to formulate and develop applications of discrete time mechanics in a number of areas, including spreadsheet mechanics, classical and quantum register mechanics, and classical and quantum mechanics and field theories. A consistent emphasis on contextuality and the observer-system relationship is maintained throughout.
Dark discrete gauge symmetries
Batell, Brian
2011-01-01
We investigate scenarios in which dark matter is stabilized by an Abelian Z N discrete gauge symmetry. Models are surveyed according to symmetries and matter content. Multicomponent dark matter arises when N is not prime and Z N contains one or more subgroups. The dark sector interacts with the visible sector through the renormalizable kinetic mixing and Higgs portal operators, and we highlight the basic phenomenology in these scenarios. In particular, multiple species of dark matter can lead to an unconventional nuclear recoil spectrum in direct detection experiments, while the presence of new light states in the dark sector can dramatically affect the decays of the Higgs at the Tevatron and LHC, thus providing a window into the gauge origin of the stability of dark matter.
Noyes, H.P.; Starson, S.
1991-03-01
Discrete physics, because it replaces time evolution generated by the energy operator with a global bit-string generator (program universe) and replaces ''fields'' with the relativistic Wheeler-Feynman ''action at a distance,'' allows the consistent formulation of the concept of signed gravitational charge for massive particles. The resulting prediction made by this version of the theory is that free anti-particles near the surface of the earth will ''fall'' up with the same acceleration that the corresponding particles fall down. So far as we can see, no current experimental information is in conflict with this prediction of our theory. The experimentum crucis will be one of the anti-proton or anti-hydrogen experiments at CERN. Our prediction should be much easier to test than the small effects which those experiments are currently designed to detect or bound. 23 refs
Computational statistics handbook with Matlab
Martinez, Wendy L
2007-01-01
Prefaces Introduction What Is Computational Statistics? An Overview of the Book Probability Concepts Introduction Probability Conditional Probability and Independence Expectation Common Distributions Sampling Concepts Introduction Sampling Terminology and Concepts Sampling Distributions Parameter Estimation Empirical Distribution Function Generating Random Variables Introduction General Techniques for Generating Random Variables Generating Continuous Random Variables Generating Discrete Random Variables Exploratory Data Analysis Introduction Exploring Univariate Data Exploring Bivariate and Trivariate Data Exploring Multidimensional Data Finding Structure Introduction Projecting Data Principal Component Analysis Projection Pursuit EDA Independent Component Analysis Grand Tour Nonlinear Dimensionality Reduction Monte Carlo Methods for Inferential Statistics Introduction Classical Inferential Statistics Monte Carlo Methods for Inferential Statist...
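One topic from the table of contents above — generating discrete random variables — can be sketched with inverse-transform sampling. This is a hedged illustration: the function names are ours, not the handbook's, and the book's own code is Matlab rather than Python.

```python
import random

random.seed(0)  # fixed seed so the example is reproducible

def sample_discrete(values, probs, u=None):
    """Draw one value with the given probabilities via the inverse CDF."""
    u = random.random() if u is None else u
    cdf = 0.0
    for v, p in zip(values, probs):
        cdf += p
        if u <= cdf:
            return v
    return values[-1]  # guard against floating-point round-off in the CDF

draws = [sample_discrete(["a", "b", "c"], [0.2, 0.5, 0.3]) for _ in range(1000)]
```

With probabilities (0.2, 0.5, 0.3), the empirical frequencies of "a", "b", "c" over many draws approach those values, which is the basic correctness check for any discrete generator.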
Control of Discrete Event Systems
Smedinga, Rein
1989-01-01
Discrete-event systems play a role in many areas. This thesis focuses on the order of events; timing aspects are left out of consideration. In that case, discrete-event systems can be well modelled by making use of
Discrete Mathematics and Its Applications
Oxley, Alan
2010-01-01
The article gives ideas that lecturers of undergraduate Discrete Mathematics courses can use in order to make the subject more interesting for students and encourage them to undertake further studies in the subject. It is possible to teach Discrete Mathematics with little or no reference to computing. However, students are more likely to be…
Discrete Mathematics and Curriculum Reform.
Kenney, Margaret J.
1996-01-01
Defines discrete mathematics as the mathematics necessary to effect reasoned decision making in finite situations and explains how its use supports the current view of mathematics education. Discrete mathematics can be used by curriculum developers to improve the curriculum for students of all ages and abilities. (SLD)
Connections on discrete fibre bundles
Manton, N.S.; Cambridge Univ.
1987-01-01
A new approach to gauge fields on a discrete space-time is proposed, in which the fundamental object is a discrete version of a principal fibre bundle. If the bundle is twisted, the gauge fields are topologically non-trivial automatically. (orig.)
Discrete dynamics versus analytic dynamics
Toxværd, Søren
2014-01-01
For discrete classical molecular dynamics obtained by the ''Verlet'' algorithm (VA) with time increment h, there exists a shadow Hamiltonian H̃ with energy Ẽ(h), for which the discrete particle positions lie on the analytic trajectories for H̃. Here, we prove that, independent of such an analytic analogy, there exists an exact hidden energy invariance E* for VA dynamics. The fact that the discrete VA dynamics has the same invariances as Newtonian dynamics raises the question of which formulation is correct, or alternatively, which is the most appropriate formulation of classical dynamics. In this context the relation between the discrete VA dynamics and the (general) discrete dynamics investigated by Lee [Phys. Lett. B122, 217 (1983)] is presented and discussed.
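The ''Verlet'' algorithm discussed above can be sketched for a unit-mass harmonic oscillator (an illustrative example of ours, not taken from the paper); the discrete energy stays bounded near its initial value with no drift, reflecting the conserved shadow/hidden energy.

```python
# Velocity-Verlet integration of a unit-mass harmonic oscillator, F(x) = -x.
def velocity_verlet(x, v, h, steps):
    a = -x
    for _ in range(steps):
        x += h * v + 0.5 * h * h * a   # position update
        a_new = -x                     # force at the new position
        v += 0.5 * h * (a + a_new)     # velocity update with averaged force
        a = a_new
    return x, v

x0, v0, h = 1.0, 0.0, 0.01
x, v = velocity_verlet(x0, v0, h, 10_000)
energy = 0.5 * v * v + 0.5 * x * x     # stays within O(h^2) of the initial 0.5
```

Over 10,000 steps the discrete energy merely oscillates within a band of order h² around 0.5; a non-symplectic method such as forward Euler would instead drift systematically.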
Modern approaches to discrete curvature
Romon, Pascal
2017-01-01
This book provides a valuable glimpse into discrete curvature, a rich new field of research which blends discrete mathematics, differential geometry, probability and computer graphics. It includes a vast collection of ideas and tools which will offer something new to all interested readers. Discrete geometry has arisen as much as a theoretical development as in response to unforeseen challenges coming from applications. Discrete and continuous geometries have turned out to be intimately connected. Discrete curvature is the key concept connecting them through many bridges in numerous fields: metric spaces, Riemannian and Euclidean geometries, geometric measure theory, topology, partial differential equations, calculus of variations, gradient flows, asymptotic analysis, probability, harmonic analysis, graph theory, etc. In spite of its crucial importance both in theoretical mathematics and in applications, up to now, almost no books have provided a coherent outlook on this emerging field.
Discretion and Disproportionality
Jason A. Grissom
2015-12-01
Students of color are underrepresented in gifted programs relative to White students, but the reasons for this underrepresentation are poorly understood. We investigate the predictors of gifted assignment using nationally representative, longitudinal data on elementary students. We document that even among students with high standardized test scores, Black students are less likely to be assigned to gifted services in both math and reading, a pattern that persists when controlling for other background factors, such as health and socioeconomic status, and characteristics of classrooms and schools. We then investigate the role of teacher discretion, leveraging research from political science suggesting that clients of government services from traditionally underrepresented groups benefit from diversity in the providers of those services, including teachers. Even after conditioning on test scores and other factors, Black students indeed are referred to gifted programs, particularly in reading, at significantly lower rates when taught by non-Black teachers, a concerning result given the relatively low incidence of assignment to own-race teachers among Black students.
Vlad, Valentin I.; Ionescu-Pallas, Nicholas
2000-10-01
The Planck radiation spectrum of ideal cubic and spherical cavities, in the region of small adiabatic invariance, γ = TV^{1/3}, is shown to be discrete and strongly dependent on the cavity geometry and temperature. This behavior is the consequence of the random distribution of the state weights in the cubic cavity and of the random overlapping of the successive multiplet components for the spherical cavity. The total energy (obtained by summing up the exact contributions of the eigenvalues and their weights, for low values of the adiabatic invariance) no longer obeys the Stefan-Boltzmann law. The new law includes a corrective factor depending on γ and imposes a faster decrease of the total energy to zero for γ → 0. We have defined the double quantized regime for both cubic and spherical cavities by the superior and inferior limits put on the principal quantum numbers or the adiabatic invariance. The total energy of the double quantized cavities shows large differences from the classical calculations over unexpectedly large intervals, which are measurable and put in evidence important macroscopic quantum effects. (author)
Statistical inference based on divergence measures
Pardo, Leandro
2005-01-01
The idea of using functionals of Information Theory, such as entropies or divergences, in statistical inference is not new. However, in spite of the fact that divergence statistics have become a very good alternative to the classical likelihood ratio test and the Pearson-type statistic in discrete models, many statisticians remain unaware of this powerful approach. Statistical Inference Based on Divergence Measures explores classical problems of statistical inference, such as estimation and hypothesis testing, on the basis of measures of entropy and divergence. The first two chapters form an overview, from a statistical perspective, of the most important measures of entropy and divergence and study their properties. The author then examines the statistical analysis of discrete multivariate data, with emphasis on problems in contingency tables and loglinear models, using phi-divergence test statistics as well as minimum phi-divergence estimators. The final chapter looks at testing in general populations, prese...
Perfect discretization of path integrals
Steinhaus, Sebastian
2012-01-01
In order to obtain a well-defined path integral one often employs discretizations. In the case of General Relativity these generically break diffeomorphism symmetry, which has severe consequences since these symmetries determine the dynamics of the corresponding system. In this article we consider the path integral of reparametrization invariant systems as a toy example and present an improvement procedure for the discretized propagator. Fixed points and convergence of the procedure are discussed. Furthermore we show that a reparametrization invariant path integral implies discretization independence and acts as a projector onto physical states.
Perfect discretization of path integrals
Steinhaus, Sebastian
2012-05-01
In order to obtain a well-defined path integral one often employs discretizations. In the case of General Relativity these generically break diffeomorphism symmetry, which has severe consequences since these symmetries determine the dynamics of the corresponding system. In this article we consider the path integral of reparametrization invariant systems as a toy example and present an improvement procedure for the discretized propagator. Fixed points and convergence of the procedure are discussed. Furthermore we show that a reparametrization invariant path integral implies discretization independence and acts as a projector onto physical states.
The origin of discrete particles
Bastin, T
2009-01-01
This book is a unique summary of the results of a long research project undertaken by the authors on discreteness in modern physics. In contrast with the usual expectation that discreteness is the result of mathematical tools for insertion into a continuous theory, this more basic treatment builds up the world from the discrimination of discrete entities. This gives an algebraic structure in which certain fixed numbers arise; one of these agrees with the measured value of the fine-structure constant to one part in 10,000,000 (10^7). Sample Chapter(s). Foreword (56 KB). Chapter 1: Introduction
Transverse scan-field imaging apparatus
Lyons, F.T.
1978-01-01
A description is given of an array of opposed pairs of radiation detectors which could be used in tomography or scintiscanning. The opposed detectors scan in opposite tangential directions in a pre-programmed fashion. The associated control system receives the detector outputs into a buffer store and also provides an address for each element of information detected. The addresses are such that information from one buffer store is read into the RAM of a central processing unit in the opposite direction to that from the store associated with the opposite detector, thus effectively reversing the scan direction of one detector of each pair. Also described are the detectors themselves with focussed collimators, the scan drive mechanism, and the method of calculating radioactive emission intensity at discrete points throughout the scan-field. (author)
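The address reversal described for opposed detector pairs can be illustrated with a small sketch. The buffer contents and the element-wise summing of opposed detectors are illustrative assumptions; the point is that reversing one buffer aligns two scans taken in opposite tangential directions.

```python
def merge_opposed_scans(buf_a, buf_b):
    # buf_a was acquired scanning one way across the field, buf_b the
    # opposite way over the same field. Reading buf_b in reverse order
    # into memory effectively reverses its scan direction, so both
    # buffers index the same scan-field element at the same position.
    assert len(buf_a) == len(buf_b)
    aligned_b = list(reversed(buf_b))
    return [a + b for a, b in zip(buf_a, aligned_b)]
```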
Synchronization Techniques in Parallel Discrete Event Simulation
Lindén, Jonatan
2018-01-01
Discrete event simulation is an important tool for evaluating system models in many fields of science and engineering. To improve the performance of large-scale discrete event simulations, several techniques to parallelize discrete event simulation have been developed. In parallel discrete event simulation, the work of a single discrete event simulation is distributed over multiple processing elements. A key challenge in parallel discrete event simulation is to ensure that causally dependent ...
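The core of a sequential discrete event simulation, the unit of work that parallel schemes distribute over processing elements, can be sketched with a priority queue. Event names and handler signatures here are illustrative, not taken from the thesis.

```python
import heapq

def run_simulation(initial_events, handlers, t_end):
    # Minimal sequential discrete event simulation kernel: events are
    # (time, name) pairs kept in a min-heap; a handler for an event may
    # schedule further events at later times.
    queue = list(initial_events)
    heapq.heapify(queue)
    log = []
    while queue:
        t, name = heapq.heappop(queue)
        if t > t_end:
            break
        log.append((t, name))
        for new_event in handlers.get(name, lambda t: [])(t):
            heapq.heappush(queue, new_event)
    return log
```

The parallelization challenge the abstract mentions is visible here: a handler may schedule an event earlier than events already waiting on another processing element's queue, which is exactly the causal-dependency problem synchronization techniques must solve.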
3-D Discrete Analytical Ridgelet Transform
Helbert, David; Carré, Philippe; Andrès, Éric
2006-01-01
International audience; In this paper, we propose an implementation of the 3-D Ridgelet transform: the 3-D discrete analytical Ridgelet transform (3-D DART). This transform uses the Fourier strategy for the computation of the associated 3-D discrete Radon transform. The innovative step is the definition of a discrete 3-D transform with the discrete analytical geometry theory by the construction of 3-D discrete analytical lines in the Fourier domain. We propose two types of 3-D discrete lines:...
Pestman, Wiebe R
2009-01-01
This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.
Discrete geometric structures for architecture
Pottmann, Helmut
2010-01-01
The talk will provide an overview of recent progress in this field, with a particular focus on discrete geometric structures. Most of these result from practical requirements on segmenting a freeform shape into planar panels and on the physical realization
Causal Dynamics of Discrete Surfaces
Pablo Arrighi
2014-03-01
Full Text Available We formalize the intuitive idea of a labelled discrete surface which evolves in time, subject to two natural constraints: the evolution does not propagate information too fast; and it acts everywhere the same.
Perfect discretization of path integrals
Steinhaus, Sebastian
2011-01-01
In order to obtain a well-defined path integral one often employs discretizations. In the case of General Relativity these generically break diffeomorphism symmetry, which has severe consequences since these symmetries determine the dynamics of the corresponding system. In this article we consider the path integral of reparametrization invariant systems as a toy example and present an improvement procedure for the discretized propagator. Fixed points and convergence of the procedure are discu...
Alfa, Attahiru S
2016-01-01
This book introduces the theoretical fundamentals for modeling queues in discrete time, and the basic procedures for developing queueing models in discrete time. There is a focus on applications in modern telecommunication systems. It presents how most queueing models in discrete time can be set up as discrete-time Markov chains. Techniques such as matrix-analytic methods (MAM) that can be used to analyze the resulting Markov chains are included. This book covers single-node systems, tandem systems and queueing networks. It shows how queues with time-varying parameters can be analyzed, and illustrates numerical issues associated with computations for discrete-time queueing systems. Optimal control of queues is also covered. Applied Discrete-Time Queues targets researchers, advanced-level students and analysts in the field of telecommunication networks. It is suitable as a reference book and can also be used as a secondary text book in computer engineering and computer science. Examples and exercises are includ...
Statistical data fusion for cross-tabulation
Kamakura, W.A.; Wedel, M.
The authors address the situation in which a researcher wants to cross-tabulate two sets of discrete variables collected in independent samples, but a subset of the variables is common to both samples. The authors propose a statistical data-fusion model that allows for statistical tests of
Discrete Curvature Theories and Applications
Sun, Xiang
2016-08-25
Discrete Differential Geometry (DDG) concerns discrete counterparts of notions and methods in differential geometry. This thesis deals with a core subject in DDG, discrete curvature theories on various types of polyhedral surfaces that are practically important for free-form architecture, sunlight-redirecting shading systems, and face recognition. Modeled as polyhedral surfaces, the shapes of free-form structures may have to satisfy different geometric or physical constraints. We study a combination of geometry and physics: the discrete surfaces that can stand on their own, as well as having proper shapes for the manufacture. These proper shapes, known as circular and conical meshes, are closely related to discrete principal curvatures. We study curvature theories that make such surfaces possible. Shading systems of freeform building skins are new types of energy-saving structures that can re-direct the sunlight. From these systems, discrete line congruences across polyhedral surfaces can be abstracted. We develop a new curvature theory for polyhedral surfaces equipped with normal congruences, a particular type of congruences defined by linear interpolation of vertex normals. The main results are a discussion of various definitions of normality, a detailed study of the geometry of such congruences, and a concept of curvatures and shape operators associated with the faces of a triangle mesh. These curvatures are compatible with both normal congruences and the Steiner formula. In addition to architecture, we consider the role of discrete curvatures in face recognition. We use geometric measure theory to introduce the notion of asymptotic cones associated with a singular subspace of a Riemannian manifold, which is an extension of the classical notion of asymptotic directions. We get a simple expression of these cones for polyhedral surfaces, as well as convergence and approximation theorems. We use the asymptotic cones as facial descriptors and demonstrate the
Yanagawa, Masahiro; Honda, Osamu; Kikuyama, Ayano; Gyobu, Tomoko; Sumikawa, Hiromitsu; Koyama, Mitsuhiro; Tomiyama, Noriyuki
2012-10-01
To evaluate the effects of ASIR on a CAD system for pulmonary nodules using clinical routine-dose CT and lower-dose CT. Thirty-five patients (body mass index, 22.17 ± 4.37 kg/m(2)) were scanned by multidetector-row CT with tube currents (clinical routine-dose CT, automatically adjusted mA; lower-dose CT, 10 mA) and X-ray voltage (120 kVp). Each 0.625-mm-thick image was reconstructed at 0%-, 50%-, and 100%-ASIR: 0%-ASIR is reconstructed using only the filtered back-projection algorithm (FBP), while 100%-ASIR is reconstructed using the maximum ASIR and 50%-ASIR implies a blending of 50% FBP and ASIR. CAD output was compared retrospectively with the results of the reference standard which was established using a consensus panel of three radiologists. Data were analyzed using Bonferroni/Dunn's method. Radiation dose was calculated by multiplying dose-length product by a conversion coefficient of 0.021. The consensus panel found 265 non-calcified nodules ≤ 30 mm (ground-glass opacity [GGO], 103; part-solid, 34; and solid, 128). CAD sensitivity was significantly higher at 100%-ASIR [clinical routine-dose CT, 71% (overall), 49% (GGO); lower-dose CT, 52% (overall), 67% (solid)] than at 0%-ASIR [clinical routine-dose CT, 54% (overall), 25% (GGO); lower-dose CT, 36% (overall), 50% (solid)] (p < 0.001). Mean number of false-positive findings per examination was significantly higher at 100%-ASIR (clinical routine-dose CT, 8.5; lower-dose CT, 6.2) than at 0%-ASIR (clinical routine-dose CT, 4.6; lower-dose CT, 3.5; p < 0.001). Effective doses were 10.77 ± 3.41 mSv in clinical routine-dose CT and 2.67 ± 0.17 mSv in lower-dose CT. CAD sensitivity at 100%-ASIR on lower-dose CT is almost equal to that at 0%-ASIR on clinical routine-dose CT. ASIR can increase CAD sensitivity despite increased false-positive findings. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Sadovskii, Michael V
2012-01-01
This volume provides a compact presentation of modern statistical physics at an advanced level. Beginning with questions on the foundations of statistical mechanics all important aspects of statistical physics are included, such as applications to ideal gases, the theory of quantum liquids and superconductivity and the modern theory of critical phenomena. Beyond that attention is given to new approaches, such as quantum field theory methods and non-equilibrium problems.
Analysis of Discrete Mittag-Leffler Functions
N. Shobanadevi
2015-03-01
Full Text Available Discrete Mittag-Leffler functions play a major role in the development of the theory of discrete fractional calculus. In the present article, we analyze qualitative properties of discrete Mittag-Leffler functions and establish sufficient conditions for convergence, oscillation and summability of the infinite series associated with discrete Mittag-Leffler functions.
Goodman, Joseph W
2015-01-01
This book discusses statistical methods that are useful for treating problems in modern optics, and the application of these methods to solving a variety of such problems This book covers a variety of statistical problems in optics, including both theory and applications. The text covers the necessary background in statistics, statistical properties of light waves of various types, the theory of partial coherence and its applications, imaging with partially coherent light, atmospheric degradations of images, and noise limitations in the detection of light. New topics have been introduced i
Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il
2017-05-15
The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.
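A Poisson process with harmonic intensity λ(t) = c/t can be sampled by mapping a homogeneous unit-rate process through the inverse of the mean measure Λ(t) = c·ln(t/a). This is a minimal sketch using standard time-change inversion, not the paper's own construction; the parameters c, a, b and the finite window [a, b] are illustrative assumptions.

```python
import math
import random

def harmonic_poisson_sample(c, a, b, rng=random):
    # Points of a Poisson process on [a, b] with intensity lambda(t) = c / t.
    # The mean measure is Lambda(t) = c * ln(t / a); arrival epochs u of a
    # unit-rate homogeneous process map to t = a * exp(u / c).
    points = []
    u = 0.0
    while True:
        u += rng.expovariate(1.0)  # next unit-rate inter-arrival
        t = a * math.exp(u / c)
        if t > b:
            return points
        points.append(t)
```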
Eliazar, Iddo
2017-01-01
The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.
Szulc, Stefan
1965-01-01
Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field.Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then
A goodness of fit statistic for the geometric distribution
J.A. Ferreira
2003-01-01
We propose a goodness of fit statistic for the geometric distribution and compare it in terms of power, via simulation, with the chi-square statistic. The statistic is based on the Lau-Rao theorem and can be seen as a discrete analogue of the total time on test statistic. The results
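For context, the classical Pearson chi-square competitor mentioned in the abstract can be sketched for a Geometric(p) null on {1, 2, ...}. This is a standard chi-square goodness-of-fit computation, not the authors' Lau-Rao-based statistic; the pooling cutoff k_max is an illustrative choice.

```python
from collections import Counter

def chi_square_geometric(sample, p, k_max=10):
    # Pearson chi-square statistic against a Geometric(p) null with pmf
    # P(X = k) = (1-p)^(k-1) * p on {1, 2, ...}; observations above k_max
    # are pooled into a single tail cell with mass P(X > k_max) = (1-p)^k_max.
    n = len(sample)
    counts = Counter(min(x, k_max + 1) for x in sample)
    stat = 0.0
    for k in range(1, k_max + 2):
        if k <= k_max:
            prob = (1 - p) ** (k - 1) * p
        else:
            prob = (1 - p) ** k_max  # pooled tail mass
        expected = n * prob
        stat += (counts.get(k, 0) - expected) ** 2 / expected
    return stat
```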
Distribution for fermionic discrete lattice gas within the canonical ensemble
Kutner, R.; Barszczak, T.
1991-01-01
The distinct deviations from Fermi-Dirac statistics ascertained recently at low temperatures for a one-dimensional, spinless fermionic discrete lattice gas, with a conserved number of noninteracting particles hopping on nondegenerate, well-separated single-particle energy levels, are studied in numerical and theoretical terms. The generalized distribution is derived in the form n_h = {Y_h exp[(ε_h − μ)β] + 1}^(−1), valid even in the thermodynamic limit when the discreteness of the energy levels is kept. This distribution demonstrates good agreement with the data obtained numerically both by the canonical partition-function technique and by Monte Carlo simulation
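The generalized distribution is straightforward to evaluate; setting Y_h = 1 recovers the ordinary Fermi-Dirac form. The scalar function signature below is an illustrative simplification of the level-indexed quantity in the abstract.

```python
import math

def occupation(eps, mu, beta, Y=1.0):
    # Generalized occupation n_h = 1 / (Y_h * exp((eps_h - mu) * beta) + 1).
    # With Y = 1 this is the standard Fermi-Dirac distribution, equal to
    # 1/2 at eps = mu for any inverse temperature beta.
    return 1.0 / (Y * math.exp((eps - mu) * beta) + 1.0)
```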
Connection between Fourier coefficient and Discretized Cartesian path integration
Coalson, R.D.
1986-01-01
The relationship between so-called Discretized and Fourier coefficient formulations of Cartesian path integration is examined. In particular, an intimate connection between the two is established by rewriting the Discretized formulation in a manifestly Fourier-like way. This leads to improved understanding of both the limit behavior and the convergence properties of computational prescriptions based on the two formalisms. The performance of various prescriptions is compared with regard to calculation of on-diagonal statistical density matrix elements for a number of prototypical 1-d potentials. A consistent convergence order among these prescriptions is established
Scanning probe recognition microscopy investigation of tissue scaffold properties
Fan, Yuan; Chen, Qian; Ayres, Virginia M; Baczewski, Andrew D; Udpa, Lalita; Kumar, Shiva
2007-01-01
Scanning probe recognition microscopy is a new scanning probe microscopy technique which enables selective scanning along individual nanofibers within a tissue scaffold. Statistically significant data for multiple properties can be collected by repetitively fine-scanning an identical region of interest. The results of a scanning probe recognition microscopy investigation of the surface roughness and elasticity of a series of tissue scaffolds are presented. Deconvolution and statistical methods were developed and used for data accuracy along curved nanofiber surfaces. Nanofiber features were also independently analyzed using transmission electron microscopy, with results that supported the scanning probe recognition microscopy-based analysis. PMID:18203431
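One of the surface statistics collected this way, root-mean-square roughness, reduces to a short computation on a height profile. The 1-D profile input below is a simplification of the actual 2-D fine-scan data, and the deconvolution step the paper describes is omitted.

```python
import math

def rms_roughness(heights):
    # Root-mean-square roughness (Rq) of a sampled height profile,
    # measured relative to the profile's mean level.
    mean = sum(heights) / len(heights)
    return math.sqrt(sum((h - mean) ** 2 for h in heights) / len(heights))
```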
Foundations of a discrete physics
McGoveran, D.; Noyes, P.
1988-01-01
Starting from the principles of finiteness, discreteness, finite computability and absolute nonuniqueness, we develop the ordering operator calculus, a strictly constructive mathematical system having the empirical properties required by quantum mechanical and special relativistic phenomena. We show how to construct discrete distance functions, and both rectangular and spherical coordinate systems (with a discrete version of "π"). The richest discrete space constructible without a preferred axis and preserving translational and rotational invariance is shown to be a discrete 3-space with the usual symmetries. We introduce a local ordering parameter with local (proper) time-like properties and universal ordering parameters with global (cosmological) time-like properties. Constructed "attribute velocities" connect ensembles with attributes that are invariant as the appropriate time-like parameter increases. For each such attribute, we show how to construct attribute velocities which must satisfy the "relativistic Doppler shift" and the "relativistic velocity composition law," as well as the Lorentz transformations. By construction, these velocities have finite maximum and minimum values. In the space of all attributes, the minimum of these maximum velocities will predominate in all multiple attribute computations, and hence can be identified as a fundamental limiting velocity. General commutation relations are constructed which, under the physical interpretation, are shown to reduce to the usual quantum mechanical commutation relations. 50 refs., 18 figs
Yanagawa, Masahiro, E-mail: m-yanagawa@radiol.med.osaka-u.ac.jp [Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita-city, Osaka 565-0871 (Japan); Honda, Osamu, E-mail: ohonda@radiol.med.osaka-u.ac.jp [Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita-city, Osaka 565-0871 (Japan); Kikuyama, Ayano, E-mail: a-kikuyama@radiol.med.osaka-u.ac.jp [Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita-city, Osaka 565-0871 (Japan); Gyobu, Tomoko, E-mail: t-gyobu@radiol.med.osaka-u.ac.jp [Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita-city, Osaka 565-0871 (Japan); Sumikawa, Hiromitsu, E-mail: h-sumikawa@radiol.med.osaka-u.ac.jp [Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita-city, Osaka 565-0871 (Japan); Koyama, Mitsuhiro, E-mail: m-koyama@radiol.med.osaka-u.ac.jp [Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita-city, Osaka 565-0871 (Japan); Tomiyama, Noriyuki, E-mail: tomiyama@radiol.med.osaka-u.ac.jp [Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita-city, Osaka 565-0871 (Japan)
2012-10-15
Purpose: To evaluate the effects of ASIR on CAD system of pulmonary nodules using clinical routine-dose CT and lower-dose CT. Materials and methods: Thirty-five patients (body mass index, 22.17 ± 4.37 kg/m{sup 2}) were scanned by multidetector-row CT with tube currents (clinical routine-dose CT, automatically adjusted mA; lower-dose CT, 10 mA) and X-ray voltage (120 kVp). Each 0.625-mm-thick image was reconstructed at 0%-, 50%-, and 100%-ASIR: 0%-ASIR is reconstructed using only the filtered back-projection algorithm (FBP), while 100%-ASIR is reconstructed using the maximum ASIR and 50%-ASIR implies a blending of 50% FBP and ASIR. CAD output was compared retrospectively with the results of the reference standard which was established using a consensus panel of three radiologists. Data were analyzed using Bonferroni/Dunn's method. Radiation dose was calculated by multiplying dose-length product by conversion coefficient of 0.021. Results: The consensus panel found 265 non-calcified nodules ≤30 mm (ground-glass opacity [GGO], 103; part-solid, 34; and solid, 128). CAD sensitivity was significantly higher at 100%-ASIR [clinical routine-dose CT, 71% (overall), 49% (GGO); lower-dose CT, 52% (overall), 67% (solid)] than at 0%-ASIR [clinical routine-dose CT, 54% (overall), 25% (GGO); lower-dose CT, 36% (overall), 50% (solid)] (p < 0.001). Mean number of false-positive findings per examination was significantly higher at 100%-ASIR (clinical routine-dose CT, 8.5; lower-dose CT, 6.2) than at 0%-ASIR (clinical routine-dose CT, 4.6; lower-dose CT, 3.5; p < 0.001). Effective doses were 10.77 ± 3.41 mSv in clinical routine-dose CT and 2.67 ± 0.17 mSv in lower-dose CT. Conclusion: CAD sensitivity at 100%-ASIR on lower-dose CT is almost equal to that at 0%-ASIR on clinical routine-dose CT. ASIR can increase CAD sensitivity despite increased false-positive findings.
Yanagawa, Masahiro; Honda, Osamu; Kikuyama, Ayano; Gyobu, Tomoko; Sumikawa, Hiromitsu; Koyama, Mitsuhiro; Tomiyama, Noriyuki
2012-01-01
Purpose: To evaluate the effects of ASIR on CAD system of pulmonary nodules using clinical routine-dose CT and lower-dose CT. Materials and methods: Thirty-five patients (body mass index, 22.17 ± 4.37 kg/m 2 ) were scanned by multidetector-row CT with tube currents (clinical routine-dose CT, automatically adjusted mA; lower-dose CT, 10 mA) and X-ray voltage (120 kVp). Each 0.625-mm-thick image was reconstructed at 0%-, 50%-, and 100%-ASIR: 0%-ASIR is reconstructed using only the filtered back-projection algorithm (FBP), while 100%-ASIR is reconstructed using the maximum ASIR and 50%-ASIR implies a blending of 50% FBP and ASIR. CAD output was compared retrospectively with the results of the reference standard which was established using a consensus panel of three radiologists. Data were analyzed using Bonferroni/Dunn's method. Radiation dose was calculated by multiplying dose-length product by conversion coefficient of 0.021. Results: The consensus panel found 265 non-calcified nodules ≤30 mm (ground-glass opacity [GGO], 103; part-solid, 34; and solid, 128). CAD sensitivity was significantly higher at 100%-ASIR [clinical routine-dose CT, 71% (overall), 49% (GGO); lower-dose CT, 52% (overall), 67% (solid)] than at 0%-ASIR [clinical routine-dose CT, 54% (overall), 25% (GGO); lower-dose CT, 36% (overall), 50% (solid)] (p < 0.001). Mean number of false-positive findings per examination was significantly higher at 100%-ASIR (clinical routine-dose CT, 8.5; lower-dose CT, 6.2) than at 0%-ASIR (clinical routine-dose CT, 4.6; lower-dose CT, 3.5; p < 0.001). Effective doses were 10.77 ± 3.41 mSv in clinical routine-dose CT and 2.67 ± 0.17 mSv in lower-dose CT. Conclusion: CAD sensitivity at 100%-ASIR on lower-dose CT is almost equal to that at 0%-ASIR on clinical routine-dose CT. ASIR can increase CAD sensitivity despite increased false-positive findings
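The dose calculation stated in the methods (dose-length product multiplied by a conversion coefficient of 0.021) is a one-line computation. The function name and unit annotations are ours, not from the paper.

```python
def effective_dose_mSv(dlp_mGy_cm, k=0.021):
    # Effective dose estimate as described in the study's methods:
    # dose-length product (mGy*cm) times a conversion coefficient
    # k = 0.021 mSv/(mGy*cm) for chest CT.
    return dlp_mGy_cm * k
```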
Ovesen, Christian; Jakobsen, Janus Christian; Gluud, Christian; Steiner, Thorsten; Law, Zhe; Flaherty, Katie; Dineen, Rob A; Bath, Philip M; Sprigg, Nikola; Christensen, Hanne
2018-06-13
We present the statistical analysis plan of a prespecified Tranexamic Acid for Hyperacute Primary Intracerebral Haemorrhage (TICH)-2 sub-study aiming to investigate whether tranexamic acid has a different effect in intracerebral haemorrhage patients with the spot sign on admission compared to spot sign negative patients. The TICH-2 trial recruited more than 2000 participants with intracerebral haemorrhage arriving in hospital within 8 h after symptom onset. They were included irrespective of radiological signs of on-going haematoma expansion. Participants were randomised to tranexamic acid versus matching placebo. In this subgroup analysis, we will include all participants in TICH-2 with a computed tomography angiography on admission allowing adjudication of the participants' spot sign status. The primary outcome will be the ability of tranexamic acid to limit absolute haematoma volume on computed tomography at 24 h (± 12 h) after randomisation among spot sign positive and spot sign negative participants, respectively. Across all outcome measures, the effect of tranexamic acid in spot sign positive/negative participants will be compared using tests of interaction. This sub-study will investigate the important clinical hypothesis that spot sign positive patients might benefit more from administration of tranexamic acid than spot sign negative patients. Trial registration ISRCTN93732214 ( http://www.isrctn.com ).
Petocz, Peter; Sowey, Eric
2012-01-01
The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the…
Petocz, Peter; Sowey, Eric
2008-01-01
In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…
Lyons, L.
2016-01-01
Accelerators and detectors are expensive, both in terms of money and human effort. It is thus important to invest effort in performing a good statistical analysis of the data, in order to extract the best information from it. This series of five lectures deals with practical aspects of statistical issues that arise in typical High Energy Physics analyses.
Discrete differential geometry. Consistency as integrability
Bobenko, Alexander I.; Suris, Yuri B.
2005-01-01
A new field of discrete differential geometry is presently emerging on the border between differential and discrete geometry. Whereas classical differential geometry investigates smooth geometric shapes (such as surfaces), and discrete geometry studies geometric shapes with a finite number of elements (such as polyhedra), discrete differential geometry aims at the development of discrete equivalents of notions and methods of smooth surface theory. Current interest in this field derives not ...
Integrable structure in discrete shell membrane theory.
Schief, W K
2014-05-08
We present natural discrete analogues of two integrable classes of shell membranes. By construction, these discrete shell membranes are in equilibrium with respect to suitably chosen internal stresses and external forces. The integrability of the underlying equilibrium equations is proved by relating the geometry of the discrete shell membranes to discrete O surface theory. We establish connections with generalized barycentric coordinates and nine-point centres and identify a discrete version of the classical Gauss equation of surface theory.
Zhou Yuan; Dai Ruping; Cao Cheng; Jing Baolian
2003-01-01
Objective: To evaluate the clinical efficacy of EBCT in diagnosing congenital discrete subaortic stenosis. Methods: Data of four patients with congenital discrete subaortic stenosis diagnosed by EBCT were retrospectively analyzed and compared with the findings of surgery and histopathologic examination. Results: Contrast-enhanced EBCT scanning clearly demonstrated a direct non-opacified sign in the subvalvular region of the left ventricle in all four patients, as well as the associated cardiovascular anomalies. Movie-mode scanning showed the movement of the aortic valve and the 'discrete membrane', and revealed distinct topography of the subaortic outflow tract. Conclusion: EBCT is highly valuable in the diagnosis of congenital discrete subaortic stenosis and associated anomalies by clearly demonstrating the subaortic outflow tract topography and complicated cardiovascular malformations. EBCT could be a complementary examination to cardioangiography, and could replace cineangiography in postoperative follow-up
Degree distribution in discrete case
Wang, Li-Na; Chen, Bin; Yan, Zai-Zai
2011-01-01
Vertex degree in many network models and real-life networks is limited to non-negative integers. By means of measure and integral, the relation between the degree distribution and the cumulative degree distribution in the discrete case is analyzed. The degree distribution, obtained by differentiating its cumulative, is only suitable for the continuous case or the discrete case with constant degree change. When the degree change is not constant but proportional to the degree itself, the power-law degree distribution and its cumulative have the same exponent, and the mean value is finite for power-law exponent greater than 1. -- Highlights: → Degree change is the crux for using the cumulative degree distribution method. → It suits the discrete case with constant degree change. → If degree change is proportional to degree, the power-law degree distribution and its cumulative have the same exponent. → In addition, the mean value is finite for power-law exponent greater than 1.
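The distinction the abstract draws can be made concrete with a toy computation. The six-vertex adjacency list and all variable names below are invented for illustration, not taken from the paper: it contrasts the discrete degree distribution P(k) with its cumulative, the fraction of vertices of degree at least k.

```python
from collections import Counter

# Toy undirected graph as an adjacency list (invented for illustration)
adjacency = {
    0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2], 4: [5], 5: [4],
}

degrees = [len(nbrs) for nbrs in adjacency.values()]
n = len(degrees)

# Degree distribution P(k): fraction of vertices with degree exactly k
pk = {k: c / n for k, c in Counter(degrees).items()}

# Cumulative distribution: fraction of vertices with degree >= k
cum = {k: sum(p for kk, p in pk.items() if kk >= k) for k in pk}
```

For this graph the degrees are 3, 2, 3, 2, 1, 1, so P(1) = P(2) = P(3) = 1/3 while the cumulative at k = 1 is 1.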
Nick, Todd G
2007-01-01
Statistics is defined by the Medical Subject Headings (MeSH) thesaurus as the science and art of collecting, summarizing, and analyzing data that are subject to random variation. The two broad categories of summarizing and analyzing data are referred to as descriptive and inferential statistics. This chapter considers the science and art of summarizing data where descriptive statistics and graphics are used to display data. In this chapter, we discuss the fundamentals of descriptive statistics, including describing qualitative and quantitative variables. For describing quantitative variables, measures of location and spread, for example the standard deviation, are presented along with graphical presentations. We also discuss distributions of statistics, for example the variance, as well as the use of transformations. The concepts in this chapter are useful for uncovering patterns within the data and for effectively presenting the results of a project.
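As a minimal illustration of the descriptive statistics discussed, Python's standard library computes the common measures of location and spread; the sample values below are invented.

```python
import statistics

sample = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4]   # invented sample

mean = statistics.mean(sample)       # measure of location
median = statistics.median(sample)   # robust measure of location
sd = statistics.stdev(sample)        # measure of spread (sample std. dev.)
```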
On the discrete Gabor transform and the discrete Zak transform
Bastiaans, M.J.; Geilen, M.C.W.
1996-01-01
Gabor's expansion of a discrete-time signal into a set of shifted and modulated versions of an elementary signal (or synthesis window) and the inverse operation -- the Gabor transform -- with which Gabor's expansion coefficients can be determined, are introduced. It is shown how, in the case of a
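A hedged sketch of the analysis step only: Gabor coefficients computed as inner products of a signal with shifted and modulated copies of an elementary window. The signal, the rectangular window, and the lattice parameters are all invented, and this is not Bastiaans and Geilen's specific algorithm.

```python
import cmath

N, a, M = 16, 4, 8    # signal length, time-shift step, number of frequencies
x = [((i * 37) % 11) - 5.0 for i in range(N)]    # deterministic toy signal
g = [1.0 if i < 8 else 0.0 for i in range(N)]    # rectangular window (assumed)

def atom(m, k):
    # Elementary signal shifted cyclically by m*a samples and
    # modulated to the k-th frequency channel
    return [g[(i - m * a) % N] * cmath.exp(2j * cmath.pi * k * i / M)
            for i in range(N)]

coeffs = {}
for m in range(N // a):
    for k in range(M):
        at = atom(m, k)
        # Gabor coefficient: inner product <x, atom>
        coeffs[(m, k)] = sum(x[i] * at[i].conjugate() for i in range(N))
```

The (0, 0) coefficient is just the sum of the signal under the unshifted, unmodulated window; recovering the expansion coefficients in general requires the dual (analysis) window discussed in the article.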
Discrete Choice and Rational Inattention
Fosgerau, Mogens; Melo, Emerson; de Palma, André
2017-01-01
This paper establishes a general equivalence between discrete choice and rational inattention models. Matejka and McKay (2015, AER) showed that when information costs are modelled using the Shannon entropy, the resulting choice probabilities in the rational inattention model take the multinomial...... logit form. We show that when information costs are modelled using a class of generalized entropies, then the choice probabilities in any rational inattention model are observationally equivalent to some additive random utility discrete choice model and vice versa. This equivalence arises from convex......
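In the Shannon-entropy case the abstract cites, the rational-inattention choice probabilities take the multinomial logit (softmax) form, which is easy to sketch; the utilities and the cost parameter below are invented for illustration.

```python
import math

utilities = [1.0, 2.0, 0.5]   # payoffs of the discrete alternatives (invented)
lam = 1.0                     # Shannon information-cost parameter (invented)

# Multinomial logit: P(i) proportional to exp(v_i / lam)
weights = [math.exp(v / lam) for v in utilities]
total = sum(weights)
probs = [w / total for w in weights]
```

The highest-utility alternative gets the largest choice probability, and a larger information cost `lam` flattens the distribution toward uniform.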
Optimization of Operations Resources via Discrete Event Simulation Modeling
Joshi, B.; Morris, D.; White, N.; Unal, R.
1996-01-01
The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
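A toy sketch of the genetic-algorithm idea described above: integer-valued decision variables evolved by selection, one-point crossover, and ±1 mutation, with a cheap invented cost function standing in for the discrete event simulation. All names and parameters are assumptions for illustration only.

```python
import random

random.seed(1)
TARGET = [3, 1, 4, 1, 5]   # "optimal" resource levels (invented toy)

def cost(x):
    # Stand-in for evaluating one discrete event simulation run
    return sum(abs(a - b) for a, b in zip(x, TARGET))

def mutate(x):
    i = random.randrange(len(x))
    y = x[:]
    y[i] = max(0, y[i] + random.choice([-1, 1]))   # keep levels non-negative
    return y

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randrange(10) for _ in range(5)] for _ in range(20)]
for _ in range(300):
    pop.sort(key=cost)     # selection: keep the fittest half (elitism)
    elite = pop[:10]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(10)]

best = min(pop, key=cost)
```

Note the GA needs only cost evaluations, not continuity or differentiability, which is the property the abstract exploits for stochastic simulation landscapes.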
Discrete hierarchical organization of social group sizes.
Zhou, W-X; Sornette, D; Hill, R A; Dunbar, R I M
2005-02-22
The 'social brain hypothesis' for the evolution of large brains in primates has led to evidence for the coevolution of neocortical size and social group sizes, suggesting that there is a cognitive constraint on group size that depends, in some way, on the volume of neural material available for processing and synthesizing information on social relationships. More recently, work on both human and non-human primates has suggested that social groups are often hierarchically structured. We combine data on human grouping patterns in a comprehensive and systematic study. Using fractal analysis, we identify, with high statistical confidence, a discrete hierarchy of group sizes with a preferred scaling ratio close to three: rather than a single or a continuous spectrum of group sizes, humans spontaneously form groups of preferred sizes organized in a geometrical series approximating 3-5, 9-15, 30-45, etc. Such discrete scale invariance could be related to that identified in signatures of herding behaviour in financial markets and might reflect a hierarchical processing of social nearness by human brains.
Blakemore, J S
1962-01-01
Semiconductor Statistics presents statistics aimed at complementing existing books on the relationships between carrier densities and transport effects. The book is divided into two parts. Part I provides introductory material on the electron theory of solids, and then discusses carrier statistics for semiconductors in thermal equilibrium. Of course a solid cannot be in true thermodynamic equilibrium if any electrical current is passed; but when currents are reasonably small the distribution function is but little perturbed, and the carrier distribution for such a "quasi-equilibrium" co
Wannier, Gregory Hugh
1966-01-01
Until recently, the field of statistical physics was traditionally taught as three separate subjects: thermodynamics, statistical mechanics, and kinetic theory. This text, a forerunner in its field and now a classic, was the first to recognize the outdated reasons for their separation and to combine the essentials of the three subjects into one unified presentation of thermal physics. It has been widely adopted in graduate and advanced undergraduate courses, and is recommended throughout the field as an indispensable aid to the independent study and research of statistical physics.Designed for
Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James
2014-01-01
Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.
Radiopharmaceutical scanning agents
1976-01-01
This invention is directed to dispersions useful in preparing radiopharmaceutical scanning agents, to technetium labelled dispersions, to methods for preparing such dispersions and to their use as scanning agents
Thyroid Scan and Uptake
Thyroid scan and uptake uses small amounts of radioactive materials called radiotracers ... Nuclear medicine is a branch of medical imaging that uses small amounts of radioactive material to diagnose and determine ...
Nuclear Heart Scan
Also known as Nuclear Stress Test ...
What will I experience during and after the procedure? Most thyroid scan and thyroid uptake ... you otherwise, you may resume your normal activities after your nuclear medicine scan. If any special instructions ...
RBC nuclear scan
An RBC nuclear scan uses small amounts of radioactive material to ...
Wendelberger, Laura Jean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-08-08
In large datasets, it is time consuming or even impossible to pick out interesting images. Our proposed solution is to find statistics to quantify the information in each image and use those to identify and pick out images of interest.
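One example of the kind of per-image statistic proposed: the Shannon entropy of the pixel histogram, a crude proxy for how much information an image carries. The tiny 4x4 image below is invented; this is not the authors' specific statistic.

```python
import math
from collections import Counter

image = [[0, 0, 1, 1],
         [0, 2, 2, 1],
         [3, 3, 2, 1],
         [3, 0, 0, 2]]   # tiny grayscale image, levels 0..3 (invented)

pixels = [p for row in image for p in row]
hist = Counter(pixels)
n = len(pixels)

# Shannon entropy of the gray-level histogram, in bits
entropy = -sum((c / n) * math.log2(c / n) for c in hist.values())
```

Ranking images by such a scalar statistic lets one flag unusually high- or low-information images in a large dataset without inspecting each one.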
Department of Homeland Security — Accident statistics available on the Coast Guard’s website by state, year, and one variable to obtain tables and/or graphs. Data from reports has been loaded for...
U.S. Department of Health & Human Services — The CMS Center for Strategic Planning produces an annual CMS Statistics reference booklet that provides a quick reference for summary information about health...
Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Data about the usage of the WPRDC site and its various datasets, obtained by combining Google Analytics statistics with information from the WPRDC's data portal.
Serdobolskii, Vadim Ivanovich
2007-01-01
This monograph presents the mathematical theory of statistical models described by an essentially large number of unknown parameters, comparable with the sample size but possibly much larger. In this sense, the proposed theory can be called "essentially multiparametric". It is developed on the basis of the Kolmogorov asymptotic approach, in which sample size increases along with the number of unknown parameters. This theory opens a way to the solution of central problems of multivariate statistics which up until now have not been solved. Traditional statistical methods based on the idea of infinite sampling often break down in the solution of real problems and, depending on the data, can be inefficient, unstable and even inapplicable. In this situation, practical statisticians are forced to use various heuristic methods in the hope that they will find a satisfactory solution. The mathematical theory developed in this book presents a regular technique for implementing new, more efficient versions of statistical procedures. ...
Gonorrhea Statistics
Sexually Transmitted Diseases (STDs), CDC ...
Tryggestad, Kjell
2004-01-01
The study aims to describe how the inclusion and exclusion of materials and calculative devices construct the boundaries and distinctions between statistical facts and artifacts in economics. My methodological approach is inspired by John Graunt's (1667) Political arithmetic and more recent work...... within constructivism and the field of Science and Technology Studies (STS). The result of this approach is here termed reversible statistics, reconstructing the findings of a statistical study within economics in three different ways. It is argued that all three accounts are quite normal, albeit...... in different ways. The presence and absence of diverse materials, both natural and political, is what distinguishes them from each other. Arguments are presented for a more symmetric relation between the scientific statistical text and the reader. I will argue that a more symmetric relation can be achieved......
Discrete Hamiltonian evolution and quantum gravity
Husain, Viqar; Winkler, Oliver
2004-01-01
We study constrained Hamiltonian systems by utilizing general forms of time discretization. We show that for explicit discretizations, the requirement of preserving the canonical Poisson bracket under discrete evolution imposes strong conditions on both allowable discretizations and Hamiltonians. These conditions permit time discretizations for a limited class of Hamiltonians, which does not include homogeneous cosmological models. We also present two general classes of implicit discretizations which preserve Poisson brackets for any Hamiltonian. Both types of discretizations generically do not preserve first class constraint algebras. Using this observation, we show that time discretization provides a complicated time gauge fixing for quantum gravity models, which may be compared with the alternative procedure of gauge fixing before discretization
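The paper's point that explicit discretizations generally fail to preserve the canonical Poisson bracket can be seen in a toy linear example, not the paper's quantum-gravity construction: for the harmonic oscillator H = (p^2 + q^2)/2, one explicit Euler step scales phase-space area by 1 + dt^2, while the semi-implicit (symplectic) Euler step preserves it exactly. The sketch checks the Jacobian determinants of the two update maps.

```python
dt = 0.1

# Explicit Euler: q' = q + dt*p, p' = p - dt*q
# Jacobian [[1, dt], [-dt, 1]] has determinant 1 + dt^2 > 1:
# phase-space area grows, so the canonical bracket is not preserved.
det_explicit = 1.0 * 1.0 - dt * (-dt)

# Symplectic (semi-implicit) Euler: p' = p - dt*q, then q' = q + dt*p',
# so q' = (1 - dt^2)*q + dt*p.  Jacobian [[1 - dt^2, dt], [-dt, 1]]
# has determinant exactly 1: area, hence the bracket, is preserved.
det_symplectic = (1.0 - dt * dt) * 1.0 - dt * (-dt)
```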
MacKenzie, Dana
2004-01-01
The drawbacks of using 19th-century mathematics in physics and astronomy are illustrated. To continue expanding our knowledge of the cosmos, scientists will have to come to terms with modern statistics. Some researchers have deliberately started importing techniques that are used in medical research. However, the physicists need to identify the brand of statistics that will be suitable for them, and make a choice between the Bayesian and the frequentist approaches. (Edited abstract).
Mohamed, Mamdouh S.; Hirani, Anil N.; Samtaney, Ravi
2016-01-01
A conservative discretization of incompressible Navier–Stokes equations is developed based on discrete exterior calculus (DEC). A distinguishing feature of our method is the use of an algebraic discretization of the interior product operator and a
Solving discrete zero point problems
van der Laan, G.; Talman, A.J.J.; Yang, Z.F.
2004-01-01
In this paper an algorithm is proposed to find a discrete zero point of a function on the collection of integral points in the n-dimensional Euclidean space R^n. Starting with a given integral point, the algorithm generates a finite sequence of adjacent integral simplices of varying dimension and
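The paper's algorithm walks through adjacent integral simplices; as a deliberately crude stand-in rather than that method, the sketch below brute-force scans integral points in a box for a discrete zero point of an invented integer-valued function f.

```python
def f(x):
    # Invented integer-valued function on Z^2, for illustration only
    return (x[0] - 2) ** 3 + (x[1] + 1)

# Brute-force scan of the integral points in [-5, 5] x [-5, 5]
zero = next(
    ((i, j) for i in range(-5, 6) for j in range(-5, 6) if f((i, j)) == 0),
    None,
)
```

Brute force costs grow exponentially with dimension, which is exactly why simplicial pivoting algorithms like the one proposed here matter.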
Succinct Sampling from Discrete Distributions
Bringmann, Karl; Larsen, Kasper Green
2013-01-01
We revisit the classic problem of sampling from a discrete distribution: Given n non-negative w-bit integers x_1,...,x_n, the task is to build a data structure that allows sampling i with probability proportional to x_i. The classic solution is Walker's alias method that takes, when implemented...
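Walker's alias method mentioned above can be sketched as follows (Vose's variant of the table construction; the weights are invented): O(n) preprocessing builds two tables, after which each sample costs one table lookup and at most two random draws.

```python
import random

def build_alias(weights):
    # Vose's O(n) construction of the probability and alias tables
    n = len(weights)
    total = sum(weights)
    scaled = [w * n / total for w in weights]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]          # donate mass to fill column s
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                   # leftovers are full columns
        prob[i] = 1.0
    return prob, alias

def sample(prob, alias, rng):
    # O(1) sampling: pick a column, then keep it or take its alias
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]

prob, alias = build_alias([1, 2, 3, 4])       # invented weights
rng = random.Random(0)
counts = [0] * 4
for _ in range(100_000):
    counts[sample(prob, alias, rng)] += 1
```

Empirical frequencies should be close to 0.1, 0.2, 0.3, 0.4, matching the normalized weights.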
Symplectomorphisms and discrete braid invariants
Czechowski, Aleksander; Vandervorst, Robert
2017-01-01
Area and orientation preserving diffeomorphisms of the standard 2-disc, referred to as symplectomorphisms of D2, allow decompositions in terms of positive twist diffeomorphisms. Using the latter decomposition, we utilize the Conley index theory of discrete braid classes as introduced in Ghrist et
The remarkable discreteness of being
Life is a discrete, stochastic phenomenon: for a biological organism, the time of the two most important events of its life (reproduction and death) is random and these events change the number of individuals of the species by single units. These facts can have surprising, counterintuitive consequences. I review here three ...
Discrete tomography in neutron radiography
Kuba, Attila; Rodek, Lajos; Kiss, Zoltan; Rusko, Laszlo; Nagy, Antal; Balasko, Marton
2005-01-01
Discrete tomography (DT) is an imaging technique for reconstructing discrete images from their projections using the knowledge that the object to be reconstructed contains only a few homogeneous materials characterized by known discrete absorption values. One of the main reasons for applying DT is that we will hopefully require relatively few projections. Using discreteness and some a priori information (such as an approximate shape of the object) we can apply two DT methods in neutron imaging by reducing the problem to an optimization task. The first method is a special one because it is only suitable if the object is composed of cylinders and sphere shapes. The second method is a general one in the sense that it can be used for reconstructing objects of any shape. Software was developed and physical experiments performed in order to investigate the effects of several reconstruction parameters: the number of projections, noise levels, and complexity of the object to be reconstructed. We give a summary of the experimental results and make a comparison of the results obtained using a classical reconstruction technique (FBP). The programs we developed are available in our DT reconstruction program package DIRECT
Engdahl, L.W.; Batter, J.F. Jr.; Stout, K.J.
1977-01-01
A scanning system for a gamma camera providing for the overlapping of adjacent scan paths is described. A collimator mask having tapered edges provides for a graduated reduction in intensity of radiation received by a detector thereof, the reduction in intensity being graduated in a direction normal to the scanning path to provide a blending of images of adjacent scan paths. 31 claims, 15 figures
Discrete and mesoscopic regimes of finite-size wave turbulence
L'vov, V. S.; Nazarenko, S.
2010-01-01
Bounding volume results in discreteness of eigenmodes in wave systems. This leads to a depletion or complete loss of wave resonances (three-wave, four-wave, etc.), which has a strong effect on wave turbulence (WT), i.e., on the statistical behavior of broadband sets of weakly nonlinear waves. This paper describes three different regimes of WT realizable for different levels of the wave excitations: discrete, mesoscopic and kinetic WT. Discrete WT comprises chaotic dynamics of interacting wave 'clusters' consisting of a discrete (often finite) number of connected resonant wave triads (or quartets). Kinetic WT refers to the infinite-box theory, described by well-known wave-kinetic equations. Mesoscopic WT is a regime in which either the discrete and the kinetic evolutions alternate or in which neither of these two types is purely realized. We argue that in mesoscopic systems the wave spectrum experiences a sandpile behavior. Importantly, the mesoscopic regime is realized for a broad range of wave amplitudes which typically spans several orders of magnitude, and not just for a particular intermediate level.
Discrete elements method of neutron transport
Mathews, K.A.
1988-01-01
In this paper a new neutron transport method, called discrete elements (L_N), is derived and compared to discrete ordinates methods, theoretically and by numerical experimentation. The discrete elements method is based on discretizing the Boltzmann equation over a set of elements of angle. The discrete elements method is shown to be more cost-effective than discrete ordinates, in terms of accuracy versus execution time and storage, for the cases tested. In a two-dimensional test case, a vacuum duct in a shield, the L_N method is more consistently convergent toward a Monte Carlo benchmark solution
Discrete gauge symmetries in discrete MSSM-like orientifolds
Ibáñez, L.E.; Schellekens, A.N.; Uranga, A.M.
2012-01-01
Motivated by the necessity of discrete Z_N symmetries in the MSSM to ensure baryon stability, we study the origin of discrete gauge symmetries from open string sector U(1)'s in orientifolds based on rational conformal field theory. By means of an explicit construction, we find an integral basis for the couplings of axions and U(1) factors for all simple current MIPFs and orientifolds of all 168 Gepner models, a total of 32 990 distinct cases. We discuss how the presence of discrete symmetries surviving as a subgroup of broken U(1)'s can be derived using this basis. We apply this procedure to models with MSSM chiral spectrum, concretely to all known U(3)×U(2)×U(1)×U(1) and U(3)×Sp(2)×U(1)×U(1) configurations with chiral bi-fundamentals, but no chiral tensors, as well as some SU(5) GUT models. We find examples of models with Z_2 (R-parity) and Z_3 symmetries that forbid certain B and/or L violating MSSM couplings. Their presence is however relatively rare, at the level of a few percent of all cases.
Goodman, J. W.
This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.
Schwabl, Franz
2006-01-01
The completely revised new edition of the classical book on Statistical Mechanics covers the basic concepts of equilibrium and non-equilibrium statistical physics. In addition to a deductive approach to equilibrium statistics and thermodynamics based on a single hypothesis - the form of the microcanonical density matrix - this book treats the most important elements of non-equilibrium phenomena. Intermediate calculations are presented in complete detail. Problems at the end of each chapter help students to consolidate their understanding of the material. Beyond the fundamentals, this text demonstrates the breadth of the field and its great variety of applications. Modern areas such as renormalization group theory, percolation, stochastic equations of motion and their applications to critical dynamics, kinetic theories, as well as fundamental considerations of irreversibility, are discussed. The text will be useful for advanced students of physics and other natural sciences; a basic knowledge of quantum mechan...
Jana, Madhusudan
2015-01-01
Statistical Mechanics is self-sufficient and written in a lucid manner, keeping in mind the examination system of the universities. The need to study this subject and its relation to thermodynamics is discussed in detail. Starting from the Liouville theorem, statistical mechanics is developed thoroughly. All three types of statistical distribution functions are derived separately, with their range of applications and limitations. Non-interacting ideal Bose and Fermi gases are discussed thoroughly. Properties of liquid He-II and the corresponding models are depicted. White dwarfs and condensed matter physics, transport phenomena (thermal and electrical conductivity, Hall effect, magnetoresistance, viscosity, diffusion, etc.) are discussed. A basic understanding of the Ising model is given to explain the phase transition. The book ends with detailed coverage of the method of ensembles (namely microcanonical, canonical and grand canonical) and their applications. Various numerical and conceptual problems ar...
Guénault, Tony
2007-01-01
In this revised and enlarged second edition of an established text Tony Guénault provides a clear and refreshingly readable introduction to statistical physics, an essential component of any first degree in physics. The treatment itself is self-contained and concentrates on an understanding of the physical ideas, without requiring a high level of mathematical sophistication. A straightforward quantum approach to statistical averaging is adopted from the outset (easier, the author believes, than the classical approach). The initial part of the book is geared towards explaining the equilibrium properties of a simple isolated assembly of particles. Thus, several important topics, for example an ideal spin-½ solid, can be discussed at an early stage. The treatment of gases gives full coverage to Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein statistics. Towards the end of the book the student is introduced to a wider viewpoint and new chapters are included on chemical thermodynamics, interactions in, for exam...
Mandl, Franz
1988-01-01
The Manchester Physics Series General Editors: D. J. Sandiford; F. Mandl; A. C. Phillips Department of Physics and Astronomy, University of Manchester Properties of Matter B. H. Flowers and E. Mendoza Optics Second Edition F. G. Smith and J. H. Thomson Statistical Physics Second Edition F. Mandl Electromagnetism Second Edition I. S. Grant and W. R. Phillips Statistics R. J. Barlow Solid State Physics Second Edition J. R. Hook and H. E. Hall Quantum Mechanics F. Mandl Particle Physics Second Edition B. R. Martin and G. Shaw The Physics of Stars Second Edition A. C. Phillips Computing for Scient
Levine-Wissing, Robin
2012-01-01
All Access for the AP® Statistics Exam Book + Web + Mobile Everything you need to prepare for the Advanced Placement® exam, in a study system built around you! There are many different ways to prepare for an Advanced Placement® exam. What's best for you depends on how much time you have to study and how comfortable you are with the subject matter. To score your highest, you need a system that can be customized to fit you: your schedule, your learning style, and your current level of knowledge. This book, and the online tools that come with it, will help you personalize your AP® Statistics prep
Davidson, Norman
2003-01-01
Clear and readable, this fine text assists students in achieving a grasp of the techniques and limitations of statistical mechanics. The treatment follows a logical progression from elementary to advanced theories, with careful attention to detail and mathematical development, and is sufficiently rigorous for introductory or intermediate graduate courses.Beginning with a study of the statistical mechanics of ideal gases and other systems of non-interacting particles, the text develops the theory in detail and applies it to the study of chemical equilibrium and the calculation of the thermody
Positivity for Convective Semi-discretizations
Fekete, Imre; Ketcheson, David I.; Loczi, Lajos
2017-01-01
We propose a technique for investigating stability properties like positivity and forward invariance of an interval for method-of-lines discretizations, and apply the technique to study positivity preservation for a class of TVD semi-discretizations
Inference and finite population sampling
Kunte, Sudhakar
Elements of statistical computing are discussed in this series. ... which captain gets an option to decide whether to field first or bat first ... may of course not be fair, in the sense that the team which wins ... describe two methods of drawing a random number between 0.
Schrödinger, Erwin
1952-01-01
Nobel Laureate's brilliant attempt to develop a simple, unified standard method of dealing with all cases of statistical thermodynamics - classical, quantum, Bose-Einstein, Fermi-Dirac, and more.The work also includes discussions of Nernst theorem, Planck's oscillator, fluctuations, the n-particle problem, problem of radiation, much more.
Quantum chaos on discrete graphs
Smilansky, Uzy
2007-01-01
Adapting a method developed for the study of quantum chaos on quantum (metric) graphs (Kottos and Smilansky 1997 Phys. Rev. Lett. 79 4794, Kottos and Smilansky 1999 Ann. Phys., NY 274 76), spectral ζ functions and trace formulae for discrete Laplacians on graphs are derived. This is achieved by expressing the spectral secular equation in terms of the periodic orbits of the graph and obtaining functions which belong to the class of ζ functions proposed originally by Ihara (1966 J. Math. Soc. Japan 18 219) and expanded by subsequent authors (Stark and Terras 1996 Adv. Math. 121 124, Kotani and Sunada 2000 J. Math. Sci. Univ. Tokyo 7 7). Finally, a model of 'classical dynamics' on the discrete graph is proposed. It is analogous to the corresponding classical dynamics derived for quantum graphs (Kottos and Smilansky 1997 Phys. Rev. Lett. 79 4794, Kottos and Smilansky 1999 Ann. Phys., NY 274 76). (fast track communication)
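The spectral objects involved here are eigenvalues of discrete Laplacians on graphs. A minimal plain-Python check for the n-cycle, where the spectrum is known in closed form: v_k(j) = cos(2*pi*k*j/n) is an eigenvector of the cycle Laplacian with eigenvalue 2 - 2*cos(2*pi*k/n). The choice n = 8, k = 3 is an arbitrary example.

```python
import math

n, k = 8, 3
theta = 2 * math.pi * k / n
v = [math.cos(theta * j) for j in range(n)]    # candidate eigenvector

# Discrete Laplacian on the n-cycle: (L v)(j) = 2 v(j) - v(j-1) - v(j+1)
Lv = [2 * v[j] - v[(j - 1) % n] - v[(j + 1) % n] for j in range(n)]

lam = 2 - 2 * math.cos(theta)                  # predicted eigenvalue
```

The identity follows from cos((j-1)θ) + cos((j+1)θ) = 2 cos θ cos(jθ), the same periodic-orbit-free computation that trace formulae generalize.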
Dark energy from discrete spacetime.
Aaron D Trout
Dark energy accounts for most of the matter-energy content of our universe, yet current theories of its origin rely on radical physical assumptions such as the holographic principle or controversial anthropic arguments. We give a better motivated explanation for dark energy, claiming that it arises from a small negative scalar curvature present even in empty spacetime. The vacuum has this curvature because spacetime is fundamentally discrete and there are more ways for a discrete geometry to have negative curvature than positive. We explicitly compute this effect using a variant of the well-known dynamical-triangulations (DT) model for quantum gravity. Our model predicts a time-varying non-zero cosmological constant with a current value, [Formula: see text] in natural units, in agreement with observation. This calculation is made possible by a novel characterization of the possible DT action values combined with numerical evidence concerning their degeneracies.
Applied geometry and discrete mathematics
Gritzmann, Peter; Sturmfels, Bernd
1991-01-01
This volume, published jointly with the Association for Computing Machinery, comprises a collection of research articles celebrating the occasion of Victor Klee's sixty-fifth birthday in September 1990. During his long career, Klee has made contributions to a wide variety of areas, such as discrete and computational geometry, convexity, combinatorics, graph theory, functional analysis, mathematical programming and optimization, and theoretical computer science. In addition, Klee made important contributions to mathematics education, mathematical methods in economics and the decision sciences, applications of discrete mathematics in the biological and social sciences, and the transfer of knowledge from applied mathematics to industry. In honor of Klee's achievements, this volume presents more than forty papers on topics related to Klee's research. While the majority of the papers are research articles, a number of survey articles are also included. Mirroring the breadth of Klee's mathematical contributions, th...
Emissivity of discretized diffusion problems
Densmore, Jeffery D.; Davidson, Gregory; Carrington, David B.
2006-01-01
The numerical modeling of radiative transfer by the diffusion approximation can produce artificially damped radiation propagation if spatial cells are too optically thick. In this paper, we investigate this nonphysical behavior at external problem boundaries by examining the emissivity of the discretized diffusion approximation. We demonstrate that the standard cell-centered discretization produces an emissivity that is too low for optically thick cells, a situation that leads to the lack of radiation propagation. We then present a modified boundary condition that yields an accurate emissivity regardless of cell size. This modified boundary condition can be used with a deterministic calculation or as part of a hybrid transport-diffusion method for increasing the efficiency of Monte Carlo simulations. We also discuss the range of applicability, as a function of cell size and material properties, when this modified boundary condition is employed in a hybrid technique. With a set of numerical calculations, we demonstrate the accuracy and usefulness of this modified boundary condition
Discrete symmetries in the MSSM
Schieren, Roland
2010-12-02
The use of discrete symmetries, especially abelian ones, in physics beyond the standard model of particle physics is discussed. A method is developed by which a general, abelian, discrete symmetry can be obtained via spontaneous symmetry breaking. In addition, anomalies are treated in the path integral approach with special attention to anomaly cancellation via the Green-Schwarz mechanism. All this is applied to the minimal supersymmetric standard model. A unique Z^R_4 symmetry is discovered which solves the μ-problem as well as problems with proton decay and allows to embed the standard model gauge group into a simple group, i.e. the Z^R_4 is compatible with grand unification. Also the flavor problem in the context of minimal flavor violation is addressed. Finally, a string theory model is presented which exhibits the mentioned Z^R_4 symmetry and other desirable features. (orig.)
Domain Discretization and Circle Packings
Dias, Kealey
A circle packing is a configuration of circles which are tangent with one another in a prescribed pattern determined by a combinatorial triangulation, where the configuration fills a planar domain or a two-dimensional surface. The vertices in the triangulation correspond to centers of circles...... to domain discretization problems such as triangulation and unstructured mesh generation techniques. We wish to ask ourselves the question: given a cloud of points in the plane (we restrict ourselves to planar domains), is it possible to construct a circle packing preserving the positions of the vertices...... and constrained meshes having predefined vertices as constraints. A standard method of two-dimensional mesh generation involves conformal mapping of the surface or domain to standardized shapes, such as a disk. Since circle packing is a new technique for constructing discrete conformal mappings, it is possible...
Discrete Bose-Einstein spectra
Vlad, Valentin I.; Ionescu-Pallas, Nicholas
2001-03-01
The Bose-Einstein energy spectrum of a quantum gas, confined in a rigid cubic box, is shown to become discrete and strongly dependent on the box geometry (size L), temperature T and atomic mass number A_at, in the region of small γ = A_at T V^{1/3}. This behavior is a consequence of the random state degeneracy in the box. Furthermore, we demonstrate that the total energy no longer obeys the conventional law, but a new law that depends on γ and on the quantum gas fugacity. This energy law imposes a faster decrease to zero than classically expected for γ→0. The lighter the gas atoms, the higher the temperatures or the larger the box sizes at which the same discrete Bose-Einstein effects appear. (author)
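The "random state degeneracy" mentioned in the abstract can be made concrete with a short sketch (an illustrative calculation, not the paper's): for a particle in a rigid cubic box the energy levels scale as n_x^2 + n_y^2 + n_z^2, and the degeneracy of each level is the irregular number of positive integer triples sharing the same sum of squares.

```python
from collections import Counter

# Count, for each discrete energy level E ~ nx^2 + ny^2 + nz^2 of a particle
# in a rigid cubic box, how many quantum-number triples produce it.
def degeneracies(nmax):
    counts = Counter(nx * nx + ny * ny + nz * nz
                     for nx in range(1, nmax + 1)
                     for ny in range(1, nmax + 1)
                     for nz in range(1, nmax + 1))
    return dict(sorted(counts.items()))

levels = degeneracies(3)
# e.g. level 3 = (1,1,1) is non-degenerate, level 6 = (2,1,1) is threefold
```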
Dark energy from discrete spacetime.
Trout, Aaron D
2013-01-01
Dark energy accounts for most of the matter-energy content of our universe, yet current theories of its origin rely on radical physical assumptions such as the holographic principle or controversial anthropic arguments. We give a better motivated explanation for dark energy, claiming that it arises from a small negative scalar-curvature present even in empty spacetime. The vacuum has this curvature because spacetime is fundamentally discrete and there are more ways for a discrete geometry to have negative curvature than positive. We explicitly compute this effect using a variant of the well known dynamical-triangulations (DT) model for quantum gravity. Our model predicts a time-varying non-zero cosmological constant with a current value, [Formula: see text] in natural units, in agreement with observation. This calculation is made possible by a novel characterization of the possible DT action values combined with numerical evidence concerning their degeneracies.
Discrete mathematics using a computer
Hall, Cordelia
2000-01-01
Several areas of mathematics find application throughout computer science, and all students of computer science need a practical working understanding of them. These core subjects are centred on logic, sets, recursion, induction, relations and functions. The material is often called discrete mathematics, to distinguish it from the traditional topics of continuous mathematics such as integration and differential equations. The central theme of this book is the connection between computing and discrete mathematics. This connection is useful in both directions: • Mathematics is used in many branches of computer science, in applications including program specification, data structures, design and analysis of algorithms, database systems, hardware design, reasoning about the correctness of implementations, and much more; • Computers can help to make the mathematics easier to learn and use, by making mathematical terms executable, making abstract concepts more concrete, and through the use of software tools su...
Duality for discrete integrable systems
Quispel, G R W; Capel, H W; Roberts, J A G
2005-01-01
A new class of discrete dynamical systems is introduced via a duality relation for discrete dynamical systems with a number of explicitly known integrals. The dual equation can be defined via the difference of an arbitrary linear combination of integrals and its upshifted version. We give an example of an integrable mapping with two parameters and four integrals leading to a (four-dimensional) dual mapping with four parameters and two integrals. We also consider a more general class of higher-dimensional mappings arising via a travelling-wave reduction from the (integrable) MKdV partial-difference equation. By differencing the trace of the monodromy matrix we obtain a class of novel dual mappings which is shown to be integrable as level-set-dependent versions of the original ones
Observability of discretized partial differential equations
Cohn, Stephen E.; Dee, Dick P.
1988-01-01
It is shown that complete observability of the discrete model used to assimilate data from a linear partial differential equation (PDE) system is necessary and sufficient for asymptotic stability of the data assimilation process. The observability theory for discrete systems is reviewed and applied to obtain simple observability tests for discretized constant-coefficient PDEs. Examples are used to show how numerical dispersion can result in discrete dynamics with multiple eigenvalues, thereby detracting from observability.
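The observability test described above can be sketched in a few lines (the example system is assumed for illustration, not taken from the paper): a discrete system x_{k+1} = A x_k observed through y_k = C x_k is completely observable iff the observability matrix [C; CA; ...; CA^(n-1)] has full column rank, and a repeated eigenvalue in the discrete dynamics can destroy that rank.

```python
import numpy as np

# Build the observability matrix [C; CA; ...; CA^(n-1)].
def observability_matrix(A, C):
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

n = 5
C = np.zeros((1, n)); C[0, 0] = 1.0   # observe a single grid point

# Upwind discretization of periodic advection: distinct eigenvalues, observable.
c = 0.5  # Courant number
A_upwind = (1 - c) * np.eye(n) + c * np.roll(np.eye(n), 1, axis=0)
rank_upwind = np.linalg.matrix_rank(observability_matrix(A_upwind, C))

# A dynamics matrix with a repeated eigenvalue (here, the identity) collapses
# the observability matrix, illustrating the multiple-eigenvalue effect.
rank_repeated = np.linalg.matrix_rank(observability_matrix(np.eye(n), C))
```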
On the putative essential discreteness of q-generalized entropies
Plastino, A.; Rocca, M. C.
2017-12-01
It has been argued in Abe (2010), entitled Essential discreteness in generalized thermostatistics with non-logarithmic entropy, that "continuous Hamiltonian systems with long-range interactions and the so-called q-Gaussian momentum distributions are seen to be outside the scope of non-extensive statistical mechanics". The arguments are clever and appealing. We show here, however, that some mathematical subtleties render them unconvincing.
A compressed sensing based approach on Discrete Algebraic Reconstruction Technique.
Demircan-Tureyen, Ezgi; Kamasak, Mustafa E
2015-01-01
Discrete tomography (DT) techniques can compute better results, even using fewer projections, than continuous tomography techniques. The Discrete Algebraic Reconstruction Technique (DART) is an iterative reconstruction method proposed to achieve this goal by exploiting prior knowledge of the gray levels and assuming that the scanned object is composed of a few different densities. In this paper, the DART method is combined with an initial total variation minimization (TvMin) phase to ensure a better initial guess, and extended with a segmentation procedure in which the threshold values are estimated from a finite set of candidates to minimize both the projection error and the total variation (TV) simultaneously. The accuracy and robustness of the algorithm are compared with the original DART by simulation experiments performed under (1) limited number of projections, (2) limited view and (3) noisy projections conditions.
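The quantity minimized in the TvMin phase can be sketched briefly (a standard anisotropic definition is assumed here, not necessarily the paper's exact formulation): the total variation of an image is the sum of absolute differences between neighbouring pixels, so it is zero for constant images and grows with every edge.

```python
import numpy as np

# Anisotropic total variation: sum of absolute neighbour differences
# along both image axes.
def total_variation(img):
    return float(np.abs(np.diff(img, axis=0)).sum()
                 + np.abs(np.diff(img, axis=1)).sum())

flat = np.ones((4, 4))                        # constant image: TV = 0
step = np.zeros((4, 4)); step[:, 2:] = 1.0    # one vertical edge of height 1
tv_flat, tv_step = total_variation(flat), total_variation(step)
```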
DART: a practical reconstruction algorithm for discrete tomography.
Batenburg, Kees Joost; Sijbers, Jan
2011-09-01
In this paper, we present an iterative reconstruction algorithm for discrete tomography, called discrete algebraic reconstruction technique (DART). DART can be applied if the scanned object is known to consist of only a few different compositions, each corresponding to a constant gray value in the reconstruction. Prior knowledge of the gray values for each of the compositions is exploited to steer the current reconstruction towards a reconstruction that contains only these gray values. Based on experiments with both simulated CT data and experimental μCT data, it is shown that DART is capable of computing more accurate reconstructions from a small number of projection images, or from a small angular range, than alternative methods. It is also shown that DART can deal effectively with noisy projection data and that the algorithm is robust with respect to errors in the estimation of the gray values.
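DART's core idea, steering an algebraic reconstruction toward a small set of known gray values, can be illustrated on a toy underdetermined system. This is a simplified sketch under assumed data (the projection matrix, measurements and gray values are invented for illustration; it is not the full DART algorithm with its boundary-pixel bookkeeping): Kaczmarz (ART) sweeps solve W x = p, and a segmentation step then snaps each pixel to the nearest admissible gray value.

```python
import numpy as np

# Kaczmarz (ART) sweeps for a consistent linear system W x = p;
# started from zero, the iterates converge to the minimum-norm solution.
def kaczmarz(W, p, sweeps=200):
    x = np.zeros(W.shape[1])
    for _ in range(sweeps):
        for i in range(W.shape[0]):
            w = W[i]
            x += (p[i] - w @ x) / (w @ w) * w
    return x

gray_values = np.array([0.0, 1.0])
W = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [0., 0., 1., 1.]])      # 3 "projections" of a 4-pixel object
p = W @ np.array([0., 1., 1., 0.])   # data from the true binary object

x = kaczmarz(W, p)
# segmentation step: snap each pixel to the nearest admissible gray value
seg = gray_values[np.abs(x[:, None] - gray_values[None, :]).argmin(axis=1)]
```

Even though the system has fewer equations than unknowns, the gray-value prior recovers the exact binary object, which is the effect the abstract describes.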
Effective lagrangian description on discrete gauge symmetries
Banks, T.
1989-01-01
We exhibit a simple low-energy lagrangian which describes a system with a discrete remnant of a spontaneously broken continuous gauge symmetry. The lagrangian gives a simple description of the effects ascribed to such systems by Krauss and Wilczek: black holes carry discrete hair and interact with cosmic strings, and wormholes cannot lead to violation of discrete gauge symmetries. (orig.)
Discrete port-Hamiltonian systems : mixed interconnections
Talasila, Viswanath; Clemente-Gallardo, J.; Schaft, A.J. van der
2005-01-01
Either from a control theoretic viewpoint or from an analysis viewpoint it is necessary to convert smooth systems to discrete systems, which can then be implemented on computers for numerical simulations. Discrete models can be obtained either by discretizing a smooth model, or by directly modeling
Discrete fractional solutions of a Legendre equation
Yılmazer, Resat
2018-01-01
Fractional calculus has become one of the most popular research areas in science and engineering in recent times, and discrete fractional calculus holds an important position within it. In this work, we obtain new discrete fractional solutions of the homogeneous and non-homogeneous Legendre differential equation by using the discrete fractional nabla operator.
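For reference, one common definition of the discrete fractional nabla sum reads as follows (a standard convention is assumed here; the paper's normalization may differ):

```latex
% nabla fractional sum of order \nu > 0, with backward jump \rho(s) = s - 1
% and rising factorial t^{\overline{\nu}} = \Gamma(t + \nu)/\Gamma(t)
\nabla_a^{-\nu} f(t) \;=\; \frac{1}{\Gamma(\nu)} \sum_{s=a+1}^{t}
  \bigl(t - \rho(s)\bigr)^{\overline{\nu - 1}}\, f(s)
```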
Oh, Jung Su; Lee, Jae Sung; Kim, Yu Kyeong; Chung, June Key; Lee, Myung Chul; Lee, Dong Soo
2005-01-01
In statistical probabilistic mapping, differences between two or more groups of subjects are commonly analyzed statistically following spatial normalization. To the best of our knowledge, however, few studies have performed the statistical mapping in the individual brain space rather than in the stereotaxic brain space, i.e., template space. Therefore, in the current study, a new method for mapping statistical results from the template space onto individual brain space has been developed. Four young subjects with epilepsy and thirty age-matched normal healthy subjects were recruited. Both FDG PET and T1 structural MRI were acquired for these groups. Statistical analysis of the decreased FDG metabolism in epilepsy was performed in SPM with a two-sample t-test (p < 0.001, intensity threshold 100). To map the statistical results onto individual space, inverse deformation was performed as follows. With the SPM deformation toolbox, DCT (discrete cosine transform) basis-encoded deformation fields between individual T1 images and the T1 MNI template were obtained. Afterwards, the inverses of those fields, i.e., inverse deformation fields, were obtained. Since both PET and T1 images had already been normalized to the same MNI space, the inversely deformed PET results lie in the individual brain MRI space. By applying the inverse deformation field to the statistical results of the PET, the statistical map of decreased metabolism in individual spaces was obtained. With statistical results in the template space, decreased metabolism was localized in the inferior temporal lobe, slightly inferior to the hippocampus. The statistical results in the individual space were commonly located in the hippocampus, where the activation should be decreased according to a priori neuroscientific knowledge. With our newly developed statistical mapping onto individual spaces, the localization of brain functional mapping became more appropriate in the neuroscientific sense
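The pull-back step at the heart of this method can be sketched minimally (array names, shapes and the nearest-neighbour lookup are assumptions made for illustration; this is not SPM's implementation): an inverse deformation field gives, for every voxel of the individual space, the template-space coordinate to sample, so a template-space statistical map can be resampled into the individual space.

```python
import numpy as np

# Resample a template-space statistical map into individual space using an
# inverse deformation field of shape (3, X, Y, Z) holding, per individual
# voxel, the template-space coordinate to read.  Nearest-neighbour lookup
# keeps the sketch dependency-free (real pipelines interpolate).
def warp_to_individual(stat_map, inv_field):
    idx = np.rint(inv_field).astype(int)
    return stat_map[idx[0], idx[1], idx[2]]

stat_map = np.random.default_rng(0).random((4, 4, 4))
# the identity field maps every voxel to itself, so warping is a no-op
identity_field = np.indices((4, 4, 4)).astype(float)
warped = warp_to_individual(stat_map, identity_field)
```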
Geographic analysis of forest health indicators using spatial scan statistics
John W. Coulston; Kurt H. Riitters
2003-01-01
Forest health analysts seek to define the location, extent, and magnitude of changes in forest ecosystems, to explain the observed changes when possible, and to draw attention to the unexplained changes for further investigation. The data come from a variety of sources including satellite images, field plot measurements, and low-altitude aerial surveys. Indicators...
MR guided spatial normalization of SPECT scans
Crouch, B.; Barnden, L.R.; Kwiatek, R.
2010-01-01
Full text: In SPECT population studies where magnetic resonance (MR) scans are also available, the higher resolution of the MR scans allows for an improved spatial normalization of the SPECT scans. In this approach, the SPECT images are first coregistered to their corresponding MR images by a linear (affine) transformation, which is calculated using SPM's mutual information maximization algorithm. Non-linear spatial normalization maps are then computed either directly from the MR scans using SPM's built-in spatial normalization algorithm, or from segmented T1 MR images using DARTEL, an advanced diffeomorphism-based spatial normalization algorithm. We compare these MR-based methods to standard SPECT-based spatial normalization for a population of 27 fibromyalgia patients and 25 healthy controls with spin-echo T1 scans. We identify significant perfusion deficits in prefrontal white matter in FM patients, with the DARTEL-based spatial normalization procedure yielding stronger statistics than the standard SPECT-based spatial normalization. (author)
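The mutual-information similarity behind the affine coregistration step can be sketched from first principles (an illustrative histogram estimator, not SPM's implementation): MI is computed from the joint intensity histogram of the two images, and coregistration seeks the transformation that maximizes it.

```python
import numpy as np

# Mutual information of two images from their joint intensity histogram:
# MI = sum_xy p(x,y) log( p(x,y) / (p(x) p(y)) ).
def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((32, 32))
noise = rng.random((32, 32))
# an image is maximally informative about itself, and nearly independent
# of unrelated noise
mi_self, mi_indep = mutual_information(img, img), mutual_information(img, noise)
```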
A goodness of fit statistic for the geometric distribution
Ferreira, J.A.
2003-01-01
We propose a goodness of fit statistic for the geometric distribution and compare it, in terms of power, via simulation with the chi-square statistic. The statistic is based on the Lau-Rao theorem and can be seen as a discrete analogue of the total time on test statistic. The results suggest that the test based on the new statistic is generally superior to the chi-square test.
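The chi-square baseline against which the new statistic is compared can be sketched as follows (the binning scheme and parameters are assumptions for illustration; this is not the authors' new statistic): bin the sample into cells k = 1..kmax-1 plus a pooled tail cell, and compare observed with expected geometric counts.

```python
import numpy as np

# Pearson chi-square statistic for a sample against Geometric(p) with
# support k = 1, 2, ...; values >= kmax are pooled into one tail cell.
def chisq_geometric(sample, p, kmax=10):
    ks = np.arange(1, kmax)
    probs = p * (1 - p) ** (ks - 1)                  # P(X = k)
    probs = np.append(probs, (1 - p) ** (kmax - 1))  # tail P(X >= kmax)
    obs = np.array([(sample == k).sum() for k in ks] + [(sample >= kmax).sum()])
    exp = probs * len(sample)
    return float(((obs - exp) ** 2 / exp).sum())

rng = np.random.default_rng(0)
stat = chisq_geometric(rng.geometric(0.3, size=1000), 0.3)
# under the null, stat is approximately chi-square with kmax - 1 d.o.f.
```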
Continuous versus discrete structures II -- Discrete Hamiltonian systems and Helmholtz conditions
Cresson, Jacky; Pierret, Frédéric
2015-01-01
We define discrete Hamiltonian systems in the framework of discrete embeddings. An explicit comparison with previous attempts is given. We then solve the discrete Helmholtz inverse problem for the discrete calculus of variations in the Hamiltonian setting. Several applications are discussed.
Asymptotic behavior of discrete holomorphic maps z^c, log(z) and discrete Painleve transcendents
Agafonov, S. I.
2005-01-01
It is shown that discrete analogs of z^c and log(z) have the same asymptotic behavior as their smooth counterparts. These discrete maps are described in terms of special solutions of discrete Painleve-II equations, asymptotics of these solutions providing the behaviour of discrete z^c and log(z) at infinity.
Zhang Yufeng; Fan Engui; Zhang Yongqing
2006-01-01
With the help of two semi-direct sum Lie algebras, an efficient way to construct discrete integrable couplings is proposed. As applications, the discrete integrable couplings of the Toda-type lattice equations are obtained. The approach can also be used to establish other discrete integrable couplings of the discrete lattice integrable hierarchies of evolution equations
Robillard, J.
1977-01-01
The cancer centers of Caen, Angers, Montpellier and Strasbourg and the Curie Foundation have compared their experience in the detection of bone metastases by total-body scanning. From the investigation of 1,467 cancer patients by this procedure, it follows that: the comparison between radiography and scanning shows rates of false positives and false negatives identical to those in the literature; count-based scanning reduces the number of false positives; and scanning helps direct bone biopsy and improves the efficiency of histological examination [fr]
Anon.
1994-01-01
For the years 1992 and 1993, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period. The tables and figures shown in this publication are: Changes in the volume of GNP and energy consumption; Coal consumption; Natural gas consumption; Peat consumption; Domestic oil deliveries; Import prices of oil; Price development of principal oil products; Fuel prices for power production; Total energy consumption by source; Electricity supply; Energy imports by country of origin in 1993; Energy exports by recipient country in 1993; Consumer prices of liquid fuels; Consumer prices of hard coal and natural gas, prices of indigenous fuels; Average electricity price by type of consumer; Price of district heating by type of consumer and Excise taxes and turnover taxes included in consumer prices of some energy sources
Goodman, Joseph W.
2000-07-01
The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I Richard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research
Pivato, Marcus
2013-01-01
We show that, in a sufficiently large population satisfying certain statistical regularities, it is often possible to accurately estimate the utilitarian social welfare function, even if we only have very noisy data about individual utility functions and interpersonal utility comparisons. In particular, we show that it is often possible to identify an optimal or close-to-optimal utilitarian social choice using voting rules such as the Borda rule, approval voting, relative utilitarianism, or a...
Natrella, Mary Gibbons
1963-01-01
Formulated to assist scientists and engineers engaged in army ordnance research and development programs, this well-known and highly regarded handbook is a ready reference for advanced undergraduate and graduate students as well as for professionals seeking engineering information and quantitative data for designing, developing, constructing, and testing equipment. Topics include characterizing and comparing the measured performance of a material, product, or process; general considerations in planning experiments; statistical techniques for analyzing extreme-value data; use of transformations
Evaluation of processing methods for static radioisotope scan images
Oakberg, J.A.
1976-12-01
Radioisotope scanning in the field of nuclear medicine provides a method for mapping a radioactive drug in the human body, producing maps (images) which prove useful in detecting abnormalities in vital organs. At best, radioisotope scanning methods produce images with poor counting statistics. One solution for improving the body scan images is to use dedicated small computers with appropriate software to process the scan data. Eleven methods for processing image data are compared
Cuspidal discrete series for projective hyperbolic spaces
Andersen, Nils Byrial; Flensted-Jensen, Mogens
2013-01-01
Abstract. We have in [1] proposed a definition of cusp forms on semisimple symmetric spaces G/H, involving the notion of a Radon transform and a related Abel transform. For the real non-Riemannian hyperbolic spaces, we showed that there exists an infinite number of cuspidal discrete series......, and at most finitely many non-cuspidal discrete series, including in particular the spherical discrete series. For the projective spaces, the spherical discrete series are the only non-cuspidal discrete series. Below, we extend these results to the other hyperbolic spaces, and we also study the question...
Space-Time Discrete KPZ Equation
Cannizzaro, G.; Matetski, K.
2018-03-01
We study a general family of space-time discretizations of the KPZ equation and show that they converge to its solution. The approach we follow makes use of basic elements of the theory of regularity structures (Hairer in Invent Math 198(2):269-504, 2014) as well as its discrete counterpart (Hairer and Matetski in Discretizations of rough stochastic PDEs, 2015. arXiv:1511.06937). Since the discretization is in both space and time and we allow non-standard discretization for the product, the methods mentioned above have to be suitably modified in order to accommodate the structure of the models under study.
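A concrete member of such a family can be sketched with a standard explicit scheme (this textbook discretization is an assumption for illustration, not the specific family analyzed in the paper): for h_t = Δh + λ(∇h)² + ξ, use forward Euler in time, centered differences in space, and space-time white noise scaled by 1/sqrt(dx·dt).

```python
import numpy as np

# One forward-Euler step of a space-time discretized KPZ equation on a
# periodic 1-D grid: diffusion + quadratic gradient nonlinearity + noise.
def kpz_step(h, dx, dt, lam=1.0, rng=None):
    rng = rng or np.random.default_rng()
    lap = (np.roll(h, -1) - 2 * h + np.roll(h, 1)) / dx ** 2
    grad = (np.roll(h, -1) - np.roll(h, 1)) / (2 * dx)
    noise = rng.standard_normal(h.size) / np.sqrt(dx * dt)
    return h + dt * (lap + lam * grad ** 2 + noise)

rng = np.random.default_rng(1)
h = np.zeros(64)
dx, dt = 0.1, 0.001            # dt kept below the dx^2 / 2 stability limit
for _ in range(100):
    h = kpz_step(h, dx, dt, rng=rng)
```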
Discrete tomography in an in vivo small animal bone study.
Van de Casteele, Elke; Perilli, Egon; Van Aarle, Wim; Reynolds, Karen J; Sijbers, Jan
2018-01-01
This study aimed at assessing the feasibility of a discrete algebraic reconstruction technique (DART) to be used in in vivo small animal bone studies. The advantage of discrete tomography is the possibility to reduce the amount of X-ray projection images, which makes scans faster and implies also a significant reduction of radiation dose, without compromising the reconstruction results. Bone studies are ideal for being performed with discrete tomography, due to the relatively small number of attenuation coefficients contained in the image [namely three: background (air), soft tissue and bone]. In this paper, a validation is made by comparing trabecular bone morphometric parameters calculated from images obtained by using DART and the commonly used standard filtered back-projection (FBP). Female rats were divided into an ovariectomized (OVX) and a sham-operated group. In vivo micro-CT scanning of the tibia was done at baseline and at 2, 4, 8 and 12 weeks after surgery. The cross-section images were reconstructed using first the full set of projection images and afterwards reducing them in number to a quarter and one-sixth (248, 62, 42 projection images, respectively). For both reconstruction methods, similar changes in morphometric parameters were observed over time: bone loss for OVX and bone growth for sham-operated rats, although for DART the actual values were systematically higher (bone volume fraction) or lower (structure model index) compared to FBP, depending on the morphometric parameter. The DART algorithm was, however, more robust when using fewer projection images, where the standard FBP reconstruction was more prone to noise, showing a significantly bigger deviation from the morphometric parameters obtained using all projection images. This study supports the use of DART as a potential alternative method to FBP in X-ray micro-CT animal studies, in particular, when the number of projections has to be drastically minimized, which directly reduces
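One of the morphometric parameters compared above can be sketched directly (the three-label convention 0 = air, 1 = soft tissue, 2 = bone is an assumption for illustration): bone volume fraction BV/TV is simply the share of bone-labelled voxels in the volume of interest of the segmented reconstruction.

```python
import numpy as np

# Bone volume fraction (BV/TV) from a labelled (segmented) volume:
# fraction of voxels carrying the bone label within the VOI.
def bone_volume_fraction(seg, bone_label=2):
    return float((seg == bone_label).mean())

seg = np.zeros((4, 4, 4), dtype=int)   # toy VOI: all air/soft tissue...
seg[:2] = 2                            # ...except the top half, which is bone
bvtv = bone_volume_fraction(seg)
```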
Statistical Model Checking for Biological Systems
David, Alexandre; Larsen, Kim Guldstrand; Legay, Axel
2014-01-01
Statistical Model Checking (SMC) is a highly scalable simulation-based verification approach for testing and estimating the probability that a stochastic system satisfies a given linear temporal property. The technique has been applied to (discrete and continuous time) Markov chains, stochastic...
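The estimation side of SMC can be sketched in a few lines (an illustrative Monte Carlo scheme, not the tool's engine; the toy "model" is an assumption): simulate independent runs, count how many satisfy the property, and use the Chernoff-Hoeffding bound to fix how many runs give precision eps with confidence 1 - delta.

```python
import math
import random

# Estimate P(property holds on a random run) by simulation; the number of
# runs n = ceil( ln(2/delta) / (2 eps^2) ) guarantees |estimate - p| <= eps
# with probability at least 1 - delta (Chernoff-Hoeffding bound).
def smc_estimate(run_satisfies, eps=0.05, delta=0.01, seed=1):
    rng = random.Random(seed)
    n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    hits = sum(run_satisfies(rng) for _ in range(n))
    return hits / n, n

# toy "model": each run satisfies the property with probability 0.7
est, n = smc_estimate(lambda rng: rng.random() < 0.7)
```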
Applicability of statistical process control techniques
Schippers, W.A.J.
1998-01-01
This paper concerns the application of Process Control Techniques (PCTs) for the improvement of the technical performance of discrete production processes. Successful applications of these techniques, such as Statistical Process Control Techniques (SPC), can be found in the literature. However, some
Infinite Random Graphs as Statistical Mechanical Models
Durhuus, Bergfinnur Jøgvan; Napolitano, George Maria
2011-01-01
We discuss two examples of infinite random graphs obtained as limits of finite statistical mechanical systems: a model of two-dimensional discretized quantum gravity defined in terms of causal triangulated surfaces, and the Ising model on generic random trees. For the former model we describe a ...
Integrable discretizations of the short pulse equation
Feng Baofeng; Maruno, Ken-ichi; Ohta, Yasuhiro
2010-01-01
In this paper, we propose integrable semi-discrete and full-discrete analogues of the short pulse (SP) equation. The key construction is the bilinear form and determinant structure of solutions of the SP equation. We also give the determinant formulas of N-soliton solutions of the semi-discrete and full-discrete analogues of the SP equations, from which the multi-loop and multi-breather solutions can be generated. In the continuous limit, the full-discrete SP equation converges to the semi-discrete SP equation, and then to the continuous SP equation. Based on the semi-discrete SP equation, an integrable numerical scheme, i.e. a self-adaptive moving mesh scheme, is proposed and used for the numerical computation of the short pulse equation.
Probability and statistics for computer science
Johnson, James L
2011-01-01
Comprehensive and thorough development of both probability and statistics for serious computer scientists; goal-oriented: "to present the mathematical analysis underlying probability results". Special emphases on simulation and discrete decision theory. Mathematically rich but self-contained text, at a gentle pace. Review of calculus and linear algebra in an appendix. Mathematical interludes (in each chapter) which examine mathematical techniques in the context of probabilistic or statistical importance. Numerous section exercises, summaries, historical notes, and Further Readings for reinforcem
Strunk, Amber; Gazdovich, Jennifer; Redouté, Oriane; Reverte, Juan Manuel; Shelley, Samantha; Todorova, Vesela
2018-05-01
This paper provides a brief introduction to antimatter and how it, along with other modern physics topics, is utilized in positron emission tomography (PET) scans. It further describes a hands-on activity for students to help them gain an understanding of how PET scans assist in detecting cancer. Modern physics topics provide an exciting way to introduce students to current applications of physics.
Scanning laser Doppler vibrometry
Brøns, Marie; Thomsen, Jon Juel
With a Scanning Laser Doppler Vibrometer (SLDV) a vibrating surface is automatically scanned over predefined grid points, and data processed for displaying vibration properties like mode shapes, natural frequencies, damping ratios, and operational deflection shapes. Our SLDV – a PSV-500H from...
Thyroid Scan and Uptake
Thyroid scan and uptake uses ...
Image processing tensor transform and discrete tomography with Matlab
Grigoryan, Artyom M
2012-01-01
Focusing on mathematical methods in computer tomography, Image Processing: Tensor Transform and Discrete Tomography with MATLAB(R) introduces novel approaches to help in solving the problem of image reconstruction on the Cartesian lattice. Specifically, it discusses methods of image processing along parallel rays to more quickly and accurately reconstruct images from a finite number of projections, thereby avoiding overradiation of the body during a computed tomography (CT) scan. The book presents several new ideas, concepts, and methods, many of which have not been published elsewhere. New co
Anon.
1989-01-01
World data from the United Nations' latest Energy Statistics Yearbook, first published in our last issue, are completed here. The 1984-86 data were revised and 1987 data added for world commercial energy production and consumption; world natural gas plant liquids production; world LP-gas production, imports, exports, and consumption; world residual fuel oil production, imports, exports, and consumption; world lignite production, imports, exports, and consumption; world peat production and consumption; world electricity production, imports, exports, and consumption (Table 80); and world nuclear electric power production.
Transverse section scanning mechanism
Doherty, E.J.
1978-01-01
Apparatus is described for scanning a transverse, radionuclide scan-field using an array of focussed collimators. The collimators are movable tangentially on rails, driven by a single motor via a coupled screw. The collimators are also movable in a radial direction on rails driven by a step motor via coupled screws and bevel gears. Adjacent bevel gears rotate in opposite directions so adjacent collimators move in radially opposite directions. In use, the focal point of each collimator scans at least half of the scan-field, e.g. a human head located in the central aperture, and the electrical outputs of detectors associated with each collimator are used to determine the distribution of radioactive emission intensity at a number of points in the scan-field. (author)
A residual Monte Carlo method for discrete thermal radiative diffusion
Evans, T.M.; Urbatsch, T.J.; Lichtenstein, H.; Morel, J.E.
2003-01-01
Residual Monte Carlo methods reduce statistical error at a rate of exp(-bN), where b is a positive constant and N is the number of particle histories. Contrast this convergence rate with 1/√N, which is the rate of statistical error reduction for conventional Monte Carlo methods. Thus, residual Monte Carlo methods hold great promise for increased efficiency relative to conventional Monte Carlo methods. Previous research has shown that the application of residual Monte Carlo methods to the solution of continuum equations, such as the radiation transport equation, is problematic for all but the simplest of cases. However, the residual method readily applies to discrete systems as long as those systems are monotone, i.e., they produce positive solutions given positive sources. We develop a residual Monte Carlo method for solving a discrete 1D non-linear thermal radiative equilibrium diffusion equation, and we compare its performance with that of the discrete conventional Monte Carlo method upon which it is based. We find that the residual method provides efficiency gains of many orders of magnitude. Part of the residual gain is due to the fact that we begin each timestep with an initial guess equal to the solution from the previous timestep. Moreover, fully consistent non-linear solutions can be obtained in a reasonable amount of time because of the effective lack of statistical noise. We conclude that the residual approach has great potential and that further research into such methods should be pursued for more general discrete and continuum systems
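The 1/√N error scaling of conventional Monte Carlo quoted above is easy to verify numerically. The sketch below is an illustrative toy, not the authors' code: it repeatedly estimates E[x²] for x ~ U(0,1) (true value 1/3) and measures the spread of the estimates, which should roughly halve when N is quadrupled.

```python
import math
import random

def mc_error(n, trials=200, seed=1):
    """Empirical std. dev. of a plain Monte Carlo estimate of E[x^2], x ~ U(0,1)."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        s = sum(rng.random() ** 2 for _ in range(n))
        estimates.append(s / n)
    mean = sum(estimates) / trials
    var = sum((e - mean) ** 2 for e in estimates) / (trials - 1)
    return math.sqrt(var)

# Quadrupling the number of histories should roughly halve the error (1/sqrt(N)).
ratio = mc_error(100) / mc_error(400)
print(ratio)  # close to 2
```

A residual method, by contrast, would show the exp(-bN) decay the abstract describes, which is why its efficiency gains are so large.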
V. V. Elizarov
2016-11-01
Subject of Research. The results of the development of a combined lidar scanning unit for locating hydrocarbon leaks are presented. The unit enables high-speed scanning of the investigated space in wide and narrow angular fields. Method. Scanning in the wide angular field is produced along a one-line scanning path by means of a movable aluminum mirror with a frequency of 20 Hz and a swing amplitude of 20 degrees. Scanning in the narrow field is performed along a spiral path by the deflector; the beam is deflected by rotating the optical wedges forming part of the deflector at an angle of ±50. The scanning node is controlled by specialized software written in the C# programming language. Main Results. This scanning unit allows scanning the investigated area at a distance of 50-100 m with a spatial resolution of 3 cm. The positioning accuracy of the laser beam in space is 15'. The developed scanning unit makes it possible to cover the entire investigated area in not more than 1 ms at a rotation frequency of each wedge from 50 to 200 Hz. The problem of unambiguous determination of the beam's geographical coordinates in space is solved at the software level from the rotation angles of the mirrors and optical wedges. Lidar system coordinates are determined by means of GPS. Practical Relevance. These results open the possibility of increasing the spatial resolution of the scanning systems of a wide range of lidars and can provide high positioning accuracy of the laser beam in space.
Discrete geometric structures for architecture
Pottmann, Helmut
2010-06-13
The emergence of freeform structures in contemporary architecture raises numerous challenging research problems, most of which are related to the actual fabrication and are a rich source of research topics in geometry and geometric computing. The talk will provide an overview of recent progress in this field, with a particular focus on discrete geometric structures. Most of these result from practical requirements on segmenting a freeform shape into planar panels and on the physical realization of supporting beams and nodes. A study of quadrilateral meshes with planar faces reveals beautiful relations to discrete differential geometry. In particular, we discuss meshes which discretize the network of principal curvature lines. Conical meshes are among these meshes; they possess conical offset meshes at a constant face/face distance, which in turn leads to a supporting beam layout with so-called torsion free nodes. This work can be generalized to a variety of multilayer structures and laid the ground for an adapted curvature theory for these meshes. There are also efforts on segmenting surfaces into planar hexagonal panels. Though these are less constrained than planar quadrilateral panels, this problem is still waiting for an elegant solution. Inspired by freeform designs in architecture which involve circles and spheres, we present a new kind of triangle mesh whose faces' in-circles form a packing, i.e., the in-circles of two triangles with a common edge have the same contact point on that edge. These "circle packing (CP) meshes" exhibit an aesthetic balance of shape and size of their faces. They are closely tied to sphere packings on surfaces and to various remarkable structures and patterns which are of interest in art, architecture, and design. CP meshes constitute a new link between architectural freeform design and computational conformal geometry. Recently, certain timber structures motivated us to study discrete patterns of geodesics on surfaces. This
Radiative transfer on discrete spaces
Preisendorfer, Rudolph W; Stark, M; Ulam, S
1965-01-01
Pure and Applied Mathematics, Volume 74: Radiative Transfer on Discrete Spaces presents the geometrical structure of natural light fields. This book describes in detail with mathematical precision the radiometric interactions of light-scattering media in terms of a few well established principles.Organized into four parts encompassing 15 chapters, this volume begins with an overview of the derivations of the practical formulas and the arrangement of formulas leading to numerical solution procedures of radiative transfer problems in plane-parallel media. This text then constructs radiative tran
Jørgensen, John Bagterp; Thomsen, Per Grove; Madsen, Henrik
2007-01-01
We present a novel, numerically robust and computationally efficient extended Kalman filter for state estimation in nonlinear continuous-discrete stochastic systems. The resulting implementation of the mean-covariance evolution equations for nonlinear stochastic continuous-discrete time systems is more than two orders of magnitude faster than a conventional implementation. This is of significance in nonlinear model predictive control applications and statistical process monitoring, as well as in grey-box modelling of systems described by stochastic differential equations.
Modeling Anti-Air Warfare With Discrete Event Simulation and Analyzing Naval Convoy Operations
2016-06-01
Opcin, Ali E. Master's thesis, June 2016. Thesis Advisor: Arnold H. Buss.
3-D discrete analytical ridgelet transform.
Helbert, David; Carré, Philippe; Andres, Eric
2006-12-01
In this paper, we propose an implementation of the 3-D Ridgelet transform: the 3-D discrete analytical Ridgelet transform (3-D DART). This transform uses the Fourier strategy for the computation of the associated 3-D discrete Radon transform. The innovative step is the definition of a discrete 3-D transform within discrete analytical geometry theory by the construction of 3-D discrete analytical lines in the Fourier domain. We propose two types of 3-D discrete lines: 3-D discrete radial lines going through the origin, defined from their orthogonal projections, and 3-D planes covered with 2-D discrete line segments. These discrete analytical lines have a parameter called arithmetical thickness, allowing us to define a 3-D DART adapted to a specific application. Indeed, the 3-D DART representation is not orthogonal; it is associated with a flexible redundancy factor. The 3-D DART has a very simple forward/inverse algorithm that provides an exact reconstruction without any iterative method. In order to illustrate the potential of this new discrete transform, we apply the 3-D DART and its extension, the Local-DART (with smooth windowing), to the denoising of 3-D images and color video. These experimental results show that simple thresholding of the 3-D DART coefficients is efficient.
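The "Fourier strategy" for computing a discrete Radon transform rests on the projection-slice theorem: the 1-D DFT of a projection equals a central slice of the image's 2-D DFT. A minimal 2-D NumPy check of that identity (illustrative only; the paper's construction uses 3-D discrete analytical lines, not this axis-aligned special case):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))

# Projection along rows (sum over axis 0) is the simplest discrete Radon line set ...
proj = img.sum(axis=0)

# ... and its 1-D DFT equals the zero-frequency row of the image's 2-D DFT.
central_slice = np.fft.fft2(img)[0, :]
print(np.allclose(np.fft.fft(proj), central_slice))  # True
```

Defining which Fourier samples constitute a "line" for arbitrary directions is exactly where the arithmetical-thickness parameter of the discrete analytical lines comes in.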
Håkan Olsson
2012-09-01
The introduction of Airborne Laser Scanning (ALS) to forests has been revolutionary during the last decade. This development was facilitated by combining earlier ranging lidar discoveries [1-5] with experience obtained from full-waveform ranging radar [6,7] in new airborne laser scanning systems comprising a GNSS receiver (Global Navigation Satellite System), an IMU (Inertial Measurement Unit) and a scanning mechanism. Since the first commercial ALS in 1994, new ALS-based forest inventory approaches have been reported feasible for operational activities [8-12]. ALS is currently operationally applied for stand-level forest inventories, for example, in the Nordic countries. In Finland alone, the adoption of ALS for forest data collection has led to annual savings of around 20 M€/year, and the work is mainly done by companies instead of governmental organizations. In spite of the long implementation times and the limited tradition of making changes in the forest sector, laser scanning was commercially and operationally applied after only about one decade of research. An analysis of high-ranked journal papers in the ISI Web of Science shows that laser scanning of forests has been the driving force for the whole laser scanning research community over the last decade. Thus, the topic "laser scanning in forests" has provided a significant industrial, societal and scientific impact. [...]
Gordon, I.; Peters, A.M.
1987-01-01
In 1984, a survey carried out in 21 countries in Europe showed that bone scintigraphy comprised 16% of all paediatric radioisotope scans. Although the value of bone scans in paediatrics is potentially great, their quality varies greatly, and poor-quality images are giving this valuable technique a bad reputation. The handling of children requires a sensitive staff and the provision of a few simple, inexpensive items of distraction. Attempting simply to scan a child between two adult patients in a busy general department is a recipe for an unhappy, uncooperative child, with the probable result of poor images. The intravenous injection of isotope should be given adjacent to the gamma camera room, unless dynamic scans are required, so that the child does not associate the camera with the injection. The injection is best carried out by someone competent in paediatric venipuncture; the entire procedure should be explained to the child and parent, who should remain with the child throughout. It is naive to think that silence makes for a cooperative child. The sensitivity of bone-seeking radioisotope tracers and the marked improvement in gamma camera resolution have allowed bone scanning to become an integral technique in the assessment of children suspected of suffering from pathological bone conditions. The tracer most commonly used for routine bone scanning is 99mTc diphosphonate (MDP); other isotopes used include 99mTc colloid for bone marrow scans, and 67Ga citrate and 111In-labelled white blood cells (111In WBC) for the investigation of inflammatory/infective lesions.
3D imaging of semiconductor components by discrete laminography
Batenburg, K. J. [Centrum Wiskunde and Informatica, P.O. Box 94079, NL-1090 GB Amsterdam, The Netherlands and iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, W. J.; Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)
2014-06-19
X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.
Discrete Model for the Structure and Strength of Cementitious Materials
Balopoulos, Victor D.; Archontas, Nikolaos; Pantazopoulou, Stavroula J.
2017-12-01
Cementitious materials are characterized by brittle behavior in direct tension and by transverse dilatation (due to microcracking) under compression. Microcracking causes increasingly larger transverse strains and a phenomenological Poisson's ratio that gradually increases to about ν =0.5 and beyond, at the limit point in compression. This behavior is due to the underlying structure of cementitious pastes which is simulated here with a discrete physical model. The computational model is generic, assembled from a statistically generated, continuous network of flaky dendrites consisting of cement hydrates that emanate from partially hydrated cement grains. In the actual amorphous material, the dendrites constitute the solid phase of the cement gel and interconnect to provide the strength and stiffness against load. The idealized dendrite solid is loaded in compression and tension to compute values for strength and Poisson's effects. Parametric studies are conducted, to calibrate the statistical parameters of the discrete model with the physical and mechanical characteristics of the material, so that the familiar experimental trends may be reproduced. The model provides a framework for the study of the mechanical behavior of the material under various states of stress and strain and can be used to model the effects of additives (e.g., fibers) that may be explicitly simulated in the discrete structure.
Statistical Hair on Black Holes
Strominger, A.
1996-01-01
The Bekenstein-Hawking entropy for certain BPS-saturated black holes in string theory has recently been derived by counting internal black hole microstates at weak coupling. We argue that the black hole microstate can be measured by interference experiments even in the strong coupling region where there is clearly an event horizon. Extracting information which is naively behind the event horizon is possible due to the existence of statistical quantum hair carried by the black hole. This quantum hair arises from the arbitrarily large number of discrete gauge symmetries present in string theory. copyright 1996 The American Physical Society
Tomographic scanning apparatus
1981-01-01
Details are given of a tomographic scanning apparatus, with particular reference to a multiplexer slip ring means for receiving output from the detectors and enabling interfeed to the image reconstruction station. (U.K.)
Tomographic scanning apparatus
1981-01-01
Details are presented of a tomographic scanning apparatus, its rotational assembly, and the control and circuit elements, with particular reference to the amplifier and multiplexing circuits enabling detector signal calibration. (U.K.)
Tomographic scanning apparatus
1981-01-01
This patent specification relates to a tomographic scanning apparatus using a fan beam and digital output signal, and particularly to the design of the gas-pressurized ionization detection system. (U.K.)
The Radiation Epidemiology Branch and collaborators have initiated a retrospective cohort study to evaluate the relationship between radiation exposure from CT scans conducted during childhood and adolescence and the subsequent development of cancer.
Scanning Auger Electron Microscope
Federal Laboratory Consortium — A JEOL model 7830F field-emission-source scanning Auger microscope. Specifications / capabilities: ultra-high vacuum (UHV), electron gun range from 0.1 kV to 25 kV, ...
Tomographic scanning apparatus
1981-01-01
This patent specification describes a tomographic scanning apparatus, with particular reference to the adjustable fan beam and its collimator system, together with the facility for taking a conventional x-radiograph without moving the patient. (U.K.)
The Scanning Optical Microscope.
Sheppard, C. J. R.
1978-01-01
Describes the principle of the scanning optical microscope and explains its advantages over the conventional microscope in the improvement of resolution and contrast, as well as the possibility of producing a picture from optical harmonics generated within the specimen.
Verna, Emeline; Piercecchi-Marti, Marie-Dominique; Chaumoitre, Kathia; Bartoli, Christophe; Leonetti, Georges; Adalian, Pascal
2013-05-01
During forensic anthropological investigation, the biological profile is determined from age, sex, ancestry, and stature. However, several individuals may share the same profile. Observation of discrete traits can yield useful information and contribute to identification. This research establishes the frequency of discrete traits of the sternum and ribs in a modern population in southern France, using 500 computed tomography (CT) scans of individuals aged 15-60 years. Only discrete traits with a frequency lower than 10% according to the literature were considered, a total of eight traits. All scans examined were three-dimensional (3D) volume renderings from DICOM images. In our population, the frequency of all the discrete traits was lower than 5%. None were associated with sex or age, with the exception of a single trait, the end of the xiphoid process. Our findings can usefully be applied for identification purposes in forensic anthropology and medicine. © 2013 American Academy of Forensic Sciences.
Discrete geometry: speculations on a new framework for classical electrodynamics
Hemion, G.
1988-01-01
An attempt is made to describe the basic principles of physics in terms of discrete partially ordered sets. Geometric ideas are introduced by means of an action at a distance formulation of classical electrodynamics. The speculations are in two main directions: (i) Gravity, one of the four elementary forces of nature, seems to be fundamentally different from the other three forces. Could it be that gravity can be explained as a natural consequence of the discrete structure? (ii) The problem of the observer in quantum mechanics continues to cause conceptual problems. Can quantum statistics be explained in terms of finite ensembles of possible partially ordered sets? The development is guided at all stages by reference to the simplest, and most well-established principles of physics
Fermion systems in discrete space-time
Finster, Felix
2007-01-01
Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure
National Statistical Commission and Indian Official Statistics*
IAS Admin
Some Statistics for Measuring Large-Scale Structure
Brandenberger, Robert H.; Kaplan, David M.; Ramsey, Stephen A.
1993-01-01
Good statistics for measuring large-scale structure in the Universe must be able to distinguish between different models of structure formation. In this paper, two- and three-dimensional "counts in cells" statistics and a new "discrete genus statistic" are applied to toy versions of several popular theories of structure formation: the random phase cold dark matter model, cosmic string models, and the global texture scenario. All three statistics appear quite promising in terms of differentiating between the models.
Discrete Haar transform and protein structure.
Morosetti, S
1997-12-01
The discrete Haar transform of the sequence of backbone dihedral angles (phi and psi) was performed over a set of high-resolution X-ray protein structures from the Brookhaven Protein Data Bank. The dihedral angles were then recalculated by the inverse transform, using a growing number of Haar functions, from the lowest to the highest degree. New structures were obtained from these dihedral angles, with standard values for bond lengths and angles and with omega = 0 degrees. The reconstructed structures were compared with the experimental ones and analyzed by visual inspection and statistical analysis. When half of the Haar coefficients were used, the reconstructed structures had not yet collapsed to a tertiary fold, but they already exhibited most of the secondary motifs. These results indicate a substantial separation of structural information in the Haar transform space, with the secondary structural information mainly present in the Haar coefficients of lower degree and the tertiary information in the coefficients of higher degree. Because of this separation, the representation of folded structures in the Haar transform space seems a promising candidate for overcoming the problem of premature convergence in genetic algorithms.
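The truncation experiment described above can be sketched in one dimension: transform a sequence of angles, zero the highest-degree (finest) coefficients, and invert. This is a generic illustration, not the authors' implementation; it assumes the sequence length is a power of two.

```python
def haar(x):
    """Unnormalized discrete Haar transform: repeated pairwise averages and differences."""
    out = list(x)
    n = len(out)
    while n > 1:
        avg = [(out[2 * i] + out[2 * i + 1]) / 2 for i in range(n // 2)]
        dif = [(out[2 * i] - out[2 * i + 1]) / 2 for i in range(n // 2)]
        out[:n] = avg + dif
        n //= 2
    return out

def ihaar(c):
    """Inverse of haar(): rebuild the signal from coarse to fine."""
    out = list(c)
    n = 1
    while n < len(out):
        avg, dif = out[:n], out[n : 2 * n]
        out[: 2 * n] = [v for a, d in zip(avg, dif) for v in (a + d, a - d)]
        n *= 2
    return out

angles = [10.0, 12.0, 50.0, 52.0, -60.0, -58.0, 30.0, 32.0]
coeffs = haar(angles)
# Zero the highest-degree half of the coefficients (the fine detail) ...
smooth = coeffs[: len(coeffs) // 2] + [0.0] * (len(coeffs) // 2)
print(ihaar(coeffs))  # exact reconstruction of the original angles
print(ihaar(smooth))  # smoothed version: only the pairwise means survive
```

The full coefficient set reconstructs the input exactly; the truncated set keeps the coarse (low-degree) shape, mirroring how secondary-structure information survives truncation in the paper.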
Inevitable randomness in discrete mathematics
Beck, Jozsef
2009-01-01
Mathematics has been called the science of order. The subject is remarkably good for generalizing specific cases to create abstract theories. However, mathematics has little to say when faced with highly complex systems, where disorder reigns. This disorder can be found in pure mathematical arenas, such as the distribution of primes, the 3n+1 conjecture, and class field theory. The purpose of this book is to provide examples--and rigorous proofs--of the complexity law: (1) discrete systems are either simple or they exhibit advanced pseudorandomness; (2) a priori probabilities often exist even when there is no intrinsic symmetry. Part of the difficulty in achieving this purpose is in trying to clarify these vague statements. The examples turn out to be fascinating instances of deep or mysterious results in number theory and combinatorics. This book considers randomness and complexity. The traditional approach to complexity--computational complexity theory--is to study very general complexity classes, such as P...
Quantum evolution by discrete measurements
Roa, L; Guevara, M L Ladron de; Delgado, A; Olivares-RenterIa, G; Klimov, A B
2007-01-01
In this article we review two ways of driving a quantum system to a known pure state via a discrete sequence of von Neumann measurements. The first of them assumes that the initial state of the system is unknown, and the evolution is attained only with the help of two non-commuting observables. For this method, the overall success probability is maximized when the eigenstates of the involved observables constitute mutually unbiased bases. The second method assumes the initial state is known, and it uses N observables which are consecutively measured to make the state of the system approach the target state. The probability of success of this procedure converges to 1 as the number of observables increases.
Quantum evolution by discrete measurements
Roa, L [Center for Quantum Optics and Quantum Information, Departamento de Fisica, Universidad de Concepcion, Casilla 160-C, Concepcion (Chile); Guevara, M L Ladron de [Departamento de Fisica, Universidad Catolica del Norte, Casilla 1280, Antofagasta (Chile); Delgado, A [Center for Quantum Optics and Quantum Information, Departamento de Fisica, Universidad de Concepcion, Casilla 160-C, Concepcion (Chile); Olivares-RenterIa, G [Center for Quantum Optics and Quantum Information, Departamento de Fisica, Universidad de Concepcion, Casilla 160-C, Concepcion (Chile); Klimov, A B [Departamento de Fisica, Universidad de Guadalajara, Revolucion 1500, 44420 Guadalajara, Jalisco (Mexico)
2007-10-15
In this article we review two ways of driving a quantum system to a known pure state via a discrete sequence of von Neumann measurements. The first of them assumes that the initial state of the system is unknown, and the evolution is attained only with the help of two non-commuting observables. For this method, the overall success probability is maximized when the eigenstates of the involved observables constitute mutually unbiased bases. The second method assumes the initial state is known, and it uses N observables which are consecutively measured to make the state of the system approach the target state. The probability of success of this procedure converges to 1 as the number of observables increases.
Discrete stochastic processes and applications
Collet, Jean-François
2018-01-01
This unique text for beginning graduate students gives a self-contained introduction to the mathematical properties of stochastic processes and presents their applications to Markov processes, coding theory, population dynamics, and search engine design. The book is ideal for a newly designed course in an introduction to probability and information theory. Prerequisites include working knowledge of linear algebra, calculus, and probability theory. The first part of the text focuses on the rigorous theory of Markov processes on countable spaces (Markov chains) and provides the basis for developing solid probabilistic intuition without the need for a course in measure theory. The approach taken is gradual, beginning with the case of discrete time and moving on to that of continuous time. The second part of this text is more applied; its core introduces various uses of convexity in probability and presents a nice treatment of entropy.
Discrete calculus methods for counting
Mariconda, Carlo
2016-01-01
This book provides an introduction to combinatorics, finite calculus, formal series, recurrences, and approximations of sums. Readers will find not only coverage of the basic elements of the subjects but also deep insights into a range of less common topics rarely considered within a single book, such as counting with occupancy constraints, a clear distinction between algebraic and analytical properties of formal power series, an introduction to discrete dynamical systems with a thorough description of Sarkovskii’s theorem, symbolic calculus, and a complete description of the Euler-Maclaurin formulas and their applications. Although several books touch on one or more of these aspects, precious few cover all of them. The authors, both pure mathematicians, have attempted to develop methods that will allow the student to formulate a given problem in a precise mathematical framework. The aim is to equip readers with a sound strategy for classifying and solving problems by pursuing a mathematically rigorous yet ...
Modeling discrete competitive facility location
Karakitsiou, Athanasia
2015-01-01
This book presents an up-to-date review of modeling and optimization approaches for location problems along with a new bi-level programming methodology which captures the effect of competition of both producers and customers on facility location decisions. While many optimization approaches simplify location problems by assuming decision making in isolation, this monograph focuses on models which take into account the competitive environment in which such decisions are made. New insights in modeling, algorithmic and theoretical possibilities are opened by this approach and new applications are possible. Competition on equal terms as well as competition between a market leader and followers is considered in this study; consequently, bi-level optimization methodology is emphasized and further developed. This book provides insights regarding modeling complexity and algorithmic approaches to discrete competitive location problems. In traditional location modeling, assignments of customer demands to supply sources are made ...
Discrete modelling of drapery systems
Thoeni, Klaus; Giacomini, Anna
2016-04-01
Drapery systems are an efficient and cost-effective measure in preventing and controlling rockfall hazards on rock slopes. The simplest form consists of a row of ground anchors along the top of the slope connected to a horizontal support cable from which a wire mesh is suspended down the face of the slope. Such systems are generally referred to as simple or unsecured draperies (Badger and Duffy 2012). Variations such as secured draperies, where a pattern of ground anchors is incorporated within the field of the mesh, and hybrid systems, where the upper part of an unsecured drapery is elevated to intercept rockfalls originating upslope of the installation, are becoming more and more popular. This work presents a discrete element framework for simulation of unsecured drapery systems and its variations. The numerical model is based on the classical discrete element method (DEM) and implemented into the open-source framework YADE (Šmilauer et al., 2010). The model takes all relevant interactions between block, drapery and slope into account (Thoeni et al., 2014) and was calibrated and validated based on full-scale experiments (Giacomini et al., 2012). The block is modelled as a rigid clump made of spherical particles which allows any shape to be approximated. The drapery is represented by a set of spherical particles with remote interactions. The behaviour of the remote interactions is governed by the constitutive behaviour of the wire and generally corresponds to a piecewise linear stress-strain relation (Thoeni et al., 2013). The same concept is used to model wire ropes. The rock slope is represented by rigid triangular elements where material properties (e.g., normal coefficient of restitution, friction angle) are assigned to each triangle. The capabilities of the developed model to simulate drapery systems and estimate the residual hazard involved with such systems are shown. References Badger, T.C., Duffy, J.D. (2012) Drapery systems. In: Turner, A.K., Schuster R
Tellinghuisen, Joel
2008-01-01
The method of least squares is probably the most powerful data analysis tool available to scientists. Toward a fuller appreciation of that power, this work begins with an elementary review of statistics fundamentals, and then progressively increases in sophistication as the coverage is extended to the theory and practice of linear and nonlinear least squares. The results are illustrated in application to data analysis problems important in the life sciences. The review of fundamentals includes the role of sampling and its connection to probability distributions, the Central Limit Theorem, and the importance of finite variance. Linear least squares are presented using matrix notation, and the significance of the key probability distributions (Gaussian, chi-square, and t) is illustrated with Monte Carlo calculations. The meaning of correlation is discussed, including its role in the propagation of error. When the data themselves are correlated, special methods are needed for the fitting, as they are also when fitting with constraints. Nonlinear fitting gives rise to nonnormal parameter distributions, but the 10% Rule of Thumb suggests that such problems will be insignificant when the parameter is sufficiently well determined. Illustrations include calibration with linear and nonlinear response functions, the dangers inherent in fitting inverted data (e.g., Lineweaver-Burk equation), an analysis of the reliability of the van't Hoff analysis, the problem of correlated data in the Guggenheim method, and the optimization of isothermal titration calorimetry procedures using the variance-covariance matrix for experiment design. The work concludes with illustrations on assessing and presenting results.
Semiclassical analysis, Witten Laplacians, and statistical mechanics
Helffer, Bernard
2002-01-01
This important book explains how the technique of Witten Laplacians may be useful in statistical mechanics. It considers the problem of analyzing the decay of correlations, after presenting its origin in statistical mechanics. In addition, it compares the Witten Laplacian approach with other techniques, such as the transfer matrix approach and its semiclassical analysis. The author concludes by providing a complete proof of the uniform Log-Sobolev inequality. Contents: Witten Laplacians Approach; Problems in Statistical Mechanics with Discrete Spins; Laplace Integrals and Transfer Operators; S
Statistical mechanics of cellular automata
Wolfram, S.
1983-01-01
Cellular automata are used as simple mathematical models to investigate self-organization in statistical mechanics. A detailed analysis is given of "elementary" cellular automata consisting of a sequence of sites with values 0 or 1 on a line, with each site evolving deterministically in discrete time steps according to definite rules involving the values of its nearest neighbors. With simple initial configurations, the cellular automata either tend to homogeneous states, or generate self-similar patterns with fractal dimensions of approximately 1.59 or 1.69. With "random" initial configurations, the irreversible character of the cellular automaton evolution leads to several self-organization phenomena. Statistical properties of the structures generated are found to lie in two universality classes, independent of the details of the initial state or the cellular automaton rules. More complicated cellular automata are briefly considered, and connections with dynamical systems theory and the formal theory of computation are discussed
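An "elementary" cellular automaton of the kind analyzed here is straightforward to simulate. The sketch below (illustrative, not from the paper) evolves rule 90, whose update is left XOR right, from a single seed; this produces the self-similar Sierpinski-like pattern with fractal dimension log2(3) ≈ 1.585, consistent with the ~1.59 value quoted in the abstract:

```python
def step(cells, rule):
    """One synchronous update of an elementary cellular automaton on a ring.

    cells: list of 0/1 site values; rule: Wolfram rule number 0-255."""
    n = len(cells)
    table = [(rule >> code) & 1 for code in range(8)]  # neighbourhood code -> new value
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

# Rule 90 (new value = left XOR right) from a single seed.
cells = [0] * 15
cells[7] = 1
history = [cells]
for _ in range(4):
    cells = step(cells, 90)
    history.append(cells)
```

Row t of `history` has ones exactly where the binomial coefficient C(t, k) is odd (Pascal's triangle mod 2), which is the self-similar structure the abstract refers to.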
A Discrete Spectral Problem and Related Hierarchy of Discrete Hamiltonian Lattice Equations
Xu Xixiang; Cao Weili
2007-01-01
Starting from a discrete matrix spectral problem, a hierarchy of lattice soliton equations is presented through discrete zero curvature representation. The resulting lattice soliton equations possess non-local Lax pairs. The Hamiltonian structures are established for the resulting hierarchy by the discrete trace identity. Liouville integrability of the resulting hierarchy is demonstrated.
Statistical methods and their applications in constructional engineering
1977-01-01
An introduction to the basic terms of statistics is followed by a discussion of elements of probability theory, common discrete and continuous distributions, simulation methods, statistical structural dynamics, and a cost-benefit analysis of the methods introduced.
Charkes, N.D.; Malmud, L.S.; Caswell, T.; Goldman, L.; Hall, J.; Lauby, V.; Lightfoot, W.; Maier, W.; Rosemond, G.
1975-01-01
Strontium nitrate Sr-87m bone scans were made preoperatively in a group of women with suspected breast cancer, 35 of whom subsequently underwent radical mastectomy. In 3 of the 35 (9 percent), the scans were abnormal despite the absence of clinical or roentgenographic evidence of metastatic disease. All three patients had extensive axillary lymph node involvement by tumor, and went on to have additional bone metastases, from which one died. Roentgenograms failed to detect the metastases in all three. Occult bone metastases account in part for the failure of radical mastectomy to cure some patients with breast cancer. It is recommended that all candidates for radical mastectomy have a preoperative bone scan. (U.S.)
Frequency scanning microstrip antennas
Danielsen, Magnus; Jørgensen, Rolf
1979-01-01
The principles of using radiating microstrip resonators as elements in a frequency scanning antenna array are described. The resonators are cascade-coupled. This gives a scan of the main lobe due to the phase-shift in the resonator, in addition to that created by the transmission-line phase-shift. Experimental results in X-band, in good agreement with the theory, show that it is possible to scan the main lobe an angle of ±30° by a variation of the frequency of ±300 MHz, with a 3 dB beamwidth of less than 10°. The directivity was 14.7 dB, while the gain was 8.1 dB. The efficiency might be improved...
Geometry and Hamiltonian mechanics on discrete spaces
Talasila, V; Clemente-Gallardo, J; Schaft, A J van der
2004-01-01
Numerical simulation is often crucial for analysing the behaviour of many complex systems which do not admit analytic solutions. To this end, one either converts a 'smooth' model into a discrete (in space and time) model, or models systems directly at a discrete level. The goal of this paper is to provide a discrete analogue of differential geometry, and to define on these discrete models a formal discrete Hamiltonian structure-in doing so we try to bring together various fundamental concepts from numerical analysis, differential geometry, algebraic geometry, simplicial homology and classical Hamiltonian mechanics. For example, the concept of a twisted derivation is borrowed from algebraic geometry for developing a discrete calculus. The theory is applied to a nonlinear pendulum and we compare the dynamics obtained through a discrete modelling approach with the dynamics obtained via the usual discretization procedures. Also an example of an energy-conserving algorithm on a simple harmonic oscillator is presented, and its effect on the Poisson structure is discussed
Cuspidal discrete series for semisimple symmetric spaces
Andersen, Nils Byrial; Flensted-Jensen, Mogens; Schlichtkrull, Henrik
2012-01-01
We propose a notion of cusp forms on semisimple symmetric spaces. We then study the real hyperbolic spaces in detail, and show that there exist both cuspidal and non-cuspidal discrete series. In particular, we show that all the spherical discrete series are non-cuspidal.
Discrete Riccati equation solutions: Distributed algorithms
D. G. Lainiotis
1996-01-01
In this paper new distributed algorithms for the solution of the discrete Riccati equation are introduced. The algorithms are used to provide robust and computationally efficient solutions to the discrete Riccati equation. The proposed distributed algorithms are theoretically interesting and computationally attractive.
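The paper's distributed algorithms are not reproduced here, but for context on what is being solved, a plain serial fixed-point iteration of the scalar discrete algebraic Riccati equation can be sketched as follows (the parameter values are illustrative, not from the paper):

```python
def dare_step(p, a, b, q, r):
    """One sweep of the scalar discrete algebraic Riccati map:
    p -> a^2 p - (a b p)^2 / (r + b^2 p) + q."""
    return a * a * p - (a * b * p) ** 2 / (r + b * b * p) + q

def solve_dare(a, b, q, r, tol=1e-12, max_iter=1000):
    """Iterate the Riccati map until it reaches its stabilizing fixed point."""
    p = q                     # start from the state cost, a common initial guess
    for _ in range(max_iter):
        p_next = dare_step(p, a, b, q, r)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

p = solve_dare(a=0.9, b=1.0, q=1.0, r=1.0)  # converges to ~1.4839
```

For these values the fixed point satisfies p² = 0.81 p + 1, so p = (0.81 + √4.6561)/2 ≈ 1.4839, which the iteration reaches in a handful of sweeps.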
Painleve test and discrete Boltzmann equations
Euler, N.; Steeb, W.H.
1989-01-01
The Painleve test for various discrete Boltzmann equations is performed. The connection with integrability is discussed. Furthermore, the Lie symmetry vector fields are derived and a group-theoretical reduction of the discrete Boltzmann equations to ordinary differential equations is performed. Lie-Bäcklund transformations are gained by performing the Painleve analysis for the ordinary differential equations. 16 refs
Variance Swap Replication: Discrete or Continuous?
Fabien Le Floc’h
2018-02-01
The popular replication formula to price variance swaps assumes continuity of traded option strikes. In practice, however, there is only a discrete set of option strikes traded on the market. We present here different discrete replication strategies and explain why the continuous replication price is more relevant.
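As background for what "discrete replication" means here: the continuous formula weights options by 1/K², and a discrete strike grid approximates that integral with per-strike weights. The trapezoidal discretization below is one common choice, sketched as an assumption; it is not necessarily one of the strategies compared in the article:

```python
def replication_weights(strikes):
    """Per-strike weights approximating the continuous 1/K^2 weighting of
    options in the log-contract replication of variance.

    Trapezoidal spacing; each strike carries half the gap to each neighbour
    (the last strike reuses its left gap, a simplifying edge convention)."""
    ws = []
    for i, k in enumerate(strikes):
        left = strikes[i] - strikes[i - 1] if i > 0 else strikes[1] - strikes[0]
        right = strikes[i + 1] - strikes[i] if i < len(strikes) - 1 else left
        ws.append(0.5 * (left + right) / (k * k))
    return ws

strikes = [float(k) for k in range(50, 201, 10)]  # hypothetical traded strike grid
weights = replication_weights(strikes)            # largest weight at the lowest strike
```

The 1/K² decay means low strikes dominate the portfolio, which is why the sparsity of traded low strikes drives the discrete-versus-continuous pricing gap the abstract discusses.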
Discretization vs. Rounding Error in Euler's Method
Borges, Carlos F.
2011-01-01
Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
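The first half of this trade-off is easy to observe numerically: halving the step size roughly halves explicit Euler's global discretization error (first-order convergence), while doubling the number of steps and hence the accumulated rounding error. A minimal sketch, using the model problem y' = y (our choice of example, not necessarily the article's):

```python
import math

def euler(f, y0, t0, t1, n):
    """Fixed-step explicit Euler for y' = f(t, y), integrated over [t0, t1] in n steps."""
    h, t, y = (t1 - t0) / n, t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# Model problem y' = y, y(0) = 1: the exact value at t = 1 is e.
errors = {n: abs(euler(lambda t, y: y, 1.0, 0.0, 1.0, n) - math.e)
          for n in (100, 200, 400)}
# Each doubling of n roughly halves the error; only at far smaller step
# sizes than shown here does accumulated rounding error take over.
```

In double precision the rounding contribution (roughly machine epsilon per step) only overtakes the O(h) discretization error at extremely small steps, which is exactly the regime the article uses to expose the trade-off.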
Discrete/PWM Ballast-Resistor Controller
King, Roger J.
1994-01-01
Circuit offers low switching loss and automatic compensation for failure of ballast resistor. The discrete/PWM ballast-resistor controller is an improved shunt voltage-regulator circuit designed to supply power from a high-resistance source to a low-impedance bus. Provides both coarse discrete voltage levels (by switching of ballast resistors) and continuous fine control of voltage via pulse-width modulation.
Current Density and Continuity in Discretized Models
Boykin, Timothy B.; Luisier, Mathieu; Klimeck, Gerhard
2010-01-01
Discrete approaches have long been used in numerical modelling of physical systems in both research and teaching. Discrete versions of the Schrodinger equation employing either one or several basis functions per mesh point are often used by senior undergraduates and beginning graduate students in computational physics projects. In studying…
Geometry and Hamiltonian mechanics on discrete spaces
Talasila, V.; Clemente-Gallardo, J.; Schaft, A.J. van der
2004-01-01
Numerical simulation is often crucial for analysing the behaviour of many complex systems which do not admit analytic solutions. To this end, one either converts a ‘smooth’ model into a discrete (in space and time) model, or models systems directly at a discrete level. The goal of this paper is to
Geometry and Hamiltonian mechanics on discrete spaces
Talasila, V.; Clemente Gallardo, J.J.; Clemente-Gallardo, J.; van der Schaft, Arjan
2004-01-01
Numerical simulation is often crucial for analysing the behaviour of many complex systems which do not admit analytic solutions. To this end, one either converts a 'smooth' model into a discrete (in space and time) model, or models systems directly at a discrete level. The goal of this paper is to
Discrete mathematics in the high school curriculum
Anderson, I.; Asch, van A.G.; van Lint, J.H.
2004-01-01
In this paper we present some topics from the field of discrete mathematics which might be suitable for the high school curriculum. These topics yield both easy to understand challenging problems and important applications of discrete mathematics. We choose elements from number theory and various
Discrete Fourier analysis of multigrid algorithms
van der Vegt, Jacobus J.W.; Rhebergen, Sander
2011-01-01
The main topic of this report is a detailed discussion of the discrete Fourier multilevel analysis of multigrid algorithms. First, a brief overview of multigrid methods is given for discretizations of both linear and nonlinear partial differential equations. Special attention is given to the
Radiopharmaceutical agents for skeletal scanning
Jansen, S.E.; Van Aswegen, A.; Loetter, M.G.; Minnaar, P.C.; Otto, A.C.; Goedhals, L.; Dedekind, P.S.
1987-01-01
The quality of bone scan images obtained with a locally produced and with an imported radiopharmaceutical bone agent, methylene diphosphonate (MDP), was compared visually. Standard skeletal imaging was carried out on 10 patients using both agents, with a period of 2 to 7 days between studies with alternate agents. Equal amounts of activity were administered for both agents. All images were acquired on Polaroid film for subsequent evaluation. The acquisition time for a standard amount of counts per study was recorded. Three physicians with applicable experience evaluated image quality (on a 4 point scale) and detectability of metastasis (on a 3 point scale). There was no statistically significant difference (p > 0.05) between the two agents by paired t-test or Hotelling's T² analysis. It is concluded that the imaging properties of the locally produced and the imported MDP are similar
Patil, Sumati; Datar, Suwarna; Dharmadhikari, C V
2018-03-01
Scanning tunneling spectroscopy (STS) is used for investigating variations in electronic properties of gold nanoparticles (AuNPs) and their composite with urethane-methacrylate comb polymer (UMCP) as a function of temperature. Films are prepared by drop casting AuNPs and UMCP in the desired manner on silicon substrates. Samples are further analyzed for morphology under scanning electron microscopy (SEM) and atomic force microscopy (AFM). STS measurements performed in the temperature range of 33 °C to 142 °C show systematic variation in current versus voltage (I-V) curves, exhibiting semiconducting to metallic transition/Schottky behavior for different samples, depending upon preparation method and as a function of temperature. During current versus time (I-t) measurement for AuNPs, random telegraphic noise is observed at room temperature. Random switching of tunneling current between two discrete levels is observed for this sample. Power spectra derived from I-t show 1/f² dependence. Statistical analysis of fluctuations shows exponential behavior with time width τ ≈ 7 ms. Local density of states (LDOS) plots derived from I-V curves of each sample show a systematic shift of the valence/conduction band edge towards/away from the Fermi level with increasing temperature. Schottky emission is the best-fitting electron emission mechanism for all samples over a certain range of bias voltage. Schottky plots are used to calculate barrier heights, and temperature-dependent measurements helped in measuring activation energies for electron transport in all samples.
A combined scanning tunnelling microscope and x-ray interferometer
Yacoot, Andrew; Kuetgens, Ulrich; Koenders, Ludger; Weimann, Thomas
2001-10-01
A monolithic x-ray interferometer made from silicon and a scanning tunnelling microscope have been combined and used to calibrate grating structures with periodicities of 100 nm or less. The x-ray interferometer is used as a translation stage which moves in discrete steps of 0.192 nm, the lattice spacing of the silicon (220) planes. Hence, movements are traceable to the definition of the metre, and the nonlinearity associated with the optical interferometers used to measure displacement in more conventional metrological scanning probe microscopes (MSPMs) is removed.
Tomographic scanning apparatus
Abele, M.
1983-01-01
A computerized tomographic scanning apparatus suitable for diagnosis and for improving target identification in stereotactic neurosurgery is described. It consists of a base, a source of penetrating energy, a detector which produces scanning signals and detector positioning means. A frame with top and bottom arms secures the detector and source to the top and bottom arms respectively. A drive mechanism rotates the frame about an axis along which the frame may also be moved. Finally, the detector may be moved relative to the bottom arm in a direction contrary to the rotation of the frame. (U.K.)
Scanning the phenomenological MSSM
Wuerzinger, Jonas
2017-01-01
A framework to perform scans in the 19-dimensional phenomenological MSSM is developed and used to re-evaluate the ATLAS experiment's sensitivity to R-parity-conserving supersymmetry with LHC Run 2 data ($\sqrt{s}=13$ TeV), using results from 14 separate ATLAS searches. We perform a dedicated $\tilde{t}_1$ scan, only considering models with $m_{\tilde{t}_1}<1$ TeV, while allowing both a neutralino ($\tilde{\chi}_1^0$) and a sneutrino ($\tilde{\nu}$)...
Gómez Arranz, Paula; Courtney, Michael
This report describes the tests carried out on a scanning lidar at the DTU Test Station for large wind turbines, Høvsøre. The tests were divided in two parts. In the first part, the purpose was to obtain wind speed calibrations at two heights against two cup anemometers mounted on a mast...
Adaptive Optical Scanning Holography
Tsang, P. W. M.; Poon, Ting-Chung; Liu, J.-P.
2016-01-01
Optical Scanning Holography (OSH) is a powerful technique that employs a single-pixel sensor and a row-by-row scanning mechanism to capture the hologram of a wide-view, three-dimensional object. However, the time required to acquire a hologram with OSH is rather lengthy. In this paper, we propose an enhanced framework, which is referred to as Adaptive OSH (AOSH), to shorten the holographic recording process. We have demonstrated that the AOSH method is capable of decreasing the acquisition time by up to an order of magnitude, while preserving the content of the hologram favorably. PMID:26916866
Handbook on modelling for discrete optimization
Pitsoulis, Leonidas; Williams, H
2006-01-01
The primary objective underlying the Handbook on Modelling for Discrete Optimization is to demonstrate and detail the pervasive nature of Discrete Optimization. While its applications cut across an incredibly wide range of activities, many of the applications are only known to specialists. It is the aim of this handbook to correct this. It has long been recognized that "modelling" is a critically important mathematical activity in designing algorithms for solving these discrete optimization problems. Nevertheless solving the resultant models is also often far from straightforward. In recent years it has become possible to solve many large-scale discrete optimization problems. However, some problems remain a challenge, even though advances in mathematical methods, hardware, and software technology have pushed the frontiers forward. This handbook couples the difficult, critical-thinking aspects of mathematical modeling with the hot area of discrete optimization. It will be done in an academic handbook treatment...
Discrete elements method of neutral particle transport
Mathews, K.A.
1983-01-01
A new discrete elements (L_N) transport method is derived and compared to the discrete ordinates S_N method, theoretically and by numerical experimentation. The discrete elements method is more accurate than discrete ordinates and strongly ameliorates ray effects for the practical problems studied. The discrete elements method is shown to be more cost effective, in terms of execution time with comparable storage to attain the same accuracy, for a one-dimensional test case using linear characteristic spatial quadrature. In a two-dimensional test case, a vacuum duct in a shield, L_N is more consistently convergent toward a Monte Carlo benchmark solution than S_N, using step characteristic spatial quadrature. An analysis of the interaction of angular and spatial quadrature in xy-geometry indicates the desirability of using linear characteristic spatial quadrature with the L_N method
Spatially localized, temporally quasiperiodic, discrete nonlinear excitations
Cai, D.; Bishop, A.R.; Gronbech-Jensen, N.
1995-01-01
In contrast to the commonly discussed discrete breather, which is a spatially localized, time-periodic solution, we present an exact solution of a discrete nonlinear Schroedinger breather which is a spatially localized, temporally quasiperiodic nonlinear coherent excitation. This breather is a multiple-soliton solution in the sense of the inverse scattering transform. A discrete breather of multiple frequencies is conceptually important in studies of nonlinear lattice systems. We point out that, for this breather, the incommensurability of its frequencies is a discrete lattice effect and these frequencies become commensurate in the continuum limit. To understand the dynamical properties of the breather, we also discuss its stability and its behavior in the presence of an external potential. Finally, we indicate how to obtain an exact N-soliton breather as a discrete generalization of the continuum multiple-soliton solution
Laplacians on discrete and quantum geometries
Calcagni, Gianluca; Oriti, Daniele; Thürigen, Johannes
2013-01-01
We extend discrete calculus for arbitrary (p-form) fields on embedded lattices to abstract discrete geometries based on combinatorial complexes. We then provide a general definition of discrete Laplacian using both the primal cellular complex and its combinatorial dual. The precise implementation of geometric volume factors is not unique and, comparing the definition with a circumcentric and a barycentric dual, we argue that the latter is, in general, more appropriate because it induces a Laplacian with more desirable properties. We give the expression of the discrete Laplacian in several different sets of geometric variables, suitable for computations in different quantum gravity formalisms. Furthermore, we investigate the possibility of transforming from position to momentum space for scalar fields, thus setting the stage for the calculation of heat kernel and spectral dimension in discrete quantum geometries. (paper)
Discrete breathers in graphane: Effect of temperature
Baimova, J. A., E-mail: julia.a.baimova@gmail.com [Russian Academy of Sciences, Institute of Metal Physics, Ural Branch (Russian Federation); Murzaev, R. T.; Lobzenko, I. P.; Dmitriev, S. V. [Russian Academy of Sciences, Institute for Metals Superplasticity Problems (Russian Federation); Zhou, Kun [Nanyang Technological University, School of Mechanical and Aerospace Engineering (Singapore)
2016-05-15
The discrete breathers in graphane in thermodynamic equilibrium in the temperature range 50–600 K are studied by molecular dynamics simulation. A discrete breather is a hydrogen atom vibrating along the normal to a sheet of graphane at a high amplitude. As was found earlier, the lifetime of a discrete breather at zero temperature corresponds to several tens of thousands of vibrations. The effect of temperature on the decay time of discrete breathers and the probability of their detachment from a sheet of graphane are studied in this work. It is shown that closely spaced breathers can exchange energy with each other at zero temperature. The data obtained suggest that thermally activated discrete breathers can be involved in the dehydrogenation of graphane, which is important for hydrogen energetics.
Magnetically scanned proton therapy beams: rationales and techniques
Jones, D.T.L.; Schreuder, A.N.
2000-01-01
Perhaps the most important advantages of beam scanning systems for proton therapy in comparison with conventional passive beam spreading systems are: (1) Intensity modulation and inverse planning are possible. (2) There is negligible reduction in the range of the beam. (3) Integral dose is reduced as dose conformation to the proximal edge of the lesion is possible. (4) In principle no field-specific modifying devices are required. (5) There is less activation of the surroundings. (6) Scanning systems are almost infinitely flexible. The main disadvantages include: (1) Scanning systems are more complicated and therefore potentially less reliable and more dangerous. (2) The development of such systems is more demanding in terms of cost, time and manpower. (3) More stable beams are required. (4) Dose and beam position monitoring are more difficult. (5) The problems associated with patient and organ movement are more severe. There are several techniques which can be used for scanning. For lateral beam spreading, circular scanning (wobbling) or linear scanning can be done. In the latter case the beam can be scanned continuously or in a discrete fashion (spot scanning). Another possibility is to undertake the fastest scan in one dimension (strip scanning) and translate the patient or the scanning magnet in the other dimension. Depth variation is achieved by interposing degraders in the beam (cyclotrons) or by changing the beam energy (synchrotrons). The aim of beam scanning is to deliver a predetermined dose at any point in the body. Special safety precautions must be taken because of the high instantaneous dose rates. The beam position and the dose delivered at each point must be accurately and redundantly determined. (author)
Ding Qing
2007-01-01
We prove that the integrable-nonintegrable discrete nonlinear Schroedinger equation (AL-DNLS) introduced by Cai, Bishop and Gronbech-Jensen (Phys. Rev. Lett. 72, 591 (1994)) is the discrete gauge equivalent of an integrable-nonintegrable discrete Heisenberg model from the geometric point of view. Then we study whether the transmission and bifurcation properties of the AL-DNLS equation are preserved under the action of discrete gauge transformations. Our results reveal that the transmission property of the AL-DNLS equation is completely preserved, and the bifurcation property conditionally preserved, in passing to the integrable-nonintegrable discrete Heisenberg model
The dynamics of discrete populations and series of events
Hopcraft, Keith Iain; Ridley, Kevin D
2014-01-01
Introduction; References. Statistical Preliminaries: Introduction; Probability Distributions; Moment-Generating Functions; Discrete Processes; Series of Events; Summary; Further Reading. Markovian Population Processes: Introduction; Births and Deaths; Immigration and the Poisson Process; The Effect of Measurement; Correlation of Counts; Summary; Further Reading. The Birth-Death-Immigration Process: Introduction; Rate Equations for the Process; Equation for the Generating Function; General Time-Dependent Solution; Fluctuation Characteristics of a Birth-Death-Immigration Population; Sampling and Measurement Processes; Correlation of Counts; Summa
Estimation in Discretely Observed Diffusions Killed at a Threshold
Bibbona, Enrico; Ditlevsen, Susanne
2013-01-01
are modelled as discretely observed diffusions which are killed when the threshold is reached. Statistical inference is often based on a misspecified likelihood ignoring the presence of the threshold, causing severe bias, e.g. the bias incurred in the drift parameters of the Ornstein–Uhlenbeck model...... for biologically relevant parameters can be up to 25–100 per cent. We compute or approximate the likelihood function of the killed process. When estimating from a single trajectory, considerable bias may still be present, and the distribution of the estimates can be heavily skewed and with a huge variance......
A Two-stage Improvement Method for Robot Based 3D Surface Scanning
He, F. B.; Liang, Y. D.; Wang, R. F.; Lin, Y. S.
2018-03-01
Since the surface of an unknown object is difficult to measure or recognize precisely, 3D laser scanning technology was introduced and is widely used in surface reconstruction. Usually, slower scanning yields better quality, while faster scanning yields worse quality. This paper therefore presents a new two-stage scanning method that pursues high surface-scanning quality at a faster speed. The first stage is a rough scan to obtain general point cloud data of the object's surface; the second stage is a specific scan to repair missing regions, which are determined by a chord length discretization method. Meanwhile, a system containing a robotic manipulator and a handheld scanner was developed to implement the two-stage scanning method, and the relevant paths were planned according to minimum enclosing ball and regional coverage theories.
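The "chord length discrete method" mentioned in the abstract is not spelled out there; a common reading is adaptive subdivision of a curve until the midpoint's deviation from the chord falls below a tolerance. The sketch below is a minimal illustration under that assumption (the function name `chord_discretize` and the tolerance are hypothetical, not from the paper):

```python
import math

def chord_discretize(curve, t0, t1, tol, depth=0, max_depth=20):
    """Recursively split [t0, t1] until the curve's midpoint deviates
    from the chord between the endpoints by less than `tol`."""
    p0, p1 = curve(t0), curve(t1)
    tm = 0.5 * (t0 + t1)
    pm = curve(tm)
    chord_mid = tuple(0.5 * (a + b) for a, b in zip(p0, p1))
    dev = math.dist(pm, chord_mid)          # sagitta-like deviation
    if dev < tol or depth >= max_depth:
        return [p0, p1]
    left = chord_discretize(curve, t0, tm, tol, depth + 1, max_depth)
    right = chord_discretize(curve, tm, t1, tol, depth + 1, max_depth)
    return left[:-1] + right                # drop the duplicated midpoint

# Example: quarter of the unit circle, sampled to 1e-3 chord deviation
pts = chord_discretize(lambda t: (math.cos(t), math.sin(t)), 0.0, math.pi / 2, 1e-3)
```

Regions of high curvature automatically receive more sample points, which is the property that makes such a criterion useful for deciding where a rescan is needed.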
Full Text Available ... process that regulates the rate at which the body converts food to energy. What are some common uses of the procedure? The thyroid scan is used to determine the size, shape and position of the thyroid gland. The ...
Dialogue scanning measuring systems
Borodyuk, V.P.; Shkundenkov, V.N.
1985-01-01
The main developments of scanning measuring systems intended for mass precision processing of films in nuclear physics problems and in related fields are reviewed. Special attention is paid to the problem of creating dialogue systems, which simplify the development of control computer software
Cox, B. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada)
1970-05-15
The JSM-11 scanning electron microscope at CRNL has been used extensively for topographical studies of oxidized metals, fracture surfaces, entomological and biological specimens. A non-dispersive X-ray attachment permits the microanalysis of the surface features. Techniques for the production of electron channeling patterns have been developed. (author)
Binnig, G.; Rohrer, H.
1983-01-01
Based on vacuum tunneling, a novel type of microscope, the scanning tunneling microscope (STM) was developed. It has an unprecedented resolution in real space on an atomic scale. The authors review the important technical features, illustrate the power of the STM for surface topographies and discuss its potential in other areas of science and technology. (Auth.)
Morales G, R.; Cano P, R.; Mendoza P, R.
1993-01-01
In this chapter a review is made of different uses of bone scan in rheumatic diseases. These include reflex sympathetic dystrophy, osteomyelitis, spondyloarthropathies, metabolic bone diseases, avascular bone necrosis and bone injuries due to sports. There are also some comments concerning pediatric pathology and orthopedics. (authors). 19 refs., 9 figs
Full Text Available ... information. The thyroid scan and thyroid uptake provide information about the structure and function of the thyroid. The thyroid is a gland in the neck that controls metabolism , a chemical process that regulates the rate at which the body ...
Tomographic scanning apparatus
1981-01-01
Details are given of a tomographic scanning apparatus, with particular reference to the means of adjusting the apparent gain of the signal processing means for receiving output signals from the detectors, to compensate for drift in the gain characteristics, including means for passing a reference signal. (U.K.)
Stabilized radiographic scanning agent
Fawzi, M.B.
1979-01-01
A stable composition useful in the preparation of technetium-99m-based radiographic scanning agents has been developed. The composition contains a stabilizing amount of gentisate stabilizer selected from gentisic acid and its soluble pharmaceutically-acceptable salts and esters. (E.G.)
Anon.
1980-01-01
The principle underlying the design of the scanning electron microscope (SEM), the design and functioning of SEM are described. Its applications in the areas of microcircuitry and materials science are outlined. The development of SEM in India is reviewed. (M.G.B.)
Tofe, A.J.
1976-01-01
A stable radiographic scanning agent based on 99mTc has been developed. The substance contains a pertechnetate reduction agent (tin(II) chloride, chromium(II) chloride, or iron(II) sulphate) as well as an organ-specific carrier and ascorbic acid or a pharmacologically admissible salt or ester of ascorbic acid. (VJ) [de
Full Text Available ... you: have had any tests, such as an x-ray or CT scan, surgeries or treatments using iodinated ... How does the procedure work? With ordinary x-ray examinations, an image is made by passing x- ...
Full Text Available ... for a thyroid scan is 30 minutes or less. Thyroid Uptake You will be given radioactive iodine (I-123 or I-131) in liquid or capsule form to swallow. The thyroid uptake will begin several hours to 24 hours later. Often, two separate uptake ...
Full Text Available ... an x-ray or CT scan, surgeries or treatments using iodinated contrast material within the last two months. are taking medications or ingesting other substances that contain iodine , including kelp, seaweed, cough syrups, multivitamins or heart medications. have any ...
Compatible Spatial Discretizations for Partial Differential Equations
Arnold, Douglas, N, ed.
2004-11-25
From May 11--15, 2004, the Institute for Mathematics and its Applications held a hot topics workshop on Compatible Spatial Discretizations for Partial Differential Equations. The numerical solution of partial differential equations (PDE) is a fundamental task in science and engineering. The goal of the workshop was to bring together a spectrum of scientists at the forefront of the research in the numerical solution of PDEs to discuss compatible spatial discretizations. We define compatible spatial discretizations as those that inherit or mimic fundamental properties of the PDE such as topology, conservation, symmetries, and positivity structures and maximum principles. A wide variety of discretization methods applied across a wide range of scientific and engineering applications have been designed to or found to inherit or mimic intrinsic spatial structure and reproduce fundamental properties of the solution of the continuous PDE model at the finite dimensional level. A profusion of such methods and concepts relevant to understanding them have been developed and explored: mixed finite element methods, mimetic finite differences, support operator methods, control volume methods, discrete differential forms, Whitney forms, conservative differencing, discrete Hodge operators, discrete Helmholtz decomposition, finite integration techniques, staggered grid and dual grid methods, etc. The workshop sought to foster communication among the diverse groups of researchers designing, applying, and studying such methods, as well as researchers involved in the practical solution of large scale problems that may benefit from advancements in such discretizations; to help elucidate the relations between the different methods and concepts; and to generally advance our understanding in the area of compatible spatial discretization methods for PDE. Particular points of emphasis included: + Identification of intrinsic properties of PDE models that are critical for the fidelity of numerical
Discrete-Feature Model Implementation of SDM-Site Forsmark
Geier, Joel
2010-03-01
A discrete-feature model (DFM) was implemented for the Forsmark repository site based on the final site descriptive model from surface based investigations. The discrete-feature conceptual model represents deformation zones, individual fractures, and other water-conducting features around a repository as discrete conductors surrounded by a rock matrix which, in the present study, is treated as impermeable. This approximation is reasonable for sites in crystalline rock which has very low permeability, apart from that which results from macroscopic fracturing. Models are constructed based on the geological and hydrogeological description of the sites and engineering designs. Hydraulic heads and flows through the network of water-conducting features are calculated by the finite-element method, and are used in turn to simulate migration of non-reacting solute by a particle-tracking method, in order to estimate the properties of pathways by which radionuclides could be released to the biosphere. Stochastic simulation is used to evaluate portions of the model that can only be characterized in statistical terms, since many water-conducting features within the model volume cannot be characterized deterministically. Chapter 2 describes the methodology by which discrete features are derived to represent water-conducting features around the hypothetical repository at Forsmark (including both natural features and features that result from the disturbance of excavation), and then assembled to produce a discrete-feature network model for numerical simulation of flow and transport. Chapter 3 describes how site-specific data and repository design are adapted to produce the discrete-feature model. Chapter 4 presents results of the calculations. These include utilization factors for deposition tunnels based on the emplacement criteria that have been set forth by the implementers, flow distributions to the deposition holes, and calculated properties of discharge paths as well as
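The core numerical step described above, solving for hydraulic heads on a network of discrete conductors and deriving flows, can be sketched in a few lines. This is a toy stand-in, not the DFM code: the network, conductances, and boundary heads below are hypothetical values, and the real model uses finite elements on planar features rather than a graph Laplacian.

```python
import numpy as np

# Toy network: 4 nodes, edges = (i, j, conductance).  Nodes 0 and 3 are
# fixed-head boundaries; nodes 1 and 2 are interior fracture intersections.
edges = [(0, 1, 2.0), (1, 2, 1.0), (1, 3, 0.5), (2, 3, 1.5)]
n, fixed = 4, {0: 1.0, 3: 0.0}              # boundary heads (hypothetical)

# Assemble the conductance-weighted graph Laplacian
L = np.zeros((n, n))
for i, j, c in edges:
    L[i, i] += c; L[j, j] += c
    L[i, j] -= c; L[j, i] -= c

free = [k for k in range(n) if k not in fixed]
h = np.zeros(n)
for k, v in fixed.items():
    h[k] = v
# Solve L_ff h_f = -L_fc h_c for the interior heads
A = L[np.ix_(free, free)]
b = -L[np.ix_(free, list(fixed))] @ np.array([fixed[k] for k in fixed])
h[free] = np.linalg.solve(A, b)

# Edge flows q = c * (h_i - h_j); interior nodes must conserve mass
flows = {(i, j): c * (h[i] - h[j]) for i, j, c in edges}
```

Particle tracking for pathway properties would then follow these edge flows from a source node to the boundary, weighting route choices by the outgoing flow at each intersection.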
Discrete-Feature Model Implementation of SDM-Site Forsmark
Geier, Joel (Clearwater Hardrock Consulting, Corvallis, OR (United States))
2010-03-15
A discrete-feature model (DFM) was implemented for the Forsmark repository site based on the final site descriptive model from surface based investigations. The discrete-feature conceptual model represents deformation zones, individual fractures, and other water-conducting features around a repository as discrete conductors surrounded by a rock matrix which, in the present study, is treated as impermeable. This approximation is reasonable for sites in crystalline rock which has very low permeability, apart from that which results from macroscopic fracturing. Models are constructed based on the geological and hydrogeological description of the sites and engineering designs. Hydraulic heads and flows through the network of water-conducting features are calculated by the finite-element method, and are used in turn to simulate migration of non-reacting solute by a particle-tracking method, in order to estimate the properties of pathways by which radionuclides could be released to the biosphere. Stochastic simulation is used to evaluate portions of the model that can only be characterized in statistical terms, since many water-conducting features within the model volume cannot be characterized deterministically. Chapter 2 describes the methodology by which discrete features are derived to represent water-conducting features around the hypothetical repository at Forsmark (including both natural features and features that result from the disturbance of excavation), and then assembled to produce a discrete-feature network model for numerical simulation of flow and transport. Chapter 3 describes how site-specific data and repository design are adapted to produce the discrete-feature model. Chapter 4 presents results of the calculations. These include utilization factors for deposition tunnels based on the emplacement criteria that have been set forth by the implementers, flow distributions to the deposition holes, and calculated properties of discharge paths as well as
Perfect discretization of reparametrization invariant path integrals
Bahr, Benjamin; Dittrich, Bianca; Steinhaus, Sebastian
2011-01-01
To obtain a well-defined path integral one often employs discretizations. In the case of gravity and reparametrization-invariant systems, the latter of which we consider here as a toy example, discretizations generically break diffeomorphism and reparametrization symmetry, respectively. This has severe implications, as these symmetries determine the dynamics of the corresponding system. Indeed we will show that a discretized path integral with reparametrization-invariance is necessarily also discretization independent and therefore uniquely determined by the corresponding continuum quantum mechanical propagator. We use this insight to develop an iterative method for constructing such a discretized path integral, akin to a Wilsonian RG flow. This allows us to address the problem of discretization ambiguities and of an anomaly-free path integral measure for such systems. The latter is needed to obtain a path integral, that can act as a projector onto the physical states, satisfying the quantum constraints. We will comment on implications for discrete quantum gravity models, such as spin foams.
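The discretization-independence property invoked above can be stated compactly. As a sketch of the general idea (not the paper's precise construction), a perfect discretization is one whose one-step amplitude already satisfies the composition rule that defines the continuum propagator, so that coarse graining leaves it invariant:

```latex
% Fixed-point (coarse-graining) condition for a perfect discretization:
K(x_f, t_f;\, x_i, t_i)
  = \int \mathrm{d}x \; K(x_f, t_f;\, x, t)\, K(x, t;\, x_i, t_i),
\qquad t_i < t < t_f .
% Iterating this relation starting from a naive one-step discretization
% acts like a Wilsonian RG flow whose fixed point is the continuum
% quantum mechanical propagator.
```

For a reparametrization-invariant system the label $t$ carries no physical meaning, which is why discretization independence and reparametrization invariance go hand in hand in the argument above.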
Perfect discretization of reparametrization invariant path integrals
Bahr, Benjamin; Dittrich, Bianca; Steinhaus, Sebastian
2011-05-01
To obtain a well-defined path integral one often employs discretizations. In the case of gravity and reparametrization-invariant systems, the latter of which we consider here as a toy example, discretizations generically break diffeomorphism and reparametrization symmetry, respectively. This has severe implications, as these symmetries determine the dynamics of the corresponding system. Indeed we will show that a discretized path integral with reparametrization-invariance is necessarily also discretization independent and therefore uniquely determined by the corresponding continuum quantum mechanical propagator. We use this insight to develop an iterative method for constructing such a discretized path integral, akin to a Wilsonian RG flow. This allows us to address the problem of discretization ambiguities and of an anomaly-free path integral measure for such systems. The latter is needed to obtain a path integral, that can act as a projector onto the physical states, satisfying the quantum constraints. We will comment on implications for discrete quantum gravity models, such as spin foams.
Higher dimensional discrete Cheeger inequalities
Anna Gundert
2015-01-01
Full Text Available For graphs there exists a strong connection between spectral and combinatorial expansion properties. This is expressed, e.g., by the discrete Cheeger inequality, the lower bound of which states that $\lambda(G) \leq h(G)$, where $\lambda(G)$ is the second smallest eigenvalue of the Laplacian of a graph $G$ and $h(G)$ is the Cheeger constant measuring the edge expansion of $G$. We are interested in generalizations of expansion properties to finite simplicial complexes of higher dimension (or uniform hypergraphs). Whereas higher dimensional Laplacians were introduced already in 1945 by Eckmann, the generalization of edge expansion to simplicial complexes is not straightforward. Recently, a topologically motivated notion analogous to edge expansion that is based on $\mathbb{Z}_2$-cohomology was introduced by Gromov and independently by Linial, Meshulam and Wallach. It is known that for this generalization there is no direct higher dimensional analogue of the lower bound of the Cheeger inequality. A different, combinatorially motivated generalization of the Cheeger constant, denoted by $h(X)$, was studied by Parzanchevski, Rosenthal and Tessler. They showed that indeed $\lambda(X) \leq h(X)$, where $\lambda(X)$ is the smallest non-trivial eigenvalue of the $(k-1)$-dimensional upper Laplacian, for the case of $k$-dimensional simplicial complexes $X$ with complete $(k-1)$-skeleton. Whether this inequality also holds for $k$-dimensional complexes with non-complete $(k-1)$-skeleton had been an open question. We give two proofs of the inequality for arbitrary complexes. The proofs differ strongly in the methods and structures employed, and each allows for a different kind of additional strengthening of the original result.
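The graph-level inequality the abstract starts from is easy to check numerically. Note that normalization conventions vary; the sketch below uses the combinatorial Laplacian with $h(G) = \min_{|S| \le n/2} |\partial S|/|S|$, for which the easy direction of the Cheeger inequality reads $\lambda_2 \le 2\,h(G)$ (the abstract's $\lambda(G) \le h(G)$ refers to a different normalization):

```python
import itertools
import numpy as np

def laplacian(n, edges):
    """Combinatorial graph Laplacian L = D - A."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

def cheeger_constant(n, edges):
    """h(G) = min over subsets S with |S| <= n/2 of |boundary(S)| / |S|,
    by brute force (fine for tiny graphs)."""
    best = float("inf")
    for r in range(1, n // 2 + 1):
        for S in itertools.combinations(range(n), r):
            Sset = set(S)
            cut = sum(1 for i, j in edges if (i in Sset) != (j in Sset))
            best = min(best, cut / len(Sset))
    return best

# 6-cycle: lambda_2 = 2 - 2*cos(2*pi/6) = 1 and h = 2/3
edges = [(k, (k + 1) % 6) for k in range(6)]
lam2 = sorted(np.linalg.eigvalsh(laplacian(6, edges)))[1]
h = cheeger_constant(6, edges)
```

The higher-dimensional generalizations discussed in the abstract replace this Laplacian by the $(k-1)$-dimensional upper Laplacian and the vertex subsets by cochains, which is where the combinatorics stops being straightforward.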
Maruno, Ken-ichi; Biondini, Gino
2004-01-01
We present a class of solutions of the two-dimensional Toda lattice equation, its fully discrete analogue and its ultra-discrete limit. These solutions demonstrate the existence of soliton resonance and web-like structure in discrete integrable systems such as differential-difference equations, difference equations and cellular automata (ultra-discrete equations)
Hairs of discrete symmetries and gravity
Choi, Kang Sin [Scranton Honors Program, Ewha Womans University, Seodaemun-Gu, Seoul 03760 (Korea, Republic of); Center for Fields, Gravity and Strings, CTPU, Institute for Basic Sciences, Yuseong-Gu, Daejeon 34047 (Korea, Republic of); Kim, Jihn E., E-mail: jihnekim@gmail.com [Department of Physics, Kyung Hee University, 26 Gyungheedaero, Dongdaemun-Gu, Seoul 02447 (Korea, Republic of); Center for Axion and Precision Physics Research (IBS), 291 Daehakro, Yuseong-Gu, Daejeon 34141 (Korea, Republic of); Kyae, Bumseok [Department of Physics, Pusan National University, 2 Busandaehakro-63-Gil, Geumjeong-Gu, Busan 46241 (Korea, Republic of); Nam, Soonkeon [Department of Physics, Kyung Hee University, 26 Gyungheedaero, Dongdaemun-Gu, Seoul 02447 (Korea, Republic of)
2017-06-10
Gauge symmetries are known to be respected by gravity because gauge charges carry flux lines, but global charges do not carry flux lines and are not conserved by gravitational interaction. For discrete symmetries, they are spontaneously broken in the Universe, forming domain walls. Since the realization of discrete symmetries in the Universe must involve the vacuum expectation values of Higgs fields, a string-like configuration (hair) at the intersection of domain walls in the Higgs vacua can be realized. Therefore, we argue that discrete charges are also respected by gravity.
Hairs of discrete symmetries and gravity
Kang Sin Choi
2017-06-01
Full Text Available Gauge symmetries are known to be respected by gravity because gauge charges carry flux lines, but global charges do not carry flux lines and are not conserved by gravitational interaction. For discrete symmetries, they are spontaneously broken in the Universe, forming domain walls. Since the realization of discrete symmetries in the Universe must involve the vacuum expectation values of Higgs fields, a string-like configuration (hair) at the intersection of domain walls in the Higgs vacua can be realized. Therefore, we argue that discrete charges are also respected by gravity.
Discrete Tomography and Imaging of Polycrystalline Structures
Alpers, Andreas
High resolution transmission electron microscopy is commonly considered as the standard application for discrete tomography. While this has yet to be technically realized, new applications with a similar flavor have emerged in materials science. In our group at Risø DTU (Denmark's National...... Laboratory for Sustainable Energy), for instance, we study polycrystalline materials via synchrotron X-ray diffraction. Several reconstruction problems arise, most of which exhibit inherently discrete aspects. In this talk I want to give a concise mathematical introduction to some of these reconstruction...... problems. Special focus is on their relationship to classical discrete tomography. Several open mathematical questions will be mentioned along the way....
Ensemble simulations with discrete classical dynamics
Toxværd, Søren
2013-01-01
For discrete classical Molecular dynamics (MD) obtained by the "Verlet" algorithm (VA) with the time increment $h$ there exists a shadow Hamiltonian $\tilde{H}$ with energy $\tilde{E}(h)$, for which the discrete particle positions lie on the analytic trajectories for $\tilde{H}$...... $\tilde{E}(h)$ is employed to determine the relation with the corresponding energy, $E$, for the analytic dynamics with $h=0$ and the zero-order estimate $E_0(h)$ of the energy for discrete dynamics, appearing in the literature for MD with VA. We derive a corresponding time reversible VA algorithm for canonical dynamics...
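The shadow-Hamiltonian picture can be seen in a few lines of code. This is a minimal sketch (not the article's setup): a unit harmonic oscillator stepped by the position-Verlet recursion, with the energy estimated from central-difference velocities. Because the discrete trajectory lies on analytic trajectories of a nearby shadow Hamiltonian, this energy fluctuates at $O(h^2)$ but shows no secular drift.

```python
# Position Verlet for a harmonic oscillator (m = k = 1), so a(x) = -x.
def verlet_energies(x0, v0, h, steps):
    # Seed the two-step recursion with a Taylor step backwards in time
    x_prev = x0 - h * v0 + 0.5 * h * h * (-x0)
    x = x0
    energies = []
    for _ in range(steps):
        x_next = 2 * x - x_prev + h * h * (-x)
        v_mid = (x_next - x_prev) / (2 * h)   # central-difference velocity
        energies.append(0.5 * v_mid ** 2 + 0.5 * x ** 2)
        x_prev, x = x, x_next
    return energies

E = verlet_energies(1.0, 0.0, h=0.05, steps=10000)
drift = max(E) - min(E)   # bounded fluctuation, no secular growth
```

Halving `h` shrinks `drift` by roughly a factor of four, consistent with the $O(h^2)$ difference between $E_0(h)$ and the conserved shadow energy $\tilde{E}(h)$.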
Stochastic Kuramoto oscillators with discrete phase states
Jörg, David J.
2017-09-01
We present a generalization of the Kuramoto phase oscillator model in which phases advance in discrete phase increments through Poisson processes, rendering both intrinsic oscillations and coupling inherently stochastic. We study the effects of phase discretization on the synchronization and precision properties of the coupled system both analytically and numerically. Remarkably, many key observables such as the steady-state synchrony and the quality of oscillations show distinct extrema while converging to the classical Kuramoto model in the limit of a continuous phase. The phase-discretized model provides a general framework for coupled oscillations in a Markov chain setting.
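A toy version of such a phase-discretized oscillator ensemble is easy to simulate. The sketch below is an illustrative assumption, not the paper's exact model: each oscillator holds one of `n_states` discrete phases and jumps to the next state via a Poisson process whose rate is modulated by the usual Kuramoto mean-field drive (exponentiated here simply to keep rates positive).

```python
import cmath
import math
import random

def simulate(n_osc, n_states, rate, coupling, steps, dt, seed=0):
    """Discrete-phase Kuramoto sketch: phases k * 2*pi/n_states advance
    in single increments through Poisson jumps; returns the final
    synchrony order parameter r = |<exp(i*phi)>| in [0, 1]."""
    rng = random.Random(seed)
    two_pi = 2 * math.pi
    phase = [rng.randrange(n_states) for _ in range(n_osc)]
    for _ in range(steps):
        z = sum(cmath.exp(1j * two_pi * k / n_states) for k in phase) / n_osc
        new = list(phase)
        for i, k in enumerate(phase):
            phi = two_pi * k / n_states
            # mean-field coupling term Im(z * e^{-i phi}) = r_hat * sin(psi - phi)
            lam = rate * math.exp(coupling * (z * cmath.exp(-1j * phi)).imag)
            if rng.random() < lam * dt:        # Poisson jump in [t, t+dt)
                new[i] = (k + 1) % n_states
        phase = new
    return abs(sum(cmath.exp(1j * two_pi * k / n_states) for k in phase) / n_osc)

r = simulate(n_osc=50, n_states=12, rate=1.0, coupling=1.5, steps=2000, dt=0.05)
```

In the limit `n_states -> infinity` with the jump rate scaled accordingly, dynamics of this kind approach the continuous-phase model, which is the convergence the abstract refers to.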
Stochastic Kuramoto oscillators with discrete phase states.
Jörg, David J
2017-09-01
We present a generalization of the Kuramoto phase oscillator model in which phases advance in discrete phase increments through Poisson processes, rendering both intrinsic oscillations and coupling inherently stochastic. We study the effects of phase discretization on the synchronization and precision properties of the coupled system both analytically and numerically. Remarkably, many key observables such as the steady-state synchrony and the quality of oscillations show distinct extrema while converging to the classical Kuramoto model in the limit of a continuous phase. The phase-discretized model provides a general framework for coupled oscillations in a Markov chain setting.
Discrete-Time Biomedical Signal Encryption
Victor Grigoraş
2017-12-01
Full Text Available Chaotic modulation is a strong method of improving communication security. Analog and discrete chaotic systems are presented in the current literature. Due to the expansion of digital communication, discrete-time systems are becoming more efficient and closer to actual technology. The present contribution offers an in-depth analysis of the effects chaos encryption produces on 1D and 2D biomedical signals. The performed simulations show that modulating signals are precisely recovered by the synchronizing receiver if the discrete systems are digitally implemented and their coefficients precisely correspond. Channel noise is also applied and its effects on biomedical signal demodulation are highlighted.
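The key claim, exact recovery when the discrete systems are digitally implemented with precisely matching coefficients, can be illustrated with the simplest form of chaotic masking. The sketch below is a generic stand-in (a logistic-map additive mask), not the article's chaotic systems; the seed `x0` and parameter `r` play the role of the shared coefficients.

```python
def logistic_stream(x0, r, n):
    """Deterministic chaotic keystream from the logistic map x <- r*x*(1-x)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def mask(signal, x0=0.7, r=3.99):
    ks = logistic_stream(x0, r, len(signal))
    return [s + k for s, k in zip(signal, ks)]

def unmask(cipher, x0=0.7, r=3.99):
    ks = logistic_stream(x0, r, len(cipher))
    return [c - k for c, k in zip(cipher, ks)]

ecg_like = [0.0, 0.1, 0.9, -0.4, 0.05, 0.0]   # toy 1D biomedical samples
recovered = unmask(mask(ecg_like))
```

Because both sides run the same digital map, the keystreams cancel to within floating-point rounding; a receiver with even a slightly perturbed `x0` or `r` would see its keystream diverge exponentially, which is the sensitivity chaos encryption relies on.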
Discrete symmetries and de Sitter spacetime
Cotăescu, Ion I., E-mail: gpascu@physics.uvt.ro; Pascu, Gabriel, E-mail: gpascu@physics.uvt.ro [West University of Timişoara, V. Pârvan Ave. 4, RO-300223 Timişoara (Romania)
2014-11-24
Aspects of the ambiguity in defining quantum modes on de Sitter spacetime using a commuting system composed only of differential operators are discussed. Discrete symmetries and their actions on the wavefunction in commonly used coordinate charts are reviewed. It is argued that the system of commuting operators can be supplemented by requiring the invariance of the wavefunction under combined discrete symmetries, a criterion which selects a single state out of the α-vacuum family. Two such members of this family are singled out by particular combined discrete symmetries: states between which a well-known thermality relation exists.
Static quarks with improved statistical precision
Della Morte, M.; Duerr, S.; Molke, H.; Heitger, J.
2003-09-01
We present a numerical study of different discretizations of the static action, concerning cut-off effects and the growth of statistical errors with Euclidean time. An error reduction by an order of magnitude can be obtained with respect to the Eichten-Hill action, for time separations up to 2 fm, keeping discretization errors small. The best actions lead to a big improvement in the precision of the quark mass M_b and of F_{B_s} in the static approximation. (orig.)
Signature Curves Statistics of DNA Supercoils
Shakiban, Cheri; Lloyd, Peter
2004-01-01
In this paper we describe the Euclidean signature curves for two-dimensional closed curves in the plane and their generalization to closed space curves. The focus will be on discrete numerical methods for approximating such curves. Further, we will apply these numerical methods to plot the signature curves related to three-dimensional simulated DNA supercoils. Our primary focus will be on statistical analysis of the data generated for the signature curves of the supercoils. We will try to esta...
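The Euclidean signature curve of a plane curve is built from invariants such as the curvature $\kappa$ and its arc-length derivative $\kappa_s$. A standard discrete approximation of $\kappa$ (one of several; the paper's exact scheme is not reproduced here) uses the circumscribed circle of three consecutive points, via $\kappa = 4A/(abc)$ with $A$ the triangle area and $a, b, c$ its side lengths:

```python
import math

def discrete_curvature(points):
    """Unsigned curvature at each vertex of a closed polygon, from the
    circumscribed circle of three consecutive points: kappa = 4A/(abc)."""
    n = len(points)
    kappa = []
    for i in range(n):
        p0, p1, p2 = points[i - 1], points[i], points[(i + 1) % n]
        a = math.dist(p1, p2)
        b = math.dist(p0, p2)
        c = math.dist(p0, p1)
        # twice the triangle area via the cross product
        area2 = abs((p1[0] - p0[0]) * (p2[1] - p0[1])
                    - (p2[0] - p0[0]) * (p1[1] - p0[1]))
        kappa.append(0.0 if area2 == 0.0 else 2.0 * area2 / (a * b * c))
    return kappa

# Sanity check: points on a circle of radius 2 should give curvature 1/2
circle = [(2 * math.cos(2 * math.pi * k / 100), 2 * math.sin(2 * math.pi * k / 100))
          for k in range(100)]
ks = discrete_curvature(circle)
```

Differencing these curvature values along arc length then yields discrete $(\kappa, \kappa_s)$ pairs, i.e. sample points of the signature curve on which statistics can be computed.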
Mainsbridge, B.
1994-01-01
In late 1959, Richard Feynman observed that manoeuvring atoms was something that could be done in principle but has not been done, 'because we are too big'. In 1982, the scanning tunnelling microscope (STM) was invented and is now a central tool for the construction of nanoscale devices in what was known as molecular engineering, and now, nanotechnology. The principles of the microscope are outlined and references are made to other scanning devices which have evolved from the original invention. The method of employment of the STM as a machine tool is described and references are made to current speculations on applications of the instrument in nanotechnology. A short bibliography on this topic is included. 27 refs., 7 figs
Mainsbridge, B [Murdoch Univ., WA (Australia). School of Mathematical and Physical Sciences
1994-12-31
In late 1959, Richard Feynman observed that manoeuvring atoms was something that could be done in principle but has not been done, 'because we are too big'. In 1982, the scanning tunnelling microscope (STM) was invented and is now a central tool for the construction of nanoscale devices in what was known as molecular engineering, and now, nanotechnology. The principles of the microscope are outlined and references are made to other scanning devices which have evolved from the original invention. The method of employment of the STM as a machine tool is described and references are made to current speculations on applications of the instrument in nanotechnology. A short bibliography on this topic is included. 27 refs., 7 figs.
Niden, A.H.; Mishkin, F.S.; Khurana, M.M.L.; Pick, R.
1977-01-01
Twenty-three patients with clinical signs of pulmonary embolic disease and lung infiltrates were studied to determine the value of the gallium citrate (67Ga) lung scan in differentiating embolic from inflammatory lung disease. In 11 patients without angiographically proved embolism, only seven had corresponding ventilation-perfusion defects compatible with inflammatory disease. In seven of these 11 patients, the 67Ga concentration indicated inflammatory disease. In the 12 patients with angiographically proved embolic disease, six had corresponding ventilation-perfusion defects compatible with inflammatory disease. None had an accumulation of 67Ga in the area of pulmonary infiltrate. Thus, ventilation-perfusion lung scans are of limited value when lung infiltrates are present. In contrast, the accumulation of 67Ga in the lung indicates an inflammatory process. Gallium imaging can help select those patients with lung infiltrates who need angiography
Horizon Scanning for Pharmaceuticals
Lepage-Nefkens, Isabelle; Douw, Karla; Mantjes, GertJan
for a joint horizon scanning system (HSS). We propose to create a central “horizon scanning unit” to perform the joint HS activities (a newly established unit, an existing HS unit, or a third party commissioned and financed by the collaborating countries). The unit will be responsible for the identification...... and filtration of new and emerging pharmaceutical products. It will maintain and update the HS database, organise company pipeline meetings, and disseminate the HSS’s outputs. The HS unit works closely together with the designated national HS experts in each collaborating country. The national HS experts...... will collect country-specific information, liaise between the central HS unit and country-specific clinical and other experts, coordinate the national prioritization process (to select products for early assessment), and communicate the output of the HSS to national decision makers. The outputs of the joint...
Multichannel scanning spectrophotometer
Lagutin, A.F.
1979-01-01
A spectrophotometer designed at the Crimean Astrophysical Observatory is described. The spectrophotometer is intended for installation at the telescope to measure the energy distribution in stellar spectra in the 3100-8550 A range. The device is built around a fixed diffraction grating. The choice of the optical and kinematic scheme is explained, and the main design elements are shown. Some features of the scanning drive kinematics are considered. The device's performance characteristics are given
Jin, Jian; Xiang, Chengxiang; Gregoire, John
2017-05-09
Electrochemical experiments are performed on a collection of samples by suspending a drop of electrolyte solution between an electrochemical experiment probe and one of the samples that serves as a test sample. During the electrochemical experiment, the electrolyte solution is added to the drop and an output solution is removed from the drop. The probe and collection of samples can be moved relative to one another so the probe can be scanned across the samples.
Jin, Jian; Xiang, Chengxiang; Gregoire, John M.; Shinde, Aniketa A.; Guevarra, Dan W.; Jones, Ryan J.; Marcin, Martin R.; Mitrovic, Slobodan
2017-05-09
Electrochemical or electrochemical and photochemical experiments are performed on a collection of samples by suspending a drop of electrolyte solution between an electrochemical experiment probe and one of the samples that serves as a test sample. During the electrochemical experiment, the electrolyte solution is added to the drop and an output solution is removed from the drop. The probe and collection of samples can be moved relative to one another so the probe can be scanned across the samples.
Baek, Sang Yeol; Park, Dae Kyu; Ahn, Sang Bok; Ju, Yong Sun; Jeon, Yong Bum
1997-06-01
The gamma scanning system installed in IMEF is the equipment for obtaining gamma-ray spectra from irradiated fuels. It can provide useful data on spent fuels, such as burn-up measurements. We describe the specifications of the equipment and its accessories, as well as its operating procedure, so that an operator can use this report as the operating manual. (author). 1 tab., 11 figs., 11 refs.
Exterior difference systems and invariance properties of discrete mechanics
Xie Zheng; Xie Duanqiang; Li Hongbo
2008-01-01
Invariance properties describe the fundamental physical laws in discrete mechanics. Can those properties be described in a geometric way? We investigate an exterior difference system called the discrete Euler-Lagrange system, whose solutions are in one-to-one correspondence with solutions of the discrete Euler-Lagrange equations, and use it to define the first integrals. The preservation of the discrete symplectic form along the discrete Hamiltonian phase flows and the discrete Noether theorem are also described in the language of difference forms
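As a hedged sketch of the equations this record refers to (standard notation, not taken from the abstract): for a discrete Lagrangian L_d(q_k, q_{k+1}) approximating the action over one time step, stationarity of the discrete action with respect to each interior point q_k yields the discrete Euler-Lagrange equations.

```latex
% Discrete action and its stationarity condition (standard form,
% assumed for illustration; D_1, D_2 denote partial derivatives
% with respect to the first and second argument of L_d).
S_d(\{q_k\}) = \sum_{k=0}^{N-1} L_d(q_k, q_{k+1}),
\qquad
D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1}) = 0 .
```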
Plaige, Yves.
1976-01-01
This invention concerns a measurement scanning assembly for collectron-type detectors, used to measure the neutron flux in nuclear reactors. Because the number of these detectors in a reactor can be very large, they are not usually all connected permanently to the measuring facility; instead they are connected in turn by a scanning device that in effect multiplexes between all the collectrons and the input of a single measuring system. The object of the invention is a scanning assembly of relative simplicity owing to an original organisation. Specifically, the collectron outputs are grouped into bunches, each bunch being processed by a multiplexing sub-assembly belonging to a first stage; the outputs of these first-stage sub-assemblies are grouped again into bunches processed by multiplexers forming a new stage, and so forth. Further, this structure is specially adapted for use with collectrons by employing a current amplifier at each multiplexing level, so that from one end of the multiplexing chain to the other the commutations are carried out on currents rather than on voltages
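The staged multiplexing described above can be sketched in code. This is a minimal illustration, not the patented circuit: the fan-in, channel count, and function names are assumptions chosen for the example.

```python
def build_mux_tree(n_channels, fan_in):
    """Number of multiplexers per stage for n_channels inputs:
    one mux per bunch, stage outputs bunched again until one remains."""
    stages = []
    n = n_channels
    while n > 1:
        n_mux = -(-n // fan_in)  # ceiling division: one mux per bunch
        stages.append(n_mux)
        n = n_mux
    return stages

def select(channel, fan_in, n_stages):
    """Per-stage select settings that route one collectron output
    through every stage to the single measuring-system input."""
    selects = []
    for _ in range(n_stages):
        selects.append(channel % fan_in)  # input chosen at this stage
        channel //= fan_in                # which mux carries it onward
    return selects

stages = build_mux_tree(64, fan_in=8)              # [8, 1]: two stages of 8:1 muxes
route = select(42, fan_in=8, n_stages=len(stages))  # [2, 5], since 42 = 5*8 + 2
```

With 64 collectrons and 8:1 multiplexers, two stages suffice; the per-stage current amplifiers mentioned in the abstract would sit at each `stages` boundary.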
Forensic Scanning Electron Microscope
Keeley, R. H.
1983-03-01
The scanning electron microscope equipped with an x-ray spectrometer is a versatile instrument which has many uses in the investigation of crime and the preparation of scientific evidence for the courts. Major applications include microscopy and analysis of very small fragments of paint, glass and other materials which may link an individual with a scene of crime, identification of firearms residues and examination of questioned documents. Although simultaneous observation and chemical analysis of the sample is the most important feature of the instrument, other modes of operation such as cathodoluminescence spectrometry, backscattered electron imaging and direct x-ray excitation are also exploited. Marks on two bullets or cartridge cases can be compared directly by sequential scanning with a single beam or by electronic linkage of two instruments. Particles of primer residue deposited on the skin and clothing when a gun is fired can be collected on adhesive tape and identified by their morphology and elemental composition. It is also possible to differentiate between the primer residues of different types of ammunition. Bullets may be identified from the small fragments left behind as they pass through the body tissues. In the examination of questioned documents the scanning electron microscope is used to establish the order in which two intersecting ink lines were written and to detect traces of chemical markers added to the security inks on official documents.
On organizing principles of discrete differential geometry. Geometry of spheres
Bobenko, Alexander I; Suris, Yury B
2007-01-01
Discrete differential geometry aims to develop discrete equivalents of the geometric notions and methods of classical differential geometry. This survey contains a discussion of the following two fundamental discretization principles: the transformation group principle (smooth geometric objects and their discretizations are invariant with respect to the same transformation group) and the consistency principle (discretizations of smooth parametrized geometries can be extended to multidimensional consistent nets). The main concrete geometric problem treated here is discretization of curvature-line parametrized surfaces in Lie geometry. Systematic use of the discretization principles leads to a discretization of curvature-line parametrization which unifies circular and conical nets.
Solutions of several coupled discrete models in terms of Lamé ...
Coupled discrete models are ubiquitous in a variety of physical contexts. We provide...
Will the alphabet soup of design criteria affect discrete choice experiment results?
Olsen, Søren Bøye; Meyerhoff, Jürgen
2017-01-01
Every discrete choice experiment needs a statistical design, but the impacts of the design on the results are still not well understood. Comparative studies have found that efficient designs outperform orthogonal designs in particular. What has been little studied is whether efficient designs come at a cos...
An equivalence between the discrete Gaussian model and a generalized Sine Gordon theory on a lattice
Baskaran, G.; Gupte, N.
1983-11-01
We demonstrate an equivalence between the statistical mechanics of the discrete Gaussian model and a generalized Sine-Gordon theory on a Euclidean lattice in arbitrary dimensions. The connection is obtained by a simple transformation of the partition function and is nonperturbative in nature. (author)
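As a hedged sketch of the kind of mapping involved (schematic, standard notation; not the authors' specific transformation): the discrete Gaussian model sums over an integer-valued height field, and Poisson summation trades the integer constraint for a continuous field coupled to integer charges, whose resummation generates cosine interactions of Sine-Gordon type.

```latex
% Discrete Gaussian partition function on a lattice (schematic):
Z_{\mathrm{DG}} = \sum_{\{n_i\} \in \mathbb{Z}}
  \exp\Big(-\tfrac{K}{2} \sum_{\langle ij \rangle} (n_i - n_j)^2\Big).
% Poisson summation at each site,
%   \sum_{n \in \mathbb{Z}} f(n)
%     = \sum_{m \in \mathbb{Z}} \int d\phi \, f(\phi)\, e^{2\pi i m \phi},
% replaces n_i by a continuous field \phi_i coupled to integer charges m_i;
% summing over the charges produces the cosine (Sine-Gordon-type) terms:
Z_{\mathrm{DG}} \propto \int \prod_i d\phi_i\,
  \exp\Big(-\tfrac{K}{2} \sum_{\langle ij \rangle} (\phi_i - \phi_j)^2
  + \sum_i \sum_{m \ge 1} y_m \cos(2\pi m \phi_i)\Big).
```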
Can time be a discrete dynamical variable?
Lee, T.D.
1983-01-01
The possibility that time can be regarded as a discrete dynamical variable is examined through all phases of mechanics: from classical mechanics to nonrelativistic quantum mechanics, and to relativistic quantum field theories. (orig.)
Local discrete symmetries from superstring derived models
Faraggi, A.E.
1996-10-01
Discrete and global symmetries play an essential role in many extensions of the Standard Model, for example, to preserve the proton lifetime, to prevent flavor changing neutral currents, etc. An important question is how such symmetries can survive in a theory of quantum gravity, like superstring theory. In a specific string model the author illustrates how local discrete symmetries may arise in string models and play an important role in preventing fast proton decay and flavor changing neutral currents. The local discrete symmetry arises due to the breaking of the non-Abelian gauge symmetries by Wilson lines in the superstring models and forbids, for example, dimension-five operators which mediate rapid proton decay, to all orders of nonrenormalizable terms. In the context of models of unification of the gauge and gravitational interactions, it is precisely this type of local discrete symmetry that must be found in order to ensure that a given model is not in conflict with experimental observations
Breatherlike impurity modes in discrete nonlinear lattices
Hennig, D.; Rasmussen, Kim; Tsironis, G. P.
1995-01-01
We investigate the properties of a disordered generalized discrete nonlinear Schrödinger equation, containing both diagonal and nondiagonal nonlinear terms. The equation models a linear host lattice doped with nonlinear impurities. We find different types of impurity states that form itinerant...
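A minimal numerical sketch of a discrete nonlinear Schrödinger lattice with a single diagonal nonlinear impurity (the nondiagonal term of the generalized equation is omitted here for brevity; the coupling values and initial condition are assumptions chosen for illustration, not the authors' model):

```python
import numpy as np

def dnls_rhs(psi, gamma, eps=1.0):
    # i d(psi_n)/dt + eps*(psi_{n+1} + psi_{n-1}) + gamma_n*|psi_n|^2*psi_n = 0
    # np.roll imposes periodic boundary conditions on the chain.
    hop = np.roll(psi, 1) + np.roll(psi, -1)
    return 1j * (eps * hop + gamma * np.abs(psi) ** 2 * psi)

# Linear host lattice doped with one nonlinear impurity at the center.
N = 101
gamma = np.zeros(N)
gamma[N // 2] = 2.0
psi = np.zeros(N, dtype=complex)
psi[N // 2] = 1.0  # excitation launched on the impurity site

dt = 1e-3
for _ in range(1000):
    # Classical 4th-order Runge-Kutta time step.
    k1 = dnls_rhs(psi, gamma)
    k2 = dnls_rhs(psi + 0.5 * dt * k1, gamma)
    k3 = dnls_rhs(psi + 0.5 * dt * k2, gamma)
    k4 = dnls_rhs(psi + dt * k3, gamma)
    psi += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

norm = np.sum(np.abs(psi) ** 2)  # conserved quantity, up to integrator error
```

Monitoring the norm (and the fraction of it remaining on the impurity site) is the usual diagnostic for whether the impurity traps a localized state or the excitation escapes into the linear lattice.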
Inferring gene networks from discrete expression data
Zhang, L.; Mallick, B. K.
2013-01-01
... graphical models applied to continuous data, which give a closed-form marginal likelihood. In this paper, we extend network modeling to discrete data, specifically data from serial analysis of gene expression and RNA-sequencing experiments, both of which...
A discrete control model of PLANT
Mitchell, C. M.
1985-01-01
A model of the PLANT system using the discrete control modeling techniques developed by Miller is described. Discrete control models attempt to represent in a mathematical form how a human operator might decompose a complex system into simpler parts and how the control actions and system configuration are coordinated so that acceptable overall system performance is achieved. Basic questions include knowledge representation, information flow, and decision making in complex systems. The structure of the model is a general hierarchical/heterarchical scheme which structurally accounts for coordination and dynamic focus of attention. Mathematically, the discrete control model is defined in terms of a network of finite state systems. Specifically, the discrete control model accounts for how specific control actions are selected from information about the controlled system, the environment, and the context of the situation. The objective is to provide a plausible and empirically testable accounting and, if possible, explanation of control behavior.