Institute of Scientific and Technical Information of China (English)
Xiong, Junjiang; Wu, Zhe; Gao, Zhentong
2002-01-01
Based on the traditional fatigue constant-life curve, the concept and a universal expression of the generalized fatigue constant-life curve are proposed. Then, on the basis of an optimization method for the correlation coefficient, parameter estimation formulas are derived and the generalized fatigue constant-life curve with reliability level p is given. From the P-Sa-Sm curve, the two-dimensional probability distribution of the fatigue limit is derived. Three sets of tests of LY11 CZ, corresponding to different mean stresses, were then carried out using the two-dimensional up-down method. Finally, the methods are applied to the test results, and it is found that results of high precision can be obtained.
DEFF Research Database (Denmark)
Yura, Harold; Hanson, Steen Grüner
2012-01-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative...
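The two-step recipe described above (spectral coloring of white Gaussian noise, then a pointwise transform to the target amplitude distribution) can be sketched as follows; this illustrates the general idea rather than the authors' exact algorithm, and the low-pass spectrum and exponential marginal used in the example are arbitrary choices:

```python
import numpy as np

def simulate_2d_field(n, psd, target_icdf, seed=0):
    """Sketch: (1) color white Gaussian noise to a target power spectral
    density by FFT filtering; (2) map the colored Gaussian marginal onto
    the desired amplitude PDF with a rank/inverse-CDF transform."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((n, n))
    # Step 1: impose the spectral shape in Fourier space.
    colored = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd)).real
    # Step 2: empirical CDF -> uniform -> target marginal (pointwise).
    flat = colored.ravel()
    ranks = flat.argsort().argsort()
    u = (ranks + 1.0) / (flat.size + 1.0)
    return target_icdf(u).reshape(n, n)

# Example: low-pass spectrum with an exponential amplitude distribution.
n = 64
kx, ky = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
psd = 1.0 / (1.0 + (kx**2 + ky**2) / 0.01)
field = simulate_2d_field(n, psd, lambda u: -np.log(1.0 - u))
```

As the "engineering approach" caveat in the abstract suggests, the pointwise amplitude transform slightly perturbs the imposed spectrum, so the result is approximate rather than exact.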
Li, Hanshan; Lei, Zhiyong
2013-01-01
To improve the precision of projectile coordinate measurement in a fire measurement system, this paper introduces the optical-fiber-coding fire measurement method and its principle, sets up the corresponding measurement model, and analyzes the coordinate errors using the differential method. To study the distribution of projectile coordinate positions, their distribution law was analyzed with statistical hypothesis testing, and the firing dispersion and the probability of a projectile hitting the object center were studied. The results show that, at the given significance level, an exponential distribution is a reasonable description of the projectile position distribution. Experiments and calculations show that the optical-fiber-coding fire measurement method is scientific and feasible and can yield accurate projectile coordinate positions.
Miniature sensor for two-dimensional magnetic field distributions
Fluitman, J.H.J.; Krabbe, H.W.
1972-01-01
Describes a simple method of producing a sensor for two-dimensional magnetic field distributions. The sensor consists of a strip of Ni-Fe (81-19), whose magnetoresistance is utilized. Typical dimensions of the strip, placed at the edge of a glass substrate, are: length 100 μm, width 2 or
Exact analytic flux distributions for two-dimensional solar concentrators.
Fraidenraich, Naum; Henrique de Oliveira Pedrosa Filho, Manoel; Vilela, Olga C; Gordon, Jeffrey M
2013-07-01
A new approach for representing and evaluating the flux density distribution on the absorbers of two-dimensional imaging solar concentrators is presented. The formalism accommodates any realistic solar radiance and concentrator optical error distribution. The solutions obviate the need for raytracing, and are physically transparent. Examples illustrating the method's versatility are presented for parabolic trough mirrors with both planar and tubular absorbers, Fresnel reflectors with tubular absorbers, and V-trough mirrors with planar absorbers.
Two dimensional velocity distribution in open channels using Renyi entropy
Kumbhakar, Manotosh; Ghoshal, Koeli
2016-05-01
In this study, the entropy concept is employed for describing the two-dimensional velocity distribution in an open channel. Using the principle of maximum entropy, the velocity distribution is derived by maximizing the Renyi entropy by assuming dimensionless velocity as a random variable. The derived velocity equation is capable of describing the variation of velocity along both the vertical and transverse directions with maximum velocity occurring on or below the water surface. The developed model of velocity distribution is tested with field and laboratory observations and is also compared with existing entropy-based velocity distributions. The present model has shown good agreement with the observed data and its prediction accuracy is comparable with the other existing models.
Return probability and recurrence for the random walk driven by two-dimensional Gaussian free field
Biskup, Marek; Ding, Jian; Goswami, Subhajit
2016-01-01
Given any $\gamma>0$ and for $\eta=\{\eta_v\}_{v\in \mathbb Z^2}$ denoting a sample of the two-dimensional discrete Gaussian free field on $\mathbb Z^2$ pinned at the origin, we consider the random walk on $\mathbb Z^2$ among random conductances where the conductance of edge $(u, v)$ is given by $\mathrm{e}^{\gamma(\eta_u + \eta_v)}$. We show that, for almost every $\eta$, this random walk is recurrent and that, with probability tending to 1 as $T\to \infty$, the return probability at time $2...
Probability-changing cluster algorithm for two-dimensional XY and clock models
Tomita, Yusuke; Okabe, Yutaka
2002-05-01
We extend the newly proposed probability-changing cluster (PCC) Monte Carlo algorithm to the study of systems with a vector order parameter. Wolff's idea of the embedded cluster formalism is used for assigning clusters. The Kosterlitz-Thouless (KT) transitions for the two-dimensional (2D) XY and q-state clock models are studied by using the PCC algorithm. Combined with the finite-size scaling analysis based on the KT form of the correlation length, ξ ~ exp(c/√(T/T_KT − 1)), we determine the KT transition temperature and the decay exponent η as T_KT = 0.8933(6) and η = 0.243(4) for the 2D XY model. We investigate the two KT-type transitions of the 2D q-state clock models with q = 6, 8, 12 and systematically confirm the prediction η = 4/q² at T_1, the low-temperature critical point between the ordered and XY-like phases.
Graphene materials having randomly distributed two-dimensional structural defects
Energy Technology Data Exchange (ETDEWEB)
Kung, Harold H.; Zhao, Xin; Hayner, Cary M.; Kung, Mayfair C.
2016-05-31
Graphene-based storage materials for high-power battery applications are provided. The storage materials are composed of vertical stacks of graphene sheets and have reduced resistance for Li ion transport. This reduced resistance is achieved by incorporating a random distribution of structural defects into the stacked graphene sheets, whereby the structural defects facilitate the diffusion of Li ions into the interior of the storage materials.
Institute of Scientific and Technical Information of China (English)
XING Yong-Zhong
2009-01-01
The analytical solution of a multidimensional Langevin equation in the overdamped limit is obtained, and the probability of particles passing over a two-dimensional saddle point is discussed. These results may open a path for further study of fusion in the synthesis of superheavy elements.
Visualising the strain distribution in suspended two-dimensional materials under local deformation
Elibol, Kenan; Bayer, Bernhard C.; Hummel, Stefan; Kotakoski, Jani; Argentero, Giacomo; Meyer, Jannik C.
2016-06-01
We demonstrate the use of combined simultaneous atomic force microscopy (AFM) and laterally resolved Raman spectroscopy to study the strain distribution around highly localised deformations in suspended two-dimensional materials. Using the AFM tip as a nanoindentation probe, we induce localised strain in suspended few-layer graphene, which we adopt as a two-dimensional membrane model system. Concurrently, we visualise the strain distribution under and around the AFM tip in situ using hyperspectral Raman mapping via the strain-dependent frequency shifts of the few-layer graphene’s G and 2D Raman bands. Thereby we show how the contact of the nm-sized scanning probe tip results in a two-dimensional strain field with μm dimensions in the suspended membrane. Our combined AFM/Raman approach thus adds to the critically required instrumental toolbox towards nanoscale strain engineering of two-dimensional materials.
Probability distributions for magnetotellurics
Energy Technology Data Exchange (ETDEWEB)
Stodt, John A.
1982-11-01
Estimates of the magnetotelluric transfer functions can be viewed as ratios of two complex random variables. It is assumed that the numerator and denominator are governed approximately by a joint complex normal distribution. Under this assumption, probability distributions are obtained for the magnitude, squared magnitude, logarithm of the squared magnitude, and the phase of the estimates. Normal approximations to the distributions are obtained by calculating mean values and variances from error propagation, and the distributions are plotted with their normal approximations for different percentage errors in the numerator and denominator of the estimates, ranging from 10% to 75%. The distribution of the phase is approximated well by a normal distribution for the range of errors considered, while the distribution of the logarithm of the squared magnitude is approximated by a normal distribution for a much larger range of errors than is the distribution of the squared magnitude. The distribution of the squared magnitude is most sensitive to the presence of noise in the denominator of the estimate, in which case the true distribution deviates significantly from normal behavior as the percentage errors exceed 10%. In contrast, the normal approximation to the distribution of the logarithm of the magnitude is useful for errors as large as 75%.
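The qualitative conclusion above, that the logarithm of the squared magnitude of such a ratio estimate is far better approximated by a normal distribution than the squared magnitude itself, is easy to check by Monte Carlo; the 25% relative errors below are illustrative values within the 10%-75% range studied, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def skewness(x):
    """Sample skewness; zero for an exactly normal distribution."""
    x = x - x.mean()
    return (x**3).mean() / (x**2).mean() ** 1.5

# Ratio of two complex random variables, each a unit mean with 25%
# complex Gaussian noise on both real and imaginary parts.
n = 200_000
num = 1.0 + 0.25 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
den = 1.0 + 0.25 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
z = num / den

sq_mag = np.abs(z) ** 2
log_sq_mag = np.log(sq_mag)

# The squared magnitude is strongly right-skewed (noise in the
# denominator), while its logarithm is nearly symmetric.
print(skewness(sq_mag), skewness(log_sq_mag))
```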
Superpositions of probability distributions
Jizba, Petr; Kleinert, Hagen
2008-09-01
Probability distributions which can be obtained from superpositions of Gaussian distributions of different variances v=σ2 play a favored role in quantum theory and financial markets. Such superpositions need not necessarily obey the Chapman-Kolmogorov semigroup relation for Markovian processes because they may introduce memory effects. We derive the general form of the smearing distributions in v which do not destroy the semigroup property. The smearing technique has two immediate applications. It permits simplifying the system of Kramers-Moyal equations for smeared and unsmeared conditional probabilities, and can be conveniently implemented in the path integral calculus. In many cases, the superposition of path integrals can be evaluated much easier than the initial path integral. Three simple examples are presented, and it is shown how the technique is extended to quantum mechanics.
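A minimal illustration of such a superposition: smearing the variance v over an inverse-gamma distribution turns a Gaussian into a heavy-tailed Student-t marginal. This is a standard textbook identity, used here only to show how a smearing distribution in v reshapes the resulting probability distribution, not a construction taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
nu = 5.0  # degrees of freedom of the resulting Student-t

# v ~ InverseGamma(nu/2, nu/2): draw a Gamma(nu/2, rate=nu/2) and invert.
v = 1.0 / rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)
# Superpose Gaussians of variance v: x | v ~ N(0, v)  =>  x ~ t_nu.
x = rng.standard_normal(n) * np.sqrt(v)

def excess_kurtosis(s):
    """Sample excess kurtosis; zero for a Gaussian, positive for heavy tails."""
    s = s - s.mean()
    return (s**4).mean() / (s**2).mean() ** 2 - 3.0

print(excess_kurtosis(x))
```

The smeared marginal is clearly heavier-tailed than any single Gaussian component, which is exactly the kind of non-Gaussian behavior such superpositions are used to model.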
Directory of Open Access Journals (Sweden)
Yan Li
2012-01-01
We consider dynamic proportional reinsurance in a two-dimensional compound Poisson risk model, minimizing the ruin probability defined in terms of the sum of the subportfolios being ruined. Via the Hamilton-Jacobi-Bellman approach we find a candidate for the optimal value function and prove the verification theorem. In addition, we obtain the Lundberg bounds and the Cramér-Lundberg approximation for the ruin probability and show that, as the capital tends to infinity, the optimal strategies converge to the asymptotically optimal constant strategies. The asymptotic value can be found by maximizing the adjustment coefficient.
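The ruin probability for the sum of two compound-Poisson subportfolios can be estimated by straightforward Monte Carlo; the sketch below uses illustrative arrival rates, exponential claim means, and a 20% premium loading (not the paper's parameters), and omits the reinsurance control entirely:

```python
import numpy as np

rng = np.random.default_rng(3)

def ruin_probability(u0, horizon=50.0, n_paths=1000):
    """Finite-horizon ruin probability of the summed surplus of two
    compound-Poisson subportfolios, estimated by Monte Carlo.
    Ruin = the total surplus becoming negative at a claim instant."""
    lam = np.array([1.0, 1.5])   # claim arrival rates of the subportfolios
    mu = np.array([1.0, 0.8])    # exponential claim-size means
    c = 1.2 * lam * mu           # premium rates with a 20% safety loading
    ruined = 0
    for _ in range(n_paths):
        t, surplus = 0.0, u0
        while True:
            # Next claim of the superposed Poisson process.
            w = rng.exponential(1.0 / lam.sum())
            if t + w > horizon:
                break
            t += w
            surplus += c.sum() * w
            # The claim belongs to subportfolio k with probability lam_k/sum.
            k = 0 if rng.random() < lam[0] / lam.sum() else 1
            surplus -= rng.exponential(mu[k])
            if surplus < 0.0:
                ruined += 1
                break
    return ruined / n_paths
```

As expected from the Lundberg bounds quoted above, the estimate decays quickly as the initial capital u0 grows.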
Two-dimensional distributed-phase-reference protocol for quantum key distribution
Bacco, Davide; Christensen, Jesper Bjerge; Castaneda, Mario A. Usuga; Ding, Yunhong; Forchhammer, Søren; Rottwitt, Karsten; Oxenløwe, Leif Katsuo
2016-12-01
Quantum key distribution (QKD) and quantum communication enable the secure exchange of information between remote parties. Currently, the distributed-phase-reference (DPR) protocols, which are based on weak coherent pulses, are among the most practical solutions for long-range QKD. During the last 10 years, long-distance fiber-based DPR systems have been successfully demonstrated, although fundamental obstacles such as intrinsic channel losses limit their performance. Here, we introduce the first two-dimensional DPR-QKD protocol, in which information is encoded in the time and phase of weak coherent pulses. The ability to extract two bits of information per detection event enables a higher secret key rate in specific realistic network scenarios. Moreover, despite the use of more dimensions, the proposed protocol remains simple, practical, and fully integrable.
Thorneywork, Alice L; Roth, Roland; Aarts, Dirk G A L; Dullens, Roel P A
2014-04-28
Two-dimensional hard disks are a fundamentally important many-body model system in classical statistical mechanics. Despite their significance, a comprehensive experimental data set for two-dimensional single component and binary hard disks is lacking. Here, we present a direct comparison between the full set of radial distribution functions and the contact values of a two-dimensional binary colloidal hard sphere model system and those calculated using fundamental measure theory. We find excellent quantitative agreement between our experimental data and theoretical predictions for both single component and binary hard disk systems. Our results provide a unique and fully quantitative mapping between experiments and theory, which is crucial in establishing the fundamental link between structure and dynamics in simple liquids and glass forming systems.
Energy Spectra of Vortex Distributions in Two-Dimensional Quantum Turbulence
Directory of Open Access Journals (Sweden)
Ashton S. Bradley
2012-10-01
We theoretically explore key concepts of two-dimensional turbulence in a homogeneous compressible superfluid described by a dissipative two-dimensional Gross-Pitaevskii equation. Such a fluid supports quantized vortices that have a size characterized by the healing length ξ. We show that, for the divergence-free portion of the superfluid velocity field, the kinetic-energy spectrum over wave number k may be decomposed into an ultraviolet regime (k ≫ ξ^{-1}) having a universal k^{-3} scaling arising from the vortex core structure, and an infrared regime (k ≪ ξ^{-1}) with a spectrum that arises purely from the configuration of the vortices. The Novikov power-law distribution of intervortex distances with exponent -1/3 for vortices of the same sign of circulation leads to an infrared kinetic-energy spectrum with a Kolmogorov k^{-5/3} power law, consistent with the existence of an inertial range. The presence of these k^{-3} and k^{-5/3} power laws, together with the constraint of continuity at the smallest configurational scale k ≈ ξ^{-1}, allows us to derive a new analytical expression for the Kolmogorov constant, which we test against a numerical simulation of a forced homogeneous, compressible, two-dimensional superfluid. The numerical simulation corroborates our analysis of the spectral features of the kinetic-energy distribution once we introduce the concept of a clustered fraction, consisting of the fraction of vortices with the same sign of circulation as their nearest neighboring vortices. Our analysis presents a new approach to understanding two-dimensional quantum turbulence and to interpreting similarities and differences with classical two-dimensional turbulence, and suggests new methods to characterize vortex turbulence in two-dimensional quantum fluids via vortex position and circulation measurements.
Pasted type distributed two-dimensional fiber Bragg grating vibration sensor.
Li, Tianliang; Tan, Yuegang; Zhou, Zude; Wei, Qin
2015-07-01
A pasted-type distributed two-dimensional fiber Bragg grating (FBG) vibration sensor is proposed and studied in this paper. The optical fiber is directly considered as an elastomer, and the two-dimensional vibration can be separated by subtraction/addition of the two FBGs' center-wavelength shifts. The principle of the sensor as well as numerical simulation and experimental analyses are presented. Experimental results show that the resonant frequencies of the sensor in the x/y main vibration directions are 1300 Hz and 20.51 Hz, respectively, consistent with the numerical simulation; the flat frequency ranges are 10-750 Hz and 3-12 Hz, respectively; and the dynamic range is 28.63 dB. In the x main vibration direction the sensitivity is 32.84 pm/g, with 3.91% linearity in the range of 10-60 m/s², while in the y main vibration direction the sensitivity is 451.3 pm/g, with 1.92% linearity in the range of 1.5-8 m/s². The cross sensitivity is 3.91%. Benefiting from its two-dimensional sensing properties, it can be used in distributed two-dimensional vibration measurement.
Probability distribution relationships
Directory of Open Access Journals (Sweden)
Yousry Abdelkader
2013-05-01
In this paper, we are interested in showing the most famous distributions and their relations to other distributions in collected diagrams. Four diagrams are sketched as networks: the first is concerned with the continuous distributions and their relations, the second presents the discrete distributions, and the third depicts the famous limiting distributions. Finally, the Balakrishnan skew-normal density and its relationship with the other distributions are shown in the fourth diagram.
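One edge in such a diagram, the classical Binomial(n, λ/n) → Poisson(λ) limit, can be verified numerically; the total-variation distance between the two laws shrinks roughly like λ²/n:

```python
from math import comb, exp, lgamma, log

def binom_pmf(k, n, p):
    """Binomial(n, p) probability mass at k."""
    return comb(n, k) * p**k * (1.0 - p) ** (n - k)

def poisson_pmf(k, lam):
    """Poisson(lam) probability mass at k; log-space form avoids
    overflow of lam**k / k! for large k."""
    return exp(k * log(lam) - lam - lgamma(k + 1))

# Total-variation distance between Binomial(n, lam/n) and Poisson(lam):
# it shrinks as n grows with lam fixed (the law of rare events).
lam = 3.0
for n in (10, 100, 1000):
    tv = 0.5 * sum(abs(binom_pmf(k, n, lam / n) - poisson_pmf(k, lam))
                   for k in range(n + 1))
    print(n, round(tv, 6))
```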
Dallapiccola, Ramona; Gopinath, Ashwin; Stellacci, Francesco; Dal Negro, Luca
2008-04-14
In this paper we investigate for the first time the near-field optical behavior of two-dimensional Fibonacci plasmonic lattices fabricated by electron-beam lithography on transparent quartz substrates. In particular, by performing near-field optical microscopy measurements and three-dimensional finite-difference time-domain simulations, we demonstrate that near-field coupling of nanoparticle dimers in Fibonacci arrays results in a quasi-periodic lattice of localized nanoparticle plasmons. The ability to accurately predict the spatial distribution of enhanced localized plasmon modes in quasi-periodic Fibonacci arrays can have a significant impact on the design and fabrication of novel nano-plasmonic devices.
Optical matrix for clock distribution and synchronous operation in two-dimensional array devices
Lee, K. S.; Shu, C.
1996-06-01
A scheme to generate an optical matrix from a mode-locked Nd:YAG laser has been theoretically explored and experimentally demonstrated. The matrix consists of highly synchronized and sequentially delayed optical pulses suitable for use with two-dimensional array optoelectronic devices and clock distribution systems. The output pulses have the same state of polarization, and no timing jitter is produced among the elements. Encoded outputs have been generated from the matrix using a set of photomasks. This technique can be applied to high-speed optical parallel processing.
Directory of Open Access Journals (Sweden)
K.-P. Heue
2008-11-01
In many investigations of tropospheric chemistry, information about the two-dimensional distribution of trace gases on a small scale (e.g. tens to hundreds of metres) is highly desirable. An airborne instrument based on imaging Differential Optical Absorption Spectroscopy has been built to map the two-dimensional distribution of a series of relevant trace gases, including NO2, HCHO, C2H2O2, H2O, O4, SO2, and BrO, on a scale of 100 m.
Here we report on the first tests of the novel aircraft instrument over the industrialised South African Highveld, where large variations in NO2 column densities were measured in the immediate vicinity of several sources, e.g. power plants or steel works. The observed patterns in the trace gas distribution are interpreted with respect to flux estimates, and it is seen that the fine resolution of the measurements allows separate sources in close proximity to one another to be distinguished.
Directory of Open Access Journals (Sweden)
Kunal Pathak
2016-09-01
Calcium signaling plays a crucial role in the expansion and contraction of cardiac myocytes and is achieved by calcium diffusion, buffering mechanisms, and influx in the myocytes. The calcium distribution patterns required for achieving this signaling are still not well understood. In this paper an attempt has been made to develop a model of calcium distribution in myocytes incorporating calcium diffusion, a point source, and the excess buffer approximation. The model has been developed for a two-dimensional unsteady-state case, with appropriate boundary and initial conditions. The finite element method has been employed to obtain the solution. The numerical results have been used to study the effect of buffers and source amplitude on the calcium distribution in myocytes.
Some explicit expressions for the probability distribution of force magnitude
Indian Academy of Sciences (India)
Saralees Nadarajah
2008-08-01
Recently, empirical investigations have suggested that the components of contact forces follow the exponential distribution. However, explicit expressions for the probability distribution of the corresponding force magnitude have not been known, and only approximations have been used in the literature. In this note, for the first time, I provide explicit expressions for the probability distribution of the force magnitude. Both two-dimensional and three-dimensional cases are considered.
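Under the quoted empirical finding, the two-dimensional magnitude is F = √(Fx² + Fy²) with exponentially distributed components; a quick Monte Carlo sketch (assuming independent unit-rate exponentials, rather than reproducing the note's closed-form expressions) gives its distribution:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

# Independent exponential force components (unit rate, an assumption
# made here for illustration) and the resulting force magnitude.
fx = rng.exponential(size=n)
fy = rng.exponential(size=n)
f = np.hypot(fx, fy)  # sqrt(fx**2 + fy**2), computed stably

print(f.mean(), np.median(f))
```

The empirical histogram of `f` is what the note's explicit two-dimensional formula describes exactly; simulation like this is only a check, not a substitute for the closed form.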
Two-Dimensional Automatic Measurement for Nozzle Flow Distribution Using Improved Ultrasonic Sensor
Directory of Open Access Journals (Sweden)
Changyuan Zhai
2015-10-01
Spray deposition and distribution are affected by many factors, one of which is nozzle flow distribution. A two-dimensional automatic measurement system, which consisted of a conveying unit, a system control unit, an ultrasonic sensor, and a deposition collecting dish, was designed and developed. The system could precisely move an ultrasonic sensor above a pesticide deposition collecting dish to measure the nozzle flow distribution. A sensor sleeve with a PVC tube was designed for the ultrasonic sensor to limit its beam angle in order to measure the liquid level in the small troughs. System performance tests were conducted to verify the designed functions and measurement accuracy. A commercial spray nozzle was also used to measure its flow distribution. The test results showed that the relative error on volume measurement was less than 7.27% when the liquid volume was 2 mL in trough, while the error was less than 4.52% when the liquid volume was 4 mL or more. The developed system was also used to evaluate the flow distribution of a commercial nozzle. It was able to provide the shape and the spraying width of the flow distribution accurately.
First operation of a powerful FEL with two-dimensional distributed feedback
Agarin, N V; Bobylev, V B; Ginzburg, N S; Ivanenko, V G; Kalinin, P V; Kuznetsov, S A; Peskov, N Yu; Sergeev, A S; Sinitsky, S L; Stepanov, V D
2000-01-01
A W-band (75 GHz) FEL of planar geometry driven by a sheet electron beam was realised using the pulse accelerator ELMI (0.8 MeV / 3 kA / 5 μs). To provide spatial coherence of the radiation from different parts of the electron beam, with a cross-section of 0.4 × 12 cm, a two-dimensional distributed feedback system was employed using a 2-D Bragg resonator of planar geometry. The resonator consisted of two 2-D Bragg reflectors separated by a regular waveguide section. The total energy in the microwave pulse of microsecond duration was 100 J, corresponding to a power of approximately 100 MW. The main component of the FEL radiation spectrum was at 75 GHz, which corresponded to the zone of effective Bragg reflection found from 'cold' microwave testing of the resonator. The experimental data compared well with the results of theoretical analysis.
Entanglement distribution in a two-dimensional 5-site frustrated J1-J2 spin system
Jafarpour, Mojtaba; Ghanavati, Soghra; Afshar, Davood
2015-11-01
We have studied several ground states and their entanglement structure for a two-dimensional 5-site frustrated J1-J2 system in the presence and absence of an external magnetic field. We have used concurrence as a measure of bipartite entanglement and the Meyer-Wallach measure and its generalizations as the measures of multipartite entanglement. They provide a total of eight measures which lead to 30 entanglement quantities for each possible ground state. Computing these 30 quantities for several ground states, we have provided a detailed exposition of the entanglement distribution in each state. We have also categorized them into separable states, not showing entanglement for any bipartition; globally-entangled states, showing entanglement for all the bipartitions, and the states in between. It turns out that by adjusting the external magnetic field, conditioned on the values of the interaction parameters, one may generate specific ground states belonging to a specific class, appropriate for specific tasks in quantum information theory.
Energy Technology Data Exchange (ETDEWEB)
Amemiya, Naoyuki; Shinkai, Yoshichika [Faculty of Engineering, Yokohama National University, Tokiwadai, Hodogaya, Yokohama (Japan); Iijima, Yasuhiro; Kakimoto, Kazuomi; Takeda, Kaoru [Materials Research Laboratory, Fujikura Ltd., Kiba, Koto, Tokyo (Japan)
2001-08-01
The critical current density (J_c) distribution in YBCO coated conductors is attracting interest from the viewpoint of its influence on their AC loss characteristics, as well as from material science and process engineering. The two-dimensional J_c distribution in a YBCO coated conductor made by the IBAD (ion-beam-assisted deposition) and PLD (pulsed-laser deposition) methods can be determined by the magnetic-knife method, with spatial resolutions of 0.2 mm in the lateral and 10 mm in the longitudinal direction, respectively. In an up-to-date 80 A-class YBCO coated conductor, J_c is relatively uniform in the central part and reaches 7 × 10^9 A m^-2, while J_c fluctuates spatially in the central part of a tape fabricated earlier and with a smaller critical current. Near the edges of the tapes, J_c is higher or lower than in the central part, and the experimentally determined J_c distributions are far from uniform. This suggests that the presumption of a uniform J_c for AC loss estimations is not always reasonable and can lead to large errors in the estimated AC losses. (author)
Two-Dimensional Key Table-Based Group Key Distribution in Advanced Metering Infrastructure
Directory of Open Access Journals (Sweden)
Woong Go
2014-01-01
A smart grid provides two-way communication using information and communication technology. To establish two-way communication, the advanced metering infrastructure (AMI) is used in the smart grid as the core infrastructure; it consists of smart meters, data collection units, maintenance data management systems, and so on. However, potential security problems of the AMI increase owing to the use of public networks, because the transmitted information is electricity consumption data used for billing. Thus, in order to establish a secure connection to transmit electricity consumption data, encryption is necessary, for which key distribution is required. Further, a group key is more efficient than a pairwise key in the hierarchical structure of the AMI. Therefore, we propose a group key distribution scheme using a two-dimensional key table, based on an analysis of sensor-network group key distribution schemes. The proposed scheme has three phases: group key predistribution, selection of the group key generation element, and generation of the group key.
Diurnal distribution of sunshine probability
Energy Technology Data Exchange (ETDEWEB)
Aydinli, S.
1982-01-01
The diurnal distribution of the sunshine probability is essential for the predetermination of average irradiances and illuminances by solar radiation on sloping surfaces. Most meteorological stations have only monthly average values of the sunshine duration available. It is, therefore, necessary to compute the diurnal distribution of sunshine probability starting from the average monthly values. It is shown how the symmetric component of the distribution of the sunshine probability, which is a consequence of a "sidescene effect" of the clouds, can be calculated. The asymmetric components of the sunshine probability, depending on the location and the seasons, and their influence on the predetermination of the global radiation are investigated and discussed.
Yu, Han
2014-06-11
On the basis of unsaturated Darcy's law, the Talbot-Ogden method provides a fast, unconditionally mass-conservative algorithm to simulate groundwater infiltration in various unsaturated soil textures. Unlike advanced reservoir modelling methods that compute unsaturated flow in space, it only discretizes the moisture content domain into a suitable number of bins so that the vertical water movement is estimated piecewise in each bin. The dimensionality of the moisture content domain is extended from one dimension to two dimensions in this study, which allows us to distinguish pore shapes within the same moisture content range. The vertical movement of water in the extended model imitates the infiltration phase in the Talbot-Ogden method. The difference in this extension, however, is the directional redistribution, which represents the horizontal inter-bin flow and causes the water content distribution to have an effect on infiltration. Using this extension, we mathematically analyse the general relationship between infiltration and the moisture content distribution associated with wetting front depths in different bins. We show that a more negatively skewed moisture content distribution can produce a longer ponding time, whereas a higher overall flux cannot be guaranteed in this situation. This is proven on the basis of the water content probability distribution, independently of soil texture. To illustrate this analysis, we also present numerical examples for both fine and coarse soil textures.
Ihara, I.; Yamada, H.; Takahashi, M.
2011-01-01
A non-contact method with a laser-ultrasonic technique for measuring two-dimensional temperature distribution on a material surface is presented. The method consists of a laser-ultrasonic measurement of a one-dimensional temperature distribution on a material surface and its two-dimensional area mapping. The surface temperature is basically determined from the temperature dependence of the velocity of the surface acoustic wave (SAW) propagating on the material surface. One-dimensional surface temperature distributions are determined by an inverse analysis consisting of a SAW measurement and a finite difference calculation. To obtain a two-dimensional distribution of surface temperature on a material surface, SAW measurements within a square area on the surface are performed by a pulsed laser scanning with a galvanometer system. The inverse analysis is then applied to each of the SAW data to determine the surface temperature distribution in a certain direction, and the obtained one-dimensional distributions are combined to construct a two-dimensional distribution of surface temperature. It has been demonstrated from an experiment with a heated aluminum plate that the temperature distributions of a square area on the aluminum surface determined by the ultrasonic method almost agree with those measured using an infrared camera.
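The core inverse step above, recovering temperature from a measured SAW velocity, can be sketched with an assumed linear temperature dependence. The coefficients below are illustrative placeholders, not calibrated values for aluminum:

```python
def saw_velocity(temp_c, v0=2900.0, dv_dt=-0.6, t0=20.0):
    # Assumed linear dependence of SAW velocity (m/s) on surface
    # temperature (deg C); v0, dv_dt and t0 are illustrative constants.
    return v0 + dv_dt * (temp_c - t0)

def temperature_from_velocity(v, v0=2900.0, dv_dt=-0.6, t0=20.0):
    # Invert the linear relation to recover the surface temperature.
    return t0 + (v - v0) / dv_dt
```

In practice the paper uses a full inverse analysis with a finite difference model rather than a closed-form inversion.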
Ogata, K; Kandori, A; Miyashita, T; Sekihara, K; Tsukada, K
2011-01-01
The aim of this study was to develop a method for converting the pseudo two-dimensional current given by a current-arrow map (CAM) into the physical current. The physical current distribution is obtained as the optimal solution in a least mean square sense with Tikhonov regularization (LMSTR). In the current dipole simulation, the current pattern differences (ΔJ) between the results of the CAM and the LMSTR with several regularization parameters (α = 10^-1 to 10^-15) are calculated. In the magnetocardiographic (MCG) analysis, the depth (z_d) of the reconstruction plane is chosen by using the coordinates of the sinus node, which is estimated from MCG signals at the early p-wave. The ΔJs at p-wave peaks, QRS-complex peaks, and T-wave peaks of MCG signals for healthy subjects are calculated. Furthermore, correlation coefficients and regression lines are also calculated from the current values of the CAM and the LMSTR during p-waves, QRS complexes, and T-waves of MCG signals. In the simulation, the ΔJs had a minimal value at α ≈ 10^-10. The ΔJs (α = 10^-10) at p-wave peaks, QRS-complex peaks, and T-wave peaks of MCG signals for healthy subjects also had a minimal value. The correlation coefficients of the current values given by the CAM and the LMSTR (α = 10^-10) were greater than 0.9. Furthermore, the slopes (y) of the regression lines are correlated with the depth (z_d) (r = -0.93). Consequently, the CAM value can be transformed into the LMSTR current value by multiplying it by the slope (y) obtained from the depth (z_d). In conclusion, the result given by the CAM can be converted into an effective physical current distribution by using the depth (z_d).
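The LMSTR step amounts to ridge-regularized least squares, which has a closed form. A minimal sketch, where the forward matrix A and measurement vector B are generic placeholders rather than the paper's specific MCG forward model:

```python
import numpy as np

def lmstr(A, B, alpha):
    # Least mean square solution with Tikhonov regularization:
    # minimize ||A @ J - B||^2 + alpha * ||J||^2, solved in closed form
    # via the normal equations (A^T A + alpha I) J = A^T B.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ B)
```

For small alpha and a well-conditioned A this recovers the unregularized least squares solution; larger alpha trades fidelity for stability.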
Exact Probability Distribution versus Entropy
Directory of Open Access Journals (Sweden)
Kerstin Andersson
2014-10-01
Full Text Available The problem addressed concerns the determination of the average number of successive attempts of guessing a word of a certain length consisting of letters with given probabilities of occurrence. Both first- and second-order approximations to a natural language are considered. The guessing strategy used is guessing words in decreasing order of probability. When word and alphabet sizes are large, approximations are necessary in order to estimate the number of guesses. Several kinds of approximations are discussed, demonstrating moderate requirements regarding both memory and central processing unit (CPU) time. When considering realistic sizes of alphabets and words (100), the number of guesses can be estimated within minutes with reasonable accuracy (a few percent) and may therefore constitute an alternative to, e.g., various entropy expressions. For many probability distributions, the density of the logarithm of probability products is close to a normal distribution. For those cases, it is possible to derive an analytical expression for the average number of guesses. The proportion of guesses needed on average compared to the total number decreases almost exponentially with the word length. The leading term in an asymptotic expansion can be used to estimate the number of guesses for large word lengths. Comparisons with analytical lower bounds and entropy expressions are also provided.
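The guessing strategy just described can be made concrete with a small first-order sketch (independent letters; alphabet and word sizes far below the realistic sizes discussed above, so the expectation can be computed exactly by enumeration):

```python
from itertools import product
from math import prod

def average_guesses(letter_probs, word_len):
    # First-order approximation: letters drawn independently, so each
    # word's probability is the product of its letter probabilities.
    word_probs = [prod(letter_probs[i] for i in word)
                  for word in product(range(len(letter_probs)), repeat=word_len)]
    # Strategy from the paper: guess words in decreasing order of
    # probability; the expected number of guesses is sum(rank * prob).
    word_probs.sort(reverse=True)
    return sum(rank * p for rank, p in enumerate(word_probs, start=1))
```

A skewed letter distribution lowers the expected number of guesses relative to the uniform case, which is the effect the entropy comparisons in the paper quantify.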
Probability distributions for multimeric systems.
Albert, Jaroslav; Rooman, Marianne
2016-01-01
We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method necessitates only two assumptions: the copy number of all species of molecule may be treated as continuous; and the probability density functions (pdf) are well approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments, which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package in Mathematica, we minimize a Euclidean distance function comprising the sum of the squared differences between the left- and right-hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.
Generazio, E. R.
1988-01-01
Microstructural images may be tone pulse encoded and subsequently Fourier transformed to determine the two-dimensional density of frequency components. A theory is developed relating the density of frequency components to the density of length components. The density of length components corresponds directly to the actual grain-size distribution function from which the mean grain shape, size, and orientation can be obtained.
Generazio, E. R.
1986-01-01
Microstructural images may be tone pulse encoded and subsequently Fourier transformed to determine the two-dimensional density of frequency components. A theory is developed relating the density of frequency components to the density of length components. The density of length components corresponds directly to the actual grain size distribution function from which the mean grain shape, size, and orientation can be obtained.
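The transform step described in the two records above can be sketched with a plain 2D FFT power spectrum on a synthetic image. The mapping from the strongest frequency peak back to a characteristic length L = 1/f is an illustrative reading of the frequency-to-length relation, not the full density-of-length-components theory:

```python
import numpy as np

def frequency_density(image):
    # Two-dimensional density of frequency components: magnitude of the
    # centered 2D Fourier transform of the (tone-pulse encoded) image.
    return np.abs(np.fft.fftshift(np.fft.fft2(image)))

def dominant_length_scale(image, pixel_size=1.0):
    # Map the strongest non-DC frequency to a characteristic length,
    # a simple proxy for a mean grain size.
    n = image.shape[0]                     # assumes a square image
    density = frequency_density(image)
    density[n // 2, n // 2] = 0.0          # suppress the DC component
    iy, ix = np.unravel_index(np.argmax(density), density.shape)
    fy = (iy - n // 2) / (n * pixel_size)
    fx = (ix - n // 2) / (n * pixel_size)
    return 1.0 / np.hypot(fy, fx)
```

A periodic pattern with period 8 pixels, for instance, produces a spectral peak whose inverse frequency recovers that length.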
TWO-DIMENSIONAL PLANE WATER FLOW AND WATER QUALITY DISTRIBUTION IN BOSTEN LAKE
Institute of Scientific and Technical Information of China (English)
Feng Min-quan; Zhou Xiao-de; Zheng Bang-min; Min Tao; Zhao Ke-yu
2003-01-01
A two-dimensional model of plane water flow and water quality was developed by using the techniques of coordinate transformation, alternating directions, staggered grids, linear recurrence, and an implicit scheme for the study of large water bodies in lakes. The model was proved to be suitable for treating irregular boundaries and quickly predicting water flow and water quality. The application of the model to the Bosten Lake in the Xinjiang Uygur Autonomous Region of China shows that it is reasonable and practicable.
Institute of Scientific and Technical Information of China (English)
ZHOU Ningyu; ZHAO Dongfeng; DING Hongwei
2008-01-01
A higher quality of service (QoS) is provided for ad hoc networks through a multi-channel and slotted random multi-access (MSRM) protocol with two-dimensional probability. In this protocol, the system time is divided into time slots, with high channel utilization realized by the choice of the two parameters p1 and p2 and by channel load equilibrium. The throughput of the MSRM protocol is analyzed for the load-equilibrium state, as well as the throughput based on priority. Simulations agree with the theoretical analysis and also show that the slotted-time system outperforms the continuous-time system.
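A Monte-Carlo sketch of such a slotted channel is straightforward. The rule used here, transmit with probability p1 after an idle slot and p2 after a busy slot, is one plausible reading of the two-dimensional probability scheme, not the paper's exact MSRM protocol:

```python
import random

def simulate_throughput(n_stations, p1, p2, n_slots, seed=1):
    # Slotted random multi-access sketch: every station independently
    # transmits with probability p1 if the previous slot was idle and
    # p2 if it was busy. Throughput is the fraction of slots with
    # exactly one sender (a successful transmission).
    rng = random.Random(seed)
    busy_last = False
    successes = 0
    for _ in range(n_slots):
        p = p2 if busy_last else p1
        transmitters = sum(rng.random() < p for _ in range(n_stations))
        if transmitters == 1:
            successes += 1
        busy_last = transmitters > 0
    return successes / n_slots
```

Sweeping p1 and p2 in a simulation like this shows the load-equilibrium trade-off the paper analyzes: a lower retry probability after busy slots reduces collisions at high load.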
Pinkel, I Irving; Serafini, John S; Gregg, John L
1952-01-01
The modifications in the pressure distributions and the aerodynamic coefficients associated with the addition of heat to the two-dimensional supersonic inviscid flow field adjacent to the lower surface of a 5-percent-thickness symmetrical circular-arc wing are presented in this report. The pressure distributions are obtained by the use of a graphical method which gives the two-dimensional supersonic inviscid flow field with moderate heat addition. The variation is given of the lift-drag ratio and of the aerodynamic coefficients of lift, drag, and moment with free-stream Mach number, angle of attack, and parameters defining the extent and amount of heat addition. The six graphical solutions used in this study included Mach numbers of 3.0 and 5.0 and angles of attack of 0 degrees and 2 degrees.
Gartrell, L. R.; Rhodes, D. B.
1980-01-01
A rapid scanning two-dimensional laser velocimeter (LV) has been used to measure simultaneously the vortex vertical and axial velocity distributions in the Langley Vortex Research Facility. This system utilized a two-dimensional Bragg cell for removing flow direction ambiguity by translating the optical frequency for each velocity component, which were separated by band-pass filters. A rotational scan mechanism provided an incremental rapid scan to compensate for the large displacement of the vortex with time. The data were processed with a digital counter and an on-line minicomputer. Vaporized kerosene (0.5 micron to 5 micron particle sizes) was used for flow visualization and LV scattering centers. The overall measured mean-velocity uncertainty is less than 2 percent. These measurements were obtained from ensemble averaging of individual realizations.
Ishigaki, H.; Itoh, M.; Hida, A.; Endo, H.; Oya, T.
1991-03-01
As a basic study for magnetic bearings using high-Tc superconductors, evaluations of the materials were conducted. These evaluations included measurements of the repulsive force and lateral restoring force of various kinds of YBCO pellets. Pure air, which was supplied in the process of fabrication, and the presence of Ag in YBCO showed evidence of the effects of increasing the repulsive force. The lateral restoring force which was observed in the lateral displacement of a levitated permanent magnet over YBCO pellets was also affected by pure air and the presence of Ag. A new measuring instrument for magnetic fields was developed by using a highly sensitive force sensor. Because this instrument has the capability of measuring the repulsive force due to the Meissner effect, it was used for evaluating the two-dimensional distribution of superconducting properties. Results show that the pellets had nonuniform superconducting properties. The two-dimensional distribution of residual flux density on the pellets which had been cooled in a magnetic field (field cooling) was also observed by means of the instrument. The mechanism for generating lateral force is discussed in relation to the distribution.
Ensemble Distribution for Immiscible Two-Phase Flow in Two-Dimensional Networks
Savani, Isha; Kjelstrup, Signe; Vassvik, Morten; Sinha, Santanu; Hansen, Alex
2016-01-01
An ensemble distribution has been constructed to describe steady immiscible two-phase flow of two incompressible fluids in a network. The system is ergodic. The distribution relates the time that a bubble of the non-wetting fluid spends in a link to the local volume flow. The properties of the ensemble distribution are tested by two-phase flow simulations at the pore-scale for capillary numbers ranging from 0.1 to 0.001. It is shown that the distribution follows the postulated dependence on the local flow for Ca = 0.01 and 0.001. The distribution is used to compute the global flow performance of the network. In particular, we find the expression for the overall mobility of the system using the ensemble distribution. The entropy production at the scale of the network is shown to give the expected product of the average flow and its driving force, obtained from a black-box description. The distribution can be used to obtain macroscopic variables from local network information, for a practical range of capillary...
Directory of Open Access Journals (Sweden)
Xuehua Shen
2015-01-01
Full Text Available Temperature, especially temperature distribution, is one of the most fundamental and vital parameters for the theoretical study and control of various industrial applications. In this paper, ultrasonic thermometry for reconstructing temperature distributions is investigated, based on the dependence of ultrasound velocity on temperature. In practical applications of this ultrasonic technique, a reconstruction algorithm based on the least squares method is commonly used. However, it has the limitation that the number of blocks into which the measured area is divided cannot exceed the number of effective travel paths, which eventually leads to its inability to offer sufficient temperature information. To make up for this defect, an improved reconstruction algorithm based on the least squares method and multiquadric interpolation is presented. Its reconstruction performance is validated via numerical studies using four temperature distribution models of different complexity and is compared with that of the algorithm based on the least squares method alone. Comparison and analysis indicate that the presented algorithm has better reconstruction performance: the reconstructed temperature distributions do not lose information near the edge of the area while keeping errors small, and its mean reconstruction time is short enough to meet real-time demands.
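The multiquadric interpolation step can be sketched directly. Given scattered temperature samples (e.g. the block values from the least squares step), a multiquadric radial basis function is fitted through them; the shape parameter c below is a user choice:

```python
import numpy as np

def multiquadric_interpolator(points, values, c=1.0):
    # Fit a multiquadric RBF, phi(r) = sqrt(r^2 + c^2), through
    # scattered samples: solve the interpolation system for the weights,
    # then evaluate the weighted sum of basis functions at query points.
    points = np.asarray(points, float)
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    weights = np.linalg.solve(np.sqrt(r**2 + c**2), np.asarray(values, float))

    def interpolate(x):
        d = np.linalg.norm(np.asarray(x, float) - points, axis=-1)
        return float(np.sqrt(d**2 + c**2) @ weights)

    return interpolate
```

The interpolant reproduces the node values exactly and fills in temperatures between the blocks, which is how the method supplies information beyond the travel-path limit.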
Global and Temporal Distribution of Meteoric Smoke: A Two-Dimensional Simulation Study
2008-02-02
particles are especially important in the middle atmosphere where dust sources from below are small. Smoke particles are thought to play a major role in...concentrations found during the local winter/spring. Despite the fact that the modeled ablation is independent of latitude, the mass distribution in the strato...fraction of nonvolatile particles in the Arctic lower stratosphere. They conclude that "the fraction of meteoric material in stratospheric particles
Wang, Qinghua; Ri, Shien; Tsuda, Hiroshi; Koyama, Motomichi; Tsuzaki, Kaneaki
2017-06-12
To address the low accuracy of shear strain measurement in Moiré methods, a two-dimensional (2D) Moiré phase analysis method is proposed for full-field deformation measurement with high accuracy. A grid image is first processed by the spatial phase-shifting sampling Moiré technique to obtain the Moiré phases in two directions, which are then jointly analyzed to measure 2D displacement and strain distributions. The measurement accuracy of strain, especially shear strain, is remarkably improved, and dynamic deformation is measurable through automatic batch processing of single-shot grid images. As an application, the 2D microscale strain distributions of a titanium alloy were measured, and the crack occurrence location was successfully predicted from the strain concentration.
ASYMPTOTIC QUANTIZATION OF PROBABILITY DISTRIBUTIONS
Institute of Scientific and Technical Information of China (English)
Klaus Pötzelberger
2003-01-01
We give a brief introduction to results on the asymptotics of quantization errors. The topics discussed include the quantization dimension, asymptotic distributions of sets of prototypes, asymptotically optimal quantizations, approximations and random quantizations.
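A concrete instance of asymptotically optimal quantization is Lloyd's algorithm for scalar data, which alternates nearest-prototype assignment with centroid updates. A minimal sketch:

```python
import numpy as np

def lloyd_max(samples, k, iters=100):
    # Lloyd's algorithm for a (locally) optimal k-point scalar quantizer:
    # start prototypes at evenly spaced quantiles, then alternate
    # nearest-prototype assignment and centroid (mean) updates.
    samples = np.asarray(samples, float)
    prototypes = np.quantile(samples, (np.arange(k) + 0.5) / k)
    for _ in range(iters):
        idx = np.argmin(np.abs(samples[:, None] - prototypes[None, :]), axis=1)
        for j in range(k):
            if np.any(idx == j):
                prototypes[j] = samples[idx == j].mean()
    return prototypes

def quantization_error(samples, prototypes):
    # Mean squared distance from each sample to its nearest prototype.
    d = np.min(np.abs(np.asarray(samples, float)[:, None] - prototypes[None, :]),
               axis=1)
    return float(np.mean(d**2))
```

The asymptotic theory surveyed above describes how this error decays as the number of prototypes grows.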
Synthesizing a four-dimensional beam particle distribution from multiple two-dimensional views
Energy Technology Data Exchange (ETDEWEB)
Friedman, A.; Grote, D.P.; Celata, C.M.; Staples, J.W.
2002-02-20
The transverse dynamics of a nearly-monoenergetic particle beam are described by the evolution of the 4D distribution f(x,y,x',y'), where x and y are the transverse spatial coordinates and x' ≡ p_x/p_z and y' ≡ p_y/p_z are the corresponding momentum components divided by the longitudinal momentum component. In present-day experimental practice, such beams are often diagnosed by passing them through an axially-separated pair of slits parallel to the y axis. This selects for x and x' and integrates over y and y'. A sequence of pulses (with the slits at various x positions) yields a 2D projection of the beam phase space, f(x,x'). Another scanner might yield f(y,y') or, using crossed slits, f(x,y). The challenge is that a small set of such 2D scans does not uniquely specify f(x,y,x',y'); correlations in planes other than those measured are unknown. We have developed Monte-Carlo methods and formulated physically-motivated constraints to synthesize a "reasonable" set of particles having 2D projectional densities consistent with the experimental data. Such a set may be used to initialize simulations of the downstream beam. The methods and their performance on model problems are described.
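The simplest Monte-Carlo synthesis consistent with two measured projections samples the (x, x') and (y, y') planes independently, since the cross-plane correlations are unmeasured. That independence is one explicitly assumed constraint, far simpler than the paper's physically-motivated ones, but it illustrates the projection-matching idea:

```python
import numpy as np

def synthesize_particles(f_xxp, f_yyp, x_edges, xp_edges, y_edges, yp_edges,
                         n, seed=0):
    # Draw n particles whose (x, x') and (y, y') projections match two
    # measured 2D histograms; the two planes are sampled independently
    # (assumed zero cross-plane correlation).
    rng = np.random.default_rng(seed)

    def sample_2d(hist, e0, e1):
        hist = np.asarray(hist, float)
        p = hist.ravel() / hist.sum()
        cells = rng.choice(p.size, size=n, p=p)
        i0, i1 = np.unravel_index(cells, hist.shape)
        # Uniform jitter inside each histogram cell.
        a = e0[i0] + rng.random(n) * (e0[i0 + 1] - e0[i0])
        b = e1[i1] + rng.random(n) * (e1[i1 + 1] - e1[i1])
        return a, b

    x, xp = sample_2d(f_xxp, np.asarray(x_edges, float), np.asarray(xp_edges, float))
    y, yp = sample_2d(f_yyp, np.asarray(y_edges, float), np.asarray(yp_edges, float))
    return np.column_stack([x, y, xp, yp])
```

The resulting particle set has the measured f(x,x') and f(y,y') marginals by construction and can seed a downstream tracking simulation.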
The Multivariate Gaussian Probability Distribution
DEFF Research Database (Denmark)
Ahrendt, Peter
2005-01-01
This technical report intends to gather information about the multivariate gaussian distribution, that was previously not (at least to my knowledge) to be found in one place and written as a reference manual. Additionally, some useful tips and tricks are collected that may be useful in practical...
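One of the standard recipes such a reference manual collects is sampling from a multivariate Gaussian via the Cholesky factorization of the covariance. A minimal sketch (the covariance values in the usage check are illustrative):

```python
import numpy as np

def sample_mvn(mean, cov, n, seed=0):
    # If cov = L @ L.T (Cholesky factor) and z ~ N(0, I), then
    # mean + z @ L.T is distributed as N(mean, cov).
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(np.asarray(cov, float))
    z = rng.standard_normal((n, len(mean)))
    return np.asarray(mean, float) + z @ L.T
```

The empirical covariance of a large sample converges to the target covariance, which makes the construction easy to verify.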
DEFF Research Database (Denmark)
Tong, Guohong; Zhang, Guoqiang; Ravn, Peter
2008-01-01
Variations of air exchanges in slurry pit with four angles of an environmental deflector, namely 0° (parallel to the side wall or without deflector), 30°, 45° and 90°, were investigated using a tracer gas method. The investigations were performed in a two-dimensional ventilation chamber in the Air...... physics Lab, University of Aarhus. Ventilation rates used in the experiments were 100 and 200 m3/h. The experiment results showed that using the deflectors of 30°, 45° and 90° the airflow patterns were obviously changed in the room space near the slatted floor and in the head space of the pit compared...... with the setup without deflector. It was also found that of all the deflector angle performances with respect to air-exchange ratio and concentration distribution, the deflector position of 45° in two airflow rates cases behaved better with the lowest pit ventilation and the highest concentration in the head...
Pre-aggregation for Probability Distributions
DEFF Research Database (Denmark)
Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach
Motivated by the increasing need to analyze complex uncertain multidimensional data (e.g., in order to optimize and personalize location-based services), this paper proposes novel types of {\\em probabilistic} OLAP queries that operate on aggregate values that are probability distributions...... and the techniques to process these queries. The paper also presents the methods for computing the probability distributions, which enables pre-aggregation, and for using the pre-aggregated distributions for further aggregation. In order to achieve good time and space efficiency, the methods perform approximate...... multidimensional data analysis that is considered in this paper (i.e., approximate processing of probabilistic OLAP queries over probability distributions)....
Pre-Aggregation with Probability Distributions
DEFF Research Database (Denmark)
Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach
2006-01-01
Motivated by the increasing need to analyze complex, uncertain multidimensional data this paper proposes probabilistic OLAP queries that are computed using probability distributions rather than atomic values. The paper describes how to create probability distributions from base data, and how...... the distributions can be subsequently used in pre-aggregation. Since the probability distributions can become large, we show how to achieve good time and space efficiency by approximating the distributions. We present the results of several experiments that demonstrate the effectiveness of our methods. The work...... is motivated with a real-world case study, based on our collaboration with a leading Danish vendor of location-based services. This paper is the first to consider the approximate processing of probabilistic OLAP queries over probability distributions....
Institute of Scientific and Technical Information of China (English)
刘伯潭; 申言同; 张会书; 刘春江; 唐忠利; 袁希钢
2016-01-01
A laser induced fluorescence (LIF) technique was applied to the two-dimensional measurement of the liquid concentration distribution in a 250Y structured packing sheet. The experimental structured packing sheet was made of perspex so that the laser could pass through it, which made the distribution of the liquid concentration in the packing sheet visible. A calibration against film thickness and liquid concentration was carried out first, and the regression formula I = kcd was obtained, in which both the concentration c and the liquid film thickness d are considered. Then liquid feed with a uniform tracer (rhodamine) concentration entered the perspex structured packing from the top under different spraying densities, and the corresponding thickness of the liquid film on the packing was calculated. Finally, tracer (rhodamine) at a high concentration was injected at only one fixed point of the structured packing under different liquid spraying densities. With the known liquid film thickness, the concentration distribution of the tracer inside the structured packing sheet can be calculated.
Institute of Scientific and Technical Information of China (English)
刘成森; 王艳辉; 王德真
2005-01-01
One important parameter for the plasma source ion implantation (PSII) process is the target temperature reached during surface modification. Because the power input to the target being implanted can be large, its temperature can become quite high. Predicting the target temperature is therefore useful for determining whether the high temperature required in the experiment is reached. In addition, there is likely to be temperature variation across the target surface, which can lead to locally different surface properties. In this paper, we present a model to predict and explain the temperature distribution on a hemispherical bowl-shaped vessel during plasma source ion implantation. A two-dimensional fluid model is employed to derive both the ion flux to the target and the energy imparted to the substrate by the ions in the plasma sheath simulation. The calculated energy input and radiative heat loss are used to predict the temperature rise and variation inside the sample in the thermal model. The shape factor of the target for radiation is taken into account in the radiative energy loss. The influence of the pulse duration and the pulsing frequency on the temperature distribution is investigated in detail. Our work shows that at high pulsing frequencies the temperature of the bowl no longer rises with increasing pulsing frequency.
Directory of Open Access Journals (Sweden)
Jae Seong Lee
2013-09-01
Full Text Available We measured the two-dimensional (2-D) oxygen distribution in the surface layer of an intertidal sediment using a simple and inexpensive planar oxygen optode based on a color ratiometric imaging approach. The recorded emission intensity of the red luminophore light changed significantly with oxygen concentration owing to O2 quenching of platinum(II) octaethylporphyrin (PtOEP). The ratio between the intensities of the red and green emissions as the oxygen concentration varied followed the Stern-Volmer relationship. The 2-D oxygen distribution image showed the microtopographic structure, the diffusive boundary layer, and burrows in the surface sediment layer. The oxygen penetration depth (OPD) was about 2 mm and the one-dimensional vertical diffusive oxygen uptake (DOU) was 12.6 mmol m−2 d−1 in the undisturbed surface sediment layer. However, both were enhanced near burrows made by benthic fauna: the OPD was two times deeper and the DOU was increased by 34%. The simple and inexpensive planar oxygen optode has great application potential in the study of oxygen dynamics with high spatiotemporal resolution in benthic boundary layers.
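The Stern-Volmer conversion from intensity ratio to oxygen concentration is a one-line formula. A sketch in which the zero-oxygen ratio and quenching constant are illustrative placeholders, not the paper's calibration values:

```python
def oxygen_from_ratio(ratio, ratio0, ksv):
    # Stern-Volmer relation for ratiometric imaging: R0 / R = 1 + Ksv * [O2],
    # hence [O2] = (R0 / R - 1) / Ksv. ratio is the red/green intensity
    # ratio, ratio0 its zero-oxygen value, ksv the calibrated quenching
    # constant.
    return (ratio0 / ratio - 1.0) / ksv

def ratio_from_oxygen(o2, ratio0, ksv):
    # Forward model, useful for checking a calibration round-trip.
    return ratio0 / (1.0 + ksv * o2)
```

Applying `oxygen_from_ratio` pixel-by-pixel to a calibrated ratio image yields the 2-D oxygen distribution map described above.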
Energy Technology Data Exchange (ETDEWEB)
Lavrent'ev, I.V.; Sidorenkov, S.I.
1988-01-01
To establish the limits of applicability of two-dimensional mathematical models describing induced electromagnetic field distribution in an annular MHD channel, it is necessary to solve a three-dimensional problem. By reducing the number of dimensions of the problem (using, for example, the axial symmetry of MHD flow), the solution can be derived in some approximation. This paper proposes and demonstrates this method by studying the motion of a conducting medium in an annular channel with a two-pole ferromagnetic system under various assumptions for the field, channel and liquid, among them the superconductivity of the working medium. The work performed by the Lorentz force in the channel, equal to the Joule losses in the current-carrying boundary layer, was determined. It was concluded that the current-carrying boundary layer begins to develop at the wall of the channel when the flow enters the magnetic field and that its thickness grows with the length of the region of MHD interaction. The problem was solved numerically and asymptotically.
Kyoden, Tomoaki; Yasue, Youichi; Ishida, Hiroki; Akiguchi, Shunsuke; Andoh, Tsugunobu; Takada, Yogo; Teranishi, Tsunenobu; Hachiga, Tadashi
2015-01-01
A laser Doppler velocimeter (LDV) has been developed that is capable of performing two-dimensional (2D) cross-sectional measurements. It employs two horizontal laser light sheets that intersect at an angle of 13.3°. Since the intersection region is thin, it can be used to approximately determine the 2D flow field. An 8 × 8 array of optical fibers is used to simultaneously measure Doppler frequencies at 64 points. Experiments were conducted to assess the performance of the LDV, and it was found to be capable of obtaining spatial and temporal velocity information at multiple points in a flow field. The technique is fast, noninvasive, and accurate over long sampling periods. Furthermore, its applicability to an actual flow field was confirmed by measuring the temporal velocity distribution of a pulsatile flow in a rectangular flow channel with an obstruction. The proposed device is thus a useful, compact optical instrument for conducting simultaneous 2D cross-sectional multipoint measurements.
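The conversion from measured Doppler frequency to flow velocity in a dual-beam LDV follows the standard fringe relation; only the 13.3° intersection angle comes from the setup above, while the wavelength and frequency in the usage check are illustrative:

```python
from math import radians, sin

def velocity_from_doppler(f_doppler, wavelength, full_angle_deg):
    # Dual-beam LDV relation: v = f_D * lambda / (2 * sin(theta / 2)),
    # where theta is the full beam-intersection angle.
    return f_doppler * wavelength / (2.0 * sin(radians(full_angle_deg / 2.0)))
```

For example, a 1 MHz Doppler shift at a 532 nm wavelength and a 13.3° intersection angle corresponds to roughly 2.3 m/s.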
Jing, Fulong; Jiao, Shuhong; Hou, Changbo; Si, Weijian; Wang, Yu
2017-06-21
For targets with complex motion, such as ships fluctuating with oceanic waves and highly maneuvering airplanes, azimuth echo signals can be modeled as multicomponent quadratic frequency modulation (QFM) signals after migration compensation and phase adjustment. For the QFM signal model, the chirp rate (CR) and the quadratic chirp rate (QCR) are two important physical quantities that need to be estimated. For multicomponent QFM signals, the cross terms create a challenge for detection, which needs to be addressed. In this paper, by employing a novel multi-scale parametric symmetric self-correlation function (PSSF) and a modified scaled Fourier transform (mSFT), an effective parameter estimation algorithm, referred to as the two-dimensional product modified Lv's distribution (2D-PMLVD), is proposed for QFM signals. The 2D-PMLVD is simple and can be easily implemented by using the fast Fourier transform (FFT) and complex multiplication. The properties of the proposed method are analyzed in the paper, including the principle, the cross terms, anti-noise performance, and computational complexity. Compared to three other representative methods, the 2D-PMLVD achieves better anti-noise performance. The 2D-PMLVD, which is free of searching and has no identifiability problems, is more suitable for multicomponent situations. Through several simulations and analyses, the effectiveness of the proposed estimation algorithm is verified.
Directory of Open Access Journals (Sweden)
T. H. Raupach
2015-01-01
The raindrop size distribution (DSD) quantifies the microstructure of rainfall and is critical to studying precipitation processes. We present a method to improve the accuracy of DSD measurements from Parsivel (particle size and velocity) disdrometers, using a two-dimensional video disdrometer (2DVD) as a reference instrument. Parsivel disdrometers bin raindrops into velocity and equivolume-diameter classes, but may misestimate the number of drops per class. In our correction method, drop velocities are corrected with reference to theoretical models of terminal drop velocity. We define a filter for raw disdrometer measurements to remove particles that are unlikely to be plausible raindrops. Drop concentrations are corrected such that, on average, the Parsivel concentrations match those recorded by a 2DVD. The correction can be trained on and applied to data from both generations of OTT Parsivel disdrometers, and in principle to any disdrometer. The method was applied to data collected during field campaigns in Mediterranean France for a network of first- and second-generation Parsivel disdrometers, and on a first-generation Parsivel in Payerne, Switzerland. We compared the moments of the resulting DSDs to those of a collocated 2DVD, and the resulting DSD-derived rain rates to collocated rain gauges. The correction improved the accuracy of the moments of the Parsivel DSDs, and in the majority of cases the rain-rate match with collocated rain gauges was improved. In addition, the correction was shown to be similar for two different climatologies, suggesting its general applicability.
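The velocity-plausibility filtering step can be sketched as follows; the Atlas et al. (1973) terminal-velocity model and the 50% tolerance are illustrative assumptions, not necessarily the exact filter used in the paper:

```python
import math

def terminal_velocity(d_mm):
    # Atlas et al. (1973) empirical raindrop terminal velocity (m/s)
    # for equivolume diameter d in mm -- one commonly used model.
    return 9.65 - 10.3 * math.exp(-0.6 * d_mm)

def plausible(d_mm, v_ms, tol=0.5):
    # Keep a particle only if its measured fall speed lies within
    # +/- tol (fractional) of the model terminal velocity. The 50%
    # tolerance is an illustrative choice, not the paper's exact value.
    v_t = terminal_velocity(d_mm)
    return v_t > 0 and abs(v_ms - v_t) <= tol * v_t
```

For example, a 2 mm drop falling at 6.5 m/s is accepted (model speed about 6.5 m/s), while the same drop recorded at 1 m/s is rejected as an implausible particle.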
Exact probability distribution functions for Parrondo's games
Zadourian, Rubina; Saakian, David B.; Klümper, Andreas
2016-12-01
We study the discrete-time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital-dependent and history-dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution, with two limiting distributions for odd and even numbers of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data; here we find the phenomenon in model systems and provide a theoretical understanding of it. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.
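For the capital-dependent game B, the exact capital distribution after a finite number of rounds can also be obtained by directly evolving the probability vector; this sketch uses the standard game-B parameters and is a simple alternative to the Fourier-transform route used in the paper:

```python
from collections import defaultdict

def game_b_distribution(n_rounds, start=0, eps=0.0):
    # Exact capital distribution for Parrondo's capital-dependent game B,
    # computed by direct evolution of the probability vector. Standard
    # parameters: win probability 1/10 - eps if capital % 3 == 0,
    # else 3/4 - eps (eps = 0 gives the fair boundary case).
    dist = {start: 1.0}
    for _ in range(n_rounds):
        nxt = defaultdict(float)
        for c, p in dist.items():
            w = (0.1 - eps) if c % 3 == 0 else (0.75 - eps)
            nxt[c + 1] += p * w       # win: capital increases by 1
            nxt[c - 1] += p * (1.0 - w)  # lose: capital decreases by 1
        dist = dict(nxt)
    return dist
```

Evaluating the distribution at successive round counts makes the odd/even parity structure of the support directly visible, since the capital changes by exactly one unit per round.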
George, Jacob
The present study deals with the effects of sparsely distributed three-dimensional elements on two-dimensional (2-D) and three-dimensional (3-D) turbulent boundary layers (TBLs) such as those that occur on submarines, ship hulls, etc. The study was carried out in three parts: Part 1 dealt with cylinders placed individually in the turbulent boundary layers, thereby considering the effect of a single perturbation on the TBL; Part 2 considered the effects when the same individual elements were placed in a sparse and regular distribution, thus studying the response of the flow to a sequence of perturbations; and in Part 3, the distributions were subjected to 3-D turbulent boundary layers, thus examining the effects of streamwise and spanwise pressure gradients on the same perturbed flows as considered in Part 2. The 3-D turbulent boundary layers were generated by an idealized wing-body junction flow. Detailed three-velocity-component laser Doppler velocimetry (LDV) and other measurements were carried out to understand and describe the rough-wall flow structure. The measurements include mean velocities, turbulence quantities (Reynolds stresses and triple products), skin friction, surface pressure and oil-flow visualizations in 2-D and 3-D rough-wall flows at Reynolds numbers, based on momentum thickness, greater than 7000. Very uniform circular cylindrical roughness elements of 0.38 mm, 0.76 mm and 1.52 mm height (k) were used in square and diagonal patterns, yielding six different rough-wall surface geometries. For the 2-D rough-wall flows, the roughness Reynolds numbers k+, based on the element height (k) and the friction velocity (u_tau), range from 26 to 131. Results for the 2-D rough-wall flows reveal that the velocity-defect law is similar for both smooth and rough surfaces, and the semi-logarithmic velocity-distribution curve is shifted by an amount Delta U/u_tau depending on the height of the roughness element, showing that Delta U/u_tau is a function
Proposal for Modified Damage Probability Distribution Functions
DEFF Research Database (Denmark)
Pedersen, Preben Terndrup; Hansen, Peter Friis
1996-01-01
Immediately following the Estonia disaster, the Nordic countries established a project entitled "Safety of Passenger/RoRo Vessels". As part of this project, the present proposal for modified damage stability probability distribution functions has been developed and submitted to the "Sub-committee on St...".
Institute of Scientific and Technical Information of China (English)
Ping Yuan; Liang Qiao; Li Dai; Yan-Ping Wang; Guang-Xuan Zhou; Ying Han; Xiao-Xia Liu; Xun Zhang; Yi Cao; Juan Liang; Jun Zhu
2009-01-01
AIM: To investigate the spatial distribution patterns of anorectal atresia/stenosis in China. METHODS: Data were collected from the Chinese Birth Defects Monitoring Network (CBDMN), a hospital-based congenital malformations registry system. All fetuses of more than 28 wk of gestation and neonates up to 7 d of age in hospitals within the monitoring sites of the CBDMN were monitored from 2001 to 2005. Two-dimensional graph-theoretical clustering was used to divide the monitoring sites of the CBDMN into different clusters according to the average incidences of anorectal atresia/stenosis at the different monitoring sites. RESULTS: The overall average incidence of anorectal atresia/stenosis in China was 3.17 per 10 000 from 2001 to 2005. The areas with the highest average incidences of anorectal atresia/stenosis were almost always focused in Eastern China. The monitoring sites were grouped into 6 clusters of areas. Cluster 1 comprised the monitoring sites in Heilongjiang Province, Jilin Province, and Liaoning Province; Cluster 2 was composed of those in Fujian Province, Guangdong Province, Hainan Province, Guangxi Zhuang Autonomous Region, south Hunan Province, and south Jiangxi Province; Cluster 3 consisted of those in Beijing Municipal City, Tianjin Municipal City, Hebei Province, Shandong Province, north Jiangsu Province, and north Anhui Province; Cluster 4 was made up of those in Zhejiang Province, Shanghai Municipal City, south Anhui Province, south Jiangsu Province, north Hunan Province, north Jiangxi Province, Hubei Province, Henan Province, Shanxi Province and Inner Mongolia Autonomous Region; Cluster 5 consisted of those in Ningxia Hui Autonomous Region, Gansu Province and Qinghai Province; and Cluster 6 included those in Shaanxi Province, Sichuan Province, Chongqing Municipal City, Yunnan Province, Guizhou Province, Xinjiang Uygur Autonomous Region and Tibet Autonomous Region. CONCLUSION: The findings in this research allow the display of the spatial distribution patterns of anorectal atresia/stenosis in
Froessling, Nils
1958-01-01
The fundamental boundary layer equations for the flow, temperature and concentration fields are presented. Two-dimensional symmetrical and unsymmetrical, as well as rotationally symmetrical, steady boundary layer flows are treated, as is the transfer boundary layer. Approximation methods for the calculation of the transfer layer are discussed, and a brief survey is presented of an investigation into the validity of the law that the Nusselt number is proportional to the cube root of the Prandtl number.
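The cube-root Prandtl-number law appears, for example, in the classical laminar flat-plate correlation; this snippet is not taken from the 1958 report and simply illustrates the scaling:

```python
def nusselt_laminar_flat_plate(re_x, pr):
    # Classical laminar flat-plate result Nu_x = 0.332 Re_x^{1/2} Pr^{1/3},
    # shown only to illustrate the cube-root Prandtl-number dependence the
    # survey discusses; the report itself treats more general cases.
    return 0.332 * re_x**0.5 * pr ** (1.0 / 3.0)
```

Under this law, increasing the Prandtl number by a factor of 8 at fixed Reynolds number doubles the local Nusselt number, since 8^(1/3) = 2.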
Calculating Cumulative Binomial-Distribution Probabilities
Scheuer, Ernest M.; Bowerman, Paul N.
1989-01-01
Cumulative-binomial computer program, CUMBIN, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.
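A minimal Python equivalent of the core CUMBIN computation, the upper-tail cumulative binomial probability used for k-out-of-n reliability, might look like this (the function name is ours; the original program is written in C):

```python
from math import comb

def binom_cdf_tail(n, k, p):
    # P(X >= k) for X ~ Binomial(n, p): the probability that at least k of
    # n independent components, each working with probability p, work --
    # i.e. the reliability of a k-out-of-n system.
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))
```

For instance, a 2-out-of-3 system of components that each work with probability 0.5 has reliability 0.5.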
Generating pseudo-random discrete probability distributions
Energy Technology Data Exchange (ETDEWEB)
Maziero, Jonas, E-mail: jonasmaziero@gmail.com [Universidade Federal de Santa Maria (UFSM), RS (Brazil). Departamento de Fisica
2015-08-15
The generation of pseudo-random discrete probability distributions is of paramount importance for a wide range of stochastic simulations, spanning from Monte Carlo methods to the random sampling of quantum states for investigations in quantum information science. In spite of its significance, a thorough exposition of such a procedure is lacking in the literature. In this article, we present relevant details concerning the numerical implementation and applicability of what we call the iid, normalization, and trigonometric methods for generating an unbiased probability vector p = (p_1, ..., p_d). An immediate application of these results regarding the generation of pseudo-random pure quantum states is also described.
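One standard unbiased recipe for such a probability vector, uniform spacings on [0, 1] (equivalent to sampling a flat Dirichlet distribution), can be sketched as follows; it is in the spirit of the methods the article compares, though the article should be consulted for its exact constructions:

```python
import random

def random_prob_vector(d, rng=random):
    # Draw a probability vector p = (p_1, ..., p_d) uniformly on the
    # simplex via the uniform-spacings construction: sort d-1 uniforms
    # and take the gaps between consecutive points. Note that the naive
    # "normalization" shortcut (dividing d uniforms by their sum) does
    # NOT produce a uniform distribution on the simplex.
    cuts = sorted(rng.random() for _ in range(d - 1))
    pts = [0.0] + cuts + [1.0]
    return [b - a for a, b in zip(pts, pts[1:])]
```

The result is nonnegative and sums to one by construction, which makes it directly usable, e.g., as the squared amplitudes of a pseudo-random pure quantum state.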
Directory of Open Access Journals (Sweden)
K. R. Pardasani
2005-01-01
In this study, a two-dimensional infinite element model has been developed to study thermal effects in human dermal regions due to tumors. This model incorporates the effects of blood mass flow rate, metabolic heat generation and thermal conductivity of the tissues. The dermal region is divided into three natural layers, namely epidermis, dermis and subdermal tissues. A uniformly perfused tumor is assumed to be present in the dermis. The domain is assumed to be finite along the depth and infinite along the breadth. The whole dermis region involving the tumor is modelled with the help of triangular finite elements to incorporate the geometry of the region. These elements are surrounded by infinite-domain elements along the breadth. Appropriate boundary conditions have been incorporated. A computer program has been developed to obtain the numerical results.
Probability distributions with summary graph structure
Wermuth, Nanny
2010-01-01
A set of independence statements may define the independence structure of interest in a family of joint probability distributions. This structure is often captured by a graph that consists of nodes representing the random variables and of edges that couple node pairs. One important class are multivariate regression chain graphs. They describe the independences of stepwise processes, in which at each step single or joint responses are generated given the relevant explanatory variables in their past. For joint densities that then result after possible marginalising or conditioning, we use summary graphs. These graphs reflect the independence structure implied by the generating process for the reduced set of variables and they preserve the implied independences after additional marginalising and conditioning. They can identify generating dependences which remain unchanged and alert to possibly severe distortions due to direct and indirect confounding. Operators for matrix representations of graphs are used to de...
Perez-Morelo, D. J.; Ramirez-Pastor, A. J.; Romá, F.
2012-02-01
We study the two-dimensional Edwards-Anderson spin-glass model using a parallel tempering Monte Carlo algorithm. The ground-state energy and entropy are calculated for different bond distributions. In particular, the entropy is obtained by using a thermodynamic integration technique and an appropriate reference state, which is determined with the method of high-temperature expansion. This strategy provides accurate values of this quantity for finite-size lattices. By extrapolating to the thermodynamic limit, the ground-state energy and entropy of the different versions of the spin-glass model are determined.
The Probability Distribution for a Biased Spinner
Foster, Colin
2012-01-01
This article advocates biased spinners as an engaging context for statistics students. Calculating the probability of a biased spinner landing on a particular side makes valuable connections between probability and other areas of mathematics. (Contains 2 figures and 1 table.)
Scoring Rules for Subjective Probability Distributions
DEFF Research Database (Denmark)
Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd;
report the true subjective probability of a binary event, even under Subjective Expected Utility. To address this one can “calibrate” inferences about true subjective probabilities from elicited subjective probabilities over binary events, recognizing the incentives that risk averse agents have...
Leonhardt, Juri; Teutenberg, Thorsten; Buschmann, Greta; Gassner, Oliver; Schmidt, Torsten C
2016-11-01
For the identification of the optimal column combinations, a comparative orthogonality study of single columns and columns coupled in series for the first dimension of a microscale two-dimensional liquid chromatographic approach was performed. In total, eight columns or column combinations were chosen. For the assessment of the optimal column combination, the orthogonality value as well as the peak distributions across the first and second dimensions were used. In total, three different methods of orthogonality calculation, namely the Convex Hull, Bin Counting, and Asterisk methods, were compared. Unfortunately, the first two methods do not provide any information on peak distribution. The third method provides this important information, but is not optimal when only a limited number of components are used for method development. Therefore, a new concept for peak distribution assessment across the separation space of two-dimensional chromatographic systems and clustering detection was developed. It could be shown that the Bin Counting method, in combination with additionally calculated histograms for the respective dimensions, is well suited for the evaluation of orthogonality and peak clustering. The newly developed method could be used generally in the assessment of 2D separations.
Polarization Mode Dispersion Probability Distribution for Arbitrary Mode Coupling
Institute of Scientific and Technical Information of China (English)
(no author listed)
2003-01-01
The probability distribution of the differential group delay for arbitrary mode coupling is simulated with the Monte Carlo method. By fitting the simulation results, we obtain the probability distribution function for arbitrary mode coupling.
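A minimal Monte Carlo sketch in this spirit sums randomly oriented per-section PMD vectors and takes the magnitude of the sum as the differential group delay (DGD); in the strong, fully random coupling limit the resulting distribution approaches the expected Maxwellian. The section count and per-section delay below are arbitrary assumptions, and frequency dependence is ignored:

```python
import math
import random

def simulate_dgd(n_sections, tau_section, rng=random):
    # First-order PMD concatenation sketch: each birefringent section
    # contributes a PMD vector of fixed length tau_section with a random
    # 3D orientation (uniform on the sphere); the DGD is the magnitude
    # of the vector sum. For many sections the DGD distribution
    # approaches a Maxwellian.
    x = y = z = 0.0
    for _ in range(n_sections):
        phi = 2 * math.pi * rng.random()
        cos_t = 2 * rng.random() - 1
        sin_t = math.sqrt(1 - cos_t * cos_t)
        x += tau_section * sin_t * math.cos(phi)
        y += tau_section * sin_t * math.sin(phi)
        z += tau_section * cos_t
    return math.sqrt(x * x + y * y + z * z)

# Monte Carlo ensemble of DGD samples for histogramming/fitting.
samples = [simulate_dgd(100, 0.1) for _ in range(2000)]
```

Fitting a distribution to `samples`, as the abstract describes, then yields the DGD probability distribution for this coupling regime.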
Kamataki, K.; Morita, Y.; Shiratani, M.; Koga, K.; Uchida, G.; Itagaki, N.
2012-04-01
We have developed a simple in-situ method for measuring the size distribution (mean diameter and size dispersion) of nano-particles generated in reactive plasmas using the two-dimensional laser light scattering (2DLLS) method. The principle of the method is based on thermal coagulation of the nano-particles, which occurs after the discharge is turned off, from which the size and density of the nano-particles can be deduced. We first determined the 2D spatial distribution of the density and size of the nano-particles for particle sizes (a few nm) smaller than those accessible with the conventional 2DLLS method. From this 2D dataset, we have for the first time been able to determine the size distribution of nano-particles generated in a reactive plasma without ex-situ measurements.
Sun, Xiaobo; Ramesh, Palanisamy; Itkis, Mikhail E.; Bekyarova, Elena; Haddon, Robert C.
2010-08-01
We report the thermal conductivities of graphite nanoplatelet-epoxy composites prepared by exfoliation of natural graphite flakes of varying lateral sizes. We found that utilization of natural graphite flakes of the optimum lateral dimensions (~200-400 µm) as a starting material for exfoliation significantly enhanced the thermal conductivity of the composite. In order to understand this enhancement we developed a procedure for evaluation of the particle size distribution of graphite nanoplatelets and correlated the measured distributions with the resulting thermal conductivities. In order to expand the scope of our study we applied our statistical and thermal analysis to commercially available graphite nanoplatelet materials.
Yamamoto, T.; Matsumura, A.; Yamamoto, K.; Kumada, H.; Shibata, Y.; Nose, T.
2002-07-01
The aim of this study was to determine the in-phantom thermal neutron distribution derived from neutron beams for intraoperative boron neutron capture therapy (IOBNCT). Gold activation wires arranged in a cylindrical water phantom with (void-in-phantom) or without (standard phantom) a styrene foam cylinder placed inside were irradiated using the epithermal beam (ENB) and the mixed thermal-epithermal beam (TNB-1) at the Japan Research Reactor No. 4. With ENB, we observed a flattened distribution of thermal neutron flux and a significantly enhanced thermal flux delivery at depth compared with the results of using TNB-1. The thermal neutron distribution derived from both ENB and TNB-1 was significantly improved in the void-in-phantom, and a double high-dose area was formed lateral to the void. A flattened distribution around the circumference of the void was observed with the combination of ENB and the void-in-phantom. The measurement data suggest that ENB may provide a clinical advantage in the form of an enhanced and flattened dose delivery to the marginal tissue of a post-operative cavity, in which residual and/or microscopically infiltrating tumour often occurs. The combination of the epithermal neutron beam and IOBNCT will improve the clinical results of BNCT for brain tumours.
Palma, G; Niedermayer, F; Rácz, Z; Riveros, A; Zambrano, D
2016-08-01
The zero-temperature, classical XY model on an L×L square lattice is studied by exploring the distribution Φ_{L}(y) of its centered and normalized magnetization y in the large-L limit. An integral representation of the cumulant generating function, known from earlier works, is used for the numerical evaluation of Φ_{L}(y), and the limit distribution Φ_{L→∞}(y)=Φ_{0}(y) is obtained with high precision. The two leading finite-size corrections Φ_{L}(y)-Φ_{0}(y)≈a_{1}(L)Φ_{1}(y)+a_{2}(L)Φ_{2}(y) are also extracted both from numerics and from analytic calculations. We find that the amplitude a_{1}(L) scales as ln(L/L_{0})/L^{2} and the shape correction function Φ_{1}(y) can be expressed through the low-order derivatives of the limit distribution, Φ_{1}(y)=[yΦ_{0}(y)+Φ_{0}^{'}(y)]^{'}. Thus, Φ_{1}(y) carries the same universal features as the limit distribution and can be used for consistency checks of universality claims based on finite-size systems. The second finite-size correction has an amplitude a_{2}(L)∝1/L^{2} and one finds that a_{2}Φ_{2}(y)≪a_{1}Φ_{1}(y) already for small system size (L>10). We illustrate the feasibility of observing the calculated finite-size corrections by performing simulations of the XY model at low temperatures, including T=0.
Two-dimensional T2 distribution mapping in rock core plugs with optimal k-space sampling.
Xiao, Dan; Balcom, Bruce J
2012-07-01
Spin-echo single point imaging has been employed for 1D T2 distribution mapping, but a simple extension to 2D is challenging since the time increase is n-fold, where n is the number of pixels in the second dimension. Nevertheless, 2D T2 mapping in fluid-saturated rock core plugs is highly desirable because the bedding-plane structure in rocks often results in different pore properties within the sample. The acquisition time can be improved by undersampling k-space. The cylindrical shape of rock core plugs yields well-defined intensity distributions in k-space that may be efficiently captured by the new k-space sampling patterns developed in this work. These patterns acquire 22.2% and 11.7% of the k-space data points. Companion density images may be employed, in a keyhole-imaging sense, to improve image quality. T2-weighted images are fit to extract T2 distributions, pixel by pixel, employing an inverse Laplace transform. Images reconstructed with compressed sensing, with similar acceleration factors, are also presented. The results show that restricted k-space sampling, in this application, provides high-quality results.
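The pixel-wise inverse Laplace transform step can be illustrated with a Tikhonov-regularized inversion of a synthetic two-component decay; real NMR processing typically also enforces non-negativity, and the grid sizes and regularization strength below are arbitrary assumptions:

```python
import numpy as np

# Sketch of extracting a T2 distribution from a multi-exponential decay by
# regularized linear inversion -- a Tikhonov stand-in for the inverse
# Laplace transform mentioned in the abstract (non-negativity constraints,
# used in practice, are omitted here for brevity).
t = np.linspace(0.001, 1.0, 200)            # echo times (s), assumed
T2_grid = np.logspace(-3, 0, 60)            # candidate T2 values (s)
K = np.exp(-t[:, None] / T2_grid[None, :])  # discrete Laplace kernel

# Synthetic signal: two T2 components at 10 ms and 200 ms.
signal = 0.6 * np.exp(-t / 0.01) + 0.4 * np.exp(-t / 0.2)

alpha = 1e-2                                # regularization strength
A = K.T @ K + alpha * np.eye(len(T2_grid))
f = np.linalg.solve(A, K.T @ signal)        # regularized T2 spectrum
```

Applying this inversion pixel by pixel to a stack of T2-weighted images yields the 2D T2 distribution map the abstract describes.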
Energy Technology Data Exchange (ETDEWEB)
Kim, Tae-Hoon; Kim, Yong-Kyun; Lee, Cheol Ho; Son, Jaebum; Lee, Sangmin; Kim, Dong Geon; Choi, Joonbum; Jang, Jae Yeong [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun-Tai [Seoul National University, Seoul (Korea, Republic of)
2016-10-15
Gamma Knife model C contains 201 ⁶⁰Co sources located on a spherical surface, so that each beam is focused on the center of the sphere. In previous work, we simulated the Gamma Knife model C with a Monte Carlo simulation code based on Geant4. Instead of the 201-collimator system, we constructed a single virtual collimation system that collects the source parameters of particles passing through the collimator helmet. Using this virtual source, we drastically reduced the simulation time needed to transport the 201 gamma-ray beams to the target. The gamma index has been widely used to compare two dose distributions in cancer radiotherapy. Gamma-index pass rates were compared between results calculated with the virtual-source method and the original method, and against measurements obtained using radiochromic films. The virtual-source method significantly reduces the simulation time of a Gamma Knife model C and provides absorbed dose distributions equivalent to those of the original method, with a gamma-index pass rate close to 100% under the 1 mm/3% criteria. On the other hand, it gives a slightly narrower dose distribution than the film measurement, with a gamma-index pass rate of 94%. A more accurate and sophisticated examination of the accuracy of the simulation and film measurement is necessary.
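The gamma-index comparison itself can be sketched in one dimension as follows; the 1 mm/3% criterion matches the abstract, while the profile data and grid handling are simplified assumptions (clinical tools operate on 2D/3D grids with interpolation):

```python
def gamma_pass_rate(ref, ev, spacing_mm, dta_mm=1.0, dd_frac=0.03):
    # Global 1D gamma-index pass rate between a reference and an evaluated
    # dose profile sampled on the same grid, under a distance-to-agreement
    # (dta_mm) and dose-difference (dd_frac of the reference maximum)
    # criterion -- here 1 mm / 3%.
    d_max = max(ref)
    passed = 0
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(ev):
            dist = abs(i - j) * spacing_mm
            if dist > 3 * dta_mm:
                # points farther than 3x DTA cannot satisfy gamma <= 1
                continue
            g2 = (dist / dta_mm) ** 2 + ((de - dr) / (dd_frac * d_max)) ** 2
            best = min(best, g2)
        if best <= 1.0:
            passed += 1
    return passed / len(ref)
```

Two identical profiles give a 100% pass rate, and the rate drops as the evaluated distribution narrows or shifts relative to the reference.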
1987-08-30
Pellegrini, Yves-Patrick
2015-01-01
The two-dimensional elastodynamic Green tensor is the primary building block of solutions of linear elasticity problems dealing with nonuniformly moving rectilinear line sources, such as dislocations. Elastodynamic solutions for these problems involve derivatives of this Green tensor, which stand as hypersingular kernels. These objects, well defined as distributions, prove cumbersome to handle in practice. This paper, restricted to isotropic media, examines some of their representations in the framework of distribution theory. A particularly convenient regularization of the Green tensor is introduced, that amounts to considering line sources of finite width. Technically, it is implemented by an analytic continuation of the Green tensor to complex times. It is applied to the computation of regularized forms of certain integrals of tensor character that involve the gradient of the Green tensor. These integrals are fundamental to the computation of the elastodynamic fields in the problem of nonuniformly moving d...
Energy probability distribution zeros: A route to study phase transitions
Costa, B. V.; Mól, L. A. S.; Rocha, J. C. S.
2017-07-01
In the study of phase transitions very few models are accessible to exact solution. In most cases analytical simplifications have to be made, or numerical techniques have to be used, to gain insight into their critical properties. Numerically, the most common approaches are those based on Monte Carlo simulations together with finite-size scaling analysis. The use of Monte Carlo techniques requires the estimation of quantities like the specific heat or susceptibilities over a wide range of temperatures, or the construction of the density of states over large intervals of energy. Although many of these techniques are well developed, they may be very time consuming when the system size becomes large enough. It would be desirable to have a method that could surpass those difficulties. In this work we present an iterative method to study the critical behavior of a system based on partial knowledge of the complex Fisher zeros set of the partition function. The method is general, with advantages over most conventional techniques, since it does not need to identify any order parameter a priori. The critical temperature and exponents can be obtained with great precision even in the least amenable cases, like the two-dimensional XY model. To test the method and to show how it works we applied it to selected models where the transitions are well known: the 2D Ising, Potts and XY models, and a homopolymer system. Our choices cover systems with first-order, continuous, and Berezinskii-Kosterlitz-Thouless transitions, as well as the homopolymer, which has two pseudo-transitions. The strategy can easily be adapted to any model, classical or quantum, once we are able to build the corresponding energy probability distribution.
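The last step, building the energy probability distribution and locating partition-function (Fisher) zeros, can be illustrated on a toy system: brute-force the density of states of a 2x2 open-boundary Ising model and find the zeros of Z written as a polynomial. This tiny example is ours, not one of the paper's computations:

```python
import numpy as np
from collections import Counter
from math import gcd
from functools import reduce

# Enumerate the density of states g(E) of a 2x2 open-boundary Ising model,
# write Z = sum_E g(E) x^{(E - E_min)/dE} with x = e^{-beta*dE}, and locate
# the complex zeros of that polynomial in x.
L = 2
bonds = [((0, 0), (0, 1)), ((1, 0), (1, 1)),   # horizontal bonds
         ((0, 0), (1, 0)), ((0, 1), (1, 1))]   # vertical bonds
g = Counter()
for state in range(2 ** (L * L)):
    s = [1 if state >> k & 1 else -1 for k in range(L * L)]
    E = -sum(s[i * L + j] * s[k * L + l] for (i, j), (k, l) in bonds)
    g[E] += 1

energies = sorted(g)
dE = reduce(gcd, [e - energies[0] for e in energies[1:]])
coeffs = [0] * ((energies[-1] - energies[0]) // dE + 1)
for e, mult in g.items():
    coeffs[(e - energies[0]) // dE] = mult
zeros = np.roots(coeffs[::-1])   # np.roots expects highest degree first
```

For larger systems the coefficients g(E) are estimated rather than enumerated, and the approach of the zeros toward the positive real axis as the system grows signals the transition.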
Matrix-exponential distributions in applied probability
Bladt, Mogens
2017-01-01
This book contains an in-depth treatment of matrix-exponential (ME) distributions and their sub-class of phase-type (PH) distributions. Loosely speaking, an ME distribution is obtained through replacing the intensity parameter in an exponential distribution by a matrix. The ME distributions can also be identified as the class of non-negative distributions with rational Laplace transforms. If the matrix has the structure of a sub-intensity matrix for a Markov jump process we obtain a PH distribution which allows for nice probabilistic interpretations facilitating the derivation of exact solutions and closed form formulas. The full potential of ME and PH unfolds in their use in stochastic modelling. Several chapters on generic applications, like renewal theory, random walks and regenerative processes, are included together with some specific examples from queueing theory and insurance risk. We emphasize our intention towards applications by including an extensive treatment on statistical methods for PH distribu...
Some New Approaches to Multivariate Probability Distributions.
1986-12-01
Characterizations of distributions, such as the Marshall-Olkin bivariate distribution or Fréchet's multivariate distribution with continuous marginals, are considered in connection with the problem mentioned in Remark 8; a uniqueness theorem is given in the bivariate case under certain assumptions.
Foundations of quantization for probability distributions
Graf, Siegfried
2000-01-01
Due to the rapidly increasing need for methods of data compression, quantization has become a flourishing field in signal and image processing and information theory. The same techniques are also used in statistics (cluster analysis), pattern recognition, and operations research (optimal location of service centers). The book gives the first mathematically rigorous account of the fundamental theory underlying these applications. The emphasis is on the asymptotics of quantization errors for absolutely continuous and special classes of singular probabilities (surface measures, self-similar measures) presenting some new results for the first time. Written for researchers and graduate students in probability theory the monograph is of potential interest to all people working in the disciplines mentioned above.
Eliciting Subjective Probability Distributions with Binary Lotteries
DEFF Research Database (Denmark)
Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd;
2015-01-01
We test in a laboratory experiment the theoretical prediction that risk attitudes have a surprisingly small role in distorting reports from true belief distributions. We find evidence consistent with theory in our experiment.
Probability distribution fitting of schedule overruns in construction projects
Love, P.E.D.; Sing, C.-P.; Wang, X.; Edwards, D.J.; Odeyinka, H.
2013-01-01
The probability of schedule overruns for construction and engineering projects can be ascertained using a ‘best fit’ probability distribution from an empirical distribution. The statistical characteristics of schedule overruns occurring in 276 Australian construction and engineering projects were analysed. Skewness and kurtosis values revealed that schedule overruns are non-Gaussian. Theoretical probability distributions were then fitted to the schedule overrun data; including the Kolmogorov–...
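The skewness/kurtosis screening step that flags the data as non-Gaussian can be sketched as follows; the lognormal sample stands in for the real overrun data, which we do not have:

```python
import numpy as np

def skew_kurtosis(x):
    # Sample skewness and excess kurtosis: the two shape statistics used
    # to show schedule overruns are non-Gaussian (both are ~0 for
    # normally distributed data).
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return (z**3).mean(), (z**4).mean() - 3.0

# Illustrative synthetic "overrun" data: right-skewed lognormal values,
# with the same sample size (276) as the study.
rng = np.random.default_rng(0)
overruns = rng.lognormal(mean=0.0, sigma=0.8, size=276)
skew, ex_kurt = skew_kurtosis(overruns)
```

Large positive skewness and excess kurtosis, as produced here, are the kind of evidence that rules out a Gaussian fit and motivates testing heavier-tailed candidate distributions.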
Osserman, Robert
2011-01-01
The basic component of several-variable calculus, two-dimensional calculus is vital to mastery of the broader field. This extensive treatment of the subject offers the advantage of a thorough integration of linear algebra and materials, which aids readers in the development of geometric intuition. An introductory chapter presents background information on vectors in the plane, plane curves, and functions of two variables. Subsequent chapters address differentiation, transformations, and integration. Each chapter concludes with problem sets, and answers to selected exercises appear at the end of the book.
Juday, Richard D. (Inventor)
1992-01-01
A two-dimensional vernier scale is disclosed utilizing a Cartesian grid on one plate member with a polar grid on an overlying transparent plate member. The polar grid has multiple concentric circles at a fractional spacing of the spacing of the Cartesian grid lines. By locating the center of the polar grid on a location on the Cartesian grid, interpolation can be made of both the X and Y fractional relationships to the Cartesian grid by noting which circles coincide with a Cartesian grid line in the X and Y directions.
Energy Technology Data Exchange (ETDEWEB)
Wu, Bin; Li, Huiying; Du, Xiaoming; Zhong, Lirong; Yang, Bin; Du, Ping; Gu, Qingbao; Li, Fasheng
2016-02-01
During the process of surfactant-enhanced aquifer remediation (SEAR), free-phase dense non-aqueous phase liquid (DNAPL) may be mobilized and spread. The impact of DNAPL spreading on SEAR remediation is not well understood, and its positive effect is infrequently mentioned. To evaluate the correlation between DNAPL spreading and remediation efficiency, a two-dimensional sandbox apparatus was used to simulate the migration and dissolution of 1,2-DCA (1,2-dichloroethane) DNAPL during SEAR. The distribution area of DNAPL in the sandbox was determined by digital image analysis and correlated with the effluent DNAPL concentration. The results showed that the effluent DNAPL concentration has a significant positive linear correlation with the DNAPL distribution area, indicating that the mobilization of DNAPL could improve remediation efficiency by enlarging the total NAPL-water interfacial area for mass transfer. Meanwhile, the vertical migration of 1,2-DCA was limited within the boundary of the aquifer in all experiments, implying that by manipulating injection parameters in SEAR, optimal remediation efficiency can be reached while the risk of DNAPL vertical migration is minimized. This study provides a convenient, visual and quantitative method for the optimization of parameters for a SEAR project, and an approach for rapidly predicting the extent of DNAPL contaminant distribution based on the dissolved DNAPL concentration in the extraction well.
D'Abrusco, R; Mineo, S; Strader, J; Fragos, T; Kim, D W; Luo, B; Zezas, A
2014-01-01
We report significant anisotropies in the projected two-dimensional (2D) spatial distributions of Globular Clusters (GCs) of the giant Virgo elliptical galaxy NGC4649 (M60). Similar features are found in the 2D distribution of low-mass X-ray binaries (LMXBs), both associated with GCs and in the stellar field. Deviations from azimuthal symmetry suggest an arc-like excess of GCs extending north at 4-15 kpc galactocentric radii on the eastern side of the major axis of NGC4649. This feature is more prominent for red GCs, but still persists in the 2D distribution of blue GCs. High- and low-luminosity GCs also show some segregation along this arc, with high-luminosity GCs preferentially located at the southern end and low-luminosity GCs in the northern section of the arc. GC-LMXBs follow the anisotropy of red GCs, where most of them reside; however, a significant overdensity of (high-luminosity) field LMXBs is present to the south of the GC arc. These results suggest that NGC4649 has experienced mergers and/or multiple ...
Shi, Jingya; Wu, Peiyi; Yan, Feng
2010-07-06
The intermolecular interaction and distribution of components in a [Bmim][BF(4)]-based polystyrene composite membrane, composed of 1-butyl-3-methylimidazolium tetrafluoroborate ([Bmim][BF(4)]), poly(1-(2-methyl acryloyloxyundecyl)-3-methylimidazolium bromide) (poly(MAUM-Br)) and polystyrene, are investigated by in situ Fourier transform infrared spectroscopy (FTIR) and two-dimensional correlation infrared spectroscopy (2DIR) in this study. A model of the structure of this composite material is proposed, and a sketch map of the local distributions of components is provided. In this model, the alkyl chains of [Bmim][BF(4)], poly(MAUM-Br), and polystyrene are supposed to form a polymeric network through aggregation or copolymerization. Cations of the ionic liquid separate into the polymer network, while anions are held mainly by the Coulomb force and partially by hydrogen bonding between cations and anions. To support this model, FTIR provides hints of the pi-pi interaction between the imidazole ring of the ionic liquid and the benzene ring of polystyrene, based on the observed shifts of IR absorption bands assigned to the C-C stretching vibrational mode. The sequential order of the responses of different chemical groups to the variation of temperature is calculated by 2DIR, and the results suggest how the different components are distributed in this [Bmim][BF(4)]-based polystyrene composite membrane.
Thornhill, J. W.; Giuliani, J. L.; Chong, Y. K.; Velikovich, A. L.; Dasgupta, A.; Apruzese, J. P.; Jones, B.; Ampleford, D. J.; Coverdale, C. A.; Jennings, C. A.; Waisman, E. M.; Lamppa, D. C.; McKenney, J. L.; Cuneo, M. E.; Krishnan, M.; Coleman, P. L.; Madden, R. E.; Elliott, K. W.
2012-09-01
Argon Z-pinch experiments are to be performed on the refurbished Z machine (referred to as ZR here to distinguish it from the pre-refurbishment Z) at Sandia National Laboratories with a new 8 cm diameter double-annulus gas puff nozzle constructed by Alameda Applied Sciences Corporation (AASC). The gas exits the nozzle from an outer and an inner annulus and a central jet, and the amount of gas present in each region can be varied. Here a two-dimensional radiation MHD (2DRMHD) model, MACH2-TCRE, with tabular collisional radiative equilibrium atomic kinetics is used to theoretically investigate the stability and K-shell emission properties of several measured (interferometry) initial gas distributions emanating from this new nozzle. Of particular interest is ensuring that the distributions employed in future experiments have stability and K-shell emission properties at least as good as those of the Titan nozzle distribution that was successfully fielded in earlier experiments on the Z machine before refurbishment. The model incorporates a self-consistent calculation of non-local thermodynamic equilibrium kinetics and ray-trace-based radiation transport. This level of detail is necessary in order to model opacity effects, non-local radiation effects, and the high-temperature state of K-shell-emitting Z-pinch loads. The radiation properties and stability of the measured AASC gas profiles are compared with those of the distribution used in the pre-refurbishment Z experiments. Based on these comparisons, an optimal K-shell-emission-producing initial gas distribution is determined from among the measured AASC nozzle distributions, and predictions are made for K-shell yields attainable in future ZR experiments.
Scoring Rules for Subjective Probability Distributions
DEFF Research Database (Denmark)
Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd
2017-01-01
Subjective beliefs are elicited routinely in economics experiments. However, such elicitation often suffers from two possible disadvantages. First, beliefs are recovered in the form of a summary statistic, usually the mean, of the underlying latent distribution. Second, recovered beliefs are bias...
NONPARAMETRIC ESTIMATION OF CHARACTERISTICS OF PROBABILITY DISTRIBUTIONS
Directory of Open Access Journals (Sweden)
Orlov A. I.
2015-10-01
Full Text Available The article is devoted to nonparametric point and interval estimation of the characteristics of a probability distribution (expectation, median, variance, standard deviation, coefficient of variation) from sample results. Sample values are regarded as realizations of independent and identically distributed random variables with an arbitrary distribution function possessing the required number of moments. Nonparametric analysis procedures are compared with parametric procedures based on the assumption that the sample values follow a normal distribution. Point estimators are constructed in the obvious way, using sample analogs of the theoretical characteristics. Interval estimators are based on the asymptotic normality of sample moments and of functions of them. Nonparametric asymptotic confidence intervals are obtained through a standard technology for deriving asymptotic relations in applied statistics. In the first step, this technology applies the multidimensional central limit theorem to sums of vectors whose coordinates are powers of the initial random variables. In the second step, the limiting multivariate normal vector is transformed to obtain the vector of interest to the researcher; here linearization is used and infinitesimal quantities are discarded. The third step is a rigorous justification of the results at the standard level of mathematical-statistical reasoning, which usually requires necessary and sufficient conditions for the inheritance of convergence. The article contains 10 numerical examples. The initial data are the operating times of 50 cutting tools up to the limit state. Using methods developed under the assumption of normality can lead to noticeably distorted conclusions in situations where the normality hypothesis fails. The practical recommendation is that for the analysis of real data we should use nonparametric confidence limits
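The three-step asymptotic technology described above reduces, in the simplest case, to the familiar CLT-based confidence interval for the expectation. A minimal stdlib-Python sketch; the exponential lifetime data here are invented stand-ins for the article's 50 cutting-tool operating times:

```python
import math, random, statistics

random.seed(1)
# Invented stand-in for the article's data: operating times of 50 cutting
# tools, drawn here from an exponential (clearly non-normal) distribution.
sample = [random.expovariate(0.02) for _ in range(50)]

n = len(sample)
mean = statistics.fmean(sample)
sd = statistics.stdev(sample)

# Nonparametric asymptotic 95% confidence interval for the expectation:
# mean +/- z * s / sqrt(n). Only the CLT (finite variance) is assumed,
# not normality of the data themselves.
z = 1.96
half = z * sd / math.sqrt(n)
ci = (mean - half, mean + half)
print(ci)
```

The same recipe (asymptotic normality of a sample moment plus linearization) yields intervals for the variance, coefficient of variation, and the other characteristics listed in the abstract.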
Stable Probability Distributions and their Domains of Attraction
J.L. Geluk (Jaap); L.F.M. de Haan (Laurens)
1997-01-01
The theory of stable probability distributions and their domains of attraction is derived in a direct way (avoiding the usual route via infinitely divisible distributions) using Fourier transforms. Regularly varying functions play an important role in the exposition.
Semi-stable distributions in free probability theory
Institute of Scientific and Technical Information of China (English)
[No author listed]
2006-01-01
Semi-stable distributions, in classical probability theory, are characterized as limiting distributions of subsequences of normalized partial sums of independent and identically distributed random variables. We establish the noncommutative counterpart of semi-stable distributions. We study the characterization of noncommutative semi-stability through free cumulant transform and develop the free semi-stability and domain of semi-stable attraction in free probability theory.
Incorporating Skew into RMS Surface Roughness Probability Distribution
Stahl, Mark T.; Stahl, H. Philip.
2013-01-01
The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data we confirm the Gaussian distribution overestimates the mode and application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
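The abstract's central point, that a symmetric Gaussian treatment overestimates the mode of a positively skewed roughness sample, can be illustrated on synthetic data. A stdlib-Python sketch; the lognormal values are a hypothetical stand-in for measured RMS roughness:

```python
import random, statistics

random.seed(2)
# Hypothetical stand-in for RMS surface roughness measurements:
# lognormal values, hence positively skewed.
data = [random.lognormvariate(0.0, 0.5) for _ in range(10000)]

mean = statistics.fmean(data)  # a symmetric Gaussian fit puts the mode here

# Crude histogram-based estimate of the actual mode.
lo, hi, bins = min(data), max(data), 60
width = (hi - lo) / bins
counts = [0] * bins
for x in data:
    counts[min(int((x - lo) / width), bins - 1)] += 1
k = counts.index(max(counts))
mode = lo + (k + 0.5) * width

# For positively skewed data the true mode lies below the mean, so the
# Gaussian treatment overestimates the most probable roughness value.
print(mean, mode)
```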
Institute of Scientific and Technical Information of China (English)
ZHANG; Renhua; WANG; Jinfeng; ZHU; Caiying; SUN; Xiaomin
2004-01-01
After analyzing the requirements for two-dimensional distributions of the aerodynamic surface roughness in research on land surface-atmosphere interaction, this paper presents a new way to calculate the aerodynamic roughness from the surface geometric roughness retrieved from SAR (Synthetic Aperture Radar) and TM thermal infrared image data. On the one hand, the SPM (Small Perturbation Model) was used as a theoretical SAR backscattering model to relate the SAR backscattering coefficient to the surface geometric roughness and its dielectric constant, the latter retrieved from the physical model linking soil thermal inertia and soil surface moisture using simultaneous TM thermal infrared image data and ground microclimate data. After matching the SAR image with the TM image, the non-volume-scattering surface geometric information was obtained from the SPM model at the TM pixel scale, and the pixel-scale equivalent geometric roughness (height standard RMS, Root Mean Square) was derived from this information by a transformation of typical topographic factors. The vegetation (wheat, tree) height retrieved from a spectral model was also converted into an equivalent geometric roughness. A complete two-dimensional map of the equivalent geometric roughness over the experimental area was produced by data mosaic techniques. On the other hand, according to atmospheric eddy theory, the aerodynamic surface roughness was iterated out with an atmospheric stability correction using wind and temperature profile data measured at several typical fields, such as bare soil and vegetated fields. After analyzing the effect of the equivalent geometric roughness together with dynamic and thermodynamic factors on the aerodynamic surface roughness within the working area, this paper first establishes a scale
Probability distributions in risk management operations
Artikis, Constantinos
2015-01-01
This book is about the formulations, theoretical investigations, and practical applications of new stochastic models for fundamental concepts and operations of the discipline of risk management. It also examines how these models can be useful in the descriptions, measurements, evaluations, and treatments of risks threatening various modern organizations. Moreover, the book makes clear that such stochastic models constitute very strong analytical tools which substantially facilitate strategic thinking and strategic decision making in many significant areas of risk management. In particular the incorporation of fundamental probabilistic concepts such as the sum, minimum, and maximum of a random number of continuous, positive, independent, and identically distributed random variables in the mathematical structure of stochastic models significantly supports the suitability of these models in the developments, investigations, selections, and implementations of proactive and reactive risk management operations. The...
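The building block mentioned above, the sum of a random number of positive i.i.d. random variables, can be simulated directly. A stdlib-Python sketch with invented parameters (Poisson claim count, exponential claim sizes):

```python
import math, random, statistics

random.seed(7)

def poisson(lam):
    # Knuth's method for small lam (stdlib-only Poisson sampler)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def compound_sum(lam, mean_claim):
    # S = X1 + ... + XN with N ~ Poisson(lam) and Xi ~ Exponential(mean_claim):
    # a stock stochastic-model building block for aggregate risk.
    n = poisson(lam)
    return sum(random.expovariate(1.0 / mean_claim) for _ in range(n))

sims = [compound_sum(3.0, 10.0) for _ in range(50000)]
print(statistics.fmean(sims))  # theory: E[S] = E[N] * E[Xi] = 3 * 10 = 30
```

Replacing the sum by `min` or `max` over the N draws gives the other two operations highlighted in the abstract.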
Directory of Open Access Journals (Sweden)
Fulong Jing
2017-06-01
Full Text Available For targets with complex motion, such as ships fluctuating with oceanic waves and high maneuvering airplanes, azimuth echo signals can be modeled as multicomponent quadratic frequency modulation (QFM) signals after migration compensation and phase adjustment. For the QFM signal model, the chirp rate (CR) and the quadratic chirp rate (QCR) are two important physical quantities, which need to be estimated. For multicomponent QFM signals, the cross terms create a challenge for detection, which needs to be addressed. In this paper, by employing a novel multi-scale parametric symmetric self-correlation function (PSSF) and modified scaled Fourier transform (mSFT), an effective parameter estimation algorithm is proposed, referred to as the Two-Dimensional product modified Lv’s distribution (2D-PMLVD), for QFM signals. The 2D-PMLVD is simple and can be easily implemented by using fast Fourier transform (FFT) and complex multiplication. These measures are analyzed in the paper, including the principle, the cross term, anti-noise performance, and computational complexity. Compared to the other three representative methods, the 2D-PMLVD can achieve better anti-noise performance. The 2D-PMLVD, which is free of searching and has no identifiability problems, is more suitable for multicomponent situations. Through several simulations and analyses, the effectiveness of the proposed estimation algorithm is verified.
Kumari, Babita; Adlakha, Neeru
2015-02-01
Thermoregulation is a complex mechanism regulating heat production within the body (chemical thermoregulation) and heat exchange between the body and the environment (physical thermoregulation) in such a way that the heat exchange is balanced and deep body temperatures are relatively stable. The external heat transfer mechanisms are radiation, conduction, convection and evaporation. The physical activity causes thermal stress and poses challenges for this thermoregulation. In this paper, a model has been developed to study temperature distribution in SST regions of human limbs immediately after physical exercise under cold climate. It is assumed that the subject is doing exercise initially and comes to rest at time t = 0. The human limb is assumed to be of cylindrical shape. The peripheral region of limb is divided into three natural components namely epidermis, dermis and subdermal tissues (SST). Appropriate boundary conditions have been framed based on the physical conditions of the problem. Finite difference has been employed for time, radial and angular variables. The numerical results have been used to obtain temperature profiles in the SST region immediately after continuous exercise for a two-dimensional unsteady state case. The results have been used to analyze the thermal stress in relation to light, moderate and vigorous intensity exercise.
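The explicit finite-difference scheme for an unsteady two-dimensional temperature field can be sketched as follows. All grid dimensions, the diffusivity, and the boundary temperatures below are hypothetical placeholders, not the paper's SST parameters:

```python
# Minimal explicit finite-difference sketch of unsteady 2D heat conduction:
# a warm interior cooling toward a cold boundary (cold-climate analogy).
nx, ny, steps = 20, 20, 200
alpha, dt, dx = 1e-4, 0.1, 1e-2   # diffusivity, time step, grid spacing (invented)
r = alpha * dt / dx ** 2           # explicit 2D scheme is stable for r <= 0.25
assert r <= 0.25

T = [[37.0] * ny for _ in range(nx)]   # deep body temperature everywhere
for i in range(nx):                    # cold outer boundary
    T[i][0] = T[i][ny - 1] = 15.0
for j in range(ny):
    T[0][j] = T[nx - 1][j] = 15.0

for _ in range(steps):
    Tn = [row[:] for row in T]
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            # five-point Laplacian stencil
            Tn[i][j] = T[i][j] + r * (T[i + 1][j] + T[i - 1][j]
                                      + T[i][j + 1] + T[i][j - 1] - 4 * T[i][j])
    T = Tn

print(T[nx // 2][ny // 2])  # interior cools toward the boundary value
```

The paper works in cylindrical (radial and angular) coordinates with layered tissue properties; this Cartesian sketch only shows the time-stepping structure.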
Si, Weijian; Zhao, Pinjiao; Qu, Zhiyu
2016-05-31
This paper presents an L-shaped sparsely-distributed vector sensor (SD-VS) array with four different antenna compositions. With the proposed SD-VS array, a novel two-dimensional (2-D) direction of arrival (DOA) and polarization estimation method is proposed to handle the scenario where uncorrelated and coherent sources coexist. The uncorrelated and coherent sources are separated based on the moduli of the eigenvalues. For the uncorrelated sources, coarse estimates are acquired by extracting the DOA information embedded in the steering vectors from estimated array response matrix of the uncorrelated sources, and they serve as coarse references to disambiguate fine estimates with cyclical ambiguity obtained from the spatial phase factors. For the coherent sources, four Hankel matrices are constructed, with which the coherent sources are resolved in a similar way as for the uncorrelated sources. The proposed SD-VS array requires only two collocated antennas for each vector sensor, thus the mutual coupling effects across the collocated antennas are reduced greatly. Moreover, the inter-sensor spacings are allowed beyond a half-wavelength, which results in an extended array aperture. Simulation results demonstrate the effectiveness and favorable performance of the proposed method.
Directory of Open Access Journals (Sweden)
Weijian Si
2016-05-01
Full Text Available This paper presents an L-shaped sparsely-distributed vector sensor (SD-VS) array with four different antenna compositions. With the proposed SD-VS array, a novel two-dimensional (2-D) direction of arrival (DOA) and polarization estimation method is proposed to handle the scenario where uncorrelated and coherent sources coexist. The uncorrelated and coherent sources are separated based on the moduli of the eigenvalues. For the uncorrelated sources, coarse estimates are acquired by extracting the DOA information embedded in the steering vectors from the estimated array response matrix of the uncorrelated sources, and they serve as coarse references to disambiguate fine estimates with cyclical ambiguity obtained from the spatial phase factors. For the coherent sources, four Hankel matrices are constructed, with which the coherent sources are resolved in a similar way as for the uncorrelated sources. The proposed SD-VS array requires only two collocated antennas for each vector sensor, thus the mutual coupling effects across the collocated antennas are reduced greatly. Moreover, the inter-sensor spacings are allowed beyond a half-wavelength, which results in an extended array aperture. Simulation results demonstrate the effectiveness and favorable performance of the proposed method.
Jing, Fulong; Jiao, Shuhong; Hou, Changbo; Si, Weijian; Wang, Yu
2017-01-01
For targets with complex motion, such as ships fluctuating with oceanic waves and high maneuvering airplanes, azimuth echo signals can be modeled as multicomponent quadratic frequency modulation (QFM) signals after migration compensation and phase adjustment. For the QFM signal model, the chirp rate (CR) and the quadratic chirp rate (QCR) are two important physical quantities, which need to be estimated. For multicomponent QFM signals, the cross terms create a challenge for detection, which needs to be addressed. In this paper, by employing a novel multi-scale parametric symmetric self-correlation function (PSSF) and modified scaled Fourier transform (mSFT), an effective parameter estimation algorithm is proposed—referred to as the Two-Dimensional product modified Lv’s distribution (2D-PMLVD)—for QFM signals. The 2D-PMLVD is simple and can be easily implemented by using fast Fourier transform (FFT) and complex multiplication. These measures are analyzed in the paper, including the principle, the cross term, anti-noise performance, and computational complexity. Compared to the other three representative methods, the 2D-PMLVD can achieve better anti-noise performance. The 2D-PMLVD, which is free of searching and has no identifiability problems, is more suitable for multicomponent situations. Through several simulations and analyses, the effectiveness of the proposed estimation algorithm is verified. PMID:28635640
Two-dimensional optical spectroscopy
Cho, Minhaeng
2009-01-01
Discusses the principles and applications of two-dimensional vibrational and optical spectroscopy techniques. This book provides an account of basic theory required for an understanding of two-dimensional vibrational and electronic spectroscopy.
The estimation of tree posterior probabilities using conditional clade probability distributions.
Larget, Bret
2013-07-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.
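The core estimator, the product of conditional clade probabilities, can be demonstrated on a toy posterior sample. The five-taxon trees and counts below are invented for illustration:

```python
from collections import Counter

# Hypothetical posterior sample of 100 rooted trees on taxa {A..E}:
# each tree is recorded as the set of its (parent clade, child split) pairs.
samples = ([frozenset({("ABCDE", ("ABC", "DE")), ("ABC", ("AB", "C"))})] * 70
           + [frozenset({("ABCDE", ("ABC", "DE")), ("ABC", ("AC", "B"))})] * 30)

clade_count, split_count = Counter(), Counter()
for tree in samples:
    for parent, split in tree:
        clade_count[parent] += 1
        split_count[(parent, split)] += 1

def tree_prob(tree):
    # P(tree) = product over its clades of P(split | parent clade),
    # each conditional estimated from sample frequencies.
    p = 1.0
    for parent, split in tree:
        p *= split_count[(parent, split)] / clade_count[parent]
    return p

p = tree_prob({("ABCDE", ("ABC", "DE")), ("ABC", ("AB", "C"))})
print(p)  # 1.0 * (70/100) = 0.7
```

With more taxa the conditionals factor across independent subtrees, which is what lets the method assign probability to trees never sampled, provided all their clades appear somewhere in the sample.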
Ye, S.; Sleep, B. E.; Chien, C.
2010-12-01
The probability distribution of biofilm thickness and the effect of biofilm on the permeability of saturated porous media were investigated in a two-dimensional sand-filled cell (55 cm wide x 45 cm high x 1.28 cm thick) under nutrient-rich conditions. Inoculation of the lower portion of the cell with a methanogenic culture and addition of methanol to the bottom of the cell led to biomass growth. Biomass distributions in the water and on the sand in the cell were measured by protein analysis. The biofilm distribution on the sand was observed by confocal laser scanning microscopy (CLSM). Permeability was measured by laboratory hydraulic tests. The biomass levels measured in water and on the sand increased with time, and were highest at the bottom of the cell, where the biofilm on the sand was thicker. The biomass distribution on individual sand grains was not uniform. Statistical analysis of the CLSM images showed that biofilm thickness was a random variable with a normal distribution. The hydraulic tests showed that the permeability after biofilm growth was on average 12% of the initial value. To investigate the spatial distribution of permeability in the two-dimensional cell, three models (Taylor, Seki, and Clement) were used to calculate the permeability of porous media with biofilm growth. Taylor's model (Taylor et al., 1990) predicted reductions in permeability of 2-5 orders of magnitude. Clement's model (Clement et al., 1996) predicted 3%-98% of the initial value. Seki's model (Seki and Miyazaki, 2001) could not be applied in this study. In conclusion, biofilm growth clearly decreased the permeability of two-dimensional saturated porous media; however, the reduction was much smaller than that estimated under one-dimensional conditions. Additionally, for two-dimensional saturated porous media with rich nutrition, Seki's model could not be applied, Taylor's model predicted larger reductions, and the results of
Energy Technology Data Exchange (ETDEWEB)
Chinillach Ferrando, N.; Tortosa Oliver, R. A.; Lorente Franco, L.; Morales Marco, J. C.; Solar Catalan, P.; Andreu Martinez, F. J.
2013-07-01
During commissioning and implementation of the IMRT module, we asked which experimental set-up should be used for evaluating two-dimensional dose distributions with a detector array, taking into account the different possibilities offered by the materials available in the service, which allowed us to assemble three different configurations. The objective of this study is to determine which of the three gives the best results for checking two-dimensional dose distributions in IMRT treatments, taking into account the workload involved in each method. (Author)
How Can Histograms Be Useful for Introducing Continuous Probability Distributions?
Derouet, Charlotte; Parzysz, Bernard
2016-01-01
The teaching of probability has changed a great deal since the end of the last century. The development of technologies is indeed part of this evolution. In France, continuous probability distributions began to be studied in 2002 by scientific 12th graders, but this subject was marginal and appeared only as an application of integral calculus.…
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
Energy Technology Data Exchange (ETDEWEB)
Hamadameen, Abdulqader Othman [Optimization, Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia); Zainuddin, Zaitul Marlizawati [Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia)
2014-06-19
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are the fuzzy transformation via a ranking function and the stochastic transformation, in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
Information-theoretic methods for estimating of complicated probability distributions
Zong, Zhi
2006-01-01
Mixing various disciplines frequently produces something profound and far-reaching; cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task for quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur
Bennett, William W; Teasdale, Peter R; Welsh, David T; Panther, Jared G; Jolley, Dianne F
2012-01-15
The recently developed colorimetric diffusive equilibration in thin films (DET) technique for the in situ, high-resolution measurement of iron(II) in marine sediments is optimized to allow measurement of the higher iron concentrations typical of freshwater sediment porewaters. Computer imaging densitometry (CID) is used to analyze the retrieved samplers following exposure to ferrozine, a colorimetric reagent selective for iron(II). The effects of ferrozine concentration, image processing parameters and ionic strength are investigated to improve the applicability of this technique to a wider range of aquatic systems than reported in the first publications of this approach. The technique was optimized to allow detection of up to 2,000 μmol L(-1) iron(II), a four-fold increase over the previous upper detection limit of 500 μmol L(-1). The CID processing of the scanned color image was also optimized to adjust the sensitivity of the assay as required; by processing the image with different color channel filters, the sensitivity of the assay can be optimized for lower concentrations (up to 100 μmol L(-1)) or higher concentrations (up to 2,000 μmol L(-1)) of iron(II), depending on the specific site characteristics. This process does not require separate sampling probes or even separate scans of the DET gels, as the color filter and grayscale conversion are done post-image capture. The optimized technique is very simple to use and provides highly representative, high-resolution (1 mm) two-dimensional distributions of iron(II) in sediment porewaters. The detection limit of the optimized technique was 4.1±0.3 μmol L(-1) iron(II) and relative standard deviations were less than 6%. Copyright © 2011 Elsevier B.V. All rights reserved.
Application-dependent Probability Distributions for Offshore Wind Speeds
Morgan, E. C.; Lackner, M.; Vogel, R. M.; Baise, L. G.
2010-12-01
The higher wind speeds of the offshore environment make it an attractive setting for future wind farms. With sparser field measurements, the theoretical probability distribution of short-term wind speeds becomes more important in estimating values such as average power output and fatigue load. While previous studies typically compare the accuracy of probability distributions using R2, we show that validation based on this metric is not consistent with validation based on engineering parameters of interest, namely turbine power output and extreme wind speed. Thus, in order to make the most accurate estimates possible, the probability distribution that an engineer picks to characterize wind speeds should depend on the design parameter of interest. We introduce the Kappa and Wakeby probability distribution functions to wind speed modeling, and show that these two distributions, along with the Biweibull distribution, fit wind speed samples better than the more widely accepted Weibull and Rayleigh distributions based on R2. Additionally, out of the 14 probability distributions we examine, the Kappa and Wakeby give the most accurate and least biased estimates of turbine power output. The fact that the 2-parameter Lognormal distribution estimates extreme wind speeds (i.e. fits the upper tail of wind speed distributions) with least error indicates that no single distribution performs satisfactorily for all applications. Our use of a large dataset composed of 178 buoys (totaling ~72 million 10-minute wind speed observations) makes these findings highly significant, both in terms of large sample size and broad geographical distribution across various wind regimes. (Figure: Boxplots of R2 from the fit of each of the 14 distributions to the 178 buoy wind speed samples; distributions are ranked from left to right by ascending median R2, with the Biweibull having the median closest to 1.)
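The practical consequence, that the fitted wind-speed distribution drives the power-output estimate, can be sketched by propagating a Weibull wind-speed model through a turbine power curve. All site and turbine parameters below are hypothetical:

```python
import random, statistics

random.seed(3)
# Hypothetical offshore site: Weibull-distributed 10-minute mean wind speeds,
# scale 9.0 m/s, shape 2.2 (invented parameters).
speeds = [random.weibullvariate(9.0, 2.2) for _ in range(100000)]

# Toy turbine power curve (all numbers hypothetical): cut-in 3 m/s,
# rated 12 m/s, cut-out 25 m/s, 5 MW rated power, cubic region below rated.
def power_mw(v):
    if v < 3.0 or v > 25.0:
        return 0.0
    if v >= 12.0:
        return 5.0
    return 5.0 * (v / 12.0) ** 3

avg_mw = statistics.fmean(power_mw(v) for v in speeds)
print(avg_mw)  # estimated average output; it inherits any error in the fitted distribution
```

Repeating the calculation with a Rayleigh, Kappa, or Wakeby fit to the same data would show how sensitive the average-power estimate is to the chosen distribution, which is the paper's point.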
Most probable degree distribution at fixed structural entropy
Indian Academy of Sciences (India)
Ginestra Bianconi
2008-06-01
The structural entropy is the entropy of the ensemble of uncorrelated networks with a given degree sequence. Here we derive the most probable degree distribution emerging when we distribute stubs (or half-edges) randomly through the nodes of the network while keeping the structural entropy fixed. This degree distribution is found to decay as a Poisson distribution when the entropy is maximized and to have a power-law tail with an exponent approaching 2 when the entropy is minimized.
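The maximum-entropy case can be checked empirically: distributing stubs uniformly at random over the nodes yields a binomial (approximately Poisson) degree distribution, whose variance equals its mean. A stdlib-Python sketch:

```python
import random, statistics

random.seed(4)
n_nodes, n_stubs = 10000, 50000   # mean degree 5 (invented sizes)

deg = [0] * n_nodes
for _ in range(n_stubs):
    deg[random.randrange(n_nodes)] += 1   # place each stub uniformly at random

mean = statistics.fmean(deg)
var = statistics.pvariance(deg)
# For a Poisson degree distribution, variance equals mean.
print(mean, var)
```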
PROBABILITY DISTRIBUTION FUNCTION OF NEAR-WALL TURBULENT VELOCITY FLUCTUATIONS
Institute of Scientific and Technical Information of China (English)
[No author listed]
2005-01-01
By large eddy simulation (LES), turbulent databases of channel flows at different Reynolds numbers were established. The probability distribution functions of the streamwise and wall-normal velocity fluctuations were then obtained and compared with the corresponding normal distributions. By hypothesis testing, the deviation from the normal distribution was analyzed quantitatively. The skewness and flatness factors were also calculated, and the variations of these two factors in the viscous sublayer, buffer layer and log-law layer were discussed. Also illustrated were the relations between the probability distribution functions and the burst events (sweeps of high-speed fluid and ejections of low-speed fluid) in the viscous sublayer, buffer layer and log-law layer. Finally, the variations of the probability distribution functions with Reynolds number were examined.
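The skewness and flatness factors used in such analyses are the normalized third and fourth central moments; for a Gaussian signal they equal 0 and 3, so deviations from these values quantify non-normality of the velocity fluctuations. A stdlib-Python sketch on synthetic Gaussian data:

```python
import random, statistics

random.seed(5)

def skewness(x):
    # normalized third central moment
    m, s = statistics.fmean(x), statistics.pstdev(x)
    return statistics.fmean(((v - m) / s) ** 3 for v in x)

def flatness(x):
    # normalized fourth central moment (kurtosis)
    m, s = statistics.fmean(x), statistics.pstdev(x)
    return statistics.fmean(((v - m) / s) ** 4 for v in x)

# Synthetic stand-in for a velocity-fluctuation signal: pure Gaussian noise,
# for which skewness ~ 0 and flatness ~ 3.
gauss = [random.gauss(0.0, 1.0) for _ in range(200000)]
print(skewness(gauss), flatness(gauss))
```

Applied to measured fluctuations in the viscous sublayer, nonzero skewness and flatness above 3 are the classic signatures of intermittent sweep and ejection events.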
Two-dimensional quantum repeaters
Wallnöfer, J.; Zwerger, M.; Muschik, C.; Sangouard, N.; Dür, W.
2016-11-01
The endeavor to develop quantum networks gave rise to a rapidly developing field with far-reaching applications such as secure communication and the realization of distributed computing tasks. This ultimately calls for the creation of flexible multiuser structures that allow for quantum communication between arbitrary pairs of parties in the network and facilitate also multiuser applications. To address this challenge, we propose a two-dimensional quantum repeater architecture to establish long-distance entanglement shared between multiple communication partners in the presence of channel noise and imperfect local control operations. The scheme is based on the creation of self-similar multiqubit entanglement structures at growing scale, where variants of entanglement swapping and multiparty entanglement purification are combined to create high-fidelity entangled states. We show how such networks can be implemented using trapped ions in cavities.
Generating Probability Distributions using Multivalued Stochastic Relay Circuits
Lee, David
2011-01-01
The problem of random number generation dates back to von Neumann's work in 1951. Since then, many algorithms have been developed for generating unbiased bits from complex correlated sources as well as for generating arbitrary distributions from unbiased bits. An equally interesting, but less studied aspect is the structural component of random number generation as opposed to the algorithmic aspect. That is, given a network structure imposed by nature or physical devices, how can we build networks that generate arbitrary probability distributions in an optimal way? In this paper, we study the generation of arbitrary probability distributions in multivalued relay circuits, a generalization in which relays can take on any of N states and the logical 'and' and 'or' are replaced with 'min' and 'max' respectively. Previous work was done on two-state relays. We generalize these results, describing a duality property and networks that generate arbitrary rational probability distributions. We prove that these network...
Evidence for Truncated Exponential Probability Distribution of Earthquake Slip
Thingbaijam, Kiran K. S.
2016-07-13
Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.
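Fitting a truncated exponential to slip values can be sketched as follows; this is a generic maximum-likelihood illustration, not the authors' implementation, and the rate, truncation point, and bisection bounds are assumptions:

```python
import math
import random

def truncexp_sample(lam, s_max, rng):
    """Draw one sample from an exponential (rate lam) truncated to [0, s_max],
    via inversion of the truncated CDF."""
    u = rng.random()
    return -math.log(1.0 - u * (1.0 - math.exp(-lam * s_max))) / lam

def truncexp_mle(samples, s_max, lo=1e-6, hi=50.0):
    """MLE of the rate: solve mean(s) = 1/lam - s_max/(exp(lam*s_max) - 1)
    by bisection (the right-hand side is decreasing in lam)."""
    target = sum(samples) / len(samples)

    def expected_mean(lam):
        return 1.0 / lam - s_max / math.expm1(lam * s_max)

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if expected_mean(mid) > target:
            lo = mid  # true rate is larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = random.Random(42)
data = [truncexp_sample(1.5, 3.0, rng) for _ in range(20000)]
lam_hat = truncexp_mle(data, 3.0)  # should recover a rate near 1.5
```

The truncation point s_max plays the role of the physical maximum-slip constraint mentioned in the abstract.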
The Persistence Problem in Two-Dimensional Fluid Turbulence
Perlekar, Prasad; Mitra, Dhrubaditya; Pandit, Rahul
2010-01-01
We present a natural framework for studying the persistence problem in two-dimensional fluid turbulence by using the Okubo-Weiss parameter Λ to distinguish between vortical and extensional regions. We then use a direct numerical simulation (DNS) of the two-dimensional, incompressible Navier-Stokes equation with Ekman friction to study probability distribution functions (PDFs) of the persistence times of vortical and extensional regions by employing both Eulerian and Lagrangian measurements. We find that, in the Eulerian case, the persistence-time PDFs have exponential tails; by contrast, this PDF for Lagrangian particles, in vortical regions, has a power-law tail with a universal exponent θ = 3.1 ± 0.2.
Analysis of the Multi-dimensional Beta Probability Distribution Function
Institute of Scientific and Technical Information of China (English)
潘高田; 梁帆; 郭齐胜; 黄一斌
2011-01-01
Based on the quantitative truncated sequential test theory, multi-dimensional Beta probability distribution functions arise in hit-accuracy tests of weapon systems against aerial targets. This paper analyzes the properties of the multi-dimensional Beta probability distribution function and computes part of the two-dimensional Beta probability distribution function values. This research plays an important role in the field of weapon system hit-accuracy tests.
NORMALLY DISTRIBUTED PROBABILITY MEASURE ON THE METRIC SPACE OF NORMS
Institute of Scientific and Technical Information of China (English)
Á.G. HORVÁTH
2013-01-01
In this paper we propose a method to construct probability measures on the space of convex bodies. For this purpose, first, we introduce the notion of thinness of a body. Then we show the existence of a measure with the property that its pushforward by the thinness function is a probability measure of truncated normal distribution. Finally, we improve this method to find a measure satisfying some important properties in geometric measure theory.
Probability distributions for Poisson processes with pile-up
Sevilla, Diego J R
2013-01-01
In this paper, two parametric probability distributions capable of describing the statistics of X-ray photon detection by a CCD are presented. They are formulated from simple models that account for the pile-up phenomenon, in which two or more photons are counted as one. These models are based on the Poisson process, but they have an extra parameter, which includes all the detailed mechanisms of the pile-up process and must be fitted to the data statistics simultaneously with the rate parameter. The new probability distributions, one for the number of counts per time bin (Poisson-like) and the other for waiting times (exponential-like), are tested by fitting them to statistics of real data and against each other through numerical simulations, and their results are analyzed and compared. The probability distributions presented here can be used as background statistical models to derive likelihood functions for statistical methods in signal analysis.
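The pile-up effect can be illustrated with a toy Monte Carlo. This uses an extreme "all photons in a frame register as one count" model, far simpler than the paper's parametric distributions, and the rate and frame count are arbitrary:

```python
import math
import random

def simulate_pileup(rate, n_frames, rng):
    """Fraction of frames with a detected event when any number of Poisson
    arrivals within a frame registers as a single count (full pile-up)."""
    detected = 0
    for _ in range(n_frames):
        # Draw Poisson(rate) arrivals by inverting the CDF term by term
        arrivals, p, u = 0, math.exp(-rate), rng.random()
        cumulative = p
        while u > cumulative:
            arrivals += 1
            p *= rate / arrivals
            cumulative += p
        if arrivals >= 1:
            detected += 1
    return detected / n_frames

rng = random.Random(0)
observed = simulate_pileup(rate=0.8, n_frames=200000, rng=rng)
predicted = 1.0 - math.exp(-0.8)  # P(at least one arrival per frame)
```

Even this crude model shows why the detected-count statistics deviate from a pure Poisson law as the rate grows, which is the effect the paper's extra parameter captures.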
Probability distribution functions in the finite density lattice QCD
Ejiri, S; Aoki, S; Kanaya, K; Saito, H; Hatsuda, T; Ohno, H; Umeda, T
2012-01-01
We study the phase structure of QCD at high temperature and density by lattice QCD simulations adopting a histogram method. We try to solve the problems which arise in the numerical study of the finite density QCD, focusing on the probability distribution function (histogram). As a first step, we investigate the quark mass dependence and the chemical potential dependence of the probability distribution function as a function of the Polyakov loop when all quark masses are sufficiently large, and study the properties of the distribution function. The effect from the complex phase of the quark determinant is estimated explicitly. The shape of the distribution function changes with the quark mass and the chemical potential. Through the shape of the distribution, the critical surface which separates the first order transition and crossover regions in the heavy quark region is determined for the 2+1-flavor case.
Assigning probability distributions to input parameters of performance assessment models
Energy Technology Data Exchange (ETDEWEB)
Mishra, Srikanta [INTERA Inc., Austin, TX (United States)
2002-02-01
This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness-of-fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of Bayes' theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
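The method of moments mentioned above can be sketched for a gamma distribution; the parameter values are a generic illustration, not tied to the Yucca Mountain data:

```python
import random
import statistics

def gamma_method_of_moments(data):
    """Match sample mean and variance to the gamma moments:
    mean = shape * scale, variance = shape * scale**2."""
    m = statistics.fmean(data)
    v = statistics.variance(data)
    shape = m * m / v
    scale = v / m
    return shape, scale

# Recover known parameters from synthetic data
rng = random.Random(7)
sample = [rng.gammavariate(2.0, 3.0) for _ in range(50000)]
shape_hat, scale_hat = gamma_method_of_moments(sample)
```

Maximum likelihood estimation would replace the two moment equations with a numerical optimization of the log-likelihood; the moment estimates are often used as its starting point.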
Probability distribution of arrival times in quantum mechanics
Delgado, V
1998-01-01
In a previous paper [Phys. Rev. A, in press] we introduced a self-adjoint operator $\hat {{\cal T}}(X)$ whose eigenstates can be used to define consistently a probability distribution of the time of arrival at a given spatial point. In the present work we show that the probability distribution previously proposed can be well understood on classical grounds in the sense that it is given by the expectation value of a certain positive definite operator $\hat J^{(+)}(X)$ which is nothing but a straightforward quantum version of the modulus of the classical current. For quantum states highly localized in momentum space about a certain momentum $p_0$...
Parametric Probability Distribution Functions for Axon Diameters of Corpus Callosum
Directory of Open Access Journals (Sweden)
Farshid eSepehrband
2016-05-01
Axon diameter is an important neuroanatomical characteristic of the nervous system that alters in the course of neurological disorders such as multiple sclerosis. Axon diameters vary, even within a fiber bundle, and are not normally distributed. An accurate distribution function is therefore beneficial, either to describe axon diameters that are obtained from a direct measurement technique (e.g., microscopy), or to infer them indirectly (e.g., using diffusion-weighted MRI). The gamma distribution is a common choice for this purpose (particularly for the inferential approach) because it resembles the distribution profile of measured axon diameters, which has been consistently shown to be non-negative and right-skewed. In this study we compared a wide range of parametric probability distribution functions against empirical data obtained from electron microscopy images. We observed that the gamma distribution fails to accurately describe the main characteristics of the axon diameter distribution, such as the location and scale of the mode and the profile of the distribution tails. We also found that the generalized extreme value distribution consistently fitted the measured distribution better than other distribution functions. This suggests that there may be distinct subpopulations of axons in the corpus callosum, each with their own distribution profiles. In addition, we observed that several other distributions outperformed the gamma distribution, yet had the same number of unknown parameters; these were the inverse Gaussian, log normal, log logistic and Birnbaum-Saunders distributions.
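Comparing candidate distributions by log-likelihood, as done in this kind of study, can be sketched on synthetic right-skewed data. This is only an illustration of the model-comparison idea: the data are simulated, and the gamma parameters come from the method of moments rather than full MLE:

```python
import math
import random
import statistics

def gamma_loglik(data, shape, scale):
    """Sum of gamma log-densities."""
    return sum((shape - 1) * math.log(x) - x / scale
               - math.lgamma(shape) - shape * math.log(scale) for x in data)

def lognormal_loglik(data, mu, sigma):
    """Sum of lognormal log-densities."""
    return sum(-math.log(x * sigma * math.sqrt(2 * math.pi))
               - (math.log(x) - mu) ** 2 / (2 * sigma ** 2) for x in data)

rng = random.Random(11)
data = [rng.lognormvariate(0.0, 1.0) for _ in range(5000)]

# Lognormal MLE: mean and std of the log-data
logs = [math.log(x) for x in data]
mu_hat, sigma_hat = statistics.fmean(logs), statistics.pstdev(logs)

# Gamma parameters via method of moments
m, v = statistics.fmean(data), statistics.variance(data)
shape_hat, scale_hat = m * m / v, v / m

ll_lognorm = lognormal_loglik(data, mu_hat, sigma_hat)
ll_gamma = gamma_loglik(data, shape_hat, scale_hat)
```

With both models having two parameters, the log-likelihoods (or AIC values) can be compared directly, which is the sense in which the abstract ranks same-complexity distributions.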
Probability distributions of the electroencephalogram envelope of preterm infants.
Saji, Ryoya; Hirasawa, Kyoko; Ito, Masako; Kusuda, Satoshi; Konishi, Yukuo; Taga, Gentaro
2015-06-01
To determine the stationary characteristics of electroencephalogram (EEG) envelopes for prematurely born (preterm) infants and investigate the intrinsic characteristics of early brain development in preterm infants. Twenty neurologically normal sets of EEGs recorded in infants with a post-conceptional age (PCA) range of 26-44 weeks (mean 37.5 ± 5.0 weeks) were analyzed. Hilbert transform was applied to extract the envelope. We determined the suitable probability distribution of the envelope and performed a statistical analysis. It was found that (i) the probability distributions for preterm EEG envelopes were best fitted by lognormal distributions at 38 weeks PCA or less, and by gamma distributions at 44 weeks PCA; (ii) the scale parameter of the lognormal distribution had positive correlations with PCA as well as a strong negative correlation with the percentage of low-voltage activity; (iii) the shape parameter of the lognormal distribution had significant positive correlations with PCA; (iv) the statistics of mode showed significant linear relationships with PCA, and, therefore, it was considered a useful index in PCA prediction. These statistics, including the scale parameter of the lognormal distribution and the skewness and mode derived from a suitable probability distribution, may be good indexes for estimating stationary nature in developing brain activity in preterm infants. The stationary characteristics, such as discontinuity, asymmetry, and unimodality, of preterm EEGs are well indicated by the statistics estimated from the probability distribution of the preterm EEG envelopes. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Augmenting momentum resolution with well tuned probability distributions
Landi, Gregorio
2016-01-01
The realistic probability distributions of a previous article are applied to the reconstruction of tracks in a constant magnetic field. The complete forms and their schematic approximations produce excellent momentum estimations, drastically better than standard fits. A simplified derivation of one of our probability distributions is illustrated. The momentum reconstructions are compared with standard (least-squares) fits using two different position algorithms: the eta-algorithm and the two-strip center of gravity. The quality of our results is expressed as the increase in magnetic field and signal-to-noise ratio required for the standard fit reconstructions to overlap with those from our best distributions. The data and the simulations are tuned on the tracker of a running experiment and its double-sided microstrip detectors; here each detector side is simulated to measure the magnetic bending. To overlap with our best distributions, the magnetic field must be increased by a factor 1.5 for the least squares based on the eta-a...
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.H.
1980-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F tests. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
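The final idea, reusing a uniform generator to produce other distributions, can be sketched in modern terms via inverse-transform sampling; the exponential and Weibull targets here are illustrative choices, not a port of the Fortran routines:

```python
import math
import random

def exponential_from_uniform(u, rate):
    """Invert the exponential CDF F(x) = 1 - exp(-rate * x)."""
    return -math.log(1.0 - u) / rate

def weibull_from_uniform(u, shape, scale):
    """Invert the Weibull CDF F(x) = 1 - exp(-(x / scale) ** shape)."""
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

rng = random.Random(1)
exp_draws = [exponential_from_uniform(rng.random(), rate=2.0)
             for _ in range(100000)]
exp_mean = sum(exp_draws) / len(exp_draws)  # Exp(rate) has mean 1/rate
```

Any distribution with an invertible CDF can be generated this way from a single uniform stream, which is exactly why a library needs only one good uniform generator.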
Probability distribution of extreme share returns in Malaysia
Zin, Wan Zawiah Wan; Safari, Muhammad Aslam Mohd; Jaaman, Saiful Hafizah; Yie, Wendy Ling Shin
2014-09-01
The objective of this study is to investigate the suitable probability distribution to model the extreme share returns in Malaysia. To achieve this, weekly and monthly maximum daily share returns are derived from share price data obtained from Bursa Malaysia over the period 2000 to 2012. The study starts with summary statistics of the data, which provide a clue on the likely candidates for the best-fitting distribution. Next, the suitability of six extreme value distributions, namely the Gumbel, Generalized Extreme Value (GEV), Generalized Logistic (GLO), Generalized Pareto (GPA), Lognormal (GNO) and Pearson (PE3) distributions, is evaluated. The method of L-moments is used in parameter estimation. Based on several goodness-of-fit tests and the L-moment diagram test, the Generalized Pareto distribution and the Pearson distribution are found to be the best-fitting distributions to represent the weekly and monthly maximum share returns in the Malaysian stock market during the studied period, respectively.
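The method of L-moments rests on sample probability-weighted moments; a minimal sketch using the standard unbiased estimators (a generic illustration, not the authors' code) is:

```python
import random

def sample_l_moments(data):
    """First three sample L-moments (l1, l2, l3) and L-skewness t3 = l3/l2,
    computed from unbiased probability-weighted moments b0, b1, b2."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * x[i] for i in range(n)) / (n * (n - 1))
    b2 = sum(i * (i - 1) * x[i] for i in range(n)) / (n * (n - 1) * (n - 2))
    l1 = b0
    l2 = 2.0 * b1 - b0
    l3 = 6.0 * b2 - 6.0 * b1 + b0
    return l1, l2, l3 / l2

# For a uniform sample on [0, 1]: l1 -> 1/2, l2 -> 1/6, t3 -> 0
rng = random.Random(3)
l1, l2, t3 = sample_l_moments([rng.random() for _ in range(100000)])
```

Plotting the L-skewness against the L-kurtosis of each candidate family is what the L-moment diagram test in the abstract refers to.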
Log-concave Probability Distributions: Theory and Statistical Testing
DEFF Research Database (Denmark)
An, Mark Yuing
1996-01-01
This paper studies the broad class of log-concave probability distributions that arise in economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete and mul...
Probability Measure of Navigation Pattern Prediction using Poisson Distribution Analysis
Directory of Open Access Journals (Sweden)
Dr.V.Valli Mayil
2012-06-01
The World Wide Web has become one of the most important media to store, share and distribute information. The rapid expansion of the web has provided a great opportunity to study user and system behavior by exploring web access logs. Web usage mining is the application of data mining techniques to large web data repositories in order to extract usage patterns. Every web server keeps a log of all transactions between the server and the clients. The log data collected by web servers contain information about every click of a user on the web documents of the site. The useful log information needs to be analyzed and interpreted in order to obtain knowledge about actual user preferences in accessing web pages. In recent years several methods have been proposed for mining web log data. This paper addresses the statistical method of Poisson distribution analysis to find the higher-probability session sequences, which are then used to test web application performance. The analysis of large volumes of clickstream data demands the employment of data mining methods. Conducting data mining on web server logs involves determining frequently occurring access sequences. A Poisson distribution gives the frequency probability of specific events when the average rate of a single occurrence is known. The Poisson distribution is a discrete function, which is used in this paper to find the probability that a particular page is visited by the user.
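The Poisson calculation at the heart of this approach can be sketched as follows; the visit rate and the probability threshold are illustrative assumptions, not values from the paper:

```python
import math

def poisson_pmf(k, lam):
    """Probability of exactly k page visits in a session,
    given an average visit rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Example: a page averaging 4 visits per session; flag visit counts
# whose probability exceeds a chosen threshold
lam = 4.0
probs = {k: poisson_pmf(k, lam) for k in range(20)}
frequent = [k for k, p in probs.items() if p > 0.15]
```

Session sequences whose per-page visit counts fall in the high-probability region would then be the candidates used for performance testing.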
On Probability Distributions for Trees: Representations, Inference and Learning
Denis, François; Gilleron, Rémi; Tommasi, Marc; Gilbert, Édouard
2008-01-01
We study probability distributions over free algebras of trees. Probability distributions can be seen as particular (formal power) tree series [Berstel et al 82, Esik et al 03], i.e. mappings from trees to a semiring K . A widely studied class of tree series is the class of rational (or recognizable) tree series which can be defined either in an algebraic way or by means of multiplicity tree automata. We argue that the algebraic representation is very convenient to model probability distributions over a free algebra of trees. First, as in the string case, the algebraic representation allows to design learning algorithms for the whole class of probability distributions defined by rational tree series. Note that learning algorithms for rational tree series correspond to learning algorithms for weighted tree automata where both the structure and the weights are learned. Second, the algebraic representation can be easily extended to deal with unranked trees (like XML trees where a symbol may have an unbounded num...
Probability distributions of continuous measurement results for conditioned quantum evolution
Franquet, A.; Nazarov, Yuli V.
2017-02-01
We address the statistics of continuous weak linear measurement on a few-state quantum system that is subject to a conditioned quantum evolution. For a conditioned evolution, both the initial and final states of the system are fixed: the latter is achieved by the postselection in the end of the evolution. The statistics may drastically differ from the nonconditioned case, and the interference between initial and final states can be observed in the probability distributions of measurement outcomes as well as in the average values exceeding the conventional range of nonconditioned averages. We develop a proper formalism to compute the distributions of measurement outcomes, and evaluate and discuss the distributions in experimentally relevant setups. We demonstrate the manifestations of the interference between initial and final states in various regimes. We consider analytically simple examples of nontrivial probability distributions. We reveal peaks (or dips) at half-quantized values of the measurement outputs. We discuss in detail the case of zero overlap between initial and final states demonstrating anomalously big average outputs and sudden jump in time-integrated output. We present and discuss the numerical evaluation of the probability distribution aiming at extending the analytical results and describing a realistic experimental situation of a qubit in the regime of resonant fluorescence.
Convolutions Induced Discrete Probability Distributions and a New Fibonacci Constant
Rajan, Arulalan; Rao, Vittal; Rao, Ashok
2010-01-01
This paper proposes another constant that can be associated with the Fibonacci sequence. In this work, we look at the probability distributions generated by the linear convolution of the Fibonacci sequence with itself, and the linear convolution of the symmetrized Fibonacci sequence with itself. We observe that for a distribution generated by the linear convolution of the standard Fibonacci sequence with itself, the variance converges to 8.4721359... . Also, for a distribution generated by the linear convolution of symmetrized Fibonacci sequences, the variance converges in an average sense to 17.1942..., which is approximately twice the value obtained with the common Fibonacci sequence.
Measuring Robustness of Timetables at Stations using a Probability Distribution
DEFF Research Database (Denmark)
Jensen, Lars Wittrup; Landex, Alex
Stations are often the limiting capacity factor in a railway network. This induces interdependencies, especially at at-grade junctions, causing network effects. This paper presents three traditional methods that can be used to measure the complexity of a station, indicating the robustness...... infrastructure layouts given a timetable. These two methods provide different precision at the expense of a more complex calculation process. The advanced and more precise method is based on a probability distribution that can describe the expected delay between two trains as a function of the buffer time....... This paper proposes to use the exponential distribution, only taking non-negative delays into account, but any probability distribution can be used. Furthermore, the paper proposes that the calculation parameters are estimated from existing delay data, at a station, to achieve a higher precision. As delay...
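The exponential delay model proposed above can be sketched as follows. Only non-negative delays are considered, as in the paper; the buffer time and mean delay values, and the use of memorylessness for the transferred delay, are illustrative assumptions:

```python
import math

def delay_propagation_probability(buffer_time, mean_delay):
    """Probability that an incoming exponentially distributed (non-negative)
    delay exceeds the scheduled buffer time and propagates to the next train."""
    return math.exp(-buffer_time / mean_delay)

def expected_transferred_delay(buffer_time, mean_delay):
    """Expected delay passed on to the next train: by memorylessness of the
    exponential, E[(D - t)+] = mean_delay * exp(-t / mean_delay)."""
    return mean_delay * math.exp(-buffer_time / mean_delay)

# With a 120 s buffer and a 90 s mean delay, most delays are absorbed
p = delay_propagation_probability(120.0, 90.0)
```

Estimating mean_delay from recorded delay data at a station, rather than assuming it, is the higher-precision calibration step the paper proposes.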
Probability Distribution Function of Passive Scalars in Shell Models
Institute of Scientific and Technical Information of China (English)
LIU Chun-Ping; ZHANG Xiao-Qiang; LIU Yu-Rong; WANG Guang-Rui; HE Da-Ren; CHEN Shi-Gang; ZHU Lu-Jin
2008-01-01
A shell-model version of the passive scalar problem is introduced, inspired by the model of K. Ohkitani and M. Yakhot [K. Ohkitani and M. Yakhot, Phys. Rev. Lett. 60 (1988) 983; K. Ohkitani and M. Yakhot, Prog. Theor. Phys. 81 (1988) 329]. As in the original problem, the prescribed random velocity field is Gaussian and delta-correlated in time. The deterministic differential equations are regarded as a nonlinear Langevin equation. Then, the Fokker-Planck equations for the PDF of the passive scalars are obtained and solved numerically. In the energy input range (n < 5, where n is the shell number), the probability distribution function (PDF) of the passive scalars is near the Gaussian distribution. In the inertial range (5 < n < 16) and the dissipation range (n ≥ 17), the PDF of the passive scalars shows obvious intermittency, and the scaling power of the passive scalar is anomalous. The results of the numerical simulations are compared with experimental measurements.
Distribution probability of large-scale landslides in central Nepal
Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi
2014-12-01
Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) their role as sources of small-scale failures, and 3) reactivation. Only a few scientific publications have been published concerning large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation of large-scale landslide distribution is also derived. The equation is validated by applying it to another area: the area under the receiver operating characteristic curve of the landslide distribution probability in the new area is 0.699, and the distribution probability value could explain > 65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
Polynomial probability distribution estimation using the method of moments.
Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper
2017-01-01
We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
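The core linear-algebra step can be sketched as follows: for a polynomial PDF p(x) = Σ a_j x^j on [0, 1], matching the raw moments m_0..m_N gives the linear system Σ_j a_j / (k + j + 1) = m_k, a Hilbert-type matrix. The restriction to [0, 1] and the solver below are generic illustrations, not the authors' code:

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def polynomial_pdf_coeffs(moments):
    """Coefficients a_j of the polynomial whose raw moments on [0, 1]
    equal the given m_0..m_N (integral of x**k * x**j is 1/(k+j+1))."""
    n = len(moments)
    A = [[1.0 / (k + j + 1) for j in range(n)] for k in range(n)]
    return solve_linear(A, moments)

# Check: the density p(x) = 2x has raw moments m_k = 2 / (k + 2)
coeffs = polynomial_pdf_coeffs([2.0 / (k + 2) for k in range(2)])
```

Convolving two such approximations reduces to multiplying polynomials, which is the computational advantage the abstract highlights.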
Wave Packet Dynamics in the Infinite Square Well with the Wigner Quasi-probability Distribution
Belloni, Mario; Doncheski, Michael; Robinett, Richard
2004-05-01
Over the past few years a number of authors have been interested in the time evolution and revivals of Gaussian wave packets in one-dimensional infinite wells and in two-dimensional infinite wells of various geometries. In all of these circumstances, the wave function is guaranteed to revive at a time related to the inverse of the system's ground state energy, if not sooner. To better visualize these revivals we have calculated the time-dependent Wigner quasi-probability distribution for position and momentum, P_W(x; p), for Gaussian wave packet solutions of this system. The Wigner quasi-probability distribution clearly demonstrates the short-term semi-classical time dependence, as well as longer-term revival behavior and the structure during the collapsed state. This tool also provides an excellent way of demonstrating the patterns of highly-correlated Schrödinger-cat-like `mini-packets' which appear at fractional multiples of the exact revival time. This research is supported in part by a Research Corporation Cottrell College Science Award (CC5470) and the National Science Foundation under contracts DUE-0126439 and DUE-9950702.
Unitary equilibrations: probability distribution of the Loschmidt echo
Venuti, Lorenzo Campos
2009-01-01
Closed quantum systems evolve unitarily and therefore cannot converge in a strong sense to an equilibrium state starting out from a generic pure state. Nevertheless for large system size one observes temporal typicality. Namely, for the overwhelming majority of the time instants, the statistics of observables is practically indistinguishable from an effective equilibrium one. In this paper we consider the Loschmidt echo (LE) to study this sort of unitary equilibration after a quench. We draw several conclusions on general grounds and on the basis of an exactly-solvable example of a quasi-free system. In particular we focus on the whole probability distribution of observing a given value of the LE after waiting a long time. Depending on the interplay between the initial state and the quench Hamiltonian, we find different regimes reflecting different equilibration dynamics. When the perturbation is small and the system is away from criticality the probability distribution is Gaussian. However close to criticali...
Measurement of probability distributions for internal stresses in dislocated crystals
Energy Technology Data Exchange (ETDEWEB)
Wilkinson, Angus J.; Tarleton, Edmund; Vilalta-Clemente, Arantxa; Collins, David M. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Jiang, Jun; Britton, T. Benjamin [Department of Materials, Imperial College London, Royal School of Mines, Exhibition Road, London SW7 2AZ (United Kingdom)
2014-11-03
Here, we analyse residual stress distributions obtained from various crystal systems using high resolution electron backscatter diffraction (EBSD) measurements. Histograms showing stress probability distributions exhibit tails extending to very high stress levels. We demonstrate that these extreme stress values are consistent with the functional form that should be expected for dislocated crystals. Analysis initially developed by Groma and co-workers for X-ray line profile analysis and based on the so-called “restricted second moment of the probability distribution” can be used to estimate the total dislocation density. The generality of the results is illustrated by application to three quite different systems, namely, face centred cubic Cu deformed in uniaxial tension, a body centred cubic steel deformed to larger strain by cold rolling, and hexagonal InAlN layers grown on misfitting sapphire and silicon carbide substrates.
Fibonacci Sequence, Recurrence Relations, Discrete Probability Distributions and Linear Convolution
Rajan, Arulalan; Rao, Ashok; Jamadagni, H S
2012-01-01
The classical Fibonacci sequence is known to exhibit many fascinating properties. In this paper, we explore the Fibonacci sequence and integer sequences generated by second order linear recurrence relations with positive integer coefficients from the point of view of probability distributions that they induce. We obtain the generalizations of some of the known limiting properties of these probability distributions and present certain optimal properties of the classical Fibonacci sequence in this context. In addition, we also look at the self linear convolution of linear recurrence relations with positive integer coefficients. Analysis of self linear convolution is focused towards locating the maximum in the resulting sequence. This analysis also highlights the influence that the largest positive real root, of the "characteristic equation" of the linear recurrence relations with positive integer coefficients, has on the location of the maximum. In particular, when the largest positive real root is 2, the locatio...
Outage probability of distributed beamforming with co-channel interference
Yang, Liang
2012-03-01
In this letter, we consider a distributed beamforming scheme (DBF) in the presence of equal-power co-channel interferers for both amplify-and-forward and decode-and-forward relaying protocols over Rayleigh fading channels. We first derive outage probability expressions for the DBF systems. We then present a performance analysis for a scheme relying on source selection. Numerical results are finally presented to verify our analysis. © 2011 IEEE.
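The flavor of such outage calculations can be reproduced with a minimal Monte Carlo sketch. The block below is not the letter's DBF analysis; it uses the standard interference-limited result for a single Rayleigh link facing N equal-power Rayleigh interferers, P_out = 1 − (1 + γ_th)^(−N), with all parameter values assumed.

```python
import numpy as np

rng = np.random.default_rng(2)
gamma_th = 1.0   # SIR threshold (assumed)
N_int = 3        # number of equal-power co-channel interferers (assumed)
trials = 500_000

# Rayleigh fading -> exponentially distributed channel powers (unit mean):
S = rng.exponential(1.0, trials)                   # desired link power
I = rng.exponential(1.0, (trials, N_int)).sum(1)   # total interference power

p_out_mc = np.mean(S < gamma_th * I)
# Closed form for i.i.d. unit-mean Rayleigh links, interference-limited:
p_out_exact = 1 - (1 + gamma_th) ** -N_int
print(p_out_mc, p_out_exact)
```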
Cosmological constraints from the convergence 1-point probability distribution
Patton, Kenneth; Blazek, Jonathan; Honscheid, Klaus; Huff, Eric; Melchior, Peter; Ross, Ashley J.; Suchyta, Eric
2016-01-01
We examine the cosmological information available from the 1-point probability distribution (PDF) of the weak-lensing convergence field, utilizing fast L-PICOLA simulations and a Fisher analysis. We find competitive constraints in the $\\Omega_m$-$\\sigma_8$ plane from the convergence PDF with $188\\ arcmin^2$ pixels compared to the cosmic shear power spectrum with an equivalent number of modes ($\\ell < 886$). The convergence PDF also partially breaks the degeneracy cosmic shear exhibits in that...
Testing for the maximum cell probabilities in multinomial distributions
Institute of Scientific and Technical Information of China (English)
XIONG Shifeng; LI Guoying
2005-01-01
This paper investigates one-sided hypotheses testing for p[1], the largest cell probability of a multinomial distribution. A small-sample test of Ethier (1982) is extended to the general case. Based on an estimator of p[1], a class of large-sample tests is proposed. The asymptotic power of the above tests under local alternatives is derived. An example is presented at the end of this paper.
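A generic large-sample test of this kind can be sketched as follows; the Wald-type statistic and all numbers below are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical experiment: k = 6 cells, n = 1200 trials (assumed values).
p_true = np.array([0.30, 0.20, 0.15, 0.15, 0.10, 0.10])
counts = rng.multinomial(1200, p_true)
n = counts.sum()

# Plug-in estimator of the largest cell probability p_(1):
p1_hat = counts.max() / n

# One-sided large-sample test of H0: p_(1) <= p0 vs H1: p_(1) > p0,
# using a generic Wald-type statistic (the paper's test differs in detail):
p0 = 0.25
z = (p1_hat - p0) / np.sqrt(p1_hat * (1 - p1_hat) / n)
p_value = stats.norm.sf(z)
print(p1_hat, round(p_value, 4))
```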
Estimating probable flaw distributions in PWR steam generator tubes
Energy Technology Data Exchange (ETDEWEB)
Gorman, J.A.; Turner, A.P.L. [Dominion Engineering, Inc., McLean, VA (United States)
1997-02-01
This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses.
Steady-state distributions of probability fluxes on complex networks
Chełminiak, Przemysław; Kurzyński, Michał
2017-02-01
We consider a simple model of the Markovian stochastic dynamics on complex networks to examine the statistical properties of the probability fluxes. The additional transition, called hereafter a gate, powered by the external constant force breaks a detailed balance in the network. We argue, using a theoretical approach and numerical simulations, that the stationary distributions of the probability fluxes emergent under such conditions converge to the Gaussian distribution. By virtue of the stationary fluctuation theorem, its standard deviation depends directly on the square root of the mean flux. In turn, the nonlinear relation between the mean flux and the external force, which provides the key result of the present study, allows us to calculate the two parameters that entirely characterize the Gaussian distribution of the probability fluxes both close to as well as far from the equilibrium state. Other effects that modify these parameters, such as the addition of shortcuts to the tree-like network, the extension and configuration of the gate, and a change in the network size, are also studied by means of computer simulations and discussed in terms of the rigorous theoretical predictions.
The Probability Distribution Model of Wind Speed over East Malaysia
Directory of Open Access Journals (Sweden)
Nurulkamal Masseran
2013-07-01
Many studies have found that wind speed is the most significant parameter of wind power. Thus, an accurate determination of the probability distribution of wind speed is an important step before estimating the wind energy potential over a particular region. Utilizing an accurate distribution will minimize the uncertainty in wind resource estimates and improve the site assessment phase of planning. In general, different regions have different wind regimes, so it is reasonable to expect that different wind distributions will be found for different regions. Accordingly, nine different statistical distributions have been fitted to the mean hourly wind speed data from 20 wind stations in East Malaysia, for the period from 2000 to 2009. Values of the Kolmogorov-Smirnov statistic, Akaike’s Information Criterion, the Bayesian Information Criterion and the R2 correlation coefficient were compared across the distributions to determine the best fit for describing the observed data. A good fit for most of the stations in East Malaysia was found using the Gamma and Burr distributions, though there was no clear pattern observed for all regions in East Malaysia. However, the Gamma distribution was a clear fit to the data from all stations in southern Sabah.
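The model-selection workflow described above (fit several candidate distributions, then compare goodness-of-fit statistics and information criteria) can be sketched with scipy. The synthetic Gamma-distributed sample below stands in for station records; all parameters are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic "mean hourly wind speed" sample (m/s); real station data would go here.
speeds = rng.gamma(shape=2.0, scale=2.5, size=2000)

candidates = {
    "gamma":   stats.gamma,
    "weibull": stats.weibull_min,
    "lognorm": stats.lognorm,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(speeds, floc=0)          # pin the location at zero for speeds
    loglik = dist.logpdf(speeds, *params).sum()
    k = len(params) - 1                        # loc was fixed, not estimated
    aic = 2 * k - 2 * loglik                   # Akaike's Information Criterion
    ks = stats.kstest(speeds, dist.cdf, args=params).statistic
    results[name] = (aic, ks)
    print(f"{name:8s} AIC={aic:10.1f} KS={ks:.4f}")

best = min(results, key=lambda n: results[n][0])
print("best by AIC:", best)
```

With real data one would add the remaining candidates (Burr, Rayleigh, etc.) to the dictionary in the same way.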
Research on probability distribution of port cargo throughput
Institute of Scientific and Technical Information of China (English)
SUN Liang; TAN De-rong
2008-01-01
In order to more accurately examine developing trends in gross cargo throughput, we have modeled the probability distribution of cargo throughput. Gross cargo throughput is determined by the time spent by cargo ships in the port and the operating efficiency of handling equipment. Gross cargo throughput is the sum of all compound variables determining each aspect of cargo throughput for every cargo ship arriving at the port. Probability distribution was determined using the Wald equation. The results show that the variability of gross cargo throughput primarily depends on the different times required by different cargo ships arriving at the port. This model overcomes the shortcoming of previous models: inability to accurately determine the probability of a specific value of future gross cargo throughput. Our proposed model of cargo throughput depends on the relationship between time required by a cargo ship arriving at the port and the operational capacity of handling equipment at the port. At the same time, key factors affecting gross cargo throughput are analyzed. In order to test the efficiency of the model, the cargo volume of a port in Shandong Province was used as an example. In the case study the actual results matched our theoretical analysis.
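The Wald equation underlying this model, E[S] = E[N]·E[X] for a sum of a random number of i.i.d. terms, is easy to check by simulation. The arrival and cargo parameters below are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(7)
LAM, MU = 20.0, 3.0    # mean ship arrivals/day and mean cargo/ship (kt); assumed
DAYS = 50_000

n_ships = rng.poisson(LAM, size=DAYS)
# The sum of n i.i.d. Exponential(mean MU) loads is Gamma(n, scale=MU), so the
# daily totals can be drawn in one vectorized call (0 ships -> 0 cargo):
totals = np.where(n_ships > 0,
                  rng.gamma(np.maximum(n_ships, 1), scale=MU), 0.0)

print(totals.mean())   # Wald's equation predicts E[S] = E[N] E[X] = LAM * MU
print(LAM * MU)
```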
Probability distribution of turbulence in curvilinear cross section mobile bed channel.
Sharma, Anurag; Kumar, Bimlesh
2016-01-01
The present study investigates the probability density functions (PDFs) of two-dimensional turbulent velocity fluctuations, Reynolds shear stress (RSS) and conditional RSSs in a threshold channel, obtained by using the Gram-Charlier (GC) series. The GC series expansion has been used up to moments of order four to include the skewness and kurtosis. Experiments were carried out in a curvilinear cross section sand bed channel at threshold condition with uniform sand size of d50 = 0.418 mm. The results show that the PDFs of turbulent velocity fluctuations and RSS calculated theoretically from the GC series expansion match the PDFs obtained from the experimental data. The PDF distribution of conditional RSSs related to ejections and sweeps is well represented by the GC series exponential distribution, except for a slight departure of inward and outward interactions, which may be due to weaker events. This paper offers some new insights into the probabilistic mechanism of sediment transport, which can be helpful in sediment management and the design of curvilinear cross section mobile bed channels.
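A Gram-Charlier series truncated at fourth order, as used in the study, can be sketched as follows. The sample here is synthetic skewed data standing in for velocity fluctuations, not the flume measurements.

```python
import numpy as np

def gram_charlier_pdf(x, skew, kurt):
    """Gram-Charlier A series up to 4th moment, for a standardized variable x."""
    he3 = x**3 - 3 * x            # probabilists' Hermite polynomials
    he4 = x**4 - 6 * x**2 + 3
    phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    return phi * (1 + skew / 6 * he3 + (kurt - 3) / 24 * he4)

# Mildly skewed synthetic sample standing in for velocity-fluctuation data:
rng = np.random.default_rng(1)
u = rng.gamma(9.0, 1.0, 200_000)
u = (u - u.mean()) / u.std()      # standardize
S = (u**3).mean()                 # sample skewness
K = (u**4).mean()                 # sample kurtosis

x = np.linspace(-4, 4, 81)
hist, edges = np.histogram(u, bins=x, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
err_gc = np.abs(gram_charlier_pdf(centers, S, K) - hist).mean()
err_gauss = np.abs(gram_charlier_pdf(centers, 0.0, 3.0) - hist).mean()
print(err_gc, err_gauss)   # the GC series tracks the skewed histogram better
```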
Methods for fitting a parametric probability distribution to most probable number data.
Williams, Michael S; Ebel, Eric D
2012-07-01
Every year hundreds of thousands, if not millions, of samples are collected and analyzed to assess microbial contamination in food and water. The concentration of pathogenic organisms at the end of the production process is low for most commodities, so a highly sensitive screening test is used to determine whether the organism of interest is present in a sample. In some applications, samples that test positive are subjected to quantitation. The most probable number (MPN) technique is a common method to quantify the level of contamination in a sample because it is able to provide estimates at low concentrations. This technique uses a series of dilution count experiments to derive estimates of the concentration of the microorganism of interest. An application for these data is food-safety risk assessment, where the MPN concentration estimates can be fitted to a parametric distribution to summarize the range of potential exposures to the contaminant. Many different methods (e.g., substitution methods, maximum likelihood and regression on order statistics) have been proposed to fit microbial contamination data to a distribution, but the development of these methods rarely considers how the MPN technique influences the choice of distribution function and fitting method. An often overlooked aspect when applying these methods is whether the data represent actual measurements of the average concentration of microorganism per milliliter or the data are real-valued estimates of the average concentration, as is the case with MPN data. In this study, we propose two methods for fitting MPN data to a probability distribution. The first method uses a maximum likelihood estimator that takes average concentration values as the data inputs. The second is a Bayesian latent variable method that uses the counts of the number of positive tubes at each dilution to estimate the parameters of the contamination distribution. The performance of the two fitting methods is compared for two
Two-dimensional liquid chromatography
DEFF Research Database (Denmark)
Græsbøll, Rune
of this thesis is on online comprehensive two-dimensional liquid chromatography (online LC×LC) with reverse phase in both dimensions (online RP×RP). Since online RP×RP has not been attempted before within this research group, a significant part of this thesis consists of knowledge and experience gained...
Phase diagram of epidemic spreading - unimodal vs. bimodal probability distributions
Lancic, Alen; Sikic, Mile; Stefancic, Hrvoje
2009-01-01
The disease spreading on complex networks is studied in SIR model. Simulations on empirical complex networks reveal two specific regimes of disease spreading: local containment and epidemic outbreak. The variables measuring the extent of disease spreading are in general characterized by a bimodal probability distribution. Phase diagrams of disease spreading for empirical complex networks are introduced. A theoretical model of disease spreading on m-ary tree is investigated both analytically and in simulations. It is shown that the model reproduces qualitative features of phase diagrams of disease spreading observed in empirical complex networks. The role of tree-like structure of complex networks in disease spreading is discussed.
Log-concave Probability Distributions: Theory and Statistical Testing
DEFF Research Database (Denmark)
An, Mark Yuing
1996-01-01
This paper studies the broad class of log-concave probability distributions that arise in economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete...... and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test one of the two implications of log-concavity: increasing hazard rates and new-is-better-than-used (NBU) property. The tests for increasing hazard...... rates are based on normalized spacing of the sample order statistics. The tests for NBU property fall into the category of Hoeffding's U-statistics...
Net-proton probability distribution in heavy ion collisions
Braun-Munzinger, P; Karsch, F; Redlich, K; Skokov, V
2011-01-01
We compute net-proton probability distributions in heavy ion collisions within the hadron resonance gas model. The model results are compared with data taken by the STAR Collaboration in Au-Au collisions at sqrt(s_{NN})= 200 GeV for different centralities. We show that in peripheral Au-Au collisions the measured distributions, and the resulting first four moments of net-proton fluctuations, are consistent with results obtained from the hadron resonance gas model. However, data taken in central Au-Au collisions differ from the predictions of the model. The observed deviations cannot be attributed to uncertainties in model parameters. We discuss possible interpretations of the observed deviations.
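In the hadron resonance gas baseline, the net-proton number is commonly modeled as the difference of two independent Poisson variables (a Skellam distribution), whose odd cumulants equal a − b and even cumulants a + b. A quick simulation check, with assumed mean multiplicities:

```python
import numpy as np

rng = np.random.default_rng(3)
a, b = 8.0, 6.0   # mean proton / antiproton multiplicities (illustrative values)

# Net-proton number N_p - N_pbar as a difference of independent Poissons:
net = rng.poisson(a, 1_000_000) - rng.poisson(b, 1_000_000)

c = net - net.mean()
cum = [net.mean(),            # C1
       (c**2).mean(),         # C2 (variance)
       (c**3).mean(),         # C3
       (c**4).mean() - 3 * (c**2).mean() ** 2]  # C4 (fourth cumulant)
print("sampled cumulants:", np.round(cum, 2))
print("Skellam cumulants:", [a - b, a + b, a - b, a + b])
```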
Maximum-entropy probability distributions under Lp-norm constraints
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L_p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L_p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L_p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
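For the unconstrained continuous case with p = 2, the maximum-entropy density with a given L2 norm is the Gaussian, whose differential entropy is linear in the logarithm of that norm. The sketch below checks this numerically against two other unit-variance densities (all choices here are illustrative).

```python
import numpy as np

x = np.linspace(-20, 20, 400_001)
dx = x[1] - x[0]

def entropy(pdf):
    """Differential entropy H = -Integral f ln f dx by simple quadrature."""
    f = pdf(x)
    mask = f > 1e-300                  # 0 * log(0) contributes nothing
    return -np.sum(f[mask] * np.log(f[mask])) * dx

# Three densities with the same L2 norm (second moment = 1):
gauss   = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
laplace = lambda t: np.exp(-np.abs(t) * np.sqrt(2)) / np.sqrt(2)
uniform = lambda t: np.where(np.abs(t) <= np.sqrt(3), 1 / (2 * np.sqrt(3)), 0.0)

for name, pdf in [("gauss", gauss), ("laplace", laplace), ("uniform", uniform)]:
    print(name, round(entropy(pdf), 4))
# The Gaussian attains the maximum, H = 0.5 * ln(2*pi*e) + ln(sigma):
print("theory (gauss):", round(0.5 * np.log(2 * np.pi * np.e), 4))
```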
Landslide Probability Assessment by the Derived Distributions Technique
Muñoz, E.; Ochoa, A.; Martínez, H.
2012-12-01
Landslides are potentially disastrous events that bring along human and economic losses; especially in cities where an accelerated and unorganized growth leads to settlements on steep and potentially unstable areas. Among the main causes of landslides are geological, geomorphological, geotechnical, climatological, hydrological conditions and anthropic intervention. This paper studies landslides detonated by rain, commonly known as "soil-slip", which are characterized by having a superficial failure surface (typically between 1 and 1.5 m deep) parallel to the slope face and being triggered by intense and/or sustained periods of rain. This type of landslide is caused by changes in the pore pressure produced by a decrease in the suction when a humid front enters, as a consequence of the infiltration initiated by rain and ruled by the hydraulic characteristics of the soil. Failure occurs when this front reaches a critical depth and the shear strength of the soil is not enough to guarantee the stability of the mass. Critical rainfall thresholds in combination with a slope stability model are widely used for assessing landslide probability. In this paper we present a model for the estimation of the occurrence of landslides based on the derived distributions technique. Since the works of Eagleson in the 1970s the derived distributions technique has been widely used in hydrology to estimate the probability of occurrence of extreme flows. The model estimates the probability density function (pdf) of the Factor of Safety (FOS) from the statistical behavior of the rainfall process and some slope parameters. The stochastic character of the rainfall is transformed by means of a deterministic failure model into FOS pdf. Exceedance probability and return period estimation is then straightforward. The rainfall process is modeled as a Rectangular Pulses Poisson Process (RPPP) with independent exponential pdf for mean intensity and duration of the storms. The Philip infiltration model
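The derived-distributions idea can be shown on a stripped-down toy version of the model: with Exp(1) storm intensity and duration (normalized units), the exceedance probability of the storm depth I·D has the closed form P(I·D > t) = 2·√t·K1(2·√t). The threshold and units here are illustrative assumptions, not the paper's slope model.

```python
import numpy as np
from scipy.special import k1  # modified Bessel function K_1

rng = np.random.default_rng(11)

# Toy derived-distribution setup: storms have Exp(1) mean intensity I and
# Exp(1) duration D; a "failure" occurs when the storm depth I*D exceeds
# an assumed critical threshold h_crit.
h_crit = 2.0
I = rng.exponential(1.0, 1_000_000)
D = rng.exponential(1.0, 1_000_000)
p_mc = np.mean(I * D > h_crit)

# Derived distribution of the product of two unit exponentials:
#   P(I*D > t) = 2*sqrt(t)*K1(2*sqrt(t))
p_exact = 2 * np.sqrt(h_crit) * k1(2 * np.sqrt(h_crit))
print(p_mc, p_exact)
```

In the full model the deterministic map is the infinite-slope FOS rather than the product I·D, but the workflow (propagate storm statistics through a failure model, then read off exceedance probability) is the same.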
Probability Distribution and Projected Trends of Daily Precipitation in China
Institute of Scientific and Technical Information of China (English)
CAO Li-Ge; ZHONG Jun; SU Bu-Da; ZHAI Jian-Qing; Marco GEMMER
2013-01-01
Based on observed daily precipitation data of 540 stations and 3,839 gridded data from the high-resolution regional climate model COSMO-Climate Limited-area Modeling (CCLM) for 1961–2000, the simulation ability of CCLM on daily precipitation in China is examined, and the variation of the daily precipitation distribution pattern is revealed. By applying probability distribution and extreme value theory to the projected daily precipitation (2011–2050) under the SRES A1B scenario with CCLM, trends of daily precipitation series and daily precipitation extremes are analyzed. Results show that except for the western Qinghai-Tibetan Plateau and South China, distribution patterns of the kurtosis and skewness calculated from the simulated and observed series are consistent with each other; their spatial correlation coefficients are above 0.75. The CCLM can well capture the distribution characteristics of daily precipitation over China. It is projected that in some parts of the Jianghuai region, central-eastern Northeast China and Inner Mongolia, the kurtosis and skewness will increase significantly, and precipitation extremes will increase during 2011–2050. The projected increase of maximum daily rainfall and longest non-precipitation period during the flood season in the aforementioned regions also indicates increasing trends of droughts and floods in the next 40 years.
Two dimensional unstable scar statistics.
Energy Technology Data Exchange (ETDEWEB)
Warne, Larry Kevin; Jorgenson, Roy Eberhardt; Kotulski, Joseph Daniel; Lee, Kelvin S. H. (ITT Industries/AES Los Angeles, CA)
2006-12-01
This report examines the localization of time harmonic high frequency modal fields in two dimensional cavities along periodic paths between opposing sides of the cavity. The cases where these orbits lead to unstable localized modes are known as scars. This paper examines the enhancements for these unstable orbits when the opposing mirrors are both convex and concave. In the latter case the construction includes the treatment of interior foci.
Juday, Richard D.
1992-01-01
Modified vernier scale gives accurate two-dimensional coordinates from maps, drawings, or cathode-ray-tube displays. Movable circular overlay rests on fixed rectangular-grid overlay. Pitch of circles nine-tenths that of grid and, for greatest accuracy, radii of circles large compared with pitch of grid. Scale enables user to interpolate between finest divisions of regularly spaced rule simply by observing which mark on auxiliary vernier rule aligns with mark on primary rule.
Jamming patterns in a two-dimensional hopper
Indian Academy of Sciences (India)
Kiwing To
2005-06-01
We report experimental studies of jamming phenomenon of monodisperse metal disks falling through a two-dimensional hopper when the hopper opening is larger than three times the size of the disks. For each jamming event, the configuration of the arch formed at the hopper opening is studied. The cumulative distribution functions () for hoppers of opening size d are measured. (Here is the horizontal component of the arch vector, which is defined as the displacement vector from the center of the first disk to the center of the last disk in the arch.) We found that the distribution of () can be collapsed into a master curve () = ()() that decays exponentially for > 4. The scaling factor () is a decreasing function of d and is approximately proportional to the jamming probability.
Baer, P.; Mastrandrea, M.
2006-12-01
Simple probabilistic models which attempt to estimate likely transient temperature change from specified CO2 emissions scenarios must make assumptions about at least six uncertain aspects of the causal chain between emissions and temperature: current radiative forcing (including but not limited to aerosols), current land use emissions, carbon sinks, future non-CO2 forcing, ocean heat uptake, and climate sensitivity. Of these, multiple PDFs (probability density functions) have been published for the climate sensitivity, a couple for current forcing and ocean heat uptake, one for future non-CO2 forcing, and none for current land use emissions or carbon cycle uncertainty (which are interdependent). Different assumptions about these parameters, as well as different model structures, will lead to different estimates of likely temperature increase from the same emissions pathway. Thus policymakers will be faced with a range of temperature probability distributions for the same emissions scenarios, each described by a central tendency and spread. Because our conventional understanding of uncertainty and probability requires that a probabilistically defined variable of interest have only a single mean (or median, or modal) value and a well-defined spread, this "multidimensional" uncertainty defies straightforward utilization in policymaking. We suggest that there are no simple solutions to the questions raised. Crucially, we must dispel the notion that there is a "true" probability: probabilities of this type are necessarily subjective, and reasonable people may disagree. Indeed, we suggest that what is at stake is precisely the question, what is it reasonable to believe, and to act as if we believe? As a preliminary suggestion, we demonstrate how the output of a simple probabilistic climate model might be evaluated regarding the reasonableness of the outputs it calculates with different input PDFs. We suggest further that where there is insufficient evidence to clearly
Non-Gaussian probability distributions of solar wind fluctuations
Directory of Open Access Journals (Sweden)
E. Marsch
The probability distributions of field differences ∆x(τ) = x(t+τ) − x(t), where the variable x(t) may denote any solar wind scalar field or vector field component at time t, have been calculated from time series of Helios data obtained in 1976 at heliocentric distances near 0.3 AU. It is found that for comparatively long time lags τ, ranging from a few hours to 1 day, the differences are normally distributed according to a Gaussian. For shorter time lags, of less than ten minutes, significant changes in shape are observed. The distributions are often spikier and narrower than the equivalent Gaussian distribution with the same standard deviation, and they are enhanced for large, reduced for intermediate and enhanced for very small values of ∆x. This result is in accordance with fluid observations and numerical simulations. Hence statistical properties are dominated at small scale τ by large fluctuation amplitudes that are sparsely distributed, which is direct evidence for spatial intermittency of the fluctuations. This is in agreement with results from earlier analyses of the structure functions of ∆x. The non-Gaussian features are differently developed for the various types of fluctuations. The relevance of these observations to the interpretation and understanding of the nature of solar wind magnetohydrodynamic (MHD) turbulence is pointed out, and contact is made with existing theoretical concepts of intermittency in fluid turbulence.
Subspace Learning via Local Probability Distribution for Hyperspectral Image Classification
Directory of Open Access Journals (Sweden)
Huiwu Luo
2015-01-01
The computational procedure for hyperspectral images (HSI) is extremely complex, not only due to the high dimensional information, but also due to the highly correlated data structure. The need for effective processing and analysis of HSI has thus met many difficulties. Dimensionality reduction has proven to be a powerful tool for high dimensional data analysis. Local Fisher's linear discriminant analysis (LFDA) is an effective method to treat HSI processing. In this paper, a novel approach, called PD-LFDA, is proposed to overcome the weakness of LFDA. PD-LFDA emphasizes the probability distribution (PD) in LFDA, where the maximum distance is replaced with local variance for the construction of the weight matrix and the class prior probability is applied to compute the affinity matrix. The proposed approach increases the discriminant ability of the transformed features in low dimensional space. Experimental results on Indian Pines 1992 data indicate that the proposed approach significantly outperforms the traditional alternatives.
Some Useful Distributions and Probabilities for Cellular Networks
Yu, Seung Min
2011-01-01
The cellular network is one of the most useful networks for wireless communications and now universally used. There have been a lot of analytic results about the performance of the mobile user at a specific location such as the cell center or edge. On the other hand, there have been few analytic results about the performance of the mobile user at an arbitrary location. Moreover, to the best of our knowledge, there is no analytic result on the performance of the mobile user at an arbitrary location considering the mobile user density. In this paper, we use the stochastic geometry approach and derive useful distributions and probabilities for cellular networks. Using those, we analyze the performance of the mobile user, e.g., outage probability at an arbitrary location considering the mobile user density. Under some assumptions, those can be expressed by closed form formulas. Our analytic results will provide a fundamental framework for the performance analysis of cellular networks, which will significantly red...
Characterizing the \\lyaf\\ flux probability distribution function using Legendre polynomials
Cieplak, Agnieszka M
2016-01-01
The Lyman-$\\alpha$ forest is a highly non-linear field with a lot of information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polyonomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, $n$-th coefficient can be expressed as a linear combination of the first $n$ moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities.
Cosmological constraints from the convergence 1-point probability distribution
Patton, Kenneth; Honscheid, Klaus; Huff, Eric; Melchior, Peter; Ross, Ashley J; Suchyta, Eric
2016-01-01
We examine the cosmological information available from the 1-point probability distribution (PDF) of the weak-lensing convergence field, utilizing fast L-PICOLA simulations and a Fisher analysis. We find competitive constraints in the $\\Omega_m$-$\\sigma_8$ plane from the convergence PDF with $188\\ arcmin^2$ pixels compared to the cosmic shear power spectrum with an equivalent number of modes ($\\ell < 886$). The convergence PDF also partially breaks the degeneracy cosmic shear exhibits in that parameter space. A joint analysis of the convergence PDF and shear 2-point function also reduces the impact of shape measurement systematics, to which the PDF is less susceptible, and improves the total figure of merit by a factor of $2-3$, depending on the level of systematics. Finally, we present a correction factor necessary for calculating the unbiased Fisher information from finite differences using a limited number of cosmological simulations.
A probability distribution approach to synthetic turbulence time series
Sinhuber, Michael; Bodenschatz, Eberhard; Wilczek, Michael
2016-11-01
The statistical features of turbulence can be described in terms of multi-point probability density functions (PDFs). The complexity of these statistical objects increases rapidly with the number of points. This raises the question of how much information has to be incorporated into statistical models of turbulence to capture essential features such as inertial-range scaling and intermittency. Using high Reynolds number hot-wire data obtained at the Variable Density Turbulence Tunnel at the Max Planck Institute for Dynamics and Self-Organization, we establish a PDF-based approach on generating synthetic time series that reproduce those features. To do this, we measure three-point conditional PDFs from the experimental data and use an acceptance-rejection method to draw random velocities from this distribution to produce synthetic time series. Analyzing these synthetic time series, we find that time series based on even low-dimensional conditional PDFs already capture some essential features of real turbulent flows.
Probability distributions for one component equations with multiplicative noise
Deutsch, J M
1993-01-01
Systems described by equations involving both multiplicative and additive noise are common in nature. Examples include convection of a passive scalar field, polymers in turbulent flow, and noise in dye lasers. In this paper the one-component version of this problem is studied. The steady-state probability distribution is classified into two different types of behavior. One class has power-law tails and the other has the form of an exponential of a power law. The value of the power-law exponent is determined analytically for models having colored Gaussian noise. It is found to depend only on the power spectrum of the noise at zero frequency. When non-Gaussian noise is considered, it is shown that stretched-exponential tails are possible. An intuitive understanding of the results is developed, making use of the Lyapunov exponents for these systems.
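A minimal discrete-time caricature of such systems (a Kesten-type recursion chosen for illustration; it is not one of the models analyzed in the paper) shows how multiplicative plus additive noise produces heavy tails. With lognormal A_k = exp(N(mu, sigma)) and E[ln A_k] = mu < 0, the stationary tail is a power law with exponent solving E[A^alpha] = 1, which for this lognormal choice gives alpha = -2*mu/sigma**2:

```python
import math
import random

def kesten_series(n, mu=-0.05, sigma=0.3, b=1.0, seed=1):
    """Iterate x_{k+1} = A_k x_k + B_k with multiplicative lognormal noise
    A_k = exp(N(mu, sigma)) and additive Gaussian noise B_k ~ N(0, b).
    The process is stationary when E[ln A_k] = mu < 0."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = math.exp(rng.gauss(mu, sigma)) * x + rng.gauss(0.0, b)
        out.append(x)
    return out

xs = kesten_series(20000)  # tail exponent here is about 0.1/0.09 ~ 1.1
```

Rare large excursions, driven by runs of multiplicative factors above one, dominate the extremes of the series, which is the intuitive Lyapunov-exponent picture the abstract alludes to.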
Gesture Recognition Based on the Probability Distribution of Arm Trajectories
Wan, Khairunizam; Sawada, Hideyuki
The use of human motions for the interaction between humans and computers is becoming an attractive alternative to verbal media, especially through the visual interpretation of human body motion. In particular, hand gestures serve as a non-verbal medium through which humans communicate with machines. This paper introduces a 3D motion measurement of the human upper body for the purpose of gesture recognition, based on the probability distribution of arm trajectories. In this study, by examining the characteristics of the arm trajectories given by a signer, motion features are selected and classified by using a fuzzy technique. Experimental results show that the use of features extracted from arm trajectories works effectively for the recognition of dynamic human gestures, and gives good performance in classifying various gesture patterns.
Seismic pulse propagation with constant Q and stable probability distributions
Directory of Open Access Journals (Sweden)
M. Tomirotti
1997-06-01
The one-dimensional propagation of seismic waves with constant Q is shown to be governed by an evolution equation of fractional order in time, which interpolates the heat equation and the wave equation. The fundamental solutions for the Cauchy and Signalling problems are expressed in terms of entire functions (of Wright type) in the similarity variable, and their behaviours turn out to be intermediate between those for the limiting cases of a perfectly viscous fluid and a perfectly elastic solid. In view of the small dissipation exhibited by seismic pulses, the nearly elastic limit is considered. Furthermore, the fundamental solutions for the Cauchy and Signalling problems are shown to be related to stable probability distributions with an index of stability determined by the order of the fractional time derivative in the evolution equation.
Seismic pulse propagation with constant Q and stable probability distributions
Mainardi, Francesco
2010-01-01
The one-dimensional propagation of seismic waves with constant Q is shown to be governed by an evolution equation of fractional order in time, which interpolates the heat equation and the wave equation. The fundamental solutions for the Cauchy and Signalling problems are expressed in terms of entire functions (of Wright type) in the similarity variable and their behaviours turn out to be intermediate between those for the limiting cases of a perfectly viscous fluid and a perfectly elastic solid. In view of the small dissipation exhibited by the seismic pulses, the nearly elastic limit is considered. Furthermore, the fundamental solutions for the Cauchy and Signalling problems are shown to be related to stable probability distributions with index of stability determined by the order of the fractional time derivative in the evolution equation.
Institute of Scientific and Technical Information of China (English)
Anonymous
2009-01-01
[Example sentences] 1. He can probably tell us the truth. 2. Will it rain this afternoon? Probably. [Explanation] Used as an adverb meaning "probably, perhaps"; it expresses a strong likelihood, usually an affirmative inference or judgment based on the present situation.
DEFF Research Database (Denmark)
Martiny, Christian; Abu-Samha, Mahmoud; Madsen, Lars Bojer
2010-01-01
We solve the three-dimensional time-dependent Schrödinger equation for a few-cycle circularly polarized femtosecond laser pulse that interacts with an oriented target exemplified by an argon atom, initially in a 3px or 3py state. The photoelectron momentum distributions show distinct signatures o...
Two-dimensional liquid chromatography
DEFF Research Database (Denmark)
Græsbøll, Rune
Two-dimensional liquid chromatography has received increasing interest due to the rise in demand for analysis of complex chemical mixtures. Separation of complex mixtures is hard to achieve as a simple consequence of the sheer number of analytes, as these samples might contain hundreds or even...... dimensions. As a consequence of the conclusions made within this thesis, the research group has, for the time being, decided against further development of online LC×LC systems, since it was not deemed ideal for the intended application, the analysis of the polar fraction of oil. Trap-and...
Palma, G.; Niedermayer, F.; Rácz, Z.; Riveros, A.; Zambrano, D.
2016-08-01
The zero-temperature, classical XY model on an L×L square lattice is studied by exploring the distribution Φ_L(y) of its centered and normalized magnetization y in the large-L limit. An integral representation of the cumulant generating function, known from earlier works, is used for the numerical evaluation of Φ_L(y), and the limit distribution Φ_{L→∞}(y) = Φ_0(y) is obtained with high precision. The two leading finite-size corrections Φ_L(y) − Φ_0(y) ≈ a_1(L)Φ_1(y) + a_2(L)Φ_2(y) are also extracted both from numerics and from analytic calculations. We find that the amplitude a_1(L) scales as ln(L/L_0)/L² and the shape correction function Φ_1(y) can be expressed through the low-order derivatives of the limit distribution, Φ_1(y) = [yΦ_0(y) + Φ_0′(y)]′. Thus, Φ_1(y) carries the same universal features as the limit distribution and can be used for consistency checks of universality claims based on finite-size systems. The second finite-size correction has an amplitude a_2(L) ∝ 1/L², and one finds that a_2Φ_2(y) ≪ a_1Φ_1(y) already for small system sizes (L > 10). We illustrate the feasibility of observing the calculated finite-size corrections by performing simulations of the XY model at low temperatures, including T = 0.
Xia, Huihui; Xu, Zhenyu; Kan, Ruifeng; He, Yabai; Liu, Jianguo; Zhang, Guangle
2015-09-01
The principle of gas temperature and concentration measurement based on Tunable Diode Laser Absorption Spectroscopy (TDLAS) is introduced. Combining Computed Tomography (CT) with TDLAS, herein referred to as Tunable Diode Laser Absorption Tomography (TDLAT), reconstructs temperature and concentration distributions that are assumed to be Gaussian or paraboloid functions. Water absorption lines at 7153.722 cm-1, 7153.748 cm-1, and 7154.354 cm-1 are selected to measure temperature by means of the two-line technique. The Radon transform is used to calculate projections along different paths for reconstructing the temperature distribution with the filtered backprojection algorithm. With a general normalization process, the water vapor concentration distribution can be obtained simultaneously. The reconstruction results agree well with the original model. In consideration of laboratory verification and experimental conditions, a TDLAT dataset consisting of 13 projection angles and 11 parallel rays at each angle is discussed in this article, yielding a distribution map with a resolution of 20 × 20. Although the reconstructed values at the edge deviate a little from the original parameters, this method achieves a relatively satisfactory outcome in general. The reconstruction error roughly increases with decreasing numbers of projection angles and parallel rays; additionally, the reconstruction accuracy depends more on the number of parallel rays at each angle than on the number of projection angles. Appropriate grid partition is also important in reconstruction studies; the optimal grid partition is 30 × 30 or near this magnitude when the system contains in total 18 projection angles and 27 parallel rays at each angle. This work proposes a feasible formula for reconstruction research with a small number of projections and rays, theoretically laying a foundation for experimental validation in the future.
Insights from probability distribution functions of intensity maps
Breysse, Patrick C; Behroozi, Peter S; Dai, Liang; Kamionkowski, Marc
2016-01-01
In the next few years, intensity-mapping surveys that target lines such as CO, Ly$\alpha$, and CII stand to provide powerful probes of high-redshift astrophysics. However, these line emissions are highly non-Gaussian, and so the typical power-spectrum methods used to study these maps will leave out a significant amount of information. We propose a new statistic, the probability distribution of voxel intensities, which can access this extra information. Using a model of a CO intensity map at $z\sim3$ as an example, we demonstrate that this voxel intensity distribution (VID) provides substantial constraining power beyond what is obtainable from the power spectrum alone. We find that a future survey similar to the planned COMAP Full experiment could constrain the CO luminosity function to order $\sim10\%$. We also explore the effects of contamination from continuum emission, interloper lines, and gravitational lensing on our constraints and find that the VID statistic retains significant constraining power even ...
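The VID statistic itself is simple to compute: it is the histogram of map voxel intensities. A sketch with a hypothetical lognormal mock map (not the CO emission model of the paper):

```python
import math
import random

def voxel_intensity_distribution(voxels, edges):
    """Histogram of voxel intensities: counts per bin (the VID statistic)."""
    counts = [0] * (len(edges) - 1)
    for v in voxels:
        for i in range(len(counts)):
            if edges[i] <= v < edges[i + 1]:
                counts[i] += 1
                break
    return counts

rng = random.Random(0)
mock_map = [math.exp(rng.gauss(0.0, 1.0)) for _ in range(5000)]  # lognormal mock
edges = [0.0, 0.5, 1.0, 2.0, 4.0, float("inf")]
vid = voxel_intensity_distribution(mock_map, edges)
```

Constraints then come from comparing the binned counts against model predictions, which is where the VID picks up the non-Gaussian information a power spectrum misses.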
Extreme paths in oriented two-dimensional percolation
Andjel, E. D.; Gray, L. F.
2016-01-01
A useful result about leftmost and rightmost paths in two-dimensional bond percolation is proved. This result was introduced without proof in \cite{G} in the context of the contact process in continuous time. As discussed here, it also holds for several related models, including the discrete-time contact process and two-dimensional site percolation. Among the consequences are a natural monotonicity in the probability of percolation between different sites and a somewha...
Two-dimensional capillary origami
Energy Technology Data Exchange (ETDEWEB)
Brubaker, N.D., E-mail: nbrubaker@math.arizona.edu; Lega, J., E-mail: lega@math.arizona.edu
2016-01-08
We describe a global approach to the problem of capillary origami that captures all unfolded equilibrium configurations in the two-dimensional setting where the drop is not required to fully wet the flexible plate. We provide bifurcation diagrams showing the level of encapsulation of each equilibrium configuration as a function of the volume of liquid that it contains, as well as plots representing the energy of each equilibrium branch. These diagrams indicate at what volume level the liquid drop ceases to be attached to the endpoints of the plate, which depends on the value of the contact angle. As in the case of pinned contact points, three different parameter regimes are identified, one of which predicts instantaneous encapsulation for small initial volumes of liquid. - Highlights: • Full solution set of the two-dimensional capillary origami problem. • Fluid does not necessarily wet the entire plate. • Global energy approach provides exact differential equations satisfied by minimizers. • Bifurcation diagrams highlight three different regimes. • Conditions for spontaneous encapsulation are identified.
Universality class of the two-dimensional site-diluted Ising model.
Martins, P H L; Plascak, J A
2007-07-01
In this work, we evaluate the probability distribution function of the order parameter for the two-dimensional site-diluted Ising model. Extensive Monte Carlo simulations have been performed for different spin concentrations p (0.70 ≤ p ≤ 1.00). The universality class of the diluted Ising model seems to be independent of the amount of dilution. Logarithmic corrections of the finite-size critical temperature behavior of the model can also be inferred even for such small lattices.
Simulations of the Hadamard Variance: Probability Distributions and Confidence Intervals.
Ashby, Neil; Patla, Bijunath
2016-04-01
Power-law noise in clocks and oscillators can be simulated by Fourier transforming a modified spectrum of white phase noise. This approach has been applied successfully to simulation of the Allan variance and the modified Allan variance in both overlapping and nonoverlapping forms. When significant frequency drift is present in an oscillator, at large sampling times the Allan variance overestimates the intrinsic noise, while the Hadamard variance is insensitive to frequency drift. The simulation method is extended in this paper to predict the Hadamard variance for the common types of power-law noise. Symmetric real matrices are introduced whose traces, the sums of their eigenvalues, are equal to the Hadamard variances, in overlapping or nonoverlapping forms, as well as for the corresponding forms of the modified Hadamard variance. We show that the standard relations between spectral densities and Hadamard variance are obtained with this method. The matrix eigenvalues determine probability distributions for observing a variance at an arbitrary value of the sampling interval τ, and hence for estimating confidence in the measurements.
Two-dimensional capillary origami
Brubaker, N. D.; Lega, J.
2016-01-01
We describe a global approach to the problem of capillary origami that captures all unfolded equilibrium configurations in the two-dimensional setting where the drop is not required to fully wet the flexible plate. We provide bifurcation diagrams showing the level of encapsulation of each equilibrium configuration as a function of the volume of liquid that it contains, as well as plots representing the energy of each equilibrium branch. These diagrams indicate at what volume level the liquid drop ceases to be attached to the endpoints of the plate, which depends on the value of the contact angle. As in the case of pinned contact points, three different parameter regimes are identified, one of which predicts instantaneous encapsulation for small initial volumes of liquid.
Two-dimensional cubic convolution.
Reichenbach, Stephen E; Geng, Frank
2003-01-01
The paper develops two-dimensional (2D), nonseparable, piecewise cubic convolution (PCC) for image interpolation. Traditionally, PCC has been implemented based on a one-dimensional (1D) derivation with a separable generalization to two dimensions. However, typical scenes and imaging systems are not separable, so the traditional approach is suboptimal. We develop a closed-form derivation for a two-parameter, 2D PCC kernel with support [-2,2] x [-2,2] that is constrained for continuity, smoothness, symmetry, and flat-field response. Our analyses, using several image models, including Markov random fields, demonstrate that the 2D PCC yields small improvements in interpolation fidelity over the traditional, separable approach. The constraints on the derivation can be relaxed to provide greater flexibility and performance.
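For contrast with the paper's nonseparable 2D kernel, the traditional separable approach builds on the classical 1D cubic-convolution (Keys) kernel, with the 2D kernel formed as u(x)u(y). A sketch with a = -0.5, the common third-order-accurate choice:

```python
def keys_kernel(x, a=-0.5):
    """Classical 1D cubic-convolution (Keys) kernel with free parameter a;
    u(0) = 1 and u(k) = 0 at all other integers, so samples are reproduced."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x ** 3 - (a + 3) * x ** 2 + 1
    if x < 2:
        return a * x ** 3 - 5 * a * x ** 2 + 8 * a * x - 4 * a
    return 0.0

def interpolate(samples, t, a=-0.5):
    """Interpolate uniformly spaced samples at fractional position t,
    using the four nearest samples."""
    i = int(t)
    return sum(samples[i + k] * keys_kernel(t - (i + k), a)
               for k in (-1, 0, 1, 2) if 0 <= i + k < len(samples))
```

Because the kernel interpolates (it is 1 at 0 and 0 at the other integer offsets) and reproduces low-order polynomials, a linear ramp is recovered exactly at fractional positions; the paper's point is that constraining a nonseparable 2D kernel directly does slightly better on real imagery.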
Flow of foams in two-dimensional disordered porous media
Dollet, Benjamin; Geraud, Baudouin; Jones, Sian A.; Meheust, Yves; Cantat, Isabelle; Institut de Physique de Rennes Team; Geosciences Rennes Team
2015-11-01
Liquid foams are a yield stress fluid with elastic properties. When a foam flow is confined by solid walls, viscous dissipation arises from the contact zones between soap films and walls, giving very peculiar friction laws. In particular, foams potentially invade narrow pores much more efficiently than Newtonian fluids, which is of great importance for enhanced oil recovery. To quantify this effect, we study experimentally flows of foam in a model two-dimensional porous medium, consisting of an assembly of circular obstacles placed randomly in a Hele-Shaw cell, and use image analysis to quantify foam flow at the local scale. We show that bubbles split as they flow through the porous medium, by a mechanism of film pinching during contact with an obstacle, yielding two daughter bubbles per split bubble. We quantify the evolution of the bubble size distribution as a function of the distance along the porous medium, the splitting probability as a function of bubble size, and the probability distribution function of the daughter bubbles. We propose an evolution equation to model this splitting phenomenon and compare it successfully to the experiments, showing how at long distance, the porous medium itself dictates the size distribution of the foam.
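A toy version of the splitting dynamics can illustrate the kind of evolution equation proposed; the splitting probability and daughter-size law below are made up for illustration, not the measured ones:

```python
import random

def advect_and_split(sizes, steps, c=0.5, rng=None):
    """Toy fragmentation model: at each obstacle row, a bubble of size s
    splits with probability min(1, c*s) into two daughters sharing its
    volume; otherwise it passes through unchanged."""
    rng = rng or random.Random(0)
    for _ in range(steps):
        next_gen = []
        for s in sizes:
            if rng.random() < min(1.0, c * s):
                f = rng.uniform(0.2, 0.8)  # daughter volume fraction
                next_gen.extend([f * s, (1 - f) * s])
            else:
                next_gen.append(s)
        sizes = next_gen
    return sizes
```

Total volume is conserved while the bubble count grows and the mean size shrinks with distance, mirroring the experimental observation that the porous medium itself sets the downstream size distribution.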
Performance Probability Distributions for Sediment Control Best Management Practices
Ferrell, L.; Beighley, R.; Walsh, K.
2007-12-01
Controlling soil erosion and sediment transport can be a significant challenge during the construction process due to the extent and conditions of bare, disturbed soils. Best Management Practices (BMPs) are used as the framework for the design of sediment discharge prevention systems in stormwater pollution prevention plans, which are typically required for construction sites. This research focuses on commonly used BMP systems for perimeter control of sediment export: silt fences and fiber rolls. Although these systems are widely used, the physical and engineering parameters describing their performance are not well understood. Performance expectations are based on manufacturer results, but due to the dynamic conditions that exist on a construction site, performance expectations are not always achievable in the field. Based on experimental results, product performance is shown to be highly variable. Experiments using the same installation procedures show inconsistent sediment removal performances, ranging from more than 85 percent to zero. The goal of this research is to improve the determination of off-site sediment yield based on probabilistic performance results of perimeter control BMPs. BMPs are evaluated in the Soil Erosion Research Laboratory (SERL) in the Civil and Environmental Engineering department at San Diego State University. SERL experiments are performed on a 3-m by 10-m tilting soil bed with a soil depth of 0.5 meters and a slope of 33 percent. The simulated storm event consists of 17 mm/hr for 20 minutes followed by 51 mm/hr for 30 minutes. The storm event is based on an ASTM design storm intended to simulate BMP failures. BMP performance is assessed based on experiments where BMPs are installed per manufacturer specifications, less than optimal installations, and no treatment conditions. Preliminary results from 30 experiments are presented and used to develop probability distributions for BMP sediment removal efficiencies. The results are then combined with
Two Dimensional Hydrodynamic Analysis of the Moose Creek Floodway
2012-09-01
ERDC/CHL TR-12-20, Coastal and Hydraulics Laboratory, September 2012; distribution is unlimited. Two Dimensional Hydrodynamic Analysis of the Moose Creek Floodway, Stephen H. Scott, Jeremy A... A two-dimensional Adaptive Hydraulics (AdH) hydrodynamic model was developed to simulate the Moose Creek Floodway. The Floodway is located
Two-Dimensional Distributed Velocity Collision Avoidance
2014-02-11
trigonometry. For convex polygon agents, the tangents are found by iterating over each point, calculating the z-component of the cross product between a... the modifications to the basic VO to favor the source bot's current velocity (i.e., encourage the bot to change course as little as possible). To... the source agent on a collision course. However, if ignore factors are used, then A2 is more important (i.e., has a lower ignore factor), and so the
Classifying Two-dimensional Hyporeductive Triple Algebras
Issa, A Nourou
2010-01-01
Two-dimensional real hyporeductive triple algebras (h.t.a.) are investigated. A classification of such algebras is presented. As a consequence, a classification of two-dimensional real Lie triple algebras (i.e. generalized Lie triple systems) and two-dimensional real Bol algebras is given.
Two-dimensional function photonic crystals
Wu, Xiang-Yao; Liu, Xiao-Jing; Liang, Yu
2016-01-01
In this paper, we first propose two-dimensional function photonic crystals, in which the dielectric constants of the medium columns are functions of the space coordinate $\vec{r}$; this differs from conventional two-dimensional photonic crystals, whose medium columns have constant dielectric constants. We find that the band gaps of two-dimensional function photonic crystals differ from those of conventional two-dimensional photonic crystals, and that changing the functional form of the dielectric constants changes the band-gap structure, so that appropriate band-gap structures can be designed with two-dimensional function photonic crystals.
The Probability Distribution of Inter-car Spacings
Xian, Jin Guo; Han, Dong
In this paper, the cellular automaton model with a Fukui-Ishibashi-type acceleration rule is used to study the inter-car spacing distribution for traffic flow. The method used in complex network analysis is applied to study the spacing distribution. By theoretical analysis, we obtain the result that the distribution of inter-car spacings follows a power law when the vehicle density is low and the spacing is not large, while, when the vehicle density is high or the spacing is large, the distribution can be described by an exponential distribution. Moreover, the numerical simulations support the theoretical result.
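A minimal sketch of a Fukui-Ishibashi-type CA on a ring (deterministic variant, parameters chosen arbitrarily for illustration), from which spacing statistics like those analyzed above can be collected:

```python
import random

def fukui_ishibashi_step(pos, length, vmax):
    """One parallel update of the Fukui-Ishibashi CA on a ring of `length`
    cells: each car advances min(empty cells ahead, vmax)."""
    pos = sorted(pos)
    n = len(pos)
    new = []
    for i in range(n):
        ahead = pos[(i + 1) % n]
        empty = ((ahead - pos[i]) % length) - 1  # empty cells to next car
        new.append((pos[i] + min(empty, vmax)) % length)
    return new

def spacings(pos, length):
    """Inter-car spacings: empty cells ahead of each car on the ring."""
    pos = sorted(pos)
    n = len(pos)
    return [((pos[(i + 1) % n] - pos[i]) % length) - 1 for i in range(n)]

rng = random.Random(3)
cars = rng.sample(range(200), 40)  # 40 cars on a 200-cell ring
for _ in range(100):
    cars = fukui_ishibashi_step(cars, 200, vmax=5)
gaps = spacings(cars, 200)
```

Histogramming `gaps` over many configurations and densities is then a direct numerical check of the power-law versus exponential regimes the abstract derives.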
Chaotic dynamics for two-dimensional tent maps
Pumariño, Antonio; Ángel Rodríguez, José; Carles Tatjer, Joan; Vigil, Enrique
2015-02-01
For a two-dimensional extension of the classical one-dimensional family of tent maps, we prove the existence of an open set of parameters for which the respective transformation presents a strange attractor with two positive Lyapunov exponents. Moreover, periodic orbits are dense on this attractor and the attractor supports a unique ergodic invariant probability measure.
Conant, Darcy Lynn
2013-01-01
Stochastic understanding of probability distribution undergirds development of conceptual connections between probability and statistics and supports development of a principled understanding of statistical inference. This study investigated the impact of an instructional course intervention designed to support development of stochastic…
Measuring Robustness of Timetables in Stations using a Probability Distribution
DEFF Research Database (Denmark)
Jensen, Lars Wittrup; Landex, Alex
of a station based on the plan of operation and the minimum headway times. However, none of the above methods takes a given timetable into account when the complexity of the station is calculated. E.g., two timetable candidates are given following the same plan of operation in a station; one will be more...... vulnerable to delays (less robust) while the other will be less vulnerable (more robust), but this cannot be measured by the above methods. In the light of this, the article will describe a new method where the complexity of a given station with a given timetable can be calculated based on a probability...... delays caused by interdependencies, and result in a more robust operation. Currently, three methods to calculate the complexity of a station exist: 1. Complexity of a station based on the track layout 2. Complexity of a station based on the probability of a conflict using a plan of operation 3. Complexity
Energy Technology Data Exchange (ETDEWEB)
Xu, Lijun, E-mail: lijunxu@buaa.edu.cn; Liu, Chang; Jing, Wenyang; Cao, Zhang [School of Instrument Science and Opto-Electronic Engineering, Beihang University, Beijing 100191 (China); Ministry of Education’s Key Laboratory of Precision Opto-Mechatronics Technology, Beijing 100191 (China); Xue, Xin; Lin, Yuzhen [School of Energy and Power Engineering, Beihang University, Beijing 100191 (China)
2016-01-15
To monitor two-dimensional (2D) distributions of temperature and H{sub 2}O mole fraction, an on-line tomography system based on tunable diode laser absorption spectroscopy (TDLAS) was developed. To the best of the authors’ knowledge, this is the first report on a multi-view TDLAS-based system for simultaneous tomographic visualization of temperature and H{sub 2}O mole fraction in real time. The system consists of two distributed feedback (DFB) laser diodes, a tomographic sensor, electronic circuits, and a computer. The central frequencies of the two DFB laser diodes are at 7444.36 cm{sup −1} (1343.3 nm) and 7185.6 cm{sup −1} (1391.67 nm), respectively. The tomographic sensor is used to generate fan-beam illumination from five views and to produce 60 ray measurements. The electronic circuits not only provide stable temperature and precise current controlling signals for the laser diodes but also can accurately sample the transmitted laser intensities and extract integrated absorbances in real time. Finally, the integrated absorbances are transferred to the computer, in which the 2D distributions of temperature and H{sub 2}O mole fraction are reconstructed by using a modified Landweber algorithm. In the experiments, the TDLAS-based tomography system was validated by using asymmetric premixed flames with fixed and time-varying equivalence ratios, respectively. The results demonstrate that the system is able to reconstruct the profiles of the 2D distributions of temperature and H{sub 2}O mole fraction of the flame and effectively capture the dynamics of the combustion process, which exhibits good potential for flame monitoring and on-line combustion diagnosis.
Wigner crystallization of electrons in deep traps in a two-dimensional dielectric
Energy Technology Data Exchange (ETDEWEB)
Shaimeev, S. S., E-mail: shaimeev@isp.nsc.ru; Gritsenko, V. A. [Institute of Semiconductor Physics (Russian Federation)
2011-03-15
A two-dimensional model is used to examine the spatial distribution of electrons in deep traps in a two-dimensional dielectric. When the trap concentration is much higher than the trapped electron concentration, Coulomb repulsion leads to the formation of a two-dimensional quasi-periodic hexagonal lattice of localized electrons (Wigner glass).
Hadamard States and Two-dimensional Gravity
Salehi, H
2001-01-01
We have used a two-dimensional analog of the Hadamard state-condition to study the local constraints on the two-point function of a linear quantum field conformally coupled to a two-dimensional gravitational background. We develop a dynamical model in which the determination of the state of the quantum field is essentially related to the determination of a conformal frame. A particular conformal frame is then introduced in which a two-dimensional gravitational equation is established.
Topological defects in two-dimensional crystals
Chen, Yong; Qi, Wei-Kai
2008-01-01
By using topological current theory, we study the inner topological structure of topological defects in two-dimensional (2D) crystals. We find that there are two elementary topological currents of point defects in two-dimensional crystals, one for dislocations and the other for disclinations. The topological quantization and evolution of topological defects in two-dimensional crystals are discussed. Finally, we compare our theory with Brownian-dynamics simulations in 2D Yukawa systems.
Two-dimensional hazard estimation for longevity analysis
DEFF Research Database (Denmark)
Fledelius, Peter; Guillen, M.; Nielsen, J.P.
2004-01-01
We investigate developments in Danish mortality based on data from 1974-1998, working in a two-dimensional model with chronological time and age as the two dimensions. The analyses are done with non-parametric kernel hazard estimation techniques. The only assumption is that the mortality surface...... the two-dimensional mortality surface. Furthermore we look at aggregated synthetic population metrics as 'population life expectancy' and 'population survival probability'. For Danish women these metrics indicate decreasing mortality with respect to chronological time. The metrics can not directly be used...... for analysis of economic implications arising from mortality changes....
Energy Technology Data Exchange (ETDEWEB)
Chacko, M; Aldoohan, S; Sonnad, J; Ahmad, S; Ali, I [University of Oklahoma Health Science Center, Oklahoma City, OK (United States)
2015-06-15
Purpose: To quantitatively evaluate dose distributions from helical, axial, and cone-beam CT clinical imaging techniques by measurement using a two-dimensional (2D) diode-array detector. Methods: 2D dose distributions from selected clinical protocols used for axial, helical, and cone-beam CT imaging were measured using a diode-array detector (MapCheck2). The MapCheck2 is composed of solid-state diode detectors that are arranged in horizontal and vertical lines with a spacing of 10 mm. A GE LightSpeed CT simulator was used to acquire axial and helical CT images, and a kV on-board imager integrated with a Varian TrueBeam-STx machine was used to acquire cone-beam CT (CBCT) images. Results: The dose distributions from axial, helical, and cone-beam CT were non-uniform over the region of interest, with strong spatial and angular dependence. In axial CT, a large dose gradient was measured, with dose decreasing from the lateral sides to the middle of the phantom due to the large superficial dose at the sides of the phantom compared with the larger beam attenuation at the center. The dose decreased in the superior and inferior regions in comparison to the center of the phantom in axial CT. An asymmetry was found between the right-left and superior-inferior sides of the phantom, which is possibly due to angular dependence in the dose distributions. The dose level and distribution varied from one imaging technique to another. For the pelvis technique, axial CT deposited a mean dose of 3.67 cGy, helical CT deposited a mean dose of 1.59 cGy, and CBCT deposited a mean dose of 1.62 cGy. Conclusions: MapCheck2 provides a robust tool to directly measure 2D dose distributions for CT imaging with high-spatial-resolution detectors, in comparison with an ionization chamber that provides a single-point measurement or an average dose to the phantom. The dose distributions measured with MapCheck2 account for medium heterogeneity and can represent patient-specific dose.
Mori, Kazuyoshi; Ogasawara, Hanako; Tsuchiya, Takenobu; Endoh, Nobuyuki
2016-07-01
An aspherical lens with an aperture diameter of 1.0 m has been designed and fabricated to develop a prototype system for ambient noise imaging (ANI). A sea trial of silent target detection using the prototype ANI system was conducted under only natural ocean ambient noise at Uchiura Bay in November 2010. It was verified that targets are successfully detected under natural ocean ambient noise, mainly generated by snapping shrimps. Recently, we have built a second prototype ANI system using an acoustic lens with a two-dimensional (2D) receiver array with 127 elements, corresponding to a field of view (FOV) spanning 15° horizontally by 9° vertically. In this study, we investigated the effects of the direction of the FOV and the spatial noise distribution on the 2D target image obtained by ANI. Here, the noise sources in front of the target are called "front light", and those at the rear of the target are called "back light". The second sea trial was conducted to image targets arranged in the FOV and to measure the positions of noise sources at Uchiura Bay on November 10-14, 2014. For front light, the pixel values in the on-target directions were greater than those in other directions owing to the dominant target scattering. Conversely, for back light, the pixel values in the on-target directions were lower than those in other directions owing to the dominant direct noise, producing a "silhouette" effect.
Interacting discrete Markov processes with power-law probability distributions
Ridley, Kevin D.; Jakeman, Eric
2017-09-01
During recent years there has been growing interest in the occurrence of long-tailed distributions, also known as heavy-tailed or fat-tailed distributions, which can exhibit power-law behaviour and often characterise physical systems that undergo very large fluctuations. In this paper we show that the interaction between two discrete Markov processes naturally generates a time-series characterised by such a distribution. This possibility is first demonstrated by numerical simulation and then confirmed by a mathematical analysis that enables the parameter range over which the power-law occurs to be quantified. The results are supported by comparison of numerical results with theoretical predictions and general conclusions are drawn regarding mechanisms that can cause this behaviour.
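The interaction mechanism described above can be sketched with a toy simulation (illustrative parameters only, not the authors' specific model): a slow two-state "telegraph" chain gates the death rate of a birth-death chain, so long "off" phases produce occasional large excursions of the population and hence a heavy-tailed time series.

```python
import random

def simulate(n_steps=20000, seed=1):
    """Toy interaction of two discrete Markov processes: a slowly switching
    two-state telegraph process gates the death events of a birth-death
    population process. All rates are illustrative choices."""
    rng = random.Random(seed)
    gate, pop, series = 0, 0, []
    for _ in range(n_steps):
        # telegraph process: slow switching between states 0 and 1
        if rng.random() < 0.01:
            gate = 1 - gate
        # birth-death process: constant birth rate, death only while gate == 1
        if rng.random() < 0.2:
            pop += 1
        if gate == 1 and pop > 0 and rng.random() < 0.4:
            pop -= 1
        series.append(pop)
    return series

series = simulate()
```

A log-log histogram of `series` (or of excursion sizes) is the natural diagnostic for power-law behaviour in such a sketch.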
Martingale Couplings and Bounds on the Tails of Probability Distributions
Luh, Kyle J
2011-01-01
Hoeffding has shown that tail bounds on the distribution for sampling from a finite population with replacement also apply to the corresponding cases of sampling without replacement. (A special case of this result is that binomial tail bounds apply to the corresponding hypergeometric tails.) We give a new proof of Hoeffding's result by constructing a martingale coupling between the sampling distributions. This construction is given by an explicit combinatorial procedure involving balls and urns. We then apply this construction to create martingale couplings between other pairs of sampling distributions, both without replacement and with "surreplacement" (that is, sampling in which not only is the sampled individual replaced, but some number of "copies" of that individual are added to the population).
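Hoeffding's comparison can be checked numerically: sampling without replacement is dominated in the convex order, so the moment generating function of the hypergeometric count never exceeds that of the corresponding binomial, which is exactly what makes Chernoff-style tail bounds transfer. A minimal sketch (the population and sample sizes are arbitrary choices):

```python
from math import comb, exp

def hyper_pmf(N, K, n):
    """Hypergeometric pmf: n draws without replacement from a population
    of N with K successes."""
    return {k: comb(K, k) * comb(N - K, n - k) / comb(N, n)
            for k in range(max(0, n - (N - K)), min(n, K) + 1)}

def binom_pmf(n, p):
    """Binomial pmf: n draws with replacement, success probability p."""
    return {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}

def mgf(pmf, lam):
    """Moment generating function E[exp(lam * X)] of a discrete pmf."""
    return sum(p * exp(lam * k) for k, p in pmf.items())

N, K, n = 20, 8, 10
h = hyper_pmf(N, K, n)
b = binom_pmf(n, K / N)
```

Evaluating `mgf(h, lam) <= mgf(b, lam)` for any `lam` illustrates the convex-order domination underlying the coupling argument.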
Subjective Probability Distribution Elicitation in Cost Risk Analysis: A Review
2007-01-01
DeGroot, Morris H., Optimal Statistical Decisions, New York: McGraw-Hill, 1970. Dewar, James A., Assumption-Based Planning: A Tool for Reducing... formal decision-analysis point of view. See DeGroot (1970) for a clear exposition of utility in decision analysis. 2 For the triangle distribution, the
Sampling Random Bioinformatics Puzzles using Adaptive Probability Distributions
DEFF Research Database (Denmark)
Have, Christian Theil; Appel, Emil Vincent; Bork-Jensen, Jette
2016-01-01
We present a probabilistic logic program to generate an educational puzzle that introduces the basic principles of next-generation sequencing, gene finding and the translation of genes to proteins following the central dogma in biology. In the puzzle, a secret "protein word" must be found by assembling DNA from fragments (reads), locating a gene in this sequence and translating the gene to a protein. Sampling using this program generates random instances of the puzzle, but it is possible to constrain the difficulty and to customize the secret protein word. Because of these constraints and the randomness of the generation process, sampling may fail to generate a satisfactory puzzle. To avoid failure we employ a strategy using adaptive probabilities which change in response to previous steps of the generative process, thus minimizing the risk of failure.
Analysis of Two-Layered Random Interfaces for Two Dimensional Widom-Rowlinson's Model
Directory of Open Access Journals (Sweden)
Jun Wang
2011-01-01
The statistical behaviors of two-layered random-phase interfaces in the two-dimensional Widom-Rowlinson model are investigated. The phase interfaces separate two coexisting phases of the lattice Widom-Rowlinson model; when the chemical potential μ of the model is large enough, the convergence of the probability distributions which describe the fluctuations of the phase interfaces is studied. In this paper, the backbones of interfaces are introduced in the model, and the corresponding polymer chains and cluster expansions are developed and analyzed for the polymer weights. Finally, the existence of the free energy for two-layered random-phase interfaces of the two-dimensional Widom-Rowlinson model is established.
Numerically exact correlations and sampling in the two-dimensional Ising spin glass.
Thomas, Creighton K; Middleton, A Alan
2013-04-01
A powerful existing technique for evaluating statistical mechanical quantities in two-dimensional Ising models is based on constructing a matrix representing the nearest-neighbor spin couplings and then evaluating the Pfaffian of the matrix. Utilizing this technique and other more recent developments in evaluating elements of inverse matrices and exact sampling, a method and computer code for studying two-dimensional Ising models are developed. The formulation of this method is convenient and fast for computing the partition function and spin correlations. It is also useful for exact sampling, where configurations are directly generated with probability given by the Boltzmann distribution. These methods apply to Ising model samples with arbitrary nearest-neighbor couplings and can also be applied to general dimer models. Example results of computations are described, including comparisons with analytic results for the ferromagnetic Ising model, and timing information is provided.
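For very small lattices, the Boltzmann distribution that the Pfaffian method targets can be checked by brute-force enumeration. A sketch (open boundaries, ferromagnetic couplings; the Pfaffian machinery itself is not reproduced here, and the lattice size and temperature are illustrative):

```python
import math
from itertools import product

def ising_probs(L=3, beta=0.4):
    """Exact Boltzmann distribution of a tiny L x L ferromagnetic Ising model
    (open boundaries) by brute-force enumeration over all 2^(L*L) states --
    a correctness check for scalable methods such as the Pfaffian approach."""
    states, weights = [], []
    for s in product((-1, 1), repeat=L * L):
        E = 0
        for i in range(L):
            for j in range(L):
                if i + 1 < L:                      # vertical bond
                    E -= s[i * L + j] * s[(i + 1) * L + j]
                if j + 1 < L:                      # horizontal bond
                    E -= s[i * L + j] * s[i * L + j + 1]
        states.append(s)
        weights.append(math.exp(-beta * E))
    Z = sum(weights)
    return states, [w / Z for w in weights]

states, probs = ising_probs()
```

The two fully aligned states carry the largest (and, by spin-flip symmetry, equal) probabilities, which is a quick sanity check on any exact sampler.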
Two-dimensional dissipative rogue waves due to time-delayed feedback in cavity nonlinear optics
Tlidi, Mustapha; Panajotov, Krassimir
2017-01-01
We demonstrate a way to generate two-dimensional rogue waves in two types of broad-area nonlinear optical systems subject to time-delayed feedback: in the generic Lugiato-Lefever model and in the model of a broad-area surface-emitting laser with saturable absorber. The delayed feedback is found to induce a spontaneous formation of rogue waves. In the absence of delayed feedback, spatial pulses are stationary. The rogue waves are excited and controlled by the delayed feedback. We characterize their formation by computing the probability distribution of the pulse height. The long-tailed statistical contribution, which is often considered as a signature of the presence of rogue waves, appears for sufficiently strong feedback. The generality of our analysis suggests that the feedback-induced instability leading to the spontaneous formation of two-dimensional rogue waves is a universal phenomenon.
Dynamics of vortex interactions in two-dimensional flows
DEFF Research Database (Denmark)
Juul Rasmussen, J.; Nielsen, A.H.; Naulin, V.
2002-01-01
The dynamics and interaction of like-signed vortex structures in two-dimensional flows are investigated by means of direct numerical solutions of the two-dimensional Navier-Stokes equations. Two vortices with distributed vorticity merge when their distance relative to their radius, d/R_0, is below a critical value, a(c). Using the Weiss field, a(c) is estimated for vortex patches. Introducing an effective radius for vortices with distributed vorticity, we find that 3.3 a(c) ...
Spin dynamics in a two-dimensional quantum gas
DEFF Research Database (Denmark)
Pedersen, Poul Lindholm; Gajdacz, Miroslav; Deuretzbacher, Frank
2014-01-01
We have investigated spin dynamics in a two-dimensional quantum gas. Through spin-changing collisions, two clouds with opposite spin orientations are spontaneously created in a Bose-Einstein condensate. After ballistic expansion, both clouds acquire ring-shaped density distributions with superimposed...
Modeling two-dimensional water flow and bromide transport in a heterogeneous lignitic mine soil
Energy Technology Data Exchange (ETDEWEB)
Buczko, U.; Gerke, H.H. [Brandenburg University of Technology, Cottbus (Germany)
2006-02-15
Water and solute fluxes in lignitic mine soils and in many other soils are often highly heterogeneous. Here, heterogeneity reflects dumping-induced inclined structures and embedded heterogeneous distributions of sediment mixtures and of lignitic fragments. Such two-scale heterogeneity effects may be analyzed through the application of two-dimensional models for calculating water and solute fluxes. The objective of this study was to gain more insight into the extent to which spatial heterogeneity of soil hydraulic parameters contributes to preferential flow at a lignitic mine soil. The simulations pertained to the 'Bärenbrücker Höhe' site in Germany, where water fluxes and applied tracers had previously been monitored with a cell lysimeter, and from which a soil block had been excavated for detailed two-dimensional characterization of the hydraulic parameters using pedotransfer functions. Based on those previous studies, scenarios with different distributions of hydraulic parameters were simulated. The results show that spatial variability of hydraulic parameters alone can hardly explain the observed flow patterns. The observed preferential flow at the site was probably caused by additional factors such as hydrophobicity, the presence of root channels, anisotropy in the hydraulic conductivity, and heterogeneous root distributions. To study the relative importance of these other factors by applying two-dimensional flow models to such sites, the experimental database must be improved. Single-continuum model approaches may be insufficient for such sites.
Extreme Points of the Convex Set of Joint Probability Distributions with Fixed Marginals
Indian Academy of Sciences (India)
K R Parthasarathy
2007-11-01
By using a quantum probabilistic approach we obtain a description of the extreme points of the convex set of all joint probability distributions on the product of two standard Borel spaces with fixed marginal distributions.
Strongly interacting two-dimensional Dirac fermions
Lim, L.K.; Lazarides, A.; Hemmerich, Andreas; de Morais Smith, C.
2009-01-01
We show how strongly interacting two-dimensional Dirac fermions can be realized with ultracold atoms in a two-dimensional optical square lattice with an experimentally realistic, inherent gauge field, which breaks time reversal and inversion symmetries. We find remarkable phenomena in a temperature
Topology optimization of two-dimensional waveguides
DEFF Research Database (Denmark)
Jensen, Jakob Søndergaard; Sigmund, Ole
2003-01-01
In this work we use the method of topology optimization to design two-dimensional waveguides with low transmission loss.
Institute of Scientific and Technical Information of China (English)
乔大勇; 杨璇; 夏长锋; 曾琪; 潘春晖; 练彬
2014-01-01
In order to increase the deflection, the effect of the comb structure on the twisting amplitude of a two-dimensional (2D) microscanner is investigated on the basis of nonlinear dynamics theory, and it is found that the divergent comb distribution is superior to the parallel distribution. Samples were fabricated with silicon-on-insulator (SOI) technology, and their electromechanical characteristics were tested. The test results demonstrate that the microscanner with the divergent comb distribution has the greater twisting amplitude, which agrees with the theoretical analysis. Under a 42 V square-wave drive, the microscanner with the divergent distribution generates maximum twisting angles of 12.3° and 13.49° for the movable frame and the mirror, respectively, while the sample with the parallel distribution generates 10.25° and 11.68° under the same square wave.
Spectral Radiative Properties of Two-Dimensional Rough Surfaces
Xuan, Yimin; Han, Yuge; Zhou, Yue
2012-12-01
Spectral radiative properties of two-dimensional rough surfaces are important for both academic research and practical applications. Besides material properties, surface structures have an impact on the spectral radiative properties of rough surfaces. Based on the finite-difference time-domain algorithm, this paper studies the spectral energy propagation process on a two-dimensional rough surface and analyzes the effect of different factors, such as the surface structure, angle, and polarization state of the incident wave, on the spectral radiative properties of the two-dimensional rough surface. To quantitatively investigate the spatial distribution of energy reflected from the rough surface, the concept of the bidirectional reflectance distribution function is introduced. Correlation analysis between the reflectance and the different impact factors is conducted to evaluate their degree of influence. Comparison between theoretical and experimental data is given to elucidate the accuracy of the computational code. This study is beneficial to optimizing the surface structures of optoelectronic devices such as solar cells.
Probability distribution analysis of observational extreme events and model evaluation
Yu, Q.; Lau, A. K. H.; Fung, J. C. H.; Tsang, K. T.
2016-12-01
Earth's surface temperatures in 2015 were the warmest since modern record-keeping began in 1880, according to the latest study. In contrast, cold weather occurred in many regions of China in January 2016 and brought Guangzhou, the capital city of Guangdong province, its first snowfall in 67 years. To understand changes in extreme weather events and to project their future scenarios, this study applies statistical models to multiple climate datasets. We first use a Granger-causality test to identify the attribution of the global mean temperature rise and extreme temperature events to CO2 concentration. The four statistical moments (mean, variance, skewness, kurtosis) of the daily maximum temperature distribution are investigated for global observational, reanalysis (1961-2010) and model data (1961-2100). Furthermore, we introduce a new tail index based on the four moments, which is a more robust index for measuring extreme temperatures. Our results show that the CO2 concentration provides information about the time series of mean and extreme temperature, but not vice versa. Based on our new tail index, we find that, beyond the mean and variance, skewness is an important indicator that should be considered in estimating extreme temperature changes and in model evaluation. Among the 12 climate model datasets we investigate, the fourth version of the Community Climate System Model (CCSM4) from the National Center for Atmospheric Research performs well on the new index we introduce, which indicates that the model has a substantial capability to project future changes of extreme temperature in the 21st century. The method also shows its ability to measure extreme precipitation/drought events. In future work we will introduce a new diagram to systematically evaluate the performance of the four statistical moments in climate model output; the human and economic impacts of extreme weather events will also be assessed.
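The four sample moments used above are straightforward to compute; a minimal sketch on a synthetic daily-maximum temperature series (the paper's specific tail index is not reproduced here, and the series values are fabricated for illustration):

```python
import math
import random

def four_moments(xs):
    """Sample mean, variance, skewness and excess kurtosis."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    sd = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in xs) / (n * sd ** 3)
    kurt = sum((x - mean) ** 4 for x in xs) / (n * var ** 2) - 3.0
    return mean, var, skew, kurt

rng = random.Random(0)
daily_tmax = [25.0 + 5.0 * rng.gauss(0.0, 1.0) for _ in range(365)]  # synthetic
m, v, s, k = four_moments(daily_tmax)
```

For a near-Gaussian series the skewness and excess kurtosis are close to zero; sustained departures in these two moments are what motivate a moment-based tail index.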
Batch Mode Active Sampling based on Marginal Probability Distribution Matching.
Chattopadhyay, Rita; Wang, Zheng; Fan, Wei; Davidson, Ian; Panchanathan, Sethuraman; Ye, Jieping
2012-01-01
Active learning is a machine learning and data mining technique that selects the most informative samples for labeling and uses them as training data; it is especially useful when there are large amounts of unlabeled data and labeling them is expensive. Recently, batch-mode active learning, where a set of samples is selected concurrently for labeling based on their collective merit, has attracted a lot of attention. The objective of batch-mode active learning is to select a set of informative samples so that a classifier learned on these samples has good generalization performance on the unlabeled data. Most existing batch-mode active learning methodologies try to achieve this by selecting samples based on varied criteria. In this paper we propose a novel criterion which achieves good generalization performance of a classifier by specifically selecting a set of query samples that minimizes the difference in distribution between the labeled and the unlabeled data after annotation. We explicitly measure this difference based on all candidate subsets of the unlabeled data and select the best subset. The proposed objective is an NP-hard integer programming optimization problem. We provide two optimization techniques to solve this problem: in the first, the problem is transformed into a convex quadratic programming problem, and in the second, into a linear programming problem. Our empirical studies using publicly available UCI datasets and a biomedical image dataset demonstrate the effectiveness of the proposed approach in comparison with state-of-the-art batch-mode active learning methods. We also present two extensions of the proposed approach, which incorporate uncertainty of the predicted labels of the unlabeled data and transfer learning in the proposed formulation. Our empirical studies on UCI datasets show that incorporation of uncertainty information improves performance at later iterations while our studies on 20
The exact interface model for wetting in the two-dimensional Ising model
Upton, P. J.
2002-01-01
We use exact methods to derive an interface model from an underlying microscopic model, i.e., the Ising model on a square lattice. At the wetting transition in the two-dimensional Ising model, the long Peierls contour (or interface) gets depinned from the substrate. Using exact transfer-matrix methods, we find that on sufficiently large length scales (i.e., length scales sufficiently larger than the bulk correlation length) the distribution of the long contour is given by a unique probability...
Unconventional critical activated scaling of two-dimensional quantum spin glasses
Matoz-Fernandez, D. A.; Romá, F.
2016-07-01
We study the critical behavior of two-dimensional short-range quantum spin glasses by numerical simulations. Using a parallel tempering algorithm, we calculate the Binder cumulant for the Ising spin glass in a transverse magnetic field with two different short-range bond distributions, the bimodal and the Gaussian ones. Through an exhaustive finite-size analysis, we show that the cumulant probably follows an unconventional activated scaling, which we interpret as new evidence supporting the hypothesis that the quantum critical behavior is governed by an infinite randomness fixed point.
Research on the behavior of fiber orientation probability distribution function in the planar flows
Institute of Scientific and Technical Information of China (English)
ZHOU Kun; LIN Jian-zhong
2005-01-01
The equation of the two-dimensional fiber direction vector was solved theoretically to give the fiber orientation distribution in simple shear flow, flow with two-directional shear, extensional flow and arbitrary planar incompressible flow. The Fokker-Planck equation was solved numerically to verify the theoretical solutions. The stable orientation and orientation period of the fibers were obtained. The results showed that the fiber orientation distribution depends on the relative, not absolute, magnitude of the rate-of-strain of the flow. The effect of the fiber aspect ratio on the orientation distribution of the fibers is insignificant in most conditions, except in the simple shear case. It was proved that the results for a planar flow can be generalized to the case of a 3-D fiber direction vector.
Topology optimization of two-dimensional elastic wave barriers
DEFF Research Database (Denmark)
Van Hoorickx, C.; Sigmund, Ole; Schevenels, M.
2016-01-01
Topology optimization is a method that optimally distributes material in a given design domain. In this paper, topology optimization is used to design two-dimensional wave barriers embedded in an elastic halfspace. First, harmonic vibration sources are considered, and stiffened material is inserted...
Calisto, H.; Bologna, M.
2007-05-01
We report an exact result for the calculation of the probability distribution of the Bernoulli-Malthus-Verhulst model driven by a multiplicative colored noise. We study the conditions under which the probability distribution of the Malthus-Verhulst model can exhibit a transition from a unimodal to a bimodal distribution depending on the value of a critical parameter. We also show that the mean value of x(t) in the latter model always approaches the value 1 asymptotically.
Thomas, Drew M
2013-01-01
A dust grain in a plasma has a fluctuating electric charge, and past work concludes that spherical grains in a stationary, collisionless plasma have an essentially Gaussian charge probability distribution. This paper extends that work to flowing plasmas and arbitrarily large spheres, deriving analytic charge probability distributions up to normalizing constants. We find that these distributions also have good Gaussian approximations, with analytic expressions for their mean and variance.
The probability distribution of the predicted CFM-induced ozone depletion [Chlorofluoromethane]
Ehhalt, D. H.; Chang, J. S.; Bulter, D. M.
1979-01-01
It is argued from the central limit theorem that the uncertainty in model-predicted changes of the ozone column density is best represented by a normal probability density distribution. This conclusion is validated by comparison with a probability distribution generated by a Monte Carlo technique. In the case of the CFM-induced ozone depletion, and based on the estimated uncertainties in the reaction rate coefficients alone, the relative mean standard deviation of this normal distribution is estimated to be 0.29.
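The central-limit argument can be illustrated with a toy Monte Carlo: if the predicted depletion is modeled as a product of several uncertain rate-coefficient factors (all values below are hypothetical, not the paper's), its logarithm is a sum of independent terms, so the result is close to normal and has a well-defined relative standard deviation.

```python
import math
import random

rng = random.Random(42)

def sample_depletion():
    """Illustrative only: depletion modeled as a baseline value multiplied by
    several uncertain rate-coefficient factors with lognormal uncertainty."""
    d = 0.15  # hypothetical baseline fractional ozone depletion
    for _ in range(8):
        d *= rng.lognormvariate(0.0, 0.1)
    return d

samples = [sample_depletion() for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
rel_sd = math.sqrt(var) / mean  # relative mean standard deviation
```

With eight factors of log-sigma 0.1, the relative standard deviation comes out near 0.29, illustrating how a modest per-coefficient uncertainty compounds.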
Constructing the probability distribution function for the total capacity of a power system
Energy Technology Data Exchange (ETDEWEB)
Vasin, V.P.; Prokhorenko, V.I.
1980-01-01
The difficulties involved in constructing the probability distribution function for the total capacity of a power system consisting of numerous power plants are discussed. A method is considered for the approximate determination of such a function by a Monte Carlo method and by exact calculation based on special recursion formulas on a particular grid of argument values. It is shown that there may be significant deviations between the true probability distribution and a normal distribution.
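The exact-recursion approach mentioned above is, in essence, a convolution of independent two-state (up/down) units over a grid of capacity values. A minimal sketch with hypothetical plant data:

```python
def capacity_distribution(units):
    """Exact probability distribution of total available capacity, built by
    convolving independent two-state units one at a time.
    units: list of (capacity, availability) pairs -- hypothetical data."""
    dist = {0: 1.0}
    for cap, avail in units:
        new = {}
        for c, p in dist.items():
            new[c + cap] = new.get(c + cap, 0.0) + p * avail   # unit available
            new[c] = new.get(c, 0.0) + p * (1.0 - avail)       # unit on outage
        dist = new
    return dist

# hypothetical power plants: (capacity in MW, availability)
plants = [(100, 0.95), (100, 0.95), (200, 0.90), (50, 0.98)]
dist = capacity_distribution(plants)
```

The recursion is exact and needs no Monte Carlo sampling; its cost grows with the number of distinct capacity sums, which is why a fixed grid of argument values is used for large systems.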
On the critical behaviour of two-dimensional liquid crystals
Directory of Open Access Journals (Sweden)
A.l. Fariñas-Sánchez
2010-01-01
The Lebwohl-Lasher (LL) model is the traditional model used to describe the nematic-isotropic transition of real liquid crystals. In this paper, we develop a numerical study of the temperature behaviour and finite-size scaling of the two-dimensional (2D) LL model. We discuss two possible scenarios. In the first, the 2D LL model presents a phase transition similar to the topological transition appearing in the 2D XY model. In the second, the 2D LL model does not exhibit any critical transition, but its low-temperature behaviour is instead characterized by a crossover from a disordered phase to an ordered phase at zero temperature. We present and discuss various comparisons with the 2D XY model and the 2D Heisenberg model. Having added the finite-size scaling behaviour of the order parameter and conformal mapping of the order-parameter profile to previous studies, we analyze the critical scaling of the probability distribution function, hyperscaling relations and the stiffness order parameter, and conclude that the second scenario (no critical transition) is the most plausible.
Two-Dimensional Tail-Biting Convolutional Codes
Alfandary, Liam
2011-01-01
Multidimensional convolutional codes are an extension of the notion of convolutional codes (CCs) to several dimensions of time. This paper explores the class of two-dimensional convolutional codes (2D CCs) and, in particular, 2D tail-biting convolutional codes (2D TBCCs), from several aspects. First, we derive several basic algebraic properties of these codes, applying algebraic methods to find bijective encoders, construct parity-check matrices and invert encoders. Next, we discuss the minimum distance and weight distribution properties of these codes. Extending an existing tree-search algorithm to two dimensions, we apply it to find codes with high minimum distance. Word-error probability asymptotes for sample codes are given and compared with other codes. The results of this approach suggest that 2D TBCCs can perform better than comparable 1D TBCCs or other codes. We then present several novel iterative suboptimal algorithms for soft decoding 2D CCs, which are based on belief propagation. Two ...
Perspective: Two-dimensional resonance Raman spectroscopy
Molesky, Brian P.; Guo, Zhenkun; Cheshire, Thomas P.; Moran, Andrew M.
2016-11-01
Two-dimensional resonance Raman (2DRR) spectroscopy has been developed for studies of photochemical reaction mechanisms and structural heterogeneity in complex systems. The 2DRR method can leverage electronic resonance enhancement to selectively probe chromophores embedded in complex environments (e.g., a cofactor in a protein). In addition, correlations between the two dimensions of the 2DRR spectrum reveal information that is not available in traditional Raman techniques. For example, distributions of reactant and product geometries can be correlated in systems that undergo chemical reactions on the femtosecond time scale. Structural heterogeneity in an ensemble may also be reflected in the 2D spectroscopic line shapes of both reactive and non-reactive systems. In this perspective article, these capabilities of 2DRR spectroscopy are discussed in the context of recent applications to the photodissociation reactions of triiodide and myoglobin. We also address key differences between the signal generation mechanisms for 2DRR and off-resonant 2D Raman spectroscopies. Most notably, it has been shown that these two techniques are subject to a tradeoff between sensitivity to anharmonicity and susceptibility to artifacts. Overall, recent experimental developments and applications of the 2DRR method suggest great potential for the future of the technique.
Institute of Scientific and Technical Information of China (English)
宋小源; 刘杰; 郑春苗
2012-01-01
Dye tracers have been widely used in sandbox experiments of solute transport. This study uses image analysis to determine the concentration of a dye tracer in porous media. By establishing a quantitative relationship between the concentration and the hue value of the dye tracer, the concentration distribution of the dye tracer can be quickly obtained. Compared with traditional point sampling, image analysis provides a cost-effective, high-resolution method to quantify the solute concentration distribution in two-dimensional sandbox experiments for groundwater solute transport studies. Because the range of image hue values is limited, the image analysis method can only be applied within a certain range of solute concentrations.
Two Dimensional Plasmonic Cavities on Moire Surfaces
Balci, Sinan; Kocabas, Askin; Karabiyik, Mustafa; Kocabas, Coskun; Aydinli, Atilla
2010-03-01
We investigate surface plasmon polariton (SPP) cavity modes on two-dimensional Moire surfaces in the visible spectrum. A two-dimensional hexagonal Moire surface can be recorded on a photoresist layer using interference lithography (IL). Two sequential exposures at slightly different angles in IL generate one-dimensional Moire surfaces. A further sequential exposure of the same sample at slightly different angles, after turning the sample 60 degrees around its own axis, generates a two-dimensional hexagonal Moire cavity. Spectroscopic reflection measurements have shown plasmonic band gaps and cavity states at all the azimuthal angles investigated (omnidirectional cavity and band-gap formation). The plasmonic band-gap edge and the cavity-state energies show six-fold symmetry on the two-dimensional Moire surface, as measured in reflection measurements.
Two-dimensional function photonic crystals
Liu, Xiao-Jing; Liang, Yu; Ma, Ji; Zhang, Si-Qi; Li, Hong; Wu, Xiang-Yao; Wu, Yi-Heng
2017-01-01
In this paper, we have studied two-dimensional function photonic crystals, in which the dielectric constants of the medium columns are functions of the space coordinates; this can be realized easily via the electro-optic effect and the optical Kerr effect. We calculated the band gap structures of TE and TM waves, and found that the TE (TM) wave band gaps of function photonic crystals are wider (narrower) than those of conventional photonic crystals. For two-dimensional function photonic crystals, when the dielectric constant functions change, the number, width and position of the band gaps change accordingly, so the band gap structures of two-dimensional function photonic crystals can be adjusted flexibly and the needed band gap structures can be designed. This can be of help in designing optical devices.
Two-Dimensional Planetary Surface Lander
Hemmati, H.; Sengupta, A.; Castillo, J.; McElrath, T.; Roberts, T.; Willis, P.
2014-06-01
A systems engineering study was conducted to leverage a new two-dimensional (2D) lander concept with a low per unit cost to enable scientific study at multiple locations with a single entry system as the delivery vehicle.
American Society for Testing and Materials. Philadelphia
2001-01-01
D6416/D6416M-01(2007) Standard Test Method for Two-Dimensional Flexural Properties of Simply Supported Sandwich Composite Plates Subjected to a Distributed Load
A New Probability of Detection Model for Updating Crack Distribution of Offshore Structures
Institute of Scientific and Technical Information of China (English)
李典庆; 张圣坤; 唐文勇
2003-01-01
Model uncertainty exists in the probability of detection (POD) when inspecting ship structures with nondestructive inspection techniques. Based on a comparison of several existing POD models, a new POD model is proposed for updating the crack size distribution. Furthermore, theoretical derivation shows that most existing POD models are special cases of the new model. The least-squares method is adopted to determine the values of the parameters in the new POD model. The new model is also compared with other existing POD models, and the results indicate that it fits the inspection data better. The new POD model is then applied to the problem of crack size updating for offshore structures. The Bayesian updating method is used to analyze the effect of POD models on the posterior distribution of crack size. The results show that different POD models generate different posterior distributions of crack size for offshore structures.
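The Bayesian updating step can be sketched on a discretized crack-size grid: given an inspection with no detection, the posterior is proportional to the prior times the non-detection probability, f(a | ND) ∝ f(a)·(1 − POD(a)). The POD curve below is a common exponential form used purely for illustration (not the paper's new model), and the grid and prior are hypothetical.

```python
import math

def pod(a, lam=2.0):
    """A common exponential POD curve, POD(a) = 1 - exp(-a/lam).
    Illustrative only; not the new POD model of the paper."""
    return 1.0 - math.exp(-a / lam)

def update_no_detection(grid, prior):
    """Posterior crack-size pmf given one inspection with no detection:
    f(a | ND) proportional to f(a) * (1 - POD(a))."""
    w = [p * (1.0 - pod(a)) for a, p in zip(grid, prior)]
    z = sum(w)
    return [x / z for x in w]

grid = [0.5 * i for i in range(1, 21)]        # crack sizes in mm (hypothetical)
prior = [math.exp(-a) for a in grid]          # hypothetical exponential prior
s = sum(prior)
prior = [p / s for p in prior]
post = update_no_detection(grid, prior)
```

Because (1 − POD(a)) decreases with crack size, a non-detection shifts the posterior toward smaller cracks, which is the qualitative effect the abstract's sensitivity study examines across POD models.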
Some possible q-exponential type probability distribution in the non-extensive statistical physics
Chung, Won Sang
2016-08-01
In this paper, we present two exponential-type probability distributions which differ from Tsallis's case, which we call Type I: one given by p_i = (1/Z_q)[e_q(E_i)]^(-β) (Type IIA) and another given by p_i = (1/Z_q)[e_q(-β)]^(E_i) (Type IIIA). Starting with the Boltzmann-Gibbs entropy, we obtain the different probability distributions by using the Kolmogorov-Nagumo average for the microstate energies. We present the first-order differential equations related to Types I, II and III. For the three types of probability distributions, we discuss the quantum harmonic oscillator, the two-level problem and the spin-1/2 paramagnet.
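The q-exponential underlying these distributions, e_q(x) = [1 + (1−q)x]^(1/(1−q)), reduces to exp(x) as q → 1. A minimal sketch of the Type II weights p_i ∝ [e_q(E_i)]^(−β), with illustrative energies and parameter values:

```python
import math

def eq(x, q):
    """Tsallis q-exponential: [1 + (1-q) x]^(1/(1-q)); exp(x) in the q -> 1 limit.
    Returns 0 where the base is non-positive (the usual cutoff convention)."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def type2_probs(energies, beta, q):
    """Type II weights p_i proportional to [e_q(E_i)]^(-beta), normalized."""
    w = [eq(E, q) ** (-beta) for E in energies]
    z = sum(w)  # plays the role of Z_q
    return [x / z for x in w]

E = [0.0, 1.0, 2.0]          # illustrative microstate energies
p = type2_probs(E, beta=1.0, q=1.2)
```

For β > 0 the weights decrease with energy, as a thermal distribution should, and the Type III case differs only in placing E_i in the exponent rather than the argument of e_q.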
Probability distributions for directed polymers in random media with correlated noise
Chu, Sherry; Kardar, Mehran
2016-07-01
The probability distribution for the free energy of directed polymers in random media (DPRM) with uncorrelated noise in d =1 +1 dimensions satisfies the Tracy-Widom distribution. We inquire if and how this universal distribution is modified in the presence of spatially correlated noise. The width of the distribution scales as the DPRM length to an exponent β , in good (but not full) agreement with previous renormalization group and numerical results. The scaled probability is well described by the Tracy-Widom form for uncorrelated noise, but becomes symmetric with increasing correlation exponent. We thus find a class of distributions that continuously interpolates between Tracy-Widom and Gaussian forms.
Collective motions of globally coupled oscillators and some probability distributions on circle
Energy Technology Data Exchange (ETDEWEB)
Jaćimović, Vladimir [Faculty of Natural Sciences and Mathematics, University of Montenegro, Cetinjski put, bb., 81000 Podgorica (Montenegro); Crnkić, Aladin, E-mail: aladin.crnkic@hotmail.com [Faculty of Technical Engineering, University of Bihać, Ljubijankićeva, bb., 77000 Bihać, Bosnia and Herzegovina (Bosnia and Herzegovina)
2017-06-28
In 2010, Kato and Jones described a new family of probability distributions on the circle, obtained as Möbius transformations of the von Mises distribution. We present a model demonstrating that these distributions appear naturally in the study of populations of coupled oscillators. We use this opportunity to point out certain relations between Directional Statistics and the collective motion of coupled oscillators. - Highlights: • We specify probability distributions on the circle that arise in the Kuramoto model. • We study how the mean-field coupling affects the shape of the distribution of phases. • We discuss potential applications in some experiments on cell cycle. • We apply Directional Statistics to study the collective dynamics of coupled oscillators.
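Sampling from such a family can be sketched by drawing angles from a von Mises distribution and pushing them through a Möbius transformation of the unit circle, following the Kato-Jones construction; the concentration and the Möbius parameter a below are arbitrary assumptions for illustration.

```python
import cmath
import math
import random

def mobius(theta, a):
    # Möbius transformation of the unit circle: z -> (z + a) / (1 + conj(a) * z), with |a| < 1
    z = cmath.exp(1j * theta)
    w = (z + a) / (1 + a.conjugate() * z)
    return cmath.phase(w)

rng = random.Random(42)
a = 0.4 * cmath.exp(1j * 0.3)                        # assumed Möbius parameter, |a| = 0.4
base = [rng.vonmisesvariate(0.0, 2.0) for _ in range(1000)]  # von Mises, mean 0, kappa 2
angles = [mobius(t, a) for t in base]                # samples from a Kato-Jones-type law
```

Because |a| < 1, the map sends the unit circle to itself, so the transformed angles remain a valid circular sample; the parameter a skews and shifts the base von Mises density.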
A measure of mutual divergence among a number of probability distributions
Directory of Open Access Journals (Sweden)
J. N. Kapur
1987-01-01
…major inequalities due to Shannon, Rényi and Hölder. The inequalities are then used to obtain some useful results in information theory. In particular, measures are obtained for the mutual divergence among two or more probability distributions.
The probability distribution of fatigue damage and the statistical moment of fatigue life
Institute of Scientific and Technical Information of China (English)
熊峻江; 高镇同
1997-01-01
The randomization of the deterministic fatigue damage equation leads to a stochastic differential equation and a Fokker-Planck equation affected by random fluctuation. By means of the solution of the equation, the probability distribution of fatigue damage over time is obtained. Then the statistical moment of fatigue life, in consideration of the stationary random fluctuation, is derived. Finally, the damage probability distributions during fatigue crack initiation and fatigue crack growth are given.
Interpolation by two-dimensional cubic convolution
Shi, Jiazheng; Reichenbach, Stephen E.
2003-08-01
This paper presents results of image interpolation with an improved method for two-dimensional cubic convolution. Convolution with a piecewise cubic is one of the most popular methods for image reconstruction, but the traditional approach uses a separable two-dimensional convolution kernel that is based on a one-dimensional derivation. The traditional, separable method is sub-optimal for the usual case of non-separable images. The improved method in this paper implements the most general non-separable, two-dimensional, piecewise-cubic interpolator with constraints for symmetry, continuity, and smoothness. The improved method of two-dimensional cubic convolution has three parameters that can be tuned to yield maximal fidelity for specific scene ensembles characterized by autocorrelation or power-spectrum. This paper illustrates examples for several scene models (a circular disk of parametric size, a square pulse with parametric rotation, and a Markov random field with parametric spatial detail) and actual images -- presenting the optimal parameters and the resulting fidelity for each model. In these examples, improved two-dimensional cubic convolution is superior to several other popular small-kernel interpolation methods.
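For context, the separable baseline that the improved method generalizes builds on the one-dimensional piecewise-cubic kernel (Keys' kernel, commonly with a = -0.5). A minimal sketch of that 1D building block, not the paper's non-separable 2D interpolator:

```python
import math

def keys_kernel(x, a=-0.5):
    # 1D piecewise-cubic convolution kernel; interpolating: u(0) = 1, u(1) = u(2) = 0
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def interp1d(samples, t, a=-0.5):
    # reconstruct f(t) from integer-spaced samples by cubic convolution (4-tap support)
    k0 = math.floor(t)
    return sum(samples[k] * keys_kernel(t - k, a)
               for k in range(max(k0 - 1, 0), min(k0 + 3, len(samples))))

vals = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]   # f(x) = x^2 sampled at the integers
y = interp1d(vals, 1.5)                    # equals 2.25 = 1.5**2; a = -0.5 is exact for quadratics
```

The separable 2D interpolator applies this kernel along rows and then columns; the paper's improvement replaces that product form with a general non-separable 2D piecewise cubic.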
TWO-DIMENSIONAL TOPOLOGY OF COSMOLOGICAL REIONIZATION
Energy Technology Data Exchange (ETDEWEB)
Wang, Yougang; Xu, Yidong; Chen, Xuelei [Key Laboratory of Computational Astrophysics, National Astronomical Observatories, Chinese Academy of Sciences, Beijing, 100012 China (China); Park, Changbom [School of Physics, Korea Institute for Advanced Study, 85 Hoegiro, Dongdaemun-gu, Seoul 130-722 (Korea, Republic of); Kim, Juhan, E-mail: wangyg@bao.ac.cn, E-mail: cbp@kias.re.kr [Center for Advanced Computation, Korea Institute for Advanced Study, 85 Hoegiro, Dongdaemun-gu, Seoul 130-722 (Korea, Republic of)
2015-11-20
We study the two-dimensional topology of the 21-cm differential brightness temperature for two hydrodynamic radiative transfer simulations and two semi-numerical models. In each model, we calculate the two-dimensional genus curve for the early, middle, and late epochs of reionization. It is found that the genus curve depends strongly on the ionized fraction of hydrogen in each model. The genus curves are significantly different for different reionization scenarios even when the ionized fraction is the same. We find that the two-dimensional topology analysis method is a useful tool to constrain reionization models. Our method can be applied to future observations such as those of the Square Kilometre Array.
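A two-dimensional genus curve can be computed from the Euler characteristic of the excursion set of a thresholded field. A minimal sketch of that underlying computation (the simulations and 21-cm maps of the paper are not reproduced here; the random field below is a toy stand-in):

```python
import numpy as np

def euler_characteristic(mask):
    # chi = V - E + F for the union of filled unit squares (True pixels);
    # 2D genus-type statistics of an excursion set follow from chi
    p = np.pad(np.asarray(mask, dtype=bool), 1)
    F = np.count_nonzero(p)
    E = (np.count_nonzero(p[:-1, :] | p[1:, :])     # horizontal edges
         + np.count_nonzero(p[:, :-1] | p[:, 1:]))  # vertical edges
    V = np.count_nonzero(p[:-1, :-1] | p[:-1, 1:] | p[1:, :-1] | p[1:, 1:])
    return int(V - E + F)

# genus-type curve of a random field as a function of threshold
field = np.random.default_rng(0).normal(size=(64, 64))
curve = [euler_characteristic(field > t) for t in (-1.0, 0.0, 1.0)]
```

For a single filled pixel chi = 1, and a ring of pixels around a hole gives chi = 0, matching the intuition that chi counts connected regions minus holes.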
DEFF Research Database (Denmark)
Schjær-Jacobsen, Hans
2012-01-01
…to the understanding of similarities and differences of the two approaches, as well as practical applications. The probability approach offers a good framework for representation of randomness and variability. Once the probability distributions of uncertain parameters and their correlations are known, the resulting uncertainty can be calculated. The possibility approach is particularly well suited for representation of uncertainty of a non-statistical nature due to lack of knowledge, and requires less information than the probability approach. Based on the kind of uncertainty and knowledge present, these aspects … by probability distributions is readily done by means of Monte Carlo simulation. Calculation of non-monotonic functions of possibility distributions is done within the theoretical framework of fuzzy intervals, but straightforward application of fuzzy arithmetic in general results in overestimation of interval widths…
Frank, Steven A
2010-01-01
Commonly observed patterns typically follow a few distinct families of probability distributions. Over one hundred years ago, Karl Pearson provided a systematic derivation and classification of the common continuous distributions. His approach was phenomenological: a differential equation that generated common distributions without any underlying conceptual basis for why common distributions have particular forms and what explains the familial relations. Pearson's system and its descendants remain the most popular systematic classification of probability distributions. Here, we unify the disparate forms of common distributions into a single system based on two meaningful and justifiable propositions. First, distributions follow maximum entropy subject to constraints, where maximum entropy is equivalent to minimum information. Second, different problems associate magnitude to information in different ways, an association we describe in terms of the relation between information invariance and measurement scale....
Two-dimensional x-ray diffraction
He, Bob B
2009-01-01
Written by one of the pioneers of 2D X-Ray Diffraction, this useful guide covers the fundamentals, experimental methods and applications of two-dimensional x-ray diffraction, including geometry convention, x-ray source and optics, two-dimensional detectors, diffraction data interpretation, and configurations for various applications, such as phase identification, texture, stress, microstructure analysis, crystallinity, thin film analysis and combinatorial screening. Experimental examples in materials research, pharmaceuticals, and forensics are also given. This presents a key resource to researchers…
Matching Two-dimensional Gel Electrophoresis' Spots
DEFF Research Database (Denmark)
Dos Anjos, António; AL-Tam, Faroq; Shahbazkia, Hamid Reza
2012-01-01
This paper describes an approach for matching Two-Dimensional Electrophoresis (2-DE) gels' spots, involving the use of image registration. The number of false positive matches produced by the proposed approach is small when compared to academic and commercial state-of-the-art approaches…
Mobility anisotropy of two-dimensional semiconductors
Lang, Haifeng; Zhang, Shuqing; Liu, Zhirong
2016-12-01
The carrier mobility of anisotropic two-dimensional semiconductors under longitudinal acoustic phonon scattering was theoretically studied using deformation potential theory. Based on the Boltzmann equation with the relaxation time approximation, an analytic formula of intrinsic anisotropic mobility was derived, showing that the influence of effective mass on mobility anisotropy is larger than those of deformation potential constant or elastic modulus. Parameters were collected for various anisotropic two-dimensional materials (black phosphorus, Hittorf's phosphorus, BC2N , MXene, TiS3, and GeCH3) to calculate their mobility anisotropy. It was revealed that the anisotropic ratio is overestimated by the previously described method.
Towards two-dimensional search engines
Ermann, Leonardo; Chepelianskii, Alexei D.; Shepelyansky, Dima L.
2011-01-01
We study the statistical properties of various directed networks using ranking of their nodes based on the dominant vectors of the Google matrix, known as PageRank and CheiRank. On average, PageRank orders nodes proportionally to their number of ingoing links, while CheiRank orders nodes proportionally to their number of outgoing links. In this way the ranking of nodes becomes two-dimensional, which paves the way for the development of two-dimensional search engines of a new type. Statistical properties of inf...
Two-dimensional hazard estimation for longevity analysis
DEFF Research Database (Denmark)
Fledelius, Peter; Guillen, M.; Nielsen, J.P.
2004-01-01
We investigate developments in Danish mortality based on data from 1974-1998, working in a two-dimensional model with chronological time and age as the two dimensions. The analyses are done with non-parametric kernel hazard estimation techniques. The only assumption is that the mortality surface … the two-dimensional mortality surface. Furthermore we look at aggregated synthetic population metrics such as 'population life expectancy' and 'population survival probability'. For Danish women these metrics indicate decreasing mortality with respect to chronological time. The metrics cannot directly be used for prediction purposes. However, we suggest that life insurance companies use the estimation technique and cross-validation for bandwidth selection when analyzing their portfolio mortality. The non-parametric approach may give valuable information prior to developing more sophisticated prediction models…
Predicting the probability of slip in gait: methodology and distribution study.
Gragg, Jared; Yang, James
2016-01-01
The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
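The single-integral form and the trapezoidal evaluation described above can be sketched as follows, writing P(slip) = ∫ f_req(u) F_avail(u) du, the probability that the available friction falls below the required friction. The normal distributions used here are placeholders only; as the abstract stresses, the friction distributions cannot automatically be assumed normal, and any distribution pair can be plugged in.

```python
import math

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def prob_slip(f_req, F_avail, lo, hi, n=10000):
    # P(slip) = P(mu_avail < mu_req) = integral of f_req(u) * F_avail(u) du,
    # evaluated with the trapezoidal rule on [lo, hi]
    h = (hi - lo) / n
    ys = [f_req(lo + i * h) * F_avail(lo + i * h) for i in range(n + 1)]
    return h * (0.5 * ys[0] + sum(ys[1:-1]) + 0.5 * ys[-1])

# placeholder choices: required ~ N(0.25, 0.05), available ~ N(0.5, 0.1)
p = prob_slip(lambda u: norm_pdf(u, 0.25, 0.05),
              lambda u: norm_cdf(u, 0.5, 0.1), 0.0, 1.0)
```

As a sanity check, when the required and available frictions share the same distribution the integral evaluates to 0.5, since either is equally likely to exceed the other.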
WIENER-HOPF SOLVER WITH SMOOTH PROBABILITY DISTRIBUTIONS OF ITS COMPONENTS
Directory of Open Access Journals (Sweden)
Mr. Vladimir A. Smagin
2016-12-01
The Wiener-Hopf solver with smooth probability distributions of its components is presented. The method is based on hyper-delta approximations of the initial distributions. The use of the Fourier series transformation and the characteristic function allows working with the random-variable method concentrated on the abscissa axis.
Results from laboratory tests of the two-dimensional Time-Encoded Imaging System.
Energy Technology Data Exchange (ETDEWEB)
Marleau, Peter; Brennan, James S.; Brubaker, Erik; Gerling, Mark D; Le Galloudec, Nathalie Joelle
2014-09-01
A series of laboratory experiments was undertaken to demonstrate the feasibility of two-dimensional time-encoded imaging. A prototype two-dimensional time-encoded imaging system was designed and constructed. Results from imaging measurements of single and multiple point sources, as well as extended source distributions, are presented. Time-encoded imaging has proven to be a simple method for achieving high-resolution two-dimensional imaging, with potential to be used in future arms control and treaty verification applications.
Matias-Peralta, Hazel Monica; Ghodsi, Alireza; Shitan, Mahendran; Yusoff, Fatimah Md.
Copepods are the most abundant microcrustaceans in marine waters and are the major food resource for many commercial fish species. In addition, changes in the distribution and population composition of copepods may also serve as an indicator of global climate change. Therefore, it is important to model the copepod distribution in different ecosystems. Copepod samples were collected from three different ecosystems (a seagrass area, a cage aquaculture area, and coastal waters off a shrimp aquaculture farm) along the coastal waters of the Malacca Straits over a one-year period. In this study the major statistical analysis consisted of fitting different probability models. This paper highlights the fitting of probability distributions and discusses the adequacy of the fitted models. These fitted models enable one to make probability statements about the distribution of copepods in the three different ecosystems.
Institute of Scientific and Technical Information of China (English)
翟永玺; 张堃元; 王磊; 李永洲; 张林
2014-01-01
A parametric study of a curved compression surface with controllable Mach number distribution was carried out to determine how the design parameters affect the performance of the compression surface. On this basis, a polynomial response surface proxy model was built for multi-objective optimization, and a hypersonic curved-shock two-dimensional inlet was designed from the optimization result; its performance was compared with a three-ramp compression inlet designed under the same constraints. Results indicate that, among the design parameters, the initial compression angle θ and the factors C and md1 have the largest effect. The flow coefficient of the new inlet reaches 0.769 at Mach 4. When the Mach number ranges from 4 to 7, the two inlets have essentially the same mass capture ratio, while the new inlet has higher total pressure recovery at the throat and outlet sections. Compared with the corresponding three-ramp inlet, the total pressure recovery of the throat section of the new inlet increased by 6.5% at Mach 4, 8.4% at Mach 6, and 10.7% at Mach 7.
Vehicle routing problem in distribution with two-dimensional loading constraint
Institute of Scientific and Technical Information of China (English)
王征; 胡祥培; 王旭坪
2011-01-01
In real-world distribution, a great number of transportation problems involve items demanded by customers that are fragile and must be packed into vehicles; these are called vehicle routing problems with two-dimensional loading constraints. This is a new problem that combines two classical problems: the vehicle routing problem and the bin packing problem. To solve it, a mathematical model and a Memetic algorithm are presented after an explicit problem definition. The key modules of the algorithm, such as a depth-first-based heuristic loading method, an encoding and splitting method for chromosomes, an initial solution generation method, and crossover and local search methods, are explained in detail. Based on preliminary experiments, the best combination of parameter values in the Memetic algorithm is given. Finally, the robustness and effectiveness of the Memetic algorithm were tested on Iori's 30 instances, whose numbers of customers range from 20 to 199, and a comparison with another algorithm in the literature was made, which shows that the Memetic algorithm greatly surpasses it in solving capacity and solution quality.
Piezoelectricity in Two-Dimensional Materials
Wu, Tao
2015-02-25
Powering up 2D materials: Recent experimental studies confirmed the existence of piezoelectricity - the conversion of mechanical stress into electricity - in two-dimensional single-layer MoS2 nanosheets. The results represent a milestone towards embedding low-dimensional materials into future disruptive technologies. © 2015 Wiley-VCH Verlag GmbH & Co. KGaA.
Kronecker Product of Two-dimensional Arrays
Institute of Scientific and Technical Information of China (English)
Lei Hu
2006-01-01
Kronecker sequences constructed from short sequences are good sequences for spread spectrum communication systems. In this paper we study a similar problem for two-dimensional arrays, and we determine the linear complexity of the Kronecker product of two arrays. Our result shows that a similarly good property of linear complexity holds for the Kronecker product of arrays.
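As a small numerical illustration of the construction (the linear-complexity analysis itself is not reproduced), the Kronecker product places a copy of the second array, scaled by each entry of the first, into the corresponding block of the result:

```python
import numpy as np

A = np.array([[1, 0],
              [1, 1]])
B = np.array([[1, 1],
              [0, 1]])

# each entry a_ij of A scales a full copy of B in block (i, j) of the result
K = np.kron(A, B)
```

Here the top-left 2x2 block of K equals B (since a_00 = 1) and the top-right block is all zeros (since a_01 = 0), which is exactly how long two-dimensional arrays are built from short ones.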
Two-Dimensional Toda-Heisenberg Lattice
Directory of Open Access Journals (Sweden)
Vadim E. Vekslerchik
2013-06-01
We consider a nonlinear model that is a combination of the anisotropic two-dimensional classical Heisenberg and Toda-like lattices. In the framework of the Hirota direct approach, we present the field equations of this model as a bilinear system, which is closely related to the Ablowitz-Ladik hierarchy, and derive its N-soliton solutions.
A novel two dimensional particle velocity sensor
Pjetri, Olti; Wiegerink, Remco J.; Lammerink, Theo S.; Krijnen, Gijs J.
2013-01-01
In this paper we present a two-wire, two-dimensional particle velocity sensor. The miniature sensor of size 1.0x2.5x0.525 mm, consisting of only two crossed wires, shows excellent directional sensitivity in both directions, thus requiring no directivity calibration, and is relatively easy to fabricate.
Two-dimensional microstrip detector for neutrons
Energy Technology Data Exchange (ETDEWEB)
Oed, A. [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France)
1997-04-01
Because of their robust design, gas microstrip detectors, which were developed at ILL, can be assembled relatively quickly, provided the prefabricated components are available. At the beginning of 1996, orders were received for the construction of three two-dimensional neutron detectors. These detectors have been completed. The detectors are outlined below. (author). 2 refs.
Two-dimensional magma-repository interactions
Bokhove, O.
2001-01-01
Two-dimensional simulations of magma-repository interactions reveal that the three phases --a shock tube, shock reflection and amplification, and shock attenuation and decay phase-- in a one-dimensional flow tube model have a precursor. This newly identified phase ``zero'' consists of the impact of
Two-dimensional subwavelength plasmonic lattice solitons
Ye, F; Hu, B; Panoiu, N C
2010-01-01
We present a theoretical study of plasmonic lattice solitons (PLSs) formed in two-dimensional (2D) arrays of metallic nanowires embedded in a nonlinear medium with Kerr nonlinearity. We analyze two families of 2D PLSs, namely fundamental and vortical PLSs, in both focusing and defocusing media. Their existence, stability, and subwavelength spatial confinement are studied in detail.
A two-dimensional Dirac fermion microscope
DEFF Research Database (Denmark)
Bøggild, Peter; Caridad, Jose; Stampfer, Christoph
2017-01-01
in the solid state. Here we provide a perspective view on how a two-dimensional (2D) Dirac fermion-based microscope can be realistically implemented and operated, using graphene as a vacuum chamber for ballistic electrons. We use semiclassical simulations to propose concrete architectures and design rules of 2...
Topological defect motifs in two-dimensional Coulomb clusters
Radzvilavičius, A; 10.1088/0953-8984/23/38/385301
2012-01-01
The most energetically favourable arrangement of low-density electrons in an infinite two-dimensional plane is the ordered triangular Wigner lattice. However, in most instances of contemporary interest one deals instead with finite clusters of strongly interacting particles localized in potential traps, for example, in complex plasmas. In the current contribution we study the distribution of topological defects in two-dimensional Coulomb clusters with parabolic lateral confinement. The minima-hopping algorithm based on molecular dynamics is used to efficiently locate the ground- and low-energy metastable states, and their structure is analyzed by means of the Delaunay triangulation. The size, structure and distribution of geometry-induced lattice imperfections strongly depend on the system size and the energetic state. Besides isolated disclinations and dislocations, the classification of defect motifs includes defect compounds: grain boundaries, rosette defects, vacancies and interstitial particles. Proliferatio...
Two-dimensional sub-half-wavelength atom localization via controlled spontaneous emission.
Wan, Ren-Gang; Zhang, Tong-Yi
2011-12-05
We propose a scheme for two-dimensional (2D) atom localization based on controlled spontaneous emission, in which the atom interacts with two orthogonal standing-wave fields. Due to the spatially dependent atom-field interaction, the position probability distribution of the atom can be directly determined by measuring the resulting spontaneous emission spectrum. The phase-sensitive property of the atomic system leads to quenching of the spontaneous emission in some regions of the standing waves, which significantly reduces the uncertainty in the position measurement of the atom. We find that the frequency measurement of the emitted light localizes the atom in a half-wavelength domain. In particular, the probability of finding the atom at a particular position can reach 100% when a photon with a certain frequency is detected. By increasing the Rabi frequencies of the driving fields, such 2D sub-half-wavelength atom localization can acquire high spatial resolution.
A Class of Chaotic Sequences with Gauss Probability Distribution for Radar Mask Jamming
Institute of Scientific and Technical Information of China (English)
Ni-Ni Rao; Yu-Chuan Huang; Bin Liu
2007-01-01
A simple generation approach for chaotic sequences with a Gauss probability distribution is proposed. Theoretical analysis and simulation based on the Logistic chaotic model show that the approach is feasible and effective. The distribution characteristics of the novel chaotic sequence are comparable to those of the standard normal distribution. Its mean and variance can be changed to the desired values. The novel sequences also have good randomness. The applications for radar mask jamming are analyzed.
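The paper's exact transform is not given in the abstract; one plausible construction matching the description is to iterate the Logistic map, convert its arcsine-distributed iterates to uniform variates via the map's invariant CDF, and then apply the inverse normal CDF. The seed value and burn-in below are arbitrary assumptions.

```python
import math
from statistics import NormalDist, fmean, stdev

def chaotic_gaussian(n, x0=0.3, burn=100):
    # iterate the Logistic map x -> 4x(1-x); its invariant density is 1/(pi*sqrt(x(1-x)))
    x = x0
    for _ in range(burn):
        x = 4.0 * x * (1.0 - x)
    nd = NormalDist()
    out = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        u = (2.0 / math.pi) * math.asin(math.sqrt(x))   # invariant CDF -> uniform on (0, 1)
        u = min(max(u, 1e-12), 1.0 - 1e-12)             # guard the open interval for inv_cdf
        out.append(nd.inv_cdf(u))                       # inverse normal CDF -> Gaussian-like
    return out

z = chaotic_gaussian(20000)
```

Shifting and scaling the output changes the mean and variance to any desired values, as the abstract notes.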
Marshman, Emily; Singh, Chandralekha
2017-03-01
A solid grasp of the probability distributions for measuring physical observables is central to connecting the quantum formalism to measurements. However, students often struggle with the probability distributions of measurement outcomes for an observable and have difficulty expressing this concept in different representations. Here we first describe the difficulties that upper-level undergraduate and PhD students have with the probability distributions for measuring physical observables in quantum mechanics. We then discuss how student difficulties found in written surveys and individual interviews were used as a guide in the development of a quantum interactive learning tutorial (QuILT) to help students develop a good grasp of the probability distributions of measurement outcomes for physical observables. The QuILT strives to help students become proficient in expressing the probability distributions for the measurement of physical observables in Dirac notation and in the position representation and be able to convert from Dirac notation to position representation and vice versa. We describe the development and evaluation of the QuILT and findings about the effectiveness of the QuILT from in-class evaluations.
Cryptography Using Multiple Two-Dimensional Chaotic Maps
Directory of Open Access Journals (Sweden)
Ibrahim S. I. Abuhaiba
2012-08-01
In this paper, a symmetric key block cipher cryptosystem is proposed, involving multiple two-dimensional chaotic maps and using a 128-bit external secret key. Computer simulations indicate that the cipher has good diffusion and confusion properties with respect to the plaintext and the key. Moreover, it produces ciphertext with a random distribution. The computation time is much less than that of previous related works. Theoretic analysis verifies its superiority to previous cryptosystems against different types of attacks.
Static Structure of Two-Dimensional Granular Chain
Institute of Scientific and Technical Information of China (English)
WEN Ping-Ping; LI Liang-Sheng; ZHENG Ning; SHI Qing-Fan
2010-01-01
Static packing structures of two-dimensional granular chains are investigated experimentally. It is shown that the packing density approaches saturation exponentially as the length of the chain increases. The packing structures are globally disordered, while local square crystallization is found by using the radial distribution function. This characteristic phase of chain packing is similar to a liquid-crystal state, with properties between those of a conventional liquid and a solid crystal.
Score distributions of gapped multiple sequence alignments down to the low-probability tail
Fieth, Pascal; Hartmann, Alexander K.
2016-08-01
Assessing the significance of alignment scores of optimally aligned DNA or amino acid sequences can be achieved via knowledge of the score distribution of random sequences. But this requires obtaining the distribution in the biologically relevant high-scoring region, where the probabilities are exponentially small. For gapless local alignments of infinitely long sequences this distribution is known analytically to follow a Gumbel distribution. Distributions for gapped local alignments and global alignments of finite lengths can only be obtained numerically. To obtain results for the small-probability region, specific statistical-mechanics-based rare-event algorithms can be applied. In previous studies, this was achieved for pairwise alignments. They showed that, contrary to results from previous simple sampling studies, strong deviations from the Gumbel distribution occur in the case of finite sequence lengths. Here we extend the studies to multiple sequence alignments with gaps, which are much more relevant for practical applications in molecular biology. We study the distributions of scores over a large range of the support, reaching probabilities as small as 10^-160, for global and local (sum-of-pair scores) multiple alignments. We find that even after suitable rescaling, eliminating the sequence-length dependence, the distributions for multiple alignment differ from the pairwise alignment case. Furthermore, we also show that the previously discussed Gaussian correction to the Gumbel distribution needs to be refined, also for the case of pairwise alignments.
Test of quantum thermalization in the two-dimensional transverse-field Ising model
Blaß, Benjamin; Rieger, Heiko
2016-12-01
We study the quantum relaxation of the two-dimensional transverse-field Ising model after global quenches with a real-time variational Monte Carlo method and address the question whether this non-integrable, two-dimensional system thermalizes or not. We consider both interaction quenches in the paramagnetic phase and field quenches in the ferromagnetic phase and compare the time-averaged probability distributions of non-conserved quantities like magnetization and correlation functions to the thermal distributions according to the canonical Gibbs ensemble obtained with quantum Monte Carlo simulations at temperatures defined by the excess energy in the system. We find that the occurrence of thermalization crucially depends on the quench parameters: While after the interaction quenches in the paramagnetic phase thermalization can be observed, our results for the field quenches in the ferromagnetic phase show clear deviations from the thermal system. These deviations increase with the quench strength and become especially clear comparing the shape of the thermal and the time-averaged distributions, the latter ones indicating that the system does not completely lose the memory of its initial state even for strong quenches. We discuss our results with respect to a recently formulated theorem on generalized thermalization in quantum systems.
Probability collectives a distributed multi-agent system approach for optimization
Kulkarni, Anand Jayant; Abraham, Ajith
2015-01-01
This book provides an emerging computational intelligence tool in the framework of collective intelligence for modeling and controlling distributed multi-agent systems, referred to as Probability Collectives. In the modified Probability Collectives methodology a number of constraint handling techniques are incorporated, which also reduce the computational complexity and improve the convergence and efficiency. Numerous examples and real world problems are used for illustration, which may also allow the reader to gain further insight into the associated concepts.
The Exit Distribution for Smart Kinetic Walk with Symmetric and Asymmetric Transition Probability
Dai, Yan
2017-03-01
It has been proved that the distribution of the point where the smart kinetic walk (SKW) exits a domain converges in distribution to harmonic measure on the hexagonal lattice. For other lattices, it is believed that this result still holds, and there is good numerical evidence to support this conjecture. Here we examine the effect of symmetry and asymmetry of the transition probability on each step of the SKW on the square lattice and test whether the exit distribution converges in distribution to harmonic measure as well. From our simulations, the limiting exit distribution of the SKW with a non-uniform but symmetric transition probability as the lattice spacing goes to zero is the harmonic measure. This result does not hold for an asymmetric transition probability. We are also interested in the difference between the exit distribution of the SKW with symmetric transition probability and harmonic measure. Our simulations provide strong support for an explicit conjecture about this first-order difference. The explicit formula for the conjecture will be given below.
Evaluation of probability distributions for concentration fluctuations in a building array
Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.
2017-10-01
The wide range of values observed in a measured concentration time series after the release of a dispersing airborne pollutant from a point source in the atmospheric boundary layer, and the hazard level associated with the peak values, demonstrate the necessity of predicting the concentration probability distribution. For this, statistical models describing the probability of occurrence are preferably employed. In this paper a concentration database pertaining to a field experiment of dispersion in an urban-like area (MUST experiment) from a continuously emitting source is used for the selection of the best performing statistical model between the Gamma and the Beta distributions. The skewness, the kurtosis as well as the inverses of the cumulative distribution function were compared between the two statistical models and the experiment. The evaluation is performed in the form of validation metrics such as the Fractional Bias (FB), the Normalized Mean Square Error (NMSE) and the factor-of-2 percentage. The Beta probability distribution agreed with the experimental results better than the Gamma probability distribution, except for the 25th percentile. Also, according to the significance tests using the BOOT software, the Beta model presented FB and NMSE values that are statistically different from those of the Gamma model, except for the 75th percentiles and the FB of the 99th percentiles. The effect of the stability conditions and source heights on the performance of the statistical models is also examined. For both cases the performance of the Beta distribution was slightly better than that of the Gamma.
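The validation metrics named above have standard definitions; a minimal sketch follows, with hypothetical observed and modelled percentile values standing in for real data.

```python
def fractional_bias(obs, mod):
    """FB = 2*(mean_obs - mean_mod) / (mean_obs + mean_mod)."""
    mo = sum(obs) / len(obs)
    mm = sum(mod) / len(mod)
    return 2.0 * (mo - mm) / (mo + mm)

def nmse(obs, mod):
    """NMSE = mean((obs - mod)^2) / (mean_obs * mean_mod)."""
    mo = sum(obs) / len(obs)
    mm = sum(mod) / len(mod)
    mse = sum((o - m) ** 2 for o, m in zip(obs, mod)) / len(obs)
    return mse / (mo * mm)

def fac2(obs, mod):
    """Factor-of-2 fraction: pairs with 0.5 <= mod/obs <= 2."""
    ok = sum(1 for o, m in zip(obs, mod) if 0.5 <= m / o <= 2.0)
    return ok / len(obs)

obs = [1.0, 2.0, 4.0, 8.0]   # hypothetical observed percentiles
mod = [1.1, 1.8, 4.5, 7.0]   # hypothetical modelled percentiles
print(fractional_bias(obs, mod), nmse(obs, mod), fac2(obs, mod))
```

A perfect model gives FB = 0, NMSE = 0 and FAC2 = 1, which is a convenient sanity check for any implementation.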
Institute of Scientific and Technical Information of China (English)
Xian-min Geng; Shu-chen Wan
2011-01-01
The compound negative binomial model, introduced in this paper, is a discrete-time version. We discuss the Markov properties of the surplus process, and study the ruin probability and the joint distributions of actuarial random vectors in this model. By the strong Markov property and the mass function of a defective renewal sequence, we obtain the explicit expressions of the ruin probability, the finite-horizon ruin probability, the joint distributions of T, U(T - 1), |U(T)| and inf_{0 ≤ n < T1} U(n) (i.e., the time of ruin, the surplus immediately before ruin, the deficit at ruin and the maximal deficit from ruin to recovery) and the distributions of some actuarial random vectors.
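A finite-horizon ruin probability can be illustrated by plain Monte Carlo on a generic discrete-time surplus process. This is a hedged sketch only: the geometric claim sizes and all parameter values are arbitrary assumptions, not the paper's compound negative binomial model or its explicit formulas.

```python
import random

random.seed(7)

def finite_horizon_ruin_prob(u, premium, p_claim, claim_mean,
                             horizon, trials=5000):
    """Monte Carlo estimate of P(ruin by `horizon`) for a toy
    discrete-time surplus process U(n) = u + n*premium - claims."""
    ruins = 0
    for _ in range(trials):
        surplus = u
        for _ in range(horizon):
            surplus += premium
            if random.random() < p_claim:
                # geometric claim size with the given mean (assumption)
                claim = 1
                while random.random() > 1.0 / claim_mean:
                    claim += 1
                surplus -= claim
            if surplus < 0:
                ruins += 1
                break
    return ruins / trials

psi0 = finite_horizon_ruin_prob(0, 1.0, 0.4, 2.0, horizon=100)
psi10 = finite_horizon_ruin_prob(10, 1.0, 0.4, 2.0, horizon=100)
print("ruin prob from u=0: %.3f, from u=10: %.3f" % (psi0, psi10))
```

With positive safety loading (expected claims 0.8 per period against premium 1.0), the ruin probability decreases sharply in the initial surplus, as the simulation confirms.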
Net-charge probability distributions in heavy ion collisions at chemical freeze-out
Braun-Munzinger, P; Karsch, F; Redlich, K; Skokov, V
2011-01-01
We explore net charge probability distributions in heavy ion collisions within the hadron resonance gas model. The distributions for strangeness, electric charge and baryon number are derived. We show that, within this model, net charge probability distributions and the resulting fluctuations can be computed directly from the measured yields of charged and multi-charged hadrons. The influence of multi-charged particles and quantum statistics on the shape of the distribution is examined. We discuss the properties of the net proton distribution along the chemical freeze-out line. The model results presented here can be compared with data at RHIC energies and at the LHC to possibly search for the relation between chemical freeze-out and QCD cross-over lines in heavy ion collisions.
Directory of Open Access Journals (Sweden)
Diogo de Carvalho Bezerra
2015-12-01
Full Text Available ABSTRACT Contributions from the sensitivity analysis of the parameters of the linear programming model for the elicitation of experts' beliefs are presented. The process allows for the calibration of the family of probability distributions obtained in the elicitation process. An experiment to obtain the probability distribution of a future event (Brazil vs. Spain soccer game in the 2013 FIFA Confederations Cup final game was conducted. The proposed sensitivity analysis step may help to reduce the vagueness of the information given by the expert.
THE LEBESGUE-STIELTJES INTEGRAL AS APPLIED IN PROBABILITY DISTRIBUTION THEORY
bounded variation and Borel measurable functions are set forth in the introduction. Chapter 2 is concerned with establishing a one-to-one correspondence between Lebesgue-Stieltjes measures and certain equivalence classes of functions which are monotone nondecreasing and continuous on the right. In Chapter 3 the Lebesgue-Stieltjes integral is defined and some of its properties are demonstrated. In Chapter 4 the probability distribution function is defined, and the notions in Chapters 2 and 3 are used to show that the Lebesgue-Stieltjes integral of any probability distribution
Hanayama, Nobutane; Sibuya, Masaaki
2016-08-01
In modern biology, theories of aging fall mainly into two groups: damage theories and programmed theories. If programmed theories are true, the probability that human beings live beyond a specific age will be zero. In contrast, if damage theories are true, such an age does not exist, and any longevity record will eventually be destroyed. In this article, to examine the real state, a special type of binomial model based on the generalized Pareto distribution has been applied to data on Japanese centenarians. From the results, it is concluded that the upper limit of the lifetime probability distribution in the Japanese population is estimated to be 123 years.
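The link between a fitted generalized Pareto distribution (GPD) and a finite upper limit of lifetime can be sketched as follows: for shape parameter ξ < 0 the GPD support is bounded above by u - σ/ξ. The threshold, scale and shape values below are hypothetical, chosen only so the endpoint lands near the estimate quoted above; they are not the paper's fitted values.

```python
# Generalized Pareto survival function above a threshold u:
#   S(x) = (1 + xi*(x - u)/sigma)^(-1/xi), valid while 1 + xi*(x-u)/sigma > 0
# For shape xi < 0 the support is bounded: upper endpoint = u - sigma/xi.

def gpd_survival(x, u, sigma, xi):
    z = 1.0 + xi * (x - u) / sigma
    return z ** (-1.0 / xi) if z > 0 else 0.0

# Hypothetical threshold/scale/shape (illustration only)
u, sigma, xi = 110.0, 4.0, -0.3
endpoint = u - sigma / xi
print("upper endpoint of lifetime distribution: %.1f years" % endpoint)
print("P(lifetime > 120):", gpd_survival(120.0, u, sigma, xi))
```

A positive fitted shape (damage theories) would instead give an unbounded support, which is exactly the dichotomy the abstract tests.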
Electronics based on two-dimensional materials.
Fiori, Gianluca; Bonaccorso, Francesco; Iannaccone, Giuseppe; Palacios, Tomás; Neumaier, Daniel; Seabaugh, Alan; Banerjee, Sanjay K; Colombo, Luigi
2014-10-01
The compelling demand for higher performance and lower power consumption in electronic systems is the main driving force of the electronics industry's quest for devices and/or architectures based on new materials. Here, we provide a review of electronic devices based on two-dimensional materials, outlining their potential as a technological option beyond scaled complementary metal-oxide-semiconductor switches. We focus on the performance limits and advantages of these materials and associated technologies, when exploited for both digital and analog applications, focusing on the main figures of merit needed to meet industry requirements. We also discuss the use of two-dimensional materials as an enabling factor for flexible electronics and provide our perspectives on future developments.
Two-dimensional ranking of Wikipedia articles
Zhirov, A. O.; Zhirov, O. V.; Shepelyansky, D. L.
2010-10-01
The Library of Babel, described by Jorge Luis Borges, stores an enormous amount of information. The Library exists ab aeterno. Wikipedia, a free online encyclopaedia, becomes a modern analogue of such a Library. Information retrieval and ranking of Wikipedia articles become the challenge of modern society. While PageRank highlights very well known nodes with many ingoing links, CheiRank highlights very communicative nodes with many outgoing links. In this way the ranking becomes two-dimensional. Using CheiRank and PageRank we analyze the properties of two-dimensional ranking of all Wikipedia English articles and show that it gives their reliable classification with rich and nontrivial features. Detailed studies are done for countries, universities, personalities, physicists, chess players, Dow-Jones companies and other categories.
Two-Dimensional NMR Lineshape Analysis
Waudby, Christopher A.; Ramos, Andres; Cabrita, Lisa D.; Christodoulou, John
2016-04-01
NMR titration experiments are a rich source of structural, mechanistic, thermodynamic and kinetic information on biomolecular interactions, which can be extracted through the quantitative analysis of resonance lineshapes. However, applications of such analyses are frequently limited by peak overlap inherent to complex biomolecular systems. Moreover, systematic errors may arise due to the analysis of two-dimensional data using theoretical frameworks developed for one-dimensional experiments. Here we introduce a more accurate and convenient method for the analysis of such data, based on the direct quantum mechanical simulation and fitting of entire two-dimensional experiments, which we implement in a new software tool, TITAN (TITration ANalysis). We expect the approach, which we demonstrate for a variety of protein-protein and protein-ligand interactions, to be particularly useful in providing information on multi-step or multi-component interactions.
Towards two-dimensional search engines
Ermann, Leonardo; Shepelyansky, Dima L
2011-01-01
We study the statistical properties of various directed networks using ranking of their nodes based on the dominant vectors of the Google matrix, known as PageRank and CheiRank. On average, PageRank orders nodes proportionally to the number of ingoing links, while CheiRank orders nodes proportionally to the number of outgoing links. In this way the ranking of nodes becomes two-dimensional, which paves the way for the development of two-dimensional search engines of a new type. Information flow properties on the PageRank-CheiRank plane are analyzed for networks of British, French and Italian Universities, Wikipedia, Linux Kernel, gene regulation and other networks. Methods of spam links control are also analyzed.
Toward two-dimensional search engines
Ermann, L.; Chepelianskii, A. D.; Shepelyansky, D. L.
2012-07-01
We study the statistical properties of various directed networks using ranking of their nodes based on the dominant vectors of the Google matrix known as PageRank and CheiRank. On average, PageRank orders nodes proportionally to the number of ingoing links, while CheiRank orders nodes proportionally to the number of outgoing links. In this way, the ranking of nodes becomes two-dimensional, which paves the way for the development of two-dimensional search engines of a new type. Statistical properties of information flow on the PageRank-CheiRank plane are analyzed for networks of British, French and Italian universities, Wikipedia, Linux Kernel, gene regulation and other networks. Special emphasis is placed on British university networks using the large database publicly available in the UK. Methods of spam links control are also analyzed.
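The two rankings can be sketched with a plain power iteration: CheiRank is simply PageRank computed on the link-reversed network. The four-node graph below is a hypothetical toy example, not taken from the networks studied above.

```python
def pagerank(out_links, n, d=0.85, iters=200):
    """Power iteration for PageRank; dangling mass is spread uniformly."""
    r = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1.0 - d) / n] * n
        for u in range(n):
            outs = out_links.get(u, [])
            if outs:
                share = d * r[u] / len(outs)
                for v in outs:
                    nxt[v] += share
            else:  # dangling node: distribute its rank uniformly
                for v in range(n):
                    nxt[v] += d * r[u] / n
        r = nxt
    return r

# Hypothetical toy directed network: node -> list of outgoing neighbors
out_links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n = 4
pr = pagerank(out_links, n)

# CheiRank = PageRank of the network with all links reversed
rev = {u: [] for u in range(n)}
for u, outs in out_links.items():
    for v in outs:
        rev[v].append(u)
cr = pagerank(rev, n)
print("PageRank:", pr)
print("CheiRank:", cr)
```

Plotting each node at (PageRank, CheiRank) gives exactly the two-dimensional ranking plane discussed in the abstract.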
A two-dimensional Dirac fermion microscope
Bøggild, Peter; Caridad, José M.; Stampfer, Christoph; Calogero, Gaetano; Papior, Nick Rübner; Brandbyge, Mads
2017-06-01
The electron microscope has been a powerful, highly versatile workhorse in the fields of material and surface science, micro and nanotechnology, biology and geology, for nearly 80 years. The advent of two-dimensional materials opens new possibilities for realizing an analogy to electron microscopy in the solid state. Here we provide a perspective view on how a two-dimensional (2D) Dirac fermion-based microscope can be realistically implemented and operated, using graphene as a vacuum chamber for ballistic electrons. We use semiclassical simulations to propose concrete architectures and design rules of 2D electron guns, deflectors, tunable lenses and various detectors. The simulations show how simple objects can be imaged with well-controlled and collimated in-plane beams consisting of relativistic charge carriers. Finally, we discuss the potential of such microscopes for investigating edges, terminations and defects, as well as interfaces, including external nanoscale structures such as adsorbed molecules, nanoparticles or quantum dots.
Gulev, Sergey; Tilinina, Natalia; Belyaev, Konstantin
2015-04-01
Surface turbulent heat fluxes from modern-era and first-generation reanalyses (NCEP-DOE, ERA-Interim, MERRA, NCEP-CFSR, JRA) as well as from satellite products (SEAFLUX, IFREMER, HOAPS) were intercompared using the framework of probability distributions for sensible and latent heat fluxes. For approximation of probability distributions and estimation of extreme flux values the Modified Fisher-Tippett (MFT) distribution has been used. Besides mean flux values, consideration is given to the comparative analysis of (i) parameters of the MFT probability density functions (scale and location), (ii) extreme flux values corresponding to high-order percentiles of fluxes (e.g. 99th and higher) and (iii) the fractional contribution of extreme surface flux events to the total surface turbulent fluxes integrated over months and seasons. The latter was estimated using both the fractional distribution derived from the MFT and empirical estimates based upon occurrence histograms. The strongest differences in the parameters of probability distributions of surface fluxes and extreme surface flux values between different reanalyses are found in the western boundary current extension regions and high latitudes, while the highest differences in the fractional contributions of surface fluxes may occur in mid-ocean regions, being closely associated with atmospheric synoptic dynamics. Generally, satellite surface flux products demonstrate relatively stronger extreme fluxes compared to reanalyses, even in the Northern Hemisphere midlatitudes where the data assimilation input in reanalyses is quite dense compared to the Southern Ocean regions.
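The empirical variant of the fractional-contribution estimate can be sketched directly from a sample, without any MFT fit. The lognormal sample below is a synthetic stand-in for flux data; the parameters are arbitrary assumptions.

```python
import math
import random

random.seed(3)

# Synthetic positively skewed "flux" sample (hypothetical lognormal
# stand-in for measured or reanalysis surface heat fluxes)
fluxes = [math.exp(random.gauss(3.0, 0.8)) for _ in range(10000)]

def empirical_quantile(xs, q):
    s = sorted(xs)
    return s[min(len(s) - 1, int(q * len(s)))]

p99 = empirical_quantile(fluxes, 0.99)
# Fractional contribution of events above the 99th percentile to the total
extreme_fraction = sum(x for x in fluxes if x > p99) / sum(fluxes)
print("99th percentile: %.1f  extreme fraction: %.3f" % (p99, extreme_fraction))
```

Because the sample is positively skewed, the top 1% of events carries considerably more than 1% of the integrated total, which is the effect the abstract quantifies.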
Lowe, V J; Bullard, A G; Coleman, R E
1995-12-01
The criteria used in the Prospective Investigation of Pulmonary Embolism Diagnosis (PIOPED) study for the interpretation of ventilation/perfusion scans are widely used and the probability of pulmonary embolism is determined from these criteria. The prevalence of pulmonary embolism in the PIOPED study was 33%. To investigate the similarity of patient populations who have ventilation/perfusion scans at one of the medical centers that participated in the PIOPED study and a small community hospital, the authors evaluated the probability category distributions of lung scans at the two institutions. They retrospectively interpreted 54 and 49 ventilation/perfusion lung scans selected from January, 1991, to June, 1992, at Duke University Medical Center and at Central Carolina Hospital, respectively. Studies were interpreted according to the PIOPED criteria. The percentage of studies assigned to each category at Duke University Medical Center and Central Carolina Hospital were 17% and 27% normal or very low probability, 31% and 59% low probability, 39% and 10% intermediate probability, and 13% and 4% high probability, respectively. The different distribution of probability categories between university and community hospitals suggests that the prevalence of disease may also be different. The post-test probability of pulmonary embolism is related to the prevalence of disease and the sensitivity and specificity of the ventilation/perfusion scan. Because these variables may differ in community hospital settings, the post-test probability of pulmonary embolism as determined by data from the PIOPED study should only be used in institutions with similar populations. Clinical management based upon the results of the PIOPED study may not be applicable to patients who have ventilation/perfusion scans performed in a community hospital.
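The dependence of post-test probability on disease prevalence follows from Bayes' rule. A minimal sketch, with hypothetical sensitivity and specificity values for a positive ("high probability") scan reading; only the 33% PIOPED prevalence is taken from the abstract.

```python
def post_test_probability(prevalence, sensitivity, specificity):
    """P(disease | positive test) by Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1.0 - prevalence) * (1.0 - specificity)
    return true_pos / (true_pos + false_pos)

# Hypothetical test characteristics; prevalences contrast the PIOPED
# population (33%) with a lower-prevalence community population (10%)
p_univ = post_test_probability(0.33, 0.90, 0.90)
p_comm = post_test_probability(0.10, 0.90, 0.90)
print("post-test probability: %.2f (33%% prevalence) vs %.2f (10%%)"
      % (p_univ, p_comm))
```

The same scan result implies a much lower post-test probability in the lower-prevalence population, which is why the abstract cautions against transferring PIOPED-derived probabilities to dissimilar settings.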
Two-Dimensional Scheduling: A Review
Directory of Open Access Journals (Sweden)
Zhuolei Xiao
2013-07-01
Full Text Available In this study, we present a literature review, classification schemes and an analysis of methodology for scheduling problems on a Batch Processing machine (BP) with both processing time and job size constraints, which is also regarded as Two-Dimensional (TD) scheduling. Special attention is given to scheduling problems with non-identical job sizes and processing times, with details of the basic algorithms and other significant results.
Two dimensional fermions in four dimensional YM
Narayanan, R
2009-01-01
Dirac fermions in the fundamental representation of SU(N) live on a two-dimensional torus flatly embedded in $R^4$. They interact with a four-dimensional SU(N) Yang-Mills vector potential preserving a global chiral symmetry at finite $N$. As the size of the torus in units of $\frac{1}{\Lambda_{SU(N)}}$ is varied from small to large, the chiral symmetry gets spontaneously broken in the infinite $N$ limit.
Two-dimensional Kagome photonic bandgap waveguide
DEFF Research Database (Denmark)
Nielsen, Jens Bo; Søndergaard, Thomas; Libori, Stig E. Barkou;
2000-01-01
The transverse-magnetic photonic-bandgap-guidance properties are investigated for a planar two-dimensional (2-D) Kagome waveguide configuration using a full-vectorial plane-wave-expansion method. Single-moded well-localized low-index guided modes are found. The localization of the optical modes is investigated with respect to the width of the 2-D Kagome waveguide, and the number of modes existing for specific frequencies and waveguide widths is mapped out.
String breaking in two-dimensional QCD
Hornbostel, K J
1999-01-01
I present results of a numerical calculation of the effects of light quark-antiquark pairs on the linear heavy-quark potential in light-cone quantized two-dimensional QCD. I extract the potential from the Q-Qbar component of the ground-state wavefunction, and observe string breaking at the heavy-light meson pair threshold. I briefly comment on the states responsible for the breaking.
Two-dimensional supramolecular electron spin arrays.
Wäckerlin, Christian; Nowakowski, Jan; Liu, Shi-Xia; Jaggi, Michael; Siewert, Dorota; Girovsky, Jan; Shchyrba, Aneliia; Hählen, Tatjana; Kleibert, Armin; Oppeneer, Peter M; Nolting, Frithjof; Decurtins, Silvio; Jung, Thomas A; Ballav, Nirmalya
2013-05-07
A bottom-up approach is introduced to fabricate two-dimensional self-assembled layers of molecular spin-systems containing Mn and Fe ions arranged in a chessboard lattice. We demonstrate that the Mn and Fe spin states can be reversibly operated by their selective response to coordination/decoordination of volatile ligands like ammonia (NH3). Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Two dimensional echocardiographic detection of intraatrial masses.
DePace, N L; Soulen, R L; Kotler, M N; Mintz, G S
1981-11-01
With two dimensional echocardiography, a left atrial mass was detected in 19 patients. Of these, 10 patients with rheumatic mitral stenosis had a left atrial thrombus. The distinctive two dimensional echocardiographic features of left atrial thrombus included a mass of irregular nonmobile laminated echoes within an enlarged atrial cavity, usually with a broad base of attachment to the posterior left atrial wall. Seven patients had a left atrial myxoma. Usually, the myxoma appeared as a mottled ovoid, sharply demarcated mobile mass attached to the interatrial septum. One patient had a right atrial angiosarcoma that appeared as a nonmobile mass extending from the inferior vena caval-right atrial junction into the right atrial cavity. One patient had a left atrial leiomyosarcoma producing a highly mobile mass attached to the lateral wall of the left atrium. M-mode echocardiography detected six of the seven myxomas, one thrombus and neither of the other tumors. Thus, two dimensional echocardiography appears to be the technique of choice in the detection, localization and differentiation of intraatrial masses.
Stress Wave Propagation in Two-dimensional Buckyball Lattice
Xu, Jun; Zheng, Bowen
2016-11-01
Orderly arrayed granular crystals exhibit an extraordinary capability to tune stress wave propagation. Granular systems of higher dimension render many more stress wave patterns, showing their great potential for physical and engineering applications. At the nanoscale, a one-dimensionally arranged buckyball (C60) system has shown the ability to support solitary waves. In this paper, stress wave behaviors of a two-dimensional buckyball (C60) lattice are investigated based on square close packing and hexagonal close packing. We show that the square close packed system supports highly directional Nesterenko solitary waves along initially excited chains, while the hexagonal close packed system tends to distribute the impulse and dissipate the impact exponentially. Results of numerical calculations based on a two-dimensional nonlinear spring model are in good agreement with the results of molecular dynamics simulations. This work enhances the understanding of wave properties and allows manipulation of nanoscale lattices and novel design of shock mitigation and nanoscale energy harvesting devices.
Importance measures for imprecise probability distributions and their sparse grid solutions
Institute of Scientific and Technical Information of China (English)
WANG; Pan; LU; ZhenZhou; CHENG; Lei
2013-01-01
For the imprecise probability distribution of a structural system, the variance based importance measures (IMs) of the inputs are investigated, and three IMs are defined on the conditions of random distribution parameters, interval distribution parameters and the mixture of those two types of distribution parameters. The defined IMs can reflect the influence of the inputs on the output of the structural system with imprecise distribution parameters, respectively. Due to the large computational cost of the variance based IMs, the sparse grid method is employed in this work to compute the variance based IMs at each reference point of the distribution parameters. For the three imprecise distribution parameter cases, the sparse grid method and the combination of the sparse grid method with a genetic algorithm are used to compute the defined IMs. Numerical and engineering examples are employed to demonstrate the rationality of the defined IMs and the efficiency of the applied methods.
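A variance-based first-order importance measure can be sketched with plain double-loop Monte Carlo rather than the sparse grid method used in the paper; this is a generic illustration on a toy linear model with a known answer, not the paper's structural examples.

```python
import random
import statistics

random.seed(0)

def model(x1, x2):
    # Toy model: analytically, S1 = Var(X1)/Var(Y) = 1/1.25 = 0.8
    return x1 + 0.5 * x2

N = 300  # outer and inner sample sizes (double-loop Monte Carlo)
cond_means = []
all_y = []
for _ in range(N):
    x1 = random.gauss(0.0, 1.0)          # fix the input of interest
    inner = [model(x1, random.gauss(0.0, 1.0)) for _ in range(N)]
    cond_means.append(sum(inner) / N)    # estimate of E[Y | X1 = x1]
    all_y.extend(inner)

# First-order importance measure: S1 = Var(E[Y|X1]) / Var(Y)
S1 = statistics.variance(cond_means) / statistics.variance(all_y)
print("estimated S1 = %.2f (analytic value 0.8)" % S1)
```

The cost of the double loop (N² model evaluations per input) is exactly the burden that motivates the sparse grid approach described above.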
The probability distribution model of air pollution index and its dominants in Kuala Lumpur
AL-Dhurafi, Nasr Ahmed; Razali, Ahmad Mahir; Masseran, Nurulkamal; Zamzuri, Zamira Hasanah
2016-11-01
This paper focuses on the statistical modeling of the distributions of the air pollution index (API) and its sub-indexes data observed at Kuala Lumpur in Malaysia. Five pollutants or sub-indexes are measured, including carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2) and particulate matter (PM10). Four probability distributions are considered, namely log-normal, exponential, Gamma and Weibull, in search of the best-fit distribution for the Malaysian air pollutants data. In order to determine the best distribution for describing the air pollutants data, five goodness-of-fit criteria are applied. This will help in minimizing the uncertainty in pollution resource estimates and improving the assessment phase of planning. The conflict in criterion results for selecting the best distribution was overcome by using the weight of ranks method. We found that the Gamma distribution is the best distribution for the majority of air pollutants data in Kuala Lumpur.
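The weight-of-ranks step can be sketched as follows: rank the candidate distributions under each criterion and sum the ranks, with the smallest total winning. All goodness-of-fit numbers below are hypothetical, chosen only to illustrate how conflicting criteria are reconciled.

```python
# Hypothetical goodness-of-fit values (lower = better) for four candidate
# distributions under five criteria (e.g. AIC, BIC, K-S, A-D, RMSE)
gof = {
    "log-normal":  [10.2, 11.0, 0.09, 0.80, 3.1],
    "exponential": [14.5, 15.1, 0.21, 2.10, 5.6],
    "Gamma":       [ 9.8, 10.9, 0.07, 0.75, 3.3],
    "Weibull":     [10.0, 10.7, 0.08, 0.90, 3.2],
}

def weight_of_ranks(gof):
    """Rank candidates per criterion (1 = best) and sum the ranks."""
    names = list(gof)
    n_criteria = len(next(iter(gof.values())))
    totals = {name: 0 for name in names}
    for j in range(n_criteria):
        ordered = sorted(names, key=lambda name: gof[name][j])
        for rank, name in enumerate(ordered, start=1):
            totals[name] += rank
    return totals

totals = weight_of_ranks(gof)
best = min(totals, key=totals.get)
print(totals, "->", best)
```

Note how no single candidate wins every criterion in this toy table, yet the rank sums still produce an unambiguous overall winner.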
Marco Bee
2012-01-01
This paper deals with the estimation of the lognormal-Pareto and the lognormal-Generalized Pareto mixture distributions. The log-likelihood function is discontinuous, so that Maximum Likelihood Estimation is not asymptotically optimal. For this reason, we develop an alternative method based on Probability Weighted Moments. We show that the standard version of the method can be applied to the first distribution, but not to the latter. Thus, in the lognormal-Generalized Pareto case, we work ou...
Calculation of Radar Probability of Detection in K-Distributed Sea Clutter and Noise
2011-04-01
No closed-form solution exists for the probability of detection in K-distributed clutter, so numerical methods are required. The K distribution is a compound model. The integration is performed with nodes and weights calculated using matrix methods, so that a general-purpose numerical integration routine is not required.
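A minimal sketch of a compound-model integration of this kind, under stated assumptions: exponential speckle intensity averaged over a unit-mean gamma texture (one common K-distribution construction), evaluated with a simple midpoint rule rather than matrix-derived quadrature nodes and weights, and computing the false alarm probability rather than detection probability for brevity.

```python
import math

def gamma_texture_pdf(x, nu):
    """Unit-mean gamma texture with shape nu (K-distribution component);
    evaluated in log space to avoid overflow for large nu."""
    log_pdf = (nu * math.log(nu) + (nu - 1.0) * math.log(x)
               - nu * x - math.lgamma(nu))
    return math.exp(log_pdf)

def pfa_k_clutter(threshold, nu, n=4000, xmax=20.0):
    """Pfa = E_texture[exp(-threshold/x)]: the exponential-intensity
    false-alarm probability averaged over the gamma texture, via a
    simple midpoint rule (stand-in for matrix-based quadrature)."""
    h = xmax / n
    total = 0.0
    for i in range(1, n + 1):
        x = (i - 0.5) * h
        total += math.exp(-threshold / x) * gamma_texture_pdf(x, nu) * h
    return total

# Large shape parameter: texture -> 1, recovering the Rayleigh limit exp(-t)
print(pfa_k_clutter(2.0, nu=200.0), math.exp(-2.0))
# Spiky clutter (small nu) has a heavier tail, i.e. a larger Pfa
print(pfa_k_clutter(5.0, nu=0.5), math.exp(-5.0))
```

The two limiting checks (Rayleigh recovery at large shape, tail inflation at small shape) are a useful sanity test for any such numerical scheme.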
Analysis of low probability of intercept (LPI) radar signals using the Wigner Distribution
Gau, Jen-Yu
2002-01-01
Approved for public release, distribution is unlimited. The parameters of Low Probability of Intercept (LPI) radar signals are hard to identify by using traditional periodogram signal processing techniques. Using the Wigner Distribution (WD), this thesis examines eight types of LPI radar signals. Signal-to-noise ratios of 0 dB and -6 dB are also investigated. The eight types of LPI radar signals examined include Frequency Modulation Continuous Wave (FMCW), Frank code, P1 code, P2 code, P3 code,...
Directory of Open Access Journals (Sweden)
Daniel Ting
2010-04-01
Full Text Available Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: (1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); (2) filtering of suspect conformations and outliers using B-factors or other features; (3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); (4) the method used for determining probability densities, ranging from simple histograms to modern nonparametric density estimation; and (5) whether they include nearest-neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp.
Steeneveld, W.; Gaag, van der L.C.; Barkema, H.W.; Hogeveen, H.
2009-01-01
Clinical mastitis (CM) can be caused by a wide variety of pathogens and farmers must start treatment before the actual causal pathogen is known. By providing a probability distribution for the causal pathogen, naive Bayesian networks (NBN) can serve as a management tool for farmers to decide which t
A.C.D. Donkers (Bas); T. Lourenco (Tania); B.G.C. Dellaert (Benedict); D.G. Goldstein (Daniel G.)
2013-01-01
In this paper we propose the use of preferred outcome distributions as a new method to elicit individuals' value and probability weighting functions in decisions under risk. Extant approaches for the elicitation of these two key ingredients of individuals' risk attitude typically rely
Institute of Scientific and Technical Information of China (English)
吕渭济; 崔巍
2001-01-01
In this paper, two kinds of models are presented and optimized for project investment risk income on the basis of the probability χ distribution. One kind of model is proved to have only a maximal value, and the other kind is proved to have no extreme values.
Local kinetic effects in two-dimensional plasma turbulence.
Servidio, S; Valentini, F; Califano, F; Veltri, P
2012-01-27
Using direct numerical simulations of a hybrid Vlasov-Maxwell model, kinetic processes are investigated in a two-dimensional turbulent plasma. In the turbulent regime, kinetic effects manifest through a deformation of the ion distribution function. These patterns of non-Maxwellian features are concentrated in space near regions of strong magnetic activity: the distribution function is modulated by the magnetic topology, and can elongate along or across the local magnetic field. These results open a new path in the study of kinetic processes such as heating, particle acceleration, and temperature anisotropy, commonly observed in astrophysical and laboratory plasmas.
Magnetization of two-dimensional superconductors with defects
Kashurnikov, V A; Zyubin, M V
2002-01-01
A new method for modeling the magnetization of layered high-temperature superconductors with defects, based on a Monte Carlo algorithm, is developed. Minimization of the free energy functional of the two-dimensional vortex system made it possible to obtain the equilibrium vortex density configurations and to calculate the magnetization of a superconductor with an arbitrary defect distribution over a wide range of temperatures. The magnetic induction profiles and the magnetic flux distribution inside the superconductor, proving the applicability of the Bean model, are calculated.
Directory of Open Access Journals (Sweden)
A. B. Levina
2016-03-01
Full Text Available Error detection codes are mechanisms that enable robust delivery of data over unreliable communication channels and devices. Unreliable channels and devices are error-prone objects; error detection codes allow such errors to be detected. There are two classes of error detecting codes: classical codes and security-oriented codes. Classical codes detect a high percentage of errors; however, they have a high probability of missing an error introduced by algebraic manipulation. In contrast, security-oriented codes are codes with a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes. A detailed study of this parameter allows analyzing the behavior of the error-correcting code when errors are injected into the encoding device. Likewise, the complexity of the encoding function plays an important role in security-oriented codes. Encoding functions with low computational complexity and a low probability of masking provide the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding function complexity on the error masking probability distribution. It will be shown that a more complex encoding function reduces the maximum of the error masking probability. It is also shown that increasing the function complexity changes the error masking probability distribution. In particular, increasing the computational complexity decreases the difference between the maximum and average values of the error masking probability. Our results show that functions with greater complexity have smoothed maxima of the error masking probability, which significantly complicates the analysis of the error-correcting code by an attacker. As a result, with a complex encoding function the probability of algebraic manipulation is reduced. The paper also discusses an approach to measuring the error masking
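The masking property described in this abstract can be illustrated on a toy binary linear code (an illustrative sketch, not one of the paper's security-oriented codes): for a linear code, an additive error pattern is masked, i.e., goes undetected, exactly when the pattern is itself a nonzero codeword.

```python
from itertools import product

def repetition_encode(bits):
    # [3,1] repetition code: each message bit is transmitted three times.
    return tuple(b for bit in bits for b in (bit, bit, bit))

n, k = 3, 1
codewords = {repetition_encode(msg) for msg in product((0, 1), repeat=k)}

# For a linear code, a nonzero additive error e is masked (undetected)
# exactly when e is itself a codeword.
masked = sum(1 for e in product((0, 1), repeat=n)
             if any(e) and e in codewords)
total_nonzero = 2 ** n - 1
masking_probability = masked / total_nonzero
print(masking_probability)  # 1/7: only e = (1,1,1) is masked
```

For the repetition code, only one of the seven nonzero error patterns lands back inside the code, which is the quantity the abstract calls the error masking probability.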
Directory of Open Access Journals (Sweden)
Aydın Kahriman
2011-11-01
Full Text Available Determining the diameter distribution of a stand and its relations with stand age, site index, density, and mixture percentage is very important both biologically and economically. The two-parameter Weibull, three-parameter Weibull, two-parameter Gamma, three-parameter Gamma, Beta, two-parameter Lognormal, three-parameter Lognormal, Normal, and Johnson SB probability density functions were used to determine diameter distributions. This study aimed to compare the performance of these functions in describing different diameter distributions and to identify the most successful one. The data were obtained from 162 temporary sample plots measured in Scots pine and Oriental beech mixed stands in the Black Sea Region. The results show that the four-parameter Johnson SB function is the most successful at describing the diameter distributions of both Scots pine and Oriental beech, based on error index values calculated from the differences between observed and predicted diameter distributions.
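As a rough sketch of the goodness-of-fit comparison described above (the exact error index used by the study is not given in the abstract, so the sum of absolute differences between observed and predicted class frequencies is assumed here; the diameter classes, counts, and Weibull parameters are likewise illustrative), a two-parameter Weibull candidate can be scored as follows:

```python
import math

def weibull_cdf(x, c, scale):
    # Two-parameter Weibull CDF: F(x) = 1 - exp(-(x/scale)^c).
    return 1.0 - math.exp(-((x / scale) ** c))

def error_index(observed_counts, class_edges, c, scale):
    # Assumed form of the error index: sum over diameter classes of
    # |observed relative frequency - predicted class probability|.
    n = sum(observed_counts)
    total = 0.0
    for count, (lo, hi) in zip(observed_counts, class_edges):
        predicted = weibull_cdf(hi, c, scale) - weibull_cdf(lo, c, scale)
        total += abs(count / n - predicted)
    return total

# Hypothetical 10-cm diameter classes and stem counts for one plot.
edges = [(0, 10), (10, 20), (20, 30), (30, 40)]
obs = [5, 30, 45, 20]
print(error_index(obs, edges, c=2.5, scale=22.0))
```

A lower index means a better fit; in the study, candidate functions would be ranked by such an index aggregated across the sample plots.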
Banik, S K; Ray, D S; Banik, Suman Kumar; Bag, Bidhan Chandra; Ray, Deb Shankar
2002-01-01
Traditionally, the quantum Brownian motion is described by Fokker-Planck or diffusion equations in terms of quasi-probability distribution functions, e.g., Wigner functions. These often become singular or negative in the full quantum regime. In this paper a simple approach to non-Markovian theory of quantum Brownian motion using {\\it true probability distribution functions} is presented. Based on an initial coherent state representation of the bath oscillators and an equilibrium canonical distribution of the quantum mechanical mean values of their co-ordinates and momenta we derive a generalized quantum Langevin equation in $c$-numbers and show that the latter is amenable to a theoretical analysis in terms of the classical theory of non-Markovian dynamics. The corresponding Fokker-Planck, diffusion and the Smoluchowski equations are the {\\it exact} quantum analogues of their classical counterparts. The present work is {\\it independent} of path integral techniques. The theory as developed here is a natural ext...
Two Dimensional Connectivity for Vehicular Ad-Hoc Networks
Farivar, Masoud; Ashtiani, Farid
2008-01-01
In this paper, we focus on two-dimensional connectivity in sparse vehicular ad hoc networks (VANETs). In this respect, we find thresholds for the arrival rates of vehicles at entrances of a block of streets such that the connectivity is guaranteed for any desired probability. To this end, we exploit a mobility model recently proposed for sparse VANETs, based on BCMP open queuing networks and solve the related traffic equations to find the traffic characteristics of each street and use the results to compute the exact probability of connectivity along these streets. Then, we use the results from percolation theory and the proposed fast algorithms for evaluation of bond percolation problem in a random graph corresponding to the block of the streets. We then find sufficiently accurate two dimensional connectivity-related parameters, such as the average number of intersections connected to each other and the size of the largest set of inter-connected intersections. We have also proposed lower bounds for the case ...
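The largest set of inter-connected intersections mentioned above can be estimated for a bond-percolation model of the street grid. The sketch below uses union-find on a rectangular lattice with hypothetical dimensions and bond probability; it is not the paper's BCMP-based traffic model, only the percolation step in isolation:

```python
import random

def largest_cluster(width, height, p_bond, seed=0):
    # Bond percolation on a street grid: each edge between neighboring
    # intersections is open with probability p_bond. Returns the size of
    # the largest connected set of intersections (union-find).
    rng = random.Random(seed)
    parent = list(range(width * height))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for y in range(height):
        for x in range(width):
            i = y * width + x
            if x + 1 < width and rng.random() < p_bond:
                union(i, i + 1)        # bond to the eastern neighbor
            if y + 1 < height and rng.random() < p_bond:
                union(i, i + width)    # bond to the southern neighbor

    counts = {}
    for i in range(width * height):
        r = find(i)
        counts[r] = counts.get(r, 0) + 1
    return max(counts.values())

print(largest_cluster(20, 20, 1.0))  # 400: every intersection connected
```

Sweeping `p_bond` (which in the paper would be derived from the per-street connectivity probabilities) reveals the percolation threshold above which a giant connected component of intersections appears.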
Wang, S Q; Zhang, H Y; Li, Z L
2016-10-01
Understanding the spatio-temporal distribution of a pest in orchards can provide important information for designing monitoring schemes and establishing better means of pest control. In this study, the spatial and temporal distribution of Bactrocera minax (Enderlein) (Diptera: Tephritidae) was assessed, and activity trends were evaluated by using probability kriging. Adults of B. minax were captured over two successive occurrences in a small-scale citrus orchard by using food bait traps, which were placed both inside and outside the orchard. The weekly spatial distribution of B. minax within the orchard and the adjacent woods was examined using semivariogram parameters. Edge concentration was observed during most weeks of the adult occurrence, and the adult population aggregated with high probability within a band less than 100 m wide on both the orchard and woods sides of the boundary. The sequential probability kriged maps showed that the adults were estimated in the marginal zone with higher probability, especially in the early and peak stages. The feeding, ovipositing, and mating behaviors of B. minax are possible explanations for these spatio-temporal patterns. Therefore, the spatial arrangement of traps or spraying spots, and their distance to the forest edge, should be considered to enhance control of B. minax in small-scale orchards.
Weakly disordered two-dimensional Frenkel excitons
Boukahil, A.; Zettili, Nouredine
2004-03-01
We report the results of studies of the optical properties of weakly disordered two-dimensional Frenkel excitons in the Coherent Potential Approximation (CPA). An approximate complex Green's function for a square lattice with nearest-neighbor interactions is used in the self-consistent equation to determine the coherent potential. It is shown that the Density of States is very much affected by the logarithmic singularities in the Green's function. Our CPA results are in excellent agreement with previous investigations by Schreiber and Toyozawa using the Monte Carlo simulation.
Two-dimensional photonic crystal surfactant detection.
Zhang, Jian-Tao; Smith, Natasha; Asher, Sanford A
2012-08-07
We developed a novel two-dimensional (2-D) crystalline colloidal array photonic crystal sensing material for the visual detection of amphiphilic molecules in water. A close-packed polystyrene 2-D array monolayer was embedded in a poly(N-isopropylacrylamide) (PNIPAAm)-based hydrogel film. These 2-D photonic crystals placed on a mirror show intense diffraction that enables them to be used for visual determination of analytes. Binding of surfactant molecules attaches ions to the sensor that swells the PNIPAAm-based hydrogel. The resulting increase in particle spacing red shifts the 2-D diffracted light. Incorporation of more hydrophobic monomers increases the sensitivity to surfactants.
Theory of two-dimensional transformations
Kanayama, Yutaka J.; Krahn, Gary W.
1998-01-01
The article of record may be found at http://dx.doi.org/10.1109/70.720359 (IEEE Transactions on Robotics and Automation). This paper proposes a new "heterogeneous" two-dimensional (2D) transformation group ___ to solve motion analysis/planning problems in robotics. In this theory, a 3×1 matrix is used to represent a transformation, as opposed to a 3×3 matrix in the homogeneous formulation. First, this theory is as capable as the homogeneous theory. Because of the minimal size, its implement...
Two-dimensional ranking of Wikipedia articles
Zhirov, A O; Shepelyansky, D L
2010-01-01
The Library of Babel, described by Jorge Luis Borges, stores an enormous amount of information. The Library exists {\\it ab aeterno}. Wikipedia, a free online encyclopaedia, becomes a modern analogue of such a Library. Information retrieval and ranking of Wikipedia articles become the challenge of modern society. We analyze the properties of two-dimensional ranking of all Wikipedia English articles and show that it gives their reliable classification with rich and nontrivial features. Detailed studies are done for countries, universities, personalities, physicists, chess players, Dow-Jones companies and other categories.
Mobility anisotropy of two-dimensional semiconductors
Lang, Haifeng; Liu, Zhirong
2016-01-01
The carrier mobility of anisotropic two-dimensional (2D) semiconductors under longitudinal acoustic (LA) phonon scattering was studied theoretically with the deformation potential theory. Based on the Boltzmann equation with the relaxation time approximation, an analytic formula for the intrinsic anisotropic mobility was deduced, which shows that the influence of the effective mass on the mobility anisotropy is larger than that of the deformation potential constant or the elastic modulus. Parameters were collected for various anisotropic 2D materials (black phosphorus, Hittorf's phosphorus, BC$_2$N, MXene, TiS$_3$, GeCH$_3$) to calculate their mobility anisotropy. It was revealed that the anisotropic ratio was overestimated in the past.
Sums of two-dimensional spectral triples
DEFF Research Database (Denmark)
Christensen, Erik; Ivan, Cristina
2007-01-01
construct a sum of two dimensional modules which reflects some aspects of the topological dimensions of the compact metric space, but this will only give the metric back approximately. At the end we make an explicit computation of the last module for the unit interval in. The metric is recovered exactly......, the Dixmier trace induces a multiple of the Lebesgue integral but the growth of the number of eigenvalues is different from the one found for the standard differential operator on the unit interval....
Binding energy of two-dimensional biexcitons
DEFF Research Database (Denmark)
Singh, Jai; Birkedal, Dan; Vadim, Lyssenko;
1996-01-01
Using a model structure for a two-dimensional (2D) biexciton confined in a quantum well, it is shown that the form of the Hamiltonian of the 2D biexciton reduces into that of an exciton. The binding energies and Bohr radii of a 2D biexciton in its various internal energy states are derived...... analytically using the fractional dimension approach. The ratio of the binding energy of a 2D biexciton to that of a 2D exciton is found to be 0.228, which agrees very well with the recent experimental value. The results of our approach are compared with those of earlier theories....
Dynamics of film. [two dimensional continua theory
Zak, M.
1979-01-01
The general theory of films as two-dimensional continua is elaborated upon. As physical realizations of such a model, this paper examines inextensible films, elastic films, and nets. The suggested dynamic equations have enabled us to find the characteristic speeds of wave propagation of the invariants of external and internal geometry and to formulate criteria for the instability of their shape. Also included is a detailed account of the equations describing film motion beyond the limits of shape stability, accompanied by the formation of wrinkles. The theory is illustrated by examples.
About the probability distribution of a quantity with given mean and variance
Olivares, Stefano
2012-01-01
Supplement 1 to GUM (GUM-S1) recommends the use of maximum entropy principle (MaxEnt) in determining the probability distribution of a quantity having specified properties, e.g., specified central moments. When we only know the mean value and the variance of a variable, GUM-S1 prescribes a Gaussian probability distribution for that variable. When further information is available, in the form of a finite interval in which the variable is known to lie, we indicate how the distribution for the variable in this case can be obtained. A Gaussian distribution should only be used in this case when the standard deviation is small compared to the range of variation (the length of the interval). In general, when the interval is finite, the parameters of the distribution should be evaluated numerically, as suggested by I. Lira, Metrologia, 46 L27 (2009). Here we note that the knowledge of the range of variation is equivalent to a bias of the distribution toward a flat distribution in that range, and the principle of mini...
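The MaxEnt prescription described above rests on the fact that, among all distributions with a given variance, the Gaussian has the largest differential entropy. A quick numerical check against a uniform distribution of the same variance (a standard textbook fact, not a computation from GUM-S1 itself):

```python
import math

def gaussian_entropy(sigma):
    # Differential entropy of N(mu, sigma^2): 0.5 * ln(2*pi*e*sigma^2).
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

def uniform_entropy(width):
    # Differential entropy of U(a, a + width): ln(width).
    return math.log(width)

width = 1.0
sigma = width / math.sqrt(12.0)  # standard deviation of U(a, a + width)
print(gaussian_entropy(sigma) > uniform_entropy(width))  # True
```

This is why, with only a mean and variance specified, MaxEnt selects the Gaussian; once a finite support interval is also known, the maximizer changes and, as the abstract notes, its parameters must generally be found numerically.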
Institute of Scientific and Technical Information of China (English)
冉洪流
2004-01-01
In recent years, some researchers have studied the paleoearthquake along the Haiyuan fault and revealed a lot of paleoearthquake events. All available information allows more reliable analysis of earthquake recurrence interval and earthquake rupture patterns along the Haiyuan fault. Based on this paleoseismological information, the recurrence probability and magnitude distribution for M≥6.7 earthquakes in future 100 years along the Haiyuan fault can be obtained through weighted computation by using Poisson and Brownian passage time models and considering different rupture patterns. The result shows that the recurrence probability of MS≥6.7 earthquakes is about 0.035 in future 100 years along the Haiyuan fault.
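Under the Poisson branch of the weighted model above, the probability of at least one event in a horizon T at rate λ is 1 - exp(-λT). The sketch below back-solves an illustrative rate from the quoted 0.035-in-100-years figure; it is not a recurrence rate reported by the paper:

```python
import math

def poisson_prob_at_least_one(rate_per_year, horizon_years):
    # Poisson model: P(N >= 1 in T years) = 1 - exp(-lambda * T).
    return 1.0 - math.exp(-rate_per_year * horizon_years)

# Illustrative rate, back-solved from the abstract's 0.035 / 100 yr figure.
rate = -math.log(1.0 - 0.035) / 100.0
print(round(poisson_prob_at_least_one(rate, 100), 3))  # 0.035
```

The Brownian passage time branch of the weighted computation would replace this memoryless expression with a renewal distribution conditioned on the time elapsed since the last event.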
Energy Technology Data Exchange (ETDEWEB)
Parsons, Brendon A.; Marney, Luke C.; Siegler, William C.; Hoggard, Jamin C.; Wright, Bob W.; Synovec, Robert E.
2015-04-07
Multi-dimensional chromatographic instrumentation produces information-rich, and chemically complex data containing meaningful chemical signals and/or chemical patterns. Two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC × GC – TOFMS) is a prominent instrumental platform that has been applied extensively for discovery-based experimentation, where samples are sufficiently volatile or amenable to derivatization. Use of GC × GC – TOFMS and associated data analysis strategies aim to uncover meaningful chemical signals or chemical patterns. However, for complex samples, meaningful chemical information is often buried in a background of less meaningful chemical signal and noise. In this report, we utilize the tile-based F-ratio software in concert with the standard addition method by spiking non-native chemicals into a diesel fuel matrix at low concentrations. While the previous work studied the concentration range of 100-1000 ppm, the current study focuses on the 0 ppm to 100 ppm analyte spike range. This study demonstrates the sensitivity and selectivity of the tile-based F-ratio software for discovery of true positives in the non-targeted analysis of a chemically complex and analytically challenging sample matrix. By exploring the low concentration spike levels, we gain a better understanding of the limit of detection (LOD) of the tile-based F-ratio software with GC × GC – TOFMS data.
Gulev, S.
2015-12-01
Surface turbulent heat fluxes from modern-era and first-generation reanalyses (NCEP-DOE, ERA-Interim, MERRA, NCEP-CFSR, JRA) as well as from satellite products (SEAFLUX, IFREMER, HOAPS) were intercompared using the framework of probability distributions for sensible and latent heat fluxes. For approximation of the probability distributions and estimation of extreme flux values, the Modified Fisher-Tippett (MFT) distribution has been used. Besides mean flux values, consideration is given to the comparative analysis of (i) parameters of the MFT probability density functions (scale and location), (ii) extreme flux values corresponding to high-order percentiles of fluxes (e.g. the 99th and higher) and (iii) the fractional contribution of extreme surface flux events to the total surface turbulent fluxes integrated over months and seasons. The latter was estimated using both the fractional distribution derived from the MFT and empirical estimates based upon occurrence histograms. The strongest differences in the parameters of the probability distributions of surface fluxes and in extreme surface flux values between different reanalyses are found in the western boundary current extension regions and at high latitudes, while the highest differences in the fractional contributions of surface fluxes may occur in mid-ocean regions, being closely associated with atmospheric synoptic dynamics. Generally, satellite surface flux products demonstrate relatively stronger extreme fluxes compared to reanalyses, even in the Northern Hemisphere midlatitudes where the data assimilation input in reanalyses is quite dense compared to the Southern Ocean regions. Our assessment also discriminated different reanalyses and satellite products with respect to their ability to quantify the role of extreme surface turbulent fluxes in forming ocean heat release in different regions.
Probability distribution of surface wind speed induced by convective adjustment on Venus
Yamamoto, Masaru
2017-03-01
The influence of convective adjustment on the spatial structure of Venusian surface wind and probability distribution of its wind speed is investigated using an idealized weather research and forecasting model. When the initially uniform wind is much weaker than the convective wind, patches of both prograde and retrograde winds with scales of a few kilometers are formed during active convective adjustment. After the active convective adjustment, because the small-scale convective cells and their related vertical momentum fluxes dissipate quickly, the large-scale (>4 km) prograde and retrograde wind patches remain on the surface and in the longitude-height cross-section. This suggests the coexistence of local prograde and retrograde flows, which may correspond to those observed by Pioneer Venus below 10 km altitude. The probability distributions of surface wind speed V during the convective adjustment have a similar form in different simulations, with a sharp peak around ∼0.1 m s-1 and a bulge developing on the flank of the probability distribution. This flank bulge is associated with the most active convection, which has a probability distribution with a peak at the wind speed 1.5-times greater than the Weibull fitting parameter c during the convective adjustment. The Weibull distribution P(> V) (= exp[-(V/c)k]) with best-estimate coefficients of Lorenz (2016) is reproduced during convective adjustments induced by a potential energy of ∼7 × 107 J m-2, which is calculated from the difference in total potential energy between initially unstable and neutral states. The maximum vertical convective heat flux magnitude is proportional to the potential energy of the convective adjustment in the experiments with the initial unstable-layer thickness altered. The present work suggests that convective adjustment is a promising process for producing the wind structure with occasionally generating surface winds of ∼1 m s-1 and retrograde wind patches.
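The Weibull exceedance distribution quoted above, P(>V) = exp[-(V/c)^k], can be evaluated directly; the parameter values below are illustrative, not the fitted coefficients of Lorenz (2016):

```python
import math

def weibull_exceedance(v, c, k):
    # Weibull exceedance probability: P(> V) = exp[-(V/c)^k].
    return math.exp(-((v / c) ** k))

# Illustrative scale (c) and shape (k) parameters, not the paper's fit.
c, k = 0.4, 2.0
# Exceedance is 1 at zero wind speed and decreases monotonically.
print(weibull_exceedance(0.0, c, k), weibull_exceedance(1.0, c, k))
```

The abstract's "bulge" corresponds to a local excess of probability on the flank of such a distribution, near wind speeds about 1.5 times the scale parameter c during active convective adjustment.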
Two-dimensional gauge theoretic supergravities
Cangemi, D.; Leblanc, M.
1994-05-01
We investigate two-dimensional supergravity theories, which can be built from a topological and gauge invariant action defined on an ordinary surface. One is the N = 1 supersymmetric extension of the Jackiw-Teitelboim model presented by Chamseddine in a superspace formalism. We complement the proof of Montano, Aoaki and Sonnenschein that this extension is topological and gauge invariant, based on the graded de Sitter algebra. Not only do the equations of motion correspond to the supergravity ones and do gauge transformations encompass local supersymmetries, but we also identify the ∫-theory with the superfield formalism action written by Chamseddine. Next, we show that the N = 1 supersymmetric extension of string-inspired two-dimensional dilaton gravity put forward by Park and Strominger cannot be written as a ∫-theory. As an alternative, we propose two topological and gauge theories that are based on a graded extension of the extended Poincaré algebra and satisfy a vanishing-curvature condition. Both models are supersymmetric extensions of the string-inspired dilaton gravity.
Two-Dimensional Theory of Scientific Representation
Directory of Open Access Journals (Sweden)
A Yaghmaie
2013-03-01
Full Text Available Scientific representation is an interesting topic for philosophers of science, many of whom have recently explored it from different points of view. There are currently two competing approaches to the issue: cognitive and non-cognitive, and each of them claims its own merits over the other. This article tries to provide a hybrid theory of scientific representation, called Two-Dimensional Theory of Scientific Representation, which has the merits of the two accounts and is free of their shortcomings. To do this, we will argue that although scientific representation needs to use the notion of intentionality, such a notion is defined and realized in a simply structural form contrary to what cognitive approach says about intentionality. After a short introduction, the second part of the paper is devoted to introducing theories of scientific representation briefly. In the third part, the structural accounts of representation will be criticized. The next step is to introduce the two-dimensional theory which involves two key components: fixing and structural fitness. It will be argued that fitness is an objective and non-intentional relation, while fixing is intentional.
Two-dimensional shape memory graphene oxide
Chang, Zhenyue; Deng, Junkai; Chandrakumara, Ganaka G.; Yan, Wenyi; Liu, Jefferson Zhe
2016-06-01
Driven by the increasing demand for micro-/nano-technologies, stimuli-responsive shape memory materials at nanoscale have recently attracted great research interests. However, by reducing the size of conventional shape memory materials down to approximately nanometre range, the shape memory effect diminishes. Here, using density functional theory calculations, we report the discovery of a shape memory effect in a two-dimensional atomically thin graphene oxide crystal with ordered epoxy groups, namely C8O. A maximum recoverable strain of 14.5% is achieved as a result of reversible phase transition between two intrinsically stable phases. Our calculations conclude co-existence of the two stable phases in a coherent crystal lattice, giving rise to the possibility of constructing multiple temporary shapes in a single material, thus, enabling highly desirable programmability. With an atomic thickness, excellent shape memory mechanical properties and electric field stimulus, the discovery of a two-dimensional shape memory graphene oxide opens a path for the development of exceptional micro-/nano-electromechanical devices.
Two-Dimensional Wavelength Selective Diffraction by High-Order Three-Dimensional Composite Grating
Institute of Scientific and Technical Information of China (English)
Kohji Furuhashi; Hideaki Okayama; Hirochika Nakajima
2003-01-01
We propose a wavelength selective diffraction using reflectors placed on three-dimensional grid cross points. Different wavelengths are separated into spots distributed in two-dimensional plane. Compact device with high port counts is attainable.
Institute of Scientific and Technical Information of China (English)
XU Quan; TIAN Qiang
2007-01-01
Two-dimensional compact-like discrete breathers in discrete two-dimensional monatomic square lattices are investigated by discussing a generalized discrete two-dimensional monatomic model. It is proven that two-dimensional compact-like discrete breathers exist not only in soft two-dimensional Φ4 potentials but also in hard two-dimensional Φ4 potentials and pure two-dimensional K4 lattices. The forms of the two-dimensional compact-like discrete breather cores in soft and hard two-dimensional Φ4 potentials are determined by the coupling parameter K4, while those in pure two-dimensional K4 lattices have no coupling with the parameter K4. The stability of the two-dimensional compact-like discrete breathers correlates closely with the coupling parameter K4 and the boundary conditions of the lattices.
A numerical study of the alpha model for two-dimensional magnetohydrodynamic turbulent flows
Mininni, P D; Pouquet, A G
2004-01-01
We explore some consequences of the ``alpha model,'' also called the ``Lagrangian-averaged'' model, for two-dimensional incompressible magnetohydrodynamic (MHD) turbulence. This model is an extension of the smoothing procedure in fluid dynamics which filters velocity fields locally while leaving their associated vorticities unsmoothed, and has proved useful for high Reynolds number turbulence computations. We consider several known effects (selective decay, dynamic alignment, inverse cascades, and the probability distribution functions of fluctuating turbulent quantities) in magnetofluid turbulence and compare the results of numerical solutions of the primitive MHD equations with their alpha-model counterparts' performance for the same flows, in regimes where available resolution is adequate to explore both. The hope is to justify the use of the alpha model in regimes that lie outside currently available resolution, as will be the case in particular in three-dimensional geometry or for magnetic Prandtl number...
Statistics of the inverse-cascade regime in two-dimensional magnetohydrodynamic turbulence.
Banerjee, Debarghya; Pandit, Rahul
2014-07-01
We present a detailed direct numerical simulation of statistically steady, homogeneous, isotropic, two-dimensional magnetohydrodynamic turbulence. Our study concentrates on the inverse cascade of the magnetic vector potential. We examine the dependence of the statistical properties of such turbulence on dissipation and friction coefficients. We extend earlier work significantly by calculating fluid and magnetic spectra, probability distribution functions (PDFs) of the velocity, magnetic, vorticity, current, stream-function, and magnetic-vector-potential fields, and their increments. We quantify the deviations of these PDFs from Gaussian ones by computing their flatnesses and hyperflatnesses. We also present PDFs of the Okubo-Weiss parameter, which distinguishes between vortical and extensional flow regions, and its magnetic analog. We show that the hyperflatnesses of PDFs of the increments of the stream function and the magnetic vector potential exhibit significant scale dependence and we examine the implication of this for the multiscaling of structure functions. We compare our results with those of earlier studies.
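The flatness used above is the normalized fourth moment of a field or of its increments, ⟨x⁴⟩/⟨x²⟩², which equals 3 for a Gaussian; deviations from 3 quantify the non-Gaussianity of the PDFs. A minimal estimator (with synthetic Gaussian samples standing in for the simulation fields):

```python
import random

def flatness(samples):
    # Flatness (kurtosis): <x^4> / <x^2>^2. Equals 3 for a Gaussian PDF;
    # larger values indicate heavy, intermittent tails.
    m2 = sum(x * x for x in samples) / len(samples)
    m4 = sum(x ** 4 for x in samples) / len(samples)
    return m4 / (m2 * m2)

random.seed(0)
gaussian = [random.gauss(0.0, 1.0) for _ in range(200_000)]
print(round(flatness(gaussian), 2))  # close to 3 for Gaussian samples
```

The hyperflatness is the analogous eighth-order ratio; applying such estimators to field increments at varying separations is what exposes the scale dependence the abstract reports.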
The force distribution probability function for simple fluids by density functional theory.
Rickayzen, G; Heyes, D M
2013-02-28
Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and the probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula gives P(F) ∝ exp(-AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft-sphere fluids at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low a density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.
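The Gaussian form P(F) ∝ exp(-AF²) above can be normalized in one dimension as P(F) = √(A/π)·exp(-AF²). The sketch below checks that normalization numerically; the value of A is illustrative, not one computed from a pair potential:

```python
import math

def p_force(f, a):
    # Normalized one-dimensional Gaussian force PDF:
    # P(F) = sqrt(A/pi) * exp(-A * F^2).
    return math.sqrt(a / math.pi) * math.exp(-a * f * f)

# Trapezoidal-rule check that P(F) integrates to one over [-10, 10].
a = 0.5  # illustrative value; in the theory A depends on density, T, etc.
fs = [i * 0.01 - 10.0 for i in range(2001)]
integral = sum(0.01 * 0.5 * (p_force(fs[i], a) + p_force(fs[i + 1], a))
               for i in range(len(fs) - 1))
print(round(integral, 4))  # ≈ 1.0
```

With a = 1/(2σ²), this is just a zero-mean Gaussian in the force component, which is the functional form the simulations confirm at high density.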
Institute of Scientific and Technical Information of China (English)
Li Wei; Hai-liang Yang
2004-01-01
In this paper we first consider a risk process in which claim inter-arrival times, and the time until the first claim, have an Erlang(2) distribution. An explicit solution is derived for the probability of ultimate ruin, given an initial reserve u, when the claim size follows a Pareto distribution. Following Ramsay [8], Laplace transforms and exponential integrals are used to derive the solution, which involves a single integral of real-valued functions along the positive real line, where the integrand is not of an oscillating kind. We then show that the ultimate ruin probability can be expressed as the sum of expected values of functions of two different Gamma random variables. Finally, the results are extended to the Erlang(n) case. Numerical examples are given to illustrate the main results.
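A crude finite-horizon Monte Carlo estimate of the ruin probability, with Erlang(2) inter-arrival times and Pareto claims as in the model above, can serve as a sanity check on such analytic solutions. All parameter values below are illustrative, not taken from the paper, and a finite horizon only approximates ultimate ruin:

```python
import random

def ruin_probability(u, premium, erlang_rate, alpha, xm,
                     horizon, n_paths, seed=1):
    # Monte Carlo sketch: reserve at the n-th claim instant is
    #   R = u + premium * t_n - (sum of the first n claims),
    # and ruin occurs the first time R drops below zero.
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            # Erlang(2) inter-arrival time: sum of two exponentials.
            t += rng.expovariate(erlang_rate) + rng.expovariate(erlang_rate)
            if t > horizon:
                break
            # Pareto claim by inverse transform: X = xm * U^(-1/alpha),
            # with U drawn from (0, 1].
            claims += xm * (1.0 - rng.random()) ** (-1.0 / alpha)
            if u + premium * t - claims < 0.0:
                ruined += 1
                break
    return ruined / n_paths

p = ruin_probability(u=10.0, premium=1.5, erlang_rate=2.0,
                     alpha=2.5, xm=0.5, horizon=200.0, n_paths=2000)
print(round(p, 3))
```

The premium rate here exceeds the expected claim rate (positive safety loading), so the estimate decreases as the initial reserve u grows, consistent with the qualitative behavior of the ultimate ruin probability.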
Energy Technology Data Exchange (ETDEWEB)
Segalman, D.; Reese, G.
1998-09-01
The von Mises stress is often used as the metric for evaluating design margins, particularly for structures made of ductile materials. For deterministic loads, both static and dynamic, the calculation of von Mises stress is straightforward, as is the resulting calculation of reliability. For loads modeled as random processes, the task is different; the response to such loads is itself a random process and its properties must be determined in terms of those of both the loads and the system. This has been done in the past by Monte Carlo sampling of numerical realizations that reproduce the second order statistics of the problem. Here, the authors present a method that provides analytic expressions for the probability distributions of von Mises stress which can be evaluated efficiently and with good precision numerically. Further, this new approach has the important advantage of providing the asymptotic properties of the probability distribution.
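For reference, the deterministic von Mises stress whose probability distribution is sought above is computed from the six Cauchy stress components by the standard formula (this is the textbook definition, not the authors' analytic distribution result):

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    # von Mises equivalent stress:
    # sqrt(0.5 * [(sx-sy)^2 + (sy-sz)^2 + (sz-sx)^2]
    #      + 3 * (txy^2 + tyz^2 + tzx^2))
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2
                            + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

print(von_mises(100.0, 0.0, 0.0, 0.0, 0.0, 0.0))  # 100.0 for uniaxial stress
```

Because this is a nonlinear (quadratic-form) function of the stress components, a Gaussian random load does not produce a Gaussian von Mises stress, which is why the paper's analytic expressions for its distribution are of practical value over Monte Carlo sampling.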
Magnetic reconnection in two-dimensional magnetohydrodynamic turbulence.
Servidio, S; Matthaeus, W H; Shay, M A; Cassak, P A; Dmitruk, P
2009-03-20
Systematic analysis of numerical simulations of two-dimensional magnetohydrodynamic turbulence reveals the presence of a large number of X-type neutral points where magnetic reconnection occurs. We examine the statistical properties of this ensemble of reconnection events that are spontaneously generated by turbulence. The associated reconnection rates are distributed over a wide range of values and scales with the geometry of the diffusion region. Locally, these events can be described through a variant of the Sweet-Parker model, in which the parameters are externally controlled by turbulence. This new perspective on reconnection is relevant in space and astrophysical contexts, where plasma is generally in a fully turbulent regime.
Wake-induced bending of two-dimensional plasma crystals
Energy Technology Data Exchange (ETDEWEB)
Röcker, T. B., E-mail: tbr@mpe.mpg.de; Ivlev, A. V., E-mail: ivlev@mpe.mpg.de; Zhdanov, S. K.; Morfill, G. E. [Max Planck Institute for Extraterrestrial Physics, 85741 Garching (Germany); Couëdel, L. [CNRS, Aix-Marseille-Université, Laboratoire de Physique des Interactions Ioniques et Moléculaires, UMR 7345, 13397 Marseille Cedex 20 (France)
2014-07-15
It is shown that the wake-mediated interactions between microparticles in a two-dimensional plasma crystal affect the shape of the monolayer, making it non-flat. The equilibrium shape is calculated for various distributions of the particle number density in the monolayer. For typical experimental conditions, the levitation height of particles in the center of the crystal can be noticeably smaller than at the periphery. It is suggested that the effect of wake-induced bending can be utilized in experiments, to deduce important characteristics of the interparticle interaction.
Electronic Transmission Properties of Two-Dimensional Quasi-Lattice
Institute of Scientific and Technical Information of China (English)
侯志林; 傅秀军; 刘有延
2002-01-01
In the framework of the tight-binding model, the electronic transmission properties of two-dimensional Penrose lattices with free boundary conditions are studied using the generalized eigenfunction method (Phys. Rev. B 60 (1999) 13444). The electronic transmission coefficients for Penrose lattices with different sizes and widths are calculated, and the results show strong energy dependence because of the quasiperiodic structure and quantum coherence effects. Around the Fermi level E = 0, there is an energy region with zero transmission amplitudes, which suggests that the studied systems are insulating. The spatial distributions of several typical electronic states with different transmission coefficients are plotted to display the propagation process.
Field analysis of two-dimensional focusing grating
Borsboom, P.P.; Frankena, H.J.
1995-01-01
The method that we have developed [P-P. Borsboom, Ph.D. dissertation (Delft University of Technology, Delft, The Netherlands); P-P. Borsboom and H. J. Frankena, J. Opt. Soc. Am. A 12, 1134–1141 (1995)] is successfully applied to a two-dimensional focusing grating coupler. The field in the focal region has been determined for symmetrical chirped gratings consisting of as many as 124 corrugations. The intensity distribution in the focal region agrees well with the approximate predictions of geo...
Hubig, Michael; Muggenthaler, Holger; Mall, Gita
2014-05-01
Bayesian estimation applied to temperature-based death time estimation was recently introduced as the conditional probability distribution or CPD-method by Biermann and Potente. The CPD-method is useful if there is external information that sets the boundaries of the true death time interval (victim last seen alive and found dead). CPD allows computation of probabilities for small time intervals of interest (e.g. no-alibi intervals of suspects) within the large true death time interval. In the light of the importance of the CPD for conviction or acquittal of suspects, the present study identifies a potential error source. Deviations in death time estimates will cause errors in the CPD-computed probabilities. We derive formulae to quantify the CPD error as a function of input error. Moreover, we observe a paradox: in cases in which the small no-alibi time interval is located at the boundary of the true death time interval, adjacent to the erroneous death time estimate, the CPD-computed probabilities for that small no-alibi interval will increase with increasing input deviation; otherwise, the CPD-computed probabilities will decrease. We therefore advise against using CPD if there is an indication of an error or a contra-empirical deviation in the death time estimates, especially if the death time estimates fall outside the true death time interval, even if the 95%-confidence intervals of the estimate still overlap the true death time interval.
2014-01-01
The traditional mine microseism locating methods are mainly based on the assumption that the wave velocity is uniform through the space, an assumption that goes against the laws of nature and therefore introduces errors. In this paper, the wave velocity is regarded as a random variable, and the probability distribution information of the wave velocity is fused into the traditional locating method. This paper puts forward the microseism source location method for the undersea mining on condition o...
2016-04-26
created using probability distribution functions. This new model performs as well or better than other modern models of the solar wind velocity. In... Physics, 120: 7987-8001, doi: 10.1002/2014JA020962. Abstract: The temporal and spatial variations of the thermospheric mass density during a series of...2015), Theoretical study of zonal differences of electron density at midlatitudes with GITM simulation, J. Geophys. Res. Space Physics, 120, 2951
Pauling resonant structures in real space through electron number probability distributions.
Pendas, A Martín; Francisco, E; Blanco, M A
2007-02-15
A general hierarchy of the coarse-grained electron probability distributions induced by exhaustive partitions of the physical space is presented. It is argued that when the space is partitioned into atomic regions the consideration of these distributions may provide a first step toward an orbital invariant treatment of resonant structures. We also show that, in this case, the total molecular energy and its components may be partitioned into structure contributions, providing a fruitful extension of the recently developed interacting quantum atoms approach (J. Chem. Theory Comput. 2005, 1, 1096). The above ideas are explored in the hydrogen molecule, where a complete statistical and energetic decomposition into covalent and ionic terms is presented.
Evolving Molecular Cloud Structure and the Column Density Probability Distribution Function
Ward, Rachel L; Sills, Alison
2014-01-01
The structure of molecular clouds can be characterized with the probability distribution function (PDF) of the mass surface density. In particular, the properties of the distribution can reveal the nature of the turbulence and star formation present inside the molecular cloud. In this paper, we explore how these structural characteristics evolve with time and also how they relate to various cloud properties as measured from a sample of synthetic column density maps of molecular clouds. We find that, as a cloud evolves, the peak of its column density PDF will shift to surface densities below the observational threshold for detection, resulting in an underlying lognormal distribution which has been effectively lost at late times. Our results explain why certain observations of actively star-forming, dynamically older clouds, such as the Orion molecular cloud, do not appear to have any evidence of a lognormal distribution in their column density PDFs. We also study the evolution of the slope and deviation point ...
Probability distribution of the entanglement across a cut at an infinite-randomness fixed point
Devakul, Trithep; Majumdar, Satya N.; Huse, David A.
2017-03-01
We calculate the probability distribution of entanglement entropy S across a cut of a finite one-dimensional spin chain of length L at an infinite-randomness fixed point using Fisher's strong randomness renormalization group (RG). Using the random transverse-field Ising model as an example, the distribution is shown to take the form p(S|L) ~ L^{-ψ(k)}, where k ≡ S/ln[L/L₀], the large deviation function ψ(k) is found explicitly, and L₀ is a nonuniversal microscopic length. We discuss the implications of such a distribution on numerical techniques that rely on entanglement, such as matrix-product-state-based techniques. Our results are verified with numerical RG simulations, as well as the actual entanglement entropy distribution for the random transverse-field Ising model, which we calculate for large L via a mapping to Majorana fermions.
Tomadakis, Manolis M.; Robertson, Teri J.
2003-07-01
We present a random walk based investigation of the pore size probability distribution and its moments, the survival probability and mean survival time, and the principal relaxation time, for random and ordered arrays of cylindrical fibers of various orientation distributions. The dimensionless mean survival time, principal relaxation time, mean pore size, and mean square pore size are found to increase with porosity, remain practically independent of the directionality of random fiber beds, and attain lower values for ordered arrays. Wide pore size distributions are obtained for random fiber structures and relatively narrow for ordered square arrays, all in very good agreement with theoretically predicted limiting values. Analytical results derived for the pore size probability and its lower moments for square arrays of fibers practically coincide with the corresponding simulation results. Earlier variational bounds on the mean survival time and principal relaxation time are obeyed by our numerical results in all cases, and are found to be quite sharp up to very high porosities. Dimensionless groups representing the deviation of such bounds from our simulation results vary in practically the same range as the corresponding values reported earlier for beds of spherical particles. A universal scaling expression of the literature relating the mean survival time to the mean pore size [S. Torquato and C. L. Y. Yeong, J. Chem. Phys. 106, 8814 (1997)] agrees very well with our results for all types of fiber structures, thus validated for the first time for anisotropic porous media.
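The random-walk estimation of survival statistics described above can be sketched for the simplest case: a square array reduced to one absorbing circular fiber cross-section per periodic unit cell. Every name and parameter below is illustrative, not taken from the study.

```python
import math
import random

def mean_survival_steps(fiber_radius, n_walkers=300, step=0.02, seed=1):
    """Random-walk estimate of the mean survival time (in steps) in the
    pore space of a square array of parallel fibers, reduced to one
    absorbing circular cross-section per periodic unit cell [0,1)^2."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_walkers):
        # Start at a uniformly random point of the pore space.
        while True:
            x, y = rng.random(), rng.random()
            if (x - 0.5) ** 2 + (y - 0.5) ** 2 > fiber_radius ** 2:
                break
        steps = 0
        while True:
            ang = rng.uniform(0.0, 2.0 * math.pi)
            x = (x + step * math.cos(ang)) % 1.0
            y = (y + step * math.sin(ang)) % 1.0
            steps += 1
            if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= fiber_radius ** 2:
                break  # absorbed at the fiber surface
        total += steps
    return total / n_walkers

# Denser beds (larger radius, lower porosity) absorb walkers sooner, so the
# mean survival time decreases -- consistent with the trend reported above.
t_dense = mean_survival_steps(0.45)
t_dilute = mean_survival_steps(0.15)
```

The fixed step length crudely approximates Brownian motion; shrinking it (with more walkers) would refine the estimate.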
Optimal design of unit hydrographs using probability distribution and genetic algorithms
Indian Academy of Sciences (India)
Rajib Kumar Bhattacharjya
2004-10-01
A nonlinear optimization model is developed to transmute a unit hydrograph into a probability distribution function (PDF). The objective function is to minimize the sum of the square of the deviation between predicted and actual direct runoff hydrograph of a watershed. The predicted runoff hydrograph is estimated by using a PDF. In a unit hydrograph, the depth of rainfall excess must be unity and the ordinates must be positive. Incorporation of a PDF ensures that the depth of rainfall excess for the unit hydrograph is unity, and the ordinates are also positive. Unit hydrograph ordinates are in terms of intensity of rainfall excess on a discharge per unit catchment area basis, the unit area thus representing the unit rainfall excess. The proposed method does not have any constraint. The nonlinear optimization formulation is solved using binary-coded genetic algorithms. The number of variables to be estimated by optimization is the same as the number of probability distribution parameters; gamma and log-normal probability distributions are used. The existing nonlinear programming model for obtaining optimal unit hydrograph has also been solved using genetic algorithms, where the constrained nonlinear optimization problem is converted to an unconstrained problem using penalty parameter approach. The results obtained are compared with those obtained by the earlier LP model and are fairly similar.
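The optimization the abstract describes can be sketched with a gamma PDF standing in for the unit hydrograph. A plain random search is used below in place of the paper's genetic algorithm, and the synthetic "observed" hydrograph is an assumption for illustration.

```python
import math
import random

def gamma_pdf(t, shape, scale):
    """Gamma density: positive ordinates and unit area by construction,
    which is exactly what the unit-hydrograph constraints require."""
    if t <= 0.0:
        return 0.0
    return (t ** (shape - 1.0) * math.exp(-t / scale)
            / (math.gamma(shape) * scale ** shape))

def fit_gamma_uh(times, observed, iters=20000, seed=0):
    """Random-search stand-in for the genetic algorithm: minimise the sum
    of squared deviations between observed ordinates and a gamma PDF."""
    rng = random.Random(seed)
    best_params, best_sse = None, float("inf")
    for _ in range(iters):
        shape = rng.uniform(1.0, 10.0)
        scale = rng.uniform(0.1, 5.0)
        sse = sum((gamma_pdf(t, shape, scale) - q) ** 2
                  for t, q in zip(times, observed))
        if sse < best_sse:
            best_params, best_sse = (shape, scale), sse
    return best_params, best_sse

# Synthetic "observed" direct-runoff ordinates drawn from a known gamma:
ts = [0.5 * i for i in range(1, 25)]
obs = [gamma_pdf(t, 3.0, 1.5) for t in ts]
params, sse = fit_gamma_uh(ts, obs)
```

As in the paper, only the distribution parameters are optimized, and the gamma form guarantees unit depth and positive ordinates without explicit constraints.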
Silva, Antonio
2005-03-01
It is well-known that the mathematical theory of Brownian motion was first developed in the Ph.D. thesis of Louis Bachelier for the French stock market before Einstein [1]. In Ref. [2] we studied the so-called Heston model, where the stock-price dynamics is governed by multiplicative Brownian motion with stochastic diffusion coefficient. We solved the corresponding Fokker-Planck equation exactly and found an analytic formula for the time-dependent probability distribution of stock price changes (returns). The formula interpolates between the exponential (tent-shaped) distribution for short time lags and the Gaussian (parabolic) distribution for long time lags. The theoretical formula agrees very well with the actual stock-market data ranging from the Dow-Jones index [2] to individual companies [3], such as Microsoft, Intel, etc. [1] Louis Bachelier, "Théorie de la spéculation," Annales Scientifiques de l'École Normale Supérieure, III-17:21-86 (1900). [2] A. A. Dragulescu and V. M. Yakovenko, "Probability distribution of returns in the Heston model with stochastic volatility," Quantitative Finance 2, 443-453 (2002); Erratum 3, C15 (2003). [cond-mat/0203046] [3] A. C. Silva, R. E. Prange, and V. M. Yakovenko, "Exponential distribution of financial returns at mesoscopic time lags: a new stylized fact," Physica A 344, 227-235 (2004). [cond-mat/0401225]
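The short-lag/long-lag interpolation can be illustrated by direct simulation. Below is a rough Euler-scheme sketch of a Heston-type model with made-up parameters (not the fitted values of Refs. [2,3]); sample excess kurtosis serves as a proxy for the departure from the Gaussian.

```python
import math
import random
import statistics

def heston_returns(lag_steps, n_paths=2000, dt=0.01, kappa=2.0,
                   theta=0.04, xi=0.6, burn_in=200, seed=2):
    """Log-returns over `lag_steps` from an Euler-discretised Heston-type
    model with uncorrelated noises; the variance is floored at zero.
    All parameters are illustrative, not fitted values."""
    rng = random.Random(seed)
    returns = []
    for _ in range(n_paths):
        v = theta
        # Burn-in so the stochastic variance reaches its stationary spread.
        for _ in range(burn_in):
            v = max(v + kappa * (theta - v) * dt
                    + xi * math.sqrt(v * dt) * rng.gauss(0.0, 1.0), 0.0)
        x = 0.0
        for _ in range(lag_steps):
            x += -0.5 * v * dt + math.sqrt(v * dt) * rng.gauss(0.0, 1.0)
            v = max(v + kappa * (theta - v) * dt
                    + xi * math.sqrt(v * dt) * rng.gauss(0.0, 1.0), 0.0)
        returns.append(x)
    return returns

def excess_kurtosis(xs):
    """Sample excess kurtosis; zero for a Gaussian, positive for fat tails."""
    m = statistics.fmean(xs)
    var = statistics.fmean([(x - m) ** 2 for x in xs])
    return statistics.fmean([(x - m) ** 4 for x in xs]) / var ** 2 - 3.0

# Short lags: fat-tailed (tent-shaped) returns; long lags: near-Gaussian.
k_short = excess_kurtosis(heston_returns(5))
k_long = excess_kurtosis(heston_returns(400))
```

At short lags the return is effectively a Gaussian with a randomly drawn variance, hence heavy tails; over long lags the variance averages out and the central limit theorem pushes the distribution toward the parabolic (Gaussian) shape.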
Optimal excitation of two dimensional Holmboe instabilities
Constantinou, Navid C
2010-01-01
Highly stratified shear layers are rendered unstable even at high stratifications by Holmboe instabilities when the density stratification is concentrated in a small region of the shear layer. These instabilities may cause mixing in highly stratified environments. However these instabilities occur in tongues for a limited range of parameters. We perform Generalized Stability analysis of the two dimensional perturbation dynamics of an inviscid Boussinesq stratified shear layer and show that Holmboe instabilities at high Richardson numbers can be excited by their adjoints at amplitudes that are orders of magnitude larger than by introducing initially the unstable mode itself. We also determine the optimal growth that obtains for parameters for which there is no instability. We find that there is potential for large transient growth regardless of whether the background flow is exponentially stable or not and that the characteristic structure of the Holmboe instability asymptotically emerges for parameter values ...
Phonon hydrodynamics in two-dimensional materials.
Cepellotti, Andrea; Fugallo, Giorgia; Paulatto, Lorenzo; Lazzeri, Michele; Mauri, Francesco; Marzari, Nicola
2015-03-06
The conduction of heat in two dimensions displays a wealth of fascinating phenomena of key relevance to the scientific understanding and technological applications of graphene and related materials. Here, we use density-functional perturbation theory and an exact, variational solution of the Boltzmann transport equation to study fully from first-principles phonon transport and heat conductivity in graphene, boron nitride, molybdenum disulphide and the functionalized derivatives graphane and fluorographene. In all these materials, and at variance with typical three-dimensional solids, normal processes keep dominating over Umklapp scattering well above cryogenic conditions, extending to room temperature and beyond. As a result, novel regimes emerge, with Poiseuille and Ziman hydrodynamics, hitherto typically confined to ultra-low temperatures, characterizing transport at ordinary conditions. Most remarkably, several of these two-dimensional materials admit wave-like heat diffusion, with second sound present at room temperature and above in graphene, boron nitride and graphane.
Probabilistic Universality in two-dimensional Dynamics
Lyubich, Mikhail
2011-01-01
In this paper we continue to explore infinitely renormalizable Hénon maps with small Jacobian. It was shown in [CLM] that contrary to the one-dimensional intuition, the Cantor attractor of such a map is non-rigid and the conjugacy with the one-dimensional Cantor attractor is at most 1/2-Hölder. Another formulation of this phenomenon is that the scaling structure of the Hénon Cantor attractor differs from its one-dimensional counterpart. However, in this paper we prove that the weight assigned by the canonical invariant measure to these bad spots tends to zero on microscopic scales. This phenomenon is called Probabilistic Universality. It implies, in particular, that the Hausdorff dimension of the canonical measure is universal. In this way, universality and rigidity phenomena of one-dimensional dynamics assume a probabilistic nature in the two-dimensional world.
Two-dimensional position sensitive neutron detector
Indian Academy of Sciences (India)
A M Shaikh; S S Desai; A K Patra
2004-08-01
A two-dimensional position sensitive neutron detector has been developed. The detector is a ³He + Kr filled multiwire proportional counter with charge division position readout and has a sensitive area of 345 mm × 345 mm, pixel size 5 mm × 5 mm, active depth 25 mm and is designed for efficiency of 70% for 4 Å neutrons. The detector is tested with 0.5 bar ³He + 1.5 bar krypton gas mixture in the active chamber and 2 bar ⁴He in the compensating chamber. The pulse height spectrum recorded at an anode potential of 2000 V shows energy resolution of ∼25% for the 764 keV peak. A spatial resolution of 8 mm × 6 mm is achieved. The detector is suitable for SANS studies in the range of 0.02–0.25 Å⁻¹.
Two-dimensional heterostructures for energy storage
Pomerantseva, Ekaterina; Gogotsi, Yury
2017-07-01
Two-dimensional (2D) materials provide slit-shaped ion diffusion channels that enable fast movement of lithium and other ions. However, electronic conductivity, the number of intercalation sites, and stability during extended cycling are also crucial for building high-performance energy storage devices. While individual 2D materials, such as graphene, show some of the required properties, none of them can offer all properties needed to maximize energy density, power density, and cycle life. Here we argue that stacking different 2D materials into heterostructured architectures opens an opportunity to construct electrodes that would combine the advantages of the individual building blocks while eliminating the associated shortcomings. We discuss characteristics of common 2D materials and provide examples of 2D heterostructured electrodes that showed new phenomena leading to superior electrochemical performance. We also consider electrode fabrication approaches and finally outline future steps to create 2D heterostructured electrodes that could greatly expand current energy storage technologies.
Rationally synthesized two-dimensional polymers.
Colson, John W; Dichtel, William R
2013-06-01
Synthetic polymers exhibit diverse and useful properties and influence most aspects of modern life. Many polymerization methods provide linear or branched macromolecules, frequently with outstanding functional-group tolerance and molecular weight control. In contrast, extending polymerization strategies to two-dimensional periodic structures is in its infancy, and successful examples have emerged only recently through molecular framework, surface science and crystal engineering approaches. In this Review, we describe successful 2D polymerization strategies, as well as seminal research that inspired their development. These methods include the synthesis of 2D covalent organic frameworks as layered crystals and thin films, surface-mediated polymerization of polyfunctional monomers, and solid-state topochemical polymerizations. Early application targets of 2D polymers include gas separation and storage, optoelectronic devices and membranes, each of which might benefit from predictable long-range molecular organization inherent to this macromolecular architecture.
Janus Spectra in Two-Dimensional Flows
Liu, Chien-Chia; Cerbus, Rory T.; Chakraborty, Pinaki
2016-09-01
In large-scale atmospheric flows, soap-film flows, and other two-dimensional flows, the exponent of the turbulent energy spectra, α, may theoretically take either of two distinct values, 3 or 5/3, but measurements downstream of obstacles have invariably revealed α = 3. Here we report experiments on soap-film flows where downstream of obstacles there exists a sizable interval in which α transitions from 3 to 5/3 for the streamwise fluctuations but remains equal to 3 for the transverse fluctuations, as if two mutually independent turbulent fields of disparate dynamics were concurrently active within the flow. This species of turbulent energy spectra, which we term the Janus spectra, has never been observed or predicted theoretically. Our results may open up new vistas in the study of turbulence and geophysical flows.
Local doping of two-dimensional materials
Wong, Dillon; Velasco, Jr, Jairo; Ju, Long; Kahn, Salman; Lee, Juwon; Germany, Chad E.; Zettl, Alexander K.; Wang, Feng; Crommie, Michael F.
2016-09-20
This disclosure provides systems, methods, and apparatus related to locally doping two-dimensional (2D) materials. In one aspect, an assembly including a substrate, a first insulator disposed on the substrate, a second insulator disposed on the first insulator, and a 2D material disposed on the second insulator is formed. A first voltage is applied between the 2D material and the substrate. With the first voltage applied between the 2D material and the substrate, a second voltage is applied between the 2D material and a probe positioned proximate the 2D material. The second voltage between the 2D material and the probe is removed. The first voltage between the 2D material and the substrate is removed. A portion of the 2D material proximate the probe when the second voltage was applied has a different electron density compared to a remainder of the 2D material.
Two-dimensional fourier transform spectrometer
Energy Technology Data Exchange (ETDEWEB)
DeFlores, Lauren; Tokmakoff, Andrei
2016-10-25
The present invention relates to a system and methods for acquiring two-dimensional Fourier transform (2D FT) spectra. Overlap of a collinear pulse pair and probe induce a molecular response which is collected by spectral dispersion of the signal modulated probe beam. Simultaneous collection of the molecular response, pulse timing and characteristics permit real time phasing and rapid acquisition of spectra. Full spectra are acquired as a function of pulse pair timings and numerically transformed to achieve the full frequency-frequency spectrum. This method demonstrates the ability to acquire information on molecular dynamics, couplings and structure in a simple apparatus. Multi-dimensional methods can be used for diagnostic and analytical measurements in the biological, biomedical, and chemical fields.
Two-dimensional fourier transform spectrometer
DeFlores, Lauren; Tokmakoff, Andrei
2013-09-03
The present invention relates to a system and methods for acquiring two-dimensional Fourier transform (2D FT) spectra. Overlap of a collinear pulse pair and probe induce a molecular response which is collected by spectral dispersion of the signal modulated probe beam. Simultaneous collection of the molecular response, pulse timing and characteristics permit real time phasing and rapid acquisition of spectra. Full spectra are acquired as a function of pulse pair timings and numerically transformed to achieve the full frequency-frequency spectrum. This method demonstrates the ability to acquire information on molecular dynamics, couplings and structure in a simple apparatus. Multi-dimensional methods can be used for diagnostic and analytical measurements in the biological, biomedical, and chemical fields.
FACE RECOGNITION USING TWO DIMENSIONAL LAPLACIAN EIGENMAP
Institute of Scientific and Technical Information of China (English)
Chen Jiangfeng; Yuan Baozong; Pei Bingnan
2008-01-01
Recently, some research efforts have shown that face images possibly reside on a nonlinear sub-manifold. Though the Laplacianfaces method considered the manifold structures of the face images, it has limits in solving the face recognition problem. This paper proposes a new feature extraction method, Two Dimensional Laplacian EigenMap (2DLEM), which especially considers the manifold structures of the face images, and extracts the proper features from the face image matrix directly by using a linear transformation. As opposed to Laplacianfaces, 2DLEM extracts features directly from 2D images without a vectorization preprocessing. To test 2DLEM and evaluate its performance, a series of experiments are performed on the ORL database and the Yale database. Moreover, several experiments are performed to compare the performance of three 2D methods. The experiments show that 2DLEM achieves the best performance.
Equivalency of two-dimensional algebras
Energy Technology Data Exchange (ETDEWEB)
Santos, Gildemar Carneiro dos; Pomponet Filho, Balbino Jose S. [Universidade Federal da Bahia (UFBA), BA (Brazil). Inst. de Fisica
2011-07-01
Full text: Let us consider a vector z = xi + yj over the field of real numbers, whose basis (i,j) satisfies a given algebra. Any property of this algebra will be reflected in any function of z, so we can state that the knowledge of the properties of an algebra leads to more general conclusions than the knowledge of the properties of a function. However, structural properties of an algebra do not change when this algebra undergoes a linear transformation, though the structural constants defining this algebra do change. We say that two algebras are equivalent to each other whenever they are related by a linear transformation. In this case, we have found that some relations between the structural constants are sufficient to recognize whether or not an algebra is equivalent to another. Although the basis transforms linearly, the structural constants change like a third-order tensor; but some combinations of these tensors result in a linear transformation, allowing us to write the entries of the transformation matrix as functions of the structural constants. Eventually, a systematic way to find the transformation matrix between these equivalent algebras is obtained. In this sense, we have performed a thorough classification of associative commutative two-dimensional algebras, and find that even non-division algebras may be helpful in solving non-linear dynamic systems. The Mandelbrot set was used to obtain a pictorial view of each algebra, since equivalent algebras result in the same pattern. Presently we have succeeded in classifying some non-associative two-dimensional algebras, a task more difficult than for associative ones. (author)
Effect of Rain on Probability Distributions Fitted to Vehicle Time Headways
Directory of Open Access Journals (Sweden)
Hashim Mohammed Alhassan
2012-01-01
Full Text Available Time headway data generated from different rain conditions were fitted to probability distributions to see which ones best described the trends in headway behaviour in wet weather. Data were generated from the J5, a principal road in Johor Bahru, for two months, and the headways in the no-rain condition were analysed and compared to the rain-generated headway data. The results showed a decrease in headways between the no-rain and rain conditions, with further decreases as rainfall intensity increased. Thus, from the no-rain to the light-rain condition there was a 15.66% reduction in the mean headways; the mean headway reduction between the no-rain and medium-rain conditions was 19.97%, while the reduction between the no-rain and heavy-rain conditions was 25.65%. This trend is already acknowledged in the literature. The Burr probability distribution ranked first among five others in describing the trends in headway behaviour during rainfall. It passed the K-S, A2 and C-S goodness-of-fit tests at 95% and 99% respectively. The scale parameter of the Burr model and the P-value increased as the rain intensity increased. This suggests more vehicular clustering during rainfall, with the probability of this occurring increasing with rain intensity. The coefficient of variation and skewness also pointed towards an increase in vehicle clustering. The Burr probability distribution can therefore be applied to model headways in rain and no-rain weather conditions, among others.
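The goodness-of-fit machinery used in the study is easy to sketch: sample from a Burr XII distribution by inverse-CDF and compute the K-S distance against the model CDF. The parameters below are illustrative, not the fitted headway values.

```python
import random

def burr_cdf(x, c, k, scale=1.0):
    """Burr XII cumulative distribution function."""
    if x <= 0.0:
        return 0.0
    return 1.0 - (1.0 + (x / scale) ** c) ** (-k)

def burr_sample(n, c, k, scale=1.0, seed=3):
    """Inverse-CDF sampling from Burr XII (illustrative parameters)."""
    rng = random.Random(seed)
    return [scale * ((1.0 - rng.random()) ** (-1.0 / k) - 1.0) ** (1.0 / c)
            for _ in range(n)]

def ks_statistic(sample, cdf):
    """Kolmogorov-Smirnov distance between an empirical sample and a
    model CDF -- the K-S goodness-of-fit measure cited above."""
    xs = sorted(sample)
    n = len(xs)
    return max(max(abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
               for i, x in enumerate(xs))

# Simulated "headways" and their K-S distance to the generating model;
# compare d with the 5% critical value, roughly 1.36 / sqrt(n).
headways = burr_sample(1000, c=2.0, k=3.0, scale=4.0)
d = ks_statistic(headways, lambda x: burr_cdf(x, 2.0, 3.0, 4.0))
```

Fitting real headway data would add a parameter search over (c, k, scale) before this test, as the study does across its five candidate distributions.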
Diffusion in the two-dimensional nonoverlapping Lorentz gas
James, Corinne P.; Evans, Glenn T.
1987-10-01
The self-diffusion coefficient, velocity autocorrelation function, and distribution of collision times for a two-dimensional nonoverlapping Lorentz gas were calculated using molecular dynamics simulation. The systems studied covered a range of densities, from a packing fraction (πNr2/L2) of 0.01 to 0.8. Self-diffusion coefficients were found to agree at all densities with kinetic theory predictions [A. Weijland and J. M. J. van Leeuwen, Physica 38, 35 (1968)] if the radial distribution function (rdf) was taken into account. The density dependence of the decay of the velocity autocorrelation function was qualitatively different from that predicted by kinetic theory. The distribution of collision times was nearly exponential for all but the highest density studied.
Burkhart, Blakesley; Murray, Claire; Stanimirovic, Snezana
2015-01-01
The shape of the probability distribution function (PDF) of molecular clouds is an important ingredient for modern theories of star formation and turbulence. Recently, several studies have pointed out observational difficulties with constraining the low column density (i.e. Av < 1) PDF using dust tracers. In order to constrain the shape and properties of the low column density probability distribution function, we investigate the PDF of multiphase atomic gas in the Perseus molecular cloud using opacity-corrected GALFA-HI data and compare the PDF shape and properties to the total gas PDF and the N(H2) PDF. We find that the shape of the PDF in the atomic medium of Perseus is well described by a lognormal distribution, and not by a power-law or bimodal distribution. The peak of the atomic gas PDF in and around Perseus lies at the HI-H2 transition column density for this cloud, past which the N(H2) PDF takes on a power-law form. We find that the PDF of the atomic gas is narrow and at column densities larger than...
Criticality of the net-baryon number probability distribution at finite density
Directory of Open Access Journals (Sweden)
Kenji Morita
2015-02-01
Full Text Available We compute the probability distribution P(N) of the net-baryon number at finite temperature and quark-chemical potential, μ, at a physical value of the pion mass in the quark-meson model within the functional renormalization group scheme. For μ/T<1, the model exhibits the chiral crossover transition which belongs to the universality class of the O(4) spin system in three dimensions. We explore the influence of the chiral crossover transition on the properties of the net baryon number probability distribution, P(N). By considering ratios of P(N) to the Skellam function, with the same mean and variance, we unravel the characteristic features of the distribution that are related to O(4) criticality at the chiral crossover transition. We explore the corresponding ratios for data obtained at RHIC by the STAR Collaboration and discuss their implications. We also examine O(4) criticality in the context of binomial and negative-binomial distributions for the net proton number.
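The Skellam reference function with matched mean and variance can be computed directly, since the net number N1 − N2 of two independent Poisson variables has mean μ1 − μ2 and variance μ1 + μ2. The moments below are placeholder values, not lattice or STAR numbers.

```python
import math

def skellam_pmf(n, mu1, mu2, terms=60):
    """Skellam PMF for N = N1 - N2 with N1 ~ Poisson(mu1), N2 ~ Poisson(mu2),
    computed by direct summation over the subtracted count."""
    return sum(math.exp(-mu1 - mu2) * mu1 ** (n + k) * mu2 ** k
               / (math.factorial(n + k) * math.factorial(k))
               for k in range(max(0, -n), terms))

# Skellam reference with the same mean M and variance V as the data:
M, V = 1.0, 5.0
mu1, mu2 = (V + M) / 2.0, (V - M) / 2.0
probs = {n: skellam_pmf(n, mu1, mu2) for n in range(-25, 30)}
```

Ratios P(N)/Skellam with these matched moments are the diagnostic the abstract uses to expose O(4) critical contributions.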
Two-dimensional visualization of cluster beams by microchannel plates
Khoukaz, Alfons; Grieser, Silke; Hergemöller, Ann-Katrin; Köhler, Esperanza; Täschner, Alexander
2013-01-01
An advanced technique for a two-dimensional real time visualization of cluster beams in vacuum as well as of the overlap volume of cluster beams with particle accelerator beams is presented. The detection system consists of an array of microchannel plates (MCP) in combination with a phosphor screen which is read out by a CCD camera. This setup together with the ionization of a cluster beam by an electron or ion beam allows for spatially resolved investigations of the cluster beam position, size, and intensity. Moreover, since electrically uncharged clusters remain undetected, the operation in an internal beam experiment opens the way to monitor the overlap region and thus the position and size of an accelerator beam crossing an originally electrically neutral cluster jet. The observed intensity distribution of the recorded image is directly proportional to the convolution of the spatial ion beam and cluster beam intensities and is thereby a direct measure of the two-dimensional luminosity distribution. This inf...
Probability distribution functions of turbulence in seepage-affected alluvial channel
Sharma, Anurag; Kumar, Bimlesh
2017-02-01
The present experimental study is carried out on the probability distribution functions (PDFs) of turbulent flow characteristics within near-bed-surface and away-from-bed surfaces for both no-seepage and seepage flow. Laboratory experiments were conducted in the plane sand bed for no seepage (NS), 10% seepage (10%S) and 15% seepage (15%S) cases. The experimental calculation of the PDFs of turbulent parameters such as Reynolds shear stress, velocity fluctuations, and bursting events is compared with the theoretical expression obtained by the Gram-Charlier (GC)-based exponential distribution. Experimental observations follow the computed PDF distributions for both no-seepage and seepage cases. The Jensen-Shannon divergence (JSD) method is used to measure the similarity between theoretical and experimental PDFs. The value of JSD for PDFs of velocity fluctuation lies between 0.0005 and 0.003, while the JSD value for PDFs of Reynolds shear stress varies between 0.001 and 0.006. Even with the application of seepage, the PDF distributions of bursting events, sweeps and ejections are well characterized by the exponential distribution of the GC series, except that a slight deflection of inward and outward interactions is observed, which may be due to weaker events. The value of JSD for outward and inward interactions ranges from 0.0013 to 0.032, while the JSD value for sweep and ejection events varies between 0.0001 and 0.0025. A theoretical expression for the PDF of turbulent intensity is developed in the present study, which agrees well with the experimental observations, with JSD between 0.007 and 0.015. The work presented is potentially applicable to the probability distribution of mobile-bed sediments in seepage-affected alluvial channels typically characterized by the various turbulent parameters. The purpose of PDF estimation from experimental data is that it provides a complete numerical description in the areas of turbulent flow either at a single point or a finite number of points.
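The JSD similarity measure used above can be sketched for discrete (histogrammed) PDFs; the two slightly shifted Gaussians below are placeholder inputs, not the experimental distributions.

```python
import math

def jsd(p, q):
    """Jensen-Shannon divergence (natural logarithm) between two discrete
    probability distributions given as equal-length probability vectors."""
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]
    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability bins.
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def discretized_gaussian(mean, sd, edges):
    """Bin a Gaussian onto a histogram (midpoint approximation),
    renormalised to sum to one."""
    w = [math.exp(-0.5 * ((0.5 * (a + b) - mean) / sd) ** 2)
         for a, b in zip(edges[:-1], edges[1:])]
    s = sum(w)
    return [x / s for x in w]

edges = [-5.0 + 0.1 * i for i in range(101)]
p = discretized_gaussian(0.0, 1.0, edges)   # stand-in "experimental" PDF
q = discretized_gaussian(0.2, 1.0, edges)   # stand-in "theoretical" PDF
```

JSD is symmetric and bounded by ln 2, so small values such as the 0.0005–0.006 range quoted above indicate near-identical distributions.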
On numerical evaluation of two-dimensional phase integrals
DEFF Research Database (Denmark)
Lessow, H.; Rusch, W.; Schjær-Jacobsen, Hans
1975-01-01
The relative advantages of several common numerical integration algorithms used in computing two-dimensional phase integrals are evaluated.
Fluid dynamics of two-dimensional pollination in Ruppia maritima
Musunuri, Naga; Bunker, Daniel; Pell, Susan; Pell, Fischer; Singh, Pushpendra
2016-11-01
The aim of this work is to understand the physics underlying the mechanisms of two-dimensional aquatic pollen dispersal, known as hydrophily. We observed two mechanisms by which the pollen released from male inflorescences of Ruppia maritima is adsorbed on a water surface: (i) inflorescences rise above the surface and, after they mature, their pollen mass falls onto the surface as clumps and disperses; (ii) inflorescences remain below the surface and produce air bubbles which carry their pollen mass to the surface, where it disperses. In both cases the dispersed pollen masses combined under the action of capillary forces to form pollen rafts. This increases the probability of pollination, since the capillary force drawing a pollen raft towards a stigma is much larger than that on a single pollen grain. The presence of a trace amount of surfactant can disrupt the pollination process so that the pollen is not transported or captured on the water surface. National Science Foundation.
Directory of Open Access Journals (Sweden)
Fonseca Rasmus
2009-10-01
Abstract Background Predicting the three-dimensional structure of a protein from its amino acid sequence is currently one of the most challenging problems in bioinformatics. The internal structure of helices and sheets is highly recurrent and helps reduce the search space significantly. However, random coil segments make up nearly 40% of proteins and have no apparent recurrent patterns, which limits the overall accuracy of protein structure prediction methods. Luckily, previous work has indicated that coil segments are in fact not completely random in structure, and flanking residues do seem to have a significant influence on the dihedral angles adopted by the individual amino acids in coil segments. In this work we attempt to predict a probability distribution of these dihedral angles based on the flanking residues. While attempts to predict dihedral angles of coil segments have been made previously, none have, to our knowledge, presented comparable results for the probability distribution of dihedral angles. Results In this paper we develop an artificial neural network that uses an input window of amino acids to predict a dihedral angle probability distribution for the middle residue in the input window. The trained neural network shows a significant improvement (4-68%) in predicting the most probable bin (covering a 30° × 30° area of the dihedral angle space) for all amino acids in the data set compared to baseline statistics. An accuracy comparable to that of secondary structure prediction (≈ 80%) is achieved by observing the 20 bins with the highest output values. Conclusion Many different protein structure prediction methods exist, and each uses different tools and auxiliary predictions to help determine the native structure. In this work the sequence is used to predict local context-dependent dihedral angle propensities in coil regions. This predicted distribution can potentially improve tertiary structure prediction.
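The binning and scoring scheme described above (30° × 30° cells of the dihedral-angle space, scored by whether the observed angles fall in the top-k predicted bins) can be sketched as follows; the function names and the flat row-major indexing are illustrative assumptions, not the paper's code:

```python
import numpy as np

def dihedral_bin(phi, psi, bin_size=30.0):
    """Map a (phi, psi) dihedral pair in degrees, each in [-180, 180),
    to a flat index over a (360/bin_size)^2 grid of bins."""
    n = int(360.0 / bin_size)            # 12 bins per axis for 30-degree cells
    i = int((phi + 180.0) // bin_size)
    j = int((psi + 180.0) // bin_size)
    return i * n + j

def top_k_hit(predicted_probs, phi, psi, k=20):
    """True if the observed dihedral pair falls in one of the k bins
    with the highest predicted probability."""
    top = np.argsort(predicted_probs)[::-1][:k]
    return dihedral_bin(phi, psi) in top

print(dihedral_bin(-60.0, -45.0))  # 52: the bin containing this (phi, psi) pair
```

With k = 1 this scores the "most probable bin" criterion; with k = 20 it reproduces the relaxed criterion that reaches roughly 80% accuracy in the abstract.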
Evolution Equation for a Joint Tomographic Probability Distribution of Spin-1 Particles
Korennoy, Ya. A.; Man'ko, V. I.
2016-11-01
The nine-component positive vector optical tomographic probability portrait of the quantum state of spin-1 particles, containing full spatial and spin information about the state without redundancy, is constructed. The suggested approach is also extended to the symplectic tomography representation and to representations with quasidistributions such as the Wigner function, the Husimi Q-function, and the Glauber-Sudarshan P-function. The evolution equations for the constructed vector optical and symplectic tomograms and vector quasidistributions for an arbitrary Hamiltonian are found. The evolution equations are also obtained in the special case of a charged spin-1 particle in an arbitrary electromagnetic field; these are the analogs of the non-relativistic Proca equation in the appropriate representations. The generalization of the proposed approach to the case of arbitrary spin is discussed. The possibility of formulating the quantum mechanics of systems with spin in terms of joint probability distributions, without the use of wave functions or density matrices, is explicitly demonstrated.
Cheng, Weiwei
2011-01-01
We consider an extension of the setting of label ranking, in which the learner is allowed to make predictions in the form of partial instead of total orders. Predictions of that kind are interpreted as a partial abstention: If the learner is not sufficiently certain regarding the relative order of two alternatives, it may abstain from this decision and instead declare these alternatives as being incomparable. We propose a new method for learning to predict partial orders that improves on an existing approach, both theoretically and empirically. Our method is based on the idea of thresholding the probabilities of pairwise preferences between labels as induced by a predicted (parameterized) probability distribution on the set of all rankings.
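The thresholding idea can be illustrated with a toy sketch. The rule below (assert a pairwise preference only when its predicted probability reaches a confidence threshold, abstain otherwise) is a simplified assumption for illustration, not the paper's exact decision criterion:

```python
def predict_partial_order(pref, t=0.8):
    """Given a matrix pref[i][j] = P(label i precedes label j), keep the
    ordered pair (i, j) only when that probability is at least t; all
    other pairs are abstained on, i.e. declared incomparable."""
    n = len(pref)
    order = set()
    for i in range(n):
        for j in range(n):
            if i != j and pref[i][j] >= t:
                order.add((i, j))
    return order

# hypothetical pairwise preference probabilities for three labels
pref = [
    [0.0, 0.9, 0.6],
    [0.1, 0.0, 0.55],
    [0.4, 0.45, 0.0],
]
print(predict_partial_order(pref, t=0.8))  # {(0, 1)}: abstain on the rest
```

Lowering t trades abstention for coverage: at t = 0.5 every pairwise decision is forced and the prediction collapses to a total order whenever the probabilities are consistent.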
Hobbs, Jennifer A; Towal, R Blythe; Hartmann, Mitra J Z
2015-08-01
Analysis of natural scene statistics has been a powerful approach for understanding neural coding in the auditory and visual systems. In the field of somatosensation, it has been more challenging to quantify the natural tactile scene, in part because somatosensory signals are so tightly linked to the animal's movements. The present work takes a step towards quantifying the natural tactile scene for the rat vibrissal system by simulating rat whisking motions to systematically investigate the probabilities of whisker-object contact in naturalistic environments. The simulations permit an exhaustive search through the complete space of possible contact patterns, thereby allowing for the characterization of the patterns that would most likely occur during long sequences of natural exploratory behavior. We specifically quantified the probabilities of 'concomitant contact', that is, given that a particular whisker makes contact with a surface during a whisk, what is the probability that each of the other whiskers will also make contact with the surface during that whisk? Probabilities of concomitant contact were quantified in simulations that assumed increasingly naturalistic conditions: first, the space of all possible head poses; second, the space of behaviorally preferred head poses as measured experimentally; and third, common head poses in environments such as cages and burrows. As environments became more naturalistic, the probability distributions shifted from exhibiting a 'row-wise' structure to a more diagonal structure. Results also reveal that the rat appears to use motor strategies (e.g. head pitches) that generate contact patterns that are particularly well suited to extract information in the presence of uncertainty. © 2015. Published by The Company of Biologists Ltd.
The probability distribution functions of emission line flux measurements and their ratios
Wesson, R; Scicluna, P
2016-01-01
Many physical parameters in astrophysics are derived using the ratios of two observed quantities. If the relative uncertainties on the measurements are small enough, uncertainties can be propagated analytically using simplifying assumptions, but for large normally distributed uncertainties, the probability distribution of the ratio becomes skewed, with a modal value offset from that expected in Gaussian uncertainty propagation. Furthermore, the most likely value of a ratio A/B is not equal to the reciprocal of the most likely value of B/A. The effect is most pronounced when the uncertainty on the denominator is larger than that on the numerator. We show that this effect is seen in an analysis of 12,126 spectra from the Sloan Digital Sky Survey. The intrinsically fixed ratio of the [O III] lines at 4959 and 5007 ${\AA}$ is conventionally expressed as the ratio of the stronger line to the weaker line. Thus, the uncertainty on the denominator is larger, and non-Gaussian probability distributions result. By taking thi...
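The skew described above is easy to demonstrate by Monte Carlo. A short Python sketch using the intrinsic [O III] 5007/4959 ratio of about 2.98; the 20% relative uncertainty on each line is an illustrative assumption, not a value from the survey data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two normally distributed line fluxes with a true ratio of 2.98;
# assume 20% relative uncertainty on each measurement.
strong = rng.normal(2.98, 0.596, size=200_000)
weak = rng.normal(1.0, 0.2, size=200_000)

ratio = strong / weak
# The ratio distribution is right-skewed: the sample mean overshoots
# the true value because the denominator's uncertainty dominates,
# while the median stays close to 2.98.
print(np.mean(ratio))
print(np.median(ratio))
```

Repeating the experiment with `weak / strong` shows the companion effect stated above: the most likely value of B/A is not the reciprocal of the most likely value of A/B.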
Janus spectra in two-dimensional flows
Liu, Chien-Chia; Chakraborty, Pinaki
2016-01-01
In theory, large-scale atmospheric flows, soap-film flows and other two-dimensional flows may host two distinct types of turbulent energy spectra---in one, $\\alpha$, the spectral exponent of velocity fluctuations, equals $3$ and the fluctuations are dissipated at the small scales, and in the other, $\\alpha=5/3$ and the fluctuations are dissipated at the large scales---but measurements downstream of obstacles have invariably revealed $\\alpha = 3$. Here we report experiments on soap-film flows where downstream of obstacles there exists a sizable interval in which $\\alpha$ has transitioned from $3$ to $5/3$ for the streamwise fluctuations but remains equal to $3$ for the transverse fluctuations, as if two mutually independent turbulent fields of disparate dynamics were concurrently active within the flow. This species of turbulent energy spectra, which we term the Janus spectra, has never been observed or predicted theoretically. Our results may open up new vistas in the study of turbulence and geophysical flows...
Comparative Two-Dimensional Fluorescence Gel Electrophoresis.
Ackermann, Doreen; König, Simone
2018-01-01
Two-dimensional comparative fluorescence gel electrophoresis (CoFGE) uses an internal standard to increase the reproducibility of coordinate assignment for protein spots visualized on 2D polyacrylamide gels. This is particularly important for samples, which need to be compared without the availability of replicates and thus cannot be studied using differential gel electrophoresis (DIGE). CoFGE corrects for gel-to-gel variability by co-running with the sample proteome a standardized marker grid of 80-100 nodes, which is formed by a set of purified proteins. Differentiation of reference and analyte is possible by the use of two fluorescent dyes. Variations in the y-dimension (molecular weight) are corrected by the marker grid. For the optional control of the x-dimension (pI), azo dyes can be used. Experiments are possible in both vertical and horizontal (h) electrophoresis devices, but hCoFGE is much easier to perform. For data analysis, commercial software capable of warping can be adapted.
Two-dimensional hexagonal semiconductors beyond graphene
Nguyen, Bich Ha; Hieu Nguyen, Van
2016-12-01
The rapid and successful development of research on graphene and graphene-based nanostructures has been substantially enlarged to include many other two-dimensional hexagonal semiconductors (THSs): phosphorene, silicene, germanene, hexagonal boron nitride (h-BN) and transition metal dichalcogenides (TMDCs) such as MoS2, MoSe2, WS2, WSe2, as well as the van der Waals heterostructures of various THSs (including graphene). The present article is a review of recent works on THSs beyond graphene and on van der Waals heterostructures composed of different pairs of THSs. One advantage of the new THSs over graphene is the presence of a non-vanishing energy bandgap, which opens up the ability to fabricate a large number of electronic, optoelectronic and photonic devices on the basis of these new materials and their van der Waals heterostructures. Moreover, a significant advance in the research on TMDCs was the discovery of the valley degree of freedom. The results of research on the valley degree of freedom and the development of a new technology based on it, valleytronics, are also presented. Thus the scientific contents of the basic research and practical applications of THSs are very rich and extremely promising.
Two-Dimensional Phononic Crystals: Disorder Matters.
Wagner, Markus R; Graczykowski, Bartlomiej; Reparaz, Juan Sebastian; El Sachat, Alexandros; Sledzinska, Marianna; Alzina, Francesc; Sotomayor Torres, Clivia M
2016-09-14
The design and fabrication of phononic crystals (PnCs) hold the key to controlling the propagation of heat and sound at the nanoscale. However, there is a lack of experimental studies addressing the impact of order/disorder on the phononic properties of PnCs. Here, we present a comparative investigation of the influence of disorder on the hypersonic and thermal properties of two-dimensional PnCs. PnCs with ordered and disordered lattices are fabricated from circular holes with equal filling fractions in free-standing Si membranes. Ultrafast pump and probe spectroscopy (asynchronous optical sampling) and Raman thermometry based on a novel two-laser approach are used to study the phononic properties in the gigahertz (GHz) and terahertz (THz) regimes, respectively. Finite element method simulations of the phonon dispersion relation and three-dimensional displacement fields furthermore enable the unique identification of the different hypersonic vibrations. The increase of surface roughness and the introduction of short-range disorder are shown to modify the phonon dispersion and phonon coherence in the hypersonic (GHz) range without affecting the room-temperature thermal conductivity. On the basis of these findings, we suggest a criterion for predicting phonon coherence as a function of roughness and disorder.
Two-dimensional topological photonic systems
Sun, Xiao-Chen; He, Cheng; Liu, Xiao-Ping; Lu, Ming-Hui; Zhu, Shi-Ning; Chen, Yan-Feng
2017-09-01
The topological phase of matter, originally proposed and first demonstrated in fermionic electronic systems, has drawn considerable research attention in the past decades due to its robust transport of edge states and its potential with respect to future quantum information, communication, and computation. Recently, searching for such a unique material phase in bosonic systems has become a hot research topic worldwide. So far, many bosonic topological models and methods for realizing them have been discovered in photonic systems, acoustic systems, mechanical systems, etc. These discoveries have certainly yielded vast opportunities in designing material phases and related properties in the topological domain. In this review, we first focus on some of the representative photonic topological models and employ the underlying Dirac model to analyze the edge states and geometric phase. On the basis of these models, three common types of two-dimensional topological photonic systems are discussed: 1) photonic quantum Hall effect with broken time-reversal symmetry; 2) photonic topological insulator and the associated pseudo-time-reversal symmetry-protected mechanism; 3) time/space periodically modulated photonic Floquet topological insulator. Finally, we provide a summary and extension of this emerging field, including a brief introduction to the Weyl point in three-dimensional systems.
Radiation effects on two-dimensional materials
Energy Technology Data Exchange (ETDEWEB)
Walker, R.C. II; Robinson, J.A. [Department of Materials Science, Penn State, University Park, PA (United States); Center for Two-Dimensional Layered Materials, Penn State, University Park, PA (United States); Shi, T. [Department of Mechanical and Nuclear Engineering, Penn State, University Park, PA (United States); Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI (United States); Silva, E.C. [GlobalFoundries, Malta, NY (United States); Jovanovic, I. [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI (United States)
2016-12-15
The effects of electromagnetic and particle irradiation on two-dimensional materials (2DMs) are discussed in this review. Radiation creates defects that impact the structure and electronic performance of materials. Determining the impact of these defects is important for developing 2DM-based devices for use in high-radiation environments, such as space or nuclear reactors. As such, most experimental studies have been focused on determining total ionizing dose damage to 2DMs and devices. Total dose experiments using X-rays, gamma rays, electrons, protons, and heavy ions are summarized in this review. We briefly discuss the possibility of investigating single event effects in 2DMs based on initial ion beam irradiation experiments and the development of 2DM-based integrated circuits. Additionally, beneficial uses of irradiation such as ion implantation to dope materials or electron-beam and helium-beam etching to shape materials have begun to be used on 2DMs and are reviewed as well. For non-ionizing radiation, such as low-energy photons, we review the literature on 2DM-based photo-detection from terahertz to UV. The majority of photo-detecting devices operate in the visible and UV range, and for this reason they are the focus of this review. However, we review the progress in developing 2DMs for detecting infrared and terahertz radiation. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Photodetectors based on two dimensional materials
Zheng, Lou; Zhongzhu, Liang; Guozhen, Shen
2016-09-01
Two-dimensional (2D) materials with unique properties have received a great deal of attention in recent years. This family of materials has rapidly established itself as an intriguing building block for versatile nanoelectronic devices that offer promising potential for use in next-generation optoelectronics, such as photodetectors. Furthermore, their optoelectronic performance can be adjusted by varying the number of layers. They have demonstrated excellent light absorption, enabling ultrafast and ultrasensitive detection of light in photodetectors, especially in their single-layer structure. Moreover, due to their atomic thickness, outstanding mechanical flexibility, and large breaking strength, these materials have been of great interest for use in flexible devices and strain engineering. Toward that end, several kinds of photodetectors based on 2D materials have been reported. Here, we present a review of the state of the art in photodetectors based on graphene and other 2D materials, such as transition metal dichalcogenides. Project supported by the National Natural Science Foundation of China (Nos. 61377033, 61574132, 61504136) and the State Key Laboratory of Applied Optics, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences.
Asymptotics for Two-dimensional Atoms
DEFF Research Database (Denmark)
Nam, Phan Thanh; Portmann, Fabian; Solovej, Jan Philip
2012-01-01
We prove that the ground state energy of an atom confined to two dimensions with an infinitely heavy nucleus of charge $Z>0$ and $N$ quantum electrons of charge $-1$ is $E(N,Z)=-{1/2}Z^2\ln Z+(E^{\TF}(\lambda)+{1/2}c^{\rm H})Z^2+o(Z^2)$ when $Z\to \infty$ and $N/Z\to \lambda$, where $E^{\TF}(\lambda)$ is given by a Thomas-Fermi type variational problem and $c^{\rm H}\approx -2.2339$ is an explicit constant. We also show that the radius of a two-dimensional neutral atom is unbounded when $Z\to \infty$, which is contrary to the expected behavior of three-dimensional atoms.
Predicting Two-Dimensional Silicon Carbide Monolayers.
Shi, Zhiming; Zhang, Zhuhua; Kutana, Alex; Yakobson, Boris I
2015-10-27
Intrinsic semimetallicity of graphene and silicene largely limits their applications in functional devices. Mixing carbon and silicon atoms to form two-dimensional (2D) silicon carbide (SixC1-x) sheets is promising to overcome this issue. Using first-principles calculations combined with the cluster expansion method, we perform a comprehensive study on the thermodynamic stability and electronic properties of 2D SixC1-x monolayers with 0 ≤ x ≤ 1. Upon varying the silicon concentration, the 2D SixC1-x presents two distinct structural phases, a homogeneous phase with well dispersed Si (or C) atoms and an in-plane hybrid phase rich in SiC domains. While the in-plane hybrid structure shows uniform semiconducting properties with widely tunable band gap from 0 to 2.87 eV due to quantum confinement effect imposed by the SiC domains, the homogeneous structures can be semiconducting or remain semimetallic depending on a superlattice vector which dictates whether the sublattice symmetry is topologically broken. Moreover, we reveal a universal rule for describing the electronic properties of the homogeneous SixC1-x structures. These findings suggest that the 2D SixC1-x monolayers may present a new "family" of 2D materials, with a rich variety of properties for applications in electronics and optoelectronics.
Smail, Linda
2016-06-01
The basic task of any probabilistic inference system in Bayesian networks is computing the posterior probability distribution for a subset or subsets of random variables, given values or evidence for some other variables from the same Bayesian network. Many methods and algorithms have been developed for exact and approximate inference in Bayesian networks. This work compares two exact inference methods in Bayesian networks, the Lauritzen-Spiegelhalter algorithm and the successive restrictions algorithm, from the perspective of computational efficiency. The two methods were applied for comparison to a Chest Clinic Bayesian Network. Results indicate that the successive restrictions algorithm shows more computational efficiency than the Lauritzen-Spiegelhalter algorithm.
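For intuition, exact posterior computation in any small Bayesian network can be done by brute-force enumeration, the baseline that both algorithms above improve upon by exploiting network structure. A toy three-node chain A → B → C with hypothetical CPTs (not the Chest Clinic network):

```python
# Hypothetical conditional probability tables for a boolean chain A -> B -> C.
p_a = {True: 0.3, False: 0.7}
p_b_given_a = {True: {True: 0.8, False: 0.2}, False: {True: 0.1, False: 0.9}}
p_c_given_b = {True: {True: 0.5, False: 0.5}, False: {True: 0.4, False: 0.6}}

def joint(a, b, c):
    """Joint probability from the chain-rule factorization."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

def posterior_a_given_c(c_obs):
    """P(A | C = c_obs) by full enumeration: sum out the hidden
    variable B, then normalize over A."""
    unnorm = {}
    for a in (True, False):
        unnorm[a] = sum(joint(a, b, c_obs) for b in (True, False))
    z = sum(unnorm.values())
    return {a: v / z for a, v in unnorm.items()}

post = posterior_a_given_c(True)
print(post[True] + post[False])  # 1.0: a proper posterior
```

Enumeration is exponential in the number of hidden variables; junction-tree methods such as Lauritzen-Spiegelhalter and the successive restrictions algorithm organize these same sums to avoid the exponential blow-up.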
Finite de Finetti theorem for conditional probability distributions describing physical theories
Christandl, Matthias; Toner, Ben
2009-04-01
We work in a general framework where the state of a physical system is defined by its behavior under measurement and the global state is constrained by no-signaling conditions. We show that the marginals of symmetric states in such theories can be approximated by convex combinations of independent and identical conditional probability distributions, generalizing the classical finite de Finetti theorem of Diaconis and Freedman. Our results apply to correlations obtained from quantum states even when there is no bound on the local dimension, so that known quantum de Finetti theorems cannot be used.
Discrete coherent states and probability distributions in finite-dimensional spaces
Energy Technology Data Exchange (ETDEWEB)
Galetti, D.; Marchiolli, M.A.
1995-06-01
Operator bases are discussed in connection with the construction of phase space representatives of operators in finite-dimensional spaces and their properties are presented. It is also shown how these operator bases allow for the construction of a finite harmonic oscillator-like coherent state. Creation and annihilation operators for the finite-dimensional Fock space are discussed and their expressions in terms of the operator bases are explicitly written. The relevant finite-dimensional probability distributions are obtained, and their limiting behavior as the space becomes infinite-dimensional is calculated and agrees with well-known results. (author). 20 refs, 2 figs.
On the Meta Distribution of Coverage Probability in Uplink Cellular Networks
Elsawy, Hesham
2017-04-07
This letter studies the meta distribution of coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control. Using the widely accepted Poisson point process (PPP) for modeling the spatial locations of base stations (BSs), we obtain the percentiles of users that achieve a target uplink CP over an arbitrary, but fixed, realization of the PPP. To this end, the effects of the user activity factor (p) and the path-loss compensation factor on the uplink performance are analyzed. The results show that decreasing p and/or increasing the path-loss compensation factor reduces the CP variation around the spatially averaged value.
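The spatial model referenced above can be sampled directly. A minimal sketch of drawing base-station locations from a homogeneous PPP on a square window (the intensity and window size are arbitrary choices for illustration, not values from the letter):

```python
import numpy as np

rng = np.random.default_rng(7)

# Homogeneous PPP on a square window: the number of points is Poisson
# with mean lam * area, and given that count the points are uniform.
lam = 0.5          # base stations per unit area (hypothetical)
side = 20.0
n = rng.poisson(lam * side * side)
xy = rng.uniform(0.0, side, size=(n, 2))

# distance from a typical user at the window centre to the nearest BS
d = np.min(np.hypot(xy[:, 0] - side / 2, xy[:, 1] - side / 2))
print(n, d)
```

Averaging a per-realization coverage indicator over many such draws gives the spatially averaged CP; keeping the PPP realization fixed and averaging only over fading gives the per-user CPs whose percentiles the meta distribution describes.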
Spectra and probability distributions of thermal flux in turbulent Rayleigh-B\\'{e}nard convection
Pharasi, Hirdesh K; Kumar, Krishna; Bhattacharjee, Jayanta K
2016-01-01
The spectra of turbulent heat flux $\\mathrm{H}(k)$ in Rayleigh-B\\'{e}nard convection with and without uniform rotation are presented. The spectrum $\\mathrm{H}(k)$ scales with wave number $k$ as $\\sim k^{-2}$. The scaling exponent is almost independent of the Taylor number $\\mathrm{Ta}$ and Prandtl number $\\mathrm{Pr}$ for higher values of the reduced Rayleigh number $r$ ($ > 10^3$). The exponent, however, depends on $\\mathrm{Ta}$ and $\\mathrm{Pr}$ for smaller values of $r$ ($<10^3$). The probability distribution functions of the local heat fluxes are non-Gaussian and have exponential tails.
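The reported k^-2 scaling is the slope of the spectrum in log-log coordinates, which a least-squares fit recovers directly. A small sketch on synthetic data built with the stated exponent (the prefactor is arbitrary):

```python
import numpy as np

# Synthetic heat-flux spectrum H(k) ~ k^(-2), as reported for
# Rayleigh-Benard convection; recover the scaling exponent by a
# least-squares fit in log-log coordinates.
k = np.logspace(0, 2, 50)
H = 3.0 * k**-2.0

slope, intercept = np.polyfit(np.log(k), np.log(H), 1)
print(slope)  # -2.0, the scaling exponent
```

On measured spectra the fit would be restricted to the scaling range of wave numbers, since dissipative and large-scale ends of the spectrum do not follow the power law.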
Two-dimensional temperature determination in sooting flames by filtered Rayleigh scattering
Hoffman, D.; Münch, K.-U.; Leipertz, A.
1996-04-01
We present what are, to our knowledge, the first filtered Rayleigh scattering temperature measurements, and apply them in a sooting flame. This new technique for two-dimensional thermography in gas combustion overcomes some of the major disadvantages of the standard Rayleigh technique: it suppresses scattered background light from walls or windows and permits detection of two-dimensional Rayleigh intensity distributions of the gas phase in the presence of small particles by spectral filtering of the scattered light.
Performance of Thomas-Fermi and linear response approaches in periodic two-dimensional systems
Energy Technology Data Exchange (ETDEWEB)
Calderin, L; Stott, M J [Department of Physics, Queen's University, Kingston, Ontario, K7L 3N6 (Canada)], E-mail: calderin@physics.queensu.ca, E-mail: stott@mjs.phy.queensu.ca
2010-04-16
A study of the performance of Thomas-Fermi and linear response theories in the case of a two-dimensional periodic model system is presented. The calculated density distribution and total energy per unit cell compare very well with exact results except when there is a small number of particles per cell, even though the potential has narrow tight-binding bands. The results supplement earlier findings of Koivisto and Stott for a localized impurity in a two-dimensional uniform gas.
Wind speed analysis in La Ventosa, Mexico: a bimodal probability distribution case
Energy Technology Data Exchange (ETDEWEB)
Jaramillo, O.A.; Borja, M.A. [Energias No Convencionales, Morelos (Mexico). Instituto de Investigaciones Electricas
2004-08-01
The statistical characteristics of the wind speed in La Ventosa, Oaxaca, Mexico, have been analyzed using wind speed data recorded by the Instituto de Investigaciones Electricas (IIE). By grouping the observations by year, season and wind direction, we show that the wind speed distribution, with calms included, is not represented by the typical two-parameter Weibull function. A mathematical formulation using a bimodal Weibull and Weibull probability distribution function (PDF) has been developed to analyse the wind speed frequency distribution in that region. The model developed here can be applied to similar regions where the wind speed distribution presents a bimodal PDF. The two-parameter Weibull wind speed distribution must not be generalised, since it is not accurate enough to represent some wind regimes, as in the case of La Ventosa, Mexico. The analysis of the wind data shows that computing the capacity factor for wind power plants to be installed in La Ventosa must be carried out by means of a bimodal PDF instead of the typical Weibull PDF; otherwise, the capacity factor will be underestimated. (author)
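A bimodal Weibull-and-Weibull PDF of the kind described is simply a two-component mixture of Weibull densities. A sketch with hypothetical parameters (a calm mode and an energetic mode; not the fitted La Ventosa values):

```python
import numpy as np

def weibull_pdf(v, k, c):
    """Two-parameter Weibull PDF with shape k and scale c."""
    v = np.asarray(v, dtype=float)
    return (k / c) * (v / c) ** (k - 1) * np.exp(-((v / c) ** k))

def bimodal_weibull_pdf(v, w, k1, c1, k2, c2):
    """Weibull-and-Weibull mixture: w weights the first component."""
    return w * weibull_pdf(v, k1, c1) + (1.0 - w) * weibull_pdf(v, k2, c2)

# hypothetical two-regime site: a calm mode near 3 m/s and an
# energetic mode near 11 m/s
v = np.linspace(0.01, 30.0, 3000)
pdf = bimodal_weibull_pdf(v, 0.4, 2.0, 3.5, 3.0, 12.0)

dv = v[1] - v[0]
total = (pdf * dv).sum()   # numerical check of normalization
print(total)               # close to 1: a valid probability density
```

The capacity-factor point follows directly: integrating the turbine power curve against this mixture weights the energetic mode properly, whereas a single Weibull fit smears the two modes together and underestimates the output.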
Long-Term Probability Distribution of Wind Turbine Planetary Bearing Loads (Poster)
Energy Technology Data Exchange (ETDEWEB)
Jiang, Z.; Xing, Y.; Guo, Y.; Dong, W.; Moan, T.; Gao, Z.
2013-04-01
Among the various causes of bearing damage and failure, metal fatigue of the rolling contact surface is the dominant failure mechanism. The fatigue life is associated with the load conditions under which wind turbines operate in the field. Therefore, it is important to understand the long-term distribution of the bearing loads under various environmental conditions. The National Renewable Energy Laboratory's 750-kW Gearbox Reliability Collaborative wind turbine is studied in this work. A decoupled analysis using several computer codes is carried out. The global aero-elastic simulations are performed using HAWC2. The time series of the drivetrain loads and motions from the global dynamic analysis are fed to a drivetrain model in SIMPACK. The time-varying internal pressure distribution along the raceway is obtained analytically. A series of probability distribution functions is then used to fit the long-term statistical distribution at different locations along the raceways. The long-term distribution of the bearing raceway loads is estimated under different environmental conditions. Finally, the bearing fatigue lives are calculated.
Study of the SEMG probability distribution of the paretic tibialis anterior muscle
Cherniz, Analía S.; Bonell, Claudia E.; Tabernig, Carolina B.
2007-11-01
The surface electromyographic signal is a stochastic signal that has been modeled as a Gaussian process, with a zero mean. It has been experimentally proved that this probability distribution can be adjusted with less error to a Laplacian type distribution. The selection of estimators for the detection of changes in the amplitude of the muscular signal depends, among other things, on the type of distribution. In the case of subjects with lesions to the superior motor neuron, the lack of central control affects the muscular tone, the force and the patterns of muscular movement involved in activities such as the gait cycle. In this work, the distribution types of the SEMG signal amplitudes of the tibialis anterior muscle are evaluated during gait, both in two healthy subjects and in two hemiparetic ones in order to select the estimators that best characterize them. It was observed that the Laplacian distribution function would be the one that best adjusts to the experimental data in the studied subjects, although this largely depends on the subject and on the data segment analyzed.
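The Gaussian-versus-Laplacian comparison can be made concrete by comparing maximum-likelihood fits of both models on the same zero-mean data. A sketch on surrogate Laplacian-distributed samples (not SEMG recordings):

```python
import numpy as np

rng = np.random.default_rng(42)

# Surrogate zero-mean SEMG-like amplitudes drawn from a Laplacian,
# then scored under Gaussian and Laplacian maximum-likelihood fits.
x = rng.laplace(0.0, 1.0, size=50_000)

def gaussian_loglik(x):
    sigma = x.std()
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - x**2 / (2 * sigma**2))

def laplacian_loglik(x):
    b = np.mean(np.abs(x))  # maximum-likelihood scale for zero mean
    return np.sum(-np.log(2 * b) - np.abs(x) / b)

print(laplacian_loglik(x) > gaussian_loglik(x))  # True for this sample
```

Running the same comparison on segments of recorded SEMG would reproduce the kind of model selection reported above, with the caveat the authors note: the winning distribution can vary with the subject and the data segment analyzed.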
EVALUATION OF THE PROBABILITY DISTRIBUTION OF PITTING CORROSION FATIGUE LIFE IN AIRCRAFT MATERIALS
Institute of Scientific and Technical Information of China (English)
WANG Qingyuan (王清远); N.KAWAGOISHI; Q.CHEN; R.M.PIDAPARTI
2003-01-01
Corrosion and fatigue properties of aircraft materials are known to have considerable scatter due to the random nature of materials, loading, and environmental conditions. A probabilistic approach for predicting the pitting corrosion fatigue life has been investigated which captures the effect of the interaction of the cyclic load and corrosive environment and all stages of the corrosion fatigue process (i.e. pit nucleation and growth, pit-crack transition, short- and long-crack propagation). The probabilistic model investigated considers the uncertainties in the initial pit size, corrosion pitting current, and material properties due to the scatter found in the experimental data. Monte Carlo simulations were performed to define the failure probability distribution. Predicted cumulative distribution functions of fatigue life agreed reasonably well with the existing experimental data.
Interaction of two-dimensional magnetoexcitons
Dumanov, E. V.; Podlesny, I. V.; Moskalenko, S. A.; Liberman, M. A.
2017-04-01
We study the interaction of two-dimensional magnetoexcitons with in-plane wave vector k∥ = 0, taking into account the influence of the excited Landau levels (ELLs) and of an external electric field perpendicular to the surface of the quantum well and parallel to the external magnetic field. It is shown that accounting for the ELLs gives rise to repulsion between the spinless magnetoexcitons with k∥ = 0 in the Fock approximation, with the interaction constant g decreasing inversely proportional to the magnetic field strength B (g(0) ∼ 1/B). In the presence of the perpendicular electric field, the Rashba spin-orbit coupling (RSOC), Zeeman splitting (ZS) and nonparabolicity of the heavy-hole dispersion law affect the Landau quantization of the electrons and holes. They move along new cyclotron orbits, change their Coulomb interactions and cause interaction between 2D magnetoexcitons with k∥ = 0. The changes of the Coulomb interactions caused by the electrons and by the holes moving on the new cyclotron orbits are characterized by coefficients which, in the absence of the electric field, reduce to unity. The differences between these coefficients for the electron-hole pairs forming the magnetoexcitons determine their affinities to the interactions. The interactions between the homogeneous, semihomogeneous and heterogeneous magnetoexcitons forming symmetric states with the same signs of their affinities are attractive, whereas in the case of affinities with different signs they are repulsive. In the heterogeneous asymmetric states the interactions have opposite signs compared with the symmetric states. In all these cases the interaction constant g depends on the magnetic field as g(0) ∼ 1/√B.
Extending models for two-dimensional constraints
DEFF Research Database (Denmark)
Forchhammer, Søren
2009-01-01
Random fields in two dimensions may be specified on 2 × 2 elements such that the probabilities of finite configurations and the entropy may be calculated explicitly. The Pickard random field is one example, where the probability of a new (non-boundary) element is conditioned on three previous elements. To extend the concept, we consider extending such a field so that a vector or block of elements is conditioned on a larger set of previous elements. Given a stationary model defined on 2 × 2 elements, iterative scaling is used to define the extended model. The extended model may be used...
Two-dimensional materials and their prospects in transistor electronics.
Schwierz, F; Pezoldt, J; Granzner, R
2015-05-14
During the past decade, two-dimensional materials have attracted incredible interest from the electronic device community. The first two-dimensional material studied in detail was graphene and, since 2007, it has intensively been explored as a material for electronic devices, in particular, transistors. While graphene transistors are still on the agenda, researchers have extended their work to two-dimensional materials beyond graphene and the number of two-dimensional materials under examination has literally exploded recently. Meanwhile several hundreds of different two-dimensional materials are known, a substantial part of them is considered useful for transistors, and experimental transistors with channels of different two-dimensional materials have been demonstrated. In spite of the rapid progress in the field, the prospects of two-dimensional transistors still remain vague and optimistic opinions face rather reserved assessments. The intention of the present paper is to shed more light on the merits and drawbacks of two-dimensional materials for transistor electronics and to add a few more facets to the ongoing discussion on the prospects of two-dimensional transistors. To this end, we compose a wish list of properties for a good transistor channel material and examine to what extent the two-dimensional materials fulfill the criteria of the list. The state-of-the-art two-dimensional transistors are reviewed and a balanced view of both the pros and cons of these devices is provided.
Directory of Open Access Journals (Sweden)
Rani K
2014-08-01
Photocopied documents are very common in everyday life. People are permitted to carry and present photocopied documents to avoid damage to the original documents. But this provision is misused for temporary benefit by fabricating fake photocopied documents. Fabrication of a fake photocopied document is possible only at the 2nd and higher recursive orders of photocopy. Whenever a photocopied document is submitted, it may be required to check its originality. When the document is a 1st-order photocopy, chances of fabrication may be ignored. On the other hand, when the photocopy order is 2nd or above, fabrication may be suspected. Hence, when a photocopied document is presented, the recursive order number of the photocopy must be estimated to ascertain originality. This requirement demands methods to estimate the order number of a photocopy. In this work, a voting-based approach is proposed to detect the recursive order number of the photocopied document using the exponential, extreme-value and lognormal probability distributions. A detailed experiment was performed on a generated data set, and the method exhibits an efficiency close to 89%.
Institute of Scientific and Technical Information of China (English)
张奇志
2001-01-01
Wavelet transform image coding is one of the most widely used image compression approaches. The quantization of wavelet transform coefficients is key to obtaining compressed images with low bit rates and reconstructed images with high signal-to-noise ratio. To design the optimal quantizer, the distributions of the wavelet transform coefficients of an image must be determined. The purpose of this experiment is to determine those distributions. Four standard images, named "Face", "Girl", "Lena" and "Panda", were selected to study the distribution rule. The "KS" statistical tests were applied to study the distributions of the wavelet transform coefficients of the images. Using the Vetterli biorthogonal wavelet (L=18), images of size 256×256 pixels with 256 gray levels were decomposed into three levels and ten subimages. The results of tests of the Rayleigh, Laplacian and Gaussian assumptions are given. The tests show that the low-pass subimages are best approximated by a Gaussian distribution and the others are best approximated by a Laplacian distribution. A simulation indicates that the Laplacian assumption for the coefficients yields a higher actual output signal-to-noise ratio than the Gaussian assumption.
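The "KS" test used above compares an empirical CDF against a candidate model CDF. A minimal Python sketch of that comparison follows, with synthetic Laplacian samples standing in for a high-frequency subband's coefficients (the actual study used coefficients from the four images, not simulated data):

```python
import math
import random

def ks_stat(sample, cdf):
    # Kolmogorov-Smirnov statistic: largest gap between the empirical
    # CDF and the candidate model CDF
    xs = sorted(sample)
    n = len(xs)
    return max(
        max(abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
        for i, x in enumerate(xs)
    )

def gaussian_cdf(x, sigma):
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def laplacian_cdf(x, b):
    return 0.5 * math.exp(x / b) if x < 0 else 1.0 - 0.5 * math.exp(-x / b)

random.seed(2)
# stand-in for high-frequency subband coefficients: Laplacian-distributed
coeffs = [random.expovariate(1.0) * random.choice((-1, 1)) for _ in range(5000)]
b = sum(abs(c) for c in coeffs) / len(coeffs)          # Laplacian MLE scale
sigma = math.sqrt(sum(c * c for c in coeffs) / len(coeffs))  # Gaussian MLE sigma
print(ks_stat(coeffs, lambda x: laplacian_cdf(x, b))
      < ks_stat(coeffs, lambda x: gaussian_cdf(x, sigma)))  # True
```

The smaller KS statistic identifies the better-fitting assumption, mirroring the subband-by-subband conclusion reported in the abstract.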
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
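The "plain fact" underlying these tests is easiest to see in the discrete case: compute the total mass of all outcomes no more probable than the one observed; a small value flags a draw that is unlikely under the specified distribution. The distribution below is a hypothetical example, not one from the paper.

```python
# a specified discrete distribution over outcomes (hypothetical example)
p = {"a": 0.70, "b": 0.25, "c": 0.05}

def typicality(outcome):
    # probability that a draw from p lands on an outcome no more probable
    # than the observed one; small values flag a draw that is unlikely
    # under the specified distribution
    return sum(q for q in p.values() if q <= p[outcome])

print(typicality("c"))  # 0.05: a suspiciously improbable draw
print(typicality("a"))  # 1.0: a fully typical draw
```

Unlike a test on the cumulative distribution function, this statistic is sensitive to low-probability regions regardless of where they sit on the axis, which is the deficiency of KS-type tests that the paper addresses.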
Directory of Open Access Journals (Sweden)
Jayajit Das '
2015-07-01
Full Text Available A common statistical situation concerns inferring an unknown distribution Q(x from a known distribution P(y, where X (dimension n, and Y (dimension m have a known functional relationship. Most commonly, n ≤ m, and the task is relatively straightforward for well-defined functional relationships. For example, if Y1 and Y2 are independent random variables, each uniform on [0, 1], one can determine the distribution of X = Y1 + Y2; here m = 2 and n = 1. However, biological and physical situations can arise where n > m and the functional relation Y→X is non-unique. In general, in the absence of additional information, there is no unique solution to Q in those cases. Nevertheless, one may still want to draw some inferences about Q. To this end, we propose a novel maximum entropy (MaxEnt approach that estimates Q(x based only on the available data, namely, P(y. The method has the additional advantage that one does not need to explicitly calculate the Lagrange multipliers. In this paper we develop the approach, for both discrete and continuous probability distributions, and demonstrate its validity. We give an intuitive justification as well, and we illustrate with examples.
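The straightforward n ≤ m case mentioned in the abstract, X = Y1 + Y2 with Y1 and Y2 uniform on [0, 1], has the closed-form triangular density f(x) = x on [0, 1] and 2 − x on [1, 2]. A short Python check confirms the Monte Carlo draws against the exact CDF:

```python
import random

random.seed(4)
# X = Y1 + Y2 with Y1, Y2 independent and uniform on [0, 1]; here m = 2, n = 1
draws = [random.random() + random.random() for _ in range(200000)]

def exact_cdf(x):
    # CDF of the triangular density f(x) = x on [0,1], 2 - x on [1,2]
    if x < 1.0:
        return 0.5 * x * x
    return 1.0 - 0.5 * (2.0 - x) ** 2

empirical = sum(d <= 1.2 for d in draws) / len(draws)
print(abs(empirical - exact_cdf(1.2)) < 0.01)  # True
```

It is exactly the n > m, non-unique cases where no such closed form exists that motivate the MaxEnt estimate of Q(x).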
Two-dimensional visualization of cluster beams by microchannel plates
Energy Technology Data Exchange (ETDEWEB)
Khoukaz, A., E-mail: khoukaz@uni-muenster.de; Bonaventura, D.; Grieser, S.; Hergemöller, A.-K.; Köhler, E.; Täschner, A.
2014-01-21
An advanced technique for two-dimensional real-time visualization of cluster beams in vacuum, as well as of the overlap volume of cluster beams with particle accelerator beams, is presented. The detection system consists of an array of microchannel plates (MCPs) in combination with a phosphor screen which is read out by a CCD camera. This setup, together with the ionization of a cluster beam by an electron or ion beam, allows for spatially resolved investigations of the cluster beam position, size, and intensity. Moreover, since electrically uncharged clusters remain undetected, operation in an internal beam experiment opens the way to monitoring the overlap region and thus the position and size of an accelerator beam crossing an originally electrically neutral cluster jet. The observed intensity distribution of the recorded image is directly proportional to the convolution of the spatial ion beam and cluster beam intensities and is thereby a direct measure of the two-dimensional luminosity distribution. This information can be used directly for the reconstruction of vertex positions as well as input for numerical simulations of the reaction zone. The spatial resolution of the images is dominated by the granularity of the complete MCP device and was found to be on the order of σ ≈ 100 μm.
Highlights:
• We present an MCP system for 2D real-time visualization of cluster target beams.
• With this device the vertex region of storage ring experiments can be investigated.
• Time-resolved 2D information about the target thickness distribution is accessible.
• A spatial resolution of the MCP device of 0.1 mm was achieved.
• The presented MCP system also allows for measurements of cluster masses.
Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.
Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon
2016-01-01
Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze the simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods which govern the spike probability of the oncoming action potential based on the time of the last spike, or the bursting behavior, which is characterized by short epochs of rapid action potentials, followed by longer episodes of silence. Here we investigate non-renewal processes with the inter-spike interval distribution model that incorporates spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that compared to the distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to both heavy tailed or narrow coincidence distribution. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of significance of joint spike events seem to be inadequate.
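The Monte Carlo comparison described above can be sketched with a crude binned model: Bernoulli spiking per time bin, with an optional hard refractory period supplying the spike-history dependence. The parameters (bin count, rate, refractory length, trial count) are illustrative, and the hard refractory period is a simplification of the inter-spike interval models the study actually uses.

```python
import random
import statistics

def spike_train(n_bins, p, refractory=0):
    # Bernoulli spiking per bin, with an optional hard refractory period
    # of `refractory` bins after each spike (a crude history dependence)
    train, last = [], -10**9
    for t in range(n_bins):
        fire = (t - last > refractory) and random.random() < p
        train.append(1 if fire else 0)
        if fire:
            last = t
    return train

def coincidences(a, b):
    # joint spike events: bins in which both trains fire
    return sum(x & y for x, y in zip(a, b))

random.seed(5)
poisson = [coincidences(spike_train(1000, 0.05), spike_train(1000, 0.05))
           for _ in range(400)]
refract = [coincidences(spike_train(1000, 0.05, refractory=5),
                        spike_train(1000, 0.05, refractory=5))
           for _ in range(400)]
# history dependence shifts the coincidence-count distribution relative
# to the memoryless (Poisson-like) case
print(statistics.mean(refract) < statistics.mean(poisson))  # True
```

Repeating such simulations for different autostructures yields the full shape of the coincidence-count distribution, which is what the significance of joint spike events must be judged against.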
Sasaki, Tomohiko; Kondo, Osamu
2016-03-01
In paleodemography, the Bayesian approach has been suggested to provide an effective means by which mortality profiles of past populations can be adequately estimated, and thus avoid problems of "age-mimicry" inherent in conventional approaches. In this study, we propose an application of the Gompertz model using an "informative" prior probability distribution by revising a recent example of the Bayesian approach based on an "uninformative" distribution. Life-table data of 134 human populations including those of contemporary hunter-gatherers were used to determine the Gompertz parameters of each population. In each population, we used both raw life-table data and the Gompertz parameters to calculate some demographic values such as the mean life-span, to confirm representativeness of the model. Then, the correlation between the two Gompertz parameters (the Strehler-Mildvan correlation) was re-established. We incorporated the correlation into the Bayesian approach as an "informative" prior probability distribution, and tested its effectiveness using simulated data. Our analyses showed that the mean life-span (≥ age 15) and the proportion of living persons aged over 45 were well-reproduced by the Gompertz model. The simulation showed that using the correlation as an informative prior provides a narrower estimation range in the Bayesian approach than does the uninformative prior. The Gompertz model can be assumed to accurately estimate the mean life-span and/or the proportion of old people in a population. We suggest that the Strehler-Mildvan correlation can be used as a useful constraint in demographic reconstructions of past human populations. © 2015 Wiley Periodicals, Inc.
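The Gompertz model referenced above has hazard h(x) = a·exp(b·x) and survival S(x) = exp(−(a/b)(exp(b·x) − 1)), from which the mean adult life-span follows by integrating S. A minimal Python sketch, with illustrative parameter values rather than any fitted to the paper's 134 populations:

```python
import math

def gompertz_survival(x, a, b):
    # survival from age 15: S(x) = exp(-(a/b) * (exp(b*x) - 1)),
    # with hazard h(x) = a * exp(b*x) and x = years past age 15
    return math.exp(-(a / b) * (math.exp(b * x) - 1.0))

def mean_lifespan(a, b, x_max=110.0, dx=0.01):
    # mean adult life-span = 15 + integral of S(x) dx (rectangle rule)
    steps = int(x_max / dx)
    return 15.0 + sum(gompertz_survival(i * dx, a, b) for i in range(steps)) * dx

# illustrative parameters, not values fitted to the paper's populations:
# a larger baseline hazard a shortens the mean adult life-span
print(mean_lifespan(0.01, 0.07) > mean_lifespan(0.02, 0.07))  # True
```

The Strehler-Mildvan correlation constrains the pair (a, b), which is what makes it usable as an informative prior over these two parameters.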
Ultrafast two dimensional infrared chemical exchange spectroscopy
Fayer, Michael
2011-03-01
The method of ultrafast two dimensional infrared (2D IR) vibrational echo spectroscopy is described. Three ultrashort IR pulses tuned to the frequencies of the vibrational transitions of interest are directed into the sample. The interaction of these pulses with the molecular vibrational oscillators produces a polarization that gives rise to a fourth pulse, the vibrational echo. The vibrational echo pulse is combined with another pulse, the local oscillator, for heterodyne detection of the signal. For fixed time between the second and third pulses, the waiting time, the first pulse is scanned. Two Fourier transforms of the data yield a 2D IR spectrum. The waiting time is increased, and another spectrum is obtained. The change in the 2D IR spectra with increased waiting time provides information on the time evolution of the structure of the molecular system under observation. In a 2D IR chemical exchange experiment, two species, A and B, are undergoing chemical exchange. A's are turning into B's, and B's are turning into A's, but the overall concentrations of the species are not changing. The kinetics of the chemical exchange on the ground electronic state under thermal equilibrium conditions can be obtained with 2D IR spectroscopy. A vibration that has a different frequency for the two species is monitored. At very short times, there will be two peaks on the diagonal of the 2D IR spectrum, one for A and one for B. As the waiting time is increased, chemical exchange causes off-diagonal peaks to grow in. The time dependence of the growth of these off-diagonal peaks gives the chemical exchange rate. The method is applied to organic solute-solvent complex formation, orientational isomerization about a carbon-carbon single bond, migration of a hydrogen bond from one position on a molecule to another, protein structural substate interconversion, and water hydrogen bond switching between ions and water molecules. This work was supported by the Air Force Office of Scientific Research.
Molecular assembly on two-dimensional materials
Kumar, Avijit; Banerjee, Kaustuv; Liljeroth, Peter
2017-02-01
Molecular self-assembly is a well-known technique to create highly functional nanostructures on surfaces. Self-assembly on two-dimensional (2D) materials is a developing field driven by the interest in functionalization of 2D materials in order to tune their electronic properties. This has resulted in the discovery of several rich and interesting phenomena. Here, we review this progress with an emphasis on the electronic properties of the adsorbates and the substrate in well-defined systems, as unveiled by scanning tunneling microscopy. The review covers three aspects of the self-assembly. The first one focuses on non-covalent self-assembly dealing with site-selectivity due to inherent moiré pattern present on 2D materials grown on substrates. We also see that modification of intermolecular interactions and molecule–substrate interactions influences the assembly drastically and that 2D materials can also be used as a platform to carry out covalent and metal-coordinated assembly. The second part deals with the electronic properties of molecules adsorbed on 2D materials. By virtue of being inert and possessing low density of states near the Fermi level, 2D materials decouple molecules electronically from the underlying metal substrate and allow high-resolution spectroscopy and imaging of molecular orbitals. The moiré pattern on the 2D materials causes site-selective gating and charging of molecules in some cases. The last section covers the effects of self-assembled, acceptor and donor type, organic molecules on the electronic properties of graphene as revealed by spectroscopy and electrical transport measurements. Non-covalent functionalization of 2D materials has already been applied for their application as catalysts and sensors. With the current surge of activity on building van der Waals heterostructures from atomically thin crystals, molecular self-assembly has the potential to add an extra level of flexibility and functionality for applications ranging
Bounds on the Capacity of Weakly constrained two-dimensional Codes
DEFF Research Database (Denmark)
Forchhammer, Søren
2002-01-01
Upper and lower bounds are presented for the capacity of weakly constrained two-dimensional codes. The maximum entropy is calculated for two simple models of 2-D codes constraining the probability of neighboring 1s as an example. For given models of the coded data, upper and lower bounds...
Two dimensional basic linear algebra communication subprograms
Energy Technology Data Exchange (ETDEWEB)
Dongarra, J.J.; Whaley, R.C. [Univ. of Tennessee, Knoxville, TN (United States); Geijn, R.A. van de [Univ. of Texas, Austin, TX (United States)
1993-12-31
This paper describes a package of linear algebra communication routines for manipulating and communicating data structures that are distributed among the memories of a distributed memory MIMD computer. The motivation for the BLACS is to increase portability, efficiency and modularity at a high level. The audience of the BLACS are mathematical software experts and people with large scale scientific computation to perform.
Dynamic Properties of Two-Dimensional Polydisperse Granular Gases
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
We propose a two-dimensional model of polydisperse granular mixtures with a power-law size distribution in the presence of stochastic driving. A fractal dimension D is introduced as a measurement of the inhomogeneity of the size distribution of particles. We define the global and partial granular temperatures of the multi-component mixture. By direct simulation Monte Carlo, we investigate how the inhomogeneity of the size distribution influences the dynamic properties of the mixture, focusing on the granular temperature, dissipated energy, velocity distribution, spatial clusterization, and collision time. We obtain the following results: a single granular temperature does not characterize a multi-component mixture, and each species attains its own "granular temperature"; the velocity deviation from the Gaussian distribution becomes more and more pronounced and the partial density of the assembly becomes more inhomogeneous as the fractal dimension D increases; the global granular temperature decreases and the average dissipated energy per particle increases as D increases.
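The notion of partial granular temperatures can be illustrated with a toy Python sketch: in 2D, each species' granular temperature is the mean kinetic energy per degree of freedom, T = ⟨m v²⟩/2. The velocities below are synthetic Gaussian samples (not DSMC output), chosen so that two species with the same velocity spread but different masses carry different partial temperatures.

```python
import random

random.seed(6)

def granular_temperature(mass, velocities):
    # 2D granular temperature of one species: T = <m v^2> / 2
    return mass * sum(vx * vx + vy * vy for vx, vy in velocities) / (
        2 * len(velocities))

# two species with the same velocity spread but different masses:
# the heavier species then carries a higher partial granular temperature
light = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5000)]
heavy = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5000)]
t_light = granular_temperature(1.0, light)
t_heavy = granular_temperature(4.0, heavy)
print(t_heavy > t_light)  # True: no single temperature describes the mixture
```

In a driven polydisperse mixture the velocity spreads themselves differ by species, so the partial temperatures generally do not coincide, which is the breakdown of energy equipartition the abstract reports.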
Modeling the probability distribution of positional errors incurred by residential address geocoding
Directory of Open Access Journals (Sweden)
Mazumdar Soumya
2007-01-01
Abstract
Background: The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding.
Results: Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (>15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that could not be fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well.
Conclusion: Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.
Cooperation in two-dimensional mixed-games
Amaral, Marco A; Wardil, Lucas
2015-01-01
Evolutionary game theory is a common framework to study the evolution of cooperation, where it is usually assumed that the same game is played in all interactions. Here, we investigate a model where the game played by two individuals is uniformly drawn from a sample of two different games. Using the master equation approach we show that the random mixture of two games is equivalent to playing the average game when (i) the strategies are statistically independent of the game distribution and (ii) the transition rates are linear functions of the payoffs. We also use Monte Carlo simulations on a two-dimensional lattice and mean-field techniques to investigate the scenario when the two conditions above do not hold. We find that even outside of such conditions, several quantities characterizing the mixed games are still the same as the ones obtained in the average game when the two games are not very different.
Dielectric-barrier discharges in two-dimensional lattice potentials
Sinclair, Josiah
2011-01-01
We use a pin-grid electrode to introduce a corrugated electrical potential into a planar dielectric-barrier discharge (DBD) system, so that the amplitude of the applied electric field has the profile of a two-dimensional square lattice. The lattice potential provides a template for the spatial distribution of plasma filaments in the system and has pronounced effects on the patterns that can form. The positions at which filaments become localized within the lattice unit cell vary with the width of the discharge gap. The patterns that appear when filaments either overfill or under-fill the lattice are reminiscent of those observed in other physical systems involving 2d lattices. We suggest that the connection between lattice-driven DBDs and other areas of physics may benefit from the further development of models that treat plasma filaments as interacting particles.
Approaches to verification of two-dimensional water quality models
Energy Technology Data Exchange (ETDEWEB)
Butkus, S.R. (Tennessee Valley Authority, Chattanooga, TN (USA). Water Quality Dept.)
1990-11-01
The verification of a water quality model is the procedure most needed by decision makers evaluating model predictions, but it is often inadequate or not done at all. The results of a properly conducted verification provide the decision makers with an estimate of the uncertainty associated with model predictions. Several statistical tests are available for quantifying the performance of a model. Six methods of verification were evaluated using an application of the BETTER two-dimensional water quality model for Chickamauga reservoir. Model predictions for ten state variables were compared to observed conditions from 1989. Spatial distributions of the verification measures showed the model predictions were generally adequate, except at a few specific locations in the reservoir. The most useful statistic was the mean standard error of the residuals. Quantifiable measures of model performance should be calculated during calibration and verification of future applications of the BETTER model. 25 refs., 5 figs., 7 tabs.
Thermal conductivity of disordered two-dimensional binary alloys.
Zhou, Yang; Guo, Zhi-Xin; Cao, Hai-Yuan; Chen, Shi-You; Xiang, Hong-Jun; Gong, Xin-Gao
2016-10-20
Using non-equilibrium molecular dynamics simulations, we have studied the effect of disorder on the thermal conductivity of two-dimensional (2D) C1-xNx alloys. We find that the thermal conductivity not only depends on the substitution concentration of nitrogen, but also strongly depends on the disorder distribution. A general linear relationship is revealed between the thermal conductivity and the participation ratio of phonons in 2D alloys. Localization mode analysis further indicates that the thermal conductivity variation in the ordered alloys can be attributed to the number of inequivalent atoms. As for the disordered alloys, we find that the thermal conductivity variation can be described by a simple linear formula with the disorder degree and the substitution concentration. The present study suggests some general guidance for phonon manipulation and thermal engineering in low dimensional alloys.
Swimming of Vorticella in two-dimensional confinements
Sotelo, Luz; Park, Young-Gil; Jung, Sunghwan; Ryu, Sangjin
2015-03-01
Vorticella is a ciliate observed in the stalked sessile form (trophont), which consists of an inverted bell-shaped cell body (zooid) and a slender stalk attaching the zooid to a substrate. Having circular cilia bands around the oral part, the stalkless zooid of Vorticella can serve as a model system for microorganism swimming. Here we present how the stalkless trophont zooid of Vorticella swims in two-dimensional confined geometries similar to the Hele-Shaw cell. Having harvested stalkless Vorticella zooids, we observed their swimming in water between two glass surfaces using video microscopy. Based on measured swimming trajectories and distributions of zooid orientation and swimming velocity, we analyzed how Vorticella's swimming mobility was influenced by the geometric constraints. Supported by a First Award grant from Nebraska EPSCoR.
Effective-range dependence of two-dimensional Fermi gases
Schonenberg, L. M.; Verpoort, P. C.; Conduit, G. J.
2017-08-01
The Feshbach resonance provides precise control over the scattering length and effective range of interactions between ultracold atoms. We propose the ultratransferable pseudopotential to model effective interaction ranges −1.5 ≤ k_F²R_eff² ≤ 0, where R_eff is the effective range and k_F is the Fermi wave vector, describing narrow to broad Feshbach resonances. We develop a mean-field treatment and exploit the pseudopotential to perform a variational and diffusion Monte Carlo study of the ground state of the two-dimensional Fermi gas, reporting on the ground-state energy, contact, condensate fraction, momentum distribution, and pair-correlation functions as a function of the effective interaction range across the BEC-BCS crossover. The limit k_F²R_eff² → −∞ is a gas of bosons with zero binding energy, whereas ln(k_F a) → −∞ corresponds to noninteracting bosons with infinite binding energy.
On the probability distribution of daily streamflow in the United States
Blum, Annalise G.; Archfield, Stacey A.; Vogel, Richard M.
2017-01-01
Daily streamflows are often represented by flow duration curves (FDCs), which illustrate the frequency with which flows are equaled or exceeded. FDCs have had broad applications across both operational and research hydrology for decades; however, modeling FDCs has proven elusive. Daily streamflow is a complex time series with flow values ranging over many orders of magnitude. The identification of a probability distribution that can approximate daily streamflow would improve understanding of the behavior of daily flows and the ability to estimate FDCs at ungaged river locations. Comparisons of modeled and empirical FDCs at nearly 400 unregulated, perennial streams illustrate that the four-parameter kappa distribution provides a very good representation of daily streamflow across the majority of physiographic regions in the conterminous United States (US). Further, for some regions of the US, the three-parameter generalized Pareto and lognormal distributions also provide a good approximation to FDCs. Similar results are found for the period of record FDCs, representing the long-term hydrologic regime at a site, and median annual FDCs, representing the behavior of flows in a typical year.
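An empirical flow duration curve of the kind discussed above pairs each ranked daily flow with its exceedance probability. A minimal Python sketch using Weibull plotting positions, with synthetic lognormal flows standing in for a gage record (the study itself uses observed flows and fits kappa, generalized Pareto, and lognormal distributions):

```python
import random

random.seed(7)
# stand-in daily flows spanning orders of magnitude (synthetic lognormal)
flows = [random.lognormvariate(2.0, 1.5) for _ in range(3650)]

def flow_duration_curve(flows):
    # pair each flow with its exceedance probability p = i / (n + 1)
    # (Weibull plotting position), flows ranked from largest to smallest
    n = len(flows)
    ranked = sorted(flows, reverse=True)
    return [(i / (n + 1), q) for i, q in enumerate(ranked, start=1)]

fdc = flow_duration_curve(flows)
# exceedance probability rises as flow magnitude falls along the curve
print(fdc[0][0] < fdc[-1][0])  # True
```

Fitting a parametric distribution to such a curve is what permits FDC estimation at ungaged river locations.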
Detection of two power-law tails in the probability distribution functions of massive GMCs
Schneider, N; Girichidis, P; Rayner, T; Motte, F; Andre, P; Russeil, D; Abergel, A; Anderson, L; Arzoumanian, D; Benedettini, M; Csengeri, T; Didelon, P; Francesco, J D; Griffin, M; Hill, T; Klessen, R S; Ossenkopf, V; Pezzuto, S; Rivera-Ingraham, A; Spinoglio, L; Tremblin, P; Zavagno, A
2015-01-01
We report the novel detection of complex high-column density tails in the probability distribution functions (PDFs) for three high-mass star-forming regions (CepOB3, MonR2, NGC6334), obtained from dust emission observed with Herschel. The low column density range can be fit with a lognormal distribution. A first power-law tail starts above an extinction (Av) of ~6-14. It has a slope of alpha=1.3-2 for the rho~r^-alpha profile of an equivalent density distribution (spherical or cylindrical geometry), and is thus consistent with free-fall gravitational collapse. Above Av~40, 60, and 140, we detect an excess that can be fitted by a flatter power-law tail with alpha>2. It correlates with the central regions of the cloud (ridges/hubs) of size ~1 pc and densities above 10^4 cm^-3. This excess may be caused by physical processes that slow down collapse and reduce the flow of mass towards higher densities. Possible candidates are: 1. rotation, which introduces an angular momentum barrier, 2. increasing optical depth and weaker...
Li, Ning; Liu, Xueqin; Xie, Wei; Wu, Jidong; Zhang, Peng
2013-01-01
New features of natural disasters have been observed over the last several years. The factors that influence the disasters' formation mechanisms, regularity of occurrence and main characteristics have been revealed to be more complicated and diverse in nature than previously thought. As the uncertainty involved increases, the variables need to be examined further. This article discusses the importance and the shortage of multivariate analysis of natural disasters and presents a method to estimate the joint probability of the return periods and perform a risk analysis. Severe dust storms from 1990 to 2008 in Inner Mongolia were used as a case study to test this new methodology, as they are normal and recurring climatic phenomena on Earth. Based on the 79 investigated events and according to a bivariate dust storm definition, the joint probability distribution of severe dust storms was established using the observed data of maximum wind speed and duration. The joint return periods of severe dust storms were calculated, and the relevant risk was analyzed according to the joint probability. The copula function is able to simulate severe dust storm disasters accurately. The joint return periods generated are closer to those observed in reality than the univariate return periods and thus have more value in severe dust storm disaster mitigation, strategy making, program design, and improvement of risk management. This research may prove useful in risk-based decision making. The exploration of multivariate analysis methods can also lay the foundation for further applications in natural disaster risk analysis. © 2012 Society for Risk Analysis.
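To illustrate the kind of bivariate return-period calculation the abstract describes, here is a minimal sketch using a Gumbel-Hougaard copula. The marginal probabilities and dependence parameter are hypothetical, and the copula family actually fitted in the paper may differ:

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v); theta >= 1 controls dependence
    (theta = 1 reduces to independence, C = u * v)."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

# Hypothetical marginal non-exceedance probabilities for a severe dust storm
# event: u for maximum wind speed, v for duration (illustrative values only).
u, v, theta = 0.98, 0.95, 2.0

# "OR" joint return period: at least one variable exceeds its threshold.
T_or = 1.0 / (1.0 - gumbel_copula(u, v, theta))
# "AND" joint return period: both exceed, via inclusion-exclusion.
T_and = 1.0 / (1.0 - u - v + gumbel_copula(u, v, theta))

# Univariate return periods for comparison
T_u, T_v = 1.0 / (1.0 - u), 1.0 / (1.0 - v)
```

By construction the joint "OR" return period is never longer than either univariate return period, and the "AND" return period is never shorter, which is why joint analysis changes the risk picture.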
SCAPS, a two-dimensional ion detector for mass spectrometer
Yurimoto, Hisayoshi
2014-05-01
Faraday Cup (FC) and electron multiplier (EM) detectors are among the most popular ion detectors for mass spectrometers. FC is used for high-count-rate ion measurements and EM can detect single ions. However, FC has difficulty detecting intensities below about a kilo-cps, while EM loses ion counts above about a Mega-cps. Thus, FC and EM are used complementarily, but both are zero-dimensional detectors. On the other hand, the micro channel plate (MCP) is a popular ion signal amplifier with two-dimensional capability, but an additional detection system must be attached to detect the amplified signals. Two-dimensional readout of MCP signals, however, has not achieved the level of FC and EM systems. A stacked CMOS active pixel sensor (SCAPS) has been developed to detect two-dimensional ion variations over a spatial area using semiconductor technology [1-8]. The SCAPS is an integrated-type multi-detector, different from EM and FC, and is composed of more than 500×500 pixels (micro-detectors) for imaging of a cm-scale area with pixels less than 20 µm square. The SCAPS can detect from a single ion up to 100 kilo-counts of ions per pixel. Thus, SCAPS can accumulate up to several giga-counts of ions over all pixels, i.e. over the total imaging area. The SCAPS has been applied to the stigmatic ion optics of a secondary ion mass spectrometer, as the detector of an isotope microscope [9]. The isotope microscope has capabilities of quantitative isotope imaging of a hundred-micrometer area on a sample with sub-micrometer resolution and permil precision, and of two-dimensional mass spectra on the cm-scale mass dispersion plane of a sector magnet with ten-micrometer resolution. This performance has been applied to two-dimensional isotope spatial distributions, mainly of hydrogen, carbon, nitrogen and oxygen, in natural (extra-terrestrial and terrestrial) samples and in samples simulating natural processes [e.g. 10-17]. References: [1] Matsumoto, K., et al. (1993) IEEE Trans. Electron Dev. 40
Directory of Open Access Journals (Sweden)
Han Liwei
2014-07-01
Full Text Available Monitoring data on an earth-rockfill dam constitute a form of spatial data. Such data include much uncertainty owing to limitations of the measurement information, material parameters, load, geometry, initial conditions, boundary conditions and the calculation model. So the cloud probability density of the monitoring data must be addressed. In this paper, the cloud theory model was used to address the uncertain transition between qualitative concepts and quantitative descriptions. Then an improved algorithm for cloud probability distribution density based on a backward cloud generator was proposed. This was used to effectively convert parcels of accurate data into concepts that can be described by proper qualitative linguistic values. Such a qualitative description was expressed as the cloud numerical characteristics {Ex, En, He}, which represent the characteristics of all cloud drops. The algorithm was then applied to analyze the observation data of a piezometric tube in an earth-rockfill dam. Experimental results proved that the proposed algorithm was feasible; through it, the changing regularity of the piezometric tube's water level could be revealed, and damage due to seepage in the dam body could be detected.
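A commonly used backward cloud generator estimates the characteristics {Ex, En, He} from observed cloud drops as sketched below. This is a textbook formulation, not necessarily the improved algorithm proposed in the paper:

```python
import numpy as np

def backward_cloud(x):
    """Backward cloud generator: estimate (Ex, En, He) from cloud drops x.
    Ex is the sample mean, En is obtained from the first absolute central
    moment of a normal cloud, and He from the excess of the sample variance
    over En^2. This is one standard formulation (an assumption here; the
    paper's improved algorithm may differ in detail)."""
    x = np.asarray(x, dtype=float)
    ex = x.mean()
    en = np.sqrt(np.pi / 2.0) * np.abs(x - ex).mean()
    he2 = x.var(ddof=1) - en ** 2
    he = np.sqrt(max(he2, 0.0))    # guard against sampling noise driving he2 < 0
    return ex, en, he

# Forward check: drops drawn from a pure normal cloud with known Ex and En
# (and He = 0) should be recovered by the backward generator.
rng = np.random.default_rng(0)
drops = rng.normal(loc=5.0, scale=2.0, size=200_000)   # Ex = 5, En = 2, He = 0
ex, en, he = backward_cloud(drops)
```

Running the forward generator first and checking that the backward generator recovers the inputs is the usual sanity test for these algorithms.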
Becerril, Laura; Cappello, Annalisa; Galindo, Inés; Neri, Marco; Del Negro, Ciro
2013-05-01
The 2011 submarine eruption that took place in the proximity of El Hierro Island (Canary Islands, Spain) has raised the need to identify the most likely future emission zones even on volcanoes characterized by low frequency activity. Here, we propose a probabilistic method to build the susceptibility map of El Hierro, i.e. the spatial distribution of vent opening for future eruptions, based on the probabilistic analysis of volcano-structural data of the Island collected through new fieldwork measurements, bathymetric information, as well as analysis of geological maps, orthophotos and aerial photographs. These data have been divided into different datasets and converted into separate and weighted probability density functions, which were included in a non-homogeneous Poisson process to produce the volcanic susceptibility map. The most likely area to host new eruptions in El Hierro is in the south-western part of the West rift. High probability locations are also found in the Northeast and South rifts, and along the submarine parts of the rifts. This map represents the first effort to deal with the volcanic hazard at El Hierro and can be a support tool for decision makers in land planning, emergency measures and civil defense actions.
DEFF Research Database (Denmark)
Helles, Glennie; Fonseca, Rasmus
2009-01-01
Predicting the three-dimensional structure of a protein from its amino acid sequence is currently one of the most challenging problems in bioinformatics. The internal structure of helices and sheets is highly recurrent and help reduce the search space significantly. However, random coil segments...... make up nearly 40\\% of proteins, and they do not have any apparent recurrent patterns which complicates overall prediction accuracy of protein structure prediction methods. Luckily, previous work has indicated that coil segments are in fact not completely random in structure and flanking residues do...... seem to have a significant influence on the dihedral angles adopted by the individual amino acids in coil segments. In this work we attempt to predict a probability distribution of these dihedral angles based on the flanking residues. While attempts to predict dihedral angles of coil segments have been...
Institute of Scientific and Technical Information of China (English)
LI Hai-Xia; CHENG Chuan-Fu
2011-01-01
We study the light scattering of an orthogonal anisotropic rough surface with a secondary most-probable slope distribution. It is found that the scattered intensity profiles have obvious secondary maxima, and in the direction perpendicular to the plane of incidence, the secondary maxima are oriented in a curve on the observation plane, which is called the orientation curve. By numerical calculation of the scattering wave fields with the height data of the sample, it is validated that the secondary maxima are induced by the side face element, which constitutes the prismoid structure of the anisotropic surface. We derive the equation of the quadratic orientation curve. Experimentally, we construct the system for light scattering measurement using a CCD. The scattered intensity profiles are extracted from the images at different angles of incidence along the orientation curves. The experimental results conform to the theory.
EVALUATION OF THE PROBABILITY DISTRIBUTION OF PITTING CORROSION FATIGUE LIFE IN AIRCRAFT MATERIALS
Institute of Scientific and Technical Information of China (English)
王清远; N.KAWAGOISHI; Q.CHEN; R.M.PIDAPARTI
2003-01-01
Corrosion and fatigue properties of aircraft materials are known to have a considerable scatter due to the random nature of materials, loading, and environmental conditions. A probabilistic approach for predicting the pitting corrosion fatigue life has been investigated which captures the effect of the interaction of the cyclic load and corrosive environment and all stages of the corrosion fatigue process (i.e. the pit nucleation and growth, pit-crack transition, short- and long-crack propagation). The probabilistic model investigated considers the uncertainties in the initial pit size, corrosion pitting current, and material properties due to the scatter found in the experimental data. Monte Carlo simulations were performed to define the failure probability distribution. Predicted cumulative distribution functions of fatigue life agreed reasonably well with the existing experimental data.
Andrade, Daniel
2012-01-01
We present a new method to propagate lower bounds on conditional probability distributions in conventional Bayesian networks. Our method guarantees to provide outer approximations of the exact lower bounds. A key advantage is that we can use any available algorithms and tools for Bayesian networks in order to represent and infer lower bounds. This new method yields results that are provably exact for trees with binary variables, and results which are competitive to existing approximations in credal networks for all other network structures. Our method is not limited to a specific kind of network structure. Basically, it is also not restricted to a specific kind of inference, but we restrict our analysis to prognostic inference in this article. The computational complexity is superior to that of other existing approaches.
Dai, Mi; Wang, Yun
2016-06-01
In order to obtain robust cosmological constraints from Type Ia supernova (SN Ia) data, we have applied Markov Chain Monte Carlo (MCMC) to SN Ia lightcurve fitting. We develop a method for sampling the resultant probability density distributions (pdf) of the SN Ia lightcurve parameters in the MCMC likelihood analysis to constrain cosmological parameters, and validate it using simulated data sets. Applying this method to the `joint lightcurve analysis (JLA)' data set of SNe Ia, we find that sampling the SN Ia lightcurve parameter pdf's leads to cosmological parameters closer to that of a flat Universe with a cosmological constant, compared to the usual practice of using only the best-fitting values of the SN Ia lightcurve parameters. Our method will be useful in the use of SN Ia data for precision cosmology.
Arnaut, L R
2006-01-01
Using a TE/TM decomposition for an angular plane-wave spectrum of free random electromagnetic waves and matched boundary conditions, we derive the probability density function for the energy density of the vector electric field in the presence of a semi-infinite isotropic medium. The theoretical analysis is illustrated with calculations and results for good electric conductors and for a lossless dielectric half-space. The influence of the permittivity and conductivity on the intensity, random polarization, statistical distribution and standard deviation of the field is investigated, both for incident plus reflected fields and for refracted fields. External refraction is found to result in compression of the fluctuations of the random field.
Seto, Naoki
2014-01-01
We analytically discuss probability distribution function (PDF) for inclinations of merging compact binaries whose gravitational waves are coherently detected by a network of ground based interferometers. The PDF would be useful for studying prospects of (1) simultaneously detecting electromagnetic signals (such as gamma-ray-bursts) associated with binary mergers and (2) statistically constraining the related theoretical models from the actual observational data of multi-messenger astronomy. Our approach is similar to Schutz (2011), but we explicitly include the dependence of the polarization angles of the binaries, based on the concise formulation given in Cutler and Flanagan (1994). We find that the overall profiles of the PDFs are similar for any networks composed by the second generation detectors (Advanced-LIGO, Advanced-Virgo, KAGRA, LIGO-India). For example, 5.1% of detected binaries would have inclination angle less than 10 degree with at most 0.1% differences between the potential networks. A perturb...
Ossenkopf, Volker; Schneider, Nicola; Federrath, Christoph; Klessen, Ralf S
2016-01-01
Probability distribution functions (PDFs) of column densities are an established tool to characterize the evolutionary state of interstellar clouds. Using simulations, we show to what degree their determination is affected by noise, line-of-sight contamination, field selection, and the incomplete sampling in interferometric measurements. We solve the integrals that describe the convolution of a cloud PDF with contaminating sources and study the impact of missing information on the measured column density PDF. The effect of observational noise can be easily estimated and corrected for if the root mean square (rms) of the noise is known. For $\\sigma_{noise}$ values below 40\\,\\% of the typical cloud column density, $N_{peak}$, this involves almost no degradation of the accuracy of the PDF parameters. For higher noise levels and narrow cloud PDFs the width of the PDF becomes increasingly uncertain. A contamination by turbulent foreground or background clouds can be removed as a constant shield if the PDF of the c...
The HI Probability Distribution Function and the Atomic-to-Molecular Transition in Molecular Clouds
Imara, Nia
2016-01-01
We characterize the column density probability distribution functions (PDFs) of the atomic hydrogen gas, HI, associated with seven Galactic molecular clouds (MCs). We use 21 cm observations from the Leiden/Argentine/Bonn Galactic HI Survey to derive column density maps and PDFs. We find that the peaks of the HI PDFs occur at column densities ranging from ~1-2$\times 10^{21}$ cm$^{-2}$ (equivalently, ~0.5-1 mag). The PDFs are uniformly narrow, with a mean dispersion of $\sigma_{HI}\approx 10^{20}$ cm$^{-2}$ (~0.1 mag). We also investigate the HI-to-H$_2$ transition towards the cloud complexes and estimate HI surface densities ranging from 7-16 $M_\odot$ pc$^{-2}$ at the transition. We propose that the HI PDF is a fitting tool for identifying the HI-to-H$_2$ transition column in Galactic MCs.
Probability distribution function and multiscaling properties in the Korean stock market
Lee, Kyoung Eun; Lee, Jae Woo
2007-09-01
We consider the probability distribution function (pdf) and the multiscaling properties of the index and the traded volume in the Korean stock market. We observed the power law of the pdf at the fat tail region for the return, volatility, the traded volume, and changes of the traded volume. We also investigate the multifractality in the Korean stock market. We consider the multifractality by the detrended fluctuation analysis (MFDFA). We observed the multiscaling behaviors for index, return, traded volume, and the changes of the traded volume. We apply MFDFA method for the randomly shuffled time series to observe the effects of the autocorrelations. The multifractality is strongly originated from the long time correlations of the time series.
Analysis of Low Probability of Intercept (LPI) Radar Signals Using the Wigner Distribution
Gau, Jen-Yu
2002-09-01
The parameters of Low Probability of Intercept (LPI) radar signals are hard to identify using traditional periodogram signal processing techniques. Using the Wigner Distribution (WD), this thesis examines eight types of LPI radar signals. Signal-to-noise ratios of 0 dB and -6 dB are also investigated. The eight types of LPI radar signals examined include Frequency Modulation Continuous Wave (FMCW), Frank code, P1 code, P2 code, P3 code, P4 code, COSTAS frequency hopping and Phase Shift Keying/Frequency Shift Keying (PSK/FSK) signals. Binary Phase Shift Keying (BPSK) signals, although not used in modern LPI radars, are also examined to further illustrate the principal characteristics of the WD.
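The time-frequency analysis the thesis applies can be illustrated with a textbook discrete pseudo-Wigner-Ville distribution. The chirp parameters below are arbitrary stand-ins for an FMCW waveform, and the thesis's exact processing chain is not reproduced:

```python
import numpy as np

def wigner_distribution(x):
    """Discrete pseudo-Wigner-Ville distribution of an analytic signal x.
    Returns an (N x N) real array W[time, frequency bin]; because of the
    factor 2 in the WVD lag kernel, a tone at normalized frequency f peaks
    near bin 2*f*N."""
    x = np.asarray(x, dtype=complex)
    n_samples = len(x)
    w = np.zeros((n_samples, n_samples))
    for n in range(n_samples):
        # largest symmetric lag that stays inside the record
        lmax = min(n, n_samples - 1 - n)
        tau = np.arange(-lmax, lmax + 1)
        kernel = np.zeros(n_samples, dtype=complex)
        kernel[tau % n_samples] = x[n + tau] * np.conj(x[n - tau])
        w[n] = np.fft.fft(kernel).real   # Hermitian kernel -> real spectrum
    return w

# Linear FM (FMCW-like) chirp: instantaneous frequency rises linearly with
# time, so the WVD ridge should climb to higher frequency bins.
n = 128
t = np.arange(n)
x = np.exp(1j * 2 * np.pi * (0.05 * t + 0.15 / (2 * n) * t ** 2))
W = wigner_distribution(x)
ridge = W[16:112].argmax(axis=1)    # peak frequency bin per time slice
```

For a single linear chirp the WVD has no cross terms and concentrates along the instantaneous frequency, which is what makes it attractive for extracting LPI modulation parameters.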
Binomial moments of the distance distribution and the probability of undetected error
Energy Technology Data Exchange (ETDEWEB)
Barg, A. [Lucent Technologies, Murray Hill, NJ (United States). Bell Labs.; Ashikhmin, A. [Los Alamos National Lab., NM (United States)
1998-09-01
In [1] K.A.S. Abdel-Ghaffar derives a lower bound on the probability of undetected error for unrestricted codes. The proof relies implicitly on the binomial moments of the distance distribution of the code. The authors use the fact that these moments count the size of subcodes of the code to give a very simple proof of the bound in [1] by showing that it is essentially equivalent to the Singleton bound. They discuss some combinatorial connections revealed by this proof. They also discuss some improvements of this bound. Finally, they analyze asymptotics. They show that an upper bound on the undetected error exponent that corresponds to the bound of [1] improves known bounds on this function.
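The undetected-error probability under discussion can be computed directly from a code's weight distribution. The [3,1] repetition code below is a minimal assumed example, not one from the paper:

```python
def prob_undetected(weights, p):
    """Probability of undetected error for a code used purely for error
    detection on a binary symmetric channel with crossover probability p.
    weights[i] = A_i, the number of codewords of Hamming weight i, so
    P_ue(p) = sum_{i >= 1} A_i * p^i * (1 - p)^(n - i)."""
    n = len(weights) - 1
    return sum(a * p ** i * (1 - p) ** (n - i)
               for i, a in enumerate(weights) if i > 0)

# [3,1] binary repetition code: codewords 000 and 111, so A_0 = 1, A_3 = 1.
# An error goes undetected only if the channel flips all 3 bits: P_ue = p^3.
p = 0.01
pue = prob_undetected([1, 0, 0, 1], p)
```

The binomial moments discussed in the abstract arise from re-expanding this same polynomial in powers of (1 - p), which is how the subcode-counting argument enters.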
Muralisankar, S; Manivannan, A; Balasubramaniam, P
2015-09-01
The aim of this manuscript is to investigate the mean square delay dependent-probability-distribution stability analysis of neutral type stochastic neural networks with time-delays. The time-delays are assumed to be interval time-varying and randomly occurring. Based on the new Lyapunov-Krasovskii functional and stochastic analysis approach, a novel sufficient condition is obtained in the form of linear matrix inequality such that the delayed stochastic neural networks are globally robustly asymptotically stable in the mean-square sense for all admissible uncertainties. Finally, the derived theoretical results are validated through numerical examples in which maximum allowable upper bounds are calculated for different lower bounds of time-delay.
The H I Probability Distribution Function and the Atomic-to-molecular Transition in Molecular Clouds
Imara, Nia; Burkhart, Blakesley
2016-10-01
We characterize the column-density probability distribution functions (PDFs) of the atomic hydrogen gas, H i, associated with seven Galactic molecular clouds (MCs). We use 21 cm observations from the Leiden/Argentine/Bonn Galactic H i Survey to derive column-density maps and PDFs. We find that the peaks of the H i PDFs occur at column densities in the range ~1-2 × 10^21 cm^-2 (equivalently, ~0.5-1 mag). The PDFs are uniformly narrow, with a mean dispersion of σ_HI ≈ 10^20 cm^-2 (~0.1 mag). We also investigate the H i-to-H2 transition toward the cloud complexes and estimate H i surface densities ranging from 7 to 16 M_⊙ pc^-2 at the transition. We propose that the H i PDF is a fitting tool for identifying the H i-to-H2 transition column in Galactic MCs.
Random numbers from the tails of probability distributions using the transformation method
Fulger, Daniel; Germano, Guido
2009-01-01
The speed of many one-line transformation methods for the production of, for example, Levy alpha-stable random numbers, which generalize Gaussian ones, and Mittag-Leffler random numbers, which generalize exponential ones, is very high and satisfactory for most purposes. However, for the class of decreasing probability densities fast rejection implementations like the Ziggurat by Marsaglia and Tsang promise a significant speed-up if it is possible to complement them with a method that samples the tails of the infinite support. This requires the fast generation of random numbers greater or smaller than a certain value. We present a method to achieve this, and also to generate random numbers within any arbitrary interval. We demonstrate the method showing the properties of the transform maps of the above mentioned distributions as examples of stable and geometric stable random numbers used for the stochastic solution of the space-time fractional diffusion equation.
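The core of the described method, sampling beyond a threshold by mapping uniforms on (F(a), 1) through the quantile function, can be sketched as follows. A standard exponential stands in for the stable and geometric stable laws treated in the paper:

```python
import numpy as np
from scipy import stats

def sample_tail(dist, a, size, rng):
    """Draw samples from dist conditioned on X > a via the inverse
    transform: uniforms on (F(a), 1) mapped through the quantile function.
    Works for any scipy.stats distribution with a usable ppf."""
    u = rng.uniform(dist.cdf(a), 1.0, size=size)
    return dist.ppf(u)

rng = np.random.default_rng(1)
dist = stats.expon()            # simple stand-in for a stable distribution
xs = sample_tail(dist, a=3.0, size=100_000, rng=rng)
```

For the exponential the check is immediate: by memorylessness the tail sample beyond a = 3 should have mean a + 1. For heavy-tailed laws whose ppf is expensive, the paper's contribution is precisely a fast transform map replacing this generic ppf call.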
Pinto, Pedro C
2010-01-01
We present a mathematical model for communication subject to both network interference and noise. We introduce a framework where the interferers are scattered according to a spatial Poisson process, and are operating asynchronously in a wireless environment subject to path loss, shadowing, and multipath fading. We consider both cases of slow and fast-varying interferer positions. The paper is comprised of two separate parts. In Part I, we determine the distribution of the aggregate network interference at the output of a linear receiver. We characterize the error performance of the link, in terms of average and outage probabilities. The proposed model is valid for any linear modulation scheme (e.g., M-ary phase shift keying or M-ary quadrature amplitude modulation), and captures all the essential physical parameters that affect network interference. Our work generalizes the conventional analysis of communication in the presence of additive white Gaussian noise and fast fading, allowing the traditional results...
Directory of Open Access Journals (Sweden)
V.M. Emelyanov
2015-12-01
Full Text Available Parameters of a two-dimensional analytical model for assessing the crossing of distribution ellipses in the recognition of colloidal silver nanoparticles on polyair fibers are given, based on multidimensional correlation components of the Raman spectra with control according to polarization characteristics. The reliability of nanoparticle recognition increased more than 1000-fold and was estimated from the joint probability of normal distributions of intensities of the Raman spectrograms of silver nanoparticles on polyair fibers, depending on the longitudinal and transverse polarization of the laser radiation, over the whole spectral range with analysis of the 9 main peaks.
Royle, J. Andrew; Chandler, Richard B.; Yackulic, Charles; Nichols, James D.
2012-01-01
1. Understanding the factors affecting species occurrence is a pre-eminent focus of applied ecological research. However, direct information about species occurrence is lacking for many species. Instead, researchers sometimes have to rely on so-called presence-only data (i.e. when no direct information about absences is available), which often results from opportunistic, unstructured sampling. MAXENT is a widely used software program designed to model and map species distribution using presence-only data. 2. We provide a critical review of MAXENT as applied to species distribution modelling and discuss how it can lead to inferential errors. A chief concern is that MAXENT produces a number of poorly defined indices that are not directly related to the actual parameter of interest – the probability of occurrence (ψ). This focus on an index was motivated by the belief that it is not possible to estimate ψ from presence-only data; however, we demonstrate that ψ is identifiable using conventional likelihood methods under the assumptions of random sampling and constant probability of species detection. 3. The model is implemented in a convenient r package which we use to apply the model to simulated data and data from the North American Breeding Bird Survey. We demonstrate that MAXENT produces extreme under-predictions when compared to estimates produced by logistic regression which uses the full (presence/absence) data set. We note that MAXENT predictions are extremely sensitive to specification of the background prevalence, which is not objectively estimated using the MAXENT method. 4. As with MAXENT, formal model-based inference requires a random sample of presence locations. Many presence-only data sets, such as those based on museum records and herbarium collections, may not satisfy this assumption. However, when sampling is random, we believe that inference should be based on formal methods that facilitate inference about interpretable ecological quantities
Ca2+ movement in smooth muscle cells studied with one- and two-dimensional diffusion models.
Kargacin, G; Fay, F S
1991-11-01
Although many of the processes involved in the regulation of Ca2+ in smooth muscle have been studied separately, it is still not well known how they are integrated into an overall regulatory system. To examine this question and to study the time course and spatial distribution of Ca2+ in cells after activation, one- and two-dimensional diffusion models of the cell that included the major processes thought to be involved in Ca regulation were developed. The models included terms describing Ca influx, buffering, plasma membrane extrusion, and release and reuptake by the sarcoplasmic reticulum. When possible these processes were described with known parameters. Simulations with the models indicated that the sarcoplasmic reticulum Ca pump is probably primarily responsible for the removal of cytoplasmic Ca2+ after cell activation. The plasma membrane Ca-ATPase and Na/Ca exchange appeared more likely to be involved in the long term regulation of Ca2+. Pumping processes in general had little influence on the rate of rise of Ca transients. The models also showed that spatial inhomogeneities in Ca2+ probably occur in cells during the spread of the Ca signal following activation and during the subsequent return of Ca2+ to its resting level.
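A one-dimensional version of such a diffusion model can be sketched with an explicit finite-difference scheme. The parameters below are order-of-magnitude placeholders (an assumption for illustration), not the fitted values used in the study:

```python
import numpy as np

# Minimal 1-D explicit finite-difference sketch of cytoplasmic Ca2+:
# diffusion plus a first-order removal term standing in for SR reuptake
# and plasma-membrane extrusion.
D = 0.25        # effective Ca2+ diffusion coefficient with buffers, um^2/ms
k_pump = 0.05   # lumped first-order removal rate, 1/ms
dx, dt = 0.5, 0.2          # um, ms  (dt < dx^2 / (2*D) for stability)
n, steps = 100, 500

c = np.zeros(n)
c[45:55] = 1.0             # localized Ca2+ release in the cell centre

for _ in range(steps):
    lap = np.zeros(n)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2
    # zero Laplacian at the ends crudely approximates no-flux boundaries
    c = c + dt * (D * lap - k_pump * c)

total = c.sum() * dx       # remaining cytoplasmic Ca2+ (arbitrary units)
```

Even this toy version reproduces the qualitative points in the abstract: the pump term sets the decay of the transient, while the rise and early spatial inhomogeneity are dominated by diffusion from the release site.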
National Research Council Canada - National Science Library
N. Nishimoto; S. Terae; M. Uesugi; K. Ogasawara; T. Sakurai
2008-01-01
Objectives: The objectives of this study were to investigate the transitional probability distribution of medical term boundaries between characters and to develop a parsing algorithm specifically for medical texts. Methods...
Size effect on strength and lifetime probability distributions of quasibrittle structures
Indian Academy of Sciences (India)
Zdeněk P Bažant; Jia-Liang Le
2012-02-01
Engineering structures such as aircraft, bridges, dams, nuclear containments and ships, as well as computer circuits, chips and MEMS, should be designed for failure probability < $10^{-6}-10^{-7}$ per lifetime. The safety factors required to ensure it are still determined empirically, even though they represent much larger and much more uncertain corrections to deterministic calculations than do the typical errors of modern computer analysis of structures. The empirical approach is sufficient for perfectly brittle and perfectly ductile structures since the cumulative distribution function (cdf) of random strength is known, making it possible to extrapolate to the tail from the mean and variance. However, the empirical approach does not apply to structures consisting of quasibrittle materials, which are brittle materials with inhomogeneities that are not negligible compared to structure size. This paper presents a refined theory on the strength distribution of quasibrittle structures, which is based on the fracture mechanics of nanocracks propagating by activation energy controlled small jumps through the atomic lattice and an analytical model for the multi-scale transition of strength statistics. Based on the power law for creep crack growth rate and the cdf of material strength, the lifetime distribution of quasibrittle structures under constant load is derived. Both the strength and lifetime cdfs are shown to be size- and geometry-dependent. The theory predicts intricate size effects on both the mean structural strength and lifetime, the latter being much stronger. The theory is shown to match the experimentally observed systematic deviations of strength and lifetime histograms of industrial ceramics from the Weibull distribution.
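The paper's central caution, that extrapolating a strength cdf to the 10^-6 tail depends strongly on the assumed distribution family, can be seen in a few lines. The Weibull modulus and scale below are illustrative, not material data from the paper:

```python
from scipy import stats

# Two strength models with identical mean and standard deviation: a Gaussian
# (appropriate for large ductile structures) and a two-parameter Weibull
# (classical for brittle ones). Their 1e-6 design strengths differ sharply,
# which is why extrapolating from mean and variance alone is unsafe.
m = 10.0                                   # assumed Weibull modulus
wb = stats.weibull_min(m, scale=100.0)     # strengths in, say, MPa
ga = stats.norm(loc=wb.mean(), scale=wb.std())

pf = 1e-6
s_weibull = wb.ppf(pf)                     # Weibull 1e-6 strength quantile
s_gauss = ga.ppf(pf)                       # Gaussian 1e-6 strength quantile
```

With these numbers the Gaussian model predicts a 1e-6 design strength tens of percent higher than the Weibull model, i.e. matching the mean and variance says almost nothing about the far tail that governs the safety factor.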
The convolution theorem for two-dimensional continuous wavelet transform
Institute of Scientific and Technical Information of China (English)
ZHANG CHI
2013-01-01
In this paper, application of the two-dimensional continuous wavelet transform to image processing is studied. We first show that the convolution and correlation of two continuous wavelets satisfy the required admissibility and regularity conditions, and then we derive the convolution and correlation theorem for the two-dimensional continuous wavelet transform. Finally, we present a numerical example showing the usefulness of applying the convolution theorem for the two-dimensional continuous wavelet transform to perform image restoration in the presence of additive noise.
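The two-dimensional convolution theorem itself is easy to verify numerically: circular convolution computed through the 2-D FFT must match a direct spatial-domain sum. Random arrays stand in for an image and a wavelet here:

```python
import numpy as np

rng = np.random.default_rng(7)
a = rng.standard_normal((32, 32))   # stand-in "image"
b = rng.standard_normal((32, 32))   # stand-in filter/wavelet

# Spectral side of the 2-D convolution theorem:
# FFT2(a (*) b) = FFT2(a) . FFT2(b)  for circular convolution (*).
conv_fft = np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)).real

# Same circular convolution by brute force in the spatial domain:
# c[i, j] = sum_{k, l} a[k, l] * b[(i - k) mod n, (j - l) mod n].
n = a.shape[0]
conv_direct = np.zeros_like(a)
bf = b[::-1, ::-1]                  # index-reversed copy of b
for i in range(n):
    for j in range(n):
        conv_direct[i, j] = np.sum(a * np.roll(np.roll(bf, i + 1, 0), j + 1, 1))
```

The FFT route is O(n^2 log n) versus O(n^4) for the direct sum, which is the practical payoff of the theorem for image restoration.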
The probability distribution for non-Gaussianity estimators constructed from the CMB trispectrum
Smith, Tristan L
2012-01-01
Considerable recent attention has focussed on the prospects to use the cosmic microwave background (CMB) trispectrum to probe the physics of the early universe. Here we evaluate the probability distribution function (PDF) for the standard estimator of the amplitude tau_nl of the CMB trispectrum both for the null hypothesis (i.e., for Gaussian maps with tau_nl = 0) and for maps with a non-vanishing trispectrum (|tau_nl|>0). We find these PDFs to be highly non-Gaussian in both cases. We also evaluate the variance with which the trispectrum amplitude can be measured as a function of its underlying value, tau_nl. We find a strong dependence of this variance on tau_nl. We also find that the variance does not, given the highly non-Gaussian nature of the PDF, effectively characterize the distribution. Detailed knowledge of these PDFs will therefore be imperative in order to properly interpret the implications of any given trispectrum measurement. For example, if a CMB experiment with a maximum multipole ...
A new probability distribution model of turbulent irradiance based on Born perturbation theory
Institute of Scientific and Technical Information of China (English)
[no author listed]
2010-01-01
The subject of the PDF (Probability Density Function) of the irradiance fluctuations in a turbulent atmosphere is still unsettled. Theory reliably describes the behavior in the weak turbulence regime, but theoretical descriptions in the strong and whole turbulence regimes are still controversial. Based on Born perturbation theory, the physical manifestations and correlations of three typical PDF models (Rice-Nakagami, exponential-Bessel and negative-exponential distribution) were theoretically analyzed. It is shown that these models can be derived by separately making circular-Gaussian, strong-turbulence and strong-turbulence-circular-Gaussian approximations in Born perturbation theory, which denies the viewpoint that the Rice-Nakagami model is only applicable in the extremely weak turbulence regime and provides theoretical arguments for choosing rational models in practical applications. In addition, a common shortcoming of the three models is that they are all approximations. A new model, called the Maclaurin-spread distribution, is proposed without any approximation except for assuming the correlation coefficient to be zero. So, it is considered that the new model can exactly reflect the Born perturbation theory. Simulated results prove the accuracy of this new model.
Probability distribution functions for ELM bursts in a series of JET tokamak discharges
Energy Technology Data Exchange (ETDEWEB)
Greenhough, J [Space and Astrophysics Group, Department of Physics, Warwick University, Coventry CV4 7AL (United Kingdom); Chapman, S C [Space and Astrophysics Group, Department of Physics, Warwick University, Coventry CV4 7AL (United Kingdom); Dendy, R O [Space and Astrophysics Group, Department of Physics, Warwick University, Coventry CV4 7AL (United Kingdom); Ward, D J [EURATOM/UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom)
2003-05-01
A novel statistical treatment of the full raw edge localized mode (ELM) signal from a series of previously studied JET plasmas is tested. The approach involves constructing probability distribution functions (PDFs) for ELM amplitudes and time separations, and quantifying the fit between the measured PDFs and model distributions (Gaussian, inverse exponential) and Poisson processes. Uncertainties inherent in the discreteness of the raw signal require the application of statistically rigorous techniques to distinguish ELM data points from background, and to extrapolate peak amplitudes. The accuracy of PDF construction is further constrained by the relatively small number of ELM bursts (several hundred) in each sample. In consequence the statistical technique is found to be difficult to apply to low frequency (typically Type I) ELMs, so the focus is narrowed to four JET plasmas with high frequency (typically Type III) ELMs. The results suggest that there may be several fundamentally different kinds of Type III ELMing process at work. It is concluded that this novel statistical treatment can be made to work, may have wider applications to ELM data, and has immediate practical value as an additional quantitative discriminant between classes of ELMing behaviour.
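As a rough illustration of the statistical treatment described above, the sketch below fits an exponential waiting-time model to synthetic burst separations and scores the fit with a Kolmogorov-Smirnov test. The rate and sample size are invented stand-ins, not JET measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Invented stand-in for measured ELM time separations: a Poisson process
# produces exponentially distributed waiting times between bursts.
waits = rng.exponential(scale=2.0e-3, size=400)  # ~400 bursts, 2 ms mean separation

# Fit an exponential model (location pinned at zero) and quantify the fit.
loc, scale = stats.expon.fit(waits, floc=0)
ks = stats.kstest(waits, "expon", args=(loc, scale))
print(f"fitted mean separation = {scale * 1e3:.2f} ms, KS p-value = {ks.pvalue:.2f}")
```

With only a few hundred bursts per sample, the fitted scale carries a few percent of statistical uncertainty, which is the accuracy constraint the abstract mentions.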
Coding for Two Dimensional Constrained Fields
DEFF Research Database (Denmark)
Laursen, Torben Vaarbye
2006-01-01
The important concept of entropy is introduced. In general, the entropy of a constrained field is not readily computable, but we give a series of upper and lower bounds based on one-dimensional techniques. We discuss the use of a Pickard probability model for constrained fields; the novelty lies in using this model for the No Isolated Bits constraint. Finally, we present a variation of the bit-stuffing encoding scheme that is applicable to the class of checkerboard constrained fields. It is possible to calculate the entropy of the coding scheme, thus obtaining lower bounds on the entropy of the fields considered. These lower bounds are very tight for the Run-Length limited fields. Explicit bounds are given for the diamond constrained field as well.
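The one-dimensional techniques mentioned above rest on a standard computation: the entropy (capacity) of a 1D constrained sequence is the base-2 logarithm of the largest eigenvalue of its transfer matrix. A minimal sketch for the (1, inf) run-length limited constraint (no two adjacent 1s), which is a textbook construction and not code from the thesis itself:

```python
import numpy as np

# Transfer (adjacency) matrix of the (1, inf) run-length limited constraint:
# state 0 = previous bit was 0, state 1 = previous bit was 1; "11" is forbidden.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

# Capacity of a 1D constrained system: log2 of the spectral radius of A.
capacity = float(np.log2(np.max(np.linalg.eigvals(A).real)))
print(f"capacity of the no-11 constraint: {capacity:.4f} bits/symbol")  # log2 of the golden ratio
```

The same eigenvalue computation on larger transfer matrices yields the one-dimensional upper and lower bounds on two-dimensional field entropies.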
Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas
2013-01-01
Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.
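Under a bivariate normal model, the weight-given-height question above reduces to a one-dimensional normal probability via the conditional distribution. The parameter values below are hypothetical illustration values, not the adolescent dataset used in the web application:

```python
from scipy.stats import norm

# Hypothetical adolescent parameters (illustrative only):
mu_h, mu_w = 67.0, 127.0   # mean height (in), mean weight (lb)
sd_h, sd_w = 3.0, 12.0     # standard deviations
rho = 0.5                  # height-weight correlation

# For a bivariate normal, W | H = h is normal with
#   mean = mu_w + rho * (sd_w / sd_h) * (h - mu_h),  sd = sd_w * sqrt(1 - rho^2)
h = mu_h  # condition on average height
cond_mean = mu_w + rho * sd_w / sd_h * (h - mu_h)
cond_sd = sd_w * (1 - rho ** 2) ** 0.5

p = norm.cdf(140, cond_mean, cond_sd) - norm.cdf(120, cond_mean, cond_sd)
print(f"P(120 < W < 140 | H = average) = {p:.3f}")
```

This is exactly the kind of computation the interactive web application lets students explore graphically.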
Burkhart, Blakesley; Lee, Min-Young; Murray, Claire E.; Stanimirović, Snezana
2015-10-01
The shape of the probability distribution function (PDF) of molecular clouds is an important ingredient for modern theories of star formation and turbulence. Recently, several studies have pointed out observational difficulties with constraining the low column density (i.e., A_V < 1) PDF using dust tracers. In order to constrain the shape and properties of the low column density PDF, we investigate the PDF of multiphase atomic gas in the Perseus molecular cloud using opacity-corrected GALFA-HI data and compare the PDF shape and properties to the total gas PDF and the N(H2) PDF. We find that the shape of the PDF in the atomic medium of Perseus is well described by a lognormal distribution and not by a power-law or bimodal distribution. The peak of the atomic gas PDF in and around Perseus lies at the HI-H2 transition column density for this cloud, past which the N(H2) PDF takes on a power-law form. We find that the PDF of the atomic gas is narrow, and at column densities larger than the HI-H2 transition, the HI rapidly depletes, suggesting that the HI PDF may be used to find the HI-H2 transition column density. We also calculate the sonic Mach number of the atomic gas by using HI absorption line data, which yield a median value of Ms = 4.0 for the CNM, while the HI emission PDF, which traces both the WNM and CNM, has a width more consistent with transonic turbulence.
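Measuring the width of a lognormal column density PDF can be sketched as follows on synthetic data. The sample is an illustrative stand-in for an HI column density map, with an arbitrary mean and logarithmic width, not the GALFA-HI data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for an HI column density sample: lognormal with a chosen
# logarithmic width (sigma, the quantity linked to the turbulent Mach number).
sigma_true = 0.4
N = np.exp(rng.normal(np.log(1.0e21), sigma_true, size=20000))

# Maximum-likelihood lognormal width: the standard deviation of ln N.
sigma_fit = np.log(N).std()

# A lognormal sample should also pass a normality test in log space.
pvalue = stats.normaltest(np.log(N)).pvalue
print(f"fitted sigma = {sigma_fit:.3f}, log-space normality p-value = {pvalue:.2f}")
```

A narrow fitted sigma corresponds to the narrow atomic-gas PDF described in the abstract; deviations from log-space normality would instead signal power-law or bimodal behavior.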
Development of two-dimensional hot pool model
Energy Technology Data Exchange (ETDEWEB)
Lee, Yong Bum; Hahn, H. D
2000-05-01
During a normal reactor scram, the heat generation is reduced almost instantaneously while the coolant flow rate follows the pump coast-down. This mismatch between power and flow results in a situation where the core flow entering the hot pool is at a lower temperature than the bulk pool sodium. This temperature difference leads to thermal stratification. Thermal stratification can occur in the hot pool region if the entering coolant is colder than the existing hot pool coolant and the flow momentum is not large enough to overcome the negative buoyancy force. Since the fluid of the hot pool enters the IHXs, the temperature distribution of the hot pool can alter the overall system response. Hence, it is necessary to predict the pool coolant temperature distribution with sufficient accuracy to determine the inlet temperature conditions for the IHXs and their contribution to the net buoyancy head. Therefore, in this study a two-dimensional hot pool model is developed in place of the existing one-dimensional model to predict the hot pool coolant temperature and velocity distribution more accurately, and is applied to the SSC-K code.
Internetwork magnetic field as revealed by two-dimensional inversions
Danilovic, S.; van Noort, M.; Rempel, M.
2016-09-01
Context. Properties of magnetic field in the internetwork regions are still fairly unknown because of rather weak spectropolarimetric signals. Aims: We address the matter by using the two-dimensional (2D) inversion code, which is able to retrieve the information on smallest spatial scales up to the diffraction limit, while being less susceptible to noise than most of the previous methods used. Methods: Performance of the code and the impact of various effects on the retrieved field distribution is tested first on the realistic magneto-hydrodynamic (MHD) simulations. The best inversion scenario is then applied to the real data obtained by Spectropolarimeter (SP) on board Hinode. Results: Tests on simulations show that: (1) the best choice of node position ensures a decent retrieval of all parameters; (2) the code performs well for different configurations of magnetic field; (3) slightly different noise levels or slightly different defocus included in the spatial point spread function (PSF) produces no significant effect on the results; and (4) temporal integration shifts the field distribution to a stronger, more horizontally inclined field. Conclusions: Although the contribution of the weak field is slightly overestimated owing to noise, 2D inversions are able to recover well the overall distribution of the magnetic field strength. Application of the 2D inversion code on the Hinode SP internetwork observations reveals a monotonic field strength distribution. The mean field strength at optical depth unity is ~ 130 G. At higher layers, field strength drops as the field becomes more horizontal. Regarding the distribution of the field inclination, tests show that we cannot directly retrieve it with the observations and tools at hand, however, the obtained distributions are consistent with those expected from simulations with a quasi-isotropic field inclination after accounting for observational effects.
The Chandrasekhar's Equation for Two-Dimensional Hypothetical White Dwarfs
De, Sanchari
2014-01-01
In this article we have extended the original work of Chandrasekhar on the structure of white dwarfs to the two-dimensional case. Although such two-dimensional stellar objects are hypothetical in nature, we strongly believe that the work presented in this article may be prescribed as a Master of Science level class problem for students in physics.
Beginning Introductory Physics with Two-Dimensional Motion
Huggins, Elisha
2009-01-01
During the session on "Introductory College Physics Textbooks" at the 2007 Summer Meeting of the AAPT, there was a brief discussion about whether introductory physics should begin with one-dimensional motion or two-dimensional motion. Here we present the case that by starting with two-dimensional motion, we are able to introduce a considerable…
Spatiotemporal surface solitons in two-dimensional photonic lattices.
Mihalache, Dumitru; Mazilu, Dumitru; Lederer, Falk; Kivshar, Yuri S
2007-11-01
We analyze spatiotemporal light localization in truncated two-dimensional photonic lattices and demonstrate the existence of two-dimensional surface light bullets localized in the lattice corners or the edges. We study the families of the spatiotemporal surface solitons and their properties such as bistability and compare them with the modes located deep inside the photonic lattice.
Explorative data analysis of two-dimensional electrophoresis gels
DEFF Research Database (Denmark)
Schultz, J.; Gottlieb, D.M.; Petersen, Marianne Kjerstine;
2004-01-01
Methods for classification of two-dimensional (2-DE) electrophoresis gels based on multivariate data analysis are demonstrated. Two-dimensional gels of ten wheat varieties are analyzed and it is demonstrated how to classify the wheat varieties into two qualities and a method for initial screening...
Mechanics of Apparent Horizon in Two Dimensional Dilaton Gravity
Cai, Rong-Gen
2016-01-01
In this article, we give a definition of the apparent horizon in a two dimensional general dilaton gravity theory. With this definition, we construct the mechanics of the apparent horizon by introducing a quasi-local energy of the theory. Our discussion generalizes the apparent horizon mechanics in general spherically symmetric spacetimes in four or higher dimensions to the two dimensional dilaton gravity case.
Topological aspect of disclinations in two-dimensional crystals
Institute of Scientific and Technical Information of China (English)
Qi Wei-Kai; Zhu Tao; Chen Yong; Ren Ji-Rong
2009-01-01
By using topological current theory, this paper studies the inner topological structure of disclinations during the melting of two-dimensional systems. From two-dimensional elasticity theory, it is found that there are topological currents for topological defects in the homogeneous equation. The evolution of disclinations is studied, and the branch conditions for generating, annihilating, crossing, splitting and merging of disclinations are given.
Hilbert Statistics of Vorticity Scaling in Two-Dimensional Turbulence
Tan, H S; Meng, Jianping
2014-01-01
In this paper, the scaling properties of the inverse energy cascade and forward enstrophy cascade of the vorticity field $\omega(x,y)$ in two-dimensional (2D) turbulence are analyzed. This is accomplished by applying a Hilbert-based technique, namely the Hilbert-Huang Transform, to a vorticity field obtained from a $8192^2$ grid-points direct numerical simulation of 2D turbulence with a forcing scale $k_f=100$ and an Ekman friction. The measured joint probability density function $p(C,k)$ of mode $C_i(x)$ of the vorticity $\omega$ and instantaneous wavenumber $k(x)$ is separated by the forcing scale $k_f$ into two parts, corresponding to the inverse energy cascade and the forward enstrophy cascade. It is found that each conditional pdf $p(C\vert k)$ at given wavenumber $k$ has an exponential tail. In the inverse energy cascade, the shapes of $p(C\vert k)$ collapse onto each other, indicating a nonintermittent cascade. The measured scaling exponent $\zeta_{\omega}^I(q)$ is linear with the statistical ord...
Defect engineering of two-dimensional transition metal dichalcogenides
Lin, Zhong; Carvalho, Bruno R.; Kahn, Ethan; Lv, Ruitao; Rao, Rahul; Terrones, Humberto; Pimenta, Marcos A.; Terrones, Mauricio
2016-06-01
Two-dimensional transition metal dichalcogenides (TMDs), an emerging family of layered materials, have provided researchers a fertile ground for harvesting fundamental science and emergent applications. TMDs can contain a number of different structural defects in their crystal lattices which significantly alter their physico-chemical properties. Having structural defects can be either detrimental or beneficial, depending on the targeted application. Therefore, a comprehensive understanding of structural defects is required. Here we review different defects in semiconducting TMDs by summarizing: (i) the dimensionalities and atomic structures of defects; (ii) the pathways to generating structural defects during and after synthesis and, (iii) the effects of having defects on the physico-chemical properties and applications of TMDs. Thus far, significant progress has been made, although we are probably still witnessing the tip of the iceberg. A better understanding and control of defects is important in order to move forward the field of Defect Engineering in TMDs. Finally, we also provide our perspective on the challenges and opportunities in this emerging field.
Broken Ergodicity in Two-Dimensional Homogeneous Magnetohydrodynamic Turbulence
Shebalin, John V.
2010-01-01
Two-dimensional (2-D) homogeneous magnetohydrodynamic (MHD) turbulence has many of the same qualitative features as three-dimensional (3-D) homogeneous MHD turbulence. These features include several ideal invariants, along with the phenomenon of broken ergodicity. Broken ergodicity appears when certain modes act like random variables with mean values that are large compared to their standard deviations, indicating a coherent structure or dynamo. Recently, the origin of broken ergodicity in 3-D MHD turbulence that is manifest in the lowest wavenumbers was explained. Here, a detailed description of the origins of broken ergodicity in 2-D MHD turbulence is presented. It will be seen that broken ergodicity in ideal 2-D MHD turbulence can be manifest in the lowest wavenumbers of a finite numerical model for certain initial conditions, or in the highest wavenumbers for another set of initial conditions. The origins of broken ergodicity in ideal 2-D homogeneous MHD turbulence are found through an eigenanalysis of the covariance matrices of the modal probability density functions. It will also be shown that when the lowest wavenumber magnetic field becomes quasi-stationary, the higher wavenumber modes can propagate as Alfven waves on these almost static large-scale magnetic structures.
Invariant Subspaces of the Two-Dimensional Nonlinear Evolution Equations
Directory of Open Access Journals (Sweden)
Chunrong Zhu
2016-11-01
In this paper, we develop the symmetry-related methods to study invariant subspaces of the two-dimensional nonlinear differential operators. The conditional Lie–Bäcklund symmetry and Lie point symmetry methods are used to construct invariant subspaces of two-dimensional differential operators. We first apply the multiple conditional Lie–Bäcklund symmetries to derive invariant subspaces of the two-dimensional operators. As an application, the invariant subspaces for a class of two-dimensional nonlinear quadratic operators are provided. Furthermore, the invariant subspace method in one-dimensional space combined with the Lie symmetry reduction method and the change of variables is used to obtain invariant subspaces of the two-dimensional nonlinear operators.
Goldstein, Sheldon; Lebowitz, Joel L.; Mastrodonato, Christian; Tumulka, Roderich; Zanghì, Nino
2016-03-01
A quantum system (with Hilbert space H_1) entangled with its environment (with Hilbert space H_2) is usually not attributed a wave function but only a reduced density matrix ρ_1. Nevertheless, there is a precise way of attributing to it a random wave function ψ_1, called its conditional wave function, whose probability distribution μ_1 depends on the entangled wave function ψ ∈ H_1 ⊗ H_2 in the Hilbert space of system and environment together. It also depends on a choice of orthonormal basis of H_2 but, in relevant cases, as we show, not very much. We prove several universality (or typicality) results about μ_1, e.g., that if the environment is sufficiently large then for every orthonormal basis of H_2, most entangled states ψ with given reduced density matrix ρ_1 are such that μ_1 is close to one of the so-called GAP (Gaussian adjusted projected) measures, GAP(ρ_1). We also show that, for most entangled states ψ from a microcanonical subspace (spanned by the eigenvectors of the Hamiltonian with energies in a narrow interval [E, E + δE]) and most orthonormal bases of H_2, μ_1 is close to GAP(tr_2 ρ_mc) with ρ_mc the normalized projection to the microcanonical subspace. In particular, if the coupling between the system and the environment is weak, then μ_1 is close to GAP(ρ_β) with ρ_β the canonical density matrix on H_1 at inverse temperature β = β(E). This provides the mathematical justification of our claim in Goldstein et al. (J Stat Phys 125: 1193-1221, 2006) that GAP measures describe the thermal equilibrium distribution of the wave function.
Molecular rattling in two-dimensional fluids: Simulations and theory
Variyar, Jayasankar E.; Kivelson, Daniel; Tarjus, Gilles; Talbot, Julian
1992-01-01
We have carried out molecular dynamics simulations over a range of densities for two-dimensional fluids consisting of hard, soft, and Lennard-Jones disks. For comparison we have also carried out simulations for the corresponding systems in which all but one particle are frozen in position. We have studied the velocity autocorrelation functions and the closely related velocity-sign autocorrelation functions, and have examined the probabilities per unit time that a particle will undergo a first velocity sign reversal after an elapsed time t measured alternately from the last velocity reversal or from a given arbitrary time. At all densities studied, the first of these probabilities per unit time is zero at t=0 and rises to a maximum at a later time, but as the hardness of the disks is increased, the maximum moves in toward t→0. This maximum can be correlated with the "negative" dip observed in the velocity correlation functions when plotted versus time. Our conclusion is that all these phenomena can be explained qualitatively on the basis of a model where memory does not extend back beyond the last velocity reversal. However, at high density, the velocity-sign autocorrelation function not only shows a negative dip (which is explained by the model) but also a second "oscillation" which is not described, even qualitatively, by the model. We conclude that the first dip in the velocity and velocity-sign correlation functions can occur even if there are no correlated or coherent librations, but the existence of a "second" oscillation is a better indication of such correlations.
Probability Distribution Function of a Forced Passive Tracer in the Lower Stratosphere
Institute of Scientific and Technical Information of China (English)
None
2007-01-01
The probability distribution function (PDF) of a passive tracer, forced by a "mean gradient", is studied. First, we take two theoretical approaches, the Lagrangian and the conditional closure formalisms, to study the PDFs of such an externally forced passive tracer. Then, we carry out numerical simulations for an idealized random flow on a sphere and for European Centre for Medium-Range Weather Forecasts (ECMWF) stratospheric winds to test whether the mean-gradient model can be applied to studying stratospheric tracer mixing in midlatitude surf zones, in which a weak and poleward zonal-mean gradient is maintained by tracer leakage through polar and tropical mixing barriers, and whether the PDFs of tracer fluctuations in midlatitudes are consistent with the theoretical predictions. The numerical simulations show that when diffusive dissipation is balanced by the mean-gradient forcing, the PDF in the random flow and the Southern-Hemisphere PDFs in ECMWF winds show time-invariant exponential tails, consistent with theoretical predictions. In the Northern Hemisphere, the PDFs exhibit non-Gaussian tails. However, the PDF tails are not consistent with theoretical expectations. The long-term behavior of the PDF tails of the forced tracer is compared to that of a decaying tracer. It is found that the PDF tails of the decaying tracer are time-dependent, and evolve toward flatter than exponential.
Schneider, N; Csengeri, T; Klessen, R; Federrath, C; Tremblin, P; Girichidis, P; Bontemps, S; Andre, Ph
2014-01-01
Column density maps of molecular clouds are one of the most important observables in the context of molecular cloud and star formation (SF) studies. With Herschel it is now possible to reveal rather precisely the column density of dust, which is the best tracer of the bulk of material in molecular clouds. However, line-of-sight (LOS) contamination from fore- or background clouds can lead to an overestimation of the dust emission of molecular clouds, in particular for distant clouds. This implies too-high values for column density and mass, and a misleading interpretation of probability distribution functions (PDFs) of the column density. In this paper, we demonstrate by using observations and simulations how LOS contamination affects the PDF. We apply a first-order approximation (removing a constant level) to the molecular clouds of Auriga and Maddalena (low-mass star-forming) and Carina and NGC 3603 (both high-mass SF regions). In perfect agreement with the simulations, we find that the PDFs become broader, ...
Turbulence-Induced Relative Velocity of Dust Particles III: The Probability Distribution
Pan, Liubin; Scalo, John
2014-01-01
Motivated by its important role in the collisional growth of dust particles in protoplanetary disks, we investigate the probability distribution function (PDF) of the relative velocity of inertial particles suspended in turbulent flows. Using the simulation from our previous work, we compute the relative velocity PDF as a function of the friction timescales, tau_p1 and tau_p2, of two particles of arbitrary sizes. The friction time of particles included in the simulation ranges from 0.1 tau_eta to 54 T_L, with tau_eta and T_L the Kolmogorov time and the Lagrangian correlation time of the flow, respectively. The relative velocity PDF is generically non-Gaussian, exhibiting fat tails. For a fixed value of tau_p1, the PDF is the fattest for equal-size particles (tau_p2 ~ tau_p1), and becomes thinner at both tau_p2 < tau_p1 and tau_p2 > tau_p1. Defining f as the friction time ratio of the smaller particle to the larger one, we find that, at a given f in 1/2>T_L). These features are successfully explained by the Pan & Padoan model. Usin...
Exact probability distributions of selected species in stochastic chemical reaction networks.
López-Caamal, Fernando; Marquez-Lago, Tatiana T
2014-09-01
Chemical reactions are discrete, stochastic events. As such, the species' molecular numbers can be described by an associated master equation. However, handling such an equation may become difficult due to the large size of reaction networks. A commonly used approach to forecast the behaviour of reaction networks is to perform computational simulations of such systems and analyse their outcome statistically. This approach, however, might require high computational costs to provide accurate results. In this paper we opt for an analytical approach to obtain the time-dependent solution of the Chemical Master Equation for selected species in a general reaction network. When the reaction networks are composed exclusively of zeroth and first-order reactions, this analytical approach significantly alleviates the computational burden required by simulation-based methods. By building upon these analytical solutions, we analyse a general monomolecular reaction network with an arbitrary number of species to obtain the exact marginal probability distribution for selected species. Additionally, we study two particular topologies of monomolecular reaction networks, namely (i) an unbranched chain of monomolecular reactions with and without synthesis and degradation reactions and (ii) a circular chain of monomolecular reactions. We illustrate our methodology and alternative ways to use it for non-linear systems by analysing a protein autoactivation mechanism. Later, we compare the computational load required for the implementation of our results and a pure computational approach to analyse an unbranched chain of monomolecular reactions. Finally, we study calcium ions gates in the sarco/endoplasmic reticulum mediated by ryanodine receptors.
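For the simplest monomolecular network, synthesis and degradation of a single species starting from zero molecules, the exact time-dependent marginal is Poisson with mean lambda(t) = (k/gamma)(1 - exp(-gamma t)). The sketch below is an illustrative special case, not the paper's general formalism; it checks that solution term by term against the chemical master equation:

```python
import math

k, g, t = 2.0, 0.5, 1.3  # synthesis rate, degradation rate, time (illustrative values)

lam = (k / g) * (1.0 - math.exp(-g * t))  # Poisson mean of the exact solution

def P(n, l):
    # Poisson marginal: the exact distribution for 0 -> X -> 0 started empty.
    return 0.0 if n < 0 else math.exp(-l) * l ** n / math.factorial(n)

# Master equation: dP_n/dt = k P_{n-1} - k P_n + g (n+1) P_{n+1} - g n P_n.
# For a Poisson with time-dependent mean, dP_n/dt = dlam/dt * (P_{n-1} - P_n).
dlam = k - g * lam
for n in range(10):
    lhs = dlam * (P(n - 1, lam) - P(n, lam))
    rhs = k * P(n - 1, lam) - k * P(n, lam) + g * (n + 1) * P(n + 1, lam) - g * n * P(n, lam)
    assert abs(lhs - rhs) < 1e-12
print("Poisson(lam(t)) satisfies the master equation; lam =", round(lam, 4))
```

This analytic check is exactly the kind of solution that removes the need for repeated stochastic simulation in zeroth- and first-order networks.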
ANNz2: Photometric Redshift and Probability Distribution Function Estimation using Machine Learning
Sadeh, I.; Abdalla, F. B.; Lahav, O.
2016-10-01
We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister & Lahav, which now includes generation of full probability distribution functions (PDFs). ANNz2 utilizes multiple machine learning methods, such as artificial neural networks and boosted decision/regression trees. The objective of the algorithm is to optimize the performance of the photo-z estimation, to properly derive the associated uncertainties, and to produce both single-value solutions and PDFs. In addition, estimators are made available, which mitigate possible problems of non-representative or incomplete spectroscopic training samples. ANNz2 has already been used as part of the first weak lensing analysis of the Dark Energy Survey, and is included in the experiment's first public data release. Here we illustrate the functionality of the code using data from the tenth data release of the Sloan Digital Sky Survey and the Baryon Oscillation Spectroscopic Survey. The code is available for download at http://github.com/IftachSadeh/ANNZ.
Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw
2011-07-01
Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5⟨Ed⟩, where ⟨Ed⟩ is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean, like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.
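For context on how heavy-tailed a lognormal can be, its skewness and excess kurtosis follow in closed form from the log-scale width sigma; a modest width already reproduces the magnitudes quoted above. This is a quick check using the standard lognormal moment formulas, not the paper's data:

```python
import math

def lognormal_shape(sigma):
    """Skewness and excess kurtosis of a lognormal with log-scale width sigma."""
    w = math.exp(sigma ** 2)
    skew = (w + 2.0) * math.sqrt(w - 1.0)
    ex_kurt = w ** 4 + 2.0 * w ** 3 + 3.0 * w ** 2 - 6.0
    return skew, ex_kurt

# Illustrative width: sigma = 0.75 already gives skewness > 3 and
# excess kurtosis > 20, the magnitudes reported near the surface.
skew, kurt = lognormal_shape(0.75)
print(f"skewness = {skew:.2f}, excess kurtosis = {kurt:.2f}")
```

The rapid growth of both quantities with sigma shows why even the lognormal eventually underestimates the extreme flash probabilities in the far right tail.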
Interaction of two-dimensional impulsively started airfoils
Institute of Scientific and Technical Information of China (English)
WU Fu-bing; ZENG Nian-dong; ZHANG Liang; WU De-ming
2004-01-01
Continuous vorticity panels were used to model general unsteady, inviscid, incompressible, two-dimensional flows. The geometry of the airfoil was approximated by a series of short straight segments having endpoints that lie on the actual surface. A piecewise linear, continuous distribution of vorticity over the airfoil surface was used to generate the disturbance flow. The no-penetration condition was imposed at the midpoint of each segment and at discrete times. The wake was simulated by a system of point vortices, which moved at the local fluid velocity. At each time step, a new wake panel with uniform vorticity distribution was attached to the trailing edge, and the condition of constant circulation around the airfoil and wake was imposed. A new expression for the Kutta condition was developed to study the interference effect between two impulsively started NACA0012 airfoils. The tandem arrangement was found to be the most effective at enhancing the lift of the rear airfoil. The interference effect between tidal turbine blades was shown clearly.
Institute of Scientific and Technical Information of China (English)
Ren-Jie He; Zhen-Yu Yang
2012-01-01
Differential evolution (DE) has become a very popular and effective global optimization algorithm in the area of evolutionary computation. In spite of many advantages such as conceptual simplicity, high efficiency and ease of use, DE has two main components, i.e., the mutation scheme and parameter control, which significantly influence its performance. In this paper we intend to improve the performance of DE by using carefully considered strategies for both of these two components. We first design an adaptive mutation scheme, which adaptively makes use of the bias of superior individuals when generating new solutions. Although introducing such a bias is not a new idea, existing methods often use heuristic rules to control the bias. They can hardly maintain the appropriate balance between exploration and exploitation during the search process, because the preferred bias is often problem- and evolution-stage-dependent. Instead of using any fixed rule, a novel strategy is adopted in the new adaptive mutation scheme to adjust the bias dynamically based on the identified local fitness landscape captured by the current population. As for the other component, i.e., parameter control, we propose a mechanism that uses the Lévy probability distribution to adaptively control the scale factor F of DE. For every mutation in each generation, an F_i is produced from one of four different Lévy distributions according to their historical performance. With the adaptive mutation scheme and parameter control using the Lévy distribution as the main components, we present a new DE variant called Lévy DE (LDE). Experimental studies were carried out on a broad range of benchmark functions in global numerical optimization. The results show that LDE is very competitive, and both of the two main components have contributed to its overall performance. The scalability of LDE is also discussed by conducting experiments on some selected benchmark functions with dimensions from 30 to 200.
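A much-simplified sketch of the parameter-control idea follows: DE/rand/1/bin in which the scale factor F is redrawn for every mutation from a single heavy-tailed Lévy-like distribution (via Mantegna's algorithm), rather than LDE's adaptive choice among four Lévy distributions. All parameter values and the test function are illustrative assumptions, not the paper's experimental setup:

```python
import math
import numpy as np

rng = np.random.default_rng(42)

def mantegna_levy(alpha):
    """One heavy-tailed step via Mantegna's algorithm (approximate Levy-stable)."""
    s = (math.gamma(1 + alpha) * math.sin(math.pi * alpha / 2)
         / (math.gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2))) ** (1 / alpha)
    u = rng.normal(0.0, s)
    v = rng.normal(0.0, 1.0)
    return u / abs(v) ** (1 / alpha)

def de_levy(func, dim=10, pop=30, gens=200, cr=0.9, alpha=1.4):
    """DE/rand/1/bin with F redrawn from a Levy-like distribution per mutation."""
    x = rng.uniform(-5.0, 5.0, (pop, dim))
    fit = np.array([func(xi) for xi in x])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            f = min(abs(mantegna_levy(alpha)), 1.0)  # clip the heavy tail to (0, 1]
            mutant = x[a] + f * (x[b] - x[c])
            cross = rng.random(dim) < cr
            cross[rng.integers(dim)] = True  # guarantee at least one mutated gene
            trial = np.where(cross, mutant, x[i])
            f_trial = func(trial)
            if f_trial <= fit[i]:
                x[i], fit[i] = trial, f_trial
    return float(fit.min())

best = de_levy(lambda v: float(np.sum(v ** 2)))  # sphere benchmark function
print(f"best sphere-function value found: {best:.3e}")
```

The heavy tail occasionally produces large F values that help the population escape local structure, while the clipped bulk of the distribution keeps the usual DE convergence behavior.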
Two-dimensional discrete gap breathers in a two-dimensional discrete diatomic Klein-Gordon lattice
Institute of Scientific and Technical Information of China (English)
XU Quan; QIANG Tian
2009-01-01
We study the existence and stability of two-dimensional discrete breathers in a two-dimensional discrete diatomic Klein-Gordon lattice consisting of alternating light and heavy atoms, with nearest-neighbor harmonic coupling. Localized solutions to the corresponding nonlinear differential equations with frequencies inside the gap of the linear wave spectrum, i.e. two-dimensional gap breathers, are investigated numerically. The numerical results of the corresponding algebraic equations demonstrate the possibility of the existence of two-dimensional gap breathers with three types of symmetries, i.e., symmetric, twin-antisymmetric and single-antisymmetric. Their stability depends on the nonlinear on-site potential (soft or hard), the interaction potential (attractive or repulsive) and the center of the two-dimensional gap breather (on a light or a heavy atom).
Simple models of two-dimensional information sources and codes
DEFF Research Database (Denmark)
Justesen, Jørn; Shtarkov, Y. M.
1998-01-01
We consider discrete random fields which have simple descriptions of rows and columns. We present constructions which combine high entropy with simple means of generating the fields and analyzing the probability distribution. Hidden state Markov sources are an essential tool in the construction...
RESEARCH ON TWO-DIMENSIONAL LDA FOR FACE RECOGNITION
Institute of Scientific and Technical Information of China (English)
Han Ke; Zhu Xiuchang
2006-01-01
The letter presents an improved two-dimensional linear discriminant analysis method for feature extraction. Compared with the current two-dimensional methods for feature extraction, the improved two-dimensional linear discriminant analysis method makes full use of not only the row and the column direction information of face images but also the discriminant information among different classes. The method is evaluated using the Nanjing University of Science and Technology (NUST) 603 face database and the Aleix Martinez and Robert Benavente (AR) face database. Experimental results show that the method in the letter is feasible and effective.
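A minimal sketch of the row-direction variant of two-dimensional LDA (scatter matrices built directly from image matrices, projection onto the leading discriminant directions). The letter's combined row/column scheme is not reproduced here, and the synthetic data are purely illustrative.

```python
import numpy as np

def two_d_lda(images, labels, d=2):
    """Row-direction 2DLDA: project m x n image matrices onto d discriminant axes."""
    images = np.asarray(images, dtype=float)
    labels = np.asarray(labels)
    n = images.shape[2]
    overall_mean = images.mean(axis=0)
    Sw = np.zeros((n, n))                      # within-class scatter (n x n)
    Sb = np.zeros((n, n))                      # between-class scatter (n x n)
    for c in np.unique(labels):
        group = images[labels == c]
        mc = group.mean(axis=0)
        for A in group:
            Sw += (A - mc).T @ (A - mc)
        diff = mc - overall_mean
        Sb += len(group) * diff.T @ diff
    # generalized eigenproblem Sb w = lambda Sw w (Sw lightly regularized)
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(n), Sb))
    order = np.argsort(eigvals.real)[::-1]
    W = eigvecs[:, order[:d]].real             # n x d projection matrix
    return np.array([A @ W for A in images]), W

rng = np.random.default_rng(1)
imgs = np.concatenate([rng.normal(0, 1, (10, 8, 6)), rng.normal(2, 1, (10, 8, 6))])
labels = [0] * 10 + [1] * 10
feats, W = two_d_lda(imgs, labels, d=2)
```

Because the scatter matrices are only n x n (n = image width), no image-to-vector flattening is needed, which is the main computational appeal of 2D methods.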
ONE-DIMENSIONAL AND TWO-DIMENSIONAL LEADERSHIP STYLES
Directory of Open Access Journals (Sweden)
Nikola Stefanović
2007-06-01
In order to motivate their group members to perform certain tasks, leaders use different leadership styles. These styles are based on leaders' backgrounds, knowledge, values, experiences, and expectations. The one-dimensional styles, used by many world leaders, are autocratic and democratic styles. These styles lie on the two opposite sides of the leadership spectrum. In order to precisely define the leadership styles on the spectrum between the autocratic leadership style and the democratic leadership style, leadership theory researchers use two-dimensional matrices. The two-dimensional matrices define leadership styles on the basis of different parameters. By using these parameters, one can identify two-dimensional styles.
Directory of Open Access Journals (Sweden)
Jinhua Xu
Visual saliency is the perceptual quality that makes some items in visual scenes stand out from their immediate contexts. Visual saliency plays important roles in natural vision in that saliency can direct eye movements, deploy attention, and facilitate tasks like object detection and scene understanding. A central unsolved issue is: What features should be encoded in the early visual cortex for detecting salient features in natural scenes? To explore this important issue, we propose a hypothesis that visual saliency is based on efficient encoding of the probability distributions (PDs) of visual variables in specific contexts in natural scenes, referred to as context-mediated PDs in natural scenes. In this concept, computational units in the model of the early visual system do not act as feature detectors but rather as estimators of the context-mediated PDs of a full range of visual variables in natural scenes, which directly give rise to a measure of visual saliency of any input stimulus. To test this hypothesis, we developed a model of the context-mediated PDs in natural scenes using a modified algorithm for independent component analysis (ICA) and derived a measure of visual saliency based on these PDs estimated from a set of natural scenes. We demonstrated that visual saliency based on the context-mediated PDs in natural scenes effectively predicts human gaze in free-viewing of both static and dynamic natural scenes. This study suggests that the computation based on the context-mediated PDs of visual variables in natural scenes may underlie the neural mechanism in the early visual cortex for detecting salient features in natural scenes.
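The core idea, saliency as the self-information of a visual variable under a context-estimated probability distribution, can be sketched with a single scalar variable (pixel intensity) and a histogram density. The paper's ICA-derived features and natural-scene statistics are replaced here by this deliberately simplified stand-in.

```python
import numpy as np

def saliency_map(image, bins=32):
    """Self-information saliency: intensities that are rare under the image's
    own (context-estimated) distribution get high saliency, common ones low."""
    hist, edges = np.histogram(image, bins=bins)
    p = (hist + 1e-9) / hist.sum()                 # estimated PD of the variable
    idx = np.clip(np.digitize(image, edges[1:-1]), 0, bins - 1)
    return -np.log(p[idx])                         # saliency = -log p(x | context)

rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (64, 64))
img[30:34, 30:34] = 6.0                            # a rare, hence salient, patch
S = saliency_map(img)
```

The bright patch occupies a low-probability bin, so its self-information (and hence saliency) exceeds the map average, mirroring the "stand out from the context" notion in the abstract.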
Characterisation of seasonal flood types according to timescales in mixed probability distributions
Fischer, Svenja; Schumann, Andreas; Schulte, Markus
2016-08-01
When flood statistics are based on annual maximum series (AMS), the sample often contains flood peaks which differ in their genesis. If the ratios among event types change over the range of observations, the extrapolation of a probability distribution function (pdf) can be dominated by a majority of events that belong to a certain flood type. If this type is not typical for extraordinarily large extremes, such an extrapolation of the pdf is misleading. To avoid this breach of the assumption of homogeneity, seasonal models were developed that distinguish between winter and summer floods. We show that a distinction between summer and winter floods is not always sufficient if seasonal series include events with different geneses. Here, we differentiate floods by their timescales into groups of long and short events. A statistical method for such a distinction of events is presented. To demonstrate its applicability, timescales for winter and summer floods in a German river basin were estimated. It is shown that summer floods can be separated into two main groups, but in our study region the sample of winter floods consists of at least three different flood types. The pdfs of the two groups of summer floods are combined via a new mixing model. This model accounts for the fact that information about parallel events is incomplete if only their maximum values are used, because some of the realisations are overlaid. A statistical method resulting in an amendment of statistical parameters is proposed. The application in a German case study demonstrates the advantages of the new model, with specific emphasis on flood types.
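A minimal numerical illustration of combining two flood-type distributions: the sketch contrasts a weighted mixture (each annual maximum stems from one type) with the product-of-CDFs form (the annual maximum of two parallel, independent per-type maxima). Gumbel margins and all parameter values are assumed for illustration; this is not the amended mixing model of the paper.

```python
from scipy.optimize import brentq
from scipy.stats import gumbel_r

# Two flood-type groups with different Gumbel parameters (illustrative values)
short_events = gumbel_r(loc=100.0, scale=20.0)   # short-timescale floods
long_events = gumbel_r(loc=140.0, scale=35.0)    # long-timescale floods

def mixture_cdf(x, p_short=0.6):
    """Weighted mixture: each annual maximum comes from exactly one type."""
    return p_short * short_events.cdf(x) + (1 - p_short) * long_events.cdf(x)

def product_cdf(x):
    """Annual maximum over two independent per-type maxima (both occur each year)."""
    return short_events.cdf(x) * long_events.cdf(x)

# 100-year flood quantiles differ between the two formulations
q = 0.99
x100_mix = brentq(lambda x: mixture_cdf(x) - q, 0.0, 1000.0)
x100_prod = brentq(lambda x: product_cdf(x) - q, 0.0, 1000.0)
```

Since the product CDF is everywhere below the mixture CDF, the product form always yields the larger design flood, which is one reason the choice of combination model matters for extrapolation.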
Schneider, N.; Bontemps, S.; Motte, F.; Ossenkopf, V.; Klessen, R. S.; Simon, R.; Fechtenbaum, S.; Herpin, F.; Tremblin, P.; Csengeri, T.; Myers, P. C.; Hill, T.; Cunningham, M.; Federrath, C.
2016-03-01
The probability distribution function of column density (N-PDF) serves as a powerful tool to characterise the various physical processes that influence the structure of molecular clouds. Studies that use extinction maps or H2 column-density maps (N) that are derived from dust show that star-forming clouds can best be characterised by lognormal PDFs for the lower N range and a power-law tail for higher N, which is commonly attributed to turbulence and self-gravity and/or pressure, respectively. While PDFs from dust cover a large dynamic range (typically N ~ 10^(20-24) cm^(-2) or Av ~ 0.1-1000), PDFs obtained from molecular lines - converted into H2 column density - potentially trace more selectively different regimes of (column) densities and temperatures. They also enable us to distinguish different clouds along the line of sight by using the velocity information. We report here on PDFs that were obtained from observations of 12CO, 13CO, C18O, CS, and N2H+ in the Cygnus X North region, and make a comparison to a PDF that was derived from dust observations with the Herschel satellite. The PDF of 12CO is lognormal for Av ~ 1-30, but is cut off at higher Av because of optical depth effects. The PDFs of C18O and 13CO are mostly lognormal up to Av ~ 1-15, followed by an excess up to Av ~ 40. Above that value, all CO PDFs drop, which is most likely due to depletion. The high-density tracers CS and N2H+ exhibit only a power-law distribution between Av ~ 15 and 400, respectively. The PDF from dust is lognormal for Av ~ 3-15 and has a power-law tail up to Av ~ 500. Absolute values for the molecular line column densities are, however, rather uncertain because of abundance and excitation temperature variations. If we take the dust PDF at face value, we "calibrate" the molecular line PDF of CS to that of the dust and determine an abundance [CS]/[H2] of 10^(-9). The slopes of the power-law tails of the CS, N2H+, and dust PDFs are -1.6, -1.4, and -2.3, respectively, and are thus consistent
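Fitting a power-law tail slope from samples, as done for the N-PDFs above, can be sketched with a log-log histogram fit. The Pareto test data, seed, and bin count are illustrative assumptions; the chosen index makes the expected slope come out near the -2.3 quoted for the dust PDF.

```python
import numpy as np

def tail_slope(samples, nbins=30):
    """Fit the log-log slope of a histogram-estimated pdf (power-law index)."""
    bins = np.logspace(np.log10(samples.min()), np.log10(samples.max()), nbins)
    hist, edges = np.histogram(samples, bins=bins, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])    # geometric bin centers
    mask = hist > 0                              # skip empty high-end bins
    slope, _ = np.polyfit(np.log10(centers[mask]), np.log10(hist[mask]), 1)
    return slope

rng = np.random.default_rng(2)
a = 1.3                                          # Pareto index: pdf ~ x^-(a+1)
samples = rng.pareto(a, 200_000) + 1.0
s = tail_slope(samples)                          # expect s near -(a+1) = -2.3
```

For real column-density maps one would restrict the fit to the tail above the lognormal part; here the synthetic data are a pure power law, so the whole range is used.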
Boots, Nam Kyoo; Shahabuddin, Perwez
2001-01-01
This paper deals with estimating small tail probabilities of the steady-state waiting time in a GI/GI/1 queue with heavy-tailed (subexponential) service times. The problem of estimating infinite horizon ruin probabilities in insurance risk processes with heavy-tailed claims can be transformed into th
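The quantity being estimated can be illustrated with crude Monte Carlo via the Lindley recursion. Note that this naive estimator is precisely what becomes inefficient for very small tail probabilities, which motivates specialized techniques such as those in the paper; all parameter values below are illustrative.

```python
import numpy as np

def waiting_time_tail(x, n=200_000, mean_service=0.7, pareto_a=2.5, seed=3):
    """Crude Monte Carlo for P(W > x) in a GI/GI/1 queue via the Lindley
    recursion W_{k+1} = max(0, W_k + S_k - A_k), with heavy-tailed (Pareto)
    service times and exponential interarrival times."""
    rng = np.random.default_rng(seed)
    # Pareto(a, xm) has mean a*xm/(a-1); choose xm so that E[S] = mean_service
    xm = mean_service * (pareto_a - 1) / pareto_a
    S = xm * (rng.pareto(pareto_a, n) + 1.0)   # service times
    A = rng.exponential(1.0, n)                # interarrivals (mean 1 => rho ~ 0.7)
    W = np.zeros(n)
    for k in range(n - 1):
        W[k + 1] = max(0.0, W[k] + S[k] - A[k])
    burn = n // 10                             # discard the transient
    return float(np.mean(W[burn:] > x))

p = waiting_time_tail(5.0)
```

For targets like P(W > x) = 1e-8 this estimator needs astronomically many samples, which is exactly the regime where importance sampling for subexponential tails pays off.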
Density of states of Frenkel excitons in strongly disordered two-dimensional systems
Siemann, Robert; Boukahil, Abdelkrim
2014-03-01
We present the calculation of the density of states of Frenkel excitons in strongly disordered two-dimensional systems. A random distribution of transition frequencies with variance σ² characterizes the disorder. The Coherent Potential Approximation (CPA) calculations show a strong dependence of the density of states (DOS) on the disorder parameter σ.
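The DOS of such a disordered exciton Hamiltonian can also be sketched by direct diagonalization of a finite lattice, an alternative to (not a reproduction of) the CPA. Lattice size, hopping strength, and the Gaussian diagonal disorder below are illustrative assumptions.

```python
import numpy as np

def frenkel_dos(L=16, sigma=1.0, hopping=0.3, seed=4):
    """Spectrum of a 2D tight-binding Frenkel-exciton Hamiltonian with Gaussian
    diagonal disorder (variance sigma^2) on an L x L periodic lattice; the DOS
    is the (normalized) histogram of the eigenvalues."""
    rng = np.random.default_rng(seed)
    N = L * L
    H = np.diag(rng.normal(0.0, sigma, N))          # disordered site energies
    for i in range(L):
        for j in range(L):
            site = i * L + j
            for ni, nj in ((i, (j + 1) % L), ((i + 1) % L, j)):  # right/down bonds
                nb = ni * L + nj
                H[site, nb] = H[nb, site] = hopping
    energies = np.linalg.eigvalsh(H)
    dos, edges = np.histogram(energies, bins=40, density=True)
    return energies, dos, edges

energies, dos, edges = frenkel_dos()
```

Rerunning with larger sigma broadens the eigenvalue histogram, the same qualitative dependence of the DOS on the disorder parameter that the CPA exhibits.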
DEFF Research Database (Denmark)
de Lasson, Jakob Rosenkrantz; Kristensen, Philip Trøst; Mørk, Jesper;
2014-01-01
uses no external excitation and determines the quasi-normal modes as unity eigenvalues of the cavity roundtrip matrix. We demonstrate the method and the quasi-normal modes for two types of two-dimensional photonic crystal structures, and discuss the quasi-normal mode field distributions and Q...
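The roundtrip-matrix idea has a closed-form one-dimensional analogue: for a two-mirror Fabry-Perot cavity the roundtrip condition r1·r2·exp(2ikL) = 1 directly yields the complex quasi-normal-mode wavenumbers. The sketch below (mirror reflectivities and cavity length are illustrative) computes those modes and their Q factors; the paper's two-dimensional photonic-crystal computation is far richer.

```python
import numpy as np

def fabry_perot_qnms(r1=0.95, r2=0.95, L=1.0, orders=range(1, 6)):
    """Quasi-normal modes of a 1D two-mirror cavity: solving the roundtrip
    condition r1*r2*exp(2ikL) = 1 for complex k gives
    Re k = pi*m/L and Im k = ln(r1*r2)/(2L) < 0 (decaying modes)."""
    k_im = np.log(r1 * r2) / (2.0 * L)
    modes = []
    for m in orders:
        k = np.pi * m / L + 1j * k_im
        Q = k.real / (2.0 * abs(k.imag))   # quality factor of the mode
        modes.append((k, Q))
    return modes

modes = fabry_perot_qnms()
```

The negative imaginary part encodes the leakage through the mirrors, and Q grows with the mode order m, the 1D counterpart of the Q values discussed for the photonic-crystal cavities.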
Greiderer, A.; Steeneken, L.; Aalbers, T.; Vivó-Truyols, G.; Schoenmakers, P.
2011-01-01
Various hydroxyl-propylmethylcellulose (HPMC) polymers were characterized according to size and compositional distributions (percentage of methoxyl and hydroxyl-propoxyl substitution) by means of comprehensive two-dimensional liquid chromatography (LC × LC) using reversed-phase (RP) liquid chromatog
Feedback stabilisation of a two-dimensional pool-boiling system by modal control
van Gils, R.W.; Speetjens, M.F.M; Zwart, Heiko J.; Nijmeijer, H.
The present study concerns the feedback stabilisation of the unstable equilibria of a two-dimensional nonlinear pool-boiling system with essentially heterogeneous temperature distributions in the fluid-heater interface. Regulation of such equilibria has great potential for application in, for
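Feedback stabilisation of an unstable equilibrium can be sketched on a toy linearised system with SciPy's pole-placement routine. The matrices below are illustrative and unrelated to the actual pool-boiling model; they merely show how state feedback moves an unstable mode into the left half-plane.

```python
import numpy as np
from scipy.signal import place_poles

# Toy linearised single-input system with one unstable mode (illustrative
# matrices, not the pool-boiling model)
A = np.array([[0.5, 1.0],
              [0.0, -2.0]])            # eigenvalue +0.5 is unstable
B = np.array([[0.0],
              [1.0]])

# Place the closed-loop poles at stable locations
K = place_poles(A, B, [-1.0, -2.5]).gain_matrix
A_cl = A - B @ K                       # closed loop under u = -K x
```

In modal control of a distributed system like the pool-boiling problem, A would come from a (truncated) modal expansion of the linearised PDE, but the stabilisation step is the same.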
DEFF Research Database (Denmark)
Ruban, V.P.; Senchenko, Sergey
2004-01-01
The evolution of piecewise constant distributions of a conserved quantity related to the frozen-in canonical vorticity in effectively two-dimensional incompressible ideal EMHD flows is analytically investigated by the Hamiltonian method. The study includes the case of axisymmetric flows with zero...