WorldWideScience

Sample records for two-dimensional probability distributions

  1. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    Science.gov (United States)

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
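The transform chain described in this abstract — white Gaussian noise, spectral filtering, then a pointwise mapping through the Gaussian CDF into the target inverse CDF — can be sketched as follows. The filter shape, correlation length, and the exponential target distribution are illustrative assumptions for this sketch, not the authors' specific choices:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
n = 64

# 1) White Gaussian sample set
white = rng.standard_normal((n, n))

# 2) Impose a desired power spectrum by filtering in Fourier space
#    (assumed Gaussian-shaped filter with correlation length lc)
lc = 4.0
k = np.fft.fftfreq(n)
KX, KY = np.meshgrid(k, k, indexing="ij")
H = np.exp(-(KX**2 + KY**2) * (np.pi * lc) ** 2)
colored = np.real(np.fft.ifft2(np.fft.fft2(white) * H))
colored /= colored.std()  # renormalise to unit variance

# 3) Memoryless transform: Gaussian CDF -> uniform -> target inverse CDF
#    (target chosen here: exponential with unit mean, an assumption)
u = 0.5 * (1.0 + np.vectorize(erf)(colored / sqrt(2.0)))
u = np.clip(u, 1e-12, 1 - 1e-12)
field = -np.log1p(-u)  # inverse CDF of Exp(1)

print(field.shape, round(field.mean(), 2))
```

The marginal distribution of `field` is exactly exponential by construction, while the second-order correlation is inherited (approximately) from the spectral filter — which is why the abstract calls this an engineering approach.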

  2. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions

    DEFF Research Database (Denmark)

    Yura, Harold; Hanson, Steen Grüner

    2012-01-01

Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.

  3. Stress distribution in two-dimensional silos

    Science.gov (United States)

    Blanco-Rodríguez, Rodolfo; Pérez-Ángel, Gabriel

    2018-01-01

Simulations of a polydisperse two-dimensional silo were performed using molecular dynamics, with different numbers of grains reaching up to 64 000, verifying numerically the model derived by Janssen and also its main assumption that the walls carry part of the weight owing to static friction between grains and between grains and the silo's walls. We vary the friction coefficient, the radius dispersity, the silo width, and the size of grains. We find that Janssen's model becomes less relevant as the silo width increases, since the behavior of the stresses becomes more hydrostatic. Likewise, we obtain the normal and tangential stress distributions on the walls, evidencing the existence of points of maximum stress. We also obtained the stress matrix, with which we observe zones of load concentration, always located at a height around two thirds of the granular column. Finally, we observe that the size of the grains affects the distribution of stresses: as the grains are made smaller (for the same total mass of granulate), the weight on the bottom increases and the normal stress on the walls decreases, giving again a more hydrostatic, and therefore less Janssen-type, behavior for the weight of the column.
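The Janssen model referenced here predicts that the vertical stress saturates with depth because wall friction carries part of the weight. A minimal sketch for a two-dimensional silo of width W, with illustrative (assumed) values for the density, friction coefficient mu, and Janssen ratio K:

```python
import numpy as np

def janssen_stress(z, width, rho=1.0, g=9.81, mu=0.5, K=0.8):
    """Janssen vertical stress profile for a two-dimensional silo.

    Parameter values are illustrative assumptions. The saturation
    depth lam is set by the wall friction coefficient mu and the
    ratio K of horizontal to vertical stress.
    """
    lam = width / (2.0 * mu * K)  # characteristic saturation depth
    return rho * g * lam * (1.0 - np.exp(-z / lam))

z = np.linspace(0.0, 50.0, 6)
print(np.round(janssen_stress(z, width=10.0), 2))
```

For depths much larger than the saturation depth, the stress approaches the constant rho*g*lam instead of growing hydrostatically — the wider the silo, the larger lam, which is consistent with the abstract's observation that wide silos behave more hydrostatically.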

  4. Probability of failure of the watershed algorithm for peak detection in comprehensive two-dimensional chromatography

    NARCIS (Netherlands)

    Vivó-Truyols, G.; Janssen, H.-G.

    2010-01-01

The watershed algorithm is the most common method used for peak detection and integration in two-dimensional chromatography. However, the retention time variability in the second dimension may cause the algorithm to fail. A study calculating the probabilities of failure of the watershed algorithm was performed.

  5. Ion distributions in a two-dimensional reconnection field geometry

    International Nuclear Information System (INIS)

    Curran, D.B.; Goertz, C.K.; Whelan, T.A.

    1987-01-01

    ISEE observations have shown trapped ion distributions in the magnetosphere along with streaming ion distributions in the magnetosheath. The more energetic ion beams are found to exist further away from the magnetopause than lower-energy ion beams. In order to understand these properties of the data, we have taken a simple two-dimensional reconnection model which contains a neutral line and an azimuthal electric field and compared its predictions with the experimental data of September 8, 1978. Our model explains trapped particles in the magnetosphere due to nonadiabatic mirroring in the magnetosheath and streaming ions in the magnetosheath due to energization at the magnetopause. The model also shows the higher-energy ions extending further into the magnetosheath, away from the magnetopause than the lower-energy ions. This suggests the ion data of September 8, 1978 are consistent with a reconnection geometry. Copyright American Geophysical Union 1987

  6. Approximate solutions of the two-dimensional integral transport equation by collision probability methods

    International Nuclear Information System (INIS)

    Sanchez, Richard

    1977-01-01

A set of approximate solutions for the isotropic two-dimensional neutron transport problem has been developed using the Interface Current formalism. The method has been applied to regular lattices of rectangular cells containing a fuel pin, cladding, and water or homogenized structural material. The cells are divided into homogeneous zones. A zone-wise flux expansion is used to formulate a direct collision probability problem within a cell. The coupling of the cells is made by making extra assumptions on the currents entering and leaving the interfaces. Two codes have been written: the first uses a cylindrical cell model and one or three terms for the flux expansion; the second uses a two-dimensional flux representation and does a truly two-dimensional calculation inside each cell. In both codes one or three terms can be used to make a space-independent expansion of the angular fluxes entering and leaving each side of the cell. The accuracies and computing times achieved with the different approximations are illustrated by numerical studies on two benchmark problems.

  7. Device for measuring the two-dimensional distribution of a radioactive substance on a surface

    International Nuclear Information System (INIS)

    Anon.

    1986-01-01

A device is described by which, using a one-dimensional position-sensitive proportional counter tube, the two-dimensionally distributed radioactivity of a surface can be measured and, after computer processing, plotted to scale in two dimensions or displayed two-dimensionally on a monitor. (orig.)

  8. Lorentz covariant tempered distributions in two-dimensional space-time

    International Nuclear Information System (INIS)

    Zinov'ev, Yu.M.

    1989-01-01

The problem of describing Lorentz covariant distributions without any spectral condition has hitherto remained unsolved even for two-dimensional space-time. Attempts to solve this problem have already been made. Zharinov obtained an integral representation for the Laplace transform of Lorentz invariant distributions with support in the product of two-dimensional future light cones. However, this integral representation does not make it possible to obtain a complete description of the corresponding Lorentz invariant distributions. In this paper the author gives a complete description of Lorentz covariant distributions for two-dimensional space-time. No spectral condition is assumed.

  9. Craig's XY distribution and the statistics of Lagrangian power in two-dimensional turbulence

    Science.gov (United States)

    Bandi, Mahesh M.; Connaughton, Colm

    2008-03-01

    We examine the probability distribution function (PDF) of the energy injection rate (power) in numerical simulations of stationary two-dimensional (2D) turbulence in the Lagrangian frame. The simulation is designed to mimic an electromagnetically driven fluid layer, a well-documented system for generating 2D turbulence in the laboratory. In our simulations, the forcing and velocity fields are close to Gaussian. On the other hand, the measured PDF of injected power is very sharply peaked at zero, suggestive of a singularity there, with tails which are exponential but asymmetric. Large positive fluctuations are more probable than large negative fluctuations. It is this asymmetry of the tails which leads to a net positive mean value for the energy input despite the most probable value being zero. The main features of the power distribution are well described by Craig’s XY distribution for the PDF of the product of two correlated normal variables. We show that the power distribution should exhibit a logarithmic singularity at zero and decay exponentially for large absolute values of the power. We calculate the asymptotic behavior and express the asymmetry of the tails in terms of the correlation coefficient of the force and velocity. We compare the measured PDFs with the theoretical calculations and briefly discuss how the power PDF might change with other forcing mechanisms.
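The key statistical object here — the distribution of the product of two correlated normal variables — is easy to probe by Monte Carlo. The correlation coefficient below is an assumed illustrative value, not the one measured in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.3       # assumed force-velocity correlation coefficient
n = 200_000

# Correlated standard normals (force f, velocity v) with correlation rho
f = rng.standard_normal(n)
v = rho * f + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
power = f * v   # injected power ~ Craig's XY distribution

# The mean of the product equals rho for standard normals,
# so a positive correlation gives net positive mean power
print(round(power.mean(), 2))

# Tail asymmetry: large positive fluctuations are more probable
# than large negative ones when rho > 0
print((power > 3).mean(), (power < -3).mean())
```

This reproduces the qualitative features in the abstract: a sharp peak at zero (the product density diverges logarithmically there), exponential tails, and an asymmetry controlled by the force-velocity correlation.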

  10. Graphene materials having randomly distributed two-dimensional structural defects

    Science.gov (United States)

    Kung, Harold H; Zhao, Xin; Hayner, Cary M; Kung, Mayfair C

    2013-10-08

    Graphene-based storage materials for high-power battery applications are provided. The storage materials are composed of vertical stacks of graphene sheets and have reduced resistance for Li ion transport. This reduced resistance is achieved by incorporating a random distribution of structural defects into the stacked graphene sheets, whereby the structural defects facilitate the diffusion of Li ions into the interior of the storage materials.

  11. Collision probability in two-dimensional lattice by ray-trace method and its applications to cell calculations

    International Nuclear Information System (INIS)

    Tsuchihashi, Keichiro

    1985-03-01

    A series of formulations to evaluate collision probability for multi-region cells expressed by either of three one-dimensional coordinate systems (plane, sphere and cylinder) or by the general two-dimensional cylindrical coordinate system is presented. They are expressed in a suitable form to have a common numerical process named ''Ray-Trace'' method. Applications of the collision probability method to two optional treatments for the resonance absorption are presented. One is a modified table-look-up method based on the intermediate resonance approximation, and the other is a rigorous method to calculate the resonance absorption in a multi-region cell in which nearly continuous energy spectra of the resonance neutron range can be solved and interaction effect between different resonance nuclides can be evaluated. Two works on resonance absorption in a doubly heterogeneous system with grain structure are presented. First, the effect of a random distribution of particles embedded in graphite diluent on the resonance integral is studied. Next, the ''Accretion'' method proposed by Leslie and Jonsson to define the collision probability in a doubly heterogeneous system is applied to evaluate the resonance absorption in coated particles dispersed in fuel pellet of the HTGR. Several optional models are proposed to define the collision rates in the medium with the microscopic heterogeneity. By making use of the collision probability method developed by the present study, the JAERI thermal reactor standard nuclear design code system SRAC has been developed. Results of several benchmark tests for the SRAC are presented. The analyses of critical experiments of the SHE, DCA, and FNR show good agreement of critical masses with their experimental values. (J.P.N.)

  12. Two-dimensional distributed-phase-reference protocol for quantum key distribution

    Science.gov (United States)

    Bacco, Davide; Christensen, Jesper Bjerge; Castaneda, Mario A. Usuga; Ding, Yunhong; Forchhammer, Søren; Rottwitt, Karsten; Oxenløwe, Leif Katsuo

    2016-12-01

Quantum key distribution (QKD) and quantum communication enable the secure exchange of information between remote parties. Currently, the distributed-phase-reference (DPR) protocols, which are based on weak coherent pulses, are among the most practical solutions for long-range QKD. During the last 10 years, long-distance fiber-based DPR systems have been successfully demonstrated, although fundamental obstacles such as intrinsic channel losses limit their performance. Here, we introduce the first two-dimensional DPR-QKD protocol in which information is encoded in the time and phase of weak coherent pulses. The ability to extract two bits of information per detection event enables a higher secret key rate in specific realistic network scenarios. Moreover, despite the use of more dimensions, the proposed protocol remains simple, practical, and fully integrable.

  13. Two-dimensional distributed-phase-reference protocol for quantum key distribution

    DEFF Research Database (Denmark)

    Bacco, Davide; Christensen, Jesper Bjerge; Usuga Castaneda, Mario A.

    2016-01-01

Quantum key distribution (QKD) and quantum communication enable the secure exchange of information between remote parties. Currently, the distributed-phase-reference (DPR) protocols, which are based on weak coherent pulses, are among the most practical solutions for long-range QKD. During the last 10 years, long-distance fiber-based DPR systems have been successfully demonstrated, although fundamental obstacles such as intrinsic channel losses limit their performance. Here, we introduce the first two-dimensional DPR-QKD protocol in which information is encoded in the time and phase of weak coherent pulses. The ability to extract two bits of information per detection event enables a higher secret key rate in specific realistic network scenarios. Moreover, despite the use of more dimensions, the proposed protocol remains simple, practical, and fully integrable.

  14. Crucial role of sidewalls in velocity distributions in quasi-two-dimensional granular gases

    NARCIS (Netherlands)

    van Zon, J.S.; Kreft, J.; Goldman, D.L.; Miracle, D.; Swift, J. B.; Swinney, H. L.

    2004-01-01

The significance of sidewalls, which yield velocity distributions with non-Gaussian tails and a peak near zero velocity in quasi-two-dimensional granular gases, was investigated. It was observed that the particles gained energy only through collisions with the bottom of the container, which was not ...

  15. Non-parametric comparison of histogrammed two-dimensional data distributions using the Energy Test

    International Nuclear Information System (INIS)

    Reid, Ivan D; Lopes, Raul H C; Hobson, Peter R

    2012-01-01

    When monitoring complex experiments, comparison is often made between regularly acquired histograms of data and reference histograms which represent the ideal state of the equipment. With the larger HEP experiments now ramping up, there is a need for automation of this task since the volume of comparisons could overwhelm human operators. However, the two-dimensional histogram comparison tools available in ROOT have been noted in the past to exhibit shortcomings. We discuss a newer comparison test for two-dimensional histograms, based on the Energy Test of Aslan and Zech, which provides more conclusive discrimination between histograms of data coming from different distributions than methods provided in a recent ROOT release.
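The Energy Test referred to above can be sketched for binned data as follows. The logarithmic distance weighting R(r) = -ln r follows Aslan and Zech; the eps regularisation, the bin-index distance metric, and the toy histograms are assumptions of this sketch, not the ROOT implementation discussed in the paper:

```python
import numpy as np

def energy_test(h1, h2, eps=1e-3):
    """Energy-test statistic between two 2-D histograms on the same grid.

    Bin contents are normalised to weights; distances are measured
    between bin centres in bin-index units. Returns 0 for identical
    histograms and a positive value for differing ones.
    """
    ny, nx = h1.shape
    ys, xs = np.mgrid[0:ny, 0:nx]
    pts = np.column_stack([ys.ravel(), xs.ravel()]).astype(float)
    w1 = h1.ravel() / h1.sum()
    w2 = h2.ravel() / h2.sum()
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    R = -np.log(d + eps)  # Aslan-Zech logarithmic weighting, regularised
    return (w1 @ R @ w1 + w2 @ R @ w2) / 2.0 - w1 @ R @ w2

# Toy comparison: same marginals, different correlation structure
rng = np.random.default_rng(2)
a = rng.multivariate_normal([8, 8], [[4, 0], [0, 4]], 5000)
b = rng.multivariate_normal([8, 8], [[4, 2], [2, 4]], 5000)
edges = np.arange(17)
ha, _, _ = np.histogram2d(a[:, 0], a[:, 1], bins=[edges, edges])
hb, _, _ = np.histogram2d(b[:, 0], b[:, 1], bins=[edges, edges])
print(energy_test(ha, ha), energy_test(ha, hb))
```

The statistic vanishes for identical histograms and grows with the discrepancy between the underlying distributions, which is what makes it usable for automated data-quality monitoring.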

  16. Energy Spectra of Vortex Distributions in Two-Dimensional Quantum Turbulence

    Directory of Open Access Journals (Sweden)

    Ashton S. Bradley

    2012-10-01

We theoretically explore key concepts of two-dimensional turbulence in a homogeneous compressible superfluid described by a dissipative two-dimensional Gross-Pitaevskii equation. Such a fluid supports quantized vortices that have a size characterized by the healing length ξ. We show that, for the divergence-free portion of the superfluid velocity field, the kinetic-energy spectrum over wave number k may be decomposed into an ultraviolet regime (k≫ξ^{-1}) having a universal k^{-3} scaling arising from the vortex core structure, and an infrared regime (k≪ξ^{-1}) with a spectrum that arises purely from the configuration of the vortices. The Novikov power-law distribution of intervortex distances with exponent -1/3 for vortices of the same sign of circulation leads to an infrared kinetic-energy spectrum with a Kolmogorov k^{-5/3} power law, consistent with the existence of an inertial range. The presence of these k^{-3} and k^{-5/3} power laws, together with the constraint of continuity at the smallest configurational scale k≈ξ^{-1}, allows us to derive a new analytical expression for the Kolmogorov constant that we test against a numerical simulation of a forced homogeneous, compressible, two-dimensional superfluid. The numerical simulation corroborates our analysis of the spectral features of the kinetic-energy distribution, once we introduce the concept of a clustered fraction consisting of the fraction of vortices that have the same sign of circulation as their nearest neighboring vortices. Our analysis presents a new approach to understanding two-dimensional quantum turbulence and interpreting similarities and differences with classical two-dimensional turbulence, and suggests new methods to characterize vortex turbulence in two-dimensional quantum fluids via vortex position and circulation measurements.

  17. COVAL, Compound Probability Distribution for Function of Probability Distribution

    International Nuclear Information System (INIS)

    Astolfi, M.; Elbaz, J.

    1979-01-01

    1 - Nature of the physical problem solved: Computation of the probability distribution of a function of variables, given the probability distribution of the variables themselves. 'COVAL' has been applied to reliability analysis of a structure subject to random loads. 2 - Method of solution: Numerical transformation of probability distributions
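The computation COVAL performs — propagating the distributions of input variables through a function, here applied to a structure under random loads — is naturally illustrated by Monte Carlo sampling. The resistance/load distributions and numbers below are assumed toy values, not COVAL's method or data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy structural-reliability problem (assumed numbers): resistance R
# and load S are random variables; the failure margin is G = R - S,
# and the quantity of interest is the failure probability P(G < 0).
n = 1_000_000
R = rng.normal(10.0, 1.0, n)   # resistance ~ N(10, 1)
S = rng.normal(7.0, 1.5, n)    # load ~ N(7, 1.5)
G = R - S                      # distribution of a function of the inputs

p_fail = (G < 0).mean()
print(round(p_fail, 4))
```

Since G is itself normal here, N(3, sqrt(1 + 1.5^2)), the sampled failure probability can be checked against the exact value Φ(-3/1.803) ≈ 0.048.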

  18. R.f.-induced steps in mutually coupled, two-dimensional distributed Josephson tunnel junctions

    International Nuclear Information System (INIS)

    Klein, U.; Dammschneider, P.

    1991-01-01

    This paper reports on the amplitudes of the current steps in the I-V characteristics of mutually coupled two-dimensional distributed Josephson tunnel junctions driven by microwaves. For this purpose we use a numerical computation algorithm based on a planar resonator model for the individual Josephson tunnel junctions to calculate the d.c. current density distribution. In addition to the fundamental microwave frequency, harmonic contents of the tunneling current are also considered. The lateral dimensions of the individual junctions are small compared to the microwave wavelength and the Josephson penetration depth, giving an almost constant current density distribution. Therefore, the coupled junctions can give much greater step amplitudes than a single junction with an equal tunneling area, because of their nonuniform current density distribution

  19. Analysis of two-dimensional microdischarge distribution in dielectric-barrier discharges

    International Nuclear Information System (INIS)

    Chirokov, A; Gutsol, A; Fridman, A; Sieber, K D; Grace, J M; Robinson, K S

    2004-01-01

    The two-dimensional spatial distribution of microdischarges in atmospheric pressure dielectric-barrier discharges (DBDs) in air was studied. Experimental images of DBDs (Lichtenberg figures) were obtained using photostimulable phosphors. The storage phosphor imaging method takes advantage of the linear response of the phosphor for characterization of microdischarge intensity and position. A microdischarge interaction model in DBDs is proposed and a Monte Carlo simulation of microdischarge interactions in the discharge is presented. Comparison of modelled and experimental images indicates interactions and short-range structuring of microdischarge channels

  20. Direct observation of two dimensional trace gas distributions with an airborne Imaging DOAS instrument

    Directory of Open Access Journals (Sweden)

    K.-P. Heue

    2008-11-01

In many investigations of tropospheric chemistry, information about the two-dimensional distribution of trace gases on a small scale (e.g. tens to hundreds of metres) is highly desirable. An airborne instrument based on imaging Differential Optical Absorption Spectroscopy has been built to map the two-dimensional distribution of a series of relevant trace gases, including NO2, HCHO, C2H2O2, H2O, O4, SO2, and BrO, on a scale of 100 m.

Here we report on the first tests of the novel aircraft instrument over the industrialised South African Highveld, where large variations in NO2 column densities in the immediate vicinity of several sources, e.g. power plants or steel works, were measured. The observed patterns in the trace gas distribution are interpreted with respect to flux estimates, and it is seen that the fine resolution of the measurements allows separate sources in close proximity to one another to be distinguished.

  1. Benchmark numerical solutions for radiative heat transfer in two-dimensional medium with graded index distribution

    Energy Technology Data Exchange (ETDEWEB)

    Liu, L.H. [School of Energy Science and Engineering, Harbin Institute of Technology, 92 West Dazhi Street, Harbin 150001 (China)]. E-mail: lhliu@hit.edu.cn

    2006-11-15

In graded index media, the ray follows a curved path determined by Fermat's principle. Generally, the curved ray trajectory in graded index media is a complex implicit function, and curved ray tracing is very difficult and complex. Only for some special refractive index distributions can the curved ray trajectory be expressed as a simple explicit function. Two important examples are the layered and the radial graded index distributions. In this paper, the radiative heat transfer problems in a two-dimensional square semitransparent medium with layered and radial graded index distributions are analyzed. After deduction of the ray trajectory, the radiative heat transfer problems are solved by using the Monte Carlo curved ray-tracing method. Some numerical solutions of dimensionless net radiative heat flux and medium temperature are tabulated as benchmark solutions for the future development of approximation techniques for multi-dimensional radiative heat transfer in graded index media.

  2. Probability distribution relationships

    Directory of Open Access Journals (Sweden)

    Yousry Abdelkader

    2013-05-01

In this paper, we are interested in showing the most famous distributions and their relations to other distributions in collected diagrams. Four diagrams are sketched as networks. The first is concerned with the continuous distributions and their relations. The second presents the discrete distributions. The third diagram depicts the famous limiting distributions. Finally, the Balakrishnan skew-normal density and its relationship with the other distributions are shown in the fourth diagram.
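One classic arrow in such distribution-relationship diagrams — the sum of k independent Exponential variables is Gamma(k) — can be verified numerically. This is a generic textbook relation chosen for illustration, not necessarily one of the paper's diagrams:

```python
import numpy as np

rng = np.random.default_rng(6)

# Relation under test: sum of k iid Exponential(1) variables ~ Gamma(k, 1)
k, n = 3, 500_000
s = rng.exponential(1.0, (n, k)).sum(axis=1)  # sums of exponentials
g = rng.gamma(k, 1.0, n)                      # direct Gamma(k, 1) samples

# Both should have mean k and variance k
print(round(s.mean(), 1), round(g.mean(), 1),
      round(s.var(), 1), round(g.var(), 1))
```

Matching moments (and, more stringently, matching empirical CDFs) is a quick sanity check for any edge one draws in such a network of distributions.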

  3. [The reconstruction of two-dimensional distributions of gas concentration in the flat flame based on tunable laser absorption spectroscopy].

    Science.gov (United States)

    Jiang, Zhi-Shen; Wang, Fei; Xing, Da-Wei; Xu, Ting; Yan, Jian-Hua; Cen, Ke-Fa

    2012-11-01

An experimental method using tunable diode laser absorption spectroscopy, combined with a suitable model and algorithm, was studied to reconstruct the two-dimensional distribution of gas concentration. The feasibility of the reconstruction program was verified by numerical simulation. A diagnostic system consisting of 24 lasers was built for the measurement of H2O in a methane/air premixed flame. The two-dimensional distribution of H2O concentration in the flame was reconstructed, showing that the reconstruction results reflect the real two-dimensional distribution of H2O concentration in the flame. This diagnostic scheme provides a promising solution for combustion control.

  4. A two dimensional approach for temperature distribution in reactor lower head during severe accident

    International Nuclear Information System (INIS)

    Cao, Zhen; Liu, Xiaojing; Cheng, Xu

    2015-01-01

Highlights: • Two dimensional module is developed to analyze integrity of lower head. • Verification step has been done to evaluate feasibility of new module. • The new module is applied to simulate large-scale advanced PWR. • Importance of 2-D approach is clearly quantified. • Major parameters affecting vessel temperature distribution are identified. - Abstract: In order to evaluate the safety margin during a postulated severe accident, a module named ASAP-2D (Accident Simulation on Pressure vessel-2 Dimensional), which can be implemented into severe accident simulation codes (such as ATHLET-CD), has been developed at Shanghai Jiao Tong University. Based on two-dimensional spherical coordinates, the transient heat conduction equation is solved implicitly. Together with the solid vessel thickness, the heat flux distribution and heat transfer coefficient at the outer vessel surface are obtained. The heat transfer regime when the critical heat flux has been exceeded (post-CHF regime) can be simulated in the code, and the transition behavior of boiling crisis (from spatial and temporal points of view) can be predicted. The module is verified against a one-dimensional analytical solution with uniform heat flux distribution, and afterwards the module is applied to the benchmark illustrated in NUREG/CR-6849. The benchmark calculation indicates that the maximum heat flux at the outer surface of the RPV could be around 20% lower than that at the inner surface due to two-dimensional heat conduction. A preliminary analysis is then performed on the integrity of the reactor vessel, for which the geometric parameters and boundary conditions are derived from a large-scale advanced pressurized water reactor. Results indicate that the heat flux remains lower than the critical heat flux. Sensitivity analysis indicates that the outer heat flux distribution is more sensitive to the input heat flux distribution and the transition boiling correlation than to the mass flow rate in the external reactor vessel cooling (ERVC) channel.

  5. Distributed Two-Dimensional Fourier Transforms on DSPs with an Application for Phase Retrieval

    Science.gov (United States)

    Smith, Jeffrey Scott

    2006-01-01

Many applications of two-dimensional Fourier transforms require fixed timing as defined by system specifications. One example is image-based wavefront sensing. The image-based approach has many benefits, yet it is a computationally intensive solution for adaptive optic correction, where optical adjustments are made in real time to correct for external (atmospheric turbulence) and internal (stability) aberrations, which cause image degradation. For phase retrieval, a type of image-based wavefront sensing, numerous two-dimensional Fast Fourier Transforms (FFTs) are used. To meet the required real-time specifications, a distributed system is needed, and thus the 2-D FFT necessitates an all-to-all communication among the computational nodes. The 1-D floating-point FFT is very efficient on a digital signal processor (DSP). For this study, several architectures are presented and analyzed which address the all-to-all communication with DSPs. The emphasis of this research is on a 64-node cluster of Analog Devices TigerSharc TS-101 DSPs.
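The reason the 2-D FFT forces an all-to-all exchange is the row/column decomposition: 1-D FFTs along rows, a transpose ("corner turn", which on a distributed machine is the all-to-all), then 1-D FFTs along rows again. A single-node sketch of that decomposition (the distribution itself is not modeled here):

```python
import numpy as np

def fft2_row_column(x):
    """2-D FFT built from 1-D FFTs: transform rows, transpose (the
    step that becomes the all-to-all exchange on a distributed
    system), transform rows again, transpose back."""
    step1 = np.fft.fft(x, axis=1).T   # row FFTs, then corner turn
    step2 = np.fft.fft(step1, axis=1).T
    return step2

x = np.random.default_rng(4).standard_normal((8, 8))
print(np.allclose(fft2_row_column(x), np.fft.fft2(x)))
```

On a DSP cluster, each node would hold a block of rows, run its local 1-D FFTs, and the transpose step is where every node must send a tile of its data to every other node — the communication pattern this dissertation analyzes.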

  6. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
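The probability-circle concept mentioned in this chapter can be illustrated numerically: for a circular bivariate normal error, the radius containing 50% of outcomes (the circular error probable, CEP) is sigma·sqrt(2 ln 2). The sigma and sample size below are illustrative assumptions:

```python
import numpy as np

# For a circular bivariate normal error with standard deviation sigma
# per axis, P(radius <= r) = 1 - exp(-r^2 / (2 sigma^2)); setting this
# to 0.5 gives the circular error probable (50% circle).
sigma = 1.0
r50 = sigma * np.sqrt(2.0 * np.log(2.0))  # CEP radius, ~1.177 sigma

rng = np.random.default_rng(5)
err = rng.standard_normal((200_000, 2)) * sigma
inside = (np.linalg.norm(err, axis=1) <= r50).mean()
print(round(inside, 2))
```

When the per-axis standard deviations differ, the 50% contour is an ellipse rather than a circle, which is exactly the elliptical-versus-circular evaluation distinction this chapter draws.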

  7. First operation of a powerful FEL with two-dimensional distributed feedback

    CERN Document Server

    Agarin, N V; Bobylev, V B; Ginzburg, N S; Ivanenko, V G; Kalinin, P V; Kuznetsov, S A; Peskov, N Yu; Sergeev, A S; Sinitsky, S L; Stepanov, V D

    2000-01-01

A W-band (75 GHz) FEL of planar geometry driven by a sheet electron beam was realised using the pulse accelerator ELMI (0.8 MeV/3 kA/5 μs). To provide spatial coherence of the radiation from different parts of the electron beam, which has a cross-section of 0.4×12 cm, two-dimensional distributed feedback has been employed using a 2-D Bragg resonator of planar geometry. The resonator consisted of two 2-D Bragg reflectors separated by a regular waveguide section. The total energy in the microwave pulse of microsecond duration was 100 J, corresponding to a power of approximately 100 MW. The main component of the FEL radiation spectrum was at 75 GHz, which corresponded to the zone of effective Bragg reflection found from 'cold' microwave testing of the resonator. The experimental data compared well with the results of theoretical analysis.

  8. Two dimensional electron transport in disordered and ordered distributions of magnetic flux vortices

    International Nuclear Information System (INIS)

    Nielsen, M.; Hedegaard, P.

    1994-04-01

We have considered the conductivity properties of a two-dimensional electron gas (2DEG) in two different kinds of inhomogeneous magnetic fields, i.e. a disordered distribution of magnetic flux vortices, and a periodic array of magnetic flux vortices. The work falls into two parts. In the first part we show how the phase shifts for an electron scattering on an isolated vortex can be calculated analytically and related to the transport properties through the differential cross section. In the second part we present numerical results for the Hall conductivity of the 2DEG in a periodic array of flux vortices found by exact diagonalization. We find characteristic spikes in the Hall conductance when it is plotted against the filling fraction. It is argued that the spikes can be interpreted in terms of ''topological charge'' piling up across local and global gaps in the energy spectrum. (au) (23 refs.)

  9. Two-dimensional potential and charge distributions of positive surface streamer

    International Nuclear Information System (INIS)

    Tanaka, Daiki; Matsuoka, Shigeyasu; Kumada, Akiko; Hidaka, Kunihiko

    2009-01-01

Information on the potential and the field profile along a surface discharge is required for quantitatively discussing and clarifying the propagation mechanism. A sensing technique with a Pockels crystal has been developed for directly measuring the potential and electric field distribution on a dielectric material. In this paper, the Pockels sensing system consists of a pulse laser and a CCD camera for measuring the instantaneous two-dimensional potential distribution on a 25.4 mm square area with a 50 μm sampling pitch. The temporal resolution is 3.2 ns, which is determined by the pulse width of the laser emission. The transient change in the potential distribution of a positive surface streamer propagating in atmospheric air is measured with this system. The electric field and the charge distributions are also calculated from the measured potential profile. The propagating-direction component of the electric field near the tip of the propagating streamer reaches 3 kV/mm. When the streamer stops, the potential distribution along the streamer forms an almost linear profile with distance from the electrode, and its gradient is about 0.5 kV/mm.

  10. Two-Dimensional Key Table-Based Group Key Distribution in Advanced Metering Infrastructure

    Directory of Open Access Journals (Sweden)

    Woong Go

    2014-01-01

A smart grid provides two-way communication by using information and communication technology. In order to establish two-way communication, the advanced metering infrastructure (AMI) is used in the smart grid as the core infrastructure. This infrastructure consists of smart meters, data collection units, maintenance data management systems, and so on. However, potential security problems of the AMI increase owing to the use of public networks, because the transmitted information is electricity consumption data used for billing. Thus, in order to establish a secure connection to transmit electricity consumption data, encryption is necessary, for which key distribution is required. Further, a group key is more efficient than a pairwise key in the hierarchical structure of the AMI. Therefore, we propose a group key distribution scheme using a two-dimensional key table, based on an analysis of group key distribution schemes for sensor networks. The proposed scheme has three phases: group key predistribution, selection of the group key generation element, and generation of the group key.
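The general idea of a two-dimensional key table — each member is pre-distributed one row key and one column key, and a group key is derived from a chosen row/column generation element — can be sketched as below. The table size, the hash-based derivation, and all names are illustrative assumptions of this sketch, not the paper's exact scheme:

```python
import hashlib
import secrets

# Hypothetical 4x4 key table: a meter assigned position (r, c) is
# pre-distributed row key r and column key c during the
# predistribution phase (sizes and derivation are assumptions).
ROWS, COLS = 4, 4
row_keys = [secrets.token_bytes(16) for _ in range(ROWS)]
col_keys = [secrets.token_bytes(16) for _ in range(COLS)]

def group_key(r, c, nonce):
    """Derive the epoch group key from the selected generation
    element (row r, column c) and a broadcast nonce."""
    return hashlib.sha256(row_keys[r] + col_keys[c] + nonce).digest()

# Every member holding row 2 and column 1 derives the same key
nonce = secrets.token_bytes(16)
k1 = group_key(2, 1, nonce)
k2 = group_key(2, 1, nonce)
print(k1 == k2, len(k1))
```

The appeal of the two-dimensional layout is storage: each member stores only one row key and one column key instead of one key per possible group, while the collector selects which (row, column) element generates the current group key.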

  11. Spectral line shapes in linear absorption and two-dimensional spectroscopy with skewed frequency distributions

    NARCIS (Netherlands)

    Farag, Marwa H.; Hoenders, Bernhard J.; Knoester, Jasper; Jansen, Thomas L. C.

    2017-01-01

    The effect of Gaussian dynamics on the line shapes in linear absorption and two-dimensional correlation spectroscopy is well understood as the second-order cumulant expansion provides exact spectra. Gaussian solvent dynamics can be well analyzed using slope line analysis of two-dimensional …

  12. Two-dimensional distribution of carbon nanotubes in copper flake powders

    Energy Technology Data Exchange (ETDEWEB)

    Tan Zhanqiu; Li Zhiqiang; Fan Genlian; Li Wenhuan; Liu Qinglei; Zhang Wang; Zhang Di, E-mail: lizhq@sjtu.edu.cn, E-mail: zhangdi@sjtu.edu.cn [State Key Laboratory of Metal Matrix Composites, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2011-06-03

    We report an approach of flake powder metallurgy to the uniform, two-dimensional (2D) distribution of carbon nanotubes (CNTs) in Cu flake powders. It consists of the preparation of Cu flakes by ball milling in an imidazoline derivative (IMD) aqueous solution, surface modification of Cu flakes with polyvinyl alcohol (PVA) hydrosol and adsorption of CNTs from a CNT aqueous suspension. During ball milling, a hydrophobic monolayer of IMD is adsorbed on the surface of the Cu flakes, on top of which a hydrophilic PVA film is adsorbed subsequently. This PVA film could further interact with the carboxyl-group functionalized CNTs and act to lock the CNTs onto the surfaces of the Cu flakes. The CNT volume fraction is controlled easily by adjusting the concentration/volume of CNT aqueous suspension and Cu flake thickness. The as-prepared CNT/Cu composite flakes will serve as suitable building blocks for the self-assembly of CNT/Cu laminated composites that enable the full potential of 2D distributed CNTs to achieve high thermal conductivity.

  13. Two-dimensional distribution of carbon nanotubes in copper flake powders

    International Nuclear Information System (INIS)

    Tan Zhanqiu; Li Zhiqiang; Fan Genlian; Li Wenhuan; Liu Qinglei; Zhang Wang; Zhang Di

    2011-01-01

    We report an approach of flake powder metallurgy to the uniform, two-dimensional (2D) distribution of carbon nanotubes (CNTs) in Cu flake powders. It consists of the preparation of Cu flakes by ball milling in an imidazoline derivative (IMD) aqueous solution, surface modification of Cu flakes with polyvinyl alcohol (PVA) hydrosol and adsorption of CNTs from a CNT aqueous suspension. During ball milling, a hydrophobic monolayer of IMD is adsorbed on the surface of the Cu flakes, on top of which a hydrophilic PVA film is adsorbed subsequently. This PVA film could further interact with the carboxyl-group functionalized CNTs and act to lock the CNTs onto the surfaces of the Cu flakes. The CNT volume fraction is controlled easily by adjusting the concentration/volume of CNT aqueous suspension and Cu flake thickness. The as-prepared CNT/Cu composite flakes will serve as suitable building blocks for the self-assembly of CNT/Cu laminated composites that enable the full potential of 2D distributed CNTs to achieve high thermal conductivity.

  14. Two-dimensional distribution of carbon nanotubes in copper flake powders.

    Science.gov (United States)

    Tan, Zhanqiu; Li, Zhiqiang; Fan, Genlian; Li, Wenhuan; Liu, Qinglei; Zhang, Wang; Zhang, Di

    2011-06-03

    We report an approach of flake powder metallurgy to the uniform, two-dimensional (2D) distribution of carbon nanotubes (CNTs) in Cu flake powders. It consists of the preparation of Cu flakes by ball milling in an imidazoline derivative (IMD) aqueous solution, surface modification of Cu flakes with polyvinyl alcohol (PVA) hydrosol and adsorption of CNTs from a CNT aqueous suspension. During ball milling, a hydrophobic monolayer of IMD is adsorbed on the surface of the Cu flakes, on top of which a hydrophilic PVA film is adsorbed subsequently. This PVA film could further interact with the carboxyl-group functionalized CNTs and act to lock the CNTs onto the surfaces of the Cu flakes. The CNT volume fraction is controlled easily by adjusting the concentration/volume of CNT aqueous suspension and Cu flake thickness. The as-prepared CNT/Cu composite flakes will serve as suitable building blocks for the self-assembly of CNT/Cu laminated composites that enable the full potential of 2D distributed CNTs to achieve high thermal conductivity.

  15. Two Dimensional Finite Element Model to Study Calcium Distribution in Oocytes

    Science.gov (United States)

    Naik, Parvaiz Ahmad; Pardasani, Kamal Raj

    2015-06-01

    Cytosolic free calcium concentration is a key regulatory factor and perhaps the most widely used means of controlling cellular function. Calcium can enter cells through different pathways which are activated by specific stimuli including membrane depolarization, chemical signals and calcium depletion of intracellular stores. One of the important components of oocyte maturation is differentiation of the Ca2+ signaling machinery which is essential for egg activation after fertilization. Eggs acquire the ability to produce the fertilization-specific calcium signal during oocyte maturation. The calcium concentration patterns required during different stages of oocyte maturation are still not completely known. Also, the mechanisms involved in calcium dynamics in the oocyte are still not well understood. In view of the above, a two-dimensional FEM model has been proposed to study calcium distribution in an oocyte. Parameters such as buffers, the ryanodine receptor, the SERCA pump and the voltage-gated calcium channel are incorporated in the model. The initial and boundary conditions have been framed on the basis of the biophysical conditions. The model is transformed into variational form and the Ritz finite element method has been employed to obtain the solution. A program has been developed in MATLAB 7.10 for the entire problem and executed to obtain numerical results. The numerical results have been used to study the effect of buffers, RyR, the SERCA pump and VGCC on calcium distribution in an oocyte.

  16. Study on two-dimensional distribution of X-ray image based on improved Elman algorithm

    International Nuclear Information System (INIS)

    Wang, Fang; Wang, Ming-Yuan; Tian, Feng-Shuo; Liu, Yu-Fang; Li, Lei; Zhao, Jing

    2015-01-01

    The principle of the X-ray detector which can simultaneously perform the measurement of the exposure rate and the 2D (two-dimensional) distribution is described. A commercially available CMOS image sensor has been adopted as the key part to receive X-rays without any scintillators. The correlation between the pixel value (PV) and the absorbed exposure rate of X-rays is studied using the improved Elman neural network. Comparing the optimal adjustment process of the BP (Back Propagation) neural network and the improved Elman neural network, the neural network parameters are selected based on the fitting curve and the error curve. Experiments using practical production data show that the proposed method achieves highly accurate predictions, to within 10⁻¹⁵ of the anticipated values. It is proven that the exposure rate can be detected using the X-ray detector with the improved Elman algorithm, whose advantages are fast convergence and a smooth error curve. - Highlights: • A method to measure the X-ray radiation with low cost and miniaturization. • A general CMOS image sensor is used to detect X-ray. • The system can measure exposure rate and 2D distribution simultaneously. • The Elman algorithm is adopted to improve the precision of the radiation detector
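An Elman network differs from a plain feed-forward (BP) network only in its context layer: the previous hidden state is fed back as an extra input. A minimal forward pass with illustrative layer sizes and untrained random weights (the trained pixel-value-to-exposure-rate mapping of the paper is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 1, 8, 1

# Weights would normally be trained (e.g., on pixel-value/exposure-rate pairs);
# random values here only demonstrate the recurrence.
Wx = rng.normal(0, 0.5, (n_hidden, n_in))
Wh = rng.normal(0, 0.5, (n_hidden, n_hidden))  # context (recurrent) weights
Wy = rng.normal(0, 0.5, (n_out, n_hidden))
b = np.zeros(n_hidden)

def elman_forward(xs):
    h = np.zeros(n_hidden)  # context layer starts empty
    ys = []
    for x in xs:
        h = np.tanh(Wx @ np.atleast_1d(x) + Wh @ h + b)  # Elman recurrence
        ys.append(Wy @ h)
    return np.array(ys)

ys = elman_forward([1.0, 1.0, 1.0])
```

Note that identical inputs produce different outputs at different time steps, precisely because of the context layer.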

  17. An analysis of infiltration with moisture content distribution in a two-dimensional discretized water content domain

    KAUST Repository

    Yu, Han; Douglas, Craig C.

    2014-01-01

    On the basis of unsaturated Darcy's law, the Talbot-Ogden method provides a fast unconditional mass conservative algorithm to simulate groundwater infiltration in various unsaturated soil textures. Unlike advanced reservoir modelling methods that compute unsaturated flow in space, it only discretizes the moisture content domain into a suitable number of bins so that the vertical water movement is estimated piecewise in each bin. The dimensionality of the moisture content domain is extended from one dimensional to two dimensional in this study, which allows us to distinguish pore shapes within the same moisture content range. The vertical movement of water in the extended model imitates the infiltration phase in the Talbot-Ogden method. However, the difference in this extension is the directional redistribution, which represents the horizontal inter-bin flow and causes the water content distribution to have an effect on infiltration. Using this extension, we mathematically analyse the general relationship between infiltration and the moisture content distribution associated with wetting front depths in different bins. We show that a more negatively skewed moisture content distribution can produce a longer ponding time, whereas a higher overall flux cannot be guaranteed in this situation. It is proven on the basis of the water content probability distribution independent of soil textures. To illustrate this analysis, we also present numerical examples for both fine and coarse soil textures.

  18. An analysis of infiltration with moisture content distribution in a two-dimensional discretized water content domain

    KAUST Repository

    Yu, Han

    2014-06-11

    On the basis of unsaturated Darcy's law, the Talbot-Ogden method provides a fast unconditional mass conservative algorithm to simulate groundwater infiltration in various unsaturated soil textures. Unlike advanced reservoir modelling methods that compute unsaturated flow in space, it only discretizes the moisture content domain into a suitable number of bins so that the vertical water movement is estimated piecewise in each bin. The dimensionality of the moisture content domain is extended from one dimensional to two dimensional in this study, which allows us to distinguish pore shapes within the same moisture content range. The vertical movement of water in the extended model imitates the infiltration phase in the Talbot-Ogden method. However, the difference in this extension is the directional redistribution, which represents the horizontal inter-bin flow and causes the water content distribution to have an effect on infiltration. Using this extension, we mathematically analyse the general relationship between infiltration and the moisture content distribution associated with wetting front depths in different bins. We show that a more negatively skewed moisture content distribution can produce a longer ponding time, whereas a higher overall flux cannot be guaranteed in this situation. It is proven on the basis of the water content probability distribution independent of soil textures. To illustrate this analysis, we also present numerical examples for both fine and coarse soil textures.
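The skewness statistic invoked in this analysis is the ordinary third standardized moment of the bin-wise water contents. A stdlib sketch with made-up bin values, only to show the sign convention behind "more negatively skewed":

```python
def skewness(w):
    """Sample skewness (third standardized moment) of bin-wise values."""
    n = len(w)
    m = sum(w) / n
    s2 = sum((x - m) ** 2 for x in w) / n
    s3 = sum((x - m) ** 3 for x in w) / n
    return s3 / s2 ** 1.5

# Illustrative bin-wise moisture contents (not from the paper):
sym = [0.1, 0.2, 0.3, 0.4, 0.5]      # symmetric across bins, skewness ~ 0
neg = [0.1, 0.35, 0.42, 0.47, 0.5]   # mass shifted toward the wet bins, skewness < 0
```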

  19. The area distribution of two-dimensional random walks and non-Hermitian Hofstadter quantum mechanics

    International Nuclear Information System (INIS)

    Matveenko, Sergey; Ouvry, Stéphane

    2014-01-01

    When random walks on a square lattice are biased horizontally to move solely to the right, the probability distribution of their algebraic area can be obtained exactly (Mashkevich and Ouvry 2009 J. Stat. Phys. 137 71). We explicitly map this biased classical random system onto a non-Hermitian Hofstadter-like quantum model where a charged particle on a square lattice coupled to a perpendicular magnetic field hops only to the right. For the commensurate case, when the magnetic flux per unit cell is rational, an exact solution of the quantum model is obtained. The periodicity of the lattice allows one to relate traces of the Nth power of the Hamiltonian to probability distribution generating functions of biased walks of length N. (paper)

  20. A development of two-dimensional birefringence distribution measurement system with a sampling rate of 1.3 MHz

    Science.gov (United States)

    Onuma, Takashi; Otani, Yukitoshi

    2014-03-01

    A two-dimensional birefringence distribution measurement system with a sampling rate of 1.3 MHz is proposed. A polarization image sensor is developed as the core device of the system. It is composed of a pixelated polarizer array made from photonic crystal and a parallel readout circuit with a multi-channel analog-to-digital converter specialized for two-dimensional polarization detection. By applying a phase-shifting algorithm with circularly polarized incident light, the birefringence phase difference and azimuthal angle can be measured. The performance of the system is demonstrated experimentally by measuring an actual birefringence distribution and a polarization device such as a Babinet-Soleil compensator.
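One common four-step phase-shifting estimator for this kind of configuration, assuming an ideal circularly polarized input and ideal pixelated analyzers at 0°, 45°, 90°, and 135° (the instrument's real calibration is certainly more involved):

```python
import numpy as np

def simulate_intensities(delta, theta, i0=1.0):
    """Ideal model: circular input, linear retarder (retardance delta, fast
    axis theta), then pixelated linear analyzers at 0/45/90/135 degrees."""
    alphas = np.deg2rad([0.0, 45.0, 90.0, 135.0])
    return 0.5 * i0 * (1.0 - np.sin(delta) * np.sin(2 * alphas - 2 * theta))

def four_step(i000, i045, i090, i135):
    a = (i000 - i090) / (i000 + i090)  # sin(delta) * sin(2*theta)
    b = (i135 - i045) / (i135 + i045)  # sin(delta) * cos(2*theta)
    delta = np.arcsin(np.hypot(a, b))  # retardance (principal branch)
    theta = 0.5 * np.arctan2(a, b)     # fast-axis azimuth
    return delta, theta
```

Simulating intensities for a known (δ, θ) and feeding them back through `four_step` recovers the pair exactly in this ideal model.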

  1. Investigation of Real-Time Two-Dimensional Visualization of Fuel Spray Liquid/Vapor Distribution via Exciplex Fluorescence.

    Science.gov (United States)

    1987-08-30

    Final report: Investigation of Real-Time Two-Dimensional Visualization of Fuel Spray Liquid/Vapor Distribution via Exciplex Fluorescence. James F. Verdieck and Arthur A. Rotunno, United Technologies Research Center, and Lynn A. Melton, University of … (only fragments of the scanned report documentation page are legible).

  2. Measurement of two-dimensional thermal neutron flux in a water phantom and evaluation of dose distribution characteristics

    International Nuclear Information System (INIS)

    Yamamoto, Kazuyoshi; Kumada, Hiroaki; Kishi, Toshiaki; Torii, Yoshiya; Horiguchi, Yoji

    2001-03-01

    To evaluate the nitrogen dose, the boron dose and the gamma-ray dose produced by neutron capture reactions of hydrogen during medical irradiation, the two-dimensional distribution of the thermal neutron flux is very important because these doses are proportional to the thermal neutron distribution. This report describes the measurement of the two-dimensional thermal neutron distribution in a head water phantom irradiated by neutron beams of the JRR-4, and the evaluation of the dose distribution characteristics. The thermal neutron flux in the phantom was measured with gold wires placed spokewise at every 30 degrees in order to avoid mutual interaction. The distribution of the thermal neutron flux was also calculated using a two-dimensional Lagrange interpolation program (radius and angle directions) developed for this work. The analysis confirmed that when the radiation field in the phantom contains a void, the distribution becomes distorted, with an annular peak outside the void, although an improved dose profile in the depth direction was confirmed in that case. (author)
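A two-dimensional Lagrange interpolation on a (radius, angle) grid can be built as a tensor product of one-dimensional Lagrange polynomials. A generic sketch, not the authors' program:

```python
def lagrange_1d(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

def lagrange_2d(rs, thetas, table, r, theta):
    """Tensor-product Lagrange interpolation on a (radius, angle) grid.

    table[i][j] holds the measured value at (rs[i], thetas[j])."""
    # First interpolate along the angle direction at each radius...
    along_theta = [lagrange_1d(thetas, row, theta) for row in table]
    # ...then along the radius direction.
    return lagrange_1d(rs, along_theta, r)
```

Tensor-product Lagrange interpolation reproduces any polynomial of sufficiently low degree in each variable exactly.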

  3. Universal Distribution of Centers and Saddles in Two-Dimensional Turbulence

    International Nuclear Information System (INIS)

    Rivera, Michael; Wu, Xiao-Lun; Yeung, Chuck

    2001-01-01

    The statistical properties of the local topology of two-dimensional turbulence are investigated using an electromagnetically forced soap film. The local topology of the incompressible 2D flow is characterized by the Jacobian determinant Λ(x,y) = (ω² − σ²)/4, where ω(x,y) is the local vorticity and σ(x,y) is the local strain rate. For turbulent flows driven by different external force configurations, P(Λ) is found to be a universal function when rescaled using the turbulent intensity. A simple model that agrees with the measured functional form of P(Λ) is constructed using the assumption that the stream function, ψ(x,y), is a Gaussian random field.
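Up to sign conventions, Λ is the Okubo-Weiss parameter, and it can be computed from a gridded velocity field in a few lines. A sketch with NumPy finite differences standing in for whatever differentiation the experiment used:

```python
import numpy as np

def okubo_weiss_lambda(u, v, dx, dy):
    """Lambda = (omega^2 - sigma^2) / 4 from a gridded 2D velocity field.

    u, v : velocity components on a grid with rows = y, cols = x."""
    dudy, dudx = np.gradient(u, dy, dx)  # axis 0 = y, axis 1 = x
    dvdy, dvdx = np.gradient(v, dy, dx)
    omega = dvdx - dudy                  # vorticity
    sigma_n = dudx - dvdy                # normal strain
    sigma_s = dvdx + dudy                # shear strain
    return 0.25 * (omega**2 - (sigma_n**2 + sigma_s**2))
```

As a sanity check, solid-body rotation (u = −y, v = x) has ω = 2 and σ = 0, so Λ = 1 everywhere.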

  4. Design of two-dimensional channels with prescribed velocity distributions along the channel walls

    Science.gov (United States)

    Stanitz, John D

    1953-01-01

    A general method of design is developed for two-dimensional unbranched channels with prescribed velocities as a function of arc length along the channel walls. The method is developed for both compressible and incompressible, irrotational, nonviscous flow and applies to the design of elbows, diffusers, nozzles, and so forth. In part I solutions are obtained by relaxation methods; in part II solutions are obtained by a Green's function. Five numerical examples are given in part I including three elbow designs with the same prescribed velocity as a function of arc length along the channel walls but with incompressible, linearized compressible, and compressible flow. One numerical example is presented in part II for an accelerating elbow with linearized compressible flow, and the time required for the solution by a Green's function in part II was considerably less than the time required for the same solution by relaxation methods in part I.
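The relaxation approach of part I amounts to iteratively replacing each interior value by the average of its neighbours until Laplace's equation is satisfied. A generic Jacobi sketch, unrelated to the report's specific channel geometries:

```python
import numpy as np

def relax_laplace(phi, mask_fixed, tol=1e-10, max_iter=20000):
    """Jacobi relaxation of Laplace's equation on a rectangular grid.

    phi        : initial grid with boundary values filled in
    mask_fixed : True where values are prescribed (Dirichlet) and held fixed
    """
    phi = phi.copy()
    for _ in range(max_iter):
        new = phi.copy()
        # each interior point becomes the average of its four neighbours
        new[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                                  + phi[1:-1, :-2] + phi[1:-1, 2:])
        new[mask_fixed] = phi[mask_fixed]
        if np.max(np.abs(new - phi)) < tol:
            return new
        phi = new
    return phi
```

With boundary values taken from a linear function, the relaxed interior converges to that same linear (harmonic) function.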

  5. OPT-TWO: Calculation code for two-dimensional MOX fuel models in the optimum concentration distribution

    International Nuclear Information System (INIS)

    Sato, Shohei; Okuno, Hiroshi; Sakai, Tomohiro

    2007-08-01

    OPT-TWO is a calculation code which calculates the optimum concentration distribution, i.e., the most conservative concentration distribution in the aspect of nuclear criticality safety, of MOX (mixed uranium and plutonium oxide) fuels in the two-dimensional system. To achieve the optimum concentration distribution, we apply the principle of flattened fuel importance distribution with which the fuel system has the highest reactivity. Based on this principle, OPT-TWO takes the following 3 calculation steps iteratively to achieve the optimum concentration distribution with flattened fuel importance: (1) the forward and adjoint neutron fluxes, and the neutron multiplication factor, with TWOTRAN code which is a two-dimensional neutron transport code based on the SN method, (2) the fuel importance, and (3) the quantity of the transferring fuel. In OPT-TWO, the components of MOX fuel are MOX powder, uranium dioxide powder and additive. This report describes the content of the calculation, the computational method, and the installation method of the OPT-TWO, and also describes the application method of the criticality calculation of OPT-TWO. (author)

  6. One-and two-dimensional topological charge distributions in stochastic optical fields

    CSIR Research Space (South Africa)

    Roux, FS

    2011-06-01

    Full Text Available The presentation on topological charge distributions in stochastic optical fields concludes that by using a combination of speckle fields one can produce inhomogeneous vortex distributions that allow both analytical calculations and numerical...

  7. Tomography for two-dimensional gas temperature distribution based on TDLAS

    Science.gov (United States)

    Luo, Can; Wang, Yunchu; Xing, Fei

    2018-03-01

    Based on tunable diode laser absorption spectroscopy (TDLAS), tomography is used to reconstruct the combustion gas temperature distribution. The effects of the number of rays, the number of grids, and the spacing of rays on the temperature reconstruction results for parallel rays are investigated. The reconstruction quality improves with the number of rays, and the improvement levels off once the ray number exceeds a certain value. The best quality is achieved when η is between 0.5 and 1. A virtual-ray method combined with the reconstruction algorithms is tested. It is found that the virtual-ray method is effective in improving the accuracy of the reconstruction results compared with the original method. The linear interpolation method and the cubic spline interpolation method are used to improve the calculation accuracy of the virtual-ray absorption values. According to the calculation results, cubic spline interpolation is better. Moreover, the temperature distribution of a TBCC combustion chamber is used to validate these conclusions.
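The virtual-ray idea is to densify the measured projections by interpolating the path-integrated absorbances between real rays. A sketch with made-up data using linear interpolation; as the abstract notes, cubic spline interpolation performed better:

```python
import numpy as np

# Path-integrated absorbances along 6 real parallel rays
# (positions and values are made-up illustration data).
y_real = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
a_real = np.array([0.02, 0.10, 0.35, 0.33, 0.12, 0.03])

# Insert a virtual ray midway between each pair of real rays by linear
# interpolation of the absorbance values.
y_virtual = 0.5 * (y_real[:-1] + y_real[1:])
a_virtual = np.interp(y_virtual, y_real, a_real)

# Combined, denser projection for the tomographic reconstruction step.
y_all = np.sort(np.concatenate([y_real, y_virtual]))
```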

  8. Representative measurement of two-dimensional reactive phosphate distributions and co-distributed iron(II) and sulfide in seagrass sediment porewaters

    DEFF Research Database (Denmark)

    Pagès, Anaïs; Teasdale, Peter R.; Robertson, David

    2011-01-01

    The high degree of heterogeneity within sediments can make interpreting one-dimensional measurements difficult. The recent development and use of in situ techniques that measure two-dimensional distributions of porewater solutes have facilitated investigation of the role of spatial heterogeneity ...

  9. Fuzzy pool balance: An algorithm to achieve a two dimensional balance in distributed storage systems

    International Nuclear Information System (INIS)

    Wu, Wenjing; Chen, Gang

    2014-01-01

    The limitation of scheduling modules and the gradual addition of disk pools in distributed storage systems often result in imbalances among their disk pools in terms of both disk usage and file count. This can cause various problems to the storage system such as single point of failure, low system throughput and imbalanced resource utilization and system loads. An algorithm named Fuzzy Pool Balance (FPB) is proposed here to solve this problem. The input of FPB is the current file distribution among disk pools and the output is a file migration plan indicating what files are to be migrated to which pools. FPB uses an array to classify the files by their sizes. The file classification array is dynamically calculated with a defined threshold named Tmax that defines the allowed pool disk usage deviations. File classification is the basis of file migration. FPB also defines the Immigration Pool (IP) and Emigration Pool (EP) according to the pool disk usage and the File Quantity Ratio (FQR), which indicates the percentage of each category of files in each disk pool, so files with a higher FQR in an EP will be migrated to IP(s) with a lower FQR for this file category. To verify this algorithm, we implemented FPB on an ATLAS Tier2 dCache production system. The results show that FPB can achieve a very good balance in both free space and file counts, and adjusting the threshold value Tmax and the correction factor to the average FQR can achieve a tradeoff between free space and file count.
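A much-simplified sketch of the balancing idea: compute each pool's usage, then move files from over-full emigration pools to under-full immigration pools until usage is within a threshold around the mean. The selection policy, threshold handling, and names are illustrative, not the FPB algorithm as deployed on dCache:

```python
def plan_migrations(pools, t_max=0.05):
    """pools: {pool_name: list of file sizes}. Returns [(size, src, dst)].

    Mutates `pools` in place to reflect the planned moves."""
    usage = {p: sum(fs) for p, fs in pools.items()}
    mean = sum(usage.values()) / len(usage)
    plan = []
    for src in sorted(usage, key=usage.get, reverse=True):
        while usage[src] > mean * (1 + t_max):
            dst = min(usage, key=usage.get)  # emptiest pool
            # move the largest file that does not undershoot the band
            candidates = [f for f in pools[src]
                          if usage[src] - f >= mean * (1 - t_max)]
            if not candidates:
                break
            f = max(candidates)
            pools[src].remove(f)
            pools[dst].append(f)
            usage[src] -= f
            usage[dst] += f
            plan.append((f, src, dst))
    return plan
```

On a toy three-pool layout this reduces the usage spread while conserving total stored bytes.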

  10. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    Science.gov (United States)

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
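The hybrid idea (a global evolutionary stage followed by a local gradient-based stage) can be sketched on a toy performance function; here the truncated-Newton step is replaced by a plain finite-difference gradient descent for brevity, and the function and all constants are illustrative:

```python
import random

def f(p):
    """Toy performance function (assumed; not the groundwater model).
    Minimum at (3, -1)."""
    x, y = p
    return (x - 3.0) ** 2 + 10.0 * (y + 1.0) ** 2

def ga_stage(f, bounds, pop=40, gens=60, seed=0):
    """Crude genetic algorithm: elitism + averaging crossover + mutation."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)
        elite = P[: pop // 4]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            children.append([(ai + bi) / 2 + rng.gauss(0, 0.1)
                             for ai, bi in zip(a, b)])
        P = elite + children
    return min(P, key=f)

def gradient_polish(f, p, steps=200, h=1e-6, lr=0.02):
    """Local refinement by finite-difference gradient descent."""
    p = list(p)
    for _ in range(steps):
        g = []
        for i in range(len(p)):
            q = p[:]
            q[i] += h
            g.append((f(q) - f(p)) / h)
        p = [pi - lr * gi for pi, gi in zip(p, g)]
    return p
```

The GA supplies a good starting point; the gradient stage then converges quickly, mirroring the division of labour described above.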

  11. Spatial distribution of ozone density in pulsed corona discharges observed by two-dimensional laser absorption method

    Energy Technology Data Exchange (ETDEWEB)

    Ono, Ryo; Oda, Tetsuji [Department of Electrical Engineering, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656 (Japan)

    2004-03-07

    The spatial distribution of ozone density is measured in pulsed corona discharges with a 40 μm spatial resolution using a two-dimensional laser absorption method. Discharge occurs in a 13 mm point-to-plane gap in dry air with a pulse duration of 100 ns. The result shows that the ozone density increases for about 100 μs after the discharge pulse. The rate coefficient of the ozone-producing reaction, O + O₂ + M → O₃ + M, is estimated to be 3.5 × 10⁻³⁴ cm⁶ s⁻¹. It is observed that ozone is mostly distributed in the secondary-streamer channel. This suggests that most of the ozone is produced by the secondary streamer, not the primary streamer. After the discharge pulse, ozone diffuses into the background from the secondary-streamer channel. The diffusion coefficient of ozone is estimated to be approximately 0.1 to 0.2 cm² s⁻¹.

  12. Spatial distribution of ozone density in pulsed corona discharges observed by two-dimensional laser absorption method

    International Nuclear Information System (INIS)

    Ono, Ryo; Oda, Tetsuji

    2004-01-01

    The spatial distribution of ozone density is measured in pulsed corona discharges with a 40 μm spatial resolution using a two-dimensional laser absorption method. Discharge occurs in a 13 mm point-to-plane gap in dry air with a pulse duration of 100 ns. The result shows that the ozone density increases for about 100 μs after the discharge pulse. The rate coefficient of the ozone-producing reaction, O + O₂ + M → O₃ + M, is estimated to be 3.5 × 10⁻³⁴ cm⁶ s⁻¹. It is observed that ozone is mostly distributed in the secondary-streamer channel. This suggests that most of the ozone is produced by the secondary streamer, not the primary streamer. After the discharge pulse, ozone diffuses into the background from the secondary-streamer channel. The diffusion coefficient of ozone is estimated to be approximately 0.1 to 0.2 cm² s⁻¹.
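A back-of-the-envelope check: with the reported rate coefficient and standard air number densities (assumed roughly 1 atm and 300 K, not stated values from the paper), the e-folding time for O atoms to convert to ozone comes out at a few tens of microseconds, consistent with the observed ~100 μs rise of the ozone density:

```python
# Three-body ozone formation: O + O2 + M -> O3 + M
k = 3.5e-34        # cm^6 s^-1, rate coefficient reported above
n_M = 2.5e19       # cm^-3, total air number density at ~1 atm, 300 K (assumed)
n_O2 = 0.21 * n_M  # cm^-3, oxygen fraction of air

tau = 1.0 / (k * n_O2 * n_M)  # s, e-folding time of O-atom conversion to O3
print(f"tau = {tau:.1e} s")   # prints: tau = 2.2e-05 s
```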

  13. The Multivariate Gaussian Probability Distribution

    DEFF Research Database (Denmark)

    Ahrendt, Peter

    2005-01-01

    This technical report intends to gather information about the multivariate gaussian distribution, that was previously not (at least to my knowledge) to be found in one place and written as a reference manual. Additionally, some useful tips and tricks are collected that may be useful in practical ...

  14. Two-dimensional microclimate distribution within and above a crop canopy in an arid environment: Modeling and observational studies

    Science.gov (United States)

    Naot, O.; Mahrer, Y.

    1991-08-01

    A numerical two-dimensional model based on higher-order closure assumptions is developed to simulate the horizontal microclimate distribution over an irrigated field in arid surroundings. The model considers heat, mass, momentum, and radiative fluxes in the soil-plant-atmosphere system. Its vertical domain extends through the whole planetary boundary layer. The model requires temporal solar and atmospheric radiation data, as well as temporal boundary conditions for wind-speed, air temperature, and humidity. These boundary conditions are specified by an auxiliary mesoscale model and are incorporated in the microscale model by a nudging method. Vegetation parameters (canopy height, leaf-angle orientation distribution, leaf-area index, photometric properties, root-density distribution), soil texture, and soil-hydraulic and photometric properties are considered. The model is tested using meteorological data obtained in a drip-irrigated cotton field located in an extremely arid area, where strong fetch effects are expected. Four masts located 50 m before the leading edge of the field and 10, 30, and 100 m inward from the leading edge are used to measure various meteorological parameters and their horizontal and vertical gradients. Calculated values of air and soil temperatures, wind-speed, net radiation and soil, latent, and sensible heat fluxes agreed well with measurements. Large horizontal gradients of air temperature are both observed and measured within the canopy in the first 40 m from the leading edge. Rates of evapotranspiration at both the upwind and the downwind edges of the field are more than 15% higher than the midfield value. Model calculations show that a stable thermal stratification is maintained above the whole field for 24 h. The aerodynamic and thermal internal boundary layer (IBL) growth is proportional to the square root of the fetch. This is also the observed rate of growth of the thermal IBL over a cool sea surface.
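The nudging method mentioned here is Newtonian relaxation: the microscale variable is relaxed toward the externally supplied mesoscale value on a chosen time scale. A minimal sketch with illustrative numbers:

```python
def nudge(u, u_meso, dt, tau):
    """One explicit step of du/dt = (u_meso - u) / tau."""
    return u + dt * (u_meso - u) / tau

u, u_meso = 2.0, 5.0   # model wind speed vs. mesoscale boundary value (m/s)
dt, tau = 10.0, 600.0  # time step and nudging time scale (s), illustrative

for _ in range(1000):  # ~10000 s of integration
    u = nudge(u, u_meso, dt, tau)
```

After many nudging time scales the model value converges to the mesoscale boundary value, which is exactly the intended effect.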

  15. Analysis of Maneuvering Targets with Complex Motions by Two-Dimensional Product Modified Lv's Distribution for Quadratic Frequency Modulation Signals.

    Science.gov (United States)

    Jing, Fulong; Jiao, Shuhong; Hou, Changbo; Si, Weijian; Wang, Yu

    2017-06-21

    For targets with complex motion, such as ships fluctuating with oceanic waves and highly maneuvering airplanes, azimuth echo signals can be modeled as multicomponent quadratic frequency modulation (QFM) signals after migration compensation and phase adjustment. For the QFM signal model, the chirp rate (CR) and the quadratic chirp rate (QCR) are two important physical quantities which need to be estimated. For multicomponent QFM signals, the cross terms create a challenge for detection, which needs to be addressed. In this paper, by employing a novel multi-scale parametric symmetric self-correlation function (PSSF) and a modified scaled Fourier transform (mSFT), an effective parameter estimation algorithm for QFM signals is proposed, referred to as the two-dimensional product modified Lv's distribution (2D-PMLVD). The 2D-PMLVD is simple and can be easily implemented by using the fast Fourier transform (FFT) and complex multiplication. The principle of the method, its cross terms, its anti-noise performance, and its computational complexity are analyzed in the paper. Compared to three other representative methods, the 2D-PMLVD achieves better anti-noise performance. The 2D-PMLVD, which is free of searching and has no identifiability problems, is more suitable for multicomponent situations. Through several simulations and analyses, the effectiveness of the proposed estimation algorithm is verified.
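For a clean single-component QFM signal, the CR and QCR can simply be read off a polynomial fit to the unwrapped instantaneous phase; this baseline (not the 2D-PMLVD itself, which targets the noisy multicomponent case) illustrates the signal model:

```python
import numpy as np

# QFM model: s(t) = exp(j*2*pi*(f0*t + CR*t^2/2 + QCR*t^3/6))
fs, T = 1000.0, 1.0
t = np.arange(0.0, T, 1.0 / fs)
f0, cr, qcr = 5.0, 2.0, 1.0  # illustrative parameters, not from the paper
s = np.exp(2j * np.pi * (f0 * t + cr * t**2 / 2 + qcr * t**3 / 6))

# Unwrapped phase in cycles is a cubic polynomial in t; fit and read off
# CR = 2*c2 and QCR = 6*c3 from the fitted coefficients.
phase = np.unwrap(np.angle(s)) / (2 * np.pi)
c3, c2, c1, c0 = np.polyfit(t, phase, 3)
cr_hat, qcr_hat = 2 * c2, 6 * c3
```

This direct fit breaks down at low SNR and for overlapping components, which is precisely where transform-domain estimators such as the 2D-PMLVD are claimed to help.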

  16. Correction of raindrop size distributions measured by Parsivel disdrometers, using a two-dimensional video disdrometer as a reference

    Directory of Open Access Journals (Sweden)

    T. H. Raupach

    2015-01-01

    The raindrop size distribution (DSD quantifies the microstructure of rainfall and is critical to studying precipitation processes. We present a method to improve the accuracy of DSD measurements from Parsivel (particle size and velocity disdrometers, using a two-dimensional video disdrometer (2DVD as a reference instrument. Parsivel disdrometers bin raindrops into velocity and equivolume diameter classes, but may misestimate the number of drops per class. In our correction method, drop velocities are corrected with reference to theoretical models of terminal drop velocity. We define a filter for raw disdrometer measurements to remove particles that are unlikely to be plausible raindrops. Drop concentrations are corrected such that on average the Parsivel concentrations match those recorded by a 2DVD. The correction can be trained on and applied to data from both generations of OTT Parsivel disdrometers, and indeed any disdrometer in general. The method was applied to data collected during field campaigns in Mediterranean France for a network of first- and second-generation Parsivel disdrometers, and on a first-generation Parsivel in Payerne, Switzerland. We compared the moments of the resulting DSDs to those of a collocated 2DVD, and the resulting DSD-derived rain rates to collocated rain gauges. The correction improved the accuracy of the moments of the Parsivel DSDs, and in the majority of cases the rain rate match with collocated rain gauges was improved. In addition, the correction was shown to be similar for two different climatologies, suggesting its general applicability.
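The concentration-correction idea can be sketched as per-diameter-class scaling factors learned against the collocated 2DVD reference and then applied to new Parsivel data. All numbers below are made-up illustration data, not the published correction:

```python
import numpy as np

# Mean drop concentrations per diameter class (arbitrary units), averaged
# over a training period; values are made-up illustration data.
parsivel = np.array([120.0, 300.0, 150.0, 40.0, 5.0])
twodvd = np.array([100.0, 330.0, 160.0, 35.0, 6.0])

# Per-class correction factors learned on the training period...
factors = twodvd / parsivel

# ...then applied to new Parsivel measurements of the same classes.
new_parsivel = np.array([90.0, 250.0, 120.0, 30.0, 4.0])
corrected = factors * new_parsivel
```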

  17. Bayesian optimization for computationally extensive probability distributions.

    Science.gov (United States)

    Tamura, Ryo; Hukushima, Koji

    2018-01-01

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distribution is fixed to be small, Bayesian optimization provides a better maximizer of the posterior distribution than the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, Bayesian optimization improves the results efficiently when combined with the steepest descent method, and is thus a powerful tool for searching for a better maximizer of computationally extensive probability distributions.
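The loop described above (fit a Gaussian process to the evaluated points, pick the extreme value of an acquisition function as the next sample) can be sketched in a few dozen lines. Everything here is a toy stand-in: the "expensive" log-posterior, the RBF kernel length scale, and the UCB acquisition are illustrative choices, not the authors' implementation.

```python
import numpy as np

def log_posterior(x):
    """Stand-in for a computationally expensive log-probability
    (hypothetical; peaked at x = 0.3)."""
    return -50.0 * (x - 0.3) ** 2

def rbf(a, b, ell=0.1):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

grid = np.linspace(0.0, 1.0, 201)
X = np.array([0.0, 0.5, 1.0])          # initial sampling points
y = log_posterior(X)

for _ in range(15):
    K = rbf(X, X) + 1e-8 * np.eye(len(X))
    Ks = rbf(grid, X)
    mu = Ks @ np.linalg.solve(K, y)     # GP posterior mean on the grid
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0.0))   # acquisition
    x_next = grid[np.argmax(ucb)]       # extreme value of the acquisition
    X = np.append(X, x_next)
    y = np.append(y, log_posterior(x_next))

best = X[np.argmax(y)]
print(round(best, 2))
```

Each iteration spends one expensive evaluation where the acquisition is largest, which is how a good maximizer is found from a small, fixed budget of sampling points.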

  18. Pre-aggregation for Probability Distributions

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    Motivated by the increasing need to analyze complex uncertain multidimensional data (e.g., in order to optimize and personalize location-based services), this paper proposes novel types of {\\em probabilistic} OLAP queries that operate on aggregate values that are probability distributions...... and the techniques to process these queries. The paper also presents the methods for computing the probability distributions, which enables pre-aggregation, and for using the pre-aggregated distributions for further aggregation. In order to achieve good time and space efficiency, the methods perform approximate...... multidimensional data analysis that is considered in this paper (i.e., approximate processing of probabilistic OLAP queries over probability distributions)....

  19. Pre-Aggregation with Probability Distributions

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    2006-01-01

    Motivated by the increasing need to analyze complex, uncertain multidimensional data this paper proposes probabilistic OLAP queries that are computed using probability distributions rather than atomic values. The paper describes how to create probability distributions from base data, and how...... the distributions can be subsequently used in pre-aggregation. Since the probability distributions can become large, we show how to achieve good time and space efficiency by approximating the distributions. We present the results of several experiments that demonstrate the effectiveness of our methods. The work...... is motivated with a real-world case study, based on our collaboration with a leading Danish vendor of location-based services. This paper is the first to consider the approximate processing of probabilistic OLAP queries over probability distributions....

  20. Two-dimensional quantum key distribution (QKD) protocol for increased key rate fiber-based quantum communications

    DEFF Research Database (Denmark)

    da Lio, Beatrice; Bacco, Davide; Ding, Yunhong

    2017-01-01

    We experimentally prove a novel two-dimensional QKD scheme, relying on differential phase-time shifting (DPTS) of strongly attenuated weak coherent pulses. We demonstrate QKD transmission up to 170 km standard fiber, and even include a classical channel up to 90 km.

  1. APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING MODELS

    Directory of Open Access Journals (Sweden)

    T. I. Aliev

    2013-03-01

    Full Text Available For probability distributions whose coefficient of variation is not equal to unity, mathematical dependences for approximating distributions on the basis of the first two moments are derived using multi-exponential distributions. It is proposed to approximate distributions with a coefficient of variation less than unity by a hypoexponential distribution, which makes it possible to generate random variables with a coefficient of variation taking any value in the range (0; 1), as opposed to the Erlang distribution, which admits only discrete values of the coefficient of variation.
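The two-moment matching can be sketched for a two-phase hypoexponential distribution: for a target mean m and coefficient of variation c with 0.5 ≤ c² < 1, the sum of two independent exponentials with suitably chosen rates reproduces both moments exactly (a textbook construction; the target values below are illustrative).

```python
import math
import random

def hypoexp_rates(mean, cv):
    """Match mean and coefficient of variation (sqrt(0.5) <= cv < 1)
    with a two-phase hypoexponential: X = Exp(l1) + Exp(l2)."""
    s = math.sqrt(2 * cv**2 - 1)
    m1 = mean * (1 + s) / 2          # mean of phase 1
    m2 = mean * (1 - s) / 2          # mean of phase 2
    return 1 / m1, 1 / m2

l1, l2 = hypoexp_rates(mean=2.0, cv=0.8)
# analytic moments of the sum of two independent exponentials
m = 1 / l1 + 1 / l2
var = 1 / l1**2 + 1 / l2**2
print(round(m, 6), round(math.sqrt(var) / m, 6))  # 2.0 0.8

def hypoexp_sample(l1, l2, rng=random):
    """Generate one random variate with the matched two moments."""
    return rng.expovariate(l1) + rng.expovariate(l2)
```

Coefficients of variation below sqrt(0.5) require more than two phases, which is where the multi-exponential generalization in the abstract comes in.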

  2. Bayesian Prior Probability Distributions for Internal Dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Miller, G.; Inkret, W.C.; Little, T.T.; Martz, H.F.; Schillaci, M.E

    2001-07-01

    The problem of choosing a prior distribution for the Bayesian interpretation of measurements (specifically internal dosimetry measurements) is considered using a theoretical analysis and by examining historical tritium and plutonium urine bioassay data from Los Alamos. Two models for the prior probability distribution are proposed: (1) the log-normal distribution, when there is some additional information to determine the scale of the true result, and (2) the 'alpha' distribution (a simplified variant of the gamma distribution) when there is not. These models have been incorporated into version 3 of the Bayesian internal dosimetric code in use at Los Alamos (downloadable from our web site). Plutonium internal dosimetry at Los Alamos is now being done using prior probability distribution parameters determined self-consistently from population averages of Los Alamos data. (author)

  3. NCEL: two dimensional finite element code for steady-state temperature distribution in seven rod-bundle

    International Nuclear Information System (INIS)

    Hrehor, M.

    1979-01-01

    The paper deals with an application of the finite element method to the heat transfer study in seven-pin models of an LMFBR fuel subassembly. The developed code NCEL solves the two-dimensional steady-state heat conduction equation over the whole subassembly model cross-section and enables analysis of the thermal behaviour under both normal and accidental operational conditions, such as eccentricity of the central rod or full or partial (porous) blockage of some part of the cross-flow area. Heat removal is simulated by heat sinks in the coolant under a subchannel slug-flow approximation

  4. Proposal for Modified Damage Probability Distribution Functions

    DEFF Research Database (Denmark)

    Pedersen, Preben Terndrup; Hansen, Peter Friis

    1996-01-01

    Immediately following the Estonia disaster, the Nordic countries established a project entitled "Safety of Passenger/RoRo Vessels". As part of this project, the present proposal for modified damage stability probability distribution functions has been developed and submitted to "Sub-committee on st......

  5. The Make 2D-DB II package: conversion of federated two-dimensional gel electrophoresis databases into a relational format and interconnection of distributed databases.

    Science.gov (United States)

    Mostaguir, Khaled; Hoogland, Christine; Binz, Pierre-Alain; Appel, Ron D

    2003-08-01

    The Make 2D-DB tool has been previously developed to help build federated two-dimensional gel electrophoresis (2-DE) databases on one's own web site. The purpose of our work is to extend the strength of the first package and to build a more efficient environment. Such an environment should be able to fulfill the different needs and requirements arising from both the growing use of 2-DE techniques and the increasing amount of distributed experimental data.

  6. Statistical models based on conditional probability distributions

    International Nuclear Information System (INIS)

    Narayanan, R.S.

    1991-10-01

    We present a formulation of statistical mechanics models based on conditional probability distributions rather than a Hamiltonian. We show that it is possible to realize critical phenomena through this procedure. Closely linked with this formulation is a Monte Carlo algorithm, in which a generated configuration is guaranteed to be statistically independent from any other configuration for all values of the parameters, in particular near the critical point. (orig.)

  7. Calculating Cumulative Binomial-Distribution Probabilities

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, CUMBIN, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.
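The cumulative binomial and its k-out-of-n reliability application can be sketched directly (a plain Python rendering of the standard formulas, not the CUMBIN C source):

```python
from math import comb

def cum_binom(n, p, k):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def k_out_of_n_reliability(n, k, p_comp):
    """A k-out-of-n system works if at least k of its n components work;
    each component works independently with probability p_comp."""
    return 1.0 - cum_binom(n, p_comp, k - 1)

# e.g. a 2-out-of-3 system of components with 90% reliability
print(round(k_out_of_n_reliability(3, 2, 0.9), 4))  # 0.972
```

For large n, a production code would sum in log space or use an incomplete-beta identity to avoid overflow, which is the kind of numerical care programs like CUMBIN exist to provide.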

  8. Converting dose distributions into tumour control probability

    International Nuclear Information System (INIS)

    Nahum, A.E.

    1996-01-01

    The endpoints in radiotherapy that are truly of relevance are not dose distributions but the probability of local control, sometimes known as the Tumour Control Probability (TCP), and the Probability of Normal Tissue Complications (NTCP). A model for the estimation of TCP based on simple radiobiological considerations is described. It is shown that incorporation of inter-patient heterogeneity into the radiosensitivity parameter α through σ_α can result in a clinically realistic slope for the dose-response curve. The model is applied to inhomogeneous target dose distributions in order to demonstrate the relationship between dose uniformity and σ_α. The consequences of varying clonogenic density are also explored. Finally the model is applied to the target-volume DVHs for patients in a clinical trial of conformal pelvic radiotherapy; the effect of dose inhomogeneities on distributions of TCP is shown as well as the potential benefits of customizing the target dose according to normal-tissue DVHs. (author). 37 refs, 9 figs

  9. Converting dose distributions into tumour control probability

    Energy Technology Data Exchange (ETDEWEB)

    Nahum, A E [The Royal Marsden Hospital, London (United Kingdom). Joint Dept. of Physics

    1996-08-01

    The endpoints in radiotherapy that are truly of relevance are not dose distributions but the probability of local control, sometimes known as the Tumour Control Probability (TCP), and the Probability of Normal Tissue Complications (NTCP). A model for the estimation of TCP based on simple radiobiological considerations is described. It is shown that incorporation of inter-patient heterogeneity into the radiosensitivity parameter α through σ_α can result in a clinically realistic slope for the dose-response curve. The model is applied to inhomogeneous target dose distributions in order to demonstrate the relationship between dose uniformity and σ_α. The consequences of varying clonogenic density are also explored. Finally the model is applied to the target-volume DVHs for patients in a clinical trial of conformal pelvic radiotherapy; the effect of dose inhomogeneities on distributions of TCP is shown as well as the potential benefits of customizing the target dose according to normal-tissue DVHs. (author). 37 refs, 9 figs.
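A common Poisson-based TCP form consistent with this description is TCP = exp(−N₀·exp(−αD)), averaged over a normal spread of radiosensitivity α of width σ_α. The sketch below uses illustrative numbers only, not the paper's fitted values; it shows the key effect, that population heterogeneity shallows the dose-response curve.

```python
import math
import random

def tcp_single(dose, alpha, n_clonogens):
    """Poisson TCP for clonogen survival fraction exp(-alpha * dose)."""
    return math.exp(-n_clonogens * math.exp(-alpha * dose))

def tcp_population(dose, alpha_mean, alpha_sd, n_clonogens,
                   n_draws=20000, seed=1):
    """Average TCP over inter-patient variation alpha ~ N(mean, sd)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        a = max(rng.gauss(alpha_mean, alpha_sd), 0.0)
        total += tcp_single(dose, a, n_clonogens)
    return total / n_draws

# heterogeneity (sigma_alpha > 0) flattens the dose-response curve:
# higher TCP at low dose, lower TCP at high dose
for d in (40.0, 60.0, 80.0):
    print(d, round(tcp_single(d, 0.3, 1e7), 3),
             round(tcp_population(d, 0.3, 0.06, 1e7), 3))
```

Extending this to an inhomogeneous dose distribution amounts to replacing the single dose by a product of per-voxel survival terms taken from the DVH.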

  10. Confidence intervals for the lognormal probability distribution

    International Nuclear Information System (INIS)

    Smith, D.L.; Naberejnev, D.G.

    2004-01-01

    The present communication addresses the topic of symmetric confidence intervals for the lognormal probability distribution. This distribution is frequently utilized to characterize inherently positive, continuous random variables that are selected to represent many physical quantities in applied nuclear science and technology. The basic formalism is outlined herein and a conjured numerical example is provided for illustration. It is demonstrated that when the uncertainty reflected in a lognormal probability distribution is large, the use of a confidence interval provides much more useful information about the variable used to represent a particular physical quantity than can be had by adhering to the notion that the mean value and standard deviation of the distribution ought to be interpreted as best value and corresponding error, respectively. Furthermore, it is shown that if the uncertainty is very large a disturbing anomaly can arise when one insists on interpreting the mean value and standard deviation as the best value and corresponding error, respectively. Reliance on using the mode and median as alternative parameters to represent the best available knowledge of a variable with large uncertainties is also shown to entail limitations. Finally, a realistic physical example involving the decay of radioactivity over a time period that spans many half-lives is presented and analyzed to further illustrate the concepts discussed in this communication
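The contrast the abstract draws between the (mean, standard deviation) summary and a confidence interval that is symmetric on the log scale can be sketched numerically; μ and σ below parameterize the underlying normal and are hypothetical, chosen large to exhibit the anomaly.

```python
import math

mu, sigma = 0.0, 1.5          # large uncertainty (geometric SD ~ 4.5)

median = math.exp(mu)
mode = math.exp(mu - sigma**2)
mean = math.exp(mu + sigma**2 / 2)
sd = mean * math.sqrt(math.exp(sigma**2) - 1)

# 95% interval, symmetric in log space (z = 1.96)
lo = math.exp(mu - 1.96 * sigma)
hi = math.exp(mu + 1.96 * sigma)

print(round(mode, 3), round(median, 3), round(mean, 3), round(sd, 3))
print(round(lo, 3), round(hi, 3))
# the naive "best value +/- error" interval (mean - sd, mean + sd)
# extends below zero here, although a lognormal variable is strictly
# positive: the disturbing anomaly noted in the abstract
```

The interval (lo, hi) always stays positive and brackets the median, which is why it carries more useful information when the uncertainty is large.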

  11. Chain end distribution of block copolymer in two-dimensional microphase-separated structure studied by scanning near-field optical microscopy.

    Science.gov (United States)

    Sekine, Ryojun; Aoki, Hiroyuki; Ito, Shinzaburo

    2009-10-01

    The chain end distribution of a block copolymer in a two-dimensional microphase-separated structure was studied by scanning near-field optical microscopy (SNOM). In the monolayer of poly(octadecyl methacrylate)-block-poly(isobutyl methacrylate) (PODMA-b-PiBMA), the free end of the PiBMA subchain was directly observed by SNOM, and the spatial distributions of the whole block and the chain end are examined and compared with the convolution of the point spread function of the microscope and distribution function of the model structures. It was found that the chain end distribution of the block copolymer confined in two dimensions has a peak near the domain center, being concentrated in the narrower region, as compared with three-dimensional systems.

  12. A new method for the determination of peak distribution across a two-dimensional separation space for the identification of optimal column combinations.

    Science.gov (United States)

    Leonhardt, Juri; Teutenberg, Thorsten; Buschmann, Greta; Gassner, Oliver; Schmidt, Torsten C

    2016-11-01

    For the identification of the optimal column combinations, a comparative orthogonality study of single columns and of columns coupled in series was performed for the first dimension of a microscale two-dimensional liquid chromatographic approach. In total, eight columns or column combinations were chosen. For the assessment of the optimal column combination, the orthogonality value as well as the peak distributions across the first and second dimensions were used. Three different methods of orthogonality calculation, namely the Convex Hull, Bin Counting, and Asterisk methods, were compared. Unfortunately, the first two methods do not provide any information on the peak distribution. The third method provides this important information, but is not optimal when only a limited number of components is used for method development. Therefore, a new concept for assessing the peak distribution across the separation space of two-dimensional chromatographic systems and detecting clustering was developed. It could be shown that the Bin Counting method, in combination with additionally calculated histograms for the respective dimensions, is well suited for the evaluation of orthogonality and peak clustering. The newly developed method could be used generally in the assessment of 2D separations.
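The Bin Counting idea can be sketched as follows: normalize the two retention times to [0, 1], overlay an n×n grid, score coverage as the fraction of occupied bins, and use the per-axis histograms to expose clustering. The grid size, normalization, and synthetic retention data are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def bin_counting(rt1, rt2, n_bins=5):
    """Fraction of occupied bins in an n x n grid over the normalized
    2D separation space (1.0 would mean every bin is used), plus the
    marginal histograms for each dimension."""
    x = (rt1 - rt1.min()) / (rt1.max() - rt1.min())
    y = (rt2 - rt2.min()) / (rt2.max() - rt2.min())
    h, _, _ = np.histogram2d(x, y, bins=n_bins, range=[[0, 1], [0, 1]])
    coverage = np.count_nonzero(h) / n_bins**2
    return coverage, h.sum(axis=1), h.sum(axis=0)

rng = np.random.default_rng(0)
rt1 = rng.uniform(0, 10, 60)              # hypothetical 1st-dimension times
rt2_corr = rt1 + rng.normal(0, 0.3, 60)   # correlated columns -> clustered
rt2_orth = rng.uniform(0, 10, 60)         # uncorrelated -> well spread

cov_corr, hx, hy = bin_counting(rt1, rt2_corr)
cov_orth, _, _ = bin_counting(rt1, rt2_orth)
print(cov_corr, cov_orth)   # correlated data occupies far fewer bins
```

A correlated column pair concentrates peaks along the diagonal, so its coverage stays low even though both marginal histograms look uniform, which is exactly the clustering the combined metric is meant to detect.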

  13. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    Kuzio, S.

    2001-01-01

    The purpose of this analysis is to develop a probability distribution for flowing interval spacing. A flowing interval is defined as a fractured zone that transmits flow in the Saturated Zone (SZ), as identified through borehole flow meter surveys (Figure 1). This analysis uses the term ''flowing interval spacing'' as opposed to fractured spacing, which is typically used in the literature. The term fracture spacing was not used in this analysis because the data used identify a zone (or a flowing interval) that contains fluid-conducting fractures but does not distinguish how many or which fractures comprise the flowing interval. The flowing interval spacing is measured between the midpoints of each flowing interval. Fracture spacing within the SZ is defined as the spacing between fractures, with no regard to which fractures are carrying flow. The Development Plan associated with this analysis is entitled, ''Probability Distribution for Flowing Interval Spacing'', (CRWMS M and O 2000a). The parameter from this analysis may be used in the TSPA SR/LA Saturated Zone Flow and Transport Work Direction and Planning Documents: (1) ''Abstraction of Matrix Diffusion for SZ Flow and Transport Analyses'' (CRWMS M and O 1999a) and (2) ''Incorporation of Heterogeneity in SZ Flow and Transport Analyses'', (CRWMS M and O 1999b). A limitation of this analysis is that the probability distribution of flowing interval spacing may underestimate the effect of incorporating matrix diffusion processes in the SZ transport model because of the possible overestimation of the flowing interval spacing. Larger flowing interval spacing results in a decrease in the matrix diffusion processes. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be determined from the data. 
Because each flowing interval probably has more than one fracture contributing to a flowing interval, the true flowing interval spacing could be

  14. Library of subroutines to produce one- and two-dimensional statistical distributions on the ES-1010 computer

    International Nuclear Information System (INIS)

    Vzorov, I.K.; Ivanov, V.V.

    1978-01-01

    A library of subroutines to produce one- and two-dimensional distributions on the ES-1010 computer is described. A one-dimensional distribution is represented as a histogram; a two-dimensional one is represented as a table. The library provides facilities for booking and deleting, filling and clearing histograms (tables), arithmetic operations on them, and printing histograms (tables) on the computer printer with a variable printer line. All subroutines are written in FORTRAN-4 and can be called from programs written in FORTRAN or in ASSEMBLER. The library can be implemented on any computer system that offers a FORTRAN-4 compiler

  15. Details of 1π sr wide acceptance angle electrostatic lens for electron energy and two-dimensional angular distribution analysis combined with real space imaging

    International Nuclear Information System (INIS)

    Tóth, László; Matsuda, Hiroyuki; Matsui, Fumihiko; Goto, Kentaro; Daimon, Hiroshi

    2012-01-01

    We propose a new 1π sr Wide Acceptance Angle Electrostatic Lens (WAAEL), which works as a photoemission electron microscope (PEEM) and as a highly sensitive display-type electron energy and two-dimensional angular distribution analyzer. It can display two-dimensional angular distributions of charged particles within an acceptance angle of ±60°, which is much larger than the largest acceptance angle range so far and comparable to the display-type spherical mirror analyzer developed by Daimon et al. It has good focusing capabilities with 5-times magnification and 27(4) μm lateral resolution. The relative energy resolution is typically from 2 to 5×10⁻³, depending on the diameter of the energy aperture and the emission area on the sample. Although the lateral resolution of the presented lens is far from what is available nowadays, this is the first working model that can form images using charged particles collected over a 1π sr wide acceptance angle. The realization of such a lens system is one of the first possible steps towards the field of imaging-type atomic-resolution electron microscopy envisioned by Feynman et al. Here some preliminary results are shown.

  16. Two Dimensional Verification of the Dose Distribution of Gamma Knife Model C using Monte Carlo Simulation with a Virtual Source

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae-Hoon; Kim, Yong-Kyun; Lee, Cheol Ho; Son, Jaebum; Lee, Sangmin; Kim, Dong Geon; Choi, Joonbum; Jang, Jae Yeong [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun-Tai [Seoul National University, Seoul (Korea, Republic of)

    2016-10-15

    Gamma Knife model C contains 201 ⁶⁰Co sources located on a spherical surface, so that each beam is concentrated on the center of the sphere. In previous work, we simulated the Gamma Knife model C with a Monte Carlo simulation code using Geant4. Instead of the 201-collimator system, we built a single collimation system that collects the source parameters of particles passing through the collimator helmet. Using this virtual source, we drastically reduced the simulation time needed to transport the 201 gamma beams to the target. The gamma index has been widely used to compare two dose distributions in cancer radiotherapy. Gamma index pass rates were compared between results calculated with the virtual source method and with the original method, and against measured results obtained using radiochromic films. The virtual source method significantly reduces the simulation time of a Gamma Knife model C and provides absorbed dose distributions equivalent to those of the original method, with a gamma index pass rate close to 100% under the 1 mm/3% criteria. On the other hand, it gives a slightly narrower dose distribution than the film measurement, with a gamma index pass rate of 94%. A more accurate and sophisticated examination of the accuracy of the simulation and film measurement is necessary.
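The gamma index combines a dose-difference tolerance (e.g. 3% of the maximum dose) with a distance-to-agreement tolerance (e.g. 1 mm); an evaluated point passes when some reference point brings the combined metric below 1. A brute-force 1D sketch (the Gaussian profiles are illustrative, not Gamma Knife data):

```python
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, dx_mm, dta_mm=1.0, dd_frac=0.03):
    """1D gamma analysis: for each evaluated point, minimize over all
    reference points sqrt((distance/DTA)^2 + (dose diff/tolerance)^2)."""
    x = np.arange(len(dose_ref)) * dx_mm
    dd_abs = dd_frac * dose_ref.max()       # global dose criterion
    gammas = []
    for xi, de in zip(x, dose_eval):
        dist2 = ((x - xi) / dta_mm) ** 2
        dose2 = ((dose_ref - de) / dd_abs) ** 2
        gammas.append(np.sqrt(np.min(dist2 + dose2)))
    return np.mean(np.array(gammas) <= 1.0)

x = np.linspace(-10, 10, 201)               # 0.1 mm grid
ref = np.exp(-x**2 / 8)                     # reference dose profile
shifted = np.exp(-(x - 0.3)**2 / 8)         # evaluation shifted by 0.3 mm
print(gamma_pass_rate(shifted, ref, dx_mm=0.1))  # 1.0: well within 1 mm DTA
```

A 2 mm shift of the same profile fails part of the steep penumbra under the 1 mm/3% criteria, which is the mechanism behind a pass rate like the 94% reported against film.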

  17. Two-dimensional T2 distribution mapping in rock core plugs with optimal k-space sampling.

    Science.gov (United States)

    Xiao, Dan; Balcom, Bruce J

    2012-07-01

    Spin-echo single point imaging has been employed for 1D T2 distribution mapping, but a simple extension to 2D is challenging since the acquisition time increases n-fold, where n is the number of pixels in the second dimension. Nevertheless, 2D T2 mapping in fluid-saturated rock core plugs is highly desirable because the bedding plane structure in rocks often results in different pore properties within the sample. The acquisition time can be improved by undersampling k-space. The cylindrical shape of rock core plugs yields well-defined intensity distributions in k-space that may be efficiently determined by the new k-space sampling patterns developed in this work. These patterns acquire 22.2% and 11.7% of the k-space data points. Companion density images may be employed, in a keyhole imaging sense, to improve image quality. T2-weighted images are fit to extract T2 distributions, pixel by pixel, employing an inverse Laplace transform. Images reconstructed with compressed sensing, with similar acceleration factors, are also presented. The results show that restricted k-space sampling, in this application, provides high quality results.
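The per-pixel inverse Laplace step can be sketched as a non-negative least-squares fit against a dictionary of exponential decays (a standard discrete approach; the echo times, T2 grid, and two-component test signal are illustrative, not the paper's acquisition parameters).

```python
import numpy as np
from scipy.optimize import nnls

# echo times and a synthetic two-component decay for one pixel
te = np.linspace(0.002, 1.0, 64)                    # s, hypothetical
signal = 0.6 * np.exp(-te / 0.05) + 0.4 * np.exp(-te / 0.4)

# dictionary of decays on a log-spaced T2 grid; NNLS yields a
# non-negative amplitude spectrum (a discrete inverse Laplace transform)
t2_grid = np.logspace(-3, 1, 100)
A = np.exp(-te[:, None] / t2_grid[None, :])
amplitudes, _ = nnls(A, signal)

peaks = t2_grid[amplitudes > 0.05]
print(peaks)   # spikes cluster near the true T2 values, 0.05 s and 0.4 s
```

With noisy data a Tikhonov-regularized variant is normally used instead of bare NNLS, since the inverse Laplace transform is severely ill-conditioned.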

  18. Audio feature extraction using probability distribution function

    Science.gov (United States)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It is also known to be used recently for biometric and multimedia information retrieval systems. This technology stems from successive research on audio feature extraction analysis. The Probability Distribution Function (PDF) is a statistical method which is usually used as one of the processes in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed which uses only the PDF as the feature extraction method itself for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF values for each frame of sampled voice signals obtained from a number of individuals are plotted. From the experimental results, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
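A per-frame amplitude PDF can be estimated with a normalized histogram, giving one feature vector per frame; frame length, bin count, and the synthetic stand-in signal below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pdf_features(signal, frame_len=256, bins=20):
    """Per-frame amplitude PDF estimated by a normalized histogram;
    each frame yields a `bins`-dimensional feature vector."""
    n_frames = len(signal) // frame_len
    feats = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        hist, _ = np.histogram(frame, bins=bins, range=(-1.0, 1.0),
                               density=True)
        feats.append(hist)
    return np.array(feats)

rng = np.random.default_rng(42)
voice = np.clip(rng.normal(0, 0.2, 4096), -1, 1)   # stand-in for speech
F = pdf_features(voice)
print(F.shape)          # (16, 20): 16 frames, a 20-bin PDF for each
# with density=True each row integrates to 1 over the amplitude range
print(round(F[0].sum() * (2.0 / 20), 3))  # 1.0
```

Comparing these feature vectors across speakers (e.g. by plotting or by a simple distance) is the kind of visual PDF comparison the abstract describes.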

  19. The Probability Distribution for a Biased Spinner

    Science.gov (United States)

    Foster, Colin

    2012-01-01

    This article advocates biased spinners as an engaging context for statistics students. Calculating the probability of a biased spinner landing on a particular side makes valuable connections between probability and other areas of mathematics. (Contains 2 figures and 1 table.)

  20. Probability evolution method for exit location distribution

    Science.gov (United States)

    Zhu, Jinjie; Chen, Zhen; Liu, Xianbin

    2018-03-01

    The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit has been clarified by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, while noise with finite strength may induce nontrivial phenomena, such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes exponentially long as the noise approaches zero, and the majority of the time is wasted on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. This method can be used to calculate the exit location distribution. It is verified on two classical examples and compared with theoretical predictions. The results show that the method performs well for weak noise, while it may induce certain deviations for large noise. Finally, some possible ways to improve our method are discussed.

  1. Joint probability distributions and fluctuation theorems

    International Nuclear Information System (INIS)

    García-García, Reinaldo; Kolton, Alejandro B; Domínguez, Daniel; Lecomte, Vivien

    2012-01-01

    We derive various exact results for Markovian systems that spontaneously relax to a non-equilibrium steady state by using joint probability distribution symmetries of different entropy production decompositions. The analytical approach is applied to diverse problems such as the description of the fluctuations induced by experimental errors, for unveiling symmetries of correlation functions appearing in fluctuation–dissipation relations recently generalized to non-equilibrium steady states, and also for mapping averages between different trajectory-based dynamical ensembles. Many known fluctuation theorems arise as special instances of our approach for particular twofold decompositions of the total entropy production. As a complement, we also briefly review and synthesize the variety of fluctuation theorems applying to stochastic dynamics of both continuous systems described by a Langevin dynamics and discrete systems obeying a Markov dynamics, emphasizing how these results emerge from distinct symmetries of the dynamical entropy of the trajectory followed by the system. For Langevin dynamics, we embed the 'dual dynamics' with a physical meaning, and for Markov systems we show how the fluctuation theorems translate into symmetries of modified evolution operators
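As a reminder of the structure such results share (a generic form for one entropy-production-like functional, not the specific decompositions derived in the paper), a detailed fluctuation theorem and the integral relation it implies read:

```latex
% Detailed fluctuation theorem for an entropy-production-like
% trajectory functional \Delta S, and its integral form (obtained by
% multiplying both sides by P(\Delta S = -A)\, e^{-A} and summing over A):
\frac{P(\Delta S = A)}{P(\Delta S = -A)} = e^{A}
\qquad \Rightarrow \qquad
\left\langle e^{-\Delta S} \right\rangle = 1
```

Each twofold decomposition of the total entropy production mentioned in the abstract yields its own symmetry of this type for the corresponding joint distribution.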

  2. Two-dimensional calculus

    CERN Document Server

    Osserman, Robert

    2011-01-01

    The basic component of several-variable calculus, two-dimensional calculus is vital to mastery of the broader field. This extensive treatment of the subject offers the advantage of a thorough integration of linear algebra and materials, which aids readers in the development of geometric intuition. An introductory chapter presents background information on vectors in the plane, plane curves, and functions of two variables. Subsequent chapters address differentiation, transformations, and integration. Each chapter concludes with problem sets, and answers to selected exercises appear at the end of the book.

  3. Two-dimensional models

    International Nuclear Information System (INIS)

    Schroer, Bert; Freie Universitaet, Berlin

    2005-02-01

    It is not possible to compactly review the overwhelming literature on two-dimensional models in a meaningful way without a specific viewpoint; I have therefore tacitly added to the above title the words 'as theoretical laboratories for general quantum field theory'. I dedicate this contribution to the memory of J. A. Swieca with whom I have shared the passion of exploring 2-dimensional models for almost one decade. A shortened version of this article is intended as a contribution to the project 'Encyclopedia of mathematical physics' and comments, suggestions and critical remarks are welcome. (author)

  4. Two-dimensional ferroelectrics

    Energy Technology Data Exchange (ETDEWEB)

    Blinov, L M; Fridkin, Vladimir M; Palto, Sergei P [A.V. Shubnikov Institute of Crystallography, Russian Academy of Sciences, Moscow, Russian Federation (Russian Federation); Bune, A V; Dowben, P A; Ducharme, Stephen [Department of Physics and Astronomy, Behlen Laboratory of Physics, Center for Materials Research and Analysis, University of Nebraska-Lincoln, Lincoln, NE (United States)

    2000-03-31

    The investigation of the finite-size effect in ferroelectric crystals and films has been limited by the experimental conditions. The smallest demonstrated ferroelectric crystals had a diameter of ≈200 Å and the thinnest ferroelectric films were ≈200 Å thick, macroscopic sizes on an atomic scale. Langmuir-Blodgett deposition of films one monolayer at a time has produced high quality ferroelectric films as thin as 10 Å, made from polyvinylidene fluoride and its copolymers. These ultrathin films permitted the ultimate investigation of finite-size effects on the atomic thickness scale. Langmuir-Blodgett films also revealed the fundamental two-dimensional character of ferroelectricity in these materials by demonstrating that there is no so-called critical thickness; films as thin as two monolayers (1 nm) are ferroelectric, with a transition temperature near that of the bulk material. The films exhibit all the main properties of ferroelectricity with a first-order ferroelectric-paraelectric phase transition: polarization hysteresis (switching); the jump in spontaneous polarization at the phase transition temperature; thermal hysteresis in the polarization; the increase in the transition temperature with applied field; double hysteresis above the phase transition temperature; and the existence of the ferroelectric critical point. The films also exhibit a new phase transition associated with the two-dimensional layers. (reviews of topical problems)

  5. Correlation between DNAPL distribution area and dissolved concentration in surfactant enhanced aquifer remediation effluent: a two-dimensional flow cell study

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Bin; Li, Huiying; Du, Xiaoming; Zhong, Lirong; Yang, Bin; Du, Ping; Gu, Qingbao; Li, Fasheng

    2016-02-01

    During the process of surfactant enhanced aquifer remediation (SEAR), free-phase dense non-aqueous phase liquid (DNAPL) may be mobilized and spread. The impact of DNAPL spreading on SEAR remediation is not sufficiently understood, and its positive effect is infrequently mentioned. To evaluate the correlation between DNAPL spreading and remediation efficiency, a two-dimensional sandbox apparatus was used to simulate the migration and dissolution of 1,2-DCA (1,2-dichloroethane) DNAPL during SEAR. The distribution area of DNAPL in the sandbox was determined by digital image analysis and correlated with the effluent DNAPL concentration. The results showed that the effluent DNAPL concentration has a significant positive linear correlation with the DNAPL distribution area, indicating that the mobilization of DNAPL could improve remediation efficiency by enlarging the total NAPL-water interfacial area available for mass transfer. Meanwhile, the vertical migration of 1,2-DCA was limited within the boundary of the aquifer in all experiments, implying that by manipulating injection parameters in SEAR, optimal remediation efficiency can be reached while the risk of DNAPL vertical migration is minimized. This study provides a convenient visual and quantitative method for the optimization of parameters for SEAR projects, and an approach for rapidly predicting the extent of DNAPL contaminant distribution based on the dissolved DNAPL concentration in the extraction well.

  6. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    S. Kuzio

    2004-01-01

    Fracture spacing is a key hydrologic parameter in analyses of matrix diffusion. Although the individual fractures that transmit flow in the saturated zone (SZ) cannot be identified directly, it is possible to determine the fractured zones that transmit flow from flow meter survey observations. The fractured zones that transmit flow, as identified through borehole flow meter surveys, have been defined in this report as flowing intervals. The flowing interval spacing is measured between the midpoints of each flowing interval. The determination of flowing interval spacing is important because the flowing interval spacing parameter is a key hydrologic parameter in SZ transport modeling, which impacts the extent of matrix diffusion in the SZ volcanic matrix. The output of this report is input to the "Saturated Zone Flow and Transport Model Abstraction" (BSC 2004 [DIRS 170042]). Specifically, the analysis of data and development of a data distribution reported herein is used to develop the uncertainty distribution for the flowing interval spacing parameter for the SZ transport abstraction model. Figure 1-1 shows the relationship of this report to other model reports that also pertain to flow and transport in the SZ. Figure 1-1 also shows the flow of key information among the SZ reports. It should be noted that Figure 1-1 does not contain a complete representation of the data and parameter inputs and outputs of all SZ reports, nor does it show inputs external to this suite of SZ reports. Use of the developed flowing interval spacing probability distribution is subject to the limitations of the assumptions discussed in Sections 5 and 6 of this analysis report. The number of fractures in a flowing interval is not known. Therefore, the flowing intervals are assumed to be composed of one flowing zone in the transport simulations. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be
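The spacing definition used above (distance between midpoints of consecutive flowing intervals) can be computed directly; the interval depths below are hypothetical illustrative values, not data from the report.

```python
def interval_spacings(intervals):
    """intervals: list of (top, bottom) depths of flowing intervals.

    Returns the spacings between midpoints of consecutive intervals,
    sorted by depth."""
    mids = sorted((top + bottom) / 2.0 for top, bottom in intervals)
    return [b - a for a, b in zip(mids, mids[1:])]

# Hypothetical flowing intervals from a single borehole survey (depths in m)
spacings = interval_spacings([(100, 110), (150, 170), (240, 250)])
print(spacings)  # [55.0, 85.0]
```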

  7. Quadratic Frequency Modulation Signals Parameter Estimation Based on Two-Dimensional Product Modified Parameterized Chirp Rate-Quadratic Chirp Rate Distribution.

    Science.gov (United States)

    Qu, Zhiyu; Qu, Fuxin; Hou, Changbo; Jing, Fulong

    2018-05-19

    In an inverse synthetic aperture radar (ISAR) imaging system for targets with complex motion, the azimuth echo signals of the target are always modeled as multicomponent quadratic frequency modulation (QFM) signals. The chirp rate (CR) and quadratic chirp rate (QCR) estimation of QFM signals is very important for solving the ISAR image defocus problem. For multicomponent QFM (multi-QFM) signals, the conventional CR and QCR estimation algorithms suffer from cross-terms and poor anti-noise ability. This paper proposes a novel estimation algorithm called the two-dimensional product modified parameterized chirp rate-quadratic chirp rate distribution (2D-PMPCRD) for QFM signal parameter estimation. The 2D-PMPCRD employs a multi-scale parametric symmetric self-correlation function and a modified nonuniform fast Fourier transform-fast Fourier transform to transform the signals into the chirp rate-quadratic chirp rate (CR-QCR) domains. It can greatly suppress the cross-terms while strengthening the auto-terms by multiplying different CR-QCR domains with different scale factors. Compared with the high-order ambiguity function-integrated cubic phase function and the modified Lv's distribution, the simulation results verify that the 2D-PMPCRD achieves higher anti-noise performance and better cross-term suppression for multi-QFM signals with reasonable computational cost.
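The QFM signal model that these estimators target can be synthesized directly from its phase polynomial; the amplitudes, chirp rates, and sampling below are arbitrary illustrative choices, and the 2D-PMPCRD itself is not implemented here.

```python
import numpy as np

# Quadratic frequency modulation (QFM) signal:
#   s(t) = A * exp(j*2*pi*(f0*t + cr*t^2/2 + qcr*t^3/6)),
# where cr is the chirp rate (CR) and qcr the quadratic chirp rate (QCR).
def qfm(t, A=1.0, f0=50.0, cr=20.0, qcr=8.0):
    phase = 2 * np.pi * (f0 * t + cr * t**2 / 2 + qcr * t**3 / 6)
    return A * np.exp(1j * phase)

fs = 1000.0                      # sampling rate (Hz), illustrative
t = np.arange(0, 1, 1 / fs)
# Two-component QFM signal, as in the multicomponent model above
s = qfm(t) + qfm(t, A=0.7, f0=120.0, cr=-15.0, qcr=5.0)
```

The instantaneous frequency of each component, f0 + cr·t + qcr·t²/2, is what a CR-QCR estimator ultimately has to recover from the composite signal s.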

  8. Scoring Rules for Subjective Probability Distributions

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    The theoretical literature has a rich characterization of scoring rules for eliciting the subjective beliefs that an individual has for continuous events, but under the restrictive assumption of risk neutrality. It is well known that risk aversion can dramatically affect the incentives to correctly...... report the true subjective probability of a binary event, even under Subjective Expected Utility. To address this one can “calibrate” inferences about true subjective probabilities from elicited subjective probabilities over binary events, recognizing the incentives that risk averse agents have...... to distort reports. We characterize the comparable implications of the general case of a risk averse agent when facing a popular scoring rule over continuous events, and find that these concerns do not apply with anything like the same force. For empirically plausible levels of risk aversion, one can...

  9. The retrieval of two-dimensional distribution of the earth's surface aerodynamic roughness using SAR image and TM thermal infrared image

    Institute of Scientific and Technical Information of China (English)

    ZHANG; Renhua; WANG; Jinfeng; ZHU; Caiying; SUN; Xiaomin

    2004-01-01

    After analyzing the requirements for two-dimensional distributions of the earth's aerodynamic surface roughness in the study of land surface-atmosphere interaction, this paper presents a new way to calculate the aerodynamic roughness using the earth's surface geometric roughness retrieved from SAR (Synthetic Aperture Radar) and TM thermal infrared image data. On the one hand, the SPM (Small Perturbation Model) was used as a theoretical SAR backscattering model to describe the relationship between the SAR backscattering coefficient, the earth's surface geometric roughness, and its dielectric constant, the latter retrieved from a physical model relating soil thermal inertia to soil surface moisture using simultaneous TM thermal infrared image data and ground microclimate data. On the basis of matching the SAR image with the TM image, the non-volume-scattering surface geometric information was obtained from the SPM model at the TM image pixel scale, and the ground pixel surface's equivalent geometric roughness (height standard RMS, Root Mean Square) was derived from this geometric information by the transformation of typical topographic factors. The vegetation (wheat, tree) height retrieved from a spectral model was also transformed into its equivalent geometric roughness. A complete two-dimensional distribution map of the equivalent geometric roughness over the experimental area was produced by the data mosaic technique. On the other hand, according to atmospheric eddy theory, the aerodynamic surface roughness was iterated out with the atmospheric stability correction method, using wind and temperature profile data measured at several typical fields such as bare soil and vegetation fields. 
After having analyzed the effect of surface equivalent geometric roughness together with dynamic and thermodynamic factors on the aerodynamic surface roughness within the working area, this paper first establishes a scale

  10. Two-dimensional distribution of electron temperature in ergodic layer of LHD measured from line intensity ratio of CIV and NeVIII

    International Nuclear Information System (INIS)

    Wang, Erhui; Morita, Shigeru; Goto, Motoshi; Murakami, Izumi; Oishi, Tetsutarou; Dong, Chunfeng

    2013-01-01

    Two-dimensional distribution of impurity lines emitted from the ergodic layer with stochastic magnetic field lines in the Large Helical Device (LHD) has been observed using a space-resolved extreme ultraviolet (EUV) spectrometer. The two-dimensional electron temperature distribution in the ergodic layer is successfully measured using the line intensity ratio of Li-like NeVIII 2s-3p (2S1/2-2P3/2: 88.09 Å, 2S1/2-2P1/2: 88.13 Å) to 2p-3s (2P1/2-2S1/2: 102.91 Å, 2P3/2-2S1/2: 103.09 Å) transitions emitted from a radial location near the Last Closed Flux Surface (LCFS). The intensity ratio analyzed with the ADAS code shows no dependence on the electron density below 10^14 cm^-3. The result indicates a slightly higher temperature, i.e., 220 eV, at the poloidal location on the high-field side near the helical coils, called the O-point, compared to the temperature near the X-point, i.e., 170 eV. The electron temperature profile is also measured at the edge boundary of the ergodic layer using the line intensity ratio of Li-like CIV 2p-3d (2P1/2-2D3/2: 384.03 Å, 2P3/2-2D5/2: 384.18 Å) to 2p-3s (2P1/2-2S1/2: 419.53 Å, 2P3/2-2S1/2: 419.71 Å) transitions. The intensity ratios analyzed with the CHIANTI, ADAS and T. Kawachi codes show a slightly higher temperature near the O-point, i.e., 25 eV for CHIANTI, 21 eV for ADAS and 11 eV for T. Kawachi's code, compared to the temperature at the X-point, i.e., 15-21 eV for CHIANTI, 9-15 eV for ADAS and 6-9 eV for T. Kawachi's code. This suggests that the transport coefficient in the ergodic layer varies with the three-dimensional structure. (author)
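Line-ratio thermometry of this kind reduces to inverting a modeled ratio-versus-temperature curve. The sketch below uses a made-up monotonic curve rather than ADAS or CHIANTI output, purely to illustrate the inversion step.

```python
import numpy as np

# Hypothetical modeled line-intensity ratio R(Te); a real analysis would
# tabulate this curve from an atomic code such as ADAS or CHIANTI.
Te_grid = np.linspace(50, 400, 200)                  # electron temperature (eV)
ratio_model = 0.5 + 2.0 * np.sqrt(Te_grid / 400.0)   # monotonic in Te

def temperature_from_ratio(r):
    """Invert a measured intensity ratio to Te by linear interpolation.
    Requires ratio_model to be monotonically increasing."""
    return np.interp(r, ratio_model, Te_grid)

Te = temperature_from_ratio(2.0)   # measured ratio -> temperature (eV)
```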

  11. Matrix-exponential distributions in applied probability

    CERN Document Server

    Bladt, Mogens

    2017-01-01

    This book contains an in-depth treatment of matrix-exponential (ME) distributions and their sub-class of phase-type (PH) distributions. Loosely speaking, an ME distribution is obtained through replacing the intensity parameter in an exponential distribution by a matrix. The ME distributions can also be identified as the class of non-negative distributions with rational Laplace transforms. If the matrix has the structure of a sub-intensity matrix for a Markov jump process we obtain a PH distribution which allows for nice probabilistic interpretations facilitating the derivation of exact solutions and closed form formulas. The full potential of ME and PH unfolds in their use in stochastic modelling. Several chapters on generic applications, like renewal theory, random walks and regenerative processes, are included together with some specific examples from queueing theory and insurance risk. We emphasize our intention towards applications by including an extensive treatment on statistical methods for PH distribu...

  12. The probability distribution of extreme precipitation

    Science.gov (United States)

    Korolev, V. Yu.; Gorshenin, A. K.

    2017-12-01

    On the basis of the negative binomial distribution of the duration (in days) of wet periods, an asymptotic model is proposed for the distribution of the maximum daily rainfall volume during a wet period, having the form of a mixture of Fréchet distributions and coinciding with the distribution of a positive power of a random variable having the Fisher-Snedecor distribution. The proof of the corresponding result is based on limit theorems for extreme order statistics in samples of random size with a mixed Poisson distribution. The adequacy of the proposed models and the methods of their statistical analysis is demonstrated by the example of estimating the extreme-value distribution parameters from real data.
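The structure of the model (a maximum taken over a wet period whose length is negative binomially distributed) is easy to simulate; the daily-rainfall law and all parameters below are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def wet_period_maxima(n_periods, r=2, p=0.4, scale=10.0):
    """Simulate the maximum daily rainfall over wet periods whose length
    (in days) is negative binomially distributed. The Weibull daily-total
    law and the parameters are placeholders, not fitted values."""
    maxima = []
    for _ in range(n_periods):
        length = rng.negative_binomial(r, p) + 1       # at least one wet day
        daily = rng.weibull(0.8, size=length) * scale  # heavy-ish tailed totals
        maxima.append(daily.max())
    return np.array(maxima)

m = wet_period_maxima(10000)   # empirical sample of wet-period maxima
```

A histogram of `m` could then be compared against a fitted Fréchet mixture of the kind the paper derives.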

  13. A Theoretical Study on Quantitative Prediction and Evaluation of Thermal Residual Stresses in Metal Matrix Composite (Case 1 : Two-Dimensional In-Plane Fiber Distribution)

    International Nuclear Information System (INIS)

    Lee, Joon Hyun; Son, Bong Jin

    1997-01-01

    Although discontinuously reinforced metal matrix composite (MMC) is one of the most promising materials for aerospace and automotive applications, the thermal residual stresses developed in the MMC due to the mismatch in coefficients of thermal expansion between the matrix and the fiber under a temperature change have been pointed out as one of the serious problems in practical applications. There are very few nondestructive techniques for measuring the residual stress of composite materials, and many difficulties have been reported in their application. It is therefore important to establish an analytical model to evaluate the thermal residual stress of MMC for practical engineering applications. In this study, an elastic model is developed to predict the average thermal residual stresses in the matrix and fiber of a misoriented short fiber composite. The thermal residual stresses are induced by the mismatch in the coefficients of thermal expansion of the matrix and fiber when the composite is subjected to a uniform temperature change. The model considers two-dimensional in-plane fiber misorientation. The analytical formulation of the model is based on Eshelby's equivalent inclusion method and is unique in that it is able to account for interactions among fibers. This model is more general than past models and can be used to investigate the effect of parameters that might influence the thermal residual stress in composites. The present model is used to investigate the effects of fiber volume fraction, distribution type, distribution cut-off angle, and aspect ratio on the thermal residual stress for in-plane fiber misorientation. Fiber volume fraction, aspect ratio, and distribution cut-off angle are shown to have more significant effects on the magnitude of the thermal residual stresses than fiber distribution type for in-plane misorientation.

  14. Tunable diode laser absorption spectroscopy-based tomography system for on-line monitoring of two-dimensional distributions of temperature and H2O mole fraction

    International Nuclear Information System (INIS)

    Xu, Lijun; Liu, Chang; Jing, Wenyang; Cao, Zhang; Xue, Xin; Lin, Yuzhen

    2016-01-01

    To monitor two-dimensional (2D) distributions of temperature and H2O mole fraction, an on-line tomography system based on tunable diode laser absorption spectroscopy (TDLAS) was developed. To the best of the authors' knowledge, this is the first report of a multi-view TDLAS-based system for simultaneous tomographic visualization of temperature and H2O mole fraction in real time. The system consists of two distributed feedback (DFB) laser diodes, a tomographic sensor, electronic circuits, and a computer. The central frequencies of the two DFB laser diodes are 7444.36 cm^-1 (1343.3 nm) and 7185.6 cm^-1 (1391.67 nm), respectively. The tomographic sensor is used to generate fan-beam illumination from five views and to produce 60 ray measurements. The electronic circuits not only provide stable temperature and precise current control signals for the laser diodes but also accurately sample the transmitted laser intensities and extract integrated absorbances in real time. Finally, the integrated absorbances are transferred to the computer, in which the 2D distributions of temperature and H2O mole fraction are reconstructed by using a modified Landweber algorithm. In the experiments, the TDLAS-based tomography system was validated using asymmetric premixed flames with fixed and time-varying equivalence ratios, respectively. The results demonstrate that the system is able to reconstruct the profiles of the 2D distributions of temperature and H2O mole fraction of the flame and effectively capture the dynamics of the combustion process, which exhibits good potential for flame monitoring and on-line combustion diagnosis.
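The reconstruction step can be illustrated with a plain (unmodified, projected) Landweber iteration on a synthetic linear system. The 60-ray count comes from the abstract, but the 5 × 5 pixel grid, the data, and the step size are assumptions, and the paper's modifications to the algorithm are not reproduced.

```python
import numpy as np

# Projected Landweber iteration for A x ~ b:
#   x_{k+1} = max(0, x_k + lam * A^T (b - A x_k)),
# with lam = 1/||A||_2^2, which guarantees convergence (lam < 2/||A||_2^2).
def landweber(A, b, n_iter=20000):
    lam = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = np.clip(x + lam * A.T @ (b - A @ x), 0.0, None)  # non-negative field
    return x

rng = np.random.default_rng(1)
A = rng.random((60, 25))   # 60 ray integrals over a 5 x 5 pixel grid (synthetic)
x_true = rng.random(25)    # "true" absorption field, flattened
b = A @ x_true             # noiseless integrated absorbances
x_rec = landweber(A, b)    # reconstructed field
```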

  15. Probability distribution of machining center failures

    International Nuclear Information System (INIS)

    Jia Yazhou; Wang Molin; Jia Zhixin

    1995-01-01

    Through field tracing research for 24 Chinese cutter-changeable CNC machine tools (machining centers) over a period of one year, a database of operation and maintenance for machining centers was built, the failure data was fitted to the Weibull distribution and the exponential distribution, the effectiveness was tested, and the failure distribution pattern of machining centers was found. Finally, the reliability characterizations for machining centers are proposed

  16. Foundations of quantization for probability distributions

    CERN Document Server

    Graf, Siegfried

    2000-01-01

    Due to the rapidly increasing need for methods of data compression, quantization has become a flourishing field in signal and image processing and information theory. The same techniques are also used in statistics (cluster analysis), pattern recognition, and operations research (optimal location of service centers). The book gives the first mathematically rigorous account of the fundamental theory underlying these applications. The emphasis is on the asymptotics of quantization errors for absolutely continuous and special classes of singular probabilities (surface measures, self-similar measures) presenting some new results for the first time. Written for researchers and graduate students in probability theory the monograph is of potential interest to all people working in the disciplines mentioned above.

  17. Real-time two-dimensional imaging of potassium ion distribution using an ion semiconductor sensor with charged coupled device technology.

    Science.gov (United States)

    Hattori, Toshiaki; Masaki, Yoshitomo; Atsumi, Kazuya; Kato, Ryo; Sawada, Kazuaki

    2010-01-01

    Two-dimensional real-time observation of potassium ion distributions was achieved using an ion imaging device based on charge-coupled device (CCD) and metal-oxide semiconductor technologies, and an ion selective membrane. The CCD potassium ion image sensor was equipped with an array of 32 × 32 pixels (1024 pixels). It could record five frames per second over an area of 4.16 × 4.16 mm^2. Potassium ion images were produced instantly. The leaching of potassium ion from a 3.3 M KCl Ag/AgCl reference electrode was dynamically monitored in aqueous solution. The potassium ion selective membrane on the semiconductor consisted of plasticized poly(vinyl chloride) (PVC) with bis(benzo-15-crown-5). The addition of a polyhedral oligomeric silsesquioxane to the plasticized PVC membrane greatly improved adhesion of the membrane onto the Si3N4 of the semiconductor surface, and the potential response was stabilized. The potential response was linear in the logarithmic concentration of potassium ion from 10^-2 to 10^-5 M. The selectivity coefficients were K^pot(K+, Li+) = 10^-2.85, K^pot(K+, Na+) = 10^-2.30, K^pot(K+, Rb+) = 10^-1.16, and K^pot(K+, Cs+) = 10^-2.05.

  18. Analysis of Maneuvering Targets with Complex Motions by Two-Dimensional Product Modified Lv’s Distribution for Quadratic Frequency Modulation Signals

    Directory of Open Access Journals (Sweden)

    Fulong Jing

    2017-06-01

    For targets with complex motion, such as ships fluctuating with oceanic waves and highly maneuvering airplanes, azimuth echo signals can be modeled as multicomponent quadratic frequency modulation (QFM) signals after migration compensation and phase adjustment. For the QFM signal model, the chirp rate (CR) and the quadratic chirp rate (QCR) are two important physical quantities that need to be estimated. For multicomponent QFM signals, the cross terms create a challenge for detection, which needs to be addressed. In this paper, by employing a novel multi-scale parametric symmetric self-correlation function (PSSF) and a modified scaled Fourier transform (mSFT), an effective parameter estimation algorithm is proposed, referred to as the two-dimensional product modified Lv's distribution (2D-PMLVD), for QFM signals. The 2D-PMLVD is simple and can be easily implemented by using the fast Fourier transform (FFT) and complex multiplication. These aspects are analyzed in the paper, including the principle, the cross terms, anti-noise performance, and computational complexity. Compared to three other representative methods, the 2D-PMLVD can achieve better anti-noise performance. The 2D-PMLVD, which is free of searching and has no identifiability problems, is more suitable for multicomponent situations. Through several simulations and analyses, the effectiveness of the proposed estimation algorithm is verified.

  19. Two-dimensional finite difference model to study temperature distribution in SST regions of human limbs immediately after physical exercise in cold climate

    Science.gov (United States)

    Kumari, Babita; Adlakha, Neeru

    2015-02-01

    Thermoregulation is a complex mechanism regulating heat production within the body (chemical thermoregulation) and heat exchange between the body and the environment (physical thermoregulation) in such a way that the heat exchange is balanced and deep body temperatures are relatively stable. The external heat transfer mechanisms are radiation, conduction, convection and evaporation. Physical activity causes thermal stress and poses challenges for this thermoregulation. In this paper, a model has been developed to study the temperature distribution in the SST regions of human limbs immediately after physical exercise in a cold climate. It is assumed that the subject is exercising initially and comes to rest at time t = 0. The human limb is assumed to be of cylindrical shape. The peripheral region of the limb is divided into three natural components, namely the epidermis, dermis and subdermal tissues (SST). Appropriate boundary conditions have been framed based on the physical conditions of the problem. Finite differences have been employed for the time, radial and angular variables. The numerical results have been used to obtain temperature profiles in the SST region immediately after continuous exercise for a two-dimensional unsteady-state case. The results have been used to analyze the thermal stress in relation to light-, moderate- and vigorous-intensity exercise.
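The kind of finite-difference scheme described above can be sketched for the simpler case of purely radial conduction in a cylindrical layer. The grid, diffusivity, and boundary temperatures below are illustrative assumptions; the paper's bioheat source terms, SST layering, and angular discretization are not reproduced.

```python
import numpy as np

# One explicit finite-difference step for radial heat conduction in a
# cylinder: dT/dt = alpha * (d2T/dr2 + (1/r) dT/dr). Boundary nodes are
# held fixed (Dirichlet conditions); stability needs dt <= dr^2/(2*alpha).
def step(T, r, alpha, dt, dr):
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt * (
        (T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2
        + (T[2:] - T[:-2]) / (2 * r[1:-1] * dr)
    )
    return Tn

r = np.linspace(0.01, 0.04, 31)           # radial grid (m), away from the axis
dr = r[1] - r[0]
T = np.full_like(r, 37.0)                 # warm tissue after exercise (deg C)
T[-1] = 15.0                              # cold outer surface (deg C)
for _ in range(200):                      # 200 s of simulated cooling
    T = step(T, r, alpha=1.4e-7, dt=1.0, dr=dr)
```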

  20. Eliciting Subjective Probability Distributions with Binary Lotteries

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    2015-01-01

    We test in a laboratory experiment the theoretical prediction that risk attitudes have a surprisingly small role in distorting reports from true belief distributions. We find evidence consistent with theory in our experiment....

  1. Scoring Rules for Subjective Probability Distributions

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    2017-01-01

    significantly due to risk aversion. We characterize an approach for eliciting the entire subjective belief distribution that is minimally biased due to risk aversion. We offer simulated examples to demonstrate the intuition of our approach. We also provide theory to formally characterize our framework. And we...... provide experimental evidence which corroborates our theoretical results. We conclude that for empirically plausible levels of risk aversion, one can reliably elicit most important features of the latent subjective belief distribution without undertaking calibration for risk attitudes providing one...

  2. Most probable degree distribution at fixed structural entropy

    Indian Academy of Sciences (India)

    Here we derive the most probable degree distribution emerging ... the structural entropy of power-law networks is an increasing function of the expo- .... tition function Z of the network as the sum over all degree distributions, with given energy.

  3. Incorporating Skew into RMS Surface Roughness Probability Distribution

    Science.gov (United States)

    Stahl, Mark T.; Stahl, H. Philip.

    2013-01-01

    The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data we confirm the Gaussian distribution overestimates the mode and application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
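The paper's point can be reproduced on synthetic data: for a right-skewed sample, a symmetric Gaussian fit (whose mode equals its mean) overestimates the mode relative to a skew-aware fit. The lognormal choice and its parameters below are assumptions for illustration, not the authors' data or proposed distribution.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic right-skewed "RMS roughness" sample (stand-in for measurements)
data = rng.lognormal(mean=1.0, sigma=0.5, size=100000)

gaussian_mode = data.mean()                 # Gaussian fit: mode == mean
mu, sigma = np.log(data).mean(), np.log(data).std()
lognormal_mode = np.exp(mu - sigma**2)      # mode of the fitted lognormal

print(gaussian_mode > lognormal_mode)       # True: Gaussian overestimates the mode
```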

  4. Two-dimensional quantum repeaters

    Science.gov (United States)

    Wallnöfer, J.; Zwerger, M.; Muschik, C.; Sangouard, N.; Dür, W.

    2016-11-01

    The endeavor to develop quantum networks gave rise to a rapidly developing field with far-reaching applications such as secure communication and the realization of distributed computing tasks. This ultimately calls for the creation of flexible multiuser structures that allow for quantum communication between arbitrary pairs of parties in the network and facilitate also multiuser applications. To address this challenge, we propose a two-dimensional quantum repeater architecture to establish long-distance entanglement shared between multiple communication partners in the presence of channel noise and imperfect local control operations. The scheme is based on the creation of self-similar multiqubit entanglement structures at growing scale, where variants of entanglement swapping and multiparty entanglement purification are combined to create high-fidelity entangled states. We show how such networks can be implemented using trapped ions in cavities.

  5. Probability distributions in risk management operations

    CERN Document Server

    Artikis, Constantinos

    2015-01-01

    This book is about the formulations, theoretical investigations, and practical applications of new stochastic models for fundamental concepts and operations of the discipline of risk management. It also examines how these models can be useful in the descriptions, measurements, evaluations, and treatments of risks threatening various modern organizations. Moreover, the book makes clear that such stochastic models constitute very strong analytical tools which substantially facilitate strategic thinking and strategic decision making in many significant areas of risk management. In particular the incorporation of fundamental probabilistic concepts such as the sum, minimum, and maximum of a random number of continuous, positive, independent, and identically distributed random variables in the mathematical structure of stochastic models significantly supports the suitability of these models in the developments, investigations, selections, and implementations of proactive and reactive risk management operations. The...

  6. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions.

    Science.gov (United States)

    Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas

    2016-06-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome , which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the

  7. How Can Histograms Be Useful for Introducing Continuous Probability Distributions?

    Science.gov (United States)

    Derouet, Charlotte; Parzysz, Bernard

    2016-01-01

    The teaching of probability has changed a great deal since the end of the last century. The development of technologies is indeed part of this evolution. In France, continuous probability distributions began to be studied in 2002 by scientific 12th graders, but this subject was marginal and appeared only as an application of integral calculus.…

  8. Modeling the probability distribution of peak discharge for infiltrating hillslopes

    Science.gov (United States)

    Baiamonte, Giorgio; Singh, Vijay P.

    2017-07-01

    Hillslope response plays a fundamental role in the prediction of peak discharge at the basin outlet. The peak discharge for the critical duration of rainfall and its probability distribution are needed for designing urban infrastructure facilities. This study derives the probability distribution, denoted as the GABS model, by coupling three models: (1) the Green-Ampt model for computing infiltration, (2) the kinematic wave model for computing the discharge hydrograph from the hillslope, and (3) the intensity-duration-frequency (IDF) model for computing design rainfall intensity. The Hortonian mechanism for runoff generation is employed for computing the surface runoff hydrograph. Since the antecedent soil moisture condition (ASMC) significantly affects the rate of infiltration, its effect on the probability distribution of peak discharge is investigated. Application to a watershed in Sicily, Italy, shows that as the probability increases, the expected effect of ASMC in increasing the maximum discharge diminishes. Only for low values of probability is the critical duration of rainfall influenced by ASMC, whereas its effect on the peak discharge appears small for any probability. For a set of parameters, the derived probability distribution of peak discharge seems to be well fitted by the gamma distribution. Finally, an application to a small watershed, with the aim of testing the possibility of preparing rational runoff coefficient tables in advance for use with the rational method, and a comparison between peak discharges obtained by the GABS model and those measured in an experimental flume for a loamy-sand soil were carried out.
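The Green-Ampt component of the GABS chain can be sketched as follows. Under ponded conditions, cumulative infiltration F(t) satisfies the implicit equation F = K t + ψ Δθ ln(1 + F/(ψ Δθ)), solved here by fixed-point iteration; the soil parameters are illustrative, not the paper's calibrated values.

```python
import math

def green_ampt_F(t, K=1.0, psi=10.0, dtheta=0.3, tol=1e-10):
    """Cumulative Green-Ampt infiltration F(t) under ponded conditions.

    K: saturated hydraulic conductivity, psi: wetting-front suction head,
    dtheta: moisture deficit (all in consistent, illustrative units).
    The fixed-point map F -> K*t + pd*ln(1 + F/pd) is a contraction
    (its derivative 1/(1 + F/pd) < 1), so the iteration converges."""
    pd = psi * dtheta
    F = K * t if t > 0 else 0.0       # initial guess
    for _ in range(200):
        F_new = K * t + pd * math.log(1 + F / pd)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F
```

The infiltration rate follows as f = K (1 + ψ Δθ / F), which feeds the Hortonian runoff computation when rainfall intensity exceeds f.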

  9. Information-theoretic methods for estimating of complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

Mixing various disciplines frequently produces something profound and far-reaching; cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur

  10. Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution

    Energy Technology Data Exchange (ETDEWEB)

    Hamadameen, Abdulqader Othman [Optimization, Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia); Zainuddin, Zaitul Marlizawati [Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia)

    2014-06-19

This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and two solution strategies are developed: a fuzzy transformation via a ranking function, and a stochastic transformation in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.

  11. Fitness Probability Distribution of Bit-Flip Mutation.

    Science.gov (United States)

    Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique

    2015-01-01

    Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
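
For the Onemax case, the distribution has a simple exact form: a string of length n with k ones moves to fitness k - D1 + D0, where D1 ~ Bin(k, p) counts ones flipped off and D0 ~ Bin(n - k, p) counts zeros flipped on. A small sketch by direct enumeration (not the authors' Krawtchouk-polynomial derivation):

```python
from math import comb

def onemax_mutation_pmf(n, k, p):
    """Exact distribution of Onemax fitness after uniform bit-flip mutation.

    Each bit of a length-n string with fitness k (k ones) is flipped
    independently with probability p.  New fitness = k - d1 + d0, where
    d1 ~ Bin(k, p) and d0 ~ Bin(n - k, p).
    """
    pmf = {}
    for d1 in range(k + 1):
        p1 = comb(k, d1) * p**d1 * (1 - p)**(k - d1)
        for d0 in range(n - k + 1):
            p0 = comb(n - k, d0) * p**d0 * (1 - p)**(n - k - d0)
            f = k - d1 + d0
            pmf[f] = pmf.get(f, 0.0) + p1 * p0
    return pmf

pmf = onemax_mutation_pmf(n=10, k=7, p=0.1)
# Sanity checks: probabilities sum to 1; E[f] = k(1-p) + (n-k)p = 6.6.
mean = sum(f * q for f, q in pmf.items())
assert abs(sum(pmf.values()) - 1.0) < 1e-9
assert abs(mean - 6.6) < 1e-9
```

Each pmf value is indeed a polynomial in p, as the paper proves in general.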

  12. Characterizing microbial communities and processes in a modern stromatolite (Shark Bay) using lipid biomarkers and two-dimensional distributions of porewater solutes

    DEFF Research Database (Denmark)

    Pagès, Anais; Grice, Kliti; Vacher, Michael

    2014-01-01

Summary: Modern microbial mats are highly complex and dynamic ecosystems. Diffusive equilibration in thin films (DET) and diffusive gradients in thin films (DGT) samplers were deployed in a modern smooth microbial mat from Shark Bay in order to observe, for the first time, two-dimensional distributions of porewater solutes.

  13. Fitting the Probability Distribution Functions to Model Particulate Matter Concentrations

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh.I.

    2017-01-01

The main objective of this study is to identify the best probability distribution and plotting position formula for modeling the concentrations of Total Suspended Particles (TSP) and of Particulate Matter with an aerodynamic diameter < 10 μm (PM10). The best distribution provides the estimated probabilities of exceeding the threshold limit given by the Egyptian Air Quality Limit Value (EAQLV), and the number of exceedance days is estimated as well. The EAQLV standard limits for TSP and PM10 concentrations are 24-h averages of 230 μg/m³ and 70 μg/m³, respectively. Five frequency distribution functions combined with seven plotting position formulas (empirical cumulative distribution functions) are compared to fit the averages of daily TSP and PM10 concentrations in the year 2014 for Ain Sokhna city. The Quantile-Quantile plot (Q-Q plot) is used as a method for assessing how closely a data set fits a particular distribution. A probability distribution that properly represents TSP and PM10 is chosen based on the values of statistical performance indicators. The results show that the Hosking and Wallis plotting position combined with the Frechet distribution gave the best fit for TSP and PM10 concentrations; the Burr distribution with the same plotting position follows the Frechet distribution. The exceedance probability and the number of days over the EAQLV are predicted using the Frechet distribution. In 2014, the exceedance probability and number of days for TSP concentrations were 0.052 and 19 days, respectively. Furthermore, the PM10 concentration was found to exceed the threshold limit on 174 days.
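
The exceedance calculation described here reduces to evaluating a fitted Frechet CDF at the limit value. A minimal sketch; the shape and scale parameters below are hypothetical, not the fitted Ain Sokhna values:

```python
import math

def frechet_cdf(x, alpha, s, m=0.0):
    """Frechet CDF: F(x) = exp(-((x - m) / s) ** (-alpha)) for x > m, else 0."""
    if x <= m:
        return 0.0
    return math.exp(-((x - m) / s) ** (-alpha))

# Hypothetical fitted parameters (alpha, s): exceedance probability of a
# 230 ug/m3 daily limit and the implied number of exceedance days per year.
p_exceed = 1.0 - frechet_cdf(230.0, alpha=3.0, s=120.0)
days = round(365 * p_exceed)
assert 0.0 < p_exceed < 1.0
```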

  14. Probability distributions with truncated, log and bivariate extensions

    CERN Document Server

    Thomopoulos, Nick T

    2018-01-01

    This volume presents a concise and practical overview of statistical methods and tables not readily available in other publications. It begins with a review of the commonly used continuous and discrete probability distributions. Several useful distributions that are not so common and less understood are described with examples and applications in full detail: discrete normal, left-partial, right-partial, left-truncated normal, right-truncated normal, lognormal, bivariate normal, and bivariate lognormal. Table values are provided with examples that enable researchers to easily apply the distributions to real applications and sample data. The left- and right-truncated normal distributions offer a wide variety of shapes in contrast to the symmetrically shaped normal distribution, and a newly developed spread ratio enables analysts to determine which of the three distributions best fits a particular set of sample data. The book will be highly useful to anyone who does statistical and probability analysis. This in...

  15. STADIC: a computer code for combining probability distributions

    International Nuclear Information System (INIS)

    Cairns, J.J.; Fleming, K.N.

    1977-03-01

The STADIC computer code uses a Monte Carlo simulation technique for combining probability distributions. The specific function for combining the input distributions is defined by the user by introducing the appropriate FORTRAN statements into a designated subroutine. The code generates a Monte Carlo sampling from each of the input distributions and combines these according to the user-supplied function to provide, in essence, a random sampling of the combined distribution. When the desired number of samples is obtained, the output routine calculates the mean, standard deviation, and confidence limits for the resultant distribution. This method of combining probability distributions is particularly useful in cases where analytical approaches are either too difficult or undefined.
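
The workflow described (sample each input, combine via a user-supplied function, then summarize) can be sketched in Python rather than FORTRAN; all names here are illustrative, not STADIC's actual interface:

```python
import random
import statistics

def combine_distributions(samplers, func, n=100_000, seed=42):
    """Monte Carlo combination of input distributions (STADIC-style sketch).

    samplers: functions that each draw one sample from an input distribution.
    func: user-supplied function combining one draw from each input into
    a single sample of the output distribution.
    """
    rng = random.Random(seed)
    out = sorted(func(*(s(rng) for s in samplers)) for _ in range(n))
    mean = statistics.fmean(out)
    std = statistics.stdev(out)
    lo, hi = out[int(0.05 * n)], out[int(0.95 * n)]  # 90% confidence limits
    return mean, std, (lo, hi)

# Example: product of a lognormal factor and a uniform factor.
mean, std, (lo, hi) = combine_distributions(
    [lambda r: r.lognormvariate(0.0, 0.5), lambda r: r.uniform(0.8, 1.2)],
    lambda x, y: x * y,
)
assert lo < mean < hi
```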

  16. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran Kumar; Mai, Paul Martin

    2016-01-01

Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show that the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.
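
For illustration, slip values can be drawn from a truncated exponential by inverse-CDF sampling; the rate and maximum-slip values below are hypothetical placeholders, not SRCMOD-fitted parameters:

```python
import math
import random

def sample_truncated_exponential(lam, x_max, n, seed=0):
    """Inverse-CDF sampling from an exponential truncated to [0, x_max].

    The truncated CDF is F(x) = (1 - exp(-lam*x)) / (1 - exp(-lam*x_max)),
    so x = -ln(1 - u*(1 - exp(-lam*x_max))) / lam for u ~ Uniform(0, 1).
    """
    rng = random.Random(seed)
    z = 1.0 - math.exp(-lam * x_max)
    return [-math.log(1.0 - rng.random() * z) / lam for _ in range(n)]

# Hypothetical rate (1/mean-slip) and physical maximum slip, in meters.
slips = sample_truncated_exponential(lam=1 / 2.0, x_max=8.0, n=50_000)
mean_slip = sum(slips) / len(slips)
assert 0.0 <= min(slips) and max(slips) <= 8.0
```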

  17. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran K. S.

    2016-07-13

Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show that the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.

  18. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    International Nuclear Information System (INIS)

    Helton, J.C.

    1996-03-01

A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, L_st, p_st) for stochastic uncertainty, a probability space (S_su, L_su, p_su) for subjective uncertainty and a function (i.e., a random variable) defined on the product space associated with (S_st, L_st, p_st) and (S_su, L_su, p_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the U.S. Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems.

  19. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    International Nuclear Information System (INIS)

    Helton, J.C.

    1996-01-01

A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, L_st, P_st) for stochastic uncertainty, a probability space (S_su, L_su, P_su) for subjective uncertainty and a function (i.e., a random variable) defined on the product space associated with (S_st, L_st, P_st) and (S_su, L_su, P_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the US Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems.

  20. Comparative analysis through probability distributions of a data set

    Science.gov (United States)

    Cristea, Gabriel; Constantinescu, Dan Mihai

    2018-02-01

In practice, probability distributions are applied in fields as diverse as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology, and demography. This article highlights important aspects of fitting probability distributions to data and applying the analysis results to make informed decisions. There are a number of statistical methods available that can help us select the best-fitting model. Some of the graphs display both the input data and the fitted distributions at the same time, as probability density and cumulative distribution. Goodness of fit tests can be used to determine whether a certain distribution is a good fit. The main idea is to measure the "distance" between the data and the tested distribution, and compare that distance to some threshold values. Calculating the goodness of fit statistics also enables us to rank the fitted distributions according to how well they fit the data. This particular feature is very helpful for comparing the fitted models. The paper presents a comparison of the most commonly used goodness of fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large set of data is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions and selecting the best model. These graphs should be viewed as an addition to the goodness of fit tests.
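
The "distance" idea behind such tests can be illustrated with a stdlib-only Kolmogorov-Smirnov statistic used to rank two candidate fits on synthetic data (all parameters here are assumed for the example):

```python
import math
import random

def ks_statistic(sample, cdf):
    """Kolmogorov-Smirnov distance between an empirical sample and a model CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

def norm_cdf(x, mu, sd):
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

rng = random.Random(1)
data = [rng.gauss(5.0, 2.0) for _ in range(2000)]  # true model: normal(5, 2)

fits = {
    "normal(5,2)": lambda x: norm_cdf(x, 5.0, 2.0),
    "exponential": lambda x: 1.0 - math.exp(-x / 5.0) if x > 0 else 0.0,
}
ranked = sorted((ks_statistic(data, cdf), name) for name, cdf in fits.items())
# The true model should win the ranking by KS distance.
assert ranked[0][1] == "normal(5,2)"
```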

  1. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of Bayes' theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.

  2. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of Bayes' theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
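
As one concrete instance of the moment-based fitting techniques surveyed in these reports, a method-of-moments fit of a gamma distribution can be sketched in a few lines (synthetic data, not the Yucca Mountain data set):

```python
import random
import statistics

def gamma_method_of_moments(data):
    """Method-of-moments estimates for a gamma(k, theta) distribution.

    Matches the sample mean to k*theta and the sample variance to
    k*theta**2, giving theta = var/mean and k = mean**2/var.
    """
    m = statistics.fmean(data)
    v = statistics.variance(data)
    return m * m / v, v / m  # (k_hat, theta_hat)

# Synthetic sample with known parameters k=3, theta=2.
rng = random.Random(7)
data = [rng.gammavariate(3.0, 2.0) for _ in range(20_000)]
k_hat, theta_hat = gamma_method_of_moments(data)
assert abs(k_hat - 3.0) < 0.3 and abs(theta_hat - 2.0) < 0.3
```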

  3. Parametric Probability Distribution Functions for Axon Diameters of Corpus Callosum

    Directory of Open Access Journals (Sweden)

    Farshid eSepehrband

    2016-05-01

Axon diameter is an important neuroanatomical characteristic of the nervous system that alters in the course of neurological disorders such as multiple sclerosis. Axon diameters vary, even within a fiber bundle, and are not normally distributed. An accurate distribution function is therefore beneficial, either to describe axon diameters obtained from a direct measurement technique (e.g., microscopy), or to infer them indirectly (e.g., using diffusion-weighted MRI). The gamma distribution is a common choice for this purpose (particularly for the inferential approach) because it resembles the distribution profile of measured axon diameters, which has consistently been shown to be non-negative and right-skewed. In this study we compared a wide range of parametric probability distribution functions against empirical data obtained from electron microscopy images. We observed that the gamma distribution fails to accurately describe the main characteristics of the axon diameter distribution, such as the location and scale of the mode and the profile of the distribution tails. We also found that the generalized extreme value distribution consistently fitted the measured distribution better than the other distribution functions. This suggests that there may be distinct subpopulations of axons in the corpus callosum, each with its own distribution profile. In addition, we observed that several other distributions outperformed the gamma distribution, yet had the same number of unknown parameters; these were the inverse Gaussian, lognormal, log-logistic and Birnbaum-Saunders distributions.

  4. A Probability Distribution over Latent Causes, in the Orbitofrontal Cortex.

    Science.gov (United States)

    Chan, Stephanie C Y; Niv, Yael; Norman, Kenneth A

    2016-07-27

The orbitofrontal cortex (OFC) has been implicated in both the representation of "state," in studies of reinforcement learning and decision making, and also in the representation of "schemas," in studies of episodic memory. Both of these cognitive constructs require a similar inference about the underlying situation or "latent cause" that generates our observations at any given time. The statistically optimal solution to this inference problem is to use Bayes' rule to compute a posterior probability distribution over latent causes. To test whether such a posterior probability distribution is represented in the OFC, we tasked human participants with inferring a probability distribution over four possible latent causes, based on their observations. Using fMRI pattern similarity analyses, we found that BOLD activity in the OFC is best explained as representing the (log-transformed) posterior distribution over latent causes. Furthermore, this pattern explained OFC activity better than other task-relevant alternatives, such as the most probable latent cause, the most recent observation, or the uncertainty over latent causes. Our world is governed by hidden (latent) causes that we cannot observe, but which generate the observations we see. A range of high-level cognitive processes require inference of a probability distribution (or "belief distribution") over the possible latent causes that might be generating our current observations. This is true for reinforcement learning and decision making (where the latent cause comprises the true "state" of the task), and for episodic memory (where memories are believed to be organized by the inferred situation or "schema"). Using fMRI, we show that this belief distribution over latent causes is encoded in patterns of brain activity in the orbitofrontal cortex, an area that has been separately implicated in the representations of both states and schemas. Copyright © 2016 the authors.
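
The Bayes-rule inference that the participants' task requires can be illustrated with a toy discrete version: four hypothetical causes, each a biased coin, updated from a sequence of observations. This is only a sketch of the computation, not the fMRI analysis itself:

```python
import math

def posterior_over_causes(prior, likelihoods, observations):
    """Bayes' rule: posterior over latent causes given i.i.d. observations.

    prior[c] is P(cause = c); likelihoods[c](x) is P(x | cause = c).
    Working in log space avoids underflow for long observation sequences.
    """
    log_post = {c: math.log(p) for c, p in prior.items()}
    for x in observations:
        for c in log_post:
            log_post[c] += math.log(likelihoods[c](x))
    m = max(log_post.values())
    unnorm = {c: math.exp(v - m) for c, v in log_post.items()}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

# Four hypothetical causes as coins with different head probabilities.
prior = {c: 0.25 for c in "ABCD"}
bias = {"A": 0.1, "B": 0.4, "C": 0.6, "D": 0.9}
lik = {c: (lambda x, b=b: b if x == "H" else 1 - b) for c, b in bias.items()}
post = posterior_over_causes(prior, lik, "HHHHHHHHTH")
# Nine heads out of ten should favor the most heads-biased cause, D.
assert max(post, key=post.get) == "D"
```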

  5. Decoherence in two-dimensional quantum walks

    International Nuclear Information System (INIS)

    Oliveira, A. C.; Portugal, R.; Donangelo, R.

    2006-01-01

    We analyze the decoherence in quantum walks in two-dimensional lattices generated by broken-link-type noise. In this type of decoherence, the links of the lattice are randomly broken with some given constant probability. We obtain the evolution equation for a quantum walker moving on two-dimensional (2D) lattices subject to this noise, and we point out how to generalize for lattices in more dimensions. In the nonsymmetric case, when the probability of breaking links in one direction is different from the probability in the perpendicular direction, we have obtained a nontrivial result. If one fixes the link-breaking probability in one direction, and gradually increases the probability in the other direction from 0 to 1, the decoherence initially increases until it reaches a maximum value, and then it decreases. This means that, in some cases, one can increase the noise level and still obtain more coherence. Physically, this can be explained as a transition from a decoherent 2D walk to a coherent 1D walk

  6. Two-dimensional NMR spectrometry

    International Nuclear Information System (INIS)

    Farrar, T.C.

    1987-01-01

This article is the second in a two-part series. In part one (ANALYTICAL CHEMISTRY, May 15) the authors discussed one-dimensional nuclear magnetic resonance (NMR) spectra and some relatively advanced nuclear spin gymnastics experiments that provide a capability for selective sensitivity enhancements. In this article an overview and some applications of two-dimensional NMR experiments are presented. These powerful experiments are important complements to the one-dimensional experiments. As in the more sophisticated one-dimensional experiments, the two-dimensional experiments involve three distinct time periods: a preparation period, t₀; an evolution period, t₁; and a detection period, t₂.

  7. Quasi-two-dimensional holography

    International Nuclear Information System (INIS)

    Kutzner, J.; Erhard, A.; Wuestenberg, H.; Zimpfer, J.

    1980-01-01

Acoustical holography with numerical reconstruction by area scanning is memory- and time-intensive. Drawing on experience with linear holography, we derived a scanning scheme for evaluating two-dimensional flaw sizes. In most practical cases it is sufficient to determine the exact depth extension of a flaw, whereas the accuracy of the length extension is less critical. For this reason the so-called quasi-two-dimensional holography is applicable. The sound field, produced by special probes, is divergent in the inclined plane and slightly focused in the perpendicular plane using cylindrical lenses. (orig.) [de

  8. Bayesian approach for peak detection in two-dimensional chromatography.

    Science.gov (United States)

    Vivó-Truyols, Gabriel

    2012-03-20

A new method for peak detection in two-dimensional chromatography is presented. In a first step, the method starts with a conventional one-dimensional peak detection algorithm to detect modulated peaks. In a second step, a sophisticated algorithm decides which of the individual one-dimensional peaks originated from the same compound and should therefore be merged into a two-dimensional peak. The merging algorithm is based on Bayesian inference. The user sets prior information about certain parameters (e.g., second-dimension retention time variability, first-dimension band broadening, chromatographic noise). On the basis of these priors, the algorithm calculates the probability of myriads of peak arrangements (i.e., ways of merging one-dimensional peaks), finding which of them holds the highest value. Uncertainty in each parameter can be accounted for by suitably adapting its probability distribution function, which in turn may change the final decision on the most probable peak arrangement. It has been demonstrated that the Bayesian approach presented in this paper follows the chromatographers' intuition. The algorithm has been applied and tested with LC × LC and GC × GC data and takes around 1 min to process chromatograms with several thousands of peaks.

  9. Evaluation of burst probability for tubes by Weibull distributions

    International Nuclear Information System (INIS)

    Kao, S.

    1975-10-01

The investigation of candidate distributions that best describe the burst pressure failure probability characteristics of nuclear power steam generator tubes has been continued. To date it has been found that the Weibull distribution provides an acceptable fit to the available data from both the statistical and physical viewpoints. The reasons for the acceptability of the Weibull distribution are stated, together with the results of tests for the suitability of fit. In exploring the acceptability of the Weibull distribution for the fitting, a graphical method called the "density-gram" is employed instead of the usual histogram. With this method a more sensible graphical observation of the empirical density may be made for cases where the available data are very limited. Based on these methods, estimates of failure pressure are made for the left-tail probabilities.
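
The left-tail estimates described here reduce to evaluating a fitted Weibull CDF at low pressures. A minimal sketch; the shape and scale values below are hypothetical, not fitted steam generator tube data:

```python
import math

def weibull_cdf(x, shape, scale):
    """Weibull failure probability: P(burst pressure <= x)."""
    if x <= 0:
        return 0.0
    return 1.0 - math.exp(-((x / scale) ** shape))

# Hypothetical parameters: shape (beta) = 4, scale (eta) = 10 pressure units.
# Left-tail estimate: probability a tube bursts at or below 5 units.
p = weibull_cdf(5.0, shape=4.0, scale=10.0)
assert abs(p - (1.0 - math.exp(-0.0625))) < 1e-12
```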

  10. Probability distribution of extreme share returns in Malaysia

    Science.gov (United States)

    Zin, Wan Zawiah Wan; Safari, Muhammad Aslam Mohd; Jaaman, Saiful Hafizah; Yie, Wendy Ling Shin

    2014-09-01

    The objective of this study is to investigate the suitable probability distribution to model the extreme share returns in Malaysia. To achieve this, weekly and monthly maximum daily share returns are derived from share prices data obtained from Bursa Malaysia over the period of 2000 to 2012. The study starts with summary statistics of the data which will provide a clue on the likely candidates for the best fitting distribution. Next, the suitability of six extreme value distributions, namely the Gumbel, Generalized Extreme Value (GEV), Generalized Logistic (GLO) and Generalized Pareto (GPA), the Lognormal (GNO) and the Pearson (PE3) distributions are evaluated. The method of L-moments is used in parameter estimation. Based on several goodness of fit tests and L-moment diagram test, the Generalized Pareto distribution and the Pearson distribution are found to be the best fitted distribution to represent the weekly and monthly maximum share returns in Malaysia stock market during the studied period, respectively.

  11. Multimode Interference: Identifying Channels and Ridges in Quantum Probability Distributions

    OpenAIRE

    O'Connell, Ross C.; Loinaz, Will

    2004-01-01

    The multimode interference technique is a simple way to study the interference patterns found in many quantum probability distributions. We demonstrate that this analysis not only explains the existence of so-called "quantum carpets," but can explain the spatial distribution of channels and ridges in the carpets. With an understanding of the factors that govern these channels and ridges we have a limited ability to produce a particular pattern of channels and ridges by carefully choosing the ...

  12. Modeling highway travel time distribution with conditional probability models

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira Neto, Francisco Moraes [ORNL; Chin, Shih-Miao [ORNL; Hwang, Ho-Ling [ORNL; Han, Lee [University of Tennessee, Knoxville (UTK)

    2014-01-01

Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits their data dumps to ATRI on a regular basis. Each data transmission contains a truck's location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying upstream-downstream speed distributions at different locations, as well as at different times of the day and days of the week. This research focuses on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions of route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions between successive segments can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating the route travel time distribution as new data or information are added. This methodology can be useful for estimating performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
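
The convolution step can be sketched for discrete link travel-time distributions. This toy version assumes independent links; the paper's link-to-link conditional probabilities, which capture speed correlation, are not reproduced here:

```python
def convolve_pmfs(pmf_a, pmf_b):
    """Distribution of the sum of two independent discrete travel times.

    Each pmf maps a travel time (e.g., minutes) to its probability.
    """
    out = {}
    for ta, pa in pmf_a.items():
        for tb, pb in pmf_b.items():
            out[ta + tb] = out.get(ta + tb, 0.0) + pa * pb
    return out

# Two hypothetical link travel-time distributions (minutes).
link1 = {10: 0.7, 12: 0.2, 15: 0.1}
link2 = {8: 0.6, 9: 0.3, 14: 0.1}
route = convolve_pmfs(link1, link2)
assert abs(sum(route.values()) - 1.0) < 1e-9
assert abs(route[18] - 0.7 * 0.6) < 1e-9  # 18 min arises only from 10 + 8
```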

  13. Some applications of the fractional Poisson probability distribution

    International Nuclear Information System (INIS)

    Laskin, Nick

    2009-01-01

    Physical and mathematical applications of the recently invented fractional Poisson probability distribution have been presented. As a physical application, a new family of quantum coherent states has been introduced and studied. As mathematical applications, we have developed the fractional generalization of Bell polynomials, Bell numbers, and Stirling numbers of the second kind. The appearance of fractional Bell polynomials is natural if one evaluates the diagonal matrix element of the evolution operator in the basis of newly introduced quantum coherent states. Fractional Stirling numbers of the second kind have been introduced and applied to evaluate the skewness and kurtosis of the fractional Poisson probability distribution function. A representation of the Bernoulli numbers in terms of fractional Stirling numbers of the second kind has been found. In the limit case when the fractional Poisson probability distribution becomes the Poisson probability distribution, all of the above-listed developments and implementations turn into the well-known results of quantum optics and the theory of combinatorial numbers.
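
    As a numerical aside, the series representation of the fractional Poisson pmf (the form below is assumed from Laskin's earlier work and should be treated as such) can be evaluated directly; a useful sanity check is that at μ = 1 it collapses to the ordinary Poisson distribution, the limit case mentioned at the end of the abstract.

    ```python
    import math

    def frac_poisson_pmf(n, z, mu, kmax=200):
        """Fractional Poisson pmf via its alternating series, with z = nu * t**mu
        (formula assumed from Laskin 2003); lgamma keeps the terms overflow-free."""
        s = 0.0
        for k in range(kmax):
            log_mag = (math.lgamma(n + k + 1) - math.lgamma(k + 1)
                       - math.lgamma(mu * (n + k) + 1) + k * math.log(z))
            s += (-1) ** k * math.exp(log_mag)
        return z ** n / math.factorial(n) * s

    for n in range(4):                    # a genuinely fractional case, mu = 0.9
        print(n, frac_poisson_pmf(n, 2.0, 0.9))
    ```

    At μ = 1 every Γ(μ(n+k)+1) cancels against (n+k)!, leaving the familiar zⁿe⁻ᶻ/n!.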

  14. Numerical Loading of a Maxwellian Probability Distribution Function

    International Nuclear Information System (INIS)

    Lewandowski, J.L.V.

    2003-01-01

    A renormalization procedure for the numerical loading of a Maxwellian probability distribution function (PDF) is formulated. The procedure, which involves the solution of three coupled nonlinear equations, yields a numerically loaded PDF with improved properties for higher velocity moments. This method is particularly useful for low-noise particle-in-cell simulations with electron dynamics.
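
    The paper's three coupled nonlinear equations are not reproduced here; as a hedged sketch in the same spirit, the snippet below shows the simplest "quiet start" renormalization, shifting and rescaling sampled velocities so the loaded particle set has exactly zero mean and the target thermal variance rather than only statistically so.

    ```python
    import random

    def load_maxwellian(n, vth=1.0, seed=1):
        """Sample n velocities from a Gaussian (Maxwellian) and renormalize so
        the loaded set has exactly zero mean and variance vth**2."""
        rng = random.Random(seed)
        v = [rng.gauss(0.0, vth) for _ in range(n)]
        mean = sum(v) / n
        var = sum((x - mean) ** 2 for x in v) / n
        scale = vth / var ** 0.5
        return [(x - mean) * scale for x in v]

    v = load_maxwellian(10000)
    ```

    Enforcing the low-order moments exactly removes the O(1/√N) sampling noise in those moments, which is the kind of improvement the abstract describes for particle-in-cell loading (the paper's procedure additionally targets higher velocity moments).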

  15. Two-dimensional metamaterial optics

    International Nuclear Information System (INIS)

    Smolyaninov, I I

    2010-01-01

    While three-dimensional photonic metamaterials are difficult to fabricate, many new concepts and ideas in metamaterial optics can be realized in two spatial dimensions using the planar optics of surface plasmon polaritons. In this paper we review recent progress in this direction. Two-dimensional photonic crystals, hyperbolic metamaterials, and plasmonic focusing devices are demonstrated and used in novel microscopy and waveguiding schemes.

  16. Quantum Fourier transform, Heisenberg groups and quasi-probability distributions

    International Nuclear Information System (INIS)

    Patra, Manas K; Braunstein, Samuel L

    2011-01-01

    This paper aims to explore the inherent connection between Heisenberg groups, quantum Fourier transform (QFT) and (quasi-probability) distribution functions. Distribution functions for continuous and finite quantum systems are examined from three perspectives and all of them lead to Weyl-Gabor-Heisenberg groups. The QFT appears as the intertwining operator of two equivalent representations arising out of an automorphism of the group. Distribution functions correspond to certain distinguished sets in the group algebra. The marginal properties of a particular class of distribution functions (Wigner distributions) arise from a class of automorphisms of the group algebra of the Heisenberg group. We then study the reconstruction of the Wigner function from the marginal distributions via inverse Radon transform giving explicit formulae. We consider some applications of our approach to quantum information processing and quantum process tomography.

  17. Simulation of Daily Weather Data Using Theoretical Probability Distributions.

    Science.gov (United States)

    Bruhn, J. A.; Fry, W. E.; Fick, G. W.

    1980-09-01

    A computer simulation model was constructed to supply daily weather data to a plant disease management model for potato late blight. In the weather model Monte Carlo techniques were employed to generate daily values of precipitation, maximum temperature, minimum temperature, minimum relative humidity and total solar radiation. Each weather variable is described by a known theoretical probability distribution but the values of the parameters describing each distribution are dependent on the occurrence of rainfall. Precipitation occurrence is described by a first-order Markov chain. The amount of rain, given that rain has occurred, is described by a gamma probability distribution. Maximum and minimum temperature are simulated with a trivariate normal probability distribution involving maximum temperature on the previous day, maximum temperature on the current day and minimum temperature on the current day. Parameter values for this distribution are dependent on the occurrence of rain on the previous day. Both minimum relative humidity and total solar radiation are assumed to be normally distributed. The values of the parameters describing the distribution of minimum relative humidity are dependent on rainfall occurrence on the previous day and current day. Parameter values for total solar radiation are dependent on the occurrence of rain on the current day. The assumptions made during model construction were found to be appropriate for actual weather data from Geneva, New York. The performance of the weather model was evaluated by comparing the cumulative frequency distributions of simulated weather data with the distributions of actual weather data from Geneva, New York, and Fort Collins, Colorado. For each location, simulated weather data were similar to actual weather data in terms of mean response, variability and autocorrelation. The possible applications of this model when used with models of other components of the agro-ecosystem are discussed.
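
    The precipitation component described above (first-order Markov chain for wet/dry occurrence, gamma-distributed amounts on wet days) can be sketched directly; the transition probabilities and gamma parameters below are hypothetical placeholders, not the fitted Geneva values.

    ```python
    import random

    def simulate_precip(days, p_wet_after_wet=0.60, p_wet_after_dry=0.25,
                        shape=0.8, scale=6.0, seed=42):
        """Daily rainfall (mm): occurrence from a first-order Markov chain,
        amount on wet days from a gamma distribution. Parameter values are
        illustrative assumptions only."""
        rng = random.Random(seed)
        wet, series = False, []
        for _ in range(days):
            p = p_wet_after_wet if wet else p_wet_after_dry
            wet = rng.random() < p
            series.append(rng.gammavariate(shape, scale) if wet else 0.0)
        return series

    rain = simulate_precip(365)
    print(sum(1 for r in rain if r > 0), "wet days,", round(sum(rain), 1), "mm total")
    ```

    In the full model this wet/dry state would also condition the temperature, humidity, and radiation parameters, which is how the dependence on rainfall occurrence propagates to the other variables.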

  18. Exact probability distribution function for the volatility of cumulative production

    Science.gov (United States)

    Zadourian, Rubina; Klümper, Andreas

    2018-04-01

    In this paper we study the volatility and its probability distribution function for the cumulative production based on the experience curve hypothesis. This work presents a generalization of the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Due to its wide applicability in industrial and technological activities we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the future research on forecasting of the production process.

  19. Probability, conditional probability and complementary cumulative distribution functions in performance assessment for radioactive waste disposal

    Energy Technology Data Exchange (ETDEWEB)

    Helton, J.C. [Arizona State Univ., Tempe, AZ (United States)

    1996-03-01

    A formal description of the structure of several recent performance assessments (PAs) for the Waste Isolation Pilot Plant (WIPP) is given in terms of the following three components: a probability space (S_st, S_st, p_st) for stochastic uncertainty, a probability space (S_su, S_su, p_su) for subjective uncertainty and a function (i.e., a random variable) defined on the product space associated with (S_st, S_st, p_st) and (S_su, S_su, p_su). The explicit recognition of the existence of these three components allows a careful description of the use of probability, conditional probability and complementary cumulative distribution functions within the WIPP PA. This usage is illustrated in the context of the U.S. Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, Subpart B). The paradigm described in this presentation can also be used to impose a logically consistent structure on PAs for other complex systems.

  20. Idealized models of the joint probability distribution of wind speeds

    Science.gov (United States)

    Monahan, Adam H.

    2018-05-01

    The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structure or more than two speed variables.
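
    The construction behind the bivariate Weibull case can be illustrated directly: draw isotropic, mean-zero Gaussian wind components whose like components are correlated across the two sites, then take the vector magnitudes. This is a sketch under those stated assumptions; each marginal speed comes out Rayleigh, i.e. Weibull with shape k = 2.

    ```python
    import math, random

    def bivariate_speeds(n, rho=0.8, seed=0):
        """Wind speeds at two sites built from isotropic, mean-zero Gaussian
        components; like components at the two sites have correlation rho.
        Each marginal speed is then Rayleigh (Weibull, shape k = 2)."""
        rng = random.Random(seed)
        samples = []
        for _ in range(n):
            comps = []
            for _ in range(2):                 # u and v components
                z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
                comps.append((z1, rho * z1 + math.sqrt(1 - rho ** 2) * z2))
            (u1, u2), (v1, v2) = comps
            samples.append((math.hypot(u1, v1), math.hypot(u2, v2)))
        return samples

    speeds = bivariate_speeds(4000)
    ```

    Raising each speed to a power b yields Weibull marginals of shape 2/b, which is the power-law transformation the abstract invokes to reach general bivariate Weibull dependence structures.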

  1. Two-dimensional flexible nanoelectronics

    Science.gov (United States)

    Akinwande, Deji; Petrone, Nicholas; Hone, James

    2014-12-01

    2014/2015 represents the tenth anniversary of modern graphene research. Over this decade, graphene has proven to be attractive for thin-film transistors owing to its remarkable electronic, optical, mechanical and thermal properties. Even its major drawback--zero bandgap--has resulted in something positive: a resurgence of interest in two-dimensional semiconductors, such as dichalcogenides and buckled nanomaterials with sizeable bandgaps. With the discovery of hexagonal boron nitride as an ideal dielectric, the materials are now in place to advance integrated flexible nanoelectronics, which uniquely take advantage of the unmatched portfolio of properties of two-dimensional crystals, beyond the capability of conventional thin films for ubiquitous flexible systems.

  2. Two-dimensional topological photonics

    Science.gov (United States)

    Khanikaev, Alexander B.; Shvets, Gennady

    2017-12-01

    Originating from the studies of two-dimensional condensed-matter states, the concept of topological order has recently been expanded to other fields of physics and engineering, particularly optics and photonics. Topological photonic structures have already overturned some of the traditional views on wave propagation and manipulation. The application of topological concepts to guided wave propagation has enabled novel photonic devices, such as reflection-free sharply bent waveguides, robust delay lines, spin-polarized switches and non-reciprocal devices. Discrete degrees of freedom, widely used in condensed-matter physics, such as spin and valley, are now entering the realm of photonics. In this Review, we summarize the latest advances in this highly dynamic field, with special emphasis on the experimental work on two-dimensional photonic topological structures.

  3. Two-dimensional thermofield bosonization

    International Nuclear Information System (INIS)

    Amaral, R.L.P.G.; Belvedere, L.V.; Rothe, K.D.

    2005-01-01

    The main objective of this paper was to obtain an operator realization for the bosonization of fermions in 1 + 1 dimensions at finite, non-zero temperature T. This is achieved in the framework of the real-time formalism of Thermofield Dynamics. Formally, the results parallel those of the T = 0 case. The well-known two-dimensional fermion-boson correspondences at zero temperature are shown to hold also at finite temperature. To emphasize the usefulness of the operator realization for handling a large class of two-dimensional quantum field-theoretic problems, we contrast this global approach with the cumbersome calculation of the fermion-current two-point function in the imaginary-time and real-time formalisms. The calculations also illustrate the very different ways in which the transmutation from Fermi-Dirac to Bose-Einstein statistics is realized.

  4. Geometry of q-Exponential Family of Probability Distributions

    Directory of Open Access Journals (Sweden)

    Shun-ichi Amari

    2011-06-01

    The Gibbs distribution of statistical physics is an exponential family of probability distributions, which has a mathematical basis of duality in the form of the Legendre transformation. Recent studies of complex systems have found many distributions obeying the power law rather than the standard Gibbs-type distributions. The Tsallis q-entropy is a typical example capturing such phenomena. We treat the q-Gibbs distribution, or the q-exponential family, by generalizing the exponential function to the q-family of power functions, which is useful for studying various complex or non-standard physical phenomena. We give a new mathematical structure to the q-exponential family different from those previously given. It has a dually flat geometrical structure derived from the Legendre transformation, and conformal geometry is useful for understanding it. The q-version of the maximum entropy theorem is naturally induced from the q-Pythagorean theorem. We also show that the maximizer of the q-escort distribution is a Bayesian MAP (maximum a posteriori probability) estimator.

  5. Superthermal photon bunching in terms of simple probability distributions

    Science.gov (United States)

    Lettau, T.; Leymann, H. A. M.; Melcher, B.; Wiersig, J.

    2018-05-01

    We analyze the second-order photon autocorrelation function g(2 ) with respect to the photon probability distribution and discuss the generic features of a distribution that results in superthermal photon bunching [g(2 )(0 ) >2 ]. Superthermal photon bunching has been reported for a number of optical microcavity systems that exhibit processes such as superradiance or mode competition. We show that a superthermal photon number distribution cannot be constructed from the principle of maximum entropy if only the intensity and the second-order autocorrelation are given. However, for bimodal systems, an unbiased superthermal distribution can be constructed from second-order correlations and the intensities alone. Our findings suggest modeling superthermal single-mode distributions by a mixture of a thermal and a lasinglike state and thus reveal a generic mechanism in the photon probability distribution responsible for creating superthermal photon bunching. We relate our general considerations to a physical system, i.e., a (single-emitter) bimodal laser, and show that its statistics can be approximated and understood within our proposed model. Furthermore, the excellent agreement of the statistics of the bimodal laser and our model reveals that the bimodal laser is an ideal source of bunched photons, in the sense that it can generate statistics that contain no other features but the superthermal bunching.
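
    The proposed single-mode model, a mixture of a thermal and a lasinglike (Poissonian) state, makes the mechanism easy to verify from factorial moments, since g⁽²⁾(0) = ⟨n(n−1)⟩/⟨n⟩². The mixture weight and mean photon numbers below are illustrative assumptions.

    ```python
    def g2_mixture(w, n_th, n_las):
        """g2(0) of a statistical mixture: weight w thermal (Bose-Einstein,
        mean n_th) plus weight 1 - w Poissonian lasing-like state (mean n_las).
        Thermal <n(n-1)> = 2*nbar**2; Poissonian <n(n-1)> = nbar**2."""
        mean = w * n_th + (1 - w) * n_las
        fac2 = w * 2 * n_th ** 2 + (1 - w) * n_las ** 2
        return fac2 / mean ** 2

    print(g2_mixture(0.5, 10.0, 1.0))  # about 3.32, i.e. > 2: superthermal bunching
    ```

    The pure limits recover the textbook values, g⁽²⁾(0) = 2 for a thermal state (w = 1) and 1 for a Poissonian state (w = 0), so any excess above 2 here comes entirely from the bimodal mixing that the abstract identifies as the generic mechanism.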

  6. Two-dimensional critical phenomena

    International Nuclear Information System (INIS)

    Saleur, H.

    1987-09-01

    Two dimensional critical systems are studied using transformation to free fields and conformal invariance methods. The relations between the two approaches are also studied. The analytical results obtained generally depend on universality hypotheses or on renormalization group trajectories which are not established rigorously, so numerical verifications, mainly using the transfer matrix approach, are presented. The exact determination of critical exponents, the partition functions of critical models on tori, and results as the critical point is approached are discussed.

  7. Two dimensional unstable scar statistics.

    Energy Technology Data Exchange (ETDEWEB)

    Warne, Larry Kevin; Jorgenson, Roy Eberhardt; Kotulski, Joseph Daniel; Lee, Kelvin S. H. (ITT Industries/AES Los Angeles, CA)

    2006-12-01

    This report examines the localization of time harmonic high frequency modal fields in two dimensional cavities along periodic paths between opposing sides of the cavity. The cases where these orbits lead to unstable localized modes are known as scars. This paper examines the enhancements for these unstable orbits when the opposing mirrors are both convex and concave. In the latter case the construction includes the treatment of interior foci.

  8. Finding two-dimensional peaks

    International Nuclear Information System (INIS)

    Silagadze, Z.K.

    2007-01-01

    Two-dimensional generalization of the original peak finding algorithm suggested earlier is given. The ideology of the algorithm emerged from the well-known quantum mechanical tunneling property which enables small bodies to penetrate through narrow potential barriers. We merge this 'quantum' ideology with the philosophy of Particle Swarm Optimization to get the global optimization algorithm which can be called Quantum Swarm Optimization. The functionality of the newborn algorithm is tested on some benchmark optimization problems

  9. Measurement of probability distributions for internal stresses in dislocated crystals

    Energy Technology Data Exchange (ETDEWEB)

    Wilkinson, Angus J.; Tarleton, Edmund; Vilalta-Clemente, Arantxa; Collins, David M. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Jiang, Jun; Britton, T. Benjamin [Department of Materials, Imperial College London, Royal School of Mines, Exhibition Road, London SW7 2AZ (United Kingdom)

    2014-11-03

    Here, we analyse residual stress distributions obtained from various crystal systems using high resolution electron backscatter diffraction (EBSD) measurements. Histograms showing stress probability distributions exhibit tails extending to very high stress levels. We demonstrate that these extreme stress values are consistent with the functional form that should be expected for dislocated crystals. Analysis initially developed by Groma and co-workers for X-ray line profile analysis and based on the so-called “restricted second moment of the probability distribution” can be used to estimate the total dislocation density. The generality of the results is illustrated by application to three quite different systems, namely, face centred cubic Cu deformed in uniaxial tension, a body centred cubic steel deformed to larger strain by cold rolling, and hexagonal InAlN layers grown on misfitting sapphire and silicon carbide substrates.

  10. Outage probability of distributed beamforming with co-channel interference

    KAUST Repository

    Yang, Liang

    2012-03-01

    In this letter, we consider a distributed beamforming scheme (DBF) in the presence of equal-power co-channel interferers for both amplify-and-forward and decode-and-forward relaying protocols over Rayleigh fading channels. We first derive outage probability expressions for the DBF systems. We then present a performance analysis for a scheme relying on source selection. Numerical results are finally presented to verify our analysis. © 2011 IEEE.
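
    The letter's closed-form expressions are not reproduced here, but the quantity being derived, an outage probability under Rayleigh fading with equal-power co-channel interferers, can be estimated with a generic Monte Carlo sketch. The power ratio, number of interferers, and SINR threshold below are assumptions for illustration.

    ```python
    import random

    def outage_prob(n_trials, gamma_th=1.0, n_interf=2, p_i=0.1, seed=7):
        """Monte Carlo estimate of P(SINR < gamma_th) for a Rayleigh-faded link
        with equal-power Rayleigh-faded interferers: exponential(1) power gains,
        noise power normalized to 1, p_i = interferer-to-signal power ratio."""
        rng = random.Random(seed)
        outages = 0
        for _ in range(n_trials):
            s = rng.expovariate(1.0)                        # desired-link power
            i = sum(p_i * rng.expovariate(1.0) for _ in range(n_interf))
            outages += s / (1.0 + i) < gamma_th
        return outages / n_trials

    print(outage_prob(20000, gamma_th=1.0))
    ```

    A closed-form analysis like the letter's replaces this sampling loop with an exact integral over the same exponential channel-gain statistics, and would also fold in the beamforming gain across relays.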

  11. Probabilities of filaments in a Poissonian distribution of points -I

    International Nuclear Information System (INIS)

    Betancort-Rijo, J.

    1989-01-01

    Statistical techniques are devised to assess the likelihood of a Poisson sample of points in two and three dimensions, containing specific filamentary structures. For that purpose, the expression of Otto et al (1986. Astrophys. J., 304) for the probability density of clumps in a Poissonian distribution of points is generalized for any value of the density contrast. A way of counting filaments differing from that of Otto et al. is proposed, because at low density contrast the filaments counted by Otto et al. are distributed in a clumpy fashion, each clump of filaments corresponding to a distinct observed filament. (author)

  12. Universal Probability Distribution Function for Bursty Transport in Plasma Turbulence

    International Nuclear Information System (INIS)

    Sandberg, I.; Benkadda, S.; Garbet, X.; Ropokis, G.; Hizanidis, K.; Castillo-Negrete, D. del

    2009-01-01

    Bursty transport phenomena associated with convective motion present universal statistical characteristics among different physical systems. In this Letter, a stochastic univariate model and the associated probability distribution function for the description of bursty transport in plasma turbulence is presented. The proposed stochastic process recovers the universal distribution of density fluctuations observed in the plasma edge of several magnetic confinement devices and the remarkable scaling between their skewness S and kurtosis K. Similar statistical characteristics of variabilities have also been observed in other physical systems that are characterized by convection, such as the x-ray fluctuations emitted by the Cygnus X-1 accretion disc plasmas and the sea surface temperature fluctuations.
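
    A toy illustration of such an S-K scaling (not the Letter's stochastic model): for Gamma-distributed fluctuations, a common surrogate for positively skewed bursty signals, skewness is S = 2/√k and excess kurtosis K = 6/k, so K = (3/2)S² holds for every shape parameter k.

    ```python
    import math

    # Shape k spans strongly bursty (small k) to near-Gaussian (large k) signals.
    for k in (0.5, 1.0, 4.0, 16.0):
        S = 2.0 / math.sqrt(k)     # Gamma skewness
        K = 6.0 / k                # Gamma excess kurtosis
        print(k, S, K, K / S ** 2) # the ratio K / S**2 is 1.5 for every k
    ```

    The point of the illustration is that a one-parameter family of skewed distributions automatically pins K to a parabola in S, the qualitative shape of the universal scaling reported for edge density fluctuations.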

  13. Estimating probable flaw distributions in PWR steam generator tubes

    International Nuclear Information System (INIS)

    Gorman, J.A.; Turner, A.P.L.

    1997-01-01

    This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses

  14. Probability distributions for Markov chain based quantum walks

    Science.gov (United States)

    Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.

    2018-01-01

    We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesaro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical with the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line. We then obtain the weak limit of the quantum walk. It is noted that the current quantum walk appears to spread faster than its counterpart, the quantum walk on the line driven by the Grover coin discussed in the literature. The paper closes with an outlook on possible future directions.

  15. Theoretical derivation of wind power probability distribution function and applications

    International Nuclear Information System (INIS)

    Altunkaynak, Abdüsselam; Erdik, Tarkan; Dabanlı, İsmail; Şen, Zekai

    2012-01-01

    Highlights: ► Stochastic characteristics of wind power, namely the standard deviation and the dimensionless skewness, are derived. ► Perturbation expressions for the wind power statistics are obtained from the Weibull probability distribution function (PDF). ► Comparisons are made with the corresponding characteristics of the wind speed PDF. ► The wind power abides by the Weibull PDF. -- Abstract: The instantaneous wind power contained in the air current is directly proportional to the cube of the wind speed. In practice, there is a record of wind speeds in the form of a time series. It is, therefore, necessary to develop a formulation that takes into consideration the statistical parameters of such a time series. The purpose of this paper is to derive the general wind power formulation in terms of the statistical parameters by using perturbation theory, which leads to a general formulation of the wind power expectation and other statistical parameter expressions such as the standard deviation and the coefficient of variation. The formulation is very general and can be applied to any wind speed probability distribution function. Its application to the two-parameter Weibull probability distribution of wind speeds is presented in full detail. It is concluded that, provided wind speed is distributed according to a Weibull distribution, the wind power can be derived from wind speed data. It is possible to determine wind power at any desired risk level; however, in practical studies 5% or 10% risk levels are most often preferred, and the simple procedure necessary for this purpose is presented in this paper.
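
    For the Weibull case the cube-of-speed expectation has a closed form, E[v³] = c³Γ(1 + 3/k), which a Monte Carlo draw can cross-check. The air density, swept area, and Weibull parameters below are illustrative assumptions, not values from the paper.

    ```python
    import math, random

    def mean_wind_power(c, k, rho=1.225, area=1.0):
        """Expected power 0.5 * rho * A * E[v**3] for Weibull(scale c, shape k)
        wind speeds, using E[v**3] = c**3 * Gamma(1 + 3/k)."""
        return 0.5 * rho * area * c ** 3 * math.gamma(1.0 + 3.0 / k)

    c, k = 8.0, 2.0                       # illustrative site parameters
    rng = random.Random(3)
    n = 200_000
    mc = sum(0.5 * 1.225 * rng.weibullvariate(c, k) ** 3 for _ in range(n)) / n
    print(mean_wind_power(c, k), mc)      # analytic vs Monte Carlo estimate
    ```

    The paper's perturbation approach generalizes this: instead of relying on a closed-form third moment, it expresses the power expectation and higher statistics in terms of the moments of an arbitrary wind speed PDF.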

  16. Two dimensional infinite conformal symmetry

    International Nuclear Information System (INIS)

    Mohanta, N.N.; Tripathy, K.C.

    1993-01-01

    The invariant discontinuous (discrete) conformal transformation groups, namely the Kleinian and Fuchsian groups Gamma (with an arbitrary signature) of H (the Poincaré upper half-plane) and the unit disc Delta, are explicitly constructed from the fundamental domain D. The Riemann surface with signatures of Gamma and conformally invariant automorphic forms (functions) with the Petersson scalar product are discussed. The functor on the category of complex Hilbert spaces spanned by the space of cusp forms, which constitutes the two dimensional conformal field theory, is discussed. (Author) 7 refs.

  17. Two-dimensional liquid chromatography

    DEFF Research Database (Denmark)

    Græsbøll, Rune

    -dimensional separation space. Optimization of gradients in online RP×RP is more difficult than in normal HPLC as a result of the increased number of parameters and their influence on each other. Modeling the coverage of the compounds across the two-dimensional chromatogram as a result of a change in gradients could...... be used for optimization purposes, and reduce the time spent on optimization. In this thesis (chapter 6), and manuscript B, a measure of the coverage of the compounds in the two-dimensional separation space is defined. It is then shown that this measure can be modeled for changes in the gradient in both...

  18. Ionization of oriented targets by intense circularly polarized laser pulses: Imprints of orbital angular nodes in the two-dimensional momentum distribution

    DEFF Research Database (Denmark)

    Martiny, Christian; Abu-Samha, Mahmoud; Madsen, Lars Bojer

    2010-01-01

    We solve the three-dimensional time-dependent Schrödinger equation for a few-cycle circularly polarized femtosecond laser pulse that interacts with an oriented target exemplified by an argon atom, initially in a 3px or 3py state. The photoelectron momentum distributions show distinct signatures of the orbital angular nodes.

  19. Two-dimensional capillary origami

    Energy Technology Data Exchange (ETDEWEB)

    Brubaker, N.D., E-mail: nbrubaker@math.arizona.edu; Lega, J., E-mail: lega@math.arizona.edu

    2016-01-08

    We describe a global approach to the problem of capillary origami that captures all unfolded equilibrium configurations in the two-dimensional setting where the drop is not required to fully wet the flexible plate. We provide bifurcation diagrams showing the level of encapsulation of each equilibrium configuration as a function of the volume of liquid that it contains, as well as plots representing the energy of each equilibrium branch. These diagrams indicate at what volume level the liquid drop ceases to be attached to the endpoints of the plate, which depends on the value of the contact angle. As in the case of pinned contact points, three different parameter regimes are identified, one of which predicts instantaneous encapsulation for small initial volumes of liquid. - Highlights: • Full solution set of the two-dimensional capillary origami problem. • Fluid does not necessarily wet the entire plate. • Global energy approach provides exact differential equations satisfied by minimizers. • Bifurcation diagrams highlight three different regimes. • Conditions for spontaneous encapsulation are identified.

  20. Two-dimensional capillary origami

    International Nuclear Information System (INIS)

    Brubaker, N.D.; Lega, J.

    2016-01-01

    We describe a global approach to the problem of capillary origami that captures all unfolded equilibrium configurations in the two-dimensional setting where the drop is not required to fully wet the flexible plate. We provide bifurcation diagrams showing the level of encapsulation of each equilibrium configuration as a function of the volume of liquid that it contains, as well as plots representing the energy of each equilibrium branch. These diagrams indicate at what volume level the liquid drop ceases to be attached to the endpoints of the plate, which depends on the value of the contact angle. As in the case of pinned contact points, three different parameter regimes are identified, one of which predicts instantaneous encapsulation for small initial volumes of liquid. - Highlights: • Full solution set of the two-dimensional capillary origami problem. • Fluid does not necessarily wet the entire plate. • Global energy approach provides exact differential equations satisfied by minimizers. • Bifurcation diagrams highlight three different regimes. • Conditions for spontaneous encapsulation are identified.

  1. Two dimensional solid state NMR

    International Nuclear Information System (INIS)

    Kentgens, A.P.M.

    1987-01-01

    This thesis illustrates, by discussing some existing and newly developed 2D solid state experiments, that two-dimensional NMR of solids is a useful and important extension of NMR techniques. Chapter 1 gives an overview of spin interactions and averaging techniques important in solid state NMR. As 2D NMR is already an established technique in solutions, only the basics of two dimensional NMR are presented in chapter 2, with an emphasis on the aspects important for solid spectra. The following chapters discuss the theoretical background and applications of specific 2D solid state experiments. An application of 2D-J resolved NMR, analogous to J-resolved spectroscopy in solutions, to natural rubber is given in chapter 3. In chapter 4 the anisotropic chemical shift is mapped out against the heteronuclear dipolar interaction to obtain information about the orientation of the shielding tensor in poly-(oxymethylene). Chapter 5 concentrates on the study of super-slow molecular motions in polymers using a variant of the 2D exchange experiment developed by us. Finally chapter 6 discusses a new experiment, 2D nutation NMR, which makes it possible to study the quadrupole interaction of half-integer spins. 230 refs.; 48 figs.; 8 tabs

  2. Two-dimensional turbulent convection

    Science.gov (United States)

    Mazzino, Andrea

    2017-11-01

    We present an overview of the most relevant, and sometimes contrasting, theoretical approaches to Rayleigh-Taylor and mean-gradient-forced Rayleigh-Bénard two-dimensional turbulence together with numerical and experimental evidences for their support. The main aim of this overview is to emphasize that, despite the different character of these two systems, especially in relation to their steadiness/unsteadiness, turbulent fluctuations are well described by the same scaling relationships originated from the Bolgiano balance. The latter states that inertial terms and buoyancy terms balance at small scales giving rise to an inverse kinetic energy cascade. The main difference with respect to the inverse energy cascade in hydrodynamic turbulence [R. H. Kraichnan, "Inertial ranges in two-dimensional turbulence," Phys. Fluids 10, 1417 (1967)] is that the rate of cascade of kinetic energy here is not constant along the inertial range of scales. Thanks to the absence of physical boundaries, the two systems here investigated turned out to be a natural physical realization of the Kraichnan scaling regime hitherto associated with the elusive "ultimate state of thermal convection" [R. H. Kraichnan, "Turbulent thermal convection at arbitrary Prandtl number," Phys. Fluids 5, 1374-1389 (1962)].

  3. Neutron coincidence counting based on time interval analysis with dead time corrected one and two dimensional Rossi-alpha distributions: an application for passive neutron waste assay

    International Nuclear Information System (INIS)

    Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R.

    1996-03-01

    The report describes a new neutron multiplicity counting method based on Rossi-alpha distributions. The report also gives the necessary dead time correction formulas for the multiplicity counting method. The method was tested numerically using a Monte Carlo simulation of pulse trains. The use of this multiplicity method in the field of waste assay is explained: it can be used to determine the amount of fissile material in a waste drum without prior knowledge of the actual detection efficiency
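
    A minimal sketch of how a one-dimensional Rossi-alpha distribution can be built from a simulated pulse train (all rates and time constants below are hypothetical, and a simple correlated-pair model stands in for full fission-chain multiplicity):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    T = 20.0            # total measurement time, s (hypothetical)
    rate_bg = 200.0     # accidental (uncorrelated) pulse rate, 1/s
    rate_pair = 100.0   # rate of correlated pulse pairs, 1/s
    tau = 1e-3          # die-away time of the correlated component, s

    # Uncorrelated background: a homogeneous Poisson process
    t_bg = rng.uniform(0.0, T, rng.poisson(rate_bg * T))

    # Correlated pairs: the second pulse trails the first by an
    # exponentially distributed delay with time constant tau
    t_first = rng.uniform(0.0, T, rng.poisson(rate_pair * T))
    t_second = t_first + rng.exponential(tau, t_first.size)

    pulses = np.sort(np.concatenate([t_bg, t_first, t_second]))

    # One-dimensional Rossi-alpha distribution: histogram of time
    # differences from every trigger pulse to all later pulses in a window
    window = 5.0 * tau
    diffs = []
    for i, t0 in enumerate(pulses):
        j = i + 1
        while j < pulses.size and pulses[j] - t0 < window:
            diffs.append(pulses[j] - t0)
            j += 1
    hist, edges = np.histogram(diffs, bins=np.linspace(0.0, window, 51))
    # hist shows a flat accidental floor plus an exp(-t/tau) excess
    ```

    The correlated excess over the flat accidental floor is what a multiplicity analysis fits; dead-time corrections, as in the report, would modify the histogram before fitting.
    
    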

  4. Probability

    CERN Document Server

    Shiryaev, A N

    1996-01-01

    This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, ergodic theory, weak convergence of probability measures, stationary stochastic processes, and the Kalman-Bucy filter. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for self-study. This new edition contains substantial revisions and updated references. The reader will find a deeper study of topics such as the distance between probability measures, metrization of weak convergence, and contiguity of probability measures. Proofs for a number of important results which were merely stated in the first edition have been added. The author has included new material on the probability of large deviations and on the central limit theorem for sums of dependent random variables.

  5. Scarred resonances and steady probability distribution in a chaotic microcavity

    International Nuclear Information System (INIS)

    Lee, Soo-Young; Rim, Sunghwan; Kim, Chil-Min; Ryu, Jung-Wan; Kwon, Tae-Yoon

    2005-01-01

    We investigate scarred resonances of a stadium-shaped chaotic microcavity. It is shown that two components with different chirality of the scarring pattern are slightly rotated in opposite ways from the underlying unstable periodic orbit, when the incident angles of the scarring pattern are close to the critical angle for total internal reflection. In addition, the correspondence of emission pattern with the scarring pattern disappears when the incident angles are much larger than the critical angle. The steady probability distribution gives a consistent explanation about these interesting phenomena and makes it possible to expect the emission pattern in the latter case

  6. A Bernstein-Von Mises Theorem for discrete probability distributions

    OpenAIRE

    Boucheron, S.; Gassiat, E.

    2008-01-01

    We investigate the asymptotic normality of the posterior distribution in the discrete setting, when model dimension increases with sample size. We consider a probability mass function θ0 on ℕ∖{0} and a sequence of truncation levels (kn)n satisfying $k_n^3 \le n \inf_{i \le k_n} \theta_0(i)$. Let θ̂ denote the maximum likelihood estimate of (θ0(i))i≤kn and let Δn(θ0) denote the kn-dimensional vector whose i-th coordinate is defined by $\sqrt{n}(\hat{\theta}_{n}(i)-\theta_{0}(i))$ for 1≤i≤kn. We check that under mild ...

  7. A two-dimensional fully analytical model with polarization effect for off-state channel potential and electric field distributions of GaN-based field-plated high electron mobility transistor

    International Nuclear Information System (INIS)

    Mao Wei; She Wei-Bo; Zhang Chao; Zhang Jin-Cheng; Zhang Jin-Feng; Liu Hong-Xia; Yang Lin-An; Zhang Kai; Zhao Sheng-Lei; Chen Yong-He; Zheng Xue-Feng; Hao Yue; Yang Cui; Ma Xiao-Hua

    2014-01-01

    In this paper, we present a two-dimensional (2D) fully analytical model with consideration of polarization effect for the channel potential and electric field distributions of the gate field-plated high electron mobility transistor (FP-HEMT) on the basis of 2D Poisson's solution. The dependences of the channel potential and electric field distributions on drain bias, polarization charge density, FP structure parameters, AlGaN/GaN material parameters, etc. are investigated. A simple and convenient approach to designing high breakdown voltage FP-HEMTs is also proposed. The validity of this model is demonstrated by comparison with the numerical simulations with Silvaco-Atlas. The method in this paper can be extended to the development of other analytical models for different device structures, such as MIS-HEMTs, multiple-FP HEMTs, slant-FP HEMTs, etc. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  8. Log-concave Probability Distributions: Theory and Statistical Testing

    DEFF Research Database (Denmark)

    An, Mark Yuing

    1996-01-01

    This paper studies the broad class of log-concave probability distributions that arise in economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test one of the two implications of log-concavity: increasing hazard rates and the new-is-better-than-used (NBU) property. The tests for increasing hazard rates are based on normalized spacing of the sample order statistics. The tests for the NBU property fall into the category of Hoeffding's U-statistics...

  9. Micromachined two dimensional resistor arrays for determination of gas parameters

    NARCIS (Netherlands)

    van Baar, J.J.J.; Verwey, Willem B.; Dijkstra, Mindert; Dijkstra, Marcel; Wiegerink, Remco J.; Lammerink, Theodorus S.J.; Krijnen, Gijsbertus J.M.; Elwenspoek, Michael Curt

    A resistive sensor array is presented for two dimensional temperature distribution measurements in a micromachined flow channel. This allows simultaneous measurement of flow velocity and fluid parameters, like thermal conductivity, diffusion coefficient and viscosity. More general advantages of

  10. Landslide Probability Assessment by the Derived Distributions Technique

    Science.gov (United States)

    Muñoz, E.; Ochoa, A.; Martínez, H.

    2012-12-01

    Landslides are potentially disastrous events that bring along human and economic losses, especially in cities where accelerated and unorganized growth leads to settlements on steep and potentially unstable areas. Among the main causes of landslides are geological, geomorphological, geotechnical, climatological and hydrological conditions, and anthropic intervention. This paper studies landslides triggered by rain, commonly known as "soil-slip", which are characterized by a superficial failure surface (typically between 1 and 1.5 m deep) parallel to the slope face and by being triggered by intense and/or sustained periods of rain. This type of landslide is caused by changes in pore pressure produced by a decrease in suction when a humid front enters, as a consequence of the infiltration initiated by rain and ruled by the hydraulic characteristics of the soil. Failure occurs when this front reaches a critical depth and the shear strength of the soil is not enough to guarantee the stability of the mass. Critical rainfall thresholds in combination with a slope stability model are widely used for assessing landslide probability. In this paper we present a model for the estimation of the occurrence of landslides based on the derived distributions technique. Since the works of Eagleson in the 1970s, the derived distributions technique has been widely used in hydrology to estimate the probability of occurrence of extreme flows. The model estimates the probability density function (pdf) of the Factor of Safety (FOS) from the statistical behavior of the rainfall process and some slope parameters. The stochastic character of the rainfall is transformed by means of a deterministic failure model into the FOS pdf. Exceedance probability and return period estimation is then straightforward. The rainfall process is modeled as a Rectangular Pulses Poisson Process (RPPP) with independent exponential pdfs for mean intensity and duration of the storms.
The Philip infiltration model
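
    The derived-distributions workflow described above can be sketched with a Monte Carlo stand-in (assumptions: a generic infinite-slope stability formula with hypothetical soil and slope parameters replaces the paper's Philip-infiltration-based failure model):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Rectangular Pulses Poisson Process: independent exponential pdfs
    # for mean storm intensity and storm duration (hypothetical values)
    intensity = rng.exponential(10.0, n)    # mm/h
    duration = rng.exponential(6.0, n)      # h
    depth = intensity * duration / 1000.0   # storm depth, m

    # Simplified infinite-slope stability model (an assumption here; the
    # paper couples a Philip infiltration model instead)
    c, phi = 5e3, np.radians(30)        # cohesion (Pa), friction angle
    gamma, gamma_w = 18e3, 9.81e3       # soil / water unit weight (N/m^3)
    z, beta, porosity = 1.2, np.radians(35), 0.4

    m = np.clip(depth / (porosity * z), 0.0, 1.0)   # saturated fraction
    fos = ((c + (gamma * z * np.cos(beta) ** 2 - gamma_w * m * z) * np.tan(phi))
           / (gamma * z * np.sin(beta) * np.cos(beta)))

    p_fail = np.mean(fos < 1.0)   # exceedance probability P(FOS < 1)
    ```

    The empirical histogram of `fos` plays the role of the derived FOS pdf; the return period follows from `p_fail` and the storm arrival rate.
    
    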

  11. Diachronic changes in word probability distributions in daily press

    Directory of Open Access Journals (Sweden)

    Stanković Jelena

    2006-01-01

    Full Text Available Changes in probability distributions of individual words and word types were investigated within two samples of daily press spanning fifty years. The first sample was derived from the Corpus of Serbian Language (CSL /Kostić, Đ., 2001/), which covers the period between 1945 and 1957, and the other from the Ebart Media Documentation (EBR), which was compiled from seven daily newspapers and five weekly magazines from 2002 and 2003. Each sample consisted of about 1 million words. The obtained results indicate that nouns and adjectives were more frequent in the CSL, while verbs and prepositions are more frequent in the EBR sample, suggesting a decrease of sentence length in the last five decades. Conspicuous changes in the probability distribution of individual words were observed for nouns and adjectives, while minimal or no changes were observed for verbs and prepositions. Such an outcome suggests that nouns and adjectives are most susceptible to diachronic changes, while verbs and prepositions appear to be resistant to such changes.

  12. Probability Distribution and Projected Trends of Daily Precipitation in China

    Institute of Scientific and Technical Information of China (English)

    CAO; Li-Ge; ZHONG; Jun; SU; Bu-Da; ZHAI; Jian-Qing; Macro; GEMMER

    2013-01-01

    Based on observed daily precipitation data of 540 stations and 3,839 gridded data from the high-resolution regional climate model COSMO-Climate Limited-area Modeling (CCLM) for 1961–2000, the simulation ability of CCLM on daily precipitation in China is examined, and the variation of the daily precipitation distribution pattern is revealed. By applying probability distribution and extreme value theory to the projected daily precipitation (2011–2050) under the SRES A1B scenario with CCLM, trends of daily precipitation series and daily precipitation extremes are analyzed. Results show that, except for the western Qinghai-Tibetan Plateau and South China, distribution patterns of the kurtosis and skewness calculated from the simulated and observed series are consistent with each other; their spatial correlation coefficients are above 0.75. The CCLM can well capture the distribution characteristics of daily precipitation over China. It is projected that in some parts of the Jianghuai region, central-eastern Northeast China and Inner Mongolia, the kurtosis and skewness will increase significantly, and precipitation extremes will increase during 2011–2050. The projected increase of maximum daily rainfall and of the longest non-precipitation period during the flood season in the aforementioned regions also shows increasing trends of droughts and floods in the next 40 years.
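
    The kurtosis and skewness statistics used above can be sketched on a synthetic zero-inflated gamma precipitation series (the series and all its parameters are purely illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    ndays = 14_600   # ~40 years of daily values (hypothetical)

    # Stand-in daily precipitation (mm): dry days plus
    # gamma-distributed wet-day amounts
    wet = rng.random(ndays) < 0.3
    precip = np.where(wet, rng.gamma(0.8, 12.0, ndays), 0.0)

    def skewness(x):
        z = (x - x.mean()) / x.std()
        return float(np.mean(z ** 3))

    def kurtosis(x):
        z = (x - x.mean()) / x.std()
        return float(np.mean(z ** 4))   # non-excess convention

    skew, kurt = skewness(precip), kurtosis(precip)
    ```

    For such a right-skewed, zero-inflated series the skewness is strongly positive and the kurtosis well above the Gaussian value of 3, which is the kind of shape change the study tracks spatially.
    
    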

  13. Non-Gaussian probability distributions of solar wind fluctuations

    Directory of Open Access Journals (Sweden)

    E. Marsch

    The probability distributions of field differences ∆x(τ) = x(t+τ) − x(t), where the variable x(t) may denote any solar wind scalar field or vector field component at time t, have been calculated from time series of Helios data obtained in 1976 at heliocentric distances near 0.3 AU. It is found that for comparatively long time lags τ, ranging from a few hours to 1 day, the differences are normally distributed according to a Gaussian. For shorter time lags, of less than ten minutes, significant changes in shape are observed. The distributions are often spikier and narrower than the equivalent Gaussian distribution with the same standard deviation, and they are enhanced for large, reduced for intermediate and enhanced for very small values of ∆x. This result is in accordance with fluid observations and numerical simulations. Hence statistical properties are dominated at small scale τ by large fluctuation amplitudes that are sparsely distributed, which is direct evidence for spatial intermittency of the fluctuations. This is in agreement with results from earlier analyses of the structure functions of ∆x. The non-Gaussian features are differently developed for the various types of fluctuations. The relevance of these observations to the interpretation and understanding of the nature of solar wind magnetohydrodynamic (MHD) turbulence is pointed out, and contact is made with existing theoretical concepts of intermittency in fluid turbulence.
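
    The increment analysis ∆x(τ) = x(t+τ) − x(t) can be illustrated on a synthetic intermittent series (a stand-in for Helios data; the log-normal amplitude modulation below is an assumption used only to produce intermittency):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 2 ** 17

    # Synthetic intermittent series: Gaussian noise modulated by a
    # slowly varying log-normal amplitude, then integrated
    slow = np.convolve(rng.normal(size=n),
                       np.ones(512) / np.sqrt(512), mode="same")
    x = np.cumsum(np.exp(slow) * rng.normal(size=n))

    def excess_kurtosis(d):
        z = (d - d.mean()) / d.std()
        return float(np.mean(z ** 4) - 3.0)

    # Increments Delta_x(tau) at a short and a long lag
    k_short = excess_kurtosis(x[1:] - x[:-1])
    k_long = excess_kurtosis(x[4096:] - x[:-4096])
    # Short-lag increments are spiky and heavy-tailed (non-Gaussian);
    # long-lag increments relax toward a Gaussian, as in the paper
    ```

    The excess kurtosis of the increment PDF decreasing with lag is precisely the signature of intermittency described in the abstract.
    
    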

  14. Characterizing single-molecule FRET dynamics with probability distribution analysis.

    Science.gov (United States)

    Santoso, Yusdi; Torella, Joseph P; Kapanidis, Achillefs N

    2010-07-12

    Probability distribution analysis (PDA) is a recently developed statistical tool for predicting the shapes of single-molecule fluorescence resonance energy transfer (smFRET) histograms, which allows the identification of single or multiple static molecular species within a single histogram. We used a generalized PDA method to predict the shapes of FRET histograms for molecules interconverting dynamically between multiple states. This method is tested on a series of model systems, including both static DNA fragments and dynamic DNA hairpins. By fitting the shape of this expected distribution to experimental data, the timescale of hairpin conformational fluctuations can be recovered, in good agreement with earlier published results obtained using different techniques. This method is also applied to studying the conformational fluctuations in the unliganded Klenow fragment (KF) of Escherichia coli DNA polymerase I, which allows both confirmation of the consistency of a simple, two-state kinetic model with the observed smFRET distribution of unliganded KF and extraction of a millisecond fluctuation timescale, in good agreement with rates reported elsewhere. We expect this method to be useful in extracting rates from processes exhibiting dynamic FRET, and in hypothesis-testing models of conformational dynamics against experimental data.

  15. Subspace Learning via Local Probability Distribution for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Huiwu Luo

    2015-01-01

    The computational procedure of hyperspectral image (HSI) analysis is extremely complex, not only due to the high dimensional information, but also due to the highly correlated data structure, so effective processing and analysis of HSI face many difficulties. Dimensionality reduction has proven to be a powerful tool for high dimensional data analysis. Local Fisher's linear discriminant analysis (LFDA) is an effective method for HSI processing. In this paper, a novel approach, called PD-LFDA, is proposed to overcome the weakness of LFDA. PD-LFDA emphasizes the probability distribution (PD) in LFDA, where the maximum distance is replaced with local variance for the construction of the weight matrix and the class prior probability is applied to compute the affinity matrix. The proposed approach increases the discriminant ability of the transformed features in low dimensional space. Experimental results on the Indian Pines 1992 data indicate that the proposed approach significantly outperforms the traditional alternatives.

  16. Measuring Robustness of Timetables at Stations using a Probability Distribution

    DEFF Research Database (Denmark)

    Jensen, Lars Wittrup; Landex, Alex

    Stations are often the limiting capacity factor in a railway network. This induces interdependencies, especially at at-grade junctions, causing network effects. This paper presents three traditional methods that can be used to measure the complexity of a station, indicating the robustness of the station's infrastructure layout and plan of operation. However, these three methods do not take the timetable at the station into consideration. Therefore, two methods are introduced in this paper, making it possible to estimate the robustness of different timetables at a station or of different infrastructure layouts given a timetable. These two methods provide different precision at the expense of a more complex calculation process. The advanced and more precise method is based on a probability distribution that can describe the expected delay between two trains as a function of the buffer time...

  17. Equilibrium: two-dimensional configurations

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    In Chapter 6, the problem of toroidal force balance is addressed in the simplest, nontrivial two-dimensional geometry, that of an axisymmetric torus. A derivation is presented of the Grad-Shafranov equation, the basic equation describing axisymmetric toroidal equilibrium. The solutions to this equation provide a complete description of ideal MHD equilibria: radial pressure balance, toroidal force balance, equilibrium Beta limits, rotational transform, shear, magnetic wall, etc. A wide range of configurations are accurately modeled by the Grad-Shafranov equation. Among them are all types of tokamaks, the spheromak, the reversed field pinch, and toroidal multipoles. An important aspect of the analysis is the use of asymptotic expansions, with an inverse aspect ratio serving as the expansion parameter. In addition, an equation similar to the Grad-Shafranov equation, but for helically symmetric equilibria, is presented. This equation represents the leading-order description of low-Beta and high-Beta stellarators, heliacs, and the Elmo bumpy torus. The solutions all correspond to infinitely long straight helices. Bending such a configuration into a torus requires a full three-dimensional calculation and is discussed in Chapter 7.
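
    For reference, the Grad-Shafranov equation discussed above, in its standard form for the poloidal flux function ψ(R, Z):

    ```latex
    R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\frac{\partial \psi}{\partial R}\right)
    + \frac{\partial^2 \psi}{\partial Z^2}
    = -\mu_0 R^2 \frac{dp}{d\psi} - F\frac{dF}{d\psi},
    \qquad F(\psi) = R\,B_\phi,
    ```

    where p(ψ) is the pressure profile and F(ψ) the poloidal current function; specifying these two free functions and boundary conditions determines the axisymmetric equilibrium.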

  18. Two-Dimensional Distributed Velocity Collision Avoidance

    Science.gov (United States)

    2014-02-11


  19. Two-Dimensional Homogeneous Fermi Gases

    Science.gov (United States)

    Hueck, Klaus; Luick, Niclas; Sobirey, Lennart; Siegl, Jonas; Lompe, Thomas; Moritz, Henning

    2018-02-01

    We report on the experimental realization of homogeneous two-dimensional (2D) Fermi gases trapped in a box potential. In contrast to harmonically trapped gases, these homogeneous 2D systems are ideally suited to probe local as well as nonlocal properties of strongly interacting many-body systems. As a first benchmark experiment, we use a local probe to measure the density of a noninteracting 2D Fermi gas as a function of the chemical potential and find excellent agreement with the corresponding equation of state. We then perform matter wave focusing to extract the momentum distribution of the system and directly observe Pauli blocking in a near unity occupation of momentum states. Finally, we measure the momentum distribution of an interacting homogeneous 2D gas in the crossover between attractively interacting fermions and bosonic dimers.

  20. Optimizing separations in online comprehensive two-dimensional liquid chromatography.

    Science.gov (United States)

    Pirok, Bob W J; Gargano, Andrea F G; Schoenmakers, Peter J

    2018-01-01

    Online comprehensive two-dimensional liquid chromatography has become an attractive option for the analysis of complex nonvolatile samples found in various fields (e.g. environmental studies, food, life, and polymer sciences). Two-dimensional liquid chromatography complements the highly popular hyphenated systems that combine liquid chromatography with mass spectrometry. Two-dimensional liquid chromatography is also applied to the analysis of samples that are not compatible with mass spectrometry (e.g. high-molecular-weight polymers), providing important information on the distribution of the sample components along chemical dimensions (molecular weight, charge, lipophilicity, stereochemistry, etc.). Also, in comparison with conventional one-dimensional liquid chromatography, two-dimensional liquid chromatography provides a greater separation power (peak capacity). Because of the additional selectivity and higher peak capacity, the combination of two-dimensional liquid chromatography with mass spectrometry allows for simpler mixtures of compounds to be introduced in the ion source at any given time, improving quantitative analysis by reducing matrix effects. In this review, we summarize the rationale and principles of two-dimensional liquid chromatography experiments, describe advantages and disadvantages of combining different selectivities and discuss strategies to improve the quality of two-dimensional liquid chromatography separations. © 2017 The Authors. Journal of Separation Science published by WILEY-VCH Verlag GmbH & Co. KGaA.

  1. Joint Probability Distributions for a Class of Non-Markovian Processes

    OpenAIRE

    Baule, A.; Friedrich, R.

    2004-01-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H.C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single time probability distributions to the case of N-time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fr...

  2. Multiscale probability distribution of pressure fluctuations in fluidized beds

    International Nuclear Information System (INIS)

    Ghasemi, Fatemeh; Sahimi, Muhammad; Reza Rahimi Tabar, M; Peinke, Joachim

    2012-01-01

    Analysis of flow in fluidized beds, a common chemical reactor, is of much current interest due to its fundamental as well as industrial importance. Experimental data for the successive increments of the pressure fluctuations time series in a fluidized bed are analyzed by computing a multiscale probability density function (PDF) of the increments. The results demonstrate the evolution of the shape of the PDF from the short to long time scales. The deformation of the PDF across time scales may be modeled by the log-normal cascade model. The results are also in contrast to the previously proposed PDFs for the pressure fluctuations that include a Gaussian distribution and a PDF with a power-law tail. To understand better the properties of the pressure fluctuations, we also construct the shuffled and surrogate time series for the data and analyze them with the same method. It turns out that long-range correlations play an important role in the structure of the time series that represent the pressure fluctuation. (paper)

  3. Two-dimensional simulation of sintering process

    International Nuclear Information System (INIS)

    Vasconcelos, Vanderley de; Pinto, Lucio Carlos Martins; Vasconcelos, Wander L.

    1996-01-01

    The results of two-dimensional simulations are directly applied to systems in which one of the dimensions is much smaller than the others, and to sections of three dimensional models. Moreover, these simulations are the first step of the analysis of more complex three-dimensional systems. In this work, two basic features of the sintering process are studied: the types of particle size distributions related to the powder production processes and the evolution of geometric parameters of the resultant microstructures during the solid-state sintering. Random packing of equal spheres is considered in the sintering simulation. The packing algorithm does not take into account the interactive forces between the particles. The used sintering algorithm causes the densification of the particle set. (author)
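
    The packing step can be sketched as random sequential addition of equal disks in 2D (a minimal sketch; box size, radius, and attempt budget are hypothetical, and no interparticle forces are modeled, matching the abstract):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    L, r = 10.0, 0.5      # box side and particle radius (hypothetical)
    centers = []

    # Random sequential addition: drop candidate centers uniformly and
    # keep only those that do not overlap any accepted particle
    for _ in range(5000):
        p = rng.uniform(r, L - r, 2)
        if all(np.hypot(*(p - q)) >= 2 * r for q in centers):
            centers.append(p)

    packing_fraction = len(centers) * np.pi * r ** 2 / L ** 2
    ```

    With a finite attempt budget the fraction stays below the 2D random-sequential-addition jamming limit (about 0.547); such a configuration would then serve as the initial state for a sintering (densification) algorithm.
    
    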

  4. Performance Probability Distributions for Sediment Control Best Management Practices

    Science.gov (United States)

    Ferrell, L.; Beighley, R.; Walsh, K.

    2007-12-01

    Controlling soil erosion and sediment transport can be a significant challenge during the construction process due to the extent and conditions of bare, disturbed soils. Best Management Practices (BMPs) are used as the framework for the design of sediment discharge prevention systems in stormwater pollution prevention plans, which are typically required for construction sites. This research focuses on commonly-used BMP systems for perimeter control of sediment export: silt fences and fiber rolls. Although these systems are widely used, the physical and engineering parameters describing their performance are not well understood. Performance expectations are based on manufacturer results, but due to the dynamic conditions that exist on a construction site, performance expectations are not always achievable in the field. Based on experimental results, product performance is shown to be highly variable. Experiments using the same installation procedures show inconsistent sediment removal performances ranging from >85 percent to zero. The goal of this research is to improve the determination of off-site sediment yield based on probabilistic performance results of perimeter control BMPs. BMPs are evaluated in the Soil Erosion Research Laboratory (SERL) in the Civil and Environmental Engineering department at San Diego State University. SERL experiments are performed on a 3-m by 10-m tilting soil bed with a soil depth of 0.5 meters and a slope of 33 percent. The simulated storm event consists of 17 mm/hr for 20 minutes followed by 51 mm/hr for 30 minutes. The storm event is based on an ASTM design storm intended to simulate BMP failures. BMP performance is assessed based on experiments where BMPs are installed per manufacturer specifications, less than optimal installations, and no treatment conditions. Preliminary results from 30 experiments are presented and used to develop probability distributions for BMP sediment removal efficiencies.
The results are then combined with

  5. Tunable diode laser absorption spectroscopy-based tomography system for on-line monitoring of two-dimensional distributions of temperature and H{sub 2}O mole fraction

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Lijun, E-mail: lijunxu@buaa.edu.cn; Liu, Chang; Jing, Wenyang; Cao, Zhang [School of Instrument Science and Opto-Electronic Engineering, Beihang University, Beijing 100191 (China); Ministry of Education’s Key Laboratory of Precision Opto-Mechatronics Technology, Beijing 100191 (China); Xue, Xin; Lin, Yuzhen [School of Energy and Power Engineering, Beihang University, Beijing 100191 (China)

    2016-01-15

    To monitor two-dimensional (2D) distributions of temperature and H{sub 2}O mole fraction, an on-line tomography system based on tunable diode laser absorption spectroscopy (TDLAS) was developed. To the best of the authors’ knowledge, this is the first report on a multi-view TDLAS-based system for simultaneous tomographic visualization of temperature and H{sub 2}O mole fraction in real time. The system consists of two distributed feedback (DFB) laser diodes, a tomographic sensor, electronic circuits, and a computer. The central frequencies of the two DFB laser diodes are at 7444.36 cm{sup −1} (1343.3 nm) and 7185.6 cm{sup −1} (1391.67 nm), respectively. The tomographic sensor is used to generate fan-beam illumination from five views and to produce 60 ray measurements. The electronic circuits not only provide stable temperature and precise current controlling signals for the laser diodes but also accurately sample the transmitted laser intensities and extract integrated absorbances in real time. Finally, the integrated absorbances are transferred to the computer, in which the 2D distributions of temperature and H{sub 2}O mole fraction are reconstructed by using a modified Landweber algorithm. In the experiments, the TDLAS-based tomography system was validated by using asymmetric premixed flames with fixed and time-varying equivalence ratios, respectively. The results demonstrate that the system is able to reconstruct the profiles of the 2D distributions of temperature and H{sub 2}O mole fraction of the flame and effectively capture the dynamics of the combustion process, which exhibits good potential for flame monitoring and on-line combustion diagnosis.
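
    The reconstruction step can be sketched with a plain Landweber iteration on a toy linear system (the paper uses a modified variant with a real five-view fan-beam geometry; the random projection matrix below is a hypothetical stand-in):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Toy linear tomography problem: A maps a discretized field x to
    # 60 ray-integral measurements b, echoing the 60-ray sensor above
    n_pixels, n_rays = 64, 60
    A = (rng.uniform(0, 1, (n_rays, n_pixels))
         * (rng.uniform(size=(n_rays, n_pixels)) < 0.2))
    x_true = rng.uniform(0.5, 1.5, n_pixels)
    b = A @ x_true

    # Landweber iteration: x_{k+1} = x_k + lam * A^T (b - A x_k),
    # with a non-negativity clamp (fields are physically non-negative)
    lam = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n_pixels)
    for _ in range(3000):
        x = np.maximum(x + lam * A.T @ (b - A @ x), 0.0)

    residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
    ```

    The step size is bounded by the inverse squared spectral norm of A to guarantee convergence; the "modified" algorithm in the paper adjusts this basic scheme.
    
    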

  6. Optimal design of unit hydrographs using probability distribution and ...

    Indian Academy of Sciences (India)


    optimization formulation is solved using binary-coded genetic algorithms. The number of variables to ... Unit hydrograph; rainfall-runoff; hydrology; genetic algorithms; optimization; probability ... Application of the model. Data derived from the ...

  7. Topology optimization of two-dimensional waveguides

    DEFF Research Database (Denmark)

    Jensen, Jakob Søndergaard; Sigmund, Ole

    2003-01-01

    In this work we use the method of topology optimization to design two-dimensional waveguides with low transmission loss.

  8. Spin dynamics in a two-dimensional quantum gas

    DEFF Research Database (Denmark)

    Pedersen, Poul Lindholm; Gajdacz, Miroslav; Deuretzbacher, Frank

    2014-01-01

    We have investigated spin dynamics in a two-dimensional quantum gas. Through spin-changing collisions, two clouds with opposite spin orientations are spontaneously created in a Bose-Einstein condensate. After ballistic expansion, both clouds acquire ring-shaped density distributions...

  9. Tools for Bramwell-Holdsworth-Pinton Probability Distribution

    Directory of Open Access Journals (Sweden)

    Mirela Danubianu

    2009-01-01

    This paper is a synthesis of a range of papers presented at various conferences related to the Bramwell-Holdsworth-Pinton distribution. S. T. Bramwell, P. C. W. Holdsworth, and J. F. Pinton introduced a new non-parametric distribution (called BHP) after studying some magnetization problems in 2D. The probability density function of the distribution can be approximated as a modified GFT (Gumbel-Fisher-Tippett) distribution.
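
    For reference, the generalized Gumbel family commonly used to approximate the BHP density takes the form (a standard parameterization; the value a ≈ π/2 is the one usually quoted in the BHP literature, with b and s fixed by normalizing to zero mean and unit variance):

    ```latex
    \Pi_a(y) \;\approx\; \frac{a^{a}\,b}{\Gamma(a)}\,
    \exp\!\big[-a\,\big(z + e^{-z}\big)\big],
    \qquad z = b\,(y - s),\quad a \approx \tfrac{\pi}{2},
    ```

    which reduces to the ordinary Gumbel (Fisher-Tippett) density for a = 1.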

  10. Tile-based Fisher ratio analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC × GC-TOFMS) data using a null distribution approach.

    Science.gov (United States)

    Parsons, Brendon A; Marney, Luke C; Siegler, W Christopher; Hoggard, Jamin C; Wright, Bob W; Synovec, Robert E

    2015-04-07

    Comprehensive two-dimensional (2D) gas chromatography coupled with time-of-flight mass spectrometry (GC × GC-TOFMS) is a versatile instrumental platform capable of collecting highly informative, yet highly complex, chemical data for a variety of samples. Fisher-ratio (F-ratio) analysis applied to the supervised comparison of sample classes algorithmically reduces complex GC × GC-TOFMS data sets to find class distinguishing chemical features. F-ratio analysis, using a tile-based algorithm, significantly reduces the adverse effects of chromatographic misalignment and spurious covariance of the detected signal, enhancing the discovery of true positives while simultaneously reducing the likelihood of detecting false positives. Herein, we report a study using tile-based F-ratio analysis whereby four non-native analytes were spiked into diesel fuel at several concentrations ranging from 0 to 100 ppm. Spike level comparisons were performed in two regimes: comparing the spiked samples to the nonspiked fuel matrix and to each other at relative concentration factors of two. Redundant hits were algorithmically removed by refocusing the tiled results onto the original high resolution pixel level data. To objectively limit the tile-based F-ratio results to only features which are statistically likely to be true positives, we developed a combinatorial technique using null class comparisons, called null distribution analysis, by which we determined a statistically defensible F-ratio cutoff for the analysis of the hit list. After applying null distribution analysis, spiked analytes were reliably discovered at ∼1 to ∼10 ppm (∼5 to ∼50 pg using a 200:1 split), depending upon the degree of mass spectral selectivity and 2D chromatographic resolution, with minimal occurrence of false positives. To place the relevance of this work among other methods in this field, results are compared to those for pixel and peak table-based approaches.
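
    The tile-based F-ratio plus null-distribution idea can be sketched on synthetic data (the tile grid, class sizes, spike location, and the within-class split used for the null are all hypothetical simplifications of the published method):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic stand-in for tiled GC x GC-TOFMS data: 6 samples per
    # class on an 8 x 8 grid of tiles; class B is spiked in tile (2, 3)
    n, grid = 6, (8, 8)
    data = rng.normal(10.0, 1.0, (2 * n,) + grid)
    data[n:, 2, 3] += 8.0                  # analyte present only in class B
    labels = np.array([0] * n + [1] * n)

    def fisher_ratio(x, lab):
        """Per-tile F-ratio: between-class over pooled within-class variance."""
        a, b = x[lab == 0], x[lab == 1]
        between = (a.mean(axis=0) - b.mean(axis=0)) ** 2 / 2.0
        within = (a.var(axis=0, ddof=1) + b.var(axis=0, ddof=1)) / 2.0
        return between / within

    f = fisher_ratio(data, labels)

    # Null distribution analysis: F-ratios from random splits of a single
    # class, where no true difference exists, set the hit cutoff
    null = []
    for _ in range(200):
        fake = np.zeros(n, dtype=int)
        fake[rng.permutation(n)[: n // 2]] = 1
        null.append(fisher_ratio(data[:n], fake).ravel())
    cutoff = np.quantile(np.concatenate(null), 0.99)

    hits = np.argwhere(f > cutoff)         # tiles called true positives
    ```

    Only the spiked tile should exceed the null-derived cutoff, illustrating how the null distribution limits the hit list to statistically defensible true positives.
    
    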

  11. Influence of nucleon density distribution in nucleon emission probability

    International Nuclear Information System (INIS)

    Paul, Sabyasachi; Nandy, Maitreyee; Mohanty, A.K.; Sarkar, P.K.; Gambhir, Y.K.

    2014-01-01

    Different decay modes are observed in heavy ion reactions at low to intermediate energies. It is interesting to study the total neutron emission in these reactions, to which all or many of these decay modes may contribute. In an attempt to understand the importance of the mean field and the entrance channel angular momentum, we study their influence on the emission probability of nucleons in heavy ion reactions. This study owes its significance to the fact that once the populations of the different states are determined, the emission probability governs the double differential neutron yield.

  12. Turbulent equipartitions in two dimensional drift convection

    International Nuclear Information System (INIS)

    Isichenko, M.B.; Yankov, V.V.

    1995-01-01

    Unlike the thermodynamic equipartition of energy in conservative systems, turbulent equipartitions (TEP) describe strongly non-equilibrium systems such as turbulent plasmas. In turbulent systems, energy is no longer a good invariant, but one can utilize the conservation of other quantities, such as adiabatic invariants, frozen-in magnetic flux, entropy, or combinations thereof, in order to derive new, turbulent quasi-equilibria. These TEP equilibria assume various forms, but in general they sustain spatially inhomogeneous distributions of the usual thermodynamic quantities such as density or temperature. This mechanism explains the effects of particle and energy pinch in tokamaks. The analysis of the relaxed states caused by turbulent mixing is based on the existence of Lagrangian invariants (quantities constant along fluid-particle or other orbits). A turbulent equipartition corresponds to a spatially uniform distribution of the relevant Lagrangian invariants. The existence of such turbulent equilibria is demonstrated in a simple model of a two-dimensional electrostatically turbulent plasma in an inhomogeneous magnetic field. The turbulence is prescribed, and the turbulent transport is assumed to be much stronger than the classical collisional transport. The simplicity of the model makes it possible to derive the equations describing the relaxation to the TEP state in several limits.

  13. Investigation of Probability Distributions Using Dice Rolling Simulation

    Science.gov (United States)

    Lukac, Stanislav; Engel, Radovan

    2010-01-01

    Dice are considered one of the oldest gambling devices and thus many mathematicians have been interested in various dice gambling games in the past. Dice have been used to teach probability, and dice rolls can be effectively simulated using technology. The National Council of Teachers of Mathematics (NCTM) recommends that teachers use simulations…

  14. Joint probability distributions for a class of non-Markovian processes.

    Science.gov (United States)

    Baule, A; Friedrich, R

    2005-02-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H. C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single-time probability distributions to the case of N -time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fractional time derivatives reflecting the non-Markovian character of the process.

  15. Influence of dose distribution homogeneity on the tumor control probability in heavy-ion radiotherapy

    International Nuclear Information System (INIS)

    Wen Xiaoqiong; Li Qiang; Zhou Guangming; Li Wenjian; Wei Zengquan

    2001-01-01

    To estimate the effect of a non-uniform dose distribution on clinical treatment outcomes, the influence of dose distribution homogeneity on the tumor control probability was investigated. Based on a formula deduced previously for the survival fraction of cells irradiated by a non-uniform heavy-ion irradiation field, and on the theory of tumor control probability, the tumor control probability was calculated for a tumor model exposed to dose distributions of varying homogeneity. The results show that, for the same total dose, the tumor control probability decreases as the dose distribution homogeneity worsens. In clinical treatment, the dose distribution homogeneity should be better than 95%.
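A standard way to see this effect is the Poisson TCP model, TCP = exp(-N_s), where N_s is the expected number of surviving clonogens summed over voxels. A sketch with an assumed exponential survival curve SF(D) = exp(-alpha·D); the clonogen number, alpha, and dose values are illustrative, not taken from the paper:

```python
import numpy as np

def tcp(voxel_doses, n0=1e7, alpha=0.35):
    """Poisson tumor control probability for n0 clonogens spread evenly
    over the voxels, with survival fraction SF(D) = exp(-alpha * D)."""
    d = np.asarray(voxel_doses, dtype=float)
    surviving = (n0 / d.size) * np.exp(-alpha * d)  # expected survivors per voxel
    return np.exp(-surviving.sum())

uniform = tcp([60.0] * 10)               # perfectly homogeneous delivery
hetero = tcp([66.0] * 5 + [54.0] * 5)    # same mean dose, worse homogeneity
```

Because the survival fraction is exponential in dose, the cold voxels dominate the sum of survivors, so heterogeneity at fixed mean dose always lowers TCP — the trend reported above.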

  16. The Impact of an Instructional Intervention Designed to Support Development of Stochastic Understanding of Probability Distribution

    Science.gov (United States)

    Conant, Darcy Lynn

    2013-01-01

    Stochastic understanding of probability distribution undergirds development of conceptual connections between probability and statistics and supports development of a principled understanding of statistical inference. This study investigated the impact of an instructional course intervention designed to support development of stochastic…

  17. Modified Stieltjes Transform and Generalized Convolutions of Probability Distributions

    Directory of Open Access Journals (Sweden)

    Lev B. Klebanov

    2018-01-01

    The classical Stieltjes transform is modified in such a way as to generalize both Stieltjes and Fourier transforms. This transform allows the introduction of new classes of commutative and non-commutative generalized convolutions. A particular case of such a convolution for degenerate distributions appears to be the Wigner semicircle distribution.

  18. Some explicit expressions for the probability distribution of force ...

    Indian Academy of Sciences (India)

    96: Art. No. 098001. Tighe B P, Socolar J E S, Schaeffer D G, Mitchener W G, Huber M L 2005 Force distributions in a triangular lattice of rigid bars. Phys. Rev. E 72: Art. No. 031306. Vargas W L, Murcia J C, Palacio L E, Dominguez D M 2003 Fractional diffusion model for force distribution in static granular media. Phys. Rev.

  19. Transient two-dimensional flow in porous media

    International Nuclear Information System (INIS)

    Sharpe, L. Jr.

    1979-01-01

    The transient flow of an isothermal ideal gas from the cavity formed by an underground nuclear explosion is investigated. A two-dimensional finite element method is used in analyzing the gas flow. Numerical results of the pressure distribution are obtained for both the stemming column and the surrounding porous media

  20. Probability Distribution Function of the Upper Equatorial Pacific Current Speeds

    National Research Council Canada - National Science Library

    Chu, Peter C

    2005-01-01

    ...), constructed from hourly ADCP data (1990-2007) at six stations for the Tropical Atmosphere Ocean project satisfies the two-parameter Weibull distribution reasonably well with different characteristics between El Nino and La Nina events...

  1. Piezoelectricity in Two-Dimensional Materials

    KAUST Repository

    Wu, Tao; Zhang, Hua

    2015-01-01

    Powering up 2D materials: Recent experimental studies confirmed the existence of piezoelectricity - the conversion of mechanical stress into electricity - in two-dimensional single-layer MoS2 nanosheets. The results represent a milestone towards

  2. Construction of two-dimensional quantum chromodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Klimek, S.; Kondracki, W.

    1987-12-01

    We present a sketch of the construction of the functional measure for the SU(2) quantum chromodynamics with one generation of fermions in two-dimensional space-time. The method is based on a detailed analysis of Wilson loops.

  3. Development of Two-Dimensional NMR

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education; Volume 20; Issue 11. Development of Two-Dimensional NMR: Structure Determination of Biomolecules in Solution. Anil Kumar. General Article Volume 20 Issue 11 November 2015 pp 995-1002 ...

  4. Supervised learning of probability distributions by neural networks

    Science.gov (United States)

    Baum, Eric B.; Wilczek, Frank

    1988-01-01

    Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.
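The key change described here is following the gradient of the log-likelihood (cross-entropy) rather than of the squared error. For a single sigmoid output this cancels the sigmoid-derivative factor from the gradient, so learning does not stall when the unit saturates. A hypothetical one-weight sketch (not the authors' notation):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad_squared_error(w, x, t):
    # dE/dw for E = (y - t)^2 / 2 with y = sigmoid(w * x)
    y = sigmoid(w * x)
    return (y - t) * y * (1.0 - y) * x

def grad_log_likelihood(w, x, t):
    # d(-log L)/dw for L = y^t * (1 - y)^(1 - t): the y(1-y) factor cancels
    y = sigmoid(w * x)
    return (y - t) * x

# A saturated unit (large negative w, target t = 1): the squared-error
# gradient nearly vanishes, while the log-likelihood gradient stays large.
g_se = grad_squared_error(-10.0, 1.0, 1.0)
g_ll = grad_log_likelihood(-10.0, 1.0, 1.0)
```

This gradient difference is the usual argument for why the probabilistic (log-likelihood) criterion trains faster on badly mispredicted examples.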

  5. Phase transitions in two-dimensional systems

    International Nuclear Information System (INIS)

    Salinas, S.R.A.

    1983-01-01

    Some experiments using synchrotron radiation beams to characterize solid-liquid (melting) transitions and commensurate solid-incommensurate solid transitions in two-dimensional systems are described. Some ideas involved in the modern theories of two-dimensional melting are briefly presented. The systems treated consist of noble gases (Kr, Ar, Xe) adsorbed on the basal plane of graphite and thin films formed by some liquid crystals. (L.C.) [pt

  6. Determination of local texture and stress distributions on submicro-/nanocrystalline multiphase gradient materials by means of two-dimensional X-ray diffraction as well by means of analytical and numerical modeling approaches

    International Nuclear Information System (INIS)

    Eschke, Andy

    2015-01-01

    The object of the present thesis was the determination of local distributions of crystallographic texture and mechanical (residual) stresses in submicro-/nanocrystalline multiphase gradient materials. For this purpose, experimental methods of two-dimensional X-ray diffraction were applied on the one hand, and theoretical calculations by means of analytical and numerical modeling approaches were performed on the other. Interest in these materials stems from the fact that ultrafine-grained materials are attractive for advanced engineering applications because of their mechanical properties (for instance hardness and ductility). Furthermore, multiphase gradient materials to some extent allow manufacture tailored to the desired physical properties, and hence a manifold of application potentials as well as a tuning of the material properties to the specific requirements of the application field. This tailoring relates both to the degree of gradation and to the particular composition of the composite from the chosen starting materials. The work, performed in the framework of the excellence cluster ''European Centre for Emerging Materials and Processes Dresden (ECEMP)'' of the Saxonian excellence initiative, aimed especially at the analysis of a specially processed, ultrafine-grained Ti/Al composite, which was and is the research object of the ECEMP subproject ''High strength metallic composites'' (HSMetComp). Both the process and the materials were in the focus of the above-mentioned (indirect) examination methods, which were adapted and further developed for these purposes. The results of the experimental and theoretical studies contribute to an improved understanding of the technological process as well as of the material behaviour, and can thus also be used for hints concerning process- and/or material-related optimizations. Altogether they

  7. Lagrangian statistics in weakly forced two-dimensional turbulence.

    Science.gov (United States)

    Rivera, Michael K; Ecke, Robert E

    2016-01-01

    Measurements of Lagrangian single-point and multiple-point statistics in a quasi-two-dimensional stratified layer system are reported. The system consists of a layer of salt water over an immiscible layer of Fluorinert and is forced electromagnetically so that mean-squared vorticity is injected at a well-defined spatial scale r_i. Simultaneous cascades develop in which enstrophy flows predominately to small scales whereas energy cascades, on average, to larger scales. Lagrangian correlations and one- and two-point displacements are measured for random initial conditions and for initial positions within topological centers and saddles. Some of the behavior of these quantities can be understood in terms of the trapping characteristics of long-lived centers, the slow motion near strong saddles, and the rapid fluctuations outside of either centers or saddles. We also present statistics of Lagrangian velocity fluctuations using energy spectra in frequency space and structure functions in real space. We compare with complementary Eulerian velocity statistics. We find that simultaneous inverse energy and enstrophy ranges present in spectra are not directly echoed in real-space moments of velocity difference. Nevertheless, the spectral ranges line up well with features of moment ratios, indicating that although the moments are not exhibiting unambiguous scaling, the behavior of the probability distribution functions is changing over short ranges of length scales. Implications for understanding weakly forced 2D turbulence with simultaneous inverse and direct cascades are discussed.

  8. Two-dimensional atom localization via two standing-wave fields in a four-level atomic system

    International Nuclear Information System (INIS)

    Zhang Hongtao; Wang Hui; Wang Zhiping

    2011-01-01

    We propose a scheme for the two-dimensional (2D) localization of an atom in a four-level Y-type atomic system. By applying two orthogonal standing-wave fields, the atoms can be localized at some special positions, leading to the formation of sub-wavelength 2D periodic spatial distributions. The localization peak position and number as well as the conditional position probability can be controlled by the intensities and detunings of optical fields.

  9. Percentile estimation using the normal and lognormal probability distribution

    International Nuclear Information System (INIS)

    Bement, T.R.

    1980-01-01

    Implicitly or explicitly percentile estimation is an important aspect of the analysis of aerial radiometric survey data. Standard deviation maps are produced for quadrangles which are surveyed as part of the National Uranium Resource Evaluation. These maps show where variables differ from their mean values by more than one, two or three standard deviations. Data may or may not be log-transformed prior to analysis. These maps have specific percentile interpretations only when proper distributional assumptions are met. Monte Carlo results are presented in this paper which show the consequences of estimating percentiles by: (1) assuming normality when the data are really from a lognormal distribution; and (2) assuming lognormality when the data are really from a normal distribution
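The consequence of the first mismatch (assuming normality for lognormal data) is easy to reproduce with a small Monte Carlo; the population parameters below are illustrative, not from the survey data:

```python
import numpy as np

rng = np.random.default_rng(42)
# Population is lognormal(mu=0, sigma=1); estimate the ~97.7th percentile
# (the "mean + 2 sigma" level) under each distributional assumption.
data = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
true_pctl = np.exp(0.0 + 2.0 * 1.0)          # exact value: e^2 ≈ 7.39

normal_est = data.mean() + 2.0 * data.std()  # wrongly assumes normality
logn_est = np.exp(np.log(data).mean() + 2.0 * np.log(data).std())
```

Under the wrong (normal) assumption the two-sigma level lands well away from the true 97.7th percentile, which is the kind of bias the Monte Carlo results in the paper quantify.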

  10. A new expression of the probability distribution in Incomplete Statistics and fundamental thermodynamic relations

    International Nuclear Information System (INIS)

    Huang Zhifu; Lin Bihong; Chen Jincan

    2009-01-01

    In order to overcome the limitations of the original expression of the probability distribution appearing in the literature on Incomplete Statistics, a new expression of the probability distribution is derived, where the Lagrange multiplier β introduced here is proved to be identical with that introduced in the second and third choices for the internal energy constraint in Tsallis' statistics and to be just equal to the physical inverse temperature. It is expounded that the probability distribution described by the new expression is invariant under uniform translation of the energy spectrum. Moreover, several fundamental thermodynamic relations are given, and the relationship between the new and the original expressions of the probability distribution is discussed.

  11. Study on probability distribution of fire scenarios in risk assessment to emergency evacuation

    International Nuclear Information System (INIS)

    Chu Guanquan; Wang Jinhui

    2012-01-01

    Event tree analysis (ETA) is a frequently used technique to analyze the probabilities of probable fire scenarios. Event probabilities are usually characterized by definite values. This is not appropriate, as the estimates may result from poor-quality statistics and limited knowledge; without addressing these uncertainties, ETA gives imprecise results and the credibility of the risk assessment is undermined. This paper presents an approach to address uncertainties in the event probabilities and to analyze the probability distributions of probable fire scenarios. ETA is performed to construct the probable fire scenarios. The activation time of every event is characterized as a stochastic variable by considering uncertainties in the fire growth rate and other input variables. To obtain the probability distribution of a probable fire scenario, a Markov chain method is combined with ETA. To demonstrate the approach, a case study is presented.
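The idea of replacing definite branch probabilities by distributions can be sketched with a Monte Carlo on a small, hypothetical two-branch tree; the Beta distributions below are assumptions for illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
# Hypothetical two-branch event tree: suppression fails, then detection
# fails. Each branch probability is modelled as a Beta distribution
# rather than a single definite value.
p_suppression_fails = rng.beta(2.0, 18.0, n)   # assumed mean 0.10
p_detection_fails = rng.beta(1.0, 19.0, n)     # assumed mean 0.05
p_scenario = p_suppression_fails * p_detection_fails

# The output is a distribution over the scenario probability, from which
# percentiles can be quoted instead of a single point estimate.
p_median, p_95 = np.quantile(p_scenario, [0.5, 0.95])
```

Reporting, say, the 95th percentile of `p_scenario` makes the limited quality of the underlying statistics explicit in the risk figure.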

  12. Evaluation of the probability distribution of intake from a single measurement on a personal air sampler

    International Nuclear Information System (INIS)

    Birchall, A.; Muirhead, C.R.; James, A.C.

    1988-01-01

    An analytical expression has been derived for the k-sum distribution, formed by summing k random variables from a lognormal population. Poisson statistics are used with this distribution to derive the distribution of intake when breathing an atmosphere with a constant particle number concentration. Bayesian inference is then used to calculate the posterior probability distribution of concentrations from a given measurement. This is combined with the above intake distribution to give the probability distribution of intake resulting from a single measurement of activity made by an ideal sampler. It is shown that the probability distribution of intake is very dependent on the prior distribution used in Bayes' theorem. The usual prior assumption, that all number concentrations are equally probable, leads to an imbalance in the posterior intake distribution. This can be resolved if a new prior proportional to w^(-2/3) is used, where w is the expected number of particles collected. (author)

  13. Cosmological constraints from the convergence 1-point probability distribution

    Energy Technology Data Exchange (ETDEWEB)

    Patton, Kenneth [The Ohio State Univ., Columbus, OH (United States); Blazek, Jonathan [The Ohio State Univ., Columbus, OH (United States); Ecole Polytechnique Federale de Lausanne (EPFL), Versoix (Switzerland); Honscheid, Klaus [The Ohio State Univ., Columbus, OH (United States); Huff, Eric [The Ohio State Univ., Columbus, OH (United States); California Inst. of Technology (CalTech), Pasadena, CA (United States); Melchior, Peter [Princeton Univ., Princeton, NJ (United States); Ross, Ashley J. [The Ohio State Univ., Columbus, OH (United States); Suchyta, Eric D. [The Ohio State Univ., Columbus, OH (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-06-29

    Here, we examine the cosmological information available from the 1-point probability density function (PDF) of the weak-lensing convergence field, utilizing fast L-PICOLA simulations and a Fisher analysis. We find competitive constraints in the Ω_m–σ_8 plane from the convergence PDF with 188 arcmin² pixels compared to the cosmic shear power spectrum with an equivalent number of modes (ℓ < 886). The convergence PDF also partially breaks the degeneracy cosmic shear exhibits in that parameter space. A joint analysis of the convergence PDF and shear 2-point function also reduces the impact of shape measurement systematics, to which the PDF is less susceptible, and improves the total figure of merit by a factor of 2–3, depending on the level of systematics. Finally, we present a correction factor necessary for calculating the unbiased Fisher information from finite differences using a limited number of cosmological simulations.

  14. Generalization of Poisson distribution for the case of changing probability of consequential events

    International Nuclear Information System (INIS)

    Kushnirenko, E.

    1995-01-01

    The Poisson distribution is generalized to the case of changing probabilities of the consequential events. It is shown that the classical Poisson distribution is the special case of this generalized distribution in which the probabilities of the consequential events are constant. Use of the generalized Poisson distribution makes it possible in some cases to obtain an analytical result instead of performing a Monte-Carlo calculation.
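The distinction can be checked numerically: with a constant per-step probability the counts approach a Poisson distribution, while a probability that changes with the number of previous events departs from it. The damping rule below is a hypothetical example, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_counts(p_of_k, n_steps=1000, n_runs=20_000):
    """Count events over n_steps Bernoulli trials where the per-step
    probability p_of_k(k) may depend on the number k already observed."""
    counts = np.zeros(n_runs, dtype=np.int64)
    for _ in range(n_steps):
        p = p_of_k(counts)                      # scalar or per-run array
        counts = counts + (rng.random(n_runs) < p)
    return counts

const = simulate_counts(lambda k: 0.003)              # classical Poisson limit
damped = simulate_counts(lambda k: 0.003 / (1 + k))   # probability drops after each event
```

For the constant case the variance-to-mean ratio stays near 1 (Poisson-like); the history-dependent rule shifts both the mean and the dispersion, which is what the generalized distribution captures analytically.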

  15. Comparision of the different probability distributions for earthquake hazard assessment in the North Anatolian Fault Zone

    Energy Technology Data Exchange (ETDEWEB)

    Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com [Karadeniz Technical University, Trabzon (Turkey); Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr [Ağrı İbrahim Çeçen University, Ağrı (Turkey)

    2016-04-18

    In this study we examined and compared three different probability distribution methods to determine the most suitable model for the probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the period 1900-2015 and magnitudes M ≥ 6.0, and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distributions, namely the Weibull, Frechet, and three-parameter Weibull distributions. The suitability of the fitted distribution parameters was evaluated using the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probabilities and the conditional probabilities of earthquake occurrence for different elapsed times using these three distributions. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution was more suitable than the other distributions in this region.
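The model-selection step (fit candidate distributions, compare K-S statistics) can be sketched with scipy on synthetic data; the data are generated from a Weibull here so the preferred model is known in advance, and `invweibull` is scipy's name for the Frechet distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic inter-event data drawn from a Weibull(c=1.5, scale=10),
# standing in for a real earthquake recurrence catalogue.
times = stats.weibull_min.rvs(c=1.5, scale=10.0, size=200, random_state=rng)

ks_stat = {}
for name, dist in [("weibull", stats.weibull_min), ("frechet", stats.invweibull)]:
    params = dist.fit(times, floc=0.0)            # two-parameter fit (loc fixed at 0)
    ks_stat[name] = stats.kstest(times, dist.cdf, args=params).statistic
```

A smaller K-S statistic means a better fit; on this synthetic catalogue the Weibull wins, mirroring the kind of comparison the study reports (note that K-S p-values are biased when parameters are estimated from the same data, so the statistic is used only for ranking here).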

  16. Probability distributions for first neighbor distances between resonances that belong to two different families

    International Nuclear Information System (INIS)

    Difilippo, F.C.

    1994-01-01

    For a mixture of two families of resonances, we found the probability distribution for the distance, as first neighbors, between resonances that belong to different families. Integration of this distribution gives the probability of accidental overlapping of resonances of one isotope by resonances of the other, provided that the resonances of each isotope belong to a single family. (author)

  17. Approximate solutions for the two-dimensional integral transport equation. Solution of complex two-dimensional transport problems

    International Nuclear Information System (INIS)

    Sanchez, Richard.

    1980-11-01

    This work is divided into two parts: the first part deals with the solution of complex two-dimensional transport problems, the second one (note CEA-N-2166) treats the critically mixed methods of resolution. A set of approximate solutions for the isotropic two-dimensional neutron transport problem has been developed using the interface current formalism. The method has been applied to regular lattices of rectangular cells containing a fuel pin, cladding, and water, or homogenized structural material. The cells are divided into zones that are homogeneous. A zone-wise flux expansion is used to formulate a direct collision probability problem within a cell. The coupling of the cells is effected by making extra assumptions on the currents entering and leaving the interfaces. Two codes have been written: CALLIOPE uses a cylindrical cell model and one or three terms for the flux expansion, and NAUSICAA uses a two-dimensional flux representation and does a truly two-dimensional calculation inside each cell. In both codes, one or three terms can be used to make a space-independent expansion of the angular fluxes entering and leaving each side of the cell. The accuracies and computing times achieved with the different approximations are illustrated by numerical studies on two benchmark problems and by calculations performed in the APOLLO multigroup code [fr

  18. Two-dimensional nuclear magnetic resonance spectroscopy

    International Nuclear Information System (INIS)

    Bax, A.; Lerner, L.

    1986-01-01

    Great spectral simplification can be obtained by spreading the conventional one-dimensional nuclear magnetic resonance (NMR) spectrum in two independent frequency dimensions. This so-called two-dimensional NMR spectroscopy removes spectral overlap, facilitates spectral assignment, and provides a wealth of additional information. For example, conformational information related to interproton distances is available from resonance intensities in certain types of two-dimensional experiments. Another method generates ¹H NMR spectra of a preselected fragment of the molecule, suppressing resonances from other regions and greatly simplifying spectral appearance. Two-dimensional NMR spectroscopy can also be applied to the study of ¹³C and ¹⁵N, not only providing valuable connectivity information but also improving sensitivity of ¹³C and ¹⁵N detection by up to two orders of magnitude. 45 references, 10 figures

  19. Two-dimensional x-ray diffraction

    CERN Document Server

    He, Bob B

    2009-01-01

    Written by one of the pioneers of 2D X-Ray Diffraction, this useful guide covers the fundamentals, experimental methods and applications of two-dimensional x-ray diffraction, including geometry convention, x-ray source and optics, two-dimensional detectors, diffraction data interpretation, and configurations for various applications, such as phase identification, texture, stress, microstructure analysis, crystallinity, thin film analysis and combinatorial screening. Experimental examples in materials research, pharmaceuticals, and forensics are also given. This presents a key resource to resea

  20. Equivalence of two-dimensional gravities

    International Nuclear Information System (INIS)

    Mohammedi, N.

    1990-01-01

    The authors find the relationship between the Jackiw-Teitelboim model of two-dimensional gravity and the SL(2,R) induced gravity. These are shown to be related to a two-dimensional gauge theory obtained by dimensionally reducing the Chern-Simons action of the 2 + 1 dimensional gravity. The authors present an explicit solution to the equations of motion of the auxiliary field of the Jackiw-Teitelboim model in the light-cone gauge. A renormalization of the cosmological constant is also given

  1. Family of probability distributions derived from maximal entropy principle with scale invariant restrictions.

    Science.gov (United States)

    Sonnino, Giorgio; Steinbrecher, György; Cardinali, Alessandro; Sonnino, Alberto; Tlidi, Mustapha

    2013-01-01

    Using statistical thermodynamics, we derive a general expression of the stationary probability distribution for thermodynamic systems driven out of equilibrium by several thermodynamic forces. The local equilibrium is defined by imposing the minimum entropy production and the maximum entropy principle under the scale invariance restrictions. The obtained probability distribution presents a singularity that has an immediate physical interpretation in terms of intermittency models. The derived reference probability distribution function is interpreted as a time and ensemble average of the real physical one. A generic family of stochastic processes describing noise-driven intermittency, whose stationary density distribution coincides exactly with the one resulting from entropy maximization, is presented.

  2. Approximate solution for the reactor neutron probability distribution

    International Nuclear Information System (INIS)

    Ruby, L.; McSwine, T.L.

    1985-01-01

    Several authors have studied the Kolmogorov equation for a fission-driven chain-reacting system, written in terms of the generating function G(x,y,z,t) where x, y, and z are dummy variables referring to the neutron, delayed neutron precursor, and detector-count populations, n, m, and c, respectively. Pal and Zolotukhin and Mogil'ner have shown that if delayed neutrons are neglected, the solution is approximately negative binomial for the neutron population. Wang and Ruby have shown that if the detector effect is neglected, the solution, including the effect of delayed neutrons, is approximately negative binomial. All of the authors assumed prompt-neutron emission not exceeding two neutrons per fission. An approximate method of separating the detector effect from the statistics of the neutron and precursor populations has been proposed by Ruby. In this weak-coupling limit, it is assumed that G(x,y,z,t) = H(x,y)I(z,t). Substitution of this assumption into the Kolmogorov equation separates the latter into two equations, one for H(x,y) and the other for I(z,t). Solution of the latter then gives a generating function, which indicates that in the weak-coupling limit, the detector counts are Poisson distributed. Ruby also showed that if the detector effect is neglected in the equation for H(x,y), i.e., the detector efficiency is set to zero, then the resulting equation is identical with that considered by Wang and Ruby. The authors present here an approximate solution for H(x,y) that does not set the detector efficiency to zero

  3. Separating the contributions of variability and parameter uncertainty in probability distributions

    International Nuclear Information System (INIS)

    Sankararaman, S.; Mahadevan, S.

    2013-01-01

    This paper proposes a computational methodology to quantify the individual contributions of variability and distribution parameter uncertainty to the overall uncertainty in a random variable. Even if the distribution type is assumed to be known, sparse or imprecise data leads to uncertainty about the distribution parameters. If uncertain distribution parameters are represented using probability distributions, then the random variable can be represented using a family of probability distributions. The family of distributions concept has been used to obtain qualitative, graphical inference of the contributions of natural variability and distribution parameter uncertainty. The proposed methodology provides quantitative estimates of the contributions of the two types of uncertainty. Using variance-based global sensitivity analysis, the contributions of variability and distribution parameter uncertainty to the overall uncertainty are computed. The proposed method is developed at two different levels; first, at the level of a variable whose distribution parameters are uncertain, and second, at the level of a model output whose inputs have uncertain distribution parameters
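At its simplest, the separation the paper quantifies is the law of total variance: Var(X) = E[Var(X|θ)] (natural variability) + Var(E[X|θ]) (distribution parameter uncertainty). A nested Monte Carlo sketch with assumed distributions (not the paper's example problem):

```python
import numpy as np

rng = np.random.default_rng(0)
n_outer, n_inner = 2000, 2000

# X ~ Normal(mu, 1) where the parameter mu is itself uncertain,
# mu ~ Normal(0, 0.5) — i.e. a family of probability distributions.
mus = rng.normal(0.0, 0.5, n_outer)                    # parameter uncertainty
x = rng.normal(mus[:, None], 1.0, (n_outer, n_inner))  # natural variability

# Law of total variance: Var(X) = E[Var(X|mu)] + Var(E[X|mu])
var_variability = x.var(axis=1).mean()   # expected ≈ 1.0  (sigma^2)
var_parameters = x.mean(axis=1).var()    # expected ≈ 0.25 (0.5^2)
var_total = x.var()
```

Here roughly 80% of the total variance comes from variability and 20% from parameter uncertainty; the variance-based global sensitivity analysis in the paper produces this kind of quantitative attribution for general (non-Gaussian, model-output) cases.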

  4. Analytical simulation of two dimensional advection dispersion ...

    African Journals Online (AJOL)

    The study was designed to investigate the analytical simulation of two dimensional advection dispersion equation of contaminant transport. The steady state flow condition of the contaminant transport where inorganic contaminants in aqueous waste solutions are disposed of at the land surface where it would migrate ...

  6. Sums of two-dimensional spectral triples

    DEFF Research Database (Denmark)

    Christensen, Erik; Ivan, Cristina

    2007-01-01

    construct a sum of two dimensional modules which reflects some aspects of the topological dimensions of the compact metric space, but this will only give the metric back approximately. At the end we make an explicit computation of the last module for the unit interval in. The metric is recovered exactly...

  7. Stability of two-dimensional vorticity filaments

    International Nuclear Information System (INIS)

    Elhmaidi, D.; Provenzale, A.; Lili, T.; Babiano, A.

    2004-01-01

    We discuss the results of a numerical study on the stability of two-dimensional vorticity filaments around a circular vortex. We illustrate how the stability of the filaments depends on the balance between the strain associated with the far field of the vortex and the local vorticity of the filament, and we discuss an empirical criterion for filament stability

  8. Two-Dimensional Motions of Rockets

    Science.gov (United States)

    Kang, Yoonhwan; Bae, Saebyok

    2007-01-01

    We analyse the two-dimensional motions of the rockets for various types of rocket thrusts, the air friction and the gravitation by using a suitable representation of the rocket equation and the numerical calculation. The slope shapes of the rocket trajectories are discussed for the three types of rocket engines. Unlike the projectile motions, the…

  9. Two-dimensional microstrip detector for neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Oed, A [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France)]

    1997-04-01

    Because of their robust design, gas microstrip detectors, which were developed at ILL, can be assembled relatively quickly, provided the prefabricated components are available. At the beginning of 1996, orders were received for the construction of three two-dimensional neutron detectors. These detectors have been completed. The detectors are outlined below. (author). 2 refs.

  10. Conformal invariance and two-dimensional physics

    International Nuclear Information System (INIS)

    Zuber, J.B.

    1993-01-01

    Physicists and mathematicians take a strong interest in conformal invariance: geometric transformations which preserve angles. This symmetry is very important for two-dimensional systems such as phase transitions, string theory or knot theory. In this article, the author presents conformal invariance and explains its usefulness

  11. Matching Two-dimensional Gel Electrophoresis' Spots

    DEFF Research Database (Denmark)

    Dos Anjos, António; AL-Tam, Faroq; Shahbazkia, Hamid Reza

    2012-01-01

    This paper describes an approach for matching Two-Dimensional Electrophoresis (2-DE) gels' spots, involving the use of image registration. The number of false positive matches produced by the proposed approach is small, when compared to academic and commercial state-of-the-art approaches. This ar...

  12. Two-dimensional membranes in motion

    NARCIS (Netherlands)

    Davidovikj, D.

    2018-01-01

    This thesis revolves around nanomechanical membranes made of suspended two-dimensional materials. Chapters 1-3 give an introduction to the field of 2D-based nanomechanical devices together with an overview of the underlying physics and the measurement tools used in subsequent chapters. The research

  13. Extended Polymorphism of Two-Dimensional Material

    NARCIS (Netherlands)

    Yoshida, Masaro; Ye, Jianting; Zhang, Yijin; Imai, Yasuhiko; Kimura, Shigeru; Fujiwara, Akihiko; Nishizaki, Terukazu; Kobayashi, Norio; Nakano, Masaki; Iwasa, Yoshihiro

    When controlling electronic properties of bulk materials, we usually assume that the basic crystal structure is fixed. However, in two-dimensional (2D) materials, atomic structure or to functionalize their properties. Various polymorphs can exist in transition metal dichalcogenides (TMDCs) from

  14. Piezoelectricity in Two-Dimensional Materials

    KAUST Repository

    Wu, Tao

    2015-02-25

    Powering up 2D materials: Recent experimental studies confirmed the existence of piezoelectricity - the conversion of mechanical stress into electricity - in two-dimensional single-layer MoS2 nanosheets. The results represent a milestone towards embedding low-dimensional materials into future disruptive technologies. © 2015 Wiley-VCH Verlag GmbH & Co. KGaA.

  15. Dynamics of vortex interactions in two-dimensional flows

    DEFF Research Database (Denmark)

    Juul Rasmussen, J.; Nielsen, A.H.; Naulin, V.

    2002-01-01

    The dynamics and interaction of like-signed vortex structures in two-dimensional flows are investigated by means of direct numerical solutions of the two-dimensional Navier-Stokes equations. Two vortices with distributed vorticity merge when their distance relative to their radius, d/R_0, is below...... a critical value, a(c). Using the Weiss-field, a(c) is estimated for vortex patches. Introducing an effective radius for vortices with distributed vorticity, we find that 3.3 ... is effectively producing small scale structures and the relation to the enstrophy "cascade" in developed 2D turbulence is discussed. The influence of finite viscosity on the merging is also investigated. Additionally, we examine vortex interactions on a finite domain, and discuss the results in connection...

  16. The exact probability distribution of the rank product statistics for replicated experiments.

    Science.gov (United States)

    Eisinga, Rob; Breitling, Rainer; Heskes, Tom

    2013-03-18

    The rank product method is a widely accepted technique for detecting differentially regulated genes in replicated microarray experiments. To approximate the sampling distribution of the rank product statistic, the original publication proposed a permutation approach, whereas recently an alternative approximation based on the continuous gamma distribution was suggested. However, both approximations are imperfect for estimating small tail probabilities. In this paper we relate the rank product statistic to number theory and provide a derivation of its exact probability distribution and the true tail probabilities. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
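    For concreteness, a sketch of the rank product statistic together with the permutation approximation mentioned above (the paper's contribution, the exact null distribution via number theory, is not reproduced here); the toy data and the up-regulated gene are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def rank_product(data):
        """Rank product across k replicates: geometric mean of within-replicate ranks.

        data: (n_genes, k) array; rank 1 = most up-regulated gene in that replicate.
        """
        ranks = data.shape[0] - np.argsort(np.argsort(data, axis=0), axis=0)
        return ranks.prod(axis=1) ** (1.0 / data.shape[1])

    # Hypothetical toy experiment: 100 genes, 4 replicates; gene 0 is truly up-regulated.
    data = rng.normal(0, 1, (100, 4))
    data[0] += 3.0
    rp = rank_product(data)

    # Permutation approximation of the null distribution: permute each replicate
    # independently and pool the resulting rank products.
    null = np.array([rank_product(rng.permuted(data, axis=0)) for _ in range(200)]).ravel()
    p_gene0 = (null <= rp[0]).mean()  # estimated tail probability for gene 0
    print(p_gene0)
    ```

    For small tail probabilities such a permutation estimate becomes unreliable (few or no null values fall below the observed statistic), which is precisely the regime the exact distribution addresses.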

  17. Collective motions of globally coupled oscillators and some probability distributions on circle

    Energy Technology Data Exchange (ETDEWEB)

    Jaćimović, Vladimir [Faculty of Natural Sciences and Mathematics, University of Montenegro, Cetinjski put, bb., 81000 Podgorica (Montenegro); Crnkić, Aladin, E-mail: aladin.crnkic@hotmail.com [Faculty of Technical Engineering, University of Bihać, Ljubijankićeva, bb., 77000 Bihać, Bosnia and Herzegovina (Bosnia and Herzegovina)

    2017-06-28

    In 2010 Kato and Jones described a new family of probability distributions on the circle, obtained as a Möbius transformation of the von Mises distribution. We present a model demonstrating that these distributions appear naturally in the study of populations of coupled oscillators. We use this opportunity to point out certain relations between Directional Statistics and the collective motion of coupled oscillators. - Highlights: • We specify probability distributions on the circle that arise in the Kuramoto model. • We study how the mean-field coupling affects the shape of the distribution of phases. • We discuss potential applications in some experiments on the cell cycle. • We apply Directional Statistics to study the collective dynamics of coupled oscillators.
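    A sketch of how such distributions can be sampled, assuming the usual construction: draw angles from a von Mises distribution and push the corresponding unit-circle points through a Möbius map of the unit disk (the exact parametrization used by Kato and Jones may differ):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def sample_mobius_von_mises(mu, kappa, a, size, rng):
        """Sample a Moebius-transformed von Mises distribution on the circle.

        theta ~ vonMises(mu, kappa); z = exp(i*theta) is mapped through
        w = (z + a) / (1 + conj(a) z) with |a| < 1, which preserves the unit circle.
        """
        theta = rng.vonmises(mu, kappa, size)
        z = np.exp(1j * theta)
        w = (z + a) / (1 + np.conj(a) * z)
        return np.angle(w)

    # Hypothetical parameters for illustration.
    samples = sample_mobius_von_mises(mu=0.0, kappa=2.0, a=0.4 + 0.2j, size=50_000, rng=rng)

    # Circular mean direction of the transformed sample.
    mean_dir = np.angle(np.exp(1j * samples).mean())
    print(mean_dir)
    ```

    The Möbius map introduces the skewness and peakedness that the plain von Mises family lacks, which is what makes the transformed family useful for describing phase distributions in coupled-oscillator populations.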

  18. GEPOIS: a two dimensional nonuniform mesh Poisson solver

    International Nuclear Information System (INIS)

    Quintenz, J.P.; Freeman, J.R.

    1979-06-01

    A computer code is described which solves Poisson's equation for the electric potential over a two dimensional cylindrical (r,z) nonuniform mesh which can contain internal electrodes. Poisson's equation is solved over a given region subject to a specified charge distribution with either Neumann or Dirichlet perimeter boundary conditions and with Dirichlet boundary conditions on internal surfaces. The static electric field is also computed over the region with special care given to normal electric field components at boundary surfaces

  19. Acoustic transparency in two-dimensional sonic crystals

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Dehesa, Jose; Torrent, Daniel [Wave Phenomena Group, Department of Electronic Engineering, Polytechnic University of Valencia, C/ Camino de Vera s/n, E-46022 Valencia (Spain); Cai Liangwu [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS 66506 (United States)], E-mail: jsdehesa@upvnet.upv.es

    2009-01-15

    Acoustic transparency is studied in two-dimensional sonic crystals consisting of hexagonal distributions of cylinders with continuously varying properties. The transparency condition is achieved by selectively closing the acoustic bandgaps, which are governed by the structure factor of the cylindrical scatterers. It is shown here that cylindrical scatterers with the proposed continuously varying properties are physically realizable by using metafluids based on sonic crystals. The feasibility of this proposal is analyzed by a numerical experiment based on multiple scattering theory.

  20. New family of probability distributions with applications to Monte Carlo studies

    International Nuclear Information System (INIS)

    Johnson, M.E.; Tietjen, G.L.; Beckman, R.J.

    1980-01-01

    A new probability distribution is presented that offers considerable potential for providing stochastic inputs to Monte Carlo simulation studies. The distribution includes the exponential power family as a special case. An efficient computational strategy is proposed for random variate generation. An example for testing the hypothesis of unit variance illustrates the advantages of the proposed distribution
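    The exponential power special case mentioned above admits a simple variate generator via a gamma transform; the sketch below uses this standard construction, which is not necessarily the strategy proposed in the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def rand_exp_power(beta, size, rng):
        """Variates with density proportional to exp(-|x|**beta).

        If V ~ Gamma(shape=1/beta, scale=1), then +-V**(1/beta) has the
        exponential power density (a standard gamma-transform construction).
        """
        v = rng.gamma(shape=1.0 / beta, scale=1.0, size=size)
        sign = rng.choice([-1.0, 1.0], size=size)
        return sign * v ** (1.0 / beta)

    # beta = 2 gives density prop. to exp(-x^2), i.e. a normal with variance 1/2.
    x = rand_exp_power(beta=2.0, size=100_000, rng=rng)
    print(x.mean(), x.var())
    ```

    Varying `beta` interpolates between heavy-tailed (beta < 2) and light-tailed (beta > 2) shapes, which is what makes the family convenient as a stochastic input for Monte Carlo robustness studies.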

  1. Two-dimensional sub-half-wavelength atom localization via controlled spontaneous emission.

    Science.gov (United States)

    Wan, Ren-Gang; Zhang, Tong-Yi

    2011-12-05

    We propose a scheme for two-dimensional (2D) atom localization based on controlled spontaneous emission, in which the atom interacts with two orthogonal standing-wave fields. Due to the spatially dependent atom-field interaction, the position probability distribution of the atom can be directly determined by measuring the resulting spontaneous emission spectrum. The phase-sensitive property of the atomic system leads to quenching of the spontaneous emission in some regions of the standing waves, which significantly reduces the uncertainty in the position measurement of the atom. We find that the frequency measurement of the emitted light localizes the atom in the half-wavelength domain. In particular, the probability of finding the atom at a particular position can reach 100% when a photon with a certain frequency is detected. By increasing the Rabi frequencies of the driving fields, such 2D sub-half-wavelength atom localization can acquire high spatial resolution.

  2. Calculation of ruin probabilities for a dense class of heavy tailed distributions

    DEFF Research Database (Denmark)

    Bladt, Mogens; Nielsen, Bo Friis; Samorodnitsky, Gennady

    2015-01-01

    In this paper, we propose a class of infinite-dimensional phase-type distributions with finitely many parameters as models for heavy-tailed distributions. The class of finite-dimensional phase-type distributions is dense in the class of distributions on the positive reals and may hence approximate...... any such distribution. We prove that formulas from renewal theory, with particular attention to ruin probabilities, which are true for common phase-type distributions also hold true for the infinite-dimensional case. We provide algorithms for calculating functionals of interest...... such as the renewal density and the ruin probability. It might be of interest to approximate a given heavy-tailed distribution of some other type by a distribution from the class of infinite-dimensional phase-type distributions, and to this end we provide a calibration procedure which works for the approximation...

  3. A transmission probability method for calculation of neutron flux distributions in hexagonal geometry

    International Nuclear Information System (INIS)

    Wasastjerna, F.; Lux, I.

    1980-03-01

    A transmission probability method implemented in the program TPHEX is described. This program was developed for the calculation of neutron flux distributions in hexagonal light water reactor fuel assemblies. The accuracy appears to be superior to diffusion theory, and the computation time is shorter than that of the collision probability method. (author)

  4. A measure of mutual divergence among a number of probability distributions

    Directory of Open Access Journals (Sweden)

    J. N. Kapur

    1987-01-01

    major inequalities due to Shannon, Renyi and Holder. The inequalities are then used to obtain some useful results in information theory. In particular, measures are obtained of the mutual divergence among two or more probability distributions.

  5. On the Meta Distribution of Coverage Probability in Uplink Cellular Networks

    KAUST Repository

    Elsawy, Hesham; Alouini, Mohamed-Slim

    2017-01-01

    This letter studies the meta distribution of coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control. Using the widely accepted Poisson point process (PPP

  6. Test of quantum thermalization in the two-dimensional transverse-field Ising model.

    Science.gov (United States)

    Blaß, Benjamin; Rieger, Heiko

    2016-12-01

    We study the quantum relaxation of the two-dimensional transverse-field Ising model after global quenches with a real-time variational Monte Carlo method and address the question whether this non-integrable, two-dimensional system thermalizes or not. We consider both interaction quenches in the paramagnetic phase and field quenches in the ferromagnetic phase and compare the time-averaged probability distributions of non-conserved quantities like magnetization and correlation functions to the thermal distributions according to the canonical Gibbs ensemble obtained with quantum Monte Carlo simulations at temperatures defined by the excess energy in the system. We find that the occurrence of thermalization crucially depends on the quench parameters: While after the interaction quenches in the paramagnetic phase thermalization can be observed, our results for the field quenches in the ferromagnetic phase show clear deviations from the thermal system. These deviations increase with the quench strength and become especially clear comparing the shape of the thermal and the time-averaged distributions, the latter ones indicating that the system does not completely lose the memory of its initial state even for strong quenches. We discuss our results with respect to a recently formulated theorem on generalized thermalization in quantum systems.

  8. Two-dimensional confinement of heavy fermions

    International Nuclear Information System (INIS)

    Shishido, Hiroaki; Shibauchi, Takasada; Matsuda, Yuji; Terashima, Takahito

    2010-01-01

    Metallic systems with the strongest electron correlations are realized in certain rare-earth and actinide compounds whose physics is dominated by f-electrons. These materials are known as heavy fermions, so called because the effective mass of the conduction electrons is enhanced via correlation effects up to several hundred times the free electron mass. To date the electronic structure of all heavy-fermion compounds is essentially three-dimensional. Here we report on the first realization of a two-dimensional heavy-fermion system, where the dimensionality is adjusted in a controllable fashion by fabricating heterostructures using molecular beam epitaxy. The two-dimensional heavy-fermion system displays striking deviations from the standard Fermi-liquid low-temperature electronic properties. (author)

  9. Two-dimensional sensitivity calculation code: SENSETWO

    International Nuclear Information System (INIS)

    Yamauchi, Michinori; Nakayama, Mitsuo; Minami, Kazuyoshi; Seki, Yasushi; Iida, Hiromasa.

    1979-05-01

    A SENSETWO code for the calculation of cross section sensitivities with a two-dimensional model has been developed on the basis of first-order perturbation theory. It uses forward neutron and/or gamma-ray fluxes and adjoint fluxes obtained by the two-dimensional discrete ordinates code TWOTRAN-II. The data and information on cross sections, geometry, nuclide density, response functions, etc. are transmitted to SENSETWO via the dump magnetic tape produced in the TWOTRAN calculations. The required input for SENSETWO calculations is thus very simple. SENSETWO yields as printed output the cross section sensitivities for each coarse mesh zone and for each energy group, as well as plotted output of sensitivity profiles specified in the input. A special feature of the code is that it also calculates the reaction rate with the response function used as the adjoint source in the TWOTRAN adjoint calculation and the calculated forward flux from the TWOTRAN forward calculation. (author)

  10. Two-dimensional ranking of Wikipedia articles

    Science.gov (United States)

    Zhirov, A. O.; Zhirov, O. V.; Shepelyansky, D. L.

    2010-10-01

    The Library of Babel, described by Jorge Luis Borges, stores an enormous amount of information. The Library exists ab aeterno. Wikipedia, a free online encyclopaedia, becomes a modern analogue of such a Library. Information retrieval and ranking of Wikipedia articles become the challenge of modern society. While PageRank highlights very well known nodes with many ingoing links, CheiRank highlights very communicative nodes with many outgoing links. In this way the ranking becomes two-dimensional. Using CheiRank and PageRank we analyze the properties of two-dimensional ranking of all Wikipedia English articles and show that it gives their reliable classification with rich and nontrivial features. Detailed studies are done for countries, universities, personalities, physicists, chess players, Dow-Jones companies and other categories.
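    The two rankings can be sketched with a plain power-iteration PageRank applied to a network and to its link-inverted copy (which yields CheiRank); the toy directed network below is invented for illustration:

    ```python
    import numpy as np

    def pagerank(adj, alpha=0.85, n_iter=100):
        """Power-iteration PageRank on a dense adjacency matrix adj[i, j] = link i -> j."""
        n = adj.shape[0]
        out = adj.sum(axis=1, keepdims=True)
        # Row-stochastic transition matrix; dangling nodes spread uniformly.
        P = np.where(out > 0, adj / np.where(out == 0, 1, out), 1.0 / n)
        v = np.full(n, 1.0 / n)
        for _ in range(n_iter):
            v = alpha * v @ P + (1 - alpha) / n  # damped power iteration
        return v / v.sum()

    # Toy network: node 0 has many ingoing links (popular), node 3 has many
    # outgoing links (communicative).
    A = np.array([[0, 0, 0, 0],
                  [1, 0, 0, 0],
                  [1, 0, 0, 0],
                  [1, 1, 1, 0]], dtype=float)

    pr = pagerank(A)    # PageRank: driven by ingoing links
    cr = pagerank(A.T)  # CheiRank: PageRank of the network with all links inverted
    print(np.argmax(pr), np.argmax(cr))
    ```

    Plotting each node at the point (PageRank position, CheiRank position) gives the two-dimensional ranking plane analyzed in the article.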

  11. Toward two-dimensional search engines

    International Nuclear Information System (INIS)

    Ermann, L; Shepelyansky, D L; Chepelianskii, A D

    2012-01-01

    We study the statistical properties of various directed networks using ranking of their nodes based on the dominant vectors of the Google matrix, known as PageRank and CheiRank. On average, PageRank orders nodes proportionally to the number of ingoing links, while CheiRank orders nodes proportionally to the number of outgoing links. In this way the ranking of nodes becomes two-dimensional, which paves the way for the development of two-dimensional search engines of a new type. Statistical properties of information flow on the PageRank–CheiRank plane are analyzed for networks of British, French and Italian universities, Wikipedia, the Linux Kernel, gene regulation and other networks. Special emphasis is placed on the networks of British universities, using the large database publicly available in the UK. Methods of spam-link control are also analyzed. (paper)

  12. Acoustic phonon emission by two dimensional plasmons

    International Nuclear Information System (INIS)

    Mishonov, T.M.

    1990-06-01

    Acoustic wave emission of two-dimensional plasmons in a semiconductor or superconductor microstructure is investigated by using the phenomenological deformation potential within the jellium model. The plasmons are excited by an external electromagnetic (e.m.) field. The power conversion coefficient of e.m. energy into acoustic wave energy is also estimated. It is shown that the coherent transformation has a sharp resonance at the plasmon frequency of the two-dimensional electron gas (2DEG). The incoherent transformation of the e.m. energy is generated by ohmic dissipation of the 2DEG. The method proposed for coherent phonon beam generation can be very effective for high-mobility 2DEG and for thin superconducting layers if the plasmon frequency ω is smaller than the superconducting gap 2Δ. (author). 21 refs, 1 fig

  13. Confined catalysis under two-dimensional materials

    OpenAIRE

    Li, Haobo; Xiao, Jianping; Fu, Qiang; Bao, Xinhe

    2017-01-01

    Small spaces in nanoreactors may have big implications in chemistry, because the chemical nature of molecules and reactions within the nanospaces can be changed significantly due to the nanoconfinement effect. Two-dimensional (2D) nanoreactor formed under 2D materials can provide a well-defined model system to explore the confined catalysis. We demonstrate a general tendency for weakened surface adsorption under the confinement of graphene overlayer, illustrating the feasible modulation of su...

  14. Two-Dimensional Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Bo Jia

    2015-01-01

    (BP networks. However, like many other methods, ELM was originally proposed to handle vector patterns, while nonvector patterns in real applications, such as image data, need to be explored. We propose the two-dimensional extreme learning machine (2DELM based on the very natural idea of dealing with matrix data directly. Unlike the original ELM, which handles vectors, 2DELM takes matrices as input features without vectorization. Empirical studies on several real image datasets show the efficiency and effectiveness of the algorithm.

  15. Superintegrability on the two dimensional hyperboloid

    International Nuclear Information System (INIS)

    Akopyan, E.; Pogosyan, G.S.; Kalnins, E.G.; Miller, W. Jr

    1998-01-01

    This work is devoted to the investigation of quantum mechanical systems on the two-dimensional hyperboloid which admit separation of variables in at least two coordinate systems. Here we consider two potentials introduced in a paper of C.P. Boyer, E.G. Kalnins and P. Winternitz which have not been studied yet. An example of an interbasis expansion is given, and the structure of the quadratic algebra generated by the integrals of motion is worked out.

  16. Two-dimensional Kagome photonic bandgap waveguide

    DEFF Research Database (Denmark)

    Nielsen, Jens Bo; Søndergaard, Thomas; Libori, Stig E. Barkou

    2000-01-01

    The transverse-magnetic photonic-bandgap-guidance properties are investigated for a planar two-dimensional (2-D) Kagome waveguide configuration using a full-vectorial plane-wave-expansion method. Single-moded well-localized low-index guided modes are found. The localization of the optical modes...... is investigated with respect to the width of the 2-D Kagome waveguide, and the number of modes existing for specific frequencies and waveguide widths is mapped out....

  17. Mechanical exfoliation of two-dimensional materials

    Science.gov (United States)

    Gao, Enlai; Lin, Shao-Zhen; Qin, Zhao; Buehler, Markus J.; Feng, Xi-Qiao; Xu, Zhiping

    2018-06-01

    Two-dimensional materials such as graphene and transition metal dichalcogenides have been identified and drawn much attention over the last few years for their unique structural and electronic properties. However, their rise begins only after these materials are successfully isolated from their layered assemblies or adhesive substrates into individual monolayers. Mechanical exfoliation and transfer are the most successful techniques to obtain high-quality single- or few-layer nanocrystals from their native multi-layer structures or their substrate for growth, which involves interfacial peeling and intralayer tearing processes that are controlled by material properties, geometry and the kinetics of exfoliation. This procedure is rationalized in this work through theoretical analysis and atomistic simulations. We propose a criterion to assess the feasibility for the exfoliation of two-dimensional sheets from an adhesive substrate without fracturing itself, and explore the effects of material and interface properties, as well as the geometrical, kinetic factors on the peeling behaviors and the torn morphology. This multi-scale approach elucidates the microscopic mechanism of the mechanical processes, offering predictive models and tools for the design of experimental procedures to obtain single- or few-layer two-dimensional materials and structures.

  18. Warranty menu design for a two-dimensional warranty

    International Nuclear Information System (INIS)

    Ye, Zhi-Sheng; Murthy, D.N. Pra

    2016-01-01

    Fierce competition in the commercial product market has forced manufacturers to provide customer-friendly warranties with a view to achieving higher customer satisfaction and increasing market share. This study proposes a strategy that offers customers a two-dimensional warranty menu with a number of warranty choices, called a flexible warranty policy. We investigate the design of a flexible two-dimensional warranty policy that contains a number of rectangular regions. This warranty policy is obtained by dividing customers into several groups according to their use rates and providing each group a germane warranty region. Consumers choose a favorable one from the menu according to their usage behaviors. Evidently, this flexible warranty policy is attractive to users of different usage behaviors, and thus it gives the manufacturer a good position in advertising the product. When consumers are unaware of their use rates upon purchase, we consider a fixed two-dimensional warranty policy with a stair-case warranty region and show that it is equivalent to the flexible policy. Such an equivalence reveals the inherent relationship between the rectangular warranty policy, the L-shape warranty policy, the step-stair warranty policy and the iso-probability-of-failure warranty policy that were extensively discussed in the literature. - Highlights: • We design a two-dimensional warranty menu with a number of warranty choices. • Consumers can choose a favorable one from the menu as per their usage behavior. • We further consider a fixed 2D warranty policy with a stair-case warranty region. • We show the equivalence of the two warranty policies.

  19. Quasi-integrability and two-dimensional QCD

    International Nuclear Information System (INIS)

    Abdalla, E.; Mohayaee, R.

    1996-10-01

    The notion of integrability in two-dimensional QCD is discussed. We show that in spite of an infinite number of conserved charges, particle production is not entirely suppressed. This phenomenon, which we call quasi-integrability, is explained in terms of quantum corrections to the combined algebra of higher-conserved and spectrum-generating currents. We predict the qualitative form of particle production probabilities and verify that they are in agreement with numerical data. We also discuss four-dimensional self-dual Yang-Mills theory in the light of our results. (author). 25 refs, 4 figs, 1 tab

  20. Probability Distribution and Deviation Information Fusion Driven Support Vector Regression Model and Its Application

    Directory of Open Access Journals (Sweden)

    Changhao Fan

    2017-01-01

    In modeling, only information from the deviation between the output of the support vector regression (SVR) model and the training sample is considered, whereas other prior information about the training sample, such as probability distribution information, is ignored. Probability distribution information describes the overall distribution of sample data in a training sample that contains different degrees of noise and potential outliers, and it helps develop a high-accuracy model. To mine and use the probability distribution information of a training sample, a new support vector regression model that incorporates probability distribution information, weighted SVR (PDISVR), is proposed. In the PDISVR model, the probability distribution of each sample is considered as a weight and is then introduced into the error coefficient and slack variables of SVR. Thus, the deviation and probability distribution information of the training sample are both used in the PDISVR model to eliminate the influence of noise and outliers in the training sample and to improve predictive performance. Furthermore, examples with different degrees of noise were employed to demonstrate the performance of PDISVR, which was then compared with those of three SVR-based methods. The results showed that PDISVR performs better than the three other methods.
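    The weighting idea can be sketched by using an estimated sample density as a per-sample weight in an off-the-shelf SVR (here scikit-learn's `sample_weight`, standing in for the paper's modification of the error coefficient and slack variables; the data and parameters are invented):

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde
    from sklearn.svm import SVR

    rng = np.random.default_rng(4)

    # Toy data: y = sin(x) + noise, with a few gross outliers injected.
    x = np.linspace(0, 2 * np.pi, 200)
    y = np.sin(x) + rng.normal(0, 0.1, x.size)
    y[::40] += 2.0  # outliers at every 40th point

    # PDISVR-style idea: weight each sample by its estimated probability density,
    # so low-density points (noise, outliers) get small weights. A Gaussian KDE
    # stands in here for whatever density estimate is actually used.
    w = gaussian_kde(np.vstack([x, y]))(np.vstack([x, y]))
    w /= w.max()

    X = x[:, None]
    plain = SVR(C=10.0).fit(X, y)
    weighted = SVR(C=10.0).fit(X, y, sample_weight=w)

    truth = np.sin(x)
    err_plain = np.mean((plain.predict(X) - truth) ** 2)
    err_weighted = np.mean((weighted.predict(X) - truth) ** 2)
    print(err_plain, err_weighted)
    ```

    Down-weighting the low-density points reduces the pull the outliers exert on the fitted regression surface, which is the mechanism the abstract credits for the improved predictive performance.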

  1. Predicting dihedral angle probability distributions for protein coil residues from primary sequence using neural networks

    DEFF Research Database (Denmark)

    Helles, Glennie; Fonseca, Rasmus

    2009-01-01

    residue in the input-window. The trained neural network shows a significant improvement (4-68%) in predicting the most probable bin (covering a 30°×30° area of the dihedral angle space) for all amino acids in the data set compared to first order statistics. An accuracy comparable to that of secondary...... seem to have a significant influence on the dihedral angles adopted by the individual amino acids in coil segments. In this work we attempt to predict a probability distribution of these dihedral angles based on the flanking residues. While attempts to predict dihedral angles of coil segments have been...... done previously, none have, to our knowledge, presented comparable results for the probability distribution of dihedral angles. Results: In this paper we develop an artificial neural network that uses an input-window of amino acids to predict a dihedral angle probability distribution for the middle...

  2. The distributed failure probability approach to dependent failure analysis, and its application

    International Nuclear Information System (INIS)

    Hughes, R.P.

    1989-01-01

    The Distributed Failure Probability (DFP) approach to the problem of dependent failures in systems is presented. The basis of the approach is that the failure probability of a component is a variable. The source of this variability is the change in the 'environment' of the component, where the term 'environment' is used to mean not only obvious environmental factors such as temperature etc., but also such factors as the quality of maintenance and manufacture. The failure probability is distributed among these various 'environments' giving rise to the Distributed Failure Probability method. Within the framework which this method represents, modelling assumptions can be made, based both on engineering judgment and on the data directly. As such, this DFP approach provides a soundly based and scrutable technique by which dependent failures can be quantitatively assessed. (orig.)
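    The core of the DFP idea, that a failure probability distributed over 'environments' induces dependence between components sharing an environment, can be sketched numerically (the Beta environment distribution is a hypothetical choice, not from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # DFP sketch: the failure probability p of a component is not a constant but
    # is distributed over environments (maintenance quality, temperature, ...).
    # Two identical components sharing an environment then fail dependently:
    # P(both fail) = E[p^2] >= E[p]^2.
    p_env = rng.beta(2, 50, 1_000_000)  # hypothetical environment distribution

    p_single = p_env.mean()          # marginal failure probability of one component
    p_both_dep = (p_env ** 2).mean() # both fail, shared environment (dependent)
    p_both_indep = p_single ** 2     # naive independence assumption

    print(p_single, p_both_dep, p_both_indep)
    ```

    The gap between the dependent and independent estimates grows with the variance of p over environments, so the spread of the environment distribution directly quantifies the strength of the common-cause dependence.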

  3. Probability distributions in conservative energy exchange models of multiple interacting agents

    International Nuclear Information System (INIS)

    Scafetta, Nicola; West, Bruce J

    2007-01-01

    Herein we study energy exchange models of multiple interacting agents that conserve energy in each interaction. The models differ regarding the rules that regulate the energy exchange and boundary effects. We find a variety of stochastic behaviours that manifest energy equilibrium probability distributions of different types and interaction rules that yield not only the exponential distributions such as the familiar Maxwell-Boltzmann-Gibbs distribution of an elastically colliding ideal particle gas, but also uniform distributions, truncated exponential distributions, Gaussian distributions, Gamma distributions, inverse power law distributions, mixed exponential and inverse power law distributions, and evolving distributions. This wide variety of distributions should be of value in determining the underlying mechanisms generating the statistical properties of complex phenomena including those to be found in complex chemical reactions
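A minimal sketch of one such model, assuming the simplest exchange rule (a randomly chosen pair pools its energy and splits it at a uniformly random fraction, one of the rule families the paper surveys): energy is conserved exactly in each interaction, and the population relaxes toward an exponential, Boltzmann-Gibbs-like distribution.

```python
import random

# One conservative exchange: a random pair pools and randomly resplits
# its energy (an assumed rule, for illustration).
def exchange_step(energies, rng):
    i, j = rng.sample(range(len(energies)), 2)
    total = energies[i] + energies[j]
    r = rng.random()
    energies[i], energies[j] = r * total, (1 - r) * total

rng = random.Random(0)
energies = [1.0] * 1000          # every agent starts with unit energy
total0 = sum(energies)
for _ in range(50_000):
    exchange_step(energies, rng)

# Energy is conserved exactly by construction...
assert abs(sum(energies) - total0) < 1e-6
# ...and the equilibrium is roughly exponential: many poor agents, few rich.
frac_below_mean = sum(e < 1.0 for e in energies) / len(energies)
print(frac_below_mean)  # close to 1 - 1/e ≈ 0.63 for an exponential
```

Changing the exchange rule (e.g. capping how much can move per interaction) is what steers the model toward the other distribution families the abstract lists.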

  4. Two-dimensional beam profiles and one-dimensional projections

    Science.gov (United States)

    Findlay, D. J. S.; Jones, B.; Adams, D. J.

    2018-05-01

    One-dimensional projections of improved two-dimensional representations of transverse profiles of particle beams are proposed for fitting to data from harp-type monitors measuring beam profiles on particle accelerators. Composite distributions, with tails smoothly matched on to a central (inverted) parabola, are shown to give noticeably better fits than single Gaussian and single parabolic distributions to data from harp-type beam profile monitors all along the proton beam transport lines to the two target stations on the ISIS Spallation Neutron Source. Some implications for inferring beam current densities on the beam axis are noted.
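One way to realize such a composite distribution, sketched with an illustrative parametrization rather than the paper's exact one: a central inverted parabola matched in value and slope to exponential tails, the two matching conditions fixing the tail decay rate and amplitude.

```python
import numpy as np

# Illustrative composite profile: parabola core for |x| < xm, exponential
# tails beyond, matched smoothly at xm (parameters are made up).
a, xm = 1.0, 0.6                          # parabola half-width, match point (xm < a)
k = 2 * xm / (a**2 - xm**2)               # slope matching fixes the tail decay rate
C = (1 - xm**2 / a**2) * np.exp(k * xm)   # value matching fixes the tail amplitude

def profile(x):
    x = np.abs(x)
    return np.where(x < xm, 1 - x**2 / a**2, C * np.exp(-k * x))

# Check continuity of value and slope at the match point numerically.
eps = 1e-6
assert abs(profile(xm - eps) - profile(xm + eps)) < 1e-5
left = (profile(xm) - profile(xm - eps)) / eps
right = (profile(xm + eps) - profile(xm)) / eps
assert abs(left - right) < 1e-3
print(float(profile(0.0)), float(profile(xm)))
```

The same construction works with other tail shapes; the point is that value-and-slope matching leaves no free tail parameters once the core is chosen.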

  5. Two-dimensional unsteady lift problems in supersonic flight

    Science.gov (United States)

    Heaslet, Max A; Lomax, Harvard

    1949-01-01

    The variation of pressure distribution is calculated for a two-dimensional supersonic airfoil either experiencing a sudden angle-of-attack change or entering a sharp-edge gust. From these pressure distributions the indicial lift functions applicable to unsteady lift problems are determined for two cases. Results are presented which permit the determination of maximum increment in lift coefficient attained by an unrestrained airfoil during its flight through a gust. As an application of these results, the minimum altitude for safe flight through a specific gust is calculated for a particular supersonic wing of given strength and wing loading.

  6. Predicting the probability of slip in gait: methodology and distribution study.

    Science.gov (United States)

    Gragg, Jared; Yang, James

    2016-01-01

    The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
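The single-integral form together with the trapezoidal method can be sketched as follows, here assuming normally distributed available and required friction with made-up parameters (an illustration only; the paper's point is precisely that normality cannot be assumed in general, so the integrand would be swapped for the actual fitted densities).

```python
import numpy as np
from math import erf, sqrt

# Slip occurs when required friction exceeds available friction:
# P(slip) = ∫ f_avail(u) * [1 - F_req(u)] du, via the trapezoidal rule.
mu_a, sd_a = 0.50, 0.05   # available friction (illustrative)
mu_r, sd_r = 0.35, 0.06   # required friction (illustrative)

def npdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def ncdf(x, mu, sd):
    return 0.5 * (1 + erf((x - mu) / (sd * sqrt(2))))

u = np.linspace(0.0, 1.0, 20001)
integrand = npdf(u, mu_a, sd_a) * (1 - np.vectorize(ncdf)(u, mu_r, sd_r))
p_slip = float(np.sum((integrand[1:] + integrand[:-1]) * 0.5 * np.diff(u)))

# For the normal/normal case a closed form exists: P(req - avail > 0).
p_exact = 1 - ncdf(0.0, mu_r - mu_a, sqrt(sd_a**2 + sd_r**2))
assert abs(p_slip - p_exact) < 1e-5
print(p_slip)
```

With non-normal fitted distributions only the numerical route survives, which is why the reduction to a single integral plus trapezoidal quadrature matters.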

  7. Bounds for the probability distribution function of the linear ACD process

    OpenAIRE

    Fernandes, Marcelo

    2003-01-01

    Rio de Janeiro This paper derives both lower and upper bounds for the probability distribution function of stationary ACD(p, q) processes. For the purpose of illustration, I specialize the results to the main parent distributions in duration analysis. Simulations show that the lower bound is much tighter than the upper bound.

  8. WIENER-HOPF SOLVER WITH SMOOTH PROBABILITY DISTRIBUTIONS OF ITS COMPONENTS

    Directory of Open Access Journals (Sweden)

    Mr. Vladimir A. Smagin

    2016-12-01

    Full Text Available The Wiener–Hopf solver with smooth probability distributions of its components is presented. The method is based on hyper-delta approximations of the initial distributions. The use of the Fourier series transformation and the characteristic function allows working with random variables concentrated on the transversal axis of absc.

  9. Loaded dice in Monte Carlo : importance sampling in phase space integration and probability distributions for discrepancies

    NARCIS (Netherlands)

    Hameren, Andreas Ferdinand Willem van

    2001-01-01

    Discrepancies play an important role in the study of uniformity properties of point sets. Their probability distributions are a help in the analysis of the efficiency of the Quasi Monte Carlo method of numerical integration, which uses point sets that are distributed more uniformly than sets of

  10. Probability distribution of long-run indiscriminate felling of trees in ...

    African Journals Online (AJOL)

    The study was undertaken to determine the probability distribution of Long-run indiscriminate felling of trees in northern senatorial district of Adamawa State. Specifically, the study focused on examining the future direction of indiscriminate felling of trees as well as its equilibrium distribution. A multi-stage and simple random ...

  11. Two-dimensional atom localization based on coherent field controlling in a five-level M-type atomic system.

    Science.gov (United States)

    Jiang, Xiangqian; Li, Jinjiang; Sun, Xiudong

    2017-12-11

    We study two-dimensional sub-wavelength atom localization based on the microwave coupling field controlling and spontaneously generated coherence (SGC) effect. For a five-level M-type atom, introducing a microwave coupling field between two upper levels and considering the quantum interference between two transitions from two upper levels to lower levels, the analytical expression of conditional position probability (CPP) distribution is obtained using the iterative method. The influence of the detuning of a spontaneously emitted photon, Rabi frequency of the microwave field, and the SGC effect on the CPP are discussed. The two-dimensional sub-half-wavelength atom localization with high-precision and high spatial resolution is achieved by adjusting the detuning and the Rabi frequency, where the atom can be localized in a region smaller than λ/10 × λ/10. The spatial resolution is improved significantly compared with the case without the microwave field.

  12. Feynman quasi probability distribution for spin-(1/2), and its generalizations

    International Nuclear Information System (INIS)

    Colucci, M.

    1999-01-01

    Feynman's paper Negative probability is examined; in it, after a discussion about the possibility of attributing a real physical meaning to quasi probability distributions, he introduces a new kind of distribution for spin-(1/2), together with a possible method of generalization to systems with an arbitrary number of states. The principal aim of this article is to shed light upon the method of construction of these distributions, taking into consideration their application to some experiments and discussing their positive and negative aspects

  13. The distribution function of a probability measure on a space with a fractal structure

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Granero, M.A.; Galvez-Rodriguez, J.F.

    2017-07-01

    In this work we show how to define a probability measure with the help of a fractal structure. One of the keys of this approach is to use the completion of the fractal structure. Then we use the theory of a cumulative distribution function on a Polish ultrametric space and describe it in this context. Finally, with the help of fractal structures, we prove that a function satisfying the properties of a cumulative distribution function on a Polish ultrametric space is a cumulative distribution function with respect to some probability measure on the space. (Author)

  14. Vector (two-dimensional) magnetic phenomena

    International Nuclear Information System (INIS)

    Enokizono, Masato

    2002-01-01

    In this paper, some interesting phenomena are described from the viewpoint of the two-dimensional magnetic property, here re-worded as the vector magnetic property. It exposes the imperfection of the conventional magnetic property, and some interesting phenomena were discovered as well. We found that magnetic materials have strong nonlinearity in both magnitude and spatial phase, due to the relationship between the magnetic field strength H-vector and the magnetic flux density B-vector. Therefore, magnetic properties should be defined as a vector relationship. Furthermore, a new Barkhausen signal was observed under rotating flux. (Author)

  15. Two-dimensional Semiconductor-Superconductor Hybrids

    DEFF Research Database (Denmark)

    Suominen, Henri Juhani

    This thesis investigates hybrid two-dimensional semiconductor-superconductor (Sm-S) devices and presents a new material platform exhibiting intimate Sm-S coupling straight out of the box. Starting with the conventional approach, we investigate coupling superconductors to buried quantum well....... To overcome these issues we integrate the superconductor directly into the semiconducting material growth stack, depositing it in-situ in a molecular beam epitaxy system under high vacuum. We present a number of experiments on these hybrid heterostructures, demonstrating near unity interface transparency...

  16. Optimized two-dimensional Sn transport (BISTRO)

    International Nuclear Information System (INIS)

    Palmiotti, G.; Salvatores, M.; Gho, C.

    1990-01-01

    This paper reports on an Sn two-dimensional transport module developed for the French fast reactor code system CCRR to optimize algorithms in order to obtain the best performance in terms of computational time. A form of diffusion synthetic acceleration was adopted, and a special effort was made to solve the associated diffusion equation efficiently. The improvements in the algorithms, along with the use of an efficient programming language, led to a significant gain in computational time with respect to the DOT code

  17. Binding energy of two-dimensional biexcitons

    DEFF Research Database (Denmark)

    Singh, Jai; Birkedal, Dan; Vadim, Lyssenko

    1996-01-01

    Using a model structure for a two-dimensional (2D) biexciton confined in a quantum well, it is shown that the form of the Hamiltonian of the 2D biexciton reduces into that of an exciton. The binding energies and Bohr radii of a 2D biexciton in its various internal energy states are derived...... analytically using the fractional dimension approach. The ratio of the binding energy of a 2D biexciton to that of a 2D exciton is found to be 0.228, which agrees very well with the recent experimental value. The results of our approach are compared with those of earlier theories....

  18. Airy beams on two dimensional materials

    Science.gov (United States)

    Imran, Muhammad; Li, Rujiang; Jiang, Yuyu; Lin, Xiao; Zheng, Bin; Dehdashti, Shahram; Xu, Zhiwei; Wang, Huaping

    2018-05-01

    We propose that quasi-transverse-magnetic (quasi-TM) Airy beams can be supported on two dimensional (2D) materials. By taking graphene as a typical example, the solution of quasi-TM Airy beams is studied under the paraxial approximation. The analytical field intensity in a bilayer graphene-based planar plasmonic waveguide is confirmed by the simulation results. Due to the tunability of the chemical potential of graphene, the self-accelerating behavior of the quasi-TM Airy beam can be steered effectively. 2D materials thus provide a good platform to investigate the propagation of Airy beams.

  19. Two-dimensional heat flow apparatus

    Science.gov (United States)

    McDougall, Patrick; Ayars, Eric

    2014-06-01

    We have created an apparatus to quantitatively measure two-dimensional heat flow in a metal plate using a grid of temperature sensors read by a microcontroller. Real-time temperature data are collected from the microcontroller by a computer for comparison with a computational model of the heat equation. The microcontroller-based sensor array allows previously unavailable levels of precision at very low cost, and the combination of measurement and modeling makes for an excellent apparatus for the advanced undergraduate laboratory course.
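A minimal sketch of the modeling half of such an apparatus: an explicit finite-difference solution of the two-dimensional heat equation on a plate grid, with hypothetical material and grid parameters (not the apparatus's actual values).

```python
import numpy as np

# Explicit FTCS scheme for dT/dt = alpha * (d2T/dx2 + d2T/dy2)
# on a square plate; parameters are illustrative assumptions.
alpha, dx, dt = 1e-4, 1e-2, 0.1          # diffusivity (m^2/s), grid step (m), time step (s)
assert alpha * dt / dx**2 <= 0.25        # 2D explicit-scheme stability condition

T = np.zeros((50, 50))
T[0, :] = 100.0                          # one hot edge; other edges held at 0

for _ in range(2000):
    lap = (T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:]
           - 4 * T[1:-1, 1:-1])
    T[1:-1, 1:-1] += alpha * dt / dx**2 * lap

# Temperatures stay bounded by the boundary values (discrete maximum principle).
assert T.min() >= 0.0 and T.max() <= 100.0
print(T[25, 25])
```

Comparing a grid like `T` against the sensor-array readings at matching positions is the measurement-versus-model exercise the abstract describes.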

  20. Two-dimensional shielding benchmarks for iron at YAYOI, (1)

    International Nuclear Information System (INIS)

    Oka, Yoshiaki; An, Shigehiro; Kasai, Shigeru; Miyasaka, Shun-ichi; Koyama, Kinji.

    The aim of this work is to assess the collapsed neutron and gamma multigroup cross sections for two dimensional discrete ordinate transport code. Two dimensional distributions of neutron flux and gamma ray dose through a 70cm thick and 94cm square iron shield were measured at the fast neutron source reactor ''YAYOI''. The iron shield was placed over the lead reflector in the vertical experimental column surrounded by heavy concrete wall. The detectors used in this experiment were threshold detectors In, Ni, Al, Mg, Fe and Zn, sandwitch resonance detectors Au, W and Co, activation foils Au for neutrons and thermoluminescence detectors for gamma ray dose. The experimental results were compared with the calculated ones by the discrete ordinate transport code ANISN and TWOTRAN. The region-wise, coupled neutron-gamma multigroup cross-sections (100n+20gamma, EURLIB structure) were generated from ENDF/B-IV library for neutrons and POPOP4 library for gamma-ray production cross-sections by using the code system RADHEAT. The effective microscopic neutron cross sections were obtained from the infinite dilution values applying ABBN type self-shielding factors. The gamma ray production multigroup cross-sections were calculated from these effective microscopic neutron cross-sections. For two-dimensional calculations the group constants were collapsed into 10 neutron groups and 3 gamma groups by using ANISN. (auth.)

  1. Time dependent and asymptotic neutron number probability distribution calculation using discrete Fourier transform

    International Nuclear Information System (INIS)

    Humbert, Ph.

    2005-01-01

    In this paper we consider the probability distribution of neutrons in a multiplying assembly. The problem is studied using a space-independent, one-group neutron point reactor model without delayed neutrons. We recall the generating function methodology and analytical results obtained by G.I. Bell when the c_2 approximation is used and we present numerical solutions in the general case, without this approximation. The neutron source induced distribution is calculated using the single initial neutron distribution which satisfies a master (Kolmogorov backward) equation. This equation is solved using the generating function method. The generating function satisfies a differential equation and the probability distribution is derived by inversion of the generating function. Numerical results are obtained using the same methodology where the generating function is the Fourier transform of the probability distribution. Discrete Fourier transforms are used to calculate the discrete time dependent distributions and continuous Fourier transforms are used to calculate the asymptotic continuous probability distributions. Numerical applications are presented to illustrate the method. (author)
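The inversion step described here, recovering a discrete probability distribution from its generating function sampled on the unit circle via a discrete Fourier transform, can be sketched in isolation. A Poisson generating function stands in for the one obtained from the point-reactor master equation; that substitution is an assumption for illustration.

```python
import math
import numpy as np

# If G(z) = sum_n P(n) z^n, sampling G at the N-th roots of unity and
# applying a DFT recovers P(n) (up to aliasing of mass beyond n = N-1).
N = 64
lam = 3.0
z = np.exp(2j * np.pi * np.arange(N) / N)   # points on the unit circle
G = np.exp(lam * (z - 1))                   # Poisson pgf: G(z) = e^{lam(z-1)}

# P(n) = (1/N) sum_k G(z_k) z_k^{-n}, i.e. a forward DFT of the samples / N.
P = np.fft.fft(G).real / N

exact = np.array([math.exp(-lam) * lam**n / math.factorial(n) for n in range(N)])
assert np.allclose(P, exact, atol=1e-10)
print(P[:4])
```

The aliasing error is the total probability mass at n ≥ N, so N just needs to be large enough to cover the distribution's support.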

  2. The p-sphere and the geometric substratum of power-law probability distributions

    International Nuclear Information System (INIS)

    Vignat, C.; Plastino, A.

    2005-01-01

    Links between power law probability distributions and marginal distributions of uniform laws on p-spheres in R^n show that a mathematical derivation of the Boltzmann-Gibbs distribution necessarily passes through power law ones. Results are also given that link parameters p and n to the value of the non-extensivity parameter q that characterizes these power laws in the context of non-extensive statistics.

  3. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    Science.gov (United States)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.

  4. Investigating and improving student understanding of the probability distributions for measuring physical observables in quantum mechanics

    International Nuclear Information System (INIS)

    Marshman, Emily; Singh, Chandralekha

    2017-01-01

    A solid grasp of the probability distributions for measuring physical observables is central to connecting the quantum formalism to measurements. However, students often struggle with the probability distributions of measurement outcomes for an observable and have difficulty expressing this concept in different representations. Here we first describe the difficulties that upper-level undergraduate and PhD students have with the probability distributions for measuring physical observables in quantum mechanics. We then discuss how student difficulties found in written surveys and individual interviews were used as a guide in the development of a quantum interactive learning tutorial (QuILT) to help students develop a good grasp of the probability distributions of measurement outcomes for physical observables. The QuILT strives to help students become proficient in expressing the probability distributions for the measurement of physical observables in Dirac notation and in the position representation and be able to convert from Dirac notation to position representation and vice versa. We describe the development and evaluation of the QuILT and findings about the effectiveness of the QuILT from in-class evaluations. (paper)

  5. Two-dimensional goodness-of-fit testing in astronomy

    International Nuclear Information System (INIS)

    Peacock, J. A.

    1983-01-01

    This paper deals with the techniques available to test for consistency between the empirical distribution of data points on a plane and a hypothetical density law. Two new statistical tests are developed. The first is a two-dimensional version of the Kolmogorov-Smirnov test, for which the distribution of the test statistic is investigated using a Monte Carlo method. This test is found in practice to be very nearly distribution-free, and empirical formulae for the confidence levels are given. Secondly, the method of power-spectrum analysis is extended to deal with cases in which the null hypothesis is not a uniform distribution. These methods are illustrated by application to the distribution of quasar candidates found on an objective-prism plate of the Virgo Cluster. (author)
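A brute-force sketch of the two-dimensional Kolmogorov-Smirnov statistic in the spirit described here: because a 2D cumulative distribution depends on how the plane is quadrant-ordered, the statistic maximizes |empirical − model| probability over all four quadrant orientations at data-defined points. Testing against a uniform density on the unit square is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.random((100, 2))                  # data actually drawn from the model

def d2ks_uniform(pts):
    """Max |empirical - model| quadrant probability over all data-defined
    corners and all four quadrant orientations (uniform model assumed)."""
    d = 0.0
    for x in pts[:, 0]:
        for y in pts[:, 1]:
            emp = np.mean((pts[:, 0] <= x) & (pts[:, 1] <= y))
            d = max(d, abs(emp - x * y))
            emp = np.mean((pts[:, 0] > x) & (pts[:, 1] <= y))
            d = max(d, abs(emp - (1 - x) * y))
            emp = np.mean((pts[:, 0] <= x) & (pts[:, 1] > y))
            d = max(d, abs(emp - x * (1 - y)))
            emp = np.mean((pts[:, 0] > x) & (pts[:, 1] > y))
            d = max(d, abs(emp - (1 - x) * (1 - y)))
    return d

D = d2ks_uniform(pts)
print(D)   # small when the data really follow the model
```

Confidence levels for `D` are what the paper's Monte Carlo calibration provides; the statistic itself is only the first half of the test.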

  6. Regional probability distribution of the annual reference evapotranspiration and its effective parameters in Iran

    Science.gov (United States)

    Khanmohammadi, Neda; Rezaie, Hossein; Montaseri, Majid; Behmanesh, Javad

    2017-10-01

    The reference evapotranspiration (ET0) plays an important role in water management plans in arid or semi-arid countries such as Iran, so regional analysis of this parameter is important. The ET0 process is affected by several meteorological parameters such as wind speed, solar radiation, temperature and relative humidity; therefore, the effect of the distribution type of the effective meteorological variables on the ET0 distribution was analyzed. For this purpose, the regional probability distributions of the annual ET0 and its effective parameters were selected. The data used in this research were recorded at 30 synoptic stations of Iran during 1960-2014. Using the probability plot correlation coefficient (PPCC) test and the L-moment method, five common distributions were compared and the best distribution was selected. The results of the PPCC test and the L-moment diagram indicated that the Pearson type III distribution was the best probability distribution for fitting the annual ET0 and its four effective parameters. The RMSE results showed that the ability of the PPCC test and the L-moment method for regional analysis of reference evapotranspiration and its effective parameters was similar. The results also showed that the distribution type of the parameters which affect ET0 values can affect the distribution of reference evapotranspiration.
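The sample L-moments underlying the L-moment method can be sketched with Hosking's standard unbiased estimators; the synthetic uniform sample below (with known λ1 = 1/2, λ2 = 1/6) is an assumption for illustration, not the Iranian station data.

```python
import numpy as np

def l_moments_12(x):
    """First two sample L-moments via probability-weighted moments:
    b0 = mean, b1 = n^-1 * sum_i ((i-1)/(n-1)) * x_(i), lambda2 = 2*b1 - b0."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum((np.arange(n) / (n - 1)) * x) / n
    return b0, 2 * b1 - b0

rng = np.random.default_rng(0)
sample = rng.random(200_000)      # uniform(0,1): lambda1 = 1/2, lambda2 = 1/6
l1, l2 = l_moments_12(sample)
assert abs(l1 - 0.5) < 0.01 and abs(l2 - 1 / 6) < 0.01
print(l1, l2)
```

Higher L-moment ratios (L-skewness, L-kurtosis) built the same way are what the L-moment diagram compares against the candidate distributions' theoretical curves.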

  7. Quantitative non-monotonic modeling of economic uncertainty by probability and possibility distributions

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2012-01-01

    ... to the understanding of similarities and differences of the two approaches as well as practical applications. The probability approach offers a good framework for representation of randomness and variability. Once the probability distributions of uncertain parameters and their correlations are known, the resulting uncertainty can be calculated. The possibility approach is particularly well suited for representation of uncertainty of a non-statistical nature due to lack of knowledge and requires less information than the probability approach. Based on the kind of uncertainty and knowledge present, these aspects are thoroughly discussed in the case of rectangular representation of uncertainty by the uniform probability distribution and the interval, respectively. Also triangular representations are dealt with and compared. Calculation of monotonic as well as non-monotonic functions of variables represented...

  8. Calculation of magnetization curves and probability distribution for monoclinic and uniaxial systems

    International Nuclear Information System (INIS)

    Sobh, Hala A.; Aly, Samy H.; Yehia, Sherif

    2013-01-01

    We present the application of a simple classical statistical mechanics-based model to selected monoclinic and hexagonal model systems. In this model, we treat the magnetization as a classical vector whose angular orientation is dictated by the laws of equilibrium classical statistical mechanics. We calculate for these anisotropic systems, the magnetization curves, energy landscapes and probability distribution for different sets of relevant parameters and magnetic fields of different strengths and directions. Our results demonstrate a correlation between the most probable orientation of the magnetization vector, the system's parameters, and the external magnetic field. -- Highlights: ► We calculate magnetization curves and probability angular distribution of the magnetization. ► The magnetization curves are consistent with probability results for the studied systems. ► Monoclinic and hexagonal systems behave differently due to their different anisotropies

  9. Study of two-dimensional interchange turbulence

    International Nuclear Information System (INIS)

    Sugama, Hideo; Wakatani, Masahiro.

    1990-04-01

    An eddy viscosity model describing enstrophy transfer in two-dimensional turbulence is presented. This model is similar to that of Canuto et al. and provides an equation for the energy spectral function F(k) as a function of the energy input rate to the system per unit wavenumber, γ_s(k). In the enstrophy-transfer inertial range, F(k) ∝ k^-3 is predicted by the model. The eddy viscosity model is applied to the interchange turbulence of a plasma in a shearless magnetic field. Numerical simulation of the two-dimensional interchange turbulence demonstrates that the energy spectrum in the high wavenumber region is well described by this model. The turbulent transport driven by the interchange turbulence is expressed in terms of the Nusselt number Nu, the Rayleigh number Ra and the Prandtl number Pr in the same manner as in the thermal convection problem. When we use the linear growth rate for γ_s(k), our theoretical model predicts Nu ∝ (Ra·Pr)^1/2 for a constant background pressure gradient and Nu ∝ (Ra·Pr)^1/3 for a self-consistent background pressure profile with the stress-free slip boundary conditions. The latter agrees with our numerical result showing Nu ∝ Ra^1/3. (author)

  10. Two-Dimensional Theory of Scientific Representation

    Directory of Open Access Journals (Sweden)

    A Yaghmaie

    2013-03-01

    Full Text Available Scientific representation is an interesting topic for philosophers of science, many of whom have recently explored it from different points of view. There are currently two competing approaches to the issue: cognitive and non-cognitive, and each of them claims its own merits over the other. This article tries to provide a hybrid theory of scientific representation, called Two-Dimensional Theory of Scientific Representation, which has the merits of the two accounts and is free of their shortcomings. To do this, we will argue that although scientific representation needs to use the notion of intentionality, such a notion is defined and realized in a simply structural form contrary to what cognitive approach says about intentionality. After a short introduction, the second part of the paper is devoted to introducing theories of scientific representation briefly. In the third part, the structural accounts of representation will be criticized. The next step is to introduce the two-dimensional theory which involves two key components: fixing and structural fitness. It will be argued that fitness is an objective and non-intentional relation, while fixing is intentional.

  11. Count data, detection probabilities, and the demography, dynamics, distribution, and decline of amphibians.

    Science.gov (United States)

    Schmidt, Benedikt R

    2003-08-01

    The evidence for amphibian population declines is based on count data that were not adjusted for detection probabilities. Such data are not reliable even when collected using standard methods. The formula C = Np (where C is a count, N the true parameter value, and p is a detection probability) relates count data to demography, population size, or distributions. With unadjusted count data, one assumes a linear relationship between C and N and that p is constant. These assumptions are unlikely to be met in studies of amphibian populations. Amphibian population data should be based on methods that account for detection probabilities.
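The pitfall behind C = Np is easy to make concrete with hypothetical numbers: a change in detection probability p alone produces a spurious "decline" in raw counts, while p-adjusted estimates recover the unchanged true population size N.

```python
# Hypothetical survey of the same population in two years; only the
# detection probability changes (illustrative numbers, not real data).
N = 500                       # true population size, both years
p_year1, p_year2 = 0.6, 0.3   # detection probabilities

C1, C2 = N * p_year1, N * p_year2   # expected counts: C = N * p
assert C2 < C1                       # counts halve although N is unchanged

# Adjusting by a known (or estimated) p recovers the same N both years.
assert round(C1 / p_year1) == round(C2 / p_year2) == 500
print(C1, C2)
```

Estimating p (e.g. from repeated surveys) rather than assuming it constant is the correction the abstract argues for.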

  12. A two-dimensional model with three regions for the reflooding study

    International Nuclear Information System (INIS)

    Motta, A.M.T.; Kinrys, S.; Roberty, N.C.; Carmo, E.G.D. do; Oliveira, L.F.S. de.

    1983-02-01

    A two-dimensional semi-analytical model, with three heat transfer regions is described for the calculation of flood ratio, the length of quenching front and the temperature distribution in the cladding. (E.G.) [pt

  13. A two-dimensional model with three regions for the reflooding study

    International Nuclear Information System (INIS)

    Motta, A.M.T.; Kinrys, S.; Roberty, N.C.; Carmo, E.G.D. do; Oliveira, L.F.S. de

    1982-01-01

    A two-dimensional semi-analytical model, with three heat transfer regions is described for the calculation of flood ratio, the length of quenching front and the temperature distribution in the cladding. (E.G.) [pt

  14. On the probability distribution of the stochastic saturation scale in QCD

    International Nuclear Information System (INIS)

    Marquet, C.; Soyez, G.; Xiao Bowen

    2006-01-01

    It was recently noticed that high-energy scattering processes in QCD have a stochastic nature. An event-by-event scattering amplitude is characterised by a saturation scale which is a random variable. The statistical ensemble of saturation scales formed with all the events is distributed according to a probability law whose cumulants have been recently computed. In this work, we obtain the probability distribution from the cumulants. We prove that it can be considered as Gaussian over a large domain that we specify and our results are confirmed by numerical simulations

  15. Statistical thermodynamics of a two-dimensional relativistic gas.

    Science.gov (United States)

    Montakhab, Afshin; Ghodrat, Malihe; Barati, Mahmood

    2009-03-01

    In this paper we study a fully relativistic model of a two-dimensional hard-disk gas. This model avoids the general problems associated with relativistic particle collisions and is therefore an ideal system for studying relativistic effects in statistical thermodynamics. We study this model using molecular-dynamics simulation, concentrating on the velocity distribution functions. We obtain results for the x and y components of velocity in the rest frame (Γ) as well as the moving frame (Γ′). Our results confirm that the Jüttner distribution is the correct generalization of the Maxwell-Boltzmann distribution. We obtain the same "temperature" parameter β for both frames, consistent with a recent study of a limited one-dimensional model. We also address the controversial topic of temperature transformation. We show that while local thermal equilibrium holds in the moving frame, relying on statistical methods such as distribution functions or the equipartition theorem is ultimately inconclusive in deciding on a correct temperature transformation law (if any).
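For reference, the two-dimensional Jüttner distribution mentioned here, f(p) ∝ exp(−β·γ(p)) with γ = √(1 + p²) in units m = c = 1, has a closed-form normalization over the momentum plane; the sketch below checks it by quadrature (the grid and the β value are arbitrary choices, not the paper's).

```python
import numpy as np

# Normalization of the 2D Juttner distribution over the momentum plane:
# Z = ∫ exp(-beta * sqrt(1 + p^2)) d^2p = 2*pi*exp(-beta)*(1 + beta)/beta^2
beta = 2.0
p = np.linspace(0.0, 40.0, 400001)         # radial momentum grid (cutoff 40 is ample)
gamma = np.sqrt(1.0 + p**2)
integrand = 2 * np.pi * p * np.exp(-beta * gamma)
dp = p[1] - p[0]
Z_num = float(np.sum((integrand[1:] + integrand[:-1]) * 0.5 * dp))

Z_exact = 2 * np.pi * np.exp(-beta) * (1 + beta) / beta**2
assert abs(Z_num - Z_exact) < 1e-6 * Z_exact
print(Z_num, Z_exact)
```

The closed form follows from the substitution u = √(1 + p²), which turns the angular-plus-radial integral into 2π∫₁^∞ u·e^(−βu) du.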

  16. Engineering topological edge states in two dimensional magnetic photonic crystal

    Science.gov (United States)

    Yang, Bing; Wu, Tong; Zhang, Xiangdong

    2017-01-01

    Based on a perturbative approach, we propose a simple and efficient method to engineer the topological edge states in two dimensional magnetic photonic crystals. The topological edge states in the microstructures can be constructed and varied by altering the parameters of the microstructure according to the field-energy distributions of the Bloch states at the related Bloch wave vectors. The validity of the proposed method has been demonstrated by exact numerical calculations through three concrete examples. Our method makes the topological edge states "designable."

  17. Field analysis of two-dimensional focusing grating

    OpenAIRE

    Borsboom, P.P.; Frankena, H.J.

    1995-01-01

    The method that we have developed [P-P. Borsboom, Ph.D. dissertation (Delft University of Technology, Delft, The Netherlands); P-P. Borsboom and H. J. Frankena, J. Opt. Soc. Am. A 12, 1134–1141 (1995)] is successfully applied to a two-dimensional focusing grating coupler. The field in the focal region has been determined for symmetrical chirped gratings consisting of as many as 124 corrugations. The intensity distribution in the focal region agrees well with the approximate predictions of geo...

  18. Autocorrelation based reconstruction of two-dimensional binary objects

    International Nuclear Information System (INIS)

    Mejia-Barbosa, Y.; Castaneda, R.

    2005-10-01

    A method for reconstructing two-dimensional binary objects from their autocorrelation function is discussed. The objects consist of a finite set of identical elements. The reconstruction algorithm is based on the concept of a class of element pairs, defined as the set of element pairs with the same separation vector. This concept makes it possible to resolve the redundancy introduced by the element pairs of each class. It is also shown that different objects, consisting of an equal number of elements and the same classes of pairs, produce Fraunhofer diffraction patterns with identical intensity distributions. However, the method predicts all the possible objects that produce the same Fraunhofer pattern. (author)
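The classes of element pairs the abstract describes can be read directly off the discrete autocorrelation, which counts the pairs at each separation vector. A hedged sketch; the FFT route and the toy three-element object are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def autocorrelation(obj):
    """Discrete autocorrelation of a binary 2D object via FFT.
    The entry at lag (dy, dx) counts the element pairs whose
    separation vector is (dy, dx) -- i.e. the size of that class."""
    shape = tuple(2 * s for s in obj.shape)   # zero-pad against wrap-around
    F = np.fft.fft2(obj, s=shape)
    ac = np.fft.ifft2(F * np.conj(F)).real
    return np.round(np.fft.fftshift(ac)).astype(int)

# Three collinear elements: classes of pairs at separations (0,1) and (0,2)
obj = np.zeros((5, 5))
obj[2, 1:4] = 1.0
ac = autocorrelation(obj)
cy, cx = ac.shape[0] // 2, ac.shape[1] // 2
print(ac[cy, cx], ac[cy, cx + 1], ac[cy, cx + 2])  # -> 3 2 1
```

The zero-lag peak equals the number of elements (self-pairs), and each off-center value is the multiplicity of one class of pairs, which is exactly the redundancy the reconstruction has to resolve.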

  19. The joint probability distribution of structure factors incorporating anomalous-scattering and isomorphous-replacement data

    International Nuclear Information System (INIS)

    Peschar, R.; Schenk, H.

    1991-01-01

    A method to derive joint probability distributions of structure factors is presented which incorporates anomalous-scattering and isomorphous-replacement data in a unified procedure. The structure factors F_H and F_{-H}, whose magnitudes differ due to anomalous scattering, are shown to be isomorphously related. This leads to a definition of isomorphism by means of which isomorphous-replacement and anomalous-scattering data can be handled simultaneously. The definition and calculation of the general term of the joint probability distribution for isomorphous structure factors turns out to be crucial. Its analytical form leads to an algorithm by means of which any particular joint probability distribution of structure factors can be constructed. The calculation of the general term is discussed for the case of four isomorphous structure factors in P1, assuming the atoms to be independently and uniformly distributed. A main result is the construction of the probability distribution of the 64 triplet phase sums present in space group P1 amongst four isomorphous structure factors F_H, four isomorphous F_K and four isomorphous F_{-H-K}. The procedure is readily generalized to the case in which an arbitrary number of isomorphous structure factors are available for F_H, F_K and F_{-H-K}. (orig.)

  20. Two dimensional generalizations of the Newcomb equation

    International Nuclear Information System (INIS)

    Dewar, R.L.; Pletzer, A.

    1989-11-01

    The Bineau reduction to scalar form of the equation governing ideal, zero frequency linearized displacements from a hydromagnetic equilibrium possessing a continuous symmetry is performed in 'universal coordinates', applicable to both the toroidal and helical cases. The resulting generalized Newcomb equation (GNE) has in general a more complicated form than the corresponding one dimensional equation obtained by Newcomb in the case of circular cylindrical symmetry, but in this cylindrical case the equation can be transformed to that of Newcomb. In the two dimensional case there is a transformation which leaves the form of the GNE invariant and simplifies the Frobenius expansion about a rational surface, especially in the limit of zero pressure gradient. The Frobenius expansion about a mode rational surface is developed and the connection with Hamiltonian transformation theory is shown. 17 refs

  1. Pressure of two-dimensional Yukawa liquids

    International Nuclear Information System (INIS)

    Feng, Yan; Wang, Lei; Tian, Wen-de; Goree, J; Liu, Bin

    2016-01-01

    A simple analytic expression for the pressure of a two-dimensional Yukawa liquid is found by fitting results from a molecular dynamics simulation. The results verify that the pressure can be written as the sum of a potential term which is a simple multiple of the Coulomb potential energy at a distance of the Wigner–Seitz radius, and a kinetic term which is a multiple of the one for an ideal gas. Dimensionless coefficients for each of these terms are found empirically, by fitting. The resulting analytic expression, with its empirically determined coefficients, is plotted as isochores, or curves of constant area. These results should be applicable to monolayer dusty plasmas. (paper)
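The empirical fitting procedure described, determining dimensionless coefficients for a potential term and a kinetic term by least squares, can be sketched on synthetic data. All numbers below, including the assumed coefficients 0.4 and 1.1 and the Wigner-Seitz energy expression, are fabricated for illustration and are not the paper's results:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "simulation" data (illustrative only, not from the paper):
# areal density n, temperature T, and the Coulomb potential energy
# e_ws evaluated at the Wigner-Seitz radius a = 1/sqrt(pi*n), with Q = 1.
n = rng.uniform(0.5, 2.0, 50)
T = rng.uniform(0.1, 1.0, 50)
e_ws = np.sqrt(np.pi * n)
p_true = 0.4 * n * e_ws + 1.1 * n * T      # assumed coefficients
p_meas = p_true * (1.0 + 0.01 * rng.normal(size=50))  # 1% "simulation noise"

# Least-squares fit of p = a*(n*e_ws) + b*(n*T), mirroring the empirical approach
A = np.column_stack([n * e_ws, n * T])
(a, b), *_ = np.linalg.lstsq(A, p_meas, rcond=None)
print(round(a, 2), round(b, 2))  # recovers approximately 0.4 and 1.1
```

The fitted a and b play the role of the paper's dimensionless coefficients for the potential and kinetic contributions; with them in hand, isochores follow by evaluating the expression at fixed n.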

  2. Two dimensional nanomaterials for flexible supercapacitors.

    Science.gov (United States)

    Peng, Xu; Peng, Lele; Wu, Changzheng; Xie, Yi

    2014-05-21

    Flexible supercapacitors, as one of the most promising emerging energy storage devices, are of great interest owing to their high power density and great mechanical compliance, making them very suitable as power back-ups for future stretchable electronics. Two-dimensional (2D) nanomaterials, including quasi-2D graphene and inorganic graphene-like materials (IGMs), have been extensively explored and offer huge potential for the development of flexible supercapacitors with higher electrochemical performance. This review article is devoted to recent progress in engineering 2D nanomaterials for flexible supercapacitors, surveying the evolution of electrode materials, recent developments in 2D nanomaterials and their hybrid nanostructures with regulated electrical properties, and the new planar configurations of flexible supercapacitors. Furthermore, a brief discussion of future directions, challenges and opportunities in this fascinating area is also provided.

  3. Geometrical aspects of solvable two dimensional models

    International Nuclear Information System (INIS)

    Tanaka, K.

    1989-01-01

    It was noted that there is a connection between the non-linear two-dimensional (2D) models and the scalar curvature r, i.e., when r = -2 the equations of motion of the Liouville and sine-Gordon models were obtained. Further, solutions of various classical nonlinear 2D models can be obtained from the condition that the appropriate curvature two form Ω = 0, which suggests that these models are closely related. This relation is explored further in the classical version by obtaining the equations of motion from the evolution equations, the infinite number of conserved quantities, and the common central charge. The Poisson brackets of the solvable 2D models are specified by the Virasoro algebra. 21 refs

  4. Two-dimensional materials for ultrafast lasers

    International Nuclear Information System (INIS)

    Wang Fengqiu

    2017-01-01

    As the fundamental optical properties and novel photophysics of graphene and related two-dimensional (2D) crystals are being extensively investigated and revealed, a range of potential applications in optical and optoelectronic devices have been proposed and demonstrated. Of the many possibilities, the use of 2D materials as broadband, cost-effective and versatile ultrafast optical switches (or saturable absorbers) for short-pulsed lasers constitutes a rapidly developing field with not only a good number of publications, but also a promising prospect for commercial exploitation. This review primarily focuses on the recent development of pulsed lasers based on several representative 2D materials. The comparative advantages of these materials are discussed, and challenges to practical exploitation, which represent good future directions of research, are laid out. (paper)

  5. Two-dimensional phase fraction charts

    International Nuclear Information System (INIS)

    Morral, J.E.

    1984-01-01

    A phase fraction chart is a graphical representation of the amount of each phase present in a system as a function of temperature, composition or other variable. Examples are phase fraction versus temperature charts used to characterize specific alloys and as a teaching tool in elementary texts, and Schaeffler diagrams used to predict the amount of ferrite in stainless steel welds. Isothermal-transformation diagrams (TTT diagrams) are examples that give phase (or microconstituent) amount versus temperature and time. The purpose of this communication is to discuss the properties of two-dimensional phase fraction charts in more general terms than have been reported before. It is shown that they can represent multi-component, multiphase equilibria in a way which is easier to read and which contains more information than the isotherms and isopleths of multi-component phase diagrams
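The quantity plotted on such a chart can be illustrated with the lever rule for a binary two-phase field. A minimal sketch with hypothetical linear liquidus/solidus boundaries, not data from the communication:

```python
def phase_fractions(c0, T, liquidus, solidus):
    """Lever-rule phase fractions (solid, liquid) for alloy composition c0
    at temperature T, given the equilibrium boundary compositions."""
    cl = liquidus(T)   # liquid composition in equilibrium at T
    cs = solidus(T)    # solid composition in equilibrium at T
    if c0 <= cs:
        return 1.0, 0.0                  # fully solid
    if c0 >= cl:
        return 0.0, 1.0                  # fully liquid
    f_liquid = (c0 - cs) / (cl - cs)     # lever rule
    return 1.0 - f_liquid, f_liquid

# Assumed linear phase boundaries (illustrative, arbitrary units)
liquidus = lambda T: (1000.0 - T) / 400.0
solidus = lambda T: (1000.0 - T) / 800.0

for T in (975.0, 950.0, 900.0):
    fs, fl = phase_fractions(0.10, T, liquidus, solidus)
    print(T, round(fs, 3), round(fl, 3))
```

Sweeping T for a fixed composition traces one phase-fraction-versus-temperature curve; a full chart stacks these fractions for every phase present.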

  6. Two-dimensional motions of rockets

    International Nuclear Information System (INIS)

    Kang, Yoonhwan; Bae, Saebyok

    2007-01-01

    We analyse the two-dimensional motions of the rockets for various types of rocket thrusts, the air friction and the gravitation by using a suitable representation of the rocket equation and the numerical calculation. The slope shapes of the rocket trajectories are discussed for the three types of rocket engines. Unlike the projectile motions, the descending parts of the trajectories tend to have gentler and straighter slopes than the ascending parts for relatively large launching angles due to the non-vanishing thrusts. We discuss the ranges, the maximum altitudes and the engine performances of the rockets. It seems that the exponential fuel exhaustion can be the most potent engine for the longest and highest flights

  7. Two dimensional NMR studies of polysaccharides

    International Nuclear Information System (INIS)

    Byrd, R.A.; Egan, W.; Summers, M.F.

    1987-01-01

    Polysaccharides are very important components in the immune response system. Capsular polysaccharides and lipopolysaccharides occupy cell surface sites of bacteria, play key roles in recognition and some have been used to develop vaccines. Consequently, the ability to determine chemical structures of these systems is vital to an understanding of their immunogenic action. The authors have been utilizing recently developed two-dimensional homonuclear and heteronuclear correlation spectroscopy for unambiguous assignment and structure determination of a number of polysaccharides. In particular, the ¹H-detected heteronuclear correlation experiments are essential to the rapid and sensitive determination of these structures. Linkage sites are determined by independent polarization transfer experiments and multiple quantum correlation experiments. These methods permit the complete structure determination on very small amounts of the polysaccharides. They present the results of a number of structural determinations and discuss the limits of these experiments in terms of their applications to polysaccharides

  8. Two-dimensional electroacoustic waves in silicene

    Science.gov (United States)

    Zhukov, Alexander V.; Bouffanais, Roland; Konobeeva, Natalia N.; Belonenko, Mikhail B.

    2018-01-01

    In this letter, we investigate the propagation of two-dimensional electromagnetic waves in a piezoelectric medium built upon silicene. Ultrashort optical pulses of Gaussian form are considered to probe this medium. On the basis of Maxwell's equations supplemented with the wave equation for the medium's displacement vector, we obtain the effective governing equation for the vector potential associated with the electromagnetic field, as well as the component of the displacement vector. The dependence of the pulse shape on the bandgap in silicene and the piezoelectric coefficient of the medium was analyzed, thereby revealing a nontrivial triadic interplay between the characteristics of the pulse dynamics, the electronic properties of silicene, and the electrically induced mechanical vibrations of the medium. In particular, we uncovered the possibility for an amplification of the pulse amplitude through the tuning of the piezoelectric coefficient. This property could potentially offer promising prospects for the development of amplification devices for the optoelectronics industry.

  9. Versatile two-dimensional transition metal dichalcogenides

    DEFF Research Database (Denmark)

    Canulescu, Stela; Affannoukoué, Kévin; Döbeli, Max

    Two-dimensional transition metal dichalcogenides (2D-TMDCs), such as MoS2, have emerged as a new class of semiconducting materials with distinct optical and electrical properties. The availability of 2D-TMDCs with distinct band gaps allows for unlimited combinations of TMDC monolayers (MLs...); a strategy for the fabrication of 2D heterostructures must be developed. Here we demonstrate a novel approach for the bottom-up synthesis of TMDC monolayers, namely Pulsed Laser Deposition (PLD) combined with a sulfur evaporation beam. PLD relies on the use of a pulsed laser (ns pulse duration) to induce material transfer from a solid source (such as a sintered target of MoS2) to a substrate (such as Si or sapphire). The deposition rate in PLD is typically much less than a monolayer per pulse, meaning that the number of MLs can be controlled by a careful selection of the number of laser pulses...

  10. Two-dimensional heterostructures for energy storage

    Energy Technology Data Exchange (ETDEWEB)

    Gogotsi, Yury G. [Drexel Univ., Philadelphia, PA (United States); Pomerantseva, Ekaterina [Drexel Univ., Philadelphia, PA (United States)

    2017-06-12

    Two-dimensional (2D) materials provide slit-shaped ion diffusion channels that enable fast movement of lithium and other ions. However, electronic conductivity, the number of intercalation sites, and stability during extended cycling are also crucial for building high-performance energy storage devices. While individual 2D materials, such as graphene, show some of the required properties, none of them can offer all properties needed to maximize energy density, power density, and cycle life. Here we argue that stacking different 2D materials into heterostructured architectures opens an opportunity to construct electrodes that would combine the advantages of the individual building blocks while eliminating the associated shortcomings. We discuss characteristics of common 2D materials and provide examples of 2D heterostructured electrodes that showed new phenomena leading to superior electrochemical performance. As a result, we also consider electrode fabrication approaches and finally outline future steps to create 2D heterostructured electrodes that could greatly expand current energy storage technologies.

  11. Two-dimensional fourier transform spectrometer

    Science.gov (United States)

    DeFlores, Lauren; Tokmakoff, Andrei

    2013-09-03

    The present invention relates to a system and methods for acquiring two-dimensional Fourier transform (2D FT) spectra. Overlap of a collinear pulse pair and probe induce a molecular response which is collected by spectral dispersion of the signal modulated probe beam. Simultaneous collection of the molecular response, pulse timing and characteristics permit real time phasing and rapid acquisition of spectra. Full spectra are acquired as a function of pulse pair timings and numerically transformed to achieve the full frequency-frequency spectrum. This method demonstrates the ability to acquire information on molecular dynamics, couplings and structure in a simple apparatus. Multi-dimensional methods can be used for diagnostic and analytical measurements in the biological, biomedical, and chemical fields.

  12. Equivalency of two-dimensional algebras

    International Nuclear Information System (INIS)

    Santos, Gildemar Carneiro dos; Pomponet Filho, Balbino Jose S.

    2011-01-01

    Full text: Let us consider a vector z = xi + yj over the field of real numbers, whose basis (i, j) satisfies a given algebra. Any property of this algebra will be reflected in any function of z, so we can state that knowledge of the properties of an algebra leads to more general conclusions than knowledge of the properties of a function. However, the structural properties of an algebra do not change when the algebra undergoes a linear transformation, though the structural constants defining it do change. We say that two algebras are equivalent to each other whenever they are related by a linear transformation. In this case, we have found that some relations between the structural constants are sufficient to recognize whether or not an algebra is equivalent to another. Although the basis transforms linearly, the structural constants change like a third-order tensor; however, some combinations of these tensors result in a linear transformation, allowing the entries of the transformation matrix to be written as functions of the structural constants. Eventually, a systematic way to find the transformation matrix between these equivalent algebras is obtained. In this sense, we have performed a thorough classification of associative commutative two-dimensional algebras, and find that even non-division algebras may be helpful in solving non-linear dynamic systems. The Mandelbrot set was used to obtain a pictorial view of each algebra, since equivalent algebras result in the same pattern. Presently we have succeeded in classifying some non-associative two-dimensional algebras, a task more difficult than for associative ones. (author)

  13. Row-column visibility graph approach to two-dimensional landscapes

    International Nuclear Information System (INIS)

    Xiao Qin; Pan Xue; Li Xin-Li; Stephen Mutua; Yang Hui-Jie; Jiang Yan; Wang Jian-Yong; Zhang Qing-Jun

    2014-01-01

    A new concept, called the row-column visibility graph, is proposed to map two-dimensional landscapes to complex networks. A cluster coverage is introduced to describe the extensive property of node clusters on a Euclidean lattice. Graphs mapped from fractals generated with the probability redistribution model are scale-free. They have pattern-induced hierarchical organizations and comparatively much more extensive structures. The scale-free exponent is negatively correlated with the Hurst exponent; however, there is no deterministic relation between them. Graphs for fractals generated with the midpoint displacement model are exponential networks. When the Hurst exponent is large enough (e.g., H > 0.5), the degree distribution decays much more slowly, the average coverage becomes significantly large, and the initially hierarchical structure at H < 0.5 is destroyed completely. Hence, the row-column visibility graph can be used to detect the pattern-related new characteristics of two-dimensional landscapes. (interdisciplinary physics and related areas of science and technology)
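A row-column visibility graph can be sketched by applying the standard natural-visibility criterion independently along every row and column of the landscape. This is a minimal interpretation of the construction; the paper's exact definition may differ in details:

```python
import numpy as np

def visible_pairs(series):
    """Natural-visibility pairs of a 1D height series: i and j are linked
    if every intermediate point lies strictly below the line joining them."""
    pairs = []
    n = len(series)
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < series[i] + (series[j] - series[i]) *
                   (k - i) / (j - i) for k in range(i + 1, j)):
                pairs.append((i, j))
    return pairs

def row_column_visibility_graph(landscape):
    """Edge set of the row-column visibility graph of a 2D landscape:
    visibility is evaluated independently along every row and column."""
    edges = set()
    h, w = landscape.shape
    for r in range(h):
        for i, j in visible_pairs(landscape[r]):
            edges.add(((r, i), (r, j)))
    for c in range(w):
        for i, j in visible_pairs(landscape[:, c]):
            edges.add(((i, c), (j, c)))
    return edges

rng = np.random.default_rng(1)
land = rng.random((8, 8))
edges = row_column_visibility_graph(land)
print(len(edges))  # at least 112: adjacent neighbours are always mutually visible
```

Degree distributions of such graphs (e.g., via `collections.Counter` over the edge endpoints) are what distinguish the scale-free and exponential regimes discussed in the abstract.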

  14. Two-dimensional electron density characterisation of arc interruption phenomenon in current-zero phase

    Science.gov (United States)

    Inada, Yuki; Kamiya, Tomoki; Matsuoka, Shigeyasu; Kumada, Akiko; Ikeda, Hisatoshi; Hidaka, Kunihiko

    2018-01-01

    Two-dimensional electron density imaging over free burning SF6 arcs and SF6 gas-blast arcs was conducted at current zero using highly sensitive Shack-Hartmann type laser wavefront sensors in order to experimentally characterise electron density distributions for the success and failure of arc interruption in the thermal reignition phase. The experimental results under an interruption probability of 50% showed that free burning SF6 arcs with axially asymmetric electron density profiles were interrupted with a success rate of 88%. On the other hand, the current interruption of SF6 gas-blast arcs was reproducibly achieved under locally reduced electron densities and the interruption success rate was 100%.

  15. Exact critical properties of two-dimensional polymer networks from conformal invariance

    International Nuclear Information System (INIS)

    Duplantier, B.

    1988-03-01

    An infinity of exact critical exponents for two-dimensional self-avoiding walks can be derived from conformal invariance and Coulomb gas techniques applied to the O(n) model and to the Potts model. They apply to polymer networks of any topology, for which a general scaling theory is given, valid in any dimension d. The infinite set of exponents has also been calculated to O(ε²) for d = 4 − ε. The 2D study also includes other universality classes such as dense polymers, Hamiltonian walks, and polymers at their θ-point. Exact correlation functions can further be given for Hamiltonian walks, and exact winding-angle probability distributions for the self-avoiding walks

  16. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    Science.gov (United States)

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
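The link between the lure evidence standard deviation and the z-ROC slope can be verified in a small unequal-variance signal-detection sketch: with targets ~ N(d', 1) and lures ~ N(0, σ_lure), the z-ROC slope equals σ_lure exactly. The numerical values below are assumptions for illustration, not the study's estimates:

```python
import numpy as np
from scipy.stats import norm, linregress

def zroc_slope(d_prime, sigma_lure, criteria):
    """Slope of the z-ROC under an unequal-variance signal-detection model:
    targets ~ N(d', 1), lures ~ N(0, sigma_lure).  Analytically,
    z(hit) = d' + sigma_lure * z(fa), so the slope is sigma_lure."""
    z_hit = norm.ppf(1 - norm.cdf(criteria, loc=d_prime, scale=1.0))
    z_fa = norm.ppf(1 - norm.cdf(criteria, loc=0.0, scale=sigma_lure))
    return linregress(z_fa, z_hit).slope

criteria = np.linspace(-0.5, 1.5, 6)
print(round(zroc_slope(1.0, 0.8, criteria), 3))  # -> 0.8 (assumed "unrelated" SD)
print(round(zroc_slope(1.0, 1.0, criteria), 3))  # -> 1.0 (larger lure SD: priming)
```

Increasing σ_lure raises the slope without touching the target distribution, which is exactly the signature the abstract attributes to priming.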

  17. The probability distribution model of air pollution index and its dominants in Kuala Lumpur

    Science.gov (United States)

    AL-Dhurafi, Nasr Ahmed; Razali, Ahmad Mahir; Masseran, Nurulkamal; Zamzuri, Zamira Hasanah

    2016-11-01

    This paper focuses on statistical modeling of the distributions of the air pollution index (API) and its sub-index data observed at Kuala Lumpur in Malaysia. The measured pollutants or sub-indexes include carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2) and particulate matter (PM10). Four probability distributions are considered, namely log-normal, exponential, Gamma and Weibull, in the search for the best-fit distribution to the Malaysian air pollutant data. In order to determine the best distribution for describing the air pollutant data, five goodness-of-fit criteria are applied. This will help in minimizing the uncertainty in pollution resource estimates and improving the assessment phase of planning. The conflict in criterion results for selecting the best distribution was overcome by using the weight-of-ranks method. We found that the Gamma distribution is the best distribution for the majority of air pollutant data in Kuala Lumpur.
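The selection step, fitting several candidate distributions and ranking them by goodness of fit, can be sketched as follows. The synthetic data and the single Kolmogorov-Smirnov criterion are simplifying assumptions; the paper applies five criteria combined by the weight-of-ranks method:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.gamma(shape=2.0, scale=30.0, size=500)  # synthetic API-like values

candidates = {
    "lognormal": stats.lognorm,
    "exponential": stats.expon,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

# Fit each candidate by maximum likelihood and rank by the KS statistic
results = {}
for name, dist in candidates.items():
    params = dist.fit(data)
    results[name] = stats.kstest(data, dist.cdf, args=params).statistic

best = min(results, key=results.get)
print(best)  # gamma is expected to give the smallest KS statistic here
```

With real data, each additional criterion (e.g., Anderson-Darling, AIC) produces its own ranking, and the weight-of-ranks step aggregates them into one decision.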

  18. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    Science.gov (United States)

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with a known distribution, using as data points either the lower, mid or upper bounds of the sampling intervals, or the cumulative distribution of observed values (with either maximum likelihood or non-linear least squares for parameter estimation); we then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fits to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) relative to the original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations, ranging from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We
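The best-performing method, non-linear least squares on the observed cumulative distribution, can be contrasted with a naive midpoint fit on simulated data. The lognormal parameters, the 2-hour sampling interval and the sample size are assumptions for illustration, not the study's settings:

```python
import numpy as np
from scipy import stats
from scipy.optimize import least_squares

rng = np.random.default_rng(7)
true_shape, true_scale = 0.5, 8.0   # assumed lognormal retention times (hours)
times = stats.lognorm.rvs(true_shape, scale=true_scale, size=300,
                          random_state=rng)

# Retention is only observed at discrete 2-hour sampling intervals
edges = np.arange(0.0, 48.1, 2.0)
counts, _ = np.histogram(times, bins=edges)

# (a) naive approach: treat interval midpoints as exact observations
mids = 0.5 * (edges[:-1] + edges[1:])
mid_sample = np.repeat(mids, counts)
shape_mid, _, scale_mid = stats.lognorm.fit(mid_sample, floc=0)

# (b) non-linear least squares on the observed cumulative distribution
ecdf = np.cumsum(counts) / counts.sum()
resid = lambda p: stats.lognorm.cdf(edges[1:], p[0], scale=p[1]) - ecdf
fit = least_squares(resid, x0=[1.0, 5.0], bounds=([1e-6, 1e-6], np.inf))
shape_c, scale_c = fit.x

print("midpoint fit:  ", round(shape_mid, 2), round(scale_mid, 2))
print("cumulative fit:", round(shape_c, 2), round(scale_c, 2))
```

On this toy run the cumulative-distribution fit lands close to the true (0.5, 8.0); repeating with coarser intervals or smaller samples shows it degrading more gracefully than the midpoint fit, in line with the abstract's conclusion.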

  19. Examining barrier distributions and, in extension, energy derivative of probabilities for surrogate experiments

    International Nuclear Information System (INIS)

    Romain, P.; Duarte, H.; Morillon, B.

    2012-01-01

    The energy derivatives of probabilities are functions suited to a better understanding of certain mechanisms. Applied to compound nuclear reactions, they can provide information on fusion barrier distributions, as originally introduced, and also, as presented here, on fission barrier distributions and heights. By extension, they give access to the compound-nucleus spin-parity states preferentially populated through a given entrance channel at a given energy. (authors)

  20. Neighbor-dependent Ramachandran probability distributions of amino acids developed from a hierarchical Dirichlet process model.

    Directory of Open Access Journals (Sweden)

    Daniel Ting

    2010-04-01

    Full Text Available Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: (1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); (2) filtering of suspect conformations and outliers using B-factors or other features; (3) secondary structure of input data (e.g., whether helix and sheet are included, whether beta turns are included); (4) the method used for determining probability densities, ranging from simple histograms to modern nonparametric density estimation; and (5) whether they include nearest-neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp.

  1. Application of the Unbounded Probability Distribution of the Johnson System for Floods Estimation

    Directory of Open Access Journals (Sweden)

    Campos-Aranda Daniel Francisco

    2015-09-01

    Full Text Available Design floods are key to sizing new water works and to reviewing the hydrological safety of existing ones. The most reliable method for estimating their magnitudes associated with certain return periods is to fit a probabilistic model to available records of maximum annual flows. Since such a model is at first unknown, several models need to be tested in order to select the most appropriate one according to a statistical index, commonly the standard error of fit. Several probability distributions have shown versatility and consistency of results when processing flood records, and therefore their application has been established as a norm or precept. The Johnson system has three families of distributions, one of which is the Log-Normal model with three fit parameters, which is also the border between the bounded distributions and those with no upper limit. These families of distributions have four adjustment parameters and converge to the standard normal distribution, so that their predictions are obtained with such a model. Having contrasted the three probability distributions established by precept on 31 historical records of hydrological events, the Johnson system is applied to the same data. The results of the unbounded distribution of the Johnson system (SJU) are compared to the optimal results from the three distributions. It was found that the predictions of the SJU distribution are similar to those obtained with the other models for low return periods (< 1000 years). Because of its theoretical support, the SJU model is recommended in flood estimation.

  2. A method for the calculation of the cumulative failure probability distribution of complex repairable systems

    International Nuclear Information System (INIS)

    Caldarola, L.

    1976-01-01

    A method is proposed for the analytical evaluation of the cumulative failure probability distribution of complex repairable systems. The method is based on a set of integral equations each one referring to a specific minimal cut set of the system. Each integral equation links the unavailability of a minimal cut set to its failure probability density distribution and to the probability that the minimal cut set is down at the time t under the condition that it was down at time t'(t'<=t). The limitations for the applicability of the method are also discussed. It has been concluded that the method is applicable if the process describing the failure of a minimal cut set is a 'delayed semi-regenerative process'. (Auth.)

  3. Spectral shaping of a randomized PWM DC-DC converter using maximum entropy probability distributions

    CSIR Research Space (South Africa)

    Dove, Albert

    2017-01-01

    Full Text Available maintaining constraints in a DC-DC converter is investigated. A probability distribution whose aim is to ensure maximal harmonic spreading and yet mainaint constraints is presented. The PDFs are determined from a direct application of the method of Maximum...

  4. Extreme points of the convex set of joint probability distributions with ...

    Indian Academy of Sciences (India)

    Here we address the following problem: If G is a standard ... convex set of all joint probability distributions on the product Borel space (X1 ×X2, F1 ⊗. F2) which .... cannot be identically zero when X and Y vary in A1 and u and v vary in H2. Thus.

  5. Providing probability distributions for the causal pathogen of clinical mastitis using naive Bayesian networks

    NARCIS (Netherlands)

    Steeneveld, W.; Gaag, van der L.C.; Barkema, H.W.; Hogeveen, H.

    2009-01-01

    Clinical mastitis (CM) can be caused by a wide variety of pathogens and farmers must start treatment before the actual causal pathogen is known. By providing a probability distribution for the causal pathogen, naive Bayesian networks (NBN) can serve as a management tool for farmers to decide which
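The kind of probability distribution such an NBN provides can be sketched with a toy naive Bayes model: multiply a prior over pathogens by the likelihood of each observed finding and normalize. The pathogens, findings, priors and likelihoods below are invented for illustration and are not taken from the study:

```python
# Illustrative priors and likelihoods (not from the cited study).
priors = {"E. coli": 0.35, "S. aureus": 0.25, "S. uberis": 0.40}
likelihoods = {
    # P(finding | pathogen) for two binary clinical findings.
    "E. coli":   {"severe": 0.60, "high_scc": 0.30},
    "S. aureus": {"severe": 0.10, "high_scc": 0.80},
    "S. uberis": {"severe": 0.20, "high_scc": 0.50},
}

def posterior(findings):
    """Naive Bayes: multiply the prior by each finding's likelihood
    (assuming conditional independence), then normalize."""
    scores = {}
    for pathogen, prior in priors.items():
        s = prior
        for finding, present in findings.items():
            lk = likelihoods[pathogen][finding]
            s *= lk if present else (1.0 - lk)
        scores[pathogen] = s
    z = sum(scores.values())
    return {pathogen: s / z for pathogen, s in scores.items()}

dist = posterior({"severe": True, "high_scc": False})
print(dist)
```

The farmer-facing output is exactly this normalized distribution: a treatment can be chosen against the most probable pathogens before culture results arrive.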

  6. Extending models for two-dimensional constraints

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2009-01-01

    Random fields in two dimensions may be specified on 2 times 2 elements such that the probabilities of finite configurations and the entropy may be calculated explicitly. The Pickard random field is one example where probability of a new (non-boundary) element is conditioned on three previous elem...

  7. INVESTIGATION OF INFLUENCE OF ENCODING FUNCTION COMPLEXITY ON DISTRIBUTION OF ERROR MASKING PROBABILITY

    Directory of Open Access Journals (Sweden)

    A. B. Levina

    2016-03-01

    Full Text Available Error detection codes are mechanisms that enable robust delivery of data over unreliable, error-prone communication channels and devices; error detection codes allow such errors to be detected. There are two classes of error detecting codes - classical codes and security-oriented codes. The classical codes detect a high percentage of errors; however, they have a high probability of missing an error introduced by algebraic manipulation. In contrast, security-oriented codes are codes with a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes. A detailed study of this parameter allows analyzing the behavior of the error-correcting code when errors are injected into the encoding device. In turn, the complexity of the encoding function plays an important role in security-oriented codes. Encoding functions with low computational complexity and a low probability of masking offer the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding function complexity on the error masking probability distribution. It is shown that a more complex encoding function reduces the maximum of the error masking probability. It is also shown that increasing the function complexity changes the error masking probability distribution; in particular, increasing the computational complexity decreases the difference between the maximum and the average value of the error masking probability. Our results have shown that functions with greater complexity have smoothed maxima of the error masking probability, which significantly complicates the analysis of the error-correcting code by an attacker. As a result, with a complex encoding function the probability of successful algebraic manipulation is reduced. The paper discusses an approach to measure the error masking

  8. Quantile selection procedure and associated distribution of ratios of order statistics from a restricted family of probability distributions

    International Nuclear Information System (INIS)

    Gupta, S.S.; Panchapakesan, S.

    1975-01-01

    A quantile selection procedure in reliability problems pertaining to a restricted family of probability distributions is discussed. This family is assumed to be star-ordered with respect to the standard normal distribution folded at the origin. Motivation for this formulation of the problem is described. Both exact and asymptotic results dealing with the distribution of the maximum of ratios of order statistics from such a family are obtained and tables of the appropriate constants, percentiles of this statistic, are given in order to facilitate the use of the selection procedure

  9. Critical behavior of the two-dimensional first passage time

    International Nuclear Information System (INIS)

    Chayes, J.T.; Chayes, L.; Durrett, R.

    1986-01-01

    We study the two-dimensional first passage problem in which bonds have zero and unit passage times with probability p and 1-p, respectively. We prove that as the density p of zero-time bonds approaches the percolation threshold p_c, the first passage time exhibits the same critical behavior as the correlation function of the underlying percolation problem. In particular, if the correlation length obeys ξ(p) ~ |p - p_c|^(-ν), then the first passage time constant satisfies μ(p) ~ |p - p_c|^ν. At p_c, where it has been asserted that the first passage time from 0 to x scales as |x| to a power ψ with 0 < ψ < 1, we show that the passage times grow like log|x|, i.e., the fluid spreads exponentially rapidly
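The model in this record - bonds with passage time 0 with probability p and 1 otherwise - is easy to simulate: the first passage time is a shortest path in bond passage times, computable with Dijkstra's algorithm. A minimal sketch on an n x n grid (corner to corner rather than 0 to x, purely for illustration):

```python
import heapq
import random

def first_passage_time(n, p, seed=0):
    """Shortest passage time from (0,0) to (n-1,n-1) on an n x n grid
    whose bonds have passage time 0 with probability p, else 1."""
    rng = random.Random(seed)
    times = {}  # lazily sampled bond times, one draw per edge

    def t(u, v):
        e = (min(u, v), max(u, v))
        if e not in times:
            times[e] = 0 if rng.random() < p else 1
        return times[e]

    dist = {(0, 0): 0}
    pq = [(0, (0, 0))]
    while pq:
        d, (x, y) = heapq.heappop(pq)
        if (x, y) == (n - 1, n - 1):
            return d
        if d > dist.get((x, y), float("inf")):
            continue  # stale queue entry
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < n and 0 <= ny < n:
                nd = d + t((x, y), (nx, ny))
                if nd < dist.get((nx, ny), float("inf")):
                    dist[(nx, ny)] = nd
                    heapq.heappush(pq, (nd, (nx, ny)))
    return dist.get((n - 1, n - 1))

print(first_passage_time(30, 0.3), first_passage_time(30, 0.6))
```

Above p_c = 1/2 the zero-time bonds percolate and passage times stay bounded; below it they grow linearly in the distance, consistent with the critical behavior the paper analyzes.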

  10. Precipitation intensity probability distribution modelling for hydrological and construction design purposes

    International Nuclear Information System (INIS)

    Koshinchanov, Georgy; Dimitrov, Dobri

    2008-01-01

    The characteristics of rainfall intensity are important for many purposes, including the design of sewage and drainage systems, the tuning of flood warning procedures, etc. Those estimates are usually statistical estimates of the intensity of precipitation realized over a certain period of time (e.g. 5, 10 min, etc.) with a given return period (e.g. 20, 100 years, etc.). The traditional approach to evaluating these precipitation intensities is to process the pluviometer records and fit a probability distribution to samples of intensities valid for certain locations or regions. Those estimates then become part of the state regulations used for various economic activities. Two problems occur with this approach: 1. Due to various factors the climate conditions change, so the precipitation intensity estimates need regular updates; 2. Since the extremes of the probability distribution are of particular importance in practice, the distribution-fitting methodology needs specific attention in those parts of the distribution. The aim of this paper is to review the existing methodologies for processing intensive rainfalls and to refresh some of the statistical estimates for the studied areas. The methodologies used in Bulgaria for analyzing intensive rainfalls and producing the relevant statistical estimates are: - the method of maximum intensity, used in the National Institute of Meteorology and Hydrology to process and decode the pluviometer records, followed by distribution fitting for each precipitation duration period; - as above, but with separate modelling of the probability distribution for the middle and high probability quantiles; - a method similar to the first one, but with an intensity threshold of 0.36 mm/min; - another method proposed by the Russian hydrologist G. A. Aleksiev for regionalization of estimates over a territory, improved and adapted by S. Gerasimov for Bulgaria; - the next method considers only
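The core statistical step described above - fitting a probability distribution to sampled intensities and reading off quantiles for a given return period - can be sketched with SciPy. The Gumbel (EV1) model and the synthetic data below are illustrative choices, not the specific methods prescribed in the Bulgarian regulations:

```python
import numpy as np
from scipy import stats

# Synthetic annual maxima of 10-min rainfall intensity (mm/min).
intens = stats.gumbel_r.rvs(loc=1.0, scale=0.35, size=50,
                            random_state=np.random.default_rng(7))

# Fit an extreme-value (Gumbel) model by maximum likelihood, then
# evaluate design intensities for the required return periods T.
loc, scale = stats.gumbel_r.fit(intens)
design = {T: stats.gumbel_r.ppf(1.0 - 1.0 / T, loc, scale) for T in (20, 100)}
print(design)
```

The paper's second point - giving the upper tail special attention - would correspond here to checking the fit specifically at the high quantiles rather than over the whole sample.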

  11. Gas Hydrate Formation Probability Distributions: The Effect of Shear and Comparisons with Nucleation Theory.

    Science.gov (United States)

    May, Eric F; Lim, Vincent W; Metaxas, Peter J; Du, Jianwei; Stanwix, Paul L; Rowland, Darren; Johns, Michael L; Haandrikman, Gert; Crosby, Daniel; Aman, Zachary M

    2018-03-13

    Gas hydrate formation is a stochastic phenomenon of considerable significance for any risk-based approach to flow assurance in the oil and gas industry. In principle, well-established results from nucleation theory offer the prospect of predictive models for hydrate formation probability in industrial production systems. In practice, however, heuristics are relied on when estimating formation risk for a given flowline subcooling or when quantifying kinetic hydrate inhibitor (KHI) performance. Here, we present statistically significant measurements of formation probability distributions for natural gas hydrate systems under shear, which are quantitatively compared with theoretical predictions. Distributions with over 100 points were generated using low-mass, Peltier-cooled pressure cells, cycled in temperature between 40 and -5 °C at up to 2 K·min⁻¹ and analyzed with robust algorithms that automatically identify hydrate formation and initial growth rates from dynamic pressure data. The application of shear had a significant influence on the measured distributions: at 700 rpm mass-transfer limitations were minimal, as demonstrated by the kinetic growth rates observed. The formation probability distributions measured at this shear rate had mean subcoolings consistent with theoretical predictions and steel-hydrate-water contact angles of 14-26°. However, the experimental distributions were substantially wider than predicted, suggesting that phenomena acting on macroscopic length scales are responsible for much of the observed stochastic formation. Performance tests of a KHI provided new insights into how such chemicals can reduce the risk of hydrate blockage in flowlines. Our data demonstrate that the KHI not only reduces the probability of formation (by both shifting and sharpening the distribution) but also reduces hydrate growth rates by a factor of 2.
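Formation probability distributions of the kind measured here are, at base, empirical CDFs of the subcooling at which formation is first detected across many cooling cycles. A minimal sketch with invented onset data (a real study would use hundreds of cycles, as the abstract notes):

```python
import numpy as np

# Hypothetical subcoolings (K) at which hydrate formation was first
# detected in ten repeated cooling cycles of one cell.
onsets = np.array([6.1, 7.4, 5.8, 8.0, 6.9, 7.7, 6.4, 7.1, 5.5, 8.3])

def formation_probability(delta_t):
    """Empirical probability that formation has occurred by subcooling
    delta_t: the fraction of cycles with onset at or below delta_t."""
    return float(np.mean(onsets <= delta_t))

for dT in (5.0, 7.0, 9.0):
    print(dT, formation_probability(dT))
```

Shifting this curve to larger subcoolings and making it steeper is exactly the "shifting and sharpening" effect the KHI tests report.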

  12. Electronic Transport in Two-Dimensional Materials

    Science.gov (United States)

    Sangwan, Vinod K.; Hersam, Mark C.

    2018-04-01

    Two-dimensional (2D) materials have captured the attention of the scientific community due to the wide range of unique properties at nanometer-scale thicknesses. While significant exploratory research in 2D materials has been achieved, the understanding of 2D electronic transport and carrier dynamics remains in a nascent stage. Furthermore, because prior review articles have provided general overviews of 2D materials or specifically focused on charge transport in graphene, here we instead highlight charge transport mechanisms in post-graphene 2D materials, with particular emphasis on transition metal dichalcogenides and black phosphorus. For these systems, we delineate the intricacies of electronic transport, including band structure control with thickness and external fields, valley polarization, scattering mechanisms, electrical contacts, and doping. In addition, electronic interactions between 2D materials are considered in the form of van der Waals heterojunctions and composite films. This review concludes with a perspective on the most promising future directions in this fast-evolving field.

  13. Asymptotics for Two-dimensional Atoms

    DEFF Research Database (Denmark)

    Nam, Phan Thanh; Portmann, Fabian; Solovej, Jan Philip

    2012-01-01

    We prove that the ground state energy of an atom confined to two dimensions with an infinitely heavy nucleus of charge $Z>0$ and $N$ quantum electrons of charge $-1$ is $E(N,Z)=-{1/2}Z^2\ln Z+(E^{\rm TF}(\lambda)+{1/2}c^{\rm H})Z^2+o(Z^2)$ when $Z\to \infty$ and $N/Z\to \lambda$, where $E^{\rm TF}(\lambda)$ is given by a Thomas-Fermi type variational problem and $c^{\rm H}\approx -2.2339$ is an explicit constant. We also show that the radius of a two-dimensional neutral atom is unbounded when $Z\to \infty$, which is contrary to the expected behavior of three-dimensional atoms.

  14. Seismic isolation of two dimensional periodic foundations

    International Nuclear Information System (INIS)

    Yan, Y.; Mo, Y. L.; Laskar, A.; Cheng, Z.; Shi, Z.; Menq, F.; Tang, Y.

    2014-01-01

    Phononic crystals are now used to control acoustic waves. When the crystal is scaled up, it is called a periodic structure. The band gaps of the periodic structure can be brought down to the range from 0.5 Hz to 50 Hz. Therefore, the periodic structure has potential applications in seismic wave reflection. In civil engineering, the periodic structure can serve as the foundation of an upper structure. This type of foundation consisting of a periodic structure is called a periodic foundation. When the frequency of seismic waves falls into the band gaps of the periodic foundation, the seismic wave is blocked. Field experiments on a scaled two dimensional (2D) periodic foundation with an upper structure were conducted to verify the band gap effects. Test results showed the 2D periodic foundation can effectively reduce the response of the upper structure for excitations with frequencies within the frequency band gaps. The experimental and finite element analysis results agree well with each other, indicating that a 2D periodic foundation is a feasible way of reducing seismic vibrations.

  15. Two-dimensional transport of tokamak plasmas

    International Nuclear Information System (INIS)

    Hirshman, S.P.; Jardin, S.C.

    1979-01-01

    A reduced set of two-fluid transport equations is obtained from the conservation equations describing the time evolution of the differential particle number, entropy, and magnetic fluxes in an axisymmetric toroidal plasma with nested magnetic surfaces. Expanding in the small ratio of perpendicular to parallel mobilities and thermal conductivities yields as solubility constraints one-dimensional equations for the surface-averaged thermodynamic variables and magnetic fluxes. Since Ohm's law E + u × B = R′, where R′ accounts for any nonideal effects, only determines the particle flow relative to the diffusing magnetic surfaces, it is necessary to solve a single two-dimensional generalized differential equation, (∂/∂t)∇ψ · (∇p − J × B) = 0, to find the absolute velocity of a magnetic surface enclosing a fixed toroidal flux. This equation is linear but nonstandard in that it involves flux surface averages of the unknown velocity. Specification of R′ and the cross-field ion and electron heat fluxes provides a closed system of equations. A time-dependent coordinate transformation is used to describe the diffusion of plasma quantities through magnetic surfaces of changing shape

  16. Two-dimensional topological photonic systems

    Science.gov (United States)

    Sun, Xiao-Chen; He, Cheng; Liu, Xiao-Ping; Lu, Ming-Hui; Zhu, Shi-Ning; Chen, Yan-Feng

    2017-09-01

    The topological phase of matter, originally proposed and first demonstrated in fermionic electronic systems, has drawn considerable research attention in the past decades due to its robust transport of edge states and its potential with respect to future quantum information, communication, and computation. Recently, searching for such a unique material phase in bosonic systems has become a hot research topic worldwide. So far, many bosonic topological models and methods for realizing them have been discovered in photonic systems, acoustic systems, mechanical systems, etc. These discoveries have certainly yielded vast opportunities in designing material phases and related properties in the topological domain. In this review, we first focus on some of the representative photonic topological models and employ the underlying Dirac model to analyze the edge states and geometric phase. On the basis of these models, three common types of two-dimensional topological photonic systems are discussed: 1) photonic quantum Hall effect with broken time-reversal symmetry; 2) photonic topological insulator and the associated pseudo-time-reversal symmetry-protected mechanism; 3) time/space periodically modulated photonic Floquet topological insulator. Finally, we provide a summary and extension of this emerging field, including a brief introduction to the Weyl point in three-dimensional systems.

  17. Radiation effects on two-dimensional materials

    Energy Technology Data Exchange (ETDEWEB)

    Walker, R.C. II; Robinson, J.A. [Department of Materials Science, Penn State, University Park, PA (United States); Center for Two-Dimensional Layered Materials, Penn State, University Park, PA (United States); Shi, T. [Department of Mechanical and Nuclear Engineering, Penn State, University Park, PA (United States); Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI (United States); Silva, E.C. [GlobalFoundries, Malta, NY (United States); Jovanovic, I. [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI (United States)

    2016-12-15

    The effects of electromagnetic and particle irradiation on two-dimensional materials (2DMs) are discussed in this review. Radiation creates defects that impact the structure and electronic performance of materials. Determining the impact of these defects is important for developing 2DM-based devices for use in high-radiation environments, such as space or nuclear reactors. As such, most experimental studies have been focused on determining total ionizing dose damage to 2DMs and devices. Total dose experiments using X-rays, gamma rays, electrons, protons, and heavy ions are summarized in this review. We briefly discuss the possibility of investigating single event effects in 2DMs based on initial ion beam irradiation experiments and the development of 2DM-based integrated circuits. Additionally, beneficial uses of irradiation such as ion implantation to dope materials or electron-beam and helium-beam etching to shape materials have begun to be used on 2DMs and are reviewed as well. For non-ionizing radiation, such as low-energy photons, we review the literature on 2DM-based photo-detection from terahertz to UV. The majority of photo-detecting devices operate in the visible and UV range, and for this reason they are the focus of this review. However, we review the progress in developing 2DMs for detecting infrared and terahertz radiation. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  18. Buckled two-dimensional Xene sheets.

    Science.gov (United States)

    Molle, Alessandro; Goldberger, Joshua; Houssa, Michel; Xu, Yong; Zhang, Shou-Cheng; Akinwande, Deji

    2017-02-01

    Silicene, germanene and stanene are part of a monoelemental class of two-dimensional (2D) crystals termed 2D-Xenes (X = Si, Ge, Sn and so on) which, together with their ligand-functionalized derivatives referred to as Xanes, are comprised of group IVA atoms arranged in a honeycomb lattice - similar to graphene but with varying degrees of buckling. Their electronic structure ranges from trivial insulators, to semiconductors with tunable gaps, to semi-metallic, depending on the substrate, chemical functionalization and strain. More than a dozen different topological insulator states are predicted to emerge, including the quantum spin Hall state at room temperature, which, if realized, would enable new classes of nanoelectronic and spintronic devices, such as the topological field-effect transistor. The electronic structure can be tuned, for example, by changing the group IVA element, the degree of spin-orbit coupling, the functionalization chemistry or the substrate, making the 2D-Xene systems promising multifunctional 2D materials for nanotechnology. This Perspective highlights the current state of the art and future opportunities in the manipulation and stability of these materials, their functions and applications, and novel device concepts.

  19. Pattern recognition in spaces of probability distributions for the analysis of edge-localized modes in tokamak plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Shabbir, Aqsa

    2016-07-07

    scaling (MDS) and landmark multidimensional scaling (LMDS) for data visualization (dimensionality reduction). Furthermore, two new classification schemes are developed: a distance-to-centroid classifier (D2C) and a principal geodesic classifier (PGC). D2C classifies on the basis of the minimum GD to the class centroids and PGC considers the shape of the class on the manifold by determining the minimum distance to the principal geodesic of each class. The methods are validated by their application to the classification and retrieval of colored texture images represented in the wavelet domain. Both methods prove to be computationally efficient, yield high accuracy and also clearly exhibit the adequacy of the GD and its superiority over the Euclidean distance, for comparing PDFs. The second main goal of the work targets ELM analysis at three fronts, using pattern recognition and probabilistic modeling: (i) We first concentrate on visualization of ELM characteristics by creating maps containing projections of multidimensional ELM data, as well as the corresponding probabilistic models. In particular, GD-based MDS is used for representing the complete distributions of the multidimensional data characterizing the operational space of ELMs onto two-dimensional maps. Clusters corresponding to type I and type III ELMs are identified and the maps enable tracking of trends in plasma parameters across the operational space. It is shown that the maps can also be used with reasonable accuracy for predicting the values of the plasma parameters at a certain point in the operational space. (ii) Our second application concerns fast, standardized and automated classification of ELM types. The presented classification schemes are aimed at complementing the phenomenological characterization using standardized methods that are less susceptible to subjective interpretation, while considerably reducing the effort of ELM experts in identifying ELM types. 
To this end, different classification

  20. Pattern recognition in spaces of probability distributions for the analysis of edge-localized modes in tokamak plasmas

    International Nuclear Information System (INIS)

    Shabbir, Aqsa

    2016-01-01

    scaling (MDS) and landmark multidimensional scaling (LMDS) for data visualization (dimensionality reduction). Furthermore, two new classification schemes are developed: a distance-to-centroid classifier (D2C) and a principal geodesic classifier (PGC). D2C classifies on the basis of the minimum GD to the class centroids and PGC considers the shape of the class on the manifold by determining the minimum distance to the principal geodesic of each class. The methods are validated by their application to the classification and retrieval of colored texture images represented in the wavelet domain. Both methods prove to be computationally efficient, yield high accuracy and also clearly exhibit the adequacy of the GD and its superiority over the Euclidean distance, for comparing PDFs. The second main goal of the work targets ELM analysis at three fronts, using pattern recognition and probabilistic modeling: (i) We first concentrate on visualization of ELM characteristics by creating maps containing projections of multidimensional ELM data, as well as the corresponding probabilistic models. In particular, GD-based MDS is used for representing the complete distributions of the multidimensional data characterizing the operational space of ELMs onto two-dimensional maps. Clusters corresponding to type I and type III ELMs are identified and the maps enable tracking of trends in plasma parameters across the operational space. It is shown that the maps can also be used with reasonable accuracy for predicting the values of the plasma parameters at a certain point in the operational space. (ii) Our second application concerns fast, standardized and automated classification of ELM types. The presented classification schemes are aimed at complementing the phenomenological characterization using standardized methods that are less susceptible to subjective interpretation, while considerably reducing the effort of ELM experts in identifying ELM types. 
To this end, different classification

  1. Laser sheet dropsizing based on two-dimensional Raman and Mie scattering.

    Science.gov (United States)

    Malarski, Anna; Schürer, Benedikt; Schmitz, Ingo; Zigan, Lars; Flügel, Alexandre; Leipertz, Alfred

    2009-04-01

    The imaging and quantification of droplet sizes in sprays is a challenging task for optical scientists and engineers. Laser sheet dropsizing (LSDS) combines the two-dimensional information of two different optical processes, one that is proportional to the droplet volume and one that depends on the droplet surface, e.g., Mie scattering. Besides Mie scattering, here we use two-dimensional Raman scattering as the volume-dependent measurement technique. Two different calibration strategies are presented and discussed. Two-dimensional droplet size distributions in a spray have been validated in comparison with the results of point-resolved phase Doppler anemometry (PDA) measurements.
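The LSDS principle - dividing a volume-proportional signal by a surface-proportional one yields a quantity proportional to the Sauter mean diameter, with the proportionality constant fixed by a point calibration such as PDA - can be sketched pixel-wise. The signal maps and reference value below are synthetic:

```python
import numpy as np

# Hypothetical 2D signal maps of the same spray plane:
# s_vol ~ droplet volume (e.g. Raman), s_surf ~ droplet surface (Mie).
rng = np.random.default_rng(1)
s_vol = rng.uniform(0.5, 2.0, size=(64, 64))
s_surf = rng.uniform(0.5, 2.0, size=(64, 64))

# Calibration constant k from a point measurement (e.g. PDA) at one pixel.
smd_ref, iy, ix = 25.0, 32, 32            # micrometres, reference location
k = smd_ref * s_surf[iy, ix] / s_vol[iy, ix]

# Pixel-wise Sauter mean diameter map: SMD proportional to vol/surf ratio.
smd_map = k * s_vol / s_surf
print(smd_map[iy, ix])  # recovers the reference value by construction
```

The second calibration strategy the paper mentions would replace the single-point PDA anchor with a different normalization, but the ratio structure of the map is the same.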

  2. Laser sheet dropsizing based on two-dimensional Raman and Mie scattering

    International Nuclear Information System (INIS)

    Malarski, Anna; Schuerer, Benedikt; Schmitz, Ingo; Zigan, Lars; Fluegel, Alexandre; Leipertz, Alfred

    2009-01-01

    The imaging and quantification of droplet sizes in sprays is a challenging task for optical scientists and engineers. Laser sheet dropsizing (LSDS) combines the two-dimensional information of two different optical processes, one that is proportional to the droplet volume and one that depends on the droplet surface, e.g., Mie scattering. Besides Mie scattering, here we use two-dimensional Raman scattering as the volume-dependent measurement technique. Two different calibration strategies are presented and discussed. Two-dimensional droplet size distributions in a spray have been validated in comparison with the results of point-resolved phase Doppler anemometry (PDA) measurements

  3. Mode selection in two-dimensional Bragg resonators based on planar dielectric waveguides

    International Nuclear Information System (INIS)

    Baryshev, V R; Ginzburg, N S; Zaslavskii, V Yu; Malkin, A M; Sergeev, A S; Thumm, M

    2009-01-01

    Two-dimensional Bragg resonators based on planar dielectric waveguides are analysed. It is shown that the doubly periodic corrugation deposited on the dielectric surface in the form of two gratings with translational vectors directed perpendicular to each other ensures effective selection of modes along two coordinates at large Fresnel parameters. This result is obtained both by the method of coupled waves (geometrical optics approximation) and by the direct numerical simulations. Two-dimensional Bragg resonators make it possible to fabricate two-dimensional distributed feedback lasers and to provide generation of spatially coherent radiation in large-volume active media. (waveguides)

  4. On the issues of probability distribution of GPS carrier phase observations

    Science.gov (United States)

    Luo, X.; Mayer, M.; Heck, B.

    2009-04-01

    In common practice the observables related to the Global Positioning System (GPS) are assumed to follow a Gauss-Laplace normal distribution. Actually, full knowledge of the observables' distribution is not required for parameter estimation by means of the least-squares algorithm, which is based on the functional relation between observations and unknown parameters as well as the associated variance-covariance matrix. However, the probability distribution of GPS observations plays a key role in procedures for quality control (e.g. outlier and cycle slip detection, ambiguity resolution) and in reliability-related assessments of the estimation results. Under non-ideal observation conditions with respect to the factors impacting GPS data quality, for example multipath effects and atmospheric delays, the validity of the normal distribution postulate of GPS observations is in doubt. This paper presents a detailed analysis of the distribution properties of GPS carrier phase observations using double difference residuals. For this purpose 1-Hz observation data from the permanent SAPOS
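Testing whether phase residuals depart from the postulated normal model can be sketched with standard omnibus tests. The residual series below are simulated, with a heavy-tailed component standing in for multipath-contaminated epochs; this is an illustration of the kind of check involved, not the paper's actual analysis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Stand-ins for double-difference phase residuals (cycles): one clean
# Gaussian set, one contaminated by a heavy-tailed (multipath-like) part.
clean = rng.normal(0.0, 0.01, size=2000)
contaminated = np.concatenate([clean, rng.standard_t(2, size=200) * 0.01])

for name, res in (("clean", clean), ("contaminated", contaminated)):
    stat, pval = stats.normaltest(res)   # D'Agostino-Pearson omnibus test
    kurt = stats.kurtosis(res)           # excess kurtosis, 0 for a Gaussian
    print(f"{name:13s} p = {pval:.3g}, excess kurtosis = {kurt:.2f}")
```

A small p-value and large excess kurtosis for the contaminated series flag exactly the kind of departure from normality that undermines outlier detection thresholds derived under the Gaussian postulate.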

  5. Tumour control probability (TCP) for non-uniform activity distribution in radionuclide therapy

    International Nuclear Information System (INIS)

    Uusijaervi, Helena; Bernhardt, Peter; Forssell-Aronsson, Eva

    2008-01-01

    Non-uniform radionuclide distribution in tumours will lead to a non-uniform absorbed dose. The aim of this study was to investigate how tumour control probability (TCP) depends on the radionuclide distribution in the tumour, both macroscopically and at the subcellular level. The absorbed dose in the cell nuclei of tumours was calculated for 90Y, 177Lu, 103mRh and 211At. The radionuclides were uniformly distributed within the subcellular compartment, and they were uniformly, normally or log-normally distributed among the cells in the tumour. When all cells contain the same amount of activity, the cumulated activities required for TCP = 0.99 (Ã(TCP=0.99)) were 1.5-2 and 2-3 times higher when the activity was distributed on the cell membrane compared to in the cell nucleus for 103mRh and 211At, respectively. TCP for 90Y was not affected by different radionuclide distributions, whereas for 177Lu it was slightly affected when the radionuclide was in the nucleus. TCP for 103mRh and 211At were affected by different radionuclide distributions to a great extent when the radionuclides were in the cell nucleus and to lesser extents when the radionuclides were distributed on the cell membrane or in the cytoplasm. When the activity was distributed in the nucleus, Ã(TCP=0.99) increased when the activity distribution became more heterogeneous for 103mRh and 211At, and the increase was large when the activity was normally distributed compared to log-normally distributed. When the activity was distributed on the cell membrane, Ã(TCP=0.99) was not affected for 103mRh and 211At when the activity distribution became more heterogeneous. Ã(TCP=0.99) for 90Y and 177Lu were not affected by different activity distributions, neither macroscopic nor subcellular
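How dose heterogeneity among cells degrades TCP can be illustrated with the standard Poissonian TCP model, TCP = exp(-expected number of surviving cells). The cell number, radiosensitivity and dose levels below are hypothetical, and the sketch ignores the subcellular geometry and radionuclide-specific dosimetry treated in the study:

```python
import numpy as np

rng = np.random.default_rng(5)

def tcp(doses, alpha=0.3):
    """Poissonian tumour control probability: each cell survives with
    probability exp(-alpha * D); TCP = exp(-expected survivors)."""
    survivors = np.exp(-alpha * doses).sum()
    return float(np.exp(-survivors))

n_cells = 100_000
mean_dose = 100.0  # Gy, the same mean in both scenarios

uniform = np.full(n_cells, mean_dose)
# Log-normal cell doses with the same mean: heterogeneous uptake.
sigma = 0.5
lognormal = rng.lognormal(np.log(mean_dose) - sigma**2 / 2, sigma, n_cells)

print("uniform       TCP:", tcp(uniform))
print("heterogeneous TCP:", tcp(lognormal))
```

The cold (low-dose) tail of the heterogeneous distribution dominates the survivor count, so TCP drops even though the mean dose is unchanged, mirroring the study's finding that more heterogeneous activity distributions require larger cumulated activities for the same TCP.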

  6. THREE-MOMENT BASED APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING SYSTEMS

    Directory of Open Access Journals (Sweden)

    T. I. Aliev

    2014-03-01

    Full Text Available The paper deals with the problem of approximating probability distributions of random variables defined on the positive real axis with a coefficient of variation different from unity. When queueing systems are used as models for computer networks, characteristics are usually calculated at the level of expectation and variance. At the same time, one of the main characteristics of multimedia data transmission quality in computer networks is delay jitter. For jitter calculation the distribution function of packet time delay should be known. It is shown that changing the third moment of the packet delay distribution changes the calculated jitter by tens or hundreds of percent, with the same values of the first two moments - expectation value and delay variation coefficient. This means that the delay distribution approximation used for jitter calculation should match the third moment of the delay distribution. For random variables with coefficients of variation greater than unity, an iterative approximation algorithm with a hyper-exponential two-phase distribution based on three moments of the approximated distribution is offered. It is shown that for random variables with coefficients of variation less than unity, the impact of the third moment of the distribution becomes negligible, and for approximation of such distributions an Erlang distribution matching the first two moments should be used. This approach makes it possible to obtain upper bounds for the relevant characteristics, particularly the upper bound of delay jitter.
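The three-moment hyper-exponential fit can be sketched numerically: choose the branch probability and the two rates of a two-phase hyper-exponential (H2) distribution so that its first three raw moments match the target. Here a generic root-finder stands in for the paper's iterative algorithm, and the target moments are generated from a known H2 so that an exact fit exists:

```python
import numpy as np
from scipy.optimize import fsolve

def h2_moments(p, l1, l2):
    """First three raw moments of a two-phase hyperexponential H2:
    an Exp(l1) branch with probability p, an Exp(l2) branch otherwise."""
    m1 = p / l1 + (1 - p) / l2
    m2 = 2 * (p / l1**2 + (1 - p) / l2**2)
    m3 = 6 * (p / l1**3 + (1 - p) / l2**3)
    return m1, m2, m3

# Target moments of a delay distribution with coefficient of variation > 1
# (produced here from a known H2 so the moment equations are solvable).
target = h2_moments(0.3, 2.0, 0.5)

def residuals(x):
    p, l1, l2 = x
    return np.subtract(h2_moments(p, l1, l2), target)

p, l1, l2 = fsolve(residuals, x0=[0.4, 1.8, 0.6])
m1, m2, m3 = h2_moments(p, l1, l2)
cv = np.sqrt(m2 - m1**2) / m1
print(f"fit: p={p:.3f}, l1={l1:.3f}, l2={l2:.3f}, cv={cv:.3f}")
```

For cv < 1 this system has no valid H2 solution, which is where the paper switches to a two-moment Erlang approximation instead.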

  7. Mixing times in quantum walks on two-dimensional grids

    International Nuclear Information System (INIS)

    Marquezino, F. L.; Portugal, R.; Abal, G.

    2010-01-01

    Mixing properties of discrete-time quantum walks on two-dimensional grids with toruslike boundary conditions are analyzed, focusing on their connection to the complexity of the corresponding abstract search algorithm. In particular, an exact expression for the stationary distribution of the coherent walk over odd-sided lattices is obtained after solving the eigenproblem for the evolution operator for this particular graph. The limiting distribution and mixing time of a quantum walk with a coin operator modified as in the abstract search algorithm are obtained numerically. On the basis of these results, the relation between the mixing time of the modified walk and the running time of the corresponding abstract search algorithm is discussed.

  8. Two-dimensional void reconstruction by neutron transmission

    International Nuclear Information System (INIS)

    Zakaib, G.D.; Harms, A.A.; Vlachopoulos, J.

    1978-01-01

Contemporary algebraic reconstruction methods are utilized in investigating the two-dimensional void distribution in a water analog from neutron transmission measurements. The aim is ultimately to apply these techniques to the determination of time-averaged void distributions in two-phase flow systems, as well as for potential usage in neutron radiography. Initially, projection data were obtained from a digitized model of a hypothetical two-phase representation and later from neutron beam traverses across a voided methacrylate plastic model. From 10 to 15 views were incorporated, and decoupling of overlapped measurements was utilized to afford greater resolution. In general, the additive Algebraic Reconstruction Technique yielded the best reconstructions, with other methods showing promise for noisy data. Results indicate the need for some further development of the method in interpreting real data.
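The additive Algebraic Reconstruction Technique cycles through the ray-sum equations, spreading each ray's residual back over the pixels that ray crosses. A minimal sketch of that update rule, with a tiny hypothetical projection geometry rather than the paper's actual measurements:

```python
def art_reconstruct(A, b, n_iter=500, relax=1.0):
    """Additive ART / Kaczmarz iteration.

    A: list of rows, one per ray; A[r][j] is the weight of pixel j in ray r.
    b: measured ray sums. Each sweep corrects the image so every ray equation
    is satisfied in turn, distributing the residual in proportion to the weights.
    """
    n = len(A[0])
    x = [0.0] * n
    for _ in range(n_iter):
        for row, meas in zip(A, b):
            norm = sum(a * a for a in row)
            if norm == 0.0:
                continue
            resid = meas - sum(a * xi for a, xi in zip(row, x))
            c = relax * resid / norm
            for j, a in enumerate(row):
                x[j] += c * a
    return x
```

On a consistent system (here a 2x2 image probed by row, column and one diagonal sums) the iteration converges to the true pixel values.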

  9. Moderator feedback effects in two-dimensional nodal methods for pressurized water reactor analysis

    International Nuclear Information System (INIS)

    Downar, T.J.

    1987-01-01

A method was developed for incorporating moderator feedback effects in two-dimensional nodal codes used for pressurized water reactor (PWR) neutronic analysis. Equations for the assembly-average quality and density are developed in terms of the assembly power calculated in two dimensions. The method is validated for a Westinghouse PWR using the Electric Power Research Institute code SIMULATE-E. Results show that a several-percent improvement in the two-dimensional power distribution prediction is achieved compared with methods without moderator feedback.

  10. A microcomputer program for energy assessment and aggregation using the triangular probability distribution

    Science.gov (United States)

    Crovelli, R.A.; Balay, R.H.

    1991-01-01

A general risk-analysis method was developed for petroleum-resource assessment and other applications. The triangular probability distribution is used as a model with an analytic aggregation methodology based on probability theory rather than Monte Carlo simulation. Among the advantages of the analytic method are its computational speed and flexibility, and the saving of time and cost on a microcomputer. The input into the model consists of a set of components (e.g. geologic provinces) and, for each component, three potential resource estimates: minimum, most likely (mode), and maximum. Assuming a triangular probability distribution, the mean, standard deviation, and seven fractiles (F100, F95, F75, F50, F25, F5, and F0) are computed for each component, where, for example, the probability of more than F95 is equal to 0.95. The components are aggregated by combining the means, standard deviations, and respective fractiles under three possible situations: (1) perfect positive correlation, (2) complete independence, and (3) any degree of dependence between these two polar situations. A package of computer programs named the TRIAGG system was written in the Turbo Pascal 4.0 language for performing the analytic probabilistic methodology. The system consists of a program for processing triangular probability distribution assessments and aggregations, and a separate aggregation routine for aggregating aggregations. The user's documentation and program diskette of the TRIAGG system are available from USGS Open File Services. TRIAGG requires an IBM-PC/XT/AT compatible microcomputer with 256 Kbyte of main memory, MS-DOS 3.1 or later, either two diskette drives or a fixed disk, and a 132-column printer. A graphics adapter and color display are optional.
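The core computations behind such a system are closed-form: the moments and fractiles of a triangular distribution, and aggregation of component means and variances. A sketch under our own naming (TRIAGG's actual Turbo Pascal interface is not reproduced); note that the text's F95 corresponds to the p = 0.05 point of the usual CDF, and the aggregation shown is the complete-independence case:

```python
import math

def tri_stats(a, m, b):
    """Mean and variance of a triangular distribution with min a, mode m, max b."""
    mean = (a + m + b) / 3.0
    var = (a * a + m * m + b * b - a * m - a * b - m * b) / 18.0
    return mean, var

def tri_fractile(a, m, b, p):
    """Value x with P(X <= x) = p, by inverting the piecewise-quadratic CDF."""
    fm = (m - a) / (b - a)          # CDF value at the mode
    if p <= fm:
        return a + math.sqrt(p * (b - a) * (m - a))
    return b - math.sqrt((1.0 - p) * (b - a) * (b - m))

def aggregate_independent(components):
    """Aggregate (mean, variance) pairs assuming complete independence:
    means add, variances add."""
    mean = sum(c[0] for c in components)
    sd = math.sqrt(sum(c[1] for c in components))
    return mean, sd
```

Under perfect positive correlation the standard deviations (not the variances) would add instead, which is the other polar case the abstract describes.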

11. Rank-Ordered Multifractal Analysis (ROMA) of probability distributions in fluid turbulence

    Directory of Open Access Journals (Sweden)

    C. C. Wu

    2011-04-01

Full Text Available Rank-Ordered Multifractal Analysis (ROMA) was introduced by Chang and Wu (2008) to describe the multifractal characteristic of intermittent events. The procedure provides a natural connection between the rank-ordered spectrum and the idea of one-parameter scaling for monofractals. This technique has successfully been applied to MHD turbulence simulations and turbulence data observed in various space plasmas. In this paper, the technique is applied to the probability distributions in the inertial range of the turbulent fluid flow, as given in the vast Johns Hopkins University (JHU) turbulence database. In addition, a new way of finding the continuous ROMA spectrum and the scaled probability distribution function (PDF) simultaneously is introduced.

  12. The force distribution probability function for simple fluids by density functional theory.

    Science.gov (United States)

    Rickayzen, G; Heyes, D M

    2013-02-28

Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(-AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft-sphere fluids at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low a density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.
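If the net force is a three-dimensional vector with Gaussian components, P(F) ∝ exp(-AF²) implies a magnitude distribution W(F) ∝ 4πF² exp(-AF²) with mean-square force ⟨F²⟩ = 3/(2A). A numerical check of that relationship (our own sketch; the value of A is arbitrary, and the 3D assumption is ours):

```python
import math

def force_magnitude_pdf(A, fmax=None, n=20000):
    """Numerically normalise W(F) = C * 4*pi*F**2 * exp(-A*F**2) on a grid
    and return the grid, the normalised pdf, and the mean-square force."""
    if fmax is None:
        fmax = 8.0 / math.sqrt(A)          # far into the Gaussian tail
    dF = fmax / n
    F = [i * dF for i in range(n + 1)]
    w = [4.0 * math.pi * f * f * math.exp(-A * f * f) for f in F]
    Z = sum(w) * dF                         # simple quadrature is adequate here
    w = [wi / Z for wi in w]
    msq = sum(f * f * wi for f, wi in zip(F, w)) * dF
    return F, w, msq
```

With A = 2 the computed ⟨F²⟩ comes out close to 3/(2·2) = 0.75, as the Gaussian form requires.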

  13. Exact solutions and symmetry analysis for the limiting probability distribution of quantum walks

    International Nuclear Information System (INIS)

    Xu, Xin-Ping; Ide, Yusuke

    2016-01-01

    In the literature, there are numerous studies of one-dimensional discrete-time quantum walks (DTQWs) using a moving shift operator. However, there is no exact solution for the limiting probability distributions of DTQWs on cycles using a general coin or swapping shift operator. In this paper, we derive exact solutions for the limiting probability distribution of quantum walks using a general coin and swapping shift operator on cycles for the first time. Based on the exact solutions, we show how to generate symmetric quantum walks and determine the condition under which a symmetric quantum walk appears. Our results suggest that choosing various coin and initial state parameters can achieve a symmetric quantum walk. By defining a quantity to measure the variation of symmetry, deviation and mixing time of symmetric quantum walks are also investigated.

  14. Exact solutions and symmetry analysis for the limiting probability distribution of quantum walks

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Xin-Ping, E-mail: xuxp@mail.ihep.ac.cn [School of Physical Science and Technology, Soochow University, Suzhou 215006 (China); Ide, Yusuke [Department of Information Systems Creation, Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa, 221-8686 (Japan)

    2016-10-15

    In the literature, there are numerous studies of one-dimensional discrete-time quantum walks (DTQWs) using a moving shift operator. However, there is no exact solution for the limiting probability distributions of DTQWs on cycles using a general coin or swapping shift operator. In this paper, we derive exact solutions for the limiting probability distribution of quantum walks using a general coin and swapping shift operator on cycles for the first time. Based on the exact solutions, we show how to generate symmetric quantum walks and determine the condition under which a symmetric quantum walk appears. Our results suggest that choosing various coin and initial state parameters can achieve a symmetric quantum walk. By defining a quantity to measure the variation of symmetry, deviation and mixing time of symmetric quantum walks are also investigated.

  15. An experimental study of the surface elevation probability distribution and statistics of wind-generated waves

    Science.gov (United States)

    Huang, N. E.; Long, S. R.

    1980-01-01

Laboratory experiments were performed to measure the surface elevation probability density function and associated statistical properties for a wind-generated wave field. The laboratory data were compared with some limited field data. The statistical properties of the surface elevation were processed for comparison with the results derived from the Longuet-Higgins (1963) theory. It is found that, even for the highly non-Gaussian cases, the distribution function proposed by Longuet-Higgins still gives good approximations.

  16. Topology of two-dimensional turbulent flows of dust and gas

    Science.gov (United States)

    Mitra, Dhrubaditya; Perlekar, Prasad

    2018-04-01

We perform direct numerical simulations (DNS) of passive heavy inertial particles (dust) in homogeneous and isotropic two-dimensional turbulent flows (gas) for a range of Stokes numbers St. DNS confirms that the statistics of topological properties of B are the same in Eulerian and Lagrangian frames only if the Eulerian data are weighted by the dust density. We use this correspondence to study the statistics of topological properties of A in the Lagrangian frame from our Eulerian simulations by calculating density-weighted probability distribution functions. We further find that in the Lagrangian frame, the mean value of the trace of A is negative and its magnitude increases with St approximately as exp(-C/St), with a constant C ≈ 0.1. The statistical distribution of the different topological structures that appear in the dust flow differs between the Eulerian and Lagrangian (density-weighted Eulerian) cases, particularly for St close to unity. In both cases, for small St the topological structures have close to zero divergence and are either vortical (elliptic) or strain dominated (hyperbolic, saddle). As St increases, the contribution to negative divergence comes mostly from saddles and the contribution to positive divergence comes from both vortices and saddles. Compared to the Eulerian case, the Lagrangian (density-weighted Eulerian) case has fewer outward spirals and more converging saddles. Inward spirals are the least probable topological structures in both cases.

  17. Probability distribution of pitting corrosion depth and rate in underground pipelines: A Monte Carlo study

    International Nuclear Information System (INIS)

    Caleyo, F.; Velazquez, J.C.; Valor, A.; Hallen, J.M.

    2009-01-01

    The probability distributions of external-corrosion pit depth and pit growth rate were investigated in underground pipelines using Monte Carlo simulations. The study combines a predictive pit growth model developed by the authors with the observed distributions of the model variables in a range of soils. Depending on the pipeline age, any of the three maximal extreme value distributions, i.e. Weibull, Frechet or Gumbel, can arise as the best fit to the pitting depth and rate data. The Frechet distribution best fits the corrosion data for long exposure periods. This can be explained by considering the long-term stabilization of the diffusion-controlled pit growth. The findings of the study provide reliability analysts with accurate information regarding the stochastic characteristics of the pitting damage in underground pipelines.
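As an illustration of how one of these extreme-value fits might look in practice, here is a method-of-moments Gumbel fit, with synthetic samples drawn by inverting the Gumbel CDF. This is our own sketch, not the authors' predictive pit growth model, and the parameters are arbitrary:

```python
import math
import random

EULER_GAMMA = 0.5772156649015329

def gumbel_sample(mu, beta, n, seed=7):
    """Draw Gumbel (max) variates by inverse CDF: x = mu - beta*ln(-ln(u))."""
    rng = random.Random(seed)
    return [mu - beta * math.log(-math.log(rng.random())) for _ in range(n)]

def gumbel_fit_moments(data):
    """Method-of-moments Gumbel fit, using mean = mu + gamma*beta and
    std = pi*beta/sqrt(6), with gamma the Euler-Mascheroni constant."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - EULER_GAMMA * beta
    return mu, beta
```

A large synthetic sample recovers the generating parameters to within sampling error; the Weibull and Frechet members of the extreme value family admit analogous moment or maximum-likelihood fits.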

  18. Probability distribution of pitting corrosion depth and rate in underground pipelines: A Monte Carlo study

    Energy Technology Data Exchange (ETDEWEB)

    Caleyo, F. [Departamento de Ingenieria Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, 07738 Mexico, D.F. (Mexico)], E-mail: fcaleyo@gmail.com; Velazquez, J.C. [Departamento de Ingenieria Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, 07738 Mexico, D.F. (Mexico); Valor, A. [Facultad de Fisica, Universidad de La Habana, San Lazaro y L, Vedado, 10400, La Habana (Cuba); Hallen, J.M. [Departamento de Ingenieria Metalurgica, ESIQIE, IPN, UPALM Edif. 7, Zacatenco, 07738 Mexico, D.F. (Mexico)

    2009-09-15

    The probability distributions of external-corrosion pit depth and pit growth rate were investigated in underground pipelines using Monte Carlo simulations. The study combines a predictive pit growth model developed by the authors with the observed distributions of the model variables in a range of soils. Depending on the pipeline age, any of the three maximal extreme value distributions, i.e. Weibull, Frechet or Gumbel, can arise as the best fit to the pitting depth and rate data. The Frechet distribution best fits the corrosion data for long exposure periods. This can be explained by considering the long-term stabilization of the diffusion-controlled pit growth. The findings of the study provide reliability analysts with accurate information regarding the stochastic characteristics of the pitting damage in underground pipelines.

  19. A least squares approach to estimating the probability distribution of unobserved data in multiphoton microscopy

    Science.gov (United States)

    Salama, Paul

    2008-02-01

    Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep into tissues. Unfortunately deep tissue multi-photon microscopy images are in general noisy since they are acquired at low photon counts. To aid in the analysis and segmentation of such images it is sometimes necessary to initially enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising an image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution it is assumed that the noise is Poisson distributed, the true but unknown pixel values assume a probability mass function over a finite set of non-negative values, and since the observed data also assumes finite values because of low photon counts, the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function with these assumptions.
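Once a probability mass function over a finite set of candidate pixel values is available, the per-pixel MAP step reduces to maximizing log-likelihood plus log-prior under the Poisson noise assumption. A minimal sketch (the support and prior below are placeholders, not the constrained least-squares estimate the paper describes):

```python
import math

def map_estimate(k, support, prior):
    """MAP estimate of the true pixel value given an observed Poisson count k,
    a finite set of non-negative candidate values, and a prior mass function."""
    best, best_lp = None, -math.inf
    for lam, p in zip(support, prior):
        if p == 0.0 or lam <= 0.0:
            continue
        # log Poisson likelihood (dropping the k!-term, constant in lam) + log prior
        lp = k * math.log(lam) - lam + math.log(p)
        if lp > best_lp:
            best, best_lp = lam, lp
    return best
```

For example, with candidate values {1, 5, 10} and a uniform prior, an observed count of 9 maps to 10 and a count of 4 maps to 5.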

  20. The limiting conditional probability distribution in a stochastic model of T cell repertoire maintenance.

    Science.gov (United States)

    Stirk, Emily R; Lythe, Grant; van den Berg, Hugo A; Hurst, Gareth A D; Molina-París, Carmen

    2010-04-01

    The limiting conditional probability distribution (LCD) has been much studied in the field of mathematical biology, particularly in the context of epidemiology and the persistence of epidemics. However, it has not yet been applied to the immune system. One of the characteristic features of the T cell repertoire is its diversity. This diversity declines in old age, whence the concepts of extinction and persistence are also relevant to the immune system. In this paper we model T cell repertoire maintenance by means of a continuous-time birth and death process on the positive integers, where the origin is an absorbing state. We show that eventual extinction is guaranteed. The late-time behaviour of the process before extinction takes place is modelled by the LCD, which we prove always exists for the process studied here. In most cases, analytic expressions for the LCD cannot be computed but the probability distribution may be approximated by means of the stationary probability distributions of two related processes. We show how these approximations are related to the LCD of the original process and use them to study the LCD in two special cases. We also make use of the large N expansion to derive a further approximation to the LCD. The accuracy of the various approximations is then analysed. (c) 2009 Elsevier Inc. All rights reserved.
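The LCD can be approximated numerically: truncate the birth-and-death process at a large state N, evolve a small-time-step transition kernel, and renormalise over the non-absorbed states; the iteration converges to an approximation of the quasi-stationary distribution. This sketch is our own generic illustration, not one of the paper's specific approximating processes:

```python
def quasi_stationary(birth, death, N, n_iter=5000):
    """Approximate the LCD of a birth-death process absorbed at 0, truncated at N.

    birth(i), death(i) give the rates in state i >= 1. Each step applies a
    discretised transition kernel and renormalises the surviving mass, so the
    iterate converges to the (truncated) quasi-stationary distribution.
    """
    dt = 0.5 / max(birth(i) + death(i) for i in range(1, N + 1))
    q = [0.0] + [1.0 / N] * N          # index 0 is the absorbing state
    for _ in range(n_iter):
        nxt = [0.0] * (N + 1)
        for i in range(1, N + 1):
            b = birth(i) if i < N else 0.0   # reflect the truncation boundary
            d = death(i)
            nxt[i] += q[i] * (1.0 - dt * (b + d))
            if i < N:
                nxt[i + 1] += q[i] * dt * b
            nxt[i - 1] += q[i] * dt * d
        s = sum(nxt[1:])                # discard absorbed mass: condition on survival
        q = [0.0] + [x / s for x in nxt[1:]]
    return q[1:]                        # distribution over states 1..N
```

For a death-dominated chain (e.g. constant rates with death twice birth) the resulting distribution is concentrated near the absorbing boundary, mirroring the late-time behaviour before extinction that the LCD describes.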

  1. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    Science.gov (United States)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z)  =  0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f  =  0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.

  2. The probability distribution of intergranular stress corrosion cracking life for sensitized 304 stainless steels in high temperature, high purity water

    International Nuclear Information System (INIS)

    Akashi, Masatsune; Kenjyo, Takao; Matsukura, Shinji; Kawamoto, Teruaki

    1984-01-01

In order to discuss the probability distribution of intergranular stress corrosion cracking life for sensitized 304 stainless steels, a series of creviced bent beam (CBB) and uni-axial constant load tests were carried out in oxygenated high temperature, high purity water. The following conclusions were drawn: (1) Based on the CBB test results, the initiation process of intergranular stress corrosion cracking can be approximated by a Poisson stochastic process. (2) The probability distribution of intergranular stress corrosion cracking life may consequently be approximated by the exponential probability distribution. (3) The experimental data could be fitted to the exponential probability distribution. (author)
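The step from the first conclusion to the second is the standard fact that for a Poisson initiation process with rate λ, the time to the first crack is exponentially distributed, with survival function S(t) = exp(-λt). A quick numerical illustration of that link (the rate and horizon here are arbitrary, not the paper's test data):

```python
import math
import random

def first_crack_times(rate, n, seed=1):
    """Sample n first-initiation times for a Poisson process with the given rate;
    these are exponential, so S(t) = exp(-rate * t)."""
    rng = random.Random(seed)
    return [rng.expovariate(rate) for _ in range(n)]

def empirical_survival(times, t):
    """Fraction of sampled lives exceeding t."""
    return sum(1 for x in times if x > t) / len(times)
```

With rate 2 per unit time, the empirical survival at t = 0.5 is close to exp(-1) ≈ 0.368.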

  3. Two-dimensional vibrational-electronic spectroscopy

    Science.gov (United States)

    Courtney, Trevor L.; Fox, Zachary W.; Slenkamp, Karla M.; Khalil, Munira

    2015-10-01

    Two-dimensional vibrational-electronic (2D VE) spectroscopy is a femtosecond Fourier transform (FT) third-order nonlinear technique that creates a link between existing 2D FT spectroscopies in the vibrational and electronic regions of the spectrum. 2D VE spectroscopy enables a direct measurement of infrared (IR) and electronic dipole moment cross terms by utilizing mid-IR pump and optical probe fields that are resonant with vibrational and electronic transitions, respectively, in a sample of interest. We detail this newly developed 2D VE spectroscopy experiment and outline the information contained in a 2D VE spectrum. We then use this technique and its single-pump counterpart (1D VE) to probe the vibrational-electronic couplings between high frequency cyanide stretching vibrations (νCN) and either a ligand-to-metal charge transfer transition ([FeIII(CN)6]3- dissolved in formamide) or a metal-to-metal charge transfer (MMCT) transition ([(CN)5FeIICNRuIII(NH3)5]- dissolved in formamide). The 2D VE spectra of both molecules reveal peaks resulting from coupled high- and low-frequency vibrational modes to the charge transfer transition. The time-evolving amplitudes and positions of the peaks in the 2D VE spectra report on coherent and incoherent vibrational energy transfer dynamics among the coupled vibrational modes and the charge transfer transition. The selectivity of 2D VE spectroscopy to vibronic processes is evidenced from the selective coupling of specific νCN modes to the MMCT transition in the mixed valence complex. The lineshapes in 2D VE spectra report on the correlation of the frequency fluctuations between the coupled vibrational and electronic frequencies in the mixed valence complex which has a time scale of 1 ps. The details and results of this study confirm the versatility of 2D VE spectroscopy and its applicability to probe how vibrations modulate charge and energy transfer in a wide range of complex molecular, material, and biological systems.

  4. Two-dimensional silica opens new perspectives

    Science.gov (United States)

    Büchner, Christin; Heyde, Markus

    2017-12-01

    In recent years, silica films have emerged as a novel class of two-dimensional (2D) materials. Several groups succeeded in epitaxial growth of ultrathin SiO2 layers using different growth methods and various substrates. The structures consist of tetrahedral [SiO4] building blocks in two mirror symmetrical planes, connected via oxygen bridges. This arrangement is called a silica bilayer as it is the thinnest 2D arrangement with the stoichiometry SiO2 known today. With all bonds saturated within the nano-sheet, the interaction with the substrate is based on van der Waals forces. Complex ring networks are observed, including hexagonal honeycomb lattices, point defects and domain boundaries, as well as amorphous domains. The network structures are highly tuneable through variation of the substrate, deposition parameters, cooling procedure, introducing dopants or intercalating small species. The amorphous networks and structural defects were resolved with atomic resolution microscopy and modeled with density functional theory and molecular dynamics. Such data contribute to our understanding of the formation and characteristic motifs of glassy systems. Growth studies and doping with other chemical elements reveal ways to tune ring sizes and defects as well as chemical reactivities. The pristine films have been utilized as molecular sieves and for confining molecules in nanocatalysis. Post growth hydroxylation can be used to tweak the reactivity as well. The electronic properties of silica bilayers are favourable for using silica as insulators in 2D material stacks. Due to the fully saturated atomic structure, the bilayer interacts weakly with the substrate and can be described as quasi-freestanding. Recently, a mm-scale film transfer under structure retention has been demonstrated. The chemical and mechanical stability of silica bilayers is very promising for technological applications in 2D heterostacks. Due to the impact of this bilayer system for glass science

  5. Two-dimensional vibrational-electronic spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Courtney, Trevor L.; Fox, Zachary W.; Slenkamp, Karla M.; Khalil, Munira, E-mail: mkhalil@uw.edu [Department of Chemistry, University of Washington, Box 351700, Seattle, Washington 98195 (United States)

    2015-10-21

Two-dimensional vibrational-electronic (2D VE) spectroscopy is a femtosecond Fourier transform (FT) third-order nonlinear technique that creates a link between existing 2D FT spectroscopies in the vibrational and electronic regions of the spectrum. 2D VE spectroscopy enables a direct measurement of infrared (IR) and electronic dipole moment cross terms by utilizing mid-IR pump and optical probe fields that are resonant with vibrational and electronic transitions, respectively, in a sample of interest. We detail this newly developed 2D VE spectroscopy experiment and outline the information contained in a 2D VE spectrum. We then use this technique and its single-pump counterpart (1D VE) to probe the vibrational-electronic couplings between high frequency cyanide stretching vibrations (νCN) and either a ligand-to-metal charge transfer transition ([FeIII(CN)6]3− dissolved in formamide) or a metal-to-metal charge transfer (MMCT) transition ([(CN)5FeIICNRuIII(NH3)5]− dissolved in formamide). The 2D VE spectra of both molecules reveal peaks resulting from coupled high- and low-frequency vibrational modes to the charge transfer transition. The time-evolving amplitudes and positions of the peaks in the 2D VE spectra report on coherent and incoherent vibrational energy transfer dynamics among the coupled vibrational modes and the charge transfer transition. The selectivity of 2D VE spectroscopy to vibronic processes is evidenced from the selective coupling of specific νCN modes to the MMCT transition in the mixed valence complex. The lineshapes in 2D VE spectra report on the correlation of the frequency fluctuations between the coupled vibrational and electronic frequencies in the mixed valence complex which has a time scale of 1 ps. The details and results of this study confirm the versatility of 2D VE spectroscopy and its applicability to probe how vibrations modulate charge and energy transfer in a

  6. Analytical models of probability distribution and excess noise factor of solid state photomultiplier signals with crosstalk

    International Nuclear Information System (INIS)

    Vinogradov, S.

    2012-01-01

    Silicon Photomultipliers (SiPM), also called Solid State Photomultipliers (SSPM), are based on Geiger mode avalanche breakdown that is limited by a strong negative feedback. An SSPM can detect and resolve single photons due to the high gain and ultra-low excess noise of avalanche multiplication in this mode. Crosstalk and afterpulsing processes associated with the high gain introduce specific excess noise and deteriorate the photon number resolution of the SSPM. The probabilistic features of these processes are widely studied because of its significance for the SSPM design, characterization, optimization and application, but the process modeling is mostly based on Monte Carlo simulations and numerical methods. In this study, crosstalk is considered to be a branching Poisson process, and analytical models of probability distribution and excess noise factor (ENF) of SSPM signals based on the Borel distribution as an advance on the geometric distribution models are presented and discussed. The models are found to be in a good agreement with the experimental probability distributions for dark counts and a few photon spectrums in a wide range of fired pixels number as well as with observed super-linear behavior of crosstalk ENF.
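The Borel distribution referenced above has pmf P(n) = e^(−μn)(μn)^(n−1)/n! for n ≥ 1, with mean 1/(1−μ); its excess noise factor, defined as ⟨n²⟩/⟨n⟩², also works out to 1/(1−μ). A numeric check of both facts (our own sketch, computed in log space to stay stable at large n; μ here is the branching/crosstalk parameter):

```python
import math

def borel_pmf(n, mu):
    """Borel pmf P(n) = exp(-mu*n) * (mu*n)**(n-1) / n!, n >= 1, 0 < mu < 1.
    Evaluated via logs so large n does not overflow."""
    logp = -mu * n + (n - 1) * math.log(mu * n) - math.lgamma(n + 1)
    return math.exp(logp)

def borel_enf(mu, nmax=200):
    """Excess noise factor <n^2>/<n>^2 of the Borel distribution,
    computed by direct (truncated) summation; analytically 1/(1-mu)."""
    m1 = sum(n * borel_pmf(n, mu) for n in range(1, nmax + 1))
    m2 = sum(n * n * borel_pmf(n, mu) for n in range(1, nmax + 1))
    return m2 / (m1 * m1)
```

For μ = 0.2 the truncated sums give a total probability of 1, a mean of 1.25 and an ENF of 1/(1 − 0.2) = 1.25, consistent with the analytic model.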

  7. A formalism to generate probability distributions for performance-assessment modeling

    International Nuclear Information System (INIS)

    Kaplan, P.G.

    1990-01-01

    A formalism is presented for generating probability distributions of parameters used in performance-assessment modeling. The formalism is used when data are either sparse or nonexistent. The appropriate distribution is a function of the known or estimated constraints and is chosen to maximize a quantity known as Shannon's informational entropy. The formalism is applied to a parameter used in performance-assessment modeling. The functional form of the model that defines the parameter, data from the actual field site, and natural analog data are analyzed to estimate the constraints. A beta probability distribution of the example parameter is generated after finding four constraints. As an example of how the formalism is applied to the site characterization studies of Yucca Mountain, the distribution is generated for an input parameter in a performance-assessment model currently used to estimate compliance with disposal of high-level radioactive waste in geologic repositories, 10 CFR 60.113(a)(2), commonly known as the ground water travel time criterion. 8 refs., 2 figs
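The entropy-maximisation step has a simple discrete analogue: with only a mean constraint, the maximum-entropy pmf over a finite set of values takes the Gibbs form p_i ∝ exp(-λx_i), and λ can be found by bisection on the constraint. This is a toy sketch of the principle only; the paper itself derives a continuous beta distribution from four constraints:

```python
import math

def maxent_with_mean(values, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Maximum-entropy pmf over a finite set of values with a fixed mean.
    The solution is p_i proportional to exp(-lam * values[i]); lam is found
    by bisection, since the implied mean decreases monotonically in lam."""
    def mean_for(lam):
        w = [math.exp(-lam * v) for v in values]
        Z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / Z

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid          # mean too high: increase lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * v) for v in values]
    Z = sum(w)
    return [wi / Z for wi in w]
```

With values {0, 1, 2} and a target mean of 1, the constraint is symmetric, λ = 0, and the maxent answer is the uniform distribution, as expected when the constraints carry no further information.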

  8. Research on Energy-Saving Design of Overhead Travelling Crane Camber Based on Probability Load Distribution

    Directory of Open Access Journals (Sweden)

    Tong Yifei

    2014-01-01

Full Text Available A crane is a mechanical device used widely to move materials in modern production. It is reported that energy consumption in China is at least 5-8 times that of other developing countries. Thus, energy consumption becomes an unavoidable topic. Several factors contribute to the energy loss, and the camber of the girder is one not to be neglected. In this paper, the problem of the deflections induced by the moving payload in the girder of an overhead travelling crane is examined. The evaluation of a camber giving a counterdeflection of the girder is proposed in order to minimise the energy consumed by the trolley in moving along a nonstraight support. To this aim, probabilistic payload distributions are considered instead of the fixed or rated loads used in other research. Taking a 50/10 t bridge crane as the research object, the probability loads are determined by analysis of load distribution density functions. According to the load distribution, camber design under different probability loads is discussed in detail, as well as the distribution of energy consumption. The research results provide a design reference for a reasonable camber giving the least energy consumption for climbing corresponding to different P0; thus energy-saving design can be achieved.

  9. Two-dimensional spectrophotometry of planetary nebulae by CCD imaging

    International Nuclear Information System (INIS)

    Jacoby, G.H.; Africano, J.L.; Quigley, R.J.; Western Washington Univ., Bellingham, WA)

    1987-01-01

The spatial distribution of the electron temperature and density and the ionic abundances of O(+), O(2+), N(+), and S(+) have been derived from CCD images of the planetary nebulae NGC 40 and NGC 6826 taken in the important emission lines of forbidden O II, forbidden O III, H-beta, forbidden N II, and forbidden S II. The steps required in the derivation of the absolute fluxes, line ratios, and ionic abundances are outlined and then discussed in greater detail. The results show that the CCD imaging technique for two-dimensional spectrophotometry can effectively compete with classical spectrophotometry, providing the added benefits of complete spatial coverage at seeing-disk spatial resolution. The multiplexing in the spatial dimension, however, results in a loss of spectral information, since only one emission line is observed at any one time. 37 references

  10. Cooperation in two-dimensional mixed-games

    International Nuclear Information System (INIS)

    Amaral, Marco A; Silva, Jafferson K L da; Wardil, Lucas

    2015-01-01

Evolutionary game theory is a common framework for studying the evolution of cooperation, in which it is usually assumed that the same game is played in all interactions. Here, we investigate a model where the game played by two individuals is drawn uniformly from a sample of two different games. Using the master equation approach we show that the random mixture of two games is equivalent to playing the average game when (i) the strategies are statistically independent of the game distribution and (ii) the transition rates are linear functions of the payoffs. We also use Monte Carlo simulations on a two-dimensional lattice and mean-field techniques to investigate the scenario in which the two above conditions do not hold. We find that even outside of such conditions, several quantities characterizing the mixed games are still the same as those obtained in the average game when the two games are not very different. (paper)

  11. Acoustic metamaterials for new two-dimensional sonic devices

    Energy Technology Data Exchange (ETDEWEB)

    Torrent, Daniel; Sanchez-Dehesa, Jose [Wave Phenomena Group, Department of Electronic Engineering, Polytechnic University of Valencia, C/Camino de Vera sn, E-46022 Valencia (Spain)

    2007-09-15

    It has been shown that two-dimensional arrays of rigid or fluidlike cylinders in a fluid or a gas define, in the limit of large wavelengths, a class of acoustic metamaterials whose effective parameters (sound velocity and density) can be tailored up to a certain limit. This work goes a step further by considering arrays of solid cylinders in which the elastic properties of cylinders are taken into account. We have also treated mixtures of two different elastic cylinders. It is shown that both effects broaden the range of acoustic parameters available for designing metamaterials. For example, it is predicted that metamaterials with perfect matching of impedance with air are now possible by using aerogel and rigid cylinders equally distributed in a square lattice. As a potential application of the proposed metamaterial, we present a gradient index lens for airborne sound (i.e. a sonic Wood lens) whose functionality is demonstrated by multiple scattering simulations.

  12. New method for extracting tumors in PET/CT images based on the probability distribution

    International Nuclear Information System (INIS)

    Nitta, Shuhei; Hontani, Hidekata; Hukami, Tadanori

    2006-01-01

    In this report, we propose a method for extracting tumors from PET/CT images by referring to the probability distribution of pixel values in the PET image. In the proposed method, first, the organs that normally take up fluorodeoxyglucose (FDG) (e.g., the liver, kidneys, and brain) are extracted. Then, the tumors are extracted from the images. The distribution of pixel values in PET images differs in each region of the body. Therefore, the threshold for detecting tumors is adaptively determined by referring to the distribution. We applied the proposed method to 37 cases and evaluated its performance. This report also presents the results of experiments comparing the proposed method and another method in which the pixel values are normalized for extracting tumors. (author)
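The adaptive-threshold idea can be sketched as follows. This is an illustrative stand-in only: the synthetic "PET" region and the percentile-plus-spread rule are assumptions for demonstration, not the paper's actual rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic region of a PET image: background FDG uptake plus a hot lesion.
region = rng.gamma(shape=2.0, scale=1.0, size=(64, 64))   # normal uptake
region[30:34, 30:34] += 12.0                              # hot lesion

# Adaptive threshold derived from the region's own pixel-value
# distribution (hypothetical rule: high percentile plus a robust-spread
# margin), so different body regions get different thresholds.
p95 = np.percentile(region, 95)
mad = np.median(np.abs(region - np.median(region)))
threshold = p95 + 3.0 * mad

tumor_mask = region > threshold
print(tumor_mask.sum(), "pixels flagged")
```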

  13. Flux-probability distributions from the master equation for radiation transport in stochastic media

    International Nuclear Information System (INIS)

    Franke, Brian C.; Prinja, Anil K.

    2011-01-01

    We present numerical investigations into the accuracy of approximations in the master equation for radiation transport in discrete binary random media. Our solutions of the master equation yield probability distributions of particle flux at each element of phase space. We employ the Levermore-Pomraning interface closure and evaluate the effectiveness of closures for the joint conditional flux distribution for estimating scattering integrals. We propose a parameterized model for this joint-pdf closure, varying between correlation neglect and a full-correlation model. The closure is evaluated for a variety of parameter settings. Comparisons are made with benchmark results obtained through suites of fixed-geometry realizations of random media in rod problems. All calculations are performed using Monte Carlo techniques. Accuracy of the approximations in the master equation is assessed by examining the probability distributions for reflection and transmission and by evaluating the moments of the pdfs. The results suggest the correlation-neglect setting in our model performs best and shows improved agreement in the atomic-mix limit. (author)

  14. Measuring sensitivity in pharmacoeconomic studies. Refining point sensitivity and range sensitivity by incorporating probability distributions.

    Science.gov (United States)

    Nuijten, M J

    1999-07-01

    The aim of the present study is to describe a refinement of a previously presented method, based on the concept of point sensitivity, to deal with uncertainty in economic studies. The original method was refined by the incorporation of probability distributions which allow a more accurate assessment of the level of uncertainty in the model. In addition, a bootstrap method was used to create a probability distribution for a fixed input variable based on a limited number of data points. The original method was limited in that the sensitivity measurement was based on a uniform distribution of the variables and that the overall sensitivity measure was based on a subjectively chosen range which excludes the impact of values outside the range on the overall sensitivity. The concepts of the refined method were illustrated using a Markov model of depression. The application of the refined method substantially changed the ranking of the most sensitive variables compared with the original method. The response rate became the most sensitive variable instead of the 'per diem' for hospitalisation. The refinement of the original method yields sensitivity outcomes that better reflect the real uncertainty in economic studies.
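The bootstrap step described above can be sketched in a few lines. The cost data below are invented for illustration; the point is only the mechanics of resampling a small sample to obtain a probability distribution for a model input.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observed values of one model input (e.g. a cost), n = 6.
observed_costs = np.array([310.0, 450.0, 290.0, 520.0, 380.0, 410.0])

# Bootstrap: resample with replacement many times to build an empirical
# distribution of the mean, usable in probabilistic sensitivity analysis.
n_boot = 10_000
boot_means = np.array([
    rng.choice(observed_costs, size=observed_costs.size, replace=True).mean()
    for _ in range(n_boot)
])

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"bootstrap mean {boot_means.mean():.1f}, 95% CI ({lo:.1f}, {hi:.1f})")
```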

  15. Lie algebra contractions on two-dimensional hyperboloid

    International Nuclear Information System (INIS)

    Pogosyan, G. S.; Yakhno, A.

    2010-01-01

    The Inönü-Wigner contraction from the SO(2, 1) group to the Euclidean group E(2) and the pseudo-Euclidean group E(1, 1) is used to relate the separation of variables in the Laplace-Beltrami (Helmholtz) equations for the four corresponding two-dimensional homogeneous spaces: the two-dimensional hyperboloids and the two-dimensional Euclidean and pseudo-Euclidean spaces. We show how the nine systems of coordinates on the two-dimensional hyperboloids contract to the four systems of coordinates on E(2) and the eight on E(1, 1). The text was submitted by the authors in English.

  16. Impact of spike train autostructure on probability distribution of joint spike events.

    Science.gov (United States)

    Pipa, Gordon; Grün, Sonja; van Vreeswijk, Carl

    2013-05-01

    Whether temporally coordinated spiking activity really exists, and whether it is relevant, has been hotly debated over the past few years. To investigate this issue, several approaches have been taken to determine whether synchronized events occur significantly above chance, that is, whether they occur more often than expected if the neurons fire independently. Most investigations ignore or destroy the autostructure of the spiking activity of individual cells or assume Poissonian spiking as a model. Such methods that ignore the autostructure can significantly bias the coincidence statistics. Here, we study the influence of the autostructure on the probability distribution of coincident spiking events between tuples of mutually independent non-Poisson renewal processes. In particular, we consider two types of renewal processes that have been suggested as appropriate models of experimental spike trains: a gamma and a log-normal process. For a gamma process, we characterize the shape of the distribution analytically with the Fano factor (FFc). In addition, we perform Monte Carlo estimations to derive the full shape of the distribution and the probability of false positives if a different process type is assumed than was actually present. We also determine how manipulations of such spike trains, here dithering, used for the generation of surrogate data, change the distribution of coincident events and influence the significance estimation. We find, first, that the width of the coincidence count distribution and its FFc depend critically and in a nontrivial way on the detailed properties of the structure of the spike trains as characterized by the coefficient of variation CV. Second, the dependence of the FFc on the CV is complex and mostly nonmonotonic. Third, spike dithering, even if as small as a fraction of the interspike interval, can falsify the inference on coordinated firing.
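A minimal Monte Carlo version of this setup can be sketched as follows (a simplified assumption of the paper's construction, with illustrative rate, CV, and bin width): two independent gamma renewal spike trains are generated, coincidences are counted in small bins, and the Fano factor of the coincidence count across trials is estimated.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_spike_train(rate, cv, duration):
    """Renewal process with gamma ISIs; shape = 1/CV^2 gives the target CV."""
    shape = 1.0 / cv**2
    scale = 1.0 / (rate * shape)          # mean ISI = 1/rate
    isis = rng.gamma(shape, scale, size=int(3 * rate * duration))
    times = np.cumsum(isis)
    return times[times < duration]

def coincidences(t1, t2, binwidth, duration):
    """Number of bins in which both trains have at least one spike."""
    bins = np.arange(0.0, duration + binwidth, binwidth)
    c1 = np.histogram(t1, bins)[0] > 0
    c2 = np.histogram(t2, bins)[0] > 0
    return int(np.sum(c1 & c2))

counts = np.array([
    coincidences(gamma_spike_train(20.0, 0.5, 10.0),
                 gamma_spike_train(20.0, 0.5, 10.0),
                 binwidth=0.005, duration=10.0)
    for _ in range(500)
])
ff = counts.var() / counts.mean()   # Fano factor of the coincidence count
print(counts.mean(), ff)
```

Repeating this with different CV values (or with log-normal ISIs) shows how the coincidence-count distribution depends on the autostructure of the individual trains.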

  17. Coding for Two Dimensional Constrained Fields

    DEFF Research Database (Denmark)

    Laursen, Torben Vaarbye

    2006-01-01

    a first order model to model higher order constraints by the use of an alphabet extension. We present an iterative method that based on a set of conditional probabilities can help in choosing the large numbers of parameters of the model in order to obtain a stationary model. Explicit results are given...... for the No Isolated Bits constraint. Finally we present a variation of the encoding scheme of bit-stuffing that is applicable to the class of checkerboard constrained fields. It is possible to calculate the entropy of the coding scheme thus obtaining lower bounds on the entropy of the fields considered. These lower...... bounds are very tight for the Run-Length limited fields. Explicit bounds are given for the diamond constrained field as well....

  18. On the Meta Distribution of Coverage Probability in Uplink Cellular Networks

    KAUST Repository

    Elsawy, Hesham

    2017-04-07

    This letter studies the meta distribution of coverage probability (CP), within a stochastic geometry framework, for cellular uplink transmission with fractional path-loss inversion power control. Using the widely accepted Poisson point process (PPP) for modeling the spatial locations of base stations (BSs), we obtain the percentiles of users that achieve a target uplink CP over an arbitrary, but fixed, realization of the PPP. To this end, the effects of the users' activity factor (p) and the path-loss compensation factor on the uplink performance are analyzed. The results show that decreasing p and/or increasing the path-loss compensation factor reduces the CP variation around the spatially averaged value.

  19. Probability distribution of wave packet delay time for strong overlapping of resonance levels

    International Nuclear Information System (INIS)

    Lyuboshits, V.L.

    1983-01-01

    The time behaviour of nuclear reactions in the case of high level densities is investigated on the basis of the theory of overlapping resonances. In the framework of a model of n equivalent channels, an analytical expression is obtained for the probability distribution function of the wave packet delay time in compound-nucleus production. It is shown that for strong overlap of the resonance levels the relative fluctuation of the delay time is small at the stage of compound nucleus production. A possible increase in the duration of nuclear reactions with rising excitation energy is discussed

  20. Concise method for evaluating the probability distribution of the marginal cost of power generation

    International Nuclear Information System (INIS)

    Zhang, S.H.; Li, Y.Z.

    2000-01-01

    In the developing electricity market, many questions on electricity pricing and the risk modelling of forward contracts require the evaluation of the expected value and probability distribution of the short-run marginal cost of power generation at any given time. A concise forecasting method is provided, which is consistent with the definitions of marginal costs and the techniques of probabilistic production costing. The method embodies clear physical concepts, so that it can be easily understood theoretically and computationally realised. A numerical example has been used to test the proposed method. (author)

  1. The Bayesian count rate probability distribution in measurement of ionizing radiation by use of a ratemeter

    Energy Technology Data Exchange (ETDEWEB)

    Weise, K.

    2004-06-01

    Recent metrological developments concerning measurement uncertainty, founded on Bayesian statistics, give rise to a revision of several parts of the DIN 25482 and ISO 11929 standard series. These series stipulate detection limits and decision thresholds for ionizing-radiation measurements. Parts 3 and 4 of these series deal with measurements made using linear-scale analogue ratemeters. A normal frequency distribution of the momentary ratemeter indication for a fixed count rate value is assumed. The actual distribution, first calculated numerically by solving an integral equation, differs considerably from the normal distribution, although the latter approximates it for sufficiently large values of the count rate to be measured. As is shown, this similarly holds true for the Bayesian probability distribution of the count rate given sufficiently large measured values indicated by the ratemeter. This distribution follows from the first one mentioned by means of the Bayes theorem. Its expectation value and variance are needed for the standards to be revised on the basis of Bayesian statistics. Simple expressions are given by the present standards for estimating these parameters and for calculating the detection limit and the decision threshold. As is also shown, the same expressions can similarly be used as sufficient approximations by the revised standards if, roughly, the indicated value exceeds the reciprocal of the ratemeter relaxation time constant. (orig.)
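The ratemeter statistics referred to above can be illustrated with a toy simulation (not part of the standards; parameters are invented): a linear-scale analogue ratemeter is modeled as an exponentially weighted sum over Poisson event times, z(t) = sum_i exp(-(t - t_i)/tau)/tau. Campbell's theorem then gives mean rho and variance rho/(2 tau) for the indication, and for rho*tau >> 1 its distribution is close to normal.

```python
import numpy as np

rng = np.random.default_rng(7)

rho, tau, T = 100.0, 1.0, 500.0     # count rate (1/s), time constant, run time
isis = rng.exponential(1.0 / rho, size=int(1.5 * rho * T))
events = np.cumsum(isis)
events = events[events < T]          # Poisson event times on [0, T)

sample_t = np.arange(50.0, T, 2.0)   # skip the start-up transient
indication = np.array([
    np.exp(-(t - events[events <= t]) / tau).sum() / tau for t in sample_t
])
print(indication.mean(), indication.var())   # approx. rho and rho/(2*tau)
```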

  2. Probability distribution and statistical properties of spherically compensated cosmic regions in ΛCDM cosmology

    Science.gov (United States)

    Alimi, Jean-Michel; de Fromont, Paul

    2018-04-01

    The statistical properties of cosmic structures are well known to be strong probes for cosmology. In particular, several studies have tried to use the cosmic void counting number to obtain tight constraints on dark energy. In this paper, we model the statistical properties of these regions using the CoSphere formalism (de Fromont & Alimi) in both the primordial and the non-linearly evolved Universe in the standard Λ cold dark matter model. This formalism applies similarly to minima (voids) and maxima (such as DM haloes), which are here considered symmetrically. We first derive the full joint Gaussian distribution of CoSphere's parameters in the Gaussian random field. We recover the results of Bardeen et al. only in the limit where the compensation radius becomes very large, i.e. when the central extremum decouples from its cosmic environment. We compute the probability distribution of the compensation size in this primordial field. We show that this distribution is redshift independent and can be used to model the cosmic void size distribution. We also derive the statistical distribution of the peak parameters introduced by Bardeen et al. and discuss their correlation with the cosmic environment. We show that small central extrema with low density are associated with narrow compensation regions with deep compensation density, while higher central extrema are preferentially located in larger but smoother over/under massive regions.

  3. Study of the SEMG probability distribution of the paretic tibialis anterior muscle

    International Nuclear Information System (INIS)

    Cherniz, Analía S; Bonell, Claudia E; Tabernig, Carolina B

    2007-01-01

    The surface electromyographic (SEMG) signal is a stochastic signal that has been modeled as a zero-mean Gaussian process. It has been experimentally shown, however, that a Laplacian-type distribution fits the signal amplitudes with less error. The selection of estimators for detecting changes in the amplitude of the muscular signal depends, among other things, on the type of distribution. In subjects with lesions of the upper motor neuron, the lack of central control affects the muscular tone, the force and the patterns of muscular movement involved in activities such as the gait cycle. In this work, the distribution types of the SEMG signal amplitudes of the tibialis anterior muscle are evaluated during gait, both in two healthy subjects and in two hemiparetic ones, in order to select the estimators that best characterize them. It was observed that the Laplacian distribution function is the one that best fits the experimental data in the studied subjects, although this depends largely on the subject and on the data segment analyzed.
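The model comparison described above can be sketched on synthetic data standing in for SEMG amplitudes (not the study's recordings): fit both a Gaussian and a Laplacian model by maximum likelihood and compare log-likelihoods.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.laplace(loc=0.0, scale=1.0, size=20_000)   # stand-in signal samples

# Maximum-likelihood scale parameters under each zero-mean model:
sigma = x.std()               # Gaussian ML scale
b = np.mean(np.abs(x))        # Laplacian ML scale

loglik_gauss = np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                      - x**2 / (2 * sigma**2))
loglik_laplace = np.sum(-np.log(2 * b) - np.abs(x) / b)
print(loglik_laplace > loglik_gauss)   # Laplacian fits this signal better
```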

  4. Study of the SEMG probability distribution of the paretic tibialis anterior muscle

    Energy Technology Data Exchange (ETDEWEB)

    Cherniz, Analía S; Bonell, Claudia E; Tabernig, Carolina B [Laboratorio de Ingeniería de Rehabilitación e Investigaciones Neuromusculares y Sensoriales, Facultad de Ingeniería, UNER, Oro Verde (Argentina)

    2007-11-15

    The surface electromyographic (SEMG) signal is a stochastic signal that has been modeled as a zero-mean Gaussian process. It has been experimentally shown, however, that a Laplacian-type distribution fits the signal amplitudes with less error. The selection of estimators for detecting changes in the amplitude of the muscular signal depends, among other things, on the type of distribution. In subjects with lesions of the upper motor neuron, the lack of central control affects the muscular tone, the force and the patterns of muscular movement involved in activities such as the gait cycle. In this work, the distribution types of the SEMG signal amplitudes of the tibialis anterior muscle are evaluated during gait, both in two healthy subjects and in two hemiparetic ones, in order to select the estimators that best characterize them. It was observed that the Laplacian distribution function is the one that best fits the experimental data in the studied subjects, although this depends largely on the subject and on the data segment analyzed.

  5. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    Science.gov (United States)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed to be exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
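The baseline model described above can be generated as a standard filtered Poisson process (a simplified sketch, not the authors' code): one-sided exponential pulses with exponentially distributed amplitudes arriving by a Poisson process. For this choice the stationary PDF is a gamma distribution with shape tau_d/tau_w, so mean, variance and skewness are known in closed form and can be checked against the simulation.

```python
import numpy as np

rng = np.random.default_rng(5)

tau_d = 1.0          # pulse duration time
tau_w = 0.2          # average pulse waiting time
T, dt = 2000.0, 0.01
t = np.arange(0.0, T, dt)

n_pulses = rng.poisson(T / tau_w)
arrivals = np.sort(rng.uniform(0.0, T, size=n_pulses))
amps = rng.exponential(1.0, size=n_pulses)      # exponential amplitudes

signal = np.zeros_like(t)
cut = int(10 * tau_d / dt)                      # truncate fully decayed pulses
for tk, ak in zip(arrivals, amps):
    i0 = int(tk / dt)
    i1 = min(i0 + cut, t.size)
    signal[i0:i1] += ak * np.exp(-(t[i0:i1] - tk) / tau_d)

x = signal[t > 20.0]                            # drop the start-up transient
mean, var = x.mean(), x.var()
skew = np.mean((x - mean) ** 3) / var ** 1.5
print(mean, var, skew)   # gamma prediction: mean = var = 5, skew ~ 0.89
```

Replacing the exponential amplitudes with a distribution that is not positive definite is exactly the situation where the closed-form PDF is lost and the empirical characteristic function becomes the practical tool.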

  6. Two-dimensional fluorescence lifetime correlation spectroscopy. 2. Application.

    Science.gov (United States)

    Ishii, Kunihiko; Tahara, Tahei

    2013-10-03

    In the preceding article, we introduced the theoretical framework of two-dimensional fluorescence lifetime correlation spectroscopy (2D FLCS). In this article, we report its experimental implementation. In this method, two-dimensional emission-delay correlation maps are constructed from the photon data obtained with time-correlated single photon counting (TCSPC), and they are then converted to 2D lifetime correlation maps by the inverse Laplace transform. We develop a numerical method to realize reliable transformation, employing the maximum entropy method (MEM). We apply the developed 2D FLCS to two real systems, a dye mixture and a DNA hairpin. For the dye mixture, we show that 2D FLCS is experimentally feasible and that it can identify different species in an inhomogeneous sample without any prior knowledge. The application to the DNA hairpin demonstrates that 2D FLCS can disclose microsecond spontaneous dynamics of biological molecules in a visually comprehensible manner, by identifying species as unique lifetime distributions. A FRET pair is attached to both ends of the DNA hairpin, and the different structures of the hairpin are distinguished as different fluorescence lifetimes in 2D FLCS. By constructing the 2D correlation maps of the fluorescence lifetime of the FRET donor, the equilibrium dynamics between the open and the closed forms of the DNA hairpin is clearly observed as the appearance of cross peaks between the corresponding fluorescence lifetimes. This equilibrium dynamics is clearly separated from the acceptor-missing DNA, which appears as an isolated diagonal peak in the 2D maps. The present study clearly shows that the newly developed 2D FLCS can disclose spontaneous structural dynamics of biological molecules with microsecond time resolution.

  7. Probability distribution of magnetization in the one-dimensional Ising model: effects of boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Antal, T [Physics Department, Simon Fraser University, Burnaby, BC V5A 1S6 (Canada); Droz, M [Département de Physique Théorique, Université de Genève, CH 1211 Geneva 4 (Switzerland); Racz, Z [Institute for Theoretical Physics, Eötvös University, 1117 Budapest, Pázmány sétány 1/a (Hungary)

    2004-02-06

    Finite-size scaling functions are investigated both for the mean-square magnetization fluctuations and for the probability distribution of the magnetization in the one-dimensional Ising model. The scaling functions are evaluated in the limit of the temperature going to zero (T → 0) and the size of the system going to infinity (N → ∞) while N[1 - tanh(J/k_B T)] is kept finite (J being the nearest neighbour coupling). Exact calculations using various boundary conditions (periodic, antiperiodic, free, block) demonstrate explicitly how the scaling functions depend on the boundary conditions. We also show that the block (small part of a large system) magnetization distribution results are identical to those obtained for free boundary conditions.
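For small systems, the boundary-condition dependence of the magnetization distribution can be checked by exact enumeration (a pedagogical sketch of the setup above, not the paper's scaling-limit calculation):

```python
import numpy as np
from itertools import product

def magnetization_pdf(N, K, periodic=True):
    """Exact P(M) for the 1D Ising chain; K = J/(k_B T)."""
    weights = {}
    for spins in product([-1, 1], repeat=N):
        E = sum(spins[i] * spins[i + 1] for i in range(N - 1))
        if periodic:
            E += spins[-1] * spins[0]       # extra bond closes the ring
        w = np.exp(K * E)                   # Boltzmann weight
        M = sum(spins)
        weights[M] = weights.get(M, 0.0) + w
    Z = sum(weights.values())
    return {M: w / Z for M, w in weights.items()}

pdf_per = magnetization_pdf(10, K=1.0, periodic=True)
pdf_free = magnetization_pdf(10, K=1.0, periodic=False)
# Both distributions are symmetric in M, but their shapes differ with the
# boundary conditions, in line with the exact finite-size results above.
print(pdf_per[10], pdf_free[10])
```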

  8. Ruin Probabilities and Aggregate Claims Distributions for Shot Noise Cox Processes

    DEFF Research Database (Denmark)

    Albrecher, H.; Asmussen, Søren

    We consider a risk process Rt where the claim arrival process is a superposition of a homogeneous Poisson process and a Cox process with a Poisson shot noise intensity process, capturing the effect of sudden increases of the claim intensity due to external events. The distribution of the aggregate claim size is investigated under these assumptions. For both light-tailed and heavy-tailed claim size distributions, asymptotic estimates for infinite-time and finite-time ruin probabilities are derived. Moreover, we discuss an extension of the model to an adaptive premium rule that is dynamically adjusted according to past claims experience.

  9. PHOTOMETRIC REDSHIFT PROBABILITY DISTRIBUTIONS FOR GALAXIES IN THE SDSS DR8

    International Nuclear Information System (INIS)

    Sheldon, Erin S.; Cunha, Carlos E.; Mandelbaum, Rachel; Brinkmann, J.; Weaver, Benjamin A.

    2012-01-01

    We present redshift probability distributions for galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 8 imaging data. We used the nearest-neighbor weighting algorithm to derive the ensemble redshift distribution N(z), and individual redshift probability distributions P(z) for galaxies with r < 21.8 and u < 29.0. As part of this technique, we calculated weights for a set of training galaxies with known redshifts such that their density distribution in five-dimensional color-magnitude space was proportional to that of the photometry-only sample, producing a nearly fair sample in that space. We estimated the ensemble N(z) of the photometric sample by constructing a weighted histogram of the training-set redshifts. We derived P(z)'s for individual objects by using training-set objects from the local color-magnitude space around each photometric object. Using the P(z) for each galaxy can reduce the statistical error in measurements that depend on the redshifts of individual galaxies. The spectroscopic training sample is substantially larger than that used for the DR7 release. The newly added PRIMUS catalog is now the most important training set used in this analysis by a wide margin. We expect the primary sources of error in the N(z) reconstruction to be sample variance and spectroscopic failures: The training sets are drawn from relatively small volumes of space, and some samples have large incompleteness. Using simulations we estimated the uncertainty in N(z) due to sample variance at a given redshift to be ∼10%-15%. The uncertainty on calculations incorporating N(z) or P(z) depends on how they are used; we discuss the case of weak lensing measurements. The P(z) catalog is publicly available from the SDSS Web site.
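The nearest-neighbor idea behind the P(z) estimates can be sketched on synthetic data (the two-color model below is hypothetical, not SDSS photometry): the P(z) of a photometric object is built from the redshifts of its nearest training-set neighbors in color space.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical training set: two colors that track redshift with scatter.
z_train = rng.uniform(0.0, 1.0, size=5000)
colors_train = np.column_stack([
    z_train + rng.normal(0.0, 0.05, z_train.size),
    2.0 * z_train + rng.normal(0.0, 0.05, z_train.size),
])

bins = np.linspace(0.0, 1.0, 21)
zmid = 0.5 * (bins[:-1] + bins[1:])

def p_of_z(colors_obj, k=100):
    """P(z) histogram from the k nearest training objects in color space."""
    d2 = np.sum((colors_train - colors_obj) ** 2, axis=1)
    neighbors = np.argsort(d2)[:k]
    hist, _ = np.histogram(z_train[neighbors], bins=bins, density=True)
    return hist

pz = p_of_z(np.array([0.5, 1.0]))   # colors consistent with z ~ 0.5
print(zmid[np.argmax(pz)])
```

The actual catalog additionally weights the training set so that its color-magnitude density matches the photometric sample; this sketch omits that step.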

  10. Two dimensional kicked quantum Ising model: dynamical phase transitions

    International Nuclear Information System (INIS)

    Pineda, C; Prosen, T; Villaseñor, E

    2014-01-01

    Using an efficient one and two qubit gate simulator operating on graphical processing units, we investigate ergodic properties of a quantum Ising spin 1/2 model on a two-dimensional lattice, which is periodically driven by a δ-pulsed transverse magnetic field. We consider three different dynamical properties: (i) level density, (ii) level spacing distribution of the Floquet quasienergy spectrum, and (iii) time-averaged autocorrelation function of magnetization components. Varying the parameters of the model, we found transitions between ordered (non-ergodic) and quantum chaotic (ergodic) phases, but the transitions between flat and non-flat spectral density do not correspond to transitions between ergodic and non-ergodic local observables. Even more surprisingly, we found good agreement of level spacing distribution with the Wigner surmise of random matrix theory for almost all values of parameters except where the model is essentially non-interacting, even in regions where local observables are not ergodic or where spectral density is non-flat. These findings question the versatility of the interpretation of level spacing distribution in many-body systems and stress the importance of the concept of locality. (paper)
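The Wigner surmise used as the random-matrix reference above can be illustrated with the textbook 2x2 GOE calculation (an illustration only; the paper works with the Floquet quasienergy spectrum of the kicked Ising model, not with 2x2 matrices): after normalizing to unit mean spacing, the spacing of a 2x2 GOE matrix follows P(s) = (pi s/2) exp(-pi s^2/4), for which the second moment is 4/pi.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 200_000
a = rng.normal(size=n)                # diagonal entries, variance 1
b = rng.normal(size=n)
c = rng.normal(size=n) / np.sqrt(2)   # off-diagonal entry, variance 1/2

# Eigenvalue spacing of [[a, c], [c, b]]:
s = np.sqrt((a - b) ** 2 + 4 * c ** 2)
s /= s.mean()                          # "unfold" to unit mean spacing

print(np.mean(s ** 2), 4 / np.pi)      # empirical vs Wigner-surmise value
```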

  11. Understanding the distinctively skewed and heavy tailed character of atmospheric and oceanic probability distributions

    Energy Technology Data Exchange (ETDEWEB)

    Sardeshmukh, Prashant D., E-mail: Prashant.D.Sardeshmukh@noaa.gov [CIRES, University of Colorado, Boulder, Colorado 80309 (United States); NOAA/Earth System Research Laboratory, Boulder, Colorado 80305 (United States); Penland, Cécile [NOAA/Earth System Research Laboratory, Boulder, Colorado 80305 (United States)

    2015-03-15

    The probability distributions of large-scale atmospheric and oceanic variables are generally skewed and heavy-tailed. We argue that their distinctive departures from Gaussianity arise fundamentally from the fact that in a quadratically nonlinear system with a quadratic invariant, the coupling coefficients between system components are not constant but depend linearly on the system state in a distinctive way. In particular, the skewness arises from a tendency of the system trajectory to linger near states of weak coupling. We show that the salient features of the observed non-Gaussianity can be captured in the simplest such nonlinear 2-component system. If the system is stochastically forced and linearly damped, with one component damped much more strongly than the other, then the strongly damped fast component becomes effectively decoupled from the weakly damped slow component, and its impact on the slow component can be approximated as a stochastic noise forcing plus an augmented nonlinear damping. In the limit of large time-scale separation, the nonlinear augmentation of the damping becomes small, and the noise forcing can be approximated as an additive noise plus a correlated additive and multiplicative noise (CAM noise) forcing. Much of the diversity of observed large-scale atmospheric and oceanic probability distributions can be interpreted in this minimal framework.
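The minimal CAM-noise mechanism described above can be sketched with an Euler-Maruyama simulation of a one-component reduction (parameter values are illustrative): dx = -lambda x dt + (E x + g) dW1 + b dW2. The correlated additive and multiplicative noise produces a skewed, heavy-tailed stationary distribution even though the deterministic part is linear.

```python
import numpy as np

rng = np.random.default_rng(9)

lam, E, g, b = 1.0, 0.5, 1.0, 0.5     # damping and noise parameters
dt, n_steps = 0.01, 1_000_000
sq = np.sqrt(dt)
w1 = rng.normal(size=n_steps) * sq    # multiplicative-noise increments
w2 = rng.normal(size=n_steps) * sq    # independent additive increments

x = np.empty(n_steps)
x[0] = 0.0
for i in range(1, n_steps):
    xp = x[i - 1]
    x[i] = xp - lam * xp * dt + (E * xp + g) * w1[i] + b * w2[i]

xs = x[10_000:]                       # discard the transient
xc = xs - xs.mean()
m2 = np.mean(xc ** 2)
skew = np.mean(xc ** 3) / m2 ** 1.5
print(skew)                           # positive: heavier tail on the + side
```

With these parameters the stationary moment equations give variance (g^2 + b^2)/(2 lam - E^2) and a skewness well above zero; flipping the sign of E or g flips the skew.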

  12. Beginning Introductory Physics with Two-Dimensional Motion

    Science.gov (United States)

    Huggins, Elisha

    2009-01-01

    During the session on "Introductory College Physics Textbooks" at the 2007 Summer Meeting of the AAPT, there was a brief discussion about whether introductory physics should begin with one-dimensional motion or two-dimensional motion. Here we present the case that by starting with two-dimensional motion, we are able to introduce a considerable…

  13. Two-dimensional black holes and non-commutative spaces

    International Nuclear Information System (INIS)

    Sadeghi, J.

    2008-01-01

    We study the effects of non-commutative spaces on the two-dimensional black hole. The event horizon of the two-dimensional black hole is obtained in non-commutative space up to second order in perturbative calculations. A lower limit for the non-commutativity parameter is also obtained. In that limit, in contrast to the commutative case, the observer sees two horizons

  14. Solution of the two-dimensional spectral factorization problem

    Science.gov (United States)

    Lawton, W. M.

    1985-01-01

    An approximation theorem is proven which solves a classic problem in two-dimensional (2-D) filter theory. The theorem shows that any continuous two-dimensional spectrum can be uniformly approximated by the squared modulus of a recursively stable finite trigonometric polynomial supported on a nonsymmetric half-plane.

  15. Two-dimensional Navier-Stokes turbulence in bounded domains

    NARCIS (Netherlands)

    Clercx, H.J.H.; van Heijst, G.J.F.

    In this review we will discuss recent experimental and numerical results of quasi-two-dimensional decaying and forced Navier–Stokes turbulence in bounded domains. We will give a concise overview of developments in two-dimensional turbulence research, with emphasis on the progress made during the

  16. Two-dimensional Navier-Stokes turbulence in bounded domains

    NARCIS (Netherlands)

    Clercx, H.J.H.; Heijst, van G.J.F.

    2009-01-01

    In this review we will discuss recent experimental and numerical results of quasi-two-dimensional decaying and forced Navier–Stokes turbulence in bounded domains. We will give a concise overview of developments in two-dimensional turbulence research, with emphasis on the progress made during the

  17. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    Science.gov (United States)

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
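The classical statistics discussed above can be written in a few lines of numpy (a minimal sketch; any continuous null can be reduced to the uniform case by applying its CDF to the draws). A Beta(2, 2) alternative, which has the same mean as the uniform null but a different shape, illustrates how the statistic grows under a discrepancy.

```python
import numpy as np

rng = np.random.default_rng(4)

def ks_and_kuiper(u):
    """KS and Kuiper statistics for draws u against Uniform(0, 1)."""
    u = np.sort(u)
    n = u.size
    grid = np.arange(1, n + 1) / n
    d_plus = np.max(grid - u)                 # ECDF above the null CDF
    d_minus = np.max(u - (grid - 1.0 / n))    # ECDF below the null CDF
    return max(d_plus, d_minus), d_plus + d_minus   # KS, Kuiper

u_null = rng.uniform(size=2000)               # consistent with the null
ks0, kp0 = ks_and_kuiper(u_null)
u_alt = rng.beta(2.0, 2.0, size=2000)         # same mean, different shape
ks1, kp1 = ks_and_kuiper(u_alt)
print(ks0, ks1)   # the alternative yields a much larger statistic
```

Discrepancies confined to a region of low probability density would barely move either statistic, which is precisely the deficiency the density-based tests of the paper address.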

  18. A methodology for more efficient tail area sampling with discrete probability distribution

    International Nuclear Information System (INIS)

    Park, Sang Ryeol; Lee, Byung Ho; Kim, Tae Woon

    1988-01-01

    The Monte Carlo method is commonly used to observe the overall shape of a distribution and to determine lower or upper bound values in statistical approaches when direct analytical calculation is unavailable. However, the method is inefficient when the tail area of a distribution is of interest. A new method, 'Two Step Tail Area Sampling', is developed; it assumes a discrete probability distribution and samples only the tail area without distorting the overall distribution. The method uses a two-step sampling procedure: first, sampling at points separated by large intervals, and second, sampling at points separated by small intervals around check points determined in the first step. Comparison with the Monte Carlo method shows that, for the same number of calculations, the results of the new method converge to the analytic value faster. The new method is applied to the DNBR (Departure from Nucleate Boiling Ratio) prediction problem in the design of pressurized light water nuclear reactors.
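The paper's exact algorithm is not reproduced in the abstract, but the two-step idea, a coarse scan to bracket the tail region followed by fine sampling only inside the bracket, can be sketched roughly as follows. The distribution (a standard normal standing in for the DNBR response), the grid sizes, and the target tail probability are all illustrative assumptions.

```python
# Hedged sketch of a two-step tail search: locate the tail on a coarse grid,
# then resample only the bracketing interval on a fine grid.
import numpy as np
from scipy import stats

dist = stats.norm()          # stand-in for the quantity of interest
threshold_prob = 0.999       # we care about the upper 0.1% tail

# Step 1: coarse grid over a wide range to bracket the tail.
coarse = np.linspace(-6, 6, 25)
cdf_coarse = dist.cdf(coarse)
i = np.searchsorted(cdf_coarse, threshold_prob)   # first coarse point past the tail
lo, hi = coarse[i - 1], coarse[i]

# Step 2: fine grid only inside the bracketing interval.
fine = np.linspace(lo, hi, 200)
quantile = fine[np.searchsorted(dist.cdf(fine), threshold_prob)]

print(round(quantile, 3))   # close to the exact 0.999 quantile
```

Only the second, fine pass spends effort in the tail, which is the efficiency gain the abstract describes relative to naive Monte Carlo sampling of the whole distribution.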

  19. Covariance problem in two-dimensional quantum chromodynamics

    International Nuclear Information System (INIS)

    Hagen, C.R.

    1979-01-01

    The problem of covariance in the field theory of a two-dimensional non-Abelian gauge field is considered. Since earlier work has shown that covariance fails (in charged sectors) for the Schwinger model, particular attention is given to an evaluation of the role played by the non-Abelian nature of the fields. In contrast to all earlier attempts at this problem, it is found that the potential covariance-breaking terms are identical to those found in the Abelian theory provided that one expresses them in terms of the total (i.e., conserved) current operator. The question of covariance is thus seen to reduce in all cases to a determination as to whether there exists a conserved global charge in the theory. Since the charge operator in the Schwinger model is conserved only in neutral sectors, one is thereby led to infer a probable failure of covariance in the non-Abelian theory, but one which is identical to that found for the U(1) case

  20. Critical phenomena in quasi-two-dimensional vibrated granular systems.

    Science.gov (United States)

    Guzmán, Marcelo; Soto, Rodrigo

    2018-01-01

    The critical phenomena associated with the liquid-to-solid transition of quasi-two-dimensional vibrated granular systems are studied using molecular dynamics simulations of the inelastic hard sphere model. The critical properties are associated with the fourfold bond-orientational order parameter χ_{4}, which measures the degree of square crystallization of the system. Previous experimental results have shown that the transition of χ_{4}, when varying the vibration amplitude, can be either discontinuous or continuous, for two different values of the height of the box. Exploring the amplitude-height phase space, a transition line is found that can be either discontinuous or continuous; the two branches merge at a tricritical point, and the continuous branch ends in an upper critical point. In the continuous branch, the critical properties are studied. The exponent associated with the amplitude of the order parameter is β=1/2, for various system sizes, in complete agreement with the experimental results. However, the fluctuations of χ_{4} do not show any critical behavior, probably due to crossover effects from the close presence of the tricritical point. Finally, in quasi-one-dimensional systems, the transition is only discontinuous, limited by one critical point, indicating that two is the lower dimension for having a tricritical point.

  1. Edge orientations of mechanically exfoliated anisotropic two-dimensional materials

    Science.gov (United States)

    Yang, Juntan; Wang, Yi; Li, Yinfeng; Gao, Huajian; Chai, Yang; Yao, Haimin

    2018-03-01

    Mechanical exfoliation is an approach widely applied to prepare high-quality two-dimensional (2D) materials for investigating their intrinsic physical properties. During mechanical exfoliation, in-plane cleavage results in new edges whose orientations play an important role in determining the properties of the as-exfoliated 2D materials, especially those with high anisotropy. Here, we systematically investigate the factors affecting the edge orientation of 2D materials obtained by mechanical exfoliation. Our theoretical study shows that the fracture direction during mechanical exfoliation is determined synergistically by the tearing direction and the material's anisotropy of fracture energy. For a specific 2D material, our theory enables us to predict the possible edge orientations of the exfoliated flakes as well as their probabilities of occurrence. The theoretical prediction is experimentally verified by examining the inter-edge angles of exfoliated flakes of four typical 2D materials: graphene, MoS2, PtS2, and black phosphorus. This work not only sheds light on the mechanics of exfoliation of 2D materials but also provides a new approach to deriving information about the edge orientations of mechanically exfoliated 2D materials by data mining of their macroscopic geometric features.

  2. Broken ergodicity in two-dimensional homogeneous magnetohydrodynamic turbulence

    International Nuclear Information System (INIS)

    Shebalin, John V.

    2010-01-01

    Two-dimensional (2D) homogeneous magnetohydrodynamic (MHD) turbulence has many of the same qualitative features as three-dimensional (3D) homogeneous MHD turbulence. These features include several ideal (i.e., nondissipative) invariants along with the phenomenon of broken ergodicity (defined as nonergodic behavior over a very long time). Broken ergodicity appears when certain modes act like random variables with mean values that are large compared to their standard deviations, indicating a coherent structure or dynamo. Recently, the origin of broken ergodicity in 3D MHD turbulence that is manifest in the lowest wavenumbers was found. Here, we study the origin of broken ergodicity in 2D MHD turbulence. It will be seen that broken ergodicity in ideal 2D MHD turbulence can be manifest in the lowest wavenumbers of a finite numerical model for certain initial conditions, or in the highest wavenumbers for another set of initial conditions. The origins of broken ergodicity in ideal 2D homogeneous MHD turbulence are found through an eigenanalysis of the covariance matrices of the probability density function and by an examination of the associated entropy functional. When the values of the ideal invariants are kept fixed and the grid size increases, it will be shown that the energy in a few large modes remains constant, while the energy in any other mode is inversely proportional to grid size. Also, as grid size increases, we find that broken ergodicity becomes manifest at more and more wavenumbers.

  3. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.

    Science.gov (United States)

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether observed synchronized events are significantly different from those expected by chance. To analyze simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as Poisson processes, implying that the generation of each spike is independent of all other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods, which govern the spike probability of the oncoming action potential based on the time of the last spike, or bursting behavior, which is characterized by short epochs of rapid action potentials followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates the spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that, compared to distributions based on homogeneous Poisson processes, and also on non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of the significance of joint spike events seem to be inadequate.
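A minimal Monte Carlo sketch of the coincidence-count estimation described above: it compares the joint-spike-count distribution of Poisson spike trains with that of a more regular renewal process (gamma-distributed inter-spike intervals as a crude stand-in for refractoriness). Rates, bin width, and trial counts are illustrative assumptions, not the paper's parameters.

```python
# Estimate the coincidence-count distribution for pairs of spike trains,
# comparing Poisson trains with gamma-ISI renewal trains of the same rate.
import numpy as np

rng = np.random.default_rng(1)
T, rate, bin_w, trials = 10.0, 20.0, 0.005, 500

def spike_train(isi_sampler):
    t = np.cumsum(isi_sampler())
    return t[t < T]

def coincidences(a, b):
    # a spike pair counts as a coincidence if it falls within one bin width
    return int(np.sum(np.abs(a[:, None] - b[None, :]) < bin_w))

def poisson_isi():
    return rng.exponential(1 / rate, size=int(3 * rate * T))

def gamma_isi(shape=4.0):
    # same mean ISI, but more regular firing than Poisson
    return rng.gamma(shape, 1 / (rate * shape), size=int(3 * rate * T))

counts_poisson = [coincidences(spike_train(poisson_isi), spike_train(poisson_isi))
                  for _ in range(trials)]
counts_gamma = [coincidences(spike_train(gamma_isi), spike_train(gamma_isi))
                for _ in range(trials)]

mean_p, std_p = float(np.mean(counts_poisson)), float(np.std(counts_poisson))
mean_g, std_g = float(np.mean(counts_gamma)), float(np.std(counts_gamma))
print(round(mean_p, 1), round(std_p, 1))
print(round(mean_g, 1), round(std_g, 1))
```

The two processes have essentially the same mean coincidence count, but the widths of the two distributions differ, which is the abstract's point: the autostructure of the point process, not just its rate, shapes the significance of joint spike events.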

  4. The effects of radiotherapy treatment uncertainties on the delivered dose distribution and tumour control probability

    International Nuclear Information System (INIS)

    Booth, J.T.; Zavgorodni, S.F.; Royal Adelaide Hospital, SA

    2001-01-01

    Uncertainty in the precise quantity of radiation dose delivered to tumours in external beam radiotherapy is present due to many factors, and can result in either spatially uniform (Gaussian) or spatially non-uniform dose errors. These dose errors are incorporated into the calculation of tumour control probability (TCP) and produce a distribution of possible TCP values over a population. We also study the effect of inter-patient cell sensitivity heterogeneity on the population distribution of patient TCPs. This study aims to investigate the relative importance of these three uncertainties (spatially uniform dose uncertainty, spatially non-uniform dose uncertainty, and inter-patient cell sensitivity heterogeneity) on the delivered dose and TCP distribution following a typical course of fractionated external beam radiotherapy. The dose distributions used for patient treatments are modelled in one dimension. Geometric positioning uncertainties during and before treatment are considered as shifts of a pre-calculated dose distribution. Following the simulation of a population of patients, distributions of dose across the patient population are used to calculate mean treatment dose, standard deviation in mean treatment dose, mean TCP, standard deviation in TCP, and TCP mode. These parameters are calculated with each of the three uncertainties included separately. The calculations show that the dose errors in the tumour volume are dominated by the spatially uniform component of dose uncertainty. This could be related to machine specific parameters, such as linear accelerator calibration. TCP calculation is affected dramatically by inter-patient variation in the cell sensitivity and to a lesser extent by the spatially uniform dose errors. The positioning errors with the 1.5 cm margins used cause dose uncertainty outside the tumour volume and have a small effect on mean treatment dose (in the tumour volume) and tumour control. Copyright (2001) Australasian College of

  5. Probability distributions of placental morphological measurements and origins of variability of placental shapes.

    Science.gov (United States)

    Yampolsky, M; Salafia, C M; Shlakhter, O

    2013-06-01

    While the mean shape of the human placenta is round with a centrally inserted umbilical cord, significant deviations from this ideal are fairly common and may be clinically meaningful. Traditionally, they are explained by trophotropism. We have proposed a hypothesis explaining typical variations in placental shape by randomly determined fluctuations in the growth process of the vascular tree. It has recently been reported that umbilical cord displacement in a birth cohort has a log-normal probability distribution, which indicates that the displacement between an initial point of origin and the centroid of the mature shape is a result of the accumulation of random fluctuations of the dynamic growth of the placenta. To confirm this, we investigate statistical distributions of other features of placental morphology. In a cohort of 1023 births at term, digital photographs of placentas were recorded at delivery. Excluding cases with velamentous cord insertion or missing clinical data left 1001 (97.8%) placentas for which surface morphology features were measured. Best-fit statistical distributions for them were obtained using EasyFit. The best-fit distributions of umbilical cord displacement, placental disk diameter, area, perimeter, and maximal radius calculated from the cord insertion point are of heavy-tailed type, similar in shape to log-normal distributions. This is consistent with a stochastic origin of deviations of placental shape from normal. Since deviations of placental shape descriptors from average have heavy-tailed distributions similar in shape to log-normal, this evidence points away from trophotropism and towards a spontaneous stochastic evolution of the variants of placental surface shape features. Copyright © 2013 Elsevier Ltd. All rights reserved.
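The distribution-fitting step described above (done with EasyFit in the study) can be sketched with scipy.stats standing in, using synthetic data in place of the cohort measurements:

```python
# Fit candidate distributions to synthetic "cord displacement" values and
# rank them by the Kolmogorov-Smirnov statistic (smaller = better fit).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
displacement = rng.lognormal(mean=1.0, sigma=0.5, size=1000)   # synthetic data

fits = {}
for name, dist in [("lognorm", stats.lognorm), ("norm", stats.norm)]:
    params = dist.fit(displacement)
    fits[name] = stats.kstest(displacement, name, args=params).statistic

print("better fit:", min(fits, key=fits.get))
```

With heavy-tailed data, the log-normal fit dominates the normal one by this criterion, mirroring the paper's finding for the placental shape descriptors.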

  6. On Selection of the Probability Distribution for Representing the Maximum Annual Wind Speed in East Cairo, Egypt

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh. I.; El-Hemamy, S.T.

    2013-01-01

    The main objective of this paper is to identify an appropriate probability model and the best plotting position formula to represent the maximum annual wind speed in east Cairo. Such a model can be used to estimate the extreme wind speed and return period at a particular site, as well as to determine the radioactive release distribution in case of an accident at a nuclear power plant. Wind speed probabilities can be estimated using probability distributions, and an accurate determination of the probability distribution for maximum wind speed data is very important when estimating the extreme value. The probability plots of the maximum annual wind speed (MAWS) in east Cairo are fitted to six major statistical distributions, namely Gumbel, Weibull, Normal, Log-Normal, Logistic and Log-Logistic, while eight plotting positions (Hosking and Wallis, Hazen, Gringorten, Cunnane, Blom, Filliben, Benard and Weibull) are used for determining their exceedance probabilities. A proper probability distribution for representing the MAWS is selected by statistical test criteria in frequency analysis; therefore, the best plotting position formula for selecting the appropriate probability model must also be determined. The statistical test criteria, namely the probability plot correlation coefficient (PPCC), the root mean square error (RMSE), the relative root mean square error (RRMSE) and the maximum absolute error (MAE), are used to select the appropriate plotting position and distribution. The data show that the maximum annual wind speed in east Cairo varies from 44.3 km/h to 96.1 km/h over a period of 39 years. The Weibull plotting position combined with the Normal distribution gave the best fit and the most reliable and accurate predictions of wind speed in the study area, having the highest value of PPCC and the lowest values of RMSE, RRMSE and MAE.
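A rough sketch of the selection criteria described above: Weibull plotting positions combined with the PPCC and RMSE for two candidate distributions. The wind speeds are synthetic stand-ins, not the East Cairo record, and only two of the six candidate distributions are shown.

```python
# Rank candidate distributions for annual-maximum wind speeds by PPCC/RMSE,
# using the Weibull plotting position i/(n+1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
maws = np.sort(rng.gumbel(loc=55, scale=8, size=39))   # synthetic "annual maxima", km/h

n = len(maws)
p = np.arange(1, n + 1) / (n + 1.0)    # Weibull plotting position

def ppcc_and_rmse(dist):
    params = dist.fit(maws)
    quantiles = dist.ppf(p, *params)            # fitted quantiles at the plotting positions
    ppcc = float(np.corrcoef(maws, quantiles)[0, 1])   # probability plot correlation
    rmse = float(np.sqrt(np.mean((maws - quantiles) ** 2)))
    return ppcc, rmse

results = {name: ppcc_and_rmse(dist)
           for name, dist in [("Gumbel", stats.gumbel_r), ("Normal", stats.norm)]}
best = max(results, key=lambda k: results[k][0])   # highest PPCC wins
print(best, results[best])
```

The same loop extends directly to the other plotting-position formulas (Hazen, Gringorten, Cunnane, etc.) by changing how `p` is computed.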

  7. Probability distribution functions of turbulence in seepage-affected alluvial channel

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Anurag; Kumar, Bimlesh, E-mail: anurag.sharma@iitg.ac.in, E-mail: bimk@iitg.ac.in [Department of Civil Engineering, Indian Institute of Technology Guwahati, 781039 (India)

    2017-02-15

    The present experimental study examines the probability distribution functions (PDFs) of turbulent flow characteristics within near-bed and away-from-bed surfaces for both no-seepage and seepage flows. Laboratory experiments were conducted on a plane sand bed for no seepage (NS), 10% seepage (10%S) and 15% seepage (15%S) cases. The experimentally determined PDFs of turbulent parameters such as Reynolds shear stress, velocity fluctuations, and bursting events are compared with the theoretical expressions obtained from a Gram–Charlier (GC)-based exponential distribution. The experimental observations follow the computed PDF distributions for both no-seepage and seepage cases. The Jensen–Shannon divergence (JSD) method is used to measure the similarity between theoretical and experimental PDFs. The value of the JSD for the PDFs of velocity fluctuations lies between 0.0005 and 0.003, while the JSD value for the PDFs of Reynolds shear stress varies between 0.001 and 0.006. Even with the application of seepage, the PDF distributions of bursting events, sweeps and ejections are well characterized by the exponential distribution of the GC series, except for a slight deflection of inward and outward interactions, which may be due to weaker events. The value of the JSD for outward and inward interactions ranges from 0.0013 to 0.032, while the JSD value for sweep and ejection events varies between 0.0001 and 0.0025. A theoretical expression for the PDF of turbulent intensity is developed in the present study, which agrees well with the experimental observations, with JSD between 0.007 and 0.015. The work presented is potentially applicable to the probability distribution of mobile-bed sediments in seepage-affected alluvial channels typically characterized by the various turbulent parameters. The purpose of PDF estimation from experimental data is that it provides a complete numerical description of turbulent flow at either a single point or a finite number of points.
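The similarity measure used above can be sketched as follows: the Jensen-Shannon divergence between an empirical histogram and a theoretical PDF evaluated on the same bins (here simply a Gaussian, not the Gram-Charlier form, and with synthetic stand-in data). Note that `scipy.spatial.distance.jensenshannon` returns the square root of the JSD, so it is squared.

```python
# JSD between an empirical histogram of "velocity fluctuations" and a
# theoretical PDF discretized on the same bins.
import numpy as np
from scipy import stats
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(4)
fluct = rng.standard_normal(20000)            # synthetic velocity fluctuations

bins = np.linspace(-4, 4, 81)
centers = 0.5 * (bins[:-1] + bins[1:])
emp, _ = np.histogram(fluct, bins=bins, density=True)
theory = stats.norm.pdf(centers)

# Normalize both to discrete probability vectors on the bins, then compare.
jsd = jensenshannon(emp / emp.sum(), theory / theory.sum()) ** 2
print(round(jsd, 5))
```

A value near zero, as obtained here, indicates the empirical and theoretical PDFs are nearly indistinguishable at this bin resolution, the same reading as the small JSD values reported in the abstract.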

  8. Description of atomic burials in compact globular proteins by Fermi-Dirac probability distributions.

    Science.gov (United States)

    Gomes, Antonio L C; de Rezende, Júlia R; Pereira de Araújo, Antônio F; Shakhnovich, Eugene I

    2007-02-01

    We perform a statistical analysis of atomic distributions as a function of the distance R from the molecular geometrical center in a nonredundant set of compact globular proteins. The number of atoms increases quadratically for small R, indicating a constant average density inside the core, reaches a maximum at a size-dependent distance R(max), and falls rapidly for larger R. The empirical curves turn out to be consistent with the volume increase of spherical concentric solid shells and a Fermi-Dirac distribution in which the distance R plays the role of an effective atomic energy epsilon(R) = R. The effective chemical potential mu governing the distribution increases with the number of residues, reflecting the size of the protein globule, while the temperature parameter beta decreases. Interestingly, the product beta*mu is not as strongly dependent on protein size and appears to be tuned to maintain approximately half of the atoms in the high-density interior and the other half in the exterior region of rapidly decreasing density. A normalized size-independent distribution was obtained for the atomic probability as a function of the reduced distance, r = R/R(g), where R(g) is the radius of gyration. The global normalized Fermi distribution, F(r), can be reasonably decomposed into Fermi-like subdistributions for different atomic types tau, F(tau)(r), with Sigma(tau)F(tau)(r) = F(r), which depend on two additional parameters mu(tau) and h(tau). The chemical potential mu(tau) affects a scaling prefactor and depends on the overall frequency of the corresponding atomic type, while the maximum position of the subdistribution is determined by h(tau), which appears in a type-dependent atomic effective energy, epsilon(tau)(r) = h(tau)*r, and is strongly correlated to available hydrophobicity scales. Better adjustments are obtained when the effective energy is not assumed to be necessarily linear, i.e. epsilon(tau)*(r) = h(tau)*r^alpha, in which case a correlation with hydrophobicity
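The functional form described above, a shell-volume factor times a Fermi-Dirac occupation with mu and beta as fit parameters, can be fitted with a standard least-squares routine. The sketch below uses synthetic radial counts generated from the same family; all constants are illustrative, not values from the paper.

```python
# Fit n(R) = A * R^2 / (exp(beta*(R - mu)) + 1) to a synthetic radial
# atomic-count profile and recover the Fermi parameters.
import numpy as np
from scipy.optimize import curve_fit

def fermi_profile(R, A, beta, mu):
    return A * R**2 / (np.exp(beta * (R - mu)) + 1.0)

rng = np.random.default_rng(6)
R = np.linspace(0.5, 25.0, 60)                  # radial bins (arbitrary units)
true = fermi_profile(R, 12.0, 0.8, 15.0)
counts = true * (1 + 0.03 * rng.standard_normal(R.size))   # 3% noise

popt, _ = curve_fit(fermi_profile, R, counts, p0=(10.0, 1.0, 12.0))
A_fit, beta_fit, mu_fit = popt
print(round(beta_fit, 2), round(mu_fit, 2))
```

The recovered mu marks the crossover radius between the constant-density core and the rapidly decaying exterior, which is the role the effective chemical potential plays in the abstract.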

  9. Towards a theoretical determination of the geographical probability distribution of meteoroid impacts on Earth

    Science.gov (United States)

    Zuluaga, Jorge I.; Sucerquia, Mario

    2018-06-01

    The Tunguska and Chelyabinsk impact events occurred inside a geographical area covering only 3.4 per cent of the Earth's surface. Although two events hardly constitute a statistically significant demonstration of a geographical pattern of impacts, their spatial coincidence is at least tantalizing. To understand whether this concurrence reflects an underlying geographical and/or temporal pattern, we must aim at predicting the spatio-temporal distribution of meteoroid impacts on Earth. For this purpose we designed, implemented, and tested a novel numerical technique, `Gravitational Ray Tracing' (GRT), which computes the relative impact probability (RIP) on the surface of any planet. GRT is inspired by the ray-casting techniques used to render realistic images of complex 3D scenes. In this paper we describe the method and the results of testing it at the times of large impact events. Our findings suggest a non-trivial pattern of impact probabilities at any given time on the Earth. Locations at 60-90° from the apex are more prone to impacts, especially at midnight. Counterintuitively, sites close to the apex direction have the lowest RIP, while in the antapex direction RIPs are slightly larger than average. We present preliminary maps of RIP at the times of the Tunguska and Chelyabinsk events and find no evidence of a spatial or temporal pattern, suggesting that their coincidence was fortuitous. We also apply the GRT method to compute theoretical RIPs at the locations and times of 394 large fireballs. Although the predicted spatio-temporal impact distribution matches the observed events only marginally, we successfully predict their impact speed distribution.

  10. Spatial distribution and occurrence probability of regional new particle formation events in eastern China

    Directory of Open Access Journals (Sweden)

    X. Shen

    2018-01-01

    In this work, the spatial extent of new particle formation (NPF) events and the relative probability of observing particles originating from different spatial origins around three rural sites in eastern China were investigated using the NanoMap method, based on particle number size distribution (PNSD) data and air mass back trajectories. The lengths of the datasets used were 7, 1.5, and 3 years at the rural sites Shangdianzi (SDZ) in the North China Plain (NCP), Mt. Tai (TS) in central eastern China, and Lin'an (LAN) in the Yangtze River Delta region in eastern China, respectively. Regional NPF events were observed to occur with a horizontal extent larger than 500 km at SDZ and TS, favoured by the fast transport of northwesterly air masses. At LAN, however, the spatial footprint of NPF events was mostly observed around the site within 100-200 km. Differences in the horizontal spatial distribution of new particle source areas at the different sites were connected to the typical meteorological conditions at each site. Consecutive large-scale regional NPF events were observed simultaneously at SDZ and TS and were associated with a high surface pressure system dominating over this area. Simultaneous NPF events at SDZ and LAN were seldom observed. At SDZ, polluted air masses arriving over the NCP were associated with a higher particle growth rate (GR) and new particle formation rate (J) than air masses from Inner Mongolia (IM). At TS the same phenomenon was observed for J, but GR was somewhat lower in air masses arriving over the NCP compared to those arriving from IM. The capability of NanoMap to capture the NPF occurrence probability depends on the length of the PNSD dataset but also on the topography around the measurement site and the typical air mass advection speed during NPF events. Thus, long-term measurements of PNSD in the planetary boundary layer are necessary for further study of the spatial extent and the probability of NPF events.

  11. Matrix method for two-dimensional waveguide mode solution

    Science.gov (United States)

    Sun, Baoguang; Cai, Congzhong; Venkatesh, Balajee Seshasayee

    2018-05-01

    In this paper, we show that the transfer matrix theory of multilayer optics can be used to solve the modes of any two-dimensional (2D) waveguide for their effective indices and field distributions. A 2D waveguide, even one composed of numerous layers, is essentially a multilayer stack, and transmission through the stack can be analysed using transfer matrix theory. The result is a transfer matrix with four complex-valued elements, namely A, B, C and D. The effective index of a guided mode satisfies two conditions: (1) evanescent waves exist simultaneously in the first (cladding) layer and the last (substrate) layer, and (2) the complex element D vanishes. For a given mode, the field distribution in the waveguide is the result of a 'folded' plane wave. In each layer there is only propagation and absorption; at each boundary only reflection and refraction occur, which can be calculated according to the Fresnel equations. As examples, we show that this method can be used to solve the modes supported by multilayer step-index dielectric waveguides, slot waveguides, gradient-index waveguides and various plasmonic waveguides. The results indicate that the transfer matrix method is effective for 2D waveguide mode solution in general.
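For the simplest case, a symmetric three-layer slab in TE polarization, a transfer-matrix mode search of the kind described above can be sketched as follows. Each layer contributes a 2x2 matrix acting on (E, dE/dx); requiring exponential decay in both outer layers turns the matrix elements into a scalar dispersion function whose zeros are the guided-mode effective indices. The layer indices and thickness are illustrative assumptions, and this is a simplified stand-in for the paper's A, B, C, D formulation.

```python
# Transfer-matrix search for guided TE modes of a symmetric slab waveguide.
import numpy as np
from scipy.optimize import brentq

wavelength = 1.55            # um
n_clad, n_core = 1.45, 1.50
thickness = 2.0              # um core thickness
k0 = 2 * np.pi / wavelength

def dispersion(n_eff):
    kx = k0 * np.sqrt(n_core**2 - n_eff**2)   # transverse wavenumber in the core
    g = k0 * np.sqrt(n_eff**2 - n_clad**2)    # decay constant in the outer layers
    # 2x2 transfer matrix of the core layer acting on (E, E')
    m11, m12 = np.cos(kx * thickness), np.sin(kx * thickness) / kx
    m21, m22 = -kx * np.sin(kx * thickness), np.cos(kx * thickness)
    # decay on both sides: E' = +g*E below the core, E' = -g*E above it
    return m21 + g * m22 + g * m11 + g * g * m12

# Scan for sign changes of the dispersion function, then refine by bisection.
grid = np.linspace(n_clad + 1e-6, n_core - 1e-6, 2000)
vals = [dispersion(n) for n in grid]
modes = [brentq(dispersion, a, b)
         for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:])
         if fa * fb < 0]

print(modes)   # effective indices of the guided TE modes
```

For these parameters the slab is single-moded, and the one zero found reproduces the standard slab dispersion relation tan(kx·d) = 2·kx·g/(kx² − g²).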

  12. Modeling the probability distribution of positional errors incurred by residential address geocoding

    Directory of Open Access Journals (Sweden)

    Mazumdar Soumya

    2007-01-01

    Abstract Background The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Results Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (> 15 km) outliers occurred among the 60%-matched geocoding errors; outliers also occurred for the other two types of geocoding errors but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that could not be fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Conclusion Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.

  13. Neutronic evolution of SENA reactor during the first and second cycles. Comparison between the experimental power distributions obtained from the in-core instrumentation evaluation code CIRCE and the theoretical power values computed with the two-dimensional diffusion-evolution code EVOE

    International Nuclear Information System (INIS)

    Andrieux, Chantal

    1976-03-01

    The neutronic evolution of the SENA reactor during the first and second cycles is presented. The experimental power distributions, obtained from the in-core instrumentation evaluation code CIRCE, are compared with the theoretical powers calculated with the two-dimensional diffusion-evolution code EVOE. The CIRCE code allows the study of the evolution of the principal parameters of the core and the comparison of measured and theoretical results. This study is therefore of great interest both for understanding the neutronic evolution of the core and for validating the refinement of theoretical estimation methods. The core calculation methods and the data required for the evaluation of the measurements are presented after a brief description of the SENA core and its in-core instrumentation. The principle of the in-core instrumentation evaluation code CIRCE, and the calculation of the experimental power distributions and nuclear core parameters, are then described. The results of the evaluation are discussed, with a comparison of the theoretical and experimental results. Taking into account the approximations used, the results for the first and second cycles at SENA are satisfactory, the deviations between theoretical and experimental power distributions being lower than 3% at the middle of the reactor and 9% at the periphery [fr

  14. Two-dimensional imaging and color Doppler flow distribution characteristics of uterine fibroids and uterine adenomyomas

    Institute of Scientific and Technical Information of China (English)

    李金莉; 梁凤伟; 严富良; 温海群; 唐其满; 朱秀蕾

    2016-01-01

    Objective: To observe the two-dimensional image and color Doppler flow distribution characteristics of uterine fibroids and uterine adenomyomas, in order to improve the diagnostic accuracy for the two diseases. Method: 50 patients with uterine fibroids and 50 patients with uterine adenomyomas were randomly selected and examined with ultrasound; the two-dimensional image characteristics of the two lesions were observed and their blood flow distribution characteristics compared. Results: The interior of uterine fibroids was mostly hypoechoic, with color Doppler showing a peripheral ring-like flow signal and a high-velocity, low-resistance flow distribution. Uterine adenomyomas appeared as abnormal lesions in the myometrium with blurred boundaries; color Doppler showed scattered flow signals with a high-velocity, high-resistance distribution, and VS, VD, PI and RI were all higher than at the periphery of uterine fibroids, with statistically significant differences (P < 0.05). Conclusion: The two-dimensional image and color Doppler flow distribution characteristics of uterine fibroids and uterine adenomyomas help to improve the accuracy of diagnosis of the two diseases.

  15. Calculating the Prior Probability Distribution for a Causal Network Using Maximum Entropy: Alternative Approaches

    Directory of Open Access Journals (Sweden)

    Michael J. Markham

    2011-07-01

    Some problems occurring in Expert Systems can be resolved by employing a causal (Bayesian) network, and methodologies exist for this purpose. These require data in a specific form and make assumptions about the independence relationships involved. Methodologies using Maximum Entropy (ME) are free from these conditions and have the potential to be used in a wider context, including systems consisting of given sets of linear and independence constraints, subject to consistency and convergence. ME can also be used to validate results from the causal network methodologies. Three ME methods for determining the prior probability distribution of causal network systems are considered. The first method is Sequential Maximum Entropy, in which the computation of a progression of local distributions leads to the overall distribution. This is followed by a development of the Method of Tribus. The development takes the form of an algorithm that includes the handling of explicit independence constraints; these fall into two groups: those relating parents of vertices, and those deduced from triangulation of the remaining graph. The third method involves a variation in the part of that algorithm which handles independence constraints. Evidence is presented that this adaptation only requires the linear constraints and the parental independence constraints to emulate the second method in a substantial class of examples.

  16. Various models for pion probability distributions from heavy-ion collisions

    International Nuclear Information System (INIS)

    Mekjian, A.Z.; Mekjian, A.Z.; Schlei, B.R.; Strottman, D.; Schlei, B.R.

    1998-01-01

    Various models for pion multiplicity distributions produced in relativistic heavy ion collisions are discussed. The models include a relativistic hydrodynamic model, a thermodynamic description, an emitting source pion laser model, and a description which generates a negative binomial description. The approach developed can be used to discuss other cases which will be mentioned. The pion probability distributions for these various cases are compared. Comparisons of the pion laser model with Bose-Einstein condensation in a laser trap and with the thermal model are made. The thermal model and hydrodynamic model are also used to illustrate why the number of pions never diverges and why the Bose-Einstein correction effects are relatively small. The pion emission strength η of a Poisson emitter and a critical density η_c are connected in a thermal model by η/η_c = e^(−m/T) < 1, and this fact reduces any Bose-Einstein correction effects in the number and number fluctuation of pions. Fluctuations can be much larger than Poisson in the pion laser model and for a negative binomial description. The clan representation of the negative binomial distribution due to Van Hove and Giovannini is discussed using the present description. Applications to CERN/NA44 and CERN/NA49 data are discussed in terms of the relativistic hydrodynamic model. copyright 1998 The American Physical Society
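
    The Poisson and negative binomial multiplicity distributions compared in the record can be sketched in a few lines; evaluating both in log space keeps large n stable. The parameterization below (mean plus a shape parameter k, giving variance mean·(1 + mean/k)) is a common convention and an assumption here, not necessarily the one used in the paper.

```python
import math

def poisson_pmf(n, mean):
    # evaluated in log space so large n stays finite
    return math.exp(-mean + n * math.log(mean) - math.lgamma(n + 1))

def neg_binomial_pmf(n, mean, k):
    """Negative binomial with the given mean and shape k;
    variance = mean * (1 + mean / k), always super-Poissonian."""
    p = k / (k + mean)
    log_coef = math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
    return math.exp(log_coef + k * math.log(p) + n * math.log(1.0 - p))
```

    For a mean of 8 pions, the negative binomial with k = 2 has variance 40 versus 8 for the Poisson, illustrating the much larger fluctuations the record discusses.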

  17. On the probability distribution of daily streamflow in the United States

    Science.gov (United States)

    Blum, Annalise G.; Archfield, Stacey A.; Vogel, Richard M.

    2017-06-01

    Daily streamflows are often represented by flow duration curves (FDCs), which illustrate the frequency with which flows are equaled or exceeded. FDCs have had broad applications across both operational and research hydrology for decades; however, modeling FDCs has proven elusive. Daily streamflow is a complex time series with flow values ranging over many orders of magnitude. The identification of a probability distribution that can approximate daily streamflow would improve understanding of the behavior of daily flows and the ability to estimate FDCs at ungaged river locations. Comparisons of modeled and empirical FDCs at nearly 400 unregulated, perennial streams illustrate that the four-parameter kappa distribution provides a very good representation of daily streamflow across the majority of physiographic regions in the conterminous United States (US). Further, for some regions of the US, the three-parameter generalized Pareto and lognormal distributions also provide a good approximation to FDCs. Similar results are found for the period of record FDCs, representing the long-term hydrologic regime at a site, and median annual FDCs, representing the behavior of flows in a typical year.
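
    An empirical flow duration curve of the kind being modeled here is straightforward to construct: sort the daily flows in descending order and attach plotting-position exceedance probabilities. A minimal sketch, assuming the Weibull plotting position i/(n+1) (a common choice, not necessarily the authors'):

```python
def flow_duration_curve(daily_flows):
    """Empirical FDC as (exceedance probability, flow) pairs,
    using the Weibull plotting position i / (n + 1)."""
    flows = sorted(daily_flows, reverse=True)
    n = len(flows)
    return [((i + 1) / (n + 1), q) for i, q in enumerate(flows)]
```

    For the toy record [5.0, 1.0, 3.0, 2.0, 4.0], the flow equaled or exceeded half the time is the median, 3.0; fitting a kappa or generalized Pareto distribution to such curves would require a distribution-fitting library on top of this.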

  18. Constituent quarks as clusters in quark-gluon-parton model. [Total cross sections, probability distributions

    Energy Technology Data Exchange (ETDEWEB)

    Kanki, T [Osaka Univ., Toyonaka (Japan). Coll. of General Education

    1976-12-01

    We present a quark-gluon-parton model in which quark-partons and gluons make clusters corresponding to two or three constituent quarks (or anti-quarks) in the meson or in the baryon, respectively. We explicitly construct the constituent quark state (cluster), by employing the Kuti-Weisskopf theory and by requiring the scaling. The quark additivity of the hadronic total cross sections and the quark counting rules on the threshold powers of various distributions are satisfied. For small x (Feynman fraction), it is shown that the constituent quarks and quark-partons have quite different probability distributions. We apply our model to hadron-hadron inclusive reactions, and clarify that the fragmentation and the diffractive processes relate to the constituent quark distributions, while the processes in or near the central region are controlled by the quark-partons. Our model gives a reasonable interpretation of the experimental data and much improves the usual ''constituent interchange model'' result near and in the central region (x ≈ x_T ≈ 0).

  19. The return period analysis of natural disasters with statistical modeling of bivariate joint probability distribution.

    Science.gov (United States)

    Li, Ning; Liu, Xueqin; Xie, Wei; Wu, Jidong; Zhang, Peng

    2013-01-01

    New features of natural disasters have been observed over the last several years. The factors that influence the disasters' formation mechanisms, regularity of occurrence and main characteristics have been revealed to be more complicated and diverse in nature than previously thought. As the uncertainty involved increases, the variables need to be examined further. This article discusses the importance of, and the shortage of, multivariate analysis of natural disasters, and presents a method to estimate the joint probability of the return periods and perform a risk analysis. Severe dust storms from 1990 to 2008 in Inner Mongolia were used as a case study to test this new methodology, as they are normal and recurring climatic phenomena on Earth. Based on the 79 investigated events and according to the bivariate dust storm definition, the joint probability distribution of severe dust storms was established using the observed data of maximum wind speed and duration. The joint return periods of severe dust storms were calculated, and the relevant risk was analyzed according to the joint probability. The copula function is able to simulate severe dust storm disasters accurately. The joint return periods generated are closer to those observed in reality than the univariate return periods and thus have more value in severe dust storm disaster mitigation, strategy making, program design, and improvement of risk management. This research may prove useful in risk-based decision making. The exploration of multivariate analysis methods can also lay the foundation for further applications in natural disaster risk analysis. © 2012 Society for Risk Analysis.
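
    The joint return period idea can be sketched with a Gumbel copula, a standard choice for positively dependent extremes; the record does not specify which copula family was fitted, so the family and function names here are illustrative assumptions. For thresholds with marginal non-exceedance probabilities u and v and mean interarrival time mu, the "AND" return period divides mu by the probability that both thresholds are exceeded.

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel copula CDF; theta >= 1, with theta = 1 reducing to independence (C = u*v)."""
    return math.exp(-(((-math.log(u)) ** theta + (-math.log(v)) ** theta) ** (1.0 / theta)))

def joint_return_period_and(u, v, theta, mu=1.0):
    """'AND' joint return period: mean time until both variables exceed their
    thresholds; u, v are marginal non-exceedance probabilities and mu is the
    mean interarrival time of events."""
    p_both_exceed = 1.0 - u - v + gumbel_copula(u, v, theta)
    return mu / p_both_exceed
```

    With u = v = 0.9 the independent (theta = 1) joint return period is 100 events; positive dependence (theta > 1) shortens it, which is why univariate return periods alone overstate the rarity of compound events.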

  20. The correlation of defect distribution in collisional phase with measured cascade collapse probability

    International Nuclear Information System (INIS)

    Morishita, K.; Ishino, S.; Sekimura, N.

    1995-01-01

    The spatial distributions of atomic displacement at the end of the collisional phase of cascade damage processes were calculated using the computer simulation code MARLOWE, which is based on the binary collision approximation (BCA). The densities of the atomic displacement were evaluated in high dense regions (HDRs) of cascades in several pure metals (Fe, Ni, Cu, Ag, Au, Mo and W). They were compared with the measured cascade collapse probabilities reported in the literature, where TEM observations were carried out using thin metal foils irradiated by low-dose ions at room temperature. We found that there exist minimum, or ''critical'', values of the atomic displacement densities for the HDR to collapse into TEM-visible vacancy clusters. The critical densities are generally independent of the cascade energy in the same metal. Furthermore, the material dependence of the critical densities can be explained by the difference in the vacancy mobility at the melting temperature of target materials. This critical density calibration, which is extracted from the ion-irradiation experiments and the BCA simulations, is applied to estimation of cascade collapse probabilities in the metals irradiated by fusion neutrons. (orig.)

  1. Analysis of Observation Data of Earth-Rockfill Dam Based on Cloud Probability Distribution Density Algorithm

    Directory of Open Access Journals (Sweden)

    Han Liwei

    2014-07-01

    Full Text Available Monitoring data on an earth-rockfill dam constitutes a form of spatial data. Such data include much uncertainty owing to the limitations of measurement information, material parameters, load, geometry size, initial conditions, boundary conditions and the calculation model. So the cloud probability density of the monitoring data must be addressed. In this paper, the cloud theory model was used to address the uncertainty transition between the qualitative concept and the quantitative description. Then an improved algorithm of cloud probability distribution density based on a backward cloud generator was proposed. This was used to effectively convert parcels of accurate data into concepts which can be described by proper qualitative linguistic values. Such qualitative description was addressed as the cloud numerical characteristics {Ex, En, He}, which can represent the characteristics of all cloud drops. The algorithm was then applied to analyze the observation data of a piezometric tube in an earth-rockfill dam. Experimental results proved that the proposed algorithm was feasible; with it, the changing regularity of the piezometric tube's water level could be revealed, and seepage damage in the dam body could be detected.
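
    In its basic one-dimensional form, the backward cloud generator mentioned in the record reduces to three moment estimates: Ex from the sample mean, En from the first absolute central moment, and He from the gap between the sample variance and En². A minimal sketch under those standard definitions (this is the common moment-based form, not necessarily the paper's improved algorithm):

```python
import math
import random
import statistics

def backward_cloud(drops):
    """Backward cloud generator: estimate (Ex, En, He) from a sample of cloud drops."""
    ex = statistics.fmean(drops)
    # sqrt(pi/2) times the mean absolute deviation estimates the entropy En
    en = math.sqrt(math.pi / 2.0) * statistics.fmean(abs(x - ex) for x in drops)
    # hyper-entropy He: spread of En, from the variance left over after En**2
    he = math.sqrt(abs(statistics.variance(drops) - en * en))
    return ex, en, he

rng = random.Random(42)
drops = [rng.gauss(10.0, 2.0) for _ in range(20000)]
ex, en, he = backward_cloud(drops)  # Ex near 10, En near 2, He near 0 for a pure Gaussian cloud
```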

  2. Wave functions and two-electron probability distributions of the Hooke's-law atom and helium

    International Nuclear Information System (INIS)

    O'Neill, Darragh P.; Gill, Peter M. W.

    2003-01-01

    The Hooke's-law atom (hookium) provides an exactly soluble model for a two-electron atom in which the nuclear-electron Coulombic attraction has been replaced by a harmonic one. Starting from the known exact position-space wave function for the ground state of hookium, we present the momentum-space wave function. We also look at the intracules, two-electron probability distributions, for hookium in position, momentum, and phase space. These are compared with the Hartree-Fock results and the Coulomb holes (the difference between the exact and Hartree-Fock intracules) in position, momentum, and phase space are examined. We then compare these results with analogous results for the ground state of helium using a simple, explicitly correlated wave function

  3. Light Scattering of Rough Orthogonal Anisotropic Surfaces with Secondary Most Probable Slope Distributions

    International Nuclear Information System (INIS)

    Li Hai-Xia; Cheng Chuan-Fu

    2011-01-01

    We study the light scattering of an orthogonal anisotropic rough surface with secondary most probable slope distribution. It is found that the scattered intensity profiles have obvious secondary maxima, and in the direction perpendicular to the plane of incidence, the secondary maxima are oriented in a curve on the observation plane, which is called the orientation curve. By numerical calculation of the scattering wave fields with the height data of the sample, it is validated that the secondary maxima are induced by the side face element, which constitutes the prismoid structure of the anisotropic surface. We derive the equation of the quadratic orientation curve. Experimentally, we construct the system for light scattering measurement using a CCD. The scattered intensity profiles are extracted from the images at different angles of incidence along the orientation curves. The experimental results conform to the theory.

  4. Estimating species occurrence, abundance, and detection probability using zero-inflated distributions.

    Science.gov (United States)

    Wenger, Seth J; Freeman, Mary C

    2008-10-01

    Researchers have developed methods to account for imperfect detection of species with either occupancy (presence-absence) or count data using replicated sampling. We show how these approaches can be combined to simultaneously estimate occurrence, abundance, and detection probability by specifying a zero-inflated distribution for abundance. This approach may be particularly appropriate when patterns of occurrence and abundance arise from distinct processes operating at differing spatial or temporal scales. We apply the model to two data sets: (1) previously published data for a species of duck, Anas platyrhynchos, and (2) data for a stream fish species, Etheostoma scotti. We show that in these cases, an incomplete-detection zero-inflated modeling approach yields a superior fit to the data compared with other models. We propose that zero-inflated abundance models accounting for incomplete detection be considered when replicate count data are available.
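
    The zero-inflated abundance idea can be written down directly: with occupancy probability psi and Poisson counts where the species is present, the zero class combines structural absences with Poisson zeros. A minimal sketch of the pmf (parameter names are illustrative; the paper's full model also layers a detection probability on top):

```python
import math

def zip_pmf(n, psi, lam):
    """Zero-inflated Poisson pmf: a site is occupied with probability psi and,
    if occupied, yields Poisson(lam) counts; n = 0 mixes structural
    absences with Poisson zeros."""
    pois = math.exp(-lam) * lam ** n / math.factorial(n)
    if n == 0:
        return (1.0 - psi) + psi * pois
    return psi * pois
```

    The mean is psi * lam, and the probability of a zero count always exceeds the plain Poisson value, which is the excess-zeros signature the model is built to capture.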

  5. Exploring two-dimensional electron gases with two-dimensional Fourier transform spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Paul, J.; Dey, P.; Karaiskaj, D., E-mail: karaiskaj@usf.edu [Department of Physics, University of South Florida, 4202 East Fowler Ave., Tampa, Florida 33620 (United States); Tokumoto, T.; Hilton, D. J. [Department of Physics, University of Alabama at Birmingham, Birmingham, Alabama 35294 (United States); Reno, J. L. [CINT, Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)

    2014-10-07

    The dephasing of the Fermi edge singularity excitations in two modulation doped single quantum wells of 12 nm and 18 nm thickness and in-well carrier concentration of ∼4 × 10¹¹ cm⁻² was carefully measured using spectrally resolved four-wave mixing (FWM) and two-dimensional Fourier transform (2DFT) spectroscopy. Although the absorption at the Fermi edge is broad at this doping level, the spectrally resolved FWM shows narrow resonances. Two peaks are observed separated by the heavy hole/light hole energy splitting. Temperature dependent “rephasing” (S₁) 2DFT spectra show a rapid linear increase of the homogeneous linewidth with temperature. The dephasing rate increases faster with temperature in the narrower 12 nm quantum well, likely due to an increased carrier-phonon scattering rate. The S₁ 2DFT spectra were measured using co-linear, cross-linear, and co-circular polarizations. Distinct 2DFT lineshapes were observed for co-linear and cross-linear polarizations, suggesting the existence of polarization dependent contributions. The “two-quantum coherence” (S₃) 2DFT spectra for the 12 nm quantum well show a single peak for both co-linear and co-circular polarizations.

  6. Functional inks and printing of two-dimensional materials.

    Science.gov (United States)

    Hu, Guohua; Kang, Joohoon; Ng, Leonard W T; Zhu, Xiaoxi; Howe, Richard C T; Jones, Christopher G; Hersam, Mark C; Hasan, Tawfique

    2018-05-08

    Graphene and related two-dimensional materials provide an ideal platform for next generation disruptive technologies and applications. Exploiting these solution-processed two-dimensional materials in printing can accelerate this development by allowing additive patterning on both rigid and conformable substrates for flexible device design and large-scale, high-speed, cost-effective manufacturing. In this review, we summarise the current progress on ink formulation of two-dimensional materials and the printable applications enabled by them. We also present our perspectives on their research and technological future prospects.

  7. Third sound in one and two dimensional modulated structures

    International Nuclear Information System (INIS)

    Komuro, T.; Kawashima, H.; Shirahama, K.; Kono, K.

    1996-01-01

    An experimental technique is developed to study acoustic transmission in one and two dimensional modulated structures by employing third sound of a superfluid helium film. In particular, the Penrose lattice, which is a two dimensional quasiperiodic structure, is studied. In two dimensions, the scattering of third sound is weaker than in one dimension. Nevertheless, the authors find that the transmission spectrum in the Penrose lattice, which is a two dimensional prototype of the quasicrystal, is observable if the helium film thickness is chosen around 5 atomic layers. The transmission spectra in the Penrose lattice are explained in terms of dynamical theory of diffraction

  8. ONE-DIMENSIONAL AND TWO-DIMENSIONAL LEADERSHIP STYLES

    Directory of Open Access Journals (Sweden)

    Nikola Stefanović

    2007-06-01

    Full Text Available In order to motivate their group members to perform certain tasks, leaders use different leadership styles. These styles are based on leaders' backgrounds, knowledge, values, experiences, and expectations. The one-dimensional styles, used by many world leaders, are autocratic and democratic styles. These styles lie on the two opposite sides of the leadership spectrum. In order to precisely define the leadership styles on the spectrum between the autocratic leadership style and the democratic leadership style, leadership theory researchers use two dimensional matrices. The two-dimensional matrices define leadership styles on the basis of different parameters. By using these parameters, one can identify two-dimensional styles.

  9. Fermion emission in a two-dimensional black hole space-time

    International Nuclear Information System (INIS)

    Wanders, G.

    1994-01-01

    We investigate massless fermion production by a two-dimensional dilatonic black hole. Our analysis is based on the Bogoliubov transformation relating the outgoing fermion field observed outside the black hole horizon to the incoming field present before the black hole creation. It takes full account of the fact that the transformation is neither invertible nor unitarily implementable. The particle content of the outgoing radiation is specified by means of inclusive probabilities for the detection of sets of outgoing fermions and antifermions in given states. For states localized near the horizon these probabilities characterize a thermal equilibrium state. The way the probabilities become thermal as one approaches the horizon is discussed in detail

  10. Probability distribution of distance in a uniform ellipsoid: Theory and applications to physics

    International Nuclear Information System (INIS)

    Parry, Michelle; Fischbach, Ephraim

    2000-01-01

    A number of authors have previously found the probability P_n(r) that two points uniformly distributed in an n-dimensional sphere are separated by a distance r. This result greatly facilitates the calculation of self-energies of spherically symmetric matter distributions interacting by means of an arbitrary radially symmetric two-body potential. We present here the analogous results for P_2(r;ε) and P_3(r;ε) which respectively describe an ellipse and an ellipsoid whose major and minor axes are 2a and 2b. It is shown that for ε = (1 − b²/a²)^(1/2) ≤ 1, P_2(r;ε) and P_3(r;ε) can be obtained as an expansion in powers of ε, and our results are valid through order ε⁴. As an application of these results we calculate the Coulomb energy of an ellipsoidal nucleus, and compare our result to an earlier result quoted in the literature. (c) 2000 American Institute of Physics
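
    The spherical limit of these results (two points uniform in a ball, i.e. ε = 0 in three dimensions) is easy to check by Monte Carlo: the known mean pair separation is 36R/35. A minimal sketch using rejection sampling (function names are illustrative):

```python
import math
import random

def random_point_in_ball(R, rng):
    # rejection sampling from the bounding cube; acceptance rate is pi/6
    while True:
        x, y, z = (rng.uniform(-R, R) for _ in range(3))
        if x * x + y * y + z * z <= R * R:
            return (x, y, z)

def mc_mean_pair_distance(R=1.0, n=100_000, seed=1):
    """Monte Carlo estimate of the mean distance between two uniform points in a ball."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += math.dist(random_point_in_ball(R, rng), random_point_in_ball(R, rng))
    return total / n
```

    With 100,000 pairs the estimate agrees with 36R/35 ≈ 1.0286R to well within the Monte Carlo standard error.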

  11. Probability distribution functions for ELM bursts in a series of JET tokamak discharges

    International Nuclear Information System (INIS)

    Greenhough, J; Chapman, S C; Dendy, R O; Ward, D J

    2003-01-01

    A novel statistical treatment of the full raw edge localized mode (ELM) signal from a series of previously studied JET plasmas is tested. The approach involves constructing probability distribution functions (PDFs) for ELM amplitudes and time separations, and quantifying the fit between the measured PDFs and model distributions (Gaussian, inverse exponential) and Poisson processes. Uncertainties inherent in the discreteness of the raw signal require the application of statistically rigorous techniques to distinguish ELM data points from background, and to extrapolate peak amplitudes. The accuracy of PDF construction is further constrained by the relatively small number of ELM bursts (several hundred) in each sample. In consequence the statistical technique is found to be difficult to apply to low frequency (typically Type I) ELMs, so the focus is narrowed to four JET plasmas with high frequency (typically Type III) ELMs. The results suggest that there may be several fundamentally different kinds of Type III ELMing process at work. It is concluded that this novel statistical treatment can be made to work, may have wider applications to ELM data, and has immediate practical value as an additional quantitative discriminant between classes of ELMing behaviour

  12. A statistical model for deriving probability distributions of contamination for accidental releases

    International Nuclear Information System (INIS)

    ApSimon, H.M.; Davison, A.C.

    1986-01-01

    Results generated from a detailed long-range transport model, MESOS, simulating dispersal of a large number of hypothetical releases of radionuclides in a variety of meteorological situations over Western Europe have been used to derive a simpler statistical model, MESOSTAT. This model may be used to generate probability distributions of different levels of contamination at a receptor point 100-1000 km or so from the source (for example, across a frontier in another country) without considering individual release and dispersal scenarios. The model is embodied in a series of equations involving parameters which are determined from such factors as distance between source and receptor, nuclide decay and deposition characteristics, release duration, and geostrophic windrose at the source. Suitable geostrophic windrose data have been derived for source locations covering Western Europe. Special attention has been paid to the relatively improbable extreme values of contamination at the top end of the distribution. The MESOSTAT model and its development are described, with illustrations of its use and comparison with the original more detailed modelling techniques. (author)

  13. A Bloch modal approach for engineering waveguide and cavity modes in two-dimensional photonic crystals

    DEFF Research Database (Denmark)

    de Lasson, Jakob Rosenkrantz; Kristensen, Philip Trøst; Mørk, Jesper

    2014-01-01

    uses no external excitation and determines the quasi-normal modes as unity eigenvalues of the cavity roundtrip matrix. We demonstrate the method and the quasi-normal modes for two types of two-dimensional photonic crystal structures, and discuss the quasi-normal mode field distributions and Q-factors...

  14. Computation of two-dimensional isothermal flow in shell-and-tube heat exchangers

    International Nuclear Information System (INIS)

    Carlucci, L.N.; Galpin, P.F.; Brown, J.D.; Frisina, V.

    1983-07-01

    A computational procedure is outlined whereby two-dimensional isothermal shell-side flow distributions can be calculated for tube bundles having arbitrary boundaries and flow blocking devices, such as sealing strips, defined in arbitrary locations. The procedure is described in some detail and several computed results are presented to illustrate the robustness and generality of the method

  15. Two-dimensional thermal modeling of power monolithic microwave integrated circuits (MMIC's)

    Science.gov (United States)

    Fan, Mark S.; Christou, Aris; Pecht, Michael G.

    1992-01-01

    Numerical simulations of the two-dimensional temperature distributions for a typical GaAs MMIC circuit are conducted, aiming at understanding the heat conduction process of the circuit chip and providing temperature information for device reliability analysis. The method used is to solve the two-dimensional heat conduction equation with a control-volume-based finite difference scheme. In particular, the effects of the power dissipation and the ambient temperature are examined, and the criterion for the worst operating environment is discussed in terms of the allowed highest device junction temperature.
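
    For steady conduction with uniform conductivity and no interior source, the control-volume finite-difference approach described here reduces to the five-point Laplace stencil. A minimal Jacobi-iteration sketch on a square grid with one hot edge (the boundary values, grid size and function name are illustrative, not the MMIC geometry):

```python
def solve_laplace_2d(nx=11, ny=11, t_top=100.0, t_rest=0.0, iters=2000):
    """Steady 2-D heat conduction (Laplace equation) with fixed edge temperatures,
    solved by Jacobi iteration on a uniform finite-difference grid."""
    T = [[t_rest] * nx for _ in range(ny)]
    T[0] = [t_top] * nx  # hot top edge; the other edges stay at t_rest
    for _ in range(iters):
        new = [row[:] for row in T]
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                # each interior node relaxes toward the average of its four neighbours
                new[j][i] = 0.25 * (T[j - 1][i] + T[j + 1][i] + T[j][i - 1] + T[j][i + 1])
        T = new
    return T
```

    The converged field decreases monotonically away from the hot edge and, by the maximum principle, interior temperatures never leave the range spanned by the boundary values; a realistic MMIC analysis would add a dissipating-device source term and material-dependent conductivities.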

  16. Tunable states of interlayer cations in two-dimensional materials

    International Nuclear Information System (INIS)

    Sato, K.; Numata, K.; Dai, W.; Hunger, M.

    2014-01-01

    The local state of cations inside the Ångstrom-scale interlayer spaces is one of the controlling factors for designing sophisticated two-dimensional (2D) materials consisting of 2D nanosheets. In the present work, the molecular mechanism on how the interlayer cation states are induced by the local structures of the 2D nanosheets is highlighted. For this purpose, the local states of Na cations in inorganic 2D materials, in which compositional fluctuations of a few percent are introduced in the tetrahedral and octahedral units of the 2D nanosheets, were systematically studied by means of ²³Na magic-angle-spinning (MAS) nuclear magnetic resonance (NMR) and ²³Na multiple-quantum MAS (MQMAS) NMR spectroscopy. In contrast with a uniform distribution of Na cations expected so far, various well-defined cation states sensitive to the local structures of the 2D nanosheets were identified. The tunability of the interlayer cation states along with the local structure of the 2D nanosheets, as the smallest structural unit of the 2D material, is discussed

  17. Tunable states of interlayer cations in two-dimensional materials

    Energy Technology Data Exchange (ETDEWEB)

    Sato, K.; Numata, K. [Department of Environmental Sciences, Tokyo Gakugei University, Koganei, Tokyo 184-8501 (Japan); Dai, W. [Key Laboratory of Advanced Energy Materials Chemistry (Ministry of Education), College of Chemistry, Nankai University, Tianjin 300071 (China); Hunger, M. [Institute of Chemical Technology, University of Stuttgart, 70550 Stuttgart (Germany)

    2014-03-31

    The local state of cations inside the Ångstrom-scale interlayer spaces is one of the controlling factors for designing sophisticated two-dimensional (2D) materials consisting of 2D nanosheets. In the present work, the molecular mechanism on how the interlayer cation states are induced by the local structures of the 2D nanosheets is highlighted. For this purpose, the local states of Na cations in inorganic 2D materials, in which compositional fluctuations of a few percent are introduced in the tetrahedral and octahedral units of the 2D nanosheets, were systematically studied by means of ²³Na magic-angle-spinning (MAS) nuclear magnetic resonance (NMR) and ²³Na multiple-quantum MAS (MQMAS) NMR spectroscopy. In contrast with a uniform distribution of Na cations expected so far, various well-defined cation states sensitive to the local structures of the 2D nanosheets were identified. The tunability of the interlayer cation states along with the local structure of the 2D nanosheets, as the smallest structural unit of the 2D material, is discussed.

  18. Almost two-dimensional treatment of drift wave turbulence

    International Nuclear Information System (INIS)

    Albert, J.M.; Similon, P.L.; Sudan, R.N.

    1990-01-01

    The approximation of two-dimensionality is studied and extended for electrostatic drift wave turbulence in a three-dimensional, magnetized plasma. It is argued on the basis of the direct interaction approximation that in the absence of parallel viscosity, purely 2-D solutions exist for which only modes with k∥ = 0 are excited, but that the 2-D spectrum is unstable to perturbations at nonzero k∥. A 1-D equation for the parallel profile g_k⊥(k∥) of the saturated spectrum at steady state is derived and solved, allowing for parallel viscosity; the spectrum has finite width in k∥, and hence finite parallel correlation length, as a result of nonlinear coupling. The enhanced energy dissipation rate, a 3-D effect, may be incorporated in the 2-D approximation by a suitable renormalization of the linear dissipation term. An algorithm is presented that reduces the 3-D problem to coupled 1- and 2-D problems. Numerical results from a 2-D spectral direct simulation, thus modified, are compared with the results from the corresponding 3-D (unmodified) simulation for a specific model of drift wave excitation. Damping at high k∥ is included. It is verified that the 1-D solution for g_k⊥(k∥) accurately describes the shape and width of the 3-D spectrum, and that the modified 2-D simulation gives a good estimate of the 3-D energy saturation level and distribution E(k⊥)

  19. Multisoliton formula for completely integrable two-dimensional systems

    International Nuclear Information System (INIS)

    Chudnovsky, D.V.; Chudnovsky, G.V.

    1979-01-01

    For general two-dimensional completely integrable systems, the exact formulae for multisoliton type solutions are given. The formulae are obtained algebraically from solutions of two linear partial differential equations

  20. Two-dimensional electronic femtosecond stimulated Raman spectroscopy

    Directory of Open Access Journals (Sweden)

    Ogilvie J.P.

    2013-03-01

    Full Text Available We report two-dimensional electronic spectroscopy with a femtosecond stimulated Raman scattering probe. The method reveals correlations between excitation energy and excited state vibrational structure following photoexcitation. We demonstrate the method in rhodamine 6G.

  1. Generalized similarity method in unsteady two-dimensional MHD ...

    African Journals Online (AJOL)

    International Journal of Engineering, Science and Technology, Vol. 1, No. 1, 2009. … temperature two-dimensional MHD laminar boundary layer of incompressible fluid. … Φ(η) is the Blasius solution for the stationary boundary layer on the plate.

  2. Two-dimensional materials for novel liquid separation membranes

    Science.gov (United States)

    Ying, Yulong; Yang, Yefeng; Ying, Wen; Peng, Xinsheng

    2016-08-01

    Demand for a perfect molecular-level separation membrane with ultrafast permeation and a robust mechanical property for any kind of species to be blocked in water purification and desalination is urgent. In recent years, due to their intrinsic characteristics, such as a unique mono-atom thick structure, outstanding mechanical strength and excellent flexibility, as well as facile and large-scale production, graphene and its large family of two-dimensional (2D) materials are regarded as ideal membrane materials for ultrafast molecular separation. A perfect separation membrane should be as thin as possible to maximize its flux, mechanically robust and without failure even if under high loading pressure, and have a narrow nanochannel size distribution to guarantee its selectivity. The latest breakthrough in 2D material-based membranes will be reviewed both in theories and experiments, including their current state-of-the-art fabrication, structure design, simulation and applications. Special attention will be focused on the designs and strategies employed to control microstructures to enhance permeation and selectivity for liquid separation. In addition, critical views on the separation mechanism within two-dimensional material-based membranes will be provided based on a discussion of the effects of intrinsic defects during growth, predefined nanopores and nanochannels during subsequent fabrication processes, the interlayer spacing of stacking 2D material flakes and the surface charge or functional groups. Furthermore, we will summarize the significant progress of these 2D material-based membranes for liquid separation in nanofiltration/ultrafiltration and pervaporation. Lastly, we will recall issues requiring attention, and discuss existing questionable conclusions in some articles and emerging challenges. 
This review will serve as a valuable platform to provide a compact source of relevant and timely information about the development of 2D material-based membranes as

  3. Two-dimensional materials for novel liquid separation membranes.

    Science.gov (United States)

    Ying, Yulong; Yang, Yefeng; Ying, Wen; Peng, Xinsheng

    2016-08-19

    Demand is urgent for a perfect molecular-level separation membrane with ultrafast permeation and robust mechanical properties for any kind of species to be blocked in water purification and desalination. In recent years, due to their intrinsic characteristics, such as a unique mono-atom-thick structure, outstanding mechanical strength and excellent flexibility, as well as facile and large-scale production, graphene and its large family of two-dimensional (2D) materials are regarded as ideal membrane materials for ultrafast molecular separation. A perfect separation membrane should be as thin as possible to maximize its flux, mechanically robust and without failure even under high loading pressure, and have a narrow nanochannel size distribution to guarantee its selectivity. The latest breakthroughs in 2D material-based membranes will be reviewed in both theory and experiment, including their current state-of-the-art fabrication, structure design, simulation and applications. Special attention will be focused on the designs and strategies employed to control microstructures to enhance permeation and selectivity for liquid separation. In addition, critical views on the separation mechanism within two-dimensional material-based membranes will be provided based on a discussion of the effects of intrinsic defects formed during growth, nanopores and nanochannels predefined during subsequent fabrication processes, the interlayer spacing of stacked 2D material flakes and the surface charge or functional groups. Furthermore, we will summarize the significant progress of these 2D material-based membranes for liquid separation in nanofiltration/ultrafiltration and pervaporation. Lastly, we will recall issues requiring attention, and discuss existing questionable conclusions in some articles and emerging challenges. 
This review will serve as a valuable platform to provide a compact source of relevant and timely information about the development of 2D material-based membranes as

  4. Topological aspect of disclinations in two-dimensional crystals

    International Nuclear Information System (INIS)

    Wei-Kai, Qi; Tao, Zhu; Yong, Chen; Ji-Rong, Ren

    2009-01-01

    By using topological current theory, this paper studies the inner topological structure of disclinations during the melting of two-dimensional systems. From two-dimensional elasticity theory, it finds that there are topological currents for topological defects in the homogeneous equation. The evolution of disclinations is studied, and the branch conditions for the generating, annihilating, crossing, splitting and merging of disclinations are given. (the physics of elementary particles and fields)

  5. Structures of two-dimensional three-body systems

    International Nuclear Information System (INIS)

    Ruan, W.Y.; Liu, Y.Y.; Bao, C.G.

    1996-01-01

    Features of the structure of L = 0 states of a two-dimensional three-body model system have been investigated. Three types of permutation symmetry of the spatial part, namely symmetric, antisymmetric, and mixed, have been considered. A comparison has been made between the two-dimensional system and the corresponding three-dimensional one. The effect of symmetry on microscopic structures is emphasized. (author)

  6. Study on two-dimensional induced signal readout of MRPC

    International Nuclear Information System (INIS)

    Wu Yucheng; Yue Qian; Li Yuanjing; Ye Jin; Cheng Jianping; Wang Yi; Li Jin

    2012-01-01

    A kind of two-dimensional readout electrode structure for the induced-signal readout of MRPC has been studied in both simulation and experiment. Several MRPC prototypes were produced, and a series of test experiments were done for comparison with the results of simulation, in order to verify the simulation model. The experimental results are in good agreement with those of the simulation. This method will be used to design the two-dimensional signal readout mode of MRPC in future work.

  7. Controlled Interactions between Two Dimensional Layered Inorganic Nanosheets and Polymers

    Science.gov (United States)

    2016-06-15

    AFRL-AFOSR-JP-TR-2016-0071: Controlled Interactions between Two Dimensional Layered Inorganic Nanosheets and Polymers. Cheolmin Park, Yonsei University. Grant number FA2386-14-1-4054. ...prospects for a variety of emerging applications in a broad range of fields, such as electronics, energy conversion and storage, catalysis and polymer

  8. Spatiotemporal chaos and two-dimensional dissipative rogue waves in Lugiato-Lefever model

    Science.gov (United States)

    Panajotov, Krassimir; Clerc, Marcel G.; Tlidi, Mustapha

    2017-06-01

    Driven nonlinear optical cavities can exhibit complex spatiotemporal dynamics. We consider the paradigmatic Lugiato-Lefever model describing a driven nonlinear optical resonator. This model is one of the most-studied nonlinear equations in optics. It describes a large spectrum of nonlinear phenomena, from bistability to periodic patterns, localized structures, self-pulsating localized structures and complex spatiotemporal behavior. The model is also considered a prototype for describing several nonlinear optical devices such as Kerr media, liquid crystals, left-handed materials, nonlinear fiber cavities, and frequency comb generation. We focus our analysis on spatiotemporal chaotic dynamics in one dimension. We identify a route to spatiotemporal chaos through an extended quasiperiodicity. We have estimated the Kaplan-Yorke dimension, which provides a measure of the strange attractor's complexity. Likewise, we show that the Lugiato-Lefever equation supports rogue waves in two-dimensional settings. We characterize rogue-wave formation by computing the probability distribution of the pulse height. Contribution to the Topical Issue "Theory and Applications of the Lugiato-Lefever Equation", edited by Yanne K. Chembo, Damia Gomila, Mustapha Tlidi, Curtis R. Menyuk.
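
The one-dimensional dynamics described here can be explored with a minimal split-step Fourier integrator for the Lugiato-Lefever equation; the grid, detuning, pump strength and time step below are illustrative assumptions, not the parameters used in the study:

```python
import numpy as np

# Split-step Fourier integration of the 1D Lugiato-Lefever equation
#   dE/dt = -(1 + i*theta)*E + i*|E|**2 * E + i*d2E/dx2 + E_in
# All parameter values are illustrative choices.
N, L = 256, 50.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
theta, E_in, dt = 2.0, 1.2, 1e-3

E = 0.5 + 0.01 * np.cos(2.0 * np.pi * x / L)   # weakly perturbed initial field
for _ in range(20_000):
    # linear dispersion step, handled exactly in Fourier space
    E = np.fft.ifft(np.exp(-1j * k**2 * dt) * np.fft.fft(E))
    # loss/detuning/Kerr step (exponential) plus a first-order pump step
    E = E * np.exp((-(1.0 + 1j * theta) + 1j * np.abs(E) ** 2) * dt) + E_in * dt

intensity = np.abs(E) ** 2
print(round(float(intensity.mean()), 2))  # settles near a homogeneous steady state
```

With these mild parameters the field relaxes to a homogeneous state; chaotic or rogue-wave regimes require stronger driving and, for the 2D results of the record, a transverse second dimension.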

  9. Alternate two-dimensional quantum walk with a single-qubit coin

    International Nuclear Information System (INIS)

    Di Franco, C.; Busch, Th.; Mc Gettrick, M.; Machida, T.

    2011-01-01

    We have recently proposed a two-dimensional quantum walk where the requirement of a higher-dimensional coin space is replaced by alternating the directions in which the walker can move [C. Di Franco, M. Mc Gettrick, and Th. Busch, Phys. Rev. Lett. 106, 080502 (2011)]. For a particular initial state of the coin, this walk is able to perfectly reproduce the spatial probability distribution of the nonlocalized case of the Grover walk. Here, we present a more detailed proof of this equivalence. We also extend the analysis to other initial states in order to provide a more complete picture of our walk. We show that this scheme outperforms the Grover walk in the generation of x-y spatial entanglement for any initial condition, with the maximum entanglement obtained in the case of the particular aforementioned state. Finally, the equivalence is generalized to wider classes of quantum walks and a limit theorem for the alternate walk in this context is presented.
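
The alternation scheme described above can be sketched in a few lines of NumPy. The Hadamard coin, the ±1 shift convention and the initial coin state (|0⟩ + i|1⟩)/√2 below are illustrative assumptions, not details taken from the record:

```python
import numpy as np

def alternate_walk_2d(steps, coin_state=(1.0, 1.0j)):
    """Alternate 2D quantum walk driven by a single-qubit Hadamard coin.

    One step = coin toss + shift along x, then coin toss + shift along y,
    so no higher-dimensional coin space is needed.
    """
    n = 2 * steps + 1                      # lattice wide enough for all steps
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # Hadamard coin
    psi = np.zeros((n, n, 2), dtype=complex)
    psi[steps, steps] = np.asarray(coin_state) / np.linalg.norm(coin_state)

    for _ in range(steps):
        for axis in (0, 1):                # shift along x, then along y
            psi = psi @ H.T                # coin toss on the internal index
            psi[..., 0] = np.roll(psi[..., 0], +1, axis=axis)  # coin-up moves +
            psi[..., 1] = np.roll(psi[..., 1], -1, axis=axis)  # coin-down moves -

    return (np.abs(psi) ** 2).sum(axis=2)  # spatial probability distribution

prob = alternate_walk_2d(20)
print(round(float(prob.sum()), 6))  # → 1.0 (unitary evolution conserves probability)
```

The x-y spatial entanglement discussed in the record would be computed from the full state `psi`, not from the probability distribution alone.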

  10. The theory of critical phenomena in two-dimensional systems

    International Nuclear Information System (INIS)

    Olvera de la C, M.

    1981-01-01

    An exposition of the theory of critical phenomena in two-dimensional physical systems is presented. The first six chapters deal with the mean field theory of critical phenomena, scale invariance of the thermodynamic functions, Kadanoff's spin block construction, Wilson's renormalization group treatment of critical phenomena in configuration space, and the two-dimensional Ising model on a triangular lattice. The second part of this work is made of four chapters devoted to the application of the ideas expounded in the first part to the discussion of critical phenomena in superfluid films, two-dimensional crystals and the two-dimensional XY model of magnetic systems. Chapters seven to ten are devoted to the following subjects: analysis of long range order in one, two, and three-dimensional physical systems. Topological defects in the XY model, in superfluid films and in two-dimensional crystals. The Thouless-Kosterlitz iterated mean field theory of the dipole gas. The renormalization group treatment of the XY model, superfluid films and two-dimensional crystal. (author)

  11. Two-dimensional multifractal cross-correlation analysis

    International Nuclear Information System (INIS)

    Xi, Caiping; Zhang, Shuning; Xiong, Gang; Zhao, Huichang; Yang, Yonghong

    2017-01-01

    Highlights: • We study the mathematical models of 2D-MFXPF, 2D-MFXDFA and 2D-MFXDMA. • We present the definition of the two-dimensional N²-partitioned multiplicative cascading process. • We perform a comparative analysis of 2D-MC by 2D-MFXPF, 2D-MFXDFA and 2D-MFXDMA. • We provide a reference on the choice and parameter settings of these methods in practice. - Abstract: There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross-correlations. This paper presents two-dimensional multifractal cross-correlation analysis based on the partition function (2D-MFXPF), two-dimensional multifractal cross-correlation analysis based on detrended fluctuation analysis (2D-MFXDFA) and two-dimensional multifractal cross-correlation analysis based on detrended moving average analysis (2D-MFXDMA). We apply these methods to pairs of two-dimensional multiplicative cascades (2D-MC) for a comparative study. Then, we apply the two-dimensional multifractal cross-correlation analysis based on detrended fluctuation analysis (2D-MFXDFA) to real images and unveil intriguing multifractality in the cross-correlations of the material structures. Finally, we give the main conclusions and provide a valuable reference on how to choose among the multifractal algorithms in potential applications in the field of SAR image classification and detection.

  12. Two-Dimensional Materials for Sensing: Graphene and Beyond

    Directory of Open Access Journals (Sweden)

    Seba Sara Varghese

    2015-09-01

    Two-dimensional materials have attracted great scientific attention due to their unusual and fascinating properties for use in electronics, spintronics, photovoltaics, medicine, composites, etc. Graphene, transition metal dichalcogenides such as MoS2, phosphorene, etc., which belong to the family of two-dimensional materials, have shown great promise for gas sensing applications due to their high surface-to-volume ratio, low noise and sensitivity of electronic properties to the changes in the surroundings. Two-dimensional nanostructured semiconducting metal oxide based gas sensors have also been recognized as successful gas detection devices. This review aims to provide the latest advancements in the field of gas sensors based on various two-dimensional materials with the main focus on sensor performance metrics such as sensitivity, specificity, detection limit, response time, and reversibility. Both experimental and theoretical studies on the gas sensing properties of graphene and other two-dimensional materials beyond graphene are also discussed. The article concludes with the current challenges and future prospects for two-dimensional materials in gas sensor applications.

  13. Two-dimensional atom localization via probe absorption in a four-level atomic system

    International Nuclear Information System (INIS)

    Wang Zhi-Ping; Ge Qiang; Ruan Yu-Hua; Yu Ben-Li

    2013-01-01

    We have investigated two-dimensional (2D) atom localization via probe absorption in a coherently driven four-level atomic system by means of a radio-frequency field driving a hyperfine transition. It is found that the detection probability and precision of 2D atom localization can be significantly improved by adjusting the system parameters. As a result, our scheme may be helpful in laser cooling or atom nano-lithography via atom localization.

  14. Bounds on the Capacity of Weakly constrained two-dimensional Codes

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2002-01-01

    Upper and lower bounds are presented for the capacity of weakly constrained two-dimensional codes. The maximum entropy is calculated for two simple models of 2-D codes constraining the probability of neighboring 1s as an example. For given models of the coded data, upper and lower bounds on the capacity for 2-D channel models based on occurrences of neighboring 1s are considered.
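
For the hard-constraint limiting case of such codes (no two horizontally or vertically adjacent 1s, the hard-square constraint), strip capacities computable from a row-to-row transfer matrix bound the 2D capacity from above. This sketch shows that standard construction; it is an illustration, not the weak-constraint models of the record:

```python
import numpy as np
from itertools import product

def strip_capacity(width):
    """Per-symbol entropy (bits) of width-`width` strips under the 2D
    'no two adjacent 1s' constraint, from the largest eigenvalue of the
    row-to-row transfer matrix."""
    rows = [r for r in product((0, 1), repeat=width)
            if all(not (a and b) for a, b in zip(r, r[1:]))]   # valid rows
    T = np.array([[all(not (a and b) for a, b in zip(r, s)) for s in rows]
                  for r in rows], dtype=float)                 # stackable pairs
    lam = np.max(np.linalg.eigvals(T).real)
    return np.log2(lam) / width

for w in (4, 8, 12):
    print(w, round(strip_capacity(w), 4))
```

The printed values decrease with strip width toward the known hard-square capacity of about 0.5879 bits per symbol.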

  15. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    Energy Technology Data Exchange (ETDEWEB)

    Cieplak, Agnieszka M.; Slosar, Anže, E-mail: acieplak@bnl.gov, E-mail: anze@bnl.gov [Brookhaven National Laboratory, Bldg 510, Upton, NY, 11973 (United States)

    2017-10-01

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n -th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. We find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.
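
The moment-based expansion described here can be sketched directly. The Beta-distributed toy "flux" sample and the truncation order below are assumptions for illustration, not the authors' data or analysis pipeline:

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)
flux = rng.beta(2.0, 5.0, size=200_000)   # toy stand-in for flux values in [0, 1]
x = 2.0 * flux - 1.0                      # map onto the Legendre domain [-1, 1]

nmax = 8
# c_n = (2n+1)/2 * <P_n(x)>; each <P_n(x)> is a linear combination of the
# first n moments of x, so noisy moments enter the coefficients linearly.
coeffs = [(2 * n + 1) / 2.0 * L.legval(x, np.eye(nmax + 1)[n]).mean()
          for n in range(nmax + 1)]

# Truncated reconstruction of the PDF and a normalization check.
grid = np.linspace(-1.0, 1.0, 201)
pdf = L.legval(grid, coeffs)
norm = ((pdf[:-1] + pdf[1:]) / 2.0).sum() * (grid[1] - grid[0])
print(round(float(norm), 3))  # → 1.0: the c_0 term alone carries the normalization
```

Because the n-th coefficient involves only the first n moments, additive pixel noise propagates into the coefficients linearly, which is the advantage the record emphasizes.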

  16. Effects of translation-rotation coupling on the displacement probability distribution functions of boomerang colloidal particles

    Science.gov (United States)

    Chakrabarty, Ayan; Wang, Feng; Sun, Kai; Wei, Qi-Huo

    Prior studies have shown that low-symmetry particles such as micro-boomerangs exhibit Brownian motion rather different from that of high-symmetry particles, because convenient tracking points (TPs) are usually inconsistent with the center of hydrodynamic stress (CoH) where the translational and rotational motions are decoupled. In this paper we study the effects of the translation-rotation coupling on the displacement probability distribution functions (PDFs) of boomerang colloidal particles with symmetric arms. By tracking the motions of different points on the particle symmetry axis, we show that as the distance between the TP and the CoH is increased, the effects of translation-rotation coupling become pronounced, making the short-time 2D PDF for fixed initial orientation change from an elliptical to a crescent shape, and the angle-averaged PDFs change from an ellipsoidal-particle-like PDF to a shape with a Gaussian top and long displacement tails. We also observed that at long times the PDFs revert to Gaussian. This crescent shape of the 2D PDF provides a clear physical picture of the non-zero mean displacements observed in boomerang particles.

  17. Characterizing the Lyman-alpha forest flux probability distribution function using Legendre polynomials

    Science.gov (United States)

    Cieplak, Agnieszka; Slosar, Anze

    2018-01-01

    The Lyman-alpha forest has become a powerful cosmological probe at intermediate redshift. It is a highly non-linear field with much information present beyond the power spectrum. The flux probability distribution function (PDF) in particular has been a successful probe of small-scale physics. However, it is also sensitive to pixel noise, spectrum resolution, and continuum fitting, all of which lead to possibly biased estimators. Here we argue that measuring the coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. Since the n-th Legendre coefficient can be expressed as a linear combination of the first n moments of the field, this allows for the coefficients to be measured in the presence of noise and allows for a clear route towards marginalization over the mean flux. Additionally, in the presence of noise, a finite number of these coefficients are well measured with a very sharp transition into noise dominance. This compresses the information into a small number of well-measured quantities. Finally, we find that measuring fewer quasars with high signal-to-noise produces a greater amount of recoverable information.

  18. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    Science.gov (United States)

    Cieplak, Agnieszka M.; Slosar, Anže

    2017-10-01

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. We find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.

  19. Evaluating the suitability of wind speed probability distribution models: A case of study of east and southeast parts of Iran

    International Nuclear Information System (INIS)

    Alavi, Omid; Mohammadi, Kasra; Mostafaeipour, Ali

    2016-01-01

    Highlights: • Suitability of different wind speed probability functions is assessed. • 5 stations distributed in the east and south-east of Iran are considered as case studies. • The Nakagami distribution is tested for the first time and compared with 7 other functions. • Due to differences in wind features, the best function is not the same for all stations. - Abstract: Precise information on the wind speed probability distribution is truly significant for many wind energy applications. The objective of this study is to evaluate the suitability of different probability functions for estimating wind speed distributions at five stations distributed in the east and southeast of Iran. The Nakagami distribution function is utilized for the first time to estimate the distribution of wind speed. The performance of the Nakagami function is compared with seven typically used distribution functions. The achieved results reveal that the most effective function is not the same for all stations. Wind speed characteristics, and the quantity and quality of the recorded wind speed data, can be considered influential parameters for the performance of the distribution functions. Also, the skewness of the recorded wind speed data may influence the accuracy of the Nakagami distribution. For the Chabahar and Khaf stations the Nakagami distribution shows the highest performance, while for the Lutak, Rafsanjan and Zabol stations the Gamma, Generalized Extreme Value and Inverse-Gaussian distributions offer the best fits, respectively. Based on the analysis, the Nakagami distribution can generally be considered an effective distribution since it provides the best fits in 2 stations and ranks 3rd to 5th in the remaining stations; however, due to the close performance of the Nakagami and Weibull distributions and also the flexibility of the Weibull function as its widely proven feature, more assessments of the performance of the Nakagami distribution are required.
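
The kind of goodness-of-fit comparison described above can be sketched with scipy.stats, which provides the Nakagami, Weibull and Gamma families. The synthetic wind-speed sample and the log-likelihood criterion below are illustrative assumptions, not the study's data or ranking metric:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for an hourly wind-speed record (m/s); a real study
# would load measured station data instead.
rng = np.random.default_rng(1)
speeds = stats.weibull_min.rvs(2.0, scale=6.0, size=5000, random_state=rng)

candidates = {
    "Weibull":  stats.weibull_min,
    "Nakagami": stats.nakagami,
    "Gamma":    stats.gamma,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(speeds, floc=0)              # pin the location at zero speed
    results[name] = float(np.sum(dist.logpdf(speeds, *params)))
    print(f"{name:9s} log-likelihood = {results[name]:.1f}")
```

Ranking fits by maximized log-likelihood is one common choice; the study could equally use R², RMSE or a Kolmogorov-Smirnov statistic.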

  20. Numerical simulation of aerodynamic sound radiated from a two-dimensional airfoil

    OpenAIRE

    Iida, Akiyoshi; Otaguro, Toshio; Kato, Chisachi (Mechanical Engineering Research Laboratory, Hitachi Ltd.; University of Tokyo)

    2000-01-01

    An aerodynamic sound radiated from a two-dimensional airfoil has been computed with Lighthill-Curle's theory. The predicted sound pressure level is in agreement with the measured one. The distribution of vortex sound sources is also estimated based on the correlation between the unsteady vorticity fluctuations and the aerodynamic sound. The distribution of vortex sound sources reveals that separated shear layers generate aerodynamic sound. This result helps in understanding noise reduction methods.

  1. Two-dimensional model of laser alloying of binary alloy powder with interval of melting temperature

    Science.gov (United States)

    Knyzeva, A. G.; Sharkeev, Yu. P.

    2017-10-01

    The paper contains a two-dimensional model of laser-beam melting of powders of a binary alloy. The model takes into consideration the melting of the alloy in a temperature interval between the solidus and liquidus temperatures. The external source corresponds to a laser beam with an energy density distributed by a Gaussian law. The source moves along the treated surface according to a given trajectory. The model allows investigating the temperature distribution and the thickness of the powder layer depending on the technological parameters.

  2. Estimating the hydraulic conductivity of two-dimensional fracture networks

    Science.gov (United States)

    Leung, C. T.; Zimmerman, R. W.

    2010-12-01

    Most oil and gas reservoirs, as well as most potential sites for nuclear waste disposal, are naturally fractured. In these sites, the network of fractures will provide the main path for fluid to flow through the rock mass. In many cases, the fracture density is so high as to make it impractical to model it with a discrete fracture network (DFN) approach. For such rock masses, it would be useful to have recourse to analytical, or semi-analytical, methods to estimate the macroscopic hydraulic conductivity of the fracture network. We have investigated single-phase fluid flow through stochastically generated two-dimensional fracture networks. The centres and orientations of the fractures are uniformly distributed, whereas their lengths follow either a lognormal distribution or a power law distribution. We have considered the case where the fractures in the network each have the same aperture, as well as the case where the aperture of each fracture is directly proportional to the fracture length. The discrete fracture network flow and transport simulator NAPSAC, developed by Serco (Didcot, UK), is used to establish the “true” macroscopic hydraulic conductivity of the network. We then attempt to match this conductivity using a simple estimation method that does not require extensive computation. For our calculations, fracture networks are represented as networks composed of conducting segments (bonds) between nodes. Each bond represents the region of a single fracture between two adjacent intersections with other fractures. We assume that the bonds are arranged on a kagome lattice, with some fraction of the bonds randomly missing. The conductance of each bond is then replaced with some effective conductance, Ceff, which we take to be the arithmetic mean of the individual conductances, averaged over each bond, rather than over each fracture. This is in contrast to the usual approximation used in effective medium theories, wherein the geometric mean is used. Our
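
The choice highlighted at the end of this record, arithmetic versus geometric mean of the bond conductances, can be illustrated with a one-parameter example; the lognormal conductance sample below is an assumption for illustration only:

```python
import numpy as np

# Lognormally distributed bond conductances, as arise when fracture
# apertures vary over orders of magnitude (illustrative parameters).
rng = np.random.default_rng(2)
c = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

arithmetic = c.mean()                    # averaging choice used in the study
geometric = np.exp(np.log(c).mean())     # usual effective-medium choice

ratio = arithmetic / geometric
# For a lognormal the ratio tends to exp(sigma**2 / 2) ≈ 1.65, so the two
# choices of effective bond conductance differ appreciably.
print(round(float(ratio), 2))
```

The wider the conductance distribution, the larger this gap, which is why the choice of mean matters for matching the network conductivity.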

  3. EDF: Computing electron number probability distribution functions in real space from molecular wave functions

    Science.gov (United States)

    Francisco, E.; Pendás, A. Martín; Blanco, M. A.

    2008-04-01

    Given an N-electron molecule and an exhaustive partition of the real space (R³) into m arbitrary regions Ω₁, Ω₂, …, Ωₘ (Ω₁ ∪ Ω₂ ∪ … ∪ Ωₘ = R³), the edf program computes all the probabilities P(n₁, n₂, …, nₘ) of having exactly n₁ electrons in Ω₁, n₂ electrons in Ω₂, …, and nₘ electrons (n₁ + n₂ + ⋯ + nₘ = N) in Ωₘ. Each Ωᵢ may correspond to a single basin (atomic domain) or several such basins (functional group). In the latter case, each atomic domain must belong to a single Ωᵢ. The program can manage both single- and multi-determinant wave functions which are read in from an aimpac-like wave function description (.wfn) file (T.A. Keith et al., The AIMPAC95 programs, http://www.chemistry.mcmaster.ca/aimpac, 1995). For multi-determinantal wave functions a generalization of the original .wfn file has been introduced. The new format is completely backwards compatible, adding to the previous structure a description of the configuration interaction (CI) coefficients and the determinants of correlated wave functions. Besides the .wfn file, edf only needs the overlap integrals over all the atomic domains between the molecular orbitals (MO). After the P(n₁, n₂, …, nₘ) probabilities are computed, edf obtains from them several magnitudes relevant to chemical bonding theory, such as average electronic populations and localization/delocalization indices. Regarding spin, edf may be used in two ways: with or without a splitting of the P(n₁, n₂, …, nₘ) probabilities into α and β spin components. Program summary. Program title: edf. Catalogue identifier: AEAJ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAJ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 5387. No. of bytes in distributed program, including test data, etc.: 52 381. Distribution format: tar.gz. Programming language: Fortran 77. Computer
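
As a toy illustration of the probabilities P(n₁, …, nₘ) that edf computes, the sketch below evaluates them for an independent-electron (multinomial) model. Real edf probabilities are built from the molecular wave function and domain overlap integrals, so the independence assumption here is purely illustrative:

```python
from math import comb, prod
from itertools import product

def electron_count_probabilities(p, N):
    """Probabilities P(n1, ..., nm) of finding exactly n_i electrons in
    region Omega_i, for N *independent* electrons that each occupy
    Omega_i with probability p[i] (sum(p) == 1).  Illustrative only:
    edf derives correlated probabilities from the wave function."""
    m = len(p)
    probs = {}
    for ns in product(range(N + 1), repeat=m):
        if sum(ns) != N:
            continue
        # multinomial coefficient N! / (n1! ... nm!)
        coeff, remaining = 1, N
        for n in ns:
            coeff *= comb(remaining, n)
            remaining -= n
        probs[ns] = coeff * prod(pi ** n for pi, n in zip(p, ns))
    return probs

# Two domains of an H2-like two-electron system with equal populations.
P = electron_count_probabilities([0.5, 0.5], 2)
print(P)  # → {(0, 2): 0.25, (1, 1): 0.5, (2, 0): 0.25}
```

Electron correlation shifts weight between these outcomes, and localization/delocalization indices are then moments of this distribution.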

  4. Traditional Semiconductors in the Two-Dimensional Limit.

    Science.gov (United States)

    Lucking, Michael C; Xie, Weiyu; Choe, Duk-Hyun; West, Damien; Lu, Toh-Ming; Zhang, S B

    2018-02-23

    Interest in two-dimensional materials has exploded in recent years. Not only are they studied due to their novel electronic properties, such as the emergent Dirac fermion in graphene, but also as a new paradigm in which stacking layers of distinct two-dimensional materials may enable different functionality or devices. Here, through first-principles theory, we reveal a large new class of two-dimensional materials which are derived from traditional III-V, II-VI, and I-VII semiconductors. It is found that in the ultrathin limit the great majority of traditional binary semiconductors studied (a series of 28 semiconductors) are not only kinetically stable in a two-dimensional double layer honeycomb structure, but more energetically stable than the truncated wurtzite or zinc-blende structures associated with three dimensional bulk. These findings both greatly increase the landscape of two-dimensional materials and also demonstrate that in the double layer honeycomb form, even ordinary semiconductors, such as GaAs, can exhibit exotic topological properties.

  5. Two-dimensional analytic weighting functions for limb scattering

    Science.gov (United States)

    Zawada, D. J.; Bourassa, A. E.; Degenstein, D. A.

    2017-10-01

    Through the inversion of limb scatter measurements it is possible to obtain vertical profiles of trace species in the atmosphere. Many of these inversion methods require what is often referred to as weighting functions, or derivatives of the radiance with respect to concentrations of trace species in the atmosphere. Several radiative transfer models have implemented analytic methods to calculate weighting functions, alleviating the computational burden of traditional numerical perturbation methods. Here we describe the implementation of analytic two-dimensional weighting functions, where derivatives are calculated relative to atmospheric constituents in a two-dimensional grid of altitude and angle along the line of sight direction, in the SASKTRAN-HR radiative transfer model. Two-dimensional weighting functions are required for two-dimensional inversions of limb scatter measurements. Examples are presented where the analytic two-dimensional weighting functions are calculated with an underlying one-dimensional atmosphere. It is shown that the analytic weighting functions are more accurate than ones calculated with a single scatter approximation, and are orders of magnitude faster than a typical perturbation method. Evidence is presented that weighting functions for stratospheric aerosols calculated under a single scatter approximation may not be suitable for use in retrieval algorithms under solar backscatter conditions.

  6. Two-dimensional topological field theories coupled to four-dimensional BF theory

    International Nuclear Information System (INIS)

    Montesinos, Merced; Perez, Alejandro

    2008-01-01

    Four-dimensional BF theory admits a natural coupling to extended sources supported on two-dimensional surfaces or string world sheets. Solutions of the theory are in one-to-one correspondence with solutions of Einstein's equations with distributional matter (cosmic strings). We study new (topological field) theories that can be constructed by adding extra degrees of freedom to the two-dimensional world sheet. We show how two-dimensional Yang-Mills degrees of freedom can be added on the world sheet, producing in this way an interactive (topological) theory of Yang-Mills fields with BF fields in four dimensions. We also show how a world sheet tetrad can be naturally added. As in the previous case, the set of solutions of these theories is contained in the set of solutions of Einstein's equations if one allows distributional matter supported on two-dimensional surfaces. These theories are argued to be exactly quantizable. In the context of quantum gravity, one important motivation to study these models is to explore the possibility of constructing a background-independent quantum field theory where local degrees of freedom at low energies arise from global topological (world sheet) degrees of freedom at the fundamental level.

  7. Dressed-state analysis of efficient two-dimensional atom localization in a four-level atomic system

    International Nuclear Information System (INIS)

    Wang, Zhiping; Yu, Benli

    2014-01-01

    We investigate two-dimensional atom localization via spontaneous emission in a four-level atomic system. It is found that the detection probability and precision of two-dimensional atom localization can be significantly improved due to the interference effect between the spontaneous decay channels and the dynamically induced quantum interference generated by the probe and composite fields. More importantly, a 100% probability of finding an atom within the sub-half-wavelength domain of the standing waves can be reached when the corresponding conditions are satisfied. As a result, our scheme may be helpful in laser cooling or atom nano-lithography via atom localization. (paper)

  8. Two-dimensional model of a freely expanding plasma

    International Nuclear Information System (INIS)

    Khalid, Q.

    1975-01-01

    The free expansion of an initially confined plasma is studied by the computer experiment technique. The research is an extension to two dimensions of earlier work on the free expansion of a collisionless plasma in one dimension. In the two-dimensional rod model, developed in this research, the plasma particles, electrons and ions are modeled as infinitely long line charges or rods. The line charges move freely in two dimensions normal to their parallel axes, subject only to a self-consistent electric field. Two approximations, the grid approximation and the periodic boundary condition are made in order to reduce the computation time. In the grid approximation, the space occupied by the plasma at a given time is divided into boxes. The particles are subject to an average electric field calculated for that box assuming that the total charge within each box is located at the center of the box. However, the motion of each particle is exactly followed. The periodic boundary condition allows us to consider only one-fourth of the total number of particles of the plasma, representing the remaining three-fourths of the particles as symmetrically placed images of those whose positions are calculated. This approximation follows from the expected azimuthal symmetry of the plasma. The dynamics of the expansion are analyzed in terms of average ion and electron positions, average velocities, oscillation frequencies and relative distribution of energy between thermal, flow and electric field energies. Comparison is made with previous calculations of one-dimensional models which employed plane, spherical or cylindrical sheets as charged particles. In order to analyze the effect of the grid approximation, the model is solved for two different grid sizes and for each grid size the plasma dynamics is determined. For the initial phase of expansion, the agreement for the two grid sizes is found to be good

  9. Green's function for a generalized two-dimensional fluid.

    Science.gov (United States)

    Iwayama, Takahiro; Watanabe, Takeshi

    2010-09-01

A Green's function for a generalized two-dimensional (2D) fluid in an unbounded domain (the so-called α turbulence system) is discussed. The generalized 2D fluid is characterized by a relationship between an advected quantity q and the stream function ψ: namely, q = -(-Δ)^{α/2}ψ. Here, α is a real number and q is referred to as the vorticity. In this study, the Green's function refers to the stream function produced by a delta-functional distribution of q, i.e., a point vortex with unit strength. The Green's function has the form G^{(α)}(r) ∝ r^{α-2}, except when α is an even number, where r is the distance from the point vortex. This functional form is known as the Riesz potential. When α is a positive even number, the logarithmic correction to the Riesz potential has the form G^{(α)}(r) ∝ r^{α-2} ln r. In contrast, when α is a negative even number, G^{(α)} is given by a higher-order Laplacian of the delta function. The transition of the small-scale behavior of q at α = 2, a well-known property of forced and dissipative α turbulence, is explained in terms of the Green's function. Moreover, the azimuthal velocity around the point vortex is derived from the Green's function. The functional form of the azimuthal velocity indicates that physically realizable systems for the generalized 2D fluid exist only when α ≤ 3. The Green's function and physically realizable systems for an anisotropic generalized 2D fluid are presented as an application of the present study.
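The power-law (Riesz-potential) form of the Green's function and the resulting azimuthal velocity are easy to check numerically. The sketch below drops all constants, assumes non-even α, and differentiates G ∝ r^{α-2} by a central difference; the function names are illustrative, not from the paper.

```python
def green_riesz(r, alpha):
    """Riesz-potential form of the Green's function, G(r) ~ r**(alpha - 2),
    up to a positive constant (valid for non-even alpha)."""
    return r**(alpha - 2)

def azimuthal_velocity(r, alpha, dr=1e-6):
    """Azimuthal velocity around the point vortex, v = dG/dr,
    by a central difference."""
    return (green_riesz(r + dr, alpha) - green_riesz(r - dr, alpha)) / (2 * dr)

# dG/dr ~ r**(alpha - 3): at alpha = 3 the azimuthal velocity is independent
# of r, and for alpha > 3 it would grow with distance from the vortex, which
# is why alpha <= 3 bounds the physically realizable systems.
```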

  10. Dynamical class of a two-dimensional plasmonic Dirac system.

    Science.gov (United States)

    Silva, Érica de Mello

    2015-10-01

A current goal in plasmonic science and technology is to figure out how to manage the relaxational dynamics of surface plasmons in graphene, since their damping constitutes a hindrance to the realization of graphene-based plasmonic devices. In this sense, we believe it might be of interest to broaden the knowledge of the dynamical class of two-dimensional plasmonic Dirac systems. According to the recurrence relations method, different systems are said to be dynamically equivalent if they have identical relaxation functions at all times, and such commonality may lead to deep connections between seemingly unrelated physical systems. We employ the recurrence relations approach to obtain relaxation and memory functions of density fluctuations and show that a two-dimensional plasmonic Dirac system at long wavelength and zero temperature belongs to the same dynamical class as the standard two-dimensional electron gas and the classical harmonic oscillator chain with an impurity mass.

  11. Hamiltonian formalism of two-dimensional Vlasov kinetic equation.

    Science.gov (United States)

    Pavlov, Maxim V

    2014-12-08

    In this paper, the two-dimensional Benney system describing long wave propagation of a finite depth fluid motion and the multi-dimensional Russo-Smereka kinetic equation describing a bubbly flow are considered. The Hamiltonian approach established by J. Gibbons for the one-dimensional Vlasov kinetic equation is extended to a multi-dimensional case. A local Hamiltonian structure associated with the hydrodynamic lattice of moments derived by D. J. Benney is constructed. A relationship between this hydrodynamic lattice of moments and the two-dimensional Vlasov kinetic equation is found. In the two-dimensional case, a Hamiltonian hydrodynamic lattice for the Russo-Smereka kinetic model is constructed. Simple hydrodynamic reductions are presented.

  12. Control Operator for the Two-Dimensional Energized Wave Equation

    Directory of Open Access Journals (Sweden)

    Sunday Augustus REJU

    2006-07-01

Full Text Available This paper studies the analytical model for the construction of the two-dimensional energized wave equation. The control operator is given in terms of the space and time independent variables. The integral quadratic objective cost functional is subject to the constraint of two-dimensional energized diffusion, heat and a source. The operator obtained extends the Conjugate Gradient method (ECGM) as developed by Hestenes et al. (1952) [1]. The new operator enables the computation of the penalty cost, optimal controls and state trajectories of the two-dimensional energized wave equation when applied to the Conjugate Gradient methods in (Waziri & Reju, LEJPT & LJS, Issues 9, 2006) [2-4], to appear in this series.
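The ECGM operator extends the classical conjugate gradient iteration of Hestenes and Stiefel. A minimal sketch of that underlying method for a symmetric positive-definite system Ax = b (not the energized-wave control operator itself) is:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Classical Hestenes-Stiefel conjugate gradient for SPD matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x              # residual
    p = r.copy()               # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)  # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # A-conjugate direction update
        rs = rs_new
    return x
```

In exact arithmetic the iteration terminates in at most dim(b) steps, which is why CG-type operators are attractive for discretized optimal-control problems such as the one above.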

  13. Velocity and Dispersion for a Two-Dimensional Random Walk

    International Nuclear Information System (INIS)

    Li Jinghui

    2009-01-01

In this paper, we consider the transport of a two-dimensional random walk. The velocity and the dispersion of this two-dimensional random walk are derived. It mainly shows that: (i) by controlling the values of the transition rates, the direction of the random walk can be reversed; (ii) for some suitably selected transition rates, our two-dimensional random walk can be efficient in comparison with the one-dimensional random walk. Our work is motivated in part by the challenge to explain the unidirectional transport of motor proteins. When the motor proteins move at the turn points of their tracks (i.e., the cytoskeleton filaments and the DNA molecular tubes), some of our results in this paper can be used to deal with the problem. (general)
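The rate-controlled reversal of the drift direction (point (i)) can be illustrated with a simple lattice simulation; the rates, ensemble sizes, and helper name below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def walk_stats(rates, n_steps=1000, n_walkers=2000):
    """Biased 2D lattice walk with jump probabilities proportional to the
    transition rates (right, left, up, down). Returns the drift velocity
    and the dispersion per step for each coordinate."""
    p = np.asarray(rates, dtype=float)
    p /= p.sum()
    steps = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    choices = rng.choice(4, size=(n_walkers, n_steps), p=p)
    disp = steps[choices].sum(axis=1)        # net displacement per walker
    v = disp.mean(axis=0) / n_steps          # drift velocity (lattice units/step)
    D = disp.var(axis=0) / (2 * n_steps)     # dispersion coefficient
    return v, D

# Swapping the right/left rates reverses the x drift:
v1, D1 = walk_stats([2, 1, 1, 1])
v2, D2 = walk_stats([1, 2, 1, 1])
```

With rates (2, 1, 1, 1) the x drift is positive; exchanging the first two rates flips its sign while leaving the dispersion essentially unchanged.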

  14. Joint Bayesian Estimation of Quasar Continua and the Lyα Forest Flux Probability Distribution Function

    Science.gov (United States)

    Eilers, Anna-Christina; Hennawi, Joseph F.; Lee, Khee-Gan

    2017-08-01

    We present a new Bayesian algorithm making use of Markov Chain Monte Carlo sampling that allows us to simultaneously estimate the unknown continuum level of each quasar in an ensemble of high-resolution spectra, as well as their common probability distribution function (PDF) for the transmitted Lyα forest flux. This fully automated PDF regulated continuum fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. The method allows one to estimate parameters governing the thermal state of the intergalactic medium (IGM), such as the slope of the temperature-density relation γ -1, while marginalizing out continuum uncertainties in a fully Bayesian way. Using realistic mock quasar spectra created from a simplified semi-numerical model of the IGM, we show that this method recovers the underlying quasar continua to a precision of ≃ 7 % and ≃ 10 % at z = 3 and z = 5, respectively. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Most importantly, we show that we can achieve a nearly unbiased estimate of the slope γ -1 of the IGM temperature-density relation with a precision of +/- 8.6 % at z = 3 and +/- 6.1 % at z = 5, for an ensemble of ten mock high-resolution quasar spectra. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.

  15. Characterisation of seasonal flood types according to timescales in mixed probability distributions

    Science.gov (United States)

    Fischer, Svenja; Schumann, Andreas; Schulte, Markus

    2016-08-01

    When flood statistics are based on annual maximum series (AMS), the sample often contains flood peaks, which differ in their genesis. If the ratios among event types change over the range of observations, the extrapolation of a probability distribution function (pdf) can be dominated by a majority of events that belong to a certain flood type. If this type is not typical for extraordinarily large extremes, such an extrapolation of the pdf is misleading. To avoid this breach of the assumption of homogeneity, seasonal models were developed that differ between winter and summer floods. We show that a distinction between summer and winter floods is not always sufficient if seasonal series include events with different geneses. Here, we differentiate floods by their timescales into groups of long and short events. A statistical method for such a distinction of events is presented. To demonstrate their applicability, timescales for winter and summer floods in a German river basin were estimated. It is shown that summer floods can be separated into two main groups, but in our study region, the sample of winter floods consists of at least three different flood types. The pdfs of the two groups of summer floods are combined via a new mixing model. This model considers that information about parallel events that uses their maximum values only is incomplete because some of the realisations are overlaid. A statistical method resulting in an amendment of statistical parameters is proposed. The application in a German case study demonstrates the advantages of the new model, with specific emphasis on flood types.
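The classical way to combine flood-type distributions, which the paper's amended-parameter mixing model refines, is to treat the annual maximum as the largest of independent per-type maxima, so the type CDFs multiply. A hedged sketch with Gumbel margins and invented parameters:

```python
import math

def gumbel_cdf(x, mu, beta):
    """Gumbel (EV1) distribution function, a common flood-frequency model."""
    return math.exp(-math.exp(-(x - mu) / beta))

def annual_max_cdf(x, type_params):
    """Classical mixing of flood types: if the annual maximum is the largest
    of independent per-type maxima, then P(AMS <= x) = prod_i F_i(x)."""
    cdf = 1.0
    for mu, beta in type_params:
        cdf *= gumbel_cdf(x, mu, beta)
    return cdf

# Illustrative parameters for two summer flood types and one winter type:
types = [(100.0, 20.0), (80.0, 35.0), (120.0, 15.0)]
p = annual_max_cdf(150.0, types)   # P(annual peak <= 150)
```

The product form assumes independence and uses only the per-type maxima; the paper's point is precisely that this is incomplete when events overlay each other, which motivates its amendment of the statistical parameters.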

  16. The OMPS Limb Profiler Instrument: Two-Dimensional Retrieval Algorithm

    Science.gov (United States)

    Rault, Didier F.

    2010-01-01

    The upcoming Ozone Mapper and Profiler Suite (OMPS), which will be launched on the NPOESS Preparatory Project (NPP) platform in early 2011, will continue monitoring the global distribution of the Earth's middle atmosphere ozone and aerosol. OMPS is composed of three instruments, namely the Total Column Mapper (heritage: TOMS, OMI), the Nadir Profiler (heritage: SBUV) and the Limb Profiler (heritage: SOLSE/LORE, OSIRIS, SCIAMACHY, SAGE III). The ultimate goal of the mission is to better understand and quantify the rate of stratospheric ozone recovery. The focus of the paper will be on the Limb Profiler (LP) instrument. The LP instrument will measure the Earth's limb radiance (which is due to the scattering of solar photons by air molecules, aerosol and Earth surface) in the ultra-violet (UV), visible and near infrared, from 285 to 1000 nm. The LP simultaneously images the whole vertical extent of the Earth's limb through three vertical slits, each covering a vertical tangent height range of 100 km and each horizontally spaced by 250 km in the cross-track direction. Measurements are made every 19 seconds along the orbit track, which corresponds to a distance of about 150km. Several data analysis tools are presently being constructed and tested to retrieve ozone and aerosol vertical distribution from limb radiance measurements. The primary NASA algorithm is based on earlier algorithms developed for the SOLSE/LORE and SAGE III limb scatter missions. All the existing retrieval algorithms rely on a spherical symmetry assumption for the atmosphere structure. While this assumption is reasonable in most of the stratosphere, it is no longer valid in regions of prime scientific interest, such as polar vortex and UTLS regions. The paper will describe a two-dimensional retrieval algorithm whereby the ozone distribution is simultaneously retrieved vertically and horizontally for a whole orbit. 
The retrieval code relies on (1) a forward 2D Radiative Transfer code (to model limb

  17. The probability distribution of maintenance cost of a system affected by the gamma process of degradation: Finite time solution

    International Nuclear Information System (INIS)

    Cheng, Tianjin; Pandey, Mahesh D.; Weide, J.A.M. van der

    2012-01-01

The stochastic gamma process has been widely used to model uncertain degradation in engineering systems and structures. The optimization of the condition-based maintenance (CBM) policy is typically based on the minimization of the asymptotic cost rate. In the financial planning of a maintenance program, however, a more accurate prediction interval for the cost is needed for prudent decision making. The prediction interval cannot be estimated unless the probability distribution of cost is known. In this context, the asymptotic cost rate has limited utility. This paper presents the derivation of the probability distribution of maintenance cost when the system degradation is modelled as a stochastic gamma process. A renewal equation is formulated to derive the characteristic function; the discrete Fourier transform of the characteristic function then leads to the complete probability distribution of cost in a finite-time setting. The proposed approach is useful for the precise estimation of prediction limits and optimization of the maintenance cost.
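The inversion step, recovering a distribution from its characteristic function via a discrete Fourier transform, can be sketched on a toy lattice cost model: each renewal costs a fixed amount c and the number of renewals is Poisson. This is an illustration of the transform mechanics only, not the paper's gamma-process renewal model.

```python
import numpy as np
from math import exp, factorial

# Toy model: total cost = c * N with N ~ Poisson(lam). The characteristic
# function phi(t) = E[exp(i*t*cost)], sampled at t_k = 2*pi*k/(n*c), is
# inverted by a DFT to give the cost distribution on the lattice {0, c, 2c, ...}.
lam, c, n = 3.0, 10.0, 64
k = np.arange(n)
t = 2 * np.pi * k / (n * c)
phi = np.exp(lam * (np.exp(1j * c * t) - 1.0))   # Poisson compound char. fn.
pmf = np.real(np.fft.fft(phi)) / n               # P(cost = j*c), j = 0..n-1

# Cross-check against the Poisson pmf the model was built from:
expected = [exp(-lam) * lam**j / factorial(j) for j in range(10)]
```

The lattice size n only needs to be large enough that the wrapped-around tail probability is negligible; the same DFT mechanics underlie the finite-time cost distribution in the paper.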

  18. Two-dimensional nonlinear equations of supersymmetric gauge theories

    International Nuclear Information System (INIS)

    Savel'ev, M.V.

    1985-01-01

    Supersymmetric generalization of two-dimensional nonlinear dynamical equations of gauge theories is presented. The nontrivial dynamics of a physical system in the supersymmetry and supergravity theories for (2+2)-dimensions is described by the integrable embeddings of Vsub(2/2) superspace into the flat enveloping superspace Rsub(N/M), supplied with the structure of a Lie superalgebra. An equation is derived which describes a supersymmetric generalization of the two-dimensional Toda lattice. It contains both super-Liouville and Sinh-Gordon equations

  19. Pair Interaction of Dislocations in Two-Dimensional Crystals

    Science.gov (United States)

    Eisenmann, C.; Gasser, U.; Keim, P.; Maret, G.; von Grünberg, H. H.

    2005-10-01

    The pair interaction between crystal dislocations is systematically explored by analyzing particle trajectories of two-dimensional colloidal crystals measured by video microscopy. The resulting pair energies are compared to Monte Carlo data and to predictions derived from the standard Hamiltonian of the elastic theory of dislocations. Good agreement is found with respect to the distance and temperature dependence of the interaction potential, but not regarding the angle dependence where discrete lattice effects become important. Our results on the whole confirm that the dislocation Hamiltonian allows a quantitative understanding of the formation and interaction energies of dislocations in two-dimensional crystals.

  20. Two dimensional nonlinear spectral estimation techniques for breast cancer localization

    International Nuclear Information System (INIS)

    Stathaki, P.T.; Constantinides, A.G.

    1994-01-01

    In this paper the problem of image texture analysis in the presence of noise is examined from a higher-order statistical perspective. The approach taken involves the use of two dimensional second order Volterra filters where the filter weights are derived from third order cumulants of the two dimensional signal. The specific application contained in this contribution is in mammography, an area in which it is difficult to discern the appropriate features. The paper describes the fundamental issues of the various components of the approach. The results of the entire texture modelling, classification and segmentation scheme contained in this paper are very encouraging

  1. Densis. Densimetric representation of two-dimensional matrices

    International Nuclear Information System (INIS)

    Los Arcos Merino, J.M.

    1978-01-01

    Densis is a Fortran V program which allows off-line control of a Calcomp digital plotter, to represent a two-dimensional matrix of numerical elements in the form of a variable shading intensity map in two colours. Each matrix element is associated to a square of a grid which is traced over by lines whose number is a function of the element value according to a selected scale. Program features, subroutine structure and running instructions, are described. Some typical results, for gamma-gamma coincidence experimental data and a sampled two-dimensional function, are indicated. (author)

  2. Two-dimensional QCD in the Coulomb gauge

    International Nuclear Information System (INIS)

    Kalashnikova, Yu.S.; Nefed'ev, A.V.

    2002-01-01

Various aspects of the 't Hooft model for two-dimensional QCD in the limit of an infinite number of colours in the Coulomb gauge are discussed. The properties of mesonic excitations are studied, with special emphasis on the pion. Attention is paid to the dual role of the pion, which, while a genuine qq-bar state, is a Goldstone boson of two-dimensional QCD as well. In particular, the validity of the soft-pion theorems is demonstrated. It is shown that the Coulomb gauge is the most suitable choice for the study of hadronic observables involving pions.

  3. Quantum Communication Through a Two-Dimensional Spin Network

    International Nuclear Information System (INIS)

    Wang Zhaoming; Gu Yongjian

    2012-01-01

We investigate state and entanglement transfer through a two-dimensional spin network. We show that for state transfer, better fidelity can be gained along the diagonal direction, but for entanglement transfer, when the initial entanglement is created along the boundary, the concurrence is more inclined to propagate along the boundary. This behavior is produced by quantum mechanical interference, and the communication quality depends on the precise size of the network. For some numbers of sites, the fidelity in a two-dimensional channel is higher than in the one-dimensional case. This is an important result for realizing quantum communication through high-dimensional spin chain networks.

  4. Critical Behaviour of a Two-Dimensional Random Antiferromagnet

    DEFF Research Database (Denmark)

    Als-Nielsen, Jens Aage; Birgeneau, R. J.; Guggenheim, H. J.

    1976-01-01

A neutron scattering study of the order parameter, correlation length and staggered susceptibility of the two-dimensional random antiferromagnet Rb2Mn0.5Ni0.5F4 is reported. The system is found to exhibit a well-defined phase transition with critical exponents identical to those of the isomorphous pure materials K2NiF4 and K2MnF4. Thus, in these systems, which have the asymptotic critical behaviour of the two-dimensional Ising model, randomness has no measurable effect on the phase-transition behaviour.

  5. Two dimensional nonlinear spectral estimation techniques for breast cancer localization

    Energy Technology Data Exchange (ETDEWEB)

    Stathaki, P T; Constantinides, A G [Signal Processing Section, Department of Electrical and Electronic Engineering, Imperial College, Exhibition Road, London SW7 2BT, UK (United Kingdom)

    1994-12-31

    In this paper the problem of image texture analysis in the presence of noise is examined from a higher-order statistical perspective. The approach taken involves the use of two dimensional second order Volterra filters where the filter weights are derived from third order cumulants of the two dimensional signal. The specific application contained in this contribution is in mammography, an area in which it is difficult to discern the appropriate features. The paper describes the fundamental issues of the various components of the approach. The results of the entire texture modelling, classification and segmentation scheme contained in this paper are very encouraging. 7 refs, 2 figs.

  6. Finite element solution of two dimensional time dependent heat equation

    International Nuclear Information System (INIS)

    Maaz

    1999-01-01

A Microsoft Windows based computer code, named FHEAT, has been developed for solving two-dimensional heat problems in Cartesian and cylindrical geometries. The programming language is Microsoft Visual Basic 3.0. The code makes use of a finite element formulation for the spatial domain and a finite difference formulation for the time domain. Presently the code is capable of solving two-dimensional steady-state and transient problems in xy- and rz-geometries. The code accepts both triangular and rectangular elements. Validation and benchmarking were done against hand calculations and published results. (author)
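FHEAT couples finite elements in space with finite differences in time. A far simpler sketch of the same transient problem, using finite differences in both space and time on a square xy-grid with fixed (Dirichlet) boundaries, is shown below; the grid sizes and boundary values are invented for illustration.

```python
import numpy as np

def heat_step(u, alpha, dx, dt):
    """One explicit time step for u_t = alpha * (u_xx + u_yy); boundary
    rows/columns are held at their previous (Dirichlet) values."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2
    un = u + alpha * dt * lap
    un[0, :], un[-1, :], un[:, 0], un[:, -1] = u[0, :], u[-1, :], u[:, 0], u[:, -1]
    return un

# Stability of the explicit scheme requires dt <= dx**2 / (4 * alpha).
n, alpha = 21, 1.0
dx = 1.0 / (n - 1)
dt = 0.2 * dx**2 / alpha
u = np.zeros((n, n))
u[0, :] = 1.0                 # one hot edge, the rest cold
for _ in range(500):
    u = heat_step(u, alpha, dx, dt)
```

With the stable time step, each update is a convex combination of neighbouring values, so the discrete solution obeys the maximum principle just as the continuous problem does.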

  7. Chaotic dynamics in two-dimensional noninvertible maps

    CERN Document Server

    Mira, Christian; Cathala, Jean-Claude; Gardini, Laura

    1996-01-01

    This book is essentially devoted to complex properties (Phase plane structure and bifurcations) of two-dimensional noninvertible maps, i.e. maps having either a non-unique inverse, or no real inverse, according to the plane point. They constitute models of sets of discrete dynamical systems encountered in Engineering (Control, Signal Processing, Electronics), Physics, Economics, Life Sciences. Compared to the studies made in the one-dimensional case, the two-dimensional situation remained a long time in an underdeveloped state. It is only since these last years that the interest for this resea

  8. Chiral anomaly, fermionic determinant and two dimensional models

    International Nuclear Information System (INIS)

    Rego Monteiro, M.A. do.

    1985-01-01

The chiral anomaly in arbitrary even dimensions is analysed. This anomaly is calculated perturbatively by the dimensional regularization method. A new method is developed for the non-perturbative calculation of the Jacobian of a general chiral transformation, i.e., one that is finite and non-Abelian. This method is used for the non-perturbative calculation of the chiral anomaly, as an alternative to bosonization of two-dimensional theories of massless fermions, and to study the phenomenon of fermion number fractionalization. The fermionic determinant of two-dimensional quantum chromodynamics is also studied and calculated exactly, both in the decoupling gauge and without reference to a particular gauge. (M.C.K.)

  9. Crustal geomagnetic field - Two-dimensional intermediate-wavelength spatial power spectra

    Science.gov (United States)

    Mcleod, M. G.

    1983-01-01

Two-dimensional Fourier spatial power spectra of equivalent magnetization values are presented for a region that includes a large portion of the western United States. The magnetization values were determined by inversion of POGO satellite data, assuming a magnetic crust 40 km thick, and were located on an 11 x 10 array with 300 km grid spacing. The spectra appear to be in good agreement with values of the crustal geomagnetic field spatial power spectra given by McLeod and Coleman (1980) and with the crustal field model given by Serson and Hannaford (1957). The spectra show evidence of noise at low frequencies in the direction along the satellite orbital track (N-S), indicating that for this particular data set additional filtering would probably be desirable. These findings illustrate the value of two-dimensional spatial power spectra both for describing the geomagnetic field statistically and as a guide for diagnosing possible noise sources.

  10. A NEW METHOD OF PEAK DETECTION FOR ANALYSIS OF COMPREHENSIVE TWO-DIMENSIONAL GAS CHROMATOGRAPHY MASS SPECTROMETRY DATA.

    Science.gov (United States)

    Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang

    2014-06-01

We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with the co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect the peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models.
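The two-stage structure above, a baseline/noise model that defines peak regions followed by picking within each region, can be caricatured with a much simpler toy: a running-median baseline and a noise threshold stand in for the NEB and mixture models, and all parameters and names are invented.

```python
import numpy as np

def pick_peaks(signal, baseline_win=51, k=5.0, min_width=3):
    """Toy two-stage picker: estimate a baseline (running median instead of
    the NEB model), keep regions rising k noise-sigmas above it, and report
    one apex per sufficiently wide region."""
    n = len(signal)
    half = baseline_win // 2
    base = np.array([np.median(signal[max(0, i - half):i + half + 1])
                     for i in range(n)])
    resid = signal - base
    sigma = np.median(np.abs(resid)) / 0.6745    # robust noise estimate
    above = resid > k * sigma                    # candidate peak regions
    peaks, i = [], 0
    while i < n:
        if above[i]:
            j = i
            while j < n and above[j]:
                j += 1
            if j - i >= min_width:               # reject single-point spikes
                peaks.append(i + int(np.argmax(resid[i:j])))
            i = j
        else:
            i += 1
    return peaks

# Two well-separated synthetic peaks on a flat noisy baseline:
x = np.arange(500)
rng = np.random.default_rng(1)
y = (10 * np.exp(-0.5 * ((x - 150) / 5) ** 2) +
     8 * np.exp(-0.5 * ((x - 350) / 6) ** 2) + rng.normal(0, 0.3, 500))
found = pick_peaks(y)
```

Unlike the paper's mixture models, this toy cannot resolve genuinely co-eluting peaks inside one region, which is exactly the gap the mixture-of-distributions step is designed to fill.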

  11. A NEW METHOD OF PEAK DETECTION FOR ANALYSIS OF COMPREHENSIVE TWO-DIMENSIONAL GAS CHROMATOGRAPHY MASS SPECTROMETRY DATA*

    Science.gov (United States)

    Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang

    2014-01-01

We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with the co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect the peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models. PMID:25264474

  12. Probability Distribution of Long-run Indiscriminate Felling of Trees in ...

    African Journals Online (AJOL)

    Bright

    conditionally independent of every prior state given the current state (Obodos, ... of events or experiments in which the probability of occurrence for an event ... represent the exhaustive and mutually exclusive outcomes (states) of a system at.

  13. Vectorized Matlab Codes for Linear Two-Dimensional Elasticity

    Directory of Open Access Journals (Sweden)

    Jonas Koko

    2007-01-01

    Full Text Available A vectorized Matlab implementation for the linear finite element is provided for the two-dimensional linear elasticity with mixed boundary conditions. Vectorization means that there is no loop over triangles. Numerical experiments show that our implementation is more efficient than the standard implementation with a loop over all triangles.

  14. Level crossings in complex two-dimensional potentials

    Indian Academy of Sciences (India)

Two-dimensional PT-symmetric quantum-mechanical systems with the complex cubic potential V_{12} = x^2 + y^2 + igxy^2 and the complex Hénon–Heiles potential V_{HH} = x^2 + y^2 + ig(xy^2 − x^3/3) are investigated. Using numerical and perturbative methods, energy spectra are obtained to high levels. Although both ...

  15. Zero sound in a two-dimensional dipolar Fermi gas

    NARCIS (Netherlands)

    Lu, Z.K.; Matveenko, S.I.; Shlyapnikov, G.V.

    2013-01-01

    We study zero sound in a weakly interacting two-dimensional (2D) gas of single-component fermionic dipoles (polar molecules or atoms with a large magnetic moment) tilted with respect to the plane of their translational motion. It is shown that the propagation of zero sound is provided by both

  16. Interior design of a two-dimensional semiclassical black hole

    Science.gov (United States)

    Levanony, Dana; Ori, Amos

    2009-10-01

We look into the inner structure of a two-dimensional dilatonic evaporating black hole. We establish and employ the homogeneous approximation for the black-hole interior. Two kinds of spacelike singularities are found inside the black hole, and their structure is investigated. We also study the evolution of spacetime from the horizon to the singularity.

  17. On final states of two-dimensional decaying turbulence

    NARCIS (Netherlands)

    Yin, Z.

    2004-01-01

Numerical and analytical studies of final states of two-dimensional (2D) decaying turbulence are carried out. The first part of this work attempts to give a definition for final states of 2D decaying turbulence. The functional relation of ω-ψ, which is frequently adopted as the characterization of

  18. Vibrations of thin piezoelectric shallow shells: Two-dimensional ...

    Indian Academy of Sciences (India)


In this paper we consider the eigenvalue problem for piezoelectric shallow shells and we show that, as the thickness of the shell goes to zero, the eigensolutions of the three-dimensional piezoelectric shells converge to the eigensolutions of a two-dimensional eigenvalue problem. Keywords. Vibrations; piezoelectricity ...

  19. Inter-layer Cooper pairing of two-dimensional electrons

    International Nuclear Information System (INIS)

    Inoue, Masahiro; Takemori, Tadashi; Yoshizaki, Ryozo; Sakudo, Tunetaro; Ohtaka, Kazuo

    1987-01-01

    The authors point out the possibility that the high transition temperatures of the recently discovered oxide superconductors are dominantly caused by the inter-layer Cooper pairing of two-dimensional electrons that are coupled through the exchange of three-dimensional phonons. (author)

  20. Solitary wave solutions of two-dimensional nonlinear Kadomtsev ...

    Indian Academy of Sciences (India)

    Aly R Seadawy

    2017-09-13

We considered the two-dimensional DASWs in collisionless, unmagnetized cold plasma consisting of dust fluid, ions and electrons. The dynamics of DASWs is governed by the normalized fluid equations of nonlinear continuity (1), nonlinear motion of the system (2) and (3), and the linear Poisson equation (4) as.