Distribution-independent hierarchical N-body methods
International Nuclear Information System (INIS)
Aluru, S.
1994-01-01
The N-body problem is to simulate the motion of N particles under the influence of mutual force fields based on an inverse square law. The problem has applications in several domains including astrophysics, molecular dynamics, fluid dynamics, radiosity methods in computer graphics and numerical complex analysis. Research efforts have focused on reducing the O(N²) time per iteration required by the naive algorithm of computing each pairwise interaction. The most widely used among these are the Barnes-Hut and Greengard methods. Greengard claims his algorithm reduces the complexity to O(N) time per iteration. Throughout this thesis, we concentrate on rigorous, distribution-independent, worst-case analysis of the N-body methods. We show that Greengard's algorithm is not O(N), as claimed. Both Barnes-Hut and Greengard's methods depend on the same data structure, which we show is distribution-dependent. For the distribution that results in the smallest running time, we show that Greengard's algorithm is Ω(N log² N) in two dimensions and Ω(N log⁴ N) in three dimensions. We have designed a hierarchical data structure whose size depends entirely upon the number of particles and is independent of their distribution. We show that both Greengard's and Barnes-Hut's algorithms can be used in conjunction with this data structure to reduce their complexity. Apart from reducing the complexity of the Barnes-Hut algorithm, the data structure also permits more accurate error estimation. We present two- and three-dimensional algorithms for creating the data structure. The multipole method designed using this data structure has a complexity of O(N log N) in two dimensions and O(N log² N) in three dimensions.
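The distribution-dependence result has a simple concrete illustration: in the standard quadtree underlying both Barnes-Hut and Greengard's method, the depth at which two particles end up in separate cells is governed by their distance, not by N. A minimal sketch (hypothetical helper, points in the unit square):

```python
# Sketch: quadtree depth depends on the particle distribution, not just N.
# Two well-separated points split at depth 1; two nearly coincident points
# force subdivision ~30 levels deep, making tree size distribution-dependent.

def subdivision_depth(p, q, size=1.0):
    """Depth at which points p and q (in the unit square) fall into
    separate quadtree cells, subdividing the shared cell each level."""
    depth = 0
    (px, py), (qx, qy) = p, q
    ox = oy = 0.0                      # origin of the current cell
    while depth < 64:                  # safety bound on recursion depth
        size /= 2.0
        same_x = (px - ox >= size) == (qx - ox >= size)
        same_y = (py - oy >= size) == (qy - oy >= size)
        depth += 1
        if not (same_x and same_y):
            return depth               # points separated at this level
        # descend into the shared child cell
        if px - ox >= size:
            ox += size
        if py - oy >= size:
            oy += size
    return depth

print(subdivision_depth((0.1, 0.1), (0.9, 0.9)))        # separated at depth 1
print(subdivision_depth((0.5, 0.5), (0.5 + 1e-9, 0.5)))  # ~30 levels deep
```

The second call shows why the thesis's distribution-independent structure matters: with a standard quadtree, two particles a distance d apart cost O(log 1/d) levels regardless of N.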
Extending the alias Monte Carlo sampling method to general distributions
International Nuclear Information System (INIS)
Edwards, A.L.; Rathkopf, J.A.; Smidt, R.K.
1991-01-01
The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods: it matches the accuracy of table lookup while achieving the speed of equally probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs.
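For the discrete case the abstract starts from, the alias method can be sketched as follows (this is Vose's table construction; the paper's piecewise-linear continuous extension and vectorization are not shown):

```python
import random

def build_alias(probs):
    """Vose's alias tables for a discrete distribution (probs sum to 1)."""
    n = len(probs)
    prob = [0.0] * n
    alias = [0] * n
    scaled = [p * n for p in probs]
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]       # donate mass to fill bin s
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                # leftovers are exactly 1 up to rounding
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias, rng=random):
    """O(1) sampling: pick a bin, then accept it or take its alias."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]

# Empirical check of the sampled frequencies against the target distribution.
prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
rng = random.Random(42)
counts = [0, 0, 0, 0]
for _ in range(100000):
    counts[alias_sample(prob, alias, rng)] += 1
```

Each sample costs one table lookup and at most two random numbers, which is the speed advantage over a binary search through a cumulative table.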
Dynamic Subsidy Method for Congestion Management in Distribution Networks
DEFF Research Database (Denmark)
Huang, Shaojun; Wu, Qiuwei
2016-01-01
Dynamic subsidy (DS) is a locational price paid by the distribution system operator (DSO) to its customers in order to shift energy consumption to designated hours and nodes. It is promising for demand side management and congestion management. This paper proposes a new DS method for congestion management in distribution networks, including the market mechanism, the mathematical formulation through a two-level optimization, and the method for solving the optimization by tightening the constraints and linearization. Case studies were conducted with a one-node system and the Bus 4 distribution network of the Roy Billinton Test System (RBTS) with high penetration of electric vehicles (EVs) and heat pumps (HPs). The case studies demonstrate the efficacy of the DS method for congestion management in distribution networks. Studies in this paper show that the DS method offers the customers a fair opportunity …
Advanced airflow distribution methods for reduction of personal exposure to indoor pollutants
DEFF Research Database (Denmark)
Cao, Guangyu; Kosonen, Risto; Melikov, Arsen
2016-01-01
The main objective of this study is to identify possible airflow distribution methods to protect occupants from exposure to various indoor pollutants. The increasing exposure of occupants to various indoor pollutants shows that there is an urgent need to develop advanced airflow distribution methods to reduce indoor exposure to such pollutants. This article presents some of the latest developments in advanced airflow distribution methods to reduce indoor exposure in various types of buildings.
Corbin, B. A.; Seager, S.; Ross, A.; Hoffman, J.
2017-12-01
Distributed satellite systems (DSS) have emerged as an effective and cheap way to conduct space science, thanks to advances in the small satellite industry. However, relatively few space science missions have utilized multiple assets to achieve their primary scientific goals. Previous research on methods for evaluating mission concept designs has shown that distributed systems are rarely competitive with monolithic systems, partially because it is difficult to quantify the added value of DSSs over monolithic systems. Comparatively little research has focused on how DSSs can be used to achieve new, fundamental space science goals that cannot be achieved with monolithic systems, or how to choose a design from a larger possible tradespace of options. There are seven emergent capabilities of distributed satellites: shared sampling, simultaneous sampling, self-sampling, census sampling, stacked sampling, staged sampling, and sacrifice sampling. These capabilities are either fundamentally, analytically, or operationally unique in their application to distributed science missions, and they can be leveraged to achieve science goals that are either impossible or difficult and costly to achieve with monolithic systems. The Responsive Systems Comparison (RSC) method combines Multi-Attribute Tradespace Exploration with Epoch-Era Analysis to examine benefits, costs, and flexible options in complex systems over the mission lifecycle. Modifications to the RSC method as it exists in previously published literature were made in order to more accurately characterize how value is derived from space science missions. New metrics help rank designs by the value derived over their entire mission lifecycle and show more accurate cumulative value distributions. The RSC method was applied to four case study science missions that leveraged the emergent capabilities of distributed satellites to achieve their primary science goals. In all four case studies, RSC showed how scientific value was …
A Comparative Study of Distribution System Parameter Estimation Methods
Energy Technology Data Exchange (ETDEWEB)
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems; therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
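As a toy illustration of state-vector augmentation (not the paper's distribution-system model), a scalar Kalman filter can estimate an unknown constant parameter, here a hypothetical line resistance, by treating it as an augmented state with no dynamics:

```python
import random

def kalman_estimate_parameter(currents, voltages, r0=0.0, p0=100.0, meas_var=0.01):
    """Scalar Kalman filter treating an unknown constant resistance r as
    the (augmented) state: measurement model v_k = r * i_k + noise."""
    r, p = r0, p0                      # state estimate and its variance
    for i, v in zip(currents, voltages):
        # The state is constant, so the predict step leaves (r, p) unchanged.
        s = i * p * i + meas_var       # innovation variance
        k = p * i / s                  # Kalman gain
        r = r + k * (v - i * r)        # measurement update
        p = (1.0 - k * i) * p          # posterior variance shrinks over time
    return r, p

# Synthetic measurements with a known true parameter (illustrative values).
rng = random.Random(1)
true_r = 0.42
currents = [rng.uniform(1.0, 5.0) for _ in range(500)]
voltages = [true_r * i + rng.gauss(0.0, 0.1) for i in currents]
r_hat, p_final = kalman_estimate_parameter(currents, voltages)
print(round(r_hat, 3))   # converges toward the true value
```

Feeding several snapshots through the same filter, as the paper does, simply extends the measurement sequence and further shrinks the posterior variance.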
Distribution network planning method considering distributed generation for peak cutting
International Nuclear Information System (INIS)
Ouyang Wu; Cheng Haozhong; Zhang Xiubin; Yao Liangzhong
2010-01-01
Conventional distribution planning methods based on peak load bring about large investment, high risk and low utilization efficiency. A distribution network planning method considering distributed generation (DG) for peak cutting is proposed in this paper. The new integrated distribution network planning method with DG implementation aims to minimize the sum of feeder investments, DG investments, energy loss cost and the additional cost of DG for peak cutting. Using solution techniques combining a genetic algorithm (GA) with a heuristic approach, the proposed model determines the optimal planning scheme, including the feeder network and the siting and sizing of DG. The strategy for the siting and sizing of DG, which is based on the radial structure of the distribution network, reduces the complexity of solving the optimization model and eases the computational burden substantially. Furthermore, the operation schedule of DG at different load levels is also provided.
Calculation Methods for Wallenius’ Noncentral Hypergeometric Distribution
DEFF Research Database (Denmark)
Fog, Agner
2008-01-01
Two different probability distributions are both known in the literature as "the" noncentral hypergeometric distribution. Wallenius' noncentral hypergeometric distribution can be described by an urn model without replacement with bias. Fisher's noncentral hypergeometric distribution is the conditional distribution of independent binomial variates given their sum. No reliable calculation method for Wallenius' noncentral hypergeometric distribution has hitherto been described in the literature. Several new methods for calculating probabilities from Wallenius' noncentral hypergeometric distribution are derived. Range of applicability, numerical problems, and efficiency are discussed for each method. Approximations to the mean and variance are also discussed. This distribution has important applications in models of biased sampling and in models of evolutionary systems.
Methods for Distributed Optimal Energy Management
DEFF Research Database (Denmark)
Brehm, Robert
The presented research deals with the fundamental underlying methods and concepts of how the growing number of distributed generation units based on renewable energy resources and distributed storage devices can be most efficiently integrated into the existing utility grid. In contrast to conventional centralised optimal energy flow management systems, the focus here is on how optimal energy management can be achieved in a decentralised distributed architecture such as a multi-agent system. Distributed optimisation methods are introduced, targeting optimisation of energy flow in virtual … self-consumption of renewable energy resources in low voltage grids. It can be shown that this method prevents mutual discharging of batteries and prevents peak loads; a supervisory control instance can dictate the level of autarchy from the utility grid. Further it is shown that the problem of optimal energy flow management …
Test of methods for retrospective activity size distribution determination from filter samples
International Nuclear Information System (INIS)
Meisenberg, Oliver; Tschiersch, Jochen
2015-01-01
Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore three methods for retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields on the order of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for ¹³⁷Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter
Cellular Neural Network-Based Methods for Distributed Network Intrusion Detection
Directory of Open Access Journals (Sweden)
Kang Xie
2015-01-01
To address the problems of current distributed-architecture intrusion detection systems (DIDS), a new online distributed intrusion detection model based on cellular neural networks (CNN) is proposed, in which a discrete-time CNN (DTCNN) is used as the weak classifier in each local node and a state-controlled CNN (SCCNN) is used as the global detection method. We further propose a new method for designing the template parameters of the SCCNN by solving a linear matrix inequality. Experimental results on the KDD CUP 99 dataset show the model's feasibility and effectiveness. Emerging evidence indicates that this approach is amenable to parallelism and analog very large scale integration (VLSI) implementation, which allows distributed intrusion detection to be performed better.
Higher moments method for generalized Pareto distribution in flood frequency analysis
Zhou, C. R.; Chen, Y. F.; Huang, Q.; Gu, S. H.
2017-08-01
The generalized Pareto distribution (GPD) has proven to be the ideal distribution for fitting peaks-over-threshold series in flood frequency analysis. Several moments-based estimators are applied to estimating the parameters of the GPD. Higher linear moments (LH moments) and higher probability weighted moments (HPWM) are linear combinations of probability weighted moments (PWM). In this study, the relationship between them is explored. A series of statistical experiments and a case study are used to compare their performances. The results show that if the same PWM are used in the LH-moments and HPWM methods, the parameters estimated by the two methods are unbiased. In particular, when the same PWM are used, the PWM method (or the HPWM method when the order equals 0) gives identical parameter estimates to the linear moments (L-moments) method. This phenomenon is significant when r ≥ 1 and the same-order PWM are used in the HPWM and LH-moments methods.
Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming
2018-01-01
This paper proposes a fast reliability assessment method for distribution grids with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distribution characteristics of wind speed and solar irradiance respectively, and models of the wind farm, solar park and local load are built for reliability assessment. Then, based on production cost simulation, probability discretization and linearized power flow, an optimal power flow problem whose objective is to minimize the cost of conventional power generation is solved. Thus reliability assessment for the distribution grid is implemented quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices; a simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the proposed method computes the reliability indices much faster than the Monte Carlo method while maintaining accuracy.
Effects of mixing methods on phase distribution in vertical bubble flow
International Nuclear Information System (INIS)
Monji, Hideaki; Matsui, Goichi; Sugiyama, Takayuki.
1992-01-01
The mechanism of phase distribution formation in a bubble flow is one of the most important problems in the control of two-phase flow systems. The effect of mixing methods on the phase distribution was experimentally investigated using upward nitrogen gas-water bubble flow under the condition of fixed flow rates. The experimental results show that the diameter of the gas injection hole influences the phase distribution through the bubble size. The location of the injection hole and the direction of injection do not influence the phase distribution of fully developed bubble flow. The equivalent bubble size at the transition from coring bubble flow to sliding bubble flow corresponds to the bubble shape transition. The analytical results show that the phase distribution may be predictable if the phase profile is judged from the bubble size. (author)
Modeling error distributions of growth curve models through Bayesian methods.
Zhang, Zhiyong
2016-06-01
Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99, is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.
Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan
2017-12-06
Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods
Ritter, Nicola L.
2012-01-01
Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…
Thermodynamic method for generating random stress distributions on an earthquake fault
Barall, Michael; Harris, Ruth A.
2012-01-01
This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
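The spectral-synthesis step, generating a random field whose power spectral density follows a prescribed law, can be sketched in one dimension (the report works in two dimensions with a spectrum derived from earthquake scaling relations; the power-law exponent here is only an assumption):

```python
import cmath
import math
import random

def random_field_from_psd(n, exponent=2.0, rng=random):
    """Generate a real 1-D random field whose power spectral density
    falls off as k**(-exponent): random phases on a power-law spectrum."""
    # Build a Hermitian-symmetric spectrum so the inverse transform is real.
    spec = [0j] * n
    for k in range(1, n // 2):
        amp = k ** (-exponent / 2.0)           # |F(k)| = sqrt(PSD)
        phase = rng.uniform(0.0, 2.0 * math.pi)
        spec[k] = amp * cmath.exp(1j * phase)
        spec[n - k] = spec[k].conjugate()      # Hermitian symmetry
    # Naive inverse DFT (fine for a demo; use numpy.fft.ifft in practice).
    field = []
    for x in range(n):
        s = sum(spec[k] * cmath.exp(2j * math.pi * k * x / n) for k in range(n))
        field.append(s.real / n)
    return field

field = random_field_from_psd(64, exponent=2.0, rng=random.Random(3))
```

Because the k = 0 component is left at zero, the field has zero mean; the report's cut-off of the spectrum at high wavenumbers plays the role of the finite fault strength that prevents the equipartition divergence.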
Generalized Analysis of a Distribution Separation Method
Directory of Open Access Journals (Sweden)
Peng Zhang
2016-04-01
Separating two probability distributions from a mixture model that is made up of the combinations of the two is essential to a wide range of applications. For example, in information retrieval (IR, there often exists a mixture distribution consisting of a relevance distribution that we need to estimate and an irrelevance distribution that we hope to get rid of. Recently, a distribution separation method (DSM was proposed to approximate the relevance distribution, by separating a seed irrelevance distribution from the mixture distribution. It was successfully applied to an IR task, namely pseudo-relevance feedback (PRF, where the query expansion model is often a mixture term distribution. Although initially developed in the context of IR, DSM is indeed a general mathematical formulation for probability distribution separation. Thus, it is important to further generalize its basic analysis and to explore its connections to other related methods. In this article, we first extend DSM’s theoretical analysis, which was originally based on the Pearson correlation coefficient, to entropy-related measures, including the KL-divergence (Kullback–Leibler divergence, the symmetrized KL-divergence and the JS-divergence (Jensen–Shannon divergence. Second, we investigate the distribution separation idea in a well-known method, namely the mixture model feedback (MMF approach. We prove that MMF also complies with the linear combination assumption, and then, DSM’s linear separation algorithm can largely simplify the EM algorithm in MMF. These theoretical analyses, as well as further empirical evaluation results demonstrate the advantages of our DSM approach.
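DSM's core linear-separation step can be sketched as follows; the mixture coefficient lam is assumed known here, whereas the full method estimates it (originally via the Pearson correlation coefficient):

```python
def separate_distribution(mixture, seed, lam):
    """Recover the relevance distribution from a linear mixture
    mixture = lam * relevance + (1 - lam) * seed, clipping small
    negatives caused by estimation noise and renormalising."""
    rel = [(m - (1.0 - lam) * s) / lam for m, s in zip(mixture, seed)]
    rel = [max(r, 0.0) for r in rel]
    total = sum(rel)
    return [r / total for r in rel]

# Round-trip check on a hypothetical 3-term distribution.
relevance = [0.5, 0.3, 0.2]     # what we want to estimate
irrelevance = [0.1, 0.1, 0.8]   # the seed irrelevance distribution
lam = 0.6
mixture = [lam * r + (1 - lam) * i for r, i in zip(relevance, irrelevance)]
recovered = separate_distribution(mixture, irrelevance, lam)
```

In the PRF setting the abstract describes, `mixture` would be the expanded query term distribution and `seed` an estimate of the collection's irrelevant background model.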
Method of forecasting power distribution
International Nuclear Information System (INIS)
Kaneto, Kunikazu.
1981-01-01
Purpose: To obtain forecasting results of high accuracy by reflecting the signals from neutron detectors disposed in the reactor core in the forecasting results. Method: An on-line computer transfers to a simulator process data such as coolant temperature and flow rate in each section, together with various measured signals such as control rod positions, from the nuclear reactor. The simulator calculates the present power distribution before the control operation. The signals from the neutron detectors at each position in the reactor core are estimated from the power distribution, and errors are determined from the estimated and measured values to obtain a smooth error distribution in the axial direction. Then, input conditions at the time to be forecast are set by a data setter. The simulator calculates the forecast power distribution after the control operation based on the set conditions. The forecast power distribution is corrected using the error distribution. (Yoshino, Y.)
A code for obtaining temperature distribution by finite element method
International Nuclear Information System (INIS)
Bloch, M.
1984-01-01
The ELEFIB computer code, written in Fortran, uses the finite element method to calculate the temperature distribution in one- and two-dimensional problems, for steady-state conditions or the transient phase of heat transfer. The equations are formulated with the Galerkin method. Some examples are shown and the results are compared with other papers. The comparative evaluation shows that the code gives good values. (M.C.K.) [pt]
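A minimal Galerkin finite-element solve of the kind such a code performs, here a one-dimensional steady conduction problem with linear elements, can be sketched as follows (ELEFIB itself is a Fortran code; this is only an illustrative analogue):

```python
def fem_heat_1d(n, f=lambda x: 1.0):
    """Galerkin FEM with linear elements for -u'' = f on (0,1), u(0)=u(1)=0.
    Returns nodal values at the n interior nodes of a uniform mesh."""
    h = 1.0 / (n + 1)
    # Assembled stiffness matrix is tridiagonal: 2/h on the diagonal, -1/h off it.
    a = [-1.0 / h] * n      # sub-diagonal
    b = [2.0 / h] * n       # diagonal
    c = [-1.0 / h] * n      # super-diagonal
    # Load vector: f integrated against each hat function (f(x_i) * h)
    d = [f((i + 1) * h) * h for i in range(n)]
    # Thomas algorithm for the tridiagonal solve
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

u = fem_heat_1d(99)    # for f = 1 the exact solution is u(x) = x(1-x)/2
```

For constant f with exactly integrated load, linear elements reproduce the exact solution at the nodes, which makes this a convenient self-check of the assembly and solver.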
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks
Directory of Open Access Journals (Sweden)
Chaoyang Shi
2017-12-01
Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
Sampling Methods for Wallenius' and Fisher's Noncentral Hypergeometric Distributions
DEFF Research Database (Denmark)
Fog, Agner
2008-01-01
Several methods for generating variates with univariate and multivariate Wallenius' and Fisher's noncentral hypergeometric distributions are developed. Methods for the univariate distributions include: simulation of urn experiments, inversion by binary search, inversion by chop-down search from the mode, ratio-of-uniforms rejection method, and rejection by sampling in the tau domain. Methods for the multivariate distributions include: simulation of urn experiments, conditional method, Gibbs sampling, and Metropolis-Hastings sampling. These methods are useful for Monte Carlo simulation of models of biased sampling and models of evolution and for calculating moments and quantiles of the distributions.
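The simplest of the listed univariate methods, simulation of the urn experiment, can be sketched as follows (parameter names and values are illustrative):

```python
import random

def wallenius_urn(m1, m2, omega, n, rng=random):
    """One Wallenius urn experiment: draw n balls one at a time without
    replacement from m1 red balls (weight omega) and m2 white balls
    (weight 1); each draw picks a ball with probability proportional to
    the total remaining weight of its colour. Returns red balls drawn."""
    x = 0                                   # red balls drawn so far
    for t in range(n):
        red_w = (m1 - x) * omega
        white_w = float(m2 - (t - x))
        if rng.random() * (red_w + white_w) < red_w:
            x += 1
    return x

# Monte Carlo estimate of the mean for a biased urn (omega = 2).
rng = random.Random(5)
draws = [wallenius_urn(10, 10, 2.0, 10, rng) for _ in range(20000)]
mean = sum(draws) / len(draws)
```

With omega = 1 this reduces to the ordinary hypergeometric distribution; the sequential draw-by-draw structure is exactly what distinguishes Wallenius' distribution from Fisher's, where the bias acts on the joint outcome instead.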
Networked and Distributed Control Method with Optimal Power Dispatch for Islanded Microgrids
DEFF Research Database (Denmark)
Li, Qiang; Peng, Congbo; Chen, Minyou
2017-01-01
… of controllable agents. The distributed control laws derived from the first subgraph guarantee the supply-demand balance, while further control laws from the second subgraph reassign the outputs of controllable distributed generators, which ensures that active and reactive power are dispatched optimally. However, … according to our proposition. Finally, the method is evaluated over seven cases via simulation. The results show that the system performs as desired, even if environmental conditions and load demand fluctuate significantly. In summary, the method can rapidly respond to fluctuations, resulting in optimal …
Directory of Open Access Journals (Sweden)
Shan Yang
2016-01-01
Power flow calculation and short circuit calculation are the basis of theoretical research for distribution networks with inverter-based distributed generation. The similarity of the equivalent model for inverter-based distributed generation during normal and fault conditions of the distribution network, and the differences between power flow and short circuit calculation, are analyzed in this paper. An integrated power flow and short circuit calculation method for distribution networks with inverter-based distributed generation is then proposed. The proposed method represents inverter-based distributed generation as an I-θ bus, which makes it suitable for calculating the power flow of a distribution network with current-limited inverter-based distributed generation. The low voltage ride-through capability of inverter-based distributed generation is considered as well. Finally, power flow and short circuit current calculations are performed on a 33-bus distribution network. The results from the proposed method are contrasted with those of the traditional method and the simulation method, which verifies the effectiveness of the integrated method suggested in this paper.
Advanced airflow distribution methods for reducing exposure of indoor pollution
DEFF Research Database (Denmark)
Cao, Guangyu; Nielsen, Peter Vilhelm; Melikov, Arsen
2017-01-01
The adverse effects of various indoor pollutants on occupants' health have been recognized. In public spaces flu viruses may spread from person to person by airflow generated by traditional ventilation methods, like natural ventilation and mixing ventilation (MV). Personalized ventilation (PV) supplies clean air close to the occupant and directly into the breathing zone. Studies show that it improves the inhaled air quality and reduces the risk of airborne cross-infection in comparison with total volume (TV) ventilation. However, it is still challenging for PV and other advanced air distribution methods to reduce the exposure to gaseous and particulate pollutants under disturbed conditions and to ensure thermal comfort at the same time. The objective of this study is to analyse the performance of different advanced airflow distribution methods for protection of occupants from exposure to indoor pollutants.
Advanced airflow distribution methods for reducing exposure of indoor pollution
DEFF Research Database (Denmark)
Cao, Guangyu; Nielsen, Peter Vilhelm; Melikov, Arsen Krikor
The adverse effects of various indoor pollutants on occupants' health have been recognized. In public spaces flu viruses may spread from person to person by airflow generated by traditional ventilation methods, like natural ventilation and mixing ventilation (MV). Personalized ventilation (PV) supplies clean air close to the occupant and directly into the breathing zone. Studies show that it improves the inhaled air quality and reduces the risk of airborne cross-infection in comparison with total volume (TV) ventilation. However, it is still challenging for PV and other advanced air distribution methods to reduce the exposure to gaseous and particulate pollutants under disturbed conditions and to ensure thermal comfort at the same time. The objective of this study is to analyse the performance of different advanced airflow distribution methods for protection of occupants from exposure to indoor pollutants.
DEFF Research Database (Denmark)
Huang, Shaojun; Wu, Qiuwei; Liu, Zhaoxi
2014-01-01
This paper reviews the existing congestion management methods for distribution networks with high penetration of DERs documented in the recent research literature. The congestion management methods reviewed can be grouped into two categories: market methods and direct control methods. The market methods consist of dynamic tariff, distribution capacity market, shadow price and flexible service market. The direct control methods are comprised of network reconfiguration, reactive power control and active power control. Based on the review of the existing methods...
Reduction Method for Active Distribution Networks
DEFF Research Database (Denmark)
Raboni, Pietro; Chen, Zhe
2013-01-01
On-line security assessment is traditionally performed by Transmission System Operators at the transmission level, ignoring the effective response of distributed generators and small loads. On the other hand, the required computation time and amount of real-time data for including Distribution Networks would also be too large. In this paper, an adaptive aggregation method for subsystems with power-electronic-interfaced generators and voltage-dependent loads is proposed. With this tool it may be relatively easier to include distribution networks in security assessment. The method is validated by comparing the results obtained in PSCAD® with the detailed network model and with the reduced one. Moreover, the control schemes of a wind turbine and a photovoltaic plant included in the detailed network model are described.
Leontief Input-Output Method for The Fresh Milk Distribution Linkage Analysis
Directory of Open Access Journals (Sweden)
Riski Nur Istiqomah
2016-11-01
Full Text Available This research discusses linkage analysis and identifies the key sector in fresh milk distribution using the Leontief Input-Output method, one of the applications of mathematics in economics. The current fresh milk distribution system comprises dairy farmers → collectors → fresh milk processing industries → processed milk distributors → consumers. The analysis then merges the collectors' activity with that of the fresh milk processing industry. The data used are primary and secondary data collected in June 2016 in Kecamatan Jabung, Kabupaten Malang. The collected data are analysed using the Leontief Input-Output matrix and the Python (PyIO) 2.1 software. The result shows that merging the collectors' and the fresh milk processing industry's activities yields high indices of both forward and backward linkages, indicating that the merged activity is the key sector, which plays an important role in developing the whole fresh milk distribution chain.
Analysis of calculating methods for failure distribution function based on maximal entropy principle
International Nuclear Information System (INIS)
Guo Chunying; Lin Yuangen; Jiang Meng; Wu Changli
2009-01-01
The computation of failure distribution functions of electronic devices exposed to gamma rays is discussed. First, the possible device failure distribution models are determined through tests of statistical hypotheses using the test data. The results show that the devices' failures can be fitted by several candidate distributions when the test data are few. To decide the optimum failure distribution model, the maximal entropy principle is used and the elementary failure models are determined. Then, the Bootstrap estimation method is used to simulate the interval estimation of the mean and the standard deviation. On this basis, the maximal entropy principle is applied again and the simulated annealing method is used to find the optimum values of the mean and the standard deviation. Accordingly, the optimum failure distributions of the electronic devices are finally determined and the survival probabilities are calculated. (authors)
Comparison of Two Methods Used to Model Shape Parameters of Pareto Distributions
Liu, C.; Charpentier, R.R.; Su, J.
2011-01-01
Two methods are compared for estimating the shape parameters of Pareto field-size (or pool-size) distributions for petroleum resource assessment. Both methods assume mature exploration in which most of the larger fields have been discovered. Both methods use the sizes of larger discovered fields to estimate the numbers and sizes of smaller fields: (1) the tail-truncated method uses a plot of field size versus size rank, and (2) the log-geometric method uses data binned in field-size classes and the ratios of adjacent bin counts. Simulation experiments were conducted using discovered oil and gas pool-size distributions from four petroleum systems in Alberta, Canada, and using Pareto distributions generated by Monte Carlo simulation. The estimates of the shape parameters of the Pareto distributions, calculated by both the tail-truncated and log-geometric methods, generally stabilize where discovered pool numbers are greater than 100. However, with fewer than 100 discoveries, these estimates can vary greatly with each new discovery. The estimated shape parameters of the tail-truncated method are more stable and larger than those of the log-geometric method where the number of discovered pools is more than 100. Both methods, however, tend to underestimate the shape parameter. Monte Carlo simulation was also used to create sequences of discovered pool sizes by sampling from a Pareto distribution with a discovery process model using a defined exploration efficiency (in order to show how biased the sampling was in favor of larger fields being discovered first). A higher (more biased) exploration efficiency gives better estimates of the Pareto shape parameters. © 2011 International Association for Mathematical Geosciences.
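The log-geometric idea can be sketched in a few lines: for a Pareto distribution binned into size classes whose edges grow by a fixed ratio r, adjacent bin counts fall off by the constant factor r^(-k), so the slope of log-count versus bin index recovers the shape parameter k. The code below is a minimal illustration on simulated data, not the authors' implementation; the shape value, sample size and bin ratio are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
k_true = 1.2                                 # hypothetical shape parameter
sizes = rng.pareto(k_true, 200_000) + 1.0    # Pareto field sizes, x_min = 1

# log-geometric binning: class edges grow by a fixed ratio r
r = 2.0
edges = r ** np.arange(0, 15)
counts, _ = np.histogram(sizes, bins=edges)

# adjacent bin counts scale by r**(-k); fit the log-count slope
idx = np.arange(len(counts))
good = counts > 100                          # drop noisy tail bins
slope = np.polyfit(idx[good], np.log(counts[good]), 1)[0]
k_est = -slope / np.log(r)
print(round(k_est, 1))                       # → 1.2
```

With fewer samples the tail bins become sparse and the estimate degrades, which mirrors the instability below 100 discoveries reported in the abstract.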
The overlapping distribution method to compute chemical potentials of chain molecules
Mooij, G.C.A.M.; Frenkel, D.
1994-01-01
The chemical potential of continuously deformable chain molecules can be estimated by measuring the average Rosenbluth weight associated with the virtual insertion of a molecule. We show how to generalize the overlapping-distribution method of Bennett to histograms of Rosenbluth weights. In this way
Methods for reconstruction of the density distribution of nuclear power
International Nuclear Information System (INIS)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.
2015-01-01
Highlights: • Two methods for reconstruction of the pin power distribution are presented. • The ARM method uses an analytical solution of the 2D diffusion equation. • The PRM method uses a polynomial solution without boundary conditions. • The maximum errors in pin power reconstruction occur in the peripheral water region. • The errors are significantly smaller in the inner area of the core. - Abstract: In the analytical reconstruction method (ARM), the two-dimensional (2D) neutron diffusion equation is analytically solved for two energy groups (2G) and homogeneous nodes with the dimensions of a fuel assembly (FA). The solution employs a 2D fourth-order expansion for the axial leakage term. The Nodal Expansion Method (NEM) provides the solution average values, namely the four average partial currents on the surfaces of the node, the average flux in the node and the multiplying factor of the problem. The expansion coefficients for the axial leakage are determined directly from the NEM or can be determined in the reconstruction method. A new polynomial reconstruction method (PRM) is implemented based on the 2D expansion for the axial leakage term. The ARM uses the four average currents on the surfaces of the node and the four average fluxes in the corners of the node as boundary conditions, and the average flux in the node as a consistency condition. To determine the average fluxes in the corners of the node, an analytical solution is employed; it uses the average fluxes on the surfaces of the node as boundary conditions, and discontinuities in the corners are incorporated. The polynomial and analytical solutions of the PRM and ARM, respectively, represent the homogeneous flux distributions. The detailed distributions inside a FA are estimated by the product of the homogeneous distribution and a local heterogeneous form function; moreover, form functions of power are used. The results show that the methods have good accuracy when compared with reference values and
Distributed MIMO-ISAR Sub-image Fusion Method
Directory of Open Access Journals (Sweden)
Gu Wenkun
2017-02-01
Full Text Available The fast fluctuation of a maneuvering target's radar cross-section often affects the imaging stability of traditional monostatic Inverse Synthetic Aperture Radar (ISAR). To address this problem, we propose an imaging method based on the fusion of sub-images from frequency-diversity-distributed Multiple-Input Multiple-Output Inverse Synthetic Aperture Radar (MIMO-ISAR). First, we establish the analytic expression of a two-dimensional ISAR sub-image acquired by different channels of distributed MIMO-ISAR. Then, we derive the distance and azimuth distortion factors of the images acquired by the different channels. By compensating for the distortion of the ISAR images, we ultimately realize distributed MIMO-ISAR fusion imaging. Simulations verify the validity of this imaging method.
Dual reference point temperature interrogating method for distributed temperature sensor
International Nuclear Information System (INIS)
Ma, Xin; Ju, Fang; Chang, Jun; Wang, Weijie; Wang, Zongliang
2013-01-01
A novel method based on dual temperature reference points is presented to interrogate the temperature in a distributed temperature sensing (DTS) system. This new method is suitable to overcome deficiencies due to the impact of DC offsets and the gain difference in the two signal channels of the sensing system during temperature interrogation. Moreover, this method can in most cases avoid the need to calibrate the gain and DC offsets in the receiver, data acquisition and conversion. An improved temperature interrogation formula is presented and the experimental results show that this method can efficiently estimate the channel amplification and system DC offset, thus improving the system accuracy. (letter)
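The essence of using two reference sections at known temperatures is that they let the unknown channel gain and DC offset be absorbed into a two-point calibration. The sketch below uses a simple linear model with invented numbers; the actual DTS interrogation formula (based on Raman Stokes/anti-Stokes ratios) is more involved than this.

```python
def calibrate_two_point(y1, y2, t1, t2):
    """Solve y = g*t + b for gain g and offset b from two reference readings."""
    g = (y2 - y1) / (t2 - t1)
    b = y1 - g * t1
    return g, b

def temperature(y, g, b):
    """Invert the calibrated linear model for an unknown section's reading."""
    return (y - b) / g

# hypothetical reference sections held at known 20 °C and 60 °C
g, b = calibrate_two_point(y1=1.30, y2=2.10, t1=20.0, t2=60.0)
print(round(temperature(1.70, g, b), 1))   # reading midway between refs → 40.0
```

Because g and b are re-estimated from the references on every trace, drifts in amplifier gain or DC offset drop out, which is the deficiency the paper's method targets.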
Why liquid displacement methods are sometimes wrong in estimating the pore-size distribution
Gijsbertsen-Abrahamse, A.J.; Boom, R.M.; Padt, van der A.
2004-01-01
The liquid displacement method is a commonly used method to determine the pore size distribution of micro- and ultrafiltration membranes. One of the assumptions for the calculation of the pore sizes is that the pores are parallel and thus are not interconnected. To show that the estimated pore size
International Nuclear Information System (INIS)
Shevkunov, I A; Petrov, N V
2014-01-01
The performance of three phase-retrieval methods that use spatial intensity distributions was investigated for the task of reconstructing the amplitude characteristics of a test object. These methods differ both in their mathematical models and in the order of iteration execution. The single-beam multiple-intensity reconstruction method showed the best efficiency in terms of reconstruction quality and time consumption.
On-line reconstruction of in-core power distribution by harmonics expansion method
International Nuclear Information System (INIS)
Wang Changhui; Wu Hongchun; Cao Liangzhi; Yang Ping
2011-01-01
Highlights: → A harmonics expansion method for the on-line in-core power reconstruction is proposed. → A harmonics data library is pre-generated off-line and a code named COMS is developed. → Numerical results show that the maximum relative error of the reconstruction is less than 5.5%. → This method has a high computational speed compared to traditional methods. - Abstract: Fixed in-core detectors are most suitable in real-time response to in-core power distributions in pressurized water reactors (PWRs). In this paper, a harmonics expansion method is used to reconstruct the in-core power distribution of a PWR on-line. In this method, the in-core power distribution is expanded by the harmonics of one reference case. The expansion coefficients are calculated using signals provided by fixed in-core detectors. To conserve computing time and improve reconstruction precision, a harmonics data library containing the harmonics of different reference cases is constructed. Upon reconstruction of the in-core power distribution on-line, the two closest reference cases are searched from the harmonics data library to produce expanded harmonics by interpolation. The Unit 1 reactor of DayaBay Nuclear Power Plant (DayaBay NPP) in China is considered for verification. The maximum relative error between the measurement and reconstruction results is less than 5.5%, and the computing time is about 0.53 s for a single reconstruction, indicating that this method is suitable for the on-line monitoring of PWRs.
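The reconstruction step described above, expanding the power distribution in pre-computed harmonics and fitting the expansion coefficients to fixed-detector signals by least squares, can be sketched as follows. The modes here are random orthonormal vectors standing in for the pre-generated harmonics library, and all sizes and coefficient values are placeholders, not data from the DayaBay verification.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_harm, n_det = 50, 4, 12            # hypothetical problem sizes

# stand-in for the pre-generated harmonics of a reference case
modes = np.linalg.qr(rng.standard_normal((n_nodes, n_harm)))[0]

# "true" in-core power: a combination of the reference harmonics
a_true = np.array([1.0, 0.15, -0.08, 0.03])
power = modes @ a_true

# fixed in-core detectors sample the power at a few node positions
det_idx = rng.choice(n_nodes, n_det, replace=False)
signals = power[det_idx]

# expansion coefficients fitted to the detector signals by least squares
a_fit, *_ = np.linalg.lstsq(modes[det_idx], signals, rcond=None)
max_err = float(np.max(np.abs(modes @ a_fit - power)))
print(max_err < 1e-10)   # → True
```

With more detectors than harmonics the fit is overdetermined, which is what makes the on-line step cheap: only a small least-squares solve per reconstruction.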
International Nuclear Information System (INIS)
Tan, Cheng-Yang; Fermilab
2006-01-01
One common way for measuring the emittance of an electron beam is with the slits method. The usual approach for analyzing the data is to calculate an emittance that is a subset of the parent emittance. This paper shows an alternative way by using the method of correlations which ties the parameters derived from the beamlets to the actual parameters of the parent emittance. For parent distributions that are Gaussian, this method yields exact results. For non-Gaussian beam distributions, this method yields an effective emittance that can serve as a yardstick for emittance comparisons
Directory of Open Access Journals (Sweden)
Yu Su
2018-06-01
effective load considering uncertainties. Results showed that the planning method of on-load capacity regulating distribution transformers proposed in this paper was very feasible and is of great guiding significance to distribution transformer planning after electric energy replacement and the popularization of on-load capacity regulating distribution transformers.
Directory of Open Access Journals (Sweden)
Qingwu Gong
2017-03-01
Full Text Available The intermittency and variability of permeated distributed generators (DGs) could pose critical security and economy risks to distribution systems. This paper applied a certain mathematical distribution to imitate the output variability and uncertainty of DGs. Then, four risk indices were established to reflect the risk level of the distribution system: EENS (expected energy not supplied), PLC (probability of load curtailment), EFLC (expected frequency of load curtailment), and SI (severity index). For the given mathematical distribution of the DGs' output power, an improved PEM (point estimate method)-based method was proposed to calculate these four risk indices. In this method, an enumeration method lists the states of the distribution system, the improved PEM handles the uncertainties of the DGs, and the value of load curtailment is calculated by an optimal power flow algorithm. Finally, the effectiveness and advantages of the proposed PEM-based method were verified by testing a modified IEEE 30-bus system. Simulation results show that the proposed method has high computational accuracy and greatly reduced computational cost compared with other risk assessment methods, and is very effective for risk assessment.
Sediment spatial distribution evaluated by three methods and its relation to some soil properties
Energy Technology Data Exchange (ETDEWEB)
Bacchi, O.O.S. [Centro de Energia Nuclear na Agricultura (CENA/USP), Laboratorio de Fisica do Solo, Piracicaba, SP (Brazil); Reichardt, K. [Centro de Energia Nuclear na Agricultura (CENA/USP), Laboratorio de Fisica do Solo, Piracicaba, SP (Brazil); Departamento de Ciencias Exatas, Escola Superior de Agricultura 'Luiz de Queiroz' (ESALQ/USP), Piracicaba, SP (Brazil); Sparovek, G. [Departamento de Solos e Nutricao de Plantas, Escola Superior de Agricultura 'Luiz de Queiroz' (ESALQ/USP), Piracicaba, SP (Brazil)
2003-02-15
An investigation of rates and spatial distribution of sediments on an agricultural field cultivated with sugarcane was undertaken using the {sup 137}Cs technique, USLE and WEPP models. The study was carried out on the Ceveiro watershed of the Piracicaba river basin, state of Sao Paulo, Brazil, experiencing severe soil degradation due to soil erosion. The objectives of the study were to compare the spatial distribution of sediments evaluated by the three methods and its relation to some soil properties. Erosion and sedimentation rates and their spatial distribution estimated by the three methods were completely different. Although not able to show sediment deposition, the spatial distribution of erosion rates evaluated by USLE presented the best correlation with other studied soil properties. (author)
Probability evolution method for exit location distribution
Zhu, Jinjie; Chen, Zhen; Liu, Xianbin
2018-03-01
The exit problem in the framework of the large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit has been clarified by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small while noise with finite strength may induce nontrivial phenomena, such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape will take exponentially large time as noise approaches zero. The majority of the time is wasted on the uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and by applying the reinjection on them. This method can be used to calculate the exit location distribution. It is verified by examining two classical examples and is compared with theoretical predictions. The results show that the method performs well for weak noise while may induce certain deviations for large noise. Finally, some possible ways to improve our method are discussed.
Distribution Route Planning of Clean Coal Based on Nearest Insertion Method
Wang, Yunrui
2018-01-01
Clean coal technology has seen steady progress over several decades, but research on its distribution is scarce. Distribution efficiency directly affects the overall development of clean coal technology, and rational route planning is the key to improving it. The object of this study is a clean coal distribution system built in a county. A survey of customer demand, distribution routes and vehicle deployment in previous years showed that vehicles had been dispatched purely by experience and that the number of vehicles used varied from day to day, wasting transport capacity and increasing energy consumption. A mathematical model was therefore established with the shortest path as the objective function, and the distribution routes were re-planned using an improved nearest-insertion method. The results showed that the transportation distance was reduced by 37 km and that the number of vehicles used per day fell from an average of 5 to a fixed 4, while the actual vehicle loading increased by 16.25% with the distribution volume unchanged. This realizes efficient distribution of clean coal and achieves the goal of saving energy and reducing consumption.
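As a concrete illustration of the nearest-insertion heuristic (the paper's specific improvement is not described in the abstract, so this is the textbook version): start from one depot, repeatedly pick the unvisited customer nearest to the current tour, and splice it in where it lengthens the tour least. The four-point example is invented.

```python
import math

def nearest_insertion(points):
    """Textbook nearest-insertion heuristic for a closed route."""
    d = lambda a, b: math.dist(points[a], points[b])
    tour = [0]
    remaining = set(range(1, len(points)))
    while remaining:
        # unvisited point nearest to any point already on the tour
        k = min(remaining, key=lambda j: min(d(i, j) for i in tour))
        # splice it in where the added length is smallest
        m = len(tour)
        p = min(range(m), key=lambda q: d(tour[q], k)
                + d(k, tour[(q + 1) % m]) - d(tour[q], tour[(q + 1) % m]))
        tour.insert(p + 1, k)
        remaining.discard(k)
    return tour

def tour_length(tour, points):
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

# four customers on a unit square: the optimal closed route has length 4
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(tour_length(nearest_insertion(pts), pts))  # → 4.0
```

The heuristic runs in O(n²) time, which is why it suits daily re-planning of modest county-scale delivery routes.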
Method of determining local distribution of water or aqueous solutions penetrated into plastics
International Nuclear Information System (INIS)
Krejci, M.; Joks, Z.
1983-01-01
Penetrating water is labelled with tritium and its distribution is monitored autoradiographically. The novelty of the method is that the plastic containing the penetrated water or aqueous solution is cooled with liquid nitrogen, cut under a stream of liquid nitrogen, and exposed on autoradiographic film in a freezer at temperatures from -15 to -30 degC. The autoradiogram shows the distribution of water over the whole area of the section. The described method may also be used to detect water distribution in filled plastics. (J.P.)
Cathode power distribution system and method of using the same for power distribution
Williamson, Mark A; Wiedmeyer, Stanley G; Koehl, Eugene R; Bailey, James L; Willit, James L; Barnes, Laurel A; Blaskovitz, Robert J
2014-11-11
Embodiments include a cathode power distribution system and/or method of using the same for power distribution. The cathode power distribution system includes a plurality of cathode assemblies. Each cathode assembly of the plurality of cathode assemblies includes a plurality of cathode rods. The system also includes a plurality of bus bars configured to distribute current to each of the plurality of cathode assemblies. The plurality of bus bars include a first bus bar configured to distribute the current to first ends of the plurality of cathode assemblies and a second bus bar configured to distribute the current to second ends of the plurality of cathode assemblies.
A Landmark Extraction Method Associated with Geometric Features and Location Distribution
Zhang, W.; Li, J.; Wang, Y.; Xiao, Y.; Liu, P.; Zhang, S.
2018-04-01
Landmarks play an important role in spatial cognition and spatial knowledge organization. Significance-measuring models are the main method of landmark extraction, but they struggle to account for the spatial distribution pattern of landmarks because landmark significance is defined in one-dimensional space. In this paper, starting from the geometric features of ground objects, an extraction method based on target height, target gap and field of view is proposed. Using the influence regions of a Voronoi diagram, a description of the target gap is established as a geometric representation of the distribution of adjacent targets. A segmentation of the visual domain of the Voronoi K-order adjacency is then used to set up the target view under multiple views; finally, landmarks are identified through three kinds of weighted geometric features. Comparative experiments show that the results of this method largely coincide with those of a traditional significance-measuring model, verifying its effectiveness and reliability while reducing the complexity of the landmark extraction process without losing the reference value of the landmarks.
Directory of Open Access Journals (Sweden)
Giovanni Rapacciuolo
Full Text Available Conservation planners often wish to predict how species distributions will change in response to environmental changes. Species distribution models (SDMs) are the primary tool for making such predictions. Many methods are widely used; however, they all make simplifying assumptions, and predictions can therefore be subject to high uncertainty. With global change well underway, field records of observed range shifts are increasingly being used for testing SDM transferability. We used an unprecedented distribution dataset documenting recent range changes of British vascular plants, birds, and butterflies to test whether correlative SDMs based on climate change provide useful approximations of potential distribution shifts. We modelled past species distributions from climate using nine single techniques and a consensus approach, and projected the geographical extent of these models to a more recent time period based on climate change; we then compared model predictions with recent observed distributions in order to estimate the temporal transferability and prediction accuracy of our models. We also evaluated the relative effect of methodological and taxonomic variation on the performance of SDMs. Models showed good transferability in time when assessed using widespread metrics of accuracy. However, models had low accuracy in predicting where occupancy status changed between time periods, especially for declining species. Model performance varied greatly among species within major taxa, but there was also considerable variation among modelling frameworks. Past climatic associations of British species distributions retain high explanatory power when transferred to recent times, due to their accuracy in predicting large areas retained by species, but fail to capture relevant predictors of change. We strongly emphasize the need for caution when using SDMs to predict shifts in species distributions: high explanatory power on temporally-independent records
Chassin, David P [Pasco, WA; Donnelly, Matthew K [Kennewick, WA; Dagle, Jeffery E [Richland, WA
2011-12-06
Electrical power distribution control methods, electrical energy demand monitoring methods, and power management devices are described. In one aspect, an electrical power distribution control method includes providing electrical energy from an electrical power distribution system, applying the electrical energy to a load, providing a plurality of different values for a threshold at a plurality of moments in time and corresponding to an electrical characteristic of the electrical energy, and adjusting an amount of the electrical energy applied to the load responsive to an electrical characteristic of the electrical energy triggering one of the values of the threshold at the respective moment in time.
A method to measure depth distributions of implanted ions
International Nuclear Information System (INIS)
Arnesen, A.; Noreland, T.
1977-04-01
A new variant of the radiotracer method for depth distribution determinations has been tested. Depth distributions of radioactive implanted ions are determined by dissolving thin, uniform layers of evaporated material from the surface of a backing and by measuring the activity before and after the layer removal. The method has been used to determine depth distributions for 25 keV and 50 keV 57Co ions in aluminium and gold. (Auth.)
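The principle of the layer-removal variant is simple bookkeeping: the activity lost when a layer is dissolved equals the fraction of implanted ions that resided in that layer. A sketch with invented numbers, not the paper's measured data:

```python
# cumulative depth removed (nm) and remaining activity (%) after each step;
# the values are illustrative placeholders
depth_nm = [0, 20, 40, 60, 80]
activity = [100.0, 70.0, 35.0, 12.0, 3.0]

# activity lost per dissolved layer = ions implanted within that layer
profile = [a0 - a1 for a0, a1 in zip(activity, activity[1:])]
print(profile)   # → [30.0, 35.0, 23.0, 9.0]
```

Dividing each difference by the layer thickness would turn this into a concentration-per-depth profile.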
Comparison of estimation methods for fitting Weibull distribution to ...
African Journals Online (AJOL)
Comparison of estimation methods for fitting the Weibull distribution to the natural stand of Oluwa Forest Reserve, Ondo State, Nigeria. ... Journal of Research in Forestry, Wildlife and Environment ... The result revealed that the maximum likelihood method was more accurate in fitting the Weibull distribution to the natural stand.
The synchronization method for distributed small satellite SAR
Xing, Lei; Gong, Xiaochun; Qiu, Wenxun; Sun, Zhaowei
2007-11-01
One critical requirement for distributed small-satellite SAR is the precision of the trigger time when all satellites turn on their radar payloads. This trigger operation is controlled by a dedicated communication link or a GPS system. In this paper a hardware platform is proposed that integrates navigation, attitude control and data handling. Based on it, a probabilistic synchronization method with a ring architecture is proposed to meet the SAR timing requirement. To simplify the transceiver design, half-duplex communication is used. The analysis shows that the achievable time precision depends on the relative frequency drift rate, the number of satellites, the number of retries, the read error and the round-trip delay. Equipped with a crystal oscillator of 10^-11 short-term stability, the platform achieves and maintains nanosecond-level time error throughout a typical three-satellite formation experiment.
Measurement of subcritical multiplication by the interval distribution method
International Nuclear Information System (INIS)
Nelson, G.W.
1985-01-01
The prompt decay constant or the subcritical neutron multiplication may be determined by measuring the distribution of the time intervals between successive neutron counts. The distribution data is analyzed by least-squares fitting to a theoretical distribution function derived from a point reactor probability model. Published results of measurements with one- and two-detector systems are discussed. Data collection times are shorter, and statistical errors are smaller the nearer the system is to delayed critical. Several of the measurements indicate that a shorter data collection time and higher accuracy are possible with the interval distribution method than with the Feynman variance method
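For a purely Poisson source, the intervals between successive counts are exponentially distributed, and the decay constant falls out of a fit to the logarithm of the interval histogram; the point-reactor model in the abstract adds a correlated-chain term on top of this baseline. A minimal sketch on simulated data (the count rate is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
rate = 500.0                                     # hypothetical count rate, 1/s
intervals = rng.exponential(1.0 / rate, 100_000)

# histogram the intervals; for exp(-rate*t) the log-count slope is -rate
counts, edges = np.histogram(intervals, bins=60)
centers = 0.5 * (edges[:-1] + edges[1:])
good = counts > 50                               # ignore sparse tail bins
slope = np.polyfit(centers[good], np.log(counts[good]), 1)[0]
print(round(-slope / rate, 1))                   # recovered / true rate → 1.0
```

In an actual subcritical measurement the histogram deviates from a single exponential, and fitting the full point-reactor interval distribution is what yields the prompt decay constant.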
Voltage Based Detection Method for High Impedance Fault in a Distribution System
Thomas, Mini Shaji; Bhaskar, Namrata; Prakash, Anupama
2016-09-01
High-impedance faults (HIFs) on distribution feeders cannot be detected by conventional protection schemes, as HIFs are characterized by their low fault current level and waveform distortion due to the nonlinearity of the ground return path. This paper proposes a method to identify the HIFs in distribution system and isolate the faulty section, to reduce downtime. This method is based on voltage measurements along the distribution feeder and utilizes the sequence components of the voltages. Three models of high impedance faults have been considered and source side and load side breaking of the conductor have been studied in this work to capture a wide range of scenarios. The effect of neutral grounding of the source side transformer is also accounted in this study. The results show that the algorithm detects the HIFs accurately and rapidly. Thus, the faulty section can be isolated and service can be restored to the rest of the consumers.
Matsuhashi, Saeko; Doi, Hideyuki; Fujiwara, Ayaka; Watanabe, Sonoko; Minamoto, Toshifumi
2016-01-01
The environmental DNA (eDNA) method has increasingly been recognized as a powerful tool for monitoring aquatic animal species; however, its application for monitoring aquatic plants is limited. To evaluate eDNA analysis for estimating the distribution of aquatic plants, we compared its estimated distributions with eDNA analysis, visual observation, and past distribution records for the submerged species Hydrilla verticillata. Moreover, we conducted aquarium experiments using H. verticillata and Egeria densa and analyzed the relationships between eDNA concentrations and plant biomass to investigate the potential for biomass estimation. The occurrences estimated by eDNA analysis closely corresponded to past distribution records, and eDNA detections were more frequent than visual observations, indicating that the method is potentially more sensitive. The results of the aquarium experiments showed a positive relationship between plant biomass and eDNA concentration; however, the relationship was not always significant. The eDNA concentration peaked within three days of the start of the experiment in most cases, suggesting that plants do not release constant amounts of DNA. These results showed that eDNA analysis can be used for distribution surveys, and has the potential to estimate the biomass of aquatic plants.
Standard test method for distribution coefficients of inorganic species by the batch method
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This test method covers the determination of distribution coefficients of chemical species to quantify uptake onto solid materials by a batch sorption technique. It is a laboratory method primarily intended to assess sorption of dissolved ionic species subject to migration through pores and interstices of site specific geomedia. It may also be applied to other materials such as manufactured adsorption media and construction materials. Application of the results to long-term field behavior is not addressed in this method. Distribution coefficients for radionuclides in selected geomedia are commonly determined for the purpose of assessing potential migratory behavior of contaminants in the subsurface of contaminated sites and waste disposal facilities. This test method is also applicable to studies for parametric studies of the variables and mechanisms which contribute to the measured distribution coefficient. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement a...
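The batch distribution coefficient itself is a one-line computation, Kd = ((C0 - Ce)/Ce)(V/m): the sorbed fraction inferred from the drop in solution concentration, normalized by solution volume and solid mass. The numbers below are a made-up example, not values from the standard.

```python
def distribution_coefficient(c0, ce, volume_ml, mass_g):
    """Batch-sorption Kd in mL/g from initial and equilibrium concentrations."""
    return (c0 - ce) / ce * (volume_ml / mass_g)

# hypothetical batch test: 100 mL of solution over 1 g of geomedium,
# concentration falling from 10 to 2 (any consistent activity units)
print(distribution_coefficient(10.0, 2.0, 100.0, 1.0))   # → 400.0
```

A large Kd like this indicates strong uptake by the solid, i.e. slow migration of the species through the geomedium.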
Data distribution method of workflow in the cloud environment
Wang, Yong; Wu, Junjuan; Wang, Ying
2017-08-01
Cloud computing provides workflow applications with high-efficiency computation and large storage capacity, but it also brings challenges to the protection of trade secrets and other private data. Because protecting private data increases the data transmission time, this paper presents a new data allocation algorithm based on the degree of collaborative data damage, improving the existing data allocation strategy in which the security of public-cloud computation depends on the private cloud. In the initial stage, a static allocation method partitions only the non-confidential data to improve on the original allocation; in the operational phase, as data continue to be generated, the data distribution scheme is adjusted dynamically. The experimental results show that the improved method is effective in reducing the data transmission time.
Theoretical method for determining particle distribution functions of classical systems
International Nuclear Information System (INIS)
Johnson, E.
1980-01-01
An equation involving the triplet distribution function and the three-particle direct correlation function is obtained. This equation was derived using an analogue of the Ornstein-Zernike equation. The new equation is used to develop a variational method for obtaining the triplet distribution function of uniform one-component atomic fluids from the pair distribution function. The variational method may be used with the first and second equations in the YBG hierarchy to obtain pair and triplet distribution functions. It should be easy to generalize the results to the n-particle distribution function.
Three-Phase Harmonic Analysis Method for Unbalanced Distribution Systems
Directory of Open Access Journals (Sweden)
Jen-Hao Teng
2014-01-01
Full Text Available Due to the unbalanced features of distribution systems, a three-phase harmonic analysis method is essential to accurately analyze the harmonic impact on distribution systems. Moreover, harmonic analysis is the basic tool for harmonic filter design and harmonic resonance mitigation; therefore, the computational performance should also be efficient. An accurate and efficient three-phase harmonic analysis method for unbalanced distribution systems is proposed in this paper. The variations of bus voltages, bus current injections and branch currents affected by harmonic current injections can be analyzed by two relationship matrices developed from the topological characteristics of distribution systems. Some useful formulas are then derived to solve the three-phase harmonic propagation problem. After the harmonic propagation for each harmonic order is calculated, the total harmonic distortion (THD for bus voltages can be calculated accordingly. The proposed method has better computational performance, since the time-consuming full admittance matrix inverse employed by the commonly-used harmonic analysis methods is not necessary in the solution procedure. In addition, the proposed method can provide novel viewpoints in calculating the branch currents and bus voltages under harmonic pollution which are vital for harmonic filter design. Test results demonstrate the effectiveness and efficiency of the proposed method.
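Once the harmonic propagation has been solved for each harmonic order, the THD figure mentioned above follows directly from its standard definition. A minimal sketch (the function name and the per-unit harmonic magnitudes are illustrative, not values from the paper):

```python
import math

def total_harmonic_distortion(v_fund, v_harmonics):
    """THD of a bus voltage: RMS of all harmonic components
    divided by the fundamental component."""
    return math.sqrt(sum(v ** 2 for v in v_harmonics)) / v_fund

# Example: 1.0 p.u. fundamental with 3rd and 5th harmonic components.
thd = total_harmonic_distortion(1.0, [0.03, 0.04, 0.0])
print(f"THD = {thd:.2%}")  # → THD = 5.00%
```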
A Network Reconfiguration Method Considering Data Uncertainties in Smart Distribution Networks
Directory of Open Access Journals (Sweden)
Ke-yan Liu
2017-05-01
Full Text Available This work presents a method for distribution network reconfiguration with the simultaneous consideration of distributed generation (DG allocation. The uncertainties of load fluctuation before the network reconfiguration are also considered. Three optimal objectives, including minimal line loss cost, minimum Expected Energy Not Supplied, and minimum switch operation cost, are investigated. The multi-objective optimization problem is further transformed into a single-objective optimization problem by utilizing weighting factors. The proposed network reconfiguration method includes two periods. The first period is to create a feasible topology network by using binary particle swarm optimization (BPSO. Then the DG allocation problem is solved by utilizing sensitivity analysis and a Harmony Search algorithm (HSA. In the meanwhile, interval analysis is applied to deal with the uncertainties of load and devices parameters. Test cases are studied using the standard IEEE 33-bus and PG&E 69-bus systems. Different scenarios and comparisons are analyzed in the experiments. The results show the applicability of the proposed method. The performance analysis of the proposed method is also investigated. The computational results indicate that the proposed network reconfiguration algorithm is feasible.
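The weighting-factor transformation described above can be sketched as a weighted sum over the three objective values; the function name and weights are illustrative, and the objectives are assumed to be pre-normalized to comparable scales (the paper's normalization details are not given here):

```python
def scalarize(objectives, weights):
    """Weighted-sum transformation of a multi-objective vector into a
    single objective value (assumes objectives already normalized)."""
    if len(objectives) != len(weights):
        raise ValueError("one weight per objective required")
    return sum(w * f for w, f in zip(weights, objectives))

# e.g. line-loss cost, Expected Energy Not Supplied, switch operation cost
combined = scalarize([1.0, 2.0, 3.0], [0.5, 0.3, 0.2])
print(round(combined, 6))  # → 1.7
```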
Energy Technology Data Exchange (ETDEWEB)
El-Sayed, Karimat [XRD Lab, Physics Department, Faculty of Science, Ain-Shams University, Cairo (Egypt); Mohamed, Mohamed Bakr, E-mail: mbm1977@yahoo.com [XRD Lab, Physics Department, Faculty of Science, Ain-Shams University, Cairo (Egypt); Hamdy, Sh.; Ata-Allah, S.S. [Reactor Physics Department, NRC, Atomic Energy Authority, P.O. Box 13759, Cairo (Egypt)
2017-02-01
Nano-crystalline NiFe{sub 2}O{sub 4} was synthesized by citrate and sol–gel methods at different annealing temperatures, and the results were compared with a bulk sample prepared by the ceramic method. The effects of the preparation method and annealing temperature on the crystallite size, strain, bond lengths, bond angles, cation distribution and degree of inversion were investigated by X-ray powder diffraction, high-resolution transmission electron microscopy, Mössbauer effect spectrometry and vibrating sample magnetometry. The cation distributions over the octahedral and tetrahedral sites were determined using both Mössbauer effect spectroscopy and a modified Bertaut method within Rietveld refinement. The Mössbauer effect spectra showed a regular decrease in the hyperfine field with decreasing particle size. Saturation magnetization and coercivity are found to be affected by the particle size and the cation distribution. - Highlights: • Annealed nano NiFe{sub 2}O{sub 4} was prepared by different methods. • The crystallite sizes are critical. • Mössbauer spectra show a superparamagnetic doublet. • Cation distributions from the Mössbauer and Bertaut methods are consistent. • The cation distribution significantly affects the magnetic properties.
Plant Distribution Data Show Broader Climatic Limits than Expert-Based Climatic Tolerance Estimates.
Directory of Open Access Journals (Sweden)
Caroline A Curtis
Full Text Available Although increasingly sophisticated environmental measures are being applied to species distribution models, the focus remains on using climatic data to provide estimates of habitat suitability. Climatic tolerance estimates based on expert knowledge are available for a wide range of plants via the USDA PLANTS database. We aim to test how climatic tolerance inferred from plant distribution records relates to tolerance estimated by experts. Further, we use this information to identify circumstances when species distributions are more likely to approximate climatic tolerance. We compiled expert knowledge estimates of minimum and maximum precipitation and minimum temperature tolerance for over 1800 conservation plant species from the 'plant characteristics' information in the USDA PLANTS database. We derived climatic tolerance from distribution data downloaded from the Global Biodiversity Information Facility (GBIF) and corresponding climate from WorldClim. We compared expert-derived climatic tolerance to empirical estimates to find the difference between their inferred climate niches (ΔCN), and tested whether ΔCN was influenced by growth form or range size. Climate niches calculated from distribution data were significantly broader than expert-based tolerance estimates (Mann-Whitney p values << 0.001). The average plant could tolerate 24 mm lower minimum precipitation, 14 mm higher maximum precipitation, and 7 °C lower minimum temperatures based on distribution data relative to expert-based tolerance estimates. Species with larger ranges had greater ΔCN for minimum precipitation and minimum temperature. For maximum precipitation and minimum temperature, forbs and grasses tended to have larger ΔCN, while grasses and trees had larger ΔCN for minimum precipitation. Our results show that climatic limits inferred from distribution data are consistently broader than USDA PLANTS experts' estimates and likely provide more robust estimates of climatic tolerance, especially for
A method to describe inelastic gamma field distribution in neutron gamma density logging.
Zhang, Feng; Zhang, Quanying; Liu, Juntao; Wang, Xinguang; Wu, He; Jia, Wenbao; Ti, Yongzhou; Qiu, Fei; Zhang, Xiaoyang
2017-11-01
Pulsed neutron gamma density logging (NGD) is of great significance for radioprotection and density measurement in LWD; however, current methods have difficulty with quantitative calculation and single-factor analysis of the inelastic gamma field distribution. In order to clarify the NGD mechanism, a new method is developed to describe the inelastic gamma field distribution. Based on fast-neutron scattering and gamma attenuation, the inelastic gamma field distribution is characterized by the inelastic scattering cross section, the fast-neutron scattering free path, the formation density and other parameters, and the contribution of the formation parameters to the field distribution is quantitatively analyzed. The results show that the contribution of density attenuation is opposite to that of the inelastic scattering cross section and the fast-neutron scattering free path, and that as the detector spacing increases, density attenuation gradually plays the dominant role in the gamma field distribution, which means a large detector spacing is more favorable for density measurement. Besides, the relationship between density sensitivity and detector spacing was studied using this gamma field distribution, and the spacings of the near and far gamma-ray detectors were determined accordingly. The research provides theoretical guidance for tool parameter design and density determination in the pulsed neutron gamma density logging technique. Copyright © 2017 Elsevier Ltd. All rights reserved.
A method for generating skewed random numbers using two overlapping uniform distributions
International Nuclear Information System (INIS)
Ermak, D.L.; Nasstrom, J.S.
1995-02-01
The objective of this work was to implement and evaluate a method for generating skewed random numbers using a combination of uniform random numbers. The method provides a simple and accurate way of generating skewed random numbers from the specified first three moments without an a priori specification of the probability density function. We describe the procedure for generating skewed random numbers from uniform random numbers, and show that it accurately produces random numbers with the desired first three moments over a range of skewness values. We also show that in the limit of zero skewness, the distribution of random numbers is an accurate approximation to the Gaussian probability density function. Future work will use this method to provide skewed random numbers for a Langevin equation model of diffusion in skewed turbulence.
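The paper's exact parameterization of the two overlapping uniform distributions is not reproduced here; the sketch below only illustrates the underlying idea that a mixture of two overlapping uniforms with unequal widths yields a skewed sample, checked through its first and third sample moments (all ranges and weights are illustrative):

```python
import random
import statistics

def skewed_uniform_mix(a1, b1, a2, b2, p, n, seed=0):
    """Draw n samples from a mixture of two overlapping uniform
    distributions: U(a1, b1) with probability p, U(a2, b2) otherwise.
    Unequal widths make the combined density asymmetric (skewed)."""
    rng = random.Random(seed)
    return [rng.uniform(a1, b1) if rng.random() < p else rng.uniform(a2, b2)
            for _ in range(n)]

samples = skewed_uniform_mix(0.0, 1.0, 0.0, 3.0, 0.7, 100_000)
mean = statistics.fmean(samples)          # population mean is 0.8
# positive third central moment indicates positive skew
m3 = statistics.fmean((x - mean) ** 3 for x in samples)
print(round(mean, 2), m3 > 0)
```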
Comparing four methods to estimate usual intake distributions
Souverein, O.W.; Dekkers, A.L.; Geelen, A.; Haubrock, J.; Vries, de J.H.M.; Ocke, M.C.; Harttig, U.; Boeing, H.; Veer, van 't P.
2011-01-01
Background/Objectives: The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As ‘true’ usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data
Methods and Tools for Profiling and Control of Distributed Systems
Directory of Open Access Journals (Sweden)
Sukharev Roman
2017-01-01
Full Text Available The article analyzes and standardizes methods for profiling distributed systems that focus on simulation to conduct experiments and build a graph model of the system. The theory of queueing networks is used for simulation modeling of distributed systems that receive and process user requests. To automate the above method of profiling distributed systems, a software application with a modular structure, similar to a SCADA system, was developed.
An analytical transport theory method for calculating flux distribution in slab cells
International Nuclear Information System (INIS)
Abdel Krim, M.S.
2001-01-01
A transport theory method for calculating flux distributions in a slab fuel cell is described. Two coupled integral equations for the flux in fuel and moderator are obtained, assuming partial reflection at the moderator's external boundaries. The Galerkin technique is used to solve these equations. Numerical results for the average fluxes in fuel and moderator and the disadvantage factor are given. Comparison with exact numerical methods, that is, for totally reflecting moderator outer boundaries, shows that the Galerkin technique gives accurate results for the disadvantage factor and the average fluxes. (orig.)
Yang, Shan; Tong, Xiangqian
2016-01-01
Power flow calculation and short circuit calculation are the basis of theoretical research for distribution network with inverter based distributed generation. The similarity of equivalent model for inverter based distributed generation during normal and fault conditions of distribution network and the differences between power flow and short circuit calculation are analyzed in this paper. Then an integrated power flow and short circuit calculation method for distribution network with inverte...
Multi-level methods and approximating distribution functions
International Nuclear Information System (INIS)
Wilson, D.; Baker, R. E.
2016-01-01
Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
Multi-level methods and approximating distribution functions
Energy Technology Data Exchange (ETDEWEB)
Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E. [Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom)
2016-07-15
Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
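The telescoping sum behind the multi-level estimator, E[P_L] = E[P_0] + Σ_{l=1..L} E[P_l − P_{l−1}], can be sketched generically; the level function and per-level sample counts below are illustrative toys, not the paper's biochemical examples:

```python
import random

def mlmc_estimate(level_fn, max_level, samples_per_level, seed=0):
    """Multi-level Monte Carlo telescoping estimator.
    level_fn(l, u) returns the level-l approximation P_l evaluated on a
    shared uniform draw u; sharing u couples the fine and coarse samples
    inside each correction term, keeping its variance small."""
    rng = random.Random(seed)
    estimate = 0.0
    for level in range(max_level + 1):
        n = samples_per_level[level]
        acc = 0.0
        for _ in range(n):
            u = rng.random()
            fine = level_fn(level, u)
            coarse = level_fn(level - 1, u) if level > 0 else 0.0
            acc += fine - coarse          # correction term P_l - P_{l-1}
        estimate += acc / n
    return estimate

# Toy level function: P_l approximates u**2 with a bias of 2**-l,
# so corrections shrink geometrically with the level.
level_fn = lambda level, u: u * u + 2.0 ** (-level)
est = mlmc_estimate(level_fn, 5, [4000, 2000, 1000, 500, 250, 125])
print(est)  # close to E[U**2] = 1/3 plus the residual level-5 bias
```

Note how most samples are spent on the cheap, crude level 0 while the expensive corrections use progressively fewer paths, which is the source of the cost reduction described above.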
PROGRAMMING OF METHODS FOR THE NEEDS OF LOGISTICS DISTRIBUTION SOLVING PROBLEMS
Directory of Open Access Journals (Sweden)
Andrea Štangová
2014-06-01
Full Text Available Logistics has become one of the dominant factors affecting successful management, competitiveness and the mentality of the global economy. Distribution logistics materializes the connection between production and the consumer market. It uses different methodologies and methods of multicriterial evaluation and allocation. This thesis addresses the problem of the costs of securing the distribution of a product. It was therefore relevant to design a software product that would be helpful in solving problems related to distribution logistics. Elodis, an electronic distribution logistics program, was designed on the basis of a theoretical analysis of distribution logistics and an analysis of the software products market. The program uses multicriterial evaluation methods to determine the appropriate type, and mathematical and geometrical methods to determine an appropriate allocation of the distribution center, warehouse and company.
Air method measurements of apple vessel length distributions with improved apparatus and theory
Shabtal Cohen; John Bennink; Mel Tyree
2003-01-01
Studies showing that rootstock dwarfing potential is related to plant hydraulic conductance led to the hypothesis that xylem properties are also related. Vessel length distribution and other properties of apple wood from a series of varieties were measured using the 'air method' in order to test this hypothesis. Apparatus was built to measure and monitor...
Multipath interference test method for distributed amplifiers
Okada, Takahiro; Aida, Kazuo
2005-12-01
A method for testing distributed amplifiers is presented; the multipath interference (MPI) is detected as a beat spectrum between the multipath signal and the direct signal using a binary frequency-shift keying (FSK) test signal. The lightwave source is composed of a DFB-LD that is directly modulated by a pulse stream passing through an equalizer, and emits an FSK signal with a frequency deviation of about 430 MHz at a repetition rate of 80-100 kHz. The receiver consists of a photodiode and an electrical spectrum analyzer (ESA). The base-band power spectrum peak, which appears at the frequency of the FSK frequency deviation, can be converted to an amount of MPI using a calibration chart. The test method has improved the minimum detectable MPI to as low as -70 dB, compared to the -50 dB of the conventional test method. The detailed design and performance of the proposed method are discussed, including the MPI simulator for the calibration procedure, computer simulations evaluating the error caused by the FSK repetition rate and the length of the fiber under test, and experiments on single-mode fibers and a distributed Raman amplifier.
International Nuclear Information System (INIS)
Sitompul, Yos Panagaman; Shin, Hee-Sung; Park, Se-Hwan; Oh, Jong Myeong; Seo, Hee; Kim, Ho Dong
2013-01-01
An unfolding method has been developed to obtain a pin-wise source strength distribution of a 14 × 14 pressurized water reactor (PWR) spent fuel assembly. Sixteen measured gamma dose rates at 16 control rod guide tubes of an assembly are unfolded to 179 pin-wise source strengths of the assembly. The method calculates and optimizes five coefficients of the quadratic fitting function for X-Y source strength distribution, iteratively. The pin-wise source strengths are obtained at the sixth iteration, with a maximum difference between two sequential iterations of about 0.2%. The relative distribution of pin-wise source strength from the unfolding is checked using a comparison with the design code (Westinghouse APA code). The result shows that the relative distribution from the unfolding and design code is consistent within a 5% difference. The absolute value of the pin-wise source strength is also checked by reproducing the dose rates at the measurement points. The result shows that the pin-wise source strengths from the unfolding reproduce the dose rates within a 2% difference. (author)
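As a sketch of the fitting step, the code below recovers a five-coefficient quadratic X-Y function from 16 sample points by least squares. The particular quadratic form, coefficient names and synthetic data are assumptions for illustration; the paper's iterative optimization against measured guide-tube dose rates is not reproduced:

```python
import numpy as np

def fit_quadratic_source(xy, dose):
    """Least-squares fit of a five-coefficient quadratic
    s(x, y) = a + b*x + c*y + d*x**2 + e*y**2
    to values measured at the given (x, y) positions."""
    x, y = np.asarray(xy).T
    A = np.column_stack([np.ones_like(x), x, y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(dose), rcond=None)
    return coeffs

# Synthetic check: recover known coefficients from 16 sample points,
# mimicking the 16 guide-tube measurement positions.
rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(16, 2))
true = np.array([5.0, 0.3, -0.2, 1.1, 0.9])
meas = np.column_stack([np.ones(16), pts[:, 0], pts[:, 1],
                        pts[:, 0]**2, pts[:, 1]**2]) @ true
fit = fit_quadratic_source(pts, meas)
print(np.allclose(fit, true))  # → True
```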
Mathematical models and methods of assisting state subsidy distribution at the regional level
Bondarenko, Yu V.; Azarnova, T. V.; Kashirina, I. L.; Goroshko, I. V.
2018-03-01
One of the most common forms of state support in the world is subsidization. By providing direct financial support to businesses, local authorities get an opportunity to set certain performance targets. Successful achievement of such targets depends not only on the amount of the budgetary allocations, but also on the distribution mechanisms adopted by the regional authorities. Analysis of the existing mechanisms of subsidies distribution in Russian regions shows that in most cases the choice of subsidy calculation formula and its parameters depends on the experts’ subjective opinion. The authors offer a new approach to assisting subsidy distribution at the regional level, which is based on mathematical models and methods, allowing to evaluate the influence of subsidy distribution on the region’s social and economic development. The results of calculations were discussed with the regional administration representatives who confirmed their significance for decision-making in the sphere of state control.
Methods of assessing grain-size distribution during grain growth
DEFF Research Database (Denmark)
Tweed, Cherry J.; Hansen, Niels; Ralph, Brian
1985-01-01
This paper considers methods of obtaining grain-size distributions and ways of describing them. In order to collect statistically useful amounts of data, an automatic image analyzer is used, and the resulting data are subjected to a series of tests that evaluate the differences between two related distributions (before and after grain growth). The distributions are measured from two-dimensional sections, and both the data and the corresponding true three-dimensional grain-size distributions (obtained by stereological analysis) are collected. The techniques described here are illustrated by reference...
Analyzed method for calculating the distribution of electrostatic field
International Nuclear Information System (INIS)
Lai, W.
1981-01-01
An analytical method for calculating the distribution of the electrostatic field under any given axial gradient in tandem accelerators is described. This method possesses satisfactory accuracy compared with the results of numerical calculation.
Jing, Xiaoli; Cheng, Haobo; Xu, Chunyun; Feng, Yunpeng
2017-02-20
In this paper, an accurate measurement method of multiple spots' position offsets on a four-quadrant detector is proposed for a distributed aperture laser angle measurement system (DALAMS). The theoretical model is put forward, as well as the corresponding calculation method. This method includes two steps. First, as the initial estimation, integral approximation is applied to fit the distributed spots' offset function; second, the Boltzmann function is employed to compensate for the estimation error to improve detection accuracy. The simulation results attest to the correctness and effectiveness of the proposed method, and tolerance synthesis analysis of DALAMS is conducted to determine the maximum uncertainties of manufacturing and installation. The maximum angle error is less than 0.08° in the prototype distributed measurement system, which shows the stability and robustness for prospective applications.
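For context, the conventional sum-difference position estimate on a four-quadrant detector can be sketched as below. The quadrant labeling (A..D counter-clockwise from top-right) is an assumption, and the paper's integral-approximation fit and Boltzmann-function error compensation are not reproduced:

```python
def qd_offsets(q_a, q_b, q_c, q_d):
    """Normalized spot-position offsets on a four-quadrant detector
    from the four quadrant photocurrents (standard sum-difference
    estimate; valid near the detector center)."""
    total = q_a + q_b + q_c + q_d
    x = ((q_a + q_d) - (q_b + q_c)) / total   # right minus left
    y = ((q_a + q_b) - (q_c + q_d)) / total   # top minus bottom
    return x, y

# A spot shifted toward +x illuminates quadrants A and D more strongly.
print(qd_offsets(3.0, 1.0, 1.0, 3.0))  # → (0.5, 0.0)
```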
International Nuclear Information System (INIS)
Schvezov, C.E.; Samarasekera, I.; Weinberg, F.
1988-01-01
A mathematical model based on the finite element method was developed for calculating the temperature and shear stress distributions in III-V crystals grown by the LEC technique. The calculated temperatures are in good agreement with the experimental measurements. The shear stress distribution was calculated for several environmental conditions. The results showed that the magnitude and distribution of the shear stresses are highly sensitive to the crystal environment, including the thickness and temperature distribution of the boron oxide and the gas. The shear stress is also strongly influenced by the interface curvature and the crystal radius. (author) [pt
Extension of the pseudo dynamic method to test structures with distributed mass
International Nuclear Information System (INIS)
Renda, V.; Papa, L.; Bellorini, S.
1993-01-01
Those results showed very good agreement, allowing the conclusion that the proposed methodology and standard procedure can evaluate the equivalent mass matrix and external force vector needed to extend the PsD method to test structures with distributed mass, and can demonstrate the applicability of the condensed model. (author)
Distributed optimization system and method
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2003-06-10
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Extension of the Accurate Voltage-Sag Fault Location Method in Electrical Power Distribution Systems
Directory of Open Access Journals (Sweden)
Youssef Menchafou
2016-03-01
Full Text Available Accurate fault location in an Electric Power Distribution System (EPDS) is important in maintaining system reliability. Several methods have been proposed in the past. However, these methods either prove inefficient or depend on the fault type (fault classification), because they require the use of an appropriate algorithm for each fault type. In contrast to traditional approaches, an accurate impedance-based Fault Location (FL) method is presented in this paper. It is based on the voltage-sag calculation between two measurement points chosen carefully from the available strategic measurement points of the line, the network topology, and current measurements at the substation. The effectiveness and accuracy of the proposed technique are demonstrated for different fault types using a radial power flow system. The test results are obtained from numerical simulation using the data of a distribution line recognized in the literature.
System and Method for Monitoring Distributed Asset Data
Gorinevsky, Dimitry (Inventor)
2015-01-01
A computer-based monitoring system, and a monitoring method implemented in computer software, for detecting, estimating, and reporting the condition states, their changes, and anomalies for many assets. The assets are of the same type, are operated over a period of time, and are outfitted with data collection systems. The proposed monitoring method accounts for the variability of working conditions for each asset by using a regression model that characterizes asset performance. The assets are of the same type but not identical. The proposed monitoring method accounts for asset-to-asset variability; it also accounts for drifts and trends in the asset condition and data. The proposed monitoring system can perform distributed processing of massive amounts of historical data without discarding any useful information where moving all the asset data into one central computing system might be infeasible. The overall processing includes distributed preprocessing of data records from each asset to produce compressed data.
The frequency-independent control method for distributed generation systems
DEFF Research Database (Denmark)
Naderi, Siamak; Pouresmaeil, Edris; Gao, Wenzhong David
2012-01-01
In this paper a novel frequency-independent control method suitable for distributed generation (DG) is presented. This strategy is derived based on the abc/αβ and abc/dq transformations of the ac system variables. The active and reactive currents injected by the DG are contr...
Size distributions of micro-bubbles generated by a pressurized dissolution method
Taya, C.; Maeda, Y.; Hosokawa, S.; Tomiyama, A.; Ito, Y.
2012-03-01
The size of micro-bubbles is widely distributed in the range of one to several hundred micrometers and depends on the generation method, flow conditions and elapsed time after bubble generation. Although the size distribution of micro-bubbles should be taken into account to improve the accuracy of numerical simulations of flows with micro-bubbles, the variety of size distributions makes it difficult to introduce them into the simulations. On the other hand, several models such as the Rosin-Rammler equation and the Nukiyama-Tanasawa equation have been proposed to represent the size distribution of particles or droplets. The applicability of these models to the size distribution of micro-bubbles has not been examined yet. In this study, we therefore measure the size distribution of micro-bubbles generated by a pressurized dissolution method by using phase Doppler anemometry (PDA), and investigate the applicability of the available models to the size distributions of micro-bubbles. The experimental apparatus consists of a pressurized tank in which air is dissolved in liquid under high pressure, a decompression nozzle in which micro-bubbles are generated due to the pressure reduction, a rectangular duct and an upper tank. Experiments are conducted for several liquid volumetric fluxes in the decompression nozzle. Measurements are carried out in the downstream region of the decompression nozzle and in the upper tank. The experimental results indicate that (1) the Nukiyama-Tanasawa equation represents the size distribution of micro-bubbles generated by the pressurized dissolution method well, whereas the Rosin-Rammler equation fails in the representation, (2) the bubble size distribution of micro-bubbles can be evaluated by using the Nukiyama-Tanasawa equation without individual bubble diameters, when the mean bubble diameter and the skewness of the bubble distribution are given, and (3) an evaluation method of visibility based on the bubble size distribution and bubble
Fast crawling methods of exploring content distributed over large graphs
Wang, Pinghui
2018-03-15
Despite recent effort to estimate topology characteristics of large graphs (e.g., online social networks and peer-to-peer networks), little attention has been given to developing a formal crawling methodology to characterize the vast amount of content distributed over these networks. Due to the large-scale nature of these networks and a limited query rate imposed by network service providers, exhaustively crawling and enumerating content maintained by each vertex is computationally prohibitive. In this paper, we show how one can obtain content properties by crawling only a small fraction of vertices and collecting their content. We first show that when sampling is naively applied, this can produce a huge bias in content statistics (i.e., average number of content replicas). To remove this bias, one may use maximum likelihood estimation to estimate content characteristics. However, our experimental results show that this straightforward method requires sampling most vertices to obtain accurate estimates. To address this challenge, we propose two efficient estimators: the special copy estimator (SCE) and the weighted copy estimator (WCE) to estimate content characteristics using available information in sampled content. SCE uses the special content copy indicator to compute the estimate, while WCE derives the estimate based on meta-information in sampled vertices. We conduct experiments on a variety of real-world and synthetic datasets, and the results show that WCE and SCE are cost effective and also “asymptotically unbiased”. Our methodology provides a new tool for researchers to efficiently query content distributed in large-scale networks.
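The bias the abstract describes is easy to reproduce in a toy setting; the weighted estimator below is a generic Hansen-Hurwitz-style correction, not necessarily the SCE/WCE construction of the paper:

```python
import random

# Hypothetical network: each content item is replicated on c vertices.
# Crawling vertices collects content *copies*, so heavily replicated items
# are over-represented; weighting each observed copy by 1/c removes the bias.
rng = random.Random(6)
replicas = {f"item{i}": rng.choice([1, 1, 1, 2, 5, 20]) for i in range(1000)}
copies = [(item, c) for item, c in replicas.items() for _ in range(c)]

sample = rng.sample(copies, 800)                    # crawled content copies
naive = sum(c for _, c in sample) / len(sample)     # biased upward
weighted = len(sample) / sum(1.0 / c for _, c in sample)

true_mean = sum(replicas.values()) / len(replicas)  # average replica count
```

Because a copy of an item with c replicas is sampled with probability proportional to c, the naive average estimates E[c^2]/E[c] rather than E[c]; the 1/c weights undo exactly that size-biasing.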
International Nuclear Information System (INIS)
Mohammadi, Kasra; Alavi, Omid; Mostafaeipour, Ali; Goudarzi, Navid; Jalilvand, Mahdi
2016-01-01
Highlights: • The effectiveness of six numerical methods is evaluated to determine wind power density. • The more appropriate method for computing the daily wind power density is identified. • Four windy stations located in the southern part of Alberta, Canada are investigated. • The more appropriate parameter estimation method was not identical among all examined stations. - Abstract: In this study, the effectiveness of six numerical methods is evaluated to determine the shape (k) and scale (c) parameters of the Weibull distribution function for the purpose of calculating the wind power density. The selected methods are the graphical method (GP), the empirical method of Justus (EMJ), the empirical method of Lysen (EML), the energy pattern factor method (EPF), the maximum likelihood method (ML) and the modified maximum likelihood method (MML). The purpose of this study is to identify the more appropriate method for computing the wind power density at four stations distributed in the Alberta province of Canada, namely Edmonton City Center Awos, Grande Prairie A, Lethbridge A and Waterton Park Gate. To provide a complete analysis, the evaluations are performed on both daily and monthly scales. The results indicate that the precision of the computed wind power density values changes when different parameter estimation methods are used to determine the k and c parameters. The four methods EMJ, EML, EPF and ML present very favorable efficiency, while the GP method shows weak ability for all stations. However, it is found that the most effective method is not the same among stations owing to differences in the wind characteristics.
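As a rough illustration of one of the listed estimators, here is a minimal Python sketch of the empirical method of Justus (EMJ) and the Weibull wind power density it implies; the station statistics below are invented, not taken from the Alberta data:

```python
import math

def justus_weibull(mean_v, std_v):
    """Empirical method of Justus (EMJ): Weibull shape k and scale c from
    the sample mean and standard deviation of the wind speed."""
    k = (std_v / mean_v) ** (-1.086)      # valid for roughly 1 <= k <= 10
    c = mean_v / math.gamma(1.0 + 1.0 / k)
    return k, c

def wind_power_density(k, c, rho=1.225):
    """Mean wind power density (W/m^2) implied by a Weibull speed model,
    0.5 * rho * c^3 * Gamma(1 + 3/k)."""
    return 0.5 * rho * c ** 3 * math.gamma(1.0 + 3.0 / k)

# Invented station statistics (m/s):
k, c = justus_weibull(mean_v=6.0, std_v=2.5)
wpd = wind_power_density(k, c)
```

Because the power density depends on the third moment of the speed distribution, even modest differences in the fitted k and c between estimation methods can shift the computed power density noticeably, which is the sensitivity the study quantifies.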
Computerized method for X-ray angular distribution simulation in radiological systems
International Nuclear Information System (INIS)
Marques, Marcio A.; Oliveira, Henrique J.Q. de; Frere, Annie F.; Schiabel, Homero; Marques, Paulo M.A.
1996-01-01
A method to simulate the changes in the X-ray angular distribution (the Heel effect) for radiologic imaging systems is presented. The simulation method is designed to predict images for any exposure technique, considering that this distribution is the cause of the intensity variation along the radiation field
Directory of Open Access Journals (Sweden)
Mohammad Hosein Rezaei
2011-10-01
Transformers perform many functions such as voltage transformation, isolation and noise decoupling. They are indispensable components in electric power distribution systems. However, at low frequencies (50 Hz), they are among the heaviest and most expensive pieces of equipment in an electrical distribution system. Nowadays, electronic power transformers are used instead of conventional power transformers; they perform voltage transformation and power delivery in the power system by means of power electronic converters. In this paper, the structure of the distribution electronic power transformer (DEPT) is analyzed, and attention is then paid to the design of a linear-quadratic regulator (LQR) with integral action to improve the dynamic performance of the DEPT under voltage unbalance, voltage sags, voltage harmonics and voltage flicker. The presented control strategy is simulated in MATLAB/SIMULINK. In addition, the results, in terms of dc-link reference voltage and input and output voltages, clearly show that a better dynamic performance can be achieved by using the LQR method when compared to other techniques.
A method for statistically comparing spatial distribution maps
Directory of Open Access Journals (Sweden)
Reynolds Mary G
2009-01-01
Background: Ecological niche modeling is a method for estimating species distributions based on certain ecological parameters. Thus far, empirical determination of significant differences between independently generated distribution maps for a single species (maps created through equivalent processes, but with different ecological input parameters) has been challenging. Results: We describe a method for comparing model outcomes which allows a statistical evaluation of whether the strength of prediction and breadth of predicted areas are measurably different between projected distributions. To create ecological niche models for statistical comparison, we utilized GARP (Genetic Algorithm for Rule-Set Production) software to generate ecological niche models of human monkeypox in Africa. We created several models, keeping the case location input records constant for each model but varying the ecological input data. In order to assess the relative importance of each ecological parameter included in the development of the individual predicted distributions, we performed pixel-to-pixel comparisons between model outcomes and calculated the mean difference in pixel scores. We used a two-sample Student's t-test (assuming as null hypothesis that both maps were identical to each other regardless of which input parameters were used) to examine whether the mean difference in corresponding pixel scores from one map to another was greater than would be expected by chance alone. We also utilized weighted kappa statistics, frequency distributions, and percent difference to look at the disparities in pixel scores. Multiple independent statistical tests indicated precipitation as the single most important independent ecological parameter in the niche model for human monkeypox disease. Conclusion: In addition to improving our understanding of the natural factors influencing the distribution of human monkeypox disease, such pixel-to-pixel comparison
Mathematical methods linear algebra normed spaces distributions integration
Korevaar, Jacob
1968-01-01
Mathematical Methods, Volume I: Linear Algebra, Normed Spaces, Distributions, Integration focuses on advanced mathematical tools used in applications and the basic concepts of algebra, normed spaces, integration, and distributions. The publication first offers information on the algebraic theory of vector spaces and an introduction to functional analysis. Discussions focus on linear transformations and functionals, rectangular matrices, systems of linear equations, eigenvalue problems, use of eigenvectors and generalized eigenvectors in the representation of linear operators, metric and normed vector
Distributed optimization for systems design : an augmented Lagrangian coordination method
Tosserams, S.
2008-01-01
This thesis presents a coordination method for the distributed design optimization of engineering systems. The design of advanced engineering systems such as aircrafts, automated distribution centers, and microelectromechanical systems (MEMS) involves multiple components that together realize the
Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods
MacKinnon, David P.; Lockwood, Chondra M.; Williams, Jason
2010-01-01
The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal distribution. This article uses a simulation study to demonstrate that confidence limits are imbalanced because the distribution of the indirect effect is normal only in special cases. Two alternatives for improving the performance of confidence limits for the indirect effect are evaluated: (a) a method based on the distribution of the product of two normal random variables, and (b) resampling methods. In Study 1, confidence limits based on the distribution of the product are more accurate than methods based on an assumed normal distribution but confidence limits are still imbalanced. Study 2 demonstrates that more accurate confidence limits are obtained using resampling methods, with the bias-corrected bootstrap the best method overall. PMID:20157642
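A compact sketch of the resampling approach evaluated in Study 2, computing bias-corrected bootstrap confidence limits for the indirect effect a*b on simulated mediation data (all parameter values are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated mediation data X -> M -> Y with true paths a = 0.5, b = 0.4,
# so the true indirect effect is a*b = 0.2 (values are made up).
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + rng.normal(size=n)

def indirect_effect(x, m, y):
    a = stats.linregress(x, m).slope                  # X -> M path
    design = np.column_stack([m, x, np.ones(n)])
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]  # M -> Y path, X held fixed
    return a * b

ab_hat = indirect_effect(x, m, y)

# Bias-corrected (BC) bootstrap confidence limits for the indirect effect.
boot = np.empty(2000)
for j in range(2000):
    i = rng.integers(0, n, n)                         # resample cases
    boot[j] = indirect_effect(x[i], m[i], y[i])
z0 = stats.norm.ppf((boot < ab_hat).mean())           # bias-correction factor
levels = stats.norm.cdf(2 * z0 + stats.norm.ppf([0.025, 0.975]))
ci = np.percentile(boot, 100 * levels)                # (lower, upper) limits
```

The BC adjustment shifts the percentile endpoints to account for the skew of the bootstrap distribution of a*b, which is exactly the asymmetry that makes normal-theory limits imbalanced.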
Method of controlling power distribution in FBR type reactors
International Nuclear Information System (INIS)
Sawada, Shusaku; Kaneto, Kunikazu.
1982-01-01
Purpose: To attain power distribution flattening with ease by obtaining a radial power distribution of substantially constant configuration independent of the burn-up cycle. Method: As fuel burning proceeds, the radial power distribution is affected by the accumulation of fission products in the inner blanket fuel assemblies, which varies their effect as neutron-absorbing substances. Taking notice of the above fact, the power distribution is controlled in a heterogeneous FBR type reactor by varying the core residence period of the inner blanket assemblies in accordance with the charging density of the inner blanket assemblies in the reactor core. (Kawakami, Y.)
Wu, Bitao; Wu, Gang; Yang, Caiqian; He, Yi
2018-05-01
A novel damage identification method for concrete continuous girder bridges based on spatially-distributed long-gauge strain sensing is presented in this paper. First, the variation regularity of the long-gauge strain influence line of continuous girder bridges, which changes with the location of vehicles on the bridge, is studied. According to this variation regularity, a calculation method for the distribution regularity of the area of the long-gauge strain history is investigated. Second, a numerical simulation of damage identification based on the distribution regularity of the area of the long-gauge strain history is conducted, and the results indicate that this method is effective for identifying damage and is not affected by the speed, axle number and weight of vehicles. Finally, a real bridge test on a highway is conducted, and the experimental results also show that this method is very effective for identifying damage in continuous girder bridges, and the local element stiffness distribution regularity can be revealed at the same time. This identified information is useful for the maintenance of continuous girder bridges on highways.
Directory of Open Access Journals (Sweden)
Jian Hao
2018-01-01
Space charge is closely related to the trap distribution in an insulation material. The phenomenon of charge trapping and detrapping has attracted significant attention in recent years. Space charge and trap parameters are effective parameters for qualitatively assessing the ageing condition of an insulation material. In this paper, a new method for calculating the trap distribution based on double exponential fitting analysis of the charge decay process, and its application to characterizing the trap distribution of oil-impregnated insulation paper, is investigated. Compared with the common first-order exponential fitting analysis method, the improved dual-level trap method can obtain the energy level range and density of both shallow traps and deep traps simultaneously. Space charge decay analysis of insulation paper impregnated with new oil and aged oil shows that the improved trap distribution calculation method can distinguish physical defects from chemical defects. The trap density shows an increasing trend with oil ageing, especially for the deep traps mainly related to chemical defects. The deeper the trap levels that can be filled, the larger the amount of charge that can be trapped, especially under higher electric field strength. The deep trap energy level and trap density can be used to characterize ageing. When evaluating the ageing condition of oil-paper insulation using trap distribution parameters, the influence of oil performance should not be ignored.
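A minimal sketch of the double exponential fitting step on a synthetic decay curve; the amplitudes and time constants are invented, and scipy's curve_fit stands in for whatever fitting routine the authors used:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, q1, tau1, q2, tau2):
    """Dual-level trap model: fast (shallow-trap) plus slow (deep-trap)
    charge decay components."""
    return q1 * np.exp(-t / tau1) + q2 * np.exp(-t / tau2)

# Synthetic decay curve standing in for a measured space-charge decay
# (all parameter values here are made up for illustration).
t = np.linspace(0.0, 1200.0, 241)                      # decay time, s
rng = np.random.default_rng(1)
q_meas = double_exp(t, 40.0, 30.0, 25.0, 400.0) + rng.normal(0.0, 0.2, t.size)

popt, _ = curve_fit(double_exp, t, q_meas, p0=[30.0, 10.0, 30.0, 300.0])
q1, tau1, q2, tau2 = popt      # shallow- and deep-trap amplitudes/time constants
```

The two fitted time constants separate the shallow- and deep-trap contributions that a single-exponential fit lumps together, which is what lets the improved method report both trap populations at once.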
The determination of nuclear charge distributions using a Bayesian maximum entropy method
International Nuclear Information System (INIS)
Macaulay, V.A.; Buck, B.
1995-01-01
We treat the inference of nuclear charge densities from measurements of elastic electron scattering cross sections. In order to get the most reliable information from expensively acquired, incomplete and noisy measurements, we use Bayesian probability theory. Very little prior information about the charge densities is assumed. We derive a prior probability distribution which is a generalization of a form used widely in image restoration based on the entropy of a physical density. From the posterior distribution of possible densities, we select the most probable one, and show how error bars can be evaluated. These have very reasonable properties, such as increasing without bound as hypotheses about finer scale structures are included in the hypothesis space. The methods are demonstrated by using data on the nuclei ⁴He and ¹²C. (orig.)
Identification of reactor failure states using noise methods, and spatial power distribution
International Nuclear Information System (INIS)
Vavrin, J.; Blazek, J.
1981-01-01
A survey is given of the results achieved. Methodical means and programs that may be used in noise diagnostics and in the control of reactor power distribution were developed for the control computer. Statistical methods of processing the noise components of the signals of measured variables were used for identifying reactor failures. The method of neutron flux synthesis was used for modelling and evaluating the reactor power distribution. For monitoring and controlling the power distribution, a mathematical model of the reactor suitable for control computers was constructed. The use of noise analysis methods is recommended and directions of further development are shown. (J.P.)
Sensitivity Analysis of Dynamic Tariff Method for Congestion Management in Distribution Networks
DEFF Research Database (Denmark)
Huang, Shaojun; Wu, Qiuwei; Liu, Zhaoxi
2015-01-01
The dynamic tariff (DT) method is designed for the distribution system operator (DSO) to alleviate the congestions that might occur in a distribution network with high penetration of distributed energy resources (DERs). Sensitivity analysis of the DT method is crucial because of its decentralized control manner. The sensitivity analysis can obtain the changes of the optimal energy planning, and thereby the line loading profiles, over infinitely small changes of parameters by differentiating the KKT conditions of the convex quadratic programming over which the DT method is formed. Three case…
Li, Q; He, Y L; Wang, Y; Tao, W Q
2007-11-01
A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.
Maximum Likelihood-Based Methods for Target Velocity Estimation with Distributed MIMO Radar
Directory of Open Access Journals (Sweden)
Zhenxin Cao
2018-02-01
The estimation problem for target velocity is addressed in this paper in the scenario of a distributed multiple-input multiple-output (MIMO) radar system. A maximum likelihood (ML)-based estimation method is derived with knowledge of the target position. Then, in the scenario without knowledge of the target position, an iterative method is proposed to estimate the target velocity by updating the position information iteratively. Moreover, the Cramér-Rao lower bounds (CRLBs) for both scenarios are derived, and the performance degradation of velocity estimation without the position information is also expressed. Simulation results show that the proposed estimation methods can approach the CRLBs, and the velocity estimation performance can be further improved by increasing either the number of radar antennas or the information accuracy of the target position. Furthermore, compared with the existing methods, a better estimation performance can be achieved.
Joint distribution of temperature and precipitation in the Mediterranean, using the Copula method
Lazoglou, Georgia; Anagnostopoulou, Christina
2018-03-01
This study analyses the temperature and precipitation dependence among stations in the Mediterranean. The first station group is located in the eastern Mediterranean (EM) and includes two stations, Athens and Thessaloniki, while the western (WM) group includes Malaga and Barcelona. The data was organized in two time periods, the hot-dry period and the cold-wet one, each composed of 5 months. The analysis is based on a relatively new statistical technique in climatology: the Copula method. Firstly, the calculation of the Kendall tau correlation index showed that temperatures among stations are dependent during both time periods, whereas precipitation presents dependency only between the stations located within the EM or WM groups and only during the cold-wet period. Accordingly, the marginal distributions were calculated for each studied station, as they are further used by the copula method. Finally, several copula families, both Archimedean and Elliptical, were tested in order to choose the most appropriate one to model the relation of the studied data sets. Consequently, this study models the dependence of the main climate parameters (temperature and precipitation) with the Copula method. The Frank copula was identified as the best family to describe the joint distribution of temperature for the majority of station groups. For precipitation, the best copula families are BB1 and Survival Gumbel. Using the probability distribution diagrams, the probability of a combination of temperature and precipitation values between stations is estimated.
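A minimal sketch of the moment-matching step implied above, fitting a Frank copula parameter to an empirical Kendall tau; the synthetic series stand in for the station data, and scipy is assumed:

```python
import numpy as np
from scipy import stats, integrate, optimize

def frank_tau(theta):
    """Kendall's tau implied by a Frank copula with parameter theta > 0:
    tau = 1 - (4/theta) * (1 - D1(theta)), D1 the first Debye function."""
    d1 = integrate.quad(lambda t: t / np.expm1(t), 0.0, theta)[0] / theta
    return 1.0 - 4.0 / theta * (1.0 - d1)

def fit_frank(x, y):
    """Moment-match the Frank parameter to the empirical Kendall tau."""
    tau = stats.kendalltau(x, y)[0]
    theta = optimize.brentq(lambda th: frank_tau(th) - tau, 1e-6, 100.0)
    return tau, theta

rng = np.random.default_rng(2)
x = rng.normal(size=500)               # e.g. one station's anomalies
y = 0.6 * x + rng.normal(size=500)     # a positively dependent partner series
tau, theta = fit_frank(x, y)
```

Because the copula separates the dependence structure from the marginals, the same theta applies whatever marginal distributions are fitted to each station, which is what makes the approach attractive for mixed temperature/precipitation data.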
Gene tree rooting methods give distributions that mimic the coalescent process.
Tian, Yuan; Kubatko, Laura S
2014-01-01
Multi-locus phylogenetic inference is commonly carried out via models that incorporate the coalescent process to model the possibility that incomplete lineage sorting leads to incongruence between gene trees and the species tree. An interesting question that arises in this context is whether data "fit" the coalescent model. Previous work (Rosenfeld et al., 2012) has suggested that rooting of gene trees may account for variation in empirical data that has been previously attributed to the coalescent process. We examine this possibility using simulated data. We show that, in the case of four taxa, the distribution of gene trees observed from rooting estimated gene trees with either the molecular clock or with outgroup rooting can be closely matched by the distribution predicted by the coalescent model with specific choices of species tree branch lengths. We apply commonly-used coalescent-based methods of species tree inference to assess their performance in these situations. Copyright © 2013 Elsevier Inc. All rights reserved.
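The gene-tree distribution the abstract matches against is simplest in its classic three-taxon (rooted triple) form; a small sketch, assuming the standard multispecies-coalescent result that two lineages fail to coalesce in an internal branch of length T (coalescent units) with probability exp(-T):

```python
import math
import random

def topology_probs(T):
    """Rooted-triple gene-tree topology probabilities under the
    multispecies coalescent; T is the species tree's internal branch
    length in coalescent units. Returns (match, mismatch1, mismatch2)."""
    mismatch = math.exp(-T) / 3.0
    return 1.0 - 2.0 * mismatch, mismatch, mismatch

def simulate_match_freq(T, n=100_000, seed=3):
    """Monte Carlo check: the two lineages entering the internal branch
    coalesce there with prob 1 - exp(-T) (always matching the species
    tree); otherwise all three deep-coalescence topologies are equally
    likely."""
    rng = random.Random(seed)
    matches = 0
    for _ in range(n):
        if rng.random() < 1.0 - math.exp(-T):
            matches += 1                 # coalescence within the branch
        elif rng.random() < 1.0 / 3.0:
            matches += 1                 # matching deep coalescence
    return matches / n
```

Choosing branch lengths so that this predicted distribution reproduces an observed gene-tree frequency spectrum is exactly the kind of matching the simulation study exploits to mimic rooting-induced variation.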
Loss optimization in distribution networks with distributed generation
DEFF Research Database (Denmark)
Pokhrel, Basanta Raj; Nainar, Karthikeyan; Bak-Jensen, Birgitte
2017-01-01
This paper presents a novel power loss minimization approach in distribution grids considering network reconfiguration, distributed generation and storage installation. Identification of the optimum configuration in such a scenario is one of the main challenges faced by distribution system operators in highly active distribution grids. This issue is tackled by formulating a hybrid loss optimization problem solved using the interior point method. Sensitivity analysis is used to identify the optimum location of storage units. Different scenarios of reconfiguration, storage and distributed generation penetration are created to test the proposed algorithm. It is tested in a benchmark medium voltage network to show the effectiveness and performance of the algorithm. Results obtained are found to be encouraging for radial distribution systems. It shows that the power loss can be reduced by more than 30% using…
Katano, Izumi; Harada, Ken; Doi, Hideyuki; Souma, Rio; Minamoto, Toshifumi
2017-01-01
Environmental DNA (eDNA) has recently been used for detecting the distribution of macroorganisms in various aquatic habitats. In this study, we applied an eDNA method to estimate the distribution of the Japanese clawed salamander, Onychodactylus japonicus, in headwater streams. Additionally, we compared the detection of eDNA and hand-capturing methods used for determining the distribution of O. japonicus. For eDNA detection, we designed a qPCR primer/probe set for O. japonicus using the 12S rRNA region. We detected the eDNA of O. japonicus at all sites (with the exception of one), where we also observed them by hand-capturing. Additionally, we detected eDNA at two sites where we were unable to observe individuals using the hand-capturing method. Moreover, we found that eDNA concentrations and detection rates of the two water sampling areas (stream surface and under stones) were not significantly different, although the eDNA concentration in the water under stones was more varied than that on the surface. We, therefore, conclude that eDNA methods could be used to determine the distribution of macroorganisms inhabiting headwater systems by using samples collected from the surface of the water.
Distributed Interior-point Method for Loosely Coupled Problems
DEFF Research Database (Denmark)
Pakazad, Sina Khoshfetrat; Hansson, Anders; Andersen, Martin Skovgaard
2014-01-01
In this paper, we put forth distributed algorithms for solving loosely coupled unconstrained and constrained optimization problems. Such problems are usually solved using algorithms that are based on a combination of decomposition and first order methods. These algorithms are commonly very slow a...
Dekkers, A L M; Slob, W
2012-10-01
In dietary exposure assessment, statistical methods exist for estimating the usual intake distribution from daily intake data. These methods transform the dietary intake data to normal observations, eliminate the within-person variance, and then back-transform the data to the original scale. We propose Gaussian Quadrature (GQ), a numerical integration method, as an efficient way of back-transformation. We compare GQ with six published methods. One method uses a log-transformation, while the other methods, including GQ, use a Box-Cox transformation. This study shows that, for various parameter choices, the methods with a Box-Cox transformation estimate the theoretical usual intake distributions quite well, although one method, a Taylor approximation, is less accurate. Two applications--on folate intake and fruit consumption--confirmed these results. In one extreme case, some methods, including GQ, could not be applied for low percentiles. We solved this problem by modifying GQ. One method is based on the assumption that the daily intakes are log-normally distributed. Even if this condition is not fulfilled, the log-transformation performs well as long as the within-individual variance is small compared to the mean. We conclude that the modified GQ is an efficient, fast and accurate method for estimating the usual intake distribution. Copyright © 2012 Elsevier Ltd. All rights reserved.
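A sketch of the Gaussian Quadrature back-transformation idea (not the authors' exact estimator): the expectation of the back-transformed intake over the within-person error is computed with Gauss-Hermite nodes; the log transform here is a stand-in for a fitted Box-Cox transform:

```python
import math
import numpy as np

def backtransform_gq(mu_person, sigma_within, inv_transform, n_nodes=9):
    """Expectation of inv_transform(mu + sigma * Z), Z ~ N(0, 1), via
    Gauss-Hermite quadrature (probabilists' convention)."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    w = weights / weights.sum()       # raw weights sum to sqrt(2*pi)
    return float(sum(wi * inv_transform(mu_person + sigma_within * zi)
                     for zi, wi in zip(nodes, w)))

# With a log transform the result has a closed form, exp(mu + sigma^2 / 2),
# which makes the accuracy of the quadrature easy to check.
usual_intake = backtransform_gq(1.0, 0.5, math.exp)
```

A handful of nodes replaces the numerical integration over the full within-person error distribution, which is why the method is fast; evaluating this at each person's transformed mean traces out the usual intake distribution on the original scale.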
Analytical method for determining the channel-temperature distribution
International Nuclear Information System (INIS)
Kurbatov, I.M.
1992-01-01
The distribution of the predicted temperature over the volume or cross section of the active zone is important for thermal calculations of reactors taking into account random deviations. This requires a laborious calculation which includes the following steps: separation of the nominal temperature field, within the temperature range, into intervals, in each of which the temperature is set equal to its average value in the interval; determination of the number of channels whose temperature falls within each interval; construction of the channel-temperature distribution in each interval in accordance with the weighted error function; and summation of the number of channels with the same temperature over all intervals. This procedure can be greatly simplified with the help of methods which eliminate numerous variant calculations when the nominal temperature field is "refined" up to the optimal field according to different criteria. In the present paper a universal analytical method is proposed for determining, by changing the coefficients in the channel-temperature distribution function, the form of this function that reflects all conditions of operation of the elements in the active zone. The problem is solved for the temperature of the coolant at the outlet from the reactor channels
DEFF Research Database (Denmark)
Chen, Shuheng; Hu, Weihao; Su, Chi
2015-01-01
A new and efficient methodology for optimal reactive power and voltage control of distribution networks with distributed generators based on fuzzy adaptive hybrid PSO (FAHPSO) is proposed. The objective is to minimize the comprehensive cost, consisting of power loss and the operation cost of transformers… The algorithm is implemented in the VC++ 6.0 programming language, and the corresponding numerical experiments are performed on a modified version of the IEEE 33-node distribution system with two newly installed distributed generators and eight newly installed capacitor banks. The numerical results prove that the proposed method can find a more promising control schedule of all transformers, all capacitors and all distributed generators with less time consumption, compared with the other listed artificial intelligence methods.
A method for exploring the distribution of radioelements at depth using gamma-ray spectrometric data
International Nuclear Information System (INIS)
Li Qingyang
1997-01-01
Based on the inherent relation between radioelements and terrestrial heat flow, this paper theoretically shows the possibility of exploring the distribution of radioelements at depth using gamma-ray spectrometric data; a data-processing and synthesizing method has been adopted to derive the calculation formula. Practical application in the uranium mineralized area No. 2801 in Yunnan Province proves that this method is of practical value; it has been successfully applied to data processing and good results have been obtained
Effect of distributed generation installation on power loss using genetic algorithm method
Hasibuan, A.; Masri, S.; Othman, W. A. F. W. B.
2018-02-01
Injection of distributed generation (DG) into the distribution network can affect the power system significantly. The effect depends on the allocation of DG in each part of the distribution network. This paper aims to show the impact of distributed generation on distribution system losses. The main purpose of installing DG in a distribution system is to reduce power losses in the power system. Among the power system problems that can be solved with the installation of DG, the one explored in this study is the reduction of power loss in the transmission line. The approach has been applied to the IEEE 30-bus standard system and yields the optimum location and size of the DG, with a corresponding decrease in power losses in the system. Simulation results from case studies on the IEEE 30-bus standard system show that the system power loss decreased from 5.7781 MW to 1.5757 MW, i.e. to 27.27% of its original value. The simulated DG is injected at bus number 8, the bus with the lowest voltage drop.
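A toy sketch of a genetic-algorithm search for DG siting and sizing; the loss model below is a made-up stand-in for the IEEE 30-bus load flow, contrived so that DG near bus 8 gives the most relief:

```python
import random

# Hypothetical loss model: base losses 5.78 MW, relieved most by DG near
# the (assumed) weak bus 8, with a quadratic penalty for oversizing.
BUSES = list(range(1, 31))

def losses_mw(bus, size_mw):
    relief = size_mw * (1.0 - abs(bus - 8) / 30.0)
    return max(5.78 - relief + 0.08 * size_mw ** 2, 0.1)

def genetic_search(pop_size=40, gens=60, seed=4):
    rng = random.Random(seed)
    pop = [(rng.choice(BUSES), rng.uniform(0.0, 5.0)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: losses_mw(*g))        # fitness = low losses
        elite = pop[: pop_size // 2]                 # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            bus = a[0] if rng.random() < 0.5 else b[0]   # uniform crossover
            size = 0.5 * (a[1] + b[1])
            if rng.random() < 0.2:                       # mutation
                bus = rng.choice(BUSES)
                size = min(5.0, max(0.0, size + rng.gauss(0.0, 0.5)))
            children.append((bus, size))
        pop = elite + children
    return min(pop, key=lambda g: losses_mw(*g))

best_bus, best_size = genetic_search()
```

In a real study the fitness evaluation would be a full load-flow loss calculation per candidate; the GA machinery around it (selection, crossover, mutation) is unchanged.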
Directory of Open Access Journals (Sweden)
Олександр Павлович Кіркін
2017-06-01
The development of information technologies and market requirements for effective control over cargo flows force enterprises to look for new ways and methods of automated control of technological operations. For rail transportation, one of the most complicated automation tasks is the distribution of cargo flows over the sites of loading and unloading. In this article, a solution using one of the methods of artificial intelligence, fuzzy inference, is proposed. An analysis of recent publications showed that the fuzzy inference method is effective for solving similar tasks; it makes it possible to accumulate experience and is robust to temporary changes in environmental conditions. The existing methods of distributing cargo flows over the sites of loading and unloading are too simplified and can lead to incorrect decisions. The purpose of the article is to create a model of the distribution of an enterprise's cargo flows over the sites of loading and unloading based on the fuzzy inference method, and to automate the control. To achieve this objective, a mathematical model of the cargo flow distribution over the sites of loading and unloading has been built using fuzzy logic. The key input parameters of the model are: «number of loading sites», «arrival of the next set of cars», «availability of additional operations». The output parameter is «a variety of set of cars». Application of the fuzzy inference method made it possible to reduce loading time by 15% and to reduce the costs of preparatory operations before loading by 20%. Thus, this method is an effective means of increasing railway competitiveness and holds great promise. Interaction between different types of transportation and their influence on the cargo flow distribution over the sites of loading and unloading has not been considered; these sites may be busy transshipping at that very time, which is characteristic of large enterprises
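A minimal sketch of the kind of fuzzy inference involved; the membership functions and the two rules below are invented for illustration, not the model's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def choose_site(queue_len, set_size):
    """Hypothetical two-rule Mamdani-style inference: send an arriving set
    of cars to the auxiliary site when the main site's queue is 'long' AND
    the set is 'large'; decide by comparing rule firing strengths."""
    long_queue = tri(queue_len, 2, 8, 14)      # membership of 'long queue'
    large_set = tri(set_size, 10, 30, 50)      # membership of 'large set'
    to_aux = min(long_queue, large_set)        # AND = min (rule strength)
    to_main = 1.0 - max(long_queue, large_set)
    return "auxiliary" if to_aux > to_main else "main"
```

Replacing crisp thresholds with overlapping memberships like these is what lets the controller degrade gracefully near the boundary cases where the simplified existing methods flip abruptly between decisions.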
International Nuclear Information System (INIS)
Ju Yongjian; Chen Meihua; Sun Fuyin; Zhang Liang'an; Lei Chengzhi
2004-01-01
Objective: To study how the relationship between tumor control probability (TCP) or equivalent uniform dose (EUD) and the degree of dose heterogeneity changes with variable biological parameter values of the tumor. Methods: Calculation equations were derived according to the definitions of TCP and EUD. The dose distributions in the tumor were assumed to be Gaussian. The volume of the tumor was divided into several voxels, and the absorbed doses of these voxels were simulated by Monte Carlo methods. Then, with different values of radiosensitivity (α) and potential doubling time of the clonogens (Tp), the relationships between TCP or EUD and the standard deviation of dose (Sd) were evaluated. Results: The TCP-Sd curves were influenced by the variable α and Tp values, but the EUD-Sd curves showed little variation. Conclusion: When radiotherapy protocols with different dose distributions are compared, the TCP is preferable if the biological parameter values of the tumor are known exactly; otherwise the EUD is preferred
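The Monte Carlo comparison in this record can be sketched with a minimal voxel model. The linear cell-kill law, clonogen number and parameter values below are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def tcp_eud(mean_dose, s_d, alpha=0.3, n_voxels=100_000, n0=1.0e7):
    """Sample a Gaussian voxel-dose distribution and compute the Poisson
    TCP and the EUD (the uniform dose giving the same mean surviving
    fraction) under a simple linear cell-kill model SF = exp(-alpha * D)."""
    d = rng.normal(mean_dose, s_d, n_voxels)
    sf = np.exp(-alpha * d)            # surviving fraction per voxel
    tcp = np.exp(-n0 * sf.mean())      # Poisson tumor control probability
    eud = -np.log(sf.mean()) / alpha   # equivalent uniform dose
    return tcp, eud
```

By Jensen's inequality the mean surviving fraction is dominated by cold spots, so increasing the dose standard deviation Sd lowers both EUD and TCP.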
Directory of Open Access Journals (Sweden)
Massoud Tabesh
2011-07-01
Full Text Available Optimum operation of water distribution networks is one of the priorities of sustainable development of water resources, considering the issues of increasing efficiency and decreasing water losses. One of the key subjects in the optimal operational management of water distribution systems is preparing rehabilitation and replacement schemes, predicting pipe break rates and evaluating their reliability. Several approaches to predicting pipe failure rates have been presented in recent years, each of which requires special data sets. Deterministic models based on age, deterministic multivariable models and stochastic group modeling are examples of solutions that relate pipe break rates to parameters like age, material and diameter. In this paper, besides the mentioned parameters, further factors such as pipe depth and hydraulic pressure are considered as well. Pipe burst rates are then predicted using the multivariable regression method, intelligent approaches (artificial neural network and neuro-fuzzy models) and the evolutionary polynomial regression (EPR) method. To evaluate the results of the different approaches, a case study is carried out in a part of the Mashhad water distribution network. The results show the capability and advantages of the ANN and EPR methods in predicting pipe break rates, in comparison with the neuro-fuzzy and multivariable regression methods.
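The multivariable regression step can be sketched on synthetic data. The features, units and coefficients below are invented for the sketch and are not the Mashhad network data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic illustration: a multivariable linear regression relating pipe
# break rate to age, diameter, installation depth and hydraulic pressure.
n = 200
age = rng.uniform(0.0, 50.0, n)            # years (assumed range)
diameter = rng.uniform(80.0, 400.0, n)     # mm
depth = rng.uniform(0.5, 2.5, n)           # m
pressure = rng.uniform(20.0, 60.0, n)      # m head
X = np.column_stack([age, diameter, depth, pressure])

true_w = np.array([0.02, -0.001, 0.05, 0.01])        # invented coefficients
y = X @ true_w + 0.3 + rng.normal(0.0, 0.01, n)      # breaks / km / year

A = np.column_stack([X, np.ones(n)])       # design matrix with intercept
w, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit
```

On clean synthetic data the fit recovers the generating coefficients; on real break records the same machinery gives the regression baseline the paper compares against.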
Distribution of uranium in dental porcelains by means of the fission track method
International Nuclear Information System (INIS)
Shimizu, Masami; Noguchi, Kunikazu; Moriwaki, Kazunari; Sairenji, Eiko
1980-01-01
Porcelain teeth, some of which contain uranium compounds for aesthetic purposes, have been widely used in dental clinics. Hazardous effects of uranium radiation have been suggested by recent publications. In a previous study, the authors reported the uranium content of porcelain teeth and the radiation dose from it. In this study, using the fission track method, the authors examined the spatial distribution of uranium in dental porcelain teeth (4 brands) marketed in Japan. From each sample porcelain tooth, a 1-mm-thick specimen was sliced, and the uranium content was measured every 0.19 mm from the labial side to the lingual side to produce a uranium distribution chart. Higher uranium concentrations were found in Trubyte Bioblend porcelain teeth (USA), which showed an almost uniform distribution of uranium, while the three Japanese brands showed, in most cases, comparatively lower concentrations and non-uniform distributions. The ranges of uranium concentration in these brands were N.D. -- 5.2 ppm (Shofu-Ace), N.D. -- 342 ppm (Shofu-Real), N.D. -- 47 ppm (G.C. Livdent) and N.D. -- 235 ppm (Trubyte Bioblend), respectively. (author)
Directory of Open Access Journals (Sweden)
Maman Abdurohman
2017-12-01
Full Text Available This research proposed a new method to enhance Distributed Denial of Service (DDoS) attack detection in a Software Defined Network (SDN) environment. It utilized the OpenFlow controller of the SDN for DDoS attack detection using a modified, entropy-based method. The method checks whether traffic is normal or a DDoS attack by measuring the randomness of the packets, and consists of two steps: detecting the attack and checking the entropy. The results show that the new method reduces false positives when there is a temporary and sudden increase in normal traffic: the method succeeds in not flagging this as a DDoS attack. Compared to previous methods, the proposed method enhances DDoS attack detection in the SDN environment.
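The entropy check at the core of the detection step can be sketched as follows; the window construction and threshold are illustrative assumptions, not the paper's tuned values:

```python
import math
from collections import Counter

def normalized_entropy(dst_ips):
    """Shannon entropy of destination-IP frequencies in a packet window,
    normalized to [0, 1] by the maximum entropy for that many distinct IPs."""
    counts = Counter(dst_ips)
    total = len(dst_ips)
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    max_h = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h

def is_ddos(window, threshold=0.5):
    """Low entropy means most packets target one victim: suspected DDoS.
    The 0.5 threshold is an assumption for the sketch."""
    return normalized_entropy(window) < threshold
```

Normal traffic spread over many destinations keeps entropy high; a flood concentrated on one victim drives it down past the threshold.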
Analytical method for reconstruction pin to pin of the nuclear power density distribution
International Nuclear Information System (INIS)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.
2013-01-01
An accurate and efficient method for pin-by-pin reconstruction of the nuclear power density distribution, involving the analytical solution of the two-dimensional, two-group neutron diffusion equation in homogeneous nodes, is presented. The boundary conditions used for the analytical solution are the four currents or fluxes on the surfaces of the node, obtained by the Nodal Expansion Method (NEM), and the four fluxes at the vertices of the node, calculated using the finite difference method. The analytical solution found is the homogeneous distribution of the neutron flux. Detailed pin-by-pin distributions inside a fuel assembly are estimated as the product of the homogeneous flux distribution and a local heterogeneous form function; form functions for both flux and power are used. The results obtained with this method show good accuracy when compared with reference values. (author)
An Analytical Method for Determining the Load Distribution of Single-Column Multibolt Connection
Directory of Open Access Journals (Sweden)
Nirut Konkong
2017-01-01
Full Text Available The purpose of this research was to investigate the effect of geometric variables on the bolt load distribution of a cold-formed steel bolt connection. The study was conducted using experimental testing, finite element analysis, and an analytical method. The experimental study was performed using single-lap shear testing of a concentrically loaded bolt connection fabricated from G550 cold-formed steel. In the finite element analysis, shell elements were used to model the cold-formed steel plate and solid elements to model the bolt fastener, in order to study the structural behavior of the bolt connections. Material nonlinearities, contact problems, and a geometric nonlinearity procedure were used to predict the failure behavior of the bolt connections. The analytical method was based on a spring model, and a new bolt-plate interaction stiffness was proposed and verified against the experiment and the finite element model. It was applied to examine the effect of geometric variables on the single-column multibolt connection: the effects of varying bolt diameter, plate thickness, and plate thickness ratio (t2/t1) on the bolt load distribution were studied. The results of the parametric study showed that the t2/t1 ratio controlled the efficiency of the bolt load distribution more than the other parameters studied.
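The spring model for a single-column multibolt lap joint can be sketched as a small linear system: bolts act as shear springs, the plate strips between bolts as axial springs, and compatibility of plate elongations with bolt slips plus overall equilibrium determines the bolt loads. This is a generic fastener-load formulation, not the paper's calibrated bolt-plate interaction stiffness:

```python
import numpy as np

def bolt_loads(n, p, kb=1.0, k1=1.0, k2=1.0):
    """Bolt loads in a single-column lap joint with n bolts carrying total
    load p.  kb is the bolt shear stiffness; k1, k2 are the axial
    stiffnesses of the plate strips between bolts.  Compatibility between
    consecutive bolts: F[i+1]/kb - F[i]/kb = S_i/k2 - (p - S_i)/k1,
    where S_i is the cumulative load through bolt i."""
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n - 1):
        A[i, i + 1] += 1.0 / kb
        A[i, i] -= 1.0 / kb
        A[i, : i + 1] -= 1.0 / k1 + 1.0 / k2   # cumulative-load S_i terms
        b[i] = -p / k1
    A[n - 1, :] = 1.0                           # equilibrium: loads sum to p
    b[n - 1] = p
    return np.linalg.solve(A, b)
```

With identical plates (k1 = k2) the distribution is symmetric and the outer bolts carry the most load, the classic fastener-row result; changing k2/k1 (the t2/t1 effect) skews it.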
Energy Technology Data Exchange (ETDEWEB)
Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2013-10-15
The uncertainty evaluation with the statistical method is performed by repeating the transport calculation while sampling the directly perturbed nuclear data; a reliable uncertainty result can then be obtained by analyzing the results of the numerous transport calculations. A known problem in uncertainty analysis with the statistical approach is that sampling cross sections from a normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as sampling negative cross sections. Some correction methods have been noted; however, they can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data using a lognormal distribution is proposed. Criticality calculations with the sampled nuclear data are then performed, and the results are compared with those from the normal distribution conventionally used in previous studies. The statistical sampling method with the lognormal distribution was proposed to increase the sampling accuracy without negative sampling errors, and a stochastic cross section sampling and writing program was developed. For the sensitivity and uncertainty analysis, cross sections were sampled from both the normal and lognormal distributions. The uncertainties caused by the covariance of the (n,.) cross sections were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution efficiently solves the negative sampling problem referred to in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.
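The lognormal trick can be sketched directly: match the lognormal's first two moments to the evaluated cross section and its uncertainty, so every sample is strictly positive. Parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_xs_lognormal(mean, std, n):
    """Sample cross sections from a lognormal distribution whose mean and
    standard deviation match the evaluated value and its uncertainty.
    Lognormal samples are strictly positive, so the negative-sampling
    problem of the normal distribution cannot occur."""
    var = std ** 2
    sigma2 = np.log(1.0 + var / mean ** 2)   # match the first two moments
    mu = np.log(mean) - 0.5 * sigma2
    return rng.lognormal(mu, np.sqrt(sigma2), n)
```

With a 50 % relative uncertainty, normal sampling produces a few percent negative values, while the moment-matched lognormal reproduces the same mean and spread with none.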
Uncertainty Management of Dynamic Tariff Method for Congestion Management in Distribution Networks
DEFF Research Database (Denmark)
Huang, Shaojun; Wu, Qiuwei; Cheng, Lin
2016-01-01
The dynamic tariff (DT) method is designed for the distribution system operator (DSO) to alleviate congestions that might occur in a distribution network with high penetration of distributed energy resources (DERs). Uncertainty management is required for the decentralized DT method because the DT is determined based on optimal day-ahead energy planning with forecasted parameters, such as day-ahead energy prices and energy needs, which might differ from the parameters used by aggregators. The uncertainty management is to quantify and mitigate the risk of congestion when employing...
Community Based Distribution of Child Spacing Methods at ...
African Journals Online (AJOL)
uses volunteer CBD agents. Mrs. E.F. Pelekamoyo. Service Delivery Officer. National Family Welfare Council of Malawi. Private Bag 308. Lilongwe 3. Malawi. Community Based Distribution of. Child Spacing Methods ... than us at the Hospital; male motivators by talking to their male counterparts help them to accept that their ...
Information-theoretic methods for estimating of complicated probability distributions
Zong, Zhi
2006-01-01
Mixing various disciplines frequently produces results that are profound and far-reaching. Cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theoretic methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task in quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur
Directory of Open Access Journals (Sweden)
U. Filobello-Nino
2015-01-01
Full Text Available We propose an approximate solution of the T-F equation, obtained by using the nonlinearities distribution homotopy perturbation method (NDHPM). Besides, we show a comparison table between this proposed approximate solution and a numerical solution of the T-F equation, establishing the accuracy of the results.
A simple nodal force distribution method in refined finite element meshes
Energy Technology Data Exchange (ETDEWEB)
Park, Jai Hak [Chungbuk National University, Chungju (Korea, Republic of); Shin, Kyu In [Gentec Co., Daejeon (Korea, Republic of); Lee, Dong Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Cho, Seungyon [National Fusion Research Institute, Daejeon (Korea, Republic of)
2017-05-15
In finite element analyses, mesh refinement is frequently performed to obtain accurate stress or strain values or to define the geometry accurately. After mesh refinement, equivalent nodal forces should be calculated at the nodes of the refined mesh. If field variables and material properties are available at the integration points of each element, the accurate equivalent nodal forces can be calculated by adequate numerical integration. However, in certain circumstances, equivalent nodal forces cannot be calculated because field variable data are not available. In this study, a very simple nodal force distribution method is proposed: nodal forces of the original finite element mesh are distributed to the nodes of the refined mesh so as to satisfy the equilibrium conditions. The effect of element size should also be considered in determining the magnitude of the distributed nodal forces. A program was developed based on the proposed method, and several example problems were solved to verify its accuracy and effectiveness. The results show that accurate stress fields can be obtained from refined meshes using the proposed nodal force distribution method. In the example problems, the difference between the obtained maximum stress and the target stress value was less than 6 % in models with 8-node hexahedral elements and less than 1 % in models with 20-node hexahedral elements or 10-node tetrahedral elements.
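A 1D version of an equilibrium-preserving distribution can be sketched as follows; the lever-arm weighting is one simple choice consistent with the description, not necessarily the paper's exact rule:

```python
import numpy as np

def distribute_force(x0, f, fine_x):
    """Assign a coarse-mesh nodal force f acting at position x0 to the two
    refined-mesh nodes that bracket it, weighted by lever arms so that
    both the force resultant and its moment are preserved (the 1D
    equilibrium conditions)."""
    fine_x = np.asarray(fine_x, dtype=float)
    out = np.zeros_like(fine_x)
    j = np.searchsorted(fine_x, x0)      # first refined node >= x0
    if fine_x[j] == x0:                  # force lands exactly on a node
        out[j] = f
        return out
    xl, xr = fine_x[j - 1], fine_x[j]
    wr = (x0 - xl) / (xr - xl)           # lever-arm weight for right node
    out[j - 1] = (1.0 - wr) * f
    out[j] = wr * f
    return out
```

Both the sum of the distributed forces and their moment about the origin equal those of the original nodal force, which is the equilibrium condition the abstract refers to.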
Risk-based methods for reliability investments in electric power distribution systems
Energy Technology Data Exchange (ETDEWEB)
Alvehag, Karin
2011-07-01
Society relies more and more on a continuous supply of electricity. However, while underinvestment in reliability leads to an unacceptable number of power interruptions, overinvestment results in costs that are too high for society. To give incentives for a socio-economically optimal level of reliability, quality regulations have been adopted in many European countries. These quality regulations imply new financial risks for the distribution system operator (DSO), since poor reliability can reduce the DSO's allowed revenue and compensation may have to be paid to affected customers. This thesis develops a method for evaluating the incentives for reliability investments implied by different quality regulation designs. The method can be used to investigate whether socio-economically beneficial projects are also beneficial for a profit-maximizing DSO subject to a particular quality regulation design. To investigate which reinvestment projects are preferable for society and for a DSO, risk-based methods are developed. With these methods, the probability of power interruptions and their consequences can be simulated. The consequences of interruptions for the DSO depend to a large extent on the quality regulation; the consequences for the customers, and hence also for society, depend on factors such as the interruption duration and time of occurrence. The proposed risk-based methods consider extreme outage events in the risk assessments by incorporating the impact of severe weather, estimating the full probability distribution of the total reliability cost, and formulating a risk-averse strategy. Results from the case studies performed show that quality regulation design has a significant impact on reinvestment project profitability for a DSO. In order to adequately capture the financial risk that the DSO is exposed to, detailed risk-based methods, such as the ones developed in this thesis, are needed. Furthermore, when making investment decisions, a risk
Rock sampling. [method for controlling particle size distribution
Blum, P. (Inventor)
1971-01-01
A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.
Finite difference applied to the reconstruction method of the nuclear power density distribution
International Nuclear Information System (INIS)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.
2016-01-01
Highlights: • A method for reconstruction of the power density distribution is presented. • The method uses discretization by finite differences of the 2D neutron diffusion equation. • The discretization is performed on homogeneous meshes with the dimensions of a fuel cell. • The discretization is combined with flux distributions on the four node surfaces. • The maximum errors in reconstruction occur in the peripheral water region. - Abstract: In this reconstruction method the two-dimensional (2D) neutron diffusion equation is discretized by finite differences, applied to two energy groups (2G) and to meshes with fuel-pin cell dimensions. The Nodal Expansion Method (NEM) makes use of surface discontinuity factors of the node and provides the reconstruction method with the effective multiplication factor of the problem and the four surface-averaged fluxes in homogeneous nodes the size of a fuel assembly (FA). The reconstruction process combines the 2D diffusion equation discretized by finite differences with flux distributions on the four surfaces of the nodes. These distributions are obtained for each surface from a fourth-order one-dimensional (1D) polynomial expansion with five coefficients to be determined. The conditions necessary for determining the coefficients are three average fluxes on consecutive surfaces of three nodes and two fluxes at the corners between these three surface fluxes. The corner fluxes of the node are determined using a third-order 1D polynomial expansion with four coefficients. This reconstruction method uses heterogeneous nuclear parameters directly, providing the heterogeneous neutron flux distribution and the detailed nuclear power density distribution within the FAs. The results obtained with this method have good accuracy and efficiency when compared with reference values.
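The five-coefficient determination described above can be sketched concretely: on three unit-width nodes, three node-average fluxes plus the two shared corner fluxes pin down a quartic. Node positions and normalization are assumed for illustration:

```python
import numpy as np

def reconstruct_quartic(avg1, avg2, avg3, corner12, corner23):
    """Recover the five coefficients of p(x) = a0 + a1 x + ... + a4 x^4 on
    three unit-width nodes [0,1], [1,2], [2,3] from the three node-average
    fluxes and the two fluxes at the shared corners x = 1 and x = 2."""
    def avg_row(a, b):
        # average of x^k over [a, b] for k = 0..4
        return [(b**(k + 1) - a**(k + 1)) / ((k + 1) * (b - a))
                for k in range(5)]
    def point_row(x):
        return [x**k for k in range(5)]
    A = np.array([avg_row(0.0, 1.0), avg_row(1.0, 2.0), avg_row(2.0, 3.0),
                  point_row(1.0), point_row(2.0)])
    return np.linalg.solve(A, [avg1, avg2, avg3, corner12, corner23])
```

Feeding in averages and corner values computed from a known quartic returns its coefficients exactly, confirming that the five conditions are independent.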
Tarmizi, S. N. M.; Asmat, A.; Sumari, S. M.
2014-02-01
PM10 is one of the air contaminants that can be harmful to human health. Meteorological factors and changes of the monsoon season may affect the distribution of these particles. The objective of this study is to determine the temporal and spatial particulate matter (PM10) concentration distribution in the Klang Valley, Malaysia by using the Inverse Distance Weighted (IDW) method under different monsoon seasons and meteorological conditions. PM10 and meteorological data were obtained from the Malaysian Department of Environment (DOE). Particle distribution data were added to the geographic database on a seasonal basis. Temporal and spatial patterns of the PM10 concentration distribution were determined by using ArcGIS 9.3. Higher PM10 concentrations are observed during the Southwest monsoon season; the values are lower during the Northeast monsoon season. Different monsoon seasons show different meteorological conditions that affect the PM10 distribution.
International Nuclear Information System (INIS)
Tarmizi, S N M; Asmat, A; Sumari, S M
2014-01-01
PM10 is one of the air contaminants that can be harmful to human health. Meteorological factors and changes of the monsoon season may affect the distribution of these particles. The objective of this study is to determine the temporal and spatial particulate matter (PM10) concentration distribution in the Klang Valley, Malaysia by using the Inverse Distance Weighted (IDW) method under different monsoon seasons and meteorological conditions. PM10 and meteorological data were obtained from the Malaysian Department of Environment (DOE). Particle distribution data were added to the geographic database on a seasonal basis. Temporal and spatial patterns of the PM10 concentration distribution were determined by using ArcGIS 9.3. Higher PM10 concentrations are observed during the Southwest monsoon season; the values are lower during the Northeast monsoon season. Different monsoon seasons show different meteorological conditions that affect the PM10 distribution
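The IDW interpolation behind these maps can be sketched in a few lines; the power parameter is an assumption here (2 is the common default):

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse Distance Weighted interpolation: each query point receives
    a weighted mean of the station values, with weights 1 / distance**power.
    A query point that coincides with a station returns its exact value."""
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    out = []
    for q in np.atleast_2d(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0.0):                 # query sits on a station
            out.append(values[d == 0.0][0])
            continue
        w = 1.0 / d**power
        out.append(np.sum(w * values) / np.sum(w))
    return np.array(out)
```

Because the weights are positive and normalized, interpolated values always stay within the range of the station values, which is why IDW surfaces never over- or undershoot the monitoring data.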
Distributed Cooperative Search Control Method of Multiple UAVs for Moving Target
Directory of Open Access Journals (Sweden)
Chang-jian Ru
2015-01-01
Full Text Available To reduce the impact of uncertainties caused by unknown motion parameters on the search plan for moving targets, and to improve the efficiency of UAV search, a novel distributed multi-UAV cooperative search control method for moving targets is proposed in this paper. Based on the detection results of onboard sensors, the target probability map is updated using Bayesian theory. A Gaussian distribution of the target transition probability density function is introduced to calculate the predicted probability of moving target existence, so that the target probability map can be further updated in real time. A performance index function combining target cost, environment cost, and cooperative cost is constructed, and the cooperative search problem can be transformed into a central optimization problem. To improve computational efficiency, a distributed model predictive control method is presented, from which the control command of each UAV can be obtained. The simulation results verify that the proposed method avoids blind UAV search better and effectively improves the overall efficiency of the team.
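The Bayesian map update and Gaussian motion prediction can be sketched on a 1D grid; the sensor parameters and transition kernel are illustrative assumptions:

```python
import numpy as np

def update_no_detection(p_map, cell, pd=0.9):
    """Bayes update of a single-target probability map after a sensor look
    at `cell` reports no detection (detection probability pd): the looked-at
    cell loses probability mass, the rest gain it on renormalization."""
    p = p_map.copy()
    norm = 1.0 - pd * p[cell]          # P(no detection)
    p[cell] *= (1.0 - pd)
    return p / norm

def predict(p_map, kernel=(0.25, 0.5, 0.25)):
    """Prediction step for a moving target on a 1D grid: spread each cell's
    probability to its neighbours with a Gaussian-like transition kernel,
    then renormalize."""
    padded = np.pad(p_map, 1, mode="edge")
    out = (kernel[0] * padded[:-2] + kernel[1] * padded[1:-1]
           + kernel[2] * padded[2:])
    return out / out.sum()
```

The measurement step concentrates probability away from searched cells, and the prediction step diffuses it back as the target may move: exactly the interplay that makes searching for moving targets harder than for static ones.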
Neutron distribution modeling based on integro-probabilistic approach of discrete ordinates method
International Nuclear Information System (INIS)
Khromov, V.V.; Kryuchkov, E.F.; Tikhomirov, G.V.
1992-01-01
This paper describes a universal nodal method for calculating the neutron distribution in reactor and shielding problems, based on the use of influence functions and factors of locally integrated volume and surface neutron sources in phase subregions. This method avoids the limitations of the collision-probability method with respect to detailed calculation of the angular dependence of the neutron flux, scattering anisotropy and empty channels. The proposed method may be considered a modification of the S_n method with the advantage of eliminating ray effects. The theory and algorithm of the method are described, followed by examples of its application to the calculation of the neutron distribution in a three-dimensional model of a fusion reactor blanket and in a highly heterogeneous reactor with an empty channel
Ferris, Kim; Jones, Dumont
2014-03-01
Local electric fields reflect the structural and dielectric fluctuations in a semiconductor, and affect the material performance both for electron transport and carrier lifetime properties. In this paper, we use the LOCALF methodology with periodic boundary conditions to examine the local electric field distributions and its perturbations for II-VI (CdTe, Cd(1-x)Zn(x)Te) semiconductors, containing Te inclusions and small fluctuations in the local dielectric susceptibility. With inclusion of the induced-field term, the electric field distribution shows enhancements and diminishments compared to the macroscopic applied field, reflecting the microstructure characteristics of the dielectric. Learning methods are applied to these distributions to assess the spatial extent of the perturbation, and determine an electric field defined defect size as compared to its physical dimension. Critical concentrations of defects are assessed in terms of defect formation energies. This work was supported by the US Department of Homeland Security, Domestic Nuclear Detection Office, under competitively awarded contract/IAA HSHQDC-08-X-00872-e. This support does not constitute an express or implied endorsement on the part of the Gov't.
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks
Chaoyang Shi; Bi Yu Chen; William H. K. Lam; Qingquan Li
2017-01-01
Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are f...
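The fusion of point- and interval-detector estimates can be sketched with a generic precision-weighted rule; this stand-in is not the paper's exact method, which the truncated excerpt does not fully specify:

```python
def fuse_estimates(mean_a, var_a, mean_b, var_b):
    """Inverse-variance (precision-weighted) fusion of two independent
    travel-time estimates, e.g. one from point detectors and one from
    interval detectors.  The fused variance is always smaller than either
    input variance, reflecting the gain from combining sources."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    mean = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)
    var = 1.0 / (w_a + w_b)
    return mean, var
```

The fused mean leans toward the more precise source, and the fused variance quantifies the tighter travel-time distribution available for on-time-arrival path choices.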
Gottschalk, Fadri; Nowack, Bernd
2013-01-01
This article presents a method of probabilistically computing species sensitivity distributions (SSD) that is well-suited to cope with distinct data scarcity and variability. First, a probability distribution that reflects the uncertainty and variability of sensitivity is modeled for each species considered. These single species sensitivity distributions are then combined to create an SSD for a particular ecosystem. A probabilistic estimation of the risk is carried out by combining the probability of critical environmental concentrations with the probability of organisms being impacted negatively by these concentrations. To evaluate the performance of the method, we developed SSD and risk calculations for the aquatic environment exposed to triclosan. The case studies showed that the probabilistic results reflect the empirical information well, and the method provides a valuable alternative or supplement to more traditional methods for calculating SSDs based on averaging raw data and/or on using theoretical distributional forms. A comparison and evaluation with single SSD values (5th-percentile [HC5]) revealed the robustness of the proposed method. Copyright © 2012 SETAC.
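The probabilistic SSD construction can be sketched by Monte Carlo pooling: sample each species' own sensitivity distribution, pool the samples into an ecosystem-level SSD, and read off the HC5. The per-species lognormal form and the parameter values are assumptions for illustration, not the triclosan data:

```python
import numpy as np

rng = np.random.default_rng(7)

def probabilistic_hc5(species_params, n=20_000):
    """Build a species sensitivity distribution by sampling each species'
    sensitivity distribution (modeled lognormal: one (median, geometric
    standard deviation) pair per species) and pooling the samples.  The
    HC5 is the 5th percentile of the pooled distribution."""
    pooled = np.concatenate([
        rng.lognormal(np.log(median), np.log(gsd), n)
        for median, gsd in species_params
    ])
    return float(np.percentile(pooled, 5))
```

Unlike fitting a single theoretical curve to species means, the pooled samples carry each species' uncertainty and variability straight into the HC5.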
Risky Group Decision-Making Method for Distribution Grid Planning
Li, Cunbin; Yuan, Jiahang; Qi, Zhiqiang
2015-12-01
With the rapid growth of electricity use and of renewable energy, more and more research pays attention to distribution grid planning. To address the drawbacks of existing research, this paper proposes a new risky group decision-making method for distribution grid planning. Firstly, a mixed index system with qualitative and quantitative indices is built. Considering the fuzziness of language evaluation, a cloud model is chosen to realize the "quantitative to qualitative" transformation, and interval-number decision matrices are constructed according to the "3En" principle. An m-dimensional interval-number decision vector is regarded as a super cuboid in m-dimensional attribute space, and a two-level orthogonal experiment is used to arrange points uniformly and dispersedly. The number of points is determined by the test numbers of the two-level orthogonal arrays, and these points compose the distribution point set representing a decision-making project. To eliminate the influence of correlation among indices, the Mahalanobis distance is used to calculate the distance from each solution to the others, so that dynamic solutions are viewed as the reference. Secondly, since the decision-maker's attitude can affect the results, this paper defines a prospect value function based on the SNR from the Mahalanobis-Taguchi system and obtains the comprehensive prospect value of each program as well as their ranking. At last, the validity and reliability of this method are illustrated by examples, which show that the method is more valuable and superior than the others.
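The Mahalanobis distance step, which removes the effect of correlated indices, can be sketched directly (the data here are invented to illustrate the computation):

```python
import numpy as np

def mahalanobis(x, data):
    """Mahalanobis distance of point x from the mean of `data`, using the
    inverse sample covariance; correlated or differently scaled indices are
    whitened before the distance is taken."""
    data = np.asarray(data, dtype=float)
    mu = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)     # sample covariance (n - 1 norm)
    diff = np.asarray(x, dtype=float) - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))
```

Unlike the Euclidean distance, the result is invariant to rescaling of individual indices, which is exactly why it suits a mixed index system.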
Directory of Open Access Journals (Sweden)
X. Ning
2012-08-01
To resolve the above-mentioned registration difficulties, a parallel and adaptive uniform-distributed registration method for CE-1 lunar remote sensed imagery is proposed in this paper. On 6 pairs of randomly selected images, both the standard SIFT algorithm and the parallel and adaptive uniform-distributed registration method were executed, and their versatility and effectiveness were assessed. The experimental results indicate that applying the parallel and adaptive uniform-distributed registration method increases the efficiency of CE-1 lunar remote sensed imagery registration dramatically. The proposed method therefore acquires uniform-distributed registration results more effectively, and the registration difficulties, including failure to obtain results, long runtimes and non-uniform distribution, are successfully resolved.
International Nuclear Information System (INIS)
Sharma, R.B.; Ghildyal, B.P.
1976-01-01
The root distribution of wheat variety UP 301 was obtained by determining the 32 P activity in soil-root cores by two methods, viz., ignition and triacid digestion. The root distribution obtained by these two methods was compared with that obtained by the standard root-core washing procedure. The percent error in root distribution as determined by the triacid digestion method was within ±2.1 to ±9.0, as against ±5.5 to ±21.2 by the ignition method. The triacid digestion method thus proved better than the ignition method. (author)
Distributed AC power flow method for AC and AC-DC hybrid ...
African Journals Online (AJOL)
... on voltage level and R/X ratio in the formulation itself. DPFM is applied on a 10-bus, low-voltage microgrid system, giving a better voltage profile. Keywords: Microgrid (MG), Distributed Energy Resources (DER), Particle Swarm Optimization (PSO), Time varying inertia weight (TVIW), Distributed power flow method (DPFM) ...
Dynamic modeling method of the bolted joint with uneven distribution of joint surface pressure
Li, Shichao; Gao, Hongli; Liu, Qi; Liu, Bokai
2018-03-01
The dynamic characteristics of bolted joints have a significant influence on the dynamic characteristics of a machine tool. Establishing a reasonable bolted-joint dynamics model therefore helps improve the accuracy of the machine tool dynamics model. Because the pressure distribution on the joint surface is uneven under the concentrated force of the bolts, a dynamic modeling method based on this uneven pressure distribution is presented in this paper to improve the dynamic modeling accuracy of the machine tool. Analytic formulas relating the normal and tangential stiffness per unit area to the surface pressure on the joint surface can be deduced from Hertz contact theory, and the pressure distribution on the joint surface can be obtained with finite element software. Furthermore, the normal and tangential stiffness distributions on the joint surface can be obtained from the analytic formulas and the pressure distribution, and assigned to the finite element model of the joint. The theoretical and experimental mode shapes were compared qualitatively, and the theoretical and experimental modal frequencies were compared quantitatively. The comparison shows that the relative error between the first four theoretical and experimental modal frequencies is 0.2% to 4.2%, and that the first four theoretical mode shapes are similar to, and in one-to-one correspondence with, the experimental ones. The validity of the theoretical model is thus verified. The dynamic modeling method proposed in this paper can provide a theoretical basis for the accurate dynamic modeling of bolted joints in machine tools.
Synchronization Methods for Three Phase Distributed Power Generation Systems
DEFF Research Database (Denmark)
Timbus, Adrian Vasile; Teodorescu, Remus; Blaabjerg, Frede
2005-01-01
Nowadays, there is a general trend to increase electricity production using Distributed Power Generation Systems (DPGS) based on renewable energy resources such as wind, sun or hydrogen. If these systems are not properly controlled, their connection to the utility network can generate problems ... on the grid side. Therefore, considerations about power generation, safe running and grid synchronization must be made before connecting these systems to the utility network. This paper deals mainly with the grid synchronization issues of distributed systems. An overview of the synchronization methods ...
An improved in situ method for determining depth distributions of gamma-ray emitting radionuclides
International Nuclear Information System (INIS)
Benke, R.R.; Kearfott, K.J.
2001-01-01
In situ gamma-ray spectrometry determines the quantities of radionuclides in some medium with a portable detector. The main limitation of in situ gamma-ray spectrometry lies in determining the depth distribution of radionuclides. This limitation is addressed by developing an improved in situ method for determining the depth distributions of gamma-ray emitting radionuclides in large area sources. This paper implements a unique collimator design with conventional radiation detection equipment. Cylindrically symmetric collimators were fabricated to allow only those gamma-rays emitted from a selected range of polar angles (measured off the detector axis) to be detected. Positioned with its axis normal to the surface of the medium, each collimator enables the detection of gamma-rays emitted from a different range of polar angles and preferential depths. Previous in situ methods require a priori knowledge of the depth distribution shape. However, the absolute method presented in this paper determines the depth distribution as a histogram and does not rely on such assumptions. Other advantages over previous in situ methods are that this method requires only a single gamma-ray emission, provides more detailed depth information, and offers a superior ability to characterize complex depth distributions. Collimated spectrometer measurements of buried area sources demonstrated the ability of the method to yield accurate depth information. Based on the results of actual measurements, this method increases the potential of in situ gamma-ray spectrometry as an independent characterization tool in situations with unknown radionuclide depth distributions
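The unfolding step behind such a collimated measurement can be sketched as a small linear inverse problem: each collimator window k records counts c_k = Σ_j R[k][j]·a_j, where a_j is the activity in depth bin j and R couples polar-angle ranges to preferential depths. The response matrix and activities below are purely illustrative (the paper gives no numerical responses); the depth histogram is recovered by solving the system.

```python
def solve(R, c):
    """Solve R x = c by Gaussian elimination with partial pivoting."""
    n = len(R)
    A = [row[:] + [c[i]] for i, row in enumerate(R)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n + 1):
                A[r][k] -= f * A[col][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# Hypothetical response: collimator k sees depth bin j with weight R[k][j]
R = [[0.7, 0.2, 0.1],
     [0.2, 0.6, 0.2],
     [0.1, 0.3, 0.6]]
true_depth = [5.0, 3.0, 1.0]   # activity per depth bin (arbitrary units)
counts = [sum(R[k][j] * true_depth[j] for j in range(3)) for k in range(3)]
recovered = solve(R, counts)   # the depth-distribution histogram
```

In practice the counts carry statistical noise and the response matrix must be calibrated, so a regularized inversion would replace the exact solve.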
International Nuclear Information System (INIS)
Hubicki, W.; Hubicka, H.
1980-01-01
The method of basic precipitation of lanthanons was combined with the ion-exchange distribution method using ammonium acetate. As a result of 1:2 chromatogram development, good distribution results for Sm-Nd were obtained, yielding fractions of 99.9% Nd 2 O 3 and Pr 6 O 11 and 99.5% La 2 O 3. It was found that the way of packing the column greatly influenced the efficiency of ion distribution. (author)
Methods to determine fast-ion distribution functions from multi-diagnostic measurements
DEFF Research Database (Denmark)
Jacobsen, Asger Schou; Salewski, Mirko
Understanding the behaviour of fast ions in a fusion plasma is very important, since the fusion-born alpha particles are expected to be the main source of heating in a fusion power plant. Preferably, the entire fast-ion velocity-space distribution function would be measured. However, no fast ... -ion diagnostic views, it is possible to infer the distribution function using a tomography approach. Several inversion methods for solving this tomography problem in velocity space are implemented and compared. It is found that the best quality is obtained when using inversion methods which penalise steep ...
A calculation method for transient flow distribution of SCWR(CSR1000)
International Nuclear Information System (INIS)
Chen, Juan; Zhou, Tao; Chen, Jie; Liu, Liang; Muhammad, Ali Shahzad; Muhammad, Zeeshan Ali; Xia, Bangyang
2017-01-01
The supercritical water reactor CSR1000 is selected for this study. A transient flow distribution module for parallel channels is developed, which is used to solve the unsteady nonlinear equations. The incorporated programs of SCAC-CSR1000 are executed under normal and abnormal operating conditions. The analysis shows that: 1. the transient flow distribution module can incorporate the parallel-channel flow calculation, with an error of less than 0.1%; 2. after a total loss of coolant flow, the flow in each channel shows a downward trend; 3. in the event of a flow-increase accident, the coolant flow in the first channel shows an increasing trend.
Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru
2016-03-30
As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of the outliers defined by a hierarchical clustering algorithm, FlexDice, was considered to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates with high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice assigns each outlier a rank in the hierarchy, which relates to the sparsity of the distribution. In this study, we define the lowest rank (first-ranked), a medium rank (second-ranked), and the highest rank (third-ranked) outliers, respectively. For instance, the first-ranked outliers are located in conformational space far away from the clusters (highly sparse distribution), whereas the third-ranked outliers are near the clusters (a moderately sparse distribution). To achieve an efficient conformational search, resampling from the outliers with a given rank is performed. As demonstrations, this method was applied to several model systems: alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and glutamine-binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD highly accelerated the exploration of conformational space by expanding its edges. In contrast, the third-ranked OFLOOD intensively reproduced local transitions among neighboring metastable states. For quantitative evaluation of the sampled snapshots, free energy calculations were performed with a combination of umbrella samplings, providing rigorous landscapes of the biomolecules. © 2015 Wiley Periodicals, Inc.
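The ranking idea above (sparser regions rank first as restart candidates) can be illustrated with a toy density criterion. FlexDice is a hierarchical clustering algorithm, so the neighbour-count rule below is only a crude stand-in for its sparsity ranks; the 2D "conformations" are invented.

```python
def neighbour_counts(points, radius=1.5):
    """Count neighbours within `radius` for each point (a crude sparsity measure)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    r2 = radius * radius
    return [sum(1 for j, q in enumerate(points) if j != i and dist2(p, q) <= r2)
            for i, p in enumerate(points)]

def rank_outliers(points, radius=1.5):
    """Indices sorted sparsest-first: candidates to restart sampling from."""
    counts = neighbour_counts(points, radius)
    return sorted(range(len(points)), key=lambda i: counts[i])

# A dense cluster plus one remote point (the 'first-ranked outlier')
cluster = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3), (0.3, 0.2)]
remote = [(5.0, 5.0)]
points = cluster + remote
ranking = rank_outliers(points)   # ranking[0] is the sparsest point
```

In the actual method, structures like `points[ranking[0]]` would seed restarted molecular dynamics runs.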
99Tc in the environment. Sources, distribution and methods
International Nuclear Information System (INIS)
Garcia-Leon, Manuel
2005-01-01
99 Tc is a β-emitter, E max =294 keV, with a very long half-life (T 1/2 =2.11 x 10 5 y). It is mainly produced in the fission of 235 U and 239 Pu, at a yield of about 6%. This yield, together with its long half-life, makes it a significant nuclide throughout the nuclear fuel cycle, from which it can be introduced into the environment at different rates depending on the cycle step. A gross estimate shows that, adding all the possible sources, at least 2000 TBq had been released into the environment up to 2000, and that by the mid-1990s some 64000 TBq had been produced worldwide. Nuclear explosions have liberated some 160 TBq into the environment. In this work, the environmental distribution of 99 Tc as well as the methods for its determination are discussed. Emphasis is put on the environmental relevance of 99 Tc, mainly with regard to the future committed radiation dose received by the population and to the problem of nuclear waste management. Its determination at environmental levels is a challenging task; for that reason, special mention is made of mass spectrometric methods for its measurement. (author)
Score Function of Distribution and Revival of the Moment Method
Czech Academy of Sciences Publication Activity Database
Fabián, Zdeněk
2016-01-01
Roč. 45, č. 4 (2016), s. 1118-1136 ISSN 0361-0926 R&D Projects: GA MŠk(CZ) LG12020 Institutional support: RVO:67985807 Keywords: characteristics of distributions * data characteristics * general moment method * Huber moment estimator * parametric methods * score function Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.311, year: 2016
Methods of Run-Time Error Detection in Distributed Process Control Software
DEFF Research Database (Denmark)
Drejer, N.
In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition ... of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation ... and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design ...
Quantification of the spatial strain distribution of scoliosis using a thin-plate spline method.
Kiriyama, Yoshimori; Watanabe, Kota; Matsumoto, Morio; Toyama, Yoshiaki; Nagura, Takeo
2014-01-03
The objective of this study was to quantify the three-dimensional spatial strain distribution of a scoliotic spine by nonhomogeneous transformation without using a statistically averaged reference spine. The shape of the scoliotic spine was determined from computed tomography images from a female patient with adolescent idiopathic scoliosis. The shape of the scoliotic spine was enclosed in a rectangular grid, and symmetrized using a thin-plate spline method according to the node positions of the grid. The node positions of the grid were determined by numerical optimization to satisfy symmetry. The obtained symmetric spinal shape was enclosed within a new rectangular grid and distorted back to the original scoliotic shape using a thin-plate spline method. The distorted grid was compared to the rectangular grid that surrounded the symmetrical spine. Cobb's angle was reduced from 35° in the scoliotic spine to 7° in the symmetrized spine, and the scoliotic shape was almost fully symmetrized. The scoliotic spine showed a complex Green-Lagrange strain distribution in three dimensions. The vertical and transverse compressive/tensile strains in the frontal plane were consistent with the major scoliotic deformation. The compressive, tensile and shear strains on the convex side of the apical vertebra were opposite to those on the concave side. These results indicate that the proposed method can be used to quantify the three-dimensional spatial strain distribution of a scoliotic spine, and may be useful in quantifying the deformity of scoliosis. © 2013 Elsevier Ltd. All rights reserved.
New method for extracting tumors in PET/CT images based on the probability distribution
International Nuclear Information System (INIS)
Nitta, Shuhei; Hontani, Hidekata; Hukami, Tadanori
2006-01-01
In this report, we propose a method for extracting tumors from PET/CT images by referring to the probability distribution of pixel values in the PET image. In the proposed method, first, the organs that normally take up fluorodeoxyglucose (FDG) (e.g., the liver, kidneys, and brain) are extracted. Then, the tumors are extracted from the images. The distribution of pixel values in PET images differs in each region of the body. Therefore, the threshold for detecting tumors is adaptively determined by referring to the distribution. We applied the proposed method to 37 cases and evaluated its performance. This report also presents the results of experiments comparing the proposed method and another method in which the pixel values are normalized for extracting tumors. (author)
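The adaptive thresholding idea can be sketched as follows. The mean-plus-k-standard-deviations rule and the example uptake values are assumptions for illustration only, not the paper's actual criterion, which refers to the full probability distribution of pixel values per body region.

```python
import statistics

def adaptive_threshold(pixels, k=2.0):
    """Region-specific threshold from the regional pixel-value distribution."""
    mu = statistics.fmean(pixels)
    sigma = statistics.pstdev(pixels)
    return mu + k * sigma

def extract_tumor_pixels(region):
    """Keep pixels exceeding the region's own adaptive threshold."""
    t = adaptive_threshold(region)
    return [p for p in region if p > t]

# Two body regions with different background FDG uptake (invented values):
# the same fixed threshold would fail on one of them, the adaptive one does not.
liver = [2.0, 2.1, 1.9, 2.2, 2.0, 8.5]   # high background, one hot spot
lung  = [0.5, 0.4, 0.6, 0.5, 0.5, 3.0]   # low background, one hot spot
```

Here `extract_tumor_pixels(liver)` and `extract_tumor_pixels(lung)` each isolate their hot spot despite the very different background levels.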
Application of autoradiographic methods for contaminant distribution studies in soils
International Nuclear Information System (INIS)
Povetko, O.G.; Higley, K.A.
2000-01-01
In order to determine the physical location of contaminants in soil, solidified soil 'thin' sections, which preserve the undisturbed structural characteristics of the original soil, were prepared. This paper describes the application of different autoradiographic methods to identify the distribution of selected nuclides along key structural features of sample soils and the sizes of 'hot particles' of contaminant. These autoradiographic methods included contact autoradiography using CR-39 (Homalite Plastics) plastic alpha track detectors and neutron-induced autoradiography that produced fission fragment tracks in Lexan (Thrust Industries, Inc.) plastic detectors. Intact soil samples containing weapons-grade plutonium from the Rocky Flats Environmental Test Site and control samples from outside the site were used in thin soil section preparation. The distribution of actinide particles was observed and analyzed through the soil section depth profile from the surface to a depth of 15 cm. The combination of the two autoradiographic methods made it possible to distinguish alpha-emitting particles of natural U, 239+240 Pu and non-fissile alpha-emitters. The locations of 990 alpha 'stars' caused by 239+240 Pu and 241 Am 'hot particles' were recorded, the particles were sized, and their size-frequency, depth and activity distributions were analyzed. Several large colloidal conglomerates of 239+240 Pu and 241 Am 'hot particles' were found in the soil profile. Their alpha and fission fragment 'star' images were microphotographed. (author)
Development of advanced methods for planning electric energy distribution systems. Final report
Energy Technology Data Exchange (ETDEWEB)
Goenen, T.; Foote, B.L.; Thompson, J.C.; Fagan, J.E.
1979-10-01
An extensive search was made to identify and collect reports published in the open literature that describe distribution planning methods and techniques. In addition, a questionnaire was prepared and sent to a large number of electric power utility companies. Many of these companies were visited and/or their distribution planners interviewed to identify and describe the distribution system planning methods and techniques used by these companies and other commercial entities. Distribution system planning models were reviewed, and a set of new mixed-integer programming models was developed for the optimal expansion of distribution systems. The models help the planner select: (1) optimum substation locations; (2) optimum substation expansions; (3) optimum substation transformer sizes; (4) optimum load transfers between substations; and (5) optimum feeder routes and sizes, subject to a set of specified constraints. The models permit following existing rights-of-way and avoid areas where feeders and substations cannot be constructed. The results of computer runs were analyzed for adequacy in serving projected loads within regulation limits for both normal and emergency operation.
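The substation-siting subproblem of such expansion models can be illustrated with a toy uncapacitated version, solved here by brute-force enumeration rather than mixed-integer programming. All costs and the site/load layout below are hypothetical; the report's actual models also handle transformer sizing, load transfers and feeder routing.

```python
from itertools import combinations

def site_substations(build_cost, supply_cost, n_sites):
    """Pick the set of substation sites minimising build + load-supply cost.

    build_cost[s]     : fixed cost of building at candidate site s
    supply_cost[s][l] : cost of serving load l from site s
    Each load is served by its cheapest open site (capacities ignored here).
    """
    sites = range(len(build_cost))
    loads = range(len(supply_cost[0]))
    best = None
    for chosen in combinations(sites, n_sites):
        cost = sum(build_cost[s] for s in chosen)
        cost += sum(min(supply_cost[s][l] for s in chosen) for l in loads)
        if best is None or cost < best[0]:
            best = (cost, chosen)
    return best

build = [10.0, 12.0, 8.0]
supply = [[1.0, 4.0, 6.0],    # site 0 -> loads 0..2
          [5.0, 1.0, 2.0],    # site 1
          [6.0, 5.0, 1.0]]    # site 2
cost, chosen = site_substations(build, supply, 2)
```

A real planning model would express the same choice with binary build variables and solve it with an MILP solver, which scales far beyond what enumeration can handle.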
Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method
DEFF Research Database (Denmark)
Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua
2014-01-01
... the predictive likelihood of the new upcoming data, especially when the amount of training data is small. The Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we proposed a global variational inference-based method for approximately ... calculating the posterior distributions of the parameters in the DMM analytically. In this paper, we extend our previous study of the DMM and propose an algorithm to calculate the predictive distribution of the DMM with the local variational inference (LVI) method. The true predictive distribution of the DMM ... is analytically intractable. By considering the concave property of the multivariate inverse beta function, we introduce an upper bound to the true predictive distribution. As the global minimum of this upper bound exists, the problem is reduced to seeking an approximation to the true predictive distribution ...
Communication Systems and Study Method for Active Distribution Power systems
DEFF Research Database (Denmark)
Wei, Mu; Chen, Zhe
With the involvement and evolution of communication technologies in contemporary power systems, the applications of modern communication technologies in distribution power systems are becoming increasingly important. In this paper, the International Organization for Standardization (ISO) ... reference seven-layer model of communication systems and the main communication technologies and protocols on each corresponding layer are introduced. Some newly developed communication techniques, like Ethernet, are discussed with reference to their possible applications in distributed power systems. ... The suitability of the communication technology to the distribution power system with active renewable-energy-based generation units is discussed. Subsequently, typical possible communication systems are studied by simulation. In this paper, a novel method of integrating communication system impact into power ...
Bias-correction of CORDEX-MENA projections using the Distribution Based Scaling method
Bosshard, Thomas; Yang, Wei; Sjökvist, Elin; Arheimer, Berit; Graham, L. Phil
2014-05-01
Within the Regional Initiative for the Assessment of the Impact of Climate Change on Water Resources and Socio-Economic Vulnerability in the Arab Region (RICCAR), led by UN-ESCWA, CORDEX RCM projections for the Middle East North Africa (MENA) domain are used to drive hydrological impact models. Bias-correction of the newly available CORDEX-MENA projections is a central part of this project. In this study, the distribution based scaling (DBS) method has been applied to 6 regional climate model projections driven by 2 RCP emission scenarios. The DBS method uses a quantile mapping approach and features a conditional temperature correction dependent on the wet/dry state in the climate model data. The CORDEX-MENA domain is particularly challenging for bias-correction as it spans very diverse climates showing pronounced dry and wet seasons. Results show that the regional climate models simulate too low temperatures and often have a displaced rainfall band compared to WATCH ERA-Interim forcing data in the reference period 1979-2008. DBS is able to correct the temperature biases as well as some aspects of the precipitation biases. Special focus is given to the analysis of the influence of the dry-frequency bias (i.e. climate models simulating too few rain days) on the bias-corrected projections and on the modification of the climate change signal by the DBS method.
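The quantile-mapping core of such a correction can be sketched with a purely empirical version. The actual DBS method fits parametric distributions and conditions the temperature correction on the wet/dry state; the minimal form below only maps a model value to the observed value at the same empirical quantile, with invented reference data.

```python
from bisect import bisect_left

def quantile_map(model_ref, obs_ref, value):
    """Map a model value to the observed value at the same empirical quantile."""
    m = sorted(model_ref)
    o = sorted(obs_ref)
    q = bisect_left(m, value) / len(m)          # model-climate quantile of `value`
    idx = min(int(q * len(o)), len(o) - 1)      # same quantile in the observations
    return o[idx]

# Toy reference period: the model runs ~2 degrees too cold
model_ref = [8.0, 9.0, 10.0, 11.0, 12.0]
obs_ref   = [10.0, 11.0, 12.0, 13.0, 14.0]
corrected = quantile_map(model_ref, obs_ref, 10.0)   # a projected model value
```

For a non-uniform bias (e.g. cold tail worse than warm tail), this mapping applies a different shift at each quantile, which a simple mean-bias correction cannot do.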
A new hydraulic regulation method on district heating system with distributed variable-speed pumps
International Nuclear Information System (INIS)
Wang, Hai; Wang, Haiying; Zhu, Tong
2017-01-01
Highlights: • A hydraulic regulation method is presented for district heating with distributed variable-speed pumps. • Information and automation technologies are utilized to support the proposed method. • A new hydraulic model is developed for distributed variable-speed pumps. • A new optimization model is developed based on a genetic algorithm. • Two scenarios of a multi-source looped system illustrate the method. - Abstract: Compared with the hydraulic configuration based on a conventional central circulating pump, a district heating system with a distributed variable-speed-pump configuration can often save 30–50% of the power consumed by circulating pumps with frequency inverters. However, the hydraulic regulation of a distributed variable-speed-pump configuration can be more complicated than ever, since all distributed pumps need to be adjusted to their designated flow rates. Especially in a multi-source looped heating network, where the distributed pumps have strongly coupled and severely nonlinear hydraulic connections with each other, it is rather difficult to maintain hydraulic balance during regulation. In this paper, with the help of advanced automation and information technologies, a new hydraulic regulation method is proposed to achieve on-site hydraulic balance for district heating systems with a distributed variable-speed-pump configuration. The proposed method comprises a new hydraulic model, developed to suit the distributed variable-speed-pump configuration, and a calibration model with a genetic algorithm. By carrying out the proposed method step by step, the flow rates of all distributed pumps can be progressively adjusted to their designated values. A hypothetical district heating system with 2 heat sources and 10 substations is taken as a case study to illustrate the feasibility of the proposed method. Two scenarios were investigated respectively. In Scenario I, the
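The step-by-step adjustment toward designated flow rates can be caricatured with a linear coupling model and a multiplicative fixed-point update. Everything below is illustrative: the paper's actual hydraulic model is nonlinear and its calibration uses a genetic algorithm, whereas this sketch only shows why coupled pumps must be iterated rather than set once.

```python
def flows(speeds, coupling):
    """Hydraulically coupled flows: flow_i = sum_j coupling[i][j] * speed_j."""
    n = len(speeds)
    return [sum(coupling[i][j] * speeds[j] for j in range(n)) for i in range(n)]

def regulate(targets, coupling, iters=200):
    """Scale each pump toward its designated flow rate, step by step.

    Setting pump i alone disturbs pump j's flow through the coupling,
    so repeated small corrections are needed to reach hydraulic balance.
    """
    speeds = targets[:]                       # initial guess
    for _ in range(iters):
        f = flows(speeds, coupling)
        speeds = [s * t / fi for s, t, fi in zip(speeds, targets, f)]
    return speeds

coupling = [[1.0, 0.1],       # invented, mildly coupled two-pump network
            [0.1, 1.0]]
targets = [2.0, 3.0]          # designated flow rates
speeds = regulate(targets, coupling)
final = flows(speeds, coupling)
```

After convergence, `final` matches the designated flows even though each pump's speed differs from its own target value, which is exactly the coupling effect the paper's method must handle.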
Jin, Lingyun; Zhang, Guangming; Zheng, Xiang
2015-02-01
A key step in sludge treatment is sludge dewatering. However, activated sludge is generally very difficult to dewater. Sludge dewatering performance is largely affected by the sludge moisture distribution. Sludge disintegration can destroy the sludge structure and cell walls, thereby changing the sludge floc structure and moisture distribution and thus affecting the dewatering performance. In this article, the disintegration methods were ultrasound treatment, K2FeO4 oxidation and KMnO4 oxidation. The degree of disintegration (DDCOD), the sludge moisture distribution and the final water content of the sludge cake after centrifuging were measured. Results showed that all three disintegration methods were effective, and that K2FeO4 oxidation was more efficient than KMnO4 oxidation. The content of free water increased obviously with K2FeO4 and KMnO4 oxidation, while it decreased with ultrasound treatment; the changes in free water and interstitial water showed opposite trends. The content of bound water decreased with K2FeO4 oxidation, increased slightly with KMnO4 oxidation, and increased obviously with ultrasound treatment. The water content of the sludge cake after centrifuging decreased with K2FeO4 oxidation, did not change with KMnO4 oxidation, and increased obviously with ultrasound treatment. In summary, ultrasound treatment deteriorated the sludge dewaterability, while K2FeO4 and KMnO4 oxidation improved it. Copyright © 2014. Published by Elsevier B.V.
The last picture show? Timing and order of movie distribution channels
Hennig-Thurau, Thorsten; Henning, Victor; Sattler, Henrik; Eggers, Felix; Houston, Mark B.
2007-01-01
Movies and other media goods are traditionally distributed across distinct sequential channels (e.g., theaters, home video, video on demand). The optimality of the currently employed timing and order of channel openings has become a matter of contentious debate among both industry experts and
Projection methods for the analysis of molecular-frame photoelectron angular distributions
International Nuclear Information System (INIS)
Grum-Grzhimailo, A.N.; Lucchese, R.R.; Liu, X.-J.; Pruemper, G.; Morishita, Y.; Saito, N.; Ueda, K.
2007-01-01
A projection method is developed for extracting the nondipole contribution from the molecular frame photoelectron angular distributions of linear molecules. A corresponding convenient parametric form for the angular distributions is derived. The analysis was performed for the N 1s photoionization of the NO molecule a few eV above the ionization threshold. No detectable nondipole contribution was found for the photon energy of 412 eV
Research on distributed optical fiber sensing data processing method based on LabVIEW
Li, Zhonghu; Yang, Meifang; Wang, Luling; Wang, Jinming; Yan, Junhong; Zuo, Jing
2018-01-01
The pipeline leak detection and leak location problem has received extensive attention in industry. In this paper, a distributed optical fiber sensing system is designed for a heat supply pipeline, and the data processing method for distributed optical fiber sensing based on LabVIEW is studied in detail. The hardware system includes a laser, sensing optical fiber, wavelength division multiplexer, photoelectric detector, data acquisition card and computer. The software system, developed in LabVIEW, adopts wavelet denoising to process the temperature information, which improves the SNR. By extracting characteristic values from the fiber temperature information, the system realizes temperature measurement, leak location, and storage and querying of measurement signals. Compared with the traditional negative pressure wave or acoustic signal methods, the distributed optical fiber temperature measurement system can measure several temperatures in one measurement and locate the leak point accurately. It has broad application prospects.
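The processing chain (denoise, then locate the anomaly along the fiber) can be sketched as follows. A moving average stands in for the wavelet denoising stage, and the temperature trace, positions and baseline are invented; the real system extracts characteristic values from Raman/Brillouin scattering measurements.

```python
def moving_average(signal, w=3):
    """Crude stand-in for the wavelet denoising stage."""
    half = w // 2
    out = []
    for i in range(len(signal)):
        win = signal[max(0, i - half): i + half + 1]
        out.append(sum(win) / len(win))
    return out

def locate_leak(temps, positions, baseline):
    """Leak location = position of the largest denoised temperature anomaly."""
    smooth = moving_average(temps)
    anomaly = [t - baseline for t in smooth]
    i = max(range(len(anomaly)), key=lambda k: anomaly[k])
    return positions[i]

positions = [m * 10.0 for m in range(10)]          # metres along the pipeline
temps = [20.0, 20.1, 19.9, 20.0, 26.0, 20.2,       # hot-water leak near 40 m
         20.0, 19.8, 20.1, 20.0]
leak_at = locate_leak(temps, positions, baseline=20.0)
```

Because the whole fiber is sampled at once, the position index maps directly to a distance along the pipeline, which is the advantage over negative-pressure-wave timing methods.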
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
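The two advocated statistics follow directly from the empirical cumulative distribution of unsigned errors. A minimal sketch with invented error data (the thresholds and confidence level are arbitrary choices a benchmarker would make):

```python
def p_below(abs_errors, threshold):
    """Probability that a new calculation has an absolute error below `threshold`."""
    return sum(e < threshold for e in abs_errors) / len(abs_errors)

def max_error_at_confidence(abs_errors, level=0.95):
    """Error amplitude not exceeded with the chosen confidence level (ECDF quantile)."""
    s = sorted(abs_errors)
    idx = min(int(level * len(s)), len(s) - 1)   # smallest index reaching `level`
    return s[idx]

# Hypothetical model-vs-reference errors: note they are neither normal
# nor zero-centered, which is why mean-error summaries mislead.
errors = [0.1, -0.3, 0.05, 1.2, -0.7, 0.2, 0.4, -0.15, 0.6, -0.25]
abs_err = [abs(e) for e in errors]
```

With this data, `p_below(abs_err, 0.5)` answers "how often is this method good enough?" and `max_error_at_confidence(abs_err)` answers "how wrong can it be?", which is exactly the information the mean unsigned error hides.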
Uniform distribution and quasi-Monte Carlo methods discrepancy, integration and applications
Kritzer, Peter; Pillichshammer, Friedrich; Winterhof, Arne
2014-01-01
The survey articles in this book focus on number theoretic point constructions, uniform distribution theory, and quasi-Monte Carlo methods. As deterministic versions of the Monte Carlo method, quasi-Monte Carlo rules enjoy increasing popularity, with many fruitful applications in mathematical practice, as for example in finance, computer graphics, and biology.
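The core idea, replacing random Monte Carlo points with a deterministic low-discrepancy point set, can be shown with the base-2 van der Corput sequence, the simplest such construction. The test integrand is an arbitrary smooth choice for illustration.

```python
def van_der_corput(n, base=2):
    """n-th term of the base-b van der Corput low-discrepancy sequence
    (the digits of n are mirrored around the radix point)."""
    q, denom = 0.0, 1.0
    while n:
        n, r = divmod(n, base)
        denom *= base
        q += r / denom
    return q

def integrate(f, points):
    """Quasi-Monte Carlo estimate: a plain average over the point set."""
    return sum(f(x) for x in points) / len(points)

# Integrate f(x) = 3x^2 over [0, 1]; the exact value is 1.
N = 2048
qmc_pts = [van_der_corput(i) for i in range(1, N + 1)]
qmc_est = integrate(lambda x: 3 * x * x, qmc_pts)
```

By the Koksma-Hlawka inequality, the error is bounded by the variation of f times the star discrepancy of the point set, which for van der Corput decays like (log N)/N, faster than the N^(-1/2) of random sampling.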
Correction of measured multiplicity distributions by the simulated annealing method
International Nuclear Information System (INIS)
Hafidouni, M.
1993-01-01
Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs
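Simulated annealing applied to a toy unfolding problem of this kind can be sketched as follows. The response matrix mixing neighbouring multiplicity bins is invented, the cooling schedule is the simplest linear one, and the cost is a plain sum of squared residuals; the actual correction of S-Pb multiplicity data is of course far more elaborate.

```python
import math
import random

def smear(t, R):
    """Observed spectrum predicted from a true spectrum t through response R."""
    return [sum(R[i][j] * t[j] for j in range(len(t))) for i in range(len(R))]

def cost(t, R, observed):
    return sum((m - o) ** 2 for m, o in zip(smear(t, R), observed))

def anneal(R, observed, steps=20000, t0=1.0, seed=1):
    rng = random.Random(seed)
    n = len(R[0])
    cur = [sum(observed) / n] * n               # flat initial guess
    cur_cost = cost(cur, R, observed)
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-9      # linear cooling schedule
        cand = cur[:]
        j = rng.randrange(n)
        cand[j] = max(0.0, cand[j] + rng.uniform(-1.0, 1.0))
        c = cost(cand, R, observed)
        # accept downhill moves always, uphill with Boltzmann probability
        if c < cur_cost or rng.random() < math.exp(-(c - cur_cost) / temp):
            cur, cur_cost = cand, c
    return cur, cur_cost

# Hypothetical detector response mixing neighbouring multiplicity bins
R = [[0.8, 0.2, 0.0],
     [0.2, 0.6, 0.2],
     [0.0, 0.2, 0.8]]
true_mult = [10.0, 6.0, 2.0]
observed = smear(true_mult, R)
corrected, final_cost = anneal(R, observed)
```

The early high-temperature phase lets the search escape poor starting configurations; the late near-zero-temperature phase settles it into a spectrum whose smeared prediction matches the observation.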
Rauscher, Sarah; Neale, Chris; Pomès, Régis
2009-10-13
Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
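The simulated-tempering ingredient shared by ST and STDR, treating temperature as a dynamic variable so a single walker can heat up, cross barriers, and cool back down, can be sketched on a one-dimensional double well. The tempering weights are omitted here because the toy system is symmetric; in real applications they must be estimated, which is precisely the burden STDR is designed to ease.

```python
import math
import random

def potential(x):
    """Double-well potential with a barrier of height 5 at x = 0."""
    return 5.0 * (x * x - 1.0) ** 2

def simulated_tempering(steps=50000, temps=(0.2, 2.0), seed=7):
    rng = random.Random(seed)
    x, ti = 1.0, 0                # start in the right-hand well, cold
    visited = []
    for _ in range(steps):
        # ordinary Metropolis move in x at the current temperature
        cand = x + rng.uniform(-0.5, 0.5)
        d_u = potential(cand) - potential(x)
        if rng.random() < math.exp(min(0.0, -d_u / temps[ti])):
            x = cand
        # tempering move: attempt to switch temperature
        tj = 1 - ti
        d = -potential(x) * (1.0 / temps[tj] - 1.0 / temps[ti])
        if rng.random() < math.exp(min(0.0, d)):
            ti = tj
        visited.append(x)
    return visited

samples = simulated_tempering()
```

At the cold temperature alone the walker would stay trapped in one well; the excursions to the hot temperature let it cross the barrier, so both wells appear in `samples`, the random walk in temperature driving a random walk in potential energy described above.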
Karian, Zaven A
2000-01-01
Throughout the physical and social sciences, researchers face the challenge of fitting statistical distributions to their data. Although the study of statistical modelling has made great strides in recent years, the number and variety of distributions to choose from, all with their own formulas, tables, diagrams, and general properties, continue to create problems. For a specific application, which of the dozens of distributions should one use? What if none of them fit well? Fitting Statistical Distributions helps answer those questions. Focusing on techniques used successfully across many fields, the authors present all of the relevant results related to the Generalized Lambda Distribution (GLD), the Generalized Bootstrap (GB), and Monte Carlo simulation (MC). They provide the tables, algorithms, and computer programs needed for fitting continuous probability distributions to data in a wide variety of circumstances, covering bivariate as well as univariate distributions, and including situations where moments do...
Method of imaging the electrical conductivity distribution of a subsurface
Johnson, Timothy C.
2017-09-26
A method of imaging electrical conductivity distribution of a subsurface containing metallic structures with known locations and dimensions is disclosed. Current is injected into the subsurface to measure electrical potentials using multiple sets of electrodes, thus generating electrical resistivity tomography measurements. A numeric code is applied to simulate the measured potentials in the presence of the metallic structures. An inversion code is applied that utilizes the electrical resistivity tomography measurements and the simulated measured potentials to image the subsurface electrical conductivity distribution and remove effects of the subsurface metallic structures with known locations and dimensions.
New method for exact measurement of thermal neutron distribution in elementary cell
International Nuclear Information System (INIS)
Takac, S.M.; Krcevinac, S.B.
1966-06-01
Exact measurement of the thermal neutron density distribution in an elementary cell requires knowledge of the perturbations introduced into the cell by the measuring device. A new method has been developed that places special emphasis on evaluating these perturbations by measuring the response to perturbations deliberately introduced in the elementary cell. The unperturbed distribution was obtained by extrapolation to zero perturbation. The final distributions for different lattice pitches were compared with a THERMOS-type calculation. Very good agreement was reached, resolving the long-standing disagreement between THERMOS calculations and measured density distributions (author)
New method for exact measurement of thermal neutron distribution in elementary cell
Energy Technology Data Exchange (ETDEWEB)
Takac, S M; Krcevinac, S B [Institute of nuclear sciences Boris Kidric, Vinca, Beograd (Yugoslavia)
1966-06-15
Exact measurement of the thermal neutron density distribution in an elementary cell requires knowledge of the perturbations introduced into the cell by the measuring device. A new method has been developed that places special emphasis on evaluating these perturbations by measuring the response to perturbations deliberately introduced in the elementary cell. The unperturbed distribution was obtained by extrapolation to zero perturbation. The final distributions for different lattice pitches were compared with a THERMOS-type calculation. Very good agreement was reached, resolving the long-standing disagreement between THERMOS calculations and measured density distributions (author)
Methods of Run-Time Error Detection in Distributed Process Control Software
DEFF Research Database (Denmark)
Drejer, N.
In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition...... and constraint evaluation is designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...... of error detection methods includes a high-level software specification. This has the purpose of illustrating that the design can be used in practice....
Directory of Open Access Journals (Sweden)
Qiao Wei
2017-01-01
Full Text Available Deep neural networks (DNNs) have recently yielded strong results on a range of applications. Training these DNNs using a cluster of commodity machines is a promising approach since training is time consuming and compute-intensive. Furthermore, putting DNN tasks into containers of clusters would enable broader and easier deployment of DNN-based algorithms. Toward this end, this paper addresses the problem of scheduling DNN tasks in the containerized cluster environment. Efficiently scheduling data-parallel computation jobs like DNN over containerized clusters is critical for job performance, system throughput, and resource utilization. It becomes even more challenging with complex workloads. We propose a scheduling method called Deep Learning Task Allocation Priority (DLTAP), which performs scheduling decisions in a distributed manner; each scheduling decision takes the aggregation degree of parameter-server tasks and worker tasks into account, in particular to reduce cross-node network transmission traffic and, correspondingly, decrease the DNN training time. We evaluate the DLTAP scheduling method using a state-of-the-art distributed DNN training framework on 3 benchmarks. The results show that the proposed method can reduce cross-node network traffic by an average of 12% and decrease the DNN training time even with a cluster of low-end servers.
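The aggregation-degree idea described above can be illustrated with a toy priority score. The function, field names, and weights below are assumptions for illustration, not the paper's DLTAP definition: nodes that already host tasks of the same job score higher, so parameter-server/worker traffic stays on-node:

```python
# Illustrative placement priority (names and weights are hypothetical):
# reward colocation with tasks of the same job, plus spare CPU capacity.
def placement_priority(node, job, free_cpu_weight=1.0, colocation_weight=5.0):
    colocated = sum(1 for t in node["tasks"] if t["job"] == job)
    return colocation_weight * colocated + free_cpu_weight * node["free_cpu"]

nodes = [
    {"name": "n1", "free_cpu": 8, "tasks": [{"job": "dnn-a"}]},
    {"name": "n2", "free_cpu": 12, "tasks": []},
]
# n1 scores 5*1 + 8 = 13, n2 scores 0 + 12 = 12: colocation wins.
best = max(nodes, key=lambda n: placement_priority(n, "dnn-a"))
```

With these weights, one colocated task outweighs n2's extra free CPU, steering the new task onto the node already running the job.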
Control and operation of distributed generation in distribution systems
DEFF Research Database (Denmark)
Mahat, Pukar; Chen, Zhe; Bak-Jensen, Birgitte
2011-01-01
Many distribution systems nowadays have significant penetration of distributed generation (DG) and thus, islanding operation of these distribution systems is becoming a viable option for economical and technical reasons. The DG should operate optimally during both grid-connected and island...... algorithm, which uses the average rate of change of frequency (Af5) and real power shift (RPS), in the islanded mode. RPS will increase or decrease the power set point of the generator with increasing or decreasing system frequency, respectively. Simulation results show that the proposed method can operate...
International Nuclear Information System (INIS)
Zhao Xuefeng; Wang Chuanke; Hu Feng; Kuang Longyu; Wang Zhebin; Li Sanwei; Liu Shengye; Jiang Gang
2011-01-01
The spatial distribution of backscattered light is very important for understanding its production. An experimental method for measuring the spatial distribution of full-aperture backscattered light is based on a circular PIN array composed of concentric orbicular multi-PIN detectors. An image of the spatial distribution of full-aperture SBS backscattered light was obtained with this method in laser-hohlraum target interaction experiments at 'Shenguang II'. A preliminary method to measure the spatial distribution of full-aperture backscattered light has thus been established. (authors)
Problem-Solving Methods for the Prospective Development of Urban Power Distribution Network
Directory of Open Access Journals (Sweden)
A. P. Karpenko
2014-01-01
Full Text Available This article succeeds A. P. Karpenko's and A. I. Kuzmina's publication titled "A mathematical model of urban distribution electro-network considering its future development" (electronic scientific and technical magazine "Science and education", No. 5, 2014). The article offers a model of an urban power distribution network as a set of transformer and distribution substations and cable lines. All elements of the network and new consumers are determined by vectors of parameters associated with them. A problem of urban power distribution network design, taking into account the prospective development of the city, is presented as a problem of discrete programming. It consists in deciding on the optimal option to connect new consumers to the power supply network, on the number and sites of new substations to build, and on the option to include them in the power supply network. Two methods, namely a reduction method (reducing the problem to a set of nested tasks of global minimization) and a decomposition method, are offered to solve the problem. In the reduction method the problem of prospective development of the power supply network breaks into three subtasks of smaller dimension: a subtask to define the number and sites of new transformer and distribution substations, a subtask to define the option to connect new consumers to the power supply network, and a subtask to include new substations in the power supply network. The vector of the varied parameters is broken into three subvectors consistent with the subtasks. Each subtask is solved over an area of admissible vector values of the varied parameters at the fixed components of the subvectors obtained when solving the higher subtasks. In the decomposition method the task is presented as a set of three subtasks, similar to the reduction method, and a coordination problem. The coordination problem specifies the sequence of the subtask solutions and defines the moment of calculation termination. Coordination is realized by
Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi
2016-02-01
Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates.
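The best-performing strategy reported above, fitting a parametric CDF to the cumulative distribution at the sampling-interval bounds by non-linear least squares, can be sketched as follows. The distribution, its parameters, and the interval grid are illustrative assumptions, not the paper's data:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
# Hypothetical "true" retention-time law used to simulate observations.
true_dist = stats.lognorm(s=0.5, scale=np.exp(3.0))
times = true_dist.rvs(500, random_state=rng)

# Pre-set sampling time-intervals (hypothetical), as in interval-censored data.
edges = np.arange(0.0, 125.0, 5.0)
counts, _ = np.histogram(times, bins=edges)
ecdf = np.cumsum(counts) / len(times)     # empirical CDF at interval upper bounds

def resid(params):
    """Residuals between the parametric CDF and the empirical CDF."""
    s, scale = params
    return stats.lognorm.cdf(edges[1:], s=s, scale=scale) - ecdf

fit = optimize.least_squares(resid, x0=[1.0, 10.0],
                             bounds=([1e-3, 1e-3], [10.0, 1000.0]))
s_hat, scale_hat = fit.x
```

Because the whole cumulative curve is matched rather than a single representative point per interval, the recovered parameters stay close to the simulating distribution even with coarse 5-unit intervals.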
A Study of Economical Incentives for Voltage Profile Control Method in Future Distribution Network
Tsuji, Takao; Sato, Noriyuki; Hashiguchi, Takuhei; Goda, Tadahiro; Tange, Seiji; Nomura, Toshio
In a future distribution network, it is difficult to maintain system voltage because a large number of distributed generators are introduced into the system. The authors have proposed a “voltage profile control method” using power factor control of distributed generators in previous work. However, an economical disbenefit is caused by the decrease in active power when the power factor is controlled in order to increase the reactive power. Therefore, proper incentives must be given to the customers that cooperate with the voltage profile control method. Thus, in this paper, we develop new rules that can decide the economical incentives for these customers. The method is tested on a one-feeder distribution network model and its effectiveness is shown.
Energy Technology Data Exchange (ETDEWEB)
Tung, Wu-Hsiung, E-mail: wstong@iner.gov.tw; Lee, Tien-Tso; Kuo, Weng-Sheng; Yaur, Shung-Jung
2017-03-15
Highlights: • An optimization method for axial enrichment distribution in a BWR fuel was developed. • Block coordinate descent method is employed to search for optimal solution. • Scoping libraries are used to reduce computational effort. • Optimization search space consists of enrichment difference parameters. • Capability of the method to find optimal solution is demonstrated. - Abstract: An optimization method has been developed to search for the optimal axial enrichment distribution in a fuel assembly for a boiling water reactor core. The optimization method features: (1) employing the block coordinate descent method to find the optimal solution in the space of enrichment difference parameters, (2) using scoping libraries to reduce the amount of CASMO-4 calculation, and (3) integrating a core critical constraint into the objective function that is used to quantify the quality of an axial enrichment design. The objective function consists of the weighted sum of core parameters such as shutdown margin and critical power ratio. The core parameters are evaluated by using SIMULATE-3, and the cross section data required for the SIMULATE-3 calculation are generated by using CASMO-4 and scoping libraries. The application of the method to a 4-segment fuel design (with the highest allowable segment enrichment relaxed to 5%) demonstrated that the method can obtain an axial enrichment design with improved thermal limit ratios and objective function value while satisfying the core design constraints and core critical requirement through the use of an objective function. The use of scoping libraries effectively reduced the number of CASMO-4 calculations, from 85 to 24, in the 4-segment optimization case. An exhaustive search was performed to examine the capability of the method in finding the optimal solution for a 4-segment fuel design. The results show that the method found a solution very close to the optimum obtained by the exhaustive search. The number of
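The block-coordinate-descent search over a discrete parameter space can be sketched generically. The objective, the 4-segment layout, and the candidate grid below are illustrative stand-ins, not the CASMO-4/SIMULATE-3 evaluation used in the paper:

```python
import numpy as np

def block_coordinate_descent(f, x0, blocks, candidates, sweeps=10):
    """Cycle over blocks of variables; within each block, try a discrete set
    of candidate values while holding the other blocks fixed, and keep the
    best (loosely analogous to a scoping-library search)."""
    x = np.array(x0, dtype=float)
    for _ in range(sweeps):
        for block in blocks:
            best_val, best_x = f(x), x.copy()
            for cand in candidates:
                trial = x.copy()
                trial[block] = cand
                v = f(trial)
                if v < best_val:
                    best_val, best_x = v, trial
            x = best_x
    return x, f(x)

# Toy objective standing in for the weighted core-parameter sum: distance of
# per-segment enrichments (hypothetical values) from a target axial profile.
target = np.array([3.2, 4.4, 4.8, 2.6])
f = lambda x: float(np.sum((x - target) ** 2))
grid = np.arange(1.0, 5.01, 0.2)        # allowed segment enrichments up to 5%
x, fx = block_coordinate_descent(f, [3.0] * 4, [[0], [1], [2], [3]], grid)
```

Because this toy objective is separable, one sweep already places each segment on the grid point nearest its target; real coupled objectives need the repeated sweeps.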
Maadooliat, Mehdi; Gao, Xin; Huang, Jianhua Z.
2012-01-01
Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? and What is the order of the local sequence-structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called Lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu.edu/~madoliat/LagSVD) that can be used to produce informative animations. © The Author 2012. Published by Oxford University Press.
Maadooliat, Mehdi
2012-08-27
Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? and What is the order of the local sequence-structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called Lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu.edu/~madoliat/LagSVD) that can be used to produce informative animations. © The Author 2012. Published by Oxford University Press.
A Method for Medical Diagnosis Based on Optical Fluence Rate Distribution at Tissue Surface.
Hamdy, Omnia; El-Azab, Jala; Al-Saeed, Tarek A; Hassan, Mahmoud F; Solouma, Nahed H
2017-09-20
Optical differentiation is a promising tool in biomedical diagnosis mainly because of its safety. The optical parameters' values of biological tissues differ according to the histopathology of the tissue and hence could be used for differentiation. The optical fluence rate distribution on tissue boundaries depends on the optical parameters. So, providing image displays of such distributions can provide a visual means of biomedical diagnosis. In this work, an experimental setup was implemented to measure the spatially-resolved steady state diffuse reflectance and transmittance of native and coagulated chicken liver and native and boiled breast chicken skin at 635 and 808 nm wavelengths laser irradiation. With the measured values, the optical parameters of the samples were calculated in vitro using a combination of modified Kubelka-Munk model and Bouguer-Beer-Lambert law. The estimated optical parameters values were substituted in the diffusion equation to simulate the fluence rate at the tissue surface using the finite element method. Results were verified with Monte-Carlo simulation. The results obtained showed that the diffuse reflectance curves and fluence rate distribution images can provide discrimination tools between different tissue types and hence can be used for biomedical diagnosis.
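The two closed-form ingredients named above, the Kubelka-Munk model and the Bouguer-Beer-Lambert law, can be written down directly; the numerical inputs below are placeholders for illustration, not the paper's measured values:

```python
import math

def kubelka_munk(r_inf):
    """Kubelka-Munk function: the absorption-to-scattering ratio K/S for an
    optically thick layer with measured diffuse reflectance r_inf."""
    return (1.0 - r_inf) ** 2 / (2.0 * r_inf)

def beer_lambert_transmittance(mu_t_per_mm, thickness_mm):
    """Collimated transmittance from the Bouguer-Beer-Lambert law,
    with total attenuation coefficient mu_t in 1/mm."""
    return math.exp(-mu_t_per_mm * thickness_mm)

ks = kubelka_munk(0.4)                       # e.g. 40% diffuse reflectance
t = beer_lambert_transmittance(1.2, 2.0)     # hypothetical sample, 2 mm thick
```

In a workflow like the paper's, K/S from reflectance and the exponential attenuation from transmittance together constrain the absorption and scattering coefficients fed into the diffusion equation.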
A Method for Medical Diagnosis Based on Optical Fluence Rate Distribution at Tissue Surface
Directory of Open Access Journals (Sweden)
Omnia Hamdy
2017-09-01
Full Text Available Optical differentiation is a promising tool in biomedical diagnosis mainly because of its safety. The optical parameters’ values of biological tissues differ according to the histopathology of the tissue and hence could be used for differentiation. The optical fluence rate distribution on tissue boundaries depends on the optical parameters. So, providing image displays of such distributions can provide a visual means of biomedical diagnosis. In this work, an experimental setup was implemented to measure the spatially-resolved steady state diffuse reflectance and transmittance of native and coagulated chicken liver and native and boiled breast chicken skin at 635 and 808 nm wavelengths laser irradiation. With the measured values, the optical parameters of the samples were calculated in vitro using a combination of modified Kubelka-Munk model and Bouguer-Beer-Lambert law. The estimated optical parameters values were substituted in the diffusion equation to simulate the fluence rate at the tissue surface using the finite element method. Results were verified with Monte-Carlo simulation. The results obtained showed that the diffuse reflectance curves and fluence rate distribution images can provide discrimination tools between different tissue types and hence can be used for biomedical diagnosis.
An improved method for calculating force distributions in moment-stiff timber connections
DEFF Research Database (Denmark)
Ormarsson, Sigurdur; Blond, Mette
2012-01-01
An improved method for calculating force distributions in moment-stiff metal dowel-type timber connections is presented, a method based on use of three-dimensional finite element simulations of timber connections subjected to moment action. The study that was carried out aimed at determining how...... the slip modulus varies with the angle between the direction of the dowel forces and the fibres in question, as well as how the orthotropic stiffness behaviour of the wood material affects the direction and the size of the forces. It was assumed that the force distribution generated by the moment action...
Directory of Open Access Journals (Sweden)
Sheng Wanxing
2016-01-01
Full Text Available Considering the randomness of the output power of distributed generation (DG), a reliability evaluation model based on sequential Monte Carlo simulation (SMCS) for distribution systems with DG is proposed. Operating states of the distribution system can be sampled by SMCS in chronological order, and the corresponding output power of DG can be generated. The proposed method has been tested on feeder F4 of IEEE-RBTS Bus 6. The results show that reliability evaluation of a distribution system considering the uncertainty of the output power of DG can be effectively implemented by SMCS.
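The chronological sampling at the heart of SMCS can be sketched for a single component: alternate exponentially distributed up- and down-times and accumulate the downtime fraction. The MTTF/MTTR values are illustrative, and a real feeder study would combine many such component histories:

```python
import random

def sequential_mc_unavailability(mttf_h, mttr_h, years=2000, seed=0):
    """Chronological (sequential) Monte Carlo for one repairable component:
    draw exponential times-to-failure and times-to-repair in order and
    return the simulated downtime fraction (unavailability)."""
    rng = random.Random(seed)
    t, downtime, horizon = 0.0, 0.0, years * 8760.0
    while t < horizon:
        t += rng.expovariate(1.0 / mttf_h)        # up-time until next failure
        repair = rng.expovariate(1.0 / mttr_h)    # down-time under repair
        downtime += min(repair, max(horizon - t, 0.0))
        t += repair
    return downtime / horizon

u = sequential_mc_unavailability(mttf_h=1000.0, mttr_h=10.0)
```

For exponential up/down times the analytic unavailability is MTTR/(MTTF+MTTR) = 10/1010, so the long simulated history should land very close to that value.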
International Nuclear Information System (INIS)
Yang, R.X.; Li, C.; Sun, Y.J.; Liu, Z.; Wang, X.Z.; Heng, Y.K.; Sun, S.S.; Dai, H.L.; Wu, Z.; An, F.F.
2017-01-01
The Beijing Spectrometer (BESIII) has just updated its end-cap Time-of-Flight (ETOF) system, using Multi-gap Resistive Plate Chambers (MRPCs) to replace the current scintillator detectors. These MRPCs show multi-peak phenomena in their time-over-threshold (TOT) distribution, which was also observed in the long-strip MRPCs built for the RHIC-STAR Muon Telescope Detector (MTD). After carefully investigating the correlation between the multi-peak distribution and incident hit positions along the strips, we find that it can be semi-quantitatively explained by signal reflections at the ends of the readout strips. Therefore a new offline calibration method was implemented on the MRPC ETOF data in BESIII, significantly improving the T-TOT correlation used to evaluate the time resolution.
A study of the up-and-down method for non-normal distribution functions
DEFF Research Database (Denmark)
Vibholm, Svend; Thyregod, Poul
1988-01-01
The assessment of breakdown probabilities is examined by the up-and-down method. The exact maximum-likelihood estimates for a number of response patterns are calculated for three different distribution functions and are compared with the estimates corresponding to the normal distribution. Estimates...
Distribution functions of magnetic nanoparticles determined by a numerical inversion method
International Nuclear Information System (INIS)
Bender, P; Balceris, C; Ludwig, F; Posth, O; Bogart, L K; Szczerba, W; Castro, A; Nilsson, L; Costo, R; Gavilán, H; González-Alonso, D; Pedro, I de; Barquín, L Fernández; Johansson, C
2017-01-01
In the present study, we applied a regularized inversion method to extract the particle size, magnetic moment and relaxation-time distribution of magnetic nanoparticles from small-angle x-ray scattering (SAXS), DC magnetization (DCM) and AC susceptibility (ACS) measurements. For the measurements the particles were colloidally dispersed in water. At first approximation the particles could be assumed to be spherically shaped and homogeneously magnetized single-domain particles. As model functions for the inversion, we used the particle form factor of a sphere (SAXS), the Langevin function (DCM) and the Debye model (ACS). The extracted distributions exhibited features/peaks that could be distinctly attributed to the individually dispersed and non-interacting nanoparticles. Further analysis of these peaks enabled, in combination with a prior characterization of the particle ensemble by electron microscopy and dynamic light scattering, a detailed structural and magnetic characterization of the particles. Additionally, all three extracted distributions featured peaks, which indicated deviations of the scattering (SAXS), magnetization (DCM) or relaxation (ACS) behavior from the one expected for individually dispersed, homogeneously magnetized nanoparticles. These deviations could be mainly attributed to partial agglomeration (SAXS, DCM, ACS), uncorrelated surface spins (DCM) and/or intra-well relaxation processes (ACS). The main advantage of the numerical inversion method is that no ad hoc assumptions regarding the line shape of the extracted distribution functions are required, which enabled the detection of these contributions. We highlighted this by comparing the results with the results obtained by standard model fits, where the functional form of the distributions was a priori assumed to be log-normal shaped. (paper)
International Nuclear Information System (INIS)
Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.
2014-01-01
A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The created computer program MINORIM is made available on the web. - Highlights: • A method from light scattering is applied to analyze ferrofluid magnetization curves. • A magnetic size distribution is obtained without prior assumption of its shape. • The method is tested successfully on ferrofluids with a known size distribution. • The practical limits of the method are explored with simulated data including noise. • This method is implemented in the program MINORIM, freely available online
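The non-negative least squares inversion described above can be sketched for a synthetic bimodal ferrofluid: build a kernel of Langevin magnetization curves over a log-spaced grid of dipole moments and solve for non-negative weights. The grid, field range, and moments are illustrative assumptions, not MINORIM's settings:

```python
import numpy as np
from scipy.optimize import nnls

kB_T = 1.380649e-23 * 295.0            # thermal energy at room temperature (J)
mu0 = 4.0e-7 * np.pi                   # vacuum permeability (T m/A)
H = np.linspace(1e3, 1e6, 60)          # applied field (A/m)

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x for x > 0."""
    return 1.0 / np.tanh(x) - 1.0 / x

# Kernel: column i is the normalized magnetization curve of particles with
# dipole moment mu[i]; the moment grid is log-spaced (illustrative values).
mu = np.logspace(-20, -17, 40)         # candidate moments (A m^2)
K = langevin(np.outer(H, mu) * mu0 / kB_T)

# Synthetic bimodal sample: 70% weight at 3e-19 A m^2, 30% at 3e-18 A m^2.
m_data = (0.7 * langevin(H * 3e-19 * mu0 / kB_T)
          + 0.3 * langevin(H * 3e-18 * mu0 / kB_T))

weights, rnorm = nnls(K, m_data)       # non-negative number densities
```

The non-negativity constraint plays the role described in the abstract: no unimodal or log-normal shape is assumed, yet the recovered weight vector concentrates near the two true moments.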
A method to calculate flux distribution in reactor systems containing materials with grain structure
International Nuclear Information System (INIS)
Stepanek, J.
1980-01-01
A method is proposed to compute the neutron flux spatial distribution in slab, spherical or cylindrical systems containing zones with a close grain structure of material. Several different types of equally distributed particles embedded in the matrix material are allowed in one or more zones. The multi-energy-group structure of the flux is considered. The collision probability method is used to compute the fluxes in the grains and in an "effective" part of the matrix material. Then the overall structure of the flux distribution in the zones with homogenized materials is determined using the DPN "surface flux" method. The two computations are connected through the balance equation during the outer iterations. The proposed method is implemented in the code SURCU-DH. Two test cases are computed and discussed. One is the computation of the eigenvalue, in simplified slab geometry, of an LWR container with one zone of boral grains equally distributed in an aluminium matrix. The second is the computation of the eigenvalue, in spherical geometry, of an HTR pebble-bed cell with spherical particles embedded in a graphite matrix. The results are compared to those obtained by repeated use of the WIMS code. (author)
A method for atomic-level noncontact thermometry with electron energy distribution
Kinoshita, Ikuo; Tsukada, Chiharu; Ouchi, Kohei; Kobayashi, Eiichi; Ishii, Juntaro
2017-04-01
We devised a new method of determining the temperatures of materials with their electron-energy distributions. The Fermi-Dirac distribution convoluted with a linear combination of Gaussian and Lorentzian distributions was fitted to the photoelectron spectrum measured for the Au(110) single-crystal surface at liquid N2-cooled temperature. The fitting successfully determined the surface-local thermodynamic temperature and the energy resolution simultaneously from the photoelectron spectrum, without any preliminary results of other measurements. The determined thermodynamic temperature was 99 ± 2.1 K, which was in good agreement with the reference temperature of 98.5 ± 0.5 K measured using a silicon diode sensor attached to the sample holder.
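The simultaneous extraction of temperature and energy resolution from a single edge spectrum can be sketched with synthetic data: a Fermi-Dirac edge convolved with a Gaussian, fitted by non-linear least squares. The spectrum, noise level, and parameter values are fabricated for illustration and stand in for the measured Au(110) photoelectron spectrum:

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 8.617333e-5                       # Boltzmann constant (eV/K)

def fermi_edge(E, T, Ef, sigma, amp, bg):
    """Fermi-Dirac distribution convolved numerically with a Gaussian
    resolution function of width sigma (all energies in eV)."""
    sigma = abs(sigma)
    dE = E[1] - E[0]
    fd = 1.0 / (np.exp((E - Ef) / (kB * T)) + 1.0)
    x = np.arange(-5.0 * sigma, 5.0 * sigma + dE, dE)
    g = np.exp(-0.5 * (x / sigma) ** 2)
    g /= g.sum()                       # normalized resolution kernel
    return amp * np.convolve(fd, g, mode="same") + bg

E = np.linspace(-0.2, 0.2, 400)        # energy relative to the Fermi level (eV)
rng = np.random.default_rng(2)
data = fermi_edge(E, 99.0, 0.0, 0.015, 1.0, 0.05) + rng.normal(0.0, 0.001, E.size)

popt, _ = curve_fit(fermi_edge, E, data, p0=[150.0, 0.01, 0.02, 0.9, 0.0],
                    bounds=([10.0, -0.1, 1e-4, 0.1, -1.0],
                            [1000.0, 0.1, 0.03, 10.0, 1.0]))
T_fit, sigma_fit = popt[0], abs(popt[2])
```

Because the thermal tail of the Fermi-Dirac edge and the Gaussian broadening have different shapes, the fit can separate the two, which is the point of the method described above.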
Numerical study on visualization method for material distribution using photothermal effect
International Nuclear Information System (INIS)
Kim, Moo Joong; Yoo, Jai Suk; Kim, Dong Kwon; Kim, Hyun Jung
2015-01-01
Visualization and imaging techniques have become increasingly essential in a wide range of industrial fields. A few imaging methods such as X-ray imaging, computed tomography and magnetic resonance imaging have been developed for medical applications to materials that are basically transparent or X-ray penetrable; however, reliable techniques for optically opaque materials such as semiconductors or metallic circuits have not been suggested yet. The photothermal method has been developed mainly for the measurement of thermal properties using characteristics that exhibit photothermal effects depending on the thermal properties of the materials. This study attempts to numerically investigate the feasibility of using photothermal effects to visualize or measure the material distribution of opaque substances. For this purpose, we conducted numerical analyses of various intaglio patterns with approximate sizes of 1.2-6 mm in stainless steel 0.5 mm below copper. In addition, images of the intaglio patterns in stainless steel were reconstructed by two-dimensional numerical scanning. A quantitative comparison of the reconstructed results and the original geometries showed an average difference of 0.172 mm and demonstrated the possibility of application to experimental imaging.
Discrete method for design of flow distribution in manifolds
International Nuclear Information System (INIS)
Wang, Junye; Wang, Hualin
2015-01-01
Flow in manifold systems is encountered in designs of various industrial processes, such as fuel cells, microreactors, microchannels, plate heat exchangers, and radial flow reactors. The uniformity of flow distribution in a manifold is a key indicator for the performance of the process equipment. In this paper, a discrete method for a U-type arrangement was developed to evaluate the uniformity of the flow distribution and the pressure drop, and then was used for direct comparisons between the U-type and the Z-type. The uniformity of the U-type is generally better than that of the Z-type in most cases for small ζ and large M. The U-type and the Z-type approach each other as ζ increases or M decreases. However, the Z-type is more sensitive to structures than the U-type and approaches uniform flow distribution faster than the U-type as M decreases or ζ increases. This provides a simple yet powerful tool for designers to evaluate and select a flow arrangement and offers practical measures for industrial applications. - Highlights: • Discrete methodology of flow field designs in manifolds with U-type arrangements. • Quantitative comparison between U-type and Z-type arrangements. • Discrete solution of flow distribution with varying flow coefficients. • Practical measures and guideline to design of manifold systems.
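For orientation only, a crude independent-path resistance sketch of the two arrangements follows. It keeps only viscous losses and ignores the header coupling and momentum (pressure-recovery) effects that the discrete method above accounts for, so it must not be read as reproducing the paper's U-vs-Z ranking; all resistance values are hypothetical:

```python
import numpy as np

def flow_fractions(n, r_channel, r_header, arrangement="U"):
    """Toy model: each channel's flow is proportional to the conductance of
    its inlet-header / channel / outlet-header path, treated independently.
    U-type: inlet and outlet ports on the same end; Z-type: opposite ends."""
    k = np.arange(n)
    inlet = (k + 1) * r_header
    if arrangement == "U":
        outlet = (k + 1) * r_header    # return path retraces the same end
    else:                              # "Z"
        outlet = (n - k) * r_header    # every path spans the full headers
    g = 1.0 / (inlet + r_channel + outlet)
    return g / g.sum()                 # flow fractions, summing to 1

q_u = flow_fractions(8, r_channel=10.0, r_header=0.5, arrangement="U")
q_z = flow_fractions(8, r_channel=10.0, r_header=0.5, arrangement="Z")
```

In this friction-only caricature the Z-type paths all have equal length and the distribution comes out uniform, while the U-type favours the channels nearest the ports; it is precisely the inertial effects retained by the paper's discrete method that change this picture.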
Blanchard, Philippe
2015-01-01
The second edition of this textbook presents the basic mathematical knowledge and skills that are needed for courses on modern theoretical physics, such as those on quantum mechanics, classical and quantum field theory, and related areas. The authors stress that learning mathematical physics is not a passive process and include numerous detailed proofs, examples, and over 200 exercises, as well as hints linking mathematical concepts and results to the relevant physical concepts and theories. All of the material from the first edition has been updated, and five new chapters have been added on such topics as distributions, Hilbert space operators, and variational methods. The text is divided into three main parts. Part I is a brief introduction to distribution theory, in which elements from the theories of ultradistributions and hyperfunctions are considered in addition to some deeper results for Schwartz distributions, thus providing a comprehensive introduction to the theory of generalized functions. P...
Proposal for a new method of reactor neutron flux distribution determination
Energy Technology Data Exchange (ETDEWEB)
Popic, V R [Institute of nuclear sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)
1964-01-15
A method for determining the neutron flux distribution inside a reactor, based on measurements of the activity produced in a medium flowing with variable velocity through the reactor, is considered theoretically. (author)
International Nuclear Information System (INIS)
Imae, Toshikazu; Takenaka, Shigeharu; Saotome, Naoya
2016-01-01
The purpose of this study was to evaluate a post-analysis method for cumulative dose distribution in stereotactic body radiotherapy (SBRT) using volumetric modulated arc therapy (VMAT). VMAT is capable of acquiring respiratory signals derived from projection images and machine parameters based on machine logs during VMAT delivery. Dose distributions were reconstructed from the respiratory signals and machine parameters for respiratory signals without division and divided into 4 and 10 phases. The dose distribution of each respiratory phase was calculated on the planned four-dimensional CT (4DCT). Summation of the dose distributions was carried out using deformable image registration (DIR), and cumulative dose distributions were compared with those of the corresponding plans. Without division, dose differences between the cumulative distribution and the plan were not significant. When respiratory signals were divided, dose differences were observed: overdose in the cranial region and underdose in the caudal region of the planning target volume (PTV). Differences between 4 and 10 phases were not significant. The present method was feasible for evaluating cumulative dose distribution in VMAT-SBRT using 4DCT and DIR. (author)
A simulation training evaluation method for distribution network fault based on radar chart
Directory of Open Access Journals (Sweden)
Yuhang Xu
2018-01-01
In order to solve the problem of automatic evaluation of dispatcher fault simulation training in distribution networks, a simulation training evaluation method based on radar charts for distribution network faults is proposed. A fault handling information matrix is established to record the dispatcher's fault handling operation sequence and operation information. The four situations of the dispatcher fault isolation operation are analyzed. A fault handling anti-misoperation rule set is established to describe the rules prohibiting dispatcher operations. Based on the idea of artificial intelligence reasoning, the feasibility of dispatcher fault handling is described by a feasibility index. The relevant factors and evaluation methods are discussed from three aspects: the feasibility of the fault handling result, the correctness of anti-misoperation, and the conciseness of the operation process. Detailed calculation formulas are given. Combining the independence and correlation of the three evaluation angles, a comprehensive evaluation method for distribution network fault simulation training based on radar charts is proposed. The method can comprehensively reflect the fault handling process of dispatchers and evaluate it from multiple angles, and has good practical value.
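A radar-chart comprehensive score of the kind described, combining the three evaluation angles, is commonly computed as the area of the polygon spanned by the per-axis scores; the sketch below is a generic version of that idea, not the paper's exact weighting:

```python
import math

def radar_area(scores):
    """Area of the polygon formed by plotting each score on an axis of a
    radar chart at equal angular spacing. Generic radar-chart scoring;
    the paper's specific formulas and weights may differ."""
    k = len(scores)
    angle = 2 * math.pi / k
    return 0.5 * math.sin(angle) * sum(
        scores[i] * scores[(i + 1) % k] for i in range(k)
    )

# Three hypothetical axes: result feasibility, anti-misoperation
# correctness, operation conciseness, each normalized to [0, 1].
print(round(radar_area([1.0, 1.0, 1.0]), 3))  # -> 1.299
```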
Biotic and abiotic variables show little redundancy in explaining tree species distributions
DEFF Research Database (Denmark)
Meier, Elaine S.; Kienast, Felix; Pearman, Peter B.
2010-01-01
Abiotic factors such as climate and soil determine the species fundamental niche, which is further constrained by biotic interactions such as interspecific competition. To parameterize this realized niche, species distribution models (SDMs) most often relate species occurrence data to abiotic var...
Tambun, R.; Sihombing, R. O.; Simanjuntak, A.; Hanum, F.
2018-02-01
The buoyancy weighing-bar method is a new, simple and cost-effective method to determine the particle size distribution of both settling and floating particles. In this method, the density change in a suspension due to particle migration is measured by weighing the buoyancy acting on a weighing bar hung in the suspension, and the particle size distribution is then calculated using the length of the bar and the time course of the change in its apparent mass. The apparatus consists of a weighing bar and an analytical balance with a hook for under-floor weighing. The weighing bar is used to detect the density change in the suspension. In this study we investigate the influence of the position of the weighing bar in the vessel on settling particle size distribution measurements of cement using the buoyancy weighing-bar method. The vessel used in this experiment is a graduated cylinder with a diameter of 65 mm, and the weighing bar is placed either at the center or off center of the vessel. The diameter of the weighing bar is 10 mm, and kerosene is used as the dispersion liquid. The results show that the position of the weighing bar in the vessel has no significant effect on the determination of the cement's particle size distribution by the buoyancy weighing-bar method, and the results obtained are comparable to those measured by the settling balance method.
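The conversion from a settling record to particle size in sedimentation methods of this family rests on Stokes' law; the sketch below shows the time-to-diameter mapping under assumed (typical, not paper-reported) property values, omitting the weighing bar's mass-balance step:

```python
import math

def stokes_diameter(h, t, rho_p, rho_f, mu, g=9.81):
    """Particle diameter (m) whose Stokes settling velocity covers depth
    h (m) in time t (s). Illustrates how settling time maps to size in
    sedimentation analysis; the full buoyancy weighing-bar method also
    uses the bar length and the recorded mass-time curve."""
    v = h / t                                   # settling velocity
    return math.sqrt(18 * mu * v / ((rho_p - rho_f) * g))

# Cement particle (rho ~3150 kg/m^3) settling 0.2 m in kerosene
# (rho ~800 kg/m^3, mu ~1.6e-3 Pa.s). Property values are typical
# textbook figures, not taken from the paper.
d = stokes_diameter(0.2, 600.0, 3150.0, 800.0, 1.6e-3)
print(f"{d*1e6:.1f} um")  # -> 20.4 um
```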
An FPGA-based identity authority method in quantum key distribution system
International Nuclear Information System (INIS)
Cui Ke; Luo Chunli; Zhang Hongfei; Lin Shengzhao; Jin Ge; Wang Jian
2012-01-01
In this article, an identity authority method realized in hardware is developed for use in quantum key distribution (QKD) systems. The method is based on an LFSR-Toeplitz hashing matrix. Its benefits rely on its easy implementation in hardware and its high security coefficient. It can attain very high security by splitting off part of the final key generated by the QKD system as the seed required in the identity authority method. We propose a specific flow for the identity authority method according to the problems and features of the hardware. The proposed method can satisfy many kinds of QKD systems. (authors)
Method for adding nodes to a quantum key distribution system
Grice, Warren P
2015-02-24
An improved quantum key distribution (QKD) system and method are provided. The system and method introduce new clients at intermediate points along a quantum channel, where any two clients can establish a secret key without the need for a secret meeting between the clients. The new clients perform operations on photons as they pass through nodes in the quantum channel, and participate in a non-secret protocol that is amended to include the new clients. The system and method significantly increase the number of clients that can be supported by a conventional QKD system, with only a modest increase in cost. The system and method are compatible with a variety of QKD schemes, including polarization, time-bin, continuous variable and entanglement QKD.
International Nuclear Information System (INIS)
Sohrabi, M.; Habibi, M.; Roshani, G.H.; Ramezani, V.
2012-01-01
A novel ion detection method has been developed and studied in this paper for the first time to detect and observe tracks of nitrogen ions and their angular distribution by the unaided eye in the Amirkabir 4 kJ plasma focus device (PFD). The method is based on electrochemical etching (ECE) of nitrogen ion tracks in 1 mm thick large-area polycarbonate (PC) detectors. The ECE method employed a specially designed large-area ECE chamber (constructed for this study), applying a 50 Hz high-voltage (HV) generator under optimized ECE conditions. The nitrogen ion tracks were efficiently amplified to a point observable by the unaided eye. The beam profile and angular distribution of nitrogen ion tracks on the central axis of the beam, and two- and three-dimensional iso-track-density distributions showing micro-beam spots, were determined. The distribution of ion track density along the central axis versus angular position shows double humps around a dip at the 0° angular position. The method introduced in this paper proved to be quite efficient for ion beam profile and characteristics studies in PFDs, with potential for ion detection studies and other relevant dosimetry applications.
Calçada, Flávio Siqueira; Guimarães, Antônio Sérgio; Teixeira, Marcelo Lucchesi; Takamatsu, Flávio Atsushi
2017-01-01
ABSTRACT Objective: To assess the distribution of stress produced on the TMJ disc by chincup therapy, by means of the finite element method. Methods: A simplified three-dimensional TMJ disc model was developed using Rhinoceros 3D software and exported to ANSYS software. A 4.9 N load was applied on the inferior surface of the model at inclinations of 30, 40, and 50 degrees to the mandibular plane (GoMe). ANSYS was used to analyze the stress distribution on the TMJ disc for the different angulations by means of the finite element method. Results: The results showed that tensile and compressive stress concentrations were higher on the inferior surface of the model. Tensile stress was more present in the middle-anterior region of the model, and its location was not altered by the three directions of load application. Compressive stress was more present in the middle and mid-posterior regions, but when a load inclined at 50° was applied, concentration in the middle region was prevalent. Tensile and compressive stress intensities progressively diminished as the load was applied more vertically. Conclusions: Stress induced by chincup therapy is mainly located on the inferior surface of the model. Loads at greater angles to the mandibular plane produced stress distributions of lower intensity and a concentration of compressive stress in the middle region. The simplified three-dimensional model proved useful for assessing the distribution of stresses induced on the TMJ disc by chincup therapy. PMID:29160348
Stein, Paul C; di Cagno, Massimiliano; Bauer-Brandl, Annette
2011-09-01
In this work a new, accurate and convenient technique for the measurement of distribution coefficients and membrane permeabilities based on nuclear magnetic resonance (NMR) is described. This method is a novel implementation of localized NMR spectroscopy and enables the simultaneous analysis of the drug content in the octanol and in the water phase without separation. For validation of the method, the distribution coefficients at pH = 7.4 of four active pharmaceutical ingredients (APIs), namely ibuprofen, ketoprofen, nadolol, and paracetamol (acetaminophen), were determined using a classical approach. These results were compared to the NMR experiments described in this work. For all substances, the respective distribution coefficients found with the two techniques coincided very well. Furthermore, the NMR experiments make it possible to follow the distribution of the drug between the phases as a function of position and time. Our results show that the technique, which is available on any modern NMR spectrometer, is well suited to the measurement of distribution coefficients. The experiments also present new insight into the dynamics of the water-octanol interface itself and permit measurement of the interface permeability.
A simple identification method for spore-forming bacteria showing high resistance against γ-rays
International Nuclear Information System (INIS)
Koshikawa, Tomihiko; Sone, Koji; Kobayashi, Toshikazu
1993-01-01
A simple identification method was developed for spore-forming bacteria that are highly resistant against γ-rays. Among 23 species of Bacillus studied, the spores of Bacillus megaterium, B. cereus, B. thuringiensis, B. pumilus and B. aneurinolyticus showed high resistance against γ-rays compared with the spores of other Bacillus species. The combination of seven biochemical tests, namely the citrate utilization test, nitrate reduction test, starch hydrolysis test, Voges-Proskauer reaction test, gelatine hydrolysis test, mannitol utilization test and xylose utilization test, showed a characteristic pattern for each species of Bacillus. The combination pattern of the above tests, with a few supplementary tests if necessary, was useful for identifying Bacillus species showing high resistance against γ-rays. The method is specific for B. megaterium, B. thuringiensis and B. pumilus, and highly selective for B. aneurinolyticus and B. cereus. (author)
Nahar, J.; Rusyaman, E.; Putri, S. D. V. E.
2018-03-01
This research was conducted at Perum BULOG Sub-Divre Medan, the institution implementing the Raskin program for several regencies and cities in North Sumatera. Raskin is a program of distributing rice to the poor. In order to minimize rice distribution costs, rice should be allocated optimally. The method used in this study consists of the Improved Vogel Approximation Method (IVAM) to obtain the initial feasible solution, and Modified Distribution (MODI) to test the solution for optimality. This study aims to determine whether the IVAM method can provide savings or cost efficiency in rice distribution. The calculation with IVAM yields an optimum cost of Rp945.241.715,5, lower than the company's calculated cost of Rp958.073.750,40. Thus, the use of IVAM can save rice distribution costs of Rp12.832.034,9.
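For reference, the classic Vogel Approximation Method that IVAM improves upon can be sketched as follows (generic VAM on a made-up balanced example, not the paper's IVAM variant or the BULOG data):

```python
def vam(supply, demand, cost):
    """Initial basic feasible solution of a balanced transportation
    problem via Vogel's Approximation Method (classic VAM, sketched for
    reference; the paper's IVAM variant refines the penalty rule)."""
    supply, demand = supply[:], demand[:]
    rows, cols = set(range(len(supply))), set(range(len(demand)))
    alloc = [[0] * len(demand) for _ in supply]

    def penalty(costs):
        s = sorted(costs)
        return s[1] - s[0] if len(s) > 1 else s[0]

    while rows and cols:
        # Penalty = gap between the two cheapest cells in each line.
        row_pen = {i: penalty([cost[i][j] for j in cols]) for i in sorted(rows)}
        col_pen = {j: penalty([cost[i][j] for i in rows]) for j in sorted(cols)}
        i_best = max(row_pen, key=row_pen.get)
        j_best = max(col_pen, key=col_pen.get)
        if row_pen[i_best] >= col_pen[j_best]:
            i = i_best
            j = min(sorted(cols), key=lambda j: cost[i][j])
        else:
            j = j_best
            i = min(sorted(rows), key=lambda i: cost[i][j])
        q = min(supply[i], demand[j])       # allocate as much as possible
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            rows.discard(i)
        if demand[j] == 0:
            cols.discard(j)
    return alloc

# Made-up 3 warehouse x 3 district example (tonnes, arbitrary unit costs)
cost = [[4, 8, 8], [16, 24, 16], [8, 16, 24]]
plan = vam([76, 82, 77], [72, 102, 61], cost)
total = sum(plan[i][j] * cost[i][j] for i in range(3) for j in range(3))
print(total)  # -> 2744
```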
International Nuclear Information System (INIS)
Schuerrer, F.
1980-01-01
For characterizing heterogeneous configurations of pebble-bed reactors, the fine structure of the flux distribution as well as the determination of the macroscopic neutron-physical quantities are of interest. When calculating system parameters of Wigner-Seitz cells, the usual codes for neutron spectrum calculation always neglect the modulation of the neutron flux by the influence of neighbouring spheres. To judge the error arising from that procedure, it is necessary to determine the flux distribution in the surroundings of a spherical fuel element. In the present paper an approximation method to calculate the flux distribution in the two-sphere model is developed. This method is based on the exactly solvable problem of determining the flux of a point source of neutrons in an infinite medium that contains a spherical perturbation zone eccentric to the point source. An iteration method, by superposing secondary fields and alternately satisfying the conditions of continuity on the surface of each of the two fuel elements, allows advancing to continually improving approximations. (orig.)
Directory of Open Access Journals (Sweden)
D. Kidmo Kaoga
2015-07-01
In this study, five numerical Weibull distribution methods, namely the maximum likelihood method, the modified maximum likelihood method (MLM), the energy pattern factor method (EPF), the graphical method (GM), and the empirical method (EM), were explored using hourly synoptic data collected from 1985 to 2013 in the district of Maroua in Cameroon. The performance analysis revealed that the MLM was the most accurate model, followed by the EPF and the GM. Furthermore, the comparison between the wind speed standard deviation predicted by the proposed models and the measured data showed that the MLM has a smaller relative error of -3.33% on average, compared to -11.67% on average for the EPF and -8.86% on average for the GM. As a result, the MLM is recommended to estimate the scale and shape parameters for an accurate and efficient wind energy potential evaluation.
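The maximum likelihood approach singled out as most accurate can be sketched as a fixed-point iteration on the standard Weibull MLE equation (synthetic data below; the Maroua measurements are not reproduced here):

```python
import math
import random

def weibull_mle(v, k0=2.0, tol=1e-8, max_iter=500):
    """Maximum likelihood estimates of Weibull shape k and scale c from
    samples v (all > 0), by fixed-point iteration on the MLE condition
    1/k = sum(v^k ln v)/sum(v^k) - mean(ln v)."""
    n = len(v)
    ln_v = [math.log(x) for x in v]
    mean_ln = sum(ln_v) / n
    k = k0
    for _ in range(max_iter):
        vk = [x ** k for x in v]
        k_new = 1.0 / (sum(w * l for w, l in zip(vk, ln_v)) / sum(vk) - mean_ln)
        if abs(k_new - k) < tol:
            k = k_new
            break
        k = k_new
    c = (sum(x ** k for x in v) / n) ** (1.0 / k)
    return k, c

# Synthetic wind speeds drawn from Weibull(k=2, c=6 m/s) by inverse-CDF
# sampling; illustrative data only.
random.seed(1)
sample = [6.0 * (-math.log(1.0 - random.random())) ** 0.5 for _ in range(5000)]
k, c = weibull_mle(sample)
print(f"k = {k:.2f}, c = {c:.2f} m/s")
```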
Diarra, Harona; Mazel, Vincent; Busignies, Virginie; Tchoreloff, Pierre
2015-09-30
The finite element method was used to study the influence of tablet thickness and punch curvature on the density distribution inside convex-faced (CF) tablets. The modeling of the process was conducted on two pharmaceutical excipients (anhydrous calcium phosphate and microcrystalline cellulose) using the Drucker-Prager Cap model in Abaqus® software. The parameters of the model were obtained from experimental tests. Several punch shapes based on industrial standards were used: a flat-faced (FF) punch and three convex-faced (CF) punches (8R11, 8R8 and 8R6) with a diameter of 8 mm. Different tablet thicknesses were studied at a constant compression force. The simulation of the compaction of CF tablets with increasing thicknesses showed an important change in the density distribution inside the tablet. For smaller thicknesses, low-density zones are located toward the center. The density is not uniform inside CF tablets, and the center of the two faces appears with low density, whereas the distribution inside FF tablets is almost independent of the tablet thickness. These results showed that FF and CF tablets, even obtained at the same compression force, do not have the same density at the center of the compact. As a consequence, differences in tensile strength, as measured by diametral compression, are expected. This was confirmed by experimental tests.
International Nuclear Information System (INIS)
Sanchez de Alsina, O.L.; Scaricabarozzi, R.A.
1982-01-01
A non-iterative matrix method to calculate the periodic distribution in reactors with thermal regeneration is presented. In the case of an exothermic reaction, a source term is included. A computer code was developed to calculate the final temperature distribution in the solids and the outlet temperatures of the gases. The results obtained from the calculation of ethane oxidation in air, using the Dietrich kinetic data, are presented. This method is more advantageous than iterative methods. (E.G.)
Energy Technology Data Exchange (ETDEWEB)
Takayama, T., E-mail: takayama@yz.yamagata-u.ac.j [Faculty of Engineering, Yamagata University, 4-3-16, Johnan, Yonezawa, Yamagata 992-8510 (Japan); Kamitani, A.; Tanaka, A. [Graduate School of Science and Engineering, Yamagata University, 4-3-16, Johnan, Yonezawa, Yamagata 992-8510 (Japan)
2010-11-01
Influence of the magnet position on the determination of the distribution of the critical current density in a high-temperature superconducting (HTS) thin film has been investigated numerically. For this purpose, a numerical code has been developed for analyzing the shielding current density in a HTS sample. By using the code, the permanent magnet method is reproduced. The results of computations show that, even if the center of the permanent magnet is located near the film edge, the maximum repulsive force is roughly proportional to the critical current density. This means that the distribution of the critical current density in the HTS film can be estimated from the proportionality constants determined by using the relations between the maximum repulsive force and the critical current density.
Benke, R R
2002-01-01
In situ gamma-ray spectrometry uses a portable detector to quantify radionuclides in materials. The main shortcoming of in situ gamma-ray spectrometry has been its inability to determine radionuclide depth distributions. Novel collimator designs were paired with a commercial in situ gamma-ray spectrometry system to overcome this limitation for large area sources. Positioned with their axes normal to the material surface, the cylindrically symmetric collimators limited the detection of unattenuated gamma-rays to a selected range of polar angles (measured off the detector axis). Although this approach does not alleviate the need for some knowledge of the gamma-ray attenuation characteristics of the materials being measured, the collimation method presented in this paper represents an absolute method that determines the depth distribution as a histogram, while other in situ methods require a priori knowledge of the depth distribution shape. Other advantages over previous in situ methods are that this method d...
Mielke, Steven L; Dinpajooh, Mohammadhasan; Siepmann, J Ilja; Truhlar, Donald G
2013-01-07
We present a procedure to calculate ensemble averages, thermodynamic derivatives, and coordinate distributions by effective classical potential methods. In particular, we consider the displaced-points path integral (DPPI) method, which yields exact quantal partition functions and ensemble averages for a harmonic potential and approximate quantal ones for general potentials, and we discuss the implementation of the new procedure in two Monte Carlo simulation codes, one that uses uncorrelated samples to calculate absolute free energies, and another that employs Metropolis sampling to calculate relative free energies. The results of the new DPPI method are compared to those from accurate path integral calculations as well as to results of two other effective classical potential schemes for the case of an isolated water molecule. In addition to the partition function, we consider the heat capacity and expectation values of the energy, the potential energy, the bond angle, and the OH distance. We also consider coordinate distributions. The DPPI scheme performs best among the three effective potential schemes considered and achieves very good accuracy for all of the properties considered. A key advantage of the effective potential schemes is that they display much lower statistical sampling variances than those for accurate path integral calculations. The method presented here shows great promise for including quantum effects in calculations on large systems.
A New Method for the 2D DOA Estimation of Coherently Distributed Sources
Directory of Open Access Journals (Sweden)
Liang Zhou
2014-03-01
The purpose of this paper is to develop a new technique for estimating the two-dimensional (2D) directions of arrival (DOAs) of coherently distributed (CD) sources, which can effectively estimate the central azimuth and central elevation of CD sources at lower computational cost. Using a special L-shaped array, a new approach for parametric estimation of CD sources is proposed. The proposed method is based on two rotational invariance relations under a small angular approximation, and estimates the two rotational matrices that depict these relations using the propagator technique. The central DOA estimates are then obtained from the principal diagonal elements of the two rotational matrices. Simulation results indicate that the proposed method exhibits good performance under small angular spread and can be applied to multisource scenarios where different sources may have different angular distribution shapes. Without any peak-finding search or eigendecomposition of the high-dimensional sample covariance matrix, the proposed method has a significantly reduced computational cost compared with existing methods, and is thus well suited to real-time processing and engineering realization. In addition, our approach is a robust estimator that does not depend on the angular distribution shape of the CD sources.
Distributed Solutions for Loosely Coupled Feasibility Problems Using Proximal Splitting Methods
DEFF Research Database (Denmark)
Pakazad, Sina Khoshfetrat; Andersen, Martin Skovgaard; Hansson, Anders
2014-01-01
In this paper, we consider convex feasibility problems (CFPs) where the underlying sets are loosely coupled, and we propose several algorithms to solve such problems in a distributed manner. These algorithms are obtained by applying proximal splitting methods to convex minimization reformulations ...
Calculations of Neutron Flux Distributions by Means of Integral Transport Methods
Energy Technology Data Exchange (ETDEWEB)
Carlvik, I
1967-05-15
Flux distributions have been calculated, mainly in one energy group, for a number of systems representing geometries of interest for reactor calculations. Integral transport methods of two kinds were utilised: collision probabilities (CP) and the discrete method (DIT). The geometries considered comprise the three one-dimensional geometries (plane, spherical and annular), and further a square cell with a circular fuel rod and a rod cluster cell with a circular outer boundary. For the annular cells both methods (CP and DIT) were used and the results were compared. The purpose of the work is twofold: firstly, to demonstrate the versatility and efficacy of integral transport methods, and secondly, to serve as a guide for anybody who wants to use the methods.
Estimation of the distribution coefficient by combined application of two different methods
International Nuclear Information System (INIS)
Vogl, G.; Gerstenbrand, F.
1982-01-01
A simple, non-invasive method is presented which permits determination of the rCBF and, in addition, of the distribution coefficient of the grey matter. The latter, which is closely correlated with cerebral metabolism, has so far only been determined in vitro. The new method will provide a means to check its accuracy. (orig.)
Method for distributed agent-based non-expert simulation of manufacturing process behavior
Ivezic, Nenad; Potok, Thomas E.
2004-11-30
A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each process; and programming each agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
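The claimed message-loop behavior can be sketched with a toy agent that reacts to the three event types named in the abstract (a hypothetical illustration, not the patented implementation):

```python
class ProcessAgent:
    """Agent wrapping one manufacturing process, reacting to the three
    discrete events named in the patent abstract. Cycle times and
    behavior here are hypothetical."""

    def __init__(self, name, cycle_time):
        self.name = name
        self.cycle_time = cycle_time   # clock ticks per finished unit
        self.stock = 0                 # raw material on hand
        self.elapsed = 0
        self.output = 0

    def handle(self, event):
        if event == "clock_tick":
            if self.stock > 0:
                self.elapsed += 1
                if self.elapsed >= self.cycle_time:
                    self.stock -= 1
                    self.output += 1
                    self.elapsed = 0
        elif event == "resources_received":
            self.stock += 1
        elif event == "request_output":
            produced, self.output = self.output, 0
            return produced

agents = [ProcessAgent("cutting", 2), ProcessAgent("welding", 3)]
events = ["resources_received"] * 3 + ["clock_tick"] * 6
for ev in events:                      # single-processor message loop
    for agent in agents:
        agent.handle(ev)
print([a.handle("request_output") for a in agents])  # -> [3, 2]
```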
Directory of Open Access Journals (Sweden)
Raul S Gonzalez
2015-01-01
Background: In many surgical pathology laboratories, operating room schedules are prospectively reviewed to determine specimen distribution to different subspecialty services and to predict the number and nature of potential intraoperative consultations, for which prior medical records and slides require review. At our institution, such schedules were manually converted into easily interpretable, surgical pathology-friendly reports to facilitate these activities. This conversion, however, was time-consuming and arguably a non-value-added activity. Objective: Our goal was to develop a semi-automated method of generating these reports that improved their readability while taking less time to perform than the manual method. Materials and Methods: A dynamic Microsoft Excel workbook was developed to automatically convert published operating room schedules into different tabular formats. Based on the surgical procedure descriptions in the schedule, a list of linked keywords and phrases was utilized to sort cases by subspecialty and to predict potential intraoperative consultations. After two trial-and-optimization cycles, the method was incorporated into standard practice. Results: The workbook distributed cases to the appropriate subspecialties and accurately predicted intraoperative requests. Users indicated that they spent 1-2 fewer hours per day on this activity than before, and team members preferred the formatting of the newer reports. Comparison of the manual and semi-automatic predictions showed that the mean daily difference in predicted versus actual intraoperative consultations underwent no statistically significant changes before and after implementation for most subspecialties. Conclusions: A well-designed, lean, and simple information technology solution to determine subspecialty case distribution and predict intraoperative consultations in surgical pathology is approximately as accurate as the gold-standard manual method and requires less
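The keyword-based case sorting described can be sketched as a simple lookup (hypothetical keyword map; the institution's actual linked keywords and services would differ):

```python
# Hypothetical keyword-to-subspecialty map; real linked keywords and
# phrases would be institution-specific and far longer.
KEYWORDS = {
    "lobectomy": "thoracic",
    "colectomy": "gastrointestinal",
    "craniotomy": "neuropathology",
    "mastectomy": "breast",
}

def assign_service(procedure):
    """Route an operating-room procedure description to a surgical
    pathology subspecialty by keyword lookup; unmatched descriptions
    fall through to a general queue for manual review."""
    text = procedure.lower()
    for keyword, service in KEYWORDS.items():
        if keyword in text:
            return service
    return "general"

print(assign_service("Right upper lobectomy with nodal dissection"))  # -> thoracic
```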
UOBPRM: A uniformly distributed obstacle-based PRM
Yeh, Hsin-Yi
2012-10-01
This paper presents a new sampling method for motion planning that can generate configurations more uniformly distributed on C-obstacle surfaces than prior approaches. Here, roadmap nodes are generated from the intersections between C-obstacles and a set of uniformly distributed fixed-length segments in C-space. The results show that this new sampling method yields samples that are more uniformly distributed than previous obstacle-based methods such as OBPRM, Gaussian sampling, and Bridge test sampling. UOBPRM is shown to have nodes more uniformly distributed near C-obstacle surfaces and also requires the fewest nodes and edges to solve challenging motion planning problems with varying narrow passages.
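The segment-intersection idea can be sketched in 2D: throw uniformly random fixed-length segments and bisect any segment whose endpoints straddle an obstacle boundary (an illustrative sketch with a user-supplied collision test, not the published planner, which works in higher-dimensional C-space):

```python
import math
import random

def boundary_samples(in_collision, seg_len, n_segments, bounds):
    """UOBPRM-style sampling sketch (2D): generate uniformly random
    fixed-length segments; whenever the two endpoints disagree on the
    collision test, bisect to locate a point on the C-obstacle surface.
    `in_collision` is a user-supplied predicate."""
    pts = []
    (xmin, xmax), (ymin, ymax) = bounds
    for _ in range(n_segments):
        ax, ay = random.uniform(xmin, xmax), random.uniform(ymin, ymax)
        t = random.uniform(0.0, 2.0 * math.pi)
        bx, by = ax + seg_len * math.cos(t), ay + seg_len * math.sin(t)
        fa = in_collision(ax, ay)
        if fa == in_collision(bx, by):
            continue                         # segment does not cross a boundary
        for _ in range(40):                  # bisection along the segment
            mx, my = (ax + bx) / 2, (ay + by) / 2
            if in_collision(mx, my) == fa:
                ax, ay = mx, my
            else:
                bx, by = mx, my
        pts.append(((ax + bx) / 2, (ay + by) / 2))
    return pts

# Toy obstacle: unit disk at the origin.
random.seed(0)
disk = lambda x, y: x * x + y * y < 1.0
pts = boundary_samples(disk, 0.5, 2000, ((-2, 2), (-2, 2)))
on_surface = all(abs(math.hypot(x, y) - 1.0) < 1e-6 for x, y in pts)
print(len(pts) > 0 and on_surface)  # -> True
```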
On the multipole moments of charge distributions
International Nuclear Information System (INIS)
Khare, P.L.
1977-01-01
There are two standard methods for showing the equivalence of a charge distribution in a small volume τ surrounding a point O to the superposition of a monopole, a dipole, a quadrupole and poles of higher moments at the point O: (a) to show that the electrostatic potential due to the charge distribution at an outside point is the same as that due to these superposed multipoles (including a monopole); (b) to show that the energy of interaction of an external field with the charge distribution is the same as with the superposed equivalent monopole and multipoles. Neither of these methods gives a physical picture of the equivalence of a charge distribution to the superposition of different multipoles. An attempt is made to interpret in physical terms the emergence of the multipoles of different order that are equivalent to a charge distribution, and to show that the magnitudes of the moments of these multipoles are in agreement with the results of both approaches (a) and (b). This physical interpretation also helps to understand, in a simple manner, some of the well-known properties of the multipole moments of atoms and nuclei. (K.B.)
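The equivalence invoked in approach (a) can be written out explicitly: expanding the potential of a charge density ρ confined to τ in powers of 1/r gives the standard multipole expansion (stated here for reference):

```latex
\Phi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\left[
  \frac{q}{r}
  + \frac{\mathbf{p}\cdot\hat{\mathbf{r}}}{r^{2}}
  + \frac{1}{2r^{3}}\sum_{i,j} Q_{ij}\,\hat{r}_i\hat{r}_j
  + \cdots \right],
\qquad
q = \int_\tau \rho\, d\tau',\quad
\mathbf{p} = \int_\tau \rho\,\mathbf{r}'\, d\tau',\quad
Q_{ij} = \int_\tau \rho\,\bigl(3x'_i x'_j - r'^{2}\delta_{ij}\bigr)\, d\tau'.
```

The monopole, dipole and quadrupole moments q, p and Q_ij are exactly the quantities whose physical emergence the paper sets out to interpret.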
International Nuclear Information System (INIS)
Shen, L.; Levine, S.H.; Catchen, G.L.
1987-01-01
This paper describes an optimization method for determining the beta dose distribution in tissue, and it describes the associated testing and verification. The method uses electron transport theory and optimization techniques to analyze the responses of a three-element thermoluminescent dosimeter (TLD) system. Specifically, the method determines the effective beta energy distribution incident on the dosimeter system, and thus the system performs as a beta spectrometer. Electron transport theory provides the mathematical model for performing the optimization calculation. In this calculation, parameters are determined that produce calculated doses for each of the chip/absorber components in the three-element TLD system. The resulting optimized parameters describe an effective incident beta distribution. This method can be used to determine the beta dose specifically at 7 mg·cm⁻² or at any depth of interest. The doses at 7 mg·cm⁻² in tissue determined by this method are compared to those experimentally determined using an extrapolation chamber. For a great variety of pure beta sources having different incident beta energy distributions, good agreement is found. The results are also compared to those produced by a commonly used empirical algorithm. Although the optimization method produces somewhat better results, the advantage of the optimization method is that its performance is not sensitive to the specific method of calibration
Linear Model for Optimal Distributed Generation Size Predication
Directory of Open Access Journals (Sweden)
Ahmed Al Ameri
2017-01-01
Full Text Available This article presents a linear model for predicting the optimal size of Distributed Generation (DG) that minimizes power loss. The method rests on the strong coupling between active power and voltage angle, and between reactive power and voltage magnitude. The paper proposes a simplified method to calculate the total power losses in an electrical grid for different distributed generation sizes and locations. The method has been implemented and tested on several IEEE bus test systems. The results show that the proposed method is capable of predicting the approximate optimal size of DG when compared with precise calculations. The linearized model gives good results and can substantially reduce the processing time required. The acceptable accuracy with less time and memory required can help the grid operator to assess power systems integrating large-scale distributed generation.
Directory of Open Access Journals (Sweden)
Moehammad Awaluddin
2012-07-01
Full Text Available Continuous Global Positioning System (GPS) observations showed significant crustal displacements as a result of the Bengkulu earthquake of September 12, 2007. A maximum horizontal displacement of 2.11 m was observed at the PRKB station, the vertical component at the BSAT station was uplifted by a maximum of 0.73 m, and the vertical component at the LAIS station subsided by 0.97 m. Adding constraints to the inversion for the Bengkulu earthquake slip distribution from GPS observations helps solve a least-squares inversion under an under-determined condition. Checkerboard tests were performed to guide the weighting of the constraints. The inversion of the Bengkulu earthquake slip distribution yielded an optimal slip distribution with a smoothing-constraint weight of 0.001 and a zero-slip constraint at the edge of the earthquake rupture area. The maximum coseismic slip of the optimal inversion was 5.12 m, below the PRKB and BSAT stations. The seismic moment calculated from the optimal slip distribution was 7.14 × 10^21 N·m, equivalent to a magnitude of 8.5.
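The smoothing-constrained least-squares inversion described above can be sketched on a toy 1-D problem. The augmented-system formulation below (second-difference smoothing with a small weight, analogous to the paper's 0.001) is a generic regularized inversion, not the paper's actual fault geometry or Green's functions; all values are illustrative.

```python
import numpy as np

# Hypothetical 1-D analogue of a slip inversion: recover a smooth "slip"
# vector m from noisy observations d = G m + noise, with a second-
# difference (Laplacian) smoothing constraint weighted by lam.
rng = np.random.default_rng(0)
n = 20                                   # number of fault patches (made up)
true_m = np.exp(-0.5 * ((np.arange(n) - 10) / 3.0) ** 2)   # smooth slip
G = rng.normal(size=(30, n))             # synthetic Green's functions
d = G @ true_m + 0.01 * rng.normal(size=30)

# Second-difference smoothing operator L
L = np.zeros((n - 2, n))
for i in range(n - 2):
    L[i, i:i + 3] = [1.0, -2.0, 1.0]

lam = 0.001
# Augmented least squares: minimize ||G m - d||^2 + lam^2 ||L m||^2
A = np.vstack([G, lam * L])
b = np.concatenate([d, np.zeros(n - 2)])
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

In an under-determined setting (fewer observations than patches), increasing `lam` trades data fit for smoothness; checkerboard tests are one way to choose that weight.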
Directory of Open Access Journals (Sweden)
Ruben M. Mouangue
2014-05-01
Full Text Available Modeling of the wind speed distribution is of great importance for the assessment of wind energy potential and the performance of wind energy conversion systems. In this paper, two methods for determining the Weibull parameters are compared to show their influence on the performance of the Weibull distribution. Because of the large proportion of calm winds at the site of Ngaoundere airport, we characterize the wind potential using a Weibull distribution whose parameters are determined by the modified maximum likelihood method. This approach is compared to the Weibull distribution with parameters determined by the standard maximum likelihood method and to the hybrid distribution, which is recommended for wind potential assessment of sites having a nonzero probability of calm. Using data provided by the ASECNA Weather Service (Agency for the Safety of Air Navigation in Africa and Madagascar), we evaluate the goodness of fit of the various fitted distributions to the wind speed data using Q–Q plots, Pearson's coefficient of correlation, the mean wind speed, the mean square error, the energy density and its relative error. The results show that the accuracy of the Weibull distribution with parameters determined by the modified maximum likelihood method is higher than that of the others. This approach is then used to estimate the monthly and annual energy production of the Ngaoundere airport site. The largest energy contribution is made in March, with 255.7 MWh. The results also show that a wind turbine generator installed on this particular site would stand idle for at least half of the time because of the high frequency of calms. For this kind of site, the modified maximum likelihood method proposed by Seguro and Lambert in 2000 is one of the best methods for determining the Weibull parameters.
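The modified maximum likelihood method referenced above works on frequency-binned wind data and accounts for the calm fraction. A sketch of the usual fixed-point iteration (after Seguro and Lambert, 2000) follows; the function name and iteration count are this sketch's own choices.

```python
import numpy as np

def weibull_modified_mle(v, f, p_nonzero, k0=2.0, iters=100):
    """Modified maximum likelihood estimate of Weibull shape k and scale c
    from binned wind data (after Seguro & Lambert, 2000).

    v         -- bin-centre wind speeds (all > 0)
    f         -- relative frequency of each bin (fractions of ALL records)
    p_nonzero -- probability of nonzero wind (1 minus the calm fraction)
    """
    v = np.asarray(v, float)
    f = np.asarray(f, float)
    k = k0
    for _ in range(iters):                      # fixed-point iteration for k
        vk = v ** k
        k = 1.0 / (np.dot(vk * np.log(v), f) / np.dot(vk, f)
                   - np.dot(np.log(v), f) / p_nonzero)
    c = (np.dot(v ** k, f) / p_nonzero) ** (1.0 / k)
    return k, c
```

With no calms (`p_nonzero = 1`) this reduces to the ordinary binned maximum likelihood estimate; a large calm fraction inflates `c` appropriately, which is why the method suits sites like the one studied.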
S-curve networks and an approximate method for estimating degree distributions of complex networks
Guo, Jin-Li
2010-12-01
In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S curve (logistic curve) and forecasts the growth trend of IPv4 addresses in China, providing reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, the paper proposes a finite network model with bulk growth, called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. An approximate method is developed to predict the growth dynamics of the individual nodes and is used to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with simulations, obeying an approximately power-law form. This method overcomes a shortcoming of the Barabási-Albert method commonly used in current network research.
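The S-curve (logistic) forecasting step described above amounts to fitting a three-parameter logistic trend to a growth series. A sketch on synthetic data follows; the data here are illustrative, not the paper's China IPv4 statistics.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    # S-curve (logistic) growth toward a finite carrying capacity K
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic "address count" series: noisy observations of a logistic trend
t = np.arange(0, 20, 1.0)
rng = np.random.default_rng(2)
y = logistic(t, 330.0, 0.6, 10.0) + rng.normal(0.0, 3.0, t.size)

# Fit K (growth limit), r (growth rate) and t0 (inflection time)
(K, r, t0), _ = curve_fit(logistic, t, y, p0=[max(y), 0.5, np.median(t)])
```

The fitted `K` is the finite growth limit that distinguishes an S-curve network from the infinitely growing models the abstract contrasts it with.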
S-curve networks and an approximate method for estimating degree distributions of complex networks
International Nuclear Information System (INIS)
Guo Jin-Li
2010-01-01
In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S curve (logistic curve) and forecasts the growth trend of IPv4 addresses in China, providing reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, the paper proposes a finite network model with bulk growth, called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. An approximate method is developed to predict the growth dynamics of the individual nodes and is used to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with simulations, obeying an approximately power-law form. This method overcomes a shortcoming of the Barabási-Albert method commonly used in current network research. (general)
International Nuclear Information System (INIS)
Huang, Zhi-Yong; Xie, Hong; Cao, Ying-Lan; Cai, Chao; Zhang, Zhi
2014-01-01
Highlights: • Large amounts of exogenous Pb were found distributed in the reducible fractions. • Very little exogenous Pb was found in the acid-extractable fractions. • More than 60% of the exogenous Pb in rhizosphere soils was lost after planting. • The isotopic labeling method and SEP make it possible to explore Pb bioavailability in soil. -- Abstract: Pb contamination of agricultural soils is one of the most important ecological problems, potentially posing a serious risk to human health through the food chain. The fate of exogenous Pb in agricultural soils therefore needs to be explored in depth. By spiking soils with the stable enriched isotope ²⁰⁶Pb, the contamination of three agricultural soils sampled from the estuary areas of the Jiulong River, China with exogenous Pb²⁺ ions was simulated in the present study, and the distribution, mobility and bioavailability of exogenous Pb in the soils were investigated using the isotopic labeling method coupled with a four-stage BCR (European Community Bureau of Reference) sequential extraction procedure. Results showed that about 60–85% of the exogenous Pb was distributed in the reducible fractions, while less than 1.0% was found in the acid-extractable fractions. After planting, the amounts of exogenous Pb present in the acid-extractable, reducible and oxidizable fractions of rhizospheric soils decreased by 60–66%; part of the exogenous Pb was assimilated by plants, while most of the metal may have been transferred downward by daily watering and fertilizer application. The results show that the isotopic labeling technique coupled with sequential extraction procedures enables exploration of the distribution, mobility and bioavailability of exogenous Pb in contaminated soils, which may be useful for further soil remediation
Energy Technology Data Exchange (ETDEWEB)
Huang, Zhi-Yong, E-mail: zhyhuang@jmu.edu.cn [College of Bioengineering, Jimei University, Xiamen 361021 (China); Xie, Hong [College of Bioengineering, Jimei University, Xiamen 361021 (China); Shandong Vocational Animal Science and Veterinary College, Weifang 261061 (China); Cao, Ying-Lan [College of Bioengineering, Jimei University, Xiamen 361021 (China); Cai, Chao [Key Laboratory of Urban Environment and Health, Institute of Urban Environment, Chinese Academy of Sciences, Xiamen 361021 (China); Zhang, Zhi [College of Bioengineering, Jimei University, Xiamen 361021 (China)
2014-02-15
Highlights: • Large amounts of exogenous Pb were found distributed in the reducible fractions. • Very little exogenous Pb was found in the acid-extractable fractions. • More than 60% of the exogenous Pb in rhizosphere soils was lost after planting. • The isotopic labeling method and SEP make it possible to explore Pb bioavailability in soil. -- Abstract: Pb contamination of agricultural soils is one of the most important ecological problems, potentially posing a serious risk to human health through the food chain. The fate of exogenous Pb in agricultural soils therefore needs to be explored in depth. By spiking soils with the stable enriched isotope ²⁰⁶Pb, the contamination of three agricultural soils sampled from the estuary areas of the Jiulong River, China with exogenous Pb²⁺ ions was simulated in the present study, and the distribution, mobility and bioavailability of exogenous Pb in the soils were investigated using the isotopic labeling method coupled with a four-stage BCR (European Community Bureau of Reference) sequential extraction procedure. Results showed that about 60–85% of the exogenous Pb was distributed in the reducible fractions, while less than 1.0% was found in the acid-extractable fractions. After planting, the amounts of exogenous Pb present in the acid-extractable, reducible and oxidizable fractions of rhizospheric soils decreased by 60–66%; part of the exogenous Pb was assimilated by plants, while most of the metal may have been transferred downward by daily watering and fertilizer application. The results show that the isotopic labeling technique coupled with sequential extraction procedures enables exploration of the distribution, mobility and bioavailability of exogenous Pb in contaminated soils, which may be useful for further soil remediation.
John, D.A.
1987-01-01
Plutonic rocks, mostly granite and granodiorite, are widely distributed in the western two-thirds of the Tonopah 1° by 2° quadrangle, Nevada. These rocks were systematically studied as part of the Tonopah CUSMAP project. Studies included field mapping, petrographic and modal analyses, geochemical studies of both fresh and altered plutonic rocks and altered wallrocks, and K-Ar and Rb-Sr radiometric dating. Data collected during this study were combined with previously published data to produce a 1:250,000-scale map of the Tonopah quadrangle showing the distribution of individual plutons and an accompanying table summarizing composition, texture, age, and any noted hydrothermal alteration and mineralization effects for each pluton.
International Nuclear Information System (INIS)
Li Bihong; Shuang Na; Liu Qingcheng
2006-01-01
The principle of the finite difference method is introduced, and the radon field distribution over a sandstone-type uranium deposit is described. The theoretical equation for the radon field distribution is established. Solving the radon field distribution equation with a finite difference algorithm provides a numerical method for the forward calculation of the radon field over a sandstone-type uranium mine. A study of the 2-D finite difference method, centered on high-anomaly radon fields and accounting for the character of the radon field over sandstone-type uranium deposits, provides an algorithm for further research. (authors)
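A forward calculation of the kind sketched above can be illustrated with a minimal 2-D finite-difference relaxation for a steady diffusion-decay field. The governing equation, grid, source geometry and parameter values below are this sketch's simplifying assumptions, not the paper's model.

```python
import numpy as np

# Minimal 2-D finite-difference sketch of a steady diffusion-decay field,
# a simplified stand-in for a radon-field equation over a buried source:
#     D * laplacian(C) - lam * C + S = 0,   C = 0 on the boundary.
nx, nz, h = 41, 41, 1.0            # grid size and spacing (m), illustrative
D, lam = 5e-2, 2.1e-6              # diffusion coeff (m^2/s), Rn-222 decay (1/s)
S = np.zeros((nz, nx))
S[30, 18:23] = 1e-6                # source band standing in for the ore body

C = np.zeros((nz, nx))
for _ in range(5000):              # Jacobi relaxation to the steady state
    nb = C[:-2, 1:-1] + C[2:, 1:-1] + C[1:-1, :-2] + C[1:-1, 2:]
    C[1:-1, 1:-1] = (D / h**2 * nb + S[1:-1, 1:-1]) / (4.0 * D / h**2 + lam)
```

The computed field peaks over the source band, which is the anomaly pattern a forward model of this type is meant to reproduce.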
International Nuclear Information System (INIS)
Murata, Isao; Mori, Takamasa; Nakagawa, Masayuki; Shirai, Hiroshi.
1996-03-01
High Temperature Gas-cooled Reactors (HTGRs) employ spherical fuels named coated fuel particles (CFPs), consisting of a microsphere of low-enriched UO₂ with coating layers to prevent FP release. Many such spherical fuels are distributed randomly in the core. Therefore, the nuclear design of HTGRs is generally performed on the basis of the multigroup approximation using a diffusion code, an SN transport code or a group-wise Monte Carlo code. This report summarizes a Monte Carlo hard-sphere packing simulation code that simulates the packing of equal hard spheres and evaluates the necessary probability distributions, which are used in a new Monte Carlo calculation method developed to treat randomly distributed spherical fuels with the continuous-energy Monte Carlo method. This code produces various statistical quantities, namely the Radial Distribution Function (RDF), the Nearest Neighbor Distribution (NND), the 2-dimensional RDF and so on, for random packing as well as for the ordered close packings FCC and BCC. (author)
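The packing simulation summarized above can be sketched at toy scale with random sequential addition of equal hard spheres, followed by one of the statistics mentioned (the nearest-neighbour distance). This is a generic illustration at low packing fraction, not the report's code or its packing algorithm.

```python
import numpy as np

# Random sequential packing of equal hard spheres in a unit cube, plus
# the nearest-neighbour distances of the resulting configuration.
rng = np.random.default_rng(3)
radius, target = 0.05, 100         # sphere radius and count (illustrative)
centers = []
while len(centers) < target:
    p = rng.uniform(radius, 1.0 - radius, 3)    # keep sphere inside the box
    if all(np.linalg.norm(p - q) >= 2.0 * radius for q in centers):
        centers.append(p)                       # accept non-overlapping trial
centers = np.array(centers)

# Nearest-neighbour distances: by construction never below one diameter
d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
nnd = d.min(axis=1)
```

Histogramming `nnd` (or pair distances, for the RDF) gives the probability distributions that a continuous-energy Monte Carlo treatment of randomly packed fuel particles consumes.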
Directory of Open Access Journals (Sweden)
Cherre Sade Bezerra Da Silva
2013-09-01
Full Text Available A new method for rearing Spodoptera frugiperda in the laboratory shows that larval cannibalism is not obligatory. Here we show, for the first time, that larvae of the fall armyworm (FAW), Spodoptera frugiperda (Lepidoptera: Noctuidae), can be successfully reared in a cohort-based manner with virtually no cannibalism. FAW larvae were reared from the second instar to pupation in rectangular plastic containers holding 40 individuals, with a surprising ca. 90% larval survivorship. Adult females from the cohort-based method showed fecundity similar to that already reported in the literature for larvae reared individually, and fertility higher than 99%, with the advantage of combining economy of time, space and material resources. These findings suggest that the factors affecting cannibalism of FAW larvae in laboratory rearings need to be reevaluated, while the new technique also shows potential to increase the efficiency of both small and mass FAW rearings.
Simple method of generating and distributing frequency-entangled qudits
Jin, Rui-Bo; Shimizu, Ryosuke; Fujiwara, Mikio; Takeoka, Masahiro; Wakabayashi, Ryota; Yamashita, Taro; Miki, Shigehito; Terai, Hirotaka; Gerrits, Thomas; Sasaki, Masahide
2016-11-01
High-dimensional, frequency-entangled photonic quantum bits (qudits, for d dimensions) are promising resources for quantum information processing in optical fiber networks and can also be used to improve channel capacity and security in quantum communication. However, it is still challenging to prepare high-dimensional frequency-entangled qudits experimentally, owing to technical limitations. Here we propose and experimentally implement a novel method for the simple generation of frequency-entangled qudits with d > 10 without the use of any spectral filters or cavities. The generated state is distributed over 15 km in total length. This scheme combines the technique of spectral engineering of biphotons generated by spontaneous parametric down-conversion with the technique of spectrally resolved Hong-Ou-Mandel interference. Our frequency-entangled qudits will enable quantum cryptographic experiments with enhanced performance. This distribution of distinct entangled frequency modes may also be useful for improved metrology and quantum remote synchronization, as well as for fundamental tests of stronger violations of local realism.
A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique
Rached, Nadhir B.
2015-06-08
The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.
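The crude Monte Carlo baseline that the paper's importance sampler improves upon can be sketched directly. The parameters and threshold below are illustrative; the hazard-rate-twisting change of measure itself is the paper's contribution and is not reproduced here.

```python
import numpy as np

# Crude Monte Carlo estimate of the CCDF P(X1 + ... + XN > gamma) for
# independent, not identically distributed Log-normal variables.
rng = np.random.default_rng(4)
mu = np.array([0.0, 0.2, 0.4])        # per-variable log-means (made up)
sigma = np.array([1.0, 0.8, 0.6])     # per-variable log-std devs (made up)
gamma = 20.0                          # threshold of interest
M = 200000                            # number of MC samples

samples = rng.lognormal(mu, sigma, size=(M, mu.size)).sum(axis=1)
ccdf_hat = np.mean(samples > gamma)
# Relative error of the naive estimator grows as the event gets rarer,
# which is exactly the motivation for importance sampling
rel_err = np.sqrt((1.0 - ccdf_hat) / (M * ccdf_hat))
```

For a fixed sample budget, `rel_err` blows up as `gamma` grows; an asymptotically optimal IS estimator keeps the relative error bounded in that limit.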
A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique
Rached, Nadhir B.; Benkhelifa, Fatma; Alouini, Mohamed-Slim; Tempone, Raul
2015-01-01
The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.
Planning and Optimization Methods for Active Distribution Systems
DEFF Research Database (Denmark)
Abbey, Chad; Baitch, Alex; Bak-Jensen, Birgitte
distribution planning. Active distribution networks (ADNs) have systems in place to control a combination of distributed energy resources (DERs), defined as generators, loads and storage. With these systems in place, the ADN becomes an Active Distribution System (ADS). Distribution system operators (DSOs) have...
Directory of Open Access Journals (Sweden)
Yang Yu
2017-01-01
Full Text Available In this paper, a method for predicting the temperature distribution and the resulting thermal stress of a throttle-regulated steam turbine rotor is proposed. The rotor thermal stress curve can be calculated from the preset power requirement, the operation mode and the predicted critical parameters. The results for a 660 MW throttle-regulated turbine rotor show that, with the help of the inertial element method, operators are able to predict the operation results and to adjust the operation parameters in advance. The method can also raise the operation level, thus providing a technical guarantee for thermal stress optimization control and for the safety of the steam turbine rotor under variable-load operation.
Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy
Energy Technology Data Exchange (ETDEWEB)
Randriantsizafy, R D; Ramanandraibe, M J [Madagascar Institut National des Sciences et Techniques Nucleaires, Antananarivo (Madagascar); Raboanary, R [Institut of astro and High-Energy Physics Madagascar, University of Antananarivo, Antananarivo (Madagascar)
2007-07-01
Cs-137 brachytherapy has been performed in Madagascar since 2005. Treatment time calculation for a prescribed dose is done manually. A Monte-Carlo method Python library written at the Madagascar INSTN is used experimentally to calculate the dose distribution on the tumour and around it. A first validation of the code was done by comparing the library's curves with the Nucletron company's curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch running on each PC. The library will be used to model the dose distribution in the patient's CT scan images for individualized and more accurate treatment time calculation for a prescribed dose.
Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy
International Nuclear Information System (INIS)
Randriantsizafy, R.D.; Ramanandraibe, M.J.; Raboanary, R.
2007-01-01
Cs-137 brachytherapy has been performed in Madagascar since 2005. Treatment time calculation for a prescribed dose is done manually. A Monte-Carlo method Python library written at the Madagascar INSTN is used experimentally to calculate the dose distribution on the tumour and around it. A first validation of the code was done by comparing the library's curves with the Nucletron company's curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch running on each PC. The library will be used to model the dose distribution in the patient's CT scan images for individualized and more accurate treatment time calculation for a prescribed dose.
Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng
2018-02-01
Large-scale integration of distributed power can relieve current environmental pressure while increasing the complexity and uncertainty of the overall distribution system. Rational planning of distributed power can effectively improve the system voltage level. To this end, the specific impact of the integration of typical distributed power sources on distribution network power quality was analyzed, and an improved particle swarm optimization algorithm (IPSO) with a modified learning factor and inertia weight was proposed to solve distributed generation planning for the distribution network, improving the local and global search performance of the algorithm. Results show that the proposed method can markedly reduce the system network loss and improve the economic performance of system operation with distributed generation.
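A generic PSO loop in the spirit of the IPSO described above, with a linearly decreasing inertia weight and time-varying learning factors, can be sketched as follows. The objective is a hypothetical smooth surface standing in for network loss as a function of DG size/location, not the paper's power-flow model, and the schedules for `w`, `c1`, `c2` are common choices, not necessarily the paper's.

```python
import numpy as np

def loss(x):
    """Hypothetical smooth stand-in for network loss; minimum at x = 3."""
    return np.sum((x - 3.0) ** 2, axis=-1)

rng = np.random.default_rng(5)
n_particles, dim, iters = 30, 2, 100
x = rng.uniform(0.0, 10.0, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), loss(x)
gbest = pbest[np.argmin(pbest_val)].copy()

for t in range(iters):
    w = 0.9 - 0.5 * t / iters           # linearly decreasing inertia weight
    c1 = 2.5 - 1.5 * t / iters          # cognitive factor shrinks over time
    c2 = 1.0 + 1.5 * t / iters          # social factor grows over time
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    val = loss(x)
    better = val < pbest_val
    pbest[better], pbest_val[better] = x[better], val[better]
    gbest = pbest[np.argmin(pbest_val)].copy()
```

Early iterations emphasize exploration (high inertia, strong cognitive pull); late iterations emphasize exploitation around the global best, which is the balance the improved factors are meant to tune.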
International Nuclear Information System (INIS)
Badenhop, C.T.
1983-01-01
Presented here is a method for determining the pore size distribution of a membrane microfilter. Existing test methods are either cumbersome, as is the Erbe method; time consuming, as is the evaluation of electron microscope photographs; do not really measure the pore distribution, as with the mercury intrusion method; or do not satisfactorily evaluate the large-pore range of the filter, as is the case with the automated ASTM method. The new method described in this paper is based on the solution of the integral flow equation for the pore distribution function. A computer program evaluates the flow test data and calculates the numerical pore distribution, water-flow distribution, air-flow distribution and capillary area distribution as functions of the pore size. (orig./RW)
Combustor and method for distributing fuel in the combustor
Uhm, Jong Ho; Ziminsky, Willy Steve; Johnson, Thomas Edward; York, William David
2016-04-26
A combustor includes a tube bundle that extends radially across at least a portion of the combustor. The tube bundle includes an upstream surface axially separated from a downstream surface. A plurality of tubes extends from the upstream surface through the downstream surface, and each tube provides fluid communication through the tube bundle. A baffle extends axially inside the tube bundle between adjacent tubes. A method for distributing fuel in a combustor includes flowing a fuel into a fuel plenum defined at least in part by an upstream surface, a downstream surface, a shroud, and a plurality of tubes that extend from the upstream surface to the downstream surface. The method further includes impinging the fuel against a baffle that extends axially inside the fuel plenum between adjacent tubes.
Directory of Open Access Journals (Sweden)
Desak Wiwin
2017-06-01
Full Text Available The purpose of this study was to analyze (1) the effect of the massed practice method on the speed and accuracy of service, (2) the effect of the distributed practice method on the speed and accuracy of service and (3) the relative influence of the massed practice and distributed practice methods on the speed and accuracy of service. The research is quantitative, using quasi-experimental methods. The research design uses a non-randomized control group pretest-posttest design, and data were analyzed using MANOVA. Data were collected by testing service speed (Dartfish) and accuracy (Hewitt) during the pretest and posttest. The results are as follows: (1) the massed practice method has a significant effect on increasing the speed and accuracy of service; (2) the distributed practice method has a significant effect on increasing the speed and accuracy of service; (3) there is no significant difference in influence between the massed practice and distributed practice methods on the speed and accuracy of service. The conclusion of this research is that the massed practice and distributed practice methods both provide significant results, but distributed practice has the greater influence on the speed and accuracy of service.
Yan, Zhi Gang; Li, Jun Qing
2017-12-01
The area of giant panda habitat and bamboo forest and the size of the wild giant panda population have greatly increased, while habitat fragmentation and local population isolation have intensified in recent years. Accurate evaluation of the ecosystem status of the giant panda distribution area is important for giant panda conservation. The ecosystems of the distribution area and its six mountain ranges were subdivided into habitat and population subsystems based on hierarchical system theory. Using the panda distribution area as the study area and the three national surveys as time nodes, the evolution of the ecosystems was studied using the entropy method, the coefficient of variation, and correlation analysis. We found that, despite continuous improvement, differences existed in the evolution and present situation of the ecosystems, and the six mountain ranges could be divided into three groups. Ecosystems classified into the same group showed many commonalities, and the differences between groups were considerable. The problems of habitat fragmentation and local population isolation became more serious, resulting in ecosystem degradation. Individualized ecological protection measures should be formulated and implemented in accordance with the conditions in each mountain system to achieve the best results.
Design method of freeform light distribution lens for LED automotive headlamp based on DMD
Ma, Jianshe; Huang, Jianwei; Su, Ping; Cui, Yao
2018-01-01
We propose a new method for designing a freeform light distribution lens for light-emitting diode (LED) automotive headlamps based on a digital micromirror device (DMD). With a parallel optical path architecture, the exit pupil of the illuminating system is set at infinity, so the principal rays incident on the micro lenses of the DMD are parallel. The DMD is a high-speed digital micromirror array; the function of the distribution lens is to distribute the parallel rays emerging from the DMD and produce a lighting pattern that fully complies with the national regulation GB 25991-2010. We use a DLP4500 to design the light distribution lens, mesh the target plane regulated by GB 25991-2010 and correlate the mesh grid with the active mirror array of the DLP4500. From the mapping relations and the law of refraction, we build the mathematical model and obtain the parameters of the freeform light distribution lens. We then import its parameters into the three-dimensional (3D) software CATIA to construct its 3D model. Ray tracing results using TracePro demonstrate that the illumination of the target plane is easily adjustable, and fully complies with the requirements of GB 25991-2010, by adjusting the exit brightness of the DMD. The theoretical optical efficiency of a light distribution lens designed using this method can be up to 92% without any auxiliary lens.
International Nuclear Information System (INIS)
Gao Gan
2015-01-01
Song [Song D 2004 Phys. Rev. A 69 034301] first proposed two key distribution schemes with the symmetry feature. We find that, in these schemes, the private channels through which Alice and Bob announce the initial Bell state or the measurement result are not needed for discovering keys, and that Song's encoding methods are not optimal. Here, an optimized encoding method is given that improves the efficiency of Song's schemes by a factor of 7/3. Interestingly, this optimized encoding method can be extended to the key distribution scheme composed of generalized Bell states. (paper)
Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics
Abe, Sumiyoshi
2014-11-01
The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.
Method of estimating thermal power distribution of core of BWR type reactor
International Nuclear Information System (INIS)
Sekimizu, Koichi
1982-01-01
Purpose: To accurately and rapidly predict the thermal power distribution of the core of a BWR type reactor during load follow-up operation. Method: A parameter value corrected by a correction-coefficient deciding unit and a xenon density distribution predicted by a xenon density distributor are input to a thermal power distribution predicting device, and the status quantities, such as coolant flow rate, predetermined for the present and the next high-power operating periods are substituted into a physical model to predict the thermal power distribution. The reactor status quantities from the previous high-power operation corresponding to the next high-power operation to be predicted are read from the reactor status quantities stored in time-series manner in a reactor core status memory, and the physical model used in predicting the thermal power distribution for the next high-power operation is corrected accordingly. (Sikiya, K.)
International Nuclear Information System (INIS)
Wasastjerna, F.; Lux, I.
1980-03-01
A transmission probability method implemented in the program TPHEX is described. This program was developed for the calculation of neutron flux distributions in hexagonal light water reactor fuel assemblies. The accuracy appears to be superior to diffusion theory, and the computation time is shorter than that of the collision probability method. (author)
Valuation of wind power distributed generation by using Longstaff–Schwartz option pricing method
International Nuclear Information System (INIS)
Díaz, Guzmán; Moreno, Blanca; Coto, José; Gómez-Aleixandre, Javier
2015-01-01
Highlights: • We analyze the economic value of wind power distributed generation (DG) projects. • Unlike NPV, the RO approach accounts for the flexibility of decision-making. • We adapt Longstaff–Schwartz (LS) option pricing to the multivariate wind power setting. • LS finds optimal times for DG investment under revenue uncertainty and decaying costs. • We find this method best suited for valuing DG projects of expected low revenue. - Abstract: In the context of decaying capital cost and uncertain revenues, prospective valuation of a wind power distributed generation (DG) project is difficult. The conventional net present value (NPV) presents a static picture that does not account for the value of waiting for better market conditions to proceed with a DG investment. On the contrary, real options (RO) analysis does account for the managerial flexibility to switch between options over the investment horizon. In this paper we argue that the value of a DG wind-based project can be revisited by means of the Longstaff–Schwartz method, originally intended for the evaluation of American financial options. The adaptation of this method to the wind power DG setting provides a means for (i) efficiently dealing with the several stochastic processes involved (spot electricity prices and possibly various wind speed processes) while avoiding the curse of dimensionality, (ii) accounting for the decaying capital cost of DG, and (iii) solving the perfect foresight problem presented by conventional Monte Carlo simulations. We present in this paper the procedure to follow when applying the method to the wind power DG setting. In particular, we discuss the standardization of the wind speed and spot price processes, and the advantages of building a state space model that includes all the correlated processes by adequately transforming Box–Jenkins and Ornstein–Uhlenbeck models. We also discuss the representation of the capital cost forecast by means of learning curves. On the whole, we
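The core of the Longstaff–Schwartz method, regression-based backward induction over simulated paths, can be sketched for a single-asset Bermudan put. This is a deliberate simplification of the paper's multivariate wind-and-price setting, and every market parameter below is illustrative:

```python
import random, math

def lsm_put_value(s0=100.0, strike=100.0, r=0.06, sigma=0.2,
                  t_years=1.0, n_steps=50, n_paths=5000, seed=1):
    """Price a Bermudan put by Longstaff-Schwartz least-squares Monte Carlo."""
    random.seed(seed)
    dt = t_years / n_steps
    disc = math.exp(-r * dt)
    # Simulate geometric Brownian motion price paths.
    paths = []
    for _ in range(n_paths):
        s, path = s0, []
        for _ in range(n_steps):
            s *= math.exp((r - 0.5 * sigma ** 2) * dt
                          + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0))
            path.append(s)
        paths.append(path)
    # Cash flows at maturity, then step backwards through exercise dates.
    cash = [max(strike - p[-1], 0.0) for p in paths]
    for step in range(n_steps - 2, -1, -1):
        cash = [c * disc for c in cash]          # discount to current step
        itm = [i for i, p in enumerate(paths) if strike - p[step] > 0.0]
        if len(itm) < 3:
            continue
        # Regress discounted continuation values on (1, S, S^2), ITM paths only.
        xs = [paths[i][step] for i in itm]
        ys = [cash[i] for i in itm]
        b = _quadfit(xs, ys)
        for i, x in zip(itm, xs):
            exercise = strike - x
            continuation = b[0] + b[1] * x + b[2] * x * x
            if exercise > continuation:
                cash[i] = exercise               # exercise now, drop future CFs
    return disc * sum(cash) / n_paths

def _quadfit(xs, ys):
    """Least-squares fit of y ~ b0 + b1*x + b2*x^2 via normal equations."""
    sx = [sum(x ** k for x in xs) for k in range(5)]
    a = [[sx[0], sx[1], sx[2]], [sx[1], sx[2], sx[3]], [sx[2], sx[3], sx[4]]]
    b = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for col in range(3):
        piv = max(range(col, 3), key=lambda row: abs(a[row][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, 3):
            f = a[row][col] / a[col][col]
            for k in range(col, 3):
                a[row][k] -= f * a[col][k]
            b[row] -= f * b[col]
    beta = [0.0, 0.0, 0.0]
    for row in range(2, -1, -1):
        s = b[row] - sum(a[row][k] * beta[k] for k in range(row + 1, 3))
        beta[row] = s / a[row][row]
    return beta
```

At each exercise date the continuation value is estimated by regressing realized discounted cash flows on basis functions of the current state, using only in-the-money paths; comparing that estimate with the immediate exercise value is what avoids the perfect-foresight bias of naive Monte Carlo valuation.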
Distributed coding of multiview sparse sources with joint recovery
DEFF Research Database (Denmark)
Luong, Huynh Van; Deligiannis, Nikos; Forchhammer, Søren
2016-01-01
In support of applications involving multiview sources in distributed object recognition using lightweight cameras, we propose a new method for the distributed coding of sparse sources as visual descriptor histograms extracted from multiview images. The problem is challenging due to the computati...... transform (SIFT) descriptors extracted from multiview images shows that our method leads to bit-rate saving of up to 43% compared to the state-of-the-art distributed compressed sensing method with independent encoding of the sources....
Road Short-Term Travel Time Prediction Method Based on Flow Spatial Distribution and the Relations
Directory of Open Access Journals (Sweden)
Mingjun Deng
2016-01-01
Full Text Available There are many short-term road travel time forecasting studies based on time series; in fact, however, road travel time depends not only on the historical travel time series but also on the historical flow of the road and its adjacent sections. Few studies have considered this. Based on the correlation between the flow spatial distribution and the road travel time series, this paper applies nearest neighbor and nonparametric regression methods to build a forecasting model. For the spatial nearest neighbor search, three different space distances are defined. In addition, two forecasting functions are introduced: one combines the forecast values with equal weights, and the other uses the reciprocal of the nearest-neighbor distance as the combination weight. The three distances are applied in the nearest neighbor search for both forecasting functions. For the travel time series, nearest neighbor and nonparametric regression are applied as well. Minimizing the forecast error variance is then used as the objective to establish the combination model. The empirical results show that the combination model improves the forecast performance considerably. Moreover, evaluation of the computational complexity shows that the proposed method can satisfy the real-time requirement.
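One of the two combination rules described above, weighting each nearest neighbour by the reciprocal of its distance, might be sketched as follows. The state vector and Euclidean distance here are generic placeholders for the flow-based distances defined in the paper:

```python
from math import sqrt

def knn_travel_time(history, query_state, k=3):
    """Inverse-distance-weighted nearest-neighbour forecast.
    history: list of (state_vector, observed_travel_time) pairs;
    query_state: current flow/travel-time state vector."""
    def dist(a, b):
        return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # k nearest neighbours in state space.
    nearest = sorted(history, key=lambda pair: dist(pair[0], query_state))[:k]
    num = den = 0.0
    for state, t in nearest:
        d = dist(state, query_state)
        if d == 0.0:
            return t            # exact match found in the history
        w = 1.0 / d             # reciprocal-distance combination weight
        num += w * t
        den += w
    return num / den
```

The equal-weight variant is recovered by replacing `w = 1.0 / d` with `w = 1.0`.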
Methods to Regulate Unbundled Transmission and Distribution Business on Electricity Markets
International Nuclear Information System (INIS)
Forsberg, Kaj; Fritz, Peter
2003-11-01
The regulation of distribution utilities is evolving from the traditional approach based on a cost of service or rate of return remuneration, to ways of regulation more specifically focused on providing incentives for improving efficiency, known as performance-based regulation or ratemaking. Modern regulation systems are also, to a higher degree than previously, intended to simulate competitive market conditions. The Market Design 2003-conference gathered people from 18 countries to discuss 'Methods to regulate unbundled transmission and distribution business on electricity markets'. Speakers from nine different countries and backgrounds (academics, industry and regulatory) presented their experiences and most recent works on how to make the regulation of unbundled distribution business as accurate as possible. This paper does not claim to be a fully representative summary of everything that was presented or discussed during the conference. Rather, it is a purposely restricted document where we focus on a few central themes and experiences from different countries
International Nuclear Information System (INIS)
Wada, Hiroshi; Igari, Toshihide; Kitade, Shoji.
1989-01-01
A prediction method was proposed for plastic ratcheting of a cylinder, which was subjected to axially moving temperature distribution without primary stress. First, a mechanism of this ratcheting was proposed, which considered the movement of temperature distribution as a driving force of this phenomenon. Predictive equations of the ratcheting strain for two representative temperature distributions were proposed based on this mechanism by assuming the elastic-perfectly-plastic material behavior. Secondly, an elastic-plastic analysis was made on a cylinder subjected to the representative two temperature distributions. Analytical results coincided well with the predicted results, and the applicability of the proposed equations was confirmed. (author)
Projection methods for the analysis of molecular-frame photoelectron angular distributions
International Nuclear Information System (INIS)
Lucchese, R.R.; Montuoro, R.; Grum-Grzhimailo, A.N.; Liu, X.-J.; Pruemper, G.; Morishita, Y.; Saito, N.; Ueda, K.
2007-01-01
The analysis of the molecular-frame photoelectron angular distributions (MFPADs) is discussed within the dipole approximation. The general expressions are reviewed and strategies for extracting the maximum amount of information from different types of experimental measurements are considered. The analysis of the N 1s photoionization of NO is given to illustrate the method
A "total parameter estimation" method in the verification of distributed hydrological models
Wang, M.; Qin, D.; Wang, H.
2011-12-01
Conventionally, hydrological models are used for runoff or flood forecasting, and model parameters are therefore commonly estimated from discharge measurements at the catchment outlet. With advances in hydrological science and computer technology, distributed hydrological models based on physical mechanisms, such as SWAT, MIKESHE, and WEP, have gradually become the mainstream in hydrology. However, the assessment of distributed hydrological models and the determination of their parameters still rely on runoff and, occasionally, groundwater level measurements. In many countries, including China, it is essential to understand the local and regional water cycle: not only do we need to simulate the runoff generation process for flood forecasting in wet areas, we also need to grasp the water cycle pathways and the consumption and transformation processes in arid and semi-arid regions for conservation and integrated water resources management. Because a distributed hydrological model can simulate the physical processes within a catchment, it can provide a more realistic representation of the actual water cycle. Runoff is the combined result of various hydrological processes, so using runoff alone for parameter estimation is inherently problematic, and its accuracy is difficult to assess. In particular, in arid areas such as the Haihe River Basin in China, runoff accounts for only 17% of rainfall and is concentrated in the rainy season from June to August; during the other months, many of the perennial rivers within the basin dry up. Thus a single runoff simulation does not fully exploit a distributed hydrological model in arid and semi-arid regions. This paper proposes a "total parameter estimation" method to verify distributed hydrological models across multiple water cycle processes, including runoff, evapotranspiration, groundwater, and soil water, and applies it to the Haihe river basin in
Energy Technology Data Exchange (ETDEWEB)
Ono, Ryo; Oda, Tetsuji [Department of Electrical Engineering, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656 (Japan)
2004-03-07
The spatial distribution of ozone density is measured in pulsed corona discharges with a 40 μm spatial resolution using a two-dimensional laser absorption method. Discharge occurs in a 13 mm point-to-plane gap in dry air with a pulse duration of 100 ns. The result shows that the ozone density increases for about 100 μs after the discharge pulse. The rate coefficient of the ozone-producing reaction, O + O₂ + M → O₃ + M, is estimated to be 3.5 × 10⁻³⁴ cm⁶ s⁻¹. It is observed that ozone is mostly distributed in the secondary-streamer channel. This suggests that most of the ozone is produced by the secondary streamer, not the primary streamer. After the discharge pulse, ozone diffuses into the background from the secondary-streamer channel. The diffusion coefficient of ozone is estimated to be approximately 0.1 to 0.2 cm² s⁻¹.
International Nuclear Information System (INIS)
Ono, Ryo; Oda, Tetsuji
2004-01-01
The spatial distribution of ozone density is measured in pulsed corona discharges with a 40 μm spatial resolution using a two-dimensional laser absorption method. Discharge occurs in a 13 mm point-to-plane gap in dry air with a pulse duration of 100 ns. The result shows that the ozone density increases for about 100 μs after the discharge pulse. The rate coefficient of the ozone-producing reaction, O + O₂ + M → O₃ + M, is estimated to be 3.5 × 10⁻³⁴ cm⁶ s⁻¹. It is observed that ozone is mostly distributed in the secondary-streamer channel. This suggests that most of the ozone is produced by the secondary streamer, not the primary streamer. After the discharge pulse, ozone diffuses into the background from the secondary-streamer channel. The diffusion coefficient of ozone is estimated to be approximately 0.1 to 0.2 cm² s⁻¹.
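As a consistency check on the reported rate coefficient, one can estimate the pseudo-first-order lifetime of atomic oxygen against the three-body reaction. The number densities below are textbook values for dry air at roughly atmospheric pressure, not figures taken from the paper:

```python
# Characteristic time for O + O2 + M -> O3 + M at ~1 atm.
k3 = 3.5e-34          # cm^6 s^-1, rate coefficient from the abstract
n_air = 2.5e19        # cm^-3, total number density M (assumed, ~1 atm, ~290 K)
n_o2 = 0.21 * n_air   # cm^-3, O2 fraction of dry air

# Pseudo-first-order lifetime of atomic O against conversion to ozone.
tau = 1.0 / (k3 * n_o2 * n_air)
print(f"O-atom conversion time constant: {tau * 1e6:.0f} us")
```

The resulting time constant of roughly 20 μs means the ozone density approaches its final value after a few time constants, which is consistent with the ~100 μs rise reported in the abstract.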
Applying Multivariate Discrete Distributions to Genetically Informative Count Data.
Kirkpatrick, Robert M; Neale, Michael C
2016-03-01
We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of two discrete models. The new methods are implemented using R and OpenMx and are freely available.
Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods
Gong, W.; Duan, Q.; Huo, X.
2017-12-01
Parameter calibration has been demonstrated as an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, and weather and climate models. Traditional optimization algorithms usually require a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling based optimization methods (the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE), the number of model evaluations can be significantly reduced to several hundred, making it possible to calibrate very expensive dynamic models, such as regional high-resolution land surface models, weather forecast models such as WRF, and intermediate-complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on potential applications of surrogate-based optimization methods.
Impact of distributed generation on distribution investment deferral
International Nuclear Information System (INIS)
Mendez, V.H.; Rivier, J.; Fuente, J.I. de la; Gomez, T.; Arceluz, J.; Marin, J.; Madurga, A.
2006-01-01
The amount of distributed generation (DG) is increasing worldwide, and it is foreseen that in the future it will play an important role in electrical energy systems. DG is located in distribution networks close to consumers, or even on the consumers' side of the meter. Therefore, the net demand to be supplied through transmission and distribution networks may decrease, making it possible to postpone reinforcement of existing networks. This paper proposes a method to assess the impact of DG on the deferral of distribution network investment in the long term. Due to the randomness of the variables that bear on this question (load demand patterns, DG hourly energy production, DG availability, etc.), a probabilistic approach using Monte Carlo simulation is adopted. Several scenarios characterized by different DG penetration and concentration levels, and DG technology mixes, are analyzed. Results show that, once the initial network reinforcements for DG connection have been accomplished, in the medium and long term DG can defer feeder and/or transformer reinforcements. (author)
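The probabilistic argument can be illustrated with a toy Monte Carlo in the spirit of the paper's approach. The feeder rating, load statistics, and DG parameters below are invented purely for illustration and are not taken from the paper:

```python
import random

def deferral_probability(feeder_limit_mw=10.0, peak_load_mw=9.5,
                         load_sd=0.5, dg_capacity_mw=1.5,
                         dg_availability=0.9, capacity_factor=0.35,
                         n_trials=20000, seed=7):
    """Monte Carlo estimate of the probability that net peak demand
    (load minus DG output) stays within the existing feeder rating,
    i.e. that reinforcement could be deferred for the period studied.
    All parameter values are illustrative placeholders."""
    random.seed(seed)
    ok = 0
    for _ in range(n_trials):
        load = random.gauss(peak_load_mw, load_sd)
        # DG is either available or on outage during the system peak.
        available = random.random() < dg_availability
        # Crude uniform output model with mean = capacity * capacity factor.
        output = (dg_capacity_mw * capacity_factor * 2.0 * random.random()
                  if available else 0.0)
        if load - output <= feeder_limit_mw:
            ok += 1
    return ok / n_trials
```

Comparing runs with and without DG capacity shows the mechanism the paper quantifies: with DG present, the net peak demand respects the feeder limit more often, so reinforcement can more likely be deferred.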
International Nuclear Information System (INIS)
Caldarola, L.
1976-01-01
A method is proposed for the analytical evaluation of the cumulative failure probability distribution of complex repairable systems. The method is based on a set of integral equations, each one referring to a specific minimal cut set of the system. Each integral equation links the unavailability of a minimal cut set to its failure probability density distribution and to the probability that the minimal cut set is down at time t under the condition that it was down at time t' (t' ≤ t). The limitations on the applicability of the method are also discussed. It is concluded that the method is applicable if the process describing the failure of a minimal cut set is a 'delayed semi-regenerative process'. (Auth.)
Prestressing force monitoring method for a box girder through distributed long-gauge FBG sensors
Chen, Shi-Zhi; Wu, Gang; Xing, Tuo; Feng, De-Cheng
2018-01-01
Monitoring prestressing forces is essential for prestressed concrete box girder bridges. However, current prestressing-force monitoring methods are not applicable to a box girder, either because the sensor setup is constrained or because the shear lag effect is not properly accounted for. Building on a previous analysis model of the shear lag effect in box girders, this paper proposes an indirect method for on-site determination of the prestressing force in a concrete box girder using distributed long-gauge fiber Bragg grating sensors. The performance of this method was first verified by numerical simulation for three different distribution forms of prestressing tendons. An experiment involving two concrete box girders was then conducted to study the feasibility of the method under different prestressing levels. The results of both the numerical simulation and the lab experiment validate the method's practicability for a box girder.
Dirichlet and Related Distributions Theory, Methods and Applications
Ng, Kai Wang; Tang, Man-Lai
2011-01-01
The Dirichlet distribution appears in many areas of application, which include modelling of compositional data, Bayesian analysis, statistical genetics, and nonparametric inference. This book provides a comprehensive review of the Dirichlet distribution and two extended versions, the Grouped Dirichlet Distribution (GDD) and the Nested Dirichlet Distribution (NDD), arising from likelihood and Bayesian analysis of incomplete categorical data and survey data with non-response. The theoretical properties and applications are also reviewed in detail for other related distributions, such as the inve
A strategy for improved computational efficiency of the method of anchored distributions
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability that a "bundle" of similar model parametrizations replicates the field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation, we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
Large-Scale No-Show Patterns and Distributions for Clinic Operational Research
Directory of Open Access Journals (Sweden)
Michael L. Davies
2016-02-01
Full Text Available Patient no-shows for scheduled primary care appointments are common. Unused appointment slots reduce patient quality of care, access to services and provider productivity while increasing loss to follow-up and medical costs. This paper describes patterns of no-show variation by patient age, gender, appointment age, and type of appointment request for six individual service lines in the United States Veterans Health Administration (VHA). This retrospective observational descriptive project examined 25,050,479 VHA appointments contained in individual-level records for eight years (FY07-FY14) for 555,183 patients. Multifactor analysis of variance (ANOVA) was performed, with no-show rate as the dependent variable, and gender, age group, appointment age, new patient status, and service line as factors. The analyses revealed that males had higher no-show rates than females to age 65, at which point males and females exhibited similar rates. The average no-show rates decreased with age until 75–79, whereupon rates increased. As appointment age increased, males and new patients had increasing no-show rates. Younger patients are especially prone to no-show as appointment age increases. These findings provide novel information to healthcare practitioners and management scientists to more accurately characterize no-show and attendance rates and the impact of certain patient factors. Future general population data could determine whether findings from VHA data generalize to others.
Large-Scale No-Show Patterns and Distributions for Clinic Operational Research.
Davies, Michael L; Goffman, Rachel M; May, Jerrold H; Monte, Robert J; Rodriguez, Keri L; Tjader, Youxu C; Vargas, Dominic L
2016-02-16
Patient no-shows for scheduled primary care appointments are common. Unused appointment slots reduce patient quality of care, access to services and provider productivity while increasing loss to follow-up and medical costs. This paper describes patterns of no-show variation by patient age, gender, appointment age, and type of appointment request for six individual service lines in the United States Veterans Health Administration (VHA). This retrospective observational descriptive project examined 25,050,479 VHA appointments contained in individual-level records for eight years (FY07-FY14) for 555,183 patients. Multifactor analysis of variance (ANOVA) was performed, with no-show rate as the dependent variable, and gender, age group, appointment age, new patient status, and service line as factors. The analyses revealed that males had higher no-show rates than females to age 65, at which point males and females exhibited similar rates. The average no-show rates decreased with age until 75-79, whereupon rates increased. As appointment age increased, males and new patients had increasing no-show rates. Younger patients are especially prone to no-show as appointment age increases. These findings provide novel information to healthcare practitioners and management scientists to more accurately characterize no-show and attendance rates and the impact of certain patient factors. Future general population data could determine whether findings from VHA data generalize to others.
Survey Shows Variation in Ph.D. Methods Training.
Steeves, Leslie; And Others
1983-01-01
Reports on a 1982 survey of journalism graduate studies indicating considerable variation in research methods requirements and emphases in 23 universities offering doctoral degrees in mass communication. (HOD)
International Nuclear Information System (INIS)
Ballini, J.-P.; Cazes, P.; Turpin, P.-Y.
1976-01-01
Analysing the histogram of anode pulse amplitudes allows discussion of the hypotheses proposed to account for the statistical processes of secondary multiplication in a photomultiplier. In an earlier work, good agreement was obtained between experimental and reconstructed spectra, assuming a first-dynode distribution composed of two Poisson distributions with distinct mean values. This first approximation motivated the search for a method that could give the weights of several Poisson distributions with distinct mean values. Three methods are briefly described: classical linear regression, constrained regression (d'Esopo's method), and regression on variables subject to error. These methods yield an approximation of the frequency function that represents the dispersion of the point-wise mean gain around the overall first-dynode mean gain. Comparison between this function and the one employed in the Polya distribution shows that the latter is inadequate to describe the statistical process of secondary multiplication. Numerous spectra obtained with two kinds of photomultiplier working under different physical conditions have been analysed. Two points are then discussed: whether the frequency function represents the dynode structure and the interdynode collection process, and whether the model (in which the multiplication process of all dynodes but the first is Poissonian) is valid regardless of the photomultiplier and the operating conditions. (Auth.)
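For the two-component case mentioned above, the least-squares weight of the first Poisson component has a closed form, obtained by projecting the histogram onto the difference of the two component distributions. A minimal sketch, with mean values chosen arbitrarily rather than taken from the paper:

```python
import math

def pois(k, m):
    """Poisson probability P(K = k) with mean m."""
    return math.exp(-m) * m ** k / math.factorial(k)

def mixture_weight(hist, m1, m2):
    """Least-squares weight w for hist ~ w*Poisson(m1) + (1-w)*Poisson(m2),
    i.e. the two-component first-dynode model: minimize
    sum_k (hist_k - w*p1_k - (1-w)*p2_k)^2 in closed form."""
    num = den = 0.0
    for k, h in enumerate(hist):
        p1, p2 = pois(k, m1), pois(k, m2)
        num += (h - p2) * (p1 - p2)
        den += (p1 - p2) ** 2
    return num / den
```

Estimating the weights of more than two components leads to the ill-conditioned linear systems for which the abstract's constrained-regression methods were introduced.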
Directory of Open Access Journals (Sweden)
J. D. Herman
2013-07-01
Full Text Available The increase in spatially distributed hydrologic modeling warrants a corresponding increase in diagnostic methods capable of analyzing complex models with large numbers of parameters. Sobol' sensitivity analysis has proven to be a valuable tool for diagnostic analyses of hydrologic models. However, for many spatially distributed models, the Sobol' method requires a prohibitive number of model evaluations to reliably decompose output variance across the full set of parameters. We investigate the potential of the method of Morris, a screening-based sensitivity approach, to provide results sufficiently similar to those of the Sobol' method at a greatly reduced computational expense. The methods are benchmarked on the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) over a six-month period in the Blue River watershed, Oklahoma, USA. The Sobol' method required over six million model evaluations to ensure reliable sensitivity indices, corresponding to more than 30 000 computing hours and roughly 180 gigabytes of storage space. We find that the method of Morris is able to correctly screen the most and least sensitive parameters with 300 times fewer model evaluations, requiring only 100 computing hours and 1 gigabyte of storage space. The method of Morris proves to be a promising diagnostic approach for global sensitivity analysis of highly parameterized, spatially distributed hydrologic models.
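The elementary-effects computation at the heart of the method of Morris can be sketched in a few lines on the unit hypercube; any cheap analytic function can stand in for an expensive model such as HL-RDHM:

```python
import random

def morris_screening(f, n_params, n_traj=20, levels=4, seed=0):
    """Method of Morris: mean absolute elementary effect (mu*) per parameter
    on the unit hypercube, from randomized one-at-a-time trajectories."""
    random.seed(seed)
    delta = levels / (2.0 * (levels - 1))        # standard Morris step size
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        # Random base point on the grid, low enough that +delta stays in [0, 1].
        x = [random.randrange(levels // 2) / (levels - 1)
             for _ in range(n_params)]
        y = f(x)
        # Perturb each parameter once, in random order, reusing evaluations.
        for i in random.sample(range(n_params), n_params):
            x_new = list(x)
            x_new[i] += delta
            y_new = f(x_new)
            effects[i].append(abs(y_new - y) / delta)
            x, y = x_new, y_new
    return [sum(e) / len(e) for e in effects]    # mu* per parameter
```

Parameters are then ranked by mu*; those with negligible mu* can be screened out before a far more expensive variance-based (Sobol') analysis, which is the economy the abstract quantifies.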
Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto
2016-04-01
Estimation of extreme rainfall from data constitutes one of the most important issues in statistical hydrology, as it is associated with the design of hydraulic structures and flood water management. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a generalized Pareto (GP) distribution model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data, graphical methods where one studies the dependence of GP distribution parameters (or related metrics) on the threshold level u, and goodness-of-fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GP distribution model is applicable. In this work, we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 daily rainfall records from the NOAA-NCDC open-access database, with more than 110 years of data. We find that non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while methods that are based on asymptotic properties of the upper distribution tail lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low, i.e. on the order of 0.1–0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on pre-asymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2 and 12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the
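One of the graphical approaches referred to above is the mean-residual-life (mean-excess) plot: if excesses above u follow a GP distribution, the mean excess is a linear function of u (constant for an exponential tail), so one looks for the lowest u beyond which the plot is roughly linear. A minimal sketch of the underlying computation:

```python
def mean_excess(data, thresholds):
    """Mean-excess (mean residual life) values for graphical GP threshold
    selection: for each u, the average of (x - u) over all x > u."""
    out = []
    for u in thresholds:
        excesses = [x - u for x in data if x > u]
        out.append(sum(excesses) / len(excesses) if excesses else float("nan"))
    return out
```

In practice the sequence of thresholds would span the range of the daily rainfall record, and the threshold estimate is read off where the plotted values settle onto a straight line.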
Energy Technology Data Exchange (ETDEWEB)
Dewey, Steven Clifford, E-mail: sdewey001@gmail.com [United States Air Force School of Aerospace Medicine, Occupational Environmental Health Division, Health Physics Branch, Radiation Analysis Laboratories, 2350 Gillingham Drive, Brooks City-Base, TX 78235 (United States); Whetstone, Zachary David, E-mail: zacwhets@umich.edu [Radiological Health Engineering Laboratory, Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, 1906 Cooley Building, Ann Arbor, MI 48109-2104 (United States); Kearfott, Kimberlee Jane, E-mail: kearfott@umich.edu [Radiological Health Engineering Laboratory, Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, 1906 Cooley Building, Ann Arbor, MI 48109-2104 (United States)
2011-06-15
When characterizing environmental radioactivity, whether in the soil or within concrete building structures undergoing remediation or decommissioning, it is highly desirable to know the radionuclide depth distribution. This is typically modeled using continuous analytical expressions, whose forms are believed to best represent the true source distributions. In situ gamma ray spectroscopic measurements are combined with these models to fully describe the source. Currently, the choice of analytical expressions is based upon prior experimental core sampling results at similar locations, any known site history, or radionuclide transport models. This paper presents a method, employing multiple in situ measurements at a single site, for determining the analytical form that best represents the true depth distribution present. The measurements can be made using a variety of geometries, each of which has a different sensitivity variation with source spatial distribution. Using non-linear least squares numerical optimization methods, the results can be fit to a collection of analytical models and the parameters of each model determined. The analytical expression that results in the fit with the lowest residual is selected as the most accurate representation. A cursory examination is made of the effects of measurement errors on the method. - Highlights: > A new method for determining radionuclide distribution as a function of depth is presented. > Multiple measurements are used, with enough measurements to determine the unknowns in analytical functions that might describe the distribution. > The measurements must be as independent as possible, which is achieved through special collimation of the detector. > Although the effects of measurement errors on the results may be significant, an improvement over other methods is anticipated.
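The model-selection step described above (fit several candidate analytical depth-distribution forms, keep the one with the lowest residual) can be illustrated with a minimal sketch. The functional forms, depth grid and noise level below are assumed for demonstration only, not taken from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# Two candidate analytical depth-profile models (forms assumed for illustration).
def exponential(z, a0, alpha):
    return a0 * np.exp(-alpha * z)            # surface-deposited source

def gaussian(z, a0, z0, w):
    return a0 * np.exp(-((z - z0) / w) ** 2)  # buried source

def select_model(z, activity, candidates):
    """Fit each candidate model; keep the one with the lowest residual norm."""
    best = None
    for name, f, p0 in candidates:
        try:
            popt, _ = curve_fit(f, z, activity, p0=p0, maxfev=5000)
        except RuntimeError:
            continue                           # fit failed to converge; skip
        resid = np.linalg.norm(activity - f(z, *popt))
        if best is None or resid < best[2]:
            best = (name, popt, resid)
    return best

z = np.linspace(0.0, 10.0, 30)                 # depth, cm
rng = np.random.default_rng(1)
truth = gaussian(z, 100.0, 3.0, 2.0)           # "true" buried distribution
data = truth + rng.normal(0.0, 1.0, z.size)    # noisy synthetic observations

name, popt, resid = select_model(z, data, [
    ("exponential", exponential, (50.0, 1.0)),
    ("gaussian", gaussian, (50.0, 2.0, 2.0)),
])
print(name, resid)
```

The real method fits detector responses from multiple collimated geometries rather than the depth profile directly, but the lowest-residual selection logic is the same.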
International Nuclear Information System (INIS)
Takac, S.M.
1972-01-01
The method is based on perturbation of the reactor cell by a few up to a few tens of percent. Measurements were performed for square lattice cells of the zero-power reactors Anna, NORA and RB, with metal uranium and uranium oxide fuel elements, and water, heavy water and graphite moderators. The character and functional dependence of the perturbations were obtained from the experimental results. Zero perturbation was determined by extrapolation, thus obtaining the real physical neutron flux distribution in the reactor cell. Simple diffusion theory for partial plate cell perturbation was developed for verification of the perturbation method. The results of these calculations proved that introducing the perturbation sample into the fuel flattens the thermal neutron density by an amount dependent on the amplitude of the applied perturbation. The extrapolation applied to the perturbed distributions was found to be justified
Research on Optimized Torque-Distribution Control Method for Front/Rear Axle Electric Wheel Loader
Directory of Open Access Journals (Sweden)
Zhiyu Yang
2017-01-01
Full Text Available The optimized torque-distribution control method (OTCM) is a critical technology for the front/rear axle electric wheel loader (FREWL) to improve operation performance and energy efficiency. In this paper, a longitudinal dynamics model of the FREWL is created. Based on the model, the objective functions are that the weighted sum of the variance and mean of tire workload is minimal and the total motor efficiency is maximal. Four nonlinear constrained optimization algorithms, the quasi-Newton Lagrange multiplier method, sequential quadratic programming, adaptive genetic algorithms, and particle swarm optimization with random weighting and natural selection, all of which converge quickly at low computational cost, are used to solve the objective functions. The simulation results show that, compared to the uncontrolled FREWL, the controlled FREWL utilizes the adhesion ability better and slips less. It is clear that the controlled FREWL gains better operation performance and higher energy efficiency. The energy efficiency of the FREWL in the equipment-transferring condition is increased by 13–29%. In addition, this paper discusses the applicability of OTCM and analyzes the reasons for the different simulation results of the four algorithms.
Li, Xue-chen; Jia, Peng-ying; Liu, Zhi-hui; Li, Li-chun; Dong, Li-fang
2008-12-01
In the present paper, stable glow discharges were obtained in air at low pressure with a dielectric barrier surface discharge device. Light emission from the discharge was detected by photomultiplier tubes; the results show that the light signal exhibited one discharge pulse per half cycle of the applied voltage. The light pulses were asymmetric between the positive and negative half cycles of the applied voltage. Images of the glow surface discharge were processed with Photoshop software; the results indicate that the emission intensity remained almost constant at different places with the same distance from the powered electrode, while the emission intensity decreased with increasing distance from the powered electrode. In a dielectric barrier discharge, the net electric field is determined by the applied voltage and the wall charges accumulated on the dielectric layer during the discharge; consequently, it is important to obtain information about the net electric field distribution. For this purpose, the optical emission spectroscopy method was used. The distribution of the net electric field can be deduced from the intensity ratio of the 391.4 nm spectral line, emitted from the first negative system of N2+ (B 2Σu+ --> X 2Σg+), to the 337.1 nm line, emitted from the second positive system of N2 (C 3Πu --> B 3Πg). The results show that the electric field near the powered electrode is higher than at the edge of the discharge. These experimental results are very important for numerical studies and industrial applications of the surface discharge.
Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro
2017-07-01
Numerical analysis of the rotation of an ultrasonically levitated droplet in centrifugal coordinates is discussed. A droplet levitated in an acoustic chamber is simulated using the distributed point source method and the moving particle semi-implicit method. Centrifugal coordinates are adopted to avoid the Laplacian differential error, which causes numerical divergence or inaccuracy in global-coordinate calculations. Consequently, the duration of calculation stability has increased to 30 times that of the previous paper. Moreover, the droplet radius versus rotational acceleration characteristics show a trend similar to the theoretical and experimental values in the literature.
International Nuclear Information System (INIS)
Fazekas, A.; Posch, E.; Harsing, L.
1979-01-01
The blood circulation of the incisors, dental pulp and tongue was determined using the measurement of 86 Rb distribution in rats. The results were compared with those obtained by a simultaneous micropearl method. It was found that 37 per cent of the 86 Rb in dental tissues is localized in the hard propiodentium, with a high proportion diffusing from the periodontium. The 86 Rb fraction localized in the tongue represents its blood circulation. (author)
DEFF Research Database (Denmark)
Chen, Shuheng; Hu, Weihao; Chen, Zhe
2016-01-01
In this paper, an efficient methodology is proposed to deal with segmented-time reconfiguration problem of distribution networks coupled with segmented-time reactive power control of distributed generators. The target is to find the optimal dispatching schedule of all controllable switches...... and distributed generators’ reactive powers in order to minimize comprehensive cost. Corresponding constraints, including voltage profile, maximum allowable daily switching operation numbers (MADSON), reactive power limits, and so on, are considered. The strategy of grouping branches is used to simplify...... (FAHPSO) is implemented in VC++ 6.0 program language. A modified version of the typical 70-node distribution network and several real distribution networks are used to test the performance of the proposed method. Numerical results show that the proposed methodology is an efficient method for comprehensive...
Directory of Open Access Journals (Sweden)
W. Castaings
2009-04-01
Full Text Available Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs.
In this contribution, it is shown that variational methods hold considerable potential for distributed catchment-scale hydrology. A distributed flash flood model, coupling kinematic wave overland flow and Green-Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case.
It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight into the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run), and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.
For the estimation of model parameters, adjoint-based derivatives were found to be exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently of the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted.
Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.
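The SVD-based dimension-reduction idea in the last paragraph can be demonstrated on a synthetic sensitivity (Jacobian) matrix; the sizes and the low-rank construction below are assumed purely for illustration:

```python
import numpy as np

# Illustrative sketch (synthetic numbers): SVD of a sensitivity (Jacobian)
# matrix J, with rows = observations and columns = distributed parameters.
# A few leading singular vectors often capture most of the rainfall-runoff
# sensitivity, motivating a reduced parametrization along those directions.
rng = np.random.default_rng(0)
n_obs, n_par, rank = 50, 200, 3
J = rng.normal(size=(n_obs, rank)) @ rng.normal(size=(rank, n_par))  # low-rank signal
J += 1e-6 * rng.normal(size=(n_obs, n_par))                          # small noise

U, s, Vt = np.linalg.svd(J, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99)) + 1   # directions for 99% of variance
print(f"{k} of {n_par} directions capture 99% of the sensitivity")
```

The rows of `Vt[:k]` would then serve as the reduced parameter basis, which is the parametrization strategy (plus regularization) the abstract suggests.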
A new method for assessing judgmental distributions
Moors, J.J.A.; Schuld, M.H.; Mathijssen, A.C.A.
1995-01-01
For a number of statistical applications, subjective estimates of some distributional parameters - or even complete densities - are needed. The literature agrees that it is wise behaviour to ask only for some quantiles of the distribution; from these, the desired quantities are extracted. Quite a lot
Pre-Aggregation with Probability Distributions
DEFF Research Database (Denmark)
Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach
2006-01-01
Motivated by the increasing need to analyze complex, uncertain multidimensional data this paper proposes probabilistic OLAP queries that are computed using probability distributions rather than atomic values. The paper describes how to create probability distributions from base data, and how...... the distributions can be subsequently used in pre-aggregation. Since the probability distributions can become large, we show how to achieve good time and space efficiency by approximating the distributions. We present the results of several experiments that demonstrate the effectiveness of our methods. The work...... is motivated with a real-world case study, based on our collaboration with a leading Danish vendor of location-based services. This paper is the first to consider the approximate processing of probabilistic OLAP queries over probability distributions....
Distribution method optimization : inventory flexibility
Asipko, D.
2010-01-01
This report presents the outcome of the Logistics Design Project carried out for Nike Inc. This project has two goals: create a model to measure a flexibility aspect of the inventory usage in different Nike distribution channels, and analyze opportunities of changing the decision model of splitting
Directory of Open Access Journals (Sweden)
Yerriswamy Wooluru
2016-06-01
Full Text Available Process capability indices are very important process quality assessment tools in the automotive industry. The common process capability indices (PCIs) Cp, Cpk and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled. Indices developed under the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed for surrogate process capability indices under non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture true capability in non-normal situations. In this paper, five methods are reviewed and a capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results revealed that the Burr-based percentile method is better than the Clements method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness <= 1.5).
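A minimal sketch of the percentile-based (Clements-type) capability calculation the abstract compares, replacing the 6-sigma spread with the empirical 0.135th-99.865th percentile range; the specification limits and the skewed sample below are assumed for illustration:

```python
import numpy as np

# Percentile-based capability sketch (Clements-type idea): use the empirical
# 0.135th / 50th / 99.865th percentiles in place of mu -/+ 3 sigma.
# The data and specification limits are assumed for illustration.
rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=0.25, size=10000)  # skewed "resistivity"
LSL, USL = 0.4, 2.2                                      # assumed spec limits

p_lo, p_med, p_hi = np.percentile(data, [0.135, 50.0, 99.865])
Cp_pct = (USL - LSL) / (p_hi - p_lo)
Cpk_pct = min((USL - p_med) / (p_hi - p_med),
              (p_med - LSL) / (p_med - p_lo))
print(f"Cp = {Cp_pct:.2f}, Cpk = {Cpk_pct:.2f}")
```

The Burr and Clements variants discussed in the paper differ mainly in how these tail percentiles are estimated (fitted Burr distribution vs. skewness/kurtosis tables) rather than in the index formulas themselves.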
Directory of Open Access Journals (Sweden)
David W Redding
Full Text Available Statistical approaches for inferring the spatial distribution of taxa (Species Distribution Models, SDMs) commonly rely on available occurrence data, which is often clumped and geographically restricted. Although available SDM methods address some of these factors, they could be more directly and accurately modelled using a spatially-explicit approach. Software to fit models with spatial autocorrelation parameters in SDMs are now widely available, but whether such approaches for inferring SDMs aid predictions compared to other methodologies is unknown. Here, within a simulated environment using 1000 generated species' ranges, we compared the performance of two commonly used non-spatial SDM methods (Maximum Entropy Modelling, MAXENT, and boosted regression trees, BRT) to a spatial Bayesian SDM method (fitted using R-INLA), when the underlying data exhibit varying combinations of clumping and geographic restriction. Finally, we tested how any recommended methodological settings designed to account for spatially non-random patterns in the data impact inference. The spatial Bayesian SDM method was the most consistently accurate method, being in the top 2 most accurate methods in 7 out of 8 data sampling scenarios. Within high-coverage sample datasets, all methods performed fairly similarly. When sampling points were randomly spread, BRT had a 1-3% greater accuracy over the other methods, and when samples were clumped, the spatial Bayesian SDM method had a 4%-8% better AUC score. Alternatively, when sampling points were restricted to a small section of the true range, all methods were on average 10-12% less accurate, with greater variation among the methods. Model inference under the recommended settings to account for autocorrelation was not impacted by clumping or restriction of data, except for the complexity of the spatial regression term in the spatial Bayesian model. Methods, such as those made available by R-INLA, can be successfully used to account
Redding, David W; Lucas, Tim C D; Blackburn, Tim M; Jones, Kate E
2017-01-01
Statistical approaches for inferring the spatial distribution of taxa (Species Distribution Models, SDMs) commonly rely on available occurrence data, which is often clumped and geographically restricted. Although available SDM methods address some of these factors, they could be more directly and accurately modelled using a spatially-explicit approach. Software to fit models with spatial autocorrelation parameters in SDMs are now widely available, but whether such approaches for inferring SDMs aid predictions compared to other methodologies is unknown. Here, within a simulated environment using 1000 generated species' ranges, we compared the performance of two commonly used non-spatial SDM methods (Maximum Entropy Modelling, MAXENT and boosted regression trees, BRT), to a spatial Bayesian SDM method (fitted using R-INLA), when the underlying data exhibit varying combinations of clumping and geographic restriction. Finally, we tested how any recommended methodological settings designed to account for spatially non-random patterns in the data impact inference. Spatial Bayesian SDM method was the most consistently accurate method, being in the top 2 most accurate methods in 7 out of 8 data sampling scenarios. Within high-coverage sample datasets, all methods performed fairly similarly. When sampling points were randomly spread, BRT had a 1-3% greater accuracy over the other methods and when samples were clumped, the spatial Bayesian SDM method had a 4%-8% better AUC score. Alternatively, when sampling points were restricted to a small section of the true range all methods were on average 10-12% less accurate, with greater variation among the methods. Model inference under the recommended settings to account for autocorrelation was not impacted by clumping or restriction of data, except for the complexity of the spatial regression term in the spatial Bayesian model. Methods, such as those made available by R-INLA, can be successfully used to account for spatial
International Nuclear Information System (INIS)
Ponting, A.C.; Nair, S.
1984-04-01
A concept extensively used in studying the consequences of accidental atmospheric radioactive releases is that of the Complementary Cumulative Distribution Function, CCDF. Various methods of calculating CCDFs have been developed, with particular applications in putting degraded-core accidents in perspective and in identifying release sequences leading to high risks. This note compares three methods, with specific reference to their accuracy and computational efficiency. For two of the methods (that used in the US Reactor Safety Study code CRAC2 and an extended version of that method), the effects of varying the sector width and considering site-specific population distributions have been determined. For the third method it is only necessary to consider the effects of site-specific population distributions. (author)
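For reference, a complementary cumulative distribution function over a discrete set of consequence magnitudes can be computed as follows; the magnitudes and probabilities below are illustrative only, not drawn from the note:

```python
import numpy as np

# Minimal empirical CCDF sketch: for consequence magnitudes x (e.g. doses or
# health effects per release sequence) with outcome probabilities p, the CCDF
# gives the probability that the consequence exceeds each level.
x = np.array([1.0, 5.0, 2.0, 10.0])   # consequence magnitudes (assumed)
p = np.array([0.4, 0.2, 0.3, 0.1])    # probabilities of each outcome (assumed)

order = np.argsort(x)
x_sorted, p_sorted = x[order], p[order]
# P(X > x_i): subtract the cumulative probability at or below each level
ccdf = np.clip(1.0 - np.cumsum(p_sorted), 0.0, None)

for xi, ci in zip(x_sorted, ccdf):
    print(f"P(X > {xi:g}) = {ci:.2f}")
```

The methods compared in the note differ in how the (magnitude, probability) pairs are generated over weather sequences and population sectors, not in this final exceedance step.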
How the Television Show "Mythbusters" Communicates the Scientific Method
Zavrel, Erik; Sharpsteen, Eric
2016-01-01
The importance of understanding and internalizing the scientific method can hardly be exaggerated. Unfortunately, it is all too common for high school--and even university--students to graduate with only a partial or oversimplified understanding of what the scientific method is and how to actually employ it. Help in remedying this situation may…
Directory of Open Access Journals (Sweden)
Jakob H Lagerlöf
Full Text Available To develop a general model that utilises a stochastic method to generate a vessel tree based on experimental data, and an associated irregular, macroscopic tumour; these are used to evaluate two different methods for computing oxygen distribution. A vessel tree structure, and an associated tumour of 127 cm3, were generated using a stochastic method and Bresenham's line algorithm to develop trees on two different scales and fuse them together. The vessel dimensions were adjusted through convolution and thresholding, and each vessel voxel was assigned an oxygen value. Diffusion and consumption were modelled using a Green's function approach together with Michaelis-Menten kinetics. The computations were performed using a combined tree method (CTM) and an individual tree method (ITM). Five tumour sub-sections were compared to evaluate the methods. The oxygen distributions of the same tissue samples, using different methods of computation, were considerably less similar (root mean square deviation, RMSD ≈ 0.02) than the distributions of different samples using the CTM (0.001 < RMSD < 0.01). The deviations of the ITM from the CTM increase with lower oxygen values, with the result that the ITM severely underestimates the level of hypoxia in the tumour. Kolmogorov-Smirnov (KS) tests showed that millimetre-scale samples may not represent the whole. The stochastic model managed to capture the heterogeneous nature of hypoxic fractions and, even though the simplified computation did not considerably alter the oxygen distribution, it leads to an evident underestimation of tumour hypoxia, and thereby radioresistance. For a trustworthy computation of tumour oxygenation, the interaction between adjacent microvessel trees must not be neglected, which is why the evaluation should be made at high resolution with the CTM, applied to the entire tumour.
Directory of Open Access Journals (Sweden)
Chih-Hsueh Lin
2016-04-01
Full Text Available In wireless sensor networks, sensing information must be transmitted from sensor nodes to the base station by multiple hops. Every sensor node is both a sender and a relay node that forwards the sensing information sent by other nodes. Under attack, the sensing information may be intercepted, modified, interrupted, or fabricated during transmission. Accordingly, developing mutual trust so that a secure path can be established for forwarding information is an important issue. Random key pre-distribution has been proposed to establish mutual trust among sensor nodes. This article modifies random key pre-distribution to random secret pre-distribution and incorporates identity-based cryptography to establish an effective method of building mutual trust in a wireless sensor network. In the proposed method, the base station assigns an identity and embeds n secrets into the private secret keys of every sensor node. Based on the identity and private secret keys, the mutual trust method is utilized to explore the types of trust among neighboring sensor nodes. The novel method can resist malicious attacks and satisfy the requirements of wireless sensor networks, namely resistance to compromise attacks, masquerade attacks, forgery attacks and replay attacks, authentication of forwarded messages, and security of sensing information.
Bullock, Meggan; Márquez, Lourdes; Hernández, Patricia; Ruíz, Fernando
2013-09-01
Traditional methods of aging adult skeletons suffer from the problem of age mimicry of the reference collection, as described by Bocquet-Appel and Masset (1982). Transition analysis (Boldsen et al., 2002) is a method of aging adult skeletons that addresses the problem of age mimicry of the reference collection by allowing users to select an appropriate prior probability. In order to evaluate whether transition analysis results in significantly different age estimates for adults, the method was applied to skeletal collections from Postclassic Cholula and Contact-Period Xochimilco. The resulting age-at-death distributions were then compared with age-at-death distributions for the two populations constructed using traditional aging methods. Although the traditional aging methods result in age-at-death distributions with high young adult mortality and few individuals living past the age of 50, the age-at-death distributions constructed using transition analysis indicate that most individuals who lived into adulthood lived past the age of 50. Copyright © 2013 Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Shujing Su
2015-01-01
Full Text Available For applications with widely dispersed parameters, such as large factories and storehouses, a distributed parameter measurement system based on a ring network is designed. The structure of the system and the circuit design of the master and slave nodes are described briefly. The basic protocol architecture for transmission communication is introduced, and two kinds of distributed transmission control methods are proposed. Finally, the reliability, extendibility, and control characteristics of these two methods are tested through a series of experiments, and the measurement results are compared and discussed.
Directory of Open Access Journals (Sweden)
Pūle Daina
2016-12-01
Full Text Available The prevalence of Legionella in drinking water distribution systems is a widespread problem. Outbreaks of Legionella-caused diseases occur even though various disinfectants are used to control Legionella. Conventional methods such as thermal disinfection, silver/copper ionization, ultraviolet irradiation and chlorine-based disinfection have not been effective in the long term for control of biofilm bacteria. Therefore, research to develop more effective disinfection methods is still necessary.
Model Checking Geographically Distributed Interlocking Systems Using UMC
DEFF Research Database (Denmark)
Fantechi, Alessandro; Haxthausen, Anne Elisabeth; Nielsen, Michel Bøje Randahl
2017-01-01
the relevant distributed protocols. By doing that we obey the safety guidelines of the railway signalling domain, that require formal methods to support the certification of such products. We also show how formal modelling can help designing alternative distributed solutions, while maintaining adherence...
DEFF Research Database (Denmark)
Khoshfetrat Pakazad, Sina; Hansson, Anders; Andersen, Martin S.
2017-01-01
In this paper, we propose a distributed algorithm for solving coupled problems with chordal sparsity or an inherent tree structure which relies on primal–dual interior-point methods. We achieve this by distributing the computations at each iteration, using message-passing. In comparison to existi...
ON ESTIMATION AND HYPOTHESIS TESTING OF THE GRAIN SIZE DISTRIBUTION BY THE SALTYKOV METHOD
Directory of Open Access Journals (Sweden)
Yuri Gulbin
2011-05-01
Full Text Available The paper considers the problem of validity of unfolding the grain size distribution with the back-substitution method. Due to the ill-conditioned nature of unfolding matrices, it is necessary to evaluate the accuracy and precision of parameter estimation and to verify the possibility of expected grain size distribution testing on the basis of intersection size histogram data. In order to review these questions, the computer modeling was used to compare size distributions obtained stereologically with those possessed by three-dimensional model aggregates of grains with a specified shape and random size. Results of simulations are reported and ways of improving the conventional stereological techniques are suggested. It is shown that new improvements in estimating and testing procedures enable grain size distributions to be unfolded more efficiently.
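A compact sketch of the Scheil-Schwartz-Saltykov back-substitution for spherical grains, with a round-trip check against a synthetic grain-size distribution. The class layout (equal-width classes, sphere diameter taken at the class upper edge) is one common convention and is assumed here:

```python
import numpy as np

def sphere_section_matrix(k, delta):
    """P[i, j] = probability that a random planar section through a sphere of
    diameter (j+1)*delta has a section diameter in class (i*delta, (i+1)*delta]."""
    edges = np.arange(k + 1) * delta
    D = edges[1:]
    P = np.zeros((k, k))
    for j in range(k):
        for i in range(j + 1):
            hi = np.sqrt(D[j]**2 - edges[i]**2)
            lo = np.sqrt(max(D[j]**2 - edges[i + 1]**2, 0.0))
            P[i, j] = (hi - lo) / D[j]
    return P, D

def saltykov_unfold(N_A, delta):
    """Unfold section densities N_A (per area) into grain densities N_V (per
    volume) by back-substitution, starting from the largest size class."""
    k = len(N_A)
    P, D = sphere_section_matrix(k, delta)
    N_V = np.zeros(k)
    for j in range(k - 1, -1, -1):
        tail = sum(P[j, m] * D[m] * N_V[m] for m in range(j + 1, k))
        N_V[j] = (N_A[j] - tail) / (P[j, j] * D[j])
    return N_V

# Round-trip check on a synthetic grain-size distribution:
# forward model N_A[i] = sum_j P[i, j] * D[j] * N_V[j] (N_A = N_V * D for spheres)
P, D = sphere_section_matrix(5, 1.0)
N_V_true = np.array([0.0, 1.0, 0.0, 2.0, 0.5])
N_A = P @ (D * N_V_true)
N_V_est = saltykov_unfold(N_A, 1.0)
print(N_V_est)
```

The ill-conditioning discussed in the paper shows up exactly here: with noisy `N_A`, the back-substitution can produce negative or wildly oscillating `N_V` in the small-size classes.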
Energy Technology Data Exchange (ETDEWEB)
Sundqvist, B; Gonczi, L; Koersner, I; Bergman, R; Lindh, U
1974-01-01
(d,p) reactions in 14 N were used for probing single kernels of seed for nitrogen content and nitrogen depth distributions. Comparison with the Kjeldahl method was made on individual peas and beans. The results were found to be strongly correlated. The technique to obtain depth distributions of nitrogen was also used on high- and low-lysine varieties of barley, for which large differences in nitrogen distributions were found.
Flow distribution in the accelerator-production-of-tritium target
International Nuclear Information System (INIS)
Siebe, D.A.; Spatz, T.L.; Pasamehmetoglu, K.O.; Sherman, M.P.
1999-01-01
Achieving nearly uniform flow distributions in the accelerator production of tritium (APT) target structures is an important design objective. Manifold effects tend to cause a nonuniform distribution in flow systems of this type, although nearly even distribution can be achieved. A program of hydraulic experiments is underway to provide a database for validation of calculational methodologies that may be used for analyzing this problem and to evaluate the approach with the most promise for achieving a nearly even flow distribution. Data from the initial three tests are compared to predictions made using four calculational methods. The data show that optimizing the ratio of the supply-to-return-manifold areas can produce an almost even flow distribution in the APT ladder assemblies. The calculations compare well with the data for ratios of the supply-to-return-manifold areas spanning the optimum value. Thus, the results to date show that a nearly uniform flow distribution can be achieved by carefully sizing the supply and return manifolds and that the calculational methods available are adequate for predicting the distributions through a range of conditions
Mixture distributions of wind speed in the UAE
Shin, J.; Ouarda, T.; Lee, T. S.
2013-12-01
Wind speed probability distribution is commonly used to estimate potential wind energy. The 2-parameter Weibull distribution has been most widely used to characterize the distribution of wind speed. However, it is unable to properly model wind speed regimes when wind speed distribution presents bimodal and kurtotic shapes. Several studies have concluded that the Weibull distribution should not be used for frequency analysis of wind speed without investigation of wind speed distribution. Due to these mixture distributional characteristics of wind speed data, the application of mixture distributions should be further investigated in the frequency analysis of wind speed. A number of studies have investigated the potential wind energy in different parts of the Arabian Peninsula. Mixture distributional characteristics of wind speed were detected from some of these studies. Nevertheless, mixture distributions have not been employed for wind speed modeling in the Arabian Peninsula. In order to improve our understanding of wind energy potential in Arabian Peninsula, mixture distributions should be tested for the frequency analysis of wind speed. The aim of the current study is to assess the suitability of mixture distributions for the frequency analysis of wind speed in the UAE. Hourly mean wind speed data at 10-m height from 7 stations were used in the current study. The Weibull and Kappa distributions were employed as representatives of the conventional non-mixture distributions. 10 mixture distributions are used and constructed by mixing four probability distributions such as Normal, Gamma, Weibull and Extreme value type-one (EV-1) distributions. Three parameter estimation methods such as Expectation Maximization algorithm, Least Squares method and Meta-Heuristic Maximum Likelihood (MHML) method were employed to estimate the parameters of the mixture distributions. In order to compare the goodness-of-fit of tested distributions and parameter estimation methods for
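As a minimal illustration of the mixture-fitting problem described above, the sketch below runs a basic Expectation-Maximization loop on a simulated bimodal wind-speed-like sample, using a two-component normal mixture. The component parameters and sample sizes are assumed; the study itself fits richer mixtures (Normal, Gamma, Weibull, EV-1) and compares several estimation methods:

```python
import numpy as np
from scipy import stats

# Simulated bimodal "wind speed" sample: two regimes (all values assumed).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(3.0, 1.0, 4000),   # calm regime
                    rng.normal(9.0, 2.0, 6000)])  # windy regime

# EM for a two-component normal mixture, initialized near the modes.
w = np.array([0.5, 0.5])        # mixture weights
mu = np.array([2.0, 8.0])       # component means
sd = np.array([1.0, 1.0])       # component standard deviations
for _ in range(200):
    # E-step: posterior responsibility of each component for each point
    dens = w * stats.norm.pdf(x[:, None], mu, sd)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted updates of weights, means and standard deviations
    n_k = r.sum(axis=0)
    w = n_k / x.size
    mu = (r * x[:, None]).sum(axis=0) / n_k
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)

print(f"weights {w.round(2)}, means {mu.round(2)}, sds {sd.round(2)}")
```

The same E/M structure carries over to the Weibull or Gamma mixtures of the study, with the M-step replaced by weighted maximum-likelihood updates for those families.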
Distributed Workflow Service Composition Based on CTR Technology
Feng, Zhilin; Ye, Yanming
Recently, WS-BPEL has gradually become the basis of a standard for web service description and composition. However, WS-BPEL cannot efficiently describe distributed workflow services because it lacks the requisite expressive power and formal semantics. This paper presents a novel method for modeling distributed workflow service composition with Concurrent TRansaction logic (CTR). The syntactic structures of WS-BPEL and CTR are analyzed, and new rules for mapping WS-BPEL into CTR are given. A case study is put forward to show that the proposed method is appropriate for modeling workflow business services in distributed environments.
Pershin, I. M.; Pervukhin, D. A.; Ilyushin, Y. V.; Afanaseva, O. V.
2017-10-01
The paper considers the important problem of designing distributed systems for the management of hydrolithosphere processes. The control actions on the hydrolithosphere processes under consideration are implemented by a set of extraction wells. The article shows a method of defining approximation links for describing the dynamic characteristics of hydrolithosphere processes. The structure of the distributed regulators used in the management systems for the considered processes is presented. The paper analyses the results of the synthesis of the distributed management system and the results of modelling the closed-loop control system with respect to the parameters of the hydrolithosphere process.
Directory of Open Access Journals (Sweden)
Brian C. Houle
2010-01-01
Full Text Available Few studies consider obesity inequalities as a distributional property. This study uses relative distribution methods to explore inequalities in body mass index (BMI, kg/m2). Data from 1999-2006 from the National Health and Nutrition Examination Survey were used to compare BMI distributions by gender, Black/White race, and education subgroups in the United States. For men, comparisons between Whites and Blacks show a polarized relative distribution, with more Black men at increased risk of over- or underweight. Comparisons by education (overall and within race/ethnic groups) also show a polarized relative distribution, with more of the least educated men at the upper and lower tails of the BMI distribution. For women, Blacks have a greater probability of high BMI values, largely due to a right-shifted BMI distribution relative to White women. Women with less education also have a BMI distribution shifted to the right compared to the most educated women.
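The relative distribution idea behind this study can be sketched numerically: grade each comparison value by its position in the reference distribution and histogram the grades. A flat histogram means the two distributions coincide; excess mass in the extreme bins signals the polarization described above. The data below are synthetic, not NHANES values:

```python
import numpy as np

rng = np.random.default_rng(1)
reference = rng.normal(27.0, 4.0, size=5000)   # e.g. BMI of a reference group
comparison = rng.normal(27.0, 6.0, size=5000)  # same center, fatter tails (polarized)

# Relative data: the grade (empirical CDF position) of each comparison
# value within the reference distribution
ref_sorted = np.sort(reference)
grades = np.searchsorted(ref_sorted, comparison) / len(ref_sorted)

# Relative density over deciles; a value of 1.0 everywhere would mean
# identical distributions
hist, _ = np.histogram(grades, bins=10, range=(0.0, 1.0))
relative_density = hist / hist.sum() * 10
print(np.round(relative_density, 2))  # > 1 in the first/last bins: polarization
```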
International Nuclear Information System (INIS)
Ahmadigorji, Masoud; Amjady, Nima
2014-01-01
Highlights: • A new dynamic distribution network expansion planning model is presented. • A Binary Enhanced Particle Swarm Optimization (BEPSO) algorithm is proposed. • A Modified Differential Evolution (MDE) algorithm is proposed. • A new bi-level optimization approach composed of BEPSO and MDE is presented. • The effectiveness of the proposed optimization approach is extensively illustrated. - Abstract: Restructuring of the power system and the emergence of new technologies for the generation of electrical energy have led to significant innovation in Distribution Network Expansion Planning (DNEP). Distributed Generation (DG) refers to small/medium generation units located in power distribution networks and/or near load centers. Appropriate utilization of DG can affect various technical and operational indices of the distribution network such as feeder loading, energy losses and the voltage profile. In addition, applying DG at the proper size is essential to achieving its maximum potential benefits. In this paper, a time-based (dynamic) model for DNEP is proposed to determine the optimal size, location and installation year of DG in a distribution system. In this model, an Optimal Power Flow (OPF) is exerted to determine the optimal generation of DGs for every potential solution in order to minimize investment and operation costs following load growth over a specified planning period. The reinforcement requirements of existing distribution feeders are considered simultaneously. The proposed optimization problem is solved by combining two evolutionary methods, a new Binary Enhanced Particle Swarm Optimization (BEPSO) and a Modified Differential Evolution (MDE), to find the optimal expansion strategy and solve the OPF, respectively. The proposed planning approach is applied to two typical primary distribution networks and compared with several other methods. These comparisons illustrate the
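A minimal sketch of a generic binary particle swarm optimizer, the family of methods to which the proposed BEPSO belongs, is shown below on a toy placement problem. The sigmoid velocity-to-bit mapping is the standard binary PSO rule; the cost function and all constants are illustrative assumptions, not the authors' BEPSO:

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(x):
    # Toy surrogate cost: distance from a fixed "optimal" siting plan
    # (stands in for investment + operation cost from an OPF evaluation)
    target = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
    return int(np.sum(x != target))

n_particles, n_bits, iters = 20, 10, 60
X = rng.integers(0, 2, size=(n_particles, n_bits))   # candidate binary plans
V = np.zeros((n_particles, n_bits))                  # velocities
pbest = X.copy()
pbest_cost = np.array([cost(x) for x in X])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_bits))
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    # Sigmoid maps velocity to the probability that a bit is set
    X = (rng.random((n_particles, n_bits)) < 1.0 / (1.0 + np.exp(-V))).astype(int)
    c = np.array([cost(x) for x in X])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = X[improved], c[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best plan:", gbest, "cost:", cost(gbest))
```

In the paper's bi-level scheme, each candidate plan would additionally be evaluated by an inner OPF solved with MDE; the toy cost above replaces that inner level.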
Directory of Open Access Journals (Sweden)
Lizziane Kretli Winkelströter
2015-03-01
Full Text Available Listeria monocytogenes is a foodborne pathogen able to adhere and form biofilms on several materials commonly present in food processing plants. The aim of this study was to evaluate the resistance of Listeria monocytogenes attached to an abiotic surface, after treatment with sanitizers, by a culture method, microscopy and quantitative real-time polymerase chain reaction (qPCR). Biofilms of L. monocytogenes were obtained on stainless steel coupons immersed in Brain Heart Infusion broth under agitation at 37 °C for 24 h. The methods selected for this study were based on plate counts, microscopic counts with the aid of viability dyes (CTC-DAPI), and qPCR. Results of the culture method showed that peroxyacetic acid was efficient in killing sessile L. monocytogenes populations, while sodium hypochlorite was only partially effective in killing attached L. monocytogenes (p < 0.05). When viability dyes (CTC/DAPI) combined with fluorescence microscopy and qPCR were used, lower counts were found after treatments (p < 0.05). Selective quantification of viable cells of L. monocytogenes by qPCR using EMA revealed that pre-treatment with EMA was not appropriate, since it also inhibited amplification of DNA from live cells by ca. 2 log. Thus, CTC counts were the best method to count viable cells in biofilms.
Energy Technology Data Exchange (ETDEWEB)
Cerda-Arias, Jose Luis
2012-07-01
guide the expansion of the electrical system and coordinate solutions in order to obtain the maximal benefit from each new installation. The proposed method considers a temporal and spatial load forecasting model, the optimal distribution of the load over the grid, the proposal of new power generation projects and their branches, the planning of transmission and distribution systems and the classification of these proposed projects. The solution suggests a reduction of active power losses due to the distribution of the load, the locations of new agents in the grid, and the interaction between the transmission and distribution systems based on different tariff schemes. The usefulness of this method is highlighted by considering the influence of current variables which affect the behaviour of the load and the expansion of the power systems. The proposed method combines classical methodologies with exploratory development of other mathematical tools in the area of system identification, the use of genetic algorithms, and the application of a branch-and-bound method in the optimization methodology. The model is applied to a real system and shows its potential for analyzing the entry of new consumption devices, such as electric vehicles. The direct effect on each of the variables included in the analysis is distinguished mainly through the introduction of new components, such as energy efficiency in the load forecasting model, the effect of the tariff scheme in the planning model, and the classification of projects in transmission and distribution systems.
Distributed Cooperation Solution Method of Complex System Based on MAS
Weijin, Jiang; Yuhui, Xu
To adapt the fault-diagnosis model to dynamic environments and fully meet the needs of solving tasks in complex systems, this paper introduces multi-agent technology and related techniques to complicated fault diagnosis, and an integrated intelligent control system is studied. Based on the structure of diagnostic decision making and hierarchical modeling, and on a multi-layer decomposition strategy for the diagnosis task, a multi-agent synchronous diagnosis federation integrating different knowledge representation modes and inference mechanisms is presented. The functions of the management agent, diagnosis agent and decision agent are analyzed; the organization and evolution of the agents in the system are proposed; and the corresponding conflict resolution algorithm is given. A layered structure of abstract agents with public attributes is built, and the system architecture is realized on a MAS distributed layered blackboard. A real-world application shows that the proposed control structure successfully solves the fault diagnosis problem of a complex plant and has particular advantages in distributed domains.
Distribution of analytes over TXRF reflectors
International Nuclear Information System (INIS)
Bernasconi, G.; Tajani, A.
2000-01-01
One of the most frequently used methods for trace element analysis by TXRF involves the evaporation of small amounts of aqueous solutions on flat reflectors. This method has the advantage of in-situ pre-concentration of the analytes, which, together with the low background due to total reflection in the substrate, leads to excellent detection limits and a high signal-to-noise ratio. Spiking the liquid sample with an internal standard also provides a simple way to achieve multielemental quantitative analysis. However, the elements are not homogeneously distributed over the reflector after the liquid phase has evaporated. This distribution may differ between the unknown elements and the internal standards and may influence the accuracy of the quantitative results. In this presentation we used μ-XRF techniques to map this distribution. Small (20 μl) drops of a binary solution were evaporated on silicon reflectors and then mapped using a focused X-ray beam with about 100 μm resolution. A typical ring structure showing some differences in the distribution of the two elements was observed. One of the reflectors was also measured in a TXRF setup, turning it to different angles with respect to the X-ray beam (with constant incidence and take-off angles), and variations in the intensity ratio between the two elements were measured. This work shows the influence of the sample distribution and proposes methods to evaluate it. More measurements would be necessary to assess the limitations that the sample distribution imposes on the accuracy of the results; however, due to the small size of typical TXRF samples and the tight geometry of TXRF setups, the influence of the sample distribution is not large. (author)
International Nuclear Information System (INIS)
Surducan, V.; Surducan, E.; Dadarlat, D.
2013-01-01
Microwave induced heating is widely used in medical treatments and in scientific and industrial applications. The temperature field inside a microwave heated sample is often inhomogeneous, therefore multiple temperature sensors are required for an accurate result. Nowadays, non-contact methods (infrared thermography or microwave radiometry) or direct-contact temperature measurement methods (expensive and sophisticated fiber optic temperature sensors transparent to microwave radiation) are mainly used. IR thermography gives only the surface temperature and cannot be used for measuring temperature distributions in cross sections of a sample. In this paper we present a very simple experimental method for revealing the temperature distribution inside a cross section of a liquid sample heated by microwave radiation through a coaxial applicator. The proposed method is able to offer qualitative information about the heating distribution, using a temperature-sensitive liquid crystal sheet. Inhomogeneities as small as 1-2 °C produced by the symmetry irregularities of the microwave applicator can be easily detected by visual inspection or by computer-assisted color-to-temperature conversion. The microwave applicator can therefore be tuned and verified with the described method until the temperature inhomogeneities are resolved.
International Nuclear Information System (INIS)
Song, Xizi; Xu, Yanbin; Dong, Feng
2017-01-01
Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been increasing research interest in hybrid imaging techniques that utilize couplings of physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, combining the electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, is adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, by relating the power density change to the change in conductivity, the Jacobian matrix is employed to turn the nonlinear problem into a linear one. The analytic formulation of this Jacobian matrix is derived and its effectiveness verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Multiple power density distributions are also combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, UMEIT with the proposed image reconstruction method can produce reconstructed images with higher quality and better quantitative evaluation results. (paper)
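The linearization step, relating a measured change in interior data to a conductivity change through a Jacobian, can be sketched with a generic regularized least-squares solve. The random matrix below merely stands in for the derived sensitivity matrix, and the problem sizes are arbitrary assumptions; this is not the paper's LBP algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
n_meas, n_pix = 120, 100               # interior power-density samples vs conductivity pixels
J = rng.normal(size=(n_meas, n_pix))   # stand-in for the derived sensitivity (Jacobian) matrix

true_delta = np.zeros(n_pix)
true_delta[42] = 1.0                   # a single conductivity perturbation
b = J @ true_delta                     # linearized change in the power density data

# Tikhonov-regularized least squares: solve (J^T J + lam*I) x = J^T b
lam = 1e-2
x = np.linalg.solve(J.T @ J + lam * np.eye(n_pix), J.T @ b)
print("recovered peak at pixel", int(np.argmax(np.abs(x))))  # pixel 42
```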
Distributed-Lagrange-Multiplier-based computational method for particulate flow with collisions
Ardekani, Arezoo; Rangel, Roger
2006-11-01
A Distributed-Lagrange-Multiplier-based computational method is developed for colliding particles in a solid-fluid system. A numerical simulation is conducted in two dimensions using the finite volume method. The entire domain is treated as a fluid, but the fluid in the particle domains satisfies a rigidity constraint. We present an efficient method for predicting collisions between particles. In earlier methods, a repulsive force was applied to the particles when their distance was less than a critical value. In this method, an impulsive force is computed instead. During a frictionless collision between two particles, linear momentum is conserved while the tangential forces are zero. Thus, instead of satisfying a rigid-body-motion condition for each particle separately, as is done when the particles are not in contact, both particles are rigidified together along their line of centers. The particles separate when the impulsive force becomes negative, after which the rigidity constraint is again satisfied for each particle separately. A grid independence study is performed to ensure the accuracy of the numerical simulation. A comparison between this method and previous collision strategies is presented and discussed.
Monte Carlo Methods in Physics
International Nuclear Information System (INIS)
Santoso, B.
1997-01-01
The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random number generators used in Monte Carlo techniques is carried out to show the behavior of the randomness of the various generation methods. To account for the weight function involved in the Monte Carlo integration, the Metropolis method is used. From the results of the experiment, one can see that there are no regular patterns in the numbers generated, showing that the generators are reasonably good, while the experimental results follow the expected statistical distribution law. Some applications of Monte Carlo methods in physics are then given. The physical problems are chosen such that the models have available solutions, either exact or approximate, with which the Monte Carlo calculations can be compared. The comparisons show that good agreement has been obtained for the models considered.
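The Metropolis method mentioned above can be sketched in a few lines. Here it samples an unnormalized Gaussian weight function with a symmetric random-walk proposal; the proposal width and sample count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)

def target(x):
    # Unnormalized weight function: the shape of a standard normal density
    return np.exp(-0.5 * x * x)

samples = np.empty(50_000)
x = 0.0
for i in range(samples.size):
    proposal = x + rng.uniform(-1.0, 1.0)        # symmetric random-walk proposal
    if rng.random() < target(proposal) / target(x):
        x = proposal                              # accept the move
    samples[i] = x                                # a rejection keeps the old state

print(f"mean={samples.mean():.3f}  var={samples.var():.3f}")  # expect ~0 and ~1
```

Because only ratios of the weight function appear, the normalization constant is never needed, which is what makes the method useful for the weighted integrals discussed in the abstract.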
A signature-based method for indexing cell cycle phase distribution from microarray profiles
Directory of Open Access Journals (Sweden)
Mizuno Hideaki
2009-03-01
Full Text Available Abstract Background The cell cycle machinery interprets oncogenic signals and reflects the biology of cancers. To date, various methods for cell cycle phase estimation, such as the mitotic index, S phase fraction, and immunohistochemistry, have provided valuable information on cancers (e.g. proliferation rate). However, those methods rely on one or a few measurements, and the scope of the information is limited. There is a need for more systematic cell cycle analysis methods. Results We developed a signature-based method for indexing cell cycle phase distribution from microarray profiles that takes both cycling and non-cycling cells into consideration. A cell cycle signature masterset, composed of genes which are expressed preferentially in cycling cells and in a cell cycle-regulated manner, was created to index the proportion of cycling cells in the sample. Cell cycle signature subsets, composed of genes whose expression peaks at specific stages of the cell cycle, were also created to index the proportion of cells in the corresponding stages. The method was validated using cell cycle datasets and quiescence-induced cell datasets. Analyses of a mouse tumor model dataset and human breast cancer datasets revealed variations in the proportion of cycling cells. When the influence of non-cycling cells was taken into account, "buried" cell cycle phase distributions were depicted that were oncogenic-event specific in the mouse tumor model dataset and were associated with patients' prognosis in the human breast cancer datasets. Conclusion The signature-based cell cycle analysis method presented in this report would potentially be of value for cancer characterization and diagnostics.
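One simple way to realize such a signature index, assuming log-expression data, is to average per-gene z-scores over the signature genes for each sample. The expression matrix, signature gene set, and effect size below are synthetic assumptions, and this averaging is only a generic stand-in for the paper's masterset/subset indexing:

```python
import numpy as np

rng = np.random.default_rng(5)
n_genes, n_samples = 1000, 6
expr = rng.normal(0.0, 1.0, size=(n_genes, n_samples))  # synthetic log-expression matrix
signature = np.arange(50)                               # indices of cycling-preferential genes

# Simulate samples 3-5 containing a higher proportion of cycling cells:
expr[np.ix_(signature, [3, 4, 5])] += 1.5

# Signature index: mean per-gene z-score over the signature genes, per sample
z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)
index = z[signature].mean(axis=0)
print(np.round(index, 2))  # samples 3-5 score visibly higher than samples 0-2
```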
DEFF Research Database (Denmark)
Jensen, T.; Green, O.; Munkholm, Lars Juhl
2016-01-01
The goal of this research is to present and compare two methods for evaluating soil aggregate size distribution based on high resolution 3D images of the soil surface. The methods for analyzing the images are discrete Fourier transform and granulometry. The results of these methods correlate...... with a measured weight distribution of the soil aggregates. The results have shown that it is possible to distinguish between the cultivated and the uncultivated soil surface. A sensor system suitable for capturing in-situ high resolution 3D images of the soil surface is also described. This sensor system...
Simulation of the measure of the microparticle size distribution in two dimensions
International Nuclear Information System (INIS)
Lameiras, F.S.; Pinheiro, P.
1987-01-01
Different size distributions of plane figures were generated in a computer as a simply connected network. These size distributions were measured by the Saltykov method for two dimensions. The comparison between the generated and measured distributions showed that the Saltykov method tends to measure a larger scatter than the real one and to shift the maximum of the real distribution to larger diameters. These errors were determined by means of the ratio between the perimeter of the figures per unit area measured directly and the perimeter calculated from the size distribution obtained with the Saltykov method. (Author) [pt
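The bias that such stereological measurements must correct for arises because a random planar section through a grain almost always shows a figure smaller than the grain's true diameter. The forward problem can be sketched as follows; the lognormal grain sizes are an assumption, and for simplicity the sketch ignores the fact that larger grains are more likely to be intersected by the section plane:

```python
import numpy as np

rng = np.random.default_rng(6)

# 3D "grains": spheres with lognormally distributed diameters
D = rng.lognormal(mean=0.0, sigma=0.3, size=20_000)

# A section plane cuts a sphere at distance h from its centre, h uniform
# on [0, R]; the observed circle diameter is then 2*sqrt(R^2 - h^2)
R = D / 2.0
h = rng.uniform(0.0, R)
d = 2.0 * np.sqrt(R**2 - h**2)

print(f"mean 3D diameter: {D.mean():.3f}")
print(f"mean 2D section diameter: {d.mean():.3f}")  # systematically smaller
```

Under this simplified cutting model the expected section diameter is pi/4 of the sphere diameter; unfolding methods like Saltykov's invert this kind of relation, and the abstract quantifies the errors that inversion introduces.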
Moisture distribution in sludges based on different testing methods
Institute of Scientific and Technical Information of China (English)
Wenyi Deng; Xiaodong Li; Jianhua Yan; Fei Wang; Yong Chi; Kefa Cen
2011-01-01
Moisture distributions in municipal sewage sludge, printing and dyeing sludge, and paper mill sludge were experimentally studied using four different methods: a drying test, thermogravimetric-differential thermal analysis (TG-DTA), thermogravimetric-differential scanning calorimetry (TG-DSC) and a water activity test. The results indicated that the moisture in the mechanically dewatered sludges consisted of interstitial water, surface water and bound water. The interstitial water accounted for more than 50% wet basis (wb) of the total moisture content. The bond strength of the sludge moisture increased with decreasing moisture content, especially when the moisture content was lower than 50% wb. Furthermore, a comparison among the four testing methods was presented. The advantage of the drying test was its ability to quantify free water, interstitial water, surface water and bound water, while the TG-DSC, TG-DTA and water activity tests were capable of determining the bond strength of the moisture in the sludge. It was found that the results from the TG-DSC and TG-DTA tests are more persuasive than those from the water activity test.
Uranium distribution in Baikal sediments using SSNTD method for paleoclimate reconstruction
Zhmodik, S M; Nemirovskaya, N A; Zhatnuev, N S
1999-01-01
First data on the local distribution of uranium in a core of Lake Baikal floor sediments (Academician ridge, VER-95-2, St 3 BC, 53 deg. 113'12'N/108 deg. 25'01'E) are presented in this paper. They were obtained using (n,f)-radiography. Various forms of uranium occurrence in the floor sediments are shown: evenly disseminated uranium associated with the clayey and diatomaceous components, and micro- and macroinclusions of uranium-bearing minerals, microlocations with uranium contents 10-50 times higher than the uranium concentrations associated with the clayey and diatomaceous components. Relative and absolute uranium concentrations can be determined for every mineral. Signs of periodicity of various orders in the uranium distribution in the core have been found. Using the (n,f)-radiography method to study Baikal floor sediments permits the gathering of new information that can be used in paleoclimate reconstruction.
Kosenko, Viktor; Persiyanova, Elena; Belotskyy, Oleksiy; Malyeyeva, Olga
2017-01-01
The subject matter of the article is the information and communication networks (ICN) of critical infrastructure systems (CIS). The goal of the work is to create methods for managing the data flows and resources of the ICN of CIS to improve the efficiency of information processing. The following tasks were solved in the article: a data flow model of the multi-level ICN structure was developed, a method of adaptive distribution of data flows was developed, and a method of network resource assignment...
Desak Wiwin,; Edy Mintarto; Nurkholis Nurkholis
2017-01-01
The purpose of this study was to analyze (1) the effect of the massed practice method on the speed and accuracy of service, (2) the effect of the distributed practice method on the speed and accuracy of service, and (3) the influence of the massed practice and distributed practice methods on the speed and accuracy of service. The type of research used is quantitative with quasi-experimental methods. The research design uses a non-randomized control group...
Airline Overbooking Problem with Uncertain No-Shows
Directory of Open Access Journals (Sweden)
Chunxiao Zhang
2014-01-01
Full Text Available This paper considers an airline overbooking problem for a new single-leg flight with a discount fare. Due to the absence of historical data on no-shows for a new flight, and the various uncertain human behaviors or unexpected events which cause some passengers to be unable to board their aircraft on time, the probability distribution of no-shows cannot be obtained. In this case, the airline has to invite domain experts to provide belief degrees of no-shows in order to estimate the distribution. However, human beings often overestimate unlikely events, which makes the variance of the belief degree much greater than that of the frequency. If we still regard the belief degree as a subjective probability, the derived results will exceed our expectations. In order to deal with this uncertainty, the number of no-shows on the new flight is assumed to be an uncertain variable in this paper. Given a chance constraint on social reputation, an overbooking model with discount fares is developed to maximize the profit rate based on uncertain programming theory. Finally, the analytic expression of the optimal booking limit is obtained through a numerical example, and the results of a sensitivity analysis indicate that the optimal booking limit is significantly affected by the flight capacity, discount, confidence level, and the parameters of the uncertainty distribution.
New Technology/Old Technology: Comparing Lunar Grain Size Distribution Data and Methods
Fruland, R. M.; Cooper, Bonnie L.; Gonzalexz, C. P.; McKay, David S.
2011-01-01
Laser diffraction technology generates reproducible grain size distributions and reveals new structures not apparent in old sieve data. The comparison of specific sieve fractions with the Microtrac distribution curve generated for those fractions shows a reasonable match for the mean of each fraction between the two techniques, giving us confidence that the large existing body of sieve data can be cross-correlated with new data based on laser diffraction. Laser diffraction is well suited to lunar soils, which have as much as 25% of their material in the less-than-20-micrometer fraction. The fines in this range are of particular interest because they may contain a record of important space weathering processes.
Distributed Optimization based Dynamic Tariff for Congestion Management in Distribution Networks
DEFF Research Database (Denmark)
Huang, Shaojun; Wu, Qiuwei; Zhao, Haoran
2017-01-01
This paper proposes a distributed optimization based dynamic tariff (DDT) method for congestion management in distribution networks with high penetration of electric vehicles (EVs) and heat pumps (HPs). The DDT method employs a decomposition based optimization method to have aggregators explicitly...... is able to minimize the overall energy consumption cost and line loss cost, which is different from previous decomposition-based methods such as multiagent system methods. In addition, a reconditioning method and an integral controller are introduced to improve convergence of the distributed optimization...... where challenges arise due to multiple congestion points, multiple types of flexible demands and network constraints. The case studies demonstrate the efficacy of the DDT method for congestion management in distribution networks....
The price momentum of stock in distribution
Liu, Haijun; Wang, Longfei
2018-02-01
In this paper, a new momentum of a stock in distribution is proposed and applied to real investment. Firstly, assuming that a stock behaves as a multi-particle system, its share-exchange distribution and cost distribution are introduced. Secondly, an estimation of the share-exchange distribution from daily transaction data is given by the 3σ rule of the normal distribution, and an iterative method is given to estimate the cost distribution. Based on the cost distribution, a new momentum is proposed for the stock system. Thirdly, an empirical test is given to compare the new momentum with others via a contrarian strategy. The result shows that the new momentum outperforms the others in many respects. Furthermore, an entropy of the stock is introduced according to its cost distribution.
Shokri, Ali
2017-04-01
The hydrological cycle contains a wide range of linked surface and subsurface flow processes. In spite of the natural connections between surface water and groundwater, historically these processes have been studied separately. The current trend in distributed physically based hydrological model development is to combine distributed surface water models with distributed subsurface flow models. This combination results in a better estimation of the temporal and spatial variability of the interaction between surface and subsurface flow. On the other hand, simple lumped models such as the Soil Conservation Service Curve Number (SCS-CN) method are still quite common because of their simplicity. In spite of the popularity of the SCS-CN method, there have always been concerns about its ambiguity in explaining the physical mechanism of rainfall-runoff processes. The aim of this study is to minimize this ambiguity by establishing a method to find an equivalence of the SCS-CN solution to the DrainFlow model, a fully distributed physically based coupled surface-subsurface flow model. In this paper, two hypothetical v-catchment tests are designed and the direct runoff from a storm event is calculated by both the SCS-CN and DrainFlow models. To find a comparable runoff prediction from SCS-CN and DrainFlow, the variance between the runoff predictions of the two models is minimized by changing the curve number (CN) and initial abstraction (Ia) values. The results of this study have led to a set of lumped model parameters (CN and Ia) for each catchment that is comparable to a set of physically based parameters including hydraulic conductivity, Manning roughness coefficient, ground surface slope, and specific storage. Considering that the lack of physical interpretation of CN and Ia is often argued to be a weakness of the SCS-CN method, the novel method in this paper gives a physical explanation to CN and Ia.
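The SCS-CN side of the comparison rests on the standard curve number runoff equation, which can be stated compactly (US customary units, with the common default Ia = 0.2S; the storm depth and CN in the example are arbitrary):

```python
def scs_cn_runoff(P_in, CN, lambda_=0.2):
    """Direct runoff Q (inches) from storm rainfall P (inches) by the SCS-CN method.

    S  = 1000/CN - 10  is the potential maximum retention,
    Ia = lambda_ * S   is the initial abstraction (0.2 is the traditional default),
    Q  = (P - Ia)^2 / (P - Ia + S)  for P > Ia, else 0.
    """
    S = 1000.0 / CN - 10.0
    Ia = lambda_ * S
    if P_in <= Ia:
        return 0.0
    return (P_in - Ia) ** 2 / (P_in - Ia + S)

# 4 inches of rain on a catchment with CN = 80: S = 2.5 in, Ia = 0.5 in
print(round(scs_cn_runoff(4.0, 80), 2))  # 2.04 inches of direct runoff
```

In the study described above, CN and Ia are then tuned so that this Q matches the physically based DrainFlow prediction as closely as possible.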
Energy Technology Data Exchange (ETDEWEB)
Xi, Xi [CNNC Key Laboratory on Nuclear Reactor Thermal Hydraulics Technology, Nuclear Power Institute of China, Chengdu 610041 (China); Xiao, Zejun, E-mail: fabulous_2012@sina.com [CNNC Key Laboratory on Nuclear Reactor Thermal Hydraulics Technology, Nuclear Power Institute of China, Chengdu 610041 (China); Yan, Xiao; Li, Yongliang; Huang, Yanping [CNNC Key Laboratory on Nuclear Reactor Thermal Hydraulics Technology, Nuclear Power Institute of China, Chengdu 610041 (China)
2013-05-15
Highlights: ► CFX and MCNP codes are suitable for calculating the axial power profile of the FA. ► The partition method used in the calculation affects the final result. ► The density feedback has little effect on the axial power profile of the CSR1000 FA. -- Abstract: The SCWR (supercritical water reactor) is one of the Generation IV nuclear reactors. In a typical SCWR, the water enters the reactor from the cold leg at a temperature of 280 °C and leaves the core at 500 °C. Due to this sharp change in temperature, there is a large change in water density along the axial direction of the fuel assembly (FA), which affects the moderating power of the water. The axial power distribution of an SCWR FA can therefore differ from that of a traditional PWR FA. In this paper, for the first time, the thermal hydraulics code CFX and the neutronics code MCNP are used together to analyze the axial power distribution of the SCWR FA. First, the factors in the coupled method which could affect the result, such as the initialization values and the partition method (especially in the MCNP code), are analyzed. Then the axial power distribution of the European HPLWR FA is obtained by the coupled method, and the result is compared with those obtained by Waata and Reiss, with good agreement among the three sets of results. Finally, this method is used to calculate the axial power distribution of the Chinese SCWR (CSR1000) FA. It is found that the axial power profile of the CSR1000 FA is not very sensitive to the change in moderator density.
Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat
2015-01-01
Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning), and the pros and cons of several different types of decision tree-based regression methods are discussed. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and their tissue:plasma partition coefficients (Kt:p), which are often used in physiologically based pharmacokinetics. This work has therefore assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular the bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for accurate prediction of Vss using decision trees if prior feature selection is applied. The decision tree-based models presented in this work have an accuracy that is reasonable and similar to the accuracy of Vss inter-species extrapolations reported in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data with flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical Abstract: Decision trees for the prediction of tissue partition coefficients and the volume of distribution of drugs.
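The mean fold error quoted above (e.g. 2.33 vs 2.29) is commonly computed as the antilog of the mean absolute log10 ratio of predicted to observed values; whether the paper uses exactly this definition is an assumption, and the Vss values below are hypothetical:

```python
import numpy as np

def mean_fold_error(observed, predicted):
    """Geometric mean fold error: 10 ** mean(|log10(predicted / observed)|)."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 10 ** np.mean(np.abs(np.log10(predicted / observed)))

obs = [1.0, 5.0, 0.5, 12.0]   # hypothetical observed Vss values (L/kg)
pred = [2.0, 4.0, 1.0, 6.0]   # hypothetical model predictions
print(round(mean_fold_error(obs, pred), 2))  # 1.78, i.e. predictions off by ~1.8-fold
```

A mean fold error of 1 would mean perfect predictions; values around 2, as reported in the abstract, mean predictions are typically within about a factor of two of the measured Vss.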
International Nuclear Information System (INIS)
Azad-Farsani, Ehsan; Agah, S.M.M.; Askarian-Abyaneh, Hossein; Abedi, Mehrdad; Hosseinian, S.H.
2016-01-01
LMP (Locational marginal price) calculation is a serious impediment in distribution operation when private DG (distributed generation) units are connected to the network. A novel policy is developed in this study to guide distribution company (DISCO) to exert its control over the private units when power loss and green-house gases emissions are minimized. LMP at each DG bus is calculated according to the contribution of the DG to the reduced amount of loss and emission. An iterative algorithm which is based on the Shapley value method is proposed to allocate loss and emission reduction. The proposed algorithm will provide a robust state estimation tool for DISCOs in the next step of operation. The state estimation tool provides the decision maker with the ability to exert its control over private DG units when loss and emission are minimized. Also, a stochastic approach based on the PEM (point estimate method) is employed to capture uncertainty in the market price and load demand. The proposed methodology is applied to a realistic distribution network, and efficiency and accuracy of the method are verified. - Highlights: • Reduction of the loss and emission at the same time. • Fair allocation of loss and emission reduction. • Estimation of the system state using an iterative algorithm. • Ability of DISCOs to control DG units via the proposed policy. • Modeling the uncertainties to calculate the stochastic LMP.
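The Shapley value underlying the proposed allocation algorithm averages each DG unit's marginal contribution over all orders in which the units could join the coalition. A toy sketch with two hypothetical DG units; the coalition loss-reduction figures are assumptions for illustration, not data from the study:

```python
from itertools import permutations

def shapley_values(players, value):
    """Average marginal contribution of each player over all join orders."""
    contrib = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = []
        for p in order:
            before = value(frozenset(coalition))
            coalition.append(p)
            contrib[p] += value(frozenset(coalition)) - before
    return {p: c / len(orders) for p, c in contrib.items()}

# Hypothetical loss reduction (kW) achieved by each coalition of DG units.
reduction = {
    frozenset(): 0.0,
    frozenset({"DG1"}): 10.0,
    frozenset({"DG2"}): 6.0,
    frozenset({"DG1", "DG2"}): 14.0,  # combined effect is sub-additive
}
sv = shapley_values(["DG1", "DG2"], reduction.__getitem__)
print(sv)
```

The allocations always sum to the grand-coalition value (here 14 kW), which is what makes the split "fair" in the Shapley sense.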
An Empirical Method to Fuse Partially Overlapping State Vectors for Distributed State Estimation
Sijs, J.; Hanebeck, U.; Noack, B.
2013-01-01
State fusion is a method for merging multiple estimates of the same state into a single fused estimate. Dealing with multiple estimates is one of the main concerns in distributed state estimation, where an estimated value of the desired state vector is computed in each node of a networked system.
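For intuition, fusing partially overlapping state vectors can be sketched naively by inverse-variance weighting of the shared entries, copying through the entries known to only one node. Note this sketch assumes independent estimation errors, precisely the assumption that networked estimators often cannot guarantee and that motivates empirical fusion methods like the one above; the variable names and numbers are illustrative:

```python
def fuse(est_a, est_b):
    """Fuse two partially overlapping state estimates.

    est_* maps state-variable name -> (mean, variance).  Shared entries
    are merged by inverse-variance weighting (errors assumed independent);
    entries known to only one node are copied through unchanged.
    """
    fused = {}
    for k in set(est_a) | set(est_b):
        if k in est_a and k in est_b:
            (ma, va), (mb, vb) = est_a[k], est_b[k]
            w = vb / (va + vb)                 # weight toward the lower variance
            fused[k] = (w * ma + (1 - w) * mb, va * vb / (va + vb))
        else:
            fused[k] = est_a.get(k, est_b.get(k))
    return fused

node_a = {"x": (1.0, 0.5), "y": (2.0, 1.0)}    # states estimated by node A
node_b = {"y": (3.0, 1.0), "z": (0.5, 0.2)}    # states estimated by node B
print(fuse(node_a, node_b)["y"])               # the shared variable
```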
Kim, Se-Ho; Kang, Phil Woong; Park, O Ok; Seol, Jae-Bok; Ahn, Jae-Pyoung; Lee, Ji Yeong; Choi, Pyuck-Pa
2018-07-01
We present a new method of preparing needle-shaped specimens for atom probe tomography from freestanding Pd and C-supported Pt nanoparticles. The method consists of two steps, namely electrophoresis of nanoparticles on a flat Cu substrate followed by electrodeposition of a Ni film acting as an embedding matrix for the nanoparticles. Atom probe specimen preparation can be subsequently carried out by means of focused-ion-beam milling. Using this approach, we have been able to perform correlative atom probe tomography and transmission electron microscopy analyses on both nanoparticle systems. Reliable mass spectra and three-dimensional atom maps could be obtained for Pd nanoparticle specimens. In contrast, atom probe samples prepared from C-supported Pt nanoparticles showed uneven field evaporation and hence artifacts in the reconstructed atom maps. Our developed method is a viable means of mapping the three-dimensional atomic distribution within nanoparticles and is expected to contribute to an improved understanding of the structure-composition-property relationships of various nanoparticle systems. Copyright © 2018 Elsevier B.V. All rights reserved.
An observer-theoretic approach to estimating neutron flux distribution
International Nuclear Information System (INIS)
Park, Young Ho; Cho, Nam Zin
1989-01-01
State feedback control provides many advantages such as stabilization and improved transient response. However, when the state feedback control is considered for spatial control of a nuclear reactor, it requires complete knowledge of the distributions of the system state variables. This paper describes a method for estimating the flux spatial distribution using only limited flux measurements. It is based on the Luenberger observer in control theory, extended to the distributed parameter systems such as the space-time reactor dynamics equation. The results of the application of the method to simple reactor models showed that the flux distribution is estimated by the observer very efficiently using information from only a few sensors
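The Luenberger observer idea used above can be sketched for a small discrete-time system: the observer runs a copy of the model and corrects itself with the innovation (measured output minus predicted output) from a single sensor. The matrices and gain below are illustrative stand-ins, not the reactor model:

```python
# x[k+1] = A x[k]; measurement y[k] = C x[k] (one sensor, two states).
# Observer: xh[k+1] = A xh[k] + L (y[k] - C xh[k]).
A = [[0.9, 0.2], [0.0, 0.8]]
C = [1.0, 0.0]
L = [0.5, 0.3]            # observer gain, chosen so (A - L C) is stable

def step(A, x):
    """One step of the 2-state linear dynamics."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

x, xh = [1.0, -1.0], [0.0, 0.0]     # true state vs observer state
for _ in range(60):
    y = C[0] * x[0] + C[1] * x[1]            # sensor reading
    innov = y - (C[0] * xh[0] + C[1] * xh[1])
    xh = [v + g * innov for v, g in zip(step(A, xh), L)]
    x = step(A, x)
err = max(abs(x[0] - xh[0]), abs(x[1] - xh[1]))
print(err < 1e-3)
```

The estimation error obeys e[k+1] = (A - L C) e[k], so any gain that makes that matrix stable drives the observer onto the true state, exactly the property exploited when only a few flux sensors are available.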
Application of Maximum Entropy Distribution to the Statistical Properties of Wave Groups
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
New distributions of the statistics of wave groups, based on the maximum entropy principle, are presented. The maximum entropy distributions appear to be superior to conventional distributions when applied to a limited amount of information. Their application to wave group properties shows the effectiveness of the maximum entropy distribution. The FFT filtering method is employed to obtain the wave envelope quickly and efficiently. Comparisons of both the maximum entropy distribution and the distribution of Longuet-Higgins (1984) with the laboratory wind-wave data show that the former gives a better fit.
Impacts of Voltage Control Methods on Distribution Circuit’s Photovoltaic (PV) Integration Limits
Directory of Open Access Journals (Sweden)
Anamika Dubey
2017-10-01
Full Text Available The widespread integration of photovoltaic (PV) units may result in a number of operational issues for the utility distribution system. The advances in smart-grid technologies with better communication and control capabilities may help to mitigate these challenges. The objective of this paper is to evaluate multiple voltage control methods and compare their effectiveness in mitigating the impacts of high levels of PV penetration on distribution system voltages. A Monte Carlo based stochastic analysis framework is used to evaluate the impacts of PV integration, with and without voltage control. Both snapshot power flow and time-series analyses are conducted for the feeder with varying levels of PV penetration. The methods are compared for their impacts on (1) the feeder’s PV hosting capacity; (2) the number of voltage violations and the magnitude of the largest bus voltage; (3) the net reactive power demand from the substation; and (4) the number of switching operations of the feeder’s legacy voltage support devices, i.e., capacitor banks and load tap changers (LTCs). The simulation results show that voltage control helps in mitigating overvoltage concerns and increasing the feeder’s hosting capacity. Although the legacy control solves the voltage concerns for primary feeders, a smart inverter control is required to mitigate both primary and secondary feeder voltage regulation issues. The smart inverter control, however, increases the feeder’s reactive power demand and the number of LTC and capacitor switching operations. For the 34.5-kV test circuit, it is observed that the reactive power demand increases from 0 to 6.8 MVAR on enabling Volt-VAR control for PV inverters. The total number of capacitor and LTC operations over a 1-year period also increases from 455 operations to 1991 operations with Volt-VAR control mode. It is also demonstrated that by simply changing the control mode of capacitor banks, a significant reduction in the unnecessary
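The Monte Carlo hosting-capacity idea above can be sketched as: sample random PV placements, check each sample for overvoltage against a limit, and take the largest PV size whose violation rate stays below a threshold. The linear voltage-sensitivity model, bus count and limits below are invented for illustration; a real study would run a full power flow per sample:

```python
import random

random.seed(1)
N_BUS = 10
V_LIMIT = 1.05                                   # per-unit overvoltage limit
SENS = [0.0002 * (i + 1) for i in range(N_BUS)]  # assumed voltage rise per kW

def violation_rate(pv_kw, trials=2000):
    """Fraction of random PV placements that push a bus above V_LIMIT."""
    hits = 0
    for _ in range(trials):
        bus = random.randrange(N_BUS)            # random PV location
        v = 1.0 + SENS[bus] * pv_kw              # crude voltage-rise model
        if v > V_LIMIT:
            hits += 1
    return hits / trials

# Hosting capacity: largest PV size whose violation rate stays under 5 %.
hosting = max(kw for kw in range(0, 200, 10) if violation_rate(kw) <= 0.05)
print(hosting)
```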
Laadan, Oren; Nieh, Jason; Phung, Dan
2012-10-02
Methods, media and systems for managing a distributed application running in a plurality of digital processing devices are provided. In some embodiments, a method includes running one or more processes associated with the distributed application in virtualized operating system environments on a plurality of digital processing devices, suspending the one or more processes, and saving network state information relating to network connections among the one or more processes. The method further includes storing process information relating to the one or more processes, recreating the network connections using the saved network state information, and restarting the one or more processes using the stored process information.
Unstable Temperature Distribution in Friction Stir Welding
Directory of Open Access Journals (Sweden)
Sadiq Aziz Hussein
2014-01-01
Full Text Available In the friction stir welding process, a nonuniform and high generated temperature is undesirable. An unstable temperature and its distribution affect thermal and residual stresses along the welding line, thus necessitating mitigation. This paper presents a simple method to prevent significant temperature differences along the welding line and also to help nullify some defect types associated with this welding, such as end-hole, initial unwelded line, and deformed areas. In the experimental investigation, thermocouples and a dynamometer were used to measure heat and force, while coupled-field thermomechanical models were used to evaluate the temperature and its distribution, plastic strain, and material displacement. The suggested method generated uniform temperature distributions. Measurement results are discussed, showing a good correlation with predictions.
DEFF Research Database (Denmark)
Liu, Jingying; Christophersen, Philip C; Yang, Mingshi
2017-01-01
OBJECTIVE: The present study aimed at elucidating the influence of polymorphic stability of lipid excipients on the physicochemical characters of different solid lipid microparticles (SLM), with the focus on the alteration of protein distribution in SLM. METHODS: Labeled lysozyme was incorporated into SLM prepared with different excipients, i.e. trimyristin (TG14), glyceryl distearate (GDS), and glyceryl monostearate (GMS), by water-oil-water (w/o/w) or solid-oil-water (s/o/w) method. The distribution of lysozyme in SLM and the release of the protein from SLM were evaluated by confocal laser... The study provides updated knowledge for rational development of lipid-based formulations for oral delivery of peptide or protein drugs.
International Nuclear Information System (INIS)
Dejneko, A.O.
2011-01-01
Based on an analysis of existing models, methods and means of acquiring knowledge, a base method of automated knowledge acquisition has been chosen. On the base of this method, a new approach to integrate information acquired from knowledge sources of different typologies has been proposed, and the concept of a distributed knowledge acquisition with the aim of computerized formation of the most complete and consistent models of problem areas has been introduced. An original algorithm for distributed knowledge acquisition from databases, based on the construction of binary decision trees has been developed
Mustonen, Satu M; Tissari, Soile; Huikko, Laura; Kolehmainen, Mikko; Lehtola, Markku J; Hirvonen, Arja
2008-05-01
The distribution of drinking water generates soft deposits and biofilms in the pipelines of distribution systems. Disturbances in water distribution can detach these deposits and biofilms and thus deteriorate the water quality. We studied the effects of simulated pressure shocks on the water quality with online analysers. The study was conducted with copper and composite plastic pipelines in a pilot distribution system. The online data gathered during the study were evaluated with the Self-Organising Map (SOM) and Sammon's mapping, which are useful methods for exploring large amounts of multivariate data. The objective was to test the usefulness of these methods in pinpointing abnormal water quality changes in the online data. The pressure shocks temporarily increased the number of particles, the turbidity and the electrical conductivity. SOM and Sammon's mapping were able to separate these situations from the normal data and thus make them visible. These methods therefore make it possible to detect abrupt changes in water quality and to react rapidly to any disturbances in the system. They are useful in developing alert systems and predictive applications connected to online monitoring.
A method and programme (BREACH) for predicting the flow distribution in water cooled reactor cores
International Nuclear Information System (INIS)
Randles, J.; Roberts, H.A.
1961-03-01
The method presented here of evaluating the flow rate in individual reactor channels may be applied to any type of water cooled reactor in which boiling occurs. The flow distribution is calculated with the aid of a MERCURY autocode programme, BREACH, which is described in detail. This programme computes the steady state longitudinal void distribution and pressure drop in a single channel on the basis of the homogeneous model of two phase flow. (author)
A method and programme (BREACH) for predicting the flow distribution in water cooled reactor cores
Energy Technology Data Exchange (ETDEWEB)
Randles, J; Roberts, H A [Technical Assessments and Services Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)
1961-03-15
The method presented here of evaluating the flow rate in individual reactor channels may be applied to any type of water cooled reactor in which boiling occurs. The flow distribution is calculated with the aid of a MERCURY autocode programme, BREACH, which is described in detail. This programme computes the steady state longitudinal void distribution and pressure drop in a single channel on the basis of the homogeneous model of two phase flow. (author)
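The homogeneous model referred to in these two records treats vapour and liquid as a single mixture moving at one velocity, so the void fraction follows from the volumetric flow ratio alone. A sketch with approximate water/steam densities (illustrative values, not taken from BREACH):

```python
def void_fraction(x, rho_l, rho_g):
    """Homogeneous-model void fraction from steam quality x.

    Both phases are assumed to travel at the same velocity, so the void
    fraction is just the vapour share of the total volumetric flow.
    """
    vg = x / rho_g               # volumetric flow per unit mass flow, vapour
    vl = (1.0 - x) / rho_l       # volumetric flow per unit mass flow, liquid
    return vg / (vg + vl)

# Water/steam near 70 bar (approximate densities, kg/m^3 -- assumed values).
rho_l, rho_g = 740.0, 36.5
for x in (0.01, 0.05, 0.20):
    print(f"x = {x:.2f} -> alpha = {void_fraction(x, rho_l, rho_g):.2f}")
```

Even a few percent steam quality yields a large void fraction, which is why the longitudinal void distribution dominates the channel pressure drop.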
Identification method of non-reflective faults based on index distribution of optical fibers.
Lee, Wonkyoung; Myong, Seung Il; Lee, Jyung Chan; Lee, Sangsoo
2014-01-13
This paper investigates an identification method of non-reflective faults based on the index distribution of optical fibers. The method identifies not only reflective faults but also non-reflective faults caused by tilted fiber-cut, lateral connector-misalignment, fiber-bend, and temperature variation. We analyze why the wavelength dependence of the fiber-bend is opposite to that of the lateral connector-misalignment, and examine the effect of loss due to temperature variation on OTDR waveforms through simulation and experimental results. Because this method can be realized merely by upgrading the fault-analysis software, without any hardware change, it is competitive and cost-effective in passive optical networks.
Methods and apparatuses for information analysis on shared and distributed computing systems
Bohn, Shawn J [Richland, WA; Krishnan, Manoj Kumar [Richland, WA; Cowley, Wendy E [Richland, WA; Nieplocha, Jarek [Richland, WA
2011-02-22
Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
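The local-then-global term-statistics scheme described above can be sketched without real parallelism: each partition's statistics are computed independently (as each process would do in parallel) and then contributed to a global set. The documents and helper names below are invented for illustration:

```python
from collections import Counter

def local_term_stats(docs):
    """Term statistics for one partition (one process's share of documents)."""
    tf = Counter()   # term frequency across the partition
    df = Counter()   # document frequency: how many docs contain the term
    for doc in docs:
        terms = doc.lower().split()
        tf.update(terms)
        df.update(set(terms))      # count each term once per document
    return tf, df

def merge(stats):
    """Contribute each local set of statistics to the global set."""
    global_tf, global_df = Counter(), Counter()
    for tf, df in stats:
        global_tf += tf
        global_df += df
    return global_tf, global_df

partitions = [
    ["the cat sat", "the dog ran"],   # documents assigned to process 1
    ["the cat ran fast"],             # documents assigned to process 2
]
tf, df = merge(local_term_stats(p) for p in partitions)
print(tf["the"], df["the"], df["cat"])
```

Because Counter addition is associative, the merge order does not matter, which is what lets each process contribute its local statistics independently.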
Particle size distribution of iron nanomaterials in biological medium by SR-SAXS method
International Nuclear Information System (INIS)
Jing Long; Feng Weiyue; Wang Bing; Wang Meng; Ouyang Hong; Zhao Yuliang; Chai Zhifang; Wang Yun; Wang Huajiang; Zhu Motao; Wu Zhonghua
2009-01-01
A better understanding of the biological effects of nanomaterials in organisms requires knowledge of the physicochemical properties of the nanomaterials in biological systems. Affected by the high concentrations of salts and proteins in biological media, nanoparticles agglomerate easily, hence the difficulty in characterizing the size distribution of nanomaterials in a biological medium. In this work, synchrotron radiation small angle X-ray scattering (SR-SAXS) was used to determine size distributions of Fe, Fe 2 O 3 and Fe 3 O 4 nanoparticles of various concentrations in PBS and DMEM culture medium. The results show that size distributions of the nanomaterials could be accurately analyzed by SR-SAXS. The SR-SAXS data were not affected by the particle content or the type of dispersion medium. It is concluded that SR-SAXS can be used for size measurement of nanomaterials in unstable dispersion systems. (authors)
Maximum-likelihood methods for array processing based on time-frequency distributions
Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.
1999-11-01
This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation of non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.
Unifying distribution functions: some lesser known distributions.
Moya-Cessa, J R; Moya-Cessa, H; Berriel-Valdos, L R; Aguilar-Loreto, O; Barberis-Blostein, P
2008-08-01
We show that there is a way to unify distribution functions that describe simultaneously a classical signal in space and (spatial) frequency and position and momentum for a quantum system. Probably the most well known of them is the Wigner distribution function. We show how to unify functions of the Cohen class, Rihaczek's complex energy function, and Husimi and Glauber-Sudarshan distribution functions. We do this by showing how they may be obtained from ordered forms of creation and annihilation operators and by obtaining them in terms of expectation values in different eigenbases.
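For reference, the Wigner distribution function mentioned above is conventionally defined (standard textbook form, not a formula taken from this paper) as

```latex
W(x,p) = \frac{1}{\pi\hbar}\int_{-\infty}^{\infty}
         \psi^{*}(x+y)\,\psi(x-y)\,e^{2ipy/\hbar}\,dy ,
```

a real-valued quasi-probability density over position and momentum whose marginals reproduce the position and momentum distributions of the state.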
Review of islanding detection methods for distributed generation
DEFF Research Database (Denmark)
Chen, Zhe; Mahat, Pukar; Bak-Jensen, Birgitte
2008-01-01
This paper presents an overview of power system islanding and islanding detection techniques. Islanding detection techniques, for a distribution system with distributed generation (DG), can broadly be divided into remote and local techniques. A remote islanding detection technique is associated...
International Nuclear Information System (INIS)
Fukushima, Edwardo F.; Hirose, Shigeo
2000-01-01
This paper introduces an attitude control scheme based on optimal force distribution using quadratic programming, which minimizes joint energy consumption. This method shares similarities with force distribution for multifingered hands, multiple coordinated manipulators and legged walking robots. In particular, an attitude control scheme was introduced inside the force distribution problem, and successfully implemented for control of the articulated body mobile robot KR-II. This is an actual mobile robot composed of cylindrical segments linked in series by prismatic joints and has a long snake-like appearance. These prismatic joints are force controlled so that each segment's vertical motion can automatically follow the terrain irregularities. An attitude control is necessary because this system acts like a system of wheeled inverted pendulum carts connected in series, being unstable by nature. The validity and effectiveness of the proposed method are verified by computer simulation and experiments with the robot KR-II. (author)
Tull, Eugene S.; Thurland, Anne; LaPorte, Ronald E.; Chambers, Earle C.
2003-01-01
The objective of this study was to determine whether acculturation and psychosocial stress exert differential effects on body fat distribution and insulin resistance among native-born African Americans and African-Caribbean immigrants living in the US Virgin Islands (USVI). Data collected from a non-diabetic sample of 183 USVI-born African Americans and 296 African-Caribbean immigrants age > 20 on the island of St. Croix, USVI were studied. Information on demographic characteristics, acculturation and psychosocial stress was collected by questionnaire. Anthropometric measurements were taken, and serum glucose and insulin were measured from fasting blood samples. Insulin resistance was estimated by the homeostasis model assessment (HOMA) method. The results showed that in multivariate regression analyses, controlling for age, education, gender, BMI, waist circumference, family history of diabetes, smoking and alcohol consumption, acculturation was independently related to logarithm of HOMA (InHOMA) scores among USVI-born African Americans, but not among African-Caribbean immigrants. In contrast, among USVI-born African Americans psychosocial stress was not significantly related to InHOMA, while among African-Caribbean immigrants psychosocial stress was independently related to InHOMA in models that included BMI, but not in those which included waist circumference. This study suggests that acculturation and psychosocial stress may have a differential effect on body fat distribution and insulin resistance among native-born and immigrant blacks living in the US Virgin Islands. PMID:12911254
Menduni, Giovanni; Pagani, Alessandro; Rulli, Maria Cristina; Rosso, Renzo
2002-02-01
The extraction of the river network from a digital elevation model (DEM) plays a fundamental role in modelling spatially distributed hydrological processes. The present paper deals with a new two-step procedure based on the preliminary identification of an ideal drainage network (IDN) from contour lines through a variable mesh size, and the further extraction of the actual drainage network (ADN) from the IDN using land morphology. The steepest downslope direction search is used to identify individual channels, which are further merged into a network path draining to a given node of the IDN. The contributing area, peaks and saddles are determined by means of a steepest upslope direction search. The basin area is thus partitioned into physically based finite elements enclosed by irregular polygons. Different methods, i.e. the constant and variable threshold area methods, the contour line curvature method, and a topologic method descending from the Hortonian ordering scheme, are used to extract the ADN from the IDN. The contour line curvature method is shown to be the most appropriate method from a comparison with field surveys. Using the ADN one can model the hydrological response of any sub-basin using a semi-distributed approach. The model presented here combines storm abstraction by the SCS-CN method with surface runoff routing as a geomorphological dispersion process. This is modelled using the gamma instantaneous unit hydrograph as parameterized by river geomorphology. The results are implemented using a project-oriented software facility for the Analysis of LAnd Digital HYdrological Networks (ALADHYN).
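The SCS-CN storm abstraction step used in the model above follows the standard curve-number relation Q = (P − Ia)² / (P − Ia + S), with potential retention S = 25400/CN − 254 (mm) and initial abstraction Ia = 0.2 S. A sketch with an assumed curve number:

```python
def scs_cn_runoff(p_mm, cn):
    """Direct runoff depth (mm) from storm rainfall via the SCS-CN method."""
    s = 25400.0 / cn - 254.0       # potential maximum retention (mm)
    ia = 0.2 * s                   # initial abstraction before runoff begins
    if p_mm <= ia:
        return 0.0                 # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# 80 mm storm on a moderately impervious basin (CN = 75 is an assumption).
print(round(scs_cn_runoff(80.0, 75), 1))
```

The resulting runoff depth is what the gamma instantaneous unit hydrograph would then route to the basin outlet.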
Concise method for evaluating the probability distribution of the marginal cost of power generation
International Nuclear Information System (INIS)
Zhang, S.H.; Li, Y.Z.
2000-01-01
In the developing electricity market, many questions on electricity pricing and the risk modelling of forward contracts require the evaluation of the expected value and probability distribution of the short-run marginal cost of power generation at any given time. A concise forecasting method is provided, which is consistent with the definitions of marginal costs and the techniques of probabilistic production costing. The method embodies clear physical concepts, so that it can be easily understood theoretically and computationally realised. A numerical example has been used to test the proposed method. (author)
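One concise way to obtain such a probability distribution of the short-run marginal cost, in the spirit of probabilistic production costing, is to enumerate unit-availability states and dispatch in merit order; the marginal cost is the cost of the last unit loaded. The unit data below are invented for illustration and the dispatch model is deliberately simplistic:

```python
from itertools import product

# (capacity MW, marginal cost $/MWh, forced outage rate) -- invented unit data.
units = [(100, 10.0, 0.05), (80, 25.0, 0.10), (60, 40.0, 0.08)]
load = 150.0

def marginal_cost_distribution(units, load):
    """Probability distribution of the short-run marginal cost, found by
    enumerating all unit-availability states and dispatching in merit order
    (units are assumed already sorted by cost)."""
    dist = {}
    for state in product([0, 1], repeat=len(units)):
        prob = 1.0
        for up, (_, _, outage) in zip(state, units):
            prob *= (1.0 - outage) if up else outage
        served, mc = 0.0, None
        for up, (cap, cost, _) in zip(state, units):
            if up and served < load:
                served += cap
                mc = cost                  # last unit loaded sets the price
        key = mc if served >= load else "shortfall"
        dist[key] = dist.get(key, 0.0) + prob
    return dist

dist = marginal_cost_distribution(units, load)
for key in sorted(dist, key=str):
    print(key, round(dist[key], 4))
```

The expected marginal cost and any risk measure for forward contracts follow directly from this discrete distribution.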
Energy Technology Data Exchange (ETDEWEB)
Wang Wei [Key Laboratory of Marine Chemistry Theory and Technology, Ministry of Education, Ocean University of China, College of Chemistry and Chemical Engineering, Qingdao, 266100 (China)], E-mail: wwei@ouc.edu.cn; Zhang Xia [Key Laboratory of Marine Chemistry Theory and Technology, Ministry of Education, Ocean University of China, College of Chemistry and Chemical Engineering, Qingdao, 266100 (China); Wang Jia [Key Laboratory of Marine Chemistry Theory and Technology, Ministry of Education, Ocean University of China, College of Chemistry and Chemical Engineering, Qingdao, 266100 (China); State Key Laboratory for Corrosion and Protection, Shenyang, 110016 (China)
2009-09-30
The wire beam electrode (WBE) method was first used to study the activity of local glucose oxidase (GOD) on a stainless steel surface in seawater. Glucose oxidase was immobilized in calcium alginate gel capsules, which were embedded in a layer of artificial biofilm (calcium alginate gel) on the WBE surface. The potential/current distributions on the WBE surface were mapped using a newly developed device for the WBE method in our lab. The results demonstrated that the catalysis of H2O2 formation by GOD can produce local noble potential peaks and cathodic current zones on the stainless steel surface. An interesting fluctuant current distribution around cathodic zones was observed for the first time. The potential and current maps showed that the enzyme heterogeneity of the artificial biofilm caused a corresponding electrochemical heterogeneity at the biofilm/metal interface. The application of the WBE method to ennoblement study enables us to observe the heterogeneous electrochemistry at the biofilm/stainless steel interface directly, providing us with a powerful tool to investigate other biofilm-related processes such as microbially influenced corrosion (MIC)
International Nuclear Information System (INIS)
Wang Wei; Zhang Xia; Wang Jia
2009-01-01
The wire beam electrode (WBE) method was first used to study the activity of local glucose oxidase (GOD) on a stainless steel surface in seawater. Glucose oxidase was immobilized in calcium alginate gel capsules, which were embedded in a layer of artificial biofilm (calcium alginate gel) on the WBE surface. The potential/current distributions on the WBE surface were mapped using a newly developed device for the WBE method in our lab. The results demonstrated that the catalysis of H 2 O 2 formation by GOD can produce local noble potential peaks and cathodic current zones on the stainless steel surface. An interesting fluctuant current distribution around cathodic zones was observed for the first time. The potential and current maps showed that the enzyme heterogeneity of the artificial biofilm caused a corresponding electrochemical heterogeneity at the biofilm/metal interface. The application of the WBE method to ennoblement study enables us to observe the heterogeneous electrochemistry at the biofilm/stainless steel interface directly, providing us with a powerful tool to investigate other biofilm-related processes such as microbially influenced corrosion (MIC).
Combined operation of AC and DC distribution system with distributed generation units
International Nuclear Information System (INIS)
Noroozian, R.; Abedi, M.; Gharehpetian, G.
2010-01-01
This paper presents a DC distribution system which is supplied by external AC systems as well as local DG units in order to demonstrate an overall solution to power quality issues. The proposed operation method is demonstrated by simulation of power transfer between external AC systems, DG units, and AC and DC loads. The power flow control in the DC distribution system is achieved by network converters and DG converters. The mathematical models of the network, DG and load converters are obtained using the averaging technique, which allows the converter systems to be accurately simulated and control strategies for these converters to be derived. A suitable control strategy for network converters is proposed that involves a DC voltage droop regulator and a novel instantaneous power regulation scheme. A novel control technique is also proposed for DG converters. In addition, a novel control system based on stationary and synchronously rotating reference frames is proposed for load converters, so that AC loads connected to the DC bus are supplied with balanced voltages. Several case studies have been carried out based on the proposed methods. The simulation results show that DC distribution systems including DG units can improve the power quality at the point of common coupling (PCC) in the power distribution system or industrial power system. (authors)
Directory of Open Access Journals (Sweden)
J. Szymszal
2009-01-01
Full Text Available The study discusses the application of computer simulation based on the method of the inverse cumulative distribution function. The simulation refers to an elementary static case, which can also be solved by physical experiment, consisting mainly in observations of foundry production in a selected foundry plant. For the simulation and forecasting of foundry production quality in a selected cast iron grade, a random number generator of the Excel calculation sheet was chosen. The very wide potential of this type of simulation when applied to the evaluation of foundry production quality was demonstrated, using a uniformly distributed number generator to generate a variable of an arbitrary distribution, especially of a preset empirical distribution, without any need to fit smooth theoretical distributions to this variable.
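The inverse cumulative distribution function method described above maps a uniform random number through the inverted empirical CDF. A sketch with hypothetical casting-quality classes and frequencies (not the foundry data of the study):

```python
import bisect
import random

def empirical_inverse_cdf_sampler(values, probs, seed=None):
    """Return a sampler drawing from a preset empirical distribution
    by inverting its cumulative distribution function."""
    cdf, acc = [], 0.0
    for p in probs:
        acc += p
        cdf.append(acc)                        # running CDF values
    rng = random.Random(seed)
    def sample():
        u = rng.random()                       # uniform draw in [0, 1)
        # bisect finds the first CDF entry exceeding u; the clamp guards
        # against u landing above a float-rounded final CDF value.
        return values[min(bisect.bisect_right(cdf, u), len(values) - 1)]
    return sample

# Hypothetical quality classes and their observed frequencies.
sample = empirical_inverse_cdf_sampler(["good", "porosity", "crack"],
                                       [0.7, 0.2, 0.1], seed=42)
draws = [sample() for _ in range(10000)]
print(round(draws.count("good") / 10000, 1))
```

Because only the CDF is inverted, the same uniform generator serves any preset empirical distribution, which is exactly the property the study exploits.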
Watanabe, Masahito; Eguchi, Minoru; Hibiya, Taketoshi
1999-07-01
A novel method for control and homogenization of the oxygen distribution in silicon crystals, which uses electromagnetic force (EMF) to rotate the melt without crucible rotation, has been developed. We call it the electromagnetic Czochralski method. An EMF in the azimuthal direction is generated in the melt by the interaction between an electric current through the melt in the radial direction and a vertical magnetic field (B). The rotation rate (ωm) of the silicon melt is continuously changed from 0 to over 105 rpm under I = 0 to 8 A and B = 0 to 0.1 T. Thirty-mm-diameter silicon single crystals free of dislocations could be grown under several conditions. The oxygen concentration in the crystals was continuously changed from 1 × 10^17 to 1 × 10^18 atoms/cm^3 with increasing melt rotation driven by the electromagnetic force. Homogeneous oxygen distributions in the radial direction were achieved. The continuous change of oxygen concentration and the homogenization of the oxygen distribution along the radial direction are attributed to control of the diffusion boundary layer at both the melt/crucible and crystal/melt interfaces by the forced flow due to the EMF. This new method should be useful for growth of large-diameter silicon crystals with a homogeneous oxygen distribution.
Development of methods for DSM and distribution automation planning
International Nuclear Information System (INIS)
Kaerkkaeinen, S.; Kekkonen, V.; Rissanen, P.
1998-01-01
Demand-Side Management (DSM) is usually a utility (or sometimes governmental) activity designed to influence the energy demand of customers (both level and load variation). It includes basic options like strategic conservation or load growth, peak clipping, load shifting and fuel switching. Typical ways to realize DSM are direct load control, innovative tariffs, different types of campaigns, etc. Restructuring of utilities in Finland and increased competition in the electricity market have had a dramatic influence on DSM. Traditional ways are impossible due to the conflicting interests of the generation, network and supply businesses and increased competition between different actors in the market. Costs and benefits of DSM are divided among different companies, and different types of utilities are interested only in those activities which are beneficial to them. On the other hand, due to the increased competition the suppliers are diversifying to different types of products, and an increasing number of customer services partly based on DSM are available. The aim of this project was to develop and assess methods for DSM and distribution automation planning from the utility point of view. The methods were also applied to case studies at utilities.
Development of methods for DSM and distribution automation planning
Energy Technology Data Exchange (ETDEWEB)
Kaerkkaeinen, S; Kekkonen, V [VTT Energy, Espoo (Finland); Rissanen, P [Tietosavo Oy (Finland)
1998-08-01
Demand-Side Management (DSM) is usually a utility (or sometimes governmental) activity designed to influence the energy demand of customers (both its level and its load variation). It includes basic options such as strategic conservation, load growth, peak clipping, load shifting, and fuel switching. Typical ways to realize DSM are direct load control, innovative tariffs, different types of campaigns, etc. Restructuring of the utility sector in Finland and increased competition in the electricity market have had a dramatic influence on DSM. Traditional approaches are impossible due to the conflicting interests of the generation, network and supply businesses and the increased competition between different actors in the market. The costs and benefits of DSM are divided among different companies, and each type of utility is interested only in those activities which are beneficial to it. On the other hand, due to the increased competition, suppliers are diversifying into different types of products, and an increasing number of customer services partly based on DSM are available. The aim of this project was to develop and assess methods for DSM and distribution automation planning from the utility point of view. The methods were also applied to case studies at utilities
Srivastava, Madhur; Freed, Jack H
2017-11-16
Regularization is often utilized to elicit the desired physical results from experimental data. The recent development of a denoising procedure yielding about two orders of magnitude improvement in SNR obviates the need for regularization, which achieves a compromise between canceling the effects of noise and obtaining an estimate of the desired physical results. We show how singular value decomposition (SVD) can be employed directly on the denoised data, using pulse dipolar electron spin resonance experiments as an example. Such experiments are useful in measuring distances and their distributions, P(r), between spin labels on proteins. In noise-free model cases exact results are obtained, but even a small amount of noise (e.g., SNR = 850 after denoising) corrupts the solution. We develop criteria that precisely determine an optimum approximate solution, which can readily be automated. This method is applicable to any signal that is currently processed with regularization of its SVD analysis.
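The direct SVD inversion described in this abstract can be sketched in a few lines. The sketch below is a generic illustration, not the authors' code: the kernel `K`, the truncation index `n_keep`, and the synthetic data are all assumptions for demonstration, and the paper's actual contribution, the criterion for choosing the truncation automatically, is reduced here to a plain parameter.

```python
import numpy as np

def svd_solve(K, S, n_keep):
    """Invert S = K @ P for P by truncated SVD, keeping n_keep singular values.

    Generic sketch of the direct-SVD approach; choosing n_keep optimally
    (the truncation criterion) is the step the paper automates."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    inv_s = np.zeros_like(s)
    inv_s[:n_keep] = 1.0 / s[:n_keep]      # discard small singular values
    return Vt.T @ (inv_s * (U.T @ S))

# Tiny demonstration: recover a known P from noise-free synthetic data.
rng = np.random.default_rng(0)
K = rng.standard_normal((50, 10))          # hypothetical kernel matrix
P_true = rng.random(10)
S = K @ P_true                             # noise-free "signal"
P_est = svd_solve(K, S, n_keep=10)         # full rank: exact recovery
```

With noisy data, reducing `n_keep` trades resolution for stability, which is exactly the compromise the abstract says regularization otherwise has to make.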
A fully distributed method for dynamic spectrum sharing in femtocells
DEFF Research Database (Denmark)
Da Costa, Gustavo Wagner Oliveira; Cattoni, Andrea Fabio; Kovacs, Istvan
2012-01-01
…such characteristics are combined, the traditional network planning and optimization of cellular networks fails to be cost effective. Therefore, a greater degree of automation is needed in femtocells. In particular, this paper proposes a novel method for autonomous selection of spectrum/channels in femtocells. This method effectively mitigates co-tier interference with no signaling at all across different femtocells. Still, the method has a remarkably simple implementation. The efficiency of the proposed method was evaluated by system-level simulations. The results show large throughput gains for the cells…
A New Method for Calculating Fractal Dimensions of Porous Media Based on Pore Size Distribution
Xia, Yuxuan; Cai, Jianchao; Wei, Wei; Hu, Xiangyun; Wang, Xin; Ge, Xinmin
Fractal theory has been widely applied to the petrophysical properties of porous rocks over several decades, and the determination of fractal dimensions is a persistent focus of research and applications of fractal-based methods. In this work, a new method for calculating the pore-space fractal dimension and the tortuosity fractal dimension of porous media is derived based on a fractal capillary model assumption. The work establishes a relationship between the fractal dimensions and the pore size distribution, which can be used directly to calculate the fractal dimensions. Published pore size distribution data for eight sandstone samples are used to calculate the fractal dimensions, and the results are compared with predictions from the analytical expression. In addition, the proposed method is tested on micro-CT images of three sandstone cores, and the results are compared with fractal dimensions obtained by the box-counting algorithm. The tests also confirm a self-similar fractal range in sandstone when smaller pores are excluded.
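The basic idea of extracting a fractal dimension from a pore size distribution can be sketched as follows. This is a generic log-log fit under the standard fractal scaling assumption N(>r) ~ r^(-D), not the paper's specific analytical expression; the synthetic data are constructed only to check the fit.

```python
import numpy as np

def fractal_dimension(radii, counts):
    """Estimate the pore-space fractal dimension D from a pore size
    distribution, assuming the fractal scaling N(>r) ~ r**(-D).

    radii:  pore radii; counts: number of pores at each radius.
    A least-squares log-log fit, sketched for illustration."""
    order = np.argsort(radii)[::-1]                  # largest pores first
    r = np.asarray(radii, float)[order]
    N_cum = np.cumsum(np.asarray(counts, float)[order])  # cumulative N(> r)
    slope, _ = np.polyfit(np.log(r), np.log(N_cum), 1)
    return -slope                                    # D = -d log N / d log r

# Synthetic check: a distribution generated with D = 2.5 should be recovered.
D_true = 2.5
r_desc = np.logspace(0, -2, 30)                      # descending radii
N_target = r_desc ** (-D_true)                       # exact cumulative counts
counts = np.diff(N_target, prepend=0.0)              # per-bin counts
D_est = fractal_dimension(r_desc, counts)
```

Real pore size data only follow the power law over a limited range, so in practice the fit window would be restricted to the self-similar range the abstract mentions.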
International Nuclear Information System (INIS)
Snowdon, K.J.; Andresen, B.; Veje, E.
1978-01-01
The method of calculating relative initial level populations of excited states of sputtered atoms is developed in principle and compared with those in current use. The reason that the latter, although mathematically different, have generally led to similar population distributions is outlined. (Auth.)
International Nuclear Information System (INIS)
Chin', N.Kh.; Zazhogin, A.P.; Bulojchik, Zh.I.; Tanin, A.L.; Pashkovskaya, I.D.; Nechipurenko, N.I.
2011-01-01
Based on local analysis of the line intensities of Al, Ca, Mg, and Zn in spectra of samples of dried drops of egg albumin, the possibility of estimating the spatial elemental distribution along the drop diameter was demonstrated using the atomic-emission multichannel spectrometry method. It was found that with an increase in the concentration of elements with a high diffusion coefficient (Ca), diffusion counteracts their carry-over to the boundary of the evaporating drops, simultaneously displacing the salts of other elements (Al, Fe, Zn) to the drop periphery. This work shows that excitation of the analyzed surface of a dried protein drop by double laser pulses enables a semi-quantitative estimation of the distribution of essential elements along the drop radius. Such investigations look very promising in the search for markers of various diseases and in the development of methods revealing pathological processes at the preclinical stage, making it possible to look for the causes of elemental imbalance, to realize a targeted selection of preparations and active additives, and to correct the course of treatment. (authors)
Ida, Midori; Hirata, Masakazu; Hosoda, Kiminori; Nakao, Kazuwa
2013-02-01
Two novel bioelectrical impedance analysis (BIA) methods have been developed recently for evaluation of intra-abdominal fat accumulation. Both methods use electrodes placed on the abdominal wall and allow easy evaluation of the intra-abdominal fat area (IAFA) without radiation exposure. Of these, the "abdominal BIA" method measures the impedance distribution along the abdominal anterior-posterior axis, and IAFA by BIA (BIA-IAFA) is calculated from the waist circumference and the voltage occurring at the flank. The dual BIA method measures the impedance of the trunk and of the body surface at the abdominal level and calculates BIA-IAFA from the transverse and antero-posterior diameters of the abdomen and the impedance of the trunk and abdominal surface. BIA-IAFA by these two methods correlated well with IAFA measured by abdominal CT (CT-IAFA), with a correlation coefficient of 0.88 (n = 91, p < …). These methods are useful for evaluating abdominal adiposity in clinical studies and in the routine clinical practice of metabolic syndrome and obesity.
Giri, Veda N.; Coups, Elliot J.; Ruth, Karen; Goplerud, Julia; Raysor, Susan; Kim, Taylor Y.; Bagden, Loretta; Mastalski, Kathleen; Zakrzewski, Debra; Leimkuhler, Suzanne; Watkins-Bruner, Deborah
2009-01-01
Purpose: Men with a family history (FH) of prostate cancer (PCA) and African American (AA) men are at higher risk for PCA. Recruitment and retention of these high-risk men into early detection programs has been challenging. We report a comprehensive analysis of recruitment methods, show rates, and participant factors from the Prostate Cancer Risk Assessment Program (PRAP), a prospective, longitudinal PCA screening study. Materials and Methods: Men 35-69 years are eligible if they have a FH of PCA, are AA, or have a BRCA1/2 mutation. Recruitment methods were analyzed with respect to participant demographics and show rate at the first PRAP appointment using standard statistical methods. Results: Out of 707 men recruited, 64.9% showed to the initial PRAP appointment. More individuals were recruited via radio than via referral or other methods (χ² = 298.13, p < .0001). Men recruited via radio were more likely to be AA (p < 0.001), less educated (p = 0.003), not married or partnered (p = 0.007), and to have no FH of PCA (p < 0.001). Men recruited via referral had higher incomes (p = 0.007) and were more likely to attend their initial PRAP visit than those recruited by radio or other methods (χ² = 27.08, p < .0001). Conclusions: This comprehensive analysis finds that radio leads to higher recruitment of AA men with lower socioeconomic status. However, these are the high-risk men that have lower show rates for PCA screening. Targeted motivational measures need to be studied to improve show rates for PCA risk assessment among these high-risk men. PMID:19758657
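The chi-square comparisons reported in this abstract can be illustrated with a standard contingency-table test. The counts below are purely hypothetical (they are not the paper's data); the point is only to show the shape of the analysis: show/no-show counts by recruitment method, tested with `scipy.stats.chi2_contingency`.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows are recruitment methods, columns are
# (showed, did not show) at the first appointment.
table = np.array([
    [180, 120],   # radio
    [150,  50],   # referral
    [ 80,  60],   # other
])
chi2, p, dof, expected = chi2_contingency(table)
show_rates = table[:, 0] / table.sum(axis=1)   # per-method show rate
```

With these illustrative counts the referral group has the highest show rate and the test rejects independence of method and attendance, mirroring the qualitative finding reported above.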
DEFF Research Database (Denmark)
Jensen, Kåre Jean; Munk, Steen M.; Sørensen, John Aasted
1998-01-01
A new approach to the localization of high impedance ground faults in compensated radial power distribution networks is presented. The total size of such networks is often very large, and a major part of their monitoring is carried out manually. The increasing complexity of industrial processes and communication systems leads to demands for improved monitoring of power distribution networks so that the quality of power delivery can be kept at a controlled level. The ground fault localization method for each feeder in a network is based on centralized frequency-broadband measurement of three-phase voltages and currents. The method consists of a feature extractor, based on a grid description of the feeder by impulse responses, and a neural network for ground fault localization. The emphasis of this paper is the feature extractor and the detection of the time instance of a ground fault…
Energy Technology Data Exchange (ETDEWEB)
Choi, Jang-Hwan, E-mail: jhchoi21@stanford.edu [Department of Radiology, Stanford University, Stanford, California 94305 and Department of Mechanical Engineering, Stanford University, Stanford, California 94305 (United States); Constantin, Dragos [Microwave Physics R& E, Varian Medical Systems, Palo Alto, California 94304 (United States); Ganguly, Arundhuti; Girard, Erin; Fahrig, Rebecca [Department of Radiology, Stanford University, Stanford, California 94305 (United States); Morin, Richard L. [Mayo Clinic Jacksonville, Jacksonville, Florida 32224 (United States); Dixon, Robert L. [Department of Radiology, Wake Forest University, Winston-Salem, North Carolina 27157 (United States)
2015-08-15
…0% [±0.6%] difference) and the 6-point case (0.7% [±0.6%] difference) performed best for method 1 and method 2, respectively. Moreover, method 2 demonstrated high-fidelity surface reconstruction with as few as 5 points, showing pixelwise absolute differences of 3.80 mGy (±0.32 mGy). Although the performance was shown to be sensitive to the phantom displacement from the isocenter, it changed by less than 2% for shifts up to 2 cm along the x- and y-axes in the central phantom plane. Conclusions: With as few as five points, method 1 and method 2 were able to compute the mean dose with reasonable accuracy, demonstrating differences of 1.7% (±1.2%) and 1.3% (±1.0%), respectively. A larger number of points does not necessarily guarantee better performance of the methods; an optimal choice of point placement is necessary. The performance of the methods is sensitive to the alignment of the center of the body phantom relative to the isocenter. In body applications where dose distributions are important, method 2 is a better choice than method 1, as it reconstructs the dose surface with high fidelity using as few as five points.
Predictable return distributions
DEFF Research Database (Denmark)
Pedersen, Thomas Quistgaard
…trace out the entire distribution. A univariate quantile regression model is used to examine stock and bond return distributions individually, while a multivariate model is used to capture their joint distribution. An empirical analysis on US data shows that certain parts of the return distributions… Out-of-sample analyses show that the relative accuracy of the state variables in predicting future returns varies across the distribution. A portfolio study shows that an investor with power utility can obtain economic gains by applying the empirical return distribution in portfolio decisions instead of imposing…
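Quantile regression of the kind referenced in this abstract fits the conditional q-th quantile of returns as a function of state variables by minimizing the asymmetric "pinball" loss. The sketch below is a minimal generic version (Nelder-Mead on the pinball loss over synthetic data); production work would use a dedicated solver such as statsmodels' `QuantReg`, and the predictors here stand in for the paper's state variables.

```python
import numpy as np
from scipy.optimize import minimize

def pinball_loss(beta, X, y, q):
    """Asymmetric 'check' loss minimized by quantile regression."""
    u = y - X @ beta
    return np.mean(np.where(u >= 0, q * u, (q - 1) * u))

def quantile_regression(X, y, q):
    """Fit y's conditional q-th quantile as a linear function of X."""
    X1 = np.column_stack([np.ones(len(y)), X])       # add intercept column
    beta0 = np.zeros(X1.shape[1])
    res = minimize(pinball_loss, beta0, args=(X1, y, q),
                   method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 10000})
    return res.x

# Median (q = 0.5) regression on synthetic data with known coefficients.
rng = np.random.default_rng(1)
X = rng.standard_normal(500)
y = 2.0 + 3.0 * X + rng.standard_normal(500)
beta = quantile_regression(X, y, q=0.5)              # approx. [2, 3]
```

Fitting the same model over a grid of q values (0.05, 0.10, …, 0.95) is what lets the approach "trace out the entire distribution" rather than only its mean.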
Five Kepler target stars that show multiple transiting exoplanet candidates
Energy Technology Data Exchange (ETDEWEB)
Steffen, Jason H.; /Fermilab; Batalha, Natalie M.; /San Jose State U.; Borucki, William J.; /NASA, Ames; Buchhave, Lars A.; /Harvard-Smithsonian Ctr. Astrophys. /Bohr Inst.; Caldwell, Douglas A.; /NASA, Ames /SETI Inst., Mtn. View; Cochran, William D.; /Texas U.; Endl, Michael; /Texas U.; Fabrycky, Daniel C.; /Harvard-Smithsonian Ctr. Astrophys.; Fressin, Francois; /Harvard-Smithsonian Ctr. Astrophys.; Ford, Eric B.; /Florida U.; Fortney, Jonathan J.; /UC, Santa Cruz, Phys. Dept. /NASA, Ames
2010-06-01
We present and discuss five candidate exoplanetary systems identified with the Kepler spacecraft. These five systems show transits from multiple exoplanet candidates. Should these objects prove to be planetary in nature, then these five systems open new opportunities for the field of exoplanets and provide new insights into the formation and dynamical evolution of planetary systems. We discuss the methods used to identify multiple transiting objects from the Kepler photometry as well as the false-positive rejection methods that have been applied to these data. One system shows transits from three distinct objects while the remaining four systems show transits from two objects. Three systems have planet candidates that are near mean motion commensurabilities - two near 2:1 and one just outside 5:2. We discuss the implications that multitransiting systems have on the distribution of orbital inclinations in planetary systems, and hence their dynamical histories; as well as their likely masses and chemical compositions. A Monte Carlo study indicates that, with additional data, most of these systems should exhibit detectable transit timing variations (TTV) due to gravitational interactions - though none are apparent in these data. We also discuss new challenges that arise in TTV analyses due to the presence of more than two planets in a system.
Directory of Open Access Journals (Sweden)
Joon-Ho Choi
2013-09-01
A distribution system was designed and operated assuming unidirectional power flow from a utility source to end-use loads. Large penetration of distributed generation (DG) into the existing distribution system causes a variety of technical problems, such as frequent tap-changing of the on-load tap changer (OLTC) transformer, local voltage rise, protection coordination issues, exceeded short-circuit capacity, and harmonic distortion. In terms of voltage regulation, the intermittent fluctuation of the DG output power results in frequent tap-changing operations of the OLTC transformer. Thus, many utilities limit the penetration level of DG and are eager to find reasonable penetration limits for DG in the distribution system. To overcome this technical problem, utilities have developed new voltage regulation methods for distribution systems with a large DG penetration level. In this paper, the impact of DG on OLTC operations controlled by the line drop compensation (LDC) method is analyzed. In addition, a generalized methodology for determining the DG penetration limits of a distribution substation transformer is proposed. The proposed DG penetration limits could be adopted for a simplified interconnection process in DG interconnection guidelines.
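The line drop compensation idea analyzed in this abstract can be shown in a per-unit sketch. All impedance and current values below are illustrative assumptions, not the paper's feeder data: the regulator estimates the load-centre voltage from its local measurement, and DG that serves part of the load locally reduces the measured feeder current, biasing that estimate.

```python
# Line drop compensation (LDC), per-unit sketch with complex phasors.
def ldc_estimate(v_reg, i_feeder, r_set, x_set):
    """Load-centre voltage as the regulator infers it:
    regulated bus voltage minus the modelled line drop."""
    return v_reg - i_feeder * complex(r_set, x_set)

v_reg = 1.03 + 0j                 # voltage at the regulator bus (pu)
i_full = 0.5 - 0.1j               # feeder current without DG (pu)
i_with_dg = 0.3 - 0.06j           # DG supplies part of the load locally
r_set, x_set = 0.03, 0.06         # illustrative LDC compensator settings

v_no_dg = abs(ldc_estimate(v_reg, i_full, r_set, x_set))
v_dg = abs(ldc_estimate(v_reg, i_with_dg, r_set, x_set))
```

Because `v_dg > v_no_dg`, the regulator "sees" a healthier load centre than actually exists downstream of the DG, which is one mechanism behind the tap-changing problems the abstract describes.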
International Nuclear Information System (INIS)
Muhammad Subekti; Darwis Isnaini; Endiah Puji Hastuti
2013-01-01
Measuring the coolant-velocity distribution in the subchannels of a fuel element of the RSG-GAS research reactor is difficult because the channels and subchannels inside the fuel element are too narrow. Hence, calculation is required to predict the coolant-velocity distribution inside the subchannels and to confirm that the presence of the handle does not disturb the velocity distribution in any subchannel. The calculation uses a CFD method that resolves the 3-dimensional interior; such a subchannel coolant-velocity calculation had not previously been carried out. The research objective is to investigate the coolant-velocity distribution in a plate-type fuel element of the RSG-GAS research reactor using a 3-dimensional CFD method. This research is also required as part of the development of the thermal-hydraulic design of fuel elements for an innovative research reactor. The modeling uses a half-symmetry model in the Gambit software, and the calculation uses a turbulence model in the FLUENT 6.3 software. The 3D coolant velocity in the subchannel calculated with the CFD method is about 4.06% lower than the 1D result, a difference attributed to the treatment of the fuel-element handle in the 1D calculation. (author)
Method of estimating the reactor power distribution
International Nuclear Information System (INIS)
Mitsuta, Toru; Fukuzaki, Takaharu; Doi, Kazuyori; Kiguchi, Takashi.
1984-01-01
Purpose: To improve the calculation accuracy of the power distribution and thereby improve the reliability of power distribution monitoring. Constitution: In detector strings disposed within the reactor core, movable neutron flux monitors are provided in addition to the conventional fixed-position neutron monitors. During periodic monitoring, a power distribution X1 is calculated from a physical reactor core model. A higher-power position X2 is then detected by position detectors, and the value X2 is sent to a neutron flux monitor driving device, which displaces the movable monitors to the higher-power position in each of the strings. After displacement, the value X1 is corrected by a correction device using measured values from the movable and fixed monitors, and the corrected value is sent to the reactor core monitoring device. Upon failure of a fixed monitor, its position is sent to the monitor driving device and the movable monitors are displaced to that position for measurement. (Sekiya, K.)
A new method for distribution of consumed heat in a fuel and costs in power and heating plants
Energy Technology Data Exchange (ETDEWEB)
Kadrnozka, J [Technical Univ., Brno (Czech Republic)
1993-09-01
A new method is described for distributing the heat consumed in fuel, and the corresponding costs, in combined power and heating plants. It is based on assigning to electricity and to heat relatively equal shares of the advantage that follows from the combined generation of electricity and heat. The method is physically substantiated and very general, and it can be applied to new types of power and heating plants and to the distribution of investment and other costs. (orig./GL)
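One common way to realize "the same relative share of the cogeneration advantage" is to scale the fuel each product would need in separate production down by a common factor until the total matches the actual CHP fuel use. The sketch below follows that spirit; the reference efficiencies and energy figures are illustrative assumptions, not the paper's values.

```python
def allocate_chp_fuel(q_fuel, e_out, q_out, eta_e_ref, eta_q_ref):
    """Split CHP fuel between electricity and heat so that both carry
    the same relative share of the cogeneration advantage.

    q_fuel: actual fuel input to the CHP plant
    e_out, q_out: electricity and heat produced
    eta_e_ref, eta_q_ref: efficiencies of separate reference production
    Returns (fuel_to_electricity, fuel_to_heat)."""
    f_e_sep = e_out / eta_e_ref          # fuel for separate electricity
    f_q_sep = q_out / eta_q_ref          # fuel for separate heat
    scale = q_fuel / (f_e_sep + f_q_sep) # common advantage factor (< 1)
    return f_e_sep * scale, f_q_sep * scale

f_e, f_q = allocate_chp_fuel(q_fuel=100.0, e_out=35.0, q_out=50.0,
                             eta_e_ref=0.40, eta_q_ref=0.90)
```

The two allocations always sum to the actual fuel consumption, and their ratio equals the ratio of the separate-production fuel demands, which is the "same proportion" property.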
Comparison of estimation methods for fitting Weibull distribution
African Journals Online (AJOL)
Tersor
Tree diameter characterisation using probability distribution functions is essential for determining the structure of forest stands. This has been an intrinsic part of forest management planning, decision-making and research in recent times. The distribution of species and tree size in a forest area gives the structure of the stand.
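Two of the estimation methods commonly compared for fitting a Weibull diameter distribution are maximum likelihood and the probability-plot (linearized-CDF) least-squares fit. The sketch below compares them on synthetic diameters; the shape and scale parameters are illustrative, and the abstract does not specify which estimators the paper itself compares.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
shape_true, scale_true = 2.0, 15.0       # illustrative stand parameters (cm)
diameters = stats.weibull_min.rvs(shape_true, scale=scale_true,
                                  size=2000, random_state=rng)

# Method 1: maximum likelihood (location fixed at zero).
c_mle, _, scale_mle = stats.weibull_min.fit(diameters, floc=0)

# Method 2: probability plot -- least squares on the linearized CDF:
#   log(-log(1 - F)) = c*log(d) - c*log(scale)
d_sorted = np.sort(diameters)
F = (np.arange(1, len(d_sorted) + 1) - 0.5) / len(d_sorted)  # plotting position
c_ls, intercept = np.polyfit(np.log(d_sorted), np.log(-np.log(1 - F)), 1)
scale_ls = np.exp(-intercept / c_ls)
```

Comparing how close `(c_mle, scale_mle)` and `(c_ls, scale_ls)` come to the generating parameters, over many simulated stands, is the basic design of an estimator-comparison study like the one described above.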
Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method
Directory of Open Access Journals (Sweden)
Shaoyun Ge
2014-01-01
In this paper we treat the reliability assessment of an active distribution system at low and high DG penetration levels using the Monte Carlo simulation method. The problem is formulated as a two-case program: a low-penetration simulation and a high-penetration simulation. The load-shedding strategy and the simulation process are described in detail for each FMEA process. Results indicate that the integration of DG can improve the reliability of the system if the system is operated actively.
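The core of a sequential Monte Carlo reliability assessment is drawing random times-to-failure and accumulating outage time. The sketch below is a deliberately minimal single-component version with assumed failure and repair parameters; the paper's model (an FMEA over a full network with DG and load shedding) is far richer.

```python
import numpy as np

def mc_unavailability(fail_rate, repair_hours, n_years, rng):
    """Sequential Monte Carlo of one feeder section: exponential
    times-to-failure at rate fail_rate (per hour), fixed repair time.
    Returns the estimated annual outage duration in hours/year."""
    hours = 0.0
    outage = 0.0
    horizon = n_years * 8760.0
    while True:
        hours += rng.exponential(1.0 / fail_rate)  # time to next failure
        if hours >= horizon:
            break
        outage += repair_hours                     # accumulate outage time
        hours += repair_hours                      # clock advances during repair
    return outage / n_years

rng = np.random.default_rng(3)
u = mc_unavailability(fail_rate=2.0 / 8760.0,      # ~2 failures per year
                      repair_hours=4.0, n_years=2000, rng=rng)
```

With roughly two 4-hour failures per year the estimate converges near 8 h/yr; a network study repeats this per component and maps each failure onto affected load points (the FMEA step) to build indices such as SAIFI and SAIDI.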
International Nuclear Information System (INIS)
Lovera, Oscar M.; Grove, Marty; Kimbrough, David L.; Abbott, Patrick L.
1999-01-01
We have developed a two-dimensional, thermokinetic model that predicts the closure age distributions of detrital minerals from pervasively intruded and differentially exhumed basement. Using this model, we outline a method to determine the denudation history of orogenic regions on the basis of closure age distributions in synorogenic to postorogenic forearc strata. At relatively high mean denudation rates of 0.5 km m.y.-1 sustained over millions of years, magmatic heating events have minimal influence upon the age distributions of detrital minerals such as K-feldspar that are moderately retentive of radiogenic Ar. At lower rates, however, the effects of batholith emplacement may be substantial. We have applied the approach to detrital K-feldspars from forearc strata derived from the deeply denuded Peninsular Ranges batholith (PRB). Agreement of the denudation history deduced from the detrital K-feldspar data with thermochronologic constraints from exposed PRB basement lead us to conclude that exhumation histories of magmatic arcs should be decipherable solely from closure age distributions of detrital minerals whose depositional age is known. (c) 1999 American Geophysical Union
An analog computer method for solving flux distribution problems in multi-region nuclear reactors
Energy Technology Data Exchange (ETDEWEB)
Radanovic, L; Bingulac, S; Lazarevic, B; Matausek, M [Boris Kidric Institute of Nuclear Sciences Vinca, Beograd (Yugoslavia)
1963-04-15
The paper describes a method developed for determining criticality conditions and plotting flux distribution curves of multi-region nuclear reactors on a standard analog computer. The method, which is based on the one-dimensional two-group treatment, avoids the iterative procedures normally used for boundary value problems and is practically insensitive to errors in initial conditions. The amount of analog equipment required is reduced to a minimum and is independent of the number of core regions and reflectors. (author)
Harris, Scott H.; Johnson, Joel A.; Neiswanger, Jeffery R.; Twitchell, Kevin E.
2004-03-09
The present invention includes systems configured to distribute a telephone call, communication systems, communication methods and methods of routing a telephone call to a customer service representative. In one embodiment of the invention, a system configured to distribute a telephone call within a network includes a distributor adapted to connect with a telephone system, the distributor being configured to connect a telephone call using the telephone system and output the telephone call and associated data of the telephone call; and a plurality of customer service representative terminals connected with the distributor, a selected customer service representative terminal being configured to receive the telephone call and the associated data, the distributor and the selected customer service representative terminal being configured to synchronize application of the telephone call and associated data from the distributor to the selected customer service representative terminal.
Garrido, Eva; Camacho-Muñoz, Dolores; Martín, Julia; Santos, Antonio; Santos, Juan Luis; Aparicio, Irene; Alonso, Esteban
2016-12-01
The Guadiamar River is located in the southwest of the Iberian Peninsula and connects two protected areas in the south of Spain: Sierra Morena and Doñana National Park. It is sited in an area affected by urban, industrial and agricultural sewage pollution and with a tradition of intensive mining. Most studies in this area have focused on the presence of heavy metals and, until now, little has been known about the occurrence of other contaminants such as emerging organic pollutants (EOPs). In this work, an analytical method has been optimized and validated for monitoring forty-seven EOPs in surface water and applied to study the distribution and environmental risk of these pollutants in the Guadiamar River basin. The method is based on solid-phase extraction followed by determination with liquid chromatography-triple quadrupole-tandem mass spectrometry. Sixty percent of the target compounds were found in the analyzed samples. The highest concentrations were found for two plasticizers (bisphenol A and di(2-ethylhexyl)phthalate, mean concentrations up to 930 ng/L) and two pharmaceutical compounds (caffeine (up to 623 ng/L) and salicylic acid (up to 318 ng/L)). The study evaluated the potential sources (industrial or urban) of the studied compounds and the spatial distribution of their concentrations along the river. Environmental risk assessment showed a major risk in the south of the river, mainly due to discharges of wastewater effluents.
Dumont, Gaël; Pilawski, Tamara; Dzaomuho-Lenieregue, Phidias; Hiligsmann, Serge; Delvigne, Frank; Thonart, Philippe; Robert, Tanguy; Nguyen, Frédéric; Hermans, Thomas
2016-09-01
The gravimetric water content of the waste material is a key parameter in waste biodegradation. Previous studies suggest a correlation between changes in water content and modification of electrical resistivity. This study, based on field work in Mont-Saint-Guibert landfill (Belgium), aimed, on one hand, at characterizing the relationship between gravimetric water content and electrical resistivity and on the other hand, at assessing geoelectrical methods as tools to characterize the gravimetric water distribution in a landfill. Using excavated waste samples obtained after drilling, we investigated the influences of the temperature, the liquid phase conductivity, the compaction and the water content on the electrical resistivity. Our results demonstrate that Archie's law and Campbell's law accurately describe these relationships in municipal solid waste (MSW). Next, we conducted a geophysical survey in situ using two techniques: borehole electromagnetics (EM) and electrical resistivity tomography (ERT). First, in order to validate the use of EM, EM values obtained in situ were compared to electrical resistivity of excavated waste samples from corresponding depths. The petrophysical laws were used to account for the change of environmental parameters (temperature and compaction). A rather good correlation was obtained between direct measurement on waste samples and borehole electromagnetic data. Second, ERT and EM were used to acquire a spatial distribution of the electrical resistivity. Then, using the petrophysical laws, this information was used to estimate the water content distribution. In summary, our results demonstrate that geoelectrical methods represent a pertinent approach to characterize spatial distribution of water content in municipal landfills when properly interpreted using ground truth data. These methods might therefore prove to be valuable tools in waste biodegradation optimization projects. Copyright © 2016 Elsevier Ltd. All rights reserved.
Huang, Zhi-Yong; Xie, Hong; Cao, Ying-Lan; Cai, Chao; Zhang, Zhi
2014-02-15
The contamination of Pb in agricultural soils is one of the most important ecological problems, since it poses a serious risk to human health through the food chain. Hence, the fate of exogenous Pb in agricultural soils needs to be explored in depth. By spiking soils with the stable enriched isotope (206)Pb, contamination by exogenous Pb(2+) ions was simulated in three agricultural soils sampled from the estuary areas of the Jiulong River, China, and the distribution, mobility and bioavailability of the exogenous Pb in the soils were investigated using the isotopic labeling method coupled with a four-stage BCR (European Community Bureau of Reference) sequential extraction procedure. Results showed that about 60-85% of the exogenous Pb was distributed in the reducible fraction, while less than 1.0% was in the acid-extractable fraction. After planting, the amounts of exogenous Pb present in the acid-extractable, reducible and oxidizable fractions of rhizospheric soils decreased by 60-66%; part of the exogenous Pb was assimilated by the plants, while most of the metal likely migrated downward under daily watering and fertilizer application. The results show that the isotopic labeling technique coupled with sequential extraction procedures makes it possible to explore the distribution, mobility and bioavailability of exogenous Pb in contaminated soils, which may be useful for further soil remediation. Copyright © 2014 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Gigase, Yves
2007-01-01
Available in abstract form only. Full text of publication follows: The uncertainty on the characteristics of radioactive LILW waste packages is difficult to determine and often very large, owing to a lack of knowledge of the constitution of the waste package and of the composition of the radioactive sources inside. To calculate a quantitative estimate of the uncertainty on a characteristic of a waste package, one has to combine these various uncertainties. This paper discusses an approach to this problem based on the use of the log-normal distribution, which is both elegant and easy to use. It can provide, for example, quantitative estimates of uncertainty intervals that 'make sense'. The purpose is to develop a pragmatic approach that can be integrated into existing characterization methods. In this paper we show how our method can be applied to the scaling factor method. We also explain how it can be used when estimating other, more complex characteristics such as the total uncertainty of a collection of waste packages. This method could have applications in radioactive waste management, in particular in decision processes where the uncertainty on the amount of activity is considered important, such as probabilistic risk assessment or the definition of criteria for acceptance or categorization. (author)
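The appeal of the log-normal distribution for combining multiplicative uncertainties is that the product of independent log-normal factors is again log-normal: log-medians add and log-variances add. The sketch below shows that algebra for hypothetical factors (the medians and geometric standard deviations are illustrative, not from the paper).

```python
import math

def combine_lognormal(factors):
    """Combine independent multiplicative log-normal factors, each given
    as (median, geometric standard deviation).  The product is log-normal
    with mu = sum of log-medians and sigma^2 = sum of log-variances.
    Returns the combined (median, gsd)."""
    mu = sum(math.log(med) for med, gsd in factors)
    var = sum(math.log(gsd) ** 2 for med, gsd in factors)
    return math.exp(mu), math.exp(math.sqrt(var))

# e.g. a scaling factor known to gsd 2.0 applied to a measured key-nuclide
# activity known to gsd 1.3 (both values hypothetical):
median, gsd = combine_lognormal([(5.0, 2.0), (10.0, 1.3)])
# An approximate 95% interval spans median / gsd**2 to median * gsd**2.
lo, hi = median / gsd ** 2, median * gsd ** 2
```

The resulting interval is asymmetric about the median, which is exactly the "intervals that make sense" property for strictly positive quantities such as activities.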
Walker, Guy H; Stanton, Neville A; Baber, Chris; Wells, Linda; Gibson, Huw; Salmon, Paul; Jenkins, Daniel
2010-02-01
Command and control is a generic activity involving the exercise of authority over assigned resources, combined with planning, coordinating and controlling how those resources are used. The challenge for understanding this type of activity is that it is not often amenable to the conventional experimental/methodological approach. Command and control tends to be multi-faceted (so it requires more than one method), is made up of interacting social and technical elements (so it requires a systemic approach) and exhibits aggregate behaviours that emerge from these interactions (so it requires methods that go beyond reductionism). In these circumstances a distributed cognition approach is highly appropriate, yet the existing ethnographic methods make it difficult to apply and, for non-specialist audiences, sometimes difficult to interpret meaningfully. The Event Analysis for Systemic Teamwork method is put forward as a means of working from a distributed cognition perspective but in a way that goes beyond ethnography. A worked example from Air Traffic Control is used to illustrate how the language of social science can be translated into the language of systems analysis. Statement of Relevance: Distributed cognition provides a highly appropriate conceptual response to complex work settings such as Air Traffic Control. This paper deals with how to realise those benefits in practice without recourse to problematic ethnographic techniques.
A nodal method of calculating power distributions for LWR-type reactors with square fuel lattices
International Nuclear Information System (INIS)
Hoeglund, Randolph.
1980-06-01
A nodal model is developed for calculating the power distribution in the core of a light water reactor with a square fuel lattice. The reactor core is divided into a number of more or less cubic nodes, and a nodal coupling equation, which gives the thermal power density in one node as a function of the power densities in the neighbouring nodes, is derived from the neutron diffusion equations for two energy groups. The three-dimensional power distribution can be computed iteratively using this coupling equation, for example following the point Jacobi, the Gauss-Seidel or the point successive overrelaxation scheme. The method has been included as the neutronic model in a reactor core simulation computer code, BOREAS, where it is combined with a thermal-hydraulic model in order to make possible a simultaneous computation of the interdependent power and void distributions in a boiling water reactor. Also described in this report are a method of temporary one-dimensional iteration, developed to accelerate the iterative solution of the problem, and the Haling principle, which is widely used in the planning of reloading operations for BWR reactors. (author)
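The iterative solution of a nodal coupling equation can be sketched as a point-Jacobi sweep with renormalization. The coupling matrix below is an illustrative one-dimensional stand-in, not the two-group coupling derived in the report:

```python
import numpy as np

def nodal_power(A, tol=1e-10, max_iter=10_000):
    """Solve p = A p (up to normalization) by point-Jacobi iteration.

    A[i, j] is the coupling coefficient giving node i's power from
    node j's; the converged, normalized p is the power shape.
    """
    p = np.ones(A.shape[0])
    for _ in range(max_iter):
        p_new = A @ p
        p_new /= p_new.sum()          # normalize total power each sweep
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

# toy 5-node slab: each node coupled to itself and its nearest neighbours
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 0.5
    if i > 0:
        A[i, i - 1] = 0.25
    if i < n - 1:
        A[i, i + 1] = 0.25
shape = nodal_power(A)   # symmetric shape, peaked at the centre node
```

Gauss-Seidel or successive overrelaxation would update nodes in place within a sweep instead of from the previous iterate, which typically converges in fewer sweeps.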
International Nuclear Information System (INIS)
Ekspong, G.; Johansson, H.
1976-04-01
In high energy particle reactions where many neutral pions may be produced, the information contained in the decay gamma radiation can be converted into information about the neutral pions. Two methods are described for obtaining the moments of the multiplicity distribution of the neutral pions from the distribution of the number of electron-positron pairs. (Auth.)
Standardization of a method to study the distribution of Americium in purex process
International Nuclear Information System (INIS)
Dapolikar, T.T.; Pant, D.K.; Kapur, H.N.; Kumar, Rajendra; Dubey, K.
2017-01-01
In the present work the distribution of Americium in PUREX process is investigated in various process streams. For this purpose a method has been standardized for the determination of Am in process samples. The method involves extraction of Am with associated actinides using 30% TRPO-NPH at 0.3M HNO3, followed by selective stripping of Am from the organic phase into aqueous phase at 6M HNO3. The assay of aqueous phase for Am content is carried out by alpha radiometry. The investigation has revealed that 100% Am follows the HLLW route. (author)
Determination of the particle size distribution in a powder using radiotracers
International Nuclear Information System (INIS)
Revilla D, R.
1974-01-01
The mesh (sieving) method is generally used to determine experimentally the particle size distribution in a powder. Its disadvantage is that the fine structure of the distribution cannot be observed in detail. In this work, a method for obtaining the particle size distribution using radiotracers is presented. The distribution obtained by this method shows the fine structure in greater detail than the results of the classical mesh method; the radiotracer method has higher resolution for this measurement. Chapter 1 gives a brief analysis of the theoretical aspects of the method: first the behaviour of a particle sedimenting in a fluid, then the radioactivity of an activated material and its detection. Chapter 2 describes the method and discusses the experimental problems of applying it to a sample of alumina crystals. Chapter 3 presents the results obtained and the error calculations for those results. Finally, chapter 4 gives conclusions and recommendations for obtaining better results and improving on those obtained in this work. (Author)
International Nuclear Information System (INIS)
Escobedo-Morales, A.; Téllez-Flores, D.; Ruiz Peralta, Ma. de Lourdes; Garcia-Serrano, J.; Herrera-González, Ana M.; Rubio-Rosas, E.; Sánchez-Mora, E.; Olivares Xometl, O.
2015-01-01
A green method for producing pristine porous ZnO nanoparticles with narrow particle size distribution is reported. This method consists in synthesizing ZnO2 nanopowders via a hydrothermal route using cheap and non-toxic reagents, and its subsequent thermal decomposition at low temperature under a non-protective atmosphere (air). The morphology, structural and optical properties of the obtained porous ZnO nanoparticles were studied by means of powder X-ray diffraction, scanning electron microscopy, transmission electron microscopy, Raman spectroscopy, and nitrogen adsorption–desorption measurements. It was found that after thermal decomposition of the ZnO2 powders, pristine ZnO nanoparticles are obtained. These particles are round-shaped with narrow size distribution. A further analysis of the obtained ZnO nanoparticles reveals that they are hierarchical self-assemblies of primary ZnO particles. The agglomeration of these primary particles at the very early stage of the thermal decomposition of ZnO2 powders provides to the resulting ZnO nanoparticles a porous nature. The possibility of using the synthesized porous ZnO nanoparticles as photocatalysts has been evaluated on the degradation of rhodamine B dye. - Highlights: • A green synthesis method for obtaining porous ZnO nanoparticles is reported. • The obtained ZnO nanoparticles have narrow particle size distribution. • This method allows obtaining pristine ZnO nanoparticles avoiding unintentional doping. • A growth mechanism for the obtained porous ZnO nanoparticles is proposed
Energy Technology Data Exchange (ETDEWEB)
Escobedo-Morales, A., E-mail: alejandro.escobedo@correo.buap.mx [Facultad de Ingeniería Química, Benemérita Universidad Autónoma de Puebla, C.P. 72570 Puebla, Pue. (Mexico); Téllez-Flores, D.; Ruiz Peralta, Ma. de Lourdes [Facultad de Ingeniería Química, Benemérita Universidad Autónoma de Puebla, C.P. 72570 Puebla, Pue. (Mexico); Garcia-Serrano, J.; Herrera-González, Ana M. [Centro de Investigaciones en Materiales y Metalurgia, Universidad Autónoma del Estado de Hidalgo, Carretera Pachuca Tulancingo Km 4.5, Pachuca, Hidalgo (Mexico); Rubio-Rosas, E. [Centro Universitario de Vinculación y Transferencia de Tecnología, Benemérita Universidad Autónoma de Puebla, C.P. 72570 Puebla, Pue. (Mexico); Sánchez-Mora, E. [Instituto de Física, Benemérita Universidad Autónoma de Puebla, Apdo. Postal J-48, 72570 Puebla, Pue. (Mexico); Olivares Xometl, O. [Facultad de Ingeniería Química, Benemérita Universidad Autónoma de Puebla, C.P. 72570 Puebla, Pue. (Mexico)
2015-02-01
A green method for producing pristine porous ZnO nanoparticles with narrow particle size distribution is reported. This method consists in synthesizing ZnO2 nanopowders via a hydrothermal route using cheap and non-toxic reagents, and its subsequent thermal decomposition at low temperature under a non-protective atmosphere (air). The morphology, structural and optical properties of the obtained porous ZnO nanoparticles were studied by means of powder X-ray diffraction, scanning electron microscopy, transmission electron microscopy, Raman spectroscopy, and nitrogen adsorption–desorption measurements. It was found that after thermal decomposition of the ZnO2 powders, pristine ZnO nanoparticles are obtained. These particles are round-shaped with narrow size distribution. A further analysis of the obtained ZnO nanoparticles reveals that they are hierarchical self-assemblies of primary ZnO particles. The agglomeration of these primary particles at the very early stage of the thermal decomposition of ZnO2 powders provides to the resulting ZnO nanoparticles a porous nature. The possibility of using the synthesized porous ZnO nanoparticles as photocatalysts has been evaluated on the degradation of rhodamine B dye. - Highlights: • A green synthesis method for obtaining porous ZnO nanoparticles is reported. • The obtained ZnO nanoparticles have narrow particle size distribution. • This method allows obtaining pristine ZnO nanoparticles avoiding unintentional doping. • A growth mechanism for the obtained porous ZnO nanoparticles is proposed.
Methods for Prediction of Temperature Distribution in Flashover Caused by Backdraft Fire
Directory of Open Access Journals (Sweden)
Guowei Zhang
2014-01-01
Full Text Available Accurately predicting the temperature distribution in a flashover fire is a key issue for evacuation and fire-fighting. Many good flashover fire experiments have been conducted, but most were performed in enclosures with fixed openings; research on fire development and temperature distribution in flashover caused by backdraft fire has not received enough attention. In order to study the flashover phenomenon caused by backdraft fire, a full-scale fire experiment was conducted in an abandoned office building. The process of fire development and the temperature distributions in the room and corridor were recorded separately during the experiment. The experiment shows that fire development in an enclosure is closely affected by the room ventilation. Unlike existing temperature curves, which have only one temperature peak, the temperature in a flashover caused by backdraft may have more than one peak value, and there is a linear relationship between the maximum peak temperature and the distance from the fire compartment. Based on the BFD curve and the experimental data, mathematical models are finally proposed to predict the temperature curve in a flashover fire caused by backdraft. These conclusions and the experimental data obtained in this paper could provide a valuable reference for fire simulation, hazard assessment, and fire protection design.
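The BFD curve referred to above has a simple closed form in the fire-engineering literature: a log-normal-shaped pulse around the peak time. A hedged sketch follows; the parameter values are illustrative, not fitted to this experiment:

```python
import math

def bfd_temperature(t, t_ambient, max_rise, t_peak, shape):
    """BFD fire curve: gas temperature at time t (minutes).

    T(t) = T_a + T_m * exp(-(ln t - ln t_m)**2 / s_c), where T_m is
    the peak temperature rise, t_m the time of the peak and s_c a
    shape constant controlling the width of the pulse.
    """
    z = (math.log(t) - math.log(t_peak)) ** 2 / shape
    return t_ambient + max_rise * math.exp(-z)

# illustrative parameters: 20 C ambient, 900 C rise peaking at 30 min
temps = [bfd_temperature(t, 20.0, 900.0, 30.0, 1.6) for t in (10, 30, 90)]
```

A multi-peak backdraft curve of the kind the paper reports could then be modelled as a sum of such pulses with different peak times.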
Energy Technology Data Exchange (ETDEWEB)
Yamai, T [Asia Air Survey Co. Ltd., Tokyo (Japan)
1993-06-30
This paper explains groundwater investigation methods, introducing two methods in particular. Resistivity tomography is a method that measures the potential distribution generated when current is applied around an area to be investigated. Using the measurement results as projection data, the resistivity distribution is reconstructed within the area of the investigation. The paper summarizes the measurement method (measuring equipment, electrode arrangement, etc.), the analytic method (establishment of an initial model, and the alpha-center method for calculating theoretical potentials corresponding to the model), and an application example of the method in which the resistivity change was observed using permeation of salt tracers. The resistivity imaging method (two-dimensional resistivity analysis method) is superior in practicability to the above method under the present state of the art. This method does not differ from the conventional resistivity method in methodology, but improves the analytic accuracy over the conventional electrical investigation method as a result of an analysis combining the simplified two-dimensional analysis method with the alpha-center method. A summary is given of the measuring and analyzing methods. 13 refs., 10 figs.
QACD: A method for the quantitative assessment of compositional distribution in geologic materials
Loocke, M. P.; Lissenberg, J. C. J.; MacLeod, C. J.
2017-12-01
In order to fully understand the petrogenetic history of a rock, it is critical to obtain a thorough characterization of the chemical and textural relationships of its mineral constituents. Element mapping combines the microanalytical techniques that allow for the analysis of major and minor elements at high spatial resolution (e.g., electron microbeam analysis) with 2D mapping of samples in order to provide unprecedented detail regarding the growth histories and compositional distributions of minerals within a sample. We present a method for the acquisition and processing of large area X-ray element maps obtained by energy-dispersive X-ray spectrometry (EDS) to produce a quantitative assessment of compositional distribution (QACD) of mineral populations within geologic materials. By optimizing the conditions at which the EDS X-ray element maps are acquired, we are able to obtain full thin section quantitative element maps for most major elements in relatively short amounts of time. Such maps can be used to not only accurately identify all phases and calculate mineral modes for a sample (e.g., a petrographic thin section), but, critically, enable a complete quantitative assessment of their compositions. The QACD method has been incorporated into a python-based, easy-to-use graphical user interface (GUI) called Quack. The Quack software facilitates the generation of mineral modes, element and molar ratio maps and the quantification of full-sample compositional distributions. The open-source nature of the Quack software provides a versatile platform which can be easily adapted and modified to suit the needs of the user.
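The mineral-mode step of such a workflow reduces to counting pixel fractions on a classified phase map. The sketch below is a simplified stand-in for what the Quack software does internally; the function name and the tiny map are hypothetical:

```python
import numpy as np

def mineral_modes(phase_map):
    """Mineral modes (area fractions) from a classified phase map.

    phase_map: 2-D integer array in which each pixel carries the label
    of the phase identified there. Returns {label: area fraction}.
    """
    labels, counts = np.unique(phase_map, return_counts=True)
    total = phase_map.size
    return {int(label): count / total for label, count in zip(labels, counts)}

# toy 3x4 map with three phases labelled 0, 1, 2
phase_map = np.array([[0, 0, 1, 1],
                      [0, 2, 1, 1],
                      [0, 0, 2, 1]])
modes = mineral_modes(phase_map)
```

On a real thin-section map the same computation runs unchanged; only the array is larger.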
Haroldson, Mark A.; Schwartz, Charles C.; Thompson, Daniel J.; Bjornlie, Daniel D.; Gunther, Kerry A.; Cain, Steven L.; Tyers, Daniel B.; Frey, Kevin L.; Aber, Bryan C.
2014-01-01
The distribution of the Greater Yellowstone Ecosystem grizzly bear (Ursus arctos) population has expanded into areas unoccupied since the early 20th century. Up-to-date information on the area and extent of this distribution is crucial for federal, state, and tribal wildlife and land managers to make informed decisions regarding grizzly bear management. The most recent estimate of grizzly bear distribution (2004) utilized fixed-kernel density estimators to describe distribution. This method was complex and computationally time consuming and excluded observations of unmarked bears. Our objective was to develop a technique to estimate grizzly bear distribution that would allow for the use of all verified grizzly bear location data, as well as provide the simplicity to be updated more frequently. We placed all verified grizzly bear locations from all sources from 1990 to 2004 and 1990 to 2010 onto a 3-km × 3-km grid and used zonal analysis and ordinary kriging to develop a predicted surface of grizzly bear distribution. We compared the area and extent of the 2004 kriging surface with the previous 2004 effort and evaluated changes in grizzly bear distribution from 2004 to 2010. The 2004 kriging surface was 2.4% smaller than the previous fixed-kernel estimate, but more closely represented the data. Grizzly bear distribution increased 38.3% from 2004 to 2010, with most expansion in the northern and southern regions of the range. This technique can be used to provide a current estimate of grizzly bear distribution for management and conservation applications.
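The kriging step described above can be sketched with one-point ordinary kriging. The study does not specify its variogram model, so a linear semivariogram is assumed here purely for illustration:

```python
import numpy as np

def ordinary_krige(xy, z, target, variogram=lambda h: h):
    """One-point ordinary kriging with a given semivariogram model.

    xy: (n, 2) data locations; z: (n,) observed values; target: (2,)
    prediction location. Solves the standard kriging system with a
    Lagrange multiplier so the weights sum to one, then returns the
    weighted estimate. The linear variogram default is an assumption.
    """
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    G = np.ones((n + 1, n + 1))
    G[:n, :n] = variogram(d)          # pairwise semivariances
    G[n, n] = 0.0                     # unbiasedness constraint corner
    rhs = np.ones(n + 1)
    rhs[:n] = variogram(np.linalg.norm(xy - target, axis=1))
    w = np.linalg.solve(G, rhs)[:n]   # kriging weights (sum to 1)
    return float(w @ z)

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
z = np.array([1.0, 2.0, 3.0])
at_datum = ordinary_krige(xy, z, np.array([0.0, 0.0]))  # exact at a datum
```

With zero nugget, kriging is an exact interpolator: predicting at a data location returns the observed value, which is a handy sanity check for any implementation.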
Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng
2017-12-01
There is insufficient research relating to offshore wind farm site selection in China. The current methods for site selection have some defects. First, information loss is caused by two aspects: the implicit assumption that the probability distribution on the interval number is uniform; and ignoring the value of decision makers' (DMs') common opinion on the criteria information evaluation. Secondly, the difference in DMs' utility function has failed to receive attention. An innovative method is proposed in this article to solve these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Secondly, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Thirdly, a two-stage method integrating the weighted operator with stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China proves the effectiveness of this method.
EVALUATING THE NOVEL METHODS ON SPECIES DISTRIBUTION MODELING IN COMPLEX FOREST
Directory of Open Access Journals (Sweden)
C. H. Tu
2012-07-01
Full Text Available The prediction of species distribution has become a focus in ecology. To predict results more effectively and accurately, some novel methods have been proposed recently, such as the support vector machine (SVM) and maximum entropy (MAXENT). However, the high complexity of forests such as those in Taiwan makes the modeling even harder. In this study, we aim to explore which method is more applicable to species distribution modeling in complex forest. Castanopsis carlesii (long-leaf chinkapin, LLC), which grows widely in Taiwan, was chosen as the target species because its seeds are an important food source for animals. We overlaid the tree samples on layers of altitude, slope, aspect, terrain position, and a vegetation index derived from SPOT-5 images, and developed three models, MAXENT, SVM, and decision tree (DT), to predict the potential habitat of LLCs. We evaluated these models using two sets of independent samples from different sites and examined the effect of forest complexity by changing the background sample size (BSZ). In the forest with low complexity (small BSZ), the accuracies of the SVM (kappa = 0.87) and DT (0.86) models were slightly higher than that of MAXENT (0.84). In the more complex situation (large BSZ), MAXENT kept a high kappa value (0.85), whereas the SVM (0.61) and DT (0.57) models dropped significantly because they limited the habitat close to the samples. Therefore, the MAXENT model was more applicable for predicting the species' potential habitat in the complex forest, whereas the SVM and DT models tended to underestimate the potential habitat of LLCs.
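The kappa statistic used to score the models above can be computed directly from predicted and observed presence/absence labels; a minimal sketch with made-up labels:

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa for binary presence/absence predictions:
    observed agreement corrected for chance agreement computed
    from the marginal class frequencies."""
    n = len(y_true)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n  # observed agreement
    pe = sum(
        (sum(t == c for t in y_true) / n) * (sum(p == c for p in y_pred) / n)
        for c in (0, 1)
    )  # chance agreement
    return (po - pe) / (1 - pe)

# hypothetical presence (1) / absence (0) labels at 8 evaluation plots
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
kappa = cohens_kappa(y_true, y_pred)
```

Here 6 of 8 plots agree (po = 0.75) while chance agreement is 0.5, giving kappa = 0.5; the 0.84-0.87 values in the abstract indicate much stronger-than-chance agreement.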
A practical method for in-situ thickness determination using energy distribution of beta particles
International Nuclear Information System (INIS)
Yalcin, S.; Gurler, O.; Gundogdu, O.; Bradley, D.A.
2012-01-01
This paper discusses a method to determine the thickness of an absorber using the energy distribution of beta particles. An empirical relationship was obtained between the absorber thickness and the energy distribution of beta particles transmitted through it. The thickness of a polyethylene radioactive source cover was determined by exploiting this relationship, which has largely been left unexploited, allowing us to determine the in-situ cover thickness of beta sources in a fast, cheap and non-destructive way. - Highlights: ► A practical and in-situ unknown cover thickness determination ► Cheap and readily available compared to other techniques. ► Beta energy spectrum.
A practical method for in-situ thickness determination using energy distribution of beta particles
Energy Technology Data Exchange (ETDEWEB)
Yalcin, S., E-mail: syalcin@kastamonu.edu.tr [Kastamonu University, Education Faculty, 37200 Kastamonu (Turkey); Gurler, O. [Physics Department, Faculty of Arts and Sciences, Uludag University, Gorukle Campus, 16059 Bursa (Turkey); Gundogdu, O. [Kocaeli University, Umuttepe Campus, 41380 Kocaeli (Turkey); Bradley, D.A. [CNRP, Department of Physics, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford GU2 7XH (United Kingdom)
2012-01-15
This paper discusses a method to determine the thickness of an absorber using the energy distribution of beta particles. An empirical relationship was obtained between the absorber thickness and the energy distribution of beta particles transmitted through it. The thickness of a polyethylene radioactive source cover was determined by exploiting this relationship, which has largely been left unexploited, allowing us to determine the in-situ cover thickness of beta sources in a fast, cheap and non-destructive way. - Highlights: ► A practical and in-situ unknown cover thickness determination ► Cheap and readily available compared to other techniques. ► Beta energy spectrum.
Pascual Pañach, Josep
2010-01-01
Leaks are present in all water distribution systems. In this paper a method for leak detection and localisation is presented. It uses pressure measurements and simulation models. The leakage localisation methodology is based on a pressure sensitivity matrix. The sensitivity matrix is normalised and binarised using a common threshold for all nodes, so that a signature matrix is obtained. An optimal pressure-sensor placement methodology is also developed, but it is not used in the real test. To validate this...
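The signature-matching idea can be sketched as follows, assuming a small sensitivity matrix and one common binarization threshold (the matrix values and the threshold are illustrative, not from the thesis):

```python
import numpy as np

def localize_leak(sensitivity, residuals, threshold=0.5):
    """Match a binarized pressure-residual vector against the columns
    of a binarized sensitivity matrix (one column per candidate leak).

    sensitivity[i, j]: pressure change at sensor i for a leak at node j.
    Returns the index of the candidate node whose binary signature
    agrees with the observed residual pattern at the most sensors.
    """
    # normalize each column by its largest entry, then binarize with
    # one common threshold, giving the signature matrix
    norm = sensitivity / np.abs(sensitivity).max(axis=0, keepdims=True)
    signatures = (np.abs(norm) >= threshold).astype(int)
    observed = (np.abs(residuals)
                >= threshold * np.abs(residuals).max()).astype(int)
    matches = (signatures == observed[:, None]).sum(axis=0)
    return int(np.argmax(matches))

# 3 sensors x 3 candidate leak nodes; residuals point at node 1
S = np.array([[1.0, 0.1, 0.1],
              [0.1, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
leak_node = localize_leak(S, np.array([0.1, 0.9, 0.1]))
```

In practice the sensitivity matrix comes from running the hydraulic simulation model once per candidate leak node.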
DEFF Research Database (Denmark)
Grumo, Davide di; Lövei, Gabor L.
2015-01-01
Despite the obligatory post-market environmental monitoring of genetically modified (GM) crops in Europe, there are no available standards on methods. Our aim was to examine the suitability of using changes in carabid body size distribution as a possible monitoring method. The sampling was carried...... informative Lorenz asymmetry coefficients. A total of 6339 carabids belonging to 38 species were captured and identified. The analysis detected a shift in size distribution between months but no important differences in the assemblages in Bt vs. non-Bt maize plots were found. We concluded that an increasing...... body size trend from spring to autumn was evident, and the use of a multilevel analysis was important to correctly interpret the body size distribution. Therefore, the proposed methods are indeed sensitive to subtle changes in the structure of the carabid assemblages, and they have the potential...
Lamb, Brian K; Edburg, Steven L; Ferrara, Thomas W; Howard, Touché; Harrison, Matthew R; Kolb, Charles E; Townsend-Small, Amy; Dyck, Wesley; Possolo, Antonio; Whetstone, James R
2015-04-21
Fugitive losses from natural gas distribution systems are a significant source of anthropogenic methane. Here, we report on a national sampling program to measure methane emissions from 13 urban distribution systems across the U.S. Emission factors were derived from direct measurements at 230 underground pipeline leaks and 229 metering and regulating facilities using stratified random sampling. When these new emission factors are combined with estimates for customer meters, maintenance, and upsets, and current pipeline miles and numbers of facilities, the total estimate is 393 Gg/yr with a 95% upper confidence limit of 854 Gg/yr (0.10% to 0.22% of the methane delivered nationwide). This fraction includes emissions from city gates to the customer meter, but does not include other urban sources or those downstream of customer meters. The upper confidence limit accounts for the skewed distribution of measurements, where a few large emitters accounted for most of the emissions. This emission estimate is 36% to 70% less than the 2011 EPA inventory (based largely on 1990s emission data), and reflects significant upgrades at metering and regulating stations, improvements in leak detection and maintenance activities, as well as potential effects from differences in methodologies between the two studies.
Energy Technology Data Exchange (ETDEWEB)
Senvar, O.; Sennaroglu, B.
2016-07-01
This study examines the Clements' Approach (CA), Box-Cox transformation (BCT), and Johnson transformation (JT) methods for process capability assessments of Weibull-distributed data with different parameters, in order to determine the effects of tail behaviour on process capability, and compares their estimation performances in terms of accuracy and precision. Design/methodology/approach: The process performance index (PPI) Ppu is used for the process capability analysis (PCA) because the comparisons are performed on generated Weibull data without subgroups. Box plots, descriptive statistics, the root-mean-square deviation (RMSD), which is used as a measure of error, and a radar chart are utilized together for evaluating the performances of the methods. In addition, the bias of the estimated values is as important as the efficiency measured by the mean square error. In this regard, the Relative Bias (RB) and the Relative Root Mean Square Error (RRMSE) are also considered. Findings: The results reveal that the performance of a method depends on its capability to fit the tail behavior of the Weibull distribution and on the targeted values of the PPIs. It is observed that the effect of tail behavior is more significant when the process is more capable. Research limitations/implications: Some other methods, such as the Weighted Variance method, which also give good results, were also conducted. However, we later realized that it would be confusing in terms of comparison issues between the methods for consistent interpretations... (Author)
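The Clements-style estimate of Ppu for non-normal data replaces the mean and 3-sigma spread with percentiles of the fitted distribution. A sketch under assumed Weibull shape and scale parameters (not values from the study):

```python
import math

def weibull_ppf(p, shape, scale):
    """Quantile function of the two-parameter Weibull distribution."""
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

def ppu_clements(usl, shape, scale):
    """Clements-style upper process performance index for Weibull data:
    Ppu = (USL - median) / (x_0.99865 - median), substituting the
    distribution's median and 99.865th percentile for the normal-theory
    mean and mean + 3 sigma. A sketch of the idea, not the paper's
    exact procedure.
    """
    median = weibull_ppf(0.5, shape, scale)
    p99865 = weibull_ppf(0.99865, shape, scale)
    return (usl - median) / (p99865 - median)

# hypothetical upper specification limit and Weibull parameters
ppu = ppu_clements(usl=10.0, shape=2.0, scale=3.0)
```

The tail sensitivity the study reports is visible here: for small shape parameters the 99.865th percentile moves far to the right, pulling Ppu down even when the median is unchanged.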
Imaging of current distributions in superconducting thin film structures
International Nuclear Information System (INIS)
Doenitz, D.
2006-01-01
Local analysis plays an important role in many fields of scientific research. However, imaging methods are not very common in the investigation of superconductors. For more than 20 years, Low Temperature Scanning Electron Microscopy (LTSEM) has been successfully used at the University of Tuebingen for the study of condensed matter phenomena, especially superconductivity. In this thesis LTSEM was used for imaging current distributions in different superconducting thin film structures: - Imaging of current distributions in Josephson junctions with a ferromagnetic interlayer, also known as SIFS junctions, showed inhomogeneous current transport over the junctions, which directly led to an improvement in the fabrication process. An investigation of improved samples showed a very homogeneous current distribution without any trace of magnetic domains; either such domains were not present or they were too small for imaging with the LTSEM. - An investigation of Nb/YBCO zigzag Josephson junctions yielded important information on signal formation in the LTSEM, both for Josephson junctions in the short and in the long limit. Using a reference junction, our signal formation model could be verified, thus confirming earlier results on short zigzag junctions. These results, which could be reproduced in this work, support the theory of d-wave symmetry in the superconducting order parameter of YBCO. Furthermore, investigations of the quasiparticle tunneling in the zigzag junctions showed the existence of Andreev bound states, which is another indication of the d-wave symmetry in YBCO. - The LTSEM study of Hot Electron Bolometers (HEB) allowed the first successful imaging of a stable 'hot spot', a self-heating region in HEB structures. Moreover, the electron beam was used to induce an - otherwise unstable - hot spot. Both investigations yielded information on the homogeneity of the samples. - An entirely new method of imaging the current distribution in superconducting interference devices
Methods for obtaining true particle size distributions from cross section measurements
Energy Technology Data Exchange (ETDEWEB)
Lord, Kristina Alyse [Iowa State Univ., Ames, IA (United States)
2013-01-01
Sectioning methods are frequently used to measure grain sizes in materials. These methods do not provide accurate grain sizes for two reasons. First, the sizes of features observed on random sections are always smaller than the true sizes of solid spherical shaped objects, as noted by Wicksell [1]. This is the case because the section very rarely passes through the center of solid spherical shaped objects randomly dispersed throughout a material. The sizes of features observed on random sections are inversely related to the distance of the center of the solid object from the section [1]. Second, on a plane section through the solid material, larger sized features are more frequently observed than smaller ones due to the larger probability for a section to come into contact with the larger sized portion of the spheres than the smaller sized portion. As a result, it is necessary to find a method that takes into account these reasons for inaccurate particle size measurements, while providing a correction factor for accurately determining true particle size measurements. I present a method for deducing true grain size distributions from those determined from specimen cross sections, either by measurement of equivalent grain diameters or linear intercepts.
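Wicksell's observation that section radii understate true sphere radii is easy to demonstrate by Monte Carlo (an illustrative sketch of the effect, not the correction method of the thesis):

```python
import math
import random

def section_radii(true_radius, n=100_000, seed=0):
    """Monte Carlo illustration of Wicksell's effect.

    A random plane at distance d from a sphere's centre (d uniform in
    [0, R]) cuts a circle of radius sqrt(R**2 - d**2), so the observed
    section radii systematically understate the true radius R.
    """
    rng = random.Random(seed)
    return [
        math.sqrt(true_radius ** 2 - rng.uniform(0.0, true_radius) ** 2)
        for _ in range(n)
    ]

radii = section_radii(1.0)
mean_observed = sum(radii) / len(radii)   # close to pi/4 of the true radius
```

The analytic value of the mean section radius for a sphere of radius 1 is pi/4, about 0.785, so even for identical spheres a naive section measurement is biased low by roughly 21 percent; a real unfolding method must also undo the over-sampling of large grains mentioned in the abstract.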
Reliability of twin-dependent triple junction distributions measured from a section plane
International Nuclear Information System (INIS)
Hardy, Graden B.; Field, David P.
2016-01-01
Numerous studies indicate polycrystalline triple junctions are independent microstructural features with distinct properties from their constituent grain boundaries. Despite the influence of triple junctions on material properties, it is impractical to characterize triple junctions on a large scale using current three-dimensional methods. This work demonstrates the ability to characterize twin-dependent triple junction distributions from a section plane by adopting a grain boundary plane stereology. The technique is validated through simulated distributions and simulated electron back-scatter diffraction (EBSD) data. Measures of validation and convergence are adopted to demonstrate the quantitative reliability of the technique as well as the convergence behavior of twin-dependent triple junction distributions. This technique expands the characterization power of EBSD and prepares the way for characterizing general triple junction distributions from a section plane. - Graphical abstract: The distribution of planes forming a triple junction with a given twin boundary is shown partially in the stereographic projections below, for a given projection. The plot on the left shows the ideal/measured distribution, and the plot on the right shows the distribution obtained from the stereological method presented here.
Nucleon momentum distribution in deuteron and other nuclei within the light-front dynamics method
International Nuclear Information System (INIS)
Antonov, A.N.; Gaidarov, M.K.; Ivanov, M.V.; Kadrev, D.N.; Krumova, G.Z.; Hodgson, P.E.; Geramb, H.V. von
2002-01-01
The relativistic light-front dynamics (LFD) method has been shown to give a correct description of the most recent data for the deuteron monopole and quadrupole charge form factors obtained at the Jefferson Laboratory for elastic electron-deuteron scattering for six values of the squared momentum transfer between 0.66 and 1.7 (GeV/c)². The good agreement with the data is in contrast with the results of the existing nonrelativistic approaches. In this work we first make a complementary test of the LFD, applying it to calculate another important characteristic, the nucleon momentum distribution n(q) of the deuteron, using six invariant functions f_i (i=1,...,6) instead of two (S and D waves) in the nonrelativistic case. The comparison with the y-scaling data shows the decisive role of the function f_5, which at q≥500 MeV/c exceeds all other f functions (as well as the S and D waves) for the correct description of n(q) of the deuteron in the high-momentum region. Comparison with other calculations using S and D waves corresponding to various nucleon-nucleon potentials is made. Second, using clear indications that the high-momentum components of n(q) in heavier nuclei are related to those in the deuteron, we develop an approach within the natural orbital representation to calculate n(q) in (A,Z) nuclei on the basis of the deuteron momentum distribution. As examples, n(q) in ⁴He, ¹²C, and ⁵⁶Fe are calculated and good agreement with the y-scaling data is obtained.
VAN method of short-term earthquake prediction shows promise
Uyeda, Seiya
Although optimism prevailed in the 1970s, the present consensus on earthquake prediction appears to be quite pessimistic. However, short-term prediction based on geoelectric potential monitoring has stood the test of time in Greece for more than a decade [Varotsos and Kulhanek, 1993; Lighthill, 1996]. The method used is called the VAN method. The geoelectric potential changes constantly due to causes such as magnetotelluric effects, lightning, rainfall, leakage from manmade sources, and electrochemical instabilities of electrodes. All of this noise must be eliminated before preseismic signals are identified, if they exist at all. The VAN group apparently accomplished this task for the first time. They installed multiple short (100-200 m) dipoles with different lengths in both north-south and east-west directions and long (1-10 km) dipoles in appropriate orientations at their stations (one of their mega-stations, Ioannina, for example, now has 137 dipoles in operation) and found that practically all of the noise could be eliminated by applying a set of criteria to the data.
Method of preparing mercury with an arbitrary isotopic distribution
Grossman, M.W.; George, W.A.
1986-12-16
This invention provides a process for preparing mercury with a predetermined, arbitrary isotopic distribution. In one embodiment, different isotopic types of Hg₂Cl₂, corresponding to the predetermined isotopic distribution of Hg desired, are placed in an electrolyte solution of HCl and H₂O. The resulting mercurous ions are then electrolytically plated onto a cathode wire, producing mercury containing the predetermined isotopic distribution. In a similar fashion, Hg with a predetermined isotopic distribution is obtained from different isotopic types of HgO. In this embodiment, the HgO is dissolved in an electrolytic solution of glacial acetic acid and H₂O. The isotopically specific Hg is then electrolytically plated onto a cathode and then recovered. 1 fig.
Directory of Open Access Journals (Sweden)
Qing He
2018-01-01
Full Text Available In this paper, the particle size distribution is reconstructed using finite moments based on a converted spline-based method, in which the size of the linear system of equations to be solved is reduced from 4m × 4m to (m + 3) × (m + 3) for (m + 1) nodes by using cubic splines, compared to the original method. The results are first verified against a reference solution. Then, coupling with the Taylor-series expansion moment method, the evolution of the particle size distribution undergoing Brownian coagulation and its asymptotic behavior are investigated.
Midland reactor pressure vessel flaw distribution
International Nuclear Information System (INIS)
Foulds, J.R.; Kennedy, E.L.; Rosinski, S.T.
1993-12-01
The results of laboratory nondestructive examination (NDE), and destructive cross-sectioning of selected weldment sections of the Midland reactor pressure vessel were analyzed per a previously developed methodology in order to develop a flaw distribution. The flaw distributions developed from the NDE results obtained by two different ultrasonic test (UT) inspections (Electric Power Research Institute NDE Center and Pacific Northwest Laboratories) were not statistically significantly different. However, the distribution developed from the NDE Center's (destructive) cross-sectioning-based data was found to differ significantly from those obtained through the UT inspections. A fracture mechanics-based comparison of the flaw distributions showed that the cross-sectioning-based data, conservatively interpreted (all defects considered as flaws), gave a significantly lower vessel failure probability when compared with the failure probability values obtained using the UT-based distributions. Given that the cross-sectioning data were reportedly biased toward larger, more significant-appearing (by UT) indications, it is concluded that the nondestructive examinations produced definitively conservative results. In addition to the Midland vessel inspection-related analyses, a set of twenty-seven numerical simulations, designed to provide a preliminary quantitative assessment of the accuracy of the flaw distribution method used here, were conducted. The calculations showed that, in more than half the cases, the analysis produced reasonably accurate predictions.
International Nuclear Information System (INIS)
Rauhut, J.
1982-01-01
Established methods are presented by which the life distributions of machine elements can be determined on the basis of laboratory experiments and operational observations. Practical observations are given special attention, as the results estimated on the basis of conventional methods have not been accurate enough. As an introduction, the stochastic life concept, the general method of determining life distributions, various sampling methods, and the Weibull distribution are explained. Further, possible life testing schedules and maximum-likelihood estimates are discussed for the complete sample case and for censored sampling without replacement in laboratory experiments. Finally, censored sampling with replacement in laboratory experiments is discussed; it is shown how suitable parameter estimates can be obtained for given life distributions by means of the maximum-likelihood method. (orig./RW) [de
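The censored-sampling estimation described above can be illustrated with a small sketch (not the original program): a maximum-likelihood fit of Weibull life parameters from a laboratory test stopped at a fixed time, so that still-surviving units are right-censored. All data, parameter values and starting points below are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_nll(params, failures, censored):
    """Negative log-likelihood for Weibull lifetimes with right-censoring."""
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf
    # log f(t) for observed failure times
    ll = np.sum(np.log(k / lam) + (k - 1) * np.log(failures / lam)
                - (failures / lam) ** k)
    # log S(c) for units still running when the test stopped
    ll += np.sum(-(censored / lam) ** k)
    return -ll

rng = np.random.default_rng(0)
true_k, true_lam = 2.0, 100.0                 # invented "true" parameters
lifetimes = true_lam * rng.weibull(true_k, size=500)
cutoff = 120.0                                # test stopped at this time
failures = lifetimes[lifetimes <= cutoff]
censored = np.full((lifetimes > cutoff).sum(), cutoff)

res = minimize(weibull_nll, x0=[1.0, np.mean(failures)],
               args=(failures, censored), method="Nelder-Mead")
k_hat, lam_hat = res.x
print(k_hat, lam_hat)   # close to the true values 2.0 and 100.0
```

With enough observed failures, the censored contribution keeps the scale estimate from being biased low by the early stopping of the test.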
Bayesian analysis of general failure data from an ageing distribution: advances in numerical methods
International Nuclear Information System (INIS)
Procaccia, H.; Villain, B.; Clarotti, C.A.
1996-01-01
EDF and ENEA carried out a joint research program to develop the numerical methods and computer codes needed for Bayesian analysis of component lives in the case of ageing. Early results of this study were presented at ESREL'94. Since then, the following further steps have been taken: input data have been generalized to the case where observed lives are censored both on the right and on the left; allowable life distributions are Weibull and gamma, whose parameters are both unknown and can be statistically dependent; allowable priors are histograms relative to different parametrizations of the life distribution of concern; first- and second-order moments of the posterior distributions can be computed. In particular, the covariance gives important information about the degree of statistical dependence between the parameters of interest. An application of the code to the appearance of stress corrosion cracking in a tube of the PWR steam generator system is presented. (authors)
Data-adaptive Robust Optimization Method for the Economic Dispatch of Active Distribution Networks
DEFF Research Database (Denmark)
Zhang, Yipu; Ai, Xiaomeng; Fang, Jiakun
2018-01-01
Due to the restricted mathematical description of the uncertainty set, the current two-stage robust optimization is usually over-conservative, which has drawn concerns from power system operators. This paper proposes a novel data-adaptive robust optimization method for the economic dispatch of active distribution networks with renewables. The scenario-generation method and the two-stage robust optimization are combined in the proposed method. To reduce the conservativeness, a few extreme scenarios selected from the historical data are used to replace the conventional uncertainty set. The proposed extreme-scenario selection algorithm takes the correlations into account and can adapt to different historical data sets. A theoretical proof is given that the constraints will be satisfied under all possible scenarios if they hold in the selected extreme scenarios, which…
A Capacity Dimensioning Method for Broadband Distribution Networks
DEFF Research Database (Denmark)
Shawky, Ahmed; Pedersen, Jens Myrup; Bergheim, Hans
2010-01-01
This paper presents capacity dimensioning for a hypothetical distribution network in the Danish municipality of Aalborg. The number of customers in need of a better service level and the continuous increase in network traffic make it harder for ISPs to deliver high levels of service to their customers. The paper starts by defining three levels of service, together with traffic demands based on research on traffic distribution and generation in networks. Network dimensions are then calculated, and the results from the dimensioning are used to compare different network topologies…
Matrix logistics indicators assessment of distributed transport hub
Directory of Open Access Journals (Sweden)
Igor Arefyev
2014-06-01
Full Text Available Background: The paper is devoted to the substantiation and assessment of the distributed transport hub. The paper gives an example of the technique used to form an array of logistical factors as variables that determine this condition. Experience in organizing multimodal transport has shown that the "bottleneck" of transport logistics lies at the points of cargo handling: ports, terminals, freight stations and warehouses. At the core of the solution of these problems is the problem of estimating the variables that determine the multi-purpose hubs. The aim is to develop a method of forming a system of logistical multiplying coefficients that determine the role of each of the types in the technological process of distributed multi-purpose hubs. Methods: The assessment model for the formation of distributed transport units can be based on formal methods for predicting the behavior of complex engineering systems. One approach to the solution of the problem is then the matrix method of technological factors. Results and conclusions: The proposed methodology for the selection and validation of logistic coefficients has practical importance in the development of models for assessing the condition and behavior of distributed transport.
Bufalo, Gennaro; Ambrosone, Luigi
2016-01-14
A method for studying the kinetics of thermal degradation of complex compounds is suggested. Although the method is applicable to any matrix whose grain size can be measured, herein we focus our investigation on thermogravimetric analysis, under a nitrogen atmosphere, of ground soft wheat and ground maize. The thermogravimetric curves reveal two distinct jumps of mass loss, corresponding to the volatilization region, in the temperature range 298-433 K, and the decomposition region, which goes from 450 to 1073 K. Thermal degradation is schematized as a reaction in the solid state whose kinetics is analyzed separately in each of the two regions. By means of a sieving analysis, different size fractions of the material are separated and studied. A quasi-Newton fitting algorithm is used to obtain the grain size distribution as the best fit to experimental data. The individual fractions are thermogravimetrically analyzed to derive the functional relationship between the activation energy of the degradation reactions and the particle size. This functional relationship turns out to be crucial for evaluating the moments of the activation energy distribution, which is unknown, in terms of the distribution calculated by sieve analysis. From the knowledge of the moments one can reconstruct the reaction conversion. The method is applied first to the volatilization region, then to the decomposition region. Comparison with the experimental data reveals that the method reproduces the experimental conversion with an accuracy of 5-10% in the volatilization region and of 3-5% in the decomposition region.
Advanced Distribution Network Modelling with Distributed Energy Resources
O'Connell, Alison
The addition of new distributed energy resources, such as electric vehicles, photovoltaics, and storage, to low voltage distribution networks means that these networks will undergo major changes in the future. Traditionally, distribution systems would have been a passive part of the wider power system, delivering electricity to the customer and not needing much control or management. However, the introduction of these new technologies may cause unforeseen issues for distribution networks, due to the fact that they were not considered when the networks were originally designed. This thesis examines different types of technologies that may begin to emerge on distribution systems, as well as the resulting challenges that they may impose. Three-phase models of distribution networks are developed and subsequently utilised as test cases. Various management strategies are devised for the purposes of controlling distributed resources from a distribution network perspective. The aim of the management strategies is to mitigate those issues that distributed resources may cause, while also keeping customers' preferences in mind. A rolling optimisation formulation is proposed as an operational tool which can manage distributed resources, while also accounting for the uncertainties that these resources may present. Network sensitivities for a particular feeder are extracted from a three-phase load flow methodology and incorporated into an optimisation. Electric vehicles are the focus of the work, although the method could be applied to other types of resources. The aim is to minimise the cost of electric vehicle charging over a 24-hour time horizon by controlling the charge rates and timings of the vehicles. The results demonstrate the advantage that controlled EV charging can have over an uncontrolled case, as well as the benefits provided by the rolling formulation and updated inputs in terms of cost and energy delivered to customers. Building upon the rolling optimisation, a
Load forecasting method considering temperature effect for distribution network
Directory of Open Access Journals (Sweden)
Meng Xiao Fang
2016-01-01
Full Text Available To improve the accuracy of load forecasting, the temperature factor is introduced into load forecasting in this paper. The paper analyzes the characteristics of power load variation and studies how the load varies with temperature. Based on linear regression analysis, a mathematical model of load forecasting that accounts for the temperature effect is presented, and the steps of load forecasting are given. Using MATLAB, the temperature regression coefficient was calculated. Using the load forecasting model, full-day load forecasting and time-sharing load forecasting were carried out. Comparison and analysis of the forecast errors showed that the error of the time-sharing load forecasting method is small. The forecasting method is an effective way to improve the accuracy of load forecasting.
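As a sketch of the regression step (the paper used MATLAB; this is an illustrative Python equivalent, and all data values are invented), the temperature regression coefficient and a load forecast can be obtained from an ordinary least-squares fit:

```python
import numpy as np

# Hypothetical daily observations: load rises with temperature (cooling demand).
temperature = np.array([20., 22., 25., 28., 30., 33., 35.])        # deg C
load        = np.array([310., 330., 365., 400., 420., 455., 478.]) # MW

# Least-squares fit of the linear model  load = a*T + b
a, b = np.polyfit(temperature, load, deg=1)

def forecast(temp_forecast):
    """Forecast load from a temperature forecast using the fitted model."""
    return a * np.asarray(temp_forecast) + b

print(a)                    # temperature regression coefficient, MW per deg C
print(forecast([26., 32.])) # forecasts for two expected temperatures
```

In practice the fit would be done separately per time slot (the "time-sharing" variant), so that each hour gets its own temperature coefficient.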
Directory of Open Access Journals (Sweden)
Jinhong Noh
2016-04-01
Full Text Available Obstacle avoidance methods require knowledge of the distance between a mobile robot and obstacles in the environment. However, in stochastic environments, distance determination is difficult because objects have position uncertainty. The purpose of this paper is to determine the distance between a robot and obstacles represented by probability distributions. Distance determination for obstacle avoidance should consider position uncertainty, computational cost and collision probability. The proposed method considers all of these conditions, unlike conventional methods. It determines the obstacle region using the collision probability density threshold. Furthermore, it defines a minimum distance function to the boundary of the obstacle region with a Lagrange multiplier method. Finally, it computes the distance numerically. Simulations were executed in order to compare the performance of the distance determination methods. Our method demonstrated a faster and more accurate performance than conventional methods. It may help overcome position uncertainty issues pertaining to obstacle avoidance, such as low accuracy sensors, environments with poor visibility or unpredictable obstacle motion.
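The approach above can be sketched for a hypothetical 2D Gaussian obstacle: the collision probability density threshold defines an elliptical obstacle region, and the minimum distance from the robot to its boundary is computed numerically. This sketch uses a generic constrained optimiser in place of the paper's explicit Lagrange-multiplier system; all positions and threshold values are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical obstacle whose position uncertainty is a 2D Gaussian.
mu = np.array([5.0, 0.0])
cov = np.array([[1.0, 0.0], [0.0, 1.0]])
cov_inv = np.linalg.inv(cov)
p_th = 0.02   # collision probability density threshold

# The region {x : pdf(x) >= p_th} is the ellipse (x-mu)^T cov^-1 (x-mu) <= c.
c = -2.0 * np.log(p_th * 2.0 * np.pi * np.sqrt(np.linalg.det(cov)))

robot = np.array([0.0, 0.0])

# Minimise the Euclidean distance to the boundary of the obstacle region
# (equality-constrained problem, solved here with SLSQP).
res = minimize(lambda x: np.linalg.norm(x - robot),
               x0=np.array([3.0, 0.5]),
               constraints={"type": "eq",
                            "fun": lambda x: (x - mu) @ cov_inv @ (x - mu) - c},
               method="SLSQP")
print(res.x, res.fun)   # closest boundary point and the distance to it
```

For this isotropic covariance the region is a disk of radius sqrt(c) around mu, so the answer can be checked in closed form; the numerical route also covers anisotropic covariances, where no such shortcut exists.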
The compaction of a random distribution of metal cylinders by the discrete element method
DEFF Research Database (Denmark)
Redanz, Pia; Fleck, N. A.
2001-01-01
The cold compaction of a 2D random distribution of metal circular cylinders has been investigated numerically by the discrete element method. Each cylindrical particle is located by a node at its centre and the plastic indentation of the contacts between neighbouring particles is represented by non-linear springs. The initial packing of the particles is generated by the ballistic deposition method. Salient micromechanical features of closed die and isostatic powder compaction are elucidated for both frictionless and sticking contacts. It is found that substantial rearrangement of frictionless particles…
Directory of Open Access Journals (Sweden)
William B Monahan
Full Text Available The ability of species to respond to novel future climates is determined in part by their physiological capacity to tolerate climate change and the degree to which they have reached and continue to maintain distributional equilibrium with the environment. While broad-scale correlative climatic measurements of a species' niche are often described as estimating the fundamental niche, it is unclear how well these occupied portions actually approximate the fundamental niche per se, versus the fundamental niche that exists in environmental space, and what fitness values bounding the niche are necessary to maintain distributional equilibrium. Here, we investigate these questions by comparing physiological and correlative estimates of the thermal niche in the introduced North American house sparrow (Passer domesticus). Our results indicate that occupied portions of the fundamental niche derived from temperature correlations closely approximate the centroid of the existing fundamental niche calculated on a fitness threshold of 50% population mortality. Using these niche measures, a 75-year time series analysis (1930-2004) further shows that: (i) existing fundamental and occupied niche centroids did not undergo directional change, (ii) interannual changes in the two niche centroids were correlated, (iii) temperatures in North America moved through niche space in a net centripetal fashion, and consequently, (iv) most areas throughout the range of the house sparrow tracked the existing fundamental niche centroid with respect to at least one temperature gradient. Following introduction to a new continent, the house sparrow rapidly tracked its thermal niche and established continent-wide distributional equilibrium with respect to major temperature gradients. These dynamics were mediated in large part by the species' broad thermal physiological tolerances, high dispersal potential, competitive advantage in human-dominated landscapes, and climatically induced…
Optimization of hot water transport and distribution networks by analytical method: OPTAL program
International Nuclear Information System (INIS)
Barreau, Alain; Caizergues, Robert; Moret-Bailly, Jean
1977-06-01
This report presents optimization studies of hot water transport and distribution networks by minimizing operating cost. Analytical optimization is used: Lagrange's method of undetermined multipliers. The optimum diameter of each pipe is calculated for minimum network operating cost. The characteristics of the computer program used for the calculations, OPTAL, are given in this report. An example network with 52 branches and 27 customers is calculated and described. Results are discussed [fr
International Nuclear Information System (INIS)
Soussaline, F.; Bidaut, L.; Raynaud, C.; Le Coq, G.
1983-06-01
An analytical solution to the SPECT reconstruction problem, in which the actual attenuation effect can be included, was developed using a regularizing iterative method (RIM). The potential of this approach in quantitative brain studies when using a tracer for cerebrovascular disorders is now under evaluation. Mathematical simulations for a distributed activity in the brain surrounded by the skull, and physical phantom studies, were performed using a rotating-camera-based SPECT system, allowing the calibration of the system and the evaluation of the adapted method to be used. In the simulation studies, the contrast obtained along a profile was less than 5%, the standard deviation 8%, and the quantitative accuracy 13%, for a uniform emission distribution of mean = 100 per pixel and a double attenuation coefficient of μ = 0.115 cm⁻¹ and 0.5 cm⁻¹. Clinical data obtained after injection of ¹²³I (AMPI) were reconstructed using the RIM without and with cerebrovascular diseases or lesion defects. Contour-finding techniques were used for the delineation of the brain and the skull, and measured attenuation coefficients were assumed within these two regions. Using volumes of interest selected on homogeneous regions on a hemisphere and reported symmetrically, the statistical uncertainty for 300 K events in the tomogram was found to be 12%; the index of symmetry was 4% for a normal distribution. These results suggest that quantitative SPECT reconstruction of brain distributions is feasible and that, combined with an adapted tracer and an adequate model, physiopathological parameters could be extracted.
International Nuclear Information System (INIS)
Odano, Ikuo; Takahashi, Naoya; Ohtaki, Hiroh; Noguchi, Eikichi; Hatano, Masayoshi; Yamasaki, Yoshihiro; Nishihara, Mamiko; Ohkubo, Masaki; Yokoi, Takashi.
1993-01-01
We developed a new graphic method using N-isopropyl-p-[¹²³I]iodoamphetamine (IMP) and SPECT of the brain: a graph on which all three parameters, cerebral blood flow, distribution volume (Vd) and delayed-to-early count ratio (Delayed/Early ratio), can be evaluated simultaneously. The kinetics of ¹²³I-IMP in the brain was analyzed by a 2-compartment model, and a standard input function was prepared by averaging the time-activity curves of ¹²³I-IMP in arterial blood of 6 patients with small cerebral infarction etc., including 2 normal controls. Applying this method to the differential diagnosis between Parkinson's disease and progressive supranuclear palsy, we were able to differentiate the two at a glance, because the distribution volume of the frontal lobe significantly decreased in Parkinson's disease (mean±SD; 26±6 ml/g). This method was clinically useful. We think that the distribution volume of ¹²³I-IMP may reflect its retention mechanism in the brain, and that the values are related to amines, especially to dopamine receptors and their metabolism. (author)
This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...
International Nuclear Information System (INIS)
Pacilio, Massimiliano; Basile, Chiara; Amato, Ernesto; Lanconelli, Nico; Torres, Leonel Alberto; Perez, Marco Coca; Gil, Alex Vergara; Botta, Francesca; Ferrari, Mahila; Cremonesi, Marta; Diaz, Nestor Cornejo; Fernández, María; Lassmann, Michael
2015-01-01
This study compares 3D dose distributions obtained with voxel S values (VSVs) for soft tissue, calculated by several methods at their current state of the art, varying the degree of image blurring. The methods were: 1) convolution of the Dose Point Kernel (DPK) for water, using a scaling factor method; 2) an analytical model (AM), fitting the deposited energy as a function of the source-target distance; 3) a rescaling method (RSM) based on a set of high-resolution VSVs for each isotope; 4) local energy deposition (LED). VSVs calculated by direct Monte Carlo simulations were assumed as reference. Dose distributions were calculated considering spheroidal clusters of various sizes (251, 1237 and 4139 voxels of 3 mm size), uniformly filled with ¹³¹I, ¹⁷⁷Lu, ¹⁸⁸Re or ⁹⁰Y. The activity distributions were blurred with Gaussian filters of various widths (6, 8 and 12 mm). Moreover, 3D dosimetry was performed for 10 treatments with ⁹⁰Y derivatives. Cumulative Dose Volume Histograms (cDVHs) were compared, studying the differences in D95%, D50% or Dmax (ΔD95%, ΔD50% and ΔDmax) and dose profiles. For unblurred spheroidal clusters, ΔD95%, ΔD50% and ΔDmax were mostly within a few percent, slightly higher for ¹⁷⁷Lu with DPK (8%) and RSM (12%) and considerably higher for LED (ΔD95% up to 59%). Increasing the blurring, differences decreased and LED also yielded very similar results, but D95% and D50% underestimations between 30-60% and 15-50%, respectively (with respect to 3D dosimetry with unblurred distributions), were evidenced. Also for clinical images (affected by blurring as well), cDVH differences for most methods were within a few percent, except for slightly higher differences with LED, and were almost systematic for dose profiles with DPK (−1.2%), AM (−3.0%) and RSM (4.5%), whereas they showed an oscillating trend with LED. The major concern for 3D dosimetry on clinical SPECT images is more strongly represented by image blurring than by…
International Nuclear Information System (INIS)
Mil'shtejn, R.S.
1988-01-01
Analysis of dose fields in a heterogeneous tissue-equivalent medium has shown that dose distributions have radial symmetry and can be described by a curve of axial distribution with renormalization of the maximum ionization depth. A method for calculating the dose field in a heterogeneous medium using the principle of radial symmetry is presented.
Energy Technology Data Exchange (ETDEWEB)
Nordgaard, Dag Eirik
2010-04-15
During the last 10 to 15 years electricity distribution companies throughout the world have been ever more focused on asset management as the guiding principle for their activities. Within asset management, risk is a key issue for distribution companies, together with handling of cost and performance. There is now an increased awareness of the need to include risk analyses into the companies' decision making processes. Much of the work on risk in electricity distribution systems has focused on aspects of reliability. This is understandable, since it is surely an important feature of the product delivered by the electricity distribution infrastructure, and it is high on the agenda for regulatory authorities in many countries. However, electricity distribution companies are also concerned with other risks relevant for their decision making. This typically involves intangible risks, such as safety, environmental impacts and company reputation. In contrast to the numerous methodologies developed for reliability risk analysis, there are relatively few applications of structured analyses to support decisions concerning intangible risks, even though they represent an important motivation for decisions taken in electricity distribution companies. The overall objective of this PhD work has been to explore risk analysis methods that can be used to improve and support decision making in electricity distribution system asset management, with an emphasis on the analysis of intangible risks. The main contributions of this thesis can be summarised as: An exploration and testing of quantitative risk analysis (QRA) methods to support decisions concerning intangible risks; The development of a procedure for using life curve models to provide input to QRA models; The development of a framework for risk-informed decision making where QRA are used to analyse selected problems; In addition, the results contribute to clarify the basic concepts of risk, and highlight challenges
A Method for Citizen Scientists to Catalogue Worldwide Chlorociboria spp. Distribution
Directory of Open Access Journals (Sweden)
Sarath M. Vega Gutierrez
2018-03-01
Full Text Available The blue-green pigment known as xylindein that is produced by species in the Chlorociboria genus is under heavy investigation for its potential in textile dyes, wood dyes, and solar cells. Xylindein has not yet been synthesized, and while its production can be stimulated under laboratory conditions, it is also plentiful in downed, decayed wood in forested lands. Unfortunately, little is known about the wood preference and forest type preference for this genus, especially outside New Zealand. To map the genus would be a massive undertaking, and herein a method by which citizen scientists could contribute to the distribution map of Chlorociboria species is proposed. The initial trial of this method found untrained participants successfully identified Chlorociboria stained wood in each instance, regardless of forest type. This simple, easy identification and classification system should be well received by citizen-scientists and is the first step towards a global understanding of how xylindein production might be managed for across various ecosystems.
Analysis of the distribution of X-ray characteristic production using the Monte Carlo methods
International Nuclear Information System (INIS)
Del Giorgio, Marcelo; Brizuela, Horacio; Riveros, J.A.
1987-01-01
The Monte Carlo method has been applied for the simulation of electron trajectories in a bulk sample, and therefore for the distribution of signals produced in an electron microprobe. Results for the function φ(ρz) are compared with experimental data. Some conclusions are drawn with respect to the parameters involved in the gaussian model. (Author) [es
Kilany, N M
2016-01-01
The Lomax distribution (Pareto Type-II) is widely applicable in reliability and life-testing problems in engineering, as well as in survival analysis, as an alternative distribution. In this paper, the Weighted Lomax distribution is proposed and studied. The density function and its behavior, moments, hazard and survival functions, mean residual life and reversed failure rate, extreme value distributions and order statistics are derived and studied. The parameters of this distribution are estimated by the method of moments and the maximum likelihood method, and the observed information matrix is derived. Moreover, simulation schemes are derived. Finally, an application of the model to a real data set is presented and compared with some other well-known distributions.
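For the plain (unweighted) Lomax baseline, maximum-likelihood estimation can be sketched with SciPy, which ships Pareto Type-II as `scipy.stats.lomax`. The weighted variant proposed in the paper is not available in SciPy, and the shape and scale values below are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated lifetimes from a plain Lomax (Pareto Type-II) distribution;
# the "true" shape and scale are arbitrary illustrative choices.
shape_true, scale_true = 3.0, 2.0
data = stats.lomax.rvs(shape_true, scale=scale_true, size=2000,
                       random_state=rng)

# Maximum-likelihood fit with the location parameter fixed at zero,
# as is usual for lifetime data.
shape_hat, loc_hat, scale_hat = stats.lomax.fit(data, floc=0)
print(shape_hat, scale_hat)   # estimates near the simulated values
```

The shape and scale estimates are strongly correlated for heavy-tailed samples, so large sample sizes (here 2000) are needed before the estimates settle near the simulated values.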
International Nuclear Information System (INIS)
Kosarev, E.L.
1980-01-01
A new method to reconstruct the spatial star distribution in globular clusters is presented. The method gives both an estimate of the unknown spatial distribution and the probable reconstruction error. This error has a statistical origin and depends only on the number of stars in a cluster. The method is applied to reconstruct the spatial density of 441 flare stars in the Pleiades. The spatial density has a maximum in the centre of the cluster of about 1.6-2.5 pc⁻³ and, with increasing distance from the centre, falls smoothly to zero, approximately following a Gaussian law with a scale parameter of 3.5 pc.
Ray tracing the Wigner distribution function for optical simulations
Mout, B.M.; Wick, Michael; Bociort, F.; Petschulat, Joerg; Urbach, Paul
2018-01-01
We study a simulation method that uses the Wigner distribution function to incorporate wave optical effects in an established framework based on geometrical optics, i.e., a ray tracing engine. We use the method to calculate point spread functions and show that it is accurate for paraxial systems.
On spectral distribution of high dimensional covariation matrices
DEFF Research Database (Denmark)
Heinrich, Claudio; Podolskij, Mark
In this paper we present the asymptotic theory for spectral distributions of high dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time varying matrix-valued integrands. We observe n equidistant high frequency data points of the underlying Brownian diffusion and we assume that N/n → c in (0,∞). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on the method of moments and applications of graph theory.
Spatial Distribution Analysis of Scrub Typhus in Korea
Jin, Hong Sung; Chu, Chaeshin; Han, Dong Yeob
2013-01-01
Objective: This study analyzes the spatial distribution of scrub typhus in Korea. Methods: A spatial distribution of Orientia tsutsugamushi occurrence using a geographic information system (GIS) is presented, and analyzed by means of spatial clustering and correlations. Results: The provinces of Gangwon-do and Gyeongsangbuk-do show a low incidence throughout the year. Some districts have almost identical environmental conditions of scrub typhus incidence. The land use change of districts does...
Binns, Lewis A.; Valachis, Dimitris; Anderson, Sean; Gough, David W.; Nicholson, David; Greenway, Phil
2002-07-01
Previously, we have developed techniques for Simultaneous Localization and Map Building based on the augmented state Kalman filter. Here we report the results of experiments conducted over multiple vehicles each equipped with a laser range finder for sensing the external environment, and a laser tracking system to provide highly accurate ground truth. The goal is simultaneously to build a map of an unknown environment and to use that map to navigate a vehicle that otherwise would have no way of knowing its location, and to distribute this process over several vehicles. We have constructed an on-line, distributed implementation to demonstrate the principle. In this paper we describe the system architecture, the nature of the experimental set up, and the results obtained. These are compared with the estimated ground truth. We show that distributed SLAM has a clear advantage in the sense that it offers a potential super-linear speed-up over single vehicle SLAM. In particular, we explore the time taken to achieve a given quality of map, and consider the repeatability and accuracy of the method. Finally, we discuss some practical implementation issues.
A methodology for more efficient tail area sampling with discrete probability distribution
International Nuclear Information System (INIS)
Park, Sang Ryeol; Lee, Byung Ho; Kim, Tae Woon
1988-01-01
The Monte Carlo method is commonly used to observe an overall distribution and to determine lower or upper bound values in statistical approaches when direct analytical calculation is unavailable. However, the method is inefficient when the tail area of a distribution is of interest. A new method, entitled 'Two-Step Tail Area Sampling', is developed; it assumes a discrete probability distribution and samples only the tail area without distorting the overall distribution. The method uses a two-step sampling procedure: first, sampling is done at points separated by large intervals; second, sampling is done at points separated by small intervals around check points determined in the first step. Comparison with the Monte Carlo method shows that the results obtained from the new method converge to the analytic value faster than those of the Monte Carlo method for the same number of calculations. The new method is applied to the DNBR (Departure from Nucleate Boiling Ratio) prediction problem in the design of a pressurized light water nuclear reactor
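The two-step idea can be sketched for a toy discrete distribution. The binomial pmf, the 1e-15 negligibility cutoff, and the coarse step size below are illustrative assumptions, not the paper's choices:

```python
import math

def pmf(k, n=1000, p=0.01):
    """Binomial(n, p) pmf via log-factorials; a stand-in discrete distribution."""
    logc = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    return math.exp(logc + k * math.log(p) + (n - k) * math.log(1 - p))

def tail_prob_two_step(t, n=1000, coarse_step=20):
    # Step 1: coarse pass -- probe the pmf at widely spaced points beyond t
    # to bracket the region where probability mass is still non-negligible.
    k = t
    while k <= n and pmf(k, n) > 1e-15:
        k += coarse_step
    upper = min(k, n)
    # Step 2: fine pass -- evaluate every point of the bracketed tail region.
    return sum(pmf(j, n) for j in range(t, upper + 1))

p_tail = tail_prob_two_step(20)   # P(X >= 20) for X ~ Binomial(1000, 0.01)
```

The coarse pass spends only a handful of evaluations locating the tail; the fine pass then concentrates all effort there, which is the efficiency gain the abstract describes.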
DEFF Research Database (Denmark)
Dimitrov, Nikolay Krasimirov
2016-01-01
We have tested the performance of statistical extrapolation methods in predicting the extreme response of a multi-megawatt wind turbine generator. We have applied the peaks-over-threshold, block maxima and average conditional exceedance rates (ACER) methods for peaks extraction, combined with four...... levels, based on the assumption that the response tail is asymptotically Gumbel distributed. Example analyses were carried out, aimed at comparing the different methods, analysing the statistical uncertainties and identifying the factors, which are critical to the accuracy and reliability...
A method for ion distribution function evaluation using escaping neutral atom kinetic energy samples
International Nuclear Information System (INIS)
Goncharov, P.R.; Ozaki, T.; Veshchev, E.A.; Sudo, S.
2008-01-01
A reliable method to evaluate the probability density function of escaping atom kinetic energies is required for the analysis of neutral particle diagnostic data used to study the fast ion distribution function in fusion plasmas. Digital processing of solid state detector signals is proposed in this paper as an improvement over the simple histogram approach. The probability density function for kinetic energies of neutral particles escaping from the plasma has been derived in a general form, taking into account the plasma ion energy distribution, electron capture and loss rates, superposition along the diagnostic sight line, and the magnetic surface geometry. A pseudorandom number generator has been realized that enables a sample of escaping neutral particle energies to be simulated for given plasma parameters and experimental conditions. An empirical probability density estimation code has been developed and tested to reconstruct the probability density function from simulated samples, assuming Maxwellian and classical slowing-down plasma ion energy distribution shapes for different temperatures and different slowing-down times. The application of the developed probability density estimation code to the analysis of experimental data obtained by the novel Angular-Resolved Multi-Sightline Neutral Particle Analyzer has been studied to obtain the suprathermal particle distributions. The optimum bandwidth parameter selection algorithm has also been realized. (author)
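An empirical density estimate from simulated energy samples can be sketched with a Gaussian kernel. The Maxwellian test case mirrors the abstract, while the kernel choice and the Silverman rule-of-thumb bandwidth are assumptions (the paper selects the optimum bandwidth by its own algorithm):

```python
import math
import random
import statistics

rng = random.Random(42)
T = 1.0  # ion temperature in arbitrary energy units (an assumed test case)

# A Maxwellian energy spectrum f(E) ~ sqrt(E) exp(-E/T) is a Gamma(3/2, T) law,
# so gammavariate plays the role of the simulated escaping-particle sample.
samples = [rng.gammavariate(1.5, T) for _ in range(20000)]

def kde(samples, x, h):
    """Gaussian-kernel probability density estimate at energy x, bandwidth h."""
    n = len(samples)
    s = sum(math.exp(-0.5 * ((x - e) / h) ** 2) for e in samples)
    return s / (n * h * math.sqrt(2.0 * math.pi))

# Silverman's rule-of-thumb bandwidth (an assumption, not the paper's algorithm)
h = 1.06 * statistics.stdev(samples) * len(samples) ** (-0.2)

# Estimate at E = 1 and compare to the exact Maxwellian density there:
# f(E) = 2 sqrt(E/pi) exp(-E/T) / T^{3/2}
f_hat = kde(samples, 1.0, h)
f_true = 2.0 * math.sqrt(1.0 / math.pi) * math.exp(-1.0 / T) / T ** 1.5
```

With 20,000 samples the kernel estimate at E = 1 lands close to the exact density, without the binning artifacts of a histogram.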
FINITE VOLUME METHOD FOR SOLVING THREE-DIMENSIONAL ELECTRIC FIELD DISTRIBUTION
Directory of Open Access Journals (Sweden)
Paţiuc V.I.
2011-04-01
Full Text Available The paper examines a new approach to the finite volume method, used to calculate the electric field in a spatially homogeneous three-dimensional environment. The Dirichlet problem is formulated with a computational grid built on a space partition known as a Delaunay triangulation, with the use of Voronoi cells. A numerical algorithm is proposed for calculating the potential and the electric field strength in the space surrounding a cylinder placed in air. An algorithm and software were developed for the case in which a potential is assigned on the inner surface of the cylinder and zero potential is assigned on the outer surface and the bottom of the cylinder. Results are presented for the calculated distributions of the potential and the electric field strength.
International Nuclear Information System (INIS)
Miyamoto, H.; Kubo, M.; Katori, T.
1981-01-01
Experimental investigation by 3-D photoelasticity has been carried out to measure the stress distribution of partial penetration welded nozzles attached to the bottom head of a pressure vessel. A 3-D photoelastic stress freezing method was chosen as the most effective means of observation of the stress distribution in the vicinity of the nozzle/wall weld. The experimental model was a 1:20 scale spherical bottom head. Both an axisymmetric nozzle and an asymmetric nozzle were investigated. Epoxy resin, which is a thermosetting plastic, was used as the model material. The oblique effect was examined by comparing the stress distribution of the asymmetric nozzle with that of the axisymmetric nozzle. Furthermore, the experimental results were compared with the analytical results using 3-D finite element method (FEM). The stress distributions obtained from the frozen fringe pattern of the 3-D photoelastic model were in good agreement with those by 3-D FEM. (orig.)
Parallel Harmony Search Based Distributed Energy Resource Optimization
Energy Technology Data Exchange (ETDEWEB)
Ceylan, Oguzhan [ORNL; Liu, Guodong [ORNL; Tomsovic, Kevin [University of Tennessee, Knoxville (UTK)
2015-01-01
This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three-phase unbalanced electrical distribution systems and to maximize the active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on the voltage profile as photovoltaic (PV) output or electric vehicle (EV) charging changes throughout a day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that, by using parallel computing techniques, heuristic methods can be used as an alternative optimization tool in electrical power distribution system operation.
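A minimal serial harmony search can be sketched as follows; the parallelization, the unbalanced three-phase power flow, and the real objective are beyond a few lines, so the bus-voltage objective and every parameter value below are illustrative assumptions:

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=0):
    """Minimal harmony search: keep a memory of candidate solutions and
    improvise new ones by recombination, pitch adjustment, or random notes."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                 # take a note from memory
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:              # pitch adjustment
                    v += rng.uniform(-1.0, 1.0) * 0.05 * (hi - lo)
            else:                                   # brand-new random note
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        s = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                        # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Toy stand-in for the voltage-deviation objective: keep three bus voltages
# (in p.u.) as close to 1.0 as possible within operating limits.
obj = lambda v: sum((x - 1.0) ** 2 for x in v)
sol, val = harmony_search(obj, [(0.9, 1.1)] * 3)
```

In the parallel variant described in the abstract, many such improvisation loops would run concurrently over a shared or partitioned harmony memory.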
DEFF Research Database (Denmark)
Sun, Qiuye; Han, Renke; Zhang, Huaguang
2015-01-01
With the bidirectional power flow provided by the Energy Internet, various methods are promoted to improve and increase the energy utilization between Energy Internet and Main-Grid. This paper proposes a novel distributed coordinated controller combined with a multi-agent-based consensus algorithm...... which is applied to distributed generators in the Energy Internet. Then, the decomposed tasks, models, and information flow of the proposed method are analyzed. The proposed coordinated controller installed between the Energy Internet and the Main-Grid keeps voltage angles and amplitudes consensus while...... providing accurate power-sharing and minimizing circulating currents. Finally, the Energy Internet can be integrated into the Main-Grid seamlessly if necessary. Hence the Energy Internet can be operated as a spinning reserve system. Simulation results are provided to show the effectiveness of the proposed...
International Nuclear Information System (INIS)
Jiang Zheng-Xian; Cui Bao-Tong; Lou Xu-Yang; Zhuang Bo
2017-01-01
In this paper, the control problem of distributed parameter systems is investigated by using wireless sensor and actuator networks with the observer-based method. Firstly, a centralized observer which makes use of the measurement information provided by the fixed sensors is designed to estimate the distributed parameter systems. The mobile agents, each of which is affixed with a controller and an actuator, can provide the observer-based control for the target systems. By using Lyapunov stability arguments, the stability for the estimation error system and distributed parameter control system is proved, meanwhile a guidance scheme for each mobile actuator is provided to improve the control performance. A numerical example is finally used to demonstrate the effectiveness and the advantages of the proposed approaches. (paper)
Evaluation of the differential energy distribution of systems of non-thermally activated molecules
International Nuclear Information System (INIS)
Rogers, E.B.
1986-01-01
A non-thermally activated molecule may undergo pressure-dependent deactivation or energy-dependent decomposition. It should be possible to use the pressure-dependent stabilization/decomposition yields to determine the energy distribution in non-thermal systems. The numerical technique of regularization has been applied to this chemical problem to evaluate this distribution. The resulting method has been tested with a number of simulated distributions and kinetic models. Application was then made to several real chemical systems to determine the energy distribution resulting from the primary excitation process. Testing showed the method to be quite effective in reproducing input distributions from simulated data in all test cases. The effect of experimental error proved to be negligible when the error-filled data were first smoothed with a parabolic spline. This method has been applied to three different hot atom activated systems. Application to ¹⁸F-for-F substituted CH₃CF₃ generated a broad distribution extending from 62 to 318 kcal/mol, with a median energy of 138 kcal/mol. The shape of this distribution (and those from the other applications) indicated the involvement of two mechanisms in the excitation process. Analysis of the T-for-H substituted CH₃CH₂F system showed a narrower distribution (56-218 kcal/mol) with a median energy of 79.8 kcal/mol. The distribution of the T-for-H substituted CH₃CH₂Cl system, extending from 54.5 to 199 kcal/mol, was quite similar. It was concluded that this method is a valid approach to evaluating differential energy distributions in non-thermal systems, specifically those activated by hot atom substitution
Distribution view: a tool to write and simulate distributions
Coelho, José; Branco, Fernando; Oliveira, Teresa
2006-01-01
In our work we present a tool to write and simulate distributions. The tool allows writing mathematical expressions that can contain not only functions and variables but also statistical distributions, including mixtures. Each time an expression is evaluated, a value is generated for every inner distribution according to that distribution and used in determining the expression's value. The inversion method can be used in this language, allowing all distributions to be generated...
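The inversion method mentioned above can be sketched for a two-component exponential mixture; the components and weights are illustrative assumptions:

```python
import math
import random

def exp_by_inversion(lam, u):
    """Inversion method: solve F(x) = u for x, where F(x) = 1 - exp(-lam*x)."""
    return -math.log(1.0 - u) / lam

def mixture_draw(rng):
    """50/50 mixture of Exp(1) and Exp(5); each component is drawn by inversion."""
    lam = 1.0 if rng.random() < 0.5 else 5.0
    return exp_by_inversion(lam, rng.random())

rng = random.Random(0)
draws = [mixture_draw(rng) for _ in range(100_000)]
mean = sum(draws) / len(draws)   # theoretical mean: 0.5*(1/1) + 0.5*(1/5) = 0.6
```

Because every inner distribution is sampled each time the expression is evaluated, a mixture expression like this one is handled naturally: the component choice and the component draw are both just evaluations.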
Measurement of void fraction distribution in two-phase flow by impedance CT with neural network
International Nuclear Information System (INIS)
Hayashi, Hideaki; Sumida, Isao; Sakai, Sinji; Wakai, Kazunori
1996-01-01
This paper describes a new method for measuring void distribution using impedance CT with a hierarchical neural network. The present method consists of four processes. First, output electric currents are calculated by simulating various distributions of void fraction; the relationship between the void fraction distribution and the electric currents is called 'teaching data'. Second, the neural network learns the teaching data by the back-propagation method. Third, output electric currents are measured for an actual two-phase flow. Finally, the distribution of void fraction is calculated by the trained neural network using the measured electric currents. In this paper, measurement and learning parameters are adjusted, and experimental results obtained using the impedance CT method are compared with data obtained by the impedance probe method. The results show that our method is effective for the measurement of void fraction distribution. (author)
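A toy version of the four-process scheme can be sketched with a hypothetical one-detector forward model in place of the impedance simulation and a very small network; all sizes, rates, and the forward model itself are illustrative assumptions:

```python
import math
import random

rng = random.Random(1)

# Assumed toy forward model standing in for the impedance simulation:
# a void fraction alpha in [0, 1] yields a measured current I = 1 - 0.8*alpha.
def forward(alpha):
    return 1.0 - 0.8 * alpha

# Processes 1-2: build "teaching data" and train a one-hidden-layer network
# by back-propagation (plain SGD) to learn the inverse mapping I -> alpha.
teach = [(forward(i / 99.0), i / 99.0) for i in range(100)]

H = 5
w1 = [rng.uniform(-0.5, 0.5) for _ in range(H)]   # input -> hidden weights
b1 = [0.0] * H
w2 = [rng.uniform(-0.5, 0.5) for _ in range(H)]   # hidden -> output weights
b2 = 0.0

def predict(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

lr = 0.1
for _ in range(300):
    for x, t in teach:
        y, h = predict(x)
        err = y - t                                # dLoss/dy for 0.5*(y-t)^2
        for j in range(H):
            gh = err * w2[j] * (1.0 - h[j] ** 2)   # gradient at hidden unit j
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * gh * x
            b1[j] -= lr * gh
        b2 -= lr * err

# Processes 3-4: "measure" a current and invert it with the trained network.
alpha_hat, _ = predict(forward(0.5))
```

The real method maps many electrode currents to a spatial void-fraction field, but the workflow is the same: simulate, train, measure, invert.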
May, Megan K.; Kevorkian, Richard T.; Steen, Andrew D.
2013-01-01
There is no universally accepted method to quantify bacteria and archaea in seawater and marine sediments, and different methods have produced conflicting results with the same samples. To identify best practices, we compiled data from 65 studies, plus our own measurements, in which bacteria and archaea were quantified with fluorescent in situ hybridization (FISH), catalyzed reporter deposition FISH (CARD-FISH), polyribonucleotide FISH, or quantitative PCR (qPCR). To estimate efficiency, we defined “yield” to be the sum of bacteria and archaea counted by these techniques divided by the total number of cells. In seawater, the yield was high (median, 71%) and was similar for FISH, CARD-FISH, and polyribonucleotide FISH. In sediments, only measurements by CARD-FISH in which archaeal cells were permeabilized with proteinase K showed high yields (median, 84%). Therefore, the majority of cells in both environments appear to be alive, since they contain intact ribosomes. In sediments, the sum of bacterial and archaeal 16S rRNA gene qPCR counts was not closely related to cell counts, even after accounting for variations in copy numbers per genome. However, qPCR measurements were precise relative to other qPCR measurements made on the same samples. qPCR is therefore a reliable relative quantification method. Inconsistent results for the relative abundance of bacteria versus archaea in deep subsurface sediments were resolved by the removal of CARD-FISH measurements in which lysozyme was used to permeabilize archaeal cells and qPCR measurements which used ARCH516 as an archaeal primer or TaqMan probe. Data from best-practice methods showed that archaea and bacteria decreased as the depth in seawater and marine sediments increased, although archaea decreased more slowly. PMID:24096423
Directory of Open Access Journals (Sweden)
Ozlem Senvar
2016-08-01
Full Text Available Purpose: This study examines the Clements' Approach (CA), Box-Cox transformation (BCT), and Johnson transformation (JT) methods for process capability assessments with Weibull-distributed data of different parameters, to establish the effects of tail behaviour on process capability and to compare the estimation performance of the methods in terms of accuracy and precision. Design/methodology/approach: The process performance index (PPI) Ppu is used for process capability analysis (PCA) because the comparisons are performed on Weibull data generated without subgroups. Box plots, descriptive statistics, the root-mean-square deviation (RMSD), used as a measure of error, and a radar chart are utilized together for evaluating the performance of the methods. In addition, the bias of the estimated values is as important as the efficiency measured by the mean square error; in this regard, the Relative Bias (RB) and the Relative Root Mean Square Error (RRMSE) are also considered. Findings: The results reveal that the performance of a method depends on its capability to fit the tail behavior of the Weibull distribution and on the targeted values of the PPIs. The effect of tail behavior is observed to be more significant when the process is more capable. Research limitations/implications: Some other methods, such as the Weighted Variance method, which also gives good results, were also examined; however, we later realized that including them would be confusing for consistent interpretation of the comparisons between methods. Practical implications: The Weibull distribution covers a wide class of non-normal processes due to its capability to yield a variety of distinct curves based on its parameters. Weibull distributions are known to have significantly different tail behaviors, which greatly affect process capability. In quality and reliability applications, they are widely used for the analysis of failure data in order to understand how
Johnson, Nathan C.; Haig, Susan M.; Mosher, Stephen M.
2018-01-01
We described past and present distribution and abundance data to evaluate the status of the endangered Mariana Swiftlet (Aerodramus bartschi), a little-known echolocating cave swiftlet that currently inhabits 3 of 5 formerly occupied islands in the Mariana archipelago. We then evaluated the survey methods used to attain these estimates via fieldwork carried out on an introduced population of Mariana Swiftlets on the island of O'ahu, Hawaiian Islands, to derive better methods for future surveys. We estimate the range-wide population of Mariana Swiftlets to be 5,704 individuals occurring in 15 caves on Saipan, Aguiguan, and Guam in the Marianas; and 142 individuals occupying one tunnel on O'ahu. We further confirm that swiftlets have been extirpated from Rota and Tinian and have declined on Aguiguan. Swiftlets have remained relatively stable on Guam and Saipan in recent years. Our assessment of survey methods used for Mariana Swiftlets suggests overestimates depending on the technique used. We suggest the use of night vision technology and other changes to more accurately reflect their distribution, abundance, and status.
The Burr X Pareto Distribution: Properties, Applications and VaR Estimation
Directory of Open Access Journals (Sweden)
Mustafa Ç. Korkmaz
2017-12-01
Full Text Available In this paper, a new three-parameter Pareto distribution is introduced and studied. We discuss various mathematical and statistical properties of the new model. Some estimation methods of the model parameters are performed. Moreover, the peaks-over-threshold method is used to estimate Value-at-Risk (VaR by means of the proposed distribution. We compare the distribution with a few other models to show its versatility in modelling data with heavy tails. VaR estimation with the Burr X Pareto distribution is presented using time series data, and the new model could be considered as an alternative VaR model against the generalized Pareto model for financial institutions.
Monitoring system and methods for a distributed and recoverable digital control system
Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)
2010-01-01
A monitoring system and methods are provided for a distributed and recoverable digital control system. The monitoring system generally comprises two independent monitoring planes within the control system. The first monitoring plane is internal to the computing units in the control system, and the second monitoring plane is external to the computing units. The internal first monitoring plane includes two in-line monitors. The first internal monitor is a self-checking, lock-step-processing monitor with integrated rapid recovery capability. The second internal monitor includes one or more reasonableness monitors, which compare actual effector position with commanded effector position. The external second monitoring plane includes two monitors. The first external monitor includes a pre-recovery computing monitor, and the second external monitor includes a post-recovery computing monitor. Various methods for implementing the monitoring functions are also disclosed.
COVAL, Compound Probability Distribution for Function of Probability Distribution
International Nuclear Information System (INIS)
Astolfi, M.; Elbaz, J.
1979-01-01
1 - Nature of the physical problem solved: Computation of the probability distribution of a function of variables, given the probability distribution of the variables themselves. 'COVAL' has been applied to reliability analysis of a structure subject to random loads. 2 - Method of solution: Numerical transformation of probability distributions
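The problem COVAL solves, computing the distribution of a function of random variables, can be sketched by direct Monte Carlo propagation for a hypothetical load/resistance example. Note the hedges: the normal distributions and their parameters are assumptions, and COVAL itself uses numerical transformation of the distributions rather than sampling:

```python
import random
import statistics

rng = random.Random(7)

# Hypothetical reliability toy: load S ~ N(100, 10), resistance R ~ N(150, 15).
# The margin M = R - S is a function of the two random variables, and its
# distribution is obtained here by brute-force propagation.
n = 200_000
margins = [rng.gauss(150, 15) - rng.gauss(100, 10) for _ in range(n)]

mean_m = statistics.fmean(margins)            # theory: 50
sd_m = statistics.stdev(margins)              # theory: sqrt(15^2 + 10^2) ~ 18.03
p_fail = sum(m < 0 for m in margins) / n      # failure probability P(M < 0)
```

A transformation-based code like COVAL computes the same output distribution deterministically, which matters precisely in reliability work where the failure probability sits far in the tail.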
Measuring the plutonium distribution in fuel elements by the gamma scanning method
International Nuclear Information System (INIS)
Gorobets, A.K.; Leshchenko, Yu.I.; Semenov, A.L.
1982-01-01
An on-line system designed for measuring Pu distribution along the length of fresh fuel elements with vibrocompacted UO₂-PuO₂ fuel rods by the γ-scanning method is described. An algorithm for processing the measurement results is considered, together with the procedure for determining the calibration parameters needed for valid signal separation by means of a two-channel analyzer and for evaluation of the self-absorption effect. The scanning unit of the device consists of two NaI(Tl) detectors simultaneously detecting γ-radiation from opposite sides of the measured fuel rod section. A cesium source with Eγ = 660 keV is used for fuel scanning. On the basis of the results obtained in studies of BOR-60 experimental fuel elements with 400 mm long fuel rods, the conclusion is made that scanning a fuel element for 20 min (scanning step 4 mm, measuring time 10 s per step) makes it possible to determine the Pu distribution with an error of less than ±4% at a confidence probability of 0.68
Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C
2002-03-01
Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least squares (OLS) regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and in the retransformation of predicted values. This study compares the advantages and disadvantages of different methods of estimating regression-based cost functions using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by the WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model, and a generalized linear model (GLM) with a log link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, the variance estimator of White and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by comparison of the R² and the root mean squared error (RMSE). The RMSE of the log-transformed OLS model was computed with three different methods of bias correction. The 95% confidence intervals for the differences between the RMSEs were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half of the sample. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs. Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM shows a significant
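The retransformation problem with log-transformed cost models can be illustrated on synthetic log-normal "cost" data (an assumption; this is not the Leipzig sample). Duan's smearing estimator, used here, is one standard bias correction of the kind the abstract alludes to:

```python
import math
import random
import statistics

rng = random.Random(3)

# Synthetic "annual cost" data: log(cost) ~ Normal(2, 1),
# so the true mean cost is exp(2 + 0.5) = exp(2.5).
n = 50_000
log_costs = [2.0 + rng.gauss(0.0, 1.0) for _ in range(n)]

# A covariate-free log-OLS fit reduces to the mean of log(cost).
mu_hat = statistics.fmean(log_costs)
naive = math.exp(mu_hat)          # naive retransformation -> exp(2), biased low

# Duan's smearing estimator: rescale by the mean of exponentiated residuals.
residuals = [v - mu_hat for v in log_costs]
smear = statistics.fmean(math.exp(r) for r in residuals)
corrected = naive * smear         # -> close to the true mean exp(2.5)

true_mean = math.exp(2.5)
```

The naive back-transform estimates the median-scale quantity exp(E[log cost]) rather than the mean cost, which is why log-OLS predictions need a correction while a log-link GLM, which models E[cost] directly, does not.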
Distribution of photon strength in nuclei by a method of two-step cascades
International Nuclear Information System (INIS)
Becvar, F.; Cejnar, P.; Kopecky, J.
1990-01-01
The applicability of sum-coincidence measurements of two-step cascade γ-ray spectra to the determination of photon strength functions at intermediate γ-ray energies (3 or 4 MeV) is discussed. An experiment based on thermal neutron capture in Nd was undertaken at the Brookhaven National Laboratory High Flux Beam Reactor to test this model. To understand the role of various uncertainties in similar experiments a series of model calculations was performed. We present an analysis of our experimental data which demonstrates the high sensitivity of the method to E1 and M1 photon strength functions. Our experimental data are in sharp contradiction to those expected from an E1 photon strength distributed according to the classical Lorentzian form with an energy invariant damping width. An alternative distribution of Kadmenskij et al., which violates Brink's Hypothesis, is strongly preferred. 13 refs., 5 figs
Holographic monitoring of spatial distributions of singlet oxygen in water
Belashov, A. V.; Bel'tyukova, D. M.; Vasyutinskii, O. S.; Petrov, N. V.; Semenova, I. V.; Chupov, A. S.
2014-12-01
A method for monitoring spatial distributions of singlet oxygen in biological media has been developed. Singlet oxygen was generated using Radachlorin® photosensitizer, while thermal disturbances caused by nonradiative deactivation of singlet oxygen were detected by the holographic interferometry technique. Processing of interferograms yields temperature maps that characterize the deactivation process and show the distribution of singlet oxygen species.
Prasetyo, S. Y. J.; Hartomo, K. D.
2018-01-01
The Spatial Plan of the Province of Central Java 2009-2029 identifies that most regencies or cities in Central Java Province are very vulnerable to landslide disaster. This is also supported by data from the 2013 Indonesian Disaster Risk Index (in Indonesian, Indeks Risiko Bencana Indonesia), which suggests that some areas in Central Java Province exhibit a high risk of natural disasters. This research aims to develop an application architecture and analysis methodology in GIS to predict and map rainfall distribution. We propose our GIS architectural application of “Multiplatform Architectural Spatiotemporal” and the data analysis methods of “Triple Exponential Smoothing” (TES) and “Spatial Interpolation” as our significant scientific contribution. The research consists of two parts, namely attribute data prediction using the TES method and spatial data prediction using the Inverse Distance Weighting (IDW) method. We conduct our research in 19 subdistricts of the Boyolali Regency, Central Java Province, Indonesia. Our main research data are the biweekly rainfall data for 2000-2016 from the Climatology, Meteorology, and Geophysics Agency (in Indonesian, Badan Meteorologi, Klimatologi, dan Geofisika) of Central Java Province and the Laboratory of Plant Disease Observations Region V Surakarta, Central Java. The application architecture and analytical methodology of “Multiplatform Architectural Spatiotemporal” and the spatial data analysis methodology of “Triple Exponential Smoothing” and “Spatial Interpolation” can be developed as a GIS application framework of rainfall distribution for various applied fields. The comparison between the TES and IDW methods shows that, relative to time series prediction, spatial interpolation yields values closer to the actual data, because the computed values are the rainfall data of the nearest locations, i.e., the neighbours of the sample values. However, the IDW's main weakness is that some
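The IDW step can be sketched as follows, assuming hypothetical station coordinates and rainfall values; the power parameter of 2 is the common default, not necessarily the study's setting:

```python
def idw(x, y, points, power=2):
    """Inverse Distance Weighting: estimate the value at (x, y) from
    sampled (xi, yi, value) stations. Nearer stations get larger weights."""
    num = den = 0.0
    for xi, yi, v in points:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return v            # exactly at a station: return its value
        w = d2 ** (-power / 2)  # weight = 1 / distance^power
        num += w * v
        den += w
    return num / den

# Hypothetical biweekly rainfall (mm) at four stations on a 10x10 km grid
stations = [(0, 0, 120.0), (10, 0, 80.0), (0, 10, 100.0), (10, 10, 60.0)]
r = idw(5, 5, stations)   # equidistant from all four stations
```

At the centre point all four weights are equal, so the estimate is the plain average of the station values; this nearest-neighbour weighting is also the source of the weakness the abstract begins to describe, since estimates can never leave the range of the sampled values.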
Method of measuring the current density distribution and emittance of pulsed electron beams
International Nuclear Information System (INIS)
Schilling, H.B.
1979-07-01
This method of current density measurement employs an array of many Faraday cups, each cup being terminated by an integrating capacitor. The voltages of the capacitors are subsequently displayed on a scope, thus giving the complete current density distribution with one shot. In the case of emittance measurements, a moveable small-diameter aperture is inserted at some distance in front of the cup array. Typical results with a two-cathode, two-energy electron source are presented. (orig.)
Generation of Optimal Basis Functions for Reconstruction of Power Distribution
Energy Technology Data Exchange (ETDEWEB)
Park, Moonghu [Sejong Univ., Seoul (Korea, Republic of)
2014-05-15
This study proposes GMDH (the group method of data handling) to find not only the best functional form but also the optimal parameters that describe the power distribution most accurately. A total of 1,060 cases of axially one-dimensional, 20-node core power distributions are generated by a 3-dimensional core analysis code, covering BOL-to-EOL core burnup histories, to validate the method. Axially, five-point box powers at the in-core detectors are taken as measurements. The axial power shapes reconstructed using the GMDH method are compared to the reference power shapes. The results show that the proposed method is very robust and accurate compared with the spline fitting method. It is shown that the GMDH analysis can give optimal basis functions for core power shape reconstruction. The in-core measurements are the 5 detector snapshots, and the 20-node power distribution is successfully reconstructed. The effectiveness of the method is demonstrated by comparing the results with spline fitting for BOL, saddle, and top-skewed power shapes.
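The reconstruction step can be illustrated with a fixed sine basis and ordinary least squares. This is a simplified stand-in for the paper's GMDH-derived basis, with an assumed normalized core height of 1, assumed detector elevations, and an assumed true shape:

```python
import math

def fit_basis(z_meas, p_meas, basis):
    """Least-squares coefficients c such that p(z) ~ sum_k c[k] * basis[k](z)."""
    m, k = len(z_meas), len(basis)
    A = [[basis[j](z) for j in range(k)] for z in z_meas]
    # Normal equations AtA c = Atb, solved by Gaussian elimination with pivoting.
    AtA = [[sum(A[i][r] * A[i][c] for i in range(m)) for c in range(k)] for r in range(k)]
    Atb = [sum(A[i][r] * p_meas[i] for i in range(m)) for r in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(AtA[r][col]))
        AtA[col], AtA[piv] = AtA[piv], AtA[col]
        Atb[col], Atb[piv] = Atb[piv], Atb[col]
        for r in range(col + 1, k):
            f = AtA[r][col] / AtA[col][col]
            for c in range(col, k):
                AtA[r][c] -= f * AtA[col][c]
            Atb[r] -= f * Atb[col]
    c = [0.0] * k
    for r in range(k - 1, -1, -1):
        c[r] = (Atb[r] - sum(AtA[r][j] * c[j] for j in range(r + 1, k))) / AtA[r][r]
    return c

# Assumed basis: the first three half-range sine modes over core height H = 1.
basis = [lambda z, n=n: math.sin(n * math.pi * z) for n in (1, 2, 3)]
true = lambda z: math.sin(math.pi * z) + 0.3 * math.sin(2 * math.pi * z)  # skewed shape
z5 = [0.1, 0.3, 0.5, 0.7, 0.9]          # five in-core detector elevations
coef = fit_basis(z5, [true(z) for z in z5], basis)

# Reconstruct the 20-node axial power distribution from the fitted coefficients.
nodes = [(i + 0.5) / 20 for i in range(20)]
recon = [sum(c * b(z) for c, b in zip(coef, basis)) for z in nodes]
```

With a well-chosen basis, five detector readings determine the three coefficients exactly; the point of GMDH in the paper is to discover such a basis from data rather than assume it.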
Method and apparatus for anti-islanding protection of distributed generations
Ye, Zhihong; John, Vinod; Wang, Changyong; Garces, Luis Jose; Zhou, Rui; Li, Lei; Walling, Reigh Allen; Premerlani, William James; Sanza, Peter Claudius; Liu, Yan; Dame, Mark Edward
2006-03-21
An apparatus for anti-islanding protection of a distributed generation with respect to a feeder connected to an electrical grid is disclosed. The apparatus includes a sensor adapted to generate a voltage signal representative of an output voltage and/or a current signal representative of an output current at the distributed generation, and a controller responsive to the signals from the sensor. The controller is productive of a control signal directed to the distributed generation to drive an operating characteristic of the distributed generation out of a nominal range in response to the electrical grid being disconnected from the feeder.
Directory of Open Access Journals (Sweden)
EMIROGLU, S.
2017-11-01
Full Text Available This paper proposes a distributed reactive power control based approach to deploy a Volt/VAr optimization (VVO)/Conservation Voltage Reduction (CVR) algorithm in a distribution network with distributed generation (DG) units and distribution static synchronous compensators (D-STATCOMs). A three-phase VVO/CVR problem is formulated and the reactive power references of the D-STATCOMs and DGs are determined in a distributed way by decomposing the VVO/CVR problem into voltage and reactive power control. The main purpose is to determine the coordination between the voltage regulator (VR) and the reactive power sources (capacitors, D-STATCOMs, and DGs) based on VVO/CVR. The study shows that the reactive power injection capability of DG units may play an important role in VVO/CVR. In addition, it is shown that the coordination of the VR and the reactive power sources not only saves more energy and power but also reduces the power losses. Moreover, the proposed VVO/CVR algorithm reduces the computational burden and finds solutions quickly. To illustrate the effectiveness of the proposed method, VVO/CVR is performed on the IEEE 13-node test feeder considering unbalanced loading and line configurations. The tests take practical voltage-dependent load modeling and different customer types into consideration to improve accuracy.
International Nuclear Information System (INIS)
Arvieu, R.
The assumptions and principles of the spectral distribution method are reviewed. The object of the method is to deduce information on nuclear spectra by constructing a frequency function that has the same first few moments as the exact frequency function, these moments being exactly calculated. The method is applied to subspaces containing a large number of quasiparticles [fr]
International Nuclear Information System (INIS)
Sanda, T.; Azekura, K.
1983-01-01
A model for calculating the power distribution and the control rod worth in fast reactors has been developed. This model is based on the influence function method. The characteristics of the model are as follows: Influence functions for any changes in the control rod insertion ratio are expressed by using an influence function for an appropriate control rod insertion in order to reduce the computer memory size required for the method. A control rod worth is calculated on the basis of a one-group approximation in which cross sections are generated by bilinear (flux-adjoint) weighting, not the usual flux weighting, in order to reduce the collapse error. An effective neutron multiplication factor is calculated by adjoint weighting in order to reduce the effect of the error in the one-group flux distribution. The results obtained in numerical examinations of a prototype fast reactor indicate that this method is suitable for on-line core performance evaluation because of a short computing time and a small memory size
Tritium distribution in stainless steel determined by chemical etching
International Nuclear Information System (INIS)
Xiong Yifu; Luo Deli; Chen Changan; Chen Shicun; Jing Wenyong
2009-01-01
The depth distribution of tritium in stainless steel was measured by chemical etching. The results show that the method can evaluate the tritium distribution in stainless steel more quantitatively. The maximum amount of tritium distributed in the crystal lattice of stainless steel is limited by its solubility at room temperature. The remainder of the tritium in stainless steel is gaseous tritium trapped by defects, impurities, fractures, etc. within it; the gaseous tritium amounts to several times the solid-dissolved tritium. (authors)
A Geographic Method for High Resolution Spatial Heat Planning
DEFF Research Database (Denmark)
Nielsen, Steffen
2014-01-01
…more detailed modelling that takes the geographic placement of buildings and the differences among DH systems into account. In the present article, a method for assessing the costs of DH expansions has been developed. The method was applied in a geographic information system (GIS) model that consists of three parts and assesses the costs of heat production, distribution, and transmission. The model was also applied to an actual case in order to show how it can be used. The model shows many improvements in the method for the assessment of distribution costs and transmission costs. Most notable are considering distribution costs based on the geographic properties of each area and assessing transmission costs based on an iterative process that examines expansion potentials gradually. The GIS model is only applicable to a Danish context, but the method itself can be applied to other countries.
International Nuclear Information System (INIS)
Ren Shang-Qing; Yang Hong; Wang Wen-Wu; Tang Bo; Tang Zhao-Yun; Wang Xiao-Lei; Xu Hao; Luo Wei-Chun; Zhao Chao; Yan Jiang; Chen Da-Peng; Ye Tian-Chun
2015-01-01
A new method, based on recovery measurements, is proposed to extract the energy distribution of negative charges resulting from electron trapping in the gate stack of an nMOSFET during positive bias temperature instability (PBTI) stress. In our case, the extracted energy distribution of negative charges shows an obvious energy dependence, and the energy level with the largest density of negative charges is 0.01 eV above the conduction band of silicon. The charge energy distribution below that level shows a strong dependence on the stress voltage. (paper)
International Nuclear Information System (INIS)
Bossew, P.; Žunić, Z.S.; Stojanovska, Z.; Tollefsen, T.; Carpentieri, C.; Veselinović, N.; Komatina, S.; Vaupotič, J.; Simović, R.D.; Antignani, S.; Bochicchio, F.
2014-01-01
Between 2008 and 2011 a survey of radon (²²²Rn) was performed in schools of several districts of Southern Serbia. Some results have been published previously (Žunić et al., 2010; Carpentieri et al., 2011; Žunić et al., 2013). This article concentrates on the geographical distribution of the measured Rn concentrations. Applying geostatistical methods, we generate "school radon maps" of expected concentrations and of estimated probabilities that a concentration threshold is exceeded. The resulting maps show a clearly structured spatial pattern which appears related to the geological background. In particular, in areas with vulcanite and granitoid rocks, elevated radon (Rn) concentrations can be expected. The "school radon map" can therefore be considered a proxy for a map of the geogenic radon potential, and allows identification of radon-prone areas, i.e. areas in which higher Rn concentrations can be expected for natural reasons. It must be stressed that the "radon hazard", or potential risk, estimated this way has to be distinguished from the actual radon risk, which is a function of exposure. This in turn may require (depending on the target variable which is supposed to measure risk) considering demographic and sociological reality, i.e. population density and the distribution of building styles and living habits. -- Highlights: • A map of Rn concentrations in primary schools of Southern Serbia. • Application of geostatistical methods. • Correlation with geology found. • Can serve as proxy to identify radon-prone areas
Stand diameter distribution modelling and prediction based on Richards function.
Directory of Open Access Journals (Sweden)
Ai-guo Duan
The objective of this study was to introduce the application of the Richards equation to modelling and prediction of stand diameter distribution. Long-term repeated measurement data sets, consisting of 309 diameter frequency distributions from Chinese fir (Cunninghamia lanceolata) plantations in southern China, were used: 150 stands served as fitting data and the other 159 stands were used for testing. The nonlinear regression method (NRM) or the maximum likelihood estimation method (MLEM) was applied to estimate the parameters of the models, and the parameter prediction method (PPM) and parameter recovery method (PRM) were used to predict the diameter distributions of unknown stands. Four main conclusions were obtained: (1) the Richards (R) distribution gave a more accurate simulation than the three-parameter Weibull function; (2) the parameters p, q and r of the R distribution proved to be its scale, location and shape parameters, and have a deep relationship with stand characteristics, which means the parameters of the R distribution have a good theoretical interpretation; (3) the ordinate of the inflection point of the R distribution is significantly related to its skewness and kurtosis, and the fitted main distribution range for the cumulative diameter distribution of Chinese fir plantations was 0.4∼0.6; (4) the goodness-of-fit test showed that diameter distributions of unknown stands can be well estimated by applying the R distribution based on PRM, or on the combination of PPM and PRM, under the condition that only the quadratic mean DBH, or additionally the stand age, is known; the non-rejection rates were near 80%, higher than the 72.33% non-rejection rate of the three-parameter Weibull function based on the combination of PPM and PRM.
Energy Technology Data Exchange (ETDEWEB)
Choi, Myung Soo; Yang, Kyong Uk [Chonnam National University, Yeosu (Korea, Republic of); Kondou, Takahiro [Kyushu University, Fukuoka (Japan); Bonkobara, Yasuhiro [University of Miyazaki, Miyazaki (Japan)
2016-03-15
We developed a method for analyzing the free vibration of a structure regarded as a distributed system, by combining the Wittrick-Williams algorithm and the transfer dynamic stiffness coefficient method. A computational algorithm was formulated for analyzing the free vibration of a straight-line beam regarded as a distributed system, to explain the concept of the developed method. To verify the effectiveness of the developed method, the natural frequencies of straight-line beams were computed using the finite element method, transfer matrix method, transfer dynamic stiffness coefficient method, the exact solution, and the developed method. By comparing the computational results of the developed method with those of the other methods, we confirmed that the developed method exhibited superior performance over the other methods in terms of computational accuracy, cost and user convenience.
The Exponentiated Gumbel Type-2 Distribution: Properties and Application
Directory of Open Access Journals (Sweden)
I. E. Okorie
2016-01-01
We introduce a generalized version of the standard Gumbel type-2 distribution. The new lifetime distribution is called the Exponentiated Gumbel (EG) type-2 distribution. The EG type-2 distribution has three nested submodels, namely the Gumbel type-2 distribution, the Exponentiated Fréchet (EF) distribution, and the Fréchet distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EG type-2 distribution are illustrated with a real lifetime data set. Results based on the log-likelihood and information statistics show that the EG type-2 distribution provides a better fit to the data than the other competing distributions. The consistency of the parameter estimates of the new distribution is also demonstrated through a simulation study. The EG type-2 distribution is therefore recommended for effective modelling of lifetime data.
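A distribution of this family can be sketched numerically. The code below assumes the common exponentiated-G construction F(x) = 1 − (1 − exp(−θx^(−φ)))^α for the EG type-2 CDF (the exact parametrisation in the paper may differ), and inverts it for sampling; the parameter names alpha, theta and phi are illustrative.

```python
import math
import random

def eg2_cdf(x, alpha, theta, phi):
    """CDF of an Exponentiated Gumbel type-2 variable, x > 0, assuming
    F(x) = 1 - (1 - exp(-theta * x**-phi))**alpha."""
    return 1.0 - (1.0 - math.exp(-theta * x ** (-phi))) ** alpha

def eg2_sample(alpha, theta, phi, rng=random):
    """Draw one variate by inverting the assumed CDF."""
    u = rng.random()
    inner = 1.0 - (1.0 - u) ** (1.0 / alpha)   # equals exp(-theta * x**-phi)
    return (-math.log(inner) / theta) ** (-1.0 / phi)
```

With alpha = 1 the construction collapses to the plain Gumbel type-2 CDF exp(−θx^(−φ)), which is a quick sanity check on the nesting claimed in the abstract.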
Methods and Strategies for Overvoltage Prevention in Low Voltage Distribution Systems with PV
DEFF Research Database (Denmark)
Hashemi Toghroljerdi, Seyedmostafa; Østergaard, Jacob
2016-01-01
…to handle a high share of PV power. This paper provides an in-depth review of methods and strategies proposed to prevent overvoltage in LV grids with PV, and discusses their effectiveness, advantages, and disadvantages in detail. Based on the mathematical framework presented in the paper, the overvoltage caused by high PV penetration is described, solutions to facilitate higher PV penetration are classified, and their effectiveness, advantages, and disadvantages are illustrated. The investigated solutions include grid reinforcement, electrical energy storage application, reactive power absorption by PV inverters, application of active medium-voltage-to-low-voltage (MV/LV) transformers, active power curtailment, and demand response (DR). Coordination between voltage control units by localized, distributed, and centralized voltage control methods is compared using the voltage sensitivity…
Seong, Ki Moon; Park, Hweon; Kim, Seong Jung; Ha, Hyo Nam; Lee, Jae Yung; Kim, Joon
2007-06-01
A yeast transcriptional activator, Gcn4p, induces the expression of genes that are involved in amino acid and purine biosynthetic pathways under amino acid starvation. Gcn4p has an acidic activation domain in the central region and a bZIP domain in the C-terminus that is divided into the DNA-binding motif and the dimerization leucine zipper motif. In order to identify amino acids in the DNA-binding motif of Gcn4p which are involved in transcriptional activation, we constructed mutant libraries in the DNA-binding motif through an innovative application of random mutagenesis. In a mutant library made from randomly mutated oligonucleotides, the actual mutation frequency was in good agreement with the values expected from the Poisson distribution. This method can save the time and effort needed to create a mutant library with a predictable mutation frequency. Based on studies using the mutant libraries constructed by the new method, specific residues of the DNA-binding domain in Gcn4p appear to be involved in the transcriptional activities on a conserved binding site.
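The Poisson prediction mentioned above is easy to reproduce in outline: if each of L positions of an oligonucleotide mutates independently with probability p, the number of mutations per oligo is approximately Poisson with rate λ = Lp. The values of L and p below are made up for illustration.

```python
import math
import random

def poisson_pmf(k, lam):
    """P(K = k) for a Poisson random variable with rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def simulate_library(n_oligos, length, p_mut, rng):
    """Mutation counts for oligos whose positions mutate independently."""
    return [sum(rng.random() < p_mut for _ in range(length))
            for _ in range(n_oligos)]

rng = random.Random(1)
length, p = 30, 0.05            # hypothetical oligo length and doping rate
lam = length * p                # Poisson rate: expected mutations per oligo
counts = simulate_library(20000, length, p, rng)
frac_unmutated = sum(c == 0 for c in counts) / len(counts)
# Binomial simulation vs. Poisson prediction for the unmutated fraction.
print(frac_unmutated, poisson_pmf(0, lam))
```

The two printed numbers agree to within a couple of percent, which is the kind of agreement the abstract reports between observed and predicted mutation frequencies.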
Gurgiolo, Chris; Vinas, Adolfo F.
2009-01-01
This paper presents a spherical harmonic analysis of the plasma velocity distribution function using high-angular, energy, and time resolution Cluster data obtained from the PEACE spectrometer instrument to demonstrate how this analysis models the particle distribution function and its moments and anisotropies. The results show that spherical harmonic analysis produced a robust physical representation model of the velocity distribution function, resolving the main features of the measured distributions. From the spherical harmonic analysis, a minimum set of nine spectral coefficients was obtained from which the moment (up to the heat flux), anisotropy, and asymmetry calculations of the velocity distribution function were obtained. The spherical harmonic method provides a potentially effective "compression" technique that can be easily carried out onboard a spacecraft to determine the moments and anisotropies of the particle velocity distribution function for any species. These calculations were implemented using three different approaches, namely, the standard traditional integration, the spherical harmonic (SPH) spectral coefficients integration, and the singular value decomposition (SVD) on the spherical harmonic methods. A comparison among the various methods shows that both SPH and SVD approaches provide remarkable agreement with the standard moment integration method.
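The coefficient-fitting step can be illustrated with a least-squares fit of a low-order harmonic basis, solved internally by SVD as in the SVD variant mentioned above. The basis, the synthetic dipole "distribution", and all numbers below are illustrative, not PEACE data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
theta = np.arccos(rng.uniform(-1, 1, n))   # polar angles, uniform on sphere
phi = rng.uniform(0.0, 2.0 * np.pi, n)     # azimuthal angles

# Real spherical-harmonic basis up to l = 1 (unnormalised): a constant
# (monopole) term plus the three dipole terms.
B = np.column_stack([
    np.ones(n),
    np.sin(theta) * np.cos(phi),
    np.sin(theta) * np.sin(phi),
    np.cos(theta),
])

# Synthetic "velocity distribution" samples: isotropic core plus a dipole
# anisotropy along z, with a little measurement noise.
f = 2.0 + 0.5 * np.cos(theta) + 0.05 * rng.standard_normal(n)

# Least-squares fit of the spectral coefficients (solved via SVD).
coeffs, *_ = np.linalg.lstsq(B, f, rcond=None)
print(np.round(coeffs, 3))   # roughly [2.0, 0.0, 0.0, 0.5]
```

The recovered coefficients separate the isotropic part from the anisotropy, which is the sense in which a small set of spectral coefficients "compresses" the measured distribution.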
Energy Technology Data Exchange (ETDEWEB)
Odano, Ikuo; Takahashi, Naoya; Ohtaki, Hiroh; Noguchi, Eikichi; Hatano, Masayoshi; Yamasaki, Yoshihiro; Nishihara, Mamiko [Niigata Univ. (Japan). School of Medicine; Ohkubo, Masaki; Yokoi, Takashi
1993-10-01
We developed a new graphic method using N-isopropyl-p-[¹²³I]iodoamphetamine (IMP) and SPECT of the brain: a graph on which all three parameters (cerebral blood flow, distribution volume (V_d), and the delayed-to-early count ratio (Delayed/Early ratio)) can be evaluated simultaneously. The kinetics of ¹²³I-IMP in the brain was analyzed with a 2-compartment model, and a standard input function was prepared by averaging the time-activity curves of ¹²³I-IMP in arterial blood of 6 patients with small cerebral infarction etc., including 2 normal controls. Applying this method to the differential diagnosis between Parkinson's disease and progressive supranuclear palsy, we were able to differentiate the two at a glance, because the distribution volume of the frontal lobe was significantly decreased in Parkinson's disease (mean±SD: 26±6 ml/g). This method was clinically useful. We think that the distribution volume of ¹²³I-IMP may reflect its retention mechanism in the brain, and that the values are related to amines, especially to dopamine receptors and their metabolism. (author)
Aerosol distribution measurements by laser - Doppler - spectroscopy
International Nuclear Information System (INIS)
Baldassari, J.
1977-01-01
Laser-Doppler spectroscopy is used to study particle size distributions, especially of sodium aerosols, in the presence of non-condensable gases. The theoretical basis is given, and an experimental technique is described. First theoretical results show reasonably good agreement with the available experimental data; the method seems to be a promising one. (author)
Research on social communication network evolution based on topology potential distribution
Zhao, Dongjie; Jiang, Jian; Li, Deyi; Zhang, Haisu; Chen, Guisheng
2011-12-01
Aiming at the problem of social communication network evolution, topology potential is first introduced to measure the local influence among nodes in networks. Second, from the perspective of topology potential distribution, a method of network evolution description based on topology potential distribution is presented, which takes artificial intelligence with uncertainty as its basic theory and local influence among nodes as its essence. Then, a social communication network is constructed from the Enron email dataset, and the presented method is used to analyze the characteristics of the social communication network's evolution. Some useful conclusions are obtained, implying that the method is effective and showing that topology potential distribution can effectively describe sociological characteristics and detect local changes in a social communication network.
DEFF Research Database (Denmark)
Chen, Shuheng; Wang, Xiongfei; Su, Chi
2014-01-01
Based on an extended chain-table storage structure, an improved power flow method is presented, which can be applied to a distribution network with multiple PV nodes. The extended chain-table storage structure is designed on the basis of address-pointer technology describing the radial topology with a reduced memory size. The voltage error of each PV node is adjusted by a reactive power adjusting strategy. The adjusting strategy is based on a multi-variable linear function with an accelerating factor. Finally, this new improved power flow method is realized by the software system developed in VC, and the corresponding case study has been done. The experimental data and the further analysis have proved that this method can calculate the power flow of a distribution network with multiple PV nodes precisely and fast. © 2014 IEEE.
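A generic backward/forward sweep, a standard way of solving radial distribution power flow, conveys the basic iteration such methods build on (this is not the paper's chain-table implementation; the toy feeder data are invented):

```python
# Backward/forward sweep on a toy 3-bus radial feeder (per-unit values).
Z = [0.01 + 0.03j, 0.02 + 0.04j]        # branch impedances: bus 0-1, bus 1-2
S_load = [0.0, 0.5 + 0.2j, 0.3 + 0.1j]  # complex power drawn at each bus
V = [1.0 + 0j, 1.0 + 0j, 1.0 + 0j]      # flat start, bus 0 is the slack

for _ in range(20):
    # Backward sweep: node injection currents, then branch currents
    # accumulated from the leaves toward the root.
    I = [(S_load[k] / V[k]).conjugate() for k in range(3)]
    I_branch = [I[1] + I[2], I[2]]
    # Forward sweep: propagate voltage drops from the slack bus outward.
    V[1] = V[0] - Z[0] * I_branch[0]
    V[2] = V[1] - Z[1] * I_branch[1]

print([round(abs(v), 4) for v in V])
```

The voltage magnitudes fall monotonically along the feeder, as expected for downstream loads; PV-node handling of the kind the paper describes would add a reactive-power correction loop on top of this skeleton.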
Guinda, Xabier; Juanes, José Antonio; Puente, Araceli; Echavarri-Erasun, Beatriz
2012-01-01
The extensive field work carried out over the last century has allowed the worldwide description of general distribution patterns and specific composition of rocky intertidal communities. However, the information concerning subtidal communities on hard substrates is more recent and scarce due to the difficulties associated with working in such environments. In this work, a quick, easy and economical non-destructive method is applied to the study and mapping of subtidal rocky bottom macroalgae assemblages on the coast of Cantabria (N Spain). Gelidium corneum and Cystoseira baccata were the dominant species; however, the composition and coverage of macroalgae assemblages varied significantly at different locations and depth ranges. The high presence of Laminaria ochroleuca and Saccorhiza polyschides, characteristic of colder waters, shows the transitional character of this coastal area. The results obtained throughout this study have been very useful for the application of the European Water Framework Directive (WFD 2000/60/EC) and could be of great interest for the future conservation and management of these ecosystems (e.g. Habitats Directive 92/43/EEC).
Lahanas, M; Baltas, D; Giannouli, S; Milickovic, N; Zamboglou, N
2000-05-01
We have studied the accuracy of statistical parameters of dose distributions in brachytherapy using actual clinical implants. These include the mean, minimum and maximum dose values and the variance of the dose distribution inside the PTV (planning target volume), and on the surface of the PTV. These properties have been studied as a function of the number of uniformly distributed sampling points. These parameters, or the variants of these parameters, are used directly or indirectly in optimization procedures or for a description of the dose distribution. The accurate determination of these parameters depends on the sampling point distribution from which they have been obtained. Some optimization methods ignore catheters and critical structures surrounded by the PTV or alternatively consider as surface dose points only those on the contour lines of the PTV. D(min) and D(max) are extreme dose values which are either on the PTV surface or within the PTV. They must be avoided for specification and optimization purposes in brachytherapy. Using D(mean) and the variance of D which we have shown to be stable parameters, achieves a more reliable description of the dose distribution on the PTV surface and within the PTV volume than does D(min) and D(max). Generation of dose points on the real surface of the PTV is obligatory and the consideration of catheter volumes results in a realistic description of anatomical dose distributions.
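The instability of the extreme dose values can be illustrated with a toy model: as the number of uniformly distributed sampling points grows, the sample mean settles while the sample maximum keeps climbing. The 1/r² point-source "dose" inside a unit-sphere PTV below is purely illustrative, not a clinical dose model.

```python
import random
import statistics

rng = random.Random(42)

def dose_at_random_point():
    """Toy 1/r^2 'dose' from a point source at the centre of a
    unit-sphere PTV, sampled at a uniform random point inside it."""
    while True:
        x, y, z = rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1)
        r2 = x * x + y * y + z * z
        if 1e-4 < r2 <= 1.0:            # rejection-sample inside the sphere
            return 1.0 / r2

for n in (100, 1000, 10000):
    doses = [dose_at_random_point() for _ in range(n)]
    # The mean stays near 3 while the maximum depends strongly on n.
    print(n, round(statistics.fmean(doses), 2), round(max(doses), 1))
```

This is the behaviour the abstract warns about: D(mean) and the variance are stable summaries of the sampled distribution, while D(min) and D(max) are artefacts of how many points happened to land near the extremes.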
Externally studentized normal midrange distribution
Directory of Open Access Journals (Sweden)
Ben Dêivide de Oliveira Batista
The distribution of the externally studentized midrange was created based on the original studentization procedures of Student and was inspired by the distribution of the externally studentized range. The wide use of the externally studentized range in multiple comparisons was also a motivation for developing this new distribution. This work aimed to derive analytic equations for the distribution of the externally studentized midrange, obtaining the cumulative distribution, probability density and quantile functions and generating random values. This is a new distribution for which the authors could not find any report in the literature. A second objective was to build an R package for obtaining numerically the probability density, cumulative distribution and quantile functions and to make it available to the scientific community. The algorithms were proposed and implemented using Gauss-Legendre quadrature and the Newton-Raphson method in the R software, resulting in the SMR package, available for download from the CRAN site. The implemented routines showed high accuracy, as proved by Monte Carlo simulations and by comparing results with different numbers of quadrature points. Regarding the precision of the quantiles for cases where the degrees of freedom are close to 1 and the percentiles are close to 100%, it is recommended to use more than 64 quadrature points.
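The quantity itself is simple to probe by Monte Carlo, which is also how the authors validated their routines: take the midrange of n standard normals and divide by an independent estimate of σ with ν degrees of freedom. The sketch below uses this textbook definition; n, ν, and the sample size are arbitrary choices.

```python
import random
import statistics

rng = random.Random(7)

def ext_studentized_midrange(n, nu):
    """One Monte Carlo draw of the externally studentized midrange:
    midrange of n standard normals over an independent sigma estimate
    with nu degrees of freedom."""
    xs = [rng.gauss(0, 1) for _ in range(n)]
    midrange = (min(xs) + max(xs)) / 2
    s2 = sum(rng.gauss(0, 1) ** 2 for _ in range(nu)) / nu
    return midrange / s2 ** 0.5

draws = sorted(ext_studentized_midrange(5, 10) for _ in range(20000))
q95 = draws[int(0.95 * len(draws))]
print(round(statistics.fmean(draws), 3), round(q95, 3))
```

By symmetry the Monte Carlo mean sits near zero; empirical quantiles such as q95 are the values the SMR package computes analytically via quadrature instead.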
International Nuclear Information System (INIS)
Muryono, H.; Sumining; Agus Taftazani; Kris Tri Basuki; Sukarman, A.
1999-01-01
The evaluation of the trace element distribution in water, sediment, soil and cassava plants in the Muria peninsula by the NAA method was carried out. A nuclear power plant (NPP) and a coal power plant (CPP) will be built in the Muria peninsula, so the Muria peninsula is an important site for sample collection and environmental monitoring. River water, sediment, dryland soil and cassava plants were chosen as specimen samples from the Muria peninsula environment. The trace element analysis results were used as contributing data for environmental monitoring before and after the NPP is built. The trace elements in the specimens of river water, sediment, dryland soil and cassava plant samples were analyzed by the INAA method. It was found that the trace elements were not evenly distributed. The percentages of the trace element distribution were 0.00026-0.037% in water samples, 0.49-62.7% in sediment samples, 36.29-99.35% in soil samples and 0.21-99.35% in cassava leaves. (author)
Optimal placement and sizing of multiple distributed generating units in distribution
Directory of Open Access Journals (Sweden)
D. Rama Prabha
2016-06-01
Distributed generation (DG) is becoming more important due to the increasing demand for electrical energy. DG plays a vital role in reducing real power losses and operating cost and in enhancing voltage stability, which constitute the objective function in this problem. This paper proposes a multi-objective technique for optimally determining the location and sizing of multiple distributed generation (DG) units in the distribution network with different load models. The loss sensitivity factor (LSF) determines the optimal placement of the DGs. Invasive weed optimization (IWO), a population-based meta-heuristic algorithm based on the behavior of weeds, is used to find the optimal sizing of the DGs. The proposed method has been tested for different load models on the IEEE 33-bus and 69-bus radial distribution systems and compared with other nature-inspired optimization methods. The simulated results illustrate the good applicability and performance of the proposed method.
A Novel Method of Clock Synchronization in Distributed Systems
Li, Gun; Niu, Meng-jie; Chai, Yang-shun; Chen, Xin; Ren, Yan-qiu
2017-04-01
Time synchronization plays an important role in spacecraft formation flight, constellation autonomous navigation, etc. For the application of clock synchronization in a network system, it is not always true that all the observed nodes in the network are interconnected; therefore, it is difficult to achieve high-precision time synchronization of a network system when a certain node can only obtain clock measurement information from a single neighboring node and not from other nodes. Aiming at this problem, a novel method of high-precision time synchronization in a network system is proposed. In this paper, each clock is regarded as a node in the network system, and, based on the definition of different topological structures of a distributed system, three control algorithms for time synchronization are designed for the following three cases: without a master clock (reference clock), with a master clock (reference clock), and with a fixed communication delay in the network system. The validity of the designed clock synchronization protocol is proved by both stability analysis and numerical simulation.
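For the master-free case, a minimal consensus-style iteration conveys the idea: each node repeatedly nudges its clock toward the readings of whichever neighbours it can observe until all clocks agree. The topology, gain, and initial offsets below are hypothetical, not the paper's protocol.

```python
# Clock offsets (e.g. in microseconds) of four nodes on a line topology,
# where each node only observes its immediate neighbours.
offsets = {0: 0.0, 1: 12.0, 2: -7.0, 3: 3.0}
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
gain = 0.3                      # correction gain per synchronization round

for _ in range(200):
    new = {}
    for node, clock in offsets.items():
        # Average of the observed clock differences to each neighbour.
        obs = [offsets[m] - clock for m in neighbours[node]]
        new[node] = clock + gain * sum(obs) / len(obs)
    offsets = new

print([round(v, 3) for v in offsets.values()])  # all nodes agree
```

Note that node 0 never observes nodes 2 or 3 directly, yet the whole network still converges to a common time, which is the property the abstract highlights for sparsely connected topologies.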
LaRue, Michelle A.; Stapleton, Seth P.; Porter, Claire; Atkinson, Stephen N.; Atwood, Todd C.; Dyck, Markus; Lecomte, Nicolas
2015-01-01
High-resolution satellite imagery is a promising tool for providing coarse information about polar species abundance and distribution, but current applications are limited. With polar bears (Ursus maritimus), the technique has only proven effective on landscapes with little topographic relief that are devoid of snow and ice, and time-consuming manual review of imagery is required to identify bears. Here, we evaluated mechanisms to further develop methods for satellite imagery by examining data from Rowley Island, Canada. We attempted to automate and expedite detection via a supervised spectral classification and image differencing to expedite image review. We also assessed what proportion of a region should be sampled to obtain reliable estimates of density and abundance. Although the spectral signature of polar bears differed from nontarget objects, these differences were insufficient to yield useful results via a supervised classification process. Conversely, automated image differencing—or subtracting one image from another—correctly identified nearly 90% of polar bear locations. This technique, however, also yielded false positives, suggesting that manual review will still be required to confirm polar bear locations. On Rowley Island, bear distribution approximated a Poisson distribution across a range of plot sizes, and resampling suggests that sampling >50% of the site facilitates reliable estimation of density (CV in certain areas, but large-scale applications remain limited because of the challenges in automation and the limited environments in which the method can be effectively applied. Improvements in resolution may expand opportunities for its future uses.
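The image-differencing step can be sketched in a few lines: subtract the earlier scene from the later one and flag pixels whose change exceeds a threshold. The 8×8 "images", the injected bright object, and the 5-sigma threshold below are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
before = rng.normal(100, 2, (8, 8))          # snow/terrain background scene
after = before + rng.normal(0, 2, (8, 8))    # same scene, new sensor noise
after[2, 5] += 40                            # a new bright object ("bear")

# Change detection: absolute difference, then a simple sigma threshold.
diff = np.abs(after - before)
candidates = np.argwhere(diff > 5 * diff.std())
print(candidates)
```

The single injected object is flagged while pure sensor noise is not; with real imagery, registration errors and moving shadows produce the false positives the abstract mentions, which is why manual confirmation is still needed.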
The value of electricity distribution networks
International Nuclear Information System (INIS)
De Paoli, L.
2000-01-01
This article presents the results of a study aimed at evaluating parts of the distribution network of ENEL, which is in charge of distributing and supplying electricity to its captive market, that could be sold as separate entities. To determine the asset value of these hypothetical companies, the discounted cash flow method has been applied to ENEL's 147 distribution zones. The econometric analysis shows that the relevant variables are the quantity sold to non-residential and non-large-industrial consumers and the length of medium-voltage lines. According to the available data and the methodology chosen, the per-client value of ENEL's distribution zones varies substantially: the maximum value is about three times the mean value, and the minimum value is largely negative. The article maintains that changes in regulation could greatly modify the asset value of distribution networks. The main regulatory risks are linked to the degree of market opening, the introduction of compensation mechanisms between different distributors, and the maximum revenue allowed by the energy Authority for a given period of time. This point is developed in the appendix, where it is shown that the price cap is decided on the basis of a rate of return which is valid at the moment the cap is fixed but which may no longer be valid if the rate of inflation varies.
An adaptive distributed admission approach in Bluetooth network with QoS provisions
DEFF Research Database (Denmark)
Son, L.T.; Schiøler, Henrik; Madsen, Ole Brun
2002-01-01
In this paper, a method of adaptive distributed admission with end-to-end quality of service (QoS) provisions for real-time and non-real-time traffic in Bluetooth networks is highlighted, its mathematical background is analyzed, and a simulation with bursty traffic sources, the Interrupted Bernoulli Process (IBP), is carried out. The simulation results show that the performance of the Bluetooth network is improved when the distributed admission method is applied.
International Nuclear Information System (INIS)
Goenczi, L.; Didriksson, R.; Berggren, H.; Sundqvist, B.; Lindh, U.; Awal, M.A.
1980-01-01
The ¹⁴N(d,p₀)¹⁵N reaction has been used to measure nitrogen depth distributions in single grains of wheat and barley. With the beam energy used (6 MeV), a depth of 225 μm was reached. In order to test the applicability of the method for plant breeding purposes, we studied 1000 grains of wheat and grains of barley, which are part of a larger material of about 50,000 grains grown and harvested under controlled biological conditions. The measured nitrogen distributions in wheat show striking correlations with parameters describing the nitrogen level of the fertilizer, the time of harvesting, the grain position in a head and the analyzed variety of wheat. Contributions to the spectra from silicon in the hull of barley are demonstrated. Contributions from interfering elements in the aleurone layer in wheat placed a limit of 120 μm on the depth region analyzed. The importance of effects like pile-up, heating of the grains by the beam and grain asymmetries was studied in detail. The possibility of using the technique for selection purposes in plant breeding is discussed. (author)
International Nuclear Information System (INIS)
Goncharenko, Y. D.; Evseev, L.A.; Risovany, V.D.
2005-01-01
The SIMS technique (using linear analysis and 2D surface imaging) has been used to measure the radial distribution of the boron isotope ratio in boron carbide pellets irradiated in a fast reactor. It was revealed that the radial distribution of the isotope ratio in the boron carbide pellets differs significantly after irradiation in fast and thermal reactors. The advisability of using ion images for such examinations was shown. (Author)
A Step-Wise Approach to Elicit Triangular Distributions
Greenberg, Marc W.
2013-01-01
We adapt and combine known methods to demonstrate an expert-judgment elicitation process that: 1. models the expert's inputs as a triangular distribution, 2. incorporates techniques to account for expert bias, and 3. is structured in a way that helps justify the expert's inputs. This paper shows one way of "extracting" expert opinion for estimating purposes. Nevertheless, as with most subjective methods, there are many ways to do this.
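The three-step elicitation above can be sketched in a few lines. This is a minimal illustration, not the paper's procedure: the closed-form moments are standard for the triangular distribution, but the bias adjustment (widening the elicited range by a fixed fraction) and the cost figures are our own assumptions.

```python
import random

def triangular_stats(low, mode, high):
    """Closed-form mean and variance of a Triangular(low, mode, high)."""
    mean = (low + mode + high) / 3.0
    var = (low**2 + mode**2 + high**2
           - low*mode - low*high - mode*high) / 18.0
    return mean, var

def debias(low, mode, high, spread=0.2):
    """Widen the elicited range to counter expert overconfidence.

    The 20% default spread is an illustrative assumption, not a value
    from the paper; it would be calibrated to the expert's track record."""
    width = high - low
    return low - spread * width, mode, high + spread * width

low, mode, high = debias(80.0, 100.0, 150.0)  # hypothetical cost inputs ($K)
mean, var = triangular_stats(low, mode, high)
sample = random.triangular(low, high, mode)   # stdlib draw (note the arg order)
```

Note that the standard library's `random.triangular` takes its arguments as `(low, high, mode)`, with the mode last.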
DEFF Research Database (Denmark)
Hasmasan, Adrian Augustin; Busca, Christian; Teodorescu, Remus
2012-01-01
In this paper, a FEM (finite element method) based mechanical model for PP (press-pack) IGBTs (insulated gate bipolar transistors) is presented, which can be used to calculate the clamping force distribution among chips under various clamping conditions. The clamping force is an important parameter for the chip, because it influences contact electrical resistance, contact thermal resistance and power cycling capability. Ideally, the clamping force should be equally distributed among chips, in order to maximize the reliability of the PP IGBT. The model is built around a hypothetical PP IGBT with 9 chips, and it has numerous simplifications in order to reduce the simulation time as much as possible. The developed model is used to analyze the clamping force distribution among chips in various study cases, where uniform and non-uniform clamping pressures are applied on the studied PP IGBT.
Income inequality in Romania: The exponential-Pareto distribution
Oancea, Bogdan; Andrei, Tudorel; Pirjol, Dan
2017-03-01
We present a study of the distribution of the gross personal income and income inequality in Romania, using individual tax income data, and both non-parametric and parametric methods. Comparing with official results based on household budget surveys (the Family Budgets Survey and the EU-SILC data), we find that the latter underestimate the income share of the high income region, and the overall income inequality. A parametric study shows that the income distribution is well described by an exponential distribution in the low and middle incomes region, and by a Pareto distribution in the high income region with Pareto coefficient α = 2.53. We note an anomaly in the distribution in the low incomes region (∼9,250 RON), and present a model which explains it in terms of partial income reporting.
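The two-regime fit described above can be sketched with standard estimators: a maximum-likelihood rate for the exponential body and the Hill estimator for the Pareto tail exponent. The income figures and the threshold separating the two regimes are illustrative assumptions, not the paper's data.

```python
import math

def fit_exponential(incomes):
    """MLE rate for an exponential body: lambda = 1 / mean income."""
    return len(incomes) / sum(incomes)

def hill_estimator(incomes, x_min):
    """Hill estimator of the Pareto tail exponent alpha for x >= x_min."""
    tail = [x for x in incomes if x >= x_min]
    return len(tail) / sum(math.log(x / x_min) for x in tail)

# Hypothetical split: exponential body below a chosen threshold,
# Pareto tail above it (the threshold choice is ours, not the paper's).
body = [10.0, 20.0, 30.0, 40.0]        # low/middle incomes
tail = [100.0, 200.0, 400.0, 800.0]    # high incomes
lam = fit_exponential(body)            # rate of the exponential body
alpha = hill_estimator(tail, 100.0)    # Pareto exponent of the tail
```

In practice the threshold itself is a parameter to be estimated, for example by minimizing a goodness-of-fit statistic over candidate values.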
Income distribution in the Colombian economy from an econophysics perspective
Directory of Open Access Journals (Sweden)
Hernando Quevedo Cubillos
2016-09-01
Recently, in econophysics, it has been shown that it is possible to analyze economic systems as equilibrium thermodynamic models. We apply statistical thermodynamics methods to analyze income distribution in the Colombian economic system. Using the data obtained in random polls, we show that income distribution in the Colombian economic system is characterized by two specific phases. The first includes about 90% of the interviewed individuals, and is characterized by an exponential Boltzmann-Gibbs distribution. The second phase, which contains the individuals with the highest incomes, can be described by means of one or two power-law density distributions that are known as Pareto distributions.
International Nuclear Information System (INIS)
Wagner, J. C.; Blakeman, E. D.; Peplow, D. E.
2009-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is a variation on the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for some time to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain approximately uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented in the ADVANTG/MCNP framework and has been fully automated within the MAVRIC sequence of SCALE 6. Results of the application of the method to enabling the calculation of dose rates throughout an entire full-scale pressurized-water reactor facility are presented and discussed. (authors)
A new kind of droplet space distribution measuring method
International Nuclear Information System (INIS)
Ma Chao; Bo Hanliang
2012-01-01
A new droplet space-distribution measuring technique is introduced, together with the experimental device designed to measure the spatial distribution and trajectories of the flying film droplets produced by bubbles breaking up near the free surface of water. The experiment was designed around a kind of water-sensitive test paper (rice paper) that records the position and size of the colored scattering droplets precisely. The rice papers were rolled into cylinders of different diameters. The bubbles broke up exactly at the center of the cylinder, and the spatial distribution and trajectories of the droplets were obtained by analysing the positions of all the droplets produced by same-size bubbles on the rice papers. (authors)
International Nuclear Information System (INIS)
El-Shanshoury, Gh.I.
2015-01-01
Assessing the adequacy of probability distributions for estimating the extreme events of air temperature in the Dabaa region is one of the prerequisites for any design purpose at the Dabaa site, which can be achieved by a probability approach. In the present study, three extreme value distributions are considered and compared to estimate the extreme events of monthly and annual maximum and minimum temperature. These distributions include the Gumbel/Frechet distributions for estimating the extreme maximum values and the Gumbel/Weibull distributions for estimating the extreme minimum values. The Lieblein technique and the Method of Moments are applied for estimating the distribution parameters. Subsequently, the required design values with a given return period of exceedance are obtained. Goodness-of-fit tests involving Kolmogorov-Smirnov and Anderson-Darling are used for checking the adequacy of fitting the method/distribution for the estimation of maximum/minimum temperature. Mean Absolute Relative Deviation, Root Mean Square Error and Relative Mean Square Deviation are calculated, as performance indicators, to judge which distribution and method of parameter estimation are the most appropriate for estimating the extreme temperatures. The present study indicated that the Weibull distribution combined with Method of Moments estimators gives the best fit and the most reliable and accurate predictions for estimating the extreme monthly and annual minimum temperature. The Gumbel distribution combined with Method of Moments estimators showed the best fit and accurate predictions for the estimation of the extreme monthly and annual maximum temperature, except for July, August, October and November. The study shows that the combination of the Frechet distribution with the Method of Moments is the most accurate for estimating the extreme maximum temperature in July, August and November, while the Gumbel distribution with the Lieblein technique is the best for October
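The Method-of-Moments fit for the Gumbel maximum distribution mentioned above has a simple closed form: the scale is the sample standard deviation times √6/π, and the location is the mean shifted by the Euler-Mascheroni constant times the scale. A design value for a given return period then follows by inverting the Gumbel CDF. The temperature series below is hypothetical, not the Dabaa data.

```python
import math
import statistics

EULER_GAMMA = 0.5772156649015329

def gumbel_mom(sample):
    """Method-of-Moments estimates (mu, beta) for a Gumbel maximum distribution."""
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)          # sample (n-1) standard deviation
    beta = sd * math.sqrt(6.0) / math.pi   # scale
    mu = mean - EULER_GAMMA * beta         # location
    return mu, beta

def design_value(mu, beta, T):
    """Maximum with return period T: invert the Gumbel CDF at 1 - 1/T."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

annual_max_temp = [41.2, 39.8, 43.1, 40.5, 42.0, 44.3, 38.9, 41.7]  # hypothetical deg C
mu, beta = gumbel_mom(annual_max_temp)
x100 = design_value(mu, beta, 100)   # 100-year design maximum
```

The Weibull and Frechet cases used for the minima and for the summer maxima follow the same pattern with their respective moment relations.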
Oguchi, Masahiro; Fuse, Masaaki
2015-02-03
Product lifespan estimates are important information for understanding progress toward sustainable consumption and estimating the stocks and end-of-life flows of products. Publications reported actual lifespan of products; however, quantitative data are still limited for many countries and years. This study presents regional and longitudinal estimation of lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions of average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
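The simplification described above, holding the Weibull shape parameter constant so that only the scale must be estimated from the age profile, can be sketched as follows. Using the Weibull survival function S(t) = exp(-(t/η)^k), fixing k linearizes the fit: ln η = ln t - ln(-ln S(t))/k. The age profile and the shape value below are illustrative assumptions, not the paper's estimates.

```python
import math

def estimate_scale(ages, survival, shape):
    """Least-squares fit of the Weibull scale eta with the shape k held fixed.

    From S(t) = exp(-(t/eta)^k):  ln eta = ln t - ln(-ln S(t)) / k,
    so ln eta is estimated as the mean of the right-hand side."""
    resid = [math.log(t) - math.log(-math.log(s)) / shape
             for t, s in zip(ages, survival)]
    return math.exp(sum(resid) / len(resid))

def mean_lifespan(shape, scale):
    """Weibull mean lifespan: eta * Gamma(1 + 1/k)."""
    return scale * math.gamma(1.0 + 1.0 / shape)

# Hypothetical age profile: fraction of a registration cohort still in use.
ages     = [5, 10, 15, 20]
survival = [0.95, 0.70, 0.35, 0.10]
SHAPE_K  = 2.5                    # assumed constant shape, in the spirit of the paper
eta = estimate_scale(ages, survival, SHAPE_K)
avg = mean_lifespan(SHAPE_K, eta)
```

With the shape fixed, a single point on the age profile is in principle enough to recover the scale, which is what makes the simplified estimation possible for countries with sparse data.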
A Study of the Spatial Distribution Pattern of NDVI in Western Jilin Province
Yang, Shu-jie; Li, Xiao-dong; Yan, Shou-gang
2018-02-01
Using the methods of spatial autocorrelation analysis and trend analysis, this paper studies the spatial distribution pattern of NDVI in Western Jilin based on the GIMMS NDVI dataset (1998-2008). The 15-day maximum value is obtained through MAX processing. Results show that the NDVI in the growing season exhibits a rising trend in Western Jilin over 1998-2008. Over the whole study area, NDVI shows positive spatial autocorrelation, but locally NDVI tends toward a scattered distribution, which means the vegetation cover of Western Jilin is generally fragmented.
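The global spatial autocorrelation statistic usually used in such studies is Moran's I, a sketch of which is shown below on a toy one-dimensional grid. The NDVI values and the rook (shared-edge) adjacency matrix are illustrative, not taken from the GIMMS dataset.

```python
def morans_i(values, weights):
    """Global Moran's I: (n/W) * sum_ij w_ij*z_i*z_j / sum_i z_i^2,
    where z_i are mean-centred values and W is the total weight."""
    n = len(values)
    mean = sum(values) / n
    z = [v - mean for v in values]
    W = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * z[i] * z[j]
              for i in range(n) for j in range(n))
    den = sum(zi * zi for zi in z)
    return (n / W) * num / den

# Toy 4-cell transect of NDVI values with rook adjacency.
ndvi = [0.2, 0.3, 0.6, 0.7]
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
i_stat = morans_i(ndvi, w)   # positive -> similar values cluster together
```

A positive I indicates clustering of similar NDVI values (the whole-region result above), while values near zero indicate the scattered, fragmented local pattern the study reports.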
Uncertainties of predictions from parton distribution functions. I. The Lagrange multiplier method
International Nuclear Information System (INIS)
Stump, D.; Pumplin, J.; Brock, R.; Casey, D.; Huston, J.; Kalk, J.; Lai, H. L.; Tung, W. K.
2002-01-01
We apply the Lagrange multiplier method to study the uncertainties of physical predictions due to the uncertainties of parton distribution functions (PDFs), using the cross section σ_W for W production at a hadron collider as an archetypal example. An effective χ² function based on the CTEQ global QCD analysis is used to generate a series of PDFs, each of which represents the best fit to the global data for some specified value of σ_W. By analyzing the likelihood of these 'alternative hypotheses', using available information on errors from the individual experiments, we estimate that the fractional uncertainty of σ_W due to current experimental input to the PDF analysis is approximately ±4% at the Fermilab Tevatron and ±8-10% at the CERN Large Hadron Collider. We give sets of PDFs corresponding to these up and down variations of σ_W. We also present similar results on Z production at the colliders. Our method can be applied to any combination of physical variables in precision QCD phenomenology, and it can be used to generate benchmarks for testing the accuracy of approximate methods based on the error matrix
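The mechanics of the Lagrange multiplier scan can be illustrated on a toy problem. Here χ²(a, b) = a² + b² and the "observable" X(a, b) = a + 2b stand in for the CTEQ global χ² and σ_W; both are our simplifications, chosen so the constrained minimum is available in closed form. Scanning the multiplier traces out χ² as a function of the observable, which is exactly the curve from which the paper reads off the uncertainty.

```python
def constrained_minimum(lam):
    """Minimise Psi = chi2 + lam * X for the toy problem.

    chi2(a, b) = a^2 + b^2 (best fit at the origin), X(a, b) = a + 2b.
    Setting dPsi/da = dPsi/db = 0 gives a = -lam/2, b = -lam."""
    a, b = -lam / 2.0, -lam
    chi2 = a * a + b * b
    X = a + 2.0 * b
    return X, chi2

# Each multiplier value yields one point (X, chi2) on the constrained curve;
# eliminating lam gives chi2 = X^2 / 5 for this toy problem.
curve = [constrained_minimum(lam) for lam in (-1.0, -0.5, 0.0, 0.5, 1.0)]
```

In the real analysis each point of the curve requires a full refit of the PDFs, and the allowed range of X is set by the χ² tolerance judged acceptable against the individual experiments.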
Fowler, Mike S; Ruokolainen, Lasse
2013-01-01
The colour of environmental variability influences the size of population fluctuations when filtered through density dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical feedback) models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample Skewness and Kurtosis and decreasing mean Kurtosis of these series alter the frequency distribution shape of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in the distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risks in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations.
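The AR(1) colour-generation method discussed above can be sketched as follows. This is a simplified version with Gaussian innovations and unit stationary variance; the specific generation schemes whose distributional artefacts the paper analyses may differ in detail.

```python
import math
import random

def ar1_series(kappa, n, seed=1):
    """AR(1) environmental noise: x[t+1] = kappa*x[t] + sqrt(1-kappa^2)*eps[t].

    kappa > 0 reddens the spectrum (slow fluctuations dominate),
    kappa < 0 blues it (rapid fluctuations dominate), kappa = 0 is white.
    The sqrt(1 - kappa^2) scaling keeps the stationary variance at 1."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0)]
    c = math.sqrt(1.0 - kappa * kappa)
    for _ in range(n - 1):
        x.append(kappa * x[-1] + c * rng.gauss(0.0, 1.0))
    return x

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation, the usual estimate of kappa."""
    mean = sum(x) / len(x)
    num = sum((a - mean) * (b - mean) for a, b in zip(x, x[1:]))
    den = sum((a - mean) ** 2 for a in x)
    return num / den

red = ar1_series(0.7, 20000)     # reddened environmental series
blue = ar1_series(-0.7, 20000)   # blued environmental series
```

Comparing the realised skewness and kurtosis of such series against a spectrally mimicked, normally distributed control is the kind of check the paper uses to separate colour effects from distribution-shape artefacts.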