Zhang, Chenglong; Guo, Ping
2017-10-01
Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model is derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. It can therefore solve ratio optimization problems with fuzzy parameters, and examine how results vary under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions under different credibility levels and weight coefficients; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, and optimal irrigation water allocation solutions from the GFCCFP model are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient, rather than the credibility level, is the main factor for system efficiency. These results can effectively support reasonable irrigation water resources management and agricultural production.
Radiotherapy Dose Fractionation under Parameter Uncertainty
Davison, Matt; Kim, Daero; Keller, Harald
2011-01-01
In radiotherapy, radiation is directed to damage a tumor while avoiding surrounding healthy tissue. Tradeoffs ensue because the dose cannot be exactly shaped to the tumor. It is particularly important to ensure that sensitive biological structures near the tumor are not damaged more than a certain amount. Biological tissue is known to have a nonlinear response to incident radiation. The linear-quadratic dose response model, which requires the specification of two clinically and experimentally observed response coefficients, is commonly used to model this effect. This model yields an optimization problem giving two different types of optimal dose sequences (fractionation schedules). Which fractionation schedule is preferred depends on the response coefficients. These coefficients are not known precisely and may differ from patient to patient. Because of this, not only the expected outcomes but also the uncertainty around those outcomes is important, and it might not be prudent to select the strategy with the best expected outcome.
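The linear-quadratic model mentioned in the abstract above is commonly summarized through the biologically effective dose, BED = n·d·(1 + d/(α/β)) for n fractions of d Gy each. A minimal sketch, with illustrative α/β values (not taken from the paper), shows how the biological effect of the same physical dose depends on both the schedule and the uncertain response coefficients:

```python
def bed(n, d, alpha_beta):
    """Biologically effective dose (Gy) for n fractions of d Gy each,
    under the linear-quadratic model: BED = n*d*(1 + d/(alpha/beta))."""
    return n * d * (1.0 + d / alpha_beta)

# Same 60 Gy physical dose, two schedules, two assumed alpha/beta values
# (low alpha/beta ~ late-responding tissue, high ~ typical tumor):
for ab in (2.0, 10.0):
    conventional = bed(30, 2.0, ab)       # 30 fractions of 2 Gy
    hypofractionated = bed(15, 4.0, ab)   # 15 fractions of 4 Gy
    print(ab, conventional, hypofractionated)
```

The gap between the two schedules widens sharply as α/β decreases, which is why parameter uncertainty can change which schedule is preferable.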
Price Uncertainty in Linear Production Situations
Suijs, J.P.M.
1999-01-01
This paper analyzes linear production situations with price uncertainty, and shows that the corresponding stochastic linear production games are totally balanced. It also shows that investment funds, where investors pool their individual capital for joint investments in financial assets, fit into
Mohammadtaghi Hamidi Beheshti
2010-01-01
We propose a fractional-order controller to stabilize unstable fractional-order open-loop systems with interval uncertainty, without needing to change the poles of the closed-loop system. To this end, we use the robust stability theory of fractional-order linear time-invariant (FO-LTI) systems. Determining the control parameters requires only limited knowledge of the plant, which makes the proposed controller a suitable choice for the control of interval nonlinear systems and especially of fractional-order chaotic systems. Finally, numerical simulations are presented to show the effectiveness of the proposed controller.
Sayyad Delshad Saleh
2010-01-01
We propose a fractional-order controller to stabilize unstable fractional-order open-loop systems with interval uncertainty, without needing to change the poles of the closed-loop system. To this end, we use the robust stability theory of fractional-order linear time-invariant (FO-LTI) systems. Determining the control parameters requires only limited knowledge of the plant, which makes the proposed controller a suitable choice for the control of interval nonlinear systems and especially of fractional-order chaotic systems. Finally, numerical simulations are presented to show the effectiveness of the proposed controller.
Linear Programming Problems for Generalized Uncertainty
Thipwiwatpotjana, Phantipa
2010-01-01
Uncertainty occurs when there is more than one realization that can represent a piece of information. This dissertation concerns only discrete realizations of an uncertainty. Different interpretations of an uncertainty, and the relationships among them, are addressed for the case where the uncertainty is not a probability over the realizations. A well-known model that can handle…
An Approach for Solving Linear Fractional Programming Problems
Andrew Oyakhobo Odior
2012-01-01
Linear fractional programming problems are useful tools in production planning, financial and corporate planning, and health care and hospital planning, and as such have attracted considerable research interest. The paper presents a new approach for solving a linear fractional programming problem in which the objective function is a linear fractional function, while the constraint functions are in the form of linear inequalities. The approach adopted is based mainly upon solving the problem algebraically using the concept of duality.
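A standard alternative to the algebraic approach described above is the Charnes-Cooper transformation, which converts a linear fractional program into an ordinary LP. A minimal sketch with made-up problem data (assuming SciPy is available; this is not the paper's own method):

```python
# Charnes-Cooper: maximize (c.x + alpha)/(d.x + beta) s.t. A x <= b, x >= 0,
# assuming d.x + beta > 0 on the feasible set. Substituting y = t*x and
# t = 1/(d.x + beta) yields an equivalent LP in (y, t).
import numpy as np
from scipy.optimize import linprog

c, alpha = np.array([2.0, 3.0]), 0.0
d, beta = np.array([1.0, 1.0]), 1.0
A, b = np.array([[1.0, 2.0], [3.0, 1.0]]), np.array([4.0, 6.0])

obj = -np.concatenate([c, [alpha]])             # linprog minimizes
A_ub = np.hstack([A, -b[:, None]])              # A y - b t <= 0
A_eq = np.array([np.concatenate([d, [beta]])])  # d.y + beta t = 1
res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(2),
              A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * 3)
y, t = res.x[:2], res.x[2]
x_opt = y / t                                   # recover the original variables
print(x_opt, (c @ x_opt + alpha) / (d @ x_opt + beta))
```

For this toy instance the optimum sits at the vertex x = (0, 2) with ratio value 2.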
Linear models in the mathematics of uncertainty
Mordeson, John N; Clark, Terry D; Pham, Alex; Redmond, Michael A
2013-01-01
The purpose of this book is to present new mathematical techniques for modeling global issues. These mathematical techniques are used to determine linear equations between a dependent variable and one or more independent variables in cases where standard techniques such as linear regression are not suitable. In this book, we examine cases where the number of data points is small (effects of nuclear warfare), where the experiment is not repeatable (the breakup of the former Soviet Union), and where the data is derived from expert opinion (how conservative is a political party). In all these cases the data is difficult to measure and an assumption of randomness and/or statistical validity is questionable. We apply our methods to real world issues in international relations such as nuclear deterrence, smart power, and cooperative threat reduction. We next apply our methods to issues in comparative politics such as successful democratization, quality of life, economic freedom, political stability, and fail...
Linear Matrix Inequality Based Fuzzy Synchronization for Fractional Order Chaos
Bin Wang
2015-01-01
This paper investigates fuzzy synchronization for fractional order chaos via linear matrix inequalities. Based on the generalized Takagi-Sugeno fuzzy model, an efficient stability condition for fractional order chaos synchronization or anti-synchronization is given. The fractional order stability condition is transformed into a set of linear matrix inequalities and the rigorous proof details are presented. Furthermore, through fractional order linear time-invariant (LTI) interval theory, the approach is extended to fractional order chaos synchronization even when the system has uncertain parameters. Three typical examples, including synchronization between an integer order three-dimensional (3D) chaotic system and a fractional order 3D chaotic system, anti-synchronization of two fractional order hyperchaotic systems, and synchronization between an integer order 3D chaotic system and a fractional order 4D chaotic system, are employed to verify the theoretical results.
Ai-Min Yang
2014-01-01
The local fractional Laplace variational iteration method was applied to solve linear local fractional partial differential equations. The method couples the local fractional variational iteration method with the Laplace transform. Nondifferentiable approximate solutions are obtained and their graphs are shown.
New Inequalities and Uncertainty Relations on Linear Canonical Transform Revisit
Xu Guanlei
2009-01-01
The uncertainty principle plays an important role in mathematics, physics, signal processing, and related fields. Firstly, based on the definition of the linear canonical transform (LCT) and the traditional Pitt's inequality, a novel Pitt's inequality in the LCT domains is obtained, which depends on the LCT parameters a and b. A novel logarithmic uncertainty principle is then derived from this Pitt's inequality in the LCT domains, associated with the parameters of the two LCTs. Secondly, from the relation between the original function and its LCT, an entropic uncertainty principle and a Heisenberg uncertainty principle in the LCT domains are derived, both associated with the LCT parameters a and b. The reason why the three lower bounds depend only on the LCT parameters a and b, and are independent of c and d, is presented. The results show that the bounds can tend to zero.
Evaluation method for uncertainty of effective delayed neutron fraction βeff
Zukeran, Atsushi
1999-01-01
The uncertainty of the effective delayed neutron fraction βeff is evaluated in terms of three quantities: the uncertainties of the basic delayed neutron constants, the energy dependence of the delayed neutron yield ν_d^m, and the uncertainties of the fission cross sections of the fuel elements. The uncertainty of βeff due to the delayed neutron yield is expressed by a linearized formula assuming that the delayed neutron yield does not depend on the incident energy; the energy dependence is supplemented using the detailed energy dependence proposed by D'Angelo and Filip. The third quantity, the fission cross section uncertainty, is evaluated on the basis of generalized perturbation theory in relation to reaction rate ratios such as central spectral indices or average reaction rate ratios. The resultant uncertainty of βeff is about 4 to 5%, in which the primary factor is the delayed neutron yield and the secondary one is the fission cross section uncertainty, especially for ²³⁸U. The energy dependence of ν_d^m systematically reduces the magnitude of βeff by about 1.4% to 1.7%, depending on the model of the energy vs. ν_d^m correlation curve. (author)
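When independent relative uncertainty components are combined, as in the βeff evaluation above, the standard practice is addition in quadrature. A minimal sketch with illustrative component values (not the paper's actual numbers):

```python
import math

# Combine independent relative uncertainty components in quadrature.
# Component values below are illustrative only.
components = {
    "delayed neutron yield": 4.0,            # percent
    "fission cross sections": 2.0,           # percent
    "basic delayed neutron constants": 1.0,  # percent
}
total = math.sqrt(sum(v**2 for v in components.values()))
print(f"total relative uncertainty: {total:.2f} %")
```

With these numbers the total is sqrt(16 + 4 + 1) ≈ 4.58%, dominated by the largest component, which mirrors the paper's observation that the delayed neutron yield is the primary factor.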
On the discretization of linear fractional representations of LPV systems
Toth, R.; Lovera, M.; Heuberger, P.S.C.; Corno, M.; Hof, Van den P.M.J.
2012-01-01
Commonly, controllers for linear parameter-varying (LPV) systems are designed in continuous time using a linear fractional representation (LFR) of the plant. However, the resulting controllers are implemented on digital hardware. Furthermore, discrete-time LPV synthesis approaches require a
Improved linear least squares estimation using bounded data uncertainty
Ballal, Tarig
2015-04-01
This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear-minimum-mean-squared error estimator (LMMSE), when the elements of x are statistically white.
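The MSE gain from regularizing least squares at low SNR can be illustrated with a small Monte Carlo experiment. This is only a sketch: the regularization weight below is an oracle-style choice assumed known for illustration, whereas the BDU approach in the paper estimates it iteratively from a bound on the matrix perturbation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma, trials = 50, 10, 2.0, 500
x = rng.standard_normal(p)
lam = sigma**2 * p / (x @ x)   # shrinkage strength (assumed known here)

mse_ls = mse_reg = 0.0
for _ in range(trials):
    H = rng.standard_normal((n, p))
    y = H @ x + sigma * rng.standard_normal(n)
    x_ls = np.linalg.lstsq(H, y, rcond=None)[0]              # plain LS
    x_reg = np.linalg.solve(H.T @ H + lam * np.eye(p), H.T @ y)  # ridge
    mse_ls += np.sum((x_ls - x) ** 2) / trials
    mse_reg += np.sum((x_reg - x) ** 2) / trials

print(mse_ls, mse_reg)   # regularization reduces the average MSE
```

The estimator is biased but trades a small bias for a larger variance reduction, which is the effect the ILS/BDU construction exploits.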
Improved linear least squares estimation using bounded data uncertainty
Ballal, Tarig; Al-Naffouri, Tareq Y.
2015-01-01
This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear-minimum-mean-squared error estimator (LMMSE), when the elements of x are statistically white.
Fractional order differentiation by integration: An application to fractional linear systems
Liu, Dayan
2013-02-04
In this article, we propose a robust method to compute the output of a fractional linear system defined through a linear fractional differential equation (FDE) with time-varying coefficients, where the input can be noisy. We first introduce an estimator of the fractional derivative of an unknown signal, defined by an integral formula obtained by calculating the fractional derivative of a truncated Jacobi polynomial series expansion. We then approximate the FDE by applying this formal algebraic integral estimator to each fractional derivative. Consequently, the fractional derivatives of the solution are applied to the Jacobi polynomials used, and we need to identify the unknown coefficients of the truncated series expansion of the solution. The modulating functions method is used to estimate these coefficients by solving a linear system derived from the approximated FDE and some initial conditions. A numerical result is given to confirm the reliability of the proposed method. © 2013 IFAC.
Bayesian uncertainty quantification in linear models for diffusion MRI.
Sjölund, Jens; Eklund, Anders; Özarslan, Evren; Herberthson, Magnus; Bånkestad, Maria; Knutsson, Hans
2018-03-29
Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least-squares. However, such approaches lack any notion of uncertainty, which could be valuable in e.g. group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed-form. We simulated measurements from single- and double-tensor models where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation to probabilistic models, capable of accurate uncertainty quantification. Copyright © 2018 Elsevier Inc. All rights reserved.
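The key ingredient in the abstract above, the closed-form posterior of a Gaussian-prior linear model, can be sketched in a few lines. This is a generic conjugate-Gaussian example with made-up data, not the paper's dMRI models; the noise and prior scales are assumed known:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma, tau = 100, 5, 0.5, 1.0   # noise std and prior std (assumed known)
X = rng.standard_normal((n, p))
w_true = rng.standard_normal(p)
y = X @ w_true + sigma * rng.standard_normal(n)

# Conjugate Gaussian model: w ~ N(0, tau^2 I), y | w ~ N(X w, sigma^2 I).
# The posterior over w is Gaussian with closed-form mean and covariance.
S = np.linalg.inv(X.T @ X / sigma**2 + np.eye(p) / tau**2)
mu = S @ X.T @ y / sigma**2

# Any affine quantity a.w then also has a closed-form posterior,
# which is what enables per-quantity uncertainty maps.
a = np.ones(p)
q_mean, q_std = a @ mu, np.sqrt(a @ S @ a)
print(q_mean, q_std)
```

The posterior mean coincides with ridge-regularized least squares, which is why existing least-squares fits convert to probabilistic ones so directly.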
Linear minimax estimation for random vectors with parametric uncertainty
Bitar, E
2010-06-01
In this paper, we take a minimax approach to the problem of computing a worst-case linear mean squared error (MSE) estimate of X given Y , where X and Y are jointly distributed random vectors with parametric uncertainty in their distribution. We consider two uncertainty models, PA and PB. Model PA represents X and Y as jointly Gaussian whose covariance matrix Λ belongs to the convex hull of a set of m known covariance matrices. Model PB characterizes X and Y as jointly distributed according to a Gaussian mixture model with m known zero-mean components, but unknown component weights. We show: (a) the linear minimax estimator computed under model PA is identical to that computed under model PB when the vertices of the uncertain covariance set in PA are the same as the component covariances in model PB, and (b) the problem of computing the linear minimax estimator under either model reduces to a semidefinite program (SDP). We also consider the dynamic situation where x(t) and y(t) evolve according to a discrete-time LTI state space model driven by white noise, the statistics of which is modeled by PA and PB as before. We derive a recursive linear minimax filter for x(t) given y(t).
SLFP: a stochastic linear fractional programming approach for sustainable waste management.
Zhu, H; Huang, G H
2011-12-01
A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk. Copyright © 2011 Elsevier Ltd. All rights reserved.
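The chance-constrained ingredient of SLFP has a simple deterministic equivalent when the random term has a known distribution: a constraint Pr(a·x ≤ B) ≥ p with Gaussian B tightens the right-hand side to a quantile of B. A minimal sketch with illustrative numbers (assuming SciPy; not the case-study data):

```python
from scipy.stats import norm

# Chance constraint  Pr(a.x <= B) >= p  with B ~ N(mu, sd^2) becomes the
# deterministic constraint  a.x <= F_B^{-1}(1 - p).  Illustrative values:
mu, sd = 100.0, 10.0
for p in (0.90, 0.95, 0.99):
    b_det = norm.ppf(1.0 - p, loc=mu, scale=sd)
    print(p, round(b_det, 2))
```

Raising the required reliability p shrinks the admissible right-hand side, which is exactly the system-efficiency vs. system-failure-risk tradeoff the abstract describes.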
Linear fractional diffusion-wave equation for scientists and engineers
Povstenko, Yuriy
2015-01-01
This book systematically presents solutions to the linear time-fractional diffusion-wave equation. It introduces the integral transform technique and discusses the properties of the Mittag-Leffler, Wright, and Mainardi functions that appear in the solutions. The time-nonlocal dependence between the flux and the gradient of the transported quantity with the “long-tail” power kernel results in the time-fractional diffusion-wave equation with the Caputo fractional derivative. Time-nonlocal generalizations of classical Fourier’s, Fick’s and Darcy’s laws are considered and different kinds of boundary conditions for this equation are discussed (Dirichlet, Neumann, Robin, perfect contact). The book provides solutions to the fractional diffusion-wave equation with one, two and three space variables in Cartesian, cylindrical and spherical coordinates. The respective sections of the book can be used for university courses on fractional calculus, heat and mass transfer, transport processes in porous media and ...
A Solution to the Fundamental Linear Fractional Order Differential Equation
Hartley, Tom T.; Lorenzo, Carl F.
1998-01-01
This paper provides a solution to the fundamental linear fractional order differential equation, namely, _c d_t^q x(t) + a x(t) = b u(t). The impulse response solution is shown to be a series, named the F-function, which generalizes the normal exponential function. The F-function provides the basis for a qth order "fractional pole". Complex plane behavior is elucidated and a simple example, the inductor-terminated semi-infinite lossy line, is used to demonstrate the theory.
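The F-function is commonly written as F_q[a, t] = t^(q-1) Σ_{n≥0} a^n t^(nq) / Γ(nq + q), which reduces to the ordinary exponential e^{at} at q = 1. A minimal sketch of its truncated-series evaluation (a naive implementation for illustration, not the paper's code):

```python
import math

def f_function(a, t, q, terms=60):
    """F_q[a, t] = t^(q-1) * sum_{n>=0} a^n * t^(n*q) / Gamma(n*q + q).
    Naive truncated series; adequate for small |a| * t^q."""
    s = 0.0
    for n in range(terms):
        s += a**n * t**(n * q) / math.gamma(n * q + q)
    return t**(q - 1) * s

# Sanity check against the ordinary exponential at q = 1:
print(f_function(-0.5, 2.0, 1.0), math.exp(-0.5 * 2.0))
```

The impulse response of the equation above is then b·F_q[-a, t], playing the role the decaying exponential plays for a first-order pole.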
Influence of the void fraction in the linear reactivity model
Castillo, J.A.; Ramirez, J.R.; Alonso, G.
2003-01-01
The linear reactivity model allows multicycle analysis of pressurized water reactors in a simple and quick way. In boiling water reactors, however, the void fraction varies axially from 0% at the bottom of the fuel assemblies to approximately 70% at their exit. It is therefore very important to determine the average void fraction during different stages of reactor operation in order to predict assembly burnup appropriately from the slope of the linear reactivity model. In this work, the power profile is followed through different burnup steps of a typical operation cycle of a boiling water reactor. From these profiles, an algorithm is built that determines the void profile and thus its average value. The results are compared against those reported by the CM-PRESTO code, which uses another method for this calculation. Finally, the range of the average void fraction during a typical cycle is determined, and the impact that using this value would have on the prediction of the reactivity produced by the fuel assemblies is estimated. (Author)
Measuring the Higgs branching fraction into two photons at future linear e+e- colliders
Boos, E.; Schreiber, H.J.; Shanidze, R.
2001-01-01
We examine the prospects for a measurement of the branching fraction of the γγ decay mode of a Standard Model-like Higgs boson with a mass of 120 GeV/c² at the future TESLA linear e⁺e⁻ collider, assuming an integrated luminosity of 1 ab⁻¹ and centre-of-mass energies of 350 GeV and 500 GeV. A relative uncertainty on BF(H→γγ) of 16% can be achieved in unpolarised e⁺e⁻ collisions at √s = 500 GeV, while for √s = 350 GeV the expected precision is slightly poorer. With appropriate initial-state polarisations the uncertainty can be improved to 10%. If this measurement is combined with a measurement of the total Higgs width, a precision of 10% on the Higgs boson partial width for the γγ decay mode appears feasible. (orig.)
Non-linear Calibration Leads to Improved Correspondence between Uncertainties
Andersen, Jens Enevold Thaulov
2007-01-01
limit theorem, an excellent correspondence was obtained between predicted uncertainties and measured uncertainties. To validate the method, flame atomic absorption spectrometry (FAAS) experiments for the analysis of Co and Pt were applied, together with experiments of electrothermal atomic absorption…
Uncertainty relations, zero point energy and the linear canonical group
Sudarshan, E. C. G.
1993-01-01
The close relationship between the zero point energy, the uncertainty relations, coherent states, squeezed states, and correlated states for one mode is investigated. This group-theoretic perspective enables the parametrization and identification of their multimode generalization. In particular the generalized Schroedinger-Robertson uncertainty relations are analyzed. An elementary method of determining the canonical structure of the generalized correlated states is presented.
Results of fractionated stereotactic radiotherapy with linear accelerator
Aoki, Masahiko; Watanabe, Sadao (Aomori Prefectural Central Hospital, Japan); Mariya, Yasushi; et al.
1997-03-01
Many clinical data on stereotactic radiotherapy (SRT) have been reported; however, standard fractionation schedules have not been established. In this paper, our clinical results with SRT delivered as 3 fractions of 10 Gy are reported. Between February 1992 and March 1995, we treated 41 patients with 7 arteriovenous malformations (AVMs) and 41 intracranial tumors using a stereotactic technique implemented on a standard 10 MV X-ray linear accelerator. The average age was 47.4 years (range 3-80 years) and the average follow-up time was 16.7 months (range 3.5-46.1 months). Patients received 3 fractions of 10 Gy over 3 days, delivered by multiple narrow arc beams under 3 cm in width and length. A three-piece handmade shell was used for head fixation without any anesthetic procedures. A three-dimensional treatment planning system (Focus) was used for the dose calculation. All patients received at least one follow-up radiographic study and one clinical examination. In four of the 7 patients with AVM the nidus became smaller; 9 of the 21 patients with benign intracranial tumors and 9 of the 13 patients with malignant intracranial tumors showed complete or partial response to the therapy. In 14 patients, disease was stable or unevaluable owing to short follow-up. In 5 patients (3 with astrocytoma, 1 each with meningioma and craniopharyngioma), disease was progressive. Only 1 patient, with falx meningioma, had a minor complication due to symptomatic brain edema around the tumor. Although further evaluation of target control (i.e. tumor and nidus) and late normal tissue damage is needed, these preliminary clinical results indicate that SRT with our methods is safe and effective. (author)
Linear minimax estimation for random vectors with parametric uncertainty
Bitar, E; Baeyens, E; Packard, A; Poolla, K
2010-01-01
consider two uncertainty models, PA and PB. Model PA represents X and Y as jointly Gaussian whose covariance matrix Λ belongs to the convex hull of a set of m known covariance matrices. Model PB characterizes X and Y as jointly distributed according to a
Song, William; Battista, Jerry; Van Dyk, Jake
2004-01-01
The convolution method can be used to model the effect of random geometric uncertainties on planned dose distributions used in radiation treatment planning. This is effectively done by linearly adding infinitesimally small doses, each with a particular geometric offset, over an assumed infinite number of fractions. However, this process inherently ignores the radiobiological dose-per-fraction effect, since only the summed physical dose distribution is generated. The resultant potential error in predicted radiobiological outcome [quantified in this work with tumor control probability (TCP), equivalent uniform dose (EUD), normal tissue complication probability (NTCP), and generalized equivalent uniform dose (gEUD)] has yet to be thoroughly quantified. In this work, the results of a Monte Carlo simulation of geometric displacements are compared to those of the convolution method for random geometric uncertainties of 0, 1, 2, 3, 4, and 5 mm (standard deviation). α/β ratios of 0.8, 1.5, 3, 5, and 10 Gy are used for the clinical target volume (CTV) to represent the range of radiation responses of different tumors, whereas a single α/β ratio of 3 Gy is used to represent all organs at risk (OAR). The analysis is performed on a four-field prostate treatment plan with 18 MV x-rays. The fraction numbers are varied from 1-50, with isoeffective adjustments of the corresponding dose-per-fraction to maintain a constant tumor control, using the linear-quadratic cell survival model. The average differences in TCP and EUD of the target, and in NTCP and gEUD of the OAR, calculated with the convolution and Monte Carlo methods reduced asymptotically as the total fraction number increased, with the differences reaching negligible levels beyond a treatment fraction number of ≥ 20. The convolution method generally overestimates the radiobiological indices for the target volume and underestimates those for the OAR, as compared to the Monte Carlo method. These effects are interconnected and attributed
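The convolution step described above can be illustrated in one dimension: the expected dose over infinitely many fractions is the static dose profile convolved with the displacement distribution. A minimal sketch with made-up geometry (not the paper's four-field prostate plan):

```python
import numpy as np

# Static 1-D dose profile: 60 Gy across a 40 mm target, zero elsewhere.
x = np.linspace(-50.0, 50.0, 1001)            # position (mm), 0.1 mm grid
dose = np.where(np.abs(x) <= 20.0, 60.0, 0.0)

# Gaussian kernel for the random set-up error (sigma = 3 mm), normalized
# so that the total dose is preserved by the blurring.
sigma = 3.0
kernel = np.exp(-(x**2) / (2.0 * sigma**2))
kernel /= kernel.sum()

# Expected dose over an infinite number of fractions = convolution of the
# static dose with the displacement distribution. Note that this blurs only
# the physical dose; the dose-per-fraction effect is lost, which is exactly
# the limitation the abstract quantifies.
expected = np.convolve(dose, kernel, mode="same")
print(expected.max())   # slightly below 60 Gy; the penumbra is broadened
```

The Monte Carlo alternative instead samples a finite number of shifted fractions and accumulates biological effect per fraction, which is why the two methods differ at small fraction numbers.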
The intelligence of dual simplex method to solve linear fractional fuzzy transportation problem.
Narayanamoorthy, S; Kalyani, S
2015-01-01
An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In the proposed approach, the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. The two linear fuzzy transportation problems are solved by the dual simplex method, and from their solutions the optimal solution of the fractional fuzzy transportation problem is obtained. The proposed method is explained in detail with an example.
The Intelligence of Dual Simplex Method to Solve Linear Fractional Fuzzy Transportation Problem
S. Narayanamoorthy
2015-01-01
An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In the proposed approach, the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. The two linear fuzzy transportation problems are solved by the dual simplex method, and from their solutions the optimal solution of the fractional fuzzy transportation problem is obtained. The proposed method is explained in detail with an example.
Planning under uncertainty solving large-scale stochastic linear programs
Infanger, G. (Stanford Univ., CA, United States, Dept. of Operations Research; Technische Univ. Vienna, Austria, Inst. fuer Energiewirtschaft)
1992-12-01
For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but up to recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
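The two-stage recourse structure described above can be made concrete with a tiny sample-average (Monte Carlo scenario) example: a newsvendor-style LP where the first-stage order x hedges against sampled demand scenarios. This is a toy sketch assuming SciPy, far from the decomposition/importance-sampling machinery of the paper:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
S = 200
demand = rng.uniform(50.0, 150.0, S)   # sampled demand scenarios

c, q = 1.0, 3.0                        # order cost, per-unit shortage penalty
# Extensive form over variables [x, y_1..y_S]:
#   minimize c*x + (q/S) * sum_s y_s   with  y_s >= demand_s - x,  all vars >= 0
obj = np.concatenate([[c], np.full(S, q / S)])
A_ub = np.hstack([-np.ones((S, 1)), -np.eye(S)])   # -x - y_s <= -d_s
res = linprog(obj, A_ub=A_ub, b_ub=-demand,
              bounds=[(0, None)] * (S + 1))
x_opt = res.x[0]
# Theory: the optimal x is the (1 - c/q) quantile of demand, here the 2/3 quantile.
print(x_opt, np.quantile(demand, 2.0 / 3.0))
```

The deterministic model (ordering the mean demand) would ignore the shortage penalty asymmetry; the stochastic solution hedges by ordering up to a demand quantile, which is the "hedging against contingencies" the abstract refers to.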
Ma, X.B., E-mail: maxb@ncepu.edu.cn; Qiu, R.M.; Chen, Y.X.
2017-02-15
Uncertainties in fission fractions are essential for understanding antineutrino flux predictions in reactor antineutrino experiments. A new Monte Carlo-based method to evaluate the covariance coefficients between isotopes is proposed. The covariance coefficients are found to vary with reactor burnup and may change from positive to negative because of balance effects in fissioning; for example, between ²³⁵U and ²³⁹Pu, the covariance coefficient changes from 0.15 to −0.13. Using the equation relating fission fraction and atomic density, consistent uncertainties in the fission fractions and the covariance matrix were obtained. The antineutrino flux uncertainty is 0.55%, which does not vary with reactor burnup; the new value is about 8.3% smaller. - Highlights: • The covariance coefficients between isotopes may change sign with reactor burnup because of two opposite effects. • The relation between fission fraction uncertainty and atomic density is studied for the first time. • A new Monte Carlo-based method of evaluating the covariance coefficients between isotopes is proposed.
Polat, Buelent; Guenther, Iris; Wilbert, Juergen; Goebel, Joachim; Sweeney, Reinhart A.; Flentje, Michael; Guckenberger, Matthias
2008-01-01
To evaluate intra-fractional uncertainties during intensity-modulated radiotherapy (IMRT) of prostate cancer. During IMRT of 21 consecutive patients, kilovolt (kV) cone-beam computed tomography (CBCT) images were acquired prior to and immediately after treatment: a total of 252 treatment fractions with 504 CBCT studies were the basis of this analysis. The prostate position in the anterior-posterior (AP) direction was determined using contour matching; patient set-up based on the pelvic bony anatomy was evaluated using automatic image registration. Internal variability of the prostate position was the difference between the absolute prostate and patient position errors. Intra-fractional changes of prostate position, patient position, rectal distension in the AP direction and bladder volume were analyzed. With a median treatment time of 16 min, intra-fractional drifts of the prostate were > 5 mm in 12% of all fractions, and a margin of 6 mm was calculated for compensation of this uncertainty. Mobility of the prostate was independent of the bony anatomy, with poor correlation between absolute prostate motion and motion of the bony anatomy (R² = 0.24). A systematic increase of bladder filling by 41 cm³ on average was observed; however, these changes did not influence the prostate position. Small variations of the prostate position occurred independently of intra-fractional changes of the rectal distension; a weak correlation between large internal prostate motion and changes of the rectal volume was observed (R² = 0.55). Clinically significant intra-fractional changes of the prostate position were observed, and margins of 6 mm were calculated for this intra-fractional uncertainty. Repeated or continuous verification of the prostate position may allow further margin reduction.
Propagation of registration uncertainty during multi-fraction cervical cancer brachytherapy
Amir-Khalili, A.; Hamarneh, G.; Zakariaee, R.; Spadinger, I.; Abugharbieh, R.
2017-10-01
Multi-fraction cervical cancer brachytherapy is a form of image-guided radiotherapy that heavily relies on 3D imaging during treatment planning, delivery, and quality control. In this context, deformable image registration can increase the accuracy of dosimetric evaluations, provided that one can account for the uncertainties associated with the registration process. To enable such capability, we propose a mathematical framework that first estimates the registration uncertainty and subsequently propagates the effects of the computed uncertainties from the registration stage through to the visualizations, organ segmentations, and dosimetric evaluations. To ensure the practicality of our proposed framework in real world image-guided radiotherapy contexts, we implemented our technique via a computationally efficient and generalizable algorithm that is compatible with existing deformable image registration software. In our clinical context of fractionated cervical cancer brachytherapy, we perform a retrospective analysis on 37 patients and present evidence that our proposed methodology for computing and propagating registration uncertainties may be beneficial during therapy planning and quality control. Specifically, we quantify and visualize the influence of registration uncertainty on dosimetric analysis during the computation of the total accumulated radiation dose on the bladder wall. We further show how registration uncertainty may be leveraged into enhanced visualizations that depict the quality of the registration and highlight potential deviations from the treatment plan prior to the delivery of radiation treatment. Finally, we show that we can improve the transfer of delineated volumetric organ segmentation labels from one fraction to the next by encoding the computed registration uncertainties into the segmentation labels.
Tilly, David; Tilly, Nina; Ahnesjö, Anders
2013-01-01
Calculation of accumulated dose in fractionated radiotherapy based on spatial mapping of the dose points generally requires deformable image registration (DIR). The accuracy of the accumulated dose thus depends heavily on the DIR quality, which motivates investigating how the registration uncertainty influences dose planning objectives and treatment outcome predictions. A framework was developed in which the dose mapping can be associated with a variable known uncertainty to simulate the DIR uncertainties of a clinical workflow. The framework enabled us to study the dependence of dose planning metrics, and the predicted treatment outcome, on the DIR uncertainty. The additional planning margin needed to compensate for the dose mapping uncertainties can also be determined. We applied the simulation framework to a hypofractionated proton treatment of the prostate using two different scanning beam spot sizes to also study the dose mapping sensitivity to penumbra widths. The planning parameter most sensitive to the DIR uncertainty was found to be the target D95. We found that the registration mean absolute error needs to be ≤ 0.20 cm to obtain an uncertainty better than 3% of the calculated D95 for intermediate-sized penumbras. Using larger margins in constructing the PTV from the CTV relaxed the registration uncertainty requirements, at the cost of increased dose burdens to the surrounding organs at risk. The DIR uncertainty requirements should be considered in an adaptive radiotherapy workflow since this uncertainty can have a significant impact on the accumulated dose. The simulation framework enabled quantification of the accuracy requirement for DIR algorithms to provide satisfactory clinical accuracy in the accumulated dose.
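A 1-D toy version of such a simulation framework can show how a dose-mapping uncertainty degrades the target D95; the flat field, linear penumbras, target extent, and Gaussian mapping error below are all illustrative assumptions:

```python
import numpy as np

def d95_under_dir_error(sigma_cm, n_trials=500, seed=2):
    """Toy 1-D estimate of how a Gaussian dose-mapping (DIR) error of
    standard deviation sigma_cm lowers the target D95. The 'target'
    occupies [-2, 2] cm inside a flat 60 Gy field with 1 cm penumbras."""
    x = np.linspace(-5.0, 5.0, 1001)                         # position, cm
    dose = 60.0 * np.clip(3.0 - np.abs(x), 0.0, 1.0)         # Gy
    target = np.abs(x) <= 2.0
    rng = np.random.default_rng(seed)
    d95 = []
    for _ in range(n_trials):
        shift = rng.normal(0.0, sigma_cm)                    # mapping error
        mapped = np.interp(x + shift, x, dose)               # dose looked up
        d95.append(np.percentile(mapped[target], 5))         # at wrong point
    return float(np.mean(d95))

baseline = d95_under_dir_error(0.0)      # perfect registration
perturbed = d95_under_dir_error(0.5)     # 5 mm mapping uncertainty
```

With perfect registration the target D95 equals the prescription; once the mapping error is comparable to the penumbra width, the cold target edge pulls D95 down, mirroring the paper's finding that D95 is the most sensitive planning metric.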
Belkhatir, Zehor
2015-11-05
This paper deals with the joint estimation of the unknown input and the fractional differentiation orders of a linear fractional order system. A two-stage algorithm combining the modulating functions method with a first-order Newton method is applied to solve this estimation problem. First, the modulating functions approach is used to estimate the unknown input for given fractional differentiation orders. Then, the method is combined with a first-order Newton technique to identify the fractional orders jointly with the input. To show the efficiency of the proposed method, numerical examples are presented, in both noise-free and noisy cases, illustrating the estimation of the neural activity, considered as the input of a fractional model of the neurovascular coupling, along with the fractional differentiation orders.
Analysis of fractional non-linear diffusion behaviors based on Adomian polynomials
Wu Guo-Cheng
2017-01-01
A time-fractional non-linear diffusion equation of two orders is considered to investigate strong non-linearity through porous media. An equivalent integral equation is established, and Adomian polynomials are adopted to linearize the non-linear terms. With a Taylor expansion of fractional order, recurrence formulae are proposed and novel numerical solutions are obtained that depict the diffusion behaviors more accurately. The results show that the method is suitable for numerical simulation of multi-order fractional diffusion equations.
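The Adomian polynomials used to linearize the non-linear term follow a standard generating formula, A_n = (1/n!) dⁿ/dλⁿ N(Σ_k u_k λ^k) at λ = 0; a small symbolic sketch for the illustrative nonlinearity N(u) = u² (not the paper's specific equation):

```python
import sympy as sp

def adomian_polynomials(N, n_terms=4):
    """Adomian polynomials A_0..A_{n_terms-1} for a nonlinearity N(u):
    A_n = (1/n!) * d^n/dlambda^n N(sum_k u_k lambda^k) |_{lambda=0}."""
    lam = sp.symbols('lambda')
    u = sp.symbols(f'u0:{n_terms}')
    series = sum(u[k] * lam**k for k in range(n_terms))
    return [sp.expand(sp.diff(N(series), lam, n).subs(lam, 0)
                      / sp.factorial(n))
            for n in range(n_terms)]

# A[0] = u0**2, A[1] = 2*u0*u1, A[2] = u1**2 + 2*u0*u2, ...
A = adomian_polynomials(lambda v: v**2)
```

Each A_n depends only on u_0..u_n, which is what makes the recurrence for the solution components explicit.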
Stability Tests of Positive Fractional Continuous-time Linear Systems with Delays
Tadeusz Kaczorek
2013-06-01
Necessary and sufficient conditions for the asymptotic stability of positive fractional continuous-time linear systems with many delays are established. It is shown that: (1) the asymptotic stability of the positive fractional system is independent of its delays; (2) checking the asymptotic stability of positive fractional systems with delays can be reduced to checking the asymptotic stability of positive standard linear systems without delays.
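Result (2) suggests a simple delay-independent check; a sketch under the assumption that the reduced test amounts to verifying that the sum of the system matrix and all delay matrices is Hurwitz (the matrices below are illustrative):

```python
import numpy as np

def positive_system_stable(A_terms):
    """Delay-independent stability check suggested by the result above:
    sum the system matrix and all delay matrices and test whether the
    resulting (Metzler) matrix is Hurwitz, i.e. all eigenvalues satisfy
    Re(lambda) < 0."""
    A = np.sum(A_terms, axis=0)
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# system matrix plus one delay matrix of an illustrative positive system
stable = positive_system_stable([np.array([[-2.0, 1.0], [0.5, -3.0]]),
                                 np.array([[0.1, 0.2], [0.1, 0.1]])])
```

For a 2×2 Metzler matrix this is equivalent to a negative trace and positive determinant of the summed matrix, which the example satisfies.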
SU-E-T-429: Uncertainties of Cell Surviving Fractions Derived From Tumor-Volume Variation Curves
Chvetsov, A
2014-01-01
Purpose: To evaluate uncertainties of cell surviving fractions reconstructed from tumor-volume variation curves during radiation therapy, using sensitivity analysis based on linear perturbation theory. Methods: The time-dependent tumor-volume functions V(t) were calculated using a two-level cell population model, which is based on separating the entire tumor cell population into two subpopulations: oxygenated viable cells and lethally damaged cells. The sensitivity function is defined as S(t) = [δV(t)/V(t)]/[δx/x], where δV(t)/V(t) is the time-dependent relative variation of the volume V(t) and δx/x is the relative variation of the radiobiological parameter x. The sensitivity analysis was performed using the direct perturbation method, in which the radiobiological parameter x is changed by a certain error and the tumor volume is recalculated to evaluate the corresponding tumor-volume variation. Tumor-volume variation curves and sensitivity functions were computed for different values of the cell surviving fraction in the practically important interval S₂ = 0.1-0.7 using the two-level cell population model. Results: The sensitivity of tumor volume to the cell surviving fraction reached a relatively large value of 2.7 for S₂ = 0.7 and approached zero as S₂ approached zero. Assuming a systematic error of 3-4%, the relative error in S₂ is less than 20% in the range S₂ = 0.4-0.7. This result is important because large values of S₂, which are associated with poor treatment outcome, should be measured with relatively small uncertainties. For very small values S₂ < 0.3, the relative error can be larger than 20%; however, the absolute error does not increase significantly. Conclusion: Tumor-volume curves measured during radiotherapy can be used to evaluate the cell surviving fractions usually observed in radiation therapy with conventional fractionation.
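The direct perturbation method described can be sketched generically; the exponential volume model below is a stand-in for the two-level cell population model, purely for illustration:

```python
import math

def sensitivity(model, x, t, rel_step=0.03):
    """Direct-perturbation estimate of S(t) = [dV(t)/V(t)] / [dx/x]:
    perturb the radiobiological parameter x by a small relative step,
    recompute the tumor volume, and take the ratio of relative changes."""
    v0 = model(x, t)
    v1 = model(x * (1.0 + rel_step), t)
    return ((v1 - v0) / v0) / rel_step

# toy stand-in for the two-level model: V(t) = exp(-(1 - s2) * t),
# shrinking faster when the surviving fraction s2 is small
S = sensitivity(lambda s2, t: math.exp(-(1.0 - s2) * t), 0.5, 2.0)
```

For this toy model the analytic sensitivity is s2·t, so the finite-difference value at s2 = 0.5, t = 2 should sit near 1, which the assertion below checks.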
Tunjo Perić
2017-01-01
This paper presents and analyzes the applicability of three linearization techniques for solving multi-objective linear fractional programming problems using the goal programming method. The three linearization techniques are: (1) Taylor's polynomial linearization approximation, (2) the method of variable change, and (3) a modification of the method of variable change proposed in [20]. All three linearization techniques are presented and analyzed in two variants: (a) using the optimal values of the objective functions as the decision makers' aspirations, and (b) with the aspirations given by the decision makers. As criteria for the analysis we use the efficiency of the obtained solutions and the difficulties the analyst encounters in preparing the linearization models. To analyze the applicability of the linearization techniques incorporated in the linear goal programming method, we use an example of a financial structure optimization problem.
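Technique (2), the method of variable change, is commonly realized for a single-ratio linear fractional program via the Charnes-Cooper transformation; a sketch (the two-variable example and the `solve_lfp` name are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def solve_lfp(c, alpha, d, beta, A, b):
    """Charnes-Cooper change of variables: maximize
    (c@x + alpha) / (d@x + beta)  over {A@x <= b, x >= 0}
    by solving the equivalent LP in (y, t), where y = t*x and
    t = 1/(d@x + beta) > 0."""
    n = len(c)
    obj = -np.append(c, alpha)                  # linprog minimizes
    A_ub = np.hstack((A, -b.reshape(-1, 1)))    # A y - b t <= 0
    A_eq = np.append(d, beta).reshape(1, -1)    # d y + beta t = 1
    res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(A.shape[0]),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (n + 1))
    y, t = res.x[:n], res.x[n]
    return y / t, -res.fun                      # recover x and the ratio

# maximize (2x1 + x2) / (x1 + x2 + 1) subject to x1 + x2 <= 4, x >= 0
x, ratio = solve_lfp(np.array([2.0, 1.0]), 0.0, np.array([1.0, 1.0]), 1.0,
                     np.array([[1.0, 1.0]]), np.array([4.0]))
```

The transformation is exact whenever the denominator is positive over the feasible set, which is the standard assumption for this variable change.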
Fractional hereditariness of lipid membranes: Instabilities and linearized evolution.
Deseri, L; Pollaci, P; Zingales, M; Dayal, K
2016-05-01
In this work, lipid ordering phase changes arising in planar membrane bilayers are investigated, accounting both for elasticity alone and for the effective viscoelastic response of such assemblies. The mechanical response of such membranes is studied by minimizing the Gibbs free energy, which penalizes only perturbations of the changes of areal stretch and their gradients (Deseri and Zurlo, 2013). As material instabilities arise whenever areal stretches characterizing homogeneous configurations lie inside the spinodal zone of the free energy density, bifurcations from such configurations are shown to occur as oscillatory perturbations of the in-plane displacement. Experimental observations (Espinosa et al., 2011) show a power-law in-plane viscous behavior of lipid structures, allowing for an effective viscoelastic behavior of lipid membranes that falls in the framework of Fractional Hereditariness. A suitable generalization of the variational principle invoked for the elasticity is applied in this case, and the corresponding Euler-Lagrange equation is found together with a set of boundary and initial conditions. Separation of variables shows how Fractional Hereditariness yields bifurcated modes with a larger number of spatial oscillations than the corresponding elastic analog. Indeed, the available range of areal stresses for material instabilities is found to increase with respect to the purely elastic case. Nevertheless, the time evolution of the perturbations solving the Euler-Lagrange equation exhibits time decay, and the large number of spatial oscillations slowly relaxes, thereby keeping the features of a long-tail type time response.
Soheil Salahshour
2015-02-01
In this paper, we apply the concept of Caputo's H-differentiability, constructed based on the generalized Hukuhara difference, to solve the fuzzy fractional differential equation (FFDE) with uncertainty. This is in contrast to conventional solutions that either require a quantity of fractional derivatives of the unknown solution at the initial point (Riemann-Liouville) or a solution with increasing length of its support (Hukuhara difference). Then, in order to solve the FFDE analytically, we introduce the fuzzy Laplace transform of the Caputo H-derivative. To the best of our knowledge, there is limited research devoted to analytical methods for solving the FFDE under fuzzy Caputo fractional differentiability. An analytical solution is presented to confirm the capability of the proposed method.
Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions
Belkhatir, Zehor; Laleg-Kirati, Taous-Meriem
2017-01-01
This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using polynomial modulating functions method and a suitable change of variables the problem of estimating
Adaptive robust fault-tolerant control for linear MIMO systems with unmatched uncertainties
Zhang, Kangkang; Jiang, Bin; Yan, Xing-Gang; Mao, Zehui
2017-10-01
In this paper, two novel fault-tolerant control design approaches are proposed for linear MIMO systems with actuator additive faults, multiplicative faults and unmatched uncertainties. For time-varying multiplicative and additive faults, new adaptive laws and additive compensation functions are proposed. A set of conditions is developed such that the unmatched uncertainties are compensated by the actuators in control. On the other hand, for unmatched uncertainties whose projection in the unmatched space is nonzero, additive functions are designed, based on a (vector) relative degree condition, to compensate for the uncertainties from the output channels in the presence of actuator faults. The developed fault-tolerant control schemes are applied to two aircraft systems to demonstrate the efficiency of the proposed approaches.
Belkhatir, Zehor; Laleg-Kirati, Taous-Meriem
2017-01-01
This paper proposes a two-stage estimation algorithm to solve the problem of joint estimation of the parameters and the fractional differentiation orders of a linear continuous-time fractional system with non-commensurate orders. The proposed algorithm combines the modulating functions and the first-order Newton methods. Sufficient conditions ensuring the convergence of the method are provided. An error analysis in the discrete case is performed. Moreover, the method is extended to the joint estimation of smooth unknown input and fractional differentiation orders. The performance of the proposed approach is illustrated with different numerical examples. Furthermore, a potential application of the algorithm is proposed which consists in the estimation of the differentiation orders of a fractional neurovascular model along with the neural activity considered as input for this model.
A goal programming procedure for solving fuzzy multiobjective fractional linear programming problems
Tunjo Perić
2014-12-01
This paper presents a modification of Pal, Moitra and Maulik's goal programming procedure for solving fuzzy multiobjective linear fractional programming problems. The proposed modification allows simpler solving of economic multiple objective fractional linear programming (MOFLP) problems, enabling the obtained solutions to express the preferences of the decision maker defined by the objective function weights. The proposed method is tested on a production planning example.
Christensen, Bent Jesper; Kruse, Robinson; Sibbertsen, Philipp
We consider hypothesis testing in a general linear time series regression framework when the possibly fractional order of integration of the error term is unknown. We show that the approach suggested by Vogelsang (1998a) for the case of integer integration does not apply to the case of fractional...
Asymptotic behavior of solutions of linear multi-order fractional differential equation systems
Diethelm, Kai; Siegmund, Stefan; Tuan, H. T.
2017-01-01
In this paper, we investigate some aspects of the qualitative theory for multi-order fractional differential equation systems. First, we obtain a fundamental result on the existence and uniqueness for multi-order fractional differential equation systems. Next, a representation of solutions of homogeneous linear multi-order fractional differential equation systems in series form is provided. Finally, we give characteristics regarding the asymptotic behavior of solutions to some classes of line...
Shao, Xingling; Wang, Honglun
2015-01-01
This paper investigates a novel compound control scheme that combines the advantages of trajectory linearization control (TLC) and alternative active disturbance rejection control (ADRC) for a hypersonic reentry vehicle (HRV) attitude tracking system with bounded uncertainties. Firstly, in order to overcome the actuator saturation problem, a nonlinear tracking differentiator (TD) is applied in the attitude loop to reduce control consumption. Then, linear extended state observers (LESO) are constructed to estimate the uncertainties acting on the LTV system in the attitude and angular rate loops. In addition, feedback linearization (FL) based controllers are designed using the estimates of uncertainties generated by LESO in each loop, which enable the tracking error of the closed-loop system in the presence of large uncertainties to converge asymptotically to a residual set of the origin. Finally, the compound controllers are derived by integrating the nominal controller for the open-loop nonlinear system with the FL-based controller. Comparisons and simulation results are presented to illustrate the effectiveness of the control strategy.
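A linear extended state observer of the kind mentioned can be sketched on a toy double-integrator plant with a constant disturbance; the bandwidth-parameterized gains 3ω, 3ω², ω³ are a common ADRC tuning choice, and all numbers here are illustrative rather than the paper's HRV model:

```python
import numpy as np

def leso_estimate(omega=20.0, f_true=2.0, dt=1e-3, T=2.0):
    """Linear extended state observer for the plant y'' = f + u (u = 0
    here): the state is augmented to (y, y', f), and the observer gains
    (3w, 3w^2, w^3) place all error poles at -omega. Returns the final
    disturbance estimate z3, which should approach f_true."""
    y, yd = 0.0, 0.0
    z = np.zeros(3)                       # estimates of (y, y', f)
    for _ in range(int(T / dt)):
        yd += dt * f_true                 # true plant, explicit Euler
        y += dt * yd
        e = y - z[0]                      # output estimation error
        z = z + dt * np.array([z[1] + 3 * omega * e,
                               z[2] + 3 * omega**2 * e,
                               omega**3 * e])
    return float(z[2])

fhat = leso_estimate()
```

Because the extended state treats the disturbance as an extra integrator, a constant disturbance is recovered with zero steady-state error; higher ω speeds convergence at the price of noise sensitivity.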
Linear-quadratic model underestimates sparing effect of small doses per fraction in rat spinal cord
Shun Wong, C.; Toronto University; Minkin, S.; Hill, R.P.; Toronto University
1993-01-01
The application of the linear-quadratic (LQ) model to describe iso-effective fractionation schedules for dose fraction sizes less than 2 Gy has been controversial. Experiments are described in which the effect of daily fractionated irradiation given with a wide range of fraction sizes was assessed in the rat cervical spinal cord. The first group of rats was given doses in 1, 2, 4, 8 and 40 fractions/day. The second group received 3 initial 'top-up' doses of 9 Gy given once daily, representing 3/4 tolerance, followed by doses in 1, 2, 10, 20, 30 and 40 fractions/day. The fractionated portion of the irradiation schedule therefore constituted only the final quarter of the tolerance dose. The endpoint of the experiments was paralysis of the forelimbs secondary to white matter necrosis. Direct analysis of data from experiments with full-course fractionation up to 40 fractions/day (25.0-1.98 Gy/fraction) indicated consistency with the LQ model, yielding an α/β value of 2.41 Gy. Analysis of data from experiments in which the 3 'top-up' doses were followed by up to 10 fractions (10.0-1.64 Gy/fraction) gave an α/β value of 3.41 Gy. However, data from 'top-up' experiments with 20, 30 and 40 fractions (1.60-0.55 Gy/fraction) were inconsistent with the LQ model and gave a very small α/β of 0.48 Gy. It is concluded that the LQ model based on data from large doses/fraction underestimates the sparing effect of small doses/fraction, provided sufficient time is allowed between fractions for repair of sublethal damage.
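The iso-effect arithmetic behind such α/β comparisons follows the standard LQ biologically effective dose formula, BED = n·d·(1 + d/(α/β)); a sketch using an α/β of 2.4 Gy close to the full-course value reported above (the schedules compared are illustrative, not from the experiment):

```python
def bed(n, d, alpha_beta):
    """Biologically effective dose of n fractions of d Gy under the
    linear-quadratic model: BED = n * d * (1 + d / (alpha/beta))."""
    return n * d * (1.0 + d / alpha_beta)

def isoeffective_fractions(target_bed, d, alpha_beta):
    """Number of fractions of size d Gy matching a target BED."""
    return target_bed / (d * (1.0 + d / alpha_beta))

# a 40 x 2 Gy course versus the iso-effective number of 8 Gy fractions
ref = bed(40, 2.0, 2.4)
n_large = isoeffective_fractions(ref, 8.0, 2.4)
```

The low α/β makes large fractions disproportionately damaging, which is why only a handful of 8 Gy fractions match a 40-fraction course; the paper's point is that this same formula overstates the damage (understates the sparing) once d drops well below 2 Gy.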
Estimation of the uncertainties associated with the linearity test of dose calibrators
Sousa, Carlos H.S.; Peixoto, Jose G.P.
2013-01-01
Activimeters (dose calibrators) determine the activity of radioactive samples and are validated by performance tests. This study determined the expanded uncertainties associated with the linearity test. Three dose calibrators and three sources of ⁹⁹ᵐTc were used for testing, following the protocol recommended by the IAEA, which accounts for the decay of the radioactive samples. The expanded uncertainties evaluated were not correlated with each other, and their analysis assumed a rectangular probability distribution. The results are also presented graphically as the normalized activity measured versus the conventional true value.
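The test's two ingredients, a decay-corrected normalized activity and a rectangular-distribution uncertainty, can be sketched as follows (the activities and half-width are illustrative assumptions; the 6.01 h half-life of ⁹⁹ᵐTc is standard):

```python
import math

def decay_corrected_ratio(measured_MBq, elapsed_h, a0_MBq, half_life_h=6.01):
    """Linearity test point: measured activity divided by the
    decay-predicted (conventional true) value A0 * 2**(-t / T_half)
    for a Tc-99m source of initial activity a0_MBq."""
    predicted = a0_MBq * 2.0 ** (-elapsed_h / half_life_h)
    return measured_MBq / predicted

def expanded_uncertainty(half_width, k=2.0):
    """Expanded uncertainty from a rectangular distribution of given
    half-width a: standard uncertainty u = a / sqrt(3), then U = k * u."""
    return k * half_width / math.sqrt(3.0)

r = decay_corrected_ratio(501.0, 6.01, 1000.0)   # one half-life later
U = expanded_uncertainty(0.5)
```

A perfectly linear instrument keeps the ratio at 1 over the whole decay; deviations of the ratio beyond the expanded uncertainty flag a linearity failure.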
Linear systems with unstructured multiplicative uncertainty: Modeling and robust stability analysis.
Radek Matušů
This article deals with continuous-time Linear Time-Invariant (LTI) Single-Input Single-Output (SISO) systems affected by unstructured multiplicative uncertainty. More specifically, its aim is to present an approach to the construction of uncertain models based on the appropriate selection of a nominal system and a weight function, and to apply the fundamentals of robust stability investigation for this class of systems. The initial theoretical parts are followed by three extensive illustrative examples in which first-order time-delay, second-order and third-order plants with parametric uncertainty are modeled as systems with unstructured multiplicative uncertainty; subsequently, the robust stability of selected feedback loops containing the constructed models and chosen controllers is analyzed and the obtained results are discussed.
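Robust stability under unstructured multiplicative uncertainty is typically checked with the small-gain condition |W(jω)T(jω)| < 1 over frequency; a numeric sketch in which the nominal complementary sensitivity T, the weight W, and the frequency grid are all illustrative assumptions:

```python
import numpy as np

def robustly_stable(W, T, freqs):
    """Small-gain test for unstructured multiplicative uncertainty:
    the closed loop is robustly stable iff |W(jw) * T(jw)| < 1 for all
    frequencies w, where T is the nominal complementary sensitivity and
    W the uncertainty weight."""
    s = 1j * freqs
    return bool(np.all(np.abs(W(s) * T(s)) < 1.0))

# nominal loop L(s) = 2/(s+1) gives T(s) = 2/(s+3);
# weight W(s) = 0.4*(s+1)/(s+10) grows toward high frequency
freqs = np.logspace(-2, 3, 500)
ok = robustly_stable(lambda s: 0.4 * (s + 1) / (s + 10),
                     lambda s: 2.0 / (s + 3), freqs)
```

A dense logarithmic grid stands in for the "for all ω" quantifier; in practice one also checks that the peak of |WT| is comfortably below 1, not just below it.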
Zhang, Langwen; Xie, Wei; Wang, Jingcheng
2017-11-01
In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation optimisation problem with linear matrix inequality constraints. An iterative online algorithm with adjustable maximum iteration is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.
Ren, Jingzheng; Dong, Liang; Sun, Lu; Goodsite, Michael Evan; Tan, Shiyu; Dong, Lichun
2015-01-01
The aim of this work was to develop a model for optimizing the life cycle cost of a biofuel supply chain under uncertainties. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the price of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming model was developed, and a method for solving interval linear programs was presented. An illustrative case was studied using the proposed model, and the results showed that the proposed model is feasible for designing biofuel supply chains under uncertainties.
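A minimal sketch of the interval-LP idea, solving the optimistic and pessimistic endpoint subproblems (the two-variable example and the particular pairing of cost and constraint endpoints are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np
from scipy.optimize import linprog

def interval_lp(c_lo, c_hi, A_ub, b_lo, b_hi):
    """Best/worst-case sketch for an interval LP  min c@x  s.t.
    A_ub@x <= b, x >= 0: the optimistic subproblem pairs the cheapest
    costs with the loosest right-hand sides (b_hi), the pessimistic one
    the dearest costs with the tightest right-hand sides (b_lo)."""
    best = linprog(c_lo, A_ub=A_ub, b_ub=b_hi)
    worst = linprog(c_hi, A_ub=A_ub, b_ub=b_lo)
    return best.fun, worst.fun

# minimize cost with interval unit costs [1, 1.5] and [2, 2.5],
# subject to x1 + x2 >= demand, demand in the interval [8, 10]
lo, hi = interval_lp(np.array([1.0, 2.0]), np.array([1.5, 2.5]),
                     np.array([[-1.0, -1.0]]),
                     np.array([-10.0]), np.array([-8.0]))
```

The optimal objective is then reported as the interval [lo, hi], telling the planner the range the life cycle cost can take over all parameter realizations.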
A Simplified Proof of Uncertainty Principle for Quaternion Linear Canonical Transform
Mawardi Bahri
2016-01-01
We provide a short and simple proof of an uncertainty principle associated with the quaternion linear canonical transform (QLCT) by considering the fundamental relationship between the QLCT and the quaternion Fourier transform (QFT). We show how this relation allows us to derive the inverse transform and the Parseval and Plancherel formulas associated with the QLCT. Some other properties of the QLCT are also studied.
A linear programming approach to characterizing norm bounded uncertainty from experimental data
Scheid, R. E.; Bayard, D. S.; Yam, Y.
1991-01-01
The linear programming spectral overbounding and factorization (LPSOF) algorithm, an algorithm for finding a minimum phase transfer function of specified order whose magnitude tightly overbounds a specified nonparametric function of frequency, is introduced. This method has direct application to transforming nonparametric uncertainty bounds (available from system identification experiments) into parametric representations required for modern robust control design software (i.e., a minimum-phase transfer function multiplied by a norm-bounded perturbation).
Underprediction of human skin erythema at low doses per fraction by the linear quadratic model
Hamilton, Christopher S.; Denham, James W.; O'Brien, Maree; Ostwald, Patricia; Kron, Tomas; Wright, Suzanne; Doerr, Wolfgang
1996-01-01
Background and purpose. The erythematous response of human skin to radiotherapy has proven useful for testing the predictions of the linear quadratic (LQ) model in terms of fractionation sensitivity and repair half-time. No formal investigation of the response of human skin to doses less than 2 Gy per fraction has occurred. This study aims to test the validity of the LQ model for human skin at doses ranging from 0.4 to 5.2 Gy per fraction. Materials and methods. Complete erythema reaction profiles were obtained using reflectance spectrophotometry in two patient populations: 65 patients treated palliatively with 5, 10, 12 and 20 daily treatment fractions (varying thicknesses of bolus, various body sites) and 52 patients undergoing prostatic irradiation for localised carcinoma of the prostate (no bolus, 30-32 fractions). Results and conclusions. Gender, age, site and prior sun exposure influence pre- and post-treatment erythema values independently of the administered dose. Out-of-field effects were also noted. The linear quadratic model significantly underpredicted peak erythema values at doses less than 1.5 Gy per fraction. This suggests that either the conventional linear quadratic model does not apply for low doses per fraction in human skin or that erythema is not exclusively initiated by radiation damage to the basal layer. The data are potentially explained by an induced repair model.
Ding, Da-Wei; Liu, Fang-Fang; Chen, Hui; Wang, Nian; Liang, Dong
2017-12-01
In this paper, a simple fractional-order delayed memristive chaotic system is proposed in order to control its chaotic behavior via a sliding mode control strategy. Firstly, we design a sliding mode control strategy for the fractional-order system with time delay to make the states of the system asymptotically stable. Then, we obtain theoretical results for the control method using the Lyapunov stability theorem, which guarantees the asymptotic stability of the non-commensurate order and commensurate order system, with and without uncertainty and an external disturbance. Finally, numerical simulations are given to verify that the proposed sliding mode control method can eliminate chaos and stabilize the fractional-order delayed memristive system in finite time.
The numerical solution of linear multi-term fractional differential equations: systems of equations
Edwards, John T.; Ford, Neville J.; Simpson, A. Charles
2002-11-01
In this paper, we show how the numerical approximation of the solution of a linear multi-term fractional differential equation can be calculated by reduction of the problem to a system of ordinary and fractional differential equations each of order at most unity. We begin by showing how our method applies to a simple class of problems and we give a convergence result. We solve the Bagley Torvik equation as an example. We show how the method can be applied to a general linear multi-term equation and give two further examples.
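The reduction to a system of equations of order at most the base order can be sketched as a companion form for commensurate orders, illustrated on the Bagley-Torvik equation mentioned above (the `companion_commensurate` helper is hypothetical, not the paper's code):

```python
import numpy as np

def companion_commensurate(coeffs):
    """Companion form of a linear multi-term FDE with commensurate
    orders: given coefficients [a_m, ..., a_1, a_0] of
        a_m D^{m q} y + ... + a_1 D^{q} y + a_0 y = f(t),
    return the matrix A such that  D^{q} X = A X + e_m f / a_m,
    where X = (y, D^q y, ..., D^{(m-1) q} y)."""
    a = np.asarray(coeffs, dtype=float)
    m = len(a) - 1
    A = np.zeros((m, m))
    A[:-1, 1:] = np.eye(m - 1)        # D^q x_i = x_{i+1}
    A[-1, :] = -a[:0:-1] / a[0]       # last row comes from the FDE itself
    return A

# Bagley-Torvik  y'' + D^{3/2} y + y = f  with base order q = 1/2:
# orders (2, 3/2, 1, 1/2, 0) carry coefficients (1, 1, 0, 0, 1)
A = companion_commensurate([1.0, 1.0, 0.0, 0.0, 1.0])
```

Each row of the system then involves only a single derivative of order q ≤ 1, which is the form the paper's numerical scheme operates on.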
Guo, Feng; Wang, Xue-Yuan; Zhu, Cheng-Yin; Cheng, Xiao-Feng; Zhang, Zheng-Yu; Huang, Xu-Hui
2017-12-01
The stochastic resonance of a fractional oscillator with a time-delayed kernel and quadratic trichotomous noise is investigated. Applying linear system theory and the Laplace transform, the signal power amplification (SPA) of the fractional oscillator is obtained. It is found that the SPA is a periodic function of the kernel delay time. A stochastic multi-resonance phenomenon appears in the SPA as a function of the driving frequency, the noise amplitude, and the fractional exponent. The non-monotonic dependence of the SPA on the system parameters is also discussed.
Sasaki, Takehito; Kamata, Rikisaburo; Urahashi, Shingo; Yamaguchi, Tetsuji.
1993-01-01
One hundred and sixty-nine cervical lymph node metastases from head and neck squamous cell carcinomas treated with either even-fractionation or uneven-fractionation regimens were analyzed in the present investigation. Logistic multivariate regression analysis indicated that type of fractionation (even vs. uneven), size of metastases, T value of primary tumors, and total dose are independent variables, out of 18 variables, that significantly influenced the rate of tumor clearance. The data, with statistical bias corrected by the regression equation, indicated that the uneven fractionation scheme significantly improved the rate of tumor clearance for the same size of metastases, total dose, and overall time compared to the even fractionation scheme. Further analysis by a linear-quadratic cell survival model indicated that the clinical improvement from uneven fractionation might not be explained entirely by a larger dose per fraction. It is suggested that tumor cells irradiated with an uneven fractionation regimen might repopulate more slowly, or might be either less hypoxic or redistributed into a more radiosensitive phase of the cell cycle than those irradiated with even fractionation. This conclusion is not definitive, but it is plausible pending the results of further investigation. (author)
Seo, Jongmin; Schiavazzi, Daniele; Marsden, Alison
2017-11-01
Cardiovascular simulations are increasingly used in clinical decision making, surgical planning, and disease diagnostics. Patient-specific modeling and simulation typically proceeds through a pipeline from anatomic model construction using medical image data to blood flow simulation and analysis. To provide confidence intervals on simulation predictions, we use an uncertainty quantification (UQ) framework to analyze the effects of numerous uncertainties that stem from clinical data acquisition, modeling, material properties, and boundary condition selection. However, UQ poses a computational challenge requiring multiple evaluations of the Navier-Stokes equations in complex 3-D models. To achieve efficiency in UQ problems with many function evaluations, we implement and compare a range of iterative linear solver and preconditioning techniques in our flow solver. We then discuss applications to patient-specific cardiovascular simulation and how the problem/boundary condition formulation in the solver affects the selection of the most efficient linear solver. Finally, we discuss performance improvements in the context of uncertainty propagation. Support from the National Institutes of Health (R01 EB018302) is greatly appreciated.
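The abstract does not specify which solvers were implemented; as a generic sketch of the iterative-solver-plus-preconditioner pairing it discusses, here is a Jacobi (diagonal) preconditioned conjugate gradient for a small symmetric positive-definite system (all names are illustrative):

```python
def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    # Conjugate gradient for SPD A x = b with a diagonal (Jacobi)
    # preconditioner supplied as the vector M_inv = 1 / diag(A)
    n = len(b)
    x = [0.0] * n
    r = list(b)
    z = [M_inv[i] * r[i] for i in range(n)]
    p = list(z)
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# 2x2 SPD example: exact solution is (1/11, 7/11)
sol = pcg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.25, 1.0 / 3.0])
```

In production flow solvers the same pattern appears with Krylov methods such as GMRES or BiCGStab and stronger preconditioners (ILU, multigrid); the preconditioner choice is what the boundary-condition formulation chiefly affects.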
Sadeghi, Mehdi; Mirshojaeian Hosseini, Hossein
2006-01-01
For many years, energy models have been used in developed and developing countries to satisfy different needs in energy planning. One of the major problems for energy planning, and consequently for energy models, is uncertainty, spread across the economic, political and legal dimensions of energy planning. Confronting uncertainty, energy planners have often used two well-known strategies. The first strategy is stochastic programming, in which energy system planners define different scenarios and apply an explicit probability of occurrence to each scenario. The second strategy is the Minimax Regret strategy, which minimizes the regrets of different decisions made in energy planning. Although these strategies have been used extensively, they cannot flexibly and effectively deal with the uncertainties caused by fuzziness. Fuzzy Linear Programming (FLP) is a strategy that can take fuzziness into account. This paper demonstrates the application of FLP to the optimization of the energy supply system in Iran, as a case study. The FLP model used comprises fuzzy coefficients for investment costs. It is found that FLP is an easy and flexible approach that can be a serious competitor to the other approaches to confronting uncertainty, i.e. the stochastic and Minimax Regret strategies. (author)
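The paper's FLP model is not reproduced here; a common building block, assumed for illustration only, is the alpha-cut of a triangular fuzzy coefficient, which converts fuzzy investment costs into crisp intervals at a chosen membership level:

```python
def alpha_cut(tri, alpha):
    # alpha-cut of a triangular fuzzy number tri = (left, mode, right):
    # the interval of values whose membership degree is at least alpha
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def fuzzy_cost_interval(unit_costs, plan, alpha):
    # Optimistic/pessimistic total cost of a fixed plan when each unit
    # cost is a triangular fuzzy number, evaluated at membership alpha
    lo = sum(alpha_cut(c, alpha)[0] * q for c, q in zip(unit_costs, plan))
    hi = sum(alpha_cut(c, alpha)[1] * q for c, q in zip(unit_costs, plan))
    return lo, hi

interval = fuzzy_cost_interval([(1.0, 2.0, 3.0), (2.0, 4.0, 6.0)], [1.0, 1.0], 0.5)
```

Solving the resulting interval LPs at several membership levels is one standard way FLP trades off optimism against robustness; the actual model in the paper may differ.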
Invited Review Article: Measurement uncertainty of linear phase-stepping algorithms
Hack, Erwin [EMPA, Laboratory Electronics/Metrology/Reliability, Ueberlandstrasse 129, CH-8600 Duebendorf (Switzerland); Burke, Jan [Australian Centre for Precision Optics, CSIRO (Commonwealth Scientific and Industrial Research Organisation) Materials Science and Engineering, P.O. Box 218, Lindfield, NSW 2070 (Australia)
2011-06-15
Phase retrieval techniques are widely used in optics, imaging and electronics. Originating in signal theory, they were introduced to interferometry around 1970. Over the years, many robust phase-stepping techniques have been developed that minimize specific experimental influence quantities such as phase step errors or higher harmonic components of the signal. However, optimizing a technique for a specific influence quantity can compromise its performance with regard to others. We present a consistent quantitative analysis of phase measurement uncertainty for the generalized linear phase stepping algorithm with nominally equal phase stepping angles, thereby reviewing and generalizing several results that have been reported in literature. All influence quantities are treated on equal footing, and correlations between them are described in a consistent way. For the special case of classical N-bucket algorithms, we present analytical formulae that describe the combined variance as a function of the phase angle values. For the general arctan algorithms, we derive expressions for the measurement uncertainty averaged over the full 2π range of phase angles. We also give an upper bound for the measurement uncertainty which can be expressed as being proportional to an algorithm-specific factor. Tabular compilations help the reader to quickly assess the uncertainties that are involved with his or her technique.
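For the classical N-bucket case with equal steps of 2π/N, the phase estimate itself takes a simple arctan form. A minimal sketch (a textbook estimator, not the paper's uncertainty analysis), valid for N ≥ 3 and a purely sinusoidal signal:

```python
import math

def n_bucket_phase(intensities):
    # Phase phi of I_k = A + B*cos(phi + 2*pi*k/N), recovered from
    # N >= 3 equally stepped samples via the synchronous-detection sums
    n = len(intensities)
    s = sum(I * math.sin(2 * math.pi * k / n) for k, I in enumerate(intensities))
    c = sum(I * math.cos(2 * math.pi * k / n) for k, I in enumerate(intensities))
    return math.atan2(-s, c)

# Synthetic 5-bucket signal with phase 0.7 rad
phi = 0.7
samples = [2.0 + math.cos(phi + 2 * math.pi * k / 5) for k in range(5)]
estimate = n_bucket_phase(samples)
```

The paper's contribution is the propagation of noise, step errors and harmonics through exactly this kind of estimator to a combined measurement uncertainty.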
S. M. Aithal
2018-01-01
Initial conditions of the working fluid (the air-fuel mixture within an engine cylinder), namely mixture composition and temperature, greatly affect the combustion characteristics and emissions of an engine. In particular, the percentage of residual gas fraction (RGF) in the engine cylinder can significantly alter the temperature and composition of the working fluid as compared with the air-fuel mixture inducted into the engine, thus affecting engine-out emissions. Accurate measurement of the RGF is cumbersome and expensive, making it hard to accurately characterize the initial mixture composition and temperature in any given engine cycle. This uncertainty can lead to challenges in accurately interpreting experimental emissions data and in implementing real-time control strategies. Quantifying the effects of the RGF can have important implications for the diagnostics and control of internal combustion engines. This paper reports on the use of a well-validated, two-zone quasi-dimensional model to compute the engine-out NO and CO emissions of a gasoline engine. The effect of varying the RGF on the emissions under lean, near-stoichiometric, and rich engine conditions was investigated. Numerical results show that small uncertainties (~2-4%) in the measured/computed values of the RGF can significantly affect the engine-out NO/CO emissions.
A uniform law for convergence to the local times of linear fractional stable motions
Duffy, James A.
2016-01-01
We provide a uniform law for the weak convergence of additive functionals of partial sum processes to the local times of linear fractional stable motions, in a setting sufficiently general for statistical applications. Our results are fundamental to the analysis of the global properties of nonparametric estimators of nonlinear statistical models that involve such processes as covariates.
Yolci Omeroglu, Perihan; Ambrus, Árpad; Boyacioglu, Dilek
2018-03-28
Determination of pesticide residues is based on calibration curves constructed for each batch of analysis. Calibration standard solutions are prepared from a known amount of reference material at different concentration levels covering the concentration range of the analyte in the analysed samples. In the scope of this study, the applicability of both ordinary linear and weighted linear regression (OLR and WLR) for pesticide residue analysis was investigated. We used 782 multipoint calibration curves obtained for 72 different analytical batches with high-pressure liquid chromatography equipped with an ultraviolet detector, and gas chromatography with electron capture, nitrogen-phosphorus or mass spectrometric detectors. Quality criteria for the linear curves, including the regression coefficient, the standard deviation of relative residuals, and the deviation of back-calculated concentrations, were calculated for both the WLR and OLR methods. Moreover, the relative uncertainty of the predicted analyte concentration was estimated for both methods. It was concluded that calibration curves based on WLR comply with all the quality criteria set by international guidelines, compared to those calculated with OLR; that is, the data fit well under WLR for pesticide residue analysis. It was estimated that, regardless of the actual concentration range of the calibration, relative uncertainty at the lowest calibrated level ranged between 0.3% and 113.7% for OLR and between 0.2% and 22.1% for WLR. At or above 1/3 of the calibrated range, uncertainty of the calibration curve ranged between 0.1% and 16.3% for OLR and 0% and 12.2% for WLR, and therefore the two methods gave comparable results.
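The WLR fit referred to above is ordinary weighted least squares with weights typically taken as the reciprocal variances of the calibration levels; a minimal sketch of the closed-form straight-line case:

```python
def weighted_linfit(x, y, w):
    # Weighted least-squares line y ~ a + b*x; w_i is typically the
    # reciprocal variance of the response at calibration level i
    sw = sum(w)
    sx = sum(wi * xi for wi, xi in zip(w, x))
    sy = sum(wi * yi for wi, yi in zip(w, y))
    sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    d = sw * sxx - sx * sx
    return (sxx * sy - sx * sxy) / d, (sw * sxy - sx * sy) / d

# An exact line y = 2 + 3x is recovered for any positive weights
a, b = weighted_linfit([1.0, 2.0, 3.0, 4.0],
                       [5.0, 8.0, 11.0, 14.0],
                       [1.0, 0.5, 0.2, 0.1])
```

The practical difference from OLR (all weights equal) appears when responses at low concentrations have much smaller variance than at high concentrations: weighting stops the high levels from dominating the fit, which is why the low-level uncertainty shrinks so markedly in the study's figures.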
Completeness, special functions and uncertainty principles over q-linear grids
Abreu, LuIs Daniel
2006-01-01
We derive completeness criteria for sequences of functions of the form f(xλ_n), where λ_n is the nth zero of a suitably chosen entire function. Using these criteria, we construct complete nonorthogonal systems of Fourier-Bessel functions and their q-analogues, as well as other complete sets of q-special functions. We discuss connections with uncertainty principles over q-linear grids, and the completeness of certain sets of q-Bessel functions is used to prove that, if a function f and its q-Hankel transform both vanish at the points {q^(-n)}, n = 1, 2, ..., with 0 < q < 1, then f vanishes on the whole q-linear grid {q^n}, n ranging over all integers.
Uncertainties in linear energy transfer spectra measured with track-etched detectors in space
Pachnerová Brabcová, Kateřina; Ambrožová, Iva; Kolísková, Zlata; Malušek, Alexandr
2013-01-01
Vol. 713, 11 June 2013, pp. 5-10, ISSN 0168-9002. R&D Projects: GA ČR GA205/09/0171; GA AV ČR IAA100480902; GA AV ČR KJB100480901; GA ČR GD202/09/H086. Institutional research plan: CEZ:AV0Z10480505. Institutional support: RVO:61389005. Keywords: CR-39 * linear energy transfer * uncertainty model * space dosimetry. Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders. Impact factor: 1.316, year: 2013
Vinai, Paolo; Macian-Juan, Rafael; Chawla, Rakesh
2011-01-01
The paper describes the propagation of void fraction uncertainty, as quantified by employing a novel methodology developed at Paul Scherrer Institut, in the RETRAN-3D simulation of the Peach Bottom turbine trip test. Since the transient considered is characterized by a strong coupling between thermal-hydraulics and neutronics, the accuracy in the void fraction model has a very important influence on the prediction of the power history and, in particular, of the maximum power reached. It has been shown that the objective measures used for the void fraction uncertainty, based on the direct comparison between experimental and predicted values extracted from a database of appropriate separate-effect tests, provides power uncertainty bands that are narrower and more realistic than those based, for example, on expert opinion. The applicability of such an approach to best estimate, nuclear power plant transient analysis has thus been demonstrated.
Uniqueness of non-linear ground states for fractional Laplacians in R
Frank, Rupert L.; Lenzmann, Enno
2013-01-01
We prove uniqueness of ground state solutions Q = Q(|x|) ≥ 0 of the non-linear equation (−Δ)^s Q + Q − Q^(α+1) = 0 in ℝ, where 0 < s < 1 and (−Δ)^s denotes the fractional Laplacian in one dimension. In particular, we answer affirmatively an open question recently raised by Kenig-Martel-Robbiano, and we generalize (by completely different techniques) the specific uniqueness result obtained by Amick and Toland for s = 1/2 and α = 1 in [5] for the Benjamin-Ono equation. As a technical key result in this paper, we show that the associated linearized operator L_+ = (−Δ)^s + 1 − (α+1)Q^α is non-degenerate; i.e., its kernel satisfies ker L_+ = span{Q′}. This result about L_+ proves a spectral assumption which plays a central role in the stability of solitary waves and blowup analysis for non-linear dispersive PDEs with fractional Laplacians, such as the generalized Benjamin-Ono equation.
Liu, Da-Yan; Tian, Yang; Boutat, Driss; Laleg-Kirati, Taous-Meriem
2015-01-01
This paper aims at designing a digital fractional order differentiator for a class of signals satisfying a linear differential equation to estimate fractional derivatives with an arbitrary order in noisy case, where the input can be unknown or known with noises. Firstly, an integer order differentiator for the input is constructed using a truncated Jacobi orthogonal series expansion. Then, a new algebraic formula for the Riemann-Liouville derivative is derived, which is enlightened by the algebraic parametric method. Secondly, a digital fractional order differentiator is proposed using a numerical integration method in discrete noisy case. Then, the noise error contribution is analyzed, where an error bound useful for the selection of the design parameter is provided. Finally, numerical examples illustrate the accuracy and the robustness of the proposed fractional order differentiator.
Analytical approach to linear fractional partial differential equations arising in fluid mechanics
Momani, Shaher; Odibat, Zaid
2006-01-01
In this Letter, we implement relatively new analytical techniques, the variational iteration method and the Adomian decomposition method, for solving linear fractional partial differential equations arising in fluid mechanics. The fractional derivatives are described in the Caputo sense. The two methods can be used as alternative approaches for obtaining analytic and approximate solutions for different types of fractional differential equations. In these methods, the solution takes the form of a convergent series with easily computable components. The corresponding solutions of the integer-order equations are found to follow as special cases of those of the fractional-order equations. Some numerical examples are presented to illustrate the efficiency and reliability of the two methods.
Lorenzo, C F; Hartley, T T; Malti, R
2013-05-13
A new and simplified method for the solution of linear constant coefficient fractional differential equations of any commensurate order is presented. The solutions are based on the R-function and on specialized Laplace transform pairs derived from the principal fractional meta-trigonometric functions. The new method simplifies the solution of such fractional differential equations and presents the solutions in the form of real functions as opposed to fractional complex exponential functions, and thus is directly applicable to real-world physics.
Il Young Song
2015-01-01
This paper focuses on estimation of a nonlinear function of the state vector (NFS) in discrete-time linear systems with time-delays and model uncertainties. The NFS represents a multivariate nonlinear function of state variables, which can indicate useful information about a target system for control. The optimal nonlinear estimator of an NFS (in the mean square sense) represents a function of the receding horizon estimate and its error covariance. The proposed receding horizon filter represents the standard Kalman filter with time-delays and special initial horizon conditions described by Lyapunov-like equations. In the general case, to calculate an optimal estimator of an NFS we propose using the unscented transformation. The important class of polynomial NFS is considered in detail; in this case an optimal estimator has a closed-form computational procedure. The subsequent application of the proposed receding horizon filter and nonlinear estimator to a linear stochastic system with time-delays and uncertainties demonstrates their effectiveness.
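The unscented transformation mentioned above propagates a mean and variance through a nonlinear function using deterministic sigma points; a minimal scalar sketch (not the paper's multivariate filter), which is exact for quadratic functions:

```python
import math

def unscented_expectation(mean, var, f, kappa=2.0):
    # Scalar unscented transform: approximate E[f(x)], x ~ N(mean, var),
    # from three sigma points; exact whenever f is a quadratic polynomial
    n = 1
    spread = math.sqrt((n + kappa) * var)
    points = [mean, mean + spread, mean - spread]
    weights = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    return sum(w * f(p) for w, p in zip(weights, points))

# E[x^2] for x ~ N(1, 1) is var + mean^2 = 2
val = unscented_expectation(1.0, 1.0, lambda x: x * x)
```

In the multivariate case the same idea uses 2n+1 sigma points built from a matrix square root of the covariance; the filter in the paper applies it to the receding horizon estimate and its error covariance.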
GPI-repetitive control for linear systems with parameter uncertainty / variation
John A. Cortés-Romero
2015-01-01
Robust repetitive control problems for uncertain linear systems have been considered through different approaches. This article proposes the use of Repetitive Control and Generalized Proportional Integral (GPI) control in a complementary fashion. The conditioning and coupling of these techniques has been done in a discrete-time context. Repetitive control is a control technique, based on the internal model principle, which yields perfect asymptotic tracking and rejection of periodic signals. On the other hand, GPI control is established as a robust linear control system design technique that is able to reject structured time-polynomial additive perturbations, in particular parameter uncertainty that can be locally approximated by a time-polynomial signal. GPI control provides suitable stability and robustness conditions for proper Repetitive Control operation. A stability analysis is presented in the frequency response framework using plant samples for different parameter uncertainty conditions. We carry out a comparative stability analysis with other complementary control approaches that have been effective for this kind of task, showing better robustness and improved performance in the GPI case. Illustrative simulation examples are presented which validate the proposed approach.
Asymptotical Behavior of the Solution of a SDOF Linear Fractionally Damped Vibration System
Z.H. Wang
2011-01-01
Fractional-order derivatives have been shown to be an adequate tool for the study of so-called "anomalous" social and physical behaviors, reflecting their non-local, frequency- and history-dependent properties, and they have been used successfully to model practical systems in engineering, including the famous Bagley-Torvik equation modeling forced motion of a rigid plate immersed in a Newtonian fluid. The solutions of initial value problems of linear fractional differential equations are usually expressed in terms of Mittag-Leffler functions or some other kind of power series. Such forms of solution are inconvenient for engineers, both for understanding the solutions and for further investigation. This paper proves that for the linear SDOF oscillator with damping described by a fractional-order derivative whose order is between 1 and 2, the solution of its initial value problem free of external excitation consists of two parts: the first is the 'eigenfunction expansion' that is similar to the case without fractional-order derivative, and the second is a definite integral that is independent of the eigenvalues (or characteristic roots). The integral disappears in the classical linear oscillator, and it can be neglected from the solution when the stationary solution is addressed. Moreover, the response of the fractionally damped oscillator under harmonic excitation is calculated in a similar way, and it is found that fractional damping with order between 1 and 2 can be used to produce oscillation with large amplitude as well as to suppress oscillation, depending on the ratio of the excitation frequency to the natural frequency.
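The harmonic response described in the last sentence can be read off the frequency-response function, in which the fractional damping contributes an (iω)^α term; a small sketch under the assumed normalization x'' + c D^α x + k x = cos(ωt) (this normalization and the parameter values are illustrative, not the paper's):

```python
import cmath

def forced_amplitude(omega, k=1.0, c=0.2, alpha=1.5):
    # Steady-state amplitude of x'' + c*D^alpha x + k*x = cos(omega*t),
    # from the frequency-response function; fractional damping enters
    # through the (i*omega)**alpha term
    H = -omega ** 2 + c * (1j * omega) ** alpha + k
    return 1.0 / abs(H)

# For alpha = 1 this reduces to the classical damped-oscillator formula
amp = forced_amplitude(0.5, alpha=1.0)
```

Sweeping alpha between 1 and 2 shifts both the resonance location and its height, which is the amplification/suppression effect the abstract refers to.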
Arcentales, Andres; Rivera, Patricio; Caminal, Pere; Voss, Andreas; Bayes-Genis, Antonio; Giraldo, Beatriz F
2016-08-01
Changes in left ventricle function produce alternans in the hemodynamic and electric behavior of the cardiovascular system. A total of 49 cardiomyopathy patients were studied based on the blood pressure (BP) signal, and were classified according to the left ventricular ejection fraction (LVEF) into low-risk (LR: LVEF > 35%, 17 patients) and high-risk (HR: LVEF ≤ 35%, 32 patients) groups. We propose to characterize these patients using a linear and a nonlinear method, based on spectral estimation and the recurrence plot (RP), respectively. From the BP signal, we extracted each systolic time interval (STI), upward systolic slope (BPsl), and the difference between systolic and diastolic BP, defined as pulse pressure (PP). Then, the best subset of parameters was obtained through the sequential feature selection (SFS) method. According to the results, the best classification was obtained using a combination of linear and nonlinear features from the STI and PP parameters. For STI, the best combination was obtained considering the frequency peak and the diagonal structures of the RP, with an area under the curve (AUC) of 79%. The same results were obtained when comparing PP values. Consequently, the use of combined linear and nonlinear parameters could improve the risk stratification of cardiomyopathy patients.
Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions
Belkhatir, Zehor
2017-06-28
This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using polynomial modulating functions method and a suitable change of variables the problem of estimating the locations and the amplitudes of a multi-pointwise input is decoupled into two algebraic systems of equations. The first system is nonlinear and solves for the time locations iteratively, whereas the second system is linear and solves for the input’s amplitudes. Second, closed form formulas for both the time location and the amplitude are provided in the particular case of single point input. Finally, numerical examples are given to illustrate the performance of the proposed technique in both noise-free and noisy cases. The joint estimation of pointwise input and fractional differentiation orders is also presented. Furthermore, a discussion on the performance of the proposed algorithm is provided.
Stability Analysis for Fractional-Order Linear Singular Delay Differential Systems
Hai Zhang
2014-01-01
We investigate the delay-independent asymptotic stability of fractional-order linear singular delay differential systems. Based on an algebraic approach, sufficient conditions are presented to ensure asymptotic stability for any delay parameter. By applying the stability criteria, one can avoid solving for the roots of transcendental equations. An example is also provided to illustrate the effectiveness and applicability of the theoretical results.
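The abstract concerns singular delay systems; for orientation only, the simpler non-singular, delay-free commensurate case is governed by Matignon's criterion, sketched here for a 2x2 system (an illustration, not the paper's algebraic conditions):

```python
import cmath
import math

def commensurate_stable(A, alpha):
    # Matignon criterion for D^alpha x = A x (non-singular, no delay):
    # asymptotically stable iff every eigenvalue lam of the 2x2 matrix A
    # satisfies |arg(lam)| > alpha*pi/2
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(complex(tr * tr - 4 * det))
    eigenvalues = [(tr + disc) / 2, (tr - disc) / 2]
    return all(abs(cmath.phase(lam)) > alpha * math.pi / 2 for lam in eigenvalues)

# Rotation-like system with eigenvalues +/- i: the fractional order decides
stable_low = commensurate_stable([[0.0, 1.0], [-1.0, 0.0]], 0.5)
stable_high = commensurate_stable([[0.0, 1.0], [-1.0, 0.0]], 1.5)
```

The singular delay case studied in the paper replaces this eigenvalue test with algebraic conditions that hold uniformly in the delay.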
High-order sliding mode observer for fractional commensurate linear systems with unknown input
Belkhatir, Zehor; Laleg-Kirati, Taous-Meriem
2017-05-20
In this paper, a high-order sliding mode observer (HOSMO) is proposed for the joint estimation of the pseudo-state and the unknown input of fractional commensurate linear systems with single unknown input and a single output. The convergence of the proposed observer is proved using a Lyapunov-based approach. In addition, an enhanced variant of the proposed fractional-HOSMO is introduced to avoid the peaking phenomenon and thus to improve the estimation results in the transient phase. Simulation results are provided to illustrate the performance of the proposed fractional observer in both noise-free and noisy cases. The effect of the observer’s gains on the estimated pseudo-state and unknown input is also discussed.
An Improved Method for Solving Multiobjective Integer Linear Fractional Programming Problem
Meriem Ait Mehdi
2014-01-01
We describe an improvement of Chergui and Moulaï's method (2008) that generates the whole efficient set of a multiobjective integer linear fractional program based on the branch-and-cut concept. The general step of this method consists in optimizing (maximizing, without loss of generality) one of the fractional objective functions over a subset of the original continuous feasible set; then, if necessary, a branching process is carried out until an integer feasible solution is obtained. At this stage, an efficient cut is built from the criteria's growth directions in order to discard a part of the feasible domain containing only nonefficient solutions. Our contribution concerns firstly the optimization process, where a linear program that we define later is solved at each step rather than a fractional linear program. Secondly, local ideal and nadir points are used as bounds to prune branches leading to nonefficient solutions. The computational experiments show that the new method outperforms the old one in all the treated instances.
Baogui Xin
2012-01-01
Based on the linear feedback control technique, a projective synchronization scheme for N-dimensional chaotic fractional-order systems is proposed, which consists of master and slave fractional-order financial systems coupled by linear state error variables. It is shown that the slave system can be projectively synchronized with the master system constructed by state transformation. Based on the stability theory of linear fractional-order systems, a suitable controller for achieving synchronization is designed. The given scheme is applied to achieve projective synchronization of chaotic fractional-order financial systems. Numerical simulations are given to verify the effectiveness of the proposed projective synchronization scheme.
Weihua Jin
2013-01-01
This paper proposes a genetic-algorithms-based approach as an all-purpose problem-solving method for operation programming problems under uncertainty. The proposed method was applied to the management of a municipal solid waste treatment system. Compared to the traditional interactive binary analysis, this approach has fewer limitations and is able to reduce the complexity in solving inexact linear programming and inexact quadratic programming problems. The implementation of this approach was performed using the Genetic Algorithm Solver of MATLAB (a trademark of MathWorks). The paper explains the genetic-algorithms-based method and presents details on the computation procedures for each type of inexact operation programming problem. A comparison of the results generated by the proposed method with those produced by the traditional interactive binary analysis method is also presented.
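The MATLAB GA Solver used in the paper is not shown; as a hedged stand-in, a minimal elitist genetic algorithm with arithmetic crossover and Gaussian mutation over a one-dimensional search space (all parameter choices are illustrative):

```python
import random

def genetic_maximize(f, lo, hi, pop_size=40, gens=60, mut=0.2, seed=1):
    # Elitist GA: keep the best half, refill with arithmetic-crossover
    # children, occasionally perturbed by Gaussian mutation
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = rng.random()
            child = w * a + (1 - w) * b          # arithmetic crossover
            if rng.random() < mut:               # Gaussian mutation
                child += rng.gauss(0.0, 0.1 * (hi - lo))
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return max(pop, key=f)

best = genetic_maximize(lambda x: -(x - 3.0) ** 2, 0.0, 10.0)
```

For inexact (interval-coefficient) programs, the fitness function would additionally encode the interval bounds and constraint penalties; that layer is omitted here.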
Output feedback control of linear fractional transformation systems subject to actuator saturation
Ban, Xiaojun; Wu, Fen
2016-11-01
In this paper, the control problem for a class of linear parameter varying (LPV) plant subject to actuator saturation is investigated. For the saturated LPV plant depending on the scheduling parameters in linear fractional transformation (LFT) fashion, a gain-scheduled output feedback controller in the LFT form is designed to guarantee the stability of the closed-loop LPV system and provide optimised disturbance/error attenuation performance. By using the congruent transformation, the synthesis condition is formulated as a convex optimisation problem in terms of a finite number of LMIs for which efficient optimisation techniques are available. The nonlinear inverted pendulum problem is employed to demonstrate the effectiveness of the proposed approach. Moreover, the comparison between our LPV saturated approach with an existing linear saturated method reveals the advantage of the LPV controller when handling nonlinear plants.
Nurdan Cetin
2014-01-01
We consider a multiobjective linear fractional transportation problem (MLFTP) with several fractional criteria, such as the maximization of transport profitability (e.g. profit/cost or profit/time), subject to source and destination constraints. Our aim is to introduce the MLFTP, which has not been studied in the literature before, and to provide a fuzzy approach that obtains a compromise Pareto-optimal solution for this problem. To do this, we first present a theorem which shows that the MLFTP is always solvable. Then, reducing the MLFTP to Zimmermann's "min" operator model, which is the max-min problem, we construct the Generalized Dinkelbach Algorithm for solving the resulting problem. Furthermore, we provide an illustrative numerical example to explain this fuzzy approach.
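The Generalized Dinkelbach Algorithm referenced above solves a fractional objective through a sequence of parametric non-fractional problems; a minimal single-ratio sketch over a finite candidate set (the multiobjective, fuzzy version in the paper is more involved):

```python
def dinkelbach(num, den, candidates, tol=1e-12, max_iter=100):
    # Dinkelbach's parametric scheme for max num(x)/den(x) with den > 0:
    # repeatedly maximize num(x) - lam*den(x) and update lam until the
    # parametric optimum F(lam) reaches zero
    lam = 0.0
    x = candidates[0]
    for _ in range(max_iter):
        x = max(candidates, key=lambda c: num(c) - lam * den(c))
        F = num(x) - lam * den(x)
        if abs(F) < tol:
            break
        lam = num(x) / den(x)
    return x, lam

# max (2x + 1)/(x + 2) over the endpoints of [0, 10]: optimum 21/12 at x = 10
x_best, ratio = dinkelbach(lambda x: 2 * x + 1, lambda x: x + 2, [0.0, 10.0])
```

Restricting to the endpoints is valid here because each parametric subproblem is linear in x; in the transportation setting the subproblem is instead a linear program over the feasible flows.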
Hariz, M.I.; Laitinen, L.V.; Henriksson, R.; Saeterborg, N.-E.; Loefroth, P.-O.
1990-01-01
A new technique for fractionated stereotactic irradiation of intracranial lesions is described. The treatment is based on a versatile, non-invasive interface for stereotactic localization of the brain target imaged by computed tomography (CT), angiography or magnetic resonance tomography (MRT), and subsequent repetitive stereotactic irradiation of the target using a linear accelerator. The fractionation of the stereotactic irradiation was intended to meet the requirements of the basic principles of radiobiology. The radiophysical evaluation using phantoms, and the clinical results in a small number of patients, demonstrated a good reproducibility between repeated positionings of the target in the isocenter of the accelerator, and a high degree of accuracy in the treatment of brain lesions. (authors). 28 refs.; 11 figs.; 1 tab
To reflect this uncertainty in the climate scenarios, the use of AOGCMs that explicitly simulate the carbon cycle and chemistry of all the substances are needed. The Hadley Centre has developed a version of the climate model that allows the effect of climate change on the carbon cycle and its feedback into climate, to be ...
Silva, T.A. da
1988-01-01
The comparison between the uncertainty methods recommended by the International Atomic Energy Agency (IAEA) and the International Committee for Weights and Measures (CIPM) is shown for the calibration of clinical dosimeters in the Secondary Standard Dosimetry Laboratory (SSDL). (C.G.C.)
Optical Measurement of Radiocarbon below Unity Fraction Modern by Linear Absorption Spectroscopy.
Fleisher, Adam J; Long, David A; Liu, Qingnan; Gameson, Lyn; Hodges, Joseph T
2017-09-21
High-precision measurements of radiocarbon (14C) near or below a fraction modern 14C of 1 (F14C ≤ 1) are challenging and costly. An accurate, ultrasensitive linear absorption approach to detecting 14C would provide a simple and robust benchtop alternative to off-site accelerator mass spectrometry facilities. Here we report the quantitative measurement of 14C in gas-phase samples of CO2 with F14C ≤ 1, with relevance to radiocarbon measurement science including the study of biofuels and bioplastics, illicitly traded specimens, bomb dating, and atmospheric transport.
Pei, Soo-Chang; Ding, Jian-Jiun
2005-03-01
Prolate spheroidal wave functions (PSWFs) are known to be useful for analyzing the properties of the finite-extension Fourier transform (fi-FT). We extend the theory of PSWFs for the finite-extension fractional Fourier transform, the finite-extension linear canonical transform, and the finite-extension offset linear canonical transform. These finite transforms are more flexible than the fi-FT and can model much more generalized optical systems. We also illustrate how to use the generalized prolate spheroidal functions we derive to analyze the energy-preservation ratio, the self-imaging phenomenon, and the resonance phenomenon of the finite-sized one-stage or multiple-stage optical systems.
Scatter fractions from linear accelerators with x-ray energies from 6 to 24 MV.
Taylor, P L; Rodgers, J E; Shobe, J
1999-08-01
Computation of shielding requirements for a linear accelerator must take into account the amount of radiation scattered from the patient to areas outside the primary beam. Currently, the most frequently used data are from NCRP 49 that only includes data for x-ray energies up to 6 MV and angles from 30 degrees to 135 degrees. In this work we have determined by Monte Carlo simulation the scattered fractions of dose for a wide range of energies and angles of clinical significance including 6, 10, 18, and 24 MV and scattering angles from 10 degrees to 150 degrees. Calculations were made for a 400 cm2 circular field size impinging onto a spherical phantom. Scattered fractions of dose were determined at 1 m from the phantom. Angles from 10 degrees to 30 degrees are of concern for higher energies where the scatter is primarily in the forward direction. An error in scatter fraction may result in too little secondary shielding near the junction with the primary barrier. The Monte Carlo code ITS (Version 3.0) developed at Sandia National Laboratory and NIST was used to simulate scatter from the patient to the barrier. Of significance was the variation of calculated scattered dose with depth of measurement within the barrier indicating that accurate values may be difficult to obtain. Mean energies of scatter x-ray spectra are presented.
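Scatter fractions of the kind tabulated above feed directly into patient-scatter shielding calculations of the form standardized in NCRP shielding reports; a minimal sketch, in which the scatter fraction, workload, and distances are illustrative assumptions:

```python
# Unshielded dose from patient scatter, as used in accelerator shielding
# design (NCRP-style formula). All numeric inputs below are illustrative.

def patient_scatter_dose(a, D0, F, d_sca, d_sec):
    """
    a     : scatter fraction at 1 m for a 400 cm^2 field (from tables)
    D0    : absorbed dose at 1 m from the target (e.g. weekly workload, Gy)
    F     : actual field area at 1 m (cm^2); scaled relative to 400 cm^2
    d_sca : distance from target to patient (m)
    d_sec : distance from patient to the point of interest (m)
    """
    return a * D0 * (F / 400.0) / (d_sca**2 * d_sec**2)

# Forward-direction scatter at high energy, order of magnitude only
dose = patient_scatter_dose(a=2e-3, D0=1000.0, F=400.0, d_sca=1.0, d_sec=5.0)
print(dose)  # Gy per week at the barrier, before any shielding
```

The inverse-square factors explain why an error in the small-angle scatter fraction matters most near the junction with the primary barrier.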
Stiebel-Kalish, Hadas; Reich, Ehud; Gal, Lior; Rappaport, Zvi Harry; Nissim, Ouzi; Pfeffer, Raphael; Spiegelmann, Roberto
2012-01-01
Purpose: Meningiomas threatening the anterior visual pathways (AVPs) and not amenable to surgery are currently treated with multisession stereotactic radiotherapy. Stereotactic radiotherapy is available with a number of devices. The most ubiquitous include the gamma knife, CyberKnife, tomotherapy, and isocentric linear accelerator systems. The purpose of our study was to describe a case series of AVP meningiomas treated with linear accelerator fractionated stereotactic radiotherapy (FSRT) using the multiple, noncoplanar, dynamic conformal rotation paradigm and to compare the success and complication rates with those reported for other techniques. Patients and Methods: We included all patients with AVP meningiomas followed up at our neuro-ophthalmology unit for a minimum of 12 months after FSRT. We compared the details of the neuro-ophthalmologic examinations and tumor size before and after FSRT and at the end of follow-up. Results: Of 87 patients with AVP meningiomas, 17 had been referred for FSRT. Of the 17 patients, 16 completed >12 months of follow-up (mean 39). Of the 16 patients, 11 had undergone surgery before FSRT and 5 had undergone FSRT as first-line management. Tumor control was achieved in 14 of the 16 patients, with three meningiomas shrinking in size after RT. Two meningiomas progressed, one in an area that was outside the radiation field. Visual function improved in 6 and stabilized in 8 of the 16 patients (88%), and worsened in 2 (12%). Conclusions: Linear accelerator fractionated RT using the multiple noncoplanar dynamic rotation conformal paradigm can be offered to patients with meningiomas that threaten the anterior visual pathways as an adjunct to surgery or as first-line treatment, with results comparable to those reported for other stereotactic RT techniques.
Song Xiao-Na; Song Shuai; Liu Lei-Po; Tejado Balsera, Inés
2017-01-01
This paper investigates the mixed H ∞ and passive projective synchronization problem for fractional-order (FO) memristor-based neural networks. Our aim is to design a controller such that, though the unavoidable phenomena of time-delay and parameter uncertainty are fully considered, the resulting closed-loop system is asymptotically stable with a mixed H ∞ and passive performance level. By combining active and adaptive control methods, a novel hybrid control strategy is designed, which can guarantee the robust stability of the closed-loop system and also ensure a mixed H ∞ and passive performance level. Via the application of FO Lyapunov stability theory, the projective synchronization conditions are addressed in terms of linear matrix inequality techniques. Finally, two simulation examples are given to illustrate the effectiveness of the proposed method. (paper)
Hua Wang
2017-01-01
In this paper, we first introduce some new Morrey-type spaces containing the generalized Morrey space and the weighted Morrey space with two weights as special cases. Then we give weighted strong type and weak type estimates for fractional integral operators $I_\alpha$ in these new Morrey-type spaces. Furthermore, the weighted strong type estimate and endpoint estimate of linear commutators $[b, I_\alpha]$ formed by $b$ and $I_\alpha$ are established. We also study related two-weight, weak type inequalities for $I_\alpha$ and $[b, I_\alpha]$ in the Morrey-type spaces and give partial results.
Application of the method of continued fractions for electron scattering by linear molecules
Lee, M.-T.; Iga, I.; Fujimoto, M.M.; Lara, O.; Brasilia Univ., DF
1995-01-01
The method of continued fractions (MCF) of Horacek and Sasakawa is adapted for the first time to study low-energy electron scattering by linear molecules. In particular, we have calculated the reactance K-matrices for an electron scattered by the hydrogen molecule and the hydrogen molecular ion, as well as by a polar LiH molecule, at the static-exchange level. For all the applications studied herein, the calculated physical quantities converge rapidly to the correct values, even for a strongly polar molecule such as LiH, and in most cases the convergence is monotonic. Our study suggests that the MCF could be an efficient method for studying electron-molecule scattering and also photoionization of molecules. (Author)
Shen, Peiping; Zhang, Tongli; Wang, Chunfeng
2017-01-01
This article presents a new approximation algorithm for globally solving a class of generalized fractional programming problems (P) whose objective functions are defined as an appropriate composition of ratios of affine functions. To solve this problem, the algorithm solves an equivalent optimization problem (Q) via an exploration of a suitably defined nonuniform grid. The main work of the algorithm involves checking the feasibility of linear programs associated with the interesting grid points. It is proved that the proposed algorithm is a fully polynomial time approximation scheme as the ratio terms are fixed in the objective function to problem (P), based on the computational complexity result. In contrast to existing results in literature, the algorithm does not require the assumptions on quasi-concavity or low-rank of the objective function to problem (P). Numerical results are given to illustrate the feasibility and effectiveness of the proposed algorithm.
Hennelly, Bryan M.; Sheridan, John T.
2005-05-01
By use of matrix-based techniques it is shown how the space-bandwidth product (SBP) of a signal, as indicated by the location of the signal energy in the Wigner distribution function, can be tracked through any quadratic-phase optical system whose operation is described by the linear canonical transform. Then, applying the regular uniform sampling criteria imposed by the SBP and linking the criteria explicitly to a decomposition of the optical matrix of the system, it is shown how numerical algorithms (employing interpolation and decimation), which exhibit both invertibility and additivity, can be implemented. Algorithms appearing in the literature for a variety of transforms (Fresnel, fractional Fourier) are shown to be special cases of our general approach. The method is shown to allow the existing algorithms to be optimized and is also shown to permit the invention of many new algorithms.
Scattered fractions of dose from 18 and 25 MV X-ray radiotherapy linear accelerators
Shobe, J.; Rodgers, J.E.; Taylor, P.L.; Jackson, J.; Popescu, G.
1996-01-01
Over the years, measurements have been made at a few energies to estimate the scattered fraction of dose from the patient in medical radiotherapy operations. This information has been a useful aid in the determination of shielding requirements for these facilities. With these measurements, known characteristics of photons, and various other known parameters, Monte Carlo codes are being used to calculate the scattered fractions and hence the shielding requirements for the photons of other energies commonly used in radiotherapeutic applications. The National Institute of Standards and Technology (NIST) acquired a Sagittaire medical linear accelerator (linac) which was previously located at the Yale-New Haven Hospital. This linac provides an X-ray beam of 25 MV photons and electron beams with energies up to 32 MeV. The housing on the gantry was permanently removed from the accelerator during installation. A Varian Clinac 1800 linear accelerator was used to produce the 18 MV photons at the Frederick Memorial Hospital Regional Cancer Therapy Center in Frederick, MD. This paper presents a study of the photon dose scattered from a patient in typical radiation treatment situations as it relates to the dose delivered at the isocenter in water. The results of these measurements will be compared to Monte Carlo calculations. Photon spectral measurements were not made at this time. Neutron spectral measurements were made on this Sagittaire machine in its previous location and that work was not repeated here, although a brief study of the neutron component of the 18 and 25 MV linacs was performed utilizing thermoluminescent dosimetry (TLD) to determine the isotropy of the neutron dose. (author)
Yan Han
2013-01-01
An interval-parameter fuzzy linear programming with stochastic vertices (IFLPSV) method is developed for water resources management under uncertainty by coupling interval-parameter fuzzy linear programming (IFLP) with stochastic programming (SP). As an extension of existing interval-parameter fuzzy linear programming, the developed IFLPSV approach has advantages in dealing with dual uncertainty optimization problems, in which uncertainty is presented as interval parameters with stochastic vertices in both the objective function and the constraints. The developed IFLPSV method improves upon the IFLP method by allowing dual uncertainty parameters to be incorporated into the optimization process. A hybrid intelligent algorithm based on a genetic algorithm and an artificial neural network is used to solve the developed model. The developed method is then applied to water resources allocation in Beijing city of China in 2020, where water resources shortage is a challenging issue. The results indicate that reasonable solutions have been obtained, which are helpful and useful for decision makers. Although the amount of water supply from the Guanting and Miyun reservoirs is declining with rainfall reduction, water supply from the South-to-North Water Transfer project will have an important impact on the water supply structure of Beijing city, particularly in dry years and extraordinarily dry years.
Merrikh-Bayat, Farshad
2011-04-01
One main approach for time-domain simulation of the linear output-feedback systems containing fractional-order controllers is to approximate the transfer function of the controller with an integer-order transfer function and then perform the simulation. In general, this approach suffers from two main disadvantages: first, the internal stability of the resulting feedback system is not guaranteed, and second, the amount of error caused by this approximation is not exactly known. The aim of this paper is to propose an efficient method for time-domain simulation of such systems without facing the above mentioned drawbacks. For this purpose, the fractional-order controller is approximated with an integer-order transfer function (possibly in combination with the delay term) such that the internal stability of the closed-loop system is guaranteed, and then the simulation is performed. It is also shown that the resulting approximate controller can effectively be realized by using the proposed method. Some formulas for estimating and correcting the simulation error, when the feedback system under consideration is subjected to the unit step command or the unit step disturbance, are also presented. Finally, three numerical examples are studied and the results are compared with the Oustaloup continuous approximation method. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
Wang, S.; Huang, G. H.; Huang, W.; Fan, Y. R.; Li, Z.
2015-10-01
In this study, a fractional factorial probabilistic collocation method is proposed to reveal statistical significance of hydrologic model parameters and their multi-level interactions affecting model outputs, facilitating uncertainty propagation in a reduced dimensional space. The proposed methodology is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability, as well as its capability of revealing complex and dynamic parameter interactions. A set of reduced polynomial chaos expansions (PCEs) only with statistically significant terms can be obtained based on the results of factorial analysis of variance (ANOVA), achieving a reduction of uncertainty in hydrologic predictions. The predictive performance of reduced PCEs is verified by comparing against standard PCEs and the Monte Carlo with Latin hypercube sampling (MC-LHS) method in terms of reliability, sharpness, and Nash-Sutcliffe efficiency (NSE). Results reveal that the reduced PCEs are able to capture hydrologic behaviors of the Xiangxi River watershed, and they are efficient functional representations for propagating uncertainties in hydrologic predictions.
Belazi, Akram; Abd El-Latif, Ahmed A.; Diaconu, Adrian-Viorel; Rhouma, Rhouma; Belghith, Safya
2017-01-01
In this paper, a new chaos-based partial image encryption scheme based on Substitution-boxes (S-box) constructed by a chaotic system and a Linear Fractional Transform (LFT) is proposed. It encrypts only the requisite parts of the sensitive information in the Lifting-Wavelet Transform (LWT) frequency domain, based on a hybrid of chaotic maps and a new S-box. In the proposed encryption scheme, the characteristics of confusion and diffusion are accomplished in three phases: block permutation, substitution, and diffusion. Then, we used dynamic keys instead of the fixed keys used in other approaches to control the encryption process and make any attack impossible. The new S-box was constructed by mixing a chaotic map and the LFT to ensure high confidentiality in the inner encryption of the proposed approach. In addition, the hybrid compound of the S-box and chaotic systems strengthened the whole encryption performance and enlarged the key space required to resist brute force attacks. Extensive experiments were conducted to evaluate the security and efficiency of the proposed approach. In comparison with previous schemes, the proposed cryptosystem showed high performance and great potential for prominent prevalence in cryptographic applications.
Dale, R.G.
1986-01-01
By extending a previously developed mathematical model based on the linear-quadratic dose-effect relationship, it is possible to examine the consequences of performing fractionated treatments for which there is insufficient time between fractions to allow complete damage repair. Equations are derived which give the relative effectiveness of such treatments in terms of tissue-repair constants (μ values) and α/β ratios, and these are then applied to some examples of treatments involving multiple fractions per day. The interplay of the various mechanisms involved (including repopulation effects) and their possible influence on treatments involving closely spaced fractions are examined. If current indications of the differences in recovery rates between early- and late-reacting normal tissues are representative, then it is shown that such differences may limit the clinical potential of accelerated fractionation regimes, where several fractions per day are given in a relatively short overall time. (author)
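The linear-quadratic bookkeeping behind such comparisons can be made concrete with the standard incomplete-repair correction for m fractions per day (a Thames-style H_m factor); the repair half-time, dose per fraction, and α/β values below are illustrative assumptions, not the paper's data:

```python
import math

# Biologically effective dose (BED) with the standard incomplete-repair
# factor H_m for m equally spaced fractions per day. Numbers illustrative.

def incomplete_repair_Hm(m, mu, dt):
    """H_m for m fractions/day, repair rate mu (1/h), inter-fraction gap dt (h)."""
    theta = math.exp(-mu * dt)
    return (2.0 / m) * (theta / (1.0 - theta)) * (m - (1.0 - theta**m) / (1.0 - theta))

def bed(n, d, alpha_beta, Hm=0.0):
    """BED = n*d*(1 + d*(1+Hm)/(alpha/beta)); Hm = 0 recovers complete repair."""
    return n * d * (1.0 + d * (1.0 + Hm) / alpha_beta)

mu = math.log(2) / 1.5                           # repair half-time 1.5 h
Hm = incomplete_repair_Hm(m=2, mu=mu, dt=6.0)    # two fractions, 6 h apart
late = bed(n=30, d=2.0, alpha_beta=3.0, Hm=Hm)   # late-reacting tissue, alpha/beta = 3
print(Hm, late)
```

Because H_m grows as the inter-fraction interval shrinks relative to the repair half-time, the effective dose to slowly repairing late-reacting tissues rises, which is exactly the limit on accelerated fractionation the abstract describes.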
Rossikhin Yury A.
2018-01-01
Non-linear damped vibrations of a cylindrical shell embedded in a fractional derivative medium are investigated for the case of combinational internal resonance, resulting in modal interaction, using two different numerical methods with further comparison of the results obtained. The damping properties of the surrounding medium are described by the fractional derivative Kelvin-Voigt model utilizing the Riemann-Liouville fractional derivatives. Within the first method, the generalized displacements of a coupled set of nonlinear ordinary differential equations of the second order are estimated using numerical solution of nonlinear multi-term fractional differential equations by a procedure based on the reduction of the problem to a system of fractional differential equations. According to the second method, the amplitudes and phases of nonlinear vibrations are estimated from the governing nonlinear differential equations describing amplitude-and-phase modulations for the case of combinational internal resonance. A good agreement between the two sets of results is observed.
Yang, Yongge; Xu, Wei; Yang, Guidong; Jia, Wantao
2016-08-01
The Poisson white noise, as a typical non-Gaussian excitation, has attracted much attention recently. However, little work was referred to the study of stochastic systems with fractional derivative under Poisson white noise excitation. This paper investigates the stationary response of a class of quasi-linear systems with fractional derivative excited by Poisson white noise. The equivalent stochastic system of the original stochastic system is obtained. Then, approximate stationary solutions are obtained with the help of the perturbation method. Finally, two typical examples are discussed in detail to demonstrate the effectiveness of the proposed method. The analysis also shows that the fractional order and the fractional coefficient significantly affect the responses of the stochastic systems with fractional derivative.
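For readers unfamiliar with the fractional derivatives appearing in such systems, the Grünwald-Letnikov definition gives a direct numerical scheme; the test function and step size below are illustrative choices, checked against the known closed form:

```python
import math

# Grünwald-Letnikov approximation of the fractional derivative D^alpha f(t),
# checked against the closed form D^0.5 t = t^0.5 / Gamma(1.5).

def gl_fractional_derivative(f, t, alpha, h=1e-4):
    n = int(t / h)
    c = 1.0           # c_k = (-1)^k * binom(alpha, k), built by recursion
    total = f(t)      # k = 0 term
    for k in range(1, n + 1):
        c *= (k - 1.0 - alpha) / k
        total += c * f(t - k * h)
    return total / h**alpha

approx = gl_fractional_derivative(lambda x: x, t=1.0, alpha=0.5)
exact = 1.0 / math.gamma(1.5)   # fractional half-derivative of t at t = 1
print(approx, exact)
```

The scheme is first-order accurate in h, which is usually adequate for the response statistics studied in such papers; spectral methods are preferred when higher accuracy is needed.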
Ren, Jingzheng; Dong, Liang; Sun, Lu
2015-01-01
in this model, and the price of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming was developed, and a method for solving interval linear programming was presented. An illustrative case was studied by the proposed...
Five-DOF innovative linear MagLev slider to account for pitch, tilt and load uncertainty
Kao, Yi-Ming; Tsai, Nan-Chyuan; Chiu, Hsin-Lin
2017-02-01
This paper focuses on position deviation regulation of a slider by Fuzzy Sliding Mode Control (FSMC). Five degrees of freedom (DOF) of position deviation are required to be regulated, excepting the direction (i.e., the X-axis) in which the slider moves forward and backward. In total, 8 sets of Magnetic Actuators (MAs) and an Electro-Pneumatic Transducer (EPT) are employed to drive the slider carrying loads under the commands of the FSMC. The EPT is applied to adjust the pressure of compressed air to counterbalance the weight of the slider itself. At first, the system dynamic model of the slider, including load uncertainty and load position uncertainty, is established. Intensive computer simulations are undertaken to verify the validity of the proposed control strategy. Finally, a prototype of a realistic slider position deviation regulation system is successfully built. According to the experiments on the cooperation of pneumatic and magnetic control, the actual linear position deviations of the slider can be regulated within ±8 μm and the angular position deviations within ±1 milli-degree. From the viewpoint of energy consumption, the applied currents to the 8 sets of MAs are all below 1.2 A. To sum up, the closed-loop levitation system with cooperative pneumatic and magnetic control is capable of accounting for load uncertainty and uncertainty in the standing position of the load to be carried.
Parametric linear programming for a materials requirement planning problem solution with uncertainty
Martin Darío Arango Serna; Conrado Augusto Serna; Giovanni Pérez Ortega
2010-01-01
Using fuzzy set theory as a methodology for modelling and analysing decision systems is particularly interesting for researchers in industrial engineering because it allows qualitative and quantitative analysis of problems involving uncertainty and imprecision. Thus, in an effort to gain a better understanding of the use of fuzzy logic in industrial engineering, more specifically in the field of production planning, this article was aimed at providing a materials requirement planning (MRP) pr...
Hadwin, Paul J.; Sipkens, T. A.; Thomson, K. A.; Liu, F.; Daun, K. J.
2016-01-01
Auto-correlated laser-induced incandescence (AC-LII) infers the soot volume fraction (SVF) of soot particles by comparing the spectral incandescence from laser-energized particles to the pyrometrically inferred peak soot temperature. This calculation requires detailed knowledge of model parameters such as the absorption function of soot, which may vary with combustion chemistry, soot age, and the internal structure of the soot. This work presents a Bayesian methodology to quantify such uncertainties. This technique treats the additional "nuisance" model parameters, including the soot absorption function, as stochastic variables and incorporates the current state of knowledge of these parameters into the inference process through maximum entropy priors. While standard AC-LII analysis provides a point estimate of the SVF, Bayesian techniques infer the posterior probability density, which will allow scientists and engineers to better assess the reliability of AC-LII inferred SVFs in the context of environmental regulations and competing diagnostics.
Bagherpour, Esmaeel A.; HairiTazdi, Mohammad Reza; Mahjoob, Mohammad
2014-01-01
In this paper, we deal with residual vector generation for fault detection problems in linear systems via unknown input observer (UIO) when the so-called observer matching condition is not satisfied. Based on the relative degree between unknown input and output, a vector of the auxiliary output is introduced so that the observer matching condition is satisfied with respect to the vector. Auxiliary outputs are related to the derivatives of measured signals. However, differentiation leads to excessive amplification of measurement noise. A dynamically equivalent configuration of linear systems is developed using successive integrations to avoid differentiation. As such, auxiliary outputs are constructed without differentiation. Then, the equivalent dynamic system and its corresponding auxiliary outputs are used to generate the residual vector via an exponentially converging UIO. Fault detection in the generated residual vector is also investigated. Finally, the effectiveness of the proposed method is shown via numerical simulation.
Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach
Chowdhury, R.; Adhikari, S.
2012-10-01
Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or infinite-dimensional as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with commercial finite element software. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
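The alpha-cut machinery underlying such fuzzy analyses can be illustrated with interval propagation through a monotone response function; the triangular membership and the response function below are assumptions for illustration only (a real analysis would evaluate the HDMR surrogate instead):

```python
# Alpha-cut propagation of a triangular fuzzy parameter through a
# monotone response function (illustrative; HDMR surrogates replace
# the direct function evaluations in the paper's method).

def alpha_cut_triangular(a, b, c, alpha):
    """Interval of a triangular fuzzy number (a, b, c) at membership level alpha."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def propagate(f, cut):
    """Output interval for a monotonically increasing f on the input interval."""
    lo, hi = cut
    return (f(lo), f(hi))

cut = alpha_cut_triangular(1.0, 2.0, 3.0, alpha=0.5)
out = propagate(lambda x: x**2, cut)
print(cut, out)
```

Sweeping alpha from 0 to 1 and stacking the output intervals reconstructs the fuzzy membership of the response, which is what the alpha-cut method does level by level.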
Iman Ghasemi
2017-05-01
In this paper, iterative learning control (ILC) is combined with an optimal fractional order derivative law (BBO-Da-type ILC) and an optimal fractional proportional-derivative law (BBO-PDa-type ILC). In the update law of Arimoto's derivative iterative learning control, a first order derivative of the tracking error signal is used. In the proposed method, a fractional order derivative of the error signal, expressed in terms of s^a, is used to update the iterative learning control law. Two types of fractional order iterative learning control, namely PDa-type ILC and Da-type ILC, are obtained for different values of a. In order to improve the performance of the closed-loop control system, the coefficients of both learning laws, i.e. the proportional and derivative gains and the order a, are optimized using the Biogeography-Based Optimization algorithm (BBO). The simulation results are compared with those of conventional fractional order iterative learning control to verify the effectiveness of BBO-Da-type ILC and BBO-PDa-type ILC.
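A minimal D-type ILC loop shows the iteration-domain learning the abstract describes; the plant, learning gain, and horizon are illustrative assumptions (integer-order, not the fractional s^a law of the paper):

```python
# Minimal D-type iterative learning control on a scalar discrete plant
# y[t+1] = 0.5*y[t] + u[t]. Plant and gain are illustrative assumptions.

T, gain, iters = 10, 0.5, 50
ref = [1.0] * (T + 1)       # constant reference to track
u = [0.0] * T               # control signal, refined across iterations
errors = []

for _ in range(iters):
    y = [0.0] * (T + 1)     # same initial condition every trial
    for t in range(T):
        y[t + 1] = 0.5 * y[t] + u[t]
    e = [ref[t] - y[t] for t in range(T + 1)]
    errors.append(max(abs(v) for v in e[1:]))
    # D-type update: correct u[t] with the *next-step* error
    u = [u[t] + gain * e[t + 1] for t in range(T)]

print(errors[0], errors[-1])  # tracking error shrinks over iterations
```

The fractional variants in the paper replace the plain shifted-error correction with a fractional order difference of the error, with the gains and the order tuned by BBO.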
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Fowler, Michael James [Clarkson Univ., Potsdam, NY (United States)
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy
Laxy, Michael; Stark, Renée; Peters, Annette; Hauner, Hans; Holle, Rolf; Teuner, Christina M
2017-08-30
This study aims to analyse the non-linear relationship between Body Mass Index (BMI) and direct health care costs, and to quantify the resulting cost fraction attributable to obesity in Germany. Five cross-sectional surveys of cohort studies in southern Germany were pooled, resulting in data on 6757 individuals (31-96 years old). Self-reported information on health care utilisation was used to estimate direct health care costs for the year 2011. The relationship between measured BMI and annual costs was analysed using generalised additive models, and the cost fraction attributable to obesity was calculated. We found a non-linear association of BMI and health care costs, with a continuously increasing slope for increasing BMI and no clear threshold. Taking the non-linear BMI-cost relationship into account, a shift in the BMI distribution that lowers each individual's BMI by one point is associated with a 2.1% reduction of mean direct costs in the population. If obesity were eliminated, and the BMI of all obese individuals lowered to 29.9 kg/m², mean direct costs in the population would be reduced by 4.0%. Results show a non-linear relationship between BMI and health care costs, with very high costs for a few individuals with high BMI. This indicates that population-based interventions in combination with selective measures for very obese individuals might be the preferred strategy.
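The attributable-fraction logic can be sketched numerically: fit a flexible cost-versus-BMI curve, then compare predicted mean costs under the observed and counterfactual BMI distributions. Everything below is synthetic, and a cubic polynomial stands in for the paper's generalised additive model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort (illustrative only; not the study data).
bmi = np.clip(rng.normal(27, 5, 5000), 16, 45)
# Convex cost curve: costs rise steeply at high BMI, plus noise.
cost = 1500 + 40 * np.maximum(bmi - 22, 0) ** 2 + rng.normal(0, 300, bmi.size)

# Cubic polynomial as a simple stand-in for the paper's GAM smoother.
coef = np.polyfit(bmi, cost, deg=3)

def predict(b):
    return np.polyval(coef, b)

# Counterfactual: lower every obese individual (BMI >= 30) to 29.9 kg/m^2.
bmi_cf = np.where(bmi >= 30, 29.9, bmi)
paf = 1 - predict(bmi_cf).mean() / predict(bmi).mean()
print(f"cost fraction attributable to obesity: {paf:.1%}")
```

The same comparison with every BMI shifted down by one point gives the study's other counterfactual.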
Regis, J.-M., E-mail: regis@ikp.uni-koeln.de [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, 50937 Koeln (Germany); Rudigier, M.; Jolie, J.; Blazhev, A.; Fransen, C.; Pascovici, G.; Warr, N. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, 50937 Koeln (Germany)
2012-08-21
The electronic γ-γ fast timing technique allows for direct nuclear lifetime determination down to the few-picoseconds region by measuring the time difference between two coincident γ-ray transitions. Using high resolution ultra-fast LaBr₃(Ce) scintillator detectors in combination with the recently developed mirror symmetric centroid difference method, nuclear lifetimes are measured with a time resolving power of around 5 ps. The essence of the method is to calibrate the energy dependent position (centroid) of the prompt response function of the setup, which is obtained for simultaneously occurring events. The time-walk of the prompt response function induced by the analog constant fraction discriminator has been determined by systematic measurements using different photomultiplier tubes and timing adjustments of the constant fraction discriminator. We propose a universal calibration function which describes the time-walk or the combined γ-γ time-walk characteristics, respectively, for either a linear or a non-linear amplitude versus energy dependency of the scintillator detector output pulses.
Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang
2017-05-18
This paper investigates the time-varying formation robust tracking problems for high-order linear multiagent systems with a leader of unknown control input in the presence of heterogeneous parameter uncertainties and external disturbances. The followers need to accomplish an expected time-varying formation in the state space and track the state trajectory produced by the leader simultaneously. First, a time-varying formation robust tracking protocol with a totally distributed form is proposed utilizing the neighborhood state information. With the adaptive updating mechanism, neither any global knowledge about the communication topology nor the upper bounds of the parameter uncertainties, external disturbances and leader's unknown input are required in the proposed protocol. Then, in order to determine the control parameters, an algorithm with four steps is presented, where feasible conditions for the followers to accomplish the expected time-varying formation tracking are provided. Furthermore, based on the Lyapunov-like analysis theory, it is proved that the formation tracking error can converge to zero asymptotically. Finally, the effectiveness of the theoretical results is verified by simulation examples.
Zhang, Lifu; Li, Chuxin; Zhong, Haizhe; Xu, Changwen; Lei, Dajun; Li, Ying; Fan, Dianyuan
2016-06-27
We have investigated the propagation dynamics of super-Gaussian optical beams in the fractional Schrödinger equation and identified the differences between the propagation dynamics of super-Gaussian beams and that of Gaussian beams. We show that the linear propagation dynamics of super-Gaussian beams with order m > 1 undergo an initial compression phase before the beams split into two sub-beams. The saddle-shaped sub-beams separate from each other, and their interval increases linearly with propagation distance. In the nonlinear regime, the super-Gaussian beams evolve into a single soliton, a breathing soliton or a soliton pair depending on the order of the super-Gaussian beams, the nonlinearity, and the Lévy index. In two dimensions, the linear evolution of super-Gaussian beams is similar to the one-dimensional case, but the initial compression of the input super-Gaussian beams and the diffraction of the splitting beams are much stronger, and the nonlinear propagation of the super-Gaussian beams is much more unstable. Our results show that the nonlinear effects can be tuned by varying the Lévy index in the fractional Schrödinger equation for a fixed input power.
Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan
2013-01-01
The conflict of water environment protection and economic development has brought severe water pollution and restricted the sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of Lake Fuxian watershed, China. Scenario analysis was introduced into model solution process to ensure the practicality and operability of optimization schemes. Decision makers' preferences for risk levels can be expressed through inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. Through balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of two scenarios were interpreted and compared to identify a preferable planning alternative, which has the relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, which was REILP, for decision makers to develop an effectively environmental economic optimization scheme in integrated watershed management.
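The interval-programming idea of bracketing the system return between favourable and unfavourable realisations of interval coefficients can be illustrated with a toy two-activity LP; the numbers and the `scipy.optimize.linprog` formulation below are made up for illustration and are not the REILP model of the paper (which additionally trades return against an explicit risk term).

```python
from scipy.optimize import linprog

# Interval LP sketch: solve optimistic and pessimistic sub-problems built
# from the two ends of interval profit coefficients, subject to a shared
# pollution-load cap. All numbers are illustrative.
profit = {"lo": [3.0, 5.0], "hi": [4.0, 6.0]}   # per-unit returns (interval ends)
load = [2.0, 4.0]                                # pollution load per unit of activity
cap = 100.0                                      # allowable watershed load

res = {}
for case, c in (("pessimistic", profit["lo"]), ("optimistic", profit["hi"])):
    # maximize c.x  ->  minimize -c.x, subject to load.x <= cap, 0 <= x <= 30
    out = linprog([-v for v in c], A_ub=[load], b_ub=[cap],
                  bounds=[(0, 30), (0, 30)])
    res[case] = -out.fun

print(res)  # the true system return lies between these two values
```

REILP then lets the decision maker pick a point inside this bracket by specifying an aspiration level for risk.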
Zhang, Mutian; Zhang, Qinghui; Gan, Hua; Li, Sicong; Zhou, Su-min
2016-02-01
In the present study, clinical stereotactic radiosurgery (SRS) setup uncertainties from image-guidance data are analyzed, and the corresponding setup margin is estimated for treatment planning purposes. Patients undergoing single-fraction SRS at our institution were localized using invasive head ring or non-invasive thermoplastic masks. Setup discrepancies were obtained from an in-room x-ray patient position monitoring system. Post treatment re-planning using the measured setup errors was performed in order to estimate the individual target margins sufficient to compensate for the actual setup errors. The formula of setup margin for a general SRS patient population was derived by proposing a correlation between the three-dimensional setup error and the required minimal margin. Setup errors of 104 brain lesions were analyzed, in which 81 lesions were treated using an invasive head ring, and 23 were treated using non-invasive masks. In the mask cases with image guidance, the translational setup uncertainties achieved the same level as those in the head ring cases. Re-planning results showed that the margins for individual patients could be smaller than the clinical three-dimensional setup errors. The derivation of setup margin adequate to address the patient setup errors was demonstrated by using the arbitrary planning goal of treating 95% of the lesions with sufficient doses. With image guidance, the patient setup accuracy of mask cases can be comparable to that of invasive head rings. The SRS setup margin can be derived for a patient population with the proposed margin formula to compensate for the institution-specific setup errors. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
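A population margin chosen to cover a stated fraction of lesions can be sketched as a percentile rule on the 3D setup-error magnitudes. The percentile form is a generic stand-in for the paper's fitted margin formula, and the per-axis errors below are synthetic.

```python
import numpy as np

# Margin as the 3D setup-error magnitude that covers 95% of lesions.
# Synthetic per-lesion shifts (mm) stand in for measured setup errors.
rng = np.random.default_rng(3)
dx, dy, dz = rng.normal(0.0, 0.5, (3, 104))      # 104 lesions, ~0.5 mm SD per axis
r3d = np.sqrt(dx**2 + dy**2 + dz**2)             # three-dimensional setup error
margin = np.percentile(r3d, 95)
print(f"margin covering 95% of lesions: {margin:.2f} mm")
```

In the paper the planning goal (treating 95% of lesions with sufficient dose) plays the role of this percentile choice.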
Simic, Vladimir; Dimitrijevic, Branka
2015-02-01
An interval linear programming approach is used to formulate and comprehensively test a model for optimal long-term planning of vehicle recycling in the Republic of Serbia. The proposed model is applied to a numerical case study: a 4-year planning horizon (2013-2016) is considered, three legislative cases and three scrap metal price trends are analysed, availability of final destinations for sorted waste flows is explored. Potential and applicability of the developed model are fully illustrated. Detailed insights on profitability and eco-efficiency of the projected contemporary equipped vehicle recycling factory are presented. The influences of the ordinance on the management of end-of-life vehicles in the Republic of Serbia on the vehicle hulks procuring, sorting generated material fractions, sorted waste allocation and sorted metals allocation decisions are thoroughly examined. The validity of the waste management strategy for the period 2010-2019 is tested. The formulated model can create optimal plans for procuring vehicle hulks, sorting generated material fractions, allocating sorted waste flows and allocating sorted metals. Obtained results are valuable for supporting the construction and/or modernisation process of a vehicle recycling system in the Republic of Serbia. © The Author(s) 2015.
Quadratic-linear pattern in cancer fractionated radiotherapy. Equations for a computing program
Burgos, D.; Bullejos, J.; Garcia Puche, J.L.; Pedraza, V.
1990-01-01
Knowledge of the equivalence between different treatment schemes with the same iso-effect is essential in clinical cancer radiotherapy. For this purpose, the group of ideas derived from the quadratic-linear (Q-L) model proposed to analyze the cell survival curve under radiation is very useful. The iso-effect produced by several irradiation rules is defined by the extrapolated tolerance dose (ETD). Because the equations for ETD are complex, a computing program has been developed. In this paper, the iso-effect equations for well defined therapeutic situations, and the flow diagram proposed for their resolution, have been studied. (Author)
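The iso-effect bookkeeping of the Q-L model is compactly expressed through the biologically effective dose, BED = n·d·(1 + d/(α/β)); two schedules are iso-effective when their BEDs match. A minimal sketch in this standard form (the paper's ETD equations may differ in detail):

```python
def bed(n, d, ab):
    """Biologically effective dose (Gy) for n fractions of d Gy, alpha/beta = ab (Gy)."""
    return n * d * (1 + d / ab)

def isoeffective_total_dose(n1, d1, d2, ab):
    """Total dose, delivered in fractions of d2 Gy, that matches the BED of the
    reference scheme of n1 fractions of d1 Gy."""
    return bed(n1, d1, ab) / (1 + d2 / ab)

# Example: 30 x 2 Gy with alpha/beta = 10 Gy, re-expressed in 3 Gy fractions.
print(f"BED = {bed(30, 2, 10):.1f} Gy")
print(f"equivalent total dose at 3 Gy/fraction = "
      f"{isoeffective_total_dose(30, 2, 3, 10):.1f} Gy")
```

For late-responding normal tissues one would repeat the calculation with a smaller α/β, which is where the equivalence between schedules diverges.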
Li, Meng; Gu, Xian-Ming; Huang, Chengming; Fei, Mingfa; Zhang, Guoyu
2018-04-01
In this paper, a fast linearized conservative finite element method is studied for solving the strongly coupled nonlinear fractional Schrödinger equations. We prove that the scheme preserves both the mass and the energy, which are defined by virtue of some recursion relationships. Using the Sobolev inequalities and mathematical induction, the discrete scheme is proved to be unconditionally convergent in the sense of the L²-norm and the H^(α/2)-norm, meaning that there are no constraints on the grid ratios. A priori bounds of the discrete solution in the L²-norm and L∞-norm are also obtained. Moreover, we propose an iterative algorithm in which the coefficient matrix is independent of the time level, leading to Toeplitz-like linear systems that can be efficiently solved by Krylov subspace solvers with circulant preconditioners. This method reduces the memory requirement of the proposed linearized finite element scheme from O(M²) to O(M) and the computational complexity from O(M³) to O(M log M) in each iterative step, where M is the number of grid nodes. Finally, numerical results are presented to verify the correctness of the theoretical analysis, simulate the collision of two solitary waves, and show the utility of the fast numerical solution techniques.
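The O(M log M) machinery rests on two standard ingredients: a Toeplitz matrix-vector product via circulant embedding, and a circulant (here Strang-type) preconditioner applied by FFT. The sketch below uses an illustrative diagonally dominant symmetric Toeplitz matrix, not the paper's fractional-derivative stencil.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Toeplitz matvec in O(M log M) via 2M-point circulant embedding, plus a
# Strang circulant preconditioner, solved matrix-free with CG.
M = 512
j = np.arange(M)
c = np.where(j == 0, 4.0, 0.5 ** j)        # first column of a symmetric SPD Toeplitz T

# Embed T in a 2M x 2M circulant: first column [c, 0, c_{M-1}, ..., c_1].
emb = np.fft.fft(np.concatenate([c, [0.0], c[-1:0:-1]]))

def toep_mv(x):
    return np.fft.ifft(emb * np.fft.fft(x, 2 * M))[:M].real

# Strang preconditioner: copy the central diagonals of T into a circulant.
s = c.copy()
s[M // 2 + 1:] = c[1:M // 2][::-1]         # s_j = c_{M-j} for j > M/2
sf = np.fft.fft(s)

def strang_inv(v):
    return np.fft.ifft(np.fft.fft(v) / sf).real

A = LinearOperator((M, M), matvec=toep_mv)
P = LinearOperator((M, M), matvec=strang_inv)
b = np.ones(M)
x, info = cg(A, b, M=P)
print(f"cg info = {info}, residual = {np.linalg.norm(toep_mv(x) - b):.2e}")
```

Only the first column of T (O(M) storage) is ever held, which is the memory reduction the abstract refers to.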
Wilson, Peter J; Williams, Janet R; Smee, Robert I
2014-09-01
Nelson's syndrome is a unique clinical phenomenon of growth of a pituitary adenoma following bilateral adrenalectomies for the control of Cushing's disease. Primary management is surgical, with limited effective medical therapies available. We report our own institution's series of this pathology managed with radiation: prior to 1990, 12 patients were managed with conventional radiotherapy, and between 1990 and 2007, five patients underwent stereotactic radiosurgery (SRS) and two patients fractionated stereotactic radiotherapy (FSRT), both using the linear accelerator (LINAC). Tumour control was equivocal, with two of the five SRS patients having a reduction in tumour volume, one patient remaining unchanged, and two patients having an increase in volume. In the FSRT group, one patient had a decrease in tumour volume whilst the other had an increase in volume. Treatment related morbidity was low. Nelson's syndrome is a challenging clinical scenario, with a highly variable response to radiation in our series. Copyright © 2014 Elsevier Ltd. All rights reserved.
Wilson, P J; Williams, J R; Smee, R I
2014-01-01
Cushing's disease is hypercortisolaemia secondary to an adrenocorticotrophic hormone secreting pituitary adenoma. Primary management is almost always surgical, with limited effective medical interventions available. Adjuvant therapy in the form of radiation is gaining popularity, with the bulk of the literature related to the Gamma Knife. We present the results from our own institution using the linear accelerator (LINAC) since 1990. Thirty-six patients who underwent stereotactic radiosurgery (SRS), one patient who underwent fractionated stereotactic radiotherapy (FSRT) and for the purposes of comparison, 13 patients who had undergone conventional radiotherapy prior to 1990, were included in the analysis. Serum cortisol levels improved in nine of 36 (25%) SRS patients and 24 hour urinary free cortisol levels improved in 13 of 36 patients (36.1%). Tumour volume control was excellent in the SRS group with deterioration in only one patient (3%). The patient who underwent FSRT had a highly aggressive tumour refractory to radiation. Published by Elsevier Ltd.
Lomax, A J
2008-01-01
Simple tools for studying the effects of inter-fraction and inter-field motions on intensity modulated proton therapy (IMPT) plans have been developed, and have been applied to both 3D and distal edge tracking (DET) IMPT plans. For the inter-fraction motion, we have investigated the effects of misaligned density heterogeneities, whereas for the inter-field motion analysis, the effects of field misalignment on the plans have been assessed. Inter-fraction motion problems have been analysed using density differentiated error (DDE) distributions, which specifically show the additional problems resulting from misaligned density heterogeneities for proton plans. Likewise, for inter-field motion, we present methods for calculating motion differentiated error (MDE) distributions. DDE and MDE analysis of all plans demonstrates that the 3D approach is generally more robust to both inter-fraction and inter-field motions than the DET approach, but that strong in-field dose gradients can also adversely affect a plan's robustness. An important additional conclusion is that, for certain IMPT plans, even inter-fraction errors cannot necessarily be compensated for by the use of simple PTV margins, implying that more sophisticated tools need to be developed for uncertainty management and assessment for IMPT treatments at the treatment planning level.
Sousa, Carlos H.S.; Peixoto, Jose G.P., E-mail: chenrique@ird.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), RIo de Janeiro, RJ (Brazil)
2013-07-01
Activimeters determine the activity of radioactive samples and are validated by performance tests. This research determined the expanded uncertainties associated with the linearity test. Three dose calibrators and three sources of ⁹⁹ᵐTc were used for testing, following the protocol recommended by the IAEA, which takes the decay of the radioactive samples into account. The expanded uncertainties evaluated were not correlated with each other, and their analysis considered a rectangular probability distribution. The results are also presented in graphical form as the normalized measured activity in terms of the conventional true value. (author)
Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M; Walker, La Tonya Nicole; Roberts, Barry L; Malczynski, Leonard A.
2017-06-01
This report uses the CMIP5 series of climate model simulations to produce country- level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous-areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.
Rui, Yichao; Murphy, Daniel V; Wang, Xiaoli; Hoyle, Frances C
2016-10-18
Rebuilding 'lost' soil carbon (C) is a priority in mitigating climate change and underpinning key soil functions that support ecosystem services. Microorganisms determine if fresh C input is converted into stable soil organic matter (SOM) or lost as CO₂. Here we quantified if microbial biomass and respiration responded positively to addition of light fraction organic matter (LFOM, representing recent inputs of plant residue) in an infertile semi-arid agricultural soil. Field trial soil with different historical plant residue inputs [soil C content: control (tilled) = 9.6 t C ha⁻¹ versus tilled + plant residue treatment (tilled + OM) = 18.0 t C ha⁻¹] were incubated in the laboratory with a gradient of LFOM equivalent to 0 to 3.8 t C ha⁻¹ (0 to 500% LFOM). Microbial biomass C significantly declined under increased rates of LFOM addition while microbial respiration increased linearly, leading to a decrease in the microbial C use efficiency. We hypothesise this was due to insufficient nutrients to form new microbial biomass as LFOM input increased the ratio of C to nitrogen, phosphorus and sulphur of soil. Increased CO₂ efflux but constrained microbial growth in response to LFOM input demonstrated the difficulty for C storage in this environment.
F. Hossain
2004-01-01
This study presents a simple and efficient scheme for Bayesian estimation of uncertainty in soil moisture simulation by a Land Surface Model (LSM). The scheme is assessed within a Monte Carlo (MC) simulation framework based on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. A primary limitation of using the GLUE method is the prohibitive computational burden imposed by uniform random sampling of the model's parameter distributions. Sampling is improved in the proposed scheme by stochastic modeling of the parameters' response surface that recognizes the non-linear deterministic behavior between soil moisture and land surface parameters. Uncertainty in soil moisture simulation (model output) is approximated through a Hermite polynomial chaos expansion of normal random variables that represent the model's parameter (model input) uncertainty. The unknown coefficients of the polynomial are calculated using a limited number of model simulation runs. The calibrated polynomial is then used as a fast-running proxy to the slower-running LSM to predict the degree of representativeness of a randomly sampled model parameter set. An evaluation of the scheme's sampling efficiency is made through comparison with fully random MC sampling (the norm for GLUE) and the nearest-neighborhood sampling technique. The scheme was able to reduce the computational burden of random MC sampling for GLUE by 10%-70%. The scheme was also found to be about 10% more efficient than the nearest-neighborhood sampling method in predicting a sampled parameter set's degree of representativeness. The GLUE based on the proposed sampling scheme did not alter the essential features of the uncertainty structure in soil moisture simulation. The scheme can potentially make GLUE uncertainty estimation for any LSM more efficient, as it does not impose any additional structural or distributional assumptions.
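The core surrogate step can be sketched in a few lines: fit a Hermite polynomial chaos expansion to a small number of runs of a slow model, then use the calibrated polynomial as a fast proxy. The "slow model" here is a toy nonlinear response, not an actual land surface model, and the expansion degree and sample sizes are illustrative.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander, hermeval

# Hermite polynomial-chaos proxy fitted from a limited number of model runs,
# in the spirit of the proposed GLUE sampling scheme.
rng = np.random.default_rng(1)

def slow_model(xi):
    # Stand-in for an expensive LSM response to a standard-normal parameter.
    return np.exp(-0.5 * xi) + 0.1 * xi ** 2

deg = 4
xi_train = rng.standard_normal(25)                 # only 25 "expensive" runs
coef, *_ = np.linalg.lstsq(hermevander(xi_train, deg),
                           slow_model(xi_train), rcond=None)

# The calibrated polynomial is now a fast-running proxy.
xg = np.linspace(-2, 2, 401)
err = np.max(np.abs(hermeval(xg, coef) - slow_model(xg)))
print(f"max proxy error on [-2, 2]: {err:.4f}")
```

In the GLUE setting the proxy is queried before each candidate parameter set to decide whether the full LSM run is worth performing.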
Wilson, Peter J; Williams, Janet Rosemary; Smee, Robert Ian
2015-06-01
Primary management of prolactinomas is usually medical, with surgery a secondary option where necessary. This study is a review of a single centre's experience with focused radiotherapy where benefit was not gained by medical or surgical approaches. Radiotherapy as an alternative and adjuvant treatment for prolactinomas has been performed at our institution with the linear accelerator since 1990. We present a retrospective review of 13 patients managed with stereotactic radiosurgery (SRS) and 5 managed with fractionated stereotactic radiotherapy (FSRT), as well as 5 managed with conventional radiotherapy, at the Prince of Wales Hospital. Patients with a histopathologically diagnosed prolactinoma were eligible. Those patients who had a confirmed pathological diagnosis of prolactinoma following surgical intervention, a prolactin level elevated above 500 μg/L, or a prolactin level persistently elevated above 200 μg/L with exclusion of other causes were represented in this review. At the end of documented follow-up (SRS median 6 years, FSRT median 2 years), no SRS patients showed an increase in tumour volume. After FSRT, 1 patient showed an increase in size, 2 showed a decrease in size and 2 patients showed no change. Prolactin levels trended towards improvement after SRS and FSRT, but no patients achieved the remission level of <20 μg/L. Seven of 13 patients in the SRS group achieved a level of <500 μg/L, whereas no patients reached this target after FSRT. A reduction in prolactin level is frequent after SRS and FSRT for prolactinomas; however, true biochemical remission is uncommon. Tumour volume control in this series was excellent, but this may be related to the natural history of the disease. Morbidity and mortality after stereotactic radiation were very low in this series. © 2014 The Royal Australian and New Zealand College of Radiologists.
Hong, Linda X.; Garg, Madhur; Lasala, Patrick; Kim, Mimi; Mah, Dennis; Chen, Chin-Cheng; Yaparpalvi, Ravindra; Mynampati, Dinesh; Kuo, Hsiang-Chi; Guha, Chandan; Kalnicki, Shalom [Department of Radiation Oncology, Montefiore Medical Center and Albert Einstein College of Medicine, Bronx, New York 10461 (United States); Department of Neurosurgery, Montefiore Medical Center and Albert Einstein College of Medicine, Bronx, New York 10461 (United States); Department of Epidemiology and Population Health, Montefiore Medical Center and Albert Einstein College of Medicine, Bronx, New York 10461 (United States); Department of Radiation Oncology, Montefiore Medical Center and Albert Einstein College of Medicine, Bronx, New York 10461 (United States)
2011-03-15
Purpose: Sharp dose fall off outside a tumor is essential for high dose single fraction stereotactic radiosurgery (SRS) plans. This study explores the relationship among tumor dose inhomogeneity, conformity, and dose fall off in normal tissues for micromultileaf collimator (mMLC) linear accelerator (LINAC) based cranial SRS plans. Methods: Between January 2007 and July 2009, 65 patients with single cranial lesions were treated with LINAC-based SRS. Among them, tumors had maximum diameters ≤20 mm: 31; between 20 and 30 mm: 21; and >30 mm: 13. All patients were treated with 6 MV photons on a Trilogy linear accelerator (Varian Medical Systems, Palo Alto, CA) with a tertiary m3 high-resolution mMLC (Brainlab, Feldkirchen, Germany), using either noncoplanar conformal fixed fields or dynamic conformal arcs. The authors also created retrospective study plans with identical beam arrangement as the treated plan but with different tumor dose inhomogeneity by varying the beam margins around the planning target volume (PTV). All retrospective study plans were normalized so that the minimum PTV dose was the prescription dose (PD). Isocenter dose, mean PTV dose, RTOG conformity index (CI), RTOG homogeneity index (HI), dose gradient index R₅₀-R₁₀₀ (defined as the difference between the equivalent sphere radius of the 50% isodose volume and that of the prescription isodose volume), and normal tissue volume (as a ratio to PTV volume) receiving 50% of the prescription dose (NTV₅₀) were calculated. Results: HI was inversely related to the beam margins around the PTV. CI had a "V"-shaped relationship with HI, reaching a minimum when HI was approximately 1.3. Isocenter dose and mean PTV dose (as percentage of PD) increased linearly with HI. R₅₀-R₁₀₀ and NTV₅₀ initially declined with HI and then reached a plateau when HI was approximately 1.3. These trends also held when tumors were grouped according to their maximum diameters. The smallest tumor group
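The plan-quality indices used above follow directly from volumes and doses; the helper functions below implement the standard RTOG definitions, with made-up illustrative input values rather than numbers from the study.

```python
import math

def conformity_index(piv_cc, ptv_cc):
    """RTOG CI: prescription isodose volume over target volume."""
    return piv_cc / ptv_cc

def homogeneity_index(d_max, d_prescription):
    """RTOG HI: maximum dose over prescription dose."""
    return d_max / d_prescription

def equiv_sphere_radius_cm(v_cc):
    """Radius of a sphere with the given volume (1 cc = 1 cm^3)."""
    return (3.0 * v_cc / (4.0 * math.pi)) ** (1.0 / 3.0)

def gradient_index_r50_r100(v50_cc, v100_cc):
    """R50 - R100 (cm): equivalent-sphere radius of the 50% isodose volume
    minus that of the prescription isodose volume."""
    return equiv_sphere_radius_cm(v50_cc) - equiv_sphere_radius_cm(v100_cc)

# Illustrative plan: 4.2 cc PTV, 5.5 cc prescription isodose volume,
# 22 cc half-prescription isodose volume, 26 Gy max for a 20 Gy prescription.
ci = conformity_index(piv_cc=5.5, ptv_cc=4.2)
hi = homogeneity_index(d_max=26.0, d_prescription=20.0)
fall_off = gradient_index_r50_r100(v50_cc=22.0, v100_cc=5.5)
print(f"CI = {ci:.2f}, HI = {hi:.2f}, R50-R100 = {fall_off:.2f} cm")
```

A smaller R₅₀-R₁₀₀ means a sharper dose fall off, which is the quantity the study trades against HI.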
Stamova, Ivanka; Stamov, Gani
2017-12-01
In this paper, we propose a fractional-order neural network system with time-varying delays and reaction-diffusion terms. We first develop a new Mittag-Leffler synchronization strategy for the controlled nodes via impulsive controllers, with sufficient conditions given using the fractional Lyapunov method. We also study the global Mittag-Leffler synchronization of two identical fractional impulsive reaction-diffusion neural networks using linear controllers, which was an open problem even for integer-order models. Since the Mittag-Leffler stability notion is a generalization of the exponential stability concept for fractional-order systems, our results extend and improve the exponential impulsive control theory of neural network systems with time-varying delays and reaction-diffusion terms to the fractional-order case. The fractional-order derivatives allow us to model long-term memory in the neural networks, and thus the present research provides a conceptually straightforward mathematical representation of rather complex processes. Illustrative examples are presented to show the validity of the obtained results. We show that by means of appropriate impulsive controllers we can realize the stability goal and control the qualitative behavior of the states. An image encryption scheme is extended using fractional derivatives. Copyright © 2017 Elsevier Ltd. All rights reserved.
Maryam Montazeri
2013-01-01
This paper presents a fuzzy-adaptive control scheme for rigid manipulators with unknown parameters. Lagrange's method is employed for computing the robot motion dynamics. Stability is guaranteed through Lyapunov's theory, using suitable adaptive rules that ensure all signals in the closed-loop system are bounded and the tracking error asymptotically converges to zero. Numerical simulations verify the effectiveness of the proposed method compared with other controllers, and also show that the proposed controller can deal with uncertainties in the system.
Olsson, Per-Ivar; Fiandaca, Gianluca; Larsen, Jakob Juul
…a logarithmic gate-width distribution for optimizing IP data quality, and an estimate of gating uncertainty. Additional steps include modelling and cancelling of non-linear background drift and harmonic noise, and a technique for efficiently identifying and removing spikes. The cancelling of non-linear background drift is based on a Cole-Cole model which effectively handles current-induced electrode polarization drift. The model-based cancelling of harmonic noise reconstructs the harmonic noise as a sum of harmonic signals with a common fundamental frequency. After segmentation of the signal and determining … The processing steps are successfully applied on full field profile data sets. With the model-based cancelling of harmonic noise, the first usable IP gate is moved one decade closer to time zero. Furthermore, with a Cole-Cole background drift model, the shape of the response at late times is accurately retrieved…
Luu, Keurfon; Noble, Mark; Gesret, Alexandrine; Belayouni, Nidhal; Roux, Pierre-François
2018-04-01
Seismic traveltime tomography is an optimization problem that requires large computational effort. Therefore, linearized techniques are commonly used for their low computational cost. These local optimization methods are likely to get trapped in a local minimum as they critically depend on the initial model. On the other hand, global optimization methods based on MCMC are insensitive to the initial model but turn out to be computationally expensive. Particle Swarm Optimization (PSO) is a rather new global optimization approach with few tuning parameters that has shown excellent convergence rates and is straightforwardly parallelizable, allowing a good distribution of the workload. However, while it can traverse several local minima of the evaluated misfit function, the classical implementation of PSO can get trapped in local minima at later iterations as particle inertia fades. We propose a Competitive PSO (CPSO) to help particles escape from local minima with a simple implementation that improves the swarm's diversity. The model space can be sampled by running the optimizer multiple times and keeping all the models explored by the swarms in the different runs. A traveltime tomography algorithm based on CPSO is successfully applied on a real 3D data set in the context of induced seismicity.
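The classical global-best PSO update that this abstract builds on can be sketched in a few lines. This is a generic toy implementation minimizing a convex test function, not the authors' competitive CPSO; the coefficient values (inertia w, cognitive c1, social c2) and all names are illustrative assumptions:

```python
import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal classical (global-best) PSO sketch."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))    # particle positions
    v = np.zeros((n_particles, dim))               # particle velocities
    pbest = x.copy()                               # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()         # global best
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better] = x[better]
        pbest_val[better] = vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())

# Minimize the 2D sphere function; the swarm should collapse onto the origin.
best_x, best_val = pso(lambda z: float(np.sum(z**2)))
```

The "inertia fades" failure mode discussed in the abstract corresponds to w shrinking the velocity term until particles can no longer leave the basin of the current global best.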
Castillo, J.A.; Ramirez, J.R.; Alonso, G. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)]. e-mail: jacm@nuclear.inin.mx
2003-07-01
The linear reactivity model allows multicycle analysis of pressurized water reactors in a simple and quick way. In boiling water reactors, however, the void fraction varies axially from 0% of voids at the bottom of the fuel assemblies to approximately 70% of voids at their exit. It is therefore very important to determine the average void fraction during the different stages of reactor operation in order to appropriately predict the assembly burnup from the slope of the linear reactivity model. In this work, the power profile is tracked for different burnup steps of a typical operating cycle of a boiling water reactor. From these profiles an algorithm is built that determines the void profile and thus its average value. The results are compared against those reported by the CM-PRESTO code, which uses another method to carry out this calculation. Finally, the range in which the average void fraction lies during a typical cycle is determined, and the impact that using this value would have on the prediction of the reactivity produced by the fuel assemblies is estimated. (Author)
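The averaging step described above, collapsing an axial void-fraction profile to a single mean value, can be sketched as follows. The exponential profile shape is a hypothetical stand-in for profiles derived from the power distribution, not data from the study; it merely reproduces the stated 0% inlet to roughly 70% exit behavior:

```python
import numpy as np

# Illustrative axial void-fraction profile (assumed shape, not plant data):
# 0% voids at the assembly inlet rising toward ~70% at the exit.
z = np.linspace(0.0, 1.0, 101)            # normalized axial height
alpha = 0.70 * (1.0 - np.exp(-5.0 * z))   # assumed void-fraction profile

# Axially averaged void fraction via the composite trapezoidal rule.
dz = z[1] - z[0]
avg_void = float(np.sum(0.5 * (alpha[:-1] + alpha[1:])) * dz)
```

For this assumed profile the average works out to roughly 0.56, i.e. the single value that would feed the slope of the linear reactivity model.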
Mukherjee, Kanchan Kumar; Kumar, Narendra; Tripathi, Manjul; Oinam, Arun S; Ahuja, Chirag K; Dhandapani, Sivashanmugam; Kapoor, Rakesh; Ghoshal, Sushmita; Kaur, Rupinder; Bhatt, Sandeep
2017-01-01
To evaluate the feasibility, safety and efficacy of dose fractionated gamma knife radiosurgery (DFGKRS) on a daily schedule beyond the linear quadratic (LQ) model, for large volume arteriovenous malformations (AVMs). Between 2012-16, 14 patients with large AVMs (median volume 26.5 cc) unsuitable for surgery or embolization were treated in 2-3 DFGKRS sessions. The Leksell G frame was kept in situ during the whole procedure. 86% (n = 12) of patients had radiologic evidence of bleed, and 43% (n = 6) had presented with a history of seizures. 57% (n = 8) of patients received daily treatment for 3 days and 43% (n = 6) were on an alternate-day (2 fractions) regimen. The marginal dose was split into 2 or 3 fractions of the ideal prescription dose of a single fraction of 23-25 Gy. The median follow-up period was 35.6 months (8-57 months). In the three-fraction scheme, the marginal dose ranged from 8.9-11.5 Gy, while in the two-fraction scheme, the marginal dose ranged from 11.3-15 Gy at 50% per fraction. Headache (43%, n = 6) was the most common early postoperative complication, which was controlled with a short course of steroids. Follow-up evaluation of at least three years was achieved in seven patients, of whom 43% have shown complete nidus obliteration, while obliteration has been in the range of 50-99% in the rest. Overall, there was a 67.8% reduction in the AVM volume at 3 years. Nidus obliteration at 3 years showed a significant rank-order correlation with the cumulative prescription dose (ρ = 0.95, P value 0.01), with attainment of near-total (more than 95%) obliteration rates beyond 29 Gy of cumulative prescription dose. No patient receiving a cumulative prescription dose of less than 31 Gy had any severe adverse reaction. In covariate-adjusted ordinal regression, only the cumulative prescription dose had a significant correlation with common terminology criteria for adverse events (CTCAE) severity (P value 0.04), independent of age, AVM volume
Juste, B.; Miró, R.; Verdú, G.; Macián, R.
2012-01-01
A calculation of the correct dose in radiation therapy requires an accurate description of the radiation source because uncertainties in the characterization of the linac photon spectrum are propagated through the dose calculations. Unfortunately, detailed knowledge of the initial electron beam parameters is not readily available, and many researchers adjust the initial electron fluence values by trial-and-error methods. The main goal of this work was to develop a methodology to characterize the fluence of initial electrons before they hit the tungsten target of an Elekta Precise medical linear accelerator. To this end, we used a Monte Carlo technique to analyze the influence of the characteristics of the initial electron beam on the distribution of absorbed dose from a 6 MV linac photon beam in a water phantom. The technique is based on calculations with Software for Uncertainty and Sensitivity Analysis (SUSA) and Monte Carlo simulations with the MCNP5 transport code. The free parameters used in the SUSA calculations were the mean energy and full-width-at-half-maximum (FWHM) of the initial electron distribution. A total of 93 combinations of these parameters gave initial electron fluence configurations. The electron spectra thus obtained were used in a simulation of the electron transport through the target of the linear accelerator, which produced different photon (Bremsstrahlung) spectra. The simulated photon spectra were compared with the 6-MV photon spectrum provided by the linac manufacturer (Elekta). This comparison revealed how the mean energy and FWHM of the initial electron fluence affect the spectrum of the generated photons. This study has made it possible to fine-tune the examined electron beam parameters to obtain the resulting absorbed doses with acceptable accuracy (error <1%). - Highlights: ► Mean energy and radial spread are important parameters for simulating the incident electron beam in radiation therapy. ► Errors in determining the electron
Martin, P.; Zamudio-Cristi, J.
1982-01-01
A method is described to obtain fractional approximations for linear first-order differential equations with polynomial coefficients. This approximation can give good accuracy in a large region of the complex variable plane that may include the whole real axis. The parameters of the approximation are solutions of algebraic equations obtained from the coefficients of the highest and lowest powers of the variable after substitution of the fractional approximation into the differential equation. The method is more general than the asymptotic Padé method, and it does not require determining the power series or asymptotic expansion. A simple approximation for the exponential integral is found, which gives three exact digits for most real values of the variable. Approximations of higher accuracy and of the same degree as those of other authors are also obtained. (Author) [pt
Alsmiller, R.G. Jr.; Alsmiller, F.S.; Lewis, T.A.
1986-05-01
In a series of previous papers, calculated results obtained using a one-dimensional ballistic model were presented to aid in the design of a prebuncher for the Oak Ridge Electron Linear Accelerator. As part of this work, a model was developed to provide limits on the fraction of an incident current pulse that would be accelerated by the existing accelerator. In this paper, experimental data on this fraction are presented and the validity of the model developed previously is tested by comparing calculated and experimental data. Part of the experimental data is used to fix the physical parameters in the model, and then good agreement between the calculated results and the rest of the experimental data is obtained.
Kovacic, Ivana
2009-01-01
An analytical approach is presented to determine the approximate solution for the periodic motion of non-conservative oscillators with a fractional-order restoring force and slowly varying parameters. The solution has the form of a first-order differential equation for the amplitude and phase of motion. The method used is based on the combination of the Krylov-Bogoliubov method with Hamilton's variational principle, with a non-commutative rule for the variation of velocity. Conservative systems with slowly varying parameters are also considered. The corresponding adiabatic invariant is obtained. Two examples are given to illustrate the derived theoretical results.
Wennberg, Berit M.; Baumann, Pia; Gagliardi, Giovanna
2011-01-01
Background. In SBRT of lung tumours no established relationship between dose-volume parameters and the incidence of lung toxicity has been found. The aim of this study is to compare the LQ model and the universal survival curve (USC) for calculating biologically equivalent doses in SBRT, to see if this will improve knowledge of this relationship. Material and methods. Toxicity data on radiation pneumonitis grade 2 or more (RP2+) from 57 patients were used; 10.5% were diagnosed with RP2+. The lung DVHs were corrected for fractionation (LQ and USC) and analysed with the Lyman-Kutcher-Burman (LKB) model. In the LQ correction α/β = 3 Gy was used, and the USC parameters were: α/β = 3 Gy, D0 = 1.0 Gy, n = 10, α = 0.206 Gy⁻¹ and dT = 5.8 Gy. In order to understand the relative contribution of different dose levels to the calculated NTCP, the concept of fractional NTCP was used. This might give insight into the question of whether 'high doses to small volumes' or 'low doses to large volumes' are most important for lung toxicity. Results and Discussion. NTCP analysis with the LKB model using parameters m = 0.4, D50 = 30 Gy resulted in a volume dependence parameter (n) of n = 0.87 with LQ correction and n = 0.71 with USC correction. Using parameters m = 0.3, D50 = 20 Gy gave n = 0.93 with LQ correction and n = 0.83 with USC correction. In SBRT of lung tumours, NTCP modelling of lung toxicity comparing the models (LQ, USC) for fractionation correction shows that low doses contribute less and high doses more to the NTCP when using the USC model. Comparing NTCP modelling of SBRT data with data from breast cancer, lung cancer and whole lung irradiation implies that the response of the lung is treatment specific. More data are however needed in order to have a more reliable modelling
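The LKB model referred to above can be sketched directly: NTCP is the normal-probit transform of (gEUD − D50)/(m·D50), with the generalized equivalent uniform dose gEUD computed from the DVH using the volume parameter n. The DVH bins below are invented for illustration only; the parameter set (n = 0.87, m = 0.4, D50 = 30 Gy) is one of those quoted in the abstract:

```python
from math import erf, sqrt

def geud(doses, volumes, n):
    """Generalized EUD of a (dose, fractional-volume) DVH; n is the volume parameter.
    gEUD = (sum_i v_i * d_i^(1/n))^n."""
    a = 1.0 / n
    return sum(v * d**a for d, v in zip(doses, volumes)) ** (1.0 / a)

def lkb_ntcp(doses, volumes, n, m, d50):
    """Lyman-Kutcher-Burman NTCP: standard normal CDF of (gEUD - D50)/(m * D50)."""
    t = (geud(doses, volumes, n) - d50) / (m * d50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Hypothetical three-bin lung DVH (dose in Gy, fractional volume):
ntcp = lkb_ntcp(doses=[5.0, 15.0, 30.0], volumes=[0.5, 0.3, 0.2],
                n=0.87, m=0.4, d50=30.0)
```

A uniform whole-organ dose equal to D50 reproduces NTCP = 0.5, which is a convenient sanity check on any implementation.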
N'Doye, Ibrahima
2018-02-13
In this paper, we propose a robust fractional-order proportional-integral (FOPI) observer for the synchronization of nonlinear fractional-order chaotic systems. The convergence of the observer is proved, and sufficient conditions are derived in terms of a linear matrix inequalities (LMIs) approach by using an indirect Lyapunov method. The proposed FOPI observer is robust against Lipschitz additive nonlinear uncertainty. It is also compared to the fractional-order proportional (FOP) observer and its performance is illustrated through simulations done on the fractional-order chaotic Lorenz system.
N'Doye, Ibrahima; Salama, Khaled N.; Laleg-Kirati, Taous-Meriem
2018-01-01
In this paper, we propose a robust fractional-order proportional-integral (FOPI) observer for the synchronization of nonlinear fractional-order chaotic systems. The convergence of the observer is proved, and sufficient conditions are derived in terms of a linear matrix inequalities (LMIs) approach by using an indirect Lyapunov method. The proposed FOPI observer is robust against Lipschitz additive nonlinear uncertainty. It is also compared to the fractional-order proportional (FOP) observer and its performance is illustrated through simulations done on the fractional-order chaotic Lorenz system.
Esmaeily, Ali; Ahmadi, Abdollah; Raeisi, Fatima; Ahmadi, Mohammad Reza; Esmaeel Nezhad, Ali; Janghorbani, Mohammadreza
2017-01-01
A new optimization framework based on a MILP model is introduced in this paper for the stochastic self-scheduling of hydrothermal units, known as the HTSS problem, implemented in a joint energy and reserve electricity market with a day-ahead mechanism. The proposed MILP framework includes practical constraints such as the cost due to the valve-loading effect, the limit due to DRR, and multi-POZs, which have been less investigated in electricity market models. For higher accuracy, multiple performance curves are used to model the hydro generating units. The problem is formulated on the basis of a stochastic optimization technique in which the objective function maximizes the expected profit using the MILP technique. The suggested stochastic self-scheduling model employs the price forecast error in order to take into account the price uncertainty. Besides, LMCS is combined with a roulette wheel mechanism to generate the scenarios corresponding to the non-spinning and spinning reserve prices, as well as the energy price, at each hour of the scheduling horizon. Finally, the IEEE 118-bus power system is used to demonstrate the performance and efficiency of the suggested technique. - Highlights: • Characterizing the uncertainties of price and FOR of units. • Replacing the fixed ramping rate constraints with dynamic ones. • Proposing a linearized model for the valve-point effects of thermal units. • Taking into consideration the multi-POZs relating to the thermal units. • Taking into consideration the multi-performance curves of hydroelectric units.
Sifeu Takougang Kingni
2017-01-01
A linear resistive-capacitive-inductance shunted junction (LRCLSJ) model, obtained by replacing the nonlinear piecewise resistance of a nonlinear resistive-capacitive-inductance shunted junction (NRCLSJ) model with a linear resistance, is analyzed in this paper. The LRCLSJ model has two or no equilibrium points depending on the dc bias current. For a suitable choice of the parameters, the LRCLSJ model without an equilibrium point can exhibit regular and fast spiking, intrinsic and periodic bursting, and periodic and chaotic behaviors. We show that the LRCLSJ model displays dynamical behaviors similar to those of the NRCLSJ model. Moreover, the coexistence of periodic and chaotic attractors is found in the LRCLSJ model for specific parameters. The lowest order of the commensurate form of the no-equilibrium LRCLSJ model that exhibits chaotic behavior is found to be 2.934. Moreover, adaptive finite-time synchronization with parameter estimation is applied to achieve synchronization of unidirectionally coupled identical fractional-order forms of the chaotic no-equilibrium LRCLSJ model. Finally, a cryptographic encryption scheme based on the finite-time synchronization of fractional-order chaotic no-equilibrium LRCLSJ models is illustrated through a numerical example, showing that a high-security device can be produced using this system.
S. Alonso-Quesada
2010-01-01
This paper presents a strategy for designing a robust discrete-time adaptive controller for stabilizing linear time-invariant (LTI) continuous-time dynamic systems. Such systems may be unstable and non-inversely stable in the worst case. A reduced-order model is considered to design the adaptive controller. The control design is based on the discretization of the system with the use of a multirate sampling device with a fast-sampled control signal. A suitable on-line adaptation of the multirate gains guarantees the stability of the inverse of the discretized estimated model, which is used to parameterize the adaptive controller. A dead zone is included in the parameter estimation algorithm for robustness purposes under the presence of unmodeled dynamics in the controlled dynamic system. The adaptive controller guarantees the boundedness of the measured system signal for all time. Some examples illustrate the efficacy of this control strategy.
Parametric uncertainty modeling for robust control
Rasmussen, K.H.; Jørgensen, Sten Bay
1999-01-01
The dynamic behaviour of a non-linear process can often be approximated with a time-varying linear model. In the presented methodology the dynamics are modeled non-conservatively as parametric uncertainty in linear time-invariant models. The obtained uncertainty description makes it possible to perform robustness analysis on a control system using the structured singular value. The idea behind the proposed method is to fit a rational function to the parameter variation. The parameter variation can then be expressed as a linear fractional transformation (LFT). It is discussed how the proposed … point changes. It is shown that a diagonal PI control structure provides robust performance towards variations in feed flow rate or feed concentrations. However, when including both liquid and vapor flow delays, robust performance specifications cannot be satisfied with this simple diagonal control structure …
Qiang Fu
2018-05-01
The potential influence of natural variations in the climate system on global warming can change the hydrological cycle and threaten current strategies of water management. A simulation-based linear fractional programming (SLFP) model, which integrates a runoff simulation model (RSM) into a linear fractional programming (LFP) framework, is developed for optimal water resource planning. The SLFP model has multiple objectives, such as benefit maximization and water supply minimization, balancing water conflicts among various water demand sectors and addressing the complexities of the water resource allocation system. Lingo and Excel programming solutions were used to solve the model. Water resources in the main stream basin of the Songhua River are allocated to 4 water demand sectors in 8 regions during two planning periods under different scenarios. Results show that the increase or decrease of the water supply to the domestic sector is related to the change in population density in different regions in different target years. In 2030, the water allocation to the industrial sector decreased by 1.03-3.52% compared with that in 2020, while the water allocation to the environmental sector increased by 0.12-1.29%. Agricultural water supply accounts for 54.79-77.68% of the total water supply in the different regions. These changes in water resource allocation for the various sectors were affected by the different scenarios in 2020; however, water resource allocation for each sector was relatively stable under the different scenarios in 2030. These results suggest that the developed SLFP model can help to improve the adjustment of the water use structure and water utilization efficiency.
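The LFP core of such models, maximizing a ratio of linear functions over linear constraints, is classically reduced to an ordinary LP by the Charnes-Cooper transformation. The toy two-variable problem below uses invented numbers (it is not the SLFP model) and verifies by brute-force grid search that the transformation preserves the optimal ratio:

```python
import numpy as np

# Linear fractional program: max (c@x + c0)/(d@x + d0)  s.t.  A@x <= b, x >= 0.
# Charnes-Cooper substitutes y = t*x, t = 1/(d@x + d0), giving the LP
#   max c@y + c0*t  s.t.  A@y <= b*t, d@y + d0*t = 1, y >= 0, t >= 0.
c, c0 = np.array([3.0, 2.0]), 1.0
d, d0 = np.array([1.0, 1.0]), 2.0
A, b = np.array([[1.0, 1.0]]), np.array([4.0])

# Brute-force the fractional objective on a grid over the feasible region.
xs = np.linspace(0.0, 4.0, 401)
X1, X2 = np.meshgrid(xs, xs)
feas = (A[0, 0] * X1 + A[0, 1] * X2) <= b[0]
ratio = (c[0] * X1 + c[1] * X2 + c0) / (d[0] * X1 + d[1] * X2 + d0)
masked = np.where(feas, ratio, -np.inf)
i, j = np.unravel_index(np.argmax(masked), masked.shape)
x_star = np.array([X1[i, j], X2[i, j]])   # grid optimum (here: x = (4, 0))
best = float(masked[i, j])

# Evaluate the transformed LP objective at the corresponding (y, t) point:
t = 1.0 / (d @ x_star + d0)
y = t * x_star
lp_obj = float(c @ y + c0 * t)            # equals the optimal ratio
```

Because the LP objective at (y, t) equals the original ratio at x = y/t, any LP solver applied to the transformed problem recovers the fractional optimum.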
Oktem, Figen S; Ozaktas, Haldun M
2010-08-01
Linear canonical transforms (LCTs) form a three-parameter family of integral transforms with wide application in optics. We show that LCT domains correspond to scaled fractional Fourier domains and thus to scaled oblique axes in the space-frequency plane. This allows LCT domains to be labeled and ordered by the corresponding fractional order parameter and provides insight into the evolution of light through an optical system modeled by LCTs. If a set of signals is highly confined to finite intervals in two arbitrary LCT domains, the space-frequency (phase space) support is a parallelogram. The number of degrees of freedom of this set of signals is given by the area of this parallelogram, which is equal to the bicanonical width product but usually smaller than the conventional space-bandwidth product. The bicanonical width product, which is a generalization of the space-bandwidth product, can provide a tighter measure of the actual number of degrees of freedom, and allows us to represent and process signals with fewer samples.
Saminadayar, L.
2001-01-01
Twenty years ago, fractional charges were imagined to explain values of conductivity in some materials. Recent experiments have proved the existence of charges whose value is one third of the electron charge. This article presents the experimental facts that have led theorists to predict the existence of fractional charges, from the motion of quasi-particles in a linear chain of polyacetylene to the quantum Hall effect. According to the latest theories, fractional charges are neither bosons nor fermions but anyons; they obey an exclusion principle that is less stringent than that for fermions. (A.C.)
Julie Vercelloni
Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better informed decision making.
El Hanandeh, Ali; El-Zein, Abbas
2010-01-01
A modified version of the multi-criteria decision aid, ELECTRE III has been developed to account for uncertainty in criteria weightings and threshold values. The new procedure, called ELECTRE-SS, modifies the exploitation phase in ELECTRE III, through a new definition of the pre-order and the introduction of a ranking index (RI). The new approach accommodates cases where incomplete or uncertain preference data are present. The method is applied to a case of selecting a management strategy for the bio-degradable fraction in the municipal solid waste of Sydney. Ten alternatives are compared against 11 criteria. The results show that anaerobic digestion (AD) and composting of paper are less environmentally sound options than recycling. AD is likely to out-perform incineration where a market for heating does not exist. Moreover, landfilling can be a sound alternative, when considering overall performance and conditions of uncertainty.
Bo Zhang
2016-01-01
This paper presents a model for analyzing a five-phase fractional-slot permanent magnet tubular linear motor (FSPMTLM) with the modified winding function approach (MWFA). MWFA is a fast modeling method that gives deep insight into the calculation of the following parameters: air-gap magnetic field, inductances, flux linkages, and detent force, which are essential in modeling the motor. First, using a magnetic circuit model, the air-gap magnetic density is computed from the stator magnetomotive force (MMF), flux barrier, and mover geometry. Second, the inductances, flux linkages, and detent force are analytically calculated using the modified winding function and the air-gap magnetic density. Finally, a model has been established with the five-phase Park transformation and simulated. The calculations of the detent force reveal that the end-effect force is its main component. This is also proven by finite element analysis of the motor. The accuracy of the model is validated by comparison with results obtained using a semianalytical method (SAM) and with measurements of the motor's transient characteristics. In addition, the proposed method requires less computation time.
Thomas, R.E.
1982-03-01
An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software
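For the linear model Ax = b discussed above, the adjoint shortcut can be shown in a few lines: a single solve of the transposed system yields the sensitivities of a response R = cᵀx with respect to every entry of b at once, and dR/dA_ij = −λ_i x_j follows for free. The matrices below are arbitrary illustrative numbers, not from the report:

```python
import numpy as np

# Adjoint sensitivity for the linear model A x = b with response R = c @ x.
# One adjoint solve A.T @ lam = c gives all sensitivities:
#   dR/db_i = lam_i,    dR/dA_ij = -lam_i * x_j.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 1.0])

x = np.linalg.solve(A, b)        # forward (primal) solve
lam = np.linalg.solve(A.T, c)    # single adjoint solve
dR_db = lam                      # sensitivities w.r.t. all entries of b

# Finite-difference check on b[0] (verification only):
eps = 1e-6
R0 = float(c @ x)
R1 = float(c @ np.linalg.solve(A, b + np.array([eps, 0.0])))
fd = (R1 - R0) / eps
```

This is exactly why the adjoint method scales well: the cost is one extra linear solve regardless of how many input parameters are perturbed, in contrast to one forward solve per parameter for sampling or finite differences.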
Ondo Meye, P; Schandorf, C; Amoako, J K; Manteaw, P O; Amoatey, E A; Adjei, D N
2017-12-01
An inter-comparison study was conducted to assess the capability of the dosimetry systems of individual monitoring services (IMSs) in Gabon and Ghana to measure the personal dose equivalent Hp(10) in photon fields. The performance indicators assessed were the lower limit of detection, linearity and uncertainty in measurement. Monthly and quarterly recording levels were proposed with corresponding values of 0.08 and 0.025 mSv, and 0.05 and 0.15 mSv for the TLD and OSL systems, respectively. The linearity of the dosimetry systems was assessed following the requirement given in the Standard IEC 62387 of the International Electrotechnical Commission (IEC). The results obtained for the two systems were satisfactory. The procedure followed for the uncertainty assessment is the one given in the IEC technical report TR 62461. The maximum relative overall uncertainties, in absolute value, expressed in terms of Hp(10), for the TL dosimetry system Harshaw 6600, are 44.35% for true doses below 0.40 mSv and 36.33% for true doses ≥0.40 mSv. For the OSL dosimetry system microStar, the maximum relative overall uncertainties, in absolute value, are 52.17% for true doses below 0.40 mSv and 37.43% for true doses ≥0.40 mSv. These results are in good agreement with the requirements for accuracy of the International Commission on Radiological Protection. When expressing the uncertainties in terms of response, comparison with the IAEA requirements for overall accuracy showed that the uncertainty results were also acceptable. The values of Hp(10) directly measured by the two dosimetry systems showed a significant underestimation for the Harshaw 6600 system, and a slight overestimation for the microStar system. After correction for linearity of the measured doses, the two dosimetry systems gave better and comparable results.
Ding, C; Hrycushko, B; Jiang, S; Meyer, J; Timmerman, R
2014-01-01
Purpose: To compare the radiobiological effect on large tumors and surrounding normal tissues from single fraction SRS, multi-fractionated SRT, and multi-staged SRS treatment. Methods: An anthropomorphic head phantom with a centrally located large volume target (18.2 cm³) was scanned using a 16-slice large-bore CT simulator. Scans were imported to the Multiplan treatment planning system, where a total prescription dose of 20 Gy was used for a single, a three-staged and a three-fractionated treatment. Cyber Knife treatment plans were inversely optimized for the target volume to achieve at least 95% coverage of the prescription dose. For the multi-stage plan, the target was segmented into three subtargets having similar volume and shape. Staged plans for individual subtargets were generated based on a planning technique where the beam MUs of the original plan on the total target volume are changed by weighting the MUs based on projected beam lengths within each subtarget. Dose matrices for each plan were exported in DICOM format and used to calculate equivalent dose distributions in 2 Gy fractions using an alpha/beta ratio of 10 for the target and 3 for normal tissue. Results: The single fraction SRS, multi-stage and multi-fractionated SRT plans had an average 2 Gy dose equivalent to the target of 62.89 Gy, 37.91 Gy and 33.68 Gy, respectively. The normal tissue within the 12 Gy physical dose region had an average 2 Gy dose equivalent of 29.55 Gy, 16.08 Gy and 13.93 Gy, respectively. Conclusion: The single fraction SRS plan had the largest predicted biological effect for the target and the surrounding normal tissue. The multi-stage treatment provided a more potent biological effect on target compared to the multi-fractionated SRT treatment, with less normal tissue biological effect than the single-fraction SRS treatment
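The fractionation correction used above, converting a physical dose D delivered in fractions of size d into its equivalent dose in 2 Gy fractions, follows from the LQ model as EQD2 = D·(d + α/β)/(2 + α/β). A homogeneous-dose sketch (the paper's values are voxel-wise averages over full dose matrices, so they differ from these point estimates):

```python
def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2 Gy fractions from the linear-quadratic model:
    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta)."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# 20 Gy prescription, alpha/beta = 10 Gy for tumor:
single = eqd2(20.0, 20.0, 10.0)        # single 20 Gy fraction -> 50.0 Gy EQD2
staged = eqd2(20.0, 20.0 / 3.0, 10.0)  # three ~6.67 Gy fractions -> ~27.8 Gy EQD2

# Same comparison for late-responding normal tissue, alpha/beta = 3 Gy:
nt_single = eqd2(20.0, 20.0, 3.0)      # -> 92.0 Gy EQD2
```

The steep jump in normal-tissue EQD2 for the single fraction (α/β = 3 Gy amplifies large fraction sizes) is the quantitative reason staging or fractionating spares surrounding tissue in the comparison above.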
Radiation-induced lung damage in rats: The influence of fraction spacing on effect per fraction
Haston, C.K.; Hill, R.P.; Newcomb, C.H.; Van Dyk, J.
1994-01-01
When the linear-quadratic model is used to predict fractionated treatments which are isoeffective, it is usually assumed that each (equal size) treatment fraction has an equal effect, independent of the time at which it was delivered during a course of treatment. Previous work has indicated that this assumption may not be valid in the context of radiation-induced lung damage in rats. Consequently the authors tested directly the validity of the assumption that each fraction has an equal effect, independent of the time it is delivered. An experiment was completed in which fractionated irradiation was given to the whole thoraces of Sprague-Dawley rats. All treatment schedules consisted of eleven equal dose fractions in 36 days given as a split course, with some groups receiving the bulk of the doses early in the treatment schedule, before a 27-day gap, and others receiving most of the dose toward the end of the treatment schedule, after the time gap. To monitor the incidence of radiation-induced damage, breathing rate and lethality assays were used. The maximum differences in the LD50s and breathing rate ED50s for the different fractionation schedules were 4.0% and 7.7%, respectively. The lethality data and breathing rate data were consistent with results expected from modelling using the linear-quadratic model with the inclusion of an overall time factor, but not with the generalized linear-quadratic model which accounted for fraction spacing. For conventional daily fractionation, and within the range of experimental uncertainties, the results indicate that the effect of a treatment fraction does not depend on the time at which it is given (its position) in the treatment. The results indicate no need to extend isoeffect formulae to consider the effect of each fraction separately for radiation-induced lung damage. 21 refs., 6 figs., 3 tabs
Luiza G. Ungarova
2016-12-01
We consider and analyze uniaxial phenomenological models of viscoelastic deformation based on fractional analogues of the Scott Blair, Voigt, Maxwell, Kelvin and Zener rheological models. Analytical solutions of the corresponding differential equations with fractional Riemann–Liouville operators are obtained for constant stress with subsequent unloading; they are written in terms of the generalized (two-parameter) fractional exponential function and contain from two to four parameters depending on the type of model. A method for identifying the model parameters from experimental creep curves at constant stress was developed. The nonlinear problem of parametric identification is solved by a two-step iterative method. The first stage uses characteristic data points of the diagrams and features of the behavior of the models under unbounded growth of time to determine an initial approximation of the parameters. At the second stage, these parameters are refined by coordinate descent (the Hooke–Jeeves method), minimizing the standard deviation between calculated and experimental values. The identification method is realized for all the considered models on the basis of known experimental data on uniaxial viscoelastic deformation of Polyvinylchloride Elastron at a temperature of 20 °C and five tensile stress levels. Parameter values for all models are tabulated. An error analysis of the constructed phenomenological models against the experimental data is made over the entire ensemble of viscoelastic deformation curves. It was found that the approximation error for the Scott Blair fractional model is 14.17%, for the Voigt fractional model 11.13%, for the Maxwell fractional model 13.02%, for the Kelvin fractional model 10.56%, and for the Zener fractional model 11.06%. The graphs of the calculated and experimental dependences of viscoelastic deformation of Polyvinylchloride
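The two-step identification scheme described above can be sketched for the simplest case, the fractional Scott Blair element, whose creep law under constant stress σ is ε(t) = (σ/E)·t^α/Γ(1+α). The sketch below is illustrative only, not the authors' code: the synthetic data, parameter values, and the use of Nelder–Mead as a stand-in for the Hooke–Jeeves pattern search are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gamma

def creep(t, E, alpha, sigma=1.0):
    """Scott Blair fractional element: strain under constant stress sigma."""
    return (sigma / E) * t**alpha / gamma(1.0 + alpha)

# synthetic "experimental" creep curve with 1% multiplicative noise
rng = np.random.default_rng(0)
t = np.linspace(0.1, 10.0, 50)
eps_exp = creep(t, E=2.0, alpha=0.4) * (1.0 + 0.01 * rng.standard_normal(t.size))

# stage 1: initial guesses from two characteristic points of the diagram
alpha0 = np.log(eps_exp[-1] / eps_exp[0]) / np.log(t[-1] / t[0])
E0 = t[0]**alpha0 / (eps_exp[0] * gamma(1.0 + alpha0))

# stage 2: derivative-free refinement minimizing the RMS deviation
def rms(p):
    return np.sqrt(np.mean((creep(t, p[0], p[1]) - eps_exp) ** 2))

res = minimize(rms, x0=[E0, alpha0], method="Nelder-Mead")
E_fit, alpha_fit = res.x
```

Because ε is a pure power law in t, stage 1 already lands very close to the true (E, α), which is what makes the local pattern-search refinement in stage 2 reliable.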
Evaluating prediction uncertainty
McKay, M.D.
1995-03-01
The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented
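McKay's variance-ratio indicators can be illustrated with replicated Latin hypercube sampling: each input keeps one fixed value per stratum, replicates differ only in the random pairing of strata across inputs, and Var(E[Y|X_i])/Var(Y) is estimated from stratum means. This is a minimal sketch under assumed names and an illustrative test model, not the paper's code.

```python
import numpy as np

def replicated_lhs_importance(model, d, n=50, r=100, seed=1):
    """Estimate first-order variance ratios Var(E[Y|X_i]) / Var(Y).

    McKay-style replicated Latin hypercube sampling: each input keeps one
    fixed value per stratum; replicates differ only in the random pairing
    (permutation) of strata across inputs.
    """
    rng = np.random.default_rng(seed)
    # one fixed value per stratum for each of the d inputs
    vals = (np.arange(n)[:, None] + rng.uniform(size=(n, d))) / n
    ysum = np.zeros((d, n))              # running sums of y per input stratum
    all_y = []
    for _ in range(r):
        perms = np.array([rng.permutation(n) for _ in range(d)])  # (d, n)
        X = vals[perms, np.arange(d)[:, None]].T                  # (n, d)
        y = model(X)
        all_y.append(y)
        for i in range(d):
            ysum[i, perms[i]] += y       # permutations are collision-free
    var_y = np.concatenate(all_y).var()
    cond_means = ysum / r                # estimates E[Y | X_i in stratum s]
    return cond_means.var(axis=1) / var_y

# illustrative model: X1 should dominate the output variance
ratios = replicated_lhs_importance(lambda X: 5.0 * X[:, 0] + X[:, 1], d=2)
```

Note that, as in the paper's framing, no linearity of the model in its inputs is assumed; the variance ratio only compares conditional prediction distributions to the unconditional one.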
Lindley, Dennis V
2013-01-01
Praise for the First Edition: "...a reference for everyone who is interested in knowing and handling uncertainty." - Journal of Applied Statistics. The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made.
Meaney, Christopher; Moineddin, Rahim
2014-01-24
In biomedical research, response variables are often encountered which have bounded support on the open unit interval (0,1). Traditionally, researchers have attempted to estimate covariate effects on these types of response data using linear regression. Alternative modelling strategies may include: beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model to those of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models. In the Monte Carlo experiment we assume a simple two-sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided. If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25) linear regression has superior type-1 error rates compared to the other models. Small sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar results were observed if the
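The type-1 error part of such a simulation design can be sketched in a few lines. The sketch below (illustrative, not the authors' code; distribution parameters and sample sizes are assumptions) draws beta-distributed responses for two groups under the null and checks that the two-sample t-test, which is equivalent to OLS regression of the response on a group dummy, keeps approximately its nominal 5% rejection rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sim, n = 2000, 25            # simulation runs and per-group sample size
a, b = 2.0, 5.0                # same beta distribution in both groups (null true)

rejections = 0
for _ in range(n_sim):
    y0 = rng.beta(a, b, size=n)
    y1 = rng.beta(a, b, size=n)
    # the two-sample t-test is equivalent to OLS of y on a group dummy
    _, p = stats.ttest_ind(y0, y1)
    rejections += p < 0.05

type1 = rejections / n_sim      # should sit near the nominal 0.05
```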
Eck, van H.J.N.; Hansen, T.A.R.; Kleyn, A.W.; Meiden, van der H.J.; Schram, D.C.; Zeijlmans van Emmichoven, P.A.
2011-01-01
Magnum-PSI is a linear plasma generator designed to reach the plasma–surface interaction (PSI) regime of ITER and nuclear fusion reactors beyond ITER. To reach this regime, the influx of cold neutrals from the source must be significantly lower than the plasma flux reaching the target. This is
Koch, Michael
Measurement uncertainty is one of the key issues in quality assurance. It became increasingly important for analytical chemistry laboratories with accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for deciding whether a measurement result is fit for purpose. It also helps in deciding whether a specification limit is exceeded or not. Estimating measurement uncertainty is often not trivial. Several strategies have been developed for this purpose and are briefly described in this chapter. In addition, the different possibilities for taking uncertainty into account in compliance assessment are explained.
Limperopoulos, G.J.
1995-01-01
This report presents an oil project valuation under uncertainty by means of two well-known financial techniques: the Capital Asset Pricing Model (CAPM) and the Black-Scholes option pricing formula. CAPM gives a linear positive relationship between expected rate of return and risk, but does not take into consideration the aspect of flexibility, which is crucial for an irreversible investment such as an oil project. Introducing investment decision flexibility by using real options can increase the oil project value substantially. Some simple tests for the importance of stock market uncertainty for oil investments are performed. Uncertainty in stock returns is correlated with aggregate product market uncertainty according to Pindyck (1991). The results of the tests are not satisfactory due to the short data series, but introducing two other explanatory variables, the interest rate and Gross Domestic Product, improves the situation. 36 refs., 18 figs., 6 tabs
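The two valuation techniques named above have compact closed forms. A minimal sketch (standard textbook formulas, not the report's code; the numeric inputs are illustrative assumptions):

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes value of a European call option (no dividends)."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def capm_expected_return(rf, beta, rm):
    """CAPM: expected return is linear in systematic risk (beta)."""
    return rf + beta * (rm - rf)

# a 1-year at-the-money call on a 100-unit asset, 5% rate, 20% volatility
price = bs_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2)
```

In a real-options reading, the option value captures exactly the flexibility premium that the linear CAPM relationship leaves out.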
Additivity of entropic uncertainty relations
René Schwonnek
2018-03-01
We consider the uncertainty between two pairs of local projective measurements performed on a multipartite system. We show that the optimal bound in any linear uncertainty relation, formulated in terms of the Shannon entropy, is additive. This directly implies, against naive intuition, that the minimal entropic uncertainty can always be realized by fully separable states. Hence, in contradiction to proposals by other authors, no entanglement witness can be constructed solely by comparing the attainable uncertainties of entangled and separable states. However, our result gives rise to a huge simplification for computing global uncertainty bounds as they now can be deduced from local ones. Furthermore, we provide the natural generalization of the Maassen and Uffink inequality for linear uncertainty relations with arbitrary positive coefficients.
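The Maassen–Uffink inequality mentioned above can be checked numerically for the simplest case, a single qubit measured in two mutually unbiased bases. This is only an illustrative sketch of the bound itself, not of the paper's multipartite additivity result.

```python
import numpy as np

rng = np.random.default_rng(0)

def shannon(p):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))

# two qubit measurements: computational (Z) basis and Hadamard (X) basis
A = np.eye(2, dtype=complex)                                # columns = basis states
B = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Maassen-Uffink: H(A) + H(B) >= -2 log2 c, with c = max_ij |<a_i|b_j>|
c = np.abs(A.conj().T @ B).max()
bound = -2.0 * np.log2(c)            # = 1 bit for mutually unbiased qubit bases

sums = []
for _ in range(200):                 # random pure states
    psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    psi /= np.linalg.norm(psi)
    pA = np.abs(A.conj().T @ psi) ** 2
    pB = np.abs(B.conj().T @ psi) ** 2
    sums.append(shannon(pA) + shannon(pB))
min_sum = min(sums)                  # never dips below the entropic bound
```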
Liu, Baoding
2015-01-01
When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...
Kim, Y; Waldron, T; Pennington, E [University Of Iowa, College of Medicine, Iowa City, IA (United States)
2016-06-15
Purpose: To test the radiobiological impact of hypofractionated choroidal melanoma brachytherapy, we calculated single fraction equivalent doses (SFED) of the tumor that are equivalent to 85 Gy of I125-BT for 20 patients. Corresponding organs-at-risk (OAR) doses were estimated. Methods: Twenty patients treated with I125-BT were retrospectively examined. The tumor SFED values were calculated from tumor BED using a conventional linear-quadratic (L-Q) model and a universal survival curve (USC). The opposite retina (α/β = 2.58), macula (2.58), optic disc (1.75), and lens (1.2) were examined. The % doses of OARs over tumor doses were assumed to be the same as for a single fraction delivery. The OAR SFED values were converted into BED and equivalent dose in 2 Gy fractions (EQD2) by using both L-Q and USC models, then compared to I125-BT. Results: The USC-based BED and EQD2 doses of the macula, optic disc, and the lens were on average 118 ± 46% (p < 0.0527), 126 ± 43% (p < 0.0354), and 112 ± 32% (p < 0.0265) higher than those of I125-BT, respectively. The BED and EQD2 doses of the opposite retina were 52 ± 9% lower than I125-BT. The tumor SFED values were 25.2 ± 3.3 Gy and 29.1 ± 2.5 Gy when using the USC and L-Q models, respectively, and could be delivered within 1 hour. All BED and EQD2 values using the L-Q model were significantly larger when compared to the USC model (p < 0.0274) due to its large single fraction size (> 14 Gy). Conclusion: The estimated single fraction doses could feasibly be delivered within 1 hour using a high dose rate source such as electronic brachytherapy (eBT). However, the estimated OAR doses using eBT were 112 ∼ 118% higher than when using the I125-BT technique. Continued exploration of alternative dose rate or fractionation schedules should follow.
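The BED and EQD2 conversions used in studies like this follow the standard L-Q formulas BED = nd(1 + d/(α/β)) and EQD2 = BED/(1 + 2/(α/β)). A minimal sketch (textbook formulas, not the authors' software; the example α/β and dose values are illustrative):

```python
def bed(n, d, ab):
    """Biologically effective dose for n fractions of d Gy (L-Q model)."""
    return n * d * (1.0 + d / ab)

def eqd2(n, d, ab):
    """Equivalent total dose delivered in conventional 2 Gy fractions."""
    return bed(n, d, ab) / (1.0 + 2.0 / ab)

# a single 25 Gy fraction to tissue with alpha/beta = 10 Gy (illustrative)
single = eqd2(n=1, d=25.0, ab=10.0)
```

A useful sanity check is that any schedule already delivered in 2 Gy fractions maps to its own total dose, e.g. eqd2(35, 2, ab) == 70 for any α/β.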
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.
Duerdoth, Ian
2009-01-01
The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…
Heydorn, Kaj; Anglov, Thomas
2002-01-01
Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...
On matrix fractional differential equations
Adem Kılıçman
2017-01-01
The aim of this article is to study matrix fractional differential equations and to find the exact solution for a system of matrix fractional differential equations in terms of the Riemann–Liouville derivative, using the Laplace transform method and the convolution product for the Riemann–Liouville fractional derivative of matrices. We also state a theorem on the non-homogeneous matrix fractional partial differential equation, with some illustrative examples to demonstrate the effectiveness of the new methodology. The main objective is to discuss the Laplace transform method based on operational matrices of fractional derivatives for solving several kinds of linear fractional differential equations. Moreover, we present the operational matrices of fractional derivatives with the Laplace transform in applications to various engineering systems, such as control systems. We present an analytical technique for solving multi-term fractional-order differential equations; in other words, we propose an efficient algorithm for solving fractional matrix equations.
Nguyen, Daniel Xuyen
This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models. This retooling addresses several shortcomings. First, the imperfect correlation of demands reconciles the sales variation observed in and across destinations. Second, since demands for the firm's output are correlated across destinations, a firm can use previously realized demands to forecast unknown demands in untested destinations. The option to forecast demands causes firms to delay exporting in order to gather more information about foreign demand. Third, since uncertainty is resolved after entry, many firms enter a destination and then exit after learning that they cannot profit. This prediction reconciles…
Maria Klimikova
2010-01-01
Understanding the reasons for the present financial problems lies in understanding the substance of fractional reserve banking. The substance of fractional banking is lending more money than the bankers have. Banking of partial reserves is an alternative form which links deposit banking and credit banking. Fractional banking causes many unfavorable economic impacts in the worldwide system, specifically inflation.
Fractional finite Fourier transform.
Khare, Kedar; George, Nicholas
2004-07-01
We show that a fractional version of the finite Fourier transform may be defined by using prolate spheroidal wave functions of order zero. The transform is linear and additive in its index and asymptotically goes over to Namias's definition of the fractional Fourier transform. As a special case of this definition, it is shown that the finite Fourier transform may be inverted by using information over a finite range of frequencies in Fourier space, the inversion being sensitive to noise. Numerical illustrations for both forward (fractional) and inverse finite transforms are provided.
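The additivity-in-the-index property of a fractional Fourier transform can be demonstrated numerically. The sketch below does not use the prolate spheroidal construction of the paper; it is a simpler illustrative stand-in built from a fractional matrix power of the unitary DFT matrix, using the fact that a normal matrix's complex Schur form is diagonal.

```python
import numpy as np
from scipy.linalg import schur

N = 8
k = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)  # unitary DFT matrix

# F is normal, so its complex Schur form is (numerically) diagonal: F = Z T Z*
T, Z = schur(F, output='complex')
lam = np.diag(T)                    # eigenvalues, all on the unit circle

def frft(a):
    """a-th fractional power of F via the principal log of its eigenvalues."""
    return (Z * np.exp(a * np.log(lam))) @ Z.conj().T

lhs = frft(0.3) @ frft(0.45)        # additivity in the transform index
rhs = frft(0.75)
```

Additivity holds by construction here because all fractional powers share one eigenbasis and one branch of the logarithm, which mirrors the index-additivity the paper establishes for its finite fractional transform.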
Povstenko, Yuriy
2015-01-01
This book is devoted to fractional thermoelasticity, i.e. thermoelasticity based on the heat conduction equation with differential operators of fractional order. Readers will discover how time-fractional differential operators describe memory effects and space-fractional differential operators deal with the long-range interaction. Fractional calculus, generalized Fourier law, axisymmetric and central symmetric problems and many relevant equations are featured in the book. The latest developments in the field are included and the reader is brought up to date with current research. The book contains a large number of figures, to show the characteristic features of temperature and stress distributions and to represent the whole spectrum of order of fractional operators. This work presents a picture of the state-of-the-art of fractional thermoelasticity and is suitable for specialists in applied mathematics, physics, geophysics, elasticity, thermoelasticity and engineering sciences. Corresponding sections of ...
Uncertainty, joint uncertainty, and the quantum uncertainty principle
Narasimhachar, Varun; Poostindouz, Alireza; Gour, Gilad
2016-01-01
Historically, the element of uncertainty in quantum mechanics has been expressed through mathematical identities called uncertainty relations, a great many of which continue to be discovered. These relations use diverse measures to quantify uncertainty (and joint uncertainty). In this paper we use operational information-theoretic principles to identify the common essence of all such measures, thereby defining measure-independent notions of uncertainty and joint uncertainty. We find that most existing entropic uncertainty relations use measures of joint uncertainty that yield themselves to a small class of operational interpretations. Our notion relaxes this restriction, revealing previously unexplored joint uncertainty measures. To illustrate the utility of our formalism, we derive an uncertainty relation based on one such new measure. We also use our formalism to gain insight into the conditions under which measure-independent uncertainty relations can be found. (paper)
Zou, Xiao-Duan; Li, Jian-Yang; Clark, Beth Ellen; Golish, Dathon
2018-01-01
The OSIRIS-REx spacecraft, launched in September 2016, will study the asteroid Bennu and return a sample from its surface to Earth in 2023. Bennu is a near-Earth carbonaceous asteroid which will provide insight into the formation and evolution of the solar system. OSIRIS-REx will first approach Bennu in August 2018 and will study the asteroid for approximately two years before sampling. OSIRIS-REx will develop its photometric model (including Lommel-Seeliger, ROLO, McEwen, Minnaert and Akimov) of Bennu with OCAM and OVIRS during the Detailed Survey mission phase. The model developed during this phase will be used to photometrically correct the OCAM and OVIRS data. Here we present the analysis of the error for the photometric corrections. Based on our testing data sets, we find: (1) the model uncertainties are correct only when calculated with the covariance matrix, because the parameters are highly correlated; (2) no parameter dominates in any model; (3) the model error and the data error contribute comparably to the final correction error; (4) we tested the uncertainty module on synthetic and real data sets and find that model performance depends on the data coverage and data quality, and these tests gave us a better understanding of how the different models behave in different cases; (5) the L-S model is more reliable than the others, perhaps because the simulated data are based on the L-S model, although the test on real data (SPDIF) also shows a slight advantage for L-S; ROLO is not reliable for calculating Bond albedo; the uncertainty of the McEwen model is large in most cases; and Akimov performs unphysically on the SOPIE 1 data; (6) L-S is the better default choice, a conclusion based mainly on our tests on SOPIE data and IPDIF.
Reply to "Comment on 'Fractional quantum mechanics' and 'Fractional Schrödinger equation' ".
Laskin, Nick
2016-06-01
The fractional uncertainty relation is a mathematical formulation of Heisenberg's uncertainty principle in the framework of fractional quantum mechanics. Two mistaken statements presented in the Comment have been revealed. The origin of each mistaken statement has been clarified and corrected statements have been made. A map between standard quantum mechanics and fractional quantum mechanics has been presented to emphasize the features of fractional quantum mechanics and to avoid misinterpretations of the fractional uncertainty relation. It has been shown that the fractional probability current equation is correct in the area of its applicability. Further studies have to be done to find meaningful quantum physics problems with involvement of the fractional probability current density vector and the extra term emerging in the framework of fractional quantum mechanics.
A fuzzy-stochastic power system planning model: Reflection of dual objectives and dual uncertainties
Zhang, X.Y.; Huang, G.H.; Zhu, H.; Li, Y.P.
2017-01-01
In this study, a fuzzy stochastic dynamic fractional programming (FSDFP) method is proposed for supporting sustainable management of electric power system (EPS) under dual uncertainties. As an improvement upon the mixed-integer linear fractional programming, FSDFP can not only tackle multi-objective issues effectively without setting weights, but also can deal with uncertain parameters which have both stochastic and fuzzy characteristics. Thus, the developed method can help provide valuable information for supporting capacity-expansion planning and in-depth policy analysis of EPS management problems. For demonstrating these advantages, FSDFP has been applied to a case study of a typical regional EPS planning, where the decision makers have to deal with conflicts between economic development that maximizes the system profit and environmental protection that minimizes the carbon dioxide emissions. The obtained results can be analyzed to generate several decision alternatives, and can then help decision makers make suitable decisions under different input scenarios. Furthermore, comparisons of the solution from FSDFP method with that from fuzzy stochastic dynamic linear programming, linear fractional programming and dynamic stochastic fractional programming methods are undertaken. The contrastive analysis reveals that FSDFP is a more effective approach that can better characterize the complexities and uncertainties of real EPS management problems. - Highlights: • A fuzzy stochastic dynamic fractional programming (FSDFP) method is proposed. • FSDFP can address multiple conflicting objectives without setting weights. • FSDFP can reflect dual uncertainties with both stochastic and fuzzy characteristics. • Some reasonable solutions for a case of power system sustainable planning are generated. • Comparisons of the solutions from FSDFP with other optimization methods are undertaken.
On the fractional calculus of Besicovitch function
Liang Yongshun
2009-01-01
The relationship between fractional calculus and fractal functions is explored. Based on prior investigations dealing with certain fractal functions, fractal dimensions, including the Hausdorff dimension, Box dimension, K-dimension and Packing dimension, are shown to be linear functions of the order of fractional calculus. Both the Riemann-Liouville fractional calculus and the Weyl-Marchaud fractional derivative of the Besicovitch function are discussed.
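Fractional derivatives of the kind discussed above can be approximated numerically with the Grünwald–Letnikov scheme, which converges to the Riemann–Liouville derivative for well-behaved functions vanishing at the origin. A minimal sketch (illustrative, not from the paper; step size and test function are assumptions):

```python
import numpy as np

def gl_derivative(f, t, alpha, h=1e-3):
    """Grunwald-Letnikov approximation of the order-alpha derivative at t,
    for a function with f(0) = 0 (illustrative sketch)."""
    k = np.arange(int(t / h) + 1)
    # weights w_k = (-1)^k * binom(alpha, k) via a stable recurrence
    w = np.cumprod(np.concatenate(([1.0], 1.0 - (alpha + 1.0) / k[1:])))
    return np.sum(w * f(t - k * h)) / h**alpha

# half-derivative of f(t) = t is 2*sqrt(t/pi) (Riemann-Liouville result)
approx = gl_derivative(lambda x: x, t=1.0, alpha=0.5)
exact = 2.0 / np.sqrt(np.pi)
```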
The Active Fractional Order Control for Maglev Suspension System
Peichang Yu
2015-01-01
Maglev suspension systems are the core part of maglev trains. In practical applications, load uncertainties, inherent nonlinearity, and misalignment between sensors and actuators are the main issues that should be handled carefully. In order to design a suitable controller, attention is paid to fractional order control. Firstly, the mathematical model of a single electromagnetic suspension unit is derived. Then, considering the limited adaptability of the traditional PD controller, a fractional order controller is developed to obtain better suspension specifications and robust performance. In reality, the nonlinearity affects the structure and precision of the model after linearization, which degrades the dynamic performance; the fractional order controller can eliminate this disturbance by adjusting the additional parameters it introduces. Furthermore, a controller based on LQR is employed for comparison with the fractional order controller. Finally, their performance is compared by simulation. The results illustrate the validity of the fractional order controller.
Financial Planning with Fractional Goals
Goedhart, Marc; Spronk, Jaap
1995-01-01
When solving financial planning problems with multiple goals by means of multiple objective programming, the presence of fractional goals leads to technical difficulties. In this paper we present a straightforward interactive approach for solving such linear fractional programs with multiple goal variables. The approach is illustrated by means of an example in financial planning.
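A single linear fractional goal can be handled with the classical Charnes–Cooper substitution, which turns the ratio objective into an ordinary LP. The sketch below is an illustrative toy problem, not the paper's interactive multi-goal procedure; all numeric data are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# maximize (3*x1 + x2) / (x1 + x2 + 1)  s.t.  x1 <= 2, x2 <= 3, x >= 0
c_num, alpha = np.array([3.0, 1.0]), 0.0    # numerator   c'x + alpha
d_den, beta = np.array([1.0, 1.0]), 1.0     # denominator d'x + beta (> 0 here)
A, b = np.eye(2), np.array([2.0, 3.0])

# Charnes-Cooper substitution y = t*x, t > 0 turns the ratio into an LP:
#   max c'y + alpha*t   s.t.   A y - b t <= 0,   d'y + beta*t = 1,   y, t >= 0
res = linprog(
    c=-np.append(c_num, alpha),             # linprog minimizes, so negate
    A_ub=np.hstack([A, -b[:, None]]), b_ub=np.zeros(2),
    A_eq=[np.append(d_den, beta)], b_eq=[1.0],
    bounds=[(0, None)] * 3,
)
y, t = res.x[:2], res.x[2]
x_opt = y / t                               # optimum of the fractional program
value = (c_num @ x_opt + alpha) / (d_den @ x_opt + beta)
```

For this toy instance the optimum is x = (2, 0) with ratio value 2; interactive multi-goal methods like the paper's would iterate over several such fractional goals with decision-maker feedback.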
Jackiw, R.; Massachusetts Inst. of Tech., Cambridge; Massachusetts Inst. of Tech., Cambridge
1984-01-01
The theory of fermion fractionization due to topologically generated fermion ground states is presented. Applications to one-dimensional conductors, to the MIT bag, and to the Hall effect are reviewed. (author)
Tanwiwat Jaikuna
2017-02-01
Purpose: To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in treatment planning was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the differentiation between the dose volume histogram from CERR and the treatment planning system. An equivalent dose in 2 Gy fractions (EQD2) was calculated using the biological effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Different physical doses were found between CERR and the treatment planning system (TPS) in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum when determined by D2cc, and less than 1% in Pinnacle. The difference in the EQD2 between the software calculation and the manual calculation was not significantly different (0.00%), with p-values 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in HR-CTV, bladder, and rectum, respectively. Conclusions: The Isobio software is a feasible tool to generate the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
Shilov, Georgi E
1977-01-01
Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.
Landsberg, P.T.
1990-01-01
This paper explores how the quantum mechanics uncertainty relation can be considered to result from measurements. A distinction is drawn between the uncertainties obtained by scrutinising experiments and the standard deviation type of uncertainty definition used in quantum formalism. (UK)
Bhattacharyya, Sonalee; Namakshi, Nama; Zunker, Christina; Warshauer, Hiroko K.; Warshauer, Max
2016-01-01
Making math more engaging for students is a challenge that every teacher faces on a daily basis. These authors write that they are constantly searching for rich problem-solving tasks that cover the necessary content, develop critical-thinking skills, and engage student interest. The Mystery Fraction activity provided here focuses on a key number…
Fraction Reduction through Continued Fractions
Carley, Holly
2011-01-01
This article presents a method of reducing fractions without factoring. The ideas presented may be useful as a project for motivated students in an undergraduate number theory course. The discussion is related to the Euclidean Algorithm and its variations may lead to projects or early examples involving efficiency of an algorithm.
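The idea sketched in the abstract can be made concrete: expanding a/b as a continued fraction is exactly Euclid's algorithm, and rebuilding the final convergent yields the fraction in lowest terms with no factoring. A minimal sketch of that method (illustrative; not the article's own presentation):

```python
def reduce_fraction(a, b):
    """Reduce a/b without factoring either number: expand a/b as a
    continued fraction (Euclid's algorithm), then rebuild the last
    convergent, which is automatically in lowest terms."""
    cf = []
    x, y = a, b
    while y:
        cf.append(x // y)
        x, y = y, x % y
    p, q = cf[-1], 1                 # rebuild p/q back to front
    for term in reversed(cf[:-1]):
        p, q = term * p + q, p
    return p, q
```

For example, 84/36 expands to [2; 3], and rebuilding 2 + 1/3 gives 7/3, the reduced form, without ever factoring 84 or 36.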
The uncertainties in estimating measurement uncertainties
Clark, J.P.; Shull, A.H.
1994-01-01
All measurements include some error. Whether measurements are used for accountability, environmental programs or process support, they are of little value unless accompanied by an estimate of the measurement's uncertainty. This fact is often overlooked by the individuals who need measurements to make decisions. This paper discusses the concepts of measurement, measurement errors (accuracy or bias, and precision or random error), physical and error models, measurement control programs, examples of measurement uncertainty, and uncertainty as related to measurement quality. Measurements are comparisons of unknowns to knowns, estimates of some true value plus uncertainty, and are no better than the standards to which they are compared. Direct comparisons of unknowns that match the composition of known standards will normally have small uncertainties. In the real world, measurements usually involve indirect comparisons of significantly different materials (e.g., measuring a physical property of a chemical element in a sample having a matrix that is significantly different from that of the calibration standards). Consequently, there are many sources of error involved in measurement processes that can affect the quality of a measurement and its associated uncertainty. How the uncertainty estimates are determined and what they mean is as important as the measurement. The process of calculating the uncertainty of a measurement itself has uncertainties that must be handled correctly. Examples of chemistry laboratory measurements are reviewed in this report and recommendations made for improving measurement uncertainties
Uncertainty in social dilemmas
Kwaadsteniet, Erik Willem de
2007-01-01
This dissertation focuses on social dilemmas, and more specifically, on environmental uncertainty in these dilemmas. Real-life social dilemma situations are often characterized by uncertainty. For example, fishermen mostly do not know the exact size of the fish population (i.e., resource size uncertainty). Several researchers have therefore asked how such uncertainty influences people's choice behavior. These researchers have repeatedly concluded that uncertainty...
Booth, J.T.; Zavgorodni, S.F.; Royal Adelaide Hospital, SA
2001-01-01
Uncertainty in the precise quantity of radiation dose delivered to tumours in external beam radiotherapy is present due to many factors, and can result in either spatially uniform (Gaussian) or spatially non-uniform dose errors. These dose errors are incorporated into the calculation of tumour control probability (TCP) and produce a distribution of possible TCP values over a population. We also study the effect of inter-patient cell sensitivity heterogeneity on the population distribution of patient TCPs. This study aims to investigate the relative importance of these three uncertainties (spatially uniform dose uncertainty, spatially non-uniform dose uncertainty, and inter-patient cell sensitivity heterogeneity) on the delivered dose and TCP distribution following a typical course of fractionated external beam radiotherapy. The dose distributions used for patient treatments are modelled in one dimension. Geometric positioning uncertainties during and before treatment are considered as shifts of a pre-calculated dose distribution. Following the simulation of a population of patients, distributions of dose across the patient population are used to calculate mean treatment dose, standard deviation in mean treatment dose, mean TCP, standard deviation in TCP, and TCP mode. These parameters are calculated with each of the three uncertainties included separately. The calculations show that the dose errors in the tumour volume are dominated by the spatially uniform component of dose uncertainty. This could be related to machine specific parameters, such as linear accelerator calibration. TCP calculation is affected dramatically by inter-patient variation in the cell sensitivity and to a lesser extent by the spatially uniform dose errors. The positioning errors with the 1.5 cm margins used cause dose uncertainty outside the tumour volume and have a small effect on mean treatment dose (in the tumour volume) and tumour control. Copyright (2001) Australasian College of
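As a point of reference for the TCP calculations described above, a common Poisson-based formulation with linear-quadratic cell kill is the following (a standard textbook form; the study's exact model may differ):

```latex
% Surviving fraction after n fractions of dose d per fraction
% (linear-quadratic model with radiosensitivity parameters alpha, beta)
SF = \exp\!\left[-n\left(\alpha d + \beta d^{2}\right)\right]

% Poisson tumour control probability for N_0 initial clonogenic cells
TCP = \exp\!\left(-N_0 \, SF\right)
```

Inter-patient heterogeneity in cell sensitivity enters by averaging this TCP over a distribution of α, which is what broadens the population TCP distribution discussed in the abstract.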
The synchronization of three fractional differential systems
Li Changpin; Yan Jianping
2007-01-01
In this paper, a new method is proposed and applied to the synchronization of fractional differential systems (or 'differential systems with fractional orders'), where both drive and response systems have the same dimensionality and are coupled by the driving signal. The present technique is based on the stability criterion of linear fractional systems. This method is implemented in (chaos) synchronization of the fractional Lorenz system, the Chen system and Chua's circuit. Numerical simulations show that the present synchronization method works well.
LINTAB, Linear Interpolable Tables from any Continuous Variable Function
1988-01-01
1 - Description of program or function: LINTAB is designed to construct linearly interpolable tables from any function. The program will start from any function of a single continuous variable... FUNKY(X). By user input the function can be defined, (1) Over 1 to 100 X ranges. (2) Within each X range the function is defined by 0 to 50 constants. (3) At boundaries between X ranges the function may be continuous or discontinuous (depending on the constants used to define the function within each X range). 2 - Method of solution: LINTAB will construct a table of X and Y values where the tabulated (X,Y) pairs will be exactly equal to the function (Y=FUNKY(X)) and linear interpolation between the tabulated pairs will be within any user specified fractional uncertainty of the function for all values of X within the requested X range
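The LINTAB idea — tabulate until linear interpolation stays within a fractional tolerance — can be sketched with a simple recursive bisection (a minimal illustration of the principle, not the LINTAB code itself):

```python
import math

def build_table(f, x0, x1, tol):
    """Return x-values such that linear interpolation of f between
    consecutive entries agrees with f at each interval midpoint to
    within fractional tolerance `tol`."""
    xs = [x0]

    def refine(a, b):
        m = 0.5 * (a + b)
        exact = f(m)
        interp = 0.5 * (f(a) + f(b))  # linear interpolation at midpoint
        if exact != 0 and abs(interp - exact) / abs(exact) > tol:
            refine(a, m)   # interval too coarse: split and recurse
            refine(m, b)
        else:
            xs.append(b)   # interval acceptable: record its right edge

    refine(x0, x1)
    return xs

table = build_table(math.exp, 0.0, 2.0, 1e-3)
```

LINTAB itself works on piecewise-defined functions over up to 100 X ranges; the sketch shows only the core tolerance-driven subdivision.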
Suwono.
1978-01-01
A linear gate providing a variable gate duration from 0.40 μs to 4 μs was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar. If the input signal is bipolar, the negative portion will be filtered out. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)
Vretenar, M
2014-01-01
The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristics of linac beam dynamics.
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of a linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the generator's algorithm. When a PRNG has n stages (registers or internal states), the necessary computational cost is smaller than O(2^n). By contrast, the Berlekamp-Massey algorithm needs O(N^2), where N (≈ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resultant value of linear complexity; a linear complexity is therefore generally given as an estimated value. Because a linearization method calculates from the algorithm of the PRNG itself, it can determine the lower bound of linear complexity.
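The Berlekamp-Massey algorithm that serves as the comparison baseline computes the linear complexity of a binary sequence directly from its output bits; a compact GF(2) version (our own sketch, not from the paper):

```python
def linear_complexity(bits):
    """Berlekamp-Massey over GF(2): length of the shortest LFSR
    that generates the 0/1 sequence `bits`."""
    n = len(bits)
    c = [0] * n          # current connection polynomial
    b = [0] * n          # previous connection polynomial
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between the LFSR prediction and the next bit
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m):
                c[j + i - m] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

print(linear_complexity([1, 0, 1, 0, 1]))  # -> 2
```

This runs in O(N^2) bit operations for a sequence of length N, which is the cost figure quoted in the abstract.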
Said-Houari, Belkacem
2017-01-01
This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...
Large-uncertainty intelligent states for angular momentum and angle
Goette, Joerg B; Zambrini, Roberta; Franke-Arnold, Sonja; Barnett, Stephen M
2005-01-01
The equality in the uncertainty principle for linear momentum and position is obtained for states which also minimize the uncertainty product. However, in the uncertainty relation for angular momentum and angular position both sides of the inequality are state dependent and therefore the intelligent states, which satisfy the equality, do not necessarily give a minimum for the uncertainty product. In this paper, we highlight the difference between intelligent states and minimum uncertainty states by investigating a class of intelligent states which obey the equality in the angular uncertainty relation while having an arbitrarily large uncertainty product. To develop an understanding for the uncertainties of angle and angular momentum for the large-uncertainty intelligent states we compare exact solutions with analytical approximations in two limiting cases
Hossein Jafari
2016-04-01
In this paper, we consider the local fractional decomposition method, variational iteration method, and differential transform method for analytic treatment of linear and nonlinear local fractional differential equations, homogeneous or nonhomogeneous. The operators are taken in the local fractional sense. Some examples are given to demonstrate the simplicity and the efficiency of the presented methods.
A sliding mode observer for hemodynamic characterization under modeling uncertainties
Zayane, Chadia; Laleg-Kirati, Taous-Meriem
2014-01-01
This paper addresses the case of physiological states reconstruction in a small region of the brain under modeling uncertainties. The misunderstood coupling between the cerebral blood volume and the oxygen extraction fraction has led to a partial
Instrument uncertainty predictions
Coutts, D.A.
1991-07-01
The accuracy of measurements and correlations should normally be provided for most experimental activities. The uncertainty is a measure of the accuracy of a stated value or equation. The uncertainty term reflects a combination of instrument errors, modeling limitations, and phenomena understanding deficiencies. This report provides several methodologies to estimate an instrument's uncertainty when used in experimental work. Methods are shown to predict both the pretest and post-test uncertainty
Resolving uncertainty in chemical speciation determinations
Smith, D. Scott; Adams, Nicholas W. H.; Kramer, James R.
1999-10-01
Speciation determinations involve uncertainty in system definition and experimentation. Identification of appropriate metals and ligands from basic chemical principles, analytical window considerations, types of species and checking for consistency in equilibrium calculations are considered in system definition uncertainty. A systematic approach to system definition limits uncertainty in speciation investigations. Experimental uncertainty is discussed with an example of proton interactions with Suwannee River fulvic acid (SRFA). A Monte Carlo approach was used to estimate uncertainty in experimental data, resulting from the propagation of uncertainties in electrode calibration parameters and experimental data points. Monte Carlo simulations revealed large uncertainties present at high (>9-10) and low (monoprotic ligands. Least-squares fit the data with 21 sites, whereas linear programming fit the data equally well with 9 sites. Multiresponse fitting, involving simultaneous fluorescence and pH measurements, improved model discrimination. Deconvolution of the excitation versus emission fluorescence surface for SRFA establishes a minimum of five sites. Diprotic sites are also required for the five fluorescent sites, and one non-fluorescent monoprotic site was added to accommodate the pH data. Consistent with greater complexity, the multiresponse method had broader confidence limits than the uniresponse methods, but corresponded better with the accepted total carboxylic content for SRFA. Overall there was a 40% standard deviation in total carboxylic content for the multiresponse fitting, versus 10% and 1% for least-squares and linear programming, respectively.
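The Monte Carlo propagation of calibration-parameter uncertainties described above can be sketched generically; the electrode model and the numbers below are invented for illustration, not taken from the SRFA study:

```python
import random
import statistics

def measured_ph(E, E0, slope):
    # hypothetical electrode calibration: pH = (E - E0) / slope
    return (E - E0) / slope

random.seed(1)
samples = []
for _ in range(20000):
    # draw calibration parameters from assumed Gaussian uncertainties
    E0 = random.gauss(400.0, 2.0)      # intercept, mV (1-sigma = 2 mV)
    slope = random.gauss(-59.2, 0.5)   # mV per pH unit
    samples.append(measured_ph(150.0, E0, slope))

mean = statistics.mean(samples)
sd = statistics.stdev(samples)         # propagated pH uncertainty
```

Repeating such draws for every titration point, as the authors do, yields uncertainty bands on the whole speciation fit rather than on a single reading.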
Amin Asadi
2017-10-01
Purpose: To study the benefits of the Directional Bremsstrahlung Splitting (DBS) variance reduction technique in the BEAMnrc Monte Carlo (MC) code for an Oncor® linac at 6 MV and 18 MV energies. Materials and Method: An MC model of the Oncor® linac was built using the BEAMnrc MC code and verified against measured data for 6 MV and 18 MV energies at various field sizes. The Oncor® machine was then modeled with the DBS technique running, and the efficiency of total fluence and spatial fluence for electrons and photons, and the efficiency of dose variance reduction of the MC calculations for PDD on the central beam axis and for the lateral dose profile across the nominal field, were measured and compared. Result: With the DBS technique applied, the total fluence of electrons and photons increased by 626.8 and 983.4 (6 MV) and by 285.6 and 737.8 (18 MV), respectively; the spatial fluence of electrons and photons improved by 308.6±1.35% and 480.38±0.43% (6 MV), and by 153±0.9% and 462.6±0.27% (18 MV). Moreover, with DBS running, the efficiency of dose variance reduction for the PDD MC dose calculations before and after the dose-maximum point was enhanced by 187.8±0.68% and 184.6±0.65% (6 MV), and by 156±0.43% and 153±0.37% (18 MV), respectively, and the efficiency of the MC calculations for the lateral dose profile on the central beam axis and across the treatment field rose by 197±0.66% and 214.6±0.73% (6 MV), and by 175±0.36% and 181.4±0.45% (18 MV). Conclusion: Applying the DBS variance reduction technique when modeling the Oncor® linac with the BEAMnrc MC code markedly improved the electron and photon fluence and therefore enhanced the efficiency of dose variance reduction for the MC calculations. As a result, running DBS in other kinds of MC simulation codes may be beneficial in reducing the uncertainty of MC calculations.
Stoll, R R
1968-01-01
Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be both useful to students of mathematics and those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understanding.
Fractional-order sliding mode control for a class of uncertain nonlinear systems based on LQR
Dong Zhang
2017-03-01
This article presents a new fractional-order sliding mode control (FOSMC) strategy based on a linear-quadratic regulator (LQR) for a class of uncertain nonlinear systems. First, input/output feedback linearization is used to linearize the nonlinear system and decouple tracking error dynamics. Second, LQR is designed to ensure that the tracking error dynamics converges to the equilibrium point as soon as possible. Based on LQR, a novel fractional-order sliding surface is introduced. Subsequently, the FOSMC is designed to reject system uncertainties and reduce the magnitude of control chattering. Then, the global stability of the closed-loop control system is analytically proved using Lyapunov stability theory. Finally, a typical single-input single-output system and a typical multi-input multi-output system are simulated to illustrate the effectiveness and advantages of the proposed control strategy. The results of the simulation indicate that the proposed control strategy exhibits excellent performance and robustness with system uncertainties. Compared to conventional integer-order sliding mode control, the high-frequency chattering of the control input is drastically suppressed.
ℋ∞ Adaptive observer for nonlinear fractional-order systems
Ndoye, Ibrahima
2016-06-23
In this paper, an adaptive observer is proposed for the joint estimation of states and parameters of a fractional nonlinear system with external perturbations. The convergence of the proposed observer is derived in terms of linear matrix inequalities (LMIs) by using an indirect Lyapunov method. The proposed ℋ∞ adaptive observer is also robust against Lipschitz additive nonlinear uncertainty. The performance of the observer is illustrated through some examples including the chaotic Lorenz and Lü systems. © 2016 John Wiley & Sons, Ltd.
Solow, Daniel
2014-01-01
This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.
Liesen, Jörg
2015-01-01
This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc...
Berberian, Sterling K
2014-01-01
Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
Andres, T.H.
2002-05-01
This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide,for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
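The series-approximation route to propagating input uncertainties mentioned in the guide can be sketched generically; the model and numbers below are illustrative, not drawn from any AECL code:

```python
import math

def propagate(f, x, u):
    """First-order (series-approximation) uncertainty propagation:
    u(y)^2 = sum_i (df/dx_i)^2 u(x_i)^2, assuming independent inputs.
    Derivatives are estimated by central finite differences."""
    y = f(x)
    var = 0.0
    for i, (xi, ui) in enumerate(zip(x, u)):
        h = 1e-6 * (abs(xi) or 1.0)
        xp = list(x); xp[i] = xi + h
        xm = list(x); xm[i] = xi - h
        dfdx = (f(xp) - f(xm)) / (2 * h)
        var += (dfdx * ui) ** 2
    return y, math.sqrt(var)

# toy model: power P = V^2 / R with uncertain voltage and resistance
y, uy = propagate(lambda p: p[0] ** 2 / p[1], [10.0, 5.0], [0.1, 0.05])
```

Exact calculation and Monte Carlo sampling, the other two representations named in the guide, trade this formula's linearity assumption for more computation.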
Uncertainty and Cognitive Control
Faisal eMushtaq
2011-10-01
A growing trend of neuroimaging, behavioural and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we review evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) there is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) there is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) the perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the need for control; (4) potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.
Fractional vector calculus for fractional advection dispersion
Meerschaert, Mark M.; Mortensen, Jeff; Wheatcraft, Stephen W.
2006-07-01
We develop the basic tools of fractional vector calculus including a fractional derivative version of the gradient, divergence, and curl, and a fractional divergence theorem and Stokes theorem. These basic tools are then applied to provide a physical explanation for the fractional advection-dispersion equation for flow in heterogeneous porous media.
On generalized fractional vibration equation
Dai, Hongzhe; Zheng, Zhibao; Wang, Wei
2017-01-01
Highlights: • The paper presents a generalized fractional vibration equation for an arbitrary viscoelastically damped system. • Some classical vibration equations can be derived from the developed equation. • The analytic solution of the developed equation is derived for some special cases. • The generalized equation is particularly useful for developing new fractional equivalent linearization methods. Abstract: In this paper, a generalized fractional vibration equation with multiple terms of fractional dissipation is developed to describe the dynamical response of an arbitrary viscoelastically damped system. It is shown that many classical equations of motion, e.g., the Bagley–Torvik equation, can be derived from the developed equation. The Laplace transform is utilized to solve the generalized equation, and the analytic solution under some special cases is derived. An example demonstrates the generalized transfer function of an arbitrary viscoelastic system.
Fractional Schroedinger equation
Laskin, Nick
2002-01-01
Some properties of the fractional Schroedinger equation are studied. We prove the Hermiticity of the fractional Hamilton operator and establish the parity conservation law for fractional quantum mechanics. As physical applications of the fractional Schroedinger equation we find the energy spectra of a hydrogenlike atom (fractional 'Bohr atom') and of a fractional oscillator in the semiclassical approximation. An equation for the fractional probability current density is developed and discussed. We also discuss the relationships between the fractional and standard Schroedinger equations
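For orientation, the fractional Schroedinger equation studied here is usually quoted in the following form (Laskin's notation; constants as commonly stated):

```latex
i\hbar \,\frac{\partial \psi(\mathbf{r},t)}{\partial t}
  = D_\alpha \left(-\hbar^{2}\Delta\right)^{\alpha/2} \psi(\mathbf{r},t)
  + V(\mathbf{r})\,\psi(\mathbf{r},t),
\qquad 1 < \alpha \le 2 ,
```

where $(-\hbar^{2}\Delta)^{\alpha/2}$ is the quantum Riesz fractional derivative; the standard Schroedinger equation is recovered at $\alpha = 2$ with $D_2 = 1/2m$.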
Bergstra, Jan A.
2015-01-01
In the context of an involutive meadow a precise definition of fractions is formulated and on that basis formal definitions of various classes of fractions are given. The definitions follow the fractions as terms paradigm. That paradigm is compared with two competing paradigms for storytelling on fractions: fractions as values and fractions as pairs.
Probabilistic accounting of uncertainty in forecasts of species distributions under climate change
Seth J. Wenger; Nicholas A. Som; Daniel C. Dauwalter; Daniel J. Isaak; Helen M. Neville; Charles H. Luce; Jason B. Dunham; Michael K. Young; Kurt D. Fausch; Bruce E. Rieman
2013-01-01
Forecasts of species distributions under future climates are inherently uncertain, but there have been few attempts to describe this uncertainty comprehensively in a probabilistic manner. We developed a Monte Carlo approach that accounts for uncertainty within generalized linear regression models (parameter uncertainty and residual error), uncertainty among competing...
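The two uncertainty sources named above — parameter uncertainty and residual error — can be combined in a Monte Carlo forecast along these lines; the fitted coefficients, standard errors, and residual spread below are invented for illustration:

```python
import random
import statistics

random.seed(0)
# hypothetical fitted linear model y = b0 + b1*x: point estimates,
# coefficient standard errors, and residual standard deviation
b0, se0 = 1.5, 0.2
b1, se1 = -0.8, 0.1
resid_sd = 0.3

def forecast_draw(x):
    # one Monte Carlo draw: sample coefficients (parameter uncertainty),
    # then add residual error
    a = random.gauss(b0, se0)
    b = random.gauss(b1, se1)
    return a + b * x + random.gauss(0.0, resid_sd)

draws = sorted(forecast_draw(2.0) for _ in range(20000))
lo, hi = draws[500], draws[19500]   # approximate 95% forecast interval
```

A full treatment would draw coefficient vectors jointly from their covariance matrix; the independent draws here are the simplest version of the idea.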
Fractional-order adaptive fault estimation for a class of nonlinear fractional-order systems
N'Doye, Ibrahima; Laleg-Kirati, Taous-Meriem
2015-01-01
This paper studies the problem of fractional-order adaptive fault estimation for a class of fractional-order Lipschitz nonlinear systems using fractional-order adaptive fault observer. Sufficient conditions for the asymptotical convergence of the fractional-order state estimation error, the conventional integer-order and the fractional-order faults estimation error are derived in terms of linear matrix inequalities (LMIs) formulation by introducing a continuous frequency distributed equivalent model and using an indirect Lyapunov approach where the fractional-order α belongs to 0 < α < 1. A numerical example is given to demonstrate the validity of the proposed approach.
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...
Alcaraz, J.
2001-01-01
After several years of study, e+e− linear colliders in the TeV range have emerged as the major and optimal high-energy physics projects for the post-LHC era. These notes summarize the present status, from the main accelerator and detector features to their physics potential. The LHC is expected to provide first discoveries in the new energy domain, whereas an e+e− linear collider in the 500 GeV-1 TeV range will be able to complement it to an unprecedented level of precision in all key areas: Higgs, signals beyond the SM and electroweak measurements. It is evident that the Linear Collider program will constitute a major step in the understanding of the nature of the new physics beyond the Standard Model. (Author) 22 refs
Edwards, Harold M
1995-01-01
In his new undergraduate textbook, Harold M. Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject.
A methodology for uncertainty analysis of reference equations of state
Cheung, Howard; Frutiger, Jerome; Bell, Ian H.
We present a detailed methodology for the uncertainty analysis of reference equations of state (EOS) based on Helmholtz energy. In recent years there has been an increased interest in uncertainties of property data and process models of thermal systems. In the literature there are various...... for uncertainty analysis is suggested as a tool for EOS. The uncertainties of the EOS properties are calculated from the experimental values and the EOS model structure through the parameter covariance matrix and subsequent linear error propagation. This allows reporting the uncertainty range (95% confidence...
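The linear error propagation through the parameter covariance matrix mentioned above reduces, for a scalar output, to the quadratic form u(y)^2 = J Σ Jᵀ; a generic sketch with an invented Jacobian and covariance (not the EOS model itself):

```python
def prop_var(J, cov):
    """Scalar-output linear error propagation: u(y)^2 = J * Cov * J^T,
    where J is the row of sensitivities dy/dp_i and cov is the
    parameter covariance matrix."""
    n = len(J)
    return sum(J[i] * cov[i][j] * J[j]
               for i in range(n) for j in range(n))

# illustrative sensitivities and parameter covariance
J = [2.0, -1.0]
cov = [[0.04, 0.01],
       [0.01, 0.09]]
var = prop_var(J, cov)   # propagated variance of the output
```

The off-diagonal covariance terms are what distinguish this from the independent-inputs sum of squares; dropping them here would give 0.25 instead of 0.21.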
On Fractional Order Hybrid Differential Equations
Mohamed A. E. Herzallah
2014-01-01
We develop the theory of fractional hybrid differential equations with linear and nonlinear perturbations involving the Caputo fractional derivative of order 0<α<1. Using some fixed point theorems we prove the existence of mild solutions for two types of hybrid equations. Examples are given to illustrate the obtained results.
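The Caputo derivative of order 0<α<1 that underlies these equations can be evaluated numerically with the standard L1 finite-difference scheme; a sketch of that scheme (our own illustration, not taken from the paper):

```python
import math

def caputo_l1(f, alpha, T, n):
    """L1 approximation of the Caputo derivative of order
    0 < alpha < 1 of f at time T, using n uniform steps."""
    h = T / n
    coef = h ** (-alpha) / math.gamma(2 - alpha)
    total = 0.0
    for k in range(n):
        # weight for the increment f(t_{k+1}) - f(t_k)
        w = (n - k) ** (1 - alpha) - (n - k - 1) ** (1 - alpha)
        total += w * (f((k + 1) * h) - f(k * h))
    return coef * total
```

For f(t) = t the scheme is exact (f is piecewise linear), reproducing the known Caputo derivative t^(1-α)/Γ(2-α); for smoother test functions it converges at order h^(2-α).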
Kaul, Dean C.; Egbert, Stephen D.; Woolson, William A.
2005-01-01
In order to avoid the pitfalls that so discredited DS86 and its uncertainty estimates, and to provide DS02 uncertainties that are both defensible and credible, this report not only presents the ensemble uncertainties assembled from uncertainties in individual computational elements and radiation dose components but also describes how these relate to comparisons between observed and computed quantities at critical intervals in the computational process. These comparisons include those between observed and calculated radiation free-field components, where observations include thermal- and fast-neutron activation and gamma-ray thermoluminescence, which are relevant to the estimated systematic uncertainty for DS02. The comparisons also include those between calculated and observed survivor shielding, where the observations consist of biodosimetric measurements for individual survivors, which are relevant to the estimated random uncertainty for DS02. (J.P.N.)
Model uncertainty and probability
Parry, G.W.
1994-01-01
This paper discusses the issue of model uncertainty. The use of probability as a measure of an analyst's uncertainty as well as a means of describing random processes has caused some confusion, even though the two uses represent different types of uncertainty with respect to modeling a system. The importance of maintaining the distinction between the two types is illustrated with a simple example.
Uncertainty in artificial intelligence
Kanal, LN
1986-01-01
How to deal with uncertainty is a subject of much controversy in Artificial Intelligence. This volume brings together a wide range of perspectives on uncertainty, many of the contributors being the principal proponents in the controversy. Some of the notable issues which emerge from these papers revolve around an interval-based calculus of uncertainty, the Dempster-Shafer Theory, and probability as the best numeric model for uncertainty. There remain strong dissenting opinions not only about probability but even about the utility of any numeric method in this context.
Uncertainties in hydrogen combustion
Stamps, D.W.; Wong, C.C.; Nelson, L.S.
1988-01-01
Three important areas of hydrogen combustion with uncertainties are identified: high-temperature combustion, flame acceleration and deflagration-to-detonation transition, and aerosol resuspension during hydrogen combustion. The uncertainties associated with high-temperature combustion may affect at least three different accident scenarios: the in-cavity oxidation of combustible gases produced by core-concrete interactions, the direct containment heating hydrogen problem, and the possibility of local detonations. How these uncertainties may affect the sequence of various accident scenarios is discussed and recommendations are made to reduce these uncertainties. 40 references
Robust portfolio selection under norm uncertainty
Lei Wang
2016-06-01
In this paper, we consider the robust portfolio selection problem which has a data uncertainty described by the $(p,w)$-norm in the objective function. We show that the robust formulation of this problem is equivalent to a linear optimization problem. Moreover, we present some numerical results concerning our robust portfolio selection problem.
Uncertainty in hydrological signatures
McMillan, Hilary; Westerberg, Ida
2015-04-01
Information that summarises the hydrological behaviour or flow regime of a catchment is essential for comparing responses of different catchments to understand catchment organisation and similarity, and for many other modelling and water-management applications. Such information types derived as an index value from observed data are known as hydrological signatures, and can include descriptors of high flows (e.g. mean annual flood), low flows (e.g. mean annual low flow, recession shape), the flow variability, flow duration curve, and runoff ratio. Because the hydrological signatures are calculated from observed data such as rainfall and flow records, they are affected by uncertainty in those data. Subjective choices in the method used to calculate the signatures create a further source of uncertainty. Uncertainties in the signatures may affect our ability to compare different locations, to detect changes, or to compare future water resource management scenarios. The aim of this study was to contribute to the hydrological community's awareness and knowledge of data uncertainty in hydrological signatures, including typical sources, magnitude and methods for its assessment. We proposed a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrated it for a variety of commonly used signatures. The study was made for two data rich catchments, the 50 km2 Mahurangi catchment in New Zealand and the 135 km2 Brue catchment in the UK. For rainfall data the uncertainty sources included point measurement uncertainty, the number of gauges used in calculation of the catchment spatial average, and uncertainties relating to lack of quality control. For flow data the uncertainty sources included uncertainties in stage/discharge measurement and in the approximation of the true stage-discharge relation by a rating curve. The resulting uncertainties were compared across the different signatures and catchments, to quantify uncertainty
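A minimal version of the Monte Carlo approach described in this abstract, applied to a single signature (the runoff ratio) under a purely hypothetical 10% multiplicative rating-curve error, might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
flow = np.array([1.2, 0.8, 2.5, 3.1, 0.6])   # synthetic daily flow (mm/day)
rain = np.array([2.0, 1.5, 4.0, 5.0, 1.0])   # synthetic daily rainfall (mm/day)

# Assume a 10% (1-sigma) multiplicative rating-curve error per realization
n = 10_000
factors = rng.normal(1.0, 0.10, size=n)
runoff_ratio = factors * flow.sum() / rain.sum()

lo, hi = np.percentile(runoff_ratio, [2.5, 97.5])  # 95% uncertainty interval
```

In the actual study the perturbations would come from the identified uncertainty sources (gauge density, rating-curve approximation, quality control), each sampled per realization before recomputing the signature.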
Uncertainty Quantification with Applications to Engineering Problems
Bigoni, Daniele
in measurements, predictions and manufacturing, and we can say that any dynamical system used in engineering is subject to some of these uncertainties. The first part of this work presents an overview of the mathematical framework used in Uncertainty Quantification (UQ) analysis and introduces the spectral tensor...... and thus the UQ analysis of the associated systems will benefit greatly from the application of methods which require few function evaluations. We first consider the propagation of the uncertainty and the sensitivity analysis of the non-linear dynamics of railway vehicles with suspension components whose......-scale problems, where efficient methods are necessary with today’s computational resources. The outcome of this work was also the creation of several freely available Python modules for Uncertainty Quantification, which are listed and described in the appendix....
Inverse Problems and Uncertainty Quantification
Litvinenko, Alexander
2014-01-06
In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
Inverse Problems and Uncertainty Quantification
Litvinenko, Alexander; Matthies, Hermann G.
2014-01-01
In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
Inverse problems and uncertainty quantification
Litvinenko, Alexander
2013-12-18
In a Bayesian setting, inverse problems and uncertainty quantification (UQ) — the propagation of uncertainty through a computational (forward) model — are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
Fractional Vector Calculus and Fractional Special Function
Li, Ming-Fan; Ren, Ji-Rong; Zhu, Tao
2010-01-01
Fractional vector calculus is discussed in the spherical coordinate framework. A variation of the Legendre equation and fractional Bessel equation are solved by series expansion and numerically. Finally, we generalize the hypergeometric functions.
Uncertainty in hydraulic tests in fractured rock
Ji, Sung-Hoon; Koh, Yong-Kwon
2014-01-01
Interpretation of hydraulic tests in fractured rock is uncertain because the hydraulic properties of a fractured rock differ from those of a porous medium. In this study, we reviewed several interesting phenomena that reveal uncertainty in hydraulic tests at fractured rock sites and discussed their origins and how they should be considered during site characterisation. Our results show that the hydraulic parameters of a fractured rock estimated from a hydraulic test are subject to uncertainty due to changed aperture and non-linear groundwater flow during the test. Although the magnitude of these two uncertainties is site-dependent, the results suggest conducting hydraulic tests with as little disturbance to the natural groundwater flow as possible. Other effects reported from laboratory and numerical experiments, such as the trapping zone effect (Boutt, 2006) and the slip condition effect (Lee, 2014), can also introduce uncertainty into a hydraulic test and should be evaluated in the field. It is necessary to consider how to evaluate the uncertainty in hydraulic properties during site characterisation and how to apply it to the safety assessment of a subsurface repository. (authors)
Uncertainty estimation of ultrasonic thickness measurement
Yassir Yassen, Abdul Razak Daud; Mohammad Pauzi Ismail; Abdul Aziz Jemain
2009-01-01
The most important factor to consider when selecting an ultrasonic thickness measurement technique is its reliability. Only when the uncertainty of a measurement result is known can it be judged whether the result is adequate for its intended purpose. The objective of this study is to model the ultrasonic thickness measurement function, to identify the most significant input uncertainty components, and to estimate the uncertainty of the ultrasonic thickness measurement results. We assumed that five error sources contribute significantly to the final error: calibration velocity, transit time, zero offset, measurement repeatability, and resolution. By applying the law of propagation of uncertainty to the model function, a combined uncertainty of the ultrasonic thickness measurement was obtained. In this study the modeling function of ultrasonic thickness measurement was derived. Using this model, the estimated uncertainty of the final output result was found to be reliable. It was also found that the largest input uncertainty components are calibration velocity, transit time linearity, and zero offset. (author)
Karloff, Howard
1991-01-01
To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...
Notø, Hilde P; Nordby, Karl-Christian; Eduard, Wijnand
2016-05-01
The aims of this study were to examine the relationships and establish conversion factors between 'total' dust and the respirable, thoracic, and inhalable aerosol fractions measured by parallel personal sampling on workers from the production departments of cement plants. 'Total' dust in this study refers to aerosol sampled by the closed-face 37-mm Millipore filter cassette. Side-by-side personal measurements of 'total' dust and the respirable, thoracic, and inhalable aerosol fractions were performed on workers in 17 European and Turkish cement plants. Simple linear and mixed model regressions were used to model the associations between the samplers. The total number of personal samples collected on 141 workers was 512. Of these, 8.4% were excluded, leaving 469 for statistical analysis. The different aerosol fractions contained from 90 to 130 measurements, and side-by-side measurements of all four aerosol fractions were collected on 72 workers. The median ratios of the respirable, 'total' dust, and inhalable fractions relative to the thoracic fraction were 0.51, 2.4, and 5.9, respectively. The ratios between the samplers were not constant over the measured concentration range and were best described by regression models. Job type, position of samplers on the left or right shoulder, and plant had no substantial effect on the ratios. The ratios between aerosol fractions changed with air concentration. Conversion models for estimating the fractions were established. These models explained a high proportion of the variance (74-91%), indicating that they are useful for estimating concentrations based on measurements of a different aerosol fraction. The calculated uncertainties at most observed concentrations were below 30%, which is acceptable for comparison with limit values (EN 482, 2012). The cement industry will therefore be able to predict the health-related aerosol fractions from their former or future measurements of one of the
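A conversion model of the kind described, relating one measured fraction to another when their ratio varies with concentration, can be sketched as a log-log regression. The paired concentrations below are invented for illustration, not the study's data:

```python
import numpy as np

# Synthetic paired measurements: thoracic vs. inhalable concentration (mg/m3)
thoracic  = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
inhalable = np.array([2.8, 5.6, 12.0, 24.5, 47.0])

# Log-log regression: a power law accommodates ratios that change with
# concentration, which a single fixed conversion factor cannot
slope, intercept = np.polyfit(np.log(thoracic), np.log(inhalable), 1)

def predict_inhalable(thoracic_conc):
    """Estimate the inhalable fraction from a thoracic measurement."""
    return np.exp(intercept) * thoracic_conc ** slope
```

The real study used mixed models to account for plant- and worker-level grouping; this simple regression only shows the conversion idea.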
Uncertainty in social dilemmas
Kwaadsteniet, Erik Willem de
2007-01-01
This dissertation focuses on social dilemmas, and more specifically, on environmental uncertainty in these dilemmas. Real-life social dilemma situations are often characterized by uncertainty. For example, fishermen mostly do not know the exact size of the fish population (i.e., resource size
Uncertainty and Climate Change
Berliner, L. Mark
2003-01-01
Anthropogenic, or human-induced, climate change is a critical issue in science and in the affairs of humankind. Though the target of substantial research, the conclusions of climate change studies remain subject to numerous uncertainties. This article presents a very brief review of the basic arguments regarding anthropogenic climate change with particular emphasis on uncertainty.
Deterministic uncertainty analysis
Worley, B.A.
1987-01-01
Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig
Depres, B.; Dossantos-Uzarralde, P.
2009-01-01
More than 150 researchers and engineers from universities and the industrial world met to discuss on the new methodologies developed around assessing uncertainty. About 20 papers were presented and the main topics were: methods to study the propagation of uncertainties, sensitivity analysis, nuclear data covariances or multi-parameter optimisation. This report gathers the contributions of CEA researchers and engineers
Uncertainty evaluation of a modified elimination weighing for source preparation
Cacais, F.L.; Loayza, V.M., E-mail: facacais@gmail.com [Instituto Nacional de Metrologia, Qualidade e Tecnologia, (INMETRO), Rio de Janeiro, RJ (Brazil); Delgado, J.U. [Instituto de Radioproteção e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Metrologia das Radiações Ionizantes
2017-07-01
Some modifications to the elimination weighing method for radioactive source preparation allowed weighing results to be corrected without non-linearity problems, a correction uncertainty of the same order as the drop-mass uncertainty to be assigned, and weighing variability in serial source preparation to be checked. The analysis focused on the achievable weighing accuracy; the uncertainty estimated by the Monte Carlo method for the mass of a 20 mg drop was at most 0.06%. (author)
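The Monte Carlo evaluation mentioned can be sketched for elimination weighing, where the drop mass is the difference of two balance readings. All numbers below are hypothetical, chosen only to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical balance readings (g) with 1-sigma repeatability of 3e-6 g
m_before = rng.normal(1.020000, 3e-6, n)
m_after  = rng.normal(1.000000, 3e-6, n)
drop = m_before - m_after                 # ~20 mg drop by elimination weighing

rel_u = drop.std(ddof=1) / drop.mean()    # relative standard uncertainty
```

For uncorrelated readings this reproduces the analytic result u = sqrt(2) x 3e-6 g; the value of the Monte Carlo route is that correlated corrections (buoyancy, evaporation, non-linearity) can be added to the simulation without new algebra.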
Assigning uncertainties in the inversion of NMR relaxation data.
Parker, Robert L; Song, Yi-Qaio
2005-06-01
Recovering the relaxation-time density function (or distribution) from NMR decay records requires inverting a Laplace transform based on noisy data, an ill-posed inverse problem. An important objective in the face of the consequent ambiguity in the solutions is to establish what reliable information is contained in the measurements. To this end we describe how upper and lower bounds on linear functionals of the density function, and on ratios of linear functionals, can be calculated using optimization theory. Those bounded quantities cover most of those commonly used in geophysical NMR, such as porosity, T(2) log-mean, and bound fluid volume fraction, and include averages over any finite interval of the density function itself. In the theory presented, statistical considerations enter to account for the presence of significant noise in the signal, but not in a prior characterization of density models. Our characterization of the uncertainties is conservative and informative; it will have wide application in geophysical NMR and elsewhere.
Conditional uncertainty principle
Gour, Gilad; Grudka, Andrzej; Horodecki, Michał; Kłobus, Waldemar; Łodyga, Justyna; Narasimhachar, Varun
2018-04-01
We develop a general operational framework that formalizes the concept of conditional uncertainty in a measure-independent fashion. Our formalism is built upon a mathematical relation which we call conditional majorization. We define conditional majorization and, for the case of classical memory, we provide its thorough characterization in terms of monotones, i.e., functions that preserve the partial order under conditional majorization. We demonstrate the application of this framework by deriving two types of memory-assisted uncertainty relations, (1) a monotone-based conditional uncertainty relation and (2) a universal measure-independent conditional uncertainty relation, both of which set a lower bound on the minimal uncertainty that Bob has about Alice's pair of incompatible measurements, conditioned on arbitrary measurement that Bob makes on his own system. We next compare the obtained relations with their existing entropic counterparts and find that they are at least independent.
Physical Uncertainty Bounds (PUB)
Vaughan, Diane Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Dean L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Measurement uncertainty and probability
Willink, Robin
2013-01-01
A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.
Uncertainty Propagation in OMFIT
Smith, Sterling; Meneghini, Orso; Sung, Choongki
2017-10-01
A rigorous comparison of power balance fluxes and turbulent model fluxes requires the propagation of uncertainties in the kinetic profiles and their derivatives. Making extensive use of the python uncertainties package, the OMFIT framework has been used to propagate covariant uncertainties to provide an uncertainty in the power balance calculation from the ONETWO code, as well as through the turbulent fluxes calculated by the TGLF code. The covariant uncertainties arise from fitting 1D (constant on flux surface) density and temperature profiles and associated random errors with parameterized functions such as a modified tanh. The power balance and model fluxes can then be compared with quantification of the uncertainties. No effort is made at propagating systematic errors. A case study will be shown for the effects of resonant magnetic perturbations on the kinetic profiles and fluxes at the top of the pedestal. A separate attempt at modeling the random errors with Monte Carlo sampling will be compared to the method of propagating the fitting function parameter covariant uncertainties. Work supported by US DOE under DE-FC02-04ER54698, DE-FG2-95ER-54309, DE-SC 0012656.
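The covariant-propagation idea, fitting a parameterized profile and pushing the parameter covariance through the Jacobian, can be sketched as follows. The profile form, parameter values, and covariance matrix are invented stand-ins, not OMFIT's actual modified tanh or any experimental data:

```python
import numpy as np

# Illustrative pedestal-like profile; NOT OMFIT's actual modified-tanh form
def profile(x, a, b):
    return a * 0.5 * (1.0 - np.tanh((x - 0.95) / b))

# Hypothetical fit results: parameter estimates and their covariance matrix
p = np.array([2.0, 0.05])
cov = np.array([[0.010, 0.001],
                [0.001, 0.0004]])

# Linear propagation of the covariant uncertainties to the profile at x0:
# build the Jacobian by forward differences, then form sqrt(J @ cov @ J)
x0, eps = 0.9, 1e-6
J = np.array([(profile(x0, p[0] + eps, p[1]) - profile(x0, *p)) / eps,
              (profile(x0, p[0], p[1] + eps) - profile(x0, *p)) / eps])
u = float(np.sqrt(J @ cov @ J))   # standard uncertainty of profile(x0)
```

The off-diagonal covariance terms matter here: propagating each parameter's variance independently, as a naive approach would, can substantially over- or under-state the profile uncertainty, which is why the abstract emphasizes covariant propagation.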
Uncertainty quantification for environmental models
Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming
2012-01-01
]. There are also bootstrapping and cross-validation approaches. Sometimes analyses are conducted using surrogate models [12]. The availability of so many options can be confusing. Categorizing methods based on fundamental questions assists in communicating the essential results of uncertainty analyses to stakeholders. Such questions can focus on model adequacy (e.g., How well does the model reproduce observed system characteristics and dynamics?) and sensitivity analysis (e.g., What parameters can be estimated with available data? What observations are important to parameters and predictions? What parameters are important to predictions?), as well as on the uncertainty quantification (e.g., How accurate and precise are the predictions?). The methods can also be classified by the number of model runs required: few (10s to 1000s) or many (10,000s to 1,000,000s). Of the methods listed above, the most computationally frugal are generally those based on local derivatives; MCMC methods tend to be among the most computationally demanding. Surrogate models (emulators) do not necessarily produce computational frugality because many runs of the full model are generally needed to create a meaningful surrogate model. With this categorization, we can, in general, address all the fundamental questions mentioned above using either computationally frugal or demanding methods. Model development and analysis can thus be conducted consistently using either computationally frugal or demanding methods; alternatively, different fundamental questions can be addressed using methods that require different levels of effort. Based on this perspective, we pose the question: Can computationally frugal methods be useful companions to computationally demanding methods?
The reliability of computationally frugal methods generally depends on the model being reasonably linear, which usually means smooth nonlinearities and the assumption of Gaussian errors; both tend to be more valid with more linear
Exact solutions to the time-fractional differential equations via local fractional derivatives
Guner, Ozkan; Bekir, Ahmet
2018-01-01
This article utilizes the local fractional derivative and the exp-function method to construct exact solutions of nonlinear time-fractional differential equations (FDEs). To illustrate the validity of the method, it is applied to the time-fractional Camassa-Holm equation and the time-fractional generalized fifth-order KdV equation. Moreover, exact solutions are obtained for the equations formed by different parameter values related to the time-fractional generalized fifth-order KdV equation. This method is a reliable and efficient mathematical tool for solving FDEs and can be applied to other non-linear FDEs.
The linear canonical transformation : definition and properties
Bastiaans, Martin J.; Alieva, Tatiana; Healy, J.J.; Kutay, M.A.; Ozaktas, H.M.; Sheridan, J.T.
2016-01-01
In this chapter we introduce the class of linear canonical transformations, which includes as particular cases the Fourier transformation (and its generalization: the fractional Fourier transformation), the Fresnel transformation, and magnifier, rotation and shearing operations. The basic properties
Verification of uncertainty budgets
Heydorn, Kaj; Madsen, B.S.
2005-01-01
, and therefore it is essential that the applicability of the overall uncertainty budget to actual measurement results be verified on the basis of current experimental data. This should be carried out by replicate analysis of samples taken in accordance with the definition of the measurand, but representing...... the full range of matrices and concentrations for which the budget is assumed to be valid. In this way the assumptions made in the uncertainty budget can be experimentally verified, both as regards sources of variability that are assumed negligible, and dominant uncertainty components. Agreement between...
Reduction of Linear Programming to Linear Approximation
Vaserstein, Leonid N.
2006-01-01
It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
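The forward direction stated in the first sentence, reducing a Chebyshev (minimax) approximation problem to a linear program, can be sketched with `scipy.optimize.linprog` (assuming SciPy is available). Here we fit a single constant c to data b, introducing an auxiliary variable t for the maximum absolute residual:

```python
import numpy as np
from scipy.optimize import linprog

# Chebyshev problem: choose scalar c minimizing max_i |c - b_i|
b = np.array([1.0, 2.0, 5.0])

# LP reformulation: variables (c, t); minimize t
# subject to  c - b_i <= t  and  b_i - c <= t  for all i
A_ub = np.vstack([np.column_stack([np.ones_like(b), -np.ones_like(b)]),
                  np.column_stack([-np.ones_like(b), -np.ones_like(b)])])
b_ub = np.concatenate([b, -b])
res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (0, None)])
c_opt, t_opt = res.x   # midrange of b, and the minimax error
```

For this data the optimum is the midrange (min + max)/2 = 3 with minimax error 2. The paper's contribution is the converse reduction, from an arbitrary linear program back to a Chebyshev approximation problem.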
Plurality of Type A evaluations of uncertainty
Possolo, Antonio; Pintar, Adam L.
2017-10-01
The evaluations of measurement uncertainty involving the application of statistical methods to measurement data (Type A evaluations as specified in the Guide to the Expression of Uncertainty in Measurement, GUM) comprise the following three main steps: (i) developing a statistical model that captures the pattern of dispersion or variability in the experimental data, and that relates the data either to the measurand directly or to some intermediate quantity (input quantity) that the measurand depends on; (ii) selecting a procedure for data reduction that is consistent with this model and that is fit for the purpose that the results are intended to serve; (iii) producing estimates of the model parameters, or predictions based on the fitted model, and evaluations of uncertainty that qualify either those estimates or these predictions, and that are suitable for use in subsequent uncertainty propagation exercises. We illustrate these steps in uncertainty evaluations related to the measurement of the mass fraction of vanadium in a bituminous coal reference material, including the assessment of the homogeneity of the material, and to the calibration and measurement of the amount-of-substance fraction of a hydrochlorofluorocarbon in air, and of the age of a meteorite. Our goal is to expose the plurality of choices that can reasonably be made when taking each of the three steps outlined above, and to show that different choices typically lead to different estimates of the quantities of interest, and to different evaluations of the associated uncertainty. In all the examples, the several alternatives considered represent choices that comparably competent statisticians might make, but who differ in the assumptions that they are prepared to rely on, and in their selection of approach to statistical inference. They represent also alternative treatments that the same statistician might give to the same data when the results are intended for different purposes.
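Step (iii) in its simplest textbook form, the Type A evaluation of the standard uncertainty of a mean from replicate observations, can be sketched as follows (the readings are invented):

```python
import statistics

# Type A evaluation: replicate observations of the same measurand
obs = [10.02, 10.05, 9.98, 10.01, 10.04]   # hypothetical replicate readings
n = len(obs)
mean = statistics.fmean(obs)
s = statistics.stdev(obs)                  # sample standard deviation
u = s / n ** 0.5                           # standard uncertainty of the mean
```

The paper's point is that even this step involves choices: a statistician might instead use a robust location estimate, a random-effects model for between-bottle heterogeneity, or a Bayesian predictive interval, each yielding a different estimate and uncertainty from the same data.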
Liu, Yingyi
2017-09-08
Prior studies on fraction magnitude understanding focused mainly on students with relatively sufficient formal instruction on fractions whose fraction magnitude understanding is relatively mature. This study fills a research gap by investigating fraction magnitude understanding in the early stages of fraction instruction. It extends previous findings to children with limited and primary formal fraction instruction. Thirty-five fourth graders with limited fraction instruction and forty fourth graders with primary fraction instruction were recruited from a Chinese primary school. Children's fraction magnitude understanding was assessed with a fraction number line estimation task. Approximate number system (ANS) acuity was assessed with a dot discrimination task. Whole number knowledge was assessed with a whole number line estimation task. General reading and mathematics achievements were collected concurrently and 1 year later. In children with limited fraction instruction, fraction representation was linear and fraction magnitude understanding was concurrently related to both ANS and whole number knowledge. In children with primary fraction instruction, fraction magnitude understanding appeared to (marginally) significantly predict general mathematics achievement 1 year later. Fraction magnitude understanding emerged early during formal instruction of fractions. ANS and whole number knowledge were related to fraction magnitude understanding when children first began to learn about fractions in school. The predictive value of fraction magnitude understanding is likely constrained by its sophistication level. © 2017 The British Psychological Society.
Laskin, Nick
2018-01-01
Fractional quantum mechanics is a recently emerged and rapidly developing field of quantum physics. This is the first monograph on fundamentals and physical applications of fractional quantum mechanics, written by its founder. The fractional Schrödinger equation and the fractional path integral are new fundamental physical concepts introduced and elaborated in the book. The fractional Schrödinger equation is a manifestation of fractional quantum mechanics. The fractional path integral is a new mathematical tool based on integration over Lévy flights. The fractional path integral method enhances the well-known Feynman path integral framework. Related topics covered in the text include time fractional quantum mechanics, fractional statistical mechanics, fractional classical mechanics and the α-stable Lévy random process. The book is well-suited for theorists, pure and applied mathematicians, solid-state physicists, chemists, and others working with the Schrödinger equation, the path integral technique...
Uncertainty and validation. Effect of model complexity on uncertainty estimates
Elert, M.
1996-09-01
deterministic case, and the uncertainty bands did not always overlap. This suggests that considerable model uncertainties are present which were not considered in this study. Concerning possible constraints on the application domain of the different models, the results of this exercise suggest that if only the evolution of the root-zone concentration is to be predicted, all of the studied models give comparable results. However, if the flux to the groundwater is also to be predicted, then considerably more detail is needed in the model and the parameterization. This applies to the hydrological as well as the transport modelling. The difference in model predictions and the magnitude of uncertainty was quite small for some of the predicted end-points, while for others it could span many orders of magnitude. Of special importance were end-points where delay in the soil was involved, e.g. release to the groundwater; in such cases the influence of radioactive decay gave rise to strongly non-linear effects. The work in the subgroup has provided many valuable insights into the effects of model simplifications, e.g. discretization in the model, averaging of the time-varying input parameters and the assignment of uncertainties to parameters. The conclusions drawn concerning these are primarily valid for the studied scenario; however, we believe that they are to a large extent also generally applicable. The subgroup has had many opportunities to study the pitfalls involved in model comparison. The intention was to provide a well-defined scenario for the subgroup, but despite several iterations misunderstandings and ambiguities remained. The participants were forced to scrutinize their models to try to explain differences in the predictions, and most, if not all, of the participants have improved their models as a result of this
Fractional vector calculus and fractional Maxwell's equations
Tarasov, Vasily E.
2008-01-01
The theory of derivatives and integrals of non-integer order goes back to Leibniz, Liouville, Grunwald, Letnikov and Riemann. The history of fractional vector calculus (FVC), by contrast, spans only about ten years. The main approaches used in physics over the past few years to formulate an FVC are briefly described in this paper. We solve some problems of consistent formulation of FVC by using a fractional generalization of the Fundamental Theorem of Calculus. We define the differential and integral vector operations. The fractional Green's, Stokes' and Gauss's theorems are formulated, and proofs of these theorems are given for the simplest regions. A fractional generalization of the exterior differential calculus of differential forms is discussed. Fractional nonlocal Maxwell's equations and the corresponding fractional wave equations are considered
Fractional statistics and fractional quantized Hall effect
Tao, R.; Wu, Y.S.
1985-01-01
The authors suggest that the origin of the odd-denominator rule observed in the fractional quantized Hall effect (FQHE) may lie in fractional statistics which govern quasiparticles in FQHE. A theorem concerning statistics of clusters of quasiparticles implies that fractional statistics do not allow coexistence of a large number of quasiparticles at fillings with an even denominator. Thus, no Hall plateau can be formed at these fillings, regardless of the presence of an energy gap. 15 references
Uncertainties and climatic change
De Gier, A.M.; Opschoor, J.B.; Van de Donk, W.B.H.J.; Hooimeijer, P.; Jepma, J.; Lelieveld, J.; Oerlemans, J.; Petersen, A.
2008-01-01
Which processes in the climate system are misunderstood? How are scientists dealing with uncertainty about climate change? What will be done with the conclusions of the recently published synthesis report of the IPCC? These and other questions were answered during the meeting 'Uncertainties and climate change' that was held on Monday 26 November 2007 at the KNAW in Amsterdam. This report is a compilation of all the presentations and provides some conclusions resulting from the discussions during this meeting.
Lemaire, Maurice
2014-01-01
Science is a quest for certainty, but lack of certainty is the driving force behind all of its endeavors. This book, specifically, examines the uncertainty of technological and industrial science. Uncertainty and Mechanics studies the concepts of mechanical design in an uncertain setting and explains engineering techniques for inventing cost-effective products. Though it references practical applications, this is a book about ideas and potential advances in mechanical science.
Uncertainty: lotteries and risk
Ávalos, Eloy
2011-01-01
In this paper we develop the theory of uncertainty in a context where the risks assumed by the individual are measurable and manageable. We primarily use the definition of a lottery to formulate the axioms of the individual's preferences, and their representation through the von Neumann-Morgenstern utility function. We study the expected utility theorem and its properties, the paradoxes of choice under uncertainty and, finally, measures of risk aversion with monetary lotteries.
Uncertainty calculations made easier
Hogenbirk, A.
1994-07-01
The results are presented of a neutron cross section sensitivity/uncertainty analysis performed in a complicated 2D model of the NET shielding blanket design inside the ITER torus design, surrounded by the cryostat/biological shield as planned for ITER. The calculations were performed with a code system developed at ECN Petten, with which sensitivity/uncertainty calculations become relatively simple. In order to check the deterministic neutron transport calculations (performed with DORT), calculations were also performed with the Monte Carlo code MCNP. Care was taken to model the 2.0 cm wide gaps between two blanket segments, as the neutron flux behind the vacuum vessel is largely determined by neutrons streaming through these gaps. The resulting neutron flux spectra are in excellent agreement up to the end of the cryostat. It is noted that at this position the attenuation of the neutron flux is about 11 orders of magnitude. The uncertainty in the energy-integrated flux at the beginning of the vacuum vessel and at the beginning of the cryostat was determined in the calculations. The uncertainty appears to be strongly dependent on the exact geometry: if the gaps are filled with stainless steel, the neutron spectrum changes strongly, which results in an uncertainty of 70% in the energy-integrated flux at the beginning of the cryostat in the no-gap geometry, compared to an uncertainty of only 5% in the gap geometry. Therefore, it is essential to take the exact geometry into account in sensitivity/uncertainty calculations. Furthermore, this study shows that an improvement of the covariance data is urgently needed in order to obtain reliable estimates of the uncertainties in response parameters in neutron transport calculations. (orig./GL)
The fundamental solutions for fractional evolution equations of parabolic type
Mahmoud M. El-Borai
2004-01-01
The fundamental solutions for linear fractional evolution equations are obtained. The coefficients of these equations are a family of linear closed operators in a Banach space. The continuous dependence of solutions on the initial conditions is also studied. A mixed problem for general parabolic partial differential equations of fractional order is given as an application.
Initialized Fractional Calculus
Lorenzo, Carl F.; Hartley, Tom T.
2000-01-01
This paper demonstrates the need for a nonconstant initialization for the fractional calculus and establishes a basic definition set for the initialized fractional differintegral. This definition set allows the formalization of an initialized fractional calculus. Two basis calculi are considered: the Riemann-Liouville and the Grunwald fractional calculi. Two forms of initialization, terminal and side, are developed.
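The Grunwald calculus mentioned above lends itself to direct computation: the differintegral is the limit of a weighted sum of function samples. The sketch below is a minimal illustration only, not the authors' initialized formulation; it assumes a zero initialization function and a lower terminal of 0, and checks the numerical result for f(t) = t against the known closed form t^(1-α)/Γ(2-α).

```python
from math import gamma

def gl_differintegral(f, t, alpha, h=1e-3):
    """Grunwald-Letnikov fractional differintegral of order alpha at t,
    lower terminal 0, assuming zero (constant) initialization."""
    n = int(round(t / h))
    acc = 0.0
    w = 1.0  # w_0 = 1; recursion below advances (-1)^j * C(alpha, j)
    for j in range(n + 1):
        acc += w * f(t - j * h)
        w *= (j - alpha) / (j + 1)
    return acc / h ** alpha

# Known closed form: D^alpha of t equals t^(1-alpha) / Gamma(2-alpha)
t, alpha = 1.0, 0.5
approx = gl_differintegral(lambda x: x, t, alpha)
exact = t ** (1 - alpha) / gamma(2 - alpha)
```

The first-order convergence of the Grunwald sum makes the step size h the only tuning knob here; halving h roughly halves the error against the closed form.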
Unattached fraction of radon progeny in Polish coal mines
Skubacz, K.; Michalik, B.
2002-01-01
The system for monitoring the radiation hazard in Polish coal mines is based on monitoring of workplaces. This system has operated since 1989 in all coal mines. It gives a very good basis for further epidemiological investigation and assessment of the health detriment in the population of the mines resulting from exposure to natural radiation. This is an important problem because other factors present in the mines probably have synergistic effects on the respiratory tract. As the routine instrument, a device called the ALFA-31 sampling probe was developed in our laboratory. This device was coupled to a regular dust sampler, and simultaneous measurements of dust content and potential alpha-energy concentration of radon progeny are obligatory in all underground mines in Poland. However, the microcyclone used as a separator of the respirable fraction cuts off the unattached fraction of radon progeny. On the other hand, measurements of the unattached fraction of short-lived radon progeny play a very important role in investigations of the dose from this source of radiation hazard. During field experiments the use of an alpha-spectroscopy system is necessary, but the measurements are made not in vacuum chambers but under normal pressure. As a result, individual peaks in the alpha spectrum are very wide and interfere with peaks of other alpha-emitting radionuclides. Such instrumentation was designed and completed, and a survey in several underground mines was performed. The analysis of the obtained results must be done very carefully; otherwise it may introduce a very large uncertainty in the result. In this paper a new approach to the analysis of alpha spectra is described. This approach can also be used in other applications of alpha spectroscopy in which analysis of the energies of alpha peaks in the spectrum is needed. The method of analysis is based on a non-linear regression
Fractional-moment CAPM with loss aversion
Wu Yahao; Wang Xiaotian; Wu Min
2009-01-01
In this paper, we present a new fractional-order value function which generalizes the value function of Kahneman and Tversky [Kahneman D, Tversky A. Prospect theory: an analysis of decision under risk. Econometrica 1979;47:263-91; Tversky A, Kahneman D. Advances in prospect theory: cumulative representation of uncertainty. J. Risk Uncertainty 1992;4:297-323], and give the corresponding fractional-moment versions of the CAPM in the cases of both prospect theory [Kahneman D, Tversky A. Prospect theory: an analysis of decision under risk. Econometrica 1979;47:263-91; Tversky A, Kahneman D. Advances in prospect theory: cumulative representation of uncertainty. J. Risk Uncertainty 1992;4:297-323] and the expected utility model. The models that we obtain can be used to price assets when asset return distributions are likely to be asymmetric stable Lévy distributions, as during the panics and stampedes in worldwide security markets in 2008. In particular, from prospect theory we get the following fractional-moment CAPM with loss aversion: E(R_i - R_0) = ( E[((W - W_0)^+)^(-0.12) (R_i - R_0)] + 2.25 E[((W_0 - W)^+)^(-0.12) (R_i - R_0)] ) / ( E[((W - W_0)^+)^(-0.12) (W - R_0)] + 2.25 E[((W_0 - W)^+)^(-0.12) (W - R_0)] ) · E(W - R_0), where W_0 is a fixed reference point distinguishing between losses and gains.
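The loss-aversion coefficient 2.25 and the exponent -0.12 (that is, 0.88 - 1) in the formula above come from Tversky and Kahneman's (1992) value function, which can be sketched directly. This is an illustrative snippet, not code from the cited paper:

```python
def tk_value(x, alpha=0.88, lam=2.25):
    """Tversky-Kahneman (1992) value function: x^alpha for gains,
    -lam * (-x)^alpha for losses; lam > 1 encodes loss aversion."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

gain = tk_value(100.0)    # subjective value of a 100-unit gain
loss = tk_value(-100.0)   # an equal loss, weighted by lam = 2.25
```

A loss thus looms more than twice as large as an equal gain, which is exactly what drives the 2.25 weighting of the loss-side fractional moments in the CAPM formula.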
Sabzikar, Farzad, E-mail: sabzika2@stt.msu.edu [Department of Statistics and Probability, Michigan State University, East Lansing, MI 48823 (United States); Meerschaert, Mark M., E-mail: mcubed@stt.msu.edu [Department of Statistics and Probability, Michigan State University, East Lansing, MI 48823 (United States); Chen, Jinghua, E-mail: cjhdzdz@163.com [School of Sciences, Jimei University, Xiamen, Fujian, 361021 (China)
2015-07-15
Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered fractional difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.
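The tempered fractional difference mentioned at the end of the abstract has a simple concrete form: its weights are exponentially tempered Grunwald binomial coefficients, and by the binomial series they sum to (1 - e^(-λ))^α. A minimal sketch with made-up parameter values (not from the source) checks that identity numerically:

```python
import math

def tempered_weights(alpha, lam, n):
    """First n weights (-1)^j C(alpha, j) e^(-lam*j) of the tempered
    fractional difference; the binomial factor is built recursively."""
    w, out = 1.0, []
    for j in range(n):
        out.append(w * math.exp(-lam * j))
        w *= (j - alpha) / (j + 1)  # advances (-1)^j * C(alpha, j)
    return out

weights = tempered_weights(alpha=0.6, lam=0.1, n=400)
total = sum(weights)                    # truncated binomial series
exact = (1.0 - math.exp(-0.1)) ** 0.6   # its closed-form limit
```

The exponential factor e^(-λj) is what makes the series absolutely summable for any α > 0, so the truncation error above is negligible; at λ = 0 the weights reduce to the untempered Grunwald coefficients.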
Justification for recommended uncertainties
Pronyaev, V.G.; Badikov, S.A.; Carlson, A.D.
2007-01-01
The uncertainties obtained in an earlier standards evaluation were considered to be unrealistically low by experts of the US Cross Section Evaluation Working Group (CSEWG). Therefore, the CSEWG Standards Subcommittee replaced the covariance matrices of evaluated uncertainties by expanded percentage errors that were assigned to the data over wide energy groups. There are a number of reasons that might lead to low uncertainties of the evaluated data: underestimation of the correlations existing between the results of different measurements; the presence of unrecognized systematic uncertainties in the experimental data, which can lead to biases in the evaluated data as well as to underestimation of the resulting uncertainties; and the fact that uncertainties for correlated data cannot be characterized by percentage uncertainties or variances alone. Covariances between the evaluated value at 0.2 MeV and other points obtained in model (RAC R-matrix and PADE2 analytical expansion) and non-model (GMA) fits of the 6Li(n,t) TEST1 data, together with the correlation coefficients, are presented, and covariances between the evaluated value at 0.045 MeV and other points (along the line or column of the matrix) as obtained in EDA and RAC R-matrix fits of the data available for reactions that pass through the formation of the 7Li system are discussed. The GMA fit with the GMA database is shown for comparison. The following diagrams are discussed: percentage uncertainties of the evaluated cross section for the 6Li(n,t) reaction and for the 235U(n,f) reaction; the estimation given by CSEWG experts; the GMA result with the full GMA database, including experimental data for the 6Li(n,t), 6Li(n,n) and 6Li(n,total) reactions; uncertainties in the GMA combined fit for the standards; and the EDA and RAC R-matrix results, respectively. Uncertainties of absolute and 252Cf fission-spectrum-averaged cross section measurements, and deviations between measured and evaluated values for 235U(n,f) cross sections in the neutron energy range 1
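The point that correlated data cannot be characterized by percentage uncertainties alone is easy to make concrete: a covariance matrix carries both the variances (hence the percentage errors) and the correlation coefficients, and discarding the off-diagonal entries loses the latter. A small numpy sketch with purely illustrative numbers, not evaluated nuclear data:

```python
import numpy as np

# Hypothetical 3-point evaluated cross section (barns) and its
# covariance matrix; all values invented for illustration.
sigma = np.array([2.0, 1.5, 1.0])
cov = np.array([[0.0100, 0.0060, 0.0020],
                [0.0060, 0.0090, 0.0045],
                [0.0020, 0.0045, 0.0064]])

std = np.sqrt(np.diag(cov))        # absolute 1-sigma uncertainties
percent = 100.0 * std / sigma      # percentage uncertainties per point
corr = cov / np.outer(std, std)    # correlation coefficients
```

Replacing `cov` by `percent` alone, as described in the abstract, throws away `corr`; two evaluations with identical percentage errors can still propagate very differently into an integral quantity depending on those correlations.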
Striatal dopamine release codes uncertainty in pathological gambling
Linnet, Jakob; Mouridsen, Kim; Peterson, Ericka
2012-01-01
Two mechanisms of midbrain and striatal dopaminergic projections may be involved in pathological gambling: hypersensitivity to reward and sustained activation toward uncertainty. The midbrain-striatal dopamine system distinctly codes reward and uncertainty, where dopaminergic activation is a linear function of expected reward and an inverse U-shaped function of uncertainty. In this study, we investigated the dopaminergic coding of reward and uncertainty in 18 pathological gambling sufferers and 16 healthy controls. We used positron emission tomography (PET) with the tracer [11C]raclopride to measure dopamine release, and we used performance on the Iowa Gambling Task (IGT) to determine overall reward and uncertainty. We hypothesized that we would find a linear function between dopamine release and IGT performance if dopamine release coded reward in pathological gambling. If, on the other hand...
Control system analysis for the perturbed linear accelerator rf system
Sung Il Kwon
2002-01-01
This paper addresses the modeling problem of the linear accelerator RF system in SNS. Klystrons are modeled as linear parameter varying systems. The effect of the high voltage power supply ripple on the klystron output voltage and the output phase is modeled as an additive disturbance. The cavity is modeled as a linear system and the beam current is modeled as the exogenous disturbance. The output uncertainty of the low level RF system which results from the uncertainties in the RF components and cabling is modeled as multiplicative uncertainty. Also, the feedback loop uncertainty and digital signal processing signal conditioning subsystem uncertainties are lumped together and are modeled as multiplicative uncertainty. Finally, the time delays in the loop are modeled as a lumped time delay. For the perturbed open loop system, the closed loop system performance, and stability are analyzed with the PI feedback controller.
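The robust-stability question posed by this setup, a nominal loop plus lumped multiplicative uncertainty, reduces to checking that the complementary sensitivity stays below the inverse uncertainty bound at every frequency. A schematic numerical check with a first-order cavity model and a PI controller; every parameter value here is invented for illustration and is not SNS hardware data:

```python
import numpy as np

# Hypothetical cavity time constant and PI gains (illustrative only).
tau, kp, ki = 1e-6, 2.0, 5e5
w = np.logspace(3, 9, 2000)      # rad/s frequency grid
s = 1j * w
P = 1.0 / (tau * s + 1.0)        # cavity modeled as a linear system
C = kp + ki / s                  # PI feedback controller
L = P * C                        # open loop
T = L / (1.0 + L)                # complementary sensitivity

# Assumed multiplicative (relative) uncertainty bound |W(jw)|:
# small at low frequency, growing past the loop bandwidth.
Wb = 0.05 + w / 1e8
robust = np.all(np.abs(T) < 1.0 / Wb)   # robust stability condition
```

The condition `|T(jw)| < 1/|W(jw)|` for all w is the standard small-gain test for multiplicative uncertainty; here the lumped RF-component, cabling and signal-conditioning uncertainties would all be folded into the single bound `Wb`.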
Higher fractions theory of fractional hall effect
Kostadinov, I.Z.; Popov, V.N.
1985-07-01
A theory of the fractional quantum Hall effect is generalized to higher fractions. An N-particle model interaction is used and the gap is expressed through the n-particle wave function. The excitation spectrum in general and the mean-field critical behaviour are determined. The Hall conductivity is calculated from first principles. (author)
Dilaton cosmology and the modified uncertainty principle
Majumder, Barun
2011-01-01
Very recently Ali et al. (2009) proposed a new generalized uncertainty principle with a linear term in the Planck length, which is consistent with doubly special relativity and string theory. The classical and quantum effects of this generalized uncertainty principle (termed the modified uncertainty principle, or MUP) are investigated on the phase space of a dilatonic cosmological model with an exponential dilaton potential in a flat Friedmann-Robertson-Walker background. Interestingly, as a consequence of the MUP, we found that it is possible to get late-time acceleration for this model. For the quantum-mechanical description in both the commutative and MUP frameworks, we found the analytical solutions of the Wheeler-DeWitt equation for the early universe and compared our results. We have used an approximation method in the case of the MUP.
Assignment of uncertainties to scientific data
Froehner, F.H.
1994-01-01
Long-standing problems of uncertainty assignment to scientific data came into sharp focus in recent years when uncertainty information ('covariance files') had to be added to application-oriented large libraries of evaluated nuclear data such as ENDF and JEF. Questions arose about the best way to express uncertainties, the meaning of statistical and systematic errors, the origin of correlations and the construction of covariance matrices, the combination of uncertain data from different sources, the general usefulness of results that are strictly valid only for Gaussian or only for linear statistical models, etc. Conventional statistical theory is often unable to give unambiguous answers, and tends to fail when statistics are poor, so that prior information becomes crucial. Modern probability theory, on the other hand, incorporating decision-theoretic and group-theoretic results, is shown to provide straight and unique answers to such questions, and to deal easily with prior information and small samples. (author). 10 refs
Do oil shocks predict economic policy uncertainty?
Rehman, Mobeen Ur
2018-05-01
Oil price fluctuations play an influential role in the economic policies of developed as well as emerging countries. I investigate the role of international oil prices disintegrated into structural (i) oil supply shocks, (ii) aggregate demand shocks and (iii) oil-market-specific demand shocks, following Kilian (2009), using a structural VAR framework on the economic policy uncertainty of the sampled markets. Economic policy uncertainty, owing to its non-linear behavior, is modeled in a regime-switching framework with the disintegrated structural oil shocks. Our results highlight that Indian, Spanish and Japanese economic policy uncertainty responds to global oil price shocks, whereas aggregate demand shocks fail to induce any change. Oil-specific demand shocks are significant only for China and India in the high-volatility state.
LINEAR2007, Linear-Linear Interpolation of ENDF Format Cross-Sections
2007-01-01
1 - Description of program or function: LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form. Codes used subsequently thus need to consider only linear-linear data. IAEA1311/15: This version includes the updates up to January 30, 2007. Changes in the ENDF/B-VII format and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. Modifications from previous versions: - Linear VERS. 2007-1 (JAN. 2007): checked against all of ENDF/B-VII; increased page size from 60,000 to 600,000 points. 2 - Method of solution: Each section of data is considered separately. Each section of File 3, 23, and 27 data consists of a table of cross section versus energy with any of five interpolation laws. LINEAR replaces each section with a new table of energy versus cross section data in which the interpolation law is always linear in energy and cross section. The histogram (constant cross section between two energies) interpolation law is converted to linear-linear by substituting two points for each initial point. Linear-linear data are not altered. For the log-linear, linear-log and log-log laws, the cross section data are converted to linear form by an interval-halving algorithm: each interval is divided in half until the value at the middle of the interval can be approximated by linear-linear interpolation to within a given accuracy. The LINEAR program uses a multipoint fractional-error thinning algorithm to minimize the size of each cross section table
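The interval-halving step described above can be sketched in a few lines. This is an illustration of the idea, not the LINEAR code itself: a log-log law between two tabulated points is subdivided until linear-linear interpolation at each midpoint agrees with it to a fractional tolerance (the subsequent thinning pass is omitted).

```python
import math

def loglog_y(x, x1, y1, x2, y2):
    """Cross section at x under the log-log interpolation law."""
    b = math.log(y2 / y1) / math.log(x2 / x1)
    return y1 * (x / x1) ** b

def linearize(x1, y1, x2, y2, tol=1e-4):
    """Interval halving: subdivide until linear-linear interpolation
    reproduces the log-log law at the midpoint within fraction tol."""
    xm = 0.5 * (x1 + x2)
    y_true = loglog_y(xm, x1, y1, x2, y2)
    y_lin = 0.5 * (y1 + y2)      # linear-linear value at the midpoint
    if abs(y_lin - y_true) <= tol * abs(y_true):
        return [(x1, y1), (x2, y2)]
    left = linearize(x1, y1, xm, y_true, tol)
    right = linearize(xm, y_true, x2, y2, tol)
    return left + right[1:]      # drop the duplicated midpoint

# A 1/x-like cross section over two decades of energy (made-up data).
pts = linearize(1.0, 10.0, 100.0, 0.1)
```

Note how the subdivision is naturally non-uniform: the steeply curving low-energy end receives far more points than the high-energy end, which is the behavior the page-size limits in the program description have to accommodate.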
Dealing with exploration uncertainties
Capen, E.
1992-01-01
Exploration for oil and gas should fulfill the most adventurous in their quest for excitement and surprise. This paper tries to cover that tall order. The authors will touch on the magnitude of the uncertainty (which is far greater than in most other businesses), the effects of not knowing target sizes very well, how to build uncertainty into analyses naturally, how to tie reserves and chance estimates to economics, and how to look at the portfolio effect of an exploration program. With no apologies, the authors will be using a different language for some readers - the language of uncertainty, which means probability and statistics. These tools allow one to combine largely subjective exploration information with the more analytical data from the engineering and economic side
Uncertainty in artificial intelligence
Levitt, TS; Lemmer, JF; Shachter, RD
1990-01-01
Clearly illustrated in this volume is the current relationship between Uncertainty and AI. It has been said that research in AI revolves around five basic questions asked relative to some particular domain: What knowledge is required? How can this knowledge be acquired? How can it be represented in a system? How should this knowledge be manipulated in order to provide intelligent behavior? How can the behavior be explained? In this volume, all of these questions are addressed. From the perspective of the relationship of uncertainty to the basic questions of AI, the book divides naturally i
Sensitivity and uncertainty analysis
Cacuci, Dan G; Navon, Ionel Michael
2005-01-01
As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c
The Value Proposition for Fractionated Space Architectures
2006-09-01
fractionation “mass penalty” assumptions, the expected launch costs are nearly a factor of two lower for the fractionated system than for the monolith... humidity variations which may affect fire propagation speed. The Capital Asset Pricing Model (CAPM)... spacecraft, can be very significant. In any event, however, the assumption that spacecraft cost scales roughly linearly with its mass is an artifact of
Uncertainty Analyses and Strategy
Kevin Coppersmith
2001-01-01
The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called ''conservative'' assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the ''reasonable assurance'' approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository
What next in fractionated radiotherapy
Fowler, J.F.
1984-01-01
Trends in models for predicting the total dose required to produce tolerable normal-tissue injury can be seen in the progression from the ''cube root law'', through Strandqvist's slope of 0.22, to NSD, TDF and CRE, which have separate time and fraction-number exponents, to even better approximations now available. The dose-response formulae that can be used to define the effect of fraction size (and number) include (1) the linear-quadratic (LQ) model, (2) the two-component (TC) multi-target model and (3) repair-misrepair models. The LQ model offers considerable convenience, requires only two parameters to be determined, and emphasizes the difference between late and early normal-tissue dependence on dose per fraction, first shown by exponents greater than the NSD slope of 0.24. Exponents of overall time, e.g. T^0.11, yield the wrong shape of time curve, suggesting that most proliferation occurs early, although it really occurs after a delay depending on the turnover time of the tissue. Improved clinical results are being sought by hyperfractionation, accelerated fractionation, or continuous low-dose-rate irradiation as in interstitial implants. (U.K.)
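The late-versus-early distinction the LQ model captures can be made concrete with the standard biologically-effective-dose formula BED = nd(1 + d/(α/β)); this is a textbook illustration, not a calculation from the paper. At the same 60 Gy total dose, larger fractions inflate the biological dose to late-reacting tissue (low α/β) far more than to tumour (high α/β):

```python
def bed(n, d, ab):
    """Biologically effective dose from the linear-quadratic model:
    n fractions of d Gy, tissue alpha/beta ratio ab (Gy)."""
    return n * d * (1.0 + d / ab)

conventional = (30, 2.0)   # 30 fractions of 2 Gy (60 Gy total)
hypofract = (20, 3.0)      # 20 fractions of 3 Gy (60 Gy total)

for ab, tissue in [(10.0, "tumour, alpha/beta = 10"),
                   (3.0, "late-reacting tissue, alpha/beta = 3")]:
    b1 = bed(*conventional, ab)
    b2 = bed(*hypofract, ab)
    print(f"{tissue}: 30x2 Gy -> BED {b1:.0f} Gy, 20x3 Gy -> BED {b2:.0f} Gy")
```

Moving from 2 Gy to 3 Gy fractions raises the tumour BED from 72 to 78 Gy but the late-tissue BED from 100 to 120 Gy, which is why dose per fraction, not just total dose, dominates late normal-tissue tolerance.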
Asphalt chemical fractionation
Obando P, Klever N.
1998-01-01
Asphalt fractionation was carried out in the Esmeraldas Oil Refinery using n-pentane, SiO2 and different mixtures of benzene-methane. The fractions obtained were analyzed by Fourier Transform Infrared Spectrophotometry (FTIR).
Uncertainties in repository modeling
Wilson, J.R.
1996-12-31
The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend the regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling.
Uncertainties in repository modeling
Wilson, J.R.
1996-01-01
The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend the regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling.
Haefele, W.; Renn, O.; Erdmann, G.
1990-01-01
The notion of 'risk' is discussed in its social and technological contexts, leading to an investigation of the terms factuality, hypotheticality, uncertainty, and vagueness, and to the problems of acceptance and acceptability, especially in the context of political decision making. (DG)
Understanding Climate Uncertainty with an Ocean Focus
Tokmakian, R. T.
2009-12-01
Uncertainty in climate simulations arises from various aspects of the end-to-end process of modeling the Earth's climate. First, there is uncertainty from the structure of the climate model components (e.g. ocean/ice/atmosphere). Even the most complex models are deficient, not only in the complexity of the processes they represent, but in which processes are included in a particular model. Next, uncertainties arise from the inherent error in the initial and boundary conditions of a simulation. Initial conditions describe the state of the weather or climate at the beginning of the simulation and typically come from observations. Finally, there is the uncertainty associated with the values of parameters in the model. These parameters may represent physical constants or effects, such as ocean mixing, or non-physical aspects of modeling and computation. The uncertainty in these input parameters propagates through the non-linear model to give uncertainty in the outputs. The models in 2020 will no doubt be better than today's models, but they will still be imperfect, and development of uncertainty analysis technology is a critical aspect of understanding model realism and prediction capability. Smith [2002] and Cox and Stephenson [2007] discuss the need for methods to quantify the uncertainties within complicated systems so that limitations or weaknesses of the climate model can be understood. In making climate predictions, we need to have available both the most reliable model or simulation and a method to quantify the reliability of a simulation. If quantitative uncertainty questions of the internal model dynamics are to be answered with complex simulations such as AOGCMs, then the only known path forward is based on model ensembles that characterize behavior with alternative parameter settings [e.g. Rougier, 2007]. The relevance and feasibility of using "Statistical Analysis of Computer Code Output" (SACCO) methods for examining uncertainty in
Smarandache Continued Fractions
Ibstedt, H.
2001-01-01
The theory of general continued fractions is developed to the extent required to calculate Smarandache continued fractions to a given number of decimal places. Proof is given that Smarandache general continued fractions built from positive integer Smarandache sequences having only a finite number of terms equal to 1 are convergent. A few numerical results are given.
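The general continued-fraction machinery the abstract relies on can be sketched in a few lines. This evaluates an ordinary simple continued fraction exactly with rational arithmetic; it does not construct the Smarandache sequences themselves, only the evaluation step.

```python
from fractions import Fraction

def continued_fraction_value(terms):
    """Evaluate the simple continued fraction a0 + 1/(a1 + 1/(a2 + ...)) exactly."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value  # fold from the innermost term outward
    return value

# The continued fraction [1; 1, 1, 1, ...] converges to the golden ratio,
# ~1.6180339887; twenty terms already agree to many decimal places.
approx = continued_fraction_value([1] * 20)
print(float(approx))
```

Using `Fraction` keeps every convergent exact, so the only rounding happens in the final conversion to `float`.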
Courtney, H; Kirkland, J; Viguerie, P
1997-01-01
At the heart of the traditional approach to strategy lies the assumption that by applying a set of powerful analytic tools, executives can predict the future of any business accurately enough to allow them to choose a clear strategic direction. But what happens when the environment is so uncertain that no amount of analysis will allow us to predict the future? What makes for a good strategy in highly uncertain business environments? The authors, consultants at McKinsey & Company, argue that uncertainty requires a new way of thinking about strategy. All too often, they say, executives take a binary view: either they underestimate uncertainty to come up with the forecasts required by their companies' planning or capital-budgeting processes, or they overestimate it, abandon all analysis, and go with their gut instinct. The authors outline a new approach that begins by making a crucial distinction among four discrete levels of uncertainty that any company might face. They then explain how a set of generic strategies--shaping the market, adapting to it, or reserving the right to play at a later time--can be used in each of the four levels. And they illustrate how these strategies can be implemented through a combination of three basic types of actions: big bets, options, and no-regrets moves. The framework can help managers determine which analytic tools can inform decision making under uncertainty--and which cannot. At a broader level, it offers executives a discipline for thinking rigorously and systematically about uncertainty and its implications for strategy.
Generalized Fractional Derivative Anisotropic Viscoelastic Characterization
Harry H. Hilton
2012-01-01
Isotropic linear and nonlinear fractional derivative constitutive relations are formulated and examined in terms of many parameter generalized Kelvin models and are analytically extended to cover general anisotropic homogeneous or non-homogeneous as well as functionally graded viscoelastic material behavior. Equivalent integral constitutive relations, which are computationally more powerful, are derived from fractional differential ones and the associated anisotropic temperature-moisture-degree-of-cure shift functions and reduced times are established. Approximate Fourier transform inversions for fractional derivative relations are formulated and their accuracy is evaluated. The efficacy of integer and fractional derivative constitutive relations is compared and the preferential use of either characterization in analyzing isotropic and anisotropic real materials must be examined on a case-by-case basis. Approximate protocols for curve fitting analytical fractional derivative results to experimental data are formulated and evaluated.
Ukraintsev, V.F.; Kolesov, V.V.
2006-01-01
Usually, perturbation theory and sensitivity analysis techniques are used to evaluate uncertainties in reactor functionals; of course, the linearization approach of perturbation theory is used. This approach has several disadvantages, and that is why a new method, based on the application of a special interval calculation technique, has been created. Basically, the problem of the dependence of fuel cycle characteristic uncertainties on uncertainties in source-group neutron cross-sections and decay parameters can be solved (to some extent) by sensitivity analysis as well. However, such a procedure is rather labor-intensive and does not give guaranteed estimations for the received parameters since, strictly speaking, it works only for small deviations, because it is initially based on linearization of the mathematical problems. The technique of fuel cycle characteristic uncertainty estimation is based on so-called interval analysis (or interval calculations). The basic advantage of this technique is the opportunity to derive correct estimations. The technique consists in introducing a new special type of data, Interval data, into codes and defining all arithmetic operations for them. A technique for solving a system of linear equations (isotope kinetics) with interval arithmetic for the fuel burn-up problem has been realized. Thus there is an opportunity to compute the impact of neutron flux, fission and capture cross-section uncertainties on nuclide concentration uncertainties and on fuel cycle characteristics (such as K_eff, breeding ratio, decay heat power, etc.). By this time the code for interval calculation of burn-up computing has been developed and verified.
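The "special type of data with all arithmetic operations defined" that the abstract describes is standard interval arithmetic. A minimal sketch (the cross-section and flux ranges are invented for illustration):

```python
class Interval:
    """Minimal interval-arithmetic type: guaranteed enclosures under +, -, *."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        # Subtraction widens: [a,b] - [c,d] = [a-d, b-c]
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# A cross-section known to +/-5% and a flux known to +/-10% propagate to a
# guaranteed bound on their product, with no linearization step:
sigma = Interval(0.95, 1.05)
flux = Interval(9.0, 11.0)
print(sigma * flux)
```

Unlike the linearized sensitivity approach, the resulting bounds are rigorous for arbitrarily large input deviations, which is the advantage the abstract claims.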
Quantifying and Reducing Curve-Fitting Uncertainty in Isc
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-06-14
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
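The localized straight-line fit near Isc that the abstract treats as a statistical linear regression can be sketched as follows. The I-V data here are synthetic (the linear model, noise level, and voltage window are assumptions for illustration, not data from the paper):

```python
import numpy as np

# Synthetic I-V points near short circuit.
rng = np.random.default_rng(0)
v = np.linspace(0.0, 0.1, 20)                          # voltage (V)
i_meas = 5.0 - 0.8 * v + rng.normal(0, 0.002, v.size)  # current (A) + noise

# Straight-line regression; cov=True returns the parameter covariance matrix,
# from which the standard uncertainty of the intercept (Isc) follows.
coef, cov = np.polyfit(v, i_meas, 1, cov=True)
isc = np.polyval(coef, 0.0)     # extrapolate the fit to V = 0
u_isc = np.sqrt(cov[1, 1])      # standard uncertainty of the intercept
print(isc, u_isc)
```

Note the caveat the abstract makes: this fit uncertainty shrinks as more points are added and says nothing about model discrepancy, which is why the authors turn to evidence-based window selection.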
Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-09-28
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
Shamim, Atif
2011-03-01
For the first time, a generalized Smith chart is introduced here to represent fractional order circuit elements. It is shown that the standard Smith chart is a special case of the generalized fractional order Smith chart. With illustrations drawn for both the conventional integer based lumped elements and the fractional elements, a graphical technique supported by the analytical method is presented to plot impedances on the fractional Smith chart. The concept is then applied towards impedance matching networks, where the fractional approach proves to be much more versatile and results in a single element matching network for a complex load as compared to the two elements in the conventional approach. © 2010 IEEE.
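The fractional circuit elements plotted on the generalized Smith chart have impedances of the form Z = 1/((jω)^α C); the code below evaluates one such element and is a sketch under assumed component values, not anything drawn from the paper itself.

```python
import cmath
import math

def fractional_capacitor_impedance(omega, c, alpha):
    """Impedance of a fractional-order capacitive element: Z = 1/((j*omega)**alpha * C).
    alpha = 1 recovers the ordinary capacitor 1/(j*omega*C)."""
    return 1.0 / ((1j * omega) ** alpha * c)

omega = 2 * math.pi * 1e6          # 1 MHz (illustrative)
z_int = fractional_capacitor_impedance(omega, 1e-9, 1.0)
z_frac = fractional_capacitor_impedance(omega, 1e-9, 0.8)
# An integer-order capacitor sits at a phase of -90 degrees; the fractional
# element sits at -alpha * 90 degrees, here -72 degrees.
print(math.degrees(cmath.phase(z_int)), math.degrees(cmath.phase(z_frac)))
```

That extra degree of freedom in the phase angle is what lets a single fractional element match a complex load where two integer-order elements would be needed.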
WE-B-19A-01: SRT II: Uncertainties in SRT
Dieterich, S; Schlesinger, D; Geneser, S
2014-01-01
SRS delivery has undergone major technical changes in the last decade, transitioning from predominantly frame-based treatment delivery to image-guided, frameless SRS. It is important for medical physicists working in SRS to understand the magnitude and sources of uncertainty involved in delivering SRS treatments for a multitude of technologies (Gamma Knife, CyberKnife, linac-based SRS and protons). Sources of SRS planning and delivery uncertainty include dose calculation, dose fusion, and intra- and inter-fraction motion. Dose calculations for small fields are particularly difficult because of the lack of electronic equilibrium and the greater effect of inhomogeneities within and near the PTV. Going frameless introduces greater setup uncertainties and allows for potentially increased intra- and inter-fraction motion. The increased use of multiple imaging modalities to determine the tumor volume necessitates (deformable) image and contour fusion, and the resulting uncertainties introduced in the image registration process further contribute to overall treatment planning uncertainties. Each of these uncertainties must be quantified and their impact on treatment delivery accuracy understood. If necessary, the uncertainties may then be accounted for during treatment planning, either through techniques to make the uncertainty explicit or by the appropriate addition of PTV margins. Further complicating matters, the statistics of 1-5 fraction SRS treatments differ from traditional margin recipes relying on Poisson statistics. In this session, we will discuss uncertainties introduced during each step of the SRS treatment planning and delivery process and present margin recipes to appropriately account for such uncertainties. Learning Objectives: To understand the major contributors to the total delivery uncertainty in SRS for Gamma Knife, CyberKnife, and linac-based SRS. Learn the various uncertainties introduced by image fusion, deformable image registration, and contouring
Dey, Aloke
2009-01-01
A one-stop reference to fractional factorials and related orthogonal arrays.Presenting one of the most dynamic areas of statistical research, this book offers a systematic, rigorous, and up-to-date treatment of fractional factorial designs and related combinatorial mathematics. Leading statisticians Aloke Dey and Rahul Mukerjee consolidate vast amounts of material from the professional literature--expertly weaving fractional replication, orthogonal arrays, and optimality aspects. They develop the basic theory of fractional factorials using the calculus of factorial arrangements, thereby providing a unified approach to the study of fractional factorial plans. An indispensable guide for statisticians in research and industry as well as for graduate students, Fractional Factorial Plans features: * Construction procedures of symmetric and asymmetric orthogonal arrays. * Many up-to-date research results on nonexistence. * A chapter on optimal fractional factorials not based on orthogonal arrays. * Trend-free plans...
Fractional Dynamics and Control
Machado, José; Luo, Albert
2012-01-01
Fractional Dynamics and Control provides a comprehensive overview of recent advances in the areas of nonlinear dynamics, vibration and control with analytical, numerical, and experimental results. This book provides an overview of recent discoveries in fractional control, delves into fractional variational principles and differential equations, and applies advanced techniques in fractional calculus to solving complicated mathematical and physical problems.Finally, this book also discusses the role that fractional order modeling can play in complex systems for engineering and science. Discusses how fractional dynamics and control can be used to solve nonlinear science and complexity issues Shows how fractional differential equations and models can be used to solve turbulence and wave equations in mechanics and gravity theories and Schrodinger’s equation Presents factional relaxation modeling of dielectric materials and wave equations for dielectrics Develops new methods for control and synchronization of...
Anderson, Ernani; Travassos, Paulo; Ferreira, Max da Silva; Carvalho, Samira Marques de; Silva, Michele Maria da; Peixoto, Jose Guilherme Pereira; Salmon Junior, Helio Augusto
2015-01-01
This study aims to estimate the combined standard uncertainty for a parallel-plate detector used for dosimetry of electron beams in linear accelerators for radiotherapy, which has been calibrated by the cross-calibration method. By keeping the combined standard uncertainty close to the uncertainty stated in the calibration certificate of the reference chamber, it becomes possible to establish the calibration factor of the detector. The combined standard uncertainty obtained in this study was 2.5%. (author)
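A combined standard uncertainty like the 2.5% reported here is obtained by combining the individual uncertainty components in quadrature. The budget entries below are invented for illustration; they are not the components from this study.

```python
import math

# Illustrative uncertainty budget, in percent (values are assumptions):
components = {
    "reference chamber calibration": 1.5,
    "repeatability": 0.8,
    "positioning": 1.0,
    "temperature/pressure correction": 1.2,
}

# Uncorrelated components combine as the root sum of squares:
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
print(round(u_combined, 2))  # -> 2.31
```

Because the components add in quadrature, the largest single entry (here the reference chamber's 1.5%) dominates the combined result, which is why the combined value tracks the reference certificate so closely.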
Samek Lucyna
2016-03-01
Samples of PM10 and PM2.5 fractions were collected between the years 2010 and 2013 in the urban area of Krakow, Poland. Numerous types of air pollution sources are present at the site; these include steel and cement industries, traffic, municipal emission sources and biomass burning. Energy dispersive X-ray fluorescence was used to determine the concentrations of the following elements: Cl, K, Ca, Ti, Mn, Fe, Ni, Cu, Zn, Br, Rb, Sr, As and Pb within the collected samples. Defining the elements as indicators, airborne particulate matter (APM) source profiles were prepared by applying principal component analysis (PCA), factor analysis (FA) and multiple linear regression (MLR). Four different factors identifying possible air pollution sources for both PM10 and PM2.5 fractions were attributed to municipal emissions, biomass burning, steel industry, traffic, cement and metal industry, Zn and Pb industry and secondary aerosols. The uncertainty associated with each loading was determined by a statistical simulation method that took into account the individual elemental concentrations and their corresponding uncertainties. It will be possible to identify two or more sources of air particulate matter pollution for a single factor in cases where it is extremely difficult to separate the sources.
Wahl, Niklas; Hennig, Philipp; Wieser, Hans-Peter; Bangert, Mark
2018-04-01
We show that it is possible to explicitly incorporate fractionation effects into closed-form probabilistic treatment plan analysis and optimization for intensity-modulated proton therapy with analytical probabilistic modeling (APM). We study the impact of different fractionation schemes on the dosimetric uncertainty induced by random and systematic sources of range and setup uncertainty for treatment plans that were optimized with and without consideration of the number of treatment fractions. The APM framework is capable of handling arbitrarily correlated uncertainty models including systematic and random errors in the context of fractionation. On this basis, we construct an analytical dose variance computation pipeline that explicitly considers the number of treatment fractions for uncertainty quantitation and minimization during treatment planning. We evaluate the variance computation model in comparison to random sampling of 100 treatments for conventional and probabilistic treatment plans under different fractionation schemes (1, 5, 30 fractions) for an intracranial, a paraspinal and a prostate case. The impact of neglecting the fractionation scheme during treatment planning is investigated by applying treatment plans that were generated with probabilistic optimization for 1 fraction in a higher number of fractions and comparing them to the probabilistic plans optimized under explicit consideration of the number of fractions. APM enables the construction of an analytical variance computation model for dose uncertainty considering fractionation at negligible computational overhead. It is computationally feasible (a) to simultaneously perform a robustness analysis for all possible fraction numbers and (b) to perform a probabilistic treatment plan optimization for a specific fraction number. The incorporation of fractionation assumptions for robustness analysis exposes a dose to uncertainty trade-off, i.e., the dose in the organs at risk is increased for a
Linear Algebra and Smarandache Linear Algebra
Vasantha, Kandasamy
2003-01-01
The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, it also aims to bridge the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and vector spaces over finite p...
Uncertainty in adaptive capacity
Neil Adger, W.; Vincent, K.
2005-01-01
The capacity to adapt is a critical element of the process of adaptation: it is the vector of resources that represent the asset base from which adaptation actions can be made. Adaptive capacity can in theory be identified and measured at various scales, from the individual to the nation. The assessment of uncertainty within such measures comes from the contested knowledge domain and theories surrounding the nature of the determinants of adaptive capacity and the human action of adaptation. While generic adaptive capacity at the national level, for example, is often postulated as being dependent on health, governance and political rights, and literacy, and economic well-being, the determinants of these variables at national levels are not widely understood. We outline the nature of this uncertainty for the major elements of adaptive capacity and illustrate these issues with the example of a social vulnerability index for countries in Africa. (authors)
Laval, Katia; Laval, Guy
2013-01-01
Like meteorology, climatology is not an exact science: climate change forecasts necessarily include a share of uncertainty. It is precisely this uncertainty which is brandished and exploited by the opponents of the global warming theory to call into question the estimations of its future consequences. Is it legitimate to predict the future using past climate data (well documented up to 100,000 years BP) or the climates of other planets, taking into account the imprecision of the measurements and the intrinsic complexity of the Earth's machinery? How is it possible to model such a huge and interwoven system, for which any exact description has become impossible? Why do water and precipitation play such an important role in local and global forecasts, and how should they be treated? This book, written by two physicists, answers these delicate questions simply, in order to give anyone the possibility to build his own opinion about global warming and the need to act rapidly.
Martens, Hans.
1991-01-01
The subject of this thesis is the uncertainty principle (UP). The UP is one of the most characteristic points of difference between quantum and classical mechanics. The starting point of this thesis is the work of Niels Bohr, which is both discussed and analyzed. For the discussion of the different aspects of the UP, the formalism of Davies and Ludwig is used instead of the more commonly used formalism of von Neumann and Dirac. (author). 214 refs.; 23 figs
Uncertainty in artificial intelligence
Shachter, RD; Henrion, M; Lemmer, JF
1990-01-01
This volume, like its predecessors, reflects the cutting edge of research on the automation of reasoning under uncertainty. A more pragmatic emphasis is evident, for although some papers address fundamental issues, the majority address practical issues. Topics include the relations between alternative formalisms (including possibilistic reasoning), Dempster-Shafer belief functions, non-monotonic reasoning, Bayesian and decision theoretic schemes, and new inference techniques for belief nets. New techniques are applied to important problems in medicine, vision, robotics, and natural language und
Decision Making Under Uncertainty
2010-11-01
A sound approach to rational decision making requires a decision maker to establish decision objectives, identify alternatives, and evaluate those...often violate the axioms of rationality when making decisions under uncertainty. The systematic description of such observations may lead to the...which leads to “anchoring” on the initial value. The fact that individuals have been shown to deviate from rationality when making decisions
Economic uncertainty principle?
Alexander Harin
2006-01-01
The economic principle of (hidden) uncertainty is presented. New probability formulas are offered. Examples of solutions of three types of fundamental problems are reviewed.
Citizen Candidates Under Uncertainty
Eguia, Jon X.
2005-01-01
In this paper we make two contributions to the growing literature on "citizen-candidate" models of representative democracy. First, we add uncertainty about the total vote count. We show that in a society with a large electorate, where the outcome of the election is uncertain and where winning candidates receive a large reward from holding office, there will be a two-candidate equilibrium and no equilibria with a single candidate. Second, we introduce a new concept of equilibrium, which we te...
Uncertainty Quantification of Multi-Phase Closures
Nadiga, Balasubramanya T. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Baglietto, Emilio [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)
2017-10-27
In the ensemble-averaged dispersed phase formulation used for CFD of multiphase flows in nuclear reactor thermohydraulics, closures of interphase transfer of mass, momentum, and energy constitute, by far, the biggest source of error and uncertainty. Reliable estimators of this source of error and uncertainty are currently non-existent. Here, we report on how modern Validation and Uncertainty Quantification (VUQ) techniques can be leveraged to not only quantify such errors and uncertainties, but also to uncover (unintended) interactions between closures of different phenomena. As such this approach serves as a valuable aide in the research and development of multiphase closures. The joint modeling of lift, drag, wall lubrication, and turbulent dispersion, forces that lead to transfer of momentum between the liquid and gas phases, is examined in the framework of validation of the adiabatic but turbulent experiments of Liu and Bankoff, 1993. An extensive calibration study is undertaken with a popular combination of closure relations and the popular k-ϵ turbulence model in a Bayesian framework. When a wide range of superficial liquid and gas velocities and void fractions is considered, it is found that this set of closures can be validated against the experimental data only by allowing large variations in the coefficients associated with the closures. We argue that such an extent of variation is a measure of uncertainty induced by the chosen set of closures. We also find that while mean fluid velocity and void fraction profiles are properly fit, fluctuating fluid velocity may or may not be properly fit. This aspect needs to be investigated further. The popular set of closures considered contains ad-hoc components and is undesirable from a predictive modeling point of view. Consequently, we next consider improvements that are being developed by the MIT group under CASL and which remove the ad-hoc elements. We use non-intrusive methodologies for sensitivity analysis and calibration (using
Calibration Under Uncertainty.
Swiler, Laura Painton; Trucano, Timothy Guy
2005-03-01
This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
Participation under Uncertainty
Boudourides, Moses A.
2003-01-01
This essay reviews a number of theoretical perspectives about uncertainty and participation in the present-day knowledge-based society. After discussing the on-going reconfigurations of science, technology and society, we examine how appropriate for policy studies are various theories of social complexity. Post-normal science is such an example of a complexity-motivated approach, which justifies civic participation as a policy response to an increasing uncertainty. But there are different categories and models of uncertainties implying a variety of configurations of policy processes. A particular role in all of them is played by expertise whose democratization is an often-claimed imperative nowadays. Moreover, we discuss how different participatory arrangements are shaped into instruments of policy-making and framing regulatory processes. As participation necessitates and triggers deliberation, we proceed to examine the role and the barriers of deliberativeness. Finally, we conclude by referring to some critical views about the ultimate assumptions of recent European policy frameworks and the conceptions of civic participation and politicization that they invoke
Uncertainty analysis techniques
Marivoet, J.; Saltelli, A.; Cadelli, N.
1987-01-01
The origin of the uncertainty affecting Performance Assessments, as well as its propagation to dose and risk results, is discussed. The analysis is focused essentially on the uncertainties introduced by the input parameters, the values of which may range over some orders of magnitude and may be given as probability distribution functions. The paper briefly reviews the existing sampling techniques used for Monte Carlo simulations and the methods for characterizing the output curves, determining their convergence and confidence limits. Annual doses, expectation values of the doses and risks are computed for a particular case of a possible repository in clay, in order to illustrate the significance of such output characteristics as the mean, the logarithmic mean and the median as well as their ratios. The report concludes that provisionally, due to its better robustness, an estimation such as the 90th percentile may be substituted for the arithmetic mean for comparison of the estimated doses with acceptance criteria. In any case, the results obtained through Uncertainty Analyses must be interpreted with caution as long as input data distribution functions are not derived from experiments reasonably reproducing the situation in a well characterized repository and site
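The Monte Carlo sampling and output characterization the abstract describes can be sketched with a toy dose model. The model and input ranges below are invented for illustration; they are not the clay-repository case from the paper.

```python
import numpy as np

# Toy performance-assessment model: inputs spanning orders of magnitude
# are sampled log-uniformly, mimicking wide parameter distributions.
rng = np.random.default_rng(42)
k = 10 ** rng.uniform(-3, 0, 100_000)   # e.g. a transfer coefficient
q = 10 ** rng.uniform(-1, 1, 100_000)   # e.g. a release rate
dose = k * q                             # toy "annual dose" output

mean, median, p90 = dose.mean(), np.median(dose), np.percentile(dose, 90)
# For such right-skewed outputs the mean sits well above the median, which
# is why a robust summary like the 90th percentile can be preferable to the
# arithmetic mean when comparing against acceptance criteria.
print(mean, median, p90)
```

Running this shows the pattern the report relies on: a handful of large samples inflates the mean, while the median and 90th percentile remain stable summaries of the output distribution.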
Deterministic uncertainty analysis
Worley, B.A.
1987-12-01
This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
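The contrast between derivative-based propagation and statistical sampling can be illustrated with a deliberately simplified stand-in for the borehole problem (the real borehole model has more inputs; the two-input linear model and distributions below are assumptions for illustration).

```python
import numpy as np

def flow(k, h):
    """Toy stand-in for the borehole flow model: flow proportional to k*h."""
    return 2.5 * k * h

# Assumed input distributions (mean, standard deviation).
k_mean, k_sd = 1.0e-3, 1.0e-4   # permeability-like parameter
h_mean, h_sd = 50.0, 2.0        # head difference

# Deterministic (derivative-based) propagation: one nominal evaluation plus
# the two partial derivatives, combined to first order.
dfdk = 2.5 * h_mean
dfdh = 2.5 * k_mean
u_dua = np.hypot(dfdk * k_sd, dfdh * h_sd)

# Statistical propagation: many sampled model runs.
rng = np.random.default_rng(1)
samples = flow(rng.normal(k_mean, k_sd, 200_000),
               rng.normal(h_mean, h_sd, 200_000))
print(u_dua, samples.std())
```

For this nearly linear model the two estimates agree closely, but the derivative route needs only a handful of model evaluations, which is the paper's central point.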
Dividing Fractions: A Pedagogical Technique
Lewis, Robert
2016-01-01
When dividing one fraction by a second fraction, invert, that is, flip the second fraction, then multiply it by the first fraction. To multiply fractions, simply multiply the numerators together and the denominators together to get the resultant fraction. So by inverting, the division of fractions is turned into an easy multiplication of…
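As a sanity check, the invert-and-multiply rule in executable form, a minimal sketch using Python's standard fractions module:

```python
from fractions import Fraction

def divide_fractions(a: Fraction, b: Fraction) -> Fraction:
    # Invert (flip) the divisor, then multiply numerators together
    # and denominators together.
    inverted = Fraction(b.denominator, b.numerator)
    return Fraction(a.numerator * inverted.numerator,
                    a.denominator * inverted.denominator)

# (3/4) / (2/5) = (3/4) * (5/2) = 15/8
result = divide_fractions(Fraction(3, 4), Fraction(2, 5))
print(result)  # 15/8
```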
Robust Performance of Systems with Structured Uncertainties in State Space
Zhou, K.; Khargonekar, P.P.; Stoustrup, Jakob; Niemann, H.H.
1995-01-01
This paper considers robust performance analysis and state feedback design for systems with time-varying parameter uncertainties. The notion of a strongly robust H∞ performance criterion is introduced, and its applications in robust performance analysis and synthesis for nominally linear systems with time-varying uncertainties are discussed and compared with the constant scaled small gain criterion. It is shown that most robust performance analysis and synthesis problems under this strongly rob...
Large-scale linear programs in planning and prediction.
2017-06-01
Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...
Evaluating measurement uncertainty in fluid phase equilibrium calculations
van der Veen, Adriaan M. H.
2018-04-01
The evaluation of measurement uncertainty in accordance with the 'Guide to the expression of uncertainty in measurement' (GUM) has not yet become widespread in physical chemistry. With only the law of propagation of uncertainty from the GUM, many of these uncertainty evaluations would be cumbersome, as models are often non-linear and require iterative calculations. The methods from GUM Supplements 1 and 2 enable the propagation of uncertainties under most circumstances. Experimental data in physical chemistry are used, for example, to derive reference property data and support trade, all applications where measurement uncertainty plays an important role. This paper aims to outline how the methods for evaluating and propagating uncertainty can be applied to some specific cases with a wide impact: deriving reference data from vapour pressure data, a flash calculation, and the use of an equation of state to predict the properties of both phases in a vapour-liquid equilibrium. The three uncertainty evaluations demonstrate that the methods of GUM and its supplements are a versatile toolbox that enables us to evaluate the measurement uncertainty of physical chemical measurements, including the derivation of reference data, such as the equilibrium thermodynamic properties of fluids.
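A minimal sketch, in the spirit of GUM Supplement 1, of propagating a measured input through a nonlinear model by Monte Carlo sampling instead of linearization. The Antoine-type vapour-pressure model, its coefficients, and the uncertainty values below are hypothetical illustrations, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical Antoine-type model p(T) = exp(A - B / T); the coefficients
# and the temperature uncertainty are illustrative, not reference data.
A, B = 16.0, 3000.0            # assumed model coefficients
T_mean, T_std = 350.0, 0.5     # measured temperature (K), standard uncertainty

# GUM Supplement 1 idea: sample the input distribution and push the
# samples through the (nonlinear) model, rather than applying the
# first-order law of propagation of uncertainty.
T_samples = rng.normal(T_mean, T_std, size=200_000)
p_samples = np.exp(A - B / T_samples)

p_est = p_samples.mean()
u_p = p_samples.std(ddof=1)                      # standard uncertainty of p
lo, hi = np.percentile(p_samples, [2.5, 97.5])   # 95% coverage interval
print(f"p = {p_est:.1f} +/- {u_p:.1f}, 95% interval [{lo:.1f}, {hi:.1f}]")
```

Because the model is nonlinear in T, the sampled distribution of p is slightly skewed; the Monte Carlo coverage interval captures this where a purely linear propagation would not.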
Physics Case for the International Linear Collider
Fujii, Keisuke; /KEK, Tsukuba; Grojean, Christophe; /DESY /ICREA, Barcelona; Peskin, Michael E.; Barklow, Tim; /SLAC; Gao, Yuanning; /Tsinghua U., Beijing, CHEP; Kanemura, Shinya; /Toyama U.; Kim, Hyungdo; /Seoul Natl U.; List, Jenny; /DESY; Nojiri, Mihoko; /KEK, Tsukuba; Perelstein, Maxim; /Cornell U., LEPP; Poeschl, Roman; /LAL, Orsay; Reuter, Juergen; /DESY; Simon, Frank; /Munich, Max Planck Inst.; Tanabe, Tomohiko; /Tokyo U., ICEPP; Yu, Jaehoon; /Texas U., Arlington; Wells, James D.; /Michigan U., MCTP; Murayama, Hitoshi; /UC, Berkeley /LBNL /Tokyo U., IPMU; Yamamoto, Hitoshi; /Tohoku U.
2015-06-23
We summarize the physics case for the International Linear Collider (ILC). We review the key motivations for the ILC presented in the literature, updating the projected measurement uncertainties for the ILC experiments in accord with the expected schedule of operation of the accelerator and the results of the most recent simulation studies.
The realization problem for positive and fractional systems
Kaczorek, Tadeusz
2014-01-01
This book addresses the realization problem of positive and fractional continuous-time and discrete-time linear systems. Roughly speaking, the essence of the realization problem can be stated as follows: find the matrices of the state space equations of a linear system from its given transfer matrix. This first book on the topic shows how many well-known classical approaches have been extended to the new classes of positive and fractional linear systems. The modified Gilbert method for multi-input multi-output linear systems, and methods for the determination of realizations in the controller canonical form and in the observer canonical form, are presented. The realization problem for linear systems described by differential operators, the realization problem in the Weierstrass canonical form, and the realization problem for descriptor linear systems with given Markov parameters are addressed. The book also presents a method for the determination of minimal realizations of descriptor linear systems and an extension for cone linear syste...
Methodologies of Uncertainty Propagation Calculation
Chojnacki, Eric
2002-01-01
After recalling the theoretical principle and the practical difficulties of the methodologies of uncertainty propagation calculation, the author discussed how to propagate input uncertainties. He noted two kinds of input uncertainty: variability (uncertainty due to heterogeneity) and lack of knowledge (uncertainty due to ignorance). It was therefore necessary to use two different propagation methods. He demonstrated this in a simple example which he generalised, treating the variability uncertainty with probability theory and the lack-of-knowledge uncertainty with fuzzy theory. He cautioned, however, against the systematic use of probability theory, which may lead to unjustifiable and illegitimately precise answers. Mr Chojnacki's conclusions were that the importance of distinguishing variability and lack of knowledge increases as the problem becomes more complex in terms of number of parameters or time steps, and that it is necessary to develop uncertainty propagation methodologies combining probability theory and fuzzy theory.
LOFT uncertainty-analysis methodology
Lassahn, G.D.
1983-01-01
The methodology used for uncertainty analyses of measurements in the Loss-of-Fluid Test (LOFT) nuclear-reactor-safety research program is described and compared with other methodologies established for performing uncertainty analyses
Probabilistic numerics and uncertainty in computations.
Hennig, Philipp; Osborne, Michael A; Girolami, Mark
2015-07-08
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
Account of the uncertainty factor in forecasting nuclear power development
Chernavskij, S.Ya.
1979-01-01
Minimization of total discounted costs under linear constraints is commonly used in forecasting nuclear energy growth. This approach is considered inadequate due to the uncertainty in the exogenous variables of the model. A method of forecasting that takes the presence of uncertainty into account is elaborated. An example is given that demonstrates the expediency of the method and its advantage over the conventional approximation method used for taking uncertainty into account. In the framework of the example, the optimal strategy for nuclear energy growth over a period of 500 years is determined.
Triangular and Trapezoidal Fuzzy State Estimation with Uncertainty on Measurements
Mohammad Sadeghi Sarcheshmah
2012-01-01
In this paper, a new method for uncertainty analysis in fuzzy state estimation is proposed. The uncertainty is expressed in the measurements. Uncertainties in measurements are modelled with different fuzzy membership functions (triangular and trapezoidal). To find the fuzzy distribution of any state variable, the problem is formulated as a constrained linear programming (LP) optimization. The viability of the proposed method is verified against results obtained from the weighted least squares (WLS) and the fuzzy state estimation (FSE) methods in a 6-bus system and in the IEEE 14- and 30-bus systems.
Fractional distillation of oil
Jones, L D
1931-10-31
A method of dividing oil into lubricating oil fractions without substantial cracking, by introducing the oil in a heated state into a fractionating column from which oil fractions having different boiling points are withdrawn at different levels, while reflux liquid is supplied to the top of the column. Additional heat is introduced into the column by contacting the oil therein with a heated fluid of higher molecular weight than water, less susceptible to thermal decomposition than the highest boiling oil fraction resulting from the distillation, or whose thermal decomposition products will not appear in the highest boiling distillate withdrawn from the column.
Dabiri, Arman; Butcher, Eric A.; Nazari, Morad
2017-02-01
Compliant impacts can be modeled using linear viscoelastic constitutive models. While impact models for realistic viscoelastic materials using integer order derivatives of force and displacement usually require a large number of parameters, compliant impact models obtained using fractional calculus can be advantageous, since such models use fewer parameters and successfully capture the hereditary property. In this paper, we introduce the fractional Chebyshev collocation (FCC) method as an approximation tool for numerical simulation of several linear fractional viscoelastic compliant impact models, in which the overall coefficient of restitution for the impact is studied as a function of the fractional model parameters for the first time. Other relevant impact characteristics such as hysteresis curves, impact force gradient, and penetration and separation depths are also studied.
Do Orthopaedic Surgeons Acknowledge Uncertainty?
Teunis, Teun; Janssen, Stein; Guitton, Thierry G.; Ring, David; Parisien, Robert
2016-01-01
Much of the decision-making in orthopaedics rests on uncertain evidence. Uncertainty is therefore part of our normal daily practice, and yet physician uncertainty regarding treatment could diminish patients' health. It is not known if physician uncertainty is a function of the evidence alone or if
Fractional-moment CAPM with loss aversion
Wu Yahao [Dep. of Math., South China University of Technology, Guangzhou 510640 (China); Wang Xiaotian [Dep. of Math., South China University of Technology, Guangzhou 510640 (China)], E-mail: swa001@126.com; Wu Min [Dep. of Math., South China University of Technology, Guangzhou 510640 (China)
2009-11-15
In this paper, we present a new fractional-order value function which generalizes the value function of Kahneman and Tversky [Kahneman D, Tversky A. Prospect theory: an analysis of decision under risk. Econometrica 1979;47:263-91; Tversky A, Kahneman D. Advances in prospect theory: cumulative representation of uncertainty. J. Risk Uncertainty 1992;4:297-323], and give the corresponding fractional-moment versions of CAPM in the cases of both the prospect theory and the expected utility model. The models that we obtain can be used to price assets when asset return distributions are likely to be asymmetric stable Levy distributions, as during the panics and stampedes in worldwide security markets in 2008. In particular, from the prospect theory we get the following fractional-moment CAPM with loss aversion: E(R_i - R_0) = (E[(W - W_0)_+^{-0.12}(R_i - R_0)] + 2.25 E[(W_0 - W)_+^{-0.12}(R_i - R_0)]) / (E[(W - W_0)_+^{-0.12}(W - R_0)] + 2.25 E[(W_0 - W)_+^{-0.12}(W - R_0)]) · E(W - R_0), where W_0 is a fixed reference point distinguishing between losses and gains.
Robust Performance of Systems with Structured Uncertainties in State Space
Zhou, Kemin; Khargonekar, Pramod P.; Stoustrup, Jakob
1995-01-01
This paper considers robust performance analysis and state feedback design for systems with time-varying parameter uncertainties. The notion of a strongly robust H∞ performance criterion is introduced, and its applications in robust performance analysis and synthesis for nominally linear systems with time-varying uncertainties are discussed and compared with the constant scaled small gain criterion. It is shown that most robust performance analysis and synthesis problems under this strongly robust H∞ performance criterion can be transformed into linear matrix inequality problems, and can be solved...
Void fraction prediction in saturated flow boiling
Francisco J Collado
2005-01-01
An essential element in thermal-hydraulics is the accurate prediction of the vapor void fraction, the fraction of the flow cross-sectional area occupied by steam. Recently, the author has suggested calculating the void fraction working exclusively with thermodynamic properties. It is well known that the usual 'flow' quality, merely a mass flow rate ratio, is not a thermodynamic property at all, because its expression in terms of thermodynamic properties includes the slip ratio, which is a parameter of the process, not a function of state. On the other hand, the slip ratio does not appear in the classic and well known expression of the void fraction as a function of the true mass fraction of vapor (also called 'static' quality) and the vapor and liquid densities. This suggests a direct procedure for calculating the void fraction, provided an accurate value of the true mass fraction of vapor is available, clearly from the heat balance. However, the classic heat balance is usually stated in terms of the 'flow' quality, which is contradictory because this parameter, as noted above, is not a thermodynamic property. The actual relationship between the thermodynamic properties and the applied heat should therefore be checked against real data. For saturated flow boiling from the inlet of the heated tube, and neglecting the kinetic and potential terms, the uniform applied heat per unit mass of inlet water and per unit length (in short, the specific linear heat) should be closely related to a (constant) slope of the mixture enthalpy. In this work, we have checked the relation between the specific linear heat and the thermodynamic enthalpy of the liquid-vapor mixture using the actual mass fraction. This true mass fraction is calculated using the accurate measurements of the outlet void fraction taken during the Cambridge project by Knights and Thom in the sixties for vertical and horizontal
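The classic slip-free expression referred to in the abstract, the void fraction computed from the true ('static') mass fraction and the phase densities, can be sketched as follows. The density values in the usage line are only illustrative round numbers for saturated water and steam near atmospheric pressure:

```python
def void_fraction(x_true: float, rho_g: float, rho_f: float) -> float:
    """Void fraction from the true ('static') mass fraction of vapour.

    alpha = (x/rho_g) / (x/rho_g + (1 - x)/rho_f)

    Note that the slip ratio does not appear, as the abstract points out.
    """
    v_gas = x_true / rho_g           # vapour volume per unit mixture mass
    v_liq = (1.0 - x_true) / rho_f   # liquid volume per unit mixture mass
    return v_gas / (v_gas + v_liq)

# Illustrative saturation densities near atmospheric pressure (kg/m^3):
# even a small true mass fraction of vapour gives a large void fraction,
# because the vapour is so much less dense than the liquid.
print(void_fraction(0.01, rho_g=0.6, rho_f=958.0))
```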
Greasley, David; Madsen, Jakob B.
2006-01-01
A severe collapse of fixed capital formation distinguished the onset of the Great Depression from other investment downturns between the world wars. Using a model estimated for the years 1890-2000, we show that the expected profitability of capital measured by Tobin's q, and the uncertainty surrounding expected profits indicated by share price volatility, were the chief influences on investment levels, and that heightened share price volatility played the dominant role in the crucial investment collapse in 1930. Investment did not simply follow the downward course of income at the onset...
Optimization under Uncertainty
Lopez, Rafael H.
2016-01-06
The goal of this poster is to present the main approaches to optimization of engineering systems in the presence of uncertainties. We begin by giving an insight into robust optimization. Next, we detail how to deal with probabilistic constraints in optimization, the so-called reliability based design. Subsequently, we present the risk optimization approach, which includes the expected costs of failure in the objective function. After the basic description of each approach, the projects developed by CORE are presented. Finally, the main current topic of research of CORE is described.
Optimizing production under uncertainty
Rasmussen, Svend
This Working Paper derives criteria for optimal production under uncertainty based on the state-contingent approach (Chambers and Quiggin, 2000), and discusses potential problems involved in applying the state-contingent approach in a normative context. The analytical approach uses the concept of state-contingent production functions and a definition of inputs including both sorts of input, activity and allocation technology. It also analyses production decisions where production is combined with trading in state-contingent claims such as insurance contracts. The final part discusses...
Commonplaces and social uncertainty
Lassen, Inger
2008-01-01
This article explores the concept of uncertainty in four focus group discussions about genetically modified food. In the discussions, members of the general public interact with food biotechnology scientists while negotiating their attitudes towards genetic engineering. Their discussions offer an example of risk discourse in which the use of commonplaces seems to be a central feature (Myers 2004: 81). My analyses support earlier findings that commonplaces serve important interactional purposes (Barton 1999) and that they are used for mitigating disagreement, for closing topics and for facilitating...
Kadane, Joseph B
2011-01-01
An intuitive and mathematical introduction to subjective probability and Bayesian statistics. An accessible, comprehensive guide to the theory of Bayesian statistics, Principles of Uncertainty presents the subjective Bayesian approach, which has played a pivotal role in game theory, economics, and the recent boom in Markov Chain Monte Carlo methods. Both rigorous and friendly, the book contains: introductory chapters examining each new concept or assumption; just-in-time mathematics, the presentation of ideas just before they are applied; summary and exercises at the end of each chapter; discus...
Mathematical Analysis of Uncertainty
Angel GARRIDO
2016-01-01
Classical Logic showed early its insufficiencies for solving AI problems. The introduction of Fuzzy Logic was aimed at this problem. There has been research in the conventional Rough direction alone or in the Fuzzy direction alone, and, more recently, attempts to combine both into Fuzzy Rough Sets or Rough Fuzzy Sets. We analyse some new and powerful tools in the study of uncertainty, such as Probabilistic Graphical Models, Chain Graphs, Bayesian Networks, and Markov Networks, integrating our knowledge of graphs and probability.
Dealing with Uncertainties in Initial Orbit Determination
Armellin, Roberto; Di Lizia, Pierluigi; Zanetti, Renato
2015-01-01
A method to deal with uncertainties in initial orbit determination (IOD) is presented. This is based on the use of Taylor differential algebra (DA) to nonlinearly map the observation uncertainties from the observation space to the state space. When a minimum set of observations is available, DA is used to expand the solution of the IOD problem in Taylor series with respect to measurement errors. When more observations are available, high order inversion tools are exploited to obtain full state pseudo-observations at a common epoch. The mean and covariance of these pseudo-observations are nonlinearly computed by evaluating the expectation of high order Taylor polynomials. Finally, a linear scheme is employed to update the current knowledge of the orbit. Angles-only observations are considered and simplified Keplerian dynamics adopted to ease the explanation. Three test cases of orbit determination of artificial satellites in different orbital regimes are presented to discuss the features and performance of the proposed methodology.
The fractional Fourier transform and applications
Bailey, David H.; Swarztrauber, Paul N.
1991-01-01
This paper describes the 'fractional Fourier transform', which admits computation by an algorithm that has complexity proportional to that of the fast Fourier transform algorithm. Whereas the discrete Fourier transform (DFT) is based on integral roots of unity e^(-2πi/n), the fractional Fourier transform is based on fractional roots of unity e^(-2πiα), where α is arbitrary. The fractional Fourier transform and the corresponding fast algorithm are useful for such applications as computing DFTs of sequences with prime lengths, computing DFTs of sparse sequences, analyzing sequences with noninteger periodicities, performing high-resolution trigonometric interpolation, detecting lines in noisy images, and detecting signals with linearly drifting frequencies. In many cases, the resulting algorithms are faster by arbitrarily large factors than conventional techniques.
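A direct O(n^2) reference evaluation of the transform defined above makes the definition concrete (the paper's contribution is the fast algorithm, which is not reproduced here):

```python
import numpy as np

def frft_direct(g, alpha):
    """Direct O(n^2) evaluation of the fractional Fourier transform
    in the sense of Bailey and Swarztrauber:

        G_k = sum_j g_j * exp(-2*pi*i * j * k * alpha)

    Choosing alpha = 1/n recovers the ordinary DFT; other alpha values
    give the fractional roots of unity described in the abstract.
    """
    g = np.asarray(g, dtype=complex)
    n = g.size
    j = np.arange(n)
    W = np.exp(-2j * np.pi * alpha * np.outer(j, j))
    return W @ g

# With alpha = 1/n the result matches NumPy's FFT.
x = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(frft_direct(x, 1 / 4), np.fft.fft(x)))  # True
```

The fast version in the paper factors this sum Bluestein-style into convolutions computable with ordinary FFTs, giving the stated O(n log n) complexity for arbitrary α.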
Intervals between multiple fractions per day
Fowler, J.F.
1988-01-01
Assuming the linear quadratic model for dose-response curves enables the proportion of repairable damage to be calculated for any size of dose per fraction. It is given by the beta (dose squared) term, and represents a larger proportion of the total damage for larger doses per fraction, but also for late-reacting than for early-reacting tissues. For example, at 2 Gy per fraction, repairable damage could represent nearly half the total damage in late-reacting tissues but only one fifth in early-reacting tissues. Even if repair occurs at the same rate in both tissues, it will obviously take longer for 50% of the damage to fade to an undetectable level (3 or 5%) than for 20% to do so. This means that late reactions require longer intervals between fractions than early reactions when multiple-fractions-per-day radiotherapy is planned, even if the half-lives of repair are not different. (orig.)
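The abstract's percentages can be reproduced from the linear quadratic model: total damage per fraction is alpha*d + beta*d^2, the repairable part is the beta*d^2 term, so its proportion is d / (alpha/beta + d). Assuming the commonly quoted alpha/beta ratios of roughly 3 Gy for late-reacting and 10 Gy for early-reacting tissue (these specific ratios are an assumption, not stated in the abstract), a sketch:

```python
def repairable_fraction(d, alpha_beta):
    # Linear quadratic model: damage per fraction = alpha*d + beta*d^2.
    # Repairable proportion = beta*d^2 / (alpha*d + beta*d^2)
    #                       = d / (alpha/beta + d)
    return d / (alpha_beta + d)

# Assumed alpha/beta ratios (Gy): ~3 late-reacting, ~10 early-reacting tissue.
print(repairable_fraction(2.0, 3.0))   # 0.4   -> "nearly half" for late reactions
print(repairable_fraction(2.0, 10.0))  # ~0.17 -> "one fifth" for early reactions
```

This is why, even with identical repair half-lives, late-reacting tissues need longer inter-fraction intervals: a larger repairable proportion takes longer to decay to the same small residual level.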
Gao, Qiang; Zheng, Liang; Chen, Jilin; Wang, Li; Hou, Yuanlong
2014-01-01
Motion control of gun barrels is an ongoing topic for the development of gun control equipment (GCE) with excellent performance. In this paper, a novel disturbance observer (DOB) based fractional order PD (FOPD) control strategy is proposed for the GCE. By adopting the DOB, the control system behaves as if it were the nominal closed-loop system in the absence of disturbances and uncertainties. The optimal control parameters of the FOPD are determined from the loop-shaping perspective, and the Q-filter of the DOB is deliberately designed with consideration of system robustness. The linear structure of the proposed control system makes the analysis more convenient. The disturbance rejection properties and the tracking performance of the control system are investigated by both numerical and experimental tests; the results demonstrate that the proposed DOB based FOPD control system is more robust and much more suitable for the gun control system, with its strong nonlinearity and disturbances.
The representativeness of patient position during the first treatment fractions
Bertelsen, Anders; Nielsen, Morten; Westberg, Jonas
2009-01-01
BACKGROUND: During external radiotherapy, daily or even weekly image verification of the patient position might be problematic due to the resulting workload. Therefore it has been customary to perform image verification only at the first treatment fraction. In this study it is investigated whether the patient position uncertainty at the initial three treatment fractions is representative of the uncertainty throughout the treatment course. METHODS: Seventy-seven patients were treated using Elekta Synergy accelerators. The patients were immobilized during treatment by use of a customized VacFix bag and a mask of AquaPlast. Cone beam CT (CBCT) scans were performed at fractions 1, 2 and 3, and at the 10th and 20th treatment fractions. Displacements in patient position, translational and rotational, have been measured by an image registration of the CBCT and the planning CT scan. The displacement data...
Fractional Poisson process (II)
Wang Xiaotian; Wen Zhixiong; Zhang Shiying
2006-01-01
In this paper, we propose a stochastic process W_H(t) (H ∈ (1/2, 1)) which we call the fractional Poisson process. The process W_H(t) is self-similar in the wide sense, displays long range dependence, and has a fatter tail than a Gaussian process. In addition, it converges to fractional Brownian motion in distribution.
Wilkerson, Trena L.; Bryan, Tommy; Curry, Jane
2012-01-01
This article describes how using candy bars as models gives sixth-grade students a taste for learning to represent fractions whose denominators are factors of twelve. Using paper models of the candy bars, students explored and compared fractions. They noticed fewer different representations for one-third than for one-half. The authors conclude…
Can Kindergartners Do Fractions?
Cwikla, Julie
2014-01-01
Mathematics professor Julie Cwikla decided that she needed to investigate young children's understandings and see what precurricular partitioning notions young minds bring to the fraction table. Cwikla realized that only a handful of studies have examined how preschool-age and early elementary school-age students solve fraction problems (Empson…
Investment, regulation, and uncertainty
Smyth, Stuart J; McDonald, Jillian; Falck-Zepeda, Jose
2014-01-01
As with any technological innovation, time refines the technology, improving upon the original version of the innovative product. The initial GM crops had single traits for either herbicide tolerance or insect resistance. Current varieties have both of these traits stacked together and in many cases other abiotic and biotic traits have also been stacked. This innovation requires investment. While this is relatively straightforward, certain conditions need to exist such that investments can be facilitated. The principal requirement for investment is that regulatory frameworks render consistent and timely decisions. If the certainty of regulatory outcomes weakens, the potential for changes in investment patterns increases. This article provides a summary background to the leading plant breeding technologies that are either currently being used to develop new crop varieties or are in the pipeline to be applied to plant breeding within the next few years. Challenges for existing regulatory systems are highlighted. Utilizing an option value approach from the investment literature, an assessment of uncertainty regarding the regulatory approval for these varying techniques is undertaken. This research highlights which technology development options have the greatest degree of uncertainty and hence which ones might be expected to see an investment decline. PMID:24499745
Probabilistic Mass Growth Uncertainties
Plumer, Eric; Elliott, Darren
2013-01-01
Mass has been widely used as a variable input parameter for Cost Estimating Relationships (CER) for space systems. As these space systems progress from early concept studies and drawing boards to the launch pad, their masses tend to grow substantially, hence adversely affecting a primary input to most modeling CERs. Modeling and predicting mass uncertainty, based on historical and analogous data, is therefore critical and is an integral part of modeling cost risk. This paper presents the results of an ongoing NASA effort to publish mass growth datasheets for adjusting single-point Technical Baseline Estimates (TBE) of masses of space instruments as well as spacecraft, for both earth orbiting and deep space missions, at various stages of a project's lifecycle. This paper also discusses the long term strategy of NASA Headquarters in publishing similar results, using a variety of cost driving metrics, on an annual basis. This paper provides quantitative results that show decreasing mass growth uncertainties as mass estimate maturity increases. The analysis is based on historical data obtained from the NASA Cost Analysis Data Requirements (CADRe) database.
Diaz, Victor Alfonzo; Giusti, Andrea
2018-03-01
The aim of this paper is to present a simple generalization of bosonic string theory in the framework of the theory of fractional variational problems. Specifically, we present a fractional extension of the Polyakov action, for which we compute the general form of the equations of motion and discuss the connection between the new fractional action and a generalization of the Nambu-Goto action. Consequently, we analyze the symmetries of the modified Polyakov action and try to fix the gauge, following the classical procedures. Then we solve the equations of motion in a simplified setting. Finally, we present a Hamiltonian description of the classical fractional bosonic string and introduce the fractional light-cone gauge. It is important to remark that, throughout the whole paper, we thoroughly discuss how to recover the known results as an "integer" limit of the presented model.
Embracing uncertainty in applied ecology.
Milner-Gulland, E J; Shea, K
2017-12-01
Applied ecologists often face uncertainty that hinders effective decision-making. Common traps that may catch the unwary are: ignoring uncertainty, acknowledging uncertainty but ploughing on, focussing on trivial uncertainties, believing your models, and unclear objectives. We integrate research insights and examples from a wide range of applied ecological fields to illustrate advances that are generally underused, but could facilitate ecologists' ability to plan and execute research to support management. Recommended approaches to avoid uncertainty traps are: embracing models, using decision theory, using models more effectively, thinking experimentally, and being realistic about uncertainty. Synthesis and applications. Applied ecologists can become more effective at informing management by using approaches that explicitly take account of uncertainty.
Oil price uncertainty in Canada
Elder, John [Department of Finance and Real Estate, 1272 Campus Delivery, Colorado State University, Fort Collins, CO 80523 (United States); Serletis, Apostolos [Department of Economics, University of Calgary, Calgary, Alberta (Canada)
2009-11-15
Bernanke [Bernanke, Ben S. Irreversibility, uncertainty, and cyclical investment. Quarterly Journal of Economics 98 (1983), 85-106.] shows how uncertainty about energy prices may induce optimizing firms to postpone investment decisions, thereby leading to a decline in aggregate output. Elder and Serletis [Elder, John and Serletis, Apostolos. Oil price uncertainty.] find empirical evidence that uncertainty about oil prices has tended to depress investment in the United States. In this paper we assess the robustness of these results by investigating the effects of oil price uncertainty in Canada. Our results are remarkably similar to existing results for the United States, providing additional evidence that uncertainty about oil prices may provide another explanation for why the sharp oil price declines of 1985 failed to produce rapid output growth. Impulse-response analysis suggests that uncertainty about oil prices may tend to reinforce the negative response of output to positive oil shocks. (author)
Quantification of margins and uncertainties: Alternative representations of epistemic uncertainty
Helton, Jon C.; Johnson, Jay D.
2011-01-01
In 2001, the National Nuclear Security Administration of the U.S. Department of Energy in conjunction with the national security laboratories (i.e., Los Alamos National Laboratory, Lawrence Livermore National Laboratory and Sandia National Laboratories) initiated development of a process designated Quantification of Margins and Uncertainties (QMU) for the use of risk assessment methodologies in the certification of the reliability and safety of the nation's nuclear weapons stockpile. A previous presentation, 'Quantification of Margins and Uncertainties: Conceptual and Computational Basis,' describes the basic ideas that underlie QMU and illustrates these ideas with two notional examples that employ probability for the representation of aleatory and epistemic uncertainty. The current presentation introduces and illustrates the use of interval analysis, possibility theory and evidence theory as alternatives to the use of probability theory for the representation of epistemic uncertainty in QMU-type analyses. The following topics are considered: the mathematical structure of alternative representations of uncertainty, alternative representations of epistemic uncertainty in QMU analyses involving only epistemic uncertainty, and alternative representations of epistemic uncertainty in QMU analyses involving a separation of aleatory and epistemic uncertainty. Analyses involving interval analysis, possibility theory and evidence theory are illustrated with the same two notional examples used in the presentation indicated above to illustrate the use of probability to represent aleatory and epistemic uncertainty in QMU analyses.
Uncertainty visualization in HARDI based on ensembles of ODFs
Jiao, Fangxiang
2012-02-01
In this paper, we propose a new and accurate technique for uncertainty analysis and uncertainty visualization based on fiber orientation distribution function (ODF) glyphs, associated with high angular resolution diffusion imaging (HARDI). Our visualization applies volume rendering techniques to an ensemble of 3D ODF glyphs, which we call SIP functions of diffusion shapes, to capture their variability due to underlying uncertainty. This rendering elucidates the complex heteroscedastic structural variation in these shapes. Furthermore, we quantify the extent of this variation by measuring the fraction of the volume of these shapes, which is consistent across all noise levels, the certain volume ratio. Our uncertainty analysis and visualization framework is then applied to synthetic data, as well as to HARDI human-brain data, to study the impact of various image acquisition parameters and background noise levels on the diffusion shapes. © 2012 IEEE.
Uncertainty visualization in HARDI based on ensembles of ODFs
Jiao, Fangxiang; Phillips, Jeff M.; Gur, Yaniv; Johnson, Chris R.
2012-01-01
In this paper, we propose a new and accurate technique for uncertainty analysis and uncertainty visualization based on fiber orientation distribution function (ODF) glyphs, associated with high angular resolution diffusion imaging (HARDI). Our visualization applies volume rendering techniques to an ensemble of 3D ODF glyphs, which we call SIP functions of diffusion shapes, to capture their variability due to underlying uncertainty. This rendering elucidates the complex heteroscedastic structural variation in these shapes. Furthermore, we quantify the extent of this variation by measuring the fraction of the volume of these shapes, which is consistent across all noise levels, the certain volume ratio. Our uncertainty analysis and visualization framework is then applied to synthetic data, as well as to HARDI human-brain data, to study the impact of various image acquisition parameters and background noise levels on the diffusion shapes. © 2012 IEEE.
SU-G-BRB-14: Uncertainty of Radiochromic Film Based Relative Dose Measurements
Devic, S; Tomic, N; DeBlois, F; Seuntjens, J [McGill University, Montreal, QC (Canada); Lewis, D [RCF Consulting, LLC, Monroe, CT (United States); Aldelaijan, S [King Faisal Specialist Hospital & Research Center, Riyadh (Saudi Arabia)
2016-06-15
Purpose: Due to its inherently non-linear dose response, measurement of relative dose distributions with radiochromic film requires measurement of absolute dose using a calibration curve following a previously established reference dosimetry protocol. On the other hand, a functional form that converts the inherently non-linear dose response curve of the radiochromic film dosimetry system into a linear one has been proposed recently [Devic et al, Med. Phys. 39 4850–4857 (2012)]. However, the uncertainty of relative doses measured in this way has remained an open question. Methods: If the relative dose distribution is determined through the reference dosimetry system (conversion of the response into absolute dose by using a calibration curve), the total uncertainty of the relative dose is calculated by summing in quadrature the total uncertainties of the doses measured at a given point and at the reference point. On the other hand, if the relative dose is determined using the linearization method, the new response variable is calculated as ζ = a(netOD)^n/ln(netOD). In this case, the total uncertainty in relative dose is calculated by summing in quadrature the uncertainties of the new response function (σζ) at a given point and at the reference point. Results: Except at very low doses, where the measurement uncertainty dominates, the total relative dose uncertainty is less than 1% for the linear response method, compared to an almost 2% uncertainty level for the reference dosimetry method. The result is not surprising, having in mind that the total uncertainty of the reference dose method is dominated by the fitting uncertainty, which is mitigated in the case of the linearization method. Conclusion: Linearization of the radiochromic film dose response provides a convenient and more precise method for relative dose measurements, as it does not require reference dosimetry and creation of a calibration curve. However, the linearity of the newly introduced function must be verified.
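The quadrature rule described in the Methods can be sketched as follows (a minimal illustration with made-up uncertainty values and placeholder fit coefficients, not the paper's data):

```python
import math

def rel_unc_quadrature(u_point, u_ref):
    """Total relative-dose uncertainty: relative uncertainties at the
    measurement point and the reference point summed in quadrature
    (assumes the two are uncorrelated)."""
    return math.sqrt(u_point ** 2 + u_ref ** 2)

def zeta(net_od, a, n):
    """Linearized response zeta = a * netOD**n / ln(netOD); a and n
    stand for the fitted film-system coefficients."""
    return a * net_od ** n / math.log(net_od)

# ~1.4% per point via the reference-dose route gives ~2% total,
# ~0.7% per point via the linearized route gives ~1% total
u_ref_route = rel_unc_quadrature(0.014, 0.014)
u_lin_route = rel_unc_quadrature(0.007, 0.007)
```
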
Uncertainty principle for angular position and angular momentum
Franke-Arnold, Sonja; Barnett, Stephen M; Yao, Eric; Leach, Jonathan; Courtial, Johannes; Padgett, Miles
2004-01-01
The uncertainty principle places fundamental limits on the accuracy with which we are able to measure the values of different physical quantities (Heisenberg 1949 The Physical Principles of the Quantum Theory (New York: Dover); Robertson 1929 Phys. Rev. 34 127). This has profound effects not only on the microscopic but also on the macroscopic level of physical systems. The most familiar form of the uncertainty principle relates the uncertainties in position and linear momentum. Other manifestations include those relating uncertainty in energy to uncertainty in time duration, phase of an electromagnetic field to photon number and angular position to angular momentum (Vaccaro and Pegg 1990 J. Mod. Opt. 37 17; Barnett and Pegg 1990 Phys. Rev. A 41 3427). In this paper, we report the first observation of the last of these uncertainty relations and derive the associated states that satisfy the equality in the uncertainty relation. We confirm the form of these states by detailed measurement of the angular momentum of a light beam after passage through an appropriate angular aperture. The angular uncertainty principle applies to all physical systems and is particularly important for systems with cylindrical symmetry
Discussion of OECD LWR Uncertainty Analysis in Modelling Benchmark
Ivanov, K.; Avramova, M.; Royer, E.; Gillford, J.
2013-01-01
The demand for best estimate calculations in nuclear reactor design and safety evaluations has increased in recent years. Uncertainty quantification has been highlighted as part of the best estimate calculations. The modelling aspects of uncertainty and sensitivity analysis are to be further developed and validated on scientific grounds in support of their performance and application to multi-physics reactor simulations. The Organization for Economic Co-operation and Development (OECD) / Nuclear Energy Agency (NEA) Nuclear Science Committee (NSC) has endorsed the creation of an Expert Group on Uncertainty Analysis in Modelling (EGUAM). Within the framework of activities of EGUAM/NSC the OECD/NEA initiated the Benchmark for Uncertainty Analysis in Modelling for Design, Operation, and Safety Analysis of Light Water Reactor (OECD LWR UAM benchmark). The general objective of the benchmark is to propagate the predictive uncertainties of code results through complex coupled multi-physics and multi-scale simulations. The benchmark is divided into three phases, with Phase I highlighting the uncertainty propagation in stand-alone neutronics calculations, while Phases II and III are focused on uncertainty analysis of the reactor core and system respectively. This paper discusses the progress made in Phase I calculations, the specifications for Phase II and the upcoming challenges in defining Phase III exercises. The challenges of applying uncertainty quantification to complex code systems, in particular the time-dependent coupled physics models, are the large computational burden and the utilization of non-linear models (expected due to the physics coupling). (authors)
Uncertainty analysis of nuclear waste package corrosion
Kurth, R.E.; Nicolosi, S.L.
1986-01-01
This paper describes the results of an evaluation of three uncertainty analysis methods for assessing the possible variability in calculating the corrosion process in a nuclear waste package. The purpose of the study is the determination of how each of three uncertainty analysis methods, Monte Carlo, Latin hypercube sampling (LHS) and a modified discrete probability distribution method, performs in such calculations. The purpose is not to examine the absolute magnitude of the numbers but rather to rank the performance of each of the uncertainty methods in assessing the model variability. In this context it was found that the Monte Carlo method provided the most accurate assessment, but at a prohibitively high cost. The modified discrete probability method provided accuracy close to that of the Monte Carlo for a fraction of the cost. The LHS method was found to be too inaccurate for this calculation, although it would be appropriate for use in a model which requires substantially more computer time than the one studied in this paper.
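The mechanics of the sampling methods compared here can be illustrated in one dimension (a toy integrand stands in for the corrosion model; note the paper found LHS inadequate for its particular model, whereas for smooth one-dimensional problems stratification typically reduces variance):

```python
import random
import statistics

def mc_estimate(f, n, rng):
    """Plain Monte Carlo estimate of E[f(U)], U ~ Uniform(0, 1)."""
    return statistics.fmean(f(rng.random()) for _ in range(n))

def lhs_estimate(f, n, rng):
    """One-dimensional Latin hypercube: one random sample drawn
    from each of n equal-probability strata."""
    return statistics.fmean(f((i + rng.random()) / n) for i in range(n))

rng = random.Random(42)
f = lambda u: u ** 2          # toy integrand standing in for a model response
mc = [mc_estimate(f, 100, rng) for _ in range(200)]
lhs = [lhs_estimate(f, 100, rng) for _ in range(200)]
# stratification removes the between-stratum variance, so the spread
# of the LHS estimator is far smaller for this smooth integrand
```
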
Approximate Bayesian evaluations of measurement uncertainty
Possolo, Antonio; Bodnar, Olha
2018-04-01
The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.
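A minimal sketch of the idea that Bayesian estimates can be obtained from numerical optimization plus simple algebra, using a Laplace-style approximation for a scalar measurand with a normal likelihood and normal prior (toy numbers, not the paper's examples):

```python
import math

def laplace_approx(log_post, lo, hi, tol=1e-8):
    """Find the posterior mode by golden-section search, then take the
    standard uncertainty from the curvature of the log-posterior."""
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if log_post(c) < log_post(d):
            a = c
        else:
            b = d
    mode = (a + b) / 2
    h = 1e-4
    curv = (log_post(mode - h) - 2 * log_post(mode) + log_post(mode + h)) / h ** 2
    return mode, math.sqrt(-1.0 / curv)

# toy measurement: mean of n readings with known sigma, normal prior
xbar, n, sigma = 10.2, 5, 0.4       # hypothetical data summary
mu0, tau = 10.0, 1.0                # hypothetical prior
lp = lambda m: (-n * (m - xbar) ** 2 / (2 * sigma ** 2)
                - (m - mu0) ** 2 / (2 * tau ** 2))
mode, u = laplace_approx(lp, 8.0, 12.0)
```

For this conjugate normal-normal case the approximation reproduces the exact posterior mean and standard deviation, which makes it easy to check.
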
Vicari Kristin J
2012-04-01
Abstract Background: Cost-effective production of lignocellulosic biofuels remains a major financial and technical challenge at the industrial scale. A critical tool in biofuels process development is the techno-economic (TE) model, which calculates biofuel production costs using a process model and an economic model. The process model solves mass and energy balances for each unit, and the economic model estimates capital and operating costs from the process model based on economic assumptions. The process model inputs include experimental data on the feedstock composition and intermediate product yields for each unit. These experimental yield data are calculated from primary measurements. Uncertainty in these primary measurements is propagated to the calculated yields, to the process model, and ultimately to the economic model. Thus, outputs of the TE model have a minimum uncertainty associated with the uncertainty in the primary measurements. Results: We calculate the uncertainty in the Minimum Ethanol Selling Price (MESP) estimate for lignocellulosic ethanol production via a biochemical conversion process: dilute sulfuric acid pretreatment of corn stover followed by enzymatic hydrolysis and co-fermentation of the resulting sugars to ethanol. We perform a sensitivity analysis on the TE model and identify the feedstock composition and the conversion yields from three unit operations (xylose from pretreatment, glucose from enzymatic hydrolysis, and ethanol from fermentation) as the most important variables. The uncertainty in the pretreatment xylose yield arises from multiple measurements, whereas the glucose and ethanol yields from enzymatic hydrolysis and fermentation, respectively, are dominated by a single measurement: the fraction of insoluble solids (fIS) in the biomass slurries. Conclusions: We calculate a $0.15/gal uncertainty in MESP from the TE model due to uncertainties in primary measurements. This result sets a lower bound on the error bars of
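The propagation of primary-measurement uncertainty to a model output can be sketched with a toy stand-in for the TE model (the cost function, yield means and uncertainties below are illustrative assumptions, not the NREL process model or data):

```python
import random
import statistics

def mesp_like(y_xylose, y_glucose, y_ethanol, base_cost=1.0):
    """Illustrative stand-in for a techno-economic model: a selling
    price that scales inversely with the product of the three key
    conversion yields (NOT the actual process/economic model)."""
    return base_cost / (y_xylose * y_glucose * y_ethanol)

rng = random.Random(0)
prices = []
for _ in range(20000):
    # hypothetical yield means and standard uncertainties derived
    # from primary measurements
    yx = rng.gauss(0.75, 0.02)
    yg = rng.gauss(0.85, 0.02)
    ye = rng.gauss(0.90, 0.02)
    prices.append(mesp_like(yx, yg, ye))
price_mean = statistics.fmean(prices)
price_sd = statistics.stdev(prices)   # lower bound on output error bars
```
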
Impact of dose-distribution uncertainties on rectal NTCP modeling I: Uncertainty estimates
Fenwick, John D.; Nahum, Alan E.
2001-01-01
A trial of nonescalated conformal versus conventional radiotherapy treatment of prostate cancer has been carried out at the Royal Marsden NHS Trust (RMH) and Institute of Cancer Research (ICR), demonstrating a significant reduction in the rate of rectal bleeding reported for patients treated using the conformal technique. The relationship between planned rectal dose-distributions and incidences of bleeding has been analyzed, showing that the rate of bleeding falls significantly as the extent of the rectal wall receiving a planned dose-level of more than 57 Gy is reduced. Dose-distributions delivered to the rectal wall over the course of radiotherapy treatment inevitably differ from planned distributions, due to sources of uncertainty such as patient setup error, rectal wall movement and variation in the absolute rectal wall surface area. In this paper estimates of the differences between planned and treated rectal dose-distribution parameters are obtained for the RMH/ICR nonescalated conformal technique, working from a distribution of setup errors observed during the RMH/ICR trial, movement data supplied by Lebesque and colleagues derived from repeat CT scans, and estimates of rectal circumference variations extracted from the literature. Setup errors and wall movement are found to cause only limited systematic differences between mean treated and planned rectal dose-distribution parameter values, but introduce considerable uncertainties into the treated values of some dose-distribution parameters: setup errors lead to 22% and 9% relative uncertainties in the highly dosed fraction of the rectal wall and the wall average dose, respectively, with wall movement leading to 21% and 9% relative uncertainties. Estimates obtained from the literature of the uncertainty in the absolute surface area of the distensible rectal wall are of the order of 13%-18%. In a subsequent paper the impact of these uncertainties on analyses of the relationship between incidences of bleeding
Linearly constrained minimax optimization
Madsen, Kaj; Schjær-Jacobsen, Hans
1978-01-01
We present an algorithm for nonlinear minimax optimization subject to linear equality and inequality constraints which requires first order partial derivatives. The algorithm is based on successive linear approximations to the functions defining the problem. The resulting linear subproblems...
Controlling general projective synchronization of fractional order Rossler systems
Shao Shiquan
2009-01-01
This paper proposes a method to achieve general projective synchronization of two fractional order Rossler systems. First, we construct the integer-order approximation corresponding to the fractional order Rossler system. Then, a control method based on a partially linear decomposition and negative feedback of state errors is applied to the integer-order system. Numerical simulations show the effectiveness of the proposed method.
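A minimal numerical sketch of error-feedback synchronization on the integer-order Rossler system (the special case of identical synchronization with full-state feedback and an assumed gain k; the paper's method additionally uses a partially linear decomposition and a general projective factor):

```python
import math

def rossler(s, a=0.2, b=0.2, c=5.7):
    """Integer-order Rossler vector field."""
    x, y, z = s
    return (-y - z, x + a * y, b + z * (x - c))

def euler_step(s, ds, dt):
    return tuple(v + dv * dt for v, dv in zip(s, ds))

dt, k = 0.005, 10.0           # assumed step size and feedback gain
drive = (1.0, 1.0, 1.0)
resp = (2.0, 0.0, 0.5)
err0 = math.dist(drive, resp)
for _ in range(5000):
    fd = rossler(drive)
    fr = rossler(resp)
    # negative feedback of the state errors drives the response
    # system onto the drive trajectory
    u = tuple(-k * (r - d) for r, d in zip(resp, drive))
    drive = euler_step(drive, fd, dt)
    resp = euler_step(resp, tuple(f + ui for f, ui in zip(fr, u)), dt)
err = math.dist(drive, resp)
```
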
Chaos in the fractional order Chen system and its control
Li Chunguang; Chen Guanrong
2004-01-01
In this letter, we study the chaotic behaviors of the fractional order Chen system. We find that chaos exists in the fractional order Chen system with order less than 3; the lowest order at which we found chaos in this system is 2.1. Linear feedback control of chaos in this system is also studied.
Heisenberg's principle of uncertainty and the uncertainty relations
Redei, Miklos
1987-01-01
The usual verbal form of the Heisenberg uncertainty principle and the usual mathematical formulation (the so-called uncertainty theorem) are not equivalent. The meaning of the concept 'uncertainty' is not unambiguous and different interpretations are used in the literature. Recently a renewed interest has appeared to reinterpret and reformulate the precise meaning of Heisenberg's principle and to find adequate mathematical form. The suggested new theorems are surveyed and critically analyzed. (D.Gy.) 20 refs
Petzinger, Tom
I am trying to make money in the biotech industry from complexity science. And I am doing it with inspiration that I picked up on the edge of Appalachia spending time with June Holley and ACEnet when I was a Wall Street Journal reporter. I took some of those ideas to Pittsburgh, in biotechnology, in a completely private setting with an economic development focus, but also with a mission to return profit to private capital. And we are doing that. I submit as a hypothesis, something we are figuring out in the post-industrial era, that business evolves. It is not the definition of business, but business critically involves the design of systems in which uncertainty is treated as a certainty. That is what I have seen and what I have tried to put into practice.
Peters, H.P.; Hennen, L.
1990-01-01
The authors report on the results of three representative surveys that made a closer inquiry into perceptions and valuations of information and information sources concerning Chernobyl. It turns out that the information sources are generally considered little trustworthy. This was generally attributable to the interpretation of the events being tied to attitudes on the atomic energy issue. The greatest credibility was attributed to television broadcasting. The authors summarize their discourse as follows: There is good reason to interpret the widespread uncertainty after Chernobyl as proof of the fact that large parts of the population are prepared and willing to assume a critical stance towards information and prefer to draw their information from various sources representing different positions. (orig.) [de
2012-03-01
ISO/IEC 17025 (testing and calibration laboratories); ISO/IEC 17020 (inspection bodies); ISO Guide 34 (reference material producers, RMPs); certification to ISO 9001 (QMS), ISO 14001 (EMS), TS 16949 (US automotive), etc.; DoD QSM 4.2 standard and ISO/IEC 17025:2005 (each addresses uncertainty); IPV6, NLLAP, NEFAP training programs; certification bodies accredited to ISO/IEC 17021 for management systems.
Traceability and Measurement Uncertainty
Tosello, Guido; De Chiffre, Leonardo
2004-01-01
This report is made as a part of the project 'Metro-E-Learn: European e-Learning in Manufacturing Metrology', an EU project under the program SOCRATES MINERVA (ODL and ICT in Education), Contract No: 101434-CP-1-2002-1-DE-MINERVA, coordinated by Friedrich-Alexander-University Erlangen. The project partnership (composed of 7 partners in 5 countries, thus covering a real European spread in high tech production technology) aims to develop and implement an advanced e-learning system that integrates contributions from quite different disciplines into a user-centred approach. Topics covered include: machine tool testing; 9. the role of manufacturing metrology for QM; 10. inspection planning; 11. quality management of measurements incl. documentation; 12. advanced manufacturing measurement technology. The present report represents section 2 – Traceability and Measurement Uncertainty – of the e-learning system.
Decision making under uncertainty
Cyert, R.M.
1989-01-01
This paper reports on ways of improving the reliability of products and systems in this country if we are to survive as a first-rate industrial power. The use of statistical techniques has, since the 1920s, been viewed as one of the methods for testing quality and estimating the level of quality in a universe of output. Statistical quality control is not relevant, generally, to improving systems in an industry like yours, but certainly the use of probability concepts is of significance. In addition, when it is recognized that part of the problem involves making decisions under uncertainty, it becomes clear that techniques such as sequential decision making and Bayesian analysis become major methodological approaches that must be utilized.
Sustainability and uncertainty
Jensen, Karsten Klint
2007-01-01
The widely used concept of sustainability is seldom precisely defined, and its clarification involves making up one's mind about a range of difficult questions. One line of research (bottom-up) takes sustaining a system over time as its starting point and then infers prescriptions from this requirement. Another line (top-down) takes an economical interpretation of the Brundtland Commission's suggestion that the present generation's need-satisfaction should not compromise the need-satisfaction of future generations as its starting point. It then measures sustainability at the level of society… a clarified ethical goal, disagreements can arise. At present we do not know what substitutions will be possible in the future. This uncertainty clearly affects the prescriptions that follow from the measure of sustainability. Consequently, decisions about how to make future agriculture sustainable…
Attitude Estimation in Fractionated Spacecraft Cluster Systems
Hadaegh, Fred Y.; Blackmore, James C.
2011-01-01
Attitude estimation was examined in fractionated free-flying spacecraft. Instead of a single, monolithic spacecraft, a fractionated free-flying spacecraft uses multiple spacecraft modules. These modules are connected only through wireless communication links and, potentially, wireless power links. The key advantage of this concept is the ability to respond to uncertainty. For example, if a single spacecraft module in the cluster fails, a new one can be launched at a lower cost and risk than would be incurred with on-orbit servicing or replacement of the monolithic spacecraft. In order to create such a system, however, it is essential to know what the navigation capabilities of the fractionated system are as a function of the capabilities of the individual modules, and to have an algorithm that can perform estimation of the attitudes and relative positions of the modules with fractionated sensing capabilities. Looking specifically at fractionated attitude estimation with star trackers and optical relative attitude sensors, a set of mathematical tools has been developed that specifies the set of sensors necessary to ensure that the attitude of the entire cluster (cluster attitude) can be observed. Also developed was a navigation filter that can estimate the cluster attitude if these conditions are satisfied. Each module in the cluster may have either a star tracker, a relative attitude sensor, or both. An extended Kalman filter can be used to estimate the attitude of all modules. A range of estimation performances can be achieved depending on the sensors used and the topology of the sensing network.
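The filtering idea can be illustrated with a one-dimensional sketch: repeated scalar Kalman updates fusing noisy angle measurements (hypothetical noise level and true angle; the actual system uses a full extended Kalman filter over all module attitudes):

```python
import random

def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update: fuse the current estimate
    (mean x, variance p) with a measurement z of variance r."""
    gain = p / (p + r)
    return x + gain * (z - x), (1 - gain) * p

rng = random.Random(7)
true_angle = 0.3              # hypothetical cluster attitude angle (rad)
x, p = 0.0, 1.0               # vague prior estimate
r = 0.05 ** 2                 # assumed star-tracker noise variance
for _ in range(50):
    z = true_angle + rng.gauss(0.0, 0.05)
    x, p = kalman_update(x, p, z, r)
# x converges toward the true angle; p shrinks toward r / n
```
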
Fractional Order Generalized Information
José Tenreiro Machado
2014-04-01
This paper formulates a novel expression for entropy inspired by the properties of Fractional Calculus. The characteristics of the generalized fractional entropy are tested both on standard probability distributions and on real world data series. The results reveal that tuning the fractional order allows a high sensitivity to the signal evolution, which is useful in describing the dynamics of complex systems. The concepts are also extended to relative distances and tested with several sets of data, confirming the goodness of the generalization.
Social Trust and Fractionalization:
Bjørnskov, Christian
2008-01-01
This paper takes a closer look at the importance of fractionalization for the creation of social trust. It first argues that the determinants of trust can be divided into two categories: those affecting individuals' trust radii and those affecting social polarization. A series of estimates using a much larger country sample than in previous literature confirms that fractionalization in the form of income inequality and political diversity adversely affects social trust while ethnic diversity does not. However, these effects differ systematically across countries, questioning standard interpretations of the influence of fractionalization on trust.
An uncertainty inventory demonstration - a primary step in uncertainty quantification
Langenbrunner, James R. [Los Alamos National Laboratory; Booker, Jane M [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Salazar, Issac F [Los Alamos National Laboratory; Ross, Timothy J [UNM
2009-01-01
Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with words, 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step for the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.
Uncertainty in relative cost investigation
Bunn, D.; Viahos, K.
1989-01-01
One of the consequences of the privatization of the Central Electricity Generating Board has been a weakening of the economic case for nuclear generation over coal. Nuclear has higher capital but lower operating costs than coal and is therefore favoured in capital budgeting by discounting at lower rates of return. In the Sizewell case (in 1987), discounting at the public sector rate of 5 per cent favoured nuclear. However, the private sector will require higher rates of return, thus rendering nuclear less attractive. Hence the imposition by the government of a diversity constraint on the privatized industry to ensure that contracts are made for a minimum fraction of non-fossil (essentially nuclear) energy. An electricity capacity planning model was developed to estimate the costs of imposing various non-fossil energy constraints on the planning decision of a privatized electricity supply industry, as a function of various discount rates. Using a large-scale linear programming technique, the model optimizes over a 50 year horizon the schedule of installation, and mix of generating capacity, both with and without a minimum non-fossil constraint. The conclusion is that the opportunity cost of diversity may be a complex joint substitution of more than one type of plant (e.g. coal and gas) depending on the discount rate. (author)
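The effect described, nuclear favoured at low discount rates, coal at high ones, and a non-fossil constraint carrying an opportunity cost, can be sketched with a brute-force search over a two-technology capacity mix (all cost figures are illustrative assumptions, not CEGB data):

```python
def annual_cost(mix, rate, life=40):
    """Annualized cost per unit of demand for a capacity mix.
    capex/opex figures are illustrative, not CEGB data."""
    plants = {"nuclear": (2000.0, 15.0), "coal": (1000.0, 80.0)}
    crf = rate / (1 - (1 + rate) ** -life)   # capital recovery factor
    return sum(share * (capex * crf + opex)
               for tech, share in mix.items()
               for capex, opex in [plants[tech]])

def best_mix(rate, min_nuclear=0.0, step=0.01):
    """Brute-force search over the nuclear share, optionally subject
    to a minimum non-fossil (nuclear) constraint."""
    best = None
    n = 0.0
    while n <= 1.0 + 1e-9:
        if n >= min_nuclear:
            cost = annual_cost({"nuclear": n, "coal": 1.0 - n}, rate)
            if best is None or cost < best[1]:
                best = (n, cost)
        n += step
    return best

# at a 5% (public-sector) rate the optimum is all-nuclear; at 10%
# (private-sector) it flips to all-coal, and imposing a 20% nuclear
# floor then carries a positive opportunity cost
```
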
Essays on model uncertainty in financial models
Li, Jing
2018-01-01
This dissertation studies model uncertainty, particularly in financial models. It consists of two empirical chapters and one theoretical chapter. The first empirical chapter (Chapter 2) classifies model uncertainty into parameter uncertainty and misspecification uncertainty. It investigates the
FRACTIONS: CONCEPTUAL AND DIDACTIC ASPECTS
Sead Rešić
2016-09-01
Fractions represent the manner of writing parts of whole numbers (integers). Rules for operations with fractions differ from rules for operations with integers. Students face difficulties in understanding fractions, especially operations with fractions. These difficulties are well known in the didactics of Mathematics throughout the world, and there is a lot of research regarding problems in learning about fractions. Methods for facilitating the understanding of fractions have been discovered, which are essentially related to visualizing operations with fractions.
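The differing rules the abstract mentions can be demonstrated with Python's standard-library `fractions` module, which performs exact rational arithmetic:

```python
from fractions import Fraction

# addition needs a common denominator; multiplication works directly
# on numerators and denominators; division inverts and multiplies
a, b = Fraction(1, 3), Fraction(1, 4)
total = a + b        # 1/3 + 1/4 = 4/12 + 3/12 = 7/12
product = a * b      # (1*1)/(3*4) = 1/12
quotient = a / b     # 1/3 * 4/1 = 4/3
```
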
Eigenfunction expansion for fractional Brownian motions
Maccone, C.
1981-01-01
The fractional Brownian motions, a class of nonstationary stochastic processes defined as the Riemann-Liouville fractional integral/derivative of the Brownian motion, are studied. It is shown that these processes can be regarded as the output of a suitable linear system whose input is white noise. Their autocorrelation is then derived, with a study of their standard-deviation curves. Their power spectra are found by resorting to nonstationary spectral theory. Finally their eigenfunction expansion (Karhunen-Loeve expansion) is obtained: the eigenfunctions are proved to be suitable Bessel functions, and the eigenvalues to be zeros of those Bessel functions. (author)
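A crude simulation of the construction described, white noise passed through the Riemann-Liouville fractional integral, can be sketched as follows (left-point discretization, illustrative parameters):

```python
import math
import random

def fbm_riemann_liouville(n, dt, h, rng):
    """Riemann-Liouville fractional Brownian motion: the fractional
    integral of white noise, B_H(t_k) ~ sum_{j<k} (t_k - t_j)^(h-1/2)
    * dW_j / Gamma(h + 1/2).  Left-point rule; illustrative only."""
    c = 1.0 / math.gamma(h + 0.5)
    dw = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n)]
    path = [0.0]
    for k in range(1, n + 1):
        acc = sum(((k - j) * dt) ** (h - 0.5) * dw[j] for j in range(k))
        path.append(c * acc)
    return path

path = fbm_riemann_liouville(200, 0.01, 0.75, random.Random(1))
# the process is nonstationary: Var B_H(t) grows like t^(2H)
```
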
ESFR core optimization and uncertainty studies
Rineiski, A.; Vezzoni, B.; Zhang, D.; Marchetti, M.; Gabrielli, F.; Maschek, W.; Chen, X.-N.; Buiron, L.; Krepel, J.; Sun, K.; Mikityuk, K.; Polidoro, F.; Rochman, D.; Koning, A.J.; DaCruz, D.F.; Tsige-Tamirat, H.; Sunderland, R.
2015-01-01
In the European Sodium Fast Reactor (ESFR) project supported by EURATOM in 2008-2012, a concept for a large 3600 MWth sodium-cooled fast reactor design was investigated. In particular, reference core designs with oxide and carbide fuel were optimized to improve their safety parameters. Uncertainties in these parameters were evaluated for the oxide option. Core modifications were performed first to reduce the sodium void reactivity effect. Introduction of a large sodium plenum with an absorber layer above the core and a lower axial fertile blanket improves the total sodium void effect appreciably, bringing it close to zero for a core with fresh fuel, in line with results obtained worldwide, while not substantially influencing other core physics parameters. Therefore, an optimized configuration, CONF2, with a sodium plenum and a lower blanket was established first and used as a basis for further studies in view of the deterioration of safety parameters during reactor operation. Further options studied were an inner fertile blanket, introduction of moderator pins, a smaller core height, and special designs for pins (such as 'empty' pins) and subassemblies. These special designs were proposed to facilitate melted-fuel relocation in order to avoid core re-criticality under severe accident conditions. In the paper, further CONF2 modifications are compared in terms of safety and fuel balance. They may bring further improvements in safety, but their accurate assessment requires additional studies, including transient analyses. Uncertainty studies were performed by employing a so-called Total Monte-Carlo method, in which a large number of nuclear data files is produced for single isotopes and then used in Monte-Carlo calculations. The uncertainties in the criticality, sodium void and Doppler effects, and the effective delayed neutron fraction due to uncertainties in basic nuclear data were assessed for an ESFR core. They demonstrate the applicability of the available nuclear data for the ESFR.
Foundations of linear and generalized linear models
Agresti, Alan
2015-01-01
A valuable overview of the most important ideas and results in statistical analysis Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,
Fractional Stochastic Field Theory
Honkonen, Juha
2018-02-01
Models describing evolution of physical, chemical, biological, social and financial processes are often formulated as differential equations with the understanding that they are large-scale equations for averages of quantities describing intrinsically random processes. Explicit account of randomness may lead to significant changes in the asymptotic behaviour (anomalous scaling) in such models especially in low spatial dimensions, which in many cases may be captured with the use of the renormalization group. Anomalous scaling and memory effects may also be introduced with the use of fractional derivatives and fractional noise. Construction of renormalized stochastic field theory with fractional derivatives and fractional noise in the underlying stochastic differential equations and master equations and the interplay between fluctuation-induced and built-in anomalous scaling behaviour is reviewed and discussed.
Goodrich, Christopher
2015-01-01
This text provides the first comprehensive treatment of the discrete fractional calculus. Experienced researchers will find the text useful as a reference for discrete fractional calculus and topics of current interest. Students who are interested in learning about discrete fractional calculus will find this text to provide a useful starting point. Several exercises are offered at the end of each chapter and select answers have been provided at the end of the book. The presentation of the content is designed to give ample flexibility for potential use in a myriad of courses and for independent study. The novel approach taken by the authors includes a simultaneous treatment of the fractional- and integer-order difference calculus (on a variety of time scales, including both the usual forward and backwards difference operators). The reader will acquire a solid foundation in the classical topics of the discrete calculus while being introduced to exciting recent developments, bringing them to the frontiers of the...
Shamim, Atif; Radwan, Ahmed Gomaa; Salama, Khaled N.
2011-01-01
matching networks, where the fractional approach proves to be much more versatile and results in a single element matching network for a complex load as compared to the two elements in the conventional approach. © 2010 IEEE.
A new uncertainty importance measure
Borgonovo, E.
2007-01-01
Uncertainty in parameters is present in many risk assessment problems and leads to uncertainty in model predictions. In this work, we introduce a global sensitivity indicator which looks at the influence of input uncertainty on the entire output distribution without reference to a specific moment of the output (moment independence) and which can be defined also in the presence of correlations among the parameters. We discuss its mathematical properties and highlight the differences between the present indicator, variance-based uncertainty importance measures and a moment independent sensitivity indicator previously introduced in the literature. Numerical results are discussed with application to the probabilistic risk assessment model on which Iman [A matrix-based approach to uncertainty and sensitivity analysis for fault trees. Risk Anal 1987;7(1):22-33] first introduced uncertainty importance measures
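A moment-independent indicator of this kind can be estimated from plain Monte Carlo samples by comparing the unconditional output density with densities conditional on slices of one input. The following histogram-based sketch is my own illustration of that idea (function name, slicing, and binning choices are assumptions, not taken from the paper):

```python
import numpy as np

def delta_indicator(xi, y, n_slices=10, n_bins=30):
    """Histogram estimate of a moment-independent (Borgonovo-type) indicator:
    delta = 0.5 * E_xi[ integral |f_Y(y) - f_{Y|xi}(y)| dy ].
    Looks at the whole output distribution, not a specific moment."""
    edges = np.histogram_bin_edges(y, bins=n_bins)
    widths = np.diff(edges)
    f_y, _ = np.histogram(y, bins=edges, density=True)
    shifts = []
    # equal-probability slices of xi approximate the outer expectation
    for idx in np.array_split(np.argsort(xi), n_slices):
        f_cond, _ = np.histogram(y[idx], bins=edges, density=True)
        shifts.append(0.5 * np.sum(np.abs(f_y - f_cond) * widths))
    return float(np.mean(shifts))
```

An input that fully determines the output shifts the conditional density strongly (indicator near 1), while an irrelevant input leaves it close to the unconditional density (indicator near 0, up to sampling noise).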
Uncertainty Management and Sensitivity Analysis
Rosenbaum, Ralph K.; Georgiadis, Stylianos; Fantke, Peter
2018-01-01
Uncertainty is always there, and LCA is no exception. The presence of uncertainties of different types and from numerous sources in LCA results is a fact, but managing them allows one to quantify and improve the precision of a study and the robustness of its conclusions. LCA practice sometimes suffers from an imbalanced perception of uncertainties, justifying modelling choices and omissions. Identifying prevalent misconceptions around uncertainties in LCA is a central goal of this chapter, aiming to establish a positive approach focusing on the advantages of uncertainty management. The main objectives of this chapter are to learn how to deal with uncertainty in the context of LCA, how to quantify it, interpret and use it, and how to communicate it. The subject is approached more holistically than just focusing on relevant statistical methods or purely mathematical aspects. This chapter...
Fractional Bhatnagar-Gross-Krook kinetic equation
Goychuk, Igor
2017-11-01
The linear Boltzmann equation (LBE) approach is generalized to describe fractional superdiffusive transport of the Lévy walk type in external force fields. The time distribution between scattering events is assumed to have a finite mean value and infinite variance. It is completely characterized by two scattering rates, one fractional and one normal, the latter also defining the mean scattering rate. We formulate a general fractional LBE approach and exemplify it with the particularly simple case of the Bohm and Gross scattering integral, leading to a fractional generalization of the Bhatnagar, Gross and Krook (BGK) kinetic equation. Here, at each scattering event the particle velocity is completely randomized and takes a value from the equilibrium Maxwell distribution at a given fixed temperature. We show that retardation effects are indispensable even in the limit of an infinite mean scattering rate and argue that this novel fractional kinetic equation provides a viable alternative to the fractional Kramers-Fokker-Planck (KFP) equation by Barkai and Silbey and its generalization by Friedrich et al., which are based on the picture of a divergent mean time between scattering events. The case of divergent mean time is also discussed at length and compared with the earlier results obtained within the fractional KFP. A phenomenological fractional BGK equation without retardation effects is also proposed in the limit of infinite scattering rates. It cannot, however, be rigorously derived from a scattering model; rather, it is cleverly postulated. In this respect, this retardationless equation is similar to the fractional KFP of Barkai and Silbey. However, it corresponds to the opposite, much more physical limit and therefore also presents a viable alternative.
Intracellular Cadmium Isotope Fractionation
Horner, T. J.; Lee, R. B.; Henderson, G. M.; Rickaby, R. E.
2011-12-01
Recent stable isotope studies into the biological utilization of transition metals (e.g. Cu, Fe, Zn, Cd) suggest several stepwise cellular processes can fractionate isotopes in both culture and nature. However, the determination of fractionation factors is often unsatisfactory, as significant variability can exist even between different organisms with the same cellular functions. Thus, it has not been possible to adequately understand the source and mechanisms of metal isotopic fractionation. In order to address this problem, we investigated the biological fractionation of Cd isotopes within genetically modified bacteria (E. coli). There is currently only one known biological use or requirement of Cd, a Cd/Zn carbonic anhydrase (CdCA, from the marine diatom T. weissfloggii), which we introduce into the E. coli genome. We have also developed a cleaning procedure that allows bacteria to be treated so that the isotopic composition of different cellular components can be studied. We find that whole cells always exhibit a preference for uptake of the lighter isotopes of Cd. Notably, whole cells appear to have a similar Cd isotopic composition regardless of the expression of CdCA within the E. coli. However, isotopic fractionation can occur within the genetically modified E. coli during Cd use, such that Cd bound in CdCA can display a distinct isotopic composition compared to the cell as a whole. Thus, the externally observed fractionation is independent of the internal uses of Cd, with the largest Cd isotope fractionation occurring during cross-membrane transport. A general implication of these experiments is that trace metal isotopic fractionation most likely reflects metal transport into biological cells (either actively or passively), rather than relating to expression of specific physiological function and genetic expression of different metalloenzymes.
Quantifying Uncertainty in Satellite-Retrieved Land Surface Temperature from Cloud Detection Errors
Claire E. Bulgin
2018-04-01
Clouds remain one of the largest sources of uncertainty in remote sensing of surface temperature in the infrared, but this uncertainty has not generally been quantified. We present a new approach to do so, applied here to the Advanced Along-Track Scanning Radiometer (AATSR). We use an ensemble of cloud masks based on independent methodologies to investigate the magnitude of cloud detection uncertainties in area-average Land Surface Temperature (LST) retrieval. We find that at a grid resolution of 625 km² (commensurate with a 0.25° grid size at the tropics), cloud detection uncertainties are positively correlated with cloud-cover fraction in the cell and are larger during the day than at night. Daytime cloud detection uncertainties range between 2.5 K for clear-sky fractions of 10–20% and 1.03 K for clear-sky fractions of 90–100%. Corresponding night-time uncertainties are 1.6 K and 0.38 K, respectively. Cloud detection uncertainty shows a weaker positive correlation with the number of biomes present within a grid cell, used as a measure of heterogeneity in the background against which the cloud detection must operate (e.g., surface temperature, emissivity and reflectance). Uncertainty due to cloud detection errors is strongly dependent on the dominant land cover classification. We find cloud detection uncertainties of a magnitude of 1.95 K over permanent snow and ice, 1.2 K over open forest, 0.9–1 K over bare soils and 0.09 K over mosaic cropland, for a standardised clear-sky fraction of 74.2%. As the uncertainties arising from cloud detection errors are of a significant magnitude for many surface types and spatially heterogeneous where land classification varies rapidly, LST data producers are encouraged to quantify cloud-related uncertainties in gridded products.
Decommissioning funding: ethics, implementation, uncertainties
2006-01-01
This status report on Decommissioning Funding: Ethics, Implementation, Uncertainties also draws on the experience of the NEA Working Party on Decommissioning and Dismantling (WPDD). The report offers, in a concise form, an overview of relevant considerations on decommissioning funding mechanisms with regard to ethics, implementation and uncertainties. Underlying ethical principles found in international agreements are identified, and factors influencing the accumulation and management of funds for decommissioning nuclear facilities are discussed together with the main sources of uncertainties of funding systems. (authors)
Chemical model reduction under uncertainty
Najm, Habib; Galassi, R. Malpica; Valorani, M.
2016-01-01
We outline a strategy for chemical kinetic model reduction under uncertainty. We present highlights of our existing deterministic model reduction strategy, and describe the extension of the formulation to include parametric uncertainty in the detailed mechanism. We discuss the utility of this construction, as applied to hydrocarbon fuel-air kinetics, and the associated use of uncertainty-aware measures of error between predictions from detailed and simplified models.
The Uncertainty of Measurement Results
Ambrus, A. [Hungarian Food Safety Office, Budapest (Hungary)
2009-07-15
Factors affecting the uncertainty of measurement are explained, basic statistical formulae given, and the theoretical concept explained in the context of pesticide formulation analysis. Practical guidance is provided on how to determine individual uncertainty components within an analytical procedure. An extended and comprehensive table containing the relevant mathematical/statistical expressions elucidates the relevant underlying principles. Appendix I provides a practical elaborated example on measurement uncertainty estimation, above all utilizing experimental repeatability and reproducibility laboratory data. (author)
Uncertainty analysis of environmental models
Monte, L.
1990-01-01
In the present paper, an evaluation of the output uncertainty of an environmental model for assessing the transfer of ¹³⁷Cs and ¹³¹I in the human food chain is carried out on the basis of a statistical analysis of data reported in the literature. The uncertainty analysis offers the opportunity of obtaining some remarkable information about the uncertainty of models predicting the migration of non-radioactive substances in the environment, mainly in relation to dry and wet deposition.
Fractional laser skin resurfacing.
Alexiades-Armenakas, Macrene R; Dover, Jeffrey S; Arndt, Kenneth A
2012-11-01
Laser skin resurfacing (LSR) has evolved over the past 2 decades from traditional ablative to fractional nonablative and fractional ablative resurfacing. Traditional ablative LSR was highly effective in reducing rhytides, photoaging, and acne scarring but was associated with significant side effects and complications. In contrast, nonablative LSR was very safe but failed to deliver consistent clinical improvement. Fractional LSR has achieved the middle ground; it combined the efficacy of traditional LSR with the safety of nonablative modalities. The first fractional laser was a nonablative erbium-doped yttrium aluminum garnet (Er:YAG) laser that produced microscopic columns of thermal injury in the epidermis and upper dermis. Heralding an entirely new concept of laser energy delivery, it delivered the laser beam in microarrays. It resulted in microscopic columns of treated tissue and intervening areas of untreated skin, which yielded rapid reepithelialization. Fractional delivery was quickly applied to ablative wavelengths such as carbon dioxide, Er:YAG, and yttrium scandium gallium garnet (2,790 nm), providing more significant clinical outcomes. Adjustable laser parameters, including power, pitch, dwell time, and spot density, allowed for precise determination of the percent surface area affected, penetration depth, clinical recovery time, and efficacy. Fractional LSR has been a significant advance to the laser field, striking the balance between safety and efficacy.
Uncertainty quantification in resonance absorption
Williams, M.M.R.
2012-01-01
We assess the uncertainty in the resonance escape probability due to uncertainty in the neutron and radiation line widths for the first 21 resonances in ²³²Th as given by . Simulation, quadrature and polynomial chaos methods are used and the resonance data are assumed to obey a beta distribution. We find the uncertainty in the total resonance escape probability to be the equivalent, in reactivity, of 75–130 pcm. Also shown are pdfs of the resonance escape probability for each resonance and the variation of the uncertainty with temperature. The viability of the polynomial chaos expansion method is clearly demonstrated.
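The simulation approach described can be illustrated with a toy propagation: beta-distributed line widths pushed through a simplified, single-resonance escape-probability-like expression. Everything below (the formula, the constant k, and the distribution parameters) is an illustrative assumption of mine, not the paper's ²³²Th evaluation:

```python
import numpy as np

def escape_prob_samples(gn0, gg0, rel_unc=0.1, k=5.0, n=50000, seed=1):
    """Toy Monte Carlo propagation: sample neutron (gn) and radiation (gg)
    line widths from symmetric beta distributions centred on nominal values,
    then push them through an illustrative escape-probability expression
    p = exp(-k * gn*gg/(gn+gg))."""
    rng = np.random.default_rng(seed)

    def draw(nom):
        # beta(4,4) has mean 0.5 on [0,1]; rescale to nom*(1 +- 3*rel_unc)
        b = rng.beta(4.0, 4.0, n)
        return nom * (1.0 + 3.0 * rel_unc * (2.0 * b - 1.0))

    gn, gg = draw(gn0), draw(gg0)
    return np.exp(-k * gn * gg / (gn + gg))

p = escape_prob_samples(0.25, 0.025)   # hypothetical nominal widths in eV
```

Quantiles of `p` then summarize the induced spread, which is the kind of information the paper converts into an equivalent reactivity uncertainty.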
Reliability analysis under epistemic uncertainty
Nannapaneni, Saideep; Mahadevan, Sankaran
2016-01-01
This paper proposes a probabilistic framework to include both aleatory and epistemic uncertainty within model-based reliability estimation of engineering systems for individual limit states. Epistemic uncertainty is considered due to both data and model sources. Sparse point and/or interval data regarding the input random variables leads to uncertainty regarding their distribution types, distribution parameters, and correlations; this statistical uncertainty is included in the reliability analysis through a combination of likelihood-based representation, Bayesian hypothesis testing, and Bayesian model averaging techniques. Model errors, which include numerical solution errors and model form errors, are quantified through Gaussian process models and included in the reliability analysis. The probability integral transform is used to develop an auxiliary variable approach that facilitates a single-level representation of both aleatory and epistemic uncertainty. This strategy results in an efficient single-loop implementation of Monte Carlo simulation (MCS) and FORM/SORM techniques for reliability estimation under both aleatory and epistemic uncertainty. Two engineering examples are used to demonstrate the proposed methodology. - Highlights: • Epistemic uncertainty due to data and model included in reliability analysis. • A novel FORM-based approach proposed to include aleatory and epistemic uncertainty. • A single-loop Monte Carlo approach proposed to include both types of uncertainties. • Two engineering examples used for illustration.
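The idea of collapsing the usual double loop (epistemic outer loop over distribution parameters, aleatory inner loop over the random variables) into a single Monte Carlo loop can be sketched as follows. The limit state, parameter ranges, and capacity value are invented for illustration and are not from the paper:

```python
import numpy as np

def single_loop_pof(n=200000, seed=0):
    """Single-loop MCS mixing epistemic and aleatory uncertainty: each sample
    first draws the uncertain distribution parameter (epistemic), then the
    random variable itself (aleatory), so one loop covers both sources.
    Illustrative limit state: g = capacity - load, failure when g < 0."""
    rng = np.random.default_rng(seed)
    mu_load = rng.uniform(9.0, 11.0, n)     # epistemic: mean load imprecisely known
    load = rng.normal(mu_load, 1.0)         # aleatory: natural load scatter
    capacity = 14.0                         # deterministic capacity (assumed)
    return np.mean(capacity - load < 0.0)
```

Each sample carries its own epistemic draw, so the estimated failure probability already integrates over both uncertainty types in one pass, which is the efficiency gain the auxiliary-variable strategy aims at.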
Simplified propagation of standard uncertainties
Shull, A.H.
1997-01-01
An essential part of any measurement control program is adequate knowledge of the uncertainties of the measurement system standards. Only with an estimate of the standards' uncertainties can one determine if the standard is adequate for its intended use or calculate the total uncertainty of the measurement process. Purchased standards usually have estimates of uncertainty on their certificates. However, when standards are prepared and characterized by a laboratory, variance propagation is required to estimate the uncertainty of the standard. Traditional variance propagation typically involves tedious use of partial derivatives, unfriendly software and the availability of statistical expertise. As a result, the uncertainty of prepared standards is often not determined, or determined incorrectly. For situations meeting stated assumptions, easier shortcut methods of estimation are now available which eliminate the need for partial derivatives and require only a spreadsheet or calculator. A system of simplifying the calculations by dividing them into subgroups of absolute and relative uncertainties is utilized. These methods also incorporate the International Standards Organization (ISO) concepts for combining systematic and random uncertainties as published in their Guide to the Expression of Measurement Uncertainty. Details of the simplified methods and examples of their use are included in the paper.
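The shortcut described, grouping uncertainty components into absolute terms (combined in quadrature for sums and differences) and relative terms (combined in quadrature for products and quotients), fits in a few lines. The example numbers below are hypothetical:

```python
from math import sqrt

def combine_absolute(*u_abs):
    """Standard uncertainty of a sum/difference: root-sum-square of the
    absolute standard uncertainties of the terms (independence assumed)."""
    return sqrt(sum(u * u for u in u_abs))

def combine_relative(*u_rel):
    """Relative standard uncertainty of a product/quotient: root-sum-square
    of the relative standard uncertainties of the factors."""
    return sqrt(sum(u * u for u in u_rel))

# e.g. a standard prepared as mass * purity / volume: the three relative
# standard uncertainties (balance, purity, flask -- hypothetical values) combine as
u_rel = combine_relative(0.003, 0.004, 0.0012)
```

This is exactly the spreadsheet-friendly root-sum-square arithmetic that replaces the partial-derivative bookkeeping of full variance propagation, valid when the stated independence and linearization assumptions hold.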
A new uncertainty relation for angular momentum and angle
Kranold, H.U.
1984-01-01
An uncertainty relation of the form ΔL_z ΔS_o ≥ ħ/2 is derived for angular momentum and angle. The non-linear operator S_o measures angles and has a simple interpretation. Subject to very general conditions of rotational invariance, the above relation is unique. Radial momentum is not quantized
Quantum mechanics in fractional and other anomalous spacetimes
Calcagni, Gianluca; Nardelli, Giuseppe; Scalisi, Marco
2012-01-01
We formulate quantum mechanics in spacetimes with real-order fractional geometry and more general factorizable measures. In spacetimes where coordinates and momenta span the whole real line, Heisenberg's principle is proven and the wave-functions minimizing the uncertainty are found. In spite of the
Climate Certainties and Uncertainties
Morel, Pierre
2012-01-01
In issue 380 of Futuribles in December 2011, Antonin Pottier analysed in detail the workings of what is today termed 'climate scepticism' - namely the propensity of certain individuals to contest the reality of climate change on the basis of pseudo-scientific arguments. He emphasized particularly that what fuels the debate on climate change is, largely, the degree of uncertainty inherent in the consequences to be anticipated from observation of the facts, not the description of the facts itself. In his view, the main aim of climate sceptics is to block the political measures for combating climate change. However, since they do not admit to this political posture, they choose instead to deny the scientific reality. This month, Futuribles complements this socio-psychological analysis of climate-sceptical discourse with an - in this case, wholly scientific - analysis of what we know (or do not know) about climate change on our planet. Pierre Morel gives a detailed account of the state of our knowledge in the climate field and what we are able to predict in the medium/long-term. After reminding us of the influence of atmospheric meteorological processes on the climate, he specifies the extent of global warming observed since 1850 and the main origin of that warming, as revealed by the current state of knowledge: the increase in the concentration of greenhouse gases. He then describes the changes in meteorological regimes (showing also the limits of climate simulation models), the modifications of hydrological regimes, and also the prospects for rises in sea levels. He also specifies the mechanisms that may potentially amplify all these phenomena and the climate disasters that might ensue. Lastly, he shows what are the scientific data that cannot be disregarded, the consequences of which are now inescapable (melting of the ice-caps, rises in sea level etc.), the only remaining uncertainty in this connection being the date at which these things will happen. 'In this
Identification of fractional order systems using modulating functions method
Liu, Dayan; Laleg-Kirati, Taous-Meriem; Gibaru, O.; Perruquetti, Wilfrid
2013-01-01
can be transferred into the ones of the modulating functions. By choosing a set of modulating functions, a linear system of algebraic equations is obtained. Hence, the unknown parameters of a fractional order system can be estimated by solving a linear
Vector network analyzer (VNA) measurements and uncertainty assessment
Shoaib, Nosherwan
2017-01-01
This book describes vector network analyzer measurements and uncertainty assessments, particularly in waveguide test-set environments, in order to establish their compatibility to the International System of Units (SI) for accurate and reliable characterization of communication networks. It proposes a fully analytical approach to measurement uncertainty evaluation, while also highlighting the interaction and the linear propagation of different uncertainty sources to compute the final uncertainties associated with the measurements. The book subsequently discusses the dimensional characterization of waveguide standards and the quality of the vector network analyzer (VNA) calibration techniques. The book concludes with an in-depth description of the novel verification artefacts used to assess the performance of the VNAs. It offers a comprehensive reference guide for beginners to experts, in both academia and industry, whose work involves the field of network analysis, instrumentation and measurements.
Radiobiological arguments for and clinical possibilities of unconventional fractionating rhythms
Herrmann, T.; Voigtmann, L.
1986-01-01
Radiobiological considerations on the use of unconventional fractionation rhythms are presented. The aim of this method is to widen the therapeutic window between maximum tumor destruction and maximal sparing of late-responding cell systems. These late-responding tissues show a very similar dose-time reaction, probably by reason of a causal injury to cells of the capillary endothelium. Linear-quadratic models for estimating the effect of the number of fractions and the total treatment period make it evident that sparing of late-responding tissue can be attained by reducing the single dose per fraction. Because splitting a total dose into several fractions under daily irradiation also gives the tumor a longer repopulation period, schedules with several irradiations per day are presented. Accelerated fractionation (the same number of fractions in a reduced treatment period) is contrasted with hyperfractionation (an increased number of fractions within the same total treatment period), and possibilities for application are suggested. (author)
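The linear-quadratic reasoning above is commonly summarized through the biologically effective dose, BED = n·d·(1 + d/(α/β)). The sketch below uses that standard textbook form (not a formula taken from this paper) to show why a smaller dose per fraction spares late-responding tissue (α/β around 3 Gy) at the same total physical dose:

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose from the linear-quadratic model:
    BED = n * d * (1 + d / (alpha/beta)), doses in Gy."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

# same 60 Gy physical dose delivered as 30 x 2 Gy vs 20 x 3 Gy,
# evaluated for late-responding tissue (alpha/beta ~ 3 Gy, assumed):
late_conv = bed(30, 2.0, 3.0)   # 100 Gy_3
late_hypo = bed(20, 3.0, 3.0)   # 120 Gy_3
```

The lower BED of the 2 Gy schedule in the low-α/β tissue quantifies the "careful treatment of late responding tissue by reduction of the single dose per fraction" noted in the abstract.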
Series expansion in fractional calculus and fractional differential equations
Li, Ming-Fan; Ren, Ji-Rong; Zhu, Tao
2009-01-01
Fractional calculus is the calculus of differentiation and integration of non-integer orders. In a recent paper (Annals of Physics 323 (2008) 2756-2778), the Fundamental Theorem of Fractional Calculus is highlighted. Based on this theorem, in this paper we introduce a fractional series expansion method for fractional calculus. We define a kind of fractional Taylor series of an infinitely fractionally-differentiable function. Further, based on our definition we generalize hypergeometric functio...
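A concrete way to check statements about non-integer-order derivatives is the Grünwald-Letnikov discretization, which is not the series method of this paper but agrees with the closed form D^alpha x^k = Gamma(k+1)/Gamma(k+1-alpha) * x^(k-alpha) for power functions:

```python
from math import gamma

def gl_fracderiv(f, x, alpha, h=1e-4):
    """Gruenwald-Letnikov approximation of the order-alpha fractional
    derivative with lower terminal 0:
    D^alpha f(x) ~ h^(-alpha) * sum_j c_j * f(x - j*h),
    where c_j = (-1)^j * binomial(alpha, j)."""
    n = round(x / h)
    total, c = 0.0, 1.0                  # c_0 = 1
    for j in range(n + 1):
        total += c * f(x - j * h)
        c *= (j - alpha) / (j + 1)       # recurrence for (-1)^j binom(alpha, j)
    return total / h ** alpha

# check against the closed form for f(x) = x^2, alpha = 1/2, at x = 1:
exact = gamma(3.0) / gamma(2.5)          # Gamma(k+1)/Gamma(k+1-alpha), k = 2
approx = gl_fracderiv(lambda x: x * x, 1.0, 0.5)
```

The first-order convergence of the Grünwald-Letnikov sum makes the agreement quite tight at this step size, which is a handy numerical companion when working with fractional Taylor expansions.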
Sketching Uncertainty into Simulations.
Ribicic, H; Waser, J; Gurbat, R; Sadransky, B; Groller, M E
2012-12-01
In a variety of application areas, the use of simulation steering in decision making is limited at best. Research focusing on this problem suggests that most user interfaces are too complex for the end user. Our goal is to let users create and investigate multiple, alternative scenarios without the need for special simulation expertise. To simplify the specification of parameters, we move from a traditional manipulation of numbers to a sketch-based input approach. Users steer both numeric parameters and parameters with a spatial correspondence by sketching a change onto the rendering. Special visualizations provide immediate visual feedback on how the sketches are transformed into boundary conditions of the simulation models. Since uncertainty with respect to many intertwined parameters plays an important role in planning, we also allow the user to intuitively set up complete value ranges, which are then automatically transformed into ensemble simulations. The interface and the underlying system were developed in collaboration with experts in the field of flood management. The real-world data they have provided has allowed us to construct scenarios used to evaluate the system. These were presented to a variety of flood response personnel, and their feedback is discussed in detail in the paper. The interface was found to be intuitive and relevant, although a certain amount of training might be necessary.
Uncertainty vs. Information (Invited)
Nearing, Grey
2017-04-01
Information theory is the branch of logic that describes how rational epistemic states evolve in the presence of empirical data (Knuth, 2005), and any logic of science is incomplete without such a theory. Developing a formal philosophy of science that recognizes this fact results in essentially trivial solutions to several longstanding problems that are generally considered intractable, including: • Alleviating the need for any likelihood function or error model. • Derivation of purely logical falsification criteria for hypothesis testing. • Specification of a general quantitative method for process-level model diagnostics. More generally, I make the following arguments: 1. Model evaluation should not proceed by quantifying and/or reducing error or uncertainty; instead, it should be approached as a problem of ensuring that our models contain as much information as our experimental data. I propose that the latter is the only question a scientist actually has the ability to ask. 2. Instead of building geophysical models as solutions to differential equations that represent conservation laws, we should build models as maximum entropy distributions constrained by conservation symmetries. This will allow us to derive predictive probabilities directly from first principles. Knuth, K. H. (2005) 'Lattice duality: The origin of probability and entropy', Neurocomputing, 67, pp. 245-274.
Pandemic influenza: certain uncertainties
Morens, David M.; Taubenberger, Jeffery K.
2011-01-01
SUMMARY For at least five centuries, major epidemics and pandemics of influenza have occurred unexpectedly and at irregular intervals. Despite the modern notion that pandemic influenza is a distinct phenomenon obeying such constant (if incompletely understood) rules as dramatic genetic change, cyclicity, “wave” patterning, virus replacement, and predictable epidemic behavior, much evidence suggests the opposite. Although there is much that we know about pandemic influenza, there appears to be much more that we do not know. Pandemics arise as a result of various genetic mechanisms, have no predictable patterns of mortality among different age groups, and vary greatly in how and when they arise and recur. Some are followed by new pandemics, whereas others fade gradually or abruptly into long-term endemicity. Human influenza pandemics have been caused by viruses that evolved singly or in co-circulation with other pandemic virus descendants and often have involved significant transmission between, or establishment of, viral reservoirs within other animal hosts. In recent decades, pandemic influenza has continued to produce numerous unanticipated events that expose fundamental gaps in scientific knowledge. Influenza pandemics appear to be not a single phenomenon but a heterogeneous collection of viral evolutionary events whose similarities are overshadowed by important differences, the determinants of which remain poorly understood. These uncertainties make it difficult to predict influenza pandemics and, therefore, to adequately plan to prevent them. PMID:21706672
Maugis, Pierre-André G
2018-07-01
Big data, the idea that an ever-larger volume of information is being constantly recorded, suggests that new problems can now be subjected to scientific scrutiny. However, can classical statistical methods be used directly on big data? We analyze the problem by looking at two known pitfalls of big datasets. First, that they are biased, in the sense that they do not offer a complete view of the populations under consideration. Second, that they present a weak but pervasive level of dependence between all their components. In both cases we observe that the uncertainty of the conclusions obtained by statistical methods is increased when they are used on big data, either because of a systematic error (bias), or because of a larger degree of randomness (increased variance). We argue that the key challenge raised by big data is not only how to use big data to tackle new problems, but how to develop tools and methods able to rigorously articulate the new risks therein. Copyright © 2016. Published by Elsevier Ltd.
Uncertainty enabled Sensor Observation Services
Cornford, Dan; Williams, Matthew; Bastin, Lucy
2010-05-01
Almost all observations of reality are contaminated with errors, which introduce uncertainties into the actual observation result. Such uncertainty is often held to be a data quality issue, and quantification of this uncertainty is essential for the principled exploitation of the observations. Many existing systems treat data quality in a relatively ad-hoc manner, however if the observation uncertainty is a reliable estimate of the error on the observation with respect to reality then knowledge of this uncertainty enables optimal exploitation of the observations in further processes, or decision making. We would argue that the most natural formalism for expressing uncertainty is Bayesian probability theory. In this work we show how the Open Geospatial Consortium Sensor Observation Service can be implemented to enable the support of explicit uncertainty about observations. We show how the UncertML candidate standard is used to provide a rich and flexible representation of uncertainty in this context. We illustrate this on a data set of user contributed weather data where the INTAMAP interpolation Web Processing Service is used to help estimate the uncertainty on the observations of unknown quality, using observations with known uncertainty properties. We then go on to discuss the implications of uncertainty for a range of existing Open Geospatial Consortium standards including SWE common and Observations and Measurements. We discuss the difficult decisions in the design of the UncertML schema and its relation and usage within existing standards and show various options. We conclude with some indications of the likely future directions for UncertML in the context of Open Geospatial Consortium services.
Greenspan, E.
1982-01-01
This chapter presents the mathematical basis for sensitivity functions, discusses their physical meaning and information they contain, and clarifies a number of issues concerning their application, including the definition of group sensitivities, the selection of sensitivity functions to be included in the analysis, and limitations of sensitivity theory. Examines the theoretical foundation; criticality reset sensitivities; group sensitivities and uncertainties; selection of sensitivities included in the analysis; and other uses and limitations of sensitivity functions. Gives the theoretical formulation of sensitivity functions pertaining to ''as-built'' designs for performance parameters of the form of ratios of linear flux functionals (such as reaction-rate ratios), linear adjoint functionals, bilinear functions (such as reactivity worth ratios), and for reactor reactivity. Offers a consistent procedure for reducing energy-dependent or fine-group sensitivities and uncertainties to broad group sensitivities and uncertainties. Provides illustrations of sensitivity functions as well as references to available compilations of such functions and of total sensitivities. Indicates limitations of sensitivity theory originating from the fact that this theory is based on a first-order perturbation theory
Waleed M. Abd-Elhameed
2016-09-01
Herein, two numerical algorithms for solving some linear and nonlinear fractional-order differential equations are presented and analyzed. For this purpose, a novel operational matrix of fractional-order derivatives of Fibonacci polynomials is constructed and employed along with the application of the tau and collocation spectral methods. The convergence and error analysis of the suggested Fibonacci expansion are carefully investigated. Some numerical examples with comparisons are presented to ensure the efficiency, applicability and high accuracy of the proposed algorithms. The result is two accurate semi-analytic polynomial solutions for linear and nonlinear fractional differential equations.
A commentary on model uncertainty
Apostolakis, G.
1994-01-01
A framework is proposed for the identification of model and parameter uncertainties in risk assessment models. Two cases are distinguished; in the first case, a set of mutually exclusive and exhaustive hypotheses (models) can be formulated, while, in the second, only one reference model is available. The relevance of this formulation to decision making and the communication of uncertainties is discussed
Mama Software Features: Uncertainty Testing
Ruggiero, Christy E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Porter, Reid B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-05-30
This document reviews how the uncertainty in the calculations is being determined with test image data. The results of this testing give an ‘initial uncertainty’ number that can be used to estimate the ‘back end’ uncertainty in digital image quantification. Statisticians are refining these numbers as part of a UQ effort.
Designing for Uncertainty: Three Approaches
Bennett, Scott
2007-01-01
Higher education wishes to get long life and good returns on its investment in learning spaces. Doing this has become difficult because rapid changes in information technology have created fundamental uncertainties about the future in which capital investments must deliver value. Three approaches to designing for this uncertainty are described…
Identification of fractional order systems using modulating functions method
Liu, Dayan
2013-06-01
The modulating functions method has been used for the identification of linear and nonlinear systems. In this paper, we generalize this method to the on-line identification of fractional order systems based on the Riemann-Liouville fractional derivatives. First, a new fractional integration by parts formula involving the fractional derivative of a modulating function is given. Then, we apply this formula to a fractional order system, for which the fractional derivatives of the input and the output can be transferred onto the modulating functions. By choosing a set of modulating functions, a linear system of algebraic equations is obtained. Hence, the unknown parameters of a fractional order system can be estimated by solving a linear system. Using this method, we do not need any initial values, which are usually unknown and not equal to zero. Also we do not need to estimate the fractional derivatives of the noisy output. Moreover, it is shown that the proposed estimators are robust against high frequency sinusoidal noises and the ones due to a class of stochastic processes. Finally, the efficiency and the stability of the proposed method are confirmed by some numerical simulations.
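As an illustration of the modulating-function idea the abstract describes, here is a minimal integer-order sketch: the paper itself treats Riemann-Liouville fractional derivatives, which this toy omits, and the system, input, and all numbers below are hypothetical.

```python
import numpy as np

def trap(f, t):
    """Trapezoidal quadrature of samples f over grid t."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)

# Hypothetical integer-order system y'(t) = theta1*y(t) + theta2*u(t),
# with true parameters theta1 = -2, theta2 = 3 and a unit step input.
T, n = 5.0, 2001
t = np.linspace(0.0, T, n)
u = np.ones(n)
y = 1.5 * (1.0 - np.exp(-2.0 * t))   # exact step response

# Modulating functions phi_k vanish at t = 0 and t = T, so integration by
# parts moves the derivative off the measured output y and onto phi_k:
#   -int(phi_k' * y) = theta1 * int(phi_k * y) + theta2 * int(phi_k * u)
K = 4
A = np.empty((K, 2))
b = np.empty(K)
for k in range(1, K + 1):
    phi = np.sin(k * np.pi * t / T)
    dphi = (k * np.pi / T) * np.cos(k * np.pi * t / T)
    A[k - 1] = [trap(phi * y, t), trap(phi * u, t)]
    b[k - 1] = -trap(dphi * y, t)

# Least-squares solution of the resulting linear algebraic system.
theta, *_ = np.linalg.lstsq(A, b, rcond=None)
```

On this grid the estimates land close to the true (-2, 3); note that no initial conditions and no derivative estimates of the output are needed, which is the selling point of the method.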
FRACTIONS: CONCEPTUAL AND DIDACTIC ASPECTS
Sead Rešić; Ismet Botonjić; Maid Omerović
2016-01-01
Fractions represent the manner of writing parts of whole numbers (integers). Rules for operations with fractions differ from rules for operations with integers. Students face difficulties in understanding fractions, especially operations with fractions. These difficulties are well known in didactics of Mathematics throughout the world and there is a lot of research regarding problems in learning about fractions. Methods for facilitating understanding fractions have been discovered...
Biswas, Karabi; Caponetto, Riccardo; Mendes Lopes, António; Tenreiro Machado, José António
2017-01-01
This book focuses on two specific areas related to fractional order systems – the realization of physical devices characterized by non-integer order impedance, usually called fractional-order elements (FOEs); and the characterization of vegetable tissues via electrical impedance spectroscopy (EIS) – and provides readers with new tools for designing new types of integrated circuits. The majority of the book addresses FOEs. The interest in these topics is related to the need to produce “analogue” electronic devices characterized by non-integer order impedance, and to the characterization of natural phenomena, which are systems with memory or aftereffects and for which the fractional-order calculus tool is the ideal choice for analysis. FOEs represent the building blocks for designing and realizing analogue integrated electronic circuits, which the authors believe hold the potential for a wealth of mass-market applications. The freedom to choose either an integer- or non-integer-order analogue integrator...
On the singular perturbations for fractional differential equation.
Atangana, Abdon
2014-01-01
The goal of this paper is to examine the possible extension of the singular perturbation differential equation to the concept of fractional order derivative. To achieve this, we present a review of the concept of fractional calculus. We make use of the Laplace transform operator to derive exact solutions of singular perturbation fractional linear differential equations. We make use of three analytical methods to present exact and approximate solutions of the singular perturbation fractional, nonlinear, nonhomogeneous differential equation. These methods include the regular perturbation method, the new development of the variational iteration method, and the homotopy decomposition method.
Detection of fractional solitons in quantum spin Hall systems
Fleckenstein, C.; Traverso Ziani, N.; Trauzettel, B.
2018-03-01
We propose two experimental setups that allow for the implementation and the detection of fractional solitons of the Goldstone-Wilczek type. The first setup is based on two magnetic barriers at the edge of a quantum spin Hall system for generating the fractional soliton. If then a quantum point contact is created with the other edge, the linear conductance shows evidence of the fractional soliton. The second setup consists of a single magnetic barrier covering both edges and implementing a long quantum point contact. In this case, the fractional soliton can unambiguously be detected as a dip in the conductance without the need to control the magnetization of the barrier.
Finite element method for time-space-fractional Schrodinger equation
Xiaogang Zhu
2017-07-01
In this article, we develop a fully discrete finite element method for the nonlinear Schrodinger equation (NLS) with time- and space-fractional derivatives. The time-fractional derivative is described in Caputo's sense and the space-fractional derivative in Riesz's sense. Its stability is derived; the convergence estimate is discussed by an orthogonal operator. We also extend the method to the two-dimensional time-space-fractional NLS and, to avoid iterative solvers at each time step, a linearized scheme is further constructed. Several numerical examples are implemented finally, which confirm the theoretical results as well as illustrate the accuracy of our methods.
Bogen, K.T.
1999-01-01
Traditional estimates of health risk are typically inflated, particularly if cancer is the dominant endpoint and there is fundamental uncertainty as to mechanism(s) of action. Risk is more realistically characterized if it accounts for joint uncertainty and interindividual variability after applying a unified probabilistic approach to the distributed parameters of all (linear as well as nonlinear) risk-extrapolation models involved. Such an approach was applied to characterize risks to potential future residents posed by trichloroethylene (TCE) in ground water at an inactive landfill site on Beale Air Force Base in California. Variability and uncertainty were addressed in exposure-route-specific estimates of applied dose, in pharmacokinetically based estimates of route-specific metabolized fractions of absorbed TCE, and in corresponding biologically effective doses estimated under a genotoxic/linear (MA_g) vs. a cytotoxic/nonlinear (MA_c) mechanistic assumption for TCE-induced cancer. Increased risk conditional on effective dose was estimated under MA_g based on seven rodent-bioassay data sets, and under MA_c based on mouse hepatotoxicity data. Mean and upper-bound estimates of combined risk calculated by the unified approach were <10^-6 and <10^-4, respectively, while corresponding estimates based on traditional deterministic methods were >10^-5 and >10^-4, respectively. It was estimated that no TCE-related harm is likely to occur under any plausible residential exposure scenario involving the site. The unified approach illustrated is particularly suited to characterizing risks that involve uncertain and/or diverse mechanisms of action
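The joint treatment of uncertainty and interindividual variability described above can be sketched as a two-dimensional Monte Carlo, with an outer loop over uncertain parameters and an inner loop over the exposed population; the distributions and numbers here are hypothetical, not the TCE values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_unc, n_var = 1000, 500  # outer (uncertainty) and inner (variability) sample sizes

# Outer loop: uncertain linear cancer-potency slope (per mg/kg-day), hypothetical.
slope = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n_unc)

risks = np.empty(n_unc)
for i, q in enumerate(slope):
    # Inner loop: interindividual variability in applied dose (mg/kg-day).
    dose = rng.lognormal(mean=np.log(0.01), sigma=0.8, size=n_var)
    risks[i] = np.mean(q * dose)  # population-average risk for this potency value

mean_risk = risks.mean()            # central estimate over uncertainty
ub_risk = np.quantile(risks, 0.95)  # upper-bound (95th percentile) estimate
```

Separating the two loops is what lets mean and upper-bound risk be reported as distinct quantities rather than a single inflated point estimate.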
Romano, Emanuele; Camici, Stefania; Brocca, Luca; Moramarco, Tommaso; Pica, Federico; Preziosi, Elisabetta
2014-05-01
There is evidence that the precipitation pattern in Europe is trending towards more humid conditions in the northern region and drier conditions in the southern and central-eastern regions. However, a great deal of uncertainty concerns how the changes in precipitations will have an impact on water resources, particularly on groundwater, and this uncertainty should be evaluated on the basis of that coming from 1) future climate scenarios of Global Circulation Models (GCMs) and 2) modeling chains including the downscaling technique, the infiltration model and the calibration/validation procedure used to develop the groundwater flow model. With the aim of quantifying the uncertainty of these components, the Valle Umbra porous aquifer (Central Italy) has been considered as a case study. This aquifer, that is exploited for human consumption and irrigation, is mainly fed by the effective infiltration from the ground surface and partly by the inflow from the carbonate aquifers bordering the valley. A numerical groundwater flow model has been developed through the finite difference MODFLOW2005 code and it has been calibrated and validated considering the recharge regime computed through a Thornthwaite-Mather infiltration model under the climate conditions observed in the period 1956-2012. Future scenarios (2010-2070) of temperature and precipitation have been obtained from three different GMCs: ECHAM-5 (Max Planck Institute, Germany), PCM (National Centre Atmospheric Research) and CCSM3 (National Centre Atmospheric Research). Each scenario has been downscaled (DSC) to the data of temperature and precipitation collected in the baseline period 1960-1990 at the stations located in the study area through two different statistical techniques (linear rescaling and quantile mapping). Then, stochastic rainfall and temperature time series are generated through the Neyman-Scott Rectangular Pulses model (NSRP) for precipitation and the Fractionally Differenced ARIMA model (FARIMA
Fractional power operation of tokamak reactors
Mau, T.K.; Vold, E.L.; Conn, R.W.
1986-01-01
Methods to operate a tokamak fusion reactor at fractions of its rated power, identify the more effective control knobs and assess the impact of the requirements of fractional power operation on full power reactor design are explored. In particular, the role of burn control in maintaining the plasma at thermal equilibrium throughout these operations is studied. As a prerequisite to this task, the critical physics issues relevant to reactor performance predictions are examined and some insight into their impact on fractional power operation is offered. The basic tool of analysis consists of a zero-dimensional (0-D) time-dependent plasma power balance code which incorporates the most advanced data base and models in transport and burn plasma physics relevant to tokamaks. Because the plasma power balance is dominated by the transport loss and given the large uncertainty in the confinement model, the authors have studied the problem for a wide range of energy confinement scalings. The results of this analysis form the basis for studying the temporal behavior of the plasma under various thermal control mechanisms. Scenarios of thermally stable full and fractional power operations have been determined for a variety of transport models, with either passive or active feedback burn control. Important power control parameters, such as gas fueling rate, auxiliary power and other plasma quantities that affect transport losses, have also been identified. The results of these studies vary with the individual transport scaling used and, in particular, with respect to the effect of alpha heating power on confinement
High density linear systems for fusion power
Ellis, W.R.; Krakowski, R.A.
1975-01-01
The physics and technological limitations and uncertainties associated with the linear theta pinch are discussed in terms of a generalized energy balance, which has as its basis the ratio (Q_E) of total electrical energy generated to net electrical energy consumed. Included in this total is the virtual energy of bred fissile fuel, if a hybrid blanket is used, as well as the actual or real energy deposited in the blanket by the fusion neutrons. The advantages and disadvantages of the pulsed operation demanded by the linear theta pinch are also discussed
Linear regression in astronomy. I
Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh
1990-01-01
Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
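The OLS bisector recommended above for symmetrical problems can be computed directly; this sketch follows the slope formula reported by Isobe et al. (1990), without the uncertainty estimates the paper also provides.

```python
import numpy as np

def ols_bisector(x, y):
    """Slope and intercept of the bisector of the OLS(Y|X) and OLS(X|Y) lines."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    sxx, syy, sxy = (xc * xc).sum(), (yc * yc).sum(), (xc * yc).sum()
    b1 = sxy / sxx  # OLS(Y|X) slope
    b2 = syy / sxy  # OLS(X|Y) slope, expressed in the Y-on-X frame
    # Bisector slope (Isobe et al. 1990):
    b3 = (b1 * b2 - 1.0 + np.sqrt((1.0 + b1 ** 2) * (1.0 + b2 ** 2))) / (b1 + b2)
    return b3, y.mean() - b3 * x.mean()
```

On data lying exactly on y = 2x + 1 all three slopes coincide at 2; on noisy data the bisector slope falls between the two OLS slopes, treating the variables symmetrically.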
“Stringy” coherent states inspired by generalized uncertainty principle
Ghosh, Subir; Roy, Pinaki
2012-05-01
Coherent States with Fractional Revival property, that explicitly satisfy the Generalized Uncertainty Principle (GUP), have been constructed in the context of Generalized Harmonic Oscillator. The existence of such states is essential in motivating the GUP based phenomenological results present in the literature which otherwise would be of purely academic interest. The effective phase space is Non-Canonical (or Non-Commutative in popular terminology). Our results have a smooth commutative limit, equivalent to Heisenberg Uncertainty Principle. The Fractional Revival time analysis yields an independent bound on the GUP parameter. Using this and similar bounds obtained here, we derive the largest possible value of the (GUP induced) minimum length scale. Mandel parameter analysis shows that the statistics is Sub-Poissonian. Correspondence Principle is deformed in an interesting way. Our computational scheme is very simple as it requires only first order corrected energy values and undeformed basis states.
“Stringy” coherent states inspired by generalized uncertainty principle
Ghosh, Subir; Roy, Pinaki
2012-01-01
Coherent States with Fractional Revival property, that explicitly satisfy the Generalized Uncertainty Principle (GUP), have been constructed in the context of Generalized Harmonic Oscillator. The existence of such states is essential in motivating the GUP based phenomenological results present in the literature which otherwise would be of purely academic interest. The effective phase space is Non-Canonical (or Non-Commutative in popular terminology). Our results have a smooth commutative limit, equivalent to Heisenberg Uncertainty Principle. The Fractional Revival time analysis yields an independent bound on the GUP parameter. Using this and similar bounds obtained here, we derive the largest possible value of the (GUP induced) minimum length scale. Mandel parameter analysis shows that the statistics is Sub-Poissonian. Correspondence Principle is deformed in an interesting way. Our computational scheme is very simple as it requires only first order corrected energy values and undeformed basis states.
Fractional gradient and its application to the fractional advection equation
D'Ovidio, M.; Garra, R.
2013-01-01
In this paper we provide a definition of fractional gradient operators, related to directional derivatives. We develop a fractional vector calculus, providing a probabilistic interpretation and mathematical tools to treat multidimensional fractional differential equations. A first application is discussed in relation to the d-dimensional fractional advection-dispersion equation. We also study the connection with multidimensional Lévy processes.
Bayesian models for comparative analysis integrating phylogenetic uncertainty
Villemereuil Pierre de
2012-06-01
Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for
A structured analysis of uncertainty surrounding modeled impacts of groundwater-extraction rules
Guillaume, Joseph H. A.; Qureshi, M. Ejaz; Jakeman, Anthony J.
2012-08-01
Integrating economic and groundwater models for groundwater-management can help improve understanding of trade-offs involved between conflicting socioeconomic and biophysical objectives. However, there is significant uncertainty in most strategic decision-making situations, including in the models constructed to represent them. If not addressed, this uncertainty may be used to challenge the legitimacy of the models and decisions made using them. In this context, a preliminary uncertainty analysis was conducted of a dynamic coupled economic-groundwater model aimed at assessing groundwater extraction rules. The analysis demonstrates how a variety of uncertainties in such a model can be addressed. A number of methods are used including propagation of scenarios and bounds on parameters, multiple models, block bootstrap time-series sampling and robust linear regression for model calibration. These methods are described within the context of a theoretical uncertainty management framework, using a set of fundamental uncertainty management tasks and an uncertainty typology.
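Of the uncertainty-management methods listed, block bootstrap time-series sampling is simple to sketch: blocks of consecutive observations are resampled with replacement so that short-range temporal dependence inside each block is preserved. The series and block length here are hypothetical.

```python
import numpy as np

def moving_block_bootstrap(series, block_len, rng):
    """One moving-block bootstrap replicate of a 1-D time series."""
    n = len(series)
    n_blocks = -(-n // block_len)  # ceiling division
    # Draw random block start positions with replacement.
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [series[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]

rng = np.random.default_rng(42)
# Toy autocorrelated series: slow sinusoid plus noise.
x = np.sin(np.arange(120) / 5.0) + rng.normal(0.0, 0.3, 120)

# Bootstrap distribution of the series mean, and its standard error.
reps = np.array([moving_block_bootstrap(x, 12, rng).mean() for _ in range(500)])
se_mean = reps.std()
```

Because whole blocks are resampled, the standard error reflects the serial dependence that an i.i.d. bootstrap would understate.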
Optimal Wind Power Uncertainty Intervals for Electricity Market Operation
Wang, Ying; Zhou, Zhi; Botterud, Audun; Zhang, Kaifeng
2018-01-01
It is important to select an appropriate uncertainty level of the wind power forecast for power system scheduling and electricity market operation. Traditional methods hedge against a predefined level of wind power uncertainty, such as a specific confidence interval or uncertainty set, which leaves open the question of how best to select the appropriate uncertainty level. To bridge this gap, this paper proposes a model to optimize the forecast uncertainty intervals of wind power for power system scheduling problems, with the aim of achieving the best trade-off between economics and reliability. We then reformulate and linearize the models into a mixed integer linear program (MILP) without strong assumptions on the shape of the probability distribution. In order to investigate the impacts on cost, reliability, and prices in an electricity market, we apply the proposed model to a two-settlement electricity market based on a six-bus test system and on a power system representing the U.S. state of Illinois. The results show that the proposed method can not only help to balance the economics and reliability of power system scheduling, but also help to stabilize energy prices in electricity market operation.
Uncertainties in Nuclear Proliferation Modeling
Kim, Chul Min; Yim, Man-Sung; Park, Hyeon Seok
2015-01-01
There have been various efforts in the research community to understand the determinants of nuclear proliferation and develop quantitative tools to predict nuclear proliferation events. Such systematic approaches have shown the possibility to provide warning for the international community to prevent nuclear proliferation activities. However, there are still large debates about the robustness of the actual effect of determinants and of projection results. Some studies have shown that several factors can cause uncertainties in previous quantitative nuclear proliferation modeling works. This paper analyzes the uncertainties in the past approaches and suggests future work in view of proliferation history, analysis methods, and variable selection. The research community still lacks knowledge of the sources of uncertainty in current models. Fundamental problems in modeling will remain even if other advanced modeling methods are developed. Before starting to develop a fancy model based on the time-dependent proliferation determinants hypothesis, using graph theory, etc., it is important to analyze the uncertainty of current models to solve the fundamental problems of nuclear proliferation modeling. The uncertainty from different proliferation history coding is small. Serious problems come from limited analysis methods and correlation among the variables. Problems in regression analysis and survival analysis cause huge uncertainties when using the same dataset, which decreases the robustness of the results. Inaccurate variables for nuclear proliferation also increase the uncertainty. To overcome these problems, further quantitative research should focus on analyzing the knowledge suggested by the qualitative nuclear proliferation studies
Measurement uncertainty: Friend or foe?
Infusino, Ilenia; Panteghini, Mauro
2018-02-02
The definition and enforcement of a reference measurement system, based on the implementation of metrological traceability of patients' results to higher order reference methods and materials, together with a clinically acceptable level of measurement uncertainty, are fundamental requirements to produce accurate and equivalent laboratory results. The uncertainty associated with each step of the traceability chain should be governed to obtain a final combined uncertainty on clinical samples fulfilling the requested performance specifications. It is important that end-users (i.e., clinical laboratories) can know and verify how in vitro diagnostics (IVD) manufacturers have implemented the traceability of their calibrators and estimated the corresponding uncertainty. However, full information about the traceability and combined uncertainty of calibrators is currently very difficult to obtain. Laboratory professionals should investigate the need to reduce the uncertainty of the higher order metrological references and/or to increase the precision of commercial measuring systems. Accordingly, measurement uncertainty should not be considered a parameter to be calculated by clinical laboratories just to fulfil accreditation standards; it must become a key quality indicator describing both the performance of an IVD measuring system and the laboratory itself. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
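The combined uncertainty across a traceability chain referred to above is conventionally obtained by root-sum-of-squares of the step uncertainties, assuming independent steps (the GUM approach); the step values below are hypothetical.

```python
import math

# Hypothetical standard uncertainties (same unit) for successive steps of a
# traceability chain: reference material, reference-method value transfer,
# manufacturer's calibrator, end-user measuring-system imprecision.
u_steps = [0.5, 0.8, 1.1, 1.6]

# Combined standard uncertainty, assuming independent (uncorrelated) steps.
u_combined = math.sqrt(sum(u * u for u in u_steps))

# Expanded uncertainty with coverage factor k = 2 (about 95 % coverage).
U_expanded = 2.0 * u_combined
```

Because the steps add in quadrature, the largest contributor dominates the total: halving a small step barely changes u_combined, which is why the text directs effort at the dominant source, be it the references or the commercial system's imprecision.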
Model uncertainty in safety assessment
Pulkkinen, U.; Huovinen, T.
1996-01-01
Uncertainty analyses are an essential part of any risk assessment. Usually the uncertainties of reliability model parameter values are described by probability distributions, and the uncertainty is propagated through the whole risk model. In addition to the parameter uncertainties, the assumptions behind the risk models may be based on insufficient experimental observations, and the models themselves may not be exact descriptions of the phenomena under analysis. The description and quantification of this type of uncertainty, model uncertainty, is the topic of this report. Model uncertainty is characterized and some approaches to model and quantify it are discussed. The emphasis is on so-called mixture models, which have been applied in PSAs. Some possible disadvantages of the mixture model are addressed. In addition to quantitative analyses, qualitative analysis is also discussed briefly. To illustrate the models, two simple case studies on failure intensity and human error modeling are described. In both examples, the analysis is based on simple mixture models, which are observed to be applicable in PSA analyses. (orig.) (36 refs., 6 figs., 2 tabs.)
Vinogradova, Natalya; Blaine, Larry
2013-01-01
Almost everyone loves chocolate. However, the same cannot be said about fractions, which are loved by markedly fewer. Middle school students tend to view them with wary respect, but little affection. The authors attempt to sweeten the subject by describing a type of game involving division of chocolate bars. The activity they describe provides a…
Fermion Number Fractionization
Srimath
1. Introduction. The Nobel Prize in Chemistry for the year 2000 was awarded to Alan J H ... soliton, the ground state of the fermion-soliton system can have ..... probability density, in a heuristic way that a fractional fermion number may ...
Momentum fractionation on superstrata
Bena, Iosif; Martinec, Emil; Turton, David; Warner, Nicholas P.
2016-01-01
Superstrata are bound states in string theory that carry D1, D5, and momentum charges, and whose supergravity descriptions are parameterized by arbitrary functions of (at least) two variables. In the D1-D5 CFT, typical three-charge states reside in high-degree twisted sectors, and their momentum charge is carried by modes that individually have fractional momentum. Understanding this momentum fractionation holographically is crucial for understanding typical black-hole microstates in this system. We use solution-generating techniques to add momentum to a multi-wound supertube and thereby construct the first examples of asymptotically-flat superstrata. The resulting supergravity solutions are horizonless and smooth up to well-understood orbifold singularities. Upon taking the AdS_3 decoupling limit, our solutions are dual to CFT states with momentum fractionation. We give a precise proposal for these dual CFT states. Our construction establishes the very nontrivial fact that large classes of CFT states with momentum fractionation can be realized in the bulk as smooth horizonless supergravity solutions.
Fractional Differential Equation
Moustafa El-Shahed
2007-01-01
where 2 < α < 3 is a real number and D_{0^+}^{α} is the standard Riemann-Liouville fractional derivative. Our analysis relies on Krasnoselskii's fixed point theorem for cone-preserving operators. An example is also given to illustrate the main results.
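For context, the standard Riemann-Liouville fractional derivative of order α referred to in the abstract is conventionally defined as follows (a textbook definition, not quoted from the paper itself):

```latex
% Riemann-Liouville fractional derivative of order \alpha, with n-1 < \alpha < n:
D_{0^+}^{\alpha} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)}\,
  \frac{d^{n}}{dt^{n}} \int_{0}^{t} \frac{f(s)}{(t-s)^{\alpha-n+1}}\, ds .
% For the abstract's case 2 < \alpha < 3, take n = 3.
```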
Vapor liquid fraction determination
1980-01-01
This invention describes a method of measuring liquid and vapor fractions in a non-homogeneous fluid flowing through an elongate conduit, such as may be required with boiling water, non-boiling turbulent flows, fluidized bed experiments, water-gas mixing analysis, and nuclear plant cooling. (UK)
Brewing with fractionated barley
Donkelaar, van L.H.G.
2016-01-01
Beer is a globally consumed beverage produced from malted barley, water, hops and yeast. In recent years, the use of unmalted barley and exogenous enzymes has become more popular because it enables simpler processing and reduced environmental
Fractionation and rectification apparatus
Sauerwald, A
1932-05-25
Fractionation and rectifying apparatus with a distillation vessel and a stirring tube, with drainage tubes leading from its coils to a central collecting tube, the drainage tubes being roughly parallel and attached partly on the outer half of the stirring tube and partly on the inner half of the central collecting tube, whereby distillation and rectification can be effected in a single apparatus.
Innes, W.; Klein, S.; Perl, M.; Price, J.C.
1982-06-01
A device to search for fractional charge in matter is described. The sample is coupled to a low-noise amplifier by a periodically varying capacitor and the resulting signal is synchronously detected. The varying capacitor is constructed as a rapidly spinning wheel. Samples of any material in volumes of up to 0.05 ml may be searched in less than an hour.
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. The information covers vector spaces, convex sets, and elements of matrix algebra for solving simultaneous linear equations. Dual problems, reduced-cost analysis, ranges, and error analysis are illustrated.
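As a modern counterpart to the computer solutions outlined above, a small linear program can be solved in a few lines. This sketch uses SciPy's `linprog` as an assumed stand-in for the FORTRAN-era codes the report describes; the problem data are illustrative only:

```python
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimizes, so the objective is negated.
c = [-3.0, -2.0]
A_ub = [[1.0, 1.0],
        [1.0, 3.0]]
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)     # optimal primal solution (here x=4, y=0)
print(-res.fun)  # optimal objective value
```

The solver also reports dual information, which underlies the reduced-cost and ranging analyses mentioned in the abstract.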
Flood forecasting and uncertainty of precipitation forecasts
Kobold, Mira; Suselj, Kay
2004-01-01
Timely and accurate flood forecasting is essential for reliable flood warning. The effectiveness of flood warning depends on the forecast accuracy of certain physical parameters, such as the peak magnitude of the flood and its timing, location and duration. Conceptual rainfall-runoff models enable the estimation of these parameters and lead to useful operational forecasts. Accurate rainfall is the most important input into hydrological models. The rainfall input can be real-time rain-gauge data, weather radar data, or meteorologically forecasted precipitation. Torrential streams and fast runoff are characteristic of most Slovenian rivers, and extensive damage is caused almost every year by rainstorms affecting different regions of Slovenia. The lag time between rainfall and runoff is very short for Slovenian territory, so on-line data are used only for nowcasting; forecasted precipitation is necessary for hydrological forecasts several days ahead. ECMWF (European Centre for Medium-Range Weather Forecasts) gives general forecasts for several days ahead, while more detailed precipitation data from the limited-area ALADIN/SI model are available for two days ahead. There is a certain degree of uncertainty in using such precipitation forecasts based on meteorological models. The variability of precipitation is very high in Slovenia, and the uncertainty of ECMWF-predicted precipitation is very large for Slovenian territory. The ECMWF model can predict precipitation events correctly but in general underestimates the amount of precipitation; the average underestimation is about 60% for the Slovenian region. The predictions of the limited-area ALADIN/SI model up to 48 hours ahead show greater applicability in hydrological forecasting. The hydrological models are sensitive to precipitation input: the deviation of runoff is much bigger than the rainfall deviation, with a runoff-to-rainfall error fraction of about 1.6. If spatial and time distribution
Peterson, David; Stofleth, Jerome H.; Saul, Venner W.
2017-07-11
Linear shaped charges are described herein. In a general embodiment, the linear shaped charge has an explosive with an elongated arrowhead-shaped profile. The linear shaped charge also has an elongated v-shaped liner that is inset into a recess of the explosive. Another linear shaped charge includes an explosive that is shaped as a star-shaped prism. Liners are inset into crevices of the explosive, where the explosive acts as a tamper.
Classifying Linear Canonical Relations
Lorand, Jonathan
2015-01-01
In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.
Model uncertainty: Probabilities for models?
Winkler, R.L.
1994-01-01
Like any other type of uncertainty, model uncertainty should be treated in terms of probabilities. The question is how to do this. The most commonly used approach has a drawback related to the interpretation of the probabilities assigned to the models. If we step back and look at the big picture, asking what the appropriate focus of the model uncertainty question should be in the context of risk and decision analysis, we see that a different probabilistic approach makes more sense, although it raises some implementation questions. Current work that is underway to address these questions looks very promising.
Optimal core acquisition and remanufacturing policies under uncertain core quality fractions
Teunter, R.H.; Flapper, S.D.P.
2011-01-01
Cores acquired by a remanufacturer are typically highly variable in quality. Even if the expected fractions of the various quality levels are known, the exact fractions when acquiring cores are still uncertain. Our model incorporates this uncertainty in determining optimal acquisition decisions.
Biogenic Carbon Fraction of Biogas and Natural Gas Fuel Mixtures Determined with 14C
Palstra, Sanne W. L.; Meijer, Harro A. J.
2014-01-01
This study investigates the accuracy of the radiocarbon-based calculation of the biogenic carbon fraction for different biogas and biofossil gas mixtures. The focus is on the uncertainty in the C-14 reference values for 100% biogenic carbon and on the C-13-based isotope fractionation correction of
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and in Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
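The routines described above remain the substrate of numerical computing today. As a hedged sketch, SciPy's low-level wrappers expose the same Level-1 BLAS operations (a modern descendant of the 1982 FORTRAN library, not that code itself):

```python
import numpy as np
from scipy.linalg import blas

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# Level-1 BLAS dot product: x . y
dot_xy = blas.ddot(x, y)
print(dot_xy)  # 32.0

# Level-1 BLAS "axpy": result <- a*x + y (y passed as a copy to keep it intact)
result = blas.daxpy(x, y.copy(), a=2.0)
print(result)  # [ 6.  9. 12.]
```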
Decision-making under great uncertainty
Hansson, S.O.
1992-01-01
Five types of decision-uncertainty are distinguished: uncertainty of consequences, of values, of demarcation, of reliance, and of co-ordination. Strategies are proposed for each type of uncertainty. The general conclusion is that it is meaningful for decision theory to treat cases with greater uncertainty than the textbook case of 'decision-making under uncertainty'. (au)
Quality assurance in fractionated stereotactic radiotherapy
Warrington, A.P.; Laing, R.W.; Brada, M.
1994-01-01
The recent development of fractionated stereotactic radiotherapy (SRT), which utilises the relocatable Gill-Thomas-Cosman frame (GTC 'repeat localiser'), requires comprehensive quality assurance (QA). This paper focuses on those QA procedures particularly relevant to fractionated SRT treatments, and which have been derived from the technique used at the Royal Marsden Hospital. They primarily relate to the following: (i) GTC frame fitting, initially in the mould room, and then at each imaging session and treatment fraction; (ii) checking of the linear accelerator beam geometry and alignment lasers; and (iii) setting up of the patient for each fraction of treatment. The precision of the fractionated technique therefore depends on monitoring the GTC frame relocation at each fitting, checking the accuracy of the radiation isocentre of the treatment unit, its coincidence with the patient alignment lasers and the adjustments required to set the patient up accurately. The results of our quality control checks show that setting up to a mean radiation isocentre using precisely set-up alignment lasers can be achievable to within 1 mm accuracy. When this is combined with a mean GTC frame relocatability of 1 mm on the patient, a 2-mm allowance between the prescribed isodose surface and the defined target volume is a realistic safety margin for this technique
Hossein Jafari
2016-04-01
A non-differentiable solution of linear and non-linear partial differential equations on Cantor sets is presented in this article. The reduced differential transform method is considered in the local fractional operator sense. Four illustrative examples are given to show the efficiency and accuracy of the presented technique for solving local fractional partial differential equations.
Fractional diffusion models of nonlocal transport
Castillo-Negrete, D. del
2006-01-01
A class of nonlocal models based on the use of fractional derivatives (FDs) is proposed to describe nondiffusive transport in magnetically confined plasmas. FDs are integro-differential operators that incorporate in a unified framework asymmetric non-Fickian transport, non-Markovian ('memory') effects, and nondiffusive scaling. To overcome the limitations of fractional models in unbounded domains, we use regularized FDs that allow the incorporation of finite-size domain effects, boundary conditions, and variable diffusivities. We present an α-weighted explicit/implicit numerical integration scheme based on the Grunwald-Letnikov representation of the regularized fractional diffusion operator in flux-conserving form. In sharp contrast with the standard diffusive model, the strong nonlocality of fractional diffusion leads to a linear-in-time response for a decaying pulse at short times. In addition, an anomalous fractional pinch is observed, accompanied by the development of an uphill transport region where the 'effective' diffusivity becomes negative. The fractional flux is in general asymmetric and, for steady states, it has a negative (toward the core) component that enhances confinement and a positive component that increases toward the edge and leads to poor confinement. The model exhibits the characteristic anomalous scaling of the confinement time, τ, with the system's size, L, τ ∼ L^α, of low-confinement mode plasma, where 1 < α < 2 is the order of the FD operator. Numerical solutions of the model with an off-axis source show that the fractional inward transport gives rise to profile peaking reminiscent of what is observed in tokamak discharges with auxiliary off-axis heating. Also, cold-pulse perturbations to steady states in the model exhibit fast, nondiffusive propagation phenomena that resemble perturbative experiments
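The Grunwald-Letnikov representation mentioned above discretizes a fractional derivative with binomial-coefficient weights computed by a simple recurrence. The following is a minimal illustrative sketch of that representation (the function names `gl_weights` and `gl_derivative` are invented for illustration; this is not the authors' α-weighted flux-conserving scheme):

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), via recurrence."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_derivative(f_vals, alpha, h):
    """Left GL fractional derivative of uniformly sampled f at the last grid point."""
    w = gl_weights(alpha, len(f_vals) - 1)
    # sum_k w_k * f(x - k*h), scaled by h^(-alpha)
    return h ** (-alpha) * np.dot(w, f_vals[::-1])

# Sanity check: for alpha = 1 the weights collapse to [1, -1, 0, ...],
# so the formula reduces to an ordinary backward difference.
x = np.linspace(0.0, 1.0, 101)
f = x ** 2
d1 = gl_derivative(f, 1.0, x[1] - x[0])
print(d1)  # backward difference of x^2 at x=1, i.e. close to 2
```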
N-Dimensional Fractional Lagrange's Inversion Theorem
F. A. Abd El-Salam
2013-01-01
Using the Riemann-Liouville fractional differential operator, a fractional extension of the Lagrange inversion theorem and related formulas are developed. The required basic definitions, lemmas, and theorems in the fractional calculus are presented. A fractional form of Lagrange's expansion for one implicitly defined independent variable is obtained. Then, a fractional version of Lagrange's expansion in more than one unknown function is generalized. To extend the treatment to higher dimensions, some relevant definitions and notations for vectors and tensors are presented. A fractional Taylor expansion of a function of N-dimensional polyadics is derived. A fractional N-dimensional Lagrange inversion theorem is proved.
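For comparison, the classical integer-order Lagrange inversion theorem that the paper generalizes can be stated as follows (standard textbook form, not quoted from the paper):

```latex
% If z = f(w), f(a) = b, and f'(a) \neq 0, the inverse w = g(z) admits the series
g(z) \;=\; a \;+\; \sum_{n=1}^{\infty} \frac{(z-b)^{n}}{n!}
  \lim_{w \to a} \frac{d^{\,n-1}}{dw^{\,n-1}}
  \left[ \left( \frac{w-a}{f(w)-b} \right)^{\! n} \right].
```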
Cooke, Georga; Tapley, Amanda; Holliday, Elizabeth; Morgan, Simon; Henderson, Kim; Ball, Jean; van Driel, Mieke; Spike, Neil; Kerr, Rohan; Magin, Parker
2017-12-01
Tolerance for ambiguity is essential for optimal learning and professional competence. General practice trainees must be, or must learn to be, adept at managing clinical uncertainty. However, few studies have examined associations of intolerance of uncertainty in this group. The aim of this study was to establish levels of tolerance of uncertainty in Australian general practice trainees and associations of uncertainty with demographic, educational and training practice factors. A cross-sectional analysis was performed on the Registrar Clinical Encounters in Training (ReCEnT) project, an ongoing multi-site cohort study. Scores on three of the four independent subscales of the Physicians' Reaction to Uncertainty (PRU) instrument were analysed as outcome variables in linear regression models with trainee and practice factors as independent variables. A total of 594 trainees contributed data on a total of 1209 occasions. Trainees in earlier training terms had higher scores for 'Anxiety due to uncertainty', 'Concern about bad outcomes' and 'Reluctance to disclose diagnosis/treatment uncertainty to patients'. Beyond this, findings suggest two distinct sets of associations regarding reaction to uncertainty. Firstly, affective aspects of uncertainty (the 'Anxiety' and 'Concern' subscales) were associated with female gender, less experience in hospital prior to commencing general practice training, and graduation overseas. Secondly, a maladaptive response to uncertainty (the 'Reluctance to disclose' subscale) was associated with urban practice, health qualifications prior to studying medicine, practice in an area of higher socio-economic status, and being Australian-trained. This study has established levels of three measures of trainees' responses to uncertainty and associations with these responses. The current findings suggest differing 'phenotypes' of trainees with high 'affective' responses to uncertainty and those reluctant to disclose uncertainty to patients. More
Gauge invariant fractional electromagnetic fields
Lazo, Matheus Jatkoske
2011-01-01
Fractional derivatives and integrals of non-integer order were introduced more than three centuries ago but only recently gained wider attention due to their application to nonlocal phenomena. In this context, several formulations of fractional electromagnetic fields have been proposed, but all these theories suffer from the absence of an effective fractional vector calculus and in general are non-causal or spatially asymmetric. In order to deal with these difficulties, we propose a spatially symmetric and causal gauge-invariant fractional electromagnetic field from a Lagrangian formulation. From our fractional Maxwell's fields arises a definition of the fractional gradient, divergence and curl operators. -- Highlights: → We propose a fractional Lagrangian formulation for fractional Maxwell's fields. → We obtain gauge invariant fractional electromagnetic fields. → Our generalized fractional Maxwell's field is spatially symmetric. → We discuss the non-causality of the theory.
The Uncertainties of Risk Management
Vinnari, Eija; Skærbæk, Peter
2014-01-01
Purpose – The purpose of this paper is to analyse the implementation of risk management as a tool for internal audit activities, focusing on unexpected effects or uncertainties generated during its application. Design/methodology/approach – Public and confidential documents as well as semi-structured interviews are analysed through the lens of actor-network theory to identify the effects of risk management devices in a Finnish municipality. Findings – The authors found that risk management, rather than reducing uncertainty, itself created unexpected uncertainties that would otherwise not have emerged ... for expanding risk management. More generally, such uncertainties relate to the professional identities and responsibilities of operational managers as defined by the framing devices. Originality/value – The paper offers three contributions to the extant literature: first, it shows how risk management itself ...
Climate Projections and Uncertainty Communication.
Joslyn, Susan L; LeClerc, Jared E
2016-01-01
Lingering skepticism about climate change might be due in part to the way climate projections are perceived by members of the public. Variability between scientists' estimates might give the impression that scientists disagree about the fact of climate change rather than about details concerning the extent or timing. Providing uncertainty estimates might clarify that the variability is due in part to quantifiable uncertainty inherent in the prediction process, thereby increasing people's trust in climate projections. This hypothesis was tested in two experiments. Results suggest that including uncertainty estimates along with climate projections leads to an increase in participants' trust in the information. Analyses explored the roles of time, place, demographic differences (e.g., age, gender, education level, political party affiliation), and initial belief in climate change. Implications are discussed in terms of the potential benefit of adding uncertainty estimates to public climate projections. Copyright © 2015 Cognitive Science Society, Inc.
Relational uncertainty in service dyads
Kreye, Melanie
2017-01-01
... in service dyads and how they resolve it through suitable organisational responses to increase the level of service quality. Design/methodology/approach: We apply the overall logic of Organisational Information-Processing Theory (OIPT) and present empirical insights from two industrial case studies collected ... Findings: Resolving the relational uncertainty increased the functional quality, while resolving the partner's organisational uncertainty increased the technical quality of the delivered service. Originality: We make two contributions. First, we introduce relational uncertainty to the OM literature as the inability to predict and explain the actions of a partnering organisation due to a lack of knowledge about their abilities and intentions. Second, we present suitable organisational responses to relational uncertainty and their effect on service quality.
Advanced LOCA code uncertainty assessment
Wickett, A.J.; Neill, A.P.
1990-11-01
This report describes a pilot study that identified, quantified and combined uncertainties for the LOBI BL-02 3% small break test. A 'dials' version of TRAC-PF1/MOD1, called TRAC-F, was used. (author)
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
Hamadameen, Abdulqader Othman [Optimization, Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia); Zainuddin, Zaitul Marlizawati [Department of Mathematical Sciences, Faculty of Science, UTM (Malaysia)
2014-06-19
This study deals with multiobjective fuzzy stochastic linear programming problems with uncertain probability distributions, which are defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are: the fuzzy transformation via a ranking function, and the stochastic transformation, in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
Ho, Kean F. (Academic Dept. of Radiation Oncology, Univ. of Manchester, Manchester (United Kingdom)); Fowler, Jack F. (Dept. of Human Oncology and Medical Physics, Univ. of Wisconsin, Wisconsin (United States)); Sykes, Andrew J.; Yap, Beng K.; Lee, Lip W.; Slevin, Nick J. (Dept. of Clinical Oncology, Christie Hospital NHS Foundation Trust, Manchester (United Kingdom))
2009-04-15
Introduction. Altered fractionation has demonstrated clinical benefits compared to the conventional 2 Gy/day standard of 70 Gy. When using synchronous chemotherapy, there is uncertainty about optimum fractionation. IMRT, with its potential for Simultaneous Integrated Boost (SIB), adds further to this uncertainty. This survey examines international practice of IMRT fractionation and suggests possible reasons for the diversity in approach. Material and methods. Fourteen international cancer centres were surveyed for the IMRT dose/fractionation practised in each centre. Results. Twelve different types of dose fractionation were reported. Conventional 70-72 Gy (2 Gy/fraction daily) was used in 3/14 centres with concurrent chemotherapy, while 11/14 centres used altered fractionation. Two centres used >1 schedule. Reported schedules and number of centres included the 6 fractions/week DAHANCA regime (3), modest hypofractionation (≤2.2 Gy/fraction) (3), dose-escalated hypofractionation (≥2.3 Gy/fraction) (4), hyperfractionation (1), continuous acceleration (1) and concomitant boost (1). Reasons for dose fractionation variability include (i) dose escalation; (ii) total irradiated volume; (iii) number of target volumes; (iv) synchronous systemic treatment; (v) shorter overall treatment time; (vi) resources availability; (vii) longer time on treatment couch; (viii) variable GTV margins; (ix) confidence in treatment setup; (x) late tissue toxicity and (xi) use of lower neck anterior fields. Conclusions. This variability in IMRT fractionation makes any meaningful comparison of treatment results difficult. Some standardization is needed, particularly for the design of multi-centre randomized clinical trials.
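Fractionation schedules like those surveyed above are commonly compared on a single scale using the linear-quadratic model's biologically effective dose, BED = n·d·(1 + d/(α/β)). The following is an illustrative sketch of that textbook formula; the schedules and the conventional α/β values (about 10 Gy for tumour, about 3 Gy for late-responding tissue) are common assumptions, not data from this survey:

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose under the linear-quadratic model."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

# Compare conventional 70 Gy in 35 x 2 Gy against a hypofractionated
# 55 Gy in 20 x 2.75 Gy, for tumour and late-responding tissue.
for n, d in [(35, 2.0), (20, 2.75)]:
    total = n * d
    print(total, bed(n, d, 10.0), bed(n, d, 3.0))
```

The same physical dose per schedule yields different BED values for tumour and late-responding tissue, which is one reason the surveyed centres diverge in their choice of schedule.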
How to live with uncertainties?
Michel, R.
2012-01-01
In a short introduction, the problem of uncertainty as a general consequence of incomplete information, as well as the approach to quantifying uncertainty in metrology, is addressed. A brief history of the more than 30 years of the working group AK SIGMA is followed by an appraisal of its achievements to date. Then the potential future of the AK SIGMA is discussed, based on its current tasks and on open scientific questions and future topics. (orig.)
Some remarks on modeling uncertainties
Ronen, Y.
1983-01-01
Several topics related to the question of modeling uncertainties are considered. The first topic is related to the use of the generalized bias operator method for modeling uncertainties. The method is expanded to a more general form of operators. The generalized bias operator is also used in the inverse problem and applied to determine the anisotropic scattering law. The last topic discussed is related to the question of the limit to accuracy and how to establish its value. (orig.)
Uncertainty analysis in safety assessment
Lemos, Francisco Luiz de; Sullivan, Terry
1997-01-01
Nuclear waste disposal is a very complex subject that requires the study of many different fields of science, such as hydrogeology, meteorology, geochemistry, etc. In addition, waste disposal facilities are designed to last for a very long period of time. Both of these conditions fill safety assessment projections with uncertainty. This paper addresses approaches for the treatment of uncertainties in safety assessment modeling due to the variability of data, and some current approaches used to deal with this problem. (author)
Propagation of dynamic measurement uncertainty
Hessling, J P
2011-01-01
The time-dependent measurement uncertainty has been evaluated in a number of recent publications, starting from a known uncertain dynamic model. This could be defined as the 'downward' propagation of uncertainty from the model to the targeted measurement. The propagation of uncertainty 'upward' from the calibration experiment to a dynamic model traditionally belongs to system identification. The use of different representations (time, frequency, etc) is ubiquitous in dynamic measurement analyses. An expression of uncertainty in dynamic measurements is formulated for the first time in this paper independent of representation, joining upward as well as downward propagation. For applications in metrology, the high quality of the characterization may be prohibitive for any reasonably large and robust model to pass the whiteness test. This test is therefore relaxed by not directly requiring small systematic model errors in comparison to the randomness of the characterization. Instead, the systematic error of the dynamic model is propagated to the uncertainty of the measurand, analogously but differently to how stochastic contributions are propagated. The pass criterion of the model is thereby transferred from the identification to acceptance of the total accumulated uncertainty of the measurand. This increases the relevance of the test of the model as it relates to its final use rather than the quality of the calibration. The propagation of uncertainty hence includes the propagation of systematic model errors. For illustration, the 'upward' propagation of uncertainty is applied to determine if an appliance box is damaged in an earthquake experiment. In this case, relaxation of the whiteness test was required to reach a conclusive result
Optimal Taxation under Income Uncertainty
Xianhua Dai
2011-01-01
Optimal taxation under income uncertainty has been extensively developed in expected utility theory, but it remains open for utility functions inseparable between income and effort. As an alternative for decision-making under uncertainty, prospect theory (Kahneman and Tversky (1979), Tversky and Kahneman (1992)) has obtained empirical support, for example in Kahneman and Tversky (1979) and Camerer and Lowenstein (2003). This paper begins to explore optimal taxation in the context of prospect...
New Perspectives on Policy Uncertainty
Hlatshwayo, Sandile
2017-01-01
In recent years, the ubiquitous and intensifying nature of economic policy uncertainty has made it a popular explanation for weak economic performance in developed and developing markets alike. The primary channel for this effect is decreased and delayed investment, as firms adopt a "wait and see" approach to irreversible investments (Bernanke, 1983; Dixit and Pindyck, 1994). Deep empirical examination of policy uncertainty's impact is rare because of the difficulty associated with measuring i...
A non-differentiable solution for the local fractional telegraph equation
Li Jie
2017-01-01
In this paper, we consider linear telegraph equations with local fractional derivatives. The local fractional Laplace series expansion method is used to handle the local fractional telegraph equation. The analytical solution with non-differentiable graphs is discussed in detail. The proposed method is efficient and accurate.
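For context, the local fractional telegraph equation in this literature is commonly written in terms of a local fractional derivative of order α (the exact coefficients and sign conventions vary by paper; this is an illustrative form, not necessarily the precise one solved above):

```latex
\frac{\partial^{2\alpha} u(x,t)}{\partial t^{2\alpha}}
  + 2a\,\frac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}}
  + b^{2}\,u(x,t)
  = \frac{\partial^{2\alpha} u(x,t)}{\partial x^{2\alpha}},
  \qquad 0 < \alpha \le 1 .
```

The series expansion method then seeks a non-differentiable solution as a sum of terms proportional to \(t^{k\alpha}/\Gamma(1+k\alpha)\), recovering the classical telegraph equation when \(\alpha = 1\).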
Linear Alkylbenzenesulfonates in indoor Floor Dust
Madsen, Jørgen Øgaard; Wolkoff, Peder
1999-01-01
The amount of linear alkylbenzenesulfonates (LAS) in the particle fraction of floor dust sampled from 7 selected public buildings varied between 34 and 1500 microgram per gram of dust, while the contents of the fibre fractions were generally higher, with up to 3500 microgram LAS/g dust. The use of a cleaning agent with LAS resulted in an increase in the amount of LAS in the floor dust after floor wash relative to just before floor wash. However, the most important source of LAS in indoor floor dust appears to be residues of detergent in clothing. Thus, a newly washed shirt contained 2960 microgram...
Pharmacological Fingerprints of Contextual Uncertainty.
Louise Marshall
2016-11-01
Successful interaction with the environment requires flexible updating of our beliefs about the world. By estimating the likelihood of future events, it is possible to prepare appropriate actions in advance and execute fast, accurate motor responses. According to theoretical proposals, agents track the variability arising from changing environments by computing various forms of uncertainty. Several neuromodulators have been linked to uncertainty signalling, but comprehensive empirical characterisation of their relative contributions to perceptual belief updating, and to the selection of motor responses, is lacking. Here we assess the roles of noradrenaline, acetylcholine, and dopamine within a single, unified computational framework of uncertainty. Using pharmacological interventions in a sample of 128 healthy human volunteers and a hierarchical Bayesian learning model, we characterise the influences of noradrenergic, cholinergic, and dopaminergic receptor antagonism on individual computations of uncertainty during a probabilistic serial reaction time task. We propose that noradrenaline influences learning of uncertain events arising from unexpected changes in the environment. In contrast, acetylcholine balances attribution of uncertainty to chance fluctuations within an environmental context, defined by a stable set of probabilistic associations, or to gross environmental violations following a contextual switch. Dopamine supports the use of uncertainty representations to engender fast, adaptive responses.
The Local Fractional Bootstrap
Bennedsen, Mikkel; Hounyo, Ulrich; Lunde, Asger
We introduce a bootstrap procedure for high-frequency statistics of Brownian semistationary processes. More specifically, we focus on a hypothesis test on the roughness of sample paths of Brownian semistationary processes, which uses an estimator based on a ratio of realized power variations. Our new resampling method, the local fractional bootstrap, relies on simulating an auxiliary fractional Brownian motion that mimics the fine properties of high-frequency differences of the Brownian semistationary process under the null hypothesis. We prove the first-order validity of the bootstrap method, and in simulations we observe that the bootstrap-based hypothesis test provides considerable finite-sample improvements over an existing test based on a central limit theorem. This is important when studying the roughness properties of time series data; we illustrate this by applying the bootstrap method...
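The ratio-of-realized-power-variations idea behind such a roughness test can be sketched in a few lines. The toy below is an illustrative change-of-frequency estimator applied to a simulated standard Brownian path (roughness index H = 1/2); it is not the authors' exact statistic, which uses second-order differences and a studentised bootstrap:

```python
import numpy as np

def realized_pv(path, lag, p=2):
    """Realized power variation of order p, sampled every `lag` steps."""
    inc = path[lag::lag] - path[:-lag:lag]
    return np.sum(np.abs(inc) ** p)

rng = np.random.default_rng(42)
n = 2 ** 16
dt = 1.0 / n
# Standard Brownian motion has roughness (Hurst) index H = 1/2.
bm = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n))

# For a fractional-Brownian-type path, RV(2*dt) / RV(dt) -> 2^(2H - 1),
# so H can be read off from the ratio of power variations at two frequencies.
ratio = realized_pv(bm, lag=2) / realized_pv(bm, lag=1)
H_hat = 0.5 * (np.log2(ratio) + 1.0)
print(round(H_hat, 2))  # close to 0.5 for Brownian motion
```

The bootstrap version of the test resamples an auxiliary fractional Brownian motion with the estimated roughness and compares the observed ratio against the resampled distribution rather than a limiting normal.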
Fractionalization and Entrepreneurial Activities
Awaworyi Churchill, Sefa
2015-01-01
The vast majority of the literature on ethnicity and entrepreneurship focuses on the construct of ethnic entrepreneurship. However, very little is known about how ethnic heterogeneity affects entrepreneurship. This study attempts to fill this gap by examining the effect of ethnic heterogeneity on entrepreneurial activities in a cross-section of 90 countries. Using indices of ethnic and linguistic fractionalization, we show that ethnic heterogeneity negatively influences entrepreneurship...